http://arxiv.org/abs/2307.05455v2
20230711174304
Can the gravitational wave background feel wiggles in spacetime?
[ "Gen Ye", "Alessandra Silvestri" ]
astro-ph.CO
[ "astro-ph.CO", "gr-qc", "hep-th" ]
^1 Leiden University, Instituut-Lorentz for Theoretical Physics, 2333CA, Leiden, Netherlands Recently the international pulsar timing array collaboration has announced the first strong evidence for an isotropic gravitational wave background (GWB). We propose that rapid small oscillations (wiggles) in the Hubble parameter would trigger a resonance with the propagating gravitational waves, leaving a novel signature in the GWB spectrum in the form of sharp resonance peaks. The proposed signal can appear in all frequency ranges and is common to continuous-spectrum GWBs of arbitrary origin. Due to its resonant nature, the signal strength differs by a perturbation order depending on whether the GWB is primordial or not, which makes it a smoking gun for the primordial origin of the observed GWB. We show that a large part of the parameter space of such a signal can be constrained by near-future PTA observations, while fitting the signal template to the current NANOGrav 15yr data already hints at an interesting feature near 15 nHz. § INTRODUCTION A common-spectrum red noise (noise whose power spectrum decreases with increasing frequency) shared between pulsars was reported in 2020 <cit.>. Recently, multiple pulsar timing array (PTA) collaborations, NANOGrav <cit.>, EPTA <cit.>, PPTA <cit.> and CPTA <cit.>, have announced detection of a Hellings-Downs-like spatial correlation <cit.> in this common-spectrum signal at 3-4σ, which indicates that such a signal is highly likely to be the first detection of an isotropic gravitational wave background (GWB). Aside from astronomical sources like supermassive black holes (see <cit.> for some up-to-date constraints, and also <cit.>), cosmological processes, such as inflation <cit.>, scalar-induced GWs <cit.> and collision of bubbles of first order phase transitions <cit.>, as well as the leftover topological defects <cit.> and magnetic fields <cit.> (see <cit.> for a review and <cit.> for some updated constraints), are also possible sources of the GWB reported by PTA collaborations. The cosmological GWB is of particular interest because the Universe becomes transparent to GWs below the Planck scale, hence the GWB would enable us to directly probe the high energy physics hidden behind the last-scattering surface which would otherwise be difficult to access due to the thermal equilibrium between species. There are also interesting secondary effects that the GWB accumulates as the GWs propagate through large scale structure, which could carry valuable information about dark energy <cit.> and local fields <cit.>. For a GWB with a continuous spectrum, which is likely the case for the signal observed by PTAs <cit.>, background cosmic evolution typically only modifies the broad shape of the GWB spectrum, e.g. the spectral index, through Hubble friction. In this paper, we propose a novel possibility where the background evolution imprints a localized peak in the GWB spectrum that is within reach of near-future PTA experiments. For sub-Hubble GWs, the effect of the background is generally suppressed by ℋ/k, with ℋ being the conformal Hubble parameter and k the conformal wave number of the GW.
However, if the background spacetime has rapid small oscillations (wiggles), ℋ = ⟨ℋ⟩ + ℋ_osc ≡ ℋ̅(1+δ_osc), δ_osc<1, on top of the slowly varying part ⟨ℋ⟩≡ℋ̅, with ⟨·⟩ standing for averaging over the rapid oscillation periods, parametric resonance between the GW and δ_osc will effectively cancel out the sub-Hubble suppression ℋ/k and leave a resolvable resonance peak in the GWB spectrum at k_c = k_osc/2, k_osc being the wave number of δ_osc. Dark energy/modified gravity that is non-negligible at early times, e.g. <cit.>, can potentially source δ_osc during the radiation-dominated era, and would be very difficult to probe/constrain without the proposed GW signal due to the thermal equilibrium between standard model species. We found that current NANOGrav 15yr data already hints at a resonance-like signal near 15 nHz. Further data and analysis are needed to assess whether it is a physical signal or a statistical fluctuation. Nonetheless, this confirms the viability of such a signal in current and near-future PTA measurements. § GW RESONANCE We start from the propagation equation for a GW mode γ(τ,k) on the homogeneous and isotropic cosmological background γ̈+2Γγ̇+k^2γ=0, where upper dots indicate differentiation with respect to the conformal time τ. Here we have promoted the Hubble friction to a general friction term Γ which can additionally include possible contributions from modified gravity. We assume that the speed of tensors is luminal. In parallel to Eq.(<ref>), we can split Γ into a part Γ̅ slowly varying on cosmological time scales plus fast oscillating wiggles δ_osc with k_osc≫ℋ̅: Γ = Γ̅(1 + δ_osc), δ_osc<1. As we discuss in detail in Appendix-<ref>, such wiggles can be sourced in early-dark-energy-like scenarios characterized by a scalar field with a standard kinetic term and a ϕ^4 potential, possibly non-minimally coupled to gravity. For oscillations of small amplitude, i.e. δ_osc<1, Eq.(<ref>) can be solved perturbatively: γ̈^(n)+2Γ̅γ̇^(n)+k^2γ^(n)=-2δ_oscΓ̅γ̇^(n-1), n=0,1,…, with γ^(-1)=0. We will focus on epochs with a constant equation of state w and assume Γ̅≃ℋ̅=[2/(3w+1)]τ^-1; in this case, the two homogeneous solutions of Eq.(<ref>) are: γ^(0)_1≃J_ν(kτ)/(kτ)^ν,  γ^(0)_2≃Y_ν(kτ)/(kτ)^ν, ν=3(1-w)/[2(1+3w)]. The leading order correction term γ^(1) can be found via the Green's function method γ^(1)(τ) =∫_τ_i^τ(-2δ_oscΓ̅γ̇^(0)(τ̃))[γ^(0)_1(τ̃)γ^(0)_2(τ)-γ^(0)_2(τ̃)γ^(0)_1(τ)]/[γ^(0)_1(τ̃)γ̇^(0)_2(τ̃)-γ̇^(0)_1(τ̃)γ^(0)_2(τ̃)]dτ̃ ≃∫_τ_i^τ2(τ̃^q/τ^q)Γ̅δ_osc[γ̇^(0)(τ̃)/k]sin[k(τ̃-τ)]dτ̃, q=2/(1+3w), where in the second line we substituted in the homogeneous solution of γ^(0) and took the sub-horizon limit by keeping only terms with the highest power in kτ. The leading order correction, γ^(1), is generally highly suppressed by both the sub-Hubble parameter ℋ̅/k and rapid oscillations. However, it can get amplified if δ_osc oscillates in resonance with γ^(0), in which case its amplitude at resonance will be determined by |δ_osc|, regardless of ℋ̅/k. To see this, let us introduce the ansatz δ_osc=ψ_0sin(k_oscτ+α) for the wiggles, with ψ_0 characterizing the amplitude of δ_osc. The leading order GW with conformal wave number k can be expressed as γ^(0)≃γ_0sin(kτ+β)/(kτ)^q. α and β are arbitrary phases.
Now the second line of Eq.(<ref>) becomes γ^(1)(k) ≃4/1+3wγ_0ψ_0/(kτ)^q∫_τ_i^τ1/τ̃sin(k_oscτ̃+α)cos(kτ̃+β)sin(k(τ̃-τ))dτ̃ ≃ψ_0 f(Δ k)γ_0cos(kτ+α-β)/(kτ)^q where we have used the rapid oscillation approximation ∫_τ_i^τg(τ̃)sin(kτ̃)dτ̃∼∫_τ_i^τg(τ̃)cos(kτ̃)dτ̃∼𝒪(1/k(τ-τ_i))→0 when k(τ-τ_i)≫ 1 and ġ/g≪ k, which is true for sub-Hubble GWs propagating over cosmological distances. Resonance happens at k_c= k_osc/2 and f(Δ k) characterizes the shape of the resonant peak with Δ k≡ k-k_c. For Δ k/k≪1, one has the analytic approximation f(Δ k)∼1/1+3w[Ci(2|Δ k|τ) - Ci(2|Δ k|τ_i)] where Ci(x)≡-∫^∞_xdtcos(t)/t is the cosine integral function. In particular f(0) = log(τ/τ_i)/1+3w=log(a/a_i)/2=N/2, corresponds to one half of the number of e-folds, N, the Universe underwent from the beginning of the resonance at τ_i. Specializing to the PTA frequency nHz, if resonance starts at horizon reentry (k∼ 10^-9 Hz∼ 10^5 Mpc^-1) and continues all the way to radiation-matter equality (k_eq∼ 10^-2 Mpc^-1), we get N≃16 and f(0)≃8. Usually when comparing with observations, the more relevant quantity is the GW energy density spectrum <cit.> ρ_GW(k) ≡dρ_GW/dln k = M_p^2/4k^3/2π^2(|dγ/dt|^2+k^2/a^2|γ|^2) ≃[1+2f(Δ k)ψ_0sin(2β-α)+f^2(Δ k)ψ_0^2]M_p^2/8π^2k^3/a^2τ^2. where we have assumed γ≃γ^(0) + γ^(1) and used Eq.(<ref>). The term linear in ψ_0 arises from the fact that γ^(1) is sourced by γ^(0). There are two physical limits of interest concerning the phase factor sin(2β-α): * Completely random phase If GWB is of sub-Hubble origin, such as supermassive black hole binaries or bubble/topological defects collision after inflation, the phases β of GWs generally satisfy a uniform random distribution. In this case the linear correction term vanishes when summing over all β phases, i.e. 1/2π∫_0^2πρ_GWdβ, and the leading order correction is 𝒪(ψ_0^2). * Completely aligned phase If GWB undergoes horizon reentry mechanism, such as primordial tensor perturbations generated during inflation, the GWs will have exactly the same phase β∼π/2 due to the adiabatic initial condition. α depends on the actual physics that sources the wiggles δ_osc. But if such physics also has its initial condition set outside of horizon and becomes dynamical when it goes sub-Hubble, a general expectation would be that |α|∼π/2 and the factor |sin(2β-α)|∼1, as is the case for the explicit example in Appendix-<ref>. Therefore we argue here the phase factor in this case does not vanish and that the leading order correction is 𝒪(ψ_0). Assuming completely aligned phase, Fig.<ref> plots the numeric results of the energy spectrum ρ_GW, defined by the first line of Eq.(<ref>). For the case of k_cτ_i=1 we plot also the corresponding analytic approximation from the second line of Eq.(<ref>), using Eq.(<ref>). One can notice that the analytic approximation is able to capture the overall shape of the peak, with the peak height slightly smaller than the numerical result. This is to be expected because γ^(n), n≥2 are also important at resonance. It is clear from Fig.<ref> that the peak width is affected by k_cτ_i. The more sub-Hubble the GW is when resonance starts, the narrower the peak width, though the peak height only depends on how long the resonance lasts logarithmically, through the e-folding number N. This implies higher frequency resolution would be required if the resonance starts when the GW is deep sub-Hubble. We will come back to this issue in the next section. 
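The resonance discussed above can also be checked by directly integrating the mode equation. The following is a minimal numerical sketch (not the code used for the figures in this paper), assuming radiation domination (w=1/3, so Γ̅=ℋ̅=1/τ) and illustrative values of ψ_0, k_osc, the phase α and the integration range; it compares the late-time amplitude of a sub-Hubble mode with and without the oscillating friction term, and the deviation should appear only near k=k_osc/2, with a size and sign controlled by the phases, as discussed in the text.

```python
# Minimal numerical sketch (not the authors' code): integrate the GW mode equation
# gamma'' + 2*Gamma*gamma' + k^2*gamma = 0 in conformal time during radiation
# domination (smooth friction 1/tau) with and without a small oscillating component,
# and compare late-time amplitudes. psi0, k_osc, alpha and tau_f are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def late_amplitude(k, psi0, k_osc, alpha=0.5 * np.pi, tau_i=1.0, tau_f=100.0):
    """Late-time envelope of gamma*(k*tau), which is constant for the unperturbed mode."""
    def rhs(tau, y):
        gamma, dgamma = y
        friction = (1.0 / tau) * (1.0 + psi0 * np.sin(k_osc * tau + alpha))
        return [dgamma, -2.0 * friction * dgamma - k * k * gamma]

    sol = solve_ivp(rhs, (tau_i, tau_f), [1.0, 0.0], rtol=1e-9, atol=1e-12,
                    dense_output=True)
    taus = np.linspace(0.9 * tau_f, tau_f, 4000)   # sample the last few oscillations
    return np.max(np.abs(sol.sol(taus)[0] * k * taus))

k_osc, psi0 = 40.0, 0.1
for k in (15.0, 19.0, 20.0, 21.0, 25.0):           # resonance expected at k = k_osc/2 = 20
    ratio = late_amplitude(k, psi0, k_osc) / late_amplitude(k, 0.0, k_osc)
    print(f"k = {k:5.1f}   amplitude ratio (wiggles / no wiggles) = {ratio:.3f}")
```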
It is worth noting here that in the aligned phase case the phase factor can be negative, which gives a dip rather than a peak in the spectrum, as in Appendix-<ref>. All arguments remain the same in both cases if we take ψ_0→-ψ_0. Therefore, without loss of generality, we will always assume a peak hereafter for better presentation. § RESONANT GW AND PTA Experiments have finite frequency resolution and measure integrated power in frequency bins. An important property of the integrated signal is that it is insensitive to how long the resonance lasts, i.e. the e-folding number N, as long as it is long enough (i.e. k_cτ≫1 and τ/τ_i≫1). This property can be seen by analytically integrating Eq.(<ref>) over a frequency bin of width λ≡Δ k_max/k_c: ∫_|Δ k|<λ k_ck^3f(Δ k)dln k /∫_|Δ k|<λ k_ck^3dln k ≃[1/(1+3w)][-Ci(2λ k_cτ_i)+sin(2λ k_cτ_i)/(2λ k_cτ_i)] + 𝒪(1/(k_cτ)), which is independent of N. The bin width λ should satisfy (k_cτ)^-1<λ≪1 for the approximation to hold. The explicit dependence on k_cτ_i also explains what we see in Fig.<ref> regarding the peak width. The resonance peak is very narrow; to assess the viability of Eq.(<ref>) in observations, we plot the relation between the log frequency resolution Δln k≡ln(1+λ) and the signal, expressed as the integrated power excess Δρ_GW/ρ_GW,fid≡(ρ_GW-ρ_GW,fid)/ρ_GW,fid in the frequency bin [e^-Δln kk_c, e^Δln kk_c], in Fig.<ref>. As already mentioned in the previous section, higher frequency resolution is needed to resolve the signal if resonance starts deep sub-Hubble. To see this, we plot in Fig.<ref> the required frequency resolution Δln k to reach a 10% power excess signal for different resonance starting times k_cτ_i. According to Fig.<ref>, for a detector with 10% energy precision, the relative frequency resolution needs to be finer than 40% (note the actual bin size is 2Δln k) in order to resolve the resonant signal sourced by wiggles δ_osc of order 10%. For PTA experiments, if the spectrum is binned with bin width 1/T_obs, T_obs being the observation time span, 40% relative resolution can be reached for bin number i≥3. Assuming the most optimistic situation, where the resonance starts immediately after horizon reentry, i.e. k_cτ_i=1, and |sin(2β-α)|=1, we propose the following resonant GW signal template for PTA: Ω_GW(k)=(2π^2/3)(year^-1/H_0)^2[1+ψ_0F_Δ k(k, k_c)]A_GWB^2[(k/2π)/year^-1]^(5-γ_GWB), where Δ k is the width of the frequency bin used and k_c (f_c) is the resonance wave number (frequency). Typically Δ k≃ 2π/T_obs. The shape function F is defined as F_Δ k(k, k_c)≡[∫_k-Δ k/2^k+Δ k/2k̃^1.8f(k̃-k)dlnk̃]/[∫_k-Δ k/2^k+Δ k/2k̃^1.8dlnk̃], where the factor k^1.8 comes from setting γ_GWB=3.2 according to the NANOGrav GWB bestfit <cit.>, so that Ω_GW(k)∝ k^1.8, and f(Δ k) is approximated by Eq.(<ref>). We fit the template Eq.(<ref>) to the public NANOGrav 15yr data <cit.> using the Markov chain Monte Carlo (MCMC) method, with Δ k = 2π/16.03yr as described in the paper. The priors are summarized in Tab.<ref>, in which the resonance frequency f_c is confined to the region where the GWB has been observed <cit.>. Fig.<ref> shows the MCMC posterior distributions of the spectrum parameters {log_10A_GWB, γ_GWB, ψ_0, f_c}. Current data are not enough to constrain local features in the spectrum well, hence the wide contours in the plot. However, quite interestingly, both the results assuming HD spatial correlation (HD) and common-spectrum uncorrelated red noise (CURN) display a peak in the f_c posterior around 15 nHz, hinting at a possible feature there.
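As a quick numerical cross-check of the bin-integrated formula above, the short sketch below averages the analytic peak profile f(Δk)≃[Ci(2|Δk|τ)-Ci(2|Δk|τ_i)]/(1+3w) over a bin of relative half-width λ and compares it with the closed form quoted in the text; w, λ, k_c, τ_i and τ are illustrative values, and a flat weight over the narrow bin is used in place of k^3 dln k.

```python
# Sketch (illustrative parameters): the bin-averaged resonance profile is set by
# k_c*tau_i and is insensitive to how long the resonance lasts, unlike the peak
# value f(0) = N/2.
import numpy as np
from scipy.special import sici

Ci = lambda x: sici(x)[1]                       # cosine integral

w = 1.0 / 3.0                                   # radiation domination
lam, k_c, tau_i = 0.2, 1.0, 1.0                 # relative bin half-width, resonance scale, start
tau = 1.0e3 * tau_i                             # long resonance, k_c*tau >> 1

def f(dk):                                      # analytic approximation of the peak shape
    return (Ci(2.0 * np.abs(dk) * tau) - Ci(2.0 * np.abs(dk) * tau_i)) / (1.0 + 3.0 * w)

# Bin average over |dk| < lam*k_c (f is even in dk); log-spaced points resolve the
# integrable log singularity at dk = 0, linear points resolve the fast Ci oscillation.
dk = np.unique(np.concatenate([np.geomspace(1e-9, 1e-2, 400),
                               np.linspace(1e-2, lam * k_c, 40001)]))
binned = np.trapz(f(dk), dk) / (lam * k_c)

x = 2.0 * lam * k_c * tau_i
closed_form = (-Ci(x) + np.sin(x) / x) / (1.0 + 3.0 * w)

print(f"bin-averaged f      : {binned:.3f}")
print(f"closed-form estimate: {closed_form:.3f}")
print(f"peak value f(0)=N/2 : {np.log(tau / tau_i) / (1.0 + 3.0 * w):.2f}")
```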
This peak is significantly higher when CURN is assumed, which leads us to attribute this curiosity to the difference in power excess signal between CURN and HD at frequency bins 6 and 7 (roughly corresponding to f=10-16 nHz) in the NANOGrav results <cit.>. § CONCLUSIONS Yes, it is possible to detect the resonant signal sourced by spacetime wiggles through PTA. Such signal features a sharp resonant peak, whose strength increases in a particular way when frequency resolution increases, as depicted in Fig.<ref>. It is worth further studying if such behavior will help distinguish it from other possible bump-like features in the GWB spectrum. Furthermore, the leading order signal changes by an order of magnitude in ψ_0 as well as has different peak shape, see Eq.(<ref>), depending on the origin of GWB, making it a potential smoking gun to identify whether the observed GWB is primordial or not. A signal template Eq.(<ref>) is also proposed for use with PTA data. Intriguingly, fitting the template to the recent NANOGrav 15yr data hints a resonance-like peak near f_c∼15 nHz, see Fig.<ref>, implying that the proposed signal is accessible to current and near future PTA data. It is unclear whether the hinted signal itself is due to physics or statistical fluctuations. Further study is needed to evaluate its credibility. Finally, the proposed resonance signal is very general, as it applies to GWB with continuous spectrum regardless of its origin and is not limited to the nHz frequency band. It would be interesting to study its phenomenology in other frequency ranges, such as ground based LIGO/Virgo/KAGRA <cit.>, space based LISA <cit.>, Taiji <cit.> and CMB B-mode <cit.>. It opens up the possibility of studying oscillatory early dark energy/modified gravity theories during radiation dominating era which would otherwise be hard to constrain without the GWB resonance signal. *Acknowledgment GY particularly thanks Alice Garoffolo for insightful discussion during the initial phase of this work and for careful proof-read of the draft. PTA data analysis is performed using the software <cit.>. Our work is supported by NWO and the Dutch Ministry of Education, Culture and Science (OCW) (grant VI.Vidi.192.069) § AN EXAMPLE THEORY TO SOURCE WIGGLES IN H In this appendix we provide a simple scalar field model from within the Horndeski <cit.> family which is able to produce wiggles in H. The corresponding action is 𝒮=∫ dx^4√(-g)[M_p^2/2G_4(ϕ)R - 1/2(∂ϕ)^2 - λϕ^4]+𝒮_m , i.e. a non-minimally coupled scalar field, with standard kinetic term and a quartic potential. 𝒮_m is the action for all matter components. The field is initially frozen at its initial value ϕ_i by Hubble friction when m_eff^2∼ V_ϕϕ≪ H^2, then thaws at V_ϕϕ∼ H^2 and undergoes oscillations driven by the potential V=λϕ^4 around its minimum. If the thawing time is near matter-radiation equality, both the ϕ^4 potential <cit.> and non-minimal coupling G_4 <cit.> could alleviate the Hubble tension <cit.>, but not resolve it <cit.>. The quartic potential V(ϕ)=λϕ^4 is important for our example because it drives the radiation-like oscillations in the scalar field. Specifically, at max field displacement ϕ_0 in one oscillation cycle, one has V(ϕ_0)=λϕ_0^4∼ρ_ϕ∝ a^-4 which implies ϕ_0∼ a^-1. At ϕ=0 we have 1/2(dϕ/dt)^2∼ϕ_0^2k_phys^2/2∼ρ_ϕ∝ a^-4 implying the physical wave vector k_phys∝ a^-1. Thus ϕ oscillates with a fixed conformal wave number k_ϕ, which is essential for resonance with GW. 
Assuming G_4=1 and neglecting the Hubble friction, we can estimate the oscillation period by T=4∫_0^ϕ_0(dt/dϕ)dϕ=4∫_0^ϕ_0dϕ[2(V(ϕ_0)-V(ϕ))]^-1/2≃3.7λ^-1/2ϕ_0^-1, which gives the effective conformal wave number k_ϕ/ℋ_i≃ 2.2 (λ f_ϕ)^1/4, where we have denoted the conformal Hubble scale at some initial time as ℋ_i and defined f_ϕ≡ρ_ϕ,i/(3M_p^2H_i^2)≃λ(ϕ_i/M_p)^4/3 as the energy fraction of the ϕ field at that time. During radiation dominance we have f_ϕ≃ const. We also de-dimensionalize λ→λ H_i^2M_p^2 here. In principle, wiggles in H are automatically sourced by ρ_ϕ even if G_4≡1, but their size is highly suppressed by (ℋ/k_ϕ)^2 for sub-Hubble oscillations. To have a sizable effect, we let G_4 source the wiggles in H instead, through the Friedmann equation 3G_4(ϕ)(ℋ^2+ℋĠ_4(ϕ)/G_4(ϕ))=a^2[1/2ϕ̇^2+V(ϕ)+ρ_m], where ρ_m is the total energy density of all ordinary species, including radiation. To obtain numerical results we choose the explicit form G_4(ϕ) = 1+α(ϕ/M_p), which gives Ġ_4/G_4≃αϕ̇/M_p∼α√(6f_ϕ). Therefore the terms inside the parenthesis on the LHS of Eq.(<ref>) give comparable contributions to δ_osc. Aside from ℋ, the non-minimal coupling also contributes an oscillating friction term directly to the propagation equation γ̈+2(ℋ + Ġ_4/G_4)γ̇+k^2γ=0, so that Γ=ℋ+Ġ_4/G_4. In conclusion, the scalar field model (<ref>) introduces oscillations in the background of strength |δ_osc|∼𝒪(α). Fig.<ref> plots the actual signal calculated by numerically solving for the cosmological background consisting of (<ref>) and radiation and integrating the GW propagation (<ref>), with parameters α=0.1, λ=10^-4 and ϕ_i/M_p=1. We assume the GW follows a scale-invariant primordial spectrum and that the resonance starts immediately after horizon reentry (k_cτ_i=1) and lasts for N=9. There are in fact two kinds of signal in the GW spectrum associated with theory (<ref>). One is the resonance signal studied here; the other is a step-like feature in the spectrum induced by the field thawing from G_4=1+α(ϕ_i/M_p) to the oscillating phase with ⟨ G_4⟩=1. To keep only the resonance signal, which is of interest to us, we only turn on the non-minimal coupling α after the corresponding GW enters the horizon in the numerical integration. The dip in Fig.<ref> will turn into a peak if one performs α→-α. For scalar fields, oscillation in ϕ can also trigger parametric resonance in the scalar field perturbations, whose phenomenology in the CMB has been studied in Ref. <cit.>.
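The coefficient 3.7 in the period estimate above can be reproduced numerically; below is a small check (illustrative, not the paper's code), where λ and ϕ_0 are arbitrary positive test values since the combination T√λ ϕ_0 is independent of them.

```python
# Numerical check of T = 4 * Integral_0^phi0 dphi [2(V(phi0) - V(phi))]^(-1/2)
# for V = lam * phi^4, neglecting Hubble friction; expect T * sqrt(lam) * phi0 ≈ 3.7.
import numpy as np
from scipy.integrate import quad

lam, phi0 = 1.0e-4, 0.7                                   # arbitrary test values

integrand = lambda phi: 1.0 / np.sqrt(2.0 * lam * (phi0**4 - phi**4))
quarter_period, _ = quad(integrand, 0.0, phi0)            # integrable 1/sqrt endpoint singularity
T = 4.0 * quarter_period

print(f"T * sqrt(lambda) * phi0 = {T * np.sqrt(lam) * phi0:.3f}   (quoted: ~3.7)")
```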
http://arxiv.org/abs/2307.04791v1
20230710180004
A self-averaging spectral form factor implies unitarity breaking
[ "Apollonas S. Matsoukas-Roubeas", "Mathieu Beau", "Lea F. Santos", "Adolfo del Campo" ]
quant-ph
[ "quant-ph", "cond-mat.stat-mech", "hep-th", "math-ph", "math.MP" ]
http://arxiv.org/abs/2307.05814v1
20230711214740
Time Moves Faster When There is Nothing You Anticipate: The Role of Time in MEV Rewards
[ "Burak Öz", "Benjamin Kraner", "Nicolò Vallarano", "Bingle Stegmann Kruger", "Florian Matthes", "Claudio Juan Tessone" ]
cs.CR
[ "cs.CR", "cs.GT" ]
This study explores the intricacies of waiting games, a novel dynamic that emerged with Ethereum's transition to a Proof-of-Stake (PoS)-based block proposer selection protocol. Within this PoS framework, validators acquire a distinct monopoly position during their assigned slots, given that block proposal rights are set deterministically, contrasting with Proof-of-Work (PoW) protocols. Consequently, validators have the power to delay block proposals, stepping outside the honest validator specs, optimizing potential returns through MEV payments. Nonetheless, this strategic behaviour introduces the risk of orphaning if attestors fail to observe and vote on the block in time. Our quantitative analysis of this waiting phenomenon and its associated risks reveals an opportunity for enhanced MEV extraction, exceeding standard protocol rewards, and providing sufficient incentives for validators to play the game. Notably, our findings indicate that delayed proposals do not always result in orphaning and orphaned blocks are not consistently proposed later than non-orphaned ones. To further examine consensus stability under varying network conditions, we adopt an agent-based simulation model tailored for PoS-Ethereum, illustrating that consensus disruption will not be observed unless significant delay strategies are adopted. Ultimately, this research offers valuable insights into the advent of waiting games on Ethereum, providing a comprehensive understanding of trade-offs and potential profits for validators within the blockchain ecosystem. § INTRODUCTION The blockchain landscape has seen tremendous growth and evolution over the years, with innovative concepts continually emerging to enhance its functionality and efficiency. Ethereum, a key player in this arena, has recently undergone major upgrades, with the merge[<https://ethereum.org/en/roadmap/merge/>] being the most prominent, replacing the Proof-of-Work (PoW)-based block proposer selection mechanism with a Proof-of-Stake (PoS)-based one <cit.>. With this upgrade, Ethereum also introduced a new consensus protocol, Gasper, comprising Latest Message Driven Greediest Heaviest Observed SubTree (LMD GHOST) for fork choice and the Casper Friendly Finality Gadget (FFG) for finality, replacing the longest-chain consensus. Switching to PoS remarkably reduced the electricity consumption of the Ethereum blockchain (down by 99.98% according to the 2022 report of the Crypto Carbon Ratings Institute[<https://indices.carbon-ratings.com/ethereum-merge>]), which led to a significant cut in block proposal rewards, as consensus participants, known as validators, do not require as high incentives as PoW-Ethereum miners did. Hence, validators depend even more on transaction fees and exogenous rewards like Maximal Extractable Value (MEV) than miners did. Incentives play an essential role in the evolution of public, permissionless blockchains such as Ethereum. Misaligned incentives, however, can lead to a lack of interest in contributing to a network or, potentially, to profit-seeking attacks, endangering consensus stability <cit.>. MEV has emerged as a powerful incentive within Ethereum <cit.>, enabling network participants to gain profits beyond protocol rewards through strategic transaction issuance and ordering <cit.>.
In particular, participants who partake in MEV extraction employ transaction ordering techniques inspired by traditional finance to execute profitable strategies based on the network state concerning pending transactions in the mempool and the most recently finalized blockchain state <cit.>. The prominence of MEV has risen with the advent of Decentralised Finance (DeFi) ecosystem, as shown by the emergence of sophisticated bots exploiting MEV opportunities, regardless of their potential disruption to users <cit.>. Currently, the PoS-Ethereum protocol allows validators to release their block in a 12 seconds slot time, during which the designated validator has a total monopoly on the block release. This feature introduces new opportunities for MEV extraction through strategic waiting games. To illustrate, we hypothesize that the block proposer may exploit their monopolistic position and wait as long as possible within the slot limits to release the block in order to extract additional MEV. However, if a large proportion of validators on the Ethereum network engage in these delayed releases, it could result in instability within the consensus of the network, as delays in block release may lead to multiple forked blocks. Thus, this paper delves into the impact of delayed block releases, or more broadly, waiting games, on the Ethereum consensus framework. We primarily focus on answering three pivotal research questions: * Are waiting games profitable for validators, considering the dynamics of value evolution during a slot time and the relative importance of MEV rewards compared to proposal rewards from the consensus layer? * Can validators engage in waiting games without risking their blocks being vulnerable to getting orphaned? * Could waiting games potentially disrupt the stability of Ethereum consensus and, if so, under which conditions? §.§ Related Work As far as we are aware, <cit.> is the sole study exclusively focusing on waiting or timing games in PoS-Ethereum, although <cit.> also notes a positive correlation between bid arrival time and value, while exploring the intricate details about the block construction market available on Ethereum, MEV-Boost <cit.>. In <cit.>, the authors present a model where honest-but-rational consensus participants can delay their block proposals to maximise MEV capture, while still ensuring their inclusion in the blockchain in a timely manner. Despite noting that timing games are worthwhile, they find these strategies not currently being exploited by consensus participants. However, drawing attention to the differences between our study and that of the authors in <cit.> is critical. While we also analyze the value of time, we further investigate the importance of MEV rewards relative to consensus layer proposal rewards and examine unrealised values due to validators' acting prematurely, not capturing the maximum payments by the builders. Additionally, we assess attestations concerning forking vulnerability and explore the connection between orphaned blocks and the timing of their winning bids, determining if orphaned blocks arrive later than others or if there are non-orphaned blocks with later submitted winning bids. This comprehensive inspection provides deeper insights into the value and significance of waiting games and the associated risks. To test the waiting games under different network conditions, we adopt an Agent-Based Model (ABM) specifically devised for PoS-Ethereum by <cit.>. 
In particular, studying the impact of waiting games on the Ethereum consensus requires a robust model to examine how different waiting strategies influence the consensus properties. Moreover, latency plays an essential role in defining consensus. Therefore, the ABM model from <cit.> is ideal as the model explicitly includes variables for the topology of the peer-to-peer network, and of particular interest, the average block and attestation latencies. Furthermore, the ABM provides specific techniques for accurately measuring the consensus of the simulated PoS-Ethereum network. §.§ Contribution This work delivers the following contributions: * Illuminates the potential profitability and inherent risks of waiting games in PoS-Ethereum, offering new insights into block proposal strategies and validator trade-offs. * Underlines the economic impact of MEV rewards, revealing their influence on rational proposer behaviour and showcasing unrealised value within premature bid selection. * Delivers a comprehensive analysis of attestation shares, identifying blocks at higher forking risk, and contrasts orphaned and non-orphaned blocks, highlighting the impact of proposal timing on confirmation. * Employs an agent-based model to simulate various validator behaviour scenarios in Ethereum consensus, enhancing our understanding of its resilience and stability against timing games. §.§ Organization Beyond this introduction, the paper is organized into four additional sections. Section <ref> presents essential background information on PoS-Ethereum mechanisms, MEV, and Proposer-Builder Separation (PBS). This is followed by Section <ref>, which details the methodology adopted for data collection and processing. The findings from our empirical analysis and agent-based simulation are presented in Section <ref>. We conclude the paper and summarize our findings in Section <ref>. § BACKGROUND This section aims to provide relevant background information essential to our study. Initially, we outline the Proof-of-Stake (PoS) Ethereum consensus and follow it up with a discussion on the reward structure within PoS-Ethereum. Next, we delve into the concept of Maximal Extractable Value (MEV), an additional profit that network participants can capture through certain strategies employed during block production, which forms a crucial aspect of our research. Lastly, we present the Proposer-Builder Separation (PBS)-the separation of the roles in block production to proposers (validators) and builders (who construct blocks)-to mitigate the risks associated with MEV centralization around the consensus layer. §.§ PoS-Ethereum Consensus The Ethereum network is a peer-to-peer network that operates in a decentralised manner and comprises nodes (computers or servers) that maintain a copy of the blockchain. Moreover, the nodes can offer varying functionalities, such as full nodes or light nodes. However, certain nodes perform a specific role as validators, incentivised by reward mechanisms. Proof-of-Stake (PoS) is a blockchain mechanism used to select the next block proposer in a Sybil attack-resistant manner. In PoS, network members who wish to participate in block production must stake money to the protocol, and unlike PoW, there is no need for participants to solve a complex puzzle or use mining hardware. Instead, participants have a financial stake that can be slashed in case of misbehaviour. This makes PoS chains more energy-efficient compared to PoW chains. 
In the PoS-Ethereum consensus, validators are participants involved in the consensus process. Validators need to deposit a stake as collateral (32 Ether) to the deposit contract, which contains a Merkle root for all deposits. In addition, validators are required to maintain a node that facilitates communication and message exchange within the gossip network. Validators are incentivised by the rewards they receive for honest conduct, thereby compensating them for their associated costs. The protocol randomly selects a validator to propose a block. If a validator misbehaves (e.g. proposes multiple blocks in one slot), the respective validator gets penalised (i.e. stake slashed). Additionally, a committee of validators is chosen at every slot to vote on the validity of the proposed block. At least two third of the committee must agree for the block to be considered valid. Validators who submit contradicting votes are penalised. The PoS-Ethereum protocol divides time into epochs and slots, with each epoch containing 32 slots and each slot lasting 12 seconds. In PoS-Ethereum, a block is considered finalised after two epochs. Validators in Ethereum use the LMD GHOST <cit.> fork choice rule to establish the canonical chain and accurately propose or attest to blocks. Under this framework, validators maintain a record of the latest message received from all other validators and only update it if the new vote comes from a slot strictly later than the existing entry. Therefore, in the event that a validator receives two equivocating votes from the same validator for the same slot, only the vote that arrives first is taken into consideration. The consensus mechanism operates in two phases. In the first phase, LMD GHOST operates on a smaller time scale, with time segments referred to as slots. In the second phase, Casper FFG <cit.> operates on a larger time scale, within epochs composed of 32 slots. Specifically, Casper FFG is a leaderless, two-phase propose-and-vote-style Byzantine fault-tolerant consensus protocol that functions to finalise blocks and ensure security even during temporary network partitions. The confirmation rule at the Casper FFG level involves outputting the most recent finalised block and its prefix. §.§ Reward structure In the post-merge Ethereum, there have been considerable changes in the reward structure. This paper provides a brief overview of these changes while directing interested readers to <cit.> for more detailed information. Validators earn rewards by casting votes that align with the majority of the other validators (i.e. honest interaction), specifically when proposing blocks and participating in syncing committees <cit.>. Moreover, validators receive rewards for attesting and proposing blocks on time (e.g. attestation rewards). As a result, the network reaches a consensus and validates the data in the blockchain (i.e. finalisation). Block proposers receive an aggregated reward of the Gas Tips (i.e. the fees users pay to prioritise their transactions) included in a block. On the other hand, the value of the attestation rewards for each epoch is calculated based on a Base_Reward, which functions as the fundamental unit for the other rewards. The Base_Reward is determined by the validator's effective balance and the total number of active validators and represents the average reward that a validator can receive per epoch when operating under optimal conditions. 
Specifically, the calculation entails dividing the product of the validator's effective balance and a Base_Reward_Factor by the square root of the total active balance, as shown in Equation <ref>: Base_Reward = Effective_Balance×Base_Reward_Factor/√(Active_Balance) Where the Effective_Balance is the effective balance of the attesting validator and the Active_Balance is the total active balance staked across all active validators, and the Base_Reward_Factor is a constant of value 64. Therefore, the Base_Reward is proportional to the effective balance of the validator and inversely proportional to the number of validators present. Accordingly, as the number of validators in the network increases, the rewards are distributed among more validators, which decreases the rewards per validator. Moreover, validators are rewarded based on the correctness of their behaviour. Specifically, the attestation reward is influenced by the Flag_Reward <cit.>, shown in: Flag_Reward = Flag_Weight/Weight_Denom×Base_Reward×Attestation_Balance/Active_Balance Where Flag_Weight and Weight_Denom are constants that vary between flags, and the Attestation_Balance is the sum of the effective balances of all attesting validators. Accordingly, the Attestation_Reward is the sum of the flag rewards (∑Flag_Reward). In addition, sync rewards and block proposal rewards form part of the reward structure. The former is a reward for validators that participate in the sync committee to sign new block headers in every slot. The latter are the rewards linked to proposing a beacon block. Specifically, the reward consists of including the attestation aggregations from validator votes not yet present in the beacon chain and including sync aggregates. Moreover, the block proposer is responsible for including the aggregate from the sync committee participants <cit.>. §.§ Maximal Extractable Value The concept of Maximal Extractable Value (MEV) has grown significantly within the Ethereum blockchain, particularly with the surge of the Decentralized Finance (DeFi) domain since 2020 <cit.>. Daian et al. <cit.> initially coined the term Miner Extractable Value, implying the additional value that miners, or privileged actors, could seize by manipulating transaction sets and order in a block, beyond standard protocol incentives such as block rewards and transaction fees <cit.>. Despite these privileged actors having a position of power, this value is also available to any network participant who can observe the confirmed blockchain state and monitor pending transactions. Consequently, the Miner or Privileged Extractable Value forms a subset of Maximal Extractable Value, which includes any value that can be extracted within a blockchain network <cit.>. To exploit MEV, network participants who do not possess authority over block content must influence the transaction order by offering payments to block proposers. These payments can be transaction fees or direct payments facilitated by MEV markets such as Flashbots Auction <cit.>. Participants seeking MEV, known as MEV searchers, employ transaction ordering techniques inspired by traditional finance, such as frontrunning and backrunning <cit.>, and DeFi instruments like flash loans <cit.>, to execute profitable strategies based on the network state concerning pending transactions in the mempool <cit.> and the most recently confirmed blockchain state <cit.>. 
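The Base_Reward and Flag_Reward expressions above translate directly into code. The sketch below is only illustrative: the Gwei units, toy balances, participation rate and the Altair-style flag weights (14/26/14 out of a denominator of 64) are assumptions for the example rather than values taken from this paper.

```python
# Illustrative transcription of the reward formulas above (Gwei units; flag weights
# and participation rate are assumed example values, not from the paper).
import math

BASE_REWARD_FACTOR = 64                      # constant given in the text

def base_reward(effective_balance: int, active_balance: int) -> float:
    """Base_Reward = Effective_Balance * Base_Reward_Factor / sqrt(Active_Balance)."""
    return effective_balance * BASE_REWARD_FACTOR / math.sqrt(active_balance)

def flag_reward(base: float, flag_weight: int, weight_denom: int,
                attestation_balance: int, active_balance: int) -> float:
    """Flag_Reward = (Flag_Weight / Weight_Denom) * Base_Reward * Attestation_Balance / Active_Balance."""
    return (flag_weight / weight_denom) * base * attestation_balance / active_balance

GWEI = 10**9
active = 500_000 * 32 * GWEI                 # toy network: 500k validators at 32 ETH
attesting = int(0.99 * active)               # assumed 99% participation
base = base_reward(32 * GWEI, active)

# Attestation_Reward = sum of the flag rewards; source/target/head weights are
# assumed Altair-style values (14, 26, 14) with Weight_Denom = 64.
attestation_reward = sum(flag_reward(base, w, 64, attesting, active) for w in (14, 26, 14))
print(f"Base_Reward ~ {base:,.0f} Gwei; Attestation_Reward per epoch ~ {attestation_reward:,.0f} Gwei")
```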
Existing research <cit.> and web services such as MEV Explore <cit.> or EigenPhi <cit.> that quantify the extracted MEV on Ethereum are limited by their heuristic approaches to detecting certain MEV patterns (e.g., arbitrages, sandwiches, liquidations) and the protocols they inspect. Consequently, they only provide a lower-bound estimate of the actual value extracted. According to MEX Explore <cit.>, more than $675 million worth of MEV had been harnessed until the merge in September 2022. Furthermore, data from EigenPhi shows ongoing MEV extraction, with approximately $9.5 million in profit generated within a month from May 2023 to June 2023 <cit.>. Although a lower-bound, these estimates already highlight the significance of MEV as a powerful incentive on Ethereum, emphasizing its consideration in design decisions to prevent consensus destabilizing attacks <cit.>. MEV's negative implications are manifold <cit.>, with issues including user value loss, network congestion due to MEV searchers competing <cit.>, value disparities among blocks incentivising consensus destabilising attacks <cit.>, and centralisation of MEV supply chain components due to economies of scale <cit.>. In response, innovative solutions are emerging across different system layers <cit.>. These include fair transaction ordering protocols <cit.>, privacy-preserving mechanisms to protect valuable transaction information of users <cit.>, efficient MEV extraction designs that facilitate flow across different stakeholders <cit.>, and MEV-aware applications providing user rebates <cit.>. For a comprehensive analysis of the MEV mitigation space and existing solutions, we refer the reader to <cit.>. §.§ Proposer-Builder Separation and MEV-Boost Proposer-Builder Separation (PBS) is a novel design concept introduced for Ethereum to address the potential risk of validators becoming excessively sophisticated in block construction <cit.>. This approach introduces a clear division of tasks within the block production process, assigning separate roles to proposers and builders. The core responsibility of the proposer revolves around consensus participation, while the intricate task of block construction is outsourced to specialised builders. By integrating PBS into Ethereum's core protocol, validators can function solely as consensus nodes and source the execution payload from the builder's market on the execution layer. This enables validators to profit from MEV available in the mempool without direct involvement in the complexities of block building. This shifts the competition for MEV and its inherent centralising influences from the validator level, closely related to the consensus layer, to the builder level, primarily associated with the execution layer. Within the framework of PBS, builders are involved in a competitive environment where they strive to offer the maximum value to proposers. Their bidding capacity depends on their ability to construct a valuable block using transactions from their exclusive order flow and public mempool. Determining the optimal set of transactions or bundles that yield the highest value necessitates using complex algorithms, which may consume significant energy resources. Builders willingly undergo this computational cost to secure a monopoly over a single slot duration without engaging in staking or, previously, mining activities as consensus participants do. 
Yet, the successful integration of PBS into the Ethereum core protocol relies on addressing the complexities surrounding trust assumptions and commitments between proposers and builders <cit.>. As a temporary PBS solution, Flashbots introduced MEV-Boost <cit.>, which employs relays <cit.> as intermediaries between proposers and builders. Relays gather blocks from builders in the form of bids, validate them, and present them to proposers. Consequently, they function as a data availability layer, safeguarding proposers from Denial-of-Service (DoS) attacks stemming from excessive bid submissions. While PBS aims to counteract the centralising impacts of MEV, MEV-Boost leads to new censorship-related risks, as relays could selectively withhold bids from certain builders to specific proposers. However, in the absence of reliable relays, proposers can turn to their execution clients for local block building. Hence, in the MEV-Boost architecture, relay competition promotes honest behaviour as their activities are subject to public auditability. Architecture The architecture of MEV-Boost <cit.> is designed around three key players: validators, who control the proposers, relays, and builders, who prepare the execution payloads. To qualify for receiving execution payloads from builders, validators need to register with their chosen relays and provide details such as the payment address, block gas limit, and their public key <cit.>. The block proposal process then unfolds as follows, when a validator is scheduled to propose a block <cit.>: * Builders, beginning from the preceding slot, check if the validator for the forthcoming slot is registered with a relay they submit bids to. If so, they start assembling execution payloads using transactions from the public mempool and their private order flow, if it exists. * Builders then forward the prepared execution payload, along with their bid for the validator, to the relays they work with. * Once the relays receive the submitted payloads, they validate them and make them available for the validator. * The validator's MEV-Boost middleware collects headers and associated bids from the registered relays and provides the validator with the header offering the highest value. * The validator blindly signs the header and sends it back to the MEV-Boost middleware, which forwards it to the relevant relay. * The relay then verifies the validator's signature on the signed header and propagates the complete block to the rest of the network. MEV-Boost Analytics Following the merge in September 2022, MEV-Boost has risen to prominence as the primary solution for validators' block production. Analysis and data presented in <cit.> clearly showcase the widespread adoption of MEV-Boost among block proposers. As of November 2022, the proportion of blocks produced by MEV-Boost utilising validators persistently exceeds 85%, even peaking beyond 90% by June 2023 <cit.>. The remaining validators presumably depend on local block construction via their execution clients. The adoption of MEV-Boost has triggered substantial compensations for validators, surpassing an aggregate of 215k ETH <cit.>. However, it is important to note that the MEV-Boost ecosystem has exhibited a degree of centralisation. Presently, ten active relays participate in MEV-Boost, with Ultra Sound Relay, Flashbots, Agnostic, and BloXRoute Max Profit contributing significantly to the block production, constituting 32%, 25%, 17%, and 10% respectively <cit.>. 
This centralisation around specific relays can be attributed to the positive reputation and trust placed in these relays by both builders and validators. Additionally, a centralisation trend is evident within the builder market as well. Four leading block builders-Flashbots, Beaverbuild, Builder0x69, and rsync-build-cumulatively account for nearly 75% of all MEV-Boost blocks <cit.>. This pattern suggests that certain builders may have access to exclusive order flow in addition to the transactions available in the public mempool. § DATA COLLECTION AND PROCESSING To comprehensively analyse the impact of waiting times on the value accrued by proposers, attestations, and consensus stability, we collected data from the MEV-Boost protocol and the Ethereum consensus layer. Our methodology involved leveraging the public data endpoints presented by the MEV-Boost relays, which furnish information regarding submitted builder bids[<https://flashbots.github.io/relay-specs/#/Data/getReceivedBids>] and proposed blocks[<https://flashbots.github.io/relay-specs/#/Data/getDeliveredPayloads>]. Our data extraction process primarily focused on three dominant relays <cit.>: Ultra Sound, Flashbots[For Flashbots bids and blocks, we also used the data dumps available at <https://flashbots-boost-relay-public.s3.us-east-2.amazonaws.com/index.html>], and Agnostic. Specifically, we procured bid and proposed block data covering a slot range from 6 087 501 to 6 100 000, constituting 12,500 slots during the time frame of March 27 to 28, 2023. To fetch the consensus-related data such as attestations, forked blocks, and consensus rewards, we utilised the API endpoints provided by beaconcha.in[<https://beaconcha.in/api/v1/docs/index.html>]. We leveraged the builder overview found in mevboost.pics[<https://mevboost.pics>] to identify the builders. Notably, some builders adopt a strategy to submit identical bids (those with matching block hash and value) repetitively until the conclusion of the block auction. Builders are also observed to be submitting the same bid to multiple relays to enhance their selection likelihood. We aimed to pinpoint unique bids in our time-based bid value analysis, which necessitated the removal of duplicates within a single relay and across multiple relays. For each relay, the arrival time of a bid was identified by its first appearance, and we disregarded subsequent duplications from the same builder for the same slot. When aggregating bids across all relays, the earliest bid from any relay was considered to be the first, thereby indicating the initial contribution of a particular bid. Across the range of all analysed slots, we discovered an average of 81, 119, and 47 duplicate bids per slot submitted to Agnostic, Ultra Sound, and Flashbots, respectively, resulting in an overall average of 83 duplicates. The sheer volume of submissions-with each relay receiving an average of 647 bids per slot-underscored the necessity for accurate identification and removal of these duplicate bids, a step critical to upholding the accuracy of our results. Figure <ref> located in Appendix <ref> provides an example of how the duplicate bids are distributed within a particular slot for each relay. § RESULTS Our results, a combination of empirical on-chain data analysis and agent-based model simulations, deeply examine the complex dynamics of waiting games on the Ethereum blockchain. 
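The deduplication step described in the data-processing section above can be summarised in a few lines. The sketch below is schematic; the record keys ("builder_pubkey", "block_hash", "value", "timestamp_ms") are assumed names for the bid-trace fields, not necessarily the exact relay API schema.

```python
# Schematic bid deduplication: within a relay, a bid's arrival time is its first
# appearance and later duplicates from the same builder are dropped; across relays,
# the earliest appearance of each distinct bid is kept. Field names are assumptions.
def bid_key(b):
    return (b["builder_pubkey"], b["block_hash"], b["value"])

def dedup_within_relay(bids):
    """bids: list of bid dicts for one relay and one slot."""
    seen, unique = set(), []
    for b in sorted(bids, key=lambda b: b["timestamp_ms"]):
        if bid_key(b) not in seen:
            seen.add(bid_key(b))
            unique.append(b)
    return unique

def dedup_across_relays(bids_by_relay):
    """bids_by_relay: dict relay_name -> already-deduplicated bids for one slot."""
    earliest = {}
    for relay, bids in bids_by_relay.items():
        for b in bids:
            k = bid_key(b)
            if k not in earliest or b["timestamp_ms"] < earliest[k]["timestamp_ms"]:
                earliest[k] = {**b, "relay": relay}
    return sorted(earliest.values(), key=lambda b: b["timestamp_ms"])
```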
Leveraging datasets from the MEV-Boost protocol and the Ethereum consensus layer, we compiled a robust dataset that encapsulates block production activity over a specific period. Our analysis spotlights essential elements such as the value of waiting, the relative importance of MEV rewards versus consensus protocol proposal rewards, and the relationship between timing, attestation shares, and block orphaning likelihood. Moreover, we utilise an agent-based simulation model tailored for PoS-Ethereum to inspect how waiting strategies influence emerging consensus properties. This approach allows us to examine system behaviour and dynamics when a set percentage of validators engage in waiting games, adopting diverse delay durations. Ultimately, we strive to address the central question: can waiting games be played without disrupting consensus? §.§ The Value of Waiting In the context of PoS-Ethereum, a validator selected to propose a block holds a monopoly for the duration of their allocated slot. While honest validator specs compliance would require proposing a block immediately at the slot's beginning (at 0ms), a positive correlation between value and time may motivate rational proposers to postpone their block proposal. This section delves into the exploration of value of time and scrutinises the significance of MEV rewards in comparison to proposal rewards from consensus. The goal is to assess whether the strategic delay in block proposals provides a substantial advantage. §.§.§ The Evolution of Value over Time In order to comprehend the potential value of delay for proposers, we have tracked the bids submitted across the three relays under consideration: Ultra Sound, Flashbots, and Agnostic. Initially, we consider all distinct builder bids submitted across these relays, aggregating them for slot 6 093 815. Figure <ref> dissects the distribution of bids, which are represented as dots and colour-coded based on the submitting builder. The relation between their arrival time at the relay and their associated value is examined. The study uncovers a distinct upward trend in bid values over time. It is important to note that we measure the arrival time relative to the start time of the slot; therefore, any negative time value indicates that the bid arrived during the previous slot. With this in mind, the earliest recorded bid arrived at -10905ms with a value of 0.014 ETH. On the other hand, the winning bid chosen at 299ms had a much higher value of 0.046 ETH. This represents a substantial increase of approximately 228% from the initial bid, demonstrating the significant potential advantages of waiting. It is observed that different builders adopt distinct strategies, such as Flashbots builders submitting approximately every 0.5s since the start time of the previous slot, in contrast to rsync-builder.xyz or Bob the Builder, who only commence submissions after a specific point in time. This increase in value is attributable to the expanding public transaction pool, and potentially private order flow, that builders observe over time. As new transactions and bundles are successively submitted by regular users and MEV searchers, builders have access to a larger set of opportunities from which to construct their blocks. As a result, they can offer increased value to the validators. The escalating number of transactions included in the builder blocks, as demonstrated in Figure <ref>, further substantiates this. 
In order to measure the incremental value gained for each millisecond of delay, we collected unique bids from each relay across 750 slots, yielding slightly over 480k unique bids. To accurately capture value progression, we residualized the bid values against slot and builder fixed effects, which might cause artificial value fluctuations due to high or low MEV regimes, as discussed in <cit.>. Following this, we performed a linear regression on these residualized bid values relative to time, revealing a positive marginal value of 5.71 × 10^-6 ETH/ms. This supports our initial observation from a single slot, reaffirming that by prolonging their wait time, validators can enhance their MEV payments. §.§.§ The Significance of Waiting Although we noted a positive marginal value of delay, if the rewards gained from waiting are insignificant compared to the consensus protocol rewards issued for block proposals, then it might not be worth risking the consensus in the first place. To scrutinise the relationship between the rewards from waiting (i.e., MEV rewards) and the proposal rewards from consensus, we examined 5,726 proposed blocks from the relays we have monitored. These blocks belonged to three different epoch ranges between June 1, 2023, and June 11, 2023. While we used the MEV reward data provided by the relays, we fetched the consensus rewards validators received for their proposals from the beaconcha.in API. In Figure <ref>, we analyse three distinct epoch ranges, each containin around 80 epochs on average. The diagram distinguishes between two forms of rewards – the blue and orange segments. The blue section represents the median MEV reward that validators, who proposed a block during that epoch via one of our observed relays, received. In contrast, the orange segment represents the median proposal reward issued by the consensus protocol to validators. The proposal reward comprises attestation and sync aggregate inclusion rewards <cit.>. A prominent observation across all three epoch ranges is the dominance of MEV rewards over proposal rewards. The first epoch range analysed showcased the largest disparity, amounting to a notable 30%. Specifically, the median MEV rewards came out to be 0.067 ETH, 0.038 ETH, and 0.042 ETH respectively, culminating in an overall median of 0.048 ETH. Meanwhile, the proposal rewards consistently stayed at 0.034 ETH. This led to a median difference of 0.013 ETH per proposed block. Furthermore, MEV rewards accounted for 58.32% of all the rewards across the analysed slots. To determine the median MEV reward since the launch of MEV-Boost (which coincided with the merge), we utilized data from mevboost.pics, revealing 0.053 ETH as the median value. Given that proposal rewards from consensus are anticipated to remain steady at 0.034 ETH, we conclude that MEV rewards have a significant edge over consensus rewards. As a result, the potential gains from waiting outweigh protocol rewards, affirming the value of risking consensus. §.§.§ Unrealised Value Schwarz-Schilling et al. <cit.> established that currently, validators do not actively participate in the waiting games, and any delay observed primarily stems from the complex signing processes utilised by certain staking entities and validator clients. Our research supports their findings as we analyse the arrival time of winning builder bids in comparison to the highest value bid observed in that slot. 
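The residualisation and regression described earlier in this section (the 5.71 × 10^-6 ETH/ms estimate) can be sketched as follows. The column names and the single demeaning pass per factor are assumptions made for illustration; a full two-way fixed-effects estimate would iterate the demeaning or use dummy variables.

```python
# Sketch of "residualize bid values against slot and builder fixed effects, then
# regress on arrival time". Assumes a DataFrame `bids` with columns
# "slot", "builder", "time_ms", "value_eth" (assumed names).
import numpy as np
import pandas as pd

def marginal_value_per_ms(bids: pd.DataFrame) -> float:
    df = bids.copy()
    for col in ("value_eth", "time_ms"):
        df[col + "_res"] = df[col] - df.groupby("slot")[col].transform("mean")
        df[col + "_res"] -= df.groupby("builder")[col + "_res"].transform("mean")
    x = df["time_ms_res"].to_numpy()
    y = df["value_eth_res"].to_numpy()
    return float(np.dot(x, y) / np.dot(x, x))    # OLS slope of residual value on residual time

# usage: slope = marginal_value_per_ms(bids)     # ETH gained per millisecond of delay
```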
Figure <ref> portrays the distribution of both early winners (coloured in green), those whose bids arrived prior to the highest bid, and late winners (coloured in yellow), those whose bids arrived after the highest bid. This distribution is presented in relation to time and the value difference between the winning bid and the highest bid. Our findings reveal that out of the 8,121 unique winning blocks processed by the relays we have studied, 7,672 (94.47%) had early winners and 269 (3.31%) of them had late winners. The remaining 180 blocks (2.22%) constituted the highest bid itself. The early winners, on median, arrived 1001ms before the highest bid, albeit with a median value of 0.001 ETH less. The late winners, contrarily, arrived 392ms post the highest bid but still delivered 0.0008 ETH less value. In total, across the 7,941 blocks that did not capitalise on the highest bid, with a positive median time difference of 992ms (indicating the winning bid arrived first), a remarkable 931.27 ETH remained unrealised as the highest bid was not available when the winning bid arrived. On the flip side, 27.76 ETH was realisable as the highest bid had already arrived but went unrealised. These results reinforce our contention that validators are not engaging in the waiting game and often act prematurely in selecting the winning bid. For future work, this analysis should be repeated, incorporating the getHeader call timestamp available in the relays. This timestamp signifies the exact moment when the proposer called for the block header (hence the winning bid) from the relay. Comparing this time with the arrival time of the highest value bid could yield more precise figures regarding the total unrealised value and the time difference between winning bids and highest value bids. As this data is not publicly accessible, we resorted to using bid arrival times, which still provide valuable insight. §.§.§ Playing the Game Rationally Our research thus far has uncovered considerable incentives for rational validators within the Ethereum network to participate in waiting games. However, the manner in which these strategies are employed differs from relay to relay. In the default relay implementation <cit.>, each builder bid undergoes a simulation to confirm its validity before it is made accessible to the proposer. This process introduces an average latency of 140ms <cit.>, shortening the duration of the block auction and reducing the number of competing bids. To counter this latency, the optimistic relay design has been proposed <cit.>, and adopted by the Ultra Sound Relay. Under the optimistic approach, it is presumed that builders are submitting valid blocks, which are instantaneously made available while the validation is delayed. However, this strategy necessitates that builders deposit funds upfront, securing payment for the validator if the builder fails to deliver the promised block or payment. In examining 12,500 slots, we verified the positive influence of optimistic relaying on latency reduction. Out of these slots, 9,250 were relayed using at least one of the three relays we studied[A builder may have submitted the same winning bid to multiple relays. In such instances, we categorise all relays featuring that bid as relayers of the winning block.]. Table <ref> displays the distribution of block deliveries across the relays. 
Notably, Ultra Sound Relay, known for its adoption of optimistic relaying, garnered the most bids per slot and delivered the highest quantity of blocks with the largest median value. Additionally, Ultra Sound reported the latest median winning bid arrival time. A more detailed distribution of winning bid timings can be found in Appendix <ref>. These results suggest that by diminishing the block simulation latency, optimistic relaying enables relays to consider more bids and encourages builders to dispatch blocks later in the slot duration. The ultimate consequence is an increasing trend of block auctions being clinched by higher-value, late-submitted bids, thereby augmenting the rewards for validators. While no direct evidence of validators partaking in waiting games has been identified <cit.>, our analysis of 12,500 slots, along with the extensive historical data provided in <cit.>, confirms that validators strategically act to maximise their profits from MEV-Boost by registering with relays which deliver the most value. Currently, Ultra Sound Relay leads the pack, delivering around 30% of all blocks relayed through MEV-Boost, and offering the highest median value of 0.06 ETH to the validators <cit.>. §.§ The Risks of Waiting Having established that waiting games present potential profit opportunities for validators, our focus now turns to evaluating the associated risks, particularly in terms of consensus disruption. Although a validator enjoys a monopoly during their slot duration, the ultimate confirmation of the block can hinge on the attestations it gathers from the attestors in the respective slot committees. If a block is proposed too late for attestors to vote on it, these attestors might opt to vote on its parent block instead. Under such circumstances, a block securing less than 40% of attestations is exposed to the risk of being forked out by the following slot's proposer, using proposer-boost as outlined in Ethereum consensus specifications <cit.>. In this section, we delve into the analysis of attestation shares for the proposed blocks and identify those susceptible to forking. We contrast orphaned blocks with their non-orphaned counterparts to comprehend the potential impact of timing. Moreover, we employ an agent-based model to simulate Ethereum consensus, which allows us to observe consensus stability when a specific portion of validators engage in waiting games, with varying delay times. §.§.§ Attestation Shares Expanding on the analysis in <cit.> about the share of attestations included in the subsequent slot of a proposed block, we examine the next slot attestation shares for the blocks within our data set. We focus on the relationship between the winning bid arrival time to the relay, the attained attestation share, and the vulnerability to being forked out by the next slot proposer through the proposer-boost mechanism <cit.>. Recall that the fork choice protocol, LMD-Ghost, computes a block's weight as the cumulative effective balances of the validators who attested to it in a prior slot, plus the weight inherited from its parent block; an attestation for a block at slot n is only included starting from slot n+1. With this in mind, we assess the impact of the proposer-boost mechanism. This mechanism instantaneously assigns to a timely block, proposed within the first 4 seconds of a slot, an additional weight equal to 40% of the committee weight, derived from the total effective balance of the validators assigned to that slot, thereby enabling a potential re-organisation (re-org) of a lower-weight past block.
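To make the weight arithmetic concrete, the short sketch below computes a block's attestation share under the simplifying assumption used later in this section (a uniform 32 ETH effective balance per validator) and flags blocks whose share stays below the 40% proposer-boost weight. The 40% threshold is taken from the text; the function and variable names are ours.

BOOST_SHARE = 0.40          # proposer-boost weight, as a share of the committee weight
EFFECTIVE_BALANCE = 32      # ETH, assumed uniform across validators

def attestation_share(num_attesters: int, committee_size: int) -> float:
    """Share of the slot committee's weight that attested to the block."""
    block_weight = num_attesters * EFFECTIVE_BALANCE
    committee_weight = committee_size * EFFECTIVE_BALANCE
    return block_weight / committee_weight

def vulnerable_to_reorg(num_attesters: int, committee_size: int) -> bool:
    """A block whose attestation share stays below the 40% proposer-boost
    weight can be outweighed by a timely block in the next slot."""
    return attestation_share(num_attesters, committee_size) < BOOST_SHARE

# Example: a block attested by 350 of 1,000 committee members has a 35% share
# and is therefore exposed to a proposer-boost re-org.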
To analyse the relationship between this re-org vulnerability and the winning builder block's arrival timing, we measure each block's share of attestations included in the succeeding slot. This involves calculating the block's weight (presuming a uniform effective balance of 32 ETH for each validator) and normalising against the slot's committee weight, which yields the share of attestations a block has accrued from the total potential attestations. Any block with an attestation share below 40% in the subsequent slot is deemed susceptible to a re-org through the proposer-boost. Our results highlight that, on average, blocks gather 98% of attestation shares in the following slot. Out of the 8,121 distinct blocks proposed by the relays under our analysis, only 27 blocks acquired under 40% shares, with a median arrival time for winning bids of 1223ms. Remarkably, we observed 75 blocks that arrived later than the 1223ms mark yet accrued sufficient shares to be unaffected by the proposer-boost, with an average share of 90%. This analysis is visually represented in Figure <ref>, where each blue dot stands for a block, and those within the red region identify blocks vulnerable to a re-org using the proposer-boost, based on their attestation share in the subsequent slot. We further examined the relationship between the arrival times of winning bids in consecutive slots and the accrued attestation shares. As depicted in Figure <ref>, each dot corresponds to a block, with its x and y-coordinates representing the arrival times of its own and its following slot's winning bids, respectively. Blocks are colour-coded based on their attestation shares in the following slot. Our results revealed a tendency for attestation shares to decrease when a block's winning bid is delayed while the succeeding slot's winning bid arrives in a timely manner. Specifically, we identified 23 blocks vulnerable to the proposer-boost with attestation shares under 40% and a following slot's winning bid arriving within the initial 4s. Nevertheless, these blocks remained in the canonical chain, implying that the potential proposer-boost re-org was not exploited. §.§.§ Orphaned Blocks In our final empirical data analysis, we delved into the occurrence of orphaned blocks within our slot range and its correlation with the timing of the winning bid arrivals. We discovered 151 slots, amounting to roughly 1.2% of all slots we have analysed, where no block from that slot made it into the canonical chain. Among these, 123 slots had missed proposals, while 28 slots featured orphaned blocks, half of which stemmed from MEV-Boost and the other half were locally built by validators. Figure <ref> in Appendix <ref> illustrates the distribution of these orphaned blocks across relays. When we compared the timings of winning bids for both orphaned and non-orphaned blocks, Figure <ref> reveals that the winning bid for the earliest orphaned block was submitted 271ms prior to the slot start. We identified a total of 5,631 blocks with winning bids arriving after this point that made it to the canonical chain. Orphaned blocks displayed a median proposal time of 1115ms, contrasting with the non-orphaned blocks' median time of 155ms. Interestingly, 130 of these non-orphaned blocks had winning bids arriving later than the median time of the orphaned blocks. These observations lead us to two key insights: firstly, orphaning is a relatively rare event, occurring in approximately 0.22% of the 12,500 slots.
Secondly, orphaned blocks are not necessarily late arrivals, as some blocks with later-arriving winning bids avoid orphaning. Thus, we conclude that delaying the selection of the winning bid, and playing the waiting games, does not necessarily result in a block's orphaning. §.§.§ The Effects on Consensus: Agent-based Simulation Results Up to this point, our observations have been based on empirical data. In this section, we present an agent-based model to study a scenario where a fixed share of the validators follows the same delay strategy: whenever a delaying validator is selected as a block proposer, they wait until a certain time (which is the same across delayers) into the slot to release the block. They do this to maximise their MEV: from what we observed in the previous sections, there is good reason to think that MEV rewards correlate positively with the interval between the block release and the previous block release. For this simulation, we are not interested in estimating the actual amount of increased profit these delayer validators would accrue, as we have already shown that it would be positive. Here we want to observe the eventual fall-out of such a family of strategies on the overall consensus process: specifically, whether such strategies decrease the number of blocks included in the mainchain, which we take as an indicator of the consensus efficiency of a blockchain. The framework we refer to is essentially the same as that presented in <cit.>: the agents are validators, connected on a peer-to-peer network generated following the Erdos-Renyi random model <cit.>. Time is assumed to be continuous and divided into slots; validators are randomly selected to be block proposers for each slot and, when behaving honestly, release the block exactly at the beginning of the assigned slot. The remaining validators are selected as attestors: they release an attestation to certify on the blockchain that they received the block from the block proposer and that it is valid. If they do not receive the block before the 4 seconds threshold, they are allowed to attest for the previous head of the chain. The only two random events that may happen are block gossiping and attestation gossiping, which follow exponential waiting times with parameters τ_block and τ_attestation. Gossiping events happen when an agent is randomly picked to communicate the information about the blocks/attestations they received to one of their neighbour agents, picked at random as well. In <cit.>, the authors showed how a phase transition in consensus efficiency is observed because of τ_block: when the value becomes larger than a certain threshold depending on the topological properties of the peer-to-peer network, the number of blocks included in the mainchain declines abruptly. In the present work, we increase the number of parameters to take into account the delay strategy as part of the waiting games. We introduce x^d∈[0,1], the share of agents who are actively following the delay strategy, and t^d∈[0,12], the delay (in seconds) the delayers wait from the start of the slot before releasing their block. We proceed by generating a sample of simulations where we vary x^d and t^d, while we keep fixed τ_block=τ_attestation=3 and the peer-to-peer topology, as well as the simulation time.
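Before turning to the measured outcomes, the following heavily simplified sketch illustrates the ingredients of the simulation: an Erdos-Renyi peer-to-peer graph, push gossip with exponential waiting times, a share x^d of delayers releasing their block t^d seconds into the slot, and a 4-second attestation deadline. The orphaning rule used here (a block attested by less than 40% of the committee is dropped) is only a proxy for the full fork-choice model of the paper, so its numbers are not directly comparable with Figure <ref>; the library choice (networkx) and all names are our assumptions.

import random
import networkx as nx

def simulate(num_slots=200, n=100, p=0.1, x_d=0.3, t_d=6.0,
             tau_block=3.0, slot_time=12.0, attest_deadline=4.0, seed=0):
    """Toy version of the delay-strategy simulation; returns a proxy for mu."""
    rng = random.Random(seed)
    graph = nx.erdos_renyi_graph(n, p, seed=seed)
    delayers = set(rng.sample(range(n), int(x_d * n)))
    included = 0
    for _ in range(num_slots):
        proposer = rng.randrange(n)
        release = t_d if proposer in delayers else 0.0
        informed = {proposer: release}   # node -> time the block was received
        frontier = [proposer]
        t = release
        # push gossip: an informed node forwards the block to a random neighbour
        while t < slot_time:
            t += rng.expovariate(len(frontier) / tau_block)
            sender = rng.choice(frontier)
            neighbours = list(graph.neighbors(sender))
            if not neighbours:
                continue
            receiver = rng.choice(neighbours)
            if receiver not in informed:
                informed[receiver] = t
                frontier.append(receiver)
        attesters = sum(1 for node, when in informed.items()
                        if node != proposer and when <= attest_deadline)
        if attesters / (n - 1) >= 0.4:   # simplified inclusion rule
            included += 1
    return included / num_slots

# Example: simulate(x_d=0.5, t_d=9.0) explores a heavy-delay scenario.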
As measures we care to observe, we consider the mainchain rate μ, from <cit.>, to estimate consensus efficiency, formally defined as: μ = |M|/|B| where |M| is the number of blocks included in the canonical mainchain M while |B| is the total number of blocks produced during the simulation. The second measure we consider is the orphan rate of the blocks produced by the delayers, Θ^d, which we use to proxy, on the one hand, the rewards missed by the delayers and, on the other hand, the direct damage to consensus as a consequence of the delaying strategy. Θ^d = |{ b ∉ M s.t. proposer(b) ∈𝒟}| where 𝒟 is the set of agents who follow the delayer strategy and proposer(b) represents the agent who proposed block b. Simulation results are plotted in Figure <ref> and the results are intuitive: the time delay exercised by the delayers does not significantly affect consensus until it becomes larger than the slot time minus the latency time, 12-3=9 seconds (the latency also represents the average time between two consecutive gossip interactions between the same two nodes). We also observe that the effect of the share of delayers stops increasing around x^d=0.5, at which point the number of consecutive mixed blocks (delayer blocks followed by honest blocks) is at its maximum. This is particularly apparent in the left plot of Figure <ref>, where, for all shares larger than 0.5, μ assumes the same value. While more research is needed to interpret these results, we believe that they support the hypothesis that a delayer strategy supported by enough validators can be profitable and does not lead to consensus degradation for a range of delay times far more extensive than the ones we observed in the previous sections, in line with the theoretical results on the equilibrium of the waiting games described in <cit.>. § CONCLUSION In this study, we have investigated the dynamics of waiting games emerging with Ethereum's transition to a Proof-of-Stake (PoS)-based block producer selection mechanism. We found substantial incentives for validators to deviate from the honest validator specifications, primarily due to an emerging incentive: the Maximal Extractable Value (MEV). The Proposer-Builder Separation (PBS) implementation by Flashbots, MEV-Boost, has permitted validators to outsource block building to a competitive market of builders. This change has had a significant impact on the dynamics of block proposal strategies. Our results confirm that MEV rewards, which increase over time as more transactions arrive, dominate consensus proposal rewards, which provides sufficient incentive for validators to delay their block proposals, essentially playing the waiting game. However, timing these delays introduces risk, particularly of block orphaning. Our data demonstrates that while late blocks tend to accrue fewer attestation shares, not all late blocks are at risk of orphaning. In fact, orphaning is a relatively rare event, occurring in only 0.22% of cases in our study. We further analysed the impact of waiting games on consensus stability using an agent-based simulation model, simulating a range of delaying behaviours by validators in various scenarios. Our findings indicate that a delay strategy, if adopted by enough validators, can be profitable without leading to consensus degradation.
In conclusion, our empirical data analysis and simulation results indicate that the risks associated with delaying proposals are outweighed by the potential benefits, given the current dynamics of Ethereum's PoS mechanism and MEV. Looking forward, we aim to further examine the interactions among builders, relays, and validators, particularly in terms of the latencies introduced during these interactions. We hope this will enable a better understanding of how waiting games can be optimized for validators, while ensuring a more competitive block auction for builders. § DUPLICATE BUILDER BIDS BY RELAYS Figure <ref> visualizes the distribution of duplicate bid submissions across different relays for a single slot. On average, a relay receives about 83 duplicate bids. To evaluate the marginal value of time that proposers could potentially gain by waiting, we cleaned the bid data by removing these duplicate entries, allowing us to focus on the contributions of unique bids. § WINNER BID ARRIVAL TIMES BY RELAYS Figure <ref> presents our analysis of winning bid arrival times across different relays, underlining the variations between them. We found that Ultra Sound, recognized for its optimistic nature, and Flashbots, known to be non-optimistic, have distinct median winner arrival times. Interestingly, despite no known claims of being optimistic, Agnostic relay's median winner arrival time is markedly closer to Ultra Sound's than to Flashbots', suggesting potential similarities in their handling of block submissions. § ORPHANED BLOCKS BY RELAYS In the scope of our analysis of 12,500 slots, we identified 28 instances where a block was orphaned, half of which originated from MEV-Boost and the other half from blocks built locally by validators. The distribution of these orphaned blocks among relays is visualized in Figure <ref>, where BloXroute Max Profit stands out as the relay most frequently associated with an orphaned block, with six occurrences.
http://arxiv.org/abs/2307.07245v1
20230714093808
FreeCOS: Self-Supervised Learning from Fractals and Unlabeled Images for Curvilinear Object Segmentation
[ "Tianyi Shi", "Xiaohuan Ding", "Liang Zhang", "Xin Yang" ]
cs.CV
[ "cs.CV" ]
FreeCOS: Self-Supervised Learning from Fractals and Unlabeled Images for Curvilinear Object Segmentation Tianyi Shi, Xiaohuan Ding, Liang Zhang, Xin Yang[2] School of EIC, Huazhong University of Science & Technology {shitianyihust, dingxiaohuan, liangz, xinyang2014}@hust.edu.cn August 12, 2023 ======================================================================================================================================================================================= Curvilinear object segmentation is critical for many applications. However, manually annotating curvilinear objects is very time-consuming and error-prone, yielding insufficient annotated datasets for existing supervised and domain adaptation methods. This paper proposes a self-supervised curvilinear object segmentation method that learns robust and distinctive features from fractals and unlabeled images (FreeCOS). The key contributions include a novel Fractal-FDA synthesis (FFS) module and a geometric information alignment (GIA) approach. FFS generates curvilinear structures based on the parametric Fractal L-system and integrates the generated structures into unlabeled images to obtain synthetic training images via Fourier Domain Adaptation. GIA reduces the intensity differences between the synthetic and unlabeled images by comparing the intensity order of a given pixel to the values of its nearby neighbors. Such image alignment can explicitly remove the dependency on absolute intensity values and enhance the inherent geometric characteristics which are common in both synthetic and real images. In addition, GIA aligns features of synthetic and real images via the prediction space adaptation loss (PSAL) and the curvilinear mask contrastive loss (CMCL). Extensive experimental results on four public datasets, i.e., XCAD, DRIVE, STARE and CrackTree, demonstrate that our method outperforms the state-of-the-art unsupervised methods, self-supervised methods and traditional methods by a large margin. The source code of this work is available at https://github.com/TY-Shi/FreeCOS. § INTRODUCTION Automatically segmenting curvilinear structures (such as vascular trees in medical images and road systems in aerial photography) is critical for many applications, including retinal fundus disease screening <cit.>, diagnosing coronary artery disease <cit.>, road condition evaluation and maintenance <cit.>. Despite a plethora of research works in the literature, accurately segmenting curvilinear objects remains challenging due to their complex structures with numerous tiny branches, tortuous shapes, ambiguous boundaries due to imaging issues and noisy backgrounds. Most recent methods <cit.> leverage supervised deep learning for curvilinear object segmentation and have achieved encouraging results. However, those methods require a large number of pixel-wise manual annotations for training, which are very expensive to obtain and error-prone due to poor image quality, annotator's fatigue and lack of experience. Although there are several publicly available annotated datasets for curvilinear object segmentation <cit.>, the large appearance variations between different curvilinear object images, e.g., X-ray coronary angiography images vs. retinal fundus images, yield significant performance degradation for supervised models across different types of images (even across the same type of images acquired using different equipment).
As a result, expensive manual annotations are inevitably demanded to tune the segmentation model for a particular application. Potential solutions to alleviate the annotation burden include domain adaptation <cit.> and unsupervised segmentation <cit.>. However, the effectiveness of domain adaptation is largely dependent on the quality of annotated data in the source domain and constrained by the gap between the source and target domain. Existing unsupervised segmentation methods <cit.> can hardly achieve satisfactory performance for curvilinear objects due to their thin, long, and tortuous shapes, complex branching structures, and confusing background artifacts. Despite the high complexity and great variety of curvilinear structures in different applications, they share some common characteristics (i.e., the tube-like shape and the branching structure). Thus, existing studies <cit.> have demonstrated that several curvilinear structures (e.g., arterial trees of the circulation system) can be generated via the fractal systems with proper branching parameters to mimic the fractal and physiological characteristics, and some observed variability. These results motivate us to use the generated curvilinear objects via the fractal systems to explicitly encode geometric properties and varieties (i.e., different diameters and lengths of branches, and different branching angles) into training samples and to assist feature learning of a curvilinear structure segmentation model. However, such formula-generated training samples can hardly mimic the appearance patterns within curvilinear objects and in the transition regions between curvilinear objects and backgrounds, which are also key information for learning a segmentation model and are contained in easily-obtained unlabeled target images. This paper asks the question: how can fractals and unlabeled target images be combined to encode sufficient and comprehensive visual cues for learning robust and distinctive features of curvilinear structures? The main contribution of this paper is a self-supervised segmentation method based on a novel Fractal-FDA synthesis (FFS) module and a geometric information alignment (GIA) approach. Specifically, curvilinear structures are synthesized by the parametric fractal L-Systems <cit.> and serve as segmentation labels of synthetic training samples. To simulate appearance patterns in the object-background transition regions and background regions, we apply Fourier Domain Adaptation <cit.> (FDA) to fuse synthetic curvilinear structures and unlabeled target images. The synthetic images via our FFS module can effectively guide learning distinctive features to distinguish curvilinear objects and backgrounds. To further improve the robustness to differences between intensity distributions of synthetic and real target images, we design a novel geometric information alignment (GIA) approach which aligns information of synthetic and target images at both image and feature levels. Specifically, GIA first converts each training image (synthetic and target images) into four geometry-enhanced images by comparing the intensity order of a given pixel to the values of its nearby neighbors (i.e., along the up, down, left and right directions). In this way, the four converted images do not depend on the absolute intensity values but on the relative intensity order, capturing the inherent geometric characteristic of the curvilinear structure and reducing the intensity differences between synthetic and target images.
Then, we extract features from the 4-channel converted images and propose two loss functions, i.e., a prediction space adaptation loss (PSAL) and a curvilinear mask contrastive loss (CMCL), to align the geometric features of synthetic and target images. The PSAL minimizes the distance between the segmentation masks of the target images and synthetic curvilinear objects and the CMCL minimizes the distance between features of segmented masks and synthetic objects. The FreeCOS based on FFS and GIA approaches applies to several public curvilinear object datasets, including XCAD <cit.>, DRIVE <cit.>, STARE <cit.> and CrackTree <cit.>. Extensive experimental results demonstrate that FreeCOS outperforms the state-of-the-art self-supervised <cit.>, unsupervised <cit.>, and traditional methods <cit.>. To summarize, the main contributions of this work are as follows: * We propose a novel self-supervised curvilinear feature learning method which intelligently combines tree-like fractals and unlabeled images to assist in learning robust and distinctive feature representations. * We propose Fractal-FDA synthesis (FFS) and geometric information alignment (GIA), which are the two key enabling modules of our method. FFS integrates the synthetic curvilinear structures into unlabeled images to guide learning distinctive features to distinguish foregrounds and backgrounds. GIA enhances geometric features and meanwhile improves the feature robustness to intensity differences between synthetic and target unlabeled images. * We develop a novel self-supervised segmentation network that can be trained using only target images and fractal synthetic curvilinear objects. Our network performs significantly better than state-of-the-art self-supervised /unsupervised methods on multiple public datasets with various curvilinear objects. § RELATED WORK §.§ Traditional Methods Traditional curvilinear object segmentation methods <cit.> design heuristic rules and/or filters to capture features of the target curvilinear objects. For instance, Frangi et al. introduce the vesselness filter <cit.> based on the Hessian matrix to represent and enhance tube-like curvilinear objects. Khan et al. <cit.> further design B-COSFIRE filters to denoise retinal images and segment retinal vessels. Memari et al. <cit.> enhance image contrast via contrast-limited adaptive histogram equalization and then segment retinal vessels based on hand-crafted filters. In <cit.>, the authors propose optimally oriented flux (OOF) to enhance curvilinear tube-like objects. OOF exhibits better performance for segmenting adjacent curvilinear objects yet is sensitive to different sizes of curvilinear objects. Traditional methods based on hand-crafted filters do not require any training yet they require careful parameter tuning for optimized performance. And the optimized parameter settings are usually data-dependent or even region-dependent, limiting their convenience in segmenting a wide variety of curvilinear objects. §.§ Unsupervised Segmentation Methods Unsupervised segmentation methods can be generally divided into two classes: clustering based <cit.> and adversarial learning based <cit.>. Xu et al. <cit.> propose Invariant Information Clustering (IIC) which automatically partitions input images into regions of different semantic classes by optimizing mutual information between related region pairs. 
Such a clustering-based method is more suitable for segmenting objects with aspect ratios close to one while becoming ineffective for curvilinear objects due to their thin, long, tortuous shapes. Redo <cit.> is based on an adversarial architecture where the generator is guided by an image and extracts the object mask, then redraws a new object at the same location with different textures/ colors. However, this adversarial learning-based unsupervised method only performs well for objects which are visually distinguishable from backgrounds. For the segmentation of curvilinear objects with complex and numerous tiny branching structures, embedded in confusing and cluttered backgrounds, the efficacy of such a method degrades significantly. In contrast to unsupervised methods, our method explicitly encodes geometric and photometric characteristics, as well as some observed varieties of curvilinear objects in target application into synthetic images. Those synthetic images provide labels to effectively guide the model to learn robust and distinctive features and thus yield superior performance to state-of-the-art unsupervised methods <cit.>. §.§ Self-supervised Learning Methods Self-supervised learning methods construct pretexts from large-scale unsupervised data and utilize contrastive learning losses to measure the similarities of sample pairs in the representation space. To this end, various pretext tasks have been designed, including jigsaw <cit.>, hole-fill <cit.> and transformation invariance <cit.>. Although existing self-supervised methods have achieved outstanding performance in classification <cit.>, detection <cit.>, and image translation <cit.>, few of them provide a suitable pretext task for curvilinear object segmentation. A potential solution is the pixel-level contrastive-based method <cit.>, while it requires heavy manual annotations to prepare pixel-level positive and negative samples. Similar work to ours is <cit.> which designs a self-supervised vessel segmentation method via adversarial learning and fractals. But this method requires clean background images as input (i.e., the first frame of the angiography sequence) for synthesis which greatly limits its applications. In addition, adversarial learning cannot explicitly and precisely enforce visual cues, which are important for learning segmentation-oriented features, being encoded in the synthetic images. As a result, although their synthetic images visually look similar to the target images, the segmentation accuracy is also quite low. § METHOD Figure. <ref> shows the framework of FreeCOS which consists of two main modules, i.e., Fractal-FDA Synthesis (FFS) and Geometric Information Alignment (GIA). In FFS, we generate synthetic curvilinear structures via the parametric Fractal L-Systems and use the generated structures as segmentation maps to guide self-supervised training of a segmentation network. The synthetic curvilinear structures are then integrated into unlabeled images via FDA to form synthetic images of curvilinear structures. The intensity distributions of synthetic images could deviate from those of real target images and hence yield poor feature robustness. To address this problem, our GIA module first reduces image-level differences between the synthetic and target images by converting intensity images into four-channel intensity order images. The converted images of both synthetic images and target images are then input into U-Net to extract feature representations. 
Our GIA further aligns features of synthetic images and target images via the prediction space adaptation loss (PSAL) and the curvilinear mask contrastive loss (CMCL). In the following, we present the details of FFS and GIA. §.§ Fractal-FDA Synthesis We first generate curvilinear structures via the parametric Fractal L-Systems and then integrate the synthetic objects into unlabeled target images to obtain synthetic training samples via FDA. Fractal-based curvilinear structure generation. Fractals are simple graphic patterns rendered by mathematical formulas. In this work, we adopt the parametric Fractal L-systems proposed by Zamir et al. <cit.> to generate fractal tree structures and meanwhile select proper branching parameters according to physiological laws of curvilinear objects in target applications. Specifically, “grammar” for generating curvilinear structures with repeated bifurcations based on Fractal L-systems method is defined as follows: ω: F rule: F → F[-F][+F] The generated object by iteration iter: Draw(iter, F , rule) where F represents a line of unit length in the horizontal direction, ω denotes an axiom, iter denotes the iteration and rule denotes the production rule. The square brackets represent the departure from ([) and return to (]) a branch point. The plus and minus signs represent turns through a given angle δ in the clockwise and anticlockwise directions, respectively. For example, the first three stages of a tree produced by the basic Fractal L-system, denoted by iter=1 ∼ 3, are given by the: [ iter=1 F; iter=2 F[-F][+F]; iter=3 F[-F][+F][-F[-F][+F]][+F[-F][+F]] ] The left part of Figure. <ref> illustrates exemplar results of the generated curvilinear structures based on the basic Fractal system. To synthesize curvilinear structures with various widths and lengths, we replace F using F_random(w_i, l_i, c_i) which represents a line of width w_i, length l_i and intensity c_i. We set the initial width w_init, length l_init and decreased parameter γ by the repeated F_random in the Draw grammar. For the index number i of F_random, w_i and l_i are given by: w_i =w_init·γ^i-1 l_i =l_init·γ^i-1 We randomly choose a value within the range of (0, 255) and set this value as the intensity c_i for the index i of F_random. For the branching angle δ, we replace δ using δ_random (defined by δ_init±δ_delta, δ_init within the range of (20^∘, 120^∘)), in which δ_delta is a random angle ranging from 10^∘ to 40^∘. To mimic the geometric characteristics of a target application, we incorporate the above-mentioned parameters into the L-Systems. Specifically, we design a set of rules ruleset : (.rule_1, ⋯, .rule_n) for different fractal structures, like F_random→ F_random-F_random[+F_random-. .F_random][-F_random+F_random]. For each iteration iter, we randomly select a rule from the ruleset as the production rule. In this way, we can increase the diversity of structures. The generated fractal curvilinear object X_frac∈ℝ^H × W by iteration is given by: X_frac=Draw(iter, F_random , ruleset,w_i, l_i, c_i) FDA-based curvilinear object image synthesis. We further incorporate synthetic curvilinear objects into the target unlabeled images via FDA <cit.>. We define the target images as X_t∈ℝ^H × W and the synthetic images of fractal curvilinear object as X_frac. Let ℱ^A: ∈ℝ^H × W→∈ℝ^H × W be the amplitude component of the Fourier transform ℱ of a grayscale image. We define a β∈(0,1) to select the center region of the amplitude map. 
Given a randomly sampled synthetic image X_frac and a target image X_t, we follow FDA <cit.> to replace the low-frequency part of the amplitude map of X_frac computed by the FFT <cit.>, denoted as ℱ^A(X_frac), with that of the target image X_t, denoted as ℱ^A(X_t). Then, the modified spectral representation of X_frac, with its phase component unaltered, is mapped back to an image X_frac → t by the inverse FFT <cit.>, whose content is the same as X_frac but whose appearance resembles that of the target sample X_t. After that, we apply a Gaussian blur to the output synthetic image of FDA to obtain synthetic images from FFS, denoted as X_F. §.§ Geometric Information Alignment Synthetic images generated by FFS could still have nontrivial intensity differences from those of real images. To address this problem, we further align synthetic images and real target images at both image and feature levels. Image-level Alignment. We aim to explicitly remove the dependency on raw intensity values for both synthetic and real images and meanwhile enhance the geometric characteristics of curvilinear structures. To this end, for each image pixel j we compare its intensity value X(j) with the 8 neighboring pixels {n_d^m | m = 1, …, 8} (yellow box in Figure. <ref>) lying along direction d from j (red box in Figure. <ref>), where d denotes the left, right, top, and bottom directions. If X(j) ≤ X(n_d^m), we set the corresponding value for n_d^m to 0. Otherwise, we set the value for n_d^m to 1. Such an operation produces four 8-bit images for each input image, where the m-th bit on the d-th image denotes the relative intensity order between j and its neighbor m along the d-th direction. For each input image X (denoted X_F or X_t), we concatenate four 8-bit images (X_IA^1, ⋯, X_IA^4), where X_IA^d(j) is represented as X_IA^d(j)=∑_m=1^8[X(j)>X(n_d^m)] × 2^m-1 Such a local intensity order transformation <cit.> can capture the intrinsic characteristics of the curvilinear object and meanwhile reduce the intensity gap between synthetic and real images. Feature-level Alignment. Given the 4-channel transformed images, we utilize U-Net <cit.> to extract features. We align curvilinear objects' features of synthetic images and unlabeled target images based on two loss functions, i.e., the prediction space adaptation loss (PSAL) and the curvilinear mask contrastive loss (CMCL). 1) Prediction space adaptation loss. We utilize adversarial learning to explicitly align the prediction space distribution of target images and synthetic images. We denote the prediction segmentation masks of the target images and the synthetic images as Y_target∈ℝ^H × W and Y_syn∈ℝ^H × W, respectively. We input Y_target and Y_syn into a fully-convolutional discriminator D as <cit.> trained via a binary cross-entropy loss ℒ_d: ℒ_d=𝔼[log(D(Y_syn ))]+𝔼[log(1-D(Y_target))] Accordingly, the PSAL is computed as: ℒ_PSAL=𝔼[log(D(Y_target))] 2) Curvilinear mask contrastive loss. The CMCL aims to reduce the distance between features of synthetic images and true target images, and meanwhile to improve the feature distinctiveness between curvilinear objects and backgrounds. To this end, we take the feature maps (denoted as f ∈ℝ^H × W × C) from the final decoder layer of U-Net, the mask of synthetic image (denoted as G_syn∈{0,1}^H × W) and prediction mask of target image (denoted Y_target∈[0,1]) as input. We process f using a lightweight contrastive encoder and a contrastive projector as <cit.> to map f to the feature space where the pixel-level contrastive loss is applied for Z ∈ℝ^H × W × C.
We denote I as H × W spatial location of the projected feature maps Z, then for a location i ∈ I, we can obtain a feature vector z_i at location i from feature map Z, label values g_i and y_i at i from the mask of synthetic image and the prediction mask respectively. We partition pixels of I into two groups: curvilinear object locations I^+ and background locations I^-. For synthetic images, we perform the partition directly based on the mask G_syn, i.e., I_syn^+={i ∈ I_syn| g_i=1} and I_syn^-={i ∈ I_syn| g_i=0}. For target images, as the ground-truth mask is not available, we alternatively perform the partition based on the prediction probability mask Y_target, i.e., I_target^+={i ∈ I_target| y_i≥ 1-α} and I_target^-={i ∈ I_target| y_i≤α}, where α=0.1 is a small threshold and is fixed in our method. We let q_syn^+={𝐳_i| i ∈(I_syn^+, σ)} and k_target^+={𝐳_i| i ∈(I_target^+, σ)} denote the curvilinear keys of synthetic and target images respectively. Similarly, we define k_syn^-={𝐳_i| i ∈(I_syn^-, σ)} and k_target^-={𝐳_i| i ∈(I_target^-, σ)} as the background keys of synthetic and target images, where (∙, σ) is a random sampling operator which samples a subset from a set randomly with a proportion ratio σ. We combine N negative queries of the features of synthetic and target images to form a negative set k^-=(k_syn^-, k_target^-). The CMCL is defined as: ℒ_CMCL=-log( exp(q_syn^+· k_target^+ / τ))+ log(exp(q_syn^+· k_target^+ / τ) +∑_i=0^Nexp(q_syn^+· k_i^- / τ)) where τ is a temperature hyper-parameter. 3) Final Loss. The final loss is a combination of the segmentation loss, the PSAL and CMCL as: ℒ_seg=𝔼[G_syn·log(Y_syn)] ℒ=ℒ_seg+ℒ_PSAL+λℒ_CMCL § EXPERIMENTS XCAD dataset. The X-ray angiography coronary artery disease (XCAD) dataset <cit.> is obtained during stent placement using a General Electric Innova IGS 520 system. Each image has a resolution of 512 × 512 pixels with one channel. The training set contains 1621 coronary angiograms without annotations as target images. The testing set contains 126 independent coronary angiograms with vessel segmentation maps annotated by experienced radiologists. Retinal dataset. We also employ two public retinal datasets to validate the effectiveness of the proposed method. The DRIVE dataset <cit.> consists of 40 color retinal images of size 565 × 584 pixels. We use 20 images as target images and 20 remaining as test images. The STARE dataset <cit.> contains 20 color retinal images of size 700 × 605 pixels with annotations as test images. There are 377 images without annotation which are used as target images. CrackTree dataset. The CrackTree dataset <cit.> contains 206 800×600 pavement images with different kinds of cracks with curvilinear structures. The whole dataset is split into 160 target images and 46 test images by <cit.> setting. Following <cit.>, we dilate the annotated centerlines by 4 pixels to form the ground-truth segmentation. §.§ Evaluation Metrics For XCAD and CrackTree, we follow <cit.> to use the following widely-used metrics in our evaluation, i.e., Jaccard Index (Jaccard), Dice Coefficient (Dice), accuracy (Acc.), sensitivity (Sn.) and specificity (Sp.). For the DRIVE and STARE datasets, we follow the state-of-the-art works for retina vessel segmentation <cit.> to report accuracy (Acc.), sensitivity (Sn.) specificity (Sp.) and area under curve (AUC) in our evaluation. §.§ Implementation Details For FFS, we set the ruleset as (.rule_1, ⋯, .rule_4), such as: rule_1:F_random→ F_random[+F_random-F_random]. 
rule_2:F_random→ F_random[-F_random-F_random]. rule_3:F_random→ F_random-F_random-F_random. rule_4:F_random→ F_random+F_random+F_random. We set the initial parameters of the Fractal system as follows. For all four datasets, the angle is randomly selected from (20^∘, 120^∘), the initial length is randomly selected from (120px, 200px), and the decreased parameter γ is randomly selected from (0.7, 1). The initial width w_init ranges from (8px, 14px) for XCAD, DRIVE and STARE. For CrackTree, w_init is selected from (2px, 6px). The kernel size of Gaussian blur for FFS is 13. We generate 150, 150, 600 synthetic fractal images X_frac for XCAD, CrackTree, and retinal datasets, respectively. We apply data augmentation including horizontal flipping, random brightness and contrast changes ranging from 1.0 to 2.1, random saturation ranging from 0.5 to 1.5, and random rotation with 90^∘, 180^∘, and 270^∘. The standard deviation for Gaussian noise is set to a random value within (-5, +5). All images are cropped to 256×256 pixels for training. All the data-augmentation operations are applied before the GIA module. The segmentation network is trained using SGD with a momentum of 0.9 for optimization and the initial learning rate is 0.01. The discriminator network is trained using an Adam optimizer with an initial learning rate of 10^-3. We employ a batch size of 8 to train the network for 600 epochs. The numbers of q_syn^+, k_target^+ and k^- samples per batch are capped at 500, 500 and 1000, respectively. The amplitude map center region selection parameter β is set to 0.3, the sampling ratio is 0.3, the hyper-parameter λ is 0.4 and the temperature hyper-parameter τ is 0.1. §.§ Experimental Results §.§.§ Comparison with State-of-the-art Table. <ref> compares the performance of vessel segmentation on XCAD between FreeCOS and the state-of-the-art methods, including the unsupervised methods <cit.>, the self-supervised methods <cit.>, domain adaptation methods <cit.> and the traditional methods <cit.>. The results in the 1st row are based on supervised U-Net as <cit.> (i.e., an identical segmentation network trained using real images with manual labels), which serve as the upper bound. For domain adaptation methods, we pretrain a vessel segmentation model based on U-Net using training images of DRIVE. Then we adapt the pre-trained model to XCAD using MMD <cit.> and YNet <cit.>. Even with supervised information in the annotated source domain, the performance of MMD and YNet is still inferior to ours. Specifically, our method achieves 21.2% improvement in Jaccard, 22.7% improvement in Dice, 6.9% improvement in Acc, 16.4% improvement in Sn, and 4.2% improvement in Sp compared with YNet. Compared with the unsupervised methods IIC <cit.> and ReDO <cit.>, our method achieves significantly better performance for all metrics on XCAD. The results show that unsupervised methods cannot achieve satisfactory performance on the gray-scale X-ray images where the segmentation objects can hardly be distinguished from the background. The self-supervised method SSVS <cit.> is specifically designed for XCAD, yet our method still achieves much better performance for all metrics, i.e., 11% improvement in Jaccard, 10.4% improvement in Dice and 10.4% improvement in Sn. Tables <ref> and <ref> further compare our method with the existing methods on the retinal and crack datasets, and a similar trend can be observed in these three datasets. Figure. <ref> shows the visualization results of images from various kinds of curvilinear datasets.
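For concreteness, the string-rewriting step of the Fractal L-system can be sketched as below, using the ruleset and the width/length decay w_i = w_init·γ^i-1 given above. This is a minimal illustration in Python; the turtle-graphics rasterisation of the expanded string into an image X_frac, as well as the random intensities c_i and angles δ_random, are omitted, and the function names are ours.

import random

def expand(axiom="F", iters=3, rules=None, rng=random.Random(0)):
    """Expand the L-system string; one production is drawn at random from
    the ruleset at every iteration, as described in the FFS section."""
    rules = rules or [
        "F[+F-F]",   # rule_1
        "F[-F-F]",   # rule_2
        "F-F-F",     # rule_3
        "F+F+F",     # rule_4
    ]
    s = axiom
    for _ in range(iters):
        s = s.replace("F", rng.choice(rules))
    return s

def branch_params(num_levels, w_init, l_init, gamma):
    """Width/length decay per branching level: w_i = w_init * gamma**(i-1)."""
    return [(w_init * gamma ** i, l_init * gamma ** i) for i in range(num_levels)]

# Example usage: expand(iters=3) yields a bracketed string such as
# "F-F-F[+F-F-F-F-F-F]..." that a turtle renderer would draw branch by branch.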
§.§.§ Ablation Study We first conduct ablation studies to evaluate the impact of different modules. To this end, we build the following variants based on our method. For all the variant models and our final model, we use the same U-Net model as our backbone. 1) FFS. We utilize the FFS to generate synthetic training images and the corresponding labels to train the U-Net segmentation model. 2) FFS+PSAL. We further apply PSAL to align features of synthetic and real target images on top of FFS. 3) FFS+PSAL+IA. We apply image-level alignment via relative intensity order transformation (i.e., IA) on top of FFS+PSAL. 4) FFS+GIA. We apply our GIA module (including IA, PSAL and CMCL) on top of FFS. This model is also our final curvilinear segmentation model. The ablation studies are conducted on XCAD. The trend on other datasets is similar and thus the results on the other datasets are omitted due to the space limit. The results in Table <ref> show that 1) by training a U-Net model using synthetic images from FFS, we can already achieve better performance than the SOTA self-supervised method SSVS which is based on adversarial learning <cit.>. Such results reveal a very interesting phenomenon: although adversarial learning can synthesize images visually similar to real target images, it cannot explicitly control the generated visual patterns and thus fails to enforce the segmentation-oriented patterns and properties to be encoded in synthetic images. In comparison, FFS can explicitly control the synthesis of both curvilinear structures and background patterns and hence can achieve better performance. 2) Aligning features via the prediction space adaptation loss (PSAL) can help reduce domain shifts between synthetic images and real target images and thus FFS+PSAL achieves performance improvements compared with FFS. 3) Aligning synthetic images and real target images via the IA method can also reduce the domain shifts and thus FFS+PSAL+IA can provide complementary improvements to FFS+PSAL. 4) Finally, the best performance is obtained when combining both FFS and GIA. We further explore the importance of different parameters in FFS on the final performance and discuss how to generate images of curvilinear structures to encode sufficient and comprehensive visual cues for learning robust and distinctive features. To this end, we build 6 variant models based on FFS. 1) FFS w/o Gaussian blur. We do not apply Gaussian blur to synthetic images from FFS. 2) FFS w/o FDA. We remove FDA-based synthesis from FFS and only generate curvilinear structures without backgrounds. 3) FFS w/o various intensities. We remove intensity variations c_i in the Fractal system and set the intensity to a fixed value of 60. 4) FFS w/o various angles. We utilize a small branching angle range from 1^∘ to 5^∘ to reduce the branching angle variations. 5) FFS w/o various lengths. We reduce the length variation range to (10px, 20px). 6) FFS w/o various widths. We reduce the width variation to (1px, 5px). The results in Table <ref> show that 1) Gaussian blur can smooth the transition region between curvilinear objects and backgrounds, better mimicking the target images and providing more challenging samples for training than those with sharp boundaries. As a result, excluding Gaussian blur decreases the performance of FFS by 6.8% in Dice. 2) FDA could provide background patterns of target images which are essential for learning features of negative samples. Thus, FFS w/o FDA is 15.6% worse than FFS in Dice.
3) Among the four parameters of the Fractal system, i.e., intensity, angle, length and width, width plays the most important role in the final performance, i.e., FFS w/o various widths decreases the performance by 26.1% compared with FFS in Dice. To summarize, we identify that the appearance patterns in the curvilinear object, object-background transition regions and background regions are important for self-supervised segmentation. Meanwhile, proper parameter settings which can provide geometric characteristics similar to the real target images are key to the success of our self-supervised method. §.§.§ Self-supervised training with more data We also examine the segmentation performance when varying the number of synthetic images on the XCAD dataset. Tables <ref> and <ref> report the results of using different numbers of synthetic structures and of real target images, respectively, for generating synthetic images for self-supervised training. Synthetic-15 ∼ Synthetic-150 denote using between 15 and 150 synthetic fractal object images X_frac, and Target-162 ∼ Target-1620 denote using between 162 and 1620 target images X_t. Increasing both synthetic structures and real target images can accordingly improve the final segmentation performance. § CONCLUSION In this paper, we propose a novel self-supervised curvilinear object segmentation method that learns robust and distinctive features from fractals and unlabeled images (FreeCOS). Different from existing methods, FreeCOS applies the proposed FFS and GIA approaches to effectively guide learning distinctive features to distinguish curvilinear objects and backgrounds and aligns information of synthetic and target images at both image and feature levels. One limitation of FreeCOS is that it may generate false positives on other curvilinear objects (e.g., catheters in XCAD) and requires a proper selection of the width range. However, we have successfully utilized this self-supervised learning method for coronary vessel segmentation, retinal vessel segmentation and crack segmentation by reasonable parameter selection. To the best of our knowledge, FreeCOS is the first self-supervised learning method for various curvilinear object segmentation applications.
http://arxiv.org/abs/2307.04307v1
20230710020825
Weyl semimetallic state in the Rashba-Hubbard model
[ "Katsunori Kubo" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mes-hall" ]
Advanced Science Research Center, Japan Atomic Energy Agency, Tokai, Ibaraki 319-1195, Japan We investigate the Hubbard model with the Rashba spin-orbit coupling on a square lattice. The Rashba spin-orbit coupling generates two-dimensional Weyl points in the band dispersion. In a system with edges along [11] direction, zero-energy edge states appear, while no edge state exists for a system with edges along an axis direction. The zero-energy edge states with a certain momentum along the edges are predominantly in the up-spin state on the right edge, while they are predominantly in the down-spin state on the left edge. Thus, the zero-energy edge states are helical. By using a variational Monte Carlo method for finite Coulomb interaction cases, we find that the Weyl points can move toward the Fermi level by the correlation effects. We also investigate the magnetism of the model by the Hartree-Fock approximation and discuss weak magnetic order in the weak-coupling region. Weyl semimetallic state in the Rashba-Hubbard model Katsunori Kubo August 12, 2023 ===================================================== § INTRODUCTION In a two-dimensional system without inversion symmetry, such as in an interface of a heterostructure, a momentum-dependent spin-orbit coupling is allowed. It is called the Rashba spin-orbit coupling <cit.>. The Rashba spin-orbit coupling lifts the spin degeneracy and affects the electronic state of materials. Several interesting phenomena originating from the Rashba spin-orbit coupling have been proposed and investigated. By considering the spin precession by the Rashba spin-orbit coupling, Datta and Das proposed the spin transistor <cit.>, in which electron transport between spin-polarized contacts can be modulated by the gate voltage. After this proposal, the tunability of the Rashba spin-orbit coupling by the gate voltage has been experimentally demonstrated <cit.>. Such an effect may be used in a device in spintronics. The possibility of the intrinsic spin Hall effect, which is also important in the research field of spintronics, by the Rashba spin-orbit coupling has been discussed for a long time <cit.>. Another interesting phenomenon with the Rashba spin-orbit coupling is superconductivity. When the Rashba spin-orbit coupling is introduced in a superconducting system, even- and odd-parity superconducting states are mixed due to the breaking of the inversion symmetry <cit.>. This mixing affects the magnetic properties of the superconducting state, such as the Knight shift. While the above studies have mainly focused on the one-electron states in the presence of the Rashba spin-orbit coupling, the effects of the Coulomb interaction between electrons have also been investigated. The Hubbard model with the Rashba spin-orbit coupling on a square lattice called the Rashba-Hubbard model is one of the simplest models to investigate such effects. In this study, we investigate the ground state of this model at half-filling, i.e., electron number per site n=1, by the variational Monte Carlo method and the Hartree-Fock approximation. In the strong coupling limit, an effective localized model is derived and the possibility of long-period magnetic order is discussed <cit.>. The long-period magnetism is a consequence of the Dzyaloshinskii-Moriya interaction caused by the Rashba spin-orbit coupling. Such long-period magnetic order is also discussed by the Hartree-Fock approximation for the Rashba-Hubbard model <cit.>. 
However, there is a contradiction among these studies even within the Hartree-Fock approximation. In the weak-coupling region with a finite Rashba spin-orbit coupling, an antiferromagnetic order is obtained in Ref. <cit.>, but a paramagnetic phase is obtained in Refs. <cit.> and <cit.>. We will discuss this point in Sec. <ref>. The knowledge of the electron correlation beyond the Hartree-Fock approximation is limited. The electron correlation in the Rashba-Hubbard model is studied by a dynamical mean-field theory mainly focusing on magnetism <cit.> and by a cluster perturbation theory investigating the Mott transition in the paramagnetic state <cit.>. We will study the electron correlation in the paramagnetic phase by using the variational Monte Carlo method in Sec. <ref>. The results concerning the Mott transition are consistent with Ref. <cit.>. In addition, we find a transition to a Weyl semimetallic state by the electron correlation. Even without the Coulomb interaction, the band structure of this model is intriguing. When the Rashba spin-orbit coupling is finite, the upper and lower bands touch each other at Weyl points. In the large Rashba spin-orbit coupling limit, all the Weyl points locate at the Fermi level for half-filling. Topological aspects of the Weyl points and corresponding edge states of this simple model are discussed in Sec. <ref>. § MODEL The model Hamiltonian is given by H=H_kin+H_R+H_int. The kinetic energy term is given by H_kin = -t∑_(r,r') σ (c_rσ^†c_r' σ +c_r' σ^†c_rσ) =∑_kσϵ_k c_kσ^†c_kσ, where c_rσ is the annihilation operator of the electron at site r with spin σ and c_kσ is the Fourier transform of it. (r,r') denotes a pair of nearest-neighbor sites, t is the hopping integral, and the kinetic energy is ϵ_k=-2t (cos k_x + cos k_y), where the lattice constant is set as unity. The Rashba spin-orbit coupling term is given by <cit.> H_R = iλ_R ∑_rσσ' a=± 1 a (σ^x_σσ' c_rσ^†c_r+aŷσ' -σ^y_σσ' c_rσ^†c_r+ax̂σ') = -2λ_R∑_kσσ'(sin k_y σ^x_σσ'-sin k_x σ^y_σσ') c_kσ^†c_kσ' = ∑_kσσ'[h_x(k) σ^x_σσ'+h_y(k) σ^y_σσ'] c_kσ^†c_kσ' = ∑_kσσ' H_R σσ'(k) c_kσ^†c_kσ', where x̂ (ŷ) is the unit vector along the x (y) direction, σ are the Pauli matrices, λ_R is the coupling constant of the Rashba spin-orbit coupling, h_x(k)=-2λ_R sin k_y, and h_y(k)= 2λ_R sin k_x. We can assume t ≥ 0 and λ_R ≥ 0 without loss of generality. We parametrize them as t=t̃cosα and λ_R=√(2) t̃sinα. The band dispersion of H_0=H_kin+H_R is E_±(k) =-2t(cos k_x+cos k_y) ± |h(k)|, where |h(k)|=√(h_x^2(k)+h_y^2(k)) =2λ_R√(sin^2 k_x+sin^2 k_y). The bandwidth is W=8t̃. Due to the electron-hole symmetry of the model, the Fermi level is zero at half-filling. For α=0, that is, without the Rashba spin-orbit coupling, the band is doubly degenerate [Fig. <ref>(a)]. For a finite λ_R, the spin degeneracy is lifted except at the time-reversal invariant momenta X^(0)=(0,0), X^(1)=(π,0), X^(2)=(0,π), and X^(3)=(π,π) [Figs. <ref>(b) and <ref>(c)]. These are two-dimensional Weyl points. The energies at the Weyl points X^(1) and X^(2) are always zero. By increasing α to 0.5π (t=0), the energies at the other Weyl points X^(0) and X^(3) also move to zero. In Fig. <ref>(d), we show the energy dispersion in the entire Brillouin zone for α=0.5π. We can see the linear dispersions around the Weyl points. The Coulomb interaction term is given by H_int=U∑_rn_r↑n_r↓, where n_rσ=c_rσ^†c_rσ and U is the coupling constant of the Coulomb interaction. 
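The non-interacting dispersion E_±(k) defined above is simple enough to evaluate numerically; the sketch below computes it using the parametrization t=t̃cosα, λ_R=√(2) t̃sinα and checks that the energies at X^(1) and X^(2) are pinned at zero while those at X^(0) and X^(3) reach zero only for α=0.5π. This is an illustrative script rather than the code used for the figures.

import numpy as np

def bands(kx, ky, alpha, t_tilde=1.0):
    """Band dispersion E_+/-(k) of H_0 = H_kin + H_R on the square lattice."""
    t = t_tilde * np.cos(alpha)
    lam = np.sqrt(2.0) * t_tilde * np.sin(alpha)
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
    h = 2.0 * lam * np.sqrt(np.sin(kx) ** 2 + np.sin(ky) ** 2)
    return eps - h, eps + h

# Energies at the Weyl points X^(0..3) = (0,0), (pi,0), (0,pi), (pi,pi):
weyl = [(0.0, 0.0), (np.pi, 0.0), (0.0, np.pi), (np.pi, np.pi)]
for alpha in (0.25 * np.pi, 0.5 * np.pi):
    print([round(float(bands(kx, ky, alpha)[0]), 3) for kx, ky in weyl])
# X^(1) and X^(2) sit at E=0 for any alpha; X^(0) and X^(3) reach E=0 only at alpha=0.5*pi.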
§ TOPOLOGY AND EDGE STATES OF THE NON-INTERACTING HAMILTONIAN The energy bands are degenerate when h(k)=0, i.e., at the Weyl points. In the vicinity of these points, we set k=X^(l)+p and obtain H_R(k) = ∑_j h_j(k)σ^j ≃∑_ij ∂ h_j(k)/∂ k_i|_k=X^(l) p_iσ^j = ∑_ij v^(l)_ij p_iσ^j. The chirality of each Weyl point X^(l) is defined as χ_l = sgn[det v^(l)] <cit.> and we obtain χ_0=χ_3=1 and χ_1=χ_2=-1. The winding number of a normalized two-component vector field ĥ(k)=h(k)/|h(k)| is <cit.> w_l = ∮_C_ldk/2π·[ ĥ_x(k)∇ĥ_y(k) -ĥ_y(k)∇ĥ_x(k)], where C_l is a loop enclosing X^(l). We obtain w_l=χ_l. Figure <ref> shows ĥ(k) around k=X^(0) and X^(1) as examples. We can recognize the winding numbers 1 and -1, respectively, from this figure. These topological numbers are related to the Berry phase <cit.>. The eigenvector of H_R(k) with eigenvalue -|h(k)| is |k⟩ =(1/√(2))(-1,ĥ_x(k)+iĥ_y(k))^T. The Berry connection is a(k) = -i⟨k | ∇ |k⟩ = 1/2[ ĥ_x(k)∇ĥ_y(k) -ĥ_y(k)∇ĥ_x(k)]. Then, the Berry phase is γ_l = ∫_C_ldk·a(k) =w_lπ. From the existence of such topological defects as the Weyl points, we expect edge states, as in graphene with Dirac points <cit.>. We consider two types of edges: the edges along an axis direction [straight edges, Fig. <ref>(a)] and the edges along [11] direction [zigzag edges, Fig. <ref>(b)]. We denote the momentum along the edges as k and the momentum perpendicular to the edges as k_⊥. To discuss the existence of the edge states, the chiral symmetry and the winding number for a fixed k are important <cit.>. The Rashba term has a chiral symmetry: { H_R(k), σ^z } = H_R(k)σ^z+σ^zH_R(k)=0 and σ^zσ^z †=I with I being the unit matrix. The winding number for a fixed k is given by w(k) = ∫_0^2πdk_⊥/2π[ ĥ_x(k) ∂/∂ k_⊥ĥ_y(k) -ĥ_y(k) ∂/∂ k_⊥ĥ_x(k) ]. For the straight edges, we find w(k)=0 and we expect that the edge states are probably absent. For the zigzag edges, h_x(k)=-2λ_R sin(k-k_⊥) and h_y(k)= 2λ_R sin(k+k_⊥), where we have set 1/√(2) times the bond length as unity, and we find w(k)=-sgn[sin(2k)] except for k = 0, ±π/2, and ±π (projected Weyl points). At the projected Weyl points, w(k)=0. Thus, at least for t=0, the edge states should exist except at the projected Weyl points. We note that the edge states can be understood as those of a one-dimensional topological insulator. The model with only the Rashba term at fixed k is a one-dimensional model. When this one-dimensional system has a gap with a non-zero topological number, the system can be regarded as a one-dimensional topological insulator and has edge states. This one-dimensional system is of symmetry class BDI and can possess a topological number of ℤ <cit.>. To explicitly demonstrate the existence of the edge states, we numerically evaluate the band energy for lattices with finite widths. We denote the number of lattice sites perpendicular to the edges as N (see Fig. <ref>) and obtain 2N bands. The obtained energy bands are shown in Fig. <ref>. For the straight edges [Figs. <ref>(a)–(c)], we do not find the edge states. This is consistent with w(k)=0. For the zigzag edges [Figs. <ref>(d)–(f)], we obtain isolated zero-energy states except for λ_R=0 [Fig. <ref>(d)]. In particular, for α=0.5π, the zero-energy states appear at all the k points except for the projected Weyl points, as is expected from w(k)≠0. We find that the zero-energy states remain even for finite t as shown in Fig. <ref>(e). For an even number of N, the energy of the zero-energy states shifts from zero around the projected Weyl points when N is small.
For an odd number of N, we obtain zero energy even for a small N. Thus, we set N=51 in the calculations. We discuss the characteristics of the zero-energy edge states. We define c_i kσ as the Fourier transform of c_rσ along the edges, where i labels the site perpendicular to the edges (see Fig. <ref>). For the lattice with the zigzag edges, we can show that the states c_-(N-1)/2, π/4, ↓^†|0⟩ and c_(N-1)/2, π/4, ↑^†|0⟩ do not have matrix elements of H_R, where |0⟩ is the vacuum state. Thus, these states are the zero-energy states for α=0.5π completely localized on the left and right edges, respectively, with opposite spins. This helical character of the edge states is natural since the system lacks inversion symmetry due to the Rashba spin-orbit coupling. For other momenta and α, we calculate the spin density of the zero-energy edge states n_0 k σ(i)=⟨ 0 k| c_ikσ^† c_ikσ|0 k ⟩, where |0 k ⟩ denotes the zero-energy state at momentum k. The zero-energy states are doubly degenerate, and we take the average of the two states. We show n_0 k σ(i) for α=0.3π, as an example, in Fig. <ref>. At k where the bulk band gap is sufficiently large, the zero-energy states are localized well on the edges [Figs. <ref>(c) and <ref>(d)]. As the bulk band gap becomes small, the zero-energy states penetrate inner sites [Figs. <ref>(b) and <ref>(e)] and the zero-energy states extend in the entire lattice when the gap closes [Figs. <ref>(a) and <ref>(f)]. The spin components are opposite between the edges. For example, for k=0.4π and 0.45π, the up-spin state dominates on the right edge while the down-spin state dominates on the left edge. Thus, the edge states are helical. The spin components are exchanged between states at k and -k [compare Fig. <ref>(d) with Fig. <ref>(g) and Fig. <ref>(e) with Fig. <ref>(h)]. In Fig. <ref>(i), we show a schematic view of the spin density corresponding to k≃ 0.4π on the real-space lattice. § WEYL SEMIMETALLIC STATE INDUCED BY THE CORRELATION EFFECTS In this section, we investigate the effects of the Coulomb interaction U at half-filling, i.e., the electron number per site n=1, within the paramagnetic phase by applying the variational Monte Carlo method <cit.>. To achieve this objective, it is necessary to select a wave function capable of describing the Mott insulating state, as a Mott transition is anticipated, at least in the ordinary Hubbard model without the Rashba spin-orbit coupling. In this study, we employ a wave function with doublon-holon binding factors [doublon-holon binding wave function (DHWF)] <cit.>. A doublon means a doubly occupied site and a holon means an empty site. Such intersite factors like doublon-holon binding factors are essential to describe the Mott insulating state <cit.>. Indeed, the DHWF has succeeded in describing the Mott transition for the single-orbital <cit.> and two-orbital <cit.> Hubbard models. The DHWF is given by |Ψ(α_eff)⟩ = P_d P_h P_G | Φ(α_eff)⟩. The Gutzwiller projection operator P_G=∏_r[1-(1-g)P_d r], describes onsite correlations, where P_d r = n_r↑n_r↓ is the projection operator onto the doublon state at r and g is a variational parameter. The parameter g tunes the population of the doubly occupied sites. When the onsite Coulomb interaction is strong and n=1, most sites should be occupied by a single electron each. In this situation, if a doublon is created, a holon should be around it to reduce the energy by using singly occupied virtual states. P_d and P_h describe such doublon-holon binding effects. 
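Returning briefly to the edge states of the previous section, the spin-resolved profiles n_{0 k σ}(i) can be obtained from the zero-energy eigenvectors of the finite-width ribbon Hamiltonian. The generic post-processing step is sketched below; the construction of the 2N×2N ribbon matrix H(k) itself is not reproduced here, and the basis ordering (site 1 ↑, site 1 ↓, …, site N ↓) is an assumption of this sketch.

```python
import numpy as np

def edge_spin_density(H, tol=1e-8):
    """Spin-resolved weight of the zero-energy states of a ribbon Bloch Hamiltonian H(k).

    H: 2N x 2N Hermitian matrix at fixed momentum k along the edge, assumed ordered as
    (site 1 up, site 1 down, ..., site N up, site N down).  The (near-)zero-energy states,
    i.e. the doubly degenerate edge pair discussed in the text, are averaged, giving
    n_{0 k sigma}(i) as two length-N profiles across the ribbon.
    """
    evals, evecs = np.linalg.eigh(H)
    zero = np.where(np.abs(evals) < tol)[0]
    if zero.size == 0:
        return None
    weight = np.abs(evecs[:, zero]) ** 2    # |amplitude|^2, one column per zero mode
    n_up = weight[0::2, :].mean(axis=1)
    n_dn = weight[1::2, :].mean(axis=1)
    return n_up, n_dn
```

Applied to the α=0.5π ribbon at, e.g., k=π/4, such profiles should reproduce the completely edge-localized, oppositely spin-polarized states described above.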
P_d is an operator that includes intersite correlation effects involving the doublon states. It is defined as follows <cit.>: P_d=∏_r[1-(1-ζ_d) P_d r∏_a (1-P_h r+a) ], where P_h r = (1-n_r↑)(1-n_r↓) is the projection operator onto the holon state at r and a denotes the vectors connecting nearest-neighbor sites. P_d gives a factor ζ_d when site r is in the doublon state and there is no holon at the nearest-neighbor sites r+a. Similarly, P_h, describing the intersite correlation effects on the holon state, is defined as P_h=∏_r[1-(1-ζ_h) P_h r∏_a (1-P_d r+a) ]. A factor ζ_h appears when a holon exists without a nearest-neighbor doublon. For the half-filled case, we can use the relation ζ_d=ζ_h due to the electron-hole symmetry of the model. The one-electron part |Φ(α_eff) ⟩ of the wave function is given by the ground state of the non-interacting Hamiltonian H_0(α_eff), in which α in H_0 is replaced by α_eff. We can choose α_eff different from the original α in the model Hamiltonian. Such a band renormalization effect of the one-electron part is discussed for a Hubbard model with next-nearest-neighbor hopping <cit.>. We define the normal state as |Ψ_N⟩=|Ψ(α_eff=α)⟩, i.e., α_eff retains the bare value. We also define the Weyl semimetallic state as |Ψ_Weyl⟩=|Ψ(α_eff=0.5π)⟩, i.e., all the Weyl points are at the Fermi level and the Fermi surface disappears. In addition, we can choose other values of α_eff, but on a finite-size lattice, a slight change of α_eff does not change the set of occupied wave numbers or the wave function |Φ(α_eff) ⟩. Thus, we have limited choices for α_eff, as in the band renormalization of the Hubbard model with next-nearest-neighbor hopping <cit.>. We use antiperiodic-periodic boundary conditions, for which the closed-shell condition is satisfied, i.e., no k point is exactly on the Fermi surface for a finite-size lattice and there is no ambiguity in constructing |Φ(α_eff)⟩. The calculations are done for L × L lattices with L=12, 14, and 16. We evaluate the expectation value of the energy by the Monte Carlo method. We optimize the variational parameters g and ζ_d=ζ_h to minimize the energy. We denote the optimized energy of |Ψ(α_eff) ⟩ as E(α_eff). In particular, we denote E_N=E(α_eff=α) and E_Weyl=E(α_eff=0.5π). By using the Monte Carlo method, we also evaluate the momentum distribution function n(k)=∑_σ⟨ c_kσ^†c_kσ⟩, where ⟨⋯⟩ represents the expectation value in the optimized wave function. In Fig. <ref>(a), we show n(k) in the normal state at α=0.25π for L=16. For U/t̃=10, n(k) has clear discontinuities at the Fermi momenta. On the other hand, for U/t̃=14, n(k) does not have such a discontinuity; that is, the system is insulating, and a Mott metal-insulator transition takes place between U/t̃=10 and U/t̃=14. To determine the Mott metal-insulator transition point U_MIT, we evaluate the quasiparticle renormalization factor Z, which is inversely proportional to the effective mass and becomes zero in the Mott insulating state, from the jump in n(k). Except for α=0, we evaluate Z from the jump between (π,0) and (π,π), as shown in Fig. <ref>(a). For α=0, this path does not intersect the Fermi surface, and we use the jump between (π,π) and (0,0) instead. In Fig. <ref>(b), we show the U dependence of Z for α=0.25π and L=16. By extrapolating Z to zero, we determine U_MIT/t̃≃ 12.9. We note that for small α and large L, the Mott transition becomes first order, consistent with a previous study for α=0 <cit.>. We have also evaluated energies for some values of α_eff ≠ α.
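The statement that only a discrete set of α_eff values is available on a finite lattice can be made concrete by tracking the set of occupied single-particle levels of H_0(α_eff). The sketch below is illustrative only: it assumes antiperiodic boundary conditions along x and periodic along y, half filling, and t̃ = 1, and simply counts how many distinct occupied sets, and hence distinct |Φ(α_eff)⟩, occur in a window of α_eff.

```python
import numpy as np

def occupied_set(alpha_eff, L, t_tilde=1.0):
    t = t_tilde * np.cos(alpha_eff)
    lam = np.sqrt(2.0) * t_tilde * np.sin(alpha_eff)
    kx = 2.0 * np.pi * (np.arange(L) + 0.5) / L      # antiperiodic direction (assumed x)
    ky = 2.0 * np.pi * np.arange(L) / L              # periodic direction (assumed y)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    eps = -2.0 * t * (np.cos(KX) + np.cos(KY))
    habs = 2.0 * lam * np.sqrt(np.sin(KX)**2 + np.sin(KY)**2)
    E = np.stack([eps - habs, eps + habs]).ravel()   # 2 L^2 single-particle levels
    occ = np.argsort(E, kind="stable")[: L * L]      # fill the lowest L^2 levels (half filling)
    return frozenset(occ.tolist())

L = 16
alphas = np.linspace(0.40 * np.pi, 0.50 * np.pi, 401)
distinct = {occupied_set(a, L) for a in alphas}
print(len(distinct))   # number of distinct trial states |Phi(alpha_eff)> in this window
```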
Figure <ref>(a) shows energies for α_eff=0.18π and 0.22π measured from the normal state energy at α=0.2π for L=16. The normal state has the lowest energy, at least for U/t̃≤ 20. Thus, the renormalization of α, even if it exists, is weak for a system distant from the Weyl semimetallic state (α=0.5π). A similar conclusion is obtained for a small intersite spin-orbit coupling case of the Kane-Mele-Hubbard model <cit.>. It is in contrast to the onsite spin-orbit coupling case <cit.>, where the effective spin-orbit coupling is enhanced by the Coulomb interaction even when the bare spin-orbit coupling is small. On the other hand, the renormalization of α becomes strong around α=0.5π. In Fig. <ref>(b), we show the energy E_Weyl of the Weyl semimetallic state measured from that of the normal state for α=0.4π for L=16. E_Weyl becomes lower than the normal state energy at U>U_Weyl≃ 9.4t̃. There is a possibility that the normal state changes to the Weyl semimetallic state gradually by changing α_eff continuously. However, for a finite lattice, the choices of α_eff are limited between α_eff=α and α_eff=0.5π. For example, at α=0.4π, there is no choice for L=12 and L=14 and only one choice 0.4017<α_eff/π<0.4559 for L=16. For this reason, we evaluate U_Weyl by comparing the energies of the normal and the Weyl semimetallic states to show the tendency toward the Weyl semimetallic state by the renormalization effect on α. Figure <ref> shows a phase diagram without considering magnetic order. The size dependence of the phase boundaries is weak. For a weak Rashba spin-orbit coupling region, i.e., for a small α, the Rashba spin-orbit coupling stabilizes the metallic phase. It is consistent with a previous study by a cluster perturbation theory <cit.>. Around α=0.5π, we obtain a wide region of the Weyl semimetallic phase. Thus, we expect phenomena originating from the Weyl points can be realized even away from α=0.5π with the aid of electron correlations. In the Weyl semimetallic state, the density of states at the Fermi level vanishes, and thus, energy gain is expected similar to the energy gain by a gap opening in an antiferromagnetic transition. We note that such a renormalization effect on α cannot be expected within the Hartree-Fock approximation and is a result of the electron correlations beyond the Hartree-Fock approximation. § HARTREE-FOCK APPROXIMATION FOR MAGNETISM In this section, we discuss the magnetism of the model by the Hartree-Fock approximation. The energy dispersion given in Eq. (<ref>) has the following property: E_±(k+Q)=-E_∓(k) for Q=(π,π). When E_a(k)=0, in particular, E_-a(k+Q)=E_a(k)=0. Thus, the Fermi surface is perfectly nested for half-filling (the Fermi energy is zero) with the nesting vector Q=(π,π) [see Figs. <ref>(a)–(c)]. Due to this nesting, the magnetic susceptibility at Q=(π,π) diverges at zero temperature <cit.>. It indicates that the magnetic order occurs with an infinitesimally small value of the Coulomb interaction U at zero temperature. However, some recent Hartree-Fock studies argue the existence of the paramagnetic phase with finite U <cit.>. To resolve this contradiction and gain insights into magnetism, we apply the Hartree-Fock approximation to the model within two-sublattice magnetic order, i.e., with ordering vector of Q=(π,π) or Q=(π,0). 
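Before proceeding, the perfect-nesting property E_±(k+Q) = -E_∓(k) with Q = (π,π), which underlies the magnetic instability analyzed in this section, can be confirmed numerically in a few lines (t = 0.6 and λ_R = 0.8 are arbitrary illustrative values):

```python
import numpy as np

t, lam = 0.6, 0.8
def E(kx, ky):
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
    habs = 2.0 * lam * np.sqrt(np.sin(kx)**2 + np.sin(ky)**2)
    return eps - habs, eps + habs

rng = np.random.default_rng(0)
kx, ky = rng.uniform(-np.pi, np.pi, (2, 1000))   # random test momenta
Em, Ep = E(kx, ky)
EmQ, EpQ = E(kx + np.pi, ky + np.pi)
print(np.allclose(EmQ, -Ep), np.allclose(EpQ, -Em))   # True True: E_pm(k+Q) = -E_mp(k)
```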
The Hartree-Fock Hamiltonian is given by H_HF = ∑_k[ c_k^† c_k+Q^† ][ ϵ̂(k) -Δ·σ; -Δ·σ ϵ̂(k+Q) ][ c_k; c_k+Q ], where the k-summation runs over the folded Brillouin zone of the antiferromagnetic state, c_k=(c_k↑,c_k↓)^T, ϵ̂(k)=ϵ_kI+H_R(k), and Δ=Um_AF. Here, m_AF=[1/(2L^2)]∑_rσσ'e^-iQ·r⟨ c_rσ^†σ_σσ' c_rσ'⟩_HF, where ⟨⋯⟩_HF represents the expectation value in the ground state of H_HF. We solve the gap equation Δ=Um_AF self-consistently. First, we consider the magnetic order for Q=(π,π). Without the Rashba spin-orbit coupling, the asymptotic form m_AF=|m_AF|∼ (t̃/U)e^-2π√(t̃/U) for the weak-coupling region Δ=|Δ| ≪ W was obtained by Hirsch from an analysis of the gap equation <cit.>. Taking into account that the asymptotic form of the density of states ρ(ϵ) ≃ -[1/(2π^2 t̃)] ln [|ϵ|/(16t̃)] for ϵ≃ 0 <cit.> remains a good approximation even up to the band edge [see Fig. <ref>(d)], we obtain m_AF ≃ (32t̃/U)e^-2π√(t̃/U). Indeed, this approximate form reproduces the numerical data well in the weak-coupling region, as shown in Fig. <ref>(a). For a finite λ_R, we find numerically that m_AF is parallel to the x or y direction. This is expected from the effective Hamiltonian in the strong-coupling limit, which we discuss later. By assuming Δ≪λ_R and Δ≪ W, we obtain m_AF∼ (t̃/U)e^-2/[Uρ(0)] for a finite ρ(0), where ρ(0) is the density of states at the Fermi level. The prefactor of m_AF is determined by the entire behavior of the density of states up to the band edge [see Figs. <ref>(e) and <ref>(f)], and we cannot obtain it analytically in general. Figures <ref>(b) and <ref>(c) show the numerically obtained m_AF for α=0.2π and 0.4π, respectively, along with fitted curves of the form (at̃/U)e^-2/[Uρ(0)], where a is the fitting parameter. The fitted curves reproduce the numerical data well in the weak-coupling region. From the obtained asymptotic form and the numerical data supporting it, we conclude that the magnetic order occurs for an infinitesimally small U for 0 ≤α < 0.5π, consistent with the divergence of the magnetic susceptibility <cit.>. We cannot apply this asymptotic form for α=0.5π, since ρ(0)=0 there. The numerical result shown in Fig. <ref>(d) indicates a first-order transition for α=0.5π. Here, we discuss previous papers indicating the existence of the paramagnetic phase at finite U. In Ref. <cit.>, the authors introduced a threshold ε for the magnetization m_AF and determined the magnetic transition point as the point where m_AF becomes smaller than ε. However, m_AF becomes exponentially small in the weak-coupling region, as understood from the above analysis. In Ref. <cit.>, ε is not sufficiently small to resolve an exponentially small m_AF, and a finite region of the paramagnetic phase was obtained. In Ref. <cit.>, the authors calculated the energy difference Δ E between the paramagnetic state and the antiferromagnetic state. They then introduced a scaling between Δ E and U-U_AF, where U_AF is the antiferromagnetic transition point, and tuned U_AF to collapse the data with different α onto a single curve in a large-U region. In this way, they obtained finite U_AF for α ≠ 0. However, this scaling analysis lacks a firm basis. In particular, if such a scaling holds for critical behavior, the data collapse should occur for U ≃ U_AF, not in a large-U region. We have also solved the gap equation for Q=(π,0) and obtained m_AF parallel to the y direction. By comparing energies for Q=(π,π) and Q=(π,0), we construct the phase diagram shown in Fig. <ref>.
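For illustration, the self-consistency loop Δ = U m_AF for the Q = (π,π) order can be sketched as follows. This is a schematic implementation, not the code used for the figures: the ordered moment is assumed along x, the k-mesh is modest, simple linear mixing is used, and the quoted parameters are examples only. On a coarse mesh the exponentially small weak-coupling gap may be cut off.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def solve_gap(U, alpha, L=32, t_tilde=1.0, iters=300, mix=0.5, tol=1e-8):
    """Self-consistent Delta = U*m_AF for Q=(pi,pi) order, moment along x (sketch)."""
    t = t_tilde * np.cos(alpha)
    lam = np.sqrt(2.0) * t_tilde * np.sin(alpha)
    ks = 2.0 * np.pi * np.arange(L) / L
    # half of the zone: the pairs (k, k+Q) then cover the full Brillouin zone exactly once
    kpts = [(kx, ky) for kx in ks for ky in ks if kx < np.pi]
    M = np.zeros((4, 4), dtype=complex)
    M[:2, 2:] = sx
    M[2:, :2] = sx                            # staggered S^x operator in the (c_k, c_{k+Q}) basis
    def h0(kx, ky):
        eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
        return eps * np.eye(2) - 2.0 * lam * np.sin(ky) * sx + 2.0 * lam * np.sin(kx) * sy
    Delta = 0.1
    for _ in range(iters):
        m = 0.0
        for kx, ky in kpts:
            H = np.zeros((4, 4), dtype=complex)
            H[:2, :2] = h0(kx, ky)
            H[2:, 2:] = h0(kx + np.pi, ky + np.pi)
            H[:2, 2:] = -Delta * sx
            H[2:, :2] = -Delta * sx
            _, v = np.linalg.eigh(H)
            occ = v[:, :2]                    # two lowest bands filled (gapped half filling)
            m += np.real(np.einsum("an,ab,bn->", occ.conj(), M, occ))
        m /= 2.0 * L * L                      # m_AF = (1/2L^2) sum_k <Psi_k^dag M Psi_k>
        new = (1.0 - mix) * Delta + mix * U * m
        if abs(new - Delta) < tol:
            return new, m
        Delta = new
    return Delta, m

print(solve_gap(U=8.0, alpha=0.2 * np.pi))    # converged (Delta, m_AF) for example parameters
```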
As noted, the antiferromagnetic state with Q=(π,π) occurs at infinitesimally small U except for α=0.5π. The Weyl semimetallic state remains for U/ t̃≲ 4.4 at α=0.5π. The antiferromagnetic state with Q=(π,0) appears at large U for α/π≳ 0.2. This phase boundary can be understood from the effective Hamiltonian in the strong coupling limit. The effective Hamiltonian is derived from the second-order perturbation theory concerning t and λ_R and is given by <cit.> H_eff = ∑_raμ[ J^μ_aS_r^μS_r+a^μ +D_a^μ(S_r×S_r+a)^μ], where a=x̂ or ŷ, μ=x, y, or z, S_r is the spin operator at site r, J_x̂^x =J_x̂^z =J_ŷ^y =J_ŷ^z = 4(t^2-λ_R^2)/U, J_x̂^y =J_ŷ^x = 4(t^2+λ_R^2)/U, D_x̂^y =-D_ŷ^x =8tλ_R/U, and the other components of D_a are zero. From the anisotropy in the interaction, we expect the ordered moments along the x or y direction for Q=(π,π) and along the y direction for Q=(π,0). Thus, the directions of the ordered moments obtained with the Hartree-Fock approximation are in accord with the effective Hamiltonian. For t ≪λ_R (α≃ 0), the magnetic order with Q=(π,π) is stable as in the ordinary Heisenberg model. For t ≫λ_R (α≃ 0.5π), the magnetic order with Q=(π,0) has lower energy than that with Q=(π,π) due to the anisotropic interaction. For t=λ_R (J_x̂^x =J_x̂^z =J_ŷ^y =J_ŷ^z=0), if we ignore the Dzyaloshinskii-Moriya interaction D_a, the model is reduced to the compass model <cit.>. It is known as a highly frustrated model. The condition t=λ_R corresponds to α=tan^-1(1/√(2))=0.1959π. Thus, the phase boundary α≃ 0.2π obtained with the Hartree-Fock approximation at a large-U region is corresponding to the highly frustrated region of the model. However, in a large-U region, we expect longer-period magnetic order due to the Dzyaloshinskii-Moriya interaction. It is out of the scope of the present study and has already been investigated by previous studies using the effective Hamiltonian <cit.>. Our important finding in this section is the absence of the paramagnetic phase except for α=0.5π in the weak-coupling region. However, the ordered moment and the energy gain of the antiferromagnetic state in the weak-coupling region are exponentially small. Thus, the effects of this magnetic order should be weak. In addition, this magnetic order would be easily destroyed by perturbations such as the next-nearest-neighbor hopping breaking the nesting condition <cit.>. Thus, the discussions in the previous sections without considering magnetic order are still meaningful. § SUMMARY We have investigated the Rashba-Hubbard model on a square lattice. The Rashba spin-orbit coupling generates the two-dimensional Weyl points, which are characterized by non-zero winding numbers. We have investigated lattices with edges and found zero-energy states on a lattice with zigzag edges. The zero-energy states are localized around the edges and have a helical character. The large density of states due to the flat zero-energy band may result in magnetic polarization at edges, similar to graphene <cit.>. We have also examined the effects of the Coulomb interaction U. The Coulomb interaction renormalizes the ratio of the coupling constant of the Rashba spin-orbit coupling λ_R to the hopping integral t effectively. As a result, the Weyl points can move to the Fermi level by the correlation effects. Thus, the Coulomb interaction can enhance the effects of the Weyl points and assist in observing phenomena originating from the Weyl points even if the bare Rashba spin-orbit coupling is not large. 
We have also investigated the magnetism of the model by the Hartree-Fock approximation. We have found that the antiferromagnetic state with the ordering vector Q=(π,π) occurs at infinitesimally small U due to the perfect nesting of the Fermi surface even for a finite λ_R. However, the density of states at the Fermi level becomes small for a large λ_R and as a result, the energy gain by the antiferromagnetic order is small in the weak-coupling region. Therefore, the effects of the magnetic order should be weak in such a region. In addition, this magnetic order would be unstable against perturbations, such as the inclusion of next-nearest-neighbor hopping <cit.>. Thus, we conclude that the discussions on the Weyl semimetal without assuming magnetism are still meaningful. This work was supported by JSPS KAKENHI Grant Number JP23K03330. 53 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Bychkov and Rashba(1984)]Bychkov1984 author author Y. A. Bychkov and author E. I. Rashba, title title Properties of a 2D electron gas with lifted spectral degeneracy, @noop journal journal JETP Lett. volume 39, pages 78 (year 1984)NoStop [Datta and Das(1990)]Datta1990 author author S. Datta and author B. Das, title title Electronic analog of the electro-optic modulator, https://doi.org/10.1063/1.102730 journal journal Appl. Phys. Lett. volume 56, pages 665 (year 1990)NoStop [Schultz et al.(1996)Schultz, Heinrichs, Merkt, Colin, Skauli, and Løvold]Schultz1996 author author M. Schultz, author F. Heinrichs, author U. Merkt, author T. Colin, author T. Skauli, and author S. Løvold, title title Rashba spin splitting in a gated HgTe quantum well, https://doi.org/10.1088/0268-1242/11/8/009 journal journal Semicond. Sci. Technol. volume 11, pages 1168 (year 1996)NoStop [Nitta et al.(1997)Nitta, Akazaki, Takayanagi, and Enoki]Nitta1997 author author J. Nitta, author T. Akazaki, author H. Takayanagi, and author T. Enoki, title title Gate Control of Spin-Orbit Interaction in an Inverted In0.53Ga0.47As/In0.52Al0.48As Heterostructure, https://doi.org/10.1103/PhysRevLett.78.1335 journal journal Phys. Rev. Lett. volume 78, pages 1335 (year 1997)NoStop [Engels et al.(1997)Engels, Lange, Schäpers, and Lüth]Engels1997 author author G. Engels, author J. Lange, author T. Schäpers, and author H. Lüth, title title Experimental and theoretical approach to spin splitting in modulation-doped InxGa1-xAs/InP quantum wells for B→0, https://doi.org/10.1103/PhysRevB.55.R1958 journal journal Phys. Rev. B volume 55, pages R1958 (year 1997)NoStop [Sinova et al.(2004)Sinova, Culcer, Niu, Sinitsyn, Jungwirth, and MacDonald]Sinova2004 author author J. Sinova, author D. Culcer, author Q. Niu, author N. A. Sinitsyn, author T. Jungwirth, and author A. H. MacDonald, title title Universal Intrinsic Spin Hall Effect, https://doi.org/10.1103/PhysRevLett.92.126603 journal journal Phys. Rev. Lett. volume 92, pages 126603 (year 2004)NoStop [Inoue et al.(2004)Inoue, Bauer, and Molenkamp]Inoue2004 author author J.-i. Inoue, author G. E. W. Bauer, and author L. W. Molenkamp, title title Suppression of the persistent spin Hall current by defect scattering, https://doi.org/10.1103/PhysRevB.70.041303 journal journal Phys. Rev. B volume 70, pages 041303(R) (year 2004)NoStop [Chalaev and Loss(2005)]Chalaev2005 author author O. Chalaev and author D. 
Loss, title title Spin-Hall conductivity due to Rashba spin-orbit interaction in disordered systems, https://doi.org/10.1103/PhysRevB.71.245318 journal journal Phys. Rev. B volume 71, pages 245318 (year 2005)NoStop [Dimitrova(2005)]Dimitrova2005 author author O. V. Dimitrova, title title Spin-Hall conductivity in a two-dimensional Rashba electron gas, https://doi.org/10.1103/PhysRevB.71.245327 journal journal Phys. Rev. B volume 71, pages 245327 (year 2005)NoStop [Sugimoto et al.(2006)Sugimoto, Onoda, Murakami, and Nagaosa]Sugimoto2006 author author N. Sugimoto, author S. Onoda, author S. Murakami, and author N. Nagaosa, title title Spin Hall effect of a conserved current: Conditions for a nonzero spin Hall current, https://doi.org/10.1103/PhysRevB.73.113305 journal journal Phys. Rev. B volume 73, pages 113305 (year 2006)NoStop [Dugaev et al.(2010)Dugaev, Inglot, Sherman, and Barnaś]Dugaev2010 author author V. K. Dugaev, author M. Inglot, author E. Y. Sherman, and author J. Barnaś, title title Robust impurity-scattering spin Hall effect in a two-dimensional electron gas, https://doi.org/10.1103/PhysRevB.82.121310 journal journal Phys. Rev. B volume 82, pages 121310(R) (year 2010)NoStop [Shitade and Tatara(2022)]Shitade2022 author author A. Shitade and author G. Tatara, title title Spin accumulation without spin current, https://doi.org/10.1103/PhysRevB.105.L201202 journal journal Phys. Rev. B volume 105, pages L201202 (year 2022)NoStop [Gor'kov and Rashba(2001)]Gorkov2001 author author L. P. Gor'kov and author E. I. Rashba, title title Superconducting 2D System with Lifted Spin Degeneracy: Mixed Singlet-Triplet State, https://doi.org/10.1103/PhysRevLett.87.037004 journal journal Phys. Rev. Lett. volume 87, pages 037004 (year 2001)NoStop [Yanase and Sigrist(2008)]Yanase2008 author author Y. Yanase and author M. Sigrist, title title Superconductivity and Magnetism in Non-centrosymmetric System: Application to CePt3Si, https://doi.org/10.1143/JPSJ.77.124711 journal journal J. Phys. Soc. Jpn. volume 77, pages 124711 (year 2008)NoStop [Beyer et al.(2023)Beyer, Hauck, Klebl, Schwemmer, Kennes, Thomale, Honerkamp, and Rachel]Beyer2023 author author J. Beyer, author J. B. Hauck, author L. Klebl, author T. Schwemmer, author D. M. Kennes, author R. Thomale, author C. Honerkamp, and author S. Rachel, title title Rashba spin-orbit coupling in the square-lattice Hubbard model: A truncated-unity functional renormalization group study, https://doi.org/10.1103/PhysRevB.107.125115 journal journal Phys. Rev. B volume 107, pages 125115 (year 2023)NoStop [Cocks et al.(2012)Cocks, Orth, Rachel, Buchhold, Le Hur, and Hofstetter]Cocks2012 author author D. Cocks, author P. P. Orth, author S. Rachel, author M. Buchhold, author K. Le Hur, and author W. Hofstetter, title title Time-Reversal-Invariant Hofstadter-Hubbard Model with Ultracold Fermions, https://doi.org/10.1103/PhysRevLett.109.205303 journal journal Phys. Rev. Lett. volume 109, pages 205303 (year 2012)NoStop [Radić et al.(2012)Radić, Di Ciolo, Sun, and Galitski]Radic2012 author author J. Radić, author A. Di Ciolo, author K. Sun, and author V. Galitski, title title Exotic Quantum Spin Models in Spin-Orbit-Coupled Mott Insulators, https://doi.org/10.1103/PhysRevLett.109.085303 journal journal Phys. Rev. Lett. volume 109, pages 085303 (year 2012)NoStop [Gong et al.(2015)Gong, Qian, Yan, Scarola, and Zhang]Gong2015 author author M. Gong, author Y. Qian, author M. Yan, author V. W. Scarola, and author C. 
Zhang, title title Dzyaloshinskii-Moriya Interaction and Spiral Order in Spin-orbit Coupled Optical Lattices, https://doi.org/10.1038/srep10050 journal journal Sci Rep volume 5, pages 10050 (year 2015)NoStop [Minář and Grémaud(2013)]Minar2013 author author J. Minář and author B. Grémaud, title title From antiferromagnetic ordering to magnetic textures in the two-dimensional Fermi-Hubbard model with synthetic spin-orbit interactions, https://doi.org/10.1103/PhysRevB.88.235130 journal journal Phys. Rev. B volume 88, pages 235130 (year 2013)NoStop [Kennedy et al.(2022)Kennedy, dos Anjos Sousa-Júnior, Costa, and dos Santos]Kennedy2022 author author W. Kennedy, author S. dos Anjos Sousa-Júnior, author N. C. Costa, and author R. R. dos Santos, title title Magnetism and metal-insulator transitions in the Rashba-Hubbard model, https://doi.org/10.1103/PhysRevB.106.165121 journal journal Phys. Rev. B volume 106, pages 165121 (year 2022)NoStop [Kawano and Hotta(2023)]Kawano2023 author author M. Kawano and author C. Hotta, title title Phase diagram of the square-lattice Hubbard model with Rashba-type antisymmetric spin-orbit coupling, https://doi.org/10.1103/PhysRevB.107.045123 journal journal Phys. Rev. B volume 107, pages 045123 (year 2023)NoStop [Zhang et al.(2015)Zhang, Wu, Li, Wen, Sun, and Ji]Zhang2015 author author X. Zhang, author W. Wu, author G. Li, author L. Wen, author Q. Sun, and author A.-C. Ji, title title Phase diagram of interacting Fermi gas in spin–orbit coupled square lattices, https://doi.org/10.1088/1367-2630/17/7/073036 journal journal New J. Phys. volume 17, pages 073036 (year 2015)NoStop [Brosco and Capone(2020)]Brosco2020 author author V. Brosco and author M. Capone, title title Rashba-metal to Mott-insulator transition, https://doi.org/10.1103/PhysRevB.101.235149 journal journal Phys. Rev. B volume 101, pages 235149 (year 2020)NoStop [Mireles and Kirczenow(2001)]Mireles2001 author author F. Mireles and author G. Kirczenow, title title Ballistic spin-polarized transport and Rashba spin precession in semiconductor nanowires, https://doi.org/10.1103/PhysRevB.64.024426 journal journal Phys. Rev. B volume 64, pages 024426 (year 2001)NoStop [Hou(2013)]Hou2013 author author J.-M. Hou, title title Hidden-Symmetry-Protected Topological Semimetals on a Square Lattice, https://doi.org/10.1103/PhysRevLett.111.130403 journal journal Phys. Rev. Lett. volume 111, pages 130403 (year 2013)NoStop [Sun et al.(2012)Sun, Liu, Hemmerich, and Das Sarma]Sun2012 author author K. Sun, author W. V. Liu, author A. Hemmerich, and author S. Das Sarma, title title Topological semimetal in a fermionic optical lattice, https://doi.org/10.1038/nphys2134 journal journal Nat. Phys. volume 8, pages 67 (year 2012)NoStop [Berry(1984)]Berry1984 author author M. V. Berry, title title Quantal phase factors accompanying adiabatic changes, https://doi.org/10.1098/rspa.1984.0023 journal journal Proc. R. Soc. London, Ser. A volume 392, pages 45 (year 1984)NoStop [Fujita et al.(1996)Fujita, Wakabayashi, Nakada, and Kusakabe]Fujita1996 author author M. Fujita, author K. Wakabayashi, author K. Nakada, and author K. Kusakabe, title title Peculiar Localized State at Zigzag Graphite Edge, https://doi.org/10.1143/JPSJ.65.1920 journal journal J. Phys. Soc. Jpn. volume 65, pages 1920 (year 1996)NoStop [Ryu and Hatsugai(2002)]Ryu2002 author author S. Ryu and author Y. 
Hatsugai, title title Topological Origin of Zero-Energy Edge States in Particle-Hole Symmetric Systems, https://doi.org/10.1103/PhysRevLett.89.077002 journal journal Phys. Rev. Lett. volume 89, pages 077002 (year 2002)NoStop [Hatsugai(2009)]Hatsugai2009 author author Y. Hatsugai, title title Bulk-edge correspondence in graphene with/without magnetic field: Chiral symmetry, Dirac fermions and edge states, https://doi.org/10.1016/j.ssc.2009.02.055 journal journal Solid State Commun. volume 149, pages 1061 (year 2009)NoStop [Schnyder et al.(2008)Schnyder, Ryu, Furusaki, and Ludwig]Schnyder2008 author author A. P. Schnyder, author S. Ryu, author A. Furusaki, and author A. W. W. Ludwig, title title Classification of topological insulators and superconductors in three spatial dimensions, https://doi.org/10.1103/PhysRevB.78.195125 journal journal Phys. Rev. B volume 78, pages 195125 (year 2008)NoStop [Kitaev(2009)]Kitaev2009 author author A. Kitaev, title title Periodic table for topological insulators and superconductors, https://doi.org/10.1063/1.3149495 journal journal AIP Conf. Proc. volume 1134, pages 22 (year 2009)NoStop [Ryu et al.(2010)Ryu, Schnyder, Furusaki, and Ludwig]Ryu2010 author author S. Ryu, author A. P. Schnyder, author A. Furusaki, and author A. W. W. Ludwig, title title Topological insulators and superconductors: Tenfold way and dimensional hierarchy, https://doi.org/10.1088/1367-2630/12/6/065010 journal journal New J. Phys. volume 12, pages 065010 (year 2010)NoStop [Yokoyama and Shiba(1987)]Yokoyama1987 author author H. Yokoyama and author H. Shiba, title title Variational Monte-Carlo Studies of Hubbard Model. I, https://doi.org/10.1143/JPSJ.56.1490 journal journal J. Phys. Soc. Jpn. volume 56, pages 1490 (year 1987)NoStop [Kaplan et al.(1982)Kaplan, Horsch, and Fulde]Kaplan1982 author author T. A. Kaplan, author P. Horsch, and author P. Fulde, title title Close Relation between Localized-Electron Magnetism and the Paramagnetic Wave Function of Completely Itinerant Electrons, https://doi.org/10.1103/PhysRevLett.49.889 journal journal Phys. Rev. Lett. volume 49, pages 889 (year 1982)NoStop [Yokoyama and Shiba(1990)]Yokoyama1990 author author H. Yokoyama and author H. Shiba, title title Variational Monte-Carlo Studies of Hubbard Model. III. Intersite Correlation Effects, https://doi.org/10.1143/JPSJ.59.3669 journal journal J. Phys. Soc. Jpn. volume 59, pages 3669 (year 1990)NoStop [Yokoyama(2002)]Yokoyama2002 author author H. Yokoyama, title title Variational Monte Carlo Studies of Attractive Hubbard Model. I, https://doi.org/10.1143/PTP.108.59","inLanguage":"en","copyrightHolder":"The journal journal Prog. Theor. Phys. volume 108, pages 59 (year 2002)NoStop [Capello et al.(2006)Capello, Becca, Yunoki, and Sorella]Capello2006 author author M. Capello, author F. Becca, author S. Yunoki, and author S. Sorella, title title Unconventional metal-insulator transition in two dimensions, https://doi.org/10.1103/PhysRevB.73.245116 journal journal Phys. Rev. B volume 73, pages 245116 (year 2006)NoStop [Watanabe et al.(2006)Watanabe, Yokoyama, Tanaka, and Inoue]Watanabe2006 author author T. Watanabe, author H. Yokoyama, author Y. Tanaka, and author J.-i. Inoue, title title Superconductivity and a Mott Transition in a Hubbard Model on an Anisotropic Triangular Lattice, https://doi.org/10.1143/JPSJ.75.074707 journal journal J. Phys. Soc. Jpn. volume 75, pages 074707 (year 2006)NoStop [Yokoyama et al.(2006)Yokoyama, Ogata, and Tanaka]Yokoyama2006 author author H. Yokoyama, author M. 
Ogata, and author Y. Tanaka, title title Mott Transitions and d-Wave Superconductivity in Half-Filled-Band Hubbard Model on Square Lattice with Geometric Frustration, https://doi.org/10.1143/JPSJ.75.114706 journal journal J. Phys. Soc. Jpn. volume 75, pages 114706 (year 2006)NoStop [Onari et al.(2007)Onari, Yokoyama, and Tanaka]Onari2007 author author S. Onari, author H. Yokoyama, and author Y. Tanaka, title title Phase diagram of half-filled square lattice for frustrated Hubbard model, https://doi.org/10.1016/j.physc.2007.05.017 journal journal Physica C volume 463–465, pages 120 (year 2007)NoStop [Koga et al.(2006)Koga, Kawakami, Yokoyama, and Kobayashi]Koga2006 author author A. Koga, author N. Kawakami, author H. Yokoyama, and author K. Kobayashi, title title Variational Monte Carlo Study of Two Dimensional Multi-Orbital Hubbard Model, https://doi.org/10.1063/1.2355252 journal journal AIP Conf. Proc. volume 850, pages 1458 (year 2006)NoStop [Takenaka and Kawakami(2012)]Takenaka2012 author author Y. Takenaka and author N. Kawakami, title title Variational Monte Carlo Study of Two-Dimensional Multi-Orbital Hubbard Model on Square Lattice, https://doi.org/10.1088/1742-6596/400/3/032099 journal journal J. Phys.: Conf. Ser. volume 400, pages 032099 (year 2012)NoStop [Kubo(2021)]Kubo2021 author author K. Kubo, title title Destabilization of ferromagnetism by frustration and realization of a nonmagnetic Mott transition in the quarter-filled two-orbital Hubbard model, https://doi.org/10.1103/PhysRevB.103.085118 journal journal Phys. Rev. B volume 103, pages 085118 (year 2021)NoStop [Kubo(2022)]Kubo2022 author author K. Kubo, title title Enhanced Spin–Orbit Coupling in a Correlated Metal, https://doi.org/10.7566/JPSJ.91.124707 journal journal J. Phys. Soc. Jpn. volume 91, pages 124707 (year 2022)NoStop [Kubo(2023)]Kubo2023 author author K. Kubo, title title Enhancement of an Effective Spin-Orbit Coupling in a Correlated Metal, https://doi.org/10.7566/JPSCP.38.011161 journal journal JPS Conf. Proc. volume 38, pages 011161 (year 2023)NoStop [Sato and Yokoyama(2016)]Sato2016 author author R. Sato and author H. Yokoyama, title title Band-Renormalization Effects and Predominant Antiferromagnetic Order in Two-Dimensional Hubbard Model, https://doi.org/10.7566/JPSJ.85.074701 journal journal J. Phys. Soc. Jpn. volume 85, pages 074701 (year 2016)NoStop [Richter et al.(2021)Richter, Graspeuntner, Schäfer, Wentzell, and Aichhorn]Richter2021 author author M. Richter, author J. Graspeuntner, author T. Schäfer, author N. Wentzell, and author M. Aichhorn, title title Comparing the effective enhancement of local and nonlocal spin-orbit couplings on honeycomb lattices due to strong electronic correlations, https://doi.org/10.1103/PhysRevB.104.195107 journal journal Phys. Rev. B volume 104, pages 195107 (year 2021)NoStop [Liu et al.(2023)Liu, You, Gu, Maekawa, and Su]Liu2023 author author Z. Liu, author J.-Y. You, author B. Gu, author S. Maekawa, and author G. Su, title title Enhanced spin-orbit coupling and orbital moment in ferromagnets by electron correlations, https://doi.org/10.1103/PhysRevB.107.104407 journal journal Phys. Rev. B volume 107, pages 104407 (year 2023)NoStop [Jiang(2023)]Jiang2023 author author K. Jiang, title title Correlation Renormalized and Induced Spin-Orbit Coupling, https://doi.org/10.1088/0256-307X/40/1/017102 journal journal Chin. Phys. Lett. volume 40, pages 017102 (year 2023)NoStop [Hirsch(1985)]Hirsch1985 author author J. E. 
Hirsch, title title Two-dimensional Hubbard model: Numerical simulation study, @noop journal journal Phys. Rev. B volume 31, pages 4403 (year 1985)NoStop [Fazekas(1999)]Fazekas1999 author author P. Fazekas, https://doi.org/10.1142/2945 title Lecture Notes on Electron Correlation and Magnetism, series Series in Modern Condensed Matter Physics, Vol. volume 5 (publisher World Scientific, year 1999)NoStop [Kugel and Khomskii(1982)]Kugel1982 author author K. I. Kugel and author D. I. Khomskii, title title The Jahn-Teller effect and magnetism: Transition metal compounds, https://doi.org/10.1070/PU1982v025n04ABEH004537 journal journal Sov. Phys. Usp. volume 25, pages 231 (year 1982)NoStop
http://arxiv.org/abs/2307.05008v1
20230711041131
Optical Memory for Arbitrary Perfect Poincaré States in an Atomic Ensemble
[ "Lei Zeng", "Ying-Hao Ye", "Ming-Xin Dong", "Wei-Hang Zhang", "En-Ze Li", "Dong-Sheng Ding", "Bao-Sen Shi" ]
quant-ph
[ "quant-ph", "physics.optics" ]
Optical Memory for Arbitrary Perfect Poincaré States in an Atomic Ensemble
Lei Zeng, Ying-Hao Ye, Ming-Xin Dong, Wei-Hang Zhang, En-Ze Li, Dong-Sheng Ding, and Bao-Sen Shi
August 12, 2023
=============================================

As robust information carriers, photons have many degrees of freedom (DOFs), such as polarization <cit.>, spatial mode <cit.>, temporal mode <cit.> and path (k-vector) <cit.>, which provide a dramatically increased encoding potential. Optical memories encoded with different photonic DOFs have attracted a wide range of attention in various information processing <cit.>. Poincaré beams, which simultaneously couple the spin angular momentum (SAM) and orbital angular momentum (OAM) of photons, have an azimuthally dependent polarization distribution across their cross section <cit.>. They have been widely applied in many fields <cit.>, such as optical trapping <cit.>, optical communications <cit.>, laser material processing <cit.> and optical encryption <cit.>. Because SAM and OAM enter Poincaré states asymmetrically, the storage of Poincaré beams demonstrates the ability of a memory to store two photonic DOFs simultaneously, which is important for a practical memory. High-fidelity memories for OAM superposition states and conventional Poincaré states have been demonstrated in atomic systems recently <cit.>; however, those works only exploited symmetric OAM superposition states and ignored the asymmetric contribution from different OAM states. Since the radius of a conventional Laguerre-Gaussian (LG) beam is proportional to √(|l|+1) <cit.>, LG modes with different topological charge numbers |l| interact with different regions of the atomic ensemble and thus experience different optical depths (ODs) of the ensemble. Because of the strong correlation between the storage efficiency and the effective atomic OD <cit.>, a memory for conventional Poincaré states with asymmetric OAM values suffers from this imbalanced efficiency. For instance, a photonic state |Ψ⟩_in=1/√(2)(|L_1⟩+|L_2⟩) evolves to |Ψ⟩_re=1/√(η_1^2+η_2^2)(η_1|L_1⟩+η_2|L_2⟩) after being stored and retrieved, and the corresponding fidelity is then reduced to 1/2+κ/(1+κ^2), where η_1 and η_2 are the storage efficiencies for |L_1⟩ and |L_2⟩ and κ=η_2/η_1. Therefore, these previous works are limited to opposite topological charge numbers, which restricts the application scenarios of Poincaré beams. The perfect optical vortex (POV), generated from the Fourier transform of a Bessel-Gaussian (BG) mode <cit.>, helps us circumvent this obstacle because of its topological-charge-independent ring diameter <cit.>. Perfect Poincaré beams (PPBs) have been experimentally generated by using various optical components such as axicons, spatial light modulators (SLMs), q-plates, lenses, and micro-patterned liquid-crystal devices <cit.>. After replacing the conventional LG modes with corresponding POV modes, one can achieve nearly identical efficiencies for modes with different OAM quanta and thus realize faithful storage of PPBs with asymmetric OAM quanta. Here, we demonstrate a coherent optical memory for perfect Poincaré states based on the electromagnetically induced transparency (EIT) protocol <cit.> by constructing a spatial-mode-independent light-matter interface, and we realize the storage of 121 different perfect Poincaré states with an arbitrary choice of OAM quanta. Our work shows dramatically increased encoding flexibility and is promising for classical information processing and quantum communication.
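The fidelity penalty quoted above for imbalanced efficiencies follows directly from the two expressions for |Ψ⟩_in and |Ψ⟩_re; the short Python sketch below (illustrative only) confirms that the squared overlap equals 1/2 + κ/(1+κ²).

```python
import numpy as np

def retrieval_fidelity(eta1, eta2):
    """|<Psi_in|Psi_re>|^2 for |Psi_in> = (|L1>+|L2>)/sqrt(2) and
    |Psi_re> = (eta1|L1> + eta2|L2>)/sqrt(eta1^2 + eta2^2)."""
    overlap = (eta1 + eta2) / np.sqrt(2.0 * (eta1**2 + eta2**2))
    return overlap**2

for kappa in [1.0, 0.8, 0.5, 0.2]:
    f = retrieval_fidelity(1.0, kappa)
    print("kappa = %.1f  F = %.4f  formula = %.4f"
          % (kappa, f, 0.5 + kappa / (1.0 + kappa**2)))   # the two values coincide
```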
The setup can be divided into three parts: state preparation, optical storage, and projective measurement (see Fig. <ref>). The first part consists of two beam displacers (BDs), a lens f_1, and a spatial light modulator (SLM, HOLOEYE: LETO). The lens f_1 acting as a Fourier transformer to transform the BG states imprinted by the SLM to corresponding perfect Poincaré states. BD3 and a half-wave plate (HWP) convert PPBs into two beams with the same polarization. We ensure balanced storage efficiency for individual beams by adjusting the mirror. The lenses f_2 and f_3 constitute a telescope system that image the perfect Poincaré states to the center of the ensemble. The lens f_5, together with the lens f_6, also form a 4f imaging system to restore the probe beams to their original size for subsequent projective measurements. The storage is carried out in an ensemble of cold ^85Rb trapped in a two-dimensional magneto-optical trap (MOT). The signal and control fields are both circularly polarized (σ^+), and the expanded coupling field has a radius of 4 mm to cover the probe field, and it intersects with the probe beams at the center of the ensemble with an angle of 5^∘. The measurement part is composed of a lens f_6, a half-wave plate (HWP), a quarter-wave plate (QWP), a polarizing beam splitter (PBS), and an SLM that is placed at the back focal plane of the lens f_6. The probe light is finally coupled into a single-mode fiber (SMF) for detection. The POV can be generated from the Fourier transform of a higher-order Bessel beam <cit.>, for the situation of large r_r and small ω_0, the complex field amplitude is expressed by E^l_POV(r,φ)=i^l-12f/kω_0^2exp(ilφ)exp(-(r-r_r)^2/ω_0^2) where ω_0 is the waist of the corresponding Gaussian mode at the focal plane of the Fourier and k=2π/λ is the wave number. Eq. (<ref>) represents the field amplitude of a POV with radius equal to r_r(=k_rf/k), l is the topological charge number and k_r is the radial wave vector. The radius of the POV is independent of l, this property is crucial to the realization of mode-independent light-matter interaction and leads to the same storage efficiency against different OAM quanta. The achieved efficiency remains nearly unchanged in a wide range of l and decreases slightly when | l | exceeds 10. Therefore the range of l is chosen from -5 to 5 in our experiments since the storage efficiency (14.3%± 0.4%) barely changes within this range. We characterize the retrieved states by performing projective measurements. The polarization base is (|H⟩+|V⟩)/√(2), and the OAM base is set to be |POV_i,L_1⟩+e^iα|POV_i,L_2⟩, where α is the relative phase between |POV_i,L_1⟩ and |POV_i,L_2⟩. The state of signal light is tailored to be perfect Poincaré state |Ψ_i⟩=|H⟩|POV_i,L_1⟩+e^iθ|V⟩|POV_i,L_2⟩ initially, after being projected onto the above basis, the intensity of the retrieved signal that collected by the SMF is proportional to 1+cos(θ-α). By tilting the BD3, the values of θ are set as 0,π/2,π and 3π/2, then we can achieve the four target perfect Poincaré states: |Ψ_1⟩=|H⟩|POV_1,L_1=1⟩+|V⟩|POV_1,L_2=3⟩, |Ψ_2⟩=|H⟩|POV_2,L_1=-3⟩+i|V⟩|POV_2,L_2=4⟩, |Ψ_3⟩=|H⟩|POV_3,L_1=0⟩-|V⟩|POV_3,L_2=-5⟩, |Ψ_4⟩=|H⟩|POV_4,L_1=2⟩-i|V⟩|POV_4,L_2=-2⟩. Figure <ref> shows the measured interference curve as a function of α after storage. The average interference visibility values of |Ψ_1⟩, |Ψ_2⟩, |Ψ_3⟩ and |Ψ_4⟩ are 0.838±0.050, 0.844±0.034, 0.900±0.033 and 0.881±0.034, respectively. 
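The l-independence of the POV ring expressed by the field amplitude above can also be checked numerically; the following sketch evaluates |E^l_POV(r)|² for several l and verifies that the radial profile does not change, since l enters only through the unit-modulus phase factor i^(l-1)e^(ilφ). The ring radius, ring width, focal length, and wavelength used here are placeholder values for illustration, not the experimental parameters.

```python
import numpy as np

def pov_intensity(r, l, r_r=50e-6, w0=10e-6, f=0.3, wavelength=795e-9):
    k = 2.0 * np.pi / wavelength
    amp = (2.0 * f / (k * w0**2)) * np.exp(-((r - r_r) ** 2) / w0**2)
    return np.abs(amp) ** 2        # |i^(l-1) exp(i*l*phi)| = 1, so the intensity is l-independent

r = np.linspace(0.0, 120e-6, 500)
profiles = {l: pov_intensity(r, l) for l in (-5, 0, 3, 5)}
print(all(np.allclose(profiles[l], profiles[-5]) for l in profiles))   # True
```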
A bias magnetic field of 1 G guides the magnetically-induced evolution of OAM modes of two separated components stored in atoms <cit.>. The experimental results show that the polarization and phase are preserved during storage. The storage fidelity, which is defined as [Tr(√(√(ρ)ρ_0√(ρ)))]^2, is an important criterion to characterize the similarity between the retrieved signal (ρ) and the original signal (ρ_0). The chosen polarization bases for reconstructing the density matrix are |H⟩,|V⟩,|H⟩+i|V⟩ and |H⟩+|V⟩, the chosen bases in OAM degree of freedom are |L_1⟩,|L_2⟩,|L_1⟩+i|L_2⟩ and |L_1⟩+|L_2⟩. Figure <ref> shows the results of the reconstructed density matrix for these four optical states. The measured results of the fidelity of these four states are 81.1%±4.7%, 84.4%±4.5%, 82.5%±4.3%, and 86.7%±3.4%, respectively. Then we evaluate the memory of perfect Poincaré states with arbitrary L_1, L_2∈ [-5,5] by estimating fidelities of 121 PPBs. For the OAM DOF, we set the projected polarization basis to be |H⟩+|V⟩, and the phase diagram corresponding to the highest (lowest) theoretical value of the projective measurement is loaded on SLM2. Similarly, we get the visibility for polarization DOF by selecting the OAM base |POV_L_1⟩+|POV_L_2⟩ and choosing the highest (lowest) polarization base. Then the average interference visibility V can be achieved (for the states with L_1=L_2, we take the visibility for polarization DOF as its average interference visibility), so the fidelity can be quickly estimated as F=(1+3V)/4 <cit.>. The achieved storage fidelities of these states are shown in Fig. <ref>, which indicate the capability of our memory to store Poincaré light composed of two arbitrary different OAM quanta. We report a work on optical memory of perfect Poincaré states based on atomic ensemble, the results show that the system holds the ability of OAM quanta independent storage efficiency. Assuming the cold atom system can store d orthogonal different OAM modes, and there are 2d-1 orthogonal states with a symmetric value of topological charge numbers suitable for encoding in the case of the conventional Poincaré states, while for the perfect Poincaré states, the number of states appropriate for encoding increases to d^2. This significantly increases the flexibility of coding choices. It is possible to implement optical storage for perfect Poincaré states to avoid using the complex dual-rail setup by employing alkali atoms such as ^87Rb with simpler Zeeman sublevels structure. With the help of higher requirements for frequency shifting and magnetic field manipulation, one may provide the potential in simplifying the system through lifting the Zeeman-degeneration of hyper-fine ground states<cit.>. The storage time of the system is 1.5 μs, which can be extended with the help of the techniques of magnetically-insensitive states preparation. <cit.>. In conclusion, we realize an optical memory for perfect Poincaré states in an atomic ensemble. By using the transverse-size-invariance of perfect Poincaré states, we construct a spatial-mode-independent light-matter interface, and realize the storage of 121 perfect Poincaré states with arbitrary OAM. Our work largely extends the encoding flexibility of information, which is promising for quantum communication and information processing. Funding This work was supported by National Key R&D Program of China (Grants No. 2017YFA0304800), Anhui Initiative in Quantum Information Technologies (Grant No. 
AHY020200), the National Natural Science Foundation of China (Grants No. U20A20218, No. 61722510, No. 11934013, No. 11604322, No. 12204461), and the Innovation Fund from CAS, the Youth Innovation Promotion Association of CAS under Grant No. 2018490, the Anhui Provincial Key Research and Development Project under Grant No. 2022b13020002, and the Anhui Provincial Candidates for academic and technical leaders Foundation under Grant No. 2019H208. Acknowledgment We thank Prof. Wei Zhang for helpful discussions. Disclosures The authors declare no competing interests. Data Availability Statement Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. 10 ding2015raman D.-S. Ding, W. Zhang, Z.-Y. Zhou, S. Shi, B.-S. Shi, and G.-C. Guo, Nature Photonics 9, 332 (2015). xu2013long Z. Xu, Y. Wu, L. Tian, L. Chen, Z. Zhang, Z. Yan, S. Li, H. Wang, C. Xie, and K. Peng, Physical Review Letters 111, 240503 (2013). ding2013single D.-S. Ding, Z.-Y. Zhou, B.-S. Shi, and G.-C. Guo, Nature Communications 4, 1 (2013). nicolas2014quantum A. Nicolas, L. Veissier, L. Giner, E. Giacobino, D. Maxein, and J. Laurat, Nature Photonics 8, 234 (2014). heller2020cold L. Heller, P. Farrera, G. Heinze, and H. de Riedmatten, Physical Review Letters 124, 210504 (2020). parniak2017wavevector M. Parniak, M. Dąbrowski, M. Mazelanik, A. Leszczyński, M. Lipka, and W. Wasilewski, Nature Communications 8, 1 (2017). pu2017experimental Y. Pu, N. Jiang, W. Chang, H. Yang, C. Li, and L. Duan, Nature Communications 8, 1 (2017). lvovsky2009optical A. I. Lvovsky, B. C. Sanders, and W. Tittel, Nature Photonics 3, 706 (2009). berry2004 M. Berry, Journal of Optics A: Pure and Applied Optics 6, 475 (2004). dennis2002 M. Dennis, Optics Communications 213, 201 (2002). freund2002 I. Freund, Optics Communications 201, 251 (2002). arora2019 G. Arora, P. Senthilkumaran et al., Optics Letters 44, 5638 (2019). rosales2018 C. Rosales-Guzmán, B. Ndagano, and A. Forbes, Journal of Optics 20, 123001 (2018). kawauchi2007 H. Kawauchi, K. Yonezawa, Y. Kozawa, and S. Sato, Optics Letters 32, 1839 (2007). milione2015 G. Milione, M. P. Lavery, H. Huang, Y. Ren, G. Xie, T. A. Nguyen, E. Karimi, L. Marrucci, D. A. Nolan, R. R. Alfano et al., Optics Letters 40, 1980 (2015). milione2015using G. Milione, T. A. Nguyen, J. Leach, D. A. Nolan, and R. R. Alfano, Optics Letters 40, 4887 (2015). hamazaki2010 J. Hamazaki, R. Morita, K. Chujo, Y. Kobayashi, S. Tanda, and T. Omatsu, Optics Express 18, 2144 (2010). fang2020 X. Fang, H. Ren, and M. Gu, Nature Photonics 14, 102 (2020). parigi2015 V. Parigi, V. D’Ambrosio, C. Arnold, L. Marrucci, F. Sciarrino, and J. Laurat, Nature Communications 6, 1 (2015). ye2019 Y.-H. Ye, M.-X. Dong, Y.-C. Yu, D.-S. Ding, and B.-S. Shi, Optics Letters 44, 1528 (2019). Ye2022 Y.-H. Ye, L. Zeng, M.-X. Dong, W.-H. Zhang, E.-Z. Li, D.-C. Li, G.-C. Guo, D.-S. Ding, and B.-S. Shi, Phys. Rev. Lett. 129, 193601 (2022). ding2014toward D.-S. Ding, W. Zhang, Z.-Y. Zhou, S. Shi, J.-s. Pan, G.-Y. Xiang, X.-S. Wang, Y.-K. Jiang, B.-S. Shi, and G.-C. Guo, Physical Review A 90, 042301 (2014). ding2015quantum D.-S. Ding, W. Zhang, Z.-Y. Zhou, S. Shi, G.-Y. Xiang, X.-S. Wang, Y.-K. Jiang, B.-S. Shi, and G.-C. Guo, Physical Review Letters 114, 050502 (2015). hsiao2018 Y.-F. Hsiao, P.-J. Tsai, H.-S. Chen, S.-X. Lin, C.-C. Hung, C.-H. Lee, Y.-H. Chen, Y.-F. Chen, A. Y. Ite, and Y.-C. Chen, Physical Review Letters 120, 183602 (2018). vaity2015 P. Vaity and L. 
Rusch, Optics Letters 40, 597 (2015). wang2017 T. Wang, S. Fu, F. He, and C. Gao, Applied Optics 56, 7567 (2017). li2019 D. Li, S. Feng, S. Nie, C. Chang, J. Ma, and C. Yuan, Journal of Applied Physics 125, 073105 (2019). ostrovsky2013 A. S. Ostrovsky, C. Rickenstorff-Parrao, and V. Arrizón, Optics Letters 38, 534 (2013). xu2018 R. Xu, P. Chen, J. Tang, W. Duan, S.-J. Ge, L.-L. Ma, R.-X. Wu, W. Hu, and Y.-Q. Lu, Physical Review Applied 10, 034061 (2018). chaneliere2005 T. Chanelière, D. Matsukevich, S. Jenkins, S.-Y. Lan, T. Kennedy, and A. Kuzmich, Nature 438, 833 (2005). eisaman2005 M. D. Eisaman, A. André, F. Massou, M. Fleischhauer, A. S. Zibrov, and M. D. Lukin, Nature 438, 837 (2005). radwell2015spatially N. Radwell, T. W. Clark, B. Piccirillo, S. M. Barnett, and S. Franke-Arnold, Physical Review Letters 114, 123603 (2015). ye2021synchronized Y.-H. Ye, L. Zeng, Y.-C. Yu, M.-X. Dong, E.-Z. Li, W.-H. Zhang, Z.-K. Liu, L.-H. Zhang, G.-C. Guo, D.-S. Ding et al., Physical Review A 103, 053316 (2021). arahira20121 S. Arahira, T. Kishimoto, and H. Murai, Optics Express 20, 9862 (2012). zhao2009millisecond B. Zhao, Y.-A. Chen, X.-H. Bao, T. Strassel, C.-S. Chuu, X.-M. Jin, J. Schmiedmayer, Z.-S. Yuan, S. Chen, and J.-W. Pan, Nature Physics 5, 95 (2009).
http://arxiv.org/abs/2307.04065v1
20230709000559
Large-scale global optimization of ultra-high dimensional non-convex landscapes based on generative neural networks
[ "Jiaqi Jiang", "Jonathan A. Fan" ]
cs.LG
[ "cs.LG", "math.OC" ]
Large-scale global optimization of ultra-high dimensional non-convex landscapes based on generative neural networks
Jiaqi Jiang and Jonathan A. Fan
August 12, 2023
=====================

We present a non-convex optimization metaheuristic, based on the training of a deep generative network, which enables effective searching within continuous, ultra-high dimensional landscapes. During network training, populations of sampled local gradients are utilized within a customized loss function to evolve the network output distribution function towards one peaked at high performing optima. The deep network architecture is tailored to support progressive growth over the course of training, which allows the algorithm to manage the curse of dimensionality characteristic of high dimensional landscapes. We apply our concept to a range of standard optimization problems with dimensions as high as one thousand and show that our method performs better with fewer function evaluations compared to state-of-the-art algorithm benchmarks. We also discuss the role of deep network over-parameterization, loss function engineering, and proper network architecture selection in optimization, and why the required batch size of sampled local gradients is independent of problem dimension. These concepts form the foundation for a new class of algorithms that utilize customizable and expressive deep generative networks to solve non-convex optimization problems. § INTRODUCTION High dimensional, non-convex optimization problems are pervasive in many scientific and engineering domains, including computational materials science <cit.>, electromagnetics <cit.>, circuit design <cit.>, process engineering <cit.>, and systems biology <cit.>. These problems are known to be very difficult to solve because they are NP-hard, and algorithms aiming to definitively search for the global optimum, such as branch and bound methods, cannot practically scale to high dimensional systems. As such, various algorithmic heuristics have been developed, ranging from evolutionary metaheuristics to Bayesian optimization <cit.>, which use judicious sampling of the landscape to identify high performing optima. In all cases, it remains challenging to apply these algorithms to ultra-high dimensional spaces with dimensions of hundreds to thousands due to the curse of dimensionality. The explosion of interest and research in deep neural networks over the last decade has presented new opportunities in optimization, as the process of training a deep network involves solving a high dimensional optimization problem. To this end, gradient-based optimization metaheuristics termed global topology optimization networks (GLOnets) <cit.> were recently proposed that use the training of a deep generative network to perform non-convex optimization. The concept applies to optimization problems where 𝐱 is a d-dimensional variable and the goal is to maximize the smoothly varying, non-convex objective function f(𝐱). To run the metaheuristic, the generative network is first initialized so that it outputs a distribution of 𝐱 values that spans the full optimization landscape. Over the course of network training, this distribution is sampled, f(𝐱) and local gradients are computed for these sampled points, and these values are incorporated into a customized loss function and backpropagated to evolve and narrow the distribution around high performing optima. Initial demonstrations indicate that GLOnets can perform better than standard gradient-based optimizers and global search heuristics for various non-convex optimization problems.
However it is unable to extend to high dimensional problems in its current form, and the lack of interpretability with this black box algorithm has made it difficult to understand if and how it can to adapt to more general problems, including high dimensional problems. In this Article, we introduce the progressive growing GLOnet (PG-GLOnet) in which optimization within an ultra-high dimensional non-convex landscape is mediated through the training of a progressive growing deep generative network. Our tailoring of the network architecture for this optimization task serves to incorporate knowledge and assumptions about the optimization landscape into the metaheuristic, which is a requirement for tractably navigating ultra-high dimensional landscapes. We also explain how the algorithm works to smoothen the design landscape, how evaluation of the loss function serves as a gradient estimation calculation, and why the number of required functional evaluations is independent of problem dimension. With standard benchmarking test functions, we show that our concept performs better than state-of-the-art algorithms with fewer functional evaluations for one thousand dimensional problems. We anticipate that the customization of network architectures within the GLOnets framework will seed new connections between deep learning and optimization. § PROGRESSIVE GROWING GLONETS ALGORITHM AND BENCHMARKING The PG-GLOnet concept builds on the foundation of the original GLOnet algorithm, which we briefly review here. The optimization problem to be solved with GLOnets can be written in the following form: max_𝐱 f(𝐱) where f(𝐱) is a non-convex, continuous objective function with feasible gradients. With GLOnets, this optimization problem is indirectly solved through the training of a general neural network (Figure <ref>a), where the input is a d-dimensional random variable 𝐳 with a standard normal distribution and the output is a distribution of 𝐱's. The generator therefore serves to map 𝐳 onto 𝐱 = G(𝐳; ϕ) with a distribution P(𝐱; ϕ), where ϕ denotes the trainable neural network parameters. The optimization objective for the generator is defined as: L = max_ϕ𝔼_𝐱∼ P(𝐱; ϕ)exp[ f(𝐱)/T] The distribution that maximizes this expected value is a delta function centered at the global optimum, and as such, an ideally trained generator will produce a narrow distribution centered at the global optimum, thereby solving the original optimization problem. The use of the exponential function and the hyperparameter T in the optimization objective further enhance the valuation of the global optimum, and more generally high performing optima, in the design space. Generator training is consistent with conventional deep learning training methods: gradients of the objective function with respect to network parameters, ∇_ϕ𝔼f, are calculated through backpropagation, and they are used to iteratively optimize ϕ using standard gradient-based methods. In practice, the objective function is approximated by a batch of M samples. P(𝐱; ϕ), on the other hand, is typically implicit and cannot be directly sampled. To circumvent this issue, we draw M samples {𝐳^(m)}_m=1^M from the standard normal distribution, transform them to {𝐱^(m)}_m=1^M, and then approximate L and its gradient ∇_ϕ L with respect to network parameters ϕ: L ≈1/M∑_m=1^Mexp[ f(𝐱^(m))/T] ∇_ϕ L ≈1/M∑_m=1^M1/Texp[ f(𝐱^(m))/T] ∇_𝐱f · D_ϕ𝐱^(m) ∇_𝐱f = [∂ f/∂ x_1, ∂ f/∂ x_2, …, ∂ f/∂ x_d] are the gradients of f(𝐱) and D_ϕ𝐱 = ∂ (x_1, x_2, …)/∂(ϕ_1, ϕ_2, ...) is the Jacobian matrix. 
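A minimal, self-contained PyTorch sketch of this training loop for a differentiable objective f is given below. It is meant only to illustrate the sampled objective and its backpropagation through 𝐱 = G(𝐳; ϕ), not to reproduce the authors' implementation; the network width, batch size, learning rate, and temperature are arbitrary example values.

```python
import torch

class Generator(torch.nn.Module):
    """Maps z ~ N(0, I) to candidate solutions x (a simple fully connected example)."""
    def __init__(self, dim_z, dim_x, width=256):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim_z, width), torch.nn.ReLU(),
            torch.nn.Linear(width, width), torch.nn.ReLU(),
            torch.nn.Linear(width, dim_x))
    def forward(self, z):
        return self.net(z)

def train_glonet(f, dim_x, dim_z=32, batch=20, iters=200, T=1.0, lr=1e-3):
    """f maps a (batch, dim_x) tensor to a (batch,) tensor of objective values to MAXIMIZE."""
    G = Generator(dim_z, dim_x)
    opt = torch.optim.Adam(G.parameters(), lr=lr)
    for _ in range(iters):
        z = torch.randn(batch, dim_z)
        x = G(z)
        loss = -torch.exp(f(x) / T).mean()   # minus the sampled objective (1/M) sum exp(f/T)
        opt.zero_grad()
        loss.backward()                      # gradients of f flow through x via autograd
        opt.step()
    return G
```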
Evaluation of f(𝐱) is usually performed by a numerical simulator and the gradient of f(𝐱) can be calculated explicitly or by auto-differentiation for analytic expressions, or by the adjoint variables method (AVM). In the initial conception of GLOnet, which we term FC-GLOnet, the generative network was a fully connected deep network and was capable of effectively addressing optimization problems with a modest number of dimensions. However, it was found to be ineffective at optimizing within very high dimensional landscapes due to the curse of dimensionality, which makes a direct search for the global optimum within a full, high dimensional landscape an intractable proposition. We therefore propose the PG-GLOnet, which utilizes a generative network that outputs a distribution that gradually grows from a coarse, low dimensional space to a fine, high dimensional space. By tailoring the network architecture in this way, we regularize the optimization process to take place over differing degrees of optimization landscape smoothing, enabling our search process to be computationally efficient and tractable. The PG-GLOnet generator architecture is shown in Figure <ref>b. The progressive growth concept is inspired by progressively growing GANs <cit.> that have been developed in the computer vision community to process images with increasing spatial resolution during network training. The input to the network is a D-dimensional random vector 𝐱^0, and its dimension is much smaller than that of 𝐱. With L growing blocks, the network simultaneously transforms and increases the dimensionality of the input vector, and its output is a 2^L D dimensional vector 𝐱^L that matches the dimensionality of 𝐱. In each growing block, the input vector dimension is doubled in two ways, by direct upsampling and by a linear transform. The resulting outputs are combined together and further transformed using a non-linear activation function: 𝐱^out_2d × 1 = q((1-α) [ 𝐱^in_d × 1; 𝐱^in_d × 1 ] +α A_2d × d·𝐱^in_d × 1) A_2d × d are trainable parameters in the linear transformation branch, q(·) is a non-linear activation function, and α is a hyperparameter that is manually tuned over the course of optimization. Initially, α's for all of the growing blocks in the network are set to 0, such that the vector outputted by each block has the same effective dimensionality as its input vector. The network output 𝐱^L therefore has an effective dimensionality that matches the dimensionality of the input 𝐱^0. As α is increased for a particular growing block, its output vector becomes dominated by its linear transformation branch, as opposed to its upsampling branch, and it has an effective dimensionality that exceeds and eventually doubles that of the growing block input vector. The effective dimensionality of 𝐱^L therefore arises from the aggregation of effective dimensionality increases from all growing blocks. To control the effective dimensionality of 𝐱^L over the course of PG-GLOnet training, α is manually changed from 0 to 1 sequentially from the left to right blocks (bottom of Figure <ref>b). At the end of PG-GLOnet training, α is 1 for all growing blocks and the effective dimensionality of 𝐱^L matches that of 𝐱. To evaluate the efficacy of PG-GLOnet in solving high dimensional non-convex optimization problems, we perform a series of benchmark numerical experiments where we optimize a set of standard test functions with PG-GLOnet and other established algorithms. 
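Before describing those experiments, a compact PyTorch sketch of the growing block defined above, and of a generator stacked from such blocks, may be helpful. The concatenation-based upsampling follows the stacked vector in the equation above, while the choice of tanh for q(.) and the bias-free linear layer are assumptions made for illustration.

import torch

class GrowingBlock(torch.nn.Module):
    # One PG-GLOnet growing block: doubles the vector length.
    def __init__(self, d_in):
        super().__init__()
        self.linear = torch.nn.Linear(d_in, 2 * d_in, bias=False)   # A_{2d x d}
        self.alpha = 0.0          # blending hyperparameter, scheduled manually

    def forward(self, x):
        up = torch.cat([x, x], dim=-1)                    # direct upsampling branch
        out = (1 - self.alpha) * up + self.alpha * self.linear(x)
        return torch.tanh(out)                            # q(.), assumed to be tanh

class PGGenerator(torch.nn.Module):
    # Stack of L growing blocks mapping a D-dimensional latent to a 2^L * D output.
    def __init__(self, D, L):
        super().__init__()
        self.blocks = torch.nn.ModuleList(
            [GrowingBlock(D * 2 ** i) for i in range(L)])

    def forward(self, z):
        for blk in self.blocks:
            z = blk(z)
        return z

gen = PGGenerator(D=8, L=7)                 # 8 * 2^7 = 1024-dimensional output
print(gen(torch.randn(4, 8)).shape)         # torch.Size([4, 1024])

During training one would ramp blk.alpha from 0 to 1 for one block after another, left to right, so that the effective output dimensionality grows from D to 2^L D.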
In the first set of experiments, we consider a testing function that can be tuned from a convex to non-convex function and compare PG-GLOnet with ADAM, a well known momentum-based gradient descent algorithm that is typically more effective than gradient descent. ADAM is a local optimization algorithm and performs well on convex objective functions but can get trapped within local optima for non-convex functions. Our test function is a modified Rastrigin function defined as follows: f(𝐱; ρ) = ρ d + ∑_i=1^d [x_i^2 - ρcos(2π x_i)] ρ is a hyperparameter that specifies the amplitude of the sinusoidal modulation within the function. When ρ =0, f(𝐱; ρ) = ∑_i=1^d x_i^2 and is a convex function. As ρ increases, more local optima emerge and these optima become separated by larger magnitude barriers. We first consider the computational cost required by ADAM and PG-GLOnet to find the global optimum of a two dimensional modified Rastrigin function as a function of ρ. For ADAM, we run 10000 optimizations for 200 iterations with random starting points, and for PG-GLOnet, we run the algorithm 10 times with a batch size of 20 for 200 total iterations. In both cases, the algorithms terminate early when they output results within 10^-3 of the global optimum, and computational cost is quantified as the average number of function evaluations required to find the global optimum. The results are summarized in Figure <ref>a and indicate that for convex or nearly convex optimization landscapes, ADAM is more efficient at finding the global optimum. This efficiency arises because ADAM is a specially tailored local optimizer that is well suited for these types of problems, while PG-GLOnet always requires relatively large batch sizes and more iterations to converge. As ρ increases, orders-of-magnitude more ADAM evaluations are required to search for the global optimum due to trapping within local optima in the design landscape. The computational cost for PG-GLOnet, on the other hand, does not increase nearly as rapidly due to its ability to navigate non-convex landscapes and is ten times more efficient than ADAM for ρ greater than 3. We also perform benchmarks between ADAM and PG-GLOnet for a ten dimensional problem. Due to the inability for ADAM to converge to the global optimum in non-convex, high dimensional landscapes, we perform this benchmark differently and compare the best optimal value found by ADAM and PG-GLOnet given the same amount of computational resources. Here, we run ADAM for 200 iterations with 20 random starting points and PG-GLOnet for 200 iterations with a batch size of 20. We run these benchmark experiments ten times and average the best values from each experiment, and the results are reported in Figure <ref>b. We find that the PG-GLOnet is able to consistently find solutions at or near the global optimum for all values of ρ, but the local optimizer gets progressively worse as ρ increases. In our next set of benchmark experiments, we compare PG-GLOnet with the covariance matrix adaptation evolution strategy (CMA-ES), which is an established evolutionary algorithm used to perform population-based global searching of an optimization landscape. Compared to ADAM, it is more suitable for performing non-convex optimization. We consider two standard non-convex testing functions with lots of local optima, the Rastrigin and Schwefel functions (defined in the Appendix). 
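Returning briefly to the tunable benchmark of the first experiment, the short sketch below illustrates why increasing rho traps a purely local optimizer. It uses plain gradient descent with random restarts rather than the ADAM benchmark of the experiments, and the step size, iteration counts and search domain are illustrative choices.

import numpy as np

def mod_rastrigin(x, rho):
    return rho * x.size + np.sum(x ** 2 - rho * np.cos(2 * np.pi * x))

def grad(x, rho):
    return 2 * x + 2 * np.pi * rho * np.sin(2 * np.pi * x)

rng = np.random.default_rng(0)
for rho in (0.0, 1.0, 4.0):
    hits = 0
    for _ in range(200):                       # 200 random restarts
        x = rng.uniform(-5, 5, size=2)
        for _ in range(500):
            x -= 0.01 * grad(x, rho)           # local gradient descent step
        hits += mod_rastrigin(x, rho) < 1e-3   # did it reach the global optimum at 0?
    print(f"rho = {rho}: {hits}/200 restarts reached the global optimum")

As rho grows, the fraction of restarts that escape the local optima drops sharply, which is the trapping behavior that the population-based PG-GLOnet search is designed to avoid.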
Plots in Figures <ref>c and <ref>d show the average number of function evaluations required to find the global optimum as a function of problem dimension d. The computational cost of CMA-ES increases exponentially as the problem dimension becomes larger, indicating the intractability of applying this algorithm to ultra-high dimensional problems. For the Schwefel function, we limited our CMA-ES benchmarking experiments to a problem dimension of 20 due to this scaling trend. PG-GLOnet, on the other hand, has a relatively small computational cost that is not sensitive to the dimension. In fact, the same neural network architecture and batch size is used for all problems. A more detailed discussion as to the origins of problem dimension and batch size decoupling is provided in the Discussion section. Finally, we benchmark PG-GLOnet with state-of-art algorithms on testing functions proposed by the CEC'2013 Special Session and Competition on Large-Scale Global Optimization (LSGO) <cit.>. We consider the six non-convex benchmark functions from the competition, which involve variations and combinations of the Rastrigin and Ackely functions and are defined in the Appendix. These benchmark functions were designed to incorporate a number of challenging features for optimization, including: * High dimensions. The design space of a optimization problem grows exponentially as the dimension of design variables increases. These benchmark functions utilize one thousand dimensional landscapes. * Functions with non-separable subcomponents. The whole design variable is decomposed into several subcomponents and dimensions within each subcomponent are strongly coupled together. * Imbalance in the contribution of subcomponents. The contribution of a subcomponent is magnified or dampened by a coefficient. * Non-linear transformations to the base functions. Three transformations are applied to break the symmetry and introduce some irregularity on the landscape: (1) Ill-conditioning (2) Irregularities (3) Symmetry breaking. To globally search these landscapes for the global optimum, we perform a two step optimization procedure. First, we run PG-GLOnet for each benchmark function for 200 iterations and a batch size of 100, from which our generative network outputs a narrow distribution of 𝐱's in promising regions of the optimization landscape. We then sample this distribution 100 times and perform local gradient descent on each of these design variables for an additional 200 iterations. The best function values found by PG-GLOnet plus local gradient descent are reported in Table <ref>, together with results produced from FC-GLOnet plus local gradient descent, local conjugate gradient descent, and two state-of-art non-convex optimization algorithms that were the best performing algorithms in the most recent LSGO contest: CC-RDG3, which is a divide-and-conquer method <cit.>, and DGSC, which is a differential group method utilizing spectral clustering <cit.>. We observe that PG-GLOnet with local gradient descent refinement is able to significantly outperform the other algorithms for the majority of test functions. In addition, the total computational cost of the two step optimization procedure is only 4× 10^4 function evaluations, while CC-RDG3 and DGSC require 3× 10^6 function evaluations. § DISCUSSION We discuss the origins of the efficiency and efficacy of PG-GLOnet in solving ultra-high dimensional non-convex optimization problems. 
First, we examine how the generic GLOnet algorithm operates and why it is able to effectively utilize a gradient-based strategy to solve non-convex optimization problems. Second, we examine the role of the progressive growing generative network architecture in PG-GLOnet in solving ultra-high dimensional problems. By understanding the relationship between network architecture and optimization procedure, we elucidate built-in assumptions used by PG-GLOnet in its search for the global optimum. With the generic GLOnet algorithm, the original optimization problem cited in Equation 1 is reframed as a related problem (Equation 2) that addresses a transformed, smoothened optimization landscape. The key concepts that produce this landscape transformation and enable effective gradient-based optimization are outlined in Figure <ref>a and are: 1) distribution optimization, where the original problem involving the optimization of 𝐱 is transformed to a problem involving the optimization of parameters within a simple distribution P(𝐱); 2) exponential transformation, where the objective function is exponentially weighted; 3) over-parametrization, where the distribution P(𝐱) is now parameterized by a neural network with hundreds to thousands of weights; and 4) gradient estimation, where gradients that specify the evolution of the continuous distribution P(𝐱) are accurately computed through discrete samplings of 𝐳. Distribution optimization. With the concept of distribution optimization, the original problem of searching for an optimal 𝐱 is recast as a population-based search in which parameters within a distribution function are optimized, thereby enabling a search for the global optimum in a smoother and higher dimensional optimization landscape. This concept is shared by other population-based optimization algorithms, such as CMA-ES. To visualize the concept, we consider a non-convex one-dimensional function f(𝐱) plotted as a blue line in the leftmost figure in Figure <ref>a. The objective is to maximize f(𝐱), and the function contains multiple local maxima separated by deep valleys. It is easy for optimization algorithms, particularly gradient-based algorithms, to get trapped in the local optima. For example, if gradient descent optimization is used and is initialized at the yellow dot position, the algorithm will converge to the local optimum delineated by the red dot. With this approach, multiple independent gradient descent optimizations with random starting points are needed to increase the possibility of finding the global optimum. For these problems, gradient-free optimization heuristics are often employed, which can reduce the chances of trapping within suboptimal maxima but which introduce a more stochastic nature to the search process. However, if we consider the optimization of a distribution function that interacts with the global optimization landscape, local information at different parts of the landscape can be aggregated and collectively utilized to evolve this distribution in a manner that reduces issues of trapping within suboptimal maxima. Formally, we transform the optimization variable 𝐱 to parameters within the distribution P(𝐱), and the globally optimal distribution is one that is narrowly peaked around the global optimum. Distribution functions can be explicitly parameterized in many ways. 
As a simple illustrative example that builds on our discussion of the one-dimensional f(𝐱), we consider the one-dimensional Gaussian distribution denoted as P(𝐱; μ, σ), shown as the red curve in the leftmost figure in Figure <ref>a. μ and σ refer to mean and standard deviation, respectively. With a Gaussian distribution function, the objective function now becomes transformed to the expected value of f(𝐱) as a function of (μ, σ): 𝔼_𝐱∼ P(𝐱; μ, σ) f(𝐱). As this new optimization landscape is a function of two distribution parameters, μ and σ, it is two dimensional. We can directly visualize this new landscape by evaluating ∫ f(𝐱) P(𝐱;μ, σ) d𝐱 for all values of (μ, σ), and the result is summarized in the second figure from the left in Figure <ref>a. The horizontal line section at the bottom of the contour plot, where σ equals zero, is the original one-dimensional f(𝐱) with multiple optima. As σ increases to finite values above zero, the landscape becomes smoother. Mathematically, horizontal line sections for finite sigma are calculated by convolving f(𝐱) with the Gaussian function, producing a Gaussian blur that leads to smoothening. This smoothened landscape facilitates gradient-based optimization of (μ, σ) when the distribution is initialized to large σ values, and the final optimized distributions converge to the original f(𝐱) space at the bottom of the plot. However, while this two-dimensional landscape is smoother than the original f(𝐱), there remain multiple distribution parameter initializations for which the gradient-based optimizer converges to suboptimal maxima. Exponential transformation. To further smoothen the optimization landscape and enhance the presence of the global optimum, we perform an exponential transformation of the objective function. Mathematically, the objective function for the distribution optimization problem becomes: 𝔼_𝐱∼ P(𝐱; μ, σ)exp[ f(𝐱)/T]. The temperature term T modulates the impact of the global optimum on the optimization landscape such that low T produces strong landscape modulation by the global optimum. For our one-dimensional f(𝐱) example, the exponentially transformed landscape is plotted in the second figure from the left in Figure <ref>a and shows that the local optima has faded out, such that gradient-based optimization within this landscape is more likely to converge to the global optimum. The choice of T depends on the scale of f(𝐱). Consider f(𝐱) that is linearly normalized to span (0, 1). Such normalization can be typically achieved based on prior knowledge about the upper and lower bound of f(𝐱). If we want to amplify f(𝐱) for f(𝐱) > f_d and minimize f(𝐱) for f(𝐱) < f_d, where f_d is a division point between 0 and 1, the temperature is chosen to be T = f_d / log(1 + f_d). For example, if f_d is chosen to be the golden ratio, then the temperature is roughly T = 1.3. In practice, the selection of f_d is problem specific, and T can be treated as a hyperparameter that can be manually tuned around 1 for tailoring to a particular problem. Over-parameterization. To further enhance the ability for GLOnet to efficiently and reliably converge to the global optimum, we next consider the concept of over-parameterization in which the distribution P(𝐱) is now a neural network parameterized by weights ϕ. The objective function then becomes: 𝔼_𝐱∼ P(𝐱; ϕ)exp[ f(𝐱)/T]. 
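Before examining the role of the network parameterization further, the one-dimensional Gaussian example above can be reproduced numerically. The sketch below estimates the expected value on a (mu, sigma) grid by Monte Carlo for a toy multimodal function of our own choosing, and evaluates the quoted temperature rule at the golden-ratio division point f_d of about 0.618.

import numpy as np

def f(x):                                    # toy non-convex 1-D objective
    return np.exp(-0.1 * x ** 2) * np.cos(2 * x)

f_d = (np.sqrt(5) - 1) / 2                   # golden-ratio division point, ~0.618
T = f_d / np.log(1 + f_d)
print(f"T = {T:.2f}")                        # ~1.3, as quoted in the text

rng = np.random.default_rng(1)
mus = np.linspace(-5, 5, 41)
sigmas = np.linspace(0.05, 3.0, 30)
landscape = np.empty((sigmas.size, mus.size))
for i, s in enumerate(sigmas):
    for j, m in enumerate(mus):
        x = rng.normal(m, s, size=2000)      # Monte Carlo estimate of the expectation
        landscape[i, j] = np.mean(np.exp(f(x) / T))

# The sigma -> 0 rows recover exp(f(mu)/T); rows with larger sigma are smoother.
print("sigma = 0.05 row:", np.round(landscape[0, ::10], 2))
print("sigma = 3.0  row:", np.round(landscape[-1, ::10], 2))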
Our use of a neural network is inspired by the fact that deep network training involves the solving of an extremely high dimensional non-convex optimization problem, that the convergence of the neural network is typically insensitive to initialization, and that good neural network parameters can be found using backpropagation. The underlying mathematical principles outlining why gradient descent is so effective for deep network training have been revealed to some extent by computer scientists in recent years. <cit.> First, the parameter space of deep networks is a high-dimensional manifold, such that most local optima are equivalently good and the probability of converging to a bad optimum during training decreases quickly with network size. Second, these equivalently high performing local optima originate from neural network over-parameterization, which builds in redundancy in the optimization landscape that speeds up and stabilizes the gradient-based optimization process. To understand how this applies to GLOnet, we revisit our one-dimensional f(𝐱) landscape in which local optima are separated by deep barriers. When the optimization landscape is transformed using P(𝐱,ϕ), it frames the optimization problem in a very high dimensional landscape, as the dimensionality of ϕ is much higher than 𝐱. Solutions to the optimization problem therefore reside in a high-dimensional manifold, such that many different ϕ's serve as high performing local optima. Additionally, local optima in f(𝐱) are no longer separated by deep barriers but are instead connected by pathways with low to no barriers in our transformed high dimensional landscape, mitigating trapping within these local optima during gradient-based optimization. The high dimensional landscape representing the transformed f(𝐱) is visualized as a two-dimensional projection in the rightmost plot in Figure <ref>a. The global optimum is now a connected band in the optimization landscape, as opposed to a single point in f(𝐱), and there are fewer energy barriers preventing gradients from converging to the global optimum, enabling gradient descent optimization to be more robust and faster. We note that neural network depth and expressivity play a large role in determining the practical impact of over-parameterization on optimization, and as a demonstration, we compare the performance of GLOnets based on linear and deep non-linear networks in the Appendix. Gradient estimation. A critical feature to maximizing the performance of GLOnet is ensuring that gradients used to evolve P(𝐱), which are approximated using a finite batch of samples, are sufficiently accurate. There are two methods for gradient estimation that can be used for GLOnets. The first is to use a score function gradient estimator, which utilizes the evaluated derivatives of the probability distribution P(𝐱; ϕ) and f(𝐱). This method for estimation requires explicit evaluation of derivatives to P(𝐱; ϕ) but only an implicit evaluation of ∇_𝐱f. The second is to use a pathwise gradient estimator, which relies on knowing the explicit derivatives of f(𝐱) but for which the probability distribution P(𝐱; ϕ) can be implicit. Empirically, we find for GLOnet that the pathwise gradient estimator more consistently produces smaller gradient error compared with the score function gradient estimator, and we therefore implement the pathwise gradient estimator in Equation <ref>. 
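The difference between the two estimators can already be seen in a scalar example. The sketch below compares them for J(mu) = E_{x~N(mu, sigma^2)}[f(x)] with f(x) = x^2, a toy case chosen purely for illustration; both estimators are unbiased, but their spread over repeated batches of the same size differs.

import numpy as np

mu, sigma, M = 1.0, 0.5, 20                  # M: batch size per gradient estimate
f = lambda x: x ** 2
df = lambda x: 2 * x
rng = np.random.default_rng(2)

score, pathwise = [], []
for _ in range(5000):
    x = mu + sigma * rng.standard_normal(M)
    score.append(np.mean(f(x) * (x - mu) / sigma ** 2))   # score-function estimator
    pathwise.append(np.mean(df(x)))                       # pathwise (reparameterized)

print("true gradient            :", 2 * mu)
print("score-function  mean, std:", np.mean(score), np.std(score))
print("pathwise        mean, std:", np.mean(pathwise), np.std(pathwise))

In this example the pathwise estimator shows a much smaller standard deviation at the same batch size, in line with the empirical observation reported above.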
<cit.> The pathwise gradient estimator is based on the principle of Monte Carlo estimation, such that the estimation error decreases with the inverse square root of batch size. Importantly, this estimation error is independent of dimension. As a result, GLOnet and specifically PG-GLOnet are able to operate for batch sizes that are independent of problem dimension, as demonstrated in Figures 2c and 2d. This scaling of problem dimension without a required scaling in the number of functional evaluations allows PG-GLOnet to readily scale and address the 1000-dimensional problems in Table 1 with modest computational resources. Progressive growth. Direct searching within a high dimensional, non-convex landscape is an intractable problem. In the case of FC-GLOnet, which utilizes all of the features above, including distribution optimization and over-parameterization, the algorithm is still not effective in directly searching high dimensional landscapes (Table 1). With PG-GLOnet, the progressive growing architecture regularizes the optimization procedure to search first within a relatively coarse, low dimensional representation of the optimization landscape, followed by relatively local searching within increasingly higher dimensional landscape representations. This hierarchical increase of landscape dimensionality directly corresponds to the serial toggling of α within the series of growing blocks in the generator. As such, the optimization landscape is evolved over the course of PG-GLOnet training in a manner that maintains the tractability of the optimization problem. To further visualize the relationship between generative network architecture and optimization search procedure, we consider a non-convex two-dimensional landscape shown in Figure <ref>b. The generative network contains a single growing block, and the toggling of α from zero to one modulates the effective dimensionality of the generator output from one to two. Initially, α is zero and the vector outputted by the generator has the same effective dimensionality as its input vector and is one. The optimization landscape being searched is therefore a diagonal line within the two-dimensional landscape (Figure <ref>b, left-most plot), and with optimal solutions near the center of the line, the outputted generator distribution (red coloring in plot) narrows towards this region. As α is increased, the generator output vector becomes dominated by its linear transformation branch, as opposed to its upsampling branch, and it has an effective dimensionality that increases and eventually doubles. In our PG-GLOnet visualization, this increase in effective dimensionality corresponds to a broadening of the optimization landscape being searched, and the outputted generator distribution widens relative to the diagonal line. Upon the completion of network growth, the PG-GLOnet distribution converges to the global optimum. The success of PG-GLOnet is therefore predicated on the ability for the outputted distribution of the generative network to be narrowed down to smaller but more promising regions of a coarse optimization landscape, prior to increasing the landscape dimensionality and adding more degrees of freedom to the problem. This concept therefore works particularly well for problems where optima within a low dimensional analogue of the optimization landscape help to inform of the presence and position of optima within the high dimensional landscape. 
This regularization of the optimization procedure also indicates that for problems where optima within coarse variants of the optimization landscape do not inform the position of the global optimum, PG-GLOnet will not work well. In summary, we present a general global optimization algorithm metaheuristic based on progressive growing deep generative neural networks termed PG-GLOnet. Unlike other population-based algorithms, PG-GLOnet uses gradient-based optimization to evolve an expressive, complex distribution in the optimization landscape to one centered around promising optima. This complex distribution, parameterized using the deep network framework, utilizes loss function engineering and over-parameterization to facilitate effective gradient-based searching. PG-GLOnet is particularly well suited to address ultra-high dimensional problems because the required batch size is independent of problem dimension and the progressively growing network architecture facilitates a hierarchical search process within a landscape with progressively growing effective dimensionality. This use of a hierarchical search strategy also provides bounds as to the types of problems and landscapes that are suited for PG-GLOnet optimization. We anticipate that further research in the tailoring of application-specific generative network architectures to particular optimization landscapes will enable the GLOnet platform to extend and adapt to an even wider range of non-convex, high dimensional optimization problems.
http://arxiv.org/abs/2307.04877v1
20230710195631
Engineering bound states in continuum via nonlinearity induced extra dimension
[ "Qingtian Miao", "Jayakrishnan M. P. Nair", "Girish S. Agarwal" ]
quant-ph
[ "quant-ph" ]
[email protected] Institute for Quantum Science and Engineering, Texas A&M University, College Station, TX 77843, USA Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843, USA [email protected] Institute for Quantum Science and Engineering, Texas A&M University, College Station, TX 77843, USA Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843, USA [email protected] Institute for Quantum Science and Engineering, Texas A&M University, College Station, TX 77843, USA Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843, USA Department of Biological and Agricultural Engineering, Texas A&M University, College Station, TX 77843, USA Bound states in continuum (BICs) are localized states of a system possessing significantly large life times with applications across various branches of science. In this work, we propose an expedient protocol to engineer BICs which involves the use of Kerr nonlinearities in the system. The generation of BICs is a direct artifact of the nonlinearity and the associated expansion in the dimensionality of the system. In particular, we consider single and two mode anharmonic systems and provide a number of solutions apposite for the creation of BICs. In close vicinity to the BIC, the steady state response of the system is immensely sensitive to perturbations in natural frequencies of the system and we illustrate its propitious sensing potential in the context of experimentally realizable setups for both optical and magnetic nonlinearities. Engineering bound states in continuum via nonlinearity induced extra dimension Girish S. Agarwal August 12, 2023 ============================================================================== § INTRODUCTION The localization of electromagnetic waves has been a subject of intense research over the past few decades <cit.>. It is well known that the solutions of the Schrödinger equation below the continuum threshold possess discrete energies and are square integrable in nature. In contrast, above the continuum threshold, energy eigenvalues are continues and the solutions are unbounded. It has been, however, shown that there exist localized states within the continuum of energies, namely the bound states in continuum. BICs were first proposed in 1929 by von Neumann and Wigner <cit.> in an electronic system and Stillinger and Herrick later extended it to a two electron wave function <cit.>. However, a fist experimental observation of BICs came only in 1992 by Capasso et al, where they demonstrated an electronic bound state in a semiconductor superlattice <cit.>. The emergence of BICs in electromagnetic systems can be explicated by investigating the effective non-Hermitian Hamiltonian ensuing from the Maxwell's equations, resulting in complex resonance frequencies ω. BICs are, in essence, non-radiating solutions of the wave equations, ie., modes of the system with Im(ω) approaching zero. In the last decade, BICs have been realized in a multitude of settings involving, for example, electronic <cit.>, acoustic <cit.> and photonic <cit.> subsystems. In particular, owing to their excellent tunability, photonic systems have emerged as an excellent candidate in recent years with applications including, but not limited to the design of high-Q resonators <cit.>, lasing <cit.>, sensing <cit.>, filters <cit.>, etc. 
In <cit.>, Romano et al reported an optical sensor underpinned by BIC for the fine grained estimation of perturbations in a dielectric environment. Another recent work <cit.> reported the development of nanophotonic sensor based on high-Q metasurface elements for molecular detection with applications in biological and environmental sensing. Some other recent intriguing research include the enhanced sensing of spontaneous emission <cit.>, vortex generation <cit.>, switches <cit.>, efficient higher harmonic generation <cit.> and many more. In this paper, we propose Kerr nonlinearities as a resource to engineer BICs. Such nonlinearities can be observed in a plentitude physical systems ranging from optical cavities <cit.> to magnetic systems <cit.>, which has been a prime subject of interest, with many exotic effects <cit.>. Here, we present a variety of solutions for BICs relevant to single and two mode bosonic systems having a Kerr type of anharmonicity. The resulting BICs are strongly sensitive to perturbations in the system parameters, in particular variations in characteristic detunings which owes its origin to the existence of first and second order poles in the response function. In addition, we discuss a number of experimental platforms germane to our analysis of the nonlinear systems. In particular, we specifically illustrate its sensing capabilities of the two mode anharmonic system in the context of a few experimentally realizable systems. The manuscript is organized as follows. In section <ref>, we discuss the well known schemes for the generation of BICs without involving the use of Kerr nonlinearities. Subsequently, in section <ref>, we provide a detailed analysis of the protocol to achieve BICs in a single mode system with passive Kerr nonlinearity and the accompanying sensitivity to perturbations in the system. We extend the study into the domain two mode active nonlinear system in section <ref> and establish its equivalence with the single mode results in Appendix <ref>. Finally, we conclude our results in section <ref>. § BIC IN A COUPLED TWO-MODE SYSTEM We commence our analysis by revisiting the emergence of BICs in a generic two-mode system without any nonlinearities. To this end, we consider a system comprising of modes a and b coupled through a complex parameter J and driven externally at frequency ω_d. The dynamics of the system in the rotating frame of the drive is given by Ẋ=-iℋ X+F_in, where, X^T=[a b], F_in describes the modality of external driving and ℋ is the effective non-Hermitian Hamiltonian provided by ℋ=[ Δ_a-iκ J; J Δ_b-iγ ]. Here, Δ_i=ω_i-ω_d where i∈{a,b}, ω_a and ω_b are the characteristic resonance frequencies of the modes a and b, and κ, γ denote their respective decay rates. Note that the real and imaginary parts of J=g-iΓ represent the coherent and dissipative form of coupling between the modes. The eigenvalues of ℋ are given by λ_±=Δ_a+Δ_b/2-iγ̅±√((Δ_a-Δ_b/2-iγ̃)^2+(g-iΓ)^2), where γ̅=κ+γ/2 and γ̃=κ-γ/2. One of the ways to bring to naught the imaginary part of the eigenvalues is to employ engineered gain into the system, that is to make κ=-γ. This in conjunction with the absence of dissipative coupling, viz, Γ=0 and Δ_a=Δ_b yield the eigenvalues λ_±=Δ_a±√((g^2-γ^2)). Palpably, the system in the parameter domain g≥γ is earmarked by the observation of real eigenspectra <cit.>. 
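As a quick numerical check of the statement just made, the short sketch below (with illustrative parameter values) diagonalizes the non-Hermitian Hamiltonian for the engineered-gain case kappa = -gamma, Gamma = 0 and Delta_a = Delta_b, and confirms that the spectrum becomes purely real once g >= gamma.

import numpy as np

def eigvals(Delta_a, Delta_b, kappa, gamma, g, Gamma):
    J = g - 1j * Gamma
    H = np.array([[Delta_a - 1j * kappa, J],
                  [J, Delta_b - 1j * gamma]])
    return np.linalg.eigvals(H)

gamma = 1.0
for g in (0.5, 1.0, 2.0):                     # below, at and above g = gamma
    lam = eigvals(Delta_a=0.0, Delta_b=0.0, kappa=-gamma,
                  gamma=gamma, g=g, Gamma=0.0)
    print(f"g/gamma = {g:.1f}:  Im(lambda) = {np.round(lam.imag, 6)}")

For g < gamma the eigenvalues form a complex-conjugate pair, while for g >= gamma both imaginary parts vanish, in line with the real eigenspectrum noted above.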
Note en passant, that the system under this parameter choice lends itself to a PT-symmetric description of the effective Hamiltonian featuring an exceptional point (EP) in the parameter space at g=γ. On the other hand, the region g<γ affords eigenvalues which form a complex-conjugate pair, wherein the amplitude of the of the modes grows exponentially in time whereas the other one decays. In the context of PT-symmetric systems, it is important to notice that EPs, which have found applications in sensing <cit.> are functionally analogous to BICs. The exists another interesting parameter domain, conformable with anti-PT symmetry, i.e., {PT,ℋ}=0, that can spawn a BIC, without involving external gain. Such a system necessitates the absence of coherent coupling, that is to say g=0, κ=γ and Δ_a=-Δ_b, begetting λ_±=-iκ±√((Δ_a^2-Γ^2)) which take purely imaginary form when |Δ_a|≤Γ. In contrast, the |Δ_a|>Γ phase leads to decaying solutions with real part of the eigenvalues flanked on either side of the external drive frequency. Observe that when Δ_a= 0 and as Γ approaches κ, the system entails a BIC, marked by the existence of a vanishing eigenvalue, i.e., λ_+→ 0 and thereby eliciting a pole at origin in response to the external drive. The anti-PT symmetric system does not warrant the use of gain, however, it stipulates the use of dissipative coupling, which can be engineered by coupling the subsystems via a common intermediary reservoir <cit.>. It makes for a relevant observation that in general, the effective Hamiltonian in Eq. (1) does not yield non-radiating solutions of the Maxwell equations, especially when J=0, i.e., when the modes are decoupled. In the following section, we provide a mechanism to engineer BIC in a nonlinear system which does not depend on the underlying symmetries of the system. More importantly, the protocol can be implemented even in the limit where the subsystems are completely decoupled. In fact, the existence of BIC is an inalienable consequence of anharmonicities present in the system and the concomitant magnification of the dimensionality. The mechanism can be extended to two-mode nonlinear systems and we provide a detailed analysis in section <ref>. § BIC IN A SINGLE MODE KERR NONLINEAR SYSTEM We consider a medium with third order Kerr nonlinearity characterized by a nonlinear contribution to the polarization P^(3)(ω)=χ^(3)|E(ω)|^2E(ω) placed in a single-mode cavity with mode variable a as depicted in Fig. <ref>. Here, E is the cavity electric field and χ^(3) the third order nonlinear susceptibility. The cavity is driven externally at frequency ω_d. The passive nature of nonlinearity indicates that the nonlinear processes are only affected by the frequency composition of the field and not the medium which only plays a catalytic role <cit.>. The dynamics of the system in the rotating frame of the drive is given by ȧ=-i(Δ-iγ)a-i2U|a|^2a+ℰ, where, Δ=ω_a-ω_d, ω_a denotes the cavity resonance frequency, U=3ħω_a^2χ^(3)/4ϵ_0 n V_eff is a measure of Kerr nonlinearity of the medium with refractive index n, V_eff signifies the effective volume of the cavity mode having a leakage rate γ and ℰ=√(2γ P_d/ħω_d) represents the Rabi frequency of external driving. In the long time limit, the mode a decays into a steady state described by the cubic equation I=α/2(1+(Δ̃+α/2)^2), where I=2U|ℰ|^2/γ^3, α=4U/γ|a_0|^2, Δ̃=Δ/γ and a_0 is the steady amplitude of the mode a. The Eq. (4) can engender a bistable response under the condition UΔ<0 and Δ^2>3γ^2 as illustrated by Fig. <ref>(a). 
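The bistable response can be illustrated by solving the steady-state cubic directly. In the sketch below the normalized detuning is set to Delta/gamma = -3 (so that U Delta < 0 for U > 0 and Delta^2 > 3 gamma^2), an illustrative choice; for drive strengths I inside the window bounded by the turning points of the I-alpha curve, three real, positive roots coexist.

import numpy as np

Delta_t = -3.0                                # \tilde{Delta} = Delta/gamma (assumed)

def alpha_roots(I):
    # I = (alpha/2) * (1 + (Delta_t + alpha/2)^2) rewritten as a cubic in alpha
    coeffs = [1 / 8, Delta_t / 2, (Delta_t ** 2 + 1) / 2, -I]
    r = np.roots(coeffs)
    return np.sort(r[np.abs(r.imag) < 1e-9].real)

def I_of_alpha(alpha):
    return 0.5 * alpha * (1.0 + (Delta_t + 0.5 * alpha) ** 2)

# Turning points: zeros of dI/dalpha, obtained from the derivative of the cubic
alpha_turn = np.sort(np.roots([3 / 8, Delta_t, (Delta_t ** 2 + 1) / 2]).real)
print("turning points alpha:", np.round(alpha_turn, 3))
print("bistable drive window I:", np.round(np.sort(I_of_alpha(alpha_turn)), 3))

for I in (1.0, 4.0, 7.0):
    print(f"I = {I}: steady-state branches alpha =", np.round(alpha_roots(I), 3))

For this detuning the cubic has a single branch at small and at large drive and three branches in between, reproducing the bistable branch structure described above.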
Notably, there exist two turning points characterized by the coordinates (I_±,α_±) of the I-α curve, subject to dI/dα=0, beyond which we observe an abrupt change in α. The exact form of α_± is given by α_±=-4Δ̃±2√(Δ̃^2-3)/3, while I± can be obtained from Eq. (4) by substituting the above-mentioned solutions. Moreover, there is a cut off for the pump power beyond which the bistable characteristics set in. The critical magnitude of I^c is defined by the inflection point in the I-α graph described by the condition dI/dα=d^2I/dα^2=0, providing us I^c=-α^2/2(Δ̃+α/2). For a given set of parameters U, α and γ, we would like to perturb the system in Δ, modifying the mode variable into a=a_0+δ a, in which δ a characterizes the perturbations of the mode a about a_0. The dynamics of the perturbations are governed by the following effective Hamiltonian ℋ̃=[ Δ̃+α-i β; -β^* -Δ̃-α-i ], where β=2U/γa_0^2. The complex eigenvalues of the Eq. (7) denoted as λ refers to the normal modes of the system and they can be obtained by solving the characteristic polynomial equation λ^2+2iλ+|β|^2-(Δ̃+α)^2-1=0. Notably, in the limit when the determinant of the Hamiltonian (α/2)^2-(Δ̃+α)^2-1→0, one of the solutions of the Eq. (7) becomes vanishingly small. Note that we are working in the frame rotating at frequency ω_d. Therefore, under this condition, the imaginary part of one of the eigenvalues approaches zero, alluding to the generation of a BIC, as depicted in the Fig. <ref> (b). It is worth noting that α≠0, i.e., U≠0 is a prerequisite for the existence of such a state. In other words, the generated BIC owes its origin entirely to the Kerr anharmonicities of the mode a. For a given value of the parameter Δ, the BICs exist at (I_±,α_±), which are exactly the turning points of the I-α curve as depicted in the Fig. <ref>(a,c). Application of nonlinearity induced BIC in sensing: The existence of BICs also leads to the enhanced sensitivity of the nonlinear response to perturbations in the system parameters. This can be accredited to the existence of the first and second order poles at α=α_± in the first order derivative of the nonlinear response dα/dΔ̃=-8α(Δ̃+α/2)/3(α-α_-)(α-α_+), obtained from differentiating Eq. (4) by Δ̃. To further elucidate the origin of sensitivity, we expand I around the turning points of the I-α curve, that is, α=α_±, I=I_±+∂ I/∂αϵ+∂^2 I/∂α^2ϵ^2+O(ϵ^3), where ϵ=α-α_± and I_± are obtained by substituting α_± in Eq. (4). Consequently, at the turning points of the curve, ∂ I/∂α=0 and we have dα/dΔ̃∼|I-I_±|^-1/2. On the other hand, close to inflection point sensitivity has the functional dependence dα/dΔ̃∼|I-I_c|^-2/3. In practice, one can choose a value of Δ and the Eq. (8) in conjunction with Eq. (4) respectively determine the corresponding α_± and I_± appropriate for sensing. As I is varied tantalizingly close to I_±, any perturbations in the parameter Δ translate into a prodigious shift in the mode response as perceptible from Fig. <ref>(d). Note that the sensitivity to aberrations in Δ is a direct artifact of the existence of a BIC. Bearing in mind the generality of our analysis, it is interesting to observe the variety of experimental platforms available to implement our scheme for investigating BICs produced by nonlinearity induced extra dimensions. Some of the well-known examples in the context of passive nonlinearities and bistability include Sodium vapor <cit.>, Ruby <cit.>, Kerr liquids like CS_2, nitrobenzene, electronic nonlinearity of Rb vapor etc. to name a few <cit.>. 
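The collapse of one normal-mode eigenvalue at the turning points can also be checked numerically. In the sketch below (same illustrative detuning as before, in units of gamma, and with a_0 taken real so that |beta| = alpha/2) the eigenvalues of the linearized Hamiltonian are evaluated at a generic point on the response curve and at the turning points alpha_-,+ given above.

import numpy as np

Delta_t = -3.0
alpha_p = (-4 * Delta_t + 2 * np.sqrt(Delta_t ** 2 - 3)) / 3
alpha_m = (-4 * Delta_t - 2 * np.sqrt(Delta_t ** 2 - 3)) / 3

def linearized_eigvals(alpha):
    beta = alpha / 2.0                        # |beta| = 2U|a_0|^2/gamma = alpha/2
    H = np.array([[Delta_t + alpha - 1j, beta],
                  [-beta, -(Delta_t + alpha) - 1j]])
    return np.linalg.eigvals(H)

for label, alpha in (("generic point        ", 1.0),
                     ("turning point alpha_-", alpha_m),
                     ("turning point alpha_+", alpha_p)):
    lam = linearized_eigvals(alpha)
    print(f"{label}  alpha = {alpha:5.2f}  eigenvalues = {np.round(lam, 3)}")

Away from the turning points both eigenvalues carry the bare linewidth Im(lambda) = -1, whereas at alpha_-,+ one eigenvalue moves to the origin of the complex plane, i.e. its linewidth vanishes and the mode becomes a BIC.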
In the subsequent section, we stretch the analysis into the case of a two mode anharmonic system. § ENGINEERING BIC IN A TWO MODE KERR NONLINEAR SYSTEM We begin this section by considering a two mode active Kerr nonlinear system that consists of modes a and b coupled coherently through a real parameter g, and b is externally pumped at a frequency of ω_d. The Hamiltonian of the system can be expressed as H/ħ =ω_aa^† a+ω_b b^† b+g(b^† a+ba^†) +Ub^† bb^† b+iΩ(b^† e^-iω_d t-be^iω_d t), where ω_a and ω_b represent the resonance frequencies of the modes a and b, the coefficient U quantifies the strength of Kerr nonlinearity, and Ω denotes the Rabi frequency of external driving. The systems characterized by the aforementioned Hamiltonian are prevalent in nature, for example, a collection of two-level atoms under the conditions of no saturation which act as an active Kerr nonlinear medium in a driven resonant cavity. The dynamics of the system in the rotating frame of the drive is provided by ȧ=-(iδ_a+γ_a)a-igb, ḃ=-(iδ_b+γ_b)b-2iUb^† bb-iga+Ω, where δ_a=ω_a-ω_d, δ_b=ω_b+U-ω_d, and γ_a and γ_b denote the dissipation rates of the modes a and b, respectively. In the long-time limit, the system decay into a steady state, i.e., a→ a_0, b→ b_0 lending the following nonlinear cubic equation I=4 x^3+4δ̃_Rx^2+|δ̃|^2x, where I=UΩ^2, x=U|b_0|^2, δ̃=δ_b-iγ_b-g^2/δ_a-iγ_a, and we define δ̃_R=δ_b- g^2δ_a/δ_a^2+γ_a^2 and δ̃_I=-γ_b-g^2γ_a/δ_a^2+γ_a^2 as the real and imaginary parts of δ̃, respectively. Notice that δ̃_I is negative. Under the criterion δ̃_R<√(3)δ̃_I, there exist three possible roots for x, leading to a bistable response, wherein, two of the roots are stable while the third is unstable. Conditions for the existence of BIC: To analyze the effect of perturbations around the steady state, we use a linearized approximation by letting a=a_0+𝒜 and b=b_0+ℬ, where 𝒜 and ℬ signify the perturbations of mode a about a_0 and mode b about b_0, respectively. The dynamics of the perutrbations ψ^T = [𝒜,ℬ,𝒜^†,ℬ^†] are governed by the following equation, ∂ψ/∂ t=-iℋψ+ℐ, where ℋ is the effective Hamiltonian ℋ=( [ δ_a-iγ_a g 0 0; g δ_b+4x-iγ_b 0 2Ub_0^2; 0 0 -δ_a-iγ_a -g; 0 -2Ub_0^*2 -g -δ_b+4x-iγ_b; ]), and ℐ=0 for the steady state. The normal modes of the system are hallmarked by complex eigenvalues of Eq. (<ref>), which can be obtained by solving the characteristic polynomial equation (ℋ-λ𝐈)=0. Conspicuously, when ℋ=0, one of the eigenvalues can approach zero (in the rotating frame of the drive), spawning real eigenvalues and thereby indicating the emergence of a BIC. Therefore, we first determine the parameter domain consistent with condition 0 =ℋ =12(δ_a^2+γ_a^2)x^2+8(-δ_a g^2+δ_bδ_a^2+δ_bγ_a^2)x +(g^2-δ_aδ_b+γ_aγ_b)^2+(δ_aγ_b+δ_bγ_a)^2. It is worth noting that the existence of BIC relies on the prerequisite x=U|b_0|^2≠0. In other words, the Kerr anharmonicities of the mode b are solely responsible for the creation of the BIC. Upon solving Eq. (<ref>), we discover that BICs can exist at points x_±=-1/3δ̃_R±1/6√(δ̃_R^2-3δ̃_I^2), which are exactly the turning points of the I – x curve given in Eq. (<ref>), obtained from solving the condition dI/dx=0. While invoking the linearized dynamics, one must make sure that the dynamical system is stable, which is to ensure that the eigenvalues of ℋ have negative imaginary parts. Consequently, we define λ_R and λ_I as the real and imaginary parts of the complex eigenvalues, respectively, and let λ'=-iλ. 
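A numerical sketch with illustrative parameters (gamma_a = gamma_b = 1, g = 2, delta_a = 3, delta_b = -4, U = 1 and b_0 taken real) ties these expressions together: it checks the bistability condition, computes the turning points x_-,+, and evaluates the 4x4 linearized Hamiltonian there, reading its lower-right entry as -(delta_b + 4x) - i gamma_b. At x_-,+ the determinant vanishes and one eigenvalue collapses to the origin, the BIC announced above; the stability conditions derived next can be examined with the same matrix.

import numpy as np

ga, gb, g, da, db, U = 1.0, 1.0, 2.0, 3.0, -4.0, 1.0
dt = db - 1j * gb - g ** 2 / (da - 1j * ga)          # \tilde{delta}
dR, dI = dt.real, dt.imag
print("bistability condition dR < sqrt(3)*dI:", dR < np.sqrt(3) * dI)

x_pm = -dR / 3 + np.array([-1.0, 1.0]) / 6 * np.sqrt(dR ** 2 - 3 * dI ** 2)

def H4(x):
    b0sq = x / U                                      # |b_0|^2, phase chosen real
    return np.array([
        [da - 1j * ga, g, 0, 0],
        [g, db + 4 * x - 1j * gb, 0, 2 * U * b0sq],
        [0, 0, -da - 1j * ga, -g],
        [0, -2 * U * b0sq, -g, -(db + 4 * x) - 1j * gb]])

for x in x_pm:
    lam = np.linalg.eigvals(H4(x))
    print(f"x = {x:.4f}:  |det H| = {abs(np.linalg.det(H4(x))):.2e},  "
          f"min |lambda| = {np.abs(lam).min():.2e}")

Both turning points give a numerically vanishing determinant and an eigenvalue of essentially zero magnitude, consistent with the condition det H = 0 derived above.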
The characteristic polynomial equation can then be written as 0=(ℋ-iλ'𝐈)=λ'^4+a_1λ'^3+a_2λ'^2+a_3λ'+a_4, where a_1=2(γ_a+γ_b), a_2=δ_a^2+2g^2+(γ_a^2+4γ_aγ_b+γ_b^2)+(12 x^2+8δ_b x+δ_b^2), a_3 =2δ_a^2γ_b+2δ_b^2γ_a+2(γ_aγ_b+g^2)(γ_a+γ_b) +16δ_bγ_ax+24γ_a x^2, a_4=ℋ. The stability conditions of the system can be obtained by employing the Routh-Hurwitz Criteria, yielding the constraints a_1>0, a_3>0, a_4>0, and a_1a_2a_3>a_3^2+a_1^2a_4. Apparently, the first two conditions are met automatically, and we find a_1a_2a_3-a_3^2-a_1^2 a_4=4γ_aγ_b(12 x^2+8δ_b x-δ_a^2+δ_b^2)^2 +4γ_aγ_b(γ_a+γ_b)^2[24x^2+16δ_b x+2(δ_a^2+δ_b^2)+(γ_a+γ_b)^2] +4g^2(γ_a+γ_b)^2[12x^2+8(δ_a+δ_b)x+(δ_a+δ_b)^2+(γ_a+γ_b)^2], which is manifestly positive fulfilling the final criterion. The only remaining criterion a_4=ℋ>0 is satisfied along with δ̃_R<√(3)δ̃_I and x∈ (0,x_-)∪(x_+,∞). Sensing capabilities of nonlinearity induced BIC: The importance of the above results can be legitimized in the optical domain with several well known systems, including, for instance Sagnac resonators <cit.> among other settings <cit.>. The presence of BICs at points x_± contributes to the significantly improved sensitivity of the nonlinear response to variations in the system parameters, in particular, to perturbations in natural frequency of the active nonlinear medium. The remarkable sensitivity is a direct upshot of the existence of first or second order poles at x=x_± in the first derivative of the nonlinear response which has the functional form d x/dδ_b=-x(x+δ̃_R/2)/3(x-x_-)(x-x_+), analogous to Eq. (9). Therefore, it immediately follows that adjacent to the turning points, we have d x/d δ_b∼I(x_±)-I^-1/2. By the same token, close to the inflection point, the sensitivity scales as I_c-I^-2/3. Sensing in magnetic systems: In view of the extensive studies on nonlinearities <cit.> in ferrimagnetic spheres, it is worthwhile to consider magnetic systems to implement the sensing scheme. Note that the anharmonicities in optical systems is a direct consequence of the nonlinear response of electrical polarization. In stark contrast, the anharmonic component in a magnetic system originates from the nonlinear magnetization. We consider a single ferromagnetic YIG interacting with a microwave cavity as portrayed in Fig. <ref>. The ferromagnet couples strongly with the microwave field at room temperature, giving rise to quasiparticles, namely cavity-magnon polaritons. The YIG acts as an active Kerr medium, which can be pinned down to the magnetocrystalline anisotropy <cit.> of the sample. A strong microwave pump of power P_d and frequency ω_d is used to stimulate the weak anharmonicity of the YIG, which is of the order 10^-9 Hz. The full Hamiltonian of the cavity-magnon system is consistent with Eq. (<ref>) where the mode operators a, b are respectively superseded by cavity and magnon annihilation operators. The quantities ω_a and ω_b represent the cavity and Kittel mode resonance frequencies. Rabi frequency of external pumping takes the form Ω=γ_e√(5πρ d P_d/3c), where γ_e is the gyromagnetic ratio, ρ denotes the spin density of the YIG with a diameter d and c stands for the velocity of light. For experimentally realizable parameters of the system, we plot in Fig. <ref> x from Eq. (12) by varying δ_b and I and the results replicate the physics described in Fig. <ref>. § CONCLUSIONS In conclusion, we have demonstrated a new scheme apropos of single and two-mode Kerr nonlinear systems to engineer BICs. 
In the context of single mode systems, we considered a passive Kerr nonlinearity in an optical cavity that demonstrates bistability. As the the system parameters are tuned in close proximity to the turning points of the hysteresis, a BIC springs into existence marked by a vanishing linewidth of the mode. In the neighborhood of the BIC, the steady state response was observed to show pronounced sensitivity to perturbations in the detunings. This remarkable sensitivity can be traced down to the existence of poles in the first order derivative of the response with respect to the perturbation variable. The sensitivity to perturbations scales as inverse square root of the deviations in external pump powers optimal for the turning points. Further, we extended the analysis into the regime of two-mode systems possessing an active nonlinear medium. Our analysis is generic, applicable to a large class of systems, including, both optical and magnetic systems. Some of the passive nonlinear optical platforms include nonlinear media like CS_2, nitrobenzene, Rb vapor whereas high-quality Sagnac resonators support active Kerr nonlinearities. In addition, we considered an active Kerr medium provided by magnetic systems interacting with a microwave cavity where research activity has flourished of late. In the domain of large detunings of the active Kerr medium, the two-mode setup can be described by an effectively single-mode anharmonic system in lockstep with the results from the passive Kerr nonlinearity in an optical cavity. § ACKNOWLEDGEMENTS The authors acknowledge the support of The Air Force Office of Scientific Research [AFOSR award no FA9550-20-1-0366], The Robert A. Welch Foundation [grant no A-1943] and the Herman F. Heep and Minnie Belle Heep Texas A&M University endowed fund. Q. M. and J. M. P. N. contributed equally to this work. § EQUIVALENCE BETWEEN SINGLE-MODE AND TWO-MODE ANHARMONIC SYSTEM So far, we have discussed schemes for the creation of BICs in single and two mode nonlinear systems. It is worth mentioning that there exists a close correspondence between the two mode and single mode results in the limit of large δ_b. To enunciate this, let us delve into the second part of Eq. (11). In the long-time limit, we have -(iδ_b+γ_b)m-2iUb^† bb-iga+Ω=0. Note that the effect of γ_b pales in comparison with δ_b and we can recast the above equation into b=-(ga+iΩ)/δ_b[1+x]^-1, where x=2U|b|^2/δ_b. For the purpose of simplification, we set Ω=0 and assume that the external drive is on the cavity at Rabi frequency ℰ. Owing to the largeness of δ_b, it is discernible that x<<1. Therefore, we can revise the above equation as b=-(ga+iΩ)/δ_b[1-x+O(x^2)]. Keeping only terms up to first order in x, we are left with b=-ga/δ_b[1-2U|b|^2/δ_b]. Upon iterating the solution and omitting the higher order terms, the approximate solution for b morph into b=-ga/δ_b+2(g/δ_b)^3(U/δ_b)|a|^2a. Substituting this into the first part of Eq. (11), we obtain an effective single mode description of the dynamics of the system, ȧ=-(iδ̃_a+γ_a)a-iŨ|a|^2a+ℰ, where δ̃_a=δ_a-g^2/δ_a and Ũ=2(g/δ_b)^4U. Strikingly, the preceding equation reproduces Eq. (3) with Δ, γ and U respectively replaced by δ̃_a, γ_a and Ũ, unfolding the equivalence between two-mode and single-mode nonlinear systems in the realm of large δ_b.
http://arxiv.org/abs/2307.05763v1
20230711194002
Realtime Spectrum Monitoring via Reinforcement Learning -- A Comparison Between Q-Learning and Heuristic Methods
[ "Tobias Braun", "Tobias Korzyzkowske", "Larissa Putzar", "Jan Mietzner", "Peter A. Hoeher" ]
eess.SY
[ "eess.SY", "cs.LG", "cs.SY" ]
Rad-ReStruct: A Novel VQA Benchmark and Method for Structured Radiology Reporting Chantal PellegriniContributed equally. Corresponding author: [email protected] Matthias Keicher⋆ Ege Özsoy Nassir Navab August 12, 2023 =================================================================================================================================== 2 § ABSTRACT Due to technological advances in the field of radio technology and its availability, the number of interference signals in the radio spectrum is continuously increasing. Interference signals must be detected in a timely fashion, in order to maintain standards and keep emergency frequencies open. To this end, specialized (multi-channel) receivers are used for spectrum monitoring. In this paper, the performances of two different approaches for controlling the available receiver resources are compared. The methods used for resource management (ReMa) are linear frequency tuning as a heuristic approach and a Q-learning algorithm from the field of reinforcement learning. To test the methods to be investigated, a simplified scenario was designed with two receiver channels monitoring ten non-overlapping frequency bands with non-uniform signal activity. For this setting, it is shown that the Q-learning algorithm used has a significantly higher detection rate than the heuristic approach at the expense of a smaller exploration rate. In particular, the Q-learning approach can be parameterized to allow for a suitable trade-off between detection and exploration rate. § INTRODUCTION The use of radio signals in areas such as smart factories, smart homes, or autonomous driving is constantly increasing the number of radio signal transmitters. Improper handling leads to an increase in unwanted radio emissions. The usable spectrum for mobile radio applications is being expanded by new technologies such as 5G. At the same time, unwanted signals are more difficult to detect when the overall bandwidth of the frequency spectrum to be monitored keeps growing. This increases the vulnerability of wireless networks and services due to unwanted interference signals. Therefore, in order to enforce radio standards and to keep emergency frequencies open, the frequency spectrum must still be monitored in a timely fashion. In particular, an active search for interference signals is required so that they can be detected, classified, and localized early (interference hunting). Frequency spectrum monitoring (FSM) systems typically consist of several receiver channels and corresponding software to control these channels. Specifically, the instantaneous bandwidths of the receiver channels cover only a small part of the overall spectrum to be monitored (due to cost reasons). Therefore, the aim is efficiently monitor the entire spectrum of interest with only few receiver channels by shifting the frequency windows of the receiver channels according to a suitable switching pattern. By this means, the entire spectrum can be visited over subsequent measurements. Controlling the frequency windows of the individual receiver channels is a typical problem of resource management (ReMa): Few receiver channels have to cover a relatively broad frequency spectrum within a limited time frame. By varying the frequency windows, the detection of continuous interference signals is relatively easy, while the probability of detecting pulsed signals is relatively low. 
State-of-the-art FSM systems use heuristic approaches to shift the frequency windows, which do not necessarily derive from any formal optimization procedure. §.§ State of the Art In the area of spectrum monitoring, previous heuristic approaches for ReMa can react in an agile fashion to current events within a certain framework. ReMa can be centralized [1] on a single network controller or decentralized [2] employing several devices. In other areas of communications technology involving similar ReMa problems, machine learning methods have already been successfully applied. For example, reinforcement learning methods can be used to optimize ReMa in mobile communication systems [3], taking into account aspects such as guaranteed data rates and fairness among users. However, such problem setups – or related setups regarding optimal user scheduling – are quite different from ReMa problems encountered in spectrum monitoring, so that corresponding solutions cannot simply be adopted or tailored to the specific FSM system contexts and associated requirements. Some related works, e.g. [4], indeed combine spectrum monitoring and machine learning – specifically convolutional neural networks – but these focus on signal classification tasks rather than on ReMa, which involves a fundamentally different problem setup. §.§ Paper Outline In the following, we investigate the performances of two different ReMa approaches for spectrum monitoring, namely linear frequency tuning as a heuristic approach and a Q-learning algorithm from the field of reinforcement learning. To test the methods to be investigated, a simplified scenario was designed with two receiver channels monitoring ten non-overlapping frequency bands with non-uniform signal activity. For this setting, we demonstrate that the Q-learning algorithm used has a significantly higher detection rate than the heuristic approach at the expense of a smaller exploration rate. As will be seen, a particular advantage of the Q-learning approach is that it can be parameterized to allow for a suitable trade-off between detection and exploration. The remainder of the paper is organized as follows: The environment for the Q-learning algorithm is defined in Section 2. The generation of training data is described in Section 3, and the two ReMa approaches are in detail described in Section 4. Numerical performance results are analyzed in Section 5, and conclusions are drawn in Section 6. § ENVIRONMENT In order to calculate detection rates of the methods under investigation in the analysis part, the actual number of all interference signals within an observation series (OS) must be known. To this end, we conducted a computer simulation study for a simplified scenario with two receiver channels and employed generated test data. In the considered scenario, an OS consisted of 100 observed frequency spectra. The overall frequency spectrum was divided into 10 non-overlapping sub-spectra. For the sub-spectra, normalized frequency values were employed, as the exact size, start, and end of the associated frequency range are irrelevant for the ReMa problem. The number of generated interference signals was chosen as three throughout our simulation study. The interference signals were continuous signals and had a 50 percent probability of being found in the first three sub-spectra. In particular, multiple interference signals could be located within the same sub-spectrum. 
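The placement of interference signals described here can be sketched as a small generator, shown below. The uniform draw within the first three and within the remaining seven sub-spectra is an assumption, while the 50/50 split, the three continuous signals and the 100 x 10 grid follow the text.

import numpy as np

N_SPECTRA, N_BANDS, N_SIGNALS = 100, 10, 3
rng = np.random.default_rng(0)

def draw_signal_bands():
    # Sub-spectrum index of each continuous interference signal in one OS.
    bands = []
    for _ in range(N_SIGNALS):
        if rng.random() < 0.5:
            bands.append(int(rng.integers(0, 3)))        # one of the first three bands
        else:
            bands.append(int(rng.integers(3, N_BANDS)))  # one of the remaining bands
    return bands                                         # bands may repeat

bands = draw_signal_bands()
occupancy = np.zeros((N_SPECTRA, N_BANDS), dtype=int)
occupancy[:, bands] = 1             # continuous signals are present in every spectrum
print("signal bands for this observation series:", bands)
print("occupied sub-spectra per observed spectrum:", int(occupancy.sum(axis=1)[0]))

The detection step and the binary feedback described in the following paragraphs can then be layered on top of this occupancy grid.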
In an observed spectrum of an OS, an interference signal had an 80 percent chance of being detected by one of the two receiver channels. This means that in 20 percent of the cases, an interference signal could not be detected by a receiver. The receivers were able to change their frequency range between the sub-spectra in an instantaneous fashion, i.e., the change of the frequency range did not entail any delay. Furthermore, the time required to observe and detect an interference signal over the entire spectrum was assumed to be equal to one time step (i.e., a single observed frequency spectrum) in the OS. The feedback of the detection is a binary signal: a successful detection of an interference signal is represented by a one, an unsuccessful detection by a zero. Thus, the model does not have any information about the probability of whether an interference signal has been detected or not, only the binary classification into detected and non-detected interference signals is available. A false-positive detection of an interference signal was not considered within the scope of this paper for simplicity, as it has little relevance for the considered ReMa problem. In our simulation setup, the signal occurrence probability is a free variable and can be varied over several measurements. Within one OS, the detection probability and the signal occurrence probability were kept constant for all observed frequency spectra. The number of generated interference signals, the number of sub-spectra within the overall frequency spectrum, and the number of receiver channels were also kept constant. As such, these values represent a simplified spectrum monitoring scenario with central control of the receiver channels and are based on empirical experience derived from current FSM systems. § GENERATING DATA In accordance with the environment, the training and validation data were generated with 100 spectra per OS and 10 sub-spectra per observed frequency spectrum. This results in a list of 100 x 10 elements composed of zeros and ones for the binary feedback of the signal detection step. Since the data are generated, any amount of test data could be produced. Therefore, a 50/50 split between training and validation data, with 10,000 observation series each, was used. § ALGORITHMS To analyze the two approaches, the algorithms were implemented in Python. For the training of the Q-learning algorithm, 10,000 OS were used. The heuristic linear frequency tuning follows a fixed frequency-switching scheme and thus does not rely on training data. §.§ Reinforcement Learning The Q-learning algorithm is a model of the Markov decision process. For each possible state of the system, values (Q-values) for the prospects of success of the possible actions are stored in a Q-table. The states are stored as the status of the system and, together with the number of possible actions, determine the dimensionality of the Q-table. According to the environment, the agent (Q-agent) has the option to set each receiver channel to 10 sub-spectra for each state of the model. This results in 10 possible actions per receiver channel. A state of the model includes the following information: * Current positions of the two receiver channels * Outputs of the interference signal detection from the previous spectrum within the OS. The Q-table is initialized with random Q-values. The Q-values are tested by the Q-agent and updated according to predetermined rules: * Degradation of the Q-value, if both receiver channels were set to the same position. 
* Degradation of the Q-value, if both receiver channels have swapped positions. * Degradation of the Q-value, if no interference signal is detected. * Improvement of the Q-value, if an interference signal is detected. * This bonus increases with repeated detection up to x times, where we chose x=5 throughout. The Q-learning algorithm learns, based on the current positions of the receiver channels and the interference signal detection of the last step, how promising different distributions of the receiver channels are for the next step. If an interference signal has been detected, a receiver channel observes the interference signal, until it is terminated. Meanwhile, the second receiver channel continues to scan the spectrum. The Q-agent can transfer structures of the training data set into the Q-table. This way, it is learned, on which partial spectra interference signals occur more frequently. An exploration value can be defined separately. This can force the Q-agent to terminate the repeated interference signal observation and to continue exploration of other partial spectra. Examples for the behavior of the Q-learning algorithm regarding the positioning of the receiver channels under different exploration values are presented in Fig. <ref> and Fig. <ref>. For better control of when the Q-agent should switch the sub-spectrum after a detection series, a variant of the Q-learning algorithm with memory was developed. Along with the positions and detection properties of the receiver channels, a number of memory steps for both receiver channels are stored in the status. From this, a new rule for updating the Q-values can be created: * Deterioration of the Q-value, when an interference signal has been detected on the same sub-spectrum more than x=5 times in a row. This rule prompts the Q-agent to terminate the observation after x=5 consecutive observations of an interference signal on the same sub-spectrum. This allows the Q-agent to learn to continue with the exploration after x=5 detections. An exemplary behavior of the Q-learning algorithm with memory regarding the positioning of the receiver channels is shown in Fig. <ref>. §.§ Heuristic Approach In the heuristic approach, the two receiver channels are placed side by side at the first and second sub-spectrum for each new OS. In each time step (observed frequency spectrum), the receiver channels are moved up by two sub-spectra. This repeats until the right end of the overall frequency spectrum is reached. Once the right end is attained, the receiver channels are reset to their initial positions at the first and second sub-spectrum. Since this approach is deterministic, it is not trained with any training data. An exemplary behavior of the heuristic algorithm regarding the positioning of the receiver channels is shown in Fig. <ref> as a reference. § ANALYSIS Since more interference signals can be generated during the data creation than can be detected with two receiver channels, a 100-percent detection rate (DR) is not possible. The detection rate of the approaches can only be compared with the number of detectable interference signals, shown in Fig. <ref> in red color. The heuristic approach, due to the forced frequency tuning, only achieves a very low DR of approximately 0.21% (orange). The Q-learning approach, on the other hand, can achieve a higher DR by learning the structure of the data. It searches more often in the first three sub-spectra and repeatedly observes recognized interference signals. 
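The numerical form of the update rules listed above is not spelled out in the text, so the following sketch should be read as one plausible realization rather than the authors' implementation: the rules are encoded as a shaped reward and applied through a standard temporal-difference Q-update with ε-greedy action selection, where epsilon plays the role of the exploration value. All reward magnitudes, the learning rate, and the discount factor are assumptions.

```python
import numpy as np
from collections import defaultdict

N_SUB = 10                    # number of sub-spectra
ALPHA, GAMMA = 0.1, 0.9       # learning rate and discount factor (assumed values)

# Q[state] is a 10x10 table over the joint action (a1, a2) of the two receiver channels;
# state = (pos1, pos2, det1, det2) from the previously observed spectrum.
Q = defaultdict(lambda: np.random.uniform(-0.01, 0.01, size=(N_SUB, N_SUB)))

def shaped_reward(prev_state, action, detections, run_length, max_bonus=5):
    """Encode the listed update rules as a scalar reward (magnitudes assumed)."""
    a1, a2 = action
    p1, p2 = prev_state[0], prev_state[1]
    r = 0.0
    if a1 == a2:                          # both channels on the same sub-spectrum
        r -= 1.0
    if (a1, a2) == (p2, p1):              # channels merely swapped positions
        r -= 1.0
    if not any(detections):               # no interference signal detected
        r -= 0.5
    else:                                 # detection bonus, growing with the run length
        r += 1.0 + 0.2 * min(run_length, max_bonus)
    return r

def select_action(state, epsilon=0.1):
    """ε-greedy selection; epsilon acts as the exploration value."""
    if np.random.random() < epsilon:
        return tuple(np.random.randint(0, N_SUB, size=2))
    return np.unravel_index(int(Q[state].argmax()), Q[state].shape)

def q_update(prev_state, action, reward, next_state):
    a1, a2 = action
    td_target = reward + GAMMA * Q[next_state].max()
    Q[prev_state][a1, a2] += ALPHA * (td_target - Q[prev_state][a1, a2])
```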
The DR is higher with lower exploration (blue) than with higher exploration (cyan), because the receiver channels are less often distracted to other partial spectra. The approach with memory (green) has a higher standard deviation than the other algorithms with an average DR of approximately 41.79%. Regarding the visited frequency bands (Fig. <ref>), the Q-learning methods exhibit an imbalance in favor of the heavily occupied partial spectra, which is due to the repeated observation of the sub-spectrum containing the recognized interference signal. In comparison, it can be clearly seen that the linear frequency tuning has a balanced observation behavior between the sub-spectra (as expected). All spectra are visited exactly 20 times within an OS (orange). The Q-learning algorithm with low exploration mainly observes the first three sub-spectra (blue). With more exploration (cyan), the algorithm becomes similar to the heuristic approach. On average, the Q-learning algorithm with memory appears to be the most balanced one (green), although it again has a high standard deviation. The variability regarding the visited frequency bands observed for the Q-learning methods may be due to the random initialization employed for the Q-values. § CONCLUSION In this paper, a significantly higher detection rate of interference signals could be achieved with a trained Q-learning algorithm compared to a heuristic approach employing linear frequency tuning. High detection rates naturally come at the expense of lower exploration – a trade-off which can be optimally adjusted to the signal environment of interest, using free parameters available within the Q-learning algorithm. In future work, the Q-learning approaches must be tested with other signal types, such as pulse signals and signals with low duty cycle. For a better comparison, the Q-learning algorithm with memory might also be compared with an alternative heuristic approach which allows to dwell on detected signals. Yet, it should be noted that repeated observation of already detected interference signals always negatively affects the balance of exploration. Finally, promising ReMa approaches should be integrated into existing FSM systems to verify their advantages within field tests and realistic signal environments. § ACKNOWLEDGMENTS The authors would like to thank Dr. Alwin Reinhardt (Rohde & Schwarz) and Dr. Qi Wang (University of the West of Scotland) for fruitful discussions. § REFERENCES 1. M. Mostafavi, J.M. Niya, H. Mohammadi, and B.M. Tazehkand, "Fast convergence resource allocation in IEEE 802.16 OFDMA systems with minimum rate guarantee," China Commun., vol. 13, no. 12, pp. 120–131, Dec. 2016. 2. D. Niyato and E. Hossain, "Radio resource management games in wireless networks: an approach to bandwidth allocation and admission control for polling service in IEEE 802.16," IEEE Wireless Commun., vol. 14, no. 1, pp. 27–35, Feb. 2007. 3. U. Challita, L. Dong, and W. Saad, "Proactive resource management for LTE in unlicensed spectrum: a deep learning perspective," IEEE Trans. Wireless Commun., vol. 17, no. 10, pp. 4674–4689, Jul. 2018. 4. A. Selim, F. Paisana, J.A. Arokkiam, Y. Zhang, L. Doyle, and L.A. DaSilva, "Spectrum monitoring for radar bands using deep convolutional neural networks," in Proc. 2017 IEEE Global Commun. Conf. (Globecom), Dec. 2017. doi:10.1109/GLOCOM.2017.8254105
http://arxiv.org/abs/2307.04136v1
20230709092915
ECL: Class-Enhancement Contrastive Learning for Long-tailed Skin Lesion Classification
[ "Yilan Zhang", "Jianqi Chen", "Ke Wang", "Fengying Xie" ]
cs.CV
[ "cs.CV" ]
Zhang et al. Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China xfy_73.buaa.edu.cn ECL: Class-Enhancement Contrastive Learning for Long-tailed Skin Lesion Classification Yilan Zhang1 Jianqi Chen1 Ke Wang1 Fengying Xie1Corresponding author August 12, 2023 ====================================================================================== Skin image datasets often suffer from imbalanced data distribution, exacerbating the difficulty of computer-aided skin disease diagnosis. Some recent works exploit supervised contrastive learning (SCL) for this long-tailed challenge. Despite achieving significant performance, these SCL-based methods focus more on head classes, yet ignoring the utilization of information in tail classes. In this paper, we propose class-Enhancement Contrastive Learning (ECL), which enriches the information of minority classes and treats different classes equally. For information enhancement, we design a hybrid-proxy model to generate class-dependent proxies and propose a cycle update strategy for parameters optimization. A balanced-hybrid-proxy loss is designed to exploit relations between samples and proxies with different classes treated equally. Taking both “imbalanced data" and “imbalanced diagnosis difficulty" into account, we further present a balanced-weighted cross-entropy loss following curriculum learning schedule. Experimental results on the classification of imbalanced skin lesion data have demonstrated the superiority and effectiveness of our method. The codes can be publicly available from <https://github.com/zylbuaa/ECL.git>. § INTRODUCTION Skin cancer is one of the most common cancers all over the world. Serious skin diseases such as melanoma can be life-threatening, making early detection and treatment essential <cit.>. As computer-aided diagnosis matures, recent advances with deep learning techniques such as CNNs have significantly improved the performance of skin lesion classification <cit.>. However, as a data-hungry approach, CNN models require large balanced and high-quality datasets to meet the accuracy and robustness requirements in applications, which is hard to suffice due to the long-tailed occurrence of diseases in the real-world. Long-tailed problem is usually caused by the incidence rate and the difficulty of data collection. Some diseases are common while others are rare, making it difficult to collect balanced data <cit.>. This will cause the head classes to account for the majority of the samples and the tail classes only have small portions. Thus, existing public skin datasets usually suffer from imbalanced problems which then results in class bias of classifier, for example, poor model performance especially on tail lesion types. To tackle the challenge of learning unbiased classifiers with imbalanced data, many previous works focus on three main ideas, including re-sampling data <cit.>, re-weighting loss <cit.> and re-balancing training strategies <cit.>. Re-sampling methods over-sample tail classes or under-sample head classes, re-weighting methods adjust the weights of losses on class-level or instance-level, and re-balancing methods decouple the representation learning and classifier learning into two stages or assign the weights between features from different sampling branches <cit.>. Despite the great results achieved, these methods either manually interfere with the original data distribution or improve the accuracy of minority classes at the cost of reducing that of majority classes <cit.>. 
Recently, contrastive learning (CL) methods pose great potential for representation learning when trained on imbalanced data <cit.>. Among them, supervised contrastive learning (SCL) <cit.> aggregates semantically similar samples and separates different classes by training in pairs, leading to impressive success in long-tailed classification of both natural and medical images <cit.>. However, there still remain some defects: (1) Current SCL-based methods utilize the information of minority classes insufficiently. Since tail classes are sampled with low probability, each training mini-batch inherits the long-tail distribution, making parameter updates less dependent on tail classes. (2) SCL loss focuses more on optimizing the head classes with much larger gradients than tail classes, which means tail classes are all pushed farther away from heads <cit.>. (3) Most methods only consider the impact of sample size (“imbalanced data") on the classification accuracy of skin diseases, while ignoring the diagnostic difficulty of the diseases themselves (“imbalanced diagnosis difficulty"). To address the above issues, we propose a class-Enhancement Contrastive Learning method (ECL) for skin lesion classification, differences between SCL and ECL are illustrated in Fig.<ref>. For sufficiently utilizing the tail data information, we attempt to address the solution from a proxy-based perspective. A proxy can be regarded as the representative of a specific class set as learnable parameters. We propose a novel hybrid-proxy model to generate proxies for enhancing different classes with a reversed imbalanced strategy , i.e., the fewer samples in a class, the more proxies the class has. These learnable proxies are optimized with a cycle update strategy that captures original data distribution to mitigate the quality degradation caused by the lack of minority samples in a mini-batch. Furthermore, we propose a balanced-hybrid-proxy loss, besides introducing balanced contrastive learning (BCL) <cit.>. The new loss treats all classes equally and utilizes sample-to-sample, proxy-to-sample and proxy-to-proxy relations to improve representation learning. Moreover, we design a balanced-weighted cross-entropy loss which follows a curriculum learning schedule by considering both imbalanced data and diagnosis difficulty. Our contributions can be summarized as follows: (1) We propose an ECL framework for long-tailed skin lesion classification. Information of classes are enhanced by the designed hybrid-proxy model with a cycle update strategy. (2) We present a balanced-hybrid-proxy loss to balance the optimization of each class and leverage relations among samples and proxies. (3) A new balanced-weighted cross-entropy loss is designed for an unbiased classifier, which considers both “imbalanced data" and “imbalanced diagnosis difficulty". (4) Experimental results demonstrate that the proposed framework outperforms other state-of-the-art methods on two imbalanced dermoscopic image datasets and the ablation study shows the effectiveness of each element. § METHODS The overall end-to-end framework of ECL is presented in Fig. <ref>. The network consists of two parallel branches: a contrastive learning (CL) branch for representative learning and a classifier learning branch. The two branches take in different augmentations T^i, i ∈{1,2 } from input images X and the backbone is shared between branches to learn the features X̃^i, i ∈{1,2 }. 
We use a fully connected layer as a logistic projection for classification g(·): 𝒳̃→𝒴̃ and a one-hidden layer MLP h(·): 𝒳̃→𝒵∈ℝ^d as a sample embedding head where d denotes the dimension. ℒ_2-normalization is applied to 𝒵 by using inner product as distance measurement in CL. Both the class-dependent proxies generated by hybrid-proxy model and the embeddings of samples are used to calculate balanced-weighted cross-entropy loss, thus capturing the rich relations of samples and proxies. For better representation, we design a cycle update strategy to optimize the proxies' parameters in hybrid-proxy model, together with a curriculum learning schedule for achieving unbiased classifiers. The details are introduced as follows. §.§ Hybrid-Proxy Model The proposed hybrid-proxy model consists of a set of class-dependent proxies 𝒫= { p^c_k|k∈{ 1,2,...,N^p_c}., c. ∈{1,2,...,C}}, C is the class number, p^c_k∈ℝ^d is the k-th proxy vector of class c, and N^p_c is the proxy number in this class. Since samples in a mini-batch follow imbalanced data distribution, these proxies are designed to be generated in a reversed imbalanced way by giving more representative proxies of tail classes for enhancing the information of minority samples. Let us denote the sample number of class c as N_c and the maximum in all classes as N_max. The proxy number N^p_c can be obtained by calculating the imbalanced factor N_max/N_c of each class: N^p_c = {[ 1 N_c = N_max; ⌊N_max/10 N_c⌋ + 2 N_c ≠ N_max; ]. In this way, the tail classes have more proxies while head classes have less, thus alleviating the imbalanced problem in a mini-batch. As we know, a gradient descent algorithm will generally be executed to update the parameters after training a mini-batch of samples. However, when dealing with an imbalanced dataset, tail samples in a batch contribute little to the update of their corresponding proxies due to the low probability of being sampled. So how to get better representative proxies? Here we propose a cycle update strategy for the optimization of the parameters. Specifically, we introduce the gradient accumulation method into the training process to update proxies asynchronously. The proxies are updated only after a finished epoch that all data has been processed by the framework with the gradients accumulated. With such a strategy, tail proxies can be optimized in a view of whole data distribution, thus playing better roles in class information enhancement. Algorithm <ref> presents the details of the training process. §.§ Balanced-Hybrid-Proxy Loss To tackle the problem that SCL loss pays more attention on head classes, we introduce BCL and propose balanced-hybrid-proxy loss to treat classes equally. Given a batch of samples ℬ = { (x^(1,2)_i,y_i)}_B, let 𝒵={ z^(1,2)_i}_B = { z^1_1,z^2_2,...,z^1_B,z^2_B} be the feature embeddings in a batch and B denotes the batch size. For an anchor sample z_i∈𝒵 in class c, we unify the positive image set as z^+={ z_j|y_j = y_i = c, j ≠ i }. Also for an anchor proxy p^c_i, we unify all positive proxies as p^+. 
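As a brief aside before the loss, the reversed-imbalance allocation rule for N^p_c above can be sketched as follows. The class counts used in the example are illustrative (roughly the ISIC2018 class frequencies, whose imbalance factor matches the 58.30 quoted later) rather than the actual training split.

```python
import math

def proxies_per_class(class_counts):
    """Number of proxies per class following the reversed-imbalance rule.

    class_counts: dict mapping class index -> number of training samples N_c.
    The head class (N_c == N_max) gets a single proxy; every other class gets
    floor(N_max / (10 * N_c)) + 2 proxies, so rarer classes receive more proxies.
    """
    n_max = max(class_counts.values())
    return {c: 1 if n == n_max else math.floor(n_max / (10 * n)) + 2
            for c, n in class_counts.items()}

# Illustrative long-tailed counts (approximately the ISIC2018 class sizes)
counts = {0: 6705, 1: 1113, 2: 514, 3: 327, 4: 1099, 5: 115, 6: 142}
print(proxies_per_class(counts))
# -> {0: 1, 1: 2, 2: 3, 3: 4, 4: 2, 5: 7, 6: 6}
```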
The proposed balanced-hybrid-proxy loss pulls points (both samples and proxies) in the same class together, while pushes apart samples from different classes in embedding space by using dot product as a similarity measure, which can be formulated as follows: L_BHP = -1/2B+∑_c ∈ C N^p_c∑_s_i∈{𝒵∪𝒫}1/2B_c+N^p_c-1∑_s_j∈{z^+∪ p^+}log exp(s_i· s_j/τ)/E E = ∑_c∈ C1/2B_c+N^p_c-1∑_s_k∈{𝒵_c∪𝒫_c}exp (s_i· s_k/τ) where B_c means the sample number of class c in a batch, τ is the temperature parameter. In addition, we further define 𝒵_c and 𝒫_c as a subset with the label c of 𝒵 and 𝒫 respectively. The average operation in the denominator of balanced-hybrid-proxy loss can effectively reduce the gradients of the head classes, making an equal contribution to optimizing each class. Note that our loss differs from BCL as we enrich the learning of relations between samples and proxies. Sample-to-sample, proxy-to-sample and proxy-to-proxy relations in the proposed loss have the potential to promote network's representation learning. Moreover, as the skin datasets are often small, richer relations can effectively help form a high-quality distribution in the embedding space and improve the separation of features. §.§ Balanced-Weighted Cross-Entropy Loss Taking both “imbalanced data" and “imbalanced diagnosis difficulty" into consideration, we design a curriculum schedule and propose balanced-weighted cross-entropy loss to train an unbiased classifier. The training phase are divided into three stages. We first train a general classifier, then in the second stage we assign larger weight to tail classes for “imbalanced data". In the last stage, we utilize the results on the validation set as the diagnosis difficulty indicator of skin disease types to update the weights for “imbalanced diagnosis difficulty". The loss is given by: L_BWCE = - 1/B∑_i=1^B w_i CE(ỹ_̃ĩ,y_i) w_i = {[ 1 e<E_1; (C/N_c/∑_c∈ C1/N_c)^e-E_1/E_2-E_1 E_1<e<E_2; (C/f^e_c/∑_c∈ C1/f^e_c)^e-E_2/E-E_2 E_2<e<E; ]. where w denotes the weight and ỹ denotes the network prediction. We assume f^e_c is the evaluation result of class c on validation set after epoch e and we use f1-score in our experiments. The network is trained for E epochs, E_1 and E_2 are hyperparameters for stages. The final loss is given by Loss = λ L_BHP + μ L_BWCE where λ and μ are the hyperparameters which control the impact of losses. § EXPERIMENT §.§ Dataset and Implementation Details §.§.§ Dataset and Evaluation Metrics. We evaluate the ECL on two publicly available dermoscopic datasets ISIC2018<cit.> and ISIC2019<cit.>. The 2018 dataset consists of 10015 images in 7 classes while a larger 2019 dataset provides 25331 images in 8 classes. The imbalanced factors α = N_max/N_min of the two datasets are all >50 (ISIC2018 58.30 and ISIC2019 53.87), which means that skin lesion classification suffers a serious imbalanced problem. We randomly divide the samples into the training, validation and test sets as 3:1:1. We adopt five metrics for evaluation: accuracy (Acc), average precision (Pre), average sensitivity (Sen), macro f1-score (F1) and macro area under curve (AUC). Acc and F1 are considered as the most important metrics in this task. §.§.§ Implementation Details. The proposed algorithm is implemented in Python with Pytorch library and runs on a PC equipped with an NVIDIA A100 GPU. We use ResNet50 <cit.> as backbone and the embedding dimension d is set to 128. We use SGD as the optimizer with the weight decay 1e-4. 
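Before the remaining training details, a small sketch of the curriculum weights w used in the balanced-weighted cross-entropy loss above. The normalisation (C/N_c divided by the sum of inverse counts in stage two, and the analogous expression with the per-class validation F1 scores in stage three) follows our reading of the formula and should be treated as an approximation rather than the authors' exact implementation.

```python
import numpy as np

def bwce_class_weights(epoch, class_counts, val_f1, E1=20, E2=50, E=100):
    """Per-class weights w_c for the balanced-weighted cross-entropy loss.

    Stage 1 (epoch < E1): uniform weights, i.e. plain cross-entropy.
    Stage 2 (E1 <= epoch < E2): inverse-frequency weights annealed in with
        exponent (epoch - E1) / (E2 - E1)  -> "imbalanced data".
    Stage 3 (epoch >= E2): inverse validation-F1 weights annealed in with
        exponent (epoch - E2) / (E - E2)   -> "imbalanced diagnosis difficulty".
    """
    counts = np.asarray(class_counts, dtype=float)
    f1 = np.clip(np.asarray(val_f1, dtype=float), 1e-3, None)   # guard against division by zero
    C = len(counts)

    if epoch < E1:
        return np.ones(C)
    if epoch < E2:
        base, power = (C / counts) / np.sum(1.0 / counts), (epoch - E1) / (E2 - E1)
    else:
        base, power = (C / f1) / np.sum(1.0 / f1), (epoch - E2) / (E - E2)
    return base ** power

# The weights can then be passed to the classifier loss, e.g.
# w = torch.tensor(bwce_class_weights(epoch, counts, f1_per_class), dtype=torch.float32)
# ce = torch.nn.CrossEntropyLoss(weight=w)
```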
The initial learning rate is set to 0.002 and decayed by cosine schedule. We train the network for 100 epochs with a batch size of 64. The hyperparameters E_1, E_2, τ, λ, and μ are set to 20, 50, 0.01, 1, and 2 respectively. We use the default data augmentation strategy on ImageNet in <cit.> as T_1 for classification branch. And for CL branch, we add random grayscale, rotation, and vertical flip in T_1 as T_2 to enrich the data representations. Meanwhile, we only conduct the resize operation to ensure input size 224 × 224 × 3 during testing process. The models with the highest Acc on validation set are chosen for testing. We conduct experiments in 3 independent runs and report the standard deviations in the supplementary material. §.§ Experimental Results §.§.§ Quantitative Results. To evaluate the performance of our ECL, we compare our method with 10 advanced methods. Among them, focal loss <cit.>, LDAM-DRW <cit.>, logit adjust <cit.>, and MWNL <cit.> are the re-weighting loss methods. BBN <cit.> is the methods based on re-balancing training strategy while Hybrid-SC <cit.>, SCL <cit.>, BCL <cit.>, TSC <cit.> and ours are the CL-based methods. Moreover, MWNL and SCL have been verified to perform well in the skin disease classification task. To ensure fairness, we re-train all methods by rerun their released codes on our divided datasets with the same experimental settings. We also confirmed that all models have converged and choose the best eval checkpoints. The results are shown in Table <ref>. It can be seen that ECL has a significant advantage with the highest level in most metrics on two datasets. Noticeably, our ECL outperforms other imbalanced methods by great gains, e.g., 2.56% in Pre on ISIC2018 compared with SCL and 4.33% in F1 on ISIC2019 dataset compared with TSC. Furthermore, we draw the confusion matrixes after normalization in Fig. <ref>, which illustrate that ECL has significantly improved most of the categories, from minority to majority. §.§.§ Ablation Study. To further verify the effectiveness of the designs in ECL, we conduct a detailed ablation study shown in Table <ref> (the results on ISIC2018 are shown in supplementary material Table S2). First, we directly move the contrastive learning (CL) branch and replaced the balenced-weighted cross-entropy (BWCE) loss with cross-entropy (CE) loss. We can see from the results that adding CL branch can significantly improve the network's data representation ability with better performance than only adopting a classifier branch. And our BWCE loss can help in learning a more unbiased classifier with an improvement of 1.94% in F1 compared with CE on ISIC2019. Then we train the ECL w/o cycle update strategy. The overall performance of the network has declined compared with training w/ the strategy, indicating that this strategy can better enhance proxies learning through the whole data distribution. In the end, we also set the proxies' number of different classes equal to explore whether the classification ability of the network is improved due to the increase in the number of proxies. With more proxies, metrics fluctuate and do not increase significantly. However, the result of using proxies generated by reversed balanced way in hybrid-proxy model (HPM) outperforms equal proxies in nearly all metrics, which proves that more proxies can effectively enhance and enrich the information of tail classes. 
§ CONCLUSION In this work, we present a class-enhancement contrastive learning framework, named ECL, for long-tailed skin lesion classification. The hybrid-proxy model and balanced-hybrid-proxy loss are proposed to tackle the problem that SCL-based methods pay less attention to the learning of tail classes. Class-dependent proxies are generated in the hybrid-proxy model to enhance the information of tail classes, and rich relations between samples and proxies are utilized to improve the representation learning of the network. Furthermore, the balanced-weighted cross-entropy loss is designed to help train an unbiased classifier by considering both "imbalanced data" and "imbalanced diagnosis difficulty". Extensive experiments on the ISIC2018 and ISIC2019 datasets have demonstrated the effectiveness and superiority of ECL over the other compared methods. § SUPPLEMENTARY MATERIAL: ECL: CLASS-ENHANCEMENT CONTRASTIVE LEARNING FOR LONG-TAILED SKIN LESION CLASSIFICATION
http://arxiv.org/abs/2307.04517v1
20230710122524
Study on the Correlation between Objective Evaluations and Subjective Speech Quality and Intelligibility
[ "Hsin-Tien Chiang", "Kuo-Hsuan Hung", "Szu-Wei Fu", "Heng-Cheng Kuo", "Ming-Hsueh Tsai", "Yu Tsao" ]
eess.AS
[ "eess.AS" ]
Subjective tests are the gold standard for evaluating speech quality and intelligibility, but they are time-consuming and expensive. Thus, objective measures that align with human perceptions are crucial. This study evaluates the correlation between commonly used objective measures and subjective speech quality and intelligibility using a Chinese speech dataset. Moreover, new objective measures are proposed combining current objective measures using deep learning techniques to predict subjective quality and intelligibility. The proposed deep learning model reduces the amount of training data without significantly impacting prediction performance. We interpret the deep learning model to understand how objective measures reflect subjective quality and intelligibility. 
We also explore the impact of including subjective speech quality ratings on speech intelligibility prediction. Our findings offer valuable insights into the relationship between objective measures and human perceptions. Objective measures, subjective listening tests, speech quality, speech intelligibility § INTRODUCTION Speech quality and intelligibility are crucial in various speech-related applications, such as speech enhancement (SE), teleconferencing, voice conversion and text-to-speech, and hearing aids. As humans are the end-users of these applications, subjective listening tests are considered the most precise and trustworthy way to evaluate speech quality and intelligibility. However, conducting listening tests on a large number of participants is time-consuming and expensive. Therefore, a significant amount of research has been devoted to developing objective measures that can mathematically quantify speech quality and intelligibility. Objective measures can be divided into intrusive measures, where quality and intelligibility are estimated by comparing degraded/processed speech with clean references, and non-intrusive measures, where quality and intelligibility are calculated directly on the degraded/processed speech without clean reference. Perceptual Evaluation of Speech Quality (PESQ) <cit.> and Perceptual Objective Listening Quality Analysis (POLQA) <cit.> are intrusive speech quality measures. Despite being widely used in speech processing research, PESQ and POLQA are shown to correlate suboptimally with subjective tests <cit.>. Short-time objective intelligibility measure (STOI) <cit.> and extended STOI (ESTOI) <cit.> are popular intrusive speech intelligibility measures. However, STOI has been reported to provide suboptimal prediction capability for the subjective intelligibility results of the Wiener filtering <cit.> or deep learning (DL)-based <cit.> SE systems. Moreover, intrusive measures are less applicable to real-world scenarios because clean signals may not always be available. Compared with intrusive methods, non-intrusive methods such as ITU-T P.563 <cit.>, ANIQUE+ <cit.>, and speech-to-reverberation modulation ratio (SRMR) <cit.> overcome the limitation. A recent approach of non-intrusive methods directly predicts objective measures by using DL models without the need of a clean signal. These models are trained to predict standard objective measures, such as PESQ and STOI <cit.>. Several studies have shown high performance using this approach, but the ground-truth labels used to train these DL models are not always aligned with human perception. To better align with human perception, researchers have started to rely on ground truth human labels for model training. DNSMOS <cit.> and NISQA <cit.> are examples of DL models trained on mean opinion score (MOS) datasets, where DNSMOS focuses on distortions in SE and NISQA on distortions in communication networks. Andersen et al. <cit.> and Pedersen et al. <cit.> used convolutional neural network (CNN) models to predict subjective intelligibility. However, owing to the time-consuming nature of conducting subjective listening tests, collecting large-scale datasets of human labels to train DL-based models is challenging. One potential solution to bridge the gap between objective measures and human perception without relying on large-scale datasets of human labels is to predict human perception of speech quality and intelligibility by leveraging commonly used objective measures. 
The advantage of this approach is that it is considerably less time-consuming compared to conducting subjective listening tests. Previous studies have attempted to predict either speech quality or intelligibility using objective measures. Hu et al. <cit.> proposed composite measures for evaluating SE by linearly combining objective quality measures. Liu et al. <cit.> showed that ASR and objective quality measures have the potential to estimate intelligibility under noisy conditions. Ma et al. <cit.> reported that objective measures originally designed to predict speech quality can reliably predict the intelligibility of noise-suppressed speeches. However, there is a lack of research that considers both quality and intelligibility criteria and provides interpretations of the relationship between objective measures and human perception to indicate how objective measures reflect subjective quality and intelligibility in practical usage. In this study, we first time proposed using DL models that take a combination of off-the-shelf objective measures as inputs to predict subjective quality and intelligibility ratings. We evaluated the correlation between commonly used objective measures and subjective ratings of quality and intelligibility on a Chinese dataset called TMHINT-QI <cit.>, and then use DL techniques to propose new objective measures composed of all of the used objective measures. We demonstrated that the proposed DL model can achieve strong performance in predicting subjective quality and intelligibility ratings, even when trained on small amounts of training data. This core strength makes the DL model practical for real-world applications, as it can still maintain high accuracy without requiring a large amount of training data. Furthermore, we interpreted the proposed DL models to describe the relationship between the objective measures and subjective ratings of speech quality and intelligibility. We also investigated the potential improvement in intelligibility prediction by incorporating subjective quality ratings. This allows us to leverage the extensive research conducted in the field of speech quality assessment to enhance the assessment of speech intelligibility. Our results can provide valuable insights into the utility and limitations of objective measures in reflecting subjective quality and intelligibility ratings and potentially contribute to bridging the gap between objective measures and human perception. The remainder of this paper is organized as follows. Section <ref> describes the objective measures used in our experiments. Section <ref> details our dataset and presents our correlation analysis. We present our experimental setup and results in Section <ref>. Finally, we conclude the paper in Section <ref>. § OBJECTIVE MEASURES This study investigates several objective measures to predict subjective speech quality and intelligibility. These measures are categorized as either intrusive or non-intrusive, depending on whether a clean reference is required or not. In this section, we describe both types of objective measures. §.§ Intrusive objective measures Six different intrusive objective measures were assessed: PESQ, ITU-T P.835, normalized covariance metric (NCM), STOI, ESTOI, and word error rate (WER). PESQ evaluates speech quality and ranges from -0.5 to 4.5. ITU-T P.835 evaluates speech quality in terms of three aspects: signal quality (SIG), background noise (BAK), and overall quality (OVRL) <cit.>. 
NCM assesses the covariance between the envelopes of the clean and degraded/processed speech and provides scores ranging from 0 to 1 <cit.>. STOI and ESTOI evaluate speech intelligibility and have scores between 0 and 1. Finally, WER is calculated using Google ASR <cit.>. §.§ Non-intrusive objective measures In addition to intrusive measures, two non-intrusive objective measures were also evaluated: DNSMOS P.835 <cit.> and MOSA-Net <cit.>. DNSMOS P.835 is a multi-stage self-teaching based model that evaluates speech quality based on three aspects: signal quality (DNSMOS-SIG), background noise (DNSMOS-BAK), and overall quality (DNSMOS-OVRL). MOSA-Net uses time and spectral features and latent representations from a self-supervised model, and is originally trained to predict several objective metrics, but can be adapted for MOS predictions. Here we adopted the MOS prediction results of the MOSA-Net. Overall, four quality measures (PESQ, ITU-T P.835, DNSMOS P.835, MOSA-Net) and four intelligibility measures (NCM, STOI, ESTOI, WER) were involved in this study. Notably, we were able to obtain several objective measures, such as WER, DNSMOS, and MOSA-Net, by leveraging pre-trained APIs from third-party sources, eliminating the need for additional efforts to acquire these pre-trained models or gather extensive amounts of training data. § CORRELATIONS BETWEEN OBJECTIVE AND SUBJECTIVE ASSESSMENTS §.§ Dataset We conducted experiments using TMHINT-QI [TMHINT-QI dataset: http://gofile.me/6PGhz/4U6GWaOtY; TMHINT-QI dataset description: https://github.com/yuwchen/InQSS] <cit.>, a Chinese corpus containing noisy and enhanced data. To form the noisy data, we corrupted the clean speech from the TMHINT dataset with four types of noise (babble, street, pink, and white) at four different SNR levels (-2, 0, 2, and 5 dB). The noisy data was then enhanced by the minimum mean squared error (MMSE), Karhunen-Loéve transform (KLT), deep denoising-autoencoder (DDAE), fully convolutional network (FCN), and transformer model (denoted as Trans). Human listeners were recruited to evaluate the subjective TMHINT-QI scores. A total of 226 people aged between 20 and 50 years participated in the listening test. The quality score ranges from 1-5, where a higher value indicates a better speech quality. The intelligibility score calculates the number of words correctly recognized by listeners in a ten-word sentence; the intelligibility score ranged from 0-10. A higher intelligibility score indicates that the listeners correctly identify more words. There were 24,408 samples in total. We followed the setup in <cit.> to split the TMHINT-QI dataset into training and test sets. The subjective scores of each utterance were averaged to obtain its ground-truth score. Hence, the final training and test sets contained 12,937 and 1,978 unique utterances, respectively, along with their subjective quality and intelligibility scores. The training set was randomly split into 90% for training and 10% for validation in accordance with <cit.>. More details can be found in <cit.>. §.§ Correlation analysis We investigated the relationship between subjective quality and intelligibility ratings and objective measures of the test data by calculating the Pearson correlation coefficient (PCC). The correlation values between the subjective and objective measures are presented in Fig. <ref>, along with the correlation between subjective quality and intelligibility. Several observations are reported in these figures. 
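The correlation values discussed next can be computed directly from a per-utterance table of scores. The sketch below uses pandas and SciPy; the column names are placeholders, and the listed columns are only one plausible expansion of the four quality and four intelligibility measures into the twelve model inputs mentioned later.

```python
import pandas as pd
from scipy.stats import pearsonr

# df holds one row per test utterance: objective measures plus averaged subjective scores.
objective_cols = ["PESQ", "SIG", "BAK", "OVRL", "DNSMOS_SIG", "DNSMOS_BAK",
                  "DNSMOS_OVRL", "MOSA_Net", "NCM", "STOI", "ESTOI", "WER"]

def correlation_table(df):
    """PCC of every objective measure against the averaged subjective ratings."""
    rows = []
    for col in objective_cols:
        r_q, _ = pearsonr(df[col], df["subjective_quality"])
        r_i, _ = pearsonr(df[col], df["subjective_intelligibility"])
        rows.append({"measure": col, "pcc_quality": r_q, "pcc_intelligibility": r_i})
    return pd.DataFrame(rows).sort_values("pcc_quality", ascending=False)
```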
We first found that human perceptions of quality and intelligibility are moderately correlated with each other, showing a correlation of about 0.68 between subjective quality and intelligibility ratings. In addition, we observed that all objective measures, with the exception of WER, demonstrated higher correlations with subjective quality compared to subjective intelligibility. For subjective quality, it is interesting to note that a high correlation is expected to PESQ, but objective intelligibility measures (ie, NCM, ESTOI, STOI, and WER) are more highly correlated with subjective quality ratings. For subjective intelligibility, the correlations of objective quality measures (ie, PESQ, ITU-T P.835, DNSMOS P.835, and MOSA-Net) are generally lower (below 0.24), which is reasonably expected for which they were originally designed to predict speech quality. Interestingly, in relation to speech intelligibility, high correlations are expected between objective intelligibility measures (ie, NCM, ESTOI, STOI, and WER), but all except WER have moderate correlations with subjective intelligibility in our dataset. We also exploited the correlations of either STOI or PESQ with WER. Fig. <ref> shows the scatter plots of WER against PESQ and STOI. Our finding is consistent with previous studies <cit.>, which show that the correlation value between STOI and WER is higher than that between PESQ and WER. This supports the results in <cit.> that integrating STOI into SE model optimization can improve WER on the enhanced speech. In summary, the strongest absolute correlation with subjective quality is found in NCM, followed by ESTOI and STOI. For subjective intelligibility, WER shows the highest absolute correlation, followed by subjective quality and NCM. § EXPERIMENTS §.§ Experimental setup The correlation analysis in Section <ref> indicates that none of the objective measures show a strong correlation (above 0.8) with subjective quality and intelligibility ratings, which aligns with the findings of <cit.> that no single objective measure demonstrates a high correlation. Thus, we seek to develop a DL model which takes a combination of objective measures as inputs to predict corresponding subjective quality and intelligibility scores. Fig. <ref> illustrates the details of the proposed DL model. Each of the objective and subjective measures was normalized using min–max to be between 0 and 1 before feeding into the DL model. Twelve objective measures are utilized as input for the DL model. The DL model consists of six dense layers, where each dense layer is followed by GELU activation, except for the last layer, which is followed by a sigmoid activation. This sigmoid activation produces values between 0 and 1, which are then divided into two separate tasks, one for quality estimation and the other for intelligibility prediction. Afterward, the output values are denormalized to obtain the predicted subjective quality and intelligibility scores. To evaluate the performance, three criteria: mean squared error (MSE), PCC and Spearman’s rank correlation coefficient (SRCC) were selected as evaluation criteria. §.§ Experimental results In our comparative analysis, we examined the performance of the proposed DL model in relation to the LR model, which predicts subjective quality or intelligibility scores separately. Additionally, we incorporated two DL-based non-intrusive speech assessment models into our evaluation. The first model, InQSS, combines self-supervised models and scattering transform. 
It has the capability to predict both subjective quality and intelligibility simultaneously, as outlined in <cit.>. The second model, MOS-SSL, utilizes fine-tuned features from wav2vec 2.0 to predict MOS <cit.>. We trained the MOS-SSL model on the TMHINT-QI dataset using a single task criterion to predict quality and intelligibility scores as separate targets. Table <ref> summarizes the results. The superior performance of the DL model over the LR model is evident in both subjective quality and intelligibility prediction. Additionally, the effectiveness of the DL model is confirmed by achieving higher PCC and SRCC scores compared to the InQSS and MOS-SSL methods. These results demonstrate the improved accuracy and reliability of the DL model in predicting subjective quality and intelligibility. We also examine how well the proposed DL model predicts compared to InQSS and MOS-SSL when different quantities of training data are accessible. To avoid the time-consuming process of conducting listening tests, it is preferable to possess a model that demands less training data but still performs comparable results. Table <ref> illustrates the percentage decrease in PCC for various percentages of training data, while Fig. <ref> visually represents the changes in PCC and SRCC as the number of training data varies. Table <ref> clearly shows that when trained with only 25% of the data, all three models were close to reaching saturation. The InQSS and MOS-SSL models had a decrease within 3%, while the DL model for quality prediction had a decrease of 1%. For intelligibility prediction, the InQSS and MOS-SSL models had a decrease within 8%, while the DL model had a decrease of 5%. Furthermore, the DL model demonstrated its superiority in performance with a percentage decrease of 3.4% for quality prediction and 4.6% for intelligibility prediction when trained with only 5% of the training data. Fig. <ref> also clearly demonstrates that DL models consistently outperform InQSS and MOS-SSL models. Moreover, it shows that the increase in PCC and SRCC values gradually slows down when the amount of training data exceeds 1,000. The overall analysis indicates that InQSS and MOS-SSL heavily relies on a large amount of training data and exhibits promising prediction performance only when more training data is available. In contrast, the proposed DL model's capacity to achieve good performance with limited amounts of training data is a significant advantage. This is particularly beneficial because collecting subjective human ratings is a challenging and expensive process, and being less reliant on a large amount of data is highly advantageous. §.§ Interpretation of the DL model Our aim is to investigate how each objective measure impacts the prediction performance of subjective quality and intelligibility. In order to uncover the underlying functional relationship between these measures of the DL model, we generated subjective quality and intelligibility scores by feeding data samples obtained from a multivariate normal distribution into the DL model. The subjective quality and intelligibility scores were then divided into 200 equal parts based on the values of the objective measures being analyzed. The scores for each part were averaged, resulting in 200 scores for each subjective quality and intelligibility. These scores were connected to form a line graph, which illustrates the functional relationship between the quality or intelligibility scores and the objective measures. 
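A minimal sketch of the probing procedure just described is given below, together with a toy version of the prediction network from the experimental setup that it probes (six dense layers with GELU in between and a final sigmoid over the two tasks). The hidden widths, the number of samples drawn per run, and the equal-count binning along the probed measure are our assumptions; the 1,000 repetitions mentioned next correspond to the n_repeats argument.

```python
import numpy as np
import torch
import torch.nn as nn

class QualityIntelligibilityNet(nn.Module):
    """Six dense layers, GELU between them, sigmoid on the 2-dim output (quality, intelligibility)."""
    def __init__(self, n_inputs=12, hidden=(128, 128, 64, 64, 32)):
        super().__init__()
        dims, layers = (n_inputs,) + hidden, []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.GELU()]
        layers += [nn.Linear(dims[-1], 2), nn.Sigmoid()]      # sixth dense layer + sigmoid
        self.net = nn.Sequential(*layers)

    def forward(self, x):                                     # x: (batch, 12) min-max normalised measures
        out = self.net(x)
        return out[:, 0], out[:, 1]

def response_curve(model, mean, cov, probe_idx, n_samples=20000, n_bins=200, n_repeats=1000):
    """Mean/std of predicted quality over n_bins equal-count bins of one objective measure."""
    curves = []
    for _ in range(n_repeats):
        x = np.random.multivariate_normal(mean, cov, size=n_samples)
        with torch.no_grad():
            q_pred, _ = model(torch.tensor(x, dtype=torch.float32))
        order = np.argsort(x[:, probe_idx])
        parts = np.array_split(order, n_bins)                 # 200 equal parts along the probed measure
        curves.append([q_pred.numpy()[p].mean() for p in parts])
    curves = np.asarray(curves)
    return curves.mean(axis=0), curves.std(axis=0)
```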
We repeated this process 1,000 times and the functional relationship between the objective and subjective measures of the DL model is depicted in Figure <ref>, where the solid line represents the mean and the light-colored areas represent the standard deviation of the 1,000 lines. We limited our focus to several objective measures due to space limitations. From Fig. <ref>, it is evident that the relationships between objective measures and subjective quality is relatively linear compared to subjective intelligibility. As for subjective intelligibility, it can be noted that the slope gradually becomes less steep as objective measures increase. This implied that higher values of objective measures can accurately demonstrate the expected improvement in subjective quality, but not necessarily in subjective intelligibility. In addition, we found that subjective measures decline when DNSMOS-BAK reaches approximately 2.0. More specifically, there is a significant reduction in subjective intelligibility, decreasing from 9.2 to 8.8, while subjective quality experiences a less drastic drop, going from 3.4 to 3.2. Our findings suggest that this phenomenon occurs because attempts to suppress background noise inevitably result in speech distortion, which has a negative impact on speech quality and intelligibility. We can observe that individual objective measures cannot fully capture subjective quality and intelligibility with perfection. This observation supports our rationale for integrating all objective measures in order to establish a strong correlation with subjective listening tests. §.§ Enhancing intelligibility prediction through the incorporation of subjective quality While our DL model predicts both subjective quality and intelligibility simultaneously, we are interested in exploring whether including subjective quality can enhance the prediction of intelligibility. The moderate correlation of 0.68 between subjective quality and subjective intelligibility indicates a potential association between the two factors. Consequently, we propose that integrating subjective quality ratings has the potential to enhance the prediction of subjective intelligibility, to some extent at least. Meanwhile, opting for quality tests instead of intelligibility tests provides significant advantages in terms of saving effort. Quality tests require less time compared to the time-consuming process of listening intelligibility tests, which involve word identification for calculating intelligibility scores. Therefore, choosing quality tests offers a more time-efficient approach. We introduced modifications to the proposed DL model by including subjective quality scores as additional inputs. As a result, the model's primary objective shifted to predicting subjective intelligibility scores while taking into account these subjective quality scores. Table <ref> demonstrates a significant improvement in subjective intelligibility prediction when incorporating subjective quality ratings. The PCC value increased from 0.792 to 0.870, validating the effectiveness of using subjective quality to predict intelligibility. The inclusion of subjective quality ratings represents a valuable contribution to improving the accuracy of intelligibility predictions. By integrating subjective quality, we can harness the extensive research conducted in the field of speech quality assessment to enhance the assessment of speech intelligibility. § CONCLUSION The contributions of this study are four fold. 
First, the study proposes the use of DL models that take a combination of off-the-shelf objective measures as inputs to predict subjective quality and intelligibility ratings. Second, we evaluate the proposed DL model against different speech assessment methods and analyze the percentage decrease in PCC as the amount of training data varies. The experimental results highlight the significant advantage of our DL model, which exhibits strong performance even with a small amount of training data. This is particularly beneficial in situations where gathering subjective human ratings is arduous and expensive. Third, we provide insights into how objective measures reflect subjective quality and intelligibility. This analysis can help researchers better understand the relationship between objective and subjective measures. Fourth, we demonstrate that incorporating subjective quality ratings can improve the prediction of subjective intelligibility. This integration allows us to leverage the extensive research conducted in the field of speech quality assessment to enhance speech intelligibility evaluation. Additionally, quality tests offer a time-saving advantage compared to the more time-consuming process of listening intelligibility tests. 
http://arxiv.org/abs/2307.03966v1
20230708123510
Multi-Intent Detection in User Provided Annotations for Programming by Examples Systems
[ "Nischal Ashok Kumar", "Nitin Gupta", "Shanmukha Guttula", "Hima Patel" ]
cs.AI
[ "cs.AI", "cs.SE" ]
Both authors contributed equally to the paper Work done during internship at IBM Research UMass Amherst United States [email protected] [1] IBM Research India [email protected] IBM Research India [email protected] IBM Research India [email protected] printacmref=false In mapping enterprise applications, data mapping remains a fundamental part of integration development, but its time consuming. An increasing number of applications lack naming standards, and nested field structures further add complexity for the integration developers. Once the mapping is done, data transformation is the next challenge for the users since each application expects data to be in a certain format. Also, while building integration flow, developers need to understand the format of the source and target data field and come up with transformation program that can change data from source to target format. The problem of automatic generation of a transformation program through program synthesis paradigm from some specifications has been studied since the early days of Artificial Intelligence (AI). Programming by Example (PBE) is one such kind of technique that targets automatic inferencing of a computer program to accomplish a format or string conversion task from user-provided input and output samples. To learn the correct intent, a diverse set of samples from the user is required. However, there is a possibility that the user fails to provide a diverse set of samples. This can lead to multiple intents or ambiguity in the input and output samples. Hence, PBE systems can get confused in generating the correct intent program. In this paper, we propose a deep neural network based ambiguity prediction model, which analyzes the input-output strings and maps them to a different set of properties responsible for multiple intent. Users can analyze these properties and accordingly can provide new samples or modify existing samples which can help in building a better PBE system for mapping enterprise applications. Multi-Intent Detection in User Provided Annotations for Programming by Examples Systems Hima Patel August 12, 2023 ======================================================================================= § INTRODUCTION String Transformation in mapping enterprise applications refers to the specific paradigm in the domain of Programming by Example (PBE) approaches, where a computer program learns to capture user intent, expressed through a set of input-output pairs, from a pre-defined set of specifications and constraints <cit.>. The set of specifications and constraints is expressed through the Domain-Specific Language (DSL) which consists of a finite number of atomic functions or string expressions that can be used to formally represent a program for the user to interpret. Most of the PBE systems <cit.> for string transformation use ranking mechanisms that are either built using heuristics or learned using historical data. These kinds of ranking systems are designed to capture the following two important characteristics: small length and simpler programs. Such kind of ranking system mostly depends on the quality and number of input and output (I/O) annotation samples to learn better program. The quality of I/O samples denotes how good the I/O samples are to generate a single intent output. The number of given I/O annotation samples can vary depending on the user intent, but the fewer the better for the user (as the user has to provide less annotations). 
Therein lies the challenge of learning correct intent i.e., if examples are too few, then many possible DSL functions can satisfy them, and picking one intent (or program) arbitrarily or based on some ranking mechanism that satisfies simplicity and smaller length criteria, might lead to non-desired intent program. This might yield a solution that works well only on the given I/O samples but not on unseen samples. Similarly, the quality of I/O samples (irrespective of high I/O samples count) plays an important role in generating the correct intent program. The above two challenges are critical for PBE kind of systems to understand the user's intent by analyzing the given I/O samples. This can lead to a sub-optimal program that works on seen data but does not give desired outputs for unseen data. Hence, it is important to understand whether the given I/O samples capture the user's desired intent correctly or not. For illustration, let's take an example shown in Table <ref>. "Train" columns denote the columns representing I/O samples used to generate a transformation program and "Test" columns denote the input sample column which is passed to the transformation program to generate an output. GT output column denotes the actual desired output. For each example, the user provides 3 I/O samples to generate a transformation program using <cit.>. In the first example, user intent is to extract substring after ”_" character, but here PROSE system learns the program which transforms test input “B_DS2345" into test output “2345" (see generated output column), which implies that the system learns to extract last numeric substring, which is different from a user-desired intent. This happens because there can be many possible programs to transform one set of inputs into outputs. Sometimes those programs converge to the same intent and other times it can lead to multiple intents. For example, in Table <ref>, for the first set of I/O samples, multiple programs can be possible. For test input sample “B_D2S345", where desired output value is “D2345". However, programs in Program(s) column generate different values for this example, first program generates - “345", second program -“D2S345", third program - “D2S345", fourth program - “345" and so on. This shows that all these consistent programs with I/O samples can lead to multiple intents (or outputs) on unseen data. For the above use case, two clear intents are - (a) Extract numeric substring after “_", and second intent is extract substring after “_". But if we look at the second row in Table <ref>, where we replaced the third sample with a better sample “GE_D443 - D443", then automatically first intent program got eliminated from the programs list. Hence, accessing the quality of annotations with respect to single or multiple intents is required for better PBE systems. If the user provides sufficient and single intent specific samples, the system can easily generalize to the rest of the samples. Hence, there is a need for a system that analyses the I/O samples that can help in finding multiple intent issues in annotations. This would help in informing the user about multi-intent issues before generating a transformation program. Therefore, we propose a framework to understand the quality of I/O samples to accurately predict a single confident program. To achieve this goal, we introduce a set of generic properties which helps to find ambiguity/multiple intents in a given set of I/O annotation samples. 
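To make the ambiguity concrete, the short Python sketch below shows two candidate transformation programs that both reproduce a set of training annotations in the spirit of the first example of Table <ref>, yet disagree on the unseen test input discussed above. The training triples and function names are our own illustrative stand-ins, not the actual table entries or the PROSE DSL.

```python
import re

# Hypothetical stand-ins for the three user-provided samples of example 1;
# the intent is "extract everything after '_'", but every output happens
# to be purely numeric, which leaves a second intent open.
train = [("AB_123", "123"), ("CDE_9087", "9087"), ("XX_55", "55")]

def prog_after_underscore(s):      # intent A: substring after "_"
    return s.split("_", 1)[1]

def prog_last_numeric_run(s):      # intent B: last run of digits
    return re.findall(r"\d+", s)[-1]

# Both programs are consistent with all training annotations ...
assert all(prog_after_underscore(i) == o for i, o in train)
assert all(prog_last_numeric_run(i) == o for i, o in train)

# ... but they disagree on the unseen test input from the text.
test_input = "B_DS2345"
print(prog_after_underscore(test_input))   # DS2345  (desired intent)
print(prog_last_numeric_run(test_input))   # 2345    (unintended program)
```

Adding a sample such as "GE_D443" → "D443" breaks the second assertion and rules out the last-numeric-run program, which is exactly the effect of the improved third sample in the second row of the table.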
These properties are generic enough for most of the PBE systems because these properties are designed by analyzing several PBE systems' DSL. We propose a deep learning-based framework to automatically identify the presence of these properties in the annotations. The proposed framework takes a set of I/O samples annotation pairs as input and analyzes those samples together to classify the annotations to these properties. User can utilize this information to enhance the I/O samples, hence, generating more accurate, single intent, simpler and shorter program. In summary, the core contributions of our work are as follows: * Multi-Tasking Attention-Based Deep Neural Network to address the issues of input and output annotation quality to generate a program with the correct intent. * Defined a set of generic properties after analyzing several PBE systems' DSL that can help to find whether a given set of I/O samples can lead to multiple intents or not. * We present an extensive quantitative analysis of a synthetically generated dataset. We also show the motivation of each module of our proposed framework through an ablation study. * We also demonstrate the impact of detecting multiple intents and correcting them before building any PBE system. § OVERVIEW OF PROPOSED METHODOLOGY In this section, we discuss the overview of the proposed methodology, define the set of properties to detect multiple intents, and formally define the problem setting. For any PBE system, I/O samples play an important role in determining the correct intent program. Examples are an ambiguous form of specification: there can be different programs that are consistent with the provided examples, but these programs differ in their behavior on unseen inputs. If the user does not provide a large set of examples or less but good quality samples, the PBE system may synthesize unintended programs, which can lead to non-desired outputs. Hence, there is a need for a framework that can access the quality of I/O samples with respect to multiple intents before generating the program. To access the quality of I/O samples, the most important aspect is to understand how good I/O patterns are for PBE system DSL. The proposed framework (Figure <ref>) consists of two major modules, (a) For I/O annotations, defining set of properties which can cause ambiguity or multiple intents - we analyzed several string transformation specific DSL's, and came out with a generic set of properties which helps to identify whether given I/O samples can lead to multiple intents or not. However, the proposed system is generic enough that users can always add a new set of properties based on new functions introduced in DSL which can cause multiple intents, (b) Multiple intent analyzer - we designed a multi-tasking attention-based deep neural network to detect the ambiguity in given I/O samples based on given set of identified properties. The system first analyzes the user I/O annotation samples using the proposed deep learning framework to detect properties that cause multiple intents or ambiguities. In the next step, the user analyzes those detected properties and based on that, add or modifies samples in I/O annotations to improve the overall annotation quality to learn the correct intent program. In the next section, we will first discuss the properties that will be helpful to decide whether given I/O samples can cause multiple intents or not. 
In Section 2.2, we will describe the proposed deep learning-based framework which utilizes these properties to find the presence of multiple intents in given annotations. §.§ Properties to Detect Multiple Intent The most important part in finding the ambiguity or possibility of the multiple intents in a given annotation is to analyze the I/O samples for generic characteristics of operators present in the DSL. Mostly, all the DSLs that exist in the literature for string transformation-based PBE systems use similar kinds of operators like split, substring with regex or constant value as an argument, concat, replace, extract first substring, etc. We analyzed several string manipulation-specific DSLs and come out with five generic properties that can help in detecting the multiple intents in the I/O samples. Figure <ref> shows one of the DSL created by combining several other DSL's commonly used operators. There can be other string manipulations operators such as trim, but these are high-level operators and generally doesn't contribute in the multi-intent scenario. In this paper, we will use the DSL showed in Figure <ref> to illustrate the importance of the defined properties. Properties of I/O to detect the presence of multiple intents should be tightly bound to the DSL used for the PBE system. At the same time, those properties should also be (1) concise enough to capture the implicit or explicit multiple intent and (2) expressive enough to allow transformations to be achieved without any confusion in ranking between the programs. Below, we describe the set of 5 properties and the motivation behind their design. §.§.§ Similar Length Ambiguity - This kind of ambiguity can happen when the output substring of all the I/O pairs can be extracted by applying the same DSL operator on corresponding input annotations and have the same length. For example, in Table <ref>, example 1, following substring or continuous sequence in output “123" and “535" are extracted from the similar continuous sequence in input and also are of the same length, hence it is not clear that whether the user wants to extract everything after second “_" or just three characters. In terms of DSL, mostly this kind of ambiguity can be possible because of the outputs generated by constant length-based operators like substring with constant positions vs pattern-based operators like split, substring with a pattern. Formally, we can define this property as, given an example ((I_1, O_1),(I_2, O_2), ..., (I_l, O_l)) satisfies this property when the continues sequence of string in an output matches to the same continuous sequence of string in input and have the same number of characters across that sequence in all the output samples. I_l denotes the l^th input sample, and O_l denotes the corresponding output sample, and l denotes the total I/O samples in one example. §.§.§ Exact Position Placement Ambiguity - This kind of ambiguity can happen when the output substring of all the I/O pairs can be extracted by applying the same DSL operator on corresponding input annotations and extracted output string always starts or ends on the same position in the input string. For example, in Table <ref>, example 2, following substring or continuous sequence in output “Kumar" and “Williams" are extracted from the similar continuous sequence in input and also starts from the same position in input i.e. 
5, hence it is not clear that whether the user always wants to extract substring started from position 5 in input, or the user have some other desired intent (extract something after space character). In terms of DSL, mostly this kind of ambiguity can be possible because of the operators which allow constant positions to detect a position of substring vs operators which uses regex or split-based operation to extract substring. Formally, we can define this property as, given an example ((I_1, O_1),(I_2, O_2), ..., (I_l, O_l)), it satisfies this property when the continuous sequence of string in an output matches to the same continuous sequence of string in the corresponding input and it has a start or end always at the same position. §.§.§ Exact Match Ambiguity This kind of ambiguity can happen when the output substring of all the I/O pairs can be extracted by applying the same DSL operator on corresponding input annotations and extracted output substring across annotations have the same string value. For example, in Table <ref>, example 3, following substring or continuous sequence in output “11" and “11" are extracted from the similar continuous sequence in input and also have the same string value i.e. 11, hence it is not clear that whether the user always wants to have constant value 11 in output or the user want to extract this value from the input string. In terms of DSL, mostly this kind of ambiguity can be possible because of the operators which allow constant positions to detect a position of substring vs operators like split/substring which allow values to be extracted from the string itself. Formally, we can define this property as, given an example ((I_1, O_1),(I_2, O_2), ...., (I_l, O_l)) satisfies this property when the continues sequence of string in an output matches to same continuous sequence of string in input and have same value. §.§.§ Similar in Token Type Ambiguity This kind of ambiguity can happen when the output substring of all the I/O pairs can be extracted by applying the same DSL operator on corresponding input and extracted output substring across I/O pairs is of the same type. For example, in Table <ref>, example 4, following substring or continuous sequence in output “123" and “53" are extracted from the similar continuous sequence in input and also have the same value type, hence it is not clear that whether the user always wants to extract the same data type value or something else. Mostly, three types of tokens, Alphabet Tokens which consists of all uppercase and lowercase English alphabets, Numeric Tokens which consists of digits from 0 to 9 and Special-Character Tokens which consists of all printable special characters on the keyboard, are possible. Hence, we say that an example satisfies a similar token type if all its continuous substring in outputs are either all Alphabet Tokens or all Numeric Tokens or all Special-Character Tokens. In terms of DSL, mostly this kind of ambiguity can be possible because of the operators which allow a specific set of regex positions to detect a position of substring vs operators like split/substring which allows values to be extracted from string itself. Formally, we can define this property as, given an example satisfies this property when the continues sequence of string in an output matches to same continuous sequence of string in input and have same value type. 
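As a rough illustration of the four properties introduced so far (restricted to the single-substring case; the helper functions below are our own simplification, not the neural detector proposed in this paper), one can check them on a set of I/O pairs as follows:

```python
def occurrences(inp, out):
    """(start, end) index pairs at which the output substring occurs in the input."""
    n, m = len(inp), len(out)
    return {(i, i + m) for i in range(n - m + 1) if inp[i:i + m] == out}

def similar_length(pairs):
    return len({len(o) for _, o in pairs}) == 1

def exact_position(pairs):
    starts = [{s for s, _ in occurrences(i, o)} for i, o in pairs]
    ends = [{e for _, e in occurrences(i, o)} for i, o in pairs]
    return bool(set.intersection(*starts)) or bool(set.intersection(*ends))

def exact_match(pairs):
    return len({o for _, o in pairs}) == 1

def similar_token_type(pairs):
    def kind(s):
        return "numeric" if s.isdigit() else "alphabet" if s.isalpha() else "mixed"
    return len({kind(o) for _, o in pairs}) == 1

# I/O pairs borrowed from the saliency-map discussion later in the paper
pairs = [("niti gup", "gup"), ("klop kio", "kio"), ("xyz abc", "abc")]
print(similar_length(pairs), exact_position(pairs),
      exact_match(pairs), similar_token_type(pairs))   # True False False True
```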
§.§.§ Repeating Characters Ambiguity This kind of ambiguity can happen when the output substring of all the I/O pairs can be extracted by applying the same DSL operator on corresponding input annotations and have multiple instances of that output substring is possible in input. For example, in Table <ref>, example 5, following substring or continuous sequence in output “1" and “2" can be extracted from two similar positions from the input. Those positions can be defined by any low-level operators like constant positions, regex, or high-level operators like split, etc. In this case, that common substring is possible at two constant positions in input i.e. positions 3 and 9. Hence it is not clear that whether the user wants to extract a substring from position 3 or 9. This is DSL independent ambiguity, which can happen because the user provided the samples in the way that it internally it generating such kind of ambiguity. Formally, we can define this property as, given an example satisfies this property when the continues sequence of string in an output matches to multiple instance of continuous sequence of string in input. §.§ Problem Formulation Given a set of l input-output annotations ((I_1, O_1),.., (I_l, O_l)), and a set of p properties (P_1, P_2, .., Pp) which can help to detect multi-intents in I/O annotations. The goal of this task is to answer the question “Is there any multi-intent or ambiguity present in I/O samples", if yes, what kind of ambiguities exist. In this paper, p is set to 5, as we designed and discussed 5 properties in the last section that can hinder the generalization of PBE systems. To learn to detect these sets of ambiguities, we design the multi-tasking attention-based deep neural network model. We first generate a set of I/O annotation examples corresponding to each of the five ambiguous properties. We refer to a single I/O pair (I_1, O_1) as a sample and a group of I/O pairs to learn the program using any PBE system as an example. Here, l denotes the total samples (I/O pair) used for each example. In this work, we used l=3, which means in each example, we have three I/O samples. One example can have multiple properties issues also. Intuitively, the goal of our proposed task is to detect the ambiguities in the user-provided I/O annotations so that the user resolves these ambiguities by adding the new or modifying the existing samples. This will enable PBE systems to generate a single intent program that performs as desired on the unseen samples. In the proposed framework, we train a multi-tasking attention-based deep neural network model as shown in Figure <ref> to learn the ambiguities as expressed in the I/O examples. We define each task as a formulation to learn one type of ambiguity. Consequently, the proposed framework solves the five tasks at a time corresponding to ambiguity detection for five different properties. Our model follows an encoder-decoder architecture where the encoder is shared among all the tasks and the decoder is independent for each task. We pose this problem as a multi-class classification problem. Each example is classified against five ambiguous properties as positive or negative, where positive means that the example is ambiguous for that property and negative means that it is not ambiguous. 0.2in Model Architecture - We model the proposed framework shown in Figure <ref> for detecting ambiguities through a hard-parameter sharing paradigm for multi-task learning. 
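Before the detailed module descriptions that follow, the PyTorch-style skeleton below sketches the hard-parameter-sharing layout just described: one encoder shared by all tasks and five independent task heads, each ending in a binary classifier. The layer sizes follow the text where stated (128-dimensional character embeddings, LSTM hidden size 512); everything else, including the use of PyTorch itself, is our own simplification, and the heads are placeholders for the attention/CNN modules detailed in the next subsections.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Character embeddings + LSTM over the input string (the Common Encoder)."""
    def __init__(self, vocab_size, emb_dim=128, hidden=512):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)

    def forward(self, x):              # x: (batch, n) character ids
        h, _ = self.lstm(self.emb(x))  # (batch, n, hidden)
        return h

class TaskHead(nn.Module):
    """Stand-in for one task-specific module (real one adds attention and a CNN)."""
    def __init__(self, hidden=512, classes=2):
        super().__init__()
        self.fc = nn.Linear(hidden, classes)

    def forward(self, h):
        return self.fc(h.mean(dim=1))  # (batch, classes) logits

class AmbiguityModel(nn.Module):
    def __init__(self, vocab_size, n_tasks=5):
        super().__init__()
        self.encoder = SharedEncoder(vocab_size)                     # shared
        self.heads = nn.ModuleList([TaskHead() for _ in range(n_tasks)])  # not shared

    def forward(self, x):
        h = self.encoder(x)
        return [head(h) for head in self.heads]  # one logit pair per property
```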
As shown in Figure <ref>, the proposed framework consists of three modules, Common Encoder, Task-Specific Modules, and the Loss module. We discuss each of these modules in subsequent subsections. §.§.§ Common Encoder This module is used for encoding the raw I/O strings (see Figure <ref>) and consists of two sub-modules: * Character Level Embedding Layer - This layer maps each character of the I/O pairs in each example to a 128-dimensional learning space. Given an input (i_1,.., i_n) and an output string (o_1,.., o_m) consisting of a sequence of characters of length n and m respectively, this layer outputs a list of character embedding. Here, n refers to the maximum length of input among all the examples in the dataset. The input strings which are smaller than the maximum length are appended with <pad> tokens to make their length equal to n. The <pad> tokens specify that the current character does not signify the original string but marks the end of it or is used to make all sequences of the same length so that the deep learning tensor computations are easier. A similar procedure is followed with the output strings, where the maximum length of output among all the examples in the dataset is m. Each character i_t and o_s in the input and output sequence is mapped to the 128-dimensional raw embedding e_i_t and e_o_s respectively via a randomly initialized and trainable embedding matrix, where t ∈{1,..n} and s ∈{1,..m}. * Input Encoder - This layer uses LSTM representations <cit.> applied on the embedding e_i_t of the inputs of each example as shown in Equation <ref>. This layer helps to learn the sequential dependencies of the characters of the inputs. It takes the input embedding of each character e_i_t and passes them through a LSTM layer consisting of n separate LSTM cells with a hidden vector size of 512 as shown in Equation <ref>. h_i_t = LSTM(e_i_t,h_i_t-1), t ∈ (1...n) Hence, the Common Encoder takes I/O pair as input and produces two output representations, the raw 128-dimensional embedding of each character in the output sample and the LSTM encoded embedding of the Input sample in the I/O pair. These embeddings will be generated for each I/O pairs in an example. The outputs of the Common Encoder are then utilized by next Modules. §.§.§ Task Specific Modules These modules are designed for the detection of each ambiguity property. We have 5 such modules (one for each ambiguity) with a similar structure, which process the inputs obtained from the common encoder. Each Task-Specific Module contains an Additive Attention Output Encoder, Concatenation Layer, Convolution Neural Networks & Pooling Layer, and Softmax Layer as classification layer. The weights across all these 5 task-specific modules are not shared with each other. * Attention Output Encoder - In our architecture, we use additive attention mechanism <cit.> to selectively impart more importance to the part of the input which has more influence on the output characters and hence obtain better output sample encoding. Specifically, this layer computes the additive attention a_e_o_s of a single embedded output character e_o_s with respect to the encoding of all the input characters h_i_1..n as shown in the equations <ref> and <ref>. For this, we pass the output from the Input Encoder to the Attention Output Encoder which first computes the attention weights α_s_1..n as shown in eq. <ref> and the corresponding attention vector a_e_o_s as shown in eq. 
<ref> for each output character O_s with respect to all input characters in the I/O pair. Here, W_a and U_a are the learnable weight matrices. W_a corresponds to the output embeddings vector e_o_s and U_a corresponds to the input encodings matrix h_i_1..n. V_a is the learnable vector. The attention output a_e_o_s is concatenated with the output embedding e_o_s to give c_o_s as shown in equation <ref> and is passed through an LSTM layer with hidden vector size 512 as shown in eq. <ref>. α_s_1..n = V_atanh(W_ae_o_s + U_ah_i_1..n), s ∈ (1..m) a_e_o_s = Σ_tα_s_t h_i_t, t ∈ (1..n), s ∈ (1..m) c_o_s= [ a_e_o_s,e_o_s], s ∈ (1..m) h_o_s = LSTM(c_o_s,h_o_s-1), s ∈ (1..m) The Attention Output Encoder outputs m different LSTM encodings h_O_1..m for each output string of length m in l I/O pairs, which further passed to the next Layer. * Concatenation Layer - For this, we concatenate the l encodings corresponding to l I/O pairs for each example. Detecting ambiguity is possible only by analyzing all the I/O pairs in a given example and not just one I/O pair. These encodings are obtained from the Attention Output Encoder in a row-wise manner as shown in equation <ref>. Here, h_1_o_s refers to the attention-encoded output of the s^th character of the Output O_1 from the first I/O pair. Similarly, h_l_o_s refers to the attention-encoded output of the s^th character of the Output O_l from the l^th I/O pair. q_s = concat(h_1_o_s, h_2_o_s, ..., h_l_o_s), s ∈ (1..m) Q = [q_1, q_2, ...., q_m-1, q_m] The output of the Concatenation Layer is a matrix Q as shown in eq. <ref>. There are a total of m different rows in the matrix corresponding to the m characters of the Outputs in an I/O pair. More specifically, each row of the matrix represents the character-level concatenation of the output encodings from l different examples. This matrix is then passed into the next layer. * Convolution Neural Network and Pooling Layers - Convolution Neural Networks (CNNs) are used for finding local dependencies in features. In our architecture, CNNs help us to capture the dependencies between adjacent characters and subsequent encoded Outputs of the I/O pairs. The input to the CNN layer is the matrix Q for each example that we obtain from the Concatenation Layer. In this layer, we apply 2-dimensional convolution operations with 512 output channels where each channel contains a kernel of dimension (2, l*512) on the input from the concatenation layer. We then applying MaxPooling on the outputs of the CNNs across each channel to obtain a single vector r of size 512 dimensions for the I/O pairs in an example as seen in eq. <ref>. This 512 size vector is then passed into the next Layer. r = MaxPool2D(Conv2D(Q)) * Classification Layer - Classification Layer is a fully-connected dense layer with 2 neurons corresponding to either the positive or negative class for each ambiguous property classification to give the classification logits u. This is shown in equation <ref> where W_f and b_f are the weight matrix and the bias vector respectively. u = W_f r + b_f Classification logits from the Classification Layer are then passed through the Softmax Layer. * Softmax Layer - This layer applies the softmax activation function on the classification logits to obtain a probability distribution p over the prediction classes (ambiguous properties) as shown in equation <ref>. Here, z is used for indexing a single class among the positive and the negative classes. 
p = exp(u_z)/Σ_z exp(u_z), z ∈ (0, 1) §.§.§ Loss Calculation The proposed multitask learning framework uses Cross-Entropy loss between the original and predicted labels as the objective function for all the five task-specific modules. Equation <ref> denotes the loss from the k^th task-specific module. We use k to index the task-specific modules. p_k is the predicted probability distribution for the k^th task-specific module. y_k is the original probability distribution for the k^th task-specific module. We obtain the final loss L by taking a weighted sum of the individual losses L_k of each of the task-specific modules as shown in equation <ref>. Here w_k is the weight corresponding to the kth Loss L_k. L_k = -Σ[y_klogp_k + (1-y_k)log(1-p_k)] L = w_1*L_1 + w_2*L_2 + w_3*L_3 + w_4*L_4 + w_5*L_5 § RESULTS AND DISCUSSIONS §.§ Dataset Creation We created a dataset corresponding to the five different ambiguous properties discussed in Section <ref>. We have written different regexes satisfying each ambiguous property based on a fixed Domain Specific Language (DSL). For each ambiguity property, the regexes generate several examples, and each example consists of 3 I/O pairs. We consider uppercase English characters, lowercase English characters, digits from 0 to 9, and all printable special characters. We generate a total of 100002 individual samples, grouped in an example of 3 samples, to finally produce 33334 examples per ambiguous property. In the next few subsections, we describe the procedure of generating the dataset for each ambiguous property. Table <ref> shows examples corresponding to each property. §.§.§ Similar Length Ambiguity For each output substring in an example, we chose a length from a range of 2-9 characters. We limit the output substrings to a maximum of 4 for each sample. Each output substring will contain a mixture of lowercase, uppercase English alphabets, and digits from 0-9. We add random strings on the front and the back of each output substring to construct the input string. Similarly, we do this for other output substrings, and finally, combine the I/O substrings to make it a single I/O pair. We repeat the above process by fixing the output substrings size across the samples in a single example and combine those I/O pairs to make a single example. In our case, we use a set of three I/O pairs in a single example. We illustrate the process of creating I/O pairs through the following example. In the first step, we first assume an output substring of length three for sample-1 is “abc", for sample-2 is “klp" and for sample-3 is “12j". In the second step, we add random I/O strings before and after the first output substring for sample-1 “dfg1#abc#2311", sample-2 “era#klp#hj1", and sample-3 “h2ral#12j#klj23jk". In the third step, we create a new output substring that can follow this similar length property or not. We then repeat the second step, for example, let us assume that the second output substring is of varied length, let's say “hjuk", “puefhkj", and “jf16hsk". Now, either we append this directly to the input with some delimiter or first add some other random string before or after this string. In this case, we append this directly using delimiter “@", so final input strings become “dfg1#abc#2311@hjuk", “era#klp#hj1@puefhkj" and “h2ral#12j#klj23jk@jf16hsk". We can combine the output substrings using any character or directly. 
In this example, we are combining it directly which leads to the following output samples corresponding to the input samples - `abchjuk", “klppuefhkj" and “12jjf16hsk". We can repeat the same process by generating more output substrings for an example. §.§.§ Exact Position Placement Ambiguity The process of example generation for this ambiguity will remain almost the same as the “Similar Length Ambiguity" property. The only change is that instead of fixing output substring length across samples, we will fix the output substring's position in the input string. §.§.§ Exact Match Ambiguity - In this case, the process differs with respect to output substring value. The output substring value across the I/O pairs within the same example will remain the same. This property inherently also satisfies the Similar Length Ambiguity. §.§.§ Similar in Token Type Ambiguity In this case, the process differs with respect to output substring type. That is, the output substring's token-type across the I/O pairs within same example will remain the same. In our work, we define two types of token-types viz. alphabets and numerals. More specifically, the two categories of similar token types are when, either the output strings contain only the uppercase and lowercase alphabets or only digits from 0-9. §.§.§ Repeating Characters Ambiguity In this case, the output substring exists (or repeats itself) at multiple positions in the input. §.§ Ablation Studies We compare the results of two major variations in the proposed framework : (a) two different loss functions - Cross-Entropy and Focal Loss, and (b) the importance of each layer by removing it from the framework. We consider the model in Figure <ref> as the main model. This model is referred to as Our in the results table. We carry out various ablation studies of the proposed model by removing various components to ascertain the role played by each component in the model. These models are are discussed below. §.§.§ Our_No_CNN: In this setup, we remove the CNN and the MaxPool layers from the proposed model architecture and only pass the concatenated output encodings to the classification layer. §.§.§ Our_No_AM In this setup, we remove the Attention Mechanism from the proposed model. We retain the same output encoder but set the attention weights for each output characters over all input characters is equal to 1 while calculating the attention vector. §.§.§ Our_GRU: In this, we replace all LSTM layers and cells with GRU <cit.> cells in the proposed architecture. We retain the same overall architecture and keep the GRU hidden size equal to 512. §.§ Discussions §.§.§ Quantitative Results In Table <ref>, we compare the results of the proposed framework with two different loss functions, Cross-Entropy, and Focal Loss. Also, we provide a quantitative analysis highlighting the importance of each layer. For this, we first remove the layer from the task-specific modules and then report the performance of the same. We show the property-wise performance in Table <ref>. From the result table, we can see that overall Cross-Entropy is performing better than the Focal Loss. The model has trained with 26,667 examples corresponding to each ambiguous property for 100 epochs with a batch size of 5 per epoch. We set the weights of each loss corresponding to five different ambiguity tasks equally to 1. We report the results on the 6,667 examples test set. 
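For completeness, a hypothetical training loop matching the stated setup (100 epochs, batch size 5, all five loss weights set to 1, cross-entropy per task) could look as follows; the optimizer choice and learning rate are our own placeholders, as they are not specified in the text, and the model is assumed to follow the multi-head interface sketched earlier.

```python
import torch
import torch.nn as nn

def train(model, loader, n_epochs=100, weights=(1.0,) * 5, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(n_epochs):
        for x, labels in loader:      # labels: LongTensor (batch, 5), 0/1 per property
            logits = model(x)         # list of 5 tensors of shape (batch, 2)
            loss = sum(w * ce(lg, labels[:, k])
                       for k, (w, lg) in enumerate(zip(weights, logits)))
            opt.zero_grad()
            loss.backward()           # weighted sum of the five task losses
            opt.step()
```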
The main model, denoted by our, performs better than the other variations of the proposed framework when using the same loss metric. We can also observe that by removing the attention layer from the main model, the performance of the model got decreased by 10-20% for most of the cases, which highlights the need for an attention layer. A similar kind of pattern can be observed when we remove the CNN layer from the main model. In some cases, performance got dropped to around 50%. Also, we can observe that removing the CNN layer makes the model more worse as compared to removing the attention layer. This shows that the CNN part of the architecture plays an important role in ambiguity detection. Also, we can see a significant drop in performance in most of the cases, if we replace the LSTM units with GRU units. The reason for the same is that LSTM units are able to capture context better than GRU because a sufficient number of samples are provided to our model to learn the context. When a sufficient number of samples are available for training, we can expect the LSTM model to learn the context better than GRU <cit.>. Hence, this analysis shows the importance of different layers in our proposed framework. Combining all these layers, makes the system perform almost 100 percent accurately on the test set, which shows that these ambiguities can be easily learned if we define the architecture which can capture context, interrelationships, and attention of output on input. In some cases, we observe that other variations are also giving perfect results which highlight that for those properties simpler network can also be generalizable on unseen test data. §.§.§ Saliency Maps For better understanding the predictions of the proposed model, we used the integrated gradients <cit.> based saliency on the inputs of the examples for visualization. We use three properties (similar length, exact match, and repeating characters) to illustrate the predictions of the learned model as shown in Figure <ref>. For each of these properties, we use one example (three I/O samples) to visualize the saliency maps. Also, we use a single substring in output just for ease of visualization, as this visualization becomes more complex to interpret if we have multiple substrings in output. The first row in the Figure <ref> denotes the saliency maps corresponding to the Similar Length Ambiguity property for the I/O pair - {"input": ["niti gup", "klop kio", "xyz abc"], "output": ["gup", "kio", "abc"]}. From Figure <ref> (a), we can see that in all the inputs, more importance (shown by lighter colors with high values) is given to the characters which mark the beginning and the end of the part of the string (“gup", “kio", and “abc") which belongs to the output. That is, we can see that a higher saliency score is associated with the hyphen and the @end symbol which mark the beginning and the ending of the output string. Hence, we can conclude that the model is able to learn the Similar Length Ambiguity property. The second row illustrates the saliency maps for the Exact Match Ambiguity property for the I/O pair - {"input": ["niti abc123", "klop abc123", "xyz abc123"], "output": ["abc123", "abc123", "abc123"]}. Here, it can be seen that, on average, more importance is given to the part of the input which contains the output as compared to the one which does not. That is, the characters corresponding to abc123 have higher saliency values as compared to the other parts like niti, klop, and xyz in the three inputs respectively. 
Hence, we can conclude that the model is able to recognize the output strings clearly and hence correctly classifying them. The third row shows the saliency maps for the Repeating Characters Ambiguity for the I/O pair - {"input": ["M%qSFA8qb%We %qSFA8qb%", "1bN%i6Op4%YK%i6Op4%", "Yp%83cGK3%yRv%83cGK3%"], "output": ["qSFA8qb", "i6Op4", "83cGK3"]}. It can be noticed that the characters in the output string have higher saliency values on an average in the input in their second repetition as compared to their first occurrence. This shows that the model is able to well recognize the repeated characters and hence correctly classify them. We have observed similar kinds of patterns for the other ambiguities. §.§.§ Case Study: Impact of detecting multiple intents and correcting them before building PBE systems In this section, we discuss that how the presence of ambiguity in input and output annotations can affect the output of widely used tools like PROSE <cit.> and Microsft Excel. Table <ref>, shows the different ambiguities detected by the proposed system on 6 examples, and also shows that whether existing PBE systems will able to learn correct intent or not using those sets of I/O pairs. For each example, the user provides three I/O samples to convey the desired intent. However, as we can see from the ambiguities detected column that each of these examples has some kind of ambiguities or multi-intent issues. Effect of the same can be reflected in a mismatch of PROSE/Excel output columns and GT output column. This shows the need for the framework which helps to figure out the multi-intent quality issues in annotation before generating program through any PBE systems. In the first example in Table <ref>, the system detects “Similar in Token Type Ambiguity", because substring (only one substring exist in this case) across the outputs have the same token type. This can lead to multiple intent issues of whether the user wants to extract everything after “_" irrespective of the token/data type, or user is just interested in specific numeric data type content for this case. Same multi-intent confusion can be reflected in the output of two different PBE systems on an input “B_DS2345" - (a) PROSE output is “2345", that means the PROSE framework learn to extract numeric content after “_", and (b) Excel output is “DS2345" which means that excel learns to extract all the content after “_". So, it is good if the user can first analyze the detected ambiguity and if that ambiguity holds for a user's actual intent, then the user can accordingly either provide new samples or change the existing samples. Like for the first example, the user intent is to extract everything after “_" and also detected ambiguity is of similar token type. So, the user can now either modify or add one new sample where the extracted output string also has non-numerical characters. With this new additional I/O sample (highlighted in bold) provided by a user, after analyzing the detected ambiguity, both PROSE and EXCEL are able to learn correct intent. This is reflected through the output columns i.e. the value of these columns is the same as the GT column (see Table <ref>) . Similarly, if we analyze the fifth example in Table <ref>, the system detects multiple ambiguities. Exact Position, Similar Length, and Similar in Token Type ambiguities exist for both the output substrings (Mohan/Abhil/Johny and Mr.). Similar in Exact Match Ambiguity exists only for “MR" substring in the output. 
For the first output substring (Mohan/Abhil/Johny), the user is fine with Exact Position and Similar in Token Type Ambiguity. However, the user wants to add a new example to remove the Similar Length Ambiguity. Similarly, for the second output substring, the user is fine with all the detected ambiguities except Exact Position Placement Ambiguity, because the user's goal is not to extract this information from the input string, the user wants to add that as a constant string in the output. So, after analyzing these properties, the user can provide new samples which will remove these ambiguities to learn the correct intent. Also, we can see from the table that due to these ambiguities both PROSE and EXCEL system learn the intent wrongly. However, after analyzing the ambiguities, the user provided the new sample as shown in Table <ref>. This new sample helps the system to learn the correct intent, which can be seen through the correct output on the test data. Similarly, by providing new sample as shown in Table <ref> for other examples, the user will be able to resolve the multi-intent quality issue and also be able to learn the correct intent through existing PBE frameworks. This shows the effectiveness of our proposed framework to detect ambiguity in PBE systems specifically in the string transformation domain. § RELATED WORK Task-specific string transformation can be achieved via both program synthesis and induction models. Induction-based approaches obviate the need for a DSL since they are trained to generate required output directly from the input string and used in tasks like array sorting <cit.>, long binary multiplication <cit.>, etc. However, induction models are not feasible for the string transformation domain as they require to be re-trained for each task and have lower generalization accuracy on unseen samples than synthesis models <cit.>. In literature, both neural-guided-based and symbolic-based approaches have been widely used for program synthesis. Several neural-guided approaches have been proposed in the last few years for program synthesis <cit.>. A sequential encoder-decoder network to infer transformation programs that are robust to noise present in input-output strings, where the hand-engineered symbolic systems fail terribly is proposed in <cit.>. A different variant of an encoder-decoder network where input-output string encoders are not cascaded but work in parallel to infer program sequences is proposed in <cit.>. In <cit.>, a novel neural architecture consisting of a R3NN module that synthesizes a program by incrementally expanding partial programs is used. These networks can be trained end-to-end and do not require any deductive algorithm for searching the hypotheses space. However, they do not guarantee that inferred programs are consistent with the observed set of input-output pairs and also, training on synthetically generated datasets results in poor generalizability on real-world tasks. Symbolic Program Synthesis approaches operate by dividing required transformation tasks into sub-tasks and searching the hypothesis space for regex-based string expressions to solve each of them. However, smart search and ranking strategies to efficiently navigate the huge hypothesis search space require significant engineering effort and domain knowledge. 
One of the earliest attempts to solve the problem of program synthesis pioneered the Flash-Fill algorithm, designed to infer a specification-satisfying string transformation program in the form of Abstract Syntax Trees (ASTs) <cit.>. The PROSE system from <cit.> employs several hand-crafted heuristics to design ranking functions for deductive search. Systems like PROSE perform well on tasks similar to previously encountered ones but face a generalizability issue when exposed to new, unseen tasks. This is also demonstrated in Table <ref>, where the system infers one intent that is satisfied on the seen examples but fails on new, unseen test data. Since PBE systems for string transformations rely on input and output annotations, it is necessary to provide non-ambiguous input and output samples to them. To the best of our knowledge, no existing work addresses finding ambiguities or multiple-intent quality issues in input and output annotations and reporting them to the user, so that the user can inspect the detected ambiguities and accordingly modify existing samples or provide new ones. Such a system helps capture the user's intent more clearly and makes the PBE system generalize to unseen data. Hence, in this paper we focused on finding multi-intent quality issues in input-output annotations in order to learn the correct intent. § CONCLUSION This paper aims to solve the problem of detecting ambiguity in the user-provided I/O annotations for PBE systems, which otherwise leads to the generation of wrong-intent programs. To the best of our knowledge, our proposed framework is the first to address this issue at the input and output annotation level. To this end, we propose an extensible multi-tasking attention-based DNN to find multiple intents in the I/O samples. We also define a set of generic properties that help in detecting multiple intents in the annotations. We have done a quantitative analysis of different variations of the proposed model architecture to show the impact of each of the proposed system's modules. We have also illustrated the effectiveness of the proposed model through saliency maps and through the outputs of existing PBE systems. A natural extension of our work is to use the detected ambiguity properties to automatically generate new input and output samples and to improve the program search space.
http://arxiv.org/abs/2307.04465v1
20230710102840
Tropical convexity in location problems
[ "Andrei Comăneci" ]
math.OC
[ "math.OC", "math.MG", "q-bio.PE", "14T90, 26B25, 52A30, 90B85, 92B10" ]
We investigate location problems whose optimum lies in the tropical convex hull of the input points. Firstly, we study geodesically star-convex sets under the asymmetric tropical distance and introduce the class of tropically quasiconvex functions whose sub-level sets have this shape. The latter are related to monotonic functions. Then we show that location problems whose distances are measured by tropically quasiconvex functions as before give an optimum in the tropical convex hull of the input points. We also show that a similar result holds if we replace the input points by tropically convex sets. Finally, we focus on applications to phylogenetics, presenting properties of consensus methods arising from our class of location problems. Tropical convexity in location problems Andrei Comăneci August 12, 2023 ================================================================================================================================== § INTRODUCTION There is a recent interest in studying location problems in tropical geometry, especially in the use of tropical methods in data analysis. Perhaps the first article to promote such problems with a view towards "tropical statistics" is the work of Lin et al. <cit.>. They showed that tropical convexity in tree spaces has some better properties than the geometry of Billera, Holmes, and Vogtmann (BHV) <cit.>. This encouraged them to propose location estimators based on the symmetric tropical distance that could potentially exploit tropical convexity. In particular, this would give a tropical approach to the consensus problem from phylogenetics <cit.>. The connection of the proposed location statistics to tropical convexity was not well understood. For example, they noticed that tropical Fermat–Weber points can lie outside the tropical convex hull of the input points <cit.>, although it was found later that one can find Fermat–Weber points inside the tropical convex hull <cit.>. However, the unclear connection makes it difficult to obtain solutions that can be interpreted in the phylogenetic setting; see also <cit.>. Recently, we could show that studying the Fermat–Weber problem using an asymmetric distance function leads to a better explanation in terms of tropical convexity <cit.>. In particular, it provides a clear approach based on tropical convexity to the consensus problem from phylogenetics. Moreover, various desirable properties of consensus methods were obtained by exploiting tropical convexity. In fact, the good properties were due solely to tropical convexity and not to the particular distance function, which motivates the search for other methods with similar properties. In this paper, we focus on location problems that have the potential of exploiting tropical convexity. More specifically, we care about those location estimators that belong to the tropical convex hull of the input points. Such estimators are based on distances that reflect the tropical structure of the space and can be seen as a counterpart to similar studies regarding location problems and ordinary convexity. Significant work has been done on understanding geometric properties of location problems and their relationship to ordinary convexity. The case of Chebyshev centers dates back to the 60s in the work of Garkavi <cit.> and Klee <cit.>.
More general location problems in a normed space were studied by Wendell and Hurter <cit.>, while a focus on geometric properties of Fermat–Weber problems with varying distances is covered by Durier and Michelot <cit.>. What is more, it was shown that finding an optimal solution in the (ordinary) convex hull for every set of points is equivalent to having an inner product space in three dimensions or more; a general form of this result was obtained by Durier <cit.>. The results mentioned above show a strong relationship between ordinary convexity and a Euclidean structure. Tropical convexity, on the other hand, it is related to the lattice structure of (^n,≤). Hence, we have to focus on “monotonic” distances. To interpret geometrically monotonic functions in the quotient space n, we notice that all sub-level sets share a similarity: they are geodesically star-convex with respect to the asymmetric tropical distance. The latter can be seen by remarking that geodesic segments are images of order segments in (^n,≤). The resulting sets, called -star-convex, and functions, called -star-quasiconvex, are discussed in sections <ref> and <ref>, respectively. In section <ref> we focus on location problems in which distances to the sites are measured by -star-quasiconvex functions. We show that this setting guarantees optimal locations in the tropical convex hull of the input. We will see that the triangle inequality does not play any role, which emphasizes the differences between tropical and ordinary convexity. Further, this setting allows for very general location problems where dissimilarities are not necessarily distances; triangle inequality is generally assumed in location science when dealing with geographic location <cit.>, but it is not reasonable for more general data <cit.> and never assumed in the construction of M-estimators <cit.>. We have further a few examples of location problems from the literature that end in our setting. In particular, location problems involving the symmetric and asymmetric tropical distances. However, the former case might contain cases where some optima are outside the tropical convex hull of the input. So what is the precise distinction between the symmetric and the asymmetric tropical distances that causes the above behaviour? We show that strict -star-convexity is the answer. This motivates that study of regularized versions discussed in §<ref>. We briefly show in section <ref> that we can extend the results to the case when the sites are tropically convex sets. Then section <ref> deals with the main application to phylogenetics: the tropical approach to consensus methods. Our general setting provides a large class of tropically convex consensus methods as defined in <cit.>. Furthermore, we enlarge the list of desirable properties of these consensus methods that were given in the previously cited work. Finally, we conclude with section <ref> consisting of highlights and possible directions for future research. § TROPICAL CONVEXITY The purpose of this section is to fix the notation and emphasize the basic properties of tropically convex sets that will be used later. One can consult the book of Joswig <cit.> for more details. We will use both semirings ^min=(∪{∞},∧,+) and ^max=(∪{-∞},∨,+) where x∧ y=min(x,y) and x∨ y=max(x,y). They are isomorphic under the map x↦ -x, but it is better to be seen as dual to each other. This duality will play an important role later similar to the relationship between max-tropical polytopes and min-tropical hyperplanes <cit.>. 
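As a small computational aside (ours, not part of the paper), the two semirings and the duality x ↦ -x can be checked directly with numpy: a min-tropical linear combination of two points, of the kind used in the next section, is mapped to a max-tropical combination of their negatives.

```python
import numpy as np

a, b = np.array([3.0, 0.0, 2.0]), np.array([1.0, 4.0, 0.0])
lam, mu = 1.0, -2.0

# min-tropical linear combination (lam + a) ∧ (mu + b)
min_comb = np.minimum(lam + a, mu + b)

# the duality x -> -x turns it into a max-tropical combination of -a and -b
max_comb = np.maximum(-lam - a, -mu - b)
assert np.allclose(max_comb, -min_comb)
print(min_comb)   # [-1.  1. -2.]
```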
Since our applications deal with points with finite entries, we will define tropical geometric objects in ℝ^n and in the quotient space ℝ^n/ℝ𝟙. This also exploits the common ground set of ℝ^max and ℝ^min, and we can make use of the vector space structure. A min-tropical cone K⊂ℝ^n is a set closed under min-tropical linear combinations: (x+λ)∧ (y+μ)∈ K for all x,y∈ K and λ,μ∈ℝ. The image of a min-tropical cone in ℝ^n/ℝ𝟙 is called a min-tropically convex set. A common example is the min-tropical hyperplane with apex v, which is the set H^min_v={x∈ℝ^n/ℝ𝟙 : |argmin_j(x_j-v_j)|≥ 2}. The max-tropical cones and max-tropically convex sets are defined similarly, replacing min by max in the previous definitions. One can also see them as images of min-tropical cones and min-tropically convex sets under x↦ -x. The min-tropical convex hull of two points a,b∈ℝ^n/ℝ𝟙 will be denoted by [a,b]_min and is called the min-tropical segment between a and b. We will also use the notation (a,b)_min=[a,b]_min∖{a,b} for the open min-tropical segment between a and b. Similarly, we define [a,b]_max and (a,b)_max. The min-tropical convex hull of a set A⊂ℝ^n/ℝ𝟙 is the smallest min-tropically convex set containing A, and we denote it by tconv^min(A). It can be related to the max-tropical semiring by <cit.>. For this we need to introduce the max-tropical sector S_i^max={x∈ℝ^n/ℝ𝟙 : x_i≥ x_j ∀ j∈[n]}={x∈ℝ^n/ℝ𝟙 : i∈argmax_j x_j}. Then <cit.> says that x belongs to tconv^min(A) if and only if for each i∈[n] there exists a_i∈ A such that x∈ a_i+S_i^max. For the max-tropical convex hull, just swap min and max. We say that a point a of a min-tropically convex set A is i-exposed if (a+S_i^min)∩ A={a}. If a point is i-exposed for some i∈[n], then we simply call it exposed. Since the order ≤ on ℝ^n is strongly related to tropical convexity, we will focus on monotonic functions. We say that a function f:X→ℝ, defined on a subset X of ℝ^n, is increasing if for every x, y∈ X with x≤ y we have f(x)≤ f(y). We call f strictly increasing if f(x)<f(y) whenever x≤ y and x≠ y. For a,b∈ℝ^n with a≤ b, we denote by [a,b]_≤ the set of points x∈ℝ^n such that a≤ x≤ b and call it the order segment between a and b. It can also be written as a box: [a,b]_≤=[a_1,b_1]×…×[a_n,b_n]. Its image in ℝ^n/ℝ𝟙 is a polytrope, i.e. it is both min- and max-tropically convex <cit.>, and we call it a box polytrope. A particular case is presented in the following example. Consider the asymmetric distance d_△(a,b)=∑_i(b_i-a_i)-n·min_j(b_j-a_j) defined on ℝ^n/ℝ𝟙 <cit.>. We are interested in geodesic segments under this distance, which are portrayed in Figure <ref>. This is different from the geodesic convexity discussed in <cit.>, which focuses on the symmetric tropical distance. For two points a,b∈ℝ^n/ℝ𝟙 we define the (oriented) geodesic segment between a and b under d_△ as [a,b]_△:={x∈ℝ^n/ℝ𝟙 : d_△(a,x)+d_△(x,b)=d_△(a,b)}. The geodesic segment [a,b]_△ is a (box) polytrope. To see this, we point out that [a,b]_△=(a+S_i^min)∩(b+S_i^max), where i is any index from argmin_j(b_j-a_j); the equality can also be seen in Figure <ref>. What is more, if we choose representatives a and b such that min_j(b_j-a_j)=0, then [a,b]_△ is the image of [a,b]_≤ in ℝ^n/ℝ𝟙. The min-tropical vertices of [a,b]_△ are of the form v_j=b-(b_j-b_i+a_i-a_j)e_j=b-(b_j-a_j-min_ℓ(b_ℓ-a_ℓ))e_j for j∈[n]. The set [a,b]_△ contains the ordinary segment [a,b] but also the min- and max-tropical segments between a and b. What is more, for every c∈ℝ^n/ℝ𝟙 the min-tropical segment between a and b is contained in [c,a]_△∪[c,b]_△. To see the latter statement, we take arbitrary representatives modulo 𝟙 for a and b and show that a∧ b∈[c,a]_△∪[c,b]_△. Let i∈argmin_j[(a_j∧ b_j)-c_j].
Without loss of generality, we can assume that a_i∧ b_i=a_i. Thus, a∧ b∈ (c+S_i^min)∩(a+S_i^max)=[c,a]_. The canonical coordinates of a point x∈n are the entries of the x∈^n defined by x= x-(min_j x_j). This is a representative of x modulo such that all its entries are non-negative and at least one entry is 0. We say that K is a strictly min-tropically convex cone if K is a min-tropically convex cone and for every a,b∈ K such that a∧ b is different from a and b modulo , then a∧ b belongs to the interior of K. We say that a subset of n is strictly min-tropically convex if it is the image of a strictly min-tropically convex cone under the canonical projection ^n→n. A subset L of n is strictly min-tropically convex if all the points of the open min-tropical segment (a,b)_min belong to the interior of L, where a and b are distinct points in L. Any strictly min-tropically convex set is a singleton or its closure coincides with the closure of its interior. Moreover, all of its boundary points are exposed. The first part results from Remark <ref>. For the second part, consider v which is not exposed. Then there exist p,q in the strictly min-tropically convex set such that v∈(p,q)_min. According to the same remark, v is an interior point. § -STAR-CONVEX SETS A -star-convex set with kernel v is a non-empty set K⊆n such that for every point w∈ K we have [v,w]_⊆ K. We call K strictly -star-convex if [v,w]_∖{w} belongs to the interior of K for every w∈ K. Since [v,w]_ contains the ordinary segment [v,w], we conclude that -star-convex sets are also star-convex in the ordinary sense. We show now that -star-convex sets are min-tropically convex. Any -star-convex set is min-tropically convex. Let K be a -star-convex set with kernel v and a,b arbitrary points in K. According to Remark <ref>, we have [a,b]_min⊆ [v,a]_∪ [v,b]_. The latter set is contained in K due to its -star-convexity. However, -star-convex sets might not be max-tropically convex. For example, the image of the regular simplex Δ_n={e_1,…,e_n} in n is -star-convex but not max-tropically convex. One can find examples of -star-convex sets in Figure <ref>. Picture (a) shows a min-tropical hyperplane H^min_v which is -star-convex with kernel v—the apex. Picture (b) displays the unit balls for tropical L^p norms, which will be defined in Example <ref>. They are nested increasingly with respect to p; the outer one corresponds to the tropical L^∞ norm and is the only one that is not strictly -star-convex. One can recognize the triangle as the unit ball for the asymmetric tropical distance d_. The min-tropical hyperplane with apex at the origin (the kernel of the -star-convex sets) is dotted. Picture (c) shows a more complicated -star-convex sets. This case is not pure dimensional, the tropically exposed points do not form a closed set. Moreover, it is neither convex in the ordinary sense, nor strictly -star-convex. Let K be a -star-convex set with kernel v such that K≠{v}. Then K is strictly -star-convex if and only if K is strictly min-tropically convex and v is an interior point of K. Firstly, assume that K is strictly -star-convex. For every a,b∈ K the min-tropical segment [a,b]_min is a subset of [v,a]_∪[v,b]_. Therefore, all of the points of [a,b]_min with the exception of a and b must be in the interior of K. Hence, K is strictly min-tropically convex. The fact that v is an interior point is clear from the definition and our assumption that K≠{v}. Conversely, assume that K is strictly min-tropically convex and v is an interior point of K. 
We consider w∈ K∖{v} and we show that all points of [v,w]_∖{w} are in the interior of K. The result is clear for non-exposed points of [v,w]_ as we assumed K is strictly min-tropically convex. Hence, let u be an exposed point of [v,w]_ distinct from w. According to the discussion from Remark <ref>, u=w-(w_j-w_i)e_j where i∈argmin_k w_k and j∉argmin_k w_k. Since (u+w)/2 belongs to the interior of the tropical segment [u,w]_min and K is strictly min-tropically convex, the point (u+w)/2 is an interior point of K. Thus, for small δ>0, the point c=(u+w)/2-δ e_i belongs to K. However, u∈[v,c]_=S_i^min∩(c+S_i^max) as c-u=(w-u)/2-δ e_i=(w_j-w_i)e_j/2-δ e_i. But u cannot be an exposed point of [v,c]_ as c-u is not parallel to a vector e_k for k∈[n] unless n=2. Consequently, when n≥ 3, u must be an interior point of K by the strict min-tropical convexity of K. For the case n=2, we could have noticed that the exposed points of [v,w]_ are v and w, so u can only be equal to v. But v was already assumed to be interior. The proof above shows that the assumption that v is an interior point of K is superfluous for the converse when n≥ 3. If K is strictly -star-convex with kernel v, then any exposed point of K from v+S_i^min is i-exposed. If a∈ v+S_i^min and it is not i-exposed, then there exists b∈(a+S_i^min)∩ K with b≠ a. In particular, a∈[v,b]_∖{b}. But the strict -star-convexity of K implies that a must be an interior point. § TROPICALLY QUASICONVEX FUNCTIONS A function f:^n→ whose sub-level sets L_≤α(f):={x:f(x)≤α} are convex is called quasiconvex. This is a purely geometric definition, but some other sources define them as functions satisfying f(λ x+(1-λ)y)≤max{f(x),f(y)} for every x,y∈^n and λ∈[0,1]. The latter can be more convenient for checking quasiconvexity. See <cit.> for more details. We will be interested in specific tropically quasiconvex functions. Before we introduce them, we need some notation. For a function γ:^n_≥ 0→ we associate the function γ:n→ defined by γ(x)=γ(x). We recall that x=x-(min_i x_i) are the canonical coordinates of x. We call a function f:n→ -star-quasiconvex with kernel v if f(x)=γ(x-v) for some increasing function γ:^n_≥ 0→. Moreover, if γ is strictly increasing, we call f strictly -star-quasiconvex. We will give a geometric interpretation of -star-quasiconvexity in Theorem <ref>. However, we prefer the definition above because it is easier to check in practice. Taking γ to be a monotonic norm <cit.>, f measures the distance to the kernel. If v is the origin, then f is a gauge; gauges are commonly used in convex analysis <cit.> and location science <cit.>. Gauges are sometimes dubbed “asymmetric norms” as they satisfy all the properties of a norm with the exception that f(x) need not be equal to f(-x). A famous class of monotonic norms are the L^p norms. They give rise to -star-quasiconvex gauges whose expression is γ_p(x)=(∑_i∈[n](x_i-min_j∈[n] x_j)^p)^1/p if p∈[1,∞), and γ_∞(x)=max_i∈[n]x_i-min_j∈[n]x_j if p = ∞. We call them tropical L^p norms. They appeared in the work of Luo <cit.> under the name “B^p-pseudonorms”. One can recognize the tropical L^∞ norm as the tropical norm defined in <cit.>. The relationship to the L^∞ norm is stressed in <cit.>. The tropical L^1 norm gives rise to the asymmetric tropical distance d_; this relationship is implicit in <cit.>. The function γ depends only on the values on ∂^n_≥ 0, so we could have considered only ∂^n_≥ 0 as the domain of γ.
However, this does not increase the generality since every (strictly) increasing function defined on ∂^n_≥ 0 can be extended to a (strictly) increasing function on ^n_≥ 0, according to the following lemma. Every (strictly) increasing function γ:∂^n_≥ 0→ can be extended to a (strictly) increasing function γ̃:^n_≥ 0→. Moreover, if γ is continuous, then the extension can also be made continuous. Consider γ̃(x)=max_i∈[n]γ(x_-i,0_i)+∏_i∈[n]x_i. Clearly, this is continuous if γ is, as being a composition of continuous functions. Moreover, γ̃(x)=γ(x) for every x⃗∈∂^n_≥ 0, due to monotonicity of γ and the fact that x_1x_2… x_n=0 for x∈∂^n_≥ 0. If x≤ y, then x_-i≤ y_-i for all i∈[n], where x_-i is obtained from x by removing the ith entry. Therefore, γ(x_-i,0_i)≤γ(y_-i,0_i) for every i∈[n], which implies γ̃(x)≤γ̃(y) after using ∏_j x_j≤∏_j y_j. In other words, γ̃ is increasing. Moreover, if γ is strictly increasing and x≠ y we have two cases. On the one hand, if y∈∂^n_≥ 0, then x∈∂^n_≥ 0 so γ̃(x)=γ(x)<γ(y)=γ̃(y). On the other hand, if y∈^n_>0, then ∏_j x_j<∏_j y_j. Using the last inequality with max_i∈[n]γ(x_-i,0_i)≤max_i∈[n]γ(y_-i,0_i), we obtain γ̃(x)<γ̃(y). Accordingly, γ̃ is strictly increasing if γ is strictly increasing. The following result explains why the functions from Definition <ref> deserve the name “-star-quasiconvex”. Let f:n→ be a continuous function. Then f is (strictly) -star-quasiconvex if and only if all of its non-empty sub-level sets are (strictly) -star convex with the same kernel. After an eventual translation, we can assume that the kernel is . Firstly, assume f is -star-quasiconvex and let α∈^n arbitrary such that L_≤α(f) is non-empty. Let γ:^n→ increasing such that f(x)=γ(x). Let w∈ L_≤α(f) and choose i∈[n] such that w∈ S_i^min. Since γ is increasing, the points x∈^n satisfying ≤ x≤w belong to L_≤α(γ). This set projects onto [,w]_ showing that [,w]_⊆ L_≤α(f). Since w was selected arbitrarily, L_≤α(f) must be -star convex with kernel . If f is strictly -star-quasiconvex, then the points satisfying ≤ x≤w different from w actually belong to L_<α(f). Due to the continuity of f, this coincides with the interior of L_≤α(f). This shows that L_≤α(f) is strictly -star-convex. Conversely, assume that L_≤α(f) is -star-convex with kernel for every α≥ f(). Take γ:∂^n_≥ 0→ defined as γ(x)=f(x) for x∈∂^n_≥ 0. Using Lemma <ref> it is enough to show that γ is increasing. Let x and y arbitrary points of ∂^n_≥ 0 such that x≤ y. The order segment [,y]_≤ projects onto [,y]_ which belongs to L_≤ f(y)(f). Due to the -star-convexity of sub-level sets, we obtain γ(x)=f(x)≤ f(y)=γ(y). If we have strict -star-convexity, then [0,y]_∖{y} is contained in the interior of L_≤ f(y)(f) which coincides to L_<f(y)(f). Hence, we obtain γ(x)<γ(y) for this case. The continuity of f is relevant only for strictly -star-quasiconvex functions. Without continuity, only the strict -star-convexity of the sub-level sets is not sufficient for f to be strictly -star-quasiconvex. This is similar to the case of ordinary quasiconvex functions; cf. <cit.> and <cit.>. We will see that convexity, in the ordinary sense, will also be helpful for our applications. We give a simple criterion for checking when a -star-quasiconvex function is convex. If γ is increasing and (strictly) convex, then γ is (strictly) convex. Let x,y∈^n and λ∈[0,1]. We have min_j(λ x_i+(1-λ) y_i)≥λmin_i x_i+(1-λ)min_i y_i as λ,1-λ≥ 0. Hence, λ x+(1-λ) y-min_j(λ x_i+(1-λ) y_i)≤λ( x-min_i x_i)+(1-λ)( y-min_i y_i). 
Since γ is convex and increasing, we obtain γ(λ x+(1-λ) y) ≤γ(λ( x-min_i x_i)+(1-λ)(y-min_i y_i)) ≤λ γ(x-min_i x_i)+(1-λ) γ(y-min_i y_i) =λγ(x)+(1-λ)γ(y). If γ is strictly convex and x≠ y modulo , then the second inequality from (<ref>) is strict, so γ(λ x+(1-λ) y)<λγ(x)+(1-λ)γ(y). Thus, γ is strictly convex if γ is strictly convex. § TROPICALLY CONVEX LOCATION PROBLEMS We will consider some input points v_1,…,v_m in n. We measure the distance (or dissimilarity) from x∈n to a point v_i using a -star-quasiconvex function f_i having kernel v_i. We consider increasing functions γ_i:^n→ such that f_i(x)=γ_i(x-v_i). Without loss of generality, we assume γ_i()=0, so that all dissimilarities are non-negative. The purpose of location problems is to find a point as close (or similar) as possible to the input points, depending on some criterion; usually, the optimal location is a minimum of an objective function h:n→. The function h is constructed using an increasing function g:^m_≥ 0→, which aggregates the distances to the input points. Formally, we define h(x)=g(f_1(x),…,f_m(x)). Since f_i measures the distance or dissimilarity from x to v_i and g is increasing, the minima of h record a global closeness to the input points. In most studied location problems, we would have a distance d on n and set f_i(x)=d(x,v_i). Common choices of g are g(x)=x_1+…+x_m, for the median or Fermat–Weber problem, g(x)=max_i∈[m]x_i for the center problem <cit.>, or g(x)=x_1^2+…+x_m^2, for defining the Fréchet mean <cit.>. Nevertheless, we will allow g to be an arbitrary increasing function. We will assume that h has a minimum, which happens, e.g., when h is lower semi-continuous. Let h be as above. Then there is a minimum of h belonging to ^max(v_1,…, v_m). Moreover, if g is strictly increasing and at least one of f_1,…,f_m is strictly -star-quasiconvex, then all the minima of h are contained in ^max(v_1,…,v_m). Consider x∉^max(v_1,…,v_m) which is a minimum of h. Thus there exists k∈[n] such that k∉argmin_j(x_j-v_ij) for all i∈[m]. Set δ_i:=x_k-v_ik-min_j(x_j-v_ij) for all i, and δ=min_iδ_i, which is strictly positive by the choice of k. Note that f_i(x-δ e_k)=γ_i(x-v_i-δ e_k-min_j(x_j-v_ij))≤γ_i(x-v_i-min_j(x_j-v_ij))=f_i(x) for all i∈[m]. Hence h(x-δ e_k)≤ h(x). Note that the inequality above is strict if g and some γ_ℓ are strictly increasing. Indeed, in that case, we must have f_ℓ(x-δ e_k)<f_ℓ(x), so we use the strict increase of g in the ℓth entry. That would contradict the optimality of x, so the second statement of the theorem holds. For the first statement, we can only infer that x-δ e_k is also a minimum of h. Hence, we can find an optimum of h in ^max(v_1,…,v_m) by moving x in directions -e_k for indices k as above. To be more precise, we collect in D(x) the possible elementary descent directions from x; formally D(x):=⋂_i∈[m]([n]∖argmin_j∈[n](x_j-v_ij)). Notice that k∈ D(x), but k∉ D(x-δ e_k). Moreover, D(x-δ e_k)⊊ D(x), as the sets argmin_j∈[n](x_j-v_ij) can only grow when we move in a descent direction. Thus, replacing x by x-δ e_k, we find a minimum with smaller D(x). We can repeat the procedure to construct a minimum x^⋆ of h with D(x^⋆)=∅. The last condition is equivalent to x^⋆∈^max(v_1,…,v_m) due to <cit.>. The regions of f_i where it looks like a monotonic function are induced by the min-tropical hyperplane based at v_i. Those hyperplanes define the max-tropical polytope generated by the input points, explaining why we look at the max-tropical convex hull, instead of the min analogue.
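To make the elementary descent move in the proof above concrete, the following is a minimal numerical sketch in Python (our own illustration, not part of the original text): it uses the asymmetric tropical distance d_ as dissimilarity, the sum as aggregator g, and made-up input points, and it only performs the single move x ↦ x-δ e_k from the proof rather than a full optimization.

import numpy as np

def d_asym(a, b):
    # asymmetric tropical distance: sum_i (b_i - a_i) - n * min_j (b_j - a_j)
    diff = np.asarray(b, float) - np.asarray(a, float)
    return diff.sum() - len(diff) * diff.min()

def descent_directions(x, points):
    # D(x) = intersection over all input points of [n] \ argmin_j (x_j - v_j)
    n = len(x)
    D = set(range(n))
    for v in points:
        diff = x - v
        argmin = set(np.flatnonzero(np.isclose(diff, diff.min())))
        D &= set(range(n)) - argmin
    return D

def descent_step(x, points):
    # one elementary move x -> x - delta * e_k from the proof of the theorem above
    D = descent_directions(x, points)
    if not D:
        return x          # x already lies in the max-tropical convex hull of the input
    k = next(iter(D))
    delta = min((x[k] - v[k]) - (x - v).min() for v in points)
    y = x.copy()
    y[k] -= delta
    return y

# made-up input points in R^3, viewed modulo the all-ones direction
V = [np.array([0., 2., 1.]), np.array([1., 0., 2.]), np.array([2., 1., 0.])]
x = np.array([5., 0., 0.])
h = lambda z: sum(d_asym(v, z) for v in V)      # Fermat-Weber objective, g = sum
y = descent_step(x, V)
print(h(x), h(y))                               # the objective does not increase along the move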
The following lemma presents cases when there is a unique optimum location. We recall that a gauge γ is called strictly convex if γ(λ x+(1-λ)y)<1 for every λ∈(0,1) and all distinct x,y∈n with γ(x)=γ(y)=1, although such gauges are not strictly convex functions. Assume that g,f_1,…,f_m are convex, g is strictly increasing, and at least one of the following conditions holds: a) at least one f_i is strictly convex; or b) all f_i are strictly convex gauges and the points v_1,…, v_m are not collinear. Then h is strictly convex. In particular, it has a unique minimum. Consider arbitrary distinct points x, y∈^n/ and a scalar λ∈(0,1). For case a), we have f_i(λ x+(1-λ) y-v_i)<λ f_i(x-v_i)+(1-λ)f_i(y-v_i). Since g is convex and strictly increasing and the functions f_j convex, we obtain h(λ x+(1-λ) y)<λ h(x)+(1-λ)h(y). So h must be strictly convex. For case b), at least one of the points v_i is not on the line through x and y. Then x-v_i and y-v_i are not parallel and the strict convexity of the unit ball defined by f_i implies that f_i(λ x+(1-λ) y-v_i)<λ f_i(x-v_i)+(1-λ)f_i(y-v_i). The rest of the proof is identical to case a). §.§ Examples Here we review the tropical location problems from the literature that fall into our category, i.e. an optimum belongs to the tropical convex hull of the input. [Tropical Fermat–Weber and Fréchet problems] To the best of our knowledge, the first one-point location problems in tropical geometry were proposed by Lin et al. <cit.>. They suggest the study of Fermat–Weber points and Fréchet means under the symmetric tropical distance d_. The goal was to relate them to tropical convexity for applications in phylogenetics. However, they noticed that tropical Fermat–Weber points might lie outside the tropical convex hull of the input points, leading to medians that cannot be interpreted easily in biological applications <cit.>. Nonetheless, Theorem <ref> says that it is possible to find an optimum in the tropical convex hull. This was already noticed for the tropical Fermat–Weber points <cit.> but it was unknown, until now, for tropical Fréchet means. [Tropical center] Consider the case f_i(x)=d_(v_i,x) and g(y)=max(y_1,…,y_m). This can be interpreted as finding the center of the smallest max-tropical L^1 ball enclosing the points v_1,…,v_m. The tropical center appears in <cit.>, but the details are omitted. If we choose representatives of the input points in ={x∈^n:x_1+…+x_n=0}, the optimum can be obtained by solving the linear program:
minimize n·t
subject to v_ij - x_j ≤ t for i∈[m] and j∈[n],
x_1 + … + x_n = 0.
Note that the x-coordinates of the optimal solutions are equal, modulo , to the x-coordinates of the linear program
minimize n·t + ∑_j=1^n x_j
subject to v_ij - x_j ≤ t for i∈[m] and j∈[n].
Let (t^⋆,x^⋆) be an optimal solution of (<ref>). For any solution of (<ref>) we have t+x_j≥max_i∈[m]v_ij=:V_j. In particular, x^⋆ will have the smallest entries if we actually have equality: t^⋆+x^⋆_j=V_j; otherwise we can replace x^⋆ by some x^⋆-ε e_j to decrease the objective function. This implies x^⋆ = V modulo ; in particular, the solution is unique in n. Even if we do not have g strictly increasing, the uniqueness and Theorem <ref> ensure that the optimum is in the tropical convex hull. However, this could have been noticed from the closed form V=⋁_i v_i for v_1,…,v_m∈. [Transportation problems] Consider λ_1,…,λ_n>0 and (λ) the simplex in n whose vertices are e_i/λ_i. Then γ_(λ)(x)=∑_iλ_i x_i-(∑_iλ_i)min_j x_j is the gauge on n whose unit ball is (λ).
The (weighted) Fermat–Weber problem ∑_i∈[m]w_iγ_(λ)(x-v_i) is equivalent to a transportation problem and every transportation problem can be reduced to this case; to see this better, write it as a linear program after scaling the weights w_i such that ∑_i w_i=∑_j λ_j (this change does not influence the optimum). This was first noticed in <cit.>, where the authors focused on the case λ_1=…=λ_n. The corresponding optimum is called a tropical median in the work cited. The optimal point is called a λ-splitter by Tokuyama and Nakano <cit.>, but no metric interpretation was mentioned. The authors gave a condition for partitioning the space into n regions in an equal fashion with some weights coming from λ and w; this can be seen as a reinterpretation of the first-order optimality condition for the corresponding Fermat–Weber problem. As a λ-splitter, it appeared in statistics <cit.> and as a particular case of Minkowski partition problems <cit.>. [Locating tropical hyperplanes] The tropical hyperplanes are parametrized by ^n/ via the identification with their apices. Moreover, we have d_(a,H_x^max)=(x-a)_(2)-(x-a)_(1). For a vector y, we denote by y_(k) the kth smallest entry, also known as the kth order statistic. Note that the aforementioned distance is -star-quasiconvex with kernel a; the easiest way to see this is to notice that the second order statistic is increasing. Therefore, our general location problems cover the case of locating tropical hyperplanes. The best-fit tropical hyperplane with L^1 error, i.e. g is the L^1 norm, was considered by Yoshida, Zhang, and Zhang as part of tropical principal component analysis <cit.>. The case of L^∞ error was considered by Akian et al. <cit.> for applications to auction theory and called tropical linear regression. They also show that the problem is polynomial-time equivalent to mean-payoff games <cit.> and, using d_(a,H_x^max)=d_(x,H_a^min), that it is dual to the problem of finding the largest inscribed ball in the tropical convex hull of the input points <cit.>. To end this subsection, we compute the optimal locations from the examples above for specific input points. We consider the points from <cit.> which are given by the columns of the matrix V = [ 0 1 3 2; 1 0 2 3; 1 1 0 0 ]. For this input, there is a unique tropical Fréchet point, (1,1,0), but the set of tropical Fermat–Weber points is a hexagon, marked with grey in Figure <ref>. We remark that V has two axes of symmetry and (1,1,0) is their intersection. The point (1,1,0) is also the tropical center of V, while the tropical median is (0,0,0). The latter point is also the unique apex of the best-fit tropical hyperplane with L^1 error of <cit.>. It is also a solution of the tropical linear regression, but not the unique one. The apices of the best-fit tropical hyperplanes with L^∞ error are of the form (λ,λ,0) with λ≤ 1 and their set is pictured with green in Figure <ref>. §.§ Regularization In some cases, we cannot expect g to be strictly increasing or all the dissimilarity functions f_i to be strictly -star-quasiconvex. Hence, a minimization algorithm might return a point outside the max-tropical convex hull of the input points when there are multiple solutions. In this subsection, we show how we could try to arrive at a solution belonging to ^max(v_1,…,v_m) through a regularized formulation. The idea of regularization is to consider a small parameter λ>0 and a nicely behaved function f_m+1:n→_≥ 0 and try to solve the optimization problem: minimize g(f_1(x),…,f_m(x))+λ f_m+1(x).
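As a small illustration of this regularized formulation — anticipating the choice of kernel discussed next — here is a Python sketch (ours; the sum aggregator g, the asymmetric tropical distance as dissimilarity, and the value of λ are choices made only for concreteness), applied to the four points of the example above.

import numpy as np

def d_asym(a, b):
    # asymmetric tropical distance
    diff = np.asarray(b, float) - np.asarray(a, float)
    return diff.sum() - len(diff) * diff.min()

# columns of the matrix V from the example above
V = [np.array([0., 1., 1.]), np.array([1., 0., 1.]),
     np.array([3., 2., 0.]), np.array([2., 3., 0.])]

# tropical center: componentwise maximum of sum-zero representatives
reps = [v - v.mean() for v in V]
center = np.maximum.reduce(reps)
center -= center.min()                     # canonical coordinates: (1, 1, 0)

def h(x):                                  # unregularized objective with g = sum
    return sum(d_asym(v, x) for v in V)

def h_reg(x, lam=0.1):                     # regularized objective, kernel = tropical center
    return h(x) + lam * d_asym(center, x)

print(center)                              # [1. 1. 0.]
print(h(np.zeros(3)), h(center))           # 10.0 and 12.0, consistent with the tropical
                                           # median (0, 0, 0) reported in the example above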
For our purposes, f_m+1 is nicely behaved if it is strictly -star-quasiconvex with a kernel from ^max(v_1,…,v_m). An easy choice for the kernel is the tropical center from Example <ref>. This is also a location problem with g_λ:^m+1_≥ 0→ given by g_λ(x_1,…,x_m,x_m+1)=g(x_1,…,x_m)+λ x_m+1 and the optimality criterion is the function h_λ:n→ given by h_λ(x)=g_λ(f_1(x),…,f_m+1(x)). Note that g_λ is strictly increasing in the (m+1)-st entry for every λ>0. Checking more carefully the proof of Theorem <ref>, the second statement holds if f_ℓ is strictly -star-quasiconvex and g strictly increasing in its ℓ-th entry. We use this property for the regularization. Therefore, we obtain the following direct consequence of Theorem <ref>. For every λ>0, all the minima of h_λ lie in ^max(v_1,…,v_m). The influence of the term f_m+1 decreases as λ goes to 0. If the functions are regular enough, we expect a collection of optima x^⋆_λ of h_λ to converge to an optimum of h. In fact, x^⋆_λ will be an optimum of h for λ sufficiently small if h is polyhedral convex and f_m+1 is Lipschitz continuous. If h is polyhedral convex and f_m+1 is a convex function with sub-linear growth, then there exists λ_0>0 such that all minima of h_λ are also minima of h for every λ<λ_0. The proof is quite technical, using subdifferential theory from convex analysis, so it is given in the appendix. We stress that Proposition <ref> can be useful for studying the tropical Fermat–Weber problem from <cit.>. Without regularization, it has undesirable behaviour for applications to biology; cf. <cit.>. § LOCATION PROBLEMS WITH TROPICALLY CONVEX SITES Location problems can also appear when facilities are regions of the ambient space and not only points. Here, we consider such a generalization where the sites are tropically convex sets. In the previous section, we used different distances to the input points. Here, we will measure our dissimilarities in a uniform way, by fixing an increasing function γ:^n_≥ 0→ and considering d_γ(x,y)=γ(y-x). We then say that d_γ is -star-quasiconvex; if γ is strictly increasing we say that d_γ is strictly -star-quasiconvex. This allows a clear definition of a distance from a region to a point: d_γ(A,x):=inf_y∈ Ad_γ(y,x). For a closed max-tropical cone K⊆^n we define the projection π_K:^n→ K as π_K(x)=max{y∈ K:y≤ x}. We note that π_K(x+λ)=π_K(x)+λ for every x∈^n and λ∈, so it induces a well-defined function π_K/:n→ K/ called the tropical projection onto the max-tropically convex set K/. The following lemma gives an explicit formula for the tropical projection and characterizes it as a closest point under d_γ. We omit the proof, as it is a classical result, shown when γ is the maximum norm in <cit.> and for a general tropical L^p norm in <cit.>. Let A be a closed max-tropically convex set. Then the tropical projection π_A(x) of a point x has the entries π_A(x)_i=max_a∈ A(a_i+min_j∈[n](x_j-a_j)). Moreover, d_γ(A, x)=d_γ(π_A(x), x) and π_A(x) is the unique point whose distance to x equals d_γ(A,x) if d_γ is strictly -star-quasiconvex. In fact, the maximum expression of the tropical projection from Lemma <ref> can be taken over the extremal points, in the case of tropical polytopes <cit.>. A similar result seems plausible for general convex sets, but the form above is sufficient for our purposes. From now on, our given sites are closed max-tropically convex sets A_1,…,A_m in n. Similar to Section <ref>, the objective function is h=g(d_γ(A_1,x),…,d_γ(A_m,x)), where g:^m_≥ 0→_≥ 0 is increasing.
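For tropical polytopes — where, as noted above, the maximum in the projection formula can be taken over the generators — the lemma can be evaluated directly. The following short Python sketch (ours, with made-up sample data) does so for the tropical L^∞ norm.

import numpy as np

def trop_projection(x, generators):
    # pi_A(x)_i = max over generators a of ( a_i + min_j (x_j - a_j) )
    x = np.asarray(x, float)
    return np.array([max(a[i] + (x - a).min() for a in generators)
                     for i in range(len(x))])

def gamma_inf(z):
    # tropical L^infinity norm: max_i z_i - min_j z_j
    return z.max() - z.min()

A = [np.array([0., 1., 1.]), np.array([1., 0., 1.])]   # generators of a max-tropical polytope
x = np.array([4., 0., 0.])
p = trop_projection(x, A)
print(p)                      # [0. 0. 0.], a max-tropical combination of the generators
print(gamma_inf(x - p))       # 4.0, which equals d_gamma(A, x) by the lemma above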
There exists a minimum of h lying in the tropical convex hull of the input ^max(A_1∪…∪ A_m). Moreover, if g and γ are strictly increasing, then all the minima of h lie in ^max(A_1∪…∪ A_m). If x∉^max(A_1∪…∪ A_m), then <cit.> entails the existence of an index ℓ∈[n] such that min_j≠ℓ(x_j-a_j)<x_ℓ-a_ℓ for every a∈^max(A_1∪…∪ A_m). Since A_1,…,A_m are closed sets, there exists an open ball around x not intersecting the union of these sets. Thus, for δ>0 sufficiently small and y=x-δ e_ℓ we have min_j(y_j-a_j)=min_j(x_j-a_j) for every a∈^max(A_1∪…∪ A_m). Therefore, equation (<ref>) implies π_A_i(y)=π_A_i(x) for all i∈[m]. Note that y-π_A_i(y)=x-π_A_i(x)-δ e_ℓ≤ x-π_A_i(x). Since γ is increasing, we have d_γ(A_i,y)=γ(y-π_A_i(y))≤γ(x-π_A_i(x))=d_γ(A_i,x) for every i∈[m]. Moreover, if γ is strictly increasing we get d_γ(A_i,y)<d_γ(A_i,x). In other words, moving from x in the direction -e_ℓ does not increase any of the distances d_γ(A_i,x) – and strictly decreases them when γ is strictly increasing; in particular, h does not increase. Using this observation, the rest of the proof is identical to the proof of Theorem <ref>. § TROPICALLY CONVEX CONSENSUS METHODS In this section, we focus on applications to phylogenetics—the study of the evolutionary history of species <cit.>. The information is represented as an evolutionary tree, or phylogeny: a tree whose leaves are labeled by the names of the species. In this paper, we will deal only with trees that encode the evolution from a common ancestor and possess a molecular clock. To be more formal, we have a finite set containing the names of the species and a rooted tree whose leaves are in bijection with ; the root corresponds to the most recent common ancestor of all the species under consideration. Time is represented by positive weights on the edges, which gives a way to measure distances between nodes in the trees. What is more, we assume that the distance from the root to any leaf is the same; this means that the same amount of time passes between the most recent common ancestor (MRCA) of all species and any leaf. Such trees are called equidistant. To a rooted phylogeny T we associate a distance matrix D∈^× where the entry D_ij represents the distance between the leaves labelled i and j in T. It is known that T is equidistant if and only if D is ultrametric <cit.>, i.e. D_ij≤max(D_ik,D_kj) ∀ i,j,k∈. Hence, we will not distinguish between equidistant trees and ultrametric matrices in the rest of the paper. Because D is symmetric and has zero entries on the diagonal, we can see it as a point of ^2. We define the tree space _ as the image of the space of all ultrametrics in 2. Due to <cit.>, this is homeomorphic to the BHV space defined in <cit.>. We note that the ultrametric condition (<ref>) implies that _ is max-tropically convex. We are interested in consensus methods: given as input multiple phylogenies on , find an evolutionary tree on being as similar as possible to the input trees. This is a common problem in evolutionary biology, as multiple distinct trees arise from statistical procedures or from the multiple methods used to reconstruct phylogenies from different data; see <cit.> or <cit.> for details. A consensus method can be seen as a location statistic in the tree space. Since the latter is max-tropically convex, there have been many attempts to exploit this geometric structure to obtain relevant information <cit.>. We are interested in tropically convex consensus methods, defined in <cit.>.
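Before giving the definition, we note that the ultrametric condition — and the fact that it is preserved under max-tropical combinations, which is what makes the tree space max-tropically convex — can be checked directly. The following Python sketch (ours, with two made-up equidistant trees on three taxa) illustrates this.

import numpy as np

def is_ultrametric(D, tol=1e-9):
    # checks D_ij <= max(D_ik, D_kj) for all triples i, j, k
    n = D.shape[0]
    return all(D[i, j] <= max(D[i, k], D[k, j]) + tol
               for i in range(n) for j in range(n) for k in range(n))

D1 = np.array([[0., 2., 4.], [2., 0., 4.], [4., 4., 0.]])   # taxa 1 and 2 diverge last
D2 = np.array([[0., 6., 2.], [6., 0., 6.], [2., 6., 0.]])   # taxa 1 and 3 diverge last

# a max-tropical combination: shift the off-diagonal entries of D1 by 1 and take the maximum
M = np.maximum(D1 + 1.0, D2)
np.fill_diagonal(M, 0.0)

print(is_ultrametric(D1), is_ultrametric(D2), is_ultrametric(M))   # True True True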
A consensus method c is tropically convex if c(T_1,…,T_m)∈^max(T_1,…,T_m) for every m≥ 1 and T_1,…,T_m∈_. The location problems discussed in the previous section give rise to tropically convex consensus methods. Note that we do not need to impose the restriction that the optimum to lie in _. It is automatically satisfied from the tropical convexity of _ and Theorem <ref>. This observation ensured that tropical median consensus methods are fast to compute <cit.>. Tropically convex consensus methods are particularly interesting because they preserve relationships from the input trees. To explain this more clearly, we firstly need some terminology: two subsets of taxa A,B form a nesting in T, and we denote it by A<B, if the MRCA of A in T is a strict descendant of the MRCA of A∪ B. If D is the ultrametric associated to T, then we can write the condition as max_i,j∈ AD_ij<max_k,ℓ∈ A∪ BD_kℓ. We say that a consensus method c is Pareto on nestings if c(T_1,…,T_m) displays the nesting A<B whenever A<B appears in all input trees T_1,…,T_m. The consensus method c is called co-Pareto on nestings if c(T_1,…,T_m) does not display the nesting A<B unless A<B appears in some input tree T_i. These conditions are desirable for consensus methods <cit.>. It is useful to see these properties from a geometric point of view. Consider _(A<B) the subset of _ consisting of trees displaying the nesting A<B; it is described by (<ref>). We also make the notation _(A≮B) for the complement _∖_(A<B), which is the set of trees not displaying A<B. Then c is Pareto on nestings if and only if for every nesting A<B and trees T_1,…,T_m∈_(A<B) we have c(T_1,…,T_m)∈_(A<B). We also note that c is co-Pareto on nestings if and only if for every nesting A<B and trees T_1,…,T_m∈_(A≮B) we have c(T_1,…,T_m)∈_(A≮B). The next result shows that tropically convex consensus methods have both Pareto and co-Pareto properties, being an improved version of <cit.>. Thus, we have a large class of consensus methods satisfying both properties. This is remarkable, as no such consensus method is listed in the surveys <cit.>. Tropically convex consensus methods are Pareto and co-Pareto on nestings. For every nesting A<B, the set _(A<B) is max-tropically convex as (<ref>) describes an open max-tropical halfspace. Whence, Remark <ref> implies that tropically convex consensus methods are Pareto on nestings. Similarly, the set _(A≮B) is max-tropically convex as it is the intersection of _ with the tropical halfspace defined by the inequality max_i,j∈ AD_ij≥max_k,ℓ∈ A∪ BD_kℓ. Remark <ref> implies also the co-Pareto property. The Pareto property gives a unanimity rule: nestings present in all the trees are also present in the consensus. One may wonder if this rule can be relaxed as there exist (super)majority-rule consensus trees commonly used for the unweighted case; they are denoted M_ℓ by Felsenstein in <cit.>. Indeed, one can find such a rule for tropical medians <cit.>. A nesting appears in the tropical median consensus tree if it appears in a proportion of the input trees greater than 1-1/n2. Moreover, a nesting will not appear in the tropical median consensus tree if it occurs in a proportion less than 1/n2 of the input trees. The tropical median corresponds to the Fermat–Weber problem whose gauge distance is given by the regular simplex. Therefore, the essential hull of a finite set A defined in <cit.> coincides with the max-tropical convex hull of A. Then the conclusion follows from <cit.> and Remark <ref>, as in the proof of Proposition <ref>. 
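The nesting condition used above is equally easy to evaluate on an ultrametric matrix; a small Python sketch (ours, with an illustrative tree on four taxa) follows.

import numpy as np

def displays_nesting(D, A, B):
    # A < B holds in the tree encoded by D iff max_{i,j in A} D_ij < max_{k,l in A u B} D_kl
    AB = sorted(set(A) | set(B))
    inner = max(D[i, j] for i in A for j in A)
    outer = max(D[k, l] for k in AB for l in AB)
    return inner < outer

# made-up ultrametric on the taxa {0, 1, 2, 3}
D = np.array([[0., 1., 3., 3.],
              [1., 0., 3., 3.],
              [3., 3., 0., 2.],
              [3., 3., 2., 0.]])

print(displays_nesting(D, [0, 1], [2]))   # True: {0, 1} is nested below {0, 1, 2}
print(displays_nesting(D, [0, 2], [1]))   # False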
Note that a consensus method is not well-defined when there are multiple minimum points. Most problematic is the situation in which different tree topologies are possible, as it is then unclear how to resolve incompatible optimal trees. Yet, this is not the case when the set of optimal locations is convex <cit.>: separating the tree space into cones of trees sharing the same tree topology gives rise to a convexly disjoint collection in the sense of <cit.>. Nonetheless, the aforementioned proposition applies when the set of all optima in n2 is contained in _; this is guaranteed for strictly -star-quasiconvex dissimilarities. Otherwise, one might still have problems in defining a consensus method consistently; see <cit.> for the symmetric tropical Fermat–Weber problem. For this reason, one has to consider the regularized versions discussed in §<ref>. §.§ Towards tropical supertrees Supertrees are a generalization of consensus trees in the case when the given input consists of phylogenies on different taxa. This can also be interpreted as a missing-data problem. In other words, we are given as input phylogenetic trees T_1,…,T_m whose leaves are labelled by _1,…,_m, respectively. A supertree method returns a tree whose leaf set is =⋃_i_i, summarizing the information from T_1,…,T_m. We use the idea of Grindstaff and Owen to represent trees with missing taxa by the set of all possible trees on all the taxa <cit.>. Their method is similar to a location problem with the BHV distance using an L^∞ error. We note that another approach for supertrees in a tropical setting was proposed in <cit.>; the authors relied on imputation to reduce supertrees to consensus trees. So we replace the input tree T_i by the tropically convex set __i^-1(T_i) where _:_→_ is a projection obtained by keeping the entries of an ultrametric matrix corresponding only to rows and columns from ⊂. We will be interested in tropically convex supertrees, i.e. the output belongs to ^max(⋃_i∈[m]__i^-1(T_i)). According to Theorem <ref>, we may obtain such methods by employing strictly -star-quasiconvex dissimilarity measures. Tropically convex supertrees are also Pareto on nestings. We record this fact, whose proof is similar to that of Proposition <ref>. In particular, it motivates the search for tropically convex supertree methods. Tropically convex supertree methods are Pareto on nestings. A co-Pareto property is no longer possible, as relationships between groups of taxa might not appear in all input trees. Nonetheless, conflicting relationships cannot appear. We remark that we did not give a well-defined supertree method. The problem arises from the fact that the optima could have different tree topologies. For example, an extreme case is when there are two trees T_1 and T_2 on disjoint sets of taxa. There are clearly many different ways to combine the information. Therefore, extra assumptions must be made. § CONCLUSION AND FUTURE PERSPECTIVES We provided a large class of location estimators whose value lies in the max-tropical convex hull of the input, with the purpose of obtaining consensus methods with good properties. The first direction would be to develop methods for computing the optima efficiently. On the other hand, searching for extra properties of specific location problems could be helpful for applications; more details are provided below. §.§ Comparison to consensus methods based on the BHV distance We have exploited tropical convexity to obtain consensus methods with good properties.
More precisely, we focused on (co-)Pareto properties that can be interpreted in a purely geometric way. The associated spaces are also max-tropically convex so the aforesaid properties are immediate for the tropical approach. Although the BHV geometry of the tree space is more studied than its tropical counterpart, there are few consensus methods proposed for this geometry. A first proposal was given in the pioneering paper by Billera, Holmes, and Vogtmann <cit.>, but a few drawbacks were already pointed out: e.g., doubling every input tree changes the output. An approach based on Fréchet means was proposed by Miller et al. <cit.> and Bačák <cit.>. It is also Pareto and co-Pareto on splits <cit.>, but the result is more intricate. The same properties hold for Fermat–Weber and center problems in the BHV space <cit.>. The approach is again analytical, but similar for all the cases. One could try a geometric approach, as in the tropical case, as it could lead faster to identification of self-consistent properties for consensus methods. §.§ Majority rules in consensus methods Proposition <ref> provides a supermajority rule for tropical median consensus with respect to nestings. This can be a step towards understanding the relationship between median weighted trees and the widely used majority-rule consensus for unweigthed trees. In fact, the majority-rule consensus can be interpreted as a median <cit.>, but it is unclear if this can be extended to weighted phylogenies. However, Proposition <ref> provides a large threshold for a majority rule in the case of tropical median consensus trees, indicating that they are quite conservative. This seems to be owing to the low breakdown point of the tropical median caused by asymmetry; check <cit.> for more details. Therefore, an investigation of location estimators with higher breakdown point could provide a better connection to the majority-rule consensus. §.§ Compositional data A different application of our location estimators could be to compositional data <cit.>. That is, the data can be seen as points in a simplex; our methods would be applied to the centered logratio transform of the input. Note that -star-quasiconvex sets are defined with respect to special directions, which correspond to the vertices of the simplex. What is more, the motivation of Tokuyama and Nakano in studying algorithms for transportation problem came from splitting the points from a simplex in multiple regions <cit.>. Moreover, Nielsen and Sun analyzed clustering methods with the symmetric tropical distance on compositional data showing a better performance than other more commonly used dissimilarity measures <cit.>. These results suggest that -star-quasiconvex dissimilarities could be useful in compositional data analysis. §.§ Acknowledgments I am indebted to Michael Joswig for discussing different aspects of this paper. I thank Günter Rote for bringing <cit.> to my attention. The author was supported by Facets of Complexity (GRK 2434, project-ID 385256563). § APPENDIX A: CONVEX ANALYSIS ON N We state and proof a slightly more general form of Proposition <ref> and then we put an Euclidean structure on n to show how we can obtain a quantitative result for the regularized version of the tropical Fermat–Weber problem. §.§ The proof of Proposition <ref> We will prove the result in a finite-dimensional real vector space X. We will equip it with an inner product ⟨·,·⟩ which gives an isomorphism X^*≅ X. 
In this way, we can see the subgradients of a convex function as elements of X. We recall that the subdifferential of a convex function f:X→ at a point x is the set ∂ f(x)={c∈ X:f(y)-f(x)≥⟨ c,y-x⟩ ∀ y∈ X}. It will be used to characterize the minima of f through the first-order minimality condition: x is a minimum of f if and only if 0∈∂ f(x). We refer to the book by Rockafellar <cit.> for more details on convex analysis. We are interested in optima of regularized versions of h of the form h+λ f with f having linear growth. More specifically, we care about f being Lipschitz continuous, i.e. there exists a constant L>0 such that |f(x)-f(y)|≤ Lx-y for every x,y∈ X, where · is any norm on X.[We assumed that X is finite-dimensional, so every two norms are equivalent. Thus, the definition does not depend on the specific norm. Nevertheless, the constant L depends on ·.] As a last definition, we say that h is polyhedral convex if it is the maximum of finitely many affine functions on X. Now we can state and prove a slight generalization of Proposition <ref>. Let h:X→ be a polyhedral convex function and f:X→ convex and Lipschitz continuous. Then there exists a constant λ_0>0 such that the minima of h+λ f are also the minima of h for every λ∈(0,λ_0). Consider an arbitrary minimum m_λ of h+λ f. The first-order optimality condition entails 0∈∂ h(m_λ)+λ∂ f(m_λ). What is more, since f is Lipschitz continuous, <cit.> yields the existence of a bounded set B such that ∂ f(x)⊂ B for all x ∈ X. If 0∉∂ h(x), then 0∉∂ h(x)+λ B for λ sufficiently small, as ∂ h(x) is closed. We also know that there are finitely many values for ∂ h(x), as we assumed h is a polyhedral convex function. Accordingly, there exists λ_0>0 such that 0∉∂ h(x)+λ B for every λ∈(0,λ_0). The last relation implies that 0∈∂ h(m_λ) if λ<λ_0, which is equivalent to m_λ being a minimum of h. If we know the bounded set B from the proof of Proposition <ref>, then we can set λ_0=sup{λ>0: 0∉ P+λ B, ∀ P∈} where is the set of all possible values of ∂ h(x) such that 0∉∂ h(x). This supremum is positive, as 𝒫 is a finite collection of closed convex sets. If f is a gauge γ, then <cit.> says that we can set B={x∈ X:γ^∘(x)≤ r}=:rB_γ^∘ for some r>0 where γ^∘(y):=sup_x:γ(x)≤ 1⟨ x,y⟩ is the dual gauge. Hence, P+λ B represents the set of points at distance at most λ r from P measured by the distance d_γ^∘ induced from γ^∘, i.e. d_γ^∘(x,y)=γ^∘(y-x). Consequently, we have λ_0=inf_P∈d_γ^∘(P,0)/r. §.§ Euclidean structure on n We conclude by explaining how we can put a Euclidean structure on n in a natural way. The idea is to identify the tropical projective torus with a hyperplane of ^n with the regular Euclidean structure. Using this idea, by factoring with , one can identify n with the orthogonal subspace to , which is ={x∈:x_1+…+x_n=0}. This identification is natural as we obtain the same subdifferentials of a convex function f:n→ as in the case when we consider it as a function on ^n such that f(x+λ)=f(x) for each x∈^n and λ∈. Having fixed this structure, we search for λ_0 as in Proposition <ref> for h(x)=∑_i γ_∞(x-v_i) and f(x)=γ_1(x-v) where v∈^max(v_1,…,v_m). That is, we want quantitative results for regularizations of tropical Fermat–Weber problems. In this case, the subdifferentials of h are integer polytopes in . Moreover, one can check that the dual gauge of γ_1 has the expression γ_1^∘(x)=γ_1(-x)/n which takes integer values at each point of ∩^n. Consequently, λ_0=inf_P∈d_γ_1^∘(P,0)≥ 1 as it is a positive integer.
Hence, the minima of h+λ f are also minima of h for every λ∈(0,1).
http://arxiv.org/abs/2307.04627v1
20230710151340
Irreducibility of eventually positive semigroups
[ "Sahiba Arora", "Jochen Glück" ]
math.FA
[ "math.FA", "math.SP", "47B65, 47D06, 46B42, 47A10" ]
Irreducibility of eventually positive semigroups

Sahiba Arora (Technische Universität Dresden, Institut für Analysis, Fakultät für Mathematik, 01062 Dresden, Germany; [email protected])
Jochen Glück (Bergische Universität Wuppertal, Fakultät für Mathematik und Naturwissenschaften, Gaußstr. 20, 42119 Wuppertal, Germany; [email protected])

August 12, 2023
===================

Mathematics Subject Classification (2010): 47B65, 47D06, 46B42, 47A10

Positive C_0-semigroups that occur in concrete applications are, more often than not, irreducible. Therefore a deep and extensive theory of irreducibility has been developed that includes characterizations, perturbation analysis, and spectral results. Many arguments from this theory, however, break down if the semigroup is only eventually positive – a property that has recently been shown to occur in numerous concrete evolution equations. In this article, we develop new tools that also work for the eventually positive case. The lack of positivity for small times makes it necessary to consider ideals that might only be invariant for large times. In contrast to their classical counterparts – the invariant ideals – such eventually invariant ideals require more involved methods from the theory of operator ranges. Using those methods we are able to characterize (an appropriate adaptation of) irreducibility by means of linear functionals, derive a perturbation result, prove a number of spectral theorems, and analyze the interaction of irreducibility with analyticity, all in the eventually positive case. By a number of examples, we illustrate what kind of behaviour can and cannot be expected in this setting.

§ INTRODUCTION

§.§ Eventually positive C_0-semigroups

A C_0-semigroup (e^tA)_t≥ 0 on a function space or, more generally, on a Banach lattice E is called individually eventually positive if, for each 0≤ f∈ E, there exists a time t_0≥ 0 such that e^tAf≥ 0 for all t≥ t_0. If the time t_0 can be chosen independently of f, then the semigroup is said to be uniformly eventually positive. Since Daners showed in <cit.> that the semigroup (e^tA)_t≥ 0 generated by the Dirichlet-to-Neumann operator on the unit circle can, for suitable parameter choices, be eventually positive rather than positive, the theory of eventually positive semigroups garnered growing attention from the semigroup community. While the study of the finite-dimensional case has a somewhat longer history and can already be traced back to <cit.>, the development of the theory on general Banach lattices began with the papers <cit.> and resulted in a variety of articles: for instance, a characterization of uniform eventual positivity is given in <cit.>, perturbation results can be found in <cit.> (we also refer to <cit.> for related finite-dimensional perturbation results), the long-term behaviour is studied in <cit.>, and the more subtle property of local eventual positivity is analyzed in <cit.>. An overview of the state of the art as of the beginning of 2022 can be found in <cit.>. In general, the theory incorporates two types of results in its current level of development:
* Characterizations of eventual positivity, though under rather strong technical assumptions.
* Consequences of eventual positivity.
The characterization results mentioned in (i) have already been used to show eventual positivity (and versions thereof) for a large variety of concrete PDEs, for example, the bi-Laplacian with Wentzell boundary conditions <cit.>, delay differential equations <cit.>, bi-Laplacians on graphs <cit.>, Laplacians with point interactions <cit.>, and polyharmonic operators on graphs <cit.>. Despite those successes, as long as we do not know how to considerably weaken the technical assumptions in those characterization theorems, there is a gap to the results of type (ii): the latter can (currently) hardly be used to show properties of those eventually positive semigroups that occur in concrete applications – rather they give us necessary conditions for eventual positivity and indicate, at the same time, the limits of the theory. This paper is mainly about the results of the second type. §.§ Irreducibility for positive semigroups A C_0-semigroup on a Banach lattice E is said to be positive if each semigroup operator leaves the positive cone E_+ invariant and a positive C_0-semigroup is called irreducible if it does not leave any closed ideals invariant except for {0} and E itself. This notion of irreducibility is motivated by the same concept in finite dimensions where it occurs, for instance, in the classical Perron–Frobenius theorem, in graph theory, and in the theory of Markov processes on finite state spaces. In infinite dimensions, many positive semigroups that describe the solutions to concrete evolution equations are irreducible. Detailed theoretical information about such semigroups can, for instance, be found in <cit.> and <cit.>. A treatment of irreducibility for general semigroups of positive operators – i.e., not only one-parameter semigroups – can be found in <cit.>. A property that makes irreducibility efficient to handle is that one can characterize it in terms of duality: a positive C_0-semigroup (e^tA)_t ≥ 0 on a Banach lattice E is irreducible if and only if for each non-zero f ∈ E_+ and each non-zero positive functional φ in the dual space E' there exists a time t ≥ 0 such that ⟨φ, e^tA f ⟩ > 0. This observation is classical, see for instance <cit.>, but we include a rather detailed analysis of the situation in Proposition <ref>. §.§ Contributions and organization of the article Motivated by the recent theory and applications of eventually positive semigroups, we explore the notion of irreducibility outside the framework of positive semigroups and thereby focus on the eventually positive case. This gives rise to two problems that cannot be solved by falling back to classical arguments from the positive case and thus require the use of different methods: * Classically, the analysis of invariant ideals for semigroups strongly relies on the positivity of the semigroup operators. However, as the semigroups we are interested in are only eventually positive, it becomes natural to focus on (the non-existence of) ideals that are only eventually invariant. For individually eventually positive semigroups though, this poses additional challenges since we cannot expect any of the operators in the semigroup to be positive. * For positive semigroups, the above-mentioned fact that irreducibility can be characterized by means of duality is very powerful. However, the arguments used to show this characterization in the positive case cannot be directly adapted to treat the eventually positive situation. 
In the light of those challenges, we make the following contributions: Regarding (i), we distinguish between irreducible semigroups in the classical sense and what we call persistently irreducible semigroups, that rely on eventual invariance of closed ideals (see Definition <ref>). We show this distinction is indeed necessary in the eventually positive case (Example <ref>), but not in the positive case (Proposition <ref>). For eventually positive semigroups that are analytic, irreducibility and persistent irreducibility are also equivalent (Proposition <ref>) and, in some cases, even imply a stronger version of eventual positivity (Proposition <ref>); this is similar to the same phenomenon in the positive case <cit.>. Regarding (ii), in order to characterize persistent irreducibility by means of duality (Theorem <ref>) a new tool is required: we first develop conditions under which non-closed vector subspaces that are individually eventually invariant are automatically uniformly eventually invariant. This is possible in the setting of operator ranges that we discuss in Section <ref>, see in particular Corollary <ref>. To explain the relevance of this to our framework let us point out that, while we define persistent irreducibility by means of closed ideals, non-closed ideals still play an essential role in the theory. This is because important arguments rely on the construction of invariant principal ideals – which are typically not closed (see the proof of Lemma <ref>). As mentioned before, positive irreducible semigroups exhibit many interesting spectral properties. Motivated by this, we study the spectrum of eventually positive persistently irreducible semigroups (Section <ref>) and show that some of these properties are carried over from the positive case. In particular, we show that the spectrum of the generator of an eventually positive persistently irreducible semigroup is guaranteed to be non-empty in some situations. The perturbation theory of eventually positive C_0-semigroups is currently quite limited. It was demonstrated in <cit.> that eventually positive semigroups do not behave well under (positive) perturbations. In Section <ref> we show that if the perturbation interacts well with the unperturbed semigroup though, then the eventual positivity is indeed preserved. This enables us to construct eventually positive semigroups that do not satisfy the technical assumptions of the available characterization results in, for instance, <cit.> and <cit.>. Consequently, we are able to give an example of an eventually positive (but non-positive) semigroup which is persistently irreducible but not eventually strongly positive (Example <ref>). Analogously to the positive case, such a situation cannot occur for analytic semigroups (Proposition <ref>). §.§ Notation and terminology Throughout, we freely use the theory of Banach lattices for which we refer to the standard monographs <cit.>. All Banach spaces and Banach lattices in the article are allowed to be real or complex unless otherwise specified. Let E be a Banach lattice with positive cone E_+. For f∈ E_+, we alternatively use the notation f≥ 0 and write f⪈ 0 if f≥ 0 but f 0. For each u∈ E_+, the principal ideal of E given by E_u := {f ∈ E: there exists c>0 such that f≤ cu } is also a Banach lattice when equipped with the gauge norm f_u:= inf{c>0: f≤ cu} (f∈ E_u). The principal ideal E_u embeds continuously in E. In addition, if this embedding is dense, then u is called a quasi-interior point of E_+. 
If E is a complex Banach lattice, we denote its real part by E_. An operator A: E ⊇A→ E is said to be real if A=A∩ E_+iA∩ E_ and A(A∩ E_) ⊆ E_. In particular, a bounded operator T between Banach lattices E and F is real if and only if T(E_)⊆ F_. We say that T is positive if T(E_+)⊆ F_+. In particular, every positive operator is real. Moreover, we say that T is strongly positive if Tf is a quasi-interior point of F_+ for each 0⪇ f∈ E. We denote the space of bounded linear operators between E and F by (E,F) with the shorthand (E):=(E,E). The range of T∈(E,F) is denoted by T. A C_0-semigroup (e^tA)_t≥ 0 is said to be real if each operator e^tA is real. It can be easily seen that (e^tA)_t≥ 0 is real if and only if A is a real operator. Lastly, we recall that the spectral bound (A) of the generator A is defined as (A):=sup{λ: λ∈(A)}∈ [-∞,∞). § EVENTUAL INVARIANCE OF VECTOR SUBSPACES We introduce the notion of a persistently irreducible C_0-semigroup in Section <ref> using the concept of eventually invariant ideals. To aid us in the sequel, we collect here a few preliminary results about eventual invariance. Let E be a Banach space and let = (T_i)_i ∈ I a net of bounded linear operators on E. A set S ⊆ E is called… * …invariant under if T_i S ⊆ S for each i ∈ I. * …uniformly eventually invariant under if there exists i_0 ∈ I such that T_i S ⊆ S for each i ≥ i_0. * …individually eventually invariant under if for each f ∈ S there exists i_0 ∈ I such that T_i f ∈ S for all i ≥ i_0. In the subsequent sections, we will use these concepts specifically for C_0-semigroups to study irreducibility and versions thereof. To this end it will be important to understand under which conditions an individually eventually invariant vector subspace is even uniformly eventually invariant. For the case of closed vector subspaces, this is quite easy as long as the index set of the net of operators is not too large. Recall that a subset B of a directed set (A,≤) is said to be majorizing if for each a ∈ A, there exists b∈ B such that a≤ b. We first consider nets of operators which eventually map into a fixed closed vector subspace; as a corollary, we then obtain a result for eventual invariance of closed vector subspaces. Let E,F be Banach spaces and let = (T_i)_i ∈ I a net of bounded linear operators from E to F such that the directed set I contains a countable majorizing subset. Let W ⊆ F be a closed vector subspace. Assume that for every x ∈ E there exists i_0 ∈ I such that T_i x ∈ W for all i ≥ i_0. Then i_0 can be chosen to be independent of x. Let J ⊆ I be a countable majorizing subset. For each j ∈ J the set V_j := {x ∈ E : T_ix ∈ W for all i ≥ j} is a closed subspace of E and it follows from the assumption that ⋃_j ∈ J V_j = E. Hence, by Baire's theorem there exists j_0 ∈ J such that V_j_0 = E. Let E be a Banach space and let = (T_i)_i ∈ I be a net of bounded linear operators on E such that the directed set I contains a countable majorizing subset. Let V ⊆ E be a closed vector subspace. If V is individually eventually invariant under , then it is even uniformly eventually invariant under . Consider the restricted operators T_iV: V → E and apply Proposition <ref>. We note that if a set S ⊆ E is uniformly eventually invariant, then so is its closure S. However, individual eventual invariance of S does not imply individual eventual invariance of S, in general. This is a serious obstacle in the subsequent section, especially in the proof of Theorem <ref>. 
To overcome it, we need a generalisation of Corollary <ref> to the so-called operator ranges. A vector subspace V of a Banach space E is called an operator range in E if there exists a complete norm _V on V which makes the embedding of V into E continuous. If existent, such a norm on V is uniquely determined up to equivalence due to the closed graph theorem. It follows easily from a quotient space argument that V is an operator range if and only if there exists a Banach space D and a bounded linear operator T: D → E with range V (this explains the terminology operator range). The concept of operator ranges was studied in depth in <cit.> and was recently used to prove an abstract characterization of the individual maximum and anti-maximum principle in <cit.> (see also <cit.>). The intersection of finitely many operator ranges is again an operator range (the proof is easy; details can be found in <cit.> or <cit.>). The next example shows that the intersection of infinitely many operator ranges is not an operator range, in general. The subsequent Proposition <ref>, however, serves as a reasonable substitute. Let Q denote the set of all quasi-interior points in the cone of the Banach lattice c_0, i.e., the set of all 0 ≤ f ∈ c_0 such that f_k > 0 for each k ∈. For each f ∈ Q the principal ideal (c_0)_f is an operator range, being complete with respect to the gauge norm _f. However, the intersection ⋂_f ∈ Q (c_0)_f can easily be checked to coincide with the space c_00 of sequences with only finitely many non-zero entries, and this space cannot be an operator range since it has a countable Hamel basis (so in particular, it is not complete under any norm). Let E be a Banach space, let I be a non-empty set, and for each i ∈ I, let V_i ⊆ E be an operator range with a complete norm _i which makes the embedding of V_i into E continuous. Moreover, for each i ∈ I, let c_i > 0 be a real number. Then the vector subspace V := { v ∈⋂_i ∈ I V_i : v_V := sup_i ∈ I c_i v_i < ∞} is complete when equipped with the norm _V and the corresponding embedding into E is continuous. In particular, V is an operator range. It is important to note that the space (V, _V) does not only depend on the numbers c_i, but also on the choice of the norms _i – while for fixed i all norms on V_i that make the embedding V_i → E continuous are equivalent, we may re-scale each norm with an i-dependent factor. For this reason, we can assume in the proof of the proposition that all factors c_i are equal to 1; this can be achieved by simply replacing each norm _i with the norm c_i _i. As mentioned before the proof we assume that c_i = 1 for each i. Clearly, the normed space (V, _V) embeds continuously into (V_i, _i) for each i, so it also embeds continuously into E. It suffices now to show that (V, _V) is complete. To this end, let (x^n) be a Cauchy sequence in (V,_V). Then (x^n) is bounded in (V,_V), say by a number M ≥ 0. For each i, as (V, _V) embeds continuously into the Banach space (V_i, _i), there exists x_i∈ V_i such that x^n → x_i in (V_i, _i). Now by the continuous embedding of V_i into E, we obtain that x^n → x_i in E for each i. It follows that x:=x_i=x_j for all i,j∈ I. In particular, x ∈⋂_i ∈ I V_i. To show show that x ∈ V and that (x^n) converges to x with respect to _V, let ε > 0. Since (x^n) is a Cauchy sequence in (V, _V), there exists n_0 such that x^m-x^n_i≤ε for all i ∈ I and all n,m ≥ n_0. For each i ∈ I, the convergence of x^n to x in with respect to _i thus yields x^m - x_i≤ε for all m ≥ n_0. 
Hence, x_i ≤x^n_0_i + ε≤ M+ε, for each i ∈ I, so x ∈ V. Moreover, the inequality x^m - x_i≤ε for each i ∈ I and all m ≥ n_0 shows that x^m-x_V ≤ε for all m ≥ n_0, so indeed x^n → x in (V, _V). We are now in a position to show the following generalization of Proposition <ref>. As a consequence, we will obtain a generalization of the eventual invariance result in Corollary <ref> to operator ranges (Corollary <ref>). Let E,F be Banach spaces and let = (T_i)_i ∈ I be a net of bounded linear operators from E to F such that the directed set I contains a countable majorizing subset. Let W ⊆ F be an operator range, endowed with a complete norm _W that makes the inclusion map W → F continuous. The following assertions are equivalent: * There exists i_0 ∈ I such that T_i E ⊆ W for each i ≥ i_0. * There are numbers c_i ≥ 0 for all i ∈ I that satisfy the following property: For each x ∈ E there exists i_0 ∈ I such that T_i x ∈ W and T_i x_W ≤ c_i x_E for all i ≥ i_0. A result which is loosely reminiscent of this theorem can be found in <cit.>. For the proof of Theorem <ref> we need the following observation <cit.>: if W is an operator range in a Banach space F, and T: E → F is a bounded linear operator from another Banach space E into F, then the pre-image V := T^-1(W) is an operator range in E, as the norm _V given by x_V := x_E + Tx_W for all x ∈ V is complete and makes the embedding of V into E continuous. Let W⊆ F be an operator range, endowed with a complete norm _W that makes the embedding into F continuous. “(i) ⇒ (ii)”: This implication immediately follows by choosing c_i:= T_i_E→ W – which is finite by the closed graph theorem – for i≥ i_0 and c_i:=0 for all other i. “(ii) ⇒ (i)”: Without loss of generality, we assume that c_i ≥ 1 for each i∈ I. Let J⊆ I be a countable majorizing set and for each j∈ J, let V_j := ⋂_i≥ j T_i^-1(W). Since W is an operator range in F, T_i^-1(W) is an operator range in E as it is complete and embeds continuously into E when endowed with the norm x_i:= x_E+T_ix_W on T_i^-1(W). Thus, by Proposition <ref>, the subspace Ṽ_j:= {x∈ V_j : x_Ṽ_j := sup_i≥ j c_i^-1x_i <∞} of E is also an operator range. Next, we observe that ⋃_j∈ JṼ_j = E. Indeed, let x∈ E. As J is majorizing we can, due to assumption (ii), choose j∈ J such that for each i≥ j, we have T_ix∈ W and T_i x_W ≤ c_i x_E. Hence, for all i ≥ j, one has x_i ≤ (1+c_i) x_E and thus c_i^-1x_i ≤ 2 x_E, as we chose each c_i to be at least 1. It follows that x∈Ṽ_j, as desired. Since ⋃_j∈ JṼ_j = E, we can apply a version of Baire's theorem for operator ranges <cit.> to conclude that there exists j∈ J such that Ṽ_j=E. In turn, V_j = E and thus T_i E ⊆ W for all i≥ j. Let E be a Banach space and let = (T_i)_i ∈ I be a net of bounded linear operators on E such that the directed set I contains a countable majorizing set. For each operator range V in E the following assertions are equivalent: * The space V is uniformly eventually invariant under T. * There are numbers c_i ≥ 0 for all i ∈ I such that the space V satisfies the following quantified individual eventual invariance property: For each v ∈ V there exists i_0 ∈ I such that T_i v ∈ V and T_i v_V ≤ c_i v_V for all i ≥ i_0. Apply Theorem <ref> to the restricted operators T_i|_V : V → E. In the context of Theorem <ref> and Corollary <ref> it is worthwhile to mention the general question under which conditions individual properties of operator-valued functions are automatically uniform. An abstract framework to study this question was developed by Peruzzetto in <cit.>.
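Before turning to irreducibility, a finite-dimensional illustration may be helpful. The following Python sketch (ours; the 3×3 matrix is built from spectral projections purely for illustration and does not appear in the paper) produces a matrix A with a negative off-diagonal entry — so the semigroup (e^tA)_t≥0 is not positive — whose exponentials nevertheless become entrywise nonnegative from some finite time onward, i.e., the semigroup is uniformly eventually positive; the script locates that time numerically.

import numpy as np
from scipy.linalg import expm

# spectral construction: dominant eigenvalue 2 with strictly positive eigenvector (1,1,1),
# second eigenvalue 1.9 with eigenvector (1,-1,0); the gap 2 - 1.9 is small, so the
# negative contribution of the second spectral projection dies out only slowly
P1 = np.ones((3, 3)) / 3.0
P2 = 0.5 * np.array([[1., -1., 0.], [-1., 1., 0.], [0., 0., 0.]])
A = 2.0 * P1 + 1.9 * P2                      # the remaining eigenvalue is 0

print(A.min())                               # about -0.283: a negative off-diagonal entry

times = np.linspace(0.0, 6.0, 601)
negative = [t for t in times if expm(t * A).min() < -1e-12]
print(max(negative) if negative else 0.0)    # about 4.05; e^{tA} >= 0 entrywise afterwards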
§ IRREDUCIBILITY OF C_0-SEMIGROUPS In this section, we study irreducibility of C_0-semigroups, a notion which stems from the theory of positive semigroups. We drop the positivity assumption and first discuss a variety of sufficient conditions for the irreducibility of general C_0-semigroups (Proposition <ref>); those conditions are necessary in the positive case (Proposition <ref>). In the eventually positive case, things turn out to be more involved (Example <ref>), and the stronger notion of persistent irreducibility turns out to be fruitful for the analysis, see in particular Theorem <ref>. Note that, as [0,∞) contains the countable majorizing set ℕ, individual and uniform eventual invariance of a closed subspace under a semigroup are equivalent notions (Corollary <ref>). A semigroup (e^tA)_t ≥ 0 on a Banach lattice E is called… * …irreducible if it has no closed invariant ideals, except for {0} and E. * …persistently irreducible if it has no closed ideal that is uniformly (equivalently: individually) eventually invariant, except for {0} and E. Observe that both irreducibility and persistent irreducibility do not change if we replace A with A + λ for any scalar λ. The name persistently irreducible is motivated by the observation that this property means that all the semigroup tails (e^tA)_t ≥ t_0 for t_0 ≥ 0 act irreducibly on E, i.e., the semigroup is not only irreducible but it also remains irreducible when its action is only considered for large times. Obviously, persistent irreducibility implies irreducibility (which also explains why we avoid the terminology eventually irreducible, a notion that one might, at first glance, be tempted to use instead of persistently irreducible). From the definition, it is seen at once that nilpotent semigroups are not persistently irreducible unless dim E ≤ 1. In Example <ref>, we give an example of a semigroup which is nilpotent and irreducible – thus showing that the notions irreducibility and persistent irreducibility are not equivalent. Throughout, we will study a number of sufficient or necessary conditions for irreducibility; they are motivated by <cit.>, where positive semigroups are studied. Let (e^tA)_t≥ 0 be a C_0-semigroup on a Banach lattice E. We study the relationships between the following assertions. * The semigroup (e^tA)_t ≥ 0 is irreducible. * Weak condition at arbitrary times: For each 0 ⪇ f ∈ E and each 0 ⪇φ∈ E', there exists t ∈ [0,∞) such that ⟨φ, e^tAf ⟩≠ 0. * The semigroup (e^tA)_t ≥ 0 is persistently irreducible. * Weak condition at large times or 0: For each 0 ⪇ f ∈ E, each 0 ⪇φ∈ E' and each t_0 ∈ [0,∞), there exists t ∈{0}∪ [t_0,∞) such that ⟨φ, e^tAf ⟩≠ 0. * Weak condition at large times: For each 0 ⪇ f ∈ E, each 0 ⪇φ∈ E' and each t_0 ∈ [0,∞), there exists t ∈ [t_0,∞) such that ⟨φ, e^tAf ⟩≠ 0. Let us first point out that several implications between these conditions are true for general C_0-semigroups: For a C_0-semigroup (e^tA)_t≥ 0 on a Banach lattice E, the following implications between the Conditions <ref> are true: the weak condition at large times implies both the weak condition at large times or 0 and the weak condition at arbitrary times; the weak condition at large times or 0 implies persistent irreducibility; persistent irreducibility implies irreducibility; and the weak condition at arbitrary times implies irreducibility. The implications “<ref> ⇒ <ref>” and “<ref> ⇐ <ref> ⇒ <ref>” are obvious. “<ref> ⇒ <ref>”: If <ref> fails, we can find a closed ideal I which is neither equal to {0} nor to E, but which is invariant under the semigroup. 
As I is proper and non-zero, there exists a vector 0 ⪇ f ∈ I and a functional 0 ⪇φ∈ E' that vanishes on I. One has ⟨φ, e^tA f ⟩ = 0 for all t ≥ 0, so <ref> fails. “<ref> ⇒ <ref>”: If <ref> fails, we can find a closed ideal I which is neither equal to {0} nor to E and a time t_0 ≥ 0 such that e^tAI ⊆ I for all t ∈ [t_0,∞). Again, as I is proper and non-zero, there is 0 ⪇ f ∈ I and a functional 0 ⪇φ∈ E' which vanishes on I. Hence, we have ⟨φ, e^tA f ⟩ = 0 for all t ∈{0}∪ [t_0,∞), which shows that <ref> fails. Proposition <ref> gives us a plethora of examples of (non-positive) semigroups that are (persistently) irreducible. Indeed, we see that persistent irreducibility is implied by individual eventual strong positivity, i.e., by the property ∀ f⪈ 0 ∃ t_0≥ 0 ∀ t≥ t_0: e^tAf is a quasi-interior point of E_+; this follows from the fact that a vector g ∈ E_+ is a quasi-interior point of E_+ if and only if ⟨φ, g ⟩ > 0 for all 0 ⪇φ∈ E', see <cit.>. Therefore, all examples of eventually strongly positive semigroups in <cit.> and <cit.> are persistently irreducible. In Example <ref>, we give an example of a non-positive persistently irreducible semigroup that does not satisfy (<ref>). For positive semigroups, the notions of irreducibility and persistent irreducibility are, in fact, equivalent: For a positive C_0-semigroup (e^tA)_t≥ 0 on a Banach lattice E, all five of the Conditions <ref> are equivalent. By Proposition <ref> it suffices to prove the following two implications: “<ref> ⇒ <ref>”: This implication is well-known for positive semigroups. The argument can be found just after <cit.> (see also <cit.>). “<ref> ⇒ <ref>”: Since <ref> holds, we know from Proposition <ref> that the semigroup is also irreducible. The irreducibility together with the positivity implies that each operator e^tA is strictly positive, meaning that e^tAf ⪈ 0 whenever f ⪈ 0; see <cit.>. So if f, φ, and t_0 are given as in <ref>, then e^t_0 Af ⪈ 0. Applying <ref> again, this time to the vectors e^t_0 Af and φ, we thus find a time t ≥ 0 such that ⟨φ, e^(t + t_0)Af ⟩≠ 0. The situation gets more subtle for eventually positive semigroups. For them, the following theorem indicates that persistent irreducibility is the appropriate notion to further build the theory on, as this property can be conveniently characterized by testing against functionals. For an individually eventually positive real C_0-semigroup (e^tA)_t≥ 0 on a Banach lattice E, the following implications between the Conditions <ref> are true: the weak condition at large times, the weak condition at large times or 0, and persistent irreducibility are mutually equivalent; persistent irreducibility implies irreducibility; and the weak condition at large times implies the weak condition at arbitrary times, which in turn implies irreducibility. The difference from the situation without any eventual positivity assumption (Proposition <ref>) is that persistent irreducibility and the two weak conditions at large times are now equivalent. In light of the situation for positive semigroups (Proposition <ref>) one may ask whether Theorem <ref> can be improved to get an equivalence between all five conditions, say at least for uniformly eventually positive semigroups. A negative answer is given in Example <ref>. However, the situation improves significantly if the semigroup is, in addition, analytic (Proposition <ref>). For the proof of Theorem <ref>, we need the following sufficient condition for a principal ideal in a Banach lattice to be uniformly eventually invariant under a semigroup. 
Getting uniform (rather than only individual) eventual invariance in the lemma is a bit subtle, as we merely assume the semigroup to be individually eventually positive. This is where our preparations on eventually invariant operator ranges from Section <ref> enter the game. Let E be a Banach lattice and let (e^tA)_t ≥ 0 be a real individually eventually positive C_0-semigroup on E. Let 0 ≤ h ∈ E and assume that there exists a time t_0 ≥ 0 such that e^tA h ≤ h for all t ≥ t_0. Then the principal ideal E_h (and, in turn, its closure) is uniformly eventually invariant under (e^tA)_t ≥ 0. We first consider a real vector f in the order interval [-h, h]. Due to the individual eventual positivity of the semigroup there exists a time t_1 ≥ t_0 such that e^tA(h-f) ≥ 0 and e^tA(h+f) ≥ 0 for all t ≥ t_1. Thus, for all t ≥ t_1 ≥ t_0, the vector e^tAf is real and satisfies ± e^tAf ≤ e^tAh ≤ h, so e^tAf ∈ [-h,h]. This proves that the order interval [-h,h] is individually eventually invariant under the semigroup. As [-h,h] spans E_h (over the underlying scalar field), it follows that E_h is individually eventually invariant under the semigroup. To show that E_h is even uniformly eventually invariant, we will now employ Corollary <ref>. If the underlying scalar field is , the preceding argument shows that, for each f ∈ E_h, we have e^tA f_h ≤f_h for all sufficiently large times t. If the underlying scalar field is and f ∈ E_h, say with gauge norm f_h ≤ 1, we can write f as f = f_1 + i f_2 for real vectors f_1,f_2 ∈ [-h,h] and hence, e^tAf = e^tAf_1 + e^tAf_2 has modulus at most 2h for all sufficiently large t. This shows that, for each f ∈ E_h, one has e^tAf_h ≤ 2f_h for all sufficiently large times t. In both cases, Corollary <ref> can be applied and give the uniform eventual invariance of E_h. Due to Proposition <ref>, only one implication is left to prove, namely “<ref> ⇒ <ref>”. Without loss of generality, assume that the growth bound (A) of the semigroup satisfies (A) < 0. Assume that <ref> fails, i.e., we can find 0 ⪇ f ∈ E, 0 ⪇φ∈ E' and t_0 ∈ [0,∞) such that ⟨φ, e^tAf ⟩ = 0 for all t ∈ [t_0,∞). Due to the individual eventual positivity of the semigroup, we can find a time t_1 ≥ t_0 such that e^tAf ≥ 0 for each t ≥ t_1. We distinguish two cases: Case 1: e^t_1Af = 0. It then follows from the individual eventual positivity that the orbit of every g ∈ E_f under the semigroup eventually vanishes. By applying Proposition <ref> to the family of operators (e^tAE_f)_t≥ 0 and W=E_f, we conclude that e^tA eventually vanishes on E_f. In particular, the closed ideal E_f is even uniformly eventually invariant. Thus, if E_f≠ E, then <ref> fails (as f is non-zero). On the other hand, if E_f = E, then the semigroup is nilpotent. But since <ref> is not true, one has E>1. Clubbing this with nilpotency, the semigroup can't be persistently irreducible, i.e., <ref> fails. Case 2: e^t_1Af ≠ 0. By the continuity of the semigroup orbit of f and the inequality e^tAf ≥ 0 for t ≥ t_1, it follows (by testing against positive functionals) that h := ∫_t_1^∞ e^tA f t ⪈ 0; the convergence of the integral is guaranteed as we assumed ω_0(A) < 0. By using again that e^tAf ≥ 0 for all t ≥ t_1, one readily checks that e^tAh ≤ h for all t ≥ 0. So, according to Lemma <ref>, the closure of the ideal E_h is uniformly eventually invariant under the semigroup. This closed ideal is non-zero since it contains that non-zero vector h. Moreover, we have ⟨φ, h ⟩ = 0, so φ vanishes on E_h, which shows that E_h≠ E. 
Whence, the semigroup is not persistently irreducible, i.e., <ref> fails. As a consequence of Theorem <ref>, we obtain the following eventual strict positivity result for eventually positive persistently irreducible semigroups: Let E be a Banach lattice and assume that (e^tA)_t ≥ 0 is a real C_0-semigroup on E which is individually eventually positive and persistently irreducible. For all 0 ⪇ f ∈ E and 0 ⪇φ∈ E' and all times t ≥ 0 one has e^tAf ≠ 0 and (e^tA)' φ≠ 0. We infer from Theorem <ref> that the semigroup satisfies Condition <ref><ref>. Let 0 ⪇ f ∈ E and 0 ⪇φ∈ E'. Suppose there exists t_0>0 such that e^t_0 Af=0 or (e^t_0A)' φ = 0. Then e^tAf=0 or (e^tA)' φ = 0 for all t≥ t_0. In either case, φe^tAf=0 for all t≥ t_0, which contradicts Condition <ref><ref>. Using Corollary <ref>, we are able to obtain the following analogue of <cit.>: If (e^tA)_t ≥ 0 is a C_0-semigroup on E which is uniformly eventually positive and persistently irreducible, then there exists a time t_0≥ 0 such that e^tA is a strictly positive operator for all t≥ t_0, meaning that e^tAf⪈ 0 for all 0⪇ f∈ E and all t≥ t_0. There exist uniformly eventually positive C_0-semigroups which are irreducible but not persistently irreducible; here is a concrete example. A uniformly eventually positive semigroup on ℓ^2 which is irreducible but not persistently irreducible. Let (r_n) be the orthonormal basis of L^2(0,1) that consists of the Rademacher functions and let U:L^2(0,1)→ℓ^2 be the unitary operator that maps each function to its coefficients with respect to the basis (r_n). Let (e^tB)_t≥ 0 denote the left shift semigroup on L^2(0,1) (which is nilpotent). We show that the semigroup on ℓ^2 given by e^tA=Ue^tBU^-1 for each t ≥ 0 is irreducible. However, the semigroup is clearly not persistently irreducible as it is nilpotent. In order to show that (e^tA)_t≥ 0 is irreducible, let I be a non-zero closed ideal of ℓ^2 that is invariant under the semigroup (e^tA)_t≥ 0. Then there exists k∈ such that e_k∈ I; here (e_n) denotes the standard orthonormal basis of ℓ^2. For every index j k we have e^tAe_ke_j = e^tBr_kr_j; and the term on the right is non-zero for some t∈ [0,1] (for instance, for all t<1 that are sufficiently close to 1). So for this time t the vector e^tAe_k dominates a non-zero multiple of e_j. As e^tAe_k is an I, so is e_j. Thus, I=ℓ^2. While the above counterexample shows that Condition <ref><ref> does not imply <ref> even in the case of uniformly eventually positive semigroups, we do not know whether the semigroup in the example satisfies <ref>. So it remains open whether any of the implications “<ref> ⇒ <ref>” or “<ref> ⇒ <ref>” is true for (individually or uniformly) eventually positive semigroups (but note that they cannot both be true, as Example <ref> shows). Finally, let us briefly consider the case of analytic semigroups. This case is simpler since a phenomenon as in Example <ref> cannot occur: Let E be complex Banach lattice and assume that (e^tA)_t ≥ 0 is an individually eventually positive C_0-semigroup on E. If the semigroup is analytic, then all five Conditions <ref> are equivalent. The remaining implication “<ref> ⇒ <ref>” in Theorem <ref> follows from the identity theorem for analytic functions. For positive semigroups, irreducibility together with analyticity implies a stronger version of positivity, namely that the semigroup operators map all vectors f ⪈ 0 to quasi-interior points <cit.>. 
In the following proposition, we slightly modify the argument to show that the same remains true if the semigroup is only uniformly eventually positive rather than positive. We do not know whether individual eventual positivity suffices for the same conclusion. Let E be a complex Banach lattice and assume that (e^tA)_t ≥ 0 is a uniformly eventually positive analytic C_0-semigroup on E and choose t_0 ∈ [0,∞) such that e^tA≥ 0 for all t ≥ t_0. If (e^tA)_t ≥ 0 is (persistently) irreducible, then for every 0⪇ f∈ E and all t > 2 t_0 the vector e^tAf is a quasi-interior point of E_+. In the light of the terminology of earlier papers on eventual positivity such as <cit.> it is natural to refer to the property in the conclusion of the proposition as uniform eventual strong positivity of (e^tA)_t ≥ 0, cf. (<ref>). In <cit.> this property was called eventual irreducibility. We first make the following preliminary observation: (*) If the orbit of a vector g ∈ E_+ is positive (meaning that e^tAg ≥ 0 for all t ≥ 0) and (t_n) ⊆ (0,∞) converges to 0 sufficiently fast, then we can find an increasing sequence (g_n) ∈ E_+ converging to g, that satisfies 0 ≤ g_n ≤ e^t_n A g for each index n. Indeed, let (t_n) converge to 0 so fast that ∑_n=1^∞e^t_nAg - g < ∞. Then we define the (not yet positive) vectors g_n := g - ∑_k=n^∞(g - e^t_k Ag )^+ for each n ∈, where the series converges absolutely in E. Clearly, (g_n) is an increasing sequence of real vectors in E that converges to g. For each index n one has g_n ≤ g - (g - e^t_n Ag )^+ = g e^t_nA g ≤ e^t_nAg. Since all the vectors e^t_nAg are positive we can now replace each g_n with g_n^+ to obtain a sequence (g_n) with the desired properties. This proves (*). Assume now that the conclusion of the proposition does not hold, i.e., that there exists a vector 0 ⪇ f ∈ E and a time τ > 2t_0 such that e^τ A is not a quasi-interior point of E_+. By the characterization of quasi-interior points in <cit.>), there is a functional 0 ⪇φ∈ E' such that ⟨φ, e^τ A f ⟩ = 0. The orbit of the vector g := e^t_0 A f ∈ E_+ is positive, so we can apply the preliminary observation (*) to g. Let (t_n) and (g_n) be as given by (*). By dropping finitely many elements of these sequences we can achieve that τ - t_n ≥ 2t_0 for all n and hence, all the operators e^(τ-t_0-t_n)A are positive. For all integers n ≥ m ≥ 1 we thus have 0 ≤ e^(τ-t_0-t_n)A g_m ≤ e^(τ-t_0-t_n)A g_n ≤ e^(τ-t_0)A g = e^τ Af and thus, ⟨φ, e^(τ-t_0-t_n)A g_m ⟩ = 0. As the sequence (τ-t_0-t_n)_n ≥ m accumulates at the point τ-t_0 ∈ (0,∞), analyticity of the semigroup implies that ⟨φ, e^tA g_m ⟩ = 0 for all t ∈ [0,∞) and all m ∈. As g_m → g, we even have 0 = ⟨φ, e^tA g ⟩ = ⟨φ, e^(t_0+t)Af ⟩ for all t ∈ [0,∞). According to Theorem <ref> this contradicts the persistent irreducibility. In Example <ref>, we show that the assumption of analyticity cannot be dropped in Proposition <ref>. § SPECTRAL PROPERTIES OF PERSISTENTLY IRREDUCIBLE SEMIGROUPS In this section, we study spectral properties of eventually positive semigroups that are persistently irreducible. For the case of positive irreducible semigroups, most of these properties are proved in <cit.>. It is instructive to observe that the conclusions of several results in this section resemble similar properties that were shown under stronger conditions in <cit.> (the conclusions are formulated somewhat differently in <cit.>, but can be rephrased in terms of leading eigenvectors, see <cit.>). 
Recall that a linear positive functional φ on a Banach lattice is said to be strictly positive if its kernel contains no positive non-zero element. Let E be a complex Banach lattice and let (e^tA)_t ≥ 0 be an individually eventually positive and persistently irreducible C_0-semigroup on E. * If 0 ⪇ u ∈ E is an eigenvector of A for an eigenvalue λ∈, then u is a quasi-interior point of E_+. * If 0 ⪇ψ∈ E' is an eigenvector of A' for an eigenvalue λ∈, then ψ is strictly positive. (a) Let 0 ⪇φ∈ E'_+. According to Theorem <ref>, Condition <ref><ref> is satisfied, so we can find t ∈ [0,∞) such that 0 < ⟨φ, e^tA u ⟩ = ⟨φ, e^t λ u ⟩ = e^t λ⟨φ, u ⟩, so ⟨φ, u ⟩ > 0. This shows that u is a quasi-interior point <cit.>. (b) This follows from a similar argument as (a). For the following theorem, recall that an eigenvalue λ of a linear operator A: E ⊇A→ E on a Banach space E is called geometrically simple if the eigenspace (λ-A) is one-dimensional. It is called algebraically simple if the generalized eigenspace ⋃_n ∈( (λ - A)^n ) is one-dimensional. Let (e^tA)_t ≥ 0 be real, individually eventually positive, and persistently irreducible C_0-semigroup on a complex Banach lattice E. If λ∈ is an eigenvalue of both A and A' and (λ - A') contains a positive non-zero functional, then λ is algebraically simple as an eigenvalue of A, the eigenspace (λ - A) is spanned by a quasi-interior point of E_+, and (λ - A') contains a strictly positive functional. For a proof of Theorem <ref>, we need the following general result: Let A: X ⊇A→ X be a closed and densely defined linear operator on a complex Banach space E and let λ∈ be an eigenvalue of both A and A'. If λ is geometrically simple as an eigenvalue of A and there exist eigenvectors u ∈(λ - A) and φ∈(λ - A') such that ⟨φ, u ⟩≠ 0, then λ is also algebraically simple as an eigenvalue of A. It seems likely that Lemma <ref> is known to experts in spectral theory, although we could not find an explicit reference for it. For matrices, the lemma is implicitly shown in the proof of <cit.>, whereas the infinite-dimensional version is implicitly shown in the proof of (iii) implies (iv) of <cit.>. Without loss of generality, we may assume that λ = 0. Let v ∈ A^2; it suffices to prove that v ∈ A. Since Av ∈ A and A is spanned by u, there exists a scalar α such that Av = α u. By testing this equality against φ and using that A'φ = 0, we obtain 0 = ⟨φ, Av ⟩ = α⟨φ, u ⟩. Since ⟨φ, u ⟩≠ 0, this implies that α = 0, so Av = 0. There is no loss of generality in assuming that λ = 0. Let 0 ⪇φ∈(A'). According to Proposition <ref>, φ is strictly positive and all positive non-zero elements of (A) are quasi-interior points of E_+. Now we show that the real part E_∩(A) of (A) is a sublattice of E_. Let f ∈ E_∩(A); it suffices to show that f∈(A). Due to the individual eventual positivity of the semigroup, there exists t_0 ≥ 0 such that f = e^tAf≤ e^tAf for all t ≥ t_0; this is a general property of individually eventually positive operator nets, see <cit.>. For t ≥ t_0 the vector e^tAf - f is therefore positive; but it is also in the kernel of φ, and hence equal to 0 due to the strict positivity of φ. Thus, e^tAf = f for all t ≥ t_0. For general t ≥ 0 this implies e^tAf = e^tA e^t_0 Af = e^(t+t_0)Af = f, so f∈ A, as claimed. Now we can show that A ∩ E_ is one-dimensional and spanned by a quasi-interior point of E_+. We have just seen that A ∩ E_ is a closed sublattice of the real Banach lattice E_. 
Since every non-zero positive vector in A ∩ E_ is a quasi-interior point within the Banach lattice E_, it is also a quasi-interior point within the Banach lattice A ∩ E_, see <cit.>. Hence, A ∩ E_ is at most one-dimensional <cit.>. Since A is real and A is non-zero, we conclude that A ∩ E_ is one-dimensional and thus spanned by a vector u ≠ 0. The modulus u is also a non-zero vector in A ∩ E_ and thus spans this space, too. By what we have noted at the beginning of the proof, u is a quasi-interior point of E_+. The fact that A is real implies that A is also spanned by u (over ). Finally, we note that the eigenvalue λ = 0 of A is algebraically simple. This follows from the geometric simplicity and ⟨φ, u ⟩ > 0 by means of Lemma <ref>. In the following corollary, we list a few simple consequences of Theorem <ref>. If E is a Banach space, then for each u∈ E and φ∈ E', the (at most) rank-one operator u⊗φ is defined as u ⊗φ: E → E, f ↦φfu. Let E be a complex Banach lattice and let (e^tA)_t ≥ 0 be a real, individually eventually positive, and persistently irreducible semigroup on E. * Assume that the spectral bound (A) is not -∞ and is an eigenvalue of A. If the operator family ( (λ - (A)) (λ,A) )_λ > (A) is bounded in some right neighbourhood of (A), then (A) is an algebraically simply eigenvalue of A, the eigenspace ((A) - A) is spanned by a quasi-interior point of E_+, and the dual eigenspace ((A) - A') contains a strictly positive functional. * If the semigroup is mean ergodic and the mean ergodic projection P is non-zero, then P = u ⊗φ for a quasi-interior point u of E_+ and a strictly positive functional φ∈ E'. * If (A) is not -∞ and is a pole of the resolvent, then the pole is simple and the corresponding spectral projection P is given by P=u⊗φ for a quasi-interior point u of E_+ and a strictly positive functional φ∈ E'. (a) We may assume that (A) = 0. According to Theorem <ref> it suffices to show that A' contains a non-zero positive element. Due to the boundedness assumption on the resolvent, A' contains a non-zero element φ <cit.>. The proof in this reference also shows how φ can be obtained: let u ∈ A be non-zero and choose an abritrary functional ψ∈ E' such that ⟨ψ, u ⟩≠ 0. Then the net ( λ(λ,A)'ψ)_λ∈ (0,∞) – where (0,∞) is ordered conversely to the order inherited from – has a weak^*-convergent subnet by the Banach–Alaoglu theorem and the limit φ of this subnet is non-zero and an element of A'. In this argument, we may choose the initial functional ψ to be positive. The individual eventual positivity of the semigroup implies that, for each 0≤ f∈ E, the distance of λ(λ,A)f to E_+ converges to 0 as t →∞ <cit.>. It follows from the positivity of ψ that φ is positive, as well. (b) If (e^tA)_t ≥ 0 is mean ergodic, the mean ergodic projection P is the projection onto A along A and A separates A'; see <cit.>. The same reference also shows that (0,∞) is in the resolvent set of A and that (λ(λ, A))_λ∈ (0,∞) is bounded in a right neighbourhood of 0. Since P = A and P was assumed to be non-zero, A contains a non-zero element. So (A) = 0 and we can apply (a) to conclude that A is spanned by a quasi-interior point u of E_+ and that A' contains a strictly positive functional φ. Being a projection, P satisfies P' = P = 1 <cit.>. Moreover, as P' is the weak^*-limit of the dual operators of the Cesàro means of (e^tA)_t ≥ 0, its range P' contains A' and in particular, φ, so P' is actually spanned by φ. Re-scaling φ and u such that φu=1, we deduce that P = u ⊗φ. 
(c) Since (A) is a pole of the resolvent (,A), it is an eigenvalue of both A and A' and the corresponding eigenspaces each contain a positive, non-zero vector; see <cit.>. Therefore, by Theorem <ref>, the spectral bound (A) is an algebraically simple eigenvalue of A, the eigenspace ((A)-A) is spanned by a quasi-interior point u of E_+, and ((A)-A') contains a strictly positive functional φ. It follows from the algebraic simplicity that the pole (A) of the resolvent of A is simple, see for instance, <cit.>. In particular, P=((A)-A) and P'=((A)-A'). Since P and P' have the same dimension <cit.>, the image P' is spanned by φ. By re-scaling φ and u such that φu=1, we get P = u ⊗φ. After the preceding results on eigenvectors we discuss in Theorem <ref> below that, on spaces of continuous functions, persistent irreducibility gives that the spectrum of the semigroup generator is non-empty. For positive semigroups, this is known <cit.>, but the proof given in this reference does not directly extend to the eventually positive case since the resolvent of an eventually positive semigroup need not be positive at any point. Nevertheless, as in the positive case, an essential ingredient for the proof is the observation in the following proposition. We say that a bounded linear operator T on a Banach lattice E has individually eventually positive powers if for each f ∈ E_+, there exists an integer n_0 ≥ 0 such that T^nf ≥ 0 for each n ≥ n_0. Let T be a bounded linear operator on a Banach lattice E and assume that T has individually eventually positive powers. Assume that there is a vector h ∈ E_+ and a number δ > 0 such that Th ≥δ h and T^n h ≠ 0 for each n ∈ (so in particular, h is non-zero). Then (T) ≥δ. Before the proof it is worthwhile to observe that the conclusion is very easy to see if T is positive: for each n ∈ one then has T^n h ≥δ^n h and thus T^n≥δ^n as h ≠ 0. In the eventually positive case one can argue as follows: We may assume that h = 1. Since Th - δ h is positive, there exists n_0 ≥ 0 such that T^n+1 h ≥δ T^n h ≥ 0 for each n ≥ n_0. For every k ≥ 0 this yields T^n_0+k h ≥δ T^n_0+k-1h ≥δ^2 T^n_0+k-2 h ≥…≥δ^k T^n_0 h. Hence, T^n_0+k≥T^n_0+kh≥δ^k α, where α := T^n_0 h is non-zero by assumption. So (T) = lim_k →∞T^n_0+k^1/n_0+k≥lim_k →∞δ^k/n_0+kα^1/n_0+k = δ, which proves the proposition. Before we use this proposition to study persistently irreducible semigroups on spaces C_0(L) for locally compact L, we mention the following simple consequence on C(K)-spaces for compact K (i.e., abstractly speaking, on AM-spaces with order unit), which holds without any irreducibility assumption. This generalizes <cit.>, where the semigroup was assumed to be uniformly eventually strongly positive rather than just individually eventually positive. Let (e^tA)_t ≥ 0 be a real and individually eventually positive C_0-semigroup on an AM-space E with order unit. If the semigroup is not nilpotent, then (A) is non-empty. Let be a (strong) order unit of E. Due to the strong continuity at time 0 and since the semigroup is real, there exists s > 0 such that e^sA≥1/2. Moreover, one has e^tA≠ 0 for any t ∈ [0,∞). Indeed, assume the contrary. Then e^tA = 0 for all sufficiently large t, say t ≥ t_0. For every real vector f ∈ E between 0 and one has thus 0 ≤ e^tAf ≤ e^tA for all sufficiently large t due to the individual eventual positivity. Hence, each orbit of the semigroup vanishes eventually. By Proposition <ref> this implies that the semigroup is nilpotent, which contradicts our assumption. 
As the powers of the operator e^sA are individually eventually positive, we can apply Proposition <ref> to conclude that the spectral radius of e^sA is non-zero. Thus, the growth bound of the semigroup is not -∞. But for individually eventually positive semigroups on AM-spaces with unit it was shown in <cit.> that the growth bound coincides with the spectral bound. So the spectrum is indeed non-empty. The assumption that the semigroup not be nilpotent is not redundant in the corollary: there exist real C_0-semigroup on C([0,1]) that are nilpotent (see for instance <cit.>). Obviously, such a semigroup is (even uniformly) eventually positive and yet has empty spectrum. On the other hand, this cannot happen for a positive C_0-semigroup on C(K): for those, it is a classical result that the spectrum of the generator is always non-empty <cit.>). On the space C_0(L) of continuous function on a locally compact Hausdorff space L that vanish at infinity, it can happen that a positive semigroup is not nilpotent and still its generator has empty spectrum; see <cit.> for a concrete example where this occurs. However, if the semigroup is also irreducible, then it follows again that the generator has non-empty spectrum <cit.>. The following theorem generalizes this result to individually eventually positive semigroups. Since the relation between individual eventual positivity of the semigroup and the resolvent operators is much subtler than in the positive case, we cannot use the same argument as in <cit.>. We will thus argue with a sum of finitely many semigroup operators rather than with the resolvent. Let L be locally compact Hausdorff and let (e^tA)_t ≥ 0 be a real, individually eventually positive, and persistently irreducible C_0-semigroup on C_0(L). Then (A) is non-empty. Consider a positive non-zero vector h ∈ C_0(L) which has compact support S. According to Corollary <ref>, we have e^tAh ≠ 0 for each t ∈ [0,∞). Using the individual eventual positivity, we can choose a time t_0 > 0 such that e^tAh ⪈ 0 for each t ≥ t_0. Due to the persistent irreducibility, Theorem <ref> shows that the semigroup satisfies Condition <ref><ref>. Hence, for every x ∈ L there exists a time t_x ∈ [t_0, ∞) such that (e^t_xAh)(x) > 0. For every x ∈ L, we denote the open support of the continuous positive function e^t_x Ah by U_x. Since x ∈ U_x for each x ∈ L, we have ⋃_x ∈ S U_x ⊇ S. Hence, due to the compactness of S we can find finitely many points x_1, …, x_m ∈ S such that U_x_1, …U_x_m cover S. Now consider the bounded linear operator T := e^t_x_1A + … + e^t_x_mA on C_0(L). Then Th ≥ 0 and for each x ∈ S we have (Th)(x) > 0. Again by the compactness of S there exists ε > 0 such that (Th)(x) ≥ε for each x ∈ S. Since h vanishes outside of S, it follows that Th ≥δ h for some δ > 0. We also observe that the operator T has individually eventually positive powers. To see this, let t_min denote the smallest of the numbers t_x_1, …, t_x_m; then t_min > 0 since we chose t_0 to be non-zero. Let f ∈ E_+ and consider t̂∈ [0,∞) such that e^tAf ≥ 0 for each t ≥t̂. Choose an integer n_0 ≥ 1 such that t_min n_0 ≥t̂. A brief computation then shows that T^n f ≥ 0 for every n ≥ n_0. On the other hand, as e^tAh ⪈ 0 for all t ∈ [t_0,∞), the same computation shows that T^nh ⪈ 0 for each n ∈. So the assumptions of Proposition <ref> are satisfied and thus T has non-zero spectral radius. 
Since the spectral radius is subadditive on commuting operators, we conclude that at least one of the operators e^t_x_1A, …, e^t_x_mA has non-zero spectral radius (and hence even each of the operators in the semigroup has non-zero spectral radius). Therefore, the growth bound of the semigroup is not -∞. As the growth bound coincides with the spectral bound for individually eventually positive semigroups on general AM-spaces <cit.>, it follows that (A) is non-empty, as claimed. § A METHOD TO CONSTRUCT EVENTUALLY POSITIVE SEMIGROUPS It is natural to ask for examples of eventually positive semigroups that are persistently irreducible but do not have the eventual strong positivity property (<ref>). Actually, one can easily find positive semigroups that satisfy those conditions – for instance the shift semigroup on L^p over the complex unit circle for any p ∈ [1,∞). However, it is much less clear how to find non-positive examples with precisely those properties. The major obstacle is that, for non-positive semigroups, eventual positivity is usually checked by the characterization theorems in <cit.> and <cit.> – but those theorems already yield eventual strong positivity. Thus, those theorems do not lend themselves to identifying eventually positive semigroups that are not eventually strongly positive. In this section, we will provide a condition that can be used to construct examples. Our approach relies on perturbation theory. It has been demonstrated in <cit.> that perturbation theory for eventual positivity has a number of serious limitations. However, we will see in this section that more can be said if the perturbation is known to interact well with the unperturbed semigroup. This allows us to obtain examples of eventual positivity in situations where the leading eigenvalue of the generator is not known and where the eventual positivity is not necessarily strong. As a result, we can build a semigroup in Example <ref> that has all the desired properties listed above: it is eventually positive but not positive, and it is persistently irreducible but not eventually strongly positive. §.§ Positive perturbations We start with the following perturbation result that follows quite easily from the Dyson–Phillips series expansion of perturbed semigroups. Let (e^tA)_t ≥ 0 be a C_0-semigroup on a Banach lattice E and let B ∈(E). If e^tA B e^sA≥ 0 for all s,t ≥ 0, then e^t(A+B) - e^tA≥ 0 for all t ≥ 0. Note that under the assumptions of the proposition, B is positive. From the perspective of eventual positivity, the point of Proposition <ref> is that any kind of eventual positivity of (e^tA)_t ≥ 0 is inherited by the perturbed semigroup (e^t(A+B))_t ≥ 0. Let us also point out that if A is real, then so are all semigroups in the proposition, and hence one can rewrite the conclusion as e^t(A+B)≥ e^tA for every t ≥ 0. We use the Dyson-Phillips series representation of the perturbed semigroup <cit.>: define V_0(t) := e^tA for each t ≥ 0 and, inductively, V_n+1(t) := ∫_0^t e^(t-s)A B V_n(s) s for each t ≥ 0 and each integer n ≥ 0, where the integral is to be understood in the strong sense. Then e^t(A+B) = ∑_n=0^∞ V_n(t) for each t ≥ 0, where the series converges absolutely with respect to the operator norm. It follows from the assumption of the proposition that V_1(t) ≥ 0 for all t ≥ 0. Since, also due to the assumption, e^tAB ≥ 0 for all t ≥ 0, one thus obtains inductively that V_n(t) ≥ 0 for each t ≥ 0 and each integer n ≥ 1. Thus, e^t(A+B) - e^tA = ∑_n=1^∞ V_n(t) ≥ 0 for every t, as claimed. 
Proposition <ref> gives us the option to construct examples of eventually positive semigroups by easy perturbations, without having precise spectral information about the perturbed operator A+B available. Let us first demonstrate this in a very simple finite-dimensional example. Consider the self-adjoint 3 × 3-matrix A := [ 7 -1 3; -1 7 3; 3 3 3 ] = U [ 0 ; 8 ; 9 ] U^* , where U = (u_1 u_2 u_3) is the unitary matrix with the columns u_1 := 1/√(6)[ -1; -1; 2 ] , u_2 := 1/√(2)[ 1; -1; 0 ] , and u_3 := 1/√(3)[ 1; 1; 1 ] . As the rescaled semigroup (e^-9t e^tA)_t ≥ 0 converges to the rank-1 projection u_3 u_3^* as t →∞, we see that (e^tA)_t ≥ 0 is (uniformly) eventually strongly positive. (Alternatively, one could also derive this from <cit.>.) However, one can say more in this concrete example: From the diagonalization of A one obtains by a short computation that A^n = 8^n u_2 u_2^* + 9^n u_3 u_3^* = [ 9^n/3 + 8^n/2, 9^n/3 - 8^n/2, 9^n/3 ; 9^n/3 - 8^n/2, 9^n/3 + 8^n/2, 9^n/3 ; 9^n/3, 9^n/3, 9^n/3 ] for every integer n ≥ 1 (but not for n=0, as we dropped the term 0^n u_1 u_1^* from the formula). So for every t ≥ 0, the third row and the third column of e^tA are positive. Now consider the perturbation B := [ 0 ; 0 ; b ] for any number b ≥ 0. Then the matrix e^tA B e^sA is positive for all s,t ≥ 0, so it follows from Proposition <ref> that the matrix semigroup generated by A + B = [ 7 -1 3; -1 7 3; 3 3 3 + b ] is (uniformly) eventually strongly positive for any fixed number b ≥ 0. In the above example, note that B does not commute with A if b ≠ 0, so A+B does not have the same eigenvectors as A. Hence, we were able to derive information about the eventual positivity of (e^t(A+B))_t ≥ 0 without knowing the eigenvectors of the generator. In Example <ref>, we will use the above matrix semigroup to construct an eventually positive semigroup (in infinite dimensions) that is persistently irreducible but not eventually strongly positive. For this reason, it is desirable, in the situation of Proposition <ref>, to have a way to check whether the perturbed semigroup is persistently irreducible. The following result is useful for this purpose. Let (e^tA)_t ≥ 0 be a C_0-semigroup on a Banach lattice E and let B ∈(E). Assume that e^tA B e^sA≥ 0 for all s,t ≥ 0 and that the semigroup (e^tA)_t ≥ 0 is individually eventually positive (hence, so is (e^t(A+B))_t ≥ 0 according to Proposition <ref>). Let I ⊆ E be a closed ideal that is uniformly eventually invariant under the perturbed semigroup (e^t(A+B))_t ≥ 0. Then I is uniformly eventually invariant under both the unperturbed semigroup (e^tA)_t ≥ 0 and the operator family (e^tABe^sA)_s,t ≥ 0. By saying that I is uniformly eventually invariant under (e^tABe^sA)_s,t ≥ 0 we mean that e^tABe^sAI ⊆ I for all sufficiently large s and t – i.e., the index set [0,∞) × [0,∞) is endowed with the product order given by (s_1, t_1) ≤ (s_2, t_2) if and only if s_1 ≤ s_2 and t_1 ≤ t_2. We first show the eventual invariance under (e^tA)_t ≥ 0: let 0 ≤ f ∈ I. Then it follows from the individual eventual positivity of the unperturbed semigroup and Proposition <ref> that 0 ≤ e^tA f ≤ e^t(A+B)f ∈ I for all sufficiently large t (where the time from which on the first inequality holds depends on f). Hence, I is individually eventually invariant under (e^tA)_t ≥ 0. As I is closed, Corollary <ref> even gives the uniform eventual invariance. To show the (individual and whence also uniform) eventual invariance under the family (e^tABe^sA)_s,t ≥ 0, choose times t_0,s_0 ≥ 0 such that e^t(A+B)I ⊆ I for all t ≥ t_0 and e^sAI ⊆ I for all s ≥ s_0. Let t ≥ t_0 and s ≥ s_0. 
First consider a vector 0 ≤ f ∈ I that is contained in A = A+B. Then e^t(A+B) e^sA f ∈ I. Since f ∈A = A+B we can take the derivative either with respect to s or t. As I is closed this yields e^t(A+B) A e^sA f ∈ I and e^t(A+B) (A+B) e^sA f ∈ I. Thus, e^t(A+B) B e^sAf ∈ I. By assumption Be^sAf and e^tA B e^sAf are positive, so it follows by means of Proposition <ref>, that 0 ≤ e^tA B e^sAf ≤ e^t(A+B) B e^sAf and thus, e^tA B e^sAf ∈ I. Finally, consider a general vector 0 ≤ f ∈ I. Choose an (f-dependent) time r_0 ≥ s_0 such that e^rAf ≥ 0 for all r ≥ r_0 and define f_n := 1/n∫_0^1/n e^rA e^r_0A f r for each integer n ≥ 1. Then one has 0 ≤ f_n ∈ I ∩A for each n, and thus e^tA B e^sA f_n ∈ I by what we have already shown. Since f_n → e^r_0A f, it follows that e^tA B e^(s+r_0)A f ∈ I for all t ≥ t_0 and all s ≥ s_0. This shows that I is individually eventually invariant with respect to the operator family (e^tABe^sA)_s,t ≥ 0, and the uniform eventual invariance thus follows from Corollary <ref>. §.§ Coupling eventually positive semigroups A trivial way to construct a new eventually positive semigroup from two given ones is to take their direct sum; this construction is arguably not particularly interesting, though, as the new semigroup does not show any kind of interaction between the two original ones. In particular, the new semigroup will not be (persistently) irreducible, even if the two original semigroups are. By employing the simple perturbation result from Proposition <ref> and the eventual invariance result from Theorem <ref> we will now demonstrate how a coupling term between the two semigroups can be introduced that destroys neither eventual positivity nor persistent irreducibility. Let (e^tA_1)_t ≥ 0 and (e^tA_2)_t ≥ 0 be C_0-semigroups on Banach lattices E_1 and E_2, respectively. Let B_12: E_2 → E_1 and B_21: E_1 → E_2 be bounded linear operators such that e^tA_1 B_12 e^sA_2: E_2 → E_1 and e^tA_2 B_21 e^sA_1: E_1 → E_2 are positive for all s,t ≥ 0 (so in particular, B_12 and B_21 are positive). * If both semigroups (e^tA_1)_t ≥ 0 and (e^tA_2)_t ≥ 0 are individually/uniformly eventually positive, then so is the semigroup (e^tC)_t ≥ 0 on E_1 × E_2 generated by C := [ A_1 ; A_2 ] + [ B_12; B_21 ] . * If both (e^tA_1)_t ≥ 0 and (e^tA_2)_t ≥ 0 are individually eventually positive and persistently irreducible and both B_12 and B_21 are non-zero, then (e^tC)_t ≥ 0 is also persistently irreducible. (a) Due to Proposition <ref>, we have that e^tC - e^tA_1⊕ e^tA_2≥ 0 for each t ≥ 0, from which the assertion readily follows. (b) Let I ⊆ E_1 × E_2 be a closed ideal that is uniformly eventually invariant under (e^tC)_t ≥ 0. Then I = I_1 × I_2 for closed ideals I_1 ⊆ E_1 and I_2 ⊆ E_2. It follows from Theorem <ref> that we can find a time t_0 ≥ 0 such that I = I_1 × I_2 is invariant under both e^tA_1⊕ e^tA_2 and [ 0 e^tA_1 B_12 e^sA_2; e^tA_2 B_21 e^sA_1 0 ] for all s,t ≥ t_0. As (e^tA_1)_t ≥ 0 and (e^tA_2)_t ≥ 0 are persistently irreducible, it follows that I_1 = {0} or I_1 = E_1 and likewise for I_2. So it only remains to check that the two cases (1)  I_1 = E_1, I_2 = {0} and (2) I_1 = {0}, I_2 = E_2 cannot occur. Suppose I_1 = E_1. As B_21 is non-zero, the spaces E_1,E_2 are non-zero, so we can choose a vector 0 ⪇ f_1 ∈ I_1. Moreover, as the dual operator B_21' is also non-zero, there exists 0 ⪇φ_2 ∈ E_2' such that B_21'φ_2 ≠ 0. 
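Before turning to couplings, the finite-dimensional Example <ref> above lends itself to a quick numerical sanity check of Proposition <ref>. The following sketch is an illustration only and not part of the argument; it assumes NumPy and SciPy are available, and the choice b = 1 as well as the time grids and tolerances are arbitrary.

import numpy as np
from scipy.linalg import expm

# Matrices from the example: A is self-adjoint with eigenvalues 0, 8, 9,
# and B = diag(0, 0, b) for some b >= 0.
A = np.array([[7.0, -1.0, 3.0],
              [-1.0, 7.0, 3.0],
              [3.0, 3.0, 3.0]])
b = 1.0
B = np.diag([0.0, 0.0, b])

small_t = np.linspace(0.01, 0.3, 30)
large_t = np.linspace(2.0, 3.0, 5)
grid = np.linspace(0.05, 2.0, 10)

# e^{tA} has negative entries for small t > 0 (the semigroup is not positive) ...
print(any((expm(t * A) < -1e-12).any() for t in small_t))          # True
# ... but all entries of e^{tA} are strictly positive for large t
# (uniform eventual strong positivity).
print(all((expm(t * A) > 0).all() for t in large_t))               # True
# e^{tA} B e^{sA} is entrywise positive: only the positive third column of
# e^{tA} and the positive third row of e^{sA} enter the product.
print(all((expm(t * A) @ B @ expm(s * A) >= -1e-9).all()
          for t in grid for s in grid))                            # True
# Hence, by Proposition <ref>, e^{t(A+B)} dominates e^{tA} entrywise.
print(all((expm(t * (A + B)) - expm(t * A) >= -1e-9).all()
          for t in grid))                                          # True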
Due to Theorem <ref>, the semigroup (e^sA_1)_s ≥ 0 satisfies Condition <ref><ref>, so there exists s ≥ t_0 such that ⟨φ_2, B_21 e^sA_1 f_1 ⟩ = ⟨ B_21' φ_2, e^sA_1 f_1 ⟩ 0 and hence, B_21e^sA_1 f_1 ≠ 0. By applying Corollary <ref> to the semigroup (e^tA_2)_t ≥ 0, we conclude that e^t_0 A_2 B_21e^sA_1 f_1 ≠ 0. Due to the choice of t_0, we have the inclusion e^t_0 A_2 B_21e^sA_1 I_1 ⊆ I_2, so it follows that I_2 ≠{0}, as claimed. By swapping the roles of I_1 and I_2, we see that the case I_1 = {0}, I_2 = E_2 cannot occur, either. The situation in Corollary <ref> can be interpreted in a system theoretic sense: On the state spaces E_1 and E_2 consider the input-output-systems ẋ_1 = A_1 x_1 + B_1 u_1, ẋ_2 = A_2 x_2 + B_2 u_2, y_1 = C_1 x_1, y_2 = C_2 x_2, respectively, where B_k: U_k → E_k are bounded linear operators defined on Banach spaces U_k and C_k: E_k → Y_k are bounded linear operators to Banach spaces Y_k. Here, u_k: [0,∞) → U_k are interpreted as input signals and y_k: [0,∞) → Y_k are interpreted as output signals for each k ∈{1,2}. Now we consider bounded linear operators G_21: Y_1 → U_2 and G_12: Y_2 → U_1 and couple both systems by setting u_1 := G_12 y_2 and u_2 := G_21y_1. Thus we obtain the coupled differential equation [ ẋ_1; ẋ_2 ][ A_1 ; A_2 ][ x_1; x_2 ] + [ B_1 G_12 C_2; B_2 G_21 C_1 ][ x_1; x_2 ] , which leads to the operator described in Example <ref>(a) if one sets B_12 := B_1 G_12 C_2 and B_21 := B_2 G_21 C_1. We know from Proposition <ref> that if an irreducible uniformly eventually positive semigroup is analytic, then it is even uniformly eventually strongly positive. In the following, we use Corollary <ref> to construct a non-positive example which shows that one cannot drop the analyticity assumption in this proposition. A non-positive but uniformly eventually positive semigroup which is persistently irreducible but not individually eventually strongly positive in the sense of (<ref>). Set E_1 := ^3 and E_2 := L^1(). Let A_1 ∈^3 × 3 be the matrix A from Example <ref>, which generates a (uniformly) eventually strongly positive semigroup according to that example. Moreover, we choose a semigroup (e^tA_2)_t ≥ 0 on L^1() as follows: for each t ∈ (0,∞) let k_t ∈ L^1() be the density function of the Gamma distribution whose mean and variance are both equal to t, i.e., k_t(x) = 1/Γ(t) x^t-1 e^-x for x ∈ (0,∞) and k_t(x) = 0 for x ∈ (-∞,0]. Then k_s ⋆ k_t = k_s+t for all s,t > 0 and the convolution with k_t defines a positive C_0-semigroup on L^1(). Additionally, for every t ≥ 0, let L_t: L^1() → L^1() be the left shift by t. Since each L_t commutes with convolutions, we obtain a C_0-semigroup (e^tA_2)_t ≥ 0 on L^1() by setting e^tA_2 f := L_t (k_t ⋆ f) for all t > 0 and f ∈ L^1() (and e^0A_2 := 𝕀). This semigroup is clearly positive and it is (persistently) irreducible for the following reason: let 0 ≤ f ∈ L^1() be non-zero and choose x_0 ∈ such that f has non-zero integral over (-∞, x_0). Then the strict positivity of k_t on (0,∞) implies that k_t ⋆ f is strictly positive on [x_0,∞) for every t > 0. Hence, e^tA_2f is strictly positive on [x_0-t,∞) for every t > 0. So Condition <ref><ref> is satisfied and hence, persistent irreducibility of (e^tA_2)_t ≥ 0 follows by Proposition <ref>. 
We now couple the C_0-semigroups (e^tA_1)_t ≥ 0 and (e^tA_2)_t ≥ 0 to obtain a semigroup on E_1 × E_2 = ^3 × L^1() as described in Corollary <ref>(a): choose the operator B_21: ^3 → L^1() as B_21 z = z_3 _[1,2] for each z= (z_1,z_2,z_3) ∈^3 and B_12: L^1() →^3 as B_12f = ∫_[-2,-1] f(x) x e_3 for each f ∈ L^1(); where e_3 ∈^3 denotes the third canonical unit vector. Positivity of the semigroup (e^tA_2)_t ≥ 0 and of the third row and column of e^tA_1 for each t > 0 (see Example <ref>) yields that the assumptions of Corollary <ref> are satisfied. Hence, part (a) of the corollary tells us that the semigroup (e^tC)_t ≥ 0 on E_1 × E_2 generated by the operator C := [ A_1 ; A_2 ] + [ B_12; B_21 ] is uniformly eventually positive. However, the semigroup is not positive. To see this we show that, for each z∈^3, the first component of e^tC(z,0) is equal to e^tA_1z for all t < 2. Indeed, let us use the following notation: for each n ∈_0 and each t ≥ 0 let V_n(t): E_1 × E_2 → E_1 × E_2 denote the n-th term of the Dyson–Phillips series representation of (e^tC)_t ≥ 0 that we already used in the proof of Proposition <ref> <cit.>. For each t ∈ [0,∞), one has V_0(t) [ z; 0 ] = [ e^tA_1z; 0 ] and V_1(t) [ z; 0 ] = [ 0; ∫_0^t e^(t-s)A_2 B_21 e^sA_1z s ] . Further, for each t ∈ [0,2) the function ∫_0^t e^(t-s)A_2 B_21 e^sA_1z s ∈ L^1() only lives within the spatial interval [-1,∞), so it is in the kernel of B_12. Hence, for all t ∈ [0,2) one has V_2(t) [ z; 0 ] = [ 0; 0 ] , and thus V_n(t) [ z; 0 ] = [ 0; 0 ] for all n ≥ 2 . So the first component of e^tC (z, 0) is equal to e^tA_1 z for all t ∈ [0,2), as claimed. As the semigroup (e^tA_1) is not positive we can choose a vector 0 ≤ z ∈^3 such that e^tA_1z ≱0 for some t ∈ (0,2), so (e^tC)_t ≥ 0 is indeed not positive. Next, we note that (e^tC)_t ≥ 0 is persistently irreducible. This follows from Corollary <ref>(b): the semigroup (e^tA_1)_t ≥ 0 is eventually strongly positive (see Example <ref>) and is thus persistently irreducible. The semigroup (e^tA_2)_t ≥ 0 is, as observed above, also persistently irreducible. As B_12 and B_21 are non-zero, part (b) of Corollary <ref> is applicable, as claimed. Finally, we show that (e^tC)_t ≥ 0 is not eventually strongly positive in the sense of (<ref>). Indeed, the Dyson–Phillips series representation of (e^tC)_t ≥ 0 and the aforementioned invariance property of the convolution with the kernels k_t can be used to check by induction that, for every z ∈^3 and each t ≥ 0, the second component of V_n(t)(z,0) is supported in the interval [1-t,∞) for each n ∈_0. Hence, the same is true for the second component of e^tC(z,0) for every t ≥ 0. So, e^tC(z, 0) is not a quasi-interior point of E_1 × E_2 for any t ≥ 0 and any 0 ≤ z ∈^3. §.§ Acknowledgements This article is based upon work from COST Action CA18232 MAT-DYN-NET, supported by COST (European Cooperation in Science and Technology). The article was initiated during a very pleasant visit of both authors at the University of Salerno in Spring'22. The first author is indebted to COST Action 18232 and the second author to the Department of Mathematics of the University of Salerno for financial support for this visit. plainurl
http://arxiv.org/abs/2307.03959v1
20230708115509
Understanding the power-law nature of participation in community sports organizations
[ "Jia Yu", "Mengjun Ding", "Weiqiang Sun", "Weisheng Hu", "Huiru Wang" ]
cs.SI
[ "cs.SI", "physics.soc-ph" ]
Understanding the power-law nature of participation in community sports organizations Jia Yu, Mengjun Ding, Weiqiang Sun,  Senior Member, IEEE, Weisheng Hu,  Member, IEEE, Huiru Wang Manuscript received June, 2023. (Corresponding author: Weiqiang Sun.) Jia Yu, Mengjun Ding, Weiqiang Sun, and Weisheng Hu are with the School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China. Huiru Wang is with the Department of Physical Education, Shanghai Jiaotong University, Shanghai 200240, China. (e-mail: {yujia543, mengjun_ding, sunwq, wshu, wanghr}@sjtu.edu.cn). August 12, 2023 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== The improvement of living standards and awareness of chronic diseases have increased the importance of community sports organizations in promoting the physical activity levels of the public. However, limited understanding of human behavior in this context often leads to suboptimal resource utilization. In this study, we analyzed the participation behavior of 2,956 members with a time span of 6 years in a community sports organization. Our study reveals that, at the population level, the participation frequency in activities adheres to a power-law distribution. To understand the underlying mechanisms driving crowd participation, we introduce a novel behavioral model called HFBI (Habit-Formation and Behavioral Inertia), demonstrating a robust fit to the observed power-law distribution. The habit formation mechanism indicates that individuals who are more engaged are more likely to maintain participation, while the behavioral inertia mechanism suggests that individuals' willingness to participate in activities diminishes with their absences from activities. At the individual level, our analysis reveals a burst-quiet participation pattern, with bursts often commencing with incentive activities. We also find a power-law distribution in the intervals between individual participations. Our research offers valuable insights into the complex dynamics of human participation in community sports activity and provides a theoretical foundation to inform intervention design. Furthermore, the flexibility of our model enables its application to other data exhibiting power-law properties, broadening its potential impact beyond the realm of community sports. human behavior, power law, habit formation, behavioral inertia, burst timing, community sports activity. § INTRODUCTION Globalization urbanization, and increased wealth have led to significant lifestyle changes, causing a wide decrease in physical activity. According to the World Health Organization (WHO), inactivity rates can climb as high as 70% in certain countries, primarily due to shifts in transportation habits, heightened reliance on technology, and urbanization <cit.>. Physical inactivity, which has been identified as a global pandemic, is responsible for up to 8% of non-communicable diseases and deaths globally <cit.>. 
Conservatively estimated, physical inactivity cost health-care systems INT$53.8 billion worldwide in 2013 <cit.>. Additionally, if the prevalence of physical inactivity remains unchanged, it is projected that by 2030, there will be around 499.2 million new cases of preventable major NCDs worldwide, resulting in direct health-care costs of INT$ 520 billion. The annual global cost of not taking action on physical inactivity is anticipated to reach approximately $47.6 billion <cit.>. In an effort to improve physical activity participation, community sports organizations have achieved remarkable results in recent years. Many concur that community sport, as a low-threshold physical activity, is a powerful tool for targeting socially vulnerable groups <cit.>. Moreover, community sport has been recognized as a policy area and a social field that goes beyond “just" providing opportunities for groups to participate in sports. It also encompasses functions such as social care and crime reduction <cit.>. Today, being non-profit by nature, community sports organizations face greater challenges, such as competition for limited resources, volunteer availability, and capacity, and the impact of pandemics (such as COVID-19) <cit.>. Understanding the nature of the population participating in community sports is thus pivotal to making the best use of limited resources. The interest in the data-driven exploration of human behavior has been persistent. Very early on, power-law distribution has been found in certain human behaviors, such as the intervals between emails <cit.>, the pattern of phone calls <cit.>, and complex social networks <cit.>. Efforts have been made to understand the principle behind the formation of this power-law distribution in these behaviors <cit.>. Classical models such as the decision-based queuing process <cit.> and preferential attachment <cit.> are proposed to explain the power law distribution observed in the waiting time for processing emails and the degree distribution in complex networks, respectively. Research on community sports organizations is usually conducted from an organizational management perspective, providing high-level guidance for organizational development by quantifying aspects such as resources, program design, diversity, life cycle, and resilience <cit.>. However, very few, if any, models are population-based and consider when, how, and who participates in community-level sports activities <cit.>. In this study, with the data from 2,956 users collected over a span of six years, we discovered a power-law distribution of population participation in community sports activities. To explain this power-low distribution, we proposed the hypothesis of habit formation and behavioral inertia in community sports activity participation. Previous research has indicated that physical activity behavior can be developed through repeated experience of the activity in stable contexts <cit.>. Human behavior does exhibit inertia, as evidenced by the tendency for users to stick with default options <cit.> and purchase habits <cit.>. Our empirical data provides evidence of habit formation and behavioral inertia in community sports participation. It may help to address the question, “What is the typical `shape' of within-person real-world habit growth with repetition over the long-term" identified in the 2019 European Health Psychology Society Synergy Expert Meeting <cit.>. 
Based on these two mechanisms, we designed a behavioral model called HFBI that can robustly fit the power-law distribution of the empirical data. A power-law distribution is also observed in the intervals between participations at the individual level, signifying a burst-quiet pattern of activity participation. With the relevant activity information, we found that bursts tend to be initiated by activities with incentive rewards, suggesting that incentive activities can help call people back for sustained engagement. The main contributions of this article are as follows. * For the first time, we discovered that the frequency of population participation in community physical activities and the intervals between an individual's participations obey power-law distributions. * We proposed an intuitive model to explain the power-law distribution of population participation in community physical activities, by taking into account habit formation and behavioral inertia. We demonstrated good fitting performance and statistical significance with real-world data. The model may also be used in other domains where power-law distributions with low power-law exponents are observed. * The intervals between an individual's participations exhibit a power-law distribution, with a pattern of bursts followed by periods of inactivity (a burst-quiet pattern). We observed that bursts often start with incentive activities. This implies that incentive activities not only attract more participants but also have the potential to call users back from a quiet state to an active state, thereby promoting sustained engagement. The rest of this article is organized as follows. In Section II, we demonstrate the power-law phenomenon of participation frequency in activities at the population level. In Section III, we introduce the proposed HFBI model and present the evidence. In Section IV, we verify the participation patterns at the individual level and the role of incentive activities. In Section V, we present the related work. Finally, we summarize this paper in Section VI. § POWER-LAW DISTRIBUTION OF PARTICIPATION FREQUENCY AT THE POPULATION LEVEL §.§ Data Description The data used in our research was sourced from a university-based community sports platform that we develop and operate, which allows individuals to initiate or participate in sports activities. The initiator of an activity can choose whether or not to provide rewards as incentives for the activity. Over the course of 6 years, from May 2015 to May 2021, our dataset captured 28,714 records of activity participation in 770 activities (including 110 activities with incentives), involving a total of 2,956 individuals. Each record in the dataset contains the participant's ID, activity ID, team ID, and type of activity (whether incentives are provided or not). The activity IDs are consecutive natural numbers starting from 0 and arranged in the order of their occurrence (numbered from 0 to 769). §.§ Fitting the Empirical Data The frequency of user i participating in activities over the entire period is denoted as q_i. For the sequence of activity participation frequencies { q_i}, we assume that the frequencies larger than a truncated value q_min are described by the power-law distribution p(q) ∼ q^-γ, q ≥ q_min. In the Kolmogorov-Smirnov (KS) test, p>0.1 (or p>0.05) suggests that the data can be considered to follow a power-law distribution. 
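In practice, this truncated fit – the selection of q_min described next and the maximum-likelihood estimate of γ – can be reproduced with standard tooling. The following is a minimal sketch rather than our actual pipeline; it assumes the open-source Python powerlaw package (Alstott et al.), and the input file name is hypothetical.

import numpy as np
import powerlaw  # open-source implementation of the Clauset-Shalizi-Newman procedure

# Hypothetical input file: one participation count q_i per user.
counts = np.loadtxt("participation_counts.txt", dtype=int)

# Scan candidate q_min values and estimate gamma by MLE on the tail q >= q_min.
fit = powerlaw.Fit(counts, discrete=True)
print("q_min =", fit.xmin)
print("gamma =", fit.power_law.alpha)

# Empirical complementary cumulative distribution of the tail, e.g. for a log-log plot.
q = np.sort(counts[counts >= fit.xmin])
ccdf = 1.0 - np.arange(len(q)) / len(q)   # P(Q >= q) for each sorted q

# Note: the p-values quoted in the text come from a KS-based goodness-of-fit test
# (bootstrapping the KS statistic as in Clauset et al., 2009), which is not shown here.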
We select the smallest value of q that satisfies the KS test with p>0.1 as q_min, and the data above q_min can be plausibly modeled as a power-law distribution. The estimate of γ is obtained by maximum likelihood estimation (MLE) <cit.>.
§.§ Power-law Distribution of Participation Frequency
The participation frequency of the population follows a power-law distribution. Fig. <ref> shows the empirical distribution of user participation frequency in activities in a complementary cumulative way to enhance statistical significance <cit.>. The complementary cumulative function can be represented as F(q)=∑_q^'=q^∞ p(q^'), where p(q) denotes the proportion of individuals who participated in activities q times. A clear straight-line trend can be observed on the double logarithmic axes, indicating a power-law distribution of the data. Kolmogorov–Smirnov (KS) tests and maximum likelihood (MLE) fits are employed to check whether the empirical distributions obey a power-law distribution and to estimate the related parameters. The result shows that the frequency of population participation in activities is in line with a power-law distribution (p=0.18, q_min=2) with power-law exponent γ=1.76. The cutoff of the tail indicates that there are fewer individuals participating in an exceptionally large number of activities than a power-law distribution would predict, a phenomenon commonly observed in real-world systems. Fig. <ref> shows the relationship between the fraction P of total participation contributed by the most active fraction p of the population. The 80/20 rule is evident: the top 20% of the most active users contributed approximately 84% of the total activity participation. Theoretically, the case is more extreme for power-law distributions with γ less than 2. However, the finite number of activities and the tail cutoff bring the ratio close to the classical Pareto law. To demonstrate that the power-law distribution of the participation frequency is not a momentary coincidence, we analyzed the data for each activity node after the platform scale reached 1000. All samples (287 (88.9%) with q_min=1 and 36 (11.1%) with q_min=2) conformed to a power-law distribution by the KS test, with p-values all greater than 0.1. Fig. <ref> presents γ for all 323 activity-node samples. The range of γ spans from 1.66 to 1.81 with a mean of 1.72. It changes slowly with each activity held, first decreasing steadily and then fluctuating and rising. A γ less than 2 indicates a significant “heavy tail" phenomenon in the frequency of participation.
§ HFBI-A BEHAVIORAL MODEL BASED ON HABIT FORMATION AND BEHAVIORAL INERTIA
To explore the principle behind the power-law distribution of the participation frequency, we propose a behavioral model named HFBI, which is based on the assumptions of habit formation and behavioral inertia. Intuitively, people who have participated in activities frequently or have just participated in an activity are more likely to participate in subsequent activities. These assumptions are supported by convincing evidence from our data.
§.§ Evidence for Habit Formation and Behavioral Inertia
To provide evidence for the habit formation and behavioral inertia mechanisms, we performed a statistical analysis of all activities in the dataset. The proportion of people who have participated in q activities and would choose to participate in a newly available activity can be represented as prop(q)=∑_j=0^N-1 m_q^j / ∑_j=0^N-1 n_q^j. 
Here, n_q^j represents the number of individuals who have participated in q activities before a new activity j, m_q^j represents the number of individuals among them who choose to participate in activity j, and N is the total number of activities in the dataset. The denominator represents the total number of individuals who have participated in q activities, summed over all activities, while the numerator represents the number of individuals who choose to continue participating in an activity after having participated in q activities. Similarly, the proportion of people who have been away from activities for d sessions and would choose to participate in a newly available activity can be represented as prop(d)=∑_j=0^N-1 m_d^j / ∑_j=0^N-1 n_d^j. Here, n_d^j represents the number of individuals who have been away from activities for d sessions before activity j, and m_d^j represents the number of individuals among them who choose to participate in activity j. The denominator represents the total number of individuals who have been away from activities for d sessions, summed over all activities, while the numerator represents the number of individuals who choose to continue participating in an activity after being away from activities for d sessions.
Fig. <ref> shows the proportion of people who have participated in q activities and would choose to participate in a newly available activity. As shown, the proportion of individuals opting to continue participation increases almost linearly with the number of activities participated in during the early stage. Fig. <ref> illustrates the proportion of people who have been away from activities for d sessions and would choose to participate in a newly available activity. As the number of sessions away from activities increases, the proportion of people choosing to return to participating in activities sharply decreases. These observations provide solid evidence for the existence of habit formation and behavioral inertia in community sports participation.
§.§ The HFBI Model
Based on the evidence presented, we propose the HFBI model, which incorporates habit formation and behavioral inertia, to simulate user participation in activities. The experimental results demonstrate that the model can accurately simulate user participation in activities with only four parameters.
§.§.§ Parameter Settings
The HFBI model only requires four parameters: n, c, m, and α. n represents the number of activities held, i.e., the model's iteration count. c and m refer to the numbers of new and existing users participating in an activity (added in a round of iterations), respectively. α is a parameter that adjusts the ratio of habit formation to behavioral inertia to achieve a better fit with the empirical data. The parameters c and m can be derived from the mean values of the dataset. Note that since the parameters are natural integers, the values of c and m are rounded. To ensure consistency in the scale of the population, n is calculated from the population size, c, and m. Additionally, we initiate the iteration process with m pre-existing users to enable the selection of existing users at the start of the iteration.
§.§.§ Model Description
The model is characterized by adding users in a sequential and batched manner, which aligns with many real-life situations. Initially, we make the assumption that for every activity, there will be c new users and m existing users participating. 
For a newly available activity and an existing user i, q_i represents the total number of activities that user i has participated in before, and d_i represents the interval between the last activity they participated in and the current new activity. User i participating in the activity can be attributed to two mechanisms. (1) User i has a probability of α to participate in the activity due to habit formation, which means the probability of participating is proportional to q_i: q_i / ∑_i ∈ I q_i. (2) Additionally, there is a probability of 1-α for user i to participate in the activity due to behavioral inertia, which means the probability is a decreasing function of d_i: (1/d_i) / ∑_i ∈ I (1/d_i). Therefore, the total probability of user i participating in the activity is: ϕ_i = α q_i / ∑_i ∈ I q_i + (1-α) (1/d_i) / ∑_i ∈ I (1/d_i). I is the set of all existing users. The model performs n rounds of iterations, adding c new users and selecting m existing users based on Eq. <ref> in each round. The c new users are added to the existing user pool in each round. The overall process of the model is shown in Algorithm <ref>. Note that the specific form of the decreasing function of d_i is not unique, as it can be adjusted by the parameter α.
§.§.§ Proof of Power-Law Distribution and Exponent in Habit Formation
When only habit formation is considered, that is ϕ_i = q_i / ∑_i ∈ I q_i, the model generates power-law distributed data with exponent γ = 2 + c/m. The proof process is similar to the Price model <cit.>. In the HFBI, for every activity held, there are c new users and m existing users participating, and the participation probability of existing users is proportional to the number of activities they have participated in before. Let p_q(n) be the fraction of users that have participated q times when the platform contains n users, which is also the probability distribution of participation frequency. q_i represents the number of activities participated in by user i. When an activity selects only one user among all existing users, the probability of existing user i participating in the activity is q_i / ∑_i q_i = q_i / (n⟨ q⟩) = c q_i / (n(m+c)), where ⟨ q⟩ represents the average number of activities each person participates in, ⟨ q⟩ = n^-1∑_i q_i = (m+c)/c. The number of people who have participated in q activities is n p_q(n). When there is a new activity, the expected number of people who have participated in q activities and will join the new activity is n p_q(n) × m × c q / (n(m+c)) = [m c q / (m+c)] p_q(n). Then the master equation for the evolution of the participation frequency distribution is (n+c) p_q(n+c) = n p_q(n) + [(q-1) m c / (m+c)] p_q-1(n) - [q m c / (m+c)] p_q(n). The left-hand side of the equation is the expected number of people who have participated q times after adding an activity. The first term on the right-hand side represents the number of users who previously participated q times. The second term refers to the expected number of users with a participation frequency of q-1 who join the activity and move to q, while the third term refers to the expected number of users with a participation frequency of q who participate in this activity and are therefore no longer at q. Eq. <ref> applies to all cases where q ≠ 1. When q = 1, the right-hand side of the equation instead gains the c new users whose participation frequency becomes 1, replacing the second term in Eq. <ref>, and the equation for q=1 is (n+c) p_1(n+c) = n p_1(n) + c - [m c / (m+c)] p_1(n). 
To obtain the asymptotic form of the participation frequency distribution, we take the limit of large population size n →∞ and use the shorthand p_q = p_q(∞). Eqs. <ref> and <ref> become p_q = [(q-1) m c / (c(m+c) + q m c)] p_q-1 for q>1, and p_1 = (m+c)/(2m+c) for q = 1. Let k = c/m; then p_1 = (1+k)/(2+k) for q = 1, and p_q = [(q-1)/(q+k+1)] p_q-1 for q>1. With Eqs. <ref> and <ref>, we can iteratively determine p_q for all values of q, beginning with our initial solution for p_1. The results are as follows:
p_1 = (1+k)/(2+k);
p_2 = 1/(2+k+1) × (1+k)/(2+k);
p_3 = 2/(3+k+1) × 1/(2+k+1) × (1+k)/(2+k);
p_4 = 3/(4+k+1) × 2/(3+k+1) × 1/(2+k+1) × (1+k)/(2+k);
...
The expression for general q can be successively derived as: p_q = [(q-1) × (q-2) × … × 1 × (1+k)] / [(q+k+1) × (q+k) × … × (3+k) × (2+k)]. It is known that the gamma function is Γ(x) = ∫_0^∞ t^x-1 e^-t dt, and it has the property that Γ(x+1) = x Γ(x) for x > 0. Applying this equation iteratively, we find that Γ(x+n)/Γ(x) = (x+n-1)(x+n-2) … x. Using this result, we can rewrite Eq. <ref> as p_q = (1+k) Γ(q) Γ(2+k) / [Γ(1) Γ(2+k+q)]. By further employing Euler's formula B(x, y) = Γ(x) Γ(y)/Γ(x+y), Eq. <ref> can be simplified to p_q = [(1+k)/Γ(1)] B(q, 2+k). Using Stirling's approximation for the gamma function, the beta function B(x, y) falls off as a power law for large values of x, with exponent y <cit.>: B(x, y) ≃ x^-yΓ(y). Applying this finding to Eq. <ref>, for large values of q, the distribution of participation frequency goes as p_q ∼ q^-γ = q^-(2+k) = q^-(2+c/m), where the exponent γ is γ = 2+k = 2+c/m. Therefore, by only considering habit formation, represented by ϕ_i = q_i / ∑_i ∈ I q_i, the model is able to generate data with a power-law distribution, where the power exponent is given by γ = 2+c/m.
§.§.§ Experimental Results on the Real Dataset
We conducted experiments on real data, and the results show that HFBI is capable of reproducing the empirical distribution with only four parameters derived from the mean values of the empirical data, and the fit exhibits good statistical significance. The Kolmogorov-Smirnov (KS) test is used to assess whether the data generated by the model and the empirical data are drawn from the same distribution. The KS statistic measures the maximum distance between the cumulative distribution functions (CDFs) of two samples and is used to determine whether the two samples are drawn from the same underlying probability distribution. The null hypothesis is that the two distributions are identical. If p > 0.1, we cannot reject the null hypothesis, which suggests that the data-generating process is plausible. The experiment is first performed on the largest-scale data, that is, the data up to the last activity node. The parameter values for c, m, and n are derived from the mean values of the data and are determined as 4, 33, and 731, respectively. In Fig. <ref>, a comparison is shown between the data generated by HFBI and the real data. It can be seen that the distributions of the simulated data and the real data are very close. The model achieves the best fit when α is set to 0.9. The fitted α values lying strictly between 0 and 1 suggest that the empirical distribution results from the combined effects of both the habit formation and behavioral inertia mechanisms. The habit formation mechanism described by Eq. <ref> can be demonstrated to generate data with a power-law distribution with γ = 2+c/m, which is strictly greater than 2 and differs from the empirical data. 
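To make the generative process of Algorithm <ref> concrete, the following is a minimal sketch of the HFBI iteration. The random seed and the initial participation count assigned to the m pre-existing users are our own assumptions, not details specified in the text.

```python
import numpy as np

def simulate_hfbi(n_rounds, c, m, alpha, seed=0):
    """HFBI update: each round adds c new users and selects m existing users with
    probability alpha * q_i / sum(q) + (1 - alpha) * (1/d_i) / sum(1/d)."""
    rng = np.random.default_rng(seed)
    q = np.ones(m)               # m pre-existing users (assumed to start with one participation)
    last = np.zeros(m)           # round index of each user's last participation
    for t in range(1, n_rounds + 1):
        d = t - last             # sessions since last participation (>= 1)
        inv_d = 1.0 / d
        p = alpha * q / q.sum() + (1.0 - alpha) * inv_d / inv_d.sum()
        chosen = rng.choice(len(q), size=min(m, len(q)), replace=False, p=p / p.sum())
        q[chosen] += 1
        last[chosen] = t
        q = np.concatenate([q, np.ones(c)])        # c new users join by participating once
        last = np.concatenate([last, np.full(c, t)])
    return q
```

With the parameters reported above, `simulate_hfbi(n_rounds=731, c=4, m=33, alpha=0.9)` produces participation counts that can be compared with the empirical distribution using the same KS procedure.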
A participation frequency with γ less than 2 implies that participation in activities is slightly more frequent than what the habit formation mechanism alone can explain. The behavioral inertia mechanism precisely compensates for this deficiency, as it captures the situation in which individuals who have just participated in an activity are highly likely to continue participating in one or two more due to inertia. It effectively adjusts the exponent while preserving the power-law distribution. It is the joint effect of both mechanisms that generates data closely fitting the empirical data. The data produced by the model cannot reproduce the extremely rare users who engage in activities excessively. One possible explanation is that these individuals usually have a strong self-motivation to participate in activities, which cannot be captured by habit formation, as evidenced by the non-steady growth in the later stage of Fig. <ref>. This is considered acceptable since the proportion of these individuals is extremely low. In addition, because the parameters must be integers and the number of users is kept consistent between the generated data and the empirical data, there is a small difference between the model's n and the actual number of activities. To demonstrate the robustness of the model, the model was also employed to fit the participation frequency up to each activity node. As the generated data can differ slightly each time, we conducted 5 runs for each possible value of α and selected the α value with the highest average p-value over the 5 runs. The average p-values and corresponding optimal α of the model fits for the 323 samples are shown in Fig. <ref>. In Fig. <ref> and Fig. <ref>, the behavioral inertia mechanism is represented by (1/d_i)/∑_i ∈ I (1/d_i) and e^-d_i/∑_i ∈ I e^-d_i, respectively. This shows that different functional forms can achieve a good fit at different values of α. The model shows good fitting performance (p>0.1) for all empirical data samples, indicating its correctness and robustness. The range of α values from 0.69 to 1 suggests that the proportions of habit formation and behavioral inertia may vary in different situations. We can observe clear downward trends in α around activities 450 to 600, indicating that the proportion of behavioral inertia gradually increases during this stage. Combining this with Fig. <ref>, a decreasing trend of γ can also be observed. This indicates that behavioral inertia can effectively help to capture situations with smaller γ.
§ PARTICIPATION PATTERNS AT THE INDIVIDUAL LEVEL
At the population level, the frequency of participation in activities follows a power-law distribution. At the individual level, the pattern of activity participation, specifically the intervals between each user's participations, is also worth studying. Similarly, we investigated the distribution of intervals between each individual's activity participations and discovered that they also exhibit a power-law distribution. In terms of activity participation patterns, it is a burst-quiet mode in which individuals alternate between periods of high activity and periods of low activity.
§.§ The Burst-Quiet Pattern
The interval between an individual's participations is defined as the difference between the IDs of two consecutive activities in which they participated, denoted by r. 
Considering the requirement of a sufficient amount of interval sequence data, we focused on 58 loyal users who participated in more than 100 activities for the individual-level analysis. Fig. <ref> shows an example of a real user's participation in activities. It is evident that intervals of individual participation in activities vary greatly in size, with a majority being small and some being large. The participation of individuals is characterized by alternating bursts of high activity and long periods of low activity, similar to the outgoing mobile phone call sequence of an individual <cit.>. This burst-quiet pattern is common among the group of loyal users. We studied the distribution of interval sequences for all 58 users and discovered that their interval sequences also follow a power-law distribution (p>0.1 for 54 users, p>0.05 for all 58 users, r_min=1 for 48 users, and r_min=2 for 10 users). The power law distribution also plays an important role in the intervals of individual participation in activities. Fig. <ref> shows examples of complementary cumulative probability distributions of the intervals for three users. The intervals of participation in the activities of each of the three individuals obeyed a power law distribution with different power exponents. Fig. <ref> plots the probability distribution of the estimated power-law exponents γ for all loyal individuals, revealing a range from 1.6 to 3.25 and a mean of 2.35. Although their activity participation intervals all follow power-law distributions, the difference in the power-law exponent is quite significant. The range of γ is surprisingly consistent with γ for individuals with the intraday inter-call duration that follows a power-law distribution reported by Jiang et al <cit.>. And the probability distributions are also somewhat similar, which may suggest a potential connection between the intervals of different human behaviors. §.§ The Role of Incentive Activities in Bursts Burst, characterized by frequent participation in activities with short intervals within a specific period, has a significant impact on improving individuals' overall fitness level. Therefore, it is important to explore the factors associated with this pattern to promote physical activity among the population. In this study, a burst is defined as a period in which the interval between consecutive activities a user participates in is less than a threshold value Δ. The specific value of Δ is arbitrarily set in empirical analysis <cit.>. Organizations often invest resources to provide incentives for activities to attract users to participate. Incentives are crucial in promoting physical activity. Typically, physical activity behavior is initially motivated by incentive, and as habits form, it shifts towards unconscious and automatic processes <cit.>. The effectiveness of incentives can be immediately reflected in the number of participants in the activity. However, the benefits in other aspects are yet to be discovered. Our study has made some findings by observing the position of incentive activities in bursts. At thresholds of Δ=8, 9, and 10, we identified a total of 433, 399, and 378 bursts for all individuals, respectively, and recorded the positions of the first occurrence of the incentive activity within each burst. As shown in Fig. <ref>, the majority of bursts are observed to start with incentive activities. Table <ref> shows the number and percentage of bursts with the first incentive activity appearing at the head position in the bursts. 
Over 50% of bursts have their first incentive activity in the first position, and over 65% in the first three positions, at the different values of Δ. Note that only one in seven activities is incentivized. The proportion of incentive activities at the head of bursts is much higher than this baseline, indicating a correlation between the occurrence of incentive activities and bursts. This phenomenon suggests that in addition to increasing the number of participants in an activity, incentive activities may also play a role in calling users back from a quiet state to a burst state for sustained engagement.
§ RELATED WORK
Power-law distributions have been observed in various domains and contexts, such as biology <cit.>, general science <cit.>, economics <cit.> and the social sciences <cit.>. Many human behaviors, such as the intervals between sending emails <cit.> and the pattern of phone calls <cit.>, have also been identified as following power-law distributions. Our work has discovered that the participation frequency of the population and the intervals between an individual's participations exhibit power-law distributions in the context of community sports organizations. Over the years, there have been continuous efforts to propose diverse models aimed at replicating and explaining data characterized by power-law distributions. Barabási proposed the classic preferential attachment model, which can generate data exhibiting a power-law distribution with an exponent of 3 <cit.>. There are also derivative models that can generate data with power-law distributions with exponents between 2 and 3 <cit.>. They have been widely used to explain the power-law distribution of node degrees observed in social networks. The decision-based queuing process <cit.> simulates the power-law distribution of waiting times for emails by randomly assigning priorities to each incoming task and following a rule of processing tasks in priority order. This suggests that the power-law distribution of waiting times for emails may be attributed to human decision-making based on priorities. The preferential attachment model suggests that the power-law distribution of node degrees in networks may be due to the preferential connection of newly added nodes to high-degree nodes in the network <cit.>. In our HFBI model, the habit formation mechanism exhibits similarities to the preferential attachment model and can be proven to generate data conforming to a power-law distribution. In addition, the behavioral inertia component of the HFBI model introduces effective modifications, leading to a slight decrease in the exponent of the data while preserving its essential power-law characteristics. Community sports organizations have been receiving increasing attention for their significant contributions to public health and social harmony. Klenk et al. <cit.> investigated the participation of people with disabilities in community sports activities from three aspects: (1) social contacts, interactions, and friendships, (2) self-perception and identity formation, and (3) social acceptance, support, and embeddedness. Hanlon et al. <cit.> conducted a questionnaire survey to investigate the needs and initiatives for women's participation in community sports activities. Zhou et al.'s survey <cit.> revealed a correlation between the provision of community-sport services (both core and peripheral services) and participants' satisfaction levels. 
To the best of our knowledge, there is no research that explores and comprehensively understands individual participation in community sports organizations from a data-driven and modeling approach. § CONCLUSION Our study has identified new members of the power-law data family, a) the frequency of community sports participation among populations, and b) the interval of individual activity participation. The participation frequency exhibits a power-law distribution with a tail cutoff and an exponent less than 2. We have proposed HFBI - a model based on habit formation and behavioral inertia, to uncover the underlying causes for this power-law distribution. In the model, the behavioral inertia mechanism effectively complements the habit formation mechanism, with which alone one can only generate power-law distributions with an exponent greater than 2. The model provides a robust fit to the empirical data. Furthermore, Individual participation in community sports activities exhibits a burst-quiet pattern. Importantly, our study suggests that periods of high activity bursts are often driven by incentive activities, highlighting the importance of incentive activities to sustain long-term physical activity behavior. Our results have important implications for the design of interventions aimed at promoting sustainable physical activity behavior. Interventions can be better tailored to align with individuals' behavioral tendencies by gaining insights into habit formation, behavior inertia, and incentive activities. Additionally, the classic preferential attachment process restricts the power law exponent to γ>2 <cit.>, while many real-world networks exhibit γ<2 <cit.>. Our HFBI model based on habit formation and behavior inertia can be valuable in other domains where power-law distributions with low power-law exponents are observed, such as the population of cities <cit.>, short-message communication <cit.>, and corporate innovative patent counts <cit.>. Despite the strengths of our study, there are limitations that should be noted. First, our study only focused on a sports community in a university, whose members are mostly well-educated university faculties and staff members, and may differ in the perception of self-motivated exercise from the population in the society at large. Further research is needed to understand how our study may be generalized to other community sports organizations. Secondly, the model cannot capture the behavior of extremely rare individuals who engage in activities excessively. As reported in the study's 80/20 rule, active individuals make a significant contribution to community activity participation, and future research should pay more attention to this group. In conclusion, our study provides novel insights into the principle underlying human participation in community sports activities and offers practical implications for the design of interventions to promote sustained physical activity behavior and human health. Our findings may also have broader implications for other fields where power-law distributions are commonly observed. § ACKNOWLEDGMENTS We would like to thank every member of the SJTU Health Community for their selfless commitment in building a supportive community and providing help to those in need. IEEEtran
http://arxiv.org/abs/2307.04301v1
20230710013651
NN-EVP: A physics informed neural network-based elasto-viscoplastic framework for predictions of grain size-aware flow response under large deformations
[ "Adnan Eghtesad", "Jan Niklas Fuhg", "Nikolaos Bouklas" ]
cs.CE
[ "cs.CE" ]
Adnan Eghtesad (corresponding author, [email protected]), Jan Niklas Fuhg, Nikolaos Bouklas
Sibley School of Mechanical and Aerospace Engineering, Cornell University, NY 14853, USA
Center for Applied Mathematics, Cornell University, NY 14853, USA
We propose a physics informed, neural network-based elasto-viscoplasticity (NN-EVP) constitutive modeling framework for predicting the flow response in metals as a function of underlying grain size. The developed NN-EVP algorithm is based on input convex neural networks as a means to strictly enforce thermodynamic consistency, while allowing high expressivity towards model discovery from limited data. It utilizes state-of-the-art machine learning tools within PyTorch's high-performance library providing a flexible tool for data-driven, automated constitutive modeling. To test the performance of the framework, we generate synthetic stress-strain curves using a power law-based model with phenomenological hardening at small strains and test the trained model for strain amplitudes beyond the training data. Next, experimentally measured flow responses obtained from uniaxial deformations are used to train the framework under large plastic deformations. Ultimately, the Hall-Petch relationship corresponding to grain size strengthening is discovered by training flow response as a function of grain size, also leading to efficient extrapolation. The present work demonstrates a successful integration of neural networks into elasto-viscoplastic constitutive laws, providing a robust automated framework for constitutive model discovery that can efficiently generalize, while also providing insights into predictions of flow response and grain size-property relationships in metals and metallic alloys under large plastic deformations.
Keywords: Viscoplasticity, Flow response, Grain size, Hall-Petch, Machine learning, Neural networks
NN-EVP: A physics informed neural network-based elasto-viscoplastic framework for predictions of grain size-aware flow response under large deformations
August 12, 2023
§ INTRODUCTION
Understanding the flow response and viscoplastic behavior of metals, incorporating strain rate sensitivity effects, is essential for predicting the mechanical behavior, improving the material performance, and designing novel alloys, critical towards revolutionizing the objectives of the aerospace and automotive manufacturing industries. Owing to their polycrystalline nature, the anisotropic flow response of metals under large deformations heavily depends on the average grain size that forms the underlying microstructure. Grain size strengthening, referred to as the Hall-Petch effect <cit.>, is associated with the grain boundary resistance to the crystallographic deformation mechanisms and dislocation motions <cit.>. Authors in <cit.> implemented a crystal plasticity model that accounts for grain size effects and slip system interactions on the deformation of austenitic stainless steels. The crystal plasticity finite element method (CPFEM) is used to study the grain size and morphology effects on yield strength <cit.>. In addition to crystal plasticity modeling, atomistic simulations are well utilized to study the Hall-Petch relationships in advanced alloys <cit.>. 
High-performance computing (HPC) has improved the efficiency of numerical techniques over the last few decades. However, despite acceleration of HPC simulations, predicting multi-scale material deformations is still a time-consuming task and limited by parallel scalability and hardware performance. To address this issue, recent research has focused on the integration of genetic algorithms (GA), machine learning (ML), and deep learning (DL) to facilitate automated constitutive modeling which has the potential to expedite multi-scale simulations <cit.>. In particular, they have been used to model hyperelasticity <cit.>, viscoelasticity <cit.> and multiphysics problems <cit.>. Lately, neural networks (NN) and convolutional neural networks (CNN) have also been utilized for a variety of applications in modeling viscoplastic deformations in macro and micro scales <cit.>. A recurrent neural network (RNN) model was proposed as a computationally-efficient surrogate for crystal plasticity simulations <cit.>. A new NN-based crystal plasticity algorithm was presented for FCC materials and its application to non-monotonic strain paths <cit.> followed by a CNN-based CPFEM model to predict the localized viscoplastic deformation in aluminum alloys <cit.>. Some recent work has implemented NN and CNN to predict yield surfaces from microstructural images and crystal plasticity simulations <cit.>. A machine learning-enabled crystal plasticity model with dislocation density hardening was developed to identify stress and strain localizations under large viscoplastic deformations <cit.>. An input convex NN (ICNN) framework was presented for the prediction of texture-dependent macroscopic yield functions from crystal plasticity simulations <cit.>. On the continuum level, NN algorithms have been implemented in finite element (FE) solvers to replace classical history-dependent constitutive models to obtain nonlinear structural responses <cit.>. While the literature addresses a wide range of machine and deep learning applications related to elastoplasticity and viscoplasticity in the context of big data, only a few rely on the more realistic case of limited-data availability of macroscopic observations from measured tensile tests. Furthermore, studies that focus on viscoplasticity in the low- and limited-data regimes, only utilize neural networks for parameter estimation of established phenomenological constitutive models rather than model discovery. The latter can focus on establishing automated frameworks that remove the need for a particular phenomenological parametrization. To address this shortcoming, and allow robust performance in low- and limited-data regimes, physics-based ML algorithms have been proposed aiming to directly enforce physical laws, thermodynamic principles and also established domain knowledge <cit.>. Motivated by the earlier work of <cit.>, recent work in <cit.>, enables modular ML-based elastoplasticity in the context of limited data using thermodynamics-aware and mechanistically informed neural networks offering a hybrid framework with each component of the model being selective as classical phenomenological or a data-driven model depending on the data availability, allowing for trustworthy prediction and generalization. The present work builds upon the work in <cit.> and proposes a novel NN-based framework for modeling the elasto-viscoplastic (NN-EVP) response of metals as a function of grain size where strain rate sensitivity effects are taken into consideration. 
The proposed algorithm and underlying constitutive model consider the coupled elasto-viscoplastic response, in contrast to studies where the elastic and plastic regimes are treated separately. In addition, for consistency with the thermodynamic laws and mechanistic assumptions, the neural network architectures are designed to enforce monotonicity and convexity requirements. The framework allows for the discovery of laws describing the rate-sensitive viscoplastic flow response of metals and alloys, enabling a predictive infrastructure for the flow response as a function of strain amplitude and grain size. The present paper is structured as follows. Section 2 starts with a brief review of the thermodynamics-based modular formulation, based on dual potentials, as well as particular functional forms, resulting from mechanistic assumptions, that describe the rate-sensitive viscoplastic flow response in metals. Afterward, the key aspects pertaining to the novel implementations of the NN-EVP framework are discussed. Section 3 describes the applications of the NN-EVP tool in modeling large viscoplastic deformations with isotropic hardening for both synthetic and experimental data as a function of grain size. A summary of the findings and key contributions concludes the work.
§ METHODS
§.§ Elasto-viscoplasticity constitutive formulation based on dual potential
Under the assumption of small strains, elasto-viscoplasticity can be modeled by decomposing the strain into its elastic and viscoplastic parts: ϵ = ϵ^e + ϵ^vp, where ϵ, ϵ^e, and ϵ^vp are the total strain, elastic strain, and viscoplastic strain, respectively. In a similar fashion, the specific free energy Ψ can be decomposed into elastic and viscoplastic contributions, denoted by Ψ^e and Ψ^vp respectively, as follows: Ψ = Ψ^e(ϵ^e) + Ψ^vp(r, α, T), where r and α are the thermodynamic variables related to isotropic and kinematic hardening, and T is the temperature. Following the Coleman-Noll procedure <cit.>, the Cauchy stress σ and the thermodynamic forces R and 𝐗 can then be defined as: σ = ∂Ψ/∂ϵ^e = -∂Ψ/∂ϵ^vp, R = ∂Ψ/∂ r, 𝐗 = ∂Ψ/∂α. Under isothermal conditions, the intrinsic potential is written as: Ψ_I = Ψ_I^e(ϵ^e) + Ψ_I^vp(α, r). The generalization of the concept of equipotential surfaces in stress space, representing similar dissipation (i.e., strain rate) levels, leads to the definition of a dual potential in the form of <cit.>: φ^* = φ^*(σ, R, 𝐗; r, α). The dual dissipation potential φ^* satisfies the following conditions:
* φ^* is convex w.r.t. all its variables,
* φ^* is always positive: φ^* > 0,
* φ^* includes the origin: φ^*(0, 0, 0, α, r) = 0.
Following the normality law of generalized standard materials, the plastic strain rate ϵ̇^vp and the hardening variables r and α can then be obtained as <cit.>: ϵ̇^vp = ∂φ^*/∂σ, ṙ = -∂φ^*/∂ R, α̇ = -∂φ^*/∂𝐗, which leads to the second law of thermodynamics as follows: Ψ_I = σ : ∂φ^*/∂σ + 𝐗 : ∂φ^*/∂𝐗 + R ∂φ^*/∂ R ≥φ^* ≥ 0.
§.§.§ Decomposition of dual potential
Under consideration of the recovery effects caused by dislocation annihilation, we can formulate the dual potential φ^* to also be decomposed into viscoplastic hardening and recovery potentials as follows <cit.>: φ^* = Ω_vp + Ω_r, where Ω_vp = Ω_vp(σ, R, 𝐗; r, α), Ω_r = Ω_r(R, 𝐗; r, α). In the absence of recovery effects, the dual potential can be considered to be equal to the hardening potential Ω_vp <cit.>: φ^* = Ω_vp = φ^*(σ, R, 𝐗; r, α). 
§.§.§ Particular functional forms and power-law
Assuming linear elastic behavior for the elastic part of the free energy function, we can write Ψ_I^e(ϵ^e) = (1/2) ϵ^e : ℂ : ϵ^e, where ℂ is the fourth-order elastic stiffness tensor. This yields the Cauchy stress through Hooke's law as follows: σ = ℂ : ϵ^e. For isotropic materials, the stiffness tensor is uniquely defined by the Young's modulus E and Poisson's ratio ν: ℂ_ijkl = E ν/[(1+ν)(1-2ν)] δ_ijδ_kl + E/[2(1+ν)] (δ_ikδ_jl + δ_ilδ_jk), where δ_ij denotes the Kronecker delta. Under the assumption of an isotropic and pressure-independent material response, we can reformulate the dual potential as a function of only the second and third stress invariants, φ^* = φ^*(J_2(σ-𝐗), J_3(σ-𝐗), R, 𝐗; r, α), where J_2(σ-𝐗) = √(3/2) ‖σ^'-𝐗^'‖, J_3(σ-𝐗) = (1/3) tr(σ-𝐗), and σ^' = σ - (1/3) tr(σ) 𝐈, 𝐗^' = 𝐗 - (1/3) tr(𝐗) 𝐈. Here, the prime □^' indicates the deviatoric component of the tensor, tr( ) implies the trace of the argument, and 𝐈 is the second-order identity tensor. After assuming that, due to the incompressible behavior of metals, only the deviatoric components of the stress contribute to the plastic deformation, the dual potential reduces to <cit.>: φ^* = φ^*(J_2(σ-𝐗), R, 𝐗; r, α). The fundamental flow rule in viscoplasticity theory involving the thermodynamic force R(r) is commonly written in the form of a power law and is desirable because it provides uniqueness of the solution for the stress value that accommodates an imposed strain rate <cit.>: φ^* = [ϵ̇_0/(n+1)] [J_2(σ-𝐗)/R(r)]^(n+1). Incorporating Eq. (<ref>) and Eq. (<ref>) into Eq. (<ref>) we get ϵ̇^vp = (3/2) ϵ̇_0 [J_2(σ-𝐗)/R(r)]^n (σ^'-𝐗^')/‖σ^'-𝐗^'‖. The power law embeds the introduced strain rate sensitivity and provides uniqueness of the solution for the threshold of equivalent von Mises stress accommodating an imposed strain rate. In Eqs. <ref> and <ref>, n is the rate sensitivity parameter and ϵ̇_0 is a reference strain rate, usually chosen as the norm of the applied strain rate tensor ‖ϵ̇^app‖. Following Eq. <ref>, the variable r and its rate ṙ, denoting the effective accumulated viscoplastic strain and the effective viscoplastic strain rate, can be written as follows: r = ϵ_eff^vp = ∫_0^t √(2/3) ‖ϵ̇^vp(τ)‖ dτ, ṙ = ϵ̇_eff^vp = √(2/3) ‖ϵ̇^vp‖.
§.§.§ Perfect viscoplasticity
In the case of perfect viscoplasticity with no hardening effects, the yield surface does not evolve as a function of accumulated viscoplastic strain, and the term R(r) reduces to an initial yield value σ^Y. The dual potential and viscoplastic strain rate can then be written as: φ^* = [ϵ̇_0/(n+1)] (σ_eq/σ^Y)^(n+1), ϵ̇^vp = (3/2) ϵ̇_0 (σ_eq/σ^Y)^n σ^'/σ_eq, where σ_eq = √(3/2) ‖σ^'‖. Depending on the material properties, the rate sensitivity parameter varies in the range of 10 ≤ n ≤ 400. Figure <ref> shows the effect of rate sensitivity on the perfect viscoplastic flow response of Cu under an applied quasi-static strain rate of 0.001 s^-1, generated by the power-law equation mentioned above. Note that while higher values of n represent the rate sensitivity of the material more accurately, the computational cost and number of iterations required to numerically solve Eq. (<ref>) increase.
§.§.§ Isotropic hardening
In the case of isotropic hardening, the yield function R(r) starts with an initial value σ^Y (R(r=0) = σ^Y) and evolves as a function of the accumulated viscoplastic strain r. The dual potential and viscoplastic strain rate then yield φ^* = [ϵ̇_0/(n+1)] [σ_eq/R(r)]^(n+1), ϵ̇^vp = (3/2) ϵ̇_0 [σ_eq/R(r)]^n σ^'/σ_eq. 
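As an illustration of the flow rule above, a small sketch for the case without back stress (𝐗 = 0) is given below; the values of n and ϵ̇_0 are merely examples, not calibrated parameters.

```python
import numpy as np

def viscoplastic_strain_rate(sigma, R, n=20, epsdot0=1e-3):
    """Power-law flow rule: epsdot_vp = (3/2)*epsdot0*(sigma_eq/R)**n * sigma_dev/sigma_eq,
    with sigma the 3x3 Cauchy stress and R the current value of the yield function."""
    sigma_dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)
    sigma_eq = np.sqrt(1.5) * np.linalg.norm(sigma_dev)
    if sigma_eq == 0.0:
        return np.zeros((3, 3))
    return 1.5 * epsdot0 * (sigma_eq / R) ** n * sigma_dev / sigma_eq
```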
Figure <ref> shows the viscoplastic flow response in Cu generated by the power law and as a function of applied strain rate and rate sensitivity parameter n=20, demonstrating significant anisotropy in the flow response as a function of imposed strain rate. While the particular functional forms and the power law model described above enable the constitutive formulation for viscoplastic flow response in metals, predicting the stress-strain behavior using these models often require manual calibration of a set of hardening parameters with experimental data obtained from uniaxial deformation tests. While the automation of the calibration procedures have been enabled by the existing ML and GA models, training these models can be still a time-consuming task, especially when discrepancies of the experimental response and the suggested power law model are present. Therefore, we propose a data-driven elasto-viscoplasticity framework based on neural networks, NN-EVP, that replaces the power law with a generic neural networks based algorithm capable of predicting the flow response in metals and alloys at large deformations in the context of limited data availability. §.§ NN-EVP In order to employ a physics-informed data-driven model for the elasto-viscoplastic constitutive formulation described earlier, the dual potential φ^* is represented by the response of a neural 𝒩𝒩_ϕ^* whose (scalar) input is chosen based on which hardening phenomena we aim to be captured. This is discussed in more detail in the subsequent sections. To remain consistent with thermodynamics laws, the following conditions are (implicitly) enforced on the dual potential neural network 𝒩𝒩_ϕ^*: * 𝒩𝒩_ϕ^* is convex and monotonically increasing * 𝒩𝒩_ϕ^* is always positive: 𝒩𝒩_ϕ^* > 0 * 𝒩𝒩_ϕ^* includes the origin: 𝒩𝒩_ϕ^*(0)=0 If 𝒩𝒩_ϕ^*: ℝ→ℝ is a feedforward neural network with L hidden layers, input x_0 and output y_L, the neural network can be written as follows: x_0 ∈ℝ_≥ 0, x_1 = ℱ_1( x_0W_1^T + b_1) ∈ℝ^n^1, x_l = ℱ_l( x_l-1W_l^T + b_l) ∈ℝ^n^l, l=1, …, L-1 x_L = x_L-1W_L^T + b_L, ∈ℝ, where the weights W_l∈ℝ^n^l× n^l-1 and biases b_l∈ℝ^n^l define the set of trainable parameters and the activation functions are denoted by ℱ_1. The neural network 𝒩𝒩_ϕ^* is positive, monotonically increasing, and input convex when the following conditions are met <cit.>: * x_0≥ 0 , W_l≥ 0 , b_l≥ 0, * ℱ_l: ℝ_≥ 0→ℝ_≥ 0 , l=1, …, L, * ℱ_l': ℝ_≥ 0→ℝ_≥ 0 , l=1, …, L, * ℱ_l”: ℝ_≥ 0→ℝ_≥ 0 , l=1, …, L. In order to satisfy the conditions above, we choose a parameterized and adaptive/scalable Softplus activation function <cit.> as follows: ℱ_l^𝒩𝒩_ϕ^*(x) = 1/βlog(1+e^x) -ℱ_l^𝒩𝒩_ϕ^*(0) , where β>0 is a training parameter in addition to the weights and biases of the neural network, providing a more generic form of the activation function and thus more flexibility in training. Note that β is trainable for each hidden layer of 𝒩𝒩_ϕ^*. The deduction of ℱ_l^𝒩𝒩_ϕ^*(0) from the equation above is to satisfy the condition 𝒩𝒩_ϕ^*(0)=0. Once the dual potential 𝒩𝒩_ϕ^* is established, it can be used to replace the particular power law form introduced earlier in Eqs (<ref>) and (<ref>). Equivalently, the viscoplastic strain rate ϵ̇^vp can be obtained by taking the derivative of the output of 𝒩𝒩_ϕ^* with respect to Cauchy stress: ϵ̇^vp= ∂𝒩𝒩_ϕ^*∂σ. §.§.§ NN-EVP for perfect viscoplasticity In the case of perfect viscoplasticity, the yield function σ^Y is constant and does not evolve. We can therefore use σ_eq as the input to the neural network, 𝒩𝒩_ϕ^* =𝒩𝒩_ϕ^* (σ_eq). 
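A minimal PyTorch sketch of such a dual-potential network is given below: positive weights and biases are obtained through a softplus re-parameterization, the adaptive Softplus activation is shifted so that it vanishes at the origin, the condition 𝒩𝒩_ϕ^*(0)=0 is imposed by subtracting the network's value at zero, and ϵ̇^vp is recovered by automatic differentiation. The layer sizes, the re-parameterization, and the way the origin condition is enforced are illustrative choices, not necessarily those of the original implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveSoftplus(nn.Module):
    """F(x) = (log(1 + e^x) - log 2) / beta with a trainable beta > 0, so F(0) = 0."""
    def __init__(self):
        super().__init__()
        self.log_beta = nn.Parameter(torch.zeros(1))
    def forward(self, x):
        return (F.softplus(x) - math.log(2.0)) / torch.exp(self.log_beta)

class DualPotentialICNN(nn.Module):
    """Positive, monotonically increasing, input-convex NN_phi* of a scalar input."""
    def __init__(self, width=20, depth=2):
        super().__init__()
        dims = [1] + [width] * depth + [1]
        self.raw_w = nn.ParameterList([nn.Parameter(0.1 * torch.randn(o, i))
                                       for i, o in zip(dims[:-1], dims[1:])])
        self.raw_b = nn.ParameterList([nn.Parameter(torch.zeros(o)) for o in dims[1:]])
        self.acts = nn.ModuleList([AdaptiveSoftplus() for _ in range(depth)])
    def _phi(self, x):
        for k, act in enumerate(self.acts):
            x = act(F.linear(x, F.softplus(self.raw_w[k]), F.softplus(self.raw_b[k])))
        return F.linear(x, F.softplus(self.raw_w[-1]), F.softplus(self.raw_b[-1]))
    def forward(self, x):                       # x >= 0, e.g. sigma_eq or sigma_eq / sigma_Y
        return self._phi(x) - self._phi(torch.zeros_like(x))   # enforce NN_phi*(0) = 0

def strain_rate_from_potential(potential, sigma):
    """epsdot_vp = d(phi*)/d(sigma) obtained through automatic differentiation."""
    sigma = sigma.detach().clone().requires_grad_(True)
    dev = sigma - torch.eye(3) * torch.trace(sigma) / 3.0
    sigma_eq = math.sqrt(1.5) * torch.linalg.norm(dev)
    phi = potential(sigma_eq.reshape(1, 1))
    return torch.autograd.grad(phi.sum(), sigma, create_graph=True)[0]
```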
Figure <ref> shows a schematic of the particular neural network architecture used for modeling perfect viscoplasticity in this work. While different configurations are possible in particular with regards to the selection of the number of layers and neurons, due to the already low amount of training data (less than 100 data points depending on the resolution of stress-strain data points) compared to the number of trainable parameters we show the results for a network with 2 layers and 20 neurons in the following. We remark that slight changes to this network architecture, i.e. 2 layers with 30 neurons, for example, have not shown to yield different results. §.§.§ NN-EVP with isotropic hardening Once hardening effects are introduced into the constitutive model, the yield function R(r) (which will be not imposed, but discovered) evolves during the deformation as a function of accumulated plastic strain r. As a result, information pertaining to the evolution of yield function is also required as an additional input to the dual potential neural network 𝒩𝒩_ϕ^*. Thus, we could define the dual potential neural network as 𝒩𝒩_ϕ^* =𝒩𝒩_ϕ^* (σ_eq,R(r)). To this end, an additional neural network 𝒩𝒩_R=𝒩𝒩_R(r) is required to track the evolution of strain hardening. Notice that the output of hardening neural network 𝒩𝒩_R(r) could be used as an input to the dual potential neural network 𝒩𝒩_ϕ^* (σ_eq,R(r)). Here, instead of taking σ_eq and R(r) as two independent inputs to 𝒩𝒩_ϕ^*, we choose to take the ratio σ_eqR(r) as a single input. This mechanistic assumption informs the neural network with the physical constraint that σ_eq needs to be scaled proportionally with the evolution of R(r), imposing the strain hardening effects as the ratio σ_eqR(r) controls the elasto-viscoplastic transition as well as the rate of the evolution of viscoplastic strain rate. Figure <ref> shows a schematic of NN-EVP with isotropic hardening. To remain consistent with the thermodynamics laws, the following conditions are enforced on the hardening neural network 𝒩𝒩_R: * 𝒩𝒩_R is monotonically increasing, * 𝒩𝒩_R is always positive: 𝒩𝒩_R > 0, * 𝒩𝒩_R does not include the origin: 𝒩𝒩_R(r=0) ≠ 0. The neural network 𝒩𝒩_R is positive and monotonically increasing when the following conditions are met: * x_0≥ 0 , W_l≥ 0, b_l≥ 0, * ℱ_l: ℝ_≥ 0→ℝ_≥ 0 , l=1, …, L, * ℱ_l': ℝ_≥ 0→ℝ_≥ 0 , l=1, …, L. Notice that in contrast to the dual potential network 𝒩𝒩_ϕ^*, the hardening network 𝒩𝒩_R does not contain the origin and does not require convexity. This is due to the fact that the yield function R(r) has a nonzero value equal to the initial yield at zero accumulated viscoplastic strain r. We also remark that since 𝒩𝒩_R is monotonically increasing, the reciprocal form of 1𝒩𝒩_R, used as input to 𝒩𝒩_ϕ^*, is monotonically decreasing <cit.>. In order to satisfy the conditions above, we choose combinations of forms of ReLU, adaptive logistic or adaptive tanh activation functions as follows: ℱ_l^𝒩𝒩_R(x) = α_1 max(x,0) + α_2 1/1+e^ -β x, ℱ_l^𝒩𝒩_R(x) = α_1 max(x,0) + α_2 e^β x-e^- β x/e^β x+e^-β x, ℱ_l^𝒩𝒩_R(x) = α_1 1/1+e^ -β x + α_2 e^β x-e^- β x/e^β x+e^-β x, where α_1 and α_2 denote the weights assigned to the activation functions and applied to the hidden layers of each neural network. The reason behind these mixed activation functions is to facilitate learning the hardening response for larger strain amplitudes. 
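As an illustration, one of the mixed forms above (ReLU plus adaptive tanh, the combination reported later in the paper to extrapolate best) can be written as a small PyTorch module; the default weighting and the parameterization of β are again illustrative assumptions.

```python
import torch
import torch.nn as nn

class MixedAdaptiveActivation(nn.Module):
    """F(x) = alpha1 * max(x, 0) + alpha2 * tanh(beta * x) with trainable beta > 0."""
    def __init__(self, alpha1=0.2, alpha2=0.8):
        super().__init__()
        self.alpha1, self.alpha2 = alpha1, alpha2
        self.log_beta = nn.Parameter(torch.zeros(1))
    def forward(self, x):
        beta = torch.exp(self.log_beta)
        return self.alpha1 * torch.relu(x) + self.alpha2 * torch.tanh(beta * x)
```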
Since the logistic activation function saturates early at lower strain levels, the addition of either a ReLu or tanh response compensates for stress-strain curvature at later stages of hardening. The effect of variation in α_1 and α_2 as well as the particular selection of the activation functions on the flow response is discussed later in detail. §.§.§ Hall-Petch effects and grain size-aware NN-EVP Hall-Petch effects and variation in grain size are concomitant with strong anisotropy in viscoplastic flow responses. In particular, the initial yield function of the material alters drastically depending on the average grain size of the underlying microstructure. Particular functional forms for Hall-Petch relationship vary in the literature. However, the most common form <cit.> can be written as follows: R_0,HP=Hμ√(b)√(d_grain) where b is the Burgers vector, μ is the shear modulus, d_grain is the average grain size and H stands for the Hall-Petch coefficient usually obtained from calibrations with experimental observations. Notice that Eq. (<ref>) provides a nonlinear relationship between the grain size and initial yield function in non-logarithmic space, however, it can be shown that the Hall-Petch stress does not always scale as the inverse square root of grain size <cit.> and thus Eq. <ref> is simply a specific function of many possible forms. In order to incorporate the Hall-Petch effect into the NN-EVP framework, we introduce a Hall-Petch neural network 𝒩𝒩_HP=𝒩𝒩_HP(d_grain) in addition to the dual potential 𝒩𝒩_ϕ^* and the hardening neural network 𝒩𝒩_R. This allows us to learn and discover the Hall-Petch relationship. To remain consistent with thermodynamics laws, the following conditions are enforced on the Hall-Petch neural network 𝒩𝒩_HP: * 𝒩𝒩_HP is monotonically decreasing, * 𝒩𝒩_HP is always positive: 𝒩𝒩_HP > 0, * 𝒩𝒩_HP(d_grain→ 0) →∞. The neural network 𝒩𝒩_HP is positive when the following conditions are met: * x_0≥ 0 , W_l≥ 0, b_l≥ 0, * ℱ_l: ℝ_≥ 0→ℝ_≥ 0 , l=1, …, L. Here, we first choose a standard tanh activation function as follows and then take the reciprocal of the network output to satisfy the conditions above: ℱ_l^𝒩𝒩_HP(x) = e^x-e^-x/e^x+e^-x, 𝒩𝒩_HP←1/𝒩𝒩_HP. Notice that since the Hall-Petch term only affects the initial yield function and does not evolve during the deformation, an adaptive activation function here is not necessary. Figure <ref> shows a schematic of grain size-aware NN-EVP architecture including the Hall-Petch effects. As illustrated in the figure, the response of the Hall-Petch network 𝒩𝒩_HP and the hardening network 𝒩𝒩_R are combined to σ_eq𝒩𝒩_R(r)+𝒩𝒩_HP(d_grain) and then used as an input to the dual potential network 𝒩𝒩_ϕ^*. § RESULTS AND DISCUSSION §.§ Implementation highlights The NN-EVP framework is implemented in PyTorch <cit.>. PyTorch takes advantage of automatic differentiation <cit.> facilitating automatic computation of local gradients of the output of neural network with respect to its inputs. Automatic differentiation also allows us to determine the Jacobian required for solving the update step of implicit time-stepping using the Newton Raphson (NR) method <cit.>. Details of the implementation are provided in the form of a pseudo-code in algorithm <ref>. Due to its adaptive learning decay algorithm and robustness, the AdamW optimizer <cit.> is utilized to optimize the parameters of the three neural network used in this study. 
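A minimal sketch of this optimization setup, including the learning-rate schedule described next, could look as follows. The arguments nn_phi, nn_R, nn_HP and simulate_uniaxial are placeholders for the three networks and the implicit (Newton-Raphson) uniaxial driver of Algorithm <ref>; they are not part of a published API.

```python
import torch

def train_nn_evp(nn_phi, nn_R, nn_HP, simulate_uniaxial, stress_measured, n_epochs=500):
    """AdamW with a cosine-annealed learning rate decaying from 1e-2 to 1e-3."""
    params = list(nn_phi.parameters()) + list(nn_R.parameters()) + list(nn_HP.parameters())
    optimizer = torch.optim.AdamW(params, lr=1e-2)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=n_epochs, eta_min=1e-3)
    loss_fn = torch.nn.MSELoss()
    for epoch in range(n_epochs):
        optimizer.zero_grad()
        stress_pred = simulate_uniaxial(nn_phi, nn_R, nn_HP)   # placeholder NR-based driver
        loss = loss_fn(stress_pred, stress_measured)
        loss.backward()
        optimizer.step()
        scheduler.step()
    return loss.item()
```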
In addition, a cosine annealing scheduler <cit.> is utilized to further enhance the learning rate adaptivity and therefore improve the robustness of the NN-EVP framework. The cosine annealing scheduler is set with a starting learning rate of 1e^-2 and saturates to a learning rate of 1e^-3. The established framework is then trained to model perfect viscoplasticity as well as an isotropic hardening response under uniaxial tensile loading. We consider two different scenarios. First, the flow response is synthetically generated using the power law and phenomenological Johnson-Cook hardening <cit.> where in addition to fitting the existing data, the flow response is predicted to larger strain amplitudes via extrapolations enabled by recovering the trained neural networks. Next, experimentally measured data pertaining to large deformations as a function of grain size are used to train the NN-EVP framework presented herein. In all cases, a constant 11 component of the strain rate ϵ̇^app=1e^-3 s^-1 in X direction is applied. In order to satisfy the uniaxial loading conditions, the 22 and 33 components of the stress tensor are enforced to be zero (see algorithm <ref>). The NR solver tolerance is set to 𝒯𝒪ℒ_𝒩ℛ=1e^-6 for all simulations which is reached within 2-4 iterations depending on the deformation history. Finally, the mean squared error function is used to calculate the loss value in each training epoch. §.§ Rediscovering the power law for perfect viscoplasticity In order to predict perfect viscoplasticity in Cu, the power law equation with a constant yield function σ^Y and rate sensitivities of n=10, 20, and 100 are used to generate the synthetic stress-strain data. Since strain hardening is not involved, only the dual potential neural network 𝒩𝒩_ϕ^* (σ_eq=σ) is utilized. The left plot of Fig. <ref>(a) shows the evolution of normalized loss over 200 training epochs for a rate sensitivity of n=10. The training error saturates to a value of 1e^-5 after around 100 iterations. Note that the fluctuations observed in the loss evolution are innate to the optimizer algorithm regardless of the learning rate value. The right plot of Fig. <ref>(a) illustrates the trained model output using the ground truth provided by the power law. The transparent lines represent the training history of the model during the optimization process. Figures <ref>(b) and <ref>(c) highlight the loss evolution and training response for power laws with rate sensitives of n=20 and n=100 respectively. As the rate sensitivity increases, a sharper transition from elastic to viscoplastic deformation is observed, making it more challenging for the neural network to fit the data at the transition regime. An increased number of training epochs from 200 to 500 is an indication of such behavior. Note that while the training loss is still reducing slowly after around 500 epochs, to maintain consistency and computational efficiency, we hereafter set the maximum number of epochs N_Epochs=500 for all the simulations. §.§ Training via phenomenological hardening Synthetic stress-strain data for isotropic hardening can be generated using either physics-based or phenomenological hardening models. Physics-based hardening laws are more suited for mesoscale or micromechanical simulations because they consider the evolution of dislocation substructures and dislocation densities <cit.> and often require an evolved implementation framework which is computationally intensive. 
On the other hand, phenomenological hardening models such as Voce <cit.>, Peirce, Asaro, and Needleman (PAN) <cit.> or Johnson-Cook <cit.> are simpler in implementation and thus computationally more efficient for modeling macroscopic behavior where the details pertaining to the underlying microstructure is not considered. In the present work, we use the Johnson-Cook isotropic hardening model with the functional form written as follows <cit.>: R(r)=[A+Br^n][1+C logrr^*][1-T^*^m], T^*=T-T_0T_m-T_0, where A, B, C, n and m are the hardening parameters associated with the Johnson-Cook model. Note that since we are considering deformations at room temperature T_0, the term T^* that includes the effects of melting temperature T_m vanishes. The values corresponding to the Johnson-Cook model parameters for Cu are listed in Table <ref> <cit.>. Here both the dual potential network 𝒩𝒩_ϕ^* (σ, R(r)) and hardening neural network 𝒩𝒩_R(r) are utilized for training. The dual potential network uses the SoftPlus activation function while the hardening neural network 𝒩𝒩_R(r) uses the adaptive logistic activation function obtained by setting α_1=0 and α_2=1 in Eq. <ref>. Figure <ref> shows the loss evolution and trained model. Young's modulus and Poisson's ratio of 130 GPa and 0.34 are used for the elastic response. The loss evolution experiences an abrupt drop to a normalized value of around 5e^-4 and saturates after around 100 epochs, indicating a fast convergence of the proposed framework. §.§.§ Extrapolation of flow response to larger strain amplitudes One of the powerful advantages (and challenges) of algorithms that are developed via neural network is the ability to extrapolate beyond the data available to the networks during the training process. Once a model is trained for a specific set of data, the recovered trained model is able to predict the behavior for any given input within the trained framework and beyond the data observed by the model. Here, we restrict the model during training for predictions up to 0.5% total strain and recover the model and its trained parameters to examine the ability of the model to extrapolate the flow response up to 2% total strain. This benchmark provides us with a train-test configuration consisting of 25% training and 75% testing data. Figure <ref> shows the extrapolations of stress-strain response for different ratios of the mixed activation functions ReLU, adaptive logistic, and adaptive tanh with variable combination weights α_1 and α_2 in Eqs. (<ref>), (<ref>) and (<ref>). The logistic activation function saturates rapidly at low strain amplitudes, hindering the model from properly capturing the hardening response at higher strain amplitudes. To address this issue, tanh, and ReLU activation functions are combined with logistic activation functions using different weights to predict the flow response beyond the training domain. Note that the addition of the ReLU activation function should be done with caution to avoid excessive hardening behavior shown in Fig. <ref>(a). However, it is worthwhile to mention that sudden increase in strain hardening is well observed in case of deformation twinning in compression tests <cit.> and thus, a mixed activation function with large ReLU weights seems to be a promising approach for capturing the rapid hardening behavior upon twinning formations. Among various configurations shown in Fig. <ref>(a)-(e), a mixed activation function with 80% adaptive tanh and 20% ReLU best predicts the hardening behavior up to 2% strain. 
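For reference, the room-temperature Johnson-Cook form used above to generate the synthetic curves can be sketched as follows. We read the rate term as the standard Johnson-Cook dependence on the effective viscoplastic strain rate relative to a reference rate, and the numerical values below are placeholders rather than the calibrated Cu parameters of Table <ref>.

```python
import numpy as np

def johnson_cook_R(r, rdot, A=90.0, B=290.0, C=0.025, n=0.31, rdot_ref=1e-3):
    """R(r) = (A + B*r**n) * (1 + C*ln(rdot/rdot_ref)); thermal softening dropped at
    room temperature. A, B in MPa; values here are placeholders, not Table <ref>."""
    rate_term = 1.0 + C * np.log(np.maximum(rdot / rdot_ref, 1e-12))
    return (A + B * np.power(r, n)) * rate_term
```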
§.§ Training large plastic deformations via experimental data So far the capability of the NN-EVP framework in training and extrapolating the flow response at small deformations is well demonstrated. In this section, we elaborate on training the flow response on experimentally measured data at large plastic deformations. One of the challenges involved in training with experimental data is the limitation of stress-strain data points as well as their frequency and form of occurrence through the deformation history. Access to the experimental data reported in the literature usually involves the extraction of data from stress-strain images using digitizing software. Typically data extraction is based on manual user input via the coordinates representing the image. Creating a pair of lists for the model output and measured data matching at the corresponding strain levels and necessary for a one-by-one comparison is challenging. Generating data with equal spacing is also not an option since adaptive time stepping is required for a computationally efficient framework with fewer time increments that represent the same curve with large deformations up to 10% strain. Thus, to address this issue and to enable access to the experimental data points at arbitrary strain amplitudes with arbitrary and adaptive time stepping, a nonlinear interpolation preprocessing step via the Scipy <cit.> library is used prior to training. Figure <ref> shows an example of a continuous Scipy interpolation for experimentally obtained data in Ni with uneven spacing and arbitrary strain amplitudes. Once the Scipy interpolation is applied, the stress values corresponding to the desired strain levels are readily evaluated using the obtained interpolator. Figures <ref>(a) and <ref>(b) present the trained models using measured data for small and large deformations up to 1% and 10% strain in Ni. The model shows promising performance in capturing the elasto-viscoplastic transition in small strains as well as prediction of hardening curvature under large deformations. Note that since most of the experimental data existing in the literature ignore the elastic response, an additional step based on the current value of viscoplastic strain is imposed in algorithm <ref> to ignore the elastic response when computing the loss between the training output and measured response. §.§ Grain size-aware flow response and Hall-Petch discovery After validating the ability of the NN-EVP model in training experimental data to large strains with arbitrary data frequency, we take a step further to train the flow response at large deformations as a function of grain size and aim to discover the Hall-Petch relationship without considering a particular functional form. To this end, the measured flow response of Cu as a function of grain size and for average grain sizes of 2.1 μ m, 3.4 μ m, 7.1 μ m and 15 μ m, as shown in Fig. <ref>, is utilized for training the model including the grain size effects. Here, in addition to the dual potential network 𝒩𝒩_ϕ^* (σ, R(r)) and hardening network 𝒩𝒩_R(r), the Hall-Petch neural network 𝒩𝒩_HP(d_grain) is also used in the model. In order to increase the training speed, adaptive time stepping was implemented with Δ t | _t+Δ t=1.15 Δ t | _t , resulting in 50 increments per stress-strain curve. Since the time stepping is adaptive, the training losses corresponding to each data point are scaled proportionally with the time increment to that data point to balance the loss evaluation. 
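A minimal sketch of this preprocessing and loss balancing is given below: a monotone SciPy interpolant resamples the digitized curve at the strain levels visited by the 1.15-factor adaptive time stepping, and each point's squared error is weighted by its local time increment. The specific interpolator class, the digitized data points, and the exact normalization of the weights are assumptions; the text only states that a nonlinear SciPy interpolation and time-increment-proportional loss scaling are used.

import numpy as np
import torch
from scipy.interpolate import PchipInterpolator

# Digitized (strain, stress) points with uneven spacing, e.g. extracted from a published figure
strain_meas = np.array([0.0, 0.001, 0.003, 0.01, 0.03, 0.06, 0.10])
stress_meas = np.array([0.0, 60.0, 110.0, 170.0, 250.0, 310.0, 350.0])   # MPa, illustrative
interp = PchipInterpolator(strain_meas, stress_meas)   # shape-preserving, avoids spline overshoot

# Adaptive time stepping: dt grows by a factor 1.15 per increment, 50 increments per curve
eps_rate, dt0, n_inc = 1e-3, 0.0138, 50                # dt0 chosen so the 50 steps span ~10% strain
dts = dt0 * 1.15 ** np.arange(n_inc)
strains = eps_rate * np.cumsum(dts)
targets = torch.tensor(interp(strains), dtype=torch.float32)

# Per-point weights proportional to the local time increment, balancing the uneven spacing
weights = torch.tensor(dts / dts.sum(), dtype=torch.float32)

def weighted_mse(pred, target):
    return torch.sum(weights * (pred - target) ** 2)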
This is required due to the uneven data spacing resulting from irregular time steps where the density of training data is much smaller within the large deformation regime above 2% strain. First, each curve is trained individually and independent of one another to test the ability of the model in training the flow response with varying stress levels. Results corresponding to single-curve fits are shown in Fig. <ref>. Since one curve is trained at a time, it is easier for the model to learn the flow response and thus the convergence is relatively fast. Next, a quaternary configuration (with all four stress-strain responses) including 200 data points is utilized for training. Training on binary and ternary configurations with two and three curves consisting of 100 and 150 data points are also provided in the supplementary data shown in Figs. <ref> and <ref>. Note that since multiple curves with contrasting stress levels and hardening curvatures are trained in parallel, it is more challenging for the model to converge to an optimal solution as evident in the behavior of loss evolution. Also, due to more data points the training process is computationally more demanding. Ultimately, we aim to discover the Hall-Petch relationship which describes the dependence of the initial yield on the grain size. To this end, after training the model for grain sizes of 2.1 μ m, 3.4 μ m, 7.1 μ m and 15 μ m, the Hall-Petch neural network 𝒩𝒩_HP(d_grain) is recovered to extrapolate the initial flow response for grain sizes below 2.1 μ m and beyond 7.1 μ m with average grain sizes and their corresponding Hall-Petch stress values listed in Table <ref>. The discovered Hall-Petch behavior is shown in Fig. <ref>. These results indicate the ability of the model to capture the expected linear Hall-Petch relationship in log-log space, discovering it without any underlying assumptions on its functional form. Note that regardless of the particular material used herein, the proposed NN-EVP model should be able to predict the Hall-Petch relationship for a wide range of materials with arbitrary grain size and stress-strain response. Extrapolation of the Hall-Petch relationship is a complicated topic that we aim to study in the future. § SUMMARY AND CONCLUSIONS The flow response in metals and metallic alloys differs drastically depending on the average grain size that forms the underlying microstructure. Modeling the viscoplastic flow response of metals is usually concomitant with the assumption of particular constitutive models in the form of a power law and phenomenological hardening formulations. These hardening laws often include parameters that involve manual calibrations with experimental data obtained from uniaxial tests. Machine learning models that can automate such calibrations generally rely on large amounts of data and often require a large number of iterations in order for them to be sufficiently accurate and predictive. For instance, the fitting process via genetic algorithms involve a large number of functional calls within four orders of magnitude to the black-box in order to obtain the flow response <cit.>. To remove the need for these big datasets and large number of iterations in computationally expensive training, we propose a data-driven elasto-viscoplasticity framework based on neural networks called NN-EVP that leverages PyTorch's high-performance ML library. 
The proposed approach is tested and trained using both synthetic and experimentally measured uniaxial data in the context of limited data availability. The developed NN-EVP framework was adopted to train elasto-viscoplastic flow response of metals at both small and large deformations and as a function of grain size. First, rate sensitivity-dependent perfect viscoplasticity was discovered using the power law with no hardening effects. Next, synthetically generated deformation responses via a Johnson-Cook phenomenological hardening law were used to train and test the approach on its ability to extrapolate the flow response at strain amplitudes beyond the observed training data. Next, the framework was applied to train large deformations obtained from experimental tests with limited stress-strain data availability. Finally, simultaneous training of multiple grain size-dependent stress-strain curves enabled us to obtain a grain size-aware flow behavior and discover the Hall-Petch relationship pertaining to the grain size strengthening effects. The proposed NN-EVP model presented herein takes a further step in the prediction of flow responses in metals and improves the computational efficiency of structure-property-relationship simulations of metallic materials. The findings of the present work provide insights into the versatility and flexibility of data-driven constitutive modeling which can be adapted for a wide range of materials exhibiting complex large deformation behavior. This motivates future work in incorporating the current model into finite elements for efficient full-field modeling of large deformations under arbitrary loading and boundary conditions. § DATA AVAILABILITY Data and Python script supporting the findings of this study are available from the corresponding author upon request. § ACKNOWLEDGMENTS JF, AE and NB gratefully acknowledge support by the Air Force Office of Scientific Research under award number FA9550-22-1-0075. § SUPPLEMENTARY DATA
http://arxiv.org/abs/2307.07386v1
20230714145314
Real-time system identification of superconducting cavities with a recursive least-squares algorithm
[ "Volker Ziemann" ]
physics.acc-ph
[ "physics.acc-ph" ]
Real-time system identification of superconducting cavities with a recursive least-squares algorithm V. Ziemann, Uppsala University, 75120 Uppsala, Sweden July 14, 2023 ====================================================================================================== We employ a recursive least-squares estimator to determine the bandwidth ω_12 and the detuning Δω of a cavity that is controlled with a low-level RF system, and we present a comprehensive analysis of the convergence and asymptotic behavior of the algorithm for static and time-varying systems. § INTRODUCTION Superconducting acceleration cavities are used to accelerate protons <cit.>, electrons <cit.>, and heavy ions <cit.>, both with pulsed <cit.> and with continuous beams <cit.>. Owing to the low losses, the cavities have a very narrow bandwidth on the order of Hz for bare cavities and a few hundred Hz for cavities equipped with high-power couplers. In order to cool these cavities efficiently with liquid helium, they are made of rather thin material, which makes them easily deformable; such deformations change their resonance frequency, often by an amount comparable to their bandwidth. In pulsed operation, the dominant deformation comes from the electro-magnetic pressure of the field inside the cavity, the Lorentz-force detuning <cit.>, while cavities operated continuously are perturbed by so-called microphonics <cit.>, caused by pressure variations of the liquid helium bath or by mechanical perturbations, for example, from reciprocating pumps or malfunctioning equipment. As a consequence of these perturbations, the cavities are detuned and force the power generator to increase its output to maintain the fields necessary for stable operation of the beams. This reduces the efficiency of the system and requires an often substantial overhead in the power generation, forcing it to operate at a less than optimal working point. To avoid this sub-optimal mode of operation and to compensate for the detuning, many accelerators employ active tuning systems that use stepper motors and piezo-actuators <cit.> to squeeze the cavities back in tune, which requires diagnostic systems to measure the detuning. These measurements are usually based on comparing the phase of the signal that excites the cavity, measured with a directional coupler just upstream of the input coupler, to the phase of the field inside the cavity, measured by a field probe or antenna inside the cavity. Both analog <cit.> and digital <cit.> signal processing systems are used, often as part of the low-level radio-frequency (LLRF) feedback system that stabilizes the fields in the cavity. Even more elaborate systems, based on various system identification algorithms, have also been used <cit.>. All these algorithms normally rely on low-pass filtering the often noisy signals from the directional couplers and antennas in order to provide a reliable estimate of the cavity detuning and the bandwidth. In this report, we focus on a complementary algorithm that exploits correlations between the changes of the input signal that the LLRF system prescribes and the ensuing changes of the fields, as measured by the field probe. Modern digital LLRF systems often operate at digitization rates of several Msamples per second, making a huge number of “change measurements” available. We subject this wealth of data to a recursive least-squares (RLS) algorithm to extract the bandwidth and the detuning of a cavity as fit parameters.
Such algorithms can be very efficiently implemented inside the LLRF and requires only moderate numerical overhead. Moreover, the difference between the continuously improving estimates of the fit parameters and the “true” values—the so-called estimation error—approaches zero <cit.>! Finally, this type of algorithm actually benefits from the measurement noise of the field probe, because it agitates the feedback and then exploits the ensuing correlations between the input to the cavity and the field inside. We must, however, point out that the slow convergence of the algorithm, especially for low-noise systems, makes it more suitable for continuous-wave operation than for pulsed operation. In the following sections, we first introduce the model of the cavity and the feedback and derive a simple proportional controller using optimal control theory whereas the analysis of PID controllers is deferred to appendices. After transforming the model to discrete time we develop the RLS algorithm to identify the cavity parameters in Section <ref> and analyze its convergence in Section <ref>. In the following two sections we generalize the method to efficiently deal with time-varying parameters and analyze its convergence. In Section <ref> we briefly address pulsed systems before closing. § MODEL Accelerating cavities can be described by an equivalent circuit composed of a resistor R, an inductance L, and a capacitor C, all connected in parallel. This circuit is then excited by a current I. In <cit.> the following differential equation for the envelope of the voltage V in the cavity is derived dV/dt =-(ω̂/2Q_L+iω̂δ/2) V +ω̂R I/2Q_L(1+β)n with δ=ω/ω̂-ω̂/ω=-2Δω/ω̂ , where ω is the frequency of the generator and ω̂^2=1/LC is the resonance frequency of the cavity. The two are usually close, but not necessarily equal and Δω=ω̂-ω is the difference between them. Moreover, Q_L=Q_0/(1+β) is the loaded Q-value and Q_0=R√(C/L) is what's called the unloaded Q-value, whereas β is the coupling of the antenna that feeds power into the cavity and n is the winding ratio of the transformer that normally models the coupler. With the abbreviations ω̂/2Q_L= and ω̂δ/2=Δω, we can rewrite Equation <ref> in the standard form dV/dt =-(+i )V + I where we defined the abbreviation =R/(1+β)n to simplify the notation. Throughout this report, we assume that all currents and voltages are baseband signals, that is after the down-mixer and before before the up-mixer. By splitting the voltage and the current into real and imaginary part, V=V_r+iV_i and I=I_r+iI_i we obtain two equations, one for the real and one for the imaginary part of the voltage. Assembling the two equations in the form of a matrix leads us directly to the following state-space representation ([ dV_r/dt; dV_i/dt ]) =([ - -; - ]) ([ V_r; V_i ]) +([ 0; 0 ]) ([ I_r; I_i ]) of the system that describes the dynamics of the cavity voltage powered by a generator that provides the currents. Normally we want to control a system around some desired voltage level V̂_r and V̂_i, which is the set point of the controller. The currents, we assume that they already include the beam currents, that produce these voltages are easily found by setting the left-hand side in Equation <ref> to zero and solving for the currents. This leads us to ([ Î_r; Î_i ]) =1/([ 1 /; -/ 1 ]) ([ V̂_r; V̂_i ]) . 
The dynamics around this operating point is then given by expanding the voltages around the steady state with V_r=V̂_r+v_r and V_i=V̂_i+v_i, where v_r and v_i are small deviations from the desired set point that we want to stabilize by changing the currents, which we represent likewise by I_r=Î_r+i_r and I_i=Î_i+i_i. Inserting in Equation <ref> then leads us to ([ dv_r/dt; dv_i/dt ]) =([ - -; - ]) ([ v_r; v_i ]) +([ 0; 0 ]) ([ i_r; i_i ]) , which describes essentially the same dynamical system as Equation <ref>, but now it describes the dynamics around the chosen set point V̂_r and V̂_i. Equation <ref> is in the standard form of a linear dynamical system v̇⃗̇=A̅v⃗ +B̅i⃗ where v⃗ is the column vector of voltages and i⃗ to that of the currents. The matrices A̅ and B̅ correspond to those in Equation <ref> and are given by A̅=([ - -; - ]) and B̅=([ 0; 0 ]) . In order to control this system we use an optimal controller. The discussion of a PID-type regulators is deferred to the appendices. § OPTIMAL CONTROL FEEDBACK An optimal controller is characterized by minimizing the functional J[v⃗,i⃗]=∫[v⃗^⊤Q_vv⃗ +i⃗^⊤Q_ii⃗] dt , where Q_v and Q_i are weights to specify whether the algorithm should favor small values of the state v⃗ or of the controller i⃗. We will use Q_v=1 and Q_i=Z^21, where Z is a constant with the units of an impedance that makes the units of v⃗ (volt) and i⃗ (ampere) commensurate in the definition of J. Moreover, 1 is the 2×2 unit matrix. The theory to design optimal controllers <cit.> for time-invariant systems relies on finding the solution K of the stationary Riccati equation 0=-Q_x -A̅^⊤ K - KA̅ + KB̅Q_u^-1B̅^⊤K , where A̅ and B̅ are defined in Equation <ref>. Since the detuning is normally small and might even oscillate around zero if it is caused by microphonics <cit.>, we will set to zero when determining K. In that case all matrices appearing in Equation <ref> are diagonal and the equation separates into two identical scalar equations, which read 0=-1+κ + κ+(^2^2/Z^2)κ^2 for κ defined by K=κ1. Solving this quadratic equation results in κ=-Z^2/^2[1∓√(1+^2/Z^2)] . The control law <cit.> relating i⃗ and v⃗ is then given by i⃗=-Q_u^-1B̅^⊤ K v⃗ or, in components, we obtain ([ i_r; i_i ]) =1/[1∓√(1+^2/Z^2)] ([ v_r; v_i ]) . We see that it depends on the ratio of and Z. We can select it to make the feedback work harder to minimize the voltages (small Z) or to keep the control currents small (large Z). It remains to determine which sign of the root to use. Inserting the control law from Equation <ref> into Equation <ref>, we eliminate the currents i_r and i_i in favor of the voltages v_r and v_i and, after some algebra, arrive at ([ dv_r/dt; dv_i/dt ])= ∓√(1+^2/Z^2)([ v_r; v_i ]) , which makes it obvious that we need to pick the negative sign of the root in order to make the feedback system stable. The control law is thus given by Equation <ref> with the negative sign of the root. We can summarize the system we will analyze by v̇⃗̇=A̅v⃗ + B̅i⃗ + noise and i⃗=K_pv⃗ where the feedback gain K_p=1/[1-√(1+^2/Z^2)] is given in Equation <ref>, which turns out to be a proportional controller that minimizes Equation <ref>. We also added a noise source to the right-hand side of the equation in order to account for the measurement errors due to noise in the system. § SIMULATION For the simulations we will convert the continuous-time system from Equation <ref> to discrete time with time step dt, which corresponds to the sampling time if the system is implemented digitally. 
By replacing the derivatives of the voltages by finite-difference equations dv⃗/dt→v⃗_t+1-v⃗_t/dt where we label the time steps by t. With these substitutions, Equation <ref> becomes v⃗_t+1 =A v⃗_t + B i⃗_t +w⃗_t with A=([ 1- dt - dt; dt 1- dt ]) and B= dt 1. Here w⃗_t represents noise on the voltage measurement system. We assume that it is uncorrelated and has magnitude σ. It is thus characterized by its expectation value E{w⃗_t^⊤w⃗_s}=σ^2δ_ts1. The feedback algorithm looks the same as for the continuous-time system i⃗_t =1/[1-√(1+^2/Z^2)] v⃗_t . Iterating the system of Equations <ref> and <ref> shows that the system is indeed stabilized by the feedback algorithm and the rms magnitude of the voltages compared to that of the currents is indeed given by the ratio /Z in Equation <ref>. § SYSTEM IDENTIFICATION Now we turn to the task of extracting dt and dt from continuously recording the voltages and currents. In order to isolate the sought parameters we rewrite Equation <ref> in the form v⃗_t+1=(1+F)v⃗_t + Bi⃗_t with F=([ - dt - dt; dt - dt ]) . For the time being we ignore the noise w⃗_t and rewrite this equation as v⃗_t+1-v⃗_t-B i⃗_t = F v⃗_t . Moreover, we rewrite Fv⃗_t as Fv⃗_t = - dt ([ v_r; v_i ])_t + dt ([ -v_i; v_r ])_t = ([ -v_r -v_i; -v_i v_r ])_t ([ dt; dt ]) . We now introduce the abbreviations G_t = ([ -v_r -v_i; -v_i v_r ])_t and y⃗_t+1= v⃗_t+1-v⃗_t-B i⃗_t and stack Equation <ref> for consecutive times on top of each other. In this way, we obtain a vastly overdetermined system of equations to determine dt and dt ([ y⃗_2; y⃗_3; ⋮; y⃗_T+1 ]) = U_T ([ dt; dt ]) with U_T=([ G_1; G_2; ⋮; G_T ]) that we solve in the least-squares sense with the Moore-Penrose pseudo-inverse q⃗_T=([ dt; dt ])_T= ( U_T^⊤ U_T)^-1 U_T^⊤([ y⃗_2; y⃗_3; ⋮; y⃗_T+1 ]) . Here we introduce the abbreviation q⃗_T to denote the estimated parameters at time step T. It turns out that we can avoid lengthy evaluations by calculating Equation <ref> recursively. With the definition P_T^-1=U^⊤_TU_T, its initial value P_0=p_01, and the definition of U_T from Equation <ref> we express P_T+1 through P_T in the following way P_T+1^-1 = U^⊤_T+1U_T+1 = p_01+G_1^⊤G_1+G_2^⊤G_2+…+G_T^⊤G_T+G_T+1^⊤G_T+1 = P_T^-1+G_T+1^⊤G_T+1 . Noting that for all time steps t G_t^⊤G_t=(v_r^2+v_i^2)_t1 = v⃗_t^2 1 is proportional to the unit matrix 1. This renders the fit into two orthogonal and independent parts; one for each of the fit parameters. To proceed, we introduce the scalar quantity p_T with P_T=p_t1 and find that it obeys p_T+1^-1=p_T^-1+v⃗_t^2 . Taking the reciprocal leads to p_T+1 = p_T/1+p_Tv⃗_T^2 = (1 - p_T v⃗_T^2/1+p_Tv⃗_T^2)p_T , where the second equality proves convenient in the following. Note that we need to initialize this recursion with an initial non-zero value and set p_0=1 in the simulations. Despite being numerically unity, we carry p_0 through all equations, because it carries the inverse units of v⃗_t^2. We now turn to finding q⃗_T+1 by writing Equation <ref> for T+1 q⃗_T+1 = P_T+1(G_1^⊤y⃗_2+G_2^⊤y⃗_3+… + G_T^⊤y⃗_T+1 +G_T+1^⊤y⃗_T+2) = [1-p_T v⃗_T^2/1+p_Tv⃗_T^2] p_t(∑_t=1^T G_t^⊤y⃗_t+1 +G_T+1^⊤y⃗_T+2) = [1-p_T v⃗_T^2/1+p_Tv⃗_T^2] ( q⃗_T+p_tG_T+1^⊤y⃗_T+2) . Writing G_T+1^⊤y⃗_T+2 in components we finally arrive at q⃗_T+1=[1-p_T v⃗_T^2/1+p_Tv⃗_T^2] [ q⃗_T+p_t([ -v_ry(1)-v_iy(2); -v_iy(1)+v_ry(2) ])] where v_r and v_i are from time step T+1 whereas y(1) and y(2) come from time step T+2. 
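A compact numerical sketch of this simulation-plus-identification loop is given below. The bandwidth and detuning values, the coupling g, the fixed proportional gain, and the noise level are illustrative stand-ins (in particular the constant gain replaces the optimal-control gain derived above); the update lines implement the two recursions just derived, with the regressor matrix G built from the current voltage deviation.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: work in units of the sampling time (dt = 1), with
# "true" values w12*dt = 0.02 and dw*dt = -0.01 and probe noise sigma = 1e-2.
w12, dw, g, sigma = 0.02, -0.01, 1.0, 1e-2
A = np.array([[1.0 - w12, -dw], [dw, 1.0 - w12]])   # discrete-time system matrix
B = g * np.eye(2)
Kp = -0.5   # simple stabilizing proportional gain, standing in for the optimal-control gain

v = np.zeros(2)
p, q = 1.0, np.zeros(2)       # p_0 = 1; initial parameter estimate q_0 = 0

for T in range(1_000_000):
    i = Kp * v                                        # feedback current
    v_next = A @ v + B @ i + sigma * rng.standard_normal(2)
    y = v_next - v - B @ i                            # y = v_{t+1} - v_t - B i_t
    G = np.array([[-v[0], -v[1]], [-v[1], v[0]]])     # regressor built from v_t
    v2 = v @ v
    gain = 1.0 - p * v2 / (1.0 + p * v2)              # equals 1/(1 + p*v2)
    q = gain * (q + p * (G.T @ y))
    p = gain * p
    v = v_next

print("estimated (w12*dt, dw*dt):", q)                # should approach (0.02, -0.01)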
Equation <ref> and <ref> constitute the algorithm to continuously update estimates for the two components of q⃗, the bandwidth q(1)= dt and the detuning q(2)= dt, as new voltage measurements v⃗_T+1 become available. Figure <ref> illustrates the performance of the algorithm to determine the “true” values q⃗_0=(0.02,-0.01) over 10^6 iterations. The numerical values of q⃗_0 are somewhat large (cavity bandwidth of 3 kHz in a 352 MHz cavity, sampled at 1 Msamples/s) in order to visualize the performance of the algorithm. The two plots show the convergence for a noise level of σ=10^-3 on the left and σ=10^-2 on the right. We see that the convergence is much slower, though also smoother, for the lower noise level. This is no surprise, because the noise causes excursions in the voltages that the feedback system tries to compensate. More noise causes larger excursions for both the voltages and the currents and from correlating these values, the system parameters are extracted with the help of Equation <ref> and its recursive versions Equation <ref> and <ref>. Figure <ref> motivates a number of questions about the convergence of the algorithm, such as the relevant parameters, apart from the noise level σ, that determine the rate of convergence. Moreover, the asymptotically reachable precision is an important quantity to determine. § CONVERGENCE In <cit.> we analyzed a recursive least-squares algorithm, its rate of convergence and its asymptotic behavior. Those methods are directly applicable here as well. We immediately note that (U_T^⊤U_T)^-1σ^2=P_Tσ^2 is the empirical covariance matrix that contains the square of error bars <cit.> of the fit parameters q⃗_T on its diagonal. Moreover, as a consequence of Equation <ref>, is P_T always diagonal, which implies that fitting for dt and dt is uncorrelated, which aids the robustness of the fit. The evolution of p_T can be estimated from Equation <ref>, provided we can estimate the time behavior of v⃗_T^2, which follows from inserting Equation <ref> into and <ref> and leads to v⃗_T = (A+κ B)^T v⃗_0 + ∑_s=0^T-1 (A+κ B)^sw⃗_T-s-1 . If the largest eigenvalue of A+κ B is smaller than unity, the initial value v⃗_0 “dies out” after a while and we can neglect this term. We continue with a bold move to replace v⃗_T^2 by its expectation value E{v⃗_T^2} and exploit the identity v⃗_T^2=v⃗_T^⊤v⃗_T = v⃗_T v⃗_T^⊤ to find E{v⃗_T^2} = E{∑_s=0^T-1∑_r=0^T-1 (A+κ B)^sw⃗_T-s-1w⃗_T-r-1^⊤((A+κ B)^⊤)^r} = σ^2 ∑_s=0^T-1( (A+κ B)(A+κ B)^⊤)^s + o(1) . where we used the statistics of the noise defined immediately after Equation <ref>. Here o(1) denotes a quantity that vanishes in the limit of large T. Noting that the product of matrices in the sum is symmetric, we can diagonalize it and find (A+κ B)(A+κ B)^⊤ = O Λ O^⊤ where O is an orthogonal matrix and Λ contains the eigenvalues on its diagonal and zeros elsewhere. Inserting this expression into Equation <ref>, we obtain E{v⃗_T^2} = σ^2 O ∑_s=0^T-1Λ^s O^⊤ =σ^2 ∑_s=0^T-1Λ^s ≈σ^2(1-Λ)^-1 = σ^2μ , where we use that the trace is invariant under cyclic permutations of matrices and that the sum over powers of a matrix approximates a geometric series of which we used the asymptotic value. Moreover, we introduce the abbreviation μ=(1-Λ)^-1 and point out that the right-hand side of Equation <ref> does no longer depend on T and is therefore constant. 
Replacing v⃗_T^2 by is expectation value E{v⃗_T^2}=σ^2μ in Equation <ref> leads to p_T+1=p_T -σ^2μ p_T^2/1+p_Tσ^2μ≈ p_T -σ^2μ p_T^2 where we neglected the denominator, because at large times T the error bars, which are proportional to p_t become smaller and smaller, such that the second term vanishes in this limit. In order to solve this equation we write it in a smooth approximation as p_T+1-p_T/dT =-σ^2μ p_T^2 or dp_T/dT = -σ^2μ p_T^2 where dT=T+1-T=1. This equation has the solution p_T = p_0/1+σ^2μ p_0T . We find that p_T shrinks proportional to T_s/T, where the time scale T_s is given by T_s=1/μσ^2p_0 and only depends on the noise level σ^2 and on μ, which is defined in Equation <ref> and encapsulates the essential dynamics of the system. Figure <ref> shows p_T from a simulation with parameters corresponding to those used in the right-hand image in Figure <ref>. Here the black solid line shows p_T from the simulation and the red asterisks the values produced by Equation <ref>. We find good agreement over the full range of iterations. Let us now introduce the estimation error a⃗_T=q⃗_T-q⃗_0 where we denote the “true” parameters by q⃗_0 = ( dt, dt) and use Equation <ref> to calculate its time evolution. First, we note that y⃗_T+2 on the right-hand side can be written as y⃗_T+2=(A-1) v⃗_T+1 = F v⃗_T+1 = G_T+1q⃗_0 . Inserting into Equation <ref> leads to q⃗_T+1 = ( 1-p_Tv⃗_T^2/1+p_Tv⃗_T^2) (q⃗_T + p_T G^⊤_T+1G_T+1q⃗_0) = ( 1-p_Tv⃗_T^2/1+p_Tv⃗_T^2) (q⃗_T +p_T v_T+1^2 q⃗_0) ≈ q⃗_T-p_T v⃗_T^2 q⃗_T + p_T v⃗_T+1^2 q⃗_0 . Finally, we replace v⃗_T^2 and v⃗_T+1^2 by their asymptotic values σ^2μ, subtract q⃗_0 on both sides of the equation, and arrive at a⃗_T+1 = a⃗_T - σ^2μ p_Ta⃗_T where we replaced q⃗_T -q⃗_0 by the estimation error a_T. As before, we transform this equation into its smooth approximation and obtain da⃗_T/dT=-σ^2μ p_T a⃗_T = -σ^2μp_0/1+σ^2μ p_0Ta⃗_T . where we replaced p_T by its approximation from Equation <ref>. Observing that we can solve this equation for each component of a⃗_T we arrive at a⃗_T = a⃗_0/1+p_0σ^2μ T . This equation describes an estimate of the temporal evolution of the difference between the parameter estimate q⃗_T and the “true” value q⃗_0. Figure <ref> shows the evolution of a_T from the same simulation used to generate Figure <ref> as solid black line. The values from the model in Equation <ref> are shown as red asterisks. We observe that the two curves agree reasonably well up to about 10^4 iterations, where the simulation starts to deviate significantly from the model. On the same figure, we show the error bars of the fit parameters, approximately given by √(p_tσ^2), as the blue dashed line. Once the estimation errors have the same order of magnitude as the error bars, replacing the v⃗_T^2 by its expectation value appears to be no longer a valid approximation. Instead of following Equation <ref>, the magnitude of the estimation error a_T follows the magnitude of the error bars, which scale as 1/√(T). This behavior is also consistent with the very general analysis of the asymptotic behavior of least-squares algorithms from <cit.>. Remarkably, the error bars and with it the estimation errors will asymptotically approach zero, such that the “true” system parameters are determined exactly. This is a consequence that new information in the form of new measurements is added to improve the estimate q⃗_T until it reaches the “true” parameters. 
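Continuing the numerical sketch above, μ and the convergence time scale T_s can be evaluated directly from the closed-loop matrix; the model matrices, gain, and noise level below are the same illustrative values used there.

import numpy as np

# Same illustrative closed-loop model as in the identification sketch above
w12, dw, g, Kp, sigma, p0 = 0.02, -0.01, 1.0, -0.5, 1e-2, 1.0
A = np.array([[1.0 - w12, -dw], [dw, 1.0 - w12]])
M = A + Kp * g * np.eye(2)                  # A + kappa*B, the closed-loop one-step matrix

lam = np.linalg.eigvalsh(M @ M.T)           # Lambda: eigenvalues of (A+kappa*B)(A+kappa*B)^T
mu = float(np.sum(1.0 / (1.0 - lam)))       # mu = tr[(1 - Lambda)^{-1}]
T_s = 1.0 / (mu * sigma**2 * p0)            # convergence time scale T_s = 1/(mu sigma^2 p_0)
print(mu, T_s)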
As long as a_T is larger than the error bars, the approximation from Equation <ref> works reasonably well and we can estimate the number of iterations N_t where the transition from the initial convergence to the asymptotic regime occurs. With the error bar given by √(p_tσ^2)=√(p_0σ^2/(1+p_0σ^2μ)), we find N_t= a_0^2-p_0σ^2/μ p_0^2σ^4 where we see that N_t depends on the initial excess of the estimation error above the noise level a_0^2- p_0σ^2 and is inversely proportional to the noise level on the measurement system. So far, we assumed that the system parameters dt and dt are constant, which ensures that the “true” system parameters are asymptotically approached. In most systems, however, this assumption is not always valid, especially when the cavity is affected by microphonics. We therefore adapt the algorithm in the next sections to follow time-varying system parameters. § TIME-VARYING PARAMETERS In order to emphasize newly added information we follow <cit.> and introduce a “forgetting factor” α=1-1/N_f where N_f is the time horizon over which old information is downgraded in the last equality of Equation <ref>, which now reads P_T+1^-1= α P_T^-1+G_T+1^⊤G_T+1 . We see that we only have to replace P_T by P_T/α in the derivation of Equations <ref> and <ref>. In particular, we can also replace p_T by p_T/α and find p_T+1 = 1/α(1 - p_T v⃗_T^2/α+p_Tv⃗_T^2)p_T and q⃗_T+1=[1-p_T v⃗_T^2/α+p_Tv⃗_T^2] [ q⃗_T+p_T/α([ -v_ry(1)-v_iy(2); -v_iy(1)+v_ry(2) ])] that are capable of following time-dependent system parameters. In Figure <ref> we show the system parameters q(1) and q(2) over 4× 10^6 iterations. All other parameters correspond to those used in the previous examples. Between iterations 10^6 and 2× 10^6 the values of both parameters are doubled. In the plot on the left-hand side we use a forgetting time scale N_f=10^5 and see that the algorithm nicely tracks the changes, but the two traces are rather noisy. For the plot on the right-hand side, N_f is increased to 5× 10^5, which results in a much slower response to follow the changed parameters, albeit with much less noisy traces. In a second simulation, we keep the q_0(1)= dt fixed but let q_0(2)= dt oscillate around zero with amplitude 0.01, a situation that might be caused by microphonics. The top row in Figure <ref> shows the tracked system parameters over 4× 10^6 iterations. When sampling with 1 Msamples/s the four oscillations shown correspond to an oscillation of dt with 1 Hz. On the top left we use N_f=10^4 and find that the algorithm nicely tracks the oscillations, but the reconstructed parameters are rather noisy. Increasing N_f to 10^5 leads to a rather faithful, and much less noisy, reconstruction of the parameters. The bottom row in Figure <ref> shows the corresponding plots for a ten times higher oscillation period, now corresponding to 10 Hz. We observe that on the left-hand side with N_f=10^4 the oscillations are clearly recovered, even the amplitude is approximately correct. The right-hand plot, with N_f=10^5, shows the amplitude much reduced. This is a consequence of using N_f with a magnitude comparable to the oscillation period. Essentially, the oscillations are averaged out. We conclude that we can recover oscillations that have a period a few times longer than the “forgetting” horizon N_f. That with smaller values of N_f the recovered parameters become more noisy warrants an analysis of the tradeoff between tracking fast parameter changes and accurate determination of the parameters. 
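A drop-in modification of the parameter-update lines in the earlier identification sketch, implementing the two recursions above with α=1-1/N_f, could look as follows; the forgetting horizon N_f is an illustrative choice.

N_f = 1e5
alpha = 1.0 - 1.0 / N_f

def rls_update_forgetting(p, q, G, y, v2, alpha=alpha):
    """One 'forgetting' update: the recursions above, i.e. P_T replaced by P_T/alpha."""
    gain = 1.0 - p * v2 / (alpha + p * v2)
    q_new = gain * (q + (p / alpha) * (G.T @ y))
    p_new = gain * p / alpha
    return p_new, q_new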
§ CONVERGENCE REVISITED We adapt the analysis from Section <ref> to include α=1-1/N_F. Replacing v⃗_T^2 in Equation <ref> by its expectation value from Equation <ref>, we find p_T+1=1/α( p_T-μσ^2p_T^2/α) . Here we ignore the second term in the denominator, because it is proportional to p_T and quickly becomes negligible. With p_T+1-p_T → dp_T/dT we rewrite this equation in the smooth approximation as dp_T/dT = (1-α/α) p_T - μσ^2/α^2 p_T^2 which has the solution p_T=p_0/μσ^2p_0/βα^2-(μσ^2p_0/βα^2-1)e^-β T with β=1-α/α . The left-hand plot in Figure <ref> shows good agreement of p_T from a simulation (solid black line) and from Equation <ref> (red asterisks). Importantly, we find that p_T reaches a limiting value p_∞ for a large number of iterations. It is given in the limit T→∞ of Equation <ref> p_∞ =α^2β/μσ^2≈1/N_fμσ^2 and favors large values of N_f. Clearly, the asymptotic error bars σ(q_i) for the fit parameters q_i are given by σ(q_i)≈√(p_∞σ^2)≈1/√(N_fμ) no longer approach zero with increasing number of iterations, as they did in Section <ref>, but remain finite. Remarkably, this asymptotic resolution is only determined by N_f and by the parameter μ from Equation <ref>. In a similar fashion, replacing v⃗_T^2 in Equation <ref> by its expectation value and performing the same approximations that were used to derive Equation <ref> in Section <ref> we find a⃗_T+1 = a⃗_T - σ^2μ/α p_Ta⃗_T . Inserting p_T from Equation <ref> and replacing the difference a⃗_T+1-a⃗_T by the differential da⃗_T/dT, we obtain da⃗_T/dT = -αβ/1-ce^-β Ta⃗_T with c=1-βα^2/μσ^2p_0 . Considering one of the two components of a⃗_T at a time, let's call it a, we separate variables and arrive at da/a = -αβ dT/1-ce^-β T . With the substitution z=1-ce^-β T we integrate both sides and find ln(a/a_0) = α∫dz/z(z-1)=αln.(z-1/z)|_0^T . Finally, after substituting back T for z, we obtain the temporal evolution of the both components of a⃗_T is the same, which permits us to write a⃗_T=a⃗_0ln((1-c)e^-β T/1-ce^-β T)^α . The right-hand plot in Figure <ref> shows the evolution of the estimation error for 10^6 iterations as a solid black line. The model estimate from Equation <ref>, shown by the red asterisks, nicely follows the black line until it crosses the blue line that indicates the error bars of the estimate. This is the same type of behavior we already found in Section <ref>. § PULSED OPERATION Finally, we tested the performance of the system in pulsed mode, even though it was conceived for continuously operating systems. The right-hand plots in Figure <ref> show the voltages and currents where the set point of the v_r was increased after 2.5× 10^5 iterations and reduced to zero after 5× 10^5 iterations while operating with a proportional-integral controller, which caused some overshoot and ringing at the rising and falling edges of the pulse since no effort was made to optimize the feedback parameters. In this simulation we reduced the noise level to σ=10^-3 such that the convergence to determine the system parameters, shown on the right-hand plot in Figure <ref>, is very slow until, at the start of the pulse, the correct values are immediately found. The algorithm efficiently exploits the large excursions of the currents and the voltages during the start of the pulse, determines the system parameters and rapidly converges to the “real” values. This may prove useful to keep an eye on pulse-to-pulse changes of q(1)= dt=ω̂dt/2Q_L and thus on the deterioration of Q_L before a quench. 
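The limiting values just derived are straightforward to evaluate; the short sketch below uses an illustrative μ of the kind computed earlier together with a forgetting horizon N_f=10^5 (both assumed numbers, not results from the paper).

# Asymptotic values for the time-varying estimator (illustrative mu, sigma, N_f)
mu, sigma, N_f = 2.6, 1e-2, 1e5
alpha = 1.0 - 1.0 / N_f
beta = (1.0 - alpha) / alpha

p_inf = alpha**2 * beta / (mu * sigma**2)     # ~ 1/(N_f * mu * sigma^2)
sigma_q = (p_inf * sigma**2) ** 0.5           # asymptotic error bar ~ 1/sqrt(N_f * mu)
print(p_inf, sigma_q)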
§ CONCLUSION AND OUTLOOK We worked out an algorithm to determine the cavity bandwidth and the detuning from LLRF feedback data in real time. The calculations are very efficient and given by Equations <ref> and  <ref> for static parameters and by Equations <ref> and  <ref> for time-varying parameters. The latter case showed a distinct tradeoff between the ability to follow fast changes and the achievable resolution, whose limit is given by Equation <ref>. The evolution of the parameter p_T, which is proportional to the data-driven covariance matrix and thus the error bars, can be analytically described for both cases. Its dynamics is entirely determined by the expectation value E{v⃗_T^2}=μσ^2 and we determined μ for different regulators (optimal, PID) in the text and in the appendices. The left-hand plot in Figure <ref> shows the evolution of | p_T| for three values of μ where The forgetting horizon N_f=∞ which causes |p_T| to decrease without limit. On the other hand, with N_f=10^5 iterations | p_T| saturates at a finite value, and thereby finite error bars, for a large number of iterations. We found that the evolution of the estimation error a_T can also be analytically estimated as long as it is larger than the error bars. In this regime it follows a 1/T dependence. Once it becomes comparable to the error bars it still decreases, but with a 1/√(T) dependence, which is consistent with the general behavior expected <cit.> for least-squares algorithms. Despite being developed for continuously operated system, the algorithm also works for pulsed systems and quickly determines the system parameters during times of change, such as the rise of the pulse. This stimulates the idea to deliberately introduce small perturbations—so-called dithering—to improve the convergence of the system identification <cit.>, albeit at the cost of slightly deteriorating the performance of the feedback system. The algorithm from Section <ref> and <ref> robustly filters out any useful information to improve the estimate of the fit parameters. One might even say that any noise is good noise for system identification. §.§ Acknowledgments Discussions with Tor Lofnes, Uppsala University are gratefully acknowledged. plain M SNS S. Henderson et al., The Spallation Neutron Source accelerator system design, Nucl. Instrum. Methods A763 (2014) 610. ESS A. Jansson et al., The status of the ESS project, Proceedings of IPAC 2022, Bangkok, 792. CEBAF B. Norum, J. McCarthy, R. York, CEBAF - a high-energy, high duty factor electron accelerator for nuclear physics, Nucl. Instrum. Methods B10/11 (1985) 337. XFEL W. Decking et al., A MHz-repetition-rate hard X-ray free-electron laser driven by a superconducting linear accelerator, Nature Photonics 14 (2020) 391. CBETA A. Bartnik, et al., CBETA: First Multipass Superconducting Linear Accelerator with Energy Recovery, Phys. Rev. Lett. 125, 044803, July 2020. FRIB P. Ostroumov et al., Beam commissioning in the first superconducting segment of the Facility for Rare Isotope Beams, Phys. Rev. Accel. Beams 22, 080101. SPIRAL2 H. Goutte, A. Navin, Microscopes for the Physics at the Femtoscale: GANIL-SPIRAL2, Nucl.Phys.News 31 (2021) 1, 5. HELIAC W. Barth et al., Advanced basic layout of the HElmholtz LInear ACcelerator for cw heavy ion beams at GSI, Proceedings of IPAC 2023, Venice, https://doi.org/10.18429/JACoW-IPAC2023-TUPA186 JLABFEL C. Behre et al., First lasing of the IR upgrade FEL at Jefferson lab, Nucl. Instrum. Methods A528 (2004) LFD B. 
Aune et al., Superconducting TESLA cavities, Phys. Rev. ST Accel. Beams 3, 092001. ANA1 G. Davis et al., Microphonics Testing of the CEBAF Upgrade 7-Cell Cavity, Proceedings of PAC 2001, 1152. ANA2 T. Powers, Theory an practice of Cavity RF test systems, Proceedings of the 12th International Workshop on RF Superconductivity, Cornell University, Ithaca, New York, USA (2005) 30. SCHILCHER T. Schilcher, Vector sum control of pulsed accelerating fields in lorentz force detuned superconducting cavities, Dissertation, Universität Hamburg, 1998. PLAWSKI T. Plawski et al., Digital Cavity Resonance Monitor-Alternative way to Measure Cavity Microphonics, Proceedings of the 12th International Workshop on RF Superconductivity, Cornell University, Ithaca, New York, USA (2005) 616. TUNER M. Liepe, Superconducting Multicell Cavities for Linear Colliders, Dissertation, Universität Hamburg, 2001. RYBA R. Rybaniec et al., Real-time estimation of superconducting cavities parameters, Proceedings of EPAC 2014 in Dresden, p. 2456. CZARSKI T. Czarski, Superconducting cavity control based on system model identification, Meas. Sci. Technol. 18 (2007) 2328. LAIWEI T. Lai, C. Wei, Least squares estimates in stochastic regression models with applications to identification and control of dynamic systems, The Annals of Statistics 10 (1982) 143. VZAPB V. Ziemann, Hands-On Accelerator Physics Using MATLAB, CRC Press, Boca Raton, 2019; especially Section 6.6.3. FYF V. Ziemann, Physics and Finance, Springer, Heidelberg, 2021. KIRK D. Kirk, Optimal control theory, Dover, New York, 2004. ZZ1 I. Ziemann, V. Ziemann, Noninvasively improving the orbit-response matrix while continuously correcting the orbit, Physical Review Accelerators and Beams 24 (2021) 072804. AW K. Åström, B. Wittenmark, Adaptive Control, 2nd edition, Dover Publications, Mineola, 2008; especially Section 2.2. OP V. Ziemann, Operational improvements for an algorithm to noninvasively measure the orbit response matrix in storage rings, arXiv:2303.11216, March 2023. § CONVERGENCE WITH A PD CONTROLLER Equation <ref> shows that the convergence of the algorithm is governed by the expectation value E{v_T^2}=μσ^2. We therefore calculate μ for a proprtional-differential (PD) controller in this appendix for which the currents i_t are given by i⃗_t=K_pv⃗_t -K_d(v⃗_t-v⃗_t-1) instead of Equation <ref>. Here K_p is the proportional gain defined near the end of Section <ref>. Inserting this expression into Equation <ref> gives us v⃗_t+1 = (A+B K_p-B K_d)v⃗_t + B K_d x⃗_t-1 . Introducing the abbreviations C=A+B K_p-B K_d and D= B K_d this defines a recursion relation that resembles the one for the Fibonacci sequence. Since linear difference equations are solved by power laws, we attempt an Ansatz v_t=Z^tv⃗_1 and find that the matrix Z has to satisfy Z^2-C Z-D=0 which us solved by Z_1,2 = 1/2( C ±√(C^2+4D)) , where the square root of the matrix C^2+4D can be evaluated in Matlab with the function sqrtm(). Taking the square root can return a matrix with imaginary entries, which forces us to use the hermitian conjugate of a matrix M, denoted by M^∗, instead of the transpose. For matrices with real entries the hermitian conjugate reverts to the transpose. With Z_1,2 known the voltages v_t are given by v⃗_t=(c_1 Z_1^t + c_2 Z_2^t) v⃗_1 with constants c_1 and c_2 that are determined from the initial conditions v⃗_0=0 and v⃗_1=w⃗_1 and lead to 0=(c_1 Z_1^0 + c_2 Z_2^0) w⃗_1 and w⃗_1= (c_1 Z_1^1 + c_2 Z_2^1) w⃗_1 . 
Solving for the constants gives us c_1=-c_2=(Z_1-Z_2)^-1 such that v⃗_t=(Z_1-Z_2)^-1(Z_1^t - Z_2^t) w⃗_1 which allows us to calculate the influence of the noise w⃗_1 at t=1 on all subsequent voltages v⃗_t. Adding the contributions from all noise sources from t=1 until t=T we find that v⃗_T is given by v⃗_T = ∑_s=0^T (Z_1-Z_2)^-1(Z_1^s - Z_2^s) w⃗_T-s-1 . The expectation value E{v_T^2}= E{v⃗_T v⃗_T^⊤}=μσ^2 is then obtained from E{v⃗_T v⃗_T^⊤} = E{∑_s=0^T∑_r=0^T(Z_1-Z_2)^-1(Z_1^s - Z_2^s) w⃗_T-s-1w⃗_T-r-1^⊤(Z_1^∗ r-Z_2^∗ r)(Z_1^∗-Z_2^∗)^-1} = σ^2∑_s=0^T(Z_1-Z_2)^-1(Z_1^s - Z_2^s) (Z_1^∗ s-Z_2^∗ s)(Z_1^∗-Z_2^∗)^-1 = σ^2∑_s=0^T(Z_1-Z_2)^-1[ (Z_1Z_1^∗)^s - (Z_1Z_2^∗)^s - (Z_2Z_1^∗)^s + (Z_2Z_2^∗)^s ](Z_1^∗-Z_2^∗)^-1 ≈ σ^2(Z_1-Z_2)^-1[(1-Z_1Z_1^∗)^-1 -(1-Z_1Z_2^∗)^-1 -(1-Z_2Z_1^∗)^-1+(1-Z_2Z_2^∗)^-1] (Z_1^∗-Z_2^∗)^-1 . In the second equality we use E{w⃗_sw⃗_r^⊤}=σ^2δ_rs1 and in the last approximate equality we extend the sum to infinity and use the summation formula for an infinite geometric sum, which only converges for | Z_1,2| <1; this defines the limit of stability for the system. The trace of the product of matrices in the final approximate equality thus gives us μ for a regulator with proportional and differential control. The left-hand plot in Figure <ref> shows μ as a function of K_d from zero up to the limit of stability. We observe that μ is rather constant up to K_d=40, beyond which it grows significantly, because the system is close to its limit of stability and to some extent already excites the signal, which is beneficial for system identification. The right-hand plot shows the evolution of | p_T| for K_d=0 (dotted) and K_d=44 (solid black) where μ=16.3. We see that with the larger value of μ the convergence sets in earlier. The red asterisks are calculated from Equation <ref> with μ=16.3 and show good agreement with the values from the simulation. § CONVERGENCE WITH A PI CONTROLLER In this appendix we calculate μ for a proportional-integral (PI) controller, whose control law is defined by i⃗_t=K_pv⃗_t -K_i∑_i=0^tv⃗_i . As for the PD controller, inserting Equation <ref> into Equation <ref> leads to v⃗_t+1 = (A+B K_p)v⃗_t - B K_i∑_i=0^tv⃗_i . In order to remove the dependence on the sum, we calculate the difference of the voltages at two consecutive time steps t+1 and t+2. After some algebra we obtain v⃗_t+2-v⃗_t+1= ( A +BK_p-BK_i) v⃗_t+1 - ( A +BK_p) v⃗_t . Adding v⃗_t+1 on both sides then leads to v⃗_t+2= (1+ A +BK_p-BK_i) v⃗_t+1 - ( A +BK_p) v⃗_t . Introducing the abbreviations Ĉ= 1+A +BK_p-BK_i and D̂=A+BK_p and using the Ansatz v_t=Z^tv⃗_1 the characteristic equation for this system becomes Z^2-ĈZ+D̂=0 which has the solutions Z_1,2 = 1/2( Ĉ±√(Ĉ^2-4D̂)) such that we can write the general solution as y⃗_t = ( c_1 Z_1^t +c_2 Z_2^t)v⃗_1 . As in the previous appendix, we obtain the constants c_1 and c_2 from matching the initial values and, as before, find c_1=-c_2=(Z_1-Z_2)^-1. As a consequence of calculating the difference v⃗_t+2-v⃗_t+1, the variable y⃗_t is the sum of all v⃗_i and we have to take the difference between consecutive values in order to obtain v⃗_t+1=y⃗_t+1-y⃗_t = (Z_1-Z_2)^-1[(Z_1-1)Z_1^t-(Z_2-1)Z_2^t]w⃗_1 . Calculating the expectation value E{v⃗_T^2}=μσ^2 is somewhat lengthy but follows the same spirit as in the previous appendix and leads to E{v⃗_T v⃗_T^⊤} ≈ σ^2[ R_1(1-Z_1Z_1^∗)^-1R_1^∗ -R_1(1-Z_1Z_2^∗)^-1R_2^∗ -R_2(1-Z_2Z_1^∗)^-1R_1^∗ +R_2(1-Z_2Z_2^∗)^-1R_2^∗] with R_1=(Z_1-Z_2)^-1(Z_1-1) and R_2=(Z_1-Z_2)^-1(Z_2-1).
Finally, μ is given by the trace of the matrix in the square brackets. The left-hand plot in Figure <ref> shows μ as a function of K_i from zero up to the limit of stability. We observe that increasing K_i always increases μ and thus helps to identify the system parameters. The right-hand plot shows the evolution of | p_T| for K_i=0 (dotted) and K_d=160 (solid black) where μ=51.1. The faster convergence for the larger value of K_i and μ is clearly visisble. Moreover, the model values, calculated from Equation <ref> with μ=51.1 and shown as red asterisks, agree well with those from the simulation. § CONVERGENCE WITH A PID CONTROLLER For a PID regulator, the control law for the currents reads i⃗_t=K_pv⃗_t -K_d(v⃗_t-v⃗_t-1) -K_i∑_i=0^tv⃗_i which, upon inserting into Equation <ref>, leads to v⃗_t+1 = (A+B K_p-B K_d)v⃗_t +BK_dv⃗_t-1- B K_i∑_i=0^tv⃗_i . As for the PI regulator, we remove the dependence on the sum by taking the difference v⃗_t+2-v⃗_t+1 = ( A+B K_p-B K_d-BK_i)v⃗_t+1 - (A+B K_p-2BK_d)v⃗_t - BK_dv⃗_t-1 and, after adding v⃗_t+1 to both sides of the equation and using the Ansatz v_t=Z^tv⃗_1, we arrive at the characteristic equation Z^3-(1+A+B K_p-B K_d-BK_i)Z^2+ (A+B K_p-2BK_d) Z +BK_d = 0 . In order to find the roots of this matrix-valued equation, we exploit the fact that all matrices before the powers of Z can be diagonalized simultaneously with the matrix V 1+A+B K_p-B K_d-BK_i=VΛ V^∗ where Λ is diagonal with complex-conjugate eigenvalues and V^∗ is the hermitian conjugate of V. Using V we bring A+B K_p-2BK_d and BK_d to the same basis S=V^∗(A+B K_p-2BK_d) V and T=V^∗BK_dV are all diagonal with complex-conjugate values on the diagonals. This transforms Equation <ref> into two complex-valued equations, where one is the complex conjugate of the other z^3-Λ_11z^2+S_11z+T_11=0 whose roots z_1, z_2, and z_3 can be found numerically, such that we can reassemble the matrices Z_i with Z_i=V([ z_i 0; 0 z_i^∗ ]) V^∗ and use them to obtain y⃗_t y⃗_t = V [ C_1 Z_1^t + C_2 Z_2^t +C_3 Z_3^t] V^∗v⃗_1 from which we determine v⃗_t+1=y⃗_t+1-y⃗_t,as for the PI controller in the previous appendix. We find the constants C_1 from matching the boundary conditions y⃗_0=y⃗_1=0 and y⃗_2=v⃗_1, by solving the following system of equations ([ 0; 0; 1 ]) = ([ 1 1 1; z_1 z_2 z_3; z_1^2 z_2^2 z_3^2 ]) ([ c_1; c_2; c_3 ]) and assembling the C_i with C_i = V ([ c_i 0; 0 c_i^∗ ]) V^∗ . For v⃗_t we finally obtain v⃗_t = [C_1(Z_1-1)Z_1^t+C_2(Z_2-1)Z_2^t+C_3(Z_3-1)Z_3^t]v⃗_1 . Following the same steps as in the previous appendices, we arrive at the expectation value E{v_Tv_T^⊤}=μσ^2 given by E{v_Tv_T^⊤} = σ^2[∑_i=1^3 ∑_j=1^3 R_i(1-Z_iZ_j^∗)^-1R_j^∗] with the abbreviations R_i=C_i(Z_i-1) and μ given by the expression in the square brackets. We checked the resulting values against simulations and found good agreement similar to that found on the right-hand side in Figure <ref>. Moreover, the limiting cases with K_i=0 agree with the values from Appendix A and for K_d=0 with Appendix B.
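The appendix expressions are straightforward to evaluate numerically. The sketch below mirrors Appendix A for the PD controller: Z_1,2 from the principal matrix square root (SciPy's sqrtm) and μ from the trace expression. The 2×2 model matrices and the gain values are illustrative, and the result is only meaningful when all eigenvalues of Z_1,2 lie inside the unit circle, i.e. below the limit of stability.

import numpy as np
from scipy.linalg import inv, sqrtm

def mu_pd(A, B, Kp, Kd):
    """mu for the PD controller of Appendix A:
    Z_{1,2} = (C ± sqrt(C^2 + 4D))/2 with C = A + B*Kp - B*Kd and D = B*Kd, then
    mu = tr{(Z1-Z2)^-1 [ (1-Z1 Z1*)^-1 - (1-Z1 Z2*)^-1
                          - (1-Z2 Z1*)^-1 + (1-Z2 Z2*)^-1 ] (Z1*-Z2*)^-1}."""
    C = A + B * Kp - B * Kd
    D = B * Kd
    root = sqrtm(C @ C + 4.0 * D)          # principal matrix square root (may be complex)
    Z1, Z2 = 0.5 * (C + root), 0.5 * (C - root)
    I = np.eye(A.shape[0])
    H = lambda Z: Z.conj().T               # Hermitian conjugate
    bracket = (inv(I - Z1 @ H(Z1)) - inv(I - Z1 @ H(Z2))
               - inv(I - Z2 @ H(Z1)) + inv(I - Z2 @ H(Z2)))
    return float(np.real(np.trace(inv(Z1 - Z2) @ bracket @ inv(H(Z1) - H(Z2)))))

# Example with the 2x2 cavity model used earlier (illustrative numbers)
A = np.array([[0.98, 0.01], [-0.01, 0.98]])
B = np.eye(2)
print(mu_pd(A, B, Kp=-0.5, Kd=0.2))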
http://arxiv.org/abs/2307.05123v1
20230711090020
Entanglement Distribution in the Quantum Internet: Knowing when to Stop!
[ "Angela Sara Cacciapuoti", "Michele Viscardi", "Jessica Illiano", "Marcello Caleffi" ]
quant-ph
[ "quant-ph", "cs.NI" ]
Entanglement Distribution in the Quantum Internet: Knowing when to Stop! Angela Sara Cacciapuoti^*, Senior Member, IEEE, Michele Viscardi, Jessica Illiano, Marcello Caleffi, Senior Member, IEEE The authors are with the www.quantuminternet.itwww.QuantumInternet.it research group, FLY: Future Communications Laboratory, University of Naples Federico II, Naples, 80125 Italy. A.S. Cacciapuoti and M. Caleffi are also with the Laboratorio Nazionale di Comunicazioni Multimediali, National Inter-University Consortium for Telecommunications (CNIT), Naples, 80126, Italy. ^*Corresponding author. A preliminary version of this work is currently under review for IEEE QCE23 <cit.>. Michele Viscardi acknowledges PNRR MUR project CN00000013, Angela Sara Cacciapuoti acknowledges PNRR MUR NQSTI-PE00000023, Marcello Caleffi acknowledges PNRR MUR project RESTART-PE00000001. August 12, 2023 ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Entanglement distribution is a key functionality of the Quantum Internet. However, quantum entanglement is very fragile, easily degraded by decoherence, which strictly constraints the time horizon within the distribution has to be completed. This, coupled with the quantum noise irremediably impinging on the channels utilized for entanglement distribution, may imply the need to attempt the distribution process multiple times before the targeted network nodes successfully share the desired entangled state. And there is no guarantee that this is accomplished within the time horizon dictated by the coherence times. As a consequence, in noisy scenarios requiring multiple distribution attempts, it may be convenient to stop the distribution process early. In this paper, we take steps in the direction of knowing when to stop the entanglement distribution by developing a theoretical framework, able to capture the quantum noise effects. Specifically, we first prove that the entanglement distribution process can be modeled as a Markov decision process. Then, we prove that the optimal decision policy exhibits attractive features, which we exploit to reduce the computational complexity. The developed framework provides quantum network designers with flexible tools to optimally engineer the design parameters of the entanglement distribution process. Entanglement Distribution, Quantum Internet, Quantum Communications, Markov Decision Process § INTRODUCTION The Quantum Internet is foreseen to enable several applications with no counterpart in the classical world <cit.>, such as distributed quantum computing <cit.> and secure communications <cit.>. To this aim, the entanglement distribution process plays the key role. 
Indeed, the successful distribution of entangled states among remote network nodes represents a necessary condition for any entanglement-based network <cit.>. A few theoretical models and designs for entanglement distribution have been recently proposed in literature. In <cit.>, the authors model the distribution of entangled pairs as a discrete time Markov chain. Specifically, they assume infinite coherence time and infinite resources at the central node, with the aim of analyzing the expected capacity of the central node in terms of the number of qubits to be stored to meet the stability condition of the system. In <cit.>, the distribution of entangled pairs is modeled as a continuous time Markov chain. Such a model is based on a Poisson probability distribution for the successful distribution of entangled pairs, and it accounts for some non-idealities, such as decoherence and noisy measurements. In <cit.>, a Markov decision process is used to study the limits of bipartite entanglement distribution via entanglement swapping, by using a chain of quantum repeaters equipped with quantum memories. Finally, in <cit.> some practical figures of merit for entanglement distribution in quantum repeater networks are provided. In particular, the authors define the average connection time and the average size of the largest distributed entangled state for a fixed scenario. Despite these research efforts, the fundamental problem of knowing when to stop (the entanglement distribution) remains unsolved. And filling this research gap is mandatory for the efficient engineering of any entanglement distribution process. Specifically, it is well-known that quantum entanglement is a very fragile resource, easily degraded by decoherence <cit.>. Decoherence severely impacts the time horizon in which freshly-generated entangled states can be successfully distributed and exploited for communication needs. Yet, due to the noise irremediably affecting the quantum communication channels utilized for entanglement distribution, it may be necessary to attempt the distribution process multiple times before that all the selected network nodes successfully share the targeted entangled state. As a matter of fact, because of the complex and stochastic nature of the physical mechanisms underlying quantum noise, there is no guarantee that all the selected nodes can successfully share the entangled state within the time horizon dictated by the coherence times. As a consequence, in noisy scenarios requiring multiple distribution attempts, it may be convenient to stop the distribution process early, i.e., before entangling all the selected nodes. The rationale for this choice is twofold. On one hand, an early stopping can be required to account for additional delays induced by the network functionalities exploiting the entanglement resource. On the other hand, an early stopping can be convenient whenever “enough” nodes – accordingly to a certain figure of merit – already share entanglement, so that the entangled resource can be promptly exploited for the needed communication/computing purpose. In this paper, we take steps in the direction of knowing when to stop by developing a theoretical framework. This framework provides quantum network designers with flexible tools to optimally engineer the design parameters of the entanglement distribution. To the best of our knowledge, this is the first work addressing the optimal stopping rule for entanglement distribution. 
§.§ Our contributions The developed theoretical framework abstracts from the particular state to be distributed and provides a model that can be tweaked to account for the physical characteristics of the process itself. Specifically through the paper: - we provide a comprehensive characterization of the entanglement distribution problem, by showing that it can be modeled as a Markov decision process with minimal assumptions; - we provide the optimality conditions of the policy to be adopted, and we prove some key properties of the optimal policy that can be exploited for reducing the computational complexity; - we analyze the impact of different reward functions on the distribution process through two main figures of merit: the average cluster size and the average distribution time; - we gain insights on the selection of appropriate reward functions for entanglement distribution process engineering. In summary, we present an easy-to-use tool for modeling and fine-tuning entanglement distribution systems to meet specific performance requirements. It is important to emphasize that the model we offer in this study is highly adaptable and can be tailored to various scenarios and applications. The rest of the manuscript is organized as follows. In Sec. <ref>, we introduce the system model along with some preliminaries. In Sec. <ref>, we first formulate the entanglement distribution as a decision process, and then we derive both general (Sec. <ref>) and reward-dependent (Sec. <ref>) properties of the optimal policy, which we exploit for for reducing the computational complexity of the optimal policy search. In Sec. <ref> we validate the theoretical analysis through numerical simulations, and we discuss the impact of the reward functions on the performance of the entanglement distribution process. Finally, in Sec. <ref> we conclude the paper, and some proofs are gathered in the Appendix. § SYSTEM MODEL Generating and distributing entanglement can be a demanding task due to the delicate nature of quantum states and their susceptibility to environmental disturbances. In many practical scenarios, the generation of entanglement requires sophisticated and resource-intensive setups, often involving complex experimental apparatuses and precise control mechanisms. These technological limitations, coupled with the need for specialized environments that can facilitate quantum communication processes, make it pragmatic to assume a specialized super-node responsible for entanglement generation and distribution <cit.>. We emphasize that, when it comes to the distribution of multipartite entanglement, the assumption of a super-node for the entanglement generation is needed, not only due to the current maturity of the quantum technologies, but also due to the unavoidable requirement of some sort of local interaction among the qubits to be entangled, as discussed in <cit.>. Accordingly, we consider a scenario where a super-node is in charge of generating and distributing EPR pairs to a set of S quantum nodes – referred to as clients in the following – through quantum channels. It is worthwhile to note that the assumption of EPR pair distribution to client nodes is not restrictive, i.e., it does not hold only in EPR-based networks. In fact, when it comes to the distribution of multipartite entanglement, the super-node can, in principle, distribute each entangled qubit (ebit) to each client. 
However, this approach is not viable for all the classes of multipartite entanglement, which are characterized by different[As an example, the direct distribution of GHZ-like states, which are characterized by the lowest persistence, requires all the photons encoding the GHZ state to be successfully distributed to the clients in a single distribution attempt <cit.>.] persistence properties<cit.>. Accordingly, in the following we consider the more general case in which multipartite entangled states are distributed – as instance, through teleportation <cit.> – by exploiting the a-priori distribution of EPR pairs via heralded scheme <cit.>. As a matter of fact, this strategy is very common in literature and it has been proved to guarantee more resilience to noise and better protection against memory decoherence <cit.>. In the following we collect some definitions and assumptions that will be used in the paper. EPR Distribution Model: The distribution attempt of an EPR ebit toward a client node through a noisy quantum channel is modeled with a Bernoulli distribution with parameter p, where p denotes the successful distribution probability. According to the above, we consider quantum channels modeled as absorbing channels. Such a model constitutes a worst-case scenario, since the noise irreversibly corrupts the information carrier without any possibility of ebit recovery <cit.>. The channel behavior is captured through the parameter p, i.e., the probability of an ebit propagating through a quantum channel without experiencing absorption. And, q 1-p denotes the loss probability, i.e., the probability of ebit distribution failure as a consequence of the carrier absorption. It is worthwhile to highlight that other noisy channel models can be easily incorporated in our analysis. As an example, Pauli channels followed by a purification process can be as well modeled with a Bernoulli distribution with parameter p, where p denotes the success probability of the joint distribution and purification process. We observe that, by exploiting heralded schemes, the super-node is able to recognize which client – if any – experienced an absorption over the channel. And, in case of absorption, further distributions can be attempted. Indeed, it may be necessary to attempt the distribution multiple times before having the targeted subset of clients in the network successfully received the ebit. From the above, it follows straightforward to consider, within our model, the number of possible distribution attempts as the key temporal parameter. Clearly, the maximum number of distribution attempts is determined by the coherence times of the underlying quantum technology, as detailed in the next subsection. §.§ Problem Formulation We consider the time horizon of the entanglement distribution process constituted by N time slots: 𝒩 = {1,2, …, N }. with N implicitly accounting for the minimum guaranteed coherence time. Specifically, the value of N in (<ref>) depends on the particulars of the technology adopted for generating and distributing the entangled states, and it is set such that decoherence effects can be considered negligible within the time horizon. As shown in Fig. <ref>, the time is organized into N time-slots, where at (the end of) each time-slot the super-node can decide whether another distribution attempt should be performed (or not) in the subsequent time-slot. 
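To make the distribution model concrete, the following minimal sketch (our own illustrative code, with hypothetical function names, not the authors' implementation) simulates the heralded, slot-by-slot distribution of ebits toward S clients under the Bernoulli model described above; thanks to the heralded scheme, only the still-missing clients are re-attempted at each time slot.

```python
import numpy as np

def simulate_distribution(S, N, p, rng):
    """Simulate up to N heralded distribution rounds toward S clients.

    Returns the number of connected clients after each time slot.
    In every slot the super-node re-attempts only the clients that
    have not yet received their ebit (losses are heralded).
    """
    connected = np.zeros(S, dtype=bool)
    history = []
    for _ in range(N):
        missing = ~connected
        # each still-missing client independently succeeds with probability p
        connected[missing] = rng.random(missing.sum()) < p
        history.append(int(connected.sum()))
        if connected.all():
            break
    return history

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(simulate_distribution(S=5, N=10, p=0.4, rng=rng))
```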
Clearly, the number of clients having already successfully received an ebit through the noisy channel, referred in the following as “connected” clients, represents a key parameter. We formalize this concept through the following two definitions. The action set 𝒜 denotes the set of actions available at the super-node: 𝒜 = { C, Q }, with C denoting the action of attempting another distribution round in the next time slot, and Q denoting the action of not attempting the distribution. The system state space is defined as the pair (s,n) ∈𝒮̃×𝒩, where 𝒩 is given in (<ref>) and S̃ is defined as follows: 𝒮̃𝒮∪{Δ}, with 𝒮{0,1,2, …, S } denoting the set of possible values for number of connected clients. Accordingly, the system is in state (s,n) with s ∈𝒮 if s ≤ S clients have successfully received an ebit from the super-node within the first n distribution attempts. It is worthwhile to note that Δ in (<ref>) represents an auxiliary state, referred to as absorbing state, that denotes the state of the system where no further distributions are attempted. In the following, we will use the symbols s_n (s,n) as a shorthand notation for the system state (s,n), whenever this will not generate confusion. The allowed action set A_s_n denotes the set of actions available at the super-node when the system state is s_n, and it results: 𝒜_s_n = {C,Q} s ∈𝒮∖{S}∧ n<N {Q} s = S ∨ s = Δ∨ n=N From Def. <ref> it results that the only allowed action is Q whenever the system either: i) successfully distributed entanglement to all the clients, or ii) is in the absorbing state s=Δ, or iii) is at the last available time-slot N. Assuming the system being in the state s_n ∈𝒮̃×𝒩 and depending on the particular action a ∈𝒜_s_n taken, the system will evolve into some state s̃_n+1∈𝒮̃×𝒩 with some probability p(s̃_n+1|s_n,a), which will be derived with Lemma <ref> in Section <ref>. During the first time-slot, the super-node simultaneously transmits S ebits to the S clients. In case of absorption, further distributions can be attempted. This requires additional time, thus challenging the decoherence constraints as well as impacting the overall distribution rate. Hence there exists a trade-off between the number of clients that successfully receive an ebit – which we refer to as distributed cluster size – and the distribution time, i.e., the number of time slots after which the distribution process is either completed or arrested. This trade-off deeply impacts the performance of the overlaying communication functionalities. Thus, its optimization becomes crucial in the design of quantum networks. To capture this trade-off by abstracting from the particulars of the underlying hardware technology(ies), we model the effects of the action a ∈𝒜_s_n, taken by the super-node starting from the state s_n, through the notion of an utility function r(s_n,a), referred to as reward function. Accordingly, we formalize this concept in the following Definition. Assuming that action a ∈𝒜_s_n is taken when the system is in state s_n ∈𝒮̃×𝒩, the overall reward achieved is: r(s_n,a) = -f(s_n) s ∈𝒮, a = C g(s_n) s ∈𝒮, a= Q 0 s= Δ where: - f(s_n) denotes the continuation cost function, which models the overall cost of attempting (continuing) the ebits distribution when the system is in s_n; - g(s_n) denotes the pay-off function, which models the gain achievable by stopping the ebits distribution when the system is in the state s_n. 
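For illustration purposes, the next sketch encodes the basic ingredients formalized above – the one-slot transition probability p(s̃|s) under action C and the reward r(s_n, a) – as plain Python functions. The names and the example cost/pay-off pair are ours, and the absorbing state Δ is left implicit; this is a minimal sketch of the model, not the code used later for the experiments.

```python
from math import comb

def transition_prob(s_next, s, S, p):
    """p(s_next | s) under action C: probability that s_next - s of the
    S - s still-missing clients receive their ebit in a single slot."""
    if s_next < s or s_next > S:
        return 0.0
    k = s_next - s                       # newly connected clients
    return comb(S - s, k) * (p ** k) * ((1 - p) ** (S - s_next))

def reward(s, n, action, f, g):
    """r(s_n, a): pay the continuation cost f when continuing (C),
    collect the pay-off g when stopping (Q)."""
    return -f(s, n) if action == "C" else g(s, n)

if __name__ == "__main__":
    S, p = 3, 0.5
    row = [transition_prob(t, 1, S, p) for t in range(S + 1)]
    print(row, sum(row))    # the transition row from s = 1 sums to 1
```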
It is clear that, according to our formulation, once the system reaches the absorption state, no further costs or rewards are obtained since the distribution process has been stopped. The notion of reward function allows us to abstract from the particulars of i) the underlying technology for entanglement generation and distribution, and ii) the overlying network functionalities exploiting entanglement as a communication resource. In turn, this enables the following two key features: i) it restricts our attention on the effects of the entanglement distribution process; b) it allows us to measure the performance of an entanglement distribution strategy, and thus it allows us to quantitatively compare different strategies. In the following we restrict our attention on payoff functions {g(s_n)} satisfying the two following properties. [Monotonicity with s] The payoff function g(s_n) is a monotonic non-decreasing function of s: g(s_n) ≤ g(s̃_n) with s<s̃. [Monotonicity with n] The payoff function g(s_n) is a monotonic non-increasing function of n: g(s_n) ≥ g(s_m) with n ≤ m. The rationale for these two properties is to model scenarios with meaningful meaning from an entanglement distribution perspective. Specifically, with Property <ref> the reward function tunes the system choice towards larger s, i.e., higher number of connected clients. Clearly, this is reasonable since the higher is the number of connected clients, the larger is – as instance – the distributed multipartite entangled state. Conversely, Property <ref> tunes the system choice towards shorter distribution times, which is mandatory to account for the fragile, easily degraded nature of entanglement. It is worthwhile to note that the theoretical framework developed in Sec. <ref> continues to hold regardless of whether the reward exhibits any monotonicity. Conversely, we will exploit these two properties in Sec. <ref> for reducing the computational complexity of the optimal decision strategy. According to the theoretical framework developed so far, the entanglement distribution process is modeled through the quintuple: {𝒮̃, 𝒩, 𝒜_s_n, p(s̃_n+1|s_n,a), r(s_n,a)}. § KNOWING WHEN TO STOP Here, we develop the theoretical framework for modeling the entanglement distribution process. Specifically, in Sec. <ref>, we prove that – with the minimal set of assumptions about the quantum technologies underlying entanglement generation and distribution – the entanglement distribution process can be modeled as a Markov decision processes. Then in Sec. <ref> we prove some key properties that we will exploit to reduce the computational complexity of the problem. §.§ Optimal Decision Model In Theorem <ref> we prove that the entanglement distribution process can be modeled as a Markov Decision Process. To this aim, the preliminary result in Lemma <ref> is needed. Assuming action a ∈𝒜_s_n is taken when the system is in state s_n ∈𝒮̃×𝒩, the probability p(s̃_n+1|s_n,a) of the system evolving into state s̃_n+1∈𝒮̃×𝒩 depends only on current state and action, and it is given by: p(s̃_n+1|s_n,a) = p(s̃|s), if a = C ∧ s,s̃∈𝒮 : s̃≥ s 1 if a = Q ∧s̃ = Δ 0 otherwise, with p(s̃|s)=S-ss̃-sq^S-s̃p^s̃-s. See Appendix <ref> The available actions defined in (<ref>) establish two disjoint functioning regimes for the system, namely, the regime of action C and the regime of action Q, as shown in Fig. <ref> with reference to a system with S=3 clients. Specifically, Fig. <ref> represents the regime of action C. 
Here, the system evolves according to the transition probabilities p(s̃|s) in (<ref>). It is worth noting that there exist no transition towards the absorbing state through action C. Differently, Fig. <ref> represents the region of action Q. Specifically, by accounting for (<ref>), once the super-node decides to perform action Q, the system will only evolve towards (or remain in) the absorbing state Δ, where no further ebit transmissions are attempted. The entanglement distribution process can be modeled as a Markov Decision Process. The proof follows from Lemma 1 by accounting for the Markov property of the transition probabilities <cit.>. In the following, stemming from the result stated in Theorem <ref>, we will embrace the powerful framework of the Markov Decision Process to (optimal) “know when to stop” the entanglement distribution process. To this aim, the following definition is needed. A policy π(·) is a rule determining the action to be taken in any possible state of the considered system. Hence, it is a function that maps the set of system states over the set of the allowed actions: ∀ s_n ∈𝒮̃×𝒩: π(s_n) ∈𝒜_s_n In the following, Π denotes the set of all possible policies. We note that, in (<ref>), we exploited the Markovianity by considering policies π(·) depending on the current system state only, rather than on the entire history of the system state evolution <cit.>. Furthermore, we note that the overall reward achieved by adopting any policy π(·) ∈Π is inherently stochastic, due to the noise affecting entanglement distribution. Thus, to assess and to compare the decision maker's preference toward different policies, we need a criterion to measure the performance of the selected policy. One widely adopted criterion in literature is the expected total reward, which we introduce in the following. Given that the strategy π(·) is adopted, the total expected reward v_π(s_1), obtained when the system state starts in state s_1, is recursively defined as: v_π(s_1) = r(s_1,π(s_1) ) + ∑_s̃∈𝒮̃: s̃_2=(s̃, 2) p( s̃_2 | s_1 , π(s_1) ) v_π(s̃_2), where v_π(s̃_n) denotes the expected remaining reward at time slot n, and it is given by (<ref>) shown at the top of the next page. Specifically, the boundary condition at time slot N in (<ref>) prevents from infinite loops in the absorbing state. We note that, for deriving the expression in (<ref>), we exploited Theorem 4.2.1 in <cit.>. Accordingly, it is possible to restrict our attention on deterministic policies π(·) ∈Π with no loss of optimality. Furthermore, we note that the expected total reward v_π(s_1) has been defined as a recursive function, where the recursive step v_π(s_n) at time slot n is function of three key parameters. That are the number of connected clients s, the policy π(·) through action π(s_n), and the reward at time slot n+1 via the transition probabilities p( ·| s_n, π(s_n) ). Stemming from the above, we are ready now to formally define the problem of (optimal) knowing when to stop the entanglement distribution. By accounting for (<ref>), the overall objective is to find the strategy π^* ∈Π that maximizes the expected total reward when the system is in state s_1: v_π^*(s_1) = max_π∈Π{ v_π(s_1) } As a matter of fact, being the considered sets 𝒮̃ and 𝒩 finite, there always exists a deterministic strategy achieving the maximum in (<ref>) <cit.>. Furthermore, we have implicitly assumed as overall goal to maximize the reward for some specific initial state s_1. 
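As a hedged illustration of how the maximization above can be carried out, the following sketch solves the finite-horizon problem by a straightforward backward recursion over the state space, forcing action Q when s = S or n = N as prescribed by the allowed action set. It is a minimal reference implementation under our own naming and with an arbitrary example pay-off, not the code used for the experiments reported later.

```python
from math import comb

def backward_induction(S, N, p, f, g):
    """Return value[(s, n)] and policy[(s, n)] maximizing the expected
    total reward; states with s = S or n = N are forced to stop (Q)."""
    def trans(s, t):    # p(t | s) under action C
        k = t - s
        return comb(S - s, k) * p ** k * (1 - p) ** (S - t) if 0 <= k <= S - s else 0.0

    value, policy = {}, {}
    for n in range(N, 0, -1):            # go backward in time
        for s in range(S + 1):
            v_q = g(s, n)                # reward of stopping now
            if s == S or n == N:
                value[s, n], policy[s, n] = v_q, "Q"
                continue
            v_c = -f(s, n) + sum(trans(s, t) * value[t, n + 1]
                                 for t in range(s, S + 1))
            value[s, n] = max(v_q, v_c)
            policy[s, n] = "Q" if v_q >= v_c else "C"
    return value, policy

if __name__ == "__main__":
    val, pol = backward_induction(S=4, N=6, p=0.5,
                                  f=lambda s, n: 0.0,
                                  g=lambda s, n: s / n)
    print(val[0, 1], pol[0, 1])
```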
Alternatively, the goal might be to find the optimal policy π^* prior to know the initial state s_1. In such a case, by accounting for (<ref>), the total expected reward v_π is given by: v_π = ∑_s ∈𝒮: s_1=(s, 1) p(s) v_π(s_1) with p(s), namely, the probability of successfully distributing ebits to s clients during the first distribution attempt, given by: p(s)=p^s q^S-s However, the reward in (<ref>) is maximized by maximizing the reward in (<ref>) for each s_1 in 𝒮 <cit.>. Hence in the following we will focus on the problem formulation in (<ref>) without any loss in generality. §.§ Optimal Decision Strategy: Properties In this subsection, we prove that the optimal policy π^*(·) exhibits specific properties with respect to the reward function. Then, we will engineer these properties to derive effective, practical strategies for reducing the computational complexity of the decision problem. To this aim, some preliminaries are needed. First, we explicit the expression of the expected remaining reward in (<ref>). Specifically, let us denote with v^*(s_1) the maximum expected total reward, which is equivalent to the expected total reward achieved by the optimal policy π^* given in (<ref>): v^*(s_1) v_π^*(s_1) By accounting for the allowed action set 𝒜_s_n given in (<ref>) and for the reward function defined in Def. <ref>, the maximum expected total reward v^*(s_1) is given in (<ref>) shown at the top of the next page, with the maximum expected remaining reward at the n-th recursive step given by: v^*(s_n) = max{ v^*_Q(s_n), v^*_C(s_n) } if n < N r(s_N,Q) otherwise In (<ref>), v^*_Q(s_1) and v^*_C(s_1) denote the maximum expected reward achievable when action Q or C is taken, respectively, starting from state s_n. Furthermore, let us denote with p(s̆_n+k|s_n,C) the probability to evolve into state s̆_n+k= (s̆, n+k) at time slot n+k, starting from state s_n=(s,n) with s ≠Δ, by having chosen always action C at the end of each of time-slot[Namely, by choosing action C regardless whether the number of connected clients s is either s < S or s = S.] between n and n+k-1. By exploiting the Markovianity in Lemma <ref>, this probability, referred to us extended transition probability, can be recursively written as follows: p(s̆_n+k|s_n,C)= ∑_s̃=s^s̆ p( s̆_n+k|s̃_n+1,C) p(s̃_n+1 |s_n, C ), with the expression of p(s̃_n+1 |s_n, C ) given in Lemma <ref>. Stemming from the extended transition probabilities given in (<ref>), we are ready to define now two rewards functions, that will be exploited in the following for efficiently deriving the optimal policy. Given that the system is in state s_n=(s,n), with s≠Δ and n<N, we introduce the quantities v^+(s_n) and v^-(s_n), referred to as the reward majorant and the reward minorant, respectively: v^+(s_n) = r(s_n,C ) + ∑_s̆∈𝒮̃ p( s̆_N | s_n , C ) v_Q^*(s̆_n+1) = -f(s_n) + ∑_s̆∈𝒮̃ p( s̆_N | s_n , C ) g(s̆_n+1) v^-(s_n) = r(s_n,C ) + ∑_s̃∈𝒮̃ p( s̃_n+1 | s_n , C ) v_Q^*(s̃_n+1)= = -f(s_n) + ∑_s̃∈𝒮̃ p( s̃_n+1 | s_n , C ) g(s̃_n+1), with s̃_n+1 = (s̃, n+1) and s̆_n+1 = (s̆, n+1). Both the majorant and the minorant model the reward achievable by deciding first to continue the entanglement distribution at time slot n and, then, to stop the distribution at the subsequent time slot n+1. Yet, they significantly differ each other: - The reward minorant v^-(s_n) is obtained by assuming the system evolving from state s_n to state s̃_n+1 in agreement with the transition probabilities given in (<ref>). 
- Conversely, the reward majorant v^+(s_n) is obtained by assuming that the system can evolve freely from state s_n to state s̆_N – with s̆_N=(s̆,N) representing the state that would have been reached by performing N-n subsequent distribution attempts by choosing only action C and never action Q – yet in a single time slot. In other words, the majorant models the expected reward achieved when the system performs N-n subsequent distribution attempts, yet i) by paying only a single continuation cost -f(s_n), and ii) by obtaining a pay-off g(s̆_n+1) as if s̆ had been reached in a single time slot. The proof of the main result, namely Theorem <ref>, requires the following preliminary lemma. Given that the system state is s_n with s ∈𝒮 and n < N, it results: v^-(s_n) ≤ v^*_C(s_n) ≤ v^+(s_n) See Appendix <ref>. Given that the system state is s_n with s ∈𝒮 and n < N, it results: π^*(s_n) = Q if g(s_n) ≥ v^+(s_n) C if g(s_n) ≤ v^-(s_n) The proof follows directly from Lemma <ref>, by accounting for the definition of v^*_C(s_n) and v^*_Q(s_n) given in (<ref>). Markov decision problems such as the one we considered in (<ref>) are generally solved with backward induction <cit.>. Specifically, stemming from the expression of the maximum expected remaining reward given in (<ref>), backward induction works as follows: starting from n=N and going backward in time, the optimal action maximizing the expected total reward is obtained for each state s_n by exploiting the already-derived optimal actions for states s̃_n+1, with s̃ > s. When the system state is s_n, backward induction requires preliminarily evaluating (S-s+1)^N-n optimal actions – i.e., computing the optimal action for each possible future state – before determining the optimal action π^*(s_n) for the current state. Luckily, with Theorem <ref> we have derived an efficient strategy for finding the optimal action without the need to evaluate the future evolution of the system. Specifically, whenever g(s_n) satisfies one of the conditions in (<ref>), the optimal action can be decided regardless of any further evolution of the system. We validate this result with the first experiment in Sec. <ref>. Finally, it is important to discuss the assumptions underlying Theorem <ref>. As regards the continuation cost f(·), Theorem <ref> does not require any assumption or constraint, except f(·) being non-negative, which is reasonable[Otherwise it would represent a pay-off rather than a cost.]. As regards the pay-off function g(·), Theorem <ref> requires Properties <ref>-<ref> being satisfied. Yet these properties are not restrictive, since they reasonably drive the entanglement distribution toward entangling the largest number of client nodes in the shortest possible time-frame. In the next subsection, we will introduce and discuss some (reasonable) assumptions on the pay-off function which allow us to further simplify the search for the optimal policy.
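To illustrate how the theorem above can serve as a computational shortcut, the sketch below evaluates the minorant and the majorant and returns the action whenever one of the two screening conditions fires, deferring to full backward induction otherwise. The closed form used for the extended transition probability follows from the fact that, under repeated C-actions, each still-missing client is connected within m rounds with probability 1-(1-p)^m; the function names and the example pay-off are illustrative assumptions of ours.

```python
from math import comb

def k_step_prob(t, s, S, p, m):
    """Probability of holding t connected clients after m consecutive
    C-rounds from s: each missing client succeeds within m rounds
    with probability 1 - (1 - p)**m, independently of the others."""
    if not (s <= t <= S):
        return 0.0
    succ = 1 - (1 - p) ** m
    return comb(S - s, t - s) * succ ** (t - s) * (1 - succ) ** (S - t)

def screen(s, n, S, N, p, f, g):
    """Screening rule: return 'Q', 'C', or None when undecided."""
    v_minus = -f(s, n) + sum(k_step_prob(t, s, S, p, 1) * g(t, n + 1)
                             for t in range(s, S + 1))
    v_plus = -f(s, n) + sum(k_step_prob(t, s, S, p, N - n) * g(t, n + 1)
                            for t in range(s, S + 1))
    if g(s, n) >= v_plus:
        return "Q"
    if g(s, n) <= v_minus:
        return "C"
    return None     # fall back to backward induction for this state

if __name__ == "__main__":
    print(screen(s=2, n=3, S=5, N=10, p=0.4,
                 f=lambda s, n: 0.0, g=lambda s, n: s / n))
```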
§.§ One-Step Look Ahead
Here we depart from the general discussion of Sec. <ref>, by further extending the result of Theorem <ref> for deriving the optimal policy, albeit imposing additional constraints on the rewards. To this aim, the following preliminaries are needed. Given that there exist only two actions in (<ref>) – namely, continue or stop – the entanglement distribution problem belongs to the framework of optimal stopping problems, for which there exists a very simple (hence, computationally efficient) rule – namely, the one-step look-ahead (OLA) rule – for deciding the action to be taken. At time-step n, the one-step look-ahead (OLA) set 𝒮^Q_n ⊆𝒮̃ is the set of system states where the instantaneous reward achievable by stopping is not lower than the expected reward achievable by performing a further distribution attempt and then stopping the distribution. 𝒮^Q_n = { s ∈𝒮 : g(s_n) ≥ v^-(s_n) } with v^-(s_n) given in (<ref>). π(s_n) = Q if s_n ∈𝒮^Q_n ⟺ g(s_n) ≥ v^-(s_n) C otherwise The naming of the OLA rule follows by noting that the reward minorant v^-(s_n) represents the expected reward when the policy is to continue for one step and then stop, namely: v^-(s_n) = -f(s_n) + E[g(𝔖_n+1)] with 𝔖_n+1 denoting the random variable describing the system state at step n+1. The OLA rule is optimal whenever the OLA set is closed <cit.>, namely, whenever the system state remains confined within the OLA set once it enters. Unfortunately, the optimality of the OLA rule strictly depends on the particulars of the cost f(·) and pay-off g(·) functions, and no general conclusions can be drawn independently. Yet, we can consider different settings for the cost/pay-off functions – which allow us to model a wide range of possible communication scenarios – and discuss the optimality of the OLA rule with respect to these settings. More in detail, we consider the following three base-cases: g(s_n) = s/n g(s_n) = λ^n s, with λ∈ (0,1] g(s_n) = s/S - n/N with f(s_n) = 0, since we already incorporated the cost arising from additional distribution attempts into the reward. As an example, with the first base-case given in (<ref>) we model a scenario where the reward, represented by the number s of entangled clients, is discounted by a factor equal to the number of time-slots used for entangling such clients. The rationale for this scenario is to model the reward as a sort of entanglement throughput – namely, as an average entanglement per unit of time – similarly to the bit throughput that represents one of the key metrics for classical networks. As regards the second base-case given in (<ref>), it introduces a discount factor λ which exponentially weights the reward s as time passes. As a matter of fact, multiplicatively decreasing the rate of some process as in (<ref>) is widely adopted in classical networks, with TCP exponential back-off constituting the most famous case. Finally, with (<ref>) we introduce another base-case to confer generality on the discussion. By considering the settings of the base-cases, we have the following result. When the rewards are modeled as in (<ref>) or (<ref>), the OLA rule is optimal and it results: π^*(s_n) = Q ⟺ s ≥ λSp/(1-λ+λp) if g(s_n) = λ^n s, s ≥ S - S/(Np) if g(s_n) = s/S - n/N, whereas when the rewards are modeled as in (<ref>), the OLA rule is not optimal. See Appendix <ref>. From an engineering perspective, it is evident that having an efficient (i.e., low-computational-complexity) optimal rule, such as the OLA rule, for deriving the optimal policy – namely, for deciding when to stop distributing entanglement within a quantum network – is highly advantageous. Hence, whenever possible, rewards satisfying the optimality condition of the OLA rule should be preferred. Nevertheless, whenever this is not possible, we can still exploit the main result – namely, Theorem <ref> – for designing an efficient rule, as long as we tolerate finding a sub-optimal policy rather than an optimal one.
π(s_n) = Q if s_n ≥v^+(s_n) + v^-(s_n)/2 C otherwise Clearly, the “amount” of sub-optimality – hence, the loss in reward – introduced by such a rule strictly depends on the particular settings of the rewards. In the next subsection, we will evaluate such a sub-optimality for the three base-cases introduced above. § PERFORMANCE EVALUATION In this section, we first validate the theoretical results derived in Secs. <ref> and <ref>. Then, we discuss the impact of the reward functions on the performance of the entanglement distribution process. To this aim, we focus on two key metrics: * average distribution time, namely, the average number of time-slots before the distribution is arrested; * average cluster size, namely, the average number of client nodes successfully entangled; More into details, we investigate how the choice of the reward setting influences these two key metrics. This allows us to to draft some guidelines for selecting a reward function able to drive the system to fulfill some specific performance requirements. With the first experiment, we evaluate in Fig. <ref> the expected total reward v_π given in (<ref>) as a function of the ebit propagation probability p. The adopted simulation set is as follows: the number of clients is S=100, the time-horizon is constituted by N=100 time-slots, the rewards are modeled as in (<ref>) with g(s_n) = s/n, and p varies with step equal to 0.025. Within the experiment, we consider four different rewards. First, we consider the reward v_π^* achieved with the optimal policy π^*, with π^* obtained via exhaustive search through backward induction. Clearly, this is the maximum expected reward that can be achieved, and it represents the performance baseline for any sub-optimal policy. We note that, the higher is p, the higher is the reward v_π^*. This result is reasonable, since higher distribution probabilities allow the system to evolve toward states characterized by higher cluster sizes s and lower distribution times n. Additionally, we consider the reward v_π^* achieved with the policy π^* computed via Theorem <ref>. More into detail, π^*(s_n) is obtained with Theorem <ref> whenever either of the two constrains in (<ref>) holds, and via backward induction otherwise. Clearly, by comparing this reward with the optimal reward v_π^*, we can observe a perfect agreement between the two rewards. This constitutes an experimental validation of the analytical results derived in Theorem <ref>. Furthermore, we consider the reward v_π achieved when the policy π is obtained with the OLA rule given in Definition <ref>. Indeed, it must be noted that – although barely noticeable even in the zoomed-in inset of Fig. <ref> – the reward achievable with the OLA rule is lower than the reward v_π^* achievable with the optimal policy for any value of p. This validates the theoretical results derived in Prop <ref>, and, specifically, the sub-optimality of the OLA rule for g(s_n) = s/n. Yet, the performance degradation of the OLA rule is practically negligible. Finally, we consider the reward v_π achieved with the policy π obtained via the sub-optimal rule given in Definition <ref>. From Fig. <ref>, one might question the rationale for this sub-optimal rule and, specifically, one might incorrectly believes that – given that the OLA rule significantly outperforms the sub-optimal rule given in Definition <ref> – the last rule is useless. 
Yet it must be noted that the performance of the OLA rule strictly depends on some specific assumptions on the cost f(·) and pay-off g(·) functions, assumptions which are not required by the rule given in Definition <ref>. From the above discussion, it becomes clear that there exists a trade-off between optimality and computational-efficiency, that must be properly engineered by the quantum network designers. Specifically, designers can decide to adopt generalist heuristic policies – such as the one in Def. <ref> – which does not impose limitations on the choice of the reward functions albeit at the price of sub-optimal decisions. Or they can leverage optimal, efficient policies – such as the OLA one – as long as they can tolerate additional constraints in the reward function definition. With the second experiment, we aim at assessing the importance of an optimal policy for achieving the highest total reward. For this, in Fig. <ref> we plot the average total reward v_π given in (<ref>) as a function of the ebit propagation probability p for 10^6 Montecarlo distribution process trials, for the same simulation set adopted in Fig. <ref>. We note that lines denote the mean of the total rewards over the different trials, whereas shading areas denote the standard deviation of the different trials[With the shading areas of optimal and OLA rewards practically overlapping.]. We extend the set of policies by considering – along with the optimal and the two sub-optimal policies already considered in the previous experiment – 20 random policies. We observe that the higher is the ebit distribution probability p , the higher is the performance gap between the expected total reward achieved by the optimal strategy and the reward achieved by a random strategy. As a matter of fact, the performance gap remains evident even if we consider the distribution of the optimal reward via standard deviation. This result shows the importance of the considered problem for scenarios of practical interest, namely, for scenarios where entanglement can be fairly distributed. In Fig. <ref>, we present the average cluster size s as a function of the ebit propagation probability p, computed with the same 10^6 Montecarlo distribution process trials of Fig. <ref>. As before, lines denote the mean over the different trials, whereas shading areas denote the standard deviation of the different trials. First, we note that the random policies might achieve larger cluster sets with respect to the optimal policy. The rationale for this behaviour is that the optimal policy aims at: i) maximizing the cardinality of the cluster set, while simultaneously ii) minimizing the distribution time. Hence, depending on g(·) and p, the optimal policy might prefer an earlier stop of the distribution process. And this was, indeed, the overall objective of our modeling. These considerations are confirmed by Fig. <ref>, which presents the average distribution time n as a function of ebit propagation probability p, computed with the same 10^6 Montecarlo distribution process trials of Fig. <ref>. Indeed, it is possible to note that the values of p in Fig. <ref> – for which the random policies achieve larger cluster sizes with respect to the optimal policy – are characterized by longer distribution times. Finally, with the latest experiment, we aim at discussing the impact of the rewards settings – and, specifically, of the three base-cases introduced in (<ref>)-(<ref>) – on the overall entanglement distribution process. 
For this, we preliminarily compare the optimal policy π^* for the different settings of the pay-off function via the action matrices represented in Fig. <ref>. Formally, the action matrix A^*: S × N ⟶ p ∈ [0,1] is defined as follows: a_s,n^* ∈ A^* = p̃ ⟺ π^*(s_n) = Q ∀ p ≤p̃ C ∀ p > p̃ As an example, by considering the action map for the pay-off function g(s_n) = s/n represented in Fig. <ref>, we note that, for an arbitrary time-slot n, a_s,n^* increases as the cluster size s increases. This means that, as the cluster size s increases, higher values of p are needed for action C to be the optimal action. Clearly, for a given n, for the lowest values of s, action C is optimal for almost all the values of p. This is very reasonable: when the current cluster size s is very small, so is the pay-off reward. Hence, it is likely more convenient to attempt another entanglement distribution rather than to stop here. And, vice-versa, for the highest values of s, action Q is optimal for almost all the values of p. Furthermore, we observe that the values of the action matrices in Fig. <ref> strongly depend on the particular pay-off function. For instance, the action map for the pay-off function g(s_n) = λ^n s represented in Fig. <ref> strongly depends on the cluster size, whereas it is largely independent of the time-slot. As a result, the pay-off function g(s_n) = λ^n s drives the entanglement distribution process towards larger cluster sizes at the price of significantly longer distribution times. These considerations are clearly confirmed by Fig. <ref>, which presents the average cluster size s as a function of the ebit propagation probability p – for the same 10^6 trials of Fig. <ref> – for the different settings of the pay-off function given in (<ref>)-(<ref>). As before, lines denote the mean over the different trials, whereas shading areas denote the standard deviation of the different trials. First, we note that the larger the parameter λ in (<ref>), the larger the average cluster size s and the steeper the slope of the related curve. As a matter of fact, the largest values of the average cluster size are achieved when the pay-off function is g(s_n) = s/S - n/N as in (<ref>). This agrees with the action matrix in Fig. <ref>, where action Q becomes optimal only for the largest values of s. Interestingly, the pay-off functions significantly impact the performance for lower values of p. Indeed, both in Fig. <ref> and Fig. <ref>, as p increases, the distance between the curves in the graph tends to reduce. The rationale is that, as p increases, the target system state – namely, the system state maximizing the reward – can be quickly reached within shorter distribution times. Thus, different reward functions result in vastly different ebit distribution performances under bad transmission conditions. From the above, it becomes evident that, whenever there exist requirements in terms of average cluster size or average distribution time, our modeling allows meeting the performance requirements by choosing a suitable reward function, for instance by tuning the value of λ in g(s_n) = λ^n s. Thus, our formulation of the entanglement distribution process as an optimal decision problem constitutes an effective, handy tool for quantum network designers aiming at engineering the entanglement distribution process.

§ CONCLUSION
In this work, we provided a formulation of the entanglement distribution process as a Markov Decision Process.
Our theoretical model jointly accounts for the constraints arising from the underlying technologies as well as for the overlaying communication protocol requirements. We exploited this formulation for discussing the trade-off arising between the two key performance metrics – i.e., the average cluster size and the average distribution time – and for discussing the impact of the reward function and the decision-making policy on the entanglement distribution performance. Our formulation provides quantum network designers with an effective, handy tool for tuning and engineering the entanglement distribution process, so that it can meet the performance requirements through proper reward functions. § PROOF OF LEMMA <REF> According to the model developed in Sec. <ref>, a distribution attempt takes place only if action a=C is taken. And in this case, at time slot n+1, the system – as a result of the distribution attempts – evolves into another state characterized by a number s̃ of “connected nodes”, which cannot be smaller than the number s of “connected nodes” in the time slot n. The reason for which s̃≥ s is twofold: i) the heralded scheme allows the super-node to recognize which node – if any – experienced an ebit loss in a given time-slot. Hence, in the successive time slot, the super-node distributes entanglement only to the missing nodes; ii) by restricting the distribution attempts within a time interval N where the decoherence effects are negligible, the system state evolution is restricted from “backward” transitions towards smaller connected sets with s̃ < s. Stemming from this and in according to the EPR distribution model given in Sec. <ref>, each ebit distribution attempt follows a Bernoulli distribution with parameter p. Accordingly, it follows that when a = C and s,s̃∈𝒮 : s̃≥ s, the transition probability p(s̃_n+1|s_n,C) is given by: p(s̃_n+1|s_n,C) = p(s̃|s)= S-ss̃-sq^S-s̃p^s̃-s. Conversely, when action a=Q is taken, the system can only evolve in the absorption state Δ, which is a fictitious state modeling the state where no further distribution attempts are performed. It is worthwhile to observe that once in the absorption state s=Δ, the system remains in such a state, i.e., no evolution towards s̃≠Δ is allowed. As a consequence the proof follows. § PROOF OF LEMMA <REF> We have two statements to prove within the inequality given in (<ref>). First Inequality. We start by proving the first part of the inequality in (<ref>), namely: v^-(s_n) ≤ v^*_C(s_n) ∀ s ∈𝒮∧ n < N By exploiting the expression of v^*_C(s_n) in (<ref>), one can recognize that: v^*_C(s_n)= -f(s_n) + ∑_s̃∈𝒮̃ p( s̃_n+1 | s_n , C ) v^*(s̃_n+1) According to (<ref>), v^*(s̃_n+1)≥ v_Q^*(s̃_n+1) and by accounting for the expression of v^-(s_n) in (<ref>), the proof follows. Second Inequality. We now prove the second part of the inequality in (<ref>), i.e.: v^+(s_n) ≥ v^*_C(s_n) ∀ s ∈𝒮∧ n < N By exploiting the expressions of v^+(s_n) in (<ref>) and v_C^*(s_n) reported in (<ref>), one recognizes that proving (<ref>) is equivalent to prove that: ∑_s̆∈𝒮̃ p( s̆_N | s_n , C ) v_Q^*(s̆_n+1) ≥∑_s̃∈𝒮̃ p( s̃_n+1 | s_n , C ) v^*(s̃_n+1), where, by definition v^*(s̃_n+1) = max{ v^*_Q(s̃_n+1), v^*_C(s̃_n+1) } To prove (<ref>), we can consider the two elements in (<ref>) separately. To this aim, let us consider the more general case, namely, the case where n+1<N[Indeed, when n+1=N, no decision has to be made since the distribution is interrupted and the system goes in the absorption state]. Case 1: v^*(s̃_n+1) = v^*_Q(s̃_n+1). 
Let us conduct a proof with a reductio ad absurdum, i.e., let us suppose that: ∑_s̆∈𝒮̃ p( s̆_N | s_n , C ) v_Q^*(s̆_n+1) < ∑_s̃∈𝒮̃ p( s̃_n+1 | s_n , C ) v^*_Q(s̃_n+1) ). By accounting for the extended transition probabilities given in (<ref>), we obtain equation (<ref>) given at the top of the next page. We note that (<ref>) is satisfied only if there exists at least one s̃∈𝒮: s̃≥ s so that: ∑_s̆≥s̃ p( s̆_N | s̃_n+1 , C ) p( s̃_n+1 | s_n , C ) g(s̆_n+1 ) < < p( s̃_n+1 | s_n , C ) g(s̃_n+1) ⟺ ⟺∑_s̆≥s̃ p( s̆_N | s̃_n+1 , C ) g(s̆_n+1 ) < g(s̃_n+1). By accounting for Property <ref> and by recognizing that ∑_s̆≥s̃ p( s̆_N | s̃_n+1 , C ) = 1, (<ref>) constitutes a reductio ab absurdum and so does (<ref>). Case 2: v^*(s̃_n+1) = v^*_C(s̃_n+1). Let us conduct the proof again with a reductio ad absurdum by supposing that: ∑_s̆∈𝒮̃ p( s̆_N | s_n , C ) v_Q^*(s̆_n+1) < ∑_s̃∈𝒮̃ p( s̃_n+1 | s_n , C ) v^*_C(s̃_n+1) ) By accounting for the extended probabilities given in (<ref>), we obtain equation (<ref>) given at the top of the next page. For the sake of notation simplicity and with no loss in generality – as discussed at the end of this proof – let us assume N=n+2. Accordingly, v^*(s̆_n+2) = g(s̆_N) and (<ref>) holds only if there exists at least one s̃∈𝒮: s̃≥ s so that: ∑_s̆≥s̃ p( s̆_N | s̃_n+1 , C ) g(s̆_n+1 ) < < - f(s̃_n+1) + ∑_s̆≥s̃ p( s̆_N | s̃_n+1 , C ) g(s̆_N) Hence, by accounting for Property <ref>, (<ref>) constitutes a reductio ab absurdum and so does (<ref>). We finally note that, whether N should be greater than n+2 – say N = n+3 as instance – we have that v^*(s̆_n+2) is equal to max{ v^*_Q(s̃_N-1), v^*_C(s̃_N-1) }, and the proof follows recursively by adopting the same reasoning adopted for the two elements in (<ref>). § PROOF OF PROPOSITION <REF> §.§ Case I: rewards modeled as in (<ref>). Here we prove that the OLA rule is not optimal when the rewards are modeled as in (<ref>), namely, when: g(s_n) = s/n Let us assume the system state being s_n ∈𝒮. Whether action C is chosen, the expected state E[𝔖_n+1] is given by: E[𝔖_n+1] = ∑_s̃∈𝒮S-ss̃ - s q^S-s̃ p^s̃-s = s + p (S-s) Accordingly, stemming from the definition of OLA set in (<ref>) and by accounting for (<ref>), we have that S^Q_n and S^Q_n+1 are given by: S^Q_n = { x ∈𝒮 : x/n≥x + p (S-x)/n+1} S^Q_n+1 = { x ∈𝒮 : x/n+1≥x + p (S-x)/n+2} Hence, after simple algebraic manipulations, it results: s ∈ S^Q_n ⟹ s ≥n p/1 + n p S s̃∈ S^Q_n+1⟹s̃≥(n+1) p/1 + (n+1) p S Let us conduct the proof with a reductio ab absurdum by assuming that, starting from state s_n : s ∈ S^Q_n and evolving into state s̃_n+1, it must result s̃∈ S^Q_n+1 for any s̃. Without any loss of generality, we assume: s = n p/1 + n p S ∧ s̃ = s and, by jointly accounting for (<ref>) and (<ref>), it results: s̃ = s = n p/1 + n p S > (n+1) p/1 + (n+1) p S ⟹ p < 0 which clearly constitutes a reductio ab absurdum. §.§ Case II: rewards modeled as in (<ref>). Here we prove that the OLA rule is optimal when the rewards are modeled as in (<ref>), namely, when: g(s_n) = λ^n s To this aim, let us assume s_n ∈𝒮^Q_n and let us conduct the proof with a reductio ab absurdum by assuming that the system can evolve into a s̃_n+1∉𝒮^Q_n+1. 
From (<ref>), we have that S^Q_n and S^Q_n+1 are given by: S^Q_n = { x ∈𝒮 : λ^n x ≥λ^n+1 x + p (S-x) } S^Q_n+1 = { x ∈𝒮 : λ^n+1 x ≥λ^n+2 x + p (S-x) } Hence, after simple algebraic manipulations, it results: s ∈ S^Q_n ⟹ s ≥λ p S/1 - λ - λ p s̃∉ S^Q_n+1⟹s̃ < λ p S/1 - λ - λ p which constitutes a reductio ab absurdum, given that the system cannot evolve from s_n to s̃_n+1 with s̃ < s. §.§ Case III: rewards modeled as in (<ref>). Here we prove that the OLA rule is optimal when the rewards are modeled as in (<ref>), namely, when: g(s_n) = s/S - n/N To this aim, let us assume s_n ∈𝒮^Q_n and let us conduct the proof with a reductio ab absurdum by assuming that the system can evolve into a s̃_n+1∉𝒮^Q_n+1. From (<ref>), we have that S^Q_n and S^Q_n+1 are given by: S^Q_n = { x ∈𝒮 : x/S - n/N≥x + p x/S + p + n+1/N} S^Q_n+1 = { x ∈𝒮 : x/S - n+1/N≥x + p x/S + p + n+2/N} Hence, after simple algebraic manipulations, it results: s ∈ S^Q_n ⟹ s ≥ S - S/N p s̃∉ S^Q_n+1⟹s̃ < S - S/N p which constitutes a reductio ab absurdum, given that the system cannot evolve from s_n to s̃_n+1 with s̃ < s. IEEEtran
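As a purely illustrative numerical sanity check of the closed-form thresholds stated in the Proposition of Sec. <ref>, the following sketch compares, for a few arbitrary parameter choices of ours, the OLA sets computed directly from their definition (with f = 0) against the sets induced by the thresholds λSp/(1-λ+λp) and S - S/(Np); it is not part of the formal argument above.

```python
from math import comb

def expected_next_payoff(s, n, S, p, g):
    """E[g(S_{n+1}, n+1)] after one more distribution round from s connected clients."""
    return sum(comb(S - s, t - s) * p ** (t - s) * (1 - p) ** (S - t) * g(t, n + 1)
               for t in range(s, S + 1))

def ola_set(n, S, p, g):
    """States where stopping now is at least as good as one more round and then stopping."""
    return {s for s in range(S + 1) if g(s, n) >= expected_next_payoff(s, n, S, p, g)}

if __name__ == "__main__":
    S, N, p, lam = 10, 20, 0.3, 0.9          # arbitrary example parameters
    g_exp = lambda s, n: lam ** n * s        # base-case g = lambda^n * s
    g_lin = lambda s, n: s / S - n / N       # base-case g = s/S - n/N
    thr_exp = lam * S * p / (1 - lam + lam * p)
    thr_lin = S - S / (N * p)
    for n in (1, 5, 15):
        ok_exp = ola_set(n, S, p, g_exp) == {s for s in range(S + 1) if s >= thr_exp}
        ok_lin = ola_set(n, S, p, g_lin) == {s for s in range(S + 1) if s >= thr_lin}
        print(f"n={n}: exponential pay-off {ok_exp}, linear pay-off {ok_lin}")
```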
Risk Controlled Image Retrieval

Kaiwen Cai, Chris Xiaoxuan Lu, Xingyu Zhao, Xiaowei Huang
Most image retrieval research focuses on improving predictive performance, but it may fall short in scenarios where the reliability of the prediction is crucial. Though uncertainty quantification can help by assessing uncertainty for query and database images, this method can provide only a heuristic estimate rather than a guarantee. To address these limitations, we present Risk Controlled Image Retrieval (), which generates retrieval sets that are guaranteed to contain the ground truth samples with a predefined probability. can be easily plugged into any image retrieval method, agnostic to data distribution and model selection. To the best of our knowledge, this is the first work that provides coverage guarantees for image retrieval. The validity and efficiency of is demonstrated on four real-world image retrieval datasets, including the Stanford CAR-196 <cit.>, CUB-200 <cit.>, the Pittsburgh dataset <cit.> and the ChestX-Det dataset <cit.>.

§ INTRODUCTION
Given a query image, the goal of an image retrieval system is to find the best matching candidates from the database. Image retrieval has long been studied and has laid the foundation for many computer vision applications, including face recognition <cit.>, large scale image classification <cit.> and person re-identification <cit.>. Image retrieval works by representing each image as a vector in a high dimensional space and finding the nearest neighbors of the query image in the database. Discriminative image representations are crucial for this process. Massive research has been devoted to improving the representation power of image features, spanning from early hand-crafted features to today's deep learning based ones. However, like any deep learning based method, current image retrieval systems are data-driven and hard to explain, rendering them potentially unreliable due to limited training data and model complexity. Reliability is crucial for many applications, such as self-driving and medical diagnosis. In addition to accurate predictions, it is important to be aware of any potential risks associated with the prediction. For example, rather than solely identifying the best-matching candidates from the disease databases, a medical practitioner would want a cluster of retrieval results with a specific level of reliability, say 90%, before making a diagnosis. Conventional image retrieval systems are inadequate for these tasks as they do not offer any measure of reliability. To address this issue, researchers have developed the concept of uncertainty estimation for image retrieval. This involves estimating the level of uncertainty for both query and database samples. Approaches including <cit.> involve using high uncertainty as an indicator of potential prediction inaccuracies. This research partially addresses the issue of uncertainty in image retrieval reliability. However, there are limitations to the uncertainty estimation approach. Firstly, the estimated uncertainty is merely a heuristic measure rather than a guarantee, leaving practitioners with only a vague idea of the prediction's reliability. Secondly, the estimated uncertainties are independent of the retrieval set size, despite the fact that having more candidates increases the likelihood of covering the ground truth.
We propose a novel image retrieval framework called Risk Controlled Image Retrieval () that addresses such limitations. generates adaptive retrieval sets conforming to a predefined level of risk[The risk is defined as the probability of the retrieval set missing all ground truth samples of the query samples. Please see Sec.<ref> for a formal definition.] by introducing two modules, 1) adaptive retrieval and 2) risk control. The adaptive retrieval module employs a common uncertainty estimator to provide a heuristic uncertainty for the query and determine a prior retrieval set size. Then the risk control module adjusts the retrieval set size to comply with the predefined risk level and prior retrieval set size. We summarize the contributions of this paper as follows: * We propose an adaptive retrieval strategy that adapts retrieval size based on the commonly estimated heuristic uncertainty. * We for the first time enable an image retrieval system to generate retrieval sets that meet a predefined level of risk. * We demonstrate the effectiveness of the proposed methods by experimental results on four image retrieval datasets: the Stanford CAR-196 <cit.>, CUB-200 <cit.>, the Pittsburgh dataset <cit.> and the ChestX-Det dataset <cit.>. § RELATED WORK §.§ Image Retrieval Image retrieval involves building a tagged database offline and searching it online, where images are represented by feature vectors. Therefore, image retrieval's effectiveness relies on the feature extractor's representation power. Initially, hand-crafted image features such as SIFT <cit.> were primarily used. However, with the superior performance of deep learning-based features from pretrained CNNs <cit.>, hand-crafted features have become gradually obsolete. Recent research on image retrieval demonstrates that deep learning-based features can be effectively learned end-to-end with ranking loss functions <cit.>. In this sense, image retrieval has evolved into a metric learning problem, with the aim of learning a mapping function that creates an embedding space where similar objects are positioned close together while dissimilar objects farther apart. Loss functions, including contrastive loss <cit.>, triplet loss <cit.>, N-pairs loss <cit.>, etc., have been studied to enhance metric learning, which in turn benefits image retrieval. §.§ Uncertainty Estimation Deep learning has achieved tremendous success in numerous computer vision tasks, but its black-box working mechanism has raised concerns about its reliability. Uncertainty estimation is a way to quantify the confidence of the model in its prediction. Uncertainty of predictions can arise from either the data's inherent uncertainty or the model's uncertainty, known as aleatoric and epistemic uncertainty, respectively <cit.>. To quantify epistemic uncertainty, Variational Inference (VI)-based methods such as Monte Carlo (MC) Dropout are used to approximate Bayesian Neural Networks (BNNs), where the weights of the networks are modeled as distributions. Ensemble method <cit.> initiate multiple instances of a same model and then take the variances of predictions as an uncertainty level. On the other hand, researchers typically estimate aleatoric uncertainty by learning it via an additional head parallel to the network. In image retrieval, DUL <cit.> learns aleatoric uncertainty by constructing a stochastics embedding space where the uncertainty is regularized by a KL divergence loss. 
BTL <cit.> utilizes a bayesian loss function that enforces triplet constraints on stochastic embeddings. <cit.> shows MCD can be employed in image retrieval to quantify epistemic uncertainty. Current research on uncertainty estimation in image retrieval is primarily concerned with quantifying the likelihood of a pair of images being similar or dissimilar, albeit with only a heuristic notion of uncertainty. Our objective is to retrieve a set of images with a guarantee of containing the ground truth samples of a given query sample with a user-specified probability. The framework of conformal prediction guarantees the correctness of a prediction <cit.>. Conformal quantile regression <cit.> enhances this approach to deliver prediction intervals with guaranteed accuracy. Building on this idea, DFUQ <cit.> proposes generating risk-guaranteed intervals in image regression, which has motivated us to tackle the constraints of existing image retrieval methods. However, it is important to note that DFUQ is specifically designed for regression tasks that rely on true labels. On the other hand, our image retrieval approach learns image embeddings without the use of true labels. Furthermore, the uncertainty estimation methods for regression tasks are distinct from those used in image retrieval since true labels are not available. These differences make it challenging to provide a risk-guaranteed retrieval set in image retrieval, highlighting the need for a shift in methodology. § METHOD The framework extends image retrieval pipelines that incorporate heuristics for estimating uncertainty. In this section, we first explain the standard image retrieval pipeline. Next, we discuss three popular uncertainty estimation techniques for image retrieval. Lastly, we present the framework. §.§ Image Retrieval In an image retrieval pipeline, a feature extractor f_e is trained to map images from high-dimension image space to a low-dimension embedding space. In the embedding space, similar samples are close and dissimilar samples are far apart. Suppose there is a dataset {𝒬, 𝒟}, where 𝒬 denotes query set and 𝒟 database. Given i^th query sample X_i^𝒬∈𝒬 (superscript denotes which set it belongs), its most similar samples {Y_i, j | j=1,2,..,K} are retrieved from 𝒟: {Y_i, j | j=1,2,..,K} = ℛ_[K, f_e] (X_i^𝒬), where ℛ_[K, f_e] represents a retrieval function conditioned on K and f_e: ℛ_[K, f_e] (X_i^𝒬) = { x | x ∈𝒫, 𝒫⊆𝒟, |𝒫|= K, d[f_e (X_i^𝒬), f_e (x)] ≤ d[f_e (X_i^𝒬), f_e (y)], ∀ y ∈𝒟\𝒫} where d denotes a metric criterion, and K the number of the retrieved candidates, f_e the feature extractor. The above image retrieval pipeline has been the de facto standard in the image retrieval community. However, it merely retrieves a set of best matching candidates, without any indicator showing how reliable the retrieval set is. This is problematic in risk-sensitive scenarios, where the reliability of the prediction is crucial. §.§ Uncertainty-aware Image Retrieval Uncertainty estimation mitigates this issue by estimating a heuristic notion of uncertainty for each query and database sample. Existing uncertainty-aware image retrieval methods include three methodologies: 1) estimate model uncertainty by Bayesian Neural Networks <cit.>, 2) predict uncertainty by a deterministic model <cit.>, and 3) estimate uncertainty by an ensemble of models. We briefly introduce these methods in what follows. 
§.§.§ Uncertainty Estimated by a BNN
Bayesian approaches treat the weights of a neural network as distributions, instead of as deterministic values. However, obtaining the analytical posterior distribution for the weights is intractable due to the difficulty of computing the evidence. Researchers have utilized various methods to address this challenge, with variational inference being the most widely employed. Monte Carlo Dropout (MCD) <cit.> initializes this methodology by assuming a mixed Gaussian distribution on each weight, which makes the sampling of weights equivalent to applying dropout operations. In our image retrieval setting, we apply dropout to all convolutional layers of the feature extractor with a dropout rate p. Features μ and their heuristic uncertainties σ^2 are obtained by applying dropout at test time and feed-forwarding T times: μ = 1/T∑_t=1^T f_e_t ∼θ(X), σ^2 = 1/T∑_t=1^T (f_e_t ∼θ(X)-μ)^2. where θ denotes the posterior distribution of the weights.

§.§.§ Uncertainty Estimated by an Ensemble
The ensemble method <cit.> trains multiple instances of the same deterministic model, each with random initial weights. At inference time, the predictions of all models are averaged, and the uncertainty is obtained as the variance of the predictions. Provided that there are N instances, the mean and variance of the predictions are: μ = 1/N∑_i=1^N f_e_i(X), σ^2 = 1/N∑_i=1^N (f_e_i(X)-μ)^2.

§.§.§ Uncertainty Estimated by a Single Deterministic Model
Multiple feed-forward propagations in MCD can cause overhead, and using multiple model instances can result in increased memory usage. In comparison, a single deterministic model is an attractive approach for estimating uncertainty. <cit.> constructs a Gaussian distribution for each feature, i.e., f_e(X) ∼𝒩(μ, σ^2). A Bayesian triplet loss is introduced to enforce triplet contrastive learning among these probabilistic features. The network is built upon a common image retrieval model by adding a variance head f_u parallel to the mean head f_e. Once trained, the model can output μ and σ^2 using only one feed-forward propagation: [μ, σ^2] = [f_e(X), f_u(X)]. In summary, uncertainties of features can be obtained through a BNN-based method, an ensemble, or a single deterministic model. Nevertheless, all of these methods only provide a heuristic notion of uncertainty <cit.>, which is not a guarantee. That said, even with the estimated uncertainty, the retrieval set cannot be interpreted as a likelihood of encompassing the ground truth samples of a given query sample.
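As a toy illustration of the MC-dropout recipe above (and not of the actual ResNet-50-based architecture used in the experiments), the following PyTorch sketch keeps dropout active at test time and aggregates T stochastic embeddings into a mean feature and a per-dimension variance; the module and function names are our own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropoutEmbedder(nn.Module):
    """Toy stand-in for the feature extractor f_e with a dropout layer."""
    def __init__(self, in_dim=128, emb_dim=32, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)   # L2-normalized embedding

@torch.no_grad()
def mc_dropout_embedding(model, x, T=20):
    """Mean embedding and per-dimension variance from T stochastic passes,
    keeping dropout switched on at test time (MC dropout)."""
    model.train()                                  # keep dropout active
    samples = torch.stack([model(x) for _ in range(T)])
    mu = samples.mean(dim=0)
    var = ((samples - mu) ** 2).mean(dim=0)        # matches the 1/T estimator above
    return mu, var

if __name__ == "__main__":
    model = DropoutEmbedder()
    x = torch.randn(4, 128)                        # a toy batch of input features
    mu, var = mc_dropout_embedding(model, x)
    print(mu.shape, var.mean().item())
```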
§.§.§ Risk Notion In , we denote the risk of an image retrieval system by ρ, which is defined as the probability of a retrieval set missing all ground truth samples of the given query sample: ρ(ℛ) = E_X ∈𝒬[ℓ(ℛ(X), 𝒮(X))], where ℛ denotes a generic retrieval system as in (<ref>), 𝒮 denotes retrieving all ground truth samples, and ℓ indicates whether the retrieval set misses all ground truth samples: ℓ(ℛ(X), 𝒮(X)) = 1(ℛ(X) ∩𝒮(X) = ∅). The risk function ρ(·) evaluates the performance of a retrieval system ℛ, and is bounded between 0 and 1: ρ(ℛ)=0 occurs when the retrieval set ℛ(X) always covers at least one ground truth, and ρ(ℛ)=1 happens when the retrieval set ℛ(X) never covers a ground truth. §.§.§ Adaptive Retrieval In practice, practitioners may want `easy' query samples to have smaller retrieval sets than `hard' query samples, because this would save downstream processing time on the easy ones without sacrificing accuracy. Therefore, we propose to adapt the retrieval set size to the query's difficulty level, and we employ the common heuristic uncertainty of query samples as a coarse measure of difficulty. Specifically, we propose a simple yet effective mapping function π_κ() as follows: π_κ: K_i ←⌈κ·Φ[f_u(X_i^𝒬)]⌉, where κ∈R^+ denotes a positive real number, f_u is a heuristic uncertainty estimator, Φ means normalizing the result to [0, 1], and ⌈·⌉ means rounding up to the nearest integer. The mapping function is essential as it allows retrieving varying numbers of candidates, tailored to the uncertainty of each query sample. Compared to the conventional retrieval function in (<ref>) that always returns a fixed number (i.e., K) of candidates, our adaptive retrieval function retrieves K_i candidates depending on the choice of κ and the uncertainty of the query sample: {Y_i, j | j=1,2,..,K_i} = ℛ_[κ, f_u, f_e] (X_i^𝒬). §.§.§ Risk Control The adaptive retrieval strategy makes the retrieval set size vary according to the uncertainty of the query samples, but it still cannot ensure that the risk of the retrieval set is below a predefined level. In this section, we discuss how to make the risk of the adaptive retrieval set controllable. Given a dataset {𝒬, 𝒟}, let the pretrained feature extractor f_e and uncertainty estimator f_u be fixed; then the risk function ρ(ℛ_[κ, f_u, f_e]) is a monotone nonincreasing function of κ for all κ∈R^+. For any query X_i^𝒬, if κ_1 > κ_2, then K_i, κ_1≥ K_i, κ_2 (by the mapping function (<ref>), as f_u is fixed), which means |ℛ_[κ_1, f_u, f_e](X_i^𝒬)|≥|ℛ_[κ_2, f_u, f_e](X_i^𝒬)|. Since in a retrieval system the candidates are retrieved based on a consistent distance metric d (see (<ref>)), we have ℛ_[κ_2, f_u, f_e](X_i^𝒬) ⊆ℛ_[κ_1, f_u, f_e](X_i^𝒬). Then, by the loss definition in (<ref>), ℓ(ℛ_[κ_2, f_u, f_e](X_i^𝒬), 𝒮(X_i^𝒬)) ≥ℓ(ℛ_[κ_1, f_u, f_e](X_i^𝒬), 𝒮(X_i^𝒬)), and thus ρ(ℛ_[κ_2, f_u, f_e]) ≥ρ(ℛ_[κ_1, f_u, f_e]). For a trained image retrieval system, f_u and f_e are fixed; therefore, the risk of the adaptive retrieval system depends only on the parameter κ, which we denote as ρ(κ). Assume ρ(κ) is a monotone nonincreasing function and that an upper confidence bound ρ̂^+(κ) is accessible for each κ, satisfying ∀κ∈R^+, P(ρ(κ) ≤ρ̂^+(κ)) ≥ 1-δ. Let κ̂ denote the smallest κ such that for any κ^' > κ̂ we have ρ̂^+(κ^') ≤α: κ̂ = inf{κ: ρ̂^+(κ^') < α, ∀κ^' > κ}. Then P(ρ(κ̂) ≤α) ≥ 1-δ. Consider the smallest κ that controls the risk to be at most α, i.e., κ^* = inf{κ: ρ(κ) ≤α}. Suppose ρ(κ̂) > α; then, by the monotonicity of ρ(κ), we have κ̂ < κ^*.
Eq.(<ref>) implies ρ̂^+(κ^*)<α, and Eq.(<ref>) implies ρ(κ^*)=α (by continuity of ρ(κ)). According to (<ref>), this case only happens with a probability of at most δ. Therefore, P(ρ(κ̂) ≤α) ≥ 1-δ. Fig.<ref> provides an experimental example that helps to visualize the proof process. With Eq.(<ref>), we can proceed to determine an upper confidence bound, denoted by ρ̂^+(κ). According to <cit.>, Hoeffding's Inequality is applicable to our risk function ρ(κ), which is bounded by one. We denote the empirical risk on the calibration set by ρ̂(κ). Hoeffding's Inequality indicates that P(ρ̂(κ) - ρ(κ) ≤ -x) ≤ e^-2nx^2, which implies an upper confidence bound ρ̂^+(κ)=ρ̂(κ)+√(log(1/δ)/(2n)). Given a predefined risk requirement, i.e., risk level α and error rate δ, we use (<ref>) to compute κ̂. The computation process is outlined in Algorithm <ref>. Once we have κ̂, the retrieval set covers at least one ground truth sample with probability at least 1-α, and this guarantee itself holds with probability at least 1-δ. § EXPERIMENTS §.§ Datasets CUB-200 <cit.> contains 11,788 images of 200 classes. Each class has at least 50 images. We use the first 100 classes as the training set and the other 100 as the test set. Stanford CAR-196 <cit.> contains 16,185 images of 196 classes. Each class has at least 80 images. We use the first 98 classes as the training set and the other 98 classes as the test set. Pittsburgh <cit.> is a large image database from Google Street View. Following the split of NetVLAD <cit.>, we adopt the subset that consists of 10k samples in the training/validation/test split. ChestX-Det <cit.> is a subset of the public dataset NIH ChestX-ray14, and it contains 3543 images with 14 classes (13 categories of diseases and normal cases). We choose the samples of six classes as the training set, and the rest as the test set. For all datasets, we randomly choose 50% of the images from the test set to form the calibration set. §.§ Implementations Our proposed framework is versatile and can be applied to any image retrieval model that provides heuristic uncertainty. For comparison purposes, we use the image retrieval model from <cit.> as the backbone model. Based on this backbone, we build four methods for comparison: Deterministic, MCD, BTL and Deep Ensemble. The architectures of the first three are depicted in Fig.<ref>. The Deterministic model is composed of a ResNet-50 <cit.> backbone, a GeM layer <cit.>, a fully connected layer and an L2-normalization layer. The Deterministic model is trained with the triplet loss <cit.>. The MCD model is the same as the Deterministic model except that we apply dropout to all convolutional layers of the backbone at both training and test time. The BTL model has an additional variance head, and it is trained with the Bayesian triplet loss <cit.>. We build five deterministic models to form the Deep Ensemble model. We train the above models using the Adam optimizer, with an initial learning rate of 10^-5 and an exponential learning rate scheduler with gamma 0.99. The weight decay is set to 10^-3. Our model's feature dimension is 2048, and the variance dimension is 1. We adopt a hard mining strategy and feed the model with triplets that violate the triplet margin <cit.> (please see the supplementary material for details). §.§ Metrics §.§.§ Empirical risk The goal of the proposed framework is to control the empirical risk. The empirical risk is evaluated on the test split, and should be below α with a probability of 1-δ (according to Theorem <ref>).
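For concreteness, the calibration step described above (the Hoeffding upper bound plus the monotone search for κ̂ in Algorithm <ref>) together with the adaptive set-size mapping π_κ can be sketched in a few lines. The snippet below is a minimal illustration, assuming precomputed, normalized heuristic uncertainties for the calibration queries and a miss(i, k) oracle that reports whether the top-k retrieval for calibration query i misses all of its ground truth samples; the helper names (upper_bound, adaptive_k, calibrate_kappa) are hypothetical and not taken from any released code.

import math
import numpy as np

def upper_bound(emp_risk, n, delta):
    # Hoeffding upper confidence bound: rho_hat + sqrt(log(1/delta) / (2n))
    return emp_risk + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

def adaptive_k(kappa, u_norm):
    # adaptive retrieval size K_i = ceil(kappa * normalized query uncertainty)
    return int(math.ceil(kappa * u_norm))

def empirical_risk(kappa, cal_uncerts, miss):
    # fraction of calibration queries whose adaptive retrieval set misses every ground truth
    return float(np.mean([miss(i, adaptive_k(kappa, u)) for i, u in enumerate(cal_uncerts)]))

def calibrate_kappa(cal_uncerts, miss, alpha, delta, kappa_grid):
    # scan kappa from large to small, keeping the smallest value whose bound stays below alpha
    n = len(cal_uncerts)
    kappa_hat = None
    for kappa in sorted(kappa_grid, reverse=True):
        if upper_bound(empirical_risk(kappa, cal_uncerts, miss), n, delta) < alpha:
            kappa_hat = kappa   # risk bound still controlled; try a smaller kappa
        else:
            break               # by monotonicity, smaller kappa can only increase the risk
    return kappa_hat

At test time, the calibrated κ̂ would simply be plugged into the same adaptive set-size mapping to size each query's retrieval set.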
§.§.§ Retrieval size It is possible that the risk control algorithm resorts to large retrieval sets to lower the empirical risk. However, large retrieval sets tend to include more negative samples and can be inefficient for downstream processing. Ideally, the retrieval size should be minimized while still controlling the risk to stay below the threshold. §.§.§ Recall@K The performance of the retrieval model itself matters. We use Recall@K as the metric, which is defined as the percentage of queries whose top-K retrieved samples contain at least one sample that shares the same class as the query. §.§.§ ECE@1 Expected Calibration Error (ECE) <cit.> is used to measure the quality of the estimated heuristic uncertainty. We follow <cit.> and adopt ECE@1, where a lower value indicates a better-calibrated uncertainty. §.§.§ Qualitative visualization We also provide qualitative visualizations of the image retrieval results. §.§ Results §.§.§ Image retrieval performance We begin by evaluating the retrieval performance of the various methods. The Recall@1 results of different methods on the various test sets are presented in Fig.<ref>. BTL achieves a higher Recall@1 than Deterministic on most datasets, suggesting that an appropriate uncertainty-aware image retrieval method can enhance image retrieval performance. Nevertheless, MCD's performance is inconsistent when compared to Deterministic, which may be due to the impact of the dropout layers on the representation power, depending on the dataset. Deep Ensemble achieves the best performance, which can be attributed to the fact that it utilizes multiple models to average out noise. §.§.§ Uncertainty estimation performance Fig.<ref> depicts the reliability diagrams of the different methods across the four datasets. It can be seen that none of these curves conforms to the ideally-calibrated line (dashed line), indicating that the heuristic uncertainty itself is not reliable. Moreover, the gaps between the curves and the dashed line vary across datasets for MCD, BTL and Deep Ensemble, implying that the same uncertainty estimation method cannot perform consistently across different datasets. §.§.§ Risk control performance We incorporate the proposed framework into MCD, BTL, and Deep Ensemble, resulting in the image retrieval pipelines MCD[R], BTL[R], and Deep Ensemble[R], respectively. Meanwhile, to evaluate the practical usefulness of heuristic uncertainty, we normalize the heuristic uncertainties to the range [0, 1] based on their statistics on the calibration set. Then each query-candidate pair's uncertainty is calculated as the sum of their individual uncertainties <cit.>. This results in three comparison methods: MCD[H], BTL[H] and Deep Ensemble[H]. With a predefined risk level α and error rate δ, MCD[R], BTL[R] and Deep Ensemble[R] calculate κ̂ using Algorithm <ref> on the calibration set. For fairness, MCD[H], BTL[H] and Deep Ensemble[H] retrieve as many candidates as ×[R] can achieve, and then only keep the pairs whose uncertainties are below α. We perform the image retrieval on the test sets and report the empirical risks. Fig.<ref> shows the empirical risk of the different methods on the different datasets. It can be observed that the risks of ×[R] are always below the predefined risk levels α. This means that even before retrieval we can say that the retrieval sets of ×[R] have a probability of at least 1-α of covering the ground truth, with only a δ probability of this guarantee being violated.
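For reference, the two headline metrics above can be computed directly from the retrieval sets. The snippet below is a minimal sketch, assuming the per-query retrieval results and ground truth indices are available as lists; the names are illustrative only.

import numpy as np

def empirical_risk(retrieval_sets, ground_truths):
    # fraction of queries whose retrieval set misses every ground truth sample
    misses = [len(set(r) & set(g)) == 0 for r, g in zip(retrieval_sets, ground_truths)]
    return float(np.mean(misses))

def recall_at_k(ranked_lists, ground_truths, k):
    # fraction of queries whose top-k candidates contain at least one correct sample
    hits = [len(set(r[:k]) & set(g)) > 0 for r, g in zip(ranked_lists, ground_truths)]
    return float(np.mean(hits))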
In comparison, the heuristic-uncertainty-only methods, including MCD[H], BTL[H] and Deep Ensemble[H], exhibit risks higher than the predefined risk levels in the range α < 0.4. The results show that heuristic uncertainty alone is not reliable for risk control, whereas the proposed framework controls the risk well. In addition, we show the effect of using different δ, which controls how conservative the framework is. Fig.<ref> shows the empirical risk for different δ on the Pittsburgh dataset. With a smaller δ, the framework becomes more conservative (i.e., uses a larger retrieval size) to ensure the risk is below the given α. It is possible that the framework always resorts to large retrieval sets to lower the empirical risk, in which case the risk is trivially controlled but the retrieval sets would be of less practical use. To examine this, we compare MCD[R], BTL[R] and Deep Ensemble[R] against their fixed-size retrieval counterparts. Fig.<ref> shows that, when achieving the same Recall@K, ×[R] and their fixed-size retrieval counterparts have a similar average retrieval set size. This indicates that ×[R] does not rely on increasing K by brute force to control the risk. However, we also notice that MCD[R] and Deep Ensemble[R] on the Pittsburgh test set have a slightly larger average retrieval set size than their fixed-size retrieval counterparts. This is likely due to poor uncertainty estimation on Pittsburgh, as shown in Fig.<ref>. §.§.§ Qualitative visualization The distribution of retrieval size on the Pittsburgh test set is presented in Fig. <ref>. It is evident that the retrieval size varies with the risk level α: a smaller α results in a larger retrieval size, and vice versa. This helps practitioners save time on easy queries and focus on more difficult ones. Fig.<ref> shows the qualitative visualization of retrievals on the CAR-196 dataset by different methods. In the first row, it is evident that when a relatively easy query is provided, all methods successfully retrieve the correct results. Moving on to the second row, where the query is more challenging, the fixed-size retrieval method fails without any indication, and BTL[H] fails as well with a low heuristic number. However, BTL[R] takes into account the difficulty of the query and retrieves an additional candidate, resulting in the correct answer. As for the third row, it represents a more demanding query. BTL[H] generates no candidate in this case, while BTL[R] retrieves more candidates to meet the coverage requirement. § CONCLUSION This paper introduces , a significant improvement over current uncertainty estimation for image retrieval. Unlike the heuristic notion of uncertainty provided by existing methods, it offers a risk guarantee. This paper includes a theoretical analysis of the risk bound and presents extensive experimental results that demonstrate its efficacy in controlling the risk of a generic image retrieval system. We believe that this advancement can benefit risk-sensitive applications, such as medical image retrieval and autonomous driving.
http://arxiv.org/abs/2307.07579v1
20230714190524
Webb's PEARLS: Transients in the MACS J0416.1-2403 Field
[ "Haojing Yan", "Zhiyuan Ma", "Bangzheng Sun", "Lifan Wang", "Patrick Kelly", "Jose M. Diego", "Seth H. Cohen", "Rogier A. Windhorst", "Rolf A. Jansen", "Norman A. Grogin", "John F. Beacom", "Christopher J. Conselice", "Simon P. Driver", "Brenda Frye", "Dan Coe", "Madeline A. Marshall", "Anton Koekemoer", "Christopher N. A. Willmer", "Aaron Robotham", "Jordan C. J. D'Silva", "Jake Summers", "Rachana A. Bhatawdekar", "Cheng Cheng", "Adi Zitrin", "S. P. Willner" ]
astro-ph.GA
[ "astro-ph.GA", "hep-ex" ]
0000-0001-7592-7714]Haojing Yan Department of Physics and Astronomy, University of Missouri, Columbia, MO 65211, USA 0000-0003-3270-6844]Zhiyuan Ma Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA 0000-0001-7957-6202]Bangzheng Sun Department of Physics and Astronomy, University of Missouri, Columbia, MO 65211, USA 0000-0001-7092-9374]Lifan Wang George P. and Cynthia Woods Mitchell Institute for Fundamental Physics & Astronomy, Texas A. & M. University, Department of Physics and Astronomy, 4242 TAMU, College Station, TX 77843, USA 0000-0003-3142-997X]Patrick Kelly School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455, USA 0000-0001-9065-3926]Jose M. Diego Instituto de Física de Cantabria (CSIC-UC), Avda. Los Castros s/n, 39005, Santander, Spain 0000-0003-3329-1337]Seth H. Cohen School of Earth & Space Exploration, Arizona State University, Tempe, AZ 85287-1404, USA 0000-0001-8156-6281]Rogier A. Windhorst School of Earth & Space Exploration, Arizona State University, Tempe, AZ 85287-1404, USA Department of Physics, Arizona State University, Tempe, AZ 85287-1504, USA 0000-0003-1268-5230]Rolf A. Jansen School of Earth & Space Exploration, Arizona State University, Tempe, AZ 85287-1404, USA 0000-0001-9440-8872]Norman A. Grogin Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA 0000-0002-0005-2631]John F. Beacom Center for Cosmology and AstroParticle Physics (CCAPP), The Ohio State University, Columbus, OH 43210, USA Department of Physics, The Ohio State University, Columbus, OH 43210, USA Department of Astronomy, The Ohio State University, Columbus, OH 43210, USA 0000-0003-1949-7638]Christopher J. Conselice Jodrell Bank Centre for Astrophysics, Alan Turing Building, University of Manchester, Oxford Road, Manchester M13 9PL, UK 0000-0001-9491-7327]Simon P. Driver International Centre for Radio Astronomy Research (ICRAR) and the International Space Centre (ISC), The University of Western Australia, M468, 35 Stirling Highway, Crawley, WA 6009, Australia 0000-0003-1625-8009]Brenda Frye Steward Observatory, University of Arizona, 933 N Cherry Ave, Tucson, AZ, 85721-0009, USA 0000-0001-7410-7669]Dan Coe Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA 0000-0001-6434-7845]Madeline A. Marshall National Research Council of Canada, Herzberg Astronomy & Astrophysics Research Centre, 5071 West Saanich Road, Victoria, BC V9E 2E7, Canada ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia 0000-0002-6610-2048]Anton Koekemoer Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA 0000-0001-9262-9997]Christopher N. A. Willmer Steward Observatory, University of Arizona, 933 N Cherry Ave, Tucson, AZ, 85721-0009, USA 0000-0003-0429-3579]Aaron Robotham International Centre for Radio Astronomy Research (ICRAR) and the International Space Centre (ISC), The University of Western Australia, M468, 35 Stirling Highway, Crawley, WA 6009, Australia 0000-0002-9816-1931]Jordan C. J. D'Silva International Centre for Radio Astronomy Research (ICRAR) and the International Space Centre (ISC), The University of Western Australia, M468, 35 Stirling Highway, Crawley, WA 6009, Australia 0000-0002-7265-7920]Jake Summers School of Earth & Space Exploration, Arizona State University, Tempe, AZ 85287-1404, USA 0000-0003-0883-2226]Rachana A. 
Bhatawdekar European Space Agency, ESA/ESTEC, Keplerlaan 1, 2201 AZ Noordwijk, NL 0000-0003-0202-0534]Cheng Cheng Chinese Academy of Sciences South America Center for Astronomy, National Astronomical Observatories, CAS, Beijing 100101, China 0000-0002-0350-4488]Adi Zitrin Physics Department, Ben-Gurion University of the Negev, P.O. Box 653, Beer-Sheva 8410501, Israel 0000-0002-9895-5758]S. P. Willner Center for Astrophysics Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA With its unprecedented sensitivity and spatial resolution, the James Webb Space Telescope (JWST) has opened a new window for time-domain discoveries in the infrared. Here we report observations in the only field that has received four epochs (over 126 days) of JWST NIRCam observations in Cycle 1. This field is towards MACS J0416.1-2403, which is a rich galaxy cluster at z=0.397 and is one of the Hubble Frontier Fields. We have discovered 14 transients from these data. Twelve of these transients happened in three galaxies (with redshifts z=0.94, 1.01, and 2.091) crossing a lensing caustic of the cluster, and these transients are highly magnified by gravitational lensing. These 12 transients are likely of a similar nature to those previously reported based on the Hubble Space Telescope (HST) data in this field, i.e., individual stars in the highly magnified arcs. However, these twelve could not have been found by the HST because they are too red and too faint. The other two transients are associated with background galaxies (z=2.205 and 0.7093) that are only moderately magnified, and they are likely supernovae. They indicate a de-magnified supernova surface density of ∼0.5 arcmin^-2 integrated up to z≈ 2 when monitored at the time cadence of a few months. Such a high surface density is achieved at the ∼3–4 μm survey limit of AB ∼ 28.5 mag, which, while beyond the capability of HST, can be easily reached by JWST. § INTRODUCTION New capabilities in multi-messenger and time-domain astronomy will open outstanding vistas for discovery, as highlighted, for example, in the Decadal Survey on Astronomy and Astrophysics 2020 (Astro2020)[<https://nap.nationalacademies.org/resource/26141/interactive/>]. A core challenge is in localizing sources on the sky and in redshift, with the primary technique being searches for electromagnetic counterparts, which calls for observatories with the best flux sensitivity and angular resolution possible. Until recently, one of the leading observatories for this purpose was the Hubble Space Telescope (HST), which had many successes. For example, it has found high-redshift supernovae, especially those of type Ia, which constrain cosmological models. In many cases, these observations were integrated with large, general-purpose extragalactic surveys <cit.>. Observations of high-redshift clusters have been important for increasing yields <cit.> while observations of low- redshift clusters have led to the first discovery of a multiply imaged supernova (at z=1.49) due to gravitational lensing <cit.>, which provides a new route to constrain H_0 through time delay <cit.>. For another example, a novel type of transient phenomena — caustic-crossing transients — has been identified through HST observations <cit.>. These are individual stars in highly magnified background galaxies lying very close to the critical curve of the lensing cluster, which are further magnified — temporarily — by the intracluster stars that serve as microlenses. 
These have given us a completely unexpected method to study individual stars at cosmological distances. The advent of the James Webb Space Telescope (JWST) has brought dramatically better opportunities, due to its more than an order of magnitude increase of efficiency relative to HST. The Prime Extragalactic Areas for Reionization and Lensing Science program <cit.>, one of the programs under the JWST Interdisciplinary Scientists' Guaranteed Time Observations (GTO), has a major time-domain science component. One of its fields is MACS J0416.1-2403 (hereafter M0416), which is a lensing cluster at z=0.397 and one of the Hubble Frontier Fields <cit.>, where several caustic-crossing transients have been found with HST near two caustic-straddling arcs. <cit.> discovered two fast transients in an arc at z=1.0054, which are collectively nicknamed by the authors as “Spock.” The transients in this strongly lensed galaxy are consistent with being supergiant stars with temperatures between 3500 K and 18000 K, as shown in <cit.>. <cit.> and <cit.> identified a transient of similar nature in another arc at z=0.94, which is named “Warhol.” In addition, the ultra-deep, UV-to-visible HST program “Flashlights” detected two high-significance caustic transients in the Spock arc and four in the Warhol arc <cit.>. It is expected that these highly lensed regions are constantly producing caustic transients. To take advantage of these opportunities with JWST, PEARLS has incorporated three epochs of NIRCam observations of M0416 in its design. We expect that these data are particularly powerful for detecting red supergiant stars at redshifts z≳ 1 because they are bright at λ≳ 2 μm. The design is also motivated by the possibility of detecting individual Population III stars through caustic transits at z>7 <cit.>. Another JWST GTO program, the CAnadian NIRISS Unbiased Cluster Survey <cit.>, also has one separate epoch of NIRCam observations on this cluster. All these data have been taken, which makes M0416 the only field in JWST Cycle 1 that has four epochs of NIRCam observations. For this reason, M0416 is the best region in the sky to date for studying infrared transients. We have carried out a transient search using these unique data. The search has gone beyond the aforementioned two arcs, as we also intend to assess the general infrared transient rate in less magnified regions at depths that have never been probed before. This paper is the first in a series on this subject and presents the overview of the transients found in this field. The paper is organized as follows. The NIRCam observations and data are described in Section 2. The transient search is detailed in Section 3. We discuss the transients in Section 4 and conclude with a summary in Section 5. All magnitudes are in the AB system. All coordinates are in the ICRS frame (equinox 2000). § OBSERVATIONS AND DATA The four epochs of NIRCam observations all used the same eight bands, namely, F090W, F115W, F150W, and F200W in the “short wavelength” (SW) channel and F277W, F356W, F410M, and F444W in the “long wavelength” (LW) channel, respectively. The native NIRCam pixel scales are 0031 pix^-1 in the SW channel and 0063 pix^-1 in the LW channel. As the SW channel is made of four detectors, the observations used the dithers to cover the gaps in between. The PEARLS observations adopted the readout pattern with “up-the-ramp” fitting to determine the count rate, while those of CANUCS used a combination of the and patterns. 
The total exposure times, dates of observation, and 5 σ depths of these observations are summarized in Table <ref>. NIRCam has two nearly identical modules (“A” and “B”) that subtend two adjacent, square fields. As the spatial orientations of the JWST instruments vary in time on an annual basis, these two fields cannot both be on the same region in the sky within a year. For this reason, all four epochs of NIRCam observations were designed to center the B module on the cluster, leaving the A module mapping different regions in the flanking area. In this transient study, we only use the module B data because only these are spatially overlapped. The data were retrieved from the Mikulski Archive for Space Telescopes (MAST). Reduction started from the so-called Stage 1 “uncal” products, which are the single exposures from the standard JWST data reduction pipeline after Level 1b processing. We further processed these products using the version 1.9.4 pipeline in the context of . A few changes and augmentations were made to the pipeline to improve the reduction quality; most importantly, these included enabling the use of an external reference catalog for image alignment and implementing a better background estimate for the final stacking. The astrometry of the single exposures was calibrated using the GAIA 3rd Data Release. These single images were projected onto the same grid and were stacked in each band and in each epoch. We produced two versions of stacks, one at the pixel scale of 0.06″ (hereafter the “60mas” version) and the other at the scale of 0.03″ (the “30mas” version), to best match the native pixel scales in the LW and the SW channels, respectively. The mosaics are in surface brightness units of MJy sr^-1. The AB magnitude zeropoints are 26.581 and 28.087 for the 60mas and the 30mas stacks, respectively. Figure <ref> shows the composite color image using the data from all these four epochs. For convenience, hereafter we refer to the three epochs from the PEARLS program as Ep1, Ep2 and Ep3, respectively, where “p” stands for “PEARLS”; the epoch from the CANUCS program, which was in between Ep2 and Ep3, is referred to as Ec (“c” stands for “CANUCS”). § TRANSIENT DISCOVERIES §.§ Search method The search for transients was done in the usual way by detecting positive peaks on difference images between epochs. Thanks to the excellent image alignment and stable image quality, we were able to form the difference images by direct subtraction, for which we used the 60mas images. Due to the intense human labor in the visual inspection step (see below), this work is limited to these pairs of difference images: Ep1 - Ep2 for the search of decaying sources in Ep1, Ep2 - Ep1 for new sources appearing in Ep2, and Ep3 - Ep1 for new sources appearing in Ep3. Ec was not used to initiate transient searches, as it was only 13.3 days after Ep2; however, it was used when studying the light curves of the identified transients. When building a difference image, its “root mean square” (RMS) map was also constructed by adding the error maps of the parent images in quadrature. As compared to the SW images, the LW images are less affected by defects and therefore their difference images are cosmetically cleaner. We chose to use the F356W band as the basis of our search because its data are the deepest. The 60mas version was used in the initial search. SExtractor <cit.> was run on the F356W difference images between epochs.
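As a rough illustration of this differencing-and-detection step, the sketch below forms a difference image from two aligned epochs, propagates their error maps in quadrature into an RMS map, and flags significant positive peaks. It uses photutils instead of SExtractor purely for illustration, and the function name find_positive_peaks and its arguments are placeholders rather than actual product names.

import numpy as np
from photutils.segmentation import detect_sources

def find_positive_peaks(sci_a, err_a, sci_b, err_b, nsigma=5.0, npixels=5):
    # difference image between two aligned epochs (epoch b minus epoch a)
    diff = sci_b - sci_a
    # RMS map: error maps of the parent images added in quadrature
    rms = np.sqrt(err_a**2 + err_b**2)
    # keep positive peaks above nsigma times the local RMS with at least npixels connected pixels
    segm = detect_sources(diff, threshold=nsigma * rms, npixels=npixels)
    return diff, rms, segm  # segm is None if nothing exceeds the threshold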
The RMS maps were used in the process to estimate the signal-to-noise ratios (S/N) of the peaks in the difference images. Only the peaks that have S/N≥ 5 were further considered. This left thousands of peaks in each difference image, which were then visually inspected. Not surprisingly, the vast majority of these peaks are not transient objects but are residuals caused by imperfect subtraction of bright stars and galaxies due to two reasons: (1) the values of the pixels occupied by bright objects fluctuate over different epochs because of Poisson noise, which often results in positive peaks accompanied by negative peaks in a difference image; (2) the position angle of the point spread function (PSF) is different in different epochs (due to the different field orientations), which leads to spurious sources around bright objects that appear in different positions in different epochs. After this visual inspection step, only a few tens of transient candidates survived. To ensure their reliability, we further required that the selected transients should be detected at S/N≥ 5 in the difference image of at least one more band in addition to F356W. Under this requirement, we obtained a total of 14 robust transients. §.§ Descriptions of the transients and their photometry Our sample includes seven transients in the Warhol region, four in the Spock region, one in yet another arc, and two in other regions. Their locations are indicated in Figure <ref>. These objects and their photometry are described below. In most cases, these sources are embedded in a highly non-uniform background and/or are affected by contamination from nearby objects, and PSF fitting had to be used to obtain reliable photometry. For this purpose, the 30mas images are more appropriate. To be consistent, we use PSF fitting for all objects on the 30mas images, and the detailed process is explained in the Appendix. §.§.§ Transients in the Warhol region Figure <ref> shows the positions of the seven transients discovered in this region. The first three letters in their IDs indicate the difference images on which they were first detected in our process; for example, “Dc2” stands for the difference image constructed by subtracting the Ep2 image from the Ec image, and so on. The letter “W” indicates that these are in the Warhol region. The photometric results are listed in Table <ref>. Figure <ref> shows three of them (, and ) that were seen in multiple epochs, while Figure <ref> shows the other four (, , , ) that were only visible in a single epoch. ∙ This transient was visible in Ep1, reached the maximum brightness in Ep2, became fainter in Ec, and further declined in brightness in Ep3 but still remained visible. It faded more rapidly in the red bands than in the blue ones. At its peak (m_277=27.10 mag), it was the brightest among all transients in this region. As it was visible in all epochs, its photometry was done in each epoch individually. ∙ This transient was invisible in Ep1, appeared in Ep2 and slowly varied in the following two epochs. Interestingly, its behaviors in the SW and the LW bands differed: while it decayed with time in the LW bands, it became much brighter in the SW bands in Ep3, especially in the two bluest bands. The photometry was done on the difference images between these epochs and Ep1 (i.e., the D21, Dc1 and D31 images), as this would offer a more reliable determination of the background. ∙ This transient was only 0.28 away from and was also invisible in Ep1. It appeared in Ep2 in F150W and redder bands. 
It was much weaker in Ec and barely (if at all) visible in Ep3. The photometry was done on the difference images between all other epochs and Ep1. The decline in brightness from Ep2 to Ec is very obvious in the blue bands. The F444W photometry weakly suggests that it might be slightly brightened from Ep2 to Ec, however this is inconclusive due to the large errors. The extracted signals in Ep3 all have S/N< 2, which we consider as non-detections. ∙ This event appeared as a sudden brightening in Ec, particularly in the LW bands. As mentioned earlier, Ec was not used to initiate the transient search; this event was found on the difference images involving Ec when inspecting other transients in the Warhol region. While there seems to be a “source” in other epochs at this location, there is no detectable signal in the difference images in between Ep1, Ep2 and Ep3. This means that the event happened only in Ec and that it left no trace in any other epochs, including Ep2, which was only 13.3 days apart. The photometry was done on the difference images between Ec and Ep1 (i.e., the Dc1 images). ∙ This transient was only seen in Ep3, as it was only visible in the difference images involving Ep3. It was very close to but was a different transient. It was invisible in the three bluest bands. The photometry was done on the difference images between Ep3 and Ep1 (the D31 images). ∙ and These two transients were also only seen in Ep3. Like , the photometry was done on the difference images between Ep3 and Ep1 (the D31 images). §.§.§ Transients in the Spock region Figure <ref> shows the positions of the four transients in this region. Due to the high brightness of this arc, these transients can only be revealed in the difference images and none of them are clearly seen in the original images. The designation of their IDs follows the convention as in 3.2.1, with the exception that “S” is used to indicate that these transients are in the Spock region. Figures <ref> to <ref> show the details of these transients. Their photometry (except for , see below) is presented in Table <ref>. ∙ This transient is best detected in the difference images between Ep2 and Ep1 (the D21 images), but is only seen in the LW bands. It becomes significantly weaker in the difference images between Ec and Ep1 (the Dc1 images), and almost completely disappears from those between Ep3 and Ep1 (the D31 images). All this indicates that it reached the maximum in Ep2 and then faded. Assuming that it was invisible in Ep1, we obtained its magnitudes in Ep2 and Ec by photometry on the difference images between Ep2 and Ep1 (D21) and those between Ec and Ep1 (Dc1). ∙ This transient is detected in the difference images involving Ep2 but not without. Therefore, it is reasonable to assume that this event was caught in Ep2 only. The photometry was done on the difference images between Ep2 and Ep3 (i.e., the D23 images) because they offer a cleaner background than others (e.g., the D21 images). ∙ This transient is best detected in the difference images between Ep2 and Ep3 (D23). It appears in the D23 images in F150W through F410M, is barely visible in F444W, and is invisible in F115W and F090W. This transient presents a complicated case that is difficult to understand. First of all, it seems to be an elongated system in the D23 F356W image. In the D23 F200W and F150W images, which have better resolution, this elongated structure is resolved into two components. 
However, it does not maintain such a two-component structure (or the elongated morphology) consistently in all bands: one of the components (the southern one) is missing from the D23 F277W and F410M images. Second, in the difference images between Ep2 and Ep1 (D21), only the southern component appears, and it only appears in F200W and F150W. Taking the above at the face value, one would infer the following picture: (1) D21-S3 was made of two components; (2) the northern one maintained its brightness from Ep1 to Ep2 and then decayed (not visible in the D21 images but showing up in the D23 images); (3) the southern component brightened in Ep2 but only in F150W and F200W (in the D21 images, only visible in F150W and F200W), and then decayed; (4) however, this southern component maintained the brightness from Ep1 through Ep3 in F277W, F356W and F410M. The last point is inconsistent with the observation that the southern component seems to be present in the D23 F356W image. It is possible to attribute this inconsistency to the weakness of the signals. We attempted PSF fitting on the two components in the D23 images, however the fitting failed in all bands. We have to give up photometry on this transient. ∙ This transient is only detected in the difference images involving Ep3, implying that it appeared in Ep3. It is very close to (only 0.39 apart), which already decayed and was invisible in Ep3. The photometry was done on the difference images between Ep3 and Ep1 (D31). We note that the number of transients (four) observed in this arc in the four epochs is consistent with expectations made for JWST for this particular arc. In <cit.>, the authors find that due to microlensing, one expects between 1–5 transients per pointing in the Spock arc when reaching ∼29 mag. §.§.§ A transient in yet another arc There is one transient identified on an arc at z=2.091 <cit.> where no previous transient has been reported. Dubbed “Mothra,” this transient is discussed in detail in Diego et al. (2023b, to be submitted). Here we only present its discovery. Figure <ref> shows the details of this transient. Its location is labeled on the color images, which points to a bright knot (the faintest among the five knots) on the arc that was visible in all four epochs. This transient is best explained by the intrinsic variability of a red supergiant star that is being magnified by a dark milli-lens (Diego et al. 2023, to be submitted). The transient is best seen in the difference images between Ep3 and Ep1 (D31) and as well as in those between Ec and Ep1 (Dc1), where it shows up as a strong, red source decreasing the amplitude towards the blue end. It is even visible in the difference images between Ec and Ep2 (Dc2; 13.3 days apart), albeit being much weaker. It is almost invisible in the difference images between Ep3 and Ec (D3c; 29.4 days apart). In other words, this knot was slowly increasing in brightness and reached the maximum at around Ec (96.7 days between Ep1 and Ec), and it more or less kept at the maximum through Ep3 (29.4 days between Ec and Ep3). Since this transient event was caused by the variability of a source that was visible in all epochs, ideally its photometry should be done on the images taken in each epoch. As it is almost blended with the nearby (and brighter) knot, one should do PSF fitting on both simultaneously. However, we were only able to obtain reasonable PSF fitting results in Ep1 (see the Appendix for details). 
The procedure failed in other epochs, mostly because the light of brightened transient blended with the nearby knot more severely. Therefore, we performed PSF fitting on the difference images between Ep1 and other epochs (see Figure <ref>) and then added the excess fluxes extracted in this way to the Ep1 SED to obtain the SEDs in other epochs. The results are presented in Table <ref> and are also shown in the bottom row in Figure <ref>. §.§.§ Two likely supernovae There were two transients associated with galaxies that are only moderately magnified. Both of them were detected in multiple epochs, and neither was seen in the HFF data. Based on their light curves, we believe that they are SNe. Their physical interpretations will be detailed in a forthcoming paper (Wang et al., in prep.). Here we provide a brief description of their discovery and photometric properties. The photometry is presented in Table <ref>. ∙ We reported this event in <cit.> based on the data from Ep1 and Ep2. Figure <ref> shows the color images of the transient and its vicinity in the four epochs. The transient appeared as a blue source in Ep1 and then became very red in the subsequent epochs. From the difference images in between epochs, we can see that its red light (F200W and redder) reached the maximum in Ep2. It was very close to an irregular galaxy, which presumably is the host. The CANUCS NIRISS slitless spectroscopy shows that this galaxy is at z=2.205. Based on the latest lens model of this cluster (Diego et al., in prep.), the galaxy is amplified by a factor of μ=5.37. As the transient was visible in all epochs, its photometry should be done on the images in individual epochs. To minimize the impact of the contamination from the host galaxy, the photometry was done by PSF fitting. The results are also shown in Figure <ref>. ∙ This transient was found within a spiral galaxy at z=0.7093 (redshift based on <cit.>), which is amplified by μ=2.07 based on the same lens model. It was invisible in Ep1 and appeared in Ep2. From the difference images in between the four epochs, it seems that this transient was the brightest in most bands in Ep2 and then slightly decayed in Ec and Ep3. The photometry in Ep2, Ec and Ep3 was done on the difference images between Ep2 and Ep1 (D21), Ec and Ep1 (Dc1), and Ep3 and Ep1 (D31). In these difference images, its neighbourhood is still affected by the strong residuals due to the imperfect subtraction of the host galaxy bulge. Our PSF-fitting photometry reduced the contamination. § DISCUSSION The NIRCam data used here were produced by only half of the NIRCam field of view (by its module B). Due to the different field orientations at different times, the area overlapped in all four epochs amounts to only 3.98 arcmin^2. Our data result in 14 robust transients, the largest number of transients ever found within such a small area. There are two reasons for this high transient “production” rate. First, the high sensitivity of NIRCam has allowed us to search for transients to an unprecedented depth. The vast majority of our transients were fainter than 27.0 mag even at their peaks and were fainter than 28.0 mag most of the time in most bands. The PEARLS observations had exposure times of ∼0.8–1 hours per band per epoch, and the data have reached the 5 σ limits of ∼28.5–30.0 mag (within 0.2 radius aperture), which are sufficient for us to validate the transients in multiple bands. 
Second, M0416 has included two regions extremely magnified by the cluster, the Warhol and the Spock arcs, which are known to have produced a number of caustic transients in previous studies with HST. Both arcs are at relatively low redshifts (z≈ 1), which facilitates the detection of luminous stars in them. In our data spanning 126 days, these two regions contribute seven and four transients, respectively, making them the most fertile transient “factories.” In addition, our search found a transient in yet another arc where no transient had been seen previously (the “Mothra” arc). As mentioned above (and to be discussed in detail in our forthcoming papers), these transients are most likely stars in the lensed arcs that were temporarily magnified by a higher factor due to the microlensing effect. This remains the only direct way to study individual stars at cosmological distances, and therefore should be further pursued by JWST in the coming years. An interesting discovery is that these transients could occur very frequently; this is suggested by , a transient in the Warhol region that appeared in Ec and was not seen in either Ep2 (13.6 days before) or Ep3 (29.4 days after). A high cadence of ∼10 days (∼5 days in the rest frame) would potentially reveal more fast transients like , which is worth exploring with JWST in the near future. In addition to the 12 transients in the highly magnified arcs, we also discovered two transients within this <4 arcmin^2 search area in regions that are only moderately magnified (μ∼ 2–5). They are most likely SNe; given their brightness, they would also have been discovered even without the lensing magnification at the intrinsic, post-peak brightness of ∼28.5–29.0 mag. Taken at face value, these two SNe being discovered in 3.98 arcmin^2 implies an SN surface density of ∼0.5 arcmin^-2 integrated up to z≈ 2.2 when monitored over ∼126 days, which is broadly consistent with expectations <cit.>. An interesting point is that both SNe were caught near their maxima (in the rest-frame visible range) by the Ep2 observations. Note that Ep2 was separated by 83.4 days from Ep1. Due to time dilation, neither transient changed significantly in brightness from Ep2 to Ec, which was 96.7 days after Ep1. This suggests that a time cadence of ∼90 days is effective in discovering SN-like transients (integrated over all redshifts) and is highly likely to catch the events near their peaks. Finally, we note that there could be a color bias in the transients reported here: the vast majority of them are very red; the only exception is SN01 in Ep1, and it also transformed into a red object in Ep2. We believe that this is at least in part due to the fact that the initial selection was based on the F356W difference images. An initial selection based on an SW band (e.g., F150W) is possible, although it would be more complex in the visual validation due to the more numerous defects in the SW data. We will defer such an attempt to a future paper. § SUMMARY M0416 has been observed by NIRCam for four epochs, which makes it the most intensely monitored field by the JWST in its Cycle 1. The eight-band data also provide the best near-IR SED sampling to date. In this work, we present the transients identified in these four epochs that spanned over 126 days. In total, we have identified 14 transients.
Twelve of them occurred in three regions highly lensed by the cluster (seven, four, and one in the Warhol, Spock, and Mothra regions, respectively), while the other two happened in two background galaxies that are only moderately magnified (by ∼2–5×). The eight-band photometry enables the construction of their SEDs from 0.9 to 4.4 μm. This is the first time that time-domain studies are given such detailed information for interpretation. The analysis of these SEDs and light curves will be presented in our forthcoming papers. Our work here demonstrates the power of JWST in the study of the transient IR sky. It is now expected that JWST will be able to function for about twenty years, which will enable various long-term JWST monitoring programs addressing new sciences never possible before. A new era of the IR time-domain science has come. The NIRCam data presented in this paper can be accessed via [10.17909/wmmd-ev74]http://dx.doi.org/10.17909/wmmd-ev74 after the proprietary period. We thank the CANUCS team for generously providing early access to their proprietary data of MACS0416. This project is based on observations made with the NASA/ESA/CSA James Webb Space Telescope and obtained from the Mikulski Archive for Space Telescopes, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA), and the Canadian Astronomy Data Centre (CADC/NRC/CSA). We thank our Program Coordinator, Tony Roman, for his expert help scheduling this complex program. HY and BS acknowledge the partial support from the University of Missouri Research Council Grant URC-23-029. J.M.D. acknowledges the support of project PGC2018-101814-B-100 (MCIU/AEI/MINECO/FEDER, UE) Ministerio de Ciencia, Investigación y Universidades. LW acknowledges support from NSF through grant #1813825. SHC, RAW and RAJ acknowledge support from NASA JWST Interdisciplinary Scientist grants NAG5-12460, NNX14AN10G and 80NSSC18K0200 from GSFC. ZM is supported in-part by the National Science Foundation, grant #1636621. JFB was supported by NSF Grant No. PHY-2012955. CNAW acknowledges funding from the JWST/NIRCam contract NASS-0215 to the University of Arizona. CC is supported by the National Natural Science Foundation of China, No. 11803044, 12173045. AZ acknowledges support by Grant No. 2020750 from the United States-Israel Binational Science Foundation (BSF) and Grant No. 2109066 from the United States National Science Foundation (NSF), and by the Ministry of Science & Technology, Israel. While we used isophotal aperture photometry by SExtractor to search for transients, we used PSF fitting for the final photometry of the identified transients. This approach assumes that transients are all point-like even under the JWST resolution, which should be valid. The reason that we adopted the more complicated PSF fitting (as opposed to aperture photometry) was because of the background contamination. Our transients were all embedded in highly non-uniform background, and in a lot of cases it still leaves structures even in the difference images between epochs. In such situations, PSF fitting handles contamination better than any aperture photometry. We did PSF fitting on the 30mas images (as opposed to the 60mas used in the transient identification). The process is outlined as follows. We first generated the PSFs using the simulation tool WebbPSF (version 1.1.1) at the pixel scale of 30mas. 
For each band, PSFs were simulated at 36 evenly distributed positions on each detector, and each simulated PSF was saved as an individual image. All simulated PSFs were 87×87 pixels (2.61″×2.61″) in size. Then, effective PSFs were built using EPSFBuilder in Photutils. Hereafter these PSFs are referred to as the WPSFs. In the meantime, we also constructed empirical PSFs for comparison. The difficulty is that the M0416 field does not have many suitable stars. Nevertheless, we were able to find five isolated, unsaturated and high S/N stars for this purpose. A region of 87×87 pixels centered on each star was cut out from the image, and the five cutouts were sent to EPSFBuilder to build the empirical PSF. These empirical PSFs are referred to as the EPSFs henceforth. Both types of PSFs were used, and we found only marginal differences (see below). We used the BasicPSFPhotometry function in Photutils to perform PSF fitting. This function also allows a simultaneous fit to multiple, overlapping sources when necessary. The non-linear fitting routine LevMarLSQFitter in Astropy is applied, which utilizes least-squares statistics to decide on the best fit. In PSF fitting, the actual area used to fit the model is usually much smaller than the full PSF size, as only the central region has sufficient S/N. Here we found that the optimal fitting area was 11×11 pixels, which is about 2.8× the full-width-at-half-maximum of the F356W PSF. We provided an initial guess of the source's centroid and flux when running the fit. The former was estimated by visually locating the peak pixel, while the latter was estimated by measuring the aperture flux within a circle of 11 pixels in diameter centered at the initial guess of the location. We found that BasicPSFPhotometry could converge to the same solution even when the initial guesses were widely different. On the other hand, the routine requires an accurate background estimate because it fixes the background to the input value. In most cases, we estimated the background by using the MedianBackground function in Photutils and adopting the 3 σ-clipped median in the image cutouts centered on the transient (i.e., the same image stamps as shown in the figures in the main text) with the sources masked. However, this general routine was not applicable in some special cases, for which tailored treatments were necessary. For example, leaving both the source location and its flux as free parameters was not feasible for sources of low S/N. In these cases, we fixed the centroid and fit for the flux. For several sources in the highly magnified regions, small but notable positional offsets were found between some SW and LW bands. In such cases, we did not force the fit to be centered at the same position; instead, we determined the centroid in different bands individually. Regarding the background estimation, median statistics were not applicable for the sources in extremely non-uniform local backgrounds. For these cases, we estimated the local background value manually in an iterative manner: we subtracted different constants from the image at a step size of 0.005 MJy/sr, performed PSF fitting, and visually examined the residual image to determine the best local background value by eye. We note below the sources to which some of these special treatments were applied. * D21-W2 We determined the source's centroids in the LW bands using the D21 images and in the SW bands using the D31 images. Their centroids in each band were then fixed for all epochs.
* D21-W3 The source's centroids were determined from the D21 images and then fixed for Ec. * Dc2-W4 This source's local background is non-uniform; thus, we visually examined and selected the best local background value in Ec. * D31-W5/W6/W7 These sources' centroids in the SW bands were determined from the D31 F200W image; in the LW bands, they were determined from the D31 F356W image. * D21-S2 This source's centroids in the SW bands were determined from the F200W D21 image; in the LW bands, they were determined from the D21 F277W image. * D31-S4 This source's centroids in F115W and F150W were fixed to its F200W centroid. * Mothra (1) In Ep1, this source's centroid in F444W were fixed to its F410M centroid. (2) The complexity of simultaneously fitting two overlapping sources on a thin arc made it hard to use any automatic approach to estimate the local background. Therefore, we had to tune the background estimate manually as mentioned above. Furthermore, we used the Ep1 difference images between adjacent bands (the bluer image was always PSF-matched to the redder image when constructing the difference) to judge whether the extracted flux in each band was reasonable under each step of background estimate. For example, if the F150W-F115W difference image shows a distinct source at the transient location, it means that the extracted flux in F150W must be higher than that in F115W. This added constraint, while tedious, allowed us to further tweak the background values to obtain the most reasonable flux measurements. In all cases, our PSF-fitting results have been properly normalized by aperture correction. As mentioned above, we used five stars to construct the EPSFs; these five stars were our basis for the aperture correction. We ran PSF fitting on these five stars to obtain their fitted fluxes, and also derived their aperture fluxes within the same 11×11 pixel areas. The averaged ratio between the two in each band was the multiplicative aperture correction factor, which we applied to the outputs from BasicPSFPhotometry. Finally, we note that there are only marginal differences between the results based on the WPSFs and those based on the EPSFs. As the EPSFs were derived using only a small number of stars (total of five), we regard them as being less secure. Therefore, we adopted the WSPF results for our final photometry reported in the tables. aasjournal.bst natexlab#1#1 [Amanullah et al.(2010)Amanullah, Lidman, Rubin, Aldering, Astier, Barbary, Burns, Conley, Dawson, Deustua, Doi, Fabbro, Faccioli, Fakhouri, Folatelli, Fruchter, Furusawa, Garavini, Goldhaber, Goobar, Groom, Hook, Howell, Kashikawa, Kim, Knop, Kowalski, Linder, Meyers, Morokuma, Nobili, Nordin, Nugent, Östman, Pain, Panagia, Perlmutter, Raux, Ruiz-Lapuente, Spadafora, Strovink, Suzuki, Wang, Wood-Vasey, Yasuda, & Project)]Amanullah_2010 Amanullah, R., Lidman, C., Rubin, D., et al. 2010, The Astrophysical Journal, 716, 712, 10.1088/0004-637X/716/1/712 [Bergamini et al.(2022)Bergamini, Grillo, Rosati, Vanzella, Mestric, Mercurio, Acebron, Caminha, Granata, Meneghetti, Angora, & Nonino]Bergamini2022 Bergamini, P., Grillo, C., Rosati, P., et al. 2022, arXiv e-prints, arXiv:2208.14020, 10.48550/arXiv.2208.14020 [Bertin & Arnouts(1996)]Bertin1996 Bertin, E., & Arnouts, S. 1996, , 117, 393, 10.1051/aas:1996164 [Caminha et al.(2017)Caminha, Grillo, Rosati, Balestra, Mercurio, Vanzella, Biviano, Caputi, Delgado-Correal, Karman, Lombardi, Meneghetti, Sartoris, & Tozzi]Caminha2017 Caminha, G. B., Grillo, C., Rosati, P., et al. 
http://arxiv.org/abs/2307.04851v1
20230710184640
Physics-Based Modeling and Validation of 2D Schottky Barrier Field-Effect Transistors
[ "Ashwin Tunga", "Zijing Zhao", "Ankit Shukla", "Wenjuan Zhu", "Shaloo Rakheja" ]
physics.app-ph
[ "physics.app-ph" ]
Physics-Based Modeling and Validation of 2D Schottky Barrier Field-Effect Transistors Ashwin Tunga^a, Zijing Zhao, Ankit Shukla, Wenjuan Zhu, and Shaloo Rakheja Holonyak Micro and Nanotechnology Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL USA ^[email protected] August 12, 2023 =============================================================================================================================================================================================================================================================== In this work, we describe the charge transport in two-dimensional (2D) Schottky barrier field-effect transistors (SB-FETs) based on the carrier injection at the Schottky contacts. We first develop a numerical model for thermionic and field-emission processes of carrier injection that occur at a Schottky contact. The numerical model is then simplified to yield an analytic equation for current versus voltage (I-V) in the SB-FET. The lateral electric field at the junction, controlling the carrier injection, is obtained by accurately modeling the electrostatics and the tunneling barrier width. Unlike previous SB-FET models that are valid for near-equilibrium conditions, this model is applicable for a broad bias range as it incorporates the pertinent physics of thermionic, thermionic field-emission, and field-emission processes from a 3D metal into a 2D semiconductor. The I-V model is validated against the measurement data of 2-, 3-, and 4-layer ambipolar MoTe_2 SB-FETs fabricated in our lab, as well as the published data of unipolar 2D SB-FETs using MoS_2. Finally, the model's physics is tested rigorously by comparing model-generated data against TCAD simulation data. Compact model, ambipolar transport, Schottky contact, field emission, MoTe_2, 2D electronics § INTRODUCTION Over the past few decades, the semiconductor industry has focused on dimensional scaling of silicon transistors based on Moore's law in order to improve their speed, performance, and efficiency <cit.>. However, due to the short-channel effects, such as the leakage current and static power dissipation <cit.>, in ultra-scaled transistors <cit.>, dimensional scaling has been slowing down in recent years. Several solutions to this problem have been explored, forming what is known as the “more-than-Moore” strategy <cit.>. To that end, novel materials that can mitigate short-channel effects have been investigated <cit.>. Among various novel materials, two-dimensional (2D) semiconductors have emerged as an excellent channel material for a field-effect transistor (FET) <cit.>. In a 2D semiconductor FET, mobile electrons, confined in an atomically thin channel, are strongly electrostatically coupled to the gate <cit.>. The primary advantage of 2D semiconductor FETs over ultra-thin body (UTB) transistors <cit.> is that UTB semiconductors are a result of the termination of a 3D crystal, which leads to surface roughness and considerable carrier scatterings. In contrast, 2D semiconductors are inherently atomically thin and do not have dangling bonds, and could offer higher performance at ultra-scaled process nodes. To harness the full potential of 2D materials for nanoscale CMOS, challenges related to device scaling, low resistance contacts, gate-stack design, wafer-scale integration, and process variability must be addressed. A review of opportunities and challenges of 2D semiconductors can be consulted in recent publications <cit.>. 
Transition metal dichalcogenides (TMDs) are a class of 2D materials that can be incorporated into FET device structures. TMDs have a sizeable bandgap, transitioning from indirect bandgap in bulk to direct bandgap in their monolayer limit <cit.>. Their sizable bandgap lends to their advantage over graphene for logic devices  <cit.>. TMD-based FETs are also expected to be superior to black-phosphorus-based FETs in which the on-off ratio degrades rapidly at high drain-bias <cit.>, thus limiting the prospects of black phosphorus for low-power logic operations. Among the various TMD materials, MoTe_2 is an excellent candidate for implementing logic FETs. MoTe_2 FETs with an on-off current ratio of 10^6 and ambipolar conduction have been experimentally demonstrated <cit.>. Ambipolar transistors could reduce the complexity, while also enhancing the security <cit.>, of CMOS circuits since the channel can be tuned to conduct both electrons and holes by applying an appropriate electric field. To enable circuit design and allow technology-to-circuit co-optimization, a compact device model that faithfully reproduces the device terminal behavior over a broad operating range is needed. In the case of MoTe_2 SB-FETs, a physically accurate and scalable compact model must accurately interpret the role of source and drain contacts. As a result of the lack of effective substitutional doping techniques, metal contacts are directly deposited over the MoTe_2 channel <cit.>. Thus, unlike metal-oxide-semiconductor (MOS) FETs with ohmic source and drain contacts, MoTe_2 FETs invariably have Schottky contacts, which limit the injection of mobile carriers into the channel and thus the net current flow in the transistor. The compact model presented here is based on the generalized theory of carrier emission at the metal source/drain contacts. Our model includes both thermionic emission and thermal field emission (tunneling) of carriers at the Schottky contacts. Due to their closed-form nature, the equations we arrive at are easily adaptable to a compact model, where majority of the model parameters have a well-defined physical interpretation. We show that the model captures the essential physics of 2D SB-FETs by comparing model output against numerical simulations conducted in a commercial TCAD tool. We demonstrate the model's applicability to fabricated MoTe_2 ambipolar FETs with varying channel thickness (2-layers to 4-layers) and over broad bias conditions (V_GS ∈ [-8, 8] V and V_DS ∈ [0.5, 4.5] V), spanning both hole and electron conduction regimes. The model is also successfully applied to fabricated unipolar 2D FETs based on MoS_2. § PRIOR WORK Penumatcha et al. <cit.> developed an analytical method to describe the off-state transfer characteristics of low-dimensional FETs. The authors model the transmission of carriers through a Schottky barrier using the Landauer formalism. However, the model involves numerical computation and is thus not suitable for compact modeling. Besides, the model was demonstrated only for below-threshold gate bias (V_GS) and low drain voltages (V_DS), which limits the model use for practical operating conditions. Prior works also include the 2D Pao-Sah model with drift-diffusion formalism extended to ambipolar transport <cit.>. These models do not account for Schottky contact-limited charge injection and instead focus on the channel-controlled charge transport, which is not the main transport physics here. 
The model for SB-FETs presented in <cit.>, validated only against TCAD data, is strictly derived for carrier injection into a 3D semiconductor and is thus not applicable to the 2D SB-FETs presented here. Moreover, <cit.> also neglects the thermionic field emission current, which as we discuss in Sec. <ref> is crucial for intermediate gate voltage regimes. Neglecting thermionic field emission is also expected to yield an unphysical temperature dependence of I-V curves of an SB-FET. In <cit.>, a tunneling equation, empirically derived from the 3D thermionic emission equation, along with the drift-diffusion formalism is used to obtain the drain current in a Si nanowire FET. However, because of its implicit nature, the model is not considered compact from a circuit simulation standpoint. A compact model for a double-gated reconfigurable FET is presented in <cit.> based on 3D band-to-band tunneling current in an SB-FET, which is not the relevant physics underlying the ambipolar 2D SB-FETs discussed in our work. In <cit.>, authors focus on the experimental demonstration of reconfigurable logic gates based on the SOI technology. On the modeling front, the authors use an empirical formulation, based on tan-hyperbolic functions to fit the experimental data. Other related works either focus on dual-gate nanowire geometry <cit.>, silicon-on-insulator structure <cit.>, consider only 3D channels <cit.>, or implement a numerical I-V model <cit.> for SB-FETs. The model presented in this paper is specifically developed for SB-FETs using 2D semiconductors, has a strong physical basis, is validated rigorously against numerical simulations as well as experimental data of ambipolar SB-FETs fabricated in-house and unipolar SB-FETs reported in the literature. Because of its explicit nature with few parameters, most of which have a physical origin, our model is suitable for circuit simulations. § MODEL DESCRIPTION Figure <ref> shows the cross-section of an MoTe_2 SB-FET with hexagonal BN gate dielectric and metal source and drain contacts, which create a Schottky barrier at the metal/2D channel interface. Here, a van der Waals gap is formed between the metal and the semiconductor, resulting in a tunneling barrier, which increases the net contact resistance <cit.>. Due to the atomic thickness of the 2D channel, the charge injection mechanism differs significantly from injection from a metal into bulk materials. Although Richardson-Dushman <cit.> and Fowler-Nordheim <cit.> theories of electron emission formulated for bulk materials <cit.> can fit experimental data for 2D devices, these models do not represent the essential physics of 2D SB-FETs. In the thermionic emission (TE) process, thermally excited carriers with energy greater than the potential barrier at the contacts can traverse over the barrier into the semiconducting channel, resulting in a current flow. The activation energy for TE, i.e., the barrier between the metal and the bottom of the conduction band for electrons and the top of the valence band for holes, decreases linearly with gate bias until it equals the characteristic Schottky barrier height. Due to the linear variation of the activation energy, the channel current varies exponentially with gate bias, as shown in Sec. <ref>. With increasing gate bias, the potential barrier thins, which increases the probability of carriers to tunnel through the barrier. Thus, a field-dependent tunneling current is observed in the device. 
The electric field-enhanced tunneling phenomenon is also referred to as field emission (FE). The sum of TE and FE currents gives the net drain current measured in an SB-FET, illustrated qualitatively in Fig. <ref>(left). Unlike in a unipolar device, in an ambipolar SB-FET, the TE current is marginal compared to the FE current, and the total drain current is predominantly the sum of the electron and hole FE currents. §.§ Numerical Model The current density, J_net, due to carrier transmission across an energy barrier from a metal into the channel is given as J_net = J_1 → 2 - J_2 → 1, where J_1 → 2 (J_2 → 1) is the current density due to carriers incident from region 1 (region 2) into region 2 (region 1), shown in Fig. <ref>(right). Consider J_1 → 2: J_1 → 2 = 1/𝒜∑_k q T(k_x) v_x f_1(k) (1-f_2(k)), where q is the charge of the carrier, 𝒜 is the area of the 2D crystal, k is the wavevector in reciprocal space, k_x is the x-component of k, v_x is the velocity of carrier incident at the barrier, f_i is the Fermi-Dirac distribution in region i (i=1,2 for metal, semiconductor), and T(k_x) is the transmission probability. If the carriers considered are electrons, converting the sum over k-space into an integral in energy space gives J_1 → 2 = -4q/h^2√(m_e^*/2)∫_-∞^∞ T(E_x) × ( ∫_0^∞f_1(E) (1 - f_2(E))/√(E_y) dE_y ) dE_x, where m_e^* is the effective mass of the electron, h is Planck's constant, E_x is the energy due to momentum perpendicular to the barrier or the longitudinal momentum (i.e., E_x = p_x^2/2m_e^*, where p_x is the momentum perpendicular to the barrier interface), E_y is the energy due to lateral momentum. The metal Fermi-level (E_Fm) is considered as the reference energy level. The model assumes conservation of lateral carrier momentum with effective mass approximation. The same procedure as above can be followed to obtain an expression for J_2 → 1 to get J_net,e as J_net,e = -4q/h^2√(m_e^*/2)∫_-∞^∞ T(E_x) × ( ∫_0^∞f_1(E) - f_2(E)/√(E_y) dE_y ) dE_x. The net hole current density is obtained similarly as the electron current with m_e^* replaced with the effective mass of holes, m_h^*. The total drain current density due to both carriers is simply the sum of their respective net currents. Taking into account the direction of the drain current (along the -x axis, from drain to source contact), the total current density is -J_D =J_net,e + J_net,h. While the drain current is modeled by considering the spatially localized carrier injection at the contacts, effects of gate-source voltage (V_GS) and drain-source voltage (V_DS) are incorporated via the Fermi functions, f_1 and f_2, and the transmission probability, T(E_x). For the classical TE process, the transmission probability T(E_x) = 1. Evaluating (<ref>) for electrons for T(E_x) = 1 and non-degenerate statistics and integrating over E_x ∈ [E_A, ∞) (E_A is the TE activation energy) gives (see Appendix <ref>) J_TE,e = -q √(8π k_B^3 m_e^*)/h^2 T^3/2exp(-E_A/k_B T) ×[ 1 - exp( -qV_c/k_B T) ]. The activation energy E_A reduces linearly with the gate bias until the flatband voltage when E_A = ϕ_SB, qV_c = E_Fm - E_Fs = - E_Fs is the voltage across the contact (E_Fm is used as the reference energy level), A^*_2D≡ (q√(8π k_B^3 m_e^*))/(h^2) is the effective Richardson constant for a 2D semiconductor. It is important to note the T^3/2 dependence in the pre-factor of the above equation compared to the T^2 dependence in the 3D TE model <cit.>. A similar treatment leads to the hole TE current. 
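To make the 2D thermionic-emission expression above concrete, a minimal numerical sketch is given below. The effective mass and bias values are placeholders (0.55 m_0 is the value used later in the TCAD comparison), the overall sign convention for the current direction is dropped, and the result is a current per unit device width.

```python
import numpy as np

q = 1.602176634e-19    # C
h = 6.62607015e-34     # J s
kB = 1.380649e-23      # J/K
m0 = 9.1093837015e-31  # kg

def j_te_2d(E_A_eV, V_c, T=300.0, m_eff=0.55 * m0):
    """2D thermionic-emission current density (A per unit width), i.e. the
    T^{3/2} expression above with the overall sign dropped.
    E_A_eV: activation energy in eV; V_c: voltage across the contact in V."""
    A_2d = q * np.sqrt(8.0 * np.pi * kB**3 * m_eff) / h**2  # 2D Richardson constant
    return (A_2d * T**1.5 * np.exp(-E_A_eV * q / (kB * T))
            * (1.0 - np.exp(-q * V_c / (kB * T))))

# Example: 0.5 eV activation energy, 0.3 V across the contact
print(j_te_2d(0.5, 0.3))
```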
The FE transmission probability using the Wentzel–Kramers–Brillouin (WKB) approximation for a triangular barrier <cit.> is T_e,h(E_x) = exp( -8 π√(2m^* (ϕ_SB,(e,h) - E_x)^3)/3hqF_x), where F_x is the magnitude of the electric field at the triangular barrier. For electron tunneling that dominates for V_GS>V_min, (<ref>) is integrated over E_x ∈ [-∞,ϕ_SB]. In Sec. <ref>, we show an explicit analytic tunneling equation that lends itself well to compact modelling. A simplified conduction band profile at the source contact is shown in Fig. <ref>. The electric field is given as F_x = φ_(s,d)(V_GS, V_DS)/L_B(V_GS, V_DS), where L_B is the tunneling barrier width, and φ_(s,d) is the potential drop at the respective source/drain contact. For a constant V_DS, as V_GS increases, φ_s increases and L_B decreases, resulting in a strong increase in F_x with V_GS. At yet higher V_GS, φ_s remains roughly constant but L_B continues to decrease, which reduces the rate of increase of F_x with V_GS. In our model, the effect of the channel transport is enclosed in the electric field, which ensures the self-consistency between our methodology and the emission-diffusion theory of MOSFETs presented in <cit.>. §.§ Compact Model The SB-FET compact model includes both TE and FE processes and is thus applicable over a broad bias range. The total drain current is I_D = W[(J_tun,e + J_TE,e) + (J_tun,h + J_TE,h)], where W is the device width, and J_tun (J_TE) is the tunneling (thermionic) current. The holes (electrons) are injected into the channel from the drain (source) contact. The TE process for a 2D semiconductor is described by (<ref>), while the 2D tunneling process can be described by the following set of equations (see Appendix <ref>): J_tun,(e,h) = P_f,(e,h)( J_TFE + J_FE), J_TFE = C_0 C_1 √(k_BTπ)/β( exp(βϕ_SB,(e,h)) - 1 ), J_FE = C_0 C_1 √(π E_00^3)(1 - exp(- qV_DS/E_00) ), C_0 = 4q/h^2√(m^*_(e,h)/2), C_1 = exp( - √(ϕ_SB,(e,h) - E_0)( ϕ_SB,(e,h) + E_0/2) ), E_0 = ϕ_SB,(e,h) - (ln(K_0)/α)^2/3, β = k_BT - E_00/(k_BT)(E_00), E_00 = 2/3α√(ϕ_SB,(e,h) - E_0), α = 8π√(2m^*_(e,h))/3hqF_x. J_TFE and J_FE are the thermionic field emission (TFE) and field emission (FE) components, respectively, of the tunneling current, and K_0 and P_f are constant fitting parameters. The terminal voltages modulate F_x at the Schottky contact and thus control the current through the device. To model F_x, we need to obtain the channel potential and the tunneling barrier width. The channel potential is obtained from the balance equation given as V_G(S,Deff) - V_FB,(e,h) = φ_(s,d) - Q_ch,(e,h)/C_ins, where V_FB is the flat-band voltage, Q_ch is the mobile charge in the channel, and C_ins is the insulator capacitance. Q_ch is empirically modeled as <cit.> Q_ch,e = -C_inv,e n_ek_B T/qlog( 1 + exp( q V_GS - V_T,e/n_e k_B T) ), Q_ch,h = C_inv,h n_h k_B T/qlog( 1 + exp( -q V_GDeff + V_T,h/n_h k_B T) ), where C_inv(e,h) is the inversion capacitance, V_T(e,h) is the threshold voltage, and n_(e,h) is related to the sub-threshold swing. The V_DS dependence of V_FB and V_T is given as V_FB = V_FB0 - δ_FBV_DSeff, V_T = V_T0 - δ_T √(V_DSeff), where δ_FB and δ_T are empirical parameters that are determined from calibrating the model with experimental data, as described in the companion paper. V_DSeff is the effective V_DS that drops across the channel. The effective V_DS varies linearly with V_DS at low drain bias and eventually saturates at V_DSAT. 
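The analytic tunneling expressions above can likewise be sketched numerically. In the snippet below the fitting parameters K_0 and P_f, the contact field, and the effective mass are placeholder values, not fitted numbers from the paper; also note that the factor α is kept inside the exponent of C_1 here so that the exponent stays dimensionless, which is how the linearization described in the appendix works out.

```python
import numpy as np

q = 1.602176634e-19; h = 6.62607015e-34; kB = 1.380649e-23
m0 = 9.1093837015e-31

def j_tunnel_2d(phi_SB_eV, F_x, V_DS, T=300.0, m_eff=0.55 * m0, K0=100.0, Pf=0.5):
    """Analytic TFE + FE tunneling current density (A per unit width).
    phi_SB_eV: Schottky barrier height in eV; F_x: field at the contact (V/m);
    K0 and Pf are the fitting parameters of the compact model (placeholders)."""
    phi = phi_SB_eV * q
    C0 = 4.0 * q / h**2 * np.sqrt(m_eff / 2.0)
    alpha = 8.0 * np.pi * np.sqrt(2.0 * m_eff) / (3.0 * h * q * F_x)
    E0 = phi - (np.log(K0) / alpha) ** (2.0 / 3.0)      # energy where the tunneling integrand peaks
    E00 = 2.0 / (3.0 * alpha * np.sqrt(phi - E0))       # characteristic tunneling energy
    beta = (kB * T - E00) / (kB * T * E00)
    C1 = np.exp(-alpha * np.sqrt(phi - E0) * (phi + E0 / 2.0))  # alpha kept for dimensional consistency
    J_TFE = C0 * C1 * np.sqrt(np.pi * kB * T) / beta * (np.exp(beta * phi) - 1.0)
    J_FE = C0 * C1 * np.sqrt(np.pi * E00**3) * (1.0 - np.exp(-q * V_DS / E00))
    return Pf * (J_TFE + J_FE)

# Example: 0.5 eV barrier, 5e8 V/m contact field, V_DS = 1 V
print(j_tunnel_2d(0.5, 5e8, 1.0))
```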
We define V_DSeff and V_GDeff using a saturation function as, V_DSeff = V_DSV_DS/V_DSAT/( 1 + (V_DS/V_DSAT)^ν)^1/ν, V_GDeff = V_GS - V_DSeff. Here, V_DSAT is the saturation voltage and ν is the transition region fitting parameter. The tunneling barrier width, L_B, depends on the characteristic length, λ, and the depletion width, W_D. The tunneling process can happen either over W_D or a few characteristic lengths (Λ = n_0 λ, n_0 ≥ 1). At low V_GS, W_D is greater than Λ, and L_B is determined by Λ. At intermediate V_GS, the depletion region thins and W_D<Λ, and the tunneling path is influenced by the depletion width. Thus, L_B is modeled as L_B = Λ W_D/Λ + W_D, Λ = n_0 λ = n_0 √(t_cht_insϵ_ch/ϵ_ins), W_D,(e,h) = √(ϵ_chφ_(s,d)/ζ_(e,h) Q_ch,(e,h)/t_ch), t_ch (t_ins) is the channel (insulator) thickness, ϵ_ch (ϵ_ins) is the channel (insulator) dielectric constant, and ζ is a fitting parameter that describes the charge in the depletion region as a fraction of the channel charge. Figure <ref> shows the effect of key model parameters on a typical I_D-V_GS curve. § MODEL VALIDATION §.§ Comparison against TCAD results The device physics of MoTe_2 SB-FETs is analyzed using the TCAD tool, Sentaurus, from Synopsys <cit.>. A four-layer, 2.5  μ m long MoTe_2 SB-FET with 30 nm thick BN gate dielectric was simulated. The band-gap of MoTe_2 was fixed at 1.0 eV, while the hole and electron effective masses were kept equal at 0.55m_0 (m_0 is the free electron mass.). Further, the source/drain contacts were modeled as Schottky contacts with a Schottky barrier height of 0.50 eV. Finally, the gate contact was treated as a Dirichlet boundary condition, and the rest of the boundaries were treated as a Neumann boundary. The drift-diffusion formalism was used to model the charge transport in the channel, with carrier mobility fixed at 50 cm^2/Vs for both electrons and holes. Injection at the source and drain contacts was modeled using thermionic emission and the non-local tunneling equations, as implemented in Sentaurus. TCAD simulation results, shown in Fig. <ref>(a), confirm that the the majority of V_DS drops at the contacts. Moreover, from Fig. <ref>(b), we can infer that the charge transport is severely limited by carrier injection at the contacts and that the region near the contacts is depleted of charge carriers. The dependence of the electric field, F_x, at the source contact with V_GS is shown in Fig. <ref>(c). F_x, which is given as the ratio of the potential drop, φ_s at the source and the depletion width, L_B, increases linearly with V_GS in weak inversion. This is because in weak inversion φ_s varies linearly with V_GS, while L_B remains constant. At high V_GS, although L_B continues to shrink as shown in Fig. <ref>(b), φ_s saturates, which slows the rate of increase of F_x in strong inversion. §.§ Comparison against measurement data We validate our compact model against experimental measurement data of bilayer, trilayer, and four-layer MoTe_2 SB-FETs fabricated in-house. Figure <ref> shows the optical image of a fabricated trilayer MoTe_2 device. The devices were fabricated by following a bottom-up approach, where the embedded gates are formed first with metal evaporation after optical lithography patterning. The bottom BN was exfoliated and transferred on to the gates using the dry transfer method, before the source and drain contacts were patterned. The MoTe_2 flakes were exfoliated on to a 90 nm SiO_2/Si wafer. The thicknesses were identified from the optical image contrast. 
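Before moving on, the electrostatics described above (effective drain bias, empirical channel charge, balance equation, and barrier width) can be combined into a single sketch of the contact field F_x = φ_s/L_B for the electron branch. All parameter values below are illustrative placeholders rather than the fitted values reported in the tables, and the saturation function is written in its usual form V_DS/(1+(V_DS/V_DSAT)^ν)^{1/ν}, which is linear at low bias and saturates at V_DSAT as stated in the text.

```python
import numpy as np

q = 1.602176634e-19; kB = 1.380649e-23
eps0 = 8.8541878128e-12

def contact_field(V_GS, V_DS, T=300.0,
                  V_FB0=-1.0, delta_FB=0.05, V_T0=0.5, delta_T=0.1, n=1.5,
                  C_inv=1.0e-3, V_DSAT=2.0, nu=4.0,
                  t_ch=2.8e-9, t_ins=30e-9, eps_ch=8.0 * eps0, eps_ins=3.5 * eps0,
                  n0=2.0, zeta=0.3):
    """Contact field F_x (V/m) for the electron branch; placeholder parameters."""
    kT_q = kB * T / q
    V_DSeff = V_DS / (1.0 + (V_DS / V_DSAT) ** nu) ** (1.0 / nu)
    V_FB = V_FB0 - delta_FB * V_DSeff
    V_T = V_T0 - delta_T * np.sqrt(V_DSeff)
    # Empirical electron channel charge (C/m^2, negative); logaddexp avoids
    # overflow of the exponential at large gate overdrive
    Q_ch = -C_inv * n * kT_q * np.logaddexp(0.0, (V_GS - V_T) / (n * kT_q))
    C_ins = eps_ins / t_ins
    phi_s = V_GS - V_FB + Q_ch / C_ins                           # balance equation at the source
    Lam = n0 * np.sqrt(t_ch * t_ins * eps_ch / eps_ins)          # characteristic length
    W_D = np.sqrt(eps_ch * phi_s * t_ch / (zeta * abs(Q_ch)))    # depletion width
    L_B = Lam * W_D / (Lam + W_D)                                # tunneling barrier width
    return phi_s / L_B

# F_x grows with V_GS as phi_s increases and L_B shrinks, then its growth slows
for vgs in (1.0, 2.0, 4.0, 6.0):
    print(vgs, contact_field(vgs, V_DS=1.0))
```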
The MoTe_2 flakes are dry transferred on top of the source and drain contacts, with a top BN flake as the adhesive layer. The top BN layer also encapsulates the MoTe_2 channel. Our model contains a total of 24 parameters (11 each for electrons and holes and 2 common to both). Nine of the parameters are empirical in nature, while the remainder have a physical origin and can be deduced from straightforward experimental calibration. See Appendix <ref> for parameter extraction methodology. Figure <ref> shows an excellent match between the transfer curves obtained from our model and measurement data of the bilayer, trilayer, and four-layer MoTe_2 SB-FETs. Additionally, our model can capture the transconductance of the device measured experimentally. Table <ref> shows the extracted model parameters. The asymmetric electron and hole conduction in the fabricated devices is due to the unequal Schottky barrier heights, ϕ_SB,e≠ϕ_SB,h. The extracted values of ϕ_SB,e and ϕ_SB,h show that E_g (= ϕ_SB,e + ϕ_SB,h) increases with the increase in the channel thickness  <cit.>. We also apply our compact model to n-type MoS_2 SB-FETs reported in <cit.>. In a unipolar device, TE is observable in the measured I-V data in the sub-threshold regime. This is readily captured in our model as it is based on a generalized theory of carrier emission. Figure <ref> shows that the model faithfully captures the channel current from sub-threshold to strong inversion regimes for a 6-nm thick and 5-μm long MoS_2 FET. At very low gate voltages, the Sc contacted device is dominated by gate leakage, which is not included in our model. § CONCLUSION A compact model for ambipolar MoTe_2 SB-FETs was presented. The model relies on explicit, analytic equations to model thermionic emission and field-emission tunneling. We also presented a model for the variation of the tunneling barrier width with the terminal voltages. We conducted TCAD simulations to verify the model physics. Finally, we demonstrated the model's applicability to produce I-V data of realistic devices by comparing the model output against measurements of SB-FETs fabricated in-house as well as data available in the published literature. Because of its compact nature and few parameters, most of which have a physical significance, the model is suitable for technology-device-circuit co-design. § DERIVATION OF THERMIONIC EMISSION EQUATION To derive (<ref>), we assume T(E_x) = 1 and apply Maxwell-Boltzmann statistics <cit.> in (<ref>) along with E=E_x+E_y. J_net,e = -4q/h^2√(m_e^*/2)∫_0^∞ T(E_x) × ( ∫_0^∞f_1(E) - f_2(E)/√(E_y) dE_y ) dE_x. J_net,e = -4q/h^2√(m_e^*/2)∫_ϕ_SB^∞∫_0^∞exp(-E_x+E_y/k_BT) - exp(E_Fs - (E_x+E_y)/k_BT) 1/√(E_y) dE_y dE_x. J_net,e = -4q/h^2√(m_e^*/2)∫_ϕ_SB^∞exp(-E_x/k_BT) dE_x ×∫_0^∞(exp(-E_y/k_BT) - exp(E_Fs - E_y/k_BT) ) 1/√(E_y) dE_y. Solving the integrals and using qV_c = -E_Fs, J_TE,e = -q √(8π k_B^3 m_e^*)/h^2 T^3/2exp(-ϕ_SB/k_B T) ×[ 1 - exp( -qV_c/k_B T) ]. § DERIVATION OF ANALYTIC TUNNELING EQUATION FE can be analytically derived from (<ref>) and (<ref>) as follows. J_tun,e = C_0 ∫_-∞^ϕ_SBexp( -α(ϕ_SB - E_x)^3/2) ×( ∫_0^∞f_1(E) - f_2(E)/√(E_y) dE_y ) dE_x, where J_tun,e is the electron tunneling current, C_0 is the constant prefactor in (<ref>) and α is the constant in the exponent in (<ref>). Since E = E_x + E_y, the integral with respect to E_y is the difference of Fermi integrals of order -1/2 given as ∫_0^∞f_1(E) - f_2(E)/√(E_y) dE_y = √(k_BT) Γ(1/2) ×[ ℱ_-1/2( E_Fm - E_x/k_BT) - ℱ_-1/2( E_Fs - E_x/k_BT) ]. 
The Fermi integral can be approximated as ℱ_-1/2(x) = exp(x), x ≤ 0, 2/√(π) x^1/2, x>0. Equation (<ref>) can now be converted from a double integral equation to a single integral equation in E_x as follows J_tun,e = ∫_-∞^ϕ_SB G(E_x) dE_x, where G(E_x) is the electron tunneling current density per energy level, E_x, in the conduction band. As shown in Fig. <ref>(a), the peak of G(E_x) moves closer to E_Fm as the electric field, F_x, increases. Let us define the point E_0 as E_0 = *arg max_E_x G(E_x). To obtain an analytic solution of (<ref>), ln(T(E_x)) is linearized around E_0. Let us suppose that T(E_0) = 1/K_0(F_x). Although K_0(F_x) can be approximated as a constant value, a piece-wise value of K_0 is a better approximation as shown in Fig. <ref>(b). E_0 is given as E_0 = ϕ_SB - (ln(K_0)/α)^2/3, ln(T(E_x)) = -α f(E_x) = -α[ f(E_0) + (E_x - E_0)f'(E_0) ]. The integral in (<ref>) now has an analytic solution. J_TFE = ∫_0^ϕ_SB G(E_x) dE_x = C_0 C_1 √(k_BTπ)/β( exp(βϕ_SB) - 1 ) ×(1 - exp(- qV_c/k_BT) ), C_1 = exp( - √(ϕ_SB - E_0)( ϕ_SB + E_0/2) ), β = k_BT - E_00/(k_BT)(E_00), E_00 = 2/3α√(ϕ_SB - E_0). J_FE = ∫_-∞^0 G(E_x) dE_x = C_0 C_1 √(π E_00^3)(1 - exp(- qV_c/E_00) ), Figure <ref>(b) shows the validation of the analytic equation with the numerical integral, using different values of K_0. To fit the numerical integral, another parameter is introduced for tunneling current, which gives J_tun,e = P_f,e (J_TFE + J_FE). A similar parameter, P_f,h is introduced for the hole branch. § PARAMETER EXTRACTION METHODOLOGY The input parameters of the model that are fixed include (i) device width (W) (ii) the channel thickness (t_ch), (iii) insulator thickness (t_ins), which along with the insulator dielectric constant (ϵ_ins), gives the insulator capacitance (C_ins). The approximate range of Schottky barrier heights (ϕ_SB) can be obtained by extracting the x-direction electric field at the contact (F_x) from the measurement data, for a given ϕ_SB, using (8)-(11) and verifying that the extracted F_x is reasonable. ϕ_SB can then be tuned to obtain a best fit. If the thermionic emission current is observed in the device, ϕ_SB can also be extracted using the Arrhenius plots. The minimum current points are used to determine V_FB0,(e,h) and δ_FB,(e,h). The knee point in the semi-log I_D - V_G curve determines V_T0 and δ_T, and n is related to the sharpness of the knee point. The slope of the semi-log transfer curve in the sub-threshold region are used to obtain Λ, while ζ is correlated to the on-state current of the device. The empirical parameter K_0 is used to obtain an analytic tunneling equation from the numerical model. Lower (higher) K_0 approximates the low-field (high-field) region better. K_0 = 100 reasonably approximates tunneling at both low-field and high-field region. The empirical parameter, P_f, lies in the range of [0.1, 1], and can be tuned to obtain a best fit. § ACKNOWLEDGEMENT The authors acknowledge support from SRC (Grant SRC 2021-LM-3042) and NSF (Grant ECCS 16-53241 CAR). IEEEtran
http://arxiv.org/abs/2307.05321v1
20230711150719
Recent results from the CMS Proton Precision Spectrometer
[ "C. Royon" ]
hep-ph
[ "hep-ph" ]
Recent results from the CMS Proton Precision Spectrometer Christophe Royon Department of Physics and Astronomy, The University of Kansas, Lawrence, USA The Precision Proton Spectrometer (PPS) is a new subdetector of CMS that provides a powerful tool for the advancement of beyond standard model searches. We present recent results obtained with the PPS subdetector illustrating the unique sensitivity achieved using proton tagging. DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects, Michigan State University, USA, 27-31 March 2023 § INTRODUCTION: PHOTON INDUCED PROCESSES We discuss recent results from the CMS and TOTEM collaborations using the Precision Proton Spectrometer (PPS). PPS allows detecting and measuring intact protons in pp collisions at the LHC. In standard runs at the LHC, PPS has good acceptance for diffractive masses typically between 450 and 1400 GeV when both protons are detected after the collision <cit.>. This opens the possibility of looking for beyond standard model (BSM) event production at the LHC using a completely new method. More than 100 fb^-1 of data was collected in Run II by the CMS and TOTEM collaborations. As an example, let us consider exclusive diphoton production as shown in Fig. <ref>. The left diagram displays the QCD process via two-gluon exchange while the right diagram displays photon exchange. At LHC energies, in the acceptance of PPS above 450 GeV, photon exchanges completely dominate gluon ones by several orders of magnitude <cit.>. If we measure two intact protons and two photons in CMS, we are sure that this is a photon exchange. The same conclusion applies to the exclusive production of WW, ZZ, t t̅, and γ Z, which we will describe in turn. The LHC can thus be considered as a γγ collider. Let us also discuss the advantages of detecting the intact protons in the final state, using again the example of exclusive diphoton production. For our signal events, we detect all particles after the interaction, namely the two intact protons and the two photons. The conservation of momentum and energy ensures that the mass and rapidity of the diphoton and the diproton systems are the same for signal. The leading background is due to pile-up events where the diphotons and the intact protons originate from different proton-proton interactions (we recall that up to 50 interactions occur per bunch crossing at the LHC in standard running). The ratio of the diphoton and diproton masses as well as the difference in rapidity are shown in Fig. <ref> for signal and pile-up background. It is clear that requesting this matching will lead to a negligible background <cit.>. This conclusion holds for any exclusive production at the LHC, such as WW, ZZ, t t̅, γ Z, etc. § QUASI-EXCLUSIVE DILEPTON PRODUCTION In this section, we describe the search for quasi-exclusive dilepton production with PPS. Dilepton production is a QED process and we aim at detecting at least one intact proton in the final state (requesting both protons to be intact leads to less than 1 expected event, since the cross section for dilepton masses above 450 GeV is very small). 17 (resp. 23) events are found with protons in the PPS acceptance and 12 (resp. 8) show a matching better than 2σ in the μμ (resp. ee) channel.
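The matching variables used here can be made explicit. In central exclusive production the mass and rapidity of the centrally produced system are fixed by the fractional momentum losses ξ_1 and ξ_2 of the two tagged protons through m_X = √(s ξ_1 ξ_2) and y_X = ½ ln(ξ_1/ξ_2). The sketch below compares these proton-based quantities with the central (e.g. diphoton or dilepton) system; the resolutions and the 2σ window are illustrative assumptions, not the values used in the analyses.

```python
import math

SQRT_S = 13000.0  # GeV, Run-II centre-of-mass energy

def mass_rapidity_from_protons(xi1, xi2):
    """Central-system mass and rapidity predicted from the two proton
    fractional momentum losses measured in PPS."""
    return SQRT_S * math.sqrt(xi1 * xi2), 0.5 * math.log(xi1 / xi2)

def proton_matching(m_central, y_central, xi1, xi2,
                    rel_sigma_m=0.05, sigma_y=0.1, n_sigma=2.0):
    """Toy n-sigma matching between the central system and the proton
    prediction; the resolutions are placeholders."""
    m_pp, y_pp = mass_rapidity_from_protons(xi1, xi2)
    return (abs(m_central / m_pp - 1.0) < n_sigma * rel_sigma_m and
            abs(y_central - y_pp) < n_sigma * sigma_y)

# A hypothetical diphoton candidate at 800 GeV and rapidity 0.2
print(mass_rapidity_from_protons(0.08, 0.05))   # roughly (822 GeV, 0.24)
print(proton_matching(800.0, 0.2, 0.08, 0.05))  # True
```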
The significance is thus better than 5σ for observing 20 events with an expected background of 3.85 (1.49 ± 0.07 (stat) ± 0.53 (syst) for μμ and 2.36 ± 0.09 (stat) ± 0.47 (syst) for ee) <cit.>. The rapidity versus mass of the dilepton system is shown in Fig. <ref>, where we see the quasi-exclusive dimuons in red and dielectrons as empty circles. § EXCLUSIVE PRODUCTION OF DIPHOTONS In this section, we present the recent results concerning the exclusive production of diphotons. The number of events predicted by the Standard Model (SM) is negligible in the acceptance of PPS. Extra dimensions, composite Higgs models or axion-like particles, for example, can induce quartic γγγγ anomalous couplings and thus a much higher number of events that could be measured in PPS <cit.>. Anomalous couplings can appear via loops of new particles coupling to photons or via resonances decaying into two photons. The search for exclusive diphoton production was performed by requesting back-to-back photons, a high diphoton mass (m_γγ>350 GeV), and a matching in rapidity and mass between the diphoton and proton information. The first limits on quartic photon anomalous couplings at high diphoton masses were derived with about 10 fb^-1 of data (|ζ_1|<2.9 × 10^-13 GeV^-4, |ζ_2|<6 × 10^-13 GeV^-4) <cit.> and were updated with 102.7 fb^-1 (|ζ_1|<7.3 × 10^-14 GeV^-4, |ζ_2|<1.5 × 10^-13 GeV^-4) <cit.>. The search for exclusive diphotons can be reinterpreted directly as a search for axion-like particles (ALPs) decaying into two photons. The first limits on ALPs at high mass are shown in Fig. <ref> <cit.>. The sensitivities were projected to 300 fb^-1 in Ref. <cit.> using the same method; this allows a gain of about two orders of magnitude in sensitivity with respect to more standard methods at the LHC for ALP masses around 1 TeV and, in addition, covers a domain at higher masses. It is also worth mentioning that this search for ALPs is complementary to the one performed at lower masses in heavy-ion collisions <cit.>. § EXCLUSIVE PRODUCTION OF W AND Z BOSON PAIRS Using the same method as for exclusive γγ production, it is possible to look for WW and ZZ exclusive production at the LHC. The search was performed in the fully hadronic decay modes of the W and Z bosons, since the anomalous production of WW or ZZ events dominates at higher mass with respect to the SM case, with a rather low cross section (the branching ratio of W and Z bosons decaying into hadrons is of course the highest one). Two "fat" jets of radius 0.8 with jet p_T>200 GeV and 1126<m_jj<2500 GeV are requested, and the jets are required to be back-to-back (|1-ϕ_jj/π|<0.01). The fat jets correspond to the boosted decays of the W or Z bosons. As usual, matching between the central WW system and the proton information in rapidity and mass is also requested. No signal was found and limits on the SM cross section were set as σ_WW<67 fb and σ_ZZ<43 fb for 0.04<ξ<0.2 <cit.>. New limits on quartic anomalous couplings were determined as a_0^W/Λ^2 < 4.3 × 10^-6 GeV^-2, a_C^W/Λ^2 < 1.6 × 10^-5 GeV^-2, a_0^Z/Λ^2 < 0.9 × 10^-5 GeV^-2, a_C^Z/Λ^2 < 4 × 10^-5 GeV^-2 with 52.9 fb^-1, and are shown in Fig. <ref> for a_0^W as an example. The SM contribution to exclusive WW production appears at lower WW masses compared to anomalous couplings. In order to observe this process at the LHC with higher luminosity (300 fb^-1 for instance), it is possible to use purely leptonic channels for the W decays, since the dijet background is too high at low masses for hadronic channels.
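The hadronic selection quoted above can be summarized in a small sketch; the proton mass/rapidity matching discussed earlier would be applied on top of these cuts, and the example numbers are invented.

```python
import math

def passes_hadronic_ww_selection(jet_pts, m_jj, dphi_jj, xi1, xi2):
    """Sketch of the exclusive WW/ZZ hadronic selection: two fat jets with
    p_T > 200 GeV, 1126 < m_jj < 2500 GeV, back-to-back jets, and both
    protons inside the PPS acceptance 0.04 < xi < 0.2."""
    back_to_back = abs(1.0 - dphi_jj / math.pi) < 0.01
    in_acceptance = all(0.04 < xi < 0.2 for xi in (xi1, xi2))
    return (len(jet_pts) >= 2 and min(jet_pts[:2]) > 200.0 and
            1126.0 < m_jj < 2500.0 and back_to_back and in_acceptance)

print(passes_hadronic_ww_selection([450.0, 430.0], 1300.0, 3.13, 0.07, 0.09))  # True
```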
The SM prediction for exclusive WW production (leptonic decays) after selection is about 50 events for 300 fb^-1, with about 2 background events, which should lead to an observation of this process <cit.>. It is also worth noting that it is possible to search in addition for the exclusive production of γ Z events using the same method, where the Z boson decays either leptonically or hadronically. This improves the reach on the γγγ Z anomalous coupling by about three orders of magnitude compared to the more standard method at the LHC, which looks for the decay of the Z boson into 3 photons <cit.>. § EXCLUSIVE PRODUCTION OF T T̅ The search for exclusive t t̅ production in leptonic and semi-leptonic decay modes was performed using about 29.4 fb^-1 of data. Because of the neutrino originating from the W boson decay, the matching between the diproton and t t̅ information is not enough to reject the background completely, so kinematic fitters based on W and t mass constraints were used to further reduce the background. No event was found and a limit was set on the t t̅ exclusive production cross section of σ^excl._t t̅ < 0.59 pb <cit.>. It will be possible to improve significantly the sensitivity to γγ t t̅ anomalous couplings at the LHC by measuring the time of flight of the protons and requiring them to originate from the same vertex as the t t̅ in order to reject the pile-up background <cit.>. § LOOKING FOR Z+X AND Γ +X PRODUCTION An additional, original search for Z+X and γ +X events was also performed by the CMS and TOTEM collaborations. The idea is to measure the total mass of the event, reconstructed using the intact protons, which gives access to the mass of the Z+X and γ +X systems even when X is not reconstructed. No signal was found and the limits are shown in Fig. <ref> <cit.>. This search should definitely be repeated with higher luminosity. To conclude, we described many results using PPS from the TOTEM and CMS collaborations, ranging from the observation of the quasi-exclusive production of leptons to the high sensitivity to quartic anomalous couplings such as γγγγ, γγ WW, γγ ZZ, γγ t t̅, and to ALPs. Even better sensitivity is expected for the next runs at the LHC, using higher luminosities and improved detectors such as the fast timing ones. [tdr] TOTEM Collaboration, TOTEM-TDR-003; CMS Collaboration, CMS-TDR-13. [gammagamma5] S. Fichet, G. von Gersdorff, B. Lenzi, C. Royon, M. Saimpert, JHEP 02 (2015) 165; S. Fichet, G. von Gersdorff, O. Kepka, B. Lenzi, C. Royon, Phys. Rev. D 89 (2014) 114004; E. Chapon, C. Royon, O. Kepka, Phys. Rev. D 81 (2010) 074003; O. Kepka, C. Royon, Phys. Rev. D 78 (2008) 073005; S. Fichet, G. von Gersdorff, C. Royon, Phys. Rev. Lett. 116 (2016) 23, 231801; S. Fichet, G. von Gersdorff, C. Royon, Phys. Rev. D 93 (2016) 7, 075031. [dilepton] A. M. Sirunyan et al., CMS and TOTEM Collaborations, JHEP 1807 (2018) 153. [gammagamma8] A. Tumasyan et al., CMS and TOTEM Collaborations, Phys. Rev. Lett. 129 (2022) 011801. [gammagamma7] CMS and TOTEM Collaborations, CMS-PAS-EXO-21-007. [axion] C. Baldenegro, S. Fichet, G. von Gersdorff, C. Royon, JHEP 06 (2018) 131. [axion1] C. Baldenegro, S. Hassani, C. Royon, L. Schoeffel, Phys. Lett. B 795 (2019) 339; D. d'Enterria, G. da Silveira, Phys. Rev. Lett. 111 (2013) 080405. [ww1] CMS and TOTEM Collaborations, CMS-PAS-SMP-21-014. [ww] C. Baldenegro, G. Biagi, G. Legras, C. Royon, JHEP 12 (2020) 165. [gammaz] C. Baldenegro, S. Fichet, G. von Gersdorff, C. Royon, JHEP 06 (2017) 142. [ttbar1] CMS and TOTEM Collaborations, CMS-PAS-TOP-21-007. [ttbar] C. Baldenegro, A. Bellora, S. Fichet, G. von Gersdorff, M. Pitt, C. Royon, JHEP 08 (2022) 021. [zx] CMS and TOTEM Collaborations, CMS-PAS-EXO-19-009.
http://arxiv.org/abs/2307.05541v1
20230708192609
High Fidelity 3D Hand Shape Reconstruction via Scalable Graph Frequency Decomposition
[ "Tianyu Luan", "Yuanhao Zhai", "Jingjing Meng", "Zhong Li", "Zhang Chen", "Yi Xu", "Junsong Yuan" ]
cs.CV
[ "cs.CV" ]
High Fidelity 3D Hand Shape Reconstruction via Scalable Graph Frequency Decomposition Tianyu Luan^1 Yuanhao Zhai^1 Jingjing Meng^1 Zhong Li^2 Zhang Chen^2 Yi Xu^2 Junsong Yuan^1 ^1State University of New York at Buffalo        ^2OPPO US Research Center, InnoPeak Technology, Inc. {tianyulu,yzhai6,jmeng2,jsyuan}@buffalo.edu {zhong.li,zhang.chen,yi.xu}@oppo.com =================================================================================================================================================================================================================================================================================================== Despite the impressive performance obtained by recent single-image hand modeling techniques, they lack the capability to capture sufficient details of the 3D hand mesh. This deficiency greatly limits their applications when high-fidelity hand modeling is required, , personalized hand modeling. To address this problem, we design a frequency split network to generate 3D hand mesh using different frequency bands in a coarse-to-fine manner. To capture high-frequency personalized details, we transform the 3D mesh into the frequency domain, and propose a novel frequency decomposition loss to supervise each frequency component. By leveraging such a coarse-to-fine scheme, hand details that correspond to the higher frequency domain can be preserved. In addition, the proposed network is scalable, and can stop the inference at any resolution level to accommodate different hardware with varying computational powers. To quantitatively evaluate the performance of our method in terms of recovering personalized shape details, we introduce a new evaluation metric named Mean Signal-to-Noise Ratio (MSNR) to measure the signal-to-noise ratio of each mesh frequency component. Extensive experiments demonstrate that our approach generates fine-grained details for high-fidelity 3D hand reconstruction, and our evaluation metric is more effective for measuring mesh details compared with traditional metrics. The code is available at <https://github.com/tyluann/FreqHand>. § INTRODUCTION High-fidelity and personalized 3D hand modeling have seen great demand in 3D games, virtual reality, and the emerging Metaverse, as it brings better user experiences, , users can see their own realistic hands in the virtual space instead of the standard avatar hands. Therefore, it is of great importance to reconstruct high-fidelity hand meshes that can adapt to different users and application scenarios. Despite previous successes in 3D hand reconstruction and modeling<cit.>, few existing solutions focus on enriching the details of the reconstructed shape, and most current methods fail to generate consumer-friendly high-fidelity hands. When we treat the hand mesh as graph signals, like most natural signals, the low-frequency components have larger amplitudes than those of the high-frequency parts, which we can observe in a hand mesh spectrum curve (<ref>). Consequently, if we generate the mesh purely in the spatial domain, the signals of different frequencies could be biased, thus the high-frequency information can be easily overwhelmed by its low-frequency counterpart. Moreover, the wide usage of compact parametric models, such as MANO <cit.>, has limited the expressiveness of personalized details. 
Even though MANO can robustly estimate the hand pose and coarse shape, it sacrifices hand details for compactness and robustness in the parameterization process, so the detail expression ability of MANO is suppressed. To better model detailed 3D shape information, we transform the hand mesh into the graph frequency domain, and design a frequency-based loss function to generate high-fidelity hand mesh in a scalable manner. Supervision in the frequency domain explicitly constrains the signal of a given frequency band from being influenced by other frequency bands. Therefore, the high-frequency signals of hand shape will not be suppressed by low-frequency signals despite the amplitude disadvantage. To improve the expressiveness of hand models, we design a new hand model of 12,337 vertices that extends previous parametric models such as MANO with nonparametric representation for residual adjustments. While the nonparametric residual expresses personalized details, the parametric base ensures the overall structure of the hand mesh, , reliable estimation of hand pose and 3D shape. Instead of fixing the hand mesh resolution, we design our network architecture in a coarse-to-fine manner with three resolution levels U-net for scalability. Different levels of image features contribute to different levels of detail. Specifically, we use low-level features in high-frequency detail generation and high-level features in low-frequency detail generation. At each resolution level, our network outputs a hand mesh with the corresponding resolution. During inference, the network outputs an increasingly higher resolution mesh with more personalized details step-by-step, while the inference process can stop at any one of the three resolution levels. In summary, our contributions include the following. * We design a high-fidelity 3D hand model for reconstructing 3D hand shapes from single images. The hand representation provides detailed expression, and our frequency decomposition loss helps to capture the personalized shape information. * To enable computational efficiency, we propose a frequency split network architecture to generate high-fidelity hand mesh in a scalable manner with multiple levels of detail. During inference, our scalable framework supports budget-aware mesh reconstruction when the computational resources are limited. * We propose a new metric to evaluate 3D mesh details. It better captures the signal-to-noise ratio of all frequency bands to evaluate high-fidelity hand meshes. The effectiveness of this metric has been validated by extensive experiments. We evaluate our method on the InterHand2.6M dataset <cit.>. In addition to the proposed evaluation metrics, we also evaluate mean per joint position error (MPJPE) and mesh Chamfer distance (CD). Compared to MANO and other baselines, our proposed method achieves better results using all three metrics. § RELATED WORK Parametric hand shape reconstruction. Parametric models are a popular approach in hand mesh reconstruction. Romero <cit.> proposed MANO, which uses a set of shape and pose parameters to control the movement and deformation of human hands. Many recent works <cit.> combined deep learning with MANO. They use features extracted from the RGB image as input, CNN to get the shape and pose parameters, and eventually these parameters to generate hand mesh. These methods make use of the strong prior knowledge provided by the hand parametric model, so that it is convenient to train the networks and the results are robust. 
However, the parametric method limits the mesh resolution and details of hands. Non-parametric hand shape reconstruction. Non-parametric hand shape reconstruction typically estimates the vertex positions of a template with fixed topology. For example, Ge  <cit.> proposed a method using a graph convolution network. It uses a predefined upsampling operation to build a multi-level spectrum GCN network. Kulon <cit.> used spatial GCN and spiral convolution operator for mesh generation. Moon <cit.> proposed a pixel-based approach. However, none of these works paid close attention to detailed shapes. Moon <cit.> provided an approach that outputs fine details, but since they need the 3D scanned meshes of the test cases for training, their model cannot do cross-identity reconstruction. In our paper, we design a new hand model that combines the strength of both parametric and non-parametric approaches. We use this hand model as a basis to reconstruct high-fidelity hands. Mesh frequency analysis. Previous works mainly focused on the spectrum analysis of the entire mesh graph. Chung. <cit.> defines the graph Fourier transformation and graph Laplacian operator, which builds the foundation of graph spectrum analysis. <cit.> extends commonly used signal processing operators to graph space. <cit.> proposes a spectrum graph convolution network based on graph spectrum characteristics. The spectral decomposition of the graph function is used to define graph-based convolution. Recent works such as <cit.> widely use spectrum GCN in different fields. However, these works mainly focus on the analysis of the overall graph spectrum. In this paper, we use spectrum analysis as a tool to design our provided loss function and metric. § PROPOSED METHOD We propose a scalable network that reconstructs the detailed hand shape, and use frequency decomposition loss to acquire details. <ref> shows our network architecture. We design our network in the manner of a U-net. First, we generate a MANO mesh from image features from EfficientNet <cit.>. Based on the MANO mesh, we use a graph convolution network (green, yellow, and red modules in <ref>) to recover a high-fidelity hand mesh. In order to obtain high-frequency information, we use image features from different layers of the backbone network as a part of the GCN inputs. Specifically, at the low-resolution level, we take high-level image features as part of the input, and use a low-resolution graph topology to generate a low-resolution mesh. At medium and high-frequency levels, we use lower-level image feature through the skip connection to produce a high-resolution mesh. Note that at every resolution level, the network will output the intermediate hand mesh, so it would naturally have the ability for scalable inference. During the training process, we supervise both intermediate meshes and the final high-resolution mesh. We discuss the details in the following. §.§ High Fidelity 3D Hand Model We design our hand representation based on MANO <cit.>. MANO factorizes human hands into a 10-dimensional shape representation β and a 35-dimensional pose representation θ. MANO model can be represented as { M(θ, β) = W(T_P(θ, β), θ, w) T_P(θ, β) = T + B_S(β) + B_P(θ) . where W is the linear blend skinning function. Model parameter w is the blend weight. B_S and B_P are another two parameters of MANO named shape blend shape and pose blend shape, which are related to pose and shape parameters, respectively. 
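As a minimal illustration of the blend-skinning step W(·) in the equation above, the sketch below applies per-joint rigid transforms to a posed template with skinning weights w. It assumes the joint transforms for a pose θ are produced elsewhere by forward kinematics, and it omits the rest-pose joint subtraction that full MANO-style skinning performs, so it only conveys the structure of the operation.

```python
import numpy as np

def blend_skinning(T_P, joint_transforms, weights):
    """Linear blend skinning W(T_P, theta, w).

    T_P              : (V, 3) template after adding the blend shapes
    joint_transforms : (J, 4, 4) rigid transforms of the hand joints for the pose
    weights          : (V, J) skinning weights w (rows sum to 1)
    """
    V = T_P.shape[0]
    homo = np.concatenate([T_P, np.ones((V, 1))], axis=1)              # homogeneous coordinates
    per_vertex = np.einsum('vj,jab->vab', weights, joint_transforms)   # blended per-vertex transforms
    return np.einsum('vab,vb->va', per_vertex, homo)[:, :3]

# Toy check: identity joint transforms leave the template unchanged
T_P = np.random.rand(5, 3)
G = np.tile(np.eye(4), (3, 1, 1))          # 3 joints, identity transforms
w = np.full((5, 3), 1.0 / 3.0)
print(np.allclose(blend_skinning(T_P, G, w), T_P))   # True
```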
MANO can transfer complex hand surface estimation into a simple regression of a few pose and shape parameters. However, MANO has limited capability in modeling shape detail. It is not only limited by the number of pose and shape dimensions (45), but also by the number of vertices (778). In our work, we designed a new parametric-based model with 12,338 vertices generated from MANO via subdivision. The large vertex number greatly enhances the model's ability to represent details. Subdivided MANO. To address this problem, we design an extended parametric model that can better represent details. First, we add detail residuals to MANO as M^'(θ, β, d) = W(T_P^'(θ, β, d), θ, w^'), T_P^'(θ, β, d) = T^' + B_S^'(β) + B_P^'(θ) + d, where w^', T^', B_S^'(β), and B_P^'(θ) are the parameters of our model, and d is the learnable per-vertex location perturbation. The dimension of d is the same as the number of vertices. Besides vertex residuals, we further increase the representation capability of our hand model by increasing the resolution of the mesh. Motivated by the traditional Loop subdivision <cit.>, we propose to design our parametric hand model by subdividing the MANO template. Loop subdivision can be represented as T^' = 𝐋_𝐬T, where T is the original template mesh with n vertices and m edges, and T^' is the subdivided template mesh with n+m vertices. 𝐋_𝐬∈ℝ^(n+m)× n is the linear transformation that defines the subdivision process. The position of each vertex on the new mesh is only determined by the neighbor vertices on the original mesh, so 𝐋_𝐬 is sparse. We use similar strategies to calculate B_S and B_P. The MANO parameters map the input shape and pose into vertex position adjustments. These mappings are linear matrices of dimension x × n. Therefore, we can calculate the parameters as w^' = (𝐋_𝐬w^⊤)^⊤, B_S^' = (𝐋_𝐬B_S^⊤)^⊤, B_P^' = (𝐋_𝐬B_P^⊤)^⊤. We repeat the procedure twice to get sufficient resolution. <ref> shows example meshes from the new model in different poses (d is set to 0). We can see that our representation inherits the advantages of the parametric hand model. It has a plausible structure with no visual artifacts when the hand poses change. §.§ Hierarchical Graph Convolution Network Our GCN network utilizes a multiresolution graph architecture that follows the subdivision process in Section <ref>. Different from the single-graph GCNs in previous works <cit.>, our GCN network uses different graphs in different layers. At each level, each vertex of the graph corresponds to a vertex on the mesh and the graph topology is defined by the mesh edges. Between two adjacent resolution levels, the network uses the 𝐋_𝐬 in <ref> for the upsampling operation. This architecture is designed for scalable inference. When the computing resources are limited, only the low-resolution mesh needs to be calculated; when the computing resources are sufficient, we can calculate all the way to the high-resolution mesh. Moreover, this architecture allows us to explicitly supervise the intermediate results, so that details are added level-by-level. §.§ Graph Frequency Decomposition In order to supervise the output mesh in the frequency domain and design the frequency-based metric, we need to perform frequency decomposition on mesh shapes. Here, we regard the mesh as an undirected graph, and the 3D locations of mesh vertices as signals on the graph. Then, the frequency decomposition of the mesh is the spectrum analysis of this graph signal.
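Returning to the subdivision operator T^' = 𝐋_𝐬T introduced above, the sketch below keeps the old vertices and inserts one midpoint vertex per edge, which is enough to illustrate how T^' = 𝐋_𝐬T and w^' = (𝐋_𝐬w^⊤)^⊤ are applied. The full Loop scheme also re-weights the original vertices using their neighbors, which is omitted here, so this is a structural sketch rather than the exact operator used in the paper.

```python
import numpy as np

def subdivision_matrix(n_vertices, faces):
    """(n+m) x n matrix L_s mapping coarse vertices to subdivided ones:
    old vertices are kept and each edge receives a midpoint vertex."""
    edges = set()
    for f in faces:
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            edges.add((min(a, b), max(a, b)))
    edges = sorted(edges)
    Ls = np.zeros((n_vertices + len(edges), n_vertices))
    Ls[:n_vertices, :n_vertices] = np.eye(n_vertices)
    for k, (a, b) in enumerate(edges):
        Ls[n_vertices + k, a] = 0.5
        Ls[n_vertices + k, b] = 0.5
    return Ls

# Toy example: one triangle; MANO itself would start from 778 vertices.
T = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])   # coarse template
F = np.array([[0, 1, 2]])
Ls = subdivision_matrix(len(T), F)
T_fine = Ls @ T                                     # subdivided template T'
w = np.array([[1.0, 0.5, 0.0],                      # toy blend weights, (joints x vertices)
              [0.0, 0.5, 1.0]])
w_fine = (Ls @ w.T).T                               # w' = (L_s w^T)^T
print(T_fine.shape, w_fine.shape)                   # (6, 3) (2, 6)
```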
Following <cit.>, given an undirected graph 𝒢 = {𝒱, ℰ} with a vertices set of 𝒱= {1,2,...,N } and a set of edges ℰ= {(i, j) }_i,j ∈𝒱, the Laplacian matrix is defined as 𝐋:=𝐃 - 𝐀, where 𝐀 is the N × N adjacency matrix with entries defined as edge weights a_ij and 𝐃 is the diagonal degree matrix. The ith diagonal entry di = ∑_ja_ij. In this paper, the edge weights are defined as a_ij:={ 1 , (i,j) ∈ℰ 0 , otherwise . which means all edges have the same weights. We decompose 𝐋 using spectrum decomposition: 𝐋=𝐔^⊤Λ𝐔. Here, Λ is a diagonal matrix, in which the diagonal entries are the eigenvalues of 𝐋. 𝐔 is the eigenvector set of 𝐋. Since the Laplacian matrix 𝐋 describes the fluctuation of the graph signal, its eigenvalues show how "frequent" the fluctuations are in each eigenvector direction. Thus, the eigenvectors of larger eigenvalues are defined as higher frequency bases, and the eigenvectors of smaller eigenvalues are defined as lower frequency bases. Since the column vectors of 𝐔 is a set of orthonormal basis of the graph space, following <cit.>, we define transform F(x) = 𝐔^⊤x to be the Fourier transform of graph signal, and F'(x) = 𝐔x to be reverse Fourier transform. This means, given any graph function x ∈ℝ^N× d, we can decompose x in N different frequency components: x=∑_i=1^N𝐔_𝐢(𝐔_𝐢^⊤x), where 𝐔_𝐢∈ℝ^N × 1 is the ith column vector of 𝐔. d is the dimension of the graph signal on each vertex. 𝐔_𝐢^⊤x is the frequency component of x on the ith frequency base. Having <ref>, we can decompose a hand mesh into frequency components. <ref> shows an example of a groundtruth mesh and its frequency decomposition result. The x-axis is the frequencies from low to high. The y-axis is the amplitude of each component in the logarithm. It is easy to observe that the signal amplitude generally decreases as the frequency increases. <ref> shows the cumulative frequency components starting from frequency 0. We can see how the mesh shape changes when we gradually add higher frequency signals to the hand mesh. In general, the hand details increase as higher frequency signals are gradually included. §.§ Frequency Decomposition Loss Frequency decomposition loss. Conventional joint and vertex loss, such as the widely used pre-joint error loss <cit.> and mesh pre-vertex error loss <cit.> commonly used in human body reconstruction, and Chamfer Distance Loss <cit.> commonly used in object reconstruction and 3D point cloud estimation, all measure the error in the spatial domain. In that case, the signals of different frequency components are aliased together. As shown in <ref>, the amplitudes of low-frequency signals of hand shape are much larger than high-frequency signals, so when alias happens, the high-frequency signals will get overwhelmed, which means direct supervision on the spatial domain would mainly focus on low-frequency signals. Thus, spatial loss mostly does not drive the network to generate high-frequency details. Our experiments in <ref> also demonstrate this. To generate detailed information without being overwhelmed by low-frequency signals, we designed a loss function in the frequency domain. 
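To make this decomposition concrete, the sketch below builds the Laplacian of a small graph with unit edge weights, computes its eigenbasis, splits a toy vertex signal into its frequency components, and checks that the components sum back to the original signal. The five-vertex graph and all function names are assumptions of the illustration.

```python
import numpy as np

def graph_laplacian(n_verts, edges):
    """L = D - A for an undirected graph with unit edge weights."""
    A = np.zeros((n_verts, n_verts))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def frequency_components(L, x):
    """Split a graph signal x of shape (N, d) along the Laplacian eigenbasis U."""
    eigval, U = np.linalg.eigh(L)        # columns of U are the frequency bases
    comps = [U[:, [i]] @ (U[:, [i]].T @ x) for i in range(L.shape[0])]
    return eigval, comps

# Toy 5-vertex "mesh": 3D vertex positions play the role of the graph signal.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
x = np.random.default_rng(1).normal(size=(5, 3))
eigval, comps = frequency_components(graph_laplacian(5, edges), x)

assert np.allclose(sum(comps), x)        # the components reconstruct the signal
print(np.round(eigval, 3))               # small eigenvalues = low frequencies
```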
Specifically, we use graph frequency decomposition (<ref>) to define our frequency decomposition loss as L_F = 1/F∑_f=1^Flog(𝐔_f^⊤V̂-𝐔_f^⊤V_gt^2/𝐔_f^⊤V̂𝐔_f^⊤V_gt + ϵ + 1), where F=N is the number of total frequency components, 𝐔_f is the fth frequency base, · is L2 norm, ϵ = 1 × 10^-8 is a small number to avoid division-by-zero, V̂∈ℝ^N × 3 and V_gt∈ℝ^N × 3 are the predicted and groundtruth vertex locations, respectively. During training, for every frequency component, our loss reduces the influence of the amplitude of each frequency component, so that information on different frequency components would have equivalent attention. In <ref>, we demonstrate the effectiveness of the frequency decomposition loss. Total loss function. We define the total loss function as: L = λ_JL_J + ∑_l=1^3[ λ_v^(l)L_v^(l) + λ_F^(l)L_F^(l)], where l is the resolution level. l=1 is the lowest-resolution level and l=3 is the highest resolution level. L_J^(l) is 3D joint location error, L_v^(l) is per vertex error, and L_F^(l) is the frequency decomposition loss. λ_J^(l), λ_v^(l), and λ_F^(l) are hyper-parameters. For simplicity, we refer L_J^(l), L_v^(l), and L_F^(l) as L_J, L_v, and L_F for the rest of the paper. Following previous work <cit.>, we define 3D joint location error and per vertex loss as: L_J = 1/N_J∑_j=1^N_JĴ_̂ĵ-J_gt,j, L_v = 1/N∑_i=1^Nv̂_i-v_gt,i, where Ĵ_j and J_gt,j are the output joint location and groundtruth joint location. N_J is the number of joints. v̂_i and v_gt,i are the estimated and groundtruth location of the ith vertex, and N is the number of vertices. § EXPERIMENTS §.§ Datasets Our task requires detailed hand meshes for supervision. Because of the difficulty of acquiring 3D scan data, this supervision is expensive and hard to obtain in a large scale. One alternative plan is to generate meshes from multiview RGB images using multiview stereo methods. Considering the easy access, we stick to this plan and use the generated mesh as groundtruth in our experiments. We do all our experiments on the InterHand2.6M dataset <cit.>, which is a dataset consisting of multiview images, rich poses, and human hand pose annotations. The dataset typically provides 40-100 views for every frame of a hand video. Such a large amount of multiview information would help with more accurate mesh annotation. Finally, we remesh the result hand mesh into the same topology with our 3-level hand mesh template, respectively, so that we can provide mesh supervision for all 3 levels of our network. We use the resulting mesh as groundtruth for training and testing. In this paper, we use the mesh results provided in <cit.>, which are generated using multiview methods of <cit.>, and only use a subset of InterHand2.6m, due to the large number of data in the original dataset. The remeshing method and more dataset details can be found in supplementary material Section 4. In <ref> (last column, “groundtruth"), we show a few examples of the generated groundtruth meshes. Although these meshes are not the exact same as real hands, it is vivid and provides rich and high-fidelity details of human hands. This 3D mesh annotation method is not only enough to support our solution and verify our methods, but is also budget-friendly. §.§ Implementation Details. We follow the network architecture in <cit.> to generate intermediate MANO results. We use EfficientNet <cit.> as a backbone. The low-level, mid-level, and high-level features are extracted after the 1st, 3rd, and 7th blocks of EfficientNet, respectively. 
For each image feature, we use 1 × 1 convolutions to deduce dimensions. The channel numbers of 1 × 1 convolution are 32, 32, and 64 from low-level to high-level, respectively. After that, we project the initial human hand vertices to the feature maps, and sample a feature vector for every vertex using bilinear interpolation. The GCN graph has 778, 3093, and 12337 vertices at each resolution level. In the training process, we first train <cit.> network, and then use the pretrained result to train our scalable network. For training <cit.>, we use their default hyper-parameters, set the learning rate to 1 × 10 ^-4, and set batch size to 48. When training GCN network, we set λ_J to be 1, set λ_v^(1) and λ_F^(1) to be 1 and 60, set λ_v^(2) and λ_F^(2) to be also 1 and 60, and set λ_v^(3) and λ_F^(3) to be 1 and 100. The learning rate is set to 5 × 10 ^-4 for GCN and 1e-4 for the rest of the network. The batch size is set to 28. The training process takes about 25 hours on 1 NVIDIA GTX3090Ti GPU for 150 epochs. In reference, we use a smooth kernel to post-process the mesh to reduce sharp changes. More details of post-processing will be found in Supplementary Materials Section 3. §.§ Quantitative Evaluation We use mean per joint position error (MPJPE) and Chamfer distance (CD) to evaluate the hand pose and coarse shape. Besides, to better evaluate personalized details, we also evaluate our mesh results using the proposed mean signal-to-noise ratio (MSNR) metric. Mean Signal-to-Noise Ratio (MSNR). Previous metrics for 3D hand mesh mostly calculate the Euclidean distance between the results and the groundtruth. Although in most cases, Euclidean distance can roughly indicate the accuracy of the reconstruction results, it is not consistent with human cognitive standards: it is more sensitive to low-frequency errors, but does not perform well in personalized detail distinction or detailed shape similarity description. Thus, we propose a metric that calculates the signal-to-noise ratio in every frequency base of the graph. We define our Mean Signal-to-Noise Ratio (MSNR) metric as MSNR =1/F∑_f=1^Flog(𝐔_f^⊤V̂/𝐔_f^⊤V̂ - 𝐔_f^⊤V_gt + ϵ), where F=N is the total number of frequency components and S_f is the signal-to-noise ratio of the fth frequency component. 𝐔_f, V̂, and V_gt have the same meaning as in <ref>. ϵ=1 × 10 ^-8 is a small number to avoid division-by-zero. Thus, the maximum of S_f is 8. By this design, the SNR of different frequency components would not influence each other, so we can better evaluate the high-frequency information compared to the conventional Euclidean Distance. We designed an experiment on InterHand2.6m to validate the effectiveness of our metric in evaluating high-frequency details. We add errors of 8 different frequency bands to the hand mesh. For each frequency band, the error amplitude is set under 10 different uniform distributions. As shown in <ref>, we measure the MPVE and MSNR for every noise distribution on every frequency band, to see how the measured results of the two metrics change with the noise amplitude in each frequency band. The result shows that in the low-frequency part, MPVE increases fast when the noise amplitude increases (the upper lines), but in high-frequency bands, the measured result changes very slowly when the noise amplitude increases. MSNR behaves completely differently from MPVE. It is more sensitive to noise in the high-frequency band than in the low-frequency band. 
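As a concrete reference for the two frequency-domain quantities used in this paper — the frequency decomposition loss L_F and the MSNR metric just defined — the following numpy sketch evaluates both on a toy five-vertex mesh. The toy graph, the choice of logarithm bases, and the ϵ handling are assumptions of this illustration, and the norms follow the L2 reading stated alongside the loss definition.

```python
import numpy as np

def laplacian_eigenbasis(n_verts, edges):
    """Frequency bases U from the combinatorial Laplacian L = D - A."""
    A = np.zeros((n_verts, n_verts))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    _, U = np.linalg.eigh(np.diag(A.sum(1)) - A)
    return U

def frequency_decomposition_loss(U, v_pred, v_gt, eps=1e-8):
    """L_F = (1/F) sum_f log( ||U_f^T (V_pred - V_gt)||^2
                              / (||U_f^T V_pred|| * ||U_f^T V_gt|| + eps) + 1 )."""
    vals = []
    for f in range(U.shape[1]):
        p, g = U[:, f] @ v_pred, U[:, f] @ v_gt
        vals.append(np.log(np.sum((p - g) ** 2)
                           / (np.linalg.norm(p) * np.linalg.norm(g) + eps) + 1.0))
    return float(np.mean(vals))

def msnr(U, v_pred, v_gt, eps=1e-8):
    """MSNR = (1/F) sum_f log10( ||U_f^T V_pred|| / (||U_f^T (V_pred - V_gt)|| + eps) )."""
    vals = []
    for f in range(U.shape[1]):
        signal = np.linalg.norm(U[:, f] @ v_pred)
        noise = np.linalg.norm(U[:, f] @ (v_pred - v_gt))
        vals.append(np.log10(signal / (noise + eps)))
    return float(np.mean(vals))

# Toy five-vertex mesh; perturb the groundtruth only along the highest-frequency
# basis vector and report both quantities side by side.
U = laplacian_eigenbasis(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
rng = np.random.default_rng(0)
v_gt = rng.normal(size=(5, 3))
v_pred = v_gt + 0.01 * np.outer(U[:, -1], np.ones(3))
print(frequency_decomposition_loss(U, v_pred, v_gt), msnr(U, v_pred, v_gt))
```

Each component is normalised by its own magnitude before the logarithm, so high-frequency errors are not drowned out by the much larger low-frequency amplitudes.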
Thus, compared to Euclidean distance, MSNR better measures the error in high-frequency details. <ref> shows a few examples of noisy meshes. Evaluation on InterHand2.6M dataset. We report mean per joint position error (MPJPE), Chamfer distance (CD), and mean signal-to-noise ratio (MSNR) to evaluate the overall accuracy of reconstructed hand meshes. <ref> shows the comparison among 3 levels of our proposed method and MANO. As shown in the table, the proposed method improves the accuracy of hand surface details by a large margin (as indicated by MSNR). We also observe that, while our method generates better shape details in a scalable manner, the joint locations and the overall shape of the output meshes also become slightly more accurate (as indicated by MPJPE and CD). Here, the MSNR of MANO, Ours-level 1, and Ours-level 2 are calculated after subdividing their meshes into the same resolution as Ours-level 3. §.§ Ablation Study We conduct several experiments to demonstrate the effectiveness of the feature skip connection design (in <ref>). and different loss functions. The results are shown in <ref>. From the result, we observe that our projection-to-feature-map skip connection design leads to performance improvement in all three metrics. For the loss functions, we observe MSNR degrades when the frequency decomposition loss is removed, indicating inferior mesh details. Removing the per-vertex error loss dramatically increases the Chamfer distance, indicating that the overall shape is not well constrained. The visualization results of the latter 2 experiments are shown in <ref>, if we do not use frequency decomposition loss, the mesh result we get tends to be smoother with less personalized details. If we do not use per-vertex error loss, the mesh's low-frequency information is not well-learned. The mesh we generate will have an overall shape deformation. Scalable design. We also demonstrate the scalable design of the proposed network by analyzing the resource needed at each resolution level (<ref>). In general, higher resolution levels require more computational resources in the network, and more resources to store and render the mesh. Still, our approach supports scalable reconstruction and can be applied to scenarios with limited computational resources. Here, “baseline" means only generating the MANO mesh in our network. Visualization Results. The qualitative reconstruction results are shown in <ref>. We observe that even when MANO is upsampled to 200k vertices, it still does not capture personalized details while our results provide better shape details. More qualitative results can be found in the Supplementary Material Section 5. § CONCLUSION We provided a solution to reconstruct high-fidelity hand mesh from monocular RGB inputs in a scalable manner. We represent the hand mesh as a graph and design a scalable frequency split network to generate hand mesh from different frequency bands. To train the network, we propose a frequency decomposition loss to supervise each frequency component. Finally, we introduce a new evaluation metric named Mean Signal-to-Noise Ratio (MSNR) to measure the signal-to-noise ratio of each mesh frequency component, which can better measure the details of 3D shapes. The evaluations on benchmark datasets validate the effectiveness of our proposed method and the evaluation metric in terms of recovering 3D hand shape details. § ACKNOWLEDGMENTS This work is supported in part by a gift grant from OPPO. 
ieee_fullname SAD § DETAILED NETWORK ARCHITECTURE We proposed a detailed network architecture of our approach in <ref>. The green boxes are the features, in which we note the feature dimensions. The blue boxes represent blocks of EfficientNet <cit.>. The red boxes represent GCN blocks. The GCN residual blocks in the network are designed following the manner of <cit.>. Details of the residual blocks are shown on the right of the figure. The gray boxes are the feature skip-connection part. To get multi-level image features from feature maps, we project the vertices into the feature maps, and use a bilinear interpolation technique to sample features. We will illustrate the process more in <ref>. The purple boxes are the sub-network used to generate MANO mesh. The orange boxes indicate the annotation we used. The green arrows are feature streams and the red lines are skip connections. We fetch skip-connected features from the output of EfficientNet Block 1, Block 3, and Block 7. The features are used as parts of the input of the GCN. The GCN has 3 levels. At each level, the input features go through a 10-layer GCN Residual Block, then output a feature vector and a 3D location at each vertex. The 3D locations are used as intermediate output and for supervision. The features are used as a part of the input for the next level. At the third level, we only output the 3D location of each vertex as the final mesh. § SKIP-CONNECTED FEATURE SAMPLING In <ref>, the features fetched from EfficientNet are feature maps. We want to transfer them into feature vectors and put them on the vertices without losing spatial information. Thus, we design a feature sampling strategy to put the local image feature on each graph vertex. As shown in <ref>, we use orthodox projection to find the feature vector for each vertex on the feature map. For every vertex P, we calculate the projection point P^' on the feature map. Then, we extract the feature vector x ∈𝐑^c using bilinear interpolation at point P^', where c is the feature map channel number. The total output feature dimension is N × c, where N is the number of graph vertices. § MESH POST-PROCESSING We do a post-process on the third-level mesh. Due to the flaws of groundtruth mesh (shown in <ref>), some of our output mesh also have similar structure flaws. To tackle this problem, we designed a smooth mask to reduce the flaws. <ref> shows the output of the network, our smooth mask, and our final mesh result. As we can see, the flaws are highly reduced. Note that, this flaw is caused by the noisy groundtruth, so it can also be reduced by a better remeshing of the training data in the future. § REMESHING PROCEDURE We try to use the multiview stereo (MVS) generated mesh provided in <cit.>. However, the MVS mesh has about 500k vertices on each mesh. The large vertex number mesh with high redundancy makes our training process much slower. Moreover, without a fixed topology, the choices of shape supervision are limited. For example, we would not be able to use the per vertex loss and frequency decomposition loss for training. Thus, we designed a remeshing technic to transfer the mesh generated in the multiview stereo (MVS) method into a unified topology. The algorithm is shown in <ref>a. First, we align the MVS mesh with a parametric template mesh. Here, we use template meshes designed in the main paper Section 3.2. 
Second, we use an optimization approach to calculate a set of pose and shape parameters so that the template mesh becomes a coarse approximation of the MVS mesh. Finally, we use the closest point on the MVS mesh as a substitute for each vertex on the parametric mesh. This procedure preserves the detailed shape of the MVS mesh and the topology of the parametric template at the same time. In our experiments, we generate 3 resolution levels of groundtruth mesh for supervision, and use the third level for testing. However, despite the good attributes of the groundtruth meshes, some of them still have flaws. <ref>b shows an example of flaws inside the mesh (red rectangle). They happen because some of the vertices on the parametric mesh find the wrong corresponding vertices on the MVS mesh. These groundtruth mesh flaws eventually cause defects in the generated mesh (shown in <ref>). We have largely reduced these flaws using the mesh post-processing method mentioned in <ref>. § MORE VISUALIZATION RESULTS We show more visualization results of our proposed method in <ref>. § FAILURE CASES We show in <ref> a few failure cases where our method generates hand meshes with flaws. Most of these flaws are caused by groundtruth flaws introduced in remeshing (shown in <ref>b). § FUTURE WORKS AND DISCUSSIONS In future work, our backbone can be replaced with more recent networks such as those in <cit.>; object detection and segmentation networks can be helpful for hand-related tasks. We would also improve the remeshing procedure to reduce the artifacts, and extend our method to tackle in-the-wild hand reconstruction. Moreover, the frequency decomposition approach can easily be extended to improve the details of human body reconstruction works such as <cit.>.
http://arxiv.org/abs/2307.07234v1
20230714091238
Polynomially knotted 2-spheres
[ "Rama Mishra", "Tumpa Mahato" ]
math.GT
[ "math.GT" ]
Polynomially knotted 2 Spheres [ August 12, 2023 ============================== Department of Mathematics, Indian Institute of Science Education and Research, Pune, India [email protected] Department of Mathematics, Indian Institute of Science Education and Research, Pune, India [email protected] We show that every proper, smooth 2-knot is ambient isotopic to a polynomial embedding from ℝ^2 to ℝ^4. This representation is unique up to a polynomial isotopy. Using polynomial representation of classical long knots we show that all twist spun knots posses polynomial parametrization. We construct such parametrizations for few spun and twist spun knots and provide their 3 dimensional projections using Mathematica. 0.2in § INTRODUCTION Higher dimensional knot theory has attracted a lot of interest by knot theorists recently. By an n knot one means a smooth, proper, locally flat embedding of S^n in S^n+2. Moving from classical knots, the simplest situation is to understand knotting of S^2 in S^4 or in ℝ^4. In fact, studies are being done on any surface getting knotted inside four dimensional space and 2-knots are just a special case. Notion of ambient isotopy can be defined as in the case of classical knots. The central problem will remain the same: classify all 2-knots on the basis of ambient isotopy. Similar to the classical knots, a diagrammatic theory (<cit.>) and a braid theory (<cit.>) has been developed. Both theories have provided many interesting invariants for 2-knots. In case of classical knots we have encountered many numerical invariants, whose computations become easy using a suitable parametrization. Some interesting parametrizations are Fourier knots and Polynomial knots. While Fourier knots represent compact images, polynomial knots can provide only long knots. Polynomial knots are important because they bring an algebraic flavour into knot theory. One point compactification of a polynomial knot is a classical knot and projective closure of a polynomial knot is a knot in ℝP^3. Polynomial knots are extensively studied by Shastri <cit.>, Mishra-Prabhakar <cit.>, Durfee-Oshea <cit.>. Explicit polynomial representation for all knots up to 8 crossings can be found in <cit.>. In this paper, we would like to discuss about parametrizing 2-knots using polynomial functions. Since a 2-knot is a proper, smooth, locally flat embedding of S^2 in S^4, removing a point from S^2 and the corresponding image under the embedding in S^4 we get an embedding of ℝ^2 to ℝ^4. We will call this embedding a long 2-knot. We provide an explicit proof for the fact that every long 2-knot is ambient isotopic to a polynomial embedding of ℝ^2 in ℝ^4. We also show that any two polynomial embeddings of ℝ^2 in ℝ^4 which are ambient isotopic are in fact polynomially isotopic. The idea of the proof is similar to the classical case <cit.>. We also address the problem of explicitly constructing a polynomial parametrization for a given 2-knot. We begin with taking the simplest 2-knots which are obtained from classical knots, such as Spun Knots, first introduced by Emil Artin <cit.>. Then we move to twist spun knots <cit.>. Obtaining a polynomial parametrization for these classes of knots is relatively simple because we can use the existing polynomial representation of classical knots <cit.> which can be brought into the form suitable for spin construction or twist spin construction. 
In this paper we explicitly provide the steps to parametrize spun knots and twist spun knots using polynomial functions and present their 3 dimensional projections using Mathematica. This paper is organized as follows: Section 2 is divided into two subsections. Section 2.1 includes the basic definitions required to study the 2-knots. In Section 2.1 we provide a brief exposition to polynomial parametrization of classical knots. We show that for every classical long knot there exists a polynomial embedding ϕ:ℝ→ℝ^3 defined by ϕ(t)= (x(t),y(t), z(t)) such that for some closed interval [a,b] image of ϕ outside [a,b] contains no knotting, z(a)=z(b)=0 and z(t)>0 for t∈ (a,b). We use this embedding to construct spun knots. In Section 3, we provide a detailed proof that for all long 2-knots, polynomial representation exists and is unique up to a polynomial isotopy. In Sections 4.1 and 4.2 we provide detailed algorithm to write polynomial representation of spun knots and twist spun knots respectively. We write down explicit parametrization for spun trefoil and spun figure eight knot and also k twist spun trefoil for k=1,2,3 and 10. § PREREQUISITES §.§ 2-knots An embedding of a manifold K in a manifold M is called proper if ∂ K=K ∩∂ M. Let K be a properly embedded k-manifold in an m-manifold M. Then K is said to be locally flat at a point x ∈ K if there exists a regular neighbourhood N of x in M such that the pair (N,N ∩ K) is homeomorphic to a standard ball pair (D^m,D^k). We say that K is locally flat if it is locally flat at every point of M. Non-locally flat embeddings exist. An example can be seen in <cit.>. A locally flat embedding of S^2 in ℝ^4 or S^4 is called a 2-knot. A proper, locally flat embedding of R^2 in R^4 is referred as a long 2-knot. Note that every 2-knot will give rise to long 2-knot by removing one point from S^2 and the image of that point from S^4 and consider the restriction of the embedding. Two 2-knots K_1 and K_2 are said to be equivalent if there exists an orientation preserving diffeomorphism h:S^4→ S^4 such that h(K_1)=K_2. To study 2-knots, we will need to project them on suitable 3-dimensional space. To be able to understand the projection we restrict ourselves to long 2-knots, i.e., proper, locally flat embeddings of ℝ^2 in ℝ^4. Thus projections will amount to understanding the image of maps from ℝ^2→ℝ^3. Let ϕ: S^2 →ℝ^4 be a 2-knot. A projection π: ℝ^4→ℝ^3 is said to be a regular projection if the composition π∘ϕ: S^2→ℝ^3 is a smooth immersion. In general there may not exist a regular projection for a given 2-knot. However, it has been proved <cit.> that for every 2-knot K in ℝ^4 there exists a 2-knot K̃ which is ambient isotopic to K and for K̃ a regular projection exists. In a regular projection, the image of S^2 in ℝ^3 is an immersion, hence the derivative map is injective at each point of S^2. Thus the singular set (points where the map fails to be one one) will consist of finite number of double point sets and triple points. A 2-knot diagram consists of the projected image by a regular projection and a display of over/under information on each double point set and on each triple point. Figure <ref> gives an idea of how a 2-knot diagram looks like. Artin <cit.> introduced a way to construct 2-knots by spinning a knotted arc in ℝ^3 around ℝ^2. These types of knots are called Spun Knots. Spun knots are the simplest class of 2-knots. 
Their construction is described as follows: In ℝ^4, consider the upper half space ℝ^3_+={(x_1,x_2,x_3,0):x_3≥ 0 } with the boundary ∂ℝ^3_+=ℝ^2={(x_1,x_2,0,0)} We can spin any point x=(x_1,x_2,x_3,0) in ℝ^3_+ about ℝ^2 by the following formula: x_θ=(x_1,x_2,x_3 Cos θ,x_3 Sin θ) Now, to get a spin knot we choose an properly embedded arc K i.e K is embedded in ℝ^3_+ locally flatly and intersects ∂ℝ^3_+ transversely only at the endpoints. Then we spin ℝ^3_+ along ℝ^2 by 360^o and the locus of the arc gives a 2-knot given by, {(x_θ:x ∈ K,0 ≤θ≤ 2π)}. E.C. Zeeman in 1965 <cit.> generalized Artin's spinning construction to twist spinning . In this case we imagine the knotted part of K inside a 3-ball and rotate it on its axis while spinning the knot K about the xy plane. We can rotate the 3-ball as many times as we want on its axis in ℝ^3. If we rotate it k times then we will get a k-twist spun knot. It is hard to visualize a two dimensional knot in four dimensional space. So, we see its projection on some ℝ^3. A projection of spun trefoil knot in xzw-upper half space is shown in Figure <ref>. Another way to visualize 2-knots in 4-space is to consider its intersection by hyper planes. Then each cross section will be a 1-dimensional manifold which will be a knotted or unknotted S^1 at each cross section. A collection of such intersections is referred as a motion picture for the given 2-knot. A motion picture of the spun trefoil is shown in Figure 4. A detailed exposition on 2 knots be found in <cit.> and <cit.>. §.§ Classical polynomial knots It is easy to see that the ambient isotopy classes of classical knots (S^1⊂ S^3) is in bijection with the ambient isotopy classes of long knots, i.e., the smooth embeddings of ℝ in ℝ^3 which are proper and have asymptotic behaviour outside a closed interval. In 1994, the following theorems were proved: For every long knot K there exists a polynomial embedding t→ (f(t),g(t), h(t)) from ℝ to ℝ^3 which is isotopic to K. If two polynomial embeddings ϕ_0: ℝ→ℝ^3 and ϕ_1: ℝ→ℝ^3 represent isotopic knots then there exists a one parameter family of polynomial embeddings ϕ_t: ℝ→ℝ^3 for each t∈ [0,1]. We say that ϕ_0 and ϕ_1 are polynomially isotopic. Both these theorems were proved using Weierstrass' Approximation <cit.>. Later concrete polynomial embeddings representing few classes of knots were constructed and their degrees were estimated (eg., <cit.>, <cit.>). In all the constructions, the main idea is to fix a suitable knot diagram in mind and find real polynomials f(t) and g(t) such that the plane curve (x(t), Y(t))= (f(t),g(t)) provides the projection of the diagram. For this curve one can explicitly find the parameter pairs (s_i,t_i) for which f(s_i)=f(t_i) and g(s_i)=g(t_i) for each i. Then depending upon the over/under crossing information we can construct a polynomial h(t) which provides h(s_i)<h(t_i) at an under crossing and h(s_i)>h(t_i) at an over crossing. Note that in our construction we always project the knot on XY plane and Z coordinate provides the height function. All the crossings of the knot lie in the image of a closed interval [a,b]. We prove the following lemma which will be useful in Section 4. Given a long knot K there exists a polynomial embedding t→ (x(t),y(t),z(t)) from ℝ→ℝ^3 such that for some interval [a,b] z(a)=z(b)=0, z(t)>0 for t∈ (a,b) and there is no crossing outside [a,b], for example the image looks as in Figure <ref>. Proof. 
We construct polynomials f(t) and g(t) such that the curve defined by (x,y)= (f(t),g(t)) represents a projection of K on XY plane. We choose the polynomial h_0(t) which provides the over/under crossing information. Note that we can choose h_0(t) to be an even degree polynomial. Now by adding a sufficiently large real number R we can show that h(t)=h_0(t)+R has exactly two real roots a and b, h(t)>0 for t∈ (a,b) and all the double points of the projection lie inside [a,b]. This completes the proof. As a demonstration, here is the embedding t→ (t^3-3t, t^4-4t^2, -t^6+2t^5+4.24t^4-8.48t^3-3.24t^2+6.48t+12) whose Mathematica plot is shown in Figure <ref>. § POLYNOMIAL REPRESENTATION OF LONG 2-KNOTS In this section we generalize the results in <cit.> in the context of 2-knots. Every smooth long 2-knot (ℝ^2→ℝ^4) has a polynomial representation. Proof: Let F be long 2-knot in ℝ^4 given by a smooth embedding ϕ:ℝ^2 →ℝ^4. Let ϕ be defined by ϕ(t,s)=(α(t,s),β(t,s),γ(t,s),δ(t,s)) such that ϕ≡(α,β,γ):ℝ^2 →ℝ^3 is a generic immersion (having finite number of double point sets and triple points as the only singularities). Up to equivalence we can assume that the Jacobian matrix J= [ ∂α/∂ t ∂α/∂ s; ∂β/∂ t ∂β/∂ s; ∂γ/∂ t ∂γ/∂ s ] has rank 2 outside some closed region I_1× I_1,where I_1=[a,b]. Since, we have finitely many double point sets or triple points in the image of ϕ, we can choose another closed region I_2× I_2, where I_2=[c,d] such that ϕ(I_2× I_2) contains all the crossings. Let,M × M, where M=[M_1,M_2] such that M contains both the rectangles I_1× I_1 and I_2× I_2. Let, ϕ(M× M) is contained inside a 4-ball of radius R with ϕ(M_i,M_j) =R ; i,j=1,2. Let N × N, where N=[N_1,N_2] be such that ϕ(N× N) is contained inside a 4-ball of radius 2R with ϕ(N_i,N_j) =2R ; i,j=1,2. By a smooth reparametrization we can assume that [M_1,M_2]=[-1/2,1/2] and [N_1,N_2]=[-1,1]. Thus ϕ([-1/2,1/2] × [-1/2,1/2] ) and ϕ([-1,1]×[-1,1] ) are contained inside balls of radius R and 2R respectively with ϕ(±1/2,±1/2) =R and ϕ(± 1,± 1) =2R and the Jacobian matrix has rank 2 outside [-1/2,1/2] × [-1/2,1/2]. Now, consider the restriction of ϕ to I_1× I_1 i.e ϕ|_I_1× I_1:I_1× I_1→ℝ^4. Since, the set of embeddings from a compact, Hausdorff manifold to any any manifold forms an open set in the set of all smooth maps with C^1-topology, there exists an ε_0 > 0 such that ψ∈ N(ϕ,ε_0) implies that ψ is an embedding of I_1× I_1 in ℝ^3, where N(ϕ,ε_0)={ψ: (t,s) ∈ I_1× I_1sup{ψ(t,s)-ϕ(t,s),ψ'(t,s)-ϕ'(t,s)}< ε_0}. Let ε < min{R/2,ε_0}. For this ε, we can take an ε/2-approximation ψ using the Bernstein polynomial <cit.> or the Weierstrass' Approximation of two variables inside the square [-1,1]× [-1,1] <cit.>. Let ψ_1≡ (x_1,y_1,z_1,w_1) be the approximation,where x_1,y_1,z_1 and w_1 are polynomials in two variables t,s. Then, det(J) > 1-ε/2 in ([-1,-1/2]∪[1/2,1])×([-1,-1/2]∪[1/2,1]). Now, for δ_i∈ (0,ε/2),i=1,2,we can choose N ∈ℤ^+ large enough so that ψ≡ ( x_1+δ_1/2N+1t^2N+1+δ_2/2N+1 s^2N+1, y_1+δ_1/2N+1t^2N+1+δ_2/2N+1s^2N+1, z_1+δ_1/2N+1t^2N+1+δ_2/2N+1s^2N+1, w_1+δ_1/2N+1t^2N+1+δ_2/2N+1s^2N+1) ≡ (x,y,z,w) is an ε-approximation of ϕ inside [-1,1]× [-1,1] such that determinant of J is positive outside the square [-1,1]× [-1,1] as each partial derivative ∂ x/∂ t,∂ x/∂ s,∂ y/∂ t,∂ y/∂ s,∂ z/∂ t,∂ z/∂ s,∂ w/∂ t,∂ w/∂ s are positive outside [-1,1]× [-1,1]. Inside a compact region polynomial will approximate the knot. But outside also this map should behave well which means the knot should not create another knotting outside this compact ball. 
The above argument along with adding higher odd degree terms ensures that the long 2-knot become asymptotically flat outside the compact region. Now, we have to show that ψ:ℝ^2 →ℝ^4 is an embedding. First we check that ψ is an Immersion: Since, ϕ:ℝ^2 →ℝ^4 is embedding implies it is an injective immersion. That means Jacobian J(ϕ;t,s) has rank 2. Now, in [-1,1]×[-1,1] ψ'(t,s) >ϕ'(t,s)-ε/2 which implies Jacobian J(ψ;t,s) has rank 2 in [-1,1]×[-1,1]. Now, outside [-1,1]×[-1,1] , all partial derivatives are positive which implies the Jacobian J(ψ;t,s) has rank 2 outside [-1,1]×[-1,1] as well. Hence, ψ is an immersion. Next we show that ψ is Injective: Since the derivative of the Jacobian matrix for ψ has rank 2 in and outside [-1,1]×[-1,1], by inverse function it follows that ψ is injective. Two polynomial embeddings ψ_0≡ (x_0,y_0,z_0,w_0) and ψ_1≡ (x_1,y_1,z_1,w_1) of ℝ^2 in ℝ^4 are said to be polynomially isotopic if there exists an isotopy F:ℝ^2 × I →ℝ^4 between ψ_0 and ψ_1 such that F_u:=F(t,s,u) is a polynomial embedding for each u ∈ [0,1]. Notation: For ε∈ℝ^+,𝑁∈ℤ^+ and ϕ≡ (x,y,z,w):ℝ^2 →ℝ^4, let ϕ_ε,N:ℝ^2 →ℝ^4 given by ϕ_ε,N(t,s)=(x(t,s) ,y(t,s), z(t,s)+ε t^2N+1, w(t,s)+ε s^2N+1). We prove the following lemma. Let ϕ =(x,y,z,w):ℝ^2 →ℝ^4 be a polynomial embedding such that the map (x,y,z):ℝ^2 →ℝ^3 is a generic immersion. Then for each N ∈ℤ^+,∃ an ε>0 such that ϕ and ϕ_ε,N are polynomially isotopic. Proof: We have to show that the maps F_u:ℝ^2 →ℝ^4 given by F_u(t,s)=(x(t,s),y(t,s),z(t,s)+u ε t^2N+1,w(t,s)+u ε s^2N+1) are embedding for each u ∈ [0,1]. The Jacobian for each F_u J= [ ∂ x/∂ t ∂ x /∂ s; ∂ y /∂ t ∂ y /∂ s; ∂ z /∂ t+(2N+1)u ε t^2N ∂ z /∂ s; ∂ w /∂ t ∂ w /∂ s +(2N+1)u ε s^2N; ] has rank 2 since Jacobian for ϕ has rank 2. Hence, F_u is an immersion for each u ∈ [0,1]. Next we have to show F_u is injective for each u ∈ [0,1]. Let us use the notation x(t_i,s_i)=x_i,i Now,consider the set S={ (t_1,s_1),(t_2,s_2)∈ℝ^2 | (x_1,1,y_1,1)= (x_2,2,y_2,2) for (t_1,s_1) ≠ (t_2,s_2)} Since,ϕ is an embedding , z_1,1≠ z_2,2 when w_1,1=w_2,2 or, w_1,1≠ w_2,2 when z_1,1=z_2,2 Consider the first case. Now if we choose 0 < ε < (t_1,s_1),(t_2,s_2)∈ Smin{| z_1,1-z_2,2|/| t_1^2N+1-t_2^2N+1|} Then for this ε z_1,1+u ε t_1^2N+1≠ z_2,2+u ε t_2^2N+1 for all (t,s) ∈ S and u∈ [0,1]. Same argument is applied for the second case. Thus F_u is injective for all u ∈ [0,1] and therefore proves the lemma. This lemma allow us choose a polynomial embedding with higher odd degree in third and fourth coordinates that keeps the Jacobian positive outside that a compact region,say M × M. Let ψ_0≡ (x_0,y_0,z_0,w_0) and ψ_1≡ (x_1,y_1,z_1,w_1) be two polynomial embeddings of ℝ^2 in ℝ^4 which represent the isotopic 2-knots. Then ψ_0 and ψ_1 are polynomially isotopic. Proof: We can always perturb the knot so that ϕ_0:ℝ^2 →ℝ^3 and ϕ_1:ℝ^2 →ℝ^3 are generic immersions. Now , let A × A be the common compact region inside which both ϕ_0 and ϕ_1 have their double curve and triple points. Choose N× N ⊃ (M × M) ∪ (A × A) and image of N × N under ϕ_0 and ϕ_1 are 3-balls of radius R_1 and R_2 respectively. Let, R=max{R_1,R_2}. We can choose N × N=[-1/2,1/2]× [-1/2,1/2]. Since,ϕ_0 and ϕ_1 represent the same knot type , there exists an isotopy F:ℝ^2 × I →ℝ^4 such that F(t,s,0)=ϕ_0(t,s) and F(t,s,1)=ϕ_1(t,s). Now consider the restriction of the isotopy maps F_u on (I_1× I_1)× I where I_1=[-1,1]. 
Now each map F_u:(I_1× I_1)→ℝ^4 is on compact region (I_1× I_1) where we can choose a Weierstrass approximation of this map using Bernstein polynomial with two variables inside an ε_0 neighbourhood of F_u for some ε_0 >0 and adding higher odd order terms to the co-ordinates we can get polynomial embeddings P_u:(I_1× I_1)→ℝ^4 for each u ∈ [-1,1] whose Jacobian is positive outside. Therefore, P_0 and P_1 are polynomially isotopic. Now we can have isotopies between ϕ_0 and P_0 defined by H:ℝ^2 × I →ℝ^4 where H(t,s,u)=(1-u)ϕ_0(t,s) + u P_0(t,s). Now each H_u, defined by H_u(t,s)=(1-u) ϕ_0(t,s)+u P_0(t,s), is an ε- approximation of ϕ_0 inside I_1× I_1 for our choice of ε and P and H_u is increasing outside [-1/2,1/2]× [-1/2,1/2]. We can use similar arguments as in Theorem 3.1 to prove that H_u is an embedding of ℝ^2 in ℝ^4. Thus we can define a polynomial isotopy between ϕ_0 and P_0. Similarly we can show that ϕ_1 and P_1 are polynomially isotopic. Hence,ϕ_0 and ϕ_1 are polynomially isotopic. § POLYNOMIAL REPRESENTATION OF SOME FAMILIES OF 2-KNOTS: In Section 3, we showed that every long 2 knot can be realized as an image of ℝ^2 in ℝ^4 under a polynomial map which is an embedding. To get the compact 2 knot, i.e., a knotted S^2 in S^4 we will have to go through the one point compactification and the map will no longer remain polynomial. However, in this section we show that the spun knots and the twist spun knots can be realized as image of a polynomial map as the compact image. §.§ Spun knots: Given a classical knot K, there exist polynomials f(t,s), g(t,s), h(t,s) and p(t,s) in two variables t and s such that for some interval [a,b] the image of [a,b]× [0,2π] under the map ϕ: ℝ^2 →ℝ^4 defined by ϕ ((t,s))= (f(t,s), g(t,s), h(t,s), p(t,s)) is isotopic to the spun of K. Proof: Consider the long knot K. Choose a polynomial embedding ψ: t→ (f(t),g(t),h(t)) from ℝ to ℝ^3 such that the image of some closed interval [a,b] under ψ looks as in Figure 5, i.e., h(a)=h(b)=0 and h(t)>0 for t∈ (a, b). Now we have a knotted arc lying inside the upper half space with their end points place in XY plane. Now we consider the spinning construction and get a map F:[a,b]× [0, 2π]→ℝ^4 defined by F(t,s)=(f(t), g(t), h(t) Cos(s), h(t) Sin(s). Using point set topology argument one can prove that image of F in ℝ^2 is homeomorohic to S^2. Now we replace the trigonometric functions Cos(s) and Sin(s) by their Chebyshev polynomial approximation inside the interval [0, 2π] <cit.>. Let the Chebyshev approximation of Cos(s) and Sin(s) inside the interval [0,2π] be denoted by C(s) and S(s) respectively. Then choosing f(t,s)=f(t), g(t,s)=g(t), h(t,s)=h(t) C(s) and p(t,s)=h(t) S(s) serves the purpose. This completes the proof. Examples: 1. Spun trefoil knot: The following polynomial representation of trefoil is given by A.R Shastri in <cit.>. x(t) :t^3-3t y(t) :t^5-10t z(t) :t^4-4t^2 We will chnage z(t) to z(t):-t^4+4t^2+3 so that only two points are on xy-plane and the arc is on the upper-half space. The figure we get after plotting this in Mathematica is shown in Figure <ref>. Note that, the z coordinate is zero exactly at two point corresponding to t= -2.1554 and t=2.1554. Thus the spin construction will give us the map (t,s)→(x(t), y(t), z(t) Cos(s), z(t) Sin(s)) giving the spun trefoil as the image. Now to obtain the polynomial map, we need to aproximate Cosine and Sine function in [0, 2π ] using Chebyshev's Polynomials. The Chebyshev approximaton of Cosine and Sine functions respectively are as follows. 
C(s):= -0.0000193235 s^8+0.000485652 s^7-0.00399024 s^6+0.0081095 s^5 +0.0265068 s^4 +0.0163844 s^3-0.509175 s^2+0.00205416 s+0.999921, S(s):= 8.73651067430188 × 10^-19 s^8+0.000144829 s^7-0.00318496 s^6 +0.0220637 s^5-0.0322337 s^4-0.125592 s^3-0.0257364 s^2 +1.00614 s-0.000238495. Thus (t,s)→(x(t),y(t), z(t) C(s), z(t) S(s)) is a polynomial parametrization for the spun trefoil knot. A Mathematica plot of its projection on xzw-plane, i.e., {(x(t),z(t) C(s),z(t) S(s) ) | t ∈ [-2.1554, 2.1554], s ∈ [0,2π] } and a hyperplane cross-sectional view are shown in Figure <ref> and Figure <ref> respectively. 1. Spun figure eight knot: Using the parametrization for long figure eight knot given in <cit.> and following the same procedure as above we obtain a polynomial parametrization of the spun figure eight knot by (t,s)→ (x(t),y(t), z(t) C(s), z(t) S(s) ) where, x(t) =2/5(t^2-7) (t^2-10) t, y(t) = 1/10 t (t^2-4) (t^2-9) (t^2-12), z(t) C(s) =(20-13 t^2-t^4) (-0.0000193235 s^8+0.000485652 s^7-0.00399024 s^6 +0.0081095 s^5+0.0265068 s^4 +0.0163844 s^3-0.509175 s^2 +0.00205416 s+0.999921 ), z(t) S(s) =(20-13 t^2-t^4) (8.73651067430188 × 10^-19 s^8+0.000144829 s^7 -0.00318496 s^6+0.0220637 s^5-0.0322337 s^4-0.125592 s^3 -0.0257364 s^2+1.00614 s-0.000238495 ). Figure <ref> (b) and Figure <ref> represent their projection on xzw-plane and a hyperplane cross-sectional view respectively. §.§ Twist spun knots: In order to polynomially parameterize twist spun of a knot K we choose a polynomial parametrization for the classical long knot K in ℝ^3, K:ℝ→ℝ^3 t →{ f(t),g(t),h(t)} and consider a properly embedded arc which is the image of K(t) for t ∈ [a,b] i.e k is embedded in ℝ^3_+ locally flatly and intersects ∂ℝ^3_+ transversely only at the endpoints (f(a),g(a),0) and (f(b),g(b),0). Now we can find an interval [a',b'] ⊂ [a,b] where all the crossings of K will lie. To get a twist we choose the axis of rotation as a line segment PQ parallel to the xy plane, joining two points on the knotted arc, say P:=(f(t_1),g(t_1),h(t_1)) and Q:=(f(t_2),g(t_2),h(t_2)) , where [a',b'] ⊂ [t_1,t_2] and h(t_1)=h(t_2)=c. We have to choose c in such a way that the knotted part of the arc does not cross xy plane while rotating about PQ. Now, we will spin K about xy plane while rotating about PQ to get the twist spun knot. See Figure <ref>. The rotation matrix around PQ will be, 𝐑'=𝐓_𝐜*𝐑*𝐓_𝐜^-1=𝐓_𝐜*𝐑*𝐓_-𝐜, where 𝐓_𝐜 is the translation matrix along z axis which will send (x,y,z) to (x,y,z+c) and 𝐑 gives the rotation about the line P'Q' on xy plane, parallel to PQ, joining the points P'=(f(t_1),g(t_1),0) and Q'=(f(t_2),g(t_2),0) on the knotted arc. The matrix 𝐑 is given by Rodrigues' Rotation formula <cit.>, which is described as follows: If 𝐯 is a vector in ℝ^3 and 𝐤 is a unit vector describing an axis of rotation about which 𝐯 rotates by an angle ϕ according to the right hand rule, the Rodrigues formula for the rotated vector 𝐯_𝐫𝐨𝐭 is given by 𝐯_rot=𝐯 Cosϕ +(𝐤×𝐯)Sinϕ+𝐤·(𝐤·𝐯)(1-Cosϕ) In this situation 𝐤 is along the line joining (f(t_1),g(t_1),0) and (f(t_2),g(t_2),0). So, 𝐤=(f(t_2)-f(t_1)/N,g(t_2)-g(t_1)/N,0), where N=√((f(t_2)-f(t_1))^2+(g(t_2)-g(t_1))^2). For simplification, denote f(t_2)-f(t_1):=f_21 g(t_2)-g(t_1):=g_21 Then, 𝐤=(f_21/N,g_21/N,0), where N=√(f_21^2+g_21^2). Then the rotation matrix through an angle ϕ counterclockwise about the axis k is 𝐑=𝐈+𝐒𝐢𝐧ϕ 𝐊+(1-𝐂𝐨𝐬ϕ) 𝐊^2 where, 𝐊= [ 0 -k_z k_y; k_z 0 -k_x; -k_y k_x 0 ] = [ 0 0 g_21N; 0 0 -f_21N; -g_21N f_21N 0 ]. 
So, in our case 𝐑= [ (f_21^2 + g_21^2 Cosϕ)/N^2 f_21 g_21 (1-Cosϕ)/N^2 g_21 Sinϕ/N; f_21 g_21 (1-Cosϕ)/N^2 (f_21^2 Cosϕ + g_21^2)/N^2 -f_21 Sinϕ/N; -g_21 Sinϕ/N f_21 Sinϕ/N Cosϕ ]. Thus the rotation matrix about PQ will be 𝐑'=𝐓_𝐜*𝐑*𝐓_𝐜^-1=𝐓_𝐜*𝐑*𝐓_-𝐜, where 𝐑 is written in homogeneous form as [ (f_21^2 + g_21^2 Cosϕ)/N^2 f_21 g_21 (1-Cosϕ)/N^2 g_21 Sinϕ/N 0; f_21 g_21 (1-Cosϕ)/N^2 (f_21^2 Cosϕ + g_21^2)/N^2 -f_21 Sinϕ/N 0; -g_21 Sinϕ/N f_21 Sinϕ/N Cosϕ 0; 0 0 0 1 ] and 𝐓_𝐜 is the translation matrix along the z axis which sends (x,y,z) to (x,y,z+c), given by T_c= [ 1 0 0 0; 0 1 0 0; 0 0 1 c; 0 0 0 1 ]. Then, 𝐑'= [ (f_21^2 + g_21^2 Cosϕ)/N^2 f_21 g_21 (1-Cosϕ)/N^2 g_21 Sinϕ/N -c g_21 Sinϕ/N; f_21 g_21 (1-Cosϕ)/N^2 (f_21^2 Cosϕ + g_21^2)/N^2 -f_21 Sinϕ/N c f_21 Sinϕ/N; -g_21 Sinϕ/N f_21 Sinϕ/N Cosϕ -c Cosϕ+c; 0 0 0 1 ]. This rotation about PQ is given by the parametrization (t,ϕ) ⟶{(f'(t,ϕ),g'(t,ϕ),h'(t,ϕ)) | a ≤ t ≤ b, 0 ≤ ϕ < 2π }, where f'(t,ϕ) := [(f_21^2 + g_21^2 Cosϕ) 𝐟(𝐭)+f_21 g_21 (1-Cosϕ) 𝐠(𝐭)+N g_21 Sinϕ (𝐡(𝐭)-c)]/N^2, g'(t,ϕ) := [f_21 g_21 (1-Cosϕ) 𝐟(𝐭)+(f_21^2 Cosϕ + g_21^2) 𝐠(𝐭)+N f_21 Sinϕ (c-𝐡(𝐭))]/N^2, h'(t,ϕ) := [-g_21 Sinϕ 𝐟(𝐭)+f_21 Sinϕ 𝐠(𝐭)+N Cosϕ 𝐡(𝐭)+N c (1-Cosϕ)]/N, where N=√(f_21^2+g_21^2). Now, the only thing to take care of is that there should be no rotation outside the endpoints of the axis. We achieve this by multiplying with a bump function. Construction of the bump function: To define the bump function B(t), we first define the function F(t) as F(t)= e^-1/t for t > 0 and F(t)= 0 for t ≤ 0. Then B(t) is defined as B(t)=F(d_2-t^2)/[F(t^2-d_1)+F(d_2-t^2)], where we choose d_1,d_2∈ℝ^+ in such a way that B(t)= 1 for t ∈ [a',b'] (which contains all crossings), B(t)= 0 for t ∈ [a,t_1] ∪ [t_2,b], and B(t)∈ (0,1) otherwise. See Figure <ref>. Then, in [a,b] define f(t,ϕ)= B(t) f'(t,ϕ)+(1-B(t)) f(t), g(t,ϕ)= B(t) g'(t,ϕ)+(1-B(t)) g(t), h(t,ϕ)= B(t) h'(t,ϕ)+(1-B(t)) h(t). Now we spin it about the xy plane in ℝ^4 while rotating about PQ in ℝ^3 to get a twist spun knot. The rotation angle about PQ is ϕ=k θ, where θ is the spinning angle about the xy plane. Hence, the parametrization for the k-twist spun knot will be (t,θ) ⟶{(f(t,kθ), g(t,kθ), h(t,kθ) Cos θ, h(t,kθ) Sin θ) | a ≤ t ≤ b, 0 ≤ θ < 2π}. Now, f,g,h are linear combinations of the functions f,g,h, Cosine and Sine. Replacing the Cosine and Sine functions with their corresponding Chebyshev polynomial approximations in [0,2π], we obtain a polynomial parametrization of twist spun knots.
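As a computational companion to this construction, the sketch below assembles the k-twist spun parametrization for the polynomial arc used in the next example: it evaluates the rotated arc (f', g', h') about PQ, blends it with the original arc through the bump function B(t), spins the result about the xy plane, and replaces Cosine and Sine by Chebyshev fits on [0, 2π]. The interval endpoints, the constants d_1 = 3.8 and d_2 = 4.8, and the degree-8 fits are taken from the example that follows, while the sampling grid and function names are assumptions of the illustration (the paper's own plots are produced with Mathematica).

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Example arc: the trefoil parametrization of Section 4 (z lifted so h >= 0 on [a, b]).
f = lambda t: t**3 - 3*t
g = lambda t: t**5 - 10*t
h = lambda t: -t**4 + 4*t**2 + 16

a, b = -2.54404, 2.54404          # endpoints of the knotted arc
t1, t2 = -2.19, 2.19              # endpoints of the twisting axis PQ
c = h(t2)                         # height of PQ above the xy plane
f21, g21 = f(t2) - f(t1), g(t2) - g(t1)
N = np.hypot(f21, g21)

def F(x):                         # smooth cutoff used to build the bump function
    return np.where(x > 0, np.exp(-1.0 / np.where(x > 0, x, 1.0)), 0.0)

def B(t, d1=3.8, d2=4.8):         # B = 1 around the crossings, 0 near the arc ends
    return F(d2 - t**2) / (F(t**2 - d1) + F(d2 - t**2))

# Chebyshev fits of cos and sin on [0, 2*pi] (degree 8, as in the examples below).
s_grid = np.linspace(0.0, 2*np.pi, 200)
cosC = C.Chebyshev.fit(s_grid, np.cos(s_grid), deg=8)
sinC = C.Chebyshev.fit(s_grid, np.sin(s_grid), deg=8)

def rotated_arc(t, phi):
    """(f', g', h'): the arc rotated by angle phi about the axis PQ."""
    cp, sp = cosC(phi), sinC(phi)
    fp = ((f21**2 + g21**2*cp)*f(t) + f21*g21*(1-cp)*g(t) + N*g21*sp*(h(t)-c)) / N**2
    gp = (f21*g21*(1-cp)*f(t) + (f21**2*cp + g21**2)*g(t) + N*f21*sp*(c-h(t))) / N**2
    hp = (-g21*sp*f(t) + f21*sp*g(t) + N*cp*h(t) + N*c*(1-cp)) / N
    return fp, gp, hp

def twist_spun(t, theta, k=2):
    """Point of the k-twist spun knot in R^4 for parameters (t, theta)."""
    fp, gp, hp = rotated_arc(t, k*theta)
    bt = B(t)
    fb, gb, hb = bt*fp + (1-bt)*f(t), bt*gp + (1-bt)*g(t), bt*hp + (1-bt)*h(t)
    return np.array([fb, gb, hb*cosC(theta), hb*sinC(theta)])

# Sample a small grid of points on the 2-twist spun trefoil.
pts = np.stack([twist_spun(t, th, k=2)
                for t in np.linspace(a, b, 5)
                for th in np.linspace(0.0, 2*np.pi, 4)])
print(pts.shape)   # (20, 4)
```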
and N=√(f[2.19]^2+g[2.19]^2) This rotation about PQ is given by the parametrization (See Figure <ref>) : (t,ϕ) ⟶{(f'(t,ϕ),g'(t,ϕ),h'(t,ϕ)) | a ≤ t ≤ b 0 ≤ ϕ < 2π } where, f'(t,ϕ) :(f[2.19]^2 + g[2.19]^2 Cosϕ) 𝐟(𝐭)+f[2.19] g[2.19] (1-Cosϕ) 𝐠(𝐭)+(𝐡(𝐭)-c)N g[2.19] Sinϕ/N^2 g'(t,ϕ) :f[2.19] g[2.19] (1-Cosϕ) 𝐟(𝐭)+(f[2.19]^2 Cosϕ + g[2.19]^2) 𝐠(𝐭)+(c-𝐡(𝐭))N g[2.19] Sinϕ/N^2 h'(t,ϕ) :-g[2.19] Sinϕ 𝐟(𝐭)+f[2.19]Sinϕ 𝐠(𝐭)+N 𝐡(𝐭) Cosϕ+N c (1-Cosϕ)/N We define the bump function as: B(t)=F(4.8-t^2)/F(t^2-3.8)+F(4.8-t^2) The restricted rotation about PQ (See Figure <ref> ) is given by (t,θ) ⟶{(f(t,ϕ),g(t,ϕ),h(t,ϕ) ) | -2.54404 ≤ t ≤ 2.54404 0 ≤ ϕ < 2π} where, f(t,ϕ)= B(t) f'(t,ϕ)+(1-B(t)) f(t) g(t,ϕ)= B(t) g'(t,ϕ)+(1-B(t)) g(t) h(t,ϕ)= B(t) h'(t,ϕ)+(1-B(t)) h(t) . Hence,the parametrization for k-twist spun trefoil knot will be (t,θ) ⟶{(f(t,kθ),g(t,kθ),h(t,kθ) Cosθ ,h(t,kθ) Sinθ) | -2.54404 ≤ t ≤ 2.54404 0 ≤ θ < 2π} Some projections of the k twist spun trefoil for k=2,3,10 in ℝ^3 are shown in Figures <ref>,<ref>,<ref> respectively. § CONCLUSION: A more general spinning construction is described by R.A.Litherland <cit.> called Deform Spinning, where roll-spun knot and twist-spun knots are special cases. We would like to find a way to parameterize these 2 knots since these are obtained by deforming classical knotted arc using some transformations in ℝ^3 while spinning. Finding a general method to parameterize any 2 knot will be a goal in the next project. 10 Abutheraa Abutheraa, M. A., Lester, D., & Ardil, C. (Ed.) (2007). Computable Function Representations Using Effective Chebyshev Polynomial. In C. Ardil (Ed.), Conference of the World-Academy-of-Science-Engineering-and-Technology (Vol. 25, pp. 103-109). World Academy of Science, Engineering and Technology. Artin Artin, E. (1925). Zur Isotopie zweidimensionaler Flächen imR4. Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, 4, 174-177. Brown Brown, A.N. (2004). Examples of polynomial knots. Carter-Saito Carter, J.S., & Saito, M. (1997). Knotted Surfaces and Their Diagrams. durfoshea Durfee, A., & O'Shea, D. (2006). Polynomial knots. arXiv preprint math/0612803. Gluck Gluck, H. (1961). THE EMBEDDING OF TWO-SPHERES IN THE FOUR-SPHERE. Bulletin of the American Mathematical Society, 67, 586-589. Litherland R.A. Litherland, Deforming twist-spun knots, Trans. Amer. Math. Soc. 250 (1979), 311–331 Prabhakar Mishra, R., & M.Prabhakar (2008). Polynomial representation for long knots. arXiv: Geometric Topology. Mishra Mishra, R. (2000). Minimal Degree Sequence for Torus Knots. Journal of Knot Theory and Its Ramifications, 09, 759-769.." Shastri Shastri A.R,Polynomial Representation of knots, Tohoku Math. J. (2) 44 (1992) 11-17. shukla Shukla, R. On polynomial isotopy of knot-types. Proc. Indian Acad. Sci. (Math. Sci.) 104, 543–548 (1994). Kamada Kamada, S. (2002). Braid and Knot Theory in Dimension Four. Rodrigues Rodrigues, O. (1840). Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire. Journal de mathématiques pures et appliquées, 5, 380-440. Stancu Stancu, D. D. (1963). A Method for Obtaining Polynomials of Bernstein type of two Variables. The American Mathematical Monthly, 70(3), 260–264. https://doi.org/10.2307/2313121 Rudin Rudin, W. (1953). Principles of mathematical analysis, International Series of Pure and Applied Mathematics. Zeeman Zeeman, E.C. (1965). Twisting spun knots. 
Transactions of the American Mathematical Society, 115, 471-495.
http://arxiv.org/abs/2307.04323v1
20230710033248
Optimal $(2,δ)$ Locally Repairable Codes via Punctured Simplex Codes
[ "Dong Wang", "Weijun Fang", "Sihuang Hu" ]
cs.IT
[ "cs.IT", "math.IT" ]
Optimal (2,δ) Locally Repairable Codes via Punctured Simplex Codes Dong Wang12, Weijun Fang124, and Sihuang Hu124 1 Key Laboratory of Cryptologic Technology and Information Security, Ministry of Education, Shandong University, Qingdao, 266237, China, [email protected] 2 School of Cyber Science and Technology, Shandong University, Qingdao, 266237, China, {fwj, husihuang}@sdu.edu.cn 4 Quancheng Laboratory, Jinan 250103, China ================================================================================================================================================================================================================================================================================================================================================================================================================== This research is supported in part by National Key Research and Development Program of China under Grant Nos. 2021YFA1001000 and 2022YFA1004900, the National Natural Science Foundation of China under Grant No. 62201322, the Natural Science Foundation of Shandong under Grant No. ZR2022QA031. (Corresponding Author: Weijun Fang) Locally repairable codes (LRCs) have attracted a lot of attention due to their applications in distributed storage systems. In this paper, we provide new constructions of optimal (2, δ)-LRCs. Firstly, by the techniques of finite geometry, we present a sufficient condition to guarantee a punctured simplex code to be a (2, δ)-LRC. Secondly, by using characteristic sums over finite fields and Krawtchouk polynomials, we construct several families of LRCs with new parameters. All of our new LRCs are optimal with respect to the generalized Cadambe-Mazumdar bound. § INTRODUCTION In order to ensure the reliability of nodes in large-scale distributed storage systems, the concept of locally repairable codes was first proposed in <cit.>. Let [n]={1,2,...,n}, for a linear code C of length n over the finite field , a code symbol c_i of C has locality r if there exists a subset R_i⊆ [n] such that i∈ R_i,|R_i|≤ r+1 and c_i is a linear combination of {c_j}_j∈ R_i\{i} over . If each symbol of a codeword in C has locality r, then C is called a locally repairable code with locality r or an r-LRC. However, when multiple node failures happens in a distributed storage system, the r-LRCs can not recover failed nodes successfully. To address this problem, Prakash et al. <cit.> extended the concept of r-LRCs to (r,δ)-LRCs which can tolerate any δ-1 erasures. A code symbol c_i of C has locality (r,δ) if there exists a subset R_i⊆ [n] such that i∈ R_i,|R_i|≤ r+δ-1 and d(C|_R_i)≥δ where C|_R_i is the punctured code on the set [n]\ R_i. The code C is called an (r,δ)-LRC if all code symbols have locality (r,δ). Obviously when δ=2,(r,δ)-LRCs reduce to r-LRCs. §.§ Known Results about (r,δ)-LRCs In <cit.>, analogous to the classical Singleton bound for general codes, the following Singleton-type bound for an (r,δ)-LRC with parameters [n,k,d] is given as d≤ n-k+1-(⌈ k/r⌉ -1)(δ -1). If an (r,δ)-LRC achieves the Singleton-type bound (singletonBound) with equality, then the code is called a Singleton-optimal (r,δ)-LRC. Due to its interesting algebraic structures and practical applications in distributed storage systems, several constructions of Singleton-optimal (r,δ)-LRCs have been proposed in <cit.>. Note that the Singleton-type bound is independent of the field size. 
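Since the Singleton-type bound is a simple closed form, it can be evaluated mechanically; the small helper below merely restates the displayed inequality, and the function names and example parameters are illustrative assumptions.

```python
from math import ceil

def singleton_type_bound(n, k, r, delta):
    """Right-hand side of the Singleton-type bound:
    d <= n - k + 1 - (ceil(k / r) - 1) * (delta - 1)."""
    return n - k + 1 - (ceil(k / r) - 1) * (delta - 1)

def attains_singleton_bound(n, k, d, r, delta):
    return d == singleton_type_bound(n, k, r, delta)

# Example: for n = 12, k = 6, r = 3, delta = 2 the bound reads d <= 6,
# so a [12, 6, 6] code with (3, 2)-locality would be Singleton-optimal.
print(singleton_type_bound(12, 6, 3, 2))        # -> 6
print(attains_singleton_bound(12, 6, 6, 3, 2))  # -> True
```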
In <cit.>, Cadambe and Mazumdar derived the first field-dependent bound for q-ary r-LRCs with parameters [n,k,d], k≤min_s∈ℤ_+{sr+k_opt^(q)(n-s(r+1),d)}, where k_opt^(q)(n,d) is the maximum dimension of a q-ary linear code of length n and minimum distance d. The generalized Cadambe-Mazumdar bound was considered in <cit.>, which stated that for a q-ary (r,δ)-LRC with parameters [n,k,d], k≤min_s∈ℤ_+{sr+k_opt^(q)(n-s(r+δ-1),d)}. We call a code achieving the generalized C-M bound (CMbound_rdeltaLRC) with equality as a k-optimal (r,δ)-LRC. In <cit.>, the authors proved that the simplex code is a k-optimal 2-LRC. By deleting some columns from the generator matrix of the simplex code, several new families of k-optimal LRCs with localities 2 or 3 were proposed in <cit.> and <cit.>. In <cit.>, Luo et al. presented several binary k-optimal 2-LRCs by deleting or adding some columns from a binary simplex code and used character sums to determine their parameters. Motivated by works of <cit.>, Luo et al. constructed a family of p-ary linear codes and demonstrated that they are k-optimal 2-LRCs in some cases. Tan et al.<cit.> determined the locality of some known linear codes and showed that many of these codes are k-optimal. §.§ Our Contributions and Techniques In this paper, we focus on new constructions of (2, δ)-LRCs. We follow the construction of linear codes presented in <cit.>. This construction has been applied to secret sharing schemes or LRCs by many researchers <cit.>. It is more intuitive to describe the properties of (2,δ)-LRCs in the language of finite geometry, this converts the analysis of locality into how many lines pass through a point in projective geometry. From the finite geometry point of view, we give a simple but useful sufficient condition to guarantee a linear code to be a (2,δ)-LRC (see Theorem suffi_condi_lrc). We generalize some results proposed by Luo et al.<cit.> (see Theorems thm_generaliz_luo, thm_generaliz_luo_loc and loc_gen_luo) and Silberstein et al.<cit.> (see Theorem gen_wt2). In particular, we extend the p-ary linear codes presented in <cit.> to the q-ary linear codes, where p is a prime and q is the power of p, and determine their locality in some cases. Motivated by Silberstein's work on r-LRCs, we utilize Krawtchouk polynomials to determine the parameters of some punctured simplex codes. Specifically speaking, if the punctured columns from the generator matrix of a simplex code have certain weight, then determining the minimum distance of the punctured simplex code is equivalent to determining the minimum value of Krawtchouk polynomials. Then we construct two infinite families of k-optimal (2,q)-LRCs. Our constructions are generalizations of the results of <cit.>. Moreover, all our new LRCs are k-optimal with respect to the generalized C-M bound. The rest of this paper is organized as follows. In Section II, we recall a general construction of linear codes given by Ding et al.<cit.>, and some basic notation and results on finite geometry and Krawtchouk polynomials. In Section III, we consider (2,δ)-LRCs and present three infinite families of k-optimal (2,δ)-LRCs. Section IV concludes the paper. § PRELIMINARIES §.§ A General Construction of Linear Codes In this subsection, we describe a general construction of linear code which was given by Ding et al.<cit.>. Let m be a positive integer, q a power of some prime p, 𝔽 _q the finite field containing q elements and ^m the vector space over of dimension m. 
For any vector x= (x_1,x_2,⋯,x_m)∈^m, the Hamming weight of x is given as wt(x)=|{1≤ i≤ m:x_i≠ 0}|. We let tr_q^m/q(·) be the trace function from 𝔽_q^m to and tr(·) the absolute trace function from to . Ding et al.<cit.> established a general construction of linear codes, which says that if D={d_1,...,d_n} is a nonempty subset of 𝔽 _q^m, a q-ary linear code of length n is constructed by C_D={c_x=(tr_q^m/q(xd_1),⋯,tr_q^m/q(xd_n)):x∈𝔽 _q^m}. If D={ d_1, d_2, ⋯, d_n} is a nonempty subset of 𝔽^m_q, then the above construction (<ref>) can be modified to C_D={c_x=(x· d_1,...,x· d_n):x∈^m}, where x· d_i is the Euclidean inner product of x and d_i. Using character sums over finite fields, we can compute the parameters of those constructed codes. Assume that ω_p is the primitive p^th root of unity in the complex number field ℂ, then for a∈, the additive character χ_a from to is defined as χ_a(c)=ω_p^tr(ac), for all c∈. If a=0, then ∑_c∈χ_a(c)=q; otherwise ∑_c∈χ_a(c)=0 (<cit.>). The following two bounds are useful in subsequent sections. Let C be a q-ary [n,k,d] linear code, then n≥∑_i=0^k-1⌈d/q^i⌉. A linear code achieving the Griesmer bound with equality is called a Griesmer code. Let C be a q-ary code with M codewords, length n and minimum distance d. If qd>(q-1)n, then M≤qd/qd-(q-1)n. §.§ Finite Geometry The projective space PG(m-1, q) over 𝔽_q is the geometry whose points, lines, planes, ⋯ , hyperplanes are the subspaces of 𝔽^m_q of dimension 1, 2, 3, ⋯ , m-1. So, we also use a nonzero vector g ∈𝔽^m_q to denote the point in PG(m-1, q). Two nonzero vectors g_1 and g_2 are the same point in PG(m-1, q) if and only if g_1=λ g_2 for some λ∈𝔽^*_q. Note that when we replace g_i by λ g_i for some λ∈𝔽^*_q, the parameters of the code given by Eq. (<ref>) do not change. So we rewrite the code construction given in Eq. (<ref>) via the language of projective geometry as follows. Suppose D={ d_1, d_2, ⋯, d_n} is a nonempty subset of PG(m-1,q), then a q-ary linear code of length n is constructed by C_D={c_x=(x· d_1,...,x· d_n):x∈^m}. In this paper, we will use Eq. (<ref>) to construct optimal LRCs. Note that when D=PG(m-1,q), C_D is the famous simplex code. Thus in this sense, for general nonempty subset D, the code C_D is the punctured code of the simplex code. We let the points in PG(m-1,q) be the vectors in ^m that the first nonzero coordinate is 1 for simplicity. If A is a nonempty subset of [m], we let P_[m]=PG(m-1,q) and P_A be the subset of PG(m-1,q) that the coordinates outside of A are 0. It is easy to see that |P_A|=q^|A|-1/q-1,⋃_α∈^*α P_A=L_A^* where α P_A={αa:a∈ P_A} and L_A={(a_1,...,a_m)∈𝔽 _q^m:a_i=0 if i∉ A}. For any two subsets A_1,A_2 of [m], the intersection of P_A_1 and P_A_2 is equal to P_A_1∩ A_2, where P_∅ =∅ . §.§ Krawtchouk Polynomials In this subsection, we briefly review some basic results of Krawtchouk polynomials. Given positive integers n,q, and suppose 0 ≤ k ≤ n, the Krawtchouk polynomial of degree k is defined as<cit.> K_k(x;n,q)=K_k(x)=∑_j=0^k(-1)^jxjn-xk-j(q-1)^k-j. The following lemma is a slight modification of <cit.>. Let a and s be positive integers and x be a vector of length m over with wt(x)=a. Then we have ∑_y∈^m,wt(y)=sω _p^tr(x·y)=K_s(a;m,q). § (2,Δ)-LRCS FROM PUNCTURED SIMPLEX CODES In this section, we will provide several constructions of LRCs via punctured simplex codes. Firstly, we give a simple lemma which will be used to determine the locality of linear codes. Let δ≥ 2 be an integer, g_1,⋯,g_δ+1 be δ+1 distinct collinear points in PG(m-1,q). 
Let C be the linear code with the generator matrix G=[g_1 ... g_δ+1], then C is a q-ary [δ+1, 2, δ]-MDS code. Since any two of g_1,⋯,g_δ+1 are linearly independent and any three of g_1,⋯,g_δ+1 are linearly dependent, we have rank(G)=2. Thus (C)=2 and (C^⊥)=δ-1. On the other hand, G is the parity-check matrix of C^⊥, thus d(C^⊥) ≥ 3. By the Singleton bound, d(C^⊥) ≤δ+1-(δ-1)+1=3, hence C^⊥ is a [δ+1,δ-1,3]-MDS code. So C is a [δ+1,2,δ]-MDS code. In the following, we give a sufficient condition which guarantees that a punctured simplex code is a (2,δ)-LRC. Suppose 2≤δ≤ q and D is a subset of PG(m-1, q). If |D|≤q^m-1-1/q-1(q+1-δ)-1, then the code C_D^c given in Eq. (<ref>) is a q-ary (2,δ)-LRC, where D^c=PG(m-1,q)∖ D. For any point g∈ D^c, there are q^m-1-1/q-1 lines in PG(m-1,q) containing g, and each line has q+1 points. Since |D|≤q^m-1-1/q-1(q+1-δ)-1, by the Pigeonhole Principle, there exists at least one line L containing g, such that there are δ+1 points g_1= g, g_2, ⋯, g_δ+1 of L belonging to the subset D^c. By Lemma <ref>, d((C_D^c)_|E)=δ, where E={ g_1, g_2, ⋯, g_δ+1}. Hence the code C_D^c has (2, δ)-locality. When D=∅, then C_D^c is the q-ary simplex code. From Theorem <ref>, we know that the q-ary simplex codes have locality (2,q). In particular, to ensure the code C_D^c to be a 2-LRC, it only needs to satisfy that |D| ≤ q^m-1-2. Thus our method is simpler than that in <cit.>. Hyun et al.<cit.> constructed infinite families of binary Griesmer codes punctured by unions of projective spaces, and Luo et al.<cit.> obtained similar results of linear codes over 𝔽_p. In the following, we extend their results to general q-ary codes. Let m,t>1 be positive integers. Assume that A_1,...,A_t are nonempty subsets of [m] satisfying A_i∩ A_j=∅ for any i≠ j∈[t]. Let D=∪ _i=1^tP_A_i and D^c=P_[m]\ D, then the code C_D^c defined by Eq. (<ref>) is a q-ary linear code with parameters [q^m-1/q-1-∑_i=1^tq^|A_i|-t/q-1,m,q^m-1-∑_i=1^tq^|A_i|-1]. Furthermore, assume that |A_1|=...=|A_i_1|=s_1,|A_i_1+1|=...=|A_i_2|=s_2, ...,|A_i_u-1+1|=...=|A_i_u|=s_u where s_1<s_2<...<s_u. If max{i_1,i_2-i_1,...,i_u-i_u-1}≤ q-1, then C_D^c is a Griesmer code. Note that P_A_i∩ P_A_j=P_A_i∩ A_j=∅ for any i≠ j∈[t], so we have |D|=∑_i=1^t|P_A_i|=∑_i=1^tq^|A_i|-t/q-1, thus the length of C_D^c is q^m-1/q-1-∑_i=1^tq^|A_i|-t/q-1. Let x=(x_1,...,x_m) be any nonzero vector of ^m, then wt(c_x) =|D^c|-|{d∈ D^c|x· d=0}| =|D^c|-∑_d∈ D^c1/q∑_y∈ω_p^tr(yx· d) =q-1/q|D^c|-1/q∑_d∈ D^c∑_y∈^*ω_p^tr(yx· d) =q-1/q|D^c|-1/q∑_d∈^m^*ω_p^tr(x· d)+1/q∑_d∈ D∑_y∈^*ω_p^tr(yx· d). Note that ∑_d∈^m^*ω_p^tr(x· d) =∑_d_1∈⋯∑_d_m∈ω_p^tr(x_1d_1)⋯ω_p^tr(x_md_m)-1 =∏_i=1^m(∑_d_i∈ω_p^tr(x_id_i))-1=-1, where d=(d_1,...,d_m) and ∑_d∈ D∑_y∈^*ω_p^tr(yx· d) =∑_i=1^t∑_d∈ P_A_i∑_y∈^*ω_p^tr(x·(yd)) =∑_i=1^t∑_d∈^|A_i|^*ω_p^tr(x_A_i·d). As ∑_d∈^|A_i|^*ω_p^tr(x_A_i·d)= q^|A_i|-1,x_A_i=0 -1,x_A_i≠0 for 1≤ i≤ t, then the minimum weight is min_x∈^m^*wt(c_x)=q-1/q|D^c|+1/q-t/q =q^m-1-∑_i=1^tq^|A_i|-1. It is easy to prove that q^m-∑_i=1^tq^|A_i|>0 since ∑_i=1^t |A_i|≤ m. Thus wt(c_x)=0 if and only if x= 0, hence the dimension is m. Suppose ∑_i=1^tq^|A_i|-1=∑_i=g^hb_iq^i, where 0≤ b_i≤ q-1,i=g,...,h. Then ∑_i=0^m-1⌈q^m-1-∑_j=1^tq^|A_j|-1/q^i⌉=∑_i=0^m-1⌈q^m-1-∑_j=g^hb_jq^j/q^i⌉ =∑_i=0^m-1q^m-1-i-∑_i=0^g∑_j=g^hb_jq^j-i-∑_i=g+1^h∑_j=i^hb_iq^j-i =q^m-1/q-1-∑_i=1^tq^|A_i|-∑_i=g^hb_i/q-1. As max{i_1,i_2-i_1,...,i_u-i_u-1}≤ q-1,∑_i=g^hb_i=i_1+i_2-i_1+...+i_u-i_u-1=i_u=t. The length of C_D^c is q^m-1/q-1-|D|=q^m-1/q-1-∑_i=1^tq^|A_i|-∑_i=g^hb_i/q-1, hence the code C_D^c is a Griesmer code. 
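To make the construction concrete, the following brute-force Python sketch builds a small instance of C_D^c over 𝔽_2 and checks its parameters against Theorem thm_generaliz_luo and the Griesmer bound. The instance q=2, m=5, A_1={1,2,3}, A_2={4,5} is chosen purely for illustration and does not appear in the text.

```python
from itertools import product
from math import ceil

# Illustrative instance (our choice): q = 2, m = 5, A_1 = {1,2,3}, A_2 = {4,5}.
q, m = 2, 5
A = [{0, 1, 2}, {3, 4}]          # 0-based coordinate sets, pairwise disjoint, distinct sizes

# For q = 2 the points of PG(m-1, q) are simply the nonzero vectors of F_2^m.
points = [v for v in product(range(q), repeat=m) if any(v)]

def supported_in(v, S):
    """True if every nonzero coordinate of v lies inside S (i.e. v belongs to P_S)."""
    return all(i in S for i, x in enumerate(v) if x)

# D = P_{A_1} U P_{A_2}; the code is defined on the complement D^c.
Dc = [v for v in points if not any(supported_in(v, S) for S in A)]

# Generate every codeword c_x = (x . d)_{d in D^c} and record its Hamming weight.
weights = set()
for x in product(range(q), repeat=m):
    if any(x):
        c = [sum(a * b for a, b in zip(x, d)) % q for d in Dc]
        weights.add(sum(1 for s in c if s))

n, k, d = len(Dc), m, min(weights)
print([n, k, d])                                    # expect [21, 5, 10]
assert d == q**(m - 1) - sum(q**(len(S) - 1) for S in A)

# A Griesmer code meets the Griesmer bound with equality.
assert n == sum(ceil(d / q**i) for i in range(k))
```

For q > 2 one would additionally fix projective representatives (first nonzero coordinate equal to 1), which the binary case sidesteps; a computer-algebra system such as SageMath handles general q more conveniently.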
We now investigate the locality of the codes given in Theorem thm_generaliz_luo. Keep the notation as in Theorem thm_generaliz_luo. If t=2 and |A_i|≤ m-2 for all i∈[t], then the code C_D^c has locality (2,q); if t≥ 3 and m≥ 4, then the code C_D^c has locality (2,q); if m>t=2,q>2 and |A_1|=m-1, then the code C_D^c has locality (2,q-1). Case 1: t=2, |A_i|≤ m-2,i=1,2. If m≥ 4, then |D|=q^|A_1|+q^|A_2|-2/q-1≤q^2+q^m-2-2/q-1≤q^m-1-q/q-1; if m=3, then |D|=q^|A_1|+q^|A_2|-2/q-1=2≤q^2-q/q-1. By Theorem <ref>, we can deduce that the code C_D^c has locality (2,q). Case 2: m>3 and t>2. Note that |D|=∑_i=1^tq^|A_i|-t/q-1≤q^m-t+1+(t-1)q-t/q-1=q^m-t+1+t(q-1)-q/q-1≤q^m-t+1+m(q-1)-q/q-1≤q^m-2+q^m-2(q-1)-q/q-1=q^m-1-q/q-1. By Theorem <ref>, we can deduce that the code C_D^c has locality (2,q). Case 3: m>t=2, |A_1|=m-1,|A_2|=1,q>2. Note that |D|=q^m-1+q-2/q-1≤2q^m-1-2/q-1-1. By Theorem <ref>, we can deduce that the code C_D^c has locality (2,q-1). Keep the notation as in Theorems thm_generaliz_luo and thm_generaliz_luo_loc, the code C_D^c is k-optimal LRC with respect to the bound (<ref>). Case 1: t=2, |A_i|≤ m-2,i=1,2. We let n'=q^m-1/q-1-q^|A_1|+q^|A_2|-2/q-1-q-1,d=q^m-1-q^|A_1|-1-q^|A_2|-1, according to the Plotkin bound, we have k_opt^(q)(n',d) ≤⌊log_qq(q^m-1-q^|A_1|-1-q^|A_2|-1)/q^2-2⌋ ≤⌊log_q(q^m-1-q^|A_1|-1-q^|A_2|-1)⌋ ≤ m-2. Utilizing the bound (CMbound_rdeltaLRC) with s=1, we obtain that k≤ 2+k_opt^(q)(n',d)≤ m. Therefore, the code C_D^c is k-optimal. Case 2: m>3 and t>2. The Griesmer code C_D^c has parameters [n=q^m-∑_i=1^tq^|A_i|+t-1/q-1,k=m,d=q^m-1-∑_i=1^tq^|A_i|-1]. When q>2, we have ⌈q^m-1-∑_i=1^tq^|A_i|-1/q^m-2⌉≥⌈ q-t-1+q^m-t/q^m-2⌉≥⌈ q-m-1+q^m-3/q^m-2⌉=q and ⌈q^m-1-∑_i=1^tq^|A_i|-1/q^m-1⌉=1. So we have n-1=∑_i=0^m-2⌈d/q^i⌉,n-q-1≥∑_i=0^m-3⌈d/q^i⌉. Using the Griesmer bound, we obtain that k_opt^(q)(n-1-q,d)≤ m-2, thus k≤ 2+k_opt^(q)(n-1-q,d)≤ m and the code C_D^c is k-optimal. When q=2, since A_1,...,A_t are mutually disjoint and max{i_1,i_2-i_1,...,i_u-i_u-1}≤ 1, we obtain that ⌈2^m-1-∑_i=1^t2^|A_i|-1/2^m-2⌉≥ 2. According to the similar arguments, k≤ 2+k_opt^(q)(n-1-q,d)≤ m and the code C_D^c is k-optimal. Case 3: m>t=2, |A_1|=m-1,|A_2|=1,q>2. We let n'=q^m-1/q-1-q^m-1+q-2/q-1-q,d=q^m-1-q^m-2-1, according to the Plotkin bound, we have k_opt^(q)(n',d) ≤⌊log_qq(q^m-1-q^m-2-1)/q^2-q-1⌋ ≤⌊log_q(q^m-1-q^m-2-1)⌋ ≤ m-2. Utilizing the bound (CMbound_rdeltaLRC) with s=1, we obtain that k≤ 2+k_opt^(q)(n',d)≤ m. Therefore, the code C_D^c is k-optimal. Let q=4,m=3 and A_1={1},A_2={2,3}, then C_D^c defined in Theorem thm_generaliz_luo is a 4-ary Griesmer code [15,3,11] with a generator matrix G=(G_1 G_2), where G_1=[ 1 1 1 1 1 1 1; 1 α α+1 0 1 α α+1; 0 0 0 1 1 1 1 ],G_2=[ 1 1 1 1 1 1 1 1; 0 1 α α+1 0 1 α α+1; α α α α α+1 α+1 α+1 α+1 ] and α is a primitive element in 𝔽_4. Then C_D^c is a (2, 3)-LRC. For instance, one can see that the columns (1,1,0)^T,(1,0,1)^T,(1,α,α+1)^T,(1,α+1,α)^T of G generate a [4,2,3]-code. Hence the first symbol of C_D^c has locality (2,3). Note that k^(4)_opt(11,11)=1, thus C_D^c attains the generalized C-M bound (<ref>). In the following, we consider another family of punctured simplex codes, which is motivated by<cit.>. Let A⊆ [m],|A|=s≥ 3,D={d∈ P_A: wt(d)=2},D^c=P_[m]\ D, then the code C_D^c defined in Eq. (<ref>) is a q-ary k-optimal (2,q)-LRC with parameters [n,k,d]=[q^m-1/q-1-(q-1)s2,m,q^m-1-⌊(2(s-1)(q-1)+q)^2/8q⌋] providing that s2(q-1)≤q^m-1-q/q-1 and 0<qd/qd-(q-1)(n-q-1)<q^m-1. 
There are (q-1)^2s2 vectors in L_A with Hamming weight 2, so |D|=(q-1)s2 since wt(a)=wt(λa) if and only if λ∈^*. ∑_d∈ D∑_y∈^*ω_p^tr(yx· d) =∑_d∈^s,wt(d)=2ω_p^tr(x_A·d) =K_2(a;s,q) =q^2/2a^2-(2(q-1)qs+q(2-q)/2)a +s2(q-1)^2. Thus the minimum weight of C_D^c corresponding to x is min_x∈^m^*wt(c_x) =⌈ q^m-1-(q-1)^2/qs2-4s(q-1)+(q-2)^2/8q⌉ =q^m-1-⌊(q-1)^2/qs2 +4s(q-1)+(q-2)^2/8q⌋ =q^m-1-⌊(2(s-1)(q-1)+q)^2/8q⌋ according to the Eq. (calcu_min_weight). According to Theorem suffi_condi_lrc, the code has locality (2,q). Using the Plotkin bound, we obtain that k_opt^(q)(n-q-1,d) ≤⌊log_qqd/qd-(q-1)(n-q-1)⌋ ≤ m-2. Thus the code C_D^c is a k-optimal (2,q)-LRC according to the generalized C-M bound (CMbound_rdeltaLRC). By the techniques of graph theory, the authors in <cit.> have obtained these codes as 2-LRCs. And they only proved that these codes are k-optimal 2-LRCs for s=3,m≥ 3,2≤ q≤ 14. However, in Theorem thm_weight2, we prove that all the codes in <cit.> are actually k-optimal (2,q)-LRCs. Let A_1,A_2,...,A_t⊆ [m],|A_i|=s_i≥ 3 for i∈[t],D_i={d∈ P_A_i|wt(x)=2},D=⋃_i∈[t] D_i,D^c=P_[m]\ D. If |A_i∩ A_j|≤ 1 for all i≠ j∈[t], and (q-1)∑_i=1^ts_i2≤q^m-1-q/q-1, then the code C_D^c defined in (gen_linear_constr) is a q-ary k-optimal (2,q)-LRC with parameters [n,k,d]=[q^m-1/q-1-(q-1)∑_i=1^ts_i2,m,q^m-1-Δ] providing that 0<qd/qd-(q-1)(n-q-1)<q^m-1 where Δ=⌊∑_i=1^t (2(s_i-1)(q-1)+q)^2/8q⌋. For all i≠ j∈[t], D_i∩ D_j={d∈ P_A_i∩ A_j|wt(d)=2}=∅ since |A_i∩ A_j|≤ 1. |D|=∑_i=1^t|D_i|=(q-1)∑_i=1^ts_i2. ∑_d∈ D∑_y∈^*ω_p^tr(yd· x) =∑_i=1^t∑_d∈ D_i∑_y∈^*ω_p^tr(yd· x) =∑_i=1^t∑_d∈^s_i,wt(d)=2ω_p^tr(x_A_i·d) =K_2(w_1;s_1,q)+...+K_2(w_t;s_t,q) where w_i=wt(x_A_i). Thus the weight of a codeword corresponding to some nonzero x is wt(c_x) =(q^m-1/q-1-(q-1)∑_i=1^ts_i2)q-1/q +1/q+1/q∑_i=1^tK_2(w_i;s_i,q) =q^m-1-(q-1)^2/q∑_i=1^ts_i2+1/q∑_i=1^tK_2(w_i;s_i,q). Note that the axis of symmetry of K_2(w_i;s_i,q) is 2(q-1)s_i+2-q/2q=1/q(1-s_i)+s_i-1/2≥s_i/2>1 for all prime power q, thus w_i≥1 when K_2(w_i;s_i,q) get the minimum value. Meanwhile, |A_i∩ A_j|≤ 1 for all i≠ j∈[t], so K_2(w_i;s_i,q) get the minimum value simultaneously for all i∈[t]. The minimum weight of c_x is min_x∈^m^*wt(c_x) =⌈ q^m-1-(q-1)^2/q∑_i=1^ts_i2. . -∑_i=1^t4s_i(q-1)+(q-2)^2/8q⌉ =q^m-1-⌊∑_i=1^t(2(s_i-1)(q-1)+q)^2/8q⌋. According to Theorem suffi_condi_lrc, the code has locality (2,q). Using the Plotkin bound, we obtain that k_opt^(q)(n-q-1,d) ≤⌊log_qqd/qd-(q-1)(n-q-1)⌋ ≤ m-2. The code C_D^c is a k-optimal (2,q)-LRC according to (CMbound_rdeltaLRC). Let q=2,m=5,A_1={1,2,3} and A_2={3,4,5}. By the SageMath software, the binary code C_D^c defined in Theorem gen_wt2 has parameters [25,5,12] and the generator matrix is G=(G_1 G_2) where G_1=[ 1 0 0 1 0 1 0 1 1 0 1 0; 0 1 0 1 0 0 1 1 0 1 1 0; 0 0 1 1 0 0 0 0 1 1 1 0; 0 0 0 0 1 1 1 1 1 1 1 0; 0 0 0 0 0 0 0 0 0 0 0 1 ],G_2=[ 1 0 1 1 0 1 0 0 1 0 1 0 1; 0 1 1 0 1 1 0 1 1 0 0 1 1; 0 0 0 1 1 1 0 0 0 1 1 1 1; 0 0 0 0 0 0 1 1 1 1 1 1 1; 1 1 1 1 1 1 1 1 1 1 1 1 1 ]. By the Plotkin bound, k^(2)_opt(22,12) ≤⌊log_2 12 ⌋=3. Hence C_D^c is a k-optimal 2-LRC achieving the C-M bound. § CONCLUDING REMARKS In this paper, we have investigated new constructions of optimal (2, δ)-LRCs via punctured simplex codes. By using the language of finite geometry, we propose a simple but useful condition to ensure that a linear code has (2,δ)-locality. According to some characteristic sums and Krawtchouk polynomials, we obtain several infinite families of q-ary (2,δ)-LRCs. All these codes are optimal with respect to the generalized C-M bound. 
We not only generalize some previous results on 2-LRCs to (2, δ)-LRCs, but also construct new optimal (2, δ)-LRCs that are not optimal when regarded merely as 2-LRCs. It would be interesting to find more new optimal (2, δ)-LRCs and to generalize these results to (r, δ)-LRCs with r ≥ 3 in the future.
http://arxiv.org/abs/2307.05936v1
20230712060401
Introducing Packet-Level Analysis in Programmable Data Planes to Advance Network Intrusion Detection
[ "Roberto Doriguzzi-Corin", "Luis Augusto Dias Knob", "Luca Mendozzi", "Domenico Siracusa", "Marco Savi" ]
cs.CR
[ "cs.CR", "cs.NI" ]
Programmable data planes offer precise control over the low-level processing steps applied to network packets, serving as a valuable tool for analysing malicious flows in the field of intrusion detection.
Albeit with limitations on physical resources and capabilities, they allow for the efficient extraction of detailed traffic information, which can then be utilised by ml algorithms responsible for identifying security threats. In addressing resource constraints, existing solutions in the literature rely on compressing network data through the collection of statistical traffic features in the data plane. While this compression saves memory resources in switches and minimises the burden on the control channel between the data and the control plane, it also results in a loss of information available to the nids, limiting access to packet payload, categorical features, and the semantic understanding of network communications, such as the behaviour of packets within traffic flows. This paper proposes ourtool, a framework that exploits the flexibility of P4-based programmable data planes for packet-level feature extraction and pre-processing. ourtool leverages the programmable data plane to extract raw packet features from the network traffic, categorical features included, and to organise them in a way that the semantics of traffic flows is preserved. To minimise memory and control channel overheads, ourtool selectively processes and filters packet-level data, so that all and only the relevant features required by the nids are collected. The experimental evaluation with recent ddos attack data demonstrates that the proposed approach is very efficient in collecting compact and high-quality representations of network flows, ensuring precise detection of ddos attacks. Roberto Doriguzzi-Corin^α, Luis Augusto Dias Knob^α, Luca Mendozzi^β, Domenico Siracusa^α, Marco Savi^β ^αCybersecurity Centre, Fondazione Bruno Kessler, Trento - Italy ^βUniversity of Milano-Bicocca, Department of Informatics, Systems and Communication (DISCo), Milano - Italy August 12, 2023
§ INTRODUCTION Network intrusions and anomalies are one of the most significant plagues in modern communication networks. As the number and complexity of incidents are constantly increasing <cit.>, it has become imperative to implement meticulous monitoring and robust counteraction measures to effectively detect and mitigate these threats. In the past decade, network monitoring has lived a second youth, thanks in large part to the prominence of sdn <cit.> and to the rise of ml technologies in networking <cit.>. Within the network security domain, the fusion between the centralised control plane of sdn and data-driven threat detection powered by ml algorithms has proved remarkable efficacy in promptly identifying and mitigating network intrusions and anomalies <cit.>. Despite the undeniable benefits brought by the synergy between sdn and ml, the implementation of robust network security solutions remains a challenge, primarily due to the fine-grained traffic features required by ml algorithms. In this context, a direct and effective approach for leveraging the centralised control plane of sdn, while ensuring that ml algorithms receive the necessary traffic information, is through a technique known as packet mirroring <cit.>. By employing packet mirroring, a duplicate copy of the network traffic is transmitted from the data plane to the control plane, where it undergoes traffic feature extraction and pre-processing before ml algorithms can be executed upon it (Figure <ref>). Unfortunately, the channel between data and control planes often comes with severe bandwidth and latency bottlenecks <cit.>. These inherent limitations imply that it may be overwhelmed by the sheer volume of mirrored data <cit.>, thereby jeopardising ordinary sdn network operations and compromising prompt response to ongoing network attacks. The recent emergence of programmable data planes <cit.> has introduced a technology that offers a promising solution to tackle the above challenges. Programmable data planes enable the customization of the data plane pipeline (known as fastpath <cit.>) using domain-specific languages like P4 <cit.>.
This level of programmability and flexibility empowers network practitioners to optimise feature extraction, data pre-processing, and ml inference operations, which can be offloaded to the data plane (see Figure <ref>): with programmable data planes, it thus becomes possible to finely manipulate the type and volume of traffic data forwarded to the control plane (known as slowpath <cit.>). However, it is important to note that the data plane has inherent limitations, such as a limited set of available arithmetic instructions and memory capacity in the match-action tables <cit.>. As a result, only simple ml models can be effectively offloaded, which may lead to noticeable degradation in inference performance <cit.>. We advocate leveraging the full potential of the centralised sdn control plane by executing ml inference directly on the sdn controller. This approach allows network monitoring operations to exploit the advantages of a comprehensive, global view of the network, while also enabling the implementation and execution of sophisticated ml algorithms. By performing ml inference on the controller, the network can benefit from enhanced analysis and decision-making capabilities, leveraging the rich information available at the control plane. Simultaneously, we recognise the value of programmable data planes in performing traffic feature extraction and pre-processing. By leveraging the programmability of data planes, it becomes possible to precisely control the amount of data that traverses the control channel. This approach allows for selective processing and filtering of data at a data plane level, reducing the burden on the control channel and enabling efficient resource allocation (see Figure <ref>). This concept has been extensively explored in the scientific literature, with several studies proposing solutions based on the aggregation of traffic statistics computed within the data plane. The primary objective of these solutions is to optimise the load on the control channel and effectively manage memory utilization in the data plane <cit.>. One notable drawback of relying solely on statistical traffic features, such as flow duration, averages, maximums, minimums, and standard deviations of attributes like packet size, rate, and inter-arrival time, is the inability to report categorical features (e.g., IP and TCP flags) or portion of packets' payload directly to the control plane. These packet-level traffic features are crucial for certain ml-based nids in the current state of the art (e.g., <cit.>, among many others), which rely on such traffic attributes to learn the semantic of malicious traffic flows and to segregate them from legitimate network activities. In this paper, we propose ourtool (ourtool), a framework for efficient packet-level feature extraction and pre-processing in P4-based programmable data planes. ourtool inherently supports the categorical features, and it has been designed to preserve the semantic integrity of the flows. With these properties, ourtool ensures that the ml-based nids executed in the control plane can capture the relevant protocols, data formats, application behaviours and traffic patterns, allowing for a comprehensive understanding of the network traffic semantics. ourtool exploits stateful memory objects (P4 registers) and a counting Bloom filter to efficiently select and store the packet-level features that are relevant for the ml model. 
This approach enables the data plane to discard redundant data, which would otherwise be disregarded by the control plane, with great benefits in terms of control channel usage and control plane processing load. The selected features are stored within two ring buffers, which have been specifically designed to avoid race conditions between the control plane (reading the stored features) and the data plane (writing them). The latter aspect has been often neglected by previous works. To the best of our knowledge, this is the first work addressing the challenges of packet-level feature extraction in the data plane for network security applications. In contrast to existing works, ourtool combines the flexibility of packet-level features with the network-wide view provided to the nids by the centralised sdn control plane (not available when the nids runs completely in the data plane <cit.>). We believe that the combination of these two features sets ourtool apart from current state-of-the-art approaches to network attack detection with programmable data planes. We have extensively evaluated ourtool using a recent dataset of ddos attacks and a well-known nids based on a cnn that takes packet-level features as input <cit.>. We have compared our approach against a naïve strategy for packet-level feature extraction, which stores the data in the buffers sequentially without any optimisation logic. In a comprehensive range of testing scenarios, we empirically show that ourtool is able to collect more feature-rich traffic flows. As a result, our approach achieves a significantly lower system fnr, measured as the sum of overlooked and misclassified malicious flows, enabling faster detection and mitigation of network attacks. The remainder of the paper is structured as follows. Section <ref> reviews and discusses the related work. Section <ref> provides an overview of P4 data plane programming, counting Bloom filters and the cnn used in our experiments. Section <ref> presents ourtool's architecture and the methods for feature extraction and storage. Section <ref> details the setup of the experimental evaluation presented in Section <ref>. Finally, the conclusions are given in <ref>. § RELATED WORK One of the primary challenges in network traffic monitoring is finding a balance between the accuracy of the monitored traffic attributes and the level of resources (both network and computing) required to achieve it. In this regard, programmable network devices can be exploited to handle the monitoring operations in the data plane, effectively reducing the burden on both control plane and control channel. Nevertheless, despite the adaptability of modern data plane implementations, including those relying on the p4 programming language, they still have certain limitations in the types of operations they can perform. To address this issue, a common technique is to offload some of the computation from the data plane to the control plane (e.g., performing ml inference), where more computing power and advanced tools are accessible. Nevertheless, this approach could be constrained by the limited capacity of the communication/control channel linking the data and control planes. This section provides an overview of recent research that has tackled these challenges, with a particular emphasis on the network security domain, where achieving a balance between resource utilization and monitoring accuracy is critical for quickly identifying and responding to cyber threats. 
§.§ ML-based In-network Traffic Classification In network monitoring applications, an effective solution to prevent the control channel from being overloaded with network data is to offload the tasks of traffic feature extraction, data pre-processing, and ml inference to the programmable data plane (see Figure <ref>). This approach, also known as the in-network approach, offers several advantages in terms of control channel utilization by minimising the interaction between data and control planes. However, despite its benefits, this approach also presents some significant drawbacks that should be taken into account. It is worth noting that the p4 programming language does not support floating-point operations and divisions <cit.>. Consequently, some highly effective ml algorithms, including ann, cannot be directly implemented in the data plane, as those algorithms rely on model weights that are usually represented as floating-point numbers. Recent initiatives have been trying to enable floating-point operations to p4, either with the adoption of dedicated hardware fpu <cit.> or by proposing hardware changes to the data plane's architecture of programmable devices <cit.>. Although promising, these approaches present limited portability to existing p4 devices (switches and/or smartNICs), and may require a non-negligible time horizon for their adoption in more recent products. For this reason, most of the existing proposals for ml-based in-network traffic classification that exploit programmable data planes adopt simple ml models, such as decision trees <cit.><cit.><cit.><cit.><cit.>, binary decision trees <cit.>, random forests <cit.><cit.><cit.><cit.><cit.>, svm <cit.><cit.>, Naïve Bayes <cit.><cit.>, K-means <cit.><cit.>, XGBoost <cit.>. Concerning nn models, bnn can be successfully offloaded <cit.><cit.> thanks to their simplicity, as model weights are binary values. In all these works, the ml models are implemented by exploiting either match-action tables, populated by the control plane with appropriate rules, or by making use of P4 registers, i.e., by hard-coding the model into the P4 program. The former case is more flexible, as the model can be updated by the control plane at runtime (e.g. after re-training) by just injecting new rules, but it consumes memory that is typically dedicated to traffic forwarding. In a recent work, Razavi et al. <cit.> have implemented an ann directly in the data plane. A notable limitation of this approach is the encoding of floating-point weights as 8-bit integers. As no performance evaluation is presented, the efficacy of the proposed solution remains uncertain. In addition, some works <cit.><cit.> focus on the design and implementation of frameworks for offloading to the data plane the most appropriate ml model according to the task to be executed. In general, what we can conclude by analysing the aforementioned papers is that the performance of ml-based in-network traffic classification is often hindered by the inherent limitations of the p4 language and/or of the underlying hardware. This can make it challenging to offload ml models with satisfactory classification performance. In contrast to the solutions mentioned above, we have opted to execute the traffic classifier in the control plane. This approach allows us to choose the most appropriate ml model for a given task, without being constrained by the limitations of the P4 language or the hardware of the devices. 
By leveraging the flexibility of the control plane, we can use more advanced ml models to achieve better accuracy and performance as needed. Furthermore, the centralised nature of the sdn control plane enables the nids to leverage network-wide traffic data, facilitating more accurate and comprehensive detection of network threats. This advantage is not attainable with the distributed inference employed by in-network approaches. §.§ Interaction between Control Plane and Data Plane Offloading a portion of traffic processing to the control plane (Figure <ref>) requires careful consideration of the limitations imposed by the control channel in terms of bandwidth and latency. This consideration becomes even more crucial when a substantial amount of data must be exchanged between the two planes, as is for instance required when executing ml inference in the control plane for ddos attack detection <cit.>. NetWarden <cit.> is a defense solution designed to mitigate network covert channels, utilising programmable data planes. In order to achieve this goal, NetWarden's approach involves restricting the data plane to performing per-packet operations exclusively on header fields. On the other hand, the control plane is only used for batch operations, reducing the need for interaction between the control and data planes and optimising the overall performance of the system. DySO <cit.> is a monitoring framework that shares the same fundamental principles as NetWarden, seeking to eliminate the bottleneck that can occur between data and control planes. Our research similarly aims to minimise the interaction between the two planes, accomplishing this by conducting feature extraction and selection in the data plane. By only forwarding essential data to the control plane for input to the ml model, we can significantly reduce the overall system latency and optimise performance. A couple of recent works take different approaches to solve the bottleneck issue with the control channel. Escala <cit.> operates under the assumption that multiple control channels may be in use and may become overloaded. To address this issue, Escala offers the ability to elastically scale these channels at runtime as necessary, and to seamlessly migrate event streams (i.e., data transmitted from the data to the control plane) between different channels. Chen et al. <cit.> introduce MTP, a novel framework designed to optimise the placement of measurement points in the data plane, with the aim of minimising the overall cost associated with monitoring servers and network devices. This framework is based on an ilp formulation, which defines a constraint on the capacity of the links between the data and control planes. In general, these works are orthogonal to our proposal and could be adopted to further alleviate the performance bottleneck of the control channel. Finally, Bermudez Serna et al. <cit.> focus on security aspects. They propose a reactive configuration mechanism that can be adopted to counteract attacks aimed at overloading the control channel which, given its constrained capacity, could easily get saturated by malicious traffic. Also, this strategy is orthogonal to our proposal and could be adopted together with it for enhanced security. §.§ Efficient Feature Extraction in the Data Plane Efficient feature extraction in programmable data planes is a challenging problem that has received limited attention in the scientific literature. 
The goal is to generate features that are compatible with the input layer of a ml classifier, which is executed in the control plane for traffic monitoring or classification. Despite the importance of this problem, there are only a few studies that have addressed it and most of them, like our work, focus on ddos detection as a use case. In this respect, FlowLens <cit.> is a flow classifier designed for programmable switches that implements a mechanism for optimising the amount of data to be stored in memory and transferred through the control channel. While this feature enables efficient processing of network flows, race conditions on the shared memory are managed by the control plane, which deletes the flow tables to avoid concurrent read/write operations on the registers that store the traffic features. According to the paper, the flow tables are left empty until the reading process is completed. As a result, any incoming packets during this period cannot be processed or collected. Musumeci et al. <cit.> propose an approach to SYN Flood ddos attack detection that leverages p4-based programmable data planes for feature extraction and pre-processing. The p4 program extracts statistical traffic features computed over a pre-defined time window, such as average packet length, percentage of TCP and UDP packets and percentage of TCP packets with active SYN flag. A similar approach is adopted by Zang et al. <cit.>, who propose a ddos detection system based on ensemble learning and data-plane feature extraction and pre-processing. Due to the reliance on global statistical features rather than per-flow features, both approaches are limited to producing binary outputs indicating the presence of an ongoing attack. However, their ml classifiers cannot identify specific malicious flows. HybridDAD <cit.> is another solution based on statistical features. In this case, the output of the ml algorithm is a label that indicates whether there has been an attack during the previous time window, plus the class of the attack (among four types of DDoS flood attacks). ORACLE <cit.> implements a flow-based feature extraction and pre-processing in the data plane for the detection of ddos attacks, which is offloaded to ml algorithms executed in the control plane. To accomplish this, ORACLE employs a p4 program that collects per-flow statistical features, including flow duration, standard deviation of inter-arrival time, average packet size, and standard deviation of packet length. Apart from the inherent complexity of calculating statistical features in the data plane due to the limitations of the P4 language, ORACLE utilises hashing to index flows. With this approach, packets from different flows can be grouped together in the case of collisions. As a result, the computed statistics may be incorrect, making the final flow statistics unreliable and unusable. Our solution addresses the limitations of prior research by implementing a double-block storage mechanism in the data plane, which effectively prevents race conditions. We achieve this by using two separate blocks (implemented using a set of p4 registers) and switching between them using a dedicated p4 register. The data plane writes to one block while the control plane reads from the other, ensuring consistency and minimising conflicts. 
Compared to previous methods based on statistical flow-level features, our approach leverages a packet-level representation of network traffic, which preserves the semantics of flows (i.e., the behaviour of packets within each flow) and the categorical features of packets (e.g., TCP flags, ICMP type, etc.). This information is crucial for various ml-based nids <cit.>. On the other hand, packet-level features may entail a greater amount of data transmission via the control channel, compared to flow-level statistical features. We alleviate this by proposing a flow-based storage mechanism, that relies on a statistical technique called counting Bloom filter <cit.>. By doing so, we can keep in memory only the essential data required for control plane operations, reducing the need to transmit extra traffic information that would ultimately be discarded. § BACKGROUND §.§ Data Plane Programming with P4 p4 is a language for expressing how the network traffic is processed by the data plane of programmable network elements such as hardware and software switches. p4 is target-independent, thus it supports a variety of targets such as ASICs, FPGAs, NICs and software switches. The main components of a p4-programmable pipeline are shown in Figure <ref>, which sketches a data plane architecture based on the pisa model <cit.>. It consists of a Parser, a state machine that extracts packet headers and metadata from the incoming bitstream, a Checksum verification control block, an Ingress Match-Action processing control block, an Egress Match-Action processing control block, a Checksum update control block and a Deparser, which assembles the headers with the payload received from the Parser. Besides this basic model, the latest p4 specification, namely P4_16 <cit.>, has been conceived to support various switch architectures, NICs and “non-standard” programmable components with external libraries. As shown in the figure, the Match-Action block may consist of one or multiple mau. Each mau comprises TCAM and SRAM memory blocks (MEM in the figure) for various purposes such as match-action tables, registers, meters, etc. The alu blocks instead execute arithmetic operations and modifications on the packets' headers and metadata based on the content of the match-action tables. alu may use stateful objects stored in the SRAM memory (e.g., registers) for additional operations. Relevant to this work are the stateful objects of type register. Registers belong to a wider category of stateful elements called extern objects, which also include counters and meters. Unlike match-action tables, which can be modified only by the control plane, extern objects can be read and written from both data and control planes. Therefore, the access to registers, and to extern objects in general, should be managed with atomic operations to prevent race conditions. In this work, we use registers to store per-packet and per-flow data that is written by the data plane and periodically read by the control plane. We initially enclosed read/write operations into atomic blocks to avoid race conditions between data and control planes. However, as we noticed that this approach causes a considerable loss of traffic data when the control plane locks the registers for reading, we adopted a more efficient strategy consisting of using two different registers, one for writing and one for reading. A third register serves as a switch to orchestrate the access to those two registers. Section <ref> provides more details on this mechanism. 
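The toy Python model below illustrates the hand-off logic just described: one block is written by the data plane while the other is read and cleared by the control plane, with a switch value k selecting which is which. It is only a sequential sketch under our own naming; the actual mechanism runs concurrently and is implemented with P4 registers accessed via RPC, as detailed later in the paper.

```python
class TwoBlockStore:
    """Toy model of the double-buffer scheme: two blocks plus a switch value k."""

    def __init__(self, slots):
        self.blocks = [[None] * slots, [None] * slots]   # B_0 and B_1
        self.cursor = [0, 0]                             # write positions c_0, c_1
        self.k = 0                                       # switch register

    # --- data-plane side: always writes to the block selected by k ---
    def write(self, record):
        b = self.k
        pos = self.cursor[b] % len(self.blocks[b])       # circular buffer behaviour
        self.blocks[b][pos] = record
        self.cursor[b] += 1

    # --- control-plane side: flips k, then reads and resets the now-idle block ---
    def collect(self):
        idle = self.k
        self.k = 1 - self.k                              # data plane moves to the other block
        records = [r for r in self.blocks[idle] if r is not None]
        self.blocks[idle] = [None] * len(self.blocks[idle])
        self.cursor[idle] = 0
        return records

store = TwoBlockStore(slots=4)
for pkt in range(6):                                     # 6 > 4, so the oldest entries are overwritten
    store.write(("pkt", pkt))
print(store.collect())                                   # records gathered in the last window
```

Because the two planes only ever own different blocks, neither side has to lock a register before accessing it, which is the property the switch-register strategy is meant to provide.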
§.§ Counting Bloom filters A counting Bloom filter <cit.> is an array of m cells that utilizes probabilistic hashing techniques to keep an approximate count of items. We use a counting Bloom filter to efficiently count the number of packets belonging to the same flow that have been collected in a specific time frame. The key idea behind a counting Bloom filter is to use h hash functions to map the elements of a set onto an array of size m. To achieve this, each element is hashed h times using different hash functions. The resulting hash values are used to index into the array, and the corresponding array cells are incremented by one. Since multiple elements can map to the same cell due to collisions, the counters of an element may not be incremented equally. To estimate the count of an element, the minimum value of the cells to which the element is mapped is used. The rationale behind this is that the minimum count is less likely to have been affected by collisions with other elements. This is a highly efficient method for counting the number of packets per flow (or packets/flow) that have been collected in a memory block within a given time window. While a single hash function, such as CRC32, can also perform this task, counting Bloom filters are less prone to collisions, which can lead to inaccuracies in the count. §.§ Neural network architecture To validate our approach to data plane traffic feature extraction, we consider the ddos attack detection use case. In particular, we focus on volumetric ddos attacks, as they are particularly challenging to handle in constrained systems, such as network switches and SmartNICs, due to the large amounts of data rate such attacks can produce. In this work, we adopt a state-of-the-art solution for ddos attack detection called lucid <cit.>. lucid is a cnn that takes as input a representation of a traffic flow consisting of packet-level features and returns the probability of the flow being malicious (i.e., part of a ddos attack). Its input format makes lucid suitable for data plane feature extraction, where line-rate packet processing is a requirement. lucid's representation of traffic flows consists of packet-level features organised in two-dimensional arrays. Rows are the flow's packets in chronological order (lucid defines bi-directional flows identified by a 5-tuple of IP addresses, L4 ports and L4 protocol), while columns are per-packet features, categorical features included (e.g., TCP flags, IP Flags, ICMP type, etc.). By utilising a convolutional layer as its initial hidden layer, lucid effectively leverages the aforementioned representation to learn traffic flow semantics and uncover latent behavioural patterns from the chronological sequence of packets. lucid's output layer is a 1-neuron layer whose value is the probability of the input flow being a ddos flow. We set the same hyper-parameters as in the lucid paper with respect to the height and number of the convolutional kernels, h=3 and k=64 respectively. On the other hand, we slightly adapt the neural network's architecture to comply with the requirements of the feature extraction executed in the data plane. First, two packet features used by lucid, namely highest layer and protocols, involve application layer information that is not available in the packet headers (lucid extracts such features with the support of TShark <cit.>, which can return detailed packet's summaries along with standard header fields). 
Therefore, we only use the f=9 features that can be extracted in the data plane, namely: Timestamp, Packet Length, IP Flags, TCP Length, TCP Ack, TCP Flags, TCP Window Size, UDP Length and ICMP Type. We also decrease the detection time window from t=100 seconds to t=2 seconds and the number of packets/flow from p=100 to p=4. First, by reducing the input size we reduce the processing time and memory usage. Second, based on the results reported in <cit.>, lucid can achieve a very high classification accuracy with any value of t>1 second when combined with p>3. In summary, in our experiments, we use the following lucid hyper-parameters: p=4, f=9, k=64, h=3, t=2 Despite a lower number of packets and features in the flow representations and a shorter time window, we will show that the classification accuracy of the proposed system is comparable to that obtained by the original lucid implementation on a publicly available dataset of ddos and benign traffic. § SYSTEM ARCHITECTURE Motivated by the insights presented in Section <ref>, the objective of this study is to enable efficient network intrusion detection in the control plane of programmable networks through packet-level analysis. The primary challenge in achieving this objective lies in the limitations of the network devices responsible for extracting and collecting such features, as well as the constrained capacity of the control channel used to transmit them to the control plane. Given these premises, we present ourtool, an approach to intrusion detection for p4-based programmable networks that enables resource-efficient traffic feature extraction and filtering in the data plane. The key idea of ourtool is to extract and store in the data plane only the features that are essential to the Traffic Classifier running on the control plane, thereby optimising the amount of data transmitted through the control channel. For this purpose, we have designed a system architecture matching the pisa model presented in Section <ref>, with the support of control plane software for traffic classification (Figure <ref>). The p4 program executed in the data plane extracts the packet attributes and makes them available to the control plane. On the other hand, the control plane is responsible for gathering the traffic metadata from the registers allocated in the data plane and coordinating the read/write access to such registers. The architecture presented in the figure has been specifically designed to reduce the workload on the control channel while adhering to the limitations imposed by the p4 language and the available resources in the data plane. §.§ Core logic Data plane operations are sketched in Figure <ref> (bottom part) and elaborated in the pseudo-code of Algorithm <ref>. The data plane logic is implemented in the Parser and in the Ingress Pipeline. The Parser takes the incoming packets and extracts header fields and metadata required by the Traffic Classifier (line <ref> of the pseudo-code). The components in the Ingress Pipeline are in charge of storing the traffic features and other metadata in stateful memory, provided that the packet passes the Checksum verification (line <ref>). The key idea behind ourtool for supporting packet-level feature extraction and storage is to collect only the features that are needed by the Traffic Classifier. In our implementation, the traffic classifier (i.e., Lucid) takes as input a representation of traffic flows consisting of matrices of packet-level features (Section <ref>). 
Each flow has the shape p× f, where p denotes the number of packets/flow, and f represents the number of features extracted from the header of each packet. These requirements determine how the packet-level features are extracted and stored in the data plane. Nevertheless, it is important to note that our design is flexible and compatible with various packet-level representations of network traffic, including those involving segments of the packet's payload (e.g., FC-Net <cit.>). For each incoming packet, the f features described in Section <ref> are extracted from its header (line <ref>). Before storing this information in the stateful memory, ourtool verifies that there are less than p packets belonging to the same flow already stored in memory. If this condition is verified, the features are saved in memory, otherwise, they are discarded. By disregarding irrelevant features in the data plane, ourtool ensures optimal utilisation of memory resources and control channel. To achieve this goal, ourtool keeps track of the number of packets/flow collected within a given observation time window by using a probabilistic technique based on hashing called counting Bloom filters <cit.> described in Section <ref>. Instead of using h different hash functions to generate h hash values from a packet, ourtool leverages the p4 implementation of the CRC32 algorithm with h=4 different packet identifiers id_f^4,id_b^4,id_f^5,id_b^5 (line <ref>). The identifiers id_f^4 and id_b^4 each represent a 4-tuple consisting of source and destination IP addresses, as well as source and destination transport ports, for a given packet. The index f refers to the forward order, in which the values appear in the packet, while the index b represents the backward order, in which the positions of the IP addresses and transport ports are swapped. The other identifiers id_f^5 and id_b^5 are generated by taking the two 4-tuples id_f^4 and id_b^4, and by adding a fifth item with the value of the packet's transport protocol to each of them (i.e., they both are 5-tuples). After computing the CRC32 value for each of the four identifiers (as shown in line <ref>), the algorithm checks whether the minimum counter value among the four counters stored in the filter in the computed positions (i.e., hashed values) F_k[h_id] is less than the maximum number of packets/flow allowed (i.e., p). If such a condition is verified, the current packet belongs to a network flow for which less than p packets have been collected so far. In such a case, the packet's 5-tuple identifier id_f^5 and its corresponding features f are saved in memory and the position in the memory is updated. These operations are summarised from line <ref> to line <ref> of the algorithm's pseudo-code and detailed in the following sections. Finally, once the feature extraction and collection operations have been completed, the program updates the packet's checksum to reflect any packet modifications (even though the current implementation does not actually alter the packet's contents) (line <ref>). The header and payload are then reassembled through the DeParser function before the packet is sent to the egress port (line <ref>). §.§ Registers Feature extraction and collection are handled with the support of stateful memory elements, such as registers. The value k∈{0,1} of the Switch Register is used to coordinate the memory access and to avoid race conditions between control and data planes. 
This value is set by the control plane and used by the data plane to select the appropriate registers for read/write operations (line <ref>). Traffic features are stored in two memory blocks B_k (line <ref>). A memory block consists of f registers (one per packet feature), along with other 5 registers for hosting the five elements of the 5-tuple id_f^5, which is used later on by the Traffic Classifier to map packets into flows. Each register is divided into n cells, whose size varies depending on the value being stored. In our implementation, register Source IP is of size , while Timestamp is of size , TCP Length is of size , etc. n is the maximum number of packets whose features can be extracted and stored in B_k. Figure <ref> sketches the structure of a memory block, in which each row represents a packet, and each column corresponds to an element of the 5-tuple packet identifier or a packet-level feature. In our implementation, we use two counting Bloom filters F_k, each one associated to the corresponding memory block B_k. Specifically, a filter is a register consisting of m cells, each with a size of 3 bits to store the packets/flow counters. It is important to note that the output of the CRC32 hashing function is a 32-bit number, which would ideally require the allocation of two counting Bloom filters of m=2^32 cells. While this would be the optimal solution to minimize the chances of collisions, such an approach would require a significant amount of memory space. To address this concern, we have decided to limit the bit count of CRC32's output to r<32. We achieve this by using the modulo operator (line <ref>), with r being a number dependent on the number of packets that can be stored in a memory block, denoted by n. The sensitivity of ourtool to the size of the counting Bloom Filter is provided in Section <ref>. §.§ Memory Management The memory blocks B_k are managed as circular buffers to ensure that the most recent traffic data is always available. Thus, when a memory block B_k is full, denoted by c_k=n-1, the subsequent packet is stored at the beginning of the block, which corresponds to position c_k=0, overwriting the old packet stored there. When it happens for many packets, such as in high packet rate situations, the control plane classifier may miss the information of older flows either partially or entirely. §.§ Control-Data Plane interaction The Control Plane interacts with the Data Plane through rpc, such as those provided by software frameworks like Apache Thrift API <cit.> or P4Runtime API <cit.>. In this regard, the Control Plane is in charge of swapping the value of the Switch Register, collecting the packet features from the two memory blocks B_k and clearing registers B_k, F_k and c_k (k∈{0,1}). Figure <ref> illustrates the timelines of Control and Data Plane operations. The two planes operate in parallel and never block each other. The Data Plane always writes on the same block until the value of the Switch Register is changed from the Control Plane. If we start from time t_0, as shown in the figure, once the control plane sets the value of the Switch Register to 1, the Data Plane will start writing on memory block B_1. After that, the Control Plane will first read block B_0 and, once done with reading the block, it will reset all the registers of blocks B_0, F_0 and c_0 and the traffic features will be processed for classification. 
The whole process restarts at time t_3, when the Control Plane sets the Switch Register to 0 to retrieve the features collected in memory block B_1, while the Data Plane switches to block B_0. § EXPERIMENTAL SETUP ourtool has been implemented as a set of methods for the reference P4 software switch (namely the bmv2 <cit.>) and tested in a Mininet emulated network <cit.>. A prototype implementation of ourtool is publicly available for testing and use <cit.>. The network topology, represented in Figure <ref>, includes two virtual hosts, one acting as the attacker which sends malicious traffic to the second virtual host (i.e., the victim of the attack). The evaluation environment has been set up on a single physical machine equipped with an 80-core Intel(R) Xeon(R) Gold 5218R CPU @2.10GHz and 128 GB of DDR4 RAM. This machine also runs the lucid framework <cit.>, which includes a pre-processing tool and a cnn trained for ddos attack detection. lucid and the switch are interfaced through rpc. The rpc mechanism is implemented using the Apache Thrift API <cit.> and is used to read, write and reset the registers from the control plane through data plane methods , and respectively <cit.>. §.§ Dataset ourtool is evaluated using CIC-DDoS2019 <cit.>, a recent dataset of ddos attacks provided by the Canadian Institute of Cybersecurity at the University of New Brunswick. This dataset contains multiple days of network activity, including benign traffic and 13 distinct types of ddos attacks. It is accessible to the public and contains pre-recorded traffic traces with complete packet payloads, along with supplementary text files that provide labels and statistical information for each traffic flow <cit.>. In our experiments, we inject the p4 switch with the traffic traces to evaluate ourtool's performance in handling the feature extraction process in the data plane. In the dataset, the benign traffic was created using the B-profile <cit.>, which defines distribution models for various applications, such as web (HTTP/S), remote shell (SSH), file transfer (FTP), and email (SMTP). On the other hand, the attack traffic was generated using third-party tools and can be broadly classified into two main categories: reflection-based and exploitation-based attacks. The first category involves attacks in which the attacker triggers responses from a remote server (such as a DNS resolver) towards the victim's spoofed IP address, ultimately overwhelming the victim with these responses. The second category pertains to attacks that exploit known vulnerabilities in target systems, applications or in certain network protocols. The traffic traces have been divided into training, validation and test sets and processed with the lucid's packet parser <cit.>. Such a tool extracts packet-level features from the traffic and groups them into array-like representations of network flows, as described in Section <ref>. The pre-processing of the training and validation traces is done offline for lucid's training and tuning, whereas the traffic traces of the test set are solely employed for online testing. During the testing phase, feature extraction is performed in the data plane (as explained in Section <ref>), while lucid's parser tool is executed in the control plane for building the arrays and normalising the features. To mitigate the impact of bmv2's poor performance <cit.>, we made a strategic decision to exclude all attacks with a packet rate higher than 30 kpackets/s from the test set. 
This step was necessary to ensure the integrity and accuracy of our evaluation results by preventing any interference caused by packets being dropped due to bmv2's performance limitations. The key details of the remaining test traces are presented in Table <ref> for reference. §.§ Evaluation metrics Our primary evaluation metrics are the Collected Flows and the sfnr. Collected Flows measures the number of flows stored in memory relative to the total number of flows injected into the switch within a given time window. This metric provides insight into ourtool's ability to capture information on as many traffic flows as possible. The sfnr quantifies the percentage of malicious flows that go undetected, either due to misclassification by the nids, or because no packets of such flows are present in the memory block due to the reasons explained in Section <ref>. By assessing the sfnr, we measure the efficiency of ourtool in promptly identifying and flagging potential intrusions. A formal definition of the sfnr measured in a given observation time window is provided in Equation <ref>. sFNR = FNR+1/|X_m|∑_x_i∈ X_m c_x_i⊕ n_x_i In Equation <ref>, fnr is the False Negative Rate of the nids running in the control plane. It is worth reminding that fnr is the metric that measures the percentage of positive samples (in our case, the malicious flows) that are misclassified as negatives by a classifier. In the equation, the set of malicious flows is denoted by X_m, c_x_i represents the actual number of packets collected for a given malicious flow x_i in a memory block, while n_x_i is the number of observed packets of that flow. The XOR operation c_x_i⊕ n_x_i returns 1 if c_x_i=0 and n_x_i≠ 0 or if c_x_i≠0 and n_x_i=0 (the second case can never happen), and returns 0 otherwise. The summation computes the total number of observed malicious flows in a given time interval with no packets collected in the memory block. We divide this value by the total number of observed malicious flows X_m to obtain the rate. We also define the quality metric that allows us to establish a relationship between three distinct quantities: (i) the number of packets in a traffic flow, (ii) the number of packets of that flow collected in the data plane within a given time window, and (iii) the packet/flow p required by the Traffic Classifier running in the control plane. This metric quantifies the level of “useful” information that the data plane delivers to the Traffic Classifier for each flow, while taking into account that gathering more than p packets per flow is inefficient resource utilization. Specifically, the quality metric is defined by Equation <ref>. quality = 1/|X|∑_x_i∈ Xc_x_i/l_x_i The quality metric is determined by taking the average quality score across the flows observed within a given time window. In the equation, the set of such flows is denoted by X={X_b,X_m} and includes both benign flows X_b and malicious flows X_m. The quality of a single flow x_i∈ X is expressed as c_x_i over l_x_i, where l_x_i = min{n_x_i, p} is the minimum between the number of packets n_x_i of the i-th flow and p. l_x_i represents the optimal number of packets that we need to collect in the data plane to maximise the amount of information provided to the ml model for a given flow x_i. Collecting fewer packets may not provide sufficient information for the classifier, while collecting more may waste valuable memory space without providing any additional benefits to the classifier. 
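As a complement to these definitions, both metrics can be computed offline from per-flow counters; the following Python sketch is our own illustration, and the flow identifiers and dictionary layout are assumptions rather than part of ourtool.

```python
def sfnr(fnr, malicious_flows, collected_counts):
    """Equation <ref>: sFNR = FNR + (1/|X_m|) * #{malicious flows with no packet
    collected}. collected_counts[f] is c_f; a flow missing from the dictionary was
    observed (n_f != 0) but never stored in the memory block (c_f = 0)."""
    missed = sum(1 for f in malicious_flows if collected_counts.get(f, 0) == 0)
    return fnr + missed / len(malicious_flows)

def quality(observed_flows, collected_counts, p):
    """Equation <ref>: quality = (1/|X|) * sum of c_f / l_f, with l_f = min(n_f, p)."""
    total = 0.0
    for f, n_f in observed_flows.items():       # n_f: packets observed for flow f
        l_f = min(n_f, p)                        # optimal number of packets to collect
        total += collected_counts.get(f, 0) / l_f
    return total / len(observed_flows)
```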
This is because any excess packets beyond the optimal number would be discarded by the feature pre-processing component, which constructs the flow samples required by the ml model. In the case of the quality metric, c_x_i represents the number of packets stored in a memory block for a given flow x_i, either benign or malicious. With ourtool, which uses a counting Bloom filter to track the number of packets/flow in memory, the value of c_x_i ranges from 0 to p. With no packet tracking, c_x_i may fall between 0 and n_x_i. It is worth noting that in certain cases, the value of c_x_i may be zero even when n_x_i≠ 0. This can occur when no packets for flow x_i have been collected in the current memory block, either because of collisions or because they were overwritten by more recent packets in the circular buffer (Section <ref>). § EVALUATION RESULTS One of the key benefits of ourtool over other approaches is the ability to extract raw packet features from the network traffic, categorical features included, and to organise them in a way that the semantic of traffic flows is preserved. ourtool efficiently achieves this objective by implementing a counting Bloom filter (Section <ref>) that tracks the number of packets/flow without any wastage of memory resources in the data plane. We demonstrate the advantages of ourtool through simulation and emulation experiments, where we disable the tracking methods and observe the corresponding changes using a range of metrics. In the simulation scenario, no actual network data is involved. Instead, we rely on the network profile of a campus network <cit.> and we demonstrate the benefits of ourtool by simulating the feature extraction and storage process in the data plane. The second set of experiments has been conducted in an emulated environment consisting of two hosts (an attacker and a victim) and a p4-enabled software switch with configurable registers. The experiments involve injecting pre-recorded network traffic into this environment, using a publicly available dataset of both benign and ddos traffic. §.§ Simulation Scenario The goal of the simulations is to analyse the correlation between the size of the memory block B_k and two key variables: the maximum number of packets/flow (i.e., variable p) and the number of cells m of the Bloom filter F_k. This information gives an understanding on how to configure ourtool based on the characteristics of the network traffic under monitoring (mainly the average packet rate and average packets/flow) to maximise the number of collected flows within a given time window. To do so, we fix the size of B_k and we vary the value of p and the bit count r of the CRC32's output that determines the size of the Bloom filter (see Section <ref>). We utilise the profile outlined in <cit.> to simulate the traffic of a realistic network. This profile was generated using network activity data gathered from a university campus. From it, we extracted the distribution of TCP, UDP, and other protocols and we computed the average number of packets/flow for each protocol. Each experiment consists of 1000 iterations, each simulating the collection of a random number of flows, ranging from 1 to 8192 flows. The length of such flows (number of packets/flow) is generated based on the aforementioned traffic profiles, while the arrival of packets across different flows has been randomised to ensure a non-sequential packet distribution. 
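A single iteration of this procedure can be sketched in a few lines of Python, building on the FeatureExtractor class shown earlier. The exponential flow-length model with a mean of 78 packets/flow is our simplification of the campus profile, and the synthetic 5-tuples are purely illustrative.

```python
import random

def simulate_iteration(num_flows, avg_pkts=78, seed=0, **extractor_kwargs):
    """Generate num_flows random flows, randomise packet arrival order, feed them
    to the extractor and return the fraction of flows still present in the block."""
    rng = random.Random(seed)
    fx = FeatureExtractor(**extractor_kwargs)
    lengths = [max(1, int(rng.expovariate(1 / avg_pkts))) for _ in range(num_flows)]
    packets = [fid for fid, length in enumerate(lengths) for _ in range(length)]
    rng.shuffle(packets)                                  # non-sequential arrival
    for fid in packets:
        fx.ingest(five_tuple=("10.0.0.1", "10.0.0.2", 1024 + fid, 80, 6),
                  features={"flow": fid})
    block = fx.B[fx.switch]                               # block currently being written
    collected = {entry[0] for entry in block if entry is not None}
    return len(collected) / num_flows
```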
To minimise potential issues due to the insufficient space in memory to store packets for all the 8192 flows (which is not the goal of this analysis), we reserve space in the memory block B_k for at least two packets/flow by setting the value n=16384 packets (see also Figure <ref> for reference). We execute the experiments with ourtool and we compare the results against those obtained using a baseline configuration, in which we disable the ourtool's algorithm for filtering the packets (Algorithm <ref>). §.§.§ Bloom filter size Each Bloom filter is characterised by two key dimensions: the number of cells, denoted as m∈ [0,2^r-1], and the size of each cell. In this experiment, we have set the size of the cells while varying the value of r to understand the sensitivity of ourtool to the number of cells in terms of Collected Flows. Considering that 75% of the flows within the traffic profile consist of no more than 4 packets, we have chosen to set the cell size to 3 bits. This size adequately accommodates the counting of up to 4 packets/flow. Nevertheless, in the next section, we evaluate the sensitivity of ourtool to variations in the number of packets/flow and the cell size. Figure <ref> presents the results obtained with Bloom filter sizes from 1024 to 32768 cells (or r from 10 to 15). As expected, the larger the Bloom filter size, the smaller the number of hash collisions and the higher the number of flows collected. Remarkably, it is worth noting that ourtool is able to collect more flows than the baseline, regardless of the Bloom filter size. It is important to recall that the baseline approach gathers all packets without any filtering logic. Considering that the average number of packets/flow in the traffic profile is 78, the baseline approach exhausts the memory capacity with approximately 210 flows (calculated as 16384/78). Consequently, once the memory block reaches its limit, older packets are overwritten by more recent ones, resulting in a loss of flows. Figure <ref> leads to another significant observation: the stability of ourtool throughout the experiment rounds, in contrast to the baseline approach. This stability is a direct result of the packet filtering strategy employed by ourtool. By filtering out excess packets from each flow, ourtool ensures that the memory allocated for a flow remains bounded by the predetermined maximum p. Consequently, the number of collected flows remains stable even in extremely randomised scenarios like this simulation. In contrast, the baseline approach lacks any form of packet filtering. As a result, the number of packets per flow stored in memory is not constrained, leading to much larger fluctuations in terms of collected flows. This absence of bounds on per-flow packet storage is evident from the erratic behaviour depicted in the figure, where the scattered values of collected flows reflect the wide range of flow sizes, spanning from 1 to 1000 packets (with average and standard deviation of 77 and 74 packets, respectively). The baseline curve shown in the figure represents a polynomial approximation of the values and serves to emphasize the average trend of collected flows under this configuration. In light of the results obtained in this simulation, we have determined that setting the number of cells to 32768 is the most suitable choice. This value effectively minimises hash collisions (0.5% with 8000 flows) while maintaining a minimal impact on memory usage. 
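For reference, the memory cost behind this choice can be checked with a few lines of Python (a back-of-the-envelope script of ours, using the 3-bit cells and the range of r considered above).

```python
# Memory footprint of one counting Bloom filter with 3-bit cells.
for r in range(10, 16):
    cells = 2 ** r
    print(f"r={r:2d}: {cells:6d} cells -> {cells * 3 / 8 / 1024:5.2f} KiB per filter")
# r=15 gives 32768 cells, i.e. 12 KiB per filter; ourtool allocates two such filters.
```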
It is important to note that ourtool utilises two counting Bloom filters (F_k with k∈{0,1} in Algorithm <ref>). Since we only need 3 bits to count up to p=4 packets/flow, the size of each F_k Bloom filter will be 12 kilobytes. This memory requirement is relatively insignificant when compared to the memory blocks B_k, each of which demands approximately 562 kilobytes to store the features of 16384 packets. §.§.§ Maximum number of collected packets per flow In this experiment, we fix the size of the Bloom filter to 32768 cells, and we vary p from 2 to 32 packets/flow. We expect that by decreasing the value of p, the average number of packets collected per flow will also decrease, leading to a higher number of collected flows in memory but also to more potential collisions. This behaviour can be observed in Figure <ref>, which demonstrates that by increasing p, the memory block fills up sooner and, as a consequence, the percentage of collected flows deteriorates earlier. Of course, with p=2 the memory is never filled, even with 8192 generated flows (we remind that we set the size of the memory block to 16384 packets). However, we can notice that with p=2 a small portion of flows is lost due to collisions when the number of generated flows is 5000 or higher. Indeed, when the value of p is small, there is an increased likelihood of missing the condition at line <ref> of Algorithm <ref>. As a result, a higher number of packets and flows are discarded. In the figure, we can also notice the stability of ourtool with p=4 almost until the end of the experiment. For this reason, and considering the proven effectiveness of lucid with p>3 (as discussed in Section <ref>), we set p=4 for the experiments conducted in the emulated environment. §.§ Emulated Scenario Based on the results of the simulations described in the previous section, we set the parameters for the emulations as follows: (i) memory block size: n=16384 packets, (ii) bloom filter size: 32768 cells (r=15) and (iii) maximum packets/flows p=4. The duration of all the experiments has been set to 6 minutes, during which we inject various combinations of attack and benign traffic from the attacking host to the victim through the ourtool-enabled switch (see Figure <ref>). In this experiment, we generate a series of ddos attacks by using the traffic traces of the CIC-DDoS2019 dataset (see Section <ref>). If we exclude the DNS attack, the flows of the other attacks present an average flow length of around 2.2 packets, hence considerably smaller than those of the profile of the campus network used in simulations (around 78 packets/flow). Considering that we chose p=4 (i.e., p>2.2), we expect that with these small flows, both ourtool and the baseline can collect approximately the same number of flows. Acknowledging that malicious traffic flows typically do not constitute the totality of network traffic, we define four evaluation scenarios to enable a comprehensive comparison. These scenarios involve the inclusion of background benign traffic at varying proportions relative to the total packets in the network, namely 0% (no benign traffic), 25%, 50%, and 75%. Table <ref> presents the average flow length as we increase the percentage of benign traffic in the network (whose average flow length is about 36 packets). Other important information pertains to the relationship between packets and flows. 
Since we add a portion of background benign traffic based on the total amount of packets, this does not necessarily translate into an analogous increase in the number of flows. In fact, given the low average flow length of the attacks (first column in Table <ref>) and the relatively high number of packets/flow of the benign trace (around 36 on average), even with a 50% split between benign and ddos packets, ddos flows still account for more than 93% of the total flows on average (excluding the DNS attack, which presents an exceptional behaviour). Table <ref> shows the percentage of ddos flows for each attack type. Given these premises, we compare ourtool against the baseline approach in the emulated environment. For both approaches, we inject each of the eight attacks of the CIC-DDoS2019 dataset combined with benign traffic in the various proportions reported above. The comparison is evaluated using the three metrics presented in Section <ref>, namely sfnr, Collected Flows and quality. §.§.§ Collected Flows and sFNR Figure <ref> shows the comparison related to Collected Flows for all attack types. First, we can notice that with the baseline approach the percentage of collected flows remains stable when we vary the proportion between benign and attack traffic. This can be attributed to the absence of any filtering logic, causing the percentage of collected flows to solely depend on the memory capacity. In contrast, ourtool demonstrates consistent improvement in the percentage of collected flows as the volume of benign traffic (and the average flow length) increases. While the baseline approach fills up the memory block predominantly with large benign flows, ourtool optimises memory usage by storing only the essential information required by the nids and filtering out packets that exceed the maximum limit of p=4. This memory management prevents large flows from occupying excessive memory space, thus ensuring that a higher number of flows can be transmitted to the nids for classification. For instance, in the case of the MSSQL attack, whose average flow size is around 2 packets/flow, with 0% of benign traffic the performance of ourtool is approximately the same as with the baseline approach. This is because with p=4, no packets are filtered and the memory occupation is similar with both approaches. However, as soon as we add some benign traffic, which is composed of larger flows, the filtering mechanism starts dropping the extra packets, saving space in memory for more flows. This is reflected in the sfnr metric (Figure <ref>), which shows the ability of ourtool to collect more malicious flows than the baseline approach, ultimately leading to faster mitigation of network attacks. It is also worth noticing that when the volume of the attack is small, as in the case of LDAP and DNS, the memory is never filled, even when adding benign traffic. Therefore, the percentage of collected flows is 100% for both approaches, while the non-zero sfnr obtained with the DNS attack is solely due to lucid's fnr. §.§.§ Flow quality As defined in Section <ref>, the quality metric measures the average amount of “useful” data collected in the data plane and transmitted to the control plane for classification. Table <ref> presents the average quality metrics, revealing that while ourtool can collect a larger number of flows in memory, these flows maintain a high level of quality, although sometimes slightly lower than that obtained with the baseline.
Unlike the baseline, ourtool might drop packets because of collisions, which accounts for the slight reduction in quality observed in the attack scenarios that present a combination of high flow rate and high packet rate. The SSDP attack is an interesting use case that demonstrates this observation: as we add more and more benign traffic, the packet rate remains constant (around 30 kpackets/sec), while the flow rate decreases, and hence so do the chances of collisions. §.§.§ Temporal evolution A comparison between the two approaches throughout the whole experiment is presented in Figures <ref> and <ref>. For reasons of space, we provide the plots of only two attacks, although the other attacks present similar behaviours. The plotted data consistently validates the average numbers presented earlier in this section, with no notable deviations observed over time. Besides the memory size and the packet rate, the performance of ourtool is also influenced by the flow size and, as a consequence, by the flow rate. By collecting only p=4 packets/flow, the number of collected flows increases (and the sfnr decreases) when the flow rate increases, as highlighted by the respective plots in the two Figures. In contrast, the baseline approach is less affected by the flow rate, as it keeps overwriting the old flows when the memory is full, resulting in an approximately stable sfnr and number of collected flows. § CONCLUSION In this paper, we have presented ourtool, a solution based on P4 programmable data planes that enables the benefits of centralised intrusion detection while reducing the impact on the control channel and on hardware resources. ourtool takes advantage of a probabilistic hashing data structure to carefully select the amount of information to be extracted from the network traffic and sent to the control plane, taking into consideration the traffic features required by the traffic classifier. The key advantage of ourtool over similar solutions resides in its ability to build a packet-level representation of traffic flows. This capability allows ourtool to satisfy the requirements of state-of-the-art ml-based nids, which rely on a detailed representation of the traffic that goes beyond mere statistics. We have demonstrated that by using a counting Bloom filter to retain only the necessary information within the switch, ourtool optimises the usage of stateful memory in the data plane, while reducing the chances of missed malicious flows due to lack of memory space.
http://arxiv.org/abs/2307.04433v1
20230710091541
Holographic Gubser-Rocha model does not capture all the transport anomalies of strange metals
[ "Yongjun Ahn", "Matteo Baggioli", "Hyun-Sik Jeong", "Keun-Young Kim" ]
cond-mat.str-el
[ "cond-mat.str-el", "hep-th" ]
http://arxiv.org/abs/2307.04380v1
20230710072553
Ghost polygons, Poisson bracket and convexity
[ "Martin Bridgeman", "François Labourie" ]
math.GT
[ "math.GT", "math.DG", "53D30" ]
Ghost polygons, Poisson bracket and convexity Martin Bridgeman, François Labourie August 12, 2023 ======================================================== § INTRODUCTION The character variety of a discrete group Γ in a Lie group 𝖦 admits a natural class of functions: the algebra of regular functions generated as a polynomial algebra by trace functions or characters. When Γ is a surface group, the character variety becomes equipped with a symplectic form generalizing the Poincaré intersection form – called the Atiyah–Bott–Goldman symplectic form <cit.> – and a fundamental theorem of Goldman <cit.> shows that the algebra of regular functions is stable under the Poisson bracket and, more precisely, that the bracket of two characters is expressed using a beautiful combinatorial structure on the ring generated by characters. The Poisson bracket associated to a surface group has been heavily studied in <cit.>, <cit.>; and in the context of Hitchin representations the link between the symplectic structure, coordinates and cluster algebras discovered by Fock–Goncharov in <cit.> has generated a lot of attention: for instance see <cit.>, <cit.>, <cit.>, <cit.> and <cit.> for more results, and also relations with the swapping algebra <cit.>. On the other hand the deformation space of Anosov representations admits many other natural functions besides regular functions. Length functions, associated to any geodesic current and studied by Bonahon <cit.> in the context of Teichmüller theory, play a prominent role for Anosov representations, for instance in <cit.> and <cit.>. Another class consists of the correlation functions, defined in <cit.> and <cit.>. These functions are defined as follows. For the sake of simplicity, we focus in this introduction on the case of a projective Anosov representation ρ of a hyperbolic group Γ: one can then associate to any geodesic g a rank 1 projector _ρ(g). The correlation function _G associated to a configuration of n geodesics – that is an n-tuple G=(g_1,…,g_n) of geodesics up to cyclic transformation – is then _G:ρ↦_G(ρ)(_ρ(g_n)…_ρ(g_1)) . In Teichmüller theory, the correlation function of two geodesics is the cross-ratio of the endpoints. More generally, the correlation functions of geodesics in Teichmüller theory are rational functions of cross-ratios. This is no longer the case in higher rank. For instance if C is a geodesic triangle given by the three oriented geodesics (g_1,g_2,g_3), the map ^*_C:ρ↦^*_C(ρ)(_ρ(g_1)_ρ(g_2)_ρ(g_3)) , is related to the Goncharov triple ratio on the real projective plane. For a geodesic current μ, its length function _μ is defined by an averaging process – see equation (<ref>). One can also average correlation functions: we say that a Γ-invariant measure μ on the set ^n of generic n-tuples of geodesics is an integrable cyclic current if it is invariant under cyclic transformations and satisfies some integrability conditions – see section <ref> for precise definitions. Then the μ-correlation function or μ-averaged correlation function is _μ:ρ↦∫_^n/Γ_G(ρ) dμ . The corresponding functions are analytic <cit.> but rarely algebraic. In the case when Γ is a surface group, the algebra of functions on the deformation space of Anosov representations admits a Poisson bracket coming from the Atiyah–Bott–Goldman symplectic form. To uniformize our notation, we write ^k_μ for _μ when μ is supported on ^k and ^1_ν=_ν for the length function of a geodesic current ν.
Then, one of the main result of this article, Theorem <ref>, gives as a corollary [Poisson stability] The space of length functions and correlation functions is stable under the Poisson bracket. More precisely there exists a Lie bracket on the polynomial algebra formally generated by tuples of geodesics (G,H)↦ [G,H] so that {^k_μ,^p_ν}=∫_^n+m/Γ_[G,H](ρ) μ̣(G)ν̣(H) . The complete result, in particularly Corollary <ref> allows to recursively use this formula. In Theorem <ref> we compute explicitly what is the Hamiltonian vector field of the correlation functions. For instance in Teichmüller theory, this allows us to compute the higher derivatives of a length function along twist orbits by a combinatorial formula involving cross-ratios. The bracket (G,H)↦ [G,H] – that we call the ghost bracket – is combinatorially constructed. In this introduction, we explain the ghost bracket in a simple case and refer to section <ref> for more details. Recall first that an ideal polygon – not necessarily embedded – is a sequence (h_1,…, h_n) of geodesics in such that the endpoint of h_i is the starting point of h_i+1. Let then G be the configuration of n geodesics (g_1,…, g_n), with the endpoint of g_i not equal the starting point of g_i+1. The associated ghost polygon is given by the uniquely defined configuration (θ_1,…θ_2n) of geodesics – see figure (<ref>) such that * (θ̅_1,θ_2,θ̅_3 …,θ̅_2n-1,θ_2n) is an ideal polygon, * for all i, θ_2i=g_i and is called a visible edge, while θ_2i+1 is called a ghost edge. We now denote by ⌈ g,h⌉ the configuration of two geodesics g and h, ϵ(g,h) their algebraic intersection, and g̅ is the geodesic g with the opposite orientation. Then if (θ_i,…,θ_2n) and (ζ_i,…,ζ_2p) are the two ghost polygons associated to the configurations G and H, we define the projective ghost bracket of G and H as [G,H] G· H·(∑_i,j(-1)^i+jϵ(ζ_j,θ_i) ⌈ζ_j,θ_i⌉) , which we consider as an element of the polynomial algebra formally generated by configurations of geodesics. We have similar formulas when G or H are geodesics, thus generalizing Wolpert's cosine formula <cit.>. In the case presented in the introduction – the study of projective Anosov representations – the ghost bracket is actually a Poisson bracket and is easily expressed in paragraph <ref> using the swapping bracket introduced by the second author in <cit.>. Formula (<ref>) is very explicit and the Poisson Stability Theorem <ref> now becomes an efficient tool to compute recursively brackets of averaged correlations functions and length functions. 0.2 truecm In this spirit, we give two applications of this stability theorem. Following Martone–Zhang <cit.>, say a projective Anosov representation ρ admits a positive cross ratio if 0<(_ρ(g)_ρ(h))<1 for any two intersecting geodesics g and h. Examples come from Teichmüller spaces and Hitchin representations <cit.>. More generally positive representations are associated to positive cross ratios <cit.>. Our first application is a generalisation of the convexity theorem of Kerckhoff <cit.> and was the initial reason for our investigation: [Convexity Theorem] Let μ be the geodesic current associated to a measured geodesic lamination, _μ the associated length function. Let ρ be a projective Anosov representation which admits a positive cross ratio, then for any geodesic current ν, {_μ,{_μ,_ν}}≥ 0 . Furthermore the inequality is strict if and only if i(μ,ν) ≠ 0. Recall that in a symplectic manifold {f,{f,g}}≥ 0 is equivalent to the fact that g is convex along the Hamiltonian curves of f. 
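To spell out this last remark (our own one-line elaboration, with the convention {f,g}=X_f(g), where X_f denotes the Hamiltonian vector field of f and φ^f_t its flow), observe that d/dt (g∘φ^f_t)={f,g}∘φ^f_t , and hence d^2/dt^2 (g∘φ^f_t)={f,{f,g}}∘φ^f_t , so that {f,{f,g}}≥ 0 everywhere is precisely the convexity of g along every Hamiltonian curve of f; the conclusion is unchanged under the opposite sign convention for the bracket, since the sign enters twice.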
This theorem involves a generalisation of Wolpert's sine formula <cit.>. Our second result allows us to construct commuting subalgebras in the Poisson algebra of correlation functions. Let ℒ be a geodesic lamination whose complement is a union of geodesic triangles C_i. To each such triangle, we call the associated correlation function ^*_C_i an associated triangle function. The subalgebra associated to the lamination is the subalgebra generated by triangle functions and length functions for geodesic currents supported on ℒ. [Commuting subalgebra] For any geodesic lamination whose complement is a union of geodesic triangles, the associated subalgebra is commutative with respect to the Poisson bracket. In a forthcoming paper, we use Theorem <ref> with Dick Canary, to obtain a new proof of a Theorem by Potrie–Sambarino <cit.> that says that the entropy for simple roots is 1 for Hitchin representations. 0.5 truecm In order to give a flavour of the constructions of our article, let us explain that the first step is to integrate a closed form α with values in the Lie algebra of the group against a ghost polygon by a simple combinatorial process that we call ghost integration producing the number ∮_ρ(G)α , called the ghost integral – see section <ref>. We relate this ghost integration to the derivative of correlation functions using the dynamical cohomological equation – in a more general context than surface groups or hyperbolic groups. More precisely, we have for a variation of a flat connection ∇̇ _G(∇̇)=∮_ρ(G)∇̇ , see paragraph <ref>. We obtain this formula as a consequence of our study of the dynamical cohomological equation (proposition <ref>). In order to get to the Hamitonian, we have to introduce the dual objects to ghost integration in the twisted cohomology of the group, namely a form Ω_ρ(G) with values in the endomorphism bundle, so that ∫_(α∧Ω_ρ(G))=∮_ρ(G)α . Then the ghost intersection of two ghosts polygons G and H is _ρ(G,H)=∮_ρ(G)Ω_ρ(H) , and we show that _[G,H](ρ)= _ρ(G,H) . For details, see section <ref>. In order to finally compute the Poisson brackets of averaged functions and proceed to the proof of Theorem <ref>, we have to carefully exchange some integrals – see section <ref>. 0.2 truecm The constructions outlined above are the analogues of classical constructions (integration along a path, intersection of geodesics) in differential topology described in section <ref>, in some sense playing the role of non-abelian homology. §.§ The general case For the sake of simplicity, this introduction focused on the case of the so-called projective Anosov representations. More generally, one can construct correlation functions out of geodesics decorated with weights of the Lie group with respect to a Θ-Anosov representations. The Θ-decorated correlation functions are described by configurations of Θ-decorated geodesics. The full machinery developed in this article computes more generally the brackets of these decorated correlation functions. Using that terminology, the Poisson Stability Theorem <ref> still holds with the same statement, but the ghost bracket has to be replaced by a decorated ghost bracket which follows a construction given in paragraph <ref>, slightly more involved than formula (<ref>). §.§ Beyond representations: uniformly hyperbolic bundles We also introduce a new tool allowing us to describe “universal Anosov representations“ in the spirit of universal Teichmüller spaces: the definition of uniformly hyperbolic bundles. 
This new tool allows us to extend results obtained for Anosov representations, notably stability and limit curves, in a situation where no periodicity according to a discrete group is required. In particular, the (not averaged) correlation functions make sense and we are able to compute the variation of such a correlation function in proposition <ref>. This result follows in particular from the solution of the (dynamical) cohomological equation (proposition <ref>). Important constructions such as ghost integration – in section <ref> – and ghost intersection – in section <ref> – are also given in the context of uniformly hyperbolic bundles. 2 truecm We would like to thank Dick Canary for very useful comments, Fanny Kassel, Curt McMullen, Andrés Sambarino and Tengren Zhang for their interest. § PRELIMINARY In this section, we recall basic facts about intersection of geodesics in the hyperbolic plane, dual forms to geodesics and the Goldman symplectic form. We also introduce one of the notions important for this paper: geodesically bounded forms. §.§ The hyperbolic plane, geodesics and forms We first recall classical results and constructions about closed geodesics in the hyperbolic plane. §.§.§ Geodesics and intersection Let us choose an orientation in . We denote in this paper by the space of oriented geodesics of that we identify with the space or pairwise distinct points in . We denote by g̅ the geodesic g with the opposite orientation. Let g_0 and g_1 be two oriented geodesics. The intersection of g_0 and g_1 is the number ϵ(g_0,g_1) which satisfies the following rules ϵ(g_0,g_1)=-ϵ(g_1,g_0)=-ϵ(g̅_0,g_1) , and verifying the following * ϵ(g_0,g_1)=0 if g_0 and g_1 do not intersect or g_0=g_1. * ϵ(g_0,g_1)=1 if g_0 and g_1 intersect and (g_0(∞),g_1(∞),g_0(-∞),g_1(-∞) is oriented. * ϵ(g_0,g_1)=1/2 if g_0(-∞)=g_1(-∞) and (g_0(∞),g_1(∞),g_1(-∞)) is oriented. Observe that ϵ(g_0,g_1)∈{-1,-1/2,0,1/2,1} and that we have the cocycle property, if g_0,g_1,g_2 are the sides of an ideal triangle with the induced orientation, then for any geodesic g we have ∑_i=0^2ϵ(g,g_i)=0 . We need an extra convention for coherence A phantom geodesic is a pair g of identical points (x,x) of ∂_∞. If g is a phantom geodesic, h any geodesic (phantom or not), we define ϵ(g,h) 0. §.§.§ Geodesic forms Let us denote by Ω^1() the space of 1-forms on the hyperbolic space. A form ω in Ω^1() is bounded if |ω_x(u)| is bounded uniformly for all (x,u) in U the unit tangent bundle of . We let ^∞ the vector space of bounded forms. We have a equivariant mapping Ω^1() ,gω_g , which satisfies the following properties * ω_g is a closed 1-form in supported in the tubular neighbourhood of g at distance 1, outside the tubular neighbourhood of g at distance 1/2. * ω_g=-ω_g̅ * Let g_0 be any geodesic g, then ∫_g_0ω_g = ϵ(g_0,g) . The construction runs as follows. Let us fix a function f from ℝ^+ to [0,1] with support in [0,1] which is constant and equal to 1/2 on [0,1/2] neighbourhood of 0. We extend (non-continuously) f to ℝ as an odd function. Let finally R_g be the “signed distance" to g, so that R_g̅=-R_g. We finally set ω_g=-(̣f∘ R_g). Then (<ref>) and (<ref>) are obvious. We leave the reader check the last point in all possible cases. We extend the above map to phantom geodesics by setting ω_g=0 for a phantom geodesic and observe that the corresponding assignment still obey proposition <ref>. The form ω_g is called the geodesic form associated to g. Such an assignment is not unique, but we fix one, once and for all. 
Then we have For any pair of geodesic g_0 and g_1, ω_g_1∧ω_g_0=farea_ with f bounded and in L^1. The only non-trivial case is if g_0, g_1 share an endpoint. In the upper half plane model let g_0 be the geodesic x=0, while g_1 is the geodesic x=a. Observe that the support of ω_g_1∧ω_g_0 is in the sector V defined by the inequations y>B>0 and | x/y| < C for some positive constants A an C. Finally as the signed distance for g_0 satisfies sinh(R_g_0) = x/y then ω_g_0=f_0 d(x/y) , ω_g_1=f d(x-a/y) , where f_0 and f_1 are functions bounded by a constant D. An easy computation shows that d(x/y)∧ d(x-a/y) =a d x∧ d y/y^3 . Oberve that | f f_0 a| is bounded by D^2a, and ∫_V d x∧ d y/y^3≤ 2C∫_B^∞1/y^2 d y=1/B< ∞ . This completes the proof. The above result is still true whenever g or h are phantom geodesics. From that it follows that For any pair of geodesics, phantom or not, g and g_0, we have ∫_g_0ω_g = ϵ(g_0,g)=∫_ω_g_0∧ω_g . Moreover for any (possibly ideal) triangle T in ∫_∂ Tω_g=0 . §.§ The generic set and barycentric construction For any oriented geodesic g in we denote by g̅ the geodesic with opposite orientation, and we write g≃ h, if either g=h or g=h̅. Let us also denote the extremities of g by (∂^-g,∂^+g) in ×. For n≥ 2, let us the define the singular set as ^n_1{(g_1,…,g_n)|∀ i,j, g_i≃ g_j } , and the generic set to be _⋆^n^n∖^n_1 . We define a Γ-compact set in _⋆^n to be the preimage of a compact set in the quotient _⋆^n/Γ. The barycenter of a family G=(g_1,…,g_n) of geodesics is the point (G) which attains the minimum of the sum of the distances to the geodesics g_i. Choosing a uniformisation, the barycentric construction yields a -equivariant map from :_⋆^n ,(g_1,…,g_n)(y) . It follows from the existence of the barycenter map that the diagonal action of Γ on _⋆^n is proper. The barycentric section is then the section σ of the following fibration restricted to _⋆^n F:()^n→^n , given by σ=(σ_1,…,σ_n) , where σ_i(g_1,…,g_n) is the orthogonal projection of (g_1,…,g_n) on g_i. Obviously The barycentric section is equivariant under the diagonal action of on _⋆^n as well as the natural action of the symmetric group 𝔖_n. §.§ Geodesically bounded forms We abstract the properties of geodesic forms in the following definition: Let α be a closed 1-form on . We say that α is geodesically bounded if * α belongs to ^∞, ∇α is bounded. * for any geodesic g, α(ġ) is in L^1(g, ṭ), ω_g∧α belongs to L^1() and ∫_gα =∫_ω_g∧α . * Moreover for any (possibly ideal) triangle T in ∫_∂ Tα =0 . We denote by the vector space of geodesically bounded forms. We observe that any geodesically bounded form is closed and that any geodesic form belongs to . §.§ Polygonal arcs form We will have to consider geodesic polygonal arcs which are embedded finite union of oriented geodesic arcs =γ_0∪⋯∪γ_p , such that γ_i joins γ_i^- to γ_i^+ and we have γ_i^-=γ_i-1^+, while γ_0^- and γ_p^+ are distinct points at infinity. We say that γ_1,…,γ_p-1 are the interior arcs. We have similarly to above Given a polygonal arc =γ_0∪⋯∪γ_p there exists a closed 1-form ω_ so that * the 1-form ω_ is supported on a 1-neighborhood of , * Let B be a ball containing a 1-neighbourhood of the interior arcs, such that outside of B the 1-neighbourhood V_0 of γ_0 and the 1-neighbourhood V_1 of γ_p are disjoint then .ω_|_V_0=.ω_g_0|_V_0 , .ω_|_V_1=.ω_g_p|_V_1 . where g_0 and g_p are the complete geodesics cointaining the arcs γ_0 and γ_p. * For any element Φ of , ω_Φ()=Φ^*(ω_). * For any geodesic g, ∫_gω_=ϵ(g,[γ_0^-,γ_p^+]). 
* Let be a polygonal arc with extremities at infinity x and y, then for any 1-form α in we have ∫_ω_∧α=∫_[x,y]α . The construction runs as the one for geodesics. Let us fix a function f from ℝ^+ to [0,1] with support in [0,1] which is constant and equal to 1/2 on [0,1/2]. We extend (non-continuously) f to ℝ as an odd function. Let finally R_g be the “signed distance" to g, so that R_g̅=-R_g. We finally set ω_g=-(̣f∘ R_g). Then (1), (2), (3) and (4) are obvious. Then writing ∖=U⊔ V where U and V are open connected sets. We have that ∫_U ω_∧α=∫_U (̣f∘ R_g)∧α=1/2∫_g α , by applying carefully Stokes theorem. The same holds for the integral over V, giving us our wanted result. The form ω_ is the polygonal arc form. §.§ The Goldman symplectic form Let S be a closed surface with Σ its universal cover that we identify with by choosing a complete hyperbolic structure on S. Given a representation ρ:π_1(S) → G we let E = Σ×_ρ𝔤 be the bundle over S by taking the quotient of the trivial bundle over Σ×𝔤 by the action of π_1(S) given by γ(x,v) = (γ(x), Adρ(γ) (v)). Let ∇ be the associated flat connection on the bundle E and denote by Ω^k(S)⊗(E) the vector space of k-forms on S with values in (E). Recall that ∇ gives rise to a differential ^̣∇: Ω^k(S)⊗(E)→Ω^k+1(S)⊗(E) . We say a 1-form α with values in (E) is closed if ^̣∇α=0 and exact if α=^̣∇β. Let then consider C^1_ρ(S) {closed 1-forms with values in (E)} , E^1_ρ(S) {exact 1-forms with values in (E)} , H^1_ρ(S) C^1_ρ(S)/E^1_ρ(S) . When S is closed, the Goldman symplectic form on H^1_ρ(S) is given by (α,β)∫_S(α∧β) , where for u and v in S: (α∧β)(u,v)(α(u)β(v))-(α(v)β(u)). Observe that if consider complex bundles, the Goldman symplectic form is complex valued, while it is real valued for real bundles. § UNIFORMLY HYPERBOLIC BUNDLES AND PROJECTORS We introduce the notion of uniformly hyperbolic bundles over the unit tangent bundle of – see definition <ref>. This notion is a universal version of Anosov representations defined in <cit.>. More specifically, we explain in the projective case, that such objects, which are bundles with data, are associated to sections of the endomorphism bundle given by projectors. One of the main results – proposition <ref> – is a description of the variation of such a projector under a variation of the data defining the uniformly hyperbolic bundles. Finally, we recover Anosov representations as periodic cases of uniformly hyperbolic bundles. Uniformly hyperbolic bundle is the structure underlying the study of quasi-symmetric maps in <cit.>. This notion has a further generalisation to all hyperbolic groups Γ, replacing by a real line bundle X over ∂_∞Γ×∂_∞Γ∖{(x,x)| x∈∂_∞Γ} , equipped with a Γ-action so that X/Γ is the geodesic flow of Γ. We will not discuss it in this paper, since this will uselessly burden our notation. §.§ Uniformly hyperbolic bundles: definition Let be the unit tangent bundle of . We denote by the vector field on generating the geodesic flow ϕ. We consider the trivial bundle E=V×. For any flat connection ∇ on E, we consider the lift Φ^∇ of ϕ given by the parallel transport along the orbits of ∂_t. When D is the trivial connection on E, we just write ΦΦ^D and observe that Φ_t(x,u)=(ϕ_t(x),u) where x is in and v in V. A rank k uniformly hyperbolic projective bundle is a pair (∇,h) where h is a section of the frame bundle on E, ∇ a trivializable connection on the bundle E, satisfying first the (standard) bounded cocyle hypothesis: ‖Φ_1^∇‖ is uniformly bounded. 
Then we assume that we have a Φ^∇ invariant decomposition at every point x E_x=L_x⊕ P_x , where L_x and P_x are subspaces with (L_x)=k and so that * The bundle L⊗ P^* is contracting, that is there exist positive constant B and b so that for all positive real s, for all x in for all non-zero vector u and v in L_x and P_x respectively ‖Φ^∇_s(u)‖/‖ u‖≤ B e^-bs‖Φ^∇_s(v)‖/‖ v‖ . * There exists a positive ϵ, so that for any converging sequences x and y to a point x, and any sequence u and v converging to u and v in E_x, so that u_m and v_m belongs to L_x_m and P_y_m respectively, then |⟨u| v||⟩≤ (1-ϵ) ‖ u‖·‖ v‖ . * There is a volume form on E, which is bounded with respect to h and ∇-parallel along orbits on the flow. The metric and scalar products considered are with respect to the metric g_h for which h is orthonormal. The fundamental projector associated to a uniformly hyperbolic bundle is the section of (E) given by the projection on L parallel to P. Observe that we do not require a priori any continuity on the bundles L and P. When the dimension of L_x is 1, we talk of a projective uniformly hyperbolic bundle, when it is k, we talk of a rank k uniformly hyperbolic bundle. The hypothesis (<ref>) is for simplification purposes. Using that hypothesis, one sees that (L) and (P) are respectively contracting and expanding bundles. The bounded cocycle assumption, akin to a similar condition in Oseledets theorem, implies that there exists positive constants A, B and C so that ‖Φ_s^∇‖≤ A+ Be^Cs . If we have a projection π from a set F to U, we write, for x in , F_x=π^-1(x). Let (∇,h) be a uniformly hyperbolic bundle. Then there exists open sets 𝒱 and 𝒰 of (E) and (E^*), where k is the dimension of L_x, respectively as well as a positive real T, so that * For every u in 𝒰_x and v in 𝒱_x, u and v are transverse. * Φ_T sends 𝒰 to 𝒰 and is 1/2 Lipschitz. * Φ_-T sends 𝒱 to 𝒱 and is 1/2 Lipschitz. * L and P are sections of 𝒱 and 𝒰 This is a rephrasing of the definition of uniformly hyperbolic bundle: let us consider L and P as sections of (E) and (E^*) respectively and let ℒ and 𝒫 be the closure of the images of these sections. The second condition implies for any u in ℒ and v in 𝒫, d(u,w)≥ϵ>0 , ∀ w not transverse to v . It follows that we can find ϵ_0 so that the open sets 𝒱{u| d(u,ℒ)≤ϵ_0} , 𝒰{v| d(v,𝒫)≤ϵ_0} , satisfy the condition of the lemma. As a classical consequence we have. Let (∇,h) be a uniformly hyperbolic bundle. Let us choose a trivialisation so that ∇ is the trivial connection. Then * The fundamental projector is a parallel along the geodesic flow and continuous bounded section of (E). * L is constant along the strong stable foliation of the geodesic flow of . * Finally P is constant along the strong unstable foliation of . The second condition of the definition of uniformly hyperbolic bundles guarantees that there exist open sets 𝒱 and 𝒰 in (E) and (E^*) respectively, so that L and P are sections of 𝒱 and 𝒰 respectively and moreover the closure of 𝒱 and 𝒰 do not intersect. The first condition implies that for s large enough Φ_s is contracting as a map from 𝒱 to 𝒰. Thus L being an invariant section is continuous; the same holds for P. Hence is continuous. Using now that the geodesic flow is contracting along the stable leaves towards the future, and contracting along the unstable leaves towards the past, it follows that L is constant along the strong stable leaves and P is constant along the strong unstable leaves. This allows to define the limit maps of the unformly hyperbolic bundle (∇,h). 
Let us choose a trivialization E=V× so that ∇ is trivial. The limit map of the uniformly hyperbolic bundle is ξ: ∂_∞→(V) , so that ξ(x)=L(y), if y belongs to the strong stable foliation defined by x. Symmetrically, the dual limit map of the uniformly hyperbolic bundle is ξ^*: ∂_∞→(V^*) , so that ξ^*(x)=P(y), if y belongs to the strong unstable foliation defined by x. 0.2truecm Finally let us define a notion of equivalence for uniformly hyperbolic bundles: Two uniformly hyperbolic bundles (∇_0,h_0) and (∇_1,h_1) are equivalent if there is a section B of 𝖦𝖫(E) so that * ∇_1=B^*∇_0 , * The metrics g_h_0 and B^*g_h_1 are uniformly equivalent. §.§ Families of uniformly hyperbolic bundles In order to study families of uniformly hyperbolic bundles, we will adopt two different gauge-fixing points of view: * The fixed gauge point of view: we allow the frame to vary but fix the connection * The fixed frame point of view: we allow the connection to vary but fix the frame. A natural example comes from a projective Anosov representation of a cocompact surface group. We call such an example, where the frame and the connections are invariant under the action of a cocompact surface group a periodic bundle. We discuss periodic bundles in <ref>. For a vector bundle V over a topological space X, we denote by V_x the fiber at a point x in X. A C^k-bounded variation of a uniformly hyperbolic bundle (∇,h) is a family (∇^t,h_t)_t∈ ]-ϵ,ϵ[ of connections and frames on E_0 so that * (∇_0,h_0)=(∇,h), * for all t, ∇^t is trivializable * for all t close to 0, the C^k-derivatives of t↦∇_ḣ_t are bounded with respect to g_h_t. We will see that any smooth family of periodic variation is of bounded variation. Then we have the lemma: Assume that ∇^t,h is a C^k bounded variation of a uniformly hyperbolic bundle where k∈ℕ∪{ω}. Then for t in some neighbourhood of zero, the bundle (∇^t,h_t) is uniformly hyperbolic. Let _t be the associated projector, then _t depends C^k on t. We prove this lemma in paragraph <ref>. §.§ The fundamental projector and its variation Our goal is to compute the variation of the associated family of fundamental projectors of a bounded variation of a uniformly hyperbolic bundle. More precisely, let assume we have a uniformly hyperbolic bundle (∇_0,h_0) with decomposition E_0=L_0⊕ P_0 . We prove in this paragraph the following proposition Assume that we have a bounded variation ∇_t,h of the uniformly hyperbolic bundle (∇_0,h_0) in the fixed connection point of view, that is ∇_t is the trivial connection D. The derivative of the fundamental geodesic at a point x in a geodesic g, is given by _0=[Ȧ,_0] + ∫_g^+ [̣̇A,_0] ·_0+ ∫_g^-_0· [̣̇A,_0] . where g^+ is the geodesic arc from x to g(+∞) and g^- is the arc from x to g(-∞) (in other words with the opposite orientation to g), and Ȧ is the endormorphism so that Ȧ h =.∂/∂ t|_t=0 h_t . §.§.§ Preliminary: subbundles of (E_0) We first adopt the fixed frame point of view. Let ∇ be a flat connection on E_0, Then is parallel for the induced flat connection on (E_0) along the flow. Let also F_0 be the subbundle of (E_0) given by F_0{B∈(E_0)| B+ B=B} . Observe that for any section C of (E_0), [C,] is a section of F_0 and that for any element A in F_0 we have (A)=0. The bundle F_0 decompose as two parallel subbundles F_0=F_0^+⊕ F_0^- , where we have the identification F_0^+=P^*⊗ L , F_0^-=L^*⊗ P . The projection of F_0 to F_0^+ parallel to F_0^- is given by B↦ B, while the projection on F_0^- parallel to F_0^+ is given by B↦ B. 
Finally there exists positive constants A and a, so that for all positive time s, endomorphisms u^+ in F_0^+ and u^- in F_0^-, we have ‖Φ_-s(u^-)‖≤ A e^-as‖ u^-‖ , ‖Φ_s(u^+)‖≤ A e^-as‖ u^+‖ . Consequently, for any section D of F_0, we write D=D^++D^- where D^± are sections of F_0^± according to the decomposition (<ref>). Let us write (E_0)=E_0^*⊗ E_0=(L^*⊗ L)⊕ (P^*⊗ P)⊕ (L^*⊗ P)⊕ (P^*⊗ L) , In that decomposition, F_0=(P^*⊗ L)⊕(L^*⊗ P). Let F_0^+=P^*⊗ L , F_0^-=L^*⊗ P . Thus, we can identify F_0^+ as the set of elements whose image lie in L and F_0^- are those whose kernel is in P. Thus F_0^+ = {B∈ F_0| B=B}= {B∈ F_0| B=0} , F_0^- = {B∈ F_0| B=0}={B∈ F_0| B=B} . Then the equation for any element B of F_0, B= B+ B , corresponds to the decomposition F_0=F_0^+⊕ F_0^-. Thus the projection on F_0^+ is given by B↦ B, while the projection on F_0^- is given by B↦ B. The definition of F_0^+ and F_0^- and the corresponding contraction properties of the definition of a uniformly hyperbolic bundles give the contraction properties on F_0^+ and F_0^-. §.§.§ The cohomological equation Let σ be a bounded section of F_0, then there exists a unique section η of F_0 so that ∇_η=σ. This section η is given by η(x)=∫_-∞^0 ·σ(ϕ_s(x)) ṣ-∫_0^∞σ(ϕ_s(x))· ṣ . Classically, in dynamical systems, the equation ∇_η=σ is called the cohomological equation. Since σ belongs to F_0^+ while σ belongs to F_0^- by lemma <ref>, the right hand side of equation (<ref>) makes sense using the exponential contraction properties given in the inequalities (<ref>). Indeed, for a positive s by lemma <ref> again, ‖Φ_-s(σ(ϕ_s) ·) ‖ ≤ Ae^-as‖σ‖_∞ , ‖Φ_s(·σ(ϕ_-s)) ‖ ≤ Ae^-as‖σ‖_∞ . It follows that using the above equation as a definition for η we have η(ϕ_s(x))=∫_-∞^t·σ(ϕ_u(x)) ụ-∫_t^∞σ(ϕ_u(x))· ụ . Thus ∇_η=σ +σ=σ , since σ is a section of F_0. Uniqueness follows from the fact that F_0 has no parallel section: indeed neither F_0^+ nor F_0^- have a parallel section. §.§.§ Variation of the fundamental projector: metric gauge fixing We continue to adopt the variation of connection point of view and consider after gauge fixing only hyperbolic bundles where the metric is fixed. Let ∇^t,h give rise to a bounded variation of the uniformly hyperbolic bundle (∇_0,h), where ∇_0 is the trivial connection D. Our first result is The variation of the fundamental projector _t associated to (∇^t,h) is given by (x)=∫_-∞^0 ( · [,∇̇_] )(x^s) ṣ- ∫_0^∞([,∇̇_]·)(x^s) ṣ , where x^s=ϕ_s(x) and ∇̇_(u)=.∂/∂ s|_t=0∇^t_ (u). Let us distinguish for the sake of this proof the following connections. Let ∇ be the flat connection on E_0 and ∇^End the associated flat connection on (E_0). Then from the equation ^2=, we obtain after differentiating, += . Thus is a section of F_0. Moreover taking the variation of the equation ∇^End_=0 yields ∇^End_=-∇̇^End_=[,∇̇_ ] . In other words, the variation of the fundamental projector is a solution of the cohomological equation ∇^End_η=σ, where σ=[,∇_] and η=. Applying proposition <ref>, yields the equation (<ref>). §.§.§ The fixed connection point of view and the proof of proposition <ref> We can now compute the variation of the projector in the fixed frame point of view and prove proposition <ref>. We first need to switch from the fixed frame of view to the fixed connection point of view. Let (∇^t,h) be a variation in the fixed frame point of view. Let A^t be so that ∇^t=A_t^-1 DA_t and A_0=Id. In particular, we have ∇̇_= D_Ȧ=̣̇A( ) . Then the corresponding variation in the fixed connection point of view is ( D,h_s) where h_t=A_t(h). 
It follows that ḣ= Ȧ(h) , ∇̇_ =̣̇A()=D_Ȧ . Let now _0^t the projector – in the fixed connection point of view– associated to (D, h_t), while ^t is the projector associated to (∇^s, h). Obviously _0^t=A_t^t A_t^-1 , _0_0^0=^0_0 . Thus _0=[Ȧ,]+ . Using lemma <ref> and equations (<ref>), we have =∫_-∞^0 · [,∇̇_] ∘ϕ_s ṣ- ∫_0^∞ [,∇̇_]·∘ϕ(s) ṣ , which yields (using the fact that _0=): _0=[Ȧ,_0] +∫_-∞^0_0· [_0,∇̇_]∘ϕ_s ṣ- ∫_0^∞ [_0,∇̇_]·_0 ∘ϕ(s) ṣ . From equation (<ref>), we get that ∫_0^∞ [_0,∇̇_]_0 ∘ϕ(s) ṣ=∫_g^+[_0,̣̇A]·_0=- ∫_g^+[̣̇A,_0]·_0 , while ∫_-∞^0 _0 [_0,∇̇_]∘ϕ(s) ṣ=-∫_g^-_0 [_0,̣̇A] =∫_g^-_0[̣̇A,_0] . This concludes the proof of proposition <ref>. §.§ Proof of the stability lemma <ref> Let us first choose a continuous family of gauge transformations g so that g_t^*h_t=h. The bounded variation condition implies that for a given T, for any α, there exists β so that | s|≤β, implies that ‖Φ_T-Φ_T^s‖≤α , where Φ_Y^s is the parallel transport at time T for ∇^s and the norm is computed with respect to h. Thus from lemma <ref>, for α small enough, Φ_T^s preserves 𝒰 and is 3/4-Lipschitz, while the same holds for Φ_-T^s and 𝒱. This implies that for | s|≤β, (∇_s,h) is a uniformly hyperbolic bundle. By the C^k bounded variation hypothesis, Φ_-T^s is a C^k-family of contracting maps, hence the fixed section is itself C^k as a function of s. This proves that the fundamental projector varies C^k in s. §.§ Θ-Uniformly hyperbolic bundles We now generalize the situation described in the previous paragraphs, using the same notational convention. Let V be a finite dimension vector space, let Θ=(K_1,…,K_n) be a strictly increasing n-tuple so that 1≤ K_1<…< K_n < (V) . Then a Θ-uniformly hyperbolic bundle over is given by a pair (∇,h) for which there exists a Φ^∇- invariant decomposition E_0=E_1⊕…⊕ E_n+1 , so that (∇,h) is uniformly of rank K_å for all å in {1,…,n} with invariant decomposition given by E_0=F_å⊕ F^∘_å , with F_å=E_1⊕…⊕ E_å , F^∘_å=E_å+1⊕…⊕ E_n+1 . The flag (F_1,…,F_n) will be called a Θ-flag. In other words, we generalized the situation described before for Grassmannians to flag varieties. §.§ Projectors and notation In this section, we will work in the context of a Θ-uniformly hyperbolic bundle ρ=(∇,h) associated to a decomposition of a trivialisable bundle E=E_1⊕⋯⊕ E_n+1 . Let us denote k_å(E_å) and K_å k_1+… k_å so that Θ=(K_1,…, K_n). We then write for a geodesic g, ^å(g) , the projection on F_å=E_1⊕…⊕ E_å parallel to F_å^∘ E_å+1⊕…⊕ E_n+1. When g is a phantom geodesic we set the convention that ^å(g). Observe that all ^å(g) are well defined projectors in the finite dimensional vector space V which is the space of ∇-parallel sections of E. Or in other words the vector space so that in the trivialization given by ∇, E=V×. Finally, we will consider a Θ-geodesic g, given by a geodesic g_0 labelled by an element å of Θ and write (g)^å(g_0) , Θ_g=(^å)=K_å . §.§ The periodic case Let Σ be the universal cover of a closed surface S. We denote by π the projection from Σ to S and p the projection from to Σ. Let Γ be the fundamental group of S and ρ be a projective Anosov representation of Γ on some vector space ℰ. Let E be the associated flat bundle on S with connection ∇. We will use in the sequel the associated trivialisation of the bundle E_0=p^*π^* E on which ∇ is trivial. Let us choose a Γ-invariant euclidean metric g on the bundle E_0. Let us finally choose a orthonormal frame h for g so that g=g_h. 
It follows from the definition of projective Anosov representations that the corresponding bundle (∇,h) is uniformly hyperbolic. We call such a uniformly hyperbolic bundle periodic. More generally, let 𝖯_Θ be the parabolic group stabilizing a Θ-flag. Then a 𝖯_Θ-Anosov representation defines a Θ-uniformly hyperbolic bundle. Finally we observe * Given a representation ρ, a different choice of a Γ-invariant metric yields an equivalent uniformly hyperbolic bundle. * Similarly, two conjugate representations give equivalent uniformly hyperbolic bundles. § GHOST POLYGONS AND CONFIGURATIONS OF PROJECTORS We introduce here our main tool, ghost polygons, and relate them to configurations of geodesics and correlation functions. This section is mainly concerned with definitions and notation. We will consider the space of oriented geodesics of , and an oriented geodesic g as a pair (g^-,g^+) consisting of two distinct points in . §.§ Ghost polygons 0.2 truecm A ghost polygon is a cyclic collection of geodesics ϑ=(θ_1,…,θ_2p). The ghost edges are the geodesics (possibly phantom) θ_2i+1 , and the visible edges are the even labelled edges θ_2i, such that θ_2i+1^+=θ_2i^+ , θ_2i-1^-=θ_2i^- . * The geodesics are allowed to be phantom geodesics, * It will be convenient some time to relabel the ghost edges as ζ_iθ_2i+1. * It follows from our definition that (θ̅_1,θ_2,θ̅_3,…,θ_2p) is closed ideal polygon. We have an alternative point of view. A configuration of geodesics of rank p is just a finite cyclically ordered set of p-geodesics. We denote the cyclically ordered set of geodesics (g_1,…,g_p) by ⌈ g_1,…, g_p⌉. The cardinality of the configuration is called the rank of the the configuration. We see that the data of a ghost polygon and a configuration of geodesics is equivalent (see figure (<ref>)): * we can remove the ghost edges to obtain a configuration of geodesics from a ghost polygon, * conversely, given any configuration G=(g_1,…,g_p), the associated ghost polygon ϑ=(θ_1,…,θ_2p) is given by θ_2i g_i, θ_2i+1(g_i+1^-,g_i^+) We finally say that two configurations are non-intersecting if their associated ghost polygons do not intersect. Let us add some convenient definitions. Let ϑ=(θ_1,…,θ_2p) be a ghost polygon associated to the configuration configuration ⌈ g_1,…, g_p⌉, We then define the opposite configurations as follows. * For visible edge g_1 of G, the opposite configuration is tuple g_1^* (g_1,g_2,…,g_p,g_1). * For ghost edge θ_1 of G, the ghost opposite configuration is the tuple θ_1^*(g_2,…,g_p,g_1). Observe that both opposite configurations are not configurations per se but actually tuples – or ordered configurations. We finally define the core diameter r(G) of a ghost polygon G to be the minimum of those R such that, if B(R) is the ball of radius R centered at the barycenter (G), then B(R) intersects all visible edges. We obviously have The map G↦ r(G) is a continuous and proper map from _⋆^n/ to ℝ. §.§ Θ-Ghost polygons We now Θ-decorate the situation. Let as in paragraph <ref>, Θ=(K_1,…,K_n) with K_å<K_å+1. Let G be a ghost polygon, a Θ-decoration is a map Å from the set of visible edges to 1,…,n. We again have the equivalent description in terms of configurations. A Θ-configuration of geodesics of rank p is configuration (g_1,…,g_p) with a map Å – the Θ-decoration – from the collection of geodesics to {1,…,n}. We think of a Θ-decorated geodesic, or in short a Θ-geodesic, as a geodesic labelled with an element of Θ. 
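To illustrate the rank one manipulations used in this proof, one may write each rank 1 projector as P_i=v_i⊗α_i/α_i(v_i), with image spanned by the vector v_i and kernel the kernel of the linear form α_i (the symbols P_i, v_i, α_i and tr are ours and serve only this illustration). A direct computation then gives, for a pair of geodesics, tr(P_2P_1)=(α_1(v_2) α_2(v_1))/(α_1(v_1) α_2(v_2)) , an expression invariant under rescaling of the v_i and of the α_i; in the Fuchsian case this recovers the statement from the introduction that the correlation function of two geodesics is a cross-ratio of their endpoints, and the longer cyclic products appearing above are handled by the same rank one identities.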
When ρ is a uniformly hyperbolic bundle and ^å(g) a fundamental projector associated to a geodesic g, we will commonly use the following shorthand. Let G be ghost polygon (θ_1,θ_2,…,θ_2p) be given by configuration ⌈ g_1,…, g_p⌉. * For visible edge g_i we write _i^Å_i^Å(i)(g_i). * For visible edge g_i, the opposite ghost endomorphism is _G^Å(g^*_j) _j·_j-1⋯_j+1·_j . * For ghost edge ζ_i, the opposite ghost endormorphism is _G^Å(ζ_i^*)_i·_i-1…_i+1 . The reader should notice that in the product above, the indices are decreasing. The opposite ghost endomorphisms have a simple structure in the context of projective uniformly hyperbolic bundles (that is when Θ={1}). When Θ={1}, _G(θ^*_i)=_G(ρ) (θ_i). Let G=(θ_1,…,θ_2p) be a ghost polygon with configuration ⌈ g_1,…, g_p⌉. If g_i^+ = g_i+1^- then p_i+1p_i = 0 and the equality holds trivially with both sides zero. We thus can assume there is a ghost edge ζ_i = θ_2i+1 for each i ∈{1,…,p}. When Θ={1} all projectors have rank 1. Thus for visible edge g_i _G(g_i^*)=_i_i-1…_i+1_i = (_i…_i+1) _i= _G(ρ) (g_i) . For a ghost edge ζ_i as (_i+1_i) ≠ 0 _G(ζ^*_i)=_i_i-1…_i+1 = _i_i+1(_n…_1)/(_i_i+1)= _G(ρ) , where 1/(_i_i+1)_i_i+1. Then we see that has trace 1, its image is the image of _i, and its kernel is the kernel of _i+1. Thus is the rank 1 projector on the image of _i, parallel to the kernel of _i+1. Hence =(ζ_i). The result follows. 0.2 truecm §.§ Correlation function Given a Θ-configuration of geodesics G=⌈ g_1,…,g_p⌉ given by a p-uple of geodesics (g^0_1,…,g^0_p), with a Θ-decoration Å the correlation function associated to G is _G: ρ↦_⌈ g_1,…,g_p⌉(ρ)(^å(p)(g^0_p)⋯^å(1)(g^0_1))=((g_p)⋯(g_1)) , where is the projector associated to the uniformly hyperbolic bundle ρ. The reader should notice (again) that the geodesics and projectors are ordered reversely. §.§ Analyticity in the periodic case In this subsection we will treat first the case of complex bundles, that is representation in in 𝖲𝖫(n,ℂ) of the (complex) parabolic group 𝖯^ℂ_Θ associated to Θ. We now have, as a consequence of <cit.>, the following Let G be a ghost polygon. Let ρ be an analytic family of 𝖯^ℂ_Θ-Anosov representations parametrized by the unit disk . Then, the function u↦_G(ρ_u) is analytic. Moreover the map G↦_G is a continuous function with values in the analytic functions. Indeed the correlation functions only depends on the limit curve of the representation and thus the analyticity of the limit curve proved in <cit.> gives the result. We deduce the general analyticity result from this proposition by complexifying the representation. § GHOST INTEGRATION In this section, given a Θ-uniformly hyperbolic bundle ρ, a Θ-ghost polygon G and a 1-form α on with values in the endormorphism bundle of a uniformly hyperbolic bundle (of a special type), we produce a real number denoted ∮_ρ(G)α . This procedure is called the ghost integration. We introduce the dual cohomology object Ω_ρ(G) which is a 1-form with values in the endomorphism bundle so that ∫_(α∧Ω_ρ(G))=∮_ρ(G)α . The construction is motivated by the following formula that we shall derive and explain in paragraph <ref> _G(∇̇)=∮_ρ(G)∇̇ . Observe that here we use an abuse of language: we use the same notation for 1-form on with values in (E) and their pull-backs which are 1-forms on U with values in (π^*(E)) where π is the projection from U to . §.§ Bounded and geodesically bounded forms In this paragraph, we define a certain type of 1-forms with values in (E), where E is a uniformly hyperbolic bundle (∇,h). 
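As a sanity check, in the projective case Θ={1} the correlation function of a configuration of two geodesics is an elementary cross-ratio type quantity; the following computation is pure linear algebra and independent of the rest of this section. If p_1 and p_2 are rank one projectors, written as p_i=(v_i⊗ V_i)/⟨ V_i,v_i⟩ with image the line spanned by the vector v_i and kernel the hyperplane ker V_i, then

tr(p_2 p_1)= ⟨ V_1,v_2⟩⟨ V_2,v_1⟩ / ⟨ V_1,v_1⟩⟨ V_2,v_2⟩ ,

a quantity which only depends on the images and kernels of p_1 and p_2. Applied to the fundamental projectors of two geodesics, this is the form in which correlation functions will reappear when we relate them to multifractions below.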
All norms and metrics will be using the Euclidean metric g_h on E associated to a framing h. A bounded 1-form ω on with values in (E) is a form so that ‖ω_x(u)‖_x is bounded uniformly for all (x,u) in U. Let us denote ^∞(E) the vector spaces of those forms and ‖ω‖_∞=sup_(x,u)∈ U‖ω_x(u)‖_x . As an example of such forms, we have * Given a Θ-geodesic g, given by a (possibly phantom) geodesic g_0, and an element å of Θ, the projector form is β_ρ(g)ω_g (g)=ω_g ^å(g_0) . where we used the notation (<ref>). * Any Γ-equivariant continuous form in the case of a periodic bundle. * Given (A_t)_t∈]-1,1[ a bounded variation of a uniformly hyperbolic bundle (see definition <ref>, the form Ȧ.∂ A_t/∂ t|_t=0 , is by definition a bounded 1-form. We do not require forms in ^∞(E) to be closed. A form α is geodesically bounded if for any parallel section A of (E), (α A) is geodesically bounded as in definition <ref>. We denote by (E) the set of 1-forms which are geodesically bounded. Again for any geodesic, the projector form β_ρ(g) is geodesically bounded. However Γ-equivariant forms are never geodesically bounded unless they vanish everywhere. §.§ Line integration Let ω be a 1-form in ^∞(E). Let x be a point on the oriented geodesic g and Q a parallel section of (E) along g. The line integration of ω – with respect to the uniformly hyperbolic bundle ρ – is given by _x,g,(ω) ∫_g^+( [ω,] ) + ∫_g^-( [ω,]) . Observe that since for a projector , we have (A [B,]) =([,A] B) , we have the equivalent formulation _x,g,(ω) = ∫_g^+(ω [,] ) + ∫_g^-(ω [,] ) . Let now α be a section of (E) so that α̣ belongs to ^∞(E). We also define the primitive line integration of α by _x,g,(α) (α(x) [,] )+ _x,g,(α̣) = ([α(x),] )+ _x,g,(α̣) . §.§.§ Bounded linear forms and continuity The line integration operator ω↦_x,g,Q(ω) , a continuous linear form on ^∞(E). This proposition is an immediate consequence of the following lemma There exist positive constants B and b, only depending on and x, so that for any ω in ^∞(E) if y is a point in g^+, z a point in g^- and denoting the tangent vector to the geodesic g. |( [ω_y(),] )| ≤ Be^-bd(x,y)‖ω‖_∞ , ‖ [,] ‖_z ≤ Be^-bd(x,z) , ‖ [,]‖_y ≤ Be^-bd(x,y) . Let us choose a trivialization of E so that ∇ is trivial. By hypothesis ω is in ^∞(E) and thus ‖ω_y()‖_y≤‖ω‖_∞ . Then σ: y↦σ(y) [ω_y(),] , is a section of F_0^-. Since is bounded – see proposition <ref> – there exists k_1 such that for all y ‖σ(y)‖_y≤ k_1 ‖ω‖_∞ . By lemma <ref>, F_0^- is a contracting bundle in the negative direction, which means there exists positive constants A and a so that if y=ϕ_t(x) with t>0, then ‖Φ_-t^∇( σ(y))‖_x≤ A e^-at‖σ(y)‖_y , where ∇ is the connection. However in our context, since we have trivialized the bundle, Φ_-t^∇ is the identity fiberwise, and thus combining the previous remarks we get that if y is in g^+, then ‖ [ω_y(),] ‖_x ≤ A e^-a(d(y,x)‖ω‖_∞ . By Cauchy–Schwarz, for all endomorphisms U and V, we have |(U V)|≤‖ U‖_x‖ V‖_x . Thus combining equations (<ref>) and (<ref>) we obtain |( [ω_y(),] )|≤‖‖_x ‖ [ω_y(),] ‖_x ≤ A e^-a(d(y,x)‖‖_x ‖ω‖_∞ , and the inequality (<ref>) follows. Similarly, [,] is a parallel section of F_0^-, thus the inequality (<ref>) is an immediate consequence of inequality <ref>. The primitive line integration _x,g,Q(α) does not depend on the choice of x on g. Let us write for the sake of this proof _x_x,g,Q(α). Let μ be the geodesic arc from y to x. Let us consider a parametrization of g so that x=g(s_0) and y=g(t_0). 
Then letting ω = α̣ _y-_x = ((α(y)-α(x)) [,]) + ∫_t_0^∞(ω(ġ) [,Q])ṭ +∫_t_0^-∞(ω(ġ) [,Q] )ṭ - ∫_s_0^∞(ω(ġ) [,Q])ṭ-∫_s_0^-∞(ω(ġ) [,Q] )ṭ = ∫_s_0^t_0(ω(ġ) ( [,Q]- [, Q]-[,Q] )) ṣ =0 , where the last equality comes form the fact that, since is a projector [,Q] + [, Q]=[,Q] . Finally we have, Assume that β is bounded. Then _m,Q(β)= 0. Let ϖ=β̣. It follows that (ϖ() [,])=∂/∂ t(β [,]) . Thus by the exponential decay lemma <ref>, we have ∫_g^+(ϖ [,])=-(β(x) [,]) . Similarly ∫_g^-(ϖ [,])=-(β(x) [,] ) . It follows that _x,g_0,Q(ϖ)= -(β(x) [,])-(β(x) [,] )=-(β(x) [,]) . This concludes the proof. §.§ Ghost integration: the construction Let now G be a configuration of geodesics with a Θ-decoration Å. Let ρ be a Θ-uniformly hyperbolic bundle, where G=⌈ g_1,…, g_p⌉. Let _i=^Å(i)(g_i) and _i=_i-1…_i+1 . Let α be a closed 1-form with values in (E). Assume that α belongs to ^∞(E). Let β be a primitive of α – that is a section of (E) so that β̣=α – let _ρ(G)(β)∑_i=1^n _g_i,_i(β) , The quantity _ρ(G)(β) only depends on the choice of α and not of its primitive. Let β_0 and β_1 two primitives of α. Observe that Bβ_1-β_0 is constant, then _G(β_1)-_G(β_0)=∑_i=1^p (B[_i,_i])=∑_i=1^p (B_i_i)-∑_i=1^p (B_i_i)=0 , since _i_i=_i-1_i-1. We define the ghost integration of a 1-form α in ^∞(E) with respect to a Θ-ghost polygon G and a uniformly hyperbolic bundle ρ to be the quantity ∮_ρ(G)α_ρ(G)(β) , where β is a primitive of α. Gathering our previous results, we summarize the important properties of ghost integration: The ghost integration enjoys the following properties: * The map α↦∮_ρ(G)α is a continuous linear form on ^∞(E). * Assume α=β̣, where β is a bounded section of (E). Then ∮_ρ(G)α=0 . We remark that the second item implies that ghost integration is naturally an element of the dual of the first bounded cohomology with coefficients associated to the bundle. These are consequences of the corresponding properties for J_x,g,Q proved respectively in propositions <ref>, <ref> and <ref>. §.§ Ghost integration of geodesic forms Recall that we denoted by (E) the space of geodesically bounded forms, and observe that for any geodesic g, the projector form β_ρ(g) belongs to (E). Let ρ be a Θ-uniformly hyperbolic bundle. Let G be configuration of geodesics of rank p associated to a ghost polygon ϑ(θ_1,…θ_2p) and a Θ-decoration. Assume that α is in (E). Then ∮_ρ(G)α = - (∑_i=1^2p(-1)^i∫_θ_i(α _G(θ_i^*))) , where _G^Å(θ_i^*) denotes the opposite ghost endomorphism to θ_i. In the context of projective uniformly hyperbolic bundle, that is Θ={1}, then the previous formula is much simpler as an immediate consequence of lemma <ref>. Let G be configuration of geodesics of rank p associated to a ghost polygon ϑ(θ_1,…θ_2p) and a Θ-decoration. Let ρ be a projective uniformly hyperbolic bundle. Assume that α is in (E). Then ∮_ρ(G)α = - _G(ρ)(∑_i=1^2p(-1)^i∫_θ_i(α (θ_i))) . Observe that both formulae above do not make sense for a general bounded form. Observe also that Let G be a ghost polygon, and α a 1-form with values in the center of (E) then ∮_ρ(G)α=0 . §.§.§ An alternative construction: a first step Let x be a point in , γ_i^± the geodesic from x to g_i^±. Assume that α is in (E) then ∮_ρ(G)α = ∑_i=1^p(∫_γ_i^+(α _i [_i,_i])+ ∫_γ_i^-(α [_i,_i] _i )) . Let fix a point x_i in each of the g_i. Let β a primitive of α so that β(x)=0. Let η_i be the geodesic from x to x_i. It follows that, since α is geodesically bounded, we have by the cocycle formula (<ref>) ∫_γ_i^+(_i [α,_i] _i)= ∫_η_i(_i [α,_i] _i)+∫_g_i^+(_i [α,_i] _i) . 
Similarly ∫_γ_i^-(_i _i [α,_i])= ∫_η_i(_i _i [α,_i]+∫_g_i^-(_i _i [α,_i]) . Observe now that, using the relation [,Q] + [, Q]=[,Q], we have ∫_η_i(_i [α,_i] _i)+∫_η_i(_i _i [α,_i]) =∫_η_i(_i [α,_i])=(_i [β(x_i),_i]) . Thus, we can now conclude the proof: _ρ(G)(β) = ∑_i=1^p(∫_γ_i^+(_i [α,_i] _i)+ ∫_γ_i^-(_i _i [α,_i])) = ∑_i=1^p(∫_γ_i^+(_i [_i,_i] α)+ ∫_γ_i^-([_i,_i] _i α)) . §.§.§ Proof of proposition <ref> Let us assume we have a ghost polygon ϑ = (θ_1,…,θ_2p) given by a configuration of geodesics G=⌈ g_1,…, g_p⌉. Let _i=(g_i) and α an element of (E). We have _i [_i,_i] = _i_i- _i_i_i , [_i,_i] _i = _i_i_i- _i_i=_i_i_i- _i-1_i-1 . Since α is geodesically bounded we have ∮_ρ(G)α =∑_i=1^p(∫_γ_i^+(α _i _i )- ∫_γ_i+1^-(α _i _i)) -∑_i=1^p(∫_γ_i^+(α _i _i _i)-∫_γ_i^-(α _i _i _i) ) . For i∈{1,…,p}, let ζ_i be the ghost edge joining g_i+1^- to g_i^+, that is ζ_i=θ_2i+1. For a closed form β which is geodesically bounded the cocycle formula (<ref>) yields ∫_γ_i^+β-∫_γ_i^-β=∫_g_iβ , ∫_γ_i+1^-β-∫_γ_i^+β=-∫_ζ_iβ . Thus _ρ(G)(α) = ∑_i=1^p(∫_ζ_i(α_i_i) -∫_g_i(α_i_i_i)) . To conclude we need first to observe that as g_i is a visible geodesic then _i_i_i is the opposite ghost endomorphism _G(g_i^*). On the other hand as ζ_j is a ghost edge then _j_j is the opposite ghost endomorphism _G(ζ_j^*). Thus _ρ(G)(α) = - (∑_i=1^2p(-1)^i∫_θ_i(α _G(θ_i^*))) . §.§.§ Another altenative form with polygonal arcs Let G = (θ_1,…,θ_2p) be Θ a ghost polygon given by configuration ⌈ g_1,…,g_p⌉ with g_i = θ_2i. Let x be the barycenter of G. Let x_i be the projection of x on g_i. For a ghost edge ζ_i = θ_2i+1, let us consider the polygonal arc _i given by _i=a_i∪ b_i∪ c_i∪ d_i , where * the geodesic arc a_i is the arc (along g_i+1) from g_i+1^- to x_i+1, * the geodesic arc b_i joins x_i+1 to x, * the geodesic arc c_i joins x to x_i, * the geodesic arc d_i joins x_i to g_i^+. We then have, using the same notation as in proposition <ref> We have for α in (E) ∮_ρ(G)α = -∑_i∫_g_i(α _G(θ_i^*)) + ∫__i(α _G(θ_i^*)) . The proof relies on the fact that for α in (E), and ζ_i a ghost edge we have ∫__iα=∫_ζ_iα . Then the formula follows from proposition <ref>. Ghost integration and Rhombus integration. The process described for the ghost integration is a generalisation of the Rhombus integration described in <cit.>. §.§ A dual cohomology class Let ρ be a Θ-uniformly hyperbolic bundle. Let now G be a Θ-ghost polygon with configuration ⌈ g_1,…,g_p⌉ and Θ-decoration Å. Let ϑ=(θ_1,…,θ_2p) be the associated ghost polygon and denote by ζ_i=θ_2i+1 the ghost edges. Let _i be the associated polygonal arc associated to the ghost edge ζ_i as in paragraph <ref>. The ghost dual form to ρ(G) is Ω_ρ(G)∑_i=1^p(ω_g_i_G(g^*_i) - ω__i_G(ζ^*_i)) . Observe that ρ(G) incorporates a Θ-decoration and so Ω_ρ(G) depends on the Θ-decoration. We have the following properties * The ghost dual form belongs to (E). * Assume that α belongs to (E). Then ∮_ρ(G)α=∫_(α∧Ω_ρ(G)) . * (exponential decay inequality) Finally, there exist positive constants K and a only depending on ρ and R_0 so that if the core diameter of G is less than R_0, then ‖Ω_ρ(G)(y)‖_y≤ K e^-a d(y,(G)) , and, moreover, Ω_ρ(G)(y) vanishes when d(y,(G))≥ R_0+2 and d(y,g)>2 for all visible edges g of G. Later we will need the following corollary which we prove right after we give the proof of the proposition. We have the following bounds: The map ϕ_G : y ↦ ‖Ω_ρ(G)(y)‖_y , belongs to L^1(), and ‖ϕ_G‖_L^1() is bounded by a continuous function of the core diameter of G. 
The map ψ_G,y : γ ↦ ‖Ω_ρ(G)(γ y)‖_y , belongs to ℓ^1(Γ), and ‖ψ_G,y‖_ℓ^1(Γ) is bounded by a continuous function of the core diameter of G. Finally the map ϕ : H ↦ ‖Ω_ρ(H)‖_∞= sup_y∈‖Ω_ρ(H)(y)‖_y , is bounded on every compact set of ^p_⋆ We first prove the exponential decay inequality (<ref>) which implies in particular that Ω_ρ(G) belongs to ^∞(E). Let r(G) be the core diameter of G. Let as usual g_i be a visible edges, x be the barycenter of all g_i and x_i be the projection of x on g_i. By the construction of the polygonal arc _i, it follows that outside of the ball of radius r(G)+2 centered at x, then Ω_ρ(G)= ∑_iω^-_g_i [_i,_i]_i + ∑_iω^+_g_i_i [_i,_i] , where ω^±_g_i=f^±_iω_g_i where f^±_i is a function with values in [0,1] with support in the 2-neighbourhood of the arc [x_i,g_i^±]. Then the decay given in equation (<ref>) is an immediate consequence of the exponential decay given in inequality (<ref>). Observe now that Ω_ρ(G) is closed. Let A be a parallel section of (E), then it is easily seen that (Ω_ρ(G)A) is geodesically bounded. It follows that Ω_ρ(G) is in (E). Then the result follows from the alternative formula for ghost integration in proposition <ref>. Given a ghost polygon H whose set of visible edges is g_H, and core diameter less than R_0. Let V_H≤{y∈| d(y,(H))≤ R_0+2 or d(y,g)≤ 2 for some g ∈ g_H} Observe that the volume of V_H(R) V_H∩ B((H),R) has some linear growth as a function of R, and more over this growth is controlled as a function of R_0. This, and the exponential decay inequality (<ref>), implies that ϕ_G, whose support is in V_H, is in L^1() and that is norm is bounded by a constant that only depends on R_0. Similarly consider F_H,y{γ∈Γ| d(γ(y),(H))≤ R_0+2 or d(γ(y),g)≤ 2 for g in g_H} . and F_H,y(R){γ∈ F_H,y| d(γ (y),(H))≤ R} . Then the cardinal of the subset F_H,y(R) has linear growth depending only on R_0. Hence, for every y, γ↦∑_γ∈ F_H,yK_0e^-a(d(γ y,(H)) , seen a function of H is in ℓ^1(Γ) and its ℓ^1 norm is bounded as a function of R_0. Hence – as a consequence of the exponential decay inequality (<ref>) – for every y, the map γ↦‖Ω_ρ(G)(γ y)‖_y , is in ℓ^1(Γ) and its ℓ^1 norm is bounded by a function of R_0. Finally from inequality (<ref>), we have obtain that there is a constant R_1 only depending on R_0 such that sup_y∈‖Ω_ρ(H)(y)‖_y≤sup_y∈ B((H), R_1)‖Ω_ρ(H)(y)‖_y +1 . The bounded cocycle hypothesis, equation (<ref>), implies that sup_y∈ B((H), R_1)‖Ω_ρ(H)(y)‖_y is bounded by a function only depending on R_1, and thus sup_y∈‖Ω_ρ(H)(y)‖_y is bounded by a function of R_0. This completes the proof of the corollary. §.§ Derivative of correlation functions In this paragraph, as a conclusion of this section, we relate the process of ghost integration with the derivative of correlation functions. Let ∇_t,h be a bounded variation of a uniformly hyperbolic bundle ρ=(∇,h). Assume that G is a Θ-ghost polygon, then _G(∇̇)=∮_ρ(G)∇̇ . This proposition is an immediate consequence of the following lemma, which is itself an immediate consequence of the definition of the line integration in paragraph <ref> and lemma <ref>: Let (∇_t,h_t) be a family of uniformly hyperbolic bundles with bounded variation – see definition <ref> – associated to a family of fundamental projectors . Then for a decorated geodesic g, (_0(g)· Q)=_ρ(g),Q(∇̇) . §.§ Integration along geodesics For completeness, let us introduce ghost integration for geodesics: we define for any geodesically bounded 1-form α in Ξ(E) and a Θ-geodesic g, ∮_ρ(g)α∫_g(α_g) . 
It is important to observe that, contrarily to a general ghost polygon, we only integrate geodesically bounded forms, not bounded ones. In particular, we cannot integrate variations of uniformly hyperbolic bundles. § GHOST INTERSECTION AND THE GHOST ALGEBRA In this section we will effectively define and compute the ghost intersections of ghost polygons or geodesics. This is the objective of propositions <ref> and <ref>. We define the associated ghost algebra in paragraph <ref> and relate in <ref> the corresponding ghost bracket for the projective case to the swapping bracket defined in <cit.> by the second author. Finally we relate the intersection of two ghost polygon to the correlation of the brackets of these in the crucial proposition <ref>. In the somewhat independent paragraph <ref>, we define and study natural maps from the ghost algebra to itself. We will use freely the definitions given in section <ref> for ghost polygons. §.§ Ghost intersection: definitions and computation We proceed step by step with the definitions. §.§.§ Intersecting two geodesics Let g and h two Θ-geodesics (in other words, geodesics labelled with an element of Θ). Let us define _ρ(g,h)∮_ρ(g)β^0_h , where β^0_hβ_h-Θ_h/(E) is the trace free part of β_h and Θ_h is defined in equation (<ref>). A straightforward computation using equation (<ref>) and (<ref>) then gives _ρ(g,h)=ϵ(h,g)(_⌈ g,h⌉ (ρ) - 1/(E)Θ_gΘ_h) . By convention, the quantity ϵ(g,h) for two Θ-decorated geodesics g and h is the same as the intersection of the underlying geodesics. §.§.§ Intersecting a ghost polygon with a geodesic Let ρ be a Θ-uniformly hyperbolic bundle. Let G be a Θ-ghost polygon and h a Θ-geodesic. The ghost intersection of G and g is _ρ(G,h) -∮_ρ(G)β_ρ(h)=-∫_(β_ρ(h)∧Ω_ρ(G))=-∫_h(Ω_ρ(G) (h))-_ρ(h,G) . By convention we set _ρ(h,G) -_ρ(G,h). We will prove that we can effectively compute the ghost intersection. Then we have Let G be a configuration of geodesics, associated to a ghost polygon ϑ=(θ_1,…,θ_2p). The ghost intersection of h and G is given by _ρ(G,h)=∑_i=1^2p(-1)^i+1ϵ(h,θ_j) _⌈ h, θ^*_i⌉(ρ) , where θ_i^* is the opposite configuration as in paragraph <ref>. In the projective case, that is Θ={1} we have _ρ(G,h)=_G(ρ)(∑_i=1^2p(-1)^i+1ϵ(h,θ_j) _⌈ h, θ_i⌉(ρ)) . §.§.§ Intersecting two ghost polygons We define the ghost intersection of two ghost-polygons or equivalently of two configuration of geodesics G and H to be _ρ(G,H)∮_ρ(G)Ω_ρ(H)=∫_(Ω_ρ(H)∧Ω_ρ(G)) . We can again compute this relatively effectively: The ghost intersection of the two configuration G and H, associated respectively to the ghost polygons ϑ=(θ_i)_i∈ I, with I=[1,2p], and ς=(σ_j)_j∈ J, with J=[1,2m], respectively, is given by _ρ(G,H) = ∑_i∈ I,j∈ J(-1)^i+jϵ(σ_j,θ_i) _⌈σ^*_j,θ^*_i⌉(ρ) . In the projective case, this simplifies as _ρ(G,H) = _G(ρ)_H(ρ)(∑_i∈ I,j∈ J(-1)^i+jϵ(σ_j,θ_i) _⌈σ_j,θ_i⌉(ρ)) . §.§ Θ-Ghost bracket and the ghost space We develop a more formal point of view. Our goal is proposition <ref> that identifies the intersection as a correlation function. Let 𝒜 be the vector space generated by Θ-ghost polygons (or equivalently configurations of Θ-geodesics) and Θ-geodesics. We add as a generator the element , and call it the Casimir element. By definition, we say has rank 0. We will see that the Casimir element will generate the center. Recall also that we can reverse the orientation on geodesics. The corresponding reverse orientation on configuration is given by ⌈ g_1,… ,g_p⌉⌈g̅_p,… ,g̅_1⌉. We define the bracket on the basis of 𝒜 and extend it by linearity. 
* The bracket of with all elements is 0. * Let G and H be two configurations of Θ-geodesics, associated respectively to the ghost polygons ϑ=(θ_i)_i∈ I, with I=[1, 2p] and ς=(σ_j)_j∈ J, with J=[1,2m] respectively. Their Θ-ghost bracket is given by [G,H] ∑_i∈ I,j∈ Jϵ(σ_j,θ_i)(-1)^i+j⌈θ^*_i,σ^*_j⌉ , where we recall that θ^*_j is the opposite ghost configuration defined in paragraph <ref>. * Let g and h be two Θ-geodesics and G a ghost polygon as above. Then we define [g,h] ϵ(h,g)(⌈ h,g⌉ -Θ_hΘ_g·) , G,h] ∑_j∈ J (-1)^j+1ϵ(h,θ_j) ⌈ h,θ^*_j⌉ -[h,G] , Finally 𝒜 equipped with the ghost bracket is called the ghost algebra. We observe that the ghost bracket is antisymmetric. However, the Θ-ghost bracket does not always satisfy the Jacobi identity: there are some singular cases. We actually prove in the Appendix <ref>, as Theorem <ref> the following result Assume A, B, and C are ghost polygons and that V_A∩ V_B∩ V_C=∅ , where V_A, V_B and V_C are the set of vertices of A, B and C respectively, then [A,[B,C]]+[B,[C,A]]+[C,[A,B]]=0 . Finally we now extend the map on 𝒜 so as to define _G(ρ) for G an element of 𝒜, while defining _(ρ) 1/(E) . The purpose of this formal point of view is to rewrite Propositions <ref> and <ref> as the simple formula: We have for G, H ghost polygons then _ρ(G,H)=_[G,H](ρ) . This formula will allow us to compute recursively Poisson brackets of correlation functions. §.§ The projective case: swapping and ghost algebras Throughout this section, we will restrict ourselves to the projective case, that is Θ={1}. §.§.§ Ghost polygons and multifractions In <cit.>, the second author introduced the swapping algebra ℒ consisting of polynomials in variables (X,x), where (X,x) are points in S^1, together with the relation (x,x)=0. We introduced the swapping bracket defined on the generators by [(X,x),(Y,y)]=ϵ((Y,y),(X,x)) ((X,y)· (Y,x) ) . We proved that the swapping bracket gives to the swapping algebra the structure of a Poisson algebra. We also introduced the multifraction algebra ℬ which is the vector space in the fraction algebra of ℒ generated by the multifractions which are elements defined, when X and x are a n tuples of points in the circle and σ an element of the symmetric group 𝔖(n) by [X,x;σ]∏_i=1^n (X_i,x_σ(i))/∏_i=1^n(X_i,x_i) . We proved that the multifraction algebra is stable by the Poisson bracket, while it is obviously stable by multiplication. Let us consider the algebra ℬ_0 which is generated as a vector space by the multifraction algebra to which we add extra generators denoted ℓ_g for any geodesic g – which are formally logarithms of geodesics ℓ_g=log(g) as well as a central element ; we finally extend the swapping bracket to ℬ_0 by adding [ℓ_g,ℓ_h]1/g h[g,h]+ϵ(g,h) , [G,ℓ_h] 1/h[G,h] -[ℓ_h,G] . We call ℬ_0 with the extended swapping bracket, the extended swapping algebra. The reversing orientation is defined on generators by ℓ_g=ℓ_g, We then have The extended swapping algebra is a Poisson algebra. The reversing isomorphisms antipreserves the Poisson structure: [G,H]=-[ G, H]. This is just a standard check that adding “logarithmic derivatives” to a Poisson algebra still gives a Poisson algebra. We first see that that ∂_g: z↦ [ℓ_g,z]=1/g[g,z] , is a derivation on the fraction algebra of the swapping bracket. Indeed, ∂_g([z,w]) = 1/g[g,[z,w]] =1/g([z,[g,w]]-[w,[g,z]]) = ([z,[g,w]/g]-[w,[z,w]/g])=[z,∂_g(w)]+[∂_g(z),w] . 
Moreover, the bracket of derivation gives [∂_g,∂_h](z)=[[ℓ_g,ℓ_h],z] Let us check this last point: ∂_g (∂_h (z))= 1/g[g,1/h[h,z]]=-1/gh^2[g,h][h,z] + 1/g h[g,[h,z]] . Thus, we complete the proof of the proposition [∂_g,∂_h](z)=[g,h](-[h,z]/gh^2- [g,z]/hg^2) + 1/gh[[g,h],z]]=[[g,h]/gh,z] . §.§ The projective case: ghost algebra and the extended swapping algebra In the projective case, it is convenient to consider the free polynomial algebra 𝒜_P generated by the ghost polygons, and extend the ghost bracket by the Leibnitz rule to 𝒜_P. In this paragraph, we will relate the algebras 𝒜_P and ℬ_0, more precisely we will show: There exists a homomorphisms of commutative algebra map π:𝒜_P→ℬ_0 , which is surjective, preserves the bracket and and the reversing the orientation isomorphism: [π(A),π(B)]=π[A,B] , π(A)=π(A) . Finally if A belongs to the kernel of π, then for any projective Anosov representation ρ, _A(ρ)=0. Thus, 𝒜_P/(π) is identified as an algebra with bracket with ℬ_0; in particular 𝒜_P/(π) is a Poisson algebra. This will allow in the applications to reduce our computations to calculations in the extended swapping algebra, making use of the fact that the extended swapping algebra is a Poisson algebra by proposition <ref>. Unfortunately, we do not have the analogue of the swapping bracket in the general Θ-case, although the construction and result above suggest to find a combinatorially defined ideal ℐ in the kernel of (ρ) for any ρ, so that 𝒜/ℐ satisfies the Jacobi identity. §.§.§ From the ghost algebra to the extended swapping algebra In this paragraph, we define the map π of Theorem <ref>. The map π is defined on the generators by g ⟼ π(g)ℓ_g , G=⌈ g_1,… ,g_p⌉ ⟼ π(G) [X,x;σ]=∏_i=1^n (g^+_i,g^-_i+1)/∏_i=1^n(g^+_i,g^-_i) . where X=(g^+_i), x=(g^-_i), σ(i)=i+1. Cyclicity is reflected by π(⌈ g_1,… ,g_p⌉) = π(⌈ g_2,… ,g_p, g_1 ⌉) . Conversely, we then have the following easy construction. Let X=(X_1,…,X_k), x=(x_1,…, x_k), and g_i the geodesic (X_i,x_i). Let σ be a permutation of {1,…,k} and let us write σ=σ_1,…,σ_q be the decomposition of σ into commuting cycles σ_i or order k_i with support I_i. For every i, let m_i be in I_i and let us define h^i_j= g_σ_i^j-1(m_i) , G_i=⌈ h^i_1, … h^i_k_i⌉ . We then have with the above notation [X,x;σ]=π(G_1… G_q) . The map π is surjective. In the sequel, the decomposition (<ref>) will be referred as the polygonal decomposition of the multifraction [X,x;σ]. We also obviously have Any tuples of ghost polygons is the polygonal decomposition of a multifraction. §.§.§ The map π and the evaluation For any multifraction B=[X,x;σ] and projective Anosov representation ρ associated to limit curves ξ and dual limit curves ξ^*, we define ^P_B(ρ)∏_i ⟨V_i,v_σ(i)|⟩/∏_i ⟨V_i,v_i|⟩ , where V_i is a non-zero vector in ξ^*(X_i) while v_i is a non-zero vector in ξ(x^i). Given ρ, we now extend G↦_G(ρ) and ^P_B(ρ) to homomorphisms of commutative free algebras to 𝒜_p and ℬ_0. We then have the following result which follows at once since we are only considering rank 1 projectors. We have, for all projective Anosov representations ρ ^P_π(G)(ρ)=_G(ρ) , _G(ρ)=_G̅(ρ^*) , This proposition implies that for every G in the kernel of π, for every ρ, _G(ρ)=0. §.§.§ Swapping bracket We now compute the brackets of multifractions. We shall use the notation of paragraph <ref> where the opposite configuration g^* of a ghost or visible edge g is defined. Observe that g^* is an ordered configuration. 
Then we have Let G and H be two multifractions that are images of ghost polygons: G = π(θ_1,…,θ_2p) and H = π(ζ_1,…,ζ_2q). Then their swapping bracket is given by [G ,H ] = (G H ( ∑_i,jϵ(ζ_j,θ_i)(-1)^i+jπ(⌈θ_i,ζ_j ⌉)) ) . Moreover, for g=(X,x) and h=(Y,y) geodesics, we have in the fraction algebra of the swapping algebra. [ℓ_h,ℓ_g] = ( ϵ(g,h) π(⌈ g,h⌉)) . [G , ℓ_h] = (G ( ∑_iϵ(h,θ_i)(-1)^i+1π(⌈θ_i,h⌉))) . Moreover, using the notation θ^*_i for the opposite edge, we have, for every i and j π(⌈θ_i^*,ζ_j^*⌉ )=G H π(⌈θ_i,ζ_j⌉) . In this proof, we will omit to write π and confuse a ghost polygon and its image under π. Equation (<ref>) follows at once from the definition. Let now G=⌈ g_1,…, g_p⌉, let η_i be the ghost edges joining g_i+1^- to g_i^+. Then we may write in the fraction algebra of the swapping algebra ⌈ g_1,…, g_p⌉ =∏_i=1^pη_i/∏_i=1^p g_i . Using logarithmic derivatives we then have 1/G [G ,ℓ_h] =∑_i=1^p (1/h η_i[η_i,h] -1/h g_i[g_i,h] )=∑_i=1^p (ϵ(h,η_i)⌈η_i,h⌉ -ϵ(h,g_i)⌈ g_i,h⌉) , which gives equation (<ref>). Writing now G =⌈ g_1,…, g_p⌉ =∏_i=1^pη_i/∏_i=1^q g_i , H =⌈ h_1,…, h_q⌉ =∏_i=1^qν_i/∏_i=1^q h_i , where η_i and ν_i are ghost edges of G and H respectively, we get [G ,H ] /G H = ∑_(i,j)(1/g_i h_j[g_i,h_j] -1/g_i ν_j[g_i,ν_j] +1/η_i ν_j[η_i,ν_j] - 1/η_i h_j[η_i,h_j] ) = ∑_(i,j)(ϵ(h_j,g_i)⌈ g_i,h_j⌉ - ϵ(ν_j,g_i)⌈ g_i,ν_j⌉ +ϵ(ν_j,η_i)⌈η_i,ν_j⌉ - ϵ(h_j,η_i)⌈η_i,h_j⌉) , which is what we wanted to prove. The equation (<ref>) follows from the definition of the map π. As a corollary we obtain The map π preserves the bracket. The proof follows at once from proposition <ref> and <ref> which computes the ghost intersection and recognizing each term as the correlation functions of a term obtained in the corresponding ghost bracket in proposition <ref>. §.§.§ Proof of Theorem <ref> We have proved all that we needed to prove: the theorem follows from corollary <ref> and <ref>, as well as lemma <ref>. §.§ Natural maps into the ghost algebra Let w be a p-multilinear map from the ghost algebra to itself. We say w is natural, if for tuples of integers (n_1,…,n_p) there exists an integer q, a real number A such that given a tuple of ghost polygons G=(G_1,…, G_p) with G_i in ^n_i, then w(G_1,…,G_p)=∑_i=1^q λ_i H_i , where H_i are ghost polygons, λ_i are real numbers less than A and, moreover, every visible edge of H_i is a visible edge of one of the G_i.[The existence of q is actually a consequence of the definition: there only finitely many polygons with a given set of visible edges] We will extend the definition of the core diameter to any element of the ghost algebra by writing, whenever H_i are distinct ghost polygons ghost polygons r(∑_i=1^q λ_i H_i)sup_i=1,…,q(|λ_i| r(H_i)) , We also recall that the core diameter of a ghost polygons, only depends on the set of its visible edeges. We then define the core diameter of a tuple of polygons G=(G_1,…,G_n), as the core diameter of the union of the set of edges of the G_i's. We then have the following inequality of core diameters for a natural map w, G=(G_1,…,G_p) and q and A as in the definition r(w(G))≤ A r(G) . We now give an exemple of a natural map The map (G_1,…,G_n)↦[G_1,[G_2,[… [G_n-1,G_n]…]]] is a natural map. This follows at once from the definition of the ghost bracket and a simple induction argument. § GEODESIC AND CYCLIC CURRENTS In this section, building on the classical notion of geodesic currents, we define the notion of higher order geodesic currents, called cyclic currents. 
Among them we identify integrable currents, show how they can average correlation functions and produce examples of them. Recall that is the set of oriented geodesics in . The set of Θ-geodesics is then denoted ×Θ. §.§ Cyclic current First recall that a signed measure is a linear combination of finitely many positive measure. Any signed measure is the difference of two positive measures. A cyclic current is a Γ-invariant signed measure invariant under cyclic permutation. As a first example let us consider for μ and ν geodesic current, the signed measure μ∧ν given by μ∧ν1/2ϵ (μ⊗ν -ν⊗μ) , where we recall that ϵ(g,h) is the intersection number of the two geodesics g and h. The signed measure μ∧ν is a cyclic current supported on intersecting geodesics. Moreover μ∧ν=-ν∧μ. We have 2∫_^2/Γ f(g,h) μ̣∧ν(g,h)= ∫_^2/Γ f(g,h) ϵ(g,h)(μ̣(g) ν̣(h)-ν̣(g) μ̣(h) ) = ∫_^2/Γ f(h,g) ϵ(h,g)(μ̣(h) ν̣(g)-ν̣(h) μ̣(g) ) = ∫_^2/Γ f(h,g) ϵ(g,h)(ν̣(h) μ̣(g)-μ̣(h) ν̣(g) ) =2∫_^2/Γ f(h,g) μ̣∧ν(g,h) . Hence μ∧ν is cyclic. The last assertions are obvious. Our main definition is the following, let ρ be a Θ-Anosov representation of Γ, the fundamental group of a closed surface. We give several definitions, let w be a natural map from ^p_1×⋯×^p_q to ^m * a w-cyclic current is a Γ-invariant measure μ=μ_1⊗⋯⊗μ_q where μ_i are Γ-invariant cyclic currents on ^n_i, * the w-cyclic current μ is a (ρ,w)-integrable current if there exists a neighborhood U of ρ in the moduli space of (complexified) Θ-Anosov representations of Γ, and a positive function F in L^1(_⋆^k/Γ,η) so that for all σ in U, and G in _⋆^k; |_w(G)(σ)|≤ F(G) , where F_0 is the lift of F to _⋆^k. * When w is the identity map , we just say a current is ρ-integrable, instead of (ρ,)-integrable. * A current of order k, is w-integrable or integrable if it is (ρ,w)-integrable or ρ-integrable for all representations ρ. §.§.§ Γ-compact currents A Γ-invariant w-cyclic current μ is Γ-compact if it is supported on a Γ-compact set of _⋆^p. Obviously a Γ-compact cyclic current is integrable for any natural map w. Here is an important example of a Γ-compact cyclic current: Let ℒ be a geodesic lamination on S with component of its complement C being a geodesic triangle. Let π:↦ S be the universal covering of S and x a point in C Then π^-1C=_i∈π^-1(x) C_i . The closure of each C_i is an ideal triangle with cyclically ordered edges (g_i^1,g_i^2,g_i^3). We consider the opposite cyclic ordering (g_i^3,g_i^2,g_i^1). The notation δ_x denotes the Dirac measure on X supported on a point x of X. Then we obviously have The measure defined on ^p by μ^*_C=1/3∑_i∈π^-1(x)(δ_(g_i^1,g_i^3,g_i^2)+δ_(g_i^2,g_i^1,g_i^3)+δ_(g_i^3,g_i^2,g_i^1)) , is a Γ-compact cyclic current. §.§.§ Intersecting geodesics Let us give an example of integrable current. Let μ be Γ-invariant cyclic current supported on pairs of intersecting geodesics. Assume furthermore that μ(^2/Γ) is finite. Then μ is integrable. This follows at once from the following lemma. Let ρ_0 be a Θ-Anosov representation. Then there exists a constant K_ρ in an neighborhood U of ρ_0 in the moduli space of Anosov representations, such that for any ρ in U, for any pair of intersecting geodesics |_⌈ g, h⌉(ρ)|≤ K_ρ . Given any pair of geodesics (g_1,g_0) intersecting on a point x, then we can find an element γ in Γ, so that γ x belongs to a fundamental domain V of Γ. In particular, there exists a pair of geodesics h_0 and h_1 passing though V so that _⌈ g_0,g_1⌉(η)=_⌈ h_0,h_1⌉(η)=(_η(h_0) _η(h_1)) , where _η is the fundamental projector for η. 
Since the set of geodesics passing through V is relatively compact, the result follows by the continuity of the fundamental projector _η(h) on h and η. Given μ and ν, then μ∧ν is integrable. Let A{(g,h)|ϵ(g,h) = ± 1} , B{(g,h)|ϵ(g,h) = ±1/2} . Observe first that denoting i the Bonahon intersection, we have |μ∧ν (A/Γ)|≤ i(μ̅,ν̅)<∞ , where the last inequality is due to Bonahon <cit.>, and λ̅ is the symmetrised current of λ. As Γ acts with compact quotient on the set of triples of points on ∂, it follows that Γ acts on B with compact quotient and therefore μ∧ν(B) is finite. Therefore taking the sum we have that μ∧ν(^2/Γ) is finite. §.§.§ A side remark Here is an example of (ρ,w)-integrable current. First we the following inequality: given a representation ρ_0, there is a constant K_0, a neighborhood U of ρ_0, such that for every k-configuration G of geodesics and ρ in U then |_G(ρ)|≤ e^kK_0 r(G) . Since this is just a pedagogical remark that we shall not use, we do not fill the details of the proofs. From that inequality we see that if G↦ e^kK_0 r(G) is in L^1(_⋆^k/Γ,μ) then μ is (ρ,w)-integrable. § EXCHANGING INTEGRALS To use ghost integration to compute the Hamiltonian of the average of correlation functions with respect to an integrable current, we will need to exchange integrals. This section is concerned with proving the two Fubini-type exchange theorems we will need. Recall that the form β_ρ(g) is defined in equation <ref>. Let μ a Γ-invariant geodesic current. Let G be a Θ-ghost polygon. Then * ∫_β_gμ̣(g) — defined pointwise — is an element of ^∞(E), * the map g↦∮_ρ(G)β_g is in L^1(,μ), * finally, we have the exchange formula ∮_ρ(G)(∫_β_ρ(g) μ̣(g))=∫_(∮_ρ(G)β_ρ(g))μ̣(g) . Similarly, we have a result concerning ghost intersection forms. We have to state it independently in order to clarify the statement. Let us first extend the assigement G↦Ω_G by linearity to the whole ghost algebra, and observe that if we have distinct ghost polygons G_i and H=∑_i=1^q λ_iG_i , with sup_i∈{1,…,q|λ_i|=A , Then ‖Ω_H(y)‖≤ qA sup_i∈{1,…,q‖Ω_G_i(y)‖ . Let μ be a w-cyclic and Γ-compact current of rank p. Let G be a ghost polygon. Let w be a natural map. Then * ∫_^pΩ_ρ(w(H))μ̣(H) — defined pointwise — is an element of ^∞(E), * the map H↦∮_ρ(G)Ω_ρ(w(H)) is in L^1(^p,μ), * finally, we have the exchange formula ∮_ρ(G)(∫_^pΩ_ρ(w(H)) μ̣(H))=∫_^p(∮_ρ(G)Ω_ρ(w(H)))μ̣(H) . We first concentrate on Theorem <ref>, then prove Theorem <ref> in paragraph <ref>. §.§ Exchanging line integrals Theorem <ref> is an immediate consequence of a similar result involving line integrals. Let μ be a Γ-invariant geodesic current on , then * ∫_β_gμ̣(g) — defined pointiwise — is an element of ^∞(E), * Let g_0 be a geodesic, x a point on g_0 and Q a parallel section of (E) along g_0, then the map g↦_x,g_0,Q(β_g) , is in L^1(,μ). * We have the exchange formula _x,g_0,Q(∫_β_ρ(g) μ̣(g))=∫__x,g_0,Q(β_ρ(g)) μ̣(g) . We prove the first item in proposition <ref>, the second item in <ref> and the third in <ref>. §.§ Average of geodesic forms and the first item Let μ be a Γ-invariant measure on . Let y be a point in , and G(y,R){g∈| d(g,y)≤ R} . As an immediate consequence of the Γ-invariance we have For every positive R, there is a constant K(R) so that for every y in μ(G(y,R))≤ K(R) . Observe now that if g is not in G(y) G(y,2), then y is not in the support of ω_g and thus β_ρ(g)(y)=0. We then define The μ-integral of geodesic forms is the form α so that at a point y in α_y∫_G(y)β_g(y) μ̣(g)=∫_β_g(y) μ̣(g) . 
We use some abuse of language and write α∫_β_ρ(g)μ̣(g) . The form α_y is well defined since G(y) is compact. Moreover, the next lemma gives the proof of the first item of proposition <ref> The μ-integral of geodesic forms belongs to ^∞(E) and we have a constant K_5 only depending on ρ and μ so that ‖∫_β_ρ(g)μ̣(g)‖_∞≤ K_5 . We have |∫_β_ρ(g)(y) μ̣(g)|=|∫_G(y)β_ρ(g)(y) μ̣(g)|≤μ(G(y)) sup_g∈ G(y)‖β_ρ(g)‖_∞ . Then by proposition <ref>, there is a constant k_1 so that μ(G(y))≤ k_1. Recall that β_ρ(g)=ω_g(g). Then by the equivariance, ω_g is bounded independently of g, while by lemma <ref>, is a bounded section of (E). The result follows. §.§ Decay of line integrals We now recall the following definition. _x,g,(ω) = ∫_g^+(ω [,] ) + ∫_g^-(ω [,] ) . We prove in this paragraph the following two lemmas. Let g_0 be a geodesic and x a point in g_0, Let g be a geodesic such that d(g,g_0)>1, then for any function ψ on with values in [0,1]: _x,g_0,Q(ψβ_ρ(g))=0 . This follows at once from the fact that under the stated hypothesis, the support of ω_g does not intersect g_0. For any endomorphism and representation ρ, there exist positive constants K and k, so that for all g so that d(g,x)>R, for any function ψ on with values in [0,1]: |_x,g_0,Q(ψβ_ρ(g))|≤ K e^-kR . We assume x and g are so that d(g,x)> R. It is enough (using a symmetric argument for g_0^- to show that |∫_g_0^+(ψβ_ρ(g)··[,])|≤ K e^-kR , where g_0^+ is the arc on g_0 from x to +∞. Let us denote by g_0^+(R) the set of points of g_0^+ at distance at least R from x: g_0^+(R){y∈ g_0^+| d(y,x)≥ R} . Then if y belongs to g_0^+ and does not belong to g_0^+(R-1), then d(y,x)<R-1. Thus d(y,g)>1. Thus, by lemma <ref>, β_ρ(g)(y) vanishes for y in g_0^+ and not in g_0^+(R-1). Thus |∫_g_0^+(ψβ_ρ(g)·· [,])|≤∫_g_0^+(R-1)|(β_ρ(g)()·· [,])| ṭ . Then the result follows from the exponential decay lemma <ref>. Lemma <ref> now follows immediately after using a symmetric result for g_0^-. §.§ Cutting in pieces and dominating: the second item We need to decompose into pieces. Let g_0 be an element of and x a point on g_0. Let x^+(n) – respectively x^-(n) – the point in g_0^+ – respectively g^-_0 – at distance n from x. Let us consider U_0 {g∈| d(g,g_0)>1} , V^+_n {g∈| d(g,x^+(n))< 2 and for all 0≤ p<n , d(g,x^+(n))≥ 2 } , V^-_n {g∈| d(g,x^-(n))< 2 and for all 0≤ p<n, d(g,x^-(n))≥ 2 } . This gives a covering of : We have the decomposition =U_0∪⋃_n∈ℕ V^±_n , When g does not belong to U_0, there is some y in g_0 so that d(g,y)≤ 1, hence some n so that either d(y,g^+(n))≤ 2, while for all 0≤ p<n we have d(y,g^+(p))> 2, or d(y,g^-(n))≤ 2, while for all 0≤ p<n we have d(y,g^-(p))> 2. Let now n(g)=sup{m∈ℕ| g∈ V^+_m∪ V^-_m} . By convention, we write n(g)=+∞, whenever g does not belong to ⋃_n∈ℕ V^±_n. The non-negative control function F_0 on is defined by F_0(g)=e^-n(g). We now prove For any positive k, the function (F_0)^k is in L^1(,μ). Moreover, there exist positive constants K_9 and k_9 so that for all functions ψ on with values in [0,1] we have |_x,g_0,Q(ψβ_g)|≤ K_9(F_0(g))^k_9 . We now observe that the second item of proposition <ref> is an immediate consequence of this lemma. We first prove that F_0 and all its powers are in L^1(,μ). Observe that V^±_n⊂ G(x^±(n),2). It follows from that μ(V^±_n)≤ K(2) by proposition <ref>. Moreover, for any g in V_n^±, F_0(g)^k≤ e^-kn. The decomposition of lemma <ref> implies that F_0^k is in L^1(,μ). Let g be a element of . * When g belongs to U_0, then by lemma <ref>, _x,g_0,Q(β_ρ(g))=0. 
Hence |_x,g_0,Q(β_ρ(g))|≤ A (F_0(g))^a, for any positive A and a. * When g does not belong to U_0, then g belongs to V^±_n(g) with n(g)<∞. By lemma <ref>, we have d(x,g)≥ n(g). It follows from lemma <ref> that for any positive function ψ, we have |_x,g_0,Q(ψβ_ρ(g))|≤ Ke^-kn(g)=KF_0(g)^k . The last inequality concludes the proof. §.§ Proof of the exchange formula of proposition <ref> Let us choose, for any positive real R, a cut-off function ψ_R, namely a function on with values in [0,1], with support in the ball with center x and radius R+1, and equal to 1 on the ball of radius x and radius R. We write |_x,g_0,Q(∫_β_ρ(g) μ̣(g))-∫__x,g_0,Q(β_ρ(g))μ̣(g)|≤ A(R)+B(R)+C(R) , where A(R) = |_x,g_0,Q(∫_β_ρ(g) μ̣(g))-_x,g_0,Q(ψ_R∫_β_ρ(g)μ̣(g))| , B(R) = |_x,g_0,Q(ψ_R∫_β_ρ(g) μ̣(g))-∫__x,g_0,Q(ψ_R β_ρ(g)) μ̣(g)| , C(R) = |∫__x,g_0,Q( ψ_R β_ρ(g)) μ̣(g)-∫__x,g_0,Q( β_ρ(g)) μ̣(g) | . We will prove the exchange formula (the third item of proposition <ref>) as an immediate consequence of the following three steps 0.2 truecm Step 1: By lemma <ref>, α=∫_β_ρ(g)μ̣_g is in ^∞(E). By definition of a cutoff function, the support of (1-ψ(R)) α vanishes at any point y so that d(x,y)<R. Thus the exponential decay lemma <ref> guarantees that A(R)=|_x,g_0,Q((1-ψ(R)) α)|≤ K_4e^-k_4R‖α‖_∞ . Hence lim_R→∞A(R)=0. 0.2 truecm Step 2: Observe that ψ_R∫_β_ρ(g) μ̣(g)=∫_ψ_Rβ_ρ(g) μ̣(g) . Moreover the function g↦ψ_Rβ_g is continuous from to ^∞(E). Thus follows from the continuity of _x,g_0,Q proved in proposition <ref> implies that B(R)=0. 0.2 truecm Final Step: As a consequence of Lebesgue's dominated convergence theorem and the domination proved in lemma <ref>, we have that lim_R→∞ C(R)=0. 0.1 truecm Combining all steps lim_R→∞(A(R)+B(R)+C(R))=0 . Hence thanks to equation (<ref>), we have _x,g_0,Q(∫_β_ρ(g) μ̣(g))=∫__x,g_0,Q(β_ρ(g)) μ̣(g) . §.§ Proof of Theorem <ref> We assume now that μ is a Γ-compact current of order k>1. We may also assume – by decomposing the positive and negative part that μ is a positive current. We want to show that ∫_^pΩ_ρ(w(H))μ̣(H) — defined pointwise — is an element of ^∞(E). Since μ is Γ-compact, it follows that the core diameter of any H in the support of μ is bounded by some constant R_0 by proposition <ref>. It will be enough to prove that ∫_^p‖Ω_ρ(w(H))(y)‖_y μ̣(H) ≤ K_0 , for some constant K_0 that depends on μ. Let 𝒦 be a fundamental domain for the action of Γ on ^p. Observe now that ∫_^p‖Ω_ρ(w(H))(y)‖_y μ̣(H) = ∑_γ∈Γ∫_γ𝒦‖Ω_ρ(w(H))(y)‖_y μ̣(H) =∫_𝒦(∑_γ∈Γ‖Ω_ρ(w(H))(γ(y))‖_y) μ̣(H) = ∫_𝒦‖ψ_w(H),y‖_ℓ^1(Γ) μ̣(H) , where ψ_w(H),y : γ ↦‖Ω_ρ(w(H))(γ(y))‖_y . By the second assertion of corollary <ref>, the map ψ_H,y is in ℓ^1(Γ) and its norm is bounded by a continuous function of the core diameter r(w(H)) of w(H), hence by a continuous function of r(H) by inequality (<ref>), hence by a constant on the support of μ, since r is Γ-invariant and continuous by proposition <ref> and μ is Γ-compact. Since r(H) is bounded on the support of μ, the first item of the theorem follows. Let us consider the map Ψ: H↦∮_ρ(G)Ω_ρ(w(H))=∫_(Ω_ρ(w(H))∧Ω_ρ(G)) , where we used formula (<ref>) in the last equality. Our goal is to prove Ψ is in L^1(^p,μ). We have that ‖Ω_ρ(w(H))∧Ω_ρ(G)(y)‖≤‖Ω_ρ(w(H))(y)‖ ‖Ω_ρ(G)(y)‖ . It follows that ∫_^p|Ψ(H)| μ̣(H) ≤ ∫_^p∫_‖Ω_ρ(G)(y)‖ ‖Ω_ρ(w(H))(y)‖ ỵ μ̣(H) ≤ ∫_‖Ω_ρ(G)(y)‖ ( ∫_^p‖Ω_ρ(w(H))(y)‖ μ̣(H)) ỵ ≤ K_0∫_‖Ω_ρ(G)(y)‖ ỵ = K_0 ‖Ω_ρ(G)‖_L^1() , where we used the first in the third inequality. We can now conclude by using the first assertion the corollary <ref>. 
We use again a family of cutoff functions {ψ_n}_n∈ℕ defined on ^p with values in [0,1] so that each ψ_n has a compact support, and ψ_n converges to 1 uniformly on every compact set. It follows from the Lebesgue's dominated convergence theorem and the second item that lim_n→∞∫_^p( ∮_ρ(G)Ω_ρ(w(H))) ψ_n μ̣(H) =∫_^p(∮_ρ(G)Ω_ρ(w(H))) μ̣(H) . Recall now that by the last assertion of corollary <ref>, ‖Ω_ρ(H)‖_∞ is bounded on every compact set and Γ-invariant, hence bounded on the support of μ. Thus we have the following convergence in ^∞(E) lim_n→∞∫_^pΩ_ρ(w(H)) ψ_n μ̣(H) =∫_^pΩ_ρ(w(H)) μ̣(H) , From the continuity obtained in proposition <ref>, we then have that lim_n→∞∮_ρ(G)∫_^pΩ_ρ(w(H)) ψ_n μ̣(H) =∮_ρ(G)∫_^pΩ_ρ(w(H)) μ̣(H) . Finally, for every n, since ψ_n has compact support the following formula holds ∮_ρ(G)( ∫_^pΩ_ρ(w(H)) ψ_n μ̣(w(H)))= ∫_^p(∮_ρ(G)Ω_ρ(w(H))) ψ_n μ̣(H) . The exchange formula now follows from both assertions (<ref>) and (<ref>). § HAMILTONIAN AND BRACKETS: AVERAGE OF CORRELATION AND LENGTH FUNCTIONS We now leave the realm of uniformly hyperbolic bundles in general and focus only on periodic ones. This corresponds to the study of Anosov representations of the fundamental group of a closed surface. The fact that S is closed allows us to introduce a new structure: the smooth part of the representation variety of projective representations carries the Goldman symplectic form, defined in paragraph <ref>, see also <cit.>. Hence we have a Poisson bracket on functions on the character variety. In this section, we will introduce averaged correlation functions and length functions and compute their Hamiltonian vector fields and Poisson bracket. §.§ Averaged length function: definition As a first step in the construction, let us consider a Θ-decorated current μ^å supported on ×{å} where å is in Θ. The associated length function on the character variety of Anosov representation is the function ^å_μ^å defined by ^å_μ^å(ρ)log(∫_/Γ R_å^σ μ̣^å) , where R_å^σ is the (complex valued in the case of complex bundles) 1-form associated to a section σ of (F_å) by ∇_uσ=R^σ(u)·σ. Although R^σ depends on the choice of the section σ, the integrand over does not. In the complex case, we see the length functions as taking values in ℂ/2π i ℤ due to the ambiguity of defining the logarithm. Recall that in our convention (F_å) is a contracting bundle and thus the real part of _μ is positive. Moreover for a closed geodesic γ whose associated geodesic current, supported on ×{å} is also denoted by γ^å. ^å_γ(ρ)=-log(.Hol(γ)|_F_å) , where Hol(γ) is the holonomy of γ. For a geodesic current δ supported on a closed geodesic, the length function _δ is analytic. This extends to all geodesic currents by density and Morera's Theorem (See <cit.> for a related discussion in the real case). The notion extends naturally – by additivity – to a general Θ-geodesic current. We can now extend the length function to any Θ-geodesic current. Let μ be a Θ-geodesic current on ×Θ, we can then write uniquely μ=∑_å∈Θμ^å , where μ^å is supported on ×{å}, then by definition the μ-averaged length function[In the complex case, since the logarithm, hence the length, is defined up to an additive constant, the Hamiltonian is well defined and the bracket of a length function and any other function makes sense.] is _μ(ρ)∑_å∈Θ^å_μ^å(ρ) . 
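As an illustration of this definition — only the simplest case, stated here for orientation — take Θ={1}, so that F_1 is a line bundle, and let μ be the geodesic current associated to a single closed geodesic γ. The formula above for closed geodesics then reads

ℓ^1_γ(ρ)=−log λ(γ) ,

where λ(γ) is the eigenvalue of Hol(γ) on the Hol(γ)-invariant line determined by F_1 along the axis of γ; since F_1 is contracting, |λ(γ)|<1 and the real part of the length is indeed positive. For instance, for a Fuchsian representation with values in 𝖲𝖫(2,ℝ) one has λ(γ)=e^{-ℓ_hyp(γ)/2}, so that ℓ^1_γ is half the hyperbolic length of γ.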
§.§ Averaged correlation function: definition When w is a natural map, μ a (ρ,w)-integrable cyclic current, the associated averaged correlation function of order n _w(μ) on the moduli space of Θ-Anosov representations is defined by _w(μ)(ρ)∫_^n/Γ_w(G)(ρ) μ̣(G) , where G=(G_1,…,G_p) with and _G is the correlation function associated to a Θ-configuration of geodesics defined in paragraph <ref>. As we shall see in proposition <ref>, the function _w(μ) is analytic . Our main result is a formula for the Poisson bracket of those functions. We use a slightly different convention, writing ^k for a correlation function of order k and ^1_μ=_μ. Let μ be either a w-integrable Θ-cyclic currents at ρ_0 or a Θ-geodesic current. Similarly, let ν be either a v-integrable Θ-cyclic currents at ρ_0 or a Θ-geodesic current. Then the measure μ⊗ν is z-integrable at ρ_0, where z(G,H)=[w(G),v(H)] and moreover {^p_w(μ),^n_v(ν)}(ρ) = ∫_^p+n/Γ_ρ(w(G),v(H)) μ̣(G)ν̣(H) = ∫_^p+n/Γ_[w(G),v(H)](ρ) μ̣(G)ν̣(H) . As a corollary, generalizing Theorem <ref> given in the introduction, using a simple induction and proposition <ref> we get The vector space generated by length functions, averaged correlations functions and constants is stable under Poisson bracket. More precisely, let μ_1, …μ_p cyclic currents of order n_i, and N=n_1+… n_p then {^n_1_μ_1,{^n_2_μ_2,…{^n_p-1_μ_p-1,^n_p_μ_p}…}}(ρ) = ∫_^N/Γ^N_[G_1,[G_2,[…,[G_p-1,G_p]…]]](ρ) μ̣_1(G_1)…μ̣_1(G_p) . In the course of the proof, we will also compute the Hamiltonians of the corresponding functions. Let μ be a Θ-geodesic current. The Hamiltonian of the length function _μ is H^0_μ the trace free part of H_μ, where H_μ-∫_β_ρ(g) μ̣(g) , Let w be a natural function. Let ν be a (ρ,w) integrable cyclic current. The Hamiltonian of the correlation function _w(ν) of order n, with n>1 is Ω_w(ν)∫_^nΩ_ρ(w(G)) ν̣(G) , Both H_μ and Ω_w(ν) are in ^∞(E). §.§ Preliminary and convention in symplectic geometry Our convention is that if f is a smooth function and a symplectic form, the Hamiltonian vector field X_f of f and the Poisson bracket {f,g} of f and g are defined by f̣(Y) = (Y,X_f) , {f,g} = f̣(X_g)=(X_g,X_f)=- g̣(X_f) . 0.5 truecm Observe that if Ω is a complex valued symplectic form – which naturally take entries in the complexified vector bundle – and f a complex valued function then the Hamiltonian vector field is a complexified vector field. The bracket of two complex valued functions is then a complex valued function. In the sequel, we will not write different results in the complex case (complex valued symplectic form and functions) and the (usual) real case. We first start by computing the bracket and Hamiltonian of length functions; §.§ Regularity of averaged correlations functions We prove here Let w be a natural function. Let μ be a (ρ,w)-integrable current, then * _w(μ) is an analytic function in a neighborhood of ρ, * For any tangent vector v at ρ, then _w(G)(v) is in L^1(μ) and _w(μ)(v)=∫__⋆^n/Γ_w(G)(v)μ̣(G) . As in proposition <ref>, we work in the context of complex uniform hyperbolic bundles, possibly after complexification of the whole situtation. Let us first treat the case when μ is Γ-compact. In that case, the functions _G:ρ↦ T_w(G) are all complex analytic by proposition <ref>, uniformly bounded with uniformly bounded derivatives in the support of μ. Thus the result follows from classical results. We now treat the non Γ-compact case. Let now consider an exhaustion of _⋆^n/Γ by compacts K_n and write μ_n=1_K_nμ. Let then _n=∫_K_n_w(μ_n)μ̣ . 
Then by our integrability hypothesis and Lebesgue dominated convergence Theorem _n converges uniformly to _w(μ). Since all _n are complex analytic, by Morera Theorem _w(μ) is complex analytic and _n converges C^∞ to _w(μ). It thus follows that _w(μ)(v)=lim_n→∞_n(v)=lim_n→∞∫_K_n_w(G)(v)μ̣(G) . We now conclude by lemma <ref>. §.§ Length functions: their Hamiltonians and brackets The first step in our proof is to understand the variation of length, The derivatives of a length function with respect to a variation ∇̇ is given by _μ(∇̇)=∫_Θ×/Γ(∇̇) μ̣(x) . By the linearity of the definition, see equation (<ref>), it is enough to consider a Θ-geodesic current μ^å supported on ×{å}. Let E^å⋀^(F_å) E, and Λ^å the natural exterior representation from sl(E) to sl(E^å). Then by <cit.> and formula (<ref>) we have _μ(∇̇)=∫_{å}×/Γ(_åΛ^å(∇̇)) μ̣^å(x) , where ^1_å is the section of (E_å) given by the projection on the line (F_a) induced by the projection on F_å parallel to F_å^∘ – see section <ref> for notation. We now conclude by observing –using just a litle bit of linear algebra– that for any element in sl(E) (^1_åΛ^å(A))=(_å A) . Indeed let us choose a basis (e_1,…, e_p) of F_å completed by a basis (f_1,…, f_m) of F_å^∘ and choose a metric so that this basis is orthonormal. Then Λ^å(A)(e_1∧…∧ e_p)=∑_i=1^pe_1∧… e_i-1∧ A(e_i)∧ e_i+1∧… e_p , (^1_åΛ^å(A))=⟨e_1∧…∧ e_p ,Λ^å(A)(e_1∧…∧ e_p)|=⟩∑_i=1^p⟨e_i , A(e_i)|=⟩(_å A) . Let then H_μ= -∫_β_ρ(g) μ̣(g) . We proved that H_μ lies in ^∞(E) in lemma <ref>. We now prove the following proposition The Hamiltonian vector field of _μ is given by H^0_μ, which is the trace free part of H^μ. Then {_ν,_μ}=(H^0_μ,H^0_ν)=∫__⋆^2/Γ_ρ(g,h) ν̣(g)⊗μ̣(h) , Observe that if μ and ν are both supported on finitely many geodesics, then the support of μ⊗ν is finite in ^2 and its cardinality is the geometric intersection number of the support of μ, with the support of ν. This is a generalization of Wolpert cosine formula, see <cit.>. Remark that ϵμ⊗ν is supported in ^2 on a set on which Γ acts properly. Let us first consider the computation of (H_μ,H_ν). let Δ_1 be a fundamental domain for the action of Γ on ^2. Then denoting ^0_g the traceless part of _g (H^0_μ,H^0_ν) = ∫_Δ_0((∫_β^0_h μ̣(h))∧(∫_β^0_g ν̣(g))) = ∫_Δ_0∫_×ω_h∧ω_g (^0 (g)^0 (h)) μ̣(h)ν̣(g) = ∫_Δ_1∫_ω_h∧ω_g (^0 (g)^0 (h)) μ̣(h)ν̣(g) = ∫_Δ_1ϵ(h,g)(^0 (g)^0 (h)) μ̣(h) ν̣(g) . Let us comment on this series of equalities: the first one is the definition of the symplectic form and that of H_μ and H_ν, for the second one we use the pointwise definition of H_μ and H_ν, for the third one we use proposition <ref>. Observe that the final equality gives formula (<ref>). From the third equality we also have (H^0_μ,H^0_ν) = ∫_Δ_1(∫_gω_h (^0 (g)^0 (h))) μ̣(g)ν̣(h) . Let now consider the fibration z:×→^2 and observe that z^-1(Δ_1) is a fundamental domain for the action of Γ in ×. Let Δ_2 be a fundamental domain for the action of Γ on and observe that Δ_2× is a fundamental domain for the action on Γ on ×. Then the above equation leads to (H^0_μ,H^0_ν) = ∫_z^-1(Δ_1)ω_h (^0 (g)^0 (h)) μ̣(g)ν̣(h) =∫_Δ_2×ω_h (^0 (g)^0 (h)) μ̣(g)ν̣(h) = ∫_Δ_2(^0 (g)∫_β^0_ρ(h)ν̣(h) )μ̣(g) =-∫_Δ_2(^0 (g)H^0_ν)μ̣(g) = -_μ(H^0_ν)=_ν(H^0_μ) . As a conclusion, if Ham(_ν) is the Hamiltonian vector field of _ν, then for all length functions _μ _μ(H^0_ν-Ham(_ν))=0 . We proved in <cit.> that the derivatives of the length functions generates the cotangent space of the character variety on some open dense subset. This completes the proof. 
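Let us record an immediate special case of the formula just proved. If the product current ν⊗μ gives full measure to pairs of geodesics (g,h) with ε(g,h)=0 — for instance if μ and ν are the currents of two distinct disjoint simple closed geodesics, so that no lift of one crosses or shares an endpoint with a lift of the other — then the integrand vanishes and

{ℓ_ν,ℓ_μ}(ρ)=0

for every ρ. This simple mechanism is behind the commuting families associated to laminations in the applications section.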
As noted, the above gives a generalization of Wolpert's cosine formula. Explicitly we have for two Θ-geodesic currents μ,ν then {_ν,_μ} = ∫_(^2)_⋆/Γϵ(g,h)(((g)(h)) - Θ(g)Θ(h)/(E)) μ̣(g)ν̣(h) . §.§ Bracket of length function and discrete correlation function We have Let G be a Θ-configuration and μ a Θ-geodesic current, then {_G,_μ}=-∫_(∮_ρ(G)β_ρ(g)) μ̣(g)=∫__ρ(G,g) μ̣(g) . By proposition <ref>, we have _G(H_μ)=-∮_ρ(G)(∫_β_ρ(g)μ̣) . Thus by the exchange formula (<ref>), we have _G (H_μ)=-∫_(∮_ρ(G)β_ρ(g)) μ̣(g) . Thus conclude using equation (<ref>) {_G,_μ}= _G (H_μ)=-∫_(∮_ρ(G)β_ρ(g)) μ̣(g)=∫__ρ(G,g) μ̣(g) . §.§ Bracket of length functions and correlation functions Our first objective is, given a family of flat connection ∇ whose variation at zero is ∇̇, to compute _μ(∇̇). Assume that the Θ-cyclic current μ is (ρ,w)-integrable. Then {_w(μ),_ν}(ρ) = ∫_^n+1/Γ_ρ(w(G),g) ν̣(g) μ̣(G) . By Theorem <ref>, the hamiltonian vector field of _ν is given by H^0_ν=-∫_β^0_ρ(g) ν̣(g) . Let Δ be a fundamental domain for the action of Γ on ^n, and observe that Δ× is a fundamental domain for the action of Γ on ^n+1. It follows since H_ν is ρ-equivariant and proposition <ref> that {_w(μ),_ν}=_w(μ)(H^0_ν) = ∫_Δ_w(G)(H^0_ν) μ̣(G) =∫_Δ(∮_ρ(w(G))H^0_ν) μ̣(G) = -∫_Δ∫_(∮_ρ(w(G))β_ρ(g)) ν̣(g)μ̣(G) = ∫_Δ∫_(_ρ(w(G),g)) ν̣(g)μ̣(G) = ∫_^n+1/Γ_ρ(w(G),g) μ̣(G)ν̣(g) . For the second equality we used proposition <ref> and that integrating a 1-form with values in the center gives a trivial result by proposition <ref>. §.§ Hamiltonian of correlation functions We are going to prove the following result Let w a natural function. Let μ be a (ρ,w)-integrable Θ-current. Then for every y in , Ω_ρ(G) belongs to L^1(^p,μ). Moreover Ω_w(μ)(ρ)∫_^pΩ_ρ(w(G))μ̣(G) . seen as vector field on the character variety is the Hamiltonian of the correlation function _w(μ). We first prove proposition <ref> under the additional hypothesis that μ is a Γ-compact current, then move to the general case by approximation. Assume μ is a Γ-compact current. By the density of derivatives of length functions, it is enough to prove that for any geodesic current ν associated to a length function _ν whose Hamiltonian is H_ν we have {_ν,_w(μ)}=(Ω_w(μ),H_ν)=_ν(Ω_w(μ)) . Then using a fundamental domain Δ_0 for the action of Γ on , and Δ_1 a fundamental domain for the action of Γ on ^n, and finally denoting ν_0 the flow invariant measure in associated to the current ν _ν(Ω_w(μ)) =∫_Δ_0(Ω_w(μ))ν̣_0(g) = ∫_Δ_0(∫_^n( (g)Ω_ρ(w(G))) μ̣(G))ν̣_0(g) =∫_^n(∫_Δ_0( (g)Ω_ρ(w(G))) ν̣_0(g))μ̣(G) = ∫_Δ_1(∫_( (g)Ω_ρ(w(G))) ν̣_0(g))μ̣(G) =∫_Δ_1∫_∫_g( (g)Ω_ρ(w(G)))) ν̣(g)μ̣(G) = ∫_^n/Γ(∫_∫_(ω_g (g)∧Ω_ρ(w(G))) )ν̣(g) μ̣(G) =-∫_(^n/Γ)×_ρ(w(G),g) μ̣(G)ν̣(g) = {_ν,_μ} . The first equality uses equation (<ref>), the second uses the definition of Ω_μ, the third one comes from Fubini theorem, the fourth one from lemma <ref>, the fifth one from the fibration from to , the sixth one from formula (<ref>), the seventh one definition (<ref>). Let us now prove the general case when μ is a ρ-integrable current. Let us consider an exhaustion K of ^p/Γ by compact sets. Assume that the interior of K_m+1 contains K_m. Let 𝒦 be a fundamental domain of the action Γ on _⋆^p. Let _m(ρ)∫_K_m_w(G)(ρ) μ̣(G) . The functions _m are analytic and converges C^0 on every compact set to _μ by the integrability of μ. Thus, by Morera's Theorem, _μ is analytic and converges C^∞ on every compact . Let us call X the Hamiltonian vector field of _μ and X_m the Hamiltonian vector field of _m. It follows that X converges to X. 
We have just proven in the previous paragraph that the Hamiltionian of _m is X_m=∫_C_mΩ_ρ(H) μ̣ . From corollary <ref>, for every y and H, the function γ↦‖Ω_ρ(γ w(H))(y)‖, is in ℓ^1(γ). It follows that X_m(y)=∫_C_m𝒦(∑_γ∈ΓΩ_ρ(γ H)(y) μ̣(H)) . Since {X_m(y)}_m∈ℕ converges for any exhaustion of 𝒦 to X(y). It follows by lemma <ref> that H↦∑_γ∈ΓΩ_ρ(γ w(H))(y) μ̣(H) , is in L^1(𝒦,μ) and that X(y)= ∫_K∑_γ∈ΓΩ_ρ(γ w(H))(y) μ̣(H)=∫_^pΩ_ρ(w(H))(y) μ̣(H) , where we applied Fubini again in the last equality. This is what we wanted to prove. §.§ Bracket of correlation functions We have Let μ and ν be two integrable Θ-currents of rank m and n respectively. Let p=m+n, then {_w(ν),_v(μ)}=∫_^p/Γ_ρ(w(H),v(G)) ν̣⊗μ̣(H,G) . We have {_w(ν),_v(μ)}=_w(ν)(Ω_v(μ)) = ∫_^n/Γ_w(H)(Ω_v(μ)) ν̣(H) =∫_^n/Γ(∮_ρ(w(H))Ω_v(μ)) ν̣(H) = ∫_^n/Γ(∮_ρ(w(H))∫_^mΩ_ρ(v(G))μ̣(G)) ν̣(H) =∫_^n/Γ(∫_^m∮_ρ(w(H))Ω_ρ(v(G))μ̣(G)) ν̣(H) = ∫_^p/Γ_ρ(w(H),v(G)) ν̣(H) μ̣(G) . The crucial point in this series of equalities is the exchange formula for the fifth equality which comes from Theorem <ref>. With the above, we have completed the proof of the ghost representation Theorem <ref>. § APPLICATIONS In this section we give two applications of our previous results. The first one is a generalization of Kerckhoff theorem <cit.> of the convexity of length functions, and the related Wolpert's sine formula for the second derivatives along twist orbits <cit.>. The second one is to give examples of commuting functions arising from laminations. Both results will follow from computations in the ghost algebra combined with the Ghost Representation Theorem <ref>. §.§ Convexity of length functions for positively ratioed representations We can know prove our convexity theorem. We work in the context of real projective Anosov representation, or 𝖲𝖫(n,ℝ) valued with Θ={1}. Let us first say, following Martone–Zhang <cit.> that a representation has a positive cross ratio if for all intersecting geodesics g and h 0<_⌈ g,h ⌉(ρ)<1 . Let μ be an oriented geodesic current supported on non-intersecting geodesics. Then for any geodesic current ν for any projective representation with a positive cross ratio, we have {_μ,{_μ,_ν}}(ρ)≥ 0 . Furthermore the inequality is strict if and only if i(μ,ν) ≠ 0. This will follow from the definition of a positive cross ratio and our generalisation of Wolpert sine formula: Let μ be an oriented geodesic current supported on non-intersecting geodesics. Then for any geodesic current ν, for any projective representation ρ, we have {_μ,{_μ,_ν}}(ρ)=2∫_^3,+/Γϵ(g_0,h)ϵ(g_1,h)(_⌈ g_1,h,g_0⌉ -⌈ g_1,h⌉⌈ g_0,h⌉)(ρ) μ̣^2 (g_1,g_0) ν̣(h) . where ^3,+ is the set of (g_1,h,g_0) so that if h intersects both g_1 and g_0, then h intersects g_1 before g_0. §.§ Commuting functions arising from laminations Let ℒ be a lamination. Associated to this lamination we get several functions that we called associated to the lamination * The length functions associated to geodesic currents supported on the laminations, * functions associated to any complementary region of the lamination. Let F_ℒ be the vector space generated by these functions. our result is then Let ℒ be a geodesic lamination, then the vector space F_ℒ consists of pairwise Poisson commuting functions. An interesting example is the case of the maximal geodesic lamination coming from a decomposition into pair of pants. An easy check give that there are 6g-6 length functions, and 4g-4 triangle functions. Thus we have 10g-10 commuting functions. 
However in the case the dimension of the space is 16g-16 and it follows that there are relations between these functions. It is interesting to notice that these relations may not be algebraic ones: In that specific case some relations are given by the higher identities <cit.> generalizing Mirzakhani–McShane identities. §.§ Double derivatives of length functions in the swapping algebra In order to prove our convexity result, we will need to calculate the double brackets. By Theorem <ref>, as the map A →_A on the ghost algebra factors through the extended swapping bracket ℬ_0, it suffices to do our calculations in ℬ_0. For simplicity, we will further denote the elements ℓ_g in ℬ_0 by g. Let h be a an oriented geodesic and g_0 and g_1 two geodesics so that ϵ(g_0,g_1)=0. Let ϵ_i=ϵ(g_i,h). Assume first that ϵ_0ϵ_1=0, then [g_1,[g_0,h]]=0. Assume otherwise that h intersect g_1 before g_0 or that g_1=g_0. Then [g_1,[g_0,h]] = ϵ_1ϵ_0 (⌈ g_1,h,g_0 ⌉- ⌈ g_1,h⌉ ⌈ g_0,h ⌉) =ϵ_1ϵ_0 ⌈ g_1,h⌉ ⌈ g_0,h ⌉ (⌈γ_0 ,γ_1 ⌉-1 ) , where γ_0 (g_0^+,h^-) and γ_1 (h^+, g_1^-). Observe that γ_0 and γ_1 are not phantom geodesics by hypothesis. First let us remark that by the Jacobi identity, since [g_0,g_1]=0, then [g_1,[g_0,h]]=[g_0,[g_1,h]] . We apply formulas of paragraph <ref>. We first have from equation (<ref>). [g_0,h]=ϵ(h,g_0)⌈ g_0,h ⌉ + ϵ(g_0,h) . It follows that if ϵ(g_0,h)=0, then [g_1,[g_0,h]]=0 . The same holds whenever ϵ(g_1,h)=0 by the symmetry given by equation (<ref>). Assume now that ϵ_0ϵ_1≠0. Let then (g_0,ζ_0,h,η_0) be the associated ghost polygon to ⌈ g_0,h⌉ with ghost edges ζ_0 = (g_0^+,h^-) and η_0 = (h^+,g_0^-). Thus using the hypothesis ϵ(g_0,g_1)=0, and using the notation ϵ_i=ϵ(g_i,h) we get from equation (<ref>) [g_1,[g_0,h]] = -ϵ_0⌈ g_0,h ⌉(ϵ_1 ⌈ g_1,h⌉- ϵ(g_1,ζ_0)⌈ g_1,ζ_0⌉ -ϵ(g_1,η_0)⌈ g_1, η_0 ⌉) . Since h intersects g_1 before g_0, we have ϵ(g_1,η_0)=0 and ϵ(g_1,ζ_0)=ϵ(g_1,h). Thus [g_1,[g_0,h]] = ϵ_1ϵ_0 ( ⌈ g_1, ζ_0 ⌉⌈ g_0,h ⌉- ⌈ g_1,h⌉⌈ g_0,h ⌉) . As ζ_0 = (g_0^+,h^-) by definition of the swapping algebra ⌈ g_1, ζ_0 ⌉⌈ g_0,h ⌉ = (g_1^+,h^-)(g_0^+,g_1^-)(g_0^+,h^-)(h^+,g_0^-)/(g_1^+,g_1^-)(g_0^+,h^-)(g_0^+,g_0^-)(h^+,h^-) = (g_1^+,h^-)(h^+,g_0^-)(g_0^+,g_1^-)/(g_1^+,g_1^-)(h^+,h^-)(g_0^+,g_0^-) = ⌈ g_1, h, g_0 ⌉ . Similarly ⌈ g_1, h, g_0 ⌉/⌈ g_1, h ⌉⌈ g_0, h⌉ = (g_0^+,g_1^-)(h^+,h^-)/(g_0^+,h^-)(h^+,g_1^-) = ⌈ (g_0^+,h^-),(h^+,g_1^-)⌉ . The result follows from equations (<ref>) and the fact that γ_0=(g_0^+,h^-) and γ_1=(h^+,g_1^-). §.§ Triangle functions and double brackets Let δ_0=(a_1,a_2,a_3) be an oriented ideal triangle, we associate to such a triangle the configuration t_0⌈ a_1,a_3,a_2⌉ . The reader should notice the change of order. One can make the following observation. First t t̅ =1. Thus for a self-dual representation ρ, we have _t(ρ)^2=1 and in particular _t is constant along self dual representations. Let t_0 be a triangle, then [t_0, g] = ∑_j∈{1,2,3}ϵ(a_j,g) t_0 (⌈ g,a_j⌉+⌈ g,a̅_j⌉) . Let t_0, t_1 be triangles. Then [t_1,t_0] = t_1· t_0∑_i,j∈{1,2,3}ϵ(a_i,b_j)(⌈ a_i,b_j⌉ + ⌈ a_i, b_j⌉+⌈a_i,b_j⌉ + ⌈a_i, b_j⌉= t_0∑_i∈{1,2,3} [t_1,a_i-a_i] . Assume now that t_0 and t_1 are two non-intersecting triangles. Then we have the formula: [t_1,[t_0, g]] =t_0 t_1∑_i,j∈{1,2,3} α,β∈{-1,1}ϵ(b_i,g)ϵ(a_j,g) (⌈ b_i^β,g,a_j^α⌉ -⌈ a^α_j,g⌉ ⌈ b^β_i,g⌉) , where c^1=c, c^-1=c̅. Observe first the the hypothesis imply that [t_0,t_1]=0. Thus, by Jacobi identity, [t_0,[t_1,g]]=[t_1,[t_0, g]] . The ghost polygon associated to t is (a_1,a_2,a_3,a_1,a_2,a_3). 
Thus [t_0, g] = t_0 ∑_j∈{1,2,3}ϵ(a_j,g)⌈ g,a_j⌉-ϵ(a_j,g)⌈ g,a̅_j⌉ = t_0 ∑_j∈{1,2,3}ϵ(a_j,g)(⌈ g,a_j⌉+⌈ g,a̅_j⌉) . In particular, if ϵ(g,a_i)=0 for all i, then [t_0, g]=0. Hence, in that case [t_0,[t_1, g]]=[t_1,[t_0, g]]=0 , and the formula (<ref>) is correct. For t_0,t_1 we have [t_1,t_0] = t_1· t_0∑_i,j∈{1,2,3}ϵ(a_i,b_j)⌈ a_i,b_j⌉ - ϵ(a_i,b_j)⌈ a_i, b_j⌉-ϵ(a_i,b_j)⌈a_i,b_j⌉ + ϵ(a_i,b_j)⌈a_i, b_j⌉ = t_0· t_1∑_i,j∈{1,2,3}ϵ(a_i,b_j)(⌈ a_i,b_j⌉ +⌈ a_i, b_j⌉+⌈a_i,b_j⌉ + ⌈a_i, b_j⌉) = t_0∑_i∈{1,2,3} [t_1, a_i- a_i] For the triple bracket, let us focus in the case where g intersects both t_0 and t_1 and by the above symmetry that g intersects t_1, then t_0. Let (a_i,ζ_i,g,η_i) the ghost polygon to ⌈ a_i,g⌉ with ζ_i = (a_i^+,g^-) and η_i = (g^+,a_i^-). Let t_1 = ⌈ b_1, b_3, b_2⌉ be another ideal triangle not intersecting t_0 and such that g intersects t_1, then t_0. Then the associated ghost polygon is (b_1,b_2,b_3,b_1,b_2,b_3). Let h be an edge of the ghost polygon of t_1. Then as g intersects t_1 before t_0 ϵ(h,η_j)=0 , ϵ(h,ζ_j)=ϵ(h,g) . Thus [ t_1,⌈ g, a_j⌉ ] = t_1⌈ g, a_j⌉∑_i∈{1,2,3}(ϵ(g,b_i) ⌈ b_i,g⌉ -ϵ(g,b_i) ⌈b_i,g⌉ - ϵ(ζ_j,b_i)⌈ζ_j, b_i ⌉+ϵ(ζ_j,b_i)⌈ζ_i,b_i⌉) . Simplifying we obtain [ t_1,⌈ g, a_j⌉ ] = t_1⌈ g, a_j⌉∑_i∈{1,2,3}ϵ(g,b_i)( ⌈ b_i,g⌉ + ⌈b_i,g⌉ -⌈ b_i, ζ_j ⌉-⌈b_i, ζ_j ⌉) . By equation (<ref>) ⌈ g, a_j⌉⌈ b_i, ζ_j ⌉ = ⌈ b_i,g, a_j ⌉ ⌈ g, a_j⌉⌈b_i, ζ_j ⌉ = ⌈b_i,g, a_j ⌉ . Thus we obtain [t_1,⌈ g,a_j⌉] = t_1∑_i∈{1,2,3}ϵ(b_i,g) (⌈ b_i,g,a_j⌉-⌈ a_j,g⌉ ⌈ b_i,g⌉ +⌈b̅_j,g,a_j⌉ - ⌈ g,a_j⌉⌈b̅_j, g ⌉) , t_1,⌈ g,a̅_j⌉] = t_1∑_i∈{1,2,3}ϵ(b_i,g) (⌈ b_i,g,a̅_j⌉-⌈ a_j,g⌉ ⌈ b_i,g⌉ +⌈b̅_j,g,a̅_j⌉ - ⌈ g,a̅_j⌉⌈b̅_j, g ⌉) . Combining the two last equations, and writing ϵ(b_i,g)=ϵ^1_i and ϵ(a_j,g)=ϵ^0_j we have (after some reordering) [t_1,[t_0, g]] = t_0 t_1 ∑_i,j∈{1,2,3}ϵ^1_iϵ^0_j(⌈ b_i,g,a_j⌉+⌈ b_i,g,a̅_j⌉+⌈b̅_i,g,a_j⌉+⌈b̅_i,g,a̅_j⌉ - ⌈ a_j,g⌉ ⌈ b_i,g⌉-⌈a̅_j,g⌉ ⌈ b_i,g⌉-⌈a̅_j,g⌉ ⌈b̅_i,g⌉-⌈ a_j,g⌉ ⌈b̅_i,g⌉) , which is what we wanted to prove. Let g be disjoint from the interior of ideal triangle δ. Then g and the triangle function t commute. Similarly let δ_0, δ_1 be ideal triangles with disjoint interiors. Then the associated triangle functions t_0,t_1 commute. We first make an observation. If ϵ(g, h) = ± 1/2 then ⌈ g,h ⌉ + ⌈ g, h⌉ = 1 . To see this, assume g^+=h^-. Then ⌈ g,h⌉ = 0 and ⌈ g, h⌉ has ghost polygon (g,h,h, g) giving ⌈ g, h⌉ = h· g/g·h =1 . By symmetry, this holds for all g,h with ϵ(g,h) = ± 1/2. Let g be disjoint from the interior of ideal triangle δ = (a_1,a_2,a_3). Then from above [g, t] = t∑_i∈{1,2,3}ϵ(g,a_i)(⌈ g,a_i⌉ + ⌈ g, a_i⌉) = t∑_i∈{1,2,3}ϵ(g,a_i) . If ϵ(g,a_i) = 0 for all i then trivially [ g, t] = 0. Thus we can assume ϵ(g, a_1) = 0 and ϵ(g,a_2),ϵ(g,a_3) ≠ 0. If g = a_1 then as ϵ(a_1, a_2) = -ϵ(a_1, a_3) then [g, t] =0. Similarly for g = a_1. Otherwise g, a_2, a_3 share a common endpoint and a_2,a_3 have opposite orientation at the common endpoint. Therefore as g is not between a_2 and a_3 in the cyclic ordering about their common endpoint, then ϵ(g,a_2)= -ϵ(g,a_3) giving [g, t] =0. Let t_0, t_1 be the triangle function associated to ideal polygons δ_0, δ_1 with t_0 = [a_1, a_3,a_2]. Then from above [t_1,t_0] = t_0∑_i [t_1,a_i-a_i] . Thus if t_0,t_1 have ideal triangles with disjoint interiors then by the above, [a_i,t_1] = [a_i,t_1]=0 giving [t_0,t_1]=0. Let t_1, t_2 be ideal triangles intersecting triangle t_0 with sides a_i. Let u = ∑ a_i - a_i. Then [t_2,[t_1,t_0]] = t_0([t_1,u][t_2,u]- [t_2,[t_1,u]]) . From above [t_1,t_0] = -t_0[t_1,u] and [t_2,t_0] = -t_0[t_2,u]. 
Therefore [t_2,[t_1,t_0]] = -[t_2,t_0][t_1,u] - t_0[t_2,[t_1,u]] = t_0[t_2,u][t_1,u]-t_0[t_2,[t_1,u]] . §.§ Positivity Recall that a projective representation ρ has a positive cross ratio if for all g,h intersecting geodesics 0 < _⌈ g,h⌉(ρ) < 1. We now give an equivalent definition which is the one originally given by Martone–Zhang in <cit.>. A projective representation ρ has a positive cross ratio if and only if for all (X,Y,y,x) cyclically oriented _⌈ (X,x),(Y,y)⌉(ρ)>1 . Let X,x,Y,y be 4 points. We observe that (X,Y,y,x) is cyclically oriented if and only if geodesics (X,y),(Y,x) intersect. The result then follows from ⌈ (X,x),(Y,y)⌉ =(X,y) (Y,x)/(X,x) (Y,y)=((X,x) (Y,y)/(X,y) (Y,x))^-1=⌈ (X,y),(Y,x)⌉^-1 . Assume ρ is a projective representation with a positive cross ratio. Let h be so that if h intersects both g_1 and g_0, then h intersects g_1 before g_0. Let g_1, g_0 be such that ϵ(g_0,g_1) = 0 Then we have the inequality ϵ_1ϵ_0 _⌈ g_1 ,h , g_0⌉- ⌈ g_1,h⌉⌈ g_0,h ⌉(ρ)≥ 0 . Furthermore the inequality is strict if and only if h intersects both g_0, g_1 in their interiors (i.e. if and only if |ϵ_0ϵ_1| = 1). By Lemma <ref> we have, since g_1 meets h before g_0. ϵ_0ϵ_1(⌈ g_1,h,g_0⌉ - ⌈ g_1,h⌉⌈ g_0,h⌉) = ϵ_1ϵ_0 ⌈ g_1,h⌉ ⌈ g_0,h ⌉ (⌈γ_0 ,γ_1 ⌉-1 ) . where γ_0 (g_0^+,h^-) and γ_1 (h^+, g_1^-). We will also freely use that if x^+=y^- or x^-=y^+, then _⌈ x,y⌉=0, while if x^+=y^+ or x^-=y^- then _⌈ x,y⌉=1. 0.2 truecm First case: ϵ_0ϵ_1 =0. In that case, we have equality. 0.2 truecm Second case: 0<|ϵ_0ϵ_1| <1. In that situation one of the end point of h is an end point of g_0 or g_1. * Firstly, the cases g_0^± = h^- or g_1^± = h^+ are impossible since h meets g_1 before g_0. * Secondly if g_1^+= h^- or g_0^- = h^+, then _⌈ g_1,h⌉_⌈ g_0,h⌉(ρ) = 0. * Finally, if g_1^-= h^- or g_0^+ = h^+, then either γ_0^+=γ_1^+ or γ_0^-=γ_1^-. In both cases, _⌈γ_0,γ_1⌉(ρ)=1 and hence it follows that _⌈ g_1 ,h , g_0⌉- ⌈ g_1,h⌉⌈ g_0,h ⌉(ρ)=0. 0.2 truecm Final case: |ϵ_0ϵ_1|=1. As both g_0 and g_1 intersect h and ρ has a positive cross ratio, then by proposition <ref>, _⌈ g_1,h⌉ ⌈ g_0,h ⌉(ρ)=_⌈ g_1,h⌉(ρ) _⌈ g_0,h ⌉(ρ)>0 . We can then split in two cases as in figure (<ref>): * If ϵ_0ϵ_1>0, then γ_0 and γ_1 do not intersect, and (h^-,g_0^+,h^+,g_1^-) is a cyclically oriented quadruple. Hence, by definition _⌈γ_0,γ_1⌉(ρ)>1. See figure (<ref>)) * If now ϵ_0ϵ_1<0, then γ_0 and γ_1 intersect, and by proposition <ref> _⌈γ_0,γ_1⌉(ρ)<1.(see figure (<ref>)) Combining both cases, we get that ϵ_0ϵ_1 (_⌈γ_0,γ_1⌉(ρ)-1)> 0 . The result follows from equations (<ref>) and (<ref>). Then we have Assume ρ is a projective representation with a positive cross ratio. Let g_1, g_0 be such that ϵ(g_0,g_1) = 0. Then we have the inequality _[g_1,[g_0, h]](ρ)≥0 . Furthermore the inequality is strict if and only if h intersects both g_0, g_1 in their interiors. The Jacobi identity for the swapping bracket <ref> gives that [g_0,[g_1,h]]=[g_1,[g_0,h]] since [g_0,g_1]=0. Thus the proof follows lemma <ref>, lemma <ref>. §.§ Proof of the convexity theorem <ref> and the sine formula theorem <ref> By the representation theorem and its corollary <ref> {_μ,{_μ,_ν}}(ρ)=∫_^3/Γ_[g_1,[g_0, h]](ρ) μ̣(g_0)μ̣(g_1)ν̣(h) . Since by lemma <ref>, the integrand is non-negative, the integral is non-negative. If i(μ,ν)= 0 then for all g in the support of μ and h in the support of ν, |ϵ(g,h)| ≠ 1. Thus by lemma <ref> for g_0,g_1 in the support of μ and h in the support of ν then _[g_1,[g_0, h]](ρ) = 0 . Thus the integral is zero for i(μ,ν)= 0. 
If i(μ,ν) ≠ 0 then there exists g_0, h in the supports of μ,ν respectively such that |ϵ(g_0,h)| = 1. If h is descends to a closed geodesic then it is invariant under an element γ of Γ then we let g_1 = γ g_0. Then the triple (g_1,g_0,h) is in the support of μ⊗μ⊗ν. Thus _[g_1,[g_0, h]](ρ) > 0 and the integral is positive. If h does not descend to a closed geodesic, then as any geodesic current is a limit of a discrete geodesic currents, it follows that h intersects g_1 = γ g_0 for some γ in Γ. Again the triple (g_1,g_0,h) are in the support of μ⊗μ⊗ν with _[g_1,[g_0, h]](ρ) > 0. Thus the integral is positive. This completes the proof of Theorem <ref>. For Theorem <ref>, we use the Jacobi identity for the swapping bracket to get ∫_^3/Γ_[g_1,[g_0, h]](ρ) μ̣(g_0)μ̣(g_1)ν̣(h) =2∫_^3,+/Γ_[g_1,[g_0, h]](ρ) μ̣(g_0)μ̣(g_1)ν̣(h) . Then we use lemma <ref>. § THE JACOBI IDENTITY FOR A Θ-GHOST BRACKET We now explain the the Jacobi identity for polygons with disjoint set of vertices is satisfied. §.§ Linking number on a set Let us make a little more general construction recall some construction of in <cit.> Let be a set, 𝒢_1 be the set of pair of points of Z. We denote temporarily the pair (X,x) with the symbol Xx. We also defined a linking number on to be a map from ^4 to a commutative ring 𝔸 (X,x,Y,y)→ϵ(Xx,Yy), so that for all points X,x,Y,y,Z,z the following conditions are satisfied ϵ(Xx,Yy)+ϵ(Xx,yY)=ϵ(Xx,Yy)+ϵ(Yy,Xx) = 0 , ϵ(zy,XY)+ϵ(zy,YZ)+ϵ(zy,ZX) = 0 , ϵ(Xx,Yy).ϵ(Xy,Yx) = 0 . The second author proved in <cit.> the Let (X,x,Z,z,Y,y) be 6 points on the set equipped with an linking number, then ϵ(Xy,Zz)+ϵ(Yx,Zz)=ϵ(Xx,Zz)+ϵ(Yy,Zz). Moreover, if {X,x}∩{Y,y}∩{Z,z}=∅, then ϵ(Xx,Yy)ϵ(Xy,Zz)+ϵ(Zz,Xx)ϵ(Zx,Yy)+ϵ(Yy,Zz)ϵ(Yz,Xx) = 0 , ϵ(Xx,Yy)ϵ(Yx,Zz)+ϵ(Zz,Xx)ϵ(Xz,Yy)+ϵ(Yy,Zz)ϵ(Zy,Xx) = 0 . §.§ The ghost algebra of a set with a linking number §.§.§ Ghost polygons and edges We say a geodesic is a pair of points in . We write g=(g_-,g_+). A configuration G ⌈ g_1,… g_n⌉ is a tuple of geodesics (g_1,… g_n) up to cyclic ordering, with n≥ 1. The positive integer n is the rank of the configuration. To a configuration of rank greater than 1, we associate a ghost polygon, also denoted G which is a tuple G = (θ_i,…,θ_2n) where g_i = θ_2i are the visible edges and ϕ_i = θ_2i+1 ((g_i+1)_-,(g_i)_+) are the ghost edges. The ghost index i_e of an edge e is an element of ℤ/2ℤ which is zero for a visible edge and one for a ghost edge. In other words i_θ_k k [2]. We will then denote by G_∘ the set of edges (ghost or visible) of the configuration G. Geodesics, or rank 1 configuration, play a special role. In that case G=⌈ g⌉, by convention G^∘ consists of of single element g which is a visible edge. §.§.§ Opposite edges We now define the opposite of an edge in a reduced configuration. Recall that a configuration is a tuple up to cyclic permutation. in this section we will denote ⌊ g_1,… g_n⌋, a tuple. We denote by the ∙ the concatenation of tuples: ⌊ g_1,… g_n⌋∙⌊ h_1,… h_p⌋⌊ g_1,… g_n, h_1,… h_p⌋ . We introduce the following notation. If θ is a visible edge of G, we define θ_+ = θ_- = θ and if θ is a ghost edge of G then we define θ_+ to be the visible edge after θ and θ_- the visible edge before. The opposite of an edge is θ^* ⌊θ_+…θ_-⌋ where the ordering is an increasing ordering of visible edges from θ_+ to θ_-. More specifically * For a visible edge g_i, the opposite is the tuple g_i^* = ⌊ g_i, g_i+1… g_i-1g_i⌋, * while for a ghost edge ϕ_i the opposite is ϕ_i^* = ⌊ g_i+1g_i+2… g_i-1 g_i⌋. 
* if ⌈ h⌉ is a rank 1 configuration. The opposite of its unique edge h is h itself. §.§ Ghost bracket and our main result We now define the ghost algebra of to be the polynomial algebra 𝒜_0 freely generated by ghost polygons and geodesics. The ghost algebra is equipped with the antisymmetric ghost bracket, given on the generators 𝒜 by, for two ghosts polygons B and C and geodesics g and h, [B,C] = ∑_(b,c)∈ B_∘× C_∘ϵ(c,b)(-1)^i_b+i_c⌈ c^* b^*⌉ . It is worth writing down the brackets of two geodesics g and h, as well as the bracket of a geodesic g and a configuration B, -[g,B]=[B,g] = ∑_b∈ B_∘ϵ(g,b)(-1)^i_b+1⌈ g,b^*⌉ , -[g,h]=[h,g] = ϵ(g,h) ⌈ g, h⌉ . Our goal in this section is to prove Let A, B, C three polygons with no common vertices: V_A∩ V_B∩ V_C=∅, where V_G is the set of vertices of the polygon G. Then the ghost bracket satisfies the Jacobi identity for A, B, C: [A,[B,C]]+[B,[C,A]]+[C,[A,B]] =0 . As the formula for the bracket differs based on whether ghost polygons are rank 1 or higher, will need to consider the different cases based on the rank of the three elements. We will denote rank 1 elements by a,b,c and higher rank by A,B,C. For a, b and c edges in A, B, C ghost or otherwise we label their ghost indexes by i_a ,i_b, i_c and their opposites by a^*, b^*, c^*. §.§ Preliminary: more about opposite edges Let also use the following notation: if θ_k and θ_l are two edges, ghost or visible ⌈ g_1,…, g_n⌉, of a ghost polygon, then G(θ_k,θ_l) = ⌊θ_k_+…θ_l_-⌋ , where again this is an increasing of visible edges. The tuple G(θ_k,θ_l) is an “interval" defined by θ_k and θ_2. In order to continue our description of the triple brackets. We need to understand, in the above formula, what are the opposite of ϕ^* in [b^*,c^*]. Our preliminary result is the following Let B and C be two ghost polygons, b and c edges in B and C respectively. Let ϕ be an edge in ⌈ b^*,c^*⌉, then we have the following eight possibilities 1: Either ϕ is an edge of B, different from b or a ghost edge, then ϕ^*= G(ϕ,b)∙ c^*∙ G(b,ϕ) , 2: b is a visible edge, ϕ is the initial edge b in b^* and then ϕ^*=b^*∙ c^*∙ b . 3: b is a visible edge, ϕ is the final edge b in b^* and then ϕ^*=b∙ c^*∙ b^* . 4, 5, 6: Or ϕ is an an edge of C, and the three items above apply with some obvious symmetry, giving three more possibilities. 7: or ϕ is the edge u_b,c (c_-^-,b_+^+) of ⌈ b^*,c^*⌉ which is neither an edge of b nor an edge of c, a ghost edge, and ϕ^*=⌊ c^*,b^*⌋ . 8: ϕ is the edge u_c,b (b_-^-,c_+^+) of ⌈ b^*,c^*⌉ which is neither an edge of b nor an edge of c, a ghost edge, and ϕ^*=⌊ b^*,c^*⌋ . This follows from a careful book-keeping and the previous definitions. §.§ Cancellations Let us introduce the following quantities for any triple of polygons A, B, C whatever their rank. 
They will correspond to the cases obtained corresponding to the cases observed in lemma <ref>: Case 1: P_1(A,B,C) ∑_(a,c,b,ϕ)∈ A_∘× C_∘× B_∘^2 ϕ≠bϵ(a,ϕ)ϵ(c,b)(-1)^i_a+i_ϕ+i_b+i_c⌈ a^* ∙ G(ϕ,b)∙ c^*∙ G(b,ϕ) ⌉ , Case 3: P_2(A,B,C) ∑_(a,b,c,ϕ)∈ A_∘× B_∘× C_∘^2 ϕ≠cϵ(a,ϕ)ϵ(c,b)(-1)^i_a+i_ϕ+i_b+i_c⌈ a^*∙ G(ϕ,c)∙ b^*∙ G(c,ϕ) ⌉ , Case 4: Q_1(A,B,C) ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,b)ϵ(c,b)(-1)^i_a+i_c⌈ a^*∙ b ∙ c^*∙ b^* ⌉ , Case 5: Q_2(A,B,C) ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,b)ϵ(c,b)(-1)^i_a+i_c⌈ a^*∙ b^* ∙ c^*∙ b⌉ , Case 6: R_1(A,B,C) ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,c)ϵ(c,b)(-1)^i_a+i_b⌈ a^*∙ c ∙ b^*∙ c^* ⌉ , Case 7: R_2(A,B,C) ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,c)ϵ(c,b)(-1)^i_a+i_b⌈ a^*∙ c^* ∙ b^*∙ c ⌉ , Case 8: S_1(A,B,C) ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,u_b,c)ϵ(c,b)(-1)^i_a+i_c+i_b⌈ a^*∙ c^* ∙ b^* ⌉ , Case 4: S_2(A,B,C) ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,u_c,b)ϵ(c,b)(-1)^i_a+i_c+i_b⌈ a^*∙ b^* ∙ c^* ⌉ . We then have We have the following cancellations, where the two last ones use the hypothesis (<ref>) P_1(A,B,C)+P_2(C,A,B) = 0 , first cancellation , Q_1(A,B,C)+R_2(B,C,A) = 0 , second cancellation , S_1(A,B,C)+S_1(B,C,A)+S_1(C,A,B) = 0 , hexagonal cancellation-1 , S_2(A,B,C)+S_2(B,C,A)+S_2(C,A,B) = 0 , hexagonal cancellation-2 . For the first cancellation, we have P_1(A,B,C)+P_2(C,A,B) = ∑_(a,c,b,ϕ)∈ A_∘× C_∘× B_∘^2 ϕ≠bϵ(a,ϕ)ϵ(c,b)(-1)^i_a+i_ϕ+i_b+i_c⌈ a^* ∙ G(ϕ,b)∙ c^*∙ G(b,ϕ) ⌉ + ∑_(c,a,b,ϕ)∈ C_∘× A_∘× B_∘^2 ϕ≠bϵ(c,ϕ)ϵ(b,a) (-1)^i_a+i_ϕ+i_b+i_c⌈ c^* ∙ G(ϕ,b)∙ a^*∙ G(b,ϕ) ⌉ = ∑_(a,c)∈ A_∘× C_∘ (b_0,b_1)∈ B_∘^2 b_0≠b_1(ϵ(a,b_1)ϵ(c,b_0) + ϵ(c,b_0)ϵ(b_1,a) )(-1)^i_a+i_b_0+i_b_1+i_c⌈ a^* ∙ G(ϕ,b)∙ c^*∙ G(b,ϕ) ⌉=0 , where we used the change of variables (b_0,b_1)=(b,ϕ) in the second line and (b_0,b_1)=(ϕ,b) in the third and use the cyclic invariance. The second cancellation follows by a similar argument R_1(A,B,C)+Q_2(B,C,A) = ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,c)ϵ(c,b)(-1)^i_a+i_b⌈ b^*∙ c^* ∙ a^*∙ c) ⌉ + ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(b,c)ϵ(a,c)(-1)^i_a+i_b⌈ b^*∙ c^* ∙ a^*∙ c) ⌉ =0 . Finally the hexagonal cancellation-1 follows from the hexagonal relation ϵ(a,u_b,c)ϵ(c,b)+ϵ(b,u_c,a)ϵ(a,c)+ϵ(c,u_a,b)ϵ(b,a)=0 , which is itself a consequence of lemma <ref> and the assumption (<ref>). A similar argument works the second hexagonal relation. §.§ The various possibilities for the triple bracket We have to consider 3 different possibilities for the triple brackets [A,[B,C] taking in account whether B and C have rank 1. The following lemma will be a consequence of lemma <ref>. We will also use the following conventions: if Q_1(U,V,W)=Q_2(U,V,W) , then we write Q(U,V,W) Q_1(U,V,W)=Q_2(U,V,W) , if R_1(U,V,W)=R_2(U,V,W) , then we write R(U,V,W) R_1(U,V,W)=R_2(U,V,W) . We have the following four possibilities (independent of the rank of U) for the triple brackets * The polygons V and W have both rank greater than 1, then [U,[V,W]] = P_1(U,V,W)+P_2(U,V,W)+Q_1(U,V,W)+Q_2(U,V,W) + R_1(U,V,W)+R_2(U,V,W)+ S_1(U,V,W)+S_2(U,V,W) . * Both v V and w W have rank 1, then [U,[v,w]] = Q(U,v,w)+R(U,v,w)+S_1(U,v,w)+S_2(U,v,w) . * The polygon W has rank greater than 1, while v V has rank 1, then [U,[v,W]] = P_2(U,v,W)+Q(U,v,W)+R_1(U,v,W)+R_2(U,v,W) + S_1(U,v,W)+S_2(U,v,W) . * The polygon W has rank greater than 1, while v V has rank 1, then [U,[V,w]] = P_2(U,W,v)+R(U,W,v)+Q_1(U,W,v)+R_2(U,W,v) + S_1(U,W,v)+S_2(U,W,v) . This is deduced from lemma <ref>. Indeed we deduce from that lemma that we have * if B is a geodesic, then case 1 does not happen, and case 4 and case 5 coincide, thus P_1(U,V,W)=0 , Q_1(U,V,W)=Q_2(U,V,W) Q(U,V,W) . 
* Symmetrically, if C is a geodesic, then case 2 does not happen, and case 6 and case 7 coincide, thus P_2(U,V,W)=0 , R_1(U,V,W)=R_2(U,V,W) R(U,V,W) . §.§ Proof of the Jacobi identity We will use freely in that paragraph lemma <ref> The previous discussion gives [A,[B,C]] = P_1(A,B,C)+P_2(A,B,C)+Q_1(A,B,C)+Q_2(A,B,C) + R_1(A,B,C)+R_2(A,B,C)+ S_1(A,B,C)+S_2(A,B,C) , B,[C,A]] = P_1(B,C,A)+P_2(B,C,A)+Q_1(B,C,A)+Q_2(B,C,A) + R_1(B,C,A)+R_2(B,C,A)+ S_1(B,C,A)+S_2(B,C,A) , C,[A,B]] = P_1(C,A,B)+P_2(C,A,B)+Q_1(C,A,B)+Q_2(C,A,B) + R_1(C,A,B)+R_2(C,A,B)+ S_1(C,A,B)+S_2(C,A,B) . The proof of the Jacobi identity then follows from the cancellations (<ref>). In that case, writing a A, b B and c C, we have [a,[b,c]] = Q(a,b,c)+R(a,b,c)+S_1(a,b,c)+S_2(a,b,c) , b,[c,a]] = Q(b,c,a)+R(b,c,a)+S_1(b,c,a)+S_2(b,c,a) c,[b,a]] = Q(c,a,b)+R(c,a,b)+S_1(c,a,b)+S_2(c,a,b) . The Jacobi identity follows from the cancellations (<ref>). Assume a A is a geodesic, B and C has rank 2. Then [a,[B,C]] = P_1(a,B,C)+P_2(a,B,C)+Q_1(a,B,C)+Q_2(a,B,C) + R_1(a,B,C)+R_2(a,B,C)+ S_1(a,B,C)+S_2(a,B,C) , C,[a,B]] = P_2(C,a,B)+Q_1(C,a,B)+Q_2(C,a,B) +R(C,a,B)+ S_1(C,a,B)+S_2(C,a,B) , B,[C,a]] = P_1(B,C,a)+R_1(B,C,a)+R_2(B,C,a) +Q(B,C,a)+ S_1(B,C,a)+S_2(B,C,a) . Then again the cancellations (<ref>), yields the Jacobi identity in that case. We have here that A has rank greater than 1, while b B and c C are geodesics, then [A,[b,c]] = Q(A,b,c)+R(A,b,c)+S_1(A,b,c)+S_2(A,b,c) , b,[c,A]] = P_1(b,c,A)+Q_1(b,c,A)+Q_2(b,c,A) +R(b,c,A)+ S_1(b,c,A)+S_2(b,c,A) , c,[A,b]] = P_2(c,A,b)+R_1(c,A,b)+R_2(c,A,b) +Q(c,A,b)+ S_1(c,A,b)+S_2(c,A,b) . For the last time, the cancellations (<ref>), yields the Jacobi identity in that case. § A LEMMA IN HYPERBOLIC GEOMETRY For any geodesic g and g_0, where g_0 is parametrized by the arc, the following holds. If R> 1 and d(g_0(R), g)<2, while d(g_0(R-1),g)≥ 2, then d(g_0(0),g)≥ R . We let h be a geodesic with d(g_0(R),h) = d(g_0(R-1),h) = 2. Then we observe that d(g_0(0),g) ≥ d(g_0(0), h). We drop perpendiculars from g_0(R-1),g_0(R-1/2) and g_0(0) to h. The perpendicular from g_0(R-1) to h is length 2 and let a be the length of the perpendicular from g_0(R-1/2). Then considering the Lambert quadrilateral with opposite sides of length a, 2 gives sinh(a)cosh(1/2) = sinh(2) , sinh(a)cosh( R-1/2)=sinh D , where D = d(g_0(0), h). It follows easily that e^D/2≥sinh(D) = sinh(a) cosh(R-1/2) ≥sinh (a)/2 e^R-1/2 . Thus d(g_0(0),g) ≥ D ≥ R-1/2+log(sinh(a)) ≥ R . § FUNDAMENTAL DOMAIN AND L^1-FUNCTIONS If Γ is a countable group acting on X preserving a measure μ, a μ-fundamental domain for this action is a measurable set Δ so that ∑_γ∈Γ 1_γ(Δ)=1, μ-almost everywhere. A function F on X is Γ-invariant if for every γ in Γ, F=F∘γ, μ–almost everywhere. Then For any Γ-invariant positive function, if Δ_0 and Δ_1 are fundamental domain then ∫_Δ_0 Fμ̣=∫_Δ_1 Fμ̣ . Using the γ-invariance of F ∫_Δ_0F=∑_γ∈Γ∫_X F· 1_Δ_0∩γ(Δ_1)μ̣=∑_η∈Γ∫_X F· 1_η(Δ_0)∩Δ_1μ̣= ∫_Δ_1F . We define by a slight abuse of language, if Γ-admits a μ fundamental domain Δ on X ∫_X/ΓF μ̣∫_ΔF μ̣ . Let Γ be a group acting properly on X_0 and X_1 preserving μ_0 and μ_1 respectively. Assume that Δ_0 – respectively Δ_1 – is a fundamental domain for the action of Γ on X_0 and X_1, then Let F be a positive function on X_0× X_1 which is Γ invariant, where Γ acts diagonally and the action on each factor preserves measures called μ_0 and μ_1 and admits a fundamental domain called Δ_0 and Δ_1, then ∫∫_Δ_0× X_1F μ̣_0⊗μ̣_1=∫∫_X_0×Δ_1F μ̣_0⊗μ̣_1 . 
Indeed Δ_0× X_1 and X_0×Δ_1 are both fundamental domains for the diagonal action of Γ on X_0× X_1. The lemma then follows from the previous one and Fubini theorem. Let f be a continuous function defined on a topological space X. Let μ be a Radon measure on X. Then the following lemma holds. Assume that there exists a real constant k so that for every exhausting sequence K of compacts of X, lim_m→∞∫_K_mf μ̣=k. Then f belongs to L^1(X,μ) and ∫_Xf μ̣=k. amsplain 10 Atiyah:1983 Michael F Atiyah and Raoul Bott, The Yang-Mills equations over Riemann surfaces, Philos. Trans. Roy. Soc. London Ser. A 308 (1983), no. 1505, 523–615. BGLPW Jonas Beyrer, Olivier Guichard, François Labourie, Beatrice Pozzetti, and Anna Wienhard, Positivity, cross ratios and the collar lemma. Bonahon:1988 Francis Bonahon, The geometry of Teichmüller space via geodesic currents, Inventiones Mathematicae 92 (1988), no. 1, 139–162. Bonahon:2014woa Francis Bonahon and Guillaume Dreyer, Hitchin characters and geodesic laminations, Acta Mathematica 218 (2017), no. 2, 201–295. Bridgeman:2020vg Martin Bridgeman, Richard Canary, and François Labourie, Simple length rigidity for Hitchin representations, Adv. Math. 360 (2020), 106901, 61. 4035950 Bridgeman:2015ba Martin J Bridgeman, Richard Canary, François Labourie, and Andres Sambarino, The pressure metric for Anosov representations, Geometric And Functional Analysis 25 (2015), no. 4, 1089–1179. Choi:2020aa Suhyoung Choi, Hongtaek Jung, and Hong Chan Kim, Symplectic coordinates on PSL_3( R)-Hitchin components, Pure Appl. Math. Q. 16 (2020), no. 5, 1321–1386. 4220999 Fock:2006a Vladimir V Fock and Alexander B Goncharov, Moduli spaces of local systems and higher Teichmüller theory, Publ. Math. Inst. Hautes Études Sci. (2006), no. 103, 1–211. Goldman:1984 William M Goldman, The symplectic nature of fundamental groups of surfaces, Advances in Mathematics 54 (1984), no. 2, 200–225. Goldman:1986 , Invariant functions on Lie groups and Hamiltonian flows of surface group representations, Inventiones Mathematicae 85 (1986), no. 2, 263–302. Kerckhoff:1983th Steven P. Kerckhoff, The Nielsen realization problem, Ann. of Math. (2) 117 (1983), no. 2, 235–265. 690845 Labourie:2020tv François Labourie and Jérémy Toulisse, Quasicircles and quasiperiodic surfaces in pseudo-hyperbolic spaces, arXiv:2010.05704, 2020. Labourie:2006 François Labourie, Anosov flows, surface groups and curves in projective space, Inventiones Mathematicae 165 (2006), no. 1, 51–114. Labourie:2005 , Cross ratios, surface groups, PSL(n, R) and diffeomorphisms of the circle, Publ. Math. Inst. Hautes Études Sci. (2007), no. 106, 139–213. Labourie:2013ka , Lectures on representations of surface groups, Zurich Lectures in Advanced Mathematics, European Mathematical Society (EMS), Zürich, 2013. Labourie:2012vka , Goldman algebra, opers and the swapping algebra, Geometry and Topology 22 (2018), no. 3, 1267–1348. McShane-Lab François Labourie and Gregory McShane, Cross ratios and identities for higher Teichmüller-Thurston theory, Duke Mathematical Journal 149 (2009), no. 2, 279 – 345. Labourie:2018fj François Labourie and Richard Wentworth, Variations along the Fuchsian locus, Annales Scientifiques de l'Ecole Normale Supérieure. Quatrième Série 51 (2018), no. 2, 487–547. Martone:2019uf Giuseppe Martone and Tengren Zhang, Positively ratioed representations, Comment. Math. Helv. 94 (2019), no. 2, 273–345. Nie:2013tu Xin Nie, The quasi-Poisson Goldman formula, J. Geom. Phys. 74 (2013), 1–17. 
Potrie:2014uta Rafael Potrie and Andr e s Sambarino, Eigenvalues and Entropy of a Hitchin representation, Inventiones Mathematicae (2017), no. 3, 885–925. Sun:2021tj Zhe Sun, Rank n swapping algebra for PGL_n Fock-Goncharov X moduli space, Math. Ann. 380 (2021), no. 3-4, 1311–1353. Sun:2020vm Zhe Sun, Anna Wienhard, and Tengren Zhang, Flows on the PGL(V)-Hitchin component, Geom. Funct. Anal. 30 (2020), no. 2, 588–692. Sun:2017 Zhe Sun and Tengren Zhang, The Goldman symplectic form on the PGL(V)-Hitchin component, arXiv:1709.03589. Turaev:1991wk Vladimir G Turaev, Skein quantization of Poisson algebras of loops on surfaces, Annales Scientifiques de l'Ecole Normale Supérieure. Quatrième Série 24 (1991), no. 6, 635–704. Wolpert:1981vt Scott Wolpert, An elementary formula for the Fenchel-Nielsen twist, Comment. Math. Helv. 56 (1981), no. 1, 132–135. Wolpert:1983td Scott A Wolpert, On the Symplectic Geometry of Deformations of a Hyperbolic Surface, Annals of Mathematics 117 (1983), no. 2, 207–234.
http://arxiv.org/abs/2307.04186v1
20230709142509
Absolute Concentration Robustness and Multistationarity in Reaction Networks: Conditions for Coexistence
[ "Nidhi Kaihnsa", "Tung Nguyen", "Anne Shiu" ]
math.DS
[ "math.DS", "math.AG", "q-bio.MN", "92E20, 37N25, 26C10, 34A34, 34C08" ]
Absolute concentration robustness and multistationarity]Absolute concentration robustness and multistationarity in reaction networks: Conditions for coexistence [ Nidhi Kaihnsa, Tung Nguyen, and Anne Shiu Received: date / Accepted: date ============================================= Many reaction networks arising in applications are multistationary, that is, they have the capacity for more than one steady state; while some networks exhibit absolute concentration robustness (ACR), which means that some species concentration is the same at all steady states. Both multistationarity and ACR are significant in biological settings, but only recently has attention focused on the possibility for these properties to coexist. Our main result states that such coexistence in at-most-bimolecular networks (which encompass most networks arising in biology) requires at least 3 species, 5 complexes, and 3 reactions. We prove additional bounds on the number of reactions for general networks based on the number of linear conservation laws. Finally, we prove that, outside of a few exceptional cases, ACR is equivalent to non-multistationarity for bimolecular networks that are small (more precisely, one-dimensional or up to two species). Our proofs involve analyses of systems of sparse polynomials, and we also use classical results from chemical reaction network theory. 0.1in Keywords: Multistationarity, absolute concentration robustness, reaction networks, sparse polynomials. 2020 MSC: 92E20, 37N25, 26C10, 34A34, 34C08 § INTRODUCTION A mass-action kinetics system exhibits absolute concentration robustness (ACR) if the steady-state value of at least one species is robust to fluctuations in initial concentrations of all species <cit.>. Another biologically significant property is the existence of multiple steady states, that is, multistationarity. Significantly, this property has been linked to cellular decision-making and switch-like responses <cit.>. As both ACR and multistationarity are important properties, it is perhaps surprising that their relationship was explored only recently, when the present authors with Joshi showed that ACR and multistationarity together – or even ACR by itself – is highly atypical in randomly generated reaction networks. This result dovetails with the fact that the two properties are somewhat in opposition, as multiple steady states are not in general position in the presence of ACR. The results of Joshi et al. are asymptotic in nature (as the number of species goes to infinity), and they pertain to networks that are at-most-bimolecular (which is typical of networks arising in biology) and reversible (which is not) <cit.>. This naturally leads to the following question: For multistationarity and ACR to coexist, how many species, reactions, and complexes are needed? Which networks (without the requirement of being reversible) of small to medium size allow such coexistence? Another motivation for Question <ref> comes from synthetic biology. In order to design reaction networks with certain dynamical properties, we need to better understand the design principles that allow for such behaviors, as well as the constraints on the size (such as the minimum numbers of species, reaction, and complexes) of such networks. Our work focuses on answering Question <ref>. 
Broadly speaking, our results fall into two categories: (i) results that give lower bounds on the dimension of a network or its number of species, reactions, or complexes; and (ii) results for certain classes of networks (one-dimensional, up to 2 species, and so on). Our primary focus is on at-most-bimolecular networks, but we also present results on general networks. In the first category, our results are summarized in the following theorem, which gives some minimum requirements for ACR and nondegenerate multistationarity to coexist. This coexistence is typically on a nonzero-measure subset of the parameter space of reaction rate constants. Let G be an at-most-bimolecular reaction network with n species such that there exists a vector of positive rate constants κ^* such that the mass-action system (G,κ^*) has ACR and is nondegenerately multistationary. Then G has: * at least 3 species (that is, n ≥ 3), * at least 3 reactant complexes (and hence, at least 3 reactions) and at least 5 complexes (reactant and product complexes), and * dimension at least 2. If, additionally, G is full-dimensional (that is, G has no linear conservation laws), then G has: * at least n+2 reactant complexes (and hence, at least n+2 reactions), and * dimension at least 3. For the proof of Theorem <ref>, we refer the reader to Section <ref> for part (3) (Lemma <ref>); Section <ref> for parts (1), (2), and (5) (Theorem <ref>); and Section <ref> for part (4) (Theorem <ref>). Additionally, many of the lower bounds in Theorem <ref> are tight. Indeed, this is shown for parts (1)–(3) through the following network: {A+B → 2C → 2B,  C→ A } (Example <ref>). As for part (4), this bound is proven for networks that need not be at-most-bimolecular, and its tightness is shown in that context (Proposition <ref>). While Theorem <ref> concerns nondegenerate multistationarity, we also investigate the capacity for ACR together with degenerate multistationarity, specifically, in networks with 4 reactant complexes (Proposition <ref>). Finally, we prove two additional results in the spirit of Theorem <ref>. The first states that 3 is the minimum number of pairs of reversible reactions needed (in reversible networks) for multistationarity, even without ACR (Theorem <ref>). The second concerns networks that are not full-dimensional, and states the minimum number of reactant complexes needed for the coexistence of ACR and nondegenerate multistationarity is n-k+1, where 1 ≤ k ≤ n-2 is the number of linearly independent conservation laws (Theorem <ref>). As for our second category of results, we start with one-dimensional networks, a class of networks for which ACR <cit.>, multistationarity <cit.>, and even multistability <cit.> is well studied. Such networks do not allow for the coexistence of ACR and nondegenerate multistationarity (Proposition <ref>). Moreover, one-dimensional bimolecular networks can only be multistationary if they are degenerately so (Lemma <ref>). Moreover, we explicitly characterize all such degenerate networks (Lemma <ref>). Here our proofs make use of recent results of Lin, Tang, and Zhang <cit.>. Another class of at-most-bimolecular networks we analyze are those with exactly 2 species (Section <ref>). For such networks that are reversible, we characterize the property of unconditional ACR, which means that ACR occurs for all possible values of rate constants (Theorem <ref>). As for networks that need not be reversible, we show that ACR and multistationarity can coexist, but only in a degenerate way. 
Moreover, up to relabelling species, only two such networks allow such coexistence for a nonzero-measure subset of the space of reaction rate constants (Theorem <ref>). Our works fits into a growing body of literature that explores the minimal conditions needed for various dynamical behaviors, including the two properties that are the focus of the current work: multistationarity <cit.> and ACR <cit.>. There are additional such studies on multistability <cit.> and Hopf bifurcations <cit.> (which generate periodic orbits). For instance, in analogy to Theorem <ref> above, the presence of Hopf bifurcations requires an at-most-bimolecular network to have at least 3 species, 4 reactions, and dimension 3 <cit.>. This article is organized as follows: Section <ref> introduces reaction networks, multistationarity, and ACR. Section <ref> contains several results on steady states and their nondegeneracy. We use these results in Sections <ref> and <ref> to prove our main results. We conclude with a discussion in Section <ref>. § BACKGROUND This section recalls the basic setup and definitions involving reaction networks (Section <ref>), the dynamical systems they generate (Section <ref>), absolute concentration robustness (Section <ref>), and a concept pertaining to networks with only 1 species: “arrow diagrams” (Section <ref>). §.§ Reaction networks A reaction network G is a directed graph in which the vertices are non-negative-integer linear combinations of species X_1, X_2, …, X_n. Each vertex is a complex, and we denote the complex at vertex i by y_i=∑_j=1^ny_ijX_j (where y_ij∈_≥ 0) or y_i=(y_i1, y_i2, …, y_in ). Throughout, we assume that each species X_i, where i=1,2,…,n, appears in at least one complex. Edges of a network G are reactions, and it is standard to represent a reaction (y_i, y_j) by y_i → y_j. In such a reaction, y_i is the reactant complex, and y_j is the product complex. A species X_k is a catalyst-only species in reaction y_i → y_j if y_ik = y_jk. In examples, it is often convenient to write species as A,B,C,… (rather than X_1,X_2,X_3,…) and also to view a network as a set of reactions, where the sets of species and complexes are implied. The reaction network { 0 ← A → 2A ,  B ← A+B} has 2 species, 5 complexes, and 3 reactions. The species B is a catalyst-only species in the reaction B ← A+B. A reaction network is reversible if every edge of the graph is bidirected. A reaction network is weakly reversible if every connected component of the graph is strongly connected. Every reversible network is weakly reversible. The following network is reversible: { A+B ⇆ 2A,  2B ⇆ A, 0 ⇆ B }. One focus of our work is on at-most-bimolecular reaction networks (or, for short, bimolecular), which means that every complex y_i satisfies y_i1+y_i2 + … + y_in≤ 2. Equivalently, each complex has the form 0, X_i, X_i+X_j, or 2X_i (where X_i and X_j are species). The networks in Examples <ref>–<ref> are bimolecular. §.§ Mass-action systems Let r denote the number of reactions of G. We write the i-th reaction as y_i → y_i' and assign to it a positive rate constant κ_i∈_> 0. The mass-action system arising from a network G and a vector of positive rate constants κ=(κ_1, κ_2, …, κ_r), which we denote by (G,κ), is the following dynamical system arising from mass-action kinetics: dx/dt = ∑_i=1^r κ_i x^y_i (y_i'-y_i)  =:  f_κ(x) , where x^y_i := ∏_j=1^n x_j^y_ij. Observe that the right-hand side of the ODEs (<ref>) consists of polynomials f_κ,i, for i=1,…,n. For simplicity, we often write f_i instead of f_κ,i. 
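To make the sum in the mass-action ODEs concrete, the following short computational sketch assembles the polynomials f_κ,i directly from the reactant and product vectors of the running example { 0 ← A → 2A,  B ← A+B } introduced above. This sketch is an illustration added here, not part of the original text: the helper name mass_action_rhs and the use of the sympy library are our own choices.

```python
# A minimal sketch (not from the paper): assembling the mass-action right-hand
# sides dx/dt = sum_i kappa_i * x^{y_i} * (y_i' - y_i) symbolically.
import sympy as sp

def mass_action_rhs(reactions, rates, x):
    """reactions: list of (reactant, product) exponent vectors; rates: the kappa_i."""
    n = len(x)
    f = [sp.Integer(0)] * n
    for (y, yprime), kappa in zip(reactions, rates):
        monomial = sp.Integer(1)
        for j in range(n):
            monomial *= x[j] ** y[j]                      # x^{y_i}
        for j in range(n):
            f[j] += kappa * monomial * (yprime[j] - y[j])  # reaction vector y_i' - y_i
    return [sp.expand(fj) for fj in f]

x1, x2 = sp.symbols('x1 x2', positive=True)
k1, k2, k3 = sp.symbols('kappa1 kappa2 kappa3', positive=True)

# Running example { 0 <- A -> 2A,  B <- A+B } with species order (A, B):
# reactions A -> 0, A -> 2A, A+B -> B with rate constants kappa1, kappa2, kappa3.
reactions = [((1, 0), (0, 0)), ((1, 0), (2, 0)), ((1, 1), (0, 1))]
print(mass_action_rhs(reactions, [k1, k2, k3], [x1, x2]))
# expected (up to term order): [-kappa1*x1 + kappa2*x1 - kappa3*x1*x2, 0]
```

Grouping the resulting terms by the distinct reactant monomials (here x_1 and x_1x_2) is exactly the factorization of the right-hand side as a constant matrix times a vector of reactant monomials that is used repeatedly later in the paper.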
The question of which polynomials f_i can appear as right-hand side of mass-action ODEs is answered in the following result <cit.>. Let f:ℝ^n →ℝ^n be a polynomial function, that is, assume that f_i ∈ℝ[x_1, x_2 …, x_n] for i=1,2,…, n. Then f arises as the right-hand side of the differential equations (<ref>) (for some choice of network G and vector of positive rate constants κ) if and only if, for all i=1,2,…,n, every monomial in f_i with negative coefficient is divisible by x_i. Next, observe that the mass-action ODEs (<ref>) are in the linear subspace of ℝ^n spanned by all reaction vectors y_i' - y_i (for i=1,2,…, r). We call this the stoichiometric subspace and denote it by S. The dimension of a network is the dimension of its stoichiometric subspace. (This dimension is sometimes called the “rank” <cit.>.) In particular, if (S)=n (that is, S= ℝ^n), we say that G is full-dimensional. A trajectory x(t) of (<ref>) with initial condition x(0) = x^0 ∈_>0^n remains, for all positive time, in the following stoichiometric compatibility class of G <cit.>: P_x(0) :=  (x(0)+S)∩_≥ 0^n . For full-dimensional networks, there is a unique stoichiometric compatibility class: P=_≥ 0^n. For networks that are not full-dimensional, every nonzero vector w in S^⊥ yields a (linear) conservation law ⟨ w,x ⟩ = ⟨ w, x(0) ⟩ that is satisfied by every x ∈ P_x(0), where ⟨ -,- ⟩ denotes the usual inner product on ^n. [<ref>] The network { 0 κ_1← A κ_2→ 2A ,  B κ_3← A+B} has a one-dimensional stoichiometric subspace (spanned by (1,0)) and generates the following mass-action ODEs (<ref>): dx_1/dt =  -κ_1x_1+κ_2x_1-κ_3x_1 x_2=x_1(-κ_1+κ_2-κ_3 x_2) dx_2/dt =  0. Observe that the negative monomials in the first ODE are -κ_1x_1 and -κ_3x_1 x_2, and each of these is divisible by x_1, which is consistent with Lemma <ref>. Next, the stoichiometric compatibility classes (<ref>) are rays of the following form (where T>0): { (x_1,x_2) ∈ℝ^2_≥ 0| x_2 = T} . The equation x_2 = T is the unique (up to scaling) conservation law. A steady state of a mass-action system is a non-negative vector x^*∈_≥ 0^n at which the right-hand side of the ODEs (<ref>) vanishes: f_κ(x^*)=0. Our main interest in this work is in positive steady states x^*∈_> 0^n. The set of all positive steady states of a mass-action system can have positive dimension in ^n, but this set typically intersects each stoichiometric compatibility class in finitely many points <cit.>. Finally, a steady state x^* is nondegenerate if (df_κ(x^*)|_S)=S, where df_κ(x^*) is the Jacobian matrix of f_κ evaluated at x^*. We consider multiple steady states at two levels: systems and networks. A mass-action system (G,κ) is multistationary (respectively, nondegenerately multistationary) if there exists a stoichiometric compatibility class having more than one positive steady state (respectively, nondegenerate positive steady state). A reaction network G is multistationary if there exists a vector of positive rate constants κ such that (G,κ) is multistationary. For a reaction network G, we let cap_pos(G) (respectively, cap_nondeg(G)) denote the maximum possible number of positive steady states (respectively, nondegenerate positive steady states) in a stoichiometric compatibility class. [<ref>] We return to the network G={ 0 κ_1← A κ_2→ 2A ,  B κ_3← A+B} and its ODEs (<ref>). A direct computation reveals that when κ_1 ≥κ_2, there is no positive steady state. 
On the other hand, when κ_2 > κ_1, the steady states form exactly one stoichiometric compatibility class (<ref>) – namely, the one given by T = (κ_2 - κ_1)/κ_3 – and all such steady states are degenerate. Hence, G is multistationary but not nondegenerately multistationary. [<ref>] The following (full-dimensional) reaction network and indicated rate constants yield a mass-action system with 3 nondegenerate positive steady states <cit.>: { A+B [1/4]1/32⇆ 2A, 2B [1/4]1⇆ A, 0 [1]1⇆ B } . Therefore, this network is nondegenerately multistationary. §.§ Deficiency and absolute concentration robustness The deficiency of a reaction network G is δ = m - ℓ- (S), where m is the number of vertices (or complexes), ℓ is the number of connected components of G (also called linkage classes), and S is the stoichiometric subspace. The deficiency is always non-negative <cit.>, and it plays a central role in many classical results on the dynamical properties of mass-action systems <cit.>. Two such results are stated below. These results, which are due to Feinberg and Horn <cit.>, are stated for weakly reversible networks (the setting in which we use these results later). Deficiency-zero networks are not multistationary. Moreover, if G is a weakly reversible network with deficiency zero, then for every vector of positive rate constants κ, the mass-action system (G,κ) admits a unique positive steady state in every stoichiometric compatibility class. Consider a weakly reversible network G with connected components (linkage classes) G_1, G_2, …, G_ℓ. Let δ denote the deficiency of G, and (for all i=1,2,…, ℓ) let δ_i denote the deficiency of G_i. Assume the following: * δ_i ≤ 1 for all i=1,2,…,ℓ, and * δ_1+δ_2+ … + δ_ℓ = δ. Then G is not multistationary: for every vector of positive rate constants κ, the mass-action system (G,κ) admits a unique positive steady state in every stoichiometric compatibility class. Our next topic, ACR, like multistationarity, is analyzed at the level of systems and also networks. Let X_i be a species of a reaction network G with r reactions. * For a fixed vector of positive rate constants κ∈ℝ^r_>0, the mass-action system (G,κ) has absolute concentration robustness (ACR) in X_i if (G,κ) has a positive steady state and in every positive steady state x ∈_> 0^n of the system, the value of x_i is the same. This value of x_i is the ACR-value of X_i. * The reaction network G has unconditional ACR in species X_i if, for every vector of positive rate constants κ∈ℝ^r_>0, the mass-action system (G,κ) has ACR in X_i. ACR requires the existence of a positive steady state (Definition <ref>(1)). This requirement is sometimes not included in definitions of ACR in the literature. However, this is not an extra requirement for some of the networks we consider, namely, weakly reversible networks, for which positive steady states are guaranteed to exist (see Deng et al. <cit.> and Boros <cit.>). The property of unconditional ACR is often too restrictive. Thus, many of our results focus on ACR (or other properties) that hold for some full-dimensional subset of the parameter space of rate constants ^r_> 0 (where r is the number of reactions of a given network). The Lesbesgue measure of such a subset is nonzero. For simplicity, we use “measure” to mean Lebesgue measure. [<ref>] We revisit the network { 0 κ_1← A κ_2→ 2A ,  B κ_3← A+B}. 
From our earlier analysis, the mass-action system has ACR in B when κ_2 > κ_1 (which defines a nonzero-measure subset of the rate-constants space ℝ^3_> 0), but lacks ACR when κ_2 ≤κ_1 (as there are no positive steady states). Consider the following network G, which is bimolecular and full-dimensional: { 2X_2 κ_3← X_2 [κ_2]κ_1⇄ X_1+X_2 κ_4→ X_1 }. The mass-action ODEs are as follows: ẋ_1  = κ_1x_2 -κ_2x_1 x_2  =  (κ_1-κ_2x_1)x_2 ẋ_2  = κ_3x_2-κ_4x_1x_2  =  (κ_3-κ_4x_1)x_2 . When κ_1/κ_2≠κ_3/κ_4, there are no positive steady states and hence no ACR. Now assume κ_1/κ_2=κ_3/κ_4. In this case, the positive steady states are defined by the line x_1=κ_1/κ_2, and so the system is multistationary and has ACR in species X_1. However, all the steady states of this system are degenerate. Consider the following network <cit.>, which we call G: { A [κ_1]κ_2⇆ A + B, 2B [κ_3]κ_4⇆ 3B, A [κ_5]κ_6⇆ 2A } . The mass-action ODEs (<ref>) are as follows: dx_1/dt = κ_5x_1 - κ_6x_1^2 dx_2/dt  = κ_1x_1-κ_2x_1x_2+κ_3x_2^2-κ_4x_2^3. It follows that G has unconditional ACR in species A with ACR-value κ_5/κ_6 (the existence of positive steady states comes from the fact that G is reversible; recall Remark <ref>). The following result, which is <cit.>, concerns ACR in one-dimensional networks. Let G be a one-dimensional network with species X_1, X_2, …, X_n. If G has unconditional ACR in some species X_i^*, then the reactant complexes of G differ only in species X_i^* (more precisely, if y and y' are both reactant complexes of G, then y_i=y'_i for all i ∈{1,2,…,n}∖{i^* }). §.§ Arrow diagrams In this subsection, we recall the arrow diagrams associated to one-species networks. These diagrams are useful for stating results about such networks <cit.>. Let G be a reaction network with only one species X_1. Let m denote the number of (distinct) reactant complexes of G, which we list in increasing order of molecularity: a_1 X_1, a_2X_1, …, a_m X_1 (so, a_1 < a_2 < … < a_m). The arrow diagram of G is the vector ρ = (ρ_1, ρ_2, … , ρ_m) ∈{→ , ←, ↔}^m defined by: ρ_i  := {[ → if for every reaction a_i X_1 → bX_1 in G, the inequality b > a_i holds; ← if for every reaction a_i X_1 → bX_1 in G, the inequality b < a_i holds; ↔ otherwise. ].   * The network {0 ← A,  2A → 3A} has arrow diagram (←, →). * The network {0 ← A,  A → 2A,  2A → 3A} has arrow diagram (↔, →). It is often useful to consider the arrow diagrams of “embedded” one-species networks, as follows. Let G be a reaction network with species X_1, X_2, …, X_n. Given a species X_i, the corresponding embedded one-species network of G is obtained by deleting some (possibly empty) subset of the reactions, replacing each remaining reaction a_1 X_1 + a_2 X_2 + … +a_s X_s → b_1 X_1 + b_2 X_2 + … +b_s X_s by the reaction a_i X_i → b_i X_i, and then deleting any trivial reactions (i.e., reactions of the form a_i X_i → a_i X_i, in which the reactant and product complexes are equal) and keeping only one copy of duplicate reactions. Consider the network G={ 0⇆ B → A }. The following networks are embedded one-species networks of G: {0 → B}, {0 ← B }, {0 ⇆ B }, and {0 → A}. § RESULTS ON STEADY STATES AND NONDEGENERACY This section contains results on the steady states of mass-action systems. We use these results in later sections to prove our main results. Section <ref> analyzes the steady states of full-dimensional networks, while Section <ref> pertains to non-full-dimensional networks.
Next, Section <ref> focuses on bimolecular networks and investigates scenarios in which the right-hand side of a mass-action ODE vanishes. Finally, Section <ref> concerns bimolecular networks that are reversible. §.§ Full-dimensional networks Consider a reaction network G with n species, r reactions, and exactly j reactant complexes[ A network has exactly j reactant complexes if the set of distinct reactant complexes has size j.]; and let κ^* ∈ℝ^r_>0 be a vector of positive rate constants. We often rewrite the mass-action ODE system (<ref>) for (G,κ^*) as follows: [ dx_1/dt; dx_2/dt; ⋮; dx_n/dt; ] =  N [ m_1; m_2; ⋮; m_j ] , where N is an (n × j)-matrix (with real entries) and m_1,m_2,…,m_j are distinct monic monomials in x_1, x_2, …,x_n given by the reactant complexes. [<ref>] The network { 2X_2 κ_3← X_2 [κ_2]κ_1⇄ X_1+X_2 κ_4→ X_1 } has two reactant complexes, which yield the monomials m_1:=x_2 and m_2:=x_1x_2. Consider (κ_1,κ_2,κ_3,κ_4)=(1,2,3,6) (so, κ_1/κ_2=κ_3/κ_4 holds). Now the matrix N, as in (<ref>), is as follows: N  := [ 1 -2; 3 -6 ] . This matrix N does not have full rank, and we saw earlier that all steady states of this mass-action system are degenerate. In the next result, part (1) asserts that this phenomenon holds in general. Let G be a full-dimensional reaction network with n species, and κ^* be a vector of positive rate constants. Let N be a matrix defined, as in (<ref>), by the mass-action ODE system of (G,κ^*). * If rank(N) ≤ n-1, then every positive steady state of (G,κ^*) is degenerate. * If rank(N) = n and G has exactly n+1 reactant complexes, then the positive steady states of (G,κ^*) are the positive roots of a system of binomial equations (sharing a common monomial m_n+1) of the following form: m_i - β_i m_n+1 =  0 for i=1,2,…, n , where β_1,β_2, …, β_n∈ℝ and m_1, … , m_n+1 are distinct monic monomials in x_1, x_2, …,x_n. * If G has exactly n+1 reactant complexes and (G,κ^*) has a nondegenerate, positive steady state, then (G,κ^*) is not multistationary. Assume (G,κ^*) is a full-dimensional mass-action system in n species, and let N be as in (<ref>). First, we prove (1). Assume rank(N) ≤ n-1, and let x^* be a positive steady state. It follows that the polynomials f_i, as in (<ref>), are linearly dependent (over ℝ). Hence, the Jacobian matrix – even before evaluating at x^* – has rank less than n. Thus, the image of the Jacobian matrix, after evaluating at x^*, has dimension less than n, i.e., (df(x^*)|_S)≠ℝ^n =S. Hence, x^* is degenerate. Next, we prove (2). As in equation (<ref>), we write the mass-action ODEs for (G,κ^*) as [ dx_1/dt; dx_2/dt; ⋮; dx_n/dt; ] =  N [ m_1; ⋮; m_n; m_n+1; ] , where N is n × (n+1) and the m_i's are distinct monic monomials in x_1,x_2, …,x_n. As G is full-dimensional and rank(N) =n, we can relabel the m_i's, if needed, so that the square sub-matrix of N formed by the first n columns has rank n. Thus, by row-reducing N, we obtain a matrix of the following form (where β_1, β_2, …,β_n ∈ℝ): N'  := [[ -β_1; I_n -β_2; ⋮; -β_n; ]] . We conclude from the above discussion that the positive steady states of (G,κ^*) are the positive roots of the following n binomial equations (which are in the desired form): m_i - β_i m_n+1 = 0 for i=1,2,…, n . Before moving on to part (3), we summarize what we know (so we can use it later).
The positive steady states are the roots of the binomials (<ref>), which we rewrite using Laurent monomials (our interest is in positive roots, so there is no issue of dividing by zero): x_1^a_i1 x_2^a_i2… x_n^a_in := m_i/m_n+1 = β_i for i=1,2,…, n . We apply the natural log to (<ref>) and obtain the following, which involves the n × n matrix A:=(a_ij): A [ ln(x_1); ln(x_2); ⋮; ln(x_n) ] = [ ln(β_1); ln(β_2); ⋮; ln(β_n) ] =: ln (β) . Now we prove (3). Assume x^* is a nondegenerate, positive steady state. (We must show that no other positive steady states exist.) By part (1), the n × (n+1) matrix N has rank n, so the proof of part (2) above applies. Assume for contradiction that x^** is a positive steady state, with x^**≠ x^*. Then, by (<ref>), the linear system Ay=ln(β) has more than one solution, and so rank(A) ≤ n-1. It follows that the set of positive steady states, {(e^y_1, e^y_2, …, e^y_n) | Ay=ln(β)}, is positive-dimensional and so (by the Inverse Function Theorem and the fact that G is full-dimensional) all positive steady states of (G,κ^*) are degenerate. This is a contradiction, as x^* is nondegenerate. For algebraically inclined readers, observe that the equations in Proposition <ref>(2) define a toric variety. Additionally, every such variety has at most one irreducible component that intersects the positive orthant <cit.>. This fact can be used to give a more direct proof of Proposition <ref>(3). The end of the proof of Proposition <ref> concerns nondegenerate positive steady states and their relation to the dimension of the set of positive steady states. More ideas in this direction are explored in the recent work of Feliu, Henriksson, and Pascual-Escudero <cit.>. Let G be a full-dimensional reaction network with n species, let κ^* be a vector of positive rate constants, and let f_1,f_2,…, f_n denote the right-hand sides of the mass-action ODEs of (G,κ^*). If f_i is the zero polynomial, for some i∈{1,…,n}, then every positive steady state of (G,κ^*) is degenerate. This result follows directly from Proposition <ref>(1) and the fact that, in this case, the rank of N, as in (<ref>), is strictly less than n. The next two results pertain to networks with few reactant complexes (at most n, where n is the number of species) and many reactant complexes (at least n), respectively. Let G be a reaction network with n species. * If G has exactly 1 reactant complex, then, for every vector of positive rate constants κ^*, the mass-action system (G, κ^*) has no positive steady states. * If G has exactly j reactant complexes, where 2 ≤ j ≤ n (in particular, n ≥ 2), and G is full-dimensional, then every positive steady state (of every mass-action system defined by G) is degenerate. Assume G has n species, which we denote by X_1, X_2, …, X_n, with exactly j reactant complexes, for some 1≤ j ≤ n. Let κ^* be a vector of positive rate constants. As in (<ref>), we write the mass-action ODE system arising from (G, κ^*) as follows: [ dx_1/dt; ⋮; dx_n/dt; ] =  N [ m_1; ⋮; m_j; ]  =: [ f_1; ⋮; f_n; ]  , where N:=(N_ij) is an (n × j)-matrix (with entries in ℝ) and m_1, …, m_j are distinct monic monomials in x_1, …, x_n (as G has n species and j reactant complexes). We first prove part (1). In this case, the right-hand sides of the ODEs have the form f_i = c_i ∏_k=1^nx_k^a_k, with at least one c_i≠ 0. It follows that there are no positive steady states. We prove part (2). Assume that G is full-dimensional (the stoichiometric subspace is ℝ^n) and that 2 ≤ j ≤ n. 
Let x^*=(x_1^*,x_2^*,…,x_n^*) be a positive steady state. We must show x^* is degenerate. We first consider the subcase when the rank of the matrix N is at most (n-1). By Proposition <ref>(1), every positive steady state is degenerate. Now we handle the remaining subcase, when N has rank n (and hence, N is n× n). Now, solving the steady-state equations f_1= … = f_n=0 can be accomplished by multiplying the expression in (<ref>) by N^-1, which implies that every monomial m_1,…, m_n evaluates to zero at steady state. Hence, no positive steady states exist. If G is a full-dimensional network with n species and exactly j reactant complexes, where j ≥ n, then: * There exists a vector of positive rate constants κ^*, such that the corresponding matrix N, as in (<ref>), has rank n. * If there exists a vector of positive rate constants κ^* such that the matrix N does not have rank n, then there exists a vector of positive rate constants κ^** such that (G,κ^**) has no positive steady states. Assume G is full-dimensional, with n species, r reactions (denoted by y_1 → y_1', … y_r → y_r'), and exactly j reactant complexes, where j ≥ n. We begin with part (1). Let κ=(κ_1,…, κ_r) denote the vector of unknown rate constants (each κ_i is a variable). Let N be the (n × j) matrix for (G,κ) in the sense of N in (<ref>). More precisely, the entries of N are ℤ-linear combinations of the κ_i's, such that, for every vector of positive rate constants κ^* ∈_> 0^r, the evaluation N|_κ=κ^* is the matrix N as in (<ref>) for (G, κ^*). As G is full-dimensional, there are no ℝ-linear relations among the n rows of N. Hence, the size-n minors of N define a (possibly empty) measure-zero subset V ⊆^r_> 0. Thus, ^r_> 0∖ V is nonempty, and every κ^* ∈^r_> 0∖ V yields a matrix N=N|_κ=κ^* with rank n. This proves part (1). For part (2), suppose that there exists κ^* ∈_> 0^r such that the resulting matrix N has rank strictly less than n. It follows that there is a linear relation: c_1f_κ^*,1+… + c_nf_κ^*,n = 0 , where c_1, …, c_n are real numbers – not all 0 – and the f_κ^*,i denote the right-hand sides of the mass-action ODEs for (G,κ^*). On the other hand, for unknown rate constants κ, as in the proof above for part (1), c_1f_κ,1+… + c_nf_κ,n is not the zero polynomial. Thus, when we rewrite this expression as a sum over r reactions y_i → y_i' as follows: c_1f_κ,1+… + c_nf_κ,n = d_1 κ_1 x^y_1 + … + d_r κ_r x^y_r, where d_i ∈ℤ for all i, we conclude that d_i ≠ 0 for some i. By relabeling reactions, if needed, we may assume that i=1. Now consider the following vector of positive rate constants κ_ϵ^* := (κ_1^* + ϵ , κ_2^*, …, κ_r^*), for some ϵ>0. Assume for contradiction that (G,κ_ϵ^*) has a positive steady state x^*. At steady state, f_κ^*_ϵ,i evaluates to 0, for all i, and this yields the first equality here: 0 = ( c_1f_κ^*_ϵ,1+… + c_nf_κ^*_ϵ,n) |_x=x^* =  c_1f_κ^*,1|_x=x^*+… + c_nf_κ^*,n|_x=x^* + ϵ d_1 x^y_1|_x=x^* = ϵ d_1 x^y_1|_x=x^* , and the second and third equalities come from the fact that the mass-action ODEs are linear in the rate constants and from equation (<ref>), respectively. We obtain x^y_1|_x=x^*=0, which contradicts the fact that x^* is a positive steady state. This concludes the proof. The next proposition returns to a topic from Proposition <ref>, namely, networks with n species and n+1 reactant complexes. Assume G is a full-dimensional network, with n species and exactly n+1 reactant complexes, which we denote as follows: y_i1 X_1 + y_i2 X_2 + … y_in X_n for i=1,2,…,n+1 . 
Let A denote the n × n matrix obtained from the (n+1) × n matrix Y:=(y_ij) by subtracting the last row from every row and then deleting the last row. * If rank(A) = n, then G is not nondegenerately multistationary. * If rank(A) ≤ n-1, then there exists a vector of positive rate constants κ^* such that (G,κ^*) has no positive steady states. Case 1: rank(A) = n. Fix an arbitrary vector of positive rate constants κ^*. We must show that (G, κ^*) is not nondegenerately multistationary. Let N denote the n × (n+1) matrix defined by (G, κ^*), as in (<ref>). We consider two subcases. Subcase: rank(N) ≤ n-1. In this subcase, Proposition <ref>(1) implies that every positive steady state of (G, κ^*) is degenerate, and so (G, κ^*) is not nondegenerately multistationary. Subcase: rank(N) = n. Part (2) of Proposition <ref> pertains to this setting, so we can follow that proof. In particular, equation (<ref>) – the (n × n) matrix A there exactly matches the matrix A here – implies that the positive steady states are defined by a linear system of the form Ay=ln(β), where y=(ln(x_1), …, ln(x_n))^⊤. Hence, as rank(A) = n, we have at most one positive steady state and so (G, κ^*) is not multistationary. Case 2: rank(A) ≤ n-1. We must show that there exists a choice of rate constants so that the resulting system has no positive steady states. Proposition <ref>(1) implies that there exists κ^* such that the following holds: the matrix N defined by (G,κ^*) has (full) rank n. Fix such a choice of κ^*. If (G, κ^*) has no positive steady states, then we are done. Therefore, for the rest of the proof, we assume that (G, κ^*) admits a positive steady state. In what follows, we need to consider additional vectors of positive rate constants (besides κ^*) and their corresponding matrices N, as in (<ref>). Therefore, as in the proof of Proposition <ref>(1), let κ=(κ_1,…, κ_r) (where r is the number of reactions) denote the vector of unknown rate constants, and let N be the n × (n+1) matrix for (G,κ) in the sense of N in (<ref>), so that for every vector of positive rate constants κ^* ∈_> 0^r, the evaluation N|_κ=κ^* is the matrix N as in (<ref>). We now follow the ideas in the proof of Proposition <ref>, part (2), with the difference being that we now consider unknown rate constants κ. The mass-action ODEs for (G,κ) are given by: [ dx_1/dt; ⋮; dx_n/dt; ] = N[ m_1; ⋮; m_n+1; ] , where m_1,…, m_n+1 are distinct monic monomials in x_1,x_2, …,x_n. Our next aim is to row-reduce N (over the field ℚ(κ_1,…, κ_r)). Accordingly, for 1≤ k ≤ n+1, let [B_k] denote the determinant of the matrix obtained from N by removing the k-th column. By construction, each [B_k] is in ℤ[κ_1,…, κ_r]. We claim that, for all 1 ≤ k ≤ n+1, the polynomial [B_k] is nonzero. By symmetry among the monomials m_i, it suffices to show that [B_n+1] is nonzero. To show this, assume for contradiction that [B_n+1]=0. Then, N can be row-reduced to a matrix in which the last row has the form (0,0,…, 0, ω), where 0 ≠ω∈ℚ(κ_1,…, κ_r). Now consider the evaluation at κ=κ^*. By (<ref>), the matrix N = N|_κ=κ^* has (full) rank n, so ω |_κ=κ^* is nonzero. However, this implies that positive steady states of (G,κ^*) satisfy ω|_κ=κ^* m_n+1 = 0, much like in (<ref>). Thus, (G,κ^*) has no positive steady states, which is a contradiction, and hence our claim holds. 
Next, as [B_n+1] is nonzero, we can apply a version of Cramer's rule to row-reduce N to the following matrix (where I_n denotes the size-n identity matrix): N'  = [ [ (-1)^n-1[B_1][B_n+1]; I_n (-1)^n-2[B_2][B_n+1]; ⋮; (-1)^0[B_n][B_n+1]; ]] . Thus, as in (<ref>), the positive steady states are the positive roots of the equations m_i - β_i m_n+1=0 (for i=1,2,…,n), where: β_i  :=  (-1)^n-i+1[B_i]/[B_n+1] for  i=1,2,…,n . Thus, β_i|_κ= κ^* >0 (for all i=1,2,…,n), since (G,κ^*) admits a positive steady state. We conclude from this fact, plus the claim proven earlier (namely, that [B_ℓ] ≠ 0 for all ℓ), that the following is an open subset of ℝ^r_>0 that contains κ^*: Σ := {κ̅∈ℝ^r_>0 :  β_1|_κ= κ̅ >0, …, β_n|_κ= κ̅ >0,  [B_1]|_κ=κ̅≠ 0, …, [B_n+1]|_κ= κ̅≠ 0 }. For the rest of the proof, we restrict our attention to rate constants, like κ^*, that are in Σ. For such rate constants, like in (<ref>–<ref>), the positive steady states are the roots of the following equation A [ ln(x_1); ln(x_2); ⋮; ln(x_n) ] = [ ln(β_1); ln(β_2); ⋮; ln(β_n) ] =: ln (β) . Next, as rank(A) ≤ n-1, there exists a nonzero vector γ∈ℝ^n in the orthogonal complement of the column space of A. By relabeling the m_i's (which permutes the columns of N), if needed, we may assume that γ_1 ≠ 0. By construction of γ and equation (<ref>), we have ⟨γ, ln(β) ⟩ = 0, which is readily rewritten as follows: ( [B_1]/[B_n+1])^γ_1…((-1)^k+1[B_k]/ [B_n+1] )^γ_k…((-1)^n+1 [B_n]/ [B_n+1] )^γ_n = 1 . For ε>0, let κ^*_ϵ denote the vector of rate constants obtained from κ^* by scaling by (1+ ε) all rate constants of reactions in which the reactant generates the monomial m_1. As Σ is an open set, κ^*_ϵ∈Σ for ε sufficiently small. Also, by construction, the matrix N |_κ = κ^*_ε is obtained from N |_κ = κ^* by scaling the first column by (1+ ε). So, for 2 ≤ i ≤ n+1, we have [B_i] |_κ = κ^*_ε = (1+ ϵ) [B_i] |_κ = κ^*. Thus, by replacing κ^* by κ^*_ε, the left-hand side of equation (<ref>) is scaled by (1+ε)^-γ_1, and so there exists ε>0 for which equation (<ref>) does not hold (when evaluated at κ = κ^*_ε). Hence, this vector κ^*_ε yields a mass-action system (G,κ^*_ε) with no positive steady states, as desired. Proposition <ref> implies that for networks with at least n reactant complexes (where n is the number of species), some choice of rate constants yields a matrix N with (full) rank n. Our next result shows that when this condition holds (even for networks with fewer reactants), every species appears in at least one reactant complex. We introduce the following shorthand (which we use in several of the next results): a complex y_ℓ 1X_1+ y_ℓ 2X_2 + … + y_ℓ nX_n involves species X_i if y_ℓ i≠ 0. For instance, X_1+X_2 involves X_2, but X_1+X_3 does not. Let G be a full-dimensional reaction network with n species, let κ^* be a vector of positive rate constants, and let N be the matrix for (G,κ^*), as in (<ref>). If rank(N) = n and (G,κ^*) has a positive steady state, then for every species X_i, at least one reactant complex of G involves X_i. We prove the contrapositive. Assume that there is a species X_i such that for every reactant complex a_1X_1+ a_2 X_2 + … +a_nX_n we have a_i = 0. Then, by Lemma <ref>, the right-hand side of the mass-action ODE for X_i, which we denote by f_i, is a sum of monomials, all of which have positive coefficients. But (G,κ^*) has a positive steady state, so f_i must be 0. We conclude that the i-th row (of the n rows) of N is the zero row and so rank(N) ≤ n-1. 
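Before leaving full-dimensional networks, the following minimal sympy sketch makes the binomial parametrization of part (2) above concrete. The network used here is a hypothetical illustration, not one from the text: 0 →(k_1) X_1, X_1 →(k_2) X_2, X_1+X_2 →(k_3) 0, which is full-dimensional with n = 2 species and n+1 = 3 reactant complexes; the rate-constant names k_1, k_2, k_3 are assumptions made only for this sketch.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
k1, k2, k3 = sp.symbols('k1 k2 k3', positive=True)

# Reactant monomials m1, m2, m3 for the reactants 0, X1, X1+X2.
m = sp.Matrix([1, x1, x1 * x2])

# N as in the text (rows = species, columns = reactant monomials):
#   dx1/dt = k1*1 - k2*x1 - k3*x1*x2
#   dx2/dt =        k2*x1 - k3*x1*x2
N = sp.Matrix([[k1, -k2, -k3],
               [0,   k2, -k3]])
assert N.rank() == 2                 # full rank n = 2, so part (2) applies

# Row-reduce N to the form [ I_n | -beta ] and read off beta_1, beta_2.
Nred, _ = N.rref()
beta = [-Nred[i, 2] for i in range(2)]
print([sp.simplify(b) for b in beta])          # expect [2*k3/k1, k3/k2]

# The binomial steady-state system m_i - beta_i * m_3 = 0 has a single
# positive root, consistent with part (3).
eqs = [m[i] - beta[i] * m[2] for i in range(2)]
print(sp.solve(eqs, [x1, x2], dict=True))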
§.§ Networks with conservation laws The following result is similar to several results in the prior subsection, but pertains to networks that are not full-dimensional. Let G be a reaction network with n ≥ 3 species. Assume that G is (n-k)-dimensional, where k ≥ 1 (so, G has k conservation laws). If G has exactly j reactant complexes, for some j ∈{2,3,…, n-k}, then every positive steady state (of every mass-action system defined by G) is degenerate. We mimic the proofs of Propositions <ref>(1) and <ref>. Let κ^* be a vector of positive rate constants. Let N be an (n × j) matrix defined, as in (<ref>), by (G,κ^*): [ dx_1/dt; ⋮; dx_n/dt; ] =  N [ m_1; ⋮; m_j; ]  =: [ f_1; ⋮; f_n; ]  , where m_1, …, m_j are distinct monic monomials in x_1, …, x_n. We consider two cases. First assume that rank(N) ≤ n-k-1. Then the polynomials f_i span a subspace of dimension ≤ n-k-1 and hence the Jacobian matrix – even before evaluating at a positive steady state – has rank ≤ n-k-1. Every positive steady state is therefore degenerate. Consider the remaining case: rank(N) = n-k (so, j=n-k). In this case, multiplication by N defines an injective map ℝ^n-k→ℝ^n. Hence, by (<ref>), the steady-state equations f_1= … = f_n=0 imply the monomial equations m_1=…= m_j=0. Thus, there are no positive steady states. The next result concerns networks with n-1 conservation laws, that is, one-dimensional networks. Let G be a one-dimensional reaction network, and let κ^* be a vector of positive rate constants. If (G,κ^*) has ACR, then (G,κ^*) is not nondegenerately multistationary. Assume that G is one-dimensional, with n species. Thus, G has n-1 linearly independent conservation laws. Let κ^* be a vector of positive rate constants for which there is ACR. We may assume that the ACR species is X_1 (by relabeling species, if needed). Let f_1,…, f_n denote the right-hand sides of the mass-action ODEs arising from (G,κ^*). Let x^*=(x_1^*, …, x_n^*) denote an arbitrary positive steady state of (G,κ^*). (The ACR-value is x^*_1.) Let P_x^* denote the (one-dimensional) stoichiometric compatibility class that contains x^*. It suffices to show that (1) x^* is the unique positive steady state in P_x^* or (2) x^* is degenerate. We consider two cases. Case (a): X_1 is not a catalyst-only species (in some reaction of G). This implies that f_2,…, f_n are all scalar multiples of f_1, and that the compatibility class P_x^* is defined by n-1 conservation laws of the form x_j=a_j x_1+b_j, where a_j,b_j ∈ℝ, for j∈{2,3,…, n}. By substituting these n-1 relations into f_1, we obtain a univariate polynomial in x_1, which we denote by h. If h has multiple positive roots, then there is no ACR, which is a contradiction. If, on the other hand, h does not have multiple positive roots, then P_x^* does not contain multiple positive steady states (that is, x^* is the unique positive steady state in P_x^*). Case (b): X_1 is a catalyst-only species in all reactions of G. In this case, f_1=0, and x_1=x_1^* is a conservation law of G, and it is one of the defining equations of the compatibility class P_x^*. By relabeling species X_2,…, X_n, if needed, we may assume that X_2 is not a catalyst-only species (as G is one-dimensional). Thus, we can “extend” the conservation law x_1=T to a “basis” of n-1 conservation laws that define the compatibility class P_x^*, by appending n-2 conservation laws of the form x_j=a_jx_2+b_j, where a_j,b_j ∈ℝ, for j∈{3,4,…, n}. 
Next, we substitute these n-2 conservation relations into f_2, which yields a polynomial in x_1 and x_2, which we denote by g. Consider the following set, which is the positive variety of g in ℝ^2_>0 (the values of x_3,…, x_n are free, so we ignore them): Σ := {x∈ℝ^2_>0| g(x_1,x_2)=0 }. By construction and the fact that there is ACR in X_1, the set Σ is contained in the hyperplane (line) x_1=x_1^*, and so is either one-dimensional or zero-dimensional. We consider these two subcases separately. First, assume that Σ is one-dimensional. In this subcase, Σ equals the intersection of the line x_1=x_1^* with the positive quadrant ℝ^2_>0, and so the compatibility class P_x^* consists entirely of positive steady states. The Inverse Function Theorem now implies that every positive steady state in P_x^* (in particular, x^*) is degenerate. Consider the remaining subcase, in which Σ is zero-dimensional (that is, Σ consists of finitely many points). It follows that g is either non-negative on ℝ^2_>0 or non-positive on ℝ^2_>0, and so f_2 is either non-negative on P_x^* or non-positive on P_x^*. Consequently, as every f_i is a scalar multiple of f_2, the steady state x^* is degenerate. §.§ Bimolecular networks We begin this subsection with a result that clarifies how the polynomials arising in mass-action ODEs are constrained when the network is bimolecular. Consider a bimolecular mass-action system (G,κ^*) with n species. Let f_i be the right-hand side of the mass-action ODE for species X_i (for some 1 ≤ i ≤ n). Fix positive values a_j >0 for all j ∈{1,2,…, n}∖{i}. Let g_i denote the univariate polynomial obtained by evaluating f_i at x_j=a_j for all j ∈{1,2,…, n}∖{i}. If the polynomial g_i is nonzero, then g_i has at most one sign change and hence has at most one positive root. Let g_i denote the nonzero polynomial obtained by evaluating f_i at x_j=a_j for all j≠ i. Several properties of g_i arise from the fact that G is bimolecular: (1) deg(g_i) ≤ 2, (2) the coefficient of x_i^2 is non-positive, and (3) the constant coefficient is non-negative. Thus, g_i has at most one sign change, and so Descartes' rule of signs implies that g_i has at most one positive root. The next two results pertain to bimolecular mass-action systems in which the right-hand side of some ODE vanishes (Proposition <ref>) or vanishes when evaluated at an ACR-value (Proposition <ref>). We motivate these results through the following example. [Enlarged Shinar-Feinberg network] A common way to construct a network with an ACR species (e.g., A) is through the existence of an f_i that becomes zero when we substitute the ACR-value in place of the species. We illustrate this idea through the following network: G = {A+B κ_1⟶ 2B ,  B κ_2⟶ A,  0 κ_3⟵ B+C κ_4⟶ 2B,  0 ⟶ C} . This network is constructed from a well-studied network first introduced by Shinar and Feinberg <cit.> by adding three reactions involving a new species (C). We examine the mass-action ODE for B: dx_2/dt = κ_1x_1x_2-κ_2 x_2 - κ_3 x_2x_3+ κ_4 x_2x_3 = x_2(κ_1 x_1-κ_2) + x_2x_3(-κ_3+κ_4)  =  g+h  =:  f_2 , where g:= x_2(κ_1 x_1-κ_2) (which is the right-hand side of the ODE for X_2 in the original Shinar-Feinberg network) and h:= x_2x_3(-κ_3+κ_4) (arising from the additional reactions, involving X_3). Assume κ_3=κ_4. It is easy to check that (G,κ) has a positive steady state and also has ACR in species X_1 with ACR-value α = κ_2/κ_1. Also, observe that f_2|_x_1=α=0, as a result of the equalities g|_x_1=α=0 and h=0 (which is due to the equality κ_3=κ_4).
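As a quick sanity check of the computation in this example, the following minimal sympy sketch verifies that, when κ_3 = κ_4, every positive steady state has x_1 = κ_2/κ_1 (ACR in A = X_1), with x_2 free and x_3 determined by x_2. The rate constant of the reaction 0 → C is not named in the text; it is written here as k5, an assumption made only for this illustration.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
k1, k2, k3, k4, k5 = sp.symbols('k1 k2 k3 k4 k5', positive=True)

# Mass-action right-hand sides for
#   A+B -k1-> 2B,  B -k2-> A,  B+C -k3-> 0,  B+C -k4-> 2B,  0 -k5-> C,
# with (x1, x2, x3) = (A, B, C); k5 is a hypothetical label for the last reaction.
f1 = -k1*x1*x2 + k2*x2
f2 =  k1*x1*x2 - k2*x2 - k3*x2*x3 + k4*x2*x3
f3 = -k3*x2*x3 - k4*x2*x3 + k5

subs34 = {k4: k3}                       # impose kappa_3 = kappa_4
print(sp.factor(f2.subs(subs34)))       # x2*(k1*x1 - k2), so f2 vanishes at x1 = k2/k1

# Every positive steady state: x1 = k2/k1 (the ACR-value), x3 = k5/(2*k3*x2).
print(sp.solve([f1.subs(subs34), f2.subs(subs34), f3.subs(subs34)],
               [x1, x3], dict=True))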
The next two results characterize which reactions can exist in such a situation. More precisely: * Proposition <ref> gives conditions that hold when a mass-action ODE is zero (effectively characterizing what reactions can yield h=0 in this case). * Proposition <ref> gives conditions that hold when a mass-action ODE is zero when evaluated at the ACR-value (effectively characterizing what reactions can yield f_2|_x_1=α=0 in this case, involving a decomposition like the one we observed above: f_2|_x_1=α=g|_x_1=α +h). The next result uses the following notation: [Empty complex] We introduce the dummy variable X_0:=0, so that (for instance) X_0 is the empty complex and X_i+X_0 := X_i for any species X_i. The following result clarifies which reactions can exist if some mass-action ODE is zero. Let G be a bimolecular reaction network with n species X_1, X_2, …, X_n. Fix 1 ≤ i ≤ n. Let κ^* denote a vector of positive rate constants for G, and let f_i denote the right-hand side of the mass-action ODE for species X_i in the system (G, κ^*). If f_i is the zero polynomial, then the set of reactions of G in which X_i is a non-catalyst-only species is a (possibly empty) subset of the reactions listed here (where our use of X_0 follows Notation <ref>): * the reactions of the form X_i +X_j → 2X_i (and we denote the rate constant by κ^*_1,j), where j∈{0,1,… n}∖{i}, * the reactions of the form X_i + X_j →⋆ (with rate constant κ^*_2,j,ℓ, where ℓ is an index for such reactions), where j∈{0,1,… n}∖{i} and ⋆ is any complex that does not involve X_i, and, additionally, the following relationships among the rate constants hold: κ^*_1,j = ∑_ℓκ^*_2,j,ℓ  for all  j∈{0,1,… n}∖{i} , where a rate constant is set to 0 if the corresponding reaction is not in G. Let κ^* be a vector of positive rate constants for a bimolecular network G with n species, and let f_i be the right-hand side of (G,κ^*) for the species X_i. Let Σ denote the set of reactions of G in which X_i is a non-catalyst-only species. Reactions not in Σ do not contribute to f_i, so we ignore them for the rest of the proof. We claim that for all reactions in Σ, the reactant complex is not one of the following 5 types: 0, X_j, X_j + X_j', 2X_j, 2X_i for any j,j' ∈{1,2,…, n }∖{i}. Indeed, any of the first 4 types of complexes would yield a constant term in f_i (when viewed as a polynomial in x_i) consisting of a sum of monomials with positive coefficients; similarly, the last type (2X_i) would yield a negative x_i^2 term (the fact that G is bimolecular is used here). However, f_i is zero, so the claim holds. It follows that, for every reaction in Σ, the reactant complex either is X_i or has the form X_i+X_j for some j ∈{1,2,…, n }∖{i}. It is straightforward to check that all possible such reactions (in which X_i is a non-catalyst-only species) are listed in the proposition. Next, reactions of type (1) in the proposition contribute positively to f_i, while those of type (2) contribute negatively, as follows: f_i  = ( κ^*_1,0 - ∑_ℓκ^*_2, 0, ℓ) x_i  + ∑_j∈{1,2,…, n }∖{i}( κ^*_1,j - ∑_ℓκ^*_2,j,ℓ) x_i x_j . As f_i=0, the coefficient of x_i and the coefficient of each x_ij in (<ref>) must be 0, which yields the desired equalities (<ref>). Proposition <ref> concerns general (bimolecular) mass-action systems, and now we consider those with ACR. The next result characterizes which reactions can exist if some mass-action ODE becomes zero when evaluated at the ACR-value. Let G be a bimolecular reaction network with species X_1, X_2, …, X_n, where n ≥ 2. 
Let κ^* denote a vector of positive rate constants. Assume that the mass-action system (G, κ^*) has ACR in species X_1 with ACR-value α >0. Fix 2 ≤ i ≤ n. Let f_i denote the right-hand side of the mass-action ODE for species X_i in the system (G, κ^*). If f_i≠ 0 and f_i|_x_1=α is the zero polynomial, then the set of reactions of G in which X_i is a non-catalyst-only species is a nonempty subset of the following reactions (the same as the ones in Proposition <ref>): * the reactions of the form X_i +X_j → 2X_i (and we denote the rate constant by κ^*_1,j), where j∈{0,1,…, n}∖{i}, * the reactions of the form X_i + X_j →⋆ (with rate constant κ^*_2,j,ℓ, where ℓ is an index for such reactions), where j∈{0,1,…, n}∖{i} and ⋆ is any complex that does not involve X_i. Additionally, the following relationship between the ACR-value α and the rate constants holds: α = ( (∑_ℓκ^*_2,0,ℓ) - κ^*_1,0 ) / ( κ^*_1,1 - (∑_ℓκ^*_2,1,ℓ) )  . In particular, the numerator and denominator of (<ref>) are nonzero. Finally, if n≥ 3, then the following relationships among the rate constants hold: κ^*_1,j = ∑_ℓκ^*_2,j,ℓ  for all  j∈{2,3,…, n}∖{i} . (In equations (<ref>)–(<ref>), a rate constant is set to 0 if the corresponding reaction is not in G.) Assume that f_i is nonzero, but f_i|_x_1=α is zero. Using properties of polynomial rings over a field, it follows that (x_1- α) divides f_i. From the fact that G is bimolecular, we conclude that: f_i  = (x_1 - α) ( β x_i + γ + ∑_j ∈ [n] ∖{i}δ_j x_j ) = β x_1 x_i + γ x_1 + ( ∑_j ∈ [n] ∖{i}δ_j x_1 x_j ) -αβ x_i -αγ - ( ∑_j ∈ [n] ∖{i}αδ_j x_j ) , for some real numbers β, γ, δ_j, at least one of which is nonzero. In the right-hand side of (<ref>), the variable x_i does not appear in any of the following monomials (here the hypothesis i≠ 1 is used): γ x_1, -αγ, δ_j x_1 x_j, -αδ_j x_j , for j ∈{1,2,…,n}∖{i}, so Lemma <ref> implies that the coefficients of these monomials must be non-negative. Since α>0, we conclude that γ=0 and δ_j=0 (for all j ∈{1,2,…,n}∖{i}). Thus, using (<ref>), we have f_i= β x_1 x_i - αβ x_i, for some β∈ℝ∖{0}. Next, we investigate which reactions contribute to the two monomials in f_i. For β x_1 x_i, the contributing reactions have the form X_1+X_i→ 2X_i and X_1 + X_i →⋆, where ⋆ does not involve X_i. The first reaction contributes positively, while the second type contributes negatively. Let κ^*_1,1 be the reaction rate constant for X_1+X_i→ 2X_i (as in the statement of the proposition) and κ^*_2,1,ℓ be the rate constant for reactions of type X_1 + X_i →⋆, where ℓ is an index for all the reactions of this type. We conclude that κ^*_1,1 - ∑_ℓκ^*_2,1,ℓ =  β . Similarly, the monomial -αβ x_i in f_i comes from reactions of the form X_i → 2X_i, which contribute positively, and X_i →⋆, which contribute negatively, where ⋆ is a complex that does not involve X_i. Hence, κ^*_1,0 - ∑_ℓκ^*_2,0,ℓ =  - αβ . Now the equations (<ref>) and (<ref>) together imply the desired equality (<ref>). Next, let Σ denote the set of reactions of G in which X_i is a non-catalyst-only species. We showed above that Σ contains a (nonempty) subset of reactions with rate constants labeled by κ^*_1,0, κ^*_1,1, κ^*_2,0,ℓ, κ^*_2,1,ℓ. Let Σ' ⊆Σ denote the remaining reactions, and let G' denote the subnetwork defined by the reactions in Σ'. Let κ' be obtained from κ^* by restricting to coordinates corresponding to reactions in Σ'. By construction, the mass-action ODE of (G', κ') for species X_i has right-hand side equal to 0.
So, Proposition <ref> applies (where reactions arising from j=0,1 in that proposition are absent from G' by construction), and yields two conclusions. First, Σ' is a subset of the reactions listed in Proposition <ref> (specifically, with j ≠ 0,1), and so Σ is a subset of the full list (including j = 0,1). Second, the equations (<ref>) hold (for j ≠ 0,1), which are the desired equalities (<ref>). The reactions listed in Propositions <ref> and <ref> (the lists are the same) are not reversible. Hence, if G is a reversible network satisfying the hypotheses of either proposition, then X_i is a catalyst-only species in every reaction of G. [<ref>] We revisit the enlarged Shinar-Feinberg network, G= {A+B κ_1⟶ 2B ,  B κ_2⟶ A,  0 κ_3⟵ B+C κ_4⟶ 2B,  0 ⟶ C}. Recall that, when κ_3=κ_4, the mass-action system (G,κ) has ACR in X_1 with ACR-value α= κ_2/κ_1, and that f_2|_x_1=α=0. In the notation of Proposition <ref>, the rate constants of reactions in which X_2 is non-catalyst-only are: κ^*_1,1=κ_1,   κ^*_2,0,1=κ_2,   κ^*_2,3,1=κ_3,  κ^*_1,3=κ_4 . Now the formula in Proposition <ref> for the ACR-value (<ref>) exactly yields the ACR-value computed earlier: α= κ_2/κ_1, and the relationship among rate constants (<ref>) recapitulates κ_3=κ_4. The formula for the ACR-value, in (<ref>), is related to the concept of "robust ratio" introduced by Johnston and Tonello <cit.>. §.§ Three reversible reactions are necessary for multistationarity Recall that, in (<ref>), we saw an instance of a (nondegenerately) multistationary, bimolecular network that consists of 3 pairs of reversible reactions. In this subsection, we prove that bimolecular networks with fewer pairs of reversible reactions are non-multistationary (Theorem <ref>). Our proof of Theorem <ref> requires several supporting lemmas on one-dimensional networks. For the next lemma, recall from Section <ref> that cap_pos(G) (respectively, cap_nondeg(G)) denotes the maximum possible number of positive (respectively, nondegenerate and positive) steady states of a network G. In Lemma <ref> below, part (1) was conjectured by Joshi and Shiu <cit.> and then proved by Lin, Tang, and Zhang <cit.> (see also <cit.>). Part (2) is due to Tang and Zhang <cit.>. Let G be a one-dimensional reaction network. * If G is multistationary and cap_pos(G) < ∞, then G has an embedded one-species network with arrow diagram (←, →) and another with arrow diagram (→, ←). * If cap_pos(G) < ∞, then cap_nondeg(G) = cap_pos(G). Joshi and Shiu showed that the network G={ 0 ← A → 2A} is the only one-species, bimolecular network for which cap_pos(G) =∞ <cit.>. The following lemma generalizes this result from one-species networks to one-dimensional networks. Let G be a one-dimensional and bimolecular reaction network with n species. The following are equivalent: * cap_pos(G) = ∞. * Up to relabeling species, G is one of the following networks: * {2X_1 ← X_1+X_2 → 2X_2 }, * {X_1 → 2X_1}∪Σ, where Σ consists of at least one reaction from the following set: {0 ← X_1 }∪{X_i ← X_1 + X_i | i=2,3,…, n} . Additionally, for the networks listed above in 2(a) and 2(b), every positive steady state (of every mass-action system arising from the network G) is degenerate. Let G be a one-dimensional, bimolecular reaction network. Up to relabeling species, the one-dimensional stoichiometric subspace is spanned by one of the following seven vectors: (1,0,0,…, 0) , (1,-1,0,0,…, 0) , (1,1,0,0,…, 0) , (1,-2, 0,0,…, 0) , (1,1,-1,0,0,…, 0) , (1,1,-2, 0,0,…, 0) , (1,1,-1,-1, 0,0,…, 0) .
We first consider the case when the stoichiometric subspace is spanned by one of the five vectors listed in (<ref>). The network G is then a subnetwork of one of the following networks (where we use A,B,C,D in place of X_1,X_2,X_3,X_4 for ease of notation); {0 ⇆ A+B} , {A ⇆ 2B} , {A+B ⇆ C} , {A+B ⇆ 2C} , {A+B ⇆ C+D} . A direct calculation shows that the deficiency of G is 0, so the deficiency-zero theorem (Lemma <ref>) implies that G is not multistationary. In particular, cap_pos(G) < ∞. Having shown that the case of (<ref>) is consistent with Lemma <ref>, we now consider the remaining two cases, from (<ref>), separately. First, assume the stoichiometric subspace of G is spanned by (1,0,0,…, 0). It follows that the reactions of G form a subset of the following 2n+4 reactions: 0 [k_0]m_0⇆ X_1 [m_1]ℓ_0⇆ 2X_1 0 [k_1]ℓ_1⇆ 2X_1 X_i [k_i]m_i⇆ X_1 + X_i for i=2,3,…, n . The ODEs for species X_2, X_3, …, X_n are dx_i/dt=0 so, x_i=T_i (with T_i>0) for i=2,3,…,n are the corresponding conservation laws. We substitute these conservation laws into the ODE for X_1: d x_1/dt|_x_2=T_2,…, x_n=T_n = (k_0 + 2 k_1) + (k_2T_2 + … + k_nT_n) +m_1 x_1 - (m_0+ m_2T_2 + … + m_nT_n) x_1 - (ℓ_0 + 2 ℓ_1) x_1^2 . When at least one k_i is positive and all other k_j's are non-negative, the right-hand side of (<ref>), viewed as a polynomial in x_1, has a nonzero constant term and, hence, is not the zero polynomial. Similarly, if ℓ_0 or ℓ_1 is positive and ℓ_0,ℓ_1 ≥ 0, then the right-hand side of (<ref>) has a nonzero coefficient of x_1^2 and is again a nonzero polynomial. We conclude that if G contains at least one of the reactions labeled by k_i or ℓ_i, then cap_pos(G) < ∞, which is consistent with Lemma <ref>. We now consider the case when G contains no reactions labeled by k_i or ℓ_i, that is, every reaction of G is one of the following n+1 reactions: 0 m_0← X_1 m_1→ 2X_1 X_i m_i← X_1 + X_i for i=2,3,…, n . The right-hand side of the ODE for X_1, as in (<ref>), becomes x_1(m_1 - m_0 - m_2T_2 - … - m_nT_n). In order for this polynomial in x_1 to become the zero polynomial for some choice of positive rate constants of G (equivalently, cap_pos(G) = ∞), we must have m_1>0 and m_j>0 for at least one of j=0,2,3,…,n. This gives exactly the reactions listed in Lemma <ref>(2)(b). In this case, given m_j>0 for the reactions appearing in the network, we can always choose T_j>0, such that the right-hand side of the ODE for X_1 vanishes (i.e., cap_pos(G) = ∞). Moreover, when this right-hand side vanishes is the only situation in which there are positive steady states, and an easy calculation shows that all such positive steady states are degenerate. This concludes our analysis of networks with stoichiometric subspace spanned by the vector (1,0,0,…, 0). Our final case is when the stoichiometric subspace is spanned by the vector (1,-1,0,…, 0). In this case, the reactions of G form a subset of the following 2n+4 reactions: X_1 [k_1]ℓ_1⇆ X_2 2 X_1 [k_2]ℓ_2⇆ 2X_2 2 X_1 [k_3]m_1⇆ X_1 + X_2 [m_2]ℓ_3⇆ 2 X_2 X_1 + X_i [k_i+1]ℓ_i+1⇆ X_2 + X_i for i=3,4,…, n . The conservation laws are x_1+x_2=T_2 and x_i=T_i for i=3,4,…, n. The ODE for species X_1 is: dx_1/dt = -(2k_2+k_3) x_1^2 -k_1x_1 -(k_4 x_3 + … + k_n+1x_n) x_1 + (m_1-m_2) x_1 x_2 + (ℓ_1 x_2 + 2 ℓ_2 x_2^2 + ℓ_3 x_2^2) + (ℓ_4 x_3 + … + ℓ_n+1 x_n) x_2 . Consider the subcase when at least one of the ℓ_i is positive and all other ℓ_j's are non-negative. 
After substituting the expressions arising from the conservation laws (namely, x_2= T_2-x_1 and x_i=T_i for i=3,4,…, n) into the right-hand side of the ODE (<ref>), we obtain a polynomial in x_1 that has a positive constant term (see the second line of the right-hand side of (<ref>)). Hence, if G contains at least one of the reactions labeled by ℓ_i, then cap_pos(G) < ∞. By symmetry, if G has at least one of the reactions labeled by k_i, then again cap_pos(G) < ∞. Hence, if G contains a reaction labeled by ℓ_i or k_i, then this subcase is consistent with the lemma. Consider the remaining subcase, when G is a subnetwork of { 2 X_1 m_1← X_1 + X_2 m_2→ 2 X_2 }, and so consists of only one or two reactions. If G has only one reaction, then Proposition <ref> implies that cap_pos(G) = 0 < ∞ (which is consistent with the lemma). Now assume that G has two reactions, that is, G={ 2 X_1 m_1← X_1 + X_2 m_2→ 2 X_2 }. If m_1 ≠ m_2, then the ODE for X_1 is dx_1/dt= (m_1 - m_2) x_1 x_2 and so there are no positive steady states. When m_1=m_2, the ODE for X_1 becomes dx_1/dt=0 and it follows that cap_pos(G) = ∞. Moreover, a simple computation shows that all the positive steady states are degenerate. This concludes the proof. [<ref>] The network { 0 ← A → 2A ,  B ← A+B} is one of the networks listed in Lemma <ref>.2(b), where n=2. If G is a one-dimensional, bimolecular network, then G is not nondegenerately multistationary. Assume that G is a one-dimensional network that is nondegenerately multistationary. We must show that G is not bimolecular. We claim that cap_pos(G) is finite. Indeed, if cap_pos(G) = ∞, then Lemma <ref> implies that all positive steady states are degenerate and so G is not nondegenerately multistationary, which is a contradiction. Hence, cap_pos(G) < ∞. The hypotheses of part (1) of Lemma <ref> are satisfied, that is, cap_pos(G) < ∞, and G is one-dimensional and multistationary. Therefore, G has an embedded one-species network with arrow diagram (←, →). Such an embedded network (e.g., {0 ← A,  2A → 3A}) involves at least one complex that is not bimolecular, and so G is also not bimolecular. If G is a bimolecular reaction network that consists of one or two pairs of reversible reactions, then G is not multistationary. Assume that G is bimolecular and consists of one or two pairs of reversible reactions. Let p denote the number of pairs of reversible reactions (so, p=1 or p=2), and ℓ the number of linkage classes. Let s be the dimension of the stoichiometric subspace (so, s=1 or s=2). Case 1: p=1. The deficiency of G is δ = 2-1-1=0 and G is weakly reversible. Hence, by the deficiency-zero theorem (Lemma <ref>) the network is not multistationary. Case 2: p=s=2. If ℓ=1, then the deficiency is δ = 3-1-2=0. If ℓ=2, then the deficiency is δ=4-2-2=0. Therefore, for either value of ℓ, the deficiency-zero theorem (Lemma <ref>) implies that the network is not multistationary. Case 3: p=2 and s=1. G is one-dimensional, bimolecular, and reversible. So, Lemma <ref> implies that cap_pos(G) < ∞. Now Lemma <ref>(2) yields cap_pos(G) = cap_nondeg(G), and Lemma <ref> implies that cap_nondeg(G)  ≤  1. Thus, cap_pos(G) ≤ 1, or, equivalently, G is non-multistationary. § MAIN RESULTS ON BIMOLECULAR NETWORKS In this section, we establish minimal conditions for a bimolecular network to admit ACR and nondegenerate multistationarity simultaneously. These minimal conditions are in terms of the numbers of species, reactions, and reactant complexes. The main result is as follows. 
Let G be a bimolecular reaction network. If there exists a vector of positive rate constants κ^* such that the mass-action system (G,κ^*) has ACR and also is nondegenerately multistationary, then: * G has at least 3 species. * G has at least 3 reactant complexes (and hence at least 3 reactions) and at least 5 complexes (reactant and product complexes). * If G is full-dimensional, then G has at least 5 reactant complexes (and hence at least 5 reactions). This section is structured as follows. In Subsection <ref>, we prove part (1) of Theorem <ref> (specifically, part (1) follows from Proposition <ref> and Theorem <ref>). Theorem <ref> also analyzes two-species bimolecular networks with ACR and degenerate multistationarity. Additionally, we characterize unconditional ACR in two-species bimolecular networks that are reversible (Theorem <ref>). Subsequently, in Subsection <ref>, we prove parts (2) and (3) of Theorem <ref> (Theorem <ref> and Proposition <ref>). We also consider full-dimensional, 3-species, bimolecular networks with only 4 reactant complexes. By Theorem <ref>, such networks do not allow for the coexistence of ACR and nondegenerate multistationary. Nevertheless, ACR and degenerate multistationarity is possible, and we characterize the possible sets of reactant complexes of such networks (Proposition <ref>). §.§ Bimolecular networks with one or two species This subsection characterizes unconditional ACR in reversible networks with only one or two species (Proposition <ref> and Theorem <ref>). Notably, our results show that such networks with unconditional ACR are not multistationary. Our interest in reversible networks comes from our prior work with Joshi <cit.>. In that article, our results on multistationarity in randomly generated reaction networks arise from “lifting” this property from the following (multistationary) motif: { B ⇆ 0 ⇆ A ⇆ B+C , C ⇆ 2C } . The question arises, Are there multistationary motifs with fewer species, reactions, or complexes than the one in (<ref>)? Discovering more motifs might aid in analyzing the prevalence of multistationarity in random reaction networks generated by stochastic models besides the one in <cit.>. §.§.§ Networks with one species When there is only one species, say X_1, and the network is bimolecular, there are only 3 possible complexes: 0, X_1, 2X_1. Hence, every such network is a subnetwork of the following network: G_X_1 = {0⇆ X_1⇆ 2X_1 ⇆ 0}. Therefore, the possible reversible networks, besides G_X_1 itself, are listed here: {0⇆ X_1}, {X_1⇆ 2X_1}, {0⇆ 2X_1}, {0⇆ X_1⇆ 2X_1}, {X_1⇆ 0⇆ 2X_1}, {0⇆ 2X_1⇆ X_1} . Every bimolecular network in only one species is not nondegenerately multistationary. Every reversible, bimolecular network in only one species has unconditional ACR. Let G be a bimolecular network with only one species. Then G is a subnetwork of G_X_1, in (<ref>), and the first part of the proposition now follows readily from Lemmas <ref>–<ref>. Next, assume G is a reversible, bimolecular network with only one species. Then G is either the network G_X_1 or one of the networks listed in (<ref>). Each of these networks is weakly reversible and satisfies the conditions of either the deficiency-zero or deficiency-one theorem (Lemmas <ref>–<ref>). Thus, for every choice of positive rate constants κ, the mass-action system (G, κ) has a unique positive steady state. Hence, G has unconditional ACR. §.§.§ Reversible networks with two species We now consider reversible, bimolecular networks with two species. 
Among such networks, the ones with unconditional ACR are characterized in the following result, which is the main result of this subsection. Let G be a reversible, bimolecular reaction network with exactly two species (and at least one reaction). * If G is full-dimensional, then the following are equivalent: * G has unconditional ACR; * G is not multistationary. * If G is one-dimensional, then the following are equivalent: * G has unconditional ACR; * Up to relabeling species, G is the (non-multistationary) network {X_2 ⇆ X_1+X_2}. Theorem <ref> encompasses Propositions <ref> and <ref> below. Let G be a full-dimensional, reversible, bimolecular reaction network with exactly two species. Then the following are equivalent: * G has unconditional ACR; * G is not multistationary. Let G be a full-dimensional, reversible, bimolecular network with exactly 2 species. We first prove (b) ⇒ (a). Assume that G is non-multistationary, and let κ^* be a choice of positive rate constants. Then, the mass-action system (G,κ^*) admits at most one positive steady state (x^*_1, x^*_2) (here the assumption that G is full-dimensional is used). However, the fact that G is reversible guarantees at least one positive steady state (Remark <ref>). Hence, (G, κ^*) has a unique positive steady state (x^*_1, x^*_2) and therefore has ACR in both species with ACR-values x_1^* and x_2^*, respectively. So, G has unconditional ACR. Next, we prove (a) ⇒ (b). Assume that G has unconditional ACR. Let κ^* be a choice of positive rate constants. By relabeling species, if necessary, we may assume that the system (G, κ^*) has ACR in species X_1 with some ACR-value α>0. Every positive steady state of (G, κ^*), therefore, has the form (α, x_2^*), where x^*_2 ∈ℝ_>0. We must show that there is at most one such steady state. Write the mass-action ODEs of (G, κ^*) as dx_1/dt = f_1 and dx_2/dt=f_2. Consider the univariate polynomial f_2|_x_1 = α∈ℝ[x_2]. We claim that this polynomial is not the zero polynomial. To check this claim, assume for contradiction that f_2|_x_1 = α is zero. As G is reversible, Remark <ref> (which relies on Propositions <ref>–<ref>) implies that X_2 is a catalyst-only species of every reaction of G. We conclude that G is not full-dimensional, which is a contradiction. Having shown that the univariate polynomial f_2|_x_1 = α is nonzero, we now use Lemma <ref> to conclude that (G, κ^*) has at most one positive steady state of the form (α, x_2^*). Proposition <ref> fails for networks that are not reversible. Indeed, a network without positive steady states (such as { 0 → A,  0 → B}) is not multistationary and also lacks unconditional ACR. We end this subsection by considering two-species networks that are one-dimensional. Up to relabeling species, each such network is a subnetwork of exactly one of the following networks G_i: G_1  := { 0 ⇆ X_1+X_2 } G_2  := {X_1⇆ 2X_2} G_3  := { 0 ⇆ X_1 ⇆ 2X_1 ⇆ 0 ,   X_2 ⇆ X_1+X_2} G_4   := { 2X_1 ⇆ X_1+X_2 ⇆ 2X_2 ⇆ 2X_1  ,   X_1 ⇆ X_2 } The next result, which is part (2) of Theorem <ref>, states that among the reversible subnetworks of the networks G_i listed in (<ref>), only one has unconditional ACR (namely, {X_2 ⇆ X_1+X_2}). Let G be a one-dimensional, reversible, bimolecular reaction network with exactly two species. Then the following are equivalent: * G has unconditional ACR; * Up to relabeling species, G is the (non-multistationary) network {X_2 ⇆ X_1+X_2}. Let G be a two-species, one-dimensional, reversible, bimolecular reaction network. 
From the list (<ref>), we know that G is a subnetwork of one of G_1, G_2, G_3, and G_4. Assume G is a subnetwork of G_1, G_2, or G_4. Then, G ≠{X_2 ⇆ X_1+X_2} and G is not a subnetwork of {0 ⇆ X_1 ⇆ 2X_1 ⇆ 0}. So, it suffices to show G does not have unconditional ACR. In networks G_1, G_2, and G_4, the reactant and product complexes of every reaction differ in both species X_1 and X_2. Also, all reactions in G are reversible, so every complex of G is a reactant complex. We conclude that G has two reactant complexes that differ in both species, and hence, Lemma <ref> implies that G does not have unconditional ACR. We now consider the remaining case, when G is a subnetwork of G_3. We write G_3 = N_1 ∪ N_2, where N_1:={0 ⇆ X_1 ⇆ 2X_1 ⇆ 0 } and N_2:={X_2 ⇆ X_1+X_2}. If G=N_2, the mass-action ODEs are dx_1/dt =κ_1x_2-κ_2x_1x_2 and dx_2/dt =0, and so G has unconditional ACR in species X_1 with ACR-value κ_1/κ_2. If G is a subnetwork of N_1, then G has only one species (recall that every species of a network must take part in at least one reaction), which is a contradiction. Our final subcase is when G contains reactions from both N_1 and N_2. Then, from N_2, the complex X_2 is a reactant complex of G. Similarly, from N_1, at least one of X_1 and 2X_1 is a reactant complex of G. Hence, G contains two reactant complexes that differ in both species, X_1 and X_2. Therefore, Lemma <ref> implies that G does not have unconditional ACR. Finally, the fact that the network {X_2 ⇆ X_1+X_2} is non-multistationary follows easily from the deficiency-zero theorem (Lemma <ref>). §.§.§ Irreversible networks with two species In <cit.>, the following network was called a "degenerate-ACR network," because it has unconditional ACR and yet every positive steady state is degenerate: { A+B → B, A → 2A } . This degeneracy arises from the fact that a single (one-dimensional) stoichiometric compatibility class consists entirely of steady states <cit.>. The main result of this subsection, Theorem <ref> below, shows that only one additional two-species network exhibits both ACR and multistationarity for a nonzero-measure set of rate constants; this network is obtained by adding to (<ref>) the reaction A → 0. Both networks, therefore, are one-dimensional, two-species networks. To prove Theorem <ref>, we need the following lemma, which concerns the network in (<ref>) (and others as well). Let G be a subnetwork of the network {X_1+X_2 → X_2, 0 ← X_1 → 2X_1}. Then: * Every positive steady state (of every mass-action system defined by G) is degenerate. * Let Σ denote the set of vectors of positive rate constants κ for which the mass-action system (G, κ) both has ACR and is multistationary. If Σ has nonzero measure, then G is one of the following networks: {X_1+X_2 → X_2, X_1 → 2X_1} and {X_1+X_2 → X_2, 0 ← X_1 → 2X_1}. This result is straightforward to check by hand, so we only outline the steps, as follows. Assume G is a subnetwork of {X_1+X_2 k→ X_2, 0 ℓ← X_1 m→ 2X_1}. If G admits a positive steady state, G must contain the reaction X_1 m→ 2X_1. Hence, there are three subnetworks to consider: * If G= {0 ℓ← X_1 m→ 2X_1}, then Σ is empty. * If G= {X_1+X_2 k→ X_2,  X_1 m→ 2X_1}, then Σ = { (k,m) ∈ℝ^2_>0}. * If G= {X_1+X_2 k→ X_2, 0 ℓ← X_1 m→ 2X_1}, then Σ = { (k,ℓ,m) ∈ℝ^3_>0| m > ℓ}. In cases (2) and (3), the set Σ has nonzero measure. Finally, for all three of these networks, every positive steady state is degenerate (some of these networks are also covered by Lemma <ref>).
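A minimal sympy sketch of case (2) of the lemma above, for the subnetwork {X_1+X_2 →(k) X_2, X_1 →(m) 2X_1}: every positive steady state has x_2 = m/k (ACR in X_2), and the derivative of f_1 along the stoichiometric subspace, which is spanned by (1,0), vanishes there, so all positive steady states are degenerate. This is only a sanity check of the outline, not part of the proof.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
k, m = sp.symbols('k m', positive=True)

# {X1+X2 -k-> X2,  X1 -m-> 2X1}: X2 is catalyst-only, so dx2/dt = 0.
f1 = -k*x1*x2 + m*x1

print(sp.solve(f1, x2))    # [m/k]: ACR in X2 with ACR-value m/k

# A positive steady state (a, m/k) is nondegenerate exactly when
# d f1 / d x1 is nonzero there, since the stoichiometric subspace is (1,0).
print(sp.simplify(sp.diff(f1, x1).subs(x2, m/k)))   # 0, so every one is degenerate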
Let G be a bimolecular reaction network with exactly two species, X_1 and X_2. Let Σ denote the set of vectors of positive rate constants κ for which the mass-action system (G, κ) both has ACR in species X_2 and is multistationary. Then: * For every κ^* ∈Σ, every positive steady state of (G, κ^*) is degenerate. * If Σ has nonzero measure, then G is one of the following networks: {X_1+X_2 → X_2, X_1 → 2X_1} and {X_1+X_2 → X_2, 0 ← X_1 → 2X_1}. Assume that G is bimolecular and has exactly two species. If Σ is empty (for instance, if G has no reactions), then there is nothing to prove. Accordingly, assume that Σ is nonempty (and in particular G has at least one reaction). We first claim that G has a reaction in which X_1 is a non-catalyst-only species. To prove this claim, assume for contradiction that X_1 is a catalyst-only species. Then the stoichiometric compatibility classes are defined by the equations x_1=T, for T>0 (we are also using the fact that G has at least one reaction). But this does not allow for multistationarity and ACR in X_2 to coexist, because two positive steady states in the same compatibility class would have the form (T,y) and (T,z), with y≠ z, which contradicts the assumption of ACR in X_2. So, the claim holds. For an arbitrary vector κ of positive rate constants, let f_κ,1 and f_κ,2 denote the right-hand sides (for species X_1 and X_2, respectively) of the mass-action ODE system of (G,κ). Consider the following partition of Σ: Σ = ( Σ∩{κ| f_κ,1= 0}) ∪( Σ∩{κ| f_κ,1≠ 0})  =: Σ_0 ∪Σ_1 . By construction, Σ_0∩Σ_1=∅. We first analyze Σ_0. If Σ_0 is empty, then skip ahead to our analysis of Σ_1. Accordingly, assume Σ_0 is nonempty, and let κ^* ∈Σ_0. We must show that every positive steady state of (G,κ^*) is degenerate. We claim that G is two-dimensional (assuming that Σ_0 is nonempty). We prove this claim as follows. We saw that G contains a reaction in which X_1 is a non-catalyst-only species, so Proposition <ref> implies that for j=0 or j=2 (or both, where we are using Notation <ref>) our network G contains the reaction X_1+X_j → 2X_1 and at least one reaction of the form X_1+X_j →⋆, where ⋆ is a complex not involving X_1. Consider the subcase j=0. If some ⋆ involves X_2, then G contains X_1 → 2X_1 and X_1 →⋆, which yield linearly independent reaction vectors and so G is two-dimensional. If none of the complexes ⋆ involve X_2, then G must contain additional reactions in which X_2 is not a catalyst-only species (to avoid f_2=0), and so again G is two-dimensional. The subcase j=2 is similar. Next, as G is two-dimensional and f_κ^*,1= 0, Corollary <ref> implies that every positive steady state of (G,κ^*) is degenerate, as desired. Additionally, as X_1 is a non-catalyst-only species and (for all κ∈Σ_0) f_κ,1=0, Proposition <ref> implies that there is a nontrivial linear relation that every κ∈Σ_0 satisfies. Hence, Σ_0 has zero measure. To complete the proof, it suffices to show the following about the set Σ_1: (1) For every κ^* ∈Σ_1, every positive steady state of (G, κ^*) is degenerate; and (2) If Σ_1 has nonzero measure, then G={X_1+X_2 → X_2, X_1 → 2X_1} or G={X_1+X_2 → X_2, 0 ← X_1 → 2X_1}. Assume Σ_1 is nonempty (otherwise, there is nothing to prove). We introduce the following notation: for κ∈Σ_1, let β(κ) denote the ACR-value for X_2. We now claim the following: For every κ∈Σ_1, the univariate polynomial f_κ,_1|_x_2=β(κ) is the zero polynomial. 
To verify this claim, we first note that f_κ,_1|_x_2=β(κ) has at least two positive roots (as (G, κ) is multistationarity), so the polynomial f_κ,_1|_x_2=β(κ), if nonzero, must have at least two sign changes (by Descartes' rule of signs). However, by Lemma <ref>, the polynomial f_κ,_1|_x_2=β(κ) has at most one sign change, and so the claim holds. We now know that for every κ^* ∈Σ_1, we have f_κ^*,_1≠ 0, but f_κ^*,_1|_x_2=β(κ^*) =0. Hence, G has at least one reaction in which X_1 is a non-catalyst-only reaction and (by Proposition <ref>) every such reaction must be one of the 8 reactions displayed here: 0 κ_4,1⟵ X_1 κ_1⟶ 2X_1 κ_2⟵ X_1+X_2 κ_3,1⟶ 0  , X_2 κ_4,2⟵ X_1 κ_4,3⟶ 2X_2 , X_2 κ_3,2⟵ X_1+X_2 κ_3,3⟶ 2X_2 For every κ^* ∈Σ_1, Proposition <ref> yields the following ACR-value formula: β(κ^*)  = κ^*_4 ∙ - κ^*_1/κ^*_2 - κ^*_3 ∙ , where κ^*_3 ∙ := κ^*_3, 1 + κ^*_3, 2 + κ^*_3, 3 and κ^*_4 ∙:=κ^*_4, 1 + κ^*_4, 2 + κ^*_4, 3. For reactions in (<ref>) that are not in G, the corresponding rate constants, κ^*_i or κ^*_ij, are set to 0. Next, the possible reactions in which X_1 is a catalyst-only species are as follows: 0 κ_5κ_6⇄ X_2 κ_7κ_8⇄ 2X_2 κ_9κ_10⇄ 0 , X_1 κ_11κ_12⇄ X_1+ X_2 We proceed by considering three subcases, based in part on whether f_κ,2 (which is a polynomial in the unknowns x_1, x_2, and κ) is zero: (a) f_κ,2=0, and X_2 is a catalyst-only species in every reaction of G, (b) f_κ,2=0, and X_2 is a non-catalyst-only species in some reaction of G, or (c) f_κ,2≠ 0. We first consider subcase (a). By inspecting reactions in (<ref>) and (<ref>), we conclude that G must be a subnetwork of {X_1+X_2 → X_2, 0 ← X_1 → 2X_1}. This subcase is done by Lemma <ref>. Next, we examine subcase (b). Let G_1:={X_1+X_2 → X_2, 0 ← X_1 → 2X_1}, G_2:= {0← X_2 → 2X_2}, and G_3:={0← X_1+X_2→ 2X_2,  X_1 ← X_1+ X_2 → 2X_1 }. By Proposition <ref> (and by inspecting reactions in (<ref>) and (<ref>)), G must be a subnetwork of G_1∪ G_2 ∪ G_3 with at least one reaction in G_2 ∪ G_3. Moreover, there is a nontrivial linear relation in the rate constants that holds for all κ∈Σ_1. It follows that Σ_1 is contained in the hyperplane defined by this linear relation and hence has zero measure. Let κ^* ∈Σ_1. By examining G_1 ∪ G_2 ∪ G_3, we see that the possible reactants of G are X_1, X_2, X_1+X_2. Next, G has at least 2 reactants (as otherwise, Proposition <ref> would imply that G admits no positive steady states). Hence, by inspection, G either is full-dimensional or is a subnetwork of { 0 ← X_2 → 2X_2,  X_1+X_2 → X_1}, which we already saw in Example <ref> (where A=X_2 and B=X_1) has ACR in X_1 but not in X_2 (and the analysis of its subnetworks is similar). Hence, G is full-dimensional, and so Corollary <ref> (and the fact that f_κ^*,2=0) implies that every positive steady state of (G,κ^*) is degenerate. Consider subcase (c). Let κ^* ∈Σ_1 (so, in particular, f_κ^*,2≠ 0). We claim that f_κ^*,2|_x_2=β(κ^*) = 0. To see this, observe that, in the reactions (<ref>) and (<ref>), the complex 2X_1 appears only as a product, never as a reactant. Hence, f_κ^*,2|_x_2=β(κ^*) (which is a univariate polynomial in x_1) has degree at most 1. However, the fact that (G,κ^*) is multistationary implies that f_κ^*,2|_x_2=β(κ^*) has two or more positive roots. Hence, f_κ^*,2|_x_2=β(κ^*) is the zero polynomial. Now we show that every positive steady state of (G,κ^*) is degenerate. Such a steady state has the form (p,β), and we also know that f_κ^*,1|_x_2=β(κ^*) = f_κ^*,2|_x_2=β(κ^*)=0. Hence, (x_2-β(κ^*)) divides both f_κ^*,1 and f_κ^*,2. 
Consequently, the derivatives of f_κ^*,1 and f_κ^*,2 with respect to x_1 at (p,β) are both zero. It follows that the first column of the 2 × 2 Jacobian matrix, when evaluated at (p,β), is the zero column. Hence, if the stoichiometric subspace of G, which we denote by S, is two-dimensional, then (p,β) is degenerate. We now assume (S)=1 (and aim to reach a contradiction). Recall that G contains at least one reaction from those in (<ref>), so in order for (S)=1 it must be that G contains no reaction from (<ref>). Hence, from the expression for f_2 (which we know is not zero), in (<ref>), the only possible reactions in G are the ones labeled by κ_2 , κ_3,3, κ_4,2,κ_4,3. Hence, the one-dimensional network G is either the network { X_1 κ_4,3⟶ 2X_2 } or a subnetwork of {2X_1 κ_2⟵ X_1+X_2 κ_3,3⟶ 2X_2,   X_1 κ_4,2⟶ X_2 }. Now it is straightforward to check that G is not multistationary, which is a contradiction. To complete the proof, it suffices to show that, in subcase (c), the set Σ_1 has measure zero. Accordingly, let κ∈Σ_1. As noted earlier, the ACR-value of X_2 in (G,κ) is β(κ) = κ_4 ∙ - κ_1/κ_2 - κ_3 ∙. From (<ref>) and (<ref>), the right-hand side of the mass-action ODE for (G,κ) has the following form (with rate constants set to 0 for reactions not in G): f_κ,2 =  ( κ_3,3 - κ_2 - κ_12) x_1 x_2 + (κ_11 + κ_4,2 + 2 κ_4,3) x_1 - (κ_8 + 2 κ_9) x_2^2 + (κ_7 - κ_6) x_2 + (κ_5 + 2 κ_10) . By assumption, at least one of the rate constants (the κ_i and κ_i,j) in (<ref>) is nonzero. By our earlier arguments, at the beginning of subcase (c), we conclude that f_κ,2|_x_2=β(κ) = 0. Hence, the linear and constant terms of f_κ,2|_x_2=β(κ) are both 0, which, using (<ref>), translates as follows: ( κ_3,3 - κ_2 - κ_12) κ_4 ∙ - κ_1/κ_2 - κ_3 ∙ + (κ_11 + κ_4,2 + 2 κ_4,3)   =  0 and - (κ_8 + 2 κ_9) ( κ_4 ∙ - κ_1/κ_2 - κ_3 ∙)^2 + (κ_7 - κ_6) κ_4 ∙ - κ_1/κ_2 - κ_3 ∙ + (κ_5 + 2 κ_10)   = 0 . It follows that Σ_1 is constrained by the equations (<ref>), at least one of which is nontrivial. Hence, Σ_1 is contained in a hypersurface and so has measure zero. §.§ Bimolecular networks with at least three species In the previous subsection, we showed that a bimolecular network must have at least 3 species in order for ACR and nondegenerate multistationarity to coexist. Consequently, this subsection focuses on bimolecular networks with at least 3 species. We prove that the coexistence of ACR and nondegenerate multistationarity requires a minimum of 3 reactant complexes and a minimum of 5 complexes (Theorem <ref>). The remainder of this subsection focuses on a family of networks with n species and n reactants, for which ACR and nondegenerate multistationarity coexist (Section <ref>), and then analyzes full-dimensional networks with 3 species (Section <ref>). Let G be a bimolecular reaction network with at least 3 species. If there exists a vector of positive rate constants κ^* such that (G,κ^*) has ACR and is nondegenerately multistationary, then: * G has at least 3 reactant complexes (and hence, at least 3 reactions), and * G has at least 5 complexes (reactant and product complexes). We first prove part (1). Let (G,κ^*) be as in the statement of the theorem and let n denote the number of species, where n ≥ 3. By relabeling species, if needed, we may assume that (G,κ^*) has ACR in species X_1. Let f_1,f_2,…, f_n denote the right-hand sides of the mass-action ODEs of (G,κ^*). As (G,κ^*) has ACR, we know that at least one of the right-hand sides is nonzero. Let f_i denote one of these nonzero polynomials. 
Assume for contradiction that G has only 1 or 2 reactant complexes. Since G admits a nondegenerate positive steady state, Proposition <ref> implies that G is not full-dimensional and G has exactly 2 reactant complexes. We claim that all the right-hand sides f_ℓ are scalar multiples of each other. More precisely, we claim that for all j ∈{1,2,…,n}∖{i}, there exists c_j ∈ℝ such that f_j=c_j f_i. Indeed, each f_j has at most two monomials (because G has exactly two reactant complexes), so if f_j is not a constant multiple of f_i, then some ℝ-linear combination of f_i and f_j is a monomial and hence (G,κ^*) has no positive steady state (which is a contradiction). Thus the positive steady states of (G,κ^*) are precisely the positive roots of f_i=0 and the linear equations given by the conservation laws. Since X_1 is the ACR species and G is bimolecular, we must have f_i = (α - x_1) (β_0 + β_1 x_1 + … + β_n x_n), where α is the (positive) ACR-value and β_j ∈ℝ for all j=0,1,…,n. We consider several cases, based on how many of the coefficients β_2,β_3, …, β_n are nonzero. We begin by considering the case when β_2=β_3=… = β_n=0. In this case, f_i is a (nonzero) polynomial in x_1 only, and so has the form f_i=γ_1 x_1^m_1 + γ_2 x_1^m_2, where γ_1,γ_2 ∈ℝ and 0 ≤ m_1 < m_2 ≤ 2. As (G,κ^*) has a positive steady state, we conclude that γ_1 and γ_2 are nonzero and have opposite signs. Now Lemma <ref> implies that i=1 (so, f_1=f_i ≠ 0) and f_2=f_3=…=f_n=0. In fact, Lemma <ref> implies that X_2, …, X_n are catalyst-only species of G (equivalently, the mass-action ODE right-hand sides for X_2,…, X_n are zero for all choices of positive rate constants). Such a system is not multistationary, which is a contradiction. Now consider the case when two or more of the β_2,β_3, …, β_n are nonzero. In this case, there exist distinct j_1,j_2 (where 2 ≤ j_1,j_2 ≤ n) with β_j_1,β_j_2≠ 0. Then f_i contains the monomials x_j_1, x_j_2, x_1x_j_1, x_1x_j_2, which contradicts the fact that G has exactly two reactant complexes. The final case is when exactly one of the β_2,β_3, …, β_n is nonzero. Relabel the species, if needed, so that β_2≠ 0. In this case, the two reactant complexes of G involve only species X_1 and X_2. By using Lemma <ref> again, much like we did for the prior case, we conclude that X_3 ,…, X_n are catalyst-only species of G and so (G,κ^*) is effectively the mass-action system of a (bimolecular) network with only two species, X_1 and X_2. Now it follows from Theorem <ref> that (G,κ^*) is not nondegenerately multistationary, which contradicts our assumption. This completes part (1). We prove part (2). Assume for contradiction that G has at most 4 complexes. By Proposition <ref>, the dimension of the stoichiometric subspace of G must be at least 2. So, the deficiency of G satisfies δ =  m - ℓ - dim(S)  ≤  4 - 1 - 2  = 1 . Hence, the deficiency of G is 0 or 1, and the latter requires G to have exactly one linkage class. Now Lemmas <ref>–<ref> imply that G is not multistationary, which is a contradiction. Theorem <ref> gives a lower bound on the number of reactant complexes and the number of all complexes (reactants and products), and the next example shows that these bounds are tight. The example also shows the tightness of the lower bounds on the number of species and the dimension of the network (from Theorem <ref>). Consider the following bimolecular network with 3 species and 3 reactant complexes and 5 complexes: G = {X_1+X_2 κ_1→ 2X_3, X_3 κ_2→ X_1 , 2X_3 κ_3→ 2X_2} .
This network is two-dimensional, as the total amount of X_1,X_2,X_3 is conserved. For every vector of positive rate constants κ, the system (G,κ) is nondegenerately multistationary and also has ACR in X_3 with ACR-value κ_2/(2κ_3). Details are given in the proof of Proposition <ref>, below, which pertains to a family of networks that includes the network G. §.§.§ Non-full-dimensional networks The bimolecular network in Example <ref> is the n=3 case of the networks G_n that we introduce in the next result. These networks have the property that every reactant complex is bimolecular, but (when n≥ 4) one of the product complexes is not. For all n ≥ 3, consider the following network with n species, n reactant complexes, and n reactions: G_n = { X_1+X_2 κ_1→ 2X_3+∑_j=4^n X_j,  X_3 κ_2→ X_1,  2X_3 κ_3→ 2X_2 }⋃{ X_4 κ_4→ 0, …, X_n κ_n→ 0 } . Each such network G_n satisfies the following: * there is a unique (up to scaling) conservation law, which is given by x_1+x_2+x_3=T, where T represents the total concentration of species X_1,X_2,X_3; and * for every vector of positive rate constants κ∈ℝ^n_>0, the system (G_n,κ) is nondegenerately multistationary and also has ACR in species X_3, X_4, …, X_n. Fix n ≥ 3. The mass-action ODEs for G_n are as follows: dx_1/dt = -κ_1 x_1 x_2 + κ_2 x_3 , dx_2/dt = -κ_1 x_1 x_2 + 2κ_3 x_3^2 , dx_3/dt = 2κ_1 x_1 x_2 - κ_2 x_3 - 2κ_3 x_3^2 , dx_j/dt = κ_1 x_1 x_2 - κ_j x_j for j∈{4,…, n} . The network G_n has exactly one conservation law (up to scaling), and it is given by x_1+x_2+x_3=T. Additionally, using the first two ODEs, we compute that the value of species X_3 at all positive steady states is κ_2/(2κ_3). Next, we use this steady-state value for X_3, together with the first and fourth ODEs, to obtain the expression κ_2^2/(2κ_3κ_j) for the steady-state value of X_j, for j ≥ 4. Thus, ACR in X_3,X_4,…, X_n will follow once we confirm the existence of positive steady states. Next, we investigate the steady-state values of X_1 and X_2. Using the steady-state value of X_3, the conservation law, and the first ODE, we see that the steady-state values of X_1 and X_2 correspond to the intersection points of the line x_1+x_2 + κ_2/(2κ_3) = T and the curve x_1 x_2 = κ_2^2/(2κ_1κ_3). This is depicted qualitatively in a figure (omitted here; the line appears as a green dashed line and the curve as a red solid curve). It follows that, given any vector of positive rate constants κ∈ℝ^n_>0, when T is sufficiently large, there are two pairs of (nondegenerate) positive steady-state values for X_1 and X_2, and so (G_n, κ) is nondegenerately multistationary (and thus admits a positive steady state, and so has ACR). §.§.§ Full-dimensional networks with 3 species Consider a bimolecular network G that has 3 species. We saw that if G admits ACR and nondegenerate multistationarity simultaneously, then G has at least 3 reactant complexes (Theorem <ref>). If, however, G is full-dimensional, then more reactants are required, as stated in the following result. Let G be a full-dimensional bimolecular reaction network with exactly 3 species. If there exists a vector of positive rate constants κ^* such that (G,κ^*) has ACR and is nondegenerately multistationary, then G has at least 5 reactant complexes (and hence at least 5 reactions). Proposition <ref> is a direct consequence of Propositions <ref>(3) and <ref>, and a stronger version of this result appears in the next section (Theorem <ref>).
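Before turning to the full-dimensional case in detail, note that the family G_n above is easy to probe numerically. The following minimal sympy sketch (ours, not from the article; the rate constants and the total concentration T are arbitrary sample values) treats the n = 3 member: it recovers the ACR value κ_2/(2κ_3) for X_3 and finds exactly two positive steady states once T is large enough.
import sympy as sp
x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
k1, k2, k3, T = 1, 2, 1, 10                   # sample rate constants and total concentration
f1 = -k1*x1*x2 + k2*x3                        # dx1/dt
f2 = -k1*x1*x2 + 2*k3*x3**2                   # dx2/dt
cons = x1 + x2 + x3 - T                       # conservation law; dx3/dt = -(dx1/dt + dx2/dt), so it is omitted
sols = sp.solve([f1, f2, cons], [x1, x2, x3], dict=True)
pos = [s for s in sols if all(v.is_positive for v in s.values())]
print(len(pos), {s[x3] for s in pos})         # expected: 2 positive steady states, all with x3 = k2/(2*k3) = 1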
Proposition <ref> implies that if a full-dimensional bimolecular network with 3 species and fewer than 5 reactions has both ACR and multistationarity, then this coexistence happens in a degenerate way. We illustrate this situation with two examples, and then characterize all such networks with exactly 4 reactant complexes (Proposition <ref>). Consider the following full-dimensional network with 3 species, 4 reactions, and 4 reactant complexes: {2Z → Z,  X+Y → Z → Y+Z,  0 → X}. When all rate constants are 1, the mass-action ODEs are as follows: dx/dt = 1-xy , dy/dt = z - xy , dz/dt = -z^2 + xy . For this system, the set of positive steady states is { (x,y,z) ∈ℝ^3_>0 | xy=z=1}, and every positive steady state is degenerate. We conclude that this system is multistationary (but degenerately so) and has ACR in species Z (with ACR-value 1). Consider the following network: { X+Z→ Z,  Y+Z ⇆ Y→ 0,  2X← X→ X+Y }. Like the network in Example <ref>, this network is full-dimensional and has 3 species, 4 reactions, and 4 reactant complexes; however, the set of reactant complexes differs. When all rate constants are 1, the mass-action ODEs are as follows: dx/dt = x-xz , dy/dt = x - y , dz/dt = y - yz . For this system, the set of positive steady states is { (x,y,z) ∈ℝ^3_>0 | x=y, z=1}, and every positive steady state is degenerate. Thus, this system is (degenerately) multistationary and has ACR in species Z (with ACR-value 1). The next result shows that Examples <ref> and <ref> cover all cases of three-species, four-reactant networks in which ACR and (degenerate) multistationarity occur together, in the sense that these two networks represent the only two possibilities for the set of reactant complexes (when a certain full-rank condition is met, which we discuss below in Remark <ref>). Let G be a full-dimensional bimolecular reaction network with exactly 3 species – which we call X,Y,Z – and exactly 4 reactant complexes. If κ^* is a vector of positive rate constants such that: (a) rank(N) = 3, where N is the matrix for (G,κ^*) as in (<ref>), (b) (G, κ^*) has ACR in species Z, and (c) (G,κ^*) is multistationary (which is degenerately so, by Proposition <ref>), then the set of reactant complexes of G is either {X, X+Z, Y, Y+Z} or {0, X+Y, Z, 2Z}. Let G, κ^*, and N be as in the statement of the proposition. In particular, G has 3 species and 4 reactants, and (G,κ^*) admits a positive steady state, which we denote by (x^*,y^*, α) (so α is the ACR-value of Z). Also, N has rank 3, and so Proposition <ref>(2) and its proof imply that the steady-state equations can be "row-reduced" so that the positive steady states of (G,κ^*) are the roots of 3 binomial equations of the following form: h_1 := m_1 - β_1 m_4 = 0 , h_2 := m_2 - β_2 m_4 = 0 , h_3 := m_3 - β_3 m_4 = 0 , where β_j ∈ℝ (for j=1,2,3) and m_i=x^a_i y^b_i z^c_i (for i=1,2,3,4) are 4 distinct monic monomials given by the reactant complexes. Also, each m_i (for i=1,2,3,4) has degree at most 2 in x,y,z (as G is bimolecular). In other words, a_i,b_i,c_i are non-negative integers that satisfy the following: a_i+b_i+c_i ≤ 2 . We infer that β_1, β_2, β_3 >0, because otherwise h_1=h_2=h_3=0 would have no positive roots. For i∈{1,2,3}, consider the following, where we recall that α is the ACR-value of Z: g_i := h_i|_z = α = d_i x^a_i y^b_i - d_i' x^a_4 y^b_4 , where d_i:= α^c_i>0 and d_i':= β_i α^c_4>0. For i∈{1,2,3}, by construction, g_i(x^*,y^*)=0 and so the subset of the positive quadrant ℝ^2_>0 defined by g_i=0, which we denote by S_i, is nonempty.
There are four possible "shapes" for each set S_i: * (1) S_i = ℝ^2_>0, when (a_i,b_i)=(a_4,b_4) (and necessarily, d_i = d_i', to avoid S_i = ∅). * (2) S_i is the horizontal line y=y^*, when a_i = a_4 and b_i ≠ b_4. * (3) S_i is the vertical line x=x^*, when a_i ≠ a_4 and b_i = b_4. * (4) S_i is a strictly increasing curve (passing through (x^*,y^*)) defined by the following equation, when a_i ≠ a_4 and b_i ≠ b_4: y = (d_i/d'_i)^(1/(b_4 - b_i)) x^((a_i - a_4)/(b_4 - b_i)) . Any two lines/curves of the form (2)–(4) either coincide or intersect only at (x^*,y^*). Hence, the intersection S_1 ∩ S_2 ∩ S_3 is either * (a) the single point (x^*,y^*), * (b) a single line or curve of the form (2)–(4), or * (c) the positive quadrant ℝ^2_>0. By construction and the fact that α is the ACR-value, the set of all positive steady states of (G,κ^*) is the set {(x,y, α ) | (x,y) ∈ S_1 ∩ S_2 ∩ S_3}. Hence, in the case of (a), (G,κ^*) is not multistationary, which is a contradiction. Next, we show that case (c) does not occur. On the contrary, assume that it does. Then S_1=S_2=S_3=ℝ^2_>0, which implies that (a_1,b_1)=(a_2,b_2)=(a_3,b_3) = (a_4,b_4). Since m_1,m_2,m_3,m_4 are 4 distinct monomials, it must be that c_1,c_2,c_3,c_4 are 4 distinct non-negative integers. However, as noted earlier, c_i∈{0,1,2} for each i, which yields a contradiction. Finally, we consider case (b). This case happens only when one of the following subcases occurs: Subcase 1: Exactly 1 of the 3 subsets S_i is the positive quadrant, and the other two coincide. Without loss of generality, assume S_1 = ℝ^2_>0 and so S_2 = S_3 ≠ℝ^2_>0. Hence, (a_1,b_1)=(a_4,b_4) ≠ (a_2,b_2) = (a_3,b_3). However, m_1 ≠ m_4 and m_2 ≠ m_3, and so: c_1 ≠ c_4 and c_2 ≠ c_3 . We rewrite the inequalities (<ref>), using the equalities (a_1,b_1)=(a_4,b_4) and (a_2,b_2) = (a_3,b_3): a_1+b_1+c_1 ≤ 2 , a_1+b_1+c_4 ≤ 2 , a_2+b_2+c_2 ≤ 2 , a_2+b_2+c_3 ≤ 2 . Finally, Lemma <ref> implies that each of species X and Y takes part in some reactant complex, so we obtain the following (again using (a_1,b_1)=(a_4,b_4) and (a_2,b_2) = (a_3,b_3)): a_1 + a_2 ≥ 1 and b_1 + b_2 ≥ 1 . The only non-negative solutions to the conditions in (<ref>), (<ref>), and (<ref>) are as follows: * a_1=a_4=1, a_2=a_3=0, b_1=b_4=0, b_2=b_3=1, {c_1,c_4} = {c_2,c_3} = {0,1}; * a_1=a_4=0, a_2=a_3=1, b_1=b_4=1, b_2=b_3=0, {c_1,c_4} = {c_2,c_3} = {0,1}. In both of these solutions, the set of reactant complexes is {X,  X+Z,  Y,  Y+Z}. Subcase 2: Exactly 2 of the 3 subsets S_i are the positive quadrant. Without loss of generality, assume that S_1 = S_2 = ℝ^2_>0≠ S_3. This implies the following: (a_1,b_1) = (a_2,b_2) = (a_4,b_4) ≠ (a_3,b_3) . However, m_1,m_2,m_4 are 3 distinct monomials, so c_1, c_2,c_4 are 3 distinct non-negative integers. Now inequality (<ref>) implies that {c_1, c_2,c_4} = {0,1,2}. Let i^* ∈{1,2,4} be such that c_i^*=2. Next, the equalities in (<ref>) and the inequality (<ref>) for i=i^* together imply that (a_1,b_1)=(a_2,b_2)=(a_4,b_4)=(0,0). Therefore, the set of reactant complexes corresponding to m_1,m_2,m_4 is {0,  Z,  2Z}. Finally, Lemma <ref> implies that the fourth reactant complex must involve both X and Y and so (by bimolecularity) is X+Y. Therefore, the set of reactant complexes is {0,  X+Y,  Z,  2Z}. Subcase 3: None of the subsets S_i are positive quadrants, and the 3 sets coincide. This implies that (a_1,b_1)=(a_2,b_2)=(a_3,b_3) ≠ (a_4,b_4). These conditions are symmetric to those in subcase 2, and so the reactant complexes are {0,  Z,  2Z,  X+Y}. This completes subcase 3 (and case (b)).
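The degeneracy asserted in the two examples above is easy to confirm symbolically. The following short sympy sketch (ours, not taken from the article) checks that the Jacobian of the right-hand side is singular along the respective positive steady-state sets, so every positive steady state is indeed degenerate.
import sympy as sp
x, y, z = sp.symbols('x y z', positive=True)
# network with reactant set {2Z, X+Y, Z, 0}, all rate constants equal to 1
f_a = sp.Matrix([1 - x*y, z - x*y, -z**2 + x*y])
J_a = f_a.jacobian([x, y, z])
print(sp.simplify(J_a.det().subs({y: 1/x, z: 1})))   # 0 on the steady-state set x*y = 1, z = 1
# network with reactant set {X+Z, Y+Z, Y, X}, all rate constants equal to 1
f_b = sp.Matrix([x - x*z, x - y, y - y*z])
J_b = f_b.jacobian([x, y, z])
print(sp.simplify(J_b.det().subs({y: x, z: 1})))     # 0 on the steady-state set x = y, z = 1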
Proposition <ref> includes the hypothesis that the matrix N for the system (G,κ^*) has (full) rank 3. If we remove this hypothesis, we can obtain more full-dimensional networks with 3 species and 4 reactants that allow ACR and (degenerate) multistationarity to occur together. We present one such network in Example <ref>. Consider the following full-dimensional network with 3 species and 4 reactants: G:={0→ X→ Y → 2 Y,     Y← Y+Z → 2 Z} . The system (G,κ^*) obtained by setting all the reaction rates to 1 has the following ODEs: [ dx/dt; dy/dt; dz/dt ] = [ -1 0 0 1; 1 1 -1 0; 0 0 0 0; ][ x; y; yz; 1 ] =  N [ x; y; yz; 1 ] . The matrix N (defined above) has rank 2, the set of positive steady states is { (x,y,z) ∈ℝ^3_>0| x=1,  y(z-1)=1}, and every positive steady state is degenerate. Thus, this system is (degenerately) multistationary and has ACR in species X (with ACR-value 1). In the next section, we see that the exceptional networks in Proposition <ref> – namely, full-dimensional, three-species networks with reactant-complex set {0,Z,2Z,X+Y} or {X,Y,X+Z,Y+Z} – do not have unconditional ACR. Indeed, this fact is a direct consequence of a more general result concerning networks with n species and n+1 reactants (Theorem <ref>). § MAIN RESULTS ON GENERAL NETWORKS The results in the prior section pertain to networks that are bimolecular, while here we analyze networks that need not be bimolecular. We consider full-dimensional networks (Section <ref>) and non-full-dimensional networks (Section <ref>) separately. §.§ Full-dimensional networks In Proposition <ref>, we saw a family of networks that admit ACR and nondegenerate multistationarity together. These networks have n reactants (where n is the number of species), but are not full-dimensional. In this subsection, we show that for full-dimensional networks, the coexistence of ACR and nondegenerate multistationarity requires at least n+2 reactants (Theorem <ref>). We also show that this lower bound is tight (Proposition <ref>). Additionally, we consider full-dimensional networks with only n+1 reactants and show that if such a network is multistationary (even if only degenerately so), then the network can not have unconditional ACR (Theorem <ref>). Let G be a full-dimensional reaction network with n species. If there exists a vector of positive rate constants κ^* such that the mass-action system (G,κ^*) has ACR and also is nondegenerately multistationary, then n ≥ 2 and G has at least n+2 reactant complexes and hence, at least n+2 reactions. It follows readily from definitions that ACR and multistationarity do not coexist in networks with only one species, so n ≥ 2. We proceed by contrapositive. We consider two cases. If G has at most n reactant complexes, then Proposition <ref> (which requires n ≥ 2) implies that every positive steady state of (G,κ^*) is degenerate and so (G,κ^*) is not nondegenerately multistationary. In the remaining case, when G has n+1 reactant complexes, Proposition <ref>(3) implies that (G,κ^*) is not nondegenerately multistationary. The next example shows that the bound in Theorem <ref> is tight for n=2. The following network is full-dimensional and has 2 species, 4 reactant complexes, and 4 reactions (the out-of-order labeling of the rate constants is to be consistent with Proposition <ref>, which appears later): { A+B κ_1 → 2B κ_3 → 2B+A  ,  B κ_2 → 0 κ_4 → A } . Observe that all reactant complexes are bimolecular, but one of the product complexes is not. 
In the next result, we show that this network exhibits ACR (in species A with ACR-value κ_2/κ_1) and nondegenerate multistationarity when κ_2^2 > 4 κ_3 κ_4 (Proposition <ref>). Among full-dimensional networks for which ACR and nondegenerate multistationarity coexist, this network is optimal in the sense that it has the fewest possible species, reactant complexes, and reactions (by Theorem <ref>). In the next result, we generalize the network in Example <ref> to a family of networks that show that the lower bound on the number of reactions in Theorem <ref> is tight for all n. The networks in the following result are also optimal in terms of the molecularity of the reactant complexes (they are bimolecular), although two of the product complexes have high molecularity. For all n ≥ 2, consider the following full-dimensional network with n species, n+2 reactions, and n+2 reactant complexes: G_n = { X_1+X_2 κ_1 → 2X_2 + X_3 + … + X_n ,  X_2 κ_2 → 0,  2X_2 κ_3 → 2X_2+X_1 ,  0 κ_4 → X_1 } ⋃{ X_j κ_j+2→ 0 | 3 ≤ j ≤ n}. For every vector of positive rate constants κ^* for which (κ^*_2)^2 > 4 κ^*_3 κ^*_4, the system (G_n, κ^*) has nondegenerate multistationarity and has ACR in species X_1. The mass-action ODEs are given by: dx_1/dt = κ_3 x_2^2 - κ_1 x_1 x_2 + κ_4 , dx_2/dt = κ_1 x_1 x_2 - κ_2 x_2 , dx_j/dt = κ_1 x_1 x_2 - κ_j+2 x_j for j ∈{3,…, n}. The steady-state equation for X_2 implies that x_1=κ_2/κ_1 at all positive steady states, so there is ACR in X_1 (whenever positive steady states exist). Next, the steady-state equations for X_1 and X_2 imply that the steady-state values of X_2 are x_2^± := (κ_2 ±√(κ_2^2 - 4κ_3κ_4))/(2κ_3). Both of these steady-state values are positive precisely when the discriminant κ_2^2 - 4κ_3κ_4 is positive (this is a straightforward computation; alternatively, see <cit.>). Now we use the steady-state equation for X_j, where j≥ 3, to compute the two positive steady states that exist whenever (κ^*_2)^2 > 4 κ^*_3 κ^*_4: ( x_1^*,  x_2^+,  (κ_1/κ_5) x_1^*x_2^+, …,  (κ_1/κ_n+2) x_1^*x_2^+) and ( x_1^*,  x_2^-,  (κ_1/κ_5) x_1^*x_2^-, …,  (κ_1/κ_n+2) x_1^*x_2^-) , where x^*_1:=κ_2/κ_1. Finally, nondegeneracy can be checked by computing the Jacobian matrix. Our next result concerns full-dimensional networks with one more reactant than the number of species, as follows. Let G be a full-dimensional network, with n species and exactly n+1 reactant complexes. If G is multistationary, then there exists a vector of positive rate constants κ such that (G,κ) has no positive steady states, and hence G does not have unconditional ACR. Assume that G is full-dimensional, has exactly n+1 reactant complexes (where n is the number of species), and is multistationary. By definition, there exists κ^* ∈ℝ^r_>0, where r is the number of reactions, such that (G,κ^*) is multistationary. Let N be the n × (n+1) matrix arising from (G,κ^*), as in (<ref>); and let A be the n × n matrix defined by G, as in Proposition <ref>. We claim that rank(N) ≤ n-1 or rank(A) ≤ n-1. Indeed, if rank(N) = n and rank(A) = n, then the proof of Proposition <ref> shows that (G,κ^*) is not multistationary, which is a contradiction. If rank(N) ≤ n-1, then Proposition <ref>(2) implies that there exists κ^**∈ℝ^r_>0 such that (G,κ^**) has no positive steady states. Similarly, in the remaining case, when rank(N) =n and rank(A) ≤ n-1, the desired result follows directly from Proposition <ref>(2).
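The proposition above can likewise be probed numerically. The following hedged sympy sketch (ours; the rate constants are arbitrary sample values satisfying κ_2^2 > 4κ_3κ_4) treats the two-species network of Example <ref>: it recovers the ACR value κ_2/κ_1 and finds two positive steady states at which the Jacobian is nonsingular.
import sympy as sp
a, b = sp.symbols('a b', positive=True)
k1, k2, k3, k4 = 1, 3, 1, 1                        # sample values with k2**2 = 9 > 4*k3*k4 = 4
fa = k3*b**2 - k1*a*b + k4                         # da/dt
fb = k1*a*b - k2*b                                 # db/dt
sols = sp.solve([fa, fb], [a, b], dict=True)
pos = [s for s in sols if all(v.is_positive for v in s.values())]
print(len(pos), {s[a] for s in pos})               # expected: 2 steady states, both with a = k2/k1 = 3
J = sp.Matrix([fa, fb]).jacobian([a, b])
print([sp.simplify(J.det().subs(s)) for s in pos]) # both determinants nonzero, i.e. nondegenerate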
§.§ Non-full-dimensional networks In an earlier section, we saw a family of networks with n species, n reactant complexes, and exactly one conservation law, for which ACR and nondegenerate multistationarity coexist (Proposition <ref>). Our next result shows that this n is the minimum number of reactant complexes (when there is one conservation law), and, furthermore, as the number of conservation laws increases, the minimum number of reactant complexes required decreases. Let G be a reaction network with n ≥ 3 species and k ≥ 1 conservation laws (more precisely, G has dimension n-k). If there exists a vector of positive rate constants κ^* such that the system (G,κ^*) is nondegenerately multistationary and has ACR in some species, then G has at least n-k+1 reactant complexes. If G has k ≥ 1 conservation laws and at most n-k reactant complexes, then Propositions <ref>(1) and <ref> together imply that G is not nondegenerately multistationary. As noted earlier, the bound in Theorem <ref> is tight for k=1, due to Proposition <ref>. We also know that, for k=n-1, the bound holds vacuously (Proposition <ref>). Our next result shows that the bound is also tight for all remaining values of k (namely, 2 ≤ k ≤ n-2). Let n ≥ 3, and let k ∈{2,3,…, n-2}. If k ≠ n-2, consider the following network: G_n,k = {X_1+X_2+∑_j=n+2-k^n X_j → 2X_3+∑_j=4^n X_j,  X_3 → X_1,  2X_3 → 2X_2}⋃{ X_4 κ_4→ 0, …, X_n+1-k κ_n-k+1→ 0 } . On the other hand, if k = n-2, consider the following network: G_n,k = {X_1+X_2+∑_j=4^n X_j → 2X_3+∑_j=4^n X_j,  X_3 → X_1,  2X_3 → 2X_2} . Each such network G_n,k satisfies the following: * G_n,k has n species, n-k+1 reactants, and n-k+1 reactions; * G_n,k has dimension n-k, and the following are k linearly independent conservation laws: x_1+x_2+x_3=T and x_j = T_j for j∈{n-k+2,…,n}; and * for every vector of positive rate constants κ, the system (G_n,k,κ) is nondegenerately multistationary and also has ACR in species X_3, X_4, …, X_n-k+1. This result can be checked directly, in a manner similar to the proof of Proposition <ref>. Indeed, for every vector of positive rate constants, there is ACR in species X_3,X_4, …,X_n-k+1 and exactly two nondegenerate positive steady states when T is large enough. The reaction networks in Proposition <ref> are not bimolecular, and they contain reactions with many catalyst-only species (namely X_n-k+2,…,X_n). We do not know whether there exist reaction networks that are bimolecular and do not contain reactions with catalyst-only species, and yet (like the networks in Proposition <ref>) show that the lower bound in Theorem <ref> is tight. § DISCUSSION In this article, we proved lower bounds in terms of the dimension and the numbers of species, reactant complexes (and thus reactions), and all complexes (both reactant and product complexes) needed for the coexistence of ACR and nondegenerate multistationarity. Additionally, we showed that these bounds are tight, via the network {A+B → 2C → 2B,  C→ A } (Example <ref>). Networks like the one in Example <ref> contain special structures that may be biologically significant. Exploring such structures will aid in establishing design principles for creating networks with ACR and multistationarity. We plan to explore such networks and their architecture in the future. In the present work, our interest in multistationarity comes from the fact that it is a necessary condition for multistability. Another interesting direction, therefore, is to investigate conditions for coexistence of ACR and multistability, rather than multistationarity.
The “minimal" networks in the current work admit only two positive steady states and are not multistable. Hence, we conjecture that the lower bounds (on dimension and the numbers of species, reactant complexes, and all complexes) for the coexistence of ACR and multistability are strictly larger than the bounds proven here for ACR and multistationarity. Finally, we are interested in the conditions for the coexistence of other combinations of biologically significant dynamical properties, such as ACR and oscillations. In addition to the minimum requirements for their coexistence, we also hope to discover new network architectures or motifs that can be used to design synthetic networks possessing these dynamical properties. §.§ Acknowledgements This project began at an AIM workshop on “Limits and control of stochastic reaction networks” held online in July 2021. AS was supported by the NSF (DMS-1752672). The authors thank Elisenda Feliu, Oskar Henriksson, Badal Joshi, and Beatriz Pascual-Escudero for many helpful discussions. plain 10 AC:non-mass David. F. Anderson and Simon. L. Cotter. Product-form stationary distributions for deficiency zero networks with non-mass action kinetics. B. Math. Biol., 78(12), 2016. ACK:product David F. Anderson, Gheorghe Craciun, and Thomas G. Kurtz. Product-form stationary distributions for deficiency zero chemical reaction networks. B. Math. Biol., 72(8), 2010. AN:non-mass David F. Anderson and Tung D. Nguyen. Results on stochastic reaction networks with non-mass action kinetics. Math. Biosci. Eng., 16(4):2118–2140, 2019. splitting-banaji Murad Banaji. Splitting reactions preserves nondegenerate behaviours in chemical reaction networks. SIAM J. Appl. Math., 83(2):748–769, 2023. banaji-boros Murad Banaji and Balázs Boros. The smallest bimolecular mass action reaction networks admitting Andronov–Hopf bifurcation. Nonlinearity, 36(2):1398, 2023. banaji-boros-hofbauer-3-rxn Murad Banaji, Balázs Boros, and Josef Hofbauer. Oscillations in three-reaction quadratic mass-action systems. Preprint, arXiv:2304.02303. boros2019existence Balázs Boros. Existence of positive steady states for weakly reversible mass-action systems. SIAM J. Math. Anal., 51(1):435–449, 2019. CIK Carsten Conradi, Alexandru Iosif, and Thomas Kahle. Multistationarity in the space of total concentrations for systems that admit a monomial parametrization. B. Math. Biol., 81:4174–4209, 2019. Deng Jian Deng, Martin Feinberg, Chris Jones, and Adrian Nachman. On the steady states of weakly reversible chemical reaction networks. Preprint, arXiv:1111.2386. dennis-shiu Allison Dennis and Anne Shiu. On the connectedness of multistationarity regions of small reaction networks. Preprint, arXiv:2303.03960. F1 Martin Feinberg. Complex balancing in general kinetic systems. Arch. Ration. Mech. Anal., 49:187–194, 1972. FeinbergLec79 Martin Feinberg. Lectures on chemical reaction networks. Delivered at the Mathematics Research Center, Univ. Wisc.-Madison. Available at http://crnt.engineering.osu.edu/LecturesOnReactionNetworks, 1979. Feinberg1987 Martin Feinberg. Chemical reaction network structure and the stability of complex isothermal reactors–I. The deficiency zero and deficiency one theorems. Chem. Eng. Sci., 42(10):2229–2268, 1987. FeinDefZeroOne Martin Feinberg. The existence and uniqueness of steady states for a class of chemical reaction networks. Arch. Ration. Mech. Anal., 132(4):311–370, 1995. feliu-henriksson-pascual Elisenda Feliu, Oskar Henriksson, and Beatriz Pascual-Escudero. 
Dimension and degeneracy of solutions of parametric polynomial systems arising from reaction networks. Preprint, arXiv:2304.02302. HT Vera Hars and János Tóth. On the inverse problem of reaction kinetics. Qualitative Theory of Differential Equations, 30:363–379, 1979. H Fritz Horn. Necessary and sufficient conditions for complex balancing in chemical kinetics. Arch. Ration. Mech. Anal., 49:172–186, 1972. H-J1 Fritz Horn and Roy Jackson. General mass action kinetics. Arch. Ration. Mech. Anal., 47:187–194, 1972. joshi-kaihnsa-nguyen-shiu-1 Badal Joshi, Nidhi Kaihnsa, Tung D. Nguyen, and Anne Shiu. Prevalence of multistationarity and absolute concentration robustness in reaction networks. Preprint, arXiv:2301.10337. atoms_multistationarity Badal Joshi and Anne Shiu. Atoms of multistationarity in chemical reaction networks. J. Math. Chem., 51:153–178, 2013. joshi2017small Badal Joshi and Anne Shiu. Which small reaction networks are multistationary? SIAM J. Appl. Dyn. Syst., 16(2):802–833, 2017. Kholodenko2010 Boris N. Kholodenko, John F. Hancock, and Walter Kolch. Signalling ballet in space and time. Nat. Rev. Mol. Cell Bio., 11:414–426, June 2010. lin-tang-zhang Kexin Lin, Xiaoxian Tang, and Zhishou Zhang. Multistationarity of reaction networks with one-dimensional stoichiometric subspaces. CSIAM Trans. Appl. Math., 3: 564–600, 2022. acr-dim-1 Eduardo R Mendoza, Dylan Antonio SJ Talabis, Editha C Jose, and Lauro L Fontanil. Absolute concentration robustness in rank-one kinetic systems. Preprint, arXiv:2304.03611. MST Nicolette Meshkat, Anne Shiu, and Angelica Torres. Absolute concentration robustness in networks with low-dimensional stoichiometric subspace. Vietnam J. Math., 50:623–651, 2022. mv-small-networks Nida Obatake, Anne Shiu, and Dilruba Sofia. Mixed volume of small reaction networks. Involve, 13:845–860, 2020. pantea-voitiuk Casian Pantea and Galyna Voitiuk. Classification of multistationarity for mass action networks with one-dimensional stoichiometric subspace. Preprint, arXiv:2208.06310. ACR Guy Shinar and Martin Feinberg. Structural sources of robustness in biochemical reaction networks. Science, 327(5971):1389–1391, 2010. shiu2019nondegenerate Anne Shiu and Timo de Wolff. Nondegenerate multistationarity in small reaction networks. Discrete Contin. Dyn. Syst. - Ser. B, 24(6), 2019. tang-wang-hopf Xiaoxian Tang and Kaizhang Wang. Hopf bifurcations of reaction networks with zero-one stoichiometric coefficients. Preprint, arXiv:2208.04196. MR4241183 Xiaoxian Tang and Hao Xu. Multistability of small reaction networks. SIAM J. Appl. Dyn. Syst., 20(2):608–635, 2021. tang-zhang Xiaoxian Tang and Zhishou Zhang. Multistability of reaction networks with one-dimensional stoichiometric subspaces. SIAM J. Appl. Dyn. Syst., 21(2):1426–1454, 2022 . tonello2017network Elisa Tonello and Matthew D. Johnston. Network translation and steady state properties of chemical reaction systems. B. Math. Biol., 80(9):2306–2337, 2018. tyson-albert John J Tyson, Reka Albert, Albert Goldbeter, Peter Ruoff, and Jill Sible. Biological switches and clocks. J. R. Soc. Interface, 5:S1–S8, 2008. smallestHopf Thomas Wilhelm and Reinhart Heinrich. Smallest chemical reaction system with Hopf bifurcation. J. Math. Chem., 17(1):1–14, 1995. Authors' addresses: Nidhi Kaihnsa, University of Copenhagen [email protected] Tung Nguyen, Texas A&M University [email protected] Anne Shiu, Texas A&M University [email protected]
http://arxiv.org/abs/2307.04468v1
20230710103412
Badgers: generating data quality deficits with Python
[ "Julien Siebert", "Daniel Seifert", "Patricia Kelbert", "Michael Kläs", "Adam Trendowicz" ]
cs.LG
[ "cs.LG", "68", "D.m" ]
Generating context-specific data quality deficits is necessary to experimentally assess the data quality of data-driven (artificial intelligence (AI) or machine learning (ML)) applications. In this paper we present badgers, an extensible open-source Python library to generate data quality deficits (outliers, imbalanced data, drift, etc.) for different modalities (tabular data, time-series, text, etc.). The documentation is accessible at <https://fraunhofer-iese.github.io/badgers/> and the source code at <https://github.com/Fraunhofer-IESE/badgers>. § INTRODUCTION §.§ Context Applications and systems based on artificial intelligence (AI), machine learning (ML), data mining or statistics (hereafter referred to as data-driven software components) are pieces of software where the decision function is not programmed in a classical way, but is based on one or more models that can be designed either automatically (e.g., through learning or mining) or based on domain-expertise hypotheses (e.g., business rules or statistical tests). Assessing the quality of such software components is not trivial, as it depends on several factors, such as the quality and quantity of the data, the type of model and how it is built, the application context, and domain expertise <cit.>. §.§ Motivation Data quality deficits (e.g., outliers, imbalanced data, missing values) can have a variety of effects on the performance of a data-driven model. A theoretical understanding of the robustness of data-driven models against specific data quality deficits is available for only a small number of models. Many can only be tested empirically against specific data quality deficits. To make matters worse, data quality deficits are context and application dependent. Assessing the robustness of data-driven software components to changes in data quality therefore requires a systematic approach. It also requires the ability to generate specific data quality deficits in order to run tests. Currently, there are many Python libraries to detect and handle data quality deficits, such as pyod[<https://pyod.readthedocs.io/en/latest/>] <cit.> for detecting outliers, imbalanced-learn[<https://imbalanced-learn.org>] <cit.> for dealing with imbalanced data, autoimpute[<https://autoimpute.readthedocs.io/en/latest/>] for imputing missing values, or great-expectations[<http://docs.greatexpectations.io>] for validation. In addition, the field of deep learning has provided us with libraries for augmenting training data (see for instance albumentations[<https://albumentations.ai/docs/>] <cit.>). However, there are very few, if any, libraries for generating context-specific data quality deficits. §.§ Contribution This paper presents badgers, a Python package dedicated to generating data quality deficits. The aim is to propose a set of standardized and extensible objects (called generators) that can take data as input, infer context information from it, and generate data quality deficits. This package relies on a few design decisions. First, it follows a simple API: each generator provides a generate function (where X is the input features and y is either a vector of class labels, a vector of regression targets, or empty).
Secondly, badgers aims to support as many data types as possible (e.g., tabular data, images, text, graphs, etc.). This means relying on mainstream and long-established libraries (such as numpy[<https://numpy.org/>], pandas[<https://pandas.pydata.org/>], or scikit-learn[<https://scikit-learn.org/stable/index.html>] for tabular data) whenever possible, or following reasonable design decisions. Finally, badgers should be structured and implemented so that it can be easily extended. §.§ Structure of the paper The paper is organized as follows. Section <ref> presents a short overview of related work. Section <ref> presents the structure and implementation of badgers. Section <ref> shows a couple of application examples. Section <ref> discusses limitations and future work, concludes the paper, and provides links to the project. § RELATED WORK Assessing the quality of ML applications is a broad area of research. In their paper <cit.>, Zhang and co-authors provide a relatively comprehensive overview of testing activities that apply to machine learning. According to their categorization, we can argue that generating data quality deficits falls into the spectrum of test input generation, that is, the generation of specific data with the purpose of evaluating specific aspects of the system under test. The techniques listed range from rule-based to generative AI techniques. Most of the methods presented there are either part of specific test frameworks or have been described in scientific papers. To the best of our knowledge, they are not part of a library dedicated to the generation of quality deficits. Data augmentation techniques are typically used in machine learning to enrich the training data set and help train models to achieve a better goodness of fit, generalize better, and become robust to some data quality issues (e.g., noise). They usually consist of specific transformations (like rotations or scaling for images) that, in principle, should not change the semantics of the data. Recent surveys, like <cit.> for images and <cit.> for text, provide an overview of the different techniques used in data augmentation. In Section <ref>, we mentioned existing libraries for data augmentation. Although their main goal is not to specifically generate data quality deficits, data augmentation methods provide interesting algorithms that can be reused for our purpose. When it comes to generating data quality deficits from existing data, very few papers provide overviews of existing methods and implementations. For instance, <cit.> discusses how to generate outliers from existing data. While the authors seem to have implemented a number of these methods to test them empirically, no implementation is actually available. <cit.> discusses how to generate missing values. Note that the methods discussed in <cit.> have been implemented in R[<https://cran.r-project.org/web/packages/missMethods/>] but not in Python. In summary, there exists a variety of methods for generating data quality deficits, but very few are available in a dedicated Python library. § PROPOSED SOLUTION: BADGERS §.§ Overview Badgers is a Python library for generating data quality deficits from existing data. As a basic principle, badgers provides a set of objects called generators that follow a simple API: each generator provides a generate function that takes as arguments X (the input features) and y (the class labels, the regression target, or None) and returns the corresponding transformed X and y.
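To make this interface concrete, the following is a minimal, hypothetical sketch of a generator exposing such a generate(X, y) method; the class name and its parameters are illustrative and are not taken from the badgers source code, only the calling convention mirrors the one described above.
import numpy as np

class GaussianNoiseSketch:
    """Illustrative generator: adds Gaussian white noise to the input features."""
    def __init__(self, noise_std=0.1, random_state=0):
        self.noise_std = noise_std
        self.rng = np.random.default_rng(random_state)
    def generate(self, X, y=None):
        X = np.asarray(X, dtype=float)
        # the transformed features are returned together with the (unchanged) targets
        return X + self.rng.normal(0.0, self.noise_std, size=X.shape), y

X_noisy, _ = GaussianNoiseSketch(noise_std=0.5).generate(np.random.default_rng(1).normal(size=(100, 3)))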
As an example, Figure <ref> shows the generate function of the generator that adds Gaussian white noise to some existing data. The code is divided into two main modules: one that contains everything that is generic to all generators, such as base classes, decorators, and utilities, and the generators module. The generators themselves are stored under the generators module, which in turn is divided into submodules, each representing a data type (e.g., tabular data, time series, text, etc.). Each submodule hosts the generator implementations dedicated to one specific data quality deficit (such as outliers, drift, missingness, etc.) for a specific data type. Figure <ref> shows the details of the current structure. §.§ Available features Badgers is currently under development and the list of features will most probably evolve in the near future. For the moment, the focus has been more on tabular data. As shown in Figure <ref>, the tabular-data module contains five submodules; as their names suggest, each submodule implements generators dedicated to specific data quality deficits. For time series data (badgers.generators.time_series), the submodules noise and outliers are available. For text data (badgers.generators.text), only one submodule (typos) is available at the moment. §.§.§ Tabular Data badgers.generators.tabular_data.drift Drift happens when some statistical properties of the data change over time <cit.>. Two generators are currently available in this module; Figures <ref> and <ref> illustrate how they work. Simply put, the first one randomly shifts the values of each column independently of one another, which amounts to translating the data (see Figure <ref>): the input features are first standardized (mean = 0, var = 1) and a random number is added to each column. The second one applies a similar transformation, but to instances belonging to the same class: all the instances of a given class are translated, and the translation differs from class to class (see Figure <ref>). badgers.generators.tabular_data.imbalanced Whereas imbalanced data is usually understood in the context of classification <cit.>, when some classes are over- or under-represented, we use a broader definition: for us, a data set is said to be imbalanced when some statistical properties of the data are over- or under-represented in comparison to a ground truth. Currently, three generators have been implemented; simply put, all of them sample the original data set with replacement. The first samples data points belonging to each class so as to obtain a specified class distribution (e.g., 10% of class 1, 20% of class 2, and 70% of class 3, see Figure <ref>). The second samples data points according to the regression target and expects a function that maps the values of y to a sampling probability (see Figure <ref>). Finally, the third performs a similar transformation, but the sampling probability now depends upon the input feature values (see Figure <ref>). badgers.generators.tabular_data.noise Currently, only one generator has been implemented: it adds Gaussian white noise to the input features (see Figure <ref>). badgers.generators.tabular_data.outliers Two types of generators are currently available: generators that directly generate outliers from the input features, and generators that first reduce the dimensionality of the input features and then apply an outlier generator from the first category. Four generators belong to the first category; Figures <ref>, <ref>, <ref>, and <ref> illustrate how they create outliers.
The first generates outliers by creating data points where each feature i takes a value outside the range ]μ_i-3σ_i, μ_i+3σ_i[, where μ_i and σ_i are the mean and the standard deviation of feature i (see Figure <ref>). The second generates outliers by creating data points on a hypersphere of center μ and of radius larger than 3σ (see Figure <ref>). The remaining two generate outliers by creating data points that belong to regions of low density; the difference between them lies in their low-density estimation methods: one approximates regions of low density by computing a histogram of the data (see Figure <ref>), the other uses a kernel density estimator (see Figure <ref>). The generator of the second category first standardizes the data and applies a dimensionality reduction technique (so far, badgers supports scikit-learn transformers that provide an inverse transformation). The outliers are then generated using one of the generators mentioned above. Finally, the standardization and the dimensionality reduction are inverted. §.§.§ Time series data Time series data is currently supported in badgers in the form of numpy arrays and pandas dataframes. badgers.generators.time_series.noise Currently, only one generator has been implemented: it adds Gaussian white noise to the input features; the implementation is the same as for tabular data. Figure <ref> illustrates this generator. badgers.generators.time_series.outliers Here, some existing instances are replaced with outliers. Currently, only one generator is implemented: it creates locally extreme values by changing the values of some randomly selected data points x(t_i) ∈ X (see Figure <ref>). The values are sampled outside the range ]μ_j,Δ-3σ_j,Δ, μ_j,Δ+3σ_j,Δ[, where μ_j,Δ and σ_j,Δ are the mean and the standard deviation of the j^th feature computed in the local time interval Δ = [t_i - n, t_i + n]. §.§.§ Text Text data is currently supported in badgers in the form of lists of strings. badgers.generators.text.typos For now, only one generator is implemented: it randomly swaps adjacent letters in words longer than three letters, except for the first and the last letters. As an illustration, the sentence "the quick brown fox jumps over the lazy dog" becomes "the qucik brwon fox jupms oevr the lzay dog" after applying this generator. § EXAMPLES We implemented several examples in the form of notebooks (accessible at <https://fraunhofer-iese.github.io/badgers/> under the tutorials section). The next two figures provide some examples to illustrate the use of a single generator (Figure <ref>), as well as the pipelining of several generators (Figure <ref>). § CONCLUSION This paper gave an overview of badgers, a Python package dedicated to generating data quality deficits. The library is in a relatively early development stage. Until now, our focus has been to develop the library structure, the API, as well as some relatively simple generators. The goal was first and foremost to show the potential of such a library. The library has already been used in the context of internal projects, first to conduct robustness tests and to augment data. By open-sourcing it, we hope to provide not only a tool that eases robustness tests of data-driven applications but also to foster discussions on the topic of generating data quality deficits. Future work will focus both on developing new generators and on testing the applicability of the library in the context of data science projects.
Discussions and design decisions will be needed to prioritize the work and to decide how to improve the support of other types of data (for instance images, graphs, or geolocated data). Finally, badgers can be installed with the Python package installer pip[<https://pip.pypa.io/en/stable/>]. The full documentation is accessible at <https://fraunhofer-iese.github.io/badgers/>. The source code of badgers is available under the BSD-3 license at <https://github.com/Fraunhofer-IESE/badgers>.
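To give a flavor of the pipelining idea mentioned in the examples section, here is a hedged, self-contained sketch that chains two simple data-quality-deficit transforms (Gaussian noise, then class imbalance by re-sampling). The function names and the chosen class proportions are illustrative and do not mirror the badgers API.
import numpy as np
from sklearn.datasets import make_blobs

rng = np.random.default_rng(0)

def add_gaussian_noise(X, y, noise_std=0.3):
    return X + rng.normal(0.0, noise_std, size=X.shape), y

def make_imbalanced(X, y, proportions=(0.1, 0.2, 0.7)):
    # re-sample (with replacement) so that the classes follow the given proportions
    idx = []
    for cls, p in zip(np.unique(y), proportions):
        members = np.flatnonzero(y == cls)
        idx.extend(rng.choice(members, size=int(p * len(y)), replace=True))
    idx = np.asarray(idx)
    return X[idx], y[idx]

X, y = make_blobs(n_samples=300, centers=3, random_state=0)
for transform in (add_gaussian_noise, make_imbalanced):   # a minimal "pipeline"
    X, y = transform(X, y)
print(X.shape, np.bincount(y))   # roughly a 10% / 20% / 70% class distribution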
http://arxiv.org/abs/2307.04146v1
20230709103222
Intrinsic Separation Principles
[ "Boris Houska" ]
math.OC
[ "math.OC" ]
This paper is about output-feedback control problems for general linear systems in the presence of given state-, control-, disturbance-, and measurement-error constraints. Because the traditional separation theorem in stochastic control is inapplicable to such constrained systems, a novel information-theoretic framework is proposed. It leads to an intrinsic separation principle that can be used to break the dual control problem for constrained linear systems into a meta-learning problem that minimizes an intrinsic information measure and a robust control problem that minimizes an extrinsic risk measure. The theoretical results in this paper can be applied in combination with modern polytopic computing methods in order to approximate a large class of dual control problems by finite-dimensional convex optimization problems. § INTRODUCTION The separation principle in stochastic control is a fundamental result in control theory <cit.>, closely related to the certainty-equivalence principle <cit.>. It states that certain problems of optimal control and state estimation can be decoupled. For general control systems, however, the separation theorem fails to hold. Thus, if one is interested in finding optimal output-feedback control laws for such systems, one needs to solve a rather complicated dual control problem <cit.>. There are two cases where such dual- or output-feedback control problems are of interest: * The first case is that we have an uncertain nonlinear system—in the easiest case, without state- and control constraints—for which the information content of future measurements depends on the control actions. In practice, this dependency can often be neglected because, at least for small measurement errors and process noise, and under certain regularity assumptions, the separation theorem holds in a first-order approximation <cit.>. Nevertheless, there are some nonlinear systems that can only be stabilized if this dependency is taken into account <cit.>. * The second case is that we have an uncertain linear system with state- and control constraints. Here, the process noise and future measurement errors have to be taken into account if one wants to operate the system safely, for instance, by designing output-feedback laws that ensure constraint satisfaction for all possible uncertainty scenarios. The current paper is about the second case. This focus is motivated by the recent trend towards the development of safe learning and control methods <cit.>. §.§ Literature Review Dual control problems were introduced by Feldbaum in the early 1960s <cit.>. Mature game-theoretic and stochastic methods for analyzing such dual- and output-feedback control problems have, however, only been developed much later. They go back to the seminal work of N.N. Krasovskii <cit.> and A.B. Kurzhanskii <cit.>. Note that these historical articles are complemented by modern set-theoretic control theory <cit.>.
Specifically, in the context of constrained linear systems, set-theoretic notions of invariance under output feedback can be found in the work of Dórea <cit.>, which focuses on the invariance of a single information set, and in the work of Artstein and Raković <cit.>, which focuses on the invariance of a collection of information sets. Moreover, a variety of set-theoretic output-feedback control methods for constrained linear systems have appeared in <cit.>. These have in common that they propose to take bounds on measurement errors into account upon designing a robust predictive controller. In this context, the work of Goulart and Kerrigan must be highlighted <cit.>, who found a remarkably elegant way to optimize uncertainty-affine output feedback control laws for constrained linear systems. A general overview of methods for output-feedback and dual model predictive control (MPC) can be found in <cit.>, and the reference therein. §.§ Contribution The three main contributions of this paper can be outlined as follows. Meta Information Theory. While traditional information theories are based on the assumption that one can learn from accessible data, models for predicting the evolution of an uncertain control system require a higher level of abstraction. Here, one needs a prediction structure that is capable of representing the set of all possible future information states of a dynamic learning process without having access to future measurement data. Note that a comprehensive and thorough discussion of this aspect can be found in the above mentioned article by Artstein and Raković <cit.>, in which notions of invariance under output-feedback for collections of information sets are introduced. Similar to their construction, the current article proposes a meta information theoretic framework that is based on a class of information set collections, too. A novel idea of the current article in this regard, however, is the introduction of intrinsic equivalence relations that can be used to categorize information sets with respect to their geometric properties. This leads to an algebraic-geometric definition of meta information spaces in which one can distinguish between extrinsic and intrinsic information measures. Here, intrinsic information about a system is needed to predict what we will know about its states, while extrinsic information is needed to predict and assess the risk that is associated to control decisions. Intrinsic Separation Principle. The central contribution of this paper is the introduction of the intrinsic separation principle. It formalizes the fact that the intrinsic information content of a constrained linear system does not depend on the choice of the control law. An important consequence of this result is that a large class of dual receding horizon control problems can be solved by separating them into a meta learning problem that predicts intrinsic information and a robust control problem that minimizes extrinsic risk measures. Moreover, the intrinsic separation principle can be used to analyze the existence of solutions to dual control problems under certain assumptions on the continuity and monotonicity of the objective function of the dual control problem. Polytopic Dual Control. The theoretical results in this paper are used to develop practical methods to approximately solve dual control problems for linear systems with convex state- and control constraints as well as polytopic process noise and polytopic measurement error bounds. 
In order to appreciate the novelty of this approach, it needs to be recalled first that many existing robust output-feedback control methods, for instance the state-of-the-art output-feedback model predictive control methods in <cit.>, are based on a set-theoretic or stochastic analysis of coupled system-observer dynamics, where the control law depends on a state estimate. This is in contrast to the presented information-theoretic approach to dual control, where control decisions are made based on the system's true information state rather than a state estimate. In fact, for the first time, this paper presents a polytopic dual control method that neither computes vector-valued state estimates nor introduces an affine observer structure. Instead, the discretization of the control law is based on optimizing a finite number of control inputs that are associated with so-called extreme polytopes. The shapes, sizes, and orientations of these extreme polytopes encode the system's intrinsic information, while their convex hull encodes the system's extrinsic information. The result of this discretization is a finite-dimensional convex optimization problem that approximates the original dual control problem. §.§ Overview The paper is structured as follows. * Section <ref> reviews the main idea of set-theoretic learning and introduces related notation. * Section <ref> establishes the technical foundation of this article. This includes the introduction of meta information spaces and a discussion of the difference between intrinsic and extrinsic information measures. * Section <ref> introduces the intrinsic separation principle for constrained linear systems, see Theorem <ref>. * Section <ref> discusses how to resolve dual control problems by intrinsic separation, see Theorem <ref>. * Section <ref> presents methods for discretizing dual control problems using polytopic information set approximations. The main technical result is summarized in Theorem <ref>, and a numerical case study is presented. * Finally, Section <ref> summarizes the highlights of this paper. §.§ Notation Throughout this paper, 𝕂^n denotes the set of closed subsets of ℝ^n, while 𝕂_c^n denotes the set of compact subsets of ℝ^n. The latter is equipped with the Hausdorff distance d_H(X,Y) := max{max_x ∈ X min_y ∈ Y ‖ x-y ‖ , max_y ∈ Y min_x ∈ X ‖ x-y ‖} for all X,Y ∈𝕂_c^n, where ‖·‖: ℝ^n →ℝ denotes a norm on ℝ^n, such that (𝕂_c^n,d_H) is a metric space. This definition can be extended to 𝕂^n as follows: if the maxima in the above definition do not exist for a given pair X,Y ∈𝕂^n, we set d_H(X,Y) = ∞. The pair (𝕂^n,d_H) is called an extended metric space. Finally, the notation cl(·) is used to denote the closure, assuming that it is clear from the context what the underlying metric distance function is. § INFORMATION SPACES An information space (ℐ,d,⊓) is a space in which learning can take place. This means that (ℐ,d) is an extended metric space that is equipped with a learning operator ⊓: ℐ×ℐ→ℐ , such that (ℐ,⊓) is a semi-group. Among the most important examples for such spaces is the so-called set-theoretic information space, which is introduced below. §.§ Set-Theoretic Learning In the context of set-theoretic learning <cit.>, ℐ = 𝕂^n denotes the set of closed subsets of the vector space ℝ^n, while d = d_H denotes the (extended) Hausdorff distance.
Here, the standard intersection operator takes the role of a learning operator, ⊓ = ∩ , recalling that the intersection of closed sets is closed. The motivation behind this definition can be outlined as follows: let us assume that we currently know that a vector x ∈ℝ^n is contained in a given set X ∈𝕂^n. If we receive additional information, for instance, that the vector x is also contained in the set Y ∈𝕂^n, our posterior information is that x is contained in the intersection of the sets X and Y, which is denoted by X ∩ Y. Note that the above set-theoretic framework is compatible with continuous functions. If f: ℝ^n →ℝ^m denotes such a continuous function, the notation ∀ X ∈𝕂^n, f(X) := { f(x) | x ∈ X } is used to denote its associated continuous image map. It maps closed sets in ℝ^n to closed sets in ℝ^m. Similarly, for affine functions of the form f(x) = A x + b, the notation AX+b = { Ax+b | x ∈ X } is used, where A and b are a matrix and a vector with compatible dimensions. And, finally, the Minkowski sum X+Y := { x+y | x ∈ X, y ∈ Y } is defined for all sets X,Y ∈𝕂^n. Set-theoretic learning models can be augmented by probability measures in order to construct statistical information spaces <cit.>. In such a context, every element of ℐ consists of a set X and a probability distribution ρ_X on X. A corresponding metric is then constructed by using the Wasserstein distance <cit.>. Moreover, if (X,ρ_X) ∈ℐ and (Y,ρ_Y) ∈ℐ are two independent random variables, the learning operation (X,ρ_X) ⊓ (Y,ρ_Y) := ( X ∩ Y , ρ_XY ) has the form of a Bayesian learning update, ρ_XY(x) := ρ_X(x)ρ_Y(x) / ∫_X ∩ Y ρ_X(y)ρ_Y(y) dy . Thus, even though the current paper focuses—for simplicity of presentation—on set-theoretic learning, most of the developments below can be generalized to statistical learning processes by augmenting the support sets with probability distributions or probability measures <cit.>. §.§ Expectation and Deviation Expectation and deviation functions are among the most basic tools for analyzing learning processes <cit.>. The expectation function is defined by ∀ X ∈𝕂_c^n, E(X) := ∫_X x dx . It is a continuous function on 𝕂_c^n that satisfies E(AX+b) = A E(X) + b . For the special case that the compact set X is augmented by its associated uniform probability distribution, as discussed in Remark <ref>, the above definition of E(X) corresponds to the traditional definition of expected value functions in statistics. Similarly, a deviation function D: 𝕂_c^n →ℝ is a continuous and radially unbounded function that satisfies * D(X) ≥ 0, * D(X) = 0 if and only if X = { E(X) }, * D(X) = D(X-E(X)), and * D(X ∩ Y) ≤ D(X), for all X,Y ∈𝕂_c^n. While statistical learning models often use the variance of a random variable as a deviation measure, a more natural choice for D in the context of set-theoretic learning is given by the diameter, D(X) = diam(X) := max_x,y ∈ X ‖ x - y ‖ . A long and creative list of other possible choices for D can be found in <cit.>. § META LEARNING The above definition of information spaces assumes that information or data is accessible at the time at which learning operations take place. If one wishes to predict the future evolution of a learning process, however, one faces the problem that such data is not available yet. Therefore, this section proposes to introduce a meta information space in which one can represent the set of all possible posterior information states of a learning process without having access to its future data.
Informally, one could say that a meta information space is an abstract space in which one can “learn how to learn”. §.§ Information Ensembles The focus of this and the following sections is on the set-theoretic framework recalling that 𝕂_c^n denotes the set of compact subsets of ℝ^n. A set 𝔛⊆𝕂_c^n is called an information ensemble of 𝕂_c^n if ∀ Y ∈𝕂_c^n, X ∩ Y ∈𝔛 for all X ∈𝔛. Because ∅ = X ∩∅∈𝔛, any information ensemble contains the empty set. If 𝔛⊆𝕂_c^n is an information ensemble, then cl(𝔛) is an information ensemble, too. Let 𝔛 be a given information ensemble and let X_∞∈cl(𝔛) be a given set in its closure. Then there exists a Cauchy sequence X_1,X_2,…∈𝔛 such that X_∞ lim_k →∞ X_k ∈ cl(𝔛) . Next, let Y ∈𝕂_c^n be an arbitrary compact set. The case X_∞∩ Y = ∅ is trivial, since ∅∈𝔛⊆cl(𝔛). Next, if X_∞∩ Y ≠∅, then there exists for every ξ∈ X_∞∩ Y an associated sequence z_1(ξ) ∈ X_1, z_2(ξ) ∈ X_2, … with lim_k →∞ z_k(ξ) = ξ . This construction is such that the sets Z_k cl( { z_k(ξ) | ξ∈ X_∞∩ Y } ) satisfy Z_k = X_k ∩ Z_k ∈𝔛, since 𝔛 is an information ensemble. Consequently, it follows that X_∞∩ Y = lim_k →∞ Z_k ∈ cl(𝔛) . Thus, cl(𝔛) is an information ensemble, as claimed by the statement of the proposition. Information ensembles can be used to construct information spaces, as pointed out below. Let 𝔛 be an information ensemble of 𝕂_c^n. Then (𝔛,d_H,∩) is an information space. Condition (<ref>) implies that X ∩ Y ∈𝔛 for all X,Y ∈𝔛. Thus, (𝔛,∩) is a subsemigroup of (𝕂_c^n,∩). Moreover, d_H defines a metric on 𝔛. Consequently, (𝔛,d_H,∩) is an information space. The difference between information ensembles and more general set collections, as considered in <cit.>, is that Property (<ref>) is enforced. Note that this property makes a difference in the context of developing a coherent learning algebra: if (<ref>) would not hold, (𝔛,∩) would, in general, not be a subsemigroup of (𝕂_c^n,∩). §.§ Extreme Sets A set X ∈cl(𝔛) of a given information ensemble 𝔛⊆𝕂_c^n is called an extreme set of 𝔛 if ∀ Y ∈cl(𝔛) ∖{ X }, X ∩ Y ≠ X . The set of extreme sets of 𝔛 is denoted by ∂𝔛. It is called the boundary of the information ensemble 𝔛. Clearly, we have ∂ X ⊆cl(𝔛), but, in general, ∂ X is not an information ensemble. Instead, ∂𝔛 can be interpreted as a minimal representation of the closure of 𝔛, because cl(𝔛) = { Y ∈𝕂_c^n | ∃ X ∈∂𝔛, Y ⊆ X } . Reversely, the closure of 𝔛 can be interpreted as the smallest information ensemble that contains ∂𝔛. §.§ Meta Information Spaces Let 𝕀^n denote the set of closed information ensembles of 𝕂_c^n; that is, the set of closed subsemigroups of (𝕂_c^n,∩) that are closed under intersection with sets in 𝕂_c^n. Similarly, the notation 𝕀_c^n will be used to denote the set of compact information ensembles of the information space (𝕂_c^n,d_H,∩). Next, the meta learning operator is introduced by defining 𝔛⊓𝔜 { X ∩ Y | X ∈𝔛, Y ∈𝔜 } for all 𝔛,𝔜∈𝕀^n. A corresponding metric distance function, Δ_H is given by Δ_H(𝔛, 𝔜) max{max_X ∈𝔛min_Y ∈𝔜 d_H(X,Y), max_Y ∈𝔜min_X ∈𝔛 d_H(X,Y) } for all 𝔛, 𝔜∈𝕀_c^n such that (𝕀_c^n,Δ_H) is a metric space. Similar to the construction of the Hausdorff distance d_H, the definition of Δ_H can be extended to 𝕀^n by understanding the above definition in the extended value sense. The following proposition shows that the triple (𝕀^n, Δ_H, ⊓) is an information space. It is called the meta information space of (𝕂_c^n, d_H, ∩). The triple (𝕀^n, Δ_H, ⊓) is an information space. 
It can itself be interpreted as a set-theoretic information space in the sense that we have 𝔛⊓𝔜 = 𝔛∩𝔜 for all 𝔛,𝔜∈𝕀^n. The proof of this proposition is divided into two parts: the first part shows that (<ref>) holds and the second part uses this result to conclude that (𝕀^n, Δ_H, ⊓) is an information space. Part I. Let 𝔛,𝔜∈𝕀^n be given information ensembles. For any X ∈𝔛∩𝔜 the intersection relation X ∩ X = X ∈𝔛⊓𝔜 holds. But this implies that 𝔛∩𝔜 ⊆ { X ∩ Y | X ∈𝔛, Y ∈𝔜 } = 𝔛⊓𝔜 . In order to also establish the reverse inclusion, assume that Z ∈𝔛⊓𝔜 is a given set. It can be written in the form Z = X ∩ Y with X ∈𝔛 and Y ∈𝔜. Clearly, we have Z ⊆ X and Z ⊆ Y. Moreover, we have Z ∈𝕂_c^n, since the intersection of compact sets is compact. Thus, since 𝔛 and 𝔜 are information ensembles, (<ref>) implies that Z = X ∩ Z ∈𝔛 and Z = Y ∩ Z ∈𝔜. But this is the same as saying that Z ∈𝔛∩𝔜, which implies 𝔛∩𝔜⊇𝔛⊓𝔜. Together with the above inclusion, this yields (<ref>). Part II. Note that (𝕀^n,∩) is a semigroup, which follows from the definition of intersection operations. Moreover, (𝕀^n,Δ_H) is, by construction, an extended metric space. Thus, (𝕀^n, Δ_H, ⊓) is indeed an information space, as claimed by the statement of this proposition. The triple (𝕀_c^n,Δ_H,⊓) is also an information space. It can be interpreted as a sub-meta information space of (𝕀^n,Δ_H,⊓). The statement of this corollary follows immediately from the previous proposition, since the intersection of compact sets is compact; that is, (𝕀_c^n,⊓) is a subsemigroup of (𝕀^n,⊓). The statement of the above proposition about the fact that (𝕀^n,Δ_H,⊓) can be interpreted as a set-theoretic information space can be further supported by observing that this space is naturally compatible with continuous functions, too. Throughout this paper, the notation f(𝔛) { f(X) | X ∈𝔛 } is used for any 𝔛∈𝕀^n, recalling that f(X) denotes the compact image set of a continuous function f on a compact set X ∈𝕂_c^n. Due to this continuity assumption on f, closed information ensembles are mapped to closed information ensembles. §.§ Interpretation of Meta Learning Processes Meta information spaces can be used to analyze the evolution of learning processes without having access to data. In order to discuss why this is so, a guiding example is introduced: let us consider a set-theoretic sensor, which returns at each time instance a compact information set X ∈𝕂_c^1 containing the scalar state x of a physical system, x ∈ X. If the absolute value of the measurement error of the sensor is bounded by 1, this means that X ⊆ [a,a+2] for at least one lower bound a ∈ℝ. The closed but unbounded information ensemble that is associated with such a sensor is given by 𝔙 = { X ∈𝕂_c^1 | ∃ a ∈ℝ: X ⊆ [a,a+2] }∈𝕀^1 . It can be interpreted as the set of all information sets that the sensor could return when taking a measurement. Next, in order to illustrate how an associated meta learning process can be modeled, one needs to assume that prior information about the physical state x is available. For instance, if x is known to satisfy x ∈ [-3,3], this would mean that our prior is given by 𝔛 = { X ∈𝕂_c^1 | X ⊆ [-3,3] } . In such a situation, a meta learning process is—due to Proposition <ref>—described by an update of the form 𝔛^+ = 𝔛⊓𝔙 = 𝔛∩𝔙 , where 𝔛^+ denotes the posterior, 𝔛^+ = { X ∈𝕂_c^1 | [ ∃ a ∈ℝ:; X ⊆ [ max{ a,-3}, 2 + min{ 1, a } ] ]} . It is computed without having access to any sensor data. §.§ Intrinsic Equivalence Equivalence relations can be used to categorize compact information sets with respect to their geometric properties.
In the following, we focus on a particular equivalence relation. Namely, we consider two sets X,Y ∈𝕂_c^n equivalent, writing X ≃ Y, if they have the same shape, size, and orientation. This means that X ≃ Y ⟺ ∃ a ∈ℝ^n: X + a = Y . The motivation for introducing this particular equivalence relation is that two information sets X and Y can be considered equally informative if they coincide after a translation. Two information ensembles 𝔛,𝔜⊆𝕂_c^n are called intrinsically equivalent, 𝔛∼𝔜, if their quotient spaces coincide, (𝔛/≃) = (𝔜/≃) . The intrinsic equivalence relation ∼ from the above definition is—as the name suggests—an equivalence relation. This follows from the fact that 𝔛∼𝔜 if and only if [ ∀ X ∈𝔛, ∃ a ∈ℝ^n: X + a ∈ 𝔜; and ∀ Y ∈𝔜, ∃ b ∈ℝ^n: Y + b ∈ 𝔛 , ] which, in turn, follows after substituting the above definition of ≃ in (<ref>). If 𝔛,𝔜⊆𝕂_c^n are intrinsically equivalent information ensembles, 𝔛∼𝔜, their closures are intrinsically equivalent, too, cl(𝔛) ∼ cl(𝔜) . Proposition <ref> ensures that the closures of 𝔛 and 𝔜 are information ensembles, cl(𝔛) ∈𝕀^n and cl(𝔜) ∈𝕀^n. Next, there exists for every X_∞∈cl(𝔛) a convergent sequence of sets X_1,X_2,…∈𝔛 such that X_∞ = lim_k →∞ X_k . Moreover, since 𝔛∼𝔜, there also exists a sequence a_1,a_2,…∈ℝ^n such that the sequence Y_k X_k + a_k ∈ 𝔜 remains in 𝔜. Because 𝔛 and 𝔜 are compact, the sequence of offsets a_k must be bounded. Thus, it has a convergent subsequence, a_j_1,a_j_2,…∈ℝ^n, with limit a_∞ lim_k →∞ a_j_k ∈ ℝ^n . This construction is such that X_∞ + a_∞ = lim_k →∞ { X_j_k + a_j_k} ∈ cl(𝔜) . A completely analogous statement holds after replacing the roles of 𝔛 and 𝔜. Consequently, the closures of 𝔛 and 𝔜 are intrinsically equivalent, which corresponds to the statement of the proposition. §.§ Extrinsic versus Intrinsic Information Throughout this paper, it will be important to distinguish between extrinsic and intrinsic information. Here, the extrinsic information of an information ensemble is encoded by the union of its elements, namely, the extrinsic information set. It describes present information. The extrinsic information content of an information ensemble can be quantified by extrinsic information measures: An information measure f: 𝕀_c^n→ℝ is called extrinsic, if there exist a function g: 𝕂_c^n →ℝ with ∀𝔛∈𝕀_c^n, f(𝔛) = g ( ⋃_X ∈𝔛 X ) . In contrast to extrinsic information, the intrinsic information of an information ensemble 𝔛 is encoded by its quotient space, 𝔛/≃. It describes future information. In order to formalize this definition, it is helpful to introduce a shorthand for the meta quotient space ℚ_c^n 𝕀_c^n/∼ . In analogy to Definition <ref>, the intrinsic information of an information ensemble can be quantified by intrinsic information measures: An information measure f: 𝕀_c^n→ℝ is called intrinsic, if there exist a function g: ℚ_c^n →ℝ with ∀ X ∈𝕀_c^n, f(𝔛) = g(𝔛/≃) . In order to develop a stronger intuition about the difference between extrinsic and intrinsic information measures, it is helpful to extend the definitions of the expectation and deviation functions E and D from the original information space setting in Section <ref>. These original definitions can be lifted to the meta information space setting by introducing their associated extrinsic expectation 𝔈 and extrinsic deviation 𝔇, given by 𝔈(𝔛) E( ⋃_X ∈𝔛 X ) and 𝔇(𝔛) D( ⋃_X ∈𝔛 X ) for all 𝔛∈𝕀_c^n. Note that 𝔈 and 𝔇 are continuous functions, which inherit the properties of E and D. Namely, the relation 𝔈(A 𝔛 + b ) = A 𝔈(𝔛) + b holds. 
Similarly, 𝔇 satisfies all axioms of a deviation measure in the sense that 2pt * 𝔇(𝔛) ≥ 0, * 𝔇(𝔛) = 0 if and only if 𝔛 = {{𝔈( 𝔛 ) }}, * 𝔇(𝔛) = 𝔇(𝔛-𝔈(𝔛)), and * 𝔇(𝔛⊓𝔜) ≤𝔇(𝔛), for all 𝔛,𝔜∈𝕀_c^n. Note that such extrinsic deviation measures need to be distinguished carefully from intrinsic deviation measures. Here, a function 𝔇^∘: 𝕀_c^n→ℝ, is called an intrinsic deviation measure if it is a continuous and intrinsic function that satisfies 2pt * 𝔇^∘(𝔛) ≥ 0, * 𝔇^∘(𝔛) = 0 if and only if 𝔛∼{{𝔈( 𝔛 ) }}, * 𝔇^∘(𝔛) = 𝔇^∘(𝔛-𝔈(𝔛)), and * 𝔇^∘(𝔛⊓𝔜) ≤𝔇^∘(𝔛), for all 𝔛,𝔜∈𝕀_c^n. The second axiom is equivalent to requiring that 𝔇^∘ is positive definite on the quotient space ℚ_c^n. In order to have a practical example in mind, we introduce the particular function ∀𝔛∈𝕀_c^n, 𝔇_∞^∘(𝔛) = max_X ∈𝔛 max_x,y ∈ X x - y , which turns out to be an intrinsic information measure, as pointed out by the following lemma. The function 𝔇_∞^∘, defined by (<ref>), is an intrinsic deviation measure on 𝕀_c^n. Let 𝔛∈𝕀_c^n be a given information ensemble and let X^⋆ be a maximizer of (<ref>), such that 𝔇_∞^∘( 𝔛 ) = diam(X^⋆) = max_x,y ∈ X^⋆ x - y . If 𝔜∈𝕀_c^n is an intrinsically equivalent ensemble with 𝔛∼𝔜, then there exists an offset vector a^⋆∈ℝ^n such that X^⋆+a^⋆∈𝔜. Thus, we have 𝔇_∞^∘(𝔜) = max_Y ∈𝔜 diam(Y) ≥ diam(X^⋆ + a^⋆) = diam(X^⋆ + a - E(X^⋆ + a) ) = diam(X^⋆ - E(X^⋆)) = diam(X^⋆) = 𝔇_∞^∘(𝔛) , where the equations in the second, third, and fourth line follow by using the axioms of D and E from Section <ref>. The corresponding reverse inequality follows by using an analogous argument exchanging the roles of 𝔛 and 𝔜. Thus, we have 𝔇_∞^∘(𝔛) = 𝔇_∞^∘(𝔜). This shows that 𝔇_∞^∘ is an intrinsic information measure. The remaining required properties of 𝔇_∞^∘ are directly inherited from the diameter function, recalling that the diameter is a continuous deviation function that satisfies the corresponding axioms from Section <ref>. This yields the statement of the lemma. Let us revisit the tutorial example from Section <ref>, where we had considered the case that 𝔛 = { X ∈𝕂_c^1 | X ⊆ [-3,3] } and 𝔛^+ = { X ∈𝕂_c^1 | [ ∃ a ∈ℝ:; X ⊆ [ max{ a,-3}, 2 + min{ 1, a } ] ]} denote the prior and posterior of a data-free meta learning process. If we set D(X) = diam(X) and define 𝔇 and 𝔇_∞^∘ as above, then 𝔇( 𝔛 ) = 𝔇( 𝔛^+ ) = 6 . An interpretation of this equation can be formulated as follows: since our meta learning process is not based on actual data, the extrinsic information content of the prior 𝔛 and the posterior 𝔛^+ must be the same, which implies that their extrinsic deviations must coincide. This is in contrast to the intrinsic deviation measure, 𝔇_∞^∘( 𝔛 ) = 6 > 2 = 𝔇_∞^∘( 𝔛^+ ), which predicts that no matter what our next measurement will be, the diameter of our posterior information set will be at most 2. § INTRINSIC SEPARATION PRINCIPLE The goal of this section is to formulate an intrinsic separation principle for constrained linear systems. §.§ Constrained Linear Systems The following considerations concern uncertain linear discrete-time control systems of the form [ x_k+1 = A x_k + B u_k + w_k; η_k = C x_k + v_k . ] Here, x_k ∈ℝ^n denotes the state, u_k∈𝕌 the control, w_k ∈𝕎 the disturbance, the measurement, and v_k ∈𝕍 the measurement error at time k ∈ℤ. The system matrices A, B, and C as well as the state, control, disturbance, and measurement error constraints sets, 𝕏∈𝕂^n, 𝕌∈𝕂_c^n_u, 𝕎∈𝕂_c^n, and 𝕍∈𝕂_c^n_v, are assumed to be given. 
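The following minimal NumPy sketch illustrates the kind of set-valued computations that the constrained system above gives rise to when all constraint sets are axis-aligned boxes. The matrices A, B, C and all bounds are illustrative placeholders rather than data from this paper, and the box representation is only a special case of the compact sets 𝕌, 𝕎, and 𝕍 used in the text.

import numpy as np

# Illustrative system matrices (placeholders, not from the text).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])

w_lo, w_hi = np.array([-0.05, -0.05]), np.array([0.05, 0.05])  # box disturbance set W
v_lo, v_hi = -0.1, 0.1                                         # scalar box measurement error set V

def propagate_box(x_lo, x_hi, u):
    # Tight bounding box of { A x + B u + w : x in [x_lo, x_hi], w in [w_lo, w_hi] }.
    Ap, An = np.maximum(A, 0.0), np.minimum(A, 0.0)
    lo = Ap @ x_lo + An @ x_hi + B @ u + w_lo
    hi = Ap @ x_hi + An @ x_lo + B @ u + w_hi
    return lo, hi

def measurement_update(x_lo, x_hi, eta):
    # Set-theoretic learning step for C = [1, 0]: intersect the prior box with
    # { x : eta - C x in V }, i.e. restrict x_1 to [eta - v_hi, eta - v_lo].
    lo, hi = x_lo.copy(), x_hi.copy()
    lo[0] = max(lo[0], eta - v_hi)
    hi[0] = min(hi[0], eta - v_lo)
    return lo, hi  # a component with lo > hi signals an empty information set

x_lo, x_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])  # prior information set
x_lo, x_hi = measurement_update(x_lo, x_hi, eta=0.3)
x_lo, x_hi = propagate_box(x_lo, x_hi, u=np.array([0.2]))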
§.§ Information Tubes The sensor that measures the outputs C x_k of (<ref>) can be represented by the information ensemble 𝔙 { X ∈𝕂_c^n | ∃η∈ℝ^n_v : η -C X ⊆𝕍 } . Since 𝕍 is compact, 𝔙 is closed but potentially unbounded, 𝔙∈𝕀^n. If 𝔛∈𝕀^n denotes a prior information ensemble of the state of (<ref>), an associated posterior is given by 𝔛⊓𝔙. This motivates the introduction of the function F( 𝔛, μ ) { X^+ ∈𝕂_c^n| [ ∃ X ∈𝔛⊓𝔙:; X^+ ⊆ A X + B μ(X) + 𝕎 ]}, which is defined for all 𝔛∈𝕀^n and all control laws μ that map the system's posterior information state to a feasible control input. Let 𝒰 denote the set of all such maps from 𝕀^n to 𝕌. It is equipped with the supremum norm, ‖μ‖ sup_X ∈𝕂^n μ(X) , such that (𝒰,‖·‖) is a Banach space. As μ∈𝒰 is potentially discontinuous, F(𝔛,μ) is not necessarily closed. Instead, the following statement holds. If 𝔛, 𝕌, 𝕍, and 𝕎 are closed, then the closure of the set F(𝔛,μ) is for every given μ∈𝒰 a closed information ensemble, F(𝔛,μ) cl( F(𝔛,μ) ) ∈𝕀^n . The statement of this proposition follows from Proposition <ref> and the above definition of F. The functions F and F are the basis for the following definitions. An information ensemble 𝔛_s∈𝕀^n is called control invariant (<ref>) if there exists a μ_s∈𝒰 such that 𝔛_s⊇ F(𝔛_s, μ_s) . A sequence 𝔛_0,𝔛_1,…∈𝕀^n of information ensembles is called an information tube for (<ref>) if there exists a sequence μ_0,μ_1,…∈𝒰 such that ∀ k ∈ℕ, 𝔛_k+1⊇ F(𝔛_k, μ_k) . An information tube 𝔛_0,𝔛_1,…∈𝕀^n is called tight if it satisfies ∀ k ∈ℕ, 𝔛_k+1 = F( 𝔛_k, μ_k ) for at least one control policy sequence μ_k ∈𝒰. §.§ Intrinsic Separation The following theorem establishes the fact that the intrinsic equivalence class of tight information tubes does not depend on the control policy sequence. Let 𝔛_0,𝔛_1, …∈𝕀_c^n and 𝔜_0, 𝔜_1, …∈𝕀_c^n be tight information tubes with compact elements. If the initial information ensembles are intrinsically equivalent, 𝔛_0 ∼𝔜_0, then all information ensembles are intrinsically equivalent; that is, 𝔛_k ∼𝔜_k for all k ∈ℕ. Because 𝔛 and 𝔜 are tight information tubes, there exist control policies μ_0,μ_1,…∈𝒰 and ν_0,ν_1,…∈𝒰 such that 𝔛_k+1 = F(𝔛_k, μ_k ) and 𝔜_k+1 = F(𝔜_k, ν_k ) for all k ∈ℕ. Next, the statement of the theorem can be proven by induction over k: since we assume 𝔛_0 ∼𝔜_0, this assumption can be used directly as induction start. Next, if 𝔛_k ∼𝔜_k, there exists for every X_k ∈𝔛_k ∩𝔙 an offset vector a_k ∈ℝ^n such that Y_k = X_k + a_k ∈𝔜_k. Because 𝔙 satisfies ∀ a ∈ℝ^n, ∀ V ∈𝔙, V + a ∈𝔙, it follows that Y_k = X_k + a_k ∈𝔜_k ∩𝔙. Consequently, a relation of the form A X_k + B μ_k(X_k) + 𝕎 = A Y_k + (Bμ_k(X_k) - Aa_k) + 𝕎 = A Y_k + B ν_k(Y_k) + 𝕎 - a_k+1, can be established, where the next offset vector, a_k+1, is given by a_k+1 A a_k + B ν_k(Y_k) - B μ_k(X_k) ∈ ℝ^n . Note that a completely symmetric relation holds after exchanging the roles of 𝔛_k and 𝔜_k. In summary, it follows that an implication of the form 𝔛_k ∼𝔜_k ⟹ F(𝔛_k,μ_k) ∼ F(𝔜_k,ν_k) holds. An application of Proposition <ref> to the latter equivalence relation yields the desired induction step. This completes the proof of the theorem. The above theorem allows us to formulate an intrinsic separation principle. Namely, Theorem <ref> implies that the predicted future information content of a tight information tube does not depend on the choice of the control policy sequence with which it is generated. In particular, the tight information tubes from (<ref>) satisfy ∀ k ∈ℕ, 𝔇^∘(𝔛_k) = 𝔇^∘(𝔜_k) for any intrinsic information measure 𝔇^∘.
Note that this property is independent of the choice of the control policy sequences μ_k and ν_k that are used to generate these tubes. §.§ Control Invariance As mentioned in the introduction, different notions of invariance under output-feedback control have been analyzed by various authors <cit.>. This section briefly discusses how a similar result can be recovered by using the proposed meta learning based framework. For this aim, we assume that 2pt * the sets 𝕍∈𝕂_c^n_v and 𝕎∈𝕂_c^n_w are compact, * the set is closed and convex, * the pair (A,C) is observable, and * (A,B,𝕌, 𝕎) admits a robust control invariant set. The first two assumptions are standard. The third assumption on the observability of (A,C) could also be replaced by a weaker detectability condition. However, since one can always use a Kalman decomposition to analyze the system's invariant subspaces separately <cit.>, it is sufficient to focus on observable systems. And, finally, the fourth assumption is equivalent to requiring the existence of a state-feedback law μ: ℝ^n →𝕌 and a set X∈𝕂_c^n such that ∀ x ∈X, ∀ w ∈𝕎, Ax+B μ(x) +w ∈X , which is clearly necessary: if we cannot even keep the system inside a bounded region by relying on exact state measurements, there is no hope that we can do so without such exact data. If the above four assumptions hold, (<ref>) admits a compact control invariant information ensemble. The proof of this lemma is divided into two parts, which aim at constructing an information tube that converges to a control invariant information ensemble. Part I. The goal of the first part is to show, by induction over k, that the recursion ∀ k ∈ℕ, 𝔛_k+1^∘ A ( 𝔛_k^∘∩𝔙 ), 𝔛_0^∘ 𝕂_c^n is set monotonous. Since 𝔛_0^∘ = 𝕂_c^n, 𝔛_1^∘⊆𝔛_0^∘ holds. This is the induction start. Next, if 𝔛_k+1^∘⊆𝔛_k^∘ holds for a given integer k ≥ 0, it follows that 𝔛_k+2^∘ = A(𝔛_k+1^∘∩𝔙 ) ⊆ A(𝔛_k^∘∩𝔙 ) = 𝔛_k+1^∘ , where the inclusion in the middle follows directly by substituting the induction assumption. In summary, the monotonicity relation 𝔛_k+1^∘⊆𝔛_k^∘ holds for all k ∈ℕ. Part II. The goal of the second part is to show that the sequence 𝔛_k { X - E(X) + x | X ∈𝔛_k^∘, x∈cvx(X) } , converges to an invariant information ensemble. Here, cvx(X) denotes the convex hull of the robust control invariant set X. Because we assume that 𝕌 is convex, cvx(X) is robust control invariant, too. This means that there exists a μ: ℝ^n →𝕌 such that ∀ x ∈cvx(X), ∀ w ∈𝕎, Ax + B μ(x)+w ∈cvx(X) . Since E satisfies E( X ) ∈cvx(X) for all X ∈𝕂_c^n, (<ref>) and the definitions of 𝔛_k and 𝔙 imply that [ ∀ X ∈𝔛_k ∩𝔙, E(X) ∈cvx(X),; ∀ X ∈𝔛_k, X - E(X) ∈𝔛_k^∘; and ∀ X ∈𝔙, X - E(X) ∈𝔙 ] for all k ∈ℕ. Thus, the state estimation based auxiliary feedback law ∀ X ∈𝕂_c^n, μ(X) μ(E(X)) ensures that the recursive feasibility condition [ A X + B μ(X) + 𝕎; = A(X-E(X)) + A E(X) + B μ(E(X)) + 𝕎_⊆ cvx(X)∈𝔛_k+1 ] holds for all X ∈𝔛_k ∩𝔙. Consequently, the auxiliary sequence 𝔛_k is a monotonous information tube, ∀ k ∈ℕ, 𝔛_k ⊇ 𝔛_k+1 ⊇ F(𝔛_k,μ) , where monotonicity follows from (<ref>) and the considerations from Part I. Moreover, since (A,C) is observable, 𝔛_k is compact for all k ≥ n-1. In summary, 𝔛_k is a monotonously decreasing sequence of information ensembles, which—due to the monotone convergence theorem—converges to a compact control invariant information ensemble, 𝔛_∞ = lim_k →∞ 𝔛_k ∈ 𝕀_c^n_x and F(𝔛_∞,μ) ⊆ 𝔛_∞ . This corresponds to the statement of the lemma. 
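Before moving on, a deliberately scalar sketch may help to connect the lemma to the intrinsic quantities introduced earlier. It tracks only a worst-case diameter bound (in the spirit of 𝔇_∞^∘) for a hypothetical system x^+ = a x + u + w with |w| ≤ w_max and a sensor that localizes the state up to an interval of length 2 v_max; all numbers are illustrative and the recursion is a simplification of the set-valued construction used in the proof, not a replacement for it. Consistent with the intrinsic separation principle, the control input only translates the information sets and therefore does not enter the diameter recursion.

# Worst-case diameter of the information sets under alternating learning and
# propagation steps (scalar example; illustrative numbers only).
a, w_max, v_max = 1.5, 0.5, 1.0

def meta_step(diam):
    learned = min(diam, 2.0 * v_max)        # intersection with any sensor set
    return abs(a) * learned + 2.0 * w_max   # propagation through the dynamics

d = 6.0  # diameter of the prior information set
for k in range(10):
    d = meta_step(d)
    print(k, round(d, 3))
# The sequence settles at abs(a) * 2 * v_max + 2 * w_max = 4.0, a diameter level
# that a control invariant information ensemble for this scalar example can maintain.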
The purpose of Lemma <ref> is to elaborate on the relation between control invariant information ensembles and existing notions in linear control as observability and robust stabilizability. Lemma <ref> does, however, not make statements about feasibility: the state constraint set 𝕏 is not taken into account. Moreover, the construction of the feedback law μ in (<ref>) is based on the vector-valued state estimate E(X) rather than the information state X, which is, in general, sub-optimal. Note that these problems regarding feasibility and optimality are resolved in the following section by introducing optimal dual control laws. § DUAL CONTROL This section is about dual control problems for constrained linear systems. It is discussed under which assumptions such problems can be separated into a meta learning and a robust control problem. §.§ Receding Horizon Control Dual control problems can be implemented in analogy to traditional model predictive control (MPC) methods. Here, one solves the online optimization problem J(X_0) = inf_𝔛,μ ∑_k=0^N-1 L(𝔛_k,μ_k) + M(𝔛_N) s.t. {[ ∀ k ∈{ 0, 1, …, N-1 },; F(𝔛_k,μ_k) ⊆𝔛_k+1, X_0 ∈𝔛_0; μ_k ∈𝒰,; ∀ X_k ∈𝔛_k, X_k ⊆𝕏 ]. on a finite time horizon { 0,1,…, N }, where 0 is the current time. The optimization variables are the feedback policies μ_0,μ_1,…,μ_N-1∈𝒰 and their associated information tube, 𝔛_0,𝔛_1,…,𝔛_N ∈𝕀_c^n. In the most general setting, the stage and terminal cost functions, L: 𝕀_c^n×𝒰→ℝ and M: 𝕀_c^n→ℝ, are assumed to be lower semi-continuous, although some of the analysis results below will be based on stronger assumptions. We recall that 𝕏 denotes the closed state constraint set. The parameter X_0 ∈𝕀_c^n corresponds to the current information set. It is updated twice per sampling time by repeating the following steps online: 2pt i) Wait for the next measurement η. ii) Update the information set, X_0 ← X_0 ∩{ x ∈ℝ^n |η - Cx ∈𝕍} . iii) Solve (<ref>) and denote the first element of the optimal feedback sequence by μ_0^⋆∈𝒰. iv) Send u^⋆ = μ_0^⋆(X_0) to the real process. v) Propagate the information set, X_0 ← A X_0 + B u^⋆ + 𝕎 . vi) Set the current time to 0 and continue with Step i). Note that Step iii) assumes that the “inf” operator in (<ref>) can be replaced by a “min” and that an associated optimal feedback policy exists. Conditions under which this can be guaranteed are discussed in Section <ref>. §.§ Objectives Tube model predictive control formulations <cit.> use risk measures as stage cost functions. In principle, any lower semi-continuous function of the form R: 𝕂_c^n →ℝ∪{∞}, can be regarded as such a risk measure, although one would usually require that the monotonicity condition X ⊆ Y ⟹ R(X) ≤ R(Y) holds for all X,Y ∈𝕂_c^n. Similarly, this paper proposes to call ℜ: 𝕀_c^n →ℝ∪{∞} an extrinsic risk measure if ℜ(𝔛) = R ( ⋃_X ∈𝔛 X ) for a lower semi-continuous function R that satisfies (<ref>). Problem (<ref>) enforces state constraints explicitly. Alternatively, one can move them to the objective by introducing the indicator function I_𝕏 of the state constraint set 𝕏. Because we have ( ∀ X_k ∈𝔛_k, X_k ⊆𝕏 ) ⟺ I_𝕏( ⋃_X ∈𝔛_k X ) < ∞, enforcing state constraints is equivalent to adding an extrinsic risk measure to the stage cost; here with R = I_𝕏. By using the language of this paper, the traditional objective of dual control <cit.> is to tradeoff between extrinsic risk and intrinsic deviation. This motivates to consider stage cost functions of the form L( 𝔛,μ) = ℜ(𝔛) + τ·𝔇^∘(𝔛) . 
Here, ℜ denotes a lower semi-continuous extrinsic risk measure and 𝔇^∘ a lower semi-continuous intrinsic information measure. For general nonlinear systems, the parameter τ > 0 can be used to tradeoff between risk and deviation. In the context of constrained linear systems, however, such a tradeoff is superfluous, as formally proven in the sections below. The stage cost function (<ref>) can be augmented by a control penalty. For example, one could set L( 𝔛,μ) = ℜ(𝔛) + τ·𝔇^∘(𝔛) + ℭ(μ) , where ℭ: 𝒰→ℝ models a lower semi-continuous control cost. This additional term does, however, not change the fact that the parameter τ does not affect the optimal solution of (<ref>). Details about how to construct ℭ in practice will be discussed later on in this paper, see Section <ref>. §.§ Separation of Meta-Learning and Robust Control The goal of this section is to show that one can break the dual control problem (<ref>) into an intrinsic meta learning problem and an extrinsic robust control problem. We assume that * the stage cost function L has the form (<ref>), * the function ℜ is an extrinsic risk measure, * the function 𝔇^∘ is intrinsic and τ≥ 0, and * the function M is extrinsic and monotonous, 𝔛⊆𝔜 ⟹ M(𝔛) ≤ M(𝔜) . In this context, the meta learning problem consists of computing a constant information tube that is found by evaluating the recursion [ ∀ k ∈ℕ, 𝔜_k+1 F(𝔜_k,ν_k); with 𝔜_0 { X ∈𝕂_c^n | X ⊆ X_0 }, ] for a constant sequence ν_0,ν_1, …∈𝒰. For simplicity of presentation, we assume 0 ∈𝕌 such that we can set ν_k(X) = 0 without loss of generality. Due to Theorem <ref>, L satisfies L(𝔛_k) = ℜ(𝔛_k) + τ·𝔇^∘( 𝔜_k ) along any optimal tube of (<ref>). Consequently, (<ref>) reduces to a robust control problem in the sense that all objective and constraint functions are extrinsic, while the shapes, sizes and orientations of the sets of the optimal information tube are constants, given by (<ref>). In summary, the contribution of intrinsic information to the objective value of (<ref>), denoted by J_I(X_0) τ·∑_k=0^N-1𝔇^∘(𝔜_k), depends on X_0 but it does not depend on the choice of the control law. It can be separated from the contribution of extrinsic objective terms, as elaborated below. §.§ Existence of Solutions In order to discuss how one can—after evaluating the meta-learning recursion (<ref>)—rewrite (<ref>) in the form of an extrinsic robust control problem, a change of variables is introduced. Let ℬ_k denote the set of bounded functions of the form c_k: 𝔜_k →ℝ^n. It is a Banach space with respect to its supremum norm ‖ c_k ‖ sup_X ∈𝔜_k c_k(X) . Due to Theorem <ref>, any tight information tube 𝔛_0,𝔛_1, …∈𝕀_c^n, started at 𝔛_0 = 𝔜_0, is intrinsically equivalent to the precomputed tube 𝔜_0, 𝔜_1, …∈𝕀_c^n and can be written in the form 𝔛_k = { Y + c_k(Y) | Y ∈𝔜_k } for suitable translation functions c_k ∈ℬ_k. In the following, we introduce the auxiliary set 𝒞_k { (c,c^+) | [ ∀ X ∈∂[ 𝔜_k ∩𝔙],; A c(X) - c^+( AX + 𝕎) ∈ (-B 𝕌) ]} recalling that ∂ denotes the boundary operator that returns the set of extreme sets of a given information ensemble. Because 𝕌 is compact, 𝒞_k ⊆ℬ_k ×ℬ_k+1 is a closed set. Additionally, we introduce the shorthands ℛ_k(c_k) ℜ( { Y + c_k(Y) | Y ∈𝔜_k } ) and ℛ_N(c_N) M( { Y + c_N(Y) | Y ∈𝔜_N } ) . Since we assume that ℜ and M are lower-semicontinuous on 𝕀_c^n, the functions ℛ_k: ℬ_k →ℝ are lower semi-continuous on the Banach spaces ℬ_k. 
They can be used to formulate the extrinsic robust control problem[If the sets 𝕌 and 𝕏 and the functions ℛ_k are convex, (<ref>) is a convex optimization problem.] J_E(X_0) = min_c_0,c_1,…,c_N ∑_k=0^N-1ℛ_k(c_k) + ℛ_N(c_N) s.t. {[ ∀ k ∈{ 0, 1, …, N-1 },; (c_k,c_k+1) ∈𝒞_k, c_0 ≡ 0,; ∀ Y ∈𝔜_k, Y + c_k(Y) ⊆𝕏, ]. which can be used to find the optimal solution of (<ref>). In detail, this result can be summarized as follows. Let 𝕏∈𝕂^n be a closed set, let 𝕌∈𝕂_c^n_u, 𝕍∈𝕂_c^n_v, and 𝕎∈𝕂_c^n_w be compact sets, let L be given by (<ref>) with ℜ and M being set-monotonous and lower semi-continuous extrinsic risk measures, and let 𝔇^∘ be an intrinsic lower semi-continuous information measure. Then the following statements hold. * Problem (<ref>) admits a minimizer or is infeasible. * Problem (<ref>) is intrinsically separable; that is, J(X_0) = J_E(X_0) + J_I(X_0). * If c_0,c_1,…,c_N is a minimizer of (<ref>), its associated sequence of information ensembles, given by (<ref>), is an optimal information tube of (<ref>). Because the objective functions ℛ_k of (<ref>) are lower semicontinuous and since the feasible set of (<ref>) is closed under the listed assumptions, it follows directly from Weierstrass' theorem that this optimization problem admits a minimizer or is infeasible. Next, a relation between (<ref>) and (<ref>) needs to be established. For this aim, we divide the proof into three parts. Part I. Let us assume that 𝔛_0,𝔛_1, …, 𝔛_N ∈𝕀_c^n is a tight information tube for given μ_0,μ_1,…,μ_N-1∈𝒰, ∀ k ∈{ 0,1,…,N-1}, 𝔛_k+1 = F(𝔛_k,μ_k) . Due to Theorem <ref>, there exist functions c_k: ℬ_k →ℝ^n such that 𝔛_k = { Y + c_k(Y) | Y ∈𝔜_k }. The goal of the first part of this proof is to show that (c_k,c_k+1) ∈𝒞_k. Because the information tube is tight, we have A X + B μ_k(X) + 𝕎 ∈ ∂𝔛_k+1 for all X ∈∂ [ 𝔛_k∩𝔙 ]. Since any set Y ∈∂ [ 𝔜_k ∩𝔙] is mapped to an extreme set X = Y + c_k(Y) ∈∂ [ 𝔛_k ∩𝔙], it follows that A (Y+c_k(Y)) + B μ_k(X) + 𝕎∈∂𝔛_k+1 ⟹ (AY+𝕎)_∈ ∂𝔜_k+1 + (c_k(Y) + B μ_k(X))_∈ ℝ^n ∈ ∂𝔛_k+1 for any such pair (X,Y). But this is only possible if c_k(Y) + B μ_k(X) = c_k+1(AY+𝕎) . Since μ_k(X) ∈𝕌 and since the choice of Y ∈∂ [ 𝔜_k ∩𝔙] is arbitrary, it follows from (<ref>) that (c_k,c_k+1) ∈𝒞_k. Part II. The goal of the second part of this proof is to reverse the construction from the first part. For this aim, we assume that we have functions c_k: ℬ_k →ℝ^n that satisfy the recursivity condition (c_k,c_k+1) ∈𝒞_k for all k ∈{ 0,1,…,N-1} while the sets 𝔛_k are given by (<ref>). Since every set X ∈𝔛_k ∩𝔙 is contained in at least one extreme set X∈∂[ 𝔛_k ∩𝔙], there exists for every such X a set Y∈∂ [ 𝔜_k ∩𝔙 ] with X ⊆ X = Y + c_k(Y). Note that this is equivalent to stating that there exists a function Σ_k: 𝔛_k ∩𝔙→∂ [ 𝔜_k ∩𝔙 ] that satisfies ∀ X ∈𝔛_k ∩𝔙, X ⊆Σ_k(X) + c_k(Σ_k(X)) . It can be used to define the control laws μ_k(X) B^†[ c_k+1(A Σ_k(X)+𝕎 ) - A c_k(Σ_k(X)) ], where B^† denotes the pseudo-inverse of B. Because we assume (c_k,c_k+1) ∈𝒞_k, we have μ_k(X) ∈𝕌 and A X + B μ_k(X) + 𝕎 [ (<ref>),(<ref>)⊆ A Σ_k(X) + 𝕎 + c_k+1(A Σ_k(X) + 𝕎) ∈ 𝔛_k+1 ] for all X ∈𝔛_k ∩𝔙, where the latter inclusion follows from (<ref>) and the fact that A Σ_k(X) + 𝕎∈𝔜_k+1. Consequently, we have 𝔛_k+1⊇ F(𝔛_k,μ_k). Part III. The construction from Part I can be used to start with any feasible information tube of (<ref>) to construct a feasible sequence c_0,c_1,…,c_N such that J_E(X_0) ≤ ∑_k=0^N-1ℛ_k(c_k) + ℛ_N(c_N) = ∑_k=0^N-1 L(𝔛_k,μ_k) + M(𝔛_N) - J_I(X_0) . Thus, we have J_E(X_0) + J_I(X_0) ≤ J(X_0). 
Similarly, the construction from Part II can be used to start with an optimal solution of (<ref>) to construct a feasible point of (<ref>), which implies J_E(X_0) + J_I(X_0) ≥ J(X_0). Thus, the second and the third statement of the theorem hold. §.§ Recursive Feasibility and Stability Feasible invariant information ensembles 𝔛_s∈𝕀_c^n exist if and only if the optimization problem min_𝔛_s,μ_s L(𝔛_s,μ_s) s.t. {[ F(𝔛_s,μ_s) ⊆𝔛_s,; μ_s∈𝒰,; ∀ X ∈𝔛_s, X ⊆𝕏 ]. is feasible. By solving this optimization problem, one can find optimal invariant information ensembles avoiding the constructions from the proof of Lemma <ref>; see Remark <ref>. In analogy to terminal regions in traditional MPC formulations <cit.> invariant information ensembles can be used as a terminal constraint, 𝔛_N ⊆𝔛_s. If (<ref>) is augmented by such a terminal constraint, recursive feasibility can be guaranteed. Similarly, if one chooses the terminal cost M such that min_μ∈𝒰 L(𝔛,μ) + M ( F(𝔛, μ) ) ≤ M(𝔛) for all 𝔛∈𝕀_c^n, the objective value of (<ref>) descends along the trajectories of its associated closed-loop system. Under additional assumptions on the continuity and positive definiteness of L, this condition can be used as a starting point for the construction of Lyapunov functions. The details of these constructions are, however, not further elaborated at this point, as they are analogous to the construction of terminal costs for traditional Tube MPC schemes <cit.>. § POLYTOPIC APPROXIMATION METHODS This section discusses how to solve the dual control problem (<ref>) by using a polytopic approximation method. For this aim, we assume that 𝕍 and 𝕎 are given convex polytopes, while 𝕏 and 𝕌 are convex sets. §.§ Configuration-Constrained Polytopes Polytopic computing <cit.> is the basis for many set-theoretic methods in control <cit.>. Specifically, tube model predictive control methods routinely feature parametric polytopes with frozen facet directions <cit.>. In this context, configuration-constrained polytopes are of special interest, as they admit a joint parameterization of their facets and vertices <cit.>. They are defined as follows. Let Y ∈ℝ^m × n and G ∈ℝ^n_G × m be matrices that define the parametric polytope [ ∀ y ∈𝒢, P(y) { x ∈ℝ^n | Y x ≤ y }; on 𝒢 { y ∈ℝ^m | G y ≤ 0 } ; ] and let Λ_1,Λ_2,…,Λ_ν∈ℝ^n × m be vertex maps, such that P(y) = conv( Λ_1 y, Λ_2 y, …, Λ_ν y ) ⟺ y ∈𝒢 , where conv(·) denotes the convex hull. The condition y ∈𝒢 is called a configuration-constraint. It restricts the parameter domain of P to a region on which both a halfspace and a vertex representation is possible. Details on how to construct the template matrix Y together with the cone 𝒢 and the matrices Λ_i can be found in <cit.>. §.§ Polytopic Information Ensembles As pointed out in Section <ref>, the minimal representation of a closed information ensemble 𝔛∈𝕀_c^n is given by its set of extreme sets, denoted by ∂𝔛. This motivates to discretize (<ref>), by introducing a suitable class of information ensembles, whose extreme sets are configuration-constrained polytopes. In detail, 𝔓(z) { X ∈𝕂_c^n | ∃ y ∈ℙ(z): X ⊆ P(y) } defines such a class of information ensembles with the polytope ℙ(z) { y ∈ℝ^m | G y ≤ 0, Z y ≤ z }⊆𝒢 being used to parameterize convex subsets of 𝒢. The choice of Z ∈ℝ^l × m influences the polytopic discretization accuracy and z ∈ℝ^l denotes its associated discretization parameter. Note that 𝔓(z) ∈𝕀_c^n is for any such z a compact but potentially empty information ensemble. 
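As a concrete (if very small) instance of the template machinery above, the sketch below spells out the matrices Y, G, and Λ_i for axis-aligned boxes in ℝ^2. This box case is only meant to make the joint facet and vertex parameterization tangible; the general construction of configuration-constrained templates is the subject of the cited references, and the numerical value of y is arbitrary.

import numpy as np

# Box template: P(y) = { x : Y x <= y } with y = (ub_1, ub_2, nlb_1, nlb_2),
# where ub are upper bounds and nlb are negated lower bounds.
Y = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
# Configuration constraint G y <= 0, i.e. ub_i + nlb_i >= 0 (non-degenerate box).
G = np.array([[-1.0,  0.0, -1.0,  0.0],
              [ 0.0, -1.0,  0.0, -1.0]])
# Vertex maps: whenever G y <= 0 holds, the four vertices of P(y) are Lambda_i @ y.
Lam = [np.array([[1, 0,  0,  0], [0, 1,  0,  0]]),   # ( ub_1,  ub_2)
       np.array([[1, 0,  0,  0], [0, 0,  0, -1]]),   # ( ub_1, -nlb_2)
       np.array([[0, 0, -1,  0], [0, 1,  0,  0]]),   # (-nlb_1,  ub_2)
       np.array([[0, 0, -1,  0], [0, 0,  0, -1]])]   # (-nlb_1, -nlb_2)

y = np.array([2.0, 1.0, 0.5, 1.0])        # P(y) = [-0.5, 2] x [-1, 1]
assert np.all(G @ y <= 0.0)               # y lies in the configuration cone
vertices = np.array([L @ y for L in Lam])
print(vertices)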
§.§ Polytopic Meta Learning Traditional set-theoretic methods face a variety of computational difficulties when dealing with output feedback problems, as summarized concisely in <cit.>. The goal of this and the following sections is to show that the proposed meta learning framework has the potential to overcome these difficulties. Here, the key observation is that Proposition <ref> alleviates the need to intersect infinitely many information sets for the sake of predicting the evolution of a learning process. Instead, it is sufficient to compute one intersection at the meta level in order to pass from a prior to a posterior information ensemble. In detail, if we assume that our prior information about the system's state is represented by a polytopic ensemble, 𝔓(z), the posterior 𝔓(z) ⊓𝔙 = 𝔓(z) ∩𝔙 needs to be computed, where 𝔙 is given by (<ref>). Since 𝕍 is assumed to be a polytope, 𝔙 can be written in the form 𝔙 = { X ∈𝕂_c^n | ∃ y ∈𝒢: X ⊆ P(y), Z_1 y ≤v}, as long as the template matrices Y, G, and Z_1 ∈ℝ^l_1 × m as well as the vector v∈ℝ^l_1 are appropriately chosen. Here, we construct the matrix Z = (Z_1^⊤,Z_2^⊤)^⊤ such that its first l_1 ≤ l rows coincide with Z_1. This particular construction of Z ensures that the intersection 𝔓(z) ∩𝔙 = 𝔓(ζ) with {[ ζ_1 = min( z_1, v); ζ_2 = z_2 ]. can be computed explicitly, where min(z_1,v) denotes the componentwise minimizer of the vectors z_1 and v. The latter condition is not jointly convex in z and ζ. Therefore, the following constructions are based on the convex relaxation ( [ v; z_2 ]) ≤ ( [ ζ_1; ζ_2 ]) ⟹ 𝔓(z) ∩𝔙 ⊆ 𝔓(ζ) . Note that the conservatism that is introduced by this convex relaxation is negligible if the measurement error set 𝕍 is small. In fact, for the exact output feedback case, 𝕍 = { 0 }, we have min(z_1,v) = v, since the measurements are exact and, as such, always informative. §.§ Extreme Vertex Polytopes In analogy to the construction of the domain 𝒢, a configuration domain ℋ = { z ∈ℝ^l | H z ≤ 0 } can be chosen. In detail, by using the methods from <cit.>, a matrix H ∈ℝ^n_H × l and matrices Ω_1,…,Ω_ν∈ℝ^m × l can be pre-computed, such that ℙ(ζ) = conv( Ω_1 ζ, Ω_2 ζ, …, Ω_νζ ) ⟺ ζ∈ℋ . This has the advantage that the vertices Ω_j ζ of the polytope ℙ(ζ) are known. In order to filter the vertices that are associated with extreme polytopes, the index set 𝕁 { j ∈{ 1,2,…,ν} | P(Ω_j ζ) ∈ ∂ [𝔓(ζ)] } is introduced. Its definition does not depend on the choice of the parameter ζ∈ℋ. This follows from the fact that the normal cones of the vertices of ℙ(ζ) do not depend on ζ∈ℋ—recalling that the facet normals of ℙ(ζ) are given constants. The polytopes P(Ω_j ζ), with j ∈𝕁, are called the extreme vertex polytopes of 𝔓(ζ). Extreme vertex polytopes play a central role in the context of designing polytopic dual control methods. This is because their shapes, sizes, and orientations can be interpreted as representatives of the intrinsic information of 𝔓(ζ). Moreover, the convex hull of the extreme vertex polytopes encodes the extrinsic information of 𝔓(ζ), conv( { P(Ω_j ζ) | j ∈𝕁 } ) = ⋃_X ∈𝔓(ζ) X . The latter equation follows from the fact that the vertices of the extreme polytopes of 𝔓(ζ) are contained in the convex hull of the vertices Λ_i Ω_j ζ of the extreme vertex polytopes, with i ∈{ 1, …, ν} and j ∈𝕁. §.§ Polytopic Information Tubes The goal of this section is to show that it is sufficient to assign one extreme control input u_j ∈𝕌 to each extreme vertex polytope P(Ω_j ζ) in order to discretize the control law, without introducing conservatism.
This construction is similar in essence to the introduction of the vertex control inputs that are routinely used to compute robust control invariant polytopes <cit.>. The key difference here, however, is that the “vertices” P(Ω_j ζ) of the information ensemble 𝔓(ζ) are sets rather than vectors. They represent possible realizations of the system's information state, not a state estimate. Let us assume that 𝕎 = P(w) is a polytope with given parameter w∈𝒢. Moreover, we assume that the vertices of ℙ(·) are enumerated in such a way that 𝕁 = { 1,2,…, |𝕁|}, where |𝕁| ≤ν denotes the cardinality of 𝕁. Let us introduce the convex auxiliary set ℱ { (z,z^+) | [ ∃ (ζ,ξ,u) ∈ℝ^l× (ℝ^m)^|𝕁|×𝕌^|𝕁|; ∀ i ∈{ 1, …, ν}, ∀ j ∈𝕁,; v≤ζ_1, z_2 ≤ζ_2,; Y A Λ_i Ω_j ζ + YB u_j + w≤ξ_j,; G ξ_j ≤ 0, H ζ≤ 0, Z ξ_j ≤ z^+ ]}. The rationale behind the convex constraints in this definition can be summarized as follows. 2pt * We start with the current information ensemble 𝔓(z). * The constraints v≤ζ_1 and z_2 ≤ζ_2 subsume (<ref>). * The constraint H ζ≤ 0 ensures that the vertices of P(Ω_j ζ) are given by Λ_i Ω_j ζ, with i ∈{ 1, …, ν}. * The extreme controls u_j are used to steer all vertices of P(Ω_j ζ) into the auxiliary polytope P(ξ_j). * And, finally, the constraints G ξ_j ≤ 0 and Z ξ_j ≤ z^+ ensure that P(ξ_j) is contained in 𝔓(z^+). The above points can be used as road-map for the rather technical proof of the following theorem. Let ℱ and 𝔙 be defined as above, recalling that 𝕎 = P(w) denotes the uncertainty set and that 𝕌 is assumed to be convex. Then, the implication (z,z^+) ∈ℱ ⟹ ∃μ∈𝒰, 𝔓(z^+) ⊇ F(𝔓(z),μ) holds for all z,z^+ ∈ℝ^l. Let us assume that (z,z^+) ∈ℱ. As discussed in Section <ref>, the inequalities v≤ζ_1 and z_2 ≤ζ_2 in the definition of ℱ ensure that 𝔓(z) ⊓𝔙 ⊆ 𝔓(ζ). Moreover, there exists for every X ∈𝔓(ζ) a y ∈ℙ(ζ) with X ⊆ P(y) ∈ ∂ [ 𝔓(ζ)] . Next, since we enforce H ζ≤ 0, y is in the convex hull of the extreme vertices. That is, there exist scalar weights θ_1,θ_2,…, θ_|𝕁|∈ [0,1] with ∑_j ∈𝕁θ_j = 1 and y = ∑_j ∈𝕁θ_j Ω_j ζ , keeping in mind that these weights depend on X. They can be used to define the control law μ(X) ∑_j ∈𝕁θ_j u_j ∈ 𝕌 and ξ ∑_j ∈𝕁θ_j ξ_j ∈ 𝒢 where u_1,u_2,…,u_|𝕁|∈𝕌 are the extreme control inputs and ξ_1,ξ_2,…,ξ_|𝕁|∈𝒢 are the auxiliary variables that satisfy the constraints from the definition of ℱ. Note that this construction is such that the vertices of the polytope P(y), which are given by Λ_i y, satisfy A Λ_i y + B μ(X) + w = ∑_j ∈𝕁θ_j [ A Λ_i Ω_j ζ + B u_j + w ] ∈ P(ξ), where the latter inclusion holds for all w ∈ W. Consequently, since this holds for all vertices of P(y), we have A X + B μ(X) + 𝕎 ⊆ A P(y) + B μ(X) + 𝕎 ⊆ P(ξ) . Moreover, the above definition of ξ and the constraints G ξ_j ≤ 0 and Z ξ_j ≤ z^+ from the definition of ℱ imply that Z ξ≤ z^+ and P(ξ) ∈𝔓(z^+). But this yields F(𝔓(z),μ) ⊆ 𝔓(z^+) , which completes the proof. §.§ Polytopic Dual Control In order to approximate the original dual control problem (<ref>) with a convex optimization problem, we assume that the stage and terminal cost functions have the form L(𝔓(z),μ) = 𝔩(z,u) and M(𝔓(z)) = 𝔪(z) for given convex functions 𝔩 and 𝔪, where the stacked vector u = ( u_1^,u_2^,…,u_|𝕁|)^ collects the extreme control inputs. Due to Theorem <ref> a conservative approximation of (<ref>) is given by min_z,ζ,ξ,u ∑^N-1_k=0𝔩(z_k,u_k) + 𝔪(z_N) s.t. { [ ∀ k ∈{ 0, …, N-1},; ∀ i ∈{ 1, …, ν}, ∀ j ∈𝕁,; v≤ζ_k,1, z_k,2≤ζ_k,2,; Y A Λ_i Ω_j ζ_k + YB u_k,j + w≤ξ_k,j,; G ξ_k,j≤ 0, H ζ_k ≤ 0, Z ξ_k,j≤ z_k+1,; u_k,j∈𝕌, Λ_i Ω_j ζ_k ∈𝕏, Z ŷ≤ z_0 . ]. 
Since 𝕌 and 𝕏 are convex sets, this is a convex optimization problem. Its optimization variables are the parameters z_k ∈ℝ^l of the polytopic information tube, the associated extreme control inputs u_k,j∈𝕌 and the auxiliary variables ζ_k ∈ℋ and ξ_k ∈𝒢, needed to ensure that ∀ k ∈{ 0,1,…, N-1 }, F(𝔓(z_k),μ_k) ⊆ 𝔓(z_k+1) . Here, X_0 = P(ŷ) denotes the information set at the current time, modeled by the parameter ŷ∈𝒢. The constraint Z ŷ≤ z_0 corresponds to the initial value condition X_0 ∈𝔓(z_0). Additionally, it is pointed out that the extrinsic information content of the auxiliary ensemble 𝔓(ζ) ⊇𝔓(z) ⊓𝔙 overestimates the extrinsic information content of 𝔓(z). Thus, the extrinsic state constraints can be enforced by using the implication chain ∀ i ∈{ 1, …, ν}, ∀ j ∈𝕁, Λ_i Ω_j ζ_k ∈𝕏 ⟹ ⋃_X ∈𝔓(ζ) X ⊆𝕏 ⟹ ⋃_X ∈𝔓(z) X ⊆𝕏 . Finally, (<ref>) is solved online whenever a new information set X_0 = P(ŷ) becomes available, denoting the optimal extreme controls by u_k,j^⋆. A corresponding feasible control law can then be recovered by setting μ_0^⋆(X_0) ∑_j ∈𝕁θ_j^⋆(X_0) u_0,j, where the scalar weights θ_j^⋆(X_0) can, for instance, be found by solving the convex quadratic programming problem θ^⋆(X_0) ∈ argmin_θ≥ 0 ( ∑_j ∈𝕁θ_j u_0,j)^2 s.t. {[ ∑_j ∈𝕁θ_j Ω_j ζ_0^⋆ = ŷ; ∑_j ∈𝕁θ_j = 1 ]. although, clearly, other choices for the weights θ_j^⋆ are possible, too. Finally, the receding horizon control loop from Section <ref> can be implemented by using the above expression for μ_0^⋆, while the information set update and propagation step can be implemented by using standard polytopic computation routines <cit.>. By solving the convex optimization problem min_z^s,ζ^s,ξ^s,u^s 𝔩(z^s,u^s) s.t. { [ ∀ i ∈{ 1, …, ν}, ∀ j ∈𝕁,; v≤ζ_1^s, z_2^s≤ζ_2^s,; Y A Λ_i Ω_j ζ^s + YB u_j^s + w≤ξ_j^s,; G ξ_j^s≤ 0, H ζ^s≤ 0, Z ξ_j^s≤ z^s,; u_j^s∈𝕌, Λ_i Ω_j ζ^s∈𝕏 ]. an optimal control invariant polytopic information ensemble can be computed. §.§ Structure and Complexity Problem (<ref>) admits a tradeoff between the computational complexity and the conservatism of polytopic dual control. In detail, this tradeoff can be adjusted by the choice of the following variables. * The number of facets, m, the number of vertices, ν, and the number of configuration constraints, n_G, depend on our choice of Y and G. The larger m is, the more accurately we can represent the system's intrinsic information content. * The number of information ensemble parameters, l, the number of extreme vertex polytopes, |𝕁|, and the number of meta configuration constraints, n_H, depend on how we choose Z and H. The larger |𝕁| is, the more degrees of freedom we have to parameterize the optimal dual control law. In contrast to these numbers, the number of controls, n_u, is given. If we assume that 𝕌 and 𝕏 are polyhedra with n_𝕌 and n_𝕏 facets, these numbers are given by the problem formulation, too. Additionally, we recall that N denotes the prediction horizon of the dual controller. The number of optimization variables n_opt and the number of constraints, n_con, of Problem (<ref>) are given by n_opt = (2N+1) l + N |𝕁| ( n_u + m ) and n_con = N ( l + |𝕁| ( n_G + n_H + l + n_𝕌 + ν ( m + n_𝕏 ) ) ) + l. In this context, however, it should also be taken into account that the constraints of (<ref>) are not only sparse but also possess further structure that can be exploited via intrinsic separation.
For instance, the algebraic-geometric consistency conditions [ G Y = 0, Λ_i Y = 1,; H Z = 0, Ω_j Z = 1, and Z_1 Y = 0 ] hold for all i ∈{1,…,ν} and all j ∈𝕁, which can be used to re-parameterize (<ref>), if a separation of the centers and shapes of the information sets is desired. Last but not least, more conservative but computationally less demanding variants of (<ref>) can be derived by freezing some of its degrees of freedom. For instance, in analogy to Rigid Tube MPC <cit.>, one can implement a Rigid Dual MPC controller by pre-computing a feasible point (z^s,ζ^s,ξ^s,u^s) of (<ref>). Next, we set [ z_k = Z Y x_k + z^s, ζ_k = ZY x_k + ζ^s,; and u_j,k = u_k + u_j^s, ξ_k,j = Y x_k+1 + ξ_j^s , ] where x and u denote a central state- and a central control trajectory that are optimized online, subject to x_k+1 = A x_k + B u_k. By substituting these restrictions in (<ref>) and by using (<ref>), the resulting online optimization problem can be simplified and written in the form min_x, u ∑^N-1_k=0ℓ( x_k, u_k) + m(x_N) s.t. { [ ∀ k ∈{ 0, …, N-1},; A x_k + B u_k = x_k+1,; Z ŷ≤ ZY x_0 + z^s,; u_k∈𝕌, x_k ∈𝕏 . ]. Problem (<ref>) can be regarded as a conservative approximation of (<ref>). The sets X { x∈ℝ^n | [ ∀ i ∈{ 1, …, ν}, ∀ j ∈𝕁,; x + Λ_i Ω_j ζ^s∈𝕏 ]} and 𝕌 { x ∈ℝ^n | [ ∀ j ∈𝕁, u + u_j^s∈𝕌 ]} take the robustness constraint margins into account, while ℓ and m are found by re-parameterizing the objective function of (<ref>). Problem (<ref>) is a conservative dual MPC controller that has—apart from the initial value constraint—the same complexity as certainty-equivalent MPC. Its feedback law depends on the parameter ŷ of the initial information set X_0 = P(ŷ). §.§ Numerical Illustration We consider the constrained linear control system A = 1/4( [ 6 4; 1 3 ]), B = ( [ 0; 1 ]), C = ( 1 0 ), 𝕏 = { x ∈ℝ^2 | x_2 ≥ -45 }, 𝕌 = [-55,55], 𝕎 = [ -1/2, 1/2]^2 ⊆ℝ^2 , 𝕍 = [-1,1] . In order to setup an associated polytopic dual controller for this system, we use the template matrices Y = ( [ 1 ; 1 1; 1; -1 ; -1 -1; -1 ]) and G = ( [ -1 1 -1; -1 1 -1; -1 1 -1; -1 1 -1; -1 -1 1; 1 -1 -1 ]) setting m = ν = n_G = 6. Here, Y and G are stored as sparse matrices: the empty spaces are filled with zeros. By using analogous notation, we set Z = ( [ 1 1; 1 1; 1 ; 1; 1; 1; 1; 1 ]) , H = ( [ -1 -1 1 1; -1 1 1 -1 -1; 1 -1 1 -1 ; 1 1 -1 -1; 1 1 -1 -1 ; -1 1 -1 1 ; -1 1 -1 1 -1 ; 1 -1 1 -1 ; 1 -1 1 -1 ]) , l = 8, and n_H = 9, which can be used to represent six dimensional meta polytopes with 6+8=14 facets and ν= 68 vertices. They have up to |𝕁| = 60 extreme vertex polytopes. The first row of Z corresponds to the block matrix Z_1. It can be used to represent the set 𝔙 by setting v = 2, since the diameter of 𝕍 is equal to 2. Moreover, due to our choice of Y and 𝕎, we have w = [ 1/2, 1, 1/2, 1/2, 1, 1/2 ]^∈𝒢 . Next, we construct a suitable stage cost function of the form (<ref>). We choose the extrinsic risk measure ℜ(𝔛) ∑_i=1^6 ( max_{x}∈𝔛 Y_i x )^2 + 50 ·∑_i=1^2 max_{x},{ x' }∈𝔛 (x_i-x_i')^2 and the intrinsic information measure 𝔇^∘(𝔛) ∑_i=1^2 ( max_X ∈𝔛 max_x,x' ∈ X | x_i-x_i' | )^2 . This particular choice of ℛ and 𝒟^∘ is such that 𝔯(z) ℜ( 𝔓(z) ) and 𝔡^∘(z) 𝔇^∘( 𝔓(z) ) are convex quadratic forms in z that can be worked out explicitly. Namely, we find that 𝔯(z) = ( ∑_i=1^6 z_i+2^2 ) + 50 ·[ (z_3+z_6)^2 + (z_5+z_8)^2 ] and 𝔡^∘(z) = z_1^2 + z_2^2 . 
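For completeness, the two quadratic forms just derived can be transcribed directly; the helper-function names below are not from the text, zero-based indexing is used (z_1, …, z_8 become z[0], …, z[7]), and the test vector is arbitrary.

import numpy as np

def extrinsic_risk(z):
    # r(z) = sum_{i=1}^{6} z_{i+2}^2 + 50 * [ (z_3 + z_6)^2 + (z_5 + z_8)^2 ]
    z = np.asarray(z, dtype=float)
    return np.sum(z[2:8]**2) + 50.0 * ((z[2] + z[5])**2 + (z[4] + z[7])**2)

def intrinsic_deviation(z):
    # d(z) = z_1^2 + z_2^2
    z = np.asarray(z, dtype=float)
    return z[0]**2 + z[1]**2

z_test = np.ones(8)
print(extrinsic_risk(z_test), intrinsic_deviation(z_test))  # 406.0 2.0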
Last but not least a control penalty function needs to be introduced, which depends on the extreme control inputs, for instance, we can set 𝔠(u) = ∑_i=1^|𝕁|[ u_i^2 + 50 ·( u_i - 1/|𝕁|∑_j=1^|𝕁| u_j )^2 ] in order penalize both the extreme inputs as well as the distances of these extreme inputs to their average value. The final stage cost is given by 𝔩(z,u) = 𝔯(z) + τ·𝔡^∘(z) + 𝔠(u), where we set the intrinsic regularization to τ = 0.01. The optimal invariant information ensemble 𝔓(z_s) is found by solving (<ref>). It is visualized in the left plot of Figure <ref>. Note that the light blue shaded hexagon corresponds to the union of all sets in 𝔓(z_s), which can be interpreted as a visualization of its extrinsic information content. The 60 extreme vertex polytopes of 𝔓(ζ_s), given by P(Ω_j ζ_s) for j ∈{ 1,2,…, 60 }, are difficult to plot as they are all clustered at the vertices of the extrinsic hexagon (they partly obscure each other; not all 60 are clearly visible), but an attempt is made to visualize them in different shades of gray. As the optimal solution happens to satisfy 𝔓(z_s)∩𝔙 = 𝔓(ζ_s), at least for this example, the convex relaxation (<ref>) does not introduce conservatism. Next, a closed-loop simulation of the polytopic dual controller (<ref>) is started with the initial information set X_0 = [17,23] × [17,23] using the prediction horizon N=10 while the terminal cost is set to 𝔪(z_N) = {[ 0 if z_N ≤ z_s; ∞ otherwise ]. in order to enforce recursive feasibility. The right plot of Figure <ref> shows an extrinsic image of the first predicted tube; that is, the union of the sets 𝔓(z_k) along the optimal solution of (<ref>) for the above choice of X_0, which are shown in light blue. The dark blue shaded polytope corresponds to the terminal region that is enforced by the above choice of 𝔪. The proposed polytopic dual control method optimizes feedback laws that depend on the system's information state. Note that such dual control laws are, in general, less conservative than robust output feedback laws that are based on state estimation or observer equations with affine structure, as considered in <cit.> or <cit.>. § CONCLUSIONS This paper has presented a set-theoretic approach to dual control. It is based on meta information spaces that enable a data-free algebraic characterization of both the present and the future information content of learning processes. In detail, an intrinsic equivalence relation has been introduced in order to separate the computation of the future information content of a constrained linear system from the computation of its robust optimal control laws. An associated intrinsic separation principle is summarized in Theorem <ref>. It is the basis for analyzing the existence of solutions of a large class of dual control problems under certain structural and continuity assumptions that are summarized in Theorem <ref>. For the first time, this paper has presented a polytopic dual control method for constrained linear systems that is based on convex optimization. In contrast to existing robust output-feedback control schemes, this method optimizes control laws that depend on the system's information state. This alleviates the need to make control decisions based on state estimates or observer equations that induce a potentially sub-optimal feedback structure. Instead, (<ref>) optimizes a finite number of control inputs that are associated to the extreme vertex polytopes of the predicted information ensembles. 
A numerical case study for a system with two states has indicated that (<ref>) can be solved without numerical problems for moderately sized problems. For larger systems, however, the computational complexity of accurate dual control can become exorbitant. In anticipation of this problem, this paper has outlined strategies towards reducing the computational complexity at the cost of more conservatism. For instance, the Rigid Dual MPC problem (<ref>) has essentially the same online complexity as a comparable certainty-equivalent MPC problem. The development of more systematic methods to tradeoff conservatism and computational complexity of polytopic dual control methods as well as extensions of polytopic dual control for constrained linear systems that aim at simultaneously learning their state and their system matrices A, B, and C appear to be challenging and practically relevant directions for future research. 10 Dorea2021 T.A. Almeida and C.E.T. Dórea. Output feedback constrained regulation of linear systems via controlled-invariant sets. IEEE Transactions on Automatic Control, 66(7), 2021. Artstein2011 Z. Artstein and S.V. Raković. Set invariance under output feedback: a set-dynamics approach. International Journal of Systems Science, 42(4):539–555, 2011. Bemporad2000 A. Bemporad and A. Garulli. Output-feedback predictive control of constrained linear systems via set-membership state estimation. International Journal of Control, 73(8):655–665, 2000. Bertsekas1971 D.P. Bertsekas and I.B. Rhodes. Recursive state estimation for a set-membership description of uncertainty. IEEE Transactions on Automatic Control, 16:117–128, 1971. Blanchini2003 F. Blanchini and S. Miani. Stabilization of LPV systems: state feedback, state estimation, and duality. SIAM Journal on Control and Optimization, 42(1):76–97, 2003. Blanchini2015 F. Blanchini and S. Miani. Set-theoretic methods in control. Systems & Control: Foundations & Applications. Birkhäuser Boston, Inc., Boston, MA, 2015. Brunner2018 F.D. Brunner, M.A. Müller, and F. Allgöwer. Enhancing output-feedback MPC with set-valued moving horizon estimation. IEEE Transactions on Automatic Control, 63(9):2976–2986, 2018. Doob1953 J.L. Doob. Stochastic Processes. Wiley, 1953. Dorea2009 C.E.T. Dórea. Output-feedback controlled-invariant polyhedra for constrained linear systems. Proceedings of the 48th IEEE Conference on Decision and Control, Shanghai, pages 5317–5322, 2009. Efimov2022 A. dos Reis de Souza, D. Efimov, T. Raïssi, and X. Ping. Robust output feedback model predictive control for constrained linear systems via interval observers. Automatica, 135(109951), 2022. Feldbaum1961 A.A. Feldbaum. Dual-control theory (i-iv). Automation and Remote Control, 21, pages 1240–1249 and 1453–1464, 1960; and 22, pages 3–16 and 129–143, 1961. Filatov2004 N.M. Filatov and H. Unbehauen. Adaptive Dual Control. Springer, 2004. Findeisen2003 R. Findeisen, L. Imsland, F. Allgöwer, and B.A. Foss. State and output feedback nonlinear model predictive control: An overview. European Journal of Control, 9:190–207, 2003. Fukuda2020 K. Fukuda. Polyhedral Computation. ETH Zürich Research Collection, 2020. Goulart2007 P. Goulart and E. Kerrigan. Output feedback receding horizon control of constrained systems. International Journal of Control, 80(1):8–20, 2007. Gutman1986 P.O. Gutman and M. Cwikel. Admissible sets and feedback control for discrete-time linear dynamical systems with bounded controls and states. IEEE Transactions on Automatic Control, 31(4):373–376, 1986. 
Hewing2020 L. Hewing, K.P. Wabersich, M. Menner, and M.N. Zeilinger. Learning-based model predictive control: Toward safe learning in control. Annual Review of Control, Robotics, and Autonomous Systems, 3:269–296, 2020. Hovd2005 M. Hovd and R.R. Bitmead. Interaction between control and state estimation in nonlinear MPC. Modeling, Identification, and Control, 26(3):165–174, 2005. Joseph1961 P.D. Joseph and J.T. Tou. On linear control theory. AIEE Transactions on Applications and Industry, 80:193–196, 1961. Kalman1962 R.E. Kalman. Canonical structure of linear dynamical systems. Proceedings of the National Academy of Sciences the United States of America, 48:596–600, 1962. Krasovskii1995 A.N. Krasovskii and N.N. Krasovskii. Control under lack of information. Birkhäuser, Boston, 1995. Krasovskii1964 N.N. Krasovskii. On the theory of controllability and observability of linear dynamic systems. Journal of Applied Mathematics and Mechanics, 28(1):1–14, 1964. Kurzhanski1972 A.B. Kurzhanskii. Differential games of observation. Doklady Akademii Nauk SSSR, 207(3):527–530, 1972. Kurzhanski2004 A.B. Kurzhanskii. The problem of measurement feedback control. Journal of Applied Mathematics and Mechanics, 68:487–501, 2004. Langson2004 W. Langson, I. Chryssochoos, S.V. Raković, and D.Q. Mayne. Robust model predictive control using tubes. Automatica, 40(1):125–133, 2004. Lindquist1973 A. Lindquist. On feedback control of linear stochastic systems. SIAM Journal on Control, 11:323–343, 1973. Mayne2009 D.Q. Mayne, S.V. Rakovic, R. Findeisen, and F. Allgöwer. Robust output feedback model predictive control of constrained linear systems: time varying case. Automatica, 45:2082–2087, 2009. Rakovic2012 S. Raković, B. Kouvaritakis, R. Findeisen, and M. Cannon. Homothetic tube model predictive control. Automatica, 48(8):1631–1638, 2012. Rakovic2016 S. Raković, W.S. Levine, and B. Açıkmeşe. Elastic tube model predictive control. In American Control Conference (ACC), 2016, pages 3594–3599. IEEE, 2016. Rawlings2017 J.B. Rawlings, D.Q. Mayne, and M.M. Diehl. Model Predictive Control: Theory, Computation, and Design. Madison, WI: Nob Hill Publishing, 2017. Rockafellar2013 R.T. Rockafellar and S. Uryasev. The fundamental risk quadrangle in risk management, optimization and statistical estimation. Surveys in Operations Research and Management Science, 18:33–53, 2013. Sehr2019 M.A. Sehr and R.R. Bitmead. Probing and Duality in Stochastic Model Predictive Control. In Handbook of Model Predictive Control. Control Engineering, pages 125–144, Birkhäuser, 2019. Stengel1994 R. Stengel. Optimal Control and Estimation. Dover Publications, New York, 1994. Taylor1996 J.C. Taylor. An introduction to measure and probability. Springer, 1996. Warter1981 H. van Warter and J.C. Willems. The certainty equivalence property in stochastic control theory. IEEE Transactions on Automatic Control, AC-26(5):1080–1087, 1981. Villani2005 C. Villani. Optimal transport, old and new. Springer, 2005. Villanueva2020 M.E. Villanueva, E. De Lazzari, M.A. Müller, and B. Houska. A set-theoretic generalization of dissipativity with applications in Tube MPC. Automatica, 122(109179), 2020. Villanueva2022 M.E. Villanueva, M.A. Müller, and B. Houska. Configuration-constrained tube MPC. arXiv e-prints, page arXiv:2208.12554, 2022 (accessed 2022 November 4). Witsenhausen1968 H.S. Witsenhausen. Sets of possible states of linear systems given perturbed observations. IEEE Transactions on Automatic Control, 13:556–558, 1968. Wonham1969 W.M. Wonham. 
On the separation theorem of stochastic control. SIAM Journal on Control, 6(2):312–326, 1968. Wu2022 F. Wu, M.E. Villanueva, and B. Houska. Ambiguity tube MPC. Automatica, 146(110648), 2022. Zanon2021 M. Zanon and S. Gros. Safe reinforcement learning using robust MPC. IEEE Transactions on Automatic Control, 66(8):3638–3652, 2021.
http://arxiv.org/abs/2307.06222v2
20230712151345
The physical and chemical structure of Sagittarius B2 VIIIa. Dust and ionized gas contributions to the full molecular line survey of 47 hot cores
[ "T. Möller", "P. Schilke", "Á. Sánchez-Monge", "A. Schmiedeke", "F. Meng" ]
astro-ph.GA
[ "astro-ph.GA" ]
I. Physikalisches Institut, Universität zu Köln, Zülpicher Str. 77, D-50937 Köln, Germany [email protected] Institut de Ciències de l'Espai (ICE, CSIC), Can Magrans s/n, E-08193, Bellaterra, Barcelona, Spain Institut d'Estudis Espacials de Catalunya (IEEC), Barcelona, Spain Green Bank Observatory, 155 Observatory Rd, Green Bank, WV 24944 (USA) University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China Sagittarius B2 (Sgr B2) is a giant molecular cloud complex in the central molecular zone of our Galaxy hosting several sites of high-mass star-formation. The two main centers of activity are Sgr B2(M) and Sgr B2(N) containing 27 continuum sources in Sgr B2(M) and 20 sources in Sgr B2(N), respectively. Our analysis aims to be a comprehensive modelling of each core spectrum, where we take the complex interaction between molecular lines, dust attenuation, and free-free emission arising from H II regions into account. In this work, which is the first of two papers on the complete analysis, we determine the dust and, if H II regions are contained, the parameters of the free-free thermal emission of the ionized gas for each core and derive a self-consistent description of the continuum levels of each core. Using the high sensitivity of ALMA we seek to characterize the physical and chemical structure of these continuum sources and gain better insight into the star formation process within the cores. We used ALMA to perform an unbiased spectral line survey of all 47 sources in the ALMA band 6 with a frequency coverage from 211 GHz to 275 GHz. In order to model the free-free continuum contribution of a specific core we fit the contained recombination lines to obtain the electron temperatures and the emission measures, where we use an extended XCLASS program to describe recombination lines and free-free continuum simultaneously. In contrast to previous analyses, we derived the corresponding parameters here not only for each core but also for their local surrounding envelope and determined their physical properties. The distribution of recombination lines we found in the core spectra fits well with the distribution of H II regions described in previous analyses. In Sgr B2(M), the three inner sources are the most massive, whereas in Sgr B2(N) the innermost core A01 dominates all other sources in mass and size. For the cores we determine average dust temperatures around 236 K (Sgr B2(M)) and 225 K (Sgr B2(N)), while the electron temperatures are located in a range between 3800 K and 23800 K. The self-consistent description of the continuum levels and the quantitative description of the dust and free-free contributions form the basis for the further analysis of the chemical composition of the individual sources, which is continued in the next paper. This detailed modeling will give us a more complete picture of the star formation process in this exciting environment. The physical and chemical structure of Sagittarius B2 VIIIa. Dust and ionized gas contributions to the full molecular line survey of 47 hot cores T. Möller1 P. Schilke1 Á.
Sánchez-Monge1,2,3 A. Schmiedeke1,4 F. Meng1,5 Received May 01, 2023 / Accepted May 31, 2023 ============================================================================================================================================================ § INTRODUCTION Sagittarius B2 (Sgr B2) is a giant molecular cloud complex in the central molecular zone (CMZ) of our Galaxy and hosts several sites of high-mass star-formation. Situated at a distance of 8.34 ± 0.16 kpc 2019A A...625L..10G, Sgr B2 is one of the most massive molecular clouds in the Galaxy with a mass of 10^7 M_⊙ and H_2 densities of 10^3 – 10^5 cm^-3 (2016A A...588A.143S, 1995A A...294..667H, 1989ApJ...337..704L). The Sgr B2 complex has a diameter of 36 pc 2016A A...588A.143S and contains two main sites of active high-mass star formation, Sgr B2 Main (M) and North (N), which are separated by ∼48 (∼1.9 pc in projection). Both sites have comparable luminosities of 2 - 10 × 10^6 L_⊙, masses of 5 × 10^4 M_⊙ and sizes of ∼0.5 pc [see][]2016A A...588A.143S and are surrounded by an envelope, which occupies an area of around 2 pc in radius. This envelope contains at least ∼70 high-mass stars with spectral types in the range from O5 to B0 (see e.g. 1995ApJ...449..663G, 2014ApJ...781L..36D). All together is embedded in another envelope with a radius of 20 pc, which contains more than 99 % of the total mass of Sgr B2, although it has a much lower density (n_H_2∼ 10^3 cm^-3) and hydrogen column density (N_H ∼ 10^23 cm^-2) compared to the inner envelope, whose density (n_H_2∼ 10^5 cm^-3) and hydrogen column density (N_H ∼ 10^24 cm^-2) are significantly higher. The greater number of  regions and the higher degree of fragmentation observed in Sgr B2(M) suggests a more evolved stage and a greater amount of feedback compared to Sgr B2(N) (see e.g. 1992ApJ...389..338G, 2011A A...530L...9Q, 2017A A...604A...6S, 2019A A...628A...6S, 2019A A...630A..73M, 2022A A...666A..31M). Furthermore, M is very rich in sulfur-bearing molecules, while N is dominated by organic matter (1991ApJS...77..255S, 1998ApJS..117..427N, 2004ApJ...600..234F, 2013A A...559A..47B, 2014ApJ...789....8N, 2021A A...651A...9M). Since both sites have large luminosities indicating ongoing high-mass star formation, the age difference between M and N is not very large, as shown by 1995ApJ...451..284D, (1996ApJ...464..788D), who used the photon flux of the exciting stars and the ambient gas density to estimate an age of ∼10^4 yr for both regions. High mass proto-clusters such as Sgr B2 have complex, multi-layered structures that require an extensive analysis. Sgr B2 provides a unique opportunity to study in detail the nearest counterpart of the extreme environments that dominate star formation in the Universe. The high density of molecular lines and the continuum emission detected toward the two main sites indicate the presence of a large amount of material to form new stars. Spectral line surveys give the possibility to obtain a census of all atoms and molecules and give insights into their thermal excitation conditions and dynamics by studying line intensities and profiles, which allows one to separate different physical components and to identify chemical patterns. Although the molecular content of Sgr B2 was analyzed in many line surveys before (see e.g. 
1986ApJS...60..819C, 1989ApJS...70..539T, 1991ApJS...77..255S, 1998ApJS..117..427N, 2004ApJ...600..234F, 2013A A...559A..47B, 2014ApJ...789....8N, 2021A A...651A...9M), the high sensitivity of the Atacama Large Millimeter/submillimeter Array (ALMA) offers the possibility to gain better insight into the star formation process. Our analysis, which we will describe in the following, is a continuation of the paper by 2017A A...604A...6S, where 47 hot-cores in the continuum emission maps of Sgr B2(M) and N were identified, see Figs. <ref> - <ref>. This paper is the first of two papers describing the full analysis of broadband spectral line surveys towards these 47 hot-cores to characterize the hot core population in Sgr B2(M) and N. This analysis aims to be a comprehensive modelling of each core spectrum, where we take the complex interaction between molecular lines, dust attenuation, and free-free emission arising from  regions into account. As shown by 2017A A...604A...6S, many of the identified cores contain large amounts of dust. Additionally, some cores were associated with  regions. However, 2017A A...604A...6S do not distinguish between the contributions from the cores and the envelope, which did not allow to isolate the core mass, especially for the weaker cores. In addition, the extinction due to dust and ionized gas must be determined properly to get reliable results for massive sources such as Sgr B2(M) and N. In this paper we quantify the dust and, if present, the free-free contributions to the continuum by deriving the appropriate parameters for each core and the local surrounding envelope and determine the corresponding physical properties. Here, we obtain the dust temperatures from the results of the line surveys by assuming that the dust temperature equals the gas temperature following 1984A A...130....5K and 2001ApJ...557..736G, who showed that gas and dust are thermally coupled at high densities (n_H_2 > 10^5 cm^-3), which can be found at the inner parts of the Sgr B2 complex. In the second paper (Möller et al. in prep.), we describe the analysis of the molecular content of each hot core, where we identify the chemical composition of the detected sources and derive column densities and temperatures. This paper is structured as follows: We start with Section <ref>, where we describe the observations and outline the data reduction procedure, followed by Section <ref>, where we present the modeling methodology used to analyze the data set. Afterwards, our results are described and discussed in Section <ref>. Finally, we present our conclusions in Section <ref>. § OBSERVATIONS AND DATA REDUCTION Sgr B2 was observed with ALMA [Atacama Large Millimeter/submillimeter Array;][]2015ApJ...808L...1A during Cycle 2 in June 2014 and June 2015, using 34 – 36 antennas in an extended configuration with baselines in the range from 30 m to 650 m, which results in an angular resolution of 0 3 - 0 7 (corresponding to ∼3300 au). The observations were carried out in the spectral scan mode covering the whole ALMA band 6 (211 to 275 GHz) with 10 different spectral tunings, providing a resolution of 0.5 – 0.7 km s^-1 across the full frequency band. The two sources Sgr B2(M) and Sgr B2(N) were observed in track-sharing mode, with phase centers at α_ J2000 = 17^ h 47^ m 20 157, δ_ J2000 = -28^∘ 23' 04 53 for Sgr B2(M), and at α_ J2000 = 17^ h 47^ m 19 887, δ_ J2000 = -28^∘ 22' 15 76 for Sgr B2(N). 
Calibration and imaging were carried out with CASA[The Common Astronomy Software Applications [CASA,][]2007ASPC..376..127M is available at <https://casa.nrao.edu>.] version 4.4.0. Finally, all images were restored with a common Gaussian beam of 0.4 arcsec. Details of the observations, calibration and imaging procedures are described in 2017A A...604A...6S and 2019A A...628A...6S.
§ DATA ANALYSIS
The spectra of each hot core and the corresponding surrounding envelope were modeled using the eXtended CASA Line Analysis Software Suite [XCLASS[<https://xclass.astro.uni-koeln.de/>],][]2017A A...598A...7M with additional extensions (Möller in prep.). By solving the 1D radiative transfer equation assuming local thermal equilibrium (LTE) conditions and an isothermal source, XCLASS enables the modeling and fitting of molecular lines:
T_mb(ν) = ∑_{m,c} [ η(θ_source^{m,c}) ( S^{m,c}(ν) (1 - e^{-τ_total^{m,c}(ν)}) + I_bg(ν) (e^{-τ_total^{m,c}(ν)} - 1) ) ] + (I_bg(ν) - J_CMB),
where the sums run over the indices m for molecule and c for component, respectively. In Eq. (<ref>), T_mb(ν) represents the intensity in Kelvin, η(θ^{m,c}) the beam filling (dilution) factor, S^{m,c}(ν) the source function, see Eq. (<ref>), and τ_total^{m,c}(ν) the total optical depth of each molecule m and component c. Additionally, I_bg indicates the background intensity and J_CMB the intensity of the cosmic microwave background. As Sgr B2(M) and N have high H_2 densities (n_H_2 > 10^6 cm^-3), LTE conditions can be assumed 2015PASP..127..266M and the kinetic temperature of the gas can be estimated from the rotation temperature: T_rot ≈ T_kin. For molecules, we assume Gaussian line profiles, whereas Voigt line profiles are used for radio recombination lines (RRLs), see Sect. <ref>. Additionally, finite source size, dust attenuation, and optical depth effects are taken into account as well. All molecular parameters (e.g. transition frequencies, Einstein A coefficients) are taken from an embedded SQLite database containing entries from the Cologne Database for Molecular Spectroscopy (CDMS, 2001A A...370L..49M, 2005JMoSt.742..215M) and the Jet Propulsion Laboratory database (JPL, 1998JQSRT..60..883P) using the Virtual Atomic and Molecular Data Center (VAMDC, 2016JMoSp.327...95E). Additionally, the database used by XCLASS describes partition functions for more than 2500 molecules between 1.07 and 1000 K. The contribution of each molecule is described by multiple emission and absorption components, where each component is specified by the source size θ_source, the rotation temperature T_rot, the column density N_tot, the line width Δv, and the velocity offset v_offset from the source velocity (v_LSR). Moreover, XCLASS offers the possibility to locate each component at a certain distance l along the line of sight. All model parameters can be fitted to observational data by using different optimization algorithms provided by the optimization package MAGIX 2013A A...549A..21M. In order to reduce the number of fit parameters, the modeling can be done simultaneously with corresponding isotopologues and vibrationally excited states. The ratio with respect to the main species can either be fixed or used as an additional fit parameter. Details are described in 2017A A...598A...7M.
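For concreteness, the following minimal Python sketch shows how the LTE sum above combines several components into a model brightness temperature. It is not the actual XCLASS implementation; the function and argument names are ours, and each component is reduced to a beam filling factor, a source function and a total opacity.

```python
import numpy as np

def brightness_temperature(nu, components, I_bg, J_cmb):
    """Combine LTE components into a model brightness temperature T_mb(nu).

    `components` is a list of dicts holding the beam filling factor `eta`,
    a source function S(nu) and a total opacity tau(nu) (lines plus dust),
    mirroring the sum in the radiative transfer expression above.
    """
    T_mb = np.zeros_like(nu, dtype=float)
    for comp in components:
        tau = comp["tau"](nu)
        S = comp["S"](nu)
        T_mb += comp["eta"] * (S * (1.0 - np.exp(-tau))
                               + I_bg(nu) * (np.exp(-tau) - 1.0))
    return T_mb + (I_bg(nu) - J_cmb)
```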
§.§ Recombination lines
In addition to molecules, XCLASS can analyze radio recombination lines as well. According to 2006ApJ...653.1226Q, who analyzed a large number of Galactic H II regions, deviations from LTE are small, making LTE a reasonable assumption. Similar to molecules, the contribution of each RRL is described by a certain number of components, where each component is, in addition to source size θ_source (in arcsec) and distance l (stacking parameter), defined by the electron temperature T_e (in K), the emission measure EM (in pc cm^-6), the line width(s) Δv (in km s^-1), and the velocity offset v_offset (in km s^-1). The optical depth of RRLs (in LTE) is given by 2002ASSL..282.....G
τ_RRL,ν = ∫ κ^ext_{n_1,n_2,ν} ds = [π h^3 e^2 / ((2π m_e k_B)^{3/2} m_e c)] · EM · [n_1^2 f_{n_1,n_2} / T_e^{3/2}] × exp[Z^2 E_{n_1}/(k_B T_e)] · (1 - e^{-h ν_{n_1,n_2}/k_B T_e}) · φ_ν.
Here, f_{n_1,n_2} indicates the oscillator strength, ν_{n_1,n_2} the transition frequency, n_1 the main quantum number, and E_{n_1} the energy of the lower state, which are taken from the embedded database. For each RRL the database contains oscillator strengths up to Δn = 6 (ζ-transitions), i.e. all transitions of the RRL within the given frequency range up to ζ transitions are taken into account. In Eq. (<ref>), the term φ_ν represents the line profile function. XCLASS offers the possibility to use a Gaussian G(x) or a Voigt V(x, σ, γ) line profile function, the latter being a convolution of the Gaussian and the Lorentzian function, i.e. 2002ASSL..282.....G
V(x, σ, γ) = ∫_{-∞}^{∞} G(x'; σ) L(x - x', γ) dx'.
Because the computation of the Voigt profile is computationally quite expensive, XCLASS uses the pseudo-Voigt profile φ^{m,c,t}_pseudo-Voigt(ν), which approximates the Voigt profile V(x) by a linear combination of a Gaussian G(x) and a Lorentzian line profile function L(x) instead of their convolution. The mathematical definition of the normalized pseudo-Voigt profile, i.e. ∫_0^∞ φ^{m,c,t}_pseudo-Voigt(ν) dν = 1, is given by
φ^{m,c,t}_pseudo-Voigt(ν) = η · L(ν, f) + (1 - η) · G(ν, f),
with 0 < η < 1. There are several possible choices for the η parameter. XCLASS uses the expression derived by <cit.>,
η = 1.36603 (f_L/f) - 0.47719 (f_L/f)^2 + 0.11116 (f_L/f)^3,
which is accurate to 1 %. Here,
f = [f_G^5 + 2.69269 f_G^4 f_L + 2.42843 f_G^3 f_L^2 + 4.47163 f_G^2 f_L^3 + 0.07842 f_G f_L^4 + f_L^5]^{1/5}
indicates the total full width at half maximum (FWHM), where f_L and f_G represent the Lorentzian and Gaussian full widths at half maximum, respectively. The application of a Voigt line profile requires an additional parameter for each RRL and component: in addition to the Gaussian line width Δv_G^{m,c}, the Lorentzian line width Δv_L^{m,c} (in km s^-1) has to be specified as well. These line widths are related to the full widths at half maximum f_L and f_G by
f_G = (Δv_G^{m,c} / c_light) · ν_t^{m,c} · (1 - (v_offset^{m,c} + v_LSR)/c_light),
f_L = (Δv_L^{m,c} / c_light) · ν_t^{m,c} · (1 - (v_offset^{m,c} + v_LSR)/c_light),
where ν_t^{m,c} indicates the transition frequency of RRL m, component c, and transition t, v_offset^{m,c} the velocity offset, and v_LSR the source velocity, respectively.
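As an illustration, the pseudo-Voigt recipe above can be written down in a few lines of Python. This is only a numerical sketch (the function name is ours, not XCLASS code); it mixes a normalized Gaussian and Lorentzian of common FWHM f with the weight η quoted above.

```python
import numpy as np

def pseudo_voigt(nu, nu0, f_G, f_L):
    """Normalized pseudo-Voigt profile; nu0 is the line centre,
    f_G and f_L are the Gaussian and Lorentzian FWHM (same units as nu)."""
    # combined FWHM and mixing parameter from the expressions in the text
    f = (f_G**5 + 2.69269*f_G**4*f_L + 2.42843*f_G**3*f_L**2
         + 4.47163*f_G**2*f_L**3 + 0.07842*f_G*f_L**4 + f_L**5)**0.2
    eta = 1.36603*(f_L/f) - 0.47719*(f_L/f)**2 + 0.11116*(f_L/f)**3
    sigma = f / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # Gaussian sigma from FWHM
    gauss = np.exp(-0.5*((nu - nu0)/sigma)**2) / (sigma*np.sqrt(2.0*np.pi))
    lorentz = (f/(2.0*np.pi)) / ((nu - nu0)**2 + (f/2.0)**2)
    return eta*lorentz + (1.0 - eta)*gauss
```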
§.§ Free-free continuum
The hot plasma in H II regions gives rise to the emission of thermal bremsstrahlung, which causes a continuum opacity. The optical depth τ_ff of this free-free contribution in terms of the classical electron radius r_e = α ħ c / (m_e c^2) = α^2 a_0 is given by 2000A A...356.1149B
τ_ff = (4/3) (2π/3)^{1/2} · [r_e^3 Z Z_i^2 m^{3/2} c^5 / (√(k_B T_e) h ν^3)] · (1 - e^{-hν/k_B T_e}) · ⟨g_ff⟩ · EM
     = 1.13725 · (1 - e^{-hν/k_B T_e}) · ⟨g_ff⟩ · [T_e/K]^{-1/2} · [ν/GHz]^{-3} · [EM/(pc cm^{-6})],
where T_e indicates the electron temperature, EM the emission measure, and ⟨g_ff⟩ the thermally averaged free-free Gaunt coefficient, respectively. XCLASS makes use of the tabulated thermally averaged free-free Gaunt coefficients ⟨g_ff⟩ derived by 2015MNRAS.449.2112V, which include relativistic effects as well. In order to model the free-free continuum contribution of a specific core we fit the corresponding RRLs to obtain the electron temperatures T_e and the emission measures EM, see Sect. <ref>.
§.§ Dust extinction
Extinction from dust is very important in Sgr B2 [see e.g.][]2021A A...651A...9M. Assuming that dust and gas are well mixed, the dust opacity τ_d(ν) used by XCLASS is described by
τ_d(ν) = τ_d,ref · [ν/ν_ref]^β = [N_H · κ_ν_ref · m_H_2 · (1/χ_gas-dust)] · [ν/ν_ref]^β,
where N_H indicates the hydrogen column density (in cm^-2), κ_ν_ref the dust mass opacity for a certain type of dust [in cm^2 g^-1,][]1994A A...291..943O, and β the spectral index[Note that we use temperature units for the fitting. The spectral index for flux units, α, is given by α = β + 2.]. In addition, ν_ref = 230 GHz represents the reference frequency for κ_ν_ref, m_H_2 the mass of a hydrogen molecule, and 1/χ_gas-dust the dust-to-gas ratio, which is set here to 1/100 1983QJRAS..24..267H.
§.§ Local overlap
In line-crowded sources like Sgr B2(M) and N, line intensities from two neighbouring lines, whose central frequencies have (partly) overlapping width regions, do not simply add up if at least one line is optically thick. Here, photons emitted from one line are absorbed by the other line. XCLASS takes the local line overlap [described by][]1991A A...241..537C from different components into account by computing an average source function S_l(ν) at frequency ν and distance l,
S_l(ν) = ε_l(ν)/α_l(ν) = [∑_t τ_t^c(ν) S_ν(T_rot^c)] / [∑_t τ_t^c(ν)],
where ε_l describes the emission and α_l the absorption function, T_rot^c the excitation temperature, and τ_t^c the optical depth of transition t and component c, respectively. Additionally, the optical depths of the individual lines included in Eq. (<ref>) are replaced by their arithmetic mean at distance l, that is
τ_total^l(ν) = ∑_c [ (∑_t τ_t^c(ν)) + τ_d^c(ν) ],
where the sums run over both components c and transitions t. Here, τ_d^c(ν) indicates the dust opacity, which is added as well. The iterative treatment of components at different distances also takes non-local effects into account. Details of this procedure are described in 2021A A...651A...9M.
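The two continuum opacity terms introduced above can be evaluated with a few lines of Python. The sketch below mirrors the approximate free-free expression and the power-law dust opacity; constants are rounded, the Gaunt coefficient is passed in as a plain number, and the function and parameter names are ours rather than XCLASS internals.

```python
import numpy as np

def tau_freefree(nu_ghz, T_e, EM, gaunt=1.0):
    """Free-free optical depth; nu_ghz in GHz, T_e in K, EM in pc cm^-6.
    `gaunt` stands in for the tabulated thermally averaged <g_ff>."""
    h_over_k = 4.799e-2                                    # h/k_B in K/GHz
    stim = 1.0 - np.exp(-h_over_k * nu_ghz / T_e)          # stimulated-emission factor
    return 1.13725 * stim * gaunt * T_e**-0.5 * nu_ghz**-3.0 * EM

def tau_dust(nu_ghz, N_H, beta, kappa_ref=1.11, nu_ref=230.0, gas_to_dust=100.0):
    """Dust optical depth for a power-law opacity; kappa_ref in cm^2 g^-1,
    N_H in cm^-2, following the expression quoted in the text."""
    m_H2 = 3.35e-24                                        # g, mass of an H2 molecule
    tau_ref = N_H * kappa_ref * m_H2 / gas_to_dust
    return tau_ref * (nu_ghz / nu_ref)**beta
```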
§.§ Fitting procedure
§.§.§ General model setup
In our analysis, we assume a two-layer model for all cores in Sgr B2(M) and N, in which the first layer (hereafter called core layer) describes contributions from the corresponding core. The second layer (envelope layer) contains features from the local surrounding envelope of Sgr B2 and is located in front of the core layer. Here, we assume that all components belonging to a layer have the same distance to the observer. Following 2017A A...604A...6S, a core is identified within the continuum emission maps of Sgr B2(M) and N if at least one closed contour (polygon) above the 3σ level is found (where σ indicates the rms noise level of the maps, 8 mJy beam^-1 for both Sgr B2(M) and N), see Figs. <ref> - <ref>. The spectra for each core, described in Figs. <ref> - <ref>, are obtained by averaging over all pixels contained in a polygon to improve the signal-to-noise ratio and the detection of weak lines. As shown in Figs. <ref> - <ref>, extended structures are clearly visible in addition to the identified compact sources, i.e. the dust is not concentrated on the cores only, but a non-negligible contribution is also contained in the envelope. Here, we cannot distinguish between the contributions of the inner and outer envelope of Sgr B2. However, the outer envelope will not contribute significantly due to its much lower density. To obtain a consistent description of the continuum for each core spectrum, the dust parameters for each core and the local surrounding envelope have to be determined. In agreement with 2017A A...604A...6S, we assume a dust mass opacity of κ_1300 μm = 1.11 cm^2 g^-1 [agglomerated grains with thin ice mantles in cores of densities 10^8 cm^-3;][]1994A A...291..943O for both layers. For some hot cores, for which we have derived dust temperatures above 300 K, this may not be a good choice. Using a different dust mass opacity of, e.g., κ_1300 μm = 5.86 cm^2 g^-1 (agglomerated grains without ice mantles in cores of densities 10^8 cm^-3) would result in hydrogen column densities lower by a factor of five. In our analysis, see Fig. <ref>, we start by identifying molecules and, if contained, recombination lines in each core spectrum and determine a quantitative description of their respective contributions. Here we first use a phenomenological description of the corresponding continuum level, neglecting dust and free-free contributions.
§.§.§ Envelope spectra
In the following, we determine the dust parameters of the envelope by selecting pixels around each core that are not too close to another core or H II region[Here we consider the H II regions described by 2015ApJ...815..123D.], see Fig. <ref>. Here, we first selected points for each core that have the same distance to the core center, and then successively shifted outward those points that were still within the contour of the source or too close to an H II region. The spectra at these positions are used to compute an averaged envelope spectrum for each core, see Figs. <ref> - <ref>, where we used STATCONT 2018A A...609A.101S to estimate the corresponding continuum level.
§.§.§ Dust parameters of the envelope
In order to compute the dust parameters for each envelope, we assume that the gas temperature equals the dust temperature and estimate the gas temperature for each envelope by fitting CH_3CN, H_2CCO, H_2CO, H_2CS, HNCO, and SO in the corresponding spectra, see Fig. <ref>. These molecules were chosen because they show mostly isolated and non-blended transitions, and therefore they can be used to derive temperatures without requiring a full line survey analysis of the entire envelope spectrum. The final dust temperature for each envelope is computed by averaging over the obtained excitation temperatures. The corresponding hydrogen column density N_H and spectral index β are determined by fitting the continuum level of each envelope spectrum using XCLASS; a simplified numerical sketch of such a continuum fit is given below.
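A highly simplified illustration of such a continuum fit is the following single-layer greybody sketch, in which N_H and β are adjusted to reproduce broadband continuum levels at a fixed dust temperature. It ignores the core/envelope layering, the free-free term and line contamination handled by XCLASS, and all numbers, frequencies and names below are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

H_K = 4.799e-2            # h/k_B in K/GHz
T_CMB = 2.73              # K

def planck_T(nu_ghz, T):
    """Planck brightness temperature J_nu(T) in K."""
    x = H_K * nu_ghz / T
    return H_K * nu_ghz / (np.exp(x) - 1.0)

def dust_continuum(nu_ghz, logN_H, beta, T_dust,
                   kappa_ref=1.11, nu_ref=230.0, gas_to_dust=100.0):
    """Single-layer greybody continuum in temperature units."""
    m_H2 = 3.35e-24       # g
    tau = 10**logN_H * kappa_ref * m_H2 / gas_to_dust * (nu_ghz/nu_ref)**beta
    return (planck_T(nu_ghz, T_dust) - planck_T(nu_ghz, T_CMB)) * (1.0 - np.exp(-tau))

# hypothetical usage: continuum levels (e.g. from STATCONT) at a few
# representative frequencies, with T_dust fixed from the molecular line fits
nu = np.linspace(215.0, 273.0, 10)                 # GHz
T_cont = dust_continuum(nu, 24.3, 1.5, 60.0)       # fake "observed" levels
popt, pcov = curve_fit(lambda n, lgN, b: dust_continuum(n, lgN, b, 60.0),
                       nu, T_cont, p0=[24.0, 1.0])
logN_H_fit, beta_fit = popt
```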
§.§.§ Free-free parameters of the envelope Some envelope spectra (A07, A09, A24, and A26 in Sgr B2(M)) contain RRLs, which we used to determine the free-free contribution to the corresponding continuum levels as well. The frequency ranges covered by our observation, contain two Hα (H29α and H30α), three Hβ (H36β, H37β, H38β), three Hγ (H41γ, H42γ, H43γ), four Hδ (H44δ, H46δ, H47δ, H48δ), five Hϵ (H47ϵ, H48ϵ, H49ϵ, H50ϵ, H51ϵ), and four Hζ (H50ζ, H52ζ, H53ζ, H54ζ) transitions. Although the contributions of transitions with Δ n ≥ 4 (δ-, ϵ-, and ζ-transitions) are very small, they can still help in determining the model parameters (electron temperatures, T_ e, and the emission measures, EM) because even very weak transitions contain useful informations and provide additional constraints. In contrast to the aforementioned procedure we have performed a full line survey analysis of each envelope spectrum containing RRLs taking local-overlap into account, see Sect. <ref>, because the RRLs there contain non-negligible admixtures of other molecules, see Fig. <ref>. In this analysis, we start with modelling all molecules and RRLs using the new XCLASS-GUI included in the extended XCLASS package, which offers the possibility to interactively model observational data and to describes molecules and recombination lines in LTE. The GUI can be used to generate synthetic spectra from input physical parameters, which can be overlaid on the observed spectra and/or fitted to the observations to obtain the best fit to the physical parameters. For all RRLs, we used a single emission component covering the full beam. Furthermore, for species where only one transition is included in the survey, e.g. CO and CS, a reliable quantitative description of their contribution is not possible. Since line overlap plays a major role for many sources, it is necessary to model the contributions of these molecules as well. Therefore, we fix the excitation temperatures for components describing emission features to a value of 200 K[A temperature of 200 K might be to high for molecules located in the envelope, but for molecules with only one transition a reliable temperature estimation is not possible. Here we have used the indicated value only for a phenomenological description of the line shape of the corresponding molecule. The exact value has no further meaning for our analysis.]. For components describing absorption features, we assume an excitation temperature of 2.7 K. In the next step we compute the average gas temperature from all identified molecules with more than one transition, where we take all components into account which describe emission features. After that, we perform one final fit, where all molecule and RRL parameters are fitted together with the hydrogen column density N_ H and spectral index β to achieve a self-consistent description of the molecular and recombination lines and the continuum level. Subsequently, we re-compute again the average gas temperature to get the dust temperature of the envelope. §.§.§ Continuum parameters of core In the next step, we estimate the continuum parameters for each core spectrum. Similar to the procedure described above, we obtain the dust temperature for each core from the averaged gas temperature. Here, we make use of the results of the analysis of the full molecular line survey. 
Here, we modeled all RRLs with a single emission component, whose source size θ_ core is given by the diameter of a circle that has the same area A_ core as the corresponding polygon describing the source[Here, we assume a common source size for all molecules and RRLs within a source, since calculating each individual source size would be too computationally expensive due to the calculation of the local overlap. Therefore, our derived column densities describe lower limits only.], see Figs. <ref> - <ref>, i.e. we determine the source size θ_ core, see Tab. <ref>, using A^ core = π (θ_ core/2)^2 ⇒θ_ core = 2 √(A_ core/π). For the temperature estimation we consider all components of molecules describing emission features in the corresponding core spectrum and which have more than one transition within the frequency ranges covered by the observation. Afterwards, we use again XCLASS to derive the corresponding hydrogen column density N_ H and spectral index β, taking into account both the continuum contributions from the envelope layer and a possibly existing free-free contribution from the core layer, see Fig. <ref>. The obtained dust and free-free parameters for each layer and source are described in Tabs. <ref> - <ref>. Additionally, we calculated the contribution γ of each portion to the total continuum of the corresponding core spectrum by determining the ratio of the integrated intensity of each contribution and the total continuum. Here, each contribution is calculated without taking the interaction with other contributions into account, which is why the ratios described in Tabs. <ref> - <ref> should be regarded as upper limits. For some sources we were not able to derive a self-consistent description of the continuum of the corresponding core spectrum. For sources A10, A20, A25, and A27 in Sgr B2(M) and A07, A08, A10, A12, A13, and A20 in Sgr B2(N), we could not find RRLs despite negative slopes of the continuum levels. Additionally, the derived free-free parameters for source A16 in Sgr B2(N) can not describe the observed slope. In addition, sources A18 in Sgr B2(M) and A11 in Sgr B2(N) show positive slopes that cannot be explained by optically thin dust emissions. As mentioned by 2017A A...604A...6S, for some faint sources the slope of the continuum levels might be falsified by calibration issues and the frequency-dependent filtering out of extended emission. For all sources where a self-consistent description of the core continuum was not possible, we apply a phenomenological description of the continuum and use averaged dust parameters for the core layers in sources in Sgr B2(M) and Sgr B2(N), respectively. §.§.§ Errors of continuum parameters The errors of the continuum parameters described in Tabs. <ref> - <ref>, are derived by two different methods: The errors of the dust temperatures for spectra without RRLs describe the standard errors of the corresponding means, while the errors of the other parameters were estimated using the [<https://emcee.readthedocs.io/en/stable/>] package 2013PASP..125..306F, which implements the affine-invariant ensemble sampler of 2010CAMCS...5...65G, to perform a Markov chain Monte Carlo (MCMC) algorithm approximating the posterior distribution of the model parameters by random sampling in a probabilistic space. Here, the MCMC algorithm starts at the estimated maximum of the likelihood function, that is the continuum model parameters described in Tabs. 
<ref> - <ref>, and draws 30 samples (walkers) of model parameters from the likelihood function in a small ball around the a priori preferred position. For each parameter we used 500 steps to sample the posterior. The probability distribution and the corresponding highest posterior density (HPD) interval of each continuum parameter are calculated afterwards. Details of the HPD interval are described in 2021A A...651A...9M. In order to get a more reliable error estimation, the errors for the hydrogen column densities and emission measures are calculated on log scale, i.e. these parameters are converted to their log10 values before applying the MCMC algorithm and converted back to linear scale after finishing the error estimation procedure. For most sources, the log10-errors of the hydrogen column densities and spectral indices are tiny, so the linear values are usually on the order of one. Finally, the errors of the ratios γ are calculated using the continuum parameters, where each parameter is reduced (enhanced) by the corresponding left (right) error value. The posterior distributions of the continuum parameters of core A17 in Sgr B2(M) are shown in Fig. <ref>; the distributions for the other cores are shown in appendix <ref>. For all histograms we find a more or less unimodal distribution, that is, only one best fit within the given parameter ranges. A schematic example of this sampling setup is given below.
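The following sketch shows how such a sampling run can be set up with the emcee package. The log-probability, prior ranges and toy data are placeholders chosen only for illustration; they are not the actual likelihood or parameter ranges used in our fits.

```python
import numpy as np
import emcee

# Toy stand-in for the continuum model and data; in the real fit these come
# from the XCLASS continuum description of the corresponding core.
def model_continuum(nu, N_H, beta, EM):
    return 1e-24 * N_H * (nu / 230.0)**beta + 1e-7 * EM * (nu / 230.0)**-2.1

nu = np.linspace(215.0, 273.0, 10)                         # GHz
T_obs = model_continuum(nu, 10**24.5, 1.2, 10**8.5)        # fake "observed" levels
sigma_obs = 0.05 * T_obs + 1e-3

def log_prob(theta, nu, T_obs, sigma_obs):
    logN_H, beta, logEM = theta
    if not (22.0 < logN_H < 26.0 and -1.0 < beta < 3.0 and 6.0 < logEM < 10.0):
        return -np.inf                                     # flat priors within broad ranges
    model = model_continuum(nu, 10**logN_H, beta, 10**logEM)
    return -0.5 * np.sum(((T_obs - model) / sigma_obs)**2)

ndim, nwalkers, nsteps = 3, 30, 500
best_fit = np.array([24.5, 1.2, 8.5])                      # start at the best-fit parameters
p0 = best_fit + 1e-4 * np.random.randn(nwalkers, ndim)     # small ball around the best fit
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(nu, T_obs, sigma_obs))
sampler.run_mcmc(p0, nsteps, progress=True)
samples = sampler.get_chain(discard=100, flat=True)        # posterior samples for the HPD intervals
```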
§ RESULTS AND DISCUSSION
§.§ Results
In both regions, Sgr B2(M) and Sgr B2(N), most of the cores and their local envelopes are dominated by the contribution of thermal dust, where the dense, dust-dominated cores are optically thick toward the center and optically thin in the outer regions, see Figs. <ref> - <ref>. The greater number of H II regions in Sgr B2(M) causes strong thermal free-free emission, which for some cores is the dominant contribution to the total continuum level. However, we also detect ionized gas localized between the sources, as found in the envelope around cores A09 and A26 in Sgr B2(M).
§.§.§ Results of Sgr B2(M)
For the local envelopes around the hot cores in Sgr B2(M) we found only a small variation of the dust parameters, except for those envelopes containing RRLs (A07, A09, A24, and A26). The dust temperatures vary between 46 K (A19) and 88 K (A08) for envelopes whose spectra do not contain RRLs. For envelopes where free-free emission from ionized gas has been found, the dust temperatures are located in the range of 50 K (A24) to 162 K (A26). Additionally, the envelope around the central core A01 shows the highest hydrogen column density of 1.1 × 10^25 cm^-2, while the lowest hydrogen column densities are found for envelopes containing RRLs. For the majority of spectra we find spectral dust indices of β = 0.1 (α = 2.1) (A05, A08, A10, A11, A13, A14, A17, A18, A19, A21, A22, A23, A25), which are associated with optically thick dust emission (see e.g. 2014MNRAS.444.2303S, 2015ApJ...811..118R). For core A03, we derive an index of β = 2.0 (α = 4.0), which indicates optically thin dust emission. Envelopes containing free-free emission in their spectra have spectral dust indices between 0.2 and 0.4, i.e. moderately optically thick dust emission. The contributions of the dust emission from the envelopes to the total continua observed toward the hot cores range between 5.4 % (A03) and 32.4 % (A14). From the RRLs we obtain electron temperatures between 2781 K (A09) and 5817 K (A07), finding no correlation with dust temperatures. The corresponding emission measures are located within a small range between 8.8 × 10^6 pc cm^-6 (A07) and 2.2 × 10^7 pc cm^-6 (A24). The free-free emission in the envelopes contributes almost nothing to the observed continuum, ranging from 0.5 % (A07) to 2.2 % (A26). Eight hot core spectra in Sgr B2(M) contain RRLs. For two other sources (A09, A26) we could not detect RRLs in the corresponding core spectra, although RRLs are found in their envelopes, which may be due to the fact that there is ionized gas localized between the sources and that we used averaged core spectra, see e.g. Fig. <ref>, where possible contributions from individual pixels with a small RRL contribution were averaged out. For example, the polygon used to calculate the averaged core spectrum of A26 contains a small H II region identified by 2015ApJ...815..123D, see Fig. <ref>, but we could not detect any RRLs in the corresponding core spectrum. On the other hand, the identification of RRLs in the core spectrum of A17 in Sgr B2(M) is remarkable, because 2015ApJ...815..123D do not describe an H II region in the vicinity of this source. The hot cores in Sgr B2(M) have dust temperatures ranging from 195 K (A26) to 342 K (A01), but unlike for the envelope spectra, we do not find a general correlation between cores containing RRLs and high dust temperatures, which again may be due to the use of averaged core spectra. Moreover, we obtain the highest hydrogen column density not for the central core A01, as 2017A A...604A...6S did, but for core A03. This discrepancy could be due to the fact that, unlike 2017A A...604A...6S, we decompose the contributions from the envelope and the core, where the envelope around core A01 has the highest hydrogen column density of all envelopes in Sgr B2(M). Furthermore, the continuum of core A01 contains strong free-free emission from ionized gas. In contrast to the envelopes, the majority of core spectra show spectral dust indices above 0.4. Only three cores have spectral indices of β = 0.0 (α = 2.0). Half of all cores (A01, A04, A07, and A08) containing RRLs have spectral indices above 1.0, which are associated with optically thin dust emission. For cores whose spectra do not contain RRLs and for which we could derive a self-consistent description of the continuum level, we obtain approximately the same spectral indices as 2017A A...604A...6S. The obtained electron temperatures T_e show a large variation between 3808 K (A01) and 23779 K (A17). Although electron temperatures above 10000 K are unusual for H II regions (1967ApJ...149L..61Z, 2002ASSL..282.....G), the temperatures agree quite well with those derived by 2006ApJ...653.1226Q, where temperatures between 1850 K and 21810 K were found using high-precision radio recombination line and continuum observations of more than 100 H II regions in the Galactic disk. Taking non-LTE effects into account, 1995ApJ...442L..29M derived an electron temperature of T_e = 23700 K towards a source in Sgr B2(M), which is very close to the LTE temperature we determined for source A17. Additionally, 2006ApJ...653.1226Q identified a Galactic H II region (G220.508-2.8) with an electron temperature of T_e = 21810 K. Furthermore, the error estimation for A17, see Fig. <ref>, shows that the best description of the free-free contribution occurs at the indicated temperature. Although we cannot rule out the possibility that the slope of the continuum level of core A17 is affected by interferometric filtering, this high temperature seems to actually exist.
But it is unclear what mechanism heats the gas to such high temperatures. The electron temperature of an  region in thermal equilibrium is determined by the balance of competing heating and cooling mechanisms. Among others, the electronic temperature T_ e can be influenced by the effective temperature of the ionizing star or by the electron density, which inhibits cooling and increases T_ e by collisional excitation in the high electron density  regions. Additionally, T_ e is affected by the dust grains, which are involved in heating and cooling in complex ways (see e.g. 1986PASP...98..995M, 1991ApJ...374..580B, 1995ApJ...454..807S). Photoelectric heating occurs due to the ejection of electrons from the dust grains, while the gas is cooled by collisions of fast particles with the grains. Furthermore, the electron temperature decreases with distance from the star, because the field of ionizing radiation is attenuated by dust grains. However, the electron temperature also increases when coolants are depleted at the dust grains. According to 1980pim..book.....D, photoionization is not able to heat the material to such a high temperature unless there is a strong depletion of metals in this region compared to other  regions in Sgr B2. Since heavy elements cool photoionized gas, the electron temperatures of  regions are directly related to the abundance of the heavy elements: A low electron temperature T_ e corresponds to a higher heavy element abundance due to the higher cooling rate and vice versa. Another possibility is that the heating is caused by a non-equilibrium situation at the edge of the expanding  region. The electronic temperatures of the other cores correspond quite well to those obtained by 1993ApJ...412..684M, suggesting that the metal abundance in most cores of Sgr B2 is similar to that of Orion. The corresponding emission measures ranges from 4.7 × 10^8 (pc cm^-6) (A17) to 4.4 × 10^9 (pc cm^-6) (A01). The free-free contributions to the total continuum levels vary between 15.7 % and 75.8 %, where the high contributions are caused by the fact, that seven of the hot cores in Sgr B2(M) contain one or more  regions, see Fig. <ref>, while core A17 was not associated with  regions before, which may be caused by the strong dust contribution, see Fig.<ref>. §.§.§ Results of Sgr B2(N) Similar to Sgr B2(M), we found only a small variation of the dust parameters for the local envelopes around the hot cores in Sgr B2(N). The dust temperatures are located in a range between 35 K (A02) and 86 K (A15) while the column densities vary between 5.1 × 10^23 cm^-2 (A12) and 1.0 × 10^25 cm^-2 (A01). Similar to the envelope spectra around sources in Sgr B2(M), we find spectral dust indices for most of the envelopes (A04, A08, A09, A10, A12, A13, A14, A15, A18, A19, and A20) in Sgr B2(N) of 0.1. The highest dust index of β = 2.5 (α = 4.5) is found for the envelope around core A07. In contrast to Sgr B2(M), we do not find RRLs in the envelope spectra in Sgr B2(N). The dust emissions from the envelopes contribute between 6.6 % (A03) and 71.2 % (A17) to the total continuum level. The strong contribution of the envelope around source A17 shows that the contribution of the envelope is indispensable for a realistic modeling of a sources with low continuum levels. For the hot cores in Sgr B2(N) we derived dust temperatures between 190 K (A09) and 282 K (A14), where the high core dust temperature of source A19 is noteworthy, because this source is not associated with an  region. 
However, we cannot exclude an influence of interferometric filtering on molecular lines used for temperature estimation. The column densities alter between 1.3 × 10^24 cm^-2 (A17) and 2.1 × 10^25 cm^-2 (A01) and the spectral index between β = -0.1 (α = 1.9) (A07) and β = 2.5 (α = 4.5) (A01). Only two cores contain RRLs with electronic temperatures of 9877 K (A01) and 9921 K (A16). The corresponding emission measures are located in a range between 1.0 × 10^8 (pc cm^-6) (A16) and 3.9 × 10^8 (pc cm^-6) (A01), which is almost an order of magnitude lower than the highest emission measure for core A01 in Sgr B2(M). For A01 the free-free contribution is 10.5 %. In the following we present in more detail the results obtained from the fitting towards the cores of Sgr B2(M) and (N). §.§ Physical properties of the continuum sources In Tab. <ref>, we summarize the physical parameters derived from the continuum parameters described above. Here, we compute the (dust and gas) masses for each source determined from the expression 1983QJRAS..24..267H M_ d + g = S_ν· D^2/B_ν (T_ dust^ core) ·κ_ν, where S_ν is the flux density at 242 GHz, D is the distance (8.34 kpc for Sgr B2), B_ν (T_ dust^ core) is the Planck function at a core dust temperature T_ dust^ core, and κ_ν is the absorption coefficient per unit of total mass (gas and dust) density. Assuming a spherical and homogeneous core, we use the following expression to estimate the hydrogen density n_H_2^ core from the hydrogen column density N_H_2^ core n_H_2^ core = N_H_2^ core/d^ core, where d^ core = D ·θ^ core indicates the diameter and θ^ core the corresponding source size of the source. The electron density n_e^ core is computed in a similar way using the derived emission measure EM^ core n_e^ core = √( EM^ core/d^ core), where we assume spherical and homogeneous  regions. The ionized gas mass M_i^ core is estimated using the following expression M_i^ core = n_e^ core·4/3 π (d^ core/2)^3 · m_p, where m_p indicates the proton mass. Finally we calculate the number of ionizing photons per second N_i^ core 2016A A...588A.143S N_i^ core = ∫ n_e^2 (β̃ - β̃_̃1̃) dV, where we assume a Strömgren sphere. Here, β̃ and β̃_̃1̃ are the rate coefficients for recombinations to all levels and to the ground state, respectively. The term (β̃ - β̃_̃1̃) describes the recombination coefficient to level 2 or higher and can be described using the expression derived by 1968ApJ...154..391R, (β̃ - β̃_̃1̃/ cm^3 s^-1) = 4.1 · 10^-10 ( T_e^ core/ K)^-0.8, who approximate the recombination coefficient given by 1959MNRAS.119...81S for electron temperatures T_ e. Unlike other cores containing  regions, cores A15 and A24 in Sgr B2(M) each match more or less in position and size a single  region identified by 2015ApJ...815..123D. In the following, we will estimate the age of these  regions. We start with calculating the initial Strömgren radius (R_ St) of an  region, which is given by 1968ITPA...28.....S: R_ St = ( 3/16 π (β̃ - β̃_̃1̃)·N_i^ core/[n_H_2^ core]^2)^1/3, where N_i^ core indicates the number of ionizing photons per second, Eq.(<ref>), n_H_2^ core the hydrogen density, Eq.(<ref>), and (β̃ - β̃_̃1̃) the recombination coefficient, Eq.(<ref>), respectively. 
Assuming expansion into a homogeneous molecular cloud, the dynamical age of both regions can now be computed by using the following expansion equation (1968ITPA...28.....S, 1980pim..book.....D) t_ exp = 4/7 R_ St/c_s [ (r^ core/R_ St)^7/4 - 1 ], where r^ core = d^ core / 2 describes the radius of the  region and R_ St the Strömgren radius, Eq.(<ref>). In addition, c_s indicates the isothermal sound speed, which is given by 2011piim.book.....D c_ s = √(2 k_B T_e^ core/m_H), where m_H is the hydrogen atomic mass. Additionally, we compute the ratio of the electron and the molecular pressure 2019PASJ...71..128T, P_e/P_M = 2 n_e^ core k_B T_e^ core/n_H_2^ core k_B T_ dust^ core = 2 n_e^ core T_e^ core/n_H_2^ core T_ dust^ core, where n_e^ core represents the electron density, Eq. (<ref>), n_H_2^ core the hydrogen density, Eq.(<ref>), T_ dust^ core the dust, and T_e^ core the electronic temperature, respectively. A ratio greater than one means that the corresponding  region is in the expansion phase, since the pressure of the ionized gas exceeds the pressure of the neutral gas. Compared to 2017A A...604A...6S, we find much lower dust and gas masses, i.e. between 27 % and 57 % of their masses, which is due to our elevated dust temperatures. 2017A A...604A...6S assume a dust temperature of 100 K, while our core dust temperatures range from 190 K to 342 K. However, the mass distribution of the most massive cores is almost unchanged for both regions. In addition, we obtain reduced H_2 volume densities n_ H_2 for most sources in Sgr B2(M) and N. For the central sources, our densities are in the range of 4 % and 153 % of the previous analysis results, while the volume densities for some outlying sources, especially in Sgr B2(M), exceed the results of 2017A A...604A...6S by a factor of up to five (A26). These discrepancies are among others due to the different sizes of the sources used in our analysis compared to those described in 2017A A...604A...6S. Nevertheless, the highest hydrogen density, which we determine in our analysis is in the order of 10^8 cm^-3, which corresponds to 10^6 M_⊙ pc^-3, still one orders of magnitude larger than the typical stellar densities found in super star clusters [e.g. ∼10^5 M_⊙ pc^-3,][]2010ARA A..48..431P. Finally, we calculated the physical parameters of the ionized gas for the sources in which we identified RRLs. In Sgr B2(M), we find RRLs in all sources except A02, for which 2017A A...604A...6S had also found contributions from ionized gas. This is quite different for Sgr B2(N), in which we could find RRLs only for the central source A01. In all other sources which were connected with  regions by 2017A A...604A...6S we could not find RRLs. However, we identify RRLs in A16 that were not associated with ionized gas by 2017A A...604A...6S. The  regions we identified fit very well with the results of 2015ApJ...815..123D for both regions. According to 2015ApJ...815..123D, only sources A01, A10, and A16 in Sgr B2(N) contain  regions. The electron densities n_e vary between 47 % and 211 % of the values derived from 2017A A...604A...6S, with the differences due to the different analysis techniques. In contrast to 2017A A...604A...6S, which found the highest electron density for core A01 in Sgr B2(M), we obtain the highest electron density for core A24, which is associated with a single, bright  region, see Fig. <ref>. All other cores containing RRLs show comparable densities. 
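As a rough numerical illustration of how the quantities discussed in this section follow from the expressions above, the following Python sketch evaluates the mass, densities, ionized-gas properties, Strömgren radius, expansion age and pressure ratio for a single core. Constants and unit conversions are approximate (cgs), the dust opacity is quoted per unit total mass (gas-to-dust ratio of 100), and the function name and argument list are ours, not the authors' actual pipeline.

```python
import numpy as np

# cgs constants (rounded)
h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10
m_p = 1.673e-24
pc = 3.086e18

def planck(nu, T):
    """Planck function B_nu(T) in cgs units."""
    return 2.0*h*nu**3/c**2 / np.expm1(h*nu/(k_B*T))

def core_parameters(S_nu_jy, nu_ghz, T_dust, theta_arcsec, N_H2, EM, T_e,
                    D_pc=8340.0, kappa=1.11/100.0):
    """Evaluate the physical-parameter expressions above for one core.
    kappa is the dust mass opacity per unit total (gas + dust) mass."""
    nu = nu_ghz * 1e9
    D = D_pc * pc
    S_nu = S_nu_jy * 1e-23                        # erg s^-1 cm^-2 Hz^-1
    M = S_nu * D**2 / (planck(nu, T_dust) * kappa)            # dust + gas mass [g]
    d = D * theta_arcsec / 206265.0                           # core diameter [cm]
    n_H2 = N_H2 / d                                           # H2 volume density [cm^-3]
    n_e = np.sqrt(EM * pc / d)                                # electron density [cm^-3]
    M_i = n_e * 4.0/3.0 * np.pi * (d/2.0)**3 * m_p            # ionized gas mass [g]
    alpha_B = 4.1e-10 * T_e**-0.8                             # recombination coefficient [cm^3 s^-1]
    N_i = n_e**2 * alpha_B * 4.0/3.0 * np.pi * (d/2.0)**3     # ionizing photons per second
    R_St = (3.0 * N_i / (16.0*np.pi*alpha_B*n_H2**2))**(1.0/3.0)   # initial Stroemgren radius [cm]
    c_s = np.sqrt(2.0 * k_B * T_e / m_p)                      # isothermal sound speed [cm s^-1]
    t_exp = 4.0/7.0 * R_St/c_s * ((d/2.0/R_St)**1.75 - 1.0)   # dynamical age [s]
    P_ratio = 2.0 * n_e * T_e / (n_H2 * T_dust)               # electron / molecular pressure
    return M, n_H2, n_e, M_i, N_i, R_St, t_exp, P_ratio
```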
For cores A15 and A24 in Sgr B2(M) we also determined the ages of the corresponding H II regions (A15: 3100 yr, A24: 7000 yr), which are comparable to the results obtained by 2022A A...666A..31M. Additionally, the gas pressure ratio of the observed ionized gas and the observed ambient molecular gas is close to unity for A24, which indicates that the pressure-driven expansion of the corresponding H II region is coming to a halt, while for core A15 we find a remarkable pressure-driven expansion of this H II region.
§ CONCLUSIONS
Many of the hot cores identified by 2017A A...604A...6S include large amounts of dust. In addition, some cores contain one or more H II regions. In this work, which is the first of two papers on the complete analysis of the full spectral line surveys towards these hot cores, we have quantified the dust and, if contained, the free-free contributions to the continuum levels. In contrast to previous analyses, we derived the corresponding parameters not only for each core but also for their local surrounding envelopes and determined their physical properties. Especially for some outlying sources, the contributions of these envelopes are not negligible. In general, the distribution of RRLs we found in the core spectra fits well with the distribution of H II regions described by 2015ApJ...815..123D. Only for core A02 in Sgr B2(M) and A10 in Sgr B2(N) could we not identify RRLs in the corresponding spectra, although H II regions are contained in these sources. Additionally, we found RRLs in core A17 of Sgr B2(M), despite the fact that no H II region is known to be nearby. The average dust temperature for envelopes around sources in Sgr B2(M) is 73 K, while in Sgr B2(N) we obtain only 59 K, which may be caused by the larger number of H II regions in Sgr B2(M) compared to N. For the cores we obtain average dust temperatures around 236 K (Sgr B2(M)) and 225 K (Sgr B2(N)) and see no correlation between the occurrence of RRLs and enhanced dust temperatures, although one would expect this in the presence of ionized gas. For the average hydrogen column densities we get 2.5 × 10^24 cm^-2 (2.6 × 10^24 cm^-2) for the envelopes and 7.8 × 10^24 cm^-2 (6.1 × 10^24 cm^-2) for the cores in Sgr B2(M) and N, respectively. The derived electron temperatures are located in a range between 2781 K and 9921 K, while two cores show electron temperatures of 15214 K and 23779 K. The highest emission measures in Sgr B2(M) are found in cores A01 and A24, while the two cores in Sgr B2(N) containing RRLs have almost the same emission measure. In Sgr B2(M), the three inner sources are the most massive, whereas in Sgr B2(N) the innermost core A01 dominates all other sources in mass and size. This analysis of the dust and ionized gas contributions to the continuum emission enables a full detailed analysis of the spectral line content, which will be presented in a following paper (Möller et al. in prep.). This work was supported by the Deutsche Forschungsgemeinschaft (DFG) through grant Collaborative Research Centre 956 (subprojects A6 and C3, project ID 184018867) and by BMBF/Verbundforschung through the projects ALMA-ARC 05A14PK1 and ALMA-ARC 05A20PK1. A.S.M. acknowledges support from the RyC2021-032892-I grant funded by MCIN/AEI/10.13039/501100011033 and by the European Union `Next GenerationEU'/PRTR, as well as the program Unidad de Excelencia María de Maeztu CEX2020-001058-M. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2013.1.00332.S.
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. aa § ERROR ESTIMATION §.§ Error estimation of continuum parameters for cores in Sgr B2(M) Corner plots corner showing the one and two dimensional projections of the posterior probability distributions of the continuum parameters of each core in Sgr B2(M). On top of each column the probability distribution for each free parameter is shown together with the value of the best fit and the corresponding left and right errors. The left and right dashed lines indicate the lower and upper limits of the corresponding highest posterior density (HPD) interval, respectively. The dashed line in the middle indicates the mode of the distribution. The blue lines indicate the parameter values of the best fit. The plots in the lower left corner describe the projected 2D histograms of two parameters and the contours the HPD regions, respectively. In order to get a better estimation of the errors, we determine the error of the hydrogen column density and the emission measure on log scale and use the velocity offset (v_ off) related to the source velocity of v_ LSR = 64 km s^-1. §.§ Error estimation of continuum parameters for cores in Sgr B2(N) Corner plots corner showing the one and two dimensional projections of the posterior probability distributions of the continuum parameters of each core in Sgr B2(N). On top of each column the probability distribution for each free parameter is shown together with the value of the best fit and the corresponding left and right errors. The left and right dashed lines indicate the lower and upper limits of the corresponding highest posterior density (HPD) interval, respectively. The dashed line in the middle indicates the mode of the distribution. The blue lines indicate the parameter values of the best fit. The plots in the lower left corner describe the projected 2D histograms of two parameters and the contours the HPD regions, respectively. In order to get a better estimation of the errors, we determine the error of the hydrogen column density and the emission measure on log scale and use the velocity offset (v_ off) related to the source velocity of v_ LSR = 64 km s^-1.
http://arxiv.org/abs/2307.07400v1
20230714152235
Contextual behavioural Metrics (Extended Version)
[ "Ugo Dal Lago", "Maurizio Murgia" ]
cs.FL
[ "cs.FL", "cs.PL" ]
We introduce contextual behavioural metrics (CBMs) as a novel way of measuring the discrepancy in behaviour between processes, taking into account both quantitative aspects and contextual information. This way, process distances by construction take the environment into account: two (non-equivalent) processes may still exhibit very similar behaviour in some contexts, e.g., when certain actions are never performed. We first show how CBMs capture many well-known notions of equivalence and metric, including Larsen's environmental parametrized bisimulation. We then study compositional properties of CBMs with respect to some common process algebraic operators, namely prefixing, restriction, non-deterministic sum, parallel composition and replication.
§ INTRODUCTION
Simulation and bisimulation relations are often the methodology of choice for reasoning relationally about the behaviour of systems specified in the form of LTSs. On the one hand, most of them can be proved to be congruences, therefore enabling modular equivalence proofs. On the other hand, not being based on any universal quantification (e.g. on tests or on traces), they enable simpler relational arguments, especially when combined with enhancements such as the so-called up-to techniques <cit.>. The outcome of relational reasoning as supported by (bi)simulation relations is inherently binary: two programs or systems are either (bi)similar or not. As an example, all pairs of non-equivalent elements have the same status, i.e. the bisimulation game gives no information on the degree of dissimilarity between non-equivalent states. This can be a problem in those contexts, such as that of probabilistic systems, in which non-equivalent states can give rise to completely different but also extremely similar behaviours. This led to the introduction of a generalization of bisimulation relations, the so-called bisimulation metrics <cit.>, which rather than being binary relations on the underlying set of states S, are binary maps from S to a quantale (most often of real numbers) satisfying the axioms of (pseudo)metrics. In that context, the bisimulation game becomes inherently quantitative: the defender aims at proving that the two states at hand are close to each other, while the attacker tries to prove that they are far apart. The outcome of this game is a quantity representing a bound not only on any discrepancy in the immediate behaviour of the two involved states (e.g. the fact that some action is available in s but not in t), but also providing some information about differences which will only show up in the future, all this regardless of the actions chosen by the attacker. In this sense, therefore, bisimulation metrics condense a great deal of information in just one number. Notions of bisimulation metrics have indeed been defined for various sequential and concurrent calculi (see, e.g., <cit.>), allowing a form of metric reasoning on program behaviour. But when could any such technique be said to be compositional? This amounts to being able to derive an upper bound on the distance δ(C[t],C[s]) between two programs in the form C[t] and C[s] from the distance δ(s,t) between s and t.
Typically, the latter is required to be itself an upper bound on the former, giving rise to non-expansiveness as a possible generalization of the notion of a congruence. This, however, significantly restricts the class of environments C to which the aforementioned analysis can be applied, since being able to amplify differences is a very natural property of processes. Indeed, an inherent tension exists between expressiveness and compositionality in metric reasoning <cit.>. But there is another reason why behavioural metrics can be seen as less informative than they could be. As already mentioned, any number measuring the distance between two states s and t implicitly accounts for all the possible ways of comparing s and t, i.e. any context. Often, however, only contexts that act in a certain very specific way could highlight large differences between s and t, while others might simply see s and t as very similar, or even equivalent. This further dimension is abstracted away in compositional metric analysis: if the distance between s and t is very high, but C does not “take advantage” of such large differences, C[s] and C[t] should be close to each other, but are dubbed being far away from each other, due to the aforementioned abstraction step. It is thus natural to wonder whether metric analysis can be made contextual. In the realm of process equivalences, this is known to be possible through, e.g. Larsen's environmental parametrized bisimulation <cit.>, but not much is known about contextual enhancements of bisimulation metrics. Other notions of program equivalence, like logical relations or denotational semantics, have been shown to have metric analogues <cit.>, which in some cases can be made contextual <cit.>. In this paper, we introduce the novel notion of contextual behavioural metric (CBM in the following) through which it is possible to fine-tune the abstraction step mentioned above and which thus represents a refinement over behavioural metrics. In CBMs, the distance between two states s,t of an LTS is measured by an object d having a richer structure than that of a number. Specifically, d is taken to be an element of a metric transition system, in which the contextual and temporal dimensions of the differences can be taken into account. In addition to the mere introduction of this new notion of distance, our contributions are threefold: * On the one hand, we show that metric labelled transition systems (MLTSs in the following), namely the kind of structures meant to model differences, indeed form a quantale, this way allowing us to prove that CBMs are generalized metrics. This is in Section <ref>. * On the other hand, we prove that some well-known methodologies for qualitative and quantitative relational reasoning on processes, namely (strong) bisimulation relations and metrics, and environmental parametrized bisimulations <cit.>, can all be seen as CBMs where the underlying MLTS corresponds to the original quantale. This is in Section <ref>. * Finally, we prove that CBMs have some interesting compositional properties, and that this allows one to derive approximations to the distance between processes following their syntactic structure. This is in Section <ref>. Many of the aforementioned works about behavioural metrics are concerned with probabilistic forms of LTSs. In this work, instead, we have deliberately chosen to focus on usual nondeterministic transition systems. 
On the one hand, the quantitative aspects can be handled through the so-called immediate distance between states, see below. On the other hand, it is well known that probabilistic transition systems can be seen as (non)deterministic systems whose underlying reduction relation is defined between state distributions. Focusing on ordinary LTSs has the advantage of allowing us to concentrate our attention on those aspects related to metrics, allowing for a separation of concerns. This being said, we are confident that most of the results described here could hold for probabilistic LTSs, too. Proofs and other details are omitted due to space constraints, and can be found online <cit.>. § WHY THE ENVIRONMENT MATTERS The purpose of this section is to explain why purely numerical quantales do not precisely capture differences between states of an LTS and how a more structured approach to distances can be helpful to tackle this problem. We will do this through an example drawn from the realm of higher-order programs, the latter seen as states of the LTS induced by Abramsky's applicative bisimilarity <cit.>. Let us start with a pair of programs written in a typed λ-calculus, both of them having type (𝙽𝚊𝚝→𝙽𝚊𝚝)→𝙽𝚊𝚝, namely M_2 and M_4, where M_n≜λ x.xn. These terms can indeed be seen as states of an LTS, whose relevant fragment is the following one: < g r a p h i c s > Labelled transitions correspond to either parameter passing (each actual parameter being captured by a distinct label V) or evaluation. It is indeed convenient to see the underlying LTS as a bipartite structure whose states are either computations or values. The two states 𝖤(V 2) and 𝖤(V 4) are the natural number values to which V 2 and V 4 evaluate, respectively. Clearly, the latter are not to be considered equivalent whenever different, and this can be captured, e.g., by either exposing the underlying numerical value through a labelled self-transition or by stipulating that base type values, contrary to higher-order values, can be explicitly observed, thus being equivalent precisely when equal. If one plays the bisimulation game on top of this LTS, the resulting notion of equivalence turns out to be precisely Abramsky's applicative bisimilarity. For very good reasons, M_2 and M_4 are dubbed as not equivalent: they can be separated by feeding, e.g. V=λ x.x to them. But now, how far apart should M_2 and M_4 be? The answer provided by behavioural metrics consists in saying that M_2 and M_4 are at distance at most x∈ℝ^∞_+ iff x is an upper bound on the differences any adversary observes while interacting with them, independently on how the adversary behaves. As a consequence, if the underlying λ-calculus provides a primitive for multiplication, then it is indeed possible to define values of the form V_n≜λ x.x× n for every n, allowing the environment to observe arbitrarily large differences of the form |𝖤(V_n 2)-𝖤(V_n 4)| = | 2n-4n| =2n. In other words, the distance between M_2 and M_4 is +∞. The possibility of arbitrarily amplifying distances is well-known, and can be tackled, e.g., by switching to a calculus in which all functions are non-expansive, ruling out terms such as V_n where n>1. This makes the distance between M_2 and M_4 is indeed 2, because no input term V can “stretch” the distance between 2 to 4 to anything more than 2. This is what happens, e.g., in 𝖥𝗎𝗓𝗓 <cit.>. But is this the end of the story? Are we somehow losing too much information by stipulating that M_2 and M_4 are, say, at distance 2? 
Actually, the only moment in which the environment observes the state with which it is interacting is at the end of the dialogue, namely after feeding it with a function V:𝙽𝚊𝚝→𝙽𝚊𝚝. If, for example, the environment picks V_q≜λ x.(x-3)^2+2, then the observed difference is 0, while if it picks V_l≜λ x.x+2 then the observed distance is maximal, i.e. 2. In other words, the observed distance strictly depends on how the environment behaves and should arguably be parametrised on it. This is indeed the main idea behind Larsen's environmental parametrised bisimulation, but also behind our contextual behavioural metrics. In the latter, differences can be faithfully captured by the states of another labelled transition system, called a metric labelled transition system, in which observed distances are associated to states. In our example, the difference between M_2 and M_4 is the state s of a metric labeled transition system whose relevant fragment is: < g r a p h i c s > Crucially, while s,t_s,t_l,u_l are all mapped to the null observable difference, u_q is associated to 2. This allows to discriminate between those environments which are able to see large differences from those which are not. This is achieved by allowing differences to be modelled by the states of a transition system themselves. Using a categorical jargon, it looks potentially useful, but also very tempting, to impose the structure of a coalgebra to the underlying space of distances rather than taking it as a monolithical, numeric, quantale. The rest of this paper can be seen as an attempt to make this idea formal. § CONTEXTUAL BEHAVIOURAL METRICS, FORMALLY This section is devoted to introducing contextual behavioural metrics, namely the concept we aim at studying in this paper. We start with the definition of quantale <cit.>, the canonical codomains of generalized metrics <cit.>. The notion of quantale used in this paper is that of unital integral commutative quantale: A quantale is a structure = (Q,,,,,) such that ,: 2^Q → Q, the two objects , are in Q, and is a binary operation on Q, where: * (Q,,,,) is a complete lattice; * (Q,,) is a commutative monoid; * for every ∈ Q and every A ⊆ Q it holds that A = ∈ A. We write when = {,}. Generalized metrics are maps which associate an element of a given quantale to each pair of elements. As customary in behavioural metrics, we work with pseudometrics, in which distinct elements may be at minimal distance: A pseudometric over a set A with values in a quantale is a map m: A × A → satisfying: * for all a ∈ A: m(a,a) =; * for all a,b ∈ A: m(a,b) = m(b,a); * for all a,b,c ∈ A: m(a,c) m(a,b) m(b,c). In the rest of this paper, we refer to pseudometrics simply as metrics. It is now time to introduce our notion of a process, namely of the computational objects we want to compare. We do not fix a syntax, and work with abstract labelled transition systems (LTSs in the following). In order to enable (possibly quantitative) metric reasoning, we equip states of our LTS with an immediate metric , namely a metric measuring the observable distance between two states. We define a -LTS as a quadruple (,,,) where: * is the set of processes; * is the set of labels; * ⊆ ×× is the transition relation; * : ×→ is a metric. 
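To make the definition concrete, the following Python sketch (our own illustration, not part of the paper's formal development) encodes the applicative fragment of Section 2 as a Γ-LTS over the extended non-negative reals. The state and label names, the choice V = λx.x, and the omission of the type-mismatch case in the immediate metric are our simplifying assumptions.

import math

# A Gamma-LTS over the quantale of extended non-negative reals:
# states, labels, a transition relation, and an immediate metric iota.
class GammaLTS:
    def __init__(self, transitions, iota):
        # transitions: dict mapping state -> set of (label, target) pairs
        self.transitions = transitions
        self.iota = iota

    def successors(self, state, label):
        return {t for (a, t) in self.transitions.get(state, set()) if a == label}

    def labels(self, state):
        return {a for (a, _) in self.transitions.get(state, set())}

# The applicative fragment of Section 2: M_n = \x. x n, probed with V = \x. x.
V = "V"
transitions = {
    "M2": {(V, "V 2")}, "M4": {(V, "V 4")},
    "V 2": {("eval", "2")}, "V 4": {("eval", "4")},
    "2": set(), "4": set(),
}

def iota(p, q):
    # Immediate distance: numeral values are compared by absolute difference,
    # every other pair of states is at distance 0 (types are ignored here).
    try:
        return abs(int(p) - int(q))
    except ValueError:
        return 0.0

applicative = GammaLTS(transitions, iota)
print(applicative.iota("2", "4"))   # 2
print(applicative.labels("M2"))     # {'V'}
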
The example LTS from Section <ref> should be helpful in understanding why the metric is needed: terms and values of distinct types are at maximal immediate distance, while terms and values of the same type are at minimal distance, except when the type is 𝙽𝚊𝚝, whereas the immediate distance is just the absolute value between the two numbers. We now need to introduce another notion of transition system, this time meant to model differences between computations. This kind of structure can be interpreted as a quantale, and will form the codomain of Contextual Bisimulation Metrics. Intuitively, a Metric LTS is an LTS endowed with a function from states to a quantale . This allows to keep track of immediate distance changes. Let us start with the notion of a pre-metric LTS: A pre-metric -LTS is a quadruple =(,,,) where: * is the set of states; * is the set of labels; * ⊆ ×× is the transition relation; * :→ is a function which assigns values in to states in . A pre-metric LTS does not necessarily form a quantale, because does not necessarily have, e.g. the structure of a monoid or a lattice. In order to be proper codomains for metrics, pre-metric LTSs need to be endowed with some additional structure, which will be proved to be enough to form a quantale. A metric -LTS = (,,,) is a pre-metric -LTS endowed with two elements [],[] ∈, and three operators [],[]:2^→ and []: ×→, where the conditions hold for all possible values of the involved metavariables: =15pt [ [] [] = [] [] = []; ∀∈: [] [] [] = []; [] [] ∃∈: [] [] = [] ∈; [] [] ∃: = [] and [] = [] ∈; ∃ surjective f: →: ∀∈: [] f(); [1] [] [2] [] = [1] [] [2] for some [1],[2] ([1] [] [2]) = [1][] [2]; such that: [1] [] [1] and [2] [] [2] ] Axioms ensures that [] allows every possible behaviour (somehow capturing every context), and dually [] disallows every behaviour. [] allows all and only the behaviours in (union of contexts), while [] enables all and only the behaviours allowed by every element in (intersection of contexts). The sum [] has a behaviour similar to [], but it is binary and differs on the value returned by . Due to the requirements about joins and meets over potentially infinite sets, MLTSs are not easy to define directly. We argue, however, that an MLTS can be defined as the closure of a pre-MLTS. If the underlying quantale is boolean, one can get the desired structure by considering 2^2^X, where X is the carrier of the given pre-MLTS: it suffices to take subsets in “conjunctive” normal form. For the general case, the class ∪_n∈ℕ2^^2^X^n times, which is indeed a set in ZFC, suffices. The axiomatics above is still not sufficient to give the status of a quantale to -MLTSs. The reason behind all this is that there could be equivalent but distinct states in . We then define a preorder [] on the states of any MLTS : A relation ⊆× is a []-preserving simulation[Technically, it is a reverse simulation. We call it simulation for brevity.] if, whenever [1] [2], it holds that: * [1] [] [2]; * ∀∈: [2] [] [2] ∃[1]: [1] [] [1] and [1] [2]. We define [] ⊆× as the largest []-preserving simulation. We use the notation [] for mutual []-preserving simulation, that is [] = [] ∩[]. We say that is a lower (resp. upper) bound of ⊆ if [] (resp. []) for all ∈. The forthcoming result states that, in general, MLTSs almost form quantales. We can recover a proper quantale by quotienting modulo []. Let = (,,,) be a MLTS. Then: * [] is a preorder relation; * For all : [] [] and [] []; * For all ⊆: [] is a lower bound of , and if is a lower bound of then [] []. 
* For all ⊆: [] is an upper bound of , and if is an upper bound of then [] []. * For all ∈, ⊆: [] [] [] [] ∈. * For all ∈: [] []. * For all ,∈: [] []. * For all ,,∈: ([] ) [] [] [] ([] ). * If [] is a partial order relation, then is a quantale. We only show item 4, which is the more involved. So, define ⊆× as follows: = (,[] )⊆, ∈ We now show that is a []-preserving simulation. So, let []. Condition [] follows from the fact that ∈. For <ref>, suppose [] [] [1]. By definition of [], we have that [1] = [] for some ⊆ such that for all ∈ there is ∈ such that []. Therefore [] for some ∈, and hence []. Since [] is the largest []-preserving simulation, we can conclude that [] is an upper bound of for all ⊆, as required. It remains to show that, for all ⊆: [] is minimal among the upper bounds of . So, define ⊆× as follows: = ([] ,)⊆, upper bound of We wish to prove that is a []-preserving simulation. So, let ⊆ and let be an upper bound of . Condition [] [] holds by definition. For <ref>, suppose []. Since is an upper bound, we have that for all ∈ there is [] such that []. Then define f so that it assigns one such to each ∈, and let be the image of f. We have that [] [] []. Since is an upper bound of , we have that [], as required. Unless stated otherwise, we assume that every MLTS we work with is a quantale. Let (,,,) and = (,,,) be, respectively, a -LTS and a -MLTS. Then, a map : ×→ is a contextual bisimulation map if: * (,) [] (,); * if (,) [], then the following holds: * [] ∃: [] and (,) []; * [] ∃: [] and (,) []. We say that is a contextual bisimulation metric (CBM) if is both a contextual bisimulation map and a metric. We define the contextual bisimilarity map as follows: (,) = [](,) is a contextual bisimulation map The following result states that the contextual bisimilarity map is well behaved, being a contextual bisimulation map upper bounding any other such map: is a contextual bisimulation map. Moreover, for all contextual bisimulation maps , and processes ,, it holds that (,) [] (,). We start by showing that is a contextual bisimulation map. For condition (,) [] (,), notice that (,) = [] (,) is a contextual bisimulation map = [] (,) is a contextual bisimulation map. Since (,) [] (,) for all , we have the thesis. For the remaining condition, suppose (,) []. By definition of metric and of [], we have that (,) [] for some contextual bisimulation map , from which follows the thesis. Minimality follows directly from the definition of and of []. We still do not know whether is a metric. We need a handy characterization of for that. *A Useful Characterization of CBMs. Larsen's environment parametrized bisimulations <cit.> is a variation on ordinary bisimulation in which the compared states are tested against environments of a specific kind, this way giving rise to a ternary relation. We here show that CBMs can be captured along the same lines. A formal comparison between CBMs and Larsen's approach is deferred to <ref>. Let (,,,) and (,,,) be, respectively, a -LTS and a -MLTS. An -indexed family of relations {_} such that _⊆× is said to be a parametrized bisimulation iff, whenever _, it holds that (,) [], and [] implies: * [] ∃: [] and _; * [] ∃: [] and _. Parametrized bisimilarity is the largest parametrized bisimulation, namely the largest family {∼_} such that ∼_ if _ for some parametrized bisimulation {_}. 
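On finite, image-finite systems the largest such family can be computed by a simple fixpoint iteration, which the sketch below makes explicit. The dictionary-based interface, the assumption that every state (including terminated ones) appears as a key, and the reading of ⊑ as "distance at most" via an order predicate (for instance order = lambda x, y: x <= y over extended reals) are our own; the paper itself does not spell out an algorithm.

from itertools import product

def succ(trans, s, a):
    return {t for (lab, t) in trans.get(s, set()) if lab == a}

def labs(trans, s):
    return {lab for (lab, _) in trans.get(s, set())}

def parametrized_bisimilarity(proc, iota, env, value, order):
    """Naive greatest-fixpoint computation of the triples (p, q, eps) with p ~_eps q.
    proc, env: transition dicts {state: {(label, target), ...}};
    iota(p, q): immediate distance; value(e): quantale element of environment state e;
    order(x, y): stand-in for the quantale order x "below" y."""
    triples = {(p, q, e)
               for p, q, e in product(proc, proc, env)
               if order(iota(p, q), value(e))}
    changed = True
    while changed:
        changed = False
        for (p, q, e) in list(triples):
            violated = False
            for a in labs(env, e):                      # moves allowed by the environment
                for e2 in succ(env, e, a):
                    # every a-move of p must be matched by q, and vice versa
                    ok_pq = all(any((p2, q2, e2) in triples for q2 in succ(proc, q, a))
                                for p2 in succ(proc, p, a))
                    ok_qp = all(any((p2, q2, e2) in triples for p2 in succ(proc, p, a))
                                for q2 in succ(proc, q, a))
                    if not (ok_pq and ok_qp):
                        violated = True
            if violated:
                triples.discard((p, q, e))
                changed = True
    return triples
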
The fact that {∼_} is indeed a parametrized bisimulation holds because parametrized bisimulations are closed under unions (defined point-wise), something which can be proved with a simple generalisation of standard techniques <cit.>. Parametrized bisimilarity turns out to be strongly related to , this way providing a simple proof technique that will be heavily used in the rest of the paper. The following lemma provides a monotonicity property for parametrized bisimilarity, which will be very useful in the following: If [] and ∼_, then ∼_. It suffice to prove that [] = (,)∃[] : ∼_ is a parametrized bisimulation, which follows easily by the definition of []. Parametrized bisimilarity turns out to be strongly related to , this way providing a simple proof technique that will be heavily used in the rest of the paper. For all ,,, it holds that (,) [] ∼_. For the ⇒ direction, define the -indexed family of relations _⊆× as follows: _ (,) [] It suffice to show that is a parametrized bisimulation. So, let _. Condition (,) [] follows from the fact that (,) []. Now, suppose that [] and []. Since [], we have that (,) [] []. Then [] for some such that (,) [] [], that is _[]. The case for a move is similar. For the ⇐ direction, it suffice to show that , defined below, is a contextual bisimulation map. (,) = [] ∼_ Condition (,) [] (,) follows from the fact that (,) is less than for all such that ∼_. So, suppose (,) [] and []. Then [] for some such that ∼_. Hence [] for some such that ∼_. Therefore (,) []. The case for a move is similar. We are finally ready to state that satisfies the axioms of a metric. The contextual bisimulation map is a metric. We only show triangle inequality, that is ([1],[3]) [] ([1],[2]) [] ([2],[3]) for all [1],[2],[3]. So, define _ as follows: [1] _[3] ∃[2]: = ([1],[2]) [] ([2],[3]) By <ref> it suffice to prove that _ is a parametrized bisimulation. So, suppose [1] _[3]. Condition ([1],[3]) [] (([1],[2]) ([2], [3])) follows from the fact that is a metric and hence satisfies triangle inequality. For the remaining condition, suppose [] and []. Then = [1] [] [2] for some [1],[2] such that [1] [] [1] and [2] [] [2]. Thus [2] [] [2] for some [2] such that ([1],[2]) [] [1]. This in turn implies [3] [] [3] for some [3] such that ([2],[3]) [] [2]. Therefore [1] [] [3], as required. The case for [3] moves is similar. § SOME RELEVANT EXAMPLES This section is devoted to showing how well-known and heterogeneous notions of equivalence and distance can be recovered as CBMs for appropriate quantales and MLTSs. §.§ Strong Bisimilarity as a CBM We start recalling that strong bisimilarity <cit.> is the largest strong bisimulation relation, that is a relation ⊆× on the states of a plain LTS (,,) such that implies: * [] ∃: [] and; * [] ∃: [] and. The first thing we have to do to turn strong bisimilarity into a CBM is to define, given such an LTS (,,), a canonical immediate distance on the boolean quantale , which we call the canonical distance: (,) = if ∀: [] [] otherwise That is, the immediate distance is precisely when the processes expose the same labels. Notice that immediate distance is not affected by possible future behavioural differences. Any LTS like this is said to be a boolean LTS. The boolean quantale can be turned very naturally into a MLTS: let be ({[],[]},,,) where the transitions are self loops [] [] [] for every ∈, and just associates [] to [] and [] to []. Given any boolean LTS, is the characteristic function of bisimilarity, i.e. (,) = [] ∼. 
To see why, define: (,)= [] if ∼ [] otherwise Notice that is a contextual bisimulation map. Indeed, condition (,) [] (,) surely holds: if (,) = [] it must be ≁ and hence (,) = []. For the other condition, suppose (,) []. Then (,) = = []. Therefore ∼, and hence if [] there is a matching -transition of , and viceversa. From the above, together with the fact that is minimal, it follows that: ∼(,) = [] For the reverse implication, define ⊆× as follows: (,) = [] It suffice to show that is a bisimulation relation. So, suppose and []. Since (,) = [], we have that (,) [] [] and hence [] for some such that (,) = []. Therefore . The case for a move is similar. §.§ Behavioural CBMs Most behavioural metrics from the literature are defined on probabilistic transition systems <cit.>, differently from CBMs. Some probabilistic behavioural metrics can still be captured in our framework by using as states of the process LTS (sub)distributions of states of the original PLTS, e.g. the distribution based metric in <cit.>. Non-probabilistic behavioural metrics exist, e.g., the so-called “branching metrics” <cit.>, which are indeed instances of behavioural metrics as defined below. Notice that our definition has a generic quantale as its codomain, while usually behavioural metrics take values in the interval ℝ_[0,1]. Let us first recall what we mean by a behavioural metric here. A metric M: ×→ is said to be a behavioural metric if, for all pairs of states ,, it holds that (,) [] M(,) and, whenever M(,) [] [], we have that: * [] ∃: [] and M(,) [] M(,); * [] ∃: [] and M(,) [] M(,). Intuitively, behavioural metrics can be seen as quantitative variations on the theme of a bisimulation: they associate a value from a quantale to each pair of processes (rather than a boolean), they are coinductive in nature. Moreover, they are based on the bisimulation game, i.e., any move of one of the two processes needs to be matched by some move of the other, at least when their distance is not maximal. Our definition is similar to the one in <cit.>. However, many behavioural metrics in literature deal with non-determinism through the Hausdorff lifting, that is by stipulating that (,) [] M(,) and for all : M(,) [] [[] ] [[] ] M(,) and M(,) [][[] ] [[] ] M(,) The two notions are equivalent if the process LTS is image-finite and is totally ordered, both conditions are often assumed to be true in the literature. The following lemma states the above formally. Over totally ordered quantales and for image-finite process LTSs, behavioural metrics and Hausdorff metrics coincide. We first show that behavioural metrics (BM for short) are Hausdorff metrics (HM for short). So, let M be a BM. The requirement (,) [] M(,) holds by definition of BM. Let's focus to the requirement M(,) [] [[] ] [[] ] M(,). First notice that it holds trivially when M(,) =. Otherwise, it suffice to show that M(p,q) [] [[] ] M(,) whenever []. So, suppose []. Then, by definition of BM, there is such that [] and M(,) [] M(,) = d. The thesis follows as d [] [[] ] M(,) by definition of . The requirement M(,) [] [[] ] [[] ] M(,) follows by a similar argument. We now show that HMs are BMs, provided the process LTS is image-finite and is a total order. So, let M be a HM. The requirement (,) [] M(,) holds by definition of HM. We consider the case M(,) [] [] as otherwise the thesis is trivial. So, let []. By definition of HM, we have that M(,) [] [[] ] M(,). 
Since S = M(,)[] is finite (by image-finiteness) and totally ordered, we have that [[] ] M(,) ∈ S, from which the thesis follows. We now show how to interpret as a MLTS. Morally, we just fix as the set of states, the identity as , and self loops as transitions. This however violates the requirement that the top element has no outgoing transitions. We therefore add the element []. Notice that we still need [], as it ensures that is closed under []. Let = (,,,) where = ⊎{[]}, is as in the underlying process LTS, transitions are the self loops of the form [] for every ∈, and ∈, is the identity on , and ([]) = []. Notice that, when [] [], we have that: [] [] . We also have that for every behavioural metric there is a CBM that “agrees” on the quantitative distance between processes. This intuition is formalized as follows: Let M be a behavioural metric, and let m_M be defined as: m_M(,) = M(,) if M(,) [] [] [] otherwise Then, m_M is a CBM and for every , it holds that m_M(,) = M(,). The fact that m_M(,) = M(,) follows immediately from the definition. It remains to show that m_M is really a CBM. We first show that it is a contextual bisimulation map. Notice that (,) [] m_M(,) follows immediately from the definitions of m_M and behavioural metrics. So, suppose m_M(,) [] and []. Then, m_M(,) = M(,) [] [] and m_M(,) =. Therefore [] for some such that M(,) [] M(,). Since m_M(,) = M(,) [] [], we have that m_M(,) [] m_M(,) =, as required. The case for a move is similar. It remains to show that m_M is a metric. We only show triangle inequality, that is: m_M([1],[3]) [] m_M([1],[2]) [] m_M([2],[3]). If m_M([1],[3]) [] [], the thesis follows from the fact that M is a metric and <ref>. If m_M([1],[3]) = [], it must be m_M([1],[3]) = []. Since M is a metric, we have that m_M([1],[2]) = [] or m_M([2],[3]) = []. Then m_M([1],[3]) = [] or m_M([2],[3]) = []. In both cases m_M([1],[2]) [] m_M([2],[3]) = [], as required. The agreement of M and m_M holds by definition. The fact that m_M is a CBM, instead is a consequence of the fact that transitions preserve M distances (by definition of behavioural metric), and that behavioural metrics are metrics, indeed. §.§ On Environment Parametrised Bisimulation and CBMs As already mentioned, the concept of a CBM is inspired by Larsen's environment parametrized bisimulation <cit.>. It should then come with no surprise that there is a relationship between the two, which is the topic of this section. First, let us recall what an environment parametrized bisimulation is. Let (,,) and (,,) be LTSs. Elements of are called processes, while elements of are called environments. A -indexed family of relations {_}, where _⊆× is a environment parametrized bisimulation (EPB in the following) if, whenever _ and []: * [] ∃: [] and _; * [] ∃: [] and _. Environment parametrized bisimilarity, denoted as ∼_, is defined as ∼_ iff _ for some EPB . It turns out that ∼_ is the largest EPB <cit.>. EPBs can be embedded into the CBMs framework as follows: * fix as the boolean quantale , and define (,) = [] if ∃: [] and [] [] otherwise * let _ = (,,,) be any MLTS such that for all ∈ it holds that = [] ≠[], and for all ∈ there is [] ∈ such that []. Here is strong mutual similarity on the disjoint union of (forgetting ) and . When such conditions hold, we say that is embedded into _. 
We remark that, for every , there is an MLTS _ enjoying the properties above, obtained by augmenting with the immediate metric defined above (this gives rise to a pre-metric LTS, <ref>) and by closing it with respect to the operations and constants ,,, of <ref>. The intuition is that: * Two processes should have minimal immediate distance if there is a non-empty context in which their immediate behaviour is equivalent. This is ensured by the fact that they exhibit at least a common label from their current state. * _ needs to precisely simulate the behaviours in . We therefore require that every element of has a corresponding element in , with “equivalent behaviour”. In this setting, mutual simulation turns out to be the appropriate notion of behavioural equivalence. The link between environment parametrized bisimulations and CBMs is made formal by the following proposition. Let be an environment LTS embedded into an MLTS _. For every , and , it holds that ∼_(,) [_] []. The proposition above ultimately follows from the fact that ∼_∼_[] (where ∼_[] is parametrized bisimilarity <ref>) together with <ref>. § ABOUT THE COMPOSITIONALITY OF CBMS One of the greatest advantages of the bisimulation proof method is its modularity, which comes from the fact that, under reasonable assumptions, bisimilarity is a congruence. In a metric setting, one strives to obtain similar properties <cit.>, which take the form of non-expansiveness, or variations thereof. In this section we study the compositionality properties of CBMs with respect to some standard process algebraic operators. We are interested in properties that generalise the concept of a congruence. Following the lines of <cit.>, our treatment will be contextual, meaning that the environment in which processes are deployed can indeed contribute to altering their distance, although in a controlled way. In order to keep our theory syntax independent, we model operators f as functions f:^n → (where n is the arity of the operator). In particular, for each process operator f of arity n we define the function f̂: ^n×^n → as follows: f̂ ([1],,[n],[1],,[n]) = [](f([1],,[n]), f([1],,[n]))∀ 1 ≤ i ≤ n: ([i],[i]) [] [i] Intuitively, f̂(,) bounds (f(),f()) whenever is such that ([i],[i]) [] [i] for every i. Moreover, f̂(,) is the lowest among such bounds. Of course, our compositionality results rely on some assumptions on the compositionality of the immediate metric . Formally, we require that, for all operators f (with arity n), the following holds for every [1],,[n],[1],,[n]: (f([1],,[n]),f([1],,[n])) [] ([1],[1]) [] [] ([n],[n]). Below, we will give results about when and under which condition the value of the operator f̂ can be upper-bounded by a function on its parameters. We remark that our compositionality results apply to each operator independently. For the sake of concreteness, we give some examples of processes and their metric analysis. To this purpose, let = {,}, fix as the boolean quantale and let be defined exactly as we did in <ref> (i.e., returns if the processes can fire some common action, otherwise). Distances will take values from a MLTS _0 over . Similarly to <ref>, we require _0 to be such that for every ∈ it holds that = [] ≠[_0]. Moreover, we assume that _0 is able to represent at least Milner's synchronisation trees <cit.>. For simplicity, we omit self loops of [_0] from all the graphical representations of our MLTS. Of course, these assumptions hold only in the examples, while our results hold for general MLTSs. 
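For readers who prefer executable notation, this example setup can be rendered in Python as follows. The dictionary-based encoding, the state names (loosely modelled on the processes drawn in the figures of this section), and the rendering of the boolean quantale as Python booleans, with False standing for "no observable difference", are our own assumptions.

# Labels {a, b}, the boolean quantale (False = no observable difference,
# True = maximal difference), and the immediate metric of Section 4.3.
A, B = "a", "b"

def labels(lts, p):
    """Enabled labels of state p in an LTS given as {state: {(label, target), ...}}."""
    return {lab for (lab, _) in lts.get(p, set())}

def iota(lts, p, q):
    """False iff p and q can fire some common action, True otherwise."""
    return not (labels(lts, p) & labels(lts, q))

# Two illustrative processes in the spirit of the examples below:
example = {
    "q0": {(A, "q1"), (B, "q2")}, "q1": set(), "q2": set(),
    "p0": {(A, "p1"), (B, "p2")}, "p1": {(A, "p3")}, "p2": set(), "p3": set(),
}
print(iota(example, "q0", "p0"))   # False: both can fire a (and b)
print(iota(example, "q1", "p1"))   # True : q1 is terminated, p1 can still fire a
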
§.§ Restriction We assume restriction to be modelled by a -indexed family of unary operators _, and that is closed under these operators. Their semantics is standard. Their semantics can be defined in a standard way: [] ≠_ []_ Let [0] and [0] be as in the following figure. We have that [0] and [0] have the exact same behaviour on the branch, while we can observe differences on the branch ([1] can perform an action, [1] is terminated). State [0] captures exactly the similarities between [0] and [0]: after a move it reduces to ; after an move, it reduces to [1]. We argue that [1] captures the similarities between [1] and [1]: since neither of the two can perform the action , [1] reduces to with label , while it does not perform actions because [1] and [1] “disagree” on such label. So ([0],[0])=[0]. Processes () [0] and () [0] exhibit equivalent behaviour instead. In fact, operator () filters out the problematic branch. It is therefore the case that (()[0],()[0])=. < g r a p h i c s > [node distance = 2cm, on grid,auto] (q0) [state][label =above:[0]] ; (q1) [state, below left = of q0][label =above:[1]] ; (q2) [state, below right = of q0][label =above:[2]] ; (p0) [state,right =4.5cm of q0][label =above:[0]] ; (p1) [state, below left = of p0][label =above:[1]] ; (p2) [state, below right = of p0][label =above:[2]] ; (p3) [state, below = of p0][label =above:[3]] ; (s0) [state,right =3.5cm of p0][label =above:[0]] ; (s1) [state, below = of s0][label =left:[1]] ; (s2) [state, below right = of s0][label =right:] ; [->] (q0) edge [] node (q1) (q0) edge [] node (q2) (p0) edge [] node (p1) (p0) edge [] node (p2) (p1) edge [] node (p3) (s0) edge [] node (s1) (s0) edge [] node (s2) (s1) edge [] node (s2) ; [node distance = 2cm, on grid,auto] (q0) [state][label =above:() [0]] ; (q2) [state, right = of q0][label =above:() [2]] ; (p0) [state,right =4.5cm of q0][label =above:() [0]] ; (p2) [state, right = of p0][label =above:() [2]] ; [->] (q0) edge [] node (q2) (p0) edge [] node (p2) ; The restriction operator does not add new behaviours to the original process, as it can only restrict it. We can then expect that the differences between any two processes do not increase if such processes are placed in a restriction context. Proposition below indeed shows that _̂ enjoys a property similar to non-expansiveness, that is the distance between any two processes and bounds the distance between _ and _. _̂(,) []. It suffice to show that _ = (_, _)(,) [] is a parametrized bisimulation. So, let _ _ _. Condition (_,_) [] follows by <ref>. For condition 2, suppose [] and _[]. By inversion, we have that ≠ and = _ for some such that []. Then [] for some such that (,) []. Therefore: [] ≠ [] Moreover, _ _ _ as required. §.§ Prefixing We assume that is closed under operator : ×→, whose semantics is standard. [] We proceed similarly to the case of : we treat the prefix operator as an -indexed family of unary operators _. Let [0] and [0] be as in <Ref>. Since [0] and [0] can only reduce with a move to, respectively, [0] and [0], their distance ([0],[0]) should reduce to ([0],[0]) = [0]. Moreover, after performing an action, ([0],[0]) should reduce to . 
< g r a p h i c s > [node distance = 2cm, on grid,auto] (q00)[state, above = of q0][label =above:[0]] ; (q0) [state][label =left:[0]] ; (q1) [state, below left = of q0][label =above:[1]] ; (q2) [state, below right = of q0][label =above:[2]] ; (p00)[state, above = of p0][label =above:[0]] ; (p0) [state,right =4.5cm of q0][label =left:[0]] ; (p1) [state, below left = of p0][label =above:[1]] ; (p2) [state, below right = of p0][label =above:[2]] ; (p3) [state, below = of p0][label =above:[3]] ; (s00)[state, above = of s0][label =above:([0], [0])] ; (s0) [state,right =3.5cm of p0][label =left:[0]] ; (s1) [state, below = of s0][label =left:[1]] ; (s2) [state, below right = of s0][label =right:] ; [->] (q00) edge [] node (q0) (q0) edge [] node (q1) (q0) edge [] node (q2) (p00) edge [] node (p0) (p0) edge [] node (p1) (p0) edge [] node (p2) (p1) edge [] node (p3) (s00) edge [] node (s0) (s00) edge [bend left] node (s2) (s0) edge [] node (s1) (s0) edge [] node (s2) (s1) edge [] node (s2) ; In our contextual setting, prefixing of processes can change the distance, and the new distance may be incomparable to the original one. Therefore properties like non-expansiveness do not hold in general for _̂. Among the compositionality properties appeared in literature, uniform continuity <cit.> seems appropriate for prefixing. Uniform continuity holds when for all [ϵ] [] [] there is [δ] [] [] such that _̂(,[δ]) [] [ϵ]. Such condition is too strong: for instance if [ϵ] [] [] the only option is to take [δ] = [], hence [δ] [] []. For this reason, we need a stronger property for [ϵ], namely that the meet of the set of reducts of [ϵ] is strictly greater than [] and its immediate value is lower than that of [ϵ]. We start with the following auxiliary lemma. For all [ϵ], if [δ] = [] [ϵ] [] is such that [δ] [] [ϵ], then for all : _̂(,[δ]) [] [ϵ]. Let [ϵ] and [δ] be as in the statement. It suffice to show that ∼_[δ] implies ∼_[ϵ]. So, suppose ∼_[δ]. For condition (,) [] [ϵ], by <ref> we have that (, ) [] (,). The thesis then follows since (,) [] [δ] and [δ] [] [ϵ]. For condition 2, suppose [ϵ] [] [ϵ]. If ≠ the thesis holds trivially since neither nor can fire a transition. Instead, if =, we have that the only transitions are, respectively, [] and []. Since [δ] [] [ϵ] by definition and ∼_[δ] by assumption, we have the thesis. For all [ϵ] [] [] such that [] = [] [ϵ] [] and [] [] [ϵ], there is [δ] [] [] such that _̂(,[δ]) [] [ϵ]. Set [δ] = []. The thesis follows by <ref>. §.§ Non-deterministic Sum We assume that is closed under binary operator , whose semantics is again standard. [][] [] [] Let [0], [0] and [0] be as in <Ref>, and [0] as in the picture below. We have that ([0] [0],[0] [0]) = [0]: it reduces to after a move (both processes indeed terminate after a action). An action instead leads to a state that can only perform a action towards . This is because [0] [0] can reduce to [1] with a move, while [0] [0] cannot match that action exactly: it can reduce to [1] or [1], that are not bisimilar to [1]. 
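The two operators seen so far act on processes in a very simple way, which the following sketch makes explicit on the dict-based encoding of finite LTSs used above. It captures only the raw semantics rules, not the metric analysis; the wrapper state names and the name "nu" for the restriction operator are our own reading of the rules.

def restrict(lts, a):
    """Restriction: keep every transition whose label is different from a."""
    return {("nu", a, p): {(lab, ("nu", a, t)) for (lab, t) in succs if lab != a}
            for p, succs in lts.items()}

def prefix(lts, a, p, new_root="root"):
    """Prefixing a.p: a fresh state whose only move is an a-transition to p."""
    out = dict(lts)
    out[new_root] = {(a, p)}
    return out

example = {"q0": {("a", "q1"), ("b", "q2")}, "q1": set(), "q2": set()}
print(restrict(example, "a")[("nu", "a", "q0")])   # only the b-move survives
print(prefix(example, "b", "q0")["root"])          # {('b', 'q0')}

The sketch also makes the intuition behind the two compositionality results visible: restriction only removes transitions, so it can never amplify differences, whereas prefixing adds a fresh initial step, so the resulting distance is governed by the reducts of the environment, as in the lemma above.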
< g r a p h i c s > [node distance = 2cm, on grid,auto] (r0) [state][label =above:[0]] ; (r1) [state, below = of r0][label =below:[1]] ; (q0) [state,right =3cm of r0][label =above:[0] [0]] ; (q1) [state, below left = of q0][label =above:[1]] ; (q2) [state, below = of q0][label =below:[2]] ; (q3) [state, below right = of q0][label =below:[1]] ; (p0) [state,right =4.9cm of q0][label =above:[0][0]] ; (p1) [state, left = of p0][label =above:[1]] ; (p2) [state, below = of p0][label =below:[2]] ; (p3) [state, below = of p1][label =below:[3]] ; (p4) [state, below right= of p0][label =below:[1]] ; [->] (r0) edge [] node (r1) (q0) edge [] node (q1) (q0) edge [] node (q2) (q0) edge [] node (q3) (p0) edge [] node (p1) (p0) edge [] node (p2) (p0) edge [] node (p4) (p1) edge [] node (p3) ; Intuitively, the non-deterministic sum of two precesses can behave as the former process or as the latter (but not as both). Therefore we can expect that the distance between two sums is bounded by the join of the distances of the components. This is however not always the case, as the immediate distance is not necessarily non-expansive. The sum operator [] from <ref>, instead, turns out to be sufficient for our purposes. Proposition below indeed shows that is non-extensive. For every [1],[2],[1],[2]: it holds that ([1],[2],[1],[2]) [] [1] [] [2]. We start by showing that: ([1] [2], [1] [2]) [] [1] [2] Where: ([1],[1]) [] [1] ([2],[2]) [] [2] We rely on <ref>, and we prove: [1] [2] ∼_[1] [] [2][1] [2] Condition ([1] [2],[1] [2] [] [1] [] [2] follows directly from thr assumption <ref>. So, suppose [1] [] [2] []. It must be = [1] [2] for some [1],[2] such that [1] [] [1] and [2] [] [2]. Suppose [1] [2] []. By inversion on the operational semantics, we have that [1] [] or [2] []. We show only the former case. Since [1] [] [1], we have that [1] [] [1] for some [1] such that [1] ∼_[1][1]. Since [1] [] [1] [] [2], we have that [1] ∼_[1] [2][1], as required. The case for [1] [2] moves is similar. §.§ Parallel Composition We assume to be closed under the binary operator , whose semantics is defined below: [][] [] [] [] [] [] The notion of synchronisation considered in this paper is the one pioneered in CSP <cit.>. This choice is motivated by the fact that, in comparison with CCS-like communication <cit.> (which requires dual actions to synchronise resulting in an invisible τ-action), CSP notion does not change the label: this simplifies the technical development and enables stronger compositionality properties. Most of the works on compositionality of metrics for parallel composition we are aware of use CSP synchronisation, e.g. <cit.>. Let [0] and [0] be as in <Ref>, and [0] as in <Ref>. We have that ([0] [0],[0] [0]) is as the figure below. Indeed, [0] [0] and [0] [0] necessarily reduce to bisimilar states after a action: therefore their distance -reduces to . The situation for actions is more involved, due the the presence of several -reducts for both processes. So, consider the transition [0] [0] [] [1] [0]. We need to find the matching move of [0][0] that minimises the distance between the reducts. So, consider the transition [0] [0] [] [1] [1]. Since [1] [0] can only perform actions while [1] [1] only ones, we have that ([1] [0],[1] [1]) =. If we instead consider transition [0] [0] [] [1] [0], we have that ([1] [0],[1] [0]) = [1]. Indeed, [1] [0] [] while [1] [0] does not: hence [1] []. Moreover, [1] [] [2]. The only -reducts of [1] [0] and [1] [0] are, respectively, [1] [1] and [1] [1]. 
It is easy to verify that ([1] [1],[1] [1]) = [2]. The last possible matching choice is [0] [0] [] [0] [1], for which we have that ([1] [0],[0] [1]) = [1]: the argument is similar to the previous case. All the other starting -moves of [0] [0], and those of [0] [0], have matching moves leading to distances greater or equal than [1]. < g r a p h i c s > [node distance = 2cm, on grid,auto] (q00) [state][label =above:[0] [0]] ; (q10) [state, below left = of q00][label =above:[1] [0]] ; (q01) [state, below = of q00][label =below:[0] [1]] ; (q20) [state, below right = of q00][label =right:[2] [0]] ; (q11) [state, below = of q10][label =below:[1] [1]] ; (q21) [state, below = of q20][label =below:[2] [1]] ; (p00) [state, right =7cm of q00][label =above:[0] [0]] ; (p10) [state, below left = of p00][label =above:[1] [0]] ; (p01) [state, below = of p00][label =below:[0] [1]] ; (p20) [state, below right = of p00][label =right:[2] [0]] ; (p11) [state, below = of p10][label =below:[1] [1]] ; (p21) [state, below = of p20][label =right:[2] [1]] ; (p03) [state, left = of p10][label =above:[3] [0]] ; (p13) [state, below = of p03][label =below:[3] [1]] ; [->] (q00) edge [] node (q10) (q00) edge [] node (q20) (q00) edge [] node (q01) (q00) edge [] node (q11) (q10) edge [] node (q11) (q01) edge [] node (q11) (q01) edge [] node (q21) (q20) edge [] node (q21) (p00) edge [] node (p10) (p00) edge [] node (p20) (p00) edge [] node (p01) (p00) edge [] node (p11) (p10) edge [] node (p11) (p01) edge [] node (p11) (p01) edge [] node (p21) (p20) edge [] node (p21) (p10) edge [] node (p03) (p03) edge [] node (p13) (p11) edge [] node (p13) ; [node distance = 2cm, on grid,auto] (s0) [state][label =left:([0] [0],[0] [0])] ; (s1) [state, right = of s0][label =below:[1]] ; (s2) [state, right = of s1][label =below:[2]] ; (s3) [state, right = of s2][label =right:] ; [->] (s0) edge [] node (s1) (s0) edge [bend left] node (s3) (s1) edge [] node (s2) (s2) edge [] node (s3) ; Parallel composition does not enjoy strong compositionality properties. Indeed in general ([1],[2],[1],[2]) is related neither to [1] nor to [2], and even ([1],[2],[1],[]) is not related to [1]. Consider for instance the case where [2] “consumes” a [1] move. However, our metric domain contains “contextual” information. We exploit this fact to show that a nice compositionality property, similar to non-extensivity <cit.>, holds when the context and the distance are “compatible”. A formal definition of compatibility follows. A relation ⊆× is a compatibility relation if, whenever : * []; * [] and [] and []. We say that is -compatible iff for some compatibility relation . Consider again [0], [0], [1] from <ref> and ([0] [0],[0] [0]) of <ref>. We have that [0] is not [0]-compatible as Condition 2 from <ref> is violated: [0] [] [1] and [0] [] [1] but [0] [_0] [1]. Instead, [0] below is [0]-compatible: it follows from the facts that [0] necessarily reduces to a greater or equal state, [0] reduces to terminated states, which are vacuously compatible with every distance. Note that ([0],[0]) = [0] [_0] [0] and ([0] [0],[0] [0]) [_0] [0]. The second inclusion follows from the first by <ref>. < g r a p h i c s > [node distance = 2cm, on grid,auto] (s0) [state][label =left:[0]] ; (s1) [state, right = of s1][label =right:[1]] ; [->] (s0) edge [bend left] node (s1) (s0) edge [loop above] node (s0) (s1) edge [loop above] node (s1) ; If [1] is [2]-compatible and [2] is [1]-compatible, then ([1],[2],[1],[2]) [] [1] [] [2]. 
It suffice to prove that _[1] [] [2] = {([1] [2],[1] [2]) |([1],[1]) [] [1],([2],[2]) [2], [1] is [2]-compatible and [2] is [1]-compatible} is a parametrized bisimulation. So, let [1] [2] _[1][2][1] [2]. Condition ([1] [2],[1] [2]) [] [1] [] [2] follows immediately by <ref>. For condition 2, suppose [1] [] [2] []. By inversion, it must be = [1] [] [2] for some [1], [2] such that [1] [] [1] and [2] [] [2]. So, suppose [1] [2] []. We proceed by cases on the rule used: * [1] [][1][1] [2] [] [1] [2] Then, [1] [][1] for some [1] such that ([1],[1]) [] [1]. Then: [1] [][1][1] [2] [] [1] [2] By item 1 of <ref>, we have that [1] is [2]-compatible. By item 2 of <ref>, we have that [2] is [1]-compatible and [2] [2]. Hence ([2],[2]) [] [2]. Therefore: [1] [2] _[1] [] [2] [1] [2] * [1] [][1] [2] [][2][1] [2] [] [1] [2] Then, [1] [][1] and [2] [][2] for some [1],[2] such that ([1],[1]) [] [1] and ([2],[2]) [] [2]. Then: [1] [][1] [2] [][2][1] [2] [] [1] [2] By item 2 of <ref>, we have that [1] is [2]-compatible and [2] is [1]-compatible. Therefore: [1] [2] _[1] [] [2] [1] [2] The case for the last rule is similar, as is the case for [1][2] moves. §.§ Replication We assume that is closed both under operator (as defined in <ref>) and under : →, whose semantics is standard. [][] In general, replication has bad compositionality properties: since it allows infinite behaviour, even a small distance in the parameter can get amplified to a much larger value. However, we show that is not expansive under the assumption that the parameter always reduces to a larger or equal value and [] is idempotent. Such condition is of course quite strong, but it holds for instance when interpreting bisimilarity as a contextual bisimulation metric (see <ref>). We have that [0] and [0] can both fire a or action and reduce to a process with the same behaviour (the simplest state with this property is drawn in the figure). Therefore, the distance ([0],[0]) = []. In general, however, the distance among processes is not preserved by replication, as shown below: < g r a p h i c s > [node distance = 2cm, on grid,auto] (p0q0) [state][label =below:[0] = [0]] ; (p0) [state, right = of p0q0][label =below:[0]] ; (p1) [state, right = of p0][label =below:[1]] ; (p2) [state, right = of p1][label =below:[2]] ; (q0) [state, right = of p2][label =below:[0]] ; (q1) [state, right = of q0][label =below:[1]] ; (q2) [state, above right = of q1][label =below:[2]] ; (q3) [state, below right = of q1][label =above:[3]] ; (!p0) [state, below =2.5cm of p0q0][label =below:[0]] ; (!p1) [state, right = of !p0][label =below:[1][0]] ; (dotsp) [right = of !p1] ; (!q0) [state, right = of dotsp][label =below:[0]] ; (!q1) [state, right = of !q0][label =below:[1][0]] ; (dotsq) [right = of !q1] ; (s0) [state,below=2.5cm of !p0] [label =below:([0],[0])] ;; (s1) [state,right= of s0] ; (sbot) [state,right= of s1] [label =below:] ; (s0!) [state,right= of sbot][label =below:([0],[0])] ; (s1!) [state, right = of s0!] ; (s2!) [state, right = of s1!] 
; [->] (p0q0) edge [loop above] node , (p0q0) (p0) edge node (p1) (p1) edge node (p2) (q0) edge node (q1) (q1) edge node (q2) (q1) edge node (q3) (!p0) edge [bend left] node (!p1) (!p1) edge [bend left] node (dotsp) (!p1) edge [bend left] node (!p0) (dotsp) edge [bend left] node (!p1) (!q0) edge [bend left] node (!q1) (!q1) edge [bend left] node (dotsq) (!q1) edge [bend left] node , (!q0) (!q1) edge [loop above] node (!q1) (dotsq) edge node , (!q1) (dotsq) edge [bend left=70] node (!q0) (dotsq) edge [bend right=70] node (!q1) (s0) edge node (s1) (s0) edge [bend left] node (sbot) (s1) edge node (sbot) (s0!) edge node (sbot) (s0!) edge [bend left] node (s1!) (s1!) edge [bend left] node (s0!) (s1!) edge node (s2!) (s2!) edge [loop right] node (s2!) ; We define 𝐈𝐧𝐜, the set of increasing states, as the largest set ⊆ such that, whenever ∈ and [] : [] and ∈. If is increasing and [] is idempotent, then (,) []. We prove that, provided [] is idempotent, _ defined below is a parametrized bisimulation. _ = ([1] ( ([n] )), [1] ( ([n] ))n ≥ 0, ∈ Dec, ∀ 1 ≤ i ≤ n: ([i],[i]) , (,) So, let _. For condition 1 we have that (,) [] (Σ_1 ≤ i ≤ n([i],[j])) [] (,). The first inequality follows from <ref>, the second from idempotency. For condition 2, suppose [] and []. First notice that ∈ Inc and []. By inversion and a routine induction on n (omitted) we can conclude that is in one of the following shapes: * = [1] ( ([n] )) where there is non-empty I ⊆{1,,n} such that i ∈ I [i] [] [i] and i ∉I [i] = [i]; * = [1] ( ([n] ())) where there is (possibly empty) I ⊆{1,,n} such that [] i ∈ I [i] [] [i] and i ∉I [i] = [i]; We show only case 2, which is slightly more involved. So, we have that, for all i ∈ I: [i] [] [i] for some [i] such that ([1],[1]) []. Furthermore, there is such that [] and (,) []. So, let = [1] ( ([n] ())), where i ∉I [i] = [i]. We have that [] and _, as required. § RELATED WORK & CONCLUSION Quite a few works in the literature study context dependent relations. The closest to our work is the already mentioned study about environment paremetrized bisimilarity <cit.>. Our definition of CBM is similar to theirs, where the main differences are that we also consider quantitative aspects and that we explicitly work with a metric. The same work also provides an interesting logical characterisation of their relation in terms of Hennessy-Milner logic, but does not study compositionality. Since environment parametrized bisimilarity can be embedded into our framework, our compositionality results also hold for <cit.>. A closely related line of research <cit.> (non-exhaustive list) studies conditional bisimulations in an abstract categorical framework, where conditions are used to make assumptions on the environment. In particular, <cit.> introduces a notion of conditional bisimilarity for reactive systems and shows that conditional bisimilarity is a congruence. In <cit.>, an early and a late notion of symbolic bisimilarity for value passing processes are introduced, where actual values are symbolically represented with boolean expressions with free variables. Symbolic bisimilarities are parametric w.r.t. a predicate that, in a sense, allows to make assumptions on the values that the context can send. Our notion of contextuality instead restricts the choices of the environment, and we do not consider explicit value passing. Compositionality of behavioural metrics has been studied in the probabilistic setting <cit.>. In <cit.>, it has been shown that parallel composition is non-extensive. 
We remark that our notion of parallel composition is slightly more general than the one considered in <cit.>, as in there processes necessarily synchronise on common actions. The work <cit.> studies compositionality for quite a few process algebraic operators, showing e.g. that non-deterministic sum is non-expansive, while parallel composition is non-extensive. The bang operator is shown Lipschitz continuous for the discounted metric, while not even uniformly continuous w.r.t. the non-discounted one. <cit.> introduces structural operational semantics formats that guarantee compositionality of operators. Basically, compositionality depends on how many parameters of the operator are copied from the source to the destination of the rules, weighted by probabilities and the discount factor. Concluding Remarks. This paper introduces a new form of metric on the states of a LTS, called contextual behavioural metric, which enables contextual and quantitative reasoning. We study compositional properties of CBMs w.r.t. some operators, showing that, under the assumption that the immediate metric is non-extensive, the following hold: restriction is non-expansive, non-deterministic sum is non-extensive, prefixing enjoys a property slightly weaker than uniform continuity, parallel composition is non-extensive when the distance between components is compatible with the context and replication enjoys non-expansiveness under some (rather strong) assumptions on the underling quantale . Due to the generality of CBMs, our compositionality results extend to behavioural metrics as defined in <ref>. For instance, since the compatibility relation of <ref> holds trivially for the MLTS of behavioural metrics, we have that compositionality of parallel composition only depends on the compositionality of the immediate metric. Our work is still preliminary, and indeed we are yet in the quest for an appropriate general notion of compositionality: here we tried to adapt concepts from the probabilistic setting <cit.>, where uniform continuity is considered as the most general notion of compositionality. In our setting not even prefixing enjoys uniform continuity, which should not come as a surprise, as quantales are not totally ordered in general. Our compositionality results have heterogeneous side conditions. Spelling out all the compositionality results in a uniform way would come with a high price: operators for which compositionality holds without any side condition, such as restriction, would have to be treated as those for which compositionality holds only modulo appropriate (and strong) hypotheses, such as replication. An interesting future work would be to infer the side conditions directly from SOS rules, or studying more operators or rule formats as in <cit.>. Another direction of future research would be to consider calculi with value and/or channel passing like the π-calculus: since strong bisimilarity is not a congruence in such settings, a promising approach could be a “contextualisation” of open-bisimilarity <cit.>.
Camouflaged Object Detection with Feature Grafting and Distractor Aware *Corresponding author. This work is supported in part by the National Natural Science Foundation of China (Grant No. 41927805). Yuxuan Song College of Computer Science and Technology Ocean University of China Qingdao, China [email protected] Xinyue Li College of Computer Science and Technology Ocean University of China Qingdao, China [email protected] Lin Qi* College of Computer Science and Technology Ocean University of China Qingdao, China [email protected] August 12, 2023 ========================================================================================================================================================================================================================================================================================================================================================================================= The task of Camouflaged Object Detection (COD) aims to accurately segment camouflaged objects that integrated into the environment, which is more challenging than ordinary detection as the texture between the target and background is visually indistinguishable. In this paper, we proposed a novel Feature Grafting and Distractor Aware network (FDNet) to handle the COD task. Specifically, we use CNN and Transformer to encode multi-scale images in parallel. In order to better explore the advantages of the two encoders, we design a cross-attention-based Feature Grafting Module to graft features extracted from Transformer branch into CNN branch, after which the features are aggregated in the Feature Fusion Module. A Distractor Aware Module is designed to explicitly model the two possible distractor in the COD task to refine the coarse camouflage map. We also proposed the largest artificial camouflaged object dataset which contains 2000 images with annotations, named ACOD2K. We conducted extensive experiments on four widely used benchmark datasets and the ACOD2K dataset. The results show that our method significantly outperforms other state-of-the-art methods. The code and the ACOD2K will be available at https://github.com/syxvision/FDNet. Camouflaged Object Detection, Transformer, Convolutional Neural Networks, Distractor § INTRODUCTION Camouflage refers to creatures use the similarity of color, texture, etc. to hide themselves in the background without being discovered by predators. Inspired by the natural camouflage of animals such as chameleon, artificial camouflage was created to deceive human's visual inspection. The computer vision task of Camouflaged Object Detection (COD) aims to accurately segment concealed objects from the background environment, which has recently attracted interests of researchers and facilitated many applications in different fields. However, due to its inherent nature, locating and segmenting of camouflaged objects is much more difficult than ordinary object detection, which makes the COD task extremely challenging. Recently, many deep learning based methods have been proposed to solve the COD task and have achieved impressive progress. SegMaR <cit.> introduces a Magnification Module to iteratively upsample images to segment camouflaged objects with complex structures. ZoomNet <cit.> showed that multi-scale information is very effective for resolving the appearance and shape variation of objects at different scales. This model uses a shared encoder to encode images of three scales. 
However, shared encoders cannot take full advantage of multi-scale images and may cause error propagation. Therefore, we propose to use two different encoders in parallel and design a Feature Grafting Module for better feature transfer. Existing COD methods only consider the background as a distractor; for example, SINetv2 <cit.> uses reverse attention to erase the foreground and uses the background to mine potential camouflage areas. However, in the COD task, due to the similarity between the object and the surrounding environment, there are two different types of distractors, as shown in Figure <ref>: 1) in the first row, the stem of the branch is misclassified as a camouflaged object since its texture is very similar to the target; 2) in the second row, the lower half of the animal's body blends into the black background, and the network misses it. This observation suggests that explicitly modeling the semantic features of these two types of distractors under supervision can improve detection performance. In this paper, we propose a Feature Grafting and Distractor Aware network (FDNet) for camouflaged object detection. We employ a Transformer and a CNN to exploit information at different scales, where the Transformer models long-range dependencies for rich context information and the CNN mines local details for edge information. To aggregate the features from these two encoders, we developed a Feature Grafting Module based on cross-attention, which fuses features in a bottom-up manner to produce a coarse prediction map. A Distractor Aware Module is designed to guide the learning by modeling the two types of distractors and exploring potential camouflage regions under ground-truth supervision. Benefiting from the designed modules, our proposed network can better recognize distractors and achieves better detection performance. In addition, we contribute a new COD dataset to the community, motivated by the fact that most existing COD datasets consist of natural camouflaged animals, whereas only a small portion contains camouflage created by humans. To address this limitation, we collected and annotated 2000 images of artificial camouflage from the Internet, constituting the currently largest artificial camouflage dataset, named ACOD2K. Figure <ref> shows some example images of this dataset. We compared our proposed model with other state-of-the-art models on public datasets and on this new dataset. Our contributions. 1) Camouflaged objects can be segmented more accurately by our proposed FDNet, which features a multi-scale feature extractor and explicit modeling of distractors. 2) The parallel encoding and the Feature Grafting Module extract and fuse multi-scale features, which are utilized by the Distractor Aware Module to incorporate two different types of distracting semantic cues for target segmentation. 3) A large artificial camouflage dataset, ACOD2K, was proposed and used to compare the performance of our proposed model and other existing models. § RELATED WORK The release of large-scale camouflage datasets (such as COD10K <cit.>) has triggered the invention of many deep learning-based methods, which have shown impressive results on the COD task. A majority of the recent work is inspired by how human observers visually search for camouflaged targets, e.g. SINet <cit.>, ZoomNet <cit.> and SegMaR <cit.>. SINet was designed to have two stages, for searching and recognition respectively.
ZoomNet <cit.> and the recently proposed SegMaR <cit.> enlarge the image in potential target regions to further mine distinguishing clues in a coarse-to-fine manner. Other work proposed to use auxiliary cues to improve performance, such as making better use of boundary clues <cit.> and frequency-domain perceptual cues <cit.>. The joint task learning was also found to be useful when SOD(Salient Object Detection) and COD are simultaneously considered to boost each other's performance <cit.>. Unlike CNN, Transformer has a global receptive fields, which can capture richer contextual information. Its success in the natural language processing has been observed by computer vision tasks. UGTR <cit.> uses Bayesian and Transformer to infer areas of uncertainty. To take the advantage of both architecture, we employ CNN and Transformer together to enhance the performance of the model. § OUR METHOD §.§ ACOD2K dataset Camouflage images can be categorized as natural or artificial. Natural camouflage refers to the ability of animals to blend into their surroundings through changes in their physiological characteristics, making them difficult to detect by predators. Artificial camouflage refers to camouflage designed using human reasoning through methods such as painting and camouflage uniforms, with a specific aim to target human visual perception characteristics in order to more effectively deceive the human visual system. It has great practical value for tasks such as disaster-assisted search and rescue operations. Leveraging this advantage, we have constructed ACOD2K, the largest artificial camouflage dataset.It's worth noting that current camouflaged object detection methods are exclusively trained on natural camouflaged images. This is because existing datasets mainly feature natural camouflaged animals, making it difficult to train models that can accurately detect artificial camouflage. For instance, the two most commonly used training datasets in COD tasks, CAMO and COD10K, have an imbalanced distribution of natural and artificial camouflage images. Of the 2,500 images in CAMO, less than 10% are artificial camouflage images. Similarly, COD10K, a large-scale dataset with 10,000 images covering multiple camouflaged objects in natural scenes divided into 5 super classes, lacks artificial camouflage images. This highlights the need for datasets like ACOD2K, which has a significant number of artificial camouflage images, to enable the development of more robust camouflaged object detection methods.ACOD2K are consisted by 2000 images, where 1500 images are with camouflaged objects, 400 images are with non-camouflaged objects, and 100 are background images. Most of the images are collected from the Internet (80%), searched using the keywords such as “military camouflage”, “body painting”, “Ghillie suit”, and the rest are from public COD and SOD dataset. Figure <ref> shows some examples of ACOD2K, from which it can be seen that artificial camouflages are intentionally made by humans using materials and colors to conceal the whole target body in the background. High-quality and fine-grained pixel-level matting annotations were carried out for each image. In order to guarantee the quality, an additional researcher further verified all annotations. §.§ Overall Architecture The overall structure of our proposed FDNet is shown in Figure <ref>. It is divided into two stages, the first stage generates a coarse feature map, and the second stage refines the feature map based on the Distractor Aware Module. 
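To orient the reader, a minimal PyTorch-style sketch of this two-stage pipeline is given below. All module and variable names are placeholders for the components described in the following subsections; this is a schematic wiring under assumed interfaces, not the authors' released implementation.

```python
import torch.nn as nn

class FDNetSketch(nn.Module):
    """Schematic two-stage forward pass: parallel encoding, grafting, fusion, then distractor-aware refinement."""
    def __init__(self, pvt_encoder, res2net_encoder, grafting, fusion, distractor_aware):
        super().__init__()
        self.pvt = pvt_encoder          # Transformer branch (main scale)
        self.cnn = res2net_encoder      # CNN branch (sub scale)
        self.graft = grafting           # Feature Grafting Module
        self.fuse = fusion              # Feature Fusion Module
        self.refine = distractor_aware  # Distractor Aware Module

    def forward(self, img_main, img_sub):
        g_feats = self.pvt(img_main)             # multi-stage Transformer features g_i
        f_feats = self.cnn(img_sub)              # multi-stage CNN features f_i
        grafted = self.graft(g_feats, f_feats)   # stage 1: graft Transformer cues into CNN features
        top_feat, coarse_map = self.fuse(grafted)
        refined_map = self.refine(top_feat, coarse_map)  # stage 2: model false negatives/positives
        return coarse_map, refined_map
```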
FDNet uses multi-scale images as input. Unlike ZoomNet, which uses a shared encoder, we use PVT <cit.> for the main scale and Res2Net50 <cit.> for the sub-scale, which together constitute a parallel encoder. We design a Feature Grafting Module based on cross-attention to aggregate the features of these two scales, which not only extracts valuable semantic clues but also suppresses redundant information and background noise. The multi-scale features are then sent to the Feature Fusion Module for decoding, which achieves more efficient transmission of the encoded information through bottom-up dense connections. Finally, the fused features are sent into the dual-branch Distractor Aware Module to refine the feature map, with ground truth used for supervision. §.§ Feature Grafting Module For the main-scale image, we use PVT as the backbone to extract feature maps of 4 stages, denoted as g_i, i=1,2,3,4. Since features with too small a resolution lose most of the information, we do not use g_4. For the sub-scale image, we use Res2Net50 as the backbone to extract a set of feature maps, denoted as f_i, i=1,2,3,4. We choose to graft features on feature groups with the same resolution. Since the resolution of the sub-scale is twice that of the main scale, g_i and f_i+1, i=1,2,3, have the same resolution. For the first two groups, we use pooling for feature grafting to maintain and highlight useful information. In neural networks, deeper features carry richer semantic clues: g_3, extracted by the Transformer, has rich global context information, while f_4, extracted by the CNN, has edge detail information complementary to the global information. We believe that simple fusion methods such as pooling, concatenation, or addition are not effective enough for mutual learning between these two features and cannot suppress the background noise from the CNN well. Therefore, we use cross-attention to incorporate the global semantic cue learned from the main scale into each pixel of the sub-scale. The detail is shown in Figure <ref>. F_4 = Softmax(f_4^Q · (g_3^K)^T / √(k)) · f_4^V, f_4^Q, f_4^V = θ(f_4), g_3^K = ϕ(g_3), where k is the key dimension. θ(·) uses flatten and permute operations to transform f_4 ∈ R^C × H × W into f_4^' ∈ R^HW × C. As in self-attention, the result is passed through Layer Normalization and linear transformations to obtain f_4^Q and f_4^V; g_3^K is obtained from g_3 through ϕ(·) in the same way as θ.
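To make the cross-attention grafting concrete, the following PyTorch-style sketch implements the formulas above under stated assumptions; the module name, projection layers, and tensor shapes are illustrative choices, not the authors' released code.

```python
import torch
import torch.nn as nn

class FeatureGrafting(nn.Module):
    """Cross-attention grafting: query/value from the CNN feature f4, key from the Transformer feature g3."""
    def __init__(self, channels):
        super().__init__()
        self.norm_f = nn.LayerNorm(channels)
        self.norm_g = nn.LayerNorm(channels)
        self.to_q = nn.Linear(channels, channels)
        self.to_k = nn.Linear(channels, channels)
        self.to_v = nn.Linear(channels, channels)
        self.scale = channels ** -0.5            # 1 / sqrt(k), with k the key dimension

    def forward(self, f4, g3):
        # f4, g3: (B, C, H, W) with the same spatial resolution
        B, C, H, W = f4.shape
        f = f4.flatten(2).permute(0, 2, 1)       # theta(.): (B, HW, C)
        g = g3.flatten(2).permute(0, 2, 1)       # phi(.):   (B, HW, C)
        q = self.to_q(self.norm_f(f))
        v = self.to_v(self.norm_f(f))
        k = self.to_k(self.norm_g(g))
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        out = attn @ v                           # (B, HW, C)
        return out.permute(0, 2, 1).reshape(B, C, H, W)

# usage sketch with dummy tensors
graft = FeatureGrafting(channels=64)
F4 = graft(torch.randn(2, 64, 12, 12), torch.randn(2, 64, 12, 12))
```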
§.§ Feature Fusion Module Unlike previous methods that directly perform convolution after channel concatenation of adjacent feature layers to output the prediction map, we fuse deeper features as a semantic filter. We first element-wise multiply it with the current-layer features to suppress background interference that may cause abnormality, and then preserve the original information by residual addition. The details are shown in Figure <ref>. The features produced by the Feature Grafting Module are denoted as F_i, i=1,2,3,4. Since F_4 is the last layer of features, we directly perform a 3x3 convolution on F_4 to form F̂_4. For F_3, we perform filtering on F_4 to form F_3^filter. Correspondingly, F_2^filter and F_1^filter are given in the following formulas. We take the top-level feature F̂_1 as the final result of the Feature Fusion Module, and the coarse prediction is F_c. F̂_4 = Conv3(F_4), F_3^filter = Conv3(Conv1(F_4↑_2)), F̂_3 = Conv3([F_3^filter * F_3 + F_3; F̂_4]), F_2^filter = Conv3(Conv1([F_4↑_4; F_3↑_2])), F̂_2 = Conv3([F_2^filter * F_2 + F_2; F̂_3]), F_1^filter = Conv3(Conv1([F_4↑_8; F_3↑_4; F_2↑_2])), F̂_1 = Conv3([F_1^filter * F_1 + F_1; F̂_2]), F_c = Conv3(F̂_1). Conv3 and Conv1 represent 3x3 and 1x1 convolution respectively, ↑_n refers to upsampling by a factor of n, [;] means channel concatenation, and * represents element-wise multiplication. §.§ Distractor Aware Module We believe that there are two types of distractors present in the coarse prediction map generated in the first stage, namely: (i) objects that are camouflaged but not detected, referred to as false negatives, ξ_fn, and (ii) objects that are not camouflaged but are misdetected, referred to as false positives, ξ_fp. To address this, we propose a dual-branch Distractor Aware Module that explicitly models the potential interference and aims to improve the accuracy of the segmentation results. As illustrated in the lower part of Figure <ref>, we first use F̂_1 ∈ R^64 × H × W to extract ξ_fn features through a lightweight encoder, designed as two 3x3 convolutions, each followed by BN and ReLU. In order to make better use of ξ_fn, we generate the predicted map of ξ_fn. During training, the ground truth of ξ_fn is approximated by the difference between the ground truth of the segmentation map and the binarized coarse prediction F_c. We then concatenate ξ_fn with F̂_1 and feed the result into an attention mechanism to generate augmented weights ξ_fn^a. The attention mechanism aims to enhance the features of possible ξ_fn regions. We perform element-wise multiplication of ξ_fn^a with the original feature F̂_1 and then apply a residual connection to generate the enhanced feature F_fn. The network can now better segment those regions that would otherwise be ignored as background. ξ_fn = SmallEncoder(F̂_1), fn_GT = GT - φ(F_c). Similarly, we use the same encoder to extract ξ_fp features and its predicted map. The ground truth of ξ_fp is approximated by the difference between the binarized coarse prediction F_c and the ground truth of the segmentation map. We concatenate F_fn with ξ_fp along the channel dimension, then feed the result into a refine unit consisting of two 3x3 convolutional layers to capture richer context information, so as to better distinguish the misdetected areas. Finally, the result is subtracted from F_fn to obtain the prediction feature that suppresses the ξ_fp distractor. After a 3x3 convolution, we obtain the final prediction map F_p. φ(·) represents the binarization operation. ξ_fp = SmallEncoder(F̂_1), fp_GT = φ(F_c) - GT. §.§ Loss Functions Our network has two types of supervision. For the loss L_F_p of the prediction map, as in most COD methods, we use the weighted BCE loss and the weighted IoU loss (Loss1). For the losses L_fn and L_fp of fn and fp, we use the weighted BCE loss (Loss2). The loss function is as follows. Loss = L_F_p + λ L_fn + β L_fp, Loss1 = L_BCE^ω + L_IOU^ω, Loss2 = -∑_i [ (N_p/(N_p+N_n)) y_i log(p_i) + (N_n/(N_p+N_n)) (1-y_i) log(1-p_i) ]. In the experiments, λ and β are set to 10. N_p and N_n represent the numbers of positive and negative pixels, respectively.
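The supervision described above can be sketched in PyTorch as follows. The boundary-weighting scheme for the BCE/IoU terms follows common COD practice rather than the authors' exact code, and the helper names are illustrative; the count-weighted BCE mirrors Loss2 as written in the paper.

```python
import torch
import torch.nn.functional as F

def weighted_bce_iou(pred, gt):
    """Loss1: weighted BCE + weighted IoU on the final prediction map (common COD formulation)."""
    weit = 1 + 5 * torch.abs(F.avg_pool2d(gt, 31, stride=1, padding=15) - gt)
    bce = F.binary_cross_entropy_with_logits(pred, gt, reduction='none')
    bce = (weit * bce).sum(dim=(2, 3)) / weit.sum(dim=(2, 3))
    prob = torch.sigmoid(pred)
    inter = (prob * gt * weit).sum(dim=(2, 3))
    union = ((prob + gt) * weit).sum(dim=(2, 3))
    iou = 1 - (inter + 1) / (union - inter + 1)
    return (bce + iou).mean()

def count_weighted_bce(pred, gt):
    """Loss2: BCE weighted by the positive/negative pixel counts N_p and N_n, as written in the paper."""
    n_p, n_n = gt.sum(), (1 - gt).sum()
    w_pos, w_neg = n_p / (n_p + n_n), n_n / (n_p + n_n)
    prob = torch.sigmoid(pred).clamp(1e-6, 1 - 1e-6)
    loss = -(w_pos * gt * torch.log(prob) + w_neg * (1 - gt) * torch.log(1 - prob))
    return loss.sum()

def total_loss(pred_final, pred_fn, pred_fp, gt, fn_gt, fp_gt, lam=10.0, beta=10.0):
    return (weighted_bce_iou(pred_final, gt)
            + lam * count_weighted_bce(pred_fn, fn_gt)
            + beta * count_weighted_bce(pred_fp, fp_gt))
```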
§ EXPERIMENTS §.§ Experiment Setup Datasets. We perform experiments on four COD benchmark datasets and our ACOD2K. The public datasets are CAMO <cit.>, CHAMELEON <cit.>, COD10K <cit.> and NC4K <cit.>. Like previous methods, we use 3040 images from COD10K and 1000 images from CAMO for training, and the other datasets for testing. For ACOD2K, we divide it into a train set and a test set at a ratio of 8:2. Evaluation Criteria. We use four metrics commonly used in COD tasks to evaluate model performance: Mean Absolute Error (MAE) <cit.>, F_β^w-measure <cit.>, E-measure <cit.>, and S-measure <cit.>. Implementation Details. Our network uses PVT <cit.> and Res2Net50 <cit.> pretrained on ImageNet as backbones. We use a data augmentation strategy of random flips and rotations. During training, to balance efficiency and performance, the size of the main scale is set to 288x288 and the batch size is 32. We use SGD with momentum 0.9 and weight decay 0.0005 as the optimizer; the learning rate is initialized to 0.05 and follows a linear decay strategy, and the maximum number of training epochs is set to 50. The entire network is trained on an NVIDIA GeForce GTX 3090Ti. §.§ Comparisons with State-of-the-arts To show the effectiveness of our method, we compare it with 10 SOTA methods on the public datasets. On our ACOD2K, we compare with 3 COD methods. For a fair comparison, the results of these models are either provided by the authors or retrained from open-source code. Quantitative Evaluation. As shown in Table <ref>, our method achieves superior performance on multiple evaluation metrics. Specifically, our method increases F_β^ω by 1.5%, 3.3%, 6%, and 1.9% over the second-best method on the four datasets. Table <ref> shows that FDNet outperforms the second-best method on ACOD2K, improving the four metrics by 1.4%, 2.4%, 1%, and 0.4%. Qualitative Evaluation. We further show a qualitative comparison of FDNet with other methods in the form of visualization maps. As shown in Figure <ref>, our method not only recognizes the camouflaged objects well but also segments fine edges. In addition, in the second row, our method also works well in the presence of distractors in the image. §.§ Ablation Studies As shown in Table <ref>, we conducted five ablation experiments. In A, we remove all key modules, use only single-scale images, and simply perform convolution after channel concatenation to obtain the final prediction map. In B, we add the Feature Fusion Module on the basis of A. In C, we use multi-scale images but share the encoder, and the features of different scales are fused by pooling. In D, we use CNN and Transformer to encode the images of the two scales respectively and use the Feature Grafting Module to fuse features. In E, we add the Distractor Aware Module on the basis of D. Effectiveness of multi-scale. By fusing features of different scales, we can explore richer semantic representations. From the second and third rows of Table <ref>, it can be seen that the performance of C is significantly better than that of B; on COD10K in particular, S_α, F_β^w, E_ϕ, and ℳ improve by 4.4%, 8.5%, 2.9%, and 0.9%, respectively. Effectiveness of Feature Fusion. From the first and second rows of Table <ref>, B improves the four indicators by 0.8%, 2.2%, 1.1%, and 0.4% on average, which is due to the positive impact of the Feature Fusion Module's bottom-up dense feature-guided structure. Effectiveness of Feature Grafting. Compared with C, all indicators of D on the two datasets increase to different degrees; in particular, F_β^w on CAMO increases by 1%. This is largely because the Feature Grafting Module aggregates the advantages of the two different types of encoders well. Effectiveness of Distractor Aware.
E outperforms D on all datasets, and the visual comparison results in Figure <ref> also clearly verify that the module can mine potential interference areas. § CONCLUSION We propose a novel COD network, FDNet. First, we design the Feature Grafting Module to extract valuable semantic information and suppress background noise. Then, in the Distractor Aware Module, we obtain a more accurate prediction map by refining the two types of distractors. Additionally, we construct a new artificial camouflage dataset, ACOD2K. Experiments on four public datasets and ACOD2K show that our method significantly outperforms other methods both qualitatively and quantitatively. In the future, we will explore more effective supervision methods for the two types of distractors.
http://arxiv.org/abs/2307.05375v1
20230709095034
Emotion Analysis on EEG Signal Using Machine Learning and Neural Network
[ "S. M. Masrur Ahmed", "Eshaan Tanzim Sabur" ]
eess.SP
[ "eess.SP", "cs.AI", "cs.HC" ]
Emotion Analysis on EEG Signal Using Machine Learning and Neural Network S. M. Masrur Ahmed Software Engineer bKash Limited Dhaka, Bangladesh [email protected] Eshaan Tanzim Sabur Department of Computer Science BRAC University Dhaka, Bangladesh [email protected] August 12, 2023 ============================================================================================================================================================================================================================================== Emotion has a significant influence on how one thinks and interacts with others. It serves as a link between how a person feels and the actions one takes, or it could be said that it influences one's life decisions on occasion. Since the patterns of emotions and their reflections vary from person to person, their inquiry must be based on approaches that are effective over a wide range of population regions. To extract features and enhance accuracy, emotion recognition using brain waves or EEG signals requires the implementation of efficient signal processing techniques. Various approaches to human-machine interaction technologies have been ongoing for a long time, and in recent years, researchers have had great success in automatically understanding emotion using brain signals. In our research, several emotional states were classified and tested on EEG signals collected from a well-known publicly available dataset, the DEAP Dataset, using SVM (Support Vector Machine), KNN (K-Nearest Neighbor), and an advanced neural network model, RNN (Recurrent Neural Network), trained with LSTM (Long Short Term Memory). The main purpose of this study is to improve ways to improve emotion recognition performance using brain signals. Emotions, on the other hand, can change with time. As a result, the changes in emotion over time are also examined in our research. emotion recognition, EEG signal, DEAP dataset, fft, Machine Learning, SVM, KNN, DEAP, RNN, LSTM § INTRODUCTION Emotion is defined as a person's conscious or unconscious behavior that indicates our response to a situation. Emotion is interconnected with a person's personality, mood, thoughts, motivation, and a variety of other aspects. Fear, happiness, wrath, pride, anger, panic, despair, grief, joy, tenseness, surprise, confidence, enthusiasm are the common emotions are all experienced by humans <cit.>. The experience can be both positive or negative. In the light of this, physiological indications such as heart rate, blood pressure, respiration signals, and Electroencephalogram (EEG) signals might be useful in properly recognizing emotions.Emotion recognition has always been a major necessity for humanity, not just for usage in fields like computer science, artificial intelligence, and life science, but also for assisting those who require emotional support. For a long time, experts couldn't figure out a reliable way to identify true human emotion. One method was to use words, facial expression, behavior, and image to recognize one's emotions <cit.>. Researchers found that subject answers are unreliable for gauging emotion; people are unable to reliably express the strength and impact of their feelings. Furthermore, it is simple to manipulate self-declared emotions, resulting in incorrect findings. As a result, researchers had to shift their focus to approaches that do not rely on subject reactions. 
The development of Brain-Computer Interface (BCI) technology and Electroencephalogram (EEG) signals has provided more accurate methods for detecting human emotions. It introduced an involuntary approach that yields more accurate and reliable results: involuntary signals cannot be consciously controlled, so they reflect people's true feelings and can express genuine emotions. A reliable human emotion recognition system based on EEG signals could help people regulate their emotions, open up new possibilities in fields like education, entertainment, and security, and might aid people suffering from alexithymia or other psychiatric conditions. The goal of our paper is to apply effective techniques to the DEAP dataset to extract features from EEG signals using band waves, and to apply machine learning algorithms and neural network models to assess their performance with respect to valence-arousal, EEG regions, and band waves. § LITERATURE REVIEW The EEG research community is expanding its reach into a number of different fields. In her research, Vanitha V. et al. <cit.> aim to connect stress and EEG, showing how stress can have both beneficial and harmful effects on a person's decision-making process. She also discusses how stress affects one's interpersonal, intrapersonal, and academic performance and argues that stress can cause insomnia, lowered immunity, migraines, and other physical problems. Jin et al. <cit.>, while analyzing emotions, reported promising results, claiming that combining FFT, PCA, and SVM yielded results that were about 90 percent accurate. This suggests that the feature extraction stage, rather than the complexity of the classification algorithm used, determines the accuracy of a model, and that categorization systems can then offer consistent accuracy and recall. Liu et al. <cit.> proposed a fractal-based algorithm to identify and visualize emotions in real time. They found that the gamma band could be used to classify emotion. For emotion recognition, the authors analyzed different kinds of EEG features to find the trajectory of changes in emotion and then proposed a simple method to track the changes in emotion over time. In that work, the authors also built a bimodal deep autoencoder and a single deep autoencoder to produce shared representations of audio and images, and explored the possibility of recognizing emotion from physiological signals. Two different fusion strategies were used to combine eye movement and EEG data, and the framework was tested on cross-modal learning tasks, introducing a novel approach that combines deep learning and physiological signals. The DEAP Dataset was also utilized by the following authors to analyze emotional states. Xing et al. <cit.> developed a stacked autoencoder (SAE) to decompose EEG data and classify them using an LSTM model. The observed valence accuracy rate was 81.1 percent, while the observed arousal accuracy rate was 74.38 percent. Chao et al. <cit.> investigated a deep learning architecture, reaching an accuracy of 75.92 percent for arousal and 76.83 percent for valence states. Mohammadi et al. <cit.> classified arousal and valence using the entropy and energy of each frequency band and reached an accuracy of 84.05 percent for arousal and 86.75 percent for valence. Xian et al. <cit.> utilized MCF with statistical, frequency, and nonlinear dynamic characteristics to predict valence and arousal with 83.78 percent and 80.72 percent accuracy, respectively.
Ang et al. <cit.> developed a classification method based on wavelet transform and time-frequency characteristics with an ANN classifier. For the joyful emotion, the classification rate was 81.8 percent for the mean and 72.7 percent for the standard deviation. The performance of frequency-domain characteristics for sad emotions was 72.7 percent. Alhagry et al. <cit.> developed a deep learning technique for identifying emotions from raw EEG data that used long short-term memory (LSTM) neural networks to learn features from EEG signals and then classified these characteristics as low/high arousal, valence, and liking. The DEAP dataset was used to evaluate the technique. The method's average accuracy was 85.45 percent for arousal and 85.65 percent for valence. § METHODOLOGY §.§ Data Materials For our research, we have chosen the DEAP <cit.> dataset. The DEAP dataset for emotion classification is freely available on the internet. A number of physiological signals found in the DEAP dataset can be utilized to determine emotions. It includes information on four main types of states: valence, arousal, dominance, and liking. Due to the use of various sample rates and different types of tests in data gathering, the DEAP Dataset is an amalgamation of many different data types. EEG data was gathered from 32 participants, comprising 16 men and 16 women, over 32 channels. The EEG signals were collected by playing 40 different music videos, each lasting 60 seconds, and recording the responses. After viewing each video, participants were asked to rate it on a scale of one to nine points. The number of videos (40) multiplied by the number of participants (32) gives the total number of video ratings received, i.e. 1280. The signals were then downsampled from 512 Hz to 128 Hz and denoised using bandpass and lowpass frequency filters. The 512 Hz EEG signals were acquired from the following 32 sensor positions (using the international 10-20 positioning system): Fp1, AF3, F3, F7, FC5, FC1, C3, T7, CP5, CP1, P3, P7, PO3, O1, Oz, Pz, Fp2, AF4, Fz, F4, F8, FC6, FC2, Cz, T8, CP2, P4, P8, PO4, and O2. Frontal face video was also recorded for 22 of the participants. Several signals, including EEG, electromyograms, respiration, plethysmographs, temperature, and so on, were gathered as 40-channel data during each subject's 40 trials, with each channel representing a different signal. EEG data is stored in 32 of the 40 available channels. The remaining channels record EOG, EMG, ECG, GSR, RSP, TEMP and PLET data. §.§ Data Visualization We extracted valence and arousal ratings from the dataset. The combination of valence and arousal can be converted to emotional states: High Arousal Positive Valence (Excited, Happy), Low Arousal Positive Valence (Calm, Relaxed), High Arousal Negative Valence (Angry, Nervous) and Low Arousal Negative Valence (Sad, Bored). We have analyzed the changes in emotional state along with the number of trials for each group by following Russell's circumplex model, which helped classify the DEAP dataset. To visualize Russell's scale with real numbers, the DEAP dataset employs self-assessment manikins (SAMs) <cit.>. Based on the self-evaluation ratings, 1–5 and 5–9 were chosen as the scales <cit.>. The label was changed to "positive" if the rating was greater than or equal to 5, and to "negative" if it was less than 5.
We utilized a different way to determine "positive" and "negative" values. The difference in valence and arousal was rated on a scale of 1 to 9 by the participants of DEAP. We believe that categorizing the dataset using a mean value is not a good approach because there may be no participants who rate between 1-2 and 4-6. As a result, using a mean value to derive the separation could lead to bias. On the other hand, all users may have given ratings ranging from 5 to 9. To avoid biased analysis, we wanted to utilize the value from the mid range to separate the positive and negative values. As a result, to distinguish between "positive" and "negative" numbers, we used median values. We looked for a positive or negative valence as well as a positive or negative arousal level in each experiment. Numbers greater than the median are considered "positive", while values less than the median are considered "negative". Four labels for our research have been created: high arousal low valence (HALV), low arousal high valence (LAHV), high arousal high valence (HAHV), and low arousal low valence (LALV). §.§ Channel Selection We used two types of studies for FFT analysis. For making an RNN model with LSTM with the help of FFT processing, Emotiv Epoch+ was fitted with a total of 14 channels, which were carefully selected. The number of channels is [1,2,3,4,6,11,13,17,19,20,21,25,29,31] .The number of bands is 6. band = [4,8,12,16,25,45] . We also discovered the relation between Time domain and Frequency domain with the help of FFT in another study. §.§ FFT Fourier Transform (FFT) is a mathematical procedure that computes the discrete Fourier transform (DFT) of a sequence. It is used to solve a variety of different types of equations or graphically depict a range of frequency activity. Fourier analysis is a signal processing technique used to convert digital signals (x) of length (N) from the timedomain to the frequency domain (X) and vice versa. FFT is a technique that is widely utilized when estimating the Power Spectral Density of an EEG signal. PSD is an abbreviation for Power spectral distribution at a specific frequency and can be computed directly on the signal using FFT or indirectly by altering the estimated autocorrelation sequence. §.§ RNN and LSTM RNNs have risen to prominence as computing power has improved, data volumes have exploded, and long short-term memory (LSTM) technology became available in the 1990s. RNNs may be incredibly precise in forecasting what will happen next because of their internal memory, which allows them to retain key input details. The reason they're so popular is because they're good at handling sequential data kinds like time series and voice. Recurrent neural networks have the advantage over other algorithms in that they can gain a deeper understanding of a sequence and its context. A short-term memory is common in RNNs. When linked with an LSTM, they have a long-term memory as well (more on that later). Due to the data sequence providing important information about what will happen next, an RNN may do jobs that other algorithms are unable to complete. <cit.> Long short-term memory networks (LSTMs) are a sort of recurrent neural network extension that expands memory effectively. As a result, it's well-suited to learning from big experiences separated by long periods of time. RNN extensions that increase memory capacity are known as long short-term memory (LSTM) networks. The layers of an RNN are built using LSTMs. 
RNNs can either assimilate new information, forget it, or give it enough importance to alter the result thanks to LSTMs, which assign “weights” to data. The layers of an RNN, which is sometimes referred to as an LSTM network, are built using the units of an LSTM. With the help of LSTMs, RNNs can remember inputs for a long time. Because LSTMs store data in a memory comparable to that of a computer, this is the case. The LSTM can read, write, and delete information from its memory. This memory can be thought of as a gated cell, with gated signifying that the cell decides whether to store or erase data (i.e., whether to open the gates) based on the value it assigns to the data. To allocate importance, weights are utilized, which the algorithm also learns. This basically means that it learns over time which data is critical and which is not. Long-Short-Term Memory Networks (LSTMs) are recurrent neural network subtypes (RNN). §.§ Feature Extraction Extracting features from EEG data can be done in a variety of methods. Periodogram and power spectral density calculations and combining band waves of various frequencies are required for feature extraction with the help of FFT. The Welch method is <cit.> a modified segmentation scheme for calculating the average periodogram. Generally the Welch method of the PSD can be described by the equations below, the power spectra density, P(f) equation is defined first. Then, for each interval, the Welch Power Spectrum, P_welch (f), is given as the mean average of the periodogram. P(f)=1/M U|∑_n=0^M-1 x_i(n) w(n) e^-j 2 π f|^2 P_welch (f)=1/L∑_i=0^L-1 P(f) The power spectral density (PSD) shows how a signal’s power is distributed in the frequency domain. Among the PSD estimators, Welch’s method and the multitaper approach have demonstrated the best results <cit.>. The input <cit.> signal x [n], n = 0,1,2,…,N-1 is divided into a number of overlapping segments. Let M be the length of each segment, using n=0,1, 2,…,M-1, M. x_i = x [i×M/2 + n] where n=0,…,M-1,i=0,1,2,…,N-1 Each segment is given a smooth window w(n). In most cases, we employ the Hamming window at a time. The Hamming window formula for each segment is as follows: w(n)=0.54-0.46cos[2nπ/M] Here, U=(1 / M) ∑_n=0^M-1 w^2(n) denotes the mean power of the window w(n). So, M U=∑_n=0^M-1 w^2(n) denotes the energy of the window function w(n) with length M. It is to be noted that, L denotes the number of data segment. For validation, ”Accuracy” is the most popular metric. However, a model’s performance cannot be judged based only by the accuracy. So, we have used other metrics, such as - precision, recall, and f-score. The metrics were calculated using the mean of metrics for all the folds through cross validation. § RESULTS In our research, we tried to come up with a relation among EEG channel, time domain and frequency domain using Welch’s Periodogram with the help of band wave and FFT. The band waves identify the following emotions. The following figure shows the time domain of the EEG signals. From the figure, we can see that there has been lots of electrical activities going on the EEG channels. And from the time domain, we can get the graphs of the frequency domain along with Power Spectral Density across the channels with the help of Fourier Transformation. In our study, we used Fast Fourier Transformation, the sine wave was taken from 4 Hz to 45 Hz. So, by comparing the sine wave with the time domain, we can get the PSD at the frequency domain. 
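As an illustration of the Welch/PSD feature extraction described in the previous section, the following Python sketch uses SciPy. The segment length, overlap, and band edges are example values consistent with the text; the band names are our own labels, and this is not the study's exact preprocessing code.

```python
import numpy as np
from scipy.signal import welch

fs = 128                       # DEAP EEG sampling rate after downsampling (Hz)
bands = {"theta": (4, 8), "alpha": (8, 12), "low_beta": (12, 16),
         "beta": (16, 25), "gamma": (25, 45)}   # edges follow the 4-45 Hz bands used in the paper

def band_powers(eeg_channel, fs=fs, nperseg=256):
    """Welch PSD (Hamming window, 50% overlap) and average power per band for one EEG channel."""
    freqs, psd = welch(eeg_channel, fs=fs, window="hamming",
                       nperseg=nperseg, noverlap=nperseg // 2)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}

# usage sketch on a random 60-second signal
powers = band_powers(np.random.randn(60 * fs))
```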
From the time-frequency domain, we can see the electrical activity in brain in multiple time intervals which shows a relation between different frequencies, brain activity and voltage. For the first FFT analysis, the research calculates mean, std, min, first quartile, median, third quartile and max values of 1240 trials of the six regions based on sensors and four band power values. For this research, we used SVM and K-NN classifiers. SVM classifier used “linear” kernel in this research. The research also calculates the accuracy of Valence and Arousal. We attempted to experience the variations in electrical activities in the brain over time in the first study. To extract EEG signals, the 32 sensor sites were separated into globally recognizable zones. The position of the electrode are frontal, central, temporal, parietal, and occipital placements, respectively. The topographical maps are used to visualize spatial distribution of activity. This useful visualization method allows us to examine how data changes over one time point to another. The subject in this study was watching a video while we analyzed the changes in electrical activity from 0.153 to 0.273 seconds. We can see the changes of electrical activity in voltage based various frequencies as band waves are determined by the range of frequency and different band waves indicate different ranges of emotion. From this, it can be said that the subject can feel different emotions in a particular time point. For the second research, during the FFT processing, we employed meta data for the purpose of doing a meta vector analysis. Raw data was split over a time span of 2 seconds, with each slice having a 0.125-second interval between it. A two-second FFT of channel was carried out in different frequencies in a sequence. Emotiv Epoch+ was fitted with a total of 14 channels, which were carefully selected. The number of channels is [1,2,3,4,6,11,13,17,19,20,21,25,29,31] .The number of bands is 6. band = [4,8,12,16,25,45] . A band power of 2 seconds on average is used. The window size was 256 with a step size of 16, with each update occurring once every 0.125 seconds. The sampling rate was set to 128 hertz. The FFT was then performed on all of the subjects using these settings in order to obtain the required output. Neural networks and other forms of artificial intelligence require a starting collection of data, referred to as a training dataset, that serves as a foundation for subsequent application and use. This dataset serves as the foundation for the program's developing information library. Before the model can interpret and learn from the training data, it must be appropriately labeled. The lowest value of the data is 200 and the greatest value is above 2000, which means that trying to plot it will result in a lot of irrelevant plots, which will make conducting the analysis tough. The objective of machine learning is to create a plot and then optimize it further in order to obtain a pattern. And if there are significant differences between the plotted points, it will be unable to optimize the data. As a result, in order to fix this issue, the values have been reduced to their bare minimum, commonly known as scaling. The values of the data will not be lost as a result of scaling; instead, the data will be optimized to the point where there is little difference between the plotted points. In order to achieve this, StandardScaler must transform your data into a distribution with a mean of zero and a standard deviation of one. 
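A rough, self-contained sketch of this sliding-window band-power extraction and the subsequent feature scaling is given below, assuming a (channels x samples) trial array at 128 Hz. The channel indices and band edges are taken from the description above; the trial length and indexing convention are assumptions, and the code is illustrative rather than a reproduction of the original preprocessing.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

fs, win, step = 128, 256, 16                 # 2-second windows updated every 0.125 s
channels = [1, 2, 3, 4, 6, 11, 13, 17, 19, 20, 21, 25, 29, 31]   # indexing convention assumed
band_edges = [4, 8, 12, 16, 25, 45]

def fft_band_vector(window_data):
    """Average FFT band power per channel for one 2-second window -> meta feature vector."""
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    power = np.abs(np.fft.rfft(window_data, axis=1)) ** 2
    feats = [power[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in zip(band_edges[:-1], band_edges[1:])]
    return np.concatenate(feats)             # 14 channels x 5 bands = 70 features

def extract_features(eeg):
    """eeg: (32, n_samples) raw trial; returns one feature row per sliding window."""
    data = eeg[channels]
    starts = range(0, data.shape[1] - win + 1, step)
    return np.array([fft_band_vector(data[:, s:s + win]) for s in starts])

X = extract_features(np.random.randn(32, 8064))   # example 63 s trial at 128 Hz
X = StandardScaler().fit_transform(X)             # zero mean, unit variance per feature column
```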
When dealing with multivariate data, this is done feature-by-feature to ensure that the data is accurate (in other words independently for each column of the data). Because of the way the data is distributed, each value in the dataset will be deducted from the mean and then divided by the standard deviation of the dataset. After that, we divided the data set into two parts: a training data set and a testing data set. Training will be carried out on 75% of the data, and testing will be carried out on 25% of the data. A total of 456768 data were used in the training process. A total of 152256 data were used in the testing. RNN has been kept sequential. The first layer LSTM of sequential model takes input of 512. The second layer takes input of 256. The third and fourth layer takes an input of 128 and 64. And, the final layer LSTM of sequential model takes input of 10. Since we are conducting classification where we will need 0 or 1 that is why sigmoid has been used. The activation functions used are relu and for the last part sigmoid. The rectified linear activation function, abbreviated ReLU, is a piecewise linear function that, if the input is positive, outputs the value directly; otherwise, it outputs zero. Batch normalization was used. Batch normalization is a method for training extremely deep neural networks in which the inputs to a layer are standardized for each mini-batch. This results in a stabilization of the learning process and a significant drop in the total of training epochs required for training deep networks. Through randomly dropping out nodes while training, a single model can be utilized to simulate having a huge variety of distinct network designs.[2] This is referred to as dropout, and it is an extremely computationally efficient and amazingly successful regularization technique for reducing overfitting and improving generalization error in all types of deep neural networks. In our situation, dropout rates began at 30%, increased to 50%, then 30%, 30%, 30%, and eventually 20%. We worked with three-dimensional datasets; however, when we converted to a dense layer, we obtained a one-dimensional representation in order to make a prediction. RMSprop was used as the optimizer with a learning rate of 0.001, a rho value of 0.9, and an epsilon value of 1e-08. RMSprop calculates the gradient by dividing it by the root of the moving (discounted) average of the square of the gradients. This application of RMSprop makes use of conventional momentum rather than Nesterov momentum. Additionally, the centered version calculates the variance by calculating a moving average of the gradients. As we can see, accuracy increases very gradually in this case, and learning rate plays a major part. If we increased the learning rate, accuracy would also increase rapidly, and when optimization is reached, the process would reverse, with accuracy decreasing at a faster rate. That is why the rate of learning has been reduced. When one zero is removed, the accuracy decreases significantly. As our loss function, we utilized the Mean Squared Error. The Mean Squared Error (MSE) loss function is the most basic and extensively used loss function, and it is typically taught in introductory Machine Learning programs. To calculate the MSE, take the difference between your model's predictions and the ground truth, square it, and then average it across the whole dataset. The MSE can never be negative since we are constantly squaring the errors. To compute loss, we utilized mean squared error. 
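Based on the layer sizes, dropout rates, and optimizer settings listed above, a hedged Keras sketch of the described network could look like the following; the input shape, output width, and exact placement of the dropout and batch-normalization layers are assumptions, since the text does not spell them out.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout, BatchNormalization
from tensorflow.keras.optimizers import RMSprop

model = Sequential([
    LSTM(512, return_sequences=True, activation='relu', input_shape=(1, 70)),  # input shape assumed
    Dropout(0.3), BatchNormalization(),
    LSTM(256, return_sequences=True, activation='relu'),
    Dropout(0.5), BatchNormalization(),
    LSTM(128, return_sequences=True, activation='relu'),
    Dropout(0.3), BatchNormalization(),
    LSTM(64, return_sequences=True, activation='relu'),
    Dropout(0.3), BatchNormalization(),
    LSTM(10, activation='relu'),
    Dropout(0.3), BatchNormalization(), Dropout(0.2),
    Dense(4, activation='sigmoid'),          # one output per valence/arousal label (width assumed)
])
model.compile(optimizer=RMSprop(learning_rate=0.001, rho=0.9, epsilon=1e-08),
              loss='mse', metrics=['accuracy'])
model.summary()
```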
Because of the squaring portion of the function, the MSE is excellent for guaranteeing that the trained model does not contain any outlier predictions with significant mistakes. Because of this, the MSE places greater emphasis on outlier predictions with large errors. We tried our best to reduce the percentage of value loss and increase the accuracy rate. We saved the model and kept track by every 50 epochs. In the first picture, we can see that for the first 50 epochs the training loss 0.1588 and validation loss reduced to 0.06851 and 0.06005. And the training accuracy rate increased from 9.61 percent to 45.784 percent and validation accuracy increased to 53.420 pecent. For the second 50 epochs, the training loss reduced to 0.06283 and the validation loss reduced to .05223 where the training accuracy increased to 51.661 percent and validation accuracy increased to 60.339 percent. For the third 50 epochs, the training loss reduced to 0.05992 and the validation loss reduced to .04787 where the training accuracy increased to 54.492 percent and validation accuracy increased to 64.413 percent. After 200 epochs the ratio started to change at a very slow rate.We ran 1000 epochs and got the training accuracy rate of 69.21% and the validation accuracy rate was 78.28%. § CONCLUSION To summarize, in this research, we describe the EEG-based emotion recognition challenge, as well as existing and proposed solutions to this problem. Emotion detection by the use of EEG waves is a relatively new and exciting area of study and analysis. To identify and evaluate on numerous emotional states using EEG signals acquired from the DEAP Dataset, SVM (Support Vector Machine), KNN (K-Nearest Neighbor). According to the findings, the suggested method is a very promising option for emotion recognition, owing to its remarkable ability to learn features from raw data in a short period of time. When compared to typical feature extraction approaches, it produces higher average accuracy over a larger number of people. 00 b1 S. D. Rama Chaudhary Ram Avtar Jaswal, “Emotion recognition based on eegusing deap dataset,”European Journal of Molecular amp; Clinical Medicine,vol. 8, no. 3, pp. 3509–3517, 2021,issn: 2515-8260. b2 X. Cheng, C. Pei Ying, and L. Zhao, “A study on emotional feature analysisand recognition in speech signal,”Measuring Technology and MechatronicsAutomation, International Conference on, vol. 1, pp. 418–420, Apr. 2009.doi:10.1109/ICMTMA.2009.89. b3 C. Huang, Y. Jin, Q. Wang, L. Zhao, and C. Zou, “Multimodal emotion recog-nition based on speech and ecg signals,” vol. 40, pp. 895–900, Sep. 2010.doi:10.3969/j.issn.1001-0505.2010.05.003. b4 Y. Wang, X. Yang, and J. Zou, “Research of emotion recognition based onspeech and facial expression,”TELKOMNIKA Indonesian Journal of Electri-cal Engineering, vol. 11, Jan. 2013.doi: 10.11591/telkomnika.v11i1.1873. b5 S. A. Hussain and A. S. A. A. Balushi, “A real time face emotion classificationand recognition using deep learning model,”Journal of Physics: ConferenceSeries, vol. 1432, p. 012 087, Jan. 2020.doi: 10 . 1088 / 1742 - 6596 / 1432 / 1 /012087. [Online]. Available: https : / / doi . org / 10 . 1088 / 1742 - 6596 / 1432 / 1 /012087. b6 V. Vanitha and P. Krishnan, “Real time stress detection system based on eegsignals,” vol. 2016, S271–S275, Jan. 2016. b7 J. Jin, X. Wang, and B. Wang, “Classification of direction perception eegbased on pca-svm,” inThird International Conference on Natural Computa-tion (ICNC 2007), vol. 2, 2007, pp. 
116–120.doi: 10.1109/ICNC.2007.298. b8 W. Liu, W.-L. Zheng, and B.-L. Lu, “Emotion recognition using multimodaldeep learning,” vol. 9948, Oct. 2016,isbn: 978-3-319-46671-2.doi: 10.1007/978-3-319-46672-958. b9 X. Xing, Z. Li, T. Xu, L. Shu, B. Hu, and X. Xu, “Sae+lstm: A new framework for emotion recognition from multi-channel eeg,” Frontiers in Neurorobotics,vol. 13, p. 37,2019, issn: 1662-5218. doi: 10.3389/fnbot.2019.00037. [Online]. Available: https://www.frontiersin.org/article/10.3389/fnbot.2019.00037. b10 H. Chao, H. Zhi, D. Liang, and Y. Liu, “Recognition of emotions using multi-channel eeg data and dbn-gc-based ensemble deep learning framework,”Computational Intelligence and Neuroscience, vol. 2018, pp. 1–11, Dec. 2018.doi:10.1155/2018/9750904. b11 Z. Mohammadi, J. Frounchi, and M. Amiri, “Wavelet-based emotion recognition system using eeg signal,”Neural Computing and Applications, vol. 28,Aug. 2017.doi: 10.1007/s00521-015-2149-8. b12 X. Li, J.-Z. Yan, and J.-H. Chen, “Channel division based multiple classifiersfusion for emotion recognition using eeg signals,”ITM Web of Conferences,vol. 11, p. 07 006, Jan. 2017.doi: 10.1051/itmconf/20171107006. b13 A. Ang and Y. Yeong, “Emotion classification from eeg signals using time-frequency-dwt features and ann,”Journal of Computer and Communications,vol. 05, pp. 75–79, Jan. 2017.doi: 10.4236/jcc.2017.53009. b14 S. Alhagry, A. Aly, and R. El-Khoribi, “Emotion recognition based on eeg using lstm recurrent neural network,”International Journal of Advanced Computer Science and Applications, vol. 8, Oct. 2017.doi: 10 . 14569 / IJACSA .2017.081046. b15 "DEAP: A Database for Emotion Analysis using Physiological Signals (PDF)", S. Koelstra, C. Muehl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, I. Patras, EEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18-31, 2012. b16 J. D. Morris, “Observations: Sam: The self-assessment manikin an efficient cross-cultural measurement of emotional response 1,”Journal of Advertising Research, 1995. b17 D. Wang and Y. Shang, “Modeling physiological data with deep belief net-works,”International journal of information and education technology (IJIET),vol. 3, pp. 505–511, Jan. 2013.doi: 10.7763/IJIET.2013.V3.326. b18 X. Li, P. Zhang, D. Song, G. Yu, Y. Hou, and B. Hu, “Eeg based emotion identification using unsupervised deep feature learning,” 2015. b19 M. A. Asghar, M. J. Khan, Fawad, Y. Amin, M. Rizwan, M. Rahman, S. Bad-nava, S. S. Mirjavadi, and S. S. Mirjavadi, “Eeg-based multi-modal emotion recognition using bag of deep features: An optimal feature selection approach,”Sensors (Basel, Switzerland), vol. 19, no. 23, Nov. 2019,issn: 1424-8220.doi:10 . 3390 / s19235218. [Online]. Available: https : / / europepmc . org / articles /PMC6928944. b21 W. Ng, A. Saidatul, Y. Chong, and Z. Ibrahim, “Psd based features extraction for eeg signal during typing task,”IOP Conference Series: Materials Science and Engineering, vol. 557, p. 012 032, Jun. 2019.doi: 10.1088/1757- 899X/557/1/012032. b22 M. Ghofrani Jahromi, H. Parsaei, A. Zamani, and D. W. Stashuk, “Cross comparison of motor unit potential features used in emg signal decomposition,”IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26,no. 5, pp. 1017–1025, 2018.doi: 10.1109/TNSRE.2018.2817498. b23 Q. Xiong, X. Zhang, W.-F. Wang, and Y. Gu, “A parallel algorithm framework for feature extraction of eeg signals on mpi,”Computational and Mathematical Methods in Medicine, vol. 2020, pp. 1–10, May 2020.doi: 10 . 
1155/2020/9812019. b24 N. Donges, "A guide to RNN: Understanding recurrent neural networks and LSTM networks," [Online]. Available: https://builtin.com/data-science/recurrent-neural-networks-and-lstm (accessed: 24.09.2021).
http://arxiv.org/abs/2307.04037v2
20230708195151
Employing Drones in Agriculture: An Exploration of Various Drone Types and Key Advantages
[ "E. C. Nunes" ]
cs.RO
[ "cs.RO" ]
Employing Drones in Agriculture: An Exploration of Various Drone Types and Key Advantages 1st Eduardo Carvalho Nunes Department of Engineering University of Trás-os-Montes and Alto Douro 5000-801, Vila Real, Portugal ORCID: 0000-0002-5345-8854 ===================================================================================================================================================================== This article explores the use of drones in agriculture and discusses the various types of drones employed for different agricultural applications. Drones, also known as unmanned aerial vehicles (UAVs), offer numerous advantages in farming practices. They provide real-time and high-resolution data collection, enabling farmers to make informed irrigation, fertilization, and pest management decisions. Drones assist in precision spraying and application of agricultural inputs, minimizing chemical wastage and optimizing resource utilization. They offer accessibility to inaccessible areas, reduce manual labor, and provide cost savings and increased operational efficiency. Drones also play a crucial role in mapping and surveying agricultural fields, aiding crop planning and resource allocation. However, challenges such as regulations and limited flight time need to be addressed. The advantages of using drones in agriculture include precision agriculture, cost and time savings, improved data collection and analysis, enhanced crop management, accessibility and flexibility, environmental sustainability, and increased safety for farmers. Overall, drones have the potential to revolutionize farming practices, leading to increased efficiency, productivity, and sustainability in agriculture. Drone, Agriculture, UAV § INTRODUCTION The use of drones in agriculture has gained significant attention in recent years due to their potential to revolutionize farming practices. Drones, also known as unmanned aerial vehicles (UAVs), offer a range of applications that can enhance efficiency, productivity, and sustainability in agriculture. One of the key advantages of using drones in agriculture is their ability to provide real-time and high-resolution data collection <cit.>. Drones equipped with cameras, sensors, and imaging technologies can capture detailed imagery of crops, soil conditions, and field topography <cit.>. This data can be used for crop monitoring, assessment, and precision agriculture practices <cit.>. By analyzing this data, farmers can make informed decisions regarding irrigation, fertilization, and pest management, leading to optimized resource utilization and improved crop yields <cit.>. Drones also play a crucial role in precision spraying and application of agricultural inputs <cit.>. With their ability to navigate through fields and deliver targeted treatments, drones can reduce chemical wastage, minimize environmental impact, and improve the efficiency of pesticide and fertilizer application <cit.>. This targeted approach helps protect beneficial insects, reduce water pollution, and optimize resource utilization <cit.>. Furthermore, drones offer accessibility to inaccessible or inaccessible areas by traditional means <cit.>. They can fly at low altitudes and capture data from different angles and perspectives, providing a comprehensive view of the field <cit.>. This enables farmers to monitor large farmland areas quickly and efficiently, reducing the time and labor required for manual inspections <cit.>. 
Drones can cover large farmland areas in a fraction of the time it would take using traditional methods, leading to cost savings and increased operational efficiency <cit.>. In addition to data collection and monitoring, drones can assist in mapping and surveying agricultural fields. They can create high-resolution maps and 3D models, providing valuable information for crop planning, land management, and resource allocation. Drones equipped with advanced sensors, such as LiDAR or hyperspectral cameras, can capture detailed data for precise analysis and decision-making <cit.>. This enables farmers to identify areas of nutrient deficiencies, optimize irrigation practices, and implement site-specific management strategies. The use of drones in agriculture is challenging. Regulations and licensing requirements for drone operation vary across countries and regions, and compliance with these regulations is essential to ensure safe and responsible drone use <cit.>. Additionally, drones' limited flight time and battery capacity can pose challenges in large-scale farming operations <cit.>. However, advancements in drone technology, such as improved battery life and payload capacity, are addressing these limitations and expanding the possibilities for drone applications in agriculture. § DIFFERENT TYPES OF DRONES USED IN AGRICULTURE In agriculture, different types of drones are used for various applications. These drones offer unique capabilities and functionalities that cater to specific agricultural needs. Some of the commonly used types of drones in agriculture include: * Multi-Rotor Drones: Multi-rotor drones (Figure <ref>), such as quadcopters and hexacopters, are popular in agriculture due to their maneuverability and stability <cit.>. They are equipped with multiple rotors that allow them to hover in place, fly at low altitudes, and capture high-resolution imagery. Multi-rotor drones are suitable for tasks that require close and contained object capture, such as monitoring crop health, detecting pests and diseases, and applying targeted treatments <cit.>. * Fixed-Wing Drones: Fixed-wing drones (Figure <ref>) have a wing-like structure and are designed to fly like airplanes <cit.>. They are known for their long-flight endurance and ability to cover large areas. Fixed-wing drones are commonly used for mapping and surveying agricultural fields, as they can fly faster and cover more considerable distances. However, they require a runway for takeoff and landing, which can be a limitation in specific agricultural settings. * Hybrid Drones: Hybrid drones (Figure <ref>) combine the features of multi-rotor and fixed-wing drones <cit.>. They can take off and land vertically like multi-rotor drones and then transition to fixed-wing flight for longer endurance and coverage <cit.>. Hybrid drones are suitable for applications that require both close-range imaging and large-scale mapping, providing flexibility and versatility in agricultural operations. * Thermal Imaging Drones: Thermal imaging drones (Figure <ref>) are equipped with thermal cameras that capture infrared radiation emitted by objects <cit.>. These drones are used in agriculture to monitor crop health, detect irrigation issues, and identify areas of heat stress or pest infestation <cit.>. Thermal imaging drones can provide valuable insights into the temperature distribution and thermal patterns in agricultural fields, aiding precision agriculture practices. 
* Spraying Drones: Spraying drones (Figure <ref>), also known as agricultural drones or crop dusting drones, are specifically designed for the targeted application of pesticides, fertilizers, and other agricultural inputs <cit.>. These drones are equipped with spraying systems that can accurately and efficiently deliver chemicals to crops, reducing the need for manual labor and minimizing chemical wastage <cit.>. Spraying drones offer precise and controlled applications, reducing environmental impact and optimizing resource utilization. * Surveillance Drones: Surveillance drones (Figure <ref>) are used in agriculture for monitoring and security purposes <cit.>. These drones are equipped with cameras and sensors that capture real-time video footage and imagery, allowing farmers to monitor their fields, livestock, and infrastructure remotely <cit.>. Surveillance drones can help detect unauthorized activities, track animal movements, and identify potential threats or risks in agricultural operations. * Mapping and Surveying Drones: Mapping and surveying drones (Figrue <ref>) are used to create high-resolution maps and 3D models of agricultural fields <cit.>. These drones have advanced sensors, such as LiDAR (Light Detection and Ranging) or photogrammetry cameras, to capture detailed and accurate data <cit.>. Mapping and surveying drones are valuable tools for precision agriculture, enabling farmers to analyze topography, monitor soil conditions, and plan efficient land management strategies. * Payload-Specific Drones: Drones are designed for specific agricultural applications besides the above types. For example, there are drones equipped with hyperspectral sensors for detailed analysis of crop health and nutrient content <cit.>. There are also drones with specialized sensors for monitoring soil moisture levels, detecting weed infestations, or assessing plant growth parameters <cit.>. These payload-specific drones (Figure <ref>) cater to specific data collection needs in agriculture. § ADVANTAGES OF USING DRONES IN AGRICULTURE Using drones in agriculture offers several advantages contributing to improved efficiency, productivity, and sustainability in agricultural practices. The advantages of using drones in farming are: * Precision Agriculture: Drones enable precision agriculture practices by providing high-resolution imagery and data collection capabilities <cit.>. They can capture detailed information about crop health, soil conditions, and pest infestations, allowing farmers to make informed decisions and apply targeted treatments <cit.>. This precision approach helps optimize resource utilization, reduce input wastage, and increase crop yields <cit.>. * Cost and Time Savings: Drones can cover large areas of farmland quickly and efficiently, reducing the time and labor required for manual inspections and data collection <cit.>. They can perform tasks such as crop monitoring, mapping, and spraying in a fraction of the time it would take using traditional methods <cit.>. This leads to cost savings by minimizing the need for manual labor and reducing the use of resources such as water, fertilizers, and pesticides <cit.>. * Improved Data Collection and Analysis: Drones equipped with various sensors, such as cameras, thermal imaging, and multispectral sensors, can collect a wide range of data about crops, soil, and environmental conditions <cit.>. 
This data can be used for detailed analysis and monitoring, enabling farmers to detect early signs of crop stress, nutrient deficiencies, or disease outbreaks <cit.>. The data collected by drones can be processed using advanced analytics and machine learning algorithms to generate actionable insights for better decision-making <cit.>. * Enhanced Crop Management: Drones provide real-time and up-to-date information about crop health, allowing farmers to implement timely interventions and optimize crop management practices <cit.>. For example, drones can help identify areas of the field that require additional irrigation or fertilization, enabling precise application and reducing waste <cit.>. They can also assist in monitoring crop growth, estimating yield potential, and predicting harvest times <cit.>. * Accessibility and Flexibility: Drones offer accessibility to areas that are difficult to reach or inaccessible by traditional means, such as steep slopes or dense vegetation <cit.>. They can fly at low altitudes and capture data from different angles and perspectives, providing a comprehensive view of the field <cit.>. Drones can be deployed quickly and easily, allowing farmers to respond rapidly to changing conditions or emergencies <cit.>. * Environmental Sustainability: Using drones in farming can contribute to environmental sustainability by reducing the use of chemicals and minimizing the environmental impact of agricultural practices <cit.>. Drones enable targeted spraying of pesticides and fertilizers, reducing the amount of chemicals applied and minimizing their dispersion into the environment <cit.>. This targeted approach helps protect beneficial insects, reduce water pollution, and promote ecological balance <cit.>. * Safety: Drones eliminate or reduce the need for farmers to physically access hazardous or difficult-to-reach areas, such as tall crops, steep terrains, or areas with potential safety risks <cit.>. This improves the safety of farmers and reduces the risk of accidents or injuries associated with manual labor <cit.>. § CONCLUSION Using drones in agriculture holds immense promise for revolutionizing farming practices and improving efficiency, productivity, and sustainability. The various types of drones available cater to specific agricultural needs, ranging from crop monitoring and assessment to precision spraying, mapping, and surveying. Drones provide real-time and high-resolution data collection, enabling farmers to make informed decisions regarding resource allocation and optimize crop management practices. They offer cost and time savings by reducing manual labor and minimizing the use of resources. The ability of drones to access inaccessible areas and provide comprehensive views of the fields enhances their usability and efficiency in large-scale farming operations. Furthermore, drones contribute to environmental sustainability by enabling targeted spraying, reducing chemical wastage, and minimizing the environmental impact of agricultural practices. The safety aspect of using drones must be considered, as they eliminate or reduce the need for farmers to access hazardous areas physically. Despite challenges such as regulations and limited flight time, advancements in drone technology are continually addressing these limitations. 
Overall, the advantages of using drones in agriculture are significant, and their integration into farming practices has the potential to transform the industry, leading to optimized resource utilization, improved crop yields, and sustainable agricultural practices. 00 10.1002/net.21818Otto, A., Agatz, N., Campbell, J., Golden, B. & Pesch, E. Optimization Approaches for Civil Applications of Unmanned Aerial Vehicles (UAVs) or Aerial Drones: A Survey. Networks. (2018) 10.1007/s41666-020-00080-6Nasajpour, M., Pouriyeh, S., Parizi, R., Dorodchi, M., Valero, M. & Arabnia, H. Internet of Things for Current COVID-19 and Future Pandemics: An Exploratory Study. Journal Of Healthcare Informatics Research. (2020) 10.3390/rs9010088Jakob, S., Zimmermann, R. & Gloaguen, R. The Need for Accurate Geometric and Radiometric Corrections of Drone-Borne Hyperspectral Data for Mineral Exploration: MEPHySTo—A Toolbox for Pre-Processing Drone-Borne Hyperspectral Data. Remote Sensing. (2017) 10.3390/s20051487Gao, D., Sun, Q., Hu, B. & Zhang, S. A Framework for Agricultural Pest and Disease Monitoring Based on Internet-of-Things and Unmanned Aerial Vehicles. Sensors. (2020) 10.1109/access.2020.2982086Castellanos, G., Deruyck, M., Martens, L. & Joseph, W. System Assessment of WUSN Using NB-IoT UAV-Aided Networks in Potato Crops. Ieee Access. (2020) 10.1038/s41598-020-67898-3Santangeli, A., Chen, Y., Kluen, E., Chirumamilla, R., Tiainen, J. & Loehr, J. Integrating Drone-Borne Thermal Imaging With Artificial Intelligence to Locate Bird Nests on Agricultural Land. Scientific Reports. (2020) 10.3390/land10020164Ayamga, M., Tekinerdogan, B. & Kassahun, A. Exploring the Challenges Posed by Regulations for the Use of Drones in Agriculture in the African Context. Land. (2021) 10.3390/drones6070160Javan, F., Samadzadegan, F., Gholamshahi, M. & Mahini, F. A Modified YOLOv4 Deep Learning Network for Vision-Based UAV Recognition. Drones. (2022) 10.1109/access.2021.3130900Dutta, A., Roy, S., Kreidl, O. & Bölöni, L. Multi-Robot Information Gathering for Precision Agriculture: Current State, Scope, and Challenges. Ieee Access. (2021) 10.5937/ekonomika1804091sSpalević, Ž., Ilic, M. & Savija, V. The Use of Drones in Agriculture: ICT Policy, Legal and Economical Aspects. Ekonomika. (2018) 10.3390/app11052138Kim, S., Ahmad, H., Moon, J. & Jung, S. Nozzle With a Feedback Channel for Agricultural Drones. Applied Sciences. (2021) 10.5194/isprs-archives-xlii-2-789-2018Oliveira, R., Khoramshahi, E., Suomalainen, J., Hakala, T., Viljanen, N. & Honkavaara, E. Real-Time and Post-Processed Georeferencing for Hyperpspectral Drone Remote Sensing. The International Archives Of The Photogrammetry Remote Sensing And Spatial Information Sciences. (2018) 10.1111/sum.12771Chen, Q., Li, L., Chong, C. & Wang, X. AI‐enhanced Soil Management and Smart Farming. Soil Use And Management. (2021) 10.1088/1757-899x/1259/1/012015Borikar, G., Gharat, C. & Deshmukh, S. Application of Drone Systems for Spraying Pesticides in Advanced Agriculture: A Review. Iop Conference Series Materials Science And Engineering. (2022) 10.1016/j.jairtraman.2020.101929Merkert, R. & Bushell, J. Managing the Drone Revolution: A Systematic Literature Review Into the Current Use of Airborne Drones and Future Strategic Directions for Their Effective Control. Journal Of Air Transport Management. (2020) 10.1371/journal.pone.0141006Lisein, J., Michez, A., Claessens, H. & Lejeune, P. Discrimination of Deciduous Tree Species From Time Series of Unmanned Aerial System Imagery. Plos One. 
(2015) 10.3390/drones5020041Krul, S., Pantos, C., Frangulea, M. & Valente, J. Visual SLAM for Indoor Livestock and Farming Using a Small Drone With a Monocular Camera: A Feasibility Study. Drones. (2021) 10.3390/agronomy11091809Huzaifah, M., Juraimi, A., Che'ya, N., Sulaiman, N., Manaf, M., Ramli, Z. & Motmainna, M. Using Remote Sensing and an Unmanned Aerial System for Weed Management in Agricultural Crops: A Review. Agronomy. (2021) 10.30657/pea.2021.27.10Dadi, V., Nikhil, S., Mor, R., Agarwal, T. & Arora, S. Agri-Food 4.0 and Innovations: Revamping the Supply Chain Operations. Production Engineering Archives. (2021) 10.22438/jeb/43/1/mrn-1912Verma, A., Singh, M., Parmar, R. & Bhullar, K. Feasibility Study on Hexacopter UAV Based Sprayer for Application of Environment-Friendly Biopesticide in Guava Orchard. Journal Of Environmental Biology. (2022) 10.1007/978-981-16-4369-9_25Kumaar, A. & Kumaar, A. GPS-Based Path Planning Algorithm for Agriculture Drones. (2021) 10.3390/agriculture13051075McCarthy, C., Nyoni, Y., Kachamba, D., Banda, L., Moyo, B., Chisambi, C., Banfill, J. & Hoshino, B. Can Drones Help Smallholder Farmers Improve Agriculture Efficiencies and Reduce Food Insecurity in Sub-Saharan Africa? Local Perceptions From Malawi. Agriculture. (2023) 10.1051/matecconf/202133502002Lee, C., Phang, S. & Mun, H. Design and Implementation of an Agricultural UAV With Optimized Spraying Mechanism. Matec Web Of Conferences. (2021) 10.1051/e3sconf/202338101048Zhichkin, K., Nosov, V., Zhichkina, L., Anichkina, O., Borodina, I. & Beketov, A. Efficiency of Using Drones in Agricultural Production. E3s Web Of Conferences. (2023) 10.1109/access.2019.2949703Farooq, M., Riaz, S., Abid, A., Abid, K. & Naeem, M. A Survey on the Role of IoT in Agriculture for the Implementation of Smart Farming. Ieee Access. (2019)
http://arxiv.org/abs/2307.05358v2
20230711154503
Combating Data Imbalances in Federated Semi-supervised Learning with Dual Regulators
[ "Sikai Bai", "Shuaicheng Li", "Weiming Zhuang", "Jie Zhang", "Song Guo", "Kunlin Yang", "Jun Hou", "Shuai Zhang", "Junyu Gao", "Shuai Yi" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Sikai Bai^1†, Shuaicheng Li^2†, Weiming Zhuang^3, Jie Zhang^1, Song Guo^1, Kunlin Yang^2, Jun Hou^2, Shuai Zhang^2, Junyu Gao^4, Shuai Yi^5. ^1Hong Kong Polytechnic University, ^2SenseTime Research, ^3Sony AI, ^4Northwestern Polytechnical University, ^5Nanyang Technological University. Federated learning has become a popular method to learn from decentralized heterogeneous data. Federated semi-supervised learning (FSSL) emerges to train models from a small fraction of labeled data due to label scarcity on decentralized clients. Existing FSSL methods assume independent and identically distributed (IID) labeled data across clients and consistent class distribution between labeled and unlabeled data within a client. This work studies a more practical and challenging scenario of FSSL, where data distribution is different not only across clients but also within a client between labeled and unlabeled data. To address this challenge, we propose a novel FSSL framework with dual regulators, FedDure. FedDure lifts the previous assumption with a coarse-grained regulator (C-reg) and a fine-grained regulator (F-reg): C-reg regularizes the updating of the local model by tracking the learning effect on labeled data distribution; F-reg learns an adaptive weighting scheme tailored for unlabeled instances in each client. We further formulate the client model training as bi-level optimization that adaptively optimizes the model in the client with two regulators. Theoretically, we show the convergence guarantee of the dual regulators. Empirically, we demonstrate that FedDure is superior to the existing methods across a wide range of settings, notably by more than 11% on CIFAR-10 and CINIC-10 datasets. †Equal contribution, corresponding authors § INTRODUCTION Federated learning (FL) is an emerging privacy-preserving machine learning technique <cit.>, where multiple clients collaboratively learn a model under the coordination of a central server without exchanging private data. Federated learning is a decentralized learning paradigm and has empowered a wide range of applications, including healthcare <cit.>, consumer products <cit.>, etc. The majority of existing FL works <cit.> assume that the private data in clients are fully labeled, but the assumption is unrealistic in real-world federated applications as annotating data is time-consuming, laborious, and expensive. To remedy these issues, federated semi-supervised learning (FSSL) is proposed to improve model performance with limited labeled and abundant unlabeled data on each client <cit.>. In particular, prior works <cit.> have achieved competitive performance by exploring inter-client mutual knowledge. However, they usually focus on mitigating heterogeneous data distribution across clients (external imbalance) while assuming that labeled and unlabeled training data are drawn from the same independent and identical distribution. These assumptions enforce strict requirements of data annotation and would not be practical in many real-world applications. A general case is that labeled and unlabeled data are drawn from different distributions (internal imbalance). For example, the photo gallery on a mobile phone contains many more irrelevant unlabeled images than the ones that are manually labeled for a classification task <cit.>.
Under this realistic and challenging FSSL scenario with external and internal imbalances, existing FSSL methods perform even worse than training with only a small portion of labeled data, as shown in Figure <ref>. The main reasons for the performance degradation are two-fold: 1) internal imbalance leads to intra-client skewed data distribution, resulting in heterogeneous local training; 2) external imbalance leads to inter-client skewed data distribution, resulting in client drift <cit.>. The co-occurrence of internal and external data imbalances amplifies the impact of client drifts and local inconsistency, leading to performance degradation. To address the above issues, we propose a new federated semi-supervised learning framework termed FedDure. FedDure explores two adaptive regulators, a coarse-grained regulator (C-reg) and a fine-grained regulator (F-reg), to flexibly update the local model according to the learning process and outcome of the client's data distributions. Firstly, C-reg regularizes the updating of the local model by tracking the learning effect on labeled data. By utilizing the real-time feedback from C-reg, FedDure rectifies inaccurate model predictions and mitigates the adverse impact of internal imbalance. Secondly, F-reg learns an adaptive weighting scheme tailored for each client; it automatically assigns a soft weight to each unlabeled instance to measure its contribution. This scheme automatically adjusts the instance-level weights to strengthen (or weaken) their confidence according to the feedback of F-reg on the labeled data to further address the internal imbalance. Besides, FedDure mitigates the client drifts caused by external imbalance by leveraging the global server model to provide guidance knowledge for C-reg. During the training process, FedDure utilizes the bi-level optimization strategy to alternately update the local model and dual regulators in local training. Figure <ref> shows that FedDure significantly outperforms existing methods and its performance is even close to fully supervised learning (orange line) under internal and external imbalance. In summary, the main contributions are three-fold: (1) This is the first work that investigates a more practical and challenging scenario of FSSL, where data distribution differs not only across clients (external imbalance) but also between labeled and unlabeled data within a client (internal imbalance). (2) We propose FedDure, a new FSSL framework that designs dual regulators to adaptively update the local model according to the unique learning processes and outcomes of each client. (3) We theoretically analyze the convergence of dual regulators and empirically demonstrate that FedDure is superior to the state-of-the-art FSSL approaches across multiple benchmarks and data settings, improving accuracy by 12.17% on CIFAR-10 and by 11.16% on CINIC-10 under internal and external imbalances. § RELATED WORK Federated Learning (FL) is an emerging distributed training technique that trains models on decentralized clients and aggregates model updates in a central server <cit.>. It protects data privacy as raw data is always kept locally. FedAvg <cit.> is a pioneering work that aggregates local models updated by weighted averaging. Statistical heterogeneity is an important challenge of FL in real-world scenarios, where the data distribution is inconsistent among clients <cit.>, which can result in the global and local models drifting apart, i.e., client drift <cit.>.
A plethora of works have been proposed to address this challenge with approaches like extra data sharing, regularization, new aggregation mechanisms, and personalization <cit.>. These approaches commonly consider only supervised learning settings and may not be simply applied to scenarios where only a small portion of data is labeled. Numerous studies focus on purely unsupervised federated learning, and some of them are application-specific <cit.>. Moreover, some work studies federated self-supervised learning <cit.> to learn generic representations with purely unlabeled data on clients, and these methods require IID labeled data for fine-tuning the representations for downstream tasks. Our work primarily focuses on federated semi-supervised learning, where a small fraction of data has labels in each client. Semi-Supervised Learning aims to utilize unlabeled data for performance improvements and is usually divided into two popular branches pseudo labeling and consistency regularization. Pseudo-labeling methods <cit.> usually generate artificial labels of unlabeled data from the model trained by labeled data and apply the filtered high-confidence labels as supervised signals for unlabeled data training. MPL <cit.> extends the knowledge distillation to SSL by optimizing the teacher model with feedback from the student model. Consistency regularization approaches <cit.> regularize the outputs of different perturbed versions of the same input to be consistent. Many works <cit.> apply data augmentation as a perturbed strategy for pursuing outcome consistency. Federated Semi-Supervised Learning (FSSL) considers learning models from decentralized clients where a small amount of labeled data resides on either clients or the server <cit.>. FSSL scenarios can be classified into three categories: (1) Labels-at-Server assumes that clients have purely unlabeled data and the server contains some labeled data <cit.>; (2) Labels-at-Clients considers each client has mostly unlabeled data and a small amount of labeled data <cit.>; (3) Labels-at-Partial-Clients assumes that the majority of clients contain fully unlabeled data while numerous clients have fully labeled data <cit.>. Labels-at-Clients has been largely overlooked; prior work <cit.> proposes inter-client consistency loss, but it shares extra information among clients and bypasses the internal class imbalance issue. This work introduces dual regulators to address the issue, without extra information shared among clients. Class Imbalance Methods are concerned with dataset resampling <cit.> or loss reweighting <cit.> for gradient calculation. In centralized learning setting, many methods <cit.> focus on resampling from the minority class for balanced class-wise distribution. Important examples receive more attention and align larger weights than others for accelerating the optimization of networks, which can be quantified by their loss <cit.> or the uncertainty <cit.>. § METHOD This section first defines the problem and introduces a novel framework with dual regulators (FedDure). Using dual regulators, we then build a bi-level optimization strategy for federated semi-supervised learning. §.§ Problem Definition We focus on Federated Semi-Supervised Learning (FSSL) with external and internal imbalance problems. Specifically, we assume that there are K clients, denoted as {𝒞_1, ..., 𝒞_K}. Federated learning aims to train a generalized global model f_g with parameter θ_g. 
It coordinates decentralized clients to train their local models ℱ_l = { f_l,1, ...,f_l,K} with parameters {θ_l,1, ...,θ_l,K}, where each client is only allowed to access its own local private dataset. In the standard semi-supervised setting, the dataset contains a labeled set 𝒟^s = {x_i, y_i}_i=1^N^s and an unlabeled set 𝒟^u = {u_i}_i=1^N^u, where N^s ≪ N^u. Under FSSL, the private dataset 𝒟_k of each client C_k contains N_k^s labeled instances 𝒟^s_k = {x_i,k, y_i,k}_i=1^N^s_k and N_k^u unlabeled instances 𝒟^u_k = {u_i,k}_i=1^N^u_k. The internal imbalance means that the distribution of 𝒟^s_k and 𝒟^u_k are different; the external imbalance refers to different distributions between D_k in different clients k. We provide a detailed description in Section <ref>. In this work, we primarily focus on image datasets. For an unlabeled image u_k in client C_k, we compute the corresponding pseudo label ŷ_k with the following equation: ŷ_k = argmax (f_l,k(𝒯_w( u_k); θ_l,k)), where 𝒯_w( u_k) is the weakly-augmented version of u_k and the pseudo labeling dataset in the client C_k is denoted as 𝒟^u_k = {u_i,k, ŷ_i,k}_i=1^N_k^u. We omit the client index k in the parameters later for simplicity of notation. §.§ Dual Regulators In this section, we present federated semi-supervised learning with dual regulator, termed FedDure. It dynamically adjusts gradient updates in each client according to the class distribution characteristics with two regulators, a coarse-grained regulator (C-reg) and a fine-grained regulator (F-reg). Figure <ref> depicts the optimization process with these two regulators. We introduce the regulators in this section and present the optimization process in Section <ref>. Coarse-grained Regulator (C-reg). Existing FSSL methods decompose the optimization on the labeled and unlabeled data, leading to heterogeneous local training. C-reg remedies the challenge with a collaborative training manner. Intuitively, the parameters of the local model can be rectified according to the feedback from C-reg, which dynamically regulates the importance of local training on all unlabeled data by quantifying the overall learning effect using labeled data. It contributes to counteracting the adverse impact introduced by internal imbalance and preventing corrupted pseudo-labels <cit.>. Meanwhile, C-reg acquires global knowledge by initializing with the received server model parameters at the beginning of each round of local training, which can provide global guidance to the local model to mitigate external imbalance (client-drift). We define C-reg as f_d with parameters ϕ. At training iteration t, C-reg searches its optimal parameter ϕ^* by minimizing the cross-entropy loss on unlabeled data with pseudo labels. Actually, the optimal parameter ϕ^* is related to the local model's parameter θ_l via the generated pseudo label, and we denote the relationship as ϕ^*(θ_l). Since it requires heavy computational costs to explore the optimal parameter ϕ^*, we approximate ϕ^* by performing one gradient step ϕ^t+1 at training iteration t (i.e., ϕ^t). Practically, we introduce the updated fine-grained regulator (F-reg) to measure the scalar weight for each unlabeled instance for updating C-reg. The formulation to optimize C-reg is as follows: ϕ^t+1 =ϕ^t -η_s ∇_ϕ^t𝔼_uℋ (w^t+1;ϕ^t) ℒ_ce(ŷ, f_d(𝒯_s(u); ϕ^t)), where ℋ (w^t+1;ϕ^t) = f_w(f_d(𝒯_s(u); ϕ^t);w^t+1), f_w is the fine-grained regulator (F-reg), and w^t+1 is the parameters of F-reg updated by Eqn. <ref>, which is detailed in the following subsection. 
𝒯_s(u) is the strongly-augmented unlabeled image u and f_d(𝒯_s(u); ϕ^t) is the output vector of f_d to evaluate the quality of pseudo labels from the local model. Next, we quantify the learning effect of the local model with the C-reg using labeled samples by computing the cross-entropy difference d^t+1 of C-reg between training iterations t and t+1: d^t+1 = 𝔼_x,y[ ℒ_ce ( y, f_d(x; ϕ^t)) - ℒ_ce (y, f_d(x; ϕ^t+1 ) )]. The quantized learning effect is further used as the reward information to optimize the local model by regulating the importance of local training on unlabeled data. In particular, the cross-entropy differences d^t+1 signify the generalization gap for the C-reg updated by the pseudo labels from the local model. Fine-grained Regulator (F-reg). Previous SSL methods usually utilize a fixed threshold to filter noisy pseudo labels <cit.>, but they are substantially hindered by corrupted labels or class imbalance on unlabeled data. Internal and external imbalances in FSSL could amplify these problems, leading to performance degradation. To tackle the challenge, F-reg regulates the importance of each unlabeled instance in local training for mitigating the learning bias caused by internal imbalance. It learns an adaptive weighting scheme tailored for each client according to unlabeled data distribution. A unique weight is generated for each unlabeled image to measure the contribution of the image to overall performance. We construct F-reg f_w parameterized by w[F-reg is a MLP architecture with one fully connected layer with 128 filters and a Sigmoid function.]. Before updating F-reg, we perform one gradient step update of C-reg ϕ to associate F-reg and C-reg: ϕ^- = ϕ^t -η_s ∇_ϕ^t𝔼_uℋ (w^t;ϕ^t) ℒ_ce(ŷ, f_d(𝒯_s(u); ϕ^t)), where one gradient step of C-reg ϕ^- depends on the F-reg w^t and regards the others as fixed parameters. Next, we optimize F-reg in local training iteration t, where the optimal parameter w^* is approximated by one gradient step of F-reg (i.e., w^t+1). The optimization of F-reg is formulated as: w^t+1 = w^t - η_w ∇_w^t𝔼_x,yℒ_ce(y,f_d(x; ϕ^-(w^t)), where f_d(x; ϕ^-(w^t)) is the output of f_d on labeled data. We then introduce a re-weighting scheme that calculates a unique weight m_i for i-th unlabeled sample: m_i = f_w(f_l(𝒯_s(u_i), θ_l^t), w^t+1). Note that m_i is a scalar to re-weight the importance of the corresponding unlabeled image. §.§ Bi-level Optimization In this section, we present optimization processes for the dual regulators and local model θ. We alternatively train two regulators, which approximate a gradient-based bi-level optimization procedure <cit.>. Then, we update the local model with fixed C-reg and F-reg. Update F-reg. Firstly, we obtain one gradient step update of C-reg ϕ^- using Eqn. <ref>. After that, the supervised loss ℒ_ce( y,f_d(x; ϕ^-(w^t)) guides the update of the F-reg with Eqn. <ref>. Since w^t is explicitly beyond the supervised loss, the updating of F-reg can be achieved by standard backpropagation using the chain rule. Update C-reg. After updating the parameters of F-reg, we update C-reg by Eqn. <ref>, regarding local model θ_l^t as fixed parameters. Update Local Model with F-reg. We use the updated F-reg w^t+1 to calculate a unique weight m_i for i-th unlabeled sample with Eqn. <ref>. The gradient optimization is formulated as: g^t_u = 𝔼_u[ ∇_θ^t_lℒ_ce(ŷ, f_l(𝒯_s(u); θ_l^t)) ·m]. Update Local Model with C-reg. We then use C-reg to calculate entropy difference d^t+1 in Eqn. <ref>. 
The entropy difference d^t+1 is adopted as a reward coefficient to adjust the gradient update of the local model on unlabeled data. The formulation is as follows: g^t_d = d^t+1·∇_θ^t_l𝔼_uℒ_ce(ŷ, f_l(𝒯_s(u); θ_l^t)), where this learning process is derived from a meta-learning strategy, provided in supplementary materials 3.1. Update Local Model with Supervised Loss. Besides, we compute the gradient local model on labeled data as: g^t_s = ∇_θ^t_l𝔼_x,yℒ_ce(y, f_l(x; θ_l^t)). On this basis, we update the local model’s parameter with the above gradient computation in Eqn. <ref>, <ref> and <ref>, which is defined as: θ^t+1_l =θ^t_l - η( g^t_s + g^t_u + g^t_d), where η denotes the learning rate of the local model. Finally, after T local epochs, the local model is returned to the central server. The server updates the global model θ_g^r+1 by weighted averaging the parameters from these received local models in the current round, and the r+1 round is conducted by sending θ_g^r+1 to the randomly selected clients as initialization. Algorithm <ref> presents the pipeline of the overall optimization process. §.§ Convergence of Optimization Process In this section, we further analyze the convergence of our optimizations. When updating F-reg in Eqn. <ref>, w^t is explicitly beyond the supervised loss, the optimization of F-reg can be easily implemented by automatic backpropagation using the chain rule. We only discuss the convergence of the bi-level optimizations using the meta-learning process. Update Local Model with C-reg. The local model tries to update its parameters on the feedback from the updated coarse-grained regulator (C-reg), which adjusts the learning effect via the meta-learning process (proofed in supplementary materials 3.1). The cross-entropy loss on labeled data ℒ_ce(y, f_d(x; ϕ^t+1(θ_l^t)) is applied to characterize the quality of learning effect from the local model. The CE loss function is related to θ_l^t. We derive the following theorem and finish the proof in supplementary materials 3.2. Suppose that supervised loss function ℒ_ce(y, f_d(x; ϕ^t+1(θ_l^t)) is L-Lipschitz and has ρ-bounded gradients. The ℒ_ce(ŷ, f_d(𝒯_s(u); ϕ^t)) has ρ-bounded gradients and twice differential with Hessian bounded by ℬ. Let the learning rate η_s=min{ 1, e/T} for constant e > 0, and η= min{1/L, c/√(T)} for some c > 0, such that √(T)/c≥ L. Thus, the optimization of the local model using coarse-grained regulator can achieve: min_0≤ t ≤ T𝔼[ ∇_θ_lℒ_ce(y, f_d(x; ϕ^t+1(θ_l^t))_2^2] ≤𝒪(c/√(T)). Update C-reg. We introduce updated F-reg to measure the contributions of each instance for updating C-reg in Eqn. <ref>, where ℋ (w^t+1;ϕ^t) is related to ϕ^t. The updated F-reg adjusts the learning contributions on each unlabeled instance for regulating the optimization of C-reg. We conclude that our C-reg can always achieve convergence when introducing the feedback from F-reg, and also detail the proof in supplementary materials 3.2. Suppose supervised and unsupervised loss functions are Lipschitz-smooth with constant L and have ρ-bounded gradient. The ℋ(·) is differential with a ϵ-bounded gradient and twice differential with its Hessian bounded by ℬ. Let learning rate η_s satisfies η_s =min{ 1, k/T} for constant k > 0, such that k/T <1. η_w = min{1/L, c/√(T)} for constant c>0 such that √(T)/c≥ L. The optimization of the coarse-grained regulator can achieve: lim_t →∞𝔼[ ∇_ϕℋ (w^t+1;ϕ^t) ℒ_ce(ŷ, f_d(𝒯_s(u); ϕ^t)) _2^2]=0. 
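To make the optimization procedure above concrete, the following is a minimal Python/PyTorch sketch of one local training step followed by the server-side weighted averaging. It is an illustration rather than the authors' implementation: the tiny MLP models, the synthetic tensors, the learning rate, and the use of the raw inputs in place of the weak/strong augmentations T_w and T_s are all assumptions made for brevity, and the meta-gradient (bi-level) updates of F-reg and C-reg themselves are omitted, so the two regulators are treated here as already updated.

# Minimal sketch (not the authors' code): one FedDure-style local step plus FedAvg-style aggregation.
# Assumptions: tiny MLPs, synthetic data, identity "augmentations", and regulators
# whose own bi-level meta-gradient updates are omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

d, c = 20, 10                                                            # feature dim, classes
local = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, c))     # f_l, local model
c_reg = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, c))     # f_d, coarse-grained regulator
f_reg = nn.Sequential(nn.Linear(c, 1), nn.Sigmoid())                     # f_w, fine-grained regulator (simplified)

x_l, y_l = torch.randn(10, d), torch.randint(0, c, (10,))                # labeled batch
x_u = torch.randn(32, d)                                                 # unlabeled batch
eta = 5e-4

def local_step():
    with torch.no_grad():
        y_hat = local(x_u).argmax(dim=1)          # pseudo labels from the "weak" view
        m = f_reg(local(x_u)).squeeze(1)          # per-instance weights from F-reg
        loss_before = F.cross_entropy(c_reg(x_l), y_l)
    # one SGD step of C-reg on the pseudo-labeled batch
    opt_d = torch.optim.SGD(c_reg.parameters(), lr=eta)
    opt_d.zero_grad()
    F.cross_entropy(c_reg(x_u), y_hat).backward()
    opt_d.step()
    with torch.no_grad():
        # reward d^{t+1}: drop in C-reg's supervised loss after learning from pseudo labels
        d_reward = (loss_before - F.cross_entropy(c_reg(x_l), y_l)).item()
    # combined local-model update: g_s + g_u (F-reg weighted) + g_d (C-reg reward)
    opt = torch.optim.SGD(local.parameters(), lr=eta)
    opt.zero_grad()
    logits_u = local(x_u)
    loss_s = F.cross_entropy(local(x_l), y_l)
    loss_u = (m * F.cross_entropy(logits_u, y_hat, reduction="none")).mean()
    loss_d = d_reward * F.cross_entropy(logits_u, y_hat)
    (loss_s + loss_u + loss_d).backward()
    opt.step()

def fedavg(global_model, client_models, weights):
    # server: weighted average of client parameters (weights: e.g. relative local dataset sizes)
    avg = {name: sum(w * cm.state_dict()[name] for w, cm in zip(weights, client_models))
           for name in global_model.state_dict()}
    global_model.load_state_dict(avg)

local_step()

In a faithful implementation the F-reg and C-reg updates would differentiate through the one-step-updated C-reg parameters, e.g. with higher-order autograd, before the weights m and the reward d^{t+1} are computed; the sketch only reproduces the final combined gradient on the local model.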
§ EXPERIMENTS In this section, we demonstrate the effectiveness and robustness of our method through comprehensive experiments in three benchmark datasets under multiple data settings. More details can be found in the supplementary material. §.§ Experimental Setup Datasets. We conduct comprehensive experiments on three image datasets, including CIFAR-10 <cit.>, Fashion-MNIST <cit.> and CINIC-10 <cit.>. All datasets are split according to official guidelines; we provide more dataset descriptions and split strategies in the supplementary material. Data Heterogeneity. We construct three data heterogeneity settings with different data distributions. We denote each setting as (𝒜, ℬ), where 𝒜 and ℬ are data distribution of labeled and unlabeled data, respectively. The settings are as follows: (1) (IID, IID) means both labeled and unlabeled data are IID. By default, we use 5 instances per class to build the labeled dataset for each client. The remaining instances of each class are divided into K clients evenly to build an unlabeled dataset. (2) (IID, DIR) means labeled data is the same as (IID, IID), but the unlabeled data is constructed with Dirichlet distribution to simulate data heterogeneity, where each client could only contain a subset of classes. (3) (DIR, DIR) constructs both labeled and unlabeled data with Dirichlet distribution. It simulates external and internal class imbalance, where the class distributions across clients and within a client are different. We allocate 500 labeled data per class to 100 clients using the Dirichlet process. The rest instances are divided into each client with another Dirichlet distribution. Figure <ref> compares the data distribution of FedMatch (Batch NonIID) <cit.> and ours. Our (DIR, DIR) setting presents class imbalance both across clients (external imbalance) and between labeled and unlabeled data within a client (internal imbalance). Implementation Details. We use the Adam optimizer with momentum =0.9, batch size =10 and learning rates =0.0005 for η_s, η and η_w. If there is no specified description, our default settings also include local iterations T=1, the selected clients in each round S=5, and the number of clients K=100. For the DIR data configuration, we use a Dirichlet distribution Dir(γ) to generate the DIR data for all clients, where γ=0.5 for all three datasets. We adopt the ResNet-9 network as the backbone architecture for local models and the coarse-grained regulator, while an MLP is utilized for the fine-grained regulator. Baselines. We compare the following methods in experiments. FedAvg* denotes FedAvg <cit.> only trained on labeled samples in FSSL (about 10% data). FedAvg-SL and FedProx-SL are fully supervised training using FedAvg <cit.> and FedProx <cit.>, respectively. FedAvg+UDA, FedProx+UDA, FedAvg+Fixmatch, and FedProx+Fixmatch: a naive combination between semi-supervised methods (UDA <cit.> and Fixmatch <cit.>) and FL algorithms. They use labeled and unlabeled data, but need to specify a predefined threshold on pseudo labels. FedMatch <cit.> adopts inter-consistency loss, which is state-of-the-art FSSL method. Note that we use the same hyper-parameters for FedDure and other methods in all experiments. §.§ Performance Comparison Table <ref> reports the overall results of FedDure and other methods on the three datasets. These results are averaged over 3 independent runs. Our FedDure achieves state-of-the-art FSSL performances on all datasets and data settings. 
(IID, IID) setting: compared with the naive-combination FSSL methods and FedMatch, our FedDure significantly outperforms them on all three datasets. Specifically, on CINIC-10, a more difficult dataset with a larger amount of unlabeled samples, other methods hit a performance bottleneck and perform worse than they do on CIFAR-10, which has fewer unlabeled samples. These results show that FedDure effectively alleviates the negative influence of mass unlabeled data by regulating the local model's optimization on unlabeled data through knowledge feedback from labeled data using F-reg and C-reg. (IID, DIR) setting: our FedDure is slightly affected by weak class mismatch on unlabeled data, but it significantly outperforms FedMatch by 15.26% on the CIFAR-10 dataset. Also, competitive performance is achieved compared to the supervised method FedAvg-SL on Fashion-MNIST. (DIR, DIR) setting: Under this more challenging and realistic setting, our FedDure significantly outperforms others by at least 11% on CIFAR-10 and CINIC-10 datasets. In particular, the performance of other approaches degrades sharply and is even worse than FedAvg*. This suggests that unlabeled data might have a negative effect on performance due to the distribution mismatch between labeled and unlabeled data. Therefore, these results demonstrate that our method is well suited for a wide range of scenarios since dual regulators effectively and flexibly provide real-time feedback for local updates. Moreover, we further analyze the effectiveness of our methods on a real-world dataset in the supplementary material. §.§ Ablation Study Effectiveness of Components. To measure the importance of the proposed components in our FedDure, we conduct ablation studies with the following variants. (1) baseline: the naive combination of FedAvg <cit.> and Fixmatch <cit.>. (2) Ours w/o C-reg: this variant removes the C-reg (i.e. g_d in Eqn.<ref>) and updates F-reg with the local model. (3) Ours w/o F-reg: this variant replaces the dynamic weight (i.e. g_u in Eqn.<ref>) and uses a fixed threshold to filter low-confidence pseudo labels. Table <ref> shows that adopting the C-reg improves the performance from 54.79% to 57.73% under the (DIR, DIR) setting on CIFAR-10. F-reg further boosts performance under almost all data settings on CIFAR-10 and Fashion-MNIST. These evaluations demonstrate the effectiveness of these components. The local model can flexibly optimize parameters according to the complementary feedback from C-reg and F-reg. Impacts of Data Heterogeneity. To demonstrate the robustness of our method against data imbalance, we characterize different levels of imbalances by Dirichlet distribution with different coefficients {0.3, 0.5, 0.7, 1.0} and evaluate multiple methods. As illustrated in Figure <ref> and <ref>, our FedDure exhibits significant improvements under different levels of data imbalances. However, FedMatch and the baseline (FedAvg-Fixmatch) suffer from rapid performance degradation under higher data heterogeneity (smaller Dirichlet coefficient). These results show that FedDure is more flexible and can alleviate diverse inductive bias across clients when accounting for severe data heterogeneity. Number of Local Iterations. We analyze the impact of local iterations on performance under two different cases. Firstly, we fix the total number of local iterations and reduce the number of global rounds as local iterations per round increase, to keep the total computation cost invariant.
As shown in Figure <ref>, the performance decreases as local iterations per round increase. This could be due to fewer clients participating in training and many clients' data never getting a chance to be learned, especially with only 40 global rounds. Secondly, we fix the total number of global rounds and increase the local iterations in each round. This means that the total computation cost increases as local iterations increase. Figure <ref> shows that FedDure achieves steady performance gains by increasing local iterations. These results indicate that enhancing the local model training can promote the overall performance of the model on the central server. Number of Labeled Data per Client. We evaluate FedDure under different percentages of labeled instances in each client, varied in {2%, 4%, 10%, 15%, 20%}. As illustrated in Figure <ref> and <ref>, FedDure gains steady performance improvements as the number of labeled data increases in both data settings. In contrast, the performance of the baseline is basically unchanged in both settings, and FedMatch degrades when the labeling ratio is larger than 4% in (DIR, DIR). These results demonstrate that our dual regulators can extract more valuable knowledge from labeled instances with imbalanced distribution to optimize the model. Number of Selected Clients per Round. Lastly, we investigate the impact of the number of selected clients per round, varied in {2, 5, 10, 20}. As shown in Figure <ref> and <ref>, significant improvements can be achieved by increasing the number of selected clients. However, the impact on performance becomes limited once the number of selected clients reaches a certain amount. We argue that although the number of selected clients has a positive correlation with overall performance, our method explores the underlying knowledge of each client to promote overall performance improvement in the central server. In this case, when there are enough clients, our method learns comprehensive knowledge such that the performance becomes saturated. § CONCLUSION In this paper, we introduce a more practical and challenging scenario of FSSL, where data distribution is different across clients (external imbalance) and within a client (internal imbalance). We then design a new federated semi-supervised learning framework with dual regulators, FedDure, to address the challenge. Particularly, we propose a coarse-grained regulator (C-reg) to regularize the gradient update in client model training and present a fine-grained regulator (F-reg) to learn an adaptive weighting scheme for unlabeled instances for gradient update. Furthermore, we formulate the learning process in each client as bi-level optimization that optimizes the local model in the client adaptively and dynamically with these two regulators. Theoretically, we show the convergence guarantee of the regulators. Empirically, extensive experiments demonstrate the significance and effectiveness of FedDure. In the future, we consider designing and integrating other client selection strategies for FSSL and extending our method from image classification to more computer vision tasks.
http://arxiv.org/abs/2307.04740v1
20230710175219
On the image of graph distance matrices
[ "William Dudarov", "Noah Feinberg", "Raymond Guo", "Ansel Goh", "Andrea Ottolini", "Alicia Stepin", "Raghavenda Tripathi", "Joia Zhang" ]
math.CO
[ "math.CO", "math.PR", "05C12, 05C50" ]
On the Image of Graph Distance Matrices. 2020 Mathematics Subject Classification: 05C12, 05C50. William Dudarov, Noah Feinberg, Raymond Guo, Ansel Goh, Andrea Ottolini, Alicia Stepin, Raghavendra Tripathi, Joia Zhang. Department of Mathematics, University of Washington, Seattle, WA 98195, USA. Let G=(V,E) be a finite, simple, connected, combinatorial graph on n vertices and let D ∈ℝ^n × n be its graph distance matrix D_ij = d(v_i, v_j). Steinerberger (J. Graph Theory, 2023) empirically observed that the linear system of equations Dx = 1, where 1 = (1,1,…, 1)^T, very frequently has a solution (even in cases where D is not invertible). The smallest nontrivial examples of graphs where the linear system is not solvable are two graphs on 7 vertices. We prove that, in fact, counterexamples exist for all n≥ 7. The construction is somewhat delicate and further suggests that such examples are perhaps rare. We also prove that for Erdős-Rényi random graphs the graph distance matrix D is invertible with high probability. We conclude with some structural results on the Perron-Frobenius eigenvector for a distance matrix. § INTRODUCTION Let G=(V,E) be a finite, simple, connected, combinatorial graph on |V|=n vertices. A matrix naturally associated with G is the graph distance matrix D ∈ℝ^n × n such that D_ij=d(v_i, v_j) is the distance between the vertices v_i and v_j. The matrix is symmetric, integer-valued and has zeros on the diagonal. The graph distance matrix has been extensively studied; we refer to the survey of Aouchiche-Hansen <cit.>. The problem of characterizing graph distance matrices was studied in <cit.>. A result of Graham-Pollack <cit.> ensures that D is invertible when the graph is a tree. The invertibility of the graph distance matrix continues to receive attention, and various extensions of Graham-Pollack have been obtained in recent times <cit.>. However, one can easily construct graphs whose distance matrices are non-invertible. Thus, in general the graph distance matrix may exhibit complex behaviour. Our motivation comes from an observation made by Steinerberger <cit.>, who observed that for a graph distance matrix D, the linear system of equations Dx = 1, where 1 is a column vector of all 1 entries, tends to frequently have a solution, even when D is not invertible. An illustrative piece of statistics is as follows: among the 9969 graphs with #V ≤ 100 implemented in Mathematica, there are 3877 graphs with rank(D) < n, but only 7 graphs with 1∉ im(D). This is certainly curious. It could be interpreted in a couple of different ways. A first natural guess would be that the graphs implemented in Mathematica are presumably more interesting than `typical' graphs and are endowed with additional symmetries. For instance, it is clear that if D is the distance matrix of a vertex-transitive graph (on more than one vertex) then Dx=1 has a solution. Another guess would be that this is implicitly some type of statement about the equilibrium measure on finite metric spaces. For instance, it is known <cit.> that the eigenvector corresponding to the largest eigenvalue of D is positive (this follows from the Perron-Frobenius theorem) and very nearly constant in the sense of all the entries having a uniform lower bound. The sequence A354465 <cit.> in the OEIS lists the number of graphs on n vertices with 1∉ im(D) as 1, 0, 0, 0, 0, 0, 2, 14, 398, 23923, … where the first entry corresponds to the graph on a single vertex for which D=(0).
We see that the sequence is small when compared to the number of graphs, but it is hard to predict a trend based on so little information. The first nontrivial counterexamples are given by two graphs on n=7 vertices. Lastly, it could also simply be a `small n' effect where the small examples behave in a way that is perhaps not entirely representative of the asymptotic behavior. It is not inconceivable to imagine that the phenomenon disappears completely once n is sufficiently large. We believe that understanding this is an interesting problem. §.§ Acknowledgements This project was carried out under the umbrella of the Washington Experimental Mathematics Lab (WXML). The authors are grateful for useful conversations with Stefan Steinerberger. A.O. was supported by an AMS-Simons travel grant. § MAIN RESULTS §.§ A plethora of examples Notice that the sequence A354465 <cit.> in the OEIS suggests that for n≥ 7 one can always find a graph on n vertices for which Dx=1 does not have a solution. Here, we recall that D represents the distance matrix of the graph, and 1 represents a vector whose |V| entries are all equal to one (we often omit the explicit dependence on |V| when it is understood from the context). The main result of this section is the following. For each n≥ 7, there exists a graph G on n vertices such that Dx=1 does not have a solution. Since we know that no counterexample exists for n<7, the result is sharp. Our approach to finding many examples of graphs for which Dx=1 has no solutions is to prove some structural results (of independent interest) that show how to obtain bigger examples out of smaller ones. For a careful statement of such structural results, we will need some definitions. We start with the notion of graph join. The graph join G+H of two graphs G and H is a graph on the vertex set V(G) ∪ V(H) with edges connecting every vertex in G with every vertex in H along with the edges of the graphs G and H. Our structural result on the distance matrix of the graph join of two graphs is better phrased with the following definition. Let G be a graph with adjacency matrix A_G. Then, define D_G = 2J-2I-A_G. Observe that for a graph of diameter 2, D_G is the distance matrix, justifying this choice of notation. We now state the main ingredient in the proof of Theorem <ref>. Let G and H be graphs and suppose that D_G x=1 has no solution. Then, the linear system Dx=1 for the distance matrix D of the graph join G+H has no solution if and only if there exists a solution to D_H x = 1 such that ⟨ x, 1⟩ = 0. An alternative approach to the proof of Theorem <ref>, that unfortunately does not allow for the same sharp conclusion (though it can be used to generate examples for infinitely many values of n), relies instead on the notion of Cartesian product. Given two graphs G=(V_1, E_1) and H=(V_2, E_2), their Cartesian product G × H is a graph on the vertex set V=V_1× V_2 such that there is an edge between vertices (v_1,v_2) and (v_1',v_2') if and only if either v_1=v_1' and v_2 is adjacent to v_2' in H, or v_2=v_2' and v_1 is adjacent to v_1' in G. If G and H are graphs such that 1 is not in the image of their distance matrices, then the Cartesian product graph G × H also has the property that 1 is not in the image of its distance matrix. We note that examples for which Dx=1 has no solution are not so easy to construct. In addition to the numerical evidence we provided in the introduction, we are able to give a rigorous, albeit partial, explanation of why this is the case (see Lemma <ref>).
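As a small numerical companion to these statements, the sketch below (an illustration, not part of the paper) computes the distance matrix of a graph with networkx and tests whether 1 lies in the image of D by comparing the rank of D with the rank of the augmented matrix [D | 1]; the graphs in the loop are arbitrary examples.

# Illustrative sketch: test whether D x = 1 is solvable for a given graph.
import networkx as nx
import numpy as np

def distance_matrix(G):
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    D = np.zeros((len(nodes), len(nodes)))
    for v, dists in nx.all_pairs_shortest_path_length(G):
        for w, d in dists.items():
            D[idx[v], idx[w]] = d
    return D

def one_in_image(D, tol=1e-9):
    # Rouche-Capelli: Dx = 1 is solvable iff rank(D) = rank([D | 1])
    ones = np.ones((D.shape[0], 1))
    return np.linalg.matrix_rank(D, tol) == np.linalg.matrix_rank(np.hstack([D, ones]), tol)

for G in [nx.petersen_graph(), nx.path_graph(6), nx.star_graph(4)]:
    D = distance_matrix(G)
    print(G, "| singular:", np.linalg.matrix_rank(D) < D.shape[0], "| 1 in image:", one_in_image(D))

Since D has integer entries, an exact rank computation over the rationals would be preferable for borderline cases; the floating-point tolerance here is only a convenience.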
§.§ Erdős-Rényi random graphs We conclude with a result about Erdős-Rényi random graphs. We first recall their definition. An Erdős-Rényi graph with parameters (n,p) is a random graph on the labeled vertex set V = {v_1,v_2,...,v_n} for which there is an edge between any pair (v_i,v_j) of vertices with independent probability p. The following theorem shows that their distance matrices are invertible with high probability. As a consequence, Dx=1 has a solution for Erdős-Rényi graphs with high probability, as we summarize in the following Theorem. Let 0 < p < 1 and let D_n,p be the (random) graph distance matrix associated with a random graph in G(n,p). Then, as n →∞, ℙ( det(D_n,p) = 0) → 0. It is a natural question to ask how quickly this convergence to 0 happens. Our approach relies heavily on recent results <cit.> about the invertibility of a much larger class of random matrices with discrete entries, providing some explicit bounds that are likely to be loose. We propose a conjecture, which is reminiscent of work on the probability that a matrix with random ± 1 Rademacher entries is singular; we refer to the work of Komlós <cit.> and the recent solution by Tikhomirov <cit.>. One might be inclined to believe that the most likely way that D_n,p can fail to be invertible is if two rows happen to be identical. This would happen if there are two vertices v, w that are not connected by an edge which, for every other vertex u ∈ V, are both either connected to u or not connected to u. For a graph G ∈ G(n,p) each vertex is connected to roughly ∼ np vertices and not connected to ∼ (1-p)n vertices. This motivates the following Question. Is it true that lim_n →∞log( ℙ( det(D_n,p) = 0) )/n = log( p^p (1-p)^(1-p)) ? The right-hand side log( p^p (1-p)^(1-p)) = p log(p) + (1-p) log(1-p) is merely (up to constants) the entropy of a Bernoulli random variable. §.§ Perron-Frobenius eigenvectors are nearly constant Let (X, d) be a metric space and let x_1, …, x_n be n distinct points in X. The notion of distance matrix naturally extends to this case. That is, we define D∈ℝ^n× n by setting D_ij=d(x_i, x_j). This notion clearly agrees with the graph distance matrix if X is a graph equipped with the usual shortest path metric. Let λ_D be the Perron-Frobenius eigenvalue of D and let v be the corresponding eigenvector with non-negative entries. In the following we will always assume that v is normalized to have L^2 norm 1 unless otherwise stated. In <cit.>, it was proved that ⟨ v, 1⟩/√(n) ≥ 1/√(2). It is also shown in <cit.> that the above inequality is sharp in general for the distance matrix in an arbitrary metric space. However, it was observed that for graphs in the Mathematica database, the inner product tends to be very close to 1, and it was not known if the lower bound of 1/√(2) is sharp for graphs. We show that this bound is sharp for graph distance matrices as well. The lower bound is achieved asymptotically by the comet graph that we define below. We define a comet graph, C_m_1^m_2, to be the disjoint union of a complete graph on m_1 vertices and the path graph on m_2 vertices, together with an edge between one end of the path graph and a vertex of the complete graph. Let D_m be the graph distance matrix of the comet graph C_m^2^m. Let v_m be the top eigenvector (normalized to have unit L^2 norm) of the distance matrix D_m. Then, lim_m→∞⟨ v_m, 1⟩/√(n) = 1/√(2), where n=m^2+m is the number of vertices in C_m^2^m. While Theorem <ref> shows that the lower bound 1/√(2) is sharp, it does not reveal the complete truth.
It is worth emphasizing that the lower bound is achieved only in the limit as the size of the graph goes to infinity. The following theorem shows that if a graph has diameter 2 then, ⟨ v, 1⟩/√(n) is significantly larger. Let G be a graph with diameter 2 and let D be the distance matrix of G. Let v be the top-eigenvector of D normalized to have L^2 norm 1. Then, v 1/√(n)≥4/3·1/√(2) . In the light of above theorem, it is reasonable to expect a more general result of the following form that we leave open. Problem. Let G be a graph on n vertices with distance matrix D. Let v be the top eigenvector of D with unit L^2 norm. If G has diameter d then, ⟨ v, 1⟩/n≥1/√(2)(1+f(d)) , for some f such that f(d)→ 0 as d→∞. § PROOF OF THEOREM <REF> This section is dedicated to the proof of the main Theorem <ref>. Since the main ingredient is the structural result about the distance matrix of the graph join (Theorem <ref>), we begin the section with the proof of that. Observe that the distance matrix of G+H is given by D = [ D_G J; J D_H ]. Recall that the orthogonal complement of the kernel for a symmetric matrix is the image of the matrix because the kernel of a matrix is orthogonal to the row space, which in this case, is the column space. In particular, this applies to D_G and D_H. To prove the forwards direction, we will show the contrapositive. We have two cases, namely the case where D_H x = 1 has no solution and the case where there is a solution to D_H x = 1 where ⟨ x, 1⟩≠ 0 First, assume that D_H x = 1 has no solution. Then, we have that D_G ⊥̸1 and D_H ⊥̸1 because 1∉D_G and 1∉D_H. So, there exists x_1 ∈D_G and x_2 ∈D_H such that ⟨ x_1, 1⟩ = ⟨ x_2, 1⟩ = 1. Observe that the vector x=(x_1, x_2)^T satisfies Dx=1 so we are done with this case. Now, suppose that there exists x such that D_H x = 1 and ⟨ x, 1⟩≠ 0. Then, let x_2 = x/⟨ x, 1⟩. Once again, D_G ⊥̸1 so there exists x_1 ∈D_G such that ⟨ x_1, 1⟩ = 1-1/⟨ x, 1⟩. Then, the vector x=(x_1, x_2)^T satisfies Dx=1. Thus, we are done with this direction. Now, for the reverse direction, suppose that there exists y such that D_H y= 1 and ⟨ y, 1⟩ = 0. Assume for a contradiction that there exists a solution to Dx=1. Then, we have x_1, x_2 such that D_G x_1 + J x_2 = 1 and J x_1 + D_H x_2= 1. First, suppose that ⟨ x_1, 1⟩ = 1. Then, we have D_H x_2=0 so x_2 ∈D_H. Note that 1∈D_H so D_H ⊥1. Thus, ⟨ x_2, 1⟩ = 0, implying that Jx_2=0. However, this implies that D_Gx_1=1, which is a contradiction. Now, suppose that ⟨ x_1, 1⟩≠ 1. Then, D_H x_2 = c1 for some c≠ 0. So, x_2= y/c + z for some z ∈D_H. Noting that D_H ⊥1, we have ⟨ x_2, 1⟩ = ⟨ y, 1⟩/c = 0. So, Jx_2 = 0 implying that D_Gx_1=1, which is a contradiction. Now, we will construct a family of graphs {H_n}_n=3^∞ such that each H_n has 2n vertices and there exists x satisfying D_H_nx = 1 with ⟨ x, 1⟩ = 1. First, we will define {H_n}_i=3^∞. For each n ≥ 3, define H_n=C_n^c + K_n, where + is the graph join and C_n^c is the complement of the cycle graph on n vertices. For each n≥ 3, there exists x satisfying D_H_nx = 1 with ⟨ x, 1⟩ = 0. To start, observe that D_H_n is of the form [ B J_n; J_n J_n-I_n ] where B is defined by B_i,j= 0 i=j 2 i=j ± 1 n 1 otherwise. The vector x=(1_n,-1_n)^T satisfies D_H_nx=1 with ⟨ x,1 ⟩ = 0 so we are done. Observe that each H_i has an even number of vertices. We will now show construct a family of graphs {H_n'}_n=3^∞ such that each H_n' has 2n+1 vertices. 
For each n ≥ 3, define H_n' to be the graph formed by attaching one vertex to every vertex of H_n except for one of the vertices of the C_n^c component of H_n. For each n≥ 3, there exists x satisfying D_H_n'x = 1 with ⟨ x, 1⟩ = 0. To start, observe that we can write D_H_n' as [ D_H_n; y ] where y = (2,1, …, 1,0). Then, the vector x=(1_n,-1_n,0)^T satisfies D_H_n'x=1 with ⟨ x,1 ⟩ = 0 so we are done. Now, for sake of notation, we will recall the definition of the cone of a graph. Given a graph G, the graph (G) is defined as the graph join of G with the trivial graph. Take G=(H_(n-1)/2) if n is odd, and G=(H^'_n/2-1) if n is even. The proof is immediate from Theorem <ref>, Lemma <ref> and Lemma <ref>. We now move to the proof of Theorem <ref>, that allows for an alternative way of constructing graphs for which Dx=1 does not have a solution. To this aim, let G and H be two graphs on n and m vertices, respectively. Let A∈ℝ^n× n and B∈ℝ^m× m be the distance matrices of G and H respectively. It is well-known (see for instance <cit.>, <cit.>) that the distance matrix of the Cartesian product G× H is given by J_m ⊗ A+ B ⊗ J_n ∈ℝ^nm× nm where ⊗ is the Kronecker product and J_ℓ denotes ℓ×ℓ matrix with all 1 entries. Theorem <ref> is an immediate consequence of the following Lemma <ref>. Suppose that A is a n× n matrix and B is an m × m matrix such that the linear systems Ay= 1_n and Bz=1_m have no solution. Then, (J_m ⊗ A+ B ⊗ J_n)x=1_nm has no solution. Assume for the sake of contradiction that there exists x∈ℝ^nm× nm with (J_m ⊗ A+ B ⊗ J_n)x=1_nm. Then, we have (J_m ⊗ A)x = 1_nm - (B ⊗ J_n)x = (c_1, …, c_m)^T , where each c_i∈ℝ^1× n is a vector with constant entries. Since Bz=1_m has no solutions, there must be some 1≤ j≤ m for which c_j=α1_n, where α≠ 0. ( [ A ⋯ A; ⋮ ⋱ ⋮; A ⋯ A ] + [ b_1,1J_n ⋯ b_1,mJ_n; ⋮ ⋱ ⋮; b_m,1J_n ⋯ b_m,mJ_n ])x =1 [ A ⋯ A; ⋮ ⋱ ⋮; A ⋯ A ] x = 1- [ b_1,1J_n ⋯ b_1,mJ_n; ⋮ ⋱ ⋮; b_m,1J_n ⋯ b_m,mJ_n ]x. Writing x as the block vector (x_1, ..., x_m)^T where each x_i∈ℝ^1× n, we note that A(x_1+…+x_m) = c_i, ∀ 1≤ i≤ m . In particular the above equation holds for i=j. Thus, we obtain Ay=1_n for y= (x_1+…+x_m)/α which contradicts our assumption. [ A ⋯ A; ⋮ ⋱ ⋮; A ⋯ A ] x = [ c_1; ⋮; c_n ]. [ A (x_1 + ⋯ + x_m); ⋮; A (x_1 + ⋯ + x_m) ] = [ c_1; ⋮; c_n ]. As we pointed out in Section 2, while we have established that there are infinitely many graphs G such that Dx=1 does not have a solution, finding such graphs can be hard. To illustrate this, we conclude this section with a structural result about family of graphs for which Dx=1 does have a solution. Let G=(V, E) be a connected graph. Suppose there are two vertices v,w∈ V such that the following conditions hold. * v is not connected to w * v∼ x for every x∈ V∖{w} * w∼ x for every x∈ V∖{v}. If D is the graph distance matrix of G then Dx=1 has a solution. Furthermore, if there are two or more distinct pairs of vertices satisfying 1-3 then D is non-invertible. Observe that we can write the distance of G such that the first two columns of D are (0,2, 1, …, 1)^T and (2,0, 1, …, 1)^T. Therefore x=(1/2,1/2, 0, ..., 0)^T satisfies Dx=1. If there are two pair of vertices, say w.l.o.g v_1, v_2 and v_3, v_4 satisfying conditions 1-3 then the first four columns of D look like [ 0 2 1 1; 2 0 1 1; 1 1 0 2; 1 1 2 0; 1 1 1 1; ⋮ ⋮ ⋮ ⋮; 1 1 1 1 ]. Labeling the columns c_1,…, c_4, we have c_1+c_2-c_3=c_4. D must be singular. 
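The construction above is easy to probe numerically. The following sketch (again an illustration; the helper functions are introduced only for this purpose) builds H_n = C_n^c + K_n through an explicit graph join, forms cone(H_n) by joining with a single vertex, and checks that 1 is not in the image of the resulting distance matrix, in line with Theorem <ref> and Lemma <ref>.

# Illustrative check of the construction: cone(H_n) with H_n = complement(C_n) + K_n.
import networkx as nx
import numpy as np

def join(G, H):
    # graph join: disjoint union plus all edges between the two vertex sets
    J = nx.disjoint_union(G, H)
    for u in range(G.number_of_nodes()):
        for v in range(G.number_of_nodes(), J.number_of_nodes()):
            J.add_edge(u, v)
    return J

def dist_matrix(G):
    n = G.number_of_nodes()
    D = np.zeros((n, n))
    for v, dists in nx.all_pairs_shortest_path_length(G):
        for w, d in dists.items():
            D[v, w] = d
    return D

def one_in_image(D, tol=1e-9):
    ones = np.ones((D.shape[0], 1))
    return np.linalg.matrix_rank(D, tol) == np.linalg.matrix_rank(np.hstack([D, ones]), tol)

for n in range(3, 8):
    H_n = join(nx.complement(nx.cycle_graph(n)), nx.complete_graph(n))   # H_n on 2n vertices
    G = join(nx.complete_graph(1), H_n)                                  # cone(H_n) on 2n+1 vertices
    D = dist_matrix(G)
    print(f"n={n}: |V|={G.number_of_nodes()}, 1 in image(D): {one_in_image(D)}")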
§ PROOF OF THEOREM <REF> We start with the following well-known result (see, e.g., <cit.>) about the diameter of an Erdős-Rényi graph. Let p∈ (0, 1). Let P_p,n be the probability that a random Erdős-Rényi graph G(n, p) has diameter at least 3. Then, lim_n→∞P_p,n = 0. Let I be the identity matrix, J be the all-ones matrix, and A be the graph's adjacency matrix. Owing to the Lemma (<ref>), we can write, with high probability, the distance matrix as D = 2J-A-2I. We will now state the following theorem from <cit.>, which describes the smallest singular value σ_n of a matrix M_n=F_n+X_n where F_n is a fixed matrix and X_n is a random symmetric matrix under certain conditions. Assume that ξ has zero mean, unit variance, and there exist positive constants c_1<c_2 and c_3 such that ℙ(c_1 ≤|ξ-ξ'|≤ c_2)≥ c_3, where ξ' is an independent copy of ξ Assume that the upper diagonal entries of x_ij are i.i.d copies of a random variable ξ satisfying <ref>. Assume also that the entries f_ij of the symmetric matrix F_n satisfy | f_ij|≤ n^γ for some γ > 0. Then, for any B>0, there exists A>0 such that ℙ(σ_n(M_n)≤ n^-A)≤ n^-B. Combining all these results, we can prove the main result of the section. Owing to Lemma <ref>, we can assume that with high probability the distance matrix has the form D=2J-2A-2I. Note that the upper diagonal entries of A are i.i.d copies of a random variable satisfying Condition <ref> with c_1=c_3=1 and c_2=1. Furthermore, 2(J-I) is symmetric and its entries are bounded. Therefore, the result follows from Theorem <ref>. § PROOF OF THEOREM <REF> Let D_m be the graph distance matrix of C_m^2^m. We start by observing that D_m= [ J_m^2-I_m^2 B_m; (B_m)^⊤ A_m ] , where A_m as a matrix m× m matrix such that (A_m)_ij=|i-j| and B_m is m× m matrix defined by B_m= [ 2 3 ⋯ m+1; ⋮ ⋮ ⋮ ⋮; 2 3 ⋯ m+1; 1 2 ⋯ m; ] Our first observation is that the first eigenvector of D_m is constant for the first m^2-1 entries (considering the symmetry of the graph, this is not surprising). Let λ_m denote the largest eigenvalue of D_m and let v be the corresponding eigenvector. Then, for all i,j≤ m^2-1, we have v_i=v_j. Let r_i, r_j be i-th and j-th rows of D respectively. We first note that r_i-r_j=e_i-e_j for i, j≤ m^2-1. Now observe that λ_m v_j-λ_m v_i =r_jv-r_iv =e_i-e_jv=v_i-v_j . The conclusion follows since λ_m≥ 0. We start with an estimate for λ_m that will later allow us to bound entries of v. Let λ_m be the largest eigenvalue of D_m then λ_m = (1+o(1)) ·m^5/2/√(3) . Write D = D_m and let λ_m be as above. Let A be the m^2 + m by m^2 + m matrix defined by A_i,j= i - m^2 if i > m^2, j ≤ m^2 j - m^2 if j > m^2, i ≤ m^2 0 otherwise . Let B be the m^2+m by m^2+m matrix defined by B_i,j= 1 if i,j ≤ m^2 0 otherwise . Let C be the m^2+m by m^2+m matrix defined by C_i,j= m+1 if i,j > m^2 0 otherwise . Note that A ≤ D ≤ A + B + C where the inequalities refer to entrywise inequalities. This means that for all x ∈ℝ^m^2 + m with nonnegative entries, x^TAx ≤ x^TDx ≤ x^T(A+B+C)x Let λ_A,λ_B,λ_C be the top eigenvalue of A, B, and C respectively and let λ_A+B+C be the top eigenvalue of A+B+C. Noting that A,B,C are all symmetric nonnegative matrices, letting S ⊂ℝ^m^2+m be the subset of vectors with nonnegative entries such that x_2≤ 1. Then, λ_A ≤λ_m ≤λ_A+B+C≤λ_A+λ_B + λ_C . It is easily seen that λ_B = m^2 and λ_C = m(m+1). We can also compute λ_A explicitly. Let v be the top eigenvector of A. Since the first m^2 rows and columns of M are all identical, the first m^2 entries of v are the same. 
Normalize v so that the first m^2 entries are 1. Then λ_Av = Dv yields λ_Av_1 = λ_A = ∑_j=1^m A_1,jv_m^2+j = ∑_j=1^m jv_m^2+j and for 1 ≤ k ≤ m, λ_Av_m^2+k = ∑_j=1^m^2kv_j = ∑_j=1^m^2 k = m^2k . Plugging v_m^2+k = m^2k/λ_A into the first equation, we get λ_A^2 = ∑_j=1^m m^2j^2 = m^2(m)(m+1)(2m+1)/6 . This yields, √(m^3(m+1)(2m+1)/6)≤λ_m ≤√(m^3(m+1)(2m+1)/6) + m^2 + m(m+1) . With this estimate in hand we can now show stronger bounds on v_∞ than are directly implied by <cit.> in the general case. Let v be the top eigenvector of D_m normalized so that v_1=1 we have v_∞ = 𝒪(√(m)) First we note that D_m_max≤ m+1, second we note that by <cit.> we know 1/2√(m^2 + m)≤v_i/v_2≤ 1 And in particular this means max_i∈ [m^2+m] v_i=max_i∈ [m^2+m]v_i/v_1≤max_i,j∈ [m^2+m]v_i/v_j≤ 2√(m^2+m)≤ 2m+1 It follows from <cit.> that v_∞ = 𝒪(m). when we have normalized v such that v_1=1. Since the first m^2-1 terms of v are 1 and the entries in D are at most (m+1) we get λ_m v_i =∑_k=1^m^2-1 (D_m)_i,kv_k +∑_k=m^2^m^2+m (D_m)_i,kv_k ≤ m^2(m+1)+2m(m+1)^2=𝒪(m^3) . Since λ_m≥ m^5/2/√(3), it follows that v_i≤𝒪(√(m)). Let v be as above. There exists C>0 such that for i≥ m^2, we have √(1/3m)-C/m≤ (v_i-v_i-1)≤√(3/m)+C/m , for all sufficiently large m. For i≥ m^2 we consider the following difference r_i-r_i-1. Observe that first i-1 coordinates are 1 followed by n+m+1-i many -1. Therefore, λ(v_i-v_i-1) =(D_mv)_i-(D_mv)_i-1=r_i-r_i-1v =∑_k=1^i-1 v_i-∑_k=i^m^2+m v_i =(m^2-1)+∑_k=m^2^i-1 v_i-∑_k=i^m^2+m v_i . Using the fact that v_i≤ C√(m) for all i we obtain m^2-1-Cm^3/2≤λ(v_i-v_i-1)≤ m^2-1+Cm^3/2 . Since λ_m∼ m^5/2/√(3), the desired conclusion follows. To conclude the proof we first note that from above ⟨1, v⟩≥ m^2. On the other hand, Now with this we have enough to get good estimates of v_1 and v_2 which will imply the desired result. Starting with ℓ_1 first we have v_1 =∑_k=1^m^2+m v_k =m^2-1+∑_k=m^2^m^2+m v_k ∼ m^2+√(3/m)∑_k=1^m k ∼ m^2+m√(3m)/2 ∼ m^2 Now turning to the ℓ_2 we have v_2^2 =∑_k=1^m^2+m v_k^2 =m^2-1+∑_k=m^2^m^2+m v_k^2 ∼ m^2+3/m∑_k=1^m k^2 = m^2+3(m+1)(2m+1)/6 = 2m^2 We also obtain v_2^2 ≤ 2m^2 + C(m+1)^3/2 . Combining these results tells us that lim inf_m→∞1v/v_2·1_2≥1/√(2) . § PROOF OF THEOREM <REF> Let G be any graph with diameter 2. Since D_ij is either 1 or 2 (except for D_ii=0), it is easy to see that 1v-v_i ≤λ v_i=∑_j=1^n D_i,j v_j ≤ 2(1v-v_i). Rearranging, we obtain the uniform two-sided bound 1v/λ+1≤ v_i < 21v/λ+1. This yields, in particular, that for all 1≤ i, j≤ n 1≤v_i/v_j≤ 2 . This defines a convex region, that we denote by D. In order to prove our result, it suffices to prove that the minimum of v_1= 1v over the set D, subject to the constraint v_2=1, is at least 4/(3√(2)). To this aim, we first notice that the minimizers of this problem are the same, up to a scalar factor, of the maximizers of v_2_2 in D subject to v_1=1 (in fact, in both cases they must be minimizers of the homogeneous function v_1/v_2 on D). Since the latter is a maximization problem for a strictly convex function on a convex set, the maximizers must be extreme points of D. In particular, going back to the original formulation, we conclude that the smallest that 1v can be will be when all entries of v are c,2c for some c so that v_2=1. 
Suppose now that we have m entries equal to c and n-m entries equal to 2c. Then 1=v_2^2 =∑_k=1^m c^2+∑_k=m+1^n(2c)^2 =mc^2+4(n-m)c^2 . Solving for c we find c=1/√(4n-3m) . So now we can optimize over m to minimize the ℓ_1 norm: v_1/√(n)=mc+2(n-m)c/√(n)=2n-m/√(n(4n-3m)) . Treating n as a constant and differentiating with respect to m we get d/dm[2n-m/√(n(4n-3m))] =(-√(4n^2-3mn)+3n(2n-m)/(2√(4n^2-3mn)))/(4n^2-3mn) =3mn-2n^2/2(4n^2-3mn)^3/2 . Setting this equal to 0, only the numerator matters, so we solve 0 =3mn-2n^2 =n(3m-2n) , which gives m=2n/3 (the solution n=0 is excluded). Since the derivative changes sign from negative to positive at m=2n/3, this critical point is the minimum. Substituting m=2n/3 into our formula for the ℓ_1 norm we get 2n-m/√(n(4n-3m))=4n/3/√(n(4n-2n))=4/3·1/√(2) . Now by <ref> we know that if G is a random graph, then for large n it will have diameter 2 and this bound will hold.
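A small numerical check of this optimization (illustrative only; the variable names are ours) confirms that the minimum of ‖v‖_1/√n over the admissible vectors is attained near m = 2n/3 and equals 4/(3√2) ≈ 0.943, comfortably above 1/√2 ≈ 0.707:

```python
import numpy as np

n = 10_000                                     # any large n; the bound does not depend on n
m = np.arange(0, n + 1)
c = 1.0 / np.sqrt(4 * n - 3 * m)               # normalisation so that ||v||_2 = 1
l1_over_sqrt_n = (m * c + 2 * (n - m) * c) / np.sqrt(n)

i = int(l1_over_sqrt_n.argmin())
print(m[i] / n)                                # ~ 2/3, the critical point found above
print(l1_over_sqrt_n[i], 4 / (3 * np.sqrt(2)), 1 / np.sqrt(2))
```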
http://arxiv.org/abs/2307.04013v1
20230708164601
BPNet: Bézier Primitive Segmentation on 3D Point Clouds
[ "Rao Fu", "Cheng Wen", "Qian Li", "Xiao Xiao", "Pierre Alliez" ]
cs.CV
[ "cs.CV" ]
BPNet: Bézier Primitive Segmentation on 3D Point Clouds Rao Fu, Cheng Wen, Qian Li, Xiao Xiao, Pierre Alliez August 12, 2023 ========================================================================================== This paper proposes BPNet, a novel end-to-end deep learning framework to learn Bézier primitive segmentation on 3D point clouds. The existing works treat different primitive types separately, thus limiting them to finite shape categories. To address this issue, we seek a generalized primitive segmentation on point clouds. Taking inspiration from Bézier decomposition on NURBS models, we transfer it to guide point cloud segmentation, casting off fixed primitive types. A joint optimization framework is proposed to learn Bézier primitive segmentation and geometric fitting simultaneously on a cascaded architecture. Specifically, we introduce a soft voting regularizer to improve primitive segmentation and propose an auto-weight embedding module to cluster point features, making the network more robust and generic. We also introduce a reconstruction module where we successfully process multiple CAD models with different primitives simultaneously. We conducted extensive experiments on the synthetic ABC dataset and real-scan datasets to validate and compare our approach with different baseline methods. Experiments show superior performance over previous work in terms of segmentation, with a substantially faster inference speed. § INTRODUCTION Structuring and abstracting 3D point clouds via segmentation is a prerequisite for various computer vision and 3D modeling applications. Many approaches have been proposed for semantic segmentation, but the finite set of semantic classes limits their applicability. 3D instance-level segmentation and shape detection are much more demanding, and the literature on them lags far behind its semantic segmentation counterpart. Finding a generalized way to decompose point clouds is essential. For example, man-made objects can be decomposed into canonical primitives such as planes, spheres, and cylinders, which are helpful for visualization and editing. However, the limited types of canonical primitives are insufficient to describe objects' geometry in real-world tasks. We are looking for a generalized way of decomposing point clouds. The task of decomposing point clouds into different geometric primitives with corresponding parameters is referred to as parametric primitive segmentation. Parametric primitive segmentation is more reasonable than semantic instance segmentation for individual 3D objects, as it unifies the 3D objects in the parametric space instead of forming artificially defined parts. However, the task is quite challenging as 1) there is no exhaustive repertoire of canonical geometric primitives, 2) the number of primitives and the number of points belonging to each primitive may vary significantly, and 3) points assigned to the same primitive should belong to the same type of primitive. Inspired by Bézier decomposition, in which NURBS models are divided into canonical geometric primitives (plane, sphere, cone, cylinder, etc.) and parametric surfaces into rational Bézier patches, we propose to learn Bézier decomposition on 3D point clouds. We focus on segmenting point clouds sampled from individual objects, such as CAD models. Departing from previous primitive segmentation, we generalize different primitive types to Bézier primitives, making them suitable for end-to-end and batch training.
To the best of our knowledge, our method is the only work to learn Bézier decomposition on point clouds. To summarize our contributions: * We introduce a novel soft voting regularizer for the relaxed intersection over union (IOU) loss, improving our primitive segmentation results. * We design a new auto-weight embedding module to cluster point features which is free of iterations, making the network robust to real-scan data and work for axis-symmetric free-form point clouds. * We propose an innovative reconstruction module where we succeed in using a generalized formula to evaluate points on different primitive types, enabling our training process to be fully differential and compatible with batch operations. * Experiments demonstrate that our method works on the free-form point clouds and real-scan data even if we only train our model on the ABC dataset. Furthermore, we present one application of Bézier primitive segmentation to reconstruct the full Bézier model while preserving the sharp features. The code is available at: <https://github.com/bizerfr/BPNet>. § RELATED WORK Bézier primitive segmentation involves parametric fitting, instance segmentation, and multi-task learning. We now provide a brief review of these related research areas. Primitive segmentation. Primitive segmentation refers to the search and approximation of geometric primitives from point clouds. Primitives can be canonical geometric primitives, such as planes or spheres, or parametric surface patches, such as Bézier, BSpline, or NURBS. We can classify primitive segmentation methods into two lines of approaches: geometric optimization and machine learning. Popular geometric optimization-based methods include RANSAC <cit.>, region growing <cit.> and Hough transforms <cit.>. We refer to <cit.> for a comprehensive survey. One limitation of geometric optimization-based methods is that they require strong prior knowledge and are hence sensitive to parameters. In order to alleviate this problem, recent approaches utilize neural networks for learning specific classes of primitives such as cuboids <cit.>. The SPFN supervised learning approach <cit.> detects a wider repertoire of primitives such as planes, spheres, cylinders, and cones. Apart from the canonical primitives handled by SPFN, ParSeNet <cit.> and HPNet <cit.> also detect open or closed BSpline surface patches. Nevertheless, different types of primitives are treated separately with insufficient genericity. This makes them unsuitable for batch operations, thus suffering long inference times. Deep learning-based methods are less sensitive to parameters but often support a limited repertoire of primitives. Our work extends SPFN, ParSeNet, and HPNet with more general Bézier patches. Instance segmentation. Instance segmentation is more challenging than semantic segmentation as the number of instances is not known a priori. Points assigned to the same instance should fall into the same semantic class. We distinguish between two types of methods: proposal-based <cit.> and proposal-free methods <cit.>. On the one hand, proposal-based methods utilize an object-detection module and usually learn an instance mask for prediction. On the other hand, proposal-free methods tackle the problem as a clustering step after semantic segmentation. We refer to a recent comprehensive survey <cit.>. 
The significant difference between instance segmentation and primitive segmentation is that instance segmentation only focuses on partitioning individual objects where primitive fitting is absent. Patch-based representations. Patch-based representations refer to finding a mapping from a 2D patch to a 3D surface. Previous works including <cit.> learn a parametric 2D mapping by minimizing the Chamfer distance <cit.>. One issue with Chamfer distance is that it is not differentiable when using the nearest neighbor to find matched pairs. We learn the uv mapping instead. Learning uv parameters enables us to re-evaluate points from our proposed generalized Bézier primitives, making our training process differentiable and supporting batch operations. Multi-task learning. Multi-task learning aims to leverage relevant information contained in multiple related tasks to help improve the generalization performance of all the tasks <cit.>. Compared to single-task learning, the architectures used for multi-task learning—see, e.g., <cit.>—share a backbone to extract global features, followed by branches that transform the features and utilize them for specific tasks. Inspired by <cit.>, we use a cascaded architecture for our joint optimization tasks. § METHOD Figure <ref> shows an overview of the proposed neural network. The input to our method is a 3D point cloud P={p_i | 0≤ i ≤ N-1}, where p_i denotes the point coordinates (with or without normals). The output is the per-point patch labels { P_k | ∪_k=0 P_k = P}, where each patch corresponds to a Bézier primitive. The network will also output patch degree (d_u-by-d_v) and weighted control points C={𝐜_kmn = (x,y,z,w)|0≤ m ≤ d_u, 0≤ n ≤ d_v, 0 ≤ k ≤ K-1}, where K denotes the number of patches. We constrain the maximum degree to be M_d*N_d. We let our network output a maximum number of K Bézier patches for all CAD models, and we use K̂ to denote the ground-truth number of patches which is smaller than K and varies for each CAD model. §.§ Architecture Our architecture consists of two components: a backbone for extracting features and a cascaded structure for joint optimization. The backbone is based on three stacked EdgeConv <cit.> layers and extracts a 256D pointwise feature for each input point. Let 𝐏∈ℝ^N × D_in denote the input matrix, where each row is the point coordinates (D_in is three) with optional normals (D_in is six). Let 𝐗∈ℝ^N × 256 denote the 256D pointwise feature matrix extracted from the backbone. We use a cascaded structure to optimize the per-point degree probability matrix 𝐃∈ℝ^N × (M_d*N_d), the soft membership matrix 𝐖∈ℝ^N × K, the UV parameter matrix 𝐓∈ℝ^N × 2, and the weighted control points tensor 𝐂∈ℝ^K × (M_d+1) × (N_d+1) × 4 jointly. Because 𝐃, 𝐖, 𝐓, and 𝐂 are coupled, it is natural to use a cascaded structure to jointly optimize them. Here, the cascaded structure is similar to <cit.>, where the features are concatenated and transformed for different MLP branches. §.§ Joint Optimization We have four modules: decomposition, fitting, embedding, and reconstruction. They are coupled to optimize 𝐃, 𝐖, 𝐓 and 𝐂 jointly by using our proposed four modules. §.§.§ Decomposition Module Degree classification. We use Bézier primitive with different degrees to replace classical primitives, including plane, sphere, plane, BSpline, etc. For the sake of the classification of degrees, the straightforward idea would be to use a cross-entropy loss: CE = -log(p_t), where p_t denotes the possibility of the true degree labels. 
However, the degree type is highly imbalanced. For example, surfaces of degree type 1-by-1 represent more than 50%, while 3-by-2 surfaces are rare. To deal with the imbalance, we utilize the multi-class focal-loss <cit.>: FL = -(1-p_t)^γlog(p_t), where γ denotes the focusing parameter. Then the degree type classification loss is defined as: L_deg = 1/N∑_i=0^N-1FL(𝐃_i,:) Primitive segmentation. The output of primitive segmentation is a soft membership indicating per-point primitive instance probabilities. Each element w_ik is the probability for a point p_i to be a member of primitive k. Since we can acquire pointwise patch labels from our data pre-processing, we use a relaxed IOU loss <cit.> to regress the 𝐖: L_seg = 1/K̂∑_k=0^K̂-1[1 - 𝐖_:,k^T Ŵ_:,k̂/𝐖_:,k_1 + Ŵ_:,k̂_1 - 𝐖_:,k^T Ŵ_:,k̂], where 𝐖 denotes the output of the neural network and 𝐖̂ is the one-hot encoding of the ground truth primitive instance labels. The best matching pairs (k, k̂) between prediction and ground truth are found via the Hungarian matching <cit.>. Please refer to <cit.> for more details. Soft voting regularizer. Since we learn 𝐃 and 𝐖 separately, points belonging to the same primitive instance may have different degrees, which is undesirable. To favor degree consistency between points assigned to the same primitive, we propose a soft voting regularizer that penalizes pointwise degree possibilities. We first compute a score for each degree case for all primitive instances by 𝐒 = 𝐖^T𝐃, where each element s_kd denotes the soft number of points for degree d in primitive instance k. We then perform L_1-normalization to convert 𝐒 into primitive degree distributions Ŝ: Ŝ = [1/∑_d=0S_kd] ⊙𝐒, where the first term denotes the sum of each column and ⊙ denotes the element-wise product. Finally, we utilize a focal loss to compute the primitive degree voting loss: L_voting = 1/K̂∑_k=0^K̂-1FL(Ŝ_k,:), where FL denotes the focal loss. The global loss for the decomposition module is defined as: L_dec= L_deg + L_seg + L_voting. §.§.§ Fitting Module Parameter regression. Through Bézier decomposition we obtain the ground truth labels for the (u, v) parameters and record all parameters into matrix 𝐓̂. We regress the uv parameters using a mean squared error (MSE) loss: L_para= 1/N∑_i=0^N-1𝐓_i,: - 𝐓̂_i,:_2^2 Control point regression. We select a maximum number of primitive instances K for all models. As the ground truth primitive instance K̂ varies for each model, we reuse the matching pairs directly from the Hungarian matching already computed in the primitive segmentation step. Note that as the predicted degree (d_u, d_v) may differ from the ground truth (d̂_̂û, d̂_̂v̂), we align the degree to compute the loss via a maximum operation as (max(d_u, d̂_̂û), max(d_v, d̂_̂v̂)). The network always outputs (M_d+1) × (N_d+1) control points for each primitive corresponding to the predefined maximum degree in U and V direction, and these control points will be truncated by the aligned degree. Furthermore, if the ground-truth degree is smaller than the prediction, we can pad “fake” control points that are zero for the ground-truth patch; otherwise, we just use the aligned degree, which is the maximum of the predicted and the ground truth. Finally, the control point loss is defined as: L_ctrl= 1/N_𝐜∑_t=0^N_𝐜-1𝐜_t - 𝐜̂_t_2^2, where 𝐜_t and 𝐜̂_t denote the matched control points, and N_𝐜 is the number of matched control point pairs. Finally, we define the L_fit loss as: L_fit = L_para + L_ctrl. 
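For concreteness, a minimal PyTorch-style sketch of the soft voting regularizer defined above is given below. This is our own illustrative rendering, not the released implementation; the tensor names are ours, and we assume the ground-truth degree indices arrive already ordered by the Hungarian matching used for L_seg.

```python
import torch

def soft_voting_loss(W, D, gt_degree, gamma=3.0, eps=1e-8):
    # W: (N, K) soft membership, D: (N, M) per-point degree probabilities.
    # gt_degree: (K_hat,) ground-truth degree index of each matched primitive,
    # assumed ordered consistently with the Hungarian matching used for L_seg.
    S = W.t() @ D                                    # S = W^T D, soft vote counts per primitive
    S_hat = S / (S.sum(dim=1, keepdim=True) + eps)   # L1-normalised degree distributions
    K_hat = gt_degree.shape[0]
    p_t = S_hat[:K_hat].gather(1, gt_degree.view(-1, 1)).squeeze(1).clamp_min(eps)
    return ((1.0 - p_t) ** gamma * (-p_t.log())).mean()   # focal loss with gamma = 3.0
```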
§.§.§ Embedding Module We use the embedding module to eliminate over-segmentation by pulling point-wise features toward their center and pushing apart different centers. Unlike ParSeNet and HPNet, 1) we do not need a mean-shift clustering step which is time-consuming; 2) we calculate the feature center in a weighted manner rather than simply averaging. The weights are chosen as 𝐖 and will be automatically updated in the decomposition module; 3) 𝐖 will be further optimized to improve the segmentation. Moreover, our embedding module is suitable for batch operations even though the number of primitive instances for each CAD model and the number of points for each primitive varies. Otherwise, one has to apply mean-shift for each primitive, which deteriorates timing further. To be specific, we use 𝐖 to weight 𝐗 to obtain primitive features for all candidate primitive instances. Then, we reuse 𝐖 to weigh all the primitive instance features to calculate a “soft” center feature for each point. We favor that each point feature embedding should be close to its “soft” center feature, and each primitive instance feature embedding should be far from each other. The primitive instance-wise feature matrix 𝐗_ins is defined as: 𝐗_ins = [1/∑_i=0^N-1w_ik] ⊙ (𝐖^T𝐗), where each row of 𝐗_ins denotes the instance-wise features for each patch. We then compute the “soft” center feature matrix 𝐗_center as: 𝐗_center = 𝐖𝐗_ins, where each row denotes the “soft” center for each point. Then we define L_pull as: L_pull = 1/N∑_i=0^N-1Relu(𝐗_i,: - (𝐗_center)_i,:_2^2 - δ_pull), and we define L_push as: L_push = 1/2K(K-1)∑_k_1<k_2Relu( δ_push - (𝐗_ins)_k_1,: - (𝐗_ins)_k_2,:_2^2 ). Finally, the total embedding loss L_emb is defined as: L_emb = L_pull + L_push. §.§.§ Reconstruction Module The reconstruction module is designed to reconstruct points from the predicted multiple Bézier primitives, i.e., rational Bézier patches, and further jointly optimize 𝐖. One difficulty is that each CAD model has various numbers of primitives, and the degree of each primitive is also different. Therefore, we seek a generalized formula to support tensor operations on re-evaluating points for a batch of CAD models. The straightforward approach would be to compute a synthesizing score for all degree types. Assume the maximum number of primitive instances is K, and we have M_d * N_d types of different degrees. The total number of combinations is K * M_d * N_d. We define a synthesizing score for each case in Einstein summation form: (s_w)_kci = w_ik * s_kc, where w_ik denotes the probability of point p_i to belong to primitive instance k and s_kc denotes the degree score for degree type m-by-n indexed with c = M * (m - 1) + (n - 1) for primitive instance k coming from 𝐒. Then, we need to normalize (s_w)_kdi such that ∑_k, d, i (s_w)_kdi = 1. Finally, the reconstructed point coordinates p_i are defined as: [ x_i'; y_i'; z_i'; ] = ∑_k,m,n(s_w)_kci𝐑_kmn(u_i,v_i), where parameter (u_i,v_i) for point p_i is shared for all combinations. Such a formulation makes extending the formula in matrix form easy and avoids resorting to loop operations. However, such an approach is too memory-intensive. We thus truncate the degree from the degree probability matrix by re-defining the Bernstein basis function for degree d as: (B_M)_d^l(t)= dlt^l(1-t)^d-l, l ≤ d 0, l > d , where 0 ≤ l ≤ M, and M is the maximum degree. 
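As an aside, a minimal numpy sketch of this truncated basis (with names of our choosing) shows how zero-padding for l > d lets patches of different degrees share one fixed-size tensor while the first d+1 entries still form a partition of unity:

```python
import numpy as np
from math import comb

def truncated_bernstein(t, d, M):
    # (B_M)_d^l(t) for l = 0..M: ordinary Bernstein basis of degree d, zero for l > d.
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros((M + 1, t.size))
    for l in range(d + 1):
        out[l] = comb(d, l) * t**l * (1.0 - t) ** (d - l)
    return out

B = truncated_bernstein(np.linspace(0.0, 1.0, 5), d=1, M=3)   # degree 1 padded to M = 3
assert np.allclose(B.sum(axis=0), 1.0)                        # rows 2 and 3 are identically zero
```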
Then, the reconstructed point coordinates for p_i for a degree m-by-n patch k is: [ x_i'; y_i'; z_i'; ] = ∑_m_i^M_d∑_n_i^N_d(B_M_d)_m^m_i(u)(B_N_d)_n^n_i(v)𝐜_m_in_i(c_w)_m_in_iw_ik/∑_m_i,n_i(B_M_d)_m^m_i(u)(B_N_d)_n^n_i(v)(c_w)_m_in_iw_ik, where 𝐜_m_in_i denotes the control point coordinates and (c_w)_m_in_i denotes its weight, and w_ik is the element of 𝐖. If we also input the normal (n_x_i, n_y_i, n_z_i) for point p_i, we can also reconstruct the normal (n_x_i', n_y_i', n_z_i') by: [ n_x_i'; n_y_i'; n_z_i'; ] = [ ∂ x_i'/∂ u; ∂ y_i'/∂ u; ∂ z_i'/∂ u; ]×[ ∂ x_i'/∂ v; ∂ y_i'/∂ v; ∂ z_i'/∂ v; ], where × denotes the cross product. 𝐩_i denotes the input point coordinates. 𝐩_i^* denotes the reconstructed point coordinates. 𝐧_p_i denotes the input point normals. 𝐧_p_i^* denotes the reconstructed normals. The coordinate loss is defined as: L_coord = 1/N∑_i=0^N-1𝐩_i- 𝐩_i^*_2^2. If we also input the normals, the normal loss is defined as: L_norm = 1/N∑_i=0^N-1(1 - |𝐧_p_i^T𝐧_p_i^*|). The loss for the reconstruction module is defined as: L_recon = L_coord, without normals, L_coord+L_norm, with normals. §.§.§ Total Loss The total loss is defined as the sum of decomposition, fitting, embedding, and reconstruction losses: L = L_dec + L_fit + L_emb + L_recon. We do not use different weights for each loss item because all point clouds are normalized into a unit sphere. Moreover, the uv parameters are outputted directly from a sigmoid layer, and the control points are outputted directly by a tanh layer. Thus, each loss item is almost at the same scale, so we do not need different weights for each loss item. Furthermore, we use different learning rates for different modules to balance the training. Specific training details are listed in section <ref>. § EXPERIMENTS §.§ Dataset Pre-Processing We evaluate our approach on the ABC dataset <cit.>. However, the ABC dataset does not have the annotations to learn Bézier decomposition on point clouds. Therefore, we do a pre-processing step. Specifically, we utilize the CGAL library <cit.> and OpenCascade library <cit.> to perform Bézier decomposition on STEP files directly and perform random sampling on the surface to obtain the following labels: point coordinates, point normals, point uv parameters, surface patch indices of the corresponding points, surface patch degrees, and surface patch control points. Finally, we use 5,200 CAD models for training and 1,300 CAD models for testing. Each CAD model contains randomly sampled 8,192 points (non-uniform) with annotations. §.§ Training Details We train a multi-task learning model. The learning rates differ depending on the MLP branch. The learning rate for the backbone, soft membership, and uv parameters is set to 10^-3, while the learning rate for the degree probabilities and control points is set to 10^-4. As we have several learning tasks that are not independent, we set a lower learning rate for loss items, such as degree probabilities which converges faster. We set γ as 3.0 for the focal loss, and δ_pull as 0 and δ_push as 2.0 for the embedding losses. We employ ADAM to train our network. The model is then trained using 150 epochs. §.§ Comparisons We compare our algorithm with SPFN, ParSeNet, and HPNet <cit.>. We use both points and normals for training all the algorithms. Since SPFN only supports four types of canonical primitives (plane, sphere, cone, and cylinder), we consider points belonging to other primitives falling out of the supported canonical primitive types as the “unknown” type. 
To make fair comparisons, we modify SPFN to let the network take point coordinates and normals as input for training. For ParSeNet, we only train the segmentation module on the ABC dataset. We use their pre-trained fitting model (SplineNet) directly. For HPNet, we also use the pre-trained fitting model directly, which is the same as ParSeNet. We observed that the output of HPNet is very sensitive to the number of points. In order to use HPNet at its best, we down-sample the point clouds to 7k points for training and testing. We choose the following evaluation metrics: * Primitive Type Accuracy (“Acc”): 1/K∑_k=0^K-1𝕀(t_k==t̂_k), where t_k and t̂_k are predicted primitive type and ground truth type, respectively. This is used to measure the type accuracy. Note that our primitive types differ from other baselines. * Rand Index (“RI”): a+b/c, where c is N2 denoting the total possible pairs for all points, and a denotes the number of pairs of points that are both in the same primitive of prediction and ground truth, while b denotes the number of pairs of points that are in a different primitive of prediction and ground truth. Rand index is a similarity measurement between two instances of data clustering, and a higher value means better performance <cit.>. * Normal Error (“Err”): 1/N∑_i=0^N-1arccos( |𝐧_p_i^T𝐧_p_i^*|), where 𝐧_p_i and 𝐧_p_i^* are ground truth and predicted unit normal, respectively. * Inference Time (“Time”): The inference time on the whole test dataset. * Average Primitive Number (“Num”): The predicted average number of primitives on the whole test data set. We record these evaluation metrics in table <ref> and <ref>. Figure <ref> shows visual depictions of the results. Our results show the best performance regarding primitive type accuracy, normal fitting error, and inference time. Our method is much faster for inference because it uses a general formula for different primitive types, and the embedding module is free of iterations. Other methods treat primitives with different equations, and ParSeNet and HPNet need a mean-shift step. Even though our approach may lead to more segmented primitives by the nature of Bézier decomposition, the evaluation metrics of primitive type accuracy and normal fitting error are computed in a point-wise manner. Thus, over-segmentation and under-segmentation will not lead to smaller or bigger errors due to fewer or more segmented primitives. We also show the performance of all the methods without normals as input. For our method and SPFN, we only input point coordinates into the neural networks but use normals as supervision. Since ParSeNet does not regress normals, we cannot use normals as supervision. We train ParSeNet without normals as input to test its performance. HPNet uses the network to regress the normals from the input and also utilizes the ground truth normals to construct an affinity matrix as a post-processing step for clustering. We modify HPNet to let the affinity matrix be constructed from the regressed normals instead of the ground-truth normals. Table <ref> records the evaluation metrics of each method. From the experiments, we deduce that normals are important for the task of parametric primitive segmentation. §.§ Ablation Studies We first conduct experiments to verify the usefulness of the soft voting regularizer. The soft voting regularizer favors point primitive type consistency for each primitive instance, i.e., points assigned to the same primitive instance should have the same primitive type. 
From our experiment, we find that the soft voting regularizer not only improves the primitive type accuracy but also accelerates training relaxed IOU. Please refer to figure <ref> and the last two rows of table <ref>. We also verify the functionalities of each module. If we only use the decomposition module, the result is not good even though the “Acc” and “RI” are slightly higher because the decomposition module ignores the fitting, limiting the segmentation applicable to specific datasets. The reconstruction module reduces the “Err” significantly compared to the fitting module because the reconstruction module controls how “well-fitted” a predicted Bézier primitive is to the input point clouds. In contrast, the fitting module only regresses the control points and uv parameters. The embedding module is designed to eliminate small patches that contain few points, seeing the “Num” column. Therefore, experimenting with the embedding module results in fewer patch numbers than its counterpart. To conclude, training with all the modules yields the best results. §.§ Stress Tests To test whether our algorithm can work in real-world scenarios, we show more results from the real-scan data from the Aim@Shape dataset <cit.>. The sampling is non-uniform, with missing data and measurement noise compared to the ABC dataset. Besides, We cannot train the network on those data directly because they lack ground-truth labels. Instead, we use the models trained on the ABC dataset and test the performance on real-scan data. Our algorithm still works, while other methods are sensitive. Another positive aspect is that our algorithm could decompose the axis-symmetric free-form point clouds with much smoother boundaries of different patches. Please refer to figure <ref>. We also test the performance of our network by adding Gaussian white noise. Specifically, we apply different scales of Gaussian white noise to the point coordinates after normalizing them into a unit sphere. The noise scale denotes the standard deviation of the Gaussian white noise. It ranges from 0.01 to 0.05. We train our network on noise-free data but test the network with Gaussian white noise. Please refer to table <ref>. §.§ Applications We can reconstruct the full Bézier model from the Bézier primitive segmentation. We do not follow ParSeNet to pre-train a model that outputs a fixed control point size. Instead, we reuse the rational Bézier patch to refit the canonical Bézier patch. We treat the degrees of the canonical Bézier patch the same as the rational Bézier patch. As a result, we fetch the segmentation and degrees of each patch predicted from the network. Then, we use the parameterization <cit.> to recompute uv parameters and least squares to refit control points for each patch. Each patch is expanded by enlarging the uv domain to guarantee intersections with its adjacent patches. After that, we use the CGAL co-refinement package <cit.> to detect intersecting polylines for adjacent tessellated patches and trim the tessellated patch with the intersected polylines. Our reconstructed full Bézier model can preserve the sharp features, while the boundaries of ParSeNet for different primitives are jaggy and thus fail to preserve the sharp features. Please refer to figure <ref>. § CONCLUSION This paper presents an end-to-end method to group points by learning Bézier decomposition. In contrast to approaches treating different geometric primitives separately, our method uses a general formulation for different primitive types. 
Regarding limitations, Bézier decomposition may naturally generate overly complex segmentations. In addition, we choose the rational Bézier patch as the primitive type. As the formulation is not linear, fitting the parametric patch is not direct. In future work, we wish to use the neural network to directly regress the canonical Bézier patch. § ACKNOWLEDGEMENTS This research is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 860843. The work of Pierre Alliez is also supported by the French government, through the 3IA Côte d'Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002.
http://arxiv.org/abs/2307.03954v1
20230708112025
Magnon influence on the superconducting DOS in FI/S bilayers
[ "A. S. Ianovskaia", "A. M. Bobkov", "I. V. Bobkova" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.mes-hall" ]
National Research University Higher School of Economics, Moscow, 101000 Russia Moscow Institute of Physics and Technology, Dolgoprudny, 141700 Russia Moscow Institute of Physics and Technology, Dolgoprudny, 141700 Russia Moscow Institute of Physics and Technology, Dolgoprudny, 141700 Russia National Research University Higher School of Economics, Moscow, 101000 Russia Heterostuctures superconductor/ferromagnetic insulator (FI/S) are paradigmic systems for studying mutual influence of superconductivity and magnetism via proximity effects. In particular, spin-split superconductivity is realized in such structures. Recent experiments and theories demonstrate a rich variety of transport phenomena occurring in devices based on such heterostructures that suggest direct applications in thermoelectricity, low-dissipative spintronics, radiation detection and sensing. In this work we investigate the influence of the electron-magnon interaction at the superconductor/ferromagnetic insulator interface on the spin-split superconductivity. It is predicted that due to the magnon-mediated electron spin-flip processes the spin-split quasiparticle branches are partially mixed and reconstructed, and the BCS-like spin-split shape of the superconducting DOS, which is typical for superconductors in the effective exchange field, is strongly modified. An odd-frequency superconducting order parameter admixture to the leading singlet order parameter is also found. These findings expand the physical picture of spin-split superconductivity beyond the mean-field description of the ferromagnet exchange field. Magnon influence on the superconducting DOS in FI/S bilayers I.V. Bobkova August 12, 2023 ============================================================ § INTRODUCTION Long ago it was demonstrated that the exchange field of ferromagnetic insulators (FIs), such as EuS and EuO, can spin-split the excitation spectrum of an adjacent thin-film superconductor <cit.>. The spin splitting in the DOS observed in those experiments resembles the spin splitting created by a strong in-plane field applied to a thin superconducting film. This discovery opened up the way for performing spin-polarized tunneling measurements without the need of applying large magnetic fields. A renewed interest in studying ferromagnetic/superconductor (F/S) structures came with active development of superconducting spintronics <cit.>, caloritronics and spin caloritronics <cit.>. In particular, in F/S structures with spin-split density of states (DOS) a series of promising phenomena have been studied. Among them are giant thermoelectric <cit.>, thermospin effects <cit.>, highly efficient thermally-induced domain wall motion <cit.>, spin and heat valves <cit.>, cooling at the nanoscale <cit.>, low-temperature thermometry and development of sensitive electron thermometers <cit.>. The spin-split DOS in F/S structures has also been explored in the presence of magnetic inhomogeneities, such as textured ferromagnets and domain walls <cit.>. Characteristic signatures of equal-spin triplet pairing were reported <cit.>. It was shown that the characteristic spatial and energy dependence of the spin-dependent DOS allows to tomographically extract the structure of the spin-triplet Cooper pairs <cit.>. Furthermore, the influence of the domain structure on the position-averaged superconducting DOS in FI/S bilayer was studied <cit.>. 
Another important direction in the field of F/S hybrid structures is investigation of interplay between the superconducting state and ferromagnetic excitations - magnons. A series of interesting results, presumably related to the influence of the superconductor on the magnon spectrum have been reported. In particular, it was found that the adjacent superconductor works as a spin sink strongly influencing Gilbert damping of the magnon modes <cit.> and can result in shifting of k = 0 magnon frequencies (Kittel mode) <cit.>. The electromagnetic interaction between magnons in ferromagnets and superconductors also results in appearance of magnon-fluxon excitations <cit.> and efficient gating of magnons <cit.>. Further it was reported that the magnetic proximity effect in thin film F/S hybrids results in appearing of magnon-cooparons, which are composed of a magnon in F and an accompanying cloud of spinful triplet pairs in S <cit.>. Some aspects of back influence of magnons on superconducting state have already been investigated. For example, a possible realization of the magnon-mediated superconductivity in F/S hybrids has been proposed <cit.>. At the same time, the influence of magnons via the magnetic proximity effect on the superconducting DOS practically has not yet been studied, although the electron-magnon interaction and influence of this interaction on the DOS in ferromagnetic metals have been investigated long ago <cit.>. Here we consider how the effects of electron-magnon interactions in FI/S thin-film hybrids manifest themselves in the superconducting DOS and quasiparticle spectra of the superconductor. It is found that the magnon-mediated electron spin-flip processes cause the interaction and mixing of the spin-split bands resulting in their reconstruction, which is especially important near the edge of the superconducting gap. We demonstrate that the classical BCS-like Zeeman-split shape of the superconducting DOS can be strongly modified due to the electron-magnon interaction and this modification is temperature-dependent. The influence of magnons on the temperature dependence of the Zeeman splitting of the DOS and relevance of our findings to existing and future experiments are also discussed. The paper is organized as follows. In Sec. <ref> we describe the system under consideration and the Green's functions formalism taking into account magnon self-energies. In Sec. <ref> the modifications of the quasiparticle spectra in the superconductor due to the electron-magnon coupling are discussed. In Sec. <ref> we study signatures of the electron-magnon interaction in the Zeeman-split superconducting DOS and their temperature dependence. Our conclusions are summarized in Sec. <ref>. § SYSTEM AND FORMALISM We consider a thin-film bilayer as depicted in Fig. <ref>, in which a ferromagnetic insulator FI is interfaced with a conventional spin-singlet s-wave superconductor S. The thickness of the S layer d_S is assumed to be small as compared to the superconducting coherence length ξ_S. In this case the S layer can be considered as homogeneous along the normal to the interface plane. The FI layer in its ground state is magnetized in-plane, along the z-direction. The Hamiltonian of the system takes the form: Ĥ=Ĥ_S+Ĥ_FI+Ĥ_ex, where Ĥ_S is the standard mean-field BCS Hamiltonian describing electrons in the superconducting film: Ĥ_S = ∑_ k σξ_ k c_ k σ^† c_ k σ - ∑_ kΔ c_ k↑^† c_- k↓^† - ∑_ kΔ^* c_- k↓ c_ k↑ . 
ξ_ k = k^2/2m - μ is the normal state kinetic energy of the electrons in the S layer, counted from the chemical potential of the superconductor μ. Δ is the superconducting order parameter in S, which assumed to be of conventional isotropic s-wave type. c_ k σ^+ and c_ k σ are creation and annihilation operators of electrons with the wave vector k and spin σ. Ĥ_FI describes magnons in the FI. Assuming easy-axis magnetic anisotropy in the FI it can be written as Ĥ_FI = ∑_ q (ω_0 + D q^2) b_ q^† b_ q, where b_ q^+ and b_ q are creation and annihilation operators of magnons in FI with wave vector q, ω_0 = |γ| (μ_0 H_0 + 2 K_a/M_s) is the magnonic frequency at q=0, D is the magnon stiffness constant, γ is the typically negative gyromagnetic ratio, M_s is the saturation magnetization, μ_0 is the permeability of free space, K_a is the easy-axis anisotropy constant and H_0 is the external field (can be equal to zero in our consideration). Electronic and magnonic wave vectors k and q are assumed to be two-dimensional (2D), that is the electrons and magnons can only propagate in plane of the FI/S interface. The wave functions along the y-direction, perpendicular to the interface, are assumed to be quantized. For simplicity, in the formulas we leave only one transverse magnon mode. In fact, we have checked that different modes give quantitatively different, but qualitatively the same contributions to considered self-energies. Their effect can be accounted for by multiplying our results for the self-energy corrections by an effective number of working transverse modes (see below). Ĥ_ex accounts for the exchange interaction between S and FI: Ĥ_ex = -J∫ d^2 ρ S_FI(ρ) s_e(ρ) , where ρ is a two-dimensional radius-vector at the interface plane, S_FI and s_e are the spin density operators in the FI and S, respectively. J is the interface exchange constant. By performing the Holstein-Primakoff transformation to the second order in the magnonic operators in Eq. (<ref>) one obtains Ĥ_ex = Ĥ_1 + Ĥ_2 + Ĥ_3, with Ĥ_1 = ∑_ k, k' U_ k, k'(c_ k, ↑^† c_ k', ↑-c_ k,↓^† c_ k',↓) , U_ k, k' = JM_s/2|γ|∫ d^2 ρΨ_ k^*(ρ) Ψ_ k'(ρ), Ĥ_2 = ∑_ k, k', q, q' T_ k, k', q, q' b_ q^† b_ q' (c_ k, ↑^† c_ k', ↑-c_ k,↓^† c_ k',↓), T_ k, k', q, q' = - J/2∫ d^2 ρΨ_ k^*(ρ) Ψ_ k'(ρ) ϕ_ q^*(ρ) ϕ_ q'(ρ), Ĥ_3 = ∑_ k, k', q V_ k, k', q (b_ q c_ k, ↑^† c_ k', ↓ + b_ q^† c_ k', ↓^† c_ k, ↑), V_ k, k', q = J √(M_s/2|γ|)∫ d^2 ρΨ_ k^*(ρ) Ψ_ k'(ρ) ϕ_ q(ρ) , where Ĥ_1 describes a spin-splitting of the electronic energy spectrum in S in the mean-field approximation. The second term Ĥ_2 represents the Ising-term, which physically accounts for the renormalization of the spin-splitting by magnonic contribution. Since the processes of the spin transfer between electrons and magnons are of primary importance for our consideration, when calculating the electronic Green's function we simplify this term by substituting the magnon operator b_ q^† b_ q by its averaged value ⟨ b_ q^† b_ q⟩ = n_ qδ_ q q', where n_ q is the density of magnons with wave vector q. The third term Ĥ_3 transfers spin between electron and magnon operators and will turn out to be the most significant for effects under consideration. 
If we choose the wave functions of electrons Ψ_ k(ρ) and magnons ϕ_ q(ρ) at the interface in the form of plane waves propagating along the interface, that is Ψ_ k(ρ)=(1/√(d_S))e^i k ρ and ϕ_ q(ρ)=(1/√(d_FI))e^i q ρ, then Ĥ_ex can be simplified: Ĥ_ex = Ũ∑_k (c_k, ↑^† c_k, ↑-c_k,↓^† c_k,↓) + V ∑_k, q (b_q c_k, ↑^† c_k-q, ↓ + b_q^† c_k-q, ↓^† c_k, ↑) , where Ũ = -J (M_s-N_m |γ|)/(2|γ|d_S ) is the averaged spin-splitting field in the superconductor renormalized by the magnon density N_m, and V = J√(M_s/2|γ|d_FI A)(1/d_S) is the electron-magnon coupling constant, where A is the area of the FI/S interface. Introducing the following Nambu-spinor Ψ̌_ k = (c_ k ↑, c_ k ↓, -c_- k ↓^†, c_- k ↑^†)^T, we define the Gor'kov Green's function in the Matsubara representation Ǧ_ k(τ) = -⟨ T_τΨ̌_ kΨ̌_ k^†⟩, where ⟨ T_τ ... ⟩ means imaginary time-ordered thermal averaging. Turning to the Matsubara frequency representation the Green's function obeys the following equation: (iω - ξ_k τ_z - Ũσ_z - Δτ_x - Σ̌_m )Ǧ_ k (ω) = 1, where ω is the fermionic Matsubara frequency, σ_i and τ_i (i=x,y,z) are Pauli matrices in spin and particle-hole spaces, respectively. Σ̌_m is the magnonic self-energy, which describes corrections to the electronic Green's function due to the electron-magnon interaction and in the framework of the self-consistent Born approximation takes the form: Σ̌_m = - V^2 T ∑_ q,Ω B_ q(Ω) {σ_+ Ǧ_ k- q (ω - Ω)σ_- + . . σ_- Ǧ_ k+ q (ω + Ω)σ_+} , where σ_± = (σ_x ± i σ_y), Ω is the bosonic Matsubara frequency and B_ q(Ω) = [iΩ - (ω_0+Dq^2)]^-1 is the magnonic Green's function. From the spin structure of Eq. (<ref>) it is seen that Σ̌_m is diagonal in spin space. For this reason the electronic Green's function, which is given by the solution of Eq. (<ref>) is also diagonal matrix in spin space and Eq. (<ref>) can be written for the both spin subbands separately: (iω - ξ_k τ_z - σŨ - Δτ_x - Σ̂_m, σ )Ĝ_ k, σ (ω) = 1, where Ĝ_ k, σ is 2 × 2 matrix in the particle-hole space corresponding to the electron spin σ = ↑, ↓. Σ̂_m,σ is also 2 × 2 matrix in the particle-hole space representing the magnonic self-energy for the given spin subband σ: Σ̂_m,σ = - V^2 T ∑_ q,Ω B_ q(Ω) Ĝ_ k-σ q, σ̅ (ω - σΩ). As a factor in the expressions σ means ± 1 for the spin-up (spin-down) subbands, and σ̅ means the opposite spin subband. The dimensionless coupling constant quantifying the strength of the electron-magnon coupling is K=V^2 A / 4 πħ v_F √(D Δ). Our numerical estimates made for the parameters corresponding to EuS/Al or YIG/Nb structures suggest that K should be rather small, K ≪ 1, for the detailed discussion of the numerical estimates see Sec. <ref>. The smallness of the electron-magnon coupling constant allows us to use non self-consistent Born approximation when calculating magnon self-energy. That is, we substitute Ĝ_ k - σ q, σ̅ by the bare superconducting Green's function obtained without taking into account the magnon self-energy Ĝ_ k - σ q, σ̅^(0) in Eq. (<ref>). Then the explicit solution of Eq. (<ref>) takes the form: Ĝ_ k,σ (ω) = i ω_ k, σ +ξ_ k, στ_z + Δ_ k, στ_x/(i ω_ k, σ)^2 - (ξ_ k, σ)^2 - (Δ_ k, σ)^2 . 
where all the quantities marked by are renormalized by the electron-magnon interaction as follows: Δ_ k, σ (ω) = Δ + δΔ_ k,σ(ω) = Δ - - V^2 T ∑_ q, Ω B_ q(Ω) Δ/(i ω - iσΩ +Uσ)^2 - ξ^2_ k-σ q - |Δ|^2 , ξ_ k, σ (ω) = ξ_ k + δξ_ k,σ(ω)= ξ_ k - - V^2 T ∑_ q, Ω B_ q(Ω) ξ_ k-σ q/(i ω - iσΩ +Uσ)^2 - ξ^2_ k-σ q - |Δ|^2 , ε_ k, σ (ω) = i ω - Uσ + δε_ k,σ(ω)= i ω - Uσ + + V^2 T ∑_ q, Ω B_ q(Ω) i ω - iσΩ +Uσ/(i ω - iσΩ +Uσ)^2 - ξ^2_ k-σ q - |Δ|^2 . For the problem under consideration all the in-plane directions of k are equivalent. For this reason the magnonic corrections only depend on the absolute value k of the wave vector. Further in order to study the quasiparticle spectra and density of states we turn from Matsubara frequencies to the real energies in the Green's functions i ω→ε + i δ, where δ is an infinitesimal positive number. The magnonic corrections for spin-up electrons δΔ_ k, ↑, δξ_ k, ↑ and δε_ k, ↑ are presented in Figs. <ref>-<ref> as functions of the quasiparticle energy ε and ξ_ k≡ξ, which after linearization in the vicinity of the Fermi surface takes the form ξ_ k ≈v_F ( k - k_F). The key features of the corrections, which can be see in the presented plots are: (i) The dependence of the corrections on ξ is very weak. The reason is that the most important range of the magnonic wave numbers contributing to the corrections is q ≲ 1/ξ_S, where ξ_S = v_F/Δ is the superconducting coherence length. Then taking parameters of the magnon spectrum corresponding to YIG ω_0,YIG∼ 10^-1Δ, D_YIG≈ 5*10^-40J*m^2 or EuS ω_0,EuS∼ 10^-2Δ, D_EuS≈ 3*10^-42J*m^2, we obtain that D q^2 ≪ω_0 to very good accuracy for all reasonable parameters. Consequently, one can disregard D q^2 with respect to ω_0 in the magnonic Green's function B_ q and after linearization of ξ_ k - σ q≈v_F ( k - σ q - k_F) in the vicinity of the Fermi surface we see that the dependence on k drops from Eqs. (<ref>)-(<ref>). (ii) The correction to the normal state electron dispersion δξ is small with respect to all other corrections and is neglected below. (iii) The important corrections δΔ and δε have peaks at the energies corresponding to the superconducting coherence peaks of the opposite spin subbands. While the coherence peaks for the spin-up subband are located at ε = ±Δ +Ũ, the peaks of the corrections are at ε = ±Δ -Ũ. It is an obvious consequence of the process of electron spin flip accompanied by emission or absorption of a magnon. (iv) Correction δΔ represents an effective contribution to the superconducting order parameter induced from the pure singlet pairing Δ via the electron-magnon interaction. It depends on the Matsubara frequency and contains both singlet and triplet components. As can be seen from Eq. (<ref>), the correction obeys the condition δΔ_↑(ω) = δΔ_↓(-ω). It means that the triplet component δΔ_t (ω) = δΔ_↑(ω) - δΔ_↓(ω) = -δΔ_t(-ω) works as an effective odd-frequency superconducting order parameter. This situation is rather unusual because typically in F/S hybrid systems we encounter an odd-frequency anomalous Green's function, but at the same time the order parameter is still even frequency in the framework of the conventional BCS weak coupling theory. § QUASIPARTICLE SPECTRA Now we turn to discussion of how quasiparticle spectra in the S layer are modified by the electron-magnon interaction. In Fig. <ref>(a) we present the spectral functions for the both spins in the S layer calculated from the Green's function (<ref>) according to the relation A_σ(ε, k) = -1/π Tr{1+τ_z/2 Im[Ĝ_ k,σ^R(ε)]}. 
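To make the last definition concrete, the following Python snippet evaluates A_σ(ε, ξ) in the bare limit V = 0, i.e. for the unperturbed spin-split BCS Green's function written above. This is our own illustrative sketch: the finite broadening δ and the value of Ũ are numerical choices for plotting, not values fixed by the text.

```python
import numpy as np

def bare_spectral_function(eps, xi, Delta=1.0, U=0.3, sigma=+1, delta=1e-2):
    # i*omega -> eps + i*delta; for V = 0 the only spin dependence is the shift -U*sigma.
    z = eps + 1j * delta - sigma * U
    G11 = (z + xi) / (z**2 - xi**2 - Delta**2)       # electron, (1 + tau_z)/2 component
    return -G11.imag / np.pi

eps = np.linspace(-3.0, 3.0, 1201)[:, None]          # energies in units of Delta
xi = np.linspace(-3.0, 3.0, 1201)[None, :]
A_up = bare_spectral_function(eps, xi, sigma=+1)     # coherence peaks at eps = +/- Delta + U for xi = 0
```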
The spectral function is isotropic in momentum space and for this reason we plot it as a function of ξ_ k≡ξ. The electron-like and hole-like quasiparticle branches are clearly seen at positive and negative energies, respectively. Black dashed lines represent the quasiparticle spectra in the absence of the electron-magnon interaction. The electron-magnon interaction leads to the following main modifications of the quasiparticle spectra: (i) The Zeeman splitting of spin-up and spin-down quasiparticle branches is reduced due to the magnon-mediated interaction between quasiparticles with opposite spins. (ii) For positive energy branches, corresponding to electron-like quasiparticles, the lifetime of spin-up quasiparticles and quasiparticles at the upper part of the spin-down branch is considerably suppressed, what is seen as a broadening of the corresponding branches. For negative energies, corresponding to hole-like quasiparticles, the situation is symmetric if we interchange spins. The broadening of the spin-down branch only occurs in the energy region, where the spin-up branch also exists. The physical reason is that the spin-flip processes providing the broadening are nearly horizontal due to the fact that ω_0 + Dq^2 ≪Δ, that is the magnon energies are small as compared Δ in the whole range of ξ, considered in Fig. (<ref>). The lower (upper) part of the spin-down (up) positive (negative) energy branch is not broadened because there are no available states for the opposite spin quasiparticles at the appropriate energies and, consequently, the spin-flip processes are not allowed. (iii) In Fig. <ref>(a) we also see a reconstruction of the spin-down spectral branch in the energy range of the bottom of the spin-up branch. In order to investigate this effect in more detail we plot the same figure on a logarithmic scale in Fig. <ref>(b), what allows to clearly see weak spectral features. Figs. <ref>(c) and (d) represent the spectral functions for the spin-up band on the normal and on the logarithmic scale, respectively. From Figs. <ref>(b) and (d) it is seen that due to the electron-magnon interaction in the energy region of the extremum of the spin-up (down) branch, a nonzero density of states appears for the opposite spin branch. It looks like a horizontal line starting from the bottom of the corresponding branch. This line is horizontal due to the independence of the electron-magnon self-energy corrections (<ref>) and (<ref>) on ξ. This mixing of the spin-up and spin-down bands resulting from the magnon-mediated spin-flip processes is natural and exists at all energies, but the spectral weight of the opposite spin branch is too small except for the regions of the extrema of the bands corresponding to the coherence peaks of the superconducting DOS. Intersection of the additional lines with the original spin-down band results in its reconstruction, which looks like an avoided crossing point. The results for the spectral function presented and discussed above correspond to T=0.1Δ. This temperature is higher than the gap in the magnonic spectrum ω_0=0.03Δ, which we take in our calculations. Therefore, a large number of thermal magnons are excited at this temperature. In Fig. <ref> the spectral function is demonstrated for lower temperature T=0.01Δ<ω_0. 
It is seen that the characteristic signatures of the magnon-mediated spin-flip processes, that is the mixing, reconstruction and broadening of the branches are much less pronounced due to the suppression of the thermally excited magnons at such low temperatures. § DOS IN THE PRESENCE OF MAGNONS Now we turn to discussion of the local density of states (LDOS) in the S layer, which is calculated as the momentum integrated spectral function: N(ε) = ∫d^2k/(2π)^2 A(ε, k). Fig. <ref>(a) demonstrates the LDOS in the presence of electron-magnon interaction (solid line) as compared to the LDOS calculated at V=0 (dashed line). The LDOS at V=0, that is calculated assuming mean-field approximation for the exchange field, takes the conventional BCS-like shape. It manifests Zeeman-split coherence peaks, and the outer peak is always higher than the inner one. The electron-magnon interaction inverts the relative ratio of the peak heights and broadens the outer peaks, while the width of the inner peaks remains unchanged. The reason is the same as for the broadening of the spectra in Fig. <ref>: electron spin-flip processes accompanied by a magnon emission or absorption. The outer coherence peaks in Fig.<ref>(a) correspond to the energy regions of the bottom (top) of the positive(negative)-energy spin-up(down) bands. This type of broadening, which only affects outer peaks, differs from the other physical mechanisms resulting in the broadening of the coherence peaks, such as the orbital effect of the magnetic field, inelastic scattering or magnetic impurities, which affect all the peaks <cit.> and can be roughly described by the Dynes parameter. The other important manifestation of the electron-magnon interaction is that the shape of the LDOS strongly depends on temperature even at very low temperatures ∼ω_0 ≪Δ, in agreement with the discussed above behavior of the spectral function. The temperature evolution of the LDOS is presented in Fig. <ref>. It is seen that the broadening of the outer peak develops with increasing temperature in the temperature range ∼ω_0. It is clear if we remember that the broadening is caused by the spin-flip processes, which are mediated by the thermally excited magnons. We do not consider larger temperatures T ≫ω_0 comparable to the critical temperature of the superconducting film because in this temperature range the temperature dependence of the superconducting gap comes into play and the correct consideration of the problem requires solving of the self-consistency equation for the order parameter. Now let us discuss numerical estimates of the dimensionless constant K=V^2 A / 4 πħ v_F √(D Δ), which controls the strength of the electron-magnon coupling. Substituting V = J√(M_s/2|γ|d_FI A)(1/d_S) and expressing the interface exchange coupling constant via the experimentally accessible quantity Ũ as |J| = 2 |γ| Ũ d_S/M_s (where to the leading approximation we neglect magnonic contribution to the magnetization), we obtain K = Ũ^2 (2|γ|/M_s) 1/(4 π√(DΔ)v_F d_FI) for one transverse magnon mode. The effective number of working transverse modes N_⊥∼ d_FI/a, where a is the interatomic distance in the ferromagnet. According to our estimates for d_FI≈ 10 nm N_⊥∼ 2 ÷ 5. 
One can take the following parameters for YIG/Nb heterostructures: Ũ/Δ = 0.5, v_F = 10^6m/s, Δ_Nb = 2.7*10^-22J, a=1.2 nm, 2|γ|/M_s = 3.3*10^-27m^3, D = D_bare,YIG-δ D_YIG, where D_bare,YIG = 5*10^-40J*m^2<cit.> is the exchange stiffness of YIG and δ D_YIG is the renormalization of the stiffness in FI/S bilayers due to formation of magnon-Cooparon quasiparticles <cit.>. As it was predicted <cit.>, for the material parameters of YIG/Nb heterostructures δ D_YIG can be ∼ (0.5–1) D_YIG,bare for d_FI∼ (1–0.5) d_S. Therefore, the electron-magnon coupling constant for YIG/Nb heterostructures can vary over a wide range, K_YIG/Nb≳ 10^-4. The values K ∼ 0.01 considered here can be realized in the regime of strong renormalization of the exchange stiffness constant D. For EuS/Al heterostructures one can take Ũ/Δ = 0.25 <cit.>, v_F = 10^6m/s, Δ_Al = 3.5*10^-23J, a=10^-10m, 2|γ|/M_s = 3.3*10^-28m^3, D = D_bare,EuS, where D_bare,EuS = 3*10^-42J*m^2<cit.>. The superconducting renormalization of the stiffness due to formation of magnon-Cooparon quasiparticles is predicted to be small for the parameters corresponding to EuS/Al heterostructures at reasonable thicknesses d_FI due to smaller values of Δ and larger M_s. Substituting these parameters into the expression for K, we conclude that for EuS/Al heterostructures K_EuS/Al∼ 10^-7–10^-6, that is, the electron-magnon effects are unlikely to be observed in such structures. In general, the electron-magnon effects in the LDOS and quasiparticle spectra should be more pronounced in ultra-thin superconducting films with high critical temperatures, where large absolute values of the effective exchange field Ũ can be realized. Smaller values of the exchange stiffness of the ferromagnet also enhance the effect. The manifestations of the electron-magnon coupling become more pronounced at T ≳ω_0 and grow with temperature. Now we discuss the influence of the electron-magnon interaction on the effective Zeeman splitting, which is defined as the distance between the split coherence peaks of the LDOS divided by 2. Experimentally, a low-temperature reduction of the effective Zeeman splitting at T ≪Δ has been reported for EuS/Al heterostructures <cit.>. It was ascribed to the presence of weakly bound spins at the EuS/Al interface. The renormalization of the effective exchange field in the superconductor by the thermal magnons can also contribute to this effect. Indeed, the fit of the experimentally observed temperature dependence of the distance between the Zeeman-split coherence peaks Δ V_peak(T) by 2|Ũ| = J (M_s-N_m |γ|)/(2|γ|d_S ) with the magnon density N_m = (1/S d_FI)∑_ q{exp[(ω_0+Dq^2)/T]-1}^-1 and ω_0 ≈ 0.03K is in reasonable agreement with the experimental data. In addition, the broadening of the outer coherence peaks, predicted in this work, leads to an enhancement of the distance between the spin-split coherence peaks. The broadening becomes stronger with increasing temperature. This effect leads to an apparent growth of the peak splitting with temperature and, therefore, acts opposite to the renormalization of the effective Zeeman field by magnons. However, our numerical estimates suggest that the temperature growth is unlikely to be observed, at least for heterostructures consisting of the materials discussed above, because the renormalization of the effective Zeeman field by magnons dominates.
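As a rough numerical cross-check of these order-of-magnitude estimates, the short sketch below evaluates K for the two material combinations from the parameters quoted above (Python). The assumed degree of stiffness renormalization for YIG/Nb and the effective number of transverse modes N_⊥ = 3 are illustrative choices rather than values fixed by the text.

```python
# Estimate of the dimensionless electron-magnon coupling per transverse mode,
# K = U^2 (2|gamma|/M_s) / (4 pi hbar sqrt(D Delta) v_F d_FI),
# using the material parameters quoted above.
import numpy as np

hbar = 1.0546e-34  # J s

def coupling_K(U, two_gamma_over_Ms, D, Delta, v_F=1e6, d_FI=10e-9):
    return U**2 * two_gamma_over_Ms / (4 * np.pi * hbar * np.sqrt(D * Delta) * v_F * d_FI)

# YIG/Nb: U = 0.5 Delta_Nb; assume 90% renormalization of the bare stiffness
Delta_Nb = 2.7e-22                                     # J
K_yig = coupling_K(0.5 * Delta_Nb, 3.3e-27, 0.1 * 5e-40, Delta_Nb)

# EuS/Al: U = 0.25 Delta_Al; bare stiffness, no renormalization
Delta_Al = 3.5e-23                                     # J
K_eus = coupling_K(0.25 * Delta_Al, 3.3e-28, 3e-42, Delta_Al)

N_perp = 3  # assumed effective number of transverse magnon modes (~ d_FI/a)
print(f"YIG/Nb : K ~ {K_yig:.1e} per mode, ~ {N_perp * K_yig:.1e} in total")
print(f"EuS/Al : K ~ {K_eus:.1e} per mode, ~ {N_perp * K_eus:.1e} in total")
```

With these illustrative inputs the totals come out near 10^-4 for YIG/Nb and a few times 10^-7 for EuS/Al, in line with the ranges quoted above; weaker stiffness renormalization correspondingly lowers the YIG/Nb value.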
§ CONCLUSIONS In this work we have studied the influence of the electron-magnon interaction at the superconductor/ferromagnetic insulator interface in thin-film FI/S heterostructures on the quasiparticle spectrum and the LDOS in the superconducting layer. It is predicted that, due to the magnon-mediated electron spin-flip processes, the spin-split quasiparticle branches are partially mixed and reconstructed. The reconstruction is most pronounced near the bottom of the energetically unfavorable spin band because of the enhanced density of electronic states there and the existence of available states in the opposite-spin band. The BCS-like Zeeman-split shape of the superconducting DOS, which is typical for superconductors in an effective exchange field, is strongly modified due to the electron-magnon interaction. The outer spin-split coherence peaks are broadened, while the inner peaks remain intact. This type of broadening is a clear signature of the magnon-mediated spin flips and differs strongly from other mechanisms of coherence-peak broadening, which usually affect all peaks. The broadening grows with temperature due to the thermal excitation of magnons. The features in the electronic DOS described above are mainly caused by the magnonic contributions to the electron self-energy that are diagonal in the particle-hole space, that is, by quasiparticle processes. Besides that, we have also found a magnonic contribution to the electronic self-energy that is off-diagonal in the particle-hole space. It mimics an odd-frequency superconducting order parameter admixture to the leading singlet order parameter. The study of its influence on the superconducting properties of the system may be an interesting direction for future research. § ACKNOWLEDGMENTS We acknowledge discussions of the exchange interaction Hamiltonian with Akashdeep Kamra. The work was supported by the Russian Science Foundation via RSF project No. 22-42-04408.
http://arxiv.org/abs/2307.04130v1
20230709090053
The 21-cm forest as a simultaneous probe of dark matter and cosmic heating history
[ "Yue Shao", "Yidong Xu", "Yougang Wang", "Wenxiu Yang", "Ran Li", "Xin Zhang", "Xuelei Chen" ]
astro-ph.CO
[ "astro-ph.CO" ]
* Key Laboratory of Cosmology and Astrophysics (Liaoning) & College of Sciences, Northeastern University, Shenyang 110819, China * National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China * Key Laboratory of Radio Astronomy and Technology, Chinese Academy of Sciences, A20 Datun Road, Chaoyang District, Beijing 100101, China * School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China * Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China * National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Northeastern University, Shenyang 110819, China * Key Laboratory of Data Analytics and Optimization for Smart Industry (Ministry of Education), Northeastern University, Shenyang 110819, China * Center for High Energy Physics, Peking University, Beijing 100871, China The absorption features in spectra of high-redshift background radio sources, caused by hyperfine structure lines of hydrogen atoms in the intervening structures, are known collectively as the 21-cm forest. They provide a unique probe of small-scale structures during the epoch of reionization, and can be used to constrain the properties of the dark matter (DM) thought to govern small-scale structure formation. However, the signals are easily suppressed by heating processes that are degenerate with a warm DM model. Here we propose a probe of both the DM particle mass and the heating history of the Universe, using the one-dimensional power spectrum of the 21-cm forest.
The one-dimensional power spectrum measurement not only breaks the DM model degeneracy but also increases the sensitivity, making the probe actually feasible. Making 21-cm forest observations with the upcoming Square Kilometre Array has the potential to simultaneously determine both the DM particle mass and the heating level in the early Universe, shedding light on the nature of DM and the first galaxies. The 21-cm line of neutral hydrogen (HI) traces various structures throughout cosmic history. Complementary to the 21-cm tomographic observation, the 21-cm absorption signal against high-redshift radio point sources probes intervening structures along individual lines of sight <cit.>. The structures located at different distances along the sightline resemble a forest structure on the background source spectrum, which is called the 21-cm forest in analogy to the Lyman α (Lyα) forest. The high frequency resolution of radio telescopes allows the 21-cm forest to be a promising probe of small-scale structures during the epoch of reionization (EoR) <cit.>. In warm dark matter (WDM) models, the small-scale power is suppressed by the free-streaming effect compared with the standard cold dark matter (CDM) model <cit.>. Using the Lyα forest as a tracer of small-scale structures, this effect has been used to constrain the WDM particle mass at low redshifts <cit.>. Similarly, the 21-cm forest can potentially be used deep into the EoR <cit.>, as the decreased number of low-mass halos leads to weaker 21-cm forest signals. Methods have been developed to improve the detection of the 21-cm forest signal <cit.>. However, the 21-cm forest signal can also be suppressed by heating effects during early galaxy formation <cit.>. While this means that it is a sensitive probe of the temperature of the intergalactic medium (IGM) <cit.>, it is degenerate with the WDM suppression effect <cit.>, making the interpretation of observations ambiguous. Nevertheless, the WDM mainly reduces the number density of 21-cm absorption lines <cit.>, whereas a higher IGM temperature suppresses both the absorption depth and the line number density <cit.>. This difference makes it possible to distinguish these two effects statistically. In this Article, we simulate 21-cm forest signals during the EoR under the influence of different dark matter (DM) particle masses and different heating histories of the IGM. We show that although the IGM heating and WDM both suppress the 21-cm signal, they behave differently. By measuring the one-dimensional (1D) power spectrum along lines of sight, it is possible to break the degeneracy, and constrain the DM particle mass and the IGM temperature (hence the early heating history) simultaneously. To simulate the 21-cm forest from the EoR, a high dynamic range is required to model the large-scale structures in density and ionization fields on ≳ 100 comoving-megaparsec scales, while resolving small-scale halos and their ambient gas on approximately kiloparsec scales. We use a hybrid approach to achieve this.
The cosmological evolution of large-scale density and ionization fields is simulated with the semi-numerical simulation 21cmFAST <cit.>, with a comoving box size of 1 Gpc and 500^3 grids, where the initial density fluctuations are set by DM properties, while each of the (2 Mpc)^3 grids is further divided into 500^3 voxels, and populated with halos of various masses according to the local grid density and the conditional halo mass function <cit.>, which depends on the matter power spectrum regulated by the DM particle mass. The density in each voxel is determined by the Navarro-Frenk-White profile <cit.> or the infall model profile <cit.> according to its distance to the nearest halo (Methods). § RESULTS Recent astrophysical observations have put lower limits on the WDM particle mass (m_ WDM) of a few kiloelectronvolts <cit.>. We simulate the 21-cm forest signals assuming m_ WDM = 10 keV, 6 keV, and 3 keV, respectively, to be compared with the signals from a CDM model. The 21-cm optical depth depends on the density, the neutral fraction of hydrogen gas, and the spin temperature T_ S. The density field and ionization field are simulated according to the DM properties as described above, with more details given in Methods. T_ S is assumed to be fully coupled to the gas kinetic temperature T_ K by the early Lyα background <cit.>, and T_ K is determined by the heating history of the IGM, or the virial temperature of halos, depending on the gas location (Methods). The heating history of the neutral IGM during the EoR is computed taking into account the adiabatic expansion of the universe, the Compton heating/cooling, and the X-ray heating. We model the X-ray emissivity as proportional to the formation rate of early non-linear structures <cit.>, normalized by an X-ray production efficiency parameter f_ X (Methods). Assuming an unheated IGM (f_ X = 0), the 21-cm optical depth (top panels) and the differential brightness temperature (negative, bottom panels) along a line of sight at z=9 are shown in Fig. <ref>, for CDM (left column) and various WDM particle masses (right columns), respectively. In the lower panels, the flux density of the background source, scaled to 150 MHz assuming a power-law spectrum, is assumed to be S_150 = 1 mJy, 10 mJy, and 100 mJy, from top to bottom, respectively. The overall 21-cm absorption depth in WDM models is comparable to the signal level in the CDM model, both corresponding to the absorption depth by the unheated IGM. However, the small-scale fluctuations are notably reduced in WDM models, due to the more suppressed formation of low-mass halos. Note that the major contribution to the 21-cm forest signal is from the overdense gas in the halo surroundings which is not heated by virialization shocks <cit.>. These small-scale fluctuations are also suppressed, resulting in sparser absorption lines in the spectra. Figure <ref> shows the 21-cm optical depth (top panels) and brightness temperature (bottom panels) spectra at z∼ 9 in the CDM model, assuming different X-ray efficiency parameters. As f_ X increases, the IGM is increasingly heated, increasing the spin temperature and notably reducing the 21-cm forest signal.
The dotted and dashed lines in the lower panels correspond to the thermal noise levels expected for phase-one and phase-two low-frequency arrays of the Square Kilometre Array (denoted by SKA1-LOW and SKA2-LOW), for which array sensitivities of A_ eff / T_ sys = 800 m^2 K^-1 <cit.> and 4000 m^2 K^-1 <cit.> (with A_ eff being the total effective area and T_ sys being the system temperature) are adopted, respectively. For both arrays, we assume a maximum baseline of 65 km, a channel width of 1 kHz, and an integration time of 100 hours (hr). For the case with negligible early X-rays, the 21-cm forest signal can be marginally detected by the SKA1-LOW for sources with S_150∼ 1 mJy, while the same signal will be easily detected with SKA2-LOW. However, the heating will notably diminish the detectability of individual absorption lines, weakening the probing power of the 21-cm forest on either the DM properties, or the thermal history of the IGM. Even if f_ X = 0.1, i.e. the early star formation has only ∼ 10% X-ray productivity as that of nearby starburst galaxies, the IGM will be heated to about 56 K at z = 9, then direct measurement of the 21-cm forest would only be possible for extremely bright quasars with S_150≳ 100 mJy for SKA1-LOW, or S_150≳ 10 mJy for SKA2-LOW, otherwise a much longer integration time would be required. If f_ X≳ 1, the IGM would be heated to ≳ 650 K at z = 9, then direct detection of the forest signal will be challenging even for SKA2-LOW. The heating would be weaker at higher redshifts, but then it would be more difficult to find a suitable quasar as background source. Moreover, only the fluctuating part of the absorption is measurable in 21-cm forest observation, while the overall absorption depth from the homogeneous IGM would be effectively subtracted when comparing with the intrinsic continuum <cit.>. If we simply count the absorption lines with a certain threshold of optical depth or equivalent width, the effects of a WDM model and a more heated IGM would be degenerate, both reducing the number of detectable absorbers <cit.>. A statistical variable with more distinguishing power is needed. As we shall show below, the 1D power spectrum of 21-cm forest along the line of sight <cit.> can serve this purpose. The left panel of Fig. <ref> compares the 1D power spectra of 21-cm forest in the CDM model with different f_ X. The 21-cm optical depth is inversely proportional to the gas temperature, and proportional to the density. As f_ X increases, the IGM is increasingly heated, and the 1D power spectrum is notably suppressed on all scales. When the IGM is cold, the high contrast in temperature between gas in halos and gas in the IGM far from halos dominates the large-scale fluctuations in the optical depth, with typical scales corresponding to the clustering scales of halos of various masses. As f_ X increases from 0 to 1, the IGM far from halos with the lowest temperature is heated first, suppressing the temperature contrast on scales of halo clustering, which results in the flattening of 1D power spectrum on large scales. When f_ X = 1, the IGM temperature is about 650 K at z = 9, comparable to the virial temperature (∼ 1000 K) of the smallest halos holding gas (with mass M_ min∼ 10^6 M_⊙, Methods), then the large-scale fluctuations in the temperature are mostly smoothed, leaving only a flatter power spectrum originated from density fluctuations. The 21-cm forest and its 1D power spectrum are further reduced when f_ X increases from 1 to 3. 
The 1D power spectra all drop off on small scales corresponding to the clustering scale of the smallest halos holding gas, and the cut-off at the small-scale end is set by the spectral resolution assumed. The right panel of Fig. <ref> shows the results for different DM properties assuming an un-heated IGM (f_ X = 0). The lower m_ WDM results in a much lower level of small-scale density fluctuations, thus suppressing the small-scale 21-cm forest power spectrum. Note that with the same thermal history, the overall amplitude of the 1D power spectrum remains similar for different m_ WDM, while the slope will be steeper for a warmer DM model. This behavior is distinct from the heating effect, which suppresses the 1D power spectrum more dramatically on all scales. The dotted and dashed lines in Fig. <ref> indicate the thermal noise in the power spectrum, P^ N, expected for SKA1-LOW and SKA2-LOW respectively, utilizing 10 background sources. The error bars include both the thermal noise of SKA2-LOW and the sample variance (see Methods). As shown in Fig. <ref>, for a background source with S_150 = 10 mJy, direct measurement of 21-cm forest becomes difficult if f_ X≳ 0.1, and almost impossible even for SKA2-LOW if f_ X≳ 1. However, the 1D power spectrum of 21-cm forest can be measured precisely by SKA1-LOW over a broad range of wavenumber k if f_ X∼ 0.1, and it is still detectable by SKA2-LOW with S_150∼ 10 mJy sources even if f_ X = 3 at z=9. This is because the absorption appears as an increased variance and can be measured statistically from the power spectrum even if individual absorbers are too weak to be detected with notableness <cit.>. The 1D power spectrum measurement also allows extraction of the scale-dependent information encoded in the density and temperature fields, in contrast to the flatter thermal noise. So the observation of the 21-cm forest by 1D power spectrum is not only more feasible, but also has better discriminating power for the effects of IGM heating and the WDM. Fig. <ref> shows the 1D power spectra for different f_ X and m_ WDM assuming S_ 150 = 1 mJy, 10 mJy, and 100 mJy, respectively. Using 1D power spectrum, with 10 background sources of S_ 150∼ 1 mJy and a moderate integration time of ∼ 100 hr, the 21-cm forest signal will be detectable by SKA2-LOW if f_ X≲ 0.1, for all DM particle masses considered here. For brighter sources with S_ 150≳ 10 mJy, the full shape of 1D power spectrum can be well characterized, and a broader range of possible f_ X values can be probed. Therefore the 21-cm forest 1D power spectrum will not only break the degeneracy between the effects of WDM and heating, but also be vital to make the probe feasible in practice. The Universe may also accommodate both a heated IGM and WDM particles, both regulating the amplitude and shape of the 1D power spectrum of 21-cm forest. We simulate the signals for various combinations of f_ X and m_ WDM values, and measure the amplitude P and the slope β = dlogP(k)/ dlogk of the 1D power spectra at k = 40 Mpc^-1. The top panels of Fig. <ref> show that the amplitude of the 1D power spectra roughly determines f_ X, or the IGM temperature, with a weak degeneracy between a higher f_ X and a smaller m_ WDM. On the other hand, the slope in the bottom panels shows a different degeneracy; a flatter power spectrum indicates a higher f_ X and/or a larger m_ WDM, while a steeper one implies a lower f_ X and/or a smaller m_ WDM. 
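In practice, the amplitude P and the logarithmic slope β = dlogP/dlogk at k = 40 Mpc^-1 can be extracted from the binned spectra with a local power-law fit, and the same parameter derivatives enter the Fisher forecast discussed below. A minimal sketch is given here (Python); model_P is a toy stand-in for the simulated 1D power spectra, and the error model is purely illustrative.

```python
# Sketch: amplitude/log-slope extraction at k = 40 Mpc^-1 and a simple
# two-parameter (m_WDM, T_K) Fisher forecast with Gaussian errors.
import numpy as np

def log_slope(k, P, k0=40.0):
    """beta = dlogP/dlogk at k0, from a local power-law fit."""
    lnk, lnP = np.log(k), np.log(P)
    i = int(np.argmin(np.abs(k - k0)))
    sel = slice(max(i - 2, 0), i + 3)              # a few neighbouring k-bins
    return np.polyfit(lnk[sel], lnP[sel], 1)[0]

def fisher(k, sigma_P, model_P, theta0, dtheta):
    """F_ij = sum_k dP/dtheta_i dP/dtheta_j / sigma_P(k)^2."""
    derivs = []
    for j, dt in enumerate(dtheta):
        up, dn = list(theta0), list(theta0)
        up[j] += dt
        dn[j] -= dt
        derivs.append((model_P(k, *up) - model_P(k, *dn)) / (2 * dt))
    return np.array([[np.sum(a * b / sigma_P**2) for b in derivs] for a in derivs])

# toy stand-in for the simulated spectra: amplitude set by T_K, slope by m_WDM
def model_P(k, m_wdm, T_K):
    return 1e3 * (60.0 / T_K)**2 * (k / 40.0)**(-1.5 - 3.0 / m_wdm)

k = np.logspace(0.0, 2.0, 30)                      # Mpc^-1
P_fid = model_P(k, 6.0, 60.0)
beta = log_slope(k, P_fid)                         # log-slope at k = 40 Mpc^-1
F = fisher(k, 0.1 * P_fid, model_P, theta0=(6.0, 60.0), dtheta=(0.3, 3.0))
errs = np.sqrt(np.diag(np.linalg.inv(F)))          # marginalized 1-sigma errors
print(beta, errs)
```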
Therefore, the amplitude and slope of the 21-cm forest 1D power spectrum can serve as diagnostics of the DM particle mass and the IGM temperature. When combined, one can effectively break the degeneracy and determine f_ X and m_ WDM simultaneously. With the 21-cm forest 1D power spectrum measured from 100 neutral patches of 10 comoving megaparsecs at z = 9, we use the Fisher matrix formalism to forecast constraints on m_ WDM and T_ K as expected for both SKA1-LOW and SKA2-LOW, including the thermal noise and sample variance. Fig. <ref> shows that if the IGM was only weakly heated, then very tight constraints can be put on both m_ WDM and T_ K, with σ_m_WDM = 1.3 keV and σ_T_K = 3.7 K for the fiducial model of m_ WDM = 6 keV and T_ K = 60 K after a total observation time of δ t = 100 hr on each source using SKA1-LOW, and σ_m_WDM = 0.3 keV and σ_T_K = 0.6 K using SKA2-LOW. σ_m_ WDM and σ_T_ K are marginalized absolute errors. If the IGM was heated up to 600 K at z = 9 (corresponding to f_ X = 1), then SKA2-LOW would be required, and we expect to have σ_m_WDM = 0.6 keV and σ_T_K = 88 K. The probe is more sensitive for lower values of m_ WDM. Note that these constraints can be obtained by measurements on segments of neutral patches along sightlines against 10 background sources with S_ 150 = 10 mJy. The constraints would be better if more sources at different redshifts, or brighter sources, are available. § DISCUSSION The 21-cm signal from the EoR can potentially be used to constrain DM properties <cit.>, but the degeneracies with astrophysical effects can be an obstacle <cit.>. During the EoR, there are various feedback effects <cit.>. Here we consider primarily radiative feedbacks, including Lyα photons coupling T_ S to T_ K, ionizing photons determining the large-scale ionization field, and X-ray photons heating the IGM. The mechanical and chemical feedbacks affect the density profiles and the cooling mechanisms, but have minor influences on the 21-cm forest. The main focus of this work is the heating effect that is most important in reducing the 21-cm forest signal and is degenerate with the WDM effect. Using a set of semi-numerical simulations covering a high dynamic range, we show that both the presence of WDM and early X-ray heating can reduce the number of observable 21-cm absorbers. This degeneracy hinders the 21-cm forest from being an effective probe of either the DM properties or the thermal history of the universe. We have demonstrated that the 1D power spectrum of the 21-cm forest is a good observable to break this degeneracy, and is even effective in high heating-rate cases in which the number of 21-cm forest lines is severely diminished. By quantifying the fluctuations, the 1D power spectrum of the 21-cm forest is also immune to subtraction of the overall absorption from the homogeneous IGM in practical observations. The DM particle mass and the IGM temperature at a specific redshift can be simultaneously constrained. Although in our simulation the gas density profile surrounding a halo is based on simple models, this does not have much impact on the number density and the clustering properties of absorption lines, which determine the main characteristics of the 1D power spectrum. We also note that the overall signal level is dependent on the local density δ_0 in the large-scale environment. We investigate the effect of local density by computing the 21-cm forest signals on different grids, with various densities on the 2 Mpc scale. As shown in Extended Data Figs.
1 and 2, the local density affects the overall magnitude of signals, but the effect is much weaker than the heating, even in the extreme case of δ_0 = 2 in a grid of ∼ 2 Mpc. Meanwhile, the local density has almost negligible effect on the shape of 1D power spectrum, making the effect distinguishable from the WDM effect. While direct detection of individual 21-cm absorption lines will be challenging if the early IGM is heated, the 1D power spectrum measurement is more promising. The observation relies on the availability of high-redshift radio-bright sources prior to reionization. Quite a number of radio-loud quasars have been detected beyond redshift 5 <cit.>, including nine at z > 6<cit.>. A few hundred radio quasars with > 8 mJy at z ∼ 6 are expected to be spectroscopically observed in the near future <cit.>. As there is no evidence for the evolution in the radio loudness fraction of high-z quasars <cit.>, one can expect about ∼ 2000 sources with > 6 mJy at 8 < z < 12 <cit.>. The long-duration gamma-ray bursts (GRBs) are also possible high-redshift sources. Several cases have been discovered beyond redshift 8 <cit.>. For future missions like the High-z Gamma-ray bursts for Unraveling the Dark Ages Mission and the Transient High-Energy Sky and Early Universe Surveyor, the expected detection rate of luminous GRBs from Population III stars is 3 – 20 yr^-1 at z > 8 <cit.>. Given the higher sensitivity of 1D power spectrum observation, radio afterglows of high-z GRBs could also be used. The fast radio bursts, though brighter, are however too brief to allow long integration required. Current combination of astrophysical probes of strong gravitational lensing, Lyα forest, and luminous satellites of our Galaxy indicates that m_ WDM may be larger than 6 keV<cit.>, but models with m_ WDM of a few keV are still not excluded. On the other hand, tomographic 21-cm power spectrum measurement, in combination with complementary probes, yield a constraint on the IGM temperature of 8.9 K < T_ K < 1.3× 10^3 K at z∼ 8 at 68% confidence<cit.>. With the upcoming SKA-LOW, the 21-cm forest observation, especially the 1D power spectrum, can improve the constraints on both the properties of DM and the thermal history of the early universe simultaneously, providing an effective probe to the DM in an unexplored era in the structure formation history, and to the first galaxies interplaying with the early IGM. § METHODS §.§ The 21-cm forest signal. Using high-redshift quasars or radio afterglows of GRBs as background radio sources <cit.>, the HI in halos and in the IGM absorbs 21-cm photons along the line of sight. The 21-cm forest signal is the flux decrements due to 21-cm absorption with respect to the continuum of a background radio source, which in the Rayleigh-Jeans limit is characterized by the differential brightness temperature. In the optically-thin limit, which is usually the case for the 21-cm transition, the observed differential brightness of the 21-cm absorption signal, relative to the brightness temperature of the background radiation T_γ(ŝ, ν_0, z) at a specific direction ŝ and redshift z, is δ T_ b(ŝ, ν) ≈T_ S(ŝ, z)-T_γ(ŝ, ν_0, z)/1+zτ_ν_0(ŝ, z). Here ν_0 = 1420.4 MHz is the rest-frame frequency of 21-cm photons, T_ S is the spin temperature of the absorbing HI gas, and τ_ν_0 is the 21-cm optical depth. 
In terms of the average gas properties within each voxel, the 21-cm optical depth can be written as <cit.> τ_ν_0(ŝ, z) ≈ 0.0085[1+δ(ŝ, z)] (1+z)^3/2[x_ HI(ŝ, z)/T_ S(ŝ, z)] [H(z) /(1+z)/ d v_ / d r_] (Ω_ bh^2/0.022)(0.14/Ω_ mh^2), where δ(ŝ, z), x_ HI(ŝ, z), and H(z) are the gas overdensity, the neutral fraction of hydrogen gas, and the Hubble parameter, respectively, and d v_/ d r_ is the gradient of the proper velocity projected to the line of sight. Ω_ b, Ω_m and h are baryon density parameter, matter density parameter and dimensionless Hubble constant, respectively. The brightness temperature of the background radiation at the rest frame of the 21-cm absorption T_γ(ŝ, ν_0, z) is related to the observed brightness temperature at a redshifted frequency ν, T_γ(ŝ, ν, z=0), by T_γ(ŝ, ν_0, z)=(1+z) T_γ(ŝ, ν, z=0), and it has contributions from both the background point source and the cosmic microwave background (CMB), i.e. T_γ(ŝ, ν, z=0)=T_ rad(ŝ, ν, z=0)+T_ CMB(z=0), where T_ rad(ŝ, ν, z=0) represents the observed brightness temperature of the point source, and it usually dominates over the CMB temperature (T_ CMB). For a given radio telescope resolving a solid angle of Ω, the observed brightness temperature of a source is related to the flux density S_ rad(ν) by T_ rad(ŝ, ν, z=0) =c^2/2 k_ Bν^2S_ rad(ν)/Ω, where c is the speed of light and k_ B is the Boltzmann constant. The flux density of the background source is modeled to have a power-law spectrum scaled to 150 MHz, i.e. S_ rad(ν) = S_150(ν / ν_150)^η <cit.>, where ν_150 = 150 MHz and a spectral index of η=-1.05 is assumed as appropriate for a powerful radio source like Cygnus A <cit.>. Note that the spectral index of high-redshift quasars has a large scatter, and their spectra may be flatter than Cygnus A at low frequencies <cit.>, but the detailed spectral index makes only a negligible difference to our results. In this work, we take the flux densities of S_150 = 1 mJy, 10 mJy, and 100 mJy for the background point sources as examples, and assume the maximum baseline of 65 km for both the SKA1-LOW and SKA2-LOW for calculating the angular resolution for a given redshift. Assuming that T_ S is fully coupled to T_ K by the early Lyα background, the 21-cm optical depth τ_ν_0 and the forest signal δ T_ b are then dependent on the density δ, neutral fraction x_ HI, gas temperature T_ K, and the velocity gradient d v_/ d r_, of each voxel along the line of sight. Here we account only for the Hubble expansion for the velocity field, but neglect the peculiar velocity, as the peculiar velocity mainly shifts the contribution of the absorption from individual segments of gas. We note that the peculiar velocity may affect the individual line profiles <cit.>, but we expect that its effect on the overall amplitude of the signal and the 1D power spectrum is small. The density field, ionization field, and the gas temperature field are modeled as follows. Throughout this study, we adopted the set of cosmological parameters consistent with the Planck 2018 results<cit.>: Ω_ m = 0.3153, Ω_ b h^2 = 0.02236, Ω_Λ = 0.6847, h = 0.6736, σ_8 = 0.8111. Ω_Λ and σ_8 are dark-energy density parameter and matter fluctuation amplitude, respectively. §.§ The density field. The evolution of the large-scale density field is simulated with linear theory using the 21cmFAST <cit.>, for both the CDM and WDM models. The simulation box has a comoving size of (1 Gpc)^3, and (500)^3 grids. The influence of DM properties on the density field is mainly on small scales. 
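As a concrete illustration of the signal model described in this subsection, the sketch below evaluates the optical depth and the brightness-temperature decrement of a single voxel against a background point source (Python). A pure Hubble-flow velocity gradient is assumed (so the bracket in the optical-depth expression equals unity), and the example values of δ, x_HI, T_S and S_150 are illustrative rather than taken from the simulations.

```python
# Minimal sketch: 21-cm optical depth and brightness-temperature decrement of one
# voxel, following the expressions above. Hubble flow assumed; inputs illustrative.
import numpy as np

c, k_B = 2.998e8, 1.381e-23               # SI units
nu0 = 1420.4e6                            # Hz
Om, h, T_cmb0 = 0.3153, 0.6736, 2.725
Ob = 0.02236 / h**2

def tau_21(delta, x_HI, T_S, z):
    """21-cm optical depth of a voxel (pure Hubble-flow velocity gradient)."""
    return (0.0085 * (1 + delta) * (1 + z)**1.5 * (x_HI / T_S)
            * (Ob * h**2 / 0.022) * (0.14 / (Om * h**2)))

def T_source(S150_mJy, z, D_max=65e3, index=-1.05):
    """Beam-averaged brightness temperature of the background source (z = 0 frame)."""
    nu = nu0 / (1 + z)
    S = S150_mJy * 1e-29 * (nu / 150e6)**index      # W m^-2 Hz^-1
    lam = c / nu
    Omega = np.pi * (1.22 * lam / D_max / 2)**2     # beam solid angle [sr]
    return c**2 * S / (2 * k_B * nu**2 * Omega)

def dTb(delta, x_HI, T_S, z, S150_mJy):
    """Observed differential brightness temperature (negative for absorption) [K]."""
    T_gamma = (1 + z) * (T_source(S150_mJy, z) + T_cmb0)   # at the absorber redshift
    return (T_S - T_gamma) / (1 + z) * tau_21(delta, x_HI, T_S, z)

# example: mean-density neutral voxel, T_S = 10 K, 10 mJy source at z = 9
print(tau_21(0.0, 1.0, 10.0, 9.0), dTb(0.0, 1.0, 10.0, 9.0, 10.0))
```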
In each of the 2 Mpc grids, the small-scale density distribution is simulated by randomly populating halos according to the conditional halo mass function and the local density of the grid from the 21cmFAST simulation, and assigning density profiles to the gas in the halos as well as in the IGM, as detailed below. §.§.§ Halo mass function. In the framework of the CDM model, the number density of halos per mass interval in the range (M, M + dM), in a simulation grid with mass M_0 and overdensity δ_0 at redshift z, can be modeled by the conditional halo mass function <cit.> of the Press-Schechter form <cit.>, i.e. d n(M|δ_0,M_0;z)/ d M=√(1/2 π)ρ̅_ m0 (1+δ_0)/M| d S/ d M| δ_ c(z) - δ_0/(S-S_0)^3/2 exp{-[δ_ c(z) - δ_0]^2/2(S-S_0)}, where ρ̅_ m0 is the average density of matter in the universe today, S=σ^2(M) is the variance of mass scale M, S_0=σ^2(M_0), and δ_ c(z)=1.686/D(z) is the critical overdensity for collapse at redshift z extrapolated to the present time using the linear theory, in which D(z) is the linear growth factor. In the WDM model, the structure formation is suppressed below the free streaming scale λ_ fs of DM particles, and the conditional halo mass function can be approximately written as <cit.> d n(M|δ_0,M_0;z)/ d M=1/2{1+erf[log _10(M / M_ fs)/σ_log M]}[ d n (M|δ_0,M_0;z)/ d M]_ PS, where σ_log M=0.5, and M_ fs is the suppressing mass scale of halo formation corresponding to λ_ fs, i.e. M_ fs=4 π (λ_ fs/2)^3ρ_ m0 /3. PS represents Press-Schechter form in CDM model. The comoving free streaming scale is approximately <cit.> λ_ fs≈ 0.11(Ω_ WDM h^2/0.15)^1 / 3(m_ WDM/ keV)^-4 / 3( Mpc), where Ω_ WDM is the WDM density normalized by the critical density. The Press-Schechter mass function [ d n (M|δ_0,M_0;z)/ d M]_ PS in Eq. (<ref>) takes the form of Eq. (<ref>), but the variance of density fluctuations is evaluated with the matter power spectrum fitted for WDM <cit.>: P_ WDM(k)=P_ CDM(k){[1+(α k)^2 β]^-5 / β}^2, where β = 1.12 and α is given by <cit.> α=0.049(m_ WDM/ keV)^-1.11(Ω_ WDM/0.25)^0.11(h/0.7)^1.22 h^-1 ( Mpc). Supplementary Fig. 1 shows the halo mass function, evaluated at δ_0 = 0 and S_0 = 0, for both CDM and WDM models. The halo number is obviously suppressed below the free streaming scale in the WDM models, with the lower m_ WDM resulting in larger suppressing scale. Especially, the WDM models notably reduce the total number of halos by suppressing the small ones, thus suppressing the small-scale fluctuations in the neutral hydrogen density, which have a major contribution to the 21-cm forest signals. The major contribution to the 21-cm forest signal comes from the gas in and around the large number of low-mass halos that are not producing ionizing photons and reside in neutral environments<cit.>. Therefore, we focus on neutral patches along a given line of sight, and select neutral grids from the large-scale ionization field simulated by 21cmFAST. Then we randomly populate each of these 2 Mpc grids with halos according to the conditional mass function determined by the DM models. We consider only the halos with the mass upper limit M_4 corresponding to the virial temperature of T_ vir = 10^4 K, so that the atomic cooling is not efficient enough to enable substantial star formation. The lower limit of halo mass M_ min is set by the filtering mass scale, so that the halos could retain most of its gas and the gas in the ambient IGM to contribute to the 21-cm absorption. 
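The WDM ingredients entering the conditional mass function above can be evaluated directly; the sketch below (Python) implements the free-streaming length, the corresponding suppression mass, the erf correction to the Press-Schechter mass function, and the WDM-to-CDM ratio of the matter power spectrum for the particle masses considered in this work.

```python
# Sketch of the WDM suppression ingredients quoted above.
import numpy as np
from scipy.special import erf

h, Om = 0.6736, 0.3153
Ob = 0.02236 / h**2
Owdm = Om - Ob
rho_m0 = 2.775e11 * h**2 * Om                 # mean matter density today [Msun/Mpc^3]

def lambda_fs(m_keV):
    """Comoving free-streaming length [Mpc]."""
    return 0.11 * (Owdm * h**2 / 0.15)**(1 / 3) * m_keV**(-4 / 3)

def M_fs(m_keV):
    """Mass scale below which halo formation is suppressed [Msun]."""
    return 4 * np.pi / 3 * (lambda_fs(m_keV) / 2)**3 * rho_m0

def hmf_suppression(M, m_keV, sigma_logM=0.5):
    """Multiplicative correction to the conditional Press-Schechter mass function."""
    return 0.5 * (1 + erf(np.log10(M / M_fs(m_keV)) / sigma_logM))

def P_ratio(k, m_keV, beta=1.12):
    """P_WDM(k) / P_CDM(k); k in Mpc^-1."""
    alpha = 0.049 * m_keV**(-1.11) * (Owdm / 0.25)**0.11 * (h / 0.7)**1.22 / h
    return (1 + (alpha * k)**(2 * beta))**(-10 / beta)

for m in (3.0, 6.0, 10.0):    # keV
    print(m, lambda_fs(m), M_fs(m), hmf_suppression(1e7, m), P_ratio(40.0, m))
```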
The filtering mass is mainly determined by thermal history of the universe, and it is of order ∼ 10^6 M_⊙ for the redshifts of interest (7≲ z≲ 11) for f_ X≲ 1 in the CDM model<cit.>. It would be higher for higher f_ X, and the different density profiles in WDM models may also slightly modify its value. In the present work, we set the same M_ min = 10^6 M_⊙ for all the models for simplicity, but we expect that the dependence of filtering mass on f_ X will make the probe more sensitive to the thermal history of the universe, while more challenging to discriminate WDM models for cases with high f_ X. §.§.§ Gas profile. Each grid along the line of sight is further divided into (500)^3 voxels, each with a size of (4 kpc)^3, then the gas density of each voxel is determined by its distance to the nearby halos. Inside the virial radius r_ vir, we assume that the dark matter follows the NFW density profile <cit.>, and the gas is in hydrostatic equilibrium with the dark matter <cit.>. Thus, the gas density distribution can be derived analytically <cit.>: lnρ_ g(r)=lnρ_ gc-μ m_ p/2 k_ B T_ vir[v_ e^2(0)-v_ e^2(r)], where ρ_ gc denotes the central gas density, μ is the mean molecular weight of the gas, m_ p is the proton mass, and v_ e(r) is the gas escape velocity at radius r, given by v_ e^2(r)=2 ∫_r^∞G M(r^')/r^' 2 d r^'=2 V_ c^2F(y x)+y x/1+y x/x F(y). Here V_ c^2 ≡ G M/r_ vir is the circular velocity at the virial radius, G is gravitational constant, x ≡ r / r_ vir, y is the halo concentration, and F(y)=ln(1+y)-y/(1+y). The central gas density is determined by normalizing the total baryonic mass fraction of the halo to the cosmic mean value, which gives ρ_ gc =(Δ_c / 3) y^3(Ω_ b / Ω_ m) e^A/∫_0^y(1+t)^A / t t^2 d tρ̅_ m(z), where ρ̅_ m(z) is the mean matter density of the universe at redshift z, A ≡ 2 y/F(y), e is the mathematical constant (base of natural log), and Δ_c=18 π^2 + 82(Ω_ m^z-1) - 39(Ω_ m^z-1)^2 is the mean density of a virialized halo with respect to the cosmic mean value <cit.>, in which Ω_ m^z=Ω_ m(1+z)^3 /[Ω_ m(1+z)^3+Ω_Λ]. The gas density in the halo surroundings is enhanced because of the gravitational potential. Outside the virial radii of halos, we assume that the gas density profile follows the dark matter distribution, and it can be computed by using the infall model which is based on the excursion set theory <cit.>. The gas density profiles in and around halos of different masses are plotted in Supplementary Fig. 2 for z = 9. It is seen that there is density discontinuity at the virial radius in our model. This is expected at the virialization shock near the virial radius <cit.>, though the exact location of the shock may vary from halo to halo <cit.>. The infall model was developed for the matter density and velocity distribution around density peaks <cit.>. Directly applying it to arbitrary environments may over-predict the gas density in under-dense regions. Therefore, we normalize the density field to ensure that the minimum density is 0, and the average density of the (500)^3 voxels in each 2 Mpc grid equals the grid density from the large-scale 21cmFAST simulation. To test the reliability for the small-scale density field, we run a small-scale high-resolution hydrodynamical simulation with the GADGET (GAlaxies with Dark matter and Gas intEracT) <cit.> for high redshifts. The simulation has a box size of 4 h^-1 Mpc and 2×800^3 gas and DM particles <cit.>. 
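For the gas inside the virial radius, the hydrostatic profile given above can be evaluated as in the following sketch (Python). The relation T_vir = μ m_p V_c^2/(2 k_B) is assumed, so that the exponent depends only on the halo concentration, and the concentration value used in the example is an illustrative choice rather than one taken from the simulations.

```python
# Sketch of the hydrostatic gas density profile inside r_vir, following the
# expressions above (NFW dark matter, gas in hydrostatic equilibrium).
import numpy as np
from scipy.integrate import quad

Om, OL = 0.3153, 0.6847
Ob = 0.02236 / 0.6736**2

def F(t):
    return np.log(1 + t) - t / (1 + t)

def gas_overdensity_profile(x, y, z):
    """rho_g(r) / mean baryon density at x = r/r_vir (x <= 1), concentration y."""
    A = 2 * y / F(y)
    # v_e^2(r)/V_c^2 normalized so that g(x -> 0) = 1
    g = (F(y * x) + y * x / (1 + y * x)) / (x * y)
    # central gas density in units of the mean matter density at z
    Omz = Om * (1 + z)**3 / (Om * (1 + z)**3 + OL)
    Dc = 18 * np.pi**2 + 82 * (Omz - 1) - 39 * (Omz - 1)**2
    norm, _ = quad(lambda t: (1 + t)**(A / t) * t**2, 1e-8, y)
    rho_gc = (Dc / 3) * y**3 * (Ob / Om) * np.exp(A) / norm
    rho_g = rho_gc * np.exp(A * (g - 1))          # still in units of mean rho_m(z)
    return rho_g / (Ob / Om)                      # in units of the mean baryon density

x = np.linspace(0.05, 1.0, 20)
print(gas_overdensity_profile(x, y=5.0, z=9.0))   # y = 5 is an assumed concentration
```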
We compare the probability density distribution of our analytical gas density field with the one from the simulated gas density in Supplementary Fig. 3, at the same resolution at z = 17. It shows that our gas density model closely recovers the probability distribution of the gas density fluctuations from the hydrodynamical simulations. The line-of-sight density distribution in the CDM model is illustrated in the left panel of Extended Data Fig. 1 for three grids with different local overdensities δ_0 on the 2 Mpc scale at z=9. The density distributions for different DM properties are shown in Supplementary Fig. 4. §.§ The ionization field. The large-scale ionization field is simulated with the semi-numerical simulation 21cmFAST assuming ionizing sources with a minimum halo mass of M_4 and an ionizing efficiency parameter of ζ = 11 <cit.>. By suppressing the formation of small-scale halos, the WDM models may possibly speed up or delay the large-scale reionization process by modifying both the abundances of ionizing sources and sinks <cit.>. In the present work, we use the basic version of 21cmFAST in which the effect of sinks is incorporated by a homogeneous recombination number, and the reionization is delayed in the WDM models as shown in Supplementary Fig. 5. It shows that the effect of WDM on the large-scale reionization history becomes obvious only if m_ WDM≲ 3 keV, and this is consistent with the fact that atomic-cooling halos are effectively suppressed in WDM models with m_ WDM≲ 3 keV as shown in Supplementary Fig. 1. Note that the 21-cm forest signals mainly come from neutral regions, and we pick up neutral patches in the large-scale simulation box to analyze the small-scale structures in the 21-cm forest signals. The large-scale reionization history only determines the probability of getting a neutral patch of the IGM with a certain length along a line of sight. In order to have consistent source properties when comparing the results for the same f_ X, we set the same ionizing efficiency parameter for all the models considered here, while the global reionization history would be slightly different among WDM and CDM models. On the other hand, a different reionization scenario may change the minimum source mass, for example, in a reionization model with stronger feedback effects would have a minimum halo mass for collapse higher than M_4, thus changing the reionization history. However, the large-scale ionization field and the overall reionization history have only a minor effect on the small-scale 21-cm forest signals we are interested in. For each of the neutral grids in the simulation box, we assume that the gas is in collisional ionization equilibrium (CIE), so that the ionized fraction of each voxel is determined by its local density and temperature, i.e. n_ e n_ HIγ=α_ B n_ e n_ p, where n_ HI, n_ e and n_ p represent the number densities of neutral hydrogen, electron and proton, respectively, γ is the collisional ionization coefficient <cit.>, and α_ B is the case B recombination coefficient <cit.> which is appropriate for low-mass halos and the incompletely ionized IGM. Here both γ and α_ B are functions of temperature. §.§ The temperature field. The gas temperature T_ K of each voxel is determined by the thermal history of the early universe and the location of the voxel with respect to halos. 
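The collisional ionization equilibrium condition used above for the neutral voxels can be solved for the neutral fraction, which then depends only on the local temperature. The sketch below uses commonly adopted fits for the collisional ionization and case B recombination coefficients; these fits are standard approximations and are not taken from this paper.

```python
# Sketch: neutral fraction in collisional ionization equilibrium,
# n_HI * gamma(T) = alpha_B(T) * n_p  =>  x_HI = alpha_B / (alpha_B + gamma).
import numpy as np

def gamma_coll(T):
    """Collisional ionization rate coefficient of HI [cm^3 s^-1] (approximate fit)."""
    return 5.85e-11 * np.sqrt(T) / (1 + np.sqrt(T / 1e5)) * np.exp(-157809.1 / T)

def alpha_B(T):
    """Case B recombination coefficient [cm^3 s^-1] (approximate power-law fit)."""
    return 2.59e-13 * (T / 1e4)**(-0.7)

def x_HI_cie(T):
    """CIE neutral hydrogen fraction; close to unity for the cold gas considered here."""
    g, a = gamma_coll(T), alpha_B(T)
    return a / (a + g)

for T in (1e2, 1e3, 1e4, 3e4):
    print(T, x_HI_cie(T))
```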
While the photoionization heating by the UV background dominates the gas heating in ionized regions <cit.>, it is the X-rays that can penetrate deep into the neutral IGM and dominate the heating of the neutral gas contributing to 21-cm signals. For the gas in the neutral IGM, its temperature is mainly determined by the cosmic expansion, the heating or cooling from the Compton scattering, and the X-ray heating. The global evolution of the IGM temperature can be written as <cit.> d T_ K/ d t=-2 H(z) T_ K+2/3ϵ_ comp/k_ B n+2/3ϵ_ X,h/k_ B n, where n is the total particle number density, ϵ_ comp is the Compton heating/cooling rate per unit physical volume <cit.>, and ϵ_ X,h represents the part of the X-ray emissivity ϵ_ X that contributes to heating, for which we adopt a fitted formula to simulations, i.e. ϵ_ X,h = [1-0.8751(1-x_i^0.4052)] ϵ_ X <cit.>, where x_i is the ionized fraction. Assuming that the X-ray productivity is proportional to the star formation rate, and hence to the matter collapse rate, the total X-ray emissivity ϵ_ X can be written as <cit.>: 2/3ϵ_ X/k_ B n H(z) = 5 × 10^4 K f_ X(f_⋆/0.1 d f_ coll / d z/0.011+z/10). Here f_⋆ is the star formation efficiency approximately evaluated at M_4 <cit.>, as appropriate for the most abundant star-forming halos, f_ coll is the fraction of matter collapsed into atomic-cooling halos with M>M_4, and f_ X is the normalization parameter describing the uncertain nature of X-ray productivity in the early universe as compared to the local universe<cit.>. The global evolution of the IGM temperature T_ K is shown in Supplementary Fig. 6 for different values of f_ X. The curve with f_ X = 0 denotes the case with purely adiabatic cooling and Compton heating. Inside the virial radius, the gas kinetic temperature T_ K equals to the virial temperature T_ vir of the halo. As for the gas in the overdense regions near halos, it will be adiabatically heated depending on the local density. In the absence of X-rays, the temperature profiles for halos with 10^6 M_⊙, 10^7 M_⊙, and 10^8 M_⊙ are illustrated in Supplementary Fig. 7 for z = 9. Similar to the density profiles, the gas temperature also shows discontinuity at the virialization shocks as expected, but the exact location of the virialization shocks has negligible effects on our main results. In the cases with X-ray heating, the gas temperature outside the halos is set by the maximum between the adiabatic temperature and the heated IGM temperature. §.§ Thermal noise of direct measurement. In the direct measurement of individual absorption lines, the noise flux density averaged over two polarizations can be expressed as <cit.>: δ S^ N≈2 k_ B T_ sys/A_ eff√(2 δνδ t), where A_ eff is the effective collecting area of the telescope, T_ sys is the system temperature, δν is the channel width, and δ t is the integration time. The corresponding thermal noise temperature is: δ T^ N = δ S^ N(λ_z^2 /2 k_ BΩ) ≈λ_z^2 T_ sys/A_ effΩ√(2 δνδ t), where λ_z is the observed wavelength, and Ω=π (θ/2)^2 is the solid angle of the telescope beam, in which θ = 1.22λ_z/D is the angular resolution with D being the longest baseline of the radio telescope/array. For the SKA1-LOW, we adopt A_ eff / T_ sys= 800 m^2 K^-1 <cit.>, and A_ eff / T_ sys= 4000 m^2 K^-1 is expected for SKA2-LOW <cit.>. For both arrays, we assume D = 65 km and δ t = 100 hr, and δν = 1 kHz is assumed in order to resolve individual 21-cm lines. Correspondingly, the synthetic spectra shown in Figs. <ref> and <ref> are smoothed with the same channel width. 
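Evaluating the noise expression above for the stated array sensitivities, maximum baseline, channel width and integration time gives an estimate of the per-channel noise temperature and of the angular resolution; a short sketch (Python):

```python
# Sketch: thermal-noise temperature for direct detection of individual
# absorption lines, following delta_T^N above.
import numpy as np

c, nu0 = 2.998e8, 1420.4e6

def noise_temperature(z, Aeff_over_Tsys, D_max=65e3, dnu=1e3, dt_hr=100.0):
    lam = c * (1 + z) / nu0                        # observed wavelength [m]
    theta = 1.22 * lam / D_max                     # angular resolution [rad]
    Omega = np.pi * (theta / 2)**2                 # beam solid angle [sr]
    dT = lam**2 / (Aeff_over_Tsys * Omega * np.sqrt(2 * dnu * dt_hr * 3600.0))
    return dT, np.degrees(theta) * 3600.0          # [K], [arcsec]

for label, sens in (("SKA1-LOW", 800.0), ("SKA2-LOW", 4000.0)):
    dT, theta_as = noise_temperature(z=9.0, Aeff_over_Tsys=sens)
    print(f"{label}: resolution ~ {theta_as:.1f} arcsec, noise ~ {dT:.0f} K per 1 kHz channel")
```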
At redshift z = 9, the angular resolution is about 8.17 arcsec, and the noise temperature is plotted with dotted and dashed lines in the lower panels in Figs. <ref> and <ref>, for SKA1-LOW and SKA2-LOW respectively. §.§ 1D power spectrum of 21-cm forest. It is seen from Fig. <ref> that the direct measurement of individual absorption lines is vulnerably hampered by the early X-ray heating. In order to improve the sensitivity for detecting the 21-cm forest signal, and to reveal the clustering properties of the absorption lines so as to distinguish the effects between heating and WDM models, we follow the algorithm in Ref. <cit.>, and compute the 1D power spectrum of the brightness temperature on hypothetical spectra against high-redshift background sources. The brightness temperature δ T_b(ŝ, ν) as a function of observed frequency ν can be equivalently expressed in terms of line-of-sight distance r_z, δ T_ b^'(ŝ, r_z), and the Fourier transform of δ T_b ^'(ŝ, r_z) is δT^'(ŝ, k_)=∫δ T_ b^'(ŝ, r_z) e^-i k_ r_z d r_z. The 1D power spectrum along the line of sight is defined as: P(ŝ, k_) = |δT^'(ŝ, k_)|^2(1/Δ r_z). The term 1/Δ r_z is the normalization factor, in which Δ r_z is the length of sightline under consideration. To reveal the small-scale structures we are interested in, we select neutral patches with Δ r_z = 10 comoving Mpc, and compute the 1D power spectra from segments of 10 comoving Mpc along the line of sight. For a reasonable number of 𝒪(10) high-z background sources, the expected value of the power spectrum is obtained by averaging over 100 neutral patches on lines of sight penetrating various environments, i.e. P(k_) ≡⟨ P(ŝ, k_)⟩. On each quasar spectrum, we will be able to select ∼ 10 segments of 10 comoving Mpc length in neutral patches; as the neutral patches are intermittently separated by ionized regions during the EoR, we may need a spectrum covering ∼ 200 comoving Mpc along the line of sight. A length of 200 comoving Mpc projects to a total bandwidth of about 14 MHz at redshift 9, corresponding to Δ z ∼ 0.8, which is reasonable in practice. For the rest of the paper, we abbreviate k_ as k, as here we are always interested in the k-modes along the line of sight. Supplementary Fig. 8 shows the evolution of the 1D power spectrum with redshift. The solid lines in the left and middle panels show the power spectra in the CDM model and in the WDM model with m_ WDM = 3 keV respectively, in the absence of X-rays. As the redshift increases, the halo abundance decreases, and the small-scale fluctuations in the forest signal decrease, resulting in steeper power spectra. The small-scale power is slightly more notably suppressed in the WDM model, as the halo formation is more delayed. However, the redshift evolution has only a weak effect on the 1D power spectrum in the absence of X-ray heating. The right panel of Supplementary Fig. 8 illustrates the evolution of the 1D power spectrum in the CDM model with f_ X = 3. In the case of strong X-ray heating, the 1D power spectrum of the 21-cm forest is dramatically suppressed with the decreasing redshift, and the dominant reason is the rapidly increasing IGM temperature. It implies that for the purpose of constraining DM properties, the 1D power spectrum measurement at higher redshift is preferred, as long as a radio-bright source at an even higher redshift is available. §.§ Measurement error on 1D power spectrum. 
The observational uncertainties in the 21-cm forest include the thermal noise, the sample variance, the contaminating spectral structures from foreground sources in the chromatic sidelobes, and the bandpass calibration error. The bandpass calibration error depends on specific calibration strategies, and mainly affects the broadband amplitude of the continuum, so we expect that it has a negligible effect on the small-scale features we are interested in. The contaminating spectral structures from foregrounds are not likely affecting the small structures we are aiming at, as the discriminating features locate at k ≳ 3 Mpc^-1, which are well within the “EoR window”<cit.>. Therefore, we consider only the thermal noise of an interferometer array, and the sample variance in the power spectrum measurement. The sample variance on the 1D power spectrum is P^S=σ_P(k)/√(N_s × N_m), where σ_P(k) is the standard deviation of P(k) from N_s× N_m measurements of the 1D power spectrum at k, in which N_s is the number of 1D power spectrum measurements on different neutral patches of Δ r_z, and N_m is the number of independent modes in each k-bin from each measurement. Using 10 high-redshift background radio sources, it is reasonable to expect about 100 independent measurements of 1D power spectra from segments of spectra, each corresponding to a comoving length of 10 Mpc. We adopt N_s = 100, and σ_P(k) is obtained by simulating 21-cm forest signals from N_s neutral segments of 10 comoving Mpc length penetrating various environments covering grid densities from δ = -0.7 to δ = +1.5. As for the thermal noise error, we follow the approach taken by Ref. <cit.>, and assume that each spectrum is measured for two times separately, or the total integration time is divided into two halves, and the cross-power spectrum is practically measured in order to avoid noise bias. Then the observing time for each measurement of the spectrum is δ t_0.5 = 0.5 δ t, and the thermal noise on the spectrum is increased by a factor of √(2). Then the thermal noise uncertainty on the 1D power spectrum is given by <cit.> P^N = 1/√(N_s)(λ_z^2 T_ sys/ A_ effΩ)^2(Δ r_z/2 Δν_zδ t_0.5), where Δν_z is the total observing bandwidth corresponding to Δ r_z. A distance of 10 comoving Mpc along the line of sight corresponds to a bandwidth of Δν_z = 0.56 MHz at z = 9. Assuming the same telescope parameters of SKA1-LOW and SKA2-LOW as those for the direct measurement, and the same observation time of δ t = 100 hr (δ t_0.5 = 50 hr) on each source, the expected thermal noise on the 1D power spectrum of 21-cm forest is plotted in Figs. <ref> and <ref>, as well as in Supplementary Fig. 8, with dotted lines for SKA1-LOW and dashed lines for SKA2-LOW, respectively. The total measurement errors including the thermal noises of SKA2-LOW and sample variance are shown with the error bars in these figures. We have tested the extraction of 21-cm forest 1D power spectrum by simulating mock quasar spectra with thermal noises, and calculating the 1D power spectra from the noisy spectra. The results are shown in Supplementary Fig. 9, with upper panels from mock spectra with SKA1-LOW noises, and lower panels from mock spectra with SKA2-LOW noises, respectively. In each row, the left panel shows the results from mock spectra with both 21-cm absorption signals and thermal noises, and the right panel shows the results from mock spectra with only thermal noises. The measured noise power spectra agree well with the theoretical predictions. 
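The cross-power estimator and the analytic thermal-noise level described above can be sketched as follows (Python). The two input spectra are placeholders standing in for the two half-time measurements of a 10 comoving Mpc neutral segment; the averaging over the N_s segments and over k-bins is omitted for brevity.

```python
# Sketch: 1D cross power spectrum of a spectrum segment (avoids noise bias)
# and the analytic thermal-noise power spectrum P^N quoted above.
import numpy as np

def cross_power_1d(dT1, dT2, dr):
    """1D cross power spectrum [K^2 Mpc]; dr is the voxel size in comoving Mpc."""
    L = dT1.size * dr
    ft1, ft2 = np.fft.rfft(dT1) * dr, np.fft.rfft(dT2) * dr   # approximate Fourier transform
    k = 2 * np.pi * np.fft.rfftfreq(dT1.size, d=dr)           # [Mpc^-1]
    return k, (ft1 * np.conj(ft2)).real / L

def P_noise(z, Aeff_over_Tsys, N_s=100, dr_z=10.0, dnu_z=0.56e6, dt_half_hr=50.0):
    """Analytic thermal-noise power spectrum [K^2 Mpc]."""
    c, nu0, D_max = 2.998e8, 1420.4e6, 65e3
    lam = c * (1 + z) / nu0
    Omega = np.pi * (1.22 * lam / D_max / 2)**2
    per_vis = lam**2 / (Aeff_over_Tsys * Omega)               # T_sys-normalized
    return per_vis**2 * dr_z / (2 * dnu_z * dt_half_hr * 3600.0) / np.sqrt(N_s)

# demo with white-noise placeholders standing in for the two half-time spectra
rng = np.random.default_rng(1)
sig = rng.normal(0.0, 0.1, 2500)              # common "signal" part [K]
dT1 = sig + rng.normal(0.0, 0.05, 2500)
dT2 = sig + rng.normal(0.0, 0.05, 2500)
k, P = cross_power_1d(dT1, dT2, dr=0.004)     # 4 kpc voxels over a 10 Mpc segment
print(P_noise(9.0, 800.0), P_noise(9.0, 4000.0))   # SKA1-LOW vs SKA2-LOW
```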
It is seen that the measurement of 1D power spectrum notably improves the observability of the 21-cm forest signals as compared to the direct measurement of individual absorption lines. With about 10 moderately bright quasars with S_ 150≳ 10 mJy at redshift around 9, the 1D power spectrum can be measured by SKA2-LOW even if the IGM was heated as sufficiently as in the model with f_ X = 3, and can reach a high signal-to-noise ratio if f_ X≲ 1. Note that the measurement error can be further suppressed if more sources are available beyond reionization, and more power spectra can be averaged to suppress both the thermal noise and the sample variance. Data Availability The main data that support the results in this work are provided with this paper, and are also available at https://doi.org/10.57760/sciencedb.08093https://doi.org/10.57760/sciencedb.08093. Further datasets are available from the corresponding authors upon reasonable request. Code Availability The code 21cmFAST used for large-scale simulation is publicly available at https://github.com/andreimesinger/21cmFASThttps://github.com/andreimesinger/21cmFAST, the codes for simulating small-scale structures and 21-cm forest signals are available from the corresponding authors upon reasonable request, and the GADGET code is available at https://wwwmpa.mpa-garching.mpg.de/gadgethttps://wwwmpa.mpa-garching.mpg.de/gadget. Additional information Correspondence and requests for materials should be addressed to Yidong Xu (email: [email protected]), Xin Zhang (email: [email protected]) or Xuelei Chen (email: [email protected]). * We thank the anonymous referees for very constructive comments and suggestions. We thank Yichao Li, Peng-Ju Wu, Jing-Zhao Qi, and Bin Yue for helpful discussions. This work was supported by National Key R&D Program of China (Grant No. 2022YFF0504300), the National Natural Science Foundation of China (Grant Nos. 11973047, 11975072, 11835009, 11988101, and 12022306), and the National SKA Program of China (Grant Nos. 2020SKA0110401, 2020SKA0110100, 2022SKA0110200, and 2022SKA0110203). Y.X. and X.C. also acknowledge support by the CAS grant (Grant No. ZDKYYQ20200008). Y.W. acknowledges support by the CAS Interdisciplinary Innovation Team (Grant No. JCTD-2019-05). R.L. acknowledges support by the CAS grant (Grant No. YSBR-062) and the grant from K.C.Wong Education Foundation. Author contributions Y.S. performed most of the computation and analysis, and wrote part of the manuscript. Y.X. led the study, contributed to the simulations, and wrote the majority of the manuscript. Y.W. and W.Y. contributed to the computation of the 1D power spectrum. Y.X. and R.L. proposed the study. X.Z. and X.C. contributed to the collaboration organization, the Fisher forecasts, and the manuscript writing, and supervised the study. All authors discussed the results and commented on the manuscript. Competing Interests The authors declare no competing interests. < g r a p h i c s > The density (left panel), optical depth (middle panel) and brightness temperature (right panel) for a line of sight of 2 comoving Mpc in the CDM model at z = 9. The green, yellow and red lines correspond to local overdensities of δ_0 = 0, 1 and 2, respectively. The flux density of the background source in the right panel is assumed to be S_150=10 mJy. < g r a p h i c s > 1-D power spectrum of a synthetic 21-cm forest spectrum in the CDM model, for a line of sight penetrating through an un-heated IGM (f_ X = 0) with different local overdensities at z = 9. 
The green, yellow and red curves correspond to δ_0 = 0, 1 and 2, respectively. The flux density of the background source is assumed to be S_150 = 10 mJy.

[Supplementary Figure 1] Halo mass function for different DM particle masses at z = 9. The red, yellow, blue and pink curves correspond to the CDM model and WDM models with m_WDM = 10 keV, 6 keV, and 3 keV, respectively.

[Supplementary Figure 2] Neutral hydrogen overdensity profiles inside and outside the virial radius of a halo at z = 9. The green, yellow and red lines correspond to halo masses of 10^6 M_⊙, 10^7 M_⊙ and 10^8 M_⊙, respectively.

[Supplementary Figure 3] Probability density distribution of the gas overdensity at z = 17. The black solid line is the probability density distribution from the GADGET simulation with a box size of 4 h^-1 Mpc and 2×800^3 gas and DM particles. The blue dashed line is the one derived from our hybrid approach with the same resolution as the GADGET simulation.

[Supplementary Figure 4] Density distribution of a patch of 10 comoving Mpc at z = 9 along the line of sight, for an un-heated IGM (f_X = 0). The four panels, from left to right, correspond to the CDM model and the WDM models with m_WDM = 10 keV, 6 keV and 3 keV, respectively.

[Supplementary Figure 5] Reionization history simulated by 21cmFAST. The black, red, yellow and green curves correspond to the average neutral fraction x̅_HI as a function of redshift z in the CDM model and the WDM models with m_WDM = 10 keV, 6 keV and 3 keV, respectively.

[Supplementary Figure 6] Evolution of the global gas temperature with redshift. The blue, green, yellow and red lines correspond to f_X = 0, 0.1, 1 and 3, respectively.

[Supplementary Figure 7] Temperature profiles of gas inside and outside the virial radii of halos at z = 9 with an un-heated IGM (f_X = 0). The green, yellow and red lines correspond to halo masses of 10^6 M_⊙, 10^7 M_⊙ and 10^8 M_⊙, respectively.

[Supplementary Figure 8] Evolution of the 1-D power spectrum of 21-cm forest averaged over 100 measurements on segments of 10 comoving Mpc length in neutral patches along lines of sight against background sources with S_150 = 10 mJy. The solid lines in the left and central panels show the power spectra in the CDM model and those in the WDM model with m_WDM = 3 keV respectively, assuming an un-heated IGM (f_X = 0). The solid lines in the right panel show the power spectra in the CDM model assuming an efficiently-heated IGM (f_X = 3). In each panel, the blue, green and yellow lines correspond to z = 7, 9 and 11, respectively. The dotted and dashed lines with the corresponding colors are the expected thermal noises P^N for SKA1-LOW and SKA2-LOW, respectively, and the error bars show the total measurement errors of SKA2-LOW.

[Supplementary Figure 9] 1-D cross-power spectrum computed from mock spectra simulated with thermal noises expected for SKA1-LOW (upper panels) and SKA2-LOW (lower panels), respectively. The left plots show the results in which the mock spectra contain both the 21-cm forest signal and thermal noise, and the right plots show the results from mock spectra with only thermal noise. Same as Fig. 3, the 1-D power spectra are averaged over 100 measurements on segments of 10 comoving Mpc length in neutral patches along lines of sight against 10 background sources with S_150 = 10 mJy. The blue, green, yellow and red curves correspond to f_X = 0, 0.1, 1 and 3, respectively. The dotted and dashed lines are the theoretical thermal noises P^N expected for SKA1-LOW and SKA2-LOW, respectively.
http://arxiv.org/abs/2307.04408v1
20230710081540
TIM: Teaching Large Language Models to Translate with Comparison
[ "Jiali Zeng", "Fandong Meng", "Yongjing Yin", "Jie Zhou" ]
cs.CL
[ "cs.CL" ]
TIM: Teaching Large Language Models to Translate with Comparison
Jiali Zeng, Fandong Meng, Yongjing Yin, Jie Zhou
=================================================================

Open-sourced large language models (LLMs) have demonstrated remarkable efficacy in various tasks with instruction tuning. However, these models can sometimes struggle with tasks that require more specialized knowledge, such as translation. One possible reason for this deficiency is that instruction tuning aims to generate fluent and coherent text that continues from a given instruction without being constrained by any task-specific requirements. Moreover, tuning smaller LLMs with lower-quality training data can be even more challenging. To address this issue, we propose a novel framework that uses examples in comparison to teach LLMs to learn translation. Our approach involves presenting the model with examples of correct and incorrect translations and using a preference loss to guide the model's learning. We evaluate our method on the WMT2022 test sets and show that it outperforms existing methods. Our findings offer a new perspective on fine-tuning LLMs for translation tasks and provide a promising solution for generating high-quality translations. Please refer to Github for more details: https://github.com/lemon0830/TIM.

§ INTRODUCTION

Generative large language models, like GPT4, have shown remarkable performance in various NLP tasks <cit.>. For machine translation, the GPT models achieve very competitive translation quality, especially for high-resource languages <cit.>, which opens up new possibilities for building more effective translation systems. It is impractical to deploy such large models for the translation task only, and using or tuning open-sourced generative language models has become an attractive research direction. In this regard, researchers have explored strategies for example selection and instruction design through In-Context Learning (ICL) <cit.>. However, evaluations of open-sourced LLMs like BLOOM show that they do not perform as well as strong multilingual supervised baselines in most translation directions <cit.>. Additionally, ICL can increase decoding latency due to the need for large models with long contexts. Based on these observations, researchers suggest tuning relatively small LLMs for translation with a few high-quality supervised instructions <cit.>. Instruction tuning has been shown to be an efficient method for making LLMs better aligned to the task descriptions preferred by humans <cit.>. The only requirement is to collect task-specific data, on which LLMs are fine-tuned with the language modeling loss. However, optimizing only for the next-token prediction loss can cause models to overlook context information, especially low-capacity models. This is particularly serious for tasks in which specialized knowledge in the context is necessary for task completion; ignoring such knowledge in translation can lead to inadequacy and hallucination. Therefore, there is a need to investigate the limitations of LLMs and explore methods for improving their performance in specialized tasks.

In this paper, we propose to teach language models to learn translation with examples in comparison, aiming to make full use of a small amount of high-quality translation data. Based on the training data, we further construct two kinds of comparisons: output comparison and preference comparison.
Output comparison is used to learn responses of different instructions for the same input. Preference comparison is used to maximize the gap between correct and incorrect translations. Specifically, in order to help identify specific areas where the model may be making errors, we introduce an additional preference loss during fine-tuning, which is used to learn reward models <cit.>, as regularization to penalize unexpected outputs. We evaluate TIM on WMT22 test sets in four language directions (EN⇔DE, EN⇔ZH), and the improvement over the baselines shows the effectiveness of our method. Our model shows better zero-shot translation performance and stability in prompt choice. As the size increases, the performance of the models trained with TIM increases, with the improvement being more pronounced in the case of smaller models. In particular, the tuned LLaMa-13B <cit.> achieves top 1 on quality estimation without references in the EN⇔DE, outperforming the dedicated models for quality estimation like COMET. § RELATED WORK The research of machine translation based on LLMs can be divided into two categories: LLMs as interface <cit.> and instruction tuning <cit.>. The studies of using LLMs as interface focus on empirical analysis. For example, <cit.> evaluate ChatGPT, GPT3.5 (text-davinci-003), and text-davinci-002 in eighteen different translation directions involving high and low resource languages. <cit.> further evaluate four popular LLMs (XGLM, BLOOMZ, OPT and ChatGPT) on 202 directions and 102 languages, and compare them with strong supervised baselines, which provides a more comprehensive benchmark result. Many efforts are also put into investigating translation exemplars selection strategy of in-context learning <cit.>. Another line of work introduces knowledge, such as word alignments extracted from a dictionary, to LLMs for better translation <cit.>. Tuning smaller LLMs (e.g., 7B) for translation tasks is a promising direction since they are better at English than supervised translation models. However, even for directions from other languages to English, the gap between language models fine-tuned with translation data and supervised systems is still evident <cit.>. Different from them, we introduce output comparison and preference comparison data and present a preference regularization to alleviate hallucination and help LLMs learn translation better. § METHOD In brief, we tune generative language models to learn translation with output comparison and preference comparison in the instruction tuning framework. First, we will give a formal introduction to instruction tuning. Then, we present the detail of two kinds of comparisons of our method consisting of output comparison and preference comparison, and an additional preference learning loss. Finally, we show the different ways of parameter tuning. §.§ Background: Instruction Tuning The purpose of instruction tuning is to enhance the capacity of language models in handling NLP instructions. The concept is that the models can be trained to execute tasks specified in instructions, which would enable them to comprehend and execute tasks that have not been encountered before. As illustrated in Figure <ref>, generally, each instance of instruction-following data starts with “instructions” c describing the task the model should perform, and a corresponding output y indicating the answer to the instruction. The “input” x, the optional context or input for the task, is not necessary sometimes but is used for the machine translation task. 
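As a concrete illustration of this decomposition, a single translation training instance can be represented as an (instruction, input, output) record, with an additional contrastive output used later for preference comparison. The field names and the example sentences below are invented for illustration rather than taken from the released data format; only the instruction wording follows the prompt template reported later in the paper ("Translate from {src} to {tgt}.\n{input}").

```python
# One instruction-following example in (instruction c, input x, output y) form.
# Field names and sentences are illustrative, not the paper's released format.
example = {
    "instruction": "Translate from Chinese to English.",        # c
    "input": "今天天气很好。",                                    # x, source sentence
    "output": "The weather is nice today.",                      # y0, preferred output
    "bad_output": "Today weather very good sky.",                # y1, contrastive output
}
```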
Given the instruction data, the language models are optimized by minimizing the negative log-likelihood of the output y: L_lm = -(1/|y|) ∑_{i=1}^{|y|} log p(y_i | c, x). Notably, the objective is the same as that used in pretraining.

§.§ Output Comparison

An important ingredient of our method is the construction of samples used to provide comparison signals for model learning. In addition to regular translation data, we construct data used for comparison by introducing dictionary information or translation errors, as shown in Figure <ref>.

Dictionary-guided Data. To make the model aware of the underlying reasons for different translations, we inform the model of different correct outputs with the help of bilingual dictionaries[https://github.com/facebookresearch/MUSE]. We do not manually replace the words in an input-output pair to synthesize the comparison data but directly use a multi-reference corpus. Specifically, we use the human-annotated "no error" submissions of WMT20 in the Multidimensional Quality Metrics (MQM) datasets[https://github.com/google/wmt-mqm-human-evaluation] as multiple references for the source sentence. Then, we obtain the word alignments between a single source sentence and multiple references by looking up the bilingual dictionary. Finally, we use the word alignments as a note added to the input. As shown in Figure <ref>, for the same input sentence "国有企业和优势...老区。", with notes containing different word alignments, the outputs of Example 1 and Example 2 are different.

Error-guided Data. In addition, inspired by <cit.>, we introduce translations with error annotations. For correct input-output pairs, the added notes indicate that there are no mistakes in the references, while the notes of incorrect input-output pairs describe the translation errors in detail. As shown in the left part of Figure <ref>, the output of Example 1 is a correct translation while the output of Example 2 has a major locale convention/name format mistake, corresponding to the added note. We directly use the human-annotated data of WMT20 in the MQM datasets.

§.§ Preference Comparison

In preference comparison, we assign contrastive outputs for each type of data, denoted as Bad Output, and train the model with an extra preference loss. For the regular translation data, we use the predictions of large language models (e.g., Alpaca) as the comparison. For each sample with dictionary or error information, we randomly sample a translation with errors as the Bad Output. Moreover, we add noise to the Bad Output by randomly deleting words or swapping the positions of two words. With examples of correct and incorrect translations, the model can be optimized to produce higher-quality translations by distinguishing them, which can reduce the resources needed for training.

One way to utilize the contrastive outputs is to train a reward model and further fine-tune the language model with the reward model using reinforcement learning, i.e., RLHF <cit.>. Instead of using such a complex two-stage training process, we directly tune the model using a preference loss: L_pl = -log(σ(r_θ(c, x, y_0) - r_θ(c, x, y_1))), where σ(·) is the sigmoid function, and y_0 and y_1 denote the preferred output and the comparison output, respectively. Specifically, r_θ is a linear head that takes the hidden state of the top layer and returns a scalar.
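To make the role of the linear head concrete, the following is a minimal PyTorch-style sketch of this sequence-level preference term. It assumes the backbone already provides top-layer hidden states and that the sequence reward is read from the final position, which is an assumption made for illustration; the token-level variant described next replaces these sequence scores with per-token scores.

```python
import torch
import torch.nn.functional as F

class RewardHead(torch.nn.Module):
    """Linear head r_theta on the top-layer hidden states, one scalar per position."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.linear = torch.nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: [batch, seq_len, hidden] -> scores: [batch, seq_len]
        return self.linear(hidden_states).squeeze(-1)

def sequence_preference_loss(head, hidden_pref, hidden_bad):
    """L_pl = -log sigmoid(r(c,x,y0) - r(c,x,y1)), here scored at the last position."""
    r_pref = head(hidden_pref)[:, -1]   # reward of the preferred sequence y0
    r_bad = head(hidden_bad)[:, -1]     # reward of the contrastive sequence y1
    return -F.logsigmoid(r_pref - r_bad).mean()
```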
In practice, preference learning is calculated at the token level: L_pl = -(1/(N-I)) ∑_{i=I}^{N} log(σ(r_θ(h_i^(0)) - r_θ(h_i^(1)))), where I is the index of the first token at which the segments of y_0 and y_1 differ, N is the maximum length of the two sequences, and h_i is the hidden state of the i-th token. The overall loss function for tuning the model is L = L_lm + λ L_pl, where λ is the coefficient of the preference learning loss. We simply set λ to 0.5 in this paper.

§.§ Tuning Strategies

In addition to vanilla fine-tuning of all model parameters, parameter-efficient fine-tuning methods such as prefix tuning and LoRA <cit.> have been proposed specifically for large language models. In this paper, we adopt three different strategies for tuning the models, listed in order of increasing number of fine-tuned parameters.

LoRA: Tuning with Low-rank Matrices. LoRA <cit.> is a technique that reduces the number of trainable parameters by introducing new low-rank matrices to any module in the model while keeping the original weights frozen. This results in a significant reduction in storage requirements for large language models, as well as efficient task-switching during deployment without impacting inference latency.

FixEmb: Tuning with Embeddings Fixed. The limited number of trainable parameters in LoRA-based tuning is likely to restrict its expressiveness for certain tasks. To overcome this limitation, a simple solution is to fine-tune the parameters of the model layers while keeping the embeddings fixed. By doing so, the model gains more flexibility in adjusting its performance without compromising the important semantic information captured by the embeddings.

Full: Tuning Full Parameters. Full-parameter tuning has recently been demonstrated to be more effective than LoRA. The limitation of full-parameter fine-tuning is its memory footprint, but this is not severe for 7B models and small amounts of data.

§ EXPERIMENTS

In this section, we begin by conducting preliminary experiments to investigate the impact of inference strategies and the resilience of our TIM under varying instructions. Subsequently, we evaluate TIM's performance on the WMT22 test sets and the FLORES-200 dev-test, comprising a total of four language pairs. For this evaluation, we employ BLOOMZ-7b-mt[https://huggingface.co/bigscience/bloomz-7b1-mt] and LLaMA-7b <cit.> as the backbones.

§.§ Settings

To avoid data leakage as much as possible <cit.>, we use the latest WMT22 test sets and the FLORES-200 dev-test.

* WMT22 Test Sets. We use the test sets from the WMT22 competition[https://www.statmt.org/wmt22/translation-task.html], which consist of more recent content from diverse domains such as news, social, e-commerce, and conversational domains. The test sets comprise 1984, 2037, 1875, and 2037 samples for the German-to-English (De⇒En), English-to-German (En⇒De), Chinese-to-English (Zh⇒En), and English-to-Chinese (En⇒Zh) language pairs, respectively.

* FLORES-200 dev-test. We use the dev-test split from the FLORES-200 benchmarks[https://github.com/facebookresearch/flores/blob/main/flores200]. This dataset includes 1,012 sentences extracted from English Wikipedia, covering a broad range of topics and domains, which have been carefully translated into approximately 200 languages by professional translators.

To ensure a fair and consistent evaluation, we fine-tuned all models for 1 epoch with a batch size of 128, while imposing a maximum text length of 512. The learning rate is 2e-5 and the weight decay is 0.0.
We conducted fine-tuning on eight NVIDIA A100 GPUs, utilizing DeepSpeed ZeRO stage 3 for model parallelism. The results of the final checkpoint are reported. For automatic evaluation, we utilize two widely adopted metrics: BLEU <cit.> implemented in SacreBLEU[https://github.com/mjpost/sacrebleu], and COMET[https://github.com/Unbabel/COMET] with Unbabel/wmt22-comet-da. These metrics employ distinct approaches to evaluate the quality of machine translation: BLEU is driven by n-gram similarity, while COMET relies on cross-lingual pretrained models.

§.§ Baselines

We leverage the BLOOMZ-7b-mt and LLaMA-7b models as the foundation models and evaluate the following baselines: Alpaca-(*) is a reproduction of the Alpaca model fine-tuned solely on the Alpaca multi-task dataset[https://huggingface.co/datasets/tatsu-lab/alpaca]. MT-(*) is fine-tuned on the human-written validation data from previous WMT competitions, i.e., newstest2017-2021 of Chinese⇔English and German⇔English, which consist of 45,433 sentence pairs for all four directions. We use the notation TIM-(*) to refer to LLMs fine-tuned using our proposed TIM approach. The training data for TIM-(*) include the WMT translation data as well as the dictionary-guided and error-guided data described in Section <ref>. Besides, we report the results of the WMT22 winners, GPT-4 <cit.>, and NLLB-3.3B <cit.>. The latter is a multilingual translation model trained on a massive parallel corpus of over 200 languages[The results in <cit.> are directly reported.].

§.§ Pre-Experiments

In this section, we investigate the effect of inference strategies and instructions. We fine-tune BLOOMZ-7b-mt with our TIM and conduct evaluations on the WMT22 test sets.

Effect of Inference Strategies. Beam search has been the standard search algorithm for machine translation, while LLMs usually use sampling for efficiency. We compare the performance of sampling and beam search, and both search algorithms are combined with the notes in our dictionary-guided and error-guided data. Table <ref> presents the experimental results. First, we observe that instructing the model to generate translations without errors does not result in a significant performance gain, contrary to the conclusion drawn in <cit.>. We speculate that the preference loss function implicitly allows the LLMs to learn to generate error-free translations, making the additional instructions unnecessary. Secondly, previous studies have shown that introducing alignment information from dictionaries can improve translation performance <cit.>. Surprisingly, Table <ref> shows that adding alignment notes significantly improves the performance of De⇒En, but harms the performance of other language pairs. This may be due to the fact that most of the words in the dictionaries we use are common words, or that the wording style of the dictionaries differs greatly from the references. How to better collect and use dictionary information for machine translation is left for future work.

Effect of Instructions. In human interaction scenarios, instructions provided by users may vary in style and form, and thus it is essential to evaluate the robustness of TIM under different instruction styles. The performance of our TIM using ten distinct instructions is shown in Figure <ref>. The results indicate that our TIM achieves consistent performance across all the tested instructions.
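Before turning to the main results, the following sketch shows how the two automatic metrics described above can be computed with the SacreBLEU and COMET packages. The example sentences are invented, and the exact attributes of the COMET prediction object may differ slightly across library versions.

```python
import sacrebleu
from comet import download_model, load_from_checkpoint

srcs = ["今天天气很好。"]                        # source sentences (invented example)
hyps = ["The weather is nice today."]            # system outputs
refs = ["The weather is very nice today."]       # references

# Corpus-level BLEU with SacreBLEU's default tokenization.
bleu = sacrebleu.corpus_bleu(hyps, [refs])
print(f"BLEU = {bleu.score:.2f}")

# COMET with the Unbabel/wmt22-comet-da checkpoint.
comet = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
data = [{"src": s, "mt": h, "ref": r} for s, h, r in zip(srcs, hyps, refs)]
out = comet.predict(data, batch_size=8, gpus=0)  # gpus=0 runs on CPU
print("COMET system score:", out.system_score)
```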
§.§ Main Results Based on the observation in Section <ref>, we use a simple instruction “Translate from {src} to {tgt}.\n{input}” and beam search strategy with a beam size of 4 for all models during inference. Table <ref> presents the translation performance on the WMT22 test sets and FLORES-200 dev-test. For the models based on BLOOMZ-7b-mt, we only evaluate them on WMT22 test sets due to the data leakage issue. We have the following observations: First, based on LLaMA-7b, the Alpaca-(*) models exhibit some translation ability particularly in high-resource directions such as De⇒EN and En⇒DE, due to the small amount of translation instruction data based on Spanish⇔English that Alpaca possesses. Introducing a small number of translation sentence pairs (i.e., MT-(*)) in the corresponding language can result in additional improvement. Secondly, we observe significant performance fluctuations across different language models, training data, and language pairs for (*)-LoRA and (*)-Full. For example, when the backbone is BLOOMZ-7b-mt, MT-LoRA outperforms MT-Full in most language pairs except for En⇒De. However, when the backbone is the LLaMa-7b model, MT-LoRA underperforms MT-Full in Zh⇒En and En⇒Zh language pairs. Our speculation is that LoRA can prevent LLMs from overfitting but is limited in the number of trainable parameters. In contrast, the experiment result of (*)-FixEmb indicates that fine-tuning with fixed embedding parameters can better leverage the generalization of LLMs and prevent overfitting. Finally, training LLMs with comparison can further enhance the understanding of the translation task. Compared to Alpaca-(*) and MT-(*) models, TIM-(*) achieve significantly better performance on both the WMT22 test sets and FLORES-200 dev-test. Concretely, based on BLOOMZ-7b-mt, TIM-FixEmb achieves notable improvement compared with MT-FixEmb, with 2.93, 3.29, 1.34, 2.40 BLEU scores and 0.55, 0.47, 0.50, 2.80 COMET scores on Zh⇒En, En⇒Zh, De⇒En, En⇒De, respectively. § ANALYSIS §.§ Effect of Model Size In this section, we present a comparison between TIM and instruction tuning across different model sizes. Figure <ref> illustrates the consistent improvements achieved by TIM, indicating its generalizability. Notably, BLOOM-3b does not outperform BLOOM-1b7 with instruction tuning. On the other hand, as the foundation LLM's size increases, the translation performance of the LLMs after fine-tuning with TIM gradually improves. In particular, the improvement is more significant when the model size is smaller. This observation supports our hypothesis that simple instruction tuning with a small amount of training data, may not effectively learn task patterns and instead relies heavily on the model's original ability to comprehend instructions. On the other hand, training LLMs with comparison encourages them to swiftly identify the task's requirements and patterns and leverage internal cross-lingual knowledge. §.§ Zero-shot Translation To evaluate TIM’s performance in translation directions never seen previously, i.e., zero-shot multilingual capability, we conduct experiments on the WMT22 multilingual-to-English translation benchmark which encompasses 4 translation directions: Czech-to-English (cs⇒en), Japanese-to-English (ja⇒en), Russian-to-English (ru⇒en), and Ukrainian-to-English (uk⇒en). 
We compare our method with the following open-sourced models: ChatGLM-6b[https://huggingface.co/THUDM/chatglm-6b], Alpaca-7b[https://huggingface.co/tatsu-lab/alpaca-7b-wdiff], Vicuna-13b[https://huggingface.co/lmsys/vicuna-13b-delta-v1.1], BayLing-13b <cit.>, and NLLB-3.3b <cit.>. We report the results of the above models from <cit.>. Due to the better performance of LLaMA in multilingual-to-English translation, we report the performance of LLaMA-7b and LLaMA-13b fine-tuned with our TIM. As depicted in Figure <ref>, the TIM-(*) models (i.e., TIM-FixEmb-7b, TIM-LoRA-13b, and TIM-FixEmb-13b) exhibit good zero-shot multilingual capability in these translation directions. Compared to ChatGLM-6b, Alpaca-7b, and Vicuna-13B, TIM-(*) exhibits superior translation ability, highlighting that aligning the training languages strengthens the alignment of other languages as a by-product. Additionally, TIM-(*) outperforms BayLing-13b, which uses additional interactive translation training data, in XX⇒English translations. TIM-(*) also demonstrates performance comparable to NLLB-3.3B in some language pairs. These results demonstrate that adding carefully constructed translation data, combined with an effective training strategy such as our proposed TIM, can enhance the overall task capability of LLMs.

§.§ Ablation Study

To analyze the impact of different components of TIM, we investigate five variants of TIM-FixEmb taking BLOOMZ-7b-mt as the backbone: (1) w/o ℒ_pl, where we remove ℒ_pl; (2) w/o Dict, where we remove the dictionary-guided comparisons from the training data; (3) w/o Error, where we remove the error-guided comparisons from the training data; (4) w/o OutputCom, where we remove output comparison; (5) w/o OutputCom&ℒ_pl, in which we fine-tune the LLM on translation instructions with the standard instruction tuning method. We illustrate the BLEU scores on Zh⇒En and En⇒De in Figure <ref>. The experimental results of (4) and (5) demonstrate that LLMs can quickly learn to produce better translation output through preference comparison, even without adding any output comparison data. Moreover, the results of (1), (2) and (4) show that output comparison is more crucial than preference comparison. In particular, the removal of error-guided data (i.e., (3)) results in a greater performance drop than the removal of dictionary-guided data (i.e., (2)). We hypothesize that this is because the error-free translations in the system outputs of WMT2020 are relatively similar, causing the "output" of the dictionary-guided data to be too similar to create a high-quality comparison. If translation data with multiple, more diverse references were available, we might achieve further improvement. We leave this for future work.

§.§ MT Metrics Evaluation

The preference scores can reflect the quality of the model output. To assess whether the strategy can successfully learn a meaningful quality estimation, we use MTME[https://github.com/google-research/mt-metrics-eval] to evaluate the performance of our preference scores on standard test sets from the WMT22 Metrics Shared Task in De⇒En and En⇒De. Specifically, for each pair consisting of a source sentence and the corresponding hypothesis, we wrap them with our Training Prompt, compute the score of each token in the hypothesis, and use the score of the last token as the sentence-level score. Table <ref> shows the system-level accuracy (Acc) and Pearson correlations (PCCs).
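The scoring procedure just described amounts to the following sketch, assuming a Hugging Face-style causal language model and the tuned reward head; the prompt wording and interface names are illustrative.

```python
import torch

@torch.no_grad()
def sentence_score(model, reward_head, tokenizer, src, hyp):
    """Reference-free quality score: wrap (source, hypothesis) in the training
    prompt, score every hypothesis token with the reward head, and return the
    score of the last token as the sentence-level score."""
    prompt = f"Translate from German to English.\n{src}"   # illustrative prompt wording
    ids = tokenizer(prompt + hyp, return_tensors="pt").input_ids
    hidden = model(ids, output_hidden_states=True).hidden_states[-1]  # [1, T, H]
    token_scores = reward_head(hidden).squeeze(-1)                    # [1, T]
    return token_scores[0, -1].item()
```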
In particular, our TIM-LLaMA-13b outperforms all the reference-free metrics and achieves the best Pearson correlation on De⇒En. This demonstrates that the LLM is implicitly a reward model that can be jointly optimized during instruction tuning <cit.>.

§ CONCLUSION

We propose TIM, a training method that instruction-tunes open-source large language models for the translation task with comparisons of translations. Experiments and analyses validate the effectiveness of TIM in terms of translation quality and zero-shot translation ability. For the reference-free MT metrics evaluation, TIM-LLaMA-13b even outperforms some popular metrics like COMET and BLEURT on De⇒En, showing that our method can learn translation and evaluation jointly. Future work can explore the use of more diverse references for output comparison, as well as more advanced preference learning signals.
http://arxiv.org/abs/2307.07552v1
20230714180005
Uncovering Local Integrability in Quantum Many-Body Dynamics
[ "Oles Shtanko", "Derek S. Wang", "Haimeng Zhang", "Nikhil Harle", "Alireza Seif", "Ramis Movassagh", "Zlatko Minev" ]
quant-ph
[ "quant-ph", "cond-mat.dis-nn", "cond-mat.str-el" ]
http://arxiv.org/abs/2307.04495v1
20230710113346
Model-Driven Engineering Method to Support the Formalization of Machine Learning using SysML
[ "Simon Raedler", "Juergen Mangler", "Stefanie Rinderle-Ma" ]
cs.SE
[ "cs.SE", "cs.AI", "H.1.0; I.2.4" ]
Model-Driven Engineering Method to Support the Formalization of Machine Learning using SysML

Simon Raedler^1,2 ([email protected]), Juergen Mangler^1 ([email protected]), Stefanie Rinderle-Ma^1 ([email protected])

^1 TUM School of Computation, Information and Technology; Department of Computer Science, Technical University of Munich, Boltzmannstraße 3, Garching b. München, 85748, Germany

^2 Business Informatics Group, Technical University of Vienna, Favoritenstraße 9-11/194-3, Vienna, 1040, Austria

Motivation: Systems Engineering is a transdisciplinary and integrative approach that enables the design, integration, and management of complex systems in systems engineering life cycles. In order to use data generated by cyber-physical systems (CPS), systems engineers cooperate with data scientists to develop customized mechanisms for data extraction, data preparation, and/or data transformation. While interfaces in CPS may be generic, data generated for custom applications must be transformed and merged in specific ways so that insights into the data can be interpreted by systems engineers or dedicated applications to gain additional insights. To foster efficient cooperation between systems engineers and data scientists, the systems engineers have to provide a fine-grained specification that describes (a) all parts of the CPS, (b) how the parts of the CPS might interact, (c) what data is exchanged between them, (d) how the data interrelates, and (e) the requirements and goals of the data extraction. A data scientist can then iteratively (including further refinements of the specification) prepare the necessary custom machine-learning models and components.

Methods: This work introduces a method supporting the collaborative definition of machine learning tasks by leveraging model-based engineering for the formalization with the systems modeling language SysML. The method supports the identification and integration of various data sources, the required definition of semantic connections between data attributes, and the definition of data processing steps within the machine learning support.

Results: By consolidating the knowledge of domain and machine learning experts, a powerful tool to describe machine learning tasks by formalizing knowledge using the systems modeling language SysML is introduced. The method is evaluated based on two use cases, i.e., a smart weather system that allows weather forecasts to be predicted based on sensor data, and a waste prevention case for 3D printer filament that cancels the printing if the intended result cannot be achieved (image processing). Further, a user study is conducted to gather insights from potential users regarding the perceived workload and usability of the elaborated method.

Conclusion: Integrating machine learning-specific properties into systems engineering techniques allows non-data scientists to understand formalized knowledge, to define specific aspects of a machine learning problem, and to document knowledge on the data, and it further supports data scientists in using the formalized knowledge as input for an implementation based on (semi-)automatic code generation. In this respect, this work contributes by consolidating knowledge from various domains and therefore fosters the integration of machine learning in industry by involving several stakeholders.
Acknowledgments This project has been partially supported and funded by the Austrian Research Promotion Agency (FFG) via the Austrian Competence Center for Digital Production (CDP) under the contract number 881843.

§ INTRODUCTION

Leveraging data to allow experts to make informed decisions during the product lifecycle of a product has recently been defined as data-driven engineering <cit.>. The knowledge required for implementing data-driven engineering can be characterized in a two-fold way <cit.>, i.e., by i) profound machine learning skills with respect to the processing and analytics of data and the implementation of algorithms, and ii) by domain knowledge regarding the product of interest, relevant product lifecycle data, and related business processes with the entangled IT infrastructures, to identify data provenance and information flows.

Regarding i) profound machine learning skills, a recent industrial survey revealed that companies have few machine learning experts and too little knowledge to implement solutions themselves. Further, few experts are available on the market <cit.>. To still connect domain and machine learning knowledge, various methods have recently been proposed in the literature <cit.>. However, these methods lack support for defining machine learning tasks and do not sufficiently represent the perspective of engineers. Additionally, the methods mainly integrate engineering methods into data science methodologies supporting data scientists, rather than allowing engineers to apply the methods to support the elaboration of machine learning support.

Therefore, this work aims to integrate machine learning knowledge into systems engineering to support engineers in the definition of machine learning tasks, to consequently enable data-driven engineering and, ultimately, to support product development in defining the prerequisites for the machine learning integration. Particularly, means of Model-Based Engineering (MBE) are adapted to define tasks for data-driven engineering by leveraging data from the product lifecycle of a system. The method of this work builds upon the systems modeling language SysML <cit.>, a general-purpose modeling language that allows a system to be formalized from various viewpoints and disciplines. The interdisciplinary formalization of systems knowledge is referred to as Model-Based Systems Engineering (MBSE) <cit.>. Additionally, the CRISP-DM <cit.> methodology is used as a basis for the organization of the machine learning task definition. The Cross-Industry Standard Process for Data Mining (CRISP-DM) is a methodology consisting of common approaches used by data mining professionals to work out a data mining project from inception (requirements and business understanding) through processing (data understanding, data preparation and modeling) to evaluation and deployment. Ultimately, the method proposed in this work aims to formalize machine learning tasks during product development and to use the formalized knowledge to derive parts of the machine learning solution and to guide the implementation. The method is evaluated using a case study representing a weather station with multiple subsystems to predict weather forecasts, and a second study to prevent waste of 3D printer filament by canceling the printing if the intended result cannot be achieved.
The contribution of this work is manifold:

* The proposal of a SysML metamodel extension to include stereotypes that are used to describe machine learning functions for domain-specific data objects

* A method that fits the latest research areas of the modeling community, known as MDE4AI <cit.>

* A means of structuring the models based on the CRISP-DM methodology.

* Two case studies using the proposed concepts for modeling machine learning support based on simple input data, followed by a discussion of the strengths and weaknesses of the method.

* A user study showing the workload and usability of the method as rated by experts and computer scientists.

This work lays a foundation for allowing non-programmers to define machine learning tasks by formalizing knowledge from the problem domain into a high-level model and to communicate the formalized knowledge. Additionally, the semantic connection of data from various Product-Lifecycle Management (PLM) <cit.> sources allows the origination and composition of data relations to be described. With the availability of such models, the goal is to support the automatic decomposition of SysML models and the (semi-)automatic generation of executable machine learning modules.

This work constitutes an extension of our previous work presented in <cit.> and expands <cit.> in several ways by

* providing more extensive background information to foster understanding.

* extending the presented method with a generic and fine-grained sample of the modeling method.

* applying the method in two case studies from industry.

* conducting a user study on the perceived workload and usability with mechanical engineers and computer scientists.

* discussing advantages and disadvantages of the method in a more thorough way.

The remainder of this paper is structured as follows: Section <ref> presents the background regarding MBSE, data science methodologies and related work on data-driven engineering. In Section <ref>, the elaborated method is introduced in detail, and it is evaluated based on two case studies in Section <ref>. Further, a user study is presented in Section <ref> that evaluates the perceived workload and the usability of the method with mechanical engineers and computer scientists. Based on the findings of the evaluation and the user study, an extensive discussion of advantages and disadvantages is presented in Section <ref>. Finally, the study is summarized with concluding remarks and future work in Section <ref>.

§ BACKGROUND

First, the concepts of model-based systems engineering (MBSE) and the systems modeling language SysML are explained. Second, machine learning and the CRISP-DM <cit.> methodology are introduced, acting as a basis for the method presented in Section <ref>. Next, related methods are depicted with special focus on data-driven engineering. Finally, Section <ref> presents a summary of the background.

§.§ Model-Based Systems Engineering and SysML

Systems engineering, particularly MBSE, aims to integrate various engineering disciplines in product development to establish a single source of truth by formalizing the system requirements, behavior, structure and parametric relations of a system. Conventional systems engineering focuses on storing artifacts in several (text) documents that must be maintained in case of changes. In a model-based method, the relevant information to describe an abstract system is stored in a model <cit.>.
The literature concerning graphical MBSE methods promises to increase design performance while supporting the communication of relevant stakeholders of a system <cit.>. MBSE is a term explicitly considering aspects of a system. Nevertheless, other terms can be considered interchangeable depending on the level of automation and the focus of the application[See <https://modelling-languages.com/clarifying-concepts-mbe-vs-mde-vs-mdd-vs-mda/> for a discussion.]. Independent of the level of automation and the focus of the modeling language, a metamodel defines the modeling concept, relations and all possible instances of a specific set of models. Models are instances of metamodels describing a specific system. The model characteristics must match all aspects of the associated metamodel. However, extensions such as additional attributes can be added directly on a model without changing the metamodel. If a metamodel does not represent an aspect, an extension for a specific group of use cases can be defined using so-called stereotypes <cit.>. A stereotype is a means of modeling to extend metaclasses by defining additional semantics for a specific class concept. A metaclass is a class describing a set of classes, e.g. the metaclass block is a general purpose structuring mechanism that describes a system, subsystem, logical or physical component without the software-specific details implicitly given in UML structured classes <cit.>. The use of stereotypes in modeling methods have been proven to support the understanding and standardization of a model <cit.>. In MBSE, the Systems Modeling Language SysML is the most prominent modeling language <cit.>. SysML is based on the UML standard with a special focus on the formalization of systems instead of modeling classes and objects for software engineering. The language supports the formalization of structural, behavioral and functional specifications <cit.>. Structural diagrams describe the composition of systems and subsystems with their attributes and relations <cit.>. Figure <ref> depicts core elements of a block definition diagram modeled in the Eclipse-based open-source software Papyrus[<https://www.eclipse.org/papyrus/index.php>]. On top of <ref>, a Block with the name Human is defined, consisting of one attribute of type String with the attribute name Name and the visibility public indicated by the plus (+). A block can also have operations, ports etc. which are not relevant for this work and, therefore not introduced here. Underneath the Human-Block, two inheriting elements are defined by the white arrows between the blocks. The attribute Name is inherited from the parent block marked by the tailing dash. One child has an additional property Age, which only affects the block (as long as no deeper inheritance is available). The second block consists of a subsystem, indicated by the black diamond being a part association (a.k.a. composition). A part association determines that a block describes a whole element and a part of the whole element is additionally described in another element[See <https://sysmlforum.com/sysml-faq/what-are-diff-among-part-shared-referenced-associations.html> for a discussion]. The 1 and the 0..2 indicate the multiplicity, allowing to define the cardinality, e.g. number of elements. In this sample, it means one element Child2 can have zero, one or two legs. The white diamond between Leg and Shoe indicates a shared association, which is a weaker form of the part association. 
It refers to a relationship where the part element is still valid if the whole element is deleted, e.g. if the element Leg is not valid anymore, the Shoe is still valid. The multiplicity * indicates that one can have any number of shoes. Since various software tools represent slightly different parts, the description of the block definition diagram can vary. In SysML, the execution of single activities can be modeled using activity diagrams. A state diagram has an entry-point and an exit-point. The arrow between the states indicates a transition and describes that one state has been completed and another is active. Behind a state, the execution of one or multiple activities can be triggered, whereas an activity is a sequential execution of single actions <cit.>, see <ref>.

§.§ Data Science and Methodologies

Data Science and Business Intelligence refer to the extraction of information and knowledge from data through analysis to assist people with various types of insights, such as analysis or prediction, among many others <cit.>. The mining of such information to derive knowledge is called data mining (DM) <cit.>. Machine learning (ML) is a subfield of DM that automatically allows computer programs to improve through experience <cit.>. Machine learning algorithms aim to solve a (specific) problem to eliminate the need for being explicitly programmed <cit.>. To support the implementation of machine learning applications, methodologies have been proposed in a general manner <cit.>. Additionally, extensions of such methods with particular support for data science in the engineering domain have been introduced <cit.>. In the literature, the methods of CRISP-DM <cit.> and KDD <cit.> are assessed in a comparative study <cit.>. According to <cit.>, CRISP-DM is a kind of implementation of the KDD process. In the following, CRISP-DM is described and used as the basis for the structure of the proposed method described in Section <ref>. In CRISP-DM, six core steps are defined supporting the implementation of a DM application:

* Business Understanding: Project objectives, requirements and an understanding from a business level are achieved. Based thereon, a DM problem is defined and a rough roadmap is elaborated.

* Data Understanding: Data is collected to understand the situation from a data point of view.

* Data Preparation: The construction of the final dataset for the learning algorithm based on raw data and data transformations.

* Modeling: One or sometimes several algorithms are selected and applied to the dataset elaborated in the previous step. In this step, so-called hyperparameter tuning is applied to vary parameter values and achieve the most valuable result.

* Evaluation: The result of the algorithm is evaluated against metrics and the objectives from the first step.

* Deployment: The achievements are presented in a way that a customer or an implementation team can use them for further integration.

§.§ Related Work

In the literature, various methods supporting the formalization of data-driven engineering or machine learning using modeling languages are given. The method of <cit.> is based on the Kevoree Modeling Framework KMF <cit.>, which is similar to the Eclipse Modeling Framework (EMF) that is the basis for the open source modeling framework Papyrus[<https://www.eclipse.org/papyrus/>]. <cit.> proposes to model the domain knowledge and small learning units in a single domain modeling method since both are highly entangled.
The method is based on a textual modeling syntax and describes what should be learned, how, and from which attributes and relations. Additionally, templates are given to render code based on the model. However, the open-source framework seems to be unmaintained, since the repository has not been updated since 2017[<https://github.com/dukeboard/kevoree-modeling-framework>]. An actively maintained framework family with means to model machine learning is shown in <cit.>. The method is based on the MontiAnna framework <cit.> and focuses on modeling artificial neural networks. The MontiAnna framework is part of the MontiCore Workbench Family <cit.>. Similar to <cit.>, textual modeling is used to formalize the learning units and the related input and output. The formalization is used as input for template-based code generation. However, the method does not reflect domain-specific (business) knowledge from an engineering perspective. In <cit.>, focus is put on the integration of executable machine learning units modeled on a cloud platform, enabling the fast deployment of distributed systems. However, the method is stiff regarding extendability and advanced data preparation as of the current development state. Additionally, the integration of domain knowledge is hardly given and the focus on the formalisation of data-driven algorithms is not present. The integration of ML in CPS modeling is supported by the textual modeling framework ThingML+ <cit.>. The method extends the ThingML <cit.> modeling method, intended to support the development of IoT devices. As with the other methods, focus is put on machine learning modeling without considering domain knowledge. The method allows deriving executable code based on model transformation using Xtext.

§.§ Summary

MBSE has been proven beneficial in increasing the design performance of systems <cit.>. According to <cit.>, the number of components and functions will increase in the future, leading to more complex systems and requiring advanced support in development and analysis using means of data science. Development support for data science is given in methodologies such as CRISP-DM. However, guidance specific to the engineering domain is limited <cit.>, and an integration into a model-based method is, to the authors' knowledge, unavailable. In the literature, various methods introduce specific metamodels and languages to describe a data science task and eventually enable executable code to be derived. However, these methods are not based on an MBSE-compatible modeling language such as SysML, but instead introduce individual domain-specific modeling environments. Therefore, little support for interdisciplinary communication is given, and the methods are more applicable to computer scientists than to domain outsiders such as mechanical engineers with little knowledge of programming. Moreover, the domain-specific modeling methods are not aligned with the CRISP-DM methodology, leading to little support from a methodological perspective. Last but not least, the proposed methods use model transformation to reduce the implementation effort, but are seldom built in a generic way that allows the modeling or the derivation of code to be extended without extensive changes in the generation. Therefore, maintenance and applicability in practice are rather limited.

§ METHOD

This section describes a method to formalize machine learning tasks based on SysML and the application of an extended metamodel. In the following, first, the extension of the SysML metamodel using stereotypes is described.
Special attention is given to the package structure for organizing the stereotypes, extensibility for different purposes, and generalization so that stereotypes can be used for multiple use cases. Second, a package structure aligned with the CRISP-DM methodology is presented, enabling to guide the application of the newly defined stereotypes. Next, a syntax and semantic is introduced, allowing to interpret the formalized machine learning model enriched with the introduced stereotypes. Finally, means of SysML state diagram is used to define the tasks' execution order. §.§ Metamodel Extension using Stereotypes In the following subsections, six packages are introduced, which allow to group stereotypes that semantically describe required functionalities. Subsequently, an exemplary stereotype hierarchy for defining higher-order functions for domain-specific data transformation purposes is described in detail. §.§.§ Stereotype Package Structure SysML packages are used to group and organize a model and to reduce the complexity of system parts. Similarly, it can be applied for the organization of stereotypes, as depicted in Figure <ref>. The organization of the stereotypes is as follows: in Common, general stereotypes are defined that are used in other packages as basis, e.g. a stereotype ML is defined in Common, each defined stereotype related to machine learning inherits from this stereotype to indicate that it is a machine learning stereotype. Additionally, stereotypes can be defined allowing to categorize other stereotypes, e.g. an abstract Pre-Processing stereotype allows to identify that all inheriting stereotypes are introduced for the data preparation step of the CRISP-DM methodology. In Attributes, stereotypes for a more detailed definition of attributes are defined. These attribute stereotypes cannot be applied to blocks, only to attributes of a block. Consequently, the stereotypes extend primitive data types such as Integer or Float. The purpose of the extension are additional characteristics to describe the data, e.g. valid ranges of a value or the format of a datetime property or a regular expression to collect or describe a part of a text value. The package DataStorage defines available data interfaces from a general perspective required for the loading and processing of data from various data sources, e.g. SQL servers, Application Programmable Interface (API) or other file formats (e.g. CSV). The purpose of the stereotypes are to support the data understanding of the CRISP-DM methodology. Additionally, it allows to bridge the gap between business and data understanding due to the explicit formats. Further details in Section <ref>. In the Algorithm package, various machine learning algorithms are defined and grouped with respect to algorithm types, e.g. regression or clustering algorithms. Particularly, the focus is put on key characteristics of an algorithm implementation, such as mandatory hyper-parameter or the stereotype description. Optional algorithm parameters are not described in the stereotype, but can be added during the modeling, as later illustrated in Figure <ref>. The PreProcessing package (a.k.a. as data preparation) is the most complex and extensive package due to the number of functionalities required. Additionally, a survey revealed that computer scientists spend the most effort in preparing and cleaning data <cit.>. 
Within this package, functions are defined that allow data to be transformed so that a cleaned and applicable dataset for the machine learning algorithm is obtained. Finally, the AlgorithmWorkflow package consists of stereotypes for the states of the state diagram, allowing the implementation order of the machine learning tasks to be defined. Typically in SysML, states are connected to activities, which are a sequence of execution steps. However, in practice, we found that it is very time consuming to prepare activities first. Additionally, a function abstracted as a single block can be considered as a set of activities. Consequently, state diagrams are used instead of activity diagrams to reduce the implementation effort and complexity.

§.§.§ Stereotypes Hierarchy

As mentioned in Section <ref>, each package represents a specific hierarchy of stereotypes, allowing various aspects of machine learning subtasks to be described. An example definition of stereotypes related to data pre-processing is depicted in Figure <ref>. As described in Section <ref>, stereotypes can be hierarchically composed to describe specific attributes only once for a set of stereotypes. On top, the ML stereotype defined in the Common package is depicted, indicating that all inheriting stereotypes are related to machine learning. Formalizing a machine learning task is intended to be iterative, which is why some stereotypes are abstract, illustrated by italic letters. If a stereotype is abstract, it means that the stereotype requires further detailing or that a child stereotype with additional information is required, e.g., DataTransformation cannot be used without further details as it can be an arbitrary transformation of data. The purpose of abstraction is to support the early definition of tasks in product development without the details already being known, e.g., the final file format used to store the data. From top to bottom in Figure <ref>, the level of detail increases and the task is chosen in a more fine-grained way. Consequently, leaves are the most fine-grained representation. The inheritance additionally allows functions of a specific kind to be grouped, e.g., functions regarding outlier detection. Due to the grouping of functions, the composition of stereotypes strongly depends on the preferences of the implementing expert and the purpose of the composition in terms of inheritance of attributes. Note that attributes defined in a parent stereotype are also available in a child or grandchild stereotype, respectively. Therefore, each level should only represent mandatory attributes. This especially applies to algorithms with a lot of hyper-parameters, e.g. logistic regression with more than 20 parameters and attributes[<https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html>]. In case a parameter is not defined in the stereotype, it can still be added during the modeling and application of the stereotypes. A sample can be found in Section <ref>. Additionally, it is possible to add a set of values using Enumerations for a single attribute, e.g. MissingValueFunction highlighted in green. In this respect, modeling is more precise and guided by a fixed set of valid options. Similarly, specific stereotypes can be used as an attribute, which means that only blocks or attributes that apply the specific stereotype can be assigned, e.g.
Method_Attribute_Input indicates that only properties carrying a stereotype defined in the package Attributes can be applied, because every attribute stereotype inherits from it. Finally, the keyword BlackBox can be applied if a function shall be hidden for security reasons or if its implementation is unknown, e.g. BlackBox_Outliers on the right side of Figure <ref>. §.§ Package Structure Guiding the Implementation CRISP-DM, as described in Section <ref>, consists of six steps, each describing a specific aspect required for the development of a machine learning project. Figure <ref> illustrates the package structure aligned with the CRISP-DM methodology. Business Understanding consists of block definition diagrams describing the system under study and its composition from a system configuration point of view. In this respect, the VAMOS method (Variant Modeling with SysML, <cit.>) is integrated to describe a specific system configuration. The integration of the VAMOS method focuses on the data interfaces and attributes of a particular configuration of the system, as different configurations might lead to different data outputs. In this work, the VAMOS method is therefore used with a focus on data interfaces; other systems engineering knowledge is presented in other diagrams, which are out of the scope of this work. Still, the knowledge modeled in those diagrams is connected to the instance of a block used in the VAMOS method, and therefore multiple disciplines can work on the same model. The second step, Data Understanding, refines the Business Understanding by defining the delivered data at the level of attributes and data formats. In particular, the data type and the name of each delivered data attribute are described using block definition diagrams. Additionally, attribute stereotypes are used to describe the data in detail, as explained in Section <ref>. By applying stereotypes on the block level, the type of data interface is defined, e.g. CSV files or SQL servers. With the formalization of the interfaces in this package, the information exchange between systems engineering and data engineering can be considered complete. Based on the Data Understanding, the Pre-Processing transforms and prepares the data into a final dataset that can be used in the Modeling. The Pre-Processing requires the most effort, owing to the potentially large number of data transformations needed to create a dataset usable for machine learning. The result of the Pre-Processing is a final dataset that is considered ready for the machine learning algorithm. Within the Modeling, algorithms are applied to the final dataset; additionally, train-test splitting and other functions required by the machine learning algorithm are applied. In the Evaluation package, various metrics are used to assess and demonstrate the validity of the algorithm results of the Modeling package. Finally, the Workflow package describes the execution order of the formalization of the previous packages using state diagrams. For each state, a custom stereotype is applied that allows a block carrying a stereotype inherited from ML to be connected. Assigning blocks to states removes the need to define activities, which makes the method less heavyweight in its application and reduces the time required to formalize the machine learning task. Typically, in CRISP-DM the very last step is the deployment.
However, the deployment is considered out of scope in this work, and the method therefore ends with the workflow. §.§ Syntax and Semantics For implementing ML functionalities, the functional programming paradigm is an intuitive choice <cit.>. It relies on higher-order functions that are invoked on (data) objects and return objects. In contrast to the imperative programming paradigm, this allows step-by-step decomposition, filtering, and transformation of data without side effects (changes to variables). This sequence of function invocations aligns well with how UML and other modeling languages implement abstraction levels to reflect a relevant selection of properties and focus on the aspects of interest <cit.>. Functions are black boxes with processing capability: they are associated with the (data) artifacts upon which they can be called and with the data artifacts they produce as output. The abstraction is realized by describing a function, or a set of functions, with a single stereotype and its instances with blocks. A class in UML is defined, among others, by attributes, stereotypes, operations (methods), constraints, and relationships to other classes. In SysML, a block describes a system or subsystem with a definition similar to that of a UML class. A machine learning task and its subtasks can be seen as a system with subsystems; therefore, each subtask is modeled as a block, aligned with the syntax described in Section <ref>. In particular, only the input values, represented as attributes of a block, and the relations to other blocks are modeled. The operations (methods) are defined as stereotypes with abstracted implementations. Attributes defined on the stereotype are mandatory input values for the definition of a machine learning subtask, whereas attributes defined on the block itself are optional, serving for documentation or to extend the stereotype with fine-grained details, e.g. the utc attribute in the Format_Date2 block in Figure <ref>. The output of a subtask (block) is implicitly defined by the implementation of the code snippet related to the stereotype and is not explicitly depicted in the model. The output of a block can be used as input for other blocks, e.g. the CSV_1 block serves as input for the Format_Date block. Figure <ref> depicts a few samples of this syntax and semantics. On the top right, a date conversion subtask is modeled as Format_Date. The date conversion stereotype has a mandatory attribute defining the output format of the conversion. The input for the date conversion is the block CSV_1, connected using a part association. In this sample, the date attribute is the only matching input value due to its Datetime stereotype. However, if the input is ambiguous, for instance because the datetime is stored as an integer or several attributes of the connected block match the expected input format, additional attributes must be added to the date conversion to select the particular input, e.g. a new attribute whose value names the input attribute of the connected block. The block Format_Date2 inherits from Format_Date; its input and attributes are therefore the same except for manually overwritten values, e.g. a changed output datetime format or the additional attribute utc.
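To make this mapping more tangible, a minimal Python/pandas sketch of how the Format_Date and Format_Date2 blocks could be realized is given below. It is only an illustrative reading of the model, not a generated or reference implementation; the column names and sample values are assumptions introduced here.

```python
import pandas as pd

# Stand-in for the CSV_1 block (column names and values are assumptions).
csv_1 = pd.DataFrame({"date": ["2023-01-01 12:00:00", "2023-01-02 08:30:00"],
                      "temperature": [3.2, 4.1]})

# Format_Date: parse the column tagged with the Datetime stereotype and
# render it in the output format given by the mandatory stereotype attribute.
csv_1["date"] = pd.to_datetime(csv_1["date"])
csv_1["date_formatted"] = csv_1["date"].dt.strftime("%Y-%m-%d")

# Format_Date2 inherits from Format_Date and overwrites single values,
# here adding the optional utc attribute.
csv_1["date_utc"] = pd.to_datetime(csv_1["date"], utc=True)
```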
Another sample in Figure <ref> shows the integration of multiple inputs. The Merge_DF block has two input blocks, and the attributes on which the merge shall be performed are defined by an attribute holding two values (MergeOn). The MergeOn attribute is mandatory and therefore defined on the stereotype. Although the implicit execution order of the subtasks is given by the associations and by the necessity to compute inputs first, the order may still be ambiguous, e.g. whether Format_Date or Merge_DF is executed first. As described in Section <ref>, structural diagram elements such as blocks require integration into behavioral diagrams so that an execution order can be defined <cit.>. To enable the connection of a block with a state in a state diagram, custom stereotypes are applied. Each state stereotype has a single mandatory attribute, which references a block whose stereotype inherits from the root parent stereotype ML. § CASE STUDIES This section presents two case studies: a weather system that predicts weather forecasts based on sensor data, and an image similarity check that assesses whether the actual print of a 3D model on a 3D printer corresponds to the desired output. In the latter case, the printing process can be stopped prematurely, saving filament and time. §.§ UC1 - Weather Forecast based on Sensor Data Figure <ref> illustrates the composition of the weather system, which is split into two parts. On the left side, a local station equipped with various sensors delivers a CSV file with measurements; on the right side, a weather forecast service additionally delivers a CSV file with forecasts over the internet. From a systems engineering perspective, the weather system is a cyber-physical system that can be configured with various sensors. Figure <ref> depicts the SysML model of the weather system with a specific configuration aligned with Figure <ref>; in particular, it applies a method aligned with <cit.> that allows variations to be formalized. Modeling the system from a business perspective is the first step of the method. The focus is put on the values of interest, i.e. the output values of the subsystems, to keep the business understanding as concise as possible. In the middle of the figure, the core weather system configuration is depicted; the surrounding elements are sensors or other subsystems, e.g. an API (right side). The attributes of the sensors are the output values of the respective subsystems, in line with the CRISP-DM business understanding, which aims to provide a general idea of the system and of where the data originates. To turn the business understanding into valuable data understanding, connections between the system in the business understanding and the output data formats are established. In particular, a realization connection is modeled between the CPS and blocks that describe the data format using stereotypes inheriting from ML. In these blocks, each attribute has a type representing the actual data type in the data source and a stereotype describing its representation in the machine learning method; e.g. the CSV_2 attribute date_date is of type String and is mapped to the stereotype Datetime, which covers aspects such as the datetime format. Additionally, stereotype attributes such as Encoding or Delimiter describe the composition of the CSV file. Figure <ref> depicts a set of subtasks applied to the data sources defined in Figure <ref>.
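The following hedged pandas sketch indicates how the formalized data interfaces and the Merge_DF subtask could translate into code; the file names, key columns, delimiter, and encoding are illustrative assumptions and not values prescribed by the model.

```python
import pandas as pd

# CSV_1 and CSV_2 blocks: the Delimiter and Encoding stereotype attributes
# map directly onto read_csv arguments (the concrete values are assumed).
csv_1 = pd.read_csv("station_sensors.csv", delimiter=";", encoding="utf-8")
csv_2 = pd.read_csv("forecast_api.csv", delimiter=",", encoding="utf-8")

# Attributes tagged with the Datetime stereotype are parsed so that both
# sources share a comparable key.
csv_1["date"] = pd.to_datetime(csv_1["date"])
csv_2["date_date"] = pd.to_datetime(csv_2["date_date"])

# Merge_DF: the mandatory MergeOn attribute holds the two key columns.
merged = pd.merge(csv_1, csv_2, left_on="date", right_on="date_date")
```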
For an explanation of Figure <ref>, please refer to Section <ref>. Figure <ref> illustrates the application of a train-test split and the integration of the split data into two different regression algorithms, which are specified via a mandatory attribute. According to the definition of the stereotypes, no further parameters are mandatory; for the RandomForestRegressor, the optional hyper-parameter max_depth is defined. Figure <ref> depicts the prediction and the application of metrics such as the mean absolute error (MAE). The mandatory parameter text is a placeholder for text that shall be reported together with the evaluation result. The final step of the method is integrating the blocks into an execution workflow. Figure <ref> illustrates the execution order of the algorithm steps. As can be seen, the Format_Date2 block modeled in Figure <ref> is not depicted in the workflow, meaning that it is not considered during the implementation and remains as an artifact from the formalization stage. The names of the states are chosen so that the workflow and the blocks connected via the ML_Block_Connection stereotype are readily understood. As the scope of this work is to formalize the machine learning rather than to improve the executable code or to derive the code automatically, the results of the machine learning and the implementation itself are not depicted and are left to future work.
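Although the executable code is out of scope here, a minimal scikit-learn sketch may help to indicate how the modeled Modeling and Evaluation blocks could be interpreted. The feature and target columns, the test size, and the concrete value of max_depth are assumptions; a linear regression merely stands in for the second, otherwise unspecified regressor.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Final dataset produced by the pre-processing subtasks (file and column
# names are assumptions for illustration).
data = pd.read_csv("final_dataset.csv")
X = data[["forecast_temperature", "humidity"]]
y = data["measured_temperature"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Two regressors as modeled; max_depth is the optional hyper-parameter
# added on the RandomForestRegressor block.
models = {"RandomForestRegressor": RandomForestRegressor(max_depth=5),
          "LinearRegression": LinearRegression()}

for name, model in models.items():
    model.fit(X_train, y_train)
    prediction = model.predict(X_test)
    # Evaluation block: the mandatory 'text' parameter becomes the label
    # printed together with the metric.
    print(f"MAE {name}:", mean_absolute_error(y_test, prediction))
```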
§.§ UC2 - 3D Printer Success Evaluation during Printing The purpose of this application is to detect faulty 3D prints during the printing process by comparing the actual status of the printed model with the intended model. This use case illustrates the method's applicability to other data sources, such as image data, and the integration of the method into an executable workflow engine. Additionally, the integration of pre-trained models is shown by incorporating TensorFlow Hub. The idea of the image similarity check is based on an image similarity tutorial[<https://towardsdatascience.com/image-similarity-with-deep-learning-c17d83068f59>]. The use case process is described below and illustrated in Figure <ref>. We adopt the CPEE process engine <cit.> to orchestrate the application process, as the CPEE provides a lightweight and straightforward user interface to orchestrate any application that allows interaction via REST web services. Figure <ref> shows the workflow of the application, consisting of image generation and printing. The first three process steps cover the slicing of an STL file and the generation of the reference images; in particular, a Python script is called that generates the slices from a given STL file and stores the resulting reference images for the later comparison and similarity check. The second part of the process consists of a loop that prints a slice, takes a photo with a camera mounted above the center of the working area, and calls a similarity script to compare the intended and the actually printed model. The image similarity algorithm is defined using the machine learning formalization method proposed here. The defined algorithm provides a similarity index that is compared with a threshold value: if the threshold is exceeded, the printing process is aborted; otherwise, printing continues. The machine learning model integrated into the printing process is formalized below. Figure <ref> shows the input data, consisting of two images: the image sliced from the STL file and the photo from the 3D printer camera. In contrast to the first use case, the data attributes are not further detailed with stereotypes because the input data do not vary, i.e. the format and resolution of the images do not change. Figure <ref> depicts the scaling of the images so that they have the same dimensions. The conversion parameter L allows the images to be compared on a black-and-white basis. Normalization of the pixel and color values to the range between 0 and 1 is also applied. The normalization in the block Convert_PixelsAndNormalize should ideally be defined as a new stereotype; in this case, however, we show the application of the CustomCode stereotype, which allows program code to be injected and thus enables rapid prototyping. Such code injection has flaws, e.g. vulnerabilities or hijacking of the method, and may lead to reduced understanding and reproducibility; moreover, inserting programmed code is not the purpose of the method (for further discussion, see Section <ref>). With respect to this potential misuse of the method, Figure <ref> depicts the wrongly used CustomCode stereotype on top and, below it, the correct use of stereotypes for the same result with a slightly changed code sequence. Further, the two images are fed to the classification algorithm, as illustrated in Figure <ref>. The input value Model describes a TensorFlow Hub input, i.e. a pre-trained model to classify images. Finally, the result is measured using a cosine distance metric. The threshold for canceling the printing is implemented in the workflow and can be adjusted by the user. Figure <ref> depicts the execution sequence of the algorithm.
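One possible executable reading of this formalization is sketched below. The TensorFlow Hub model URL, the image file names, and the target resolution are placeholders, and the grayscale-to-three-channel stacking is an assumption of the sketch; the actual scripts orchestrated by the CPEE may differ.

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image

MODEL_URL = "https://tfhub.dev/..."  # placeholder for the pre-trained model

def load_image(path, size=(224, 224)):
    # Scale both images to the same dimensions, convert them with mode "L"
    # (black and white) and normalize the pixel values to [0, 1].
    image = Image.open(path).convert("L").resize(size)
    pixels = np.asarray(image, dtype=np.float32) / 255.0
    # Most pre-trained models expect three channels, so the grayscale image
    # is stacked accordingly (an assumption of this sketch).
    return np.stack([pixels] * 3, axis=-1)

model = hub.load(MODEL_URL)
reference = model(tf.constant([load_image("slice_0042.png")])).numpy().ravel()
photo = model(tf.constant([load_image("camera_0042.jpg")])).numpy().ravel()

# Cosine similarity between the two feature vectors; the workflow engine
# compares the resulting similarity index with the user-defined threshold.
similarity = float(np.dot(reference, photo) /
                   (np.linalg.norm(reference) * np.linalg.norm(photo)))
print("similarity index:", similarity)
```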
§ USER STUDY Typical users of the presented method are computer scientists and engineers from various disciplines, depending on the application area. Therefore, this study aims to assess and compare computer scientists' and mechanical engineers' subjective workload and user experience when understanding, modifying, and creating machine learning functions in a model-based method. Furthermore, the time required to apply changes or create constructs in SysML is measured, allowing the participants to be compared with respect to previous experience, e.g. prior programming or modeling knowledge. Since the study and the modeling are conducted using the SysML modeling tool Papyrus[<https://www.eclipse.org/papyrus/>], it is impossible to eliminate distortions caused by the usability of the underlying tool, e.g. “How to model a block”. Therefore, the study director provides verbal assistance if a participant requires support due to the tool's usability. Large sample sizes would be necessary to enable a quantitative evaluation, which is not feasible due to resource constraints. Therefore, the principles of discount usability are applied, i.e. testing only a small group of users and identifying the main usability problems through small qualitative user studies with three to five users, a detailed scenario, and a think-aloud method <cit.>. According to <cit.>, five users give a 70% chance of finding 80% of the usability issues. However, there are reports in the literature that increasing the number of participants from five to ten significantly changes the number of issues found <cit.>. In this respect, a total of 12 users were tested, equally distributed among the two groups, Computer Scientists (CS) and Mechanical Engineers (ME). In the following, the experimental setting is described. Next, an introduction to the evaluation procedure is given, followed by a description of the test cases in Section <ref>. Finally, the results of the user study are presented in Section <ref>, and a discussion of its implications is given in Section <ref>. §.§ Experimental Setting The user study was conducted with 12 participants. Each participant holds a university degree (B.Sc., M.Sc., or Ph.D.) and received a basic introduction to programming at university. Half of the participants are CSs and half are MEs. Other engineers could serve as potential users and equally valid test users as well; however, to obtain a more homogeneous group, the engineers are limited to MEs. Due to the participants' different levels of knowledge in modeling, programming, and data science, a self-assessment of their experience was collected at the beginning of the user test. Table <ref> summarizes the knowledge levels of the participants based on their highest university degree, years of experience, position at their current job, and self-assessment on the three relevant dimensions. §.§ Evaluation Procedure The study started with a basic introduction to SysML and an overview of the method introduced in this work, taking approximately 10 minutes and involving the presentation of two predefined block definition diagrams as samples, with a focus on modeling and understanding a block definition diagram and applying the introduced stereotypes. Following this, the users had to perform three tasks: (1) showing that they understand the purpose of the modeling and the basic idea of the method by describing the modeled method in Figure <ref>, (2) replacing a CSV stereotype with a Text-file stereotype and redefining the attribute properties of the text file, and (3) adding a new function by connecting a new block with a particular stereotype to an existing block. Each of the tasks (1)–(3) is subdivided into sub-activities to allow a fine-grained evaluation of the tasks and of the performance achieved by the participants. The sub-activities are listed with their tasks in Table <ref>. For each participant, the time taken to perform the tasks is recorded. After each of the three tasks, the NASA Task Load Index (NASA-TLX, <cit.>) and the System Usability Scale (SUS, <cit.>) questionnaires are filled out by the users to assess their subjective workload and the perceived usability. Before filling out the questionnaires, the users were explicitly told to evaluate the method's usability, not Papyrus's. §.§ Test Cases Table <ref> lists the subtasks required to accomplish the tasks of the user study. Each subtask is assessed by the study leader to determine whether it was completed correctly. If a user could not find a specific button due to the usability of Papyrus but could justify what they were looking for, e.g. “I need to remove a stereotype and add a new one so that a new function is defined”, the task is evaluated as correct. To achieve reproducibility, the tasks were posed with exactly the following wordings: Task 1 Understanding: Please describe what can be seen in the currently displayed diagram and what function it fulfills. Additionally, please answer the following questions: * What are the two input files, and in which format? * What values are stored within CSV_2? * What is the type of date_date, and how is it represented in the ML model? * What are the path and encoding of the two input files? * What are the properties of the DataFrame_Merge stereotype? Task 2 Function Exchange: Behind the presented TextFile function, a CSV stereotype is defined. However, the type is incorrect.
Please change the file type to Text-File. Additionally, set the encoding to UTF-8 and the path to C:/file.txt. Task 3 Adding a Function:In the following view, you can see two input files connected to a merge block. Additionally, a normalization of the merge block is required. Please add the function for Normalization and set the value of the normalization method to MaxAbsScalar. §.§ Survey Results Figure <ref> shows boxplots of the required times for the individual tasks grouped per task and training of the participants in CS or ME. For Task1, the time required is higher than for Task2 and Task3, whereas Task2 and Task3 shows a comparable average and distribution. One reason for the higher time for Task1 is that the users had to describe a model and this task is therefore more time-consuming. It was also observed that repetitive tasks made the users faster, which also came as feedback from the participants. Further, the dispersion of Task1 for ME is higher compared to CS. This scatter might be explained because of the varying experience levels of the participants with respect to modeling and data science. However, there was no correlation between the time spent and the correctness of the execution of the sub-activities. Regarding the dispersion of CS, interestingly, Tasks 2 and 3 vary more than Task1. This can mainly be explained by the familiarity with the Papyrus modeling environment. Thus, participants with more Papyrus experience had completed the tasks much faster than those who used Papyrus for the first time. Figure <ref> shows the result of the individual tasks in terms of correctness in relation to the subtasks of Table <ref>. CS perform better for T1 and T2, which can be explained by the extended prior experience regarding UML of CS obtained during university education. In T3, however, ME perform better. This can be explained by an outlier value for CS that performs significantly below the average. The overall accuracy of ME increased with the evolving tasks although the average of T2 is lower than for T1. The results of the applied NASA-TLX test to indicate the perceived workload of the participants for the specific tasks are presented in Figure <ref>. The lower the value of a dimension of the NASA-TLX, the lower the perceived workload. Consequently, a low scale value is seen as positive. The Effort dimension shows, for example, that with increasing experience or task, the perceived effort decreases. Further, the frustration increases and the performance decreases compared to T1. For T3, the standard error is larger than for T1 and T2. Both might be justified due to the increasing complexity of the tasks. However, it is a contrast compared to the achieved accuracy in Figure <ref>. The raw overall scores of the tasks are depicted in Table <ref>. According to <cit.>, the workload is categorized as `medium', which is the second best score and ranging from 10 to 29 points. The cumulative results of CS and ME shows a decreasing workload among the evolving tasks. For CS, the workload appears to be higher than for ME, especially for T3. As of the user feedback, no justification can be given on the difference between CS and ME. The results of the SUS test with different rating scales are shown in Table <ref> based on <cit.>. Figure <ref> presents the SUS score as a boxplot, prepared with an online tool for analyzing SUS questionnaire results <cit.>. The adjective scale score in the boxplot is aligned with <cit.>, which is based on <cit.>. 
The figure highlights that each task achieves the rating good for both CS and ME. The standard error of CS is slightly higher than for ME, which can also be seen in Table <ref>. The values of quartile scale shown in Table <ref> are according to <cit.> and acceptability scale according to <cit.>. ME increased the score in T3, T1 and T2 are equal. CS decreased the score among the tasks. However, the changes in the scores are little and therefore not justifiable. Figure <ref> depicts the percentile scale based on <cit.>. Since the percentile score is not uniform or normally distributed, a percentile score was created based on 5000 SUS studies. In this respect, the comparison shows that the tests achieved a percentile between 60 and 79. T3 ME over performed with 79. For CS and ME the average percentile is  66. T1 and T2 for ME have exactly the same value, which is why they are shown as one colour in the Figure. § DISCUSSION This section discusses advantages and potential flaws of the newly introduced method to formalize machine learning tasks. The structure of the section is as follows: First, the metamodel's extension and the stereotypes' proposed structure are discussed. Next, the benefits and shortcomings of the modeling semantic are assessed with a particular focus on the applicability and potential ambiguous interpretation. Next, potential risks of model-driven machine learning and future work are presented. Finally, the implications of the user study are presented and discussed. §.§ Stereotypes and Structure of the Custom Metamodel The integration of custom stereotypes has been proven beneficial in the literature <cit.>. In this method, the use of stereotypes to encapsulate and abstract knowledge about machine learning tasks is beneficial as implementation details are hidden, thus supporting communication between different engineers not necessarily experienced in machine learning or programming. With structuring the stereotypes using packages, a stereotype organization aligned to the CRISP-DM methodology is given, supporting refinements and extension in a fine-grained, hierarchical manner. Particularly, the definition of blackbox and abstract stereotypes allows the description of various functions without the necessity to specify each machine learning function in detail. In the custom metamodel, custom Enumerations are defined to limit the number of attribute values, which reduces the model's wrong specifications. Another opportunity to reduce the scope of possible selections is to reduce the number of allowed stereotypes, e.g., only inheritance of the abstract stereotype PreProcessing can be assigned as a value for a specific attribute. However, the filtering of stereotypes requires specific rules that have not yet been integrated or elaborated. Although various methods are defined using stereotypes, the level of detail might be too little for practical application. DateConversion, for example, can be applied to manifold input values and various outputs, e.g., output representation as a string or Coordinated Universal Time (UTC). Adding multiple DateConversion stereotypes for each case is possible. Still, with a growing number of stereotypes, the complexity of selecting the correct, unambiguous stereotype increases while the maintainability decreases. Similarly, if too many stereotype attributes have to be set, the complexity and the effort for the application increases. 
With respect to these uncertainties at the level of detail required for fine-grained definition of machine learning tasks, industrial case studies have to be conducted to elaborate and validate sufficient degree of detail and additionally define future work. §.§ Complexity of Unambiguous Modeling The definition of an implementation structure aligned with the CRISP-DM methodology starting from the business understanding and ending with the definition of evaluation and workflows, is promising to be useful due to the integration of a comprehensive and mature methodology in a MBSE method. Additionally, more experienced computer scientists aware of CRISP-DM can rely on experiences and the benefits of CRISP-DM. Furthermore, in practice, one third of data scientists lack business understanding and communication skills<cit.>, which can be supported by the model-based method of CRISP-DM. Each block implementing a ML stereotype within the implementation structure can be seen as an encapsulated subtask. Each subtask provides an output that can be used as input for another block. However, the given method does not explicitly specify the output of a block. Therefore, the output is defined by the implementing computer scientist, which may lead to different results due to the range of experience of the decisions and the laziness of the semantics, which allows to create arbitrary associations that may not be implementable. In this respect, future work requires the integration of model checking to reduce orphan associations, infeasible implementations and unwanted side effects on changing associations. Despite the ambitiousness of the modeling and the potential errors of the associations, the method supports the elaboration and definition of machine learning tasks from early development, which is beneficial. The authors believe that the flaws in the beginning of the method are getting less with the application due to the possibility of reusing certain parts of the formalization. The reuse additionally allows to preserve knowledge and contribute to standardization in the modeling and implementation, which further leads to a reduction of cost and risk in the design <cit.> and the maintenance of machine learning applications. §.§ Potential of Model-Driven Machine Learning The given proposal to describe machine learning tasks using a model-based method has some benefits but also disadvantages. A core disadvantage is the initial effort to introduce stereotypes and formalize the model. In this respect, traditional programming might be less time consuming and therefore, users might use the CustomCode stereotype to inject code. However, it is not the purpose of the method to insert code injection due to vulnerability risks and the reduced documentation and understanding by others. Consequently, future work is required to investigate an extension of the method that allows to generate code from the model but with limitations so that code injections like described in the use case are not possible. Another disadvantage of the stereotypes is the potential effort for maintenance if interfaces are proprietary or rapidly changing, e.g. due to configuration changes or replacement of machines. Closely related, for huge projects, the complexity of the resulting models might be very high, including potential errors in the model or ambiguous associations, which might be very hard to find and thus lead to additional communication effort. 
Nevertheless, the shortcoming of a complex ramp-up might turn into a benefit in the end, since model libraries containing well-defined models can be introduced, leading to standardized parts that can be reused. Further, the method allows the formalization to be used as documentation of the implemented technologies, which improves maintainability and extendability for the various engineers involved. Additionally, with further work on model validation and model debugging features, errors in the semantics could be found and repaired without actually implementing the machine learning application. To use this efficiently, however, integration into advanced model lifecycle management <cit.> might be necessary to enable collaborative working. Due to its non-programming description of machine learning, the method promises to improve communication among the various disciplines. In particular, by building on the general-purpose language SysML and on the intersection of CRISP-DM and MBSE, the heterogeneous communities are broadly supported, which favors the adoption of machine learning in industrial practice and helps shift machine learning knowledge within enterprises. Further, the method can be integrated into early product development because its abstract definitions allow data interfaces to be foreseen that might otherwise be forgotten during development. This potentially increases the accuracy of the machine learning applications and might reduce the number of failing machine learning projects, a well-known problem in industry <cit.>. So far, this section has discussed the advantages and potential shortcomings of the method; a key advantage of the formalized knowledge, however, has not been detailed yet. The machine-readable artifacts (models) can be used with model transformations to generate executable code, such as a Python script. In particular, each ML stereotype carries the knowledge to describe a specific subtask, which corresponds to a function in a programming language, e.g. a date conversion. The function parameters are defined in the stereotype (mandatory parameters) or on the block (optional parameters). Since stereotypes have to be uniquely named, each stereotype can be mapped to a generic code template in a dedicated programming language, e.g. Python. The templates consist of fixed code and generic parts with placeholders, which are filled based on the model's attributes. The state diagram defines the execution order, and since every block is a well-encapsulated piece of functionality, each block can generate a single code cell in a Jupyter Notebook[<https://ipython.org/notebook.html>]. The automatic derivation of executable machine learning code reduces the effort for documentation and implementation and potentially leads to fewer errors of interpretation. In this respect, future work consists of implementing a proof of concept showing that such a derivation and decomposition of formalized machine learning knowledge is beneficial.
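To make the envisaged mapping more tangible, a minimal sketch of such a template mechanism is shown below. The template texts, attribute names, and the simple cell rendering are assumptions for illustration and do not anticipate the proof of concept mentioned above.

```python
from string import Template

# One generic code template per (uniquely named) stereotype: fixed code plus
# placeholders that are filled from the model's attributes.
TEMPLATES = {
    "DateConversion": Template(
        '$frame["$column"] = pd.to_datetime($frame["$column"]).dt.strftime("$format")'
    ),
    "TrainTestSplit": Template(
        "X_train, X_test, y_train, y_test = train_test_split($features, $target, test_size=$test_size)"
    ),
}

def generate_cell(stereotype: str, attributes: dict) -> str:
    """Render one notebook cell for a block applying the given stereotype."""
    return TEMPLATES[stereotype].substitute(**attributes)

# Attribute values as they could be read from the Format_Date block.
print(generate_cell("DateConversion",
                    {"frame": "csv_1", "column": "date", "format": "%Y-%m-%d"}))
```

In a complete generator, one such cell would be emitted per block, in the order prescribed by the state diagram.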
§.§ Implications from the User Study The user study was conducted with two groups that are representative of the intended users of the method in practice. The results show that the majority of the tasks were accomplished successfully. From a study perspective, the users could perform each task without additional guidance on the modeling method; still, problems occurred with the user interface of Papyrus, e.g. when expanding a group of elements to select a block element for modeling. However, learning effects could be observed across the tasks for both CS and ME. The assessment of the NASA-TLX showed that the mental demand of the tasks is comparable. A similar observation can be made for the level of frustration, which is slightly lower for the first task. Contrary to expectations, the participants perceived the effort as decreasing, although, given the nature of the tasks, the effort for modeling should have been higher than for understanding a model. Nevertheless, it can be inferred that, in terms of task load, both CS and ME can use the method without being strained excessively. From a usability perspective, the method achieved good results; users rated especially the consistency of the method as very high. Compared with other methods on the percentile curve, it achieved a rank above 66. However, these first positive results could be due to some shortcomings in the study design. In particular, the need to interact with Papyrus might have had a larger impact than expected: the users' perception of usability may be tied more to the experience with Papyrus than to the method itself, although they were instructed beforehand to focus on the method. In this respect, a paper prototype in which users move paper snippets on a table might have been more appropriate. Furthermore, most of the participants reported their data science knowledge as low and were nevertheless able to explain what happens in a given model or to create a model building block themselves. Modeling their own data science application might still not be possible, as their general understanding of data science is too limited. Nevertheless, one result of the study is that the modeled knowledge can be used as a communication medium. Therefore, it should also be possible for non-data scientists to perform a plausibility analysis, as they can gain an understanding of the process without understanding programming code. However, this would need to be evaluated in a further study; similarly, an evaluation of the results with a larger study should be sought. § CONCLUSIONS In this work, the definition of machine learning tasks by means of SysML is presented. In particular, the metamodel of SysML is extended with stereotypes that reflect functions from the machine learning domain. Additionally, the CRISP-DM methodology is used as the basis for the structure of the models, organizing the development with specific viewpoints. The method is evaluated in a case study showing the integration of machine learning task definition into a cyber-physical system, as well as in a second case study in which a workflow engine is integrated to interrupt a 3D printing task if the intended result cannot be achieved. Additionally, a user study is performed to obtain an overview of the perceived workload using the NASA-TLX questionnaire and to check the usability of the method using the SUS questionnaire. The findings of the evaluation show that the entire workflow of a machine learning solution can be reflected using SysML. Additionally, the connection between the domain of (mechanical/electrical) engineers and that of machine learning experts is demonstrated. With the MBSE integration and the involvement of stakeholders from different disciplines, an improvement in communication is expected, as indicated by the user study, which implies that non-experts in data science can use the method as a medium of communication. Future work consists of extending the method to automatically derive executable machine learning code acting as a basis for the implementation.
In addition, a case study must be conducted to determine the minimum level of detail required to sufficiently define a machine learning model that can serve as a medium of communication and thus guide the implementation of the executable code derived from the formalized machine learning model.
http://arxiv.org/abs/2307.10190v1
20230708141246
Summary of the 3rd BINA Workshop
[ "Eugene Semenko", "Manfred Cuntz" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.SR" ]
1]Eugene Semenko 2]Manfred Cuntz [1]National Astronomical Research Institute of Thailand (Public Organization) 260 Moo 4, T. Donkaew, A. Maerim, Chiangmai, 50180 Thailand [2]Department of Physics, University of Texas at Arlington, Arlington, TX 76019, USA Summary of the BINA Workshop [ ============================ BINA-3 has been the third workshop of this series involving scientists from India and Belgium aimed at fostering future joint research in the view of cutting-edge observatories and advances in theory. BINA-3 was held at the Graphic Era Hill University, 22-24 March 2023 at Bhimtal (near Nainital), Uttarakhand, India. A major event was the inauguration of the International Liquid-Mirror Telescope (ILMT), the first liquid mirror telescope devoted exclusively to astronomy. BINA-3 provided impressive highlights encompassing topics of both general astrophysics and solar physics. Research results and future projects have been featured through invited and contributed talks, and poster presentations. § INDO-BELGIAN COLLABORATION IN SPACE AND TIME Without comprehensive international collaborations, it is difficult to imagine sustainable scientific progress in the modern age. In astronomy and astrophysics, such collaborations enabled the operation of observational facilities in the best places on the ground and in space. In big international cooperations like the European Southern Observatory, we can see how the technology exchange and mobility of human resources promote research on all levels, from universities to international institutions. Especially promising collaborations pertain to India, the world's most populous country according to the United Nations <cit.>, with exceptionally rapid economic growth. The Belgo-Indian Network for Astronomy and Astrophysics, or BINA, was initialized in 2014 to foster the existing contacts between Indian and Belgian researchers, mostly from the Aryabhatta Research Institute of Observational Sciences (ARIES) and the Royal Observatory of Brussels (ROB), and to expand this collaboration on the nation-wide scale in both countries. The third BINA workshop, which we have the pleasure of summarizing, marks the end of this project. Two previous workshops were held in 2016 in Nainital (India) and 2018 in Brussels (Belgium). We believe that our summary would not be complete without a brief comparison of the third workshop with the two preceding ones. This will help us to better understand BINA's importance and outcome. The first workshop (BINA-1) took place in Nainital on 15–18 November 2016. According to available statistics <cit.>, 107 astronomers from eight countries participated in the meeting, giving 36 oral talks and presenting 42 posters. Eighty-eight people from twelve partner institutes represented the Indian astronomical community, whereas six Belgian institutions sent ten representatives. The meetings' agenda focused primarily on the instrumentation of the newly commissioned 3.6-m Devastal Optical Telescope (DOT) and on the future of the 4-m International Liquid-Mirror Telescope (ILMT). The scientific talks covered a wide range of subjects, from solar system studies to individual stars, stellar clusters, exoplanets and extragalactic astronomy. The second BINA workshop (BINA-2) was held two years later, in 2018, in Brussels; it was aimed to further expand the existing collaborations. 
Despite the significantly smaller number of participants (i.e., 69 registered researchers from seven countries), the conference's scientific programme was rich in oral talks, totalling 44. Furthermore, there were eight poster presentations <cit.>. The scientific programme of the second workshop largely mirrored the agenda of the first meeting, accentuating the scientific application of the Belgo-Indian telescopes. A highly notable aspect of the second workshop's scientific programme was the presence of the review talks. In terms of participation and the number of oral talks, BINA-3, the final workshop, resembles the previous events, although, fortunately, a significant increase in participation and contributions occurred. Nearly one hundred fifty scientists from eleven countries participated in BINA-3, with the lion's share from India and Belgium. A total of 37 talks (10: invited, 27: contributory) talks were given in the main programme, and 21 contributory talks were given in the solar physics sessions. There have been 81 poster presentations; many of those were led by graduate and undergraduate students. There is significant progress hiding behind the numbers. Since 2016, the Belgo-Indian network has grown to involve new institutes from both partner countries. The members published numerous scientific papers with results obtained on the Belgo-Indian telescopes. Many of these were based on PhD theses pursued within BINA. The content of these proceedings, during 2016–2023, also reveals that many young researchers changed their affiliation, moving to new places and thus expanding the network of research contacts. Progress in instrumentation and scientific collaboration within BINA and with external institutes worldwide gave new impulses to solar and general physics studies. In general, we can count the significantly increased number of telescopes and instruments as the major indicator of progress achieved within the BINA project. The list of available instruments has been highly influential on BINA-3. In the following sections, we briefly summarize its scientific programme. § OBSERVATIONAL TECHNIQUES AND INSTRUMENTATION Telescopes and their instruments were in the spotlight of all BINA workshops. The ILMT has become the central theme of the current meeting. From a number of oral talks and poster presentations, one could get a comprehensive view of such telescopes' operation principles. It was particularly interesting to find out about the data reduction, calibration and access to the processed images obtained with the ILMT. Numerous results of the first observations with the ILMT, shown mostly in the poster presentations, have demonstrated a wide range of possible scientific applications of zenith telescopes with liquid mirrors. Given the short time that has passed since the beginning of the operation and obtained results, we can confirm that the ILMT has proven its scientific concept and significantly strengthened the observational facilities for the current and future Indo-Belgian projects. The Indo-Belgian 3.6-m Devastal Optical Telescope (DOT) remains Asia's largest so far fully steerable optical telescope, which has been in operation since 2016. Yet, accurately by the time of BINA-3, a park of Indian telescopes received strengthening with the commissioning of the 2.5-m telescope, which was built by the Advanced Mechanical and Optical Systems (AMOS) in Belgium for the Physical Research Laboratory (PRL) in Ahmedabad and installed at Mt Abu, Rajasthan, India. 
The development of new instruments and the upgrade of existing facilities was the central theme of the instrumentation section of the current conference. Notably, by 2028, the TIFR-ARIES Multi-Object Optical to Near-infrared Spectrograph (TA-MOONS) will bring new capabilities useful for the studies of stars in star formation regions, open clusters, and extended sources with DOT. Also, for this telescope, adding the polarimetric mode to the Aries-Devasthal Faint Object Spectrograph & Camera (ADFOSC), the existing device for observations of faint objects, will enable both linear and circular polarimetry. This new regime is of critical importance to the study of processes in star-forming regions, interacting stellar systems, supernovae, active galactic nuclei, and beyond. A spectropolarimetric mode might be a case to think of for the creators of the PRL Advanced Radial Velocity Abu Sky Search-2 (PARAS-2), a high-resolution spectrograph at the 2.5-m PRL telescope at Mt Abu. This highly stable device has been developed for precise measurements of radial velocities while providing very high spectral resolution. Due to the geographical location of Mt Abu, PARAS-2 can play a critical role in the continuous monitoring of radial velocities for a wide variety of relatively bright objects; however, with a spectropolarimetric mode being implemented (like HARPSpol at the High Accuracy Radial velocity Planet Searcher (HARPS); ), PARAS-2 can take its niche in observations of hot magnetic stars, either within Indo-Belgian collaboration or in third-party projects like MOBSTER <cit.>. (MOBSTER is an acronym for Magnetic OB[A] Stars with TESS: probing their Evolutionary and Rotational properties; it is a collaboration of more than 60 scientists from over the world.) With the completion of a High-Resolution Spectrograph for the 3.6-m Devastal Optical Telescope (DOT-HRS), the astronomical community of ARIES will possess the ability to independently carry out studies in the fields of asteroseismology and stellar abundances. Again, like in the case of PARAS-2, spectropolarimetry with DOT-HRS is expected to increase the list of potential applications of this device and could further expand the ongoing Nainital-Cape survey of pulsating early-type stars <cit.>. The rising number of telescopes in India poses questions about the most adequate time allocation policies and the optimal distribution of observational proposals between existing astronomical facilities. We found that the analysis of the time allocation for the 3.6-m DOT regarding the last six observational cycles, as presented at the workshop, indicated that it was particularly useful and appropriate for all facilities of ARIES — especially considering that the ILMT has started its operation and the upcoming arrival of the next-generation instruments for the 3.6-m DOT. From our perspective, in addition to the proposed improvements, we would also recommend the organisation of regular (e.g., on a yearly basis) conferences of the telescope's users under the auspices of the Time Allocation Committee (TAC), where the existing and potential applicants would be able to present their proposals or give feedback on the approved or running programmes. Such mini-conferences could be held online, speeding up communication between the TAC and the astronomical community. Naturally, this experience could be applied to other instruments in India and beyond as well. The theme of small telescopes has been raised in several talks. 
The Belgium-made High-Efficiency and high-Resolution Mercator Echelle Spectrograph (HERMES), operated at the 1.25-m Mercator telescope in La Palma (Spain), proved its effectiveness in studies of the chemical composition of single and multiple stars. This spectrograph is used for existing bilateral projects. Complimentary opportunities for high-resolution spectroscopy with the 1-m-class telescopes and the perspectives of affordable implementation of adaptive optics on small and moderate-size telescopes have been considered in BINA-3. The interest in these problems highlights the importance of small, properly equipped telescopes for big programmes complementary to missions like the Transiting Exoplanet Survey Satellite (TESS). § MAIN PROGRAMME SESSION BINA provides access to a wide variety of observational facilities located worldwide <cit.>. The observational component mostly determined the agenda of the BINA-3. Comets, planets, asteroids, and orbital debris were in the third BINA workshop's spotlight, though other topics such as stars, including stellar multiplicity, and compact objects have been discussed. The selection of objects is largely determined by the areas where optical spectroscopy and photometry are most effective with small and medium-sized telescopes. The exception is the study of planetary atmospheres using the method of stellar occultations. Similar techniques require bigger apertures, and being implemented in a 3–6-m class of telescopes can be very beneficial. The 3.6-m DOT is among those few instruments on the planet which have regularly been used for observation of such events <cit.>. Various instruments available within the Indo-Belgian collaboration promote the comprehensive study of processes occurring in star formation regions and during the ongoing evolution of stars. The efficiency of multi-wavelength observations was demonstrated in the example of the study of the star formation H ii region Sh 2-305. However, this is not a unique case where the Indian telescopes exploring the Universe in optical, radio, and X-ray domains were successfully combined. We cannot pass by the numerous results of the study of massive binary stars, stars with discs and circumstellar envelopes, introduced in the BINA-3 workshop. Stellar multiplicity runs the golden thread through many talks given in Bhimtal during the workshop. As companions significantly influence stellar lifes at all stages of evolution, proper accounting and evaluation of the companions' properties are crucial. In this regard, work with the catalogues of binary stars or their extensive study within the ongoing or future Indo-Belgian projects must receive high priority. In such programmes, high-resolution optical spectroscopy of binary and multiple stars must take a special place. Another problem passing through the scientific content of BINA-3 is stellar magnetism. As pointed out in the workshop, magnetic fields are ubiquitous on and beyond the main sequence, with their strengths varying substantially. Magnetic fields are responsible for different kinds of stellar activity and can impact stellar evolution. Besides the theoretical aspects pertaining to the physics of these processes, we would like to attract attention to the lack of observational facilities in the Asian region suitable to direct observations of stellar magnetic fields and processes. The worldwide selection of medium-sized and big telescopes equipped with sensitive spectropolarimetric devices is very limited, and Indian telescopes could fill this gap. 
Through the study of chemical composition, one can explore the evolution of individual stars, groups of stars, and the Galaxy at large. The last is the central task of galactic archaeology. Pursuing this task depends on the availability of spectra and proper modelling. Despite the various observational results presented in BINA-3, we find a lack of interactions between the BINA members and groups working, e.g., in the U.S., Sweden or Germany, on the theoretical aspects of abundance analysis. We believe tighter cooperation with the institutes outside of BINA would take the research of stellar abundances to a qualitatively new level. In contrast to the previous workshops, asteroseismology, a powerful tool for probing stellar interiors and validating stellar parameters, appears underrepresented in BINA-3. (On a lighter note, a superb cultural show successfully compensated for the lack of “music of the stars” in the conference programme.) This fact looks surprising to us as the Belgian groups in Brussels and Leuven are famous for their proficiency in this field. Apart from galactic archaeology, which deals with the evolution of chemical composition, probing the Galactic structure is another important direction of work within BINA. Even now, after decades of extensive exploration of the Galaxy using different methods, our knowledge of its structure is incomplete. Optical polarimetry helps to reveal the detailed fine structure of dust clouds in the star formation regions or in the areas of young open clusters. Indian astronomers are experienced in this kind of work, and their results, both published <cit.> and presented during BINA-3, deserve special attention. We look forward to further expanding this direction of galactic studies on a new technical level. § SOLAR PHYSICS SESSION The mainframe of the solar physics programme has been the study of small-scale structure, waves, flares as well as coronal mass ejections (CMEs). Science opportunities are often directly associated with instruments such as the Extreme Ultraviolet Imager (EUI) onboard of the Solar Orbiter. The EUI provides a crucial link between the solar surface, on the one hand, and the corona and solar wind, on the other hand, that ultimately shapes the structure and dynamics of the interplanetary medium. Several contributions focused on wave propagation, including their relevance to small-scale structures of the solar chromosphere, transition region and corona, such as flares, spicules and loop systems. This kind of research considered both observations and theoretical work, such as ab-initio simulations for standing waves and slow magneto-acoustic waves. Studies of the outer solar atmosphere also utilized the Interface Region Imaging Spectrograph (IRIS) and the Atmospheric Imaging Assembly (AIA), both onboard of the Solar Dynamics Observatory (SDO). In alignment with previous studies given in the literature, the potential of spectral lines, including line asymmetries, for the identification of solar atmospheric heating processes has been pointed out and carefully examined. Clearly, this approach is relevant to both solar physics and studies of solar-type stars of different ages and activity levels; it allows to embed solar studies into a broader context. Regarding CMEs, a major driver of space weather and geomagnetic stars, attention has been paid the EUropean Heliosphere FORcasting Information Asset (EUHFORIA), which is relevant for MHD modelling and the study of the evolution of CMEs in the heliosphere. 
In this regard, a pivotal aspect is the study of thermodynamic and magnetic properties of CMEs as well as CME forward-modeling, aimed at predicting CME breakouts as well as CME topologies and magnitudes. Relevant spectral line features include Fe XIV and Fe XI data, obtained with existing instruments or available in the archive. Another notable item has been the presentation of long-term variations of solar differential rotation and the solar cycle; the latter still poses a large set of unanswered scientific questions. § RETROSPECTIVE AND RECOMMENDATIONS A key element of BINA-3 is the future availability of the ILMT. The science goals of ILMT include cosmological research such as the statistical determination of key cosmological parameters through surveying quasars and supernovae as well as photometric variability studies of stars, transiting extra-solar planets and various types of transient events. Another aspect consists in the search for faint extended objects like low-surface brightness and star-forming galaxies. The pronounced use of ILMT, typically in conjunction with other available facilities, requires the ongoing pursuit of international collaborations; this activity is pivotal for future success. Another key aspect is the significance of theoretical studies. Regarding solar physics research, previous work encompasses the study of MHD waves and small-scale transients, with a focus on the solar chromosphere, transition region and corona. Some of this work made extensive use of the EUI onboard of the Solar Orbiter. The study of outer solar atmosphere fine structure utilized the IRIS and the AIA, both onboard of the SDO. Time-dependent coronal studies, especially CMEs, are of great significance for the Earth, such as the onset of geomagnetic storms and the safety of equipment, including those associated with satellite communication[See <https://www.swpc.noaa.gov> for further information.]. Further advances in this field are expected to benefit from additional observational studies as well as advances in theory, particularly the interface of those two. Regarding theoretical work, ongoing and future efforts should continue to focus on 3-D magneto-hydrodynamics studies in conjunction with the adequate inclusion of radiative transfer and statistical phenomena, as well as aspects of chaos theory. There are other items with the potential for future successful developments. Asteroseismology has been underrepresented in BINA-3. This is a powerful tool in the context of stellar evolution studies and the validation and improvement of stellar parameters; the latter is also relevant in the context of extrasolar planet investigations. Further important aspects concern the study of stellar magnetism and activity. Besides elementary stellar studies, these topics are also of critical importance regarding circumstellar habitability and astrobiology at large <cit.>. Moreover, studies of AGNs and GRBs are cardinal topics beyond solar and stellar physics; they have gained considerable steam within the scientific community. Processes in the extragalactic objects are characterized by high energy and rich spectra. Among the variety of works presented during BINA-3, studies of active galactic nuclei (AGN) and different transients like gamma-ray bursts (GRB) continue to deserve special attention. The members of BINA have an exhaustive set of instruments available for multi-wavelength observations of these extragalactic sources, yet there is still room for improvement. 
Considerable advances are attainable both in instrumentation and in techniques of analysis. In the study of intra-night variability of blazars presented in the workshop's programme <cit.>, we noted the lack of international contributors, although these types of objects are in the spotlight of groups working, e.g., at the 6-m telescope of the Special Astrophysical Observatory, located in the North Caucasus region of Russia <cit.>. Given the absence of polarimetric devices for observation with the 3.6-m DOT at the moment, such cooperation could open new opportunities. Connections established at the personal level between the member institutions of BINA and observatories operating big telescopes would facilitate future studies in extragalactic astronomy, where aperture matters. Similarly, we would recommend establishing collaborations with the institutes operating robotic telescopes for the observation of transients. However, a more radical future step might be an expansion of Indian observational facilities towards other continents, especially South America. A small network of medium-sized fully-robotic telescopes could provide easy access to observations and be used for educational purposes. It would also reduce the dependence on astronomical monitoring in South Asia, in view of possible drawbacks due to the regional climate. Last but not least, in the field of data analysis, the leitmotif now is the use of machine learning (ML) and artificial intelligence (AI). This theme was raised several times during the workshop, but we believe that it could find broader applications in projects related to the classification of light curves and spectra. At the same time, we would recommend that researchers using ML and AI in their work do not ignore advances in theory, as without proper constraints and background information, these methods might lead to impractical results, especially if based on small samples.

§.§.§ Acknowledgments
The authors are grateful to the scientific and local organizing committees of BINA-3 for inviting them to summarize the workshop and for further assistance in preparing these proceedings.

§.§.§ ORCID identifiers of the authors
0000-0002-1912-1342 (Eugene Semenko)
0000-0002-8883-2930 (Manfred Cuntz)

§.§.§ Author contributions
Both authors equally contributed to this publication.

§.§.§ Conflicts of interest
The authors declare no conflict of interest.
http://arxiv.org/abs/2307.06097v1
20230712113834
Learning Stochastic Dynamical Systems as an Implicit Regularization with Graph Neural Networks
[ "Jin Guo", "Ting Gao", "Yufu Lan", "Peng Zhang", "Sikun Yang", "Jinqiao Duan" ]
cs.LG
[ "cs.LG", "math.DS" ]
Learning Stochastic Dynamical Systems as an Implicit Regularization with Graph Neural Networks
Jin Guo, Ting Gao, Yufu Lan, Peng Zhang, Sikun Yang, Jinqiao Duan
August 12, 2023
=====================================================
Stochastic Gumbel graph networks are proposed to learn high-dimensional time series, where the observed dimensions are often spatially correlated. To that end, the observed randomness and spatial correlations are captured by learning the drift and diffusion terms of the stochastic differential equation with a Gumbel matrix embedding, respectively. In particular, this novel framework enables us to investigate the implicit regularization effect of the noise terms in S-GGNs. We provide a theoretical guarantee for the proposed S-GGNs by deriving the difference between the two corresponding loss functions in a small neighborhood of the weights. Then, we employ the Kuramoto model to generate data for comparing the spectral density from the Hessian matrix of the two loss functions. Experimental results on real-world data demonstrate that S-GGNs exhibit superior convergence, robustness, and generalization, compared with the state of the art.

§ INTRODUCTION
Multivariate time series enable us to thoroughly investigate the statistical patterns among the observed dimensions, and to make predictions. Over the last decades, a large body of work has been dedicated to modeling time series data arising in applications including finance and biology. Real-world time series data often exhibit a certain amount of correlation or causal relationship between the observed dimensions. For instance, the prices of many financial derivatives could be simultaneously influenced by common market signals and interventions. The dynamics of electroencephalogram (EEG) brain signals implicitly reflect a latent graph structure <cit.>. Hence, it is crucial to exploit the graph structure in modelling multivariate time series within a dynamical system. Graph neural networks (GNN) <cit.> aim to extract both local and global features, by leveraging the available graph structure, using neural networks. Canonical GNNs, including graph convolutional networks (GCN) <cit.> and graph attention networks (GAT) <cit.>, have demonstrated their strong capacity in capturing graph-structured data. Recently, there has been growing interest in predicting time series with graph neural networks to capture the underlying graph structure. For instance, temporal graph convolutional neural networks (T-GCN) <cit.> can integrate temporal information with the available graph structure. Nonetheless, the graph information, which may characterize the correlations or causal relations between observed dimensions, is either not immediately available or contains noise. Stochastic dynamical systems, as a mathematical tool <cit.> to describe physical systems evolving over time, have become increasingly popular for modeling various real-world complex phenomena, and have been receiving increasing attention specifically in the machine learning domain <cit.>, because of their mathematical rigor and strong modelling flexibility. Besides, they also serve as a bridge connecting real data with deep learning algorithms <cit.>. On the one hand, many kinds of neural networks have corresponding continuous versions as differential equations, which helps in building convergence-guaranteed neural networks such as <cit.>.
On the other hand, mathematical analysis from a dynamical systems point of view could also provide insights on the loss landscape and critical points <cit.>. Furthermore, investigations of the Edge of Stability (EOS) phenomenon are also a promising research direction <cit.>. Moreover, the Gumbel graph neural network (GGN) <cit.> is advanced to recover the graph structure underlying time series, within a dynamical system. By effectively leveraging the graph structure, the GGN achieves better accuracy in predicting high-dimensional time series. Nevertheless, the GGN suffers from limited generalization capability and excessive smoothness, stemming from the inherent properties of graph neural networks themselves. To address these concerns, this paper proposes the stochastic Gumbel Graph Network (S-GGN) model by introducing a diffusion term. The main contributions of this paper can be summarized as follows:
* A stochastic Gumbel graph network (S-GGN) model is proposed to improve the model robustness and generalization ability in capturing high-dimensional time series, by introducing the noise term. In particular, we thoroughly study the convergence of the S-GGN model with theoretical analysis (Sec.<ref>).
* A grouped convolution S-GGN structure is advanced to capture noisy graph-structured time series. Using convolution operations, the model can effectively reconstruct the dynamics by leveraging the external node features to remove the noise effects.
* Experiments on real-world problems <cit.> demonstrate the superior generalization capability of the S-GGN model without compromising its accuracy, compared with GCNs (Sec.<ref>).

§ THE S-GGN FRAMEWORKS
§.§ S-GGN model
The GGN, consisting of a network generator and a dynamics learner, recovers the underlying dynamical systems from observations such as high-dimensional time series data. The network generator within the GGN utilizes the reparameterization technique known as the Gumbel-softmax trick <cit.>, whereby the graph is sampled based on probabilities. The application of the Gumbel-softmax trick allows the GGN to directly apply the backpropagation algorithm to calculate the gradient and optimize the network. Based on the connection between the GGN model and the discrete representation of the dynamical system, we extend its applicability to a formulation that aligns with the discrete form of a stochastic dynamical system, called the Stochastic Gumbel Graph Neural Network (S-GGN). Thus, the dynamics learner of the S-GGN can be represented as X_predict^t+1= X^t + f(X^t,A)Δ t + σ(X^t,A) ξ_t√(Δ t) where X^t denotes the state vector of all N nodes at moment t and A is the symmetric adjacency matrix constructed by the network generator. Here ξ_t∼𝒩(0,I) is an independent standard normal random vector. The graph neural network module within the S-GGN can be depicted as a composition of the following mappings: H_e_1^t=f_v→ e(X^t⊗ (X^t)^T), H_e_2^t=f_e(H_e_1^t), H_v_1^t+1=f_e→ v(A*H_e_2^t), H_v_2^t+1=f_v(H_v_1^t+1), where X^t∈𝐑^N× d_x denotes the features of the N nodes, each with feature dimension d_x. Let A denote the adjacency matrix that describes the relationships between the nodes. Each of f_v→ e, f_e, f_e→ v, and f_v consists of a linear layer and an activation function. The operation ⊗ signifies pairwise concatenation, and the symbol * denotes multiplication by elements. The composition of these four mappings in (<ref>) corresponds to the function f; σ in equation (<ref>) has the same network structure.
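To make the update rule concrete, below is a minimal PyTorch sketch of the S-GGN dynamics learner in equation (<ref>). The class and attribute names, the hidden width, and the collapsing of the four mappings into one small MLP applied to pairwise-concatenated node features are illustrative assumptions rather than the authors' exact implementation; only the update X^t+1 = X^t + f(X^t,A)Δt + σ(X^t,A)ξ_t√(Δt) is taken from the text.

import torch
import torch.nn as nn

class SGGNDynamicsLearner(nn.Module):
    """Sketch of the S-GGN one-step update (Eq. 1):
    X_{t+1} = X_t + f(X_t, A) * dt + sigma(X_t, A) * xi_t * sqrt(dt)."""

    def __init__(self, d_x, d_hidden):
        super().__init__()
        # f and sigma share the same architecture but keep separate weights
        self.f_net = nn.Sequential(nn.Linear(2 * d_x, d_hidden), nn.ReLU(),
                                   nn.Linear(d_hidden, d_x))
        self.sigma_net = nn.Sequential(nn.Linear(2 * d_x, d_hidden), nn.ReLU(),
                                       nn.Linear(d_hidden, d_x))

    def _aggregate(self, net, X, A):
        # pairwise concatenation of node features (the ⊗ operation) ...
        n = X.shape[0]
        pairs = torch.cat([X.unsqueeze(1).expand(n, n, -1),
                           X.unsqueeze(0).expand(n, n, -1)], dim=-1)
        msgs = net(pairs)                         # edge messages, shape (n, n, d_x)
        # ... masked by the sampled adjacency matrix A and summed over neighbours
        return (A.unsqueeze(-1) * msgs).sum(dim=1)

    def forward(self, X, A, dt):
        drift = self._aggregate(self.f_net, X, A)
        diffusion = self._aggregate(self.sigma_net, X, A)
        xi = torch.randn_like(X)                  # xi_t ~ N(0, I)
        return X + drift * dt + diffusion * xi * dt ** 0.5

A single prediction step is then X_next = learner(X, A, dt), with A sampled from the Gumbel-softmax network generator.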
Next, we train the two neural networks for the functions f and σ in (<ref>), denoted as f_NN and σ_NN respectively. Their network structures are the same but with different weights.

§.§ Spectral Analysis
We denote the parameters of the GGN network as ω_t at the t-th iteration. Considering the division 0:=t_0<t_1<⋯<t_M:=T of [0,T], we define δ_m:=t_m+1-t_m. The discretization of the parameters' evolution in the GGN network can be expressed as ω_t+1=ω_t+f_NN(ω_t,X^t)δ_t, where f_NN:ℝ^d_ω×ℝ^d_x→ℝ^d_ω. After introducing noise ε to our networks, we consider a rescaling of the noise σ_NN↦εσ_NN; then the following discretized stochastic differential equation (SDE) holds ω_t+1^ε=ω_t^ε+f_NN(ω_t^ε,X^t)δ_t+εσ_NN(ω_t^ε,X^t)ξ_t√(δ_t) , where σ_NN:ℝ^d_ω×ℝ^d_x→ℝ^d_ω× r, and ξ_t∼𝒩(0,I) is an independent r-dimensional standard normal random vector. This is the evolution of the parameters in our S-GGN network. Besides, we have ω_0^ε=ω_0. Then we give some conditions on the drift and diffusion terms. To enhance readability, we denote the network functions f_NN and σ_NN as f and σ respectively. Assumption. We assume that the drift term f and diffusion term σ satisfy (i) For all t∈ [0,T] and X∈ℝ^d_x, the maps ω↦ f(ω,X) and ω↦σ(ω,X) have Lipschitz continuous partial derivatives in each coordinate up to order three (inclusive). (ii) For any ω∈ℝ^d_ω, t↦ f(ω,X) and t↦σ(ω,X) are bounded and Borel measurable on [0, T]. Under the above assumption, we define the difference between the losses of the S-GGN and GGN networks as 𝒟(ω):= 𝔼[l_S-GGN(ω_M^ε)-l_GGN(ω_M)], where l denotes the loss function. Proposition (Comparison of the noise-induced loss and the deterministic loss). Under Assumption <ref>, the following holds 𝒟(ω)=ε^2/2[R̂(ω)-Ŝ(ω)]+𝒪(ε^3), as ε→ 0, where R̂ and Ŝ are R̂(ω)=(∇ l(ω_M))^T∑_k=1^Mδ_k-1Φ̂_M-1,k∑_m=1^Mδ_m-1v_m, Ŝ(ω)=∑_m=1^Mδ_m-1(σ_m-1^TΦ̂_M-1,m^TH_ω_MlΦ̂_M-1,mσ_m-1), with Φ̂_m,k:=Ĵ_mĴ_m-1⋯Ĵ_k, the state-to-state Jacobians Ĵ_m=I+δ_m∂ f/∂ω(ω_m,X_m), and v_m a vector whose p-th component (p=1,⋯,d_ω) is [v_m]^p=(σ^T_m-1Φ̂_M-2,m^TH_ω[f_M]^pΦ̂_M-2,mσ_m-1). Moreover, we have |R̂(ω)|≤ C_R Δ^2, |Ŝ(ω)|≤ C_S Δ for C_R,C_S>0 independent of Δ, where Δ:=max_m∈{0,1,⋯,M-1}δ_m. Proof. We refer to a proof similar to <cit.>. First, we apply a Taylor expansion to ω_t, the drift and the diffusion coefficients in a small neighbourhood of ω_0. With the Itô formula and by comparing the corresponding terms in ε on the two sides of equation (<ref>), we obtain the result in <ref>. Next, in conjunction with Lemmas 1, 2 and 3 in <cit.>, it can be demonstrated that (<ref>) holds. ▪ Remark. Proposition <ref> indicates that introducing noise to the state of a deterministic graph can be considered, on average, as an approximation of a regularized objective functional.

§ EXPERIMENTS
We conduct two experiments to verify the performance of our S-GGN.

§.§ Kuramoto Model
The Kuramoto model is a nonlinear model describing the interaction and synchronization of oscillator groups: dθ_i/dt = ω_i + K∑_j≠ iA_ijsin(θ_j - θ_i), i = 1,2,...,N, where θ_i denotes the i-th oscillator phase, ω_i denotes its natural frequency, N denotes the number of oscillators, and K denotes the coupling strength, which measures the strength of the interaction between the oscillators. Here A_ij∈{0,1} are the elements of the N × N adjacency matrix. The model takes into account the phase differences of the oscillators and the interactions between them to explain the synchronization phenomenon.
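As a concrete illustration of the data generation for this model, the following NumPy sketch integrates the Kuramoto equations with the classical fourth-order Runge-Kutta scheme (the integrator named in the data-preparation step below); the function signature, the commented feature construction, and all numerical values are illustrative assumptions rather than the exact settings used in the experiments.

import numpy as np

def kuramoto_rk4(theta0, omega, A, K, dt, n_steps):
    """Integrate d(theta_i)/dt = omega_i + K * sum_j A_ij sin(theta_j - theta_i)
    with the classical fourth-order Runge-Kutta scheme."""
    def deriv(theta):
        diff = theta[None, :] - theta[:, None]   # diff[i, j] = theta_j - theta_i
        return omega + K * (A * np.sin(diff)).sum(axis=1)

    traj = np.empty((n_steps + 1, theta0.size))
    traj[0] = theta0
    theta = theta0.copy()
    for t in range(n_steps):
        k1 = deriv(theta)
        k2 = deriv(theta + 0.5 * dt * k1)
        k3 = deriv(theta + 0.5 * dt * k2)
        k4 = deriv(theta + dt * k3)
        theta = theta + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[t + 1] = theta
    return traj

# Node features used in the experiments: sin(theta) and the instantaneous frequency,
# e.g. features = np.stack([np.sin(traj), np.gradient(traj, dt, axis=0)], axis=-1)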
Here, we need to initialize the corresponding initial phase and natural frequency for each oscillator, and then compute the oscillator phases iteratively according to the above formula.
* Data preparation: the numerical solution of the Kuramoto model is obtained by the fourth-order Runge-Kutta method.
* Data pre-processing: the sine of the phase and the frequency at the corresponding time are taken as the characteristics of the node at that time. The window length is set to 20.
* Experiment settings: for the optimizers of the network generator and the dynamics learner, the numbers of iteration steps are 3 and 7, respectively.
For both models, the Hessian matrix of the empirical loss with respect to the weights is obtained every 10 epochs, and the corresponding eigenvalues over epochs are shown in Figure <ref>. We observe that the S-GGN has a smaller largest eigenvalue, which indicates that the S-GGN can find flatter optimal weights, explaining its better performance from a sharpness-awareness point of view. Figure <ref> shows the distribution of eigenvalues of the two models after the first, 50th and 100th epochs. We can see that the eigenvalue concentration of the S-GGN is much stronger than that of the GGN, suggesting different convergence behaviour of the two models.

§.§ Wireless communication data
In this experiment, the data are obtained from channel measurements in real-world scenarios. We consider a typical line-of-sight (LOS) scenario with a 24-millisecond spacing between points, and an 8x1 uniform linear array (ULA) at the transmitter end.
* Data standardization: the wireless communication signal data from each base station are decomposed into real and imaginary parts for normalisation.
* Data preparation: the time window is chosen as 72 points. Within each time window, features are extracted using group convolution applied to data with a window length of 36.
* Experimental settings: the Adam optimizer is selected to optimize both the network generator and the dynamics learner, with 3 and 12 iteration steps respectively.
Due to the highly noisy, non-linear and non-smooth nature of wireless communication data, direct modelling of the raw signal is challenging. Convolutional neural networks offer significant advantages in feature extraction, automatically learning local features and retaining spatial structure information. We take a rolling prediction approach wherein a dataset of length W is employed to forecast sequences ranging from W+1 to W+p. Notably, the actual data ranging from W+1 to W+p is not incorporated into the model during this process. The construction is illustrated in Figure <ref>. The S-GGN model is employed to predict the wireless communication data. Figure <ref> presents the mean square error and mean absolute error of both the S-GGN and GGN models on the test set. Notably, the S-GGN model demonstrates a smaller error and exhibits superior generalization performance compared to the GGN model. Figure <ref> exhibits the comparative prediction outcomes of the GGN and the S-GGN models, using a sample from the test set. Both the real and imaginary components of the signals originating from the eight base-station nodes are plotted. The solid blue line represents the actual data, while the solid yellow line corresponds to the prediction results of the S-GGN model, and the solid green line represents the prediction results of the GGN model. We can see that the S-GGN outperforms the GGN, and its predictions are closer to the true data.
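For completeness, here is a minimal sketch of the rolling prediction scheme described above: a window of W observed points predicts steps W+1 to W+p, and each prediction is fed back as input in place of the unavailable ground truth. The one-step-ahead model interface and the array shapes are assumptions for illustration; only the window length of 72 points is taken from the text.

import numpy as np

def rolling_predict(model, history, horizon, window=72):
    """history: array of shape (T, n_features); model maps a (1, window, n_features)
    input to a single (n_features,) one-step-ahead prediction."""
    buf = list(history[-window:])
    preds = []
    for _ in range(horizon):
        x = np.asarray(buf[-window:])[None, ...]
        y = model(x)              # one-step-ahead prediction
        preds.append(y)
        buf.append(y)             # feed the prediction back in place of true data
    return np.asarray(preds)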
§ CONCLUSION
Considering the complexity of the noise and the underlying dynamics of the data, we bring in stochastic dynamical systems as a tool to address this problem. First, by comparing the losses of the S-GGN and the GGN, we can see that the multiplicative noise term can be treated as a regularization term for perturbations in a small neighborhood of the neural network's weights. As a result, the S-GGN model can achieve better generalization capability and robustness on noisy data. Second, we explore the spectral density, over iteration steps, of the eigenvalues of the Hessian matrix of the empirical loss w.r.t. the weights. We conduct experiments on data from the Kuramoto model to verify the effectiveness of the S-GGN. Finally, in real-world applications such as wireless communication data, we introduce group convolution techniques in our data preprocessing, which helps us obtain better long-term prediction results. There are still open problems that we would like to investigate further, such as loss analysis from a sharpness-awareness point of view, and more applications to complex spatio-temporal data such as EEG signals in the brain, financial data, molecular dynamics, and climate forecasting.

§ ACKNOWLEDGMENTS
This work was supported by the National Key Research and Development Program of China (No. 2021ZD0201300), the National Natural Science Foundation of China (No. 12141107), and the Fundamental Research Funds for the Central Universities (5003011053).

§ DATA AVAILABILITY
The data that support the findings of this study are available in GitHub at https://github.com/xiaolangege/sggn.
http://arxiv.org/abs/2307.04339v1
20230710043044
Miriam: Exploiting Elastic Kernels for Real-time Multi-DNN Inference on Edge GPU
[ "Zhihe Zhao", "Neiwen Ling", "Nan Guan", "Guoliang Xing" ]
cs.DC
[ "cs.DC", "cs.AI" ]
Many applications, such as autonomous driving and augmented reality, require the concurrent running of multiple deep neural networks (DNN) that pose different levels of real-time performance requirements. However, coordinating multiple DNN tasks with varying levels of criticality on edge GPUs remains an area of limited study. Unlike server-level GPUs, edge GPUs are resource-limited and lack hardware-level resource management mechanisms for avoiding resource contention. Therefore, we propose Miriam, a contention-aware task coordination framework for multi-DNN inference on edge GPU. Miriam consolidates two main components, an elastic-kernel generator and a runtime dynamic kernel coordinator, to support mixed-critical DNN inference. To evaluate Miriam, we build a new DNN inference benchmark based on CUDA with diverse representative DNN workloads. Experiments on two edge GPU platforms show that Miriam can increase system throughput by 92% while only incurring less than 10% latency overhead for critical tasks, compared to state-of-the-art baselines.

Miriam: Exploiting Elastic Kernels for Real-time Multi-DNN Inference on Edge GPU
Zhihe Zhao, Neiwen Ling, Nan Guan, Guoliang Xing
August 12, 2023
================================================================================

§ INTRODUCTION
Deep learning (DL) has become a catalyst for a wide range of applications running on the edge, such as augmented reality and autonomous driving. These applications typically require the concurrent execution of multiple DNN tasks that have varying levels of criticality. For example, in mobile augmented reality, DNN inference tasks are often used for gesture recognition and user behaviour analysis, which are key components in providing a seamless user experience. This presents a major challenge, as mobile/edge devices are constrained by limited computational resources for running multi-DNN inference tasks in real time. To support multiple DNN-based applications that have different real-time requirements <cit.>, a common practice is to share an edge Graphics Processing Unit (GPU). However, this practice poses significant challenges. On the one hand, when executing multiple DNNs simultaneously, their contention over the limited onboard resources of the same edge GPU can result in a performance bottleneck <cit.>. On the other hand, dedicating the entire GPU to latency-critical tasks to guarantee their real-time requirements results in low GPU utilization <cit.>. Meanwhile, most of the approaches that attempt to support concurrent DNN inference tasks on GPU <cit.> require runtime support from vendors, such as NVIDIA Multi-Process Service (MPS) and Multi-Instance GPU (MIG) <cit.>, which is unavailable on edge GPUs due to architectural differences. Furthermore, multi-DNN inference presents two potentially conflicting objectives. Firstly, it is imperative that critical DNN tasks are given priority over other tasks in order to minimize end-to-end latency. This necessitates that the critical tasks are treated as first-class citizens on the GPU, with no interference from other tasks. Secondly, in order to achieve high overall throughput, all co-running DNN tasks should be concurrently executed in a best-effort manner. These two conflicting objectives pose a major challenge for efficiently coordinating the inference of multiple DNN tasks on an edge GPU. In this paper, we propose a new system named Miriam, which aims to support real-time multi-DNN inference on edge GPUs by addressing the latency and throughput problems of co-running multiple DNN inference tasks.
The key idea of Miriam is based on the elastic kernel [Kernel here refers to a small program that is executed on the GPU to perform the specific DNN computations.], which can achieve more fine-grained resource mapping on the GPU. Specifically, traditional kernels are elasticized by breaking them down into smaller, more flexible units that can be dynamically scheduled and remapped to different GPU resources based on their priority and criticality. This elasticization approach enables the padding of other GPU kernels, which maximizes GPU utilization without causing significant resource contention. As a result, critical tasks can be prioritized without compromising overall system throughput, thus improving the real-time performance of the system. Our design is based on the key observation that the latency degradation of co-running DNN kernels is mainly caused by two dominant factors, namely intra-streaming-multiprocessor (SM) resource contention and inter-SM resource contention. We leverage elastic kernels to address these two kinds of resource contention. Specifically, Miriam integrates two main components. The first component, the elastic-kernel generator, consists of an elastic grid/block generator that generates resource-controllable GPU kernels to resolve resource contention among co-running DNN tasks, and a source-to-source kernel transformer that converts original GPU kernels into elastic kernels while preserving computation consistency. We also design a dynamic runtime coordinator to schedule the elastic kernels and proactively control the execution of the co-running kernels at runtime. To evaluate the effectiveness of Miriam, we implement it as a hybrid framework based on CUDA, C++, and Python. We use a set of multi-DNN inference benchmarks for edge GPUs that include tasks with different priorities to evaluate the system's effectiveness. Our results demonstrate that, compared to existing methods, Miriam can serve significantly more requests, with up to 92% throughput improvement, while maintaining the inference speed for critical tasks with only a 10% increase in latency. These results highlight Miriam's superior performance in achieving efficient coordination of real-time multi-DNN inference tasks on edge GPUs.

§ RELATED WORK
To enable on-device multi-DNN inference on edge devices, prior methods such as joint DNN model compression sacrifice a modest level of accuracy for each model to reduce the computational costs of mixed DNN workloads <cit.>. In contrast, Miriam does not compromise on accuracy and can be seen as an approach orthogonal to the above systems. Other methods address this problem through new compilation techniques. For example, Veltair <cit.> proposes to generate multiple versions of compiled DNN models with different intensities of resource contention for scheduling at runtime to accelerate multi-DNN inference. However, these methods also lead to issues such as high overhead in storage and offline profiling, making them hard to scale to more use cases. Systems like DeepEye <cit.>, Abacus <cit.>, and Dart <cit.> have utilized the interleaving of operators with different "contention channels" (memory-bound or compute-bound). Although these methods have proven to be effective, they require time-consuming offline profiling and are cumbersome to generalize to new DNN tasks. REEF <cit.> addresses the same problem of mixed-critical multi-DNN inference coordination and achieves kernel-level preemption for critical tasks.
However, the approach requires modification of the GPU driver library, which is not practical on many popular closed-source devices. Heimdall <cit.> and Band <cit.> also target resource contention in multi-DNN inference, but they have application settings different from ours. Warped-Slicer <cit.> employs performance versus computing-unit-occupancy curves for selecting an optimized simultaneous kernel pattern, but the method fails to address resource contention between kernels. Works such as HSM <cit.> and <cit.> model the latency degradation of concurrent GPU kernel executions based on hardware information, but the predictors built in these works are difficult to adapt to real-world multi-DNN inference scenarios that are characterized by nondeterministic kernel overlapping <cit.>. Other works such as Smcentric <cit.> and Effisha <cit.> tackle the GPU multitasking problem from a resource management perspective in a space-multiplexing manner <cit.>, which is orthogonal to Miriam's approach.

§ BACKGROUND
In this paper, we present the design and implementation of Miriam based on the CUDA programming model for NVIDIA GPUs <cit.>. We first introduce some CUDA terminology. Fig. <ref> (left) shows the layout of an NVIDIA Jetson TX2 GPU, which consists of two SMs, each capable of running a number of GPU threads up to a maximum count, and both SMs share the global memory. CUDA Programming Model. A CUDA GPU has a number of Streaming Multiprocessors (SMs). Each SM contains multiple cores, which are the processing units that execute the instructions of the threads. All cores within the same SM share the same set of registers and can communicate with each other through shared memory. Code executed by the GPU is known as a GPU kernel <cit.>. Threads are the smallest unit of work that can be executed in parallel on a GPU, and they are organized into blocks. Each block is a group of threads that can execute concurrently on a single SM. A grid is a collection of blocks that are organized in a three-dimensional array. The grid defines the overall structure of the data being processed and how it is partitioned into blocks. GPU streams are a way of organizing and executing asynchronous tasks on the GPU. Each stream is a sequence of kernels (e.g. Conv, MemCopy) that can be executed independently of other streams. Kernels in the same stream are executed in a FIFO manner <cit.>. Kernel Execution on GPU. When launching a kernel in CUDA, we specify the dimensions of the grid and blocks. Each block is dispatched to and executed on one SM. However, whether a block can be dispatched to an SM that already has a block executing on it depends on whether there are enough remaining resources, such as thread slots and shared memory, to accommodate the new block. If there is no available SM to accommodate a block, it has to wait in a queue in first-in, first-out (FIFO) order. When a kernel executes on an SM, it competes for on-SM resources, such as thread slots and shared memory, with other kernels already dispatched to and executing on the same SM. This competition greatly affects the execution time of a kernel on the SM. Thus, the varying time a block waits in the queue, in addition to the varying time it takes to execute its workload on the SM, contributes to the overall varying latency experienced by the kernel.

§ MOTIVATION AND CHALLENGES
Miriam aims to support co-running DNN inference tasks on edge GPUs for real-time applications.
Tasks that have strict real-time requirements are referred to as critical tasks. For example, obstacle detection in autonomous driving must be finished by a certain deadline, allowing sufficient time for the vehicle to maneuver around obstructions. Tasks that do not have strict real-time deadlines are referred to as normal tasks. For example, monitoring human drivers' emotions and fatigue can be executed in a best-effort manner to improve the driving experience. Miriam aims to meet the real-time requirements of latency-critical tasks while maximizing the overall throughput of co-running normal tasks in a dynamic manner. One common solution is to sequentially execute critical tasks and normal tasks, which can yield the lowest latency for critical task execution, but at the cost of significantly reduced overall throughput. An alternative solution is to directly execute multiple DNN tasks on the same edge GPU without proper contention management. However, this can cause increased latency for critical tasks. Here we investigate the performance degradation caused by the simultaneous execution of multiple DNN tasks. When running alone on an edge GPU, the GPU kernel execution time for DNN inference tends to remain consistent. However, the simultaneous execution of multiple DNN tasks on an edge GPU can significantly impact performance. To study this effect, we conducted an experiment using CUDA multi-stream on an NVIDIA RTX 2060 GPU, where we launched a DNN task (i.e., ResNet50) with different co-runners in a closed-loop manner. In Fig. <ref> (left), we present the cumulative distribution function (CDF) of the ResNet50 latency with various co-running tasks. The results show that the latency of ResNet50 ranges from 4.4 ms to roughly 16.2 ms when co-running with VGG16, while the solo-running latency is 4.2 ms, yielding a significant variation. Meanwhile, the latency distribution pattern for different co-running model settings also varies a lot. The primary factor that results in these large variations in latency is the complex resource contention among the co-running tasks, which can be classified into intra-SM contention and inter-SM contention, as shown in Fig. <ref> (right). The latency experienced by a GPU kernel depends not only on the time it takes for the workload to execute on the SM (affected by intra-SM contention) but also on the time it takes for the workload to wait to be dispatched to the SM (affected by inter-SM contention). Intra-SM contention and inter-SM contention are two types of resource contention among co-running tasks on a GPU. Intra-SM contention refers to the contention within an SM, which can occur when multiple thread blocks from different kernels are dispatched to the same SM and compete for shared resources, such as registers, shared memory, and execution units. Inter-SM contention refers to the contention among SMs, which can occur when multiple thread blocks from different kernels are dispatched to different SMs and compete for shared resources, such as global memory and memory controllers. These two types of contention can cause significant performance degradation and latency variation for co-running tasks on a GPU. Thus, given two incoming DNN task queues for normal tasks τ^normal and critical tasks τ^critical, to maximize the overall task throughput while guaranteeing the real-time performance of critical tasks, it is crucial to carefully manage the contention that arises from multiple overlapping kernels during co-execution.
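The co-running latency measurement described above can be approximated with a short script. The following PyTorch sketch issues ResNet50 (the measured task) and VGG16 (the co-runner) on two CUDA streams and records the per-inference latency of the measured task for the CDF; the loop count, input shape, and use of torchvision models are illustrative assumptions, not the exact measurement harness behind Fig. <ref>.

import time
import torch
import torchvision.models as models

dev = torch.device("cuda")
resnet = models.resnet50().eval().to(dev)   # measured (critical-like) task
vgg = models.vgg16().eval().to(dev)         # closed-loop co-runner
x = torch.randn(1, 3, 224, 224, device=dev)

s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()
latencies = []
with torch.no_grad():
    for _ in range(200):
        with torch.cuda.stream(s2):          # enqueue the co-runner first
            vgg(x)
        start = time.perf_counter()
        with torch.cuda.stream(s1):          # enqueue the measured task
            resnet(x)
        s1.synchronize()                     # wait only for the measured stream
        latencies.append(time.perf_counter() - start)
# sorted(latencies) gives the empirical CDF of the measured task's latency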
Our design objective is to mitigate the latency degradation of the critical kernel during concurrent execution with the normal kernel by resolving inter- and intra-SM contention, while allocating idle SM resources to the normal kernel as much as possible.

§ MIRIAM OVERVIEW
We now introduce Miriam, a holistic kernel-level system for real-time multi-DNN inference on edge GPUs. Miriam is a compiler-runtime synergistic framework that achieves fine-grained kernel-level GPU resource mapping. In this section, we first introduce the key idea of Miriam and then describe its system architecture.

§.§ Key Idea
In Section <ref>, we show that it is imperative to give careful consideration to the resource contention that arises between multiple parallel kernels. Failure to do so can result in GPU under-utilization and degradation of inference latency. Motivated by these findings, Miriam proposes a new DNN kernel inference abstraction, the elastic kernel, which is a GPU kernel with adjustable grid size and block size. Different grid/block sizes of the elastic kernel correspond to different patterns of SM-level GPU resource usage. By transforming normal kernels into elastic kernels, Miriam can control their resource contention with the critical task, and thus maximize the overall system throughput while not compromising the real-time performance of the critical kernel. To this end, Miriam generates an elastic kernel for each normal task offline and enables kernel coordination at runtime. Specifically, Miriam employs a novel elastic kernel generator to construct an elastic kernel with adjustable GPU resource usage patterns. During the runtime phase, the coordinator selects the best implementation patterns of the elastic kernels and dynamically pads them with the critical kernels to fully utilize the GPU resources.

§.§ System Architecture
Fig. <ref> shows a bird's-eye view of Miriam. Miriam incorporates two parts, Offline Elastic Kernel Generation and Online Kernel Coordination, working at the levels of compilation, i.e., source-to-source code transformation, and kernel coordination, respectively. They collaborate to exploit elastic kernels for supporting multi-DNN inference on edge GPUs. Miriam generates elastic kernels by transforming compiler-generated or handcrafted CUDA kernels into the elastic form. We generate elastic kernels from both the grid and block perspectives of GPU kernels, which are called elastic grid and elastic block, respectively. These configuration knobs can achieve fine-grained control over inter- and intra-SM resources. There are two challenges here for generating elastic kernels. First, the design space of elastic kernel implementation patterns is too large (e.g., 2874 on average for a single kernel in AlexNet <cit.>). Hence, we shrink the design space to decrease the number of potential elastic kernel candidates by taking the hardware limitations into consideration. Second, when a kernel is launched in CUDA, the execution configuration specifies the number of threads to be launched and how they are organized into blocks and grids. Modifying the grid and block size in a DNN kernel directly can cause computation errors, because this affects how threads are organized and executed on the GPU. To handle this, Miriam includes a novel source-to-source kernel transformer, which transforms the GPU program of a given DNN kernel into an elastic kernel execution paradigm while ensuring the consistency of computation results.
Miriam adopts a novel dynamic kernel coordination mechanism that controls the execution of elastic and critical kernels at run-time. Specifically, Miriam profiles the SM occupancy of each elastic kernel and of the critical kernels. It then determines the grid size and block size of the next elastic kernel from the normal task queue at runtime. In this way, tasks with elastic kernels can maximize resource utilization without interfering with other co-running critical kernels. A key challenge here is that an elastic kernel may be executed solely or in parallel with different critical kernels. Hence, we cannot determine the scheduling of the elastic kernel at the time of kernel launch. To address this issue, we design a dynamic kernel sharding mechanism, in which we divide an elastic kernel into several shards and determine the scheduling for each shard according to run-time resource usage. Miriam can support a wide range of applications that need to run multiple DNNs on the edge GPU. For instance, an obstacle detection task and a navigation task need to run in parallel to achieve autonomous driving. The obstacle detection task is critical because it is related to driving safety, while the navigation task can be executed in a best-effort manner as a normal task. For such a DL task set, as shown in Fig. <ref>, Miriam first divides the kernels into critical kernels and normal kernels according to the criticality of their tasks. Normal kernels are compiled offline and transformed into elastic kernels by Miriam. At run-time, the elastic sharding policy of the normal kernels is determined by the runtime coordinator to maximize resource utilization while not interfering with the execution of the critical kernel.

§ GENERATION OF ELASTIC KERNELS
To support finer control over the inter- and intra-SM resources of a kernel running on the edge GPU, we propose an elastic kernel generator. The design principle of Miriam is based on the insight that both block- and grid-level resource allocations can be distilled from the native GPU programming model. Fig. <ref> illustrates the design of the proposed elastic kernel generator: elastic block and elastic grid. By separating resource allocation for thread blocks from the logic-level grid and thread block identity, this approach generates resource-controllable GPU kernels for resolving resource contention among co-running DNN tasks. To improve the efficiency of the elastic kernel generation process, Miriam proposes to shrink the design space of elastic kernels according to hardware limitations, as well as observations on co-running DNN kernels from the critical and normal task queues. Moreover, to maintain the correctness of the elastic kernel's computation after the transformation, we design a source-to-source kernel transformer. Our transformer can convert original GPU kernels into elastic kernels while preserving computational equivalence.

§.§ Controllable Intra-SM Resource by Elastic Block
DNN kernels can be broadly categorized into memory operations (memory allocations, memory transfers, etc.) and kernel execution. To enable the execution of a single kernel on multiple GPU SMs, GPU programming divides a large kernel into multiple sub-kernels, each of which is executed by a GPU block. The block size is determined by the computation workload of each sub-computation. Blocks with smaller sizes consume fewer threads in each instruction cycle.
Multi-DNN inference on edge GPU can cause severe intra-SM contention when multiple thread blocks from different kernels compete for resources within the same SM. Some blocks would fail to execute or be delayed, which leads to a decrease in the overall throughput and an increase in the corresponding latency of the DNN inference. For this issue, one possible solution is to perform code-level optimization of the GPU kernel. This approach includes optimizing the memory access patterns and reducing unnecessary computations to decrease the intra-SM resource usage, and thus alleviates intra-SM contention. However, optimizing GPU code for a specific DNN model is challenging and time-consuming. Different optimization techniques such as loop tiling, loop unrolling and parallelization naturally have different trade-offs in terms of execution performance, memory usage, and code complexity. Achieving the appropriate balance among those factors requires careful experimentation and tuning. Adapting code for different concurrent kernels from diverse tasks demands a significant amount of effort and may not generalize well, thereby restricting the effectiveness and applicability of the optimization techniques. To carefully manage the resource usage of each block, Miriam adjusts the number of threads within the targeted block to generate an elastic block for each thread block. We adopt the persistent thread technique <cit.>, which is capable of adjusting a kernel's resident block size on an SM. In contrast to traditional kernels, where threads terminate after completing the kernel execution, persistent threads remain active throughout the execution of a kernel function. We limit the range of each elastic block size to fall between 1 and the maximum resident block size. We also transform the default 1:1 logical-to-physical thread mapping scheme into an N:1 mapping scheme while preserving the initial program semantics. Compared to static block fusion <cit.>, which fuses multiple thread blocks from different GPU kernels into a single one to reduce unnecessary loads and stores, our persistent thread design does not require pre-compilation of all possible combinations of kernels. This feature enables flexible SM-level resource mapping at runtime. Our elastic kernel is designed to stay within the shared memory limit, and we achieve this by modifying the way we control the intra-SM resources, including shared memory, compared to the original kernel. This modification results in a memory occupancy that is either equal to or less than that of the original kernel. While the persistent thread mechanism provides fine-grained control over intra-SM parallelism, it comes with nontrivial overhead. The optimal number of launched persistent threads does not always equal the maximum number of concurrently executing threads from all thread blocks that can be afforded by a single SM. Hence, we narrow the design space of elastic blocks, as introduced in Section <ref>.

§.§ Elastic Grid for Inter-SM Contention
While the elastic block design can resolve intra-SM thread-slot contention, inter-SM memory (e.g., DRAM, L2 cache) fetching contention can still be a severe problem if the blocks inside a kernel are launched directly. DNN kernels often use a large number of blocks to hide stall cycles due to data access; thus, when multiple DNN inference requests arrive in rapid succession and multiple SMs are allocated to execute them, the SMs contend for shared resources (e.g., the memory bus) and have to wait for each other, leading to decreased execution performance.
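To illustrate the N:1 logical-to-physical thread mapping behind the elastic block design, here is a small persistent-thread-style kernel written with Numba's CUDA support (kept in Python for consistency with the other sketches); the vector-add workload and the launch configuration are illustrative assumptions, not code generated by Miriam.

from numba import cuda
import numpy as np

@cuda.jit
def elastic_add(x, y, out, logical_n):
    # Persistent-thread style: each physical thread loops over several logical
    # thread indices, so the resident grid/block size can be shrunk without
    # changing the computed result.
    start = cuda.grid(1)        # global physical thread id
    stride = cuda.gridsize(1)   # total number of physical threads
    for i in range(start, logical_n, stride):
        out[i] = x[i] + y[i]

n = 1 << 20
x = np.ones(n, dtype=np.float32)
y = np.ones(n, dtype=np.float32)
out = np.empty_like(x)
# Elastic launch: far fewer physical threads (32 blocks x 128 threads) than
# the logical_n logical threads of the original 1:1 mapping.
elastic_add[32, 128](x, y, out, n)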
Miriam proposes an elastic grid generator that slices the initial grid into multiple smaller grids. This approach can improve resource utilization and reduce inter-SM contention by allowing more efficient memory accesses across multiple SMs. Elastic grid generation implies a kernel slicing plan: given a kernel K, a slicing plan S(K) is a scheme that slices K into a sequence of n slices [s_0, s_1, s_2, ..., s_n-1] based on thread-block-granularity partitions. Thus, given a set of kernels, the problem is to determine the optimal grid slicing policy of the initial kernel when co-running with other tasks with different workloads. Formally, for a DNN kernel K with M thread blocks, a dichotomy-based slicing plan S(K) can be applied to K. Specifically, the sequence of slicing schemes is represented as: S(K) = (M/2^n, M/2^(n-1), ..., M), n = max{ i : M mod 2^i = 0 }, where n is the largest exponent such that 2^n divides M. By doing this, we enable normal kernels to be issued with a flexible number of thread blocks on the SMs, co-locating with critical kernels. By dividing a single kernel into multiple slices, the sliced grids can be scheduled to run independently by the GPU, allowing the GPU to interleave their execution with the execution of critical kernels. The elastic grid design efficiently reduces the inter-SM memory contention of co-locating kernels by improving the time-multiplexing potential of the kernel with other kernels, allowing the GPU to better balance the allocation of resources and maximize overall performance.

§.§ Workload-balance-guided Design Space Shrinking
We need to determine the execution parameters of the elastic kernel at run-time, which include the grid number (N_blk_be) and the block size (S_blk_be). We call each pair of execution parameters a schedule. A main challenge here is the huge number of feasible schedules, which makes it difficult to enumerate schedules or heuristically find optimal ones at run time. The total number of feasible schedules is exponential in the number of operators in the incoming model and the size of the input data. For example, an implemented AlexNet model in the Tango benchmark with an input image size of 3x224x224 can have up to 2.2 × 10^25 feasible schedules for all Conv kernels <cit.>. To address this challenge, we shrink the design space for each kernel by removing combinations of elastic grid sizes and block sizes that may result in dispatch failure due to severe resource contention. In other words, Miriam narrows down the design space by eliminating configurations that are expected to have low performance. When multiple kernels are co-running, thread blocks from different kernels can have many possible interleavings, with SM-level contention or inefficiency. We propose two constraints to address these issues, as shown in Eq. <ref>, and the specific parameters of these factors are shown in Table 1. N_blk_be ⩽ N_SM - (N_blk_rt mod N_SM) S_blk_be ⩽ L_threads - S_blk_rt The first constraint is based on the observation that the workload across SMs can be unbalanced. This kind of imbalance appears broadly when the number of thread blocks is not a multiple of the number of SMs inside an edge GPU. To address this issue, we prune cases where the number of thread blocks of the elastic kernel exceeds the number of remaining available SMs after dispatching all the thread blocks from the critical kernels. The second constraint addresses intra-SM workload balance, which aims to reduce contention between thread blocks from different kernels competing for resources within an SM.
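A minimal Python sketch of this dichotomy-based slicing plan S(K), assuming a kernel is described only by its number of thread blocks M (the function name and the example value are illustrative):

def slicing_plan(num_blocks):
    """Candidate slice sizes M/2^n, M/2^(n-1), ..., M, where n is the largest
    exponent such that 2^n divides M."""
    n = 0
    while num_blocks % (2 ** (n + 1)) == 0:
        n += 1
    return [num_blocks // (2 ** i) for i in range(n, -1, -1)]

# e.g. a kernel with 24 thread blocks yields slice sizes [3, 6, 12, 24]
print(slicing_plan(24))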
It is necessary to ensure that each SM has as much workload as possible and that the workload is balanced. If the workload on an SM is too light, then the resources in that SM may be wasted. On the other hand, if the workload on an SM is too heavy, it may lead to resource contention and performance degradation. Based on the intra-SM workload balance constraint, we prune cases in which the working threads of an elastic kernel far exceed the spare intra-SM resources left after the blocks from the critical kernel have been dispatched. To formulate these two inefficiency cases, we define WIScore as a workload imbalance metric: WIScore = ((N_blk_rt mod N_SM) + N_blk_be)/N_SM × (S_blk_rt + S_blk_be)/L_threads    (4) where the value of WIScore lies in [0,1]. Another factor we consider when shrinking the design space is the dispatch overhead of the elastic kernels. To ensure that the potential schedule generated for each elastic kernel is feasible and does not violate critical decision-making requirements, Miriam prunes infeasible cases using OScore: OScore = 1 if ∑_i LO_blk(k_be_i) < MAX_blk and ∑_i LO_pt(k_be_i) < MAX_pt for all i ∈ [1, N_shard], and OScore = 0 otherwise    (5) where the function LO() represents the launch overhead, which equals the sum of the launching times of the elastic kernel fragments minus the launching time of the initial normal kernel. OScore is set to 0 when the overhead exceeds the maximum acceptable bar we set, which is a constant. The product of the WIScore and OScore values computed for each elastic kernel candidate gives a metric that can be used to guide the narrowing of the design space towards the performance boundary. Specifically, by multiplying these two scores (WIScore * OScore), we can identify the candidates that are likely to achieve the best performance within the given design space. Miriam computes this product for every possible combination of elastic kernel implementation settings. Determining the optimal percentage of candidates to select is difficult, since it is unclear how many candidates need to be chosen to ensure that Miriam finds the best parameters within the pruned design space. Thus, we test some representative tensor operations (such as convolution in CifarNet <cit.> and matrix multiplication in GRU <cit.>) and then pick out the top 20% of combinations among all the candidates to be used in the next stage of runtime kernel coordination. In these tests, we do not find any case in which the model prunes the best-performing set of parameters. With the assistance of these injected constraints, we can greatly reduce the design space without sacrificing the candidate elastic kernels' performance.

§.§ Source-to-Source Elastic Kernel Transformer
Before assessing the effectiveness of the elastic kernel design, it is crucial to investigate whether the grid or block sizes of DNN kernels can be modified directly in the original user-developed or compiler-generated GPU programs. An experiment was conducted on the benchmarks of Tango <cit.> to evaluate the effectiveness of direct kernel transformation. The results of the experiment showed that only 7.4% of the implemented kernels in the Tango benchmarks were compatible with grid/block size adjustment without requiring modifications to the computation schedules inside the kernels.
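Putting the pieces together, the following Python sketch ranks the candidate (grid, block) settings of an elastic kernel against a resident critical kernel and keeps the top 20% mentioned above. The WIScore expression follows the reconstruction of Eq. (4); for brevity, the launch-overhead test behind OScore is reduced to the two hardware constraints above, and all parameter names are illustrative assumptions.

def wiscore(n_blk_rt, n_blk_be, s_blk_rt, s_blk_be, n_sm, l_threads):
    # Eq. (4): inter-SM balance times intra-SM balance, both in [0, 1]
    inter_sm = (n_blk_rt % n_sm + n_blk_be) / n_sm
    intra_sm = (s_blk_rt + s_blk_be) / l_threads
    return inter_sm * intra_sm

def shrink_design_space(candidates, critical, n_sm, l_threads, keep=0.2):
    """candidates: iterable of (n_blk_be, s_blk_be) pairs for one elastic kernel;
    critical: dict with the resident critical kernel's 'n_blk' and 's_blk'."""
    scored = []
    for n_blk_be, s_blk_be in candidates:
        # hard constraints: drop schedules that over-fill SMs or thread slots
        if n_blk_be > n_sm - critical["n_blk"] % n_sm:
            continue
        if s_blk_be > l_threads - critical["s_blk"]:
            continue
        score = wiscore(critical["n_blk"], n_blk_be,
                        critical["s_blk"], s_blk_be, n_sm, l_threads)
        scored.append((score, (n_blk_be, s_blk_be)))
    scored.sort(reverse=True)
    return [cfg for _, cfg in scored[:max(1, int(len(scored) * keep))]]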
This is because the block size and grid size defined in a kernel are determined by the computation schedule of the kernel, which is either directly written in the CUDA code or expressed through declarative loop-oriented scheduling primitives in DNN compilers that bind symbolic-extent logical threads to physical GPU threads, as shown in Fig. <ref>. This constraint motivates us to design a source-to-source kernel transformer that can support our elastic kernel design. Miriam performs a rapid, equivalence-preserving transformation of a DNN kernel by injecting a piece of code at the beginning of each kernel, which checks the computation and memory offsets so that the kernel knows where it begins and ends after being evicted. Specifically, we compute a global thread identifier and use it as a basis for SM-level workload distribution. This identifier takes the thread ID as input and produces a corresponding index for the data element accessed by the thread. We replace references to physical threads (e.g. gridDim) and identity variables (e.g. threadIdx.x) in the original kernel code with logical equivalents. Miriam employs two approaches for implementing the index function: computation-based and memory-based. The computation-based approach computes the index within the kernel when the thread accesses the corresponding data element. Alternatively, in the memory-based approach, the indices are pre-calculated on the host side (i.e., the CPU) prior to kernel launch and stored in shared memory for use during kernel execution.

§ RUNTIME DYNAMIC KERNEL COORDINATION
This section introduces our design for the online scheduler of elastic kernel coordination. First, we refer to each elastic kernel instance (i.e., with its elastic grid and elastic block setting) as an elastic kernel shard. Our guidelines for designing the coordinator are two-fold: maximizing overall real-time performance and mitigating resource contention. To achieve these goals, our runtime coordinator constantly monitors the available GPU resources, for both the critical kernels and the elastic kernels. It then determines which elastic kernel shards can co-run effectively with the critical kernels. Execution timeline of co-running kernels. Upon receiving multiple normal task requests b1...bn, Miriam pushes all the kernels into a normal task queue, and the kernels are dispatched to the GPU through multiple streams. Once a critical task arrives, Miriam instantly selects appropriate elastic kernel fragments of the following normal kernel in a "bin-packing" manner, considering the current intra- and inter-SM-level resource distributions. Once the critical kernels finish executing, all the kernels from normal tasks re-occupy the GPU. Grid/block size determination of elastic kernels. During runtime, fixed elastic grid and block settings for elastic kernels can easily become inefficient, since the optimal co-scheduled elastic kernel shards vary across the different critical kernels they co-run with. For example, if one critical kernel finishes while half of the computation of the co-locating elastic kernel is still unfinished, the remaining half of its thread blocks may lead to severe resource contention or under-utilization when co-locating with the subsequent critical kernel. The selection policy for elastic kernel shards is therefore crucial in order to prevent latency interference with critical tasks. To ensure optimal performance, one approach is to build a duration prediction model for the formation of operator groups based on runtime performance events (e.g.
cache misses and global memory bandwidth) <cit.>, and to control the kernel overlap based on the model. However, runtime events are not supported on edge GPUs like the NVIDIA Jetson devices, and the hardware events reported by tools like Nsight Systems and Nsight Compute can only be obtained with high overhead. Thus, this method cannot be applied to our problem (where kernel overlaps are not predetermined) in a practical way. To address these challenges, Miriam adopts a greedy scheduling policy. Specifically, when the elastic kernel partially overlaps with the critical kernel, the kernel coordinator must carefully balance the resources allocated to each kernel. In this case, the coordinator needs to ensure that the padded elastic kernel does not interfere with the execution of the critical kernel, while still using as many available resources as possible. When the padded kernel runs on its own, the kernel coordinator can allocate all of the available resources to it, since there are no other tasks running on the GPU. This allows the kernel to run as efficiently as possible, without any interference from other tasks. To efficiently manage these elastic kernels while achieving this goal, we propose a dynamic-sized shaded binary tree approach for elastic kernel shard formation, to achieve high runtime efficiency and low resource contention across different combinations of overlapped kernels. Our shaded binary tree structure is an abstraction for managing the elastic kernel shards, similar to a complete binary tree of shards, as shown in Fig. <ref>. The root of the tree represents the kernel from the normal task, whose initial grid size is M. Each node corresponds to a part of the computation, i.e., potential thread blocks to be dispatched inside the kernel. The shading property of each node is the elastic block size of the thread block. Directed edges indicate the potential sliced peers for the unfinished computations left over from the predecessor. The whole structure is composed of actual shards and virtual shards. The actual shards are the ultimately formed elastic kernel shards that are to be dispatched, and the virtual shards are the potential fragments of the elastic kernel that will not be dispatched. Miriam relies on the dynamic shaded kernel binary tree structure to manipulate the elastic kernels from normal tasks, and determines the elastic kernel shards with heuristics based on the number of thread blocks of the kernels from both critical and normal tasks. Fig. <ref> illustrates the life cycle of an elastic normal kernel. For elastic fragment selection from normal kernels, the policy is to pick a set of elastic blocks from the head of the shaded kernel binary tree to share SM-level resources with the co-locating thread blocks from resident critical kernels with trivial contention. Miriam utilizes this policy to ensure that the elastic blocks from normal kernels only use the left-over resources from the critical kernels.

§ EVALUATIONS
§.§ Experiment Setup
We implemented Miriam based on NVIDIA CUDA 11.2 <cit.> for elastic kernel generation and online kernel scheduling, and Python 3.6 for the source-to-source kernel transformer.

§.§.§ Implementation and Testbed.
Our experiments are conducted on an NVIDIA GeForce RTX 2060, which features 1920 CUDA cores, and an NVIDIA Jetson AGX Xavier with a Pascal GPU architecture and 256 NVIDIA CUDA cores <cit.>.
We implemented Miriam with NVIDIA CUDA 11.2 for elastic kernel generation and Python 3.6 for the end-to-end kernel transformation. Note that Miriam is extensible and can work well on other GPU platforms that officially support OpenCL, HIP or other CUDA-like programming paradigms, such as the AMD Embedded Radeon™ E9170 <cit.>.

§.§.§ DNN Workloads.
We use six popular DNN models from both the computer vision and language processing fields to evaluate Miriam. Inspired by DISB <cit.>, we build a benchmark named MDTB (Mixed-critical DNN Task Benchmarks) based on CUDA-implemented kernels to fully demonstrate the performance and generalization of our framework, summarized in Table <ref>. The MDTB benchmark simulates three patterns of inference requests from users: (1) Arrival in uniform distribution. The client sends inference requests at a fixed frequency (e.g. 10 requests/second), which simulates critical applications such as pose estimation. (2) Arrival in Poisson distribution, which simulates event-driven applications such as obstacle detection. (3) Closed-loop workloads, which simulate a client that keeps sending inference requests back to back. We choose six representative DNN models in MDTB: AlexNet <cit.>, SqueezeNet <cit.>, GRU <cit.>, LSTM <cit.>, ResNet <cit.>, and CifarNet <cit.>, all implemented in CUDA. We conduct neural network inference with a single batch of 224x224x3 images as the input to mimic the inference in real applications.

§.§.§ Baselines.
We compare Miriam with multiple DNN scheduling approaches on edge GPU. Sequential selects one model from both task queues (critical and normal) in a round-robin fashion and performs the inferences one by one. In this mode, the critical tasks run independently, occupy the GPU resources, and can have optimal end-to-end latency. GPU Multi-stream with Priority enqueues kernels from both critical and normal tasks at the same time, and models are executed in parallel. This is adopted by NVIDIA Triton <cit.>. Inter-stream Barrier (IB) is the state-of-the-art multi-DNN operator scheduling method based on multi-stream <cit.>. It uses inter-stream barriers to manually synchronize kernel dispatch among different kernels. In this mode, the concurrency among kernels can be controlled by utilizing stream- and synchronization-based mechanisms.

§.§.§ Metrics.
We use the overall throughput, the end-to-end latency of critical tasks, and the achieved occupancy as our evaluation metrics. End-to-end Latency of Critical Tasks. This metric measures the end-to-end inference speed of critical tasks with real-time demands. Overall Throughput. This metric represents how many user requests Miriam can serve on the target edge GPU. Achieved Occupancy. By definition, achieved occupancy is the average ratio of active warps on an SM to the maximum number of active warps supported by the SM <cit.>, defined as: Achieved Occupancy = (Active_warps / Active_cycles) / MAX_warps_per_SM. We use this metric to evaluate the fine-grained GPU utilization underlying our system performance.

§.§ Overall Performance
To reflect the gain in overall system throughput with little sacrifice in the real-time performance of the critical tasks, we compare Miriam against other GPU scheduling approaches under the MDTB A-D workloads on two edge GPU platforms. We merge the discussion of the uniform-distribution and Poisson-distribution critical task requests because their workloads are comparable. This allows us to analyze and discuss their similarities more efficiently.
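The three MDTB request patterns described above can be generated with a few lines of Python; the sketch below is illustrative (the rate and duration arguments are assumptions), and the closed-loop case is paced by completion of the previous request rather than by precomputed timestamps.

import numpy as np

def arrival_times(pattern, rate_hz, duration_s, rng=None):
    """Request arrival times for 'uniform' and 'poisson' patterns; the
    'closed' pattern issues requests back to back, so no timestamps apply."""
    rng = rng or np.random.default_rng()
    if pattern == "uniform":
        return np.arange(0.0, duration_s, 1.0 / rate_hz)
    if pattern == "poisson":
        gaps = rng.exponential(1.0 / rate_hz, size=int(2 * rate_hz * duration_s))
        t = np.cumsum(gaps)
        return t[t < duration_s]
    if pattern == "closed":
        return None
    raise ValueError(pattern)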
Closed-loop Critical Tasks (MDTB A). Workloads with closed-loop critical tasks (AlexNet) experience significant resource contention when co-running with normal tasks (CifarNet). Fig. <ref> (a)-(d) show that, compared to Sequential, Multi-stream and IB increase the critical task latency by 1.95× and 1.52× on the 2060 and by 2.02× and 1.77× on Xavier, respectively, while Miriam incurs only a 21% and 28% overhead on critical tasks. Miriam also improves overall throughput by 64% and 83% on the two platforms, significantly outperforming the other approaches under the MDTB A workloads. We observed that IB's throughput is even worse than Sequential's because the frequent launching of critical tasks requires the insertion of more synchronization barriers among GPU streams to manage kernel groups, resulting in significant overhead. In terms of achieved occupancy, Fig. <ref> (e) and (f) demonstrate that Miriam attains higher SM-level GPU utilization than the other baselines. It is important to note that achieving nearly 100% theoretical occupancy is difficult for DNN inference tasks due to their large thread blocks, which can easily lead to resource idleness or an SM's inability to hide memory access latency <cit.>. Uniform/Poisson Critical Tasks (MDTB B, C, and D). As the launching frequency of critical workloads decreases, the overall throughput of all approaches improves to different degrees compared to vanilla Sequential, due to increased opportunities for normal tasks to share GPU resources with critical tasks. We observed that Miriam outperforms the other approaches in this scenario as well. For instance, under MDTB B, C, and D on Xavier, Miriam increases overall throughput by 1.85×, 1.79×, and 1.91× over Sequential, which is much better than the other baselines. While both Multi-stream and IB also yield improved throughput compared to Sequential, by 1.34×-1.73×, they lead to severe latency degradation of 32%-88% for the critical tasks, whereas Miriam incurs a latency overhead of less than 21% on these benchmarks. This improvement can be attributed to our elastic kernel design and runtime dynamic kernel coordination approach. Since the Sequential approach exhibits the shortest latency for each critical task, our comparison demonstrates that Miriam maximizes overall throughput while preserving the end-to-end latency of critical tasks. From a GPU utilization standpoint, Miriam increases the average number of active warps per cycle, resulting in better SM utilization. These results confirm the effectiveness of our elastic kernel sharding approach and demonstrate its ability to effectively pad critical kernels. We observe that the performance improvements offered by Miriam do not always translate into higher SM occupancy on Jetson Xavier. This is because Xavier has far fewer on-board resources and a smaller number of SMs than the 2060. Additionally, the relatively low memory bandwidth of the Xavier can limit the amount of data that can be transferred between memory and the SMs, leading to performance bottlenecks with complex models. The thermal design power of the Xavier is also relatively low compared to the 2060, which limits the power that the GPU can consume and the heat that can be dissipated. This can lower the clock speed of the processor cores and the amount of parallelism that can be achieved, which in turn weakens the relationship between SM occupancy and performance.
§.§ In-depth Analysis of Miriam To better understand why Miriam performs better than the other GPU scheduling approaches under severe contention, we provide an in-depth analysis in this section, with two AlexNet models co-running on a single 2060 GPU: AlexNet-C, which serves as the critical task, and AlexNet-N, which serves as the normal task. Both tasks are launched in a closed-loop manner. In Fig. <ref>, the upper two rows show the timelines of active kernels from the two co-running DNN tasks, which demonstrate the performance difference between Miriam and Multi-stream. The figure is sketched based on real profiling results obtained from NVIDIA Nsight Systems <cit.>, in which blue represents the critical task, green represents the normal task launched by vanilla Multi-stream, and pink represents the elastic kernels of the normal task under Miriam. As shown in the figure, there are notably more pink blocks than green blocks, and the pink blocks are tightly padded against the blue blocks, showcasing the elastic kernel shards padded alongside the critical kernels. The end-to-end latency of AlexNet-C in Miriam is much lower than that in Multi-stream. We also show the corresponding achieved occupancy for this case in Fig. <ref>. The average layer-wise achieved occupancy is 65.25% for Miriam and 32.9% for Multi-stream. As mentioned, more active warps per cycle on average and less contention overhead are the key to improving parallelism while preserving the speed of critical tasks. §.§ Evaluations on Design Space Shrinking Miriam filters out the definitely-slow cases (80%) by applying hardware limiters, as detailed in Section 6.3. The trade-off between the elasticized scale (i.e., the depth of the dynamic shaded binary tree, as discussed in Section 7) and the scheduling granularity is a critical consideration for different implementations of elastic kernels, as shown in Fig. <ref>, and guides the further shrinking process. For instance, an elastic kernel shard with elastic_grid_size=1 is flexible enough to accommodate other critical kernels, but the launch overhead for such shards may be too large due to the increased number of kernel shards. Fig. <ref> summarizes the pruned space of candidate elastic kernels for the models in MDTB, ranging from 84% to 95.2%. The pruned space differs across candidate models due to multiple factors, such as the complexity of the models (i.e., the operator types used) and the input size. §.§ Case Study: Autonomous Driving with LGSVL We further use a real-world trace from an open autonomous driving platform (LGSVL <cit.>) as the workload, which provides a realistic arrival distribution of critical tasks (obstacle detection) and normal tasks (pose estimation) in autonomous driving. The trace was collected from a 3D lidar perception module and a 2D camera perception module while running the LGSVL simulator, and we selected backbones from the models included in our MDTB benchmark: SqueezeNet simulates pose estimation as the normal task (lidar data), and ResNet performs obstacle detection as the critical task (camera data). The clients send inference requests at uniformly distributed intervals, at 12.5 Hz for the normal task and 10 Hz for the critical task, as shown in Fig. <ref>. The experiment was conducted on the RTX 2060. Fig. <ref> shows the experimental results for this real-world workload.
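To illustrate the kind of pruning involved in the design-space shrinking above, the sketch below filters candidate elastic-kernel configurations with simple hardware limiters. The specific limits, the half-resource heuristic, and the minimum-shard-size cutoff are hypothetical stand-ins for the actual rules, shown only to make the filtering idea concrete.

```python
# Hypothetical illustration of pruning the elastic-kernel design space with
# hardware limiters; the limits below are example values, not Miriam's rules.
from itertools import product

LIMITS = {"max_threads_per_sm": 1024,
          "max_regs_per_sm": 65536,
          "max_smem_per_sm": 64 * 1024}

def feasible(block_size, regs_per_thread, smem_per_block):
    """Reject configurations that could never co-reside with a critical kernel."""
    if block_size > LIMITS["max_threads_per_sm"] // 2:            # leave room
        return False
    if block_size * regs_per_thread > LIMITS["max_regs_per_sm"] // 2:
        return False
    if smem_per_block > LIMITS["max_smem_per_sm"] // 2:
        return False
    return True

def prune(candidates, min_shard_blocks=4):
    """Drop definitely-slow cases: infeasible resource use, or shards so small
    that launch overhead would dominate."""
    kept = [(g, b, r, s) for (g, b, r, s) in candidates
            if feasible(b, r, s) and g >= min_shard_blocks]
    return kept, 1.0 - len(kept) / len(candidates)

grids = [1, 4, 16, 64]
blocks = [64, 128, 256, 512, 1024]
cands = [(g, b, 32, 8 * 1024) for g, b in product(grids, blocks)]
kept, pruned_ratio = prune(cands)
print(f"pruned {pruned_ratio:.0%} of {len(cands)} candidates")
```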
Compared to Sequential, Multi-stream and IB increase the overall throughput by 1.41× and 1.25×, while amplifying the critical task latency by 82% and 56%, respectively. Due to the low launching frequency of both the critical and normal tasks (10 and 12.5 Hz), the elastic kernels of the normal task can execute concurrently with the critical task with little eviction overhead for the elastic kernel shards. Overall, Miriam achieves an 89% improvement in overall throughput compared to Sequential, while incurring only an 11% latency overhead for the critical task. This demonstrates that Miriam can deliver a large throughput improvement through our elastic kernel design with little sacrifice in critical task latency, which is also confirmed by the highest SM occupancy among all baselines, shown in Fig. <ref> (c). §.§ System Overhead The scheduling overhead of Miriam mainly consists of two parts. The first part is the runtime elastic kernel shard selection, which scans the shard candidates and has O(N) complexity. Owing to the low complexity of the scheduling mechanism in Miriam, the overall average overhead for serving each DNN model is less than 0.35 ms. The second part is the launch-time overhead for critical kernels caused by the padding of the elastic kernels. We evaluated this overhead and found that in most (over 80%) cases it is less than 15 μs. This latency overhead is mainly caused by contention on the texture cache and the L2 cache, which we leave for future work. § DISCUSSION Scalability. We believe that Miriam has the potential to scale beyond pair-wise co-running of DNN tasks and to support more general workloads. However, due to the large number of possible co-running kernel combinations, some additional considerations must be taken into account. These include establishing a scheduling policy for normal tasks with the same priority, as well as finding an efficient way to perform offline kernel profiling, since the design space grows exponentially. Integration with DNN Compilers. Representative DNN compilers like TVM <cit.> can generate high-performance DNN kernels with low latency using auto-tuning <cit.>. However, DNN compilation is an offline approach with a long compilation time, and the generated kernels cannot be easily modified at runtime. This creates a gap between static compilation and dynamic scenarios in IoT applications, particularly when on-device resources become available dynamically. To fill this gap, Miriam can serve as a post-compilation runtime that ensures on-device resources are fully utilized at runtime in an adaptive manner. Orthogonal to Other Approaches. Miriam can work symbiotically with other optimized DNN execution approaches, such as model compression <cit.> and edge-cloud offloading <cit.>. Such a collaborative approach makes it possible to achieve improved runtime performance and better resource utilization, enabling the effective execution of multi-DNN workloads in resource-constrained edge computing environments. § CONCLUSION We propose a novel system named Miriam that addresses the latency and throughput problems of co-running multiple DNN inference tasks on edge GPUs. The proposed system utilizes elastic kernels to facilitate fine-grained GPU resource re-mapping and a runtime dynamic kernel coordinator to support dynamic multi-DNN inference tasks.
Experimental results on a benchmark we built, covering two types of edge GPUs, show that Miriam can significantly improve the overall system throughput while incurring minimal latency overhead for critical tasks, compared to dedicating the GPU to the critical tasks.
http://arxiv.org/abs/2307.04044v1
20230708204524
When greediness and self-confidence meet in a social dilemma
[ "Chaoqian Wang", "Wenqiang Zhu", "Attila Szolnoki" ]
physics.soc-ph
[ "physics.soc-ph", "cond-mat.stat-mech", "cs.GT", "nlin.CG" ]
When greediness and self-confidence meet in a social dilemma. Chaoqian Wang^1 ([email protected]; Conceptualization, Methodology, Writing), Wenqiang Zhu^2 (Methodology, Validation), Attila Szolnoki^3 (Corresponding author, [email protected]; Conceptualization, Validation, Writing). [1] Department of Computational and Data Sciences, George Mason University, Fairfax, VA 22030, USA; [2] Institute of Artificial Intelligence, Beihang University, Beijing 100191, China; [3] Institute of Technical Physics and Materials Science, Centre for Energy Research, P.O. Box 49, H-1525 Budapest, Hungary. A greedy personality is usually accompanied by arrogance and confidence. This work investigates the cooperation success condition in the context of biased payoff allocation and self-confidence. The first component allows the organizer in a spatial public goods game to receive a different proportion of goods than the other participants. The second aspect influences the micro-level dynamics of strategy updates, wherein players can maintain their strategy with a certain weight. Analytical results are obtained on square lattices in the weak selection limit. If the organizer attempts to monopolize the public goods, cooperation becomes more attainable. If confidence increases, cooperation is inhibited. Consequently, these elements have conflicting effects on cooperation, and their simultaneous presence can result in a non-monotonic change of the critical synergy factor. Our theoretical findings underscore the subtle implications of a mutual trait that may manifest as greediness or self-confidence under different circumstances, and they are validated through Monte Carlo simulations. Highlights: * Examining biased allocation and self-confidence in the spatial public goods game * Calculating cooperation success conditions in the weak selection limit * Conflicting effects yield a non-monotonic critical synergy factor * Analytical results validated via Monte Carlo simulations. Keywords: Public goods game; Weak selection; Biased allocation; Self-confidence; Evolutionary game theory. § INTRODUCTION The dynamism of various facets of reciprocity, be they direct, indirect, or network reciprocity, has been unequivocally demonstrated to wield significant influence over system behaviors, particularly when there is a need to sustain costly cooperation among self-interested, or more crudely put, selfish agents <cit.>. These mechanisms, chiefly concerned with pairwise interactions among players, have been observed to incorporate higher-order interactions <cit.>. The public goods game (PGG) is an illustrative example of such complex interactions, involving simultaneous decision-making processes through multi-body or group interactions <cit.>. Players may opt to contribute or abstain from contributing to a common pool, reaping the benefits of the overall contributions regardless of their individual decisions. In a spatial population, where players engage in limited yet enduring interactions with others, reciprocity manifests on an additional level <cit.>. Here, the intricate web of relations among agents means a player is not limited to a single game, but finds themselves immersed in several others. A pragmatic approach for a player would be to partake in the group where they serve as the central agent, encircled by proximate neighbors. Concurrently, said player also engages in games instigated by their neighbors. Consequently, a player positioned on a node with degree k finds themselves partaking in G=k+1 PGGs.
This setup could potentially underpin a reciprocal mutual aid system which promotes a degree of cooperation. Assuming the most rudimentary scenario where players consistently maintain their strategies across all the games they participate in and disregard strategy diversity <cit.>, there still exists considerable flexibility in the implementation of a realistic model. To elaborate, groups do not necessarily correspond to a player, who may be more incentivized to invest effort in a venture they have personally initiated. Such dedication could be recognized and appreciated by the others. This could be simply expressed by allocating enhanced contributions in a biased manner. Specifically, a 0≤ w_L ≤ 1 fraction of the total income is allotted to the central player while the remaining 1-w_L is distributed among the participating neighbors. The w_L=1/G scenario represents the traditional PGG model, where the income is equally distributed among all participants. The w_L=0 limit corresponds to the situation where the central player allocates all income to the neighbors. While this may initially seem irrational, there have been empirical studies indicating the existence of similar practices in certain tribes where partners generally offer a larger share to an associate in an ultimatum game, signaling their honest intentions <cit.>. The other extreme case, w_L=1, denotes that the central player retains all the benefits. Interestingly, even this seemingly greedy scenario can reflect a cooperative intent and represent a form of mutual aid <cit.>. One can contemplate a barn constructed by an entire Amish community, yet later solely utilized by a single farmer. This study aims to explore the potential ramifications when players exhibit a specific w_L value. The unequal distribution of collective benefits has previously been the subject of extensive investigation <cit.>. For instance, how income is allocated remains a central issue in the ultimatum game <cit.>. For the current study, however, the diverse allocation within a group comprising several participants is of greater relevance. In certain scenarios, the individual portion accrued by a participant can be strongly contingent on their investment capability <cit.>. Additionally, the heterogeneous interaction topology is a critical aspect where income allocation is proportional to an agent's weight (degree) in the graph <cit.>. In more sophisticated model configurations, players possess an extra skill and keep track of their previous round earnings <cit.>. Yet, our current model is straightforward, emphasizing the fundamental element of biased allocation. For example, it can be applied to regular graphs where players have equal-sized neighborhoods, thus participating in an equal number of joint groups. Moreover, we presuppose homogeneous players who behave similarly and apply a pre-established allocation policy in each case. This characteristic could prove to be crucial, as it has been widely observed that a heterogeneous population, wherein players are unequal, could serve as a mechanism that encourages cooperation <cit.>. Players may differ in their views about their groups, and their approach to strategies can also be distinct. For example, they may show reluctance to alter their existing strategies, a phenomenon explained from various perspectives. This could be a result of a specific cost related to change <cit.>, or it could be interpreted as a form of self-confidence <cit.>. 
This strategy change inertia or updating passivity has been identified as a separate mechanism that significantly influences the evolutionary process <cit.>. To quantitatively track this effect, we introduce a 0≤ w_R ≤ 1 weight parameter, which determines the likelihood of retaining the original strategy during the elementary dynamical process. At w_R=0, this effect is completely absent, and we revert to the traditional death–birth rule <cit.>. In the opposite extreme, when w_R=1, there is no proper evaluation because all agents adamantly stick to their original strategy, despite the theoretical cooperation success condition equating to the birth-death rule as w_R→ 1 <cit.>. In between these extremes, at w_R=1/G where G denotes the group size, the strategy of the central player and the strategies of the neighbors carry equal weight and we revert to the imitation rule <cit.>. This work simultaneously considers the aforementioned effects within the framework of PGG, with players situated on a square lattice. It is important to note that the biased allocation, which can also be interpreted as autocratic behavior, and the indifference towards alternative players representing diverse strategies, may stem from a shared trait. If an individual exhibits higher levels of autocracy and retains more public goods when they organize a group, it may also display traits of arrogance, meaning they have a high self-regard and are not prone to learning from others' strategies. Therefore, the weight factors representing these traits can be similar in size. Moreover, all the mentioned details of the proposed model are strategy-neutral, making it unclear whether they support cooperation or not. Specifically, we assume the analytically feasible weak selection limit, where payoff values merely slightly alter the reproductive fitness of competing strategies. Our main goal is to determine the critical synergy factor for the success of cooperation based on the control parameters and to uncover the consequences of their simultaneous presence. In the next section, we will define our model, and our primary findings will be presented in Section <ref>. Monte Carlo simulations were also conducted to validate and confirm our theoretical results. The comparisons will be presented in Section <ref>. Our primary conclusions are summarized in Section <ref>, where potential implications will also be discussed. § MODEL In the study of spatial population dynamics, the model utilizes an L× L square lattice with periodic boundary conditions. Hence, the total population N=L^2. Each individual, referred to as an agent, inhabits a vertex on the lattice and forms a group of G=k+1 members, comprising of itself and k of its neighbors. Consequently, each agent partakes in 1+k groups, either organized by itself or by its neighbors. The group formed by agent i is represented by Ω_i. Consequently, the collection of agent i's neighbors can be expressed as Ω_i∖{i}. The common choice of group size is G=5 (k=4, von Neumann neighborhood) or G=9 (k=8, Moore neighborhood). During each elementary Monte Carlo step, a random agent i is selected to update its strategy s_i based on the payoff acquired from participating in the public goods games. Specifically, agent i organizes a public goods game within its group Ω_i. Each participant j∈Ω_i contributes a cost c>0 to the group if cooperating (s_j=1) or contributes nothing if defecting (s_j=0). 
The combined investment of all participants, ∑_j∈Ω_is_j c, is amplified by a synergy factor r>1 to generate the public goods, which are then distributed among the group members. Distinct from the conventional public goods game, where the goods are evenly distributed, this study allows for an uneven distribution between the organizer and the other players. Specifically, the organizer is allotted a portion w_L (0≤ w_L≤ 1), while the remaining players evenly share the remaining proportion 1-w_L; that is, each of the other players receives (1-w_L)/k. Hence, as the organizer, agent i receives a payoff of w_L r∑_j∈Ω_is_j c-s_i c from group Ω_i. Correspondingly, agent i also participates in the groups organized by its neighbors g∈Ω_i∖{i}, receiving a payoff in those groups as an ordinary player. The payoff of agent i is the average over the k+1 groups,
\pi_i = \frac{1}{k+1}\left\{ \left( w_L r \sum_{j\in\Omega_i} s_j c - s_i c \right) + \sum_{g\in\Omega_i\setminus\{i\}} \left( \frac{1-w_L}{k}\, r \sum_{j\in\Omega_g} s_j c - s_i c \right) \right\}.
As underscored, Eq. (<ref>) broadens the traditional public goods game by incorporating the self-allocation parameter w_L. At w_L=0, all public goods are allocated to the other players, while at w_L=1, all public goods are allocated to the organizer. At w_L=1/G, the public goods are distributed equally, reducing Eq. (<ref>) to the traditional public goods game. In alignment with previous studies <cit.>, the payoff π_i is transformed into the fitness F_i=exp(δπ_i), where δ→ 0^+ is the weak selection limit. Therefore, a strategy with higher fitness has a marginal advantage in reproducing more frequently. To calculate the strategy updating probability, we also compute the payoffs of agent i's neighbors and convert them to fitness in the same manner. The strategy of agent i is then replaced by the strategy of an agent j∈Ω_i with probability W(s_i\gets s_j), defined by the generalized death–birth rule <cit.>,
W(s_i\gets s_j) = \begin{cases} \dfrac{\frac{1-w_R}{k}\, F_j}{w_R F_i + \frac{1-w_R}{k}\sum_{\ell\in\Omega_i\setminus\{i\}} F_\ell}, & j\neq i,\\[2ex] \dfrac{w_R F_i}{w_R F_i + \frac{1-w_R}{k}\sum_{\ell\in\Omega_i\setminus\{i\}} F_\ell}, & j=i, \end{cases}
which is normalized, ∑_j∈Ω_iW(s_i\gets s_j)=1. Eq. (<ref>) extends the traditional death–birth rule <cit.> by introducing a self-learning weight w_R, following a logic similar to that of the self-allocation. Agent i learns the strategy of agent j with a probability proportional to fitness within the group Ω_i, taking self-learning into consideration. The case j=i means that agent i does not learn a strategy from others. At w_R=0, Eq. (<ref>) reduces to the traditional death–birth rule, where the fitness of agent i is disregarded. At w_R=1/G, Eq. (<ref>) simplifies to the imitation rule, where the fitness of agent i is weighted equally with those of all neighbors. An elementary Monte Carlo step concludes once the randomly selected agent i updates its strategy. A full Monte Carlo step comprises N elementary steps, ensuring that the strategy of each agent is updated once on average. Our model's key parameters are the weight factors w_L and w_R, which dictate the bias in allocation and the degree of self-learning, respectively. In Fig. <ref>, we present the full parameter plane, highlighting the important weight values. These values have particular implications. When w_L=1, the total earnings from the communal pool are allocated solely to the focal player. Conversely, when w_L=0, every other participant benefits from the pool while the focal player gains nothing. The midway scenario w_L=1/G recovers the traditional public goods game (PGG), where all group members equally share the proceeds of the common pool.
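To make these dynamics concrete, the following minimal Python sketch implements one elementary Monte Carlo step of the model defined above on a periodic L×L lattice with the von Neumann neighborhood (G=5). The helper names and the example parameter values are ours, chosen only for illustration.

```python
# Minimal sketch of one elementary Monte Carlo step of the model above
# (von Neumann neighborhood, periodic LxL lattice); names are illustrative.
import math, random

def neighbors(i, L):
    x, y = i % L, i // L
    return [((x + 1) % L) + y * L, ((x - 1) % L) + y * L,
            x + ((y + 1) % L) * L, x + ((y - 1) % L) * L]

def payoff(i, s, L, r, c, wL):
    k = 4
    grp = [i] + neighbors(i, L)
    # game organized by i: the organizer keeps the share wL of the goods
    pi = wL * r * sum(s[j] for j in grp) * c - s[i] * c
    # games organized by i's neighbors: i receives (1 - wL)/k of those goods
    for g in neighbors(i, L):
        grp_g = [g] + neighbors(g, L)
        pi += (1 - wL) / k * r * sum(s[j] for j in grp_g) * c - s[i] * c
    return pi / (k + 1)

def elementary_step(s, L, r, c, wL, wR, delta):
    k = 4
    i = random.randrange(L * L)
    nbrs = neighbors(i, L)
    Fi = math.exp(delta * payoff(i, s, L, r, c, wL))
    Fn = [math.exp(delta * payoff(j, s, L, r, c, wL)) for j in nbrs]
    denom = wR * Fi + (1 - wR) / k * sum(Fn)
    probs = [(1 - wR) / k * F / denom for F in Fn] + [wR * Fi / denom]
    pick = random.choices(nbrs + [i], weights=probs)[0]
    s[i] = s[pick]   # with probability wR*Fi/denom the strategy is retained

L, c = 20, 1.0
s = [random.randint(0, 1) for _ in range(L * L)]
for _ in range(L * L):   # one full Monte Carlo step = N elementary steps
    elementary_step(s, L, r=4.0, c=c, wL=0.2, wR=0.2, delta=0.01)
print(sum(s) / (L * L))
```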
Shifting our attention to the other weight factor, w_R=0 signifies the classic death–birth dynamics, where the new strategy of the focal player is exclusively drawn from the strategies of the neighbors. When w_R=1/G, all strategies present in the group are potential candidates in equal measure, which aligns with the well-established imitation rule. Finally, in the limit where w_R → 1, players tenaciously cling to their current strategies, thereby causing the evolution to stagnate. On the parameter plane, we also demarcate with a dotted line the trajectory where both weight factors are simultaneously altered. This trajectory represents the typical system behavior when both the effects of biased allocation and self-confidence are operative in the extended model with equal weights. In the ensuing section, we explore and analyze how the critical synergy factor for cooperation success evolves in the presence of these skewed allocations and self-confidence biases. § THEORETICAL ANALYSIS We assume that the evolutionary process begins from a state with the presence of N_C cooperative players. In essence, the initial proportion of cooperation is N_C/N. When the selection strength, denoted as δ, equals zero, the system defaults to the dynamics of the voter model <cit.>. In this state, cooperation will ultimately dominate the entire population with a probability of ρ_C=N_C/N <cit.>. Consequently, under a minimal selection strength of δ→ 0^+, if ρ_C>N_C/N, selection leans towards cooperation, which implies that evolution promotes the success of cooperative behavior. Here, ρ_C can be gauged by the average final proportion of cooperation obtained from independent runs. Our objective in Section <ref> is to pinpoint the condition that enables the success of cooperation, while Section <ref> focuses on exploring the inherent features of this condition. §.§ The condition for cooperation success To discern the requisite condition for cooperation success, we utilize the identity-by-descent (IBD) method <cit.>. Initially, we introduce n-step random walks. Fundamentally, this refers to moving to a random neighbor during each 1-step random walk. The quantity after completing n-step walks is represented as x^(n), where x could be π, F, and s. The x^(n) quantity is indistinguishable among various agents since the square lattice is a vertex-transitive graph, where an agent cannot identify its location by examining the network structure. Based on the random walks' definition, we can rewrite the payoff calculation in Eq. (<ref>) to obtain an agent's expected payoff from n steps away, as described in Eq. (<ref>), π^(n) =1/k+1{(w_L r(k s^(n+1)+s^(n))c-s^(n)c)+k(1-w_L/k r(k s^(n+2)+s^(n+1))c-s^(n)c)} =(w_L/k+1r-1)s^(n)c+1+(k-1)w_L/k+1rs^(n+1)c+k(1-w_L)/k+1 rs^(n+2)c, which will later be useful for calculation. To simplify, we assume a single initial cooperative player 1 in our analysis, implying that N_C=1 and evolution favors cooperation if ρ_C>1/N. In this scenario, the condition for cooperation success under weak selection can be rewritten as per the equivalent form <cit.> as shown in Eq. (<ref>), ⟨∂/∂δ(ℬ_1-𝒟_1)⟩_[ δ=0; s_1=1 ]>0, where ⟨·⟩_[ δ=0; s_1=1 ] represents the expected value under neutral drift (δ=0) and single cooperator (s_1=1). ℬ_1 is the probability of agent 1 passing on its strategy to a neighbor. This occurs when a neighbor i∈Ω_1∖{1} of agent 1 is randomly selected with a 1/N probability to update the strategy and learns agent 1's strategy with a W(s_i s_1) probability. 
In the same vein, 𝒟_1 is the probability of agent 1's strategy being supplanted by a neighbor. This transpires when agent 1 is randomly selected with a 1/N probability to update its strategy and learns the strategy of a neighbor j∈Ω_1∖{1} with a W(s_1 s_j) probability. By applying Eq. (<ref>) and F_i=exp(δπ_i), we arrive at the equations summarized as follows: ℬ_1 =∑_i∈Ω_1∖{1}1/NW(s_i s_1) =∑_i∈Ω_1∖{1}1/N(1-w_R)/k·exp(δπ_1)/w_Rexp(δπ_i)+(1-w_R)/k·∑_ℓ∈Ω_i∖{i}exp(δπ_ℓ), 𝒟_1 =1/N∑_j∈Ω_1∖{1}W(s_1 s_j) =1/N∑_j∈Ω_1∖{1}(1-w_R)/k·exp(δπ_j)/w_Rexp(δπ_1)+(1-w_R)/k·∑_ℓ∈Ω_1∖{1}exp(δπ_ℓ). In the further steps, we substitute Eq. (<ref>) and Eq. (<ref>) into Eq. (<ref>) and compute it, as shown in Eq. (<ref>). ⟨∂/∂δ(ℬ_1-𝒟_1)⟩_[ δ=0; s_1=1 ]>0 ⇔  1-w_R/Nk( k⟨π_1⟩_[ δ=0; s_1=1 ] -w_R⟨∑_i∈Ω_1∖{1}π_i⟩_[ δ=0; s_1=1 ] -1-w_R/k⟨∑_i∈Ω_1∖{1}∑_ℓ∈Ω_i∖{i}π_ℓ⟩_[ δ=0; s_1=1 ])  -1-w_R/Nk( -kw_R ⟨π_1⟩_[ δ=0; s_1=1 ] +⟨∑_j∈Ω_1∖{1}π_j⟩_[ δ=0; s_1=1 ] -(1-w_R)⟨∑_ℓ∈Ω_1∖{1}π_ℓ⟩_[ δ=0; s_1=1 ])>0 ⇔ ⟨π_1⟩_[ δ=0; s_1=1 ] -2w_R/k(1+w_R)⟨∑_j∈Ω_1∖{1}π_j⟩_[ δ=0; s_1=1 ] -1-w_R/k^2(1+w_R)⟨∑_i∈Ω_1∖{1}∑_ℓ∈Ω_i∖{i}π_ℓ⟩_[ δ=0; s_1=1 ]>0 ⇔  π^(0) -2w_R/1+w_Rπ^(1) -1-w_R/1+w_Rπ^(2)>0. Following the definition of random walks starting from agent 1, we used Eq. (<ref>) in the last step of Eq. (<ref>). π^(0)=⟨π_1⟩_[ δ=0; s_1=1 ], π^(1)=1/k⟨∑_j∈Ω_1∖{1}π_j⟩_[ δ=0; s_1=1 ], π^(2)=1/k^2⟨∑_i∈Ω_1∖{1}∑_ℓ∈Ω_i∖{i}π_ℓ⟩_[ δ=0; s_1=1 ]. To transform the strategy quantity s^(n) into walk quantity p^(n), the probability that one returns to the starting vertex after n-step random walks, we use the substitution in Eq. (<ref>), as suggested by Allen and Nowak <cit.>: s^(n)-s^(n+1)=μ/2(Np^(n)-1)+𝒪(μ^2), where μ→ 0^+ is an auxiliary parameter, which will be eliminated later, and 𝒪(μ^2)=0. Based on Eq. (<ref>), we can then further develop Eq. (<ref>): s^(n) -2w_R/1+w_Rs^(n+1) -1-w_R/1+w_Rs^(n+2) =(s^(n)-s^(n+1)) +1-w_R/1+w_R(s^(n+1)-s^(n+2)) =μ/2(Np^(n)+1-w_R/1+w_RNp^(n+1)-2/1+w_R) +𝒪(μ^2). Utilizing this, we can further calculate the condition for cooperation success as given by Eq. (<ref>). First, we use Eq. (<ref>) to replace the payoff quantity π^(n) with strategy quantity s^(n). Second, we use Eq. (<ref>) to replace the strategy quantity s^(n) with walk quantity p^(n). This logic leads us to Eq. (<ref>): π^(0) -2w_R/1+w_Rπ^(1) -1-w_R/1+w_Rπ^(2)>0 ⇔ (w_L/k+1r-1)s^(0)c+1+(k-1)w_L/k+1rs^(1)c+k(1-w_L)/k+1 rs^(2)c -2w_R/1+w_R{(w_L/k+1r-1)s^(1)c+1+(k-1)w_L/k+1rs^(2)c+k(1-w_L)/k+1 rs^(3)c} -1-w_R/1+w_R{(w_L/k+1r-1)s^(2)c+1+(k-1)w_L/k+1rs^(3)c+k(1-w_L)/k+1 rs^(4)c}>0 ⇔ (w_L/k+1r-1) (s^(0)-2w_R/1+w_Rs^(1)-1-w_R/1+w_Rs^(2)) +1+(k-1)w_L/k+1r (s^(1)-2w_R/1+w_Rs^(2)-1-w_R/1+w_Rs^(3)) +k(1-w_L)/k+1 r (s^(2)-2w_R/1+w_Rs^(3)-1-w_R/1+w_Rs^(4))>0 ⇔ (w_L/k+1r-1) (Np^(0)+1-w_R/1+w_RNp^(1)-2/1+w_R) +1+(k-1)w_L/k+1r (Np^(1)+1-w_R/1+w_RNp^(2)-2/1+w_R) +k(1-w_L)/k+1 r (Np^(2)+1-w_R/1+w_RNp^(3)-2/1+w_R)>0. The walk quantity p^(n) can be directly perceived by analyzing the topology of the network structure. One remains in the starting vertex if not walking, so p^(0)=1. A single step cannot encompass leaving and returning to the starting vertex, hence p^(1)=0. On a square lattice, the probability that one returns to the starting vertex after two steps is p^(2)=1/k. Finally, the value of p^(3) varies from case to case. In short, p^(3)=0 for von Neumann neighborhood and p^(3)=3/64 for Moore neighborhood (for more details, refer to Ref. <cit.>). By applying the previously mentioned values of p^(0)=1, p^(1)=0, and p^(2)=1/k, but retaining p^(3), we can further calculate Eq. 
(<ref>) to reach the final result as shown in Eq. (<ref>): π^(0) -2w_R/1+w_Rπ^(1) -1-w_R/1+w_Rπ^(2)>0 ⇔ (w_L/k+1r-1) (N-2/1+w_R) +1+(k-1)w_L/k+1r(1-w_R/1+w_RN/k-2/1+w_R) +k(1-w_L)/k+1 r(N/k+1-w_R/1+w_RNp^(3)-2/1+w_R)>0 ⇔   r>(N-2+N w_R)(G-1)G/N(G-1)^2 (1-w_L)(1-w_R) p^(3)+N(G-2)(w_L-w_L w_R+w_R)+(N+2-2G)G≡ r^⋆. This provides the condition r>r^⋆ for cooperation success. Notably, the critical synergy factor r^⋆ is only a function of the population N, group size G, higher-order network structure p^(3), self-allocation w_L, and updating inertia w_R. Table <ref> summarizes the primary outcomes related to the critical synergy factor, r^⋆, along with their corresponding large population limits (N→ +∞), derived from taking specific parameters in Eq. (<ref>). Following the convention in much of the prior literature, we consider the death–birth rule (w_R=0) as the benchmark scenario. In this context, we present the reduced r^⋆ values corresponding to three distinct scenarios: equal allocation (w_L=1/G), allocation to other players (w_L=0), and allocation to the organizer (w_L=1). In addition, we explore a situation where the self-allocation and updating inertia are congruent (w_L=w_R≡ w), leading to consistency in the self-loops of allocation and updating. The trajectories of this case in the w_R-w_L parameter plane are visually represented in Fig. <ref> for an intuitive understanding. Table <ref> offers additional insights into the main outcomes associated with the critical synergy factor, r^⋆, in relation to specific neighborhood types. We concentrate on two commonly used cases: von Neumann neighborhood and Moore neighborhood. The former, von Neumann neighborhood, lacks triangle motifs, resulting in p^(3)=0. Conversely, the latter, Moore neighborhood, is a rudimentary structure on a two-dimensional lattice that incorporates overlapping neighbors, yielding p^(3)=3/64 <cit.>. §.§ The conflict between self-allocation and self-confidence Utilizing the analytical expression of the critical synergy factor r^⋆, we can examine the combined impact of self-allocation w_L and self-confidence w_R on cooperation. From an intuitive perspective, a decrease in the r^⋆ value needed for cooperation success (i.e., r>r^⋆) fosters cooperation. By referring to Eq. (<ref>), we can confirm that ∂ r^⋆/∂ w_L<0 holds for the specified neighborhood types. This indicates that an increase in self-allocation diminishes r^⋆ and thereby enhances cooperation. Fig. <ref>(a) portrays the critical synergy factor r^⋆ as a function of self-allocation w_L for von Neumann neighborhood under the condition of death–birth updating (w_R=0). Regardless of the population size, directing the public goods towards the organizer invariably stimulates cooperation. Similarly, we find ∂ r^⋆/∂ w_R>0 for the designated neighborhood types. This suggests that an increase in self-confidence, or alternatively, an increase in updating inertia, acts to obstruct cooperation. This effect aligns with observations made in simpler models by prior studies <cit.>. With the von Neumann neighborhood and w_L=1/G, the critical synergy factor r^⋆ as a function of updating inertia is depicted in Fig. <ref>(b). Across varying population sizes, an increase in updating inertia consistently hampers cooperation. The aforementioned observations create a fascinating dynamic when both effects coexist. 
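Before examining that interplay, the closed-form threshold above is easy to check numerically. The short Python sketch below evaluates r^⋆ exactly as written in Eq. (<ref>) and reproduces the values quoted later in the numerical-simulation section and the appendix; the function name and the scan over w are ours.

```python
# Numerical check of the closed-form threshold r* derived above; the helper
# name is ours.  p3 = 0 for the von Neumann neighborhood, 3/64 for Moore.
def r_star(N, G, p3, wL, wR):
    num = (N - 2 + N * wR) * (G - 1) * G
    den = (N * (G - 1) ** 2 * (1 - wL) * (1 - wR) * p3
           + N * (G - 2) * (wL - wL * wR + wR)
           + (N + 2 - 2 * G) * G)
    return num / den

# Reproduces the thresholds quoted in the simulation section (w = wL = wR):
for N in (25, 400, 10000):
    print(N, [round(r_star(N, 5, 0.0, w, w), 4) for w in (0.0, 0.3, 0.6)])
# 25    -> [5.4118, 4.9493, 5.1351]
# 400   -> [4.0612, 4.0280, 4.2992]
# 10000 -> [4.0024, 3.9835, 4.2571]
print(round(r_star(25, 9, 3 / 64, 0.0, 0.0), 4))   # Moore neighborhood: 10.6154

# Numerically locating the optimal self-loop weight discussed in the next
# subsection (N=25, von Neumann neighborhood):
ws = [i / 1000 for i in range(1, 1000)]
print(min(ws, key=lambda w: r_star(25, 5, 0.0, w, w)))   # about 0.326
```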
Specifically, the divergent outcomes of biased allocation and self-confidence pose a question: how does the system respond when we enhance the weights of these factors simultaneously? Does it stimulate or inhibit cooperation? To explore this, we set w_L=w_R≡ w and illustrate the critical synergy factor r^⋆ as a function of w in Fig. <ref>(c). The figure reveals that an initial increase in the self-loop of allocation and strategy updating fosters cooperation, but once the weight surpasses a certain level, this effect reverses, ultimately discouraging cooperation. There exists an optimal self-loop weight w_0, which minimizes the r^⋆ value and is thus most beneficial for cooperation. We can derive the analytical expression for this optimal self-loop value by solving ∂ r^⋆/∂ w=0. The solution is given as: w_0=1/N( -(N-2)+√(2)√(2(N-1)^2+N(N-G)(G-1)/(G-1)^2 p^(3)-G+2)), which is a function of population size N, group size G, and the higher-order network structure p^(3). This weight level provides the most favorable condition for the evolution of cooperation. By setting N→ +∞ in Eq. (<ref>), we obtain the large population limit of w_0 as: w_0=-1+√(2)√(2+G-1/(G-1)^2 p^(3)-G+2). To provide a broader perspective on the simultaneous influences of these factors, we introduce a heat map of the critical synergy factor r^⋆ across the complete w_R-w_L parameter plane in Fig. <ref>. The diagonal dotted line within the figure represents the trajectory discussed in Fig. <ref>(c). This plot reveals certain general characteristics regarding the collective impact of self-loop effects. Specifically, the immediate effect of biased payoff allocation on the critical synergy factor is more pronounced when w_R is small, whereas the w_R dependency of r^⋆ is moderate for large w_R values. The inverse is true when considering the w_R dependency of r^⋆, as it changes more dramatically when w_L is low, while the w_R dependency remains moderate for small w_L values. When maintaining the aforementioned diagonal trajectory, we can identify some general trends regarding the w-dependence. Specifically, we can confirm that the r^⋆ value at w=0 is consistently lower than the one at w=1, that is, .r^⋆|_w=0<.r^⋆|_w=1. Applying w=1 and w=0 in Eq. (<ref>), we find .r^⋆|_w=1=(N-1)G/(N-G) and .r^⋆|_w=0=(N-2)(G-1)G/[N(G-1)^2 p^(3)+(N+2-2G)G], respectively. Given that N(G-1)^2 p^(3)>0 always stands, we deduce .r^⋆|_w=0<(N-2)(G-1)G/[(N+2-2G)G]=[(N-1)G-(G-2)-N]/[N-G-(G-2)]. And since (N-1)G>N-G and -(G-2)<0, it follows that .r^⋆|_w=0<[(N-1)G-N]/(N-G)<(N-1)G/(N-G). Therefore, .r^⋆|_w=0<.r^⋆|_w=1 always holds true. This indicates that, on a larger scale, when both self-loop effects are significant, the outcome is dominated by the impact of self-confidence, which hinders cooperation. This effect is more pronounced in a topology containing triangle motifs, such as the Moore neighborhood where each player forms a G=9-member group with overlapping neighbors. This case is discussed in more detail in Appendix <ref>. § NUMERICAL SIMULATION To validate our theoretical analysis, we performed Monte Carlo simulations. Initially, each agent is randomly assigned either cooperation or defection, such that N_C≈ N/2. Consequently, as outlined at the beginning of Section <ref>, evolution favors cooperation if ρ_C>1/2. 
To compute the expected cooperation level ρ_C, we permit up to 40,000 full Monte Carlo steps per run (if all agents become either cooperators or defectors, that specific run may be terminated earlier), and record the cooperation proportion at the last step as the result of each run. The expected cooperation level ρ_C is then the average across multiple independent runs. Based on our empirical exploration, for N=25, ρ_C is the average over 1,000,000 runs; for N=400, ρ_C is the average over 10,000 runs; for N=10000, ρ_C is obtained from a single run. Using the von Neumann neighborhood, Fig. <ref> illustrates the expected cooperation level ρ_C as a function of the synergy factor r at w=0, w=0.3, and w=0.6. In Fig. <ref>(a), where N=25, substituting all parameter values into Eq. (<ref>) gives r^⋆=5.4118, 4.9493, 5.1351 for w=0, 0.3, and 0.6, respectively. Similarly, in Fig. <ref>(b), for N=400, we get r^⋆=4.0612, 4.0280, 4.2992. In Fig. <ref>(c), where N=10000, we obtain r^⋆=4.0024, 3.9835, 4.2571. As can be observed, the cooperation level ρ_C rises with an increase in the synergy factor r, and ρ_C>0.5 when r>r^⋆, thus affirming the theoretical analysis. § CONCLUSION Collaborating on a project does not necessarily equate to equal benefits from the resulting income. For instance, an individual acting as the organizer of a group may allocate a different proportion of public goods to themselves than to other participants. If everyone follows the same protocol, allocating more public goods to the organizer boosts the gains in the game managed by oneself, but simultaneously leads to fewer gains in games organized by neighbors. Consequently, the impact of biased allocation on the level of cooperation is far from a simple question. Prior studies have demonstrated that this seemingly strategy-neutral mechanism actually promotes cooperation by preventing the diffusion of public goods <cit.>. On the other hand, if an individual allocates more public goods to themselves as an organizer, this attitude might also imply that the individual is more authoritative and confident, and less inclined to change their current strategy. Past observations have revealed that this inertia in strategy updating inhibits cooperation by slowing the aggregation of cooperators <cit.>. Thus, it can be concluded that biased allocation and strategy updating inertia play opposing roles in the evolution of cooperation. Assuming that the measure of biased allocation and updating inertia are interconnected, this study focuses on their simultaneous presence and explores how they jointly influence cooperation. We derive a theoretical solution on a two-dimensional square lattice and identify the critical synergy factor r^⋆ required for cooperation success. Consequently, cooperators are more likely to dominate when r>r^⋆. Our primary interest lies in how r^⋆ fluctuates on the plane of weight factors, which determine biased allocation and the extent of strategy updating inertia. Upon introducing the self-loop w of allocation and updating, it initially promotes and later, for larger w values, inhibits cooperation. In this scenario, we can identify an optimal self-loop value w_0 that is most conducive to cooperation. In other cases, where the network topology contains triangle motifs, the impact of strategy inertia is more potent, thus increasing the self-loop w tends to hamper cooperation. Moreover, we theoretically demonstrate that the cooperation threshold at w=0 is always smaller than at w=1. 
This suggests that the inhibitory effect of self-confidence on cooperation generally outweighs the facilitative effect of self-allocation on cooperation when the allocation and updating self-loop w takes extreme values. These observations propose that although biased allocation may appear as an unfair protocol, its impact on cooperation is decidedly not detrimental. However, the self-confidence driven strategy updating inertia is always harmful, and cannot be offset by the effect of allocation. § ACKNOWLEDGEMENT A.S. was supported by the National Research, Development and Innovation Office (NKFIH) under Grant No. K142948. § MOORE NEIGHBORHOOD Our primary results are summarized in Eq. (<ref>). It proposes that topology slightly influences the critical synergy factor r^⋆ through the parameter G. However, a more complex consequence is embodied in the value of p^(3). This factor creates a stark distinction between the von Neumann and Moore neighborhoods, regardless of using the same vertex-transitive square lattice. For the von Neumann neighborhood, the three-step quantity p^(3)=0, as there is no triangle motif. To explore the consequences of a non-zero p^(3), we examine the Moore neighborhood, the simplest two-dimensional lattice that contains higher-order structure where p^(3)=3/64 <cit.>. The first two panels of Fig. <ref> confirm that the separate impacts of biased allocation and strategy updating inertia are similar to those observed for the von Neumann neighborhood. However, their combined influence on r^⋆ diverges from the previous observation, as the self-confidence-based inertia is significantly stronger in this context, making the increase of the mutual weight factor w detrimental to the success of cooperation. This effect is generally valid and becomes evident when we compare the color-coded heat map of the critical synergy factor r^⋆ on the w_R-w_L parameter plane. The main difference between the last panels of Fig. <ref> and Fig. <ref> is the minimal change in the value of r^⋆ as we move horizontally on the parameter plane of Fig. <ref>(c). This suggests that changes in w_L have only a minimal impact on cooperation, because the value of w_R is the determining factor here. Our final Fig. <ref> presents a comparison of the results from our analytical and numerical calculations. In Fig. <ref>(a), where N=25, substituting all parameter values into Eq. (<ref>) yields r^⋆=10.6154, 10.6087, 11.4000 for w=0, 0.3, and 0.6, respectively. Similarly, in Fig. <ref>(b), for N=400, we obtain r^⋆=6.1546, 6.8158 for w=0, 0.3. In Fig. <ref>(c), where N=10000, we calculate r^⋆=6.0060, 6.6725, 7.5061 for w=0, 0.3, and 0.6. As before, the simulations confirm our theoretical predictions well. 53 natexlab#1#1 [#1],#1 [Nowak(2006)]nowak_s06 authorM. A. Nowak, titleFive rules for the evolution of cooperation, journalScience volume314 (year2006) pages1560–1563. [Perc et al.(2013)Perc, Gómez-Gardeñes, Szolnoki, and Floría and Y. Moreno]perc_jrsi13 authorM. Perc, authorJ. Gómez-Gardeñes, authorA. Szolnoki, authorL. M. Floría and Y. Moreno, titleEvolutionary dynamics of group interactions on structured populations: a review, journalJ. R. Soc. Interface volume10 (year2013) pages20120997. [Sigmund(2010)]sigmund_10 authorK. Sigmund, titleThe Calculus of Selfishness, publisherPrinceton University Press, addressPrinceton, NJ, year2010. [Wang et al.(2022)Wang, Dai, He, Yu, and Shen]wang_jw_pla22 authorJ. Wang, authorW. Dai, authorJ. He, authorF. Yu, authorX. 
Shen, titlePersistent imitation paves the way for cooperation in public goods game, journalPhys. Lett. A volume447 (year2022) pages128302. [Xiao et al.(2022)Xiao, Zhang, Li, Dai, and Yang]xiao_sl_epjb22 authorS. Xiao, authorL. Zhang, authorH. Li, authorQ. Dai, authorJ. Yang, titleEnvironment-driven migration enhances cooperation in evolutionary public goods games, journalEur. Phys. J. B volume95 (year2022) pages67. [Wang and Szolnoki(2022)]wang2022reversed authorC. Wang, authorA. Szolnoki, titleA reversed form of public goods game: equivalence and difference, journalNew J. Phys. volume24 (year2022) pages123030. [Hua and Liu(2023)]hua_sj_csf3 authorS. Hua, authorL. Liu, titleFacilitating the evolution of cooperation through altruistic punishment with adaptive feedback, journalChaos, Solit. and Fract. volume173 (year2023) pages113669. [Szolnoki et al.(2009)Szolnoki, Perc, and Szabó]szolnoki_pre09c authorA. Szolnoki, authorM. Perc, authorG. Szabó, titleTopology-independent impact of noise on cooperation in spatial public goods games, journalPhys. Rev. E volume80 (year2009) pages056109. [Yu et al.(2022)Yu, Wang, and He]yu_fy_csf22 authorF. Yu, authorJ. Wang, authorJ. He, titleInequal dependence on members stabilizes cooperation in spatial public goods game, journalChaos, Solit. and Fract. volume165 (year2022) pages112755. [Wang et al.(2021)Wang, Pan, Ju, and He]wang2021public authorC. Wang, authorQ. Pan, authorX. Ju, authorM. He, titlePublic goods game with the interdependence of different cooperative strategies, journalChaos. Solit. and Fract. volume146 (year2021) pages110871. [Wang and Huang(2022)]wang2022between authorC. Wang, authorC. Huang, titleBetween local and global strategy updating in public goods game, journalPhysica A volume606 (year2022) pages128097. [Wang and Sun(2023a)]wang2023public authorC. Wang, authorC. Sun, titlePublic goods game across multilayer populations with different densities, journalChaos. Solit. and Fract. volume168 (year2023a) pages113154. [Wang and Sun(2023b)]wang_cq_c23 authorC. Wang, authorC. Sun, titleZealous cooperation does not always promote cooperation in public goods games, journalChaos volume33 (year2023b) pages063111. [Xie et al.(2023)Xie, Liu, Wang, and Jiang]xie_k_csf23 authorK. Xie, authorX. Liu, authorH. Wang, authorY. Jiang, titleMulti-heterogeneity public goods evolutionary game on lattice, journalChaos. Solit. and Fract. volume172 (year2023) pages113562. [Ding et al.(2023)Ding, Wang, Zhao, Gu, and Wang]ding_r_csf23 authorR. Ding, authorX. Wang, authorJ. Zhao, authorC. Gu, authorT. Wang, titleThe evolution of cooperation in spatial public goods games under a risk-transfer mechanism, journalChaos, Solitons and Fractals volume169 (year2023) pages113236. [Zhang et al.(2010)Zhang, Zhang, Xie, and Wang]zhang_cy_epl10 authorC. Zhang, authorJ. Zhang, authorG. Xie, authorL. Wang, titleDiversity of game strategies promotes the evolution of cooperation in public goods games, journalEPL volume90 (year2010) pages68005. [Henrich et al.(2001)Henrich, Boyd, Bowles, Camerer, Fehr, Gintis, and McElreath]henrich_aer01 authorJ. Henrich, authorR. Boyd, authorS. Bowles, authorC. Camerer, authorE. Fehr, authorH. Gintis, authorR. McElreath, titleIn search of homo economicus: behavioral experiments in 15 small-scale societies, journalAm. Econ. Rev. volume91 (year2001) pages73–78. [Nowak et al.(1995)Nowak, May, and Sigmund]nowak_sa95 authorM. A. Nowak, authorR. M. May, authorK. 
Sigmund, titleArithmetics of mutual help, journalScientific American volume272 (year1995) pages76–81. [Allen et al.(2013)Allen, Gore, and Nowak]allen2013spatial authorB. Allen, authorJ. Gore, authorM. A. Nowak, titleSpatial dilemmas of diffusible public goods, journalElife volume2 (year2013) pagese01169. [Su et al.(2018)Su, Wang, and Stanley]su2018understanding authorQ. Su, authorL. Wang, authorH. E. Stanley, titleUnderstanding spatial public goods games on three-layer networks, journalNew J. Phys. volume20 (year2018) pages103030. [Zhang et al.(2012)Zhang, Shi, Liu, and Wang]zhang_hf_pa12 authorH. Zhang, authorD. Shi, authorR. Liu, authorB. Wang, titleDynamic allocation of investments promotes cooperation in spatial public goods game, journalPhysica A volume391 (year2012) pages2617–2622. [Cong et al.(2016)Cong, Li, Wang, and Zhao]cong_r_epl16 authorR. Cong, authorK. Li, authorL. Wang, authorQ. Zhao, titleCooperation induced by wise incentive allocation in spontaneous institution, journalEPL volume115 (year2016) pages38002. [Szolnoki and Chen(2020)]szolnoki_amc20 authorA. Szolnoki, authorX. Chen, titleBlocking defector invasion by focusing on the most successful partner, journalAppl. Math. Comput. volume385 (year2020) pages125430. [Wang et al.(2018)Wang, He, and Chen]wang_q_amc18 authorQ. Wang, authorN. He, authorX. Chen, titleReplicator dynamics for public goods game with resource allocation in large populations, journalAppl. Math. Comput. volume328 (year2018) pages162–170. [Bin and Yue(2023)]bin_l_amc23 authorL. Bin, authorW. Yue, titleCo-evolution of reputation-based preference selection and resource allocation with multigame on interdependent networks, journalAppl. Math. Comput. volume456 (year2023) pages128128. [Güth et al.(1982)Güth, Schmittberger, and Schwarze]guth_jebo82 authorW. Güth, authorR. Schmittberger, authorB. Schwarze, titleAn experimental analysis of ultimatum bargaining, journalJ. Econ. Behav. Org. volume3 (year1982) pages367–388. [Sigmund et al.(2002)Sigmund, Fehr, and Nowak]sigmund_sa02 authorK. Sigmund, authorE. Fehr, authorM. A. Nowak, titleThe economics of fair play, journalSci. Am. volume286 (year2002) pages82–87. [Szolnoki et al.(2012)Szolnoki, Perc, and Szabó]szolnoki_prl12 authorA. Szolnoki, authorM. Perc, authorG. Szabó, titleDefense mechanisms of empathetic players in the spatial ultimatum game, journalPhys. Rev. Lett. volume109 (year2012) pages078701. [Wang et al.(2014)Wang, Chen, and Wang]wang_xf_srep14 authorX. Wang, authorX. Chen, authorL. Wang, titleRandom allocation of pies promotes the evolution of fairness in the ultimatum game, journalSci. Rep. volume4 (year2014) pages4534. [Chen et al.(2015)Chen, Wu, Li, Wu, and Wang]chen_w_epl15 authorW. Chen, authorT. Wu, authorZ. Li, authorN. Wu, authorL. Wang, titleHeterogenous allocation of chips promotes fairness in the ultimatum game, journalEPL volume109 (year2015) pages68006. [Szolnoki et al.(2012)Szolnoki, Perc, and Szabó]szolnoki_epl12 authorA. Szolnoki, authorM. Perc, authorG. Szabó, titleAccuracy in strategy imitations promotes the evolution of fairness in the spatial ultimatum game, journalEPL volume100 (year2012) pages28005. [Fan et al.(2017)Fan, Zhang, Luo, and Zhang]fan_rg_pa17 authorR. Fan, authorY. Zhang, authorM. Luo, authorH. Zhang, titlePromotion of cooperation induced by heterogeneity of both investment and payoff allocation in spatial public goods game, journalPhysica A volume465 (year2017) pages454–463. [Peng et al.(2010)Peng, Yang, Wang, Chen, and Wang]peng_d_epjb10 authorD. Peng, authorH.-X. 
Yang, authorW.-X. Wang, authorG. R. Chen, authorB.-H. Wang, titlePromotion of cooperation induced by nonuniform payoff allocation in spatial public goods game, journalEur. Phys. J. B volume73 (year2010) pages455–459. [Meloni et al.(2017)Meloni, Xia, and Moreno]meloni_rsos17 authorS. Meloni, authorC.-Y. Xia, authorY. Moreno, titleHeterogeneous resource allocation can change social hierarchy in public goods games, journalR. Soc. open sci. volume4 (year2017) pages170092. [Perc and Szolnoki(2008)]perc_pre08 authorM. Perc, authorA. Szolnoki, titleSocial diversity and promotion of cooperation in the spatial prisoner's dilemma game, journalPhys. Rev. E volume77 (year2008) pages011904. [Santos et al.(2008)Santos, Santos, and Pacheco]santos_n08 authorF. C. Santos, authorM. D. Santos, authorJ. M. Pacheco, titleSocial diversity promotes the emergence of cooperation in public goods games, journalNature volume454 (year2008) pages213–216. [Szabó and Hauert(2002)]szabo_prl02 authorG. Szabó, authorC. Hauert, titlePhase transitions and volunteering in spatial public goods games, journalPhys. Rev. Lett. volume89 (year2002) pages118101. [Li et al.(2016)Li, Szolnoki, Cong, and Wang]li_k_srep16 authorK. Li, authorA. Szolnoki, authorR. Cong, authorL. Wang, titleThe coevolution of overconfidence and bluffing in the resource competition game, journalSci. Rep. volume6 (year2016) pages21104. [Szolnoki and Chen(2018)]szolnoki_pre18 authorA. Szolnoki, authorX. Chen, titleReciprocity-based cooperative phalanx maintained by overconfident players, journalPhys. Rev. E volume98 (year2018) pages022309. [Wang and Szolnoki(2023)]wang2023evolution authorC. Wang, authorA. Szolnoki, titleEvolution of cooperation under a generalized death-birth process, journalPhys. Rev. E volume107 (year2023) pages024303. [Szolnoki et al.(2009)Szolnoki, Perc, Szabó, and Stark]szolnoki_pre09 authorA. Szolnoki, authorM. Perc, authorG. Szabó, authorH.-U. Stark, titleImpact of aging on the evolution of cooperation in the spatial prisoner's dilemma game, journalPhys. Rev. E volume80 (year2009) pages021901. [Liu et al.(2010)Liu, Rong, Jia, and Wang]liu_rr_epl10 authorR.-R. Liu, authorZ. Rong, authorC.-X. Jia, authorB.-H. Wang, titleEffects of diverse inertia on scale-free-networked prisoner's dilemma games, journalEPL volume91 (year2010) pages20002. [Zhang et al.(2011)Zhang, Fu, Wu, Xie, and Wang]zhang_yl_pre11 authorY. Zhang, authorF. Fu, authorT. Wu, authorG. Xie, authorL. Wang, titleInertia in strategy switching transforms the strategy evolution, journalPhys. Rev. E volume84 (year2011) pages066103. [Wang and Szolnoki(2023)]wang2023inertia authorC. Wang, authorA. Szolnoki, titleInertia in spatial public goods games under weak selection, journalAppl. Math. Comput. volume449 (year2023) pages127941. [Wang et al.(2023)Wang, Zhu, and Szolnoki]wang2023conflict authorC. Wang, authorW. Zhu, authorA. Szolnoki, titleThe conflict between self-interaction and updating passivity in the evolution of cooperation, journalChaos, Solit. and Fract. volume173 (year2023) pages113667. [Ohtsuki and Nowak(2006)]ohtsuki_jtb06 authorH. Ohtsuki, authorM. A. Nowak, titleThe replicator equation on graphs, journalJ. Theor. Biol. volume243 (year2006) pages86–97. [Nowak et al.(2004)Nowak, Sasaki, Taylor, and Fudenberg]nowak_n04b authorM. A. Nowak, authorA. Sasaki, authorC. Taylor, authorD. Fudenberg, titleEmergence of cooperation and evolutionary stability in finite populations, journalNature volume428 (year2004) pages646–650. 
[McAvoy et al.(2020)McAvoy, Allen, and Nowak]mcavoy2020social authorA. McAvoy, authorB. Allen, authorM. A. Nowak, titleSocial goods dilemmas in heterogeneous societies, journalNat. Human Behav. volume4 (year2020) pages819–831. [Clifford and Sudbury(1973)]clifford1973model authorP. Clifford, authorA. Sudbury, titleA model for spatial conflict, journalBiometrika volume60 (year1973) pages581–588. [Cox and Griffeath(1983)]cox1983occupation authorJ. T. Cox, authorD. Griffeath, titleOccupation time limit theorems for the voter model, journalAnnals Prob. (year1983) pages876–893. [Cox and Griffeath(1986)]cox1986diffusive authorJ. T. Cox, authorD. Griffeath, titleDiffusive clustering in the two dimensional voter model, journalAnnals Prob. (year1986) pages347–370. [Allen and Nowak(2014)]allen2014games authorB. Allen, authorM. A. Nowak, titleGames on graphs, journalEMS Surv. Math. Sci. volume1 (year2014) pages113–151. [Nowak et al.(2010)Nowak, Tarnita, and Wilson]nowak2010evolution authorM. A. Nowak, authorC. E. Tarnita, authorE. O. Wilson, titleThe evolution of eusociality, journalNature volume466 (year2010) pages1057–1062.
http://arxiv.org/abs/2307.05247v1
20230711132313
Colloids in Two-Dimensional Active Nematics: Conformal Cogs and Controllable Spontaneous Rotation
[ "Alexander J. H. Houston", "Gareth P. Alexander" ]
cond-mat.soft
[ "cond-mat.soft" ]
School of Mathematics, University of Leeds, Leeds, LS2 9JT, United Kingdom School of Physics, Engineering and Technology, University of York, Heslington, York, YO10 5DD, United Kingdom [email protected] Department of Physics, Gibbet Hill Road, University of Warwick, Coventry, CV4 7AL, United Kingdom. A major challenge in the study of active systems is to harness their non-equilibrium dynamics into useful work. We address this by showing how to design colloids with controllable spontaneous propulsion or rotation when immersed in active nematics. This is illustrated for discs with tilted anchoring and chiral cogs, for which we determine the nematic director through conformal mappings. Our analysis identifies two regimes of behaviour for chiral cogs: orientation-dependent handedness and persistent active rotation. Finally, we provide design principles for active nematic colloids to achieve desired rotational dynamics. Colloids in Two-Dimensional Active Nematics: Conformal Cogs and Controllable Spontaneous Rotation Gareth P. Alexander August 12, 2023 ================================================================================================= § INTRODUCTION A broad class of both biological and synthetic materials are maintained out of equilibrium by motile constituents which exhibit orientational ordering <cit.>, including cell monolayers <cit.>, tissues <cit.>, bacteria in liquid crystalline environments <cit.> and bacterial suspensions <cit.>, and synthetic suspensions of microtubules <cit.>. Such materials may be modelled as active liquid crystals, of which active nematics <cit.> have garnered the most attention, although consideration has also been given to smectic <cit.>, cholesteric <cit.> and hexactic <cit.> phases. In active nematics the fundamental excitations are topological defects <cit.>, which acquire self-propulsive <cit.> and self-orienting <cit.> dynamics. They have also been shown to be foci for biological functionality, for example in epithelial cell apoptosis <cit.>, morphogenesis <cit.> and the formation of bacterial colonies <cit.> and biofilms <cit.>. Consequently, much effort has gone into controlling active nematics through confinement <cit.>, external fields <cit.>, geometry and topology <cit.>. An ongoing challenge across all forms of active matter is to devise means of rectifying the non-equilibrium dynamics of active systems, allowing for the extraction of useful work and the development of active micromachines <cit.>. Such a harnessing of active dynamics is strikingly typified by bacterial ratchets <cit.> – the spontaneous and persistent rotation of chiral cogs immersed in a bath of bacteria, a phenomenon that is precluded in equilibrium systems. These experiments were performed with an isotropic, unordered collection of bacteria. In light of the many active systems which exhibit nematic ordering, it is important to establish whether colloids are still able to produce coherent dynamics from microscale activity and in particular if the same active ratchet effects can occur. There is a long and successful history of using colloids to modify the behaviour of passive nematics <cit.>, allowing the formation of metamaterials <cit.>, the control of topological defects <cit.> and the tuning of colloidal geometry to induce certain elastic distortions <cit.>. However, the understanding of how to use colloidal inclusions to induce prescribed dynamics is in comparative infancy. 
There has been work on the dynamics of colloids in scalar active matter <cit.> as well as active droplets in nematic environments <cit.> and various realisations of propulsive <cit.> and rotational <cit.> colloids in active nematics, but a theoretical understanding of how to design a colloid to produce a desired dynamical response is lacking. Here we build upon our recent `active nematic multipole' framework for determining active responses to elucidate the connection between colloidal design and the induced dynamical response, focusing on discs with tilted anchoring and chiral cogs. Of particular relevance as a point of comparison for our work is the recent demonstration of persistent rotation for chiral colloids in active nematics <cit.>. Our main findings are that spontaneous propulsion and rotation are generic properties of colloids in active nematics and can be tuned by either anchoring conditions or colloidal geometry. We illustrate the role of anchoring through discs with a uniformly tilted boundary condition for the director, showing that this tilt angle naturally determines the direction of propulsion or the magnitude and sign of rotation. The effect of colloidal geometry is demonstrated using chiral cogs, for which we find two regimes of active response - orientation-dependent chirality for cogs with few teeth and persistent rotation for those with more. We also provide optimal design principles for generating active rotation via cogs. The remainder of this paper is organised as follows. In Section <ref> we present a summary of our previous work on active nematic multipoles <cit.>, showing how dipole distortions produce net active forces and propulsion while quadrupoles lead to torques and rotation. In Section <ref> we determine analytically the energy minimising director in the presence of colloidal discs and show how the multipole structure and hence active response can be determined by the director anchoring. Additionally, this section provides an intuition for our subsequent results concerning chiral cogs, since, in the limit of many teeth, their behaviour converges onto the `classical limit' of discs with tilted anchoring. Section <ref>, in which we study cog colloids, forms the main part of the paper. In it we first construct the conformal maps and principles for the lowest energy director boundary conditions that we need to determine the induced director distortion. Then we once again perform a multipole expansion of the director to infer the active response of the cogs, before discussing how this is augmented by the number of teeth, their angle and boundary conditions. Section <ref> contains a summary and discussion. § ACTIVE NEMATIC MULTIPOLES, FORCES AND TORQUES We summarise briefly the description of two-dimensional active nematic multipole states and their connection to active forces and torques <cit.>. In a minimal analytical approximation the director field is taken to have its equilibrium form and the effect of activity is captured through the flows induced by the usual active stress, σ^a = - ζ nn, for that equilibrium director (<ref>). We consider director fields that can be linearised around a (locally) uniformly aligned state, n = e_y + δ n e_x. Taking a one-elastic-constant Frank free energy, the equilibrium condition on the director field is that δ n should be harmonic, ∇^2 δ n = 0. 
We write the general solution as a multipole expansion δ n=Aln(r/R)/ln(a/R)+1/2∑_l=0^∞a^l(c_l∂^l_zln r+c̅_l∂^l_z̅ln r), where a is a characteristic length scale associated with the `core region', and ∂_z=1/2(∂_x-i∂_y), ∂_z̅=1/2(∂_x+i∂_y) are the usual derivatives in the complex variable z=x+iy. The first term in (<ref>) is a monopole distortion whose strength is set by a large lengthscale R. At each order there are two distinct multipoles. For example the two dipoles may be written as x/r^2 and y/r^2, or equivalently as the real and imaginary parts of ∂_z ln r. The complex coefficient c_l specifies the weightings of the two multipoles at order l, such that R{ c_l} provides the weighting of R{∂_z̅^l ln r } and similarly for the imaginary part. Linearising in the director deformation δ n, the active flows are given by the continuity and Stokes equations ∇· u = 0 , - ∇ p + μ∇^2 u = ζ[ e_y ∂_x δ n + e_x∂_y δ n ] , where u is the fluid velocity, p the pressure and μ an isotropic viscosity. The activity is extensile when ζ is positive and contractile when ζ is negative. In the linearised Stokes equation (<ref>) the activity has the structure of a multipole expansion of derivatives on the monopole source ln r, obtained from the expansion of δ n (<ref>), and the general solution can therefore be given by acting with the same derivatives of the fundamental response to a monopole <cit.>. In two dimensions this fundamental active response is <cit.> u = ζϕ_0/8μln(a/R)[ x^2-y^2/r^2(-y 𝐞_x + x 𝐞_y ) + 2lnr/R ( y 𝐞_x + x 𝐞_y ) ] , p = -ζϕ_0/ln(a/R)xy/r^2 , where ϕ_0 is the angle of rotation of the director on the surface r=a. This minimal analytic approach has been found to be effective in characterising the active flows around defects in two dimensions <cit.>, defect loops in three dimensions <cit.>, in active turbulence <cit.>, and on curved surfaces <cit.>. It may be considered to correspond to a limit of weak activity, however, even when there are defects, their structure may remain close to that in equilibrium. We can gain general insight into the active response of the nematic by considering the net contributions of the active stresses to the force and torque when integrated over a circle of radius r. This active force and torque are ∫ζ 𝐧𝐧·𝐞_r rdθ ≈∫ζ{yδ n/r𝐞_x + xδ n/r𝐞_y } rdθ =ζ aπ/2[I{ c_1}𝐞_x+R{ c_1}𝐞_y], ∫𝐱×ζ𝐧𝐧·𝐞_r rdθ ≈∫ζ(x^2-y^2)δ n/r rdθ =-ζ a^2π/2R{ c_2} , and are determined, respectively, by the dipole and quadrupole coefficients c_1 and c_2. We see that in two dimensions both dipoles will self-propel if free to move and that there is a single chiral quadrupole which produces a rotational response. Comparing with (<ref>) we see that motion along 𝐞_x and 𝐞_y is dictated by the imaginary and real parts of c_1 respectively, such that the magnitude and phase of c_1 determine the speed and direction of motion. Rotational effects are governed by R{ c_2}, which we denote by C_↺, where the arrow indicates the sense of rotation that results due to a positive chiral quadrupole coefficient in an extensile system. Consequently, much of the remainder of this paper is dedicated to means of controlling this chiral quadrupole coefficient, considering first discs with tilted boundary conditions and then chiral cogs with normal anchoring. Since the discs with tilted anchoring are a high-side-number limit of the chiral cogs, understanding their behaviour will provide intuitive principles for the more complicated cog dynamics. 
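To make the link between the multipole coefficients and the net active force and torque concrete, the quadratures above can be checked numerically. The following short NumPy sketch is an editorial illustration (not part of the original text; the coefficient values are arbitrary): it builds the l=1 and l=2 terms of the multipole expansion on a circle of radius r and integrates the linearised active stress against them.

```python
import numpy as np

# Illustrative values (editorial assumptions, not taken from the text).
a, r, zeta = 1.0, 5.0, 1.0        # core radius, radius of the integration circle, activity
c1 = 0.7 - 0.3j                   # dipole coefficient c_1
c2 = -0.4 + 0.9j                  # quadrupole coefficient c_2

theta = np.linspace(0.0, 2.0 * np.pi, 20001)
x, y = r * np.cos(theta), r * np.sin(theta)
z = x + 1j * y

# l = 1 and l = 2 terms of delta n = (1/2) sum_l a^l (c_l d_z^l ln r + c.c.),
# using d_z ln r = 1/(2z) and d_z^2 ln r = -1/(2 z^2).
dn = a * np.real(c1 / (2.0 * z)) + a**2 * np.real(-c2 / (2.0 * z**2))

# Net active force and torque of the linearised active stress over the circle.
Fx = zeta * np.trapz(y * dn, theta)
Fy = zeta * np.trapz(x * dn, theta)
T = zeta * np.trapz((x**2 - y**2) * dn, theta)

print(Fx, zeta * a * np.pi / 2.0 * c1.imag)     # force along e_x vs  zeta a pi/2 Im(c1)
print(Fy, zeta * a * np.pi / 2.0 * c1.real)     # force along e_y vs  zeta a pi/2 Re(c1)
print(T, -zeta * a**2 * np.pi / 2.0 * c2.real)  # torque        vs -zeta a^2 pi/2 Re(c2)
```

In this check only the dipole term contributes to the net force and only the quadrupole term to the net torque, in line with the expressions above.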
§ DISC COLLOIDS WITH VARYING ANCHORING CONDITIONS In this section we determine the energy-minimising director in the presence of a colloid. We can then read off the relevant multipole coefficients and infer the active response from the results of Section <ref>. With the aim of understanding both the dipole propulsive and quadrupole rotational active response we consider a colloid first with a single companion bulk defect and then with a pair of antipodal defects. The director is specified by an angle ϕ, which in a one-elastic-constant approximation must be a harmonic function. To maintain consistency with our far-field description 𝐧=𝐞_y+δ n𝐞_x we define the director angle such that 𝐧=-sinϕ 𝐞_x + cosϕ 𝐞_y, with ϕ vanishing asymptotically. In the far-field region the correspondence between the exact solution and the multipole expansion of (<ref>) is therefore given by δ n=-ϕ. §.§ Spontaneous Motion We begin by considering a colloidal disc of radius a accompanied by a single bulk defect whose charge compensates the topological index of its boundary condition. We take the boundary condition to be one of uniform tilt from normal anchoring by an angle α. This boundary condition has index +1 and so the colloid is accompanied by a -1 defect which we take to have a position described by the complex number z_d=r_d e^iθ_d, with θ_d measured anticlockwise from the far-field direction, 𝐞_y. The director angle may be written as ϕ=I{lnΦ}, with Φ a meromorphic function <cit.>. In particular, by using the image system appropriate to a disc, one finds that the solution commensurate with the colloidal boundary condition we have described is ϕ=I{ln[z^2/(z-z_d)(z-a^2/z̅_d)]}+(θ_d+α)ln(√(zz̅)/R)/ln(a/R). This solution has one complex free parameter z_d, to be determined by minimisation of the Frank free energy as described in the Appendix and considered previously in the context of smectic C films <cit.>. Expanding the logarithm in (<ref>) we find that the dipole coefficient is given by c_1=-2r_d^2+a^2/ar_de^iθ_d, such that as expected, the dipole strength is set by the distance of the bulk -1 defect from the virtual +2 defect at the centre of the colloid, while the character of the dipole is determined by the orientation of this separation vector with respect to the far field. While all functions of the form of (<ref>) provide a minimum energy configuration for a given bulk defect location, there is still the question of which defect position provides the global energy minimiser for a specific colloidal anchoring condition. As determined in the Appendix, the Frank free energy is minimised by θ_d=-α and r_d=√(2)a <cit.>, such that c_1=-3√(2)e^-iα. As illustrated in Figure <ref>, this has the consequence that the direction of colloidal self-propulsion is directly determined by the anchoring condition. Note that, although the boundary condition is invariant under changing α by π, the dipole distortion is not. There are therefore two equal energy defect locations for any boundary condition, whose dipole coefficients differ by a sign. This is shown in Figure <ref> for normal anchoring, for which the bulk defect may sit below or above the colloid, with the motion of the colloid being towards the companion defect in an extensional system. However, between these two configurations the position vector of the bulk defect and the self-propulsion direction rotate in opposite senses so that generically the colloid does not propel towards the defect. 
This counterrotation of the propulsion direction compared to the bulk defect location has been observed in the context of Janus colloids <cit.> and can be understood by making an association between dipole distortions and pairs of ± 1/2 defects <cit.>. In contractile systems the colloidal motion is reversed, such that it is towards the companion defect for tangential anchoring and away from it for normal anchoring. §.§ Spontaneous Rotation In the above case the dipole term always dominates. For a situation where the quadrupole is dominant we consider a disc with two -1/2 bulk defects. Taking these to be at positions z_d1=r_d1e^iθ_d1 and z_d2=r_d2e^iθ_d2 the appropriate harmonic functions have the form ϕ =I{ln[z^2/√((z-z_d1)(z-a^2/z̅_d1)(z-z_d2)(z-a^2/z̅_d2))]} +(θ_d1+θ_d2/2+α)ln(√(zz̅)/R)/ln(a/R). The determination of the global energy minimiser can again be found in prior work <cit.>, although we provide the calculation in the Appendix, with the result being that the defects sit at antipodal points with angles -α±π/2 and with a common radial displacement of (7/3)^1/4a. With this in place, expansion of (<ref>) gives the quadrupole coefficient as c_2=-10/√(21)ie^-2iα. As before, the character of the nematic multipole is determined by the director anchoring condition. That the quadrupole varies with α at twice the rate of the dipole is a direct consequence of its inversion symmetry; indeed, for an order l multipole c_l∼e^-ilα. Consequently the net active torque varies with anchoring angle as sin2α. The active response is shown in Figure <ref>. The chiral quadrupole coefficient C_↺ may be inferred from the defect configuration as follows. When there is a vertical mirror line C_↺ is zero; when the defects are placed upper right and lower left C_↺ is positive, and when positioned conversely it is negative. This can also be discerned from the director orientation, represented by the colour map. For a purely achiral distortion the colour map forms a checkerboard pattern, while when C_↺ is non-zero the colours form a cross, with the colours inverted when the sign of C_↺ changes. The active flow is purely radial for normal and tangential anchoring, while the angular flow component is maximal for anchoring angles of ±π/4. Variation of the anchoring condition therefore allows both the sign and magnitude of the colloidal self-rotation to be determined. § COG COLLOIDS From chirality via anchoring conditions on a disc, we now turn our attention to controlling chiral active responses geometrically using chiral cogs. In particular we seek to understand how active ratchet behaviour such as is observed in bacterial baths <cit.> might be generated in nematically ordered systems, as recently demonstrated experimentally <cit.>. In Section <ref> we identified the chiral quadrupole as being responsible for the generation of a net active torque and so persistent actively-driven rotation will require a consistent sign of this multipole as the orientation of the colloid is varied. We find that the minimum energy director configuration changes discontinuously with the orientation of the cog whenever an edge passes through the far-field director orientation. In particular, this can lead to changes in the sign of the chiral quadrupole, a form of orientation-dependent chirality <cit.>. In light of our preceding results this presents an apparent impasse in our attempt to generate active ratchet effects. 
However, we show that as the number of cog teeth is increased this orientation-dependence fades away, the cogs map onto discs with tilted anchoring and persistent active rotation is recovered. §.§ Conformal mappings We construct a two-parameter family of chiral cogs from regular n-gons by adding a right-angled triangle with base angle γπ to each edge, as illustrated in Figure <ref> for a square-based cog with γ=1/8. The energy-minimising director angle is provided by a harmonic function which is uniquely determined by a boundary condition corresponding to normal anchoring on the cog surface along with an asymptotic value provided by the far-field alignment. The solution for the cog is conveniently obtained via conformal mapping f: Ω→Ω^' from the exterior of a disc (Ω) to the exterior of the polygon (Ω^'). A holomorphic function ϕ(z) on Ω satisfying the appropriate piecewise boundary conditions on the disc will then provide the solution ϕ(f(z)) for the polygon <cit.>. Provided we can construct the appropriate mapping, we can therefore separate the determination of the director field into two parts: finding conformal maps from the exterior of the disc onto the exterior of our colloid and solving the analogous Dirichlet problem for the director angle on the exterior of the disc. To facilitate the construction of explicit mappings we use the framework of Schwarz-Christoffel transformations <cit.>. The relevant elements of the theory along with the derivation of the requisite conformal map are provided in the Appendix. Here we simply present the result that a conformal map f: Ω→Ω^' from the exterior of the disc (Ω) to the exterior of a cog (Ω^') is given by f(z)=Γ(1/2+1/n-γ)zF_1(-1/n,γ+n-4/2n,-1/2-γ,1-1/n;e^inψz^-n,e^in(ψ+χ)z^-n)/Γ(1-1/n)Γ(1/2+2/n-γ)_2F_1(-1/n,-1/2-γ,1-1/n-n-4/2n-γ,e^inχ), where the angle ψ incorporates rotations of the cog. Here _2F_1 and F_1 denote respectively Gauss's hypergeometric function and the first Appell hypergeometric series of two variables <cit.>. This transformation is illustrated in Figure <ref>, with the vertices of the cog and their corresponding prevertices on the disc highlighted. For the vertices of the base n-gon the prevertices sit at the n^th roots of unity, those of the teeth tips being rigidly rotated by an angle χ, dependent on both n and γ. §.§ Director field and orientation-dependent handedness Before discussing the boundary conditions relevant to cogs it will be beneficial to consider the winding induced by an isolated vertex, as this will give insight into how the nematic distortion changes as a colloid rotates. It should be emphasised that we are treating the cog's rotation adiabatically, such that at every orientation the director adopts the minimal energy distortion. The first point to make is that the imposition of normal anchoring does not correspond to a unique Dirichlet boundary condition as at any point the director angle may be changed by a multiple of π with the director still being normal to the colloid. However, the boundary condition appropriate for the minimal energy texture is that where the director angle ϕ∈[-π/2,π/2]. Any other choice would correspond to additional topological defects on the boundary of the colloid and greater director distortion in connecting the value of ϕ on an edge to its asymptotic value. 
Consequently, a vertex generically exists in one of two states, as illustrated in Figure <ref>, namely with the director on the two edges oriented in the same sense, either inwards or outwards, as in a) and c) or in opposing senses, as in b) and d). According to these circumstances we will have to rotate the director on one edge either clockwise or anticlockwise around the vertex to have it match that on the other side and hence the induced winding will be given by either the exterior angle νπ or interior angle μπ=(1-ν)π. The dividing line between these two regimes of behaviour is, in light of the bounds on ϕ, when one of the edges of the vertex is vertical. As shown in Figure <ref>, when one of the edges of a vertex passes through the vertical an inversion event occurs in which the orientation on the edge is reversed and the director winding at the vertex changes by ±π. Although we have chosen an orientation for the director to aid our discussion, all statements we make are of course independent of this choice. This local description of the director winding at a vertex underpins the nature of the nematic distortions around all the colloids that we consider. As the orientation of the colloid is varied the distortion is rigidly rotated, with the addition of a monopole term to maintain the anchoring condition, along with director inversions whenever an edge is moved through the vertical so as to maintain the appropriate bounds on ϕ. The redistribution of topological charge caused by these inversions provides a mechanism for orientation-dependent chirality <cit.>, as we discuss further shortly. With this understanding in place the boundary condition for a specific cog is not particularly illuminating and so rather than presenting it for each cog that we consider we give the algorithmic process for its construction. Taking the boundary condition for the base regular polygon as a starting point, the addition of the cog teeth is captured by a square wave alternating between the values γπ and -π/2. This will typically result in a boundary condition where the director angle ϕ lies outside [-π/2,π/2] in places and so the director orientation will need to be reversed along these segments to bring ϕ back within this range. The precise form of the boundary condition depends on the value of n modulo 4, but using the case n≡ 0 mod 4 as a partial example, the first step gives, for ψ=0, ϕ(θ) =∑_m=1^∞(-1)^m/msin(2mθ)+∑_m=1^∞2/mnsin(nmθ)+γπ-(1+2γ)nχ/4+ ∑_m=1^∞(1/2+γ)(1+(-1)^m)/m[sin[nm/2(θ-nχ/2)]-sin(nm/2θ)]. Any requisite reversal of director orientation may be achieved by using that θ_2-θ_1/2+∑_m=1^∞sin(m(θ-θ_1))-sin(m(θ-θ_2))/m increases ϕ by π between θ_1 and θ_2. Pairing the boundary conditions with the conformal map in (<ref>) produces the director solutions shown in Figure <ref>. A striking feature of the director solutions for triangle- and square-based cogs shown in Figure <ref> is the reversal in sign of the chiral quadrupole, C_↺, an instance of orientation-dependent chirality <cit.>. This is the multipole associated with net active torques and so this change in its sign strikes at the heart of our ability to use cogs to generate persistent rotations in active nematics. We therefore shall devote a little attention to understanding this behaviour. In this paper we are using the induced director distortion, in particular the chiral quadrupole, as a measure of chirality for the colloid. Assigning a chiral measure is in general fraught with difficulty. 
There is not a unique choice, but rather an infinite hierarchy of possible measures <cit.>, with the potential for different measures to determine distinct shapes as the `most chiral' <cit.>. For us, although higher-order terms could be invoked, the chiral quadrupole coefficient is a natural choice of chirality measure because it determines the active rotational response. It can also be connected to previously used measures. The chirality pseudotensor <cit.> reduces in the case of two-dimensional nematics to ∂_iϕ <cit.>. Of this, the lowest-order term without mirror symmetry is the chiral quadrupole. Our observation of orientation-dependent handedness of the nematic distortion raises a second issue with associating a chiral measure to a system – that of chiral connectedness <cit.>. This occurs when a system can be continuously deformed from a chiral state to its mirror image without ever passing through an achiral configuration. This requires the chiral measure, which takes opposing signs for the mirror images, to pass through zero as its value is continuously changed and hence falsely label an intermediate chiral state as achiral. It is not inevitable that we encounter the paradox of chiral connectedness, since the chiral quadrupole can vary discontinuously with cog orientation. Nonetheless, we do find cog orientations for which the chiral quadrupole vanishes. However, in our context this is not so clearly a flaw, as it has a direct physical meaning – in such orientations a cog, despite being chiral, experiences no net active torque. There are two factors behind the variation of the chiral mode: the rigid rotation of the distortion as the cog is rotated and inversions of the director orientation along certain edges. Both of these can be reduced by increasing the number of teeth, as we now demonstrate. With n teeth the cog returns to its original state after a rotation of 2π/n, so the smooth variation of the chiral mode can be made arbitrarily small. Sticking with the case n≡0 mod 4 for concreteness there will generically be two inversion events, each affecting a pair of antipodal edges. The exact length of the arc of the preimage circle corresponding to each edge depends on χ but is at most 2π/n and so from (<ref>) we can see that the modification to the boundary conditions necessitated by such an inversion is bounded by 2π/n +∑_m=1^∞1+(-1)^m/m[(1-cos(2mπ/n))sin(m(θ-θ_1)). .+sin(2mπ/n)cos(m(θ-θ_1))], with θ_1 varying according to the location of the edges on which the inversion is occurring. The discontinuous change brought about by these inversions can therefore also be made arbitrarily small. In short, the distortions induced by a cog converge onto the rotationally invariant disc distortion, with anchoring angle γπ. The orientational dependence of the chiral quadrupole is a signature of discretisation which grows ever fainter with increasing side number. This smoothing out of distortion variations means that the persistent rotation of cogs observed in bacterial ratchets <cit.> is achievable in orientationally-ordered active matter. An explicit demonstration of the consistent chirality of the induced distortion is given in Figure <ref> for a cog with 16 teeth. The correspondence with the disc limit is confirmed by noting that the negative winding is situated on the first teeth to be tilted through the vertical, that is at 2π k/n where k is the smallest integer such that π/n+γπ+2π k/n>π/2. 
In the large n limit we may take equality and find that the displacement of the negative winding from the equatorial points tends to -γπ, in accordance with the quadrupole distortion for anchoring angle of -γπ on the disc, as established in Section <ref>. The consequences for active dynamics of this orientation-dependence of the chiral quadrupole are illustrated in Figure <ref>. These far-field active flows are attained in the same fashion as those shown for discs in Section <ref> – from our exact solutions for the director around the cogs we read off the multipole coefficients to quadrupole order and can then determine the active response from the appropriate derivatives of (<ref>) and (<ref>). For the square-based cog, also shown in row II of Figure <ref>, the discontinuous changes in the director field are of sufficient magnitude to change the sign of the chiral quadrupole and hence reverse the rotational component of the active flow, meaning the cog will not persistently rotate. By contrast, the consistent chiral quadrupole induced by the sixteen-toothed cog shown in row IV of Figure <ref> results in a consistent sense of rotation in the active flow, allowing persistent active rotation of the cog. To be explicit, and as evidenced by the expression for the net active torque in (<ref>), the direction of rotation is determined by the signs of both the chiral quadrupole and the activity. The flows and rotations shown in Figure <ref> correspond to extensile activity and would be reversed in contractile systems. §.§ Active response Having identified these two distinct modes of behaviour, let us investigate more closely the relationship between the chiral response of the active nematic and the parameters of the cog. Figure <ref> shows the phase space for the coefficient of the chiral quadrupole as a function of both the number of teeth n and the tooth angle γπ. It was generated by constructing the boundary condition for a cog and then averaging the chiral quadrupole coefficient over a full rotation. This averaging was performed for 100 evenly-spaced values of γ for each n from 3 to 16. The distortions produced by the cogs which sit at the locations of the polygonal icons are shown in Figure <ref>. Between the two black lines is the region for which the chiral quadrupole has a consistent sign under rotation, allowing for active nematic ratchets. There are several regions we can identify. When n is small (3 or 4) there is no value of γ which will produce a consistent chiral response. The square icon denotes a typical example, shown in row II of Figure <ref>; there is a modest net chiral quadrupole but its sign changes from the first two panels to the last two. The region in the bottom-left of Figure <ref> is even more extreme, as despite the angle of the cog teeth being such as to ostensibly induce a negative chiral quadrupole, the averaged coefficient is actually positive. This counter-intuitive state of affairs is illustrated by the triangle-based cog in row I of Figure <ref>. The cog shown there is also convex, something that is possible only for triangle-based cogs with γ<1/6, but this is not necessary for the net chiral quadrupole to be positive. For n≥ 5 a band of γ values develops for which the chiral response is of a consistent sign, with this band opening up with increasing n. 
The heptagon-based cog in row III of Figure <ref> provides an example of the behaviour typical for intermediate values of n; the chiral quadrupole is of a consistent sign but there is still significant variation in the nematic distortion as the orientation of the cog is changed. Lastly, in the high n limit we can understand the chiral response of the nematic to the cog by modelling it as a disc with anchoring angle γπ. Our discussion of discs in Section <ref> predicts the dependence of the chiral quadrupole coefficient on γ to be C_↺∼-sin(2γπ) and this is an increasingly good approximation for the behaviour in the high-n region of parameter space. As demonstrated by the hexadecagon-based cog in row IV of Figure <ref>, the variation of the distortion with cog orientation grows smaller and smoother with increasing n, the maximum value of the average chiral quadrupole tends to 1/2 and the value of γ for which this is achieved tends to 1/4. As far as using cogs to generate active nematic ratchets the optimal design is therefore to have as many teeth as feasible, in order to maximise the smoothness of the net active torque and the range of γ which can be utilised. The strength of the active torque can then be tuned using (<ref>). It should come as no surprise that the cogs which are `most chiral', at least in the sense of our definition, are those with intermediate values of γ. Both γ=0 and the limit γ→ 1/2 correspond to achiral shapes, the former regular polygons the latter tending to star-shaped polygons (although the γ=1/2 limit itself does not yield a bounded polygon, as the two edges of the cog teeth become parallel). The black lines in Figure <ref>, which bound the region of consistent chirality, are intriguingly non-monotonic and display sizeable jumps. The upper boundary exhibits an odd-even effect and is given by the largest fraction with denominator n that is less than 1/2. The lower boundary is less regular and seems to display a mod 4 effect. Neither of these is particularly surprising, given that the compatibility of discrete boundaries with the symmetry of a quadrupole is inherently dependent on n mod 4 <cit.>. We do not discuss the specifics of these threshold values any further since they are tied to the precise cog construction that we have employed in this paper and are thus of secondary interest compared to the qualitative phenomenology we have described, which applies to the chiral response of a nematic regardless of the exact geometry of the colloid. §.§ Role of anchoring conditions We conclude by discussing the effect of anchoring conditions on the active phenomenology of cogs. This is partly motivated by recent experimental realisations of cogs in active nematics <cit.>, which utilised tangential rather than normal anchoring. Once again the results we described for discs in Section <ref> will prove insightful. Having determined an energy-minimising director configuration for one anchoring condition we can find the minimal energy distortion for any other anchoring condition by applying the appropriate local rotation to the director. However, this transformation rotates the far-field direction, so to keep this fixed we must globally rotate the system in the opposite direction, changing the orientation of the cog. Denoting the anchoring and orientation angles by α and ψ and recalling from Section <ref> that for a director rotation by Δ the multipole coefficients obey c_l∼e^-ilΔ we have the relation c_l(α,ψ)=e^-ilΔc_l(α-Δ,ψ+Δ). 
Of particular interest is the different effect normal or tangential anchoring has on the quadrupole coefficients. In this case (<ref>) reduces to c_2(π/2,ψ)=-c_2(0,ψ+π/2). This relationship is illustrated in Figure <ref>. The net active torque, proportional to R{ c_2}, as a function of orientation for normal anchoring is connected to that for tangential anchoring by reversing the sign and shifting by an angle of π/2. The cogs in Figure <ref> have six-fold symmetry and so the angular shift appears as a displacement by π/6. Due to the reversal in sign of the chiral quadrupole, a cog with tangential anchoring will, all other properties being equal, rotate in the opposite direction to one with normal anchoring. The anticlockwise rotation illustrated in Figure <ref> is in agreement with that seen in experiments <cit.>. For any cog there is an optimal anchoring angle which maximises the strength of the net active torque. Following the principles laid out in Section <ref> we can see that this optimised anchoring will be such that the director tilt angle and the cog tooth angle sum to approximately π/4. However, in cases where it is possible to control the anchoring to this extent it is simpler to just use disc colloids as there is no need to use colloidal geometry to induce chirality. § DISCUSSION We have built upon a general framework for understanding the behaviour of distortions in active nematics <cit.> to show that not only propulsion but also rotation are generic properties of colloidal inclusions. Further, we have demonstrated how the direction and magnitude of these non-equilibrium effects can be controlled through either anchoring conditions or colloidal geometry. The latter is illustrated for a form of chiral cog and our predictions for the direction of persistent rotation match those of recent experiments <cit.>. Our results allow us to go beyond predicting the existence of persistent rotation to also optimise the cog geometry for rotation by maximising the net active torque. It is worth emphasising that although the rotational effects we predict seem similar to those reported in bacterial baths <cit.>, and even share the property of becoming smoother with increasing teeth number <cit.>, their origin is not the same. The previously observed bacterial ratchets resulted from rectification of the swimming direction of the bacteria to produce coherent motion from an incoherent, isotropic environment and the mechanism for generating rotation is unchanged by rotation of the cog. The orientational order of a nematic gives meaning to the orientation of the cog relative to a far-field alignment and it was by no means guaranteed that the ratchet behaviour would be replicable in this context, particularly given our observation of orientation-dependent handedness <cit.> of the nematic distortions that drive the rotation of the cog. It is gratifying then that these effects melt away with increasing tooth number, allowing the active ratchet effect to be recovered. Our work paves the way to using colloids to induce desired dynamical responses in active nematics, including as work-extracting micromachines <cit.>. A natural continuation would be to complement the far-field results presented here with a theoretical understanding of how the motion of active nematic defects is rectified by colloidal surfaces <cit.>, as this is key to using inclusions to temper active nematic turbulence <cit.>. This work was supported by the UK EPSRC through Grant No. EP/N509796/1. 
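Before turning to the Appendix, the defect radii quoted above can be cross-checked numerically, together with the anchoring-angle dependence of the net torque implied by c_2=-10/√(21) i e^{-2iα}. The sketch below is an editorial illustration using NumPy and SciPy (not part of the original text); the units, the value of ζ and the sampling of α are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize_scalar

a = 1.0  # colloid radius (arbitrary units)

# r_d-dependent parts of the regularised free energies quoted in the Appendix.
def dipole_term(r):
    return np.log(r**4 / (2.0 * a * (r**2 - a**2)))

def quadrupole_term(r):
    return np.log(r**7 / (4.0 * a**2 * (r**4 - a**4)))

res1 = minimize_scalar(dipole_term, bounds=(1.01 * a, 10.0 * a), method="bounded")
res2 = minimize_scalar(quadrupole_term, bounds=(1.01 * a, 10.0 * a), method="bounded")
print(res1.x, np.sqrt(2.0) * a)          # expected minimiser r_d = sqrt(2) a
print(res2.x, (7.0 / 3.0) ** 0.25 * a)   # expected minimiser r_d = (7/3)^(1/4) a

# Net active torque implied by c_2 = -(10/sqrt(21)) i exp(-2 i alpha),
# via the torque formula -zeta a^2 pi/2 Re(c_2); it varies as sin(2 alpha).
zeta = 1.0
alpha = np.linspace(0.0, np.pi, 9)
c2 = -10.0 / np.sqrt(21.0) * 1j * np.exp(-2.0j * alpha)
torque = -zeta * a**2 * np.pi / 2.0 * np.real(c2)
print(np.round(torque, 3))
```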
§ APPENDIX §.§ Free energy for disc colloids Consider a nematic texture defined by a harmonic director angle ϕ=I{lnΦ}+Aln(√(zz̅)/R)/ln(a/R), Φ=∏_j(z-z_j)^s_j, where a is the colloid radius, R is a large length scale at which uniform alignment is recovered and A is the strength of the monopole term appropriate to enforce boundary conditions. The z_j denote the positions of topological defects of charge s_j, both in the bulk and image defects in the interior of the colloid. The Frank free energy density is given by f=4∂_zϕ∂_z̅ϕ=(A/|z|ln(a/R))^2 +iA/|z|^2ln(a/R)[z̅∑_j s_jz-z_j/|z-z_j|^2-z∑_j s_jz̅-z̅_j/|z-z_j|^2] +∑_i,js_is_j(z̅-z̅_i)(z-z_j)/|z-z_i|^2|z-z_j|^2. Rewriting this in polar coordinates we have f =(A/rln(a/R))^2+2A/r^2ln(a/R)∑_js_j𝐫×𝐫_j/|𝐫-𝐫_j|^2 +∑_i,js_is_j(𝐫-𝐫_i)·(𝐫-𝐫_j)/|𝐫-𝐫_i|^2|𝐫-𝐫_j|^2. The second of these terms integrates to zero by symmetry and in what follows it will be possible to enforce A=0, which will hold for the minimal energy configuration. We therefore consider only the final term and write F=K/2∫d^2𝐫[∑_i,j≠ 0s_is_j(𝐫-𝐫_i)·(𝐫-𝐫_j)/|𝐫-𝐫_i|^2|𝐫-𝐫_j|^2]. To perform the angular integral we use <cit.> ∫A+Bcos x+Csin x/(a_1+b_1cos x+c_1sin x)(a_2+b_2cos x+c_2sin x)dx=A_0lna_1+b_1cos x+c_1sin x/a_2+b_2cos x+c_2sin x+A_1∫dx/a_1+b_1cos x+c_1sin x +A_2∫dx/a_2+b_2cos x+c_2sin x, where, writing |⋯;⋯;⋯| for a determinant whose rows are separated by semicolons, A_0=𝒩^-1|A, B, C; a_1, b_1, c_1; a_2, b_2, c_2|, A_1=𝒩^-1|Bc_1-Cb_1, Ac_1-Ca_1, Ba_1-Ab_1; a_1, b_1, c_1; a_2, b_2, c_2|, A_2=𝒩^-1|Cb_2-Bc_2, Ca_2-Ac_2, Ab_2-Ba_2; a_1, b_1, c_1; a_2, b_2, c_2| and 𝒩=(a_1b_2-a_2b_1)^2-(b_1c_2-b_2c_1)^2+(c_1a_2-c_2a_1)^2, to show that ∫_-π^π(𝐫-𝐫_i)·(𝐫-𝐫_j)/|𝐫-𝐫_i|^2|𝐫-𝐫_j|^2dθ =r^2-r_ir_jcos(θ_i-θ_j)/r^4+r_i^2r_j^2-2r^2r_ir_jcos(θ_i-θ_j) π[sign(r-r_i)+sign(r-r_j)]. §.§.§ Dipolar colloids We now consider the particular configurations of interest. For a disc of radius a inducing a dipolar distortion with an anchoring angle of α the director angle is given by ϕ=I{ln[z^2/(z-z_d)(z-a^2/z̅_d)]}+(θ_d+α)ln(√(zz̅)/R)/ln(a/R), that is a bulk -1 defect at z_d=r_d e^iθ_d, an image -1 defect within the colloid and a +2 defect at the origin. Clearly the energy is minimised by the choice θ_d=-α such that the angle of the bulk defect is determined by the anchoring condition. This angular position of the bulk defect is the relevant part for determining the direction of self-propulsion in an active system, but for completeness we determine the radial position. Using (<ref>) in (<ref>) and performing the angular integration gives F =π K∫drr[-2/r^2+1/r^2-a^2+1/r^2-a^4/r_d^2. .+(-2/r^2+1/r^2-a^2+1/r^2-r_d^2)sign(r-r_d)]. To regularise the integral we integrate from a+ϵ to R, excluding where necessary an annulus of width 2ϵ centred on r_d. Doing this and expanding the resulting logarithms to first order in ϵ and 1/R yields F =π K[a^2+r_d^2/a^3-ar_d^2ϵ-2a^2+a^4/r_d^2+r_d^2/2R^2-ln(ϵ). .+ln(r_d^4/2a(r_d^2-a^2))], with the first two terms arbitrarily small in the limits of small ϵ and large R, the third term the isolated divergent contribution and the final term minimised by r_d=√(2)a. §.§.§ Quadrupolar colloids Again considering a disc of radius a with anchoring angle α but now inducing a quadrupole distortion, the director angle is given by ϕ =I{ln[z^2/√((z-z_d1)(z-a^2/z̅_d1)(z-z_d2)(z-a^2/z̅_d2))]} +(θ_d1+θ_d2/2+α)ln(√(zz̅)/R)/ln(a/R), where the -1 bulk defect of the dipolar distortion has been split into two -1/2 defects, each with image defects within the colloid. As the free energy is symmetric under exchange of r_d1 and r_d2 the global minimiser must occur when r_d1=r_d2. 
Similarly, from (<ref>) we can see that the angular positions of the defects will only appear in the combination cos(θ_d1-θ_d2) and so minimising gives the condition θ_d1-θ_d2=π. It is natural that these restrictions are required to take (<ref>) into the form of the global energy minimiser as they also ensure that the leading order distortion is a quadrupole rather than a dipole. Taking this angular condition in conjunction with the constraint that the monopole term vanishes, that is θ_d1+θ_d2=-2α, gives that the two defects sit at -α±π/2. With these simplifications in place the free energy is given by F =π K∫dr r[a^4(r^4r_d^4+a^4r^4-2a^8/r^2(a^4-r^4)(a^8-r^4r_d^4). .+r^4r_d^4+a^4(r^4-2r_d^4)/r^2(r^4-a^4)(r^4-r_d^4)sign(r-r_d)]. Integrating over r while employing the same regularisation procedure as before and then expanding to lowest order in ϵ and 1/R we find F =π K/2[2(a^4+r_d^4)/a(a^4-r_d^4)ϵ-(a^4+r_d^4)^2/2r_d^4R^4-lnϵ. .+ln(r_d^7/4a^2(r_d^4-a^4))]. As before the only relevant term is the final one and it is minimised by r_d=(7/3)^1/4a. §.§ Schwarz-Christoffel Maps Schwarz-Christoffel transformations are predicated on the notion that the desired mapping has a derivative expressible as the product of some set of canonical functions, that is <cit.> f'(z)=∏ f_k(z). It naturally follows that argf'=Σargf_k and so by choosing the f_k appropriately such that each is a step function we can contrive to make argf' a piecewise constant function with prescribed jumps and hence map the real axis to a polygon. Through additional modifications we can generate our desired transformations from the exterior of the disc to the exterior of polygons. To make this precise, let us begin by setting out our notation for polygons. We define a polygonal colloid through a piecewise linear curve Γ comprised of n vertices w_1,…,w_n and bounding an interior region P. The vertices have interior angles μ_1π,…,μ_nπ. The corresponding exterior angles are ν_kπ, with ν_k=1-μ_k, and for Γ to form a closed curve we require ∑_kν_k=2. To determine the form of f_k(z) in (<ref>) we note that (z-z_k)^-ν_k has a constant argument on ℝ save for a jump of ν_kπ at z_k and is analytic in the upper half-plane H^+. It follows that a conformal transformation that maps ℝ onto Γ and H^+ onto P is provided by f(z)=A+C∫^z∏_k=1^n(z'-z_k)^-ν_kdz'. The z_k are the prevertices, preimages of the vertices under the map such that w_k=f(z_k). That these prevertices are well-defined is a consequence of the Carathéodory–Osgood theorem <cit.>. The complex constants A and C allow for scaling, translation and rotation of the image. Together with the prevertices they form a set of parameters which are overdetermined to the tune of the three degrees of freedom inherent in Riemann's mapping theorem <cit.>. The lower limit of integration is left unspecified as it only provides a constant that may be absorbed into A. The proof that (<ref>) truly is an analytic function as required is based upon showing that ℱ(z)=f”(z)/f'(z)-∑_kν_k/z-z_k is an entire function and may be found in <cit.>. There are two modifications we must make to (<ref>) that we now consider in turn. The first is to change the domain of the map from the upper half-plane to the unit disc, D. This is achieved by composition of maps and application of the chain rule. For h(z):H^+→ P and g(z):Σ→ H^+ we have f(z)=h(g(z)):Σ→ P with <cit.> f'(z) =h'(g(z))g'(z) =Cg'(z)∏_k[g(z)-g(z_k)]^-ν_k. 
A map from the disc to the upper half-plane is provided by the Möbius transformation g(z)=i1+z/1-z, giving f'(z)=C2i/(1-z)^2∏_k[2i(z-z_k)/(1-z)(1-z_k)]^-ν_k. Using that ∑_kν_k=2 and absorbing various constants into C we integrate to obtain f:D→ P, f(z)=A+C∫^z∏_k(1-z'/z_k)^-ν_kdz'. Next we must consider mapping to the exterior region P'. The first change that this necessitates is reversing the sign of the exterior angles. In the same manner as before this leads us to ℱ(z)=f”(z)/f'(z)-∑_kν_k/z-z_k as an entire function. The difference now is that since the image of f contains the point at infinity it no longer follows that ℱ(z) is constant. Taking f(0)=∞ implies that f has a simple pole there and so at the origin f”/f'=-2/z plus some analytic part. We therefore modify (<ref>) to give ℱ_2(z)=f”(z)/f'(z)-∑_kν_k/z-z_k+2/z=0. Integrating we find f:D→ P', f(z)=A+C∫^z z'^-2∏_k(1-z'/z_k)^ν_kdz'. Finally, we may make the replacement z→1/z in (<ref>) to attain our desired conformal map from the exterior of the disc to the exterior of the polygon. Having attained this general expression we now derive the particular mappings appropriate to the polygonal cogs that we consider in this paper. Our cogs are formed by attaching right-angled triangles with base angle γπ to each side of a regular n-sided polygon and we can use this rotational symmetry to simplify the integral. We take the prevertices of the base n-gon to be the n^th roots of unity and indeed to be fixed points of the map. Due to the cyclic nature of the roots of unity ∏_k(1-z'/z_k)=1-z'^n. A similar relation holds for the prevertices of the tips of the cog teeth, with these being rotated from the roots of unity by an angle χ which depends on both n and γ and must be determined numerically. In addition we can set A=0 and determine C by requiring f(1)=1 such that f(z)=∫^zz'^-2(1-z'^n)^4-n/2n-γ(1-z'^n/e^inχ)^1/2+γdz'/∫^1z'^-2(1-z'^n)^4-n/2n-γ(1-z'^n/e^inχ)^1/2+γdz'. Making the substitution t=z'^n/z^n brings the numerator to the form 1/nz∫^1t^-1/n-1(1-z^nt)^4-n/2n-γ(1-z^n/e^inχt)^1/2+γdt. Comparing with Picard's single integral representation of Appell's F_1 function F_1(a,b_1,b_2,c;x,y)= Γ(c)/Γ(a)Γ(c-a)∫_0^1t^a-1(1-t)^c-a-1(1-xt)^-b_1(1-yt)^-b_2dt, we find that the numerator of (<ref>) is given by 1/nzΓ(-1/n)/Γ(1-1/n)F_1(-1/n,n-4/2n+γ,-1/2-γ;z^n,z^n/e^inχ). We note that the representation in (<ref>) is only valid for Ra>0, whereas here a=-1/n, leading to a singularity at t=0. This singularity can be handled via adaptation of the integration contour, introducing a prefactor into (<ref>). We leave this prefactor undetermined as it is common to both the numerator and denominator of (<ref>) and hence does not feature in the desired mapping f(z). For the denominator we use <cit.> F_1(a,b_1,b_2,c;1,z) =_2F_1(a,b_1;c;1)_2F_1(a,b_2;c-b_1;z) =Γ(c)Γ(c-a-b_1)/Γ(c-a)Γ(c-b_1)_2F_1(a,b_2;c-b_1;z), with the same caveat concerning a prefactor as for (<ref>). Replacing z with 1/z and including rotations of the image polygon by an angle ψ we reach the conformal transformation from the exterior of the disc to the exterior of a cog f(z)=Γ(1/2+1/n-γ)zF_1(-1/n,γ+n-4/2n,-1/2-γ,1-1/n;e^inψz^-n,e^in(ψ+χ)z^-n)/Γ(1-1/n)Γ(1/2+2/n-γ)_2F_1(-1/n,-1/2-γ,1/2+1/n-γ,e^inχ).
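The closed-form map can be evaluated with a library implementation of the Appell and Gauss hypergeometric functions. The sketch below is an editorial illustration using mpmath (not part of the original text); the value of χ is a placeholder, since in the text it must be fixed numerically from the tooth geometry, and ψ is set to zero. It evaluates f(z) for a square-based cog and checks that f(z)/z tends to a constant far from the colloid, where both Appell arguments vanish.

```python
import mpmath as mp

# Illustrative parameters (editorial assumptions): a square-based cog, n = 4, gamma = 1/8.
n, gamma, psi = 4, 0.125, 0.0
chi = 0.3  # placeholder value; in the text chi is fixed numerically by the tooth geometry

a1 = -1.0 / n
b1 = gamma + (n - 4.0) / (2.0 * n)
b2 = -0.5 - gamma
c = 1.0 - 1.0 / n

# Constant prefactor of the map, including the Gauss hypergeometric normalisation.
pref = mp.gamma(0.5 + 1.0 / n - gamma) / (
    mp.gamma(1.0 - 1.0 / n)
    * mp.gamma(0.5 + 2.0 / n - gamma)
    * mp.hyp2f1(a1, b2, c - b1, mp.exp(1j * n * chi))
)

def f(z):
    """Map from the exterior of the unit disc, evaluated via Appell's F1."""
    x = mp.exp(1j * n * psi) * z ** (-n)
    y = mp.exp(1j * n * (psi + chi)) * z ** (-n)
    return pref * z * mp.appellf1(a1, b1, b2, c, x, y)

# Far from the colloid both Appell arguments vanish, so f(z)/z approaches a constant.
for R in (2.0, 5.0, 20.0):
    z = R * mp.exp(0.4j)
    print(R, f(z) / z)
```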
http://arxiv.org/abs/2307.07589v1
20230714192959
GenAssist: Making Image Generation Accessible
[ "Mina Huh", "Yi-Hao Peng", "Amy Pavel" ]
cs.HC
[ "cs.HC" ]
The University of Texas at Austin Austin TX USA [email protected] Carnegie Mellon University Pittsburgh PA USA [email protected] The University of Texas at Austin Austin TX USA [email protected] Blind and low vision (BLV) creators use images to communicate with sighted audiences. However, creating or retrieving images is challenging for BLV creators as it is difficult to use authoring tools or assess image search results. Thus, creators limit the types of images they create or recruit sighted collaborators. While text-to-image generation models let creators generate high-fidelity images based on a text description (i.e. prompt), it is difficult to assess the content and quality of generated images. We present GenAssist, a system to make text-to-image generation accessible. Using our interface, creators can verify whether generated image candidates followed the prompt, access additional details in the image not specified in the prompt, and skim a summary of similarities and differences between image candidates. To power the interface, GenAssist uses a large language model to generate visual questions, vision-language models to extract answers, and a large language model to summarize the results. Our study with 12 BLV creators demonstrated that GenAssist enables and simplifies the process of image selection and generation, making visual authoring more accessible to all. GenAssist makes image generation accessible by providing rich visual descriptions of image generation results. Given a text prompt and set of generated images, GenAssist uses a large language model (GPT-4) to generate prompt verification questions from the prompt and image-based questions from the image captions. GenAssist then answers the visual questions (BLIP-2) and uses a vision-language model (CLIP) and an object detection model (Detic) to extract additional visual information. GenAssist then uses GPT-4 to summarize all of the information into comparison descriptions and per-image descriptions. Teaser image that illustrates how GenAssist generates the comparison description and per image descriptions in the summary table. First, GenAssist takes the input of text prompt "A young chef is cooking dinner for his parents" and the four images generated using the prompt. Then, based on the prompt, GenAssist uses GPT4 to ask prompt verification questions and uses BLIP2 to answer them. GenAssist also asks questions based on the individual image captions using GPT4 and BLIP2. In addition to prompt verification questions and image based questions, GenAssist also asks questions related to visual content and styles and answers them using BLIP2, Detic and CLIP. Finally, using all of the visual information, the comparison description (similarities and differences) and the per image descriptions are generated using GPT4. 
GenAssist: Making Image Generation Accessible Amy Pavel Received ; accepted ============================================= § INTRODUCTION BLV creators use images in presentations <cit.>, social media <cit.>, videos <cit.>, and art <cit.>. To obtain images, creators currently either describe their desired images to the sighted collaborators who then search for or create the image <cit.>, or limit the types of images they create <cit.>. Large-scale text-to-image generation models, such as DALL-E <cit.>, Stable Diffusion <cit.>, and Midjourney <cit.>, present an opportunity for these creators to generate images directly from text descriptions (i.e., prompts). However, current text-to-image generation tools are inaccessible to BLV creators, as creators must visually inspect the content and quality of the generated images to iteratively refine their prompt and select from multiple generated candidate images. While BLV creators can gain access to images using automated descriptions <cit.>, existing descriptions are intended primarily for image consumption. As a result, the descriptions leave out details that may help authors decide whether or not to use the image (e.g., style, lighting, colors, objects, emotions). Prior work also enables users to gain flexible access to the spatial layout of objects in images <cit.>, but exploring details per image makes it difficult to assess similarities and differences between image options provided during image generation. To make authoring visuals more accessible, prior work has explored describing visuals to help creators author presentations <cit.> or videos <cit.>. While such work helps creators identify low-quality visuals (e.g., blurry footage in a video <cit.>) or graphic design changes (e.g., changing slide layouts <cit.>), prior work has not yet explored how to improve the accessibility of image generation. To understand the opportunities and challenges of text-to-image generation, we conducted a formative study with 8 BLV creators who regularly create or search for images. Creators in our study reported their existing strategies for making images themselves (e.g., using SVG editors or code), searching for images, or asking others to search for or create images (similar to prior work <cit.>). All creators expressed excitement about using image generation to improve their efficiency and expressivity in image authoring. Creators all used image generation for the first time during our study and enjoyed creating high-fidelity images for their own uses (e.g., creating a logo for their website, making a card for their family). While we invited participants to ask the researchers visual questions to gain access to the visual details (e.g. “What are the differences?”, “Is the color calm or aggressive?”), it remained challenging for participants to: craft a well-specified prompt especially without visual experience, assess how well the generated image followed the prompt, recognize generated details that were not originally specified in the prompt, and understand or remember the similarities and differences between images. To improve the accessibility of image generation, we present , a system that provides access to text-to-image generation results via prompt-guided image descriptions and comparisons (Figure <ref>). 
Our system lets creators skim an overview of similarities and differences between images using our comparison descriptions and per image descriptions (Figure <ref>, right), assess if the images followed their prompt using prompt verification (Figure <ref>, center), and recognize visual details not in the prompt using our content and style extraction (Figure <ref>, center). Creators can also interactively ask questions across multiple images to gain additional details. Our interface design enables creators to easily navigate visual information via a screen reader-accessible table format. Our tables let creators selectively gain information about individual images (columns) or visual questions (rows) (Figure <ref>). We evaluated GenAssist in a within-subjects study with 12 BLV creators who compared GenAssist with a baseline interface that was designed to encompass practices of accessing images (e.g., automated caption <cit.>, object detection <cit.>, and Visual Question Answering <cit.>). Participants rated GenAssist as more useful than the baseline interface for understanding similarities and differences between the images, and they reported higher satisfaction with their image generation performance. Participants all expressed excitement about using GenAssist in their own workflows for authoring images and for new uses. Our work contributes: * Design opportunities for making image generation accessible, derived from a formative study * GenAssist, a system that provides access to image generation results via prompt-guided summaries and descriptions * User study that demonstrates how BLV creators use GenAssist to interpret and generate images § BACKGROUND As we aim to enhance the experience of BLV content creators working with AI-powered image-generation tools, our work builds upon prior research that explores: the accessibility of authoring tools and images, and text-to-image generation tools. §.§ Accessibility of Authoring Tools Enabling access to authoring tools unlocks new forms of self-expression. Recent research investigated how BLV people take and edit photos and videos <cit.>, compose music <cit.>, draw digital images <cit.>, and make presentations <cit.>. Such work includes studies of current practices that highlight accessibility concerns of existing authoring tools and the authored visuals. For example, features of current authoring tools remain difficult to access using screen readers <cit.>, and it can be difficult to assess the effect of the visual edits such as color changes <cit.>. To improve the accessibility of authoring tools, researchers have explored methods for providing feedback to authors as they modify visual elements. For example, prior work has developed tactile devices that assist BLV designers in understanding and adjusting the layout of user interface elements <cit.>. Tactile feedback has also been used to help developers interpret code structure, such as indentation <cit.>. Other prior work has used audio notifications to inform users about scene changes when reviewing videos <cit.>, while text descriptions have been used to convey visual details important to authoring such as brightness and layout <cit.>. Sound and text feedback have also been used to keep blind authors informed about their collaborators' edits to documents <cit.>. Similar to prior research, we also aim to make authoring tools accessible by providing in-situ feedback, but we instead provide creation-specific information to facilitate authoring images. 
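The pipeline summarised above — a large language model proposes visual questions from the prompt, a visual question answering model answers them for each candidate image, and a language model condenses the answers into per-image and comparison descriptions — can be outlined as simple orchestration code. The following Python sketch is an editorial illustration rather than the authors' implementation; ask_llm and answer_visual_question are hypothetical stand-ins for calls to GPT-4-style and BLIP-2-style models.

```python
from typing import Callable, Dict, List

def describe_candidates(
    prompt: str,
    images: List[str],                                   # paths or handles to generated candidates
    ask_llm: Callable[[str], str],                       # hypothetical LLM call (GPT-4-style model)
    answer_visual_question: Callable[[str, str], str],   # hypothetical VQA call (BLIP-2-style model)
) -> Dict[str, str]:
    """Sketch of a prompt-guided description pipeline for generated image candidates."""
    # 1. Turn the prompt into verification questions ("did the image follow the prompt?").
    questions = ask_llm(
        f"List yes/no questions that check whether an image follows this prompt: {prompt}"
    ).splitlines()

    # 2. Answer every question for every candidate image.
    answers = {
        img: {q: answer_visual_question(img, q) for q in questions if q.strip()}
        for img in images
    }

    # 3. Summarize per-image details and cross-image similarities/differences.
    per_image = {
        img: ask_llm(f"Summarize this image from Q&A pairs: {answers[img]}")
        for img in images
    }
    comparison = ask_llm(
        f"Given these per-image summaries, describe similarities and differences: {per_image}"
    )
    return {"comparison": comparison, **per_image}
```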
In addition to offering authoring feedback, researchers have developed systems to automate visual authoring. Prior systems recommend 2D layouts for visual elements during graphic design <cit.> and transform text into visual presentations <cit.>. To accommodate individual preferences and mitigate the impact of errors produced during generation, these systems typically offer multiple options for users to choose from and allow iterative generation attempts. Iterative generation and selection are not accessible for BLV creators, as it requires visually inspecting the output designs to choose a generated option or revise the input. In this work, we seek to make automated authoring tools, such as image generation, more accessible to BLV creators. Our approach provides a structured format for assessing and comparing generated results, and on-demand access to additional visual details to support creators in selecting a result and revising their input. §.§ Accessibility of Images Improving the accessibility of image generation systems involves not only ensuring access to image generation features, but also making the produced images accessible. A primary method for making images more accessible is representing them as text descriptions, such as image captions or alt text (e.g., “A person walking on the street”). Early work hired crowd workers to create alt text <cit.>, while recent research has developed machine-learning-based systems that automatically generate image descriptions <cit.>. Building on auto-generated captions, researchers have developed systems that further improve users' understanding of images by providing additional information, such as regional descriptions <cit.>, and structuring detailed descriptions into an overview <cit.>. This approach enables users to review visual information more efficiently and has been found to help blind people better understand images compared to using captions alone <cit.>. Our work builds upon this idea by presenting descriptions of image generation results in a hierarchical, easy-to-compare format, and tailoring the descriptions to the task of authoring rather than consuming images. Automatic descriptions do not always capture all of the important image details. Visual Question Answering (VQA) tools can fill this gap by offering on-demand information to visual questions (e.g., “What is the person walking on the street wearing?”). Previous research has explored what visual questions blind people would like to have answered <cit.> and provided on-demand visual question answering support using both crowdsourcing <cit.> and automated methods <cit.>. While VQA provides control over visual information gathering, it takes effort to ask individual questions. We investigate what types of visual questions BLV creators ask to create images during our formative study (similar to Brady et al. <cit.>), then use VQA to extract visual information and summarize this information as image descriptions. Thus, we explore how VQA and image descriptions work together as interconnected rather than separate accessibility solutions. §.§ Text-to-Image Generation Tools In recent years, significant progress has been made in the field of generative image models, particularly text-to-image models. These models employ pre-trained vision-language models to encode text input into guiding vectors for image generation, allowing users to create images using text prompts. 
This advancement can be attributed to various factors, including innovations in deep learning architectures (e.g., Variational Autoencoders (VAEs) <cit.> and Generative Adversarial Networks (GANs) <cit.>), novel training paradigms like masked modeling for language and vision tasks <cit.>, and the availability of large-scale image-text datasets <cit.>. With these advancements, recent diffusion-based models like DALL-E 2 <cit.>, Stable Diffusion <cit.>, and Midjourney <cit.> have successfully demonstrated the ability to synthesize high-quality images in versatile styles, including photorealism. This opens up potential practical applications for the content production industry <cit.>. However, none of the image generation tools provide text descriptions of the output so they are not accessible to BLV creators. In this work, we chose to use MidJourney due to its popularity among designers and content creators for its high-quality results. MidJourney enables creators to generate 4 candidate images for a single text prompt via a text-based interface hosted on Discord. However, our approach is not limited to any particular model, as we focus on comparing and describing multiple generated results from a single prompt, helping creators select the ideal image from various candidates produced by image generation tools. With the development of these models, recent works have conducted studies to understand the relationship between content creators and AI generative tools, introducing design guidelines for such systems <cit.>. These guidelines emphasize the need for more user controllability. Researchers have thus developed various tools to help designers better make use of generative AI, including assistance in exploring and writing better prompts <cit.>, recommending potential illustrations for news articles <cit.>, and supporting collaboration between writers and artists <cit.>. While these studies offer valuable insights into how designers interact with generative models, none have focused on creators with disabilities. Given the potential of text-to-image models for BLV creators, our work is the first to explore how to increase inclusivity in the expressiveness of image generation tools and make this emerging authoring approach more broadly accessible. § FORMATIVE STUDY To understand the strategies and challenges of authoring and searching for images, we conducted a formative study with BLV creators. The formative study consisted of a semi-structured interview to investigate current strategies and challenges of obtaining images, and two image generation tasks to explore current strategies and challenges of using text-to-image generation. §.§ Method We recruited 8 BLV creators who create or use visual assets on a regular basis (P1-P8, Table <ref>). Participants were recruited using mailing lists and compensated 50 USD for the 1.5-hour remote study conducted via Zoom[This study was approved by our institution's Institutional Review Board (IRB).]. Participants were totally blind (6 participants) or legally blind (2 participants) with light and color perception. All participants had previously produced or selected images for their work across several professions: teacher (English, Music), professor (Computer Science, Climate), software engineer, graduate student, and artist. 7 participants had prior knowledge of text-to-image generation models, none had previously used such tools. 
We first conducted a semi-structured interview asking participants how they currently created or used visual assets, and what accessibility barriers they encountered with their current approaches. We then provided a short tutorial on text-to-image generation and shared Midjourney's guidelines for creating text prompts <cit.> and example prompts from a Midjourney dataset <cit.>. Participants then completed two image generation tasks (20 minutes per task): a guided task in which participants generated a cover image for a news article <cit.> given the article's title and full text, and a freeform task in which participants generated their own image. To limit onboarding time, participant emailed us their prompt (text and/or image) instead of using Midjourney's Discord interface, then we shared the four generated candidate images back to the participants. We encouraged participants to ask questions about the four candidate images to select one or change the prompt. We recorded and transcribed the formative studies. To analyze the types of visual questions asked in the image generation task, two of the researchers labeled questions based on their goals and the types of information asked.[See Supplemental Material for the full list of prompts, images, and visual questions of the formative study] §.§ Findings Current Practice Participants reported that they currently use images for a variety of contexts including slides, website images, paintings for commission, cartoons, scientific diagrams, and music album covers (Table <ref>). Five participants noted that they created images on their own using image creation software such as SVG editors, slides, photoshop, and ProCreate (P7, P1, P5, P6), code packages including Python and Latex (P4, P5), or by taking photos (P3). Among them, three participants asked sighted people to review them (P3, P4, P6), and two participants reviewed the images using accessibility tools (e.g., audioScreen, tactile graphs, ZoomText) (P7, P3). Five participants searched for images online (P7, P8, P2, P3, P5), and three participants recruited another person to create or search the images for them (P7, P4, P5). All participants who searched for images mentioned that they ask sighted people to describe the images for them in addition to reading any available alt text. P7 noted “Alt text has never been helpful. It's too short without important details.” P8 and P5 mentioned that while a few established websites (e.g., New York Times, NASA) have good alt text, Google Image Search returns options other than established websites and “it is hard to compare the results of the image search” (P5). Participants also noted barriers to asking others to describe the image search results including finding available people to describe the images and avoiding false perceptions: “I only ask a handful of people because it might lead to some subconscious bias `that I’m not independent', cause it’s a basic task” (P7). Generating Prompts All prompts written by participants specified the content they wanted to appear in the image (e.g., P6 used the prompt “A person pushing a grocery cart down a produce aisle.”), and only two participants specified the style of the image (P1 and P7 specified “a photograph of...”). Participants mentioned several challenges of creating prompts. 
First, while prompt guidelines <cit.> recommend users to specify multiple attributes in their prompt (e.g., style, lighting), participants reported that they were unfamiliar with visual attributes (“I’m trying not to leave much to system randomness, I want to detail more things. But I don't know a lot about different styles.” — P5) and others found it difficult to remember what to mention in the prompt: “I want the model to behave more like a wizard – asking me a series of questions `What do you want to create?', `What style?' and so on. It is hard to create detailed prompts in one attempt (P2). Participants also noticed that it is challenging to create a prompt that AI would be capable of generating: “If I pin down something really specific or narrow [in the prompt], AI seems to break down” (P1). P5 mentioned that transparency could inform prompt iteration: “I want to know how the model works! [...] then I will know how to write a good prompt.” Finally, while participants easily generated prompts during the free-form task motivated by their own creation goals, they mentioned it was challenging to know what content would effectively convey the article in the guided task: “I have no experience reading a news article with images, so it’s hard to think of one. What do these images usually contain?” (P7). Understanding Image Candidateswith Visual Questions After generating images, participants asked visual questions to understand and select the images. Participants asked a total of 89 questions (47 asked in the guided task, 42 in the freeform task). The goals of the questions asked were to check whether the generated images followed the prompt (51), compare two or more images (34), request clarification of the answer provided by the interviewer (3), or understand a single image (1). The type of visual information asked by participants also varied. Participants asked about medium (5), settings (6), object presences (18), object types (11), position attributes (11), color/light/perspective (16), and others (22). Participants typically started by asking general questions, narrowing down to more specific questions as they ruled out images. For example, P4 progressively asked: “Can you describe the images?”, “What are the differences between the four images?”, “What are the differences between the [store] isles?”, “Is the second image realistic?”. Alternatively, participants started their questioning by directly checking if the image followed their prompts, such as in P5's first question: “Do we actually get the woman sitting at a desk?” Finally, P1 and P2 started with questions about the style of the images: “Is it realistic or cartoony?” (P1) and “Is the color calm or aggressive?” (P2). Through asking questions, participants realized differences between their prompt and the generated images: “it seems like the model generator is filling in details according to the context, even if I didn’t specify some details. I didn’t specify the clothes but in all images, the women are wearing office clothes” (P5). Participants then asked follow-up questions based on new details. While the visual questions revealed the content and structure of what participants wanted to know about the images, participants reported that asking questions for each image was “very time-consuming and confusing” (P4). 
5 participants noted that they would prefer to receive descriptions before asking questions, and participants reported that remembering all of the answers was difficult, as P2 summarized: “ I wish there were more description provided in the first place. I don't know what to ask. Also, it's hard to remember all the answers for each image.” Selecting an Image Candidate While participants initially asked questions based on their prompt, they ultimately selected the final image considering both prompt-based descriptions and descriptions of extra details produced by the model. P7 suggested that information on whether the prompt is reflected in each image should be presented early so that he can decide whether to explore the image in detail or skip to the next candidate. P8 highlighted the importance of additional details: “The model has randomness. It showed items I didn’t ask for and didn’t show what I asked for in the prompt. I want much information to be surfaced so that I can make a decision. Whether that unexpected parts can be still used.” We also observed that similarities between images guided participants in deciding whether to further explore the images or to refine the prompt. For instance, after P3 generated images using a prompt “A photo looking down on a kitchen table with a plate of pizza, a plate of fried chicken, and a bowl of ice cream on it.”, he realized that all four images did not display drinks and iterated the prompt to explicitly mention “fizzy drinks”. On the other hand, differences between the images ultimately informed the final selection, as participants cited unique backgrounds, objects, and mediums as reasons for selecting the image (e.g., P3 selected the final image because that was the only image that presented a dog putting his paw on the books. ). Uses of Image Generation When participants generated their own images in the free-form task, participants created a variety of images ranging from logos, art, website decorative images, presentations, and music album cover. All participants expressed excitement about using the text-to-image model as part of their image creation process in the future. Participants mentioned with image generation, they can create new types of images they had not created before. P6 mentioned “With SVG editor, I cannot make realistic images. But now I can!” Also, participants mentioned that the quick creation will lead them to use images more often: “Because it's so quick, I will use it for communication. Similar to how sighted people draw on a whiteboard during a Zoom meeting, I can quickly generate an image because representing a concept visually is easier for sighted team members.” (P8). P4 also compared the experience of image generation with image search  “This simplifies things when I’m looking for things very niche, something that is hard to find online.” Finally, participants also mentioned the benefit of creating images alone. P7 said that because there is no need to ask a sighted person to help search images, it brings more autonomy and privacy. Participants also noted limitations and potential downsides of image generation including potential bias (P8, P4), copyright and training data concerns (P3, P4), wanting to use it only for inspiration (P1), and potential errors (P8). However, P8 expressed that he expected future models to produce fewer errors. 
§.§ Reflection

Creators in our formative study currently employ resourceful strategies for creating or searching for images, but all creators expressed excitement to use image generation in their workflow. To improve access to image generation, our formative study reveals design opportunities (D1-D5) to make image generation accessible through technical or social support for:

D1. Authoring prompts that specify content and style.
D2. Understanding high-level image similarities and differences.
D3. Assessing if images followed the prompt.
D4. Accessing image details not specified by the prompt.
D5. Organizing responses to visual questions.

These design opportunities address key user tasks in accessible text-to-image generation: generating the prompt (D1), understanding and selecting images (D2, D3, D4, D5), and revising the prompt for iteration (D4). Our work aims to help creators understand their image generation results through prompt-guided descriptions and comparisons (D2-D5). While providing high-quality descriptions may help creators improve their future prompts (D1), future work should explore how to actively support creators in authoring prompts.

§ SYSTEM

We present GenAssist, a system that supports accessible image generation via prompt-guided image descriptions and comparisons (Figure <ref>). To illustrate GenAssist, we follow Vito, a professional blogger who uses a screen reader to author his articles. Vito recently wrote an article about the benefits of teaching children to cook, and he wants to add an image to the article to engage his sighted readers. He attempts to use image search to find a stock photo of “a young chef” but notices that many of the images are missing detailed captions and alt text, or feature adult chefs instead of children. He decides to create an image using text-to-image generation with the prompt “a young chef is cooking dinner for his parents”. The text-to-image generation model returns four candidates:

[Figure: Four images were generated using the prompt “a young chef is cooking dinner for his parents.” The first image is an illustration of a dad cooking with his two children. The second image is a photo of a young boy cooking alone in the kitchen. The third image is a vector art image that depicts parents and their sons cooking. The fourth image is an illustration of parents and their son, who is a young man, cooking in the kitchen with a wide window.]

To decide whether to use one of these images or change his prompt, Vito enters his prompt and image results into GenAssist.

§.§ Prompt Verification

While the text-to-image model generates output images based on the prompt, the generated image often does not reflect the specifications in the prompt, especially if the prompt is long, complicated, or ambiguous <cit.>. To help users assess how well their generated images adhered to their prompt, GenAssist provides prompt verification. To perform prompt verification, we first use GPT-4 <cit.> to generate visual questions that verify each part of the prompt. We input the text instruction “Generate visual questions that verify whether each part of the prompt is correct. Number the questions.” followed by the user's prompt. GPT-4 outputs a series of questions:

[Figure: An example of prompt verification questions. On the left is the input with the instruction “Generate visual questions that verify whether each part of the prompt is correct. Number the questions.” and the input prompt “A young chef is cooking dinner for his parents.” On the right are the output prompt verification questions: 1. Is there a chef in the image? 2. How old is the young chef? 3. Is the young chef cooking the food? 4. Are the parents present in the image?]
We generate answers to the visual prompt verification questions for each of the four generated candidate images using the BLIP-2 model with the ViT-G Flan-T5-XXL setup <cit.>. For each generated image and prompt verification question, we instruct the BLIP-2 model with the starting sequence “Answer the given question. Don’t imagine any contents that are not in the image.” to reduce hallucinations of non-existent information:

[Figure: An example of prompt verification questions and their answers for each of the four images. The first question is “Is there a chef in the image?” and the answer is yes for all four images. The second question is “How old is the young chef?”; the first three images answer “Young kid” while the last image answers “Young man”. The third question is “Is the young chef cooking food?” and the answers are yes for all four images. The final question is “Are the parents present in the image?” and the answer is yes for all images except the second.]

To help users quickly find which images do or do not adhere to the prompt, we use GPT-4 to summarize the responses to each question using the following prompt: “Below are the answers of four similar images to one visual question. Write one sentence summary that captures the similarities and differences of these results. The summary should fit within 250 character limit”. When using GPT-4's chat completion API, we set the role of the system as “You are a helpful assistant that is describing images for blind and low vision individuals.” The temperature value was set to 0.8. The summaries either indicate that all images have the same answer (e.g., “All images have a chef in the image”), or they alert users to differences:

[Figure: Example summary descriptions for the prompt verification questions. For the question “How old is the young chef?”, the summary description is “Three images depict a young kid, while Image 4 depicts a young man.” For the question “Are the parents present in the image?”, the answer summary is “Three images show parents present in the image, while image 2 does not.”]

To enable screen reader users to easily access the answers to each question, we present the prompt verification results as a table including the prompt verification questions (rows, with the question in column #1), prompt verification summaries (column #2), and per-image prompt verification answers (columns #3-6) (Figure <ref>).

Using our prompt verification table, Vito reads the answer summaries to check if the images follow his prompt. He notices that the 4th image contains an older chef, so it does not apply to his article about teaching children how to cook. While Vito also realizes the 2nd image does not feature the chef's parents, he keeps the image in consideration as it may still apply to his article.
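To make the prompt-verification pipeline concrete, the step described above could be sketched in a few lines of Python. This is a minimal illustration rather than GenAssist's exact implementation: the OpenAI chat-completion calls, the Hugging Face BLIP-2 checkpoint name, and the helper functions are assumptions made for the example, and batching, device management, and error handling are omitted.

import openai  # assumes the pre-1.0 openai client and an API key set in the environment
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# BLIP-2 with a Flan-T5-XXL language model (exact checkpoint name is an assumption).
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
blip2 = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-flan-t5-xxl", torch_dtype=torch.float16).to("cuda")

def verification_questions(prompt: str) -> list[str]:
    """Ask GPT-4 for visual questions that verify each part of the user's prompt."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0.8,
        messages=[
            {"role": "system", "content": "You are a helpful assistant that is "
             "describing images for blind and low vision individuals."},
            {"role": "user", "content": "Generate visual questions that verify whether "
             f"each part of the prompt is correct. Number the questions.\n{prompt}"},
        ],
    )
    text = response["choices"][0]["message"]["content"]
    # Naive parsing of a numbered list such as "1. Is there a chef in the image?"
    return [line.split(". ", 1)[1] for line in text.splitlines() if ". " in line]

def answer_question(image: Image.Image, question: str) -> str:
    """Answer one visual question for one candidate image with BLIP-2."""
    prefix = ("Answer the given question. Don't imagine any contents "
              "that are not in the image. Question: ")
    inputs = processor(image, text=prefix + question,
                       return_tensors="pt").to("cuda", torch.float16)
    out = blip2.generate(**inputs, max_new_tokens=20)
    return processor.batch_decode(out, skip_special_tokens=True)[0].strip()

def summarize_answers(question: str, answers: list[str]) -> str:
    """Summarize the four candidates' answers to one question in a single sentence."""
    listing = "\n".join(f"Image {i + 1}: {a}" for i, a in enumerate(answers))
    response = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0.8,
        messages=[{"role": "user", "content":
                   "Below are the answers of four similar images to one visual question. "
                   "Write one sentence summary that captures the similarities and differences "
                   "of these results. The summary should fit within 250 character limit.\n"
                   f"Question: {question}\n{listing}"}],
    )
    return response["choices"][0]["message"]["content"].strip()

In this sketch, running verification_questions once per prompt, answer_question once per question-image pair, and summarize_answers once per question yields the rows of the prompt verification table described above.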
§.§ Visual Content & Style Extraction

Generated image candidates often feature similarities or differences that are not present in the original prompt. For example, Vito's prompt “A young chef is cooking dinner for his parents” does not specify the style, such that the resulting images include three illustrations and one photo. To enable access to image content and style details that were not specified in the prompt, we extract the visual content and visual style of the generated image candidates. To surface content and style similarities and differences that are important for improving image generation prompts, we used text-to-image prompt guidelines <cit.> to inform our approach. We first created a list of visual questions about the image based on existing prompt guidelines, i.e., prompt guideline questions. The prompt guideline questions consist of questions about the content of the image (subjects, setting, objects), the purpose of the image (emotion, likely use), the style of the image (medium, lighting, perspective, color), and an additional question about errors in the image to surface distortions in the generated images such as blurring or unnatural human body features (Table <ref>).

To answer our prompt guideline questions for each image, we answered 5 questions (setting, subjects, emotion, likely use, colors) using Visual Question Answering with BLIP-2, similar to our prompt verification approach:

[Figure: An example of content and style questions answered by BLIP-2 for the same tutorial images. For the question “What is the setting of the image?”, the answers are all kitchen. For the question “What are the subjects of the image?”, the answers are father and children for the first image; chef, kitchen, and vegetables for the second image; and father, mother, and son for the third and fourth images. For the question “What is the emotion of the image?”, all answers are happy. For the usage question “Where would this image be used?”, the answers are “on a website”, “in a cookbook”, “a children's cooking class”, and “on a website”. Finally, for the question “What are the main colors?”, the first image answers “Brown, blue, yellow”, the second image answers “Black, white, red, green”, the third image answers “blue and white”, and the final image answers “Red, yellow, green”.]

For our objects question, we used Detic <cit.>, a state-of-the-art object detection model, with an open detection vocabulary and a confidence threshold of 0.3 to enable users to access all objects:

[Figure: An example of the object detection results using Detic per image. The first image depicts a spoon, pot, cup, tub, apron, bowl, etc. The second image depicts a spoon, sink, tomato, lettuce, hat, bowl, etc. The third image shows a spoon, fork, knife, apple, sausage, plate, etc. The fourth image shows a spoon, pot, window, flowerpot, plate, frog, etc.]

For the remaining questions covering medium, lighting, perspective, and errors, we answer the question for each image candidate by using CLIP <cit.> to determine the similarity between the image and a limited set of answer choices (similar to CLIP interrogator <cit.>). To provide answers that could inform future prompts, we curated our answer choices for medium, lighting, and perspective from Midjourney's list of styles <cit.> and DALL-E's prompt book <cit.>. To address common image generation errors, we retrieved the answer choices for our errors question from prior work <cit.>. We include the full list of answer choices in the Supplementary Material. For each question, GenAssist presents the top three answer choices with a similarity score between the answer choice embedding and the image embedding above a threshold of 0.18:

[Figure: The content and style answers retrieved using the CLIP model. For the medium of the image, image 1 answers cartoon, storybook, and illustration; image 2 answers a stock photo; image 3 answers vector art; and image 4 answers cartoon, storybook, and illustration. For the lighting of the image, all four images answer natural lighting. For the perspective of the images, images 1, 3, and 4 answer medium shots while the second image answers centered shots. For the errors in the image, only image 1 answers poorly drawn hands.]
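The CLIP-based matching just described can be approximated with publicly available CLIP models. The sketch below is illustrative only: the checkpoint name and the short list of medium choices are assumptions (the full curated answer lists are in the Supplementary Material), while the top-three selection and 0.18 similarity threshold follow the description above.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Checkpoint name is an assumption; the paper does not name the exact CLIP variant used.
clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

# Hypothetical subset of the curated "medium" answer choices.
MEDIUM_CHOICES = ["a photograph", "an illustration", "vector art",
                  "a watercolor painting", "a 3D render"]

def top_answer_choices(image: Image.Image, choices: list[str],
                       threshold: float = 0.18, k: int = 3) -> list[tuple[str, float]]:
    """Rank answer choices by CLIP image-text cosine similarity; keep the top k above a threshold."""
    inputs = clip_processor(text=choices, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = clip(**inputs)
    image_emb = outputs.image_embeds / outputs.image_embeds.norm(dim=-1, keepdim=True)
    text_emb = outputs.text_embeds / outputs.text_embeds.norm(dim=-1, keepdim=True)
    sims = (image_emb @ text_emb.T).squeeze(0)  # one cosine similarity per answer choice
    ranked = sorted(zip(choices, sims.tolist()), key=lambda pair: pair[1], reverse=True)
    return [(choice, score) for choice, score in ranked[:k] if score >= threshold]

# Example: medium = top_answer_choices(Image.open("candidate_1.png"), MEDIUM_CHOICES)

Running the same function with the lighting, perspective, and error answer choices would fill the remaining rows of the visual style table.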
To inform creators about unfamiliar visual style types, GenAssist provides the definition and the usage of each answer choice for the visual style questions (Medium, Lighting, Perspective) by generating the description with GPT-4 and the prompt “Describe the definition and the usage of the following [QUESTION NAME] in one sentence: [STYLE NAME]”. Similar to the prompt verification table, we present the prompt guideline results in a table format including the prompt guideline questions (rows, with the question in column #1), prompt guideline summaries (column #2), and per-image prompt guideline answers (columns #3-6). We further split the prompt guideline results into two tables to improve ease of navigation: the visual content table includes answers to the content and purpose questions, and the visual style table includes answers to the style and errors questions. Finally, users can ask their own questions at the bottom of either table, and GenAssist adds a row to the table by generating the answer for each image using BLIP-2 and the summary of answers using GPT-4.

Using the visual content table, Vito notices from the objects summary that Image 1 has more food items than Images 2-4. As the purpose of the article is partially to introduce children to more ingredients, he decides to remove Image 1 from consideration. Using the visual style table, Vito realizes that Image 2 is a photo, while the other images are illustrations. As Vito was initially searching for a photo, he notes he may want to further refine his prompt to get more photo results. Vito also wants to check if the images will match his blog, which is primarily black and white, so he adds a question about the background color:

[Figure: The summary description of the additional question. When asked “What color is the background?”, the summary description is “Image 1 and Image 4 are light brown, Image 2 is black and Image 3 is blue.”]

As Image 2 fits his article and includes a black background, he ranks Image 2 as his current top choice.

§.§ Description Summarization

To enable users to quickly assess their image results, we summarize the results from our pipeline to create a per-image description for each image and a summary of image similarities and differences. To generate per-image descriptions, we first obtain the BLIP-2 caption for each image that provides a concise overview of the image content (e.g., “A family preparing food in the kitchen with a window.”). Then, we obtain additional detail about the image by generating questions about the caption with GPT-4 with the prompt: “Given the caption, generate 10 visual questions that are likely to be asked by blind and low vision individuals”.
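This caption-guided questioning step might look roughly as follows. The quoted prompt is taken from the text above, but the checkpoint name, function structure, and list parsing are assumptions made for illustration; the BLIP-2 setup mirrors the prompt-verification sketch.

import openai
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Same BLIP-2 setup as in the prompt-verification sketch (checkpoint name is an assumption).
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
blip2 = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-flan-t5-xxl", torch_dtype=torch.float16).to("cuda")

def blip2_caption(image: Image.Image) -> str:
    """Concise BLIP-2 caption used as the seed for image-specific follow-up questions."""
    inputs = processor(image, return_tensors="pt").to("cuda", torch.float16)
    out = blip2.generate(**inputs, max_new_tokens=30)
    return processor.batch_decode(out, skip_special_tokens=True)[0].strip()

def caption_questions(caption: str) -> list[str]:
    """Ask GPT-4 for caption-specific visual questions (numbered-list parsing is simplistic)."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0.8,
        messages=[{"role": "user", "content":
                   "Given the caption, generate 10 visual questions that are likely to be "
                   f"asked by blind and low vision individuals.\nCaption: {caption}"}],
    )
    text = response["choices"][0]["message"]["content"]
    return [line.split(". ", 1)[1] for line in text.splitlines() if ". " in line]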
Unlike the other questions in our pipeline that are common across all images, this step enables the to ask image-specific questions to add detail (e.g., “What is the view outside the window?” is only asked for Image 4). We generate the answers to these questions using BLIP-2. We create individual image descriptions by first aggregating all information acquired in our pipeline for each image including the prompt verification, prompt guideline, and caption-detail question-answer pairs for each image. Then, we guide GPT-4 with the aggregated visual information and the prompt “Below is the information of an image. Write a description of this image for the blind and low vision audience. Describe the medium first. Your response should fit within 250 character limit. Do not add additional information that was not provided. Do not describe parts that are not clear or cannot be determined from the given information.” GPT-4 generates rich descriptions for each image (Figure <ref>). To generate the comparison description, we simply provide all the information extracted from our pipeline to GPT-4 with the prompt “Below is the information for four images. Write one paragraph about the similarities between the four images and one paragraph about the differences between the four images. The summary should be concise.”. GPT-4 briefly summarizes the image similarities and differences (Figure <ref>). To help users quickly assess whether to revise their prompt or continue exploring, we present the comparison description and per-image description at the top of the page before the prompt verification and prompt guidelines tables. With the per-image description, Vito can quickly recall the content of Image 2 before making his final selection. With the comparison descriptions, Vito can quickly notice that Image 2 was the only image that contained a photo, then updated his prompt to get additional photos rather than illustrations. §.§ Implementation We implemented using Gradio <cit.>, an open-source Python library for the front-end web interface. The interface was deployed through Hugging face [https://huggingface.co/spaces] space with an NVIDIA A100 GPU (large, 40GB GPU Memory). Uses' interaction logs were saved in the Firebase database. We followed the guidelines of W3C <cit.> and tested the compatibility of the with all three major screen readers: NVDA, JAWS, and VoiceOver. 's tables follow the recommendations of W3C tables with two headers [https://www.w3.org/WAI/tutorials/tables/two-headers/]. § PIPELINE EVALUATION We measured the coverage of the descriptions generated by and the accuracy of the information presented in 's tables. We compare the coverage of -generated caption with the human-generated caption and the caption generated by a state-of-the-art image captioning model BLIP-2 <cit.>. §.§ Method We selected 20 image sets (20 prompts x 4 generated images for each prompt = 80 total images) from Midjourney's community feed spanning different prompt lengths, content types, and styles. We recruited two people with experience describing images to provide descriptions for 10 randomly selected image sets each. For each image set, the describers provided descriptions of each individual image, and the similarities and differences between the images. We provided describers with prompt guidelines <cit.>, image description guidelines <cit.>, an example set of descriptions created by , and the prompt for each image set to inform their descriptions. 
Both describers spent 3.5 hours to create descriptions for the 10 sets of images — or around 21 minutes per image set. We compared the coverage of -generated descriptions to those generated by a baseline captioning tool (BLIP-2) and human describers. For comparison, we annotated the similarities and differences descriptions for all 20 sets of images and annotated the individual descriptions for 10 sets of images. We chose the 10 sets with the longest human descriptions to compare with the highest quality descriptions. Because BLIP-2 cannot take multiple images as input to extract similarities and differences, we generated captions of the 4 images using BLIP-2, then prompted GPT-4 with the same prompt we used in our system to generate summary descriptions. We tallied whether the descriptions contained details about the image in each of our set of pre-defined visual information categories (Table <ref>). We counted only the correct information in the descriptions. One of the researchers annotated the descriptions and the other researcher reviewed the annotations. To compute the accuracy of the detailed visual information in , one of the researchers examined the 20 sets of images with the three tables generated by the (prompt verification table, visual content table, and visual style table) and counted the number of correct and incorrect answers in each table. §.§ Results §.§.§ Coverage   We summarize our coverage evaluation results in <ref>. Overall, 's comparison descriptions covered more similarities and differences than the human describers'. In the coverage of differences, spotted more than twice the number of total differences than the human describers (4.55 vs. 2.25). The coverage of 's individual image descriptions was comparable to that of human describers. When compared to human-generated description, captured more information about the content and styles but revealed fewer image generation errors. For instance, one human describer specified in the comparison description “...All of the images have some AI generation error with fingers or clothing. ”. While and the baseline used the same GPT-4 prompt to extract the similarities and differences, the baseline's comparison description did not capture many differences. §.§.§ Accuracy <ref> summarizes the results of the accuracy evaluation. Prompt verification, content, and style categories all achieved over 90% accuracy except for medium, perspective and emotion. In the 80 images in the dataset, only detected five images as having errors, and detected the correct error types in three of them. The most common errors made in our pipeline were from perspective, medium, and error categories which are all extracted using the CLIP score. For perspective and medium, the majority of the errors were due to CLIP matching images to common style expressions (e.g., natural lighting, centered-shot) which likely reflects prevalence of these expressions in the training data. In the incorrect output of errors, detected cartoon or sketch images as `poorly drawn faces' errors. One reason for the relatively low accuracy of object detection results is that we empirically set the output threshold of 's object detection (Detic) as 0.3 to present diverse objects to users in addition to information about the main subject extracted by BLIP-2 in our pipeline. § USER EVALUATION We conducted a user study with 12 BLV visual content creators to compare with a baseline interface. 
§.§ Method In a within-subjects study, participants used and a baseline interface to interpret image generation results (interpretation task) and to generate images (generation task). Participants We recruited BLV creators who create or use visual assets on a regular basis using mailing lists (P7-P18, Table <ref>). Participants described their vision as totally or legally blind and they were students, consultants, software engineers, video creators, and artists. P7 and P8 participated in the formative study. Baseline The baseline interface included for each image: the image caption from BLIP-2 <cit.>, a list of objects from Detic <cit.>, and the ability to interactively ask visual questions powered by BLIP-2 <cit.>. We designed the baseline to encompass commonly used captioning and object detection tools available in commercial devices and applications (e.g., SeeingAI <cit.>). As such captions tend to be concise, we added visual question answering via BLIP-2 <cit.> to let participants gain additional information on-demand. Procedure We first asked participants demographic and background questions about how they use images in their work. We then gave a 15-minute tutorial on both the interface and the baseline interface using S0 (<ref>). Participants then completed two tasks: the interpretation task and the generation task. In the interpretation task, participants used both interfaces to evaluate pre-generated images (Figure <ref>). For each set of images, we provided participants with an example scenario (e.g., Select an image for a blog post titled `My grandma still dances!'). Using or the baseline interface, participants were asked to identify the similarities and differences in the image candidates and choose a final image. For each interface, users were given one short prompt image set (S1 or S3) and one long prompt image set (S2 or S4). The order of the interfaces and image sets were counterbalanced and randomly assigned to participants. After each interface, we conducted a post-stimulus survey that included the following ratings: Mental Demand, Performance, Effort, Frustration, and Usefulness of the caption in understanding differences between images. All ratings were on a 7-point Likert scale. In the generation task, we provided participants with the title and first 5 paragraphs of two articles, then asked participants to create a relevant image for the article by coming up with their own prompts. We selected the two articles from the New York Times: `Why Multitasking is Bad for You' and `My Kids Want Plastic Toys. I Want to Go Green.' <cit.>. The order of the interfaces and articles was counterbalanced and randomly assigned to participants. After each interface, we asked the participants to choose one image from the generated images and explain their reasoning. We also conducted a post-stimulus survey that included the following ratings: Mental Demand, Performance, Effort, Frustration, Usefulness of the caption, Satisfaction with the final image, and Confidence in posting the final image. All ratings were on a 7-point Likert scale. At the end of the study, we conducted a semi-structured interview to understand participants' strategies using   and the pros and cons of both   and the baseline. The study was 1.5 hours long, conducted in a 1:1 session via Zoom, and approved by our institution's IRB. We compensated participants 50 USD for their time. Analysis We recorded the study video, user-generated prompts and images, and the survey responses. 
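The Likert-scale ratings from the post-stimulus surveys are compared in the Results below using the Wilcoxon signed-rank test, a paired non-parametric test suited to ordinal ratings. A minimal sketch of such a comparison (with placeholder ratings, not the study data) might be:

from scipy.stats import wilcoxon

# Placeholder 7-point Likert ratings for 12 participants under the two conditions.
ratings_genassist = [2, 1, 2, 1, 3, 2, 1, 2, 2, 1, 3, 2]
ratings_baseline = [4, 3, 5, 4, 4, 3, 5, 4, 3, 4, 5, 3]

statistic, p_value = wilcoxon(ratings_genassist, ratings_baseline)
print(f"Wilcoxon signed-rank: W = {statistic:.1f}, p = {p_value:.3f}")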
We transcribed the exit interviews and participants' spontaneous comments during the tasks and grouped the transcript according to (1) strategies of using and (2) perceived benefits and limitations of our system. §.§ Results Overall, all participants stated they would like to use   rather than the baseline interface to create images in the future. Participants expressed that would be immediately useful in their workflows: “This is usable out of the box!” [...] “I need access to this technology” (P14), “I’d even pay for this! I really need this” (P15). In particular, participants rated to be significantly more useful for understanding the differences between images in both tasks (interpretation: μ=1.50, σ=1.00 vs. μ=3.58, σ=4.00; Z=-2.31; p<0.05; generation: μ=1.92, σ=2.00 vs. μ=4.33, σ=5.00; Z=-2.77; p<0.01) (Figure <ref>). For the interpretation task, participants reported significantly better performance (μ=1.83, σ=2.00 vs. μ=3.67, σ=3.00; Z=-2.47; p<0.05), significantly less frustration (μ=1.75, σ=1.00 vs. μ=3.50, σ=3.50; Z=2.46; p<0.05), and effort (μ=2.25, σ=2.00 vs. μ=4.00, σ=4.00; Z=-2.00; p<0.05). For generation tasks, participants rated that they were significantly more satisfied with the final image (μ=3.17, σ=3.00 vs. μ=5.00, σ=5.50; Z=-2.17; p<0.05). Significance was measured with the Wilcoxon Signed Rank test. Gaining a summary of image content With across both tasks, all participants started by reading the summary table including the comparison description (summary of similarities and differences), as well as the per-image descriptions. Participants all stated that the summary table was helpful for understanding the images they generated, as P6 explained: “I cannot do without the summary. Highlighting the differences was very useful.” (P6). In addition, participants noted that the summary table's per-image descriptions were valuable for understanding the images. For example, P19 mentioned “This is more like an audio description because I can make a very clear mental image!” and slowed down his screen reader pace to mimic the experience of listening to an audio description. P20 reported “I always thought that AI is not as capable of describing as humans, because usually alt-text generated by AI is short and doesn’t capture much information. But reading this, I am rethinking AI’s capabilities.”. P12 found the detailed descriptions particularly helpful when authoring rather than interpreting images: “The first table (comparison description table) is so comprehensive. When I'm authoring images I need more information than when I'm looking at what others uploaded.” (P12). Using the baseline, participants all initially read all of the information they had access to (the caption and objects) for each image. all participants mentioned the inconvenience of having short image captions for gaining an overview, especially when the generated images are similar to each other. For example, after reading the BLIP-2 caption of S4, P18 asked “Are they all same images?” Selectively accessing additional information While all participants accessed the summary table first, we observed multiple strategies of using additional information provided by to understand the differences between the generated images. First, P9, P7, P16, P18, and P20 checked the information from all tables before making their decision. P20 mentioned “They are equally important but in different ways. If the generated images are different, the summary table would be sufficient. 
For similar ones, I’d have to go down the tables more.” P16 noted “We never have too much information. All the details provided here matter to me”. After checking all the tables, P18 and P20 revisited the summary table again to remember and organize all information. The other seven participants (P10-P12, P8-P15, P17, P19) checked the tables selectively. Participants' preferences reflected their prior experiences creating images. For instance, P7 who typically creates images using an SVG editor prioritized the prompt verification table. He said  “I detail more things in the prompt and want everything to be in the image, `cause I am more used to programming-drawing.” P13 skipped the style and errors table as he was not familiar with the concepts despite the definitions provided:  “As a born blind person, most information in the visual attributes is not useful as it's hard to imagine those.” Participants also mentioned that they liked that provided the breakdown of the summary description into multiple tables. P16 described that has “So much transparency because it provides access to intermediate tables that constitute the summary table, just like a [prramming tool]! I can look at the inside of the models and see what they're doing.” P10 and P11 both mentioned that they appreciated the order of the tables: “The summary [table] is the bigger picture. Then the tables go into the details. I also like that the prompt questions come first because they're important.” Participants also employed multiple strategies for navigating within the tables. Participants browsed through questions in the tables to identify questions they found to be important and skipped questions that were less important (e.g., not interested, or already appeared in the summary descriptions). We also identified multiple patterns of navigating within the tables. Participants checked all cells in a row when they found the table to be important. For instance, P11 checked the answers of all four images in the prompt verification table. In other cases, participants first checked the questions, then decided whether to read the row or skip to the next row. Participants skipped rows if the answers to the questions were already mentioned in the summary table, or if they were not interested in the question. For example, P8 skipped the medium, lighting, and perspective row in the visual style & errors table and only attended to the error row. Sometimes, participants only checked the answer cells if the summary column highlighted the differences between the images and skipped to the next row if the summary stated mainly the similarities between the images. Participants stated that 's table format was easy to navigate. P19 noted the ease of navigation within the table: ”I like having control with the tables. If the question or summary doesn't seem interesting, I can skip to the next row instead of reading all answers of four images.” Asking additional information With the baseline, most participants (12 participants in the interpretation task, 9 participants in the generation task) asked follow-up questions to try to understand the images, while with our system participants rarely asked follow-up questions (1 participant in the interpretation task and none in the generation task). P16 was the only participant who asked additional visual questions with after reading the table (`Is the data showed falling or rising?' and `What is the date of the x-axis?' for S3 in Figure <ref>). 
When asked about the reason for not asking any additional questions, P18 said “Looking at captions I already had a big picture so I didn’t ask additional questions.” P7 similarly reflected: “I like that [] asks questions that I haven’t thought of but are still important. The answers to the questions told me additional stuff about the images.” In contrast, with the baseline interface, participants asked many additional visual questions. Because each image was presented separately, participants often asked the same question for each image to compare the answers. Most of the questions were about the objects detected, especially when the object was not mentioned in the caption or did not seem relevant to the setting (e.g., P11 asked “Where is the beachball in the picture?” after reading the object detection results of an image with the kitchen setting). P10 who experienced the baseline condition after reflected that “This one [Baseline] is not simply laid out for me. The previous one [] is easy peasy presenting everything for me. And this one is `Here you have to figure out.” Refining and Iterating Prompt In the generation task, none of the participants refined the prompt using the baseline and five participants refined the prompt when using (P9, P10, P13, P16, P17). Among the remaining 7 participants, 5 participants reported that they did not iterate as they were satisfied with the results, and 2 participants were unsure how to iterate the prompt after realizing that the image generation model did not reflect some parts of the original prompt (P15, P20). Participants often quickly made the decision to revise the prompt while reading the summary table and before they moved on to other tables. For instance, while generating an image about an article about multitasking, P10 first attempted to generate an image with the following prompt `A woman who is holding the iPhone is texting on it while she glances at another device which displayed some funny videos going on. She's in the kitchen trying to cook. it looks like the food is smoking' <ref>. However, she quickly noticed that most of the images generated depicted the woman as smoking instead of the food as smoking. She quickly iterated the prompt by replacing the word with `smoldering' to generate a new set of images. In addition, participants reported that informed them about the capabilities of the image generation model and guided them to refine their prompts. P20 mentioned “After reading the tables, it makes me think of what AI is capable of generating and what is not. It can't exactly reflect what I try to accomplish when the prompt is too complicated, so I will have to adjust my expectation and adjust my prompt.” Participants also noted that is helpful for learning how to generate a detailed prompt (P7, P16, P17). P16 stated “Visual [styles & errors] table is helpful for learning new styles.” Similarly, P7 said “If I don't specify the styles, I think AI is generating [the styles] based on the context and content. So I know which style is good for which.” Selecting an Image Candidate To choose the final image from the four image candidates in the generation task, participants using often considered whether the image followed the prompt, whether additional details added by the generation model were relevant, and whether the image style or emotion was appropriate to the usage context. P17 said “I choose the third image because it has the information that I described. 
Also, P7 mentioned “I will not choose the cartoon image because I want to be more serious here.” Some participants changed the choice of image as they moved on to the next tables in the . For example, P8 who generated images of multiple plastic containers to portray the pollution problem updated his choice as he read the style and errors table: “Oh so the last image has many colors, I want to change to this one because I want it to be colorful!” Noticed and unnoticed errors Participants encountered errors using both interfaces. In the baseline, all participants read the objects following the captions, but objects occasionally contained errors (e.g., labeling as another object that has similar shapes, colors, or textures). When the participants noticed objects irrelevant to the context, they often asked about the object but the questions about non-existent objects often led to further confusion. For instance, P11 asked `Where is the television?' for an image where a television is not present. Because the answer generated by BLIP-2 was `There is no television.', P11 was more confused and did not consider the image due to uncertainty. Also, P16 asked `Where is the lollipop in the image?' for an image without a lollipop (S1 in <ref>) and BLIP-2 answered with a hallucination `In the man's mouth.', misleading P16. While features the same list of objects, participants did not experience this issue as they prioritized other information or recognized misinformation by referencing across multiple information sources. While using , P10, P7, P16 pointed out that some visual information in the tables conflicted with one another. For instance, in the second image of S2 (<ref>), the summary table stated that the woman is walking in the street, but when the asked  `Is she dancing?' for prompt verification, BLIP-2 answered with `Yes', which confused the participants. P16 hypothesized that the caption mentioned walking because the dancing action is hard to capture in one image frame and thus the image is actually showing her dancing. Still, participants did not notice inaccurate information in if there was no conflict. For example, a woman was described as looking happy but had a neutral expression (the 4th image for P10's 2nd prompt in <ref>). P10 removed the image from consideration as she wanted the woman to look stressed rather than happy. Future improvements for Participants noted suggestions on how to improve 's description in the future. First, P9 and P8 participants noted that the visual information provided by was long and difficult to process at once. This reflects users' subjective ratings on mental demand which is comparable in and the baseline in the interpretation task. Participants suggested allowing users to remove image columns and question rows from consideration. P8 mentioned “I want to filter images based on certain answers so that from then on, I won’t consider all four images and it will be easier!” P17 also shared that he wanted to learn from his interactions with the cells so that gradually it will present only the rows of interest. Participants mentioned the difficulties of writing good prompts. P13 said “Even if I read the definitions about the style, it's hard to feel what effect it will give.” In the generation task, none of the participants specified the medium in the prompt as they were not familiar with it. This often resulted in the image generations having varied styles. 
In addition, P7 and P16 mentioned that it is difficult to decide on what content to put in the prompt to effectively convey the message. P16 mentioned “I want to give it the whole book and make it generate.” After experiencing that the generation model cannot reflect all the details in the prompt when the prompt is too long and complex, P12 stated “I want [] to tell me what [the generation model] can generate and what it can not.” § DISCUSSION In this section, we reflect on our findings from the development and evaluation of . We also discuss future opportunities for research exploring accessible media authoring tools. Scope of uses a text-to-image generation model <cit.> to generate image candidates, vision-language models <cit.> to extract visual information, and a large language model <cit.> to synthesize descriptions. The scope of reflects the limitations of the models it uses. First, we designed to support the images that text-to-image generation models currently support: content-driven photos or illustrations with simple structures. However, both text-to-image generation and do not yet support images that are information-rich or densely structured such as information visualizations <cit.> or diagrams <cit.>. As text-to-image generation improves, future research will explore extending to complex graphics with text. For example, could help creators recognize if their prompt-generated diagram contains the desired text (by integrating Optical Character Recognition), relationships, and perceptual qualities (e.g., legibility, saliency of important information). Second, the descriptions that GenAssist is capable of providing are also limited by the capabilities of the pre-trained vision-language models <cit.>. For example, while GenAssist helped creators notice image generation errors such as omitted prompt details <cit.>, distortions to human bodies <cit.>, and objects placed illogically <cit.>, some errors remained undetected. Also, occasionally included hallucinations (e.g., missing or non-existent objects) in the descriptions. While these issues may be mitigated with improvements to text-to-image models (e.g., better aligning with human preferences <cit.>) and vision language models (e.g., better composition reasoning <cit.>, reducing hallucinations <cit.>), could also learn what prompts are prone to generation errors and guide BLV creators in creating strong prompts. Finally, while ’s pipeline surfaced large differences between images (e.g., different objects, characters, expressions, or styles), its descriptions often missed smaller differences between images that were less likely to be described in training data captions (e.g., slightly different compositions or makeup styles). Thus, is currently useful in the early stages of prompt iteration, where large differences between images remain. In the future, could detect detailed changes by adding more detailed or domain-specific content and style questions, or integrating vision models that explicitly compare images <cit.>. Understanding Multiple Images Creators in the formative study revealed that it is difficult to understand multiple images at the same time (D2. Understanding high-level image similarities and differences). 
To tackle this challenge, we designed with three strategies: (1) providing the overview of similarities and differences between the generated image candidates, (2) progressively disclosing the information from high-level to low-level to give the user control over the level of detail received <cit.>, and (3) presenting the descriptions in a table format so that users can easily navigate between images to compare them. Participants highlighted that not only these detailed summaries but also the ability to selectively gain information about the underlying questions were helpful in narrowing down their choices. For example, some participants prioritized the prompt verification table to assess if the image followed their instructions (D3. Assessing if images followed the prompt), and other participants used the content and style table to learn how to improve their prompts (D4. Accessing image details not specified by the prompt). In the future, could support sorting or filtering images based on visual attributes to limit the number of images they consider at once (e.g., sorting images based on prompt adherence or filtering images that have AI-generated distortions). could also read image descriptions with multiple voice styles to help creators distinguish generation candidates. 's ability to attend to multiple similar images and surface differences can be useful in broader contexts. Our study participants expressed interest in using for comparing image search results or similar photos in social media. It can also help BLV people in decision-making situations based on visual information (e.g., online shopping, communicating with the design team in the software development, selecting a photo from similar shots). Implications for Visual Question Answering Comparing to our baseline of typical descriptions with visual question answering (VQA), all participants rated as more useful for understanding differences between images and creators asked fewer follow up questions with . reduced follow-up questions by predicting visual questions based on the formative study and applying the questions to multiple images. Our predict-ask-summarize approach also reduced the requirement for reading individual question answers. Future VQA systems intended for real-world environments may benefit from our approach as repetitive questions, “unknown unknowns”, and complex visuals are likely. Support in Creating Prompts In the formative study, we distilled the need to support creating prompts (D1. Authoring prompts that specify content and style). While we do not directly support prompt creation, we designed our system to reveal visual content and styles based on prompt guidelines to inform users about details the model filled in. In the user study, participants cited that reading the tables in helped inform their prompt iterations and learn about what styles to use. Prior work has explored using structured search for visual concepts for writing prompts <cit.>, and combining our system with such prior work is a promising avenue for future work. We are currently exploring suggesting content and styles for the prompt when the user specifies the context of image use and new ways to help users add specificity to their prompt (e.g. a chatbot, as suggested in the formative study). In addition to text input, we can also consider multimodal input from users in the future such as image prompts <cit.>, sketch prompts <cit.>, or music prompts <cit.> to create an image for a music album cover, as desired by P6. 
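To make the predict-ask-summarize flow discussed above concrete, the following minimal Python sketch runs the same set of predicted visual questions against every image candidate and then summarizes the answers. The helpers generate_images, answer_visual_question and summarize_rows are hypothetical stand-ins for the text-to-image model, the visual question answering model (e.g., BLIP-2) and the large language model; they are not part of any released API.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the three models the pipeline relies on: the
# text-to-image generator, the VQA model (e.g. BLIP-2) and the LLM that
# writes summaries. Names, signatures and return values are illustrative.
def generate_images(prompt, n):
    return [f"<image {i} for: {prompt}>" for i in range(n)]

def answer_visual_question(image, question):
    return "unknown"          # a real system would query the VQA model here

def summarize_rows(prompt, table):
    return f"{len(table)} questions asked about each candidate for '{prompt}'."

@dataclass
class DescriptionTables:
    summary: str
    per_question: dict        # question -> {candidate index -> answer}

def predict_ask_summarize(prompt, questions, n_candidates=4):
    """Sketch of the predict-ask-summarize flow: ask the same predicted
    visual questions of every candidate, then summarize the answers so
    similarities and differences can be compared row by row."""
    images = generate_images(prompt, n_candidates)
    per_question = {
        q: {i: answer_visual_question(img, q) for i, img in enumerate(images)}
        for q in questions
    }
    return DescriptionTables(summarize_rows(prompt, per_question), per_question)

tables = predict_ask_summarize("a watercolor fox", ["What is the style?",
                                                    "Is the fox facing left?"])
print(tables.summary)
```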
Supporting Creators with Different Visual Impairments BLV creators’ interest in color or style information (e.g., medium, lighting, angle) often depended on their prior experience with visuals and onset of blindness. supports creators in selectively accessing description details, but in the future will let creators control which details to filter out or prioritize. To support creators without knowledge of visual style, could recommend popular styles given the image's intended use, provide style descriptions, or deliver style in another modality (e.g., sound <cit.>, tactile interfaces). We will also improve GenAssist in the future to support users with remaining vision beyond providing descriptions. For example, GenAssist could provide descriptions based on the current zoom viewing window or support further visual edits to the generated images, as desired by P1. Implications of on Creativity Text-to-image generation models have sparked conversations about their implications for creativity. For BLV creators, image generation can improve creative agency compared to existing approaches for creating or selecting images. In our formative study, creators wanted to use image generation as it provided fewer limits over content and style than searching for images online and greater autonomy than asking a sighted person to create the image. supports BLV creators in exercising creative control over generated images by letting creators examine image details to revise the prompt or make an informed selection. Compared to sighted artists who use generated images primarily as references <cit.>, BLV creators often intend to use generated images directly. In the future, will further creative control by supporting prompt-based editing <cit.>. Implications of on Communication We designed to support communication goals of BLV creators. BLV creators in our formative study aimed to create images to express their ideas to a broad audience and achieve self-expression. Images are particularly useful for capturing visual attention and communicating with sighted people who have difficulty reading text. For example, P4 generated an image of his family to share with his child. BLV creators also wanted to use in the workplace and on digital platforms. As exists in an ableist environment that prioritizes visual communication, there is a risk that may cause sighted people to expect image-based communication from BLV people. Tools like must be coupled with research and activism to make digital, workplace, and educational environments accessible — e.g., enabling non-visual communication and providing access to existing visuals. Our work also reveals that generated images themselves should be shared with descriptions in addition to the prompt that might not accurately reflect the image. Generative AI for Accessible Media Authoring Advances in large-scale generative models enable people to create new types of content, yet no existing research has explored people with disabilities as the users of these tools <cit.>. We see opportunities for generative AI models to broaden the type of content that people with disabilities can create. For example, our study participants mentioned that they are interested in using generative models for creating dynamic graphics like cartoons and videos. Similarly, generative models may be useful for people with motor impairments authoring visual media, or people with hearing impairments authoring music. § CONCLUSION We created , an accessible text-to-image generation system for BLV creators. 
Informed by our formative study with 8 BLV creators, our interface enables users to verify the adherence of generated images to their prompts, access additional image details, and quickly assess similarities and differences between image candidates. Our system is powered by large language and vision-language models that generate visual questions, extract answers, and summarize the visual information. Our user study with 12 BLV creators demonstrated the effectiveness of our approach. We hope this research will catalyze future work in supporting people with disabilities to express their creativity.
http://arxiv.org/abs/2307.05006v1
20230711035700
Improving RNN-Transducers with Acoustic LookAhead
[ "Vinit S. Unni", "Ashish Mittal", "Preethi Jyothi", "Sunita Sarawagi" ]
cs.CL
[ "cs.CL", "cs.LG", "eess.AS" ]
RNN-Transducers (RNN-Ts) have gained widespread acceptance as an end-to-end model for speech-to-text conversion because of their high accuracy and streaming capabilities. A typical RNN-T independently encodes the input audio and the text context, and combines the two encodings by a thin joint network. While this architecture provides SOTA streaming accuracy, it also makes the model vulnerable to strong LM biasing, which manifests as multi-step hallucination of text without acoustic evidence. In this paper we propose LookAhead, which makes text representations more acoustically grounded by looking ahead into the future within the audio input, conditioning the text encoder on (noisy) hypotheses derived purely from the audio encoder for a fixed number of future time-steps. This technique yields a significant 5%-20% relative reduction in word error rate on both in-domain and out-of-domain evaluation sets. Index Terms: speech recognition, RNN transducer, acoustic hallucinations § INTRODUCTION RNN-Transducers (RNN-Ts) <cit.> are the predominant choice for end-to-end automatic speech recognition (ASR), offering both high accuracy and streaming capabilities <cit.>. They comprise a speech encoder that processes speech to generate an acoustic representation and a text encoder that is conditioned on label outputs from previous time-steps to generate a textual representation. Both the acoustic and textual representations are further combined by a simple joint network to predict the final output sequence. Apart from making the model streaming-friendly, separate speech and text modules in RNN-Ts also allow text-only data to be used in training the text encoder <cit.>. While having separate speech and text modules in RNN-Ts has its benefits, it also makes the model vulnerable to strong biases from the language model. Driven by strong textual priors, the representation from the text encoder can be strongly biased towards an output unit that is eventually adopted by the joint network but has no acoustic correlates in the speech input. Such outputs can be considered hallucinations that arise from the overconfidence of the language model in the RNN-T <cit.>. This problem is more severe when the RNN-T is used to decode out-of-domain utterances. Apart from the more egregious hallucination errors, we also find that language model biases in RNN-Ts lead to word-boundary errors. For example, “villeroy took” is mispredicted by an RNN-T baseline as “villar I took”. Hallucinated outputs have been studied far more in the context of neural machine translation, where the decoder language model hallucinates content that is not aligned to the source sentence <cit.>.
In contrast, the problem of hallucination in RNN-Ts, that stems from its very design, has been far less studied and demands more attention. In this work, we propose as a fix for the problem of hallucinations in RNN-Ts. aims to make the textual representations more acoustically grounded by looking ahead into the future within the speech signal. To achieve such a lookahead without interfering with the RNN-T's online decoding capabilities, we extract a limited number of lookahead output tokens for each frame of the input speech using only the acoustic encoder and further use these extracted tokens to modify the textual representation. This technique yields significant reductions in word error rates (WERs) on the established Librispeech benchmark and a variety of out-of-domain evaluation sets. We also show that beyond improving WERs, results in predictions that are more acoustically faithful to the speech input. For example, the reference “la valliere" is misrecognized as “the valet" by an RNN-T baseline, while an RNN-T baseline with predicts “lavalier". =-1 Contributions: Thus, overall the contributions of this paper are as follows: (1) We highlight the problem of hallucination in SOTA online ASR models and attribute it to the speech independent encoding of text representations. (2) We propose a fix based on enriching text representations with a lookahead of future tokens extracted from the audio. (3) We present a simple extension of the RNN-T architecture called with very modest computational overheads. (4) We present an evaluation on three benchmarks on various settings of model sizes and show that our proposal improves WER and reduces hallucination significantly. § BACKGROUND: RNN TRANSDUCER Let the input audio be denoted by 𝐱 = {x_1, x_2, x_3, ... x_T } where each x_t represents the acoustic features at time t. Let the corresponding text transcript be 𝐲 = {y_1, y_2, y_3 ... y_U } where y_u ∈𝒱 denotes the u^th output token drawn from a vocabulary 𝒱. One of the most distinguishable features of an RNN-T is the presence of two separate encoders for text and acoustic signals respectively. The acoustic encoder (AE) takes as input 𝐱 = {x_1, x_2, x_3, ... x_T } and generates the acoustic encoded representation 𝐡 = {h_1, h_2, h_3, ... h_T }. The text or language encoder (LE) generates representation g_u appropriate for the next output token as a function of previous tokens _<u=y_1…,y_u-1 g_u = LE(_<u)      h_t = AE(𝐱,t) A Joint Network (JN) combines the two encodings for each t ∈ [1… T] and each u ∈ [1,…, U] to generate a lattice S. Each cell of the lattice represents a state s(t,u), from which we generate a probability distribution of a token belonging to vocabulary 𝒱 P(y_u^t | x_t , 𝐲_<u) = softmax{JN(h_t ⊕ g_u) } where P(y_u^t) is the probability of emitting y_u at time t. The vocabulary includes a special blank token . Using this probability lattice, the RNN-T estimates the conditional distribution P(|𝐱) by marginalizing over all possible monotonic alignments of acoustic frame t with output token position u on the lattice. This can be computed in polynomial time using a DP-sum method <cit.>. During inference, high-probability outputs are generated using beam-search. §.§ State Space The state-space we use to compute the RNN-T loss is similar to that of CTC loss. We assume each state s(t,u) is a representation of the process if it has traversed till time t and emitted till y_u-1. 
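As an illustration of the factorization just described, the following PyTorch sketch builds a toy acoustic encoder, text encoder and thin joint network and evaluates the full (t,u) output lattice. The layer sizes and the use of concatenation in the joint network are illustrative choices for this sketch, not the paper's exact configuration (the experiments use conformer and LSTM encoders with an additive joint network).

```python
import torch
import torch.nn as nn

class TinyRNNT(nn.Module):
    """Minimal sketch of the RNN-T factorization: an acoustic encoder AE
    producing h_t, a label encoder LE producing g_u, and a thin joint
    network applied to every (t, u) pair of the lattice."""
    def __init__(self, feat_dim=80, vocab=100, hidden=256):
        super().__init__()
        self.ae = nn.LSTM(feat_dim, hidden, batch_first=True)      # h_t
        self.embed = nn.Embedding(vocab, hidden)
        self.le = nn.LSTM(hidden, hidden, batch_first=True)        # g_u
        self.joint = nn.Sequential(nn.Linear(2 * hidden, hidden),
                                   nn.Tanh(),
                                   nn.Linear(hidden, vocab + 1))   # +1 for blank

    def forward(self, x, y):
        h, _ = self.ae(x)                        # (B, T, H)
        g, _ = self.le(self.embed(y))            # (B, U, H)
        # broadcast to the (T, U) lattice and combine h_t with g_u
        h = h.unsqueeze(2).expand(-1, -1, g.size(1), -1)
        g = g.unsqueeze(1).expand(-1, h.size(1), -1, -1)
        return self.joint(torch.cat([h, g], dim=-1)).log_softmax(-1)  # (B,T,U,V+1)

# toy usage: 3 frames of 80-dim features, 2 previously emitted labels
lattice = TinyRNNT()(torch.randn(1, 3, 80), torch.tensor([[5, 7]]))
print(lattice.shape)   # torch.Size([1, 3, 2, 101])
```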
During training, we are only interested in the emissions {∅, y_u}, where ∅ denotes the blank token, as these progress the process to states s(t+1,u) and s(t,u+1) respectively. We would like to bring to attention here that, unlike a CTC state space, a single emission of token y_u is required to progress to state s(t,u+1). Also, the system can emit more than one non-blank token at a given time step. With these rules and the relations described in Eq. <ref>, we can compute the probability of any sequence of length T+U containing U non-blank tokens and T blank tokens from 𝒱, where this probability is essentially a measure of the likelihood that the said sequence transcribes the given input. For such a sequence π={y_π^1, y_π^2, y_π^3, …, y_π^T+U}, the probability is given as P(π) = ∏_i=1^T+U P(y_π^i), where each factor is the probability assigned by Equation (<ref>) at the lattice state reached after emitting y_π^1,…,y_π^i-1. It should be noted that only blank tokens advance the state space along time. Let ℬ denote the set of all such sequences that map to our actual output 𝐲. Thus the overall probability of matching 𝐲 is P(𝐲|𝐱) = ∑_π∈ℬ P(π). This sum can be computed efficiently with the forward (DP-sum) recursion over the lattice states s(t,u), and the model is trained with the negative log-likelihood loss ℒ = -log P(𝐲|𝐱). §.§ Related Work The inability of RNN-T models to generalize to out-of-domain utterances has been previously well-documented <cit.>. Previous approaches have adapted RNN-T models to new domains via shallow fusion with an external LM <cit.> and discounting RNN-T scores using an external LM <cit.> or an implicit LM derived from the RNN-T <cit.>. Apart from these inference-time techniques, there have also been other approaches that required training-time interventions such as subword regularization <cit.>, augmenting pronunciation dictionaries <cit.> and text-only adaptation of the RNN-T's language encoder <cit.>. Apart from adaptation-based techniques, prior work has also explored many approaches that aim at improving interactions between the acoustic encoder (AE) and the language encoder (LE) of the RNN-T. These approaches have focused on more complex mechanisms to combine the AE and LE outputs rather than simply adding the respective representations <cit.>, using quantization in the LE <cit.> and fixing overconfident LE predictions in the RNN-T lattice <cit.>. Similar in spirit to our technique are non-causal models <cit.> that refer to future information via a self-attention or convolution mechanism. However, all these methods peek into the future at the level of the AE, and thus, as the number of layers increases, the effective window into the future grows, which is not desirable in streaming applications. § OUR APPROACH In the RNN-T architecture, the text and the audio are encoded independently and combined using a thin joint layer. The text encoding g_u captures the next-token state conditioned only on tokens generated before it. If the language model priors are strong, it is possible for the text encoding g_u to strongly prefer a word unit, and to bias the softmax to generate that unit, even if that has little agreement with the acoustic input. This often manifests as hallucinated words that are fluent as per the language model, but which have no support in the acoustic input. In Table <ref>, we present examples of such hallucinations. We observe that rare words, e.g., the word “MAGATAMA” in the first example, get recognized as “BY THE TOWN”, which has little acoustic overlap with the utterance. Next we present our approach, called LookAhead, for fixing this mismatch. We identify that hallucinations are generated by representations g_u which have been computed independently of the acoustic input.
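For completeness, the DP-sum marginalization described above can be sketched numerically as follows; this is a plain forward recursion over the lattice states s(t,u), written for clarity rather than speed, and the blank index and toy posteriors are arbitrary choices of this illustration.

```python
import numpy as np

def rnnt_neg_log_likelihood(logp, y, blank=0):
    """Forward (DP-sum) computation of -log P(y | x) on the RNN-T lattice.
    logp: (T, U+1, V) array of log-probabilities from the joint network,
    y:    label sequence of length U (no blanks)."""
    T, U1, _ = logp.shape
    U = len(y)
    assert U1 == U + 1
    alpha = np.full((T, U + 1), -np.inf)
    alpha[0, 0] = 0.0
    for t in range(T):
        for u in range(U + 1):
            if t == 0 and u == 0:
                continue
            cands = []
            if t > 0:   # blank emission advances one step in time
                cands.append(alpha[t - 1, u] + logp[t - 1, u, blank])
            if u > 0:   # label emission advances one step in the transcript
                cands.append(alpha[t, u - 1] + logp[t, u - 1, y[u - 1]])
            alpha[t, u] = np.logaddexp.reduce(cands)
    return -(alpha[T - 1, U] + logp[T - 1, U, blank])

# toy check on random posteriors: T=4 frames, U=2 labels, V=5 symbols
rng = np.random.default_rng(0)
lp = np.log(rng.dirichlet(np.ones(5), size=(4, 3)))
print(rnnt_neg_log_likelihood(lp, y=[2, 3]))
```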
Our key insight is to improve the LE representation g_u by looking ahead a few tokens into the future from the acoustic input. However, implementing this insight, without adversely impacting the efficient online operation of the RNN-T model is non-trivial. We perform this modification in two steps. First, we extract from the acoustic encoder, a lookahead of k tokens after each frame t of the acoustic input. In Section <ref> we present how we do that. Second, we modify g_u with the extracted future tokens to get now a time-aware, acoustically informed encoding g_u,t. Section <ref> presents the details of this step. Overall modified architecture of our approach appears in Figure <ref>. §.§ Extraction of Lookahead tokens from the Acoustic Encoder We use the notion of implicit acoustic model (IAM) and generate a probability distribution over vocabulary tokens based on only the acoustic signals h_t (y|h_t) = softmax{JN(h_t ⊕0) } This distribution is supervised using gold transcripts by marginalizing over all non-blank tokens, much like the regular RNN-T loss. Next, for each frame of we get the most likely output based on acoustic signal h_t. ŷ_t = _y (y|h_t)   ∀ t We do this for all input frames. On this set of T tokens, at each t, we find the first w non-blank tokens after t and call it _t^w. We choose small values of w (e.g., 2 or 3) to ensure that we still inherit the streaming capabilities of the underlying RNN-T model. The _t^w provides a look ahead into the future tokens after each t based purely on the acoustic input. Figure <ref> shows an example where for t=1,w=3 we extract a _t^w comprising of the first three non-blank outputs from the acoustic model. generate a hypothesis _t such that _t = nonzero({ŷ_t, ŷ_t+1, ŷ_t+2, ... ŷ_T} Thus _t is essentially the greedy output given by from time t to T. Since we wish to continue to make the RNN-T online, we limit the context of the to a few tokens. Thus to provide a future context of Δ w tokens, we extract the first Δ w tokens from _t as Δ_t = [ỹ_1,…ỹ_w ] Δỹ_t is the noisy future context on which the PN representation is conditioned. We can also generate dense representations of the future context by using a convolutional layer on. Consider a convolutional layer Δ k Δỹ_t = Δ k [h_t ] §.§ Modifying Text Encoding with Lookahead Tokens With the presence of the above future context, we condition the text encoding g_u at each t using a simple FFN F as ĝ_t,u = F(g_u, _t^w) The remainder of the RNN-T pipeline stays the same and we pass along the new LE representation ĝ_t,u to the JN for further processing P_LA(y_u^t | , t, 𝐲_<u) = softmax{JN(h_t ⊕ĝ_t,u) } Thus, our overall training is a sum of losses over the gold transcript on marginalizations of P_LA, and of . §.§ Other methods of Lookahead As expected, future context to improve the LE representations can be obtained in multiple ways. We have explained in <ref> a method called Greedy which provides sparse representations of the future context. We mention here two more variants of the look ahead which show promising results. Conv : Here, we use a convolutional layer c() of kernel size K (example 5 or 10) to extract information from the Acoustic representations themselves. This convolved-dense representation at each frame is used as the future context for the appropriate LE representation. ŷ = c(𝐡) Δ_t = ŷ_t Greedy Frame : This method is similar to Greedy but but instead of using ŷ_t from , we use the h_t associated with it. 
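A minimal sketch of the Greedy lookahead extraction described above is given below: the frame-wise arg-max decisions of the implicit acoustic model are scanned forward, and the first w non-blank tokens after each frame form the future context that the feed-forward network F consumes to turn g_u into ĝ_{t,u}. The array shapes and the padding convention are assumptions of this illustration.

```python
import numpy as np

def greedy_lookahead(iam_logits, w=3, blank=0):
    """Greedy lookahead extraction from the implicit acoustic model (IAM):
    take the frame-wise arg-max, and for every frame t keep the first w
    non-blank tokens predicted at frames >= t.
    iam_logits: (T, V) scores; returns a (T, w) integer array,
    padded with `blank` when fewer than w non-blank tokens remain."""
    yhat = iam_logits.argmax(axis=-1)                 # frame-wise greedy output
    T = len(yhat)
    lookahead = np.full((T, w), blank, dtype=int)
    for t in range(T):
        future = yhat[t:]
        nonblank = future[future != blank][:w]        # first w non-blank tokens
        lookahead[t, :len(nonblank)] = nonblank
    return lookahead

# toy example: blanks interleaved with the labels 4, 9, 2
logits = np.zeros((6, 10))
logits[[1, 3, 5], [4, 9, 2]] = 5.0
print(greedy_lookahead(logits, w=2))
# each row t lists the next two non-blank IAM outputs from frame t onward;
# these tokens are the future context on which the text encoding is conditioned.
```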
i:e We pass on the acoustic frames which are associated with the non-tokens of ŷ in Eq <ref>. That is, if ŷ_t = max_h_t (y|h_t)   ∀ t The remainder of the pipeline stays the same as Greedy § EXPERIMENTS AND RESULTS In this section, we present various results comparing our method to an existing SoTA baseline RNN-T ASR system. §.§ Datasets We use the popular Librispeech benchmark <cit.> in three different well-known settings of varying training durations. We also use the Mozilla Common Voice dataset to derive accented out-of-domain speech samples. These are detailed below: * L100: This is the well-known Librispeech-100 dataset which is a 100 hour subset of the Librispeech corpus<cit.>. * L360: This is a 360 hour subset of the Librispeech corpus. * L960: This is the complete 960-hour Librispeech corpus. * MCV: This is a dataset obtained from the Mozilla Common Voice <cit.> Version 7.0 corpus. Using this dataset, we create a 200hr training subset consisting of seven English accents of which one is a test only accent. The accent-wise composition of the MCV dataset can be found in Table <ref>. X+P refers to systems with speed-perturbation that augments the dataset X with time-warped copies using factors 0.9, 1.0, 1.1 respectively. In Table <ref>, we indicate wherever this augmentation is used. We test the above models both on in-domain and out-of-domain test splits, where the latter is obtained by changing the accent from the MCV corpus. §.§ Implementation Details We use the ESPNet toolkit <cit.> to implement our model and perform all our experiments. Our base architecture is an RNN-T model with conformers <cit.> as AE and LSTM <cit.> as LE. The joint network adds the two encodings and applies a tanh non-linearity. We experiment with two different models corresponding to different encoder sizes. * Small: This model includes 18 AE blocks with internal representation of size 256 and four attention heads. A macaron-style feed forward projecting to size 1024 is used within the conformer block. The LE consists of a single 300 dimensional LSTM cell. Both these encoders project their output to a joint-space of 300 dimensions before feeding it to the joint network. This model was used for performing experiments with the L100+P and MCV datasets. We use a vocabulary size of 300 and 150 sub-words for these experiments, respectively (∼30M params). * Large: This model has 12 AE blocks conformer-encoder of size 512 and a macaron style feed-forward of 2048. The LE block is a 512 unit LSTM. Both representations are projected to a joint-space of of 512 dimensions. This model is used for our experiments involving the L960 and L360+P datasets. We use a vocabulary size of 500 for these experiments (∼ 80M params). We train for 100 epochs using the Noam-optimizer <cit.> with a Noam-learning rate of 5 and 25000 warm-up steps. For inference, we use ALSD <cit.> with a beam size of 30. For the purposes of we have used 3 lookahead tokens. §.§ Results Our primary set of numbers can be seen in Table <ref> which shows the capability of against baseline models trained on various training data sizes. Using L100+P, we see statistically significant improvements (p < 0.01 using the MAPSSWE test <cit.>) in WERs across both training sets for in-domain and out-of-domain evaluation sets. The gains are reduced with using larger models (L360+P and L960) but still remain consistent, especially for the out-of-domain test sets. We further focus on rare words where the effect of LM biasing is expected to have a greater effect. 
Table <ref> shows WER performance of baseline vs on rare-words on the L100+P dataset. We define any word that occurs lower than 20 times in the dataset as a rare-word. We see consistent improvement in rare-word recognition across both in-domain as well as out-of-domain datasets. Table <ref> shows some anecdotes indicating the effectiveness of . Note how the baseline model hallucinates words like TOWN, GIVEN, etc. that have no acoustic overlap with the spoken word. either corrects them or produces a word that is more acoustically similar to the spoken word. §.§ Error Analysis Apart from WERs, we provide further analysis in Table <ref> to validate our claim that predictions from are indeed acoustically more similar to the ground-truth. We use the following three metrics: * Phone Error Rate (PER): We use Epitran <cit.> to transcribe word sequences to phone sequences in IPA and find the IPA-based edit distance to yield PERs, which are a better measure of acoustic discrepancy compared to WERs. * Weighted Feature-based Edit Distance (WFED): With the PER metric, the distance between two dissimilar phones and two related phones are both 1. WFED is an edit distance metric based on Panphon <cit.> that helps compute distances between IPA phones based on their articulatory features. * Dolgopolsky Error Rate (DER): Both PER and WFED rely on an underlying sequence of IPA phones derived from the predicted word sequences using a fixed set of rules. These IPA sequences are not very reliable, especially for the out-of-domain MCV speech samples with varying speech accents. To alleviate errors arising from incorrect IPA, we use Dolgoposky's IPA clusters <cit.> to label similar-sounding IPA phones with the same cluster ID. DER is an edit distance using these new cluster IDs that only measures large deviations in phonetic realizations between a reference word and a predicted word and does not penalize small changes in the place and manner of articulation of phones (like PER). Table <ref> lists illustrative examples for each of the three metrics. E.g., WFED imposes a cost of 2.25 if “ball" is misrecognized as “call" (which would have a PER of 1); DER imposes no cost for “pit" being misrecognized as “beat". Table <ref> compares PER, WFED and DER values from the baseline and . DER penalizes only for large acoustic digressions; we see consistent improvements on DER using with larger improvements on the MCV test utterances. We also show ablations on the window size of future tokens considered in Table <ref>. We observe that using a window size of 3 yields the best results. § CONCLUSION We propose a simple and effective scheme of acoustic for RNN-T models to safeguard against producing acoustic hallucination. From the acoustic encoder, we extract a fixed of tokens and use it to improve the representation from the language encoder to generate a more acoustically aligned output. We obtain significant reductions in WER and various other phonetic error rates across various settings. § ACKNOWLEDGEMENTS The third author gratefully acknowledges financial support from a SERB Core Research Grant, Department of Science and Technology, Govt of India on accented speech processing. IEEEtran
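To illustrate how the acoustic-similarity metrics above can be computed, here is a sketch of a generic weighted edit distance together with a DER-style variant. The phone-to-cluster table shown is a small illustrative stand-in, not the actual Dolgopolsky grouping, and the Panphon feature cost is only indicated through the pluggable substitution cost.

```python
def edit_distance(ref, hyp, sub_cost=lambda a, b: 0.0 if a == b else 1.0):
    """Plain dynamic-programming edit distance over symbol sequences.
    With unit costs on IPA phones this gives the PER numerator; plugging in a
    feature-based substitution cost (as Panphon provides) gives a WFED-style score."""
    R, H = len(ref), len(hyp)
    d = [[0.0] * (H + 1) for _ in range(R + 1)]
    for i in range(1, R + 1): d[i][0] = i
    for j in range(1, H + 1): d[0][j] = j
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            d[i][j] = min(d[i - 1][j] + 1,                       # deletion
                          d[i][j - 1] + 1,                       # insertion
                          d[i - 1][j - 1] + sub_cost(ref[i - 1], hyp[j - 1]))
    return d[R][H]

# Illustrative (not the actual) Dolgopolsky-style grouping: phones sharing a
# cluster id incur no substitution cost, so only large deviations are counted.
CLUSTER = {"p": "P", "b": "P", "t": "T", "d": "T", "i": "V", "iː": "V"}

def der(ref, hyp):
    to_cluster = lambda seq: [CLUSTER.get(p, p) for p in seq]
    return edit_distance(to_cluster(ref), to_cluster(hyp)) / max(len(ref), 1)

print(der(["p", "i", "t"], ["b", "iː", "t"]))   # 0.0 -- "pit" vs "beat"
```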
http://arxiv.org/abs/2307.08628v1
20230714131338
Is (independent) subordination relevant?
[ "Michele Azzone", "Roberto Baviera" ]
q-fin.MF
[ "q-fin.MF", "math.PR" ]
http://arxiv.org/abs/2307.03960v2
20230708115812
Nonparametric estimation of the diffusion coefficient from S.D.E. paths
[ "Eddy Ella-Mintsa" ]
math.ST
[ "math.ST", "stat.TH" ]
Seismic Signatures of the ^12C(α, γ)^16O Reaction Rate in White Dwarf Models with Overshooting [ August 12, 2023 ============================================================================================== Consider a diffusion process X=(X_t)_t∈[0,1] observed at discrete times and high frequency, solution of a stochastic differential equation whose drift and diffusion coefficients are assumed to be unknown. In this article, we focus on the nonparametric esstimation of the diffusion coefficient. We propose ridge estimators of the square of the diffusion coefficient from discrete observations of X and that are obtained by minimization of the least squares contrast. We prove that the estimators are consistent and derive rates of convergence as the size of the sample paths tends to infinity, and the discretization step of the time interval [0,1] tend to zero. The theoretical results are completed with a numerical study over synthetic data. Keywords. Nonparametric estimation, diffusion process, diffusion coefficient, least squares contrast, repeated observations. MSC: 62G05; 62M05; 60J60 § INTRODUCTION Let X=(X_t)_t∈[0,1] be a one dimensional diffusion process with finite horizon time, solution of the following stochastic differential equation: dX_t=b(X_t)dt+σ(X_t)dW_t, X_0=0 where (W_t)_t≥ 0 is a standard Brownian motion. The drift function b and the diffusion coefficient σ are assumed to be unknown Lipschitz functions. We denote by (ℱ_t)_t∈ [0,1] the natural filtration of the diffusion process X. The goal of the article is to construct, from N discrete observations X̅^j=(X^j_kΔ_n)_0≤ k≤ n,    1 ≤ j ≤ N with time step Δ_n = 1/n, a nonparametric estimator of the square of the diffusion coefficient σ^2(.). We are in the framework of high frequency data since the time step Δ_n tends to zero as n tends to infinity. Furthermore, we consider estimators of σ^2(.) built from a single diffusion path (N = 1), and those built on N paths when N →∞. In this paper, we first propose a ridge estimator of σ^2(.) on a compact interval. Secondly, we focus on a nonparametric estimation of σ^2(.) on the real line . We measure the risk of any estimator σ^2 of the square of the diffusion coefficient σ^2 by [σ^2 - σ^2^2_n,N], where σ^2 - σ^2^2_n,N := (Nn)^-1∑_j=1^N∑_k=0^n-1(σ^2(X^j_kΔ) - σ^2(X^j_kΔ))^2 is an empirical norm defined from the sample paths. Related works. There is a large literature on the estimation of coefficients of diffusion processes, and we focus on the papers studying the estimation of σ^2. Estimation of the diffusion coefficient has been considered in the parametric case (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). In the nonparametric case, estimators of the diffusion coefficient from discrete observations are proposed under various frameworks. First, the diffusion coefficient is constructed from one discrete observation of the diffusion process (N = 1) in long time (T →∞) (see e.g. <cit.>, <cit.>, <cit.>, <cit.>), or in short time (T = 1) (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). Note that in short time (T<∞), only the diffusion coefficient can be estimated consistently from a single discrete path contrary to the drift function whose consistent estimation relies on repeated discrete observations of the diffusion process (see e.g. <cit.>, <cit.>). For the case of short time diffusion processes (for instance T = 1), estimators of a time-dependent diffusion coefficients t ↦σ^2(t) have been proposed. 
In this context, <cit.> built a nonparametric estimator of t↦σ^2(t) and studied its L_2 risk using wavelets methods, <cit.> studies the L_p risk of a kernel estimator of σ^2(t), and <cit.> derived a minimax rate of convergence of order n^-ps/(1+2s) where s>1 is the smoothness parameter of the Besov space ℬ^s_p,∞([0,1]) (see later in the paper). For the space-dependent diffusion coefficient x ↦σ^2(x), a first estimator based on kernels and built from a single discrete observation of the diffusion process with T = 1 is proposed in <cit.>. The estimator has been proved to be consistent under a condition on the bandwidth, but a rate of convergence of its risk of estimation has not been established. Secondly, the diffusion coefficient is built in short time (T < ∞) from N repeated discrete observations with N →∞. In <cit.>, a nonparametric estimator of σ^2 is proposed from repeated discrete observations on the real line when the time horizon T = 1. The estimator has been proved to be consistent with a rate of order N^-1/5 over the space of Lipschitz functions. Two main methods are used to build consistent nonparametric estimators of x ↦σ^2(x). The first method is the one using kernels (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>), the other method consists in estimating σ^2 as solution of a nonparametric regression model using the least squares approach. Since the diffusion coefficient is assumed to belong to an infinite dimensional space, the method consists in projecting σ^2 into a finite dimensional subspace, estimating the projection and making a data-driven selection of the dimension by minimizing a penalized least squares contrast (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). Nonparametric estimation of coefficients of one-dimensional diffusion process from discrete observations is widely studied in the literature under various frameworks. In a first framework, the diffusion coefficient is constructed from one discrete observation of the diffusion process in long time (T →∞) (see e.g. <cit.>, <cit.>, <cit.>, <cit.>), or in short time (T = 1) (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). Note that in short time (T<∞), only the diffusion coefficient can be estimated consistently from a single discrete path contrary to the drift function whose consistent estimation relies on repeated discrete observations of the diffusion process (see e.g. <cit.>, <cit.>). In a second framework, σ^2 is estimated from N discrete observations of the diffusion process, with N →∞ (see e.g. <cit.>). Estimation of the diffusion coefficient has also been considered in the parametric case (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). For the nonparametric setting, estimators of a time-dependent diffusion coefficients t ↦σ^2(t) have been proposed. In this context, <cit.> built a nonparametric estimator of t↦σ^2(t) and studied its L_2 risk using wavelets methods, <cit.> studies the L_p risk of a kernel estimator of σ^2(t), and <cit.> derived a minimax rate of convergence of order n^-ps/(1+2s) where s>1 is the smoothness parameter of the Besov space ℬ^s_p,∞([0,1]). For the case of space-dependent diffusion coefficients x ↦σ^2(x), a first estimator based on kernels and built from a single discrete observation of the diffusion process with T = 1 is proposed in <cit.>. The estimator has been proved to be consistent on condition on the bandwidth, but a rate of convergence of its risk of estimation has not been established. 
Two main methods are used to build consistent nonparametric estimators of x ↦σ^2(x). The first method is the one using kernels (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>), the other method consists in estimating σ^2 as solution of a nonparametric regression model using the least squares approach. Since the diffusion coefficient is assumed to belong to an infinite dimensional space, the method consists in projecting σ^2 into a finite dimensional subspace, estimating the projection and making a data-driven selection of the dimension by minimizing a penalized least squares contrast (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). Main contribution. In this article, we assume to have at our disposal N i.i.d. discrete observations of length n of the diffusion process X. The main objectives of this paper are the following. * Construct a consistent and implementable ridge estimator of σ^2 from a single diffusion path (N=1) using the least squares approach. We derive rates of convergence of the risk of estimation of the ridge estimators built on a compact interval and on the real line over a Hölder space, taking advantage of the properties of the local time of the diffusion process, and its link with the transition density. * We extend the result to the estimation of σ^2 on repeated observations of the diffusion process (N →∞). We prove that the estimators built on a compact interval and on are more efficient considering their respective rates compared to nonparametric estimators built from a single diffusion path. * Focusing on the support of the diffusion coefficient, we consider an intermediate case between a compact interval and by proposing a ridge estimator of σ^2 restricted to the compact interval [-A_N,A_N] where A_N→∞ as N→∞. The benefit of this approach is that the resulting projection estimator can reach a faster rate of convergence compared to the rate obtained on the real line . * Finally, we propose adaptive estimators of σ^2 based on a data-driven selection of the dimension through the minimization of the penalized least squares contrast in different settings. We sum up below the rates of convergence (up to a log-factor) of the ridge estimators of σ^2_|I with I⊆ over a Hölder space defined in the next section with a smoothness parameter β≥ 1. Outline of the paper. In Section <ref>, we define our framework with the key assumptions on the coefficients of the diffusion process ensuring for instance that Equation (<ref>) admits a unique strong solution. Section <ref> is devoted to the non-adaptive estimation of the diffusion coefficient from one diffusion path both on a compact interval and on the real line . In Section <ref>, we extend the study to the non-adaptive estimation of the diffusion coefficient from repeated observations of the diffusion process. We propose in Section <ref>, adaptive estimators of the diffusion coefficient, and Section <ref> complete the study with numerical evaluation of the performance of estimators. We prove our theoretical results in Section <ref>. § FRAMEWORK AND ASSUMPTIONS Consider a diffusion process X=(X_t)_t∈[0,1], solution of Equation (<ref>) whose drift and diffusion coefficient satisfy the following assumption. * There exists a constant L_0>0 such that b and σ are L_0-Lipschitz functions on ℝ. * There exist constants σ_0,σ_1>0 such that : σ_0≤σ(x)≤σ_1, ∀ x∈ℝ. * σ∈𝒞^2(ℝ) and there exist C >0 and α≥ 0 such that: |σ^'(x)|+|σ^''(x)|≤ C(1+|x|^α), ∀ x∈ℝ. 
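To fix ideas about the observation scheme D_{N,n} under these assumptions, a minimal Euler–Maruyama sketch is given below; the drift and diffusion coefficients used in the example are placeholders chosen to satisfy the Lipschitz and boundedness conditions above, not functions considered in the paper.

```python
import numpy as np

def simulate_paths(b, sigma, N=100, n=500, x0=0.0, seed=0):
    """Euler-Maruyama sketch of the observation scheme D_{N,n}: N discrete
    paths of dX_t = b(X_t)dt + sigma(X_t)dW_t on [0,1] with step 1/n."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n
    X = np.full((N, n + 1), x0)
    for k in range(n):
        dW = rng.normal(scale=np.sqrt(dt), size=N)
        X[:, k + 1] = X[:, k] + b(X[:, k]) * dt + sigma(X[:, k]) * dW
    return X   # X[j, k] approximates X^j_{k Delta_n}

# example coefficients: Lipschitz drift, diffusion bounded in [0.5, 1.5]
paths = simulate_paths(b=lambda x: -x, sigma=lambda x: 1.0 + 0.5 * np.tanh(x))
print(paths.shape)   # (100, 501)
```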
Under Assumption <ref>, X=(X_t)_t∈[0,1] is the unique strong solution of Equation (<ref>), and this unique solution admits a transition density (t,x)↦ p_X(t,x). Besides, we draw from Assumption <ref> that ∀ q≥ 1, 𝔼[t∈[0,1]sup|X_t|^q]<∞. §.§ Definitions and notations We suppose to have at our disposal, a sample D_N,n={X̅^j, j=1,⋯,N} constituted of N independent copies of the discrete observation X̅ = (X_kΔ_n)_0≤ k≤ n of the diffusion process X where Δ_n = 1/n is the time-step. The objective is to construct, from the sample D_N,n, a nonparametric estimator of the square σ^2 of the diffusion coefficient on an interval I ⊆. In the sequel, we consider two main cases, the first one being the estimation of σ^2 on the interval I from a single path (N=1 and n→∞). For the second case, we assume that both N and n tend to infinity. For each measurable function h, such that 𝔼[h^2(X_t)]<∞ for all t∈[0,1], we define the following empirical norms: h^2_n:=𝔼_X[1/n∑_k=0^n-1h^2(X_kΔ_n)], h^2_n,N:=1/Nn∑_j=1^N∑_k=0^n-1h^2(X^j_kΔ_n). For all h ∈𝕃^2(I), we have h^2_n=∫_Ih^2(x)1/n∑_k=0^n-1p_X(kΔ_n,x_0,x)dx=∫_Ih^2(x)f_n(x)dx, where f_n: x↦1/n∑_k=0^n-1p_X(kΔ_n,x) is a density function. For the case of non-adaptive estimators of σ^2, we also establish bounds of the risks of the estimators based on the empirical norm ._n or the 𝕃^2-norm . when the estimation interval I is compact. For any integers p,q ≥ 2 and any matrix M ∈^p × q, we denote by ^tM, the transpose of M. §.§ Spaces of approximation We propose projection estimators of σ^2 on a finite-dimensional subspace. To this end, we consider for each m ≥ 1, a m-dimensional subspace 𝒮_m given as follows: 𝒮_m:=Span(ϕ_ℓ, ℓ=0,⋯,m-1),    m≥ 1 where the functions (ϕ_ℓ,  ℓ∈ℕ) are continuous, linearly independent and bounded on I. Furthermore, we need to control the ℓ^2-norm of the coordinate vectors of elements of 𝒮_m, which leads to the following constrained subspace, 𝒮_m,L:={h=∑_ℓ=0^m-1a_ℓϕ_ℓ, ∑_ℓ=0^m-1a^2_ℓ=𝐚^2_2≤ mL, 𝐚=(a_0,⋯,a_m-1), L>0}. Note that 𝒮_m,L⊂𝒮_m and 𝒮_m,L is no longer a vector space. The control of the coordinate vectors allows to establish an upper bound of the estimation error that tends to zero as n→∞ or N,n→∞. In fact, we prove in the next sections that the construction of consistent estimators of σ^2 requires the functions h=∑_ℓ=0^m-1a_ℓϕ_ℓ to be bounded, such that h_∞≤ℓ=0,…,m-1maxϕ_ℓ_∞ 𝐚_2. This condition is satisfied for the functions of the constrained subspaces 𝒮_m,L with m ≥ 1. In this article, we work with the following bases. [B] The B-spline basis This is an exemple of a non-orthonormal basis defined on a compact interval. Let A > 0 be a real number, and suppose (without restriction) that I = [-A,A]. Let K,M∈ℕ^*, and consider 𝐮=(u_-M,⋯,u_K+M) a knots vector such that u_-M = ⋯ = u_-1 = u_0 = -A, u_K+1 = ⋯ = u_K+M = A, and for all i=0,⋯,K, u_i = -A+i2A/K. One calls B-spline functions, the piecewise polynomial functions (B_ℓ)_ℓ=-M,⋯,K-1 of degree M, associated with the knots vector 𝐮 (see <cit.>, Chapter 14). The B-spline functions are linearly independent smooths functions returning zero for all x∉[-A,A], and satisfying some smoothness conditions established in <cit.>. Thus, we consider approximation subspaces 𝒮_K+M defined by 𝒮_K+M=Span{B_ℓ, ℓ=-M,⋯,K-1} of dimension (𝒮_K+M)=K+M, and in which, each function h=∑_ℓ=-M^K-1a_ℓB_ℓ is M-1 times continuously differentiable thanks to the properties of the spline functions (see <cit.>). 
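The spline construction above can be reproduced with the Cox–de Boor recursion; the following sketch evaluates the K+M B-spline functions of degree M on [-A,A] built from the knot vector 𝐮 described above (the values of A, K and M in the example are arbitrary).

```python
import numpy as np

def bspline_basis(x, A=1.0, K=8, M=3):
    """Evaluate the K+M B-spline functions of degree M on [-A, A] with
    boundary knots repeated and K-1 equally spaced interior knots,
    via the Cox-de Boor recursion. Returns an array of shape (len(x), K+M)."""
    interior = np.linspace(-A, A, K + 1)
    u = np.concatenate([np.full(M, -A), interior, np.full(M, A)])
    x = np.atleast_1d(np.clip(x, -A, A - 1e-12))
    # degree-0 splines: indicators of the knot intervals
    B = np.array([(u[i] <= x) & (x < u[i + 1]) for i in range(len(u) - 1)],
                 dtype=float)
    for d in range(1, M + 1):                      # Cox-de Boor recursion
        Bnew = np.zeros((len(u) - d - 1, len(x)))
        for i in range(len(u) - d - 1):
            left, right = u[i + d] - u[i], u[i + d + 1] - u[i + 1]
            if left > 0:
                Bnew[i] += (x - u[i]) / left * B[i]
            if right > 0:
                Bnew[i] += (u[i + d + 1] - x) / right * B[i + 1]
        B = Bnew
    return B.T

grid = np.linspace(-1, 1, 5)
Phi = bspline_basis(grid, A=1.0, K=4, M=3)
print(Phi.shape, Phi.sum(axis=1))   # (5, 7), rows sum to 1 (partition of unity)
```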
Besides, the spline basis is included in the definition of both the subspace 𝒮_m and the constrained subspace 𝒮_m,L (see Equations (<ref>) and (<ref>)) with m = K + M and for any coordinates vector (a_-M,…,a_K-1) ∈^K+M, ∑_ℓ=-M^K-1a_ℓB_ℓ = ∑_ℓ=0^m-1a_ℓ-MB_ℓ-M. The integer M ∈ℕ^* is fixed, while K varies in the set of integers ℕ^*. If we assume that σ^2 belongs to the Hölder space Σ_I(β,R) given as follows: Σ_I(β,R):={h∈𝒞^⌊β⌋+1(I), |h^(ℓ)(x)-h^(ℓ)(y)|≤ R|x-y|^β-l, x,y∈ I}, where β≥ 1, ℓ=⌊β⌋ and R>0, then the unknown function σ^2_|I restricted to the compact interval I can be approximated in the constrained subspace 𝒮_K+M,L spanned by the spline basis. This approximation results to the following bias term: h ∈𝒮_K+M,Linfh - σ^2_|I^2_n≤ C|I|^2βK^-2β where the constant C > 0 depends on β, R and M, and |I| = sup I - inf I. The above result is a modification of Lemma D.2 in <cit.>. [F] The Fourier basis The subspace 𝒮_m can be spanned by the Fourier basis {f_ℓ,   ℓ = 0, …, m-1} = {1,√(2)cos(2π jx), √(2)sin(2π jx),  j=1,...,d}  with   m=2d+1. The above Fourier basis is defined on the compact interval [0,1]. The definition can be extended to any compact interval, replacing the bases functions x ↦ f_ℓ(x) by x ↦ 1/(max I - min I)f_ℓ(x-min I/max I - min I). We use this basis to build the estimators of σ^2 on a compact interval I ⊂. Define for all s ≥ 1 and for any compact interval I ⊂, the Besov space ℬ^s_2,∞(I) which is a space of functions f ∈ L^2(I) such that the ⌊ s⌋^th derivative f^(⌊ s ⌋) belongs to the space ℬ^s-⌊ s ⌋_2,∞(I) given by ℬ^s - ⌊ s ⌋_2,∞(I) = {f ∈ L^2(I)  and w_2,f(t)/t^s - ⌊ s ⌋∈ L^∞(I∩^+)} where for s-⌊ s⌋∈ (0,1), w_2,f(t)=|h|≤ tsupτ_hf - f_2 with τ_hf(x) = f(x-h), and for s-⌊ s⌋ = 1, w_2,f(t)=|h|≤ tsupτ_hf + τ_-hf - 2f_2. Thus, if we assume that the function σ^2_|I belongs to the Besov space ℬ^s_2,∞, then it can be approximated in a constrained subspace 𝒮_m,L spanned by the Fourier basis. Moreover, under Assumption <ref> and from Lemma 12 in <cit.>, there exists a constant C>0 depending on the constant τ_1 of Equation (<ref>), the smoothness parameter s of the Besov space such that h∈𝒮_m,Linfh-σ^2_|I^2_n≤τ_1h∈𝒮_m,Linfh-σ^2_|I^2≤ C|σ^2_|I|^2_β m^-2β where |σ^2_|I|_s is the semi-norm of σ^2_|I in the Besov space ℬ^s_2,∞(I). Note that for all β≥ 1, the Hölder space Σ_I(β,R) and the Besov space ℬ^β_2,∞ satisfy: L^∞() ∩Σ_I(β,R) ⊂ℬ^β_∞,∞(I) ⊂ℬ^β_2,∞(I) (see <cit.>, Chap. 2 page 16). As a result, we rather consider in the sequel the Hölder space Σ_I(β,R) which can also be approximated by the Fourier basis. [H] The Hermite basis The basis is defined from the Hermite functions (h_j,j≥ 0) defined on ℝ and given for all j≥ 0 and for all x∈ℝ by: h_j(x)=c_jH_j(x), where H_j(x)=(-1)^jexp(x^2/2)d^j/dx^j(e^-x^2/2) and c_j=(2^jj!√(π))^-1/2. The polynomials H_j(x), j≥ 0 are the Hermite polynomials, and (h_j,j≥ 0) is an orthonormal basis of L^2(ℝ). Furthermore, for all j≥ 1 and x∈, |h_j(x)|≤ c|x|exp(-c_0x^2) for x^2≥(3/2)(4j+3) where c,c_0>0 are constants independent of j (see  <cit.>, Proof of Proposition 3.5). We use the Hermite basis in the sequel for the estimation of σ^2 on the real line . If one assumes that σ^2 belongs to the Sobolev space W^s_f_n(,R) given for all s ≥ 1 by W^s_f_n(,R) := {g ∈ L^2(, f_n(x)dx), ∀ ℓ≥ 1, g - g_ℓ^2_n≤ Rℓ^-s} where for each ℓ≥ 1, g_ℓ is the L^2(, f_n(x)dx)-orthogonal projection of g on the ℓ-dimensional vector space 𝒮_ℓ spanned by the Hermite basis. 
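For the collection [H], the Hermite functions can be evaluated with the standard stable three-term recurrence, as in the sketch below; normalizations follow the usual L^2(ℝ) convention, and the quadrature computation at the end is only a quick numerical check of orthonormality.

```python
import numpy as np

def hermite_functions(x, m):
    """Evaluate the first m orthonormal Hermite functions h_0,...,h_{m-1}
    at the points x using the stable three-term recurrence."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    H = np.zeros((m, len(x)))
    H[0] = np.pi ** (-0.25) * np.exp(-x ** 2 / 2)
    if m > 1:
        H[1] = np.sqrt(2.0) * x * H[0]
    for j in range(1, m - 1):
        H[j + 1] = np.sqrt(2.0 / (j + 1)) * x * H[j] - np.sqrt(j / (j + 1)) * H[j - 1]
    return H    # shape (m, len(x))

# quick orthonormality check by quadrature on a wide grid
x = np.linspace(-10, 10, 4001)
H = hermite_functions(x, 5)
gram = H @ H.T * (x[1] - x[0])
print(np.round(gram, 3))     # approximately the 5x5 identity matrix
```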
Consider a compact interval I ⊂ and the following spaces: W^s(I,R) :=  {g ∈ L^2(I),  ∑_j=0^∞j^s<g,ϕ_j>^2≤ R}, W^s_f_n(I,R) :=  {g ∈ L^2(I, f_n(x)dx), ∀ ℓ≥ 1, g - g_ℓ^2_n≤ Rℓ^-s} where (ϕ_j)_j≥ 0 is an orthonormal basis defined on I and for all ℓ≥ 1, g_ℓ is the orthogonal projection of g onto 𝒮_ℓ = Span(h_j,  j≤ℓ) of dimension ℓ≥ 1 (see e.g. <cit.>). Then, for all g ∈ W^s(I,R), we have g=∑_j=0^∞<g,ϕ_j>ϕ_j  and  g-g_ℓ^2 = ∑_j=ℓ+1^∞<g,ϕ_j>^2≤ℓ^-s∑_j=ℓ+1^∞j^s<g,ϕ_j>^2≤ Rℓ^-s. We have W^s_f_n(I,R) = W^s(I,R) as the empirical norm ._n and the L^2-norm . are equivalent. The space W^s_f_n(,R) is an extension of the space W^s_f_n(I,R) wher I = and (ϕ_j)_j ≥ 0 is the Hermite basis. The B-spline basis is used for the estimation of σ^2 on a compact interval on one side (N = 1 and N>1), and on the real line on the other side restricting σ^2 on the compact interval [-log(n), log(n)] for N = 1, or [-log(N), log(N)] for N > 1, and bounding the exit probability of the process X from the interval [-log(N), log(N)] (or [-log(n), log(n)]) by a negligible term with respect to the estimation error. In a similar context, the Fourier basis is used as an othonormal basis to built nonparametric estimators of σ^2 on a compact interval and on , both for N = 1 and for N > 1. The main goal is to show that, in addition to the spline basis which is not orthogonal, we can built projection estimators of σ^2 on orthonormal bases that are consistent. The advantage of the Hermite basis compared to the Fourier basis is its definition on the real line . As a result, we use the Hermite basis to propose for N > 1, a projection estimator of σ^2 whose support is the real line . Denote by ℳ, the set of possible values of the dimension m ≥ 1 of the approximation subspace 𝒮_m. If (ϕ_0,⋯,ϕ_m-1) is an orthonormal basis, then for all m,m^'∈ℳ such that m < m^', we have 𝒮_m⊂𝒮_m^'. For the case of the B-spline basis, one can find a subset 𝒦⊂ℳ of the form 𝒦={2^q, q=0,⋯,q_max} such that for all K,K^'∈𝒦, K < K^' implies 𝒮_K + M⊂𝒮_K^' + M (see for example <cit.>). The nesting of subspaces 𝒮_m, m∈ℳ is of great importance in the context of adaptive estimation of the diffusion coefficient and the establishment of upper-bounds for the risk of adaptive estimators. In the sequel, we denote by [𝐅],  [𝐇] and [𝐁] the respective collection of subspaces spanned by the Fourier basis, the Hermite basis and the B-spline basis. §.§ Ridge estimators of the square of the diffusion coefficient We establish from Equation (<ref>) and the sample D_N,n the regression model for the estimation of σ^2. For all j ∈ [[1,N]] and k ∈ [[0,n-1]], define U^j_kΔ_n := (X^j_(k+1)Δ_n - X^j_kΔ_n)^2/Δ_n. The increments U^j_kΔ_n are approximations in discrete times of d<X,X>_t/dt since, from Equation (<ref>), one has d<X,X>_t = σ^2(X_t)dt. From Equation (<ref>), we obtain the following regression model, U^j_kΔ_n=σ^2(X^j_kΔ_n)+ζ^j_kΔ_n+R^j_kΔ_n,   ∀ (j,k)∈[[1,N]]×[[0,n-1]] where U^j_kΔ_n is the response variable, ζ^j_kΔ_n and R^j_kΔ_n are respectively the error term and a negligible residual whose explicit formulas are given in Section <ref>. We consider the least squares contrast γ_n,N defined for all m ∈ℳ and for all function h∈𝒮_m,L by γ_n,N(h):=1/Nn∑_j=1^N∑_k=0^n-1(U^j_kΔ-h(X^j_kΔ_n))^2. For each dimension m ∈ℳ, the projection estimator σ^2_m of σ^2 over the subspace 𝒮_m,L satisfies: σ^2_m∈h∈𝒮_m,Lmin γ_n,N(h). 
Indeed, for each dimension m ∈ℳ, the estimator σ^2_m of σ^2 given in Equation (<ref>) satisfies σ^2_m=∑_ℓ=0^m-1a_ℓϕ_ℓ, where 𝐚=(a_0,⋯,a_m-1):=𝐚^2_2≤ mLmin𝐔-𝐅_m𝐚^2_2 with ^tU = (U^1_0,…,U^1_(n-1)Δ_n, …, U^N_0,…,U^N_(n-1)Δ_n) and the matrix 𝐅_m is defined as follows F_m := ( ^t(ϕ_ℓ(X^j_0),…,ϕ_ℓ(X^j_(n-1)Δ_n)))_1 ≤ j ≤ N0 ≤ℓ≤ m-1∈ℝ^Nn × m. The vector of coefficients 𝐚 is unique and called the ridge estimator of 𝐚 because of the ℓ^2 constraint on the coordinate vectors (see <cit.> Chap. 3 page 61). § ESTIMATION OF THE DIFFUSION COEFFICIENT FROM A SINGLE DIFFUSION PATH This section focuses on the nonparametric estimation of the square of the diffusion coefficient σ^2 on an interval I ⊆ when only a single diffusion path is observed at discrete times (N=1). It is proved in the literature that one can construct consistent estimators of the diffusion coefficient from one path when the time horizon T is finite (see e.g. <cit.>). Two cases are considered. First, we propose a ridge estimator of σ^2 on a compact interval I ⊂, say for example I = [-1,1]. Secondly, we extend the study to the estimation of σ^2 on the real line I =. §.§ Non-adaptive estimation of the diffusion coefficient on a compact interval In this section, we consider the estimator σ^2_m of the compactly supported square of the diffusion coefficient σ^2_|I on the constrained subspaces 𝒮_m,L from the observation of a single diffusion path. Since the interval I⊂ is compact, the immediate benefit is that the density function f_n defined from the transition density of the diffusion process X̅ = (X_kΔ) is bounded from below. In fact, there exist constants τ_0,τ_1∈(0,1] such that ∀ x∈ I, τ_0≤ f_n(x)≤τ_1, (see <cit.>). Thus, for each function h∈𝕃^2(I), τ_0h^2≤h^2_n≤τ_1h^2 where . is the 𝕃^2-norm. Equation (<ref>) allows to establish global rates of convergence of the risk of the ridge estimators σ^2_m of σ^2_|I with m∈ℳ using the L^2-norm . which is, in this case, equivalent with the empirical norm ._n. To establish an upper-bound of the risk of estimation that tends to zero as n tends to infinity, we need to establish equivalence relations between the pseudo-norms ._n,1  (N=1) and ._X on one side, and ._X and the L^2-norm . on the other side, where the random pseudo-norm ._X is defined for each function h∈𝕃^2(I) by h^2_X := ∫_0^1h^2(X_s)ds. Define for x∈, the local time ℒ^x of the diffusion process X = (X_t)_t∈[0,1] by ℒ^x = ε→ 0lim1/2ε∫_0^1_(x-ε,x+ε)(X_s)ds. In general, the local time of a continuous semimartingale is a.s. càdlàg (see e.g. <cit.>). But, for diffusion processes and under Assumption <ref>, the local time ℒ^x is bicontinuous at any point x∈ (see Lemma <ref> in Section <ref>). Furthermore, we obtain the following result. Under Assumption <ref>, and for any continuous and integrable function h, it yields, * ∫_0^1h(X_s)ds = ∫_h(x)ℒ^xdx. * For all x∈, (ℒ^x) = ∫_0^1p_X(s,x)ds. In Lemma <ref>, we remark that there is a link between the local time and the transition density of the diffusion process. Thus, if we consider the pseudo-norm ._X depending on the process X = (X_t)_t∈[0,1] and given in Equation (<ref>), and using Lemma <ref>, we obtain that, [h_X^2] = ∫_h^2(x)[ℒ^x]dx = ∫_h^2(x)∫_0^1p_X(s,x)dsdx≥τ_0h^2. where ∫_0^1p_X(s,x)ds≥τ_0 >0 (see <cit.>, Lemma 4.3), and h^2 is the 𝕃^2-norm of h. Set L = log(n). Suppose that σ^2 is approximated in one of the collections [𝐁] and [𝐅]. 
Under Assumption <ref>, it yields [σ^2_m - σ^2_|I^2_n,1] ≤ 3h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/n + m^2γ+1log(n)/n^γ/2 + Δ^2_n) [σ^2_m - σ^2_|I^2_n] ≤34τ_1/τ_0h∈𝒮_m,Linfh-σ^2_|I^2_n+C^'(m/n + m^2γ+1log(n)/n^γ/2 + Δ^2_n) where the number γ > 1 comes from the use of the Hölder inequality. The constant C>0 depends on σ_1 and the constant C^'>0 depends on σ_1, τ_0 and τ_1. We observe that the upper-bound of the risk of estimation of σ^2_m is composed of the bias term, which quantifies the cost of approximation of σ^2_|I in the constrained space 𝒮_m,L, the estimation error O(m/n) and the cost of the time discretization O(Δ^2_n) are established on a random event in which the pseudo-norms ._n,1 and ._X are equivalent, and whose probability of the complementary times σ^2_m - σ^2_|I^2_∞ is bounded by the term O(m^2γ+1log(n)/n^γ/2) (see Lemma <ref> and proof of Theorem <ref>). The next result proves that the risk of estimation can reach a rate of convergence of the same order than the rate established in <cit.> if the parameter γ > 1 is chosen such that the term O(m^2γ+1log(n)/n^γ/2) is of the same order than the estimation error of order m/n. Note that the risk σ^2_m - σ^2_|I^2_n is random since σ^2_m - σ^2_|I^2_n = _X[1/n∑_k=0^n-1(σ^2_m - σ^2_|I)(X_kΔ)] and the estimator σ^2_m is built from an independent copy X̅^1 of the discrete times process X̅. Thus, the expectation relates to the estimator σ^2_m. Suppose that σ^2∈Σ_I(β,R) with β > 3/2, and γ = 2(2β+1)/(2β-3). Assume that K_opt∝ n^1/(2β+1) for [𝐁] (m_opt = K_opt + M), and m_opt∝ n^1/(2β+1) for [𝐅]. Under Assumptions <ref>, it yields, [σ^2_m_opt - σ^2_|I^2_n,1] = O(log(n)n^-2β/(2β+1)) [σ^2_m_opt - σ^2_|I^2_n] = O(log(n)n^-2β/(2β+1)). Note that we obtain the exact same rates when considering the risk of σ^2_m_opt defined with the 𝕃^2-norm equivalent to the empirical norm ._n. Moreover, these rates of convergence are of the same order than the optimal rate n^-s/(2s+1) established in <cit.> over a Besov ball. §.§ Non-adaptive estimation of the diffusion coefficient on the real line In this section, we propose a ridge estimator of σ^2 on the real line , built from one diffusion path. In this context, the main drawback is that the density function f_n:x↦1/n∑_k=0^n-1p_X(kΔ,x) is no longer lower bounded. Consequently, the empirical norm ._n is no longer equivalent to the L_2-norm . and the consistency of the estimation error is no longer ensured under the only assumptions made in the previous sections. Consider the truncated estimator σ^2_m,L of σ^2 given by σ^2_m,L(x) = σ^2_m(x)_σ^2_m(x) ≤√(L) + √(L)_σ^2_m(x) > √(L). Thus, the risk of the ridge estimator σ^2_m,L is upper-bounded as follows: [σ^2_m,L - σ^2^2_n,1] ≤  [(σ^2_m,L - σ^2)_[-log(n),log(n)]^2_n,1] + [(σ^2_m,L - σ^2)_[-log(n),log(n)]^c^2_n,1] ≤  [(σ^2_m,L - σ^2)_[-log(n),log(n)]^2_n,1] + 4log^2(n)t∈[0,1]sup(|X_t|>log(n)). The first term on the r.h.s. is equivalent to the risk of a ridge estimator of σ^2 on the compact interval [-log(n),log(n)]. The second term on the r.h.s. is upper-bounded using Lemma <ref>. We derive below, an upper-bound of the risk of estimation of σ^2_m. Suppose that L = log^2(n). Under Assumption <ref>, it yields, [σ^2_m,L - σ^2^2_n,1] ≤h∈𝒮_m,Linfh-σ^2^2_n+C√(m^qlog^2(n)/n) where C>0 is a constant, q = 1 for the collection [𝐁], and q = 2 for the collection [𝐅]. We first remark that the upper-bound of the risk of the truncated estimator of σ^2 differs with respect to each of the chosen bases. 
This contrast comes from the fact that the Fourier basis {f_ℓ, ℓ = 0, …, m-1} and the spline basis {B_ℓ-M,  ℓ = 0, …, m-1} satisfy ∑_ℓ = 0^m - 1f_ℓ(x)≤ C_fm,  and ∑_ℓ=0^m-1B_ℓ-M(x) = 1. Secondly, the estimation error is not as fine as the one established in Theorem <ref> where σ^2 is estimated on a compact interval. In fact, on the real line , the pseudo-norm ._X can no longer be equivalent to the 𝕃^2-norm since the transition density is not bounded from below on . Consequently, we cannot take advantage of the exact method used to establish the risk bound obtained in Theorem <ref> which uses the equivalence relation between the pseudo-norms ._n,1 and ._X on one side, and ._X and the 𝕃^2-norm . on the other side. Moreover, we can also notice that the term of order 1/n^2 does not appear since it is dominated by the estimation error. We obtain below rates of convergence of the ridge estimator of σ^2 for each of the collections [𝐁] and [𝐅]. Suppose that σ^2∈Σ_I(β,R) with β≥ 1 For [B]. Assume that K ∝ n^1/(4β+1). Under Assumptions <ref>, there exists a constant C>0 depending on β and σ_1 such that [σ^2_m,L - σ^2^2_n,1] ≤ Clog^2β(n)n^-2β/(4β+1). For [F]. Assume that m ∝ n^1/2(2β+1). Under Assumptions <ref>, it yields, [σ^2_m,L - σ^2^2_n,1] ≤ Clog(n)n^-β/(2β+1) where the constant C>0 depends on β and σ_1. As we can remark, the obtained rates are slower than the ones established in Section <ref> where σ^2 is estimated on a compact interval. This result is the immediate consequence of the result of Theorem <ref>. § ESTIMATION OF THE DIFFUSION COEFFICIENT FROM REPEATED DIFFUSION PATHS We now focus on the estimation of the (square) of the diffusion coefficient from i.i.d. discrete observations of the diffusion process (N →∞). §.§ Non-adaptive estimation of the diffusion coefficient on a compact interval We study the rate of convergence of the ridge estimators σ^2_m of σ^2_|I from D_N,n when I is a compact interval. The next theorem gives an upper-bound of the risk of our estimators σ^2_m,  m∈ℳ. Suppose that L = log(Nn) and ℳ = {1,…,√(min(n,N))/log(Nn)}. Under Assumption <ref> and for all m ∈ℳ, there exist constants C>0 and C^'>0 depending on σ_1 such that, 𝔼[σ^2_m-σ^2_|I^2_n,N]≤   3h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/Nn+mlog(Nn)exp(-C√(min(n,N)))+Δ^2_n) 𝔼[σ^2_m-σ^2_|I^2_n]≤   34h∈𝒮_m,Linfh-σ^2_|I^2_n + C^'(m/Nn+mlog(Nn)exp(-C√(min(n,N)))+Δ^2_n). Note that the result of Theorem <ref> is independent of the choice of the basis that generate the approximation space 𝒮_m. The first term on the right-hand side represents the approximation error of the initial space, the second term O(m/(Nn)) is the estimation error, and the last term characterizes the cost of the time discretization. The next result is derived from Theorem <ref>. Suppose that σ^2∈Σ_I(β,R) with β > 3/2. Moreover, assume that K_opt∝ (Nn)^1/(2β+1) for [𝐁] (m_opt = K_opt + M), and m_opt∝ (Nn)^1/(2β+1) for [𝐅]. Under Assumptions <ref>, it yields, 𝔼[σ^2_m_opt-σ^2_|I^2_n,N] =  O((Nn)^-2β/(2β+1)) 𝔼[σ^2_m_opt-σ^2_|I^2_n] =  O((Nn)^-2β/(2β+1)). The obtained result shows that the nonparametric estimators of σ^2_|I based on repeated observations of the diffusion process are more efficient when N,n→∞. Note that the same rate is obtained if the risk of σ^2_m_opt is defined with the 𝕃^2-norm . equivalent to the empirical norm ._n. The rate obtained in Corollary <ref> is established for β > 3/2. 
If we consider for example the collection [B] and assume that β∈ [1, 3/2], then K_opt∝ (Nn)^1/(2β+1) belongs to ℳ for n ∝√(N)/log^4(N) and we have 𝔼[σ^2_m_opt-σ^2_|I^2_n,N]≤ C(Nn)^-2β/(2β+1). Under the condition n ∝√(N)/log^4(N) imposed on the length of diffusion paths, the obtained rate is of order n^-3β/(2β+1) (up to a log-factor) which is equivalent to N^-3β/2(2β+1) (up to a log-factor). §.§ Non-adaptive estimation of the diffusion coefficient on the real line Consider a ridge estimator of σ^2 on built from N independent copies of the diffusion process X observed in discrete times, where both N and n tend to infinity. For each m ∈ℳ, we still denote by σ^2_m the ridge estimators of σ^2 and σ^2_m,L the truncated estimators of σ^2 given in Equation (<ref>). We establish, through the following theorem, the first risk bound that highlights the main error terms. Suppose that L=log^2(N). Under Assumptions <ref> and for any dimension m∈ℳ, the following holds: 𝔼[σ^2_m,L-σ^2^2_n,N] ≤   2h∈𝒮_m,Linfh-σ^2^2_n+C(√(m^qlog^2(N)/Nn) + Δ^2_n) where C>0 is a constant depending on the upper bound σ_1 of the diffusion coefficient. Moreover, q = 1 for the collection [𝐁] and q = 2 for the collection [𝐇]. If we consider the risk of σ^2_m,L using the empirical norm ._n, then we obtain 𝔼[σ^2_m,L-σ^2^2_n] ≤ 2h∈𝒮_m,Linfh-σ^2^2_n+C(√(m^qlog^2(N)/Nn) + m^2log^3(N)/N+Δ^2_n) The risk bound given in Equation (<ref>) is a sum of four error terms. The first term is the approximation error linked to the choice of the basis, the second term is the estimation error given in Theorem <ref>, the third term m^2log^3(N)/N comes from the relation linking the empirical norm ._n to the pseudo-norm ._n,N (see Lemma <ref>), and the last term is the cost of the time-discretization. We derive, in the next result, rates of convergence of the risk bound of the truncated ridge estimators σ^2_m,L based on the collections [𝐁] and [𝐇] respectively. Suppose that σ^2∈Σ_I(β,R) with β≥ 1,  I = [-log(N),log(N)], and K ∝ (Nn)^1/(4β+1) for [𝐁], and σ^2∈ W^s_f_n(,R) with s ≥ 1 and m ∝ (Nn)^1/2(2s+1) for [𝐇]. Under Assumption <ref>, the following holds: For  [𝐁]   𝔼[σ^2_m,L-σ^2^2_n,N] ≤ C(log^2β(N)(Nn)^-2β/(4β+1) + 1/n^2), For  [𝐇]   𝔼[σ^2_m,L-σ^2^2_n,N] ≤ C(log^3(N)(Nn)^-s/(2s+1) + 1/n^2). where C>0 is a constant depending on β and σ_1 for [𝐁], or s and σ_1 for [𝐇]. The obtained rates are slower compared to the rates established in Section <ref> for the estimation of σ^2_|I where the interval I⊂ is compact. In fact, the method used to establish the rates of Theorem <ref> from which the rates of Corollary <ref> are obtained, does not allow us to derive rates of order (Nn)^-α/(2α+1) (up to a log-factor) with α≥ 1 (e.g. α = β, s). Finally, if we consider the risk defined with the empirical norm ._n, then from Equation (<ref>) with n ∝ N and assuming that m ∝ N^1/4(s+1) for [𝐇] or K ∝ N^1/4(β+1) for [𝐁], we obtain [𝐁]:     𝔼[σ^2_m,L-σ^2^2_n] ≤   Clog^2β(N)(Nn)^-β/2(β+1), [𝐇]:     𝔼[σ^2_m,L-σ^2^2_n] ≤   Clog^3(N)(Nn)^-s/2(s+1), where C>0 is a constant depending on σ_1 and on the smoothness parameter. We can see that the obtained rates are slower compared to the results of Corollary <ref> for n ∝ N. The deterioration of the rates comes from the additional term of order m^2log^3(N)/N which is now regarded as the new estimation error since it dominates the other term in each case as N→∞. 
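To fix ideas before turning to growing estimation intervals, the following Python sketch shows how a truncated projection estimator of σ² of the type studied above can be computed from N discretized paths. It assumes the least-squares contrast recalled earlier, with responses given by the normalized squared increments (X^j_{(k+1)Δ}−X^j_{kΔ})²/Δ, and a B-spline basis of dimension m = K+M on a compact interval; the sup-norm constraint defining 𝒮_{m,L} is ignored for simplicity and only the final truncation at √L is applied. All names (spline_design, ridge_sigma2) and default values are illustrative and are not part of the paper's own (R-based) implementation.

import numpy as np
from scipy.interpolate import BSpline

def spline_design(x, a, b, K, M=3):
    """Evaluate the m = K + M B-spline basis functions of degree M on [a, b] at the points x.
    Outside [a, b] the basis functions are set to 0 (compact support)."""
    t = np.r_[[a] * M, np.linspace(a, b, K + 1), [b] * M]    # knot vector u_{-M}, ..., u_{K+M}
    m = K + M
    cols = [BSpline(t, np.eye(m)[l], M, extrapolate=False)(x) for l in range(m)]
    return np.nan_to_num(np.column_stack(cols))              # NaN outside [a, b] -> 0

def ridge_sigma2(paths, delta, a, b, K, M=3, L=None):
    """Truncated projection estimator of sigma^2 on [a, b] from N paths of shape (N, n+1)."""
    N, n1 = paths.shape
    X = paths[:, :-1].ravel()                                # X^j_{k Delta}, k = 0, ..., n-1
    U = np.diff(paths, axis=1).ravel() ** 2 / delta          # squared increments / Delta
    B = spline_design(X, a, b, K, M)
    coef, *_ = np.linalg.lstsq(B, U, rcond=None)             # minimizer of the empirical contrast
    L = np.log(N * (n1 - 1)) ** 2 if L is None else L        # illustrative default truncation level
    def estimator(x):
        values = spline_design(np.atleast_1d(x), a, b, K, M) @ coef
        return np.minimum(values, np.sqrt(L))                # truncation at sqrt(L)
    return estimator

The same skeleton covers the single-path case (N = 1) and, with the knots placed on [-log(n), log(n)] or [-A_N, A_N], the non-compact settings discussed above.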
§.§ Non-adaptive estimation of the diffusion coefficient on a compact interval depending on the sample size This section combines the two first sections <ref> and <ref> focusing on the estimation of σ^2 on the compact interval [-A_N,A_N] where (A_N) is a strictly positive sequence such that A_N →∞ as N→∞. Consequently, we obtain that the estimation interval tends to as the sample size N tends to infinity. Define from the observations and for each dimension m∈ℳ, the following matrices: Ψ_m:=(1/Nn∑_j=1^N∑_k=0^n-1ϕ_ℓ(X^j_kΔ)ϕ_ℓ^'(X^j_kΔ))_0≤ℓ,ℓ^'≤ m-1, Ψ_m:=𝔼(Ψ_m)=([1/n∑_k=0^n-1ϕ_ℓ(X_kΔ)ϕ_ℓ^'(X_kΔ)])_0≤ℓ,ℓ^'≤ m-1. These two matrices play an essential role in the construction of a consistent projection estimator of σ^2 over any approximation subspace 𝒮_m spanned by the basis (ϕ_0,⋯,ϕ_m-1). Furthermore, for all h=∑_ℓ=0^m-1a_ℓϕ_ℓ∈𝒮_m, we have: h^2_n,N = ^t𝐚Ψ_m𝐚, h^2_n = 𝔼(h^2_n,N) = ^t𝐚Ψ_m𝐚, where 𝐚=(a_0,⋯,a_m-1). The Gram matrix Ψ_m is invertible under the spline basis (see <cit.>) and the Hermite basis (see <cit.>). We define for any invertible matrix M, the operator norm M^-1_op of M^-1 given by M^-1_op=1/inf{λ_j} where the λ_j are eigenvalues of M. For all dimension m∈ℳ, the matrices Ψ_m and 𝐅_m satisfy: Ψ_m= ^t𝐅_m𝐅_m. Consider the ridge estimator σ^2_m of σ^2_A_N = σ^2_[-A_N,A_N], with m∈ and A_N →∞ as N →∞. The estimator σ^2_m can reach a faster rate of convergence if the Gram matrix Ψ_m given in Equation (<ref>) satisfies the following condition, ℒ(m)(Ψ^-1_m_op∨ 1)≤ CN/log^2(N),   where  ℒ(m):=x∈ℝsup∑_ℓ=0^m-1ϕ^2_ℓ(x)<∞ where C>0 is a constant. In fact, the optimal rate of convergence is achieved on a random event Ω_n,N,m in which the two empirical norms ._n,N and ._n are equivalent (see  <cit.>, <cit.>). Then, Condition (<ref>)  is used to upper-bound (Ω^c_n,N,m) by a negligible term with respect to the considered rate (see <cit.>). Note that in Equation (<ref>), the square on log(N) is justified by the fact that the value of constant C>0 is unknown, and that the spline basis is not othonormal (see <cit.>, proof of Lemma 7.8). The assumption of Equation (<ref>) is also made in <cit.> on the operator norm of Ψ^-1_m based on an orthonormal basis with the bound 𝐜N/log(N) where the value of 𝐜 is known, and chosen and such that the upper-bound of (Ω^c_n,N,m) is negligible with respect to the estimation error. In our framework, since the transition density is approximated by Gaussian densities, we derive the following result. Suppose that n ∝ N and that the spline basis is constructed on the interval [-A_N,A_N] with A_N > 0. Under Assumption <ref> , for all m∈ and for all w∈^m such that w_2,m=1, there exists a constant C>0 such that For  [𝐇]:          w^'Ψ_mw ≥C/log(N)exp(-3c_σ(4m+3)/2(1-log^-1(N))), For  [𝐁]:          w^'Ψ_mw ≥CA_N/mlog(N)exp(-c_σA^2_N), where the constant c_σ>1 that comes from the approximation of the transition density, depends on the diffusion coefficient σ. The result of Lemma <ref>  implies for the Hermite basis that (Ψ^-1_m_op∨ 1)≤log(N)/Cexp(3c_σ(4m+3)/2(1-log^-1(N))) where the upper-bound is an exponentially increasing sequence of N since the dimension m∈ has a polynomial growth with respect to N. Thus, Condition (<ref>)  cannot be satisfied for the Hermite basis in our framework. Considering the spline basis, one has ℒ(m)=ℒ(K+M)≤ 1 and there exists a constant C>0 such that Ψ^-1_m_op≤ Cmlog(N)/A_Nexp(c_σA^2_N). For K ∝(N^2/(2β+1)A_N), Condition (<ref>) is satisfied if the estimation interval [-A_N,A_N] is chosen such that A_N = o (√(log(N))). 
In the next theorem, we prove that the spline-based ridge estimator of σ^2_A_N reaches a faster rate of convergence compared to the result of Corollary <ref> for the collection [𝐁]. Suppose that N ∝ n and consider the ridge estimator σ^2_A_N,m of σ^2_A_N based on the spline basis. Furthermore, suppose that L = log(N), A_N = o(√(log(N))) and K ∝ (Nn)^1/(2β+1)A_N (m = K + M). Under Assumptions <ref> and for σ^2∈Σ_I(β,R) with I = [-A_N, A_N], the following holds: [σ^2_A_N,m-σ^2_A_N^2_n,N] ≤ Clog^β(N)(Nn)^-2β/(2β+1) where C>0 is a constant depending on β. The above result shows that the risk of the ridge estimator of σ^2_A_N on [-A_N,A_N] reaches a rate of order (Nn)^-β/(2β+1) (up tp a log-factor) thanks to Condition (<ref>) which allows us to take advantage of the equivalence relation between the empirical norms ._n and ._n,N given in Equation (<ref>) to derive a finer estimation error (see proof of Theorem <ref>). Note that the obtained result depends on an appropriate choice of the estimation interval [-A_N,A_N] which tends to as N tends to infinity. Therefore, any choice of A_N such that A_N/√(log(N))⟶ +∞ cannot lead to a consistent estimation error since Equation (<ref>) is no longer satisfied for the upper-bounding of (Ω^c_n,N,m) by a term that tends to zero as N →∞. Thus, the assumption A_N = o(√(log(N))) is a necessary and sufficient condition for the validation of Condition (<ref>) which leads, together with Assumption <ref>, to the result of Theorem <ref>. Finally, under the assumptions of Theorem <ref> and considering the risk of σ^2_A_N,m based on the empirical norm ._n, we also obtain [σ^2_A_N,m-σ^2_A_N^2_n] = O(log^β(N)(Nn)^-2β/(2β+1)). In fact, under Condition (<ref>), the estimator σ^2_A_N,m satisfies the results of Theorem <ref> with I = [-A_N,A_N] and A_N = o(√(log(N))), which implies rates of the same order for the two empirical norms. § ADAPTIVE ESTIMATION OF THE DIFFUSION COEFFICIENT FROM REPEATED OBSERVATIONS In this section, we suppose that n ∝ N and we propose a adaptive ridge estimator of σ^2 by selecting an optimal dimension from the sample D_N. In fact, consider the estimator σ^2_K,L where K satisfies: K:=K∈𝒦min{γ_n,N(σ^2_K)+pen(K)} and the penalty function pen : K↦pen(K) is established using the chaining technique of <cit.>. We derive below the risk of the adaptive estimator of σ^2_|I when the interval I⊂ is compact and the sample size N →∞. Suppose that N ∝ n,   L=log(N) and consider the collection [B] with K ∈𝒦 = {2^q,  q=0,1,…,q_max}⊂ℳ = {1,…,√(N)/log(N)}. Under Assumption <ref> , there exists a constant C>0 such that, 𝔼[σ^2_K,L-σ^2_|I^2_n,N]≤ 34K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)}+C/Nn where pen(K) = κ(K+M)log(N)/Nn with κ > 0 a numerical constant. We deduce from Corollary <ref> and its assumptions that the adaptive estimator σ^2_K,L satisfies: 𝔼[σ^2_K,L-σ^2_|I^2_n] = O((Nn)^-2β/(2β+1)). This result is justified since the penalty term is of the same order (up to a log-factor) than the estimation error established in Theorem <ref>. Considering the adaptive estimator of σ^2 on the real line I= when the sample size N →∞, we obtain the following result. Suppose that N ∝ n and L = log(N), and consider the collection [𝐁] with K ∈𝒦 = {2^q,  q=0,1,…,q_max}⊂ℳ = {1,…,√(N)/log(N)}. Under Assumption <ref> and for N large enough, the exists a constant C>0 such that, [σ^2_K,L - σ^2^2_n,N] ≤ 3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2^2_n + pen(K)} + C/Nn. where pen(K) = κ^'(K+M)log(N)/Nn with κ^'>0 a numerical constant. 
We have a penalty term of the same order than the one obtained in Theorem <ref> where σ^2 is estimated on a compact interval. One can deduce that the adaptive estimator reaches a rate of the same order than the rate of the non-adaptive estimator given in Corollary <ref> for the collection [𝐁]. If we consider the adaptive estimator of the compactly supported diffusion coefficient built from a single diffusion path, we obtain below an upper-bound of its risk of estimation. Suppose that N = 1,   L = √(log(n)) and consider the collection [𝐁] with K ∈𝒦 = {2^q,  q=0,…,q_max}⊂ℳ = {1,…,√(n)/log(n)}. Under Assumption <ref>, it yields 𝔼[σ^2_K,L-σ^2_|I^2_n,1] ≤   3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)} + C/n. where C>0 is a constant depending on τ_0, and pen(K) = κ(K+M)log(n)/n with κ>0 a numerical constant. We deduce from Theorem <ref> that if we assume that σ^2 ∈Σ_I(β,R), then the adaptive estimator σ^2_K,L reaches a rate of order n^-β/(2β+1) (up to a log-factor). The result of this theorem is almost a deduction of the result of Theorem <ref>, the slight difference being the use, in the proofs, of the local time of the process and the equivalence relation between the pseudo-norm ._n,1 with the pseudo-norm ._X instead of the empirical norm ._n considered in the proof of Theorem <ref>. § NUMERICAL STUDY This section is devoted to the numerical study on a simulation scheme. Section <ref> focuses on the presentation of the chosen diffusion models. In Section <ref>, we describe the scheme for the implementation of the ridge estimators. We mainly focus on the B-spline basis for the numerical study, and in Section <ref>, we add a numerical study on the performance of the Hermite-based ridge estimator of σ^2 on . Finally, we compare the efficiency of our estimator built on the real line from a single path with that of the Nadaraya-Watson estimator proposed in <cit.>. §.§ Models and simulations Recall that the time horizon is T=1 and X_0 = 0. Consider the following diffusion models: Model 1 Ornstein-Uhlenbeck: b(x) = 1-x,   σ(x)= 1 Model 2: b(x) = 1-x,   σ(x) = 1-x^2 Model 3: b(x) = 1-x,   σ(x) = 1/3+sin(2π x)+cos^2(π/2x) Model 1 is the commonly used Ornstein-Uhlenbeck model, known to be a simple diffusion model satisfying Assumption <ref>. Model 2 does not satisfy Assumption <ref>. Model 3 satisfies Assumption <ref>  with a multimodal diffusion coefficient. The size N of the sample D_N takes values in the set {1,10,100,1000} where the length n of paths varies in the set {100,250,500,1000}. As we work with the spline basis, the dimension m=K+M of the approximation space is chosen such that M=3 and K takes values in 𝒦={2^p, p=0,⋯,5} so that the subspaces are nested inside each other. We are using for the simulation of diffusion paths via the function of package, (see <cit.> for more details on the simulation of SDEs). §.§ Implementation of the ridge estimators In this section, we assess the quality of estimation of the adaptive estimator σ^2_m in each of the 3 models through the computation of its risk of estimation. We compare the performance of the adaptive estimator with that of the oracle estimator σ^2_m^* where m^* is given by: m^*:=m∈ℳmin σ^2_m-σ^2^2_n,N. For the spline basis, we have m^* = K^* + M with M=3. Finally, we complete the numerical study with a representation of a set of 10 estimators of σ^2 for each of the 3 models. 
We evaluate the MISE of the spline-based adaptive estimators σ^2_K by repeating 100 times the following steps: * Simulate samples D_N,n and D_N^',n with N∈{1,10,100,1000}, N^'=100 and n ∈{100, 250,1000}. * For each K∈𝒦, and from D_N,n, compute estimators σ^2_K given in Equations (<ref>) and (<ref>). * Select the optimal dimension K∈𝒦 using Equation (<ref>) and compute K^* from Equation (<ref>) * Using D_N^',n, evaluate σ^2_K-σ^2^2_n,N^' and σ^2_K^*-σ^2^2_n,N^'. We deduce the risks of estimation considering the average values of σ^2_m-σ^2^2_n,N^' and σ^2_m^*-σ^2^2_n,N^' over the 100 repetitions. Note that we consider in this section, the estimation of σ^2 on the compact interval I = [-1,1] and on the real line . The unknown parameters κ and κ^' in the penalty functions given in Theorem <ref> and Theorem <ref> respectively, are numerically calibrated (details are given in Appendix <ref>), and we choose κ = 4 and κ^' = 5 as their respective values. §.§ Numerical results We present in this section the numerical results of the performance of the spline-based adaptive estimators of σ^2_|I with I ⊆ together with the performance of the oracle estimators. We consider the case I=[-1,1] for the compactly supported diffusion coefficient, and the case I=. Tables <ref> and <ref> present the numerical results of estimation of σ^2_|I from simulated data following the steps given in Section <ref>. The results of Table <ref> and Table <ref> show that the adapted estimator σ^2_K is consistent, since its MISE tends to zero as both the size N of the sample D_N,n and the length n of paths are larger. Moreover, note that in most cases, the ridge estimators of the compactly supported diffusion coefficients perform better than those of the non-compactly supported diffusion functions. As expected, we observe that the oracle estimator has generally a better performance compared to the adaptive estimator. Nonetheless, we can remark that the performances are very close in several cases, highlighting the efficiency of the data-driven selection of the dimension. An additional important remark is the significant influence of the length n of paths on the performance of σ^2_K and σ^2_K^*,L (by comparison of Table <ref> with Table <ref>), which means that estimators built from higher frequency data are more efficient. A similar remark is made for theoretical results obtained in Sections <ref> and <ref>. Performance of the Hermite-based estimator of the diffusion coefficient We focus on the estimation of σ^2 on and assess the performance of its Hermite-based estimator (see Section <ref>). We present in Table <ref>, the performance of the oracle estimator σ^2_m^*,L. From the numerical results of Table <ref>, we observe that the Hermite-based estimator of σ^2 is consistent as the sample size N and the length n paths take larger values. Estimation of the diffusion coefficient from one path Consider ridge estimators of σ^2_|I with I=[-1,1]. For the case of the adaptive estimators of σ^2_|I, the dimension K is selected such that K = K∈𝒦minγ_n(σ^2_K) + pen(K) where pen(K) = κ(K+M)log(n)/n with κ >0. We choose the numerical constant κ = 4 and we derive the numerical performance of the adaptive estimator of σ^2_|I. Table <ref> gives the numerical performances of both the adaptive estimator and the oracle estimator of σ^2_|I on the compact interval I=[-1,1] and from a single diffusion path. From the obtained results, we see that the estimators are numerically consistent. 
However, we note that the convergence is slow (increasing n from 100 to 1000), which highlights the significant impact of the number N of paths on the efficiency of the ridge estimator. Comparison of the efficiency of the ridge estimator of the diffusion coefficient with its Nadaraya-Watson estimator. Consider the adaptive estimator σ^2_K of the square of the diffusion coefficient buit on the real line from a single diffusion path (N=1), where the dimension K is selected using Equation (<ref>). For the numerical assessment, we use the interval I = [-10^6, 10^6] to approximate the real line , and then, use Equation (<ref>) for the data-driven selection of the dimension. We want to compare the efficiency of σ^2_K with that of the Nadaraya-Watson estimator of σ^2 given from a diffusion path X̅ = (X_k/n)_1≤ k≤ n and for all x ∈ by S_n(x) = ∑_k=1^n-1K(X_k/n - x/h_n)[X_(k+1)/n - X_k/n]^2/n/∑_k=1^nK(X_k/n - x/h_n) where K is a positive kernel function, and h_n is the bandwidth. Thus, the estimator S_n(x) is consistent under the condition nh^4_n→ 0 as n tends to infinity (see <cit.>). We use the function of the R-package to compute the Nadaraya-Watson estimator S_n. We remark from the results of Table <ref> that our ridge estimator is more efficient. Note that for the kernel estimator S_n, the bandwidth is computed using the rule of thumb of Scott (see <cit.>). The bandwidth is proportional to n^-1/(d+4) where n is the number of points, and d is the number of spatial dimensions. §.§ Concluding remarks The results of our numerical study show that our ridge estimators built both on a compact interval and on the real line are consistent as N and n take larger values, or as only n takes larger values when the estimators are built from a single path. These results are in accordance with the theoretical results established in the previous sections. Moreover, as expected, we obtained the consistency of the Hermite-based estimators of σ^2 on the real line . Nonetheless, we only focus on the Hermite-based oracle estimator since we did not establish a risk bound of the corresponding adaptive estimator. Finally, we remark that the ridge estimator of σ^2 built from a single path performs better than its Nadaraya-Watson kernel estimator proposed in <cit.> and implemented in the R-package . § CONCLUSION In this article, we have proposed ridge-type estimators of the diffusion coefficient on a compact interval from a single diffusion path. We took advantage of the local time of the diffusion process to prove the consistency of non-adaptive estimators of σ^2 and derive a rate of convergence of the same order than the optimal rate established in <cit.>. We also propose an estimator of σ^2 on the real line from a single path. We proved its consistency using the method described in Section <ref>, and derive a rate of convergence order n^-β/(4β+1) over a Hölder space for the collection [𝐁]. Then, we extended the study to the estimation of σ^2 from repeated discrete observations of the diffusion process. We establish rates of convergence of the ridge estimators both on a compact interval and on . We complete the study proposing adaptive estimators of σ^2 on a compact interval for N=1 and N→∞, and on the real line for N→∞. A perspective on the estimation of the diffusion coefficient could be the establishment of a minimax rate of convergence of the compactly supported (square of the) diffusion coefficient from repeated discrete observations of the diffusion process. 
The case of the non-compactly supported diffusion coefficient may be a lot more challenging, since the transition density of the diffusion process is no longer lower-bounded. This new fact can lead to different rates of convergence depending on the considered method (see Section <ref>). § ACKNOWLEDGEMENTS I would like to thank my supervisors, Christophe Denis, Charlotte Dion-Blanc, and Viet-Chi Tran, for their sound advice, guidance and support throughout this research project. Their experience in scientific research and their expertise in stochastic calculus and process statistics were decisive in providing precise and relevant answers to the issues raised in this paper, taking into account what has already been done in the literature. I am particularly grateful for their precise and constant help throughout the writing of this article, from editorial advice to proofreading the introduction, the proofs and all other sections of the paper. § PROOFS In this section, we prove our main results of Sections <ref>, <ref> and <ref>. To simplify our notations, we set Δ_n = Δ(=1/n) and constants are generally denoted by C>0 or c>0 whose values can change from a line to another. Moreover, we use the notation C_α in case we need to specify the dependency of the constant C on a parameter α. §.§ Technical results Recall first some useful results on the local time and estimates of the transition density of diffusion processes. For all integer q≥ 1, there exists C^*>0 depending on q such that for all 0≤ s<t≤ 1, [|X_t-X_s|^2q]≤ C^*(t-s)^q. The proof of Lemma <ref> is provided in <cit.>. Under Assumptions <ref>, there exist constants c_σ >1, C > 1 such that for all t ∈ (0,1], x ∈ℝ, 1/C√(t)exp(-c_σx^2/t) ≤ p_X(t,x) ≤C√(t)exp(-x^2/c_σt). The proof of Proposition <ref> is provided in <cit.>, Proposition 1.2. Let h be a L_0-lipschitz function. Then there exists h̃∈𝒮_K_N,M, such that |h̃(x)-h(x)| ≤ C log(N)/K_N, ∀ x ∈ (-log(N),log(N)), where C >0 depends on L_0, and M. The proof of Proposition <ref> is provided in <cit.>. The finite-dimensional vector space 𝒮_K_N,M = 𝒮_K_N+M is introduced in Section <ref>. Under Assumption <ref>, there exist C_1,C_2 >0 such that for all A >0, sup_t ∈ [0,1](|X_t|≥ A) ≤C_1/Aexp(-C_2A^2). The proof of Lemma <ref> is provided in <cit.>, Lemma 7.3. Under Assumption <ref>, the following holds: ∀ x∈,   ℒ^x = ℒ^x_-   a.s. where ℒ^x_- = ε→ 0limℒ^x-ε. The result of Lemma <ref> justifies the definition of the local time ℒ^x, for x∈, given in Equation (<ref>). From <cit.>, Theorem 1.7, we have ∀ x∈,   ℒ^x - ℒ^x_- = 2∫_0^1_X_s = xdX_s = 2∫_0^1_X_s = xb(X_s)ds + 2∫_0^1_X_s = xσ(X_s)dW_s. For all x∈ and for all s∈[0,1], we have for all ε>0, (X_s = x) =  ε→ 0lim (X_s≤ x + ε) - ε→ 0lim (X_s≤ x - ε) = ε→ 0lim F_s(x + ε) - ε→ 0lim F_s(x - ε) =   F_s(x) - F_s(x^-) =   0 Thus, for all x∈, [|ℒ^x - ℒ^x_-|] ≤   2∫_0^1|b(x)|(X_s = x)ds + 2[|∫_0^1_X_s=xσ(X_s)dW_s|] =   2[|∫_0^1_X_s=xσ(X_s)dW_s|]. Using the Cauchy Schwartz inequality, we conclude that [|ℒ^x - ℒ^x_-|] ≤   2√((∫_0^1_X_s=xσ^2(X_s)ds)) = 2σ(x)∫_0^1(X_s = x)ds = 0. Using the Markov inequality, we have ∀ ε>0,   (|ℒ^x - ℒ^x_-|>ε) ≤1/ε[|ℒ^x - ℒ^x_-|] = 0. We finally conclude that for all x ∈, (ℒ^x≠ℒ^x_-) = (|ℒ^x - ℒ^x_-|>0) = 0. §.§ Proofs of Section <ref>  §.§.§ Proof of Lemma <ref> The proof is divided into two parts for each of the two results to be proven. First result. Since the function h is continuous on , let H be a primitive of h on . 
We deduce that for all s ∈ [0,1], h(X_s) = ε→ 0limH(X_s + ε) - H(X_s - ε)/2ε = ε→ 0lim1/2ε∫_X_s - ε^X_s + εh(x)dx = ε→ 0lim1/2ε∫_-∞^+∞h(x)_(x-ε,x+ε)(X_s)dx. Finally, since h is integrable on and using the theorem of dominated convergence, we obtain ∫_0^1h(X_s)ds = ∫_-∞^+∞h(x)ε→ 0lim1/2ε∫_0^1_(x-ε,x+ε)(X_s)dsdx = ∫_-∞^+∞h(x)ℒ^xdx. Second result. Fix t∈(0,1] and consider P_X : (t,x) ↦∫_-∞^xp_X(t,y)dy the cumulative density function of the random variable X_t of the density function x ↦ p_X(t,x). We have: ∀ x∈,   (ℒ^x) = ε→ 0lim1/2ε∫_0^1[_(x-ε,x+ε)(X_s)]ds = ε→ 0lim1/2ε∫_0^1(x - ε≤ X_s≤ x + ε)ds = ∫_0^1ε→ 0limP_X(s,x+ε) - P_X(s,x-ε)/2εds = ∫_0^1p_X(s,x)ds. §.§.§ Proof of Theorem <ref>  Let Ω_n,m be the random event in which the two pseudo-norms ._n,1 and ._X are equivalent and given by Ω_n,m := g∈𝒮_m∖{0}⋂{|g^2_n,1/g^2_X-1| ≤1/2}. The proof of Theorem <ref> relies on the following lemma. Let γ > 1 be a real number. Under Assumption <ref>, the following holds (Ω^c_n,m) ≤ Cm^2γ/n^γ/2, where C>0 is a constant depending on γ. The parameter γ > 1 has to be chosen appropriately (i.e. such that m^2γ/n^γ/2 = o(1/n)) so that we obtain a variance term of the risk of the estimator σ^2_m of order mlog(n)/n (see Theorem <ref> and Corollary <ref>). Recall that since N = 1, ζ^1_kΔ=ζ^1,1_kΔ+ζ^1,2_kΔ+ζ^1,3_kΔ is the error term of the regression model, with: ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds], ζ^1,2_kΔ=2/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW^1_s, ζ^1,3_kΔ=2b(X^1_kΔ)∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s. Besides, R^1_kΔ =R^1,1_kΔ+R^1,2_kΔ, with: R^1,1_kΔ=1/Δ(∫_kΔ^(k+1)Δb(X^1_s)ds)^2+1/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)Φ(X^1_s)ds R^1,2_kΔ=2/Δ(∫_kΔ^(k+1)Δ(b(X^1_s)-b(X^1_kΔ))ds)(∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s) where Φ:=2bσ^'σ+[σ^''σ+(σ^')^2]σ^2. By definition of the projection estimator σ^2_m for each m∈ℳ (see Equation (<ref>)), for all h∈𝒮_m,L, we have: γ_n,1(σ^2_m)-γ_n,1(σ^2_|I)≤γ_n,1(h)-γ_n,1(σ^2_|I). Furthermore, for all h∈𝒮_m,L, γ_n,1(h)-γ_n,1(σ^2_|I)=σ^2_|I-h^2_n,1+2ν_1(σ^2_|I-h)+2ν_2(σ^2_|I-h)+2ν_3(σ^2_|I-h)+2μ(σ^2_|I-h), where, ν_i(h) = 1/n∑_k=0^n-1h(X^1_kΔ)ζ^1,i_kΔ, i∈{1,2,3}, μ(h)=1/n∑_k=0^n-1h(X^1_kΔ)R^1_kΔ, and ζ^1,1_kΔ, ζ^1,2_kΔ, ζ^1,3_kΔ are given in Equations (<ref>), (<ref>), (<ref>), and finally, R^1_kΔ = R^1,1_kΔ+R^1,2_kΔ given in Equations (<ref>) and (<ref>). Then, for all m ∈ℳ, and for all h ∈𝒮_m,L, we obtain from Equation (<ref>) that σ^2_m-σ^2_|I^2_n,1≤h-σ^2_|I^2_n,1+2ν(σ^2_m-h)+2μ(σ^2_m-h), with ν=ν_1+ν_2+ν_3. Then, it comes, 𝔼[σ^2_m-σ^2_|I^2_n,1] ≤h∈𝒮_m,Linfh-σ^2_|I^2_n+2𝔼[ν(σ^2_m-h)]+2𝔼[μ(σ^2_m-h)]. Besides, for any a,d>0, using the inequality xy ≤η x^2 + y^2/η with η = a, d, we have, 2ν(σ^2_m-h) ≤2/aσ^2_m-σ^2_|I^2_X+2/ah-σ^2_|I^2_X+ah∈𝒮_m, h_X=1supν^2(h), 2μ(σ^2_m-h) ≤2/dσ^2_m-σ^2_|I^2_n,1+2/dh-σ^2_|I^2_n,1+d/n∑_k=1^n(R^1_kΔ)^2. §.§.§ Upper bound of 1/n∑_k=1^n(R^1_kΔ)^2 We have: ∀ k∈[[1,n]], R^1_kΔ=R^1,1_kΔ+R^1,2_kΔ+R^1,3_kΔ with,   R^1,1_kΔ=1/Δ(∫_kΔ^(k+1)Δb(X^1_s)ds)^2, R^1,2_kΔ=1/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)Φ(X^1_s)ds   R^1,3_kΔ=2/Δ(∫_kΔ^(k+1)Δ(b(X^1_s)-b(X^1_kΔ))ds)(∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s). For all k∈[[1,n]], using the Cauchy-Schwarz inequality and Equation (<ref>), 𝔼[|R^1,1_kΔ|^2] ≤𝔼[(∫_kΔ^(k+1)Δb^2(X^1_kΔ)ds)^2]≤Δ𝔼[∫_kΔ^(k+1)Δb^4(X^1_kΔ)ds]≤ CΔ^2. Consider now the term R^1,2_kΔ. From Equation (<ref>), we have Φ=2bσ^'σ+[σ^''σ+(σ^')^2]σ^2 and according to Assumption <ref>, there exists a constant C>0 depending on σ_1 and α such that |Φ(X^1_s)| ≤ C[(2+|X^1_s|)(1+|X^1_s|^α) + (1+|X^1_s|^α)^2]. 
Then, from Equation (<ref>) and for all s∈(0,1], [Φ^2(X^1_s)] ≤ Cs∈(0,1]sup[(2+|X^1_s|)^2(1+|X^1_s|^α)^2 + (1+|X^1_s|^α)^4] < ∞ and 𝔼[|R^1,2_kΔ|^2] ≤1/Δ^2∫_kΔ^(k+1)Δ((k+1)Δ-s)^2ds∫_kΔ^(k+1)Δ𝔼[Φ^2(X^1_s)]ds≤ CΔ^2 Finally, under Assumption <ref>, from Equation (<ref>) and using the Cauchy-Schwarz inequality, we have 𝔼[|R^1,3_kΔ|^2] ≤4/Δ^2𝔼[Δ∫_kΔ^(k+1)ΔL^2_0|X^1_s-X^1_kΔ|^2ds(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2] ≤4/Δ√(𝔼[L^4_0Δ∫_kΔ^(k+1)Δ|X^1_s-X^1_kΔ|^4ds]𝔼[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^4]) ≤ CΔ^2. As a result, there exists a constant C>0 such that, 𝔼[1/n∑_k=1^n(R^1_kΔ)^2]≤ CΔ^2. We set a = d = 8 and considering the event Ω_n,m on which the empirical norms ._X and ._n,1 are equivalent, we deduce from Equations (<ref>), (<ref>) and (<ref>) that, 𝔼[σ^2_m-σ^2_|I^2_n,1_Ω_n,m]≤ 3h∈𝒮_minfh-σ^2_|I^2_n+C𝔼(h∈𝒮_m, h_X=1supν^2(h))+CΔ^2 where C>0 is a constant depending on σ_1. §.§ Upper bound of 𝔼(h∈𝒮_m, h_X=1supν^2(h)) For all h=∑_ℓ=0^m-1a_ℓϕ_ℓ∈𝒮_m such that h^2_X=1, we have h^2≤1/τ_0 (see Equation (<ref>)) and the coordinate vector 𝐚 = (a_-M,⋯,a_K-1) satisfies: * 𝐚^2_2≤ Cm    (m = K+M) for the spline basis (see <cit.>, Lemma 2.6) * 𝐚^2_2≤ 1/τ_0 for an orthonormal basis since h^2 = 𝐚^2_2. Furthermore, using the Cauchy-Schwarz inequality, we have: ν^2(h)=(∑_ℓ=0^m-1a_ℓν(ϕ_ℓ))^2≤𝐚^2_2∑_ℓ=0^m-1ν^2(ϕ_ℓ). Thus, since ν=ν_1+ν_2+ν_3, for all ℓ∈[[-M,K-1]] and for all i∈{1,2,3}, 𝔼[ν^2_i(ϕ_ℓ)]=  1/n^2𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,i_kΔ)^2]. * Case i=1 Recall that ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds] where W=W^1. We fix a initial time s∈[0,1) and set M^s_t=∫_s^tσ(X^1_u)dW_u, ∀ t≥ s. (M^s_t)_t≥ s is a martingale and for all t∈[s,1], we have: <M^s,M^s>_t=∫_s^tσ^2(X^1_u)du. Then, ζ^1,1_kΔ=1/Δ(M^kΔ_(k+1)Δ)^2-<M^kΔ,M^kΔ>_(k+1)Δ is also a ℱ_kΔ-martingale, and, using the Burkholder-Davis-Gundy inequality, we obtain for all k∈[[0,n-1]], 𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0, 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤C/Δ^2𝔼[(∫_kΔ^(k+1)Δσ^2(X^1_u)du)^2]≤ Cσ^4_1. Then, using Equation (<ref>) we have: 𝔼[ν^2_1(ϕ_ℓ)] =  1/n^2𝔼[∑_k=0^n-1ϕ^2_ℓ(X^1_kΔ)(ζ^1,1_kΔ)^2]=1/n^2𝔼[∑_k=0^n-1ϕ^2_ℓ(X^1_kΔ)𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]] ≤  Cσ^4_1/n^2𝔼[∑_k=0^n-1ϕ^2_ℓ(X^1_kΔ)] and, ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤Cσ^4_1/n^2𝔼[∑_k=0^n-1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)]. One has: ∑_ℓ=-M^K-1B^2_ℓ(X^1_η(s))≤ 1   for the Spline basis   (m = K + M), ∑_ℓ = 0^m-1ϕ^2_ℓ(X^1_η(s))≤ Cm   for an orthonormal basis with   C = 0 ≤ℓ≤ m-1maxϕ_ℓ^2_∞. Thus, it comes that * ∑_ℓ=-M^K-1𝔼[ν^2_1(B_ℓ)]≤ C/n   for the Spline basis, * ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤ Cm/n   for an orthonormal basis, and, 𝔼(h∈𝒮_m, h^2_X=1supν^2_1(h))≤ Cm/n where C>0 is a constant depending on σ_1 and the basis. * Case i=2 Wa have ζ^1,2_kΔ=2/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s and, 𝔼[ν^2_2(ϕ_ℓ)] =  4𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)∫_kΔ^(k+1)Δ(k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2] =  4𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))(η(s)+Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2] ≤  Cσ^4_1Δ^2𝔼[∫_0^1ϕ^2_ℓ(X^1_η(s))ds] where C>0 is a constant. We deduce for both the spline basis and any orthonormal basis that there exists a constant C>0 depending on σ_1 such that: 𝔼(h∈𝒮_m, h^2_X=1supν^2_2(h))≤ Cm/n^2. * Case i=3 We have ζ^1,3_kΔ=2b(X^1_kΔ)∫_kΔ^(k+1)Δσ(X^1_s)dW_s and, 𝔼[ν^2_3(ϕ_ℓ)] =  4/n^2𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))b(X^1_η(s))σ(X^1_s)dW_s)^2] ≤  4σ^2_1/n^2𝔼[∫_0^1ϕ^2_ℓ(X^1_η(s))b^2(X^1_η(s))ds] Since for all x∈ℝ, b^2(x)≤ C_0(1+x^2) and t∈[0,1]sup𝔼(|X_t|^2)<∞, there exists a constant C>0 depending on σ_1 such that: 𝔼(h∈𝒮_m, h^2_X=1supν^2_3(h))≤ Cm/n^2. 
We finally obtain from Equations (<ref>), (<ref>) and (<ref>) that there exists a constant C>0 depending on σ_1 such that: 𝔼(h∈𝒮_m, h^2_X=1supν^2(h))≤ Cm/n. We deduce from Equations (<ref>) and (<ref>) that there exists a constant C>0 depending on σ_1 such that, [σ^2_m - σ^2_|I^2_n,1_Ω_n,m] ≤ 3h∈𝒮_m,Linfσ^2_|I - h^2_n + C(m/n + Δ^2). For n large enough, we have σ^2_m - σ^2_|I^2_∞≤ 2mL since σ^2_m_∞≤√(mL). Then, from Lemma <ref> and for all m∈ℳ, there exists a constant C>0 depending on σ_1 such that 𝔼[σ^2_m-σ^2_|I^2_n,1] =𝔼[σ^2_m-σ^2_|I^2_n,1_Ω_n,m]+𝔼[σ^2_m-σ^2_|I^2_n,1_Ω^c_n,m] ≤𝔼[σ^2_m-σ^2_|I^2_n,1_Ω_n,m]+2mLℙ(Ω^c_n,m) ≤ 3h∈𝒮_m,Linfσ^2_|I - h^2_n + C(m/n + m^2γ+1L/n^γ/2 + Δ^2). Since the pseudo-norms ._n,1 and ._X are equivalent on the event Ω_n,m, then, using Lemma <ref>, there exists a constant C>0 depending on σ_1 such that 𝔼[σ^2_m-σ^2_|I^2_X] = 𝔼[σ^2_m-σ^2_|I^2_X_Ω_n,m] + 𝔼[σ^2_m-σ^2_|I^2_X_Ω^c_n,m] ≤ 8𝔼[σ^2_m-σ^2_|I^2_n,1] + 10h∈𝒮_minfσ^2_|I-h^2_n + 2mL(Ω^c_n,m) ≤ 34h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/n+m^2γ+1L/n^γ/2+Δ^2). Finally, since the estimator σ^2_m is built from a diffusion path X̅^1 independent of the diffusion process X, and from Equations (<ref>) and (<ref>), the pseudo-norm ._X depending on the process X and the empirical norm ._n are equivalent (∀ h∈𝕃^2(I),  h^2_n≤ (τ_1/τ_0)[h^2_X]), there exists a constant C>0 depending on σ_1, τ_0 and τ_1 such that 𝔼[σ^2_m-σ^2_|I^2_n] ≤ 34τ_1/τ_0h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/n+m^2γ+1L/n^γ/2+Δ^2). The proof of this Lemma mainly focus on the spline basis and the Fourier basis based on functions cos and sin which are Lipschitz functions. Thus, for all g = ∑_ℓ = 0^m-1a_ℓϕ_ℓ∈𝒮_m, |g^2_n,1 - g^2_X| ≤∫_0^1|g^2(X_η(s)) - g^2(X_s)|ds≤ 2g_∞∫_0^1|g(X_η(s)) - g(X_s)|ds. From Equation (<ref>), one has [g^2_X] ≥τ_0 g^2. Thus, if g^2_X = 1, then g^2≤ 1/τ_0, and we deduce for all g = ∑_ℓ=0^m-1a_ℓϕ_ℓ that there exists a constant C>0 such that * Spline basis: g_∞≤a_2 ≤ C√(m)   (see   <cit.>) * Fourier basis: g_∞≤ C√(m)   since g = a_2 and ∑_ℓ=0^m-1ϕ^2_ℓ = O(m). Moreover, each g∈𝒮_m such that g^2_X = 1 is the Lipschitz function with a Lipschitz coefficient L_g = O(m^3/2). For the spline basis, this result is obtained in <cit.>, proof of Lemma C.1 combined with Lemma 2.6. For the Fourier basis, for all x,y∈ I and using the Cauchy Schwarz inequality, we obtain |g(x) - g(y)| ≤ ∑_ℓ = 0^m - 1|a_ℓ|.|ϕ_ℓ(x) - ϕ_ℓ(y)| ≤ 2π m√(m)𝐚_2|x-y| ≤ 2π/τ_0m√(m)|x-y|. Back to Equation (<ref>), there exists a constant C>0 such that |g^2_n,1 - g^2_X| ≤ Cm^2∫_0^1|X_η(s) - X_s|ds We have: Ω^c_n,m = {ω∈Ω,  ∃ g∈𝒮_m∖{0},  |g^2_n,1/g^2_X-1| > 1/2}, and, using Equation (<ref>), we obtain g∈𝒮_m∖{0}sup|g^2_n,1/g^2_X-1| = g∈𝒮_m, g^2_X = 1sup|g^2_n,1-g_X|≤ Cm^2∫_0^1|X_η(s) - X_s|ds. Finally, using the Markov inequality, the Hölder inequality, Equation (<ref>), and Lemma <ref>, we conclude that (Ω^c_n,m) ≤  (Cm^2∫_0^1|X_η(s) - X_s|ds≥1/2) ≤   Cm^2γ∫_0^1[|X_η(s) - X_s|^γ]ds ≤   Cm^2γ/n^γ/2 with γ∈ (1,+∞). §.§.§ Proof of Theorem <ref> Since L=log^2(n), we have 𝔼[σ^2_m,L-σ^2^2_n,1] =  𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1] + 𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^c^2_n,1] ≤  𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1] + 2log^2(n)t∈(0,1]sup(|X_t|>log(n)). From Equation (<ref>) (Proof of Theorem <ref>), for all h∈𝒮_m,L, 𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1] ≤h∈𝒮_m,Linfh-σ^2^2_n+2∑_i=1^3𝔼[ν_i(σ^2_m-h)]+2𝔼[μ(σ^2_m-h)] where ν_i,   i=1,2,3 and μ are given in Equation (<ref>). For all i∈{1,2,3} and for all h∈𝒮_m,L, one has 𝔼[ν_i(σ^2_m,L-h)]≤√(2mlog^2(n))√(∑_ℓ=0^m-1𝔼[ν^2_i(ϕ_ℓ)]). 
* Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)] According to Equation (<ref>), we have ∀ℓ∈[[0,m-1]], ν_1(ϕ_ℓ)=1/n∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,1_kΔ where ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds] is a martingale satisfying 𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0 and 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤1/Δ^2𝔼[(∫_kΔ^(k+1)Δσ^2(X^1_s)ds)^2]≤ Cσ^4_1 with C>0 a constant, W=W^1 and (ℱ_t)_t≥ 0 the natural filtration of the martingale (M_t)_t∈[0,1] given for all t∈[0,1] by M_t=∫_0^tσ(X^1_s)dW_s. We derive that ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]= 1/n^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,1_kΔ)^2]=1/n^2𝔼[∑_k=0^n-1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)(ζ^1,1_kΔ)^2] since for all integers k, k^' such that k > k^'≥ 0, we have [ϕ_ℓ(X^1_kΔ)ζ^1,1_kΔϕ_ℓ(X^1_k^'Δ)ζ^1,1_k^'Δ|ℱ_kΔ] = ϕ_ℓ(X^1_kΔ)ζ^1,1_k^'Δϕ_ℓ(X^1_k^'Δ)[ζ^1,1_kΔ|ℱ_kΔ] = 0. For each k∈[[0,n-1]], we have ∑_ℓ=0^m-1ϕ_ℓ(X^1_kΔ) = ∑_ℓ=-M^K-1B_ℓ(X^1_kΔ) =1   for   the   spline   basis ∑_ℓ=0^m-1ϕ_ℓ(X^1_kΔ)≤ Cm   For   an   orthonormal   basis   with  C=0 ≤ℓ≤ m-1maxϕ_ℓ_∞. Finally, there exists a constant C>0 such that ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤C/n  for   the   spline   basis Cm/n  for   an   orthonormal   basis. * Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)] For all k∈[[0,n-1]] and for all s∈[0,1], set η(s)=kΔ if s∈[kΔ,(k+1)Δ). We have: ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)] =4∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2] =4∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))(η(s)+Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2]. We conclude that ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)]≤C/n^2  for   the   spline   basis Cm/n^2  for   an   orthonormal   basis. where the constant C>0 depends on the diffusion coefficient and the upper bound of the basis functions. * Upper bound of ∑_ℓ=0^m-1𝔼[ν^3_2(ϕ_ℓ)] We have: ∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)] =4/n^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)b(X^1_kΔ)σ(X^1_s)dW_s)^2] =4/n^2∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))b(X^1_η(s))σ(X^1_s)dW_s)^2] ≤4/n^2𝔼[∫_0^1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_η(s))b^2(X^1_η(s))σ^2(X^1_s)ds]. Since for all x∈ℝ, b(x)≤ C_0(1+x^2) and t∈[0,1]sup𝔼(|X_t|^4)<∞, there exists a constant C>0 depending on the diffusion coefficient such that ∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)]≤C/n^2  for   the   spline   basis Cm/n^2  for   an   orthonormal   basis. We finally deduce that from Equations (<ref>) and (<ref>)  that for all h∈𝒮_m,L, 𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n)+2𝔼[μ(σ^2_m,L-h)]    [B] 𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n)+2𝔼[μ(σ^2_m,L-h)]    [F] where C>0 is a constant. It remains to obtain an upper bound of the term μ(σ^2_m,L-h). For all a>0 and for all h∈𝒮_m,L, 2μ(σ^2_m,L-h) ≤ 2/aσ^2_m,L-σ^2^2_n,1+2/ah-σ^2^2_n,1+a/n∑_k=0^n-1(R^1_kΔ)^2 2𝔼[μ(σ^2_m,L-h)] ≤ 2/a𝔼σ^2_m,L-σ^2^2_n,1+2/ah∈𝒮_minfh-σ^2^2_n +a/n∑_k=0^n-1𝔼[(R^1_kΔ)^2]. Using Equations (<ref>), (<ref>) and setting a=4, we deduce that there exists constant C>0 depending on σ_1 such that, 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n) + 2log^2(n)t∈(0,1]sup(|X_t|>A_n)   [B] 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n) + 2log^2(n)t∈(0,1]sup(|X_t|>A_n)   [F]. From Proposition <ref>, t∈(0,1]sup(|X_t|>log(n))≤log^-1(n)exp(-clog^2(n)) with c>0 a constant. Then, we obtain from Equation (<ref>) that 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n)     [B] 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n)     [F]. §.§.§ Proof of Corollary <ref> We have under Assumption <ref> from Theorem <ref> that 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n)     [B] 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n)     [F]. For [B]. We have m=K+M with M∈ℕ^* fixed. 
From Proposition <ref> and under Assumption <ref>, there exists a constant C>0 depending on β such that h∈𝒮_K+M,Linfh-σ^2^2_n≤ Clog^2β(n)K^-2β. Since K ∝ n^1/(4β+1), we obtain that [σ^2_K - σ^2^2_n,1] = O(log^2β(n)n^-2β/(4β+1)). For [F]. Under Assumptions <ref> and <ref> and From Lemma 12 in <cit.>, there exists a constant C>0 depending on τ_1 of Equation (<ref>) and the smoothness parameter β of the Besov space 𝐁^β_2,∞ such that h∈𝒮_m,Linfh-σ^2^2_n≤τ_1h∈𝒮_m,Linfh-σ^2^2≤ C|σ^2|^2_β m^-2β where |σ^2|_β is the semi-norm of σ^2 in the Besov space ℬ^β_2,∞([-log(n),log(n)]). Under Assumption <ref>, |σ^2|_β < ∞. Then, for m ∝ n^1/2(2β+1), the exists a constant C>0 depending on β, σ_1 and τ_1 such that [σ^2_m - σ^2^2_n,1] ≤ Clog(n)n^-β/(2β+1). §.§ Proof of Section <ref> The following lemma allows us to obtain a risk bound of σ^2_m,L defined with the empirical norm ._n from the risk bound defined from the pseudo norm ._n,N. Let σ^2_m,L be the truncated projection estimator on of σ^2 over the subspace 𝒮_m,L. Suppose that L = log^2(N),   N>1. Under Assumption <ref>, there exists a constant C>0 independent of m and N such that [σ^2_m,L - σ^2^2_n,N] - 2[σ^2_m,L - σ^2^2_n] ≤ C m^2log^3(N)/N. The proof of Lemma <ref> is provided in <cit.>, Theorem 3.3. The proof uses the independence of the copies X̅^1,…,X̅^N of the process X at discrete times, and the Bernstein inequality. §.§.§ Proof of Theorem <ref>  For fixed n and N in ℕ^*, we set for all m∈ℳ, Ω_n,N,m:=h∈𝒮_m∖{0}⋂{|h^2_n,N/h^2_n-1|≤1/2}. As we can see, the empirical norms h_n,N and h_n of any function h∈𝒮_m∖{0} are equivalent on Ω_n,N,m. More precisely, on the set Ω_n,N,m, for all h∈𝒮_m∖{0}, we have : 1/2h^2_n≤h^2_n,N≤3/2h^2_n. We have the following result: Under Assumption <ref>, the following holds: * If n ≥ N or n ∝ N, then m ∈ℳ = {1,…,√(N)/log(Nn)} and, ℙ(Ω^c_n,N,m)≤ 2exp(-C√(N)). * If n ≤ N, then m ∈ℳ= {1,…,√(n)/log(Nn)} and ℙ(Ω^c_n,N,m) ≤ 2exp(-C√(n)) where C>0 is a constant. We have: Ω^c_n,N,m={ω∈Ω, ∃ h_0∈𝒮_m, |h^2_n,N/h^2_n-1|>1/2}, Denote by ℋ_m = {h∈𝒮_m,  h_n = 1} and ℋ^ε_m the ε-net of ℋ_m for any ε >0. We have h∈ℋ_msup|h^2_n,N/h^2_n-1| = h∈ℋ_msup|h^2_n,N-1|. Let ε > 0 and let ℋ^ε_m be the ε-net of ℋ_m w.r.t. the supremum norm ._∞. Then, for each h∈ℋ_m, there exists h_ε∈ℋ^ε_m such that h-h_ε_∞≤ε. Then |h^2_n,N - 1| ≤|h^2_n,N - h_ε^2_n,N| + |h_ε^2_n,N - 1| and, |h^2_n,N - h_ε^2_n,N| ≤  1/Nn∑_j=1^N∑_k=0^n-1|h(X^j_kΔ) - h_ε(X^j_kΔ)|(h_∞ + h_ε_∞)≤(h_∞ + h_ε_∞)ε. Moreover, we have h^2, h_ε^2≤ 1/τ_0. Then, there exists a constant 𝐜 > 0 such that |h^2_n,N - h_ε^2_n,N| ≤ 2√( cm/τ_0)ε  for the spline basis  (see  Lemma 2.6  in   Denis  et   al.(2021)) |h^2_n,N - h_ε^2_n,N| ≤ 2√( cm/τ_0)ε  for   an   orthonormal   basis   (h^2_∞≤ (0≤ℓ≤ m-1maxϕ_ℓ^2_∞)mh^2). Therefore, for all δ > 0 and for both the spline basis and any orthonormal basis, (h∈ℋ_msup|h^2_n,N-1|≥δ) ≤(h∈ℋ^ε_msup|h^2_n,N-1|≥δ/2) + _4ε√( cm/τ_0)≥δ. We set δ = 1/2 and we choose ε > 0 such that 4ε√( cm/τ_0) < 1/2. Then, using the Hoeffding inequality, there exists a constant c>0 depending on c and τ_0 such that ℙ(Ω^c_n,N,m)≤ 2𝒩_∞(ε,ℋ_m)exp(-cN/m) where 𝒩_∞(ε,ℋ_m) is the covering number of ℋ_m satisfying: 𝒩_∞(ε,ℋ_m) ≤(κ√(m)/ε)^m where the constant κ>0 depends on c>0 (see <cit.>, Proof of Lemma D.1). We set ε = κ√(m^*)/N with m^* = maxℳ and we derive from Equations (<ref>) and (<ref>) that (Ω^c_n,N,m) ≤ 2N^m^*exp(-cN/m^*) = 2exp(-cN/m^*(1-m^*2log(N)/cN)). * If n ≥ N, then m ∈ℳ = {1,…,√(N)/log(Nn)}. 
Since m^*2log(N)/N → 0 as N→ +∞, there exists a constant C>0 such that ℙ(Ω^c_n,N,m)≤ 2exp(-C√(N)). * If n ≤ N, then m ∈ℳ = {1,…, √(n)/log(Nn)},   m^*2log(N)/N ≤log(N)/log^2(Nn) → 0 as N,n →∞, and ℙ(Ω^c_n,N,m)≤ 2exp(-C√(n)). * If n ∝ N, then m ∈ℳ = {1,…,√(N)/log(Nn)}. Since m^*2log(N)/N → 0 as N→ +∞, there exists a constant C>0 such that ℙ(Ω^c_n,N,m)≤ 2exp(-C√(N)). The proof of Theorem <ref> extends the proof of Theorem <ref> when N tends to infinity. Then, we deduce from Equation (<ref>) that 𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]≤ 3h∈𝒮_m,Linfh-σ^2_|I^2_n+C𝔼(h∈𝒮_m, h_n=1supν^2(h))+CΔ^2 where C>0 is a constant depending on σ_1, and ν = ν_1+ν_2+ν_3 with ν_i(h) = 1/Nn∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j,i_kΔ,    i=1,2,3 and the ζ^j,i_kΔ's are the error terms depending on each path X^j,  j=1,…,N. §.§ Upper bound of 𝔼(h∈𝒮_m, h_n=1supν^2(h)) For all h=∑_ℓ=0^m-1a_ℓϕ_ℓ∈𝒮_m such that h_n=1, we have h^2≤1/τ_0 and the coordinate vector a=(a_0,⋯,a_m-1) satisfies: * a^2_2≤ CK ≤ Cm for the spline basis (see <cit.>, Lemma 2.6) * a^2_2≤ 1/τ_0 for an orthonormal basis since h^2 = a^2_2. Furthermore, using the Cauchy Schwartz inequality, we have: ν^2(h)=(∑_ℓ=0^m-1a_ℓν(ϕ_ℓ))^2≤a^2_2∑_ℓ=0^m-1ν^2(ϕ_ℓ). Thus, for all ℓ∈[[0,m-1]], ν=ν_1+ν_2+ν_3 and for all i∈{1,2,3} 𝔼[ν^2_i(ϕ_ℓ)]=  1/Nn^2𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,i_kΔ)^2]. We finally deduce from  (<ref>), (<ref>) and (<ref>) that there exists a constant C>0 depending on σ_1 such that: 𝔼(h∈𝒮_m, h_n=1supν^2(h))≤ Cm/Nn. We deduce from  (<ref>) and (<ref>) that there exists a constant C>0 such that, 𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]≤ 3h∈𝒮_m,Linfσ^2_|I-h^2_n+C(m/Nn+Δ^2). Since we have σ^2_m_∞≤√(mL), then for m and L large enough, σ^2_m-σ^2_|I^2_∞≤ 2mL. There exists a constant C>0 such that for all m∈ℳ and for m and L large enough, 𝔼[σ^2_m-σ^2_|I^2_n,N] =𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]+𝔼[σ^2_m-σ^2_|I^2_n,N_Ω^c_n,N,m] ≤𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]+2mL(Ω^c_n,N,m). Then, from Equation (<ref>), Lemma <ref> and for m ∈ℳ = {1,…,√(min(n,N))/√(log(Nn))}, we have: 𝔼[σ^2_m-σ^2_|I^2_n,N]≤   3h∈𝒮_m,Linfh - σ^2_|I^2_n+C(m/Nn+mLexp(-C√(min(n,N)))+Δ^2) where C>0 is a constant. Recall that the empirical norms ._n,N and ._n are equivalent on Ω_n,N,m, that is for all h∈𝒮_m, h^2_n≤ 2h^2_n,N. Thus, we have 𝔼[σ^2_m-σ^2_|I^2_n] =  𝔼[σ^2_m-σ^2_|I^2_n_Ω_n,N,m] + 𝔼[σ^2_m-σ^2_|I^2_n_Ω^c_n,N,m] ≤  𝔼[σ^2_m-σ^2_|I^2_n_Ω_n,N,m] + 2mL(Ω^c_n,N,m). For all h ∈𝒮_m,L⊂𝒮_m, we have: 𝔼[σ^2_m-σ^2_|I^2_n_Ω_n,N,m] ≤   2𝔼[σ^2_m-h^2_n_Ω_n,N,m] + 2h-σ^2_|I^2_n ≤   4𝔼[σ^2_m-h^2_n,N_Ω_n,N,m] + 2h-σ^2_|I^2_n ≤   8𝔼[σ^2_m-σ^2_|I^2_n,N] + 10h-σ^2_|I^2_n. We finally conclude that 𝔼[σ^2_m-σ^2_|I^2_n] ≤ 34h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/Nn+mLexp(-C√(min(n,N)))+Δ^2). §.§.§ Proof of Corollary <ref>  Under Assumption <ref> and from Theorem <ref> and Lemma (<ref>), there exists a constant C>0 such that 𝔼[σ^2_m-σ^2_|I^2]≤ C(h∈𝒮_m,Linfh-σ^2_|I^2_n+m/Nn+L/min(N^4,n^4)+1/n^2). For [B]. We have m=K+M where M is fixed. From Lemma (<ref>), under Assumption <ref>, we have h∈𝒮_m,Linfh-σ^2_|I^2_n = O(K^-2β). Thus, for K ∝ (Nn)^1/(2β+1) and L = log(Nn), 𝔼[σ^2_m-σ^2_|I^2]≤ C(Nn)^-2β/(2β+1)+Clog(Nn)min(N^-4,n^-4). where C>0 is a constant depending on β. For [F]. From Equation (<ref>) and the proof of Corollary <ref>, we have h∈𝒮_minfh - σ^2_|I^2_n = O(m^-2s). Then, for m = (Nn)^1/(2s+1) and L = log(Nn), we obtain 𝔼[σ^2_m-σ^2_|I^2]≤ C(Nn)^-2s/(2s+1)+Clog(Nn)min(N^-4,n^-4). §.§.§ Proof of Theorem <ref>  We consider the restriction σ^2_[-log(N),log(N)] of σ^2 on the compact interval [-log(N),log(N)] on which the spline basis is built. 
Then we have: 𝔼[σ^2_m,L-σ^2^2_n] = 𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^2_n] + 𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^c^2_n] and from Proposition <ref>, Lemma <ref> and for N large enough, there exists constants c,C>0 such that 𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^c^2_n] ≤ 2L/n∑_k=0^n-1(|X_kΔ| > log(N))≤ 2Lt∈[0,1]sup(|X_t|≥log(N)) ≤ C/log(N)exp(-clog^2(N)). We deduce that 𝔼[σ^2_m,L-σ^2^2_n] = 𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^2_n] + C/log(N)exp(-clog^2(N)). It remains to upper-bound the first term on the right hand side of Equation (<ref>). Upper bound of 𝔼[σ^2_m,L-σ^2^2_n_[-log(N),log(N)]]. For all h∈𝒮_m,L, we obtain from Equation (<ref>), γ_n,N(σ^2_m,L)-γ_n,N(σ^2)≤γ_n,N(h)-γ_n,N(σ^2). For all h∈𝒮_m,L, γ_n,N(h)-γ_n,N(σ^2)=h-σ^2^2_n,N+2ν_1(σ^2-h)+2ν_2(σ^2-h)+2ν_3(σ^2-h)+2μ(σ^2-h) where ν_i(h)=1/nN∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j,i_kΔ, i∈{1,2,3}, μ(h)=1/nN∑_j=1^N∑_k=0^n-1h(X^j_kΔ)R^j_kΔ, we deduce from Equation (<ref>) that for all h∈𝒮_m,L, 𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+2∑_i=1^3𝔼[ν_i(σ^2_m,L-h)]+2𝔼[μ(σ^2_m,L-h)]. For all i∈{1,2,3} and for all h∈𝒮_m,L, one has 𝔼[ν_i(σ^2_m,L-h)]≤√(2mL)√(∑_ℓ=0^m-1𝔼[ν^2_i(ϕ_ℓ)]). * Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)] According to Equation (<ref>), we have ∀ℓ∈[[0,m-1]], ν_1(ϕ_ℓ)=1/Nn∑_j=1^N∑_k=0^n-1ϕ_ℓ(X^j_kΔ)ζ^j,1_kΔ where ζ^j,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^j_s)dW^j_s)^2-∫_kΔ^(k+1)Δσ^2(X^j_s)ds] is a martingale satisfying 𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0 and 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤1/Δ^2𝔼[(∫_kΔ^(k+1)Δσ^2(X^1_s)ds)^2]≤ Cσ^4_1 with C>0 a constant, W=W^1 and (ℱ_t)_t≥ 0 the natural filtration of the martingale (M_t)_t∈[0,1] given for all t∈[0,1] by M_t=∫_0^tσ(X^1_s)dW_s. We derive that ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]= 1/Nn^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1ϕ_ℓ(X^j_kΔ)ζ^1,1_kΔ)^2]=1/Nn^2𝔼[∑_k=0^n-1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)(ζ^1,1_kΔ)^2]. For each k∈[[0,n-1]], we have ∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ) = ∑_ℓ=-M^K-1B^2_ℓ(X^1_kΔ) =1   for   the   spline   basis ∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)≤ Cm   For   an   orthonormal   basis   with  C=0 ≤ℓ≤ m-1maxϕ_ℓ^2_∞. Finally, there exists a constant C>0 such that ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤C/Nn  for   the   spline   basis Cm/Nn  for   an   orthonormal   basis. * Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)] For all k∈[[0,n-1]] and for all s∈[0,1], set η(s)=kΔ if s∈[kΔ,(k+1)Δ). We have: ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)] =4/N∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2] =4/N∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))(η(s)+Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2]. We conclude that ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)]≤C/Nn^2  for   the   spline   basis Cm/Nn^2  for   an   orthonormal   basis. where the constant C>0 depends on the diffusion coefficient and the upper bound of the basis functions. * Upper bound of ∑_ℓ=0^m-1𝔼[ν^3_2(ϕ_ℓ)] We have: ∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)] =4/Nn^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)b(X^1_kΔ)σ(X^1_s)dW_s)^2] =4/Nn^2∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))b(X^1_η(s))σ(X^1_s)dW_s)^2] ≤4/Nn^2𝔼[∫_0^1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_η(s))b(X^1_η(s))σ^2(X^1_s)ds]. Since for all x∈ℝ, b(x)≤ C_0(1+x^2) and t∈[0,1]sup𝔼(|X_t|^2)<∞, there exists a constant C>0 depending on the diffusion coefficient such that ∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)]≤C/Nn^2  for   the   spline   basis Cm/Nn^2  for   an   orthonormal   basis. We finally deduce that from Equations (<ref>) and (<ref>)  that for all h∈𝒮_m,L, 𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+C√(mL/Nn)+2𝔼[μ(σ^2_m,L-h)]    (1) 𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+C√(m^2L/Nn)+2𝔼[μ(σ^2_m,L-h)]    (2) where C>0 is a constant, the result (1) corresponds to the spline basis, and the result (2) corresponds to any orthonormal basis. 
It remains to obtain an upper bound of the term μ(σ^2_m,L-h). For all a>0 and for all h∈𝒮_m,L, 2μ(σ^2_m,L-h) ≤ 2/aσ^2_m,L-σ^2^2_n,N+2/ah-σ^2^2_n,N+a/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2 2𝔼[μ(σ^2_m,L-h)] ≤ 2/a𝔼σ^2_m,L-σ^2^2_n,N+2/ah∈𝒮_m,Linfh-σ^2^2_n +a/Nn∑_j=1^N∑_k=0^n-1𝔼[(R^j_kΔ)^2]. Using Equations (<ref>), (<ref>) and setting a=4, we deduce that there exists constant C>0 depending on σ_1 such that, 𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]]≤h∈𝒮_m,Linfh-σ^2^2_n+C(√(mL/Nn)+Δ^2)    [𝐁] 𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+C(√(m^2L/Nn)+Δ^2)    [𝐇]. The final result is obtained from Equations (<ref>) and (<ref>). §.§.§ Proof of Lemma <ref>  It is proven in <cit.> that for each dimension m∈ℳ, the Gram matrix Ψ_m built from the Hermite basis is invertible. For the case of the B-spline basis, let us consider a vector (x_-M,⋯,x_K-1)∈ℝ^m such that x_j∈[u_j+M,u_j+M+1) and B_j(x_j)≠ 0. Since [u_j+M,u_j+M+1)∩[u_j^'+M,u_j^'+M+1)=∅ for all j,j^'∈{-M,⋯,K-1} such that j≠ j^', then for all j,j^'∈{-M,⋯,K-1} such that j≠ j^', B_j(x_j^')=0. Consequently, we obtain: ((B_ℓ(x_ℓ^'))_-M≤ℓ,ℓ^'≤ K-1) =(diag(B_-M(x_M),⋯,B_K-1(x_K-1))) =∏_ℓ=-M^K-1B_ℓ(x_ℓ)≠ 0. Then, we deduce from <cit.>, Lemma 1 that the matrix Ψ_m is invertible for all m∈ℳ, where the function f_T are replaced by f_n : x↦1/n∑_k=0^n-1p_X(kΔ,x) with λ([-A_N,A_N]∩supp(f_n))>0, λ being the Lebesgue measure. Case of the B-spline basis. For all w∈ℝ^m such that w_2,m=1, we have: w^'Ψ_mw = t_w^2_n=∫_-A_N^A_Nt^2_w(x)f_n(x)dx+t^2_w(x_0)/n with t_w=∑_ℓ=-M^K-1w_ℓB_ℓ. Under Assumption <ref>, the transition density (t,x)↦ p_X(t,x) is approximated as follows ∀ (t,x)∈(0,1]×ℝ, 1/K_*√(t)exp(-c_σx^2/t)≤ p_X(t,x)≤K_*/√(t)exp(-x^2/c_σt) where K_*>1 and c_σ>1. Since s↦exp(-c_σx^2/s) is an increasing function, then for n large enough and for all x∈[-A_N,A_N], f_n(x) ≥1/K_*n∑_k=1^n-1exp(-cx^2/kΔ)≥1/K_*∫_0^1-Δexp(-c_σx^2/s)ds ≥1/K_*∫_1-(log(N))^-1^1-(2log(N))^-1exp(-c_σx^2/s)ds ≥1/2K_*log(N)exp(-c_σx^2/1-log^-1(N)). Thus, the density function satisfies ∀ x∈[-A_N,A_N], f_n(x)≥12K_*log(N)exp(-c_σA^2_N/1-log^-1(N))≥12K_*log(N)exp(-c_σA^2_N). Finally, since there exists a constant C_1>0 such that t_w^2≥ C_1A_NK^-1_N (see <cit.>, Lemma 2.6), for all w∈ℝ^m (m = K_N+M) such that w_2,m=1, there exists a constant C>0 such that, w^'Ψ_mw≥CA_N/mlog(N)exp(-c_σA^2_N). Case of the Hermite basis. For all w∈^m such that w_2,m=1, we have w^'Ψ_mw=t_w^2_n=∫_-∞^+∞t^2_w(x)f_n(x)dx+t^2_w(x_0)/n with t_w=∑_ℓ=0^m-1w_ℓh_ℓ. Recall that for all x∈ such that |x|≥√((3/2)(4m+3)), |h_ℓ(x)|≤ c|x|exp(-c_0x^2) for all ℓ≥ 0. Then we have w^'Ψ_mw ≥  ∫_|x|≤√((3/2)(4m+3))(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2f_n(x)dx ≥  x∈[-√((3/2)(4m+3)),√((3/2)(4m+3))]inff_n(x)∫_|x|≤√((3/2)(4m+3))(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx ≥  1/2K_*log(N)exp(-3c_σ(4m+3)/2(1-log^-1(N)))∫_|x|≤√((3/2)(4m+3))(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx since for all x∈ℝ, f_n(x)≥(1/2K_*log(N))exp(-c_σx^2/1-log^-1(N)). Set a_N=√((3/2)(4m+3)), then we obtain w^'Ψ_mw≥exp(-3c_σ(4m+3)/2(1-log^-1(N)))/2K_*log(N)(∫_-∞^+∞(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx-∫_|x|>a_N(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx) ≥exp(-3c_σ(4m+3)/2(1-log^-1(N)))/2K_*log(N)(1-2c^2m∫_a_N^+∞x^2exp(-8c_0x^2)dx) ≥exp(-3c_σ(4m+3)/2(1-log^-1(N)))/2K_*log(N)(1-c^2m/8c_0√(3/2(4m+3))exp(-12c_0(4m+3))) where c,c_0>0 are constants depending on the Hermite basis. Finally, for N large enough, 1-c^2m/8c_0√(3/2(4m+3))exp(-12c_0(4m+3))≥1/2. Finally, there exists a constant C>0 such that for all w∈^m such that w_2,m, w^'Ψ_mw ≥C/log(N)exp(-3c_σ(4m+3)/2(1-log^-1(N))). 
§.§.§ Proof of Theorem <ref>  The proof of Theorem <ref>  relies on the following lemma: Under Assumptions <ref> and for σ^2∈Σ_I(β,R) with β≥ 1,   I = [-A_N,A_N] and N ∝ n,   A_N = o (√(log(N))),   K ∝((Nn)^1/(2β+1)A_N)    (m = K+M), the following holds: ℙ(Ω^c_n,N,m) ≤ Cexp(- c log^3/2(N)) where c,C>0 are constants independent of N. According to Equations (<ref>) in the proof of Theorem <ref>, for all dimension m=K+M, with K∈, and for all h∈𝒮_K+M, there exists a constant C>0 such that [σ^2_A_N,m-σ^2_A_N^2_n,N_Ω_n,N,m]≤ C[h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+(h∈𝒮_K+M,h_n=1supν^2(h))+Δ^2] where Ω_n,N,m is given in Equation (<ref>)  and ν=ν_1+ν_2+ν_3 with the ν_i given in Equation (<ref>) . For all h=∑_ℓ=-M^K-1a_ℓB_ℓ∈𝒮_K+M,L_N, h^2_n=[1/n∑_k=0^n-1h^2(X_kΔ)]=∑_ℓ=-M^K-1∑_ℓ=-M^K-1a_ℓa_ℓ^'[1/n∑_k=0^n-1B_ℓ(X_kΔ)B_ℓ^'(X_kΔ)]=a^'Ψ_ma. The Gram matrix Ψ_m is invertible for each K∈ℳ (see proof of Lemma <ref>). It follows that for all h=∑_ℓ=-M^K-1a_ℓB_ℓ such that h^2_n=a^'Ψ_ma=1, one has a=Ψ^-1/2_mu where u∈ℝ^m and u_2,m=1. Furthermore, we have: h=∑_ℓ=-M^K-1a_ℓB_ℓ=∑_ℓ=-M^K-1u_ℓ∑_ℓ^'^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'. Then for all h∈𝒮_K+M, we have ν^2(h)≤ 3(ν^2_1(h)+ν^2_2(h)+ν^2_3(h)) where, ∀ i∈{1,2,3}, ν^2_i(h)≤∑_ℓ=-M^K-1(1/Nn∑_j=1^N∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^j,i_kΔ)^2. So we obtain, ∀ i∈{1,2,3}, [h∈𝒮_K+M,h_n=1supν^2_i(h)]≤1/Nn^2∑_ℓ=-M^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^1,i_kΔ)^2] For i=1, we have ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds] and we obtained in the proof of Theorem <ref>  that there exists a constant C>0 such that for all k∈[[0,n-1]], 𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0, 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤ C𝔼[(∫_kΔ^(k+1)Δσ^2(X_u)du)^2]≤ Cσ^4_1Δ^2. We deduce that [h∈𝒮_K+M,h_n=1supν^2_1(h)] =1/Nn^2Δ^2∑_ℓ=0^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ)ζ^1,1_kΔ)^2] ≤1/N∑_ℓ=-M^K-1∑_k=0^n-1{(∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ))^2(ζ^1,1_kΔ)^2} ≤4σ^2_1/Nn∑_ℓ=-M^K-1∑_ℓ^'=-M^K-1∑_ℓ^''=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓ[Ψ^-1/2_m]_ℓ^'',ℓ[Ψ^-1/2_m]_ℓ^',ℓ^''. We have: ∑_ℓ=-M^K-1∑_ℓ^'=-M^K-1∑_ℓ^''=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓ[Ψ^-1/2_m]_ℓ^'',ℓ[Ψ^-1/2_m]_ℓ^',ℓ^''=Tr(Ψ^-1_mΨ_m)=Tr(I_m)=m. So we obtain [h∈𝒮_K+Msupν^2_1(h)]≤4σ^2_1m/Nn. For i=2, we have ζ^1,2_kΔ=2/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s and [h∈𝒮_K+M,h_n=1supν^2_2(h)] ≤1/Nn^2∑_ℓ=-M^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^1,2_kΔ)^2] ≤4σ^4_1σ^'^2_∞Δ/Nn^2∑_ℓ=-M^K-1∑_k=0^n-1[(∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ))^2] ≤4σ^2_1σ^'^2_∞m/Nn^2. For i=3, we have ζ^1,3_kΔ=2b(X^1_kΔ)∫_kΔ^(k+1)Δσ(X^1_s)dW_s and there exists constants C_1,C_2>0 such that [h∈𝒮_K+M,h_n=1supν^2_3(h)] ≤1/Nn^2∑_ℓ=-M^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^1,3_kΔ)^2] ≤ C_1σ^2_1Δ/Nn^2∑_ℓ=-M^K-1∑_k=0^n-1[(∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ))^2] ≤ C_2σ^2_1m/Nn^2. Finally, there exists a constant C>0 depending on σ_1 and M such that [h∈𝒮_K+M,h_n=1supν^2(h)]≤ Cm/Nn. From Equations (<ref>) and (<ref>) , we deduce that [σ^2_A_N,m-σ^2_A_N^2_n,N_Ω_n,N,m]≤ C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn+Δ^2) where C>0 is a constant depending on σ_1 and M. We obtain [σ^2_A_N,m-σ^2_A_N^2_n,N]≤ C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn+Δ^2)+[σ^2_A_N,m-σ^2_A_N^2_n,N_Ω^c_n,N,m] and for N large enough, σ^2_A_N,m-σ^2_A_N^2_n,N≤ 4mL, and according to Lemma <ref> , [σ^2_A_N,m-σ^2_A_N^2_n,N_Ω^c_n,N,m]≤ 4mLℙ(Ω^c_n,N,m)≤ CmLexp(-clog^3/2(N)) where c>0 is a constant. 
Thus, there exists a constant C>0 such that [σ^2_A_N,m-σ^2_A_N^2_n,N] ≤  [σ^2_A_N,m-σ^2_A_N^2_n,N_Ω_n,N,m]+[σ^2_A_N,m-σ^2_A_N^2_n,N_Ω^c_n,N,m] ≤  C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn+mLexp(-clog^3/2(N))+Δ^2). Then, as n ∝ N and L = log(N), there exists a constant C>0 such that [σ^2_A_N,m-σ^2_A_N^2_n,N] ≤  C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn). Finally, since σ^2∈Σ_I(β,R) with β≥ 1 and I = [-A_N, A_N], one has h∈𝒮_K+M,Linfh-σ^2_A_N^2_n≤ CA^2β_NK^-2β where the constant C>0 depends on β, R and M. Furthermore, as we chose the inverval [-A_N,A_N] such that A_N = o (√(log(N))) and for K ∝((Nn)^1/(2β+1)A_N), we obtain [σ^2_A_N,m-σ^2_A_N^2_n,N] ≤  Clog^β(N)(Nn)^-2β/(2β+1). §.§ Proof of Section <ref> §.§.§ Proof of Theorem <ref> Set for all K, K^'∈𝒦 = {2^q,   q=0,…, q_max,   2^q_max≤√(N)/log(N)}⊂ℳ, 𝒯_K,K^' = {g∈𝒮_K+M+𝒮_K^'+M, g_n=1,  g_∞≤√(L)}. Recall that for all j ∈ [[1,N]] and for all k ∈ [[0,n]], ζ^j,1_kΔ = 1/Δ[(∫_kΔ^(k+1)Δσ(X^j_s)dW^j_s)^2-∫_kΔ^(k+1)Δσ^2(X^j_s)ds]. The proof of Theorem <ref> relies on the following lemma whose proof is in Appendix. Under Assumption <ref>, for all ε, v>0 and g∈𝒯_K,K^', there exists a real constant C>0 such that, ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ζ^j,1_kΔ≥ε, g^2_n,N≤ v^2)≤exp(-CNnε^2/σ^2_1(εg_∞+4σ^2_1v^2)) and for all x>0 such that x≤ε^2/σ^2_1(εg_∞+4σ^2_1v^2), ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ζ^j,1_kΔ≥ 2σ^2_1v√(x)+σ^2_1g_∞x, g^2_n,N≤ v^2)≤exp(-CNnx). From Equation (<ref>), we have K:=K∈𝒦min{γ_n,N(σ^2_K)+pen(K)}. For all K∈𝒦 and h∈𝒮_K+M,L, γ_n,N(σ^2_K)+pen(K)≤γ_n,N(h)+pen(K), then, for all K∈𝒦 and for all h∈𝒮_K+M,L, γ_n,N(σ^2_K)-γ_n,N(σ^2_|I)≤  γ_n,N(h)-γ_n,N(σ^2_|I)+pen(K)-pen(K) σ^2_K-σ^2_|I^2_n,N≤  h-σ^2_|I^2_n,N+2ν(σ^2_K-h)+2μ(σ^2_K - h)+pen(K)-pen(K) ≤  h-σ^2_|I^2_n,N+1/dσ^2_K-t^2_n+dg∈𝒯_K,Ksupν^2(g)+1/dσ^2_K-h^2_n,N +d/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2+pen(K)-pen(K) where d>1 and the space 𝒯_K,K is given in Equation (<ref>). On the set Ω_n,N,K_max (given in Equation (<ref>)): ∀ h∈𝒮_K+M, 1/2h^2_n≤h^2_n,N≤3/2h^2_n. Then on Ω_n,N,K_max, for all d>1 and for all h∈𝒮_K+M with K∈𝒦, (1-10/d)σ^2_K-σ^2_|I^2_n,N≤  (1+10/d)h-σ^2_|I^2_n,N+dh∈𝒯_K,Ksupν^2(h)+d/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2 +pen(K)-pen(K). We set d=20. Then, on Ω_n,N,max and for all h∈𝒮_K+M,L, σ^2_K-σ^2_|I^2_n,N≤ 3h - σ^2_|I^2_n,N+20h∈𝒯_K,Ksupν^2(h)+20/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2+2(pen(K)-pen(K)). Let q : 𝒦^2⟶ℝ_+ such that 160 q(K,K^')≤ 18 pen(K)+16 pen(K^'). Thus, on the set Ω_n,N,K_max, there exists a constant C>0 such that for all h∈𝒮_K+M 𝔼[σ^2_K-σ^2_|I^2_n,N_Ω_n,N,K_max]≤   34(h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)) +160(h∈𝒯_K,Ksupν^2_1(h)-q(K,K))+CΔ^2 where ν_1(h):=1/Nn∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j,1_kΔ with ζ^j,1_kΔ the error term. We set for all K,K^'∈𝒦, G_K(K^'):=h∈𝒯_K,K^'supν^2_1(h) and for N and n large enough, σ^2_K-σ^2_|I^2_n,N≤ 4(K+M)L. We deduce that, 𝔼[σ^2_K-σ^2_|I^2_n,N] ≤  𝔼[σ^2_K-σ^2_|I^2_n,N_Ω_n,N,K_max]+𝔼[σ^2_K-σ^2_|I^2_n,N_Ω^c_n,N,K_max] ≤   34K∈𝒦inf(h∈𝒮_K+Minfh-σ^2_|I^2_n+pen(K)) +CΔ^2+4(K+M)Lℙ(Ω^c_n,N,K_max) +160𝔼[(G_K(K)-q(K,K))_+_Ω_n,N,K_max]. In the sequel, we refer to the proof of Proposition 6.1 in <cit.>. We known from Lorentz et al (see <cit.>) that given the unit ball B_._n(0,1) of the approximation subspace 𝒮_K+M with respect to norm ._n defined as follows: B_._n(0,1)={h∈𝒮_K+M : h_n≤ 1}={h∈𝒮_K+M : h≤1/τ_0}=B_2(0,1/τ_0), we can find a ε-net E_ε such that for each ε∈(0,1], |E_ε|≤(3/ετ_0)^K+M. Recall that 𝒯_K,K^'={g∈𝒮_K+M+𝒮_K^'+M, g_n=1, g_∞≤√(L)} and consider the sequence (E_ε_k)_k≥ 1 of ε-net with ε_k=ε_0 2^-k and ε_0∈(0,1]. Moreover, set N_k = log(|E_ε_k|) for each k≥ 0. 
Then for each g∈𝒮_K+M+𝒮_K^'+M such that g_∞≤√(L), there exists a sequence (g_k)_k≥ 0 with g_k∈ E_ε_k such that g=g_0+∑_k=1^∞g_k-g_k-1. Set ℙ:=ℙ(.∩Ω_n,N,K_max) and τ:=σ_1^2√(6x^n,N_0)+σ^2_1√(L)x^n,N_0+∑_k≥ 1ε_k-1{σ_1^2√(6x^n,N_k)+2σ^2_1√(L)x^n,N_k}=y^n,N_0+∑_k≥ 0y^n,N_k. For all h∈𝒯_K,K^' and on the event Ω_n,N,K_max, one has h^2_n,N≤3/2h^2_n=3/2. Then, using the chaining technique of <cit.>, we have ℙ(h∈𝒯_K,K^'supν_1(h)>τ) =ℙ(∃ (h_k)_k≥ 0∈∏_k≥ 0E_ε_k/ ν_1(h)=ν_1(h_0)+∑_k=1^∞ν_1(h_k-h_k-1)>τ) ≤∑_h_0∈ E_0ℙ(ν_1(h_0)>y^n,N_0)+∑_k=1^∞∑_h_k-1∈ E_ε_k-1h_k∈ E_ε_kℙ(ν_1(h_k-h_k-1)>y^n,N_k). According to Equation (<ref>) and Lemma <ref>, there exists a constant C>0 such that (ν_1(h_0) > y^n,N_0) ≤  (ν_1(h_0) > σ_1√(6x^n,N_0)+σ^2_1h_0_∞x^n,N_0) ≤  exp(-CNnx^n,N_0), ∀ k≥ 1,  (ν_1(h_k - h_k-1) > y^n,N_0) ≤  (ν_1(h_k - h_k-1) > σ_1√(6x^n,N_k)+σ^2_1h_k - h_k-1_∞x^n,N_k) ≤  exp(-CNnx^n,N_k). Finally, since N_k = log(|E_ε_k|) for all k≥ 0, we deduce that ℙ(h∈𝒯_K,K^'supν_1(h)>τ) ≤|E_ε_0|exp(-CNnx^n,N_0) + ∑_k=1^∞(|E_ε_k|+|E_ε_k-1|)exp(-CNnx^n,N_k) ≤exp(N_0-CNnx^n,N_0)+∑_k=1^∞exp(N_k+N_k-1-CNnx^n,N_k). We choose x^n,N_0 and x^n,N_k, k≥ 1 such that, N_0 - CNnx^n,N_0 = -a(K+K^' + 2M)-b N_k + N_k-1 - CNnx^n,N_k = -k(K + K^'+2M) - a(K + K^' + 2M) - b where a and b are two positive real numbers. We deduce that x^n,N_k≤ C_0(1+k)K + K^'+2M/Nn and τ≤ C_1σ^2_1√(√(L)K + K^'+2M/Nn) with C_0>0 and C_1 two constants depending on a and b. It comes that ∼ℙ(t∈𝒯_K,K^'supν(t)>τ) ≤e/e-1e^-bexp{-a(K + K^' + 2M)}. From Equation (<ref>), we set q(K,K^')=κ^*σ^2_1√(L)K + K^' + 2M/Nn where κ^*>0 depends on C_1>0. Thus, for all K,K^'∈𝒦, ℙ({h∈𝒯_K,K^'supν^2(h)>q(K,K^')}∩Ω_n,N,K_max)≤e^-b+1/e+1exp{-a(K + K^' + 2M)} and there exists constants c,C>0 such that 𝔼[(G_K(K^')-q(K,K^'))_+_Ω_n,N,K_max] ≤c(K+K^')/Nnℙ({t∈𝒯_K,K^'supν^2(t)>q(K,K^')}∩Ω_n,N,K_max) ≤C/Nnexp{-a/2(K+K^')}. Finally, there exists a real constant C>0 such that, 𝔼[(G_K(K)-q(K,K))_+_Ω_n,N,K_max]≤∑_K^'∈𝒦𝔼[(G_K(K^')-q(K,K^'))_+_Ω_n,N,K_max]≤C/Nn. We choose the penalty function pen such that for each K∈𝒦, pen(K)≥κσ^2_1√(L)K+M/Nn. For N large enough, one has σ^2_1≤√(L). Thus, we finally set pen(K)=κ(K+M)log(N)/Nn with L = log(N). Then, there exists a constant C>0 such that, 𝔼[σ^2_K-σ^2_|I^2_n,N] ≤ 34K∈𝒦inf{h∈𝒮_K+Minfh-σ^2_|I^2_n+pen(K)}+C/Nn. §.§.§ Proof of Theorem <ref>  From Equation (<ref>), we have K := K∈𝒦minγ_n,N(σ^2_K+M,L) + pen(K). Then, for all K∈𝒦 and for all h ∈𝒮_K+M,L, we have γ_n,N(σ^2_K,L) + pen(K) ≤γ_n,N(h) + pen(K). Then, for all K∈𝒦 and for all h∈𝒮_K+M,L, γ_n,N(σ^2_K,L) - γ_n,N(σ^2) ≤  γ_n,N(h) - γ_n,N(σ^2) + pen(K) - pen(K) σ^2_K,L - σ^2^2_n,N≤  h - σ^2^2_n,N + 2ν(σ^2_K,L - h) + 2μ(σ^2_K,L - h) + pen(K) - pen(K). We have for all a>0, 2𝔼[μ(σ^2_K,L-h)] ≤ 2/a𝔼σ^2_K,L-σ^2^2_n,N+2/ah∈𝒮_K+M,Linfh-σ^2^2_n +a/Nn∑_j=1^N∑_k=0^n-1𝔼[(R^j_kΔ)^2] and since ν = ν_1 + ν_2 + ν_3, according to the proof of Theorem <ref>, there exists a constant c>0 such that [ν(σ^2_K,L - h)] ≤ c[ν_1(σ^2_K,L - h)] where the for i∈{1,2,3} and for all h ∈𝒮_K+M,L,   K∈𝒦, ν_i(h) = 1/Nn∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j_kΔ, and the ζ^j_kΔ are given Then, (1-2/a)[σ^2_K,L - σ^2^2_n,N] ≤  (1+2/a)h∈𝒮_K+M,Linfh-σ^2^2_n + 2c[ν_1(σ^2_K,L - h)] + pen(K)-pen(K) + a/Nn∑_j=1^N∑_k=0^n-1[(R^j_kΔ)^2] From Equation (<ref>) and for a = 4, there exists a constant C>0 such that [σ^2_K,L - σ^2^2_n,N] ≤ 3h∈𝒮_K+M,Linfh-σ^2^2_n + 4c[ν_1(σ^2_K,L - h)] + 2(pen(K)-pen(K)) + CΔ^2. 
Since for all K∈𝒦,  pen(K) ≥ 2κ^*σ^2_1(K+M)√(2L)/(Nn), define the function q: (K,K^') ↦ q(K,K^') such that q(K,K^') = 2C^*σ^2_1(K+K^'+2M)√(2L)/Nn≥ 2σ^2_1v√(x^n,N) + σ^2_1vx^n,N where x^n,N∝(K+K^'+2M/Nn)^2   and   v = √(2L). The constant C^*>0 depends on constants κ^*>0 and c>0 of Equation (<ref>) such that 4cq(K,K^') ≤pen(K) + 2pen(K^'). Then for all K ∈𝒦 and for all h∈𝒮_K+M,L, [σ^2_K,L - σ^2^2_n,N] ≤ 3(h∈𝒮_K+M,Linfh-σ^2^2_n + pen(K)) + 4c[(ν_1(σ^2_m,L - h) - q(K,K))_+] + CΔ^2. For all K∈𝒦 and for all h∈𝒮_K+M,L such that h_∞≤√(L), we have , σ^2_K,L - h^2_n,N≤σ^2_K,L - h^2_∞≤ 2L =: v^2. Then, using Equation (<ref>) and Lemma <ref>, there exists a constant C>0 such that for all K,K^'∈𝒦 and for all h∈𝒮_K+M,L, (ν_1(σ^2_K^',L - h) ≥ q(K,K^'),  σ^2_K,L - h^2_n,N≤ v^2) ≤exp(-CNnx^n,N). Since L = log(N), then for N large enough, σ^2_1≤√(log(N)), we finally choose pen(K) = κ(K+M)log(N)/Nn where κ>0 is a new constant. Since [ν_1(σ^2_K,L - h)] ≤O(√((K_max+M)log^2(N)/Nn)) (see proof of Theorem <ref>), for all K ∈𝒦 and h ∈𝒮_K+M,L, there exists a constant c>0 such that [(ν_1(σ^2_K,L - h) - q(K,K))_+] ≤  K^'∈𝒦max{[(ν_1(σ^2_K^',L - h) - q(K,K^'))_+]} ≤   cq(K,K_max)K^'∈𝒦max{(ν_1(σ^2_K^',L - h) ≥ q(K,K^'))}. From Equation (<ref>), we obtain that [(ν_1(σ^2_K,L - h) - q(K,K))_+] ≤ cq(K,K_max)exp(-CNn)≤C/Nn since K and K_max increase with the size N of the sample paths D_N,n, and cNnq(K,K_max)exp(-CNn) → 0   as   N →∞. Then, from Equations (<ref>) and (<ref>), there exists a constant C>0 such that [σ^2_K,L - σ^2^2_n,N] ≤ 3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2^2_n + pen(K)} + C/Nn. §.§.§ Proof of Theorem <ref> The proof of Theorem <ref> is similar to the proof of Theorem <ref>. Then, from Equation (<ref>), for all h∈𝒮_K+M, σ^2_K,L-σ^2_|I^2_n,1≤ 3h - σ^2_|I^2_n,1+20h∈𝒯_K,Ksupν^2(h)+20/n∑_k=0^n-1(R^1_kΔ)^2+2(pen(K)-pen(K)), where 𝒯_K,K^' = {h ∈𝒮_K+M+𝒮_K^'+M,  h_X = 1,  h_∞≤√(L)}. Let q: 𝒦^2⟶_+ such that 160q(K,K^') ≤ 18pen(K) + 16pen(K^'). Recall that the 𝕃^2-norm ., the norm [._X] and the empirical norm ._n are equivalent on 𝕃^2(I) since the transition density is bounded on the compact interval I. Then, for all K ∈𝒦 and h ∈𝒮_K+M,L, we have [σ^2_K,L-σ^2_|I^2_n,1] ≤   3(h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)) + 20(h∈𝒯_K,Ksupν^2_1(h)-q(K,K)) + CΔ^2 where ν_1(h):=1/n∑_k=0^n-1h(X^1_kΔ)ζ^1,1_kΔ with ζ^1,1_kΔ the error term. We set for all K,K^'∈𝒦, G_K(K^'):=h∈𝒯_K,K^'supν^2_1(h). Then, there exists C>0 such that [σ^2_K,L-σ^2_|I^2_n,1] ≤   3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)} + 20∑_K^'∈𝒦[(G_K(K^')-q(K,K^'))_+] + CΔ^2. Considering the unit ball B_._X(0,1) of the approximation subspace given by B_._X(0,1) = {h∈𝒮_K+M,  h^2_X≤ 1} = {h∈𝒮_K+M,  h^2≤1/τ_0}. We obtain from the proof of Theorem <ref> with N=1 that, ∑_K^'∈𝒦[(G_K(K^')-q(K,K^'))_+]≤C/n, where C>0 is a constant, q(K,K^') ∝σ^4_1(K+K^'+2M)√(log(n))/n and pen(K) ∝(K+M)log(n)/n. Then we obtain 𝔼[σ^2_K,L-σ^2_|I^2_n,1] ≤3/τ_0K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)} + C/n. ScandJStat Appendix §.§ Calibration Fix the drift function b(x) = 1-x, the time-horizon T=1 and at time t=0,   x_0=0. Consider the following three models: Model 1: σ(x)=1 Model 2: σ(x)=0.1+0.9/√(1+x^2) Model 3: σ(x) = 1/3+sin^2(2π x)/π + 1/(π+x^2). 
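For concreteness, the following is a minimal Python sketch of these three diffusion coefficients together with an Euler–Maruyama simulation of sample paths under the drift b(x) = 1 − x, starting point x_0 = 0 and horizon T = 1; the discretization scheme, sample sizes and random seed are illustrative assumptions rather than choices prescribed here.

```python
import numpy as np

# Diffusion coefficients of the three calibration models (Models 1-3 above).
def sigma_model1(x):
    return np.ones_like(x)

def sigma_model2(x):
    return 0.1 + 0.9 / np.sqrt(1.0 + x**2)

def sigma_model3(x):
    return 1.0 / 3.0 + np.sin(2 * np.pi * x)**2 / np.pi + 1.0 / (np.pi + x**2)

def drift(x):
    return 1.0 - x  # b(x) = 1 - x

def simulate_paths(sigma, N=100, n=250, T=1.0, x0=0.0, seed=0):
    """Euler-Maruyama simulation of N i.i.d. paths observed at n+1 equispaced times."""
    rng = np.random.default_rng(seed)
    dt = T / n
    X = np.full((N, n + 1), x0)
    for k in range(n):
        dW = rng.normal(scale=np.sqrt(dt), size=N)
        X[:, k + 1] = X[:, k] + drift(X[:, k]) * dt + sigma(X[:, k]) * dW
    return X  # discretized sample paths playing the role of D_{N,n}

paths = simulate_paths(sigma_model3)
```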
The three diffusion models satisfy Assumption <ref>  and are used to calibrate the numerical constant κ of the penalty function given in Theorem <ref>  As we already know, the adaptive estimator of σ^2 on the interval [-√(log(N)), √(log(N))] necessitate a data-driven selection of an optimal dimension through the minimization of the penalized least squares contrast given in Equation (<ref>) . Since the penalty function pen(d_N)=κ (K_N+M)log^2(N)/N^2 depends on the unknown numerical constant κ>0, the goal is to select an optimal value of κ in the set 𝒱={0.1,0.5,1,2,4,5,7,10} of its possible values. To this end, we repeat 100 times the following steps: * Simulate learning samples D_N and D_N^' with N∈{50,100}, N^'=100 and n ∈{100, 250} * For each κ∈𝒱: * For each K_N∈𝒦 and from D_N, compute σ^2_d_N,L_N given in Equations (<ref>) and (<ref>). * Select the optimal dimension K_N∈𝒦 using Equation (<ref>)  * Using the learning sample D_N^', evaluate σ^2_d_N,L_N-σ^2_A^2_n,N^' where d_N=K_N+M. Then, we calculate average values of σ^2_d_N,L_N-σ^2_A^2_n,N^' for each κ∈𝒱 and obtain the following results: We finally choose 5∈𝒱 as the optimal value of κ in reference to the results of Figure <ref> . §.§ Proof of Lemma <ref>  We obtain from Comte,Genon-Catalot,Rozenholc (2007) proof of Lemma 3 that for each j∈[[1,N]], k∈[[0,n-1]] and p∈ℕ∖{0,1} 𝔼[exp(ug(X^j_kΔ)ξ^j,1_kΔ-au^2g^2(X^j_kΔ)/1-bu)|ℱ_kΔ]≤ 1 with a=e(4σ^2_1c^2)^2, b=4σ^2_1c^2eg_∞, u∈ℝ such that bu<1 and c>0 a real constant. Thus, ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ξ^j,1_kΔ≥ε, g^2_N,n≤ v^2)=𝔼(1_{∑_j=1^N∑_k=0^n-1ug(X^j_kΔ)ξ^(j,1)_kΔ≥ Nnuε}1_g^2_n,N≤ v^2) =𝔼(1_{exp(∑_j=1^N∑_k=0^n-1ug(X^j_kΔ)ξ^(j,1)_kΔ)e^-Nnuε≥ 1}_g^2_N,n≤ v^2) ≤e^-Nnuε𝔼[_g^2_n,N≤ v^2exp{∑_j=1^N∑_k=0^n-1ug(X^j_kΔ)ξ^j,1_kΔ}]. It follows that, ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ξ^j,1_kΔ≥ε, g^2_n,N≤ v^2) ≤  exp{-Nnuε+Nnau^2v^2/1-bu}. We set u=ε/ε b+2av^2. Then, we have -Nnuε+Nnav^2u^2/(1-bu)=-Nnε^2/2(ε b+2av^2) and, ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ξ^j,1_kΔ≥ε, g^2_n,N≤ v^2) ≤exp(-Nnε^2/2(ε b+av^2)) ≤exp(-CNnε^2/σ^2_1(εg_∞+4σ^2_1v^2)) where C>0 is a constant depending on c>0. §.§ Proof of Lemma <ref>  Set K_n,N = K_N since N ∝ n. Let us remind the reader of the Gram matrix Ψ_K_N given in Equation (<ref>), Ψ_K_N=[1/Nn𝐅^'_K_N𝐅_K_N]=(Ψ_K_N) where, 𝐅_K_N:= ((B_ℓ(X^j_0),…,(B_ℓ(X^j_(n-1)Δ)))_1 ≤ j ≤ N0 ≤ℓ≤ K_N-1∈ℝ^Nn× (K_N+M) The empirical counterpart Ψ is the random matrix given by Ψ_K_N of size (K_N+M) × (K_N+M) is given by Ψ_K_N:=1/Nn𝐅^'_K_N𝐅_K_N=(1/Nn∑_j=1^N∑_k=0^n-1f_ℓ(X^j_kΔ)f_ℓ^'(X^j_kΔ))_ℓ,ℓ^'∈[-M,K_N-1]. For all t=∑_ℓ=-M^K_N-1 a_ℓ B_ℓ,M, u∈ S_K_N, M one has t_n,N^2 = a^'Ψ_K_N a and t_n^2 = a^'Ψ_K_N a, with a=(a_-M,⋯,a_K_N-1)^'. Under Assumption <ref>, we follow the lines of  <cit.> Proposition 2.3 and Lemma 6.2. Then, sup _t ∈ S_K_N,M,t_n=1|t_n,N^2-t_n^2| = sup _w ∈^K_N+M,Φ_K_N^1 / 2 w_2, K_N+M=1|w^'(Ψ_K_N-Ψ_K_N) w| = sup _u ∈ℝ^K_N+M,u_2, K_N+M=1|u^'Ψ_K_N^-1 / 2(Ψ_K_N-Ψ_K_N) Ψ_K_N^-1 / 2 u| = Ψ_K_N^-1 / 2Ψ_K_NΨ_K_N^-1 / 2-Id_K_N+M_op. Therefore, Ω_n, N, K_N^c={Ψ_K_N^-1 / 2Ψ_K_NΨ_K_N^-1 / 2-Id_K_N+M_op > 1 / 2}. Since A_N = o(√(log(N))), we obtain from <cit.>, proof of Lemma 7.8, there exists a constant C>0 such that (Ω^c_n,N,K_N)≤ 2(K_N+M)exp(-C log^3/2(N)). Finally, since 2(K_N+M)exp(- (C/2) log^3/2(N)) ⟶ 0 as N ⟶ +∞, one concludes from Equation (<ref>) and for N large enough, (Ω^c_n,N,K_N)≤ Cexp(- c log^3/2(N)) where c >0 and C>0 are new constants.
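As a small numerical companion to the last proof, the sketch below assembles the design matrix F of basis evaluations from discretized observations and forms the empirical Gram matrix Ψ̂ = F'F/(Nn); the toy hat-function basis and the minimum-eigenvalue check are illustrative stand-ins for the B-spline basis (B_ℓ) and the invertibility used in the lemma, not the paper's implementation.

```python
import numpy as np

def empirical_gram(paths, basis):
    """paths: array of shape (N, n) of discretized observations X^j_{k*Delta};
    basis: list of m callables evaluated pointwise.
    Returns (F, Psi_hat) with F of shape (N*n, m) and Psi_hat = F'F / (Nn)."""
    x = paths.reshape(-1)                      # stack the N paths into one vector of length Nn
    F = np.column_stack([B(x) for B in basis])
    Psi_hat = F.T @ F / x.size                 # empirical Gram matrix
    return F, Psi_hat

# Toy hat-function basis (hypothetical stand-in for the B-spline basis on a compact interval).
basis = [lambda x, c=c: np.maximum(0.0, 1.0 - np.abs(x - c))
         for c in np.linspace(-2.0, 2.0, 8)]

rng = np.random.default_rng(1)
toy_paths = rng.normal(size=(100, 250))        # placeholder data in place of real sample paths
_, Psi_hat = empirical_gram(toy_paths, basis)
print(np.linalg.eigvalsh(Psi_hat).min())       # crude check that the Gram matrix is invertible
```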
http://arxiv.org/abs/2307.05942v1
20230712061436
Prototypical Contrastive Transfer Learning for Multimodal Language Understanding
[ "Seitaro Otsuki", "Shintaro Ishikawa", "Komei Sugiura" ]
cs.RO
[ "cs.RO", "cs.CL", "cs.CV" ]
Although domestic service robots are expected to assist individuals who require support, they cannot currently interact smoothly with people through natural language. For example, given the instruction “Bring me a bottle from the kitchen,” it is difficult for such robots to specify the bottle in an indoor environment. Most conventional models have been trained on real-world datasets that are labor-intensive to collect, and they have not fully leveraged simulation data through a transfer learning framework. In this study, we propose a novel transfer learning approach for multimodal language understanding called Prototypical Contrastive Transfer Learning (PCTL), which uses a new contrastive loss called Dual ProtoNCE. We introduce PCTL to the task of identifying target objects in domestic environments according to free-form natural language instructions. To validate PCTL, we built new real-world and simulation datasets. Our experiments demonstrated that PCTL outperformed existing methods. Specifically, PCTL achieved an accuracy of 78.1%, whereas simple fine-tuning achieved an accuracy of 73.4%. § INTRODUCTION In our aging society, the demand for daily care and support is increasing, leading to a shortage of home care workers. Domestic service robots (DSRs) are gaining popularity as a solution because of their ability to physically assist individuals. However, DSRs currently lack the capability to interact smoothly with people through natural language. To train their language comprehension models, it is desirable to use data collected in real-world environments. However, collecting and annotating such real-world data can be labor-intensive. By contrast, collecting training data using a simulator is much more cost-effective. Hence, it is advantageous to leverage simulation data through a transfer learning framework. In this study, we focus on the task of identifying the target object in a given scenario using natural language instructions for object manipulation. For instance, given the instruction “Bring me the book closest to the lamp,” and a scene in which several books are near the lamp, the robot is expected to specify the book closest to the lamp as the target object. It is not easy to understand the meaning of human instructions correctly because such instructions are often ambiguous. In the above example, the robot should identify the book closest to the lamp among all the observed books by correctly comprehending the referring expression in the given instruction. Magassouba et al. <cit.> report cases in which a robot fails to comprehend instructions containing referring expressions. In this study, we aim to transfer experience gained from simulation data to real-world data to improve performance on our target task. In the task of multimodal language understanding for object manipulation, most conventional models have been trained only on real-world data <cit.>. However, such approaches have difficulty in increasing dataset size because building real-world datasets is labor-intensive. By contrast, collecting training data using a simulator is a significantly more cost-effective approach. Consequently, we expect that leveraging simulation data within a transfer learning framework will effectively enhance model performance.
In this paper, we propose Prototypical Contrastive Transfer Learning (PCTL), which is a novel transfer learning approach for the multimodal language understanding task. PCTL performs contrastive learning between the source and target domain data using our new contrastive loss called Dual ProtoNCE. We expect that PCTL will alleviate the influence of the gap between the two domains by minimizing Dual ProtoNCE. A summary of our method is shown in Fig. <ref>. To design Dual ProtoNCE, we extended ProtoNCE <cit.> to transfer learning. ProtoNCE does not simultaneously handle source and target domains because it is not designed for transfer learning. Unlike ProtoNCE, Dual ProtoNCE is designed as a contrastive loss between the source and target domains. By defining a new contrastive loss between the source and target domains, we expect that Dual ProtoNCE will enable contrastive representation learning to bridge the gap between the two domains. Our key contributions are as follows: * We introduce transfer learning to the task of identifying target objects in domestic environments according to free-form natural language instructions. * We propose PCTL, which is a novel transfer learning approach for the multimodal language understanding task. * Within PCTL, we develop Dual ProtoNCE, which is a novel contrastive loss generalized for transfer learning. § RELATED WORK §.§ Multimodal Language Understanding Many surveys have been conducted on multimodal language understanding and vision-language pretraining (VLP) <cit.>. Uppal et al. <cit.> present an overview of the latest trends in research on multimodal language understanding. They consider task formulations, evaluation metrics, model architecture, and other topics, for example, bias and fairness, and adversarial attacks. Long et al. <cit.> describe the general task definition and architecture of recent VLP models. They also discuss vision and language data encoding methods, and the mainstream model structure. They further summarize several essential pretraining and fine-tuning strategies. Several studies have tackled referring expression comprehension (REC), which is one of the multimodal language understanding tasks <cit.>. In the REC task, models should ground a target object in an image described by a referring expression. Our task formulation is slightly more flexible than that of REC. In detail, we can address cases in which more than one or no target object exists in a given scene by formulating the task as the binary classification of whether or not the candidate object is the target. Some studies have attempted to build models for specifying the target object using natural language instructions and visual information <cit.>. Specifically, Target-Dependent UNITER<cit.> (TDU) uses UNITER-based transformer architecture to model the relationship between text and visual features. Thus, TDU is pretrainable on general-purpose datasets. Additionally, Ishikawa et al. <cit.> formulates the task of identifying the target object in a given scenario using natural language instructions for object manipulation as the Multimodal Language Understanding for Fetching Instruction (MLU-FI) task. Ishikawa et al. <cit.> proposes Moment-based Adversarial Training (MAT), which is an adversarial training approach for vision-and-language navigation (VLN) tasks. §.§ Datasets Several datasets exist for the MLU-FI task. PFN-PIC <cit.> is a dataset that consists of images and instructions about objects in the scene. 
It contains images of approximately 20 commodities in four boxes taken in the real world, with the limitation of a fixed viewpoint. WRS-unialt <cit.> is a dataset that consists of images and instructions collected using a simulator. Additionally, these images were observed from various viewpoints. In this study, we target the MLU-FI task in various real-world indoor environments with scenes observed from various viewpoints. However, to the best of our knowledge, no standard real-world dataset exists for this task. Therefore, we built new datasets by collecting data necessary for the task from datasets used in the VLN task. Several standard datasets exist for VLN <cit.>. The Room-to-Room (R2R) dataset <cit.> is a benchmark dataset for VLN in building-scale 3D environments in the real world. In the R2R navigation task, autonomous agents are required to follow navigation instructions in previously unseen indoor environments. The Remote Embodied Visual Referring Expression in Real Indoor Environments (REVERIE) dataset <cit.> is a standard dataset for the VLN task in real indoor environments. The REVERIE task consists of the subtask of navigating to a location where the target object exists, followed by another subtask of identifying the target object. These VLN datasets were built on the data provided by MatterPort3D <cit.>. MatterPort3D is a large-scale RGB-D dataset for scene understanding in various indoor environments in the real world. The dataset contains 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes with varied annotation, such as segmentation information. §.§ Contrastive Learning Contrastive learning is an approach used to learn a good data representation in a self-supervised manner. This category of learning strategies aims to align all instances in the embedding space where they are well-separated and locally smooth by leveraging the contrastive loss. Various contrastive learning frameworks have been proposed in the context of representation learning for vision <cit.>, language <cit.>, and multimodal models<cit.>. ProtoNCE <cit.> is a contrastive loss designed to implicitly encode the semantic structure of data into the embedding space by leveraging data prototypes obtained by clustering on embeddings as positive and negative features. In this study, we develop a novel contrastive loss called Dual ProtoNCE by extending ProtoNCE to transfer learning. Unlike ProtoNCE, we enable contrastive learning across different domains by leveraging data prototypes obtained from clustering for the embedded features of each domain. § PROBLEM STATEMENT §.§ Preliminaries The terminology used in this paper is defined as follows: * Target object: object referred to in the natural language instruction. * Candidate object: object that the model predicts whether it matches the target object or not. * Context objects: objects detected by an object detector. We refer to the bounding boxes of the target, candidate, and context objects as the target, candidate, and context regions, respectively. §.§ Task Formulation We focus on the MLU-FI task. In this task, given a natural language instruction, candidate region, and context regions, the model is required to perform the binary classification of whether the candidate object matches the target object or not. The MLU-FI task is characterized as follows: * Input: An instruction, candidate region, and context regions. * Output: Predicted probability p(ŷ=1). y and ŷ denote a label and predicted label, respectively. 
The condition y=1 indicates that the candidate object matches the target object. Fig. <ref> shows a typical sample of the task. In the sample, the target object is the wicker vase enclosed by the green bounding box. In this case, p(ŷ=1)=0 because the given candidate object does not match the target object. We assume that an object detector is used to extract the candidate region and context regions from the image. It is worth noting that the task is not a multi-class classification task to select a single object from all the objects in the image. The binary classification setting allows us to consider the case in which multiple or no target objects exist in the given image. For transfer learning, the target samples are collected in the real world, whereas the source samples are collected using a simulator. Every sample consists of a set of an instruction, a candidate region, and context regions. They are collected in indoor environments for the MLU-FI task. § PROPOSED METHOD In this study, we propose PCTL, which is a novel transfer learning approach for the multimodal language understanding task. Specifically, we introduce Dual ProtoNCE, which is a contrastive loss generalized for transfer learning. Although we apply our transfer learning approach to the MLU-FI task, our proposed method can be used for transfer learning on other multimodal language understanding tasks. Fig. <ref> shows an overview of our training framework. Our overall framework, referred to as PCTL, has three main modules: Encoder, Momentum Encoder, and Clustering Module. §.§ Input We define the input x to our model as follows: x = {x_inst, x_cand, X_cont} X_cont = {x_cont^(i)| i = 1, …, N_det}, where x_inst, x_cand, and x_cont^(i) denote a natural language instruction, candidate region, and i-th context region, respectively. We use Faster R-CNN to detect N_det context regions. It should be noted that we use positional encoding for x_cand, X_cont, and the text features extracted from x_inst. We use a seven-dimensional vector [x_1, y_1, x_2, y_2, x_2-x_1, y_2-y_1, (x_2-x_1) · (y_2-y_1)]^T as positional encoding for x_cand and X_cont, where (x_1, y_1) and (x_2, y_2) are the coordinates of the top-left and bottom-right corners, respectively. Additionally, we assume that x_1, x_2, y_1, and y_2 are normalized by the width and height of the input image. §.§ Encoder Encoder f_θ is parameterized by θ. The structure of f_θ follows TDU<cit.> and has three main parts: Text Embedder, Image Embedder, and Multi-Layer Transformer. The Text Embedder tokenizes x_inst using WordPiece<cit.> and converts them to text features. The Image Embedder embeds x_cand and x_cont^(i) into visual features. The Multi-Layer Transformer takes text and visual features as input and models the relationship between them. The output is the final hidden vector of the Multi-Layer Transformer corresponding to the input feature, x_cand. Hereafter, we refer to a sample of the source domain as (x_s, y_s). Similarly, (x_t, y_t) denotes that of the target domain. The input and output of f_θ are denoted as follows: u = f_θ(x_s) ∈ℝ^768 v = f_θ (x_t) ∈ℝ^768, where u and v denote the feature vector of the source sample x_s and that of the target sample x_t. They are used for k-means clustering and the loss function. Classifier g consists of a two-layer MLP and softmax function. g takes the output of f_θ and calculates the predicted probability, p(ŷ=1) = g(f_θ( x)). §.§ Momentum Encoder The Momentum Encoder f_θ' has the same structure as f_θ. 
f_θ' is parametrized by θ', which is modeled as a moving average of θ. Specifically, we update θ' as follows: θ'←γθ' + (1 - γ) θ, where γ denotes a smoothing coefficient. Similar to f_θ, we denote the output of f_θ' as follows: u' = f_θ' (x_s) ∈ℝ^768 v' = f_θ' (x_t) ∈ℝ^768. §.§ Clustering The Clustering Module performs k-means clustering on u' and v' M times at the beginning of each epoch. We define the i-th prototype as the centroid of the i-th cluster. Suppose c_i^(m) and d_i^(m) denote the prototypes of the i-th cluster that results from the i-th clustering step on u' and v', respectively. k^(m) denotes the number of clusters in the m-th clustering step. §.§ Contrastive Transfer Learning The Contrastive Transfer Learning aims to train the model to bridge the gap between the source and target domains by minimizing Dual ProtoNCE, which is a novel contrastive loss generalized for transfer learning. §.§.§ InfoNCE Given a training set A={a_1, a_2, …, a_n }, unsupervised representation learning is designed to train the encoder f that maps A to embeddings Z = {z_1, z_2, …, z_n } so that z_i = f(a_i) best describes a_i. Contrastive learning achieves this goal by minimizing the contrastive loss typified by InfoNCE<cit.>. Let z_i, z'_i, and {z'_j | j = n+1, n+2, …, n+r } be anchor, positive, and r negative embeddings, respectively. Then, InfoNCE is defined as ℒ_InfoNCE = ∑_i=1^n - logexp(z_i ·z'_i / τ)/∑_j ∈ Jexp(z_i ·z'_j / τ), where J = { i, n+1, n+2, …, n+r }. We set τ as a learnable temperature parameter following <cit.>. We initialize 1/τ to 0.07, and apply clipping to avoid scaling the logits beyond 100. §.§.§ ProtoNCE ProtoNCE<cit.> is a contrastive loss designed to push embeddings and their assigned prototypes together while pushing those and other prototypes apart. Let h_i^(m) be the i-th prototype obtained in m-th clustering w.r.t. z'. ProtoNCE is defined by the following equation: ℒ_ProtoNCE = ∑_i=1^n - (logexp(z_i ·z'_i / τ)/∑_j ∈ Jexp(z_i ·z'_j / τ). + .1/M∑_m=1^M logexp(z_i ·h_s^(m) / ϕ_s^(m) )/∑_j ∈ J'exp(z_i ·h_j^(m) / ϕ_j^(m))), J' ⊂{ 1, 2, …, k^(m)}, s ∈ J', r' = | J'∖{s}|. In the above expression, h_s denotes the positive prototype closest to z_i, whereas {h_j^(m)| j ∈ J' ∖{s}} denotes r' negative prototypes randomly selected from all the prototypes of the m-th clustering result except the positive prototype, h_s. ϕ indicates the level of instance-to-prototype concentration for each cluster. Hereafter, we call ϕ the concentration factor. Concentration factor ϕ_i for the prototype h_i of the i-th cluster C_i is defined as ϕ_i = ∑_z' ∈C_i‖z' - h_i ‖_2/|C_i |log (|C_i | + α)·τ'/∑_j=1^k^(m)ϕ_j / k^(m), where |C_i | is the number of instances z_i assigned to C_i, and α denotes a smoothing parameter used to avoid the divergence of ϕ_i for the small cluster. We normalize ϕ over all the clusters so that they have a mean of τ'. §.§.§ Dual ProtoNCE The Dual ProtoNCE loss is a novel contrastive loss expanded from ProtoNCE. It is defined as the summation of the two losses, Intra-Domain Loss ℒ_Intra and Inter-Domain Loss ℒ_Inter, as follows: ℒ_DualProtoNCE = ℒ_Intra + ℒ_Inter. We first compute ℒ_Intra by applying ProtoNCE to source samples and target samples independently: ℒ_Intra = ℒ_Target + ℒ_Source ℒ_Target = ∑_i=1^n -( logexp(^†v_i ·^†v'_i / τ)/∑_j ∈ Jexp(^†v_i ·^†v'_j / τ). + . 1/M∑_m=1^M logexp(^†v_i ·^†c_s^(m)/ϕ_s^(m))/∑_j ∈ J'exp(^†v_i ·^†c_j^(m) / ϕ_j^(m))) ℒ_Source = ∑_i=1^n -( logexp(^†u_i ·^†u'_i / τ)/∑_j ∈ Jexp(^†u_i ·^†u'_j / τ). + . 
1/M∑_m=1^M logexp(^†u_i ·^†d_s^(m)/φ_s^(m))/∑_j ∈ J'exp(^†u_i ·^†d_j^(m) / φ_j^(m))), where ^†a represents a/ ‖a‖_2. and ϕ and φ denote the concentration factors of c and d, respectively. Next, we compute ℒ_Inter to bridge the gap between the two domains. ℒ_Inter is defined as follows: ℒ_Inter = ℒ_S2T + ℒ_T2S, where ℒ_S2T is the contrastive loss defined between source domain features u and the prototypes of target domain c, and ℒ_T2S is similarly defined between v and d. They are expressed as ℒ_S2T = -1/M∑_i=1^n∑_m=1^M ( logexp(^†u_i ·^†c_s^(m) / ϕ_s^(m))/∑_j ∈ J'exp(^†u_i ·^†c_j^(m) / ϕ_j^(m))), ℒ_T2S = -1/M∑_i=1^n∑_m=1^M ( logexp(^†v_i ·^†d_s^(m) / φ_s^(m))/∑_j ∈ J'exp(^†v_i ·^†d_j^(m) / φ_j^(m))). Let ℒ_CE and λ be the cross-entropy loss and a hyperparameter, respectively. Our overall loss function ℒ is defined as, ℒ = λℒ_DualProtoNCE + ℒ_t + ℒ_s ℒ_t = .∑_i=1^n(ℒ_CE(g(f_θ(x_t^(i))), y_t^(i)) +.1/M∑_m=1^M ℒ_CE(g(c_s^(m)), y_t^(i)) ) ℒ_s = .∑_i=1^n (ℒ_CE(g(f_θ(x_s^(i))), y_s^(i)) +.1/M∑_m=1^M ℒ_CE(g(d_s^(m)), y_s^(i)) ), where c_s^(m) and d_s^(m) are the prototypes closest to the embedding f_θ(x_t^(i)) and f_θ(x_s^(i)), respectively. § EXPERIMENTS §.§ Datasets To validate our model in real-world domestic environments, we built a new dataset called REVERIE-fetch. This is because no standard real-world dataset exists for the MLU-FI task, to the best of our knowledge. To construct such a dataset, we collected images and natural language instructions based on the REVERIE dataset <cit.>, which is a standard dataset for VLN in real-world indoor environments. Note that this dataset is not directly applicable to our task. We first collected the cubemaps <cit.> of the goal points provided in the original dataset because the target object is placed at the goal point of the navigation task in the REVERIE task. Then, we extracted images in which target objects existed from the collected cubemaps. Over 1,000 annotators collected the instructions in the REVERIE dataset using Amazon Mechanical Turk. The annotators viewed an animation of the route and a randomly highlighted target object via the interactive 3D WebGL simulator. Then, they were asked to provide instructions to find and manipulate the target object. Regarding the source-domain datasets, we extracted the source samples from the ALFRED dataset <cit.> and built the ALFRED-fetch-b dataset. The ALFRED dataset is a standard dataset for VLN with object manipulation. It includes 25,743 English instructions that describe 8,055 expert demonstrations. It contains multiple sequential subgoals that constitute the given goal, an instruction for each subgoal, and images observed from the agent’s first-person views at each timestep of ground-truth behavior. The new ALFRED-fetch-b dataset consists of instructions and images from the training set of the original ALFRED dataset. They were collected in a scenario in which the subgoal was to pick up an object. In this study, we gathered the images from scenes just before the picking action. As a preprocessing step for data extracted from the REVERIE and ALFRED datasets, we extracted the candidate and context regions from the images using Faster R-CNN <cit.> to create samples. We labeled those with a GIoU <cit.> greater than 0.80 of their target and candidate regions as positive samples and those with a GIoU less than 0.45 as negative samples. The target regions were provided in the original datasets. In the test set of REVERIE-fetch, we manually removed inappropriate samples caused by misdetection. 
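To make the labeling rule above concrete, here is a small Python sketch of GIoU-based sample labeling; the (x1, y1, x2, y2) box format and the treatment of pairs falling between the two thresholds (discarded) are illustrative assumptions rather than details stated in the paper.

```python
def giou(box_a, box_b):
    """Generalized IoU of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    # Smallest enclosing box C of the two boxes.
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return inter / union - (c_area - union) / c_area

def label_candidate(target_box, candidate_box, pos_thr=0.80, neg_thr=0.45):
    """y = 1 if GIoU > 0.80, y = 0 if GIoU < 0.45; in-between pairs are dropped (assumption)."""
    g = giou(target_box, candidate_box)
    if g > pos_thr:
        return 1
    if g < neg_thr:
        return 0
    return None  # ambiguous overlap: not used as a training sample

print(label_candidate((10, 10, 50, 60), (12, 11, 49, 58)))  # -> 1 (GIoU is roughly 0.87)
```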
The REVERIE-fetch dataset consists of 10,243 image-instruction pairs with a vocabulary size of 1958 words, a total of 188,965 words, and an average sentence length of 18.4 words. The ALFRED-fetch-b dataset similarly consists of 34,286 samples of image-instruction pairs and a total of 399,964 words. Its vocabulary size and average sentence length are 1,558 and 11.7 words, respectively. The REVERIE-fetch dataset includes 8,302, 994, and 947 samples in the training, validation, and test sets, respectively. We built the training set with data collected from the training set and the seen split of the validation set of the REVERIE dataset. The validation and test sets consist of data collected from the unseen split of the validation set of the REVERIE dataset. The ALFRED-fetch-b dataset includes 27,492; 3,470; and 3,324 samples in the training, validation, and test sets, respectively. We collected these samples from the training set of the ALFRED dataset. It should be mentioned that there were no overlaps among the training, validation, and test sets for either dataset. We used the training set to train our model and the validation set to tune the hyperparameters. We evaluated our model on the test set of the REVERIE-fetch dataset. §.§ Experimental Setup Table <ref> summarizes the experimental setup. Note that #L, #H, and #A denote the number of layers, hidden size, and number of attention heads in the Multi-Layer Transformer, respectively. Our model had roughly 110 million trainable parameters. We trained our model on a GeForce RTX 3090 with 24GB of memory and an Intel Core i9-10900KF with 64GB of memory. It took 2 hours to train our model. The inference time was approximately 59.3 milliseconds for one sample. We evaluated the value of the loss ℒ_CE of the model for every epoch on the validation set of REVERIE-fetch. We used the test set accuracy of the REVERIE-fetch dataset when the value of ℒ_CE on the validation set of REVERIE-fetch was minimized. §.§ Quantitative Results We conducted experiments to compare the proposed and baseline methods. Table <ref> shows the accuracy on the REVERIE-fetch dataset. The right column shows the means and standard deviations over five trials. In this experiment, we used accuracy as the evaluation metric because the numbers of positive and negative samples were almost balanced. We set the following three baseline settings: (i) Target domain only: We trained the model only on target samples. (ii) Fine-tuning: We performed pretraining on the source samples and fine-tuned the model with the target samples. (iii) MCDDA+: We extended the Maximum Classifier Discrepancy for Domain Adaptation <cit.> (MCDDA) and applied it to a supervised transfer learning setting. We set up baselines (i) and (ii) to compare our approach with the approach in which no data from the source domain was used and the approach in which pretraining was performed on data from the source domain, respectively. As reported in <cit.>, MCDDA performs well as an unsupervised transfer learning method on image classification. Therefore, we extended MCDDA to a supervised transfer learning setting and used it as baseline (iii). We called this extended method MCDDA+. As listed in Table <ref>, our method achieved an accuracy of 78.1%, whereas the accuracy of baselines (i), (ii), and (iii) were 73.0%, 73.4%, and 74.9%, respectively. Therefore our method outperformed all the baselines (i), (ii), and (iii) by 5.1, 4.7, and 3.2 points in terms of accuracy, respectively. 
The performance difference between baseline (i) and our method was statistically significant (p-value was lower than 0.01). §.§ Ablation Studies We conducted ablation studies to investigate the contribution of M and the combination of k^(m) to performance. Specifically, we set the following conditions: (i)-a M=1, k^(1) = 33 (i)-b M=1, k^(1) = 64 (i)-c M=1, k^(1) = 128 (ii) M=3, ( k^(1), k^(2), k^(3)) = ( 64, 128, 256 ) It is worth noting that we chose k^(1) = 33 for condition (i)-a because it is the minimum value of k^(1) that allows us to select 32 negative prototypes (r'=32) and one positive prototype from a total of k^(1) prototypes. Table <ref> lists the quantitative results of the ablation study. The accuracy column shows the means and standard deviations over five trials. As described in Table <ref>, PCTL achieved accuracy of 78.1%, whereas the accuracy under conditions (i)-a, (i)-b, (i)-c, and (ii) were 75.6%, 73.7%, 77.4%, and 71.7%, respectively. This indicates that our method achieved the highest accuracy under all the ablation conditions for k^(m) and M. Moreover, this result indicates that a decrease or increase of k^(m) reduced performance. §.§ Qualitative Results Qualitative results are shown in Fig. <ref>. Fig. <ref> (a) shows a successful sample. The instruction was “Go down the stairs to the lower balcony area and turn off the lamp on the dresser.” The target object is the lamp on the dresser. Our method correctly predicted that the candidate object matched the target object, whereas the baseline (i) wrongly predicted that the candidate object did not match the target object. Fig. <ref> (b) shows another successful sample. The given instruction was “Go to the lounge on the first level where the red carpet is and move the black vase to the right of the mirror.” The target object was the vase on the right side of the mirror. Our method successfully identified the candidate object as a different object from the target object, whereas baseline (i) failed to do this. Fig. <ref> (c) shows a failed sample. The instruction was “Fluff the light silver pillow on the smaller couch in the living room.” The target object was the pillow on the left side of the couch. Our method incorrectly predicted that the candidate object matched the target object. §.§ Error Analysis and Discussion The results consisted of 371, 65, 126, and 385 samples for true positives, false positives (FP), false negatives (FN) and true negatives, respectively. Thus, there were 191 samples for the failed cases. We randomly selected 50 FP and 50 FN samples to analyze the causes of errors. Table <ref> categorizes these samples. We classified the causes of errors into the following seven types: * Comprehension Error (CE): The model failed to process the visual information and instruction correctly. This class includes cases in which the model failed to comprehend the given referring expression or correctly specify the object to which the textual information in the instruction referred. * Missing Landmark: The given image did not contain the visual information w.r.t. the referring expression. For example, the model failed to predict the chair nearest the kitchen because the instruction had the referring expression, “nearest the kitchen,” but the given image did not contain a kitchen. * Small Region: The model failed to specify the target object because the target region was smaller than 1% of the entire image area. 
* Ambiguous Instruction: The given instruction was ambiguous; hence, the model failed to specify the target object. * Annotation Error: Annotation errors occurred in the bounding boxes and/or instructions. * Severe Occlusion: The target object was severely occluded by other objects. * Multiple Objects: The candidate region enclosed multiple objects. As shown in Table <ref>, the main bottleneck was CE. We could reduce the number of cases by using a huge number of source samples or introducing pretrained models <cit.> that embed language features and visual features into the same embedding space. § CONCLUSIONS In this study, we proposed PCTL, which is a novel transfer learning approach for the multimodal language understanding task. Specifically, we developed Dual ProtoNCE, which is a new contrastive loss generalized for transfer learning. Our key contributions are as follows: * We introduced transfer learning to the MLU-FI task. * We proposed PCTL, which is a novel transfer learning approach for the multimodal language understanding task. * Within PCTL, we developed Dual ProtoNCE, which is a novel contrastive loss generalized for transfer learning. * PCTL outperformed the baselines in terms of the accuracy of the MLU-FI task on the REVERIE-fetch dataset. In future work, we plan to enrich the source-domain dataset using a simulator and apply the model trained by the PCTL framework to physical robots. § ACKNOWLEDGMENT This work was partially supported by JSPS KAKENHI Grant Number 20H04269, JST Moonshot, and NEDO. IEEEtran
http://arxiv.org/abs/2307.04571v1
20230710140334
Alleviating Matthew Effect of Offline Reinforcement Learning in Interactive Recommendation
[ "Chongming Gao", "Kexin Huang", "Jiawei Chen", "Yuan Zhang", "Biao Li", "Peng Jiang", "Shiqi Wang", "Zhong Zhang", "Xiangnan He" ]
cs.IR
[ "cs.IR" ]
Offline reinforcement learning (RL), a technique that learns a policy offline from logged data without the need to interact with online environments, has become a favorable choice in decision-making processes like interactive recommendation. Offline RL faces the value overestimation problem. To address it, existing methods employ conservatism, e.g., by constraining the learned policy to be close to the behavior policies or by penalizing rarely visited state-action pairs. However, when such offline RL is applied to recommendation, it causes a severe Matthew effect, i.e., the rich get richer and the poor get poorer, by promoting popular items or categories while suppressing the less popular ones. This is a notorious issue that needs to be addressed in practical recommender systems. In this paper, we aim to alleviate the Matthew effect in offline RL-based recommendation. Through theoretical analyses, we find that the conservatism of existing methods fails in pursuing users' long-term satisfaction. This inspires us to add a penalty term that relaxes the pessimism on states with high entropy of the logging policy and indirectly penalizes actions leading to less diverse states, which leads to the main technical contribution of this work: the Debiased model-based Offline RL (DORL) method. Experiments show that DORL not only captures user interests well but also alleviates the Matthew effect. The implementation is available via <https://github.com/chongminggao/DORL-codes>. Alleviating Matthew Effect of Offline Reinforcement Learning in Interactive Recommendation § INTRODUCTION Recommender systems, a powerful tool for helping users select preferred items from massive collections, are continuously investigated by e-commerce companies. Previously, researchers tried to mine static user interests from historical data by developing supervised learning-based recommender models. With the recent development of deep learning and the rapid growth of available data, fitting user interests is no longer the bottleneck.
A desired recommendation policy should be able to satisfy users for a long time <cit.>. Therefore, it is natural to involve Reinforcement Learning (RL) which is a type of Machine Learning concerned with how an intelligent agent can take actions to pursue a long-term goal <cit.>. In this setting, the recommendation process is formulated as a sequential decision process where the recommender interacts with users and receives users' online feedback (i.e., rewards) to optimize users' long-term engagement, rather than fitting a model on a set of samples based on supervised learning <cit.>. However, it is expensive and impractical to learn a policy from scratch with real users, which becomes the main obstacle that impedes the deployment of RL to recommender systems. One remedy is to leverage historical interaction sequences, i.e., recommendation logs, to conduct offline RL (also called batch RL) <cit.>. The objective is to learn an online policy that makes counterfactual decisions to perform better than the behavior policies induced by the offline data. However, without real-time feedback, directly employing conventional online RL algorithms in offline scenarios will result in poor performance due to the value overestimation problem in offline RL. The problem is induced when the function approximator of the agent tries to extrapolate values (e.g., Q-values in Q-learning <cit.>) for the state-action pairs that are not well-covered by logged data. More specifically, since the RL model usually maximizes the expected value or trajectory reward, it will intrinsically prefer overestimated values induced by the extrapolation error, and the error will be compounded in the bootstrapping process when estimating Q-values, which results in unstable learning and divergence <cit.>. In recommendation, this may lead to an overestimation of user preferences for items that infrequently appear in the offline logs. This is the core challenge for offline RL algorithms because of the inevitable mismatch between the offline dataset and the learned policy <cit.>. To solve this problem, offline RL algorithms incorporate conservatism into the policy design. Model-free offline RL algorithms directly incorporate conservatism by constraining the learned policy to be close to the behavior policy <cit.>, or by penalizing the learned value functions from being over-optimistic upon out-of-distribution (OOD) decisions <cit.>. Model-based offline RL algorithms learn a pessimistic model as a proxy of the environment, which results in a conservative policy <cit.>. This philosophy guarantees that offline RL models can stick to offline data without making OOD actions, which has been proven to be effective in lots of domains, such as robotic control <cit.> or games <cit.>. However, applying conservatism to recommender systems gives rise to a severe Matthew effect <cit.>, which can be summarized as “the rich get richer and the poor get poorer”. In recommendation, it means that the popular items or categories in previous data will get larger opportunities to be recommended later, whereas the unpopular ones get neglected. This is catastrophic since users desire diverse recommendations and the repetition of certain contents will incur the filter bubble issue, which in turn hurts users' satisfaction even though users favored them before <cit.>. We will show the Matthew effect in the existing offline RL-based recommender (conservative), and analyze how users' satisfaction will be hurt (effect). 
In this paper, we embrace the model-based RL paradigm. The basic idea is to learn a user model (i.e., world model) that captures users' preferences, then use it as a pseudo-environment (i.e., simulated users) to produce rewards to train a recommendation policy. Compared to model-free RL, model-based RL has several advantages in recommendation. First and foremost, model-based RL is much more sample efficient <cit.>. That it needs significantly fewer samples makes it more suitable for the highly sparse recommendation data. Second, explicitly learning the user model simplifies the problem and makes it easier to incorporate expert knowledge. For example, the user model can be implemented as any state-of-the-art recommendation model (e.g., DeepFM <cit.> in this work) or sophisticated generative adversarial frameworks <cit.>. Although some works have adopted this paradigm in their recommender systems <cit.>, they did not explicitly consider the value overestimation problem in offline RL, not to mention the Matthew effect in the solutions. To address the value overestimation problem while reducing the Matthew effect, we propose a Debiased model-based Offline RL (DORL) method for recommendation. By theoretically analyzing the mismatch between real users' long-term satisfaction and the preferences estimated from the offline data, DORL adds a penalty term that relaxes the pessimism on states with high entropy of the logging policy and indirectly penalizes actions leading to less diverse states. By introducing such a counterfactual exploration mechanism, DORL can alleviate the Matthew effect in final recommendations. Our contributions are summarized as: * We point out that conservatism in offline RL can incur the Matthew effect in recommendation. We show this phenomenon in existing methods and how it hurts user satisfaction. * After theoretically analyzing how existing methods fail in recommendation, we propose the DORL model that introduces a counterfactual exploration in offline data. * We demonstrate the effectiveness of DORL in an interactive recommendation setting, where alleviating the Matthew effect increases users' long-term experience. § RELATED WORK Here, we briefly review the Matthew effect in recommendation. We introduce the interactive recommendation and offline RL. §.§ Matthew Effect in Recommendation <cit.> confirmed the existence of the Matthew effect in YouTube's recommendation system, and <cit.> gave a quantitative analysis of the Matthew effect in collaborative filtering-based recommenders. A common way of mitigating the Matthew effect in recommendation is to take into account diversity <cit.>. Another perspective on this problem is to remove popularity bias <cit.>. We consider the Matthew effect in offline RL-based recommendation systems. we will analyze why this problem occurs and provide a novel way to address it. §.§ Interactive Recommendation The interactive recommendation is a setting where a model interacts with a user online <cit.>. The model recommends items to the user and receives the user's real-time feedback. This process is repeated until the user quits. The model will update its policy with the goal to maximize the cumulative satisfaction over the whole interaction process (instead of learning on I.I.D. samples). This setting well reflects the real-world recommendation scenarios, for example, a user will continuously watch short videos and leave feedback (e.g. click, add to favorite) until he chooses to quit. 
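As a schematic illustration of this interactive loop, the Python sketch below shows one session in which the policy recommends, receives feedback, and accumulates reward until the user quits; the `policy` and `user` interfaces are hypothetical placeholders for illustration, not components of any particular system discussed here.

```python
def run_interaction_episode(policy, user, max_turns=100):
    """One interactive session: recommend, observe feedback, update state, until the user quits."""
    state = user.reset()                          # initial interaction context
    cumulative_reward = 0.0
    for _ in range(max_turns):
        item = policy.recommend(state)            # action: one recommended item
        reward, quit_now = user.feedback(item)    # real-time feedback, e.g. click or viewing time
        cumulative_reward += reward
        state = policy.update_state(state, item, reward)
        if quit_now:                              # the session ends when the user leaves
            break
    return cumulative_reward                      # the quantity interactive recommendation evaluates
```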
Here, we emphasize the most notable difference between the interactive recommendation setting and traditional sequential recommendation settings <cit.>. settings illustrates the learning and evaluation processes in sequential and interactive recommendation settings. Sequential recommendation uses the philosophy of supervised learning, i.e., evaluating the top-k results by comparing them with a set of “correct” answers in the test set and computing metrics such as Precision, Recall, NDCG, and Hit Rate. By contrast, interactive recommendation evaluates the results by accumulating the rewards along the interaction trajectories. There is no standard answer in interactive recommendation, which is challenging <cit.>. Interactive recommendation requires offline data of high quality, which hampers the development of this field for a long time. We overcome this problem by using the recently-proposed datasets that support interactive learning and off-policy evaluation <cit.>. §.§ Offline Reinforcement Learning Recently, many offline RL models have been proposed to overcome the value overestimation problem. For model-free methods, BCQ <cit.> uses a generative model to constrain probabilities of state-action pairs the policy utilizes, thus avoiding using rarely visited data to update the value network; CQL <cit.> contains a conservative strategy to penalize the overestimated Q-values for the state-action pairs that have not appeared in the offline data; GAIL <cit.> utilizes a discriminator network to distinguish between expert policies with others for imitation learning. IQL <cit.> enables the learned policy to improve substantially over the best behavior in the data through generalization, without ever directly querying a Q-function with unseen actions. For model-based methods, MOPO <cit.> learns a pessimistic dynamics model and use it to learn a conservative estimate of the value function; COMBO <cit.> learns the value function based on both the offline dataset and data generated via model rollouts, and it suppresses the value function on OOD data generated by the model. Almost all offline RL methods have a similar philosophy: to introduce conservatism or pessimism in the learned policy <cit.>. There are efforts to conduct offline RL in recommendation <cit.>. However, few works explicitly discuss the Matthew effect in recommendation. <cit.> mentioned this effect in their experiment section, but their method is not tailored to overcome this issue. § EMPIRICAL STUDY ON MATTHEW EFFECT We conduct empirical studies in recommendation to show how the Matthew effect affects user satisfaction. When the Matthew effect is amplified, the recommender will repeatedly recommend the items with dominant categories. To illustrate the long-term effect on user experience, we explore the logs of the KuaiRand-27K video dataset[<https://kuairand.com/>] <cit.> and the LFM-1b music dataset[<http://www.cp.jku.at/datasets/LFM-1b/>] <cit.>. KuaiRand-27K contains a 23 GB log recording 27,285 users’ 322,278,385 interactions on 32,038,725 videos with 62 categories, which are collected from April 8th, 2022 to May 8th, 2022. LFM-1b contains a 40GB log recording 120,322 users’ 1,088,161,692 listening events on 32,291,134 tracks with 3,190,371 artists, which are fetched from Last.FM in the range from January 2013 to August 2014. Both the two datasets provide the timestamp of each event, hence we can assess the long-term effect of overexposure by investigating the change of Day-1 Retention. 
Day-1 Retention is defined as the probability of a user who returns to the app tomorrow after finishing today's viewing/listening. This metric is more convincing than real-time signals (e.g., click, adding to favorite) in regard to reflecting the long-term effect on user satisfaction. We consider the item-level and category/artist-level repeat rates as the metrics to measure the Matthew effect. The item-level (or category-level) repeat rate of a user viewing videos on a certain day is defined as: the number of viewing events/the number of unique videos (or unique categories). For example, if a user views 5 unique videos (which belong to 3 unique categories) 20 times in a day, then the item-level repeat rate is 20/5=4.0 and the category-level repeat rate is 20/3=6.67. The item-level and artist-level repeat rates for music listening are defined in a similar way. Note that in KuaiRand, video-level overexposure rarely appears because of the rule of video recommendation, i.e., the same video will not be recommended twice. In this definition, user activity can become a confounder. For instance, a user who views 100 videos a day can be more active than a user who views only 10 videos a day, and thus is more likely to revisit the App the next day. Therefore, we control for this confounder by splitting the users w.r.t. the number of their daily viewing events. Groups with different user activity levels are marked by different colors and marker types. The results are shown in effect. In short, Day-1 Retention reduces when the repeat rate increases in each group with a user activity level. This phenomenon can be observed for both the item-level and category/artist-level repeat rates within the video dataset and music dataset. The results show that users' satisfaction will be hurt when the Matthew effect becomes severer. § PRELIMINARY ON MODEL-BASED RL We introduce the basics of RL and model-based offline RL. §.§ Basics of Reinforcement Learning Reinforcement learning (RL) is the science of decision making. We usually formulate the problem as a Markov decision process (MDP): M = (𝒮,𝒜,T,r,γ), where 𝒮 and 𝒜 represent the state space and action space, T(s,a,s')= P(s_t+1=s'|s_t=s,a_t=a) is the transition probability from (s,a) to s', r(s,a) is the reward of taking action a at state s, and γ is the discount factor. Accordingly, the offline MDP can be denoted as M=(𝒮,𝒜,T,r̂, γ), where T and r̂ are the transition probability and reward function predicted by an offline model. In offline RL, the policy is trained on an offline dataset 𝒟 which was collected by a behavior policy π_β running in online environment M. By modifying the offline MDP M to be conservative for overcoming the value overestimation issue, we will derive a modified MDP M=(𝒮,𝒜,T,r, γ), where the modified reward r is modified from the predicted reward r̂. Since RL considers long-term utility, we can define the value function as V_M^π(s) = 𝔼_π,T[∑_t=0^∞γ^t r(s_t,a_t)|s_0=s], denoting the cumulative reward gain by policy π after state s in MDP M. Let P_T,t^π be the probability of the agent's being in state s at time t, if the agent uses policy π and transits with T. Defining ρ_ T^π(s,a) = (1-γ)π(a|s)∑_t=0^∞γ^t P_ T,t^π(s) as the discounted distribution of state-action pair (s,a) for policy π over T, we can derive another form of the policy's accumulated reward as η_ M(π) = 𝔼_(s,a)∼ρ_ T^π[r(s,a)]. 
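To connect the two forms of the objective, the following Python sketch Monte-Carlo estimates the discounted return from logged trajectories; the trajectory format is an illustrative assumption, and the (1 − γ) factor reflects the normalization in the definition of ρ_T^π above.

```python
import numpy as np

def discounted_return(rewards, gamma):
    """sum_t gamma^t * r_t for one trajectory of rewards."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

def estimate_eta(trajectories, gamma):
    """Monte-Carlo estimate of eta_M(pi); the (1 - gamma) factor matches the normalization
    used in rho_T^pi, so this approximates E_{(s,a) ~ rho_T^pi}[r(s,a)]."""
    returns = [discounted_return(tr, gamma) for tr in trajectories]
    return (1.0 - gamma) * np.mean(returns)

# Toy usage with made-up reward sequences from three interaction sessions.
print(estimate_eta([[1.0, 0.5, 0.0], [0.2, 0.2], [1.0]], gamma=0.9))
```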
§.§ Model-Based Offline RL Framework In this paper, we follow a state-of-the-art general Model-based Offline Policy Optimization framework, MOPO[We consider an RL framework to be general if it doesn't require domain-related prior knowledge or any specific algorithms. Satisfying our demands, MOPO is shown to be one of the best-performing model-based offline RL frameworks.] <cit.>. The basic idea is to learn a dynamics model T̂ which captures the state transition (s,a) → s' of the environment and estimates the reward r̂(s,a) given state s and action a. For addressing the distributional shift problem, where values V_M^π(s) are usually over-optimistically estimated, MOPO introduces a penalty function p(s,a) on the estimated reward r̂(s,a) as: r̃(s,a) = r̂(s,a)-λ p(s,a). On the modified reward r̃(s,a), the offline MDP M̂ will be modified to a conservative MDP M̃ = (𝒮,𝒜,T̂,r̃,γ). MOPO learns its policy in this MDP M̃. By defining ϵ_p(π) = 𝔼_(s,a)∼ρ_T̂^π[p(s,a)], MOPO has the following theoretical guarantee: If the penalizer p(s,a) meets: λ𝔼_(s,a)∼ρ_T̂^π[p(s,a)] ≥ |η_M̂(π) - η_M(π)|, then the best offline policy π̂ trained in M̃ satisfies: η_M(π̂)≥sup_π{η_M(π) - 2λϵ_p(π)}. The proof can be found in <cit.>. penalty_reg requires the penalty to be a measurement of the offline and online mismatch, thus ϵ_p(π) can be interpreted as how much policy π will be affected by the offline extrapolation error. lower_bound is considered to be a theoretical guarantee for reward penalties in model-based offline RL. For example, with π^* denoting the optimal policy in the online MDP M, we have η_M(π̂)≥η_M(π^*) - 2λϵ_p(π^*). Remark: Through learning π̂ offline in the conservative MDP M̃ with mopo_penalty, we can obtain a result that does not deviate too much from that of learning the optimal policy π^* online in the ground-truth MDP M. The deviation will not exceed 2λϵ_p(π^*). However, there has been no sufficient analysis on how to properly choose the penalty term p(s,a). Next, we introduce how to adapt this framework to recommendation and reformulate p(s,a) according to the characteristics of the recommendation scenario. § METHOD We implement the model-based offline RL framework in recommendation and redesign the penalty to alleviate the accompanying Matthew effect. Then, we introduce the proposed DORL model. §.§ Model-based RL in Recommendation In recommendation, we cannot directly obtain a state from the environment; we have to model the state by capturing the interaction context and the user's mood. Usually, a state s∈𝒮 is defined as the vector extracted from the user's previously interacted items and corresponding feedback. After the system recommends an item as action a∈𝒜, the user will give feedback as a scalar reward signal r ∼ R(s,a). For instance, r∈{0,1} indicates whether the user clicks the item, or r∈ℝ^+ reflects a user's viewing time for a video. The state transition function (i.e., state encoder) T can be written as s' = f_ω(s,a,r), where f_ω(s,a,r) autoregressively outputs the next state s' and can be implemented as any sequential model. When learning offline, we cannot obtain users' reactions to the items that are not covered by the offline dataset. We address this problem by using a user model (or reward model) R̂(s,a) to learn users' static interests. This model can be implemented as any state-of-the-art recommender such as DeepFM <cit.>. The user model will generate an estimated reward r̂=R̂(s,a) representing a user's intrinsic interest in an item. The transition function T̂ will then be written as s' = f_ω(s,a,r̂).
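To make this adaptation concrete, the following is a minimal, illustrative sketch of one simulated interaction step with a learned user model; the callables user_model, state_tracker, and penalty_fn are hypothetical stand-ins (e.g., a trained DeepFM, the encoder f_ω, and a penalty term), not the actual implementation.

```python
def simulated_step(state, action, user_model, state_tracker, penalty_fn, lam=0.05):
    """One simulated interaction step with a learned user model (illustrative sketch).

    user_model(state, action)            -> estimated reward r_hat (e.g., a trained DeepFM)
    penalty_fn(state, action)            -> penalty p(s, a)
    state_tracker(state, action, reward) -> next state s', i.e., the encoder f_omega
    """
    r_hat = user_model(state, action)                    # user's intrinsic interest, predicted offline
    r_tilde = r_hat - lam * penalty_fn(state, action)    # conservative reward r~ = r^ - lambda * p(s, a)
    next_state = state_tracker(state, action, r_hat)     # autoregressive state update s' = f_omega(s, a, r^)
    return next_state, r_tilde

# Minimal toy usage with stand-in callables.
step = simulated_step(
    state=[0.1, 0.2], action=7,
    user_model=lambda s, a: 0.8,
    state_tracker=lambda s, a, r: s + [r],
    penalty_fn=lambda s, a: 0.3,
)
print(step)  # ([0.1, 0.2, 0.8], ~0.785)
```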
The offline MDP is defined as M̂ = (𝒮,𝒜,T̂,r̂,γ). Since the estimated reward r̂ can deviate from the ground-truth value r, we follow MOPO to use mopo_penalty to get the modified reward r̃(s,a)= r̂(s,a)-λ p(s,a). Afterward, we can train the recommendation policy on the modified reward r̃ by treating the user model as simulated users. Now, the problem turns into designing the penalty term p(s,a). To begin with, we extend the mismatch function in <cit.>. We use R̂ and R as shorthand for R̂(s,a) and R(s,a), respectively. Define the mismatch function G_M̂^π(s,a) of a policy π on the ground-truth MDP M and the estimated MDP M̂ as: G_M̂^π(s,a) ≜ 𝔼_ŝ'∼T̂,r̂∼R̂[γ V_M^π(ŝ') + r̂] - 𝔼_s'∼T,r∼R[γ V_M^π(s')+r] = 𝔼_r̂∼R̂[γ V_M^π(f_ω(s,a,r̂)) + r̂] - 𝔼_r∼R[γ V_M^π(f_ω(s,a,r))+r]. It satisfies: 𝔼_(s,a)∼ρ_T̂^π[G_M̂^π(s,a)] = η_M̂(π) - η_M(π). The mismatch function G_M̂^π(s,a) extends the definition presented in <cit.>, with the key distinction being the separation of the state transition into the state s and the reward r. This is due to the fact that, in the context of recommendation systems, the stochastic nature of state probabilities arises solely from the randomness associated with their reward signals r. Consequently, when integrating along the state transition, it is essential to explicitly express the impact of the reward r. The proof of G_p can be adapted from the proof procedure of the telescoping lemma in <cit.>. Following the philosophy of conservatism, we add a penalty term p(s,a) according to the mismatch function G_M̂^π(s,a) by assuming: λ p(s,a) ≥ |G_M̂^π(s,a)|. By combining G_satisfy and G_penalty, the condition in penalty_reg is met, which provides the theoretical guarantee for the recommendation policy π learned in the conservative MDP M̃. Remark: G_p provides a perspective for designing the penalty term p(s,a) that satisfies the theoretical guarantee in guarantee. According to G_penalty, the problem of defining p(s,a) turns into analyzing G_M̂^π(s,a), which will be described in remedy. The original MOPO model uses the uncertainty of the dynamics model P_U as the penalty, i.e., p(s,a)=P_U. However, penalizing uncertainty will encourage the model to pay more attention to items that are frequently recommended while neglecting the rarely recommended ones. This will accelerate the Matthew effect. §.§ Matthew Effect To quantify the Matthew effect in the results of recommendation, we use a metric: majority category domination (MCD), which is defined as the percentage of the recommended items that are labeled as the dominated categories in the training data[The dominated categories are the most popular categories that cover 80% of the items in the training set. There are 13 (out of 46) dominated categories in KuaiRand, and 12 (out of 31) dominated categories in KuaiRec.]. We show the effect of MOPO's conservatism by varying the coefficient λ of mopo_penalty. The results on the KuaiRec dataset are shown in conservative. With increasing λ, the model receives a higher single-round reward (the blue line), which means the policy captures users' interest more accurately. On the other hand, MCD also increases (the red bars), which means the recommended items tend to come from the most popular categories (those covering 80% of the items) in the training data. I.e., the more conservative the policy is, the stronger the Matthew effect becomes. When the results narrow down to these categories, users' satisfaction will be hurt and the interaction process terminates early, which results in low cumulative rewards over the interaction sequence.
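As a concrete illustration of the metric above, here is a minimal Python sketch of how MCD can be computed from category labels; the item/category lists are toy placeholders, and real items may carry multiple categories, which this sketch ignores.

```python
from collections import Counter

def majority_categories(train_item_categories, coverage=0.8):
    """Categories that together cover `coverage` of the items in the training set."""
    counts = Counter(train_item_categories)
    total = sum(counts.values())
    covered, majors = 0, set()
    for cat, cnt in counts.most_common():
        majors.add(cat)
        covered += cnt
        if covered / total >= coverage:
            break
    return majors

def mcd(recommended_categories, majors):
    """Majority Category Domination: share of recommended items from the dominated categories."""
    if not recommended_categories:
        return 0.0
    hits = sum(1 for c in recommended_categories if c in majors)
    return hits / len(recommended_categories)

majors = majority_categories(["pop", "pop", "rock", "pop", "jazz", "rock"], coverage=0.8)
print(mcd(["pop", "rock", "jazz", "pop"], majors))  # 0.75
```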
More details will be described in exp. §.§ Solution: Re-design the Penalty To address this issue, we consider a more sophisticated manner to design the penalty term p(s,a) in mopo_penalty. We dissect the mismatch function in G_define as: |G_M̂^π(s,a)| ≤ γ|𝔼_r̂∼R̂[V_M^π(f_ω(s,a,r̂))] - 𝔼_r∼R[V_M^π(f_ω(s,a,r))]| + |𝔼_r̂∼R̂[r̂] - 𝔼_r∼R[r]| ≜ γ d_V(R̂,R) + d_1(R̂,R), where d_1(R̂,R) represents the deviation of the estimated reward R̂ from the true reward R, and d_V(R̂,R) measures the difference between the value functions V_M^π of the next state calculated offline (via R̂) and online (via R). Both of them can be seen as specific metrics measuring the distance between R̂ and R. While d_1(R̂,R) is straightforward, d_V(R̂,R) considers the long-term effect on offline learning and is hard to estimate. Based on the aforementioned analysis, a pessimistic reward model in MOPO will amplify the Matthew effect, which reduces long-term satisfaction and thus results in a large d_V(R̂,R). An intuitive way to solve this dilemma is to introduce exploration on states where the logging policy has high entropy. Without access to online user feedback, we can only conduct the counterfactual exploration in the offline data. ∙ An illustrative example. To illustrate the idea, we give an example in distribution. The goal is to estimate a user's preferences given the logged data induced by a behavior policy. In reality, the distribution of the logged data is dependent on the policies of previous recommenders. For convenience, we use a Gaussian distribution as the behavior policy in distribution(a). Since previous recommenders cannot precisely reflect users' ground-truth preferences, there is always a deviation between the behavior policy (the red line) and users' ground-truth preferences (the blue line). Besides, as the items are not equally exposed in the behavior policy, there will be high uncertainty in estimating the rarely appearing items (as shown in the filled area). Offline RL methods emphasize conservatism in estimation and penalize the uncertain samples, which results in a distribution that narrows the preferences down to these dominated items (the green line). This is how the Matthew effect appears. By contrast, using a uniform distribution to collect data can prevent biases and reduce uncertainty (distribution(b)). Ideally, a policy learned on sufficient data collected uniformly can capture unbiased user preferences and produce recommendations without the Matthew effect, i.e., γ d_V(R̂,R) + d_1(R̂,R) can reduce to 0. Therefore, an intuitive way to design the penalty term p(s,a) is to add a term measuring the discrepancy between the uniform distribution π_u(·|s) and the behavior policy π_β(·|s) given state s. We use the Kullback–Leibler divergence D_KL(π_β(·|s)||π_u(·|s)) to measure the distance, which can be written as: P_E := -D_KL(π_β(·|s)||π_u(·|s)) = -𝔼_a∼π_β(·|s)[log(π_β(a|s))-log(π_u(a|s))] = ℋ(π_β(·|s)) - log(|𝒜|), where |𝒜| is a constant representing the number of items. Hence, the term P_E depends on the entropy of the behavior policy π_β(·|s) given state s. The modified penalty combines P_U and P_E, and the modified reward model is formulated as: r̃(s,a) = r̂(s,a) - λ_1 P_U + λ_2 P_E. Besides penalizing high-uncertainty areas, the new model also penalizes behavior policies with low entropy at state s. Intuitively, if the behavior policy π_β(·|s) recommended only a few items at state s, then the true user preferences at state s may remain unrevealed.
Under such circumstances, the entropy term ℋ(π_β(·|s)) is low, hence we penalize the estimated reward r̂(s,a) heavily through P_E. The entropy penalizer does not depend on the chosen action but only on the state in which the agent is. This means the effect of this penalty is indirect: through long-term optimization, it penalizes actions that lead to less diverse states. Hence, the learned policy achieves counterfactual exploration in the offline data, which in turn counteracts the Matthew effect in offline RL. §.§ The DORL Method Now, we provide a practical implementation motivated by the analysis above. The proposed model is named the Debiased model-based Offline RL (DORL) model, whose framework is illustrated in DORL. ∙ Penalty on entropy. We introduce how to compute P_E in the entropy penalizer module. entropy shows a trajectory of the interaction process, where the current action at time t is to recommend item 8. We define P_E in final_r to be the summation of k-order entropies (k=1,2,⋯). For example, when k=3, we search all users' recommendation logs to collect all continuous sub-sequences with pattern [{3,7,8},?], where “?” can match any item, and {3,7,8} is a sorted set that covers all of its permutations, e.g., [8,3,7] or [7,3,8]. On these sub-sequences, we can count the frequencies of action “?” to estimate the entropy of the behavior policy π_β given the previous three recommended items. Without losing generality, we normalize the entropy to the range (0,1]. ∙ Penalty on uncertainty. We penalize both the epistemic uncertainty of the reward model and the aleatoric uncertainty of the offline data. We use the variance of K ensemble reward models {R_θ_k, k =1,2,⋯,K} to capture the epistemic uncertainty, which is commonly used to capture the uncertainty of the model in offline RL <cit.>. Aleatoric uncertainty is data-dependent <cit.>. By formulating the user model as a Gaussian probabilistic model (GPM), we can directly predict the variance of the reward and take this predicted variance as the aleatoric uncertainty. For the k-th model R_θ_k, the loss function is: ℒ(θ_k) = 1/N∑_i=1^N[ ‖y_i-f_θ_k(x_i)‖^2/(2σ^2_θ_k(x_i)) + 1/2 logσ^2_θ_k(x_i) ], where N is the number of samples, and f_θ_k(x_i) and σ^2_θ_k(x_i) are the predicted mean and variance of sample x_i, respectively. By combining the epistemic uncertainty and aleatoric uncertainty, we formulate the uncertainty penalizer P_U in final_r as: P_U := max_k∈{1,2,⋯,K}σ^2_θ_k. We define the fitted reward r̂ as the mean of the K ensemble models: r̂(s,a)=1/K∑_kf_θ_k(s,a). The final modified reward r̃(s,a) will be computed by final_r. The framework of the proposed DORL model is illustrated in DORL. Without losing generality, we use DeepFM <cit.> as the backbone for the user model and implement the actor-critic method <cit.> as the RL policy. The state tracker f_ω(s,a,r) is a network modeling the transition function T(s,a,s')= P(s_t+1=s'|s_t=s,a_t=a). It can be implemented as any sequential model such as recurrent neural network (RNN)-based models <cit.>, convolutional models <cit.>, and Transformer-based methods <cit.>. <cit.> investigated the performances of different state encoders in RL-based recommenders. We use a naive average layer as the state tracker since it requires the least training time but nonetheless outperforms many complex encoders <cit.>. It can be written as: s⃗_t+1 = 1/N∑^t_n=t-N+1[e⃗_a_n⊕r̃_n], where ⊕ is the concatenation symbol, s⃗_t+1 is the vector representing the state at time t+1, e⃗_a_n is the embedding vector of action a_n.
r̃_n is the reward value calculated by final_r and we normalize it to range (0,1] here. N is the window size reflecting how many previous item-reward pairs are calculated. § EXPERIMENTS We introduce how we evaluate the proposed DORL model in the interactive recommendation setting. We want to investigate the following questions: * (RQ1) How does DORL perform compared to state-of-the-art offline RL methods in the interactive recommendation setting? * (RQ2) To what extent can DORL alleviate the Matthew effect and pursue long-term user experience? * (RQ3) How does DORL perform in different environments with different user tolerance to repeated content? §.§ Experimental Setup We introduce the experimental settings with regard to environments and state-of-the-art offline RL methods. §.§.§ Recommendation Environments As mentioned in IRS, in the interactive recommendation setting, we are interested in users' long-term satisfaction rather than users' fitting capabilities <cit.>. Traditional recommendation datasets are too sparse or lack necessary information (e.g., timestamps, explicit feedback, item categories) to evaluate the interactive recommender systems. We create two recommendation environments on two recently-proposed datasets, KuaiRec and KuaiRand-Pure, which contain high-quality logs. KuaiRec <cit.> is a video dataset that contains a fully-observed user-item interaction matrix where 1,411 users have viewed all 3,327 videos and left feedback. By taking the fully-observed matrix as users' true interest, we can give a reward for the model's every recommendation (without missing entries like other datasets). We use the normalized viewing time (i.e., the ratio of viewing time to the video length) as the online reward. KuaiRand-Pure <cit.> is a video dataset that inserted 1,186,059 random recommendations involving 7,583 items into 27,285 users' standard recommendation streams. These randomly exposed data can reflect users' unbiased preferences, from which we can complete the matrix to emulate the fully-observed matrix in KuaiRec. This is an effective way to evaluate RL-based recommendation <cit.>. We use the “is_click” signal to indicate users' ground-truth interest, i.e., as the online reward. In mattew_hurt, we have shown that users' experience can be hurt by the Matthew effect. To let the environments reflect this phenomenon, we follow <cit.> to introduce a quit mechanism: when the model recommends more than M items with the same category in previous N rounds, the interaction terminates. Note that the same item will not be recommended twice in an interaction sequence. Since we evaluate the model via the cumulative rewards ∑_tr_t over the interaction trajectory, quitting early (due to the Matthew effect) will lead to inferior performances. For now, the two environments can play the same role as the online users. Therefore, we can evaluate the model as the process shown in settings (b). The evaluation environments are used for assessing models and they are not available in the training stage. For the training purpose, both KuaiRec and KuaiRand provide additional recommendation logs. The statistics of the training data are illustrated in data. §.§.§ Baselines We select two naive bandit-based algorithms, four model-free offline RL methods, and four model-based offline RL methods (including ours) in evaluation. We use the DeepFM model <cit.> as the backbone in the two bandit methods and four model-based methods. 
These baselines are: * ϵ-greedy, a naive bandit-based policy that outputs a random result with probability ϵ or outputs the deterministic results of DeepFM with probability 1-ϵ. * UCB, a naive bandit-based policy that maintains an upper confidence bound for each item and follows the principle of optimism in the face of uncertainty. * SQN, or Self-Supervised Q-learning <cit.>, contains two output layers (heads): one for the cross-entropy loss and the other for RL. We use the RL head to generate final recommendations. * BCQ, or Batch-Constrained deep Q-learning <cit.>, adapts the conventional deep Q-learning to batch RL. We use the discrete-action version <cit.>, whose core idea is to reject these uncertain data and update the policy using only the data of high confidence. * CQL, or Conservative Q-Learning <cit.>, is a model-free RL method that adds a Q-value regularizer on top of an actor-critic policy. * CRR, or Critic Regularized Regression <cit.>, is a model-free RL method that learns the policy by avoiding OOD actions. * MBPO, a vanilla model-based policy optimization method that uses DeepFM as the user model to train an actor-critic policy. * IPS <cit.> is a well-known statistical technique adjusting the target distribution by re-weighting each sample in the collected data. We implement IPS in a DeepFM-based user model, then learn the policy using an actor-critic method. * MOPO, a model-based offline policy optimization method <cit.> that penalizes the uncertainty of the DeepFM-based user model and then learns an actor-critic policy. §.§ Overall Performance Comparison (RQ1) We evaluate all methods in two environments. For the four model-based RL methods (MBPO, IPS, MOPO, and our DORL), we use the same DeepFM model as the user model and fixed its parameters to make sure the difference comes only from the policies. We use the grid search technique on the key parameters to tune all methods in the two environments. For DORL, we search the combination of two key parameters λ_1 and λ_2 in final_r. Both of them are searched in {0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0, 5, 10, 50, 100}. We report the results with λ_1=0.01,λ_2=0.05 for KuaiRand and λ_1=0.05,λ_2=5 for KuaiRec. All methods in two environments are evaluated with the quit parameters: M=0, N=4, and the maximum round is set to 30. The results are the average metrics of 100 interaction trajectories. The results are shown in main_result, where all policies are learned with 200 epochs. After learning in each epoch, we will evaluate all methods with 100 episodes (i.e., interaction trajectories) in the two interactive environments. The first row shows the cumulative reward, which directly reflects the long-term satisfaction in our interactive recommendation setting. The second row and third rows dissect the cumulative reward into two parts: the length of the interaction trajectory and the single-round reward, respectively. For a better comparison, we average the results in 200 epochs and show them in results. Besides the three metrics, we also report the majority category domination (MCD) in results. From the results, we observe that the four model-based RL methods (MBPO, IPS, MOPO, and DORL) significantly outperform the four model-free RL methods (SQN, CRR, CQL, and BCQ) with respect to trajectory length and cumulative reward. This is because model-based RL is much more sample efficient than model-free RL. In recommendation, the training data is highly sparse. 
Model-free RL learns directly from the recommendation logs that we have split into different sequences according to the exit rule described above. However, it is extremely difficult to capture the exit mechanism from the sparse logs. By contrast, model-based RL can leverage the user model to construct as many interaction sequences as possible during training, which guarantees that the policy can distill useful knowledge from the limited offline samples. That is why we embrace model-based RL in recommendation. For model-based RL methods, MOPO shows an obvious improvement compared to the vanilla method MBPO in terms of single-round reward. This is because MBPO does not consider the OOD actions in the offline data, which incur extrapolation errors in the policy. MOPO introduces the uncertainty penalizer to make the policy pay more attention to the samples of high confidence, which in turn makes the policy capture users' interest more precisely. However, MOPO sacrifices many unpopular items because they appear less frequently and are considered uncertain samples. Therefore, the average length decreases, which in turn reduces the cumulative reward. Our method DORL overcomes this problem. From main_result, we observe that DORL attains the maximal average cumulative reward after several epochs in both KuaiRec and KuaiRand because it reaches the largest interaction length. Compared to MOPO and MBPO, DORL sacrifices a little of the single-round reward due to its counterfactual exploration philosophy; meanwhile, this greatly improves the diversity and enlarges the length of interactions. Therefore, it achieves the goal of maximizing users' long-term experiences. After enhancing the vanilla MBPO with the IPS technique, the learned user model adjusts the distribution of the training data by re-weighting all items. IPS obtains a satisfactory performance in KuaiRec but receives abysmal performances in KuaiRand. This is because its well-known high-variance issue can incur estimation errors. Compared to IPS's hard debiasing mechanism, DORL's soft debiasing method is more suitable for model-based RL in recommendation. For model-free RL methods, as discussed above, the four model-free methods fail on the two datasets due to limited offline samples. Though they can capture users' interest by returning a high single-round reward (e.g., SQN and CRR in KuaiRec, and BCQ in KuaiRand), they cannot maintain a long interaction trajectory. For example, BCQ updates its policy only on those samples with high confidence, which results in a severe Matthew effect in the recommendation results (reflected by high MCD and short length). SQN's performance oscillates with the largest magnitude since its network is updated by two heads. The RL head serves as a regularizer to the self-supervised head. When the objectives of the two heads conflict with each other, the performance becomes unstable. Therefore, these methods are not suitable for recommendation where offline data are sparse. As for the naive bandit methods, UCB and ϵ-greedy, they are designed to explore and exploit the optimal actions for independent and identically distributed (IID) data. They do not possess the capability to optimize long-term rewards at all. Therefore, they are inclined to recommend the same items when the model finishes exploring the offline data, which leads to high MCD and short interactions. These naive policies are not suitable for pursuing long-term user experiences in recommendation.
§.§ Results on alleviating Matthew effect (RQ2) We have shown in conservative that penalizing uncertainty will result in the Matthew effect in recommendation. More specifically, increasing λ_1 can push the recommended items toward the most dominant ones in the training set, which results in a high MCD value. Here, we show how the introduced “counterfactual” exploration mechanism helps alleviate this effect. We conduct the experiments for different combinations of (λ_1, λ_2) as described above, then we average the results over λ_1 to show the influence of λ_2 alone. The results are shown in res_entropy. Obviously, increasing λ_2 can lengthen the interaction process and reduce majority category domination. I.e., when we penalize the entropy of the behavior policy heavily, (1) the recommender does not repeat the items with the same categories; (2) the recommended results will be diverse instead of focusing on dominated items. The results show the effectiveness of penalizing entropy in DORL in alleviating the Matthew effect. §.§ Results with different environments (RQ3) To validate that DORL can work robustly in different environment settings, we vary the window size N in the exit mechanism and fix M=10 during the evaluation. The results are shown in leave. We only visualize the most important metric: the cumulative reward. When N is small (N=1), other model-based methods can surpass our DORL. When N gets larger (N>3), users' tolerance for similar content (i.e., items with the same category) becomes lower, and the interaction process becomes easier to terminate. Under such circumstances, DORL outperforms all other policies, which demonstrates the robustness of DORL in different environments. § CONCLUSION We point out that conservatism in offline RL can incur the Matthew effect in recommendation. We conduct studies to show that the Matthew effect hurts users' long-term experiences in both the music and video datasets. Through theoretical analysis of the model-based RL framework, we show that the reason for amplifying the Matthew effect is the philosophy of suppressing uncertain samples. This inspires us to add a penalty term to make the policy emphasize the data induced by the behavior policies with high entropy. This will reintroduce the exploration mechanism that conservatism has suppressed, which alleviates the Matthew effect. In the future, when fitting user interests is not a bottleneck anymore, researchers could consider higher-level goals, such as pursuing users' long-term satisfaction <cit.> or optimizing social utility <cit.>. With the increase in high-quality offline data, we believe that offline RL can be better adapted to recommender systems to achieve these goals. During this process, many interesting yet challenging issues (such as the Matthew effect in this work) will be raised. After addressing these issues, we can create more intelligent recommender systems that benefit society. § ACKNOWLEDGEMENTS This work is supported by the National Key Research and Development Program of China (2021YFF0901603), the National Natural Science Foundation of China (61972372, U19A2079, 62121002), and the CCCD Key Lab of Ministry of Culture and Tourism.
http://arxiv.org/abs/2307.11004v1
20230712105625
Efficient and Joint Hyperparameter and Architecture Search for Collaborative Filtering
[ "Yan Wen", "Chen Gao", "Lingling Yi", "Liwei Qiu", "Yaqing Wang", "Yong Li" ]
cs.IR
[ "cs.IR", "cs.LG" ]
0009-0002-6425-5056 Department of Electronic Engineering Beijing National Research Center for Information Science and Technology Tsinghua University Beijing China [email protected] 0000-0002-7561-5646 Corresponding author. Department of Electronic Engineering Beijing National Research Center for Information Science and Technology Tsinghua University Beijing China [email protected] 0000-0001-8809-7676 Tencent Inc. Shenzhen China [email protected] 0009-0006-9742-6762 Tencent Inc. Shenzhen China [email protected] 0000-0003-1457-1114 Baidu Inc. Beijing China [email protected] 0000-0001-5617-1659 Department of Electronic Engineering Beijing National Research Center for Information Science and Technology Tsinghua University Beijing China [email protected] Automated Machine Learning (AutoML) techniques have recently been introduced to design Collaborative Filtering (CF) models in a data-specific manner. However, existing works either search architectures or hyperparameters while ignoring the fact that they are intrinsically related and should be considered together. This motivates us to consider a joint hyperparameter and architecture search method to design CF models. However, this is not easy because of the large search space and high evaluation cost. To solve these challenges, we reduce the space by screening hyperparameter choices through a comprehensive understanding of individual hyperparameters. Next, we propose a two-stage search algorithm to find proper configurations from the reduced space. In the first stage, we leverage knowledge from subsampled datasets to reduce evaluation costs; in the second stage, we efficiently fine-tune top candidate models on the whole dataset. Extensive experiments on real-world datasets show that better performance can be achieved compared with both hand-designed and previously searched models. Besides, ablation and case studies demonstrate the effectiveness of our search framework. Efficient and Joint Hyperparameter and Architecture Search for Collaborative Filtering Yong Li ====================================================================================== § INTRODUCTION Collaborative Filtering (CF) is the most widely used approach for Recommender Systems <cit.>, aiming at calculating the similarity of users and items to recommend new items to potential users. CF models mainly use Neural Networks to build models for users and items, simulating the interaction procedure and predicting the preferences of users for items. Recent works also built CF models based on Graph Neural Networks (GNNs) <cit.>. While CF models may have different performance in different scenarios <cit.>, recent works <cit.> have begun to apply Automated Machine Learning (AutoML) to search for data-specific CF models. Previous works, including SIF <cit.>, AutoCF <cit.> and <cit.>, applied Neural Architecture Search (NAS) to CF tasks. They split the architectures of CF models into several parts and searched each part in the architecture space. However, most of these methods focus on NAS in the architecture space, only considering hyperparameters as fixed settings and therefore omitting the dependencies among them. A CF model can be decided by a given architecture and a configuration of hyperparameters.
Especially in the task of searching for the best CF models, hyperparameter choice can affect search efficiency and the evaluation performance an architecture can receive on a given dataset. Recent methods mainly focus on model search, ignoring the important role of hyperparameters. For instance, SIF focuses on the interaction function, while it uses grid search on the hyperparameter space. AutoCF does not use hyperparameter tuning on each architecture, which may make the searched model sub-optimal since proper hyperparameters vary for different architectures. <cit.> includes GNN models in the architecture space, but the search and evaluation cost for architectures may be high. We find that these works only focus on either hyperparameters or fixed parts of the CF architecture, neglecting the relation between architectures and hyperparameters. If an architecture cannot be evaluated with proper hyperparameters, a sub-optimal model may be searched, possibly causing a reduction in performance. To summarize, there exists a strong dependency between hyperparameters and architectures. That is, the hyperparameters of a model are based on the design of the architecture, and the choices of hyperparameters also affect the best performance an architecture may achieve. We suppose that hyperparameters can be adaptively changed when the architecture changes in CF tasks, so we consider that the CF search problem can be defined on a joint space of hyperparameters and architectures. Therefore, the CF problem can be modeled as a joint search problem on the architecture and hyperparameter space. While hyperparameters and architectures both influence the cost and efficiency of the CF search task, there exist challenges for joint search problems: (1) Since the joint search space is designed to include both hyperparameters and architectures, the joint search problem has a large search space, which may make it more difficult to find the proper configuration of hyperparameters and architectures; (2) In a joint search problem, since getting better performance on a given architecture requires determining its hyperparameters, the evaluation cost may be higher. We propose a general framework, which can optimize CF architectures and their hyperparameters at the same time during the search procedure. The framework of our method is shown in Figure <ref>, consisting of two stages. Prior to searching hyperparameters and architectures, we first build a full understanding of the search space and reduce the hyperparameter space to improve efficiency. Specifically, we reduce the hyperparameter space according to performance rankings on CF tasks from different datasets. We also propose a frequency-based sampling strategy on the user-item matrix for fast evaluation. In the first stage, we search and evaluate models on the reduced space and subsampled datasets, and we jointly search architectures and hyperparameters with a surrogate model. In the second stage, we propose a knowledge-transfer-based evaluation strategy to leverage the surrogate model on larger datasets. Then we evaluate the model with transferred knowledge and jointly search the hyperparameters and architectures to find the best choice in the original dataset. Overall, we make the following important contributions: * We propose an approach that can jointly search CF architectures and hyperparameters to achieve strong performance on different datasets. * We propose a two-stage search algorithm to efficiently optimize the problem. The algorithm is based on a full understanding of the search space and the transfer ability between datasets.
It can jointly update CF architectures and hyperparameters and transfer knowledge from small datasets to large datasets. * Extensive experiments on real-world datasets demonstrate that our proposed approach can efficiently search configurations in designed space. Furthermore, results of ablation and case study show the superiority of our method. § RELATED WORK §.§ Automated Machine Learning (AutoML) Automated Machine Learning (AutoML) <cit.> refers to a type of method that can learn models adaptively to various tasks. Recently, AutoML has achieved great success in designing the state-of-the-art model for various applications such as image classification and segmentation <cit.>, natural language modeling <cit.>, and knowledge graph embedding <cit.>. AutoML can be used in mainly two fields: Neural Architecture Search (NAS) and Hyperparameter Optimization (HPO). * NAS <cit.> splits architectures into several components and searches for each part of the architecture to achieve the whole part. DARTS <cit.> uses gradient descent on continuous relaxation of the architecture representation, while NASP <cit.> improves DARTS by including proximal gradient descent on architectures. * HPO <cit.>, usually tuning the hyperparameters of a given architecture, always plays an important role in finding the best hyperparameters for the task. Random Search is a frequently used method in HPO for finding proper hyperparameters. Algorithms for HP search based on model have been developed for acceleration <cit.>, including Bayesian Optimization (BO) methods like Hyperopt <cit.>,BOHB <cit.> and BORE <cit.>, etc. Recent studies on AutoML have shown that incorporating HPO into the NAS process can lead to better performance and a more effective exploration of the search space. For example, a study on ResNet <cit.> showed that considering the NAS process as HPO can lead to improved results. Other works, such as AutoHAS <cit.>, have explored the idea of considering hyperparameters as a choice in the architecture space. FEATHERS <cit.> has focused on the joint search problem in Federated Learning. ST-NAS <cit.> uses weight-sharing NAS on architecture space and consider HP as part of architecture encoding. These studies demonstrate the potential of joint search on hyperparameters and architectures in improving the performance and efficiency of machine learning models. §.§ Collaborative Filtering (CF) §.§.§ Classical CF Models Collaborative Filtering (CF) <cit.> is the most fundamental solution for Recommender Systems (RecSys). CF models are usually designed to learn user preferences based on the history of user-item interaction. Matrix Factorization (MF) <cit.> generates IDs of users and items, using a high-dimensional vector to represent the location of users and items' features. The inner product is used as interaction function, calculating the similarity of user/item vectors. The MF-based method has been demonstrated effective in SVD++ <cit.> and FISM <cit.>. NCF <cit.> applied neural networks to building CF models, using a fused model with MF and multi-layer perceptron (MLP) as an interaction function, taking user/item embeddings as input, and inferring preference scores. JNCF <cit.> extended NCF by using user/item history to replace user/item ID as the input encoding. Recently, the user-item interaction matrix can also be considered as a bipartite graph, thus Graph Neural Networks (GNNs) <cit.> are also applied to solve CF tasks for their ability to capture the high-order relationship between users and items <cit.>. 
They consider both users and items as nodes and the interactions of users and items as edges in a bipartite graph. For example, PinSage <cit.> samples the graph according to node (user/item) degrees, and learns the parameters of the GNN on these sampled graphs. NGCF <cit.> uses a message-passing function on both users themselves and their neighbor items and collects both sources of information to build proper user and item embeddings. LightGCN <cit.> generates embeddings for users and items with simple SGC layers. §.§.§ AutoML for CF Recently, AutoML has been frequently used in CF tasks, aiming at finding proper hyperparameters and architectures for different tasks <cit.>. Hyperparameter Optimization (HPO) has been applied to the embedding dimension of RecSys models. For example, AutoDim <cit.> searches embedding dimensions in different fields, aiming at assigning embedding dimensions for duplicated content. PEP <cit.> uses learnable thresholds to identify the importance of parameters in the embedding matrix, and trains the embedding matrix and thresholds by sub-gradient descent. AutoFIS <cit.> is designed to learn feature interactions by adding an attention gate to every potential feature interaction. Most of these works tune the embedding dimension adaptively on Recommender Systems tasks, mostly in Click-Through-Rate (CTR) tasks. Neural Architecture Search (NAS) has also been applied to CF tasks, including SIF <cit.>, AutoCF <cit.>, and <cit.>. In detail, SIF adopts the one-shot architecture search for an adaptive interaction function in the CF model. AutoCF designs an architecture space of neural network CF models, and the search space is divided into four parts: encoding function, embedding function, interaction function and prediction function. AutoCF selects an architecture and its hyperparameters in the space with a performance predictor. Hyperparameters are considered as discrete search components in the search space, neglecting their continuous characteristics. <cit.> designs the search space on graph-based models, which also uses random search on the reduced search space. § SEARCH PROBLEM As mentioned in the introduction, finding a proper CF model should be considered as a joint search problem on hyperparameters and architectures. We propose to use AutoML to find architectures and their proper hyperparameters efficiently. The joint search problem on CF hyperparameters and architectures can be modeled as a bilevel optimization problem as follows: [Joint Automated Hyperparameters and Architecture Search for CF] Let f^* denote the proper CF model; then the joint search problem for CF can be formulated as: α^*, h^* = argmax_α∈𝒜, h ∈ℋ ℳ(f(𝐏^*; α, h), 𝒮_val), s.t. 𝐏^* = argmax_𝐏 ℳ(f(𝐏; α, h), 𝒮_tra), where ℋ contains all possible choices of hyperparameters h, 𝒜 contains all possible choices of architectures α, 𝒮_tra and 𝒮_val denote the training and validation datasets, 𝐏 denotes the learnable parameters of the CF architecture α, and ℳ denotes the performance measurement, such as Recall and NDCG. We encounter the following key challenges in effectively and efficiently solving the search problem: Firstly, the joint search space must contain a wide range of architectural operations within 𝒜, as well as frequently utilized hyperparameters in the learning stage within ℋ. Since this space contains various types of components, including continuous hyperparameters and categorical architectures, it is essential to appropriately encode the joint search space for effective exploration.
Secondly, considering the dependency between hyperparameters and architectures, the search strategy should be robust and efficient, meeting the accuracy and efficiency requirements of real-world applications. Compared to previous AutoML works on CF tasks, our method is the first to consider a joint search on hyperparameters and architectures on CF tasks. §.§ Architecture Space: 𝒜 In this paper, the general architecture of CF models can be separated into four parts <cit.>: Input Features, Feature Embedding, Interaction Function, and Prediction Function. Based on the frequently used operations, we build the architecture space 𝒜, illustrated in Table <ref>. Input Features The input features of CF architectures come from original data: user-item rating matrix. We can apply the interactions between users and items and map them to high-dimension vectors. There are two manners for encoding users and items as input features: one-hot encoding () and multi-hot encoding (). As for one-hot encoding (), we consider the reorganized id of both users and items as input. Since the number of users and items is different, we should maintain two matrices when we generate these features. As for multi-hot encoding (), we can consider the interaction of users and items. A user vector can be encoded in several places by its historical interactions with different items. The items can be encoded in the same way. Feature Embedding The function of feature embedding maps the input encoding with high dimension into vectors with lower dimension. According to designs in Section <ref>, we can elaborate the embedding manners in two categories: Neural Networks based (NN-based) embeddings and Graph Neural Networks based (Graph-based) embeddings. The embedding function is related to the size of input features. As for NN-based methods, a frequently used method of calculation is , mainly consists of a lookup-table in level, and mean pooling on both users/items sides. We can also use a multi-layer perceptron () for each user and item side, helping convert multi-hot interactions into low-dimensional vectors. As for Graph-based methods, the recent advances in GNNs use more complex graph neural networks to aggregate the neighborhoods, such as  <cit.>,  <cit.>, and  <cit.> etc. Interaction Function The interaction function calculates the relevance between a given user and an item. In this operation, the output is a vector affected by the embeddings of the user and item. The inner product is frequently used in many research works on CF tasks. We split the inner product and consider an element-wise product as an essential operation, which is noted as . In coding level, we can also use , , and . They help us join users and items with different element-wise calculations. Prediction Function This operation stage helps turn the output of the interaction function into an inference of similarity. As for the output vector of a specific interaction, a simple way is to use summation on the output vector. Thus, can be considered the inner product in this way. Besides, we use a weight vector with learnable parameters, noted as . Multi-layer perceptron () can also be used for more complex prediction on similarity. §.§ Hyperparameter Space: ℋ Besides the model architecture, the hyperparameter (HP) setting also plays an essential role in determining the performance of the CF model. The used components for hyperparameter space are illustrated in the first column of Table <ref>. 
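To make the joint space concrete, the sketch below encodes a configuration sampler over architecture operations and hyperparameters; the operation names and value ranges here are illustrative placeholders rather than the exact entries of the tables referenced above.

```python
import random

# Illustrative joint search space: categorical architecture operations plus
# mixed categorical/continuous hyperparameters (placeholder names and ranges).
ARCH_SPACE = {
    "input_features":    ["one_hot", "multi_hot"],
    "feature_embedding": ["lookup_mean", "mlp", "gnn"],
    "interaction":       ["element_product", "concat", "sum", "minus"],
    "prediction":        ["sum", "weighted_vector", "mlp"],
}
HP_SPACE = {
    "optimizer":     ["adam", "adagrad", "sgd"],
    "learning_rate": (1e-5, 1e-1),          # continuous range, sampled log-uniformly below
    "embedding_dim": [2, 4, 8, 16, 32, 64],
    "batch_size":    [256, 512, 1024],
}

def sample_configuration(rng=random):
    """Draw one joint (architecture, hyperparameter) configuration at random."""
    arch = {k: rng.choice(v) for k, v in ARCH_SPACE.items()}
    hp = {
        "optimizer": rng.choice(HP_SPACE["optimizer"]),
        "learning_rate": 10 ** rng.uniform(-5, -1),   # log-uniform over (1e-5, 1e-1)
        "embedding_dim": rng.choice(HP_SPACE["embedding_dim"]),
        "batch_size": rng.choice(HP_SPACE["batch_size"]),
    }
    return {**arch, **hp}

print(sample_configuration())
```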
The CF model, like any machine learning model, consists of standard hyperparameters such as learning rate and batch size. Specifically, excessively high learning rates can hinder convergence, while overly low values result in slow optimization. The choice of batch size is also a trade-off between efficiency and effectiveness. In addition, CF model has specific and essential hyperparameters, which may not be so sensitive in other machine learning models. Embedding dimension for users and items influence the representative ability of CF models. Besides, the embedding size determines the model's capacity to store all information of users and items. In general, too-large embedding dimension leads to over-fitting, and too-small embedding dimension cannot fit the complex user-item interaction data. The regularization term is always adopted to address the over-fitting problem. § SEARCH STRATEGY As mentioned in Section <ref>, joint search on hyperparameters and architectures have two challenges in designing a search strategy effectively and efficiently: the large search space and the high evaluation costs on large network datasets. Previous research works on model design with joint search space of hyperparameters and architectures such as <cit.> consider the search problem in the manner of Figure <ref>. They considered the search component in space jointly and used different search algorithms to make the proper choice. The search procedure can be costly on a large search space and dataset. Therefore, addressing the challenges of a vast search space and the costly evaluation process is crucial. The first challenge means the joint search space of ℋ and 𝒜 described in Section <ref> is large for accurate search. Thus, we need to reduce the search space. In practice, we choose to screen ℋ choices by comparing relative ranking in controlled variable experiments, which is explained in Section <ref>. The second challenge means the evaluation time cost in Equation (<ref>) is high. A significant validation time will lower the search efficiency. Thus, we apply a sampling method on datasets by interaction frequency, elaborated in Section <ref>. In comparison to conventional joint search problem in Figure <ref>, we design a two-stage search algorithm in Figure <ref>. We use Random Forest (RF) Regressor, a surrogate model, to improve the efficiency of the search algorithm. To transfer the knowledge, including the relation modeling between hyperparameters, architectures, and evaluated performance, we learn the surrogate model's parameters θ in the first stage, and we use θ as initialization of RF model in the second stage. Our search algorithm is shown in Algorithm <ref> and described in detail in Section <ref>. Besies, we have a discussion on our choices in Section <ref>, elaborating how we solve the challenge in the joint search problem. §.§ Screening Hyperparameter Choices We screen the hyperparameter (HP) choices from ℋ to ℋ̂ with two techniques. First, we shrink the HP space by comparing relative performance ranking of a special HP while fixing the others. After we get the ranking distribution of different HPs, we find the performance distribution among different choices of a HP, thus we may find the proper range or shrunk set of a given HP. Second, we decouple the HP space by calculating the consistency of different HP. If the consistency of a HP is high, that means the performance can change positively or negatively by only alternating this HP. 
Thus, this HP can be tuned separately, neglecting its relation to other HPs in the HP set. §.§.§ Shrink the hyperparameter space The screening method on the hyperparameter space is based on an analysis of the performance ranking distribution with a fixed value for a certain HP and random choices for the other HPs and architectures. In this part, we denote a selection of hyperparameters h ∈ℋ as a vector, noted as h = (h^(1), h^(2),…, h^(n)). For instance, h^(1) denotes the optimizer, and h^(2) denotes the learning rate. To obtain the ranking distribution of a certain HP h^(i), we start with a controlled variable experiment. We vary h^(i) over the discrete values in the third column of Table <ref>, and we vary the other HPs in their original ranges as in the second column. Specifically, given H_i as a discrete set of HP h^(i), we choose a value λ∈ H_i and we can obtain the ranking (𝗁, λ) of the anchor HP 𝗁∈ℋ_i by fixing all HPs except the i-th one. To ensure a fair evaluation across different architectures, we traverse the architecture space and calculate the performance rank of different configurations; then we can get the distribution for a type of HP. The relative performance ranking with different HPs is shown as violin plots in Figure <ref>. In this figure, we can obtain Ĥ from the distribution of different HP values. We learn that the proper choice for the optimizer can be shrunk to Adam and Adagrad; the proper range for the learning rate is (1e-5, 1e-2); the proper range for the embedding dimension can be reduced to [2, 64]; and we can fix the weight decay in our experiments. We demonstrate the conclusion in the fourth column of Table <ref>. §.§.§ Decouple the hyperparameter space To decouple the search space, we consider the consistency of the ranking of hyperparameters when only altering a given hyperparameter. For the i-th element h^(i) of h∈ℋ, we can change different values for h^(i), and then decouple the search procedure of the i-th hyperparameter from the others. We use the Spearman Rank-order Correlation Coefficient () to show the consistency of various types of HPs, which is defined in Equation (<ref>). (λ_1, λ_2) = 1-∑_h∈ℋ_i|(h,λ_1)-(h,λ_2)|^2/(|ℋ_i|·(|ℋ_i|^2-1)), where |ℋ_i| means the number of anchor hyperparameters in ℋ_i. This quantity demonstrates the matching rate of the rankings of the anchor hyperparameters in ℋ_i between h^(i) = λ_1 and h^(i) = λ_2. The consistency of the i-th HP is evaluated by the average of (λ_1, λ_2) over different pairs in H_i× H_i, as shown in Equation (<ref>). _i = 1/|H_i|^2∑_(λ_1, λ_2)∈ H_i× H_i(λ_1, λ_2). The results are demonstrated in Figure <ref>; we can directly find that an important hyperparameter with higher consistency has a more linear relationship with its values. Since models with high embedding dimensions are time-costly during training and evaluation, we decide to use lower dimensions in the first stage to reduce validation time, and then raise them on the original dataset to get better performance. To summarize, we shrink the range of the HP search space and find the consistency of different HPs. The shrunk space shown in Table <ref> helps us search more accurately, and the consistency analysis on performance rankings helps us find the dependency between different HPs, so we can tune HPs with high consistency separately. §.§ Evaluating Architecture with Sampling To evaluate architectures more efficiently, we collect performance information from subgraphs, since a subgraph can approximate the properties of the whole graph <cit.>.
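Both the hyperparameter consistency above and the transfer-ability test that follows boil down to a Spearman rank correlation between two performance rankings of the same set of anchor configurations. A minimal sketch using SciPy's standard implementation (which may differ from the printed equation by normalization constants) is given below; the scores are toy values.

```python
from scipy.stats import spearmanr

def ranking_consistency(perf_a, perf_b):
    """Spearman rank correlation between performance lists of the same anchor
    configurations evaluated under two settings (two hyperparameter values,
    or a subsampled vs. the original dataset)."""
    rho, _ = spearmanr(perf_a, perf_b)
    return rho

# Toy example: Recall@20 of five anchor configurations under two settings.
print(ranking_consistency([0.11, 0.18, 0.09, 0.21, 0.15],
                          [0.10, 0.19, 0.08, 0.16, 0.14]))  # 0.9 -> high consistency
```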
In this part, we introduce our frequency-based sampling method, and then we show the transfer ability of subgraphs by testing the consistency of architectures' performance from the subsampled dataset to the original dataset. §.§.§ Matrix sampling method Since CF datasets are based on interaction records, our sampling method is based on items' appearance frequency. That is, we can subsample the original dataset by preserving part of the bipartite graph, and the relative performance on the smaller datasets should be consistent (i.e., the ranking distributions of performance on 𝒮_val and 𝒮̂_val should be similar). The matrix subsample algorithm is demonstrated in Algorithm <ref>. First, we set the number of user-item interactions to be preserved, which can be controlled by a sample ratio γ∈ (0,1). We count the interactions of each item, and then we preserve the items with higher frequencies. The items can be chosen from a fixed list (Lines 6-7 in Algorithm <ref>), or the interaction frequency counts of items can be normalized to probabilities, so that different items have corresponding probabilities of being preserved (Lines 8-10 in Algorithm <ref>). §.§.§ Transfer ability of architectures on the subsampled matrix To ensure that the relative performance ranking on subsampled datasets is similar to that on original datasets, we need to test the consistency of architecture rankings on different datasets. We evaluate the transfer ability from the subgraph to the whole graph using the same consistency measure. For a given value of γ, we select a sample set of architectures A_γ⊆𝒜. Then we evaluate them on a subsampled dataset 𝒮̂ and the original dataset 𝒮. The relative rank of α∈ A_γ on 𝒮̂ and 𝒮 is noted as (α, 𝒮̂) and (α, 𝒮). _γ = 1-∑_α∈ A_γ|(α, 𝒮̂)-(α, 𝒮)|^2/(|A_γ|·(|A_γ|^2-1)). We can choose different subsampled datasets 𝒮̂ to get the average consistency. As demonstrated in Figure <ref>, a higher sample ratio has better transfer ability among graphs under the chosen sample mode, and the proper sample ratio should be in γ∈ [0.2,1). To summarize, through the sampling method, the evaluation cost will be reduced, and thus the search efficiency is improved. Since the ranking distribution among subgraphs is similar to that of the original dataset, we can transfer the evaluation modeling from small to large datasets. §.§ Two-stage Joint Hyperparameter and Architecture Search As discussed above, the evaluation cost can be highly reduced with sampling methods. Since the sampling method ensures the transfer ability from subgraphs to whole graphs, we propose a two-stage joint search algorithm, shown in Algorithm <ref>, and the framework is also shown in Figure <ref>. The main notations in the algorithm can be found in Table <ref> in the Appendix. We also compare our method with the conventional method in Figure <ref>. To briefly summarize our algorithm: In the first stage (Lines 3-14), we sample several subgraphs with frequency-based methods, and preserve a fraction γ=0.2 of the interactions from the original rating matrix. Based on our understanding of hyperparameters in Section <ref>, we select an architecture and its hyperparameters in our reduced hyperparameter space. We use a surrogate model, noted as c(·, θ) with model parameters θ, to find proper architectures and hyperparameters and improve search efficiency. After we get evaluations of architectures and hyperparameters, we update the parameters θ of c. In our framework, we choose BORE <cit.> as the surrogate and Random Forest (RF) as the regressor to model the relation between search components and performance.
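For intuition, the following is a schematic sketch of a single BORE-style proposal step with a Random Forest surrogate: evaluated configurations are labeled by whether their score falls in the top quantile, a classifier is fit on these labels, and the candidate with the highest predicted probability of being "good" is proposed next. This is an illustration of the idea, not the paper's Algorithm; the encodings and settings are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def propose_next(history_x, history_y, candidates, gamma_q=0.25, rf=None):
    """One BORE-style proposal step with a Random Forest surrogate (sketch).

    history_x : encoded configurations already evaluated (2-D array-like)
    history_y : their validation scores, higher is better
    candidates: encoded configurations to choose from
    """
    history_y = np.asarray(history_y)
    tau = np.quantile(history_y, 1.0 - gamma_q)          # threshold for the "good" quantile
    labels = (history_y >= tau).astype(int)              # 1 = promising, 0 = not
    rf = rf or RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(np.asarray(history_x), labels)                # density-ratio style classifier
    proba = rf.predict_proba(np.asarray(candidates))[:, 1]
    return int(np.argmax(proba)), rf                     # index of the most promising candidate

# Toy usage with three already-evaluated configurations and two candidates.
idx, model = propose_next([[0, 1], [1, 0], [1, 1]], [0.10, 0.15, 0.20], [[0, 0], [1, 1]])
print(idx)
```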
Details of the BORE+RF search algorithm () is shown in Algorithm <ref> in Appendix. We first transfer the parameters of c trained in the first stage by collected configurations and performance. Then, in the second stage (Lines 15-22), we select architectures and hyperparameters and evaluate them in the original dataset. Besides, we increase the embedding dimension to reach better performance. Similarly, the configurations and performance are recorded for the surrogate model (RF+BORE), which can give us the next proper configuration. Finally, after two-stage learning on the subsampled and original datasets, we get the final output of architecture and hyperparameters as the best choice of architecture and its hyperparameters. § EXPERIMENTS Extensive experiments are performed to evaluate the performance of our search strategy by answering the following several research questions: * RQ1: How does our algorithm work in comparison to other CF models and automated model design works? * RQ2: How efficiently does our search algorithm work in comparison to other typical search algorithms? * RQ3: What is the impact of every part of our design? * RQ4: How specific is our model for different CF tasks? §.§ Experimental Settings §.§.§ Datasets We use MovieLens-100K, MovieLens-1M, Yelp, and Amazon-Book for CF tasks. The detailed statistics and preprocess stage of datasets are shown in Table <ref> in Appendix <ref>. §.§.§ Evaluation metrics As for evaluation metrics, we choose two widely used metrics for CF tasks, Recall@K and NDCG@K. According to recent works <cit.>, we set the length for recommended candidates K as 20. We use Recall@20 and NDCG@20 as validation. As for loss function, we use BPR loss <cit.>, the state-of-the-art loss function for optimizing recommendation models. §.§.§ Baselines for Comparison Since we encode both hyperparameters and architectures, we can use previous search algorithms on hyperparameters <cit.> and extend them on a joint search space. The details of search algorithms can be found in Appendix <ref>. §.§ Performance Comparison (RQ1) For CF tasks, we compare our results with NN-based CF models and Graph-based CF models. Besides, we also compare our search algorithm with other search algorithms designed for CF models. The search space is based on analysis of hyperparameter understanding in Section <ref>. We report the performance on four datasets in Table <ref>. We summarize the following observation: * We find that in our experiment, our CF model trained by searched hyperparameters and architectures can achieve better performance than the classical CF models. Some single models also perform well due to their special design. For example, LightGCN performs well on ML-100K and Yelp, while NGCF also performs well on ML-1M. Since our search algorithm has included various operations used in CF architectures, the overall architecture space can cover the architectures of classical models. * Compared to NAS method on CF models, our method works better than SIF for considering multiple operations in different stages of architecture. Our methods also outperform AutoCF by 3.10% to 12.1%. The reason for that is we have an extended joint search space for both hyperparameters and architectures. Besides, our choice for hyperparameters is shrunk to a proper range to improve search efficiency and performance. * We can observe in Table <ref> that our searched models can achieve the best performance compared with all other baselines. 
Note that our proposed method outperforms the best baseline by 2.33% to 13.02%, and the improvement on small datasets is larger than that on large ones.
§.§ Algorithm Efficiency (RQ2)
We compare the different search algorithms mentioned in Section <ref>. The search results are shown in Figure <ref> for ML-100K and Figure <ref> for ML-1M, where we plot the results according to the output of each search algorithm. As demonstrated in Figures <ref> and <ref>, BO-based search algorithms perform better than random search, since BO uses a Gaussian-process surrogate to model the relationship between the performance and the hyperparameter and architecture choices. We find that BORE+RF outperforms the other search strategies in efficiency, because the RF surrogate is better at classifying one-hot encodings of architectures. We also compare the time curves of BORE+RF under single-stage and two-stage evaluation. Since our two-stage algorithm subsamples the rating matrix while preserving the consistency between subgraph and whole graph, it achieves both better performance and higher search efficiency.
§.§ Ablation Study (RQ3)
In this subsection, we analyze how important and how sensitive the various components of our framework are. We focus on the improvements in performance and efficiency from reducing and decoupling the hyperparameter space, and we also elaborate on how the sample ratio is chosen and on the effectiveness of tuning hyperparameters.
§.§.§ Plausibility of screening hyperparameters
To further validate the effectiveness of screening hyperparameter choices, we search for hyperparameters in the original space, the shrunk space, and the decoupled space. The search time curves are shown in Figure <ref>, demonstrating that screening hyperparameter choices improves both performance on CF tasks and search efficiency. According to our results, performance on ℋ̂ is better than that on the original space ℋ. The reason is that the shrunk hyperparameter space is more likely to contain good hyperparameter choices for CF architectures. Search efficiency is improved because the number of hyperparameter choices is reduced, so proper hyperparameters can be found more quickly in a smaller search space. We also find that search efficiency drops when we increase the batch size and embedding dimension while searching in the decoupled space: although larger batch sizes and embedding dimensions improve final performance, the evaluation cost is higher, which may reduce search efficiency.
§.§.§ Choice of Sampling Ratio
To study the impact of the sampling ratio, we search with different sampling ratios on different datasets within the same controlled search time. The evaluation results (Recall@20) for the different sampling ratio settings are listed in Table <ref>. According to the table, smaller sampling ratios lead to lower performance, because a user-item matrix sampled at a low ratio may not preserve the ranking consistency of the original matrix, in line with the consistency results in Section <ref>. Although a dataset sampled at a higher ratio is more consistent with the original dataset, the results within a limited time budget are not necessarily better: spending too much time on evaluation in the first stage leaves the surrogate model with fewer observations from which to learn the connection between performance and the configurations of hyperparameters and architectures.
Thus, the first stage involves a trade-off between sample ratio and time cost, and we choose 20% as the sample ratio in our experiments.
§.§.§ Tuning on Hyperparameters
To show the effectiveness of our joint search design, we apply our joint search method to the previous work AutoCF <cit.>. The results are shown in Table <ref>. We apply our search strategy to AutoCF so that hyperparameters are included in its search space, denoted AutoCF(HP). We find that hyperparameters searched in the shrunk space lead to better performance across datasets than using the architecture space with random hyperparameter search.
§.§ Case Study (RQ4)
In this part, we focus on the architectures found by our search strategy. The top-performing search results share some similarities, while different architecture operations perform differently. Based on the search results, we present the top models for each task on all datasets in Table <ref>. It is easy to see that encoding features based on interaction history tends to yield better performance and a more powerful representation ability. The embedding function that captures high-order relationships also has a stronger representation ability. Among the operations that collect information from different layers, the more simply designed ones show stronger ability. As for the interaction function, the classical element-wise operation achieves strong performance, and prediction functions based on a learnable vector capture more powerful and complex interactions. We find that the top models for each task have similar implementations, but there are also differences across datasets: a top architecture on a given dataset may not achieve the best performance on another. In summary, the proper architectures for different datasets are not necessarily the same, but the top results share some common operations in their architecture structures. These results and analyses can help human experts design more powerful CF architectures.
§ CONCLUSION AND FUTURE WORK
In this work, we consider the joint search problem over hyperparameters and architectures for Collaborative Filtering models. We propose a search framework built on a search space consisting of frequently used hyperparameters and architecture operations. We conduct a thorough analysis of the hyperparameter space to screen hyperparameter choices, and we propose a two-stage search algorithm to find proper hyperparameter and architecture configurations efficiently. We design a surrogate model that jointly updates CF architectures and hyperparameters and can be transferred from small to large datasets. We conduct experiments on several datasets, including comparisons with different models and search algorithms as well as ablation studies. For future work, we consider it important to formulate knowledge-graph-based CF models as a search problem: with additional entities for items and users, deeper relationships can be mined for better performance. An extended search framework can also be built for larger network settings, and models for broader recommendation tasks and other data mining tasks can likewise be treated as search problems. This work is partially supported by the National Key Research and Development Program of China under 2021ZD0110303, the National Natural Science Foundation of China under 62272262, 61972223, U1936217, and U20B2060, and the Fellowship of China Postdoctoral Science Foundation under 2021TQ0027 and 2022M710006.
§ APPENDIX
§.§ Search Space
The main notations of this paper are listed in Table <ref>, and we discuss some details of the search space in this section.
§.§.§ Hyperparameter Choice
We explain how we reduce the hyperparameter choices with the help of Figure <ref>. We shrink the hyperparameter search space based on the performance ranking distribution and split the hyperparameters into four categories:
* Reduction: This kind of hyperparameter is usually categorical, like the optimizer. The number of choices can be reduced.
* Shrunk range: This kind of hyperparameter is usually selected from a continuous range, such as the learning rate. Its values can be constrained to a smaller range.
* Monotonically related: Performance usually rises as this kind of hyperparameter increases, as with the embedding dimension. However, we cannot choose very large values because of limited memory. Thus, we choose a smaller value in the first stage and a larger one in the second stage.
* No obvious pattern: We do not have to change this kind of hyperparameter in our experiments, e.g., weight decay.
§.§ Search Algorithms
§.§.§ Surrogate Model Design
We demonstrate our search algorithm with the surrogate model in Algorithm <ref>. We design the search algorithm with BORE and a Random Forest (RF) regressor. In the first stage, we train the surrogate model on D and update its parameters. The output of BORE+RF is an inference score in (0,1), and we choose the configuration with the smallest score as the output. After the first stage, we save the parameters of the surrogate model and transfer them to the second stage, in which we experiment on the larger datasets. The configurations of hyperparameters and architectures, together with their performance on the larger dataset, also update the parameters of the surrogate model. With the knowledge learned in the first stage, the surrogate model can better choose proper architectures and hyperparameters for CF tasks.
§.§.§ Fair Comparison
Compared with hyperparameters, the choice of architecture affects the performance of CF models more. Thus, to compare different experimental settings fairly, we use the average of the top-5 configurations for the search time curve.
§.§.§ Search Procedure
We compare the conventional joint search method and our method in Figure <ref>. As shown there, the conventional one-stage method searches the components separately, while our method searches hyperparameters in a shrunk space; moreover, the evaluation time in the first stage of our method is lower.
§.§ Discussion
In this section, we discuss the choices we have made and the differences from previous work on joint search problems. The first point is why we screen the hyperparameter space rather than the architecture space. The hyperparameter space mainly consists of continuous components with infinitely many choices, and screening hyperparameter ranges <cit.> has proven effective for deep neural networks and graph-based models. Architecture space components, in contrast, are typically categorical, and each one is necessary in some manner and should not be ignored. Furthermore, we believe it is unnecessary to reduce the architecture space after conducting a fair ranking of different architectures. Our work is the first on CF tasks, which differs from previous studies on joint search. <cit.> mainly focuses on joint search for typical neural networks.
Another study of the joint search problem, AutoHAS <cit.>, focuses on model search with weight sharing, and FEATHERS <cit.> focuses on joint search in Federated Learning. In ST-NAS <cit.>, the sub-models are candidates of a designed super-net; sub-ST-models are sampled from the super-net, and the weights are updated using the training loss while the hyperparameters are updated with the validation loss.
§.§ Data Preprocess
§.§.§ Details of Datasets
In our experiments, we choose four real-world raw datasets and build them for evaluation on CF tasks.
* MovieLens-100K [https://grouplens.org/datasets/movielens/100k] This widely used movie-rating dataset contains 100,000 ratings of movies on a scale from 1 to 5. We convert the ratings to binary form, where each entry is marked as 0 or 1, indicating whether the user has rated the item.
* MovieLens-1M [https://grouplens.org/datasets/movielens/1m] This widely used movie-rating dataset contains more than 1 million ratings of movies on a scale from 1 to 5. Similarly, we build an implicit-feedback version of MovieLens-1M.
* Yelp [https://www.yelp.com/dataset/download] This dataset is published officially by Yelp, a crowd-sourced review forum website where users can write comments and reviews for various POIs, such as hotels and restaurants.
* Amazon-Book [https://nijianmo.github.io/amazon/index.html] This book-rating dataset is collected from users' uploaded review and rating records on Amazon.
§.§.§ Preprocess
Since the original datasets in Table <ref> are too large for training, we reduce them in frequency order. We use a 10-core selection on Yelp and a 50-core selection on Amazon-Book, where N-core means we keep users and items that appear more than N times in the whole record history. After selecting the data, we split it into training, validation, and test sets, and we shuffle each set when starting a new evaluation task.
§.§.§ Dataset sampling
Papers studying long-tail recommendation <cit.> or self-supervised recommendation <cit.> discuss the impact of data sparsity. One can sample the subgraph according to popularity <cit.>, mainly on the user side or the item side, and the long-tail effect is more severe on the item side. We therefore sample the rating matrix based on item frequency: the more often an item appears in the rating records, the more likely it is to be preserved in the subsampled matrix.
§.§ Detailed Settings of Experiments
§.§.§ Baseline algorithms
* Random Search <cit.>. A simple method for finding hyperparameters and architectures by making random choices in either continuous or categorical spaces.
* Bayesian Optimization <cit.>. Bayesian Optimization (BO) is a search algorithm based on the analysis of posterior information, and it can use a Gaussian Process as a surrogate model.
* BOHB <cit.>. BOHB combines Bayesian Optimization (BO) and HyperBand (HB), helping to optimize objectives under a lower time budget.
* BORE <cit.>. BORE is a BO method that recasts expected improvement as a binary classification problem. It can be combined with different models, including Random Forest (RF), Multi-Layer Perceptron (MLP), and Gaussian Process (GP); we choose RF in our experiments.
§.§.§ Hardware Environment
We implement the models with PyTorch 1.12 and run experiments on a 64-core Ubuntu 20.04 server with NVIDIA GeForce RTX 3090 GPUs with 24 GB of memory each. It takes 3.5-4 hours to search on a dataset with one million records.
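As a concrete illustration of the frequency-based rating-matrix subsampling described in the matrix sampling subsection and in the dataset-sampling paragraph above, the following is a minimal sketch. The function names, the two mode labels, and the rank-consistency helper mirror the descriptions in the text rather than the released code, and should be read as assumptions of the sketch.

```python
import numpy as np

def subsample_by_item_frequency(interactions, gamma, mode="probabilistic", seed=0):
    """Keep roughly a gamma fraction of user-item interactions, favouring frequent items.

    interactions : (N, 2) integer array of (user_id, item_id) pairs
    gamma        : target fraction of interactions to preserve, 0 < gamma < 1
    mode         : "top" keeps the most frequent items outright;
                   "probabilistic" keeps items with probability proportional to frequency
                   (so the preserved interaction count only approximates gamma * N).
    """
    rng = np.random.default_rng(seed)
    items, counts = np.unique(interactions[:, 1], return_counts=True)
    if mode == "top":
        order = np.argsort(-counts)                      # most frequent items first
        cumulative = np.cumsum(counts[order])
        n_keep = np.searchsorted(cumulative, int(gamma * len(interactions))) + 1
        kept_items = set(items[order[:n_keep]])
    else:
        probs = counts / counts.sum()
        n_items = max(1, int(gamma * len(items)))
        kept_items = set(rng.choice(items, size=n_items, replace=False, p=probs))
    mask = np.isin(interactions[:, 1], list(kept_items))
    return interactions[mask]

def rank_consistency(rank_sub, rank_full):
    """Rank-consistency score between subsampled-data and full-data architecture rankings,
    following the formula given in the transferability subsection."""
    d = np.asarray(rank_sub, dtype=float) - np.asarray(rank_full, dtype=float)
    n = len(d)
    return 1.0 - np.sum(d ** 2) / (n * (n ** 2 - 1))
```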
§.§.§ Code
We have released our implementation code for the experiments at https://github.com/overwenyan/Joint-Search
http://arxiv.org/abs/2307.07411v1
20230710121834
Detecting LLM-Generated Text in Computing Education: A Comparative Study for ChatGPT Cases
[ "Michael Sheinman Orenstrakh", "Oscar Karnalim", "Carlos Anibal Suarez", "Michael Liut" ]
cs.CL
[ "cs.CL", "cs.CY" ]
Detecting LLM-Generated Text in Computing Education: A Comparative Study for ChatGPT Cases
[email protected], University of Toronto Mississauga, Mississauga, Canada; [email protected], 0000-0003-4930-6249, Maranatha Christian University, Bandung, Indonesia; [email protected], 0000-0002-6012-932X, Escuela Superior Politécnica del Litoral, Guayaquil, Ecuador; [email protected], 0000-0003-2965-5302, University of Toronto Mississauga, Mississauga, Canada
Due to the recent improvements and wide availability of Large Language Models (LLMs), they have posed a serious threat to academic integrity in education. Modern LLM-generated text detectors attempt to combat the problem by offering educators services to assess whether some text is LLM-generated. In this work, we collected 124 submissions from computer science students before the creation of ChatGPT and then generated 40 ChatGPT submissions. We used this data to evaluate eight publicly available LLM-generated text detectors through the measures of accuracy, false positives, and resilience. The purpose of this work is to inform the community about which LLM-generated text detectors work and which do not, and to provide insights for educators to better maintain academic integrity in their courses. Our results find that CopyLeaks is the most accurate LLM-generated text detector, GPTKit is the best LLM-generated text detector for reducing false positives, and GLTR is the most resilient LLM-generated text detector. We also express concerns over 52 false positives (of 114 human-written submissions) generated by GPTZero. Finally, we note that all LLM-generated text detectors are less accurate with code, other languages (aside from English), and after the use of paraphrasing tools (like QuillBot). Modern detectors are still in need of improvement so that they can offer a foolproof solution to help maintain academic integrity. Further, their usability can be improved by facilitating smooth API integration, providing clear documentation of their features and the understandability of their model(s), and supporting more commonly used languages.
CCS Concepts: Social and professional topics: Computing education; Information systems: Near-duplicate and plagiarism detection; Information systems: Language models.
§ INTRODUCTION
In academia, one way to encourage students to make the most of all learning opportunities and experiences is to properly maintain academic integrity in courses <cit.>. Students need to complete exams and assessments with their best effort, and they need to actively engage with their instructors (and tutors). Although Artificial Intelligence (AI) can foster education <cit.>, it might be misused to breach academic integrity.
Paraphrasing tools <cit.> and code obfuscation tools <cit.>, for example, are misused to cover up evidence of plagiarism (a breach of academic integrity in which one's work is copied and reused without proper acknowledgment <cit.>). Misuse of AI chatbots backed by large language models (LLMs) <cit.>, such as ChatGPT[<https://openai.com/blog/chatgpt>], is another trending threat to academic integrity. Students can complete exams or assessments with limited effort, resulting in questionable performance; it is unclear whether the learning objectives are actually met. The misuse could be considered contract cheating (i.e., getting help in exchange for mutual incentives <cit.>), since AI chatbots provide responses in exchange for additional user data. However, considering that AI responses are generated from other people's textual data without proper acknowledgment, we believe it is more justifiable to consider the misuse as plagiarism. While checking student work for plagiarism, instructors are often aided by automated detectors, and a number of detectors have been developed to determine whether a work is the result of an LLM. Two of them are the GPT-2 Output Detector <cit.> and the Giant Language model Test Room (GLTR) <cit.>. Nevertheless, due to the recency of AI chatbot misuse, computing educators might have limited information about publicly available detection tools, and it is challenging to choose the most suitable detector for their teaching environment. To the best of our knowledge, there are no empirical studies comparing the detectors in terms of effectiveness. In response to the aforementioned gaps, we investigate LLM-generated text detectors and formulate the following research question (RQ): "How effective are LLM-generated text detectors?" There is a clear need in the community to understand whether the currently available detectors are able to detect LLM-generated content <cit.> and what their reliability is. As an additional contribution, we also report our experience in using the LLM-generated text detectors, which might be useful for readers interested in employing those detectors in their classrooms.
§ RELATED WORK
This section discusses common breaches of academic integrity in computing education and the misuse of AI to breach academic integrity.
§.§ Common Breaches of Academic Integrity
Academic integrity encourages students to act honestly, trustworthily, respectfully, and responsibly in their learning[<https://lo.unisa.edu.au/course/view.php?id=6751&amp;section=6>]. <cit.> lists five common breaches of academic integrity in computing education: plagiarism, collusion, contract cheating, exam cheating, and research fraud. It is important to inform students about instructors' expectations regarding academic integrity in their courses <cit.> and to penalize those who breach it. Plagiarism happens when ideas, words, or even code are reused without proper acknowledgment of, and permission from, the original author(s) <cit.>. It is commonly identified with the help of automated detectors <cit.> such as Turnitin[<https://www.turnitin.com/>], Lichen <cit.>, MOSS[<https://theory.stanford.edu/ aiken/moss/>], and JPlag <cit.>. Any submissions with high similarity are investigated, and if they are indeed the result of misconduct, the students are penalized <cit.>. Nevertheless, identifying plagiarism is not always straightforward; some perpetrators disguise their act with automated paraphrasing <cit.>, essay spinning <cit.>, or code obfuscation <cit.>.
The automated detectors should be resilient to common disguising practices, in addition to being effective and efficient. GPlag <cit.> and BPlag <cit.>, for example, focus on content semantics while measuring similarity among submissions. <cit.> and <cit.> developed detectors that identify substantial changes between consecutive saves, and <cit.> developed a detector that is automatically integrated into a programming workspace to record any code edits. Collusion is also about reusing ideas, words, or code without proper acknowledgment; however, the original author(s) is aware of the matter and to some extent allows it <cit.>. Typically, this occurs when two or more students work closely beyond reasonable levels of collaboration <cit.>. Collusion can be identified in the same manner as plagiarism with the help of automated detectors: similar submissions are reported by the detectors and then manually investigated by the instructors, and students whose submissions are indeed the result of misconduct are penalized. Contract cheating occurs when third parties are paid to complete student assessments <cit.>. The third parties can be professional companies or even the students' peers. Contract cheating is quite challenging to identify, as the third parties tend to know how to evade detection. It is only identifiable when the writing style and the quality of the submission are substantially different from those of the student's prior submissions. To expedite the identification process, instructors can either use authorship identification detectors <cit.> such as Turnitin Authorship Investigate[<https://help.turnitin.com/MicroContent/authorship-investigate.htm>] <cit.> or check contract cheating sites <cit.>. Exam cheating happens when some students have unfair advantages in exams <cit.>. The advantages can range from concealed notes during exams and leaked exam questions to impersonation (i.e., an individual switching places with a student to take the exam). Exam cheating can be identified via careful investigation of the whole exam process. Sometimes, such identification can be aided by online proctoring systems <cit.> (e.g., Proctorio[<https://proctorio.com/>] and ProctorExam[<https://proctorexam.com/>]) or local monitoring tools (e.g., NetSupport[<https://www.netsupportschool.com/>]). Research fraud means reporting research results without verifiable evidence <cit.>. It can take the form of data fabrication (i.e., generating artificial data to benefit the students) or data falsification (i.e., altering the data so that it aligns with the students' desired findings). Both are part of research misconduct[<https://grants.nih.gov/policy/research_integrity/definitions.htm>] and can happen in research-related assessments. Research fraud can be identified via careful investigation of the whole research process. Due to its complex nature, such misbehaviour is identified manually in most cases; however, instructors can get some help from source metadata <cit.> and automated image manipulation detection <cit.>.
§.§ Misuse of AI
AI substantially affects education <cit.>. It improves the student learning experience through intelligent tutoring systems <cit.> and personalized learning materials <cit.>. AI expedites the process of providing feedback <cit.>, identifying breaches of academic integrity <cit.>, maintaining student retention <cit.>, learning programming <cit.>, creating programming exercises <cit.>, and recording attendance <cit.>.
Advances in AI might also be misused to breach academic integrity. Paraphrasing tools <cit.>, which are intended to help students learn paraphrasing, are misused to cover up plagiarism. Code generators like GitHub Copilot <cit.>, which are intended to help programmers develop software, are misused to complete programming tasks that should be solved independently. Code obfuscation tools <cit.>, which are intended to secure code in production, are misused to disguise similarities in copied code submissions. AI chatbots <cit.>, especially those backed by Large Language Models (LLMs) <cit.>, are intended to help people search for information, but they are misused to unethically complete exams[<https://edition.cnn.com/2023/01/26/tech/chatgpt-passes-exams/index.html>] and assessments[<https://theconversation.com/chatgpt-students-could-use-ai-to-cheat-but-its-a-chance-to-rethink-assessment-altogether-198019>]. An LLM is derived from a Language Model (LM), a statistical model in which each sequence of words is assigned a probability <cit.>. For each query or question, the response is generated by concatenating sequences of words that have a high probability given the query or question. ChatGPT is a popular example built on an LLM. The tool is developed by OpenAI, an American AI research laboratory, on top of GPT-3, an LLM that uses deep learning to generate human-like text, and it relies on reinforcement and supervised learning to further tune the model. A number of automated detectors have been developed to help instructors identify AI misuse that breaches academic integrity. In the context of plagiarism and collusion, automated detectors nullify common alterations that can be made without understanding the content <cit.> and remove content that does not serve as evidence for raising suspicion <cit.>. For dealing with the misuse of AI chatbots, a few automated detectors have been developed in the same way as the chatbots, via pretrained models, but dedicated to detecting AI-generated text.
§ METHODOLOGY
This section discusses how the research question stated in the introduction is addressed, as well as our preliminary work to discover publicly available LLM-generated text detectors. We collected historical assignment data dating back to 2016 from two publicly funded research-focused institutions, one in North America and one in South America. The data collected was from upper-year undergraduate computer science and engineering students. We analyzed a total of 164 submissions (124 submitted by humans, 30 generated using ChatGPT, and 10 generated by ChatGPT and altered using the QuillBot paraphrasing tool) and compared them against eight LLM-generated text detectors, resulting in a total of 1,312 prediction results. Of the 164 submissions, 134 were written in English (20 of which were generated by an LLM, and another 10 of which were LLM-generated and paraphrased) and 20 were written in Spanish (10 of which were AI-generated). The submissions were collected between 2016 and 2018 (prior to the release of ChatGPT) and were made in "databases", "networking", and "final thesis project" courses. These courses were specifically selected as they are upper-year computer science major courses that touch on a mix of systems and theory (databases and networking) as well as technical writing in computer science with a programming/development component (final thesis project). The students in these courses were primarily computer science majors.
It should also be noted that Spanish was selected as the alternative language to analyze because it is one of the world's most popular languages and some of the authors have experience writing and evaluating technical material in it. The assessments analyzed in this study (see Table <ref>) are taken from three undergraduate courses. The first is a databases course offered to third-year computer science students in their first or second semester; it mixes database theory and practical systems application. There are 101 paper submissions from this course, which involved a final assessment where students wrote a report analyzing two industry players and their use of databases and data centers; these were written in English. The second is a networking course offered to third-year computer science students in their second semester; it mixes theoretical concepts and practical systems application. There are 13 paper submissions from this course, which involved an exam question where students explain how they would implement the NOVEL-SMTP and NEO-SMTP email protocols using only UDP; these were written in English. The third is a final thesis project course offered to fourth-year computer science students throughout their final year of study (across both semesters); it is meant to bridge theory and practice to develop something that can be used or implemented in the real world. There are 10 paper submissions from this course, which involved improving computing systems and engineering processes in the students' local community; these were written in Spanish. Due to character limitations, data below 1,000 characters was excluded and data above 2,500 characters was truncated to the last complete sentence. This ensures the input data fits within the range of all detectors; as many LLM-generated text detection platforms have a 2,500-character maximum, we used 2,500 characters as our upper bound to ensure fairness across platforms. LLM-generated texts were created with the help of ChatGPT[<https://openai.com/blog/chatgpt>], a popular LLM. The assignment handouts were parsed into prompts by removing irrelevant information (course code, deadlines, submission instructions) so that the prompts contained only the core requirements of the task. These prompts were then fed into ChatGPT to generate a solution to each assignment. It should be noted that the authors mined through over 2,000 submissions in programming, data structures and algorithms, and compilers courses; however, the submission data varied too much for the content to be easily extracted and analyzed by the detectors, often due to a lack of context after removing any code. The selected submissions were purely writing-based and did not involve coding components, although they did in some cases discuss theoretical concepts in computer science. Finally, all of the detectors were tested in April 2023.
§.§ Discovering Publicly Available LLM-generated Text Detectors
Publicly available LLM-generated text detectors were discovered from January to February 2023 through social media (i.e., Twitter, Facebook, and blogs), online news, and previous literature on LLM-generated text detection (GPT-2, GLTR). Public interest in LLM-generated text detectors followed the release of GPTZero, which went viral in January 2023; after GPTZero, many other companies launched their own LLM-generated text detectors.
A number of LLM-generated text detectors were discovered, but we limited this study to detectors that appear to offer proprietary solutions to LLM-generated text detection. We found that some detectors are likely to be replicas of open-source work (GPT-2), and hence we excluded such detectors from the study. We identified eight such publicly available LLM-generated text detectors, as shown in Table <ref>. Two of them (GPT-2 Output Detector and GLTR) are accompanied by technical reports <cit.>. The GPT-2 Output Detector <cit.> is an LLM-generated text detector based on the RoBERTa-large pretrained model <cit.>. RoBERTa is a transformer model trained on a large corpus of raw English data; the GPT-2 Output Detector starts from the pretrained RoBERTa-large model and trains a classifier on web data and the GPT-2 output dataset. The detector returns the probability that an input text is real, with a reported accuracy on GPT-2 text of 88% for the 124-million-parameter model and 74% for the 1.5-billion-parameter model <cit.>. The detector is limited to the first 510 tokens, although there are extensions that raise this limit <cit.>. GLTR <cit.> is a detector that applies statistical methods to detect GPT-2 text. The model is based on three simple tests: the probability of a word, the absolute rank of a word, and the entropy of the predicted distribution. The detector shows an interface where each word is highlighted along with a top-k class for that word, but it does not provide a quantifiable overall probability that a text is AI-generated. To make a fair comparison between GLTR and the other detectors, we define a detector on top of GLTR that makes probability predictions using the normal distribution. We compute an average μ and a standard deviation s over a sample dataset of 20 human and 20 ChatGPT submissions; the results were μ = 35.33 and s = 15.68. We then normalize a data point x by computing its standard score (x - μ)/s, and this score is passed through the sigmoid function to obtain a probability prediction. GPTZero was the first detector <cit.> to claim to detect ChatGPT data. The original version of the detector used two measures: perplexity and burstiness. Perplexity is a measurement of how well GPT-2 can predict the next word in the text, which appears similar to the way the GLTR detector works <cit.>. The second measure is burstiness, the distribution of sentences: the idea is that humans tend to write with bursts of creativity and are more likely to mix short and long sentences. The current version of GPTZero gives four classes of results; Table <ref> shows how the different classes are interpreted as probabilities. GPTZero claims an 88% accuracy for human text and a 72% accuracy for AI text <cit.>. AI Text Classifier is OpenAI's 2023 model fine-tuned to distinguish between human-written and AI-generated text <cit.>. The model is trained on text generated by 34 models from 5 different organizations and provides 5 categories for the results based on its internal probabilities. Table <ref> shows how the different classes are interpreted as probabilities; the interpretations are based on the final category, not the internal model. Usage of this classifier requires at least 1,000 characters. GPTKit uses an ensemble of 6 other models, including DistilBERT <cit.>, GLTR, Perplexity, PPL, RoBERTa <cit.>, and RoBERTa (base).
The predictions of these models are combined to form an overall probability that a text is LLM-generated; however, the exact weight given to each component is unclear. The detector claims an accuracy of 93% based on testing on a dataset of 100K+ responses <cit.>. CheckForAI claims to combine the GPT-2 Output Detector with custom models to help limit false readings <cit.>. The detector also supports account sign-up, history storage, and file uploads. It provides four classes for the probability of a text, as shown in Table <ref>, and is currently limited to 2,500 characters. CopyLeaks offers products for plagiarism and AI content detection targeted broadly at individuals, educators, and enterprises. The detector highlights paragraphs written by a human and by AI. CopyLeaks also claims detection across multiple languages, including Spanish (tested in this paper), and claims an accuracy of 99.12% <cit.>. The detector is currently publicly available <cit.>. Originality.AI is a detector targeted at content publishers. The detector is available through a commercial sign-up page <cit.> with a minimum fee of $20; we received research access for the analysis of the detector. The detector comes with API access and a number of additional features for content creators. A self-reported study by Originality on ChatGPT suggests that the detector has an accuracy of 98.65% <cit.>. We did not impose a systematic approach <cit.> to discover publicly available LLM-generated text detectors: most of the detectors are recent and cannot easily be found on the internet or in academic papers, so a systematic approach might cover fewer results.
§.§ Addressing the RQ: Effectiveness of LLM-generated text detectors
A detector is only worth using if it is reasonably effective. We addressed the RQ by comparing the detectors listed in Table <ref> under three metrics: accuracy, false positives, and resilience. Instructors prefer detectors that are reasonably accurate, report a minimal number of false positives, and are resilient to disguises. Accuracy refers to how effective the detectors are at identifying LLM-generated texts. We present all accuracy results using two measures, as we have found that using only one measure may be misleading about some aspect of the results. The first method (averages) takes the average prediction of each detector across a dataset. As discussed in the discovery section, each detector either provides a probability that a text is LLM-generated or a category that represents such a probability; we apply our category-to-probability conversion tables to obtain a probability for each detector, and these probabilities are averaged for the final results. The second method (thresholds) is calculated as the proportion of correctly classified texts, measured as the number of texts that correctly receive a score above or below 50% out of the total number of texts. This measure is strict, so a prediction of exactly 50% is always considered incorrect. False positives are original submissions that are flagged as suspect by LLM-generated text detectors; fewer false positives are preferred. For this metric, we used the student submissions collected before the release of ChatGPT (2016-2018) and measured their degree of originality with the detectors, and any suspected submissions (originality degree below 50%) were counted as false positives.
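Both accuracy measures, as well as the false-positive counts, require a probability from each detector. For GLTR, which does not natively report one, we use the conversion described in the detector descriptions above; a minimal sketch is given below. The function name is ours, and the orientation of the underlying GLTR statistic (and hence whether the output is read as the probability of AI-generated or of human-written text) is an assumption of the sketch rather than a detail fixed in the text.

```python
import math

# Calibration values estimated from 20 human and 20 ChatGPT submissions (see the GLTR description above).
MU, S = 35.33, 15.68

def gltr_to_probability(x, mu=MU, s=S):
    """Map a per-text GLTR statistic x to a (0, 1) prediction via its standard score."""
    z = (x - mu) / s                     # standard score against the calibration sample
    return 1.0 / (1.0 + math.exp(-z))    # sigmoid of the standard score
```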
Resilience refers to how well LLM-generated text detectors withstand attempts to disguise LLM-generated text. Some students might disguise their LLM-generated texts to avoid getting caught. QuillBot <cit.> is an AI-based paraphrasing tool that rewords writing; we paraphrased 10 ChatGPT submissions through QuillBot and measured the results. It is worth noting that measuring the effectiveness of LLM-generated text detectors is time-consuming and labour-intensive. Further, some detectors do not support API integration, so the authors needed to manually copy and paste each test case.
§.§ Summarizing our experience using the LLM-generated text detectors
We also report our experience in using the LLM-generated text detectors. Several aspects are considered: intuitiveness, clarity of documentation, extendability, variety of inputs, quality of reports, number of supported languages, and pricing.
§ RESULTS
This section discusses our findings from addressing the research question and our experience using the LLM-generated text detectors.
§.§ Addressing the RQ: Effectiveness of LLM-generated Text Detectors
Table <ref> shows the accuracy of each detector across human and ChatGPT data using the threshold method. The data shows CopyLeaks to be the most accurate LLM-generated text detector, with an accuracy of 97.06%, followed by the GPT-2 Output Detector/CheckForAI (96.62%), GLTR (88.73%), GPTKit (87.50%), OpenAI's detector (77.37%), and GPTZero (49.69%). Table <ref> shows the results using averages instead of thresholds. Here CopyLeaks provides the best probabilities (99.53%), followed by CheckForAI (96.56%), the GPT-2 Output Detector (96.29%), GPTKit (82.09%), OpenAI's detector (82%), OriginalityAI (76.63%), GLTR (65.84%), and GPTZero (64.47%). The data in Tables <ref> and <ref> are both normally distributed, verified using the Shapiro-Wilk and Kolmogorov-Smirnov tests, so no correction needed to be applied. Overall, from the t-tests (Table <ref>: t = 1.67 and p = 0.116; Table <ref>: t = 1.154, p = 0.268; both with 14 degrees of freedom) we did not find significant differences in the accuracy of LLM-generated text detectors between human and ChatGPT data. Table <ref> shows the false-positive results on the human data from the databases and networking assignments. GPTKit is the only detector that achieved no false positives across the entire set of human submissions, followed by CopyLeaks (1), the GPT-2 Output Detector/CheckForAI (2), OpenAI's detector (6), OriginalityAI (7), GLTR (20), and finally GPTZero (52). A further investigation of GPTKit, which appears to be the best detector for avoiding false positives, shows that it is still prone to them. While none of our original test samples appeared more than 50% fake, we found that some submissions score up to 37% fake with GPTKit, and in some cases removing the last paragraph(s) from these submissions led to a false positive. Figures <ref> and <ref> show such a case. We note that in this case the output of GPTKit also shows that the detector merged separate paragraphs into a single one; this unexpected merge may contribute to the problem. Table <ref> shows the results for 10 ChatGPT papers before and after the QuillBot paraphraser, measured using overall accuracy. The GLTR detector was the most resilient, with none of its predictions changing, although it is worth noting that the overall weighted result of GLTR still decreased by 10%, even though the change did not affect the accuracy.
In contrast, the rest of the detectors saw a significant drop following the QuillBot transformation. Figures <ref> and <ref> show an example of a ChatGPT data point that went from 98% before QuillBot to 5% after QuillBot on Originality. Tables <ref> and <ref> show results from the capstone course data, written in Spanish. We found that CopyLeaks and the AI Text Classifier tend to always output "fake" predictions on the AI data, whereas the GPT-2 Output Detector, GPTZero, CheckForAI, GLTR, GPTKit, and Originality tend to output "human" predictions. The data in Tables <ref> and <ref> are both normally distributed, verified using the Shapiro-Wilk and Kolmogorov-Smirnov tests, so no correction needed to be applied. Overall, from the t-tests (Table <ref>: t = 1.766 and p = 0.099; Table <ref>: t = 1.862, p = 0.084; both with 14 degrees of freedom) we did not find significant differences in the accuracy of LLM-generated text detectors between human (Spanish text) and ChatGPT (Spanish text) data. The GLTR detector shows an interesting mild success with Spanish data: the average top-k score on human data was 104, while the average top-k score on ChatGPT data was 85. When we changed our GLTR wrapper to use a mean of 94.5 for the top-k score, GLTR achieved the highest accuracy, 65%, on Spanish text.
§.§ Our experience using the LLM-generated text detectors
Generally, many LLM-generated text detectors are intuitive to use, similar to many online similarity detectors for identifying text plagiarism <cit.>. They have a web-based interface where a user can paste the text whose originality they want to check; GPTZero and CheckForAI additionally allow their users to upload a document. While there are a number of LLM-generated text detectors, only two of them have their technical reports publicly available (GPT-2 Output Detector <cit.> and GLTR <cit.>). This is possibly due to at least two reasons: first, technical reports might be misused by some individuals to trick the detectors; second, some detectors are commercial. Most LLM-generated text detectors do not facilitate API integration; GPTZero, GPTKit, OriginalityAI, and CopyLeaks provide such a feature for a fee. Without API integration, it is challenging to integrate the detectors into existing teaching environments, especially learning management systems, and the detectors are unlikely to be used on their own at scale since manual checking is labour-intensive. As many of the detectors are commercial, their code is not publicly available, which makes it difficult for instructors to further develop the detectors to fit their particular needs; the only open-source detectors are the GPT-2 Output Detector and GLTR. The detectors are also limited in the input formats they support. Most of them only accept raw text pasted into a form, making them difficult to automate. The PDF parsers that we attempted to use often parsed content in an incorrect order and had a tendency to include unwanted characters, so we had to write custom scripts to convert all information to plain text. Detection results are challenging to interpret. Detectors attempt to combat this problem by highlighting content that is more likely to be AI-generated; Table <ref> shows the highlighting support each detector provides, on either a paragraph, sentence, or word basis. While highlighting does seem to mitigate some barriers, we found that the highlighting feature can still be misleading.
This was particularly evident in GPTZero, which highlighted 52 human submissions as either possibly or entirely AI-generated. Figure <ref> shows a sample human report where some sentences were highlighted as more likely to be written by AI; it is unclear what makes the highlighted text more likely to be written by AI than the other sentences. In terms of output quality, the detectors are limited in their ability to export results, although some were more effective than others. We provided screenshots of GPTKit, GPTZero, and Originality in this report since they gave more detailed results and it was easier to screenshot the results along with the text, in contrast to the other detectors; it was more challenging to show full results for the other detectors as they did not allow side-by-side results. Most LLM-generated text detectors only support English as the language of LLM-generated text. While one can still send text in other languages, the results do not appear meaningful, as we previously showed. As many LLM-generated text detectors are commercial and relatively new, there appear to be mostly individual pricing options. GPTZero and CopyLeaks, for instance, have business pricing; GPTZero currently offers a subscription plan for business users at $19.99 USD per month. These detectors might be far less accessible for instructors living in countries with weaker currencies, since the pricing options are only available in USD.
§ DISCUSSION
The current state of LLM-generated text detectors suggests that they are not yet ready to be trusted blindly for academic integrity purposes or as reliable plagiarism detectors in the way that Turnitin, MOSS, or JPlag are. Our study demonstrates that many of the newer detectors under-perform compared to the GPT-2 Output Detector and GLTR, which are older and freely available detectors from 2019. At first glance, it appears that LLM-generated text detectors are fairly accurate, with human data being correctly detected ∼89.59% of the time[this percentage is the average accuracy for human data using Tables <ref> and <ref>.], while the average accuracy for ChatGPT-generated data is substantially lower, ∼77.42%[this percentage is the average accuracy for ChatGPT-generated data using Tables <ref> and <ref>.]. Upon deeper inspection, it is apparent that the number of potential false positives can lead to a wide array of issues, especially if the detectors are trusted for plagiarism detection at educational institutions. Delving further, when a paraphraser (in this case, QuillBot) is utilized, the average accuracy is slightly reduced for human data, ∼89.02%[this percentage is the average accuracy for human data using Tables <ref> and <ref>.], but substantially reduced for ChatGPT-generated data, ∼49.17%[this percentage is the average accuracy for ChatGPT-generated data using Tables <ref> and <ref>.]. This means that in more than half of all cases, ChatGPT-generated data cannot be correctly identified by these detectors. Though some detectors perform better than others (e.g., GLTR), this is still a serious concern for users of these detectors. Additionally, once non-English languages are introduced, these detectors degrade quickly. We investigate submissions made in Spanish and see that the average accuracy for human data drops to ∼70.99%[this percentage is the average accuracy for human data using Tables <ref> and <ref>.], while that for ChatGPT-generated data drops to an abysmal ∼17.50%[this percentage is the average accuracy for ChatGPT-generated data using Tables <ref> and <ref>.].
Though only Spanish was investigated, this highlights the need for additional research into alternative (non-English) languages. Presently, all LLM-generated text detectors struggle with languages other than English, with code, and with special symbols, resulting in fairly inaccurate results. For clarity, it would be ideal for these detectors to explicitly state their limitations and to default to "human" predictions in such cases. In terms of usability, LLM-generated text detectors need some improvements. Although they are intuitive to use and generate acceptable reports, many of them are not well documented at a technical level, some do not have APIs, making them more difficult to integrate into local and larger systems (e.g., Learning Management Systems), and their support is limited. Furthermore, some of these detectors require processing fees. From our results, LLM-generated text detectors also appear to lack understandability. We are aware that all of these detectors leverage similar large language models for detection purposes, but they might differ in their technical implementation, parameters, pre-training data, and so on; these details are unlikely to be revealed since most of the detectors are for commercial use and thus proprietary. While some detectors highlight sentences that are more likely to be AI-generated (Table <ref>), the results produced by the detectors are not clear enough for their users.
§ THREATS TO VALIDITY
Our study has several threats to validity:
* The findings of the study reflect detector results that are accurate as of April 2023. The detectors are volatile, and their owners could update the models; results could change based on updates to the LLM-generated text detectors.
* Accuracy, false positives, and resilience were arguably sufficient to represent effectiveness; however, additional findings could be obtained by considering other effectiveness metrics.
* The data sets were obtained from two institutions, one using English as the operational language and the other using Spanish. The findings might therefore not generalize to other institutions, especially those with different operational languages.
* While we believe that the data sets are sufficient to support our findings, we acknowledge that more data sets would strengthen them.
§ CONCLUSION
This paper examines eight LLM-generated text detectors on the basis of effectiveness. It shows that while the detectors achieve reasonable accuracy, they are still prone to flaws and their outputs can be challenging for humans to interpret. Ultimately, LLM-generated text detectors, while not yet reliable for academic integrity or plagiarism detection, show relatively accurate results for human-generated data compared to ChatGPT-generated data. However, false positives are a significant concern, especially when the detectors are used for plagiarism detection in educational institutions. When a paraphrasing tool like QuillBot is employed, the accuracy decreases for both human and ChatGPT-generated data, and the detectors struggle with non-English languages, resulting in even lower accuracy. It is crucial for these detectors to acknowledge their limitations and aim for improved performance in various language contexts.
§.§ Future Work
Future detectors could attempt to incorporate a combination of metrics alongside their accuracy.
A combination of many factors, alongside accuracy and false-positive rates, may give educators better insight into the predictions. This could include text-based features such as burstiness and repetition as well as AI-learned features such as probabilities. The detectors could further be fine-tuned for specific domains to improve their reliability. Additionally, there is a fundamental need for accurate and understandable LLM-generated text detectors to be available to educators to combat the rising academic integrity concerns posed by publicly available LLMs. It is also important for researchers to contact the creators of these detectors, both to better understand the issues and needs of end users and to facilitate a deeper conversation about the functionality and correctness of their instruments. Finally, there is an apparent need to investigate the use of these detectors with non-English languages, as large language models, like the one(s) used by ChatGPT, can produce content in languages other than English.
http://arxiv.org/abs/2307.04318v1
20230710032008
Two-Sample and Change-Point Inference for Non-Euclidean Valued Time Series
[ "Feiyu Jiang", "Changbo Zhu", "Xiaofeng Shao" ]
stat.ME
[ "stat.ME" ]
Feiyu Jiang^1, Changbo Zhu^2 [Corresponding author. Email address: [email protected].], Xiaofeng Shao^3
[1] Department of Statistics and Data Science, Fudan University
[2] Department of Applied and Computational Mathematics and Statistics, University of Notre Dame
[3] Department of Statistics, University of Illinois at Urbana-Champaign
Two-Sample and Change-Point Inference for Non-Euclidean Valued Time Series
Data objects taking values in a general metric space have become increasingly common in modern data analysis. In this paper, we study two important statistical inference problems, namely two-sample testing and change-point detection, for such non-Euclidean data under temporal dependence. Typical examples of non-Euclidean valued time series include yearly mortality distributions, time-varying networks, and covariance matrix time series. To accommodate unknown temporal dependence, we advance the self-normalization (SN) technique <cit.> to the inference of non-Euclidean time series, which is substantially different from the existing SN-based inference for functional time series that reside in Hilbert space <cit.>. Theoretically, we propose new regularity conditions that could be easier to check than those in the recent literature, and derive the limiting distributions of the proposed test statistics under both the null and local alternatives. For the change-point detection problem, we also derive the consistency of the change-point location estimator, and combine our proposed change-point test with wild binary segmentation to perform multiple change-point estimation. Numerical simulations demonstrate the effectiveness and robustness of our proposed tests compared with existing methods in the literature. Finally, we apply our tests to two-sample inference in mortality data and change-point detection in cryptocurrency data.
§ INTRODUCTION
Statistical analysis of non-Euclidean data that reside in a metric space is gradually emerging as an important branch of functional data analysis, motivated by the increasing prevalence of such data in many modern applications. Examples include the analysis of sequences of age-at-death distributions over calendar years <cit.>, covariance matrices in the analysis of diffusion tensors in medical imaging <cit.>, and graph Laplacians of networks <cit.>. One of the main challenges in dealing with such data is that the usual vector/Hilbert space operations, such as projection and inner product, may not be well defined, and only the distance between two non-Euclidean data objects is available. Despite the challenge, the list of papers that propose new statistical techniques to analyze non-Euclidean data has been growing.
Building on the Fréchet mean and variance <cit.>, which are counterparts of the mean and variance for metric space valued random objects, <cit.> proposed a test for comparing N(≥ 2) populations of metric space valued data. <cit.> developed a novel test to detect a change point in the Fréchet mean and/or variance in a sequence of independent non-Euclidean data. Classical linear and nonparametric regression have also been extended to metric space valued data; see <cit.>, <cit.>, and <cit.>, among others. So far, the majority of the literature on non-Euclidean data has been limited to independent data, and the only exceptions are <cit.> and <cit.>, which mainly focused on the autoregressive modeling of non-Euclidean valued time series. To the best of our knowledge, no inferential tools are available for non-Euclidean valued time series in the literature. In this paper, we address two important problems, two-sample testing and change-point detection, in the analysis of non-Euclidean valued time series. These two problems are also well motivated by the data we analyze in this paper, namely, the yearly age-at-death distributions for countries in Europe and daily Pearson correlation matrices for five cryptocurrencies. For time series data, serial dependence is the rule rather than the exception. This motivates us to develop new tests for non-Euclidean time series that are robust to temporal dependence. Note that the two testing problems have been addressed by <cit.> and <cit.>, respectively, for independent non-Euclidean data, but, as expected, their tests fail to control the size when there is temporal dependence in the series; see Section <ref> for simulation evidence. To accommodate unknown temporal dependence, we develop test statistics based on self-normalization <cit.>, which is a nascent inferential technique for time series data. It has been mainly developed for vector time series and has been extended to functional time series in Hilbert space <cit.>. The functional extension is, however, based on reducing the infinite-dimensional functional data to finite dimensions via functional principal component analysis, and then applying SN to the finite-dimensional vector time series. Such SN-based inference developed for time series in Hilbert space cannot be applied to non-Euclidean valued time series, since the projection and inner product commonly used for data in Hilbert space are not available for data objects that live in a general metric space. The SN-based extension to non-Euclidean valued time series is therefore fairly different from that in <cit.> and <cit.>, in terms of both methodology and theory. For independent non-Euclidean valued data, <cit.> build on empirical process theory <cit.> by regulating the complexity of the analyzed metric space, which is in general abstract and may not be easy to verify. In our paper, we take a different approach that is inspired by the M-estimation theory in <cit.> and <cit.> for Euclidean data, and extend it to the non-Euclidean setting. We assume that the metric distance between the data and the estimator of the Fréchet mean admits a certain decomposition, which includes a bias term, a leading stochastic term, and a remainder term. Our technical assumptions are more intuitive and could be easier to check in practice.
Furthermore, under our assumptions we are able to obtain explicit asymptotic distributions of our test statistics under local alternatives of rate O(n^-1/2), where n is the sample size, whereas these seem difficult to derive under the entropy integral type conditions employed by <cit.>. The remainder of the paper is organized as follows. Section <ref> provides background on the non-Euclidean metric spaces in which the random objects of interest reside, and some basic assumptions that will be used throughout the paper. Section <ref> proposes SN-based two-sample tests for non-Euclidean time series. Section <ref> considers SN-based change-point tests. Numerical studies for the proposed tests are presented in Section <ref>, and Section <ref> demonstrates the applicability of these tests through real data examples. Section <ref> concludes. Proofs of all results are relegated to Appendix <ref>. Appendix <ref> summarizes the examples that satisfy the assumptions in Section <ref>, and Appendix <ref> provides simulation results for functional time series. Some notation used throughout the paper is defined as follows. Let ‖·‖ denote the conventional Euclidean norm. Let D[0,1] denote the space of functions on [0, 1] which are right continuous with left limits, endowed with the Skorokhod topology <cit.>. We use ⇒ to denote weak convergence in D[0,1] or, more generally, in the ℝ^m-valued function space D^m[0,1], where m∈ℕ; →_d to denote convergence in distribution; and →_p to denote convergence in probability. A sequence of random variables X_n is said to be O_p(1) if it is bounded in probability. For x∈ℝ, define ⌊ x⌋ as the largest integer that is smaller than or equal to x, and ⌈ x ⌉ as the smallest integer that is greater than or equal to x. § PRELIMINARIES AND SETTINGS In this paper, we consider a metric space (Ω,d) that is totally bounded, i.e. for any ϵ>0, there exists a finite number of open ϵ-balls whose union covers Ω. For a sequence of stationary random objects {Y_t}_t∈ℤ defined on (Ω,d), we follow <cit.> and define their Fréchet mean and variance by μ=argmin_ω∈Ω𝔼d^2(Y_t,ω), V=𝔼d^2(Y_t,μ), respectively. The Fréchet mean extends the traditional mean in linear spaces to more general metric spaces by minimizing the expected squared metric distance between the random object Y_t and the centroid, akin to the conventional mean, which minimizes the expected sum of squared residuals. It is particularly useful for objects that lie in abstract spaces without explicit algebraic structure. The Fréchet variance, defined as this expected squared metric distance, is then used to measure the dispersion in the data. Given finite samples {Y_t}_t=1^n, we define their Fréchet subsample mean and variance as μ̂_[a,b]=argmin_ω∈Ω∑_t=1+⌊ na⌋^⌊ nb⌋d^2(Y_t,ω), V̂_[a,b]=1/⌊ nb⌋-⌊ na⌋∑_t=1+⌊ na⌋^⌊ nb⌋d^2(Y_t,μ̂_[a,b]), where (a,b)∈ℐ_η, ℐ_η={(a,b): 0≤ a<b≤ 1, b-a≥η} for some trimming parameter η∈(0,1). The case corresponding to a=0 and b≥η is further denoted as μ̂_[0,b]=μ̂_b, V̂_[0,b]=V̂_b, with the special case b=1 corresponding to the Fréchet sample mean and variance <cit.>, respectively. Note that both the Fréchet (subsample) mean and variance depend on the space Ω and the metric d, which require further regulation for the desired inferential purposes. In this paper, we do not impose independence assumptions, and our technical treatment differs substantially from those in the literature, cf. <cit.>. μ is unique, and for some δ>0, there exists a constant K>0 such that inf_d(ω, μ)<δ{𝔼(d^2(Y_0, ω))-𝔼(d^2(Y_0, μ))-K d^2(ω, μ)}≥ 0.
For any (a,b)∈ℐ_η, μ̂_[a,b] exists and is unique almost surely. For any ω∈Ω, and (a,b)∈ℐ_η, as n→∞, 1/⌊ nb⌋-⌊ na⌋∑_t=⌊ na⌋+1^⌊ nb⌋[d^2(Y_t,ω)-𝔼d^2(Y_t,ω)]→_p 0. For some constant σ>0, 1/√(n)∑_t=1^⌊ nr⌋(d^2(Y_t,μ)-V)⇒σ B(r), r∈(0,1], where B(·) is a standard Brownian motion. Let B_δ(μ) ⊂Ω be a ball of radius δ centered at μ. For ω∈ B_δ(μ), i.e. d(ω,μ)≤δ, we assume the following expansion d^2(Y_t,ω)-d^2(Y_t,μ)= K_dd^2(ω,μ)+ g(Y_t,ω,μ)+R(Y_t,ω,μ), t∈ℤ, where K_d∈(0,∞) is a constant, and g(Y_t,ω,μ) and R(Y_t,ω,μ) satisfy that, as n→∞, sup_(a,b)∈ℐ_ηsup_ω∈ B_δ(μ)| n^-1/2∑_t=⌊ n a⌋+1^⌊ n b⌋ g(Y_t,ω,μ)/d(ω,μ)|=O_p(1), and sup_(a,b)∈ℐ_ηsup_ω∈ B_δ(μ)|n^-1/2∑_t=⌊ n a⌋+1^⌊ n b⌋ R(Y_t,ω,μ)/{d(ω,μ)+n^1/2d^2(ω,μ)}|→_p 0, respectively. Several remarks are in order. Assumptions <ref>-<ref> are standard and similar conditions can be found in <cit.> and <cit.>. Assumptions <ref> and <ref> are adapted from Assumption (A1) in <cit.>, and are required for identification purposes. In particular, Assumption <ref> requires that the expected squared metric distance 𝔼d^2(Y_t,ω) can be well separated from the Fréchet variance, and the separation is quadratic in terms of the distance d(ω,μ). Assumption <ref> is useful for obtaining the uniform convergence of the subsample estimate of the Fréchet mean, i.e., μ̂_[a,b], which is a key ingredient in forming the self-normalizer in SN-based inference. Assumption <ref> is a pointwise weak law of large numbers, cf. Assumption (A2) in <cit.>. Assumption <ref> requires an invariance principle to hold in order to regulate the partial sum that appears in the Fréchet subsample variances. Note that d^2(Y_t,ω) takes values in ℝ for any fixed ω∈Ω, thus both Assumptions <ref> and <ref> could be implied by high-level weak temporal dependence conditions (e.g., strong mixing) in conventional Euclidean space, see <cit.> for discussions. <Ref> distinguishes our theoretical analysis from the existing literature. Its idea is inspired by <cit.> and <cit.> for M-estimators. In the conventional Euclidean space, i.e. (Ω,d)=(ℝ^m,‖·‖) for m≥ 1, it is easy to see that the expansion in <Ref> holds with K_d=1, g(Y_t,ω,μ)=2(μ-ω)^⊤(Y_t-μ) and R(Y_t,ω,μ)≡ 0. In more general cases, Assumption <ref> can be interpreted as the expansion of d^2(Y_t,ω) around the target value d^2(Y_t,μ). In particular, K_dd^2(ω,μ) can be viewed as the bias term, g(Y_t,ω,μ) works as the asymptotically leading term that is proportional to the distance d(ω,μ), while R(Y_t,ω,μ) is the asymptotically negligible remainder term. More specifically, after suitable normalization, it reads as n^-1/2∑_t=⌊na⌋+1^⌊nb⌋ [d^2(Y_t,ω)-d^2(Y_t,μ)] = n^1/2(b-a)K_dd^2(ω,μ) (bias term) + d(ω,μ)· n^-1/2∑_t=⌊na⌋+1^⌊nb⌋g(Y_t,ω,μ)/d(ω,μ) (stochastic term) + n^-1/2∑_t=⌊na⌋+1^⌊nb⌋ R(Y_t,ω,μ) (remainder term). The verification of this assumption can then be done by analyzing each term. In comparison, the existing literature, e.g. <cit.>, <cit.>, imposes assumptions on the complexity of (Ω,d). These assumptions typically involve the behaviors of entropy integrals and covering numbers rooted in empirical process theory <cit.>, which are abstract and difficult to check in practice, see Propositions 1 and 2 in <cit.>. Assumption <ref>, on the contrary, regulates the metric d directly and can be easily checked for the examples below. Moreover, Assumption <ref> is useful for deriving the local power of the tests to be developed in this paper, see Sections <ref> and <ref> for more details.
Examples that can satisfy Assumptions <ref>-<ref> include: * the L_2 metric d_L for Ω being the set of square integrable functions on [0,1]; * the 2-Wasserstein metric d_W for Ω being the set of univariate probability distributions on ℝ; * the Frobenius metric d_F for Ω being the set of square matrices, including the special cases of covariance matrices and graph Laplacians; * the log-Euclidean metric d_E for Ω being the set of covariance matrices. We refer to Appendix <ref> for more details of these examples and verifications of the above assumptions for them. § TWO-SAMPLE TESTING This section considers two-sample testing in metric spaces under temporal dependence. For two sequences of temporally dependent random objects {Y_t^(1),Y_t^(2)}_t∈ℤ on (Ω,d), we denote Y_t^(i)∼ P^(i), where P^(i) is the underlying marginal distribution of Y_t^(i) with Fréchet mean and variance μ^(i) and V^(i), i=1,2. Given finite sample observations {Y_t^(1)}_t=1^n_1 and {Y_t^(2)}_t=1^n_2, we are interested in the following two-sample testing problem, ℍ_0: P^(1)=P^(2),  ℍ_a: P^(1)≠ P^(2). Let n=n_1+n_2; we assume the two samples are balanced, i.e. n_1/n→γ_1 and n_2/n→γ_2 with γ_1,γ_2∈(0,1) and γ_1+γ_2=1, as min(n_1,n_2)→∞. For r∈(0,1], we define their recursive Fréchet sample mean and variance by μ̂^(i)_r=argmin_ω∈Ω∑_t=1^⌊ rn_i⌋d^2(Y_t^(i),ω), V̂^(i)_r=1/⌊ rn_i⌋∑_t=1^⌊ rn_i⌋d^2(Y_t^(i),μ̂^(i)_r), i=1,2. A natural candidate test of ℍ_0 is to compare the Fréchet sample means and variances by contrasting (μ̂^(1)_1,V̂^(1)_1) and (μ̂^(2)_1,V̂^(2)_1). For the mean part, it is tempting to use d(μ̂^(1)_1,μ̂^(2)_1) as the testing statistic. However, this is a non-trivial task as the limiting behavior of d(μ̂^(1)_1,μ̂^(2)_1) depends heavily on the structure of the metric space, which may not admit conventional algebraic operations. Fortunately, both V̂^(1)_1 and V̂^(2)_1 take values in ℝ, and it is thus intuitive to compare their difference. In fact, <cit.> propose the test statistic of the form U_n= n_1n_2(V̂^(1)_1-V̂^(2)_1)^2/(nσ̂_1^2σ̂_2^2), where σ̂_i^2 is a consistent estimator of lim_n_i→∞Var{√(n)(V̂^(i)_1-V^(i))}, i=1,2. However, U_n requires both within-group and between-group independence, which is too stringent to be realistic for the applications in this paper. When either form of independence is violated, the test may fail to control its size; see Section <ref> for numerical evidence. Furthermore, taking into account the temporal dependence requires replacing the variance by the long-run variance, whose consistent estimation usually involves laborious tuning, such as the choice of kernels and bandwidths <cit.>. To this end, we invoke the self-normalization technique to bypass the foregoing issues. The core principle of self-normalization for time series inference is to use an inconsistent long-run variance estimator that is a function of recursive estimates to yield an asymptotically pivotal statistic. The SN procedure does not involve any tuning parameter or involves fewer tuning parameters compared to traditional counterparts. See <cit.> for a comprehensive review of recent developments for low dimensional time series. For recent extensions to inference for high-dimensional time series, we refer to <cit.> and <cit.>.
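Before introducing the test statistics, the following minimal sketch (in Python) illustrates how the Fréchet subsample mean and variance defined above can be computed for the 2-Wasserstein example, where each distribution is represented by its quantile function evaluated on a common grid, in which case the Fréchet mean corresponds to the pointwise average of quantile functions. The data representation, grid, and function name are illustrative assumptions, not part of the paper's implementation.

import numpy as np

def frechet_subsample_mean_var(Q, a, b):
    # Q: array of shape (n, m); row t stores the quantile function of Y_t
    #    evaluated on a common grid of m probability levels in (0, 1).
    # (a, b): fractions with 0 <= a < b <= 1 delimiting the subsample.
    n, m = Q.shape
    lo, hi = int(np.floor(n * a)), int(np.floor(n * b))
    block = Q[lo:hi]                       # observations Y_{⌊na⌋+1}, ..., Y_{⌊nb⌋}
    mu_hat = block.mean(axis=0)            # Wasserstein Fréchet mean: average quantile function
    # squared 2-Wasserstein distance = integrated squared quantile difference
    d2 = np.trapz((block - mu_hat) ** 2, dx=1.0 / (m - 1), axis=1)
    return mu_hat, d2.mean()               # Fréchet subsample mean and variance

For the other metrics listed above, the minimization defining μ̂_[a,b] would typically be carried out numerically (e.g., an entrywise average for the Frobenius metric, or an average on the matrix-logarithm scale for the log-Euclidean metric).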
§.§ Test Statistics Define the recursive subsample test statistic based on the Fréchet variance as T_n(r)=r(V̂^(1)_r-V̂^(2)_r), r∈ [η,1], and then construct the SN-based test statistic as D_n,1=n[T_n(1)]^2/∑_k=⌊ nη⌋^n [T_n(k/n)-(k/n)T_n(1)]^2, where η∈(0,1) is a trimming parameter for controlling the estimation effect of T_n(r) when r is close to 0, which is important for deriving the uniform convergence of {√(n)T_n(r), r∈[η,1]}; see <cit.> and <cit.> for similar technical treatments. The testing statistic (<ref>) is composed of the numerator n[T_n(1)]^2, which captures the difference in Fréchet variances, and the denominator ∑_k=⌊ nη⌋^n [T_n(k/n)-(k/n)T_n(1)]^2, which is called the self-normalizer and mimics the behavior of the numerator with suitable centering and trimming. For each r∈[η,1], T_n(r) is expected to be a consistent estimator for r(V^(1)-V^(2)). Therefore, under ℍ_a, T_n(1) is large when there is a significant difference in Fréchet variances, whereas the key element T_n(r)-rT_n(1) in the self-normalizer remains small. This suggests that we should reject ℍ_0 for large values of D_n,1. Note that (<ref>) only targets differences in Fréchet variances. To detect differences in Fréchet means, we can use the contaminated Fréchet variance <cit.>. Let V̂^C,(1)_r=1/⌊ rn_1⌋∑_t=1^⌊ rn_1⌋d^2(Y_t^(1),μ̂^(2)_r), and V̂^C,(2)_r=1/⌊ rn_2⌋∑_t=1^⌊ rn_2⌋d^2(Y_t^(2),μ̂^(1)_r), and T_n^C(r)=r(V̂^C,(1)_r+V̂^C,(2)_r-V̂^(1)_r-V̂^(2)_r). The contaminated Fréchet sample variances V̂^C,(1)_r and V̂^C,(2)_r switch the roles of μ̂_r^(1) and μ̂_r^(2) in V̂^(1)_r and V̂^(2)_r, respectively, and could be viewed as proxies for measuring Fréchet mean differences. Intuitively, it is expected that V̂^C,(i)_r≈𝔼d^2(Y_t^(i),μ^(3-i)), and V̂^(i)_r≈𝔼d^2(Y_t^(i), μ^(i)), i=1,2. Under ℍ_0, both μ̂_r^(1) and μ̂_r^(2) are consistent estimators for μ^(1)=μ^(2), thus V̂^C,(i)_r≈V̂^(i)_r, i=1,2, which indicates a small value for T_n^C(r). On the contrary, when d(μ^(1),μ^(2))>0, V̂^C,(i)_r could be much larger than V̂^(i)_r as 𝔼d^2(Y_t^(i),μ^(3-i))>𝔼d^2(Y_t^(i),μ^(i))=min_ω∈Ω𝔼d^2(Y_t^(i),ω), i=1,2, resulting in a large value of T_n^C(r). The power-augmented test statistic is thus defined by D_n,2=n{[T_n(1)]^2+[T_n^C(1)]^2}/∑_k=⌊ nη⌋^n {[T_n(k/n)-(k/n)T_n(1)]^2+ [T_n^C(k/n)-(k/n)T_n^C(1)]^2}, where the additional term ∑_k=⌊ nη⌋^n [T_n^C(k/n)-(k/n)T_n^C(1)]^2 that appears in the self-normalizer is used to stabilize the finite sample performance (a computational sketch of D_n,1 and D_n,2 is given below). Our proposed tests could be adapted to the comparison of N-sample populations <cit.>, where N≥ 2. A natural way of extension would be to aggregate all the pairwise differences in Fréchet variance and contaminated variance. Specifically, let the N groups of random data objects be {Y_t^(i)}_t=1^n_i, i=1,⋯,N. The null hypothesis is given as ℍ_0: P^(1)=⋯=P^(N), for some N≥ 2. Let μ̂^(i)_r and V̂^(i)_r, r∈[η,1], be the Fréchet subsample mean and variance, respectively, for the ith group, i=1,⋯, N. For 1≤ i≠ j≤ N, define the pairwise contaminated Fréchet subsample variance as V̂^C,(i,j)_r=1/⌊ rn_i⌋∑_t=1^⌊ rn_i⌋d^2(Y_t^(i),μ̂^(j)_r), r∈ [η,1], and define the recursive statistics T_n^i,j(r)=r(V̂^(i)_r-V̂^(j)_r), T_n^C,i,j(r)=r(V̂^C,(i,j)_r+V̂^C,(j,i)_r-V̂^(i)_r-V̂^(j)_r), r∈ [η,1].
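The two-sample statistics D_n,1 and D_n,2 can be assembled directly from these recursive Fréchet quantities. The sketch below is an illustrative implementation under the assumption that a routine frechet_mean_var (for instance, a block-wise wrapper of the Wasserstein helper sketched earlier) returns the Fréchet mean and variance of a block of observations and that dist evaluates the metric d; the names and the looping strategy are ours, not the authors' code.

import numpy as np

def sn_two_sample(Y1, Y2, frechet_mean_var, dist, eta=0.15):
    # Y1, Y2: sequences of metric-space valued observations.
    n1, n2 = len(Y1), len(Y2)
    n = n1 + n2
    ks = range(int(np.floor(n * eta)), n + 1)
    T, TC = [], []
    for k in ks:
        r = k / n
        m1, m2 = int(np.floor(r * n1)), int(np.floor(r * n2))
        mu1, V1 = frechet_mean_var(Y1[:m1])
        mu2, V2 = frechet_mean_var(Y2[:m2])
        VC1 = np.mean([dist(y, mu2) ** 2 for y in Y1[:m1]])   # contaminated variances:
        VC2 = np.mean([dist(y, mu1) ** 2 for y in Y2[:m2]])   # swap the two Fréchet means
        T.append(r * (V1 - V2))                               # T_n(r)
        TC.append(r * (VC1 + VC2 - V1 - V2))                  # T_n^C(r)
    T1, TC1 = T[-1], TC[-1]                                   # values at r = 1
    den1 = sum((T[j] - (k / n) * T1) ** 2 for j, k in enumerate(ks))
    den2 = den1 + sum((TC[j] - (k / n) * TC1) ** 2 for j, k in enumerate(ks))
    return n * T1 ** 2 / den1, n * (T1 ** 2 + TC1 ** 2) / den2

The returned values would then be compared with the tabulated critical values of 𝒟_η.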
In the same spirit as the test statistics D_n,1 and D_n,2, for n=∑_i=1^N n_i, we may construct their counterparts for the N-sample testing problem as D^(N)_n,1=n∑_i<j[T_n^i,j(1)]^2/∑_k=⌊ nη⌋^n ∑_i<j[T_n^i,j(k/n)-(k/n)T_n^i,j(1)]^2, and D^(N)_n,2=n∑_i<j{[T_n^i,j(1)]^2+[T_n^C,i,j(1)]^2}/∑_k=⌊ nη⌋^n ∑_i<j{[T_n^i,j(k/n)-(k/n)T_n^i,j(1)]^2+[T_n^C,i,j(k/n)-(k/n)T_n^C,i,j(1)]^2}. Compared with the classical N-sample testing problem in Euclidean spaces, e.g. analysis of variance (ANOVA), the above modifications do not require Gaussianity, equal variance, or serial independence. Therefore, they could work for broader classes of distributions. We leave out the details for the sake of space. §.§ Asymptotic Theory Before we present the asymptotic results of the proposed tests, we need a slightly stronger assumption than Assumption <ref> to regulate the joint behavior of the partial sums for both samples. For some σ_1>0 and σ_2>0, we have 1/√(n)∑_t=1^⌊ nr⌋( d^2(Y_t^(1),μ^(1))-V^(1), d^2(Y_t^(2),μ^(2))-V^(2))^⊤⇒(σ_1B^(1)(r), σ_2B^(2)(r))^⊤, where B^(1)(·) and B^(2)(·) are two standard Brownian motions with unknown correlation parameter ρ∈ (-1,1), and σ_1,σ_2≠ 0 are unknown parameters characterizing the long-run variances. Suppose Assumptions <ref>-<ref> (with <ref> replaced by <ref>) hold for both {Y_t^(1)}_t=1^n_1 and {Y_t^(2)}_t=1^n_2. Then as n→∞, under ℍ_0, for i=1,2, D_n,i→_d ξ^2_γ_1,γ_2(1;σ_1,σ_2)/∫_η^1[ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2)]^2dr:=𝒟_η, where ξ_γ_1,γ_2(r;σ_1,σ_2)=γ_1^-1σ_1B^(1)(γ_1r)-γ_2^-1σ_2B^(2)(γ_2r). Theorem <ref> obtains the same limiting null distribution for the Fréchet variance based test D_n,1 and its power-augmented version D_n,2. Although D_n,2 contains the contaminated variance term T_n^C(1), its contribution is asymptotically vanishing as n→∞. This is an immediate consequence of the fact that sup_r∈[η,1]|√(n)T_n^C(r)|→_p0; see the proof of Theorem <ref> in Appendix <ref>. A similar phenomenon has been documented in <cit.> under different assumptions. We next consider the power behavior under the Pitman local alternative, ℍ_an: V^(1)-V^(2)=n^-κ_VΔ_V, d^2(μ^(1),μ^(2))=n^-κ_MΔ_M, with Δ_V∈ℝ, Δ_M∈(0,∞), and κ_V,κ_M∈ (0,∞). Suppose Assumptions <ref>-<ref> (with <ref> replaced by <ref>) hold for both {Y_t^(1)}_t=1^n_1 and {Y_t^(2)}_t=1^n_2. As n→∞, under ℍ_an, * if max{κ_V,κ_M}∈(0,1/2), then for i=1,2, D_n,i→_p∞; * if min{κ_V,κ_M}∈(1/2,∞), then for i=1,2, D_n,i→_d𝒟_η; * if κ_V=1/2 and κ_M∈(1/2,∞), then for i=1,2, D_n,i→_d (ξ_γ_1,γ_2(1;σ_1,σ_2)+Δ_V)^2/∫_η^1(ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2))^2dr; * if κ_V∈ (1/2,∞) and κ_M=1/2, then D_n,1→_d𝒟_η, and D_n,2→_d (ξ_γ_1,γ_2(1;σ_1,σ_2))^2+4K_d^2Δ_M^2/∫_η^1(ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2))^2dr; * if κ_V=κ_M=1/2, then D_n,1→_d (ξ_γ_1,γ_2(1;σ_1,σ_2)+Δ_V)^2/∫_η^1(ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2))^2dr, D_n,2→_d (ξ_γ_1,γ_2(1;σ_1,σ_2)+Δ_V)^2+4K_d^2Δ_M^2/∫_η^1(ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2))^2dr; where K_d is defined in Assumption <ref>. Theorem <ref> presents the asymptotic behaviors of both test statistics under local alternatives in various regimes. In particular, D_n,1 can detect differences in Fréchet variance at the local rate n^-1/2, but possesses trivial power against Fréchet mean differences regardless of the regime of κ_M. In comparison, D_n,2 is powerful for differences in both Fréchet variance and Fréchet mean at the local rate n^-1/2, which validates our claim that D_n,2 indeed augments power. Our results merit additional remarks when compared with <cit.>.
In <cit.>, they only obtain the consistency of their test under either n^1/2|V^(1)-V^(2)|→∞ or n^1/2d^2(μ^(1),μ^(2))→∞, while Theorem <ref> explicitly characterizes the asymptotic distributions of our test statistics under local alternatives of order O(n^-1/2), which depend on κ_V and κ_M. Such theoretical improvement relies crucially on our newly developed proof techniques based on Assumption <ref>, and it seems difficult to derive such limiting distributions under empirical-process-based assumptions in <cit.>. However, we do admit that self-normalization could result in moderate power loss compared with t-type test statistics, see <cit.> for evidence in Euclidean space. Note that the limiting distributions derived in <Ref> and <Ref> contain a key quantity ξ_γ_1,γ_2(r;σ_1,σ_2) defined in (<ref>), which depends on nuisance parameters σ_1,σ_2 and ρ. This may hinder the practical use of the tests. The following corollary, however, justifies the wide applicability of our tests. Under Assumption <ref>, if either γ_1=γ_2=1/2 or ρ=0, then for any constants C_a,C_b∈ℝ, (ξ_γ_1,γ_2(1;σ_1,σ_2)+C_a)^2+C_b^2/∫_η^1(ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2))^2dr=_d (B(1)+C_a/C_ξ)^2+ (C_b/C_ξ)^2/∫_η^1(B(r)-rB(1))^2dr, where C_ξ=√(2σ_1^2+2σ_2^2-4ρσ_1σ_2), if γ_1=γ_2, √(σ_1^2/γ_1+σ_2^2/γ_2), if ρ=0. Therefore, by choosing C_a=C_b=0 in <Ref>, we obtain the pivotal limiting distribution 𝒟_η=_dB^2(1)/∫_η^1(B(r)-rB(1))^2dr. The asymptotic distributions in <Ref> can be similarly derived by letting either C_a=Δ_V or C_b=2K_dΔ_M. Therefore, when either two samples are of the same length (γ_1=γ_2) or two samples are asymptotically independent (ρ=0), the limiting distribution 𝒟_η is pivotal. In practice, we reject ℍ_0 if D_n,i>Q_𝒟_η(1-α) where Q_𝒟_η(1-α) denotes the 1-α quantile of (the pivotal) D_η. In Table <ref>, we tabulate commonly used critical values under various choices of η by simulating 50,000 i.i.d. 𝒩(0,1) random variables 10,000 times and approximating a standard Brownian motion by standardized partial sum of i.i.d. 𝒩(0,1) random variables. § CHANGE-POINT TEST Inspired by the two-sample tests developed in Section <ref>, this section considers the change-point detection problem for a sequence of random objects {Y_t}_t=1^n, i.e. ℍ_0: Y_1, Y_2, …, Y_n∼ P^(1) against the single change-point alternative, ℍ_a: there exists 0<τ<1 such that Y_t={[ Y_t^(1)∼ P^(1), 1≤ t≤⌊ nτ⌋; Y_t^(2)∼ P^(2), ⌊ nτ⌋ +1≤ t ≤ n. ]. The single change-point testing problem can be roughly viewed as two-sample testing without knowing where the two samples split, and they share certain similarities in terms of statistical methods and theory. Recall the Fréchet subsample mean μ̂_[a,b] and variance V̂_[a, b] in (<ref>), we further define the pooled contaminated variance separated by r∈(a,b) as V̂_[r ; a, b]^C=1/⌊ n r⌋-⌊ n a⌋∑_i=⌊ n a⌋+1^⌊ n r⌋ d^2(Y_i, μ̂_[r, b])+1/⌊ n b⌋-⌊ n r⌋∑_i=⌊ n r⌋+1^⌊ n b⌋ d^2(Y_i, μ̂_[a, r]). Define the subsample test statistics T_n(r ; a, b)=(r-a)(b-r)/b-a(V̂_[a, r]-V̂_[r, b]), and T_n^C(r ; a, b)=(r-a)(b-r)/b-a(V̂_[r ; a, b]^C-V̂_[a, r]-V̂_[r, b]). Note that T_n(r ; a, b) and T_n^C(r ; a, b) are natural extensions of T_n(r) and T_n^C(r) from two-sample testing problem to change-point detection problem by viewing {Y_t}_t=⌊ na⌋+1^⌊ nr⌋ and {Y_t}_t=⌊ nr⌋+1^⌊ nb⌋ as two separated samples. Intuitively, the contrast statistics T_n(r ; a, b) and T_n^C(r ; a, b) are expected to attain their maxima (in absolute value) when r is set at or close to the true change-point location τ. 
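To illustrate the intuition that the contrast statistics peak near the change point, the following sketch evaluates T_n(r; a, b) and T_n^C(r; a, b) over candidate split points on a segment. It reuses the hypothetical helpers frechet_mean_var and dist from the earlier sketches and is offered only as an illustration, not as the authors' implementation.

import numpy as np

def contrast_profile(Y, frechet_mean_var, dist, a=0.0, b=1.0, eta2=0.05):
    # Returns a list of (r, T_n(r; a, b), T_n^C(r; a, b)) over candidate split points r.
    n = len(Y)
    lo, hi = int(np.floor(n * a)), int(np.floor(n * b))
    trim = int(np.floor(n * eta2))
    out = []
    for k in range(lo + trim, hi - trim + 1):
        r = k / n
        left, right = Y[lo:k], Y[k:hi]                 # the two subsamples split at r
        muL, VL = frechet_mean_var(left)
        muR, VR = frechet_mean_var(right)
        # pooled contaminated variance: each side evaluated at the other side's Fréchet mean
        VC = (np.mean([dist(y, muR) ** 2 for y in left])
              + np.mean([dist(y, muL) ** 2 for y in right]))
        w = (r - a) * (b - r) / (b - a)
        out.append((r, w * (VL - VR), w * (VC - VL - VR)))
    return out

Scanning such profiles and self-normalizing them, as formalized next, yields the change-point statistics SN_1 and SN_2.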
§.§ Test Statistics For some trimming parameters η_1 and η_2 such that η_1>2η_2 and η_1∈(0,1/2), in the same spirit as D_n,1 and D_n,2, and with a slight abuse of notation, we define the testing statistics SN_i= max _⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,i(k), i=1,2, where D_n,1(k)= n[T_n(k/n ; 0,1)]^2/{∑_l=⌊nη_2⌋^k-⌊nη_2⌋ [T_n(l/n ; 0, k/n)]^2+ ∑_l=k+⌊nη_2⌋^n-⌊nη_2⌋ [T_n(l/n ; k/n, 1)]^2}, D_n,2(k)= n{[T_n(k/n ; 0,1)]^2+[T_n^C(k/n ; 0,1)]^2}/{L_n(k)+R_n(k)}, with L_n(k)= ∑_l=⌊nη_2⌋^k-⌊nη_2⌋ {[T_n(l/n ; 0, k/n)]^2+[T^C_n(l/n ; 0, k/n)]^2 }, R_n(k)= ∑_l=k+⌊nη_2⌋^n-⌊nη_2⌋ {[T_n(l/n ; k/n, 1)]^2+ [T^C_n(l/n ; k/n, 1)]^2}. The trimming parameter η_1 plays a similar role to η in the two-sample testing problem for stabilizing the estimation effect for relatively small sample sizes, while the additional trimming η_2 is introduced to ensure that the subsample estimates in the self-normalizers are constructed with subsample sizes proportional to n. Furthermore, we note that the self-normalizers here are modified to accommodate the unknown change-point location; see <cit.>, <cit.> for more discussion. §.§ Asymptotic Theory Suppose Assumptions <ref>-<ref> hold. Then, under ℍ_0, we have for i=1,2, SN_i=max _⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,i(k) ⇒sup _r∈[η_1,1-η_1][B(r)-rB(1)]^2/V(r,η):=𝒮_η, where V(r,η)=∫_η_2^r-η_2 [B(u)-u/rB(r)]^2du+∫_r+η_2^1-η_2 [B(1)-B(u)-(1-u)/(1-r){B(1)-B(r)}]^2du. Similar to Theorem <ref>, Theorem <ref> states that both change-point test statistics have the same pivotal limiting null distribution 𝒮_η. The test is thus rejected when SN_i>Q_𝒮_η(1-α), i=1,2, where Q_𝒮_η(1-α) denotes the 1-α quantile of 𝒮_η. In Table <ref>, we tabulate commonly used critical values under various choices of (η_1,η_2) by simulation. Recall that in Theorem <ref> we have obtained the local power of the two-sample tests D_n,1 and D_n,2 at rate n^-1/2. To this end, consider the local alternative ℍ_an: V^(1)-V^(2)=n^-1/2Δ_V, d^2(μ^(1),μ^(2))=n^-1/2Δ_M, where Δ_V∈ℝ and Δ_M∈(0,∞). The following theorem states the asymptotic power behaviors of SN_1 and SN_2. Suppose Assumptions <ref>-<ref> (with <ref> replaced by <ref>) hold. If Δ_V≠ 0 and Δ_M≠ 0 are fixed, then under ℍ_an, if τ∈(η_1,1-η_1), then as n→∞, we have lim_|Δ_V|→∞lim_n→∞{max _⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,1(k)}→_p∞, lim_max{|Δ_V|,Δ_M}→∞lim_n→∞{max _⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,2(k)}→_p∞. We note that <Ref> deals with the alternative involving two different sequences before and after the change-point, while <Ref> only involves one stationary sequence. Therefore, we need to replace <Ref> by <Ref>. <Ref> demonstrates that our tests are capable of detecting local alternatives at rate n^-1/2. In addition, it is seen from Theorem <ref> that SN_1 is consistent under the local alternative of Fréchet variance change as |Δ_V|→∞, while SN_2 is consistent not only under |Δ_V|→∞ but also under the local alternative of Fréchet mean change as Δ_M→∞. Hence SN_2 is expected to capture a wider class of alternatives than SN_1, and these results are consistent with the findings for the two-sample problem in Theorem <ref>. When ℍ_0 is rejected, it is natural to estimate the change-point location by τ̂_i=n^-1k̂_i, k̂_i=argmax_⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,i(k). We will show that the estimators are consistent under the fixed alternative, i.e. ℍ_a: V^(1)-V^(2)=Δ_V. Before that, we need to regulate the behavior of the Fréchet mean and variance under ℍ_a.
Let μ(α)= argmin_ω∈Ω{α𝔼(d^2(Y_t^(1),ω))+(1-α)𝔼(d^2(Y_t^(2),ω))}, V(α)= α𝔼(d^2(Y_t^(1),μ(α)))+(1-α)𝔼(d^2(Y_t^(2),μ(α))), be the limiting Fréchet mean and variance of the mixture distributions indexed by α∈[0,1]. μ(α) is unique for all α∈[0,1], and |V^(2)-V(α)|≥φ(α), |V^(1)-V(α)|≥φ(1-α), where φ(α)≥ 0 is a continuous, strictly increasing function of α∈[0,1] satisfying φ(0)=0 and φ(1)≤ |Δ_V|. The uniqueness of the Fréchet mean and variance for the mixture distribution is also imposed in <cit.>; see Assumption (A2) therein. Furthermore, Assumption <ref> imposes a bi-Lipschitz type condition on V(α), and is used to distinguish the Fréchet variance V(α) under the mixture distribution from V^(1) and V^(2). Suppose Assumptions <ref>-<ref> (with <ref> replaced by <ref>), and Assumption <ref> hold. Under ℍ_a, for i=1,2, we have τ̂_i→_pτ, where τ̂_i is defined in (<ref>). Theorem <ref> obtains the consistency of τ̂_i, i=1,2, when the Fréchet variance changes. We note that it is very challenging to derive the consistency result when ℍ_a is caused by a Fréchet mean change alone, which is partly due to the lack of explicit algebraic structure on (Ω,d) that we can exploit and the use of self-normalization. We leave this problem for future investigation. §.§ Wild Binary Segmentation To detect multiple change-points and identify their locations given the time series {Y_t}_t=1^n, we can combine our change-point test with the so-called wild binary segmentation (WBS) <cit.>. The testing procedure in conjunction with WBS can be described as follows. Let I_M = { (s_m, e_m) }_m=1,2, …, M, where s_m, e_m are drawn uniformly from { 0, 1/n, 1/(n-1), …, 1/2, 1 } such that ⌈ n e_m ⌉ - ⌊ n s_m ⌋≥ 20. Then we simulate J i.i.d. samples, each of size n, from the multivariate Gaussian distribution with mean 0 and identity covariance matrix, i.e., for j=1, 2, …, J, { Z^j_i }_i=1^n i.i.d.∼𝒩(0,1). For the jth sample { Z^j_i }_i=1^n, let D(k; s_m,e_m; {Z_i^j}_i=1^n) be the statistic D_⌊ n e_m ⌋ - ⌈ n s_m ⌉ +1, 2(k) that is computed based on the sample { Z_⌈ n s_m ⌉^j, Z_⌈ n s_m ⌉ + 1^j, …, Z_⌊ n e_m ⌋^j } and ξ_j = max_1 ≤ m ≤ Mmax_⌊ñ_m η_1 ⌋≤ k ≤ñ_m - ⌊ñ_m η_1 ⌋D(k; s_m,e_m; {Z_i^j}_i=1^n), where ñ_m = ⌈ n e_m ⌉ - ⌊ n s_m ⌋ +1. Setting ξ as the 95% quantile of ξ_1, ξ_2, …, ξ_J, we can apply our test in combination with the WBS algorithm to the data sequence {Y_1, Y_2, … Y_n} by running Algorithm <ref> as WBS(0, 1, ξ). The main rationale behind this algorithm is that we exploit the asymptotic pivotality of our SN test statistic: the limiting null distribution of our test statistic applied to random objects is identical to that applied to i.i.d. 𝒩(0,1) random variables. Thus this threshold is expected to approximate well the 95% quantile of the finite sample distribution of the maximum SN test statistic on the M random intervals under the null. § SIMULATION In this section, we examine the size and power performance of our proposed tests in two-sample testing (Section <ref>) and change-point detection (Section <ref>) problems, and provide simulation results for WBS-based change-point estimation (Section <ref>). We refer to Appendix <ref> for additional simulation results regarding comparison with the FPCA approach for two-sample tests in functional time series. The time series of random objects considered in this section include (i) univariate Gaussian probability distributions equipped with the 2-Wasserstein metric d_W; (ii) graph Laplacians of weighted graphs equipped with the Frobenius metric d_F; and (iii)
covariance matrices <cit.> equipped with log-Euclidean metric d_E. Numerical experiments are conducted according to the following data generating processes (DGPs): (i) Gaussian univariate probability distribution: we consider Y_t^(1)=𝒩(arctan (U_t,1),[arctan(U_t,1^2)+1]^2), Y_t^(2)=𝒩(arctan (U_t,2)+δ_1, δ_2^2[arctan(U_t,2^2)+1]^2). (ii) graph Laplacians: each graph has N nodes (N=10 for two-sample test and N=5 for change-point test) that are categorized into two communities with 0.4N and 0.6N nodes respectively, and the edge weight for the first community, the second community and between community are set as 0.4+arctan(U_t,1^2), 0.2+arctan(U_t,1^'2), 0.1 for the first sample Y_t^(1), and δ_2[0.4+arctan(U_t,2^2)], δ_2[0.2+arctan(U_t,2^'2)], 0.1+δ_1 for the second sample Y_t^(2), respectively; (iii) covariance matrix: Y_t^(i)=(2I_3+Z_t,i)(2I_3+Z_t,i)^⊤, i=1,2, such that all the entries of Z_t,1 (resp. Z_t,2) are independent copies of arctan(U_t,1 ) (resp. δ_1+δ_2arctan(U_t,2)). For DGP (i)-(iii), (U_t,1,U_t,2)^⊤ (with independent copies (U'_t,1,U'_t,2)^⊤) are generated according to the following VAR(1) process, ( U_t,1 U_t,2)=ρ( U_t-1,1 U_t-1,2)+ϵ_t, ϵ_ti.i.d.∼𝒩(0,( 1 a a 1 )); where a∈{0,0.5} measures the cross-dependence, and ρ∈{-0.4,0,0.4,0.7} measures the temporal dependence within each sample (or each segment in change-point testing). For size evaluation in change-point tests, only {Y_t^(1)} is used. Furthermore, δ_1∈[0,0.3] and δ_2∈[0.7,1] are used to characterize the change in the underlying distributions. In particular, δ_1 can only capture the location shift, while δ_2 measures the scale change, and the case (δ_1,δ_2)=(0,1) corresponds to ℍ_0. For DGP (i) and (ii), i.e. Gaussian distribution with 2-Wasserstein metric d_W and graph Laplacians with Euclidean metric d_F, the location parameter δ_1 directly shifts Fréchet mean while keeping Fréchet variance constant; and the scale parameter δ_2 works on Fréchet variance only while holding the Fréchet mean fixed. For DGP (iii), i.e. covariance matrices, the log-Euclidean metric d_E operates nonlinearly, and thus changes in either δ_1 or δ_2 will be reflected on changes in both Fréchet mean and variance. The comparisons of our proposed methods with <cit.> for two-sample testing and <cit.> for change-point testing are also reported, which are generally referred to as DM. §.§ Two-Sample Test For the two-sample testing problems, we set the sample size as n_1=n_2∈{50,100,200,400}, and trimming parameter as η=0.15. Table <ref> presents the sizes of our tests and DM test for three DGPs based on 1000 Monte Carlo replications at nominal significance level α=5%. In all three subtables, we see that: (a) both D_1 and D_2 can deliver reasonable size under all settings; (b) DM suffers from severe size distortion when dependence magnitude among data is strong; (c) when two samples are dependent, i.e. a=0.5, DM is a bit undersized even when data is temporally independent. These findings suggest that our SN-based tests provide more accurate size relative to DM when either within-group temporal dependence or between-group dependence is exhibited. In Figure <ref>, we further compare size-adjusted power of our SN-based tests and DM test, in view of the size-distortion of DM. That is, the critical values are set as the empirical 95% quantiles of the test statistics obtained in the size evaluation, so that all curves start from the nominal level at 5%. For all settings, we note that D_2 is more powerful than (or equal to) D_1. 
In particular, D_1 has trivial power in DGPs (i) and (ii) when only a Fréchet mean difference is present. In addition, D_2 is more powerful in detecting Fréchet mean differences than DM for DGPs (i) and (ii), and beats DM in DGP (i) for detecting Fréchet variance differences, although it is slightly worse than DM in detecting Fréchet variance differences for DGPs (ii) and (iii). Due to its robust size and power performance, we thus recommend D_2 for practical purposes. §.§ Change-Point Test For the change-point testing problems, we set the sample size n∈{200, 400,800}, and the trimming parameters as (η_1,η_2)=(0.15,0.05). Table <ref> outlines the size performance of our tests and the DM test for the three DGPs based on 1000 Monte Carlo replications at the nominal significance level α=5%. DM tests based on the asymptotic critical value and on the bootstrap (with 500 replications) are denoted as DM^a and DM^b, respectively. From Table <ref>, we find that SN_1 always exhibits accurate size while SN_2 is a bit conservative. As a comparison, the tests based on DM^a and DM^b suffer from severe size distortion when strong temporal dependence is present, although DM^b is slightly better than DM^a in DGPs (i) and (ii). In Figure <ref>, we plot the size-adjusted power of our tests and the DM test based on bootstrap calibration. Here, the size-adjusted power of DM^b is implemented following <cit.>. Similar to the findings in the two-sample tests, we find that SN_1 has trivial power in DGPs (i) and (ii) when there is only a Fréchet mean change and is the worst among all three tests. Furthermore, SN_2 is slightly less powerful than DM, and the power loss is moderate. Considering its better size control, SN_2 is preferred. We further provide numerical evidence for the estimation accuracy by considering the alternative hypothesis of δ_1=1-δ_2=0.3 with the true change-point location at τ=0.5 for DGPs (i)-(iii) in the main text. When varying the sample size n∈{400,800,1600}, we find that, for all DGPs, the histograms of τ̂ (based on SN_2) plotted in Figure <ref> become more concentrated around the truth τ=0.5 as the sample size increases, which is consistent with our theoretical result on the consistency of τ̂. §.§ Multiple Change Point Detection For simulations of multiple change point estimation, we consider non-Euclidean time series of length n=500 generated from the following two models. These models are the same as before, but reformulated for better presentation. * Gaussian univariate probability distribution: Y_t=𝒩(arctan (U_t)+δ_t,1, δ_t, 2^2[arctan(U_t^2)+1]^2). * covariance matrix: Y_t=(2I_3+Z_t)(2I_3+Z_t)^⊤ with Z_t= δ_t,1+δ_t,2arctan(U_t). Here, U_t are generated according to the AR(1) process U_t=ρ U_t-1+ϵ_t, ϵ_t i.i.d.∼𝒩(0,1). There are 3 change points at t=110, 250 and 370. The change-point locations are reflected in the definitions of {δ_t,1} and {δ_t,2}, where δ_t,1 = a_1 𝕀_{t ≤ 110 } + a_2 𝕀_{ 110 < t ≤ 250 } + a_3 𝕀_{ 250 < t ≤ 370 } + a_4 𝕀_{ 370 < t ≤ 500 }, δ_t,2 = b_1 𝕀_{ t ≤ 110 } + b_2 𝕀_{ 110 < t ≤ 250 } + b_3 𝕀_{ 250 < t ≤ 370 } + b_4 𝕀_{ 370 < t ≤ 500 }. For each model, we consider 3 cases that are differentiated by the magnitudes of a_i, b_i, i=1,2,3,4. For the data generating model of Gaussian distributions, we set * (a_1, a_2, a_3, a_4) = (0, 0.7, 0, 0.8), (b_1, b_2, b_3, b_4) = (1, 1.5, 0.7, 1.4); * (a_1, a_2, a_3, a_4) = (0, 0.2, 0, 0.3), (b_1, b_2, b_3, b_4) = (0.5, 1.5, 0.4, 1.4); * (a_1, a_2, a_3, a_4) = (0, 0.5, 1.5, 3.3), (b_1, b_2, b_3, b_4) = (0.2, 1.5, 3.8, 6.5).
As for the data generating model of covariance matrices, we set * (a_1, a_2, a_3, a_4) = (0, 1.2, 0, 1.3), (b_1, b_2, b_3, b_4) = (0.8, 1.5, 0.7, 1.6); * (a_1, a_2, a_3, a_4) = (0, 1, 0, 1), (b_1, b_2, b_3, b_4) = (0.5, 2, 0.4, 1.9); * (a_1, a_2, a_3, a_4) = (0, 2, 3.9, 5.7), (b_1, b_2, b_3, b_4) = (0.2, 0.7, 1.3, 2). Cases 1 and 2 correspond to non-monotone changes and Case 3 considers a monotone change. Here, our method described in Section <ref> is denoted as WBS-SN_2 (that is, a combination of WBS and our SN_2 test statistic). The method DM in conjunction with binary segmentation, referred to as BS-DM, is proposed in <cit.> and included in this simulation for comparison purposes. In addition, our statistic SN_2 in combination with binary segmentation, denoted as BS-SN_2, is implemented and included as well. The critical values for BS-DM and BS-SN_2 are obtained from their asymptotic distributions, respectively. The simulation results are shown in Table <ref>, where we present the ARI (adjusted Rand index) and the number of detected change points for two dependence levels ρ=0.3, 0.6. Note that ARI ∈ [0,1] measures the accuracy of change point estimation and a larger ARI corresponds to more accurate estimation. We summarize the main findings as follows. (a) WBS-SN_2 is the best method in general as it can accommodate both monotonic and non-monotonic changes, and appears quite robust to temporal dependence. For Cases 1 and 2, we see that BS-SN_2 does not work for non-monotone changes, due to the use of the binary segmentation procedure. (b) BS-DM tends to have more false discoveries compared to the other methods. This is expected, as the method DM is primarily proposed for i.i.d. data sequences and exhibits serious oversize when there is temporal dependence, as seen in Section <ref>. (c) When we increase ρ from 0.3 to 0.6, the performance of WBS-SN_2 appears quite stable for both distributional time series and covariance matrix time series. § APPLICATIONS In this section, we present two real data illustrations, one for two-sample testing and the other for change-point detection. Both datasets are in the form of non-Euclidean time series and neither seems to have been analyzed before using techniques that take into account unknown temporal dependence. §.§ Two-sample tests Mortality data. Here we are interested in comparing the longevity of people living in different countries of Europe. From the Human Mortality Database (<https://www.mortality.org/Home/Index>), we can obtain a time series that consists of yearly age-at-death distributions for each country. We shall focus on distributions for females from 1960 to 2015, and 26 countries are included in the analysis after excluding countries with missing data. Pairwise two-sample tests between the included countries are performed using our statistic D_2 to understand the similarity of age-at-death distributions between different countries. The resulting p-value matrix is plotted in Figure <ref> (left). To better present the testing results and gain more insights, we define the dissimilarity between two given countries by subtracting each p-value from 1. Treating these dissimilarities as “distances", we apply multidimensional scaling to “project" each country onto a two-dimensional plane for visualization; a sketch of this projection step is given below. See Figure <ref> (right) for the plot of the “projected" countries.
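A minimal sketch of the projection step just described, assuming pvals is the symmetric 26 × 26 matrix of pairwise p-values from D_2; the use of scikit-learn's metric multidimensional scaling here is our illustrative choice and not necessarily the implementation behind the figure.

import numpy as np
from sklearn.manifold import MDS

def project_countries(pvals, seed=0):
    dissim = 1.0 - np.asarray(pvals)          # dissimilarity = 1 - p-value
    np.fill_diagonal(dissim, 0.0)             # a country is at distance 0 from itself
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(dissim)          # one 2-d coordinate per country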
It appears that several western European countries, including the UK, Belgium, Luxembourg, Ireland, Austria, and Denmark, form a cluster, whereas several central and eastern European countries, including Poland, Latvia, Russia, Bulgaria, Lithuania and Czechia, share similar distributions. We suspect that the similarity in mortality distributions is closely related to the similarity in economic development and healthcare systems, and depends less on geographical location. §.§ Change point detection Cryptocurrency data. Detecting change points in the Pearson correlation matrices for a set of cryptocurrencies of interest can uncover structural breaks in the correlations of these cryptocurrencies and can play an important role in investors' investment decisions. Here, we construct the daily Pearson correlation matrices from the minute prices of Bitcoin, Dogecoin, Cardano, Monero and Chainlink for the year 2021. The cryptocurrency data can be downloaded at <https://www.cryptodatadownload.com/analytics/correlation-heatmap/>. See Figure <ref> for the plot of the time series of pairwise correlations. Three methods, namely, our SN_2 test combined with WBS (WBS-SN_2), the SN_2 test combined with binary segmentation (BS-SN_2), and the DM test of <cit.> in conjunction with binary segmentation (BS-DM), are applied to detect potential change points in this time series. Method WBS-SN_2 detects an abrupt change on 2021-05-17 and method BS-SN_2 detects a change point on 2021-04-29. By comparison, more than 10 change points are detected by BS-DM and we suspect that many of them are false discoveries (see Section <ref> for simulation evidence of BS-DM's tendency toward over-detection). The change point in mid-May 2021 is well expected and corresponds to a major crash in the crypto market that wiped out 1 trillion dollars. The major causes of this crash were the withdrawal of Tesla's commitment to accept Bitcoin as payment and warnings regarding cryptocurrency sent by the Chinese central bank to financial institutions and businesses in China. Since this major crash, the market has been dominated by negative sentiment and fear of a recession. We refer to the following CNN article for some discussion of this crash: <https://www.cnn.com/2021/05/22/investing/crypto-crash-bitcoin-regulation/index.html>. § CONCLUSION Motivated by the increasing availability of non-Euclidean time series data, this paper considers two-sample testing and change-point detection for temporally dependent random objects. Our inferential framework builds upon the nascent SN technique, which has been mainly developed for conventional Euclidean time series or functional time series in Hilbert space, and the extension of SN to time series of objects residing in metric spaces is the first in the literature. The proposed tests are robust to weak temporal dependence, enjoy effortless tuning and are broadly applicable to many non-Euclidean data types with easily verified technical conditions. On the theory front, we derive the asymptotic distributions of our two-sample and change-point tests under both the null and local alternatives of order O(n^-1/2). Furthermore, for the change-point problem, the consistency of the change-point estimator is established under mild conditions. Both simulation and real data illustrations demonstrate the robustness of our tests with respect to temporal dependence and their effectiveness in testing and estimation problems. To conclude, we mention several interesting but unsolved problems for analyzing non-Euclidean time series.
For example, although powerful against Fréchet mean differences/changes, the testing statistics developed in this paper rely on the asymptotic behaviors of the Fréchet (sub)sample variances. It is imperative to construct formal tests that directly target Fréchet mean differences/changes. For the change-point detection problem in non-Euclidean data, the existing literature, including this paper, only derives the consistency of the change-point estimator. It would be very useful to derive the explicit convergence rate and the asymptotic distribution of the change-point estimator, which are needed for confidence interval construction. Also, it would be interesting to study how to detect structural changes when the underlying distributions of random objects change smoothly. We leave these topics for future investigation. § TECHNICAL PROOFS §.§ Auxiliary Lemmas We first introduce some notation. We denote o_up(·) as the uniform o_p(·) w.r.t. the partial sum index (a,b)∈ℐ_η. Let M_n(ω,[a,b])=n^-1∑_t=⌊ na⌋+1^⌊ nb⌋f_ω(Y_t), where f_ω(Y)=d^2(Y,ω)-d^2(Y,μ); then it is clear that μ̂_[a,b]=argmin_ω∈ΩM_n(ω,[a,b]). Let Ṽ_[a,b]=1/⌊ n b⌋-⌊ n a⌋∑_t=⌊ n a⌋+1^⌊ n b⌋ d^2(Y_t, μ). The following three main lemmas are verified under Assumptions <ref>-<ref>, and they are used repeatedly throughout the proofs of the main theorems. sup_(a,b)∈ℐ_η√(n)d(μ̂_[a,b],μ)=O_p(1). (1). We first show the uniform convergence, i.e. sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ)=o_up(1). For any ϵ>0, define ψ(ϵ):=inf_{d(ω,μ)>ϵ}𝔼f_ω(Y), and we know that ψ(ϵ)>0 by the uniqueness of μ in Assumption <ref>. Hence, letting M(ω,[a,b])=(b-a)𝔼f_ω(Y), we have P(sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ)>ϵ) = P(⋃_(a,b)∈ℐ_η{d(μ̂_[a,b],μ)>ϵ}) ≤ P(⋃_(a,b)∈ℐ_η{M(μ̂_[a,b],[a,b])-inf_{d(ω,μ)>ϵ}M(ω,[a,b])≥ 0}) ≤ P(⋃_(a,b)∈ℐ_η{M(μ̂_[a,b],[a,b])≥ηψ(ϵ)/2}) ≤ P(⋃_(a,b)∈ℐ_η{M(μ̂_[a,b],[a,b])-M_n(μ̂_[a,b],[a,b]) +M_n(μ,[a,b])-M(μ,[a,b])≥ηψ(ϵ)/2}) ≤ P(sup_(a,b)∈ℐ_ηsup_ω∈Ω|M_n(ω,[a,b])-M(ω,[a,b])|≥ηψ(ϵ)/4) where the first inequality holds because the event {d(μ̂_[a,b],μ)>ϵ} implies that μ̂_[a,b]∈{ω∈Ω:d(ω,μ)> ϵ}, and thus M(μ̂_[a,b],[a,b])≥inf_{d(ω,μ)>ϵ}M(ω,[a,b]); the second inequality holds by b-a≥η (hence (⌊ nb⌋-⌊ na⌋)/n>η/2 for large n) and the definition of (<ref>) such that inf_{d(ω,μ)>ϵ}M(ω,[a,b])=(b-a)ψ(ϵ)>ηψ(ϵ)/2; and the third holds because M(μ,[a,b])=0 and M_n(μ,[a,b])≥ M_n(μ̂_[a,b],[a,b]). Note M_n(ω,[a,b])-M(ω,[a,b])=M_n(ω,[0,b])-M(ω,[0,b])-M_n(ω,[0,a])+M(ω,[0,a]). Therefore, it suffices to show the weak convergence of the process {M_n(ω,[0,u])-M(ω,[0,u]), u∈[0,1],ω∈Ω} to zero. Note the pointwise convergence holds easily by the boundedness of f_ω and Assumption <ref>, so we only need to show the stochastic equicontinuity, i.e. lim sup_n→∞P(sup_|u-v|<δ_1,d(ω_1,ω_2)<δ_2|M_n(ω_1,[0,u])-M(ω_1,[0,u]) -M_n(ω_2,[0,v])+M(ω_2,[0,v])|>ϵ)→ 0 as max(δ_1,δ_2)→ 0. Then, by the triangle inequality, we have |M_n(ω_1,[0,u])-M(ω_1,[0,u])-M_n(ω_2,[0,v])+M(ω_2,[0,v])| ≤ |M_n(ω_1,[0,u])-M_n(ω_1,[0,v])|+|M_n(ω_1,[0,v])-M_n(ω_2,[0,v])| +|M(ω_1,[0,u])-M(ω_1,[0,v])|+|M(ω_1,[0,v])-M(ω_2,[0,v])| := ∑_i=1^4 R_n,i. Without loss of generality, we assume v>u, and by the boundedness of the metric, we have for some K>0, R_n,1≤n^-1∑_t=⌊nu⌋+1^⌊nv⌋d^2(Y_t,ω_1)≤K|u-v|≤Kδ_1. Similarly, R_n,3≤ Kδ_1. Furthermore, we can see that R_n,2,R_n,4≤ 2diam(Ω)d(ω_1,ω_2)≤ Kδ_2. Hence, the result follows by letting δ_1 and δ_2 be sufficiently small. Thus, the uniform convergence holds. (2). We then derive the convergence rate based on Assumption <ref>. By the consistency, we have for any δ>0, P(sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ)≤δ)→ 1.
Hence, on the event that sup_(a,b)∈ℐ_ηd(μ̂_a,b,μ)≤δ, and note that M_n(μ,[a,b])=n^-1∑_t=⌊ na⌋+1^⌊ nb⌋[d^2(Y_t,μ)-d^2(Y_t,μ)]=0, we have 0= M_n(μ,[a,b]) ≥ M_n(μ̂_[a,b],[a,b]) = K_d⌊nb⌋-⌊na⌋/nd^2(μ̂_[a,b],μ) + n^-1∑_t=⌊na⌋+1^⌊nb⌋[g(Y_t,μ̂_[a,b],μ)+R(Y_t,μ̂_[a,b],μ)] ≥ K_d η/2d^2(μ̂_[a,b],μ) +d(μ̂_[a,b],μ)[ n^-1∑_t=⌊na⌋+1^⌊nb⌋g(Y_t,μ̂_[a,b],μ)/d(μ̂_[a,b],μ)+o_up(n^-1/2+d(μ̂_[a,b],μ))], where the last inequality holds by Assumption <ref> and the fact (⌊ nb⌋-⌊ na⌋)/n>η/2 for large n. Note the above analysis holds uniformly for (a,b)∈ℐ_η, this implies that sup_(a,b)∈ℐ_η[K_d η/2d(μ̂_[a,b],μ)-o_up(d(μ̂_[a,b],μ))] ≤ n^-1/2 sup_(a,b)∈ℐ_η| n^-1/2∑_t=⌊na⌋+1^⌊nb⌋g(Y_t,μ̂_[a,b],μ)/d(μ̂_[a,b],μ)|+o_up(n^-1/2)=O_p(n^-1/2), and hence sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ)=O_p(n^-1/2). sup_(a,b)∈ℐ_η√(n)|V̂_[a,b]-Ṽ_[a,b]|=o_p(1). By Lemma <ref>, and Assumption <ref>, we have sup_(a,b)∈ℐ_η√(n)M_n(μ̂_[a,b],[a,b]) ≤ K_dsup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ) sup_(a,b)∈ℐ_η|√(n)d(μ̂_[a,b],μ) + n^-1/2∑_t=⌊na⌋+1^⌊nb⌋g(Y_t,μ̂_[a,b],μ)/d(μ̂_[a,b],μ)+o_up(1+√(n)d(μ̂_[a,b],μ))| = O_p(n^-1/2). Hence, we have that sup_(a,b)∈ℐ_η√(n)|V̂_[a,b]-Ṽ_[a,b]|≤η^-1sup_(a,b)∈ℐ_η√(n)M_n(μ̂_[a,b],[a,b]), the result follows. Let V̂^C_[a,b](ω̃)=1/⌊ nb⌋-⌊ na ⌋∑_t=⌊ na⌋+1^⌊ nb⌋d^2(Y_i,ω̃), where ω̃∈Ω is a random object such that √(n)sup_(a,b)∈ℐ_ηd(ω̃,μ̂_[a,b])=O_p(1). Then, √(n)sup_(a,b)∈ℐ_η|V̂^C_[a,b](ω̃)-Ṽ_[a,b]|=o_p(1). By triangle inequality and Lemma <ref>, √(n)sup_(a,b)∈ℐ_η|V̂^C_[a,b](ω̃)-Ṽ_[a,b]| = sup_(a,b)∈ℐ_η|√(n)/⌊n b⌋-⌊n a⌋ ∑_t=⌊n a⌋+1^⌊n b⌋ d^2(Y_t, ω̃)-d^2(Y_i,μ)| ≤ (η/2)^-1sup_(a,b)∈ℐ_η√(n)M_n(ω̃,[a,b]). Note by triangle inequality for the metric, d(ω̃,μ)≤ d(μ̂_[a,b],μ)+d(ω̃,μ̂_[a,b])=O_p(n^-1/2), and we know that d(ω̃,μ)<δ with probability tending to 1, and on this event, by Assumption <ref>, √(n)M_n(ω̃,[a,b]) ≤ K_dd^2(ω̃,μ) +n^-1| ∑_t=⌊na⌋+1^⌊nb⌋g(Y_t,ω̃,μ)| +n^-1|∑_t=⌊na⌋+1^⌊nb⌋R(Y_t,ω̃,μ)|. Similar to the proof of Lemma <ref>, we get the result. §.§ Proof of Theorems in Section <ref> Let Ṽ^(1)_r=1/⌊ rn_1⌋∑_t=1^⌊ rn_1⌋d^2(Y_t^(1),μ^(1)), and Ṽ^(2)_r=1/⌊ rn_2⌋∑_t=1^⌊ rn_2⌋d^2(Y_t^(2),μ^(2)). For each r∈[η,1], we consider the decomposition, √(n)T_n(r)= √(n)r(V̂^(1)_r-V̂^(2)_r) = √(n)r(V̂^(1)_r-Ṽ^(1)_r+Ṽ^(1)_r-V^(1)) -√(n)r(V̂^(2)_r-Ṽ^(2)_r+Ṽ^(2)_r-V^(2)) +√(n)r(V^(1)-V^(2)) := R_n,1(r)+R_n,2(r)+R_n,3(r). and √(n)T_n^C(r)= √(n)r(V̂^C,(1)_r-Ṽ^(1)_r)-√(n)r(V̂^(1)_r-Ṽ^(1)_r) +√(n)r(V̂^C,(2)_r -Ṽ^(2)_r)-√(n)r(V̂^(2)_r-Ṽ^(2)_r) := R^C_n,1(r)+R^C_n,2(r)+R^C_n,3(r)+R^C_n,4(r). By Lemma <ref>, sup_r∈[η,1]√(n)r(V̂^(1)_r-Ṽ^(1)_r)=o_p(1), sup_r∈[η,1]√(n)r(V̂^(2)_r-Ṽ^(2)_r)=o_p(1), i.e. {R^C_n,2(r)+R^C_n,4(r)}_r∈[η,1]⇒ 0. Furthermore, by Assumption <ref>, √(n)r(V̂^(1)_r-V^(1))⇒γ_1^-1σ_1B^(1)(γ_1r), √(n)r(V̂^(2)_r-V^(2))⇒γ_2^-1σ_2B^(2)(γ_2r). This implies that {R_n,1(r)+R_n,2(r)}_r∈[η,1]⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)}_r∈[η,1]. §.§ Proof of Theorem <ref> Under ℍ_0, R_n,3(r)≡ 0, and μ^(1)=μ^(2)=μ. Hence, by (<ref>) and (<ref>), we obtain that {√(n)T_n(r)}_r∈[η,1]⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)}_r∈[η,1]. Next, by Lemma <ref>, can obtain that √(n)sup_r∈[η,1]d(μ̂^(1)_r,μ)=o_p(1), √(n)sup_r∈[η,1]d(μ̂^(2)_r,μ)=o_p(1). Hence, by Lemma <ref>, we have {R^C_n,1(r)+R^C_n,3(r)}_r∈[η,1]⇒ 0. Together with (<ref>), we have {√(n)T^C_n(r)}_r∈[η,1]⇒ 0. Hence, by continuous mapping theorem, for both i=1,2, D_n,i→_d ξ^2_γ_1,γ_2(1;σ_1,σ_2)/∫_η^1[ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2)]^2dr. §.§ Proof of Theorem <ref> In view of (<ref>) and (<ref>), {√(n)T_n(r)}_r∈[η,1]⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)+rn^-κ_V+1/2Δ_V}_r∈[η,1]. Hence * For κ_V ∈(1/2,∞), {√(n)T_n(r)}⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)}_r∈[η,1]. 
* For κ_V=1/2, {√(n)T_n(r)}_r∈[η,1]⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)+rΔ_V}_r∈[η,1]. * For κ_V∈(0,1/2), √(n)T_n(1)→_p∞, and {√(n)T_n(r)-√(n)rT_n(1)}_r∈[η,1]⇒{ξ_γ_1,γ_2(r;σ_1,σ_2)-rξ_γ_1,γ_2(1;σ_1,σ_2)}_r∈[η,1]. Next, we focus on √(n)T_n^C(r). When κ_M∈ (0,∞), it holds that d(μ^(1),μ^(2))=O(n^-κ_M/2)=o(1), and by triangle inequality, for any r∈[η,1], |d(μ^(1),μ^(2))-d(μ̂^(2)_r,μ^(2))|≤ d(μ̂^(2)_r,μ^(1))≤ |d(μ^(1),μ^(2))+d(μ̂^(2)_r,μ^(2))|. By Lemma <ref>, we have sup_r∈[η,1]d(μ̂^(2)_r,μ^(2))=O_p(n^-1/2). This and (<ref>) imply that * when κ_M∈(1/2,∞), d^2(μ̂^(2)_r,μ^(1))=o_up(n^-1/2); * when κ_M∈(0,1/2], d^2(μ̂^(2)_r,μ^(1))=d^2(μ^(1),μ^(2))+o_up(n^-1/2)=n^-κ_MΔ_M+o_up(n^-1/2). Similarly, * when κ_M∈(1/2,∞), d^2(μ̂^(1)_r,μ^(2))=o_up(n^-1/2); * when κ_M∈(0,1/2], d^2(μ̂^(1)_r,μ^(2))=n^-κ_MΔ_M+o_up(n^-1/2). Furthermore, by Assumption <ref>, equations (<ref>) and (<ref>), we obtain √(n)T_n^C(r)=R_n,1^C(r)+R_n,3^C(r)+o_up(1) = √(n)K_d rd^2(μ̂^(2)_r,μ^(1))+ rd(μ̂^(2)_r,μ^(1))[n^-1/2∑_t=1^⌊γ_1nr⌋g(Y_t^(1),μ̂^(2)_r,μ^(1))/d(μ̂^(2)_r,μ^(1))] +o_up(d(μ̂^(2)_r,μ^(1))+√(n)d^2(μ̂^(2)_r,μ^(1))) +√(n)K_dr d^2(μ̂^(1)_r,μ^(2))+ rd(μ̂^(1)_r,μ^(2))[n^-1/2∑_t=1^⌊γ_2nr⌋g(Y_t^(2),μ̂^(1)_r,μ^(2))/d(μ̂^(1)_r,μ^(2))] +o_up(d(μ̂^(1)_r,μ^(2))+√(n)d^2(μ̂^(1)_r,μ^(2))) +o_up(1). * For κ_M ∈(1/2,∞), d^2(μ̂^(2)_r,μ^(1))=o_up(n^-1/2), and d^2(μ̂^(1)_r,μ^(2))=o_up(n^-1/2). Hence, {√(n)T_n^C(r)}_r∈[η,1]⇒ 0. * For κ_M=1/2, we note that d^2(μ̂^(2)_r,μ^(1))=n^-1/2Δ_M+o_up(1), and d^2(μ̂^(1)_n,μ^(2))=n^-1/2Δ_M+o_up(1). Hence, {√(n)T_n^C(r)}_r∈[η,1]⇒{2rK_dΔ_M}_r∈[η,1], and {√(n)[T_n^C(r)-rT_n^C(1)]}_r∈[η,1]⇒ 0. * For κ_M∈(0,1/2), we multiply n^2κ_M-1 on both denominator and numerator of D_n,2, and obtain D_n,2=n^2κ_M{[T_n(1)]^2+[T_n^C(1)]^2}/n^-1∑_k=⌊ nη⌋^n n^2κ_M{[T_n(k/n)-k/nT_n(1)]^2+[T_n^C(k/n)-k/nT^C_n(1)]^2}. Note that n^κ_M-1/2→0, as n→∞, we obtain that {n^κ_M[T_n(r)-rT_n(1)]}_r∈[η,1]⇒ 0. Furthermore, in view of (<ref>), we obtain n^κ_MT^C_n(r)= n^κ_Mr(K_d+o_up(1))[d^2(μ̂^(2)_r,μ^(1))+d^2(μ̂^(1)_r,μ^(2))]+o_up(1), By arguments below (<ref>), we know that n^κ_Md^2(μ̂^(2)_r,μ^(1))=Δ_M+o_up(n^κ_M-1/2)=Δ_M+o_up(1). And similarly, n^κ_Md^2(μ̂^(1)_r,μ^(2))=Δ_M+o_up(1). We thus obtain that {n^κ_M T^C_n(r)-rT^C_n(1)}_r∈[η,1]⇒ 0, and n^κ_MT_n^C(1)→_p 2K_dΔ_M. Therefore, (<ref>) and (<ref>) implies that the denominator of (<ref>) converges to 0 in probability, while (<ref>) implies the numerator of (<ref>) is larger than a positive constant in probability, we thus obtain D_n,2→_p∞. Summarizing the cases of κ_V and κ_M, and by continuous mapping theorem, we get the result. §.§ Proof of <Ref> When γ_1=γ_2=1/2, it can be shown that ξ_γ_1,γ_2(r;σ_1,σ_2)=2σ_1B^(1)(r/2)-2σ_2B^(1)(r/2)=_d √(2σ_1^2+2σ_2^2-4ρσ_1σ_2)B(r); and when ρ=0. ξ_γ_1,γ_2(r;σ_1,σ_2)=_d √(σ_1^2/γ_1+σ_2^2/γ_2)B(r). The result follows by the continuous mapping theorem. §.§ Proof of Theorems in Section <ref> With a bit abuse of notation, we define ℐ_η={(a,b): 0≤ a<b≤ 1, b-a≥η_2 } and 𝒥_η={(r;a,b): 0≤ a<r<b≤ 1, b-r≥η_2, r-a≥η_2 }. §.§ Proof of Theorem <ref> For (r;a,b)∈𝒥_η, we note that √(n)T_n(r;a,b) = √(n){(r-a)(b-r)/(b-a)(V̂_[a, r]-Ṽ_[a,r]+Ṽ_[a,r]-V)} -√(n){(r-a)(b-r)/(b-a)(V̂_[r, b]-Ṽ_[r, b]+Ṽ_[r, b]-V)}. By Lemma <ref> we know that sup_(a,r)∈ℐ_η√(n)|V̂_[a, r]-Ṽ_[a,r]|=o_p(1), sup_(r,b)∈ℐ_η√(n)|V̂_[r,b]-Ṽ_[r,b]|=o_p(1), and by Assumption <ref>, {√(n)(r-a)(Ṽ_[a,r]-V)}_(a,r)∈ℐ_η⇒{σ[B(r)-B(a)]}_(a,r)∈ℐ_η, {√(n)(b-r)(Ṽ_[r,b]-V)}_(r,b)∈ℐ_η⇒{σ[B(b)-B(r)]}_(r,b)∈ℐ_η. Hence, {√(n)T_n(r;a,b)}_(r;a,b)∈𝒥_η ⇒ σ{ (b-r)/(b-a)[B(r)-B(a)]-(r-a)/(b-a)[B(b)-B(r)]}_(r;a,b)∈𝒥_η. 
Furthermore, we note that √(n)T_n^C(r;a,b) = (b-r)/(b-a)n^-1/2{∑_i=⌊n a⌋+1^⌊n r⌋ [d^2(Y_i, μ̂_[r, b])-d^2(Y_i, μ)] - ∑_i=⌊n a⌋+1^⌊n r⌋ [d^2(Y_i, μ̂_[a,r])-d^2(Y_i, μ)]} + (r-a)/(b-a)n^-1/2∑_i=⌊n r⌋+1^⌊n b⌋ {[d^2(Y_i, μ̂_[a, r])-d^2(Y_i, μ)] - ∑_i=⌊n r⌋+1^⌊n b⌋ [d^2(Y_i, μ̂_[r, b])-d^2(Y_i, μ)]}+o_up(1) where o_up(1) is the rounding error due to [n(r-a)]^-1-[⌊ nr⌋-⌊ na⌋]^-1 and [n(b-r)]^-1-[⌊ nb⌋-⌊ nr⌋]^-1. Note by Lemma <ref>, we know that sup_(a,r)∈ℐ_ηd(μ̂_[a, r],μ)=O_p(n^-1/2) and sup_(r,b)∈ℐ_ηd(μ̂_[r, b],μ)=O_p(n^-1/2), hence by Lemma <ref> and <ref>, we obtain sup_(r;a,b)∈𝒥_η|√(n)T_n^C(r;a,b)|=o_p(1). The result follows by continuous mapping theorem. §.§ Proof of Theorem <ref> Note for any k=⌊ nη_1⌋,⋯,n-⌊ nη_1⌋, and i=1,2, max_⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,i(k)≥ D_n,i(⌊ nτ⌋). We focus on k^*=⌊ nτ⌋. In this case, the left and right part of the self-normalizer are both from stationary segments, hence by similar arguments as in ℍ_0, {√(n)T_n(r ; 0, τ)}_r∈[η_2,τ-η_2]⇒{σ_1𝒢_1(r; 0, τ) }_r∈[η_2,τ-η_2], {√(n)T^C_n(r ; 0, τ)}_r∈[η_2,τ-η_2]⇒ 0; and {√(n)T_n(r ; τ, 1)}_r∈[τ+η_2,1-η_2]⇒{σ_2𝒢_2(r;τ, 1) }_r∈[τ+η_2,1-η_2], {√(n)T^C_n(r ; τ, 1)}_r∈[η_2,τ-η_2]⇒ 0, where 𝒢_i(r;a,b)=(b-r)/(b-a)[B^(i)(r)-B^(i)(a)]-(r-a)/(b-a)[B^(i)(b)-B^(i)(r)] for i=1,2. Hence, we only need to consider the numerator, where √(n)T_n(τ;0,1)=√(n)τ(1-τ)(V̂_[0, τ]-V̂_[τ, 1]), √(n)T_n^C(τ;0,1)=√(n)τ(1-τ)(V̂_[τ; 0, 1]^C-V̂_[0,τ]-V̂_[τ, 1]). For √(n)T_n(τ;0,1), we have √(n)T_n(τ;0,1)= √(n){τ(1-τ)(V̂_[0, τ]-Ṽ_[0,τ]+Ṽ_[0,τ]-V^(1))} -√(n){τ(1-τ)(V̂_[τ, 1]-Ṽ_[τ, 1]+Ṽ_[τ, 1]-V^(2))} +√(n)τ(1-τ)(V^(1)-V^(2)) = T_11+T_12+T_13. By Lemma <ref>, we know that √(n)(V̂_[0,τ]-Ṽ_[0,τ])=o_p(1), and by Assumption <ref>, we have √(n)τ(Ṽ_[0,τ]-V^(1))→_d σ_1B^(1)(τ). This implies that T_11→_d (1-τ)σ_1B^(1)(τ). Similarly, we can obtain T_12→_d -τσ_2[B^(2)(1)-B^(2)(τ)]. Hence, using the fact that √(n)(V^(1)-V^(2))=Δ_V, we obtain √(n)T_n(τ;0,1)→_d(1-τ)σ_1B^(1)(τ)-τσ_2[B^(2)(1)-B^(2)(τ)]+τ(1-τ)Δ_V. For √(n)T_n^C(τ;0,1) we have √(n)T_n^C(τ;0,1) = (1-τ)n^-1/2{ ∑_i=1^⌊n τ⌋ [d^2(Y_i, μ̂_[τ,1])- d^2(Y_i, μ^(1))] - ∑_i=1^⌊n τ⌋[ d^2(Y_i, μ̂_[0,τ])- d^2(Y_i, μ^(1))]} +τn^-1/2 {∑_i=⌊n τ⌋+1^n [d^2(Y_i, μ̂_[0,τ])-d^2(Y_i, μ^(2))] -∑_i=⌊n τ⌋+1^n [d^2(Y_i, μ̂_[τ,1])-d^2(Y_i, μ^(2))]}+o_p(1) := T_21+T_22+T_23+T_24+o_p(1), where o_p(1) is the rounding error due to (nτ)^-1-⌊ nτ⌋^-1 and [n(1-τ)]^-1-(n-⌊ nτ⌋)^-1. Note by Lemma <ref>, we have d(μ̂_[0,τ],μ^(1))=O_p(n^-1/2), and by triangle inequality, we know that d(μ̂_[τ,1],μ^(1))≤ d(μ̂_[τ,1],μ^(2))+d(μ^(1),μ^(2))=O_p(n^-1/4). Then, by Assumption <ref>, we know T_21 = √(n)(1-τ)τK_d d^2(μ̂_[τ,1],μ^(1)) +(1-τ) d(μ̂_[τ,1],μ^(1))[n^-1/2∑_i=1^⌊nτ⌋g(Y_i,μ̂_[τ,1],μ^(1))/d(μ̂_[τ,1],μ^(1))] +o_p(d(μ̂_[τ,1],μ^(1))+√(n)d^2(μ̂_[τ,1],μ^(1))) = √(n)(1-τ)τK_dd^2(μ̂_[τ,1],μ^(1))+O_p(n^-1/4)+o_p(1). Now, by triangle inequality, we know √(n)[d(μ̂_[τ,1],μ^(2))-d(μ^(1),μ^(2))]^2≤√(n)d^2(μ̂_[τ,1],μ^(1)) ≤√(n)[d(μ̂_[τ,1],μ^(2))+d(μ^(1),μ^(2))]^2, and note d(μ̂_[τ,1],μ^(2))=O_p(n^-1/2) by Lemma <ref>, we obtain √(n)d^2(μ̂_[τ,1],μ^(1))=Δ_M+o_p(1), and T_21=(1-τ)τ K_dΔ_M+o_p(1). By Lemma <ref>, T_22=o_p(1). Hence T_21+T_22=(1-τ)τ K_dΔ_M+o_p(1). Similarly, we obtain that T_23+T_24=(1-τ)τ K_dΔ_M+o_p(1). Therefore, √(n)T_n^C(τ;0,1)=2τ(1-τ)K_dΔ_M+o_p(1). 
Hence, combining results of (<ref>)–(<ref>), we have D_n,1(⌊nτ⌋) →_d[(1-τ)σ_1B^(1)(τ)-τσ_2[B^(2)(1)-B^(2)(τ)]+τ(1-τ)Δ_V]^2/[∫_η_2^r-η_2 σ_1^2𝒢_1^2(u ; 0, r) d u+∫_r+η_2^1-η_2 σ_2^2𝒢_2^2(u ; r, 1) d u] := 𝒮_η,1(τ;Δ), and, D_n,2(⌊ nτ⌋) →_d [(1-τ)σ_1B^(1)(τ)-τσ_2[B^(2)(1)-B^(2)(τ)]+τ(1-τ)Δ_V]^2+4[τ(1-τ)Δ_M]^2/[∫_η_2^r-η_2σ_1^2𝒢_1^2(u ; 0, r) d u+∫_r+η_2^1-η_2σ_2^2𝒢_2^2(u ; r, 1) d u] := 𝒮_η,2(τ;Δ). Therefore, we know that for the 1-α quantile of 𝒮_η, denoted by Q_1-α(𝒮_η), for i=1,2, lim_n→∞ P(max_⌊ nη_1⌋≤ k≤ n-⌊ nη_1⌋D_n,i(k)≥ Q_1-α(𝒮_η)) ≥ lim_n→∞ P(D_n,i(⌊ nτ⌋)≥ Q_1-α(𝒮_η)) = P(𝒮_η,i(τ;Δ)≥ Q_1-α(𝒮_η)). In particular, lim_|Δ_V|→∞P(𝒮_η,1(τ;Δ)≥Q_1-α(𝒮_η))=1, lim_max{|Δ_V|,Δ_M}→∞P(𝒮_η,2(τ;Δ)≥Q_1-α(𝒮_η))=1. §.§ Proof of Theorem <ref> Define the pointwise limit of μ̂_[a,b] under ℍ_a as μ_[a,b]= μ^(1), b≤τ min_ω∈Ω{(τ-a)𝔼d^2(Y_t^(1),ω)+(b-τ)𝔼d^2(Y_t^(2),ω)}, a<τ<b μ^(2), τ≤ a Define the Fréchet variance and pooled contaminated variance under ℍ_a as V_[a,b]= V^(1) b≤τ τ-a/b-a𝔼(d^2(Y_t^(1),μ_[a,b]))+b-τ/b-a𝔼(d^2(Y_t^(2),μ_[a,b])), a<τ<b V^(2), τ≤ a, and V^C_[r;a,b]= V^(1) b≤τ τ-a/r-a𝔼(d^2(Y_t^(1),μ_[r,b]))+r-τ/r-a𝔼(d^2(Y_t^(2),μ_[r,b]))+𝔼(d^2(Y_t^(2),μ_[a,r])), a<τ≤r 𝔼(d^2(Y_t^(1),μ_[r,b]))+τ-r/b-r𝔼(d^2(Y_t^(1),μ_[a,r]))+b-τ/b-r𝔼(d^2(Y_t^(2),μ_[a,r])), r<τ<b V^(2), τ≤a. We want to show that {T_n(r;a,b)}_(r;a,b)∈𝒥_η ⇒{T(r;a,b)}_(r;a,b)∈𝒥_η, {T^C_n(r;a,b)}_(r;a,b)∈𝒥_η ⇒{T^C(r;a,b)}_(r;a,b)∈𝒥_η, where T(r;a,b)=(r-a)(b-r)/b-a(V_[a, r]-V_[r, b]), T^C(r;a,b)=(r-a)(b-r)/b-a(V_[r ; a, b]^C-V_[a, r]-V_[r, b]). To achieve this, we need to show (1). sup_(a,b)∈ℐ_ηd(μ̂_[a,b],μ_[a,b])=o_p(1); (2). sup_(a,b)∈ℐ_η|V̂_[a,b]-V_[a,b]|=o_p(1); and (3). sup_(r;a,b)∈𝒥_η|V̂^C_[r;a,b]-V^C_[r;a,b]|=o_p(1). (1). The cases when b≤τ and a≥τ follow by Lemma <ref>. For the case when τ∈(a,b), recall μ̂_[a, b]= min_ω∈Ω1/⌊nb⌋-⌊na⌋ ∑_t=⌊n a⌋+1^⌊n b⌋ d^2(Y_t, ω) = min_ω∈Ω{n/⌊nb⌋-⌊na⌋1/n ∑_t=⌊n a⌋+1^⌊n τ⌋ d^2(Y_t^(1), ω) +n/⌊nb⌋-⌊na⌋ 1/n ∑_t=⌊n τ⌋+1^⌊n b ⌋ d^2(Y_t^(2), ω)}. By the proof of (1) in Lemma <ref>, for i=1,2, we have {1/n∑_t=1^⌊ n u ⌋ d^2(Y_t^(i), ω)-u𝔼d^2(Y_t^(i),ω)}_ω∈Ω,u∈[0,1]⇒ 0, which implies that {n/⌊nb⌋-⌊na⌋1/n ∑_t=⌊n a⌋+1^⌊n τ⌋ d^2(Y_t^(1), ω) +n/⌊nb⌋-⌊na⌋ 1/n ∑_t=⌊n τ⌋+1^⌊n b ⌋ d^2(Y_t^(2), ω)}_ω∈Ω,(a,b)∈ℐ_η ⇒{τ-a/b-a𝔼(d^2(Y_t^(1),ω)+b-τ/b-a𝔼(d^2(Y_t^(2),ω))}_ω∈Ω,(a,b)∈ℐ_η. By Assumption <ref>, and the argmax continuous mapping theorem (Theorem 3.2.2 in <cit.>), the result follows. (2). The cases when b≤τ and a≥τ follows by Lemma <ref>. For the case when τ∈(a,b), we have for some constant K>0 sup_(a,b)∈ℐ_η|V̂_[a,b]-V_[a,b]| ≤ sup_(a,b)∈ℐ_η(1/⌊nb⌋-⌊na⌋ ∑_t=⌊n a⌋+1^⌊n b⌋ |d^2(Y_t, μ̂_[a,b])-d^2(Y_t, μ_[a,b])|) +sup_(a,b)∈ℐ_η|1/⌊nb⌋-⌊na⌋ ∑_t=⌊n a⌋+1^⌊n b⌋ d^2(Y_t, μ_[a,b])-V_[a,b]| ≤ sup_(a,b)∈ℐ_η(1/⌊nb⌋-⌊na⌋ ∑_t=⌊n a⌋+1^⌊n b⌋ K|d(Y_t, μ̂_[a,b])-d(Y_t, μ_[a,b])|)+o_p(1) ≤ sup_(a,b)∈ℐ_ηKd(μ̂_[a,b],μ_[a,b])+o_p(1)=o_p(1) where the second inequality holds by the boundedness of the metric and (<ref>), and the third inequality holds by the triangle inequality of the metric. (3). The proof is similar to (2). By continuous mapping theorem, we obtain that for i=1,2, {D_n,i(⌊ nr⌋)}_r∈[η_1,1-η_1]⇒{D_i(r)}_r∈[η_1,1-η_1], where D_1(r)= [T(r;0,1)]^2/∫_η_2^r-η_2[T(u;0,r)]^2du+∫_r+η_2^1-η_2[T(u;r,1)]^2du, D_2(r)= [T(r;0,1)]^2+[T^C(r;0,1)]^2/∫_η_2^r-η_2[T(u;0,r)]^2+[T^C(u;0,r)]^2du+∫_r+η_2^1-η_2[T(u;r,1)]^2+[T^C(u;r,1)]^2du. In particular, at r=τ, we obtain D_i(τ)=∞. Hence, to show the consistency of τ̂, it suffices to show that for any small ϵ>0, if |r-τ|>ϵ, D_i(r)<∞. By symmetry, we consider the case of r-τ>ϵ. 
For r-τ>ϵ, we note that for both i=1,2, sup_r-τ>ϵD_i(r)≤sup_r{[T(r;0,1)]^2+[T^C(r;0,1)]^2}/inf_r-τ>ϵ∫_η_2^r-η_2[T(u;0,r)]^2du. By proof of Proposition 1 in <cit.>, we obtain that for some universal constant K>0, sup_r{[T(r;0,1)]^2+[T^C(r;0,1)]^2}≤K(Δ^2_M+Δ^2_V)<∞. Therefore, it suffices to show that there exists a function ζ(ϵ)>0, such that for any r-τ>ϵ, ∫_η_2^τ-η_2[T(u;0,r)]^2du>ζ(ϵ). For r>τ, and for any u∈[η_2,τ-η_2], T(u;0,r) = u(r-u)/r(V^(1)-V_[u,r]) = u(r-u)/r[V^(1)-τ-u/r-u𝔼(d^2(Y_t^(1),μ_[u,r]))-r-τ/r-u𝔼(d^2(Y_t^(2),μ_[u,r]))] = u(r-u)/r[V^(1)-V(τ-u/r-u)]. By Assumption <ref>, we can obtain that |T(u;0,r)|>u(r-u)/rφ(ϵ/r-u)≥η_2^2φ(ϵ). Hence, we can choose ζ(ϵ)=η_2^6φ^2(ϵ). § EXAMPLES As we have mentioned in the main context, since d^2(Y_t,ω) takes value in ℝ for any fixed ω∈Ω, both Assumption <ref> and <ref> could be implied by high-level weak temporal dependence conditions in conventional Euclidean space. Therefore, we only discuss the verification of Assumption <ref>, <ref> and <ref> in what follows. §.§ Example 1: L_2 metric d_L for square integrable functions defined on [0,1] Let Ω be the Hilbert space of all square integrable functions defined on I=[0,1] with inner product ⟨ f,g⟩=∫_If(t)g(t)dt for two functions f,g∈Ω. Then, for the corresponding norm f=⟨ f,f⟩^1/2, L_2 metric is defined by d_L^2(f,g)=∫_I[f(t)-g(t)]^2dt. Assumptions <ref> and <ref> follows easily by the Riesz representation theorem and convexity of Ω. We only consider Assumption <ref>. Note that d_L^2(Y,ω)-d_L^2(Y,μ)= ∫_0^1 [ω(t)-μ(t)][ω(t)+μ(t)-2Y(t)]dt = d_L^2(ω,μ)+2∫_0^1 [ω(t)-μ(t)][μ(t)-Y(t)]dt := d_L^2(ω,μ)+g(Y,ω,μ), and R(Y,ω,μ)≡ 0. Furthermore, |n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋g(Y_i,ω,μ)| = |2∫_0^1 [ω(t)-μ(t)]n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋[Y_i(t)-μ(t)]dt| ≤ 2d_L(ω,μ) {∫_0^1 |n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋[Y_i(t)-μ(t)]|^2dt}^1/2, where the inequality holds by Cauchy-Schwarz inequality. By the boundedness of d_L(ω,μ), Assumption <ref> then follows if sup_t∈[0,1]sup_(a,b)∈ℐ_η|n^-1/2∑_i=⌊ n a⌋+1^⌊ n b⌋[Y_i(t)-μ(t)]|=O_p(1), which holds under general weak temporal dependence for functional observations, see, e.g. <cit.>. §.§ Example 2: 2-Wasserstein metric d_W of univariate CDFs Let Ω be the set of univariate CDF function on ℝ, consider the 2-Wasserstein metric defined by d_W^2(G_1,G_2)=∫_0^1 (G_1(t)-G_2(t))^2dt, where G_1 and G_2 are two inverse CDFs or quantile functions. The verification of Assumption <ref> and <ref> can be found in Proposition C.1 in <cit.>. Furthermore, by similar arguments as Example 1, Assumption <ref> holds under weak temporal dependence conditions, see <cit.>. §.§ Example 3: Frobenius metric d_F for graph Laplacians or covariance matrices Let Ω be the set of graph Laplacians or covariance matrices of a fixed dimension r, with uniformly bounded diagonals, and equipped with the Frobenius metric d_F, i.e. d_F^2(Σ_1,Σ_2)=tr[(Σ_1-Σ_2)^⊤(Σ_1-Σ_2)]. for two r× r matrices Σ_1 and Σ_2. The verification of Assumption <ref> and <ref> can be found in Proposition C.2 in <cit.>. We only consider Assumption <ref>. Note that d_F^2(Y,ω)-d_F^2(Y,μ)= tr(ω-μ)^⊤(ω+μ-2Y) = d_F^2(ω,μ)+2tr(ω-μ)^⊤(μ-Y) := d_F^2(ω,μ)+g(Y,ω,μ), and R(Y,ω,μ)≡ 0. Furthermore, by Cauchy-Schwarz inequality, |n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋g(Y_i,ω,μ)| = 2|tr[(ω-μ)^⊤n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋(Y_i-μ)]| ≤ 2d_F(ω,μ) d_F(n^-1/2∑_i=⌊n a⌋+1^⌊n b⌋[Y_i-μ],0). By the boundedness of d_F(ω,μ), Assumption <ref> then follows if sup_(a,b)∈ℐ_ηn^-1/2∑_i=⌊ n a⌋+1^⌊ n b⌋vec(Y_i-μ)=O_p(1), which holds under common weak dependence conditions in conventional Euclidean space. 
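The algebraic identity underlying Example 3 — that the remainder R(Y,ω,μ) vanishes identically for the Frobenius metric — is easy to confirm numerically. The following is a minimal sketch, not part of the paper; it assumes only NumPy, and the randomly generated covariance matrices are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def frobenius_dist(A, B):
    # d_F(A, B) = sqrt(tr[(A - B)^T (A - B)]), i.e. the Frobenius norm of A - B
    return np.linalg.norm(A - B, "fro")

def g_term(Y, omega, mu):
    # g(Y, omega, mu) = 2 tr[(omega - mu)^T (mu - Y)]
    return 2.0 * np.trace((omega - mu).T @ (mu - Y))

def random_cov(r):
    # a random r x r positive semi-definite matrix, used only for illustration
    A = rng.standard_normal((r, r))
    return A @ A.T / r

r = 4
Y, omega, mu = random_cov(r), random_cov(r), random_cov(r)

lhs = frobenius_dist(Y, omega) ** 2 - frobenius_dist(Y, mu) ** 2
rhs = frobenius_dist(omega, mu) ** 2 + g_term(Y, omega, mu)
print(np.isclose(lhs, rhs))  # True: the remainder R(Y, omega, mu) is exactly zero
```

The same check applies verbatim to graph Laplacians, since only the Frobenius inner product is involved.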
§.§ Example 4: Log-Euclidean metric d_E for covariance matrices Let Ω be the set of all positive-definite covariance matrices of dimension r, with uniformly both upper and lower bounded eigenvalues, i.e. for any Σ∈Ω, c≤λ_min(Σ)≤λ_max(Σ)≤ C for some constant 0<c<C<∞. The log-Euclidean metric is defined by d_E^2(Σ_1,Σ_2)=d_F^2(log_mΣ_1,log_mΣ_2), where log_m is the matrix-log function. Note that log_mΣ has the same dimension as Σ, hence the verification of Assumptions <ref>, <ref> and <ref> follows directly from Example 3. § FUNCTIONAL DATA IN HILBERT SPACE Our proposed tests and DM test are also applicable to the inference of functional data in Hilbert space, such as L_2[0,1], since the norm in Hilbert space naturally corresponds to the distance metric d. In a sense, our methods can be regarded as fully functional <cit.> since no dimension reduction procedure is required. In this section, we further compare them with SN-based testing procedure by <cit.> for comparing two sequences of temporally dependent functional data, i.e. {Y_t^(i)}_t=1^n_i i=1,2, defined on [0,1]. The general idea is to first apply FPCA, and then compare score functions (for mean) or covariance operators (for covariance) between two samples in the space spanned by leading K eigenfunctions. SN technique is also invoked to account for unknown temporal dependence. Although the test statistic in <cit.> targets at the difference in covariance operators of {Y_t^(1)} and {Y_t^(2)}, their test can be readily modified to testing the mean difference. To be specific, denote μ^(i) as the mean function of Y_t^(i), t=1,⋯,n_i, i=1,2, we are interested in testing ℍ_0: μ^(1)(x)=μ^(2)(x), ∀ x∈[0,1]. We assume the covariance operator is common for both samples, which is denoted by C_p. By Mercer’s Lemma, we have C_p=∑_j=1^∞λ_p^jϕ_p^j⊗ϕ_p^j, where {λ^j_p}_j=1^∞ and {ϕ^j_p}_j=1^∞ are the eigenvalues and eigenfunctions respectively. By the Karhunen-Loève expansion, Y_t^(i)=μ^(i)+∑_j=1^∞η_t,j^(i)ϕ^j_p, t=1,⋯,n_i;  i=1,2, where {η_t,j^(i)} are the principal components (scores) defined by η_t,j^(i)=∫_[0,1]{Y_t^(i)-μ^(i)}ϕ^j_p(x)dx=∫_[0,1]{Y_t^(i)-μ_p+μ_p-μ^(i)}ϕ^j_p(x)dx with μ_p=γ_1μ^(1)+γ_2μ^(2). Under ℍ_0, μ^(1)=μ^(2)=μ_p, and η_t,j^(i) should have mean zero. We thus build the SN based test by comparing empirical estimates of score functions. Specifically, define the empirical covariance operator based on the pooled samples as Ĉ_p= 1/n_1+n_2(∑_t=1^n_1𝒴^(1)_t+∑_t=1^n_2𝒴^(2)_t), where 𝒴^(i)_t= Y_t^(i)⊗ Y_t^(i), i=1,2. Denote by {λ̂^j_p}_j=1^∞ and {ϕ̂^j_p}_j=1^∞ the corresponding eigenvalues and eigenfunctions. We define the empirical scores (projected onto the eigenfunctions of pooled covariance operator) for each functional observation as η̂^(i)_t,j=∫_[0,1]{Y_t^(i)(x)-μ̂_p(x)}ϕ̂^j_p(x)dx, t=1,⋯,n_i;  i=1,2;   j=1,⋯, K, where μ̂_p=(∑_t=1^n_1Y_t^(1)+∑_t=1^n_2Y_t^(2))/n is the pooled sample mean function. Let η̂^(i,K)_t,(K)=(η̂^(i)_t,1,⋯,η̂^(i)_t,K)^⊤, and α̂^(K)(r)=(⌊ rn_1⌋)^-1∑_t=1^⌊ rn_1⌋η̂^(1,K)_t-(⌊ rn_2⌋)^-1∑_t=1^⌊ rn_2⌋η̂^(2,K)_t as the difference of recursive subsample mean of empirical scores, we consider the test statistic as ZSM= n[α̂^(K)(1)]^⊤{∑_k=1^nk^2/n^2[α̂^(K)(k/n) -α̂^(K)(1)][α̂^(K)(k/n)-α̂^(K)(1)]^⊤}^-1[α̂^(K)(1)], and under ℍ_0 with suitable conditions, it is expected that ZSM→_d B_K(1)^⊤{∫_0^1(B_K(r)-r B_K(1))(B_K(r)-r B_K(1))^⊤d r}^-1 B_K(1), where B_K(·) is a K-dimensional vector of independent Brownian motions. 
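To make the ZSM construction concrete, here is a minimal sketch of the mean test, assuming curves observed on a common grid over [0,1], balanced samples n_1=n_2=n, and plain NumPy. The function name zsm_statistic, the Riemann-sum discretization, and the quadrature spacing are our own choices, not part of any reference implementation.

```python
import numpy as np

def zsm_statistic(Y1, Y2, K=2, dx=None):
    """Self-normalized two-sample mean test for functional data.

    Y1, Y2 : arrays of shape (n, G); each row is one curve evaluated on a common
             grid of G points on [0, 1].  This sketch assumes n1 == n2 == n.
    K      : number of leading eigenfunctions of the pooled covariance operator
             onto which the curves are projected.
    """
    n, G = Y1.shape
    assert Y2.shape == (n, G), "sketch assumes balanced samples n1 == n2"
    if dx is None:
        dx = 1.0 / G  # crude Riemann-sum spacing; any consistent quadrature weight works

    pooled = np.vstack([Y1, Y2])
    mu_p = pooled.mean(axis=0)              # pooled sample mean function
    C_p = pooled.T @ pooled / (2 * n)       # pooled (uncentered) covariance kernel, as defined above

    # eigenfunctions of the integral operator f -> \int C_p(x, x') f(x') dx'
    eigval, eigvec = np.linalg.eigh(C_p * dx)
    order = np.argsort(eigval)[::-1][:K]
    phi = eigvec[:, order] / np.sqrt(dx)    # normalized so that \int phi_j^2 dx = 1

    # empirical scores projected onto the pooled eigenfunctions
    eta1 = (Y1 - mu_p) @ phi * dx           # shape (n, K)
    eta2 = (Y2 - mu_p) @ phi * dx

    # recursive-subsample differences alpha_hat(k/n), k = 1, ..., n
    cum1 = np.cumsum(eta1, axis=0) / np.arange(1, n + 1)[:, None]
    cum2 = np.cumsum(eta2, axis=0) / np.arange(1, n + 1)[:, None]
    alpha = cum1 - cum2                     # row k-1 holds alpha_hat(k/n)
    alpha_1 = alpha[-1]

    # self-normalizer built from the recursive estimates
    dev = alpha - alpha_1
    w = (np.arange(1, n + 1) / n) ** 2
    V = (w[:, None, None] * dev[:, :, None] * dev[:, None, :]).sum(axis=0)

    return float(n * alpha_1 @ np.linalg.solve(V, alpha_1))
```

The returned value is then compared with quantiles of the pivotal limit above, which can be tabulated by simulating the K-dimensional Brownian motion functional.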
Consider the following model taken from <cit.>, Y_t(x)= ∑_j=1^3{ξ^j, 1_t √(2) sin(2 πj x)+ξ^j, 2_t √(2) cos(2 πj x)}, t=1,2, …,n_1 where the coefficients ξ_t=(ξ^1,1_t, ξ^2,1_t, ξ^3,1_t, ξ^1,2_t, ξ^2,2_t, ξ^3,2_t)^' are generated from a VAR process, ξ_t= ρξ_t-1+√(1-ρ^2) e_t, e_t i.i.d.∼ 𝒩(0,1/2 diag(𝐯)+1/2 1_6)∈ℝ^6 with v=(12, 7, 0.5, 9, 5, 0.3)^⊤. To compare the size and power performance, we generate independent functional time series {Y_t^(1)} and {Y_t^(2)} from the above model, and modify {Y_t^(2)} according to the following settings: * Y_t^(2)(x)= Y_t(x)+20δ_1sin(2π x), x∈[0,1]; * Y_t^(2)(x)= Y_t(x)+20δ_2η_tsin(2π x), x∈[0,1]; * Y_t^(2)(x)= Y_t(x)+20δ_1x, x∈[0,1]; * Y_t^(2)(x)= Y_t(x)+20δ_2η_tx, x∈[0,1]; * Y_t^(2)(x)= Y_t(x)+20δ_11(x∈[0,1]); * Y_t^(2)(x)= Y_t(x)+20δ_2η_t1(x∈[0,1]); where η_ti.i.d.∼𝒩(0,1) and δ_1,δ_2∈[0,0.3]. The size performance of all tests are evaluated by setting δ_1=δ_2=0. As for the power performance, Cases 1m-3m with δ_1∈(0,0.3] correspond to alternatives caused by mean differences and Cases 1v-3v with δ_2∈(0,0.3] correspond to covariance operator differences. In particular, we note the alternative of Cases 1m and 1v depends on the signal function f(x)=sin(2π x), x∈[0,1], which is in the space spanned by the eigenfunctions of Y_t(x), while for Cases 3m and 3v, the signal function f(x)=1(x∈[0,1]) is orthogonal to these eigenfunctions. We denote the two-sample mean test and covariance operator test based on <cit.> as ZSM and ZSV respectively. The empirical size of all tests are outlined in Table <ref> at nominal level α=5%. From this table, we see that (a) D_1 has accurate size across all model settings and D_2 is generally reliable for moderate dependence level, albeit oversize phenomenon for small n when ρ=0.7; (b) DM suffers from severe size distortion when temporal dependence is exhibited even for large n; (c) although both ZSM and ZSV utilize SN to robustify the tests due to temporal dependence, we find their performances depend on the user-chosen parameter K a lot, and still suffer from size distortion when n is small. In particular, the size distortion when K=4 is considerably larger than that for K=2 in the presence of temporal dependence. Figure <ref> further compares their size-adjusted powers when n_1=n_2=400 and ρ=0.4. As can be seen, D_1 possesses trivial power against mean differences while D_2 is rather stable in all settings with evident advantages in Cases 2m and 3m. In contrast, the power performances of DM, ZSM and ZSV vary among different settings. For example, when the alternative signal function is in the span of leading eigenfunctions, i.e. Cases 1m and 1v, ZSM and ZSV with K=2 can deliver (second) best power performances as expected, while they are dominated by other tests when the alternative signal function is orthogonal to eigenfunctions in Cases 3m and 3v. As for DM, it is largely dominated by D_2 in terms of mean differences, although it exhibits moderate advantage over D_2 for covariance operator differences. In general, whether the difference in mean/covariance operator is orthogonal to the leading eigenfunctions, or lack thereof, is unknown to the user. Our test D_2 is robust to unknown temporal dependence, exhibits quite accurate size and delivers comparable powers in all settings, and thus should be preferred in practice. agsm
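For reference, a short sketch of the data-generating process used in the simulation study above. It is an illustration under our reading of the model: we take 1_6 to be the 6×6 all-ones matrix, start the VAR(1) scores from their stationary law, and the helper name simulate_curves is ours.

```python
import numpy as np

rng = np.random.default_rng(123)

def simulate_curves(n, rho, grid=np.linspace(0.0, 1.0, 101)):
    """Generate Y_1, ..., Y_n on `grid` from the VAR(1) score model above."""
    v = np.array([12.0, 7.0, 0.5, 9.0, 5.0, 0.3])
    Sigma = 0.5 * np.diag(v) + 0.5 * np.ones((6, 6))   # innovation covariance (1_6 read as all-ones)
    L = np.linalg.cholesky(Sigma)

    # xi_t = rho * xi_{t-1} + sqrt(1 - rho^2) * e_t, started from the stationary law N(0, Sigma)
    xi = np.zeros((n, 6))
    xi_prev = L @ rng.standard_normal(6)
    for t in range(n):
        e_t = L @ rng.standard_normal(6)
        xi_prev = rho * xi_prev + np.sqrt(1.0 - rho**2) * e_t
        xi[t] = xi_prev

    # basis: sqrt(2) sin(2 pi j x) and sqrt(2) cos(2 pi j x), j = 1, 2, 3
    j = np.arange(1, 4)[:, None]
    sin_b = np.sqrt(2.0) * np.sin(2.0 * np.pi * j * grid[None, :])   # (3, G)
    cos_b = np.sqrt(2.0) * np.cos(2.0 * np.pi * j * grid[None, :])
    return xi[:, :3] @ sin_b + xi[:, 3:] @ cos_b                     # (n, G)

# e.g. Case 1m with delta_1 = 0.1:
x = np.linspace(0.0, 1.0, 101)
Y1 = simulate_curves(400, rho=0.4)
Y2 = simulate_curves(400, rho=0.4) + 20 * 0.1 * np.sin(2 * np.pi * x)
```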
http://arxiv.org/abs/2307.05271v1
20230711140430
Examples of compact $L\Sigma(\leq\omega)$-spaces
[ "Antonio Avilés", "Mikołaj Krupski" ]
math.GN
[ "math.GN", "54D30, 26A21 54F05, 54C60" ]
Universidad de Murcia, Departamento de Matemáticas, Campus de Espinardo 30100 Murcia, Spain. [email protected] Universidad de Murcia, Departamento de Matemáticas, Campus de Espinardo 30100 Murcia, Spain and Institute of Mathematics University of Warsaw ul. Banacha 2 02–097 Warszawa, Poland [email protected] The class of LΣ(≤ω)-spaces was introduced in 2006 by Kubiś, Okunev and Szeptycki as a natural refinement of the classical and important notion of Lindelöf Σ-spaces. Compact LΣ(≤ω)-spaces were considered earlier, under different names, in the works of Tkachuk and Tkachenko in relation to metrizably fibered compacta. In this paper we give counterexamples to several open questions about compact LΣ(≤ω)-spaces that are scattered in the literature. Among other things, we refute a conjecture of Kubiś, Okunev and Szeptycki by constructing a separable Rosenthal compactum which is not an LΣ(≤ω)-space. We also give insight to the structure of first-countable (K)LΣ(≤ω)-compacta. [2010]54D30, 26A21 54F05, 54C60 Examples of compact LΣ(≤ω)-spaces Mikołaj Krupski August 12, 2023 ================================= § INTRODUCTION All spaces under consideration are assumed to by Tychonoff. Given a class 𝒦 of compact spaces, Kubiś, Okunev and Szeptycki introduced in <cit.> the following refinement of the classical notion of a Lindelöf Σ-space: We say that a space X is an LΣ(𝒦)-space if there is a separable metrizable space M and a compact-valued upper semicontinuous onto map p:M→ X with p(z)∈𝒦, for all z∈ M. It is well known that X is a Lindelöf Σ-space if and only if X is an LΣ(𝒦)-space, where 𝒦 is the class of all compact spaces (see <cit.>). In this paper we are mainly concerned with the case when 𝒦 consists of compact metrizable spaces, i.e. compact spaces of countable weight. Following <cit.>, we write LΣ(≤ω) to denote this particular subclass of Lindelöf Σ-spaces. More specifically, we say that X is an LΣ(≤ω)-space if X is an LΣ(𝒦)-space where 𝒦 is the class of compact metrizable spaces. Although the systematic study of LΣ(≤ω)-spaces was initiated in <cit.>, compact LΣ(≤ω)-spaces were investigated earlier, independently by Tkachuk <cit.> (under the name weakly metrizably fibered spaces) and Tkachenko <cit.> (under the name metrizably-approximable spaces), as a natural generalization of so-called metrizably fibered spaces. The aim of the present paper is to give counterexamples to several open question about compact LΣ(≤ω)-spaces scattered in the literature. Let us describe our main results along with some motivations behind them. There is a separable Rosenthal compact space KA which is not an LΣ(≤ω)-space. A compact space K is a Rosenthal compact space if K is homeomorphic to a subspace of the space B_1(X) of Baire class one functions on a Polish space X, equipped with the pointwise topology. It is was proved in <cit.> that if K is homeomorphic to a subspaces of B_1(X) consisting of functions with countably many discontinuities, then K is an LΣ(≤ω)-space. This result motivated Kubiś, Okunev and Szeptycki to ask whether analogous assertion holds for all Rosenthal compacta (see <cit.>, <cit.> or <cit.>). Example <ref> provides a negative answer to this question. Moreover, since the class of LΣ(≤ω)-spaces is stable under continuous images, the space KA from Example A is a Rosenthal compactum which is not a continuous image of any compact subset of B_1(ω^ω) consisting of functions with countably many discontinuities. 
As far as we know this is the first example of that sort (see <cit.> and Remark <ref> below). We also give another counterexample to the question of Kubiś, Okunev and Szeptycki mentioned above. It is non-separable but has other interesting features. We need some notation first. For a set Γ by Σ(ℝ^Γ) we denote the following subset of the product ^Γ Σ(^Γ)={x∈^Γ:|{γ∈Γ:x(γ)≠0}|≤ω}. Put Σ({0,1}^Γ)=Σ(^Γ)∩{0,1}^Γ. Recall that a compact space which, for some Γ, is homeomorphic to a subspace of Σ(^Γ) is called Corson compact. A compact space which is homeomorphic to a weakly compact subset of a Banach space is called an Eberlein compact space. A compact space K is Gul'ko compact if the space C_p(K) of continuous functions on K equipped with the pointwise topology, is a Lindelöf Σ-space. It is well known that the class of Gul'ko compacta lies strictly between the class of Eberlein compacta and the class of Corson compact spaces. There is a compact space KB∈ B_1(^2)∩Σ({0,1}^^2) such that KB is not an LΣ(≤ω)-space yet it is an LΣ(ℰ)-space, where ℰ is the class of Eberlein compact spaces of cardinality not exceeding continuum. It was proved by Tkachuk in <cit.> that any Eberlein compact space of cardinality at most continuum is an LΣ(≤ω)-space. Later Molina Lara and Okunev <cit.> generalized this to the class of Gul'ko compacta (see Section 5 below for a further generalization). However, similar result for the class of Corson compact space is no longer true; a suitable example of a Corson compact space which is not LΣ(≤ω) was given in <cit.>. Example <ref> is a different space of that sort and has additional property of being Rosenthal. Moreover, since it is an LΣ(ℰ)-space, it can serve as a counterexample to Problems 4.10 and 4.11 in <cit.>. The next example provides a negative answer to Problems 4.12–4.14 in <cit.>. There is a Corson compact LΣ(≤ω)-space which is not Gul'ko compact. Let n be a positive integer. We say that a space X is metrizably fibered (n-fibered) if there is a metrizable space M and a continuous map f:X→ M all of whose fibers f^-1(z) are metrizable (have cardinality at most n). Every metrizably fibered compact space X is first-countable (see <cit.>). Let D be the Alexandroff double circle space (see <cit.>). Clearly, D is 2-fibered. By identifying all of the nonisolated points of D we obtain a map of D onto A(𝔠), the one point compactification of a discrete set of size continuum. Since A(𝔠) is not first-countable, it is not metrizably fibered. Hence, the class of metrizably fibered compacta is not invariant under continuous images. On the other hand, the class of compact LΣ(≤ω)-spaces is invariant under continuous images and every metrizably fibered compact space is LΣ(≤ω). In view of the above Tkachuk asked in <cit.> the following two questions: <cit.> Let K be a metrizably fibered compact space. Is it true that every first-countable continuous image of K is metrizably fibered? <cit.> Is any first-countable compact LΣ(≤ω)-space a continuous image of a metrizably fibered space? Regarding Question <ref>, we prove that a lexicographic product of countably many intervals is a continuous image of a metrizably fibered compact space. In consequence the lexicographic product of three intervals can serve as a counterexample to Question <ref>, as this space is not metrizably fibered (cf. <cit.>). Actually, we have the following: There is a compact first-countable space KD which is not metrizable fibered yet KD is a continuous image of a compact 3-fibered space. 
It turns out however that we cannot replace 3-fibered by 2-fibered above. Namely, we shall prove (see Corollary <ref> below): Let K be a first-countable space. If K is a continuous image of a 2-fibered compact space, then K is metrizably fibered. Regarding Question <ref>, we obtain a partial solution given by the following: There is a compact first-countable LΣ(≤ω)-space KE which is is not a continuous image of any compact metrizably fibered space. § PRELIMINARIES In this section we collect basic definitions and facts that are used throughout the paper. A set-valued map from a space X to a space Y is a function that assigns to every point of X a subset of Y. We say that a set-valued map p:X→ Y is: * onto if ⋃{p(x):x∈ X}=Y; * compact-valued if p(x) is compact for all x∈ X; * upper semicontinuous if for every open subset U of Y, the set {x∈ X:p(x)⊆ U} is open in X. A space X is an LΣ(≤ω)-space if there is a separable metrizable space M and a compact-valued upper semicontinuous onto map p:M→ X such that p(z) is metrizable for all z∈ M. The following fact is a part of folklore (cf. <cit.>): Let K be a compact space. The following two conditions are equivalent: * K is an LΣ(≤ω)-space * There is a countable cover 𝒞 of K consisting of closed subsets of K such that for every x∈ K the intersection ⋂{C∈𝒞:x∈ C} is metrizable. Spaces (not necessarily compact) satisfying condition (B) above were first considered independently by Tkachuk in <cit.> and Tkachenko in <cit.>. In <cit.> they are called weakly metrizably fibered spaces, whereas in <cit.> they are considered under the name metrizably-approximable spaces. They were introduced and studied as a natural generalization of an important class of metrizably fibered spaces defined below. Let n be a positive integer. A space X is metrizably fibered (n-fibered) if there is a metrizable space M and a continuous map f:X→ M such that f^-1(z) is metrizable (have cardinality at most n) for all z∈ M. The following notion was introduced in <cit.>: A space X is a KLΣ(≤ω)-space if there is a compact metrizable space M and a compact-valued upper semicontinuous onto map p:M→ X such that p(z) is metrizable for all z∈ M. The following conditions are equivalent for any space K: * K is a KLΣ(≤ω)-space. * K is a continuous image of a compact metrizably fibered space. * There is a family {C_s:s∈ 2^<ω} of closed subsets of K satisfying the following conditions: * C_∅ =K * C_s=C_s⌢ 0∪ C_s⌢ 1, for every s∈ 2^<ω * for every σ∈ 2^ω the set ⋂_nC_σ|n is metrizable. Suppose that K is a KLΣ(≤ω)-space. Fix a compact metrizable space M and an upper semicontinuous onto map p:M→ K so that p(z) is metrizable for each z∈ M. Let L={ z,x ∈ M× K:x∈ p(z)} be the graph of p. Since p is upper semicontinuous, the set L is closed in M× K and hence it is compact. The projection onto the second coordinate maps L onto K (because p is onto), whereas the projection onto the first coordinate maps L onto a (metrizable) subspace of M and its fibers are of the form p(z), so they are metrizable. This proves (A)⇒ (B). To prove (B)⇒ (A) fix a compact space L, a compact metrizable space M, a continuous surjection f:L→ K and a continuous onto map g:L→ M such that g^-1(z) is metrizable for every z∈ M. It is easy to check that the assignment z↦ f(g^-1(z)) is an upper semicontinuous compact-valued map from M onto K and f(g^-1(z)) is metrizable being a continuous image of a compact metrizable space g^-1(z). 
To show (A)⇒ (C), fix a compact metric space M and a compact valued upper semicontinuous map p:M→ K such that p(M)=K and p(x) is metrizable, for all x∈ M. The space M is a continuous image of the Cantor set 2^ω. Hence, that there is a family {F_s:s∈ 2^<ω} of closed subsets of M such that F_∅=M, F_s=F_s⌢ 0∪ F_s⌢ 1 and for every σ∈ 2^ω the set ⋂_n F_σ|n is a singleton. For s∈ 2^<ω, we define C_s=⋃{p(x):x∈ F_s}. Since p is compact-valued and upper semicontinuous, the set C_s is compact, for every s∈ 2^<ω. Condition (i) follows from the fact that p is onto, and condition (ii) is clear. To show (iii), observe that if σ∈ 2^ω, then ⋂_nF_σ|n={a} for some a∈ M and p(a)=⋂_n C_σ|n by compactness and upper semicontinuity of p. Indeed, for any open set U in K, if p(a)⊆ U, then the set V={x∈ M: p(x)⊆ U} is an open neighborhood of a in M. Hence, F_σ|n⊆ V, for all but finitely many n's. This gives C_σ|n⊆ U for all but finitely many n's so p(a)=⋂_n∈ω C_σ|n. But p(a) is metrizable by our assumption on p. For (C)⇒ (A), fix a family {C_s:s∈ 2^<ω} as in condition (C). It can be easily verified that the multivalued map p:2^ω→ K given by p(σ)=⋂_nC_σ|n is is compact-valued upper semicontinuous and p(σ) is metrizable, for every σ∈ 2^ω, by (iii). For a subset A of a space X, we denote by χ_A:X→{0,1} the characteristic function of the set A, given by the formula: χ_A(x)= { 1 x∈ A 0 x∉ A . It follows from the Baire criterion that if X is a Polish space and A⊆ X, then χ_A is a Baire class one function if and only if the set A is both G_δ and F_σ. By 𝕊 we will denote the split interval, i.e. the space ((0,1]×{0})∪ ([0,1)×{1}) endowed with the lexicographic order topology. To simplify notation, for x∈ [0,1], the points x,0 , x,1∈𝕊 will be denoted by x^- and x^+ respectively. For a subset A of a topological space we denote by A the closure of A. The interior of A is denoted by A. § EXAMPLE A: A SEPARABLE ROSENTHAL COMPACT THAT IS NOT AN LΣ(≤Ω)-SPACE In this section we will construct a space KA from Example <ref>. For a∈^2 let D(a) be the open disk of radius 1 centered at a. By ∂ D(a) we denote the boundary (i.e. the circumference) of D(a). For a= a_1,a_2∈^2 and θ∈ we set L_0(a,θ)=D(a)∪{ a_1+cosφ,a_2+sinφ:θ<φ<θ+π} L_1(a,θ)=D(a)∪{ a_1+cosφ,a_2+sinφ:θ<φ≤θ+π} L_2(a,θ)=D(a)∪{ a_1+cosφ,a_2+sinφ:θ≤φ<θ+π} Note that L_i(a,θ) is the open disk of radius 1 centered at a with a half of its circumference attached (without endpoints if i=0 or with precisely one endpoint if i=1,2, see Figure <ref> below). Let 𝒟={D(p):p∈ℚ^2} and let Y={χ_D:D∈𝒟} be considered as a subspace of the Cantor cube {0,1}^ℝ^2. Our compact space KA is the closure of Y in {0,1}^ℝ^2. Let ℬ={L_i(a,θ):i∈{0,1,2}, a∈^2, θ∈} We have the following: Y∪{χ_B:B∈ℬ}∪{χ_∅}⊆ KA. A basic clopen neighborhood of χ_∅ in {0,1}^ℝ^2 is of the form C_F={ξ∈{0,1}^^2:ξ(x)=0x∈ F}, where F⊆ℝ^2 is finite. Since for any finite F⊆^2 we can find D∈𝒟 disjoint from F, the set Y meets C_F for each F. This gives χ_∅∈ KA. Fix x= x_1,x_2∈^2 and θ∈. Consider the following two points in ^2: α= x_1+cosθ,x_2+sinθβ= x_1+cos(θ+π), x_2+sin(θ+π) The points α and β are the endpoints of the half of the circumference that is contained in each L_i(x,θ), i=0,1,2 (see Figure <ref>). χ_L_0(x,θ)∈ KA Let l be the line through α and β. The set ^2∖ l is the union of two disjoint open half-planes H and H' where H contains the set L_0(x,θ)∖ D(x). To show that χ_L_0(x,θ)∈ KA let us consider the following subset of ^2 (cf. Figure <ref> below): U=D(x)∩ H∩(^2∖(D(α)∪D(β))) Clearly, U is open in ^2 and x∈U. 
So there is a sequence {p_n∈ℚ^2∩ U:n∈ω} that converges to x in ^2. We will check that the sequence {χ_D(p_n):n∈ω} pointwise converges to χ_L_0(x,θ). Since (p_n)_n∈ω converges to x, if z∈ D(x) then χ_D(p_n)(z)=1 for sufficiently large n. Similarly, if z∉D(x) then eventually χ_D(p_n)(z)=0. It remains to verify that (χ_D(p_n)(z))_n∈ω converges to χ_L_0(x,θ)(z) for z∈∂ D(x). For every n∈ω denote {a_n, b_n}=∂ D(p_n)∩∂ D(x). Let us assume that a_n is closer to α than b_n, and b_n is closer to β than a_n. Since p_n∈ U, we have a_n∈ H b_n∈ H The points a_n, b_n divide the circle ∂ D(x) into two open arcs Γ_n and Δ_n. One of them, say Γ_n, is entirely contained in D(p_n) whereas Δ_n is disjoint from D(p_n) (see Figure <ref> below). Since (p_n)_n∈ω converges to x, it follows that the sequences (a_n)_n∈ω and (b_n)_n∈ω converge to α and β, respectively. So if z∈ L_0(x,θ)∩∂ D(x), then z∈Γ_n for sufficiently large n. Since Γ_n⊆ D(p_n), we infer that eventually χ_D(p_n)(z)=1=χ_L_0(x,θ)(z). If z∈∂ D(x)∖ L_0(x,θ), then by (<ref>), z∈Δ_n. As Δ_n∩ D(p_n)=∅ for all n, we conclude that χ_D(p_n)(z)=0=χ_L_0(x,θ)(z) for all n. By Claim, χ_L_0(x, θ+1/n), χ_L_0(x, θ-1/n)∈ KA, n≥ 1. The sequences (χ_L_0(x,θ+1/n))_n∈ω and (χ_L_0(x,θ-1/n))_n∈ω pointwise converge to χ_L_1(x,θ) and χ_L_2(x,θ), respectively. Hence, χ_L_1(x,θ), χ_L_2(x,θ)∈ KA. The space KA is a separable Rosenthal compact space and KA=Y∪{χ_B:B∈ℬ}∪{χ_∅}. By Lemma <ref> we only need to show that KA⊆ Y∪{χ_B:B∈ℬ}∪{χ_∅}. The set Y is countable and consists of Baire class one functions on ^2 (because each function in Y is the characteristic function of an open disk which is simultaneously F_σ and G_δ in ^2). By Bourgain-Fremlin-Talagrand theorem (cf. <cit.>), it is enough to check that every sequence of elements of Y has a subsequence convergent to an element of Y∪{χ_B:B∈ℬ}∪{χ_∅}. To this end, take a nontrivial sequence {χ_D(p_n):p_n∈ℚ^2} of elements of Y. If (p_n)_n∈ω is unbounded, then there is a subsequence (p_n_k)_k∈ω with p_n_k→ +∞, whence χ_D(p_n_k) converge to χ_∅ and we are done in this case. Suppose that (p_n)_n∈ω is bounded in ^2. By passing to a suitable subsequence, we may without loss of generality assume that (p_n)_n∈ω converges to a point p∈^2. In particular, for sufficiently large n, the set ∂ D(p_n)∩∂ D(p)={a_n,b_n} for some distinct points a_n,b_n∈^2. The points a_n and b_n partition the circle ∂ D(p) into two open arcs Γ_n and Δ_n. One of them, say Γ_n, is contained in D(p_n) whereas Δ_n∩ D(p_n)=∅ (cf. Figure <ref>; now the point p_n is not necessarily in U). By compactness of ∂ D(p), there are subsequences (a_n_k)_k∈ω and (b_n_k)_k∈ω convergent to some points a∈∂ D(p) and b∈∂ D(p), respectively. Since points p_n converge to p, we must have d(a,b)=2, where d(·,·) is the distance between points in ^2. At least one of the following three cases holds: Case 1: The set {k∈ω: a∈Γ_n_k} is infinite. As Γ_n⊆ D(p_n) and Δ_n∩ D(p_n), it follows that a∈ D(p_n_k) and b∉ D(p_n_k) for infinitely many k's. Therefore, there is a subsequence of {D(p_n_k):k∈ω}, convergent to L_1(p,θ) or L_2(p,θ), for some θ. Case 2: The set {k∈ω: b∈Γ_n_k} is infinite. This case is analogous to the previous one. Case 3: Both {k∈ω: a∈Γ_n_k} and {k∈ω: b∈Γ_n_k} are finite. It is easy to see that in this case the sequence {D(p_n_k):k∈ω} converges to L_0(p,θ), for some θ. The space KA is not an LΣ(≤ω)-space. Let ℝ^2∪{∞} be the one point compactification of ^2. Consider the function s:𝒟∪ℬ→ℝ^2 that assigns to each disk its center. 
Identifying a set with its characteristic function and declaring that s(∅)=∞, we may view s as a map s:KA→ℝ^2∪{∞} (cf. Lemma <ref>). Observe that s is sequentially continuous. True, if a sequence (ξ_n)_n∈ω of elements of KA converges to χ_∅ then the sequence (s(ξ_n))_ω has no convergent subsequence so it converges to ∞. If (ξ_n)_n∈ω converges to ξ∈ KA∖{χ_∅}, then the sequence (s(ξ_n))_ω must converge to the center of the disk defining ξ. Since s is sequentially continuous, it is continuous by Bourgain-Fremlin-Talagrand theorem. We will show that condition (B) of Proposition <ref> fails. To this end, fix a countable compact cover 𝒞={C_n:n∈ω} of KA. Since s is continuous, the set A_n=s(C_n)∩^2 is closed in ℝ^2, for all n∈ω. By the Baire Category Theorem there is p= p_1,p_2 ∈^2∖⋃_n∈ω(A_n∖ A_n ). We will show that if ξ∈ s^-1(p), then the set L=⋂{C_n∈𝒞:ξ∈ C_n} contains a copy of the split interval 𝕊 and hence, it is nonmetrizable. Consider K_p=s^-1(p)∩{χ_B:B∈ℬ}, i.e. K_p consists of all elements of KA corresponding to non-open disks centered at p. Fix ξ∈ s^-1(p) and let L=⋂{C_n∈𝒞:ξ∈ C_n}. Note that ξ∈ C_n implies that s(ξ)=p∈ A_n. By the choice of p this means that p∈ A_n. We have K_p⊆ C_n for all n with ξ∈ C_n. Let us first show that χ_L_0(p,θ)∈ C_n for all θ. To this end, fix θ and consider the following point m∈ℝ^2: m= p_1+cos(θ+π/2),p_2+sin(θ+π/2). Note that m is the midpoint of the arc lying on the boundary of L_0(p,θ). Let I be the open interval in ^2 joining m and p. It follows from p∈ A_n that there is a sequence {a_k∈ I∩ A_n:k∈ω} convergent to p. Since a_k∈ A_n, for each k we may find ξ_k∈ C_n so that s(ξ_k)=a_k. It is easy to see that (ξ_k)_k∈ω pointwise converges to χ_L_0(p,θ). Therefore χ_L_0(p,θ)∈ C_n because C_n is closed. The sequences (χ_L_0(p,θ+1/i))_n∈ω and (χ_L_0(p,θ-1/i))_i∈ω pointwise converge to χ_L_1(p,θ) and χ_L_2(p,θ), respectively. As we have already proved χ_L_0(p,θ+1/i), χ_L_0(p,θ-1/i)∈ C_n. Hence χ_L_1(p,θ), χ_L_2(p,θ)∈ C_n. Define a map f:𝕊→ K_p by letting f(x^-)=χ_L_2(p,x), f(x^+)=χ_L_1(p,x). It can be readily checked that f is one-to-one and continuous so it is an embedding. By Claim, K_p⊆ L so L contains a copy of 𝕊. We may consider the following two subclasses of the class ℛ of all Rosenthal compacta. By ℛ𝒦 denote the class of compact spaces that are homeomorphic to a subspace of B_1(C) of Baire class one functions on the Cantor set C. A compact space K belongs to the class 𝒞𝒟 if K is homeomorphic to a compact subset of ℝ^X consisting of functions with countably many discontinuities, for some Polish space X. It is well known that 𝒞𝒟⊆ℛ𝒦⊆ℛ (see <cit.>). The first example of a Rosenthal compact space K which is not in ℛ𝒦 was given by Pol in <cit.>. However only very recently the first example of a space distinguishing 𝒞𝒟 and ℛ𝒦 was found by the first author and Todorcevic <cit.>. Note that KA may be viewed as a compact subspace of B_1(ℝ^2∪{∞}) of Baire class one functions on the one point compactifications of the plane ^2. Thus, KA∈ℛ𝒦. If K∈𝒞𝒟, then K is an LΣ(≤ω)-space (see <cit.>). Hence, by Theorem <ref>, we get KA∉𝒞𝒟. Moreover, since the class of LΣ(≤ω)-spaces is stable under taking continuous images <cit.>, the space KA is not a continuous image of any space K∈𝒞𝒟. § EXAMPLE B: A COUNTABLY SUPPORTED ROSENTHAL COMPACT THAT IS NOT AN LΣ(≤Ω)-SPACE In this section, we will construct a space KB from Example B. The space KB which we are going to define will be a compact subspace of B_1(^2)∩Σ({0,1}^^2). 
Let 𝒜 be the collection of all subsets A of ^2 that simultaneously satisfy the following two conditions: x_1,y_1,… , x_n,y_n∈ A x_1<… < x_n∀i |x_i+1-x_i|≤ 2^-i. x∈ ({x}×)∩ A . Let KB={χ_A:A∈𝒜} be considered as a subspace of the Cantor cube {0,1}^ℝ^2. Since the definition of KB is of finite character, KB is compact. Let π:^2→ be the projection onto the first coordinate. Note that condition (<ref>) implies that for every A∈𝒜, the set π(A) has at most one accumulation point. Hence, by (<ref>), the set A is countable and has at most one non-isolated point. It follows that all elements of KB are of the first Baire class and belong to the Σ-product Σ({0,1}^^2). In the remaining part of this section, we will use the following notation. For an infinite A∈𝒜, by α(A) we will denote the unique accumulation point of π(A). For t∈^2, the map p_t:KB→{0,1} is the projection onto coordinate t. The space KB is not an LΣ(≤ω)-space. Striving for a contradiction, suppose that 𝒞={C_1,C_2,…} is a countable closed cover of KB such that for any ξ∈ KB, the set C_ξ=⋂{C∈𝒞:ξ∈ C} is metrizable (cf. Proposition <ref>). Consider the family 𝒜_0⊆𝒜 that consists of all infinite elements A of 𝒜 that satisfy the following conditions: α(A)∉π(A). x_1,y_1 ,… , x_n,y_n∈ A x_1<… <x_n, |α(A)-x_i|<2^-i. Take A∈𝒜_0. Clearly, for any y∈, the set A_y=A∪{α(A),y} belongs to 𝒜. Let K(A)={χ_A}∪{χ_A_y:y∈}. It is easy to see that K(A), considered as a subspace of {0,1}^^2, is a copy of the one point compactification of a discrete set of size 𝔠. Hence, the set {y∈:χ_A_y∈ C_χ_A} must be countable, as C_χ_A is metrizable. In particular, for every A∈𝒜_0 and every finite set F⊆, we have {n:χ_A∈ C_n χ_A_y∉ C_n y∈∖ F}≠∅. Let n_1=min{n: ∃ A∈𝒜_0 χ_A∈ C_nχ_A_y∉ C_n y∈}. Fix X^0∈𝒜_0 and a(0)∈ such that χ_X^0∈ C_n_1 and χ_X^0_a(0)∉ C_n_1. Let U_1 be a basic clopen neighborhood of χ_X^0_a(0) disjoint from C_n_1. The set U_1 can be taken of the form U_1=⋂{p^-1_t(0):t∈ F^1_0}∩⋂{p^-1_t(1):t∈ F^1_1}, for some finite sets F^1_0, F^1_1⊆^2. Write F^1_1={ x^1_1,y^1_1,… , x^1_m(1),y^1_m(1)}, where x^1_1<… <x^1_m(1) and x^1_m(1)=α(X^0), y^1_m(1)=a(0). Let γ_1=min{x^1_i+2^-i:i=1,… ,m(1)}. Since X^0∈𝒜_0 and F^1_1⊆ X^0_a(0), by definition of γ_1 we get γ_1>α(X^0) (see (<ref>)). Also, we infer from (<ref>) that the family 𝒜_1={A∈𝒜_0:χ_A∈ U_1 α(A)<γ_1}, is nonempty. Inductively, we construct: * an increasing sequence n_1<n_2<… of positive integers, * increasing sequences of finite sets F^1_0⊆ F^2_0⊆… and F^1_1⊆ F^2_1⊆… * a sequence γ_1≥γ_2≥… of real numbers and * a decreasing sequence of nonempty families 𝒜_0⊇𝒜_1⊇𝒜_2⊇… such that if F_1^k={ x_1,y_1,… , x_m(k),y_m(k)} x_1<… <x_m(k) m(1)<m(2)<… and if U_k=⋂{p^-1_t(0):t∈ F^k_0}∩⋂{p^-1_t(1):t∈ F^k_1}, then * γ_k=min{x_i+2^-i:i=1,… ,m(k)}, * 𝒜_k={A∈𝒜_0:χ_A∈ U_k α(A)<γ_k}, * U_k∩ C_n_k=∅ * n_k+1=min{n:∃ A∈𝒜_k χ_A∈ C_n χ_A_y∉ C_n y∈}. * x_m(k+1)<γ_k Fix k≥ 1 and suppose that n_i , F^i_0, F^i_1, γ_i and 𝒜_i are constructed for all i≤ k. Let F be the projection of the set F^k_0 onto the second coordinate. Define (cf. (<ref>)) n_k+1=min{n: ∃ A∈𝒜_k χ_A∈ C_nχ_A_y∉ C_n y∈∖ F}. Since 𝒜_k⊆𝒜_k-1, we infer from the inductive assumption on n_k (cf. (iii)) that n_k<n_k+1. Pick X^k∈𝒜_k and a(k)∈∖ F with χ_X^k∈ C_n_k+1 and χ_X^k_a(k)∉ C_n_k+1. Note that X^k∈𝒜_k and a(k)∉ F imply that χ_X^k_a(k)∈ U_k. We take U_k+1⊆ U_k a basic clopen neighborhood of χ_X^k_a(k) disjoint from C_n_k+1 of the form U_k+1=⋂{p^-1_t(0):t∈ F^k+1_0}∩⋂{p^-1_t(1):t∈ F^k+1_1}, for some finite sets F^k+1_0, F^k+1_1⊆^2 containing F^k_0 and F^k_1, respectively. 
Write F^k+1_1={ x_1,y_1 ,… , x_m(k+1),y_m(k+1)}, where m(k+1)>m(k), x_1<… <x_m(k+1)=α(X^k) and y_m(k+1)=a(k). We set γ_k+1=min{x_i+2^-i:i=1,… ,m(k+1)}. Since F^k_1⊆ F^k+1_1, we get γ_k+1≤γ_k, by (i). Similarly as for γ_1, we argue that γ_k+1>α(X^k) and that the family 𝒜_k+1={A∈𝒜_k:χ_A∈ U_k+1α(A)<γ_k+1}, is nonempty. This finishes the inductive construction. Let S=⋃_k=1^∞ F^k_1. The set S is infinite because for each k, we have (α(X^k),a(k))∈ F^k+1_1∖ F^k_1. Moreover, from (v) and γ_1≥γ_2≥…, we get S∈𝒜_k for all k=0,1,…. K(S)⊆⋂{C∈𝒞:χ_S∈ C}. Pick z∈. Aiming at a contradiction, suppose χ_S∈ C_n and χ_S_z∉ C_n, for some n. Since n_1<n_2<…, there exists k such that n<n_k+1. But S∈𝒜_k, so this contradicts the choice of n_k+1 (see (iv)). Since K(S) is nonmetrizable being a copy of the one point compactification of an uncountable discrete set, it follows that ⋂{C∈𝒞:χ_S∈ C} is nonmetrizable, contradicting the assumption on the cover 𝒞. Denote by ℰ the class of Eberlein compacta of cardinality not exceeding continuum. We say that a space X is an LΣ(ℰ)-space if there is a separable metrizable space M and a compact-valued upper semicontinuous onto map p:M→ X such that p(z)∈ℰ for all z∈ M. If X is compact this is equivalent to saying that there is a countable closed cover 𝒞 of X such that for every x∈ X the intersection ⋂{C∈𝒞:x∈ C} is Eberlein of cardinality not exceeding continuum (cf. Proposition <ref>). The space KB is an LΣ(ℰ)-space. Let ℬ be a countable basis for ^2. For every U∈ℬ, let C_U={ξ∈ KB:∀ a∈ U ξ(a)=0}. It is clear that C_U is closed in KB. So the family 𝒞={C_U:U∈ℬ}, is a countable closed cover of KB. Fix Y∈𝒜. Observe that if χ_A∈⋂{C∈𝒞:χ_Y∈ C}, then A⊆Y (the closures are taken in ℝ^2). So if Y is finite, then ⋂{C∈𝒞:χ_Y∈𝒞} is finite. Suppose that Y is infinite and let α(Y) be the accumulation point of π(Y). Let Y'=Y∖ ((α(Y)×). We have Y⊆ Y∪ (α(Y)×) so restricting coordinates we get ⋂{C∈𝒞:χ_Y∈ C}⊆{0,1}^Y'∪ (α(Y)×) and using condition (<ref>) we infer that ⋂{C∈𝒞:χ_Y∈ C} embeds into the product {0,1}^Y'×σ_1({0,1}^α(Y)×), where σ_1({0,1}^α(Y)×) is the subset of {0,1}^α(Y)× consisting of elements with at most one nonzero coordinate. Since σ_1({0,1}^α(Y)×) is homeomorphic to the one point compactification of the discrete set of size 𝔠 and Y is countable, the space {0,1}^Y'×σ_1({0,1}^α(Y)×) is an Eberlein compact space of cardinality continuum. It is known (see <cit.>) that if K∈ℰ, then K is an LΣ(≤ω)-space. Hence we have the following corollary to Theorem <ref> and Proposition <ref> which answers Problems 4.10 and 4.11 from <cit.> in the negative: The space KB is an LΣ(LΣ(≤ω))-space but not an LΣ(≤ω)-space. In particular, the classes of LΣ(LΣ(≤ω))-spaces and of LΣ(≤ω)-spaces are different in the realm of compacta. § EXAMPLE C AND CORSON COMPACTA THAT ARE LΣ(≤Ω)-SPACES Given a set Γ and a point x∈ℝ^Γ, the support of x is the set x={γ∈Γ:x(γ)≠ 0}. The following subclass of Corson compacta was introduced by A. Leiderman <cit.> under the name almost Gul'ko compact spaces. Our notation follows Todorcevic <cit.>. We refer the interested reader to <cit.> for a more general notion of ℰ_2(θ)-spaces. We say that a compact subspace K of some Σ-product Σ(ℝ^Γ) has the property ℰ_2(ℵ_1) if there is a sequence (Γ_n)_n∈ω of subsets of Γ such that if, for x∈ K, we let N_x={n∈ω: | x∩Γ_n|<ℵ_0}, then the set Γ∖⋃_n∈ N_xΓ_n is countable. We will give an elementary proof the following If a compact space K of cardinality ≤𝔠 has the property ℰ_2(ℵ_1), then K is an LΣ(≤ω)-space. 
Sokolov showed in <cit.> that every Gul'ko compact space has the property ℰ_2(ℵ_1). It is also known that there are compact spaces of cardinality 𝔠 which are not Gul'ko compact but have the property ℰ_2(ℵ_1) (see <cit.> or <cit.>). Therefore, Theorem <ref> generalizes <cit.> and provides a space for Example <ref>. It is worth mentioning that another extension of <cit.> (different than ours), using a similar concept as in Definition <ref>, was given by Rojas-Hernández in <cit.>. Fix a set Γ such that K⊆Σ(^Γ) and fix a sequence (Γ_n)_n∈ω of subsets of Γ as in Definition <ref>. Consider T=⋃{ x:x∈ K}. Since K⊆Σ(^Γ) satisfies |K|≤𝔠, the set T has cardinality at most 𝔠. The restriction mapping x↦ x↾ T embeds K into the Σ-product Σ(^T) and if we let T_n=Γ_n∩ T, then the set T and the sequence (T_n)_n∈ω have the property from Definition <ref>. Denote by I the unit interval [0,1] and fix an arbitrary injection h:T→ I. Put I_n+1=h(T_n) and I_0=I∖(h), where (h) is the range of h. It is clear that h defines a homeomorphic embedding of K into the Σ-product Σ(^I) and if, for x∈ K, we let N_x={n∈ω:| x∩ I_n|<ℵ_0}, then the set I∖⋃_n∈ N_x I_n is countable. For rationals p<q and integer n define C_n,p,q={x∈ K:∀γ∈ (p,q)∩ I_n x(γ)=0}. It can be easily checked that the set C_n,p,q is closed in K. If z∈ K, then I∖⋃{I_n:n∈ N_z} is countable. Hence, for some m∈ N_z the set I_m is uncountable. Since z∩ I_m is finite, there are rationals r<t such that z(γ)=0 for every γ∈ (r,t)∩ I_m. This shows that z∈ C_m,r,t and thus the family {C_n,p,q: p,q∈ℚ, n∈ω} is a countable closed cover of K. Fix z∈ K and let C_z=⋂{C_n,p,q:z∈ C_n,p,q}. The set R_z=I∖(⋃_n∈ N_zI_n) is countable. We claim that x∈ C_z x⊆ z∪ R_z. Indeed, otherwise x(γ)≠ 0 and z(γ)=0 for some γ∈⋃_n∈ N_zI_n. Take n∈ N_z with γ∈ I_n. Since n∈ N_z, the set F= z∩ I_n is finite and γ∉ F. It follows that there are rationals p<q such that γ∈ (p,q) (p,q)∩ F=∅. This gives z∈ C_n,p,q and thus x∈ C_n,p,q (because x∈ C_z). This contradicts x(γ)≠ 0. Since the set z∪ R_z is countable, we infer from (<ref>) that the set C_z is metrizable. § EXAMPLE D: A FIRST COUNTABLE KLΣ(≤Ω)-SPACE NEED NOT BE METRIZABLY FIBERED Tkachuk asked in <cit.> whether every first-countable continuous image of a metrizably fibered compactum is metrizably fibered (see Question <ref>). A related question of Tkachuk, whether every first-countable continuous image of the lexicographic square is metrizably fibered, was answered in the affirmative by Daniel and Kennaugh <cit.> (see <cit.> for a slight generalization). However, as we we will show in this section, in general the answer to Question <ref> is in the negative (see Corollary <ref>). The following fact is easy do derive. Let K be a compact space. The following conditions are equivalent: * K∈ KLΣ(≤ω) * There is a compact metric space M and a closed subspace F of K× M such that π_1(F)=K and π_2^-1(y)∩ F is metrizable for every y∈ M, where π_1:K× M→ K and π_2:K× M→ M are the projections. The implication (B)⇒ (A) follows from <cit.>. To show the converse, suppose that K∈ KLΣ(≤ω). By definition, there is a compact metric space M and a compact-valued upper semicontinuous map p:M→ K such that K=⋃{p(y):y∈ M} and p(y) is metrizable for all y∈ M. Let F={ x,y ∈ K× M:x∈ p(y)}. The set F is closed being the graph of the upper semicontinuous function p. It is easy to verify that F is as required. For an ordinal number λ, by [0,1]^λ_lex we denote the space [0,1]^λ endowed with the order topology given by the lexicographic product order. 
We write [0,1]^λ when the usual product topology on [0,1]^λ is considered. Let us show the following: For every countable ordinal λ<ω_1, the compact space [0,1]^λ_lex is a KLΣ(≤ω)-space. Consider the following subset F of [0,1]^λ_lex× [0,1]^λ: F={ (x_α), (y_α) : (∃β) (∀α<β x_α=y_α) (∀α≥β x_α=0 ∀α≥β x_α=1 )}. The set F is closed in [0,1]^λ_lex× [0,1]^λ. Pick (x_α), (y_α) ∈ [0,1]^λ_lex× [0,1]^λ so that (x_α), (y_α) ∉ F. Let β=min{α<λ: x_α≠ y_α}. Let A and B be two disjoint open subsets of [0,1] satisfying x_β∈ A and y_β∈ B. Denote by p_β:[0,1]^λ→ [0,1] the projection onto the coordinate β. Consider the following three cases: Case 1: x_β∉{0,1}. Consider the following set: W={(z_α)∈ [0,1]^λ_lex:z_α=x_αα<βz_β∈ A∩ (0,1)}. It is easy to see that W is open in [0,1]^λ_lex and (x_α)∈ W. Hence, the set W× p_β^-1(B) is an open neighborhood of (x_α), (y_α) in [0,1]^λ_lex× [0,1]^λ disjoint from F. Case 2: x_β=0. Since (x_α), (y_α) ∉ F, there is α>β with x_α≠ 0. Let γ=min{α>β:x_α≠ 0}. Fix ε>0 such that [0,ε)⊆ A. Define .4 a_α= { x_α α<γ 0 α=γ 1 α>γ. .1 and .3 b_α= { x_α α<β ε α=β 0 α>β. Let W be the open subset of [0,1]^λ_lex consisting of all elements of [0,1]^λ_lex that lie between (a_α)_α<λ and (b_α)_α<λ in the lexicographic order. Clearly, (x_α)∈ W and one easily verifies that the set W× p^-1_β(B) is an open neighborhood of (x_α), (y_α) in [0,1]^λ_lex× [0,1]^λ disjoint from F. Case 3: x_β=1. This case is analogous to the previous one. There is α>β with x_α≠ 1. Let γ=min{α>β:x_α≠ 1}. Fix ε>0 such that (ε,1]⊆ A. Define .4 a_α= { x_α α<β ε α=β 1 α>β. .1 and .3 b_α= { x_α α<γ 1 α=γ 0 α>γ. As in Case 2, let W consists of all elements in [0,1]^λ_lex that lie between (a_α)_α<λ and (b_α)_α<λ. Then, the set W× p_β^-1(B) is an open neighborhood of (x_α), (y_α) in [0,1]^λ_lex× [0,1]^λ disjoint from F. This finishes the proof of the claim. Let π_1:[0,1]^λ_lex× [0,1]^λ→ [0,1]^λ_lex π_2:[0,1]^λ_lex× [0,1]^λ→ [0,1]^λ be projections. Observe that π_1(F)=[0,1]^λ_lex and if y∈ [0,1]^λ, then the set π^-1_2(y)∩ F is countable and compact, thus metrizable. It follows from Proposition <ref> that [0,1]^λ_lex∈ KLΣ(≤ω). From Theorem <ref>, Proposition <ref> and <cit.> we get the following. The lexicographic product of three intervals [0,1]^3_lex is a first-countable continuous image of a metrizably fibered space, yet it is not metrizably fibered. §.§ Example D Let us describe now the space KD from Example D announced in the Introduction (see Propositions <ref> and <ref> below). Consider the following lexicographic product: X=[0,1]×_lex [0,1] ×_lex{0,1}. Let KD be the quotient space obtained from X by identifying each pair of points (t,0,0) and (t,1,1), where t∈ [0,1]. Let q:X→ KD be the quotient map. It is clear that KD is a compact first-countable space. The following fact which asserts that KD is not metrizably fibered can be proved essentially in the same way as <cit.>. We enclose the argument for the convenience of the reader. The space KD is not metrizably fibered For t∈ [0,1] consider the following subset X_t of X X_t={t}×[0,1]×{0,1}. Let M be a compact metric space with a metric d. Fix a continuous map f:KD→ M. Since each q(X_t) is nonmetrizable (because q(X_t) contains a copy of the split interval), it is enough to show that the set A={t∈ [0,1]:|f(q(X_t))|>1} is countable. Striving for a contradiction, suppose that A is uncountable. For each t∈ A, fix a_t,b_t∈ X_t with f(q(a_t))≠ f(q(b_t)). As A is uncountable, there is ε>0 and an uncountable B⊆ A such that d(q(f(a_t)),f(q(b_t)))≥ε. 
The set B being an uncountable subset of [0,1] contains a nontrivial sequence S={t_n:n∈ω}⊆ B convergent to a point p∈ B. Without loss of generality we may assume that S is monotone (as S contains a monotone subsequence). But then both (a_t_n)_n∈ω and (b_t_n)_n∈ω converge in X to the same point: either to (p,0,0) if S is increasing, or to (p,1,1) if S is decreasing. This is a contradiction because d(q(f(a_t_n)),f(q(b_t_n)))≥ε, for all n∈ω. The space KD is a continuous image of a 3-fibered compactum. Consider the following subspace F of the Cartesian product KD× [0,1]^2 F= { q(x,y,i), (x,y): (x,y)∈ [0,1]^2, i∈{0,1}} ∪{ q(x,0,0), (x,y): (x,y)∈ [0,1]^2 } The set F is closed in KD× [0,1]^2. Pick z, (x,y)∈ (KD× [0,1]^2)∖ F. Take a,b∈ [0,1] and i∈{0,1} such that z=q(a,b,i). Since z, (x,y)∉ F, one of the following three cases holds: Case 1: a≠ x. Let A and B be two disjoint open subsets of [0,1] satisfying a∈ A and x∈ B. We put U=q(A× [0,1]×{0,1}) V=B× [0,1]. The set U is open in KD, as q^-1(U)=A× [0,1]×{0,1} and the latter set is open in X. Therefore, the set U× V is an open neighborhood of z,(x,y) disjoint from F. Case 2: a=x and 0<b<1. Since z,(x,y) ∉ F and z=q(a,b,i)=q(x,b,i), we have b≠ y. Let A and B be two disjoint open sets in [0,1] satisfying b∈ A and y∈ B. Since 0<b<1, there are α<β∈ [0,1] with b∈ (α,β)⊆ A. Define W to be the set of all points in [0,1]×[0,1]×{0,1} that lie strictly between (x,α,0) and (x,β,1) in the lexicographic order. The set W is open in X and since W=q^-1(q(W)), the set q(W) is open in KD. Since α<b<β we have (x,b,i)∈ W, whence z∈ q(W). Let V=[0,1]× B. We conclude that the set q(W)× V is an open neighborhood of z,(x,y) in KD× [0,1]^2 disjoint from F. Case 3: a=x, b∈{0,1} and (b,i)≠ (0,0),(1,1). As in the previous case, we have b≠ y. Let A and B be two disjoint open sets in [0,1] satisfying b∈ A and y∈ B. Since A is an open neighborhood of b in [0,1], there are α,β∈ [0,1] with b∈ [α,β]⊆ A. Define W to be the set of all points in [0,1]×[0,1]×{0,1} that lie strictly between (x,α,0) and (x,β,1) in the lexicographic order. The set W is open in X and since W=q^-1(q(W)), the set q(W) is open in KD. Since (b,i)≠ (0,0),(1,1), we have (x,b,i)∈ W, whence z∈ q(W). Let V=[0,1]× B. We conclude that the set q(W)× V is an open neighborhood of z,(x,y) in KD× [0,1]^2 disjoint from F. The proof of the claim is finished. Let π_1: KD× [0,1]^2→ KD π_2:KD× [0,1]^2→ [0,1]^2 be projections. First observe that π_1(F)=KD, for if q(x,y,i)∈ KD then q(x,y,i),(x,y) ∈ F. So F maps continuously onto KD. Next, observe that if p=(x,y)∈ [0,1]^2, then π_2^-1(p)∩ F={ q(x,y,0), p , q(x,y,1),p , q(x,0,0), p}. So F is 3-fibered. § CONTINUOUS IMAGES OF N-FIBERED COMPACTA The purpose of this section is to show that 3-fibered cannot be improved to 2-fibered in Example <ref>. According to <cit.>, We say that the open degree of K does not exceed n and write odeg(K)≤ n if there exists a countable family 𝒪 of open sets such that for every n+1 points x_0,…,x_n∈ K there exists W_i∈𝒪 such that x_i∈ W_i for i=0,…,n and ⋂_i=0^n W_i = ∅. The following theorem is an unpublished result of the first author and Todorcevic: A compact space K satisfies odeg(K)≤ n if and only if K is a continuous image of an n-fibered compactum. It is convenient to rephrase the definition of the open degree in terms of closed sets instead of open sets. 
We have that odeg(K)≤ n if and only if there exists a countable family ℱ of closed sets such that whenever we are given n+1 different points x_0,…,x_n∈ K there exist F_0,…,F_n∈ℱ such that x_i∉F_i and ⋃_i=0^n F_i = K. Suppose first that there are continuous surjections f:L→ K and g:L→ M such that M is metric and |g^-1(x)|≤ n for all x∈ M. Since every compact metric space is a continuous image of the Cantor set, we can find a tree 𝒯 = {M_s : s∈ 2^<ω} of closed subsets of M such that * M_∅ = M, * M_s = M_s^⌢ 0∪ M_s^⌢ 1 for all s∈ 2^<ω, * |⋂_m<ωM_σ|m| = 1 for all σ∈ 2^ω. Define ℱ = {f(g^-1(A)) :A is a finite union of sets from 𝒯}. Take a set of n+1 different points X={x_0,…,x_n}⊂ K. We claim that there exists m<ω such that X⊄f(g^-1(M_s)) for all s∈ 2^m. Otherwise, by König's lemma, there exists σ∈ 2^ω such that X⊂⋂_m<ωf(g^-1(M_σ|m)). But a compactness argument gives that ⋂_m<ωf(g^-1(M_σ|m)) = f(⋂_m<ωg^-1(M_σ|m)) = f(g^-1(⋂_m<ωM_σ|m) ), that has cardinality at most n, so it cannot contain X. Now, define A_i = ⋃{M_s : s∈ 2^m, x_i∉f(g^-1(M_s))}. The sets F_i = f(g^-1(A_i)) are as desired. Now suppose that ℱ is a family of closed sets that witnesses that odeg(K)≤ n. Consider the countable set Z = {F=(F_0,…,F_n) : F_i∈ℱ F_0∪⋯∪ F_n= K}. Let L = {(x,(i_F)_F∈ Z) ∈ K ×{0,…,n}^Z : x∈⋂_F∈ ZF_i_F}. The set L is closed, therefore compact. The projection on the first coordinate L→ K is onto. The fact that ℱ witnesses that odeg(K)≤ n implies that the projection on the second coordinate L→{0,…,n}^Z is n-to-one. Let K be a compact space and F⊂ K be a finite G_δ-subset of K. Suppose also that odeg(L)<n for all closed L⊂ K∖ F. Then odeg(K)<n Consider V_0⊂V_0⊂ V_1 ⊂V_1⊂⋯ a chain of open sets such that ⋃ V_i = K∖ F. For every m take a countable family 𝒪_m of open subsets of V_m+1 whose intersections with V_m witness the fact that odeg(V_m)<n. Put 𝒪_m={A∩ V_m:A∈𝒪_m}. For every x∈ F, take a countable basis 𝒩_x of neighborhoods of x. The family ⋃_m 𝒪_m ∪⋃_x∈ F𝒩_x witnesses that odeg(K)<n. If K is a first-countable compact space and odeg(K)≤ n, then there is a continuous surjection ϕ:K→ Z onto a metrizable space Z such that odeg(ϕ^-1(t))<n for all t∈ Z. By Theorem <ref>, the space K is a continuous image of an n-fibered compact space L. Let f:L→ K be a continuous surjection and let π:L→ M a continuous map onto a metric space such that |π^-1(z)|≤ n for all z∈ M. We consider a countable basis ℬ of M. Let Γ = {(A,B) ∈ℬ×ℬ : f(π^-1(A))∩ f(π^-1(B)) = ∅}. For each (A,B)∈Γ, we consider a continuous function ϕ_(A,B):K→ [0,1] such that ϕ_(A,B)(f(π^-1(A))) = {0} and ϕ_(A,B)(f(π^-1(B)))={1}. We then take ϕ:K→ [0,1]^Γ be given by ϕ(x)_(A,B) = ϕ_(A,B)(x). Since Γ is countable, [0,1]^Γ is metrizable. So it is enough to prove that odeg(ϕ^-1(t))<n for every t∈ [0,1]^Γ. For every x,y∈ L, if ϕ(f(x)) = ϕ(f(y)), then there exist x̃,ỹ∈ L such that π(x̃) = π(x), π(ỹ) = π(y) and f(x̃)=f(ỹ). For every basic neighborhoods A of π(x) and B of π(y) we must have that f(π^-1(A))∩ f(π^-1(B))≠∅ because otherwise ϕ_(A,B)(x)=0 and ϕ_(A,B)(y)=1 which contradicts that ϕ(f(x)) = ϕ(f(y)). Take {A_m}_m∈ℕ and {B_m}_m∈ℕ decreasing countable bases of neighborhoods of π(x) and π(y) respectively. For every m we will be able to find x̃_m ∈π^-1(A_m) and ỹ_m∈π^-1(B_m) such that f(x̃_m) = f(ỹ_m). If we now take a nonprincipal ultrafilter 𝒰, then x̃ = lim_𝒰x̃_m and ỹ = lim_𝒰ỹ_m are as desired. Now fix t∈ [0,1]^Γ with ϕ^-1(t)≠∅, fix y∈ L such that ϕ(f(y)) = t and s=π(y). Let F = f(π^-1(s)). 
By Lemma <ref>, it is enough to prove that odeg(A)<n whenever A⊂ϕ^-1(t)∖ F is closed. For this, it is enough to prove that π: f^-1(A) → M is at most (n-1)-to-one (cf. Theorem <ref>). If it was not the case, then there must exist r∈ M such that π^-1(r) ⊂ f^-1(A). But given x∈π^-1(r), notice that ϕ(f(x)) = ϕ(f(y)), so we can consider the x̃ and ỹ provided by the Claim above. Notice that x̃∈π^-1(r) since π(x̃) = π(x). Also, f(x̃) = f(ỹ)∉A because π(ỹ)=π(y) = s, so f(ỹ)∈ F. So we found x̃∈π^-1(r) ∖ f^-1(A) a contradiction. If a first-countable compact K is a continuous image of a 2-fibered compactum, then K is metrizably fibered. Apply the previous theorem for n=1. § EXAMPLE E: A FIRST-COUNTABLE COMPACT LΣ(≤Ω)-SPACE WHICH IS NOT A CONTINUOUS IMAGE OF ANY METRIZABLY FIBERED COMPACTUM Let Y=[0,1]×𝕊 be the lexicographic product of [0,1] and the split interval 𝕊. By I( x,s , y,t) we denote the set of all points lying strictly between x,s and y,t in the lexicographic order. Let X=[0,1]×(0,1]. For a positive integer n and z,a,b∈ [0,1] such that a<b we set (see Figure <ref> below) U_n(z,a,b)=X∩{ z+rcos(π-π t), rsin(π-π t)∈^2:0<r<1n, a<t<b}. We define the following topology on KE=X∪ Y: If p∈ X then U is a neighborhood of p in KE if U is a neighborhood of p in X (in its usual product topology). To define neighborhoods of points p∈ Y we shall consider the following four cases: Case 1: p= x,y^-∈ Y=[0,1]×𝕊 and y∉{0,1}. In this case basic open neighborhoods of p are of the form U_n(x,a,y)∪ I( x,a^-, x,y^+), a<y. Case 2: p= x,y^+∈ Y=[0,1]×𝕊 and y∉{0,1}. Then basic open neighborhoods of p are of the form U_n(x,y,b)∪ I( x,y^-, x,b^+), y<b. Case 3: p= x,0^+∈ Y=[0,1]×𝕊 Then basic open neighborhoods of p are of the form U_n(x,0,b)∪ I( x-1n,12^-, x,b^+), b>0. Case 4: p= x,1^-∈ Y=[0,1]×𝕊 Then basic open neighborhoods of p are of the form U_n(x,a,1)∪ I( x,a^-, x+1n,12^+), a<1. One can readily check that this is a well defined neighborhood system on KE and that the topology defined in this way is Hausdorff, separable and first-countable. It is also evident that the subspace topology on X (respectively, Y) agrees with the product topology on X (the lexicographic order topology on Y). To show that KE is compact, fix an open cover 𝒰 of KE. Since Y is a compact subset of KE, being the lexicographic product of [0,1] and 𝕊, there is a finite subfamily 𝒱 of 𝒰 that covers Y. Now, KE∖⋃𝒱⊆ X is closed in KE and hence it is closed in the product topology of [0,1]× [0,1]. It follows that there is a finite subfamily 𝒱'⊆𝒰 that covers KE∖⋃𝒱. Thus, 𝒱∪𝒱' is a finite subcover of 𝒰. Let T⊆ [0,1] be an open interval and let a>0. Suppose that C_0,C_1 are closed subsets of KE covering the set (T×𝕊)∪ (T× (0,a)). Then either * The set X(C_0)={x∈ T: ∃ s∈𝕊 x,s ∉ C_0 } is meager in T or * There is an open interval J⊆ T and ε>0 such that (J×𝕊)∪ (J× (0,ε)) ⊆ C_1. Suppose that X(C_0) is non-meager in T. We will show that assertion (2) holds. The set C_0 is closed in KE, so if x∈ X(C_0), then for some s∈𝕊, there is a basic open neighborhood U of x,s disjoint from C_0. Hence, for some positive integer n and rationals q<r we have U_n(x,q,r)∩ C_0=∅. We can therefore write X(C_0)=⋃{X(n,q,r):n≥ 1, q,r∈ℚ∩[0,1], q<r}, where X(n,q,r)={x∈ T: U_n(x,q,r)∩ C_0=∅}. Since X(C_0) is not meager in T, we can find n,q,r so that X(n,q,r) is not nowhere dense in T, i.e. X(n,q,r) is dense in some open interval J'⊆ T. 
It follows that for sufficiently small ε>0 and some interval J⊆ J', the family {U_n(x,q,r):x∈ X(n,q,r))} covers the whole rectangle J× (0,ε), whence J× (0,ε) is disjoint from C_0. Since C_0∪ C_1=KE, we conclude that J× (0,ε)⊆ C_1. Observe that if x ,s∈ J×𝕊, then every open neighborhood of x ,s in KE meets J× (0,ε). Consequently, J×𝕊⊆ C_1 because C_1 is closed. This gives assertion (2). Let I⊆ [0,1] be an open interval. Suppose that 𝒞 is a finite family of closed subsets of KE. If 𝒞 covers a set (T×𝕊) ∪ (T× (0,a)), for some open interval T⊆ I and some a>0, then there is C∈𝒞 and an open interval J⊆ I such that the set {x∈ J:∀ s∈𝕊 x,s ∈ C} is comeager in J. We proceed by induction on the size of 𝒞. If 𝒞 consists of one set and covers (T×𝕊) ∪ (T× (0,a)), then we can obviously take J=T. Fix n≥ 1 and suppose that our assertion holds for any family 𝒞 of size n. Let 𝒞' be a family of size n+1. Suppose that for some open interval T⊆ I and some a>0, we have (T×𝕊) ∪ (T× (0,a))⊆⋃𝒞' Pick F∈𝒞' and define 𝒞=𝒞'∖{F}. By (<ref>), we may apply Lemma <ref>, to C_0=F and C_1=⋃𝒞. If X(C_0) is meager in T, then we take J=T and C=C_0=F. Otherwise, by Lemma <ref>, there is an open interval J'⊆ T and ε >0 such that (J'×𝕊)∪ (J'× (0,ε)) ⊆ C_1=⋃ C. Since |𝒞|=n, the result follows from the inductive assumption. If 𝒞 is a finite cover of KE consisting of closed sets, then the set X(𝒞)=⋃_C∈𝒞{x∈ [0,1]:∀ s∈𝕊 x,s ∈ C} is comeager in [0,1]. By Proposition <ref>, for every open interval I⊆ [0,1], we can find an open interval J⊆ I so that the set X(𝒞)∩ J is comeager in J. Hence, X(𝒞) is comeager in [0,1] (cf. <cit.>). The space KE is not a continuous image of any metrizably fibered compactum. Let 𝒞={C_t:t∈ 2^< ω} be an arbitrary family of closed subsets of KE such that C_∅=KE and C_t=C_t⌢ 0∪ C_t⌢ 1, for every t∈ 2^<ω. According to Proposition <ref>, it is enough to find σ∈ 2^ω so that ⋂_n C_σ|n is nonmetrizable. For n∈ω denote 𝒞_n={C_t:|t|=n}. By Corollary <ref>, for every n∈ω, the set X(𝒞_n)=⋃_C∈𝒞_n{x∈ [0,1]:∀ s∈𝕊 x,s ∈ C} is comeager in [0,1], whence ⋂_n∈ω X(𝒞_n)≠∅. Pick a∈⋂_n∈ω X(𝒞_n). For every n there is t∈ 2^<ω of length n such that for every s∈𝕊 we have a,s ∈ C_t. It follows that the set {t∈ 2^<ω:∀ s∈𝕊 a,s ∈ C_t} is an infinite tree and thus by König's lemma it has an infinite branch σ. Now, the set ⋂_n∈ωC_σ|n is nonmetrizable because it contains a copy of the split interval {a}×𝕊. The corollary below gives a partial answer to a question of Tkachuk <cit.>. The space KE is a compact first-countable LΣ(≤ω)-space which is not a continuous image of any compact metrizably fibered space. By definition of the topology, the space KE is compact and first-countable. Recall that X=[0,1]× (0,1] and Y=[0,1]×𝕊. Both X and Y are LΣ(≤ω)-spaces (cf. <cit.>) so KE is an LΣ(≤ω)-space being the union of X and Y (see <cit.>). According to Theorem <ref>, the space KE is as required. § ACKNOWLEDGEMENTS The authors were partially supported by Fundación Séneca - ACyT Región de Murcia project 21955/PI/22, Agencia Estatal de Investigación (Government of Spain) and ERDF project PID2021-122126NB-C32 (A. Avilés and M. Krupski); European Union - NextGenerationEU funds through María Zambrano fellowship and the NCN (National Science Centre, Poland) research Grant no. 2020/37/B/ST1/02613 (M. Krupski) siam
http://arxiv.org/abs/2307.05293v1
20230711143751
Demonstrating Photon Ring Existence with Single-Baseline Polarimetry
[ "Daniel C. M. Palumbo", "George N. Wong", "Andrew A. Chael", "Michael D. Johnson" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.IM" ]
Single-Baseline Photon Ring Polarimetry 0000-0002-7179-3816]Daniel C. M. Palumbo [email protected] Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA 0000-0001-6952-2147]George N. Wong School of Natural Sciences, Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ 08540, USA Princeton Gravity Initiative, Jadwin Hall, Princeton University, Princeton, NJ 08544, USA 0000-0003-2966-6220]Andrew A. Chael Princeton Gravity Initiative, Jadwin Hall, Princeton University, Princeton, NJ 08544, USA 0000-0002-4120-3029]Michael D. Johnson Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA Images of supermassive black hole accretion flows contain features of both curved spacetime and plasma structure. Inferring properties of the spacetime from images requires modeling the plasma properties, and vice versa. The Event Horizon Telescope Collaboration has imaged near-horizon millimeter emission from both Messier 87* () and Sagittarius A* () with very-long-baseline interferometry (VLBI) and has found a preference for magnetically arrested disk (MAD) accretion in each case. MAD accretion enables spacetime measurements through future observations of the photon ring, the image feature composed of near-orbiting photons. The ordered fields and relatively weak Faraday rotation of MADs yield rotationally symmetric polarization when viewed at modest inclination. In this letter, we utilize this symmetry along with parallel transport symmetries to construct a gain-robust interferometric quantity that detects the transition between the weakly lensed accretion flow image and the strongly lensed photon ring. We predict a shift in polarimetric phases on long baselines and demonstrate that the photon rings in and can be unambiguously detected with sensitive, long-baseline measurements. For , we find that photon ring detection in snapshot observations requires ∼1 mJy sensitivity on >15 Gλ baselines at 230 GHz and above, which could be achieved with space-VLBI or higher-frequency ground-based VLBI. For , we find that interstellar scattering inhibits photon ring detectability at 230 GHz, but ∼10 mJy sensitivity on >12 Gλ baselines at 345 GHz is sufficient, which is accessible from the ground. For both sources, these sensitivity requirements may be relaxed by repeated observations and averaging. § INTRODUCTION Black holes impose symmetries upon emission near their horizons. In resolved images of optically thin supermassive black hole accretion flows, these symmetries manifest themselves most clearly in the photon ring, the combination of images formed by photon trajectories that reach the observer after half-orbiting the black hole at least once. The photon ring contains self-similar structure indexed by the integer n, which corresponds to the number of half-orbits undertaken by photons before reaching the observer. With each successive half-orbit, the image of the surrounding accretion flow is demagnified exponentially along the radial direction into each n^ th sub-image, asymptotically approaching the “critical curve,” a shape predicted by general relativity and determined solely by the mass-to-distance ratio, spin, and viewing inclination of the black hole system <cit.>. 
This curve forms the boundary between observer-reaching geodesics that intersect the horizon, and those that do not. <cit.> identified additional photon ring symmetries in polarized emission, most relevantly that the Penrose-Walker constant <cit.> complex conjugates across adjacent sub-images for axisymmetric emission at large n. Recent work has demonstrated analytically that this symmetry corresponds directly to reflections of the electric vector position angle (EVPA) across the origin in face-on images of optically thin, axisymmetric accretion flows around Schwarzschild black holes <cit.>. Decompositions of these polarized images into azimuthal modes show complex conjugation in the rotationally symmetric β_2 mode, which traces near-horizon magnetic field geometry through polarized synchrotron emission <cit.>. This formalism was used to measure the accretion state of in <cit.> and <cit.>, hereafter and . In more general models, such as spinning black holes viewed at higher inclination, these symmetries are more approximate. Nonetheless, showed that for magnetically arrested disks (MADs), the class of simulations preferred for Messier 87* () and Sagittarius A* () by Event Horizon Telescope (EHT) observations, general relativistic magnetohydrodynamic (GRMHD) simulations showed approximate complex conjugation of β_2 between the primary (n=0) and secondary (n=1) image. This effect leads to a depolarization of the photon ring region in images which have spiraling polarization, as first noted by <cit.>. Thus, using polarimetric observations that resolve out diffuse n=0 structure and are dominated by n=1 emission, the existence of the photon ring can be demonstrated by measuring the β_2 phase of the n=1 image. In this letter, we construct a polarized interferometric quantity that is invariant under source translations and robust to unknown gain amplitudes and phases. This quantity serves as a Fourier analogue of β_2 and refines a similar construction carried out in . We compute this quantity on Earth-based and Earth-space very-long-baseline interferometry (VLBI) baselines for simulated images and find a detectable photon ring signature in the phase of the interferometric quantity β̆_2. Throughout, we use ∠ to denote phases of complex numbers. We develop the observable in <ref>. We treat observational considerations in simulations of and in <ref>. We conclude with a discussion in <ref>. We diagram some mathematical details in <ref> and analyze potential Earth-based observing sites in <ref>. § INTERFEROMETRIC POLARIZATION SPIRALS The image-domain definition of the β_m decomposition radially averages azimuthal structure in linear polarization in terms of the usual Stokes parameters Q and U. We consider images in polar coordinates r and ϕ, with ϕ increasing east of north and right ascension increasing east, to the left, as usual. As in <cit.>, for an image with complex polarization scalar P(r,ϕ)=Q(r,ϕ)+i U(r,ϕ), each modal coefficient is given by β_m = 1I_ tot∫_0^∞∫_0^2 π P(r, ϕ) e^- i m ϕ r drdϕ . Here, the upper bound on the integral in r is set effectively by the field of view of an observed image. The β_2 coefficient provides an image-averaged measurement of the rotational symmetry of the observed EVPA. We now construct interferometric β_2 modes, which were first described in Appendix A of . A perfectly calibrated interferometer measures components of the Fourier transforms of each Stokes parameter image, Ĩ, Q̃, and Ũ. 
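As a concrete illustration of the azimuthal decomposition above, the following minimal Python sketch (ours, not from the paper; the Gaussian-ring test image, field of view and polarization values are purely illustrative) evaluates β_m on gridded Stokes images by discretizing the defining integral.

```python
import numpy as np

def beta_m(I, Q, U, m, fov_uas):
    """Discrete version of beta_m = (1/I_tot) * integral of P(r,phi) exp(-i m phi) dA,
    with P = Q + iU and phi measured east of north (RA increasing to the left)."""
    ny, nx = I.shape
    x = np.linspace(fov_uas / 2, -fov_uas / 2, nx)   # RA offset (east positive, leftward)
    y = np.linspace(-fov_uas / 2, fov_uas / 2, ny)   # Dec offset
    X, Y = np.meshgrid(x, y)
    phi = np.arctan2(X, Y)                           # position angle east of north
    P = Q + 1j * U
    # the pixel area cancels between numerator and denominator
    return np.sum(P * np.exp(-1j * m * phi)) / np.sum(I)

# illustrative test image: a Gaussian ring carrying a rotationally symmetric EVPA spiral
npix, fov = 256, 160.0                               # pixels, field of view in micro-arcseconds
x = np.linspace(fov / 2, -fov / 2, npix)
y = np.linspace(-fov / 2, fov / 2, npix)
X, Y = np.meshgrid(x, y)
r, phi = np.hypot(X, Y), np.arctan2(X, Y)
I = np.exp(-0.5 * ((r - 20.0) / 3.0) ** 2)           # ring of ~40 micro-arcsecond diameter
P = 0.1 * I * np.exp(1j * np.deg2rad(150.0)) * np.exp(2j * phi)
beta2 = beta_m(I, P.real, P.imag, 2, fov)
print(abs(beta2), np.angle(beta2, deg=True))         # recovers amplitude 0.1 and phase 150 deg
```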
Following <cit.>, we may rotate into an interferometric E and B mode basis by applying a rotation by twice the angle of each measured visibility Q̃(u,v) and Ũ(u,v), where u and v are two-dimensional Fourier coefficients conjugate to right ascension and declination, respectively <cit.>. The rotation yields quantities Ẽ and B̃: [ Ẽ(ρ,θ); B̃(ρ,θ) ] = [ cos 2 θ sin 2 θ; -sin 2 θ cos 2 θ ][ Q̃(ρ,θ); Ũ(ρ,θ) ], where ρ is the coordinate radius and θ≡arctan(u/v) is the position angle measured east of north of each point (u,v).[ At this point, the construction in Appendix A of would normalize each of Ẽ and B̃ by |Ĩ| (Equation A13 in that work) and construct a quantity encoding EVPA spiral phase with a reliance on knowledge of the image center (and therefore absolute phase calibration of interferometric visibilities). Subsection A.3 of that work connects the signs of these interferometric modes to the image-domain β_2 phase; the precise connection (and sign convention in Equation A14) claimed in the paper holds only for short baselines, whereas for general baseline lengths, additional signs are imposed by image structure (for example, when the Bessel function response to a ring-like structure passes through a null and changes sign, as shown in the orange curve in the left panel of <ref>).] In order to produce a quantity that captures rotationally symmetric polarization structure in ring-like images while avoiding both the need for phase calibration and the sign structure imposed by oscillating Fourier signatures of image features, we construct the following polarimetric quantities: ĕ(u,v) ≡Ẽ(u,v)/Ĩ(u,v), b̆(u,v) ≡B̃(u,v)/Ĩ(u,v), β̆_2(u,v) ≡ Re(ĕ(u,v)) + i Re(b̆(u,v)). Dividing by Ĩ removes most unknown gain contributions (in both amplitude and phase) from the signal, except for unknown (but typically stable and calibrated) right-left gain ratios and leakage terms as shown in <ref>. This construction also makes ĕ, b̆, and β̆_2 close cousins to the polarimetric ratio or interferometric fractional polarization m̆ and other cross-hand-to-parallel-hand polarization ratios, which share similar gain robustness <cit.>. In addition, these ratios are insensitive to some classes of physical corruptions, such as are introduced by scattering in the ionized interstellar medium. Specifically, the dominant effects of scattering are a convolution with a “diffractive” blurring kernel. Because the interstellar medium is not appreciably birefringent at mm wavelengths, the Fourier manifestation of interstellar scattering cancels in the formation of a quotient between Q̃ or Ũ and Ĩ, though the signal-to-noise ratio is depressed by the diffractive kernel. On long baselines, additional effects of “refractive” scattering become significant, which cannot be described as an image convolution. On these baselines, we will demonstrate that refractive scattering contaminates the visibility quotients. In constructing β̆_2, we take the real part of each of ĕ and b̆ because the real parts of Ẽ and B̃ project out rotationally symmetric structure corresponding to even β_m modes. This relation stems from basic properties of the Fourier transforms of even and odd functions, and is apparent from Equation A10 of , reproduced here: Ẽ(ρ,θ) = ∑_m=-∞^∞i^-m Re[β_m e^i (m-2)θF_m(ρ)], B̃(ρ,θ) = ∑_m=-∞^∞i^-m Im[β_m e^i (m-2)θF_m(ρ)]. 
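A minimal sketch of the construction above (ours, with image-orientation and FFT sign conventions glossed over; `breve_beta2` is a hypothetical helper name) evaluates ĕ, b̆ and β̆_2 on the discrete (u,v) grid obtained by Fourier transforming gridded Stokes maps.

```python
import numpy as np

def breve_beta2(I, Q, U, fov_rad):
    """Rotate the polarized visibilities into E/B, divide by the Stokes I visibility,
    and combine the real parts into breve-beta_2 on the discrete (u, v) grid."""
    Iv = np.fft.fftshift(np.fft.fft2(I))
    Qv = np.fft.fftshift(np.fft.fft2(Q))
    Uv = np.fft.fftshift(np.fft.fft2(U))
    n = I.shape[0]
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=fov_rad / n))  # baseline lengths in wavelengths
    u, v = np.meshgrid(freqs, freqs)
    theta = np.arctan2(u, v)                      # baseline position angle, east of north
    Ev = np.cos(2 * theta) * Qv + np.sin(2 * theta) * Uv
    Bv = -np.sin(2 * theta) * Qv + np.cos(2 * theta) * Uv
    # near nulls of the Stokes I visibility the quotients blow up; mask low-|Iv| points in practice
    e_breve, b_breve = Ev / Iv, Bv / Iv
    return u, v, e_breve.real + 1j * b_breve.real

# usage: u, v, bb2 = breve_beta2(I, Q, U, fov_rad) for gridded Stokes images;
# np.angle(bb2) then maps the spiral phase over the (u, v) plane.
```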
Here, F_m(ρ) is the Hankel transform of a radial function f_m(r) describing radial image structure associated with sinusoidal variation of order m: F_m(ρ) = ∫_0^∞ f_m(r)J_m(2 π r ρ) r dr, with J_m as the Bessel function of the first kind of order m. The symmetry projection property does not formally generalize to to ĕ and b̆ for arbitrary Ĩ, but the quotient preserves this property for images in which the image features in I and P are similar. The spiral quotient β̆_2(u,v) thus encodes a translation-invariant notion of rotationally symmetric polarization structure. It is notable that an interferometric measure of rotational symmetry can be constructed without a defined image center; the situation is analogous to the definition of the image second moment in <cit.>, where image covariance about an unspecified center of light is used. Ultimately, the phase of the quantity encodes the dominant spiral phase at a particular spatial frequency given by (u,v); if the photon ring reversal is to be observed, there must be an observable reversal in this phase across (u,v)-distance ρ. We now examine the detailed character of β̆_2 by considering a concrete example. §.§ Thin Polarized Rings As discussed in <cit.>, we consider a thin ring with total flux I_ tot and diameter d in radians with rotationally symmetric polarization; the total intensity and polarization in image polar coordinates r and ϕ are given by I(r,ϕ) = I_ tot/π dδ(r-d/2), P(r,ϕ) = β_2 I_ tot/π dδ(r-d/2)e^i 2 ϕ, where β_2 sets the rotationally symmetric EVPA spiral phase. The visibility responses are then Ĩ(ρ,θ) = I_ tot J_0(π d ρ), P̃(ρ,θ) = -β_2 I_ tot J_2(π d ρ) e^i 2 θ. We then project into Q̃ and Ũ: Q̃(ρ,θ) = Re(P̃(ρ,θ)), = -I_ tot J_2(π d ρ) ×[ Re(β_2) cos2θ - Im(β_2) sin2θ], Ũ(ρ,θ) = Im(P̃(ρ,θ)), = -I_ tot J_2(π d ρ) ×[ Re(β_2) sin2θ + Im(β_2) cos2θ]. Ẽ and B̃ project out real and imaginary parts of β_2 as expected, using <ref>: Ẽ(ρ,θ) = -I_ tot Re(β_2) J_2(π d u), B̃(ρ,θ) = -I_ tot Im(β_2) J_2(π d u). Lastly, we divide by Ĩ and sum the two quotients to finish the β̆_2 construction: β̆_2 = -[ Re(β_2)+i Im(β_2)] J_2(π d u)/J_0 (π d u), = -β_2 J_2(π d u)/J_0 (π d u). As we discuss at length in <ref>, the ratio of J_2 to J_0 has a nearly constant negative sign after the first null of J_0 <cit.>, meaning ∠β̆_2 ≈∠β_2. §.§ General Images and Simulations We now examine ∠β̆_2 for baseline lengths of interest for VLBI in an example GRMHD simulation which is in decent agreement with the EHT Collaboration's constraints on both and , though with slight differences in viewing inclination and electron distribution function (see and ). This simulation is of a magnetically arrested disk with dimensionless black hole spin a_*=0.5 spinning prograde with respect to the large scale accretion flow. The electron distribution function post-processing parameter R_ high=80 <cit.>. This simulation is generally typical by the sub-image polarization standards of ; its subimage is polarized and demonstrates a near complex conjugation of β_2 across sub-image index. We ray trace images of this simulation at 230, 345, 460, and 690 GHz, with 480 pixels across 160 μas on each edge at each frequency except 690 GHz, where 600 pixels are used to robustly capture fine structures sampled by 690 GHz Earth-diameter baselines. The 480 pixel images have one third of a μas per pixel, corresponding to a spatial frequency of ∼ 600 Gλ; phases of Fourier quantities are thus highly robust in the at-most 50√(2) Gλ coefficients we compute. 
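The thin-ring result β̆_2 = -β_2 J_2(π d ρ)/J_0(π d ρ) is easy to probe numerically; the short sketch below (ours; the 40 μas diameter is an arbitrary illustrative value) checks how often the ratio J_2/J_0 is negative on long baselines.

```python
import numpy as np
from scipy.special import jv

d_rad = 40.0e-6 / 206265.0                 # illustrative 40 micro-arcsecond ring diameter, in radians
rho = np.linspace(1e9, 30e9, 4001)         # baseline lengths in wavelengths (1-30 Glambda)
j0 = jv(0, np.pi * d_rad * rho)
j2 = jv(2, np.pi * d_rad * rho)
ratio = j2 / j0                            # breve-beta_2 = -beta_2 * ratio for the thin ring

# beyond the first null of J_0 the ratio is negative except in narrow windows around
# each null, so the phase of breve-beta_2 tracks the phase of beta_2 almost everywhere
long_baselines = rho > 5e9
print(f"J2/J0 < 0 on {np.mean(ratio[long_baselines] < 0):.0%} of the 5-30 Glambda range")
```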
The GRMHD simulations were done with <cit.>; ray tracing was performed using <cit.>. Additional details on the image generation process may be found in <cit.>. We ray trace the fluid snapshots from this simulation with two sets of parameters befitting and . For , we use the mass-to-distance ratio measured by the EHTC (corresponding to the ray traced and decomposed images in ) with a 163^∘ inclination (such that fluid motion is clockwise on the sky) and we rotate images so that the approaching jet is oriented 288^∘ degrees east of north. For , we use the mass-to-distance ratio in <cit.> and a 150^∘ inclination with accretion flowing clockwise in the sky; we do not rotate images, so the approaching outflow is oriented towards the viewer and upwards on the sky by 30^∘. <ref> shows the time-averaged 230 GHz image of the simulation decomposed into its direct (n=0), indirect (n=1), and full (all n) images, as well as the corresponding spiral phase ∠β̆_2 over the interval from -30 to 30 Gλ in u and v. We see that the phase corresponds to the image-domain β_2 in each of the individual n cases, but in the full image, there is a transition between the dominance of n=0 and n=1 at a particular radius in the (u,v) plane. This radius is sensitive to the relative size and brightness of n=0 and n=1 images; time-averaging drastically reduces the fine structure in the n=0 image, causing a clear cutoff in the bottom right panel as the n=0 thickness is resolved. Snapshots, however, introduce significant fine structure to the n=0 image, as shown for all four frequencies of interest for in <ref> and for in <ref>. For , we scatter images using a single realization of the frequency-dependent scattering screen implemented by <cit.> in the stochastic-optics library of eht-imaging. In both and , the phase transition between n=0 and n=1 shifts inward over frequency primarily because the brightness ratio between the direct and indirect image shifts to favor the indirect image at higher frequencies, likely due to optical depth effects. In , at lower frequencies, the refractive noise from scattering mixes the diffuse n=0 structure into finer spatial scales, outshining the photon ring on long baselines. § OBSERVATIONAL PROSPECTS We examine the practicality of observing the spiral quotient phase change caused by the photon ring by considering baseline lengths corresponding to Earth-based VLBI at each frequency and Earth-space VLBI at 230 and 345 GHz. We examine this problem in general terms by computing properties along the u and v axes in the Fourier domain; in <ref>, we consider the sampling and temporal statistics of particular sites of interest for and . §.§ Intrinsic Averages of the Spiral Quotient Given the phase structure in <ref> and <ref>, one may worry that an individual observation on a small number of baselines could mislead conclusions about photon ring existence due to transient image structures. It would thus be useful to be able to average the complex quantities ĕ and b̆ over many epochs and extract the average phase of β̆_2. One would expect that as the direct image of the flow varies in thickness and diameter slightly over time, the nulls in its visibility response may slide inward and outward leading to regions of unpredictable phase; the success of an experiment searching for a consistent phase offset between the direct and indirect emission would thus rely on having coverage far enough from the nulls to have stable spiral phase and non-zero spiral amplitude on average. 
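The transition just described can be mimicked with a deliberately crude toy model: a bright, heavily blurred ring standing in for the direct image and a faint, sharp ring with conjugated β_2 standing in for the photon ring. All numbers below are illustrative choices of ours, not fits to the GRMHD snapshots.

```python
import numpy as np
from scipy.special import jv

uas = 1e-6 / 206265.0                                  # one micro-arcsecond in radians

def ring_visibilities(rho, d, blur, flux, beta2):
    """Visibilities of a circular ring of diameter d, blurred by a Gaussian of standard
    deviation `blur` (radians), carrying rotationally symmetric polarization beta2."""
    envelope = flux * np.exp(-2.0 * np.pi ** 2 * blur ** 2 * rho ** 2)
    I = envelope * jv(0, np.pi * d * rho)
    P = -beta2 * envelope * jv(2, np.pi * d * rho)     # common e^{2i theta} factor divided out
    return I, P

rho = np.array([4, 6, 10, 15, 20, 25]) * 1e9           # baseline lengths in wavelengths
# bright blurred "n=0" ring and faint sharp "n=1" ring with conjugated beta_2
I0, P0 = ring_visibilities(rho, 45 * uas, 10.0 * uas, 0.9, 0.1 * np.exp(+1j * np.deg2rad(160)))
I1, P1 = ring_visibilities(rho, 40 * uas, 1.5 * uas, 0.1, 0.1 * np.exp(-1j * np.deg2rad(160)))

bb2 = (P0 + P1) / (I0 + I1)                            # breve-beta_2 for concentric components
for r, p in zip(rho, np.angle(bb2, deg=True)):
    print(f"{r / 1e9:5.0f} Glambda   angle(breve beta_2) = {p:7.1f} deg")
# the phase sits near the n=0 spiral phase at shorter baselines and swings toward the
# conjugated n=1 phase once the blurred direct image is resolved out
```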
<ref> shows the result of measuring ĕ and b̆ at each of the four frequencies on baselines along u and v for each snapshot of the and simulations, averaging the results over time, and computing the average ∠β̆_2. Averages are computed over 5000 gravitational times (G M / c^3) in the simulation as ray traced for and , where G is the gravitational constant, M is the black hole mass, and c is the speed of light. In , we use a random realization of the scattering screen in each frame of the simulation, effectively averaging over both intrinsic and extrinsic structure to produce the phase curves. No instrumental noise is added; the averaging thus represents only an average over variation on the sky, assuming that right-left gain ratios and D-terms are known, while absolute amplitude and phase calibration is unnecessary. We observe that in , at low frequencies, the transition to photon ring domination is muddled by source variation and brighter n=0 structure which becomes more crisp as frequency increases. In , we observe that the refractive scattering noise imposes small scale structure at low frequencies that obscures the photon ring on long baselines (with notable differences between u and v at 345 GHz due to the stronger scattering on the E-W axis). However, at 460 GHz and higher, the scattering is nearly negligible. It is noteworthy that the phase in the scattering-dominated regimes of is not random, but instead indicative of the n=0 structure on average; this property again suggests that the refractive scattering mixes diffuse structure into finer angular scales, as is most prominent in the bottom left of <ref>. In summary, using these polarimetric visibility quotients the photon ring in can only be detected with VLBI observations at 460 GHz and higher on the ground or at 230 GHz and higher with space-VLBI baselines. For , the photon ring never dominates the time-averaged signal at 230 GHz due to scattering corruptions. However, at 345 GHz, the photon ring dominates near-Earth-diameter baselines (particular N-S baselines, which are the most weakly affected by scattering), though longer baselines (particularly E-W) baselines are corrupted by refractive scattering. At 460 and 690 GHz, the effects of refractive scattering are negligible, and the photon ring phase transition is apparent, even on Earth baselines. In each source, these simulations also show transition regions (approximately 15 Gλ for and 12 Gλ for Sgr A*) beyond which the n=1 ring begins to dominate the observed polarization. The exact value of this transition point will depend on simulation parameters, but intuitively, the larger angular size of suggests a transition at a smaller baseline length. §.§ Sensitivity Requirements Error propagates into β̆_2 similarly to m, the interferometric fractional polarization (see appendices A and B of ) in the high signal-to-noise ratio limit: σ_β̆_2 ≈σ√(2/|Ĩ|^2 + |P̃|^2/|Ĩ|^4), σ_∠β̆_2 ≈σ_β̆_2/|β̆_2|. Here, once again P̃ = Q̃+iŨ and σ is the thermal noise on individual baseline amplitude measurements, assumed to be equal for Ĩ, Q̃, and Ũ. The amplitude of the spiral quotient |β̆_2| is proportional to the fractional polarization, so unsurprisingly, the phase error decreases as the fractional polarization increases. As shown in Figure 2 of , the direct and indirect image β_2 phases are typically separated by at least π/2 radians in the magnetically arrested disks best suited for fitting observations of . 
To have a clear photon ring signal in the spiral quotient phase, we therefore choose a target phase uncertainty of σ_∠β̆_2≤π/4. This condition yields a constraint on thermal noise that can be calibrated to simulations: σ_ max ≤π |β̆_2|/4√(2/|Ĩ|^2+|P̃|^2/|Ĩ|^4). In order to inform future hardware requirements, as well as to broadly determine whether this signature will ever be detectable from the Earth's surface, we now compute <ref> for each GRMHD snapshot used to produce <ref>. Once again, the values of each visibility used to evaluate the target noise level are computed without the addition of thermal noise, assuming known right-left gain ratios and leakage terms. In the case of , we generate 2000 realizations of the scattered GRMHD images to estimate the root-mean-squared variation in complex visibilities as a result of scattering (“refractive noise”) along east-west and north-south baselines. As shown in <ref>, for the example simulation of , a thermal noise on individual visibility measurements of ∼ 2 mJy will be required to capture the 460 or 690 GHz spiral phase in a single observation of , while in 10 mJy will be required. In , even with a perfect noiseless instrument, on long baselines at low frequencies, the interferometric signal is dominated by refractive sub-structure. Comparing to <ref>, the regions in sampling of in which the phase departs from the expected n=1 phase correspond to where the refractive scattering curves (dashed lines) exceed the intrinsic signal strength. However, the photon ring detection need not be captured in a single observation. The translation invariance and gain robustness of β̆_2 allow coherent averaging over multiple observations, potentially spanning years of accretion, ultimately yielding values that correspond to an instrument-corrupted sub-sampling of <ref>. Assuming the accretion state of is approximately constant or has a typical quiescent magnetic field structure, even observations with signal-to-noise ratio less than 1 may be useful in constraining the sub-image relation; the only requirement for a visibility to be useful is then a detection in Stokes I, which is less stringent. It must be noted that we have chosen only a single simulation for this test, and that this simulation was ray traced to produce an average flux density of 0.5 Jy for and 2.5 Jy for , from which individual realizations of the flow may differ significantly. Moreover, the thermal noise requirement is sensitive to the relative brightness of the photon ring compared to the direct emission, and this flux ratio can vary by factors of two across different GRMHD parameters, and factors of a few during flares <cit.>. Ultimately, expectations will need to be refined as next-generation hardware capabilities are better understood, and as EHT observations elucidate the typical brightness of fine structure in and . § DISCUSSION In this letter, we have constructed a gain-robust, image translation-invariant interferometric observable β̆_2 which is intrinsically sensitive to rotationally symmetric polarization structures. In magnetically arrested disks (the increasingly favored accretion paradigm for low luminosity active galactic nuclei), this observable elegantly captures the transition between direct image and photon ring-dominated emission on long interferometric baselines. 
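The thermal-noise requirement derived above is straightforward to evaluate; the sketch below (ours; the visibility and |β̆_2| amplitudes are placeholders rather than values taken from the simulations) propagates a per-visibility noise σ into the spiral-quotient phase and inverts the π/4 target.

```python
import numpy as np

def sigma_phase(sigma, I_amp, P_amp, bb2_amp):
    """Propagate per-visibility thermal noise sigma into the phase of breve-beta_2,
    using the high signal-to-noise expressions quoted above."""
    sigma_bb2 = sigma * np.sqrt(2.0 / I_amp ** 2 + P_amp ** 2 / I_amp ** 4)
    return sigma_bb2 / bb2_amp

def sigma_max(I_amp, P_amp, bb2_amp, target=np.pi / 4):
    """Largest per-visibility noise keeping the phase uncertainty below `target`."""
    return target * bb2_amp / np.sqrt(2.0 / I_amp ** 2 + P_amp ** 2 / I_amp ** 4)

# placeholder long-baseline values (Jy): a faint but strongly polarized photon-ring signal
I_amp, P_amp, bb2_amp = 0.01, 0.003, 0.3
print(f"sigma_max = {1e3 * sigma_max(I_amp, P_amp, bb2_amp):.1f} mJy")
print(f"phase error at 2 mJy noise = {np.degrees(sigma_phase(0.002, I_amp, P_amp, bb2_amp)):.1f} deg")
```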
We have shown that VLBI at high frequencies is capable of unambiguously detecting the photon ring with this observable in and ; however, we estimate that the required sensitivities for snapshot detection are stringent, at the level of ≤10 mJy. These sensitivity challenges could potentially be addressed in many ways. The gain-robustness and translational invariance of β̆_2 permits the averaging of many epochs, meaning that the primary concern at high frequencies is obtaining detections in Stokes I. Long integration times may enable these detections even in sub-optimal weather; these integrations may be enabled by frequency phase transfer from 230 GHz, though not all sites can support the prerequisite simultaneous multifrequency observation <cit.>. Moving to even larger bandwidths will also help overcome these sensitivity limitations, as the prominent features of the direct and indirect image persist over wide radial ranges in the (u,v) plane permitting large bandwidth smearing. Though we have restricted ourselves to single-baseline measurements, it is conceivable that interferometric sensitivity to the photon ring is enough to depolarize reconstructed images from future EHT data even at 345 GHz. Indeed, <cit.> identified frequent depolarization of the photon ring image region that remains apparent even with small levels of blurring, as might be expected of 345 GHz reconstructions. Moreover, in imaging or hybrid imaging-modeling experiments that attempt to extract the photon ring <cit.>, our approach can build confidence in extracted photon rings by demonstrating before model fitting that a sharp, oppositely polarized feature is present. As discussed at length in <cit.> and treated analytically in <cit.>, the relationship between the direct and indirect image ∠β_2 is a function of the black hole spin, the magnetic field in the emitting region, and the geometry of the emission. At first, a detection of a large, persistent difference in the polarization spiral phase between the intermediate and extremely long baselines is a detection of the photon ring that depends only on broad belief that the emission is optically thin. However, by specifying and model-fitting the emission, such as in semi-analytic models or by making stronger GRMHD assumptions, the spin and mean magnetic field morphology may be tightly constrained by the quiescent β̆_2 phases. In the context of proposed demographic studies of supermassive black holes with VLBI <cit.>, observational schemes such as ours may enable single-baseline spin and photon ring demographic studies with fewer model assumptions than other approaches. The interferometry of the future may well target the n=2 image for its excellent approximation to the critical curve; our construction predicts an infinite sequence of phase transitions in ∠β̆_2 as each subsequent n becomes dominant, provided that optical and Faraday rotation depths internal to the accretion flow permit structured polarization on long photon trajectories. Though the polarization of the n=2 image in GRMHD simulations has not been studied exhaustively, the polarization spiral will likely resemble the n=0 image spiral in magnetically arrested disks. Though we have used polarization quotients such as ĕ = Ẽ / Ĩ throughout this letter in order to remove dependencies on the image center as well as unknown gains, these quotients may not serve as optimal estimators in realistic observations. 
For example, the quantities ẼĨ^* and B̃Ĩ^* are also translation invariant (and thus robust to unknown gain phases). Though these quantities are not robust to unknown gain amplitudes, products typically have more favorable low signal-to-noise ratio statistics than quotients. Ultimately, the aim of this letter is to lay out key ideas of relevance to the VLBI observations of the near future; finding the optimal interferometric data product by which the photon ring will be robustly detected is a task for future work. We thank Lindy Blackburn, Ramesh Narayan, and Dominic Pesce for many useful conversations. We also thank Jim Moran for thorough comments on the manuscript. We are also very grateful to our reviewer for many thoughtful comments which greatly improved this letter. This work was supported by the Black Hole Initiative at Harvard University, which is funded by grants from the John Templeton Foundation and the Gordon and Betty Moore Foundation to Harvard University (GBMF-5278). D.C.M.P. was supported by National Science Foundation grants AST 19-35980 and AST 20-34306. G.N.W. gratefully acknowledges support from the Taplin Fellowship. AC was supported by the Gravity Initiative at Princeton University. § RADIO INTEFEROMETRIC CORRUPTIONS IN POLARIMETRIC QUOTIENTS In this appendix, we review the radio interferometer measurement equation (RIME) outlined for analysis of EHT observations in section 2 of ; the results reached in this appendix are analagous to similar arguments in <cit.> and <cit.>, and as usual, the most comprehensive primer can be found in <cit.>. This review addresses complications in measuring polarization which are present even in the limit of high signal-to-noise ratio, corresponding to fundamental unknowns (particularly gains) in the interferometric problem which are typically calibrated away. Throughout this section, we distinguish intrinsic variables from measured variables with a prime ('). Nearly all EHT stations measure polarization in a circular basis, sampling the left-handed (L) and right-handed (R) complex-valued electric field. For a pair of telescopes indicated by the indices j and k, the corresponding complex correlation matrix ρ_jk is given as follows: ρ_jk = [ R_j R_k^* R_j L_k^*; L_j R_k^* L_j L_k^* ], = [ Ĩ_jk + Ṽ_jk Q̃_jk + i Ũ_jk; Q̃_jk - i Ũ_jk Ĩ_jk - Ṽ_jk ]. Here, the second line shows the relation between the correlation matrix and Stokes visibilities; though we do not use it in this letter, Ṽ refers to the Stokes V visibility. In practice, the measured correlation matrix ρ_jk' is corrupted by complex time-dependent gains, leakages, and field rotations, represented as matrices G, D, and Φ, which are unique to each station and are joined in a Jones matrix J: J = G D Φ, G = [ G_R 0; 0 G_L ], D = [ 1 D_R; D_L 1 ], Φ = [ e^-iΦ 0; 0 e^i Φ ]. As explained in detail in subsection 3.2 of , the time-dependent field rotation Φ is a function only of the geometry of each antenna (and its feed orientation) as well as the parallactic angle and elevation of the source over the course of an observation. These are all known a priori to high precision, and so their effects can be removed (except for contributions from leakage, as will be apparent in the expressions that follow) by applying the inverse field rotation matrix Φ^†. 
J acts on the measured electric fields at a single station; adding in the geometric derotation matrix for each station, the resulting time-dependent corrupted correlation matrix for a pair of stations is then given by ρ'_jk = Φ^†_j J_jρ_jk J^†_k Φ_k. The gains and leakages are generally computed using observations of calibrators, or can be fit to data along with source structure; examining the measured correlation matrix reveals useful structures that illuminate their impact. First we introduce the measured single-station pre-gain fields R'_D and L'_D, which have a small contribution from the orthogonal handedness given by the so-called “D terms” D_R and D_L. These leakage-affected fields also carry the only non-canceling term from the geometric field rotation, manifesting as a rotation of the orthogonal handedness: R'_D = R+L D_R e^i 2 Φ, L'_D = L+R D_L e^i 2 Φ. The fully corrupted ρ'_jk can then be expressed simply in terms of the gains at each station and the leakage-affected fields: ρ'_jk = [ G_R,j G^*_R,k R'_D,jR'^*_D,k G_R_j G_L,k^* R'_D,j L'^*_D,k; G_L,j G_R,k^* L'_D,j R'^*_D,k G_L,j G_L,k^* L'_D,jL'_D,k. ] The measured correlation products then correspond to the elements of the corrupted correlation matrix: R'_j R'^*_k = G_R,j G_R,k^* R'_D,jR'^*_D,k, R'_j L'^*_k = G_R,j G_L,k^* R'_D,j L'^*_D,k, L'_j R'^*_k = G_L,j G_R,k^* L'_D,j R'^*_D,k, L'_j L'^*_k = G_L,j G^*_L,k L'_D,jL'_D,k. As is apparent from <ref>, measured Stokes visibilities may be constructed by taking linear combinations of elements of ρ'_jk as follows: Ĩ'_jk = R'_j R'^*_k + L'_j L'^*_k/2, Q̃'_jk = L'_j R'^*_k + R'_j L'^*_k/2, Ũ'_jk = i L'_j R'^*_k - R'_j L'^*_k/2, Ṽ'_jk = R'_j R'^*_k - L'_j L'^*_k/2. We are interested in ratios between linear polarimetric Stokes visibilities and the Stokes I visibility. Both m̆ and β̆_2 are linearly related to quotients Q̃/Ĩ and Ũ/Ĩ; Q̃ and Ũ are similarly just linear combinations of the same correlation matrix elements, so without loss of generality, we will consider only the ratio Q̃/Ĩ. We define the complex gain ratio g ≡ G_R/G_L, and find Q̃'_jk/Ĩ'_jk = L'_j R'^*_k + R'_j L'^*_k/R'_j R'^*_k + L'_j L'^*_k = G_R,j G_L,k^* R'_D,j L'^*_D,k + G_L,j G_R,k^* L'_D,j R'^*_D,k/G_R,j G^*_R,k R'_D,j R'^*_D,k + G_L,j G^*_L,k L'_D,j L'^*_D,k, = G_R,j G_L,k^*(R'_D,j L'^*_D,k + g_j^-1 g_k^* L'_D,j R'^*_D,k)/G_L,j G_R,k^*(g_j R'_D,j R'^*_D,k + g_k^*-1 L'_D,j L'^*_D,k), = (g_j R'_D,j L'^*_D,k + g_k^* L'_D,j R'^*_D,k/g_j g_k^* R'_D,j R'^*_D,k + L'_D,j L'^*_D,k) We observe that the ratio Q̃'_jk/Ĩ'_jk depends only on the leakage-affected correlation products and the complex gain ratios g_j and g_k. A similar manipulation is possible for Ũ'_jk/Ĩ'_jk. Notably, because the left and right gains contain the same atmospheric contribution to the gain phase, the gain ratio cancels any phase variation imposed by atmospheric image translation, the primary corruption to VLBI phases. The complex gain ratio can be decomposed into two unknowns, the gain amplitude ratio |g| =|G_R|/|G_L| and the right left gain phase offset ∠ g = ∠ G_R - ∠ G_L. Each of these quantities is routinely calibrated much more easily than the absolute values |G| or ∠ G, as measurements of bright, unresolved, unpolarized and linearly polarized calibrator sources constrain the full complex ratio while leaving both |G| and ∠ G unspecified in pessimal cases <cit.>. Thus, polarimetric ratios are significantly more robust than, for example, visibility amplitudes, in the limit when the D terms are small. 
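The gain-ratio property derived above can be spot-checked numerically. The sketch below (ours) applies random right/left station gains to noiseless correlation products, with the D-terms set to zero and the field rotation already removed, and verifies that the Stokes quotient depends only on the ratios g_j and g_k.

```python
import numpy as np

rng = np.random.default_rng(1)
# true Stokes visibilities on one baseline (arbitrary illustrative values)
I, Q, U, V = 1.0 + 0.2j, 0.05 - 0.02j, -0.03 + 0.04j, 0.0
RR, LL = I + V, I - V
RL, LR = Q + 1j * U, Q - 1j * U

# random complex station gains; their absolute amplitudes and phases drop out of the quotient
GR = rng.normal(1, 0.3, 2) * np.exp(2j * np.pi * rng.random(2))
GL = rng.normal(1, 0.3, 2) * np.exp(2j * np.pi * rng.random(2))
g = GR / GL                                      # only these ratios should matter

# corrupted correlation products for stations j=0, k=1 (D-terms set to zero here)
RRp = GR[0] * np.conj(GR[1]) * RR
LLp = GL[0] * np.conj(GL[1]) * LL
RLp = GR[0] * np.conj(GL[1]) * RL
LRp = GL[0] * np.conj(GR[1]) * LR

Qp_over_Ip = (LRp + RLp) / (RRp + LLp)           # measured Q~' / I~' (factors of 1/2 cancel)
predicted = (g[0] * RL + np.conj(g[1]) * LR) / (g[0] * np.conj(g[1]) * RR + LL)
print(np.allclose(Qp_over_Ip, predicted))        # True: only g_j and g_k enter
```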
These leakage effects are typically a few percent, reaching 10% in pessimal cases, but can typically be modeled during data analysis as was done in . § RELATIVE SIGNS OF BESSEL FUNCTIONS A useful property of Bessel functions of the first kind is that functions with same-parity order (that is, all even and all odd) have asymptotically close zeros with increasing argument. <cit.> obtained the following approximate formula for the s^ th root of the equation J_n(x)=0 (here showing the first two terms of McMahon's equation 8 with a slight change in notation): x_n^(s) = Y - 4n^2-1/8Y + …, Y ≡1/4π(2 n -1 + 4s). For the thin ring image, we are interested in the separation between the (s+1)^ th root of J_0 and the s^ th root of J_2 (because J_2 has a root at x=0 and is thus “one root ahead”): x_0^(s+1) - x_2^(s) ≈8/4 π s + 3 π. For large s, the null separation falls like 1/s, clearly approaching zero. For small s, the null separation is approximately 8/3π < 1, which is a small fraction of the ∼π null spacing of each individual function. In the particular case of the EHT coverage of , the second null of J_0 is approached by the longest baselines at 230 GHz and will be exceeded by the longest baselines at 345 GHz and beyond; this is sufficient for the nulls of J_0 and J_2 to nearly align, meaning that the quotient of the two functions will have nearly constant (negative) sign except for brief transitional regions around each null. As shown in <ref>, the relative signs (here standing in for the relative phase between P̃ and Ĩ) between J_2 and J_0 are remarkably stable, increasingly so with larger (u,v) distance. Meanwhile, the separation in nulls falls rapidly, following the approximate behavior derived in <ref>. § EXAMPLE SITES We will examine the Fourier coverage available at 230, 345, 480, and 690 GHz from the Earth's surface at likely participant sites in high-frequency observations. We consider four locations with existing or planned facilities: the Atacama Large (sub)Millimeter Array (ALMA), the Greenland Telescope (GLT, at its planned summit location), the Submillimeter Array (SMA), and the South Pole Telescope (SPT). We do not consider detailed instrument properties in this letter, instead using only geographical position. Thus, the ALMA location is equally indicative of requirements for the Atacama Pathfinder Experiment (APEX) telescope, and the SMA is predictive for the James Clark Maxwell Telescope (JCMT). These sites are unique in that few other locations on Earth are ever capable of 690 GHz VLBI at the sensitivities relevant to this observation. For these particular sites, we consider two potential limitations on observations: intrinsic source variation and instrument noise. <ref> shows the multifrequency coverage provided by the sites under consideration along with the corresponding temporal distributions of ∠β̆_2 over 1000 gravitational times (G M / c^3) in the simulation as ray traced for and , where G is the gravitational constant, M is the black hole mass, and c is the speed of light. Instrumental noise is not considered; all variation represents intrinsic evolution of source structure. The phase variation is broadly consistent with the temporal variation seen in the image domain in Figure 2 of , with the exception of regions near nulls of the visibility response and transitions between n=0 and n=1.
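The near-alignment of the J_0 and J_2 roots and the two-term McMahon estimate quoted above can be checked directly against SciPy's tabulated Bessel zeros (a short sketch of ours):

```python
import numpy as np
from scipy.special import jn_zeros

s = np.arange(1, 11)
j0_zeros = jn_zeros(0, 11)                   # first 11 positive roots of J_0
j2_zeros = jn_zeros(2, 10)                   # first 10 positive roots of J_2 (the root at x=0 excluded)
exact = j0_zeros[1:] - j2_zeros              # x_0^(s+1) - x_2^(s) for s = 1..10
approx = 8.0 / (4 * np.pi * s + 3 * np.pi)   # two-term McMahon estimate quoted above
for si, e, a in zip(s, exact, approx):
    print(f"s={si:2d}  exact={e:7.4f}  approx={a:7.4f}")
```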
http://arxiv.org/abs/2307.03960v2
20230708115812
Nonparametric estimation of the diffusion coefficient from S.D.E. paths
[ "Eddy Ella-Mintsa" ]
math.ST
[ "math.ST", "stat.TH" ]
Nonparametric estimation of the diffusion coefficient from S.D.E. paths Eddy Ella-Mintsa August 12, 2023 ============================================================================================== Consider a diffusion process X=(X_t)_t∈[0,1] observed at discrete times and high frequency, solution of a stochastic differential equation whose drift and diffusion coefficients are assumed to be unknown. In this article, we focus on the nonparametric estimation of the diffusion coefficient. We propose ridge estimators of the square of the diffusion coefficient, built from discrete observations of X and obtained by minimization of the least squares contrast. We prove that the estimators are consistent and derive rates of convergence as the size of the sample paths tends to infinity and the discretization step of the time interval [0,1] tends to zero. The theoretical results are completed with a numerical study over synthetic data. Keywords. Nonparametric estimation, diffusion process, diffusion coefficient, least squares contrast, repeated observations. MSC: 62G05; 62M05; 60J60 § INTRODUCTION Let X=(X_t)_t∈[0,1] be a one-dimensional diffusion process with finite horizon time, solution of the following stochastic differential equation: dX_t=b(X_t)dt+σ(X_t)dW_t, X_0=0 where (W_t)_t≥ 0 is a standard Brownian motion. The drift function b and the diffusion coefficient σ are assumed to be unknown Lipschitz functions. We denote by (ℱ_t)_t∈ [0,1] the natural filtration of the diffusion process X. The goal of the article is to construct, from N discrete observations X̅^j=(X^j_kΔ_n)_0≤ k≤ n,    1 ≤ j ≤ N with time step Δ_n = 1/n, a nonparametric estimator of the square of the diffusion coefficient σ^2(.). We are in the framework of high-frequency data since the time step Δ_n tends to zero as n tends to infinity. Furthermore, we consider estimators of σ^2(.) built from a single diffusion path (N = 1), and those built on N paths when N →∞. In this paper, we first propose a ridge estimator of σ^2(.) on a compact interval. Secondly, we focus on a nonparametric estimation of σ^2(.) on the real line ℝ. We measure the risk of any estimator σ̂^2 of the square of the diffusion coefficient σ^2 by 𝔼[σ̂^2 - σ^2^2_n,N], where σ̂^2 - σ^2^2_n,N := (Nn)^-1∑_j=1^N∑_k=0^n-1(σ̂^2(X^j_kΔ_n) - σ^2(X^j_kΔ_n))^2 is an empirical norm defined from the sample paths. Related works. There is a large literature on the estimation of coefficients of diffusion processes, and we focus on the papers studying the estimation of σ^2. Estimation of the diffusion coefficient has been considered in the parametric case (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). In the nonparametric case, estimators of the diffusion coefficient from discrete observations are proposed under various frameworks. First, the diffusion coefficient is constructed from one discrete observation of the diffusion process (N = 1) in long time (T →∞) (see e.g. <cit.>, <cit.>, <cit.>, <cit.>), or in short time (T = 1) (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). Note that in short time (T<∞), only the diffusion coefficient can be estimated consistently from a single discrete path, contrary to the drift function whose consistent estimation relies on repeated discrete observations of the diffusion process (see e.g. <cit.>, <cit.>). For the case of short-time diffusion processes (for instance T = 1), estimators of a time-dependent diffusion coefficient t ↦σ^2(t) have been proposed.
In this context, <cit.> built a nonparametric estimator of t↦σ^2(t) and studied its L_2 risk using wavelet methods, <cit.> studies the L_p risk of a kernel estimator of σ^2(t), and <cit.> derived a minimax rate of convergence of order n^-ps/(1+2s) where s>1 is the smoothness parameter of the Besov space ℬ^s_p,∞([0,1]) (see later in the paper). For the space-dependent diffusion coefficient x ↦σ^2(x), a first estimator based on kernels and built from a single discrete observation of the diffusion process with T = 1 is proposed in <cit.>. The estimator has been proved to be consistent under a condition on the bandwidth, but a rate of convergence of its risk of estimation has not been established. Secondly, the diffusion coefficient is built in short time (T < ∞) from N repeated discrete observations with N →∞. In <cit.>, a nonparametric estimator of σ^2 is proposed from repeated discrete observations on the real line when the time horizon T = 1. The estimator has been proved to be consistent with a rate of order N^-1/5 over the space of Lipschitz functions.
Two main methods are used to build consistent nonparametric estimators of x ↦σ^2(x). The first method is the one using kernels (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>), the other method consists in estimating σ^2 as solution of a nonparametric regression model using the least squares approach. Since the diffusion coefficient is assumed to belong to an infinite dimensional space, the method consists in projecting σ^2 into a finite dimensional subspace, estimating the projection and making a data-driven selection of the dimension by minimizing a penalized least squares contrast (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). Main contribution. In this article, we assume to have at our disposal N i.i.d. discrete observations of length n of the diffusion process X. The main objectives of this paper are the following. * Construct a consistent and implementable ridge estimator of σ^2 from a single diffusion path (N=1) using the least squares approach. We derive rates of convergence of the risk of estimation of the ridge estimators built on a compact interval and on the real line over a Hölder space, taking advantage of the properties of the local time of the diffusion process, and its link with the transition density. * We extend the result to the estimation of σ^2 on repeated observations of the diffusion process (N →∞). We prove that the estimators built on a compact interval and on are more efficient considering their respective rates compared to nonparametric estimators built from a single diffusion path. * Focusing on the support of the diffusion coefficient, we consider an intermediate case between a compact interval and by proposing a ridge estimator of σ^2 restricted to the compact interval [-A_N,A_N] where A_N→∞ as N→∞. The benefit of this approach is that the resulting projection estimator can reach a faster rate of convergence compared to the rate obtained on the real line . * Finally, we propose adaptive estimators of σ^2 based on a data-driven selection of the dimension through the minimization of the penalized least squares contrast in different settings. We sum up below the rates of convergence (up to a log-factor) of the ridge estimators of σ^2_|I with I⊆ over a Hölder space defined in the next section with a smoothness parameter β≥ 1. Outline of the paper. In Section <ref>, we define our framework with the key assumptions on the coefficients of the diffusion process ensuring for instance that Equation (<ref>) admits a unique strong solution. Section <ref> is devoted to the non-adaptive estimation of the diffusion coefficient from one diffusion path both on a compact interval and on the real line . In Section <ref>, we extend the study to the non-adaptive estimation of the diffusion coefficient from repeated observations of the diffusion process. We propose in Section <ref>, adaptive estimators of the diffusion coefficient, and Section <ref> complete the study with numerical evaluation of the performance of estimators. We prove our theoretical results in Section <ref>. § FRAMEWORK AND ASSUMPTIONS Consider a diffusion process X=(X_t)_t∈[0,1], solution of Equation (<ref>) whose drift and diffusion coefficient satisfy the following assumption. * There exists a constant L_0>0 such that b and σ are L_0-Lipschitz functions on ℝ. * There exist constants σ_0,σ_1>0 such that : σ_0≤σ(x)≤σ_1, ∀ x∈ℝ. * σ∈𝒞^2(ℝ) and there exist C >0 and α≥ 0 such that: |σ^'(x)|+|σ^''(x)|≤ C(1+|x|^α), ∀ x∈ℝ. 
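To fix ideas, the following minimal sketch (ours, not from the paper) simulates discrete, high-frequency observations of the model above by an Euler-Maruyama scheme, with illustrative coefficients b(x) = -x and σ(x) = 1 + 0.5/(1+x^2) chosen to satisfy the assumptions; the scheme only approximates the true diffusion, which is enough for illustration.

```python
import numpy as np

def simulate_paths(N, n, b, sigma, rng):
    """Euler-Maruyama discretization of dX_t = b(X_t)dt + sigma(X_t)dW_t, X_0 = 0,
    on [0,1] with step Delta_n = 1/n; returns an (N, n+1) array of discrete paths."""
    dt = 1.0 / n
    X = np.zeros((N, n + 1))
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), size=N)
        X[:, k + 1] = X[:, k] + b(X[:, k]) * dt + sigma(X[:, k]) * dW
    return X

b = lambda x: -x                                # illustrative Lipschitz drift
sigma = lambda x: 1.0 + 0.5 / (1.0 + x ** 2)    # illustrative diffusion, bounded between 1 and 1.5
rng = np.random.default_rng(0)
paths = simulate_paths(N=100, n=1000, b=b, sigma=sigma, rng=rng)
print(paths.shape)                              # (100, 1001): N discrete paths observed at times k/n
```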
Under Assumption <ref>, X=(X_t)_t∈[0,1] is the unique strong solution of Equation (<ref>), and this unique solution admits a transition density (t,x)↦ p_X(t,x). Besides, we draw from Assumption <ref> that ∀ q≥ 1, 𝔼[t∈[0,1]sup|X_t|^q]<∞. §.§ Definitions and notations We suppose to have at our disposal, a sample D_N,n={X̅^j, j=1,⋯,N} constituted of N independent copies of the discrete observation X̅ = (X_kΔ_n)_0≤ k≤ n of the diffusion process X where Δ_n = 1/n is the time-step. The objective is to construct, from the sample D_N,n, a nonparametric estimator of the square σ^2 of the diffusion coefficient on an interval I ⊆. In the sequel, we consider two main cases, the first one being the estimation of σ^2 on the interval I from a single path (N=1 and n→∞). For the second case, we assume that both N and n tend to infinity. For each measurable function h, such that 𝔼[h^2(X_t)]<∞ for all t∈[0,1], we define the following empirical norms: h^2_n:=𝔼_X[1/n∑_k=0^n-1h^2(X_kΔ_n)], h^2_n,N:=1/Nn∑_j=1^N∑_k=0^n-1h^2(X^j_kΔ_n). For all h ∈𝕃^2(I), we have h^2_n=∫_Ih^2(x)1/n∑_k=0^n-1p_X(kΔ_n,x_0,x)dx=∫_Ih^2(x)f_n(x)dx, where f_n: x↦1/n∑_k=0^n-1p_X(kΔ_n,x) is a density function. For the case of non-adaptive estimators of σ^2, we also establish bounds of the risks of the estimators based on the empirical norm ._n or the 𝕃^2-norm . when the estimation interval I is compact. For any integers p,q ≥ 2 and any matrix M ∈^p × q, we denote by ^tM, the transpose of M. §.§ Spaces of approximation We propose projection estimators of σ^2 on a finite-dimensional subspace. To this end, we consider for each m ≥ 1, a m-dimensional subspace 𝒮_m given as follows: 𝒮_m:=Span(ϕ_ℓ, ℓ=0,⋯,m-1),    m≥ 1 where the functions (ϕ_ℓ,  ℓ∈ℕ) are continuous, linearly independent and bounded on I. Furthermore, we need to control the ℓ^2-norm of the coordinate vectors of elements of 𝒮_m, which leads to the following constrained subspace, 𝒮_m,L:={h=∑_ℓ=0^m-1a_ℓϕ_ℓ, ∑_ℓ=0^m-1a^2_ℓ=𝐚^2_2≤ mL, 𝐚=(a_0,⋯,a_m-1), L>0}. Note that 𝒮_m,L⊂𝒮_m and 𝒮_m,L is no longer a vector space. The control of the coordinate vectors allows to establish an upper bound of the estimation error that tends to zero as n→∞ or N,n→∞. In fact, we prove in the next sections that the construction of consistent estimators of σ^2 requires the functions h=∑_ℓ=0^m-1a_ℓϕ_ℓ to be bounded, such that h_∞≤ℓ=0,…,m-1maxϕ_ℓ_∞ 𝐚_2. This condition is satisfied for the functions of the constrained subspaces 𝒮_m,L with m ≥ 1. In this article, we work with the following bases. [B] The B-spline basis This is an exemple of a non-orthonormal basis defined on a compact interval. Let A > 0 be a real number, and suppose (without restriction) that I = [-A,A]. Let K,M∈ℕ^*, and consider 𝐮=(u_-M,⋯,u_K+M) a knots vector such that u_-M = ⋯ = u_-1 = u_0 = -A, u_K+1 = ⋯ = u_K+M = A, and for all i=0,⋯,K, u_i = -A+i2A/K. One calls B-spline functions, the piecewise polynomial functions (B_ℓ)_ℓ=-M,⋯,K-1 of degree M, associated with the knots vector 𝐮 (see <cit.>, Chapter 14). The B-spline functions are linearly independent smooths functions returning zero for all x∉[-A,A], and satisfying some smoothness conditions established in <cit.>. Thus, we consider approximation subspaces 𝒮_K+M defined by 𝒮_K+M=Span{B_ℓ, ℓ=-M,⋯,K-1} of dimension (𝒮_K+M)=K+M, and in which, each function h=∑_ℓ=-M^K-1a_ℓB_ℓ is M-1 times continuously differentiable thanks to the properties of the spline functions (see <cit.>). 
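As an illustration (ours; the values of A, K and M below are arbitrary), the spline collection can be evaluated with scipy.interpolate.BSpline using exactly the clamped knot vector 𝐮 described above.

```python
import numpy as np
from scipy.interpolate import BSpline

def spline_design_matrix(x, A=1.0, K=8, M=3):
    """Evaluate the K+M B-spline functions of degree M on [-A, A] at the points x,
    with knots u_{-M} = ... = u_0 = -A, u_i = -A + 2Ai/K, u_{K+1} = ... = u_{K+M} = A."""
    interior = -A + 2.0 * A * np.arange(K + 1) / K
    knots = np.concatenate([np.full(M, -A), interior, np.full(M, A)])
    spl = BSpline(knots, np.eye(K + M), M, extrapolate=False)
    return np.nan_to_num(spl(x))        # rows: evaluation points, columns: basis functions; zero outside [-A, A]

x = np.linspace(-1.2, 1.2, 7)
B = spline_design_matrix(x)             # shape (len(x), K + M)
print(B.shape, np.allclose(B[np.abs(x) <= 1.0].sum(axis=1), 1.0))   # partition of unity inside [-A, A]
```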
Besides, the spline basis is included in the definition of both the subspace 𝒮_m and the constrained subspace 𝒮_m,L (see Equations (<ref>) and (<ref>)) with m = K + M and for any coordinates vector (a_-M,…,a_K-1) ∈^K+M, ∑_ℓ=-M^K-1a_ℓB_ℓ = ∑_ℓ=0^m-1a_ℓ-MB_ℓ-M. The integer M ∈ℕ^* is fixed, while K varies in the set of integers ℕ^*. If we assume that σ^2 belongs to the Hölder space Σ_I(β,R) given as follows: Σ_I(β,R):={h∈𝒞^⌊β⌋+1(I), |h^(ℓ)(x)-h^(ℓ)(y)|≤ R|x-y|^β-l, x,y∈ I}, where β≥ 1, ℓ=⌊β⌋ and R>0, then the unknown function σ^2_|I restricted to the compact interval I can be approximated in the constrained subspace 𝒮_K+M,L spanned by the spline basis. This approximation results to the following bias term: h ∈𝒮_K+M,Linfh - σ^2_|I^2_n≤ C|I|^2βK^-2β where the constant C > 0 depends on β, R and M, and |I| = sup I - inf I. The above result is a modification of Lemma D.2 in <cit.>. [F] The Fourier basis The subspace 𝒮_m can be spanned by the Fourier basis {f_ℓ,   ℓ = 0, …, m-1} = {1,√(2)cos(2π jx), √(2)sin(2π jx),  j=1,...,d}  with   m=2d+1. The above Fourier basis is defined on the compact interval [0,1]. The definition can be extended to any compact interval, replacing the bases functions x ↦ f_ℓ(x) by x ↦ 1/(max I - min I)f_ℓ(x-min I/max I - min I). We use this basis to build the estimators of σ^2 on a compact interval I ⊂. Define for all s ≥ 1 and for any compact interval I ⊂, the Besov space ℬ^s_2,∞(I) which is a space of functions f ∈ L^2(I) such that the ⌊ s⌋^th derivative f^(⌊ s ⌋) belongs to the space ℬ^s-⌊ s ⌋_2,∞(I) given by ℬ^s - ⌊ s ⌋_2,∞(I) = {f ∈ L^2(I)  and w_2,f(t)/t^s - ⌊ s ⌋∈ L^∞(I∩^+)} where for s-⌊ s⌋∈ (0,1), w_2,f(t)=|h|≤ tsupτ_hf - f_2 with τ_hf(x) = f(x-h), and for s-⌊ s⌋ = 1, w_2,f(t)=|h|≤ tsupτ_hf + τ_-hf - 2f_2. Thus, if we assume that the function σ^2_|I belongs to the Besov space ℬ^s_2,∞, then it can be approximated in a constrained subspace 𝒮_m,L spanned by the Fourier basis. Moreover, under Assumption <ref> and from Lemma 12 in <cit.>, there exists a constant C>0 depending on the constant τ_1 of Equation (<ref>), the smoothness parameter s of the Besov space such that h∈𝒮_m,Linfh-σ^2_|I^2_n≤τ_1h∈𝒮_m,Linfh-σ^2_|I^2≤ C|σ^2_|I|^2_β m^-2β where |σ^2_|I|_s is the semi-norm of σ^2_|I in the Besov space ℬ^s_2,∞(I). Note that for all β≥ 1, the Hölder space Σ_I(β,R) and the Besov space ℬ^β_2,∞ satisfy: L^∞() ∩Σ_I(β,R) ⊂ℬ^β_∞,∞(I) ⊂ℬ^β_2,∞(I) (see <cit.>, Chap. 2 page 16). As a result, we rather consider in the sequel the Hölder space Σ_I(β,R) which can also be approximated by the Fourier basis. [H] The Hermite basis The basis is defined from the Hermite functions (h_j,j≥ 0) defined on ℝ and given for all j≥ 0 and for all x∈ℝ by: h_j(x)=c_jH_j(x), where H_j(x)=(-1)^jexp(x^2/2)d^j/dx^j(e^-x^2/2) and c_j=(2^jj!√(π))^-1/2. The polynomials H_j(x), j≥ 0 are the Hermite polynomials, and (h_j,j≥ 0) is an orthonormal basis of L^2(ℝ). Furthermore, for all j≥ 1 and x∈, |h_j(x)|≤ c|x|exp(-c_0x^2) for x^2≥(3/2)(4j+3) where c,c_0>0 are constants independent of j (see  <cit.>, Proof of Proposition 3.5). We use the Hermite basis in the sequel for the estimation of σ^2 on the real line . If one assumes that σ^2 belongs to the Sobolev space W^s_f_n(,R) given for all s ≥ 1 by W^s_f_n(,R) := {g ∈ L^2(, f_n(x)dx), ∀ ℓ≥ 1, g - g_ℓ^2_n≤ Rℓ^-s} where for each ℓ≥ 1, g_ℓ is the L^2(, f_n(x)dx)-orthogonal projection of g on the ℓ-dimensional vector space 𝒮_ℓ spanned by the Hermite basis. 
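Recall that the Hermite functions combine the Hermite polynomials with a Gaussian weight, h_j(x) = c_j H_j(x)e^{-x^2/2}, which is what makes (h_j)_{j≥0} an orthonormal basis of L^2(ℝ). Below is a small evaluation sketch (ours) using the standard stable recurrence.

```python
import numpy as np

def hermite_functions(x, m):
    """Evaluate h_0, ..., h_{m-1} at the points x via the recurrence
    h_{j+1}(x) = sqrt(2/(j+1)) x h_j(x) - sqrt(j/(j+1)) h_{j-1}(x),
    starting from h_0(x) = pi^{-1/4} exp(-x^2/2)."""
    x = np.asarray(x, dtype=float)
    H = np.empty((m, x.size))
    H[0] = np.pi ** (-0.25) * np.exp(-0.5 * x ** 2)
    if m > 1:
        H[1] = np.sqrt(2.0) * x * H[0]
    for j in range(1, m - 1):
        H[j + 1] = np.sqrt(2.0 / (j + 1)) * x * H[j] - np.sqrt(j / (j + 1.0)) * H[j - 1]
    return H

x = np.linspace(-10.0, 10.0, 2001)
H = hermite_functions(x, 8)
gram = (H @ H.T) * (x[1] - x[0])        # numerical L^2(R) inner products
print(np.allclose(gram, np.eye(8), atol=1e-6))   # approximately orthonormal
```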
Consider a compact interval I ⊂ ℝ and the following spaces: W^s(I,R) :=  {g ∈ L^2(I),  ∑_j=0^∞j^s<g,ϕ_j>^2≤ R}, W^s_f_n(I,R) :=  {g ∈ L^2(I, f_n(x)dx), ∀ ℓ≥ 1, g - g_ℓ^2_n≤ Rℓ^-s} where (ϕ_j)_j≥ 0 is an orthonormal basis defined on I and for all ℓ≥ 1, g_ℓ is the orthogonal projection of g onto 𝒮_ℓ = Span(ϕ_j,  j≤ℓ) of dimension ℓ≥ 1 (see e.g. <cit.>). Then, for all g ∈ W^s(I,R), we have g=∑_j=0^∞<g,ϕ_j>ϕ_j  and  g-g_ℓ^2 = ∑_j=ℓ+1^∞<g,ϕ_j>^2≤ℓ^-s∑_j=ℓ+1^∞j^s<g,ϕ_j>^2≤ Rℓ^-s. We have W^s_f_n(I,R) = W^s(I,R) since the empirical norm ._n and the L^2-norm . are equivalent. The space W^s_f_n(ℝ,R) is an extension of the space W^s_f_n(I,R) where I = ℝ and (ϕ_j)_j ≥ 0 is the Hermite basis. The B-spline basis is used for the estimation of σ^2 on a compact interval on one side (N = 1 and N>1), and on the real line ℝ on the other side, restricting σ^2 to the compact interval [-log(n), log(n)] for N = 1, or [-log(N), log(N)] for N > 1, and bounding the exit probability of the process X from the interval [-log(N), log(N)] (or [-log(n), log(n)]) by a negligible term with respect to the estimation error. In a similar context, the Fourier basis is used as an orthonormal basis to build nonparametric estimators of σ^2 on a compact interval and on ℝ, both for N = 1 and for N > 1. The main goal is to show that, in addition to the spline basis which is not orthogonal, we can build projection estimators of σ^2 on orthonormal bases that are consistent. The advantage of the Hermite basis compared to the Fourier basis is its definition on the real line ℝ. As a result, we use the Hermite basis to propose, for N > 1, a projection estimator of σ^2 whose support is the real line ℝ. Denote by ℳ the set of possible values of the dimension m ≥ 1 of the approximation subspace 𝒮_m. If (ϕ_0,⋯,ϕ_m-1) is an orthonormal basis, then for all m,m^'∈ℳ such that m < m^', we have 𝒮_m⊂𝒮_m^'. For the case of the B-spline basis, one can find a subset 𝒦⊂ℳ of the form 𝒦={2^q, q=0,⋯,q_max} such that for all K,K^'∈𝒦, K < K^' implies 𝒮_K + M⊂𝒮_K^' + M (see for example <cit.>). The nesting of the subspaces 𝒮_m, m∈ℳ is of great importance in the context of adaptive estimation of the diffusion coefficient and the establishment of upper bounds for the risk of adaptive estimators. In the sequel, we denote by [𝐅],  [𝐇] and [𝐁] the respective collections of subspaces spanned by the Fourier basis, the Hermite basis and the B-spline basis. §.§ Ridge estimators of the square of the diffusion coefficient We establish from Equation (<ref>) and the sample D_N,n the regression model for the estimation of σ^2. For all j ∈ [[1,N]] and k ∈ [[0,n-1]], define U^j_kΔ_n := (X^j_(k+1)Δ_n - X^j_kΔ_n)^2/Δ_n. The increments U^j_kΔ_n are discrete-time approximations of d<X,X>_t/dt since, from Equation (<ref>), one has d<X,X>_t = σ^2(X_t)dt. From Equation (<ref>), we obtain the following regression model, U^j_kΔ_n=σ^2(X^j_kΔ_n)+ζ^j_kΔ_n+R^j_kΔ_n,   ∀ (j,k)∈[[1,N]]×[[0,n-1]] where U^j_kΔ_n is the response variable, and ζ^j_kΔ_n and R^j_kΔ_n are respectively the error term and a negligible residual whose explicit formulas are given in Section <ref>. We consider the least squares contrast γ_n,N defined for all m ∈ℳ and for any function h∈𝒮_m,L by γ_n,N(h):=1/Nn∑_j=1^N∑_k=0^n-1(U^j_kΔ_n-h(X^j_kΔ_n))^2. For each dimension m ∈ℳ, the projection estimator σ^2_m of σ^2 over the subspace 𝒮_m,L satisfies: σ^2_m∈h∈𝒮_m,Lmin γ_n,N(h).
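As an illustration of how the contrast γ_n,N can be minimized in practice over the constrained set 𝒮_m,L, the following Python sketch computes the pseudo-increments U^j_kΔ_n and solves the ℓ^2-constrained least squares problem by a bisection on the Lagrange multiplier. This way of enforcing the constraint 𝐚^2_2≤ mL, like the function names used, is our own choice and not necessarily the routine used for the paper's experiments; the matrix Phi stacks the basis evaluations (ϕ_ℓ(X^j_kΔ_n)) over all observations, as the matrix 𝐅_m introduced just below.

```python
import numpy as np

def increments(paths, delta):
    # U^j_k = (X^j_{(k+1)delta} - X^j_{k delta})^2 / delta, for a (N, n+1) array of paths
    return np.diff(paths, axis=1) ** 2 / delta

def ridge_coefficients(U, Phi, L):
    # minimize ||U - Phi a||_2^2 subject to ||a||_2^2 <= m L   (U of length Nn, Phi of shape (Nn, m))
    m = Phi.shape[1]
    G, b, bound = Phi.T @ Phi, Phi.T @ U, m * L
    a = np.linalg.lstsq(Phi, U, rcond=None)[0]          # unconstrained least squares solution
    if a @ a <= bound:
        return a
    # constraint active: find lam >= 0 with ||(G + lam I)^{-1} b||_2^2 = m L (decreasing in lam)
    def solve(lam):
        return np.linalg.solve(G + lam * np.eye(m), b)
    lo, hi = 0.0, 1.0
    a = solve(hi)
    while a @ a > bound:
        hi *= 2.0
        a = solve(hi)
    for _ in range(60):
        lam = 0.5 * (lo + hi)
        a = solve(lam)
        if a @ a > bound:
            lo = lam
        else:
            hi = lam
    return solve(hi)
```

With Phi built, for instance, from the spline sketch above, the resulting estimator is evaluated as σ^2_m(x)=∑_ℓ a_ℓϕ_ℓ(x).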
Indeed, for each dimension m ∈ℳ, the estimator σ^2_m of σ^2 given in Equation (<ref>) satisfies σ^2_m=∑_ℓ=0^m-1a_ℓϕ_ℓ, where 𝐚=(a_0,⋯,a_m-1):=𝐚^2_2≤ mLmin𝐔-𝐅_m𝐚^2_2 with ^tU = (U^1_0,…,U^1_(n-1)Δ_n, …, U^N_0,…,U^N_(n-1)Δ_n) and the matrix 𝐅_m is defined as follows F_m := ( ^t(ϕ_ℓ(X^j_0),…,ϕ_ℓ(X^j_(n-1)Δ_n)))_1 ≤ j ≤ N0 ≤ℓ≤ m-1∈ℝ^Nn × m. The vector of coefficients 𝐚 is unique and called the ridge estimator of 𝐚 because of the ℓ^2 constraint on the coordinate vectors (see <cit.> Chap. 3 page 61). § ESTIMATION OF THE DIFFUSION COEFFICIENT FROM A SINGLE DIFFUSION PATH This section focuses on the nonparametric estimation of the square of the diffusion coefficient σ^2 on an interval I ⊆ when only a single diffusion path is observed at discrete times (N=1). It is proved in the literature that one can construct consistent estimators of the diffusion coefficient from one path when the time horizon T is finite (see e.g. <cit.>). Two cases are considered. First, we propose a ridge estimator of σ^2 on a compact interval I ⊂, say for example I = [-1,1]. Secondly, we extend the study to the estimation of σ^2 on the real line I =. §.§ Non-adaptive estimation of the diffusion coefficient on a compact interval In this section, we consider the estimator σ^2_m of the compactly supported square of the diffusion coefficient σ^2_|I on the constrained subspaces 𝒮_m,L from the observation of a single diffusion path. Since the interval I⊂ is compact, the immediate benefit is that the density function f_n defined from the transition density of the diffusion process X̅ = (X_kΔ) is bounded from below. In fact, there exist constants τ_0,τ_1∈(0,1] such that ∀ x∈ I, τ_0≤ f_n(x)≤τ_1, (see <cit.>). Thus, for each function h∈𝕃^2(I), τ_0h^2≤h^2_n≤τ_1h^2 where . is the 𝕃^2-norm. Equation (<ref>) allows to establish global rates of convergence of the risk of the ridge estimators σ^2_m of σ^2_|I with m∈ℳ using the L^2-norm . which is, in this case, equivalent with the empirical norm ._n. To establish an upper-bound of the risk of estimation that tends to zero as n tends to infinity, we need to establish equivalence relations between the pseudo-norms ._n,1  (N=1) and ._X on one side, and ._X and the L^2-norm . on the other side, where the random pseudo-norm ._X is defined for each function h∈𝕃^2(I) by h^2_X := ∫_0^1h^2(X_s)ds. Define for x∈, the local time ℒ^x of the diffusion process X = (X_t)_t∈[0,1] by ℒ^x = ε→ 0lim1/2ε∫_0^1_(x-ε,x+ε)(X_s)ds. In general, the local time of a continuous semimartingale is a.s. càdlàg (see e.g. <cit.>). But, for diffusion processes and under Assumption <ref>, the local time ℒ^x is bicontinuous at any point x∈ (see Lemma <ref> in Section <ref>). Furthermore, we obtain the following result. Under Assumption <ref>, and for any continuous and integrable function h, it yields, * ∫_0^1h(X_s)ds = ∫_h(x)ℒ^xdx. * For all x∈, (ℒ^x) = ∫_0^1p_X(s,x)ds. In Lemma <ref>, we remark that there is a link between the local time and the transition density of the diffusion process. Thus, if we consider the pseudo-norm ._X depending on the process X = (X_t)_t∈[0,1] and given in Equation (<ref>), and using Lemma <ref>, we obtain that, [h_X^2] = ∫_h^2(x)[ℒ^x]dx = ∫_h^2(x)∫_0^1p_X(s,x)dsdx≥τ_0h^2. where ∫_0^1p_X(s,x)ds≥τ_0 >0 (see <cit.>, Lemma 4.3), and h^2 is the 𝕃^2-norm of h. Set L = log(n). Suppose that σ^2 is approximated in one of the collections [𝐁] and [𝐅]. 
Under Assumption <ref>, it yields [σ^2_m - σ^2_|I^2_n,1] ≤ 3h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/n + m^2γ+1log(n)/n^γ/2 + Δ^2_n) [σ^2_m - σ^2_|I^2_n] ≤34τ_1/τ_0h∈𝒮_m,Linfh-σ^2_|I^2_n+C^'(m/n + m^2γ+1log(n)/n^γ/2 + Δ^2_n) where the number γ > 1 comes from the use of the Hölder inequality. The constant C>0 depends on σ_1 and the constant C^'>0 depends on σ_1, τ_0 and τ_1. We observe that the upper-bound of the risk of estimation of σ^2_m is composed of the bias term, which quantifies the cost of approximation of σ^2_|I in the constrained space 𝒮_m,L, the estimation error O(m/n) and the cost of the time discretization O(Δ^2_n) are established on a random event in which the pseudo-norms ._n,1 and ._X are equivalent, and whose probability of the complementary times σ^2_m - σ^2_|I^2_∞ is bounded by the term O(m^2γ+1log(n)/n^γ/2) (see Lemma <ref> and proof of Theorem <ref>). The next result proves that the risk of estimation can reach a rate of convergence of the same order than the rate established in <cit.> if the parameter γ > 1 is chosen such that the term O(m^2γ+1log(n)/n^γ/2) is of the same order than the estimation error of order m/n. Note that the risk σ^2_m - σ^2_|I^2_n is random since σ^2_m - σ^2_|I^2_n = _X[1/n∑_k=0^n-1(σ^2_m - σ^2_|I)(X_kΔ)] and the estimator σ^2_m is built from an independent copy X̅^1 of the discrete times process X̅. Thus, the expectation relates to the estimator σ^2_m. Suppose that σ^2∈Σ_I(β,R) with β > 3/2, and γ = 2(2β+1)/(2β-3). Assume that K_opt∝ n^1/(2β+1) for [𝐁] (m_opt = K_opt + M), and m_opt∝ n^1/(2β+1) for [𝐅]. Under Assumptions <ref>, it yields, [σ^2_m_opt - σ^2_|I^2_n,1] = O(log(n)n^-2β/(2β+1)) [σ^2_m_opt - σ^2_|I^2_n] = O(log(n)n^-2β/(2β+1)). Note that we obtain the exact same rates when considering the risk of σ^2_m_opt defined with the 𝕃^2-norm equivalent to the empirical norm ._n. Moreover, these rates of convergence are of the same order than the optimal rate n^-s/(2s+1) established in <cit.> over a Besov ball. §.§ Non-adaptive estimation of the diffusion coefficient on the real line In this section, we propose a ridge estimator of σ^2 on the real line , built from one diffusion path. In this context, the main drawback is that the density function f_n:x↦1/n∑_k=0^n-1p_X(kΔ,x) is no longer lower bounded. Consequently, the empirical norm ._n is no longer equivalent to the L_2-norm . and the consistency of the estimation error is no longer ensured under the only assumptions made in the previous sections. Consider the truncated estimator σ^2_m,L of σ^2 given by σ^2_m,L(x) = σ^2_m(x)_σ^2_m(x) ≤√(L) + √(L)_σ^2_m(x) > √(L). Thus, the risk of the ridge estimator σ^2_m,L is upper-bounded as follows: [σ^2_m,L - σ^2^2_n,1] ≤  [(σ^2_m,L - σ^2)_[-log(n),log(n)]^2_n,1] + [(σ^2_m,L - σ^2)_[-log(n),log(n)]^c^2_n,1] ≤  [(σ^2_m,L - σ^2)_[-log(n),log(n)]^2_n,1] + 4log^2(n)t∈[0,1]sup(|X_t|>log(n)). The first term on the r.h.s. is equivalent to the risk of a ridge estimator of σ^2 on the compact interval [-log(n),log(n)]. The second term on the r.h.s. is upper-bounded using Lemma <ref>. We derive below, an upper-bound of the risk of estimation of σ^2_m. Suppose that L = log^2(n). Under Assumption <ref>, it yields, [σ^2_m,L - σ^2^2_n,1] ≤h∈𝒮_m,Linfh-σ^2^2_n+C√(m^qlog^2(n)/n) where C>0 is a constant, q = 1 for the collection [𝐁], and q = 2 for the collection [𝐅]. We first remark that the upper-bound of the risk of the truncated estimator of σ^2 differs with respect to each of the chosen bases. 
This contrast comes from the fact that the Fourier basis {f_ℓ, ℓ = 0, …, m-1} and the spline basis {B_ℓ-M,  ℓ = 0, …, m-1} satisfy ∑_ℓ = 0^m - 1f_ℓ(x)≤ C_fm,  and ∑_ℓ=0^m-1B_ℓ-M(x) = 1. Secondly, the estimation error is not as fine as the one established in Theorem <ref> where σ^2 is estimated on a compact interval. In fact, on the real line , the pseudo-norm ._X can no longer be equivalent to the 𝕃^2-norm since the transition density is not bounded from below on . Consequently, we cannot take advantage of the exact method used to establish the risk bound obtained in Theorem <ref> which uses the equivalence relation between the pseudo-norms ._n,1 and ._X on one side, and ._X and the 𝕃^2-norm . on the other side. Moreover, we can also notice that the term of order 1/n^2 does not appear since it is dominated by the estimation error. We obtain below rates of convergence of the ridge estimator of σ^2 for each of the collections [𝐁] and [𝐅]. Suppose that σ^2∈Σ_I(β,R) with β≥ 1 For [B]. Assume that K ∝ n^1/(4β+1). Under Assumptions <ref>, there exists a constant C>0 depending on β and σ_1 such that [σ^2_m,L - σ^2^2_n,1] ≤ Clog^2β(n)n^-2β/(4β+1). For [F]. Assume that m ∝ n^1/2(2β+1). Under Assumptions <ref>, it yields, [σ^2_m,L - σ^2^2_n,1] ≤ Clog(n)n^-β/(2β+1) where the constant C>0 depends on β and σ_1. As we can remark, the obtained rates are slower than the ones established in Section <ref> where σ^2 is estimated on a compact interval. This result is the immediate consequence of the result of Theorem <ref>. § ESTIMATION OF THE DIFFUSION COEFFICIENT FROM REPEATED DIFFUSION PATHS We now focus on the estimation of the (square) of the diffusion coefficient from i.i.d. discrete observations of the diffusion process (N →∞). §.§ Non-adaptive estimation of the diffusion coefficient on a compact interval We study the rate of convergence of the ridge estimators σ^2_m of σ^2_|I from D_N,n when I is a compact interval. The next theorem gives an upper-bound of the risk of our estimators σ^2_m,  m∈ℳ. Suppose that L = log(Nn) and ℳ = {1,…,√(min(n,N))/log(Nn)}. Under Assumption <ref> and for all m ∈ℳ, there exist constants C>0 and C^'>0 depending on σ_1 such that, 𝔼[σ^2_m-σ^2_|I^2_n,N]≤   3h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/Nn+mlog(Nn)exp(-C√(min(n,N)))+Δ^2_n) 𝔼[σ^2_m-σ^2_|I^2_n]≤   34h∈𝒮_m,Linfh-σ^2_|I^2_n + C^'(m/Nn+mlog(Nn)exp(-C√(min(n,N)))+Δ^2_n). Note that the result of Theorem <ref> is independent of the choice of the basis that generate the approximation space 𝒮_m. The first term on the right-hand side represents the approximation error of the initial space, the second term O(m/(Nn)) is the estimation error, and the last term characterizes the cost of the time discretization. The next result is derived from Theorem <ref>. Suppose that σ^2∈Σ_I(β,R) with β > 3/2. Moreover, assume that K_opt∝ (Nn)^1/(2β+1) for [𝐁] (m_opt = K_opt + M), and m_opt∝ (Nn)^1/(2β+1) for [𝐅]. Under Assumptions <ref>, it yields, 𝔼[σ^2_m_opt-σ^2_|I^2_n,N] =  O((Nn)^-2β/(2β+1)) 𝔼[σ^2_m_opt-σ^2_|I^2_n] =  O((Nn)^-2β/(2β+1)). The obtained result shows that the nonparametric estimators of σ^2_|I based on repeated observations of the diffusion process are more efficient when N,n→∞. Note that the same rate is obtained if the risk of σ^2_m_opt is defined with the 𝕃^2-norm . equivalent to the empirical norm ._n. The rate obtained in Corollary <ref> is established for β > 3/2. 
If we consider for example the collection [B] and assume that β∈ [1, 3/2], then K_opt∝ (Nn)^1/(2β+1) belongs to ℳ for n ∝√(N)/log^4(N) and we have 𝔼[σ^2_m_opt-σ^2_|I^2_n,N]≤ C(Nn)^-2β/(2β+1). Under the condition n ∝√(N)/log^4(N) imposed on the length of diffusion paths, the obtained rate is of order n^-3β/(2β+1) (up to a log-factor) which is equivalent to N^-3β/2(2β+1) (up to a log-factor). §.§ Non-adaptive estimation of the diffusion coefficient on the real line Consider a ridge estimator of σ^2 on built from N independent copies of the diffusion process X observed in discrete times, where both N and n tend to infinity. For each m ∈ℳ, we still denote by σ^2_m the ridge estimators of σ^2 and σ^2_m,L the truncated estimators of σ^2 given in Equation (<ref>). We establish, through the following theorem, the first risk bound that highlights the main error terms. Suppose that L=log^2(N). Under Assumptions <ref> and for any dimension m∈ℳ, the following holds: 𝔼[σ^2_m,L-σ^2^2_n,N] ≤   2h∈𝒮_m,Linfh-σ^2^2_n+C(√(m^qlog^2(N)/Nn) + Δ^2_n) where C>0 is a constant depending on the upper bound σ_1 of the diffusion coefficient. Moreover, q = 1 for the collection [𝐁] and q = 2 for the collection [𝐇]. If we consider the risk of σ^2_m,L using the empirical norm ._n, then we obtain 𝔼[σ^2_m,L-σ^2^2_n] ≤ 2h∈𝒮_m,Linfh-σ^2^2_n+C(√(m^qlog^2(N)/Nn) + m^2log^3(N)/N+Δ^2_n) The risk bound given in Equation (<ref>) is a sum of four error terms. The first term is the approximation error linked to the choice of the basis, the second term is the estimation error given in Theorem <ref>, the third term m^2log^3(N)/N comes from the relation linking the empirical norm ._n to the pseudo-norm ._n,N (see Lemma <ref>), and the last term is the cost of the time-discretization. We derive, in the next result, rates of convergence of the risk bound of the truncated ridge estimators σ^2_m,L based on the collections [𝐁] and [𝐇] respectively. Suppose that σ^2∈Σ_I(β,R) with β≥ 1,  I = [-log(N),log(N)], and K ∝ (Nn)^1/(4β+1) for [𝐁], and σ^2∈ W^s_f_n(,R) with s ≥ 1 and m ∝ (Nn)^1/2(2s+1) for [𝐇]. Under Assumption <ref>, the following holds: For  [𝐁]   𝔼[σ^2_m,L-σ^2^2_n,N] ≤ C(log^2β(N)(Nn)^-2β/(4β+1) + 1/n^2), For  [𝐇]   𝔼[σ^2_m,L-σ^2^2_n,N] ≤ C(log^3(N)(Nn)^-s/(2s+1) + 1/n^2). where C>0 is a constant depending on β and σ_1 for [𝐁], or s and σ_1 for [𝐇]. The obtained rates are slower compared to the rates established in Section <ref> for the estimation of σ^2_|I where the interval I⊂ is compact. In fact, the method used to establish the rates of Theorem <ref> from which the rates of Corollary <ref> are obtained, does not allow us to derive rates of order (Nn)^-α/(2α+1) (up to a log-factor) with α≥ 1 (e.g. α = β, s). Finally, if we consider the risk defined with the empirical norm ._n, then from Equation (<ref>) with n ∝ N and assuming that m ∝ N^1/4(s+1) for [𝐇] or K ∝ N^1/4(β+1) for [𝐁], we obtain [𝐁]:     𝔼[σ^2_m,L-σ^2^2_n] ≤   Clog^2β(N)(Nn)^-β/2(β+1), [𝐇]:     𝔼[σ^2_m,L-σ^2^2_n] ≤   Clog^3(N)(Nn)^-s/2(s+1), where C>0 is a constant depending on σ_1 and on the smoothness parameter. We can see that the obtained rates are slower compared to the results of Corollary <ref> for n ∝ N. The deterioration of the rates comes from the additional term of order m^2log^3(N)/N which is now regarded as the new estimation error since it dominates the other term in each case as N→∞. 
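Since the collection [𝐇] used above requires evaluating the Hermite basis on the whole real line, we include a small Python sketch below. It assumes the standard normalization h_j(x)=c_jH_j(x)e^{-x^2/2} with c_j=(2^jj!√π)^{-1/2} and the physicists' Hermite polynomials (our reading of the definition recalled in Section <ref>), and it also implements the truncation step defining σ^2_m,L; the function names are ours.

```python
import numpy as np
from numpy.polynomial import hermite
from scipy.special import factorial

def hermite_functions(x, m):
    # h_j(x) = (2^j j! sqrt(pi))^{-1/2} H_j(x) exp(-x^2/2), j = 0, ..., m-1
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.empty((x.size, m))
    for j in range(m):
        coef = np.zeros(j + 1)
        coef[j] = 1.0                                   # selects the j-th Hermite polynomial
        c_j = 1.0 / np.sqrt(2.0 ** j * factorial(j) * np.sqrt(np.pi))
        out[:, j] = c_j * hermite.hermval(x, coef) * np.exp(-x ** 2 / 2.0)
    return out

def truncate(sigma2_values, L):
    # truncation defining the estimator on the real line: min(hat sigma^2_m(x), sqrt(L))
    return np.minimum(sigma2_values, np.sqrt(L))
```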
§.§ Non-adaptive estimation of the diffusion coefficient on a compact interval depending on the sample size This section combines the two first sections <ref> and <ref> focusing on the estimation of σ^2 on the compact interval [-A_N,A_N] where (A_N) is a strictly positive sequence such that A_N →∞ as N→∞. Consequently, we obtain that the estimation interval tends to as the sample size N tends to infinity. Define from the observations and for each dimension m∈ℳ, the following matrices: Ψ_m:=(1/Nn∑_j=1^N∑_k=0^n-1ϕ_ℓ(X^j_kΔ)ϕ_ℓ^'(X^j_kΔ))_0≤ℓ,ℓ^'≤ m-1, Ψ_m:=𝔼(Ψ_m)=([1/n∑_k=0^n-1ϕ_ℓ(X_kΔ)ϕ_ℓ^'(X_kΔ)])_0≤ℓ,ℓ^'≤ m-1. These two matrices play an essential role in the construction of a consistent projection estimator of σ^2 over any approximation subspace 𝒮_m spanned by the basis (ϕ_0,⋯,ϕ_m-1). Furthermore, for all h=∑_ℓ=0^m-1a_ℓϕ_ℓ∈𝒮_m, we have: h^2_n,N = ^t𝐚Ψ_m𝐚, h^2_n = 𝔼(h^2_n,N) = ^t𝐚Ψ_m𝐚, where 𝐚=(a_0,⋯,a_m-1). The Gram matrix Ψ_m is invertible under the spline basis (see <cit.>) and the Hermite basis (see <cit.>). We define for any invertible matrix M, the operator norm M^-1_op of M^-1 given by M^-1_op=1/inf{λ_j} where the λ_j are eigenvalues of M. For all dimension m∈ℳ, the matrices Ψ_m and 𝐅_m satisfy: Ψ_m= ^t𝐅_m𝐅_m. Consider the ridge estimator σ^2_m of σ^2_A_N = σ^2_[-A_N,A_N], with m∈ and A_N →∞ as N →∞. The estimator σ^2_m can reach a faster rate of convergence if the Gram matrix Ψ_m given in Equation (<ref>) satisfies the following condition, ℒ(m)(Ψ^-1_m_op∨ 1)≤ CN/log^2(N),   where  ℒ(m):=x∈ℝsup∑_ℓ=0^m-1ϕ^2_ℓ(x)<∞ where C>0 is a constant. In fact, the optimal rate of convergence is achieved on a random event Ω_n,N,m in which the two empirical norms ._n,N and ._n are equivalent (see  <cit.>, <cit.>). Then, Condition (<ref>)  is used to upper-bound (Ω^c_n,N,m) by a negligible term with respect to the considered rate (see <cit.>). Note that in Equation (<ref>), the square on log(N) is justified by the fact that the value of constant C>0 is unknown, and that the spline basis is not othonormal (see <cit.>, proof of Lemma 7.8). The assumption of Equation (<ref>) is also made in <cit.> on the operator norm of Ψ^-1_m based on an orthonormal basis with the bound 𝐜N/log(N) where the value of 𝐜 is known, and chosen and such that the upper-bound of (Ω^c_n,N,m) is negligible with respect to the estimation error. In our framework, since the transition density is approximated by Gaussian densities, we derive the following result. Suppose that n ∝ N and that the spline basis is constructed on the interval [-A_N,A_N] with A_N > 0. Under Assumption <ref> , for all m∈ and for all w∈^m such that w_2,m=1, there exists a constant C>0 such that For  [𝐇]:          w^'Ψ_mw ≥C/log(N)exp(-3c_σ(4m+3)/2(1-log^-1(N))), For  [𝐁]:          w^'Ψ_mw ≥CA_N/mlog(N)exp(-c_σA^2_N), where the constant c_σ>1 that comes from the approximation of the transition density, depends on the diffusion coefficient σ. The result of Lemma <ref>  implies for the Hermite basis that (Ψ^-1_m_op∨ 1)≤log(N)/Cexp(3c_σ(4m+3)/2(1-log^-1(N))) where the upper-bound is an exponentially increasing sequence of N since the dimension m∈ has a polynomial growth with respect to N. Thus, Condition (<ref>)  cannot be satisfied for the Hermite basis in our framework. Considering the spline basis, one has ℒ(m)=ℒ(K+M)≤ 1 and there exists a constant C>0 such that Ψ^-1_m_op≤ Cmlog(N)/A_Nexp(c_σA^2_N). For K ∝(N^2/(2β+1)A_N), Condition (<ref>) is satisfied if the estimation interval [-A_N,A_N] is chosen such that A_N = o (√(log(N))). 
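The quantities introduced above can be monitored numerically: the sketch below computes the empirical Gram matrix Ψ_m from the stacked basis evaluations and an indicative version of Condition (<ref>), with the empirical matrix in place of its expectation and an arbitrary value of the unknown constant C. It is a diagnostic heuristic only, and the function names are ours.

```python
import numpy as np

def empirical_gram(Phi):
    # Phi: (Nn, m) matrix of basis evaluations; returns (1/(Nn)) * Phi^T Phi
    return Phi.T @ Phi / Phi.shape[0]

def stability_check(Psi_hat, L_m, N, C=1.0):
    # indicative check of  L(m) * (||Psi_m^{-1}||_op or 1) <= C N / log^2(N),
    # with ||Psi_m^{-1}||_op = 1 / (smallest eigenvalue of Psi_m); C is unknown in theory
    lam_min = np.linalg.eigvalsh(Psi_hat)[0]
    op_norm_inv = np.inf if lam_min <= 0 else 1.0 / lam_min
    return L_m * max(op_norm_inv, 1.0) <= C * N / np.log(N) ** 2

# example with the spline basis, for which L(m) = sup_x sum_l B_l(x)^2 <= 1
# Psi_hat = empirical_gram(Phi); ok = stability_check(Psi_hat, L_m=1.0, N=1000)
```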
In the next theorem, we prove that the spline-based ridge estimator of σ^2_A_N reaches a faster rate of convergence compared to the result of Corollary <ref> for the collection [𝐁]. Suppose that N ∝ n and consider the ridge estimator σ^2_A_N,m of σ^2_A_N based on the spline basis. Furthermore, suppose that L = log(N), A_N = o(√(log(N))) and K ∝ (Nn)^1/(2β+1)A_N (m = K + M). Under Assumptions <ref> and for σ^2∈Σ_I(β,R) with I = [-A_N, A_N], the following holds: [σ^2_A_N,m-σ^2_A_N^2_n,N] ≤ Clog^β(N)(Nn)^-2β/(2β+1) where C>0 is a constant depending on β. The above result shows that the risk of the ridge estimator of σ^2_A_N on [-A_N,A_N] reaches a rate of order (Nn)^-β/(2β+1) (up to a log-factor) thanks to Condition (<ref>), which allows us to take advantage of the equivalence relation between the empirical norms ._n and ._n,N given in Equation (<ref>) to derive a finer estimation error (see proof of Theorem <ref>). Note that the obtained result depends on an appropriate choice of the estimation interval [-A_N,A_N], which tends to ℝ as N tends to infinity. Therefore, any choice of A_N such that A_N/√(log(N))⟶ +∞ cannot lead to a consistent estimation error since Equation (<ref>) is no longer satisfied for the upper-bounding of (Ω^c_n,N,m) by a term that tends to zero as N →∞. Thus, the assumption A_N = o(√(log(N))) is a necessary and sufficient condition for the validation of Condition (<ref>), which leads, together with Assumption <ref>, to the result of Theorem <ref>. Finally, under the assumptions of Theorem <ref> and considering the risk of σ^2_A_N,m based on the empirical norm ._n, we also obtain [σ^2_A_N,m-σ^2_A_N^2_n] = O(log^β(N)(Nn)^-2β/(2β+1)). In fact, under Condition (<ref>), the estimator σ^2_A_N,m satisfies the results of Theorem <ref> with I = [-A_N,A_N] and A_N = o(√(log(N))), which implies rates of the same order for the two empirical norms. § ADAPTIVE ESTIMATION OF THE DIFFUSION COEFFICIENT FROM REPEATED OBSERVATIONS In this section, we suppose that n ∝ N and we propose an adaptive ridge estimator of σ^2 by selecting an optimal dimension from the sample D_N. In fact, consider the estimator σ^2_K,L where K satisfies: K:=K∈𝒦min{γ_n,N(σ^2_K)+pen(K)} and the penalty function pen : K↦pen(K) is established using the chaining technique of <cit.>. We derive below the risk of the adaptive estimator of σ^2_|I when the interval I⊂ℝ is compact and the sample size N →∞. Suppose that N ∝ n,   L=log(N) and consider the collection [B] with K ∈𝒦 = {2^q,  q=0,1,…,q_max}⊂ℳ = {1,…,√(N)/log(N)}. Under Assumption <ref>, there exists a constant C>0 such that, 𝔼[σ^2_K,L-σ^2_|I^2_n,N]≤ 34K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)}+C/Nn where pen(K) = κ(K+M)log(N)/Nn with κ > 0 a numerical constant. We deduce from Corollary <ref> and its assumptions that the adaptive estimator σ^2_K,L satisfies: 𝔼[σ^2_K,L-σ^2_|I^2_n] = O((Nn)^-2β/(2β+1)). This result is justified since the penalty term is of the same order (up to a log-factor) as the estimation error established in Theorem <ref>. Considering the adaptive estimator of σ^2 on the real line I=ℝ when the sample size N →∞, we obtain the following result. Suppose that N ∝ n and L = log(N), and consider the collection [𝐁] with K ∈𝒦 = {2^q,  q=0,1,…,q_max}⊂ℳ = {1,…,√(N)/log(N)}. Under Assumption <ref> and for N large enough, there exists a constant C>0 such that, [σ^2_K,L - σ^2^2_n,N] ≤ 3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2^2_n + pen(K)} + C/Nn, where pen(K) = κ^'(K+M)log(N)/Nn with κ^'>0 a numerical constant.
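In practice, the selection of K amounts to computing the contrast of each estimator of the collection and adding the penalty of the theorems above; a minimal Python sketch is given below for the repeated-paths setting (N>1). The unconstrained least squares step and the illustrative default value of κ are our simplifications; they do not reproduce the exact procedure used in the numerical study of Section <ref>.

```python
import numpy as np

def select_K(U, Phi_by_K, N, n, M=3, kappa=4.0):
    # hat K = argmin_K { gamma_{n,N}(hat sigma^2_K) + kappa (K + M) log(N) / (N n) }
    # U        : vector of length N*n of pseudo-increments
    # Phi_by_K : dict K -> (N*n, K+M) design matrix of the spline basis of dimension K+M
    crit = {}
    for K, Phi in Phi_by_K.items():
        a = np.linalg.lstsq(Phi, U, rcond=None)[0]     # (ell^2 constraint omitted for brevity)
        gamma = np.mean((U - Phi @ a) ** 2)            # least squares contrast gamma_{n,N}
        crit[K] = gamma + kappa * (K + M) * np.log(N) / (N * n)
    return min(crit, key=crit.get)
```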
We have a penalty term of the same order as the one obtained in Theorem <ref> where σ^2 is estimated on a compact interval. One can deduce that the adaptive estimator reaches a rate of the same order as that of the non-adaptive estimator given in Corollary <ref> for the collection [𝐁]. If we consider the adaptive estimator of the compactly supported diffusion coefficient built from a single diffusion path, we obtain below an upper-bound of its risk of estimation. Suppose that N = 1,   L = √(log(n)) and consider the collection [𝐁] with K ∈𝒦 = {2^q,  q=0,…,q_max}⊂ℳ = {1,…,√(n)/log(n)}. Under Assumption <ref>, it yields 𝔼[σ^2_K,L-σ^2_|I^2_n,1] ≤   3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)} + C/n, where C>0 is a constant depending on τ_0, and pen(K) = κ(K+M)log(n)/n with κ>0 a numerical constant. We deduce from Theorem <ref> that if we assume that σ^2 ∈Σ_I(β,R), then the adaptive estimator σ^2_K,L reaches a rate of order n^-β/(2β+1) (up to a log-factor). The result of this theorem is almost a deduction of the result of Theorem <ref>, the slight difference being the use, in the proofs, of the local time of the process and of the equivalence relation between the pseudo-norm ._n,1 and the pseudo-norm ._X, instead of the empirical norm ._n considered in the proof of Theorem <ref>. § NUMERICAL STUDY This section is devoted to the numerical study on a simulation scheme. Section <ref> focuses on the presentation of the chosen diffusion models. In Section <ref>, we describe the scheme for the implementation of the ridge estimators. We mainly focus on the B-spline basis for the numerical study, and in Section <ref>, we add a numerical study on the performance of the Hermite-based ridge estimator of σ^2 on ℝ. Finally, we compare the efficiency of our estimator built on the real line from a single path with that of the Nadaraya-Watson estimator proposed in <cit.>. §.§ Models and simulations Recall that the time horizon is T=1 and X_0 = 0. Consider the following diffusion models: * Model 1 (Ornstein-Uhlenbeck): b(x) = 1-x,   σ(x)= 1 * Model 2: b(x) = 1-x,   σ(x) = 1-x^2 * Model 3: b(x) = 1-x,   σ(x) = 1/(3+sin(2π x))+cos^2((π/2)x) Model 1 is the commonly used Ornstein-Uhlenbeck model, known to be a simple diffusion model satisfying Assumption <ref>. Model 2 does not satisfy Assumption <ref>. Model 3 satisfies Assumption <ref> with a multimodal diffusion coefficient. The size N of the sample D_N takes values in the set {1,10,100,1000}, while the length n of paths varies in the set {100,250,500,1000}. As we work with the spline basis, the dimension m=K+M of the approximation space is chosen such that M=3 and K takes values in 𝒦={2^p, p=0,⋯,5}, so that the subspaces are nested inside each other. The diffusion paths are simulated with the function of the package (see <cit.> for more details on the simulation of SDEs). §.§ Implementation of the ridge estimators In this section, we assess the quality of estimation of the adaptive estimator σ^2_m in each of the 3 models through the computation of its risk of estimation. We compare the performance of the adaptive estimator with that of the oracle estimator σ^2_m^*, where m^* is given by: m^*:=m∈ℳmin σ^2_m-σ^2^2_n,N. For the spline basis, we have m^* = K^* + M with M=3. Finally, we complete the numerical study with a representation of a set of 10 estimators of σ^2 for each of the 3 models.
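The sketch below generates discrete paths of Models 1–3 with an Euler-Maruyama scheme. This scheme and the reading of the diffusion coefficient of Model 3 as σ(x) = 1/(3+sin(2πx)) + cos^2((π/2)x) are our assumptions; the sketch is not the R routine actually used for the simulations.

```python
import numpy as np

def simulate_paths(model, N, n, x0=0.0, seed=0):
    # Euler-Maruyama simulation of N discrete paths (X_{k/n})_{0 <= k <= n} on [0, 1],
    # with drift b(x) = 1 - x for the three models of the numerical study
    rng = np.random.default_rng(seed)
    if model == 1:                                   # Ornstein-Uhlenbeck
        sigma = lambda x: np.ones_like(x)
    elif model == 2:
        sigma = lambda x: 1.0 - x ** 2
    else:                                            # Model 3 (assumed reading of sigma)
        sigma = lambda x: 1.0 / (3.0 + np.sin(2 * np.pi * x)) + np.cos(np.pi * x / 2.0) ** 2
    dt = 1.0 / n
    X = np.empty((N, n + 1))
    X[:, 0] = x0
    for k in range(n):
        dW = rng.normal(scale=np.sqrt(dt), size=N)
        X[:, k + 1] = X[:, k] + (1.0 - X[:, k]) * dt + sigma(X[:, k]) * dW
    return X

# example: a sample D_{N,n} of N = 100 paths observed at n = 250 time points
paths = simulate_paths(model=1, N=100, n=250)
```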
We evaluate the MISE of the spline-based adaptive estimators σ^2_K by repeating 100 times the following steps: * Simulate samples D_N,n and D_N^',n with N∈{1,10,100,1000}, N^'=100 and n ∈{100, 250,1000}. * For each K∈𝒦, and from D_N,n, compute estimators σ^2_K given in Equations (<ref>) and (<ref>). * Select the optimal dimension K∈𝒦 using Equation (<ref>) and compute K^* from Equation (<ref>) * Using D_N^',n, evaluate σ^2_K-σ^2^2_n,N^' and σ^2_K^*-σ^2^2_n,N^'. We deduce the risks of estimation considering the average values of σ^2_m-σ^2^2_n,N^' and σ^2_m^*-σ^2^2_n,N^' over the 100 repetitions. Note that we consider in this section, the estimation of σ^2 on the compact interval I = [-1,1] and on the real line . The unknown parameters κ and κ^' in the penalty functions given in Theorem <ref> and Theorem <ref> respectively, are numerically calibrated (details are given in Appendix <ref>), and we choose κ = 4 and κ^' = 5 as their respective values. §.§ Numerical results We present in this section the numerical results of the performance of the spline-based adaptive estimators of σ^2_|I with I ⊆ together with the performance of the oracle estimators. We consider the case I=[-1,1] for the compactly supported diffusion coefficient, and the case I=. Tables <ref> and <ref> present the numerical results of estimation of σ^2_|I from simulated data following the steps given in Section <ref>. The results of Table <ref> and Table <ref> show that the adapted estimator σ^2_K is consistent, since its MISE tends to zero as both the size N of the sample D_N,n and the length n of paths are larger. Moreover, note that in most cases, the ridge estimators of the compactly supported diffusion coefficients perform better than those of the non-compactly supported diffusion functions. As expected, we observe that the oracle estimator has generally a better performance compared to the adaptive estimator. Nonetheless, we can remark that the performances are very close in several cases, highlighting the efficiency of the data-driven selection of the dimension. An additional important remark is the significant influence of the length n of paths on the performance of σ^2_K and σ^2_K^*,L (by comparison of Table <ref> with Table <ref>), which means that estimators built from higher frequency data are more efficient. A similar remark is made for theoretical results obtained in Sections <ref> and <ref>. Performance of the Hermite-based estimator of the diffusion coefficient We focus on the estimation of σ^2 on and assess the performance of its Hermite-based estimator (see Section <ref>). We present in Table <ref>, the performance of the oracle estimator σ^2_m^*,L. From the numerical results of Table <ref>, we observe that the Hermite-based estimator of σ^2 is consistent as the sample size N and the length n paths take larger values. Estimation of the diffusion coefficient from one path Consider ridge estimators of σ^2_|I with I=[-1,1]. For the case of the adaptive estimators of σ^2_|I, the dimension K is selected such that K = K∈𝒦minγ_n(σ^2_K) + pen(K) where pen(K) = κ(K+M)log(n)/n with κ >0. We choose the numerical constant κ = 4 and we derive the numerical performance of the adaptive estimator of σ^2_|I. Table <ref> gives the numerical performances of both the adaptive estimator and the oracle estimator of σ^2_|I on the compact interval I=[-1,1] and from a single diffusion path. From the obtained results, we see that the estimators are numerically consistent. 
However, we note that the convergence is slow (when increasing n from 100 to 1000), which highlights the significant impact of the number N of paths on the efficiency of the ridge estimator. Comparison of the efficiency of the ridge estimator of the diffusion coefficient with its Nadaraya-Watson estimator. Consider the adaptive estimator σ^2_K of the square of the diffusion coefficient built on the real line from a single diffusion path (N=1), where the dimension K is selected using Equation (<ref>). For the numerical assessment, we use the interval I = [-10^6, 10^6] to approximate the real line ℝ, and then use Equation (<ref>) for the data-driven selection of the dimension. We want to compare the efficiency of σ^2_K with that of the Nadaraya-Watson estimator of σ^2 given, from a diffusion path X̅ = (X_k/n)_1≤ k≤ n and for all x ∈ℝ, by S_n(x) = ∑_k=1^n-1K(X_k/n - x/h_n) n[X_(k+1)/n - X_k/n]^2/∑_k=1^nK(X_k/n - x/h_n) where K is a positive kernel function, and h_n is the bandwidth. The estimator S_n(x) is consistent under the condition nh^4_n→ 0 as n tends to infinity (see <cit.>). We use the function of the R-package to compute the Nadaraya-Watson estimator S_n. We remark from the results of Table <ref> that our ridge estimator is more efficient. Note that for the kernel estimator S_n, the bandwidth is computed using the rule of thumb of Scott (see <cit.>). The bandwidth is proportional to n^-1/(d+4), where n is the number of points and d is the number of spatial dimensions. §.§ Concluding remarks The results of our numerical study show that our ridge estimators built both on a compact interval and on the real line are consistent as N and n take larger values, or as only n takes larger values when the estimators are built from a single path. These results are in accordance with the theoretical results established in the previous sections. Moreover, as expected, we obtained the consistency of the Hermite-based estimators of σ^2 on the real line ℝ. Nonetheless, we only focus on the Hermite-based oracle estimator since we did not establish a risk bound of the corresponding adaptive estimator. Finally, we remark that the ridge estimator of σ^2 built from a single path performs better than the Nadaraya-Watson kernel estimator proposed in <cit.> and implemented in the R-package. § CONCLUSION In this article, we have proposed ridge-type estimators of the diffusion coefficient on a compact interval from a single diffusion path. We took advantage of the local time of the diffusion process to prove the consistency of non-adaptive estimators of σ^2 and to derive a rate of convergence of the same order as the optimal rate established in <cit.>. We also proposed an estimator of σ^2 on the real line from a single path. We proved its consistency using the method described in Section <ref>, and derived a rate of convergence of order n^-β/(4β+1) over a Hölder space for the collection [𝐁]. Then, we extended the study to the estimation of σ^2 from repeated discrete observations of the diffusion process. We established rates of convergence of the ridge estimators both on a compact interval and on ℝ. We completed the study by proposing adaptive estimators of σ^2 on a compact interval for N=1 and N→∞, and on the real line for N→∞. A perspective on the estimation of the diffusion coefficient could be the establishment of a minimax rate of convergence of the compactly supported (square of the) diffusion coefficient from repeated discrete observations of the diffusion process.
The case of the non-compactly supported diffusion coefficient may be a lot more challenging, since the transition density of the diffusion process is no longer lower-bounded. This new fact can lead to different rates of convergence depending on the considered method (see Section <ref>). § ACKNOWLEDGEMENTS I would like to thank my supervisors, Christophe Denis, Charlotte Dion-Blanc, and Viet-Chi Tran, for their sound advice, guidance and support throughout this research project. Their experience in scientific research and their expertise in stochastic calculus and process statistics were decisive in providing precise and relevant answers to the issues raised in this paper, taking into account what has already been done in the literature. I am particularly grateful for their precise and constant help throughout the writing of this article, from editorial advice to proofreading the introduction, the proofs and all other sections of the paper. § PROOFS In this section, we prove our main results of Sections <ref>, <ref> and <ref>. To simplify our notations, we set Δ_n = Δ(=1/n) and constants are generally denoted by C>0 or c>0 whose values can change from a line to another. Moreover, we use the notation C_α in case we need to specify the dependency of the constant C on a parameter α. §.§ Technical results Recall first some useful results on the local time and estimates of the transition density of diffusion processes. For all integer q≥ 1, there exists C^*>0 depending on q such that for all 0≤ s<t≤ 1, [|X_t-X_s|^2q]≤ C^*(t-s)^q. The proof of Lemma <ref> is provided in <cit.>. Under Assumptions <ref>, there exist constants c_σ >1, C > 1 such that for all t ∈ (0,1], x ∈ℝ, 1/C√(t)exp(-c_σx^2/t) ≤ p_X(t,x) ≤C√(t)exp(-x^2/c_σt). The proof of Proposition <ref> is provided in <cit.>, Proposition 1.2. Let h be a L_0-lipschitz function. Then there exists h̃∈𝒮_K_N,M, such that |h̃(x)-h(x)| ≤ C log(N)/K_N, ∀ x ∈ (-log(N),log(N)), where C >0 depends on L_0, and M. The proof of Proposition <ref> is provided in <cit.>. The finite-dimensional vector space 𝒮_K_N,M = 𝒮_K_N+M is introduced in Section <ref>. Under Assumption <ref>, there exist C_1,C_2 >0 such that for all A >0, sup_t ∈ [0,1](|X_t|≥ A) ≤C_1/Aexp(-C_2A^2). The proof of Lemma <ref> is provided in <cit.>, Lemma 7.3. Under Assumption <ref>, the following holds: ∀ x∈,   ℒ^x = ℒ^x_-   a.s. where ℒ^x_- = ε→ 0limℒ^x-ε. The result of Lemma <ref> justifies the definition of the local time ℒ^x, for x∈, given in Equation (<ref>). From <cit.>, Theorem 1.7, we have ∀ x∈,   ℒ^x - ℒ^x_- = 2∫_0^1_X_s = xdX_s = 2∫_0^1_X_s = xb(X_s)ds + 2∫_0^1_X_s = xσ(X_s)dW_s. For all x∈ and for all s∈[0,1], we have for all ε>0, (X_s = x) =  ε→ 0lim (X_s≤ x + ε) - ε→ 0lim (X_s≤ x - ε) = ε→ 0lim F_s(x + ε) - ε→ 0lim F_s(x - ε) =   F_s(x) - F_s(x^-) =   0 Thus, for all x∈, [|ℒ^x - ℒ^x_-|] ≤   2∫_0^1|b(x)|(X_s = x)ds + 2[|∫_0^1_X_s=xσ(X_s)dW_s|] =   2[|∫_0^1_X_s=xσ(X_s)dW_s|]. Using the Cauchy Schwartz inequality, we conclude that [|ℒ^x - ℒ^x_-|] ≤   2√((∫_0^1_X_s=xσ^2(X_s)ds)) = 2σ(x)∫_0^1(X_s = x)ds = 0. Using the Markov inequality, we have ∀ ε>0,   (|ℒ^x - ℒ^x_-|>ε) ≤1/ε[|ℒ^x - ℒ^x_-|] = 0. We finally conclude that for all x ∈, (ℒ^x≠ℒ^x_-) = (|ℒ^x - ℒ^x_-|>0) = 0. §.§ Proofs of Section <ref>  §.§.§ Proof of Lemma <ref> The proof is divided into two parts for each of the two results to be proven. First result. Since the function h is continuous on , let H be a primitive of h on . 
We deduce that for all s ∈ [0,1], h(X_s) = ε→ 0limH(X_s + ε) - H(X_s - ε)/2ε = ε→ 0lim1/2ε∫_X_s - ε^X_s + εh(x)dx = ε→ 0lim1/2ε∫_-∞^+∞h(x)_(x-ε,x+ε)(X_s)dx. Finally, since h is integrable on and using the theorem of dominated convergence, we obtain ∫_0^1h(X_s)ds = ∫_-∞^+∞h(x)ε→ 0lim1/2ε∫_0^1_(x-ε,x+ε)(X_s)dsdx = ∫_-∞^+∞h(x)ℒ^xdx. Second result. Fix t∈(0,1] and consider P_X : (t,x) ↦∫_-∞^xp_X(t,y)dy the cumulative density function of the random variable X_t of the density function x ↦ p_X(t,x). We have: ∀ x∈,   (ℒ^x) = ε→ 0lim1/2ε∫_0^1[_(x-ε,x+ε)(X_s)]ds = ε→ 0lim1/2ε∫_0^1(x - ε≤ X_s≤ x + ε)ds = ∫_0^1ε→ 0limP_X(s,x+ε) - P_X(s,x-ε)/2εds = ∫_0^1p_X(s,x)ds. §.§.§ Proof of Theorem <ref>  Let Ω_n,m be the random event in which the two pseudo-norms ._n,1 and ._X are equivalent and given by Ω_n,m := g∈𝒮_m∖{0}⋂{|g^2_n,1/g^2_X-1| ≤1/2}. The proof of Theorem <ref> relies on the following lemma. Let γ > 1 be a real number. Under Assumption <ref>, the following holds (Ω^c_n,m) ≤ Cm^2γ/n^γ/2, where C>0 is a constant depending on γ. The parameter γ > 1 has to be chosen appropriately (i.e. such that m^2γ/n^γ/2 = o(1/n)) so that we obtain a variance term of the risk of the estimator σ^2_m of order mlog(n)/n (see Theorem <ref> and Corollary <ref>). Recall that since N = 1, ζ^1_kΔ=ζ^1,1_kΔ+ζ^1,2_kΔ+ζ^1,3_kΔ is the error term of the regression model, with: ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds], ζ^1,2_kΔ=2/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW^1_s, ζ^1,3_kΔ=2b(X^1_kΔ)∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s. Besides, R^1_kΔ =R^1,1_kΔ+R^1,2_kΔ, with: R^1,1_kΔ=1/Δ(∫_kΔ^(k+1)Δb(X^1_s)ds)^2+1/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)Φ(X^1_s)ds R^1,2_kΔ=2/Δ(∫_kΔ^(k+1)Δ(b(X^1_s)-b(X^1_kΔ))ds)(∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s) where Φ:=2bσ^'σ+[σ^''σ+(σ^')^2]σ^2. By definition of the projection estimator σ^2_m for each m∈ℳ (see Equation (<ref>)), for all h∈𝒮_m,L, we have: γ_n,1(σ^2_m)-γ_n,1(σ^2_|I)≤γ_n,1(h)-γ_n,1(σ^2_|I). Furthermore, for all h∈𝒮_m,L, γ_n,1(h)-γ_n,1(σ^2_|I)=σ^2_|I-h^2_n,1+2ν_1(σ^2_|I-h)+2ν_2(σ^2_|I-h)+2ν_3(σ^2_|I-h)+2μ(σ^2_|I-h), where, ν_i(h) = 1/n∑_k=0^n-1h(X^1_kΔ)ζ^1,i_kΔ, i∈{1,2,3}, μ(h)=1/n∑_k=0^n-1h(X^1_kΔ)R^1_kΔ, and ζ^1,1_kΔ, ζ^1,2_kΔ, ζ^1,3_kΔ are given in Equations (<ref>), (<ref>), (<ref>), and finally, R^1_kΔ = R^1,1_kΔ+R^1,2_kΔ given in Equations (<ref>) and (<ref>). Then, for all m ∈ℳ, and for all h ∈𝒮_m,L, we obtain from Equation (<ref>) that σ^2_m-σ^2_|I^2_n,1≤h-σ^2_|I^2_n,1+2ν(σ^2_m-h)+2μ(σ^2_m-h), with ν=ν_1+ν_2+ν_3. Then, it comes, 𝔼[σ^2_m-σ^2_|I^2_n,1] ≤h∈𝒮_m,Linfh-σ^2_|I^2_n+2𝔼[ν(σ^2_m-h)]+2𝔼[μ(σ^2_m-h)]. Besides, for any a,d>0, using the inequality xy ≤η x^2 + y^2/η with η = a, d, we have, 2ν(σ^2_m-h) ≤2/aσ^2_m-σ^2_|I^2_X+2/ah-σ^2_|I^2_X+ah∈𝒮_m, h_X=1supν^2(h), 2μ(σ^2_m-h) ≤2/dσ^2_m-σ^2_|I^2_n,1+2/dh-σ^2_|I^2_n,1+d/n∑_k=1^n(R^1_kΔ)^2. §.§.§ Upper bound of 1/n∑_k=1^n(R^1_kΔ)^2 We have: ∀ k∈[[1,n]], R^1_kΔ=R^1,1_kΔ+R^1,2_kΔ+R^1,3_kΔ with,   R^1,1_kΔ=1/Δ(∫_kΔ^(k+1)Δb(X^1_s)ds)^2, R^1,2_kΔ=1/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)Φ(X^1_s)ds   R^1,3_kΔ=2/Δ(∫_kΔ^(k+1)Δ(b(X^1_s)-b(X^1_kΔ))ds)(∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s). For all k∈[[1,n]], using the Cauchy-Schwarz inequality and Equation (<ref>), 𝔼[|R^1,1_kΔ|^2] ≤𝔼[(∫_kΔ^(k+1)Δb^2(X^1_kΔ)ds)^2]≤Δ𝔼[∫_kΔ^(k+1)Δb^4(X^1_kΔ)ds]≤ CΔ^2. Consider now the term R^1,2_kΔ. From Equation (<ref>), we have Φ=2bσ^'σ+[σ^''σ+(σ^')^2]σ^2 and according to Assumption <ref>, there exists a constant C>0 depending on σ_1 and α such that |Φ(X^1_s)| ≤ C[(2+|X^1_s|)(1+|X^1_s|^α) + (1+|X^1_s|^α)^2]. 
Then, from Equation (<ref>) and for all s∈(0,1], [Φ^2(X^1_s)] ≤ Cs∈(0,1]sup[(2+|X^1_s|)^2(1+|X^1_s|^α)^2 + (1+|X^1_s|^α)^4] < ∞ and 𝔼[|R^1,2_kΔ|^2] ≤1/Δ^2∫_kΔ^(k+1)Δ((k+1)Δ-s)^2ds∫_kΔ^(k+1)Δ𝔼[Φ^2(X^1_s)]ds≤ CΔ^2 Finally, under Assumption <ref>, from Equation (<ref>) and using the Cauchy-Schwarz inequality, we have 𝔼[|R^1,3_kΔ|^2] ≤4/Δ^2𝔼[Δ∫_kΔ^(k+1)ΔL^2_0|X^1_s-X^1_kΔ|^2ds(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2] ≤4/Δ√(𝔼[L^4_0Δ∫_kΔ^(k+1)Δ|X^1_s-X^1_kΔ|^4ds]𝔼[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^4]) ≤ CΔ^2. As a result, there exists a constant C>0 such that, 𝔼[1/n∑_k=1^n(R^1_kΔ)^2]≤ CΔ^2. We set a = d = 8 and considering the event Ω_n,m on which the empirical norms ._X and ._n,1 are equivalent, we deduce from Equations (<ref>), (<ref>) and (<ref>) that, 𝔼[σ^2_m-σ^2_|I^2_n,1_Ω_n,m]≤ 3h∈𝒮_minfh-σ^2_|I^2_n+C𝔼(h∈𝒮_m, h_X=1supν^2(h))+CΔ^2 where C>0 is a constant depending on σ_1. §.§ Upper bound of 𝔼(h∈𝒮_m, h_X=1supν^2(h)) For all h=∑_ℓ=0^m-1a_ℓϕ_ℓ∈𝒮_m such that h^2_X=1, we have h^2≤1/τ_0 (see Equation (<ref>)) and the coordinate vector 𝐚 = (a_-M,⋯,a_K-1) satisfies: * 𝐚^2_2≤ Cm    (m = K+M) for the spline basis (see <cit.>, Lemma 2.6) * 𝐚^2_2≤ 1/τ_0 for an orthonormal basis since h^2 = 𝐚^2_2. Furthermore, using the Cauchy-Schwarz inequality, we have: ν^2(h)=(∑_ℓ=0^m-1a_ℓν(ϕ_ℓ))^2≤𝐚^2_2∑_ℓ=0^m-1ν^2(ϕ_ℓ). Thus, since ν=ν_1+ν_2+ν_3, for all ℓ∈[[-M,K-1]] and for all i∈{1,2,3}, 𝔼[ν^2_i(ϕ_ℓ)]=  1/n^2𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,i_kΔ)^2]. * Case i=1 Recall that ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds] where W=W^1. We fix a initial time s∈[0,1) and set M^s_t=∫_s^tσ(X^1_u)dW_u, ∀ t≥ s. (M^s_t)_t≥ s is a martingale and for all t∈[s,1], we have: <M^s,M^s>_t=∫_s^tσ^2(X^1_u)du. Then, ζ^1,1_kΔ=1/Δ(M^kΔ_(k+1)Δ)^2-<M^kΔ,M^kΔ>_(k+1)Δ is also a ℱ_kΔ-martingale, and, using the Burkholder-Davis-Gundy inequality, we obtain for all k∈[[0,n-1]], 𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0, 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤C/Δ^2𝔼[(∫_kΔ^(k+1)Δσ^2(X^1_u)du)^2]≤ Cσ^4_1. Then, using Equation (<ref>) we have: 𝔼[ν^2_1(ϕ_ℓ)] =  1/n^2𝔼[∑_k=0^n-1ϕ^2_ℓ(X^1_kΔ)(ζ^1,1_kΔ)^2]=1/n^2𝔼[∑_k=0^n-1ϕ^2_ℓ(X^1_kΔ)𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]] ≤  Cσ^4_1/n^2𝔼[∑_k=0^n-1ϕ^2_ℓ(X^1_kΔ)] and, ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤Cσ^4_1/n^2𝔼[∑_k=0^n-1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)]. One has: ∑_ℓ=-M^K-1B^2_ℓ(X^1_η(s))≤ 1   for the Spline basis   (m = K + M), ∑_ℓ = 0^m-1ϕ^2_ℓ(X^1_η(s))≤ Cm   for an orthonormal basis with   C = 0 ≤ℓ≤ m-1maxϕ_ℓ^2_∞. Thus, it comes that * ∑_ℓ=-M^K-1𝔼[ν^2_1(B_ℓ)]≤ C/n   for the Spline basis, * ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤ Cm/n   for an orthonormal basis, and, 𝔼(h∈𝒮_m, h^2_X=1supν^2_1(h))≤ Cm/n where C>0 is a constant depending on σ_1 and the basis. * Case i=2 Wa have ζ^1,2_kΔ=2/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s and, 𝔼[ν^2_2(ϕ_ℓ)] =  4𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)∫_kΔ^(k+1)Δ(k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2] =  4𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))(η(s)+Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2] ≤  Cσ^4_1Δ^2𝔼[∫_0^1ϕ^2_ℓ(X^1_η(s))ds] where C>0 is a constant. We deduce for both the spline basis and any orthonormal basis that there exists a constant C>0 depending on σ_1 such that: 𝔼(h∈𝒮_m, h^2_X=1supν^2_2(h))≤ Cm/n^2. * Case i=3 We have ζ^1,3_kΔ=2b(X^1_kΔ)∫_kΔ^(k+1)Δσ(X^1_s)dW_s and, 𝔼[ν^2_3(ϕ_ℓ)] =  4/n^2𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))b(X^1_η(s))σ(X^1_s)dW_s)^2] ≤  4σ^2_1/n^2𝔼[∫_0^1ϕ^2_ℓ(X^1_η(s))b^2(X^1_η(s))ds] Since for all x∈ℝ, b^2(x)≤ C_0(1+x^2) and t∈[0,1]sup𝔼(|X_t|^2)<∞, there exists a constant C>0 depending on σ_1 such that: 𝔼(h∈𝒮_m, h^2_X=1supν^2_3(h))≤ Cm/n^2. 
We finally obtain from Equations (<ref>), (<ref>) and (<ref>) that there exists a constant C>0 depending on σ_1 such that: 𝔼(h∈𝒮_m, h^2_X=1supν^2(h))≤ Cm/n. We deduce from Equations (<ref>) and (<ref>) that there exists a constant C>0 depending on σ_1 such that, [σ^2_m - σ^2_|I^2_n,1_Ω_n,m] ≤ 3h∈𝒮_m,Linfσ^2_|I - h^2_n + C(m/n + Δ^2). For n large enough, we have σ^2_m - σ^2_|I^2_∞≤ 2mL since σ^2_m_∞≤√(mL). Then, from Lemma <ref> and for all m∈ℳ, there exists a constant C>0 depending on σ_1 such that 𝔼[σ^2_m-σ^2_|I^2_n,1] =𝔼[σ^2_m-σ^2_|I^2_n,1_Ω_n,m]+𝔼[σ^2_m-σ^2_|I^2_n,1_Ω^c_n,m] ≤𝔼[σ^2_m-σ^2_|I^2_n,1_Ω_n,m]+2mLℙ(Ω^c_n,m) ≤ 3h∈𝒮_m,Linfσ^2_|I - h^2_n + C(m/n + m^2γ+1L/n^γ/2 + Δ^2). Since the pseudo-norms ._n,1 and ._X are equivalent on the event Ω_n,m, then, using Lemma <ref>, there exists a constant C>0 depending on σ_1 such that 𝔼[σ^2_m-σ^2_|I^2_X] = 𝔼[σ^2_m-σ^2_|I^2_X_Ω_n,m] + 𝔼[σ^2_m-σ^2_|I^2_X_Ω^c_n,m] ≤ 8𝔼[σ^2_m-σ^2_|I^2_n,1] + 10h∈𝒮_minfσ^2_|I-h^2_n + 2mL(Ω^c_n,m) ≤ 34h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/n+m^2γ+1L/n^γ/2+Δ^2). Finally, since the estimator σ^2_m is built from a diffusion path X̅^1 independent of the diffusion process X, and from Equations (<ref>) and (<ref>), the pseudo-norm ._X depending on the process X and the empirical norm ._n are equivalent (∀ h∈𝕃^2(I),  h^2_n≤ (τ_1/τ_0)[h^2_X]), there exists a constant C>0 depending on σ_1, τ_0 and τ_1 such that 𝔼[σ^2_m-σ^2_|I^2_n] ≤ 34τ_1/τ_0h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/n+m^2γ+1L/n^γ/2+Δ^2). The proof of this Lemma mainly focus on the spline basis and the Fourier basis based on functions cos and sin which are Lipschitz functions. Thus, for all g = ∑_ℓ = 0^m-1a_ℓϕ_ℓ∈𝒮_m, |g^2_n,1 - g^2_X| ≤∫_0^1|g^2(X_η(s)) - g^2(X_s)|ds≤ 2g_∞∫_0^1|g(X_η(s)) - g(X_s)|ds. From Equation (<ref>), one has [g^2_X] ≥τ_0 g^2. Thus, if g^2_X = 1, then g^2≤ 1/τ_0, and we deduce for all g = ∑_ℓ=0^m-1a_ℓϕ_ℓ that there exists a constant C>0 such that * Spline basis: g_∞≤a_2 ≤ C√(m)   (see   <cit.>) * Fourier basis: g_∞≤ C√(m)   since g = a_2 and ∑_ℓ=0^m-1ϕ^2_ℓ = O(m). Moreover, each g∈𝒮_m such that g^2_X = 1 is the Lipschitz function with a Lipschitz coefficient L_g = O(m^3/2). For the spline basis, this result is obtained in <cit.>, proof of Lemma C.1 combined with Lemma 2.6. For the Fourier basis, for all x,y∈ I and using the Cauchy Schwarz inequality, we obtain |g(x) - g(y)| ≤ ∑_ℓ = 0^m - 1|a_ℓ|.|ϕ_ℓ(x) - ϕ_ℓ(y)| ≤ 2π m√(m)𝐚_2|x-y| ≤ 2π/τ_0m√(m)|x-y|. Back to Equation (<ref>), there exists a constant C>0 such that |g^2_n,1 - g^2_X| ≤ Cm^2∫_0^1|X_η(s) - X_s|ds We have: Ω^c_n,m = {ω∈Ω,  ∃ g∈𝒮_m∖{0},  |g^2_n,1/g^2_X-1| > 1/2}, and, using Equation (<ref>), we obtain g∈𝒮_m∖{0}sup|g^2_n,1/g^2_X-1| = g∈𝒮_m, g^2_X = 1sup|g^2_n,1-g_X|≤ Cm^2∫_0^1|X_η(s) - X_s|ds. Finally, using the Markov inequality, the Hölder inequality, Equation (<ref>), and Lemma <ref>, we conclude that (Ω^c_n,m) ≤  (Cm^2∫_0^1|X_η(s) - X_s|ds≥1/2) ≤   Cm^2γ∫_0^1[|X_η(s) - X_s|^γ]ds ≤   Cm^2γ/n^γ/2 with γ∈ (1,+∞). §.§.§ Proof of Theorem <ref> Since L=log^2(n), we have 𝔼[σ^2_m,L-σ^2^2_n,1] =  𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1] + 𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^c^2_n,1] ≤  𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1] + 2log^2(n)t∈(0,1]sup(|X_t|>log(n)). From Equation (<ref>) (Proof of Theorem <ref>), for all h∈𝒮_m,L, 𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1] ≤h∈𝒮_m,Linfh-σ^2^2_n+2∑_i=1^3𝔼[ν_i(σ^2_m-h)]+2𝔼[μ(σ^2_m-h)] where ν_i,   i=1,2,3 and μ are given in Equation (<ref>). For all i∈{1,2,3} and for all h∈𝒮_m,L, one has 𝔼[ν_i(σ^2_m,L-h)]≤√(2mlog^2(n))√(∑_ℓ=0^m-1𝔼[ν^2_i(ϕ_ℓ)]). 
* Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)] According to Equation (<ref>), we have ∀ℓ∈[[0,m-1]], ν_1(ϕ_ℓ)=1/n∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,1_kΔ where ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds] is a martingale satisfying 𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0 and 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤1/Δ^2𝔼[(∫_kΔ^(k+1)Δσ^2(X^1_s)ds)^2]≤ Cσ^4_1 with C>0 a constant, W=W^1 and (ℱ_t)_t≥ 0 the natural filtration of the martingale (M_t)_t∈[0,1] given for all t∈[0,1] by M_t=∫_0^tσ(X^1_s)dW_s. We derive that ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]= 1/n^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,1_kΔ)^2]=1/n^2𝔼[∑_k=0^n-1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)(ζ^1,1_kΔ)^2] since for all integers k, k^' such that k > k^'≥ 0, we have [ϕ_ℓ(X^1_kΔ)ζ^1,1_kΔϕ_ℓ(X^1_k^'Δ)ζ^1,1_k^'Δ|ℱ_kΔ] = ϕ_ℓ(X^1_kΔ)ζ^1,1_k^'Δϕ_ℓ(X^1_k^'Δ)[ζ^1,1_kΔ|ℱ_kΔ] = 0. For each k∈[[0,n-1]], we have ∑_ℓ=0^m-1ϕ_ℓ(X^1_kΔ) = ∑_ℓ=-M^K-1B_ℓ(X^1_kΔ) =1   for   the   spline   basis ∑_ℓ=0^m-1ϕ_ℓ(X^1_kΔ)≤ Cm   For   an   orthonormal   basis   with  C=0 ≤ℓ≤ m-1maxϕ_ℓ_∞. Finally, there exists a constant C>0 such that ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤C/n  for   the   spline   basis Cm/n  for   an   orthonormal   basis. * Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)] For all k∈[[0,n-1]] and for all s∈[0,1], set η(s)=kΔ if s∈[kΔ,(k+1)Δ). We have: ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)] =4∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2] =4∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))(η(s)+Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2]. We conclude that ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)]≤C/n^2  for   the   spline   basis Cm/n^2  for   an   orthonormal   basis. where the constant C>0 depends on the diffusion coefficient and the upper bound of the basis functions. * Upper bound of ∑_ℓ=0^m-1𝔼[ν^3_2(ϕ_ℓ)] We have: ∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)] =4/n^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)b(X^1_kΔ)σ(X^1_s)dW_s)^2] =4/n^2∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))b(X^1_η(s))σ(X^1_s)dW_s)^2] ≤4/n^2𝔼[∫_0^1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_η(s))b^2(X^1_η(s))σ^2(X^1_s)ds]. Since for all x∈ℝ, b(x)≤ C_0(1+x^2) and t∈[0,1]sup𝔼(|X_t|^4)<∞, there exists a constant C>0 depending on the diffusion coefficient such that ∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)]≤C/n^2  for   the   spline   basis Cm/n^2  for   an   orthonormal   basis. We finally deduce that from Equations (<ref>) and (<ref>)  that for all h∈𝒮_m,L, 𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n)+2𝔼[μ(σ^2_m,L-h)]    [B] 𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n)+2𝔼[μ(σ^2_m,L-h)]    [F] where C>0 is a constant. It remains to obtain an upper bound of the term μ(σ^2_m,L-h). For all a>0 and for all h∈𝒮_m,L, 2μ(σ^2_m,L-h) ≤ 2/aσ^2_m,L-σ^2^2_n,1+2/ah-σ^2^2_n,1+a/n∑_k=0^n-1(R^1_kΔ)^2 2𝔼[μ(σ^2_m,L-h)] ≤ 2/a𝔼σ^2_m,L-σ^2^2_n,1+2/ah∈𝒮_minfh-σ^2^2_n +a/n∑_k=0^n-1𝔼[(R^1_kΔ)^2]. Using Equations (<ref>), (<ref>) and setting a=4, we deduce that there exists constant C>0 depending on σ_1 such that, 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n) + 2log^2(n)t∈(0,1]sup(|X_t|>A_n)   [B] 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n) + 2log^2(n)t∈(0,1]sup(|X_t|>A_n)   [F]. From Proposition <ref>, t∈(0,1]sup(|X_t|>log(n))≤log^-1(n)exp(-clog^2(n)) with c>0 a constant. Then, we obtain from Equation (<ref>) that 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n)     [B] 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n)     [F]. §.§.§ Proof of Corollary <ref> We have under Assumption <ref> from Theorem <ref> that 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n)     [B] 𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n)     [F]. For [B]. We have m=K+M with M∈ℕ^* fixed. 
From Proposition <ref> and under Assumption <ref>, there exists a constant C>0 depending on β such that h∈𝒮_K+M,Linfh-σ^2^2_n≤ Clog^2β(n)K^-2β. Since K ∝ n^1/(4β+1), we obtain that [σ^2_K - σ^2^2_n,1] = O(log^2β(n)n^-2β/(4β+1)). For [F]. Under Assumptions <ref> and <ref> and From Lemma 12 in <cit.>, there exists a constant C>0 depending on τ_1 of Equation (<ref>) and the smoothness parameter β of the Besov space 𝐁^β_2,∞ such that h∈𝒮_m,Linfh-σ^2^2_n≤τ_1h∈𝒮_m,Linfh-σ^2^2≤ C|σ^2|^2_β m^-2β where |σ^2|_β is the semi-norm of σ^2 in the Besov space ℬ^β_2,∞([-log(n),log(n)]). Under Assumption <ref>, |σ^2|_β < ∞. Then, for m ∝ n^1/2(2β+1), the exists a constant C>0 depending on β, σ_1 and τ_1 such that [σ^2_m - σ^2^2_n,1] ≤ Clog(n)n^-β/(2β+1). §.§ Proof of Section <ref> The following lemma allows us to obtain a risk bound of σ^2_m,L defined with the empirical norm ._n from the risk bound defined from the pseudo norm ._n,N. Let σ^2_m,L be the truncated projection estimator on of σ^2 over the subspace 𝒮_m,L. Suppose that L = log^2(N),   N>1. Under Assumption <ref>, there exists a constant C>0 independent of m and N such that [σ^2_m,L - σ^2^2_n,N] - 2[σ^2_m,L - σ^2^2_n] ≤ C m^2log^3(N)/N. The proof of Lemma <ref> is provided in <cit.>, Theorem 3.3. The proof uses the independence of the copies X̅^1,…,X̅^N of the process X at discrete times, and the Bernstein inequality. §.§.§ Proof of Theorem <ref>  For fixed n and N in ℕ^*, we set for all m∈ℳ, Ω_n,N,m:=h∈𝒮_m∖{0}⋂{|h^2_n,N/h^2_n-1|≤1/2}. As we can see, the empirical norms h_n,N and h_n of any function h∈𝒮_m∖{0} are equivalent on Ω_n,N,m. More precisely, on the set Ω_n,N,m, for all h∈𝒮_m∖{0}, we have : 1/2h^2_n≤h^2_n,N≤3/2h^2_n. We have the following result: Under Assumption <ref>, the following holds: * If n ≥ N or n ∝ N, then m ∈ℳ = {1,…,√(N)/log(Nn)} and, ℙ(Ω^c_n,N,m)≤ 2exp(-C√(N)). * If n ≤ N, then m ∈ℳ= {1,…,√(n)/log(Nn)} and ℙ(Ω^c_n,N,m) ≤ 2exp(-C√(n)) where C>0 is a constant. We have: Ω^c_n,N,m={ω∈Ω, ∃ h_0∈𝒮_m, |h^2_n,N/h^2_n-1|>1/2}, Denote by ℋ_m = {h∈𝒮_m,  h_n = 1} and ℋ^ε_m the ε-net of ℋ_m for any ε >0. We have h∈ℋ_msup|h^2_n,N/h^2_n-1| = h∈ℋ_msup|h^2_n,N-1|. Let ε > 0 and let ℋ^ε_m be the ε-net of ℋ_m w.r.t. the supremum norm ._∞. Then, for each h∈ℋ_m, there exists h_ε∈ℋ^ε_m such that h-h_ε_∞≤ε. Then |h^2_n,N - 1| ≤|h^2_n,N - h_ε^2_n,N| + |h_ε^2_n,N - 1| and, |h^2_n,N - h_ε^2_n,N| ≤  1/Nn∑_j=1^N∑_k=0^n-1|h(X^j_kΔ) - h_ε(X^j_kΔ)|(h_∞ + h_ε_∞)≤(h_∞ + h_ε_∞)ε. Moreover, we have h^2, h_ε^2≤ 1/τ_0. Then, there exists a constant 𝐜 > 0 such that |h^2_n,N - h_ε^2_n,N| ≤ 2√( cm/τ_0)ε  for the spline basis  (see  Lemma 2.6  in   Denis  et   al.(2021)) |h^2_n,N - h_ε^2_n,N| ≤ 2√( cm/τ_0)ε  for   an   orthonormal   basis   (h^2_∞≤ (0≤ℓ≤ m-1maxϕ_ℓ^2_∞)mh^2). Therefore, for all δ > 0 and for both the spline basis and any orthonormal basis, (h∈ℋ_msup|h^2_n,N-1|≥δ) ≤(h∈ℋ^ε_msup|h^2_n,N-1|≥δ/2) + _4ε√( cm/τ_0)≥δ. We set δ = 1/2 and we choose ε > 0 such that 4ε√( cm/τ_0) < 1/2. Then, using the Hoeffding inequality, there exists a constant c>0 depending on c and τ_0 such that ℙ(Ω^c_n,N,m)≤ 2𝒩_∞(ε,ℋ_m)exp(-cN/m) where 𝒩_∞(ε,ℋ_m) is the covering number of ℋ_m satisfying: 𝒩_∞(ε,ℋ_m) ≤(κ√(m)/ε)^m where the constant κ>0 depends on c>0 (see <cit.>, Proof of Lemma D.1). We set ε = κ√(m^*)/N with m^* = maxℳ and we derive from Equations (<ref>) and (<ref>) that (Ω^c_n,N,m) ≤ 2N^m^*exp(-cN/m^*) = 2exp(-cN/m^*(1-m^*2log(N)/cN)). * If n ≥ N, then m ∈ℳ = {1,…,√(N)/log(Nn)}. 
Since m^*2log(N)/N → 0 as N→ +∞, there exists a constant C>0 such that ℙ(Ω^c_n,N,m)≤ 2exp(-C√(N)). * If n ≤ N, then m ∈ℳ = {1,…, √(n)/log(Nn)},   m^*2log(N)/N ≤log(N)/log^2(Nn) → 0 as N,n →∞, and ℙ(Ω^c_n,N,m)≤ 2exp(-C√(n)). * If n ∝ N, then m ∈ℳ = {1,…,√(N)/log(Nn)}. Since m^*2log(N)/N → 0 as N→ +∞, there exists a constant C>0 such that ℙ(Ω^c_n,N,m)≤ 2exp(-C√(N)). The proof of Theorem <ref> extends the proof of Theorem <ref> when N tends to infinity. Then, we deduce from Equation (<ref>) that 𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]≤ 3h∈𝒮_m,Linfh-σ^2_|I^2_n+C𝔼(h∈𝒮_m, h_n=1supν^2(h))+CΔ^2 where C>0 is a constant depending on σ_1, and ν = ν_1+ν_2+ν_3 with ν_i(h) = 1/Nn∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j,i_kΔ,    i=1,2,3 and the ζ^j,i_kΔ's are the error terms depending on each path X^j,  j=1,…,N. §.§ Upper bound of 𝔼(h∈𝒮_m, h_n=1supν^2(h)) For all h=∑_ℓ=0^m-1a_ℓϕ_ℓ∈𝒮_m such that h_n=1, we have h^2≤1/τ_0 and the coordinate vector a=(a_0,⋯,a_m-1) satisfies: * a^2_2≤ CK ≤ Cm for the spline basis (see <cit.>, Lemma 2.6) * a^2_2≤ 1/τ_0 for an orthonormal basis since h^2 = a^2_2. Furthermore, using the Cauchy Schwartz inequality, we have: ν^2(h)=(∑_ℓ=0^m-1a_ℓν(ϕ_ℓ))^2≤a^2_2∑_ℓ=0^m-1ν^2(ϕ_ℓ). Thus, for all ℓ∈[[0,m-1]], ν=ν_1+ν_2+ν_3 and for all i∈{1,2,3} 𝔼[ν^2_i(ϕ_ℓ)]=  1/Nn^2𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,i_kΔ)^2]. We finally deduce from  (<ref>), (<ref>) and (<ref>) that there exists a constant C>0 depending on σ_1 such that: 𝔼(h∈𝒮_m, h_n=1supν^2(h))≤ Cm/Nn. We deduce from  (<ref>) and (<ref>) that there exists a constant C>0 such that, 𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]≤ 3h∈𝒮_m,Linfσ^2_|I-h^2_n+C(m/Nn+Δ^2). Since we have σ^2_m_∞≤√(mL), then for m and L large enough, σ^2_m-σ^2_|I^2_∞≤ 2mL. There exists a constant C>0 such that for all m∈ℳ and for m and L large enough, 𝔼[σ^2_m-σ^2_|I^2_n,N] =𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]+𝔼[σ^2_m-σ^2_|I^2_n,N_Ω^c_n,N,m] ≤𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]+2mL(Ω^c_n,N,m). Then, from Equation (<ref>), Lemma <ref> and for m ∈ℳ = {1,…,√(min(n,N))/√(log(Nn))}, we have: 𝔼[σ^2_m-σ^2_|I^2_n,N]≤   3h∈𝒮_m,Linfh - σ^2_|I^2_n+C(m/Nn+mLexp(-C√(min(n,N)))+Δ^2) where C>0 is a constant. Recall that the empirical norms ._n,N and ._n are equivalent on Ω_n,N,m, that is for all h∈𝒮_m, h^2_n≤ 2h^2_n,N. Thus, we have 𝔼[σ^2_m-σ^2_|I^2_n] =  𝔼[σ^2_m-σ^2_|I^2_n_Ω_n,N,m] + 𝔼[σ^2_m-σ^2_|I^2_n_Ω^c_n,N,m] ≤  𝔼[σ^2_m-σ^2_|I^2_n_Ω_n,N,m] + 2mL(Ω^c_n,N,m). For all h ∈𝒮_m,L⊂𝒮_m, we have: 𝔼[σ^2_m-σ^2_|I^2_n_Ω_n,N,m] ≤   2𝔼[σ^2_m-h^2_n_Ω_n,N,m] + 2h-σ^2_|I^2_n ≤   4𝔼[σ^2_m-h^2_n,N_Ω_n,N,m] + 2h-σ^2_|I^2_n ≤   8𝔼[σ^2_m-σ^2_|I^2_n,N] + 10h-σ^2_|I^2_n. We finally conclude that 𝔼[σ^2_m-σ^2_|I^2_n] ≤ 34h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/Nn+mLexp(-C√(min(n,N)))+Δ^2). §.§.§ Proof of Corollary <ref>  Under Assumption <ref> and from Theorem <ref> and Lemma (<ref>), there exists a constant C>0 such that 𝔼[σ^2_m-σ^2_|I^2]≤ C(h∈𝒮_m,Linfh-σ^2_|I^2_n+m/Nn+L/min(N^4,n^4)+1/n^2). For [B]. We have m=K+M where M is fixed. From Lemma (<ref>), under Assumption <ref>, we have h∈𝒮_m,Linfh-σ^2_|I^2_n = O(K^-2β). Thus, for K ∝ (Nn)^1/(2β+1) and L = log(Nn), 𝔼[σ^2_m-σ^2_|I^2]≤ C(Nn)^-2β/(2β+1)+Clog(Nn)min(N^-4,n^-4). where C>0 is a constant depending on β. For [F]. From Equation (<ref>) and the proof of Corollary <ref>, we have h∈𝒮_minfh - σ^2_|I^2_n = O(m^-2s). Then, for m = (Nn)^1/(2s+1) and L = log(Nn), we obtain 𝔼[σ^2_m-σ^2_|I^2]≤ C(Nn)^-2s/(2s+1)+Clog(Nn)min(N^-4,n^-4). §.§.§ Proof of Theorem <ref>  We consider the restriction σ^2_[-log(N),log(N)] of σ^2 on the compact interval [-log(N),log(N)] on which the spline basis is built. 
Then we have: 𝔼[σ^2_m,L-σ^2^2_n] = 𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^2_n] + 𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^c^2_n] and from Proposition <ref>, Lemma <ref> and for N large enough, there exists constants c,C>0 such that 𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^c^2_n] ≤ 2L/n∑_k=0^n-1(|X_kΔ| > log(N))≤ 2Lt∈[0,1]sup(|X_t|≥log(N)) ≤ C/log(N)exp(-clog^2(N)). We deduce that 𝔼[σ^2_m,L-σ^2^2_n] = 𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^2_n] + C/log(N)exp(-clog^2(N)). It remains to upper-bound the first term on the right hand side of Equation (<ref>). Upper bound of 𝔼[σ^2_m,L-σ^2^2_n_[-log(N),log(N)]]. For all h∈𝒮_m,L, we obtain from Equation (<ref>), γ_n,N(σ^2_m,L)-γ_n,N(σ^2)≤γ_n,N(h)-γ_n,N(σ^2). For all h∈𝒮_m,L, γ_n,N(h)-γ_n,N(σ^2)=h-σ^2^2_n,N+2ν_1(σ^2-h)+2ν_2(σ^2-h)+2ν_3(σ^2-h)+2μ(σ^2-h) where ν_i(h)=1/nN∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j,i_kΔ, i∈{1,2,3}, μ(h)=1/nN∑_j=1^N∑_k=0^n-1h(X^j_kΔ)R^j_kΔ, we deduce from Equation (<ref>) that for all h∈𝒮_m,L, 𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+2∑_i=1^3𝔼[ν_i(σ^2_m,L-h)]+2𝔼[μ(σ^2_m,L-h)]. For all i∈{1,2,3} and for all h∈𝒮_m,L, one has 𝔼[ν_i(σ^2_m,L-h)]≤√(2mL)√(∑_ℓ=0^m-1𝔼[ν^2_i(ϕ_ℓ)]). * Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)] According to Equation (<ref>), we have ∀ℓ∈[[0,m-1]], ν_1(ϕ_ℓ)=1/Nn∑_j=1^N∑_k=0^n-1ϕ_ℓ(X^j_kΔ)ζ^j,1_kΔ where ζ^j,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^j_s)dW^j_s)^2-∫_kΔ^(k+1)Δσ^2(X^j_s)ds] is a martingale satisfying 𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0 and 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤1/Δ^2𝔼[(∫_kΔ^(k+1)Δσ^2(X^1_s)ds)^2]≤ Cσ^4_1 with C>0 a constant, W=W^1 and (ℱ_t)_t≥ 0 the natural filtration of the martingale (M_t)_t∈[0,1] given for all t∈[0,1] by M_t=∫_0^tσ(X^1_s)dW_s. We derive that ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]= 1/Nn^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1ϕ_ℓ(X^j_kΔ)ζ^1,1_kΔ)^2]=1/Nn^2𝔼[∑_k=0^n-1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)(ζ^1,1_kΔ)^2]. For each k∈[[0,n-1]], we have ∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ) = ∑_ℓ=-M^K-1B^2_ℓ(X^1_kΔ) =1   for   the   spline   basis ∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)≤ Cm   For   an   orthonormal   basis   with  C=0 ≤ℓ≤ m-1maxϕ_ℓ^2_∞. Finally, there exists a constant C>0 such that ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤C/Nn  for   the   spline   basis Cm/Nn  for   an   orthonormal   basis. * Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)] For all k∈[[0,n-1]] and for all s∈[0,1], set η(s)=kΔ if s∈[kΔ,(k+1)Δ). We have: ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)] =4/N∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2] =4/N∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))(η(s)+Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2]. We conclude that ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)]≤C/Nn^2  for   the   spline   basis Cm/Nn^2  for   an   orthonormal   basis. where the constant C>0 depends on the diffusion coefficient and the upper bound of the basis functions. * Upper bound of ∑_ℓ=0^m-1𝔼[ν^3_2(ϕ_ℓ)] We have: ∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)] =4/Nn^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)b(X^1_kΔ)σ(X^1_s)dW_s)^2] =4/Nn^2∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))b(X^1_η(s))σ(X^1_s)dW_s)^2] ≤4/Nn^2𝔼[∫_0^1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_η(s))b(X^1_η(s))σ^2(X^1_s)ds]. Since for all x∈ℝ, b(x)≤ C_0(1+x^2) and t∈[0,1]sup𝔼(|X_t|^2)<∞, there exists a constant C>0 depending on the diffusion coefficient such that ∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)]≤C/Nn^2  for   the   spline   basis Cm/Nn^2  for   an   orthonormal   basis. We finally deduce that from Equations (<ref>) and (<ref>)  that for all h∈𝒮_m,L, 𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+C√(mL/Nn)+2𝔼[μ(σ^2_m,L-h)]    (1) 𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+C√(m^2L/Nn)+2𝔼[μ(σ^2_m,L-h)]    (2) where C>0 is a constant, the result (1) corresponds to the spline basis, and the result (2) corresponds to any orthonormal basis. 
It remains to obtain an upper bound of the term μ(σ^2_m,L-h). For all a>0 and for all h∈𝒮_m,L, 2μ(σ^2_m,L-h) ≤ 2/aσ^2_m,L-σ^2^2_n,N+2/ah-σ^2^2_n,N+a/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2 2𝔼[μ(σ^2_m,L-h)] ≤ 2/a𝔼σ^2_m,L-σ^2^2_n,N+2/ah∈𝒮_m,Linfh-σ^2^2_n +a/Nn∑_j=1^N∑_k=0^n-1𝔼[(R^j_kΔ)^2]. Using Equations (<ref>), (<ref>) and setting a=4, we deduce that there exists constant C>0 depending on σ_1 such that, 𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]]≤h∈𝒮_m,Linfh-σ^2^2_n+C(√(mL/Nn)+Δ^2)    [𝐁] 𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+C(√(m^2L/Nn)+Δ^2)    [𝐇]. The final result is obtained from Equations (<ref>) and (<ref>). §.§.§ Proof of Lemma <ref>  It is proven in <cit.> that for each dimension m∈ℳ, the Gram matrix Ψ_m built from the Hermite basis is invertible. For the case of the B-spline basis, let us consider a vector (x_-M,⋯,x_K-1)∈ℝ^m such that x_j∈[u_j+M,u_j+M+1) and B_j(x_j)≠ 0. Since [u_j+M,u_j+M+1)∩[u_j^'+M,u_j^'+M+1)=∅ for all j,j^'∈{-M,⋯,K-1} such that j≠ j^', then for all j,j^'∈{-M,⋯,K-1} such that j≠ j^', B_j(x_j^')=0. Consequently, we obtain: ((B_ℓ(x_ℓ^'))_-M≤ℓ,ℓ^'≤ K-1) =(diag(B_-M(x_M),⋯,B_K-1(x_K-1))) =∏_ℓ=-M^K-1B_ℓ(x_ℓ)≠ 0. Then, we deduce from <cit.>, Lemma 1 that the matrix Ψ_m is invertible for all m∈ℳ, where the function f_T are replaced by f_n : x↦1/n∑_k=0^n-1p_X(kΔ,x) with λ([-A_N,A_N]∩supp(f_n))>0, λ being the Lebesgue measure. Case of the B-spline basis. For all w∈ℝ^m such that w_2,m=1, we have: w^'Ψ_mw = t_w^2_n=∫_-A_N^A_Nt^2_w(x)f_n(x)dx+t^2_w(x_0)/n with t_w=∑_ℓ=-M^K-1w_ℓB_ℓ. Under Assumption <ref>, the transition density (t,x)↦ p_X(t,x) is approximated as follows ∀ (t,x)∈(0,1]×ℝ, 1/K_*√(t)exp(-c_σx^2/t)≤ p_X(t,x)≤K_*/√(t)exp(-x^2/c_σt) where K_*>1 and c_σ>1. Since s↦exp(-c_σx^2/s) is an increasing function, then for n large enough and for all x∈[-A_N,A_N], f_n(x) ≥1/K_*n∑_k=1^n-1exp(-cx^2/kΔ)≥1/K_*∫_0^1-Δexp(-c_σx^2/s)ds ≥1/K_*∫_1-(log(N))^-1^1-(2log(N))^-1exp(-c_σx^2/s)ds ≥1/2K_*log(N)exp(-c_σx^2/1-log^-1(N)). Thus, the density function satisfies ∀ x∈[-A_N,A_N], f_n(x)≥12K_*log(N)exp(-c_σA^2_N/1-log^-1(N))≥12K_*log(N)exp(-c_σA^2_N). Finally, since there exists a constant C_1>0 such that t_w^2≥ C_1A_NK^-1_N (see <cit.>, Lemma 2.6), for all w∈ℝ^m (m = K_N+M) such that w_2,m=1, there exists a constant C>0 such that, w^'Ψ_mw≥CA_N/mlog(N)exp(-c_σA^2_N). Case of the Hermite basis. For all w∈^m such that w_2,m=1, we have w^'Ψ_mw=t_w^2_n=∫_-∞^+∞t^2_w(x)f_n(x)dx+t^2_w(x_0)/n with t_w=∑_ℓ=0^m-1w_ℓh_ℓ. Recall that for all x∈ such that |x|≥√((3/2)(4m+3)), |h_ℓ(x)|≤ c|x|exp(-c_0x^2) for all ℓ≥ 0. Then we have w^'Ψ_mw ≥  ∫_|x|≤√((3/2)(4m+3))(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2f_n(x)dx ≥  x∈[-√((3/2)(4m+3)),√((3/2)(4m+3))]inff_n(x)∫_|x|≤√((3/2)(4m+3))(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx ≥  1/2K_*log(N)exp(-3c_σ(4m+3)/2(1-log^-1(N)))∫_|x|≤√((3/2)(4m+3))(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx since for all x∈ℝ, f_n(x)≥(1/2K_*log(N))exp(-c_σx^2/1-log^-1(N)). Set a_N=√((3/2)(4m+3)), then we obtain w^'Ψ_mw≥exp(-3c_σ(4m+3)/2(1-log^-1(N)))/2K_*log(N)(∫_-∞^+∞(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx-∫_|x|>a_N(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx) ≥exp(-3c_σ(4m+3)/2(1-log^-1(N)))/2K_*log(N)(1-2c^2m∫_a_N^+∞x^2exp(-8c_0x^2)dx) ≥exp(-3c_σ(4m+3)/2(1-log^-1(N)))/2K_*log(N)(1-c^2m/8c_0√(3/2(4m+3))exp(-12c_0(4m+3))) where c,c_0>0 are constants depending on the Hermite basis. Finally, for N large enough, 1-c^2m/8c_0√(3/2(4m+3))exp(-12c_0(4m+3))≥1/2. Finally, there exists a constant C>0 such that for all w∈^m such that w_2,m, w^'Ψ_mw ≥C/log(N)exp(-3c_σ(4m+3)/2(1-log^-1(N))). 
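To make the quantity bounded in this proof concrete, the following short Python sketch builds a Monte Carlo proxy for the B-spline Gram matrix Ψ_m from simulated discrete observations and inspects its smallest eigenvalue. The diffusion model, the uniform knot layout and the sample sizes are illustrative assumptions and do not reproduce the exact setting of the lemma.

```python
import numpy as np
from scipy.interpolate import BSpline

# Illustrative setting: N discrete paths of dX = (1 - X) dt + dW (Model 1 of the Appendix),
# a uniform B-spline basis of degree M on [-A_N, A_N], small sample sizes.
rng = np.random.default_rng(0)
N, n = 50, 100
dt = 1.0 / n
X = np.zeros((N, n))
for k in range(n - 1):
    X[:, k + 1] = X[:, k] + (1.0 - X[:, k]) * dt + np.sqrt(dt) * rng.standard_normal(N)

A = np.sqrt(np.log(N))                      # estimation interval [-A_N, A_N]
K, M = 8, 3                                 # K pieces, spline degree M, dimension m = K + M
m = K + M
step = 2 * A / K
knots = np.linspace(-A - M * step, A + M * step, K + 2 * M + 1)   # uniform extended knots

def design(x):
    """(number of points) x m matrix of the B-spline basis B_{-M}, ..., B_{K-1} at x."""
    x = np.clip(x, -A, A)
    cols = [BSpline.basis_element(knots[l:l + M + 2], extrapolate=False)(x) for l in range(m)]
    return np.nan_to_num(np.column_stack(cols))

F = design(X.ravel())                       # stacked design matrix over all observations
Psi = F.T @ F / (N * n)                     # Monte Carlo proxy for the Gram matrix Psi_m
print("smallest eigenvalue of Psi_m:", np.linalg.eigvalsh(Psi)[0])   # > 0 <=> invertibility
```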
§.§.§ Proof of Theorem <ref>  The proof of Theorem <ref>  relies on the following lemma: Under Assumptions <ref> and for σ^2∈Σ_I(β,R) with β≥ 1,   I = [-A_N,A_N] and N ∝ n,   A_N = o (√(log(N))),   K ∝((Nn)^1/(2β+1)A_N)    (m = K+M), the following holds: ℙ(Ω^c_n,N,m) ≤ Cexp(- c log^3/2(N)) where c,C>0 are constants independent of N. According to Equations (<ref>) in the proof of Theorem <ref>, for all dimension m=K+M, with K∈, and for all h∈𝒮_K+M, there exists a constant C>0 such that [σ^2_A_N,m-σ^2_A_N^2_n,N_Ω_n,N,m]≤ C[h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+(h∈𝒮_K+M,h_n=1supν^2(h))+Δ^2] where Ω_n,N,m is given in Equation (<ref>)  and ν=ν_1+ν_2+ν_3 with the ν_i given in Equation (<ref>) . For all h=∑_ℓ=-M^K-1a_ℓB_ℓ∈𝒮_K+M,L_N, h^2_n=[1/n∑_k=0^n-1h^2(X_kΔ)]=∑_ℓ=-M^K-1∑_ℓ=-M^K-1a_ℓa_ℓ^'[1/n∑_k=0^n-1B_ℓ(X_kΔ)B_ℓ^'(X_kΔ)]=a^'Ψ_ma. The Gram matrix Ψ_m is invertible for each K∈ℳ (see proof of Lemma <ref>). It follows that for all h=∑_ℓ=-M^K-1a_ℓB_ℓ such that h^2_n=a^'Ψ_ma=1, one has a=Ψ^-1/2_mu where u∈ℝ^m and u_2,m=1. Furthermore, we have: h=∑_ℓ=-M^K-1a_ℓB_ℓ=∑_ℓ=-M^K-1u_ℓ∑_ℓ^'^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'. Then for all h∈𝒮_K+M, we have ν^2(h)≤ 3(ν^2_1(h)+ν^2_2(h)+ν^2_3(h)) where, ∀ i∈{1,2,3}, ν^2_i(h)≤∑_ℓ=-M^K-1(1/Nn∑_j=1^N∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^j,i_kΔ)^2. So we obtain, ∀ i∈{1,2,3}, [h∈𝒮_K+M,h_n=1supν^2_i(h)]≤1/Nn^2∑_ℓ=-M^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^1,i_kΔ)^2] For i=1, we have ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds] and we obtained in the proof of Theorem <ref>  that there exists a constant C>0 such that for all k∈[[0,n-1]], 𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0, 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤ C𝔼[(∫_kΔ^(k+1)Δσ^2(X_u)du)^2]≤ Cσ^4_1Δ^2. We deduce that [h∈𝒮_K+M,h_n=1supν^2_1(h)] =1/Nn^2Δ^2∑_ℓ=0^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ)ζ^1,1_kΔ)^2] ≤1/N∑_ℓ=-M^K-1∑_k=0^n-1{(∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ))^2(ζ^1,1_kΔ)^2} ≤4σ^2_1/Nn∑_ℓ=-M^K-1∑_ℓ^'=-M^K-1∑_ℓ^''=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓ[Ψ^-1/2_m]_ℓ^'',ℓ[Ψ^-1/2_m]_ℓ^',ℓ^''. We have: ∑_ℓ=-M^K-1∑_ℓ^'=-M^K-1∑_ℓ^''=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓ[Ψ^-1/2_m]_ℓ^'',ℓ[Ψ^-1/2_m]_ℓ^',ℓ^''=Tr(Ψ^-1_mΨ_m)=Tr(I_m)=m. So we obtain [h∈𝒮_K+Msupν^2_1(h)]≤4σ^2_1m/Nn. For i=2, we have ζ^1,2_kΔ=2/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s and [h∈𝒮_K+M,h_n=1supν^2_2(h)] ≤1/Nn^2∑_ℓ=-M^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^1,2_kΔ)^2] ≤4σ^4_1σ^'^2_∞Δ/Nn^2∑_ℓ=-M^K-1∑_k=0^n-1[(∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ))^2] ≤4σ^2_1σ^'^2_∞m/Nn^2. For i=3, we have ζ^1,3_kΔ=2b(X^1_kΔ)∫_kΔ^(k+1)Δσ(X^1_s)dW_s and there exists constants C_1,C_2>0 such that [h∈𝒮_K+M,h_n=1supν^2_3(h)] ≤1/Nn^2∑_ℓ=-M^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^1,3_kΔ)^2] ≤ C_1σ^2_1Δ/Nn^2∑_ℓ=-M^K-1∑_k=0^n-1[(∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ))^2] ≤ C_2σ^2_1m/Nn^2. Finally, there exists a constant C>0 depending on σ_1 and M such that [h∈𝒮_K+M,h_n=1supν^2(h)]≤ Cm/Nn. From Equations (<ref>) and (<ref>) , we deduce that [σ^2_A_N,m-σ^2_A_N^2_n,N_Ω_n,N,m]≤ C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn+Δ^2) where C>0 is a constant depending on σ_1 and M. We obtain [σ^2_A_N,m-σ^2_A_N^2_n,N]≤ C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn+Δ^2)+[σ^2_A_N,m-σ^2_A_N^2_n,N_Ω^c_n,N,m] and for N large enough, σ^2_A_N,m-σ^2_A_N^2_n,N≤ 4mL, and according to Lemma <ref> , [σ^2_A_N,m-σ^2_A_N^2_n,N_Ω^c_n,N,m]≤ 4mLℙ(Ω^c_n,N,m)≤ CmLexp(-clog^3/2(N)) where c>0 is a constant. 
Thus, there exists a constant C>0 such that [σ^2_A_N,m-σ^2_A_N^2_n,N] ≤  [σ^2_A_N,m-σ^2_A_N^2_n,N_Ω_n,N,m]+[σ^2_A_N,m-σ^2_A_N^2_n,N_Ω^c_n,N,m] ≤  C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn+mLexp(-clog^3/2(N))+Δ^2). Then, as n ∝ N and L = log(N), there exists a constant C>0 such that [σ^2_A_N,m-σ^2_A_N^2_n,N] ≤  C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn). Finally, since σ^2∈Σ_I(β,R) with β≥ 1 and I = [-A_N, A_N], one has h∈𝒮_K+M,Linfh-σ^2_A_N^2_n≤ CA^2β_NK^-2β where the constant C>0 depends on β, R and M. Furthermore, as we chose the inverval [-A_N,A_N] such that A_N = o (√(log(N))) and for K ∝((Nn)^1/(2β+1)A_N), we obtain [σ^2_A_N,m-σ^2_A_N^2_n,N] ≤  Clog^β(N)(Nn)^-2β/(2β+1). §.§ Proof of Section <ref> §.§.§ Proof of Theorem <ref> Set for all K, K^'∈𝒦 = {2^q,   q=0,…, q_max,   2^q_max≤√(N)/log(N)}⊂ℳ, 𝒯_K,K^' = {g∈𝒮_K+M+𝒮_K^'+M, g_n=1,  g_∞≤√(L)}. Recall that for all j ∈ [[1,N]] and for all k ∈ [[0,n]], ζ^j,1_kΔ = 1/Δ[(∫_kΔ^(k+1)Δσ(X^j_s)dW^j_s)^2-∫_kΔ^(k+1)Δσ^2(X^j_s)ds]. The proof of Theorem <ref> relies on the following lemma whose proof is in Appendix. Under Assumption <ref>, for all ε, v>0 and g∈𝒯_K,K^', there exists a real constant C>0 such that, ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ζ^j,1_kΔ≥ε, g^2_n,N≤ v^2)≤exp(-CNnε^2/σ^2_1(εg_∞+4σ^2_1v^2)) and for all x>0 such that x≤ε^2/σ^2_1(εg_∞+4σ^2_1v^2), ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ζ^j,1_kΔ≥ 2σ^2_1v√(x)+σ^2_1g_∞x, g^2_n,N≤ v^2)≤exp(-CNnx). From Equation (<ref>), we have K:=K∈𝒦min{γ_n,N(σ^2_K)+pen(K)}. For all K∈𝒦 and h∈𝒮_K+M,L, γ_n,N(σ^2_K)+pen(K)≤γ_n,N(h)+pen(K), then, for all K∈𝒦 and for all h∈𝒮_K+M,L, γ_n,N(σ^2_K)-γ_n,N(σ^2_|I)≤  γ_n,N(h)-γ_n,N(σ^2_|I)+pen(K)-pen(K) σ^2_K-σ^2_|I^2_n,N≤  h-σ^2_|I^2_n,N+2ν(σ^2_K-h)+2μ(σ^2_K - h)+pen(K)-pen(K) ≤  h-σ^2_|I^2_n,N+1/dσ^2_K-t^2_n+dg∈𝒯_K,Ksupν^2(g)+1/dσ^2_K-h^2_n,N +d/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2+pen(K)-pen(K) where d>1 and the space 𝒯_K,K is given in Equation (<ref>). On the set Ω_n,N,K_max (given in Equation (<ref>)): ∀ h∈𝒮_K+M, 1/2h^2_n≤h^2_n,N≤3/2h^2_n. Then on Ω_n,N,K_max, for all d>1 and for all h∈𝒮_K+M with K∈𝒦, (1-10/d)σ^2_K-σ^2_|I^2_n,N≤  (1+10/d)h-σ^2_|I^2_n,N+dh∈𝒯_K,Ksupν^2(h)+d/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2 +pen(K)-pen(K). We set d=20. Then, on Ω_n,N,max and for all h∈𝒮_K+M,L, σ^2_K-σ^2_|I^2_n,N≤ 3h - σ^2_|I^2_n,N+20h∈𝒯_K,Ksupν^2(h)+20/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2+2(pen(K)-pen(K)). Let q : 𝒦^2⟶ℝ_+ such that 160 q(K,K^')≤ 18 pen(K)+16 pen(K^'). Thus, on the set Ω_n,N,K_max, there exists a constant C>0 such that for all h∈𝒮_K+M 𝔼[σ^2_K-σ^2_|I^2_n,N_Ω_n,N,K_max]≤   34(h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)) +160(h∈𝒯_K,Ksupν^2_1(h)-q(K,K))+CΔ^2 where ν_1(h):=1/Nn∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j,1_kΔ with ζ^j,1_kΔ the error term. We set for all K,K^'∈𝒦, G_K(K^'):=h∈𝒯_K,K^'supν^2_1(h) and for N and n large enough, σ^2_K-σ^2_|I^2_n,N≤ 4(K+M)L. We deduce that, 𝔼[σ^2_K-σ^2_|I^2_n,N] ≤  𝔼[σ^2_K-σ^2_|I^2_n,N_Ω_n,N,K_max]+𝔼[σ^2_K-σ^2_|I^2_n,N_Ω^c_n,N,K_max] ≤   34K∈𝒦inf(h∈𝒮_K+Minfh-σ^2_|I^2_n+pen(K)) +CΔ^2+4(K+M)Lℙ(Ω^c_n,N,K_max) +160𝔼[(G_K(K)-q(K,K))_+_Ω_n,N,K_max]. In the sequel, we refer to the proof of Proposition 6.1 in <cit.>. We known from Lorentz et al (see <cit.>) that given the unit ball B_._n(0,1) of the approximation subspace 𝒮_K+M with respect to norm ._n defined as follows: B_._n(0,1)={h∈𝒮_K+M : h_n≤ 1}={h∈𝒮_K+M : h≤1/τ_0}=B_2(0,1/τ_0), we can find a ε-net E_ε such that for each ε∈(0,1], |E_ε|≤(3/ετ_0)^K+M. Recall that 𝒯_K,K^'={g∈𝒮_K+M+𝒮_K^'+M, g_n=1, g_∞≤√(L)} and consider the sequence (E_ε_k)_k≥ 1 of ε-net with ε_k=ε_0 2^-k and ε_0∈(0,1]. Moreover, set N_k = log(|E_ε_k|) for each k≥ 0. 
Then for each g∈𝒮_K+M+𝒮_K^'+M such that g_∞≤√(L), there exists a sequence (g_k)_k≥ 0 with g_k∈ E_ε_k such that g=g_0+∑_k=1^∞g_k-g_k-1. Set ℙ:=ℙ(.∩Ω_n,N,K_max) and τ:=σ_1^2√(6x^n,N_0)+σ^2_1√(L)x^n,N_0+∑_k≥ 1ε_k-1{σ_1^2√(6x^n,N_k)+2σ^2_1√(L)x^n,N_k}=y^n,N_0+∑_k≥ 0y^n,N_k. For all h∈𝒯_K,K^' and on the event Ω_n,N,K_max, one has h^2_n,N≤3/2h^2_n=3/2. Then, using the chaining technique of <cit.>, we have ℙ(h∈𝒯_K,K^'supν_1(h)>τ) =ℙ(∃ (h_k)_k≥ 0∈∏_k≥ 0E_ε_k/ ν_1(h)=ν_1(h_0)+∑_k=1^∞ν_1(h_k-h_k-1)>τ) ≤∑_h_0∈ E_0ℙ(ν_1(h_0)>y^n,N_0)+∑_k=1^∞∑_h_k-1∈ E_ε_k-1h_k∈ E_ε_kℙ(ν_1(h_k-h_k-1)>y^n,N_k). According to Equation (<ref>) and Lemma <ref>, there exists a constant C>0 such that (ν_1(h_0) > y^n,N_0) ≤  (ν_1(h_0) > σ_1√(6x^n,N_0)+σ^2_1h_0_∞x^n,N_0) ≤  exp(-CNnx^n,N_0), ∀ k≥ 1,  (ν_1(h_k - h_k-1) > y^n,N_0) ≤  (ν_1(h_k - h_k-1) > σ_1√(6x^n,N_k)+σ^2_1h_k - h_k-1_∞x^n,N_k) ≤  exp(-CNnx^n,N_k). Finally, since N_k = log(|E_ε_k|) for all k≥ 0, we deduce that ℙ(h∈𝒯_K,K^'supν_1(h)>τ) ≤|E_ε_0|exp(-CNnx^n,N_0) + ∑_k=1^∞(|E_ε_k|+|E_ε_k-1|)exp(-CNnx^n,N_k) ≤exp(N_0-CNnx^n,N_0)+∑_k=1^∞exp(N_k+N_k-1-CNnx^n,N_k). We choose x^n,N_0 and x^n,N_k, k≥ 1 such that, N_0 - CNnx^n,N_0 = -a(K+K^' + 2M)-b N_k + N_k-1 - CNnx^n,N_k = -k(K + K^'+2M) - a(K + K^' + 2M) - b where a and b are two positive real numbers. We deduce that x^n,N_k≤ C_0(1+k)K + K^'+2M/Nn and τ≤ C_1σ^2_1√(√(L)K + K^'+2M/Nn) with C_0>0 and C_1 two constants depending on a and b. It comes that ∼ℙ(t∈𝒯_K,K^'supν(t)>τ) ≤e/e-1e^-bexp{-a(K + K^' + 2M)}. From Equation (<ref>), we set q(K,K^')=κ^*σ^2_1√(L)K + K^' + 2M/Nn where κ^*>0 depends on C_1>0. Thus, for all K,K^'∈𝒦, ℙ({h∈𝒯_K,K^'supν^2(h)>q(K,K^')}∩Ω_n,N,K_max)≤e^-b+1/e+1exp{-a(K + K^' + 2M)} and there exists constants c,C>0 such that 𝔼[(G_K(K^')-q(K,K^'))_+_Ω_n,N,K_max] ≤c(K+K^')/Nnℙ({t∈𝒯_K,K^'supν^2(t)>q(K,K^')}∩Ω_n,N,K_max) ≤C/Nnexp{-a/2(K+K^')}. Finally, there exists a real constant C>0 such that, 𝔼[(G_K(K)-q(K,K))_+_Ω_n,N,K_max]≤∑_K^'∈𝒦𝔼[(G_K(K^')-q(K,K^'))_+_Ω_n,N,K_max]≤C/Nn. We choose the penalty function pen such that for each K∈𝒦, pen(K)≥κσ^2_1√(L)K+M/Nn. For N large enough, one has σ^2_1≤√(L). Thus, we finally set pen(K)=κ(K+M)log(N)/Nn with L = log(N). Then, there exists a constant C>0 such that, 𝔼[σ^2_K-σ^2_|I^2_n,N] ≤ 34K∈𝒦inf{h∈𝒮_K+Minfh-σ^2_|I^2_n+pen(K)}+C/Nn. §.§.§ Proof of Theorem <ref>  From Equation (<ref>), we have K := K∈𝒦minγ_n,N(σ^2_K+M,L) + pen(K). Then, for all K∈𝒦 and for all h ∈𝒮_K+M,L, we have γ_n,N(σ^2_K,L) + pen(K) ≤γ_n,N(h) + pen(K). Then, for all K∈𝒦 and for all h∈𝒮_K+M,L, γ_n,N(σ^2_K,L) - γ_n,N(σ^2) ≤  γ_n,N(h) - γ_n,N(σ^2) + pen(K) - pen(K) σ^2_K,L - σ^2^2_n,N≤  h - σ^2^2_n,N + 2ν(σ^2_K,L - h) + 2μ(σ^2_K,L - h) + pen(K) - pen(K). We have for all a>0, 2𝔼[μ(σ^2_K,L-h)] ≤ 2/a𝔼σ^2_K,L-σ^2^2_n,N+2/ah∈𝒮_K+M,Linfh-σ^2^2_n +a/Nn∑_j=1^N∑_k=0^n-1𝔼[(R^j_kΔ)^2] and since ν = ν_1 + ν_2 + ν_3, according to the proof of Theorem <ref>, there exists a constant c>0 such that [ν(σ^2_K,L - h)] ≤ c[ν_1(σ^2_K,L - h)] where the for i∈{1,2,3} and for all h ∈𝒮_K+M,L,   K∈𝒦, ν_i(h) = 1/Nn∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j_kΔ, and the ζ^j_kΔ are given Then, (1-2/a)[σ^2_K,L - σ^2^2_n,N] ≤  (1+2/a)h∈𝒮_K+M,Linfh-σ^2^2_n + 2c[ν_1(σ^2_K,L - h)] + pen(K)-pen(K) + a/Nn∑_j=1^N∑_k=0^n-1[(R^j_kΔ)^2] From Equation (<ref>) and for a = 4, there exists a constant C>0 such that [σ^2_K,L - σ^2^2_n,N] ≤ 3h∈𝒮_K+M,Linfh-σ^2^2_n + 4c[ν_1(σ^2_K,L - h)] + 2(pen(K)-pen(K)) + CΔ^2. 
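As a rough numerical illustration of the selection rule K̂ minimizing γ_n,N(σ̂²_K) + pen(K) with pen(K) ∝ (K+M)log(N)/(Nn), the sketch below fits truncated projection estimators on a small dyadic collection of spline spaces and picks the dimension with the smallest penalized contrast. It assumes the usual increment-based response U^j_k = (X^j_((k+1)Δ) - X^j_(kΔ))²/Δ, a particular simulated model, κ = 5 and a pointwise truncation; these choices belong to the illustration, not to the proof.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(1)
N, n = 100, 100
dt = 1.0 / n
M, kappa = 3, 5.0                                      # spline degree and penalty constant (assumed)
sigma = lambda x: 0.1 + 0.9 / np.sqrt(1.0 + x ** 2)    # Model 2 of the Appendix

# simulate N paths of dX = (1 - X) dt + sigma(X) dW and form the increment-based response
X = np.zeros((N, n + 1))
for k in range(n):
    X[:, k + 1] = X[:, k] + (1.0 - X[:, k]) * dt + sigma(X[:, k]) * np.sqrt(dt) * rng.standard_normal(N)
U = (np.diff(X, axis=1) ** 2 / dt).ravel()             # U^j_k = (X_{(k+1)dt} - X_{k dt})^2 / dt
Xo = X[:, :-1].ravel()
A, L = np.sqrt(np.log(N)), np.log(N * n)               # estimation interval and truncation level

def spline_design(x, K):
    step = 2 * A / K
    knots = np.linspace(-A - M * step, A + M * step, K + 2 * M + 1)
    cols = [BSpline.basis_element(knots[l:l + M + 2], extrapolate=False)(np.clip(x, -A, A))
            for l in range(K + M)]
    return np.nan_to_num(np.column_stack(cols))

best = None
for K in [2 ** q for q in range(6)]:                   # small dyadic collection of dimensions
    Bmat = spline_design(Xo, K)
    coef, *_ = np.linalg.lstsq(Bmat, U, rcond=None)    # projection (least squares) estimator
    bound = np.sqrt((K + M) * L)
    fit = np.clip(Bmat @ coef, -bound, bound)          # pointwise truncation at sqrt(m L)
    crit = np.mean((U - fit) ** 2) + kappa * (K + M) * np.log(N) / (N * n)
    best = min(best, (crit, K)) if best else (crit, K)
print("selected dimension K:", best[1])
```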
Since for all K∈𝒦,  pen(K) ≥ 2κ^*σ^2_1(K+M)√(2L)/(Nn), define the function q: (K,K^') ↦ q(K,K^') such that q(K,K^') = 2C^*σ^2_1(K+K^'+2M)√(2L)/Nn≥ 2σ^2_1v√(x^n,N) + σ^2_1vx^n,N where x^n,N∝(K+K^'+2M/Nn)^2   and   v = √(2L). The constant C^*>0 depends on constants κ^*>0 and c>0 of Equation (<ref>) such that 4cq(K,K^') ≤pen(K) + 2pen(K^'). Then for all K ∈𝒦 and for all h∈𝒮_K+M,L, [σ^2_K,L - σ^2^2_n,N] ≤ 3(h∈𝒮_K+M,Linfh-σ^2^2_n + pen(K)) + 4c[(ν_1(σ^2_m,L - h) - q(K,K))_+] + CΔ^2. For all K∈𝒦 and for all h∈𝒮_K+M,L such that h_∞≤√(L), we have , σ^2_K,L - h^2_n,N≤σ^2_K,L - h^2_∞≤ 2L =: v^2. Then, using Equation (<ref>) and Lemma <ref>, there exists a constant C>0 such that for all K,K^'∈𝒦 and for all h∈𝒮_K+M,L, (ν_1(σ^2_K^',L - h) ≥ q(K,K^'),  σ^2_K,L - h^2_n,N≤ v^2) ≤exp(-CNnx^n,N). Since L = log(N), then for N large enough, σ^2_1≤√(log(N)), we finally choose pen(K) = κ(K+M)log(N)/Nn where κ>0 is a new constant. Since [ν_1(σ^2_K,L - h)] ≤O(√((K_max+M)log^2(N)/Nn)) (see proof of Theorem <ref>), for all K ∈𝒦 and h ∈𝒮_K+M,L, there exists a constant c>0 such that [(ν_1(σ^2_K,L - h) - q(K,K))_+] ≤  K^'∈𝒦max{[(ν_1(σ^2_K^',L - h) - q(K,K^'))_+]} ≤   cq(K,K_max)K^'∈𝒦max{(ν_1(σ^2_K^',L - h) ≥ q(K,K^'))}. From Equation (<ref>), we obtain that [(ν_1(σ^2_K,L - h) - q(K,K))_+] ≤ cq(K,K_max)exp(-CNn)≤C/Nn since K and K_max increase with the size N of the sample paths D_N,n, and cNnq(K,K_max)exp(-CNn) → 0   as   N →∞. Then, from Equations (<ref>) and (<ref>), there exists a constant C>0 such that [σ^2_K,L - σ^2^2_n,N] ≤ 3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2^2_n + pen(K)} + C/Nn. §.§.§ Proof of Theorem <ref> The proof of Theorem <ref> is similar to the proof of Theorem <ref>. Then, from Equation (<ref>), for all h∈𝒮_K+M, σ^2_K,L-σ^2_|I^2_n,1≤ 3h - σ^2_|I^2_n,1+20h∈𝒯_K,Ksupν^2(h)+20/n∑_k=0^n-1(R^1_kΔ)^2+2(pen(K)-pen(K)), where 𝒯_K,K^' = {h ∈𝒮_K+M+𝒮_K^'+M,  h_X = 1,  h_∞≤√(L)}. Let q: 𝒦^2⟶_+ such that 160q(K,K^') ≤ 18pen(K) + 16pen(K^'). Recall that the 𝕃^2-norm ., the norm [._X] and the empirical norm ._n are equivalent on 𝕃^2(I) since the transition density is bounded on the compact interval I. Then, for all K ∈𝒦 and h ∈𝒮_K+M,L, we have [σ^2_K,L-σ^2_|I^2_n,1] ≤   3(h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)) + 20(h∈𝒯_K,Ksupν^2_1(h)-q(K,K)) + CΔ^2 where ν_1(h):=1/n∑_k=0^n-1h(X^1_kΔ)ζ^1,1_kΔ with ζ^1,1_kΔ the error term. We set for all K,K^'∈𝒦, G_K(K^'):=h∈𝒯_K,K^'supν^2_1(h). Then, there exists C>0 such that [σ^2_K,L-σ^2_|I^2_n,1] ≤   3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)} + 20∑_K^'∈𝒦[(G_K(K^')-q(K,K^'))_+] + CΔ^2. Considering the unit ball B_._X(0,1) of the approximation subspace given by B_._X(0,1) = {h∈𝒮_K+M,  h^2_X≤ 1} = {h∈𝒮_K+M,  h^2≤1/τ_0}. We obtain from the proof of Theorem <ref> with N=1 that, ∑_K^'∈𝒦[(G_K(K^')-q(K,K^'))_+]≤C/n, where C>0 is a constant, q(K,K^') ∝σ^4_1(K+K^'+2M)√(log(n))/n and pen(K) ∝(K+M)log(n)/n. Then we obtain 𝔼[σ^2_K,L-σ^2_|I^2_n,1] ≤3/τ_0K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)} + C/n. ScandJStat Appendix §.§ Calibration Fix the drift function b(x) = 1-x, the time-horizon T=1 and at time t=0,   x_0=0. Consider the following three models: Model 1: σ(x)=1 Model 2: σ(x)=0.1+0.9/√(1+x^2) Model 3: σ(x) = 1/3+sin^2(2π x)/π + 1/(π+x^2). 
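Sample paths from Models 1-3 above can be generated, for instance, with an Euler-Maruyama scheme; the sketch below is one such implementation (the scheme, the step size and the seed are choices made for the illustration and are not part of the calibration protocol described next).

```python
import numpy as np

def simulate_paths(sigma, N, n, T=1.0, x0=0.0, seed=0):
    """Euler-Maruyama paths of dX_t = (1 - X_t) dt + sigma(X_t) dW_t on [0, T]."""
    rng = np.random.default_rng(seed)
    dt = T / n
    X = np.full((N, n + 1), float(x0))
    for k in range(n):
        dW = np.sqrt(dt) * rng.standard_normal(N)
        X[:, k + 1] = X[:, k] + (1.0 - X[:, k]) * dt + sigma(X[:, k]) * dW
    return X

models = {
    "Model 1": lambda x: np.ones_like(x),
    "Model 2": lambda x: 0.1 + 0.9 / np.sqrt(1.0 + x ** 2),
    "Model 3": lambda x: 1.0 / 3.0 + np.sin(2.0 * np.pi * x) ** 2 / np.pi + 1.0 / (np.pi + x ** 2),
}

for name, s in models.items():
    X = simulate_paths(s, N=100, n=250)
    print(name, "simulated range:", round(float(X.min()), 3), "to", round(float(X.max()), 3))
```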
The three diffusion models satisfy Assumption <ref>  and are used to calibrate the numerical constant κ of the penalty function given in Theorem <ref>  As we already know, the adaptive estimator of σ^2 on the interval [-√(log(N)), √(log(N))] necessitate a data-driven selection of an optimal dimension through the minimization of the penalized least squares contrast given in Equation (<ref>) . Since the penalty function pen(d_N)=κ (K_N+M)log^2(N)/N^2 depends on the unknown numerical constant κ>0, the goal is to select an optimal value of κ in the set 𝒱={0.1,0.5,1,2,4,5,7,10} of its possible values. To this end, we repeat 100 times the following steps: * Simulate learning samples D_N and D_N^' with N∈{50,100}, N^'=100 and n ∈{100, 250} * For each κ∈𝒱: * For each K_N∈𝒦 and from D_N, compute σ^2_d_N,L_N given in Equations (<ref>) and (<ref>). * Select the optimal dimension K_N∈𝒦 using Equation (<ref>)  * Using the learning sample D_N^', evaluate σ^2_d_N,L_N-σ^2_A^2_n,N^' where d_N=K_N+M. Then, we calculate average values of σ^2_d_N,L_N-σ^2_A^2_n,N^' for each κ∈𝒱 and obtain the following results: We finally choose 5∈𝒱 as the optimal value of κ in reference to the results of Figure <ref> . §.§ Proof of Lemma <ref>  We obtain from Comte,Genon-Catalot,Rozenholc (2007) proof of Lemma 3 that for each j∈[[1,N]], k∈[[0,n-1]] and p∈ℕ∖{0,1} 𝔼[exp(ug(X^j_kΔ)ξ^j,1_kΔ-au^2g^2(X^j_kΔ)/1-bu)|ℱ_kΔ]≤ 1 with a=e(4σ^2_1c^2)^2, b=4σ^2_1c^2eg_∞, u∈ℝ such that bu<1 and c>0 a real constant. Thus, ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ξ^j,1_kΔ≥ε, g^2_N,n≤ v^2)=𝔼(1_{∑_j=1^N∑_k=0^n-1ug(X^j_kΔ)ξ^(j,1)_kΔ≥ Nnuε}1_g^2_n,N≤ v^2) =𝔼(1_{exp(∑_j=1^N∑_k=0^n-1ug(X^j_kΔ)ξ^(j,1)_kΔ)e^-Nnuε≥ 1}_g^2_N,n≤ v^2) ≤e^-Nnuε𝔼[_g^2_n,N≤ v^2exp{∑_j=1^N∑_k=0^n-1ug(X^j_kΔ)ξ^j,1_kΔ}]. It follows that, ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ξ^j,1_kΔ≥ε, g^2_n,N≤ v^2) ≤  exp{-Nnuε+Nnau^2v^2/1-bu}. We set u=ε/ε b+2av^2. Then, we have -Nnuε+Nnav^2u^2/(1-bu)=-Nnε^2/2(ε b+2av^2) and, ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ξ^j,1_kΔ≥ε, g^2_n,N≤ v^2) ≤exp(-Nnε^2/2(ε b+av^2)) ≤exp(-CNnε^2/σ^2_1(εg_∞+4σ^2_1v^2)) where C>0 is a constant depending on c>0. §.§ Proof of Lemma <ref>  Set K_n,N = K_N since N ∝ n. Let us remind the reader of the Gram matrix Ψ_K_N given in Equation (<ref>), Ψ_K_N=[1/Nn𝐅^'_K_N𝐅_K_N]=(Ψ_K_N) where, 𝐅_K_N:= ((B_ℓ(X^j_0),…,(B_ℓ(X^j_(n-1)Δ)))_1 ≤ j ≤ N0 ≤ℓ≤ K_N-1∈ℝ^Nn× (K_N+M) The empirical counterpart Ψ is the random matrix given by Ψ_K_N of size (K_N+M) × (K_N+M) is given by Ψ_K_N:=1/Nn𝐅^'_K_N𝐅_K_N=(1/Nn∑_j=1^N∑_k=0^n-1f_ℓ(X^j_kΔ)f_ℓ^'(X^j_kΔ))_ℓ,ℓ^'∈[-M,K_N-1]. For all t=∑_ℓ=-M^K_N-1 a_ℓ B_ℓ,M, u∈ S_K_N, M one has t_n,N^2 = a^'Ψ_K_N a and t_n^2 = a^'Ψ_K_N a, with a=(a_-M,⋯,a_K_N-1)^'. Under Assumption <ref>, we follow the lines of  <cit.> Proposition 2.3 and Lemma 6.2. Then, sup _t ∈ S_K_N,M,t_n=1|t_n,N^2-t_n^2| = sup _w ∈^K_N+M,Φ_K_N^1 / 2 w_2, K_N+M=1|w^'(Ψ_K_N-Ψ_K_N) w| = sup _u ∈ℝ^K_N+M,u_2, K_N+M=1|u^'Ψ_K_N^-1 / 2(Ψ_K_N-Ψ_K_N) Ψ_K_N^-1 / 2 u| = Ψ_K_N^-1 / 2Ψ_K_NΨ_K_N^-1 / 2-Id_K_N+M_op. Therefore, Ω_n, N, K_N^c={Ψ_K_N^-1 / 2Ψ_K_NΨ_K_N^-1 / 2-Id_K_N+M_op > 1 / 2}. Since A_N = o(√(log(N))), we obtain from <cit.>, proof of Lemma 7.8, there exists a constant C>0 such that (Ω^c_n,N,K_N)≤ 2(K_N+M)exp(-C log^3/2(N)). Finally, since 2(K_N+M)exp(- (C/2) log^3/2(N)) ⟶ 0 as N ⟶ +∞, one concludes from Equation (<ref>) and for N large enough, (Ω^c_n,N,K_N)≤ Cexp(- c log^3/2(N)) where c >0 and C>0 are new constants.
http://arxiv.org/abs/2307.05671v1
20230711180002
Hourglass-Like Spin Excitation in a Doped Mott Insulator
[ "Jia-Xin Zhang", "Chuan Chen", "Jian-Hao Zhang", "Zheng-Yu Weng" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.supr-con" ]
figures/ These authors contributed equally to this work [email protected] Institute for Advanced Study, Tsinghua University, Beijing 100084, China These authors contributed equally to this work Institute for Advanced Study, Tsinghua University, Beijing 100084, China Department of Physics, The Pennsylvania State University, University Park, Pennsylvania 16802, USA Institute for Advanced Study, Tsinghua University, Beijing 100084, China We examine the dynamical magnetic response in a two-component resonating-valence-bond (RVB) description of the doped Mott insulator. The half-filled antiferromagnetic phase described by the Schwinger-boson mean-field theory will evolve into a bosonic-RVB state in the superconducting phase upon doping, where the doped holes introduce another fermionic itinerant spinon which forms a BCS-like RVB order. The spin excitations are thus composed of a resonance-like mode from the former and a weak dispersive mode from the itinerant component at the mean-field level. These two-component spinons are shown to give rise to an hourglass-like spin excitation at the RPA level via an antiferromagnetic coupling between the two modes, which provides an unconventional explanation of the experimental observations in the cuprate. In particular, we also discuss an instability towards an incommensurate magnetic order in this theoretical framework. Hourglass-Like Spin Excitation in a Doped Mott Insulator Zheng-Yu Weng August 12, 2023 ======================================================== Introduction.—The spin dynamics is essential for understanding the mechanism of the cuprate superconductor, which reduces to the only relevant low-lying mode in the undoped limit <cit.>. At finite doping, the dynamic spin susceptibility measured by the inelastic neutron scattering (INS) reveals that the gapless spin-wave <cit.> at the antiferromagnetic (AFM) wave vector Q_0=(π,π) becomes gapped with the destruction of the AFM long-range order. The spin excitation further displays a resonance-like mode <cit.> with a characteristic energy E_g. Slightly deviating from Q_0, the resonance mode splits and extends to both higher and lower energies to result in the well-known hourglass-shaped spectrum <cit.>. Phenomenologically, two distinct starting points have been commonly employed to describe the experimentally observed dynamical spin susceptibility. One is based on the itinerant magnetism approach <cit.>, where the spin resonance formation below T_c originates from the enhanced feedback effect of the d-wave superconductivity for quasiparticles with a large Fermi surface. Alternatively, the local moment approach <cit.> starts with the undoped two-dimensional (2D) AFM state by examining a mixture of local spins described by the superexchange interaction J and itinerant carriers with tight-binding energy dispersion. Microscopically, the parent compound of the cuprate acts as a Mott insulator, in which all the electrons form local magnetic moments as described by the minimal AFM Heisenberg model at half-filling. How such an AFM state can be doped into a short-range AF state at finite doping has been a central issue in the study of the doped Mott insulator, which is described by an effective one-band model, e.g., the t-J model <cit.>. The fermionic RVB state was originally proposed by Anderson <cit.> is one of the conjectures for such a phase, which results in a d-wave Superconducting (SC) instability at low temperatures <cit.>. 
Nevertheless, this fermionic RVB state seems incompatible with the Schwinger-boson or bosonic RVB description<cit.> of the AFM state at half-filling, and how to bridge the two phases still remains unclear <cit.>. Recently, a two-component RVB description has been proposed<cit.>, which theorizes doping an AFM state into a short-range AF state with an intrinsic low-temperature SC instability. Here the AFM phase is well characterized by the Schwinger-boson mean-field state at half-filling, which is then turned into a bosonic RVB state by doping due to the phase-string effect<cit.> generally associated with a doped Mott insulator. The latter will lead to a nontrivial spin-current backflow created by doped holes moving in a spin singlet background<cit.>. The resulting spin current, in combination with the doped holes, gives rise to distinct spinons which are fermionic and itinerant in nature<cit.>. In this paper, we study an unconventional spin excitation in the doped Mott insulator at finite doping as the consequence of such a two-component RVB description. At the RPA level, such a new spin excitation is hourglass-like, which is composed of the bosonic spinons evolved from the Schwinger bosons at half-filling and the itinerant fermionic spinons emerging upon doping. The result is consistent with the INS observations<cit.> in the cuprate. Further physical implications are also discussed. Emergent two-component RVB description at finite doping.— Starting from the half-filling by doping, a two-component RVB description of the short-range AF state has been recently proposed<cit.> based on the t-J model, whose ground state is given by |Ψ_G⟩=P̂[e^iΘ̂|Φ_h⟩⊗|Φ_a⟩⊗|Φ_b⟩]  . Here |Φ_b⟩ originated from the Schwinger-boson mean-field state at half-filling and is known as the bosonic RVB state[shown by blue thick lines in fig_GS(b)], |Φ_a⟩ is a BCS-like state[shown by blue wave lines in fig_GS(b)] formed by the fermionic spinons which are introduced by the doped holes, and |Φ_h⟩ describes a Bose-condensed state of the bosonic holons which are also introduced by the doped holes as carrying electric charges. The unitary operator e^iΘ̂ in Eq. (<ref>) is a duality transformation to implement the so-called phase-string effect<cit.>, which is very singular as created by the doped holes. The projection operator P̂ further enforces the constraint between the three fractionalized sub-systems in Eq. (<ref>) by n_i^h S_b^z(r_i)=-S_a^z(r_i), in which n_i^h is the holon number at site i, and S_a^z and S_b^z denote the z-component spins of the a-spinon and b-spinon, respectively. Physically, Eq.(<ref>) means the half-filled b-spinons at the hole sites must be compensated by the a-spinons, whose number is equal to the hole number[depicted in fig_GS(a)]. Previously, the individual behaviors for |Φ_b⟩, and |Φ_a⟩ have been studied<cit.>, whose results will be first given in the following. Then the effect of P̂ in Eq. (<ref>) will be further incorporated at the RPA level. Local moments.— At half-filling, the ground state of the Heisenberg Hamiltonian is well described by the Schwinger-boson mean-field state<cit.>, which will evolve into the short-range AF state |Φ_b⟩ at finite doping as outlined above[cf. blue thick line in fig_GS(b)]. In contrast to conventional Schwinger bosons with continuous spectra <cit.>, the b-spinons in this study exhibit dispersionless, “Landau-level-like” discrete energy levels with a gap E_s <cit.>. 
Consequently, the corresponding low-lying dynamical spin susceptibility originating from the lowest Landau level is given by as <cit.> χ_b(iν_n,Q)=chib.png4pt = a_c^2 D e^-a_c^2/2(Q-Q_0)^2 ×(1/iΩ_n-E_g-1/iΩ_n+E_g), where E_g=2E_s represents the resonance energy, the “cyclotron length” a_c=a / √(πδ) determines the effective spin-spin correlation length[a for lattice constant, δ for doped hole density], and the weight D is not sensitive to doping <cit.>. As depicted in fig_chi(a), the spin-wave excitation, derived from the imaginary component of chib, becomes a gapped resonance-like mode near Q_0=(π,π). Itinerant spinons.— The doped holes are created by removing spins from the half-filling spin-singlet background characterized by |Φ_b⟩. The doping introduces new spinons centered at the hole sites known as the a-spinons [the yellow arrows in fig_GS(a)], which form the itinerant RVB state |Φ_a⟩ in Eq. (<ref>) [cf. blue wave line in fig_GS(b)]. The a-spinons as fermions form the multi-pocket Fermi surfaces illustrated in fig_GS(c), which are determined by: H_a = ∑_K, kϵ_K(k) a_K+k, σ^† a_K+k, σ +∑_K, kΔ_a a_K+k, ↑^† a_K-k, ↓^†+ h.c. . Here a_K+k, σ^† denotes the creation operator for an itinerant a-spinons from pockets K=Γ, X, M with relative momentum k[depicted in fig_GS(c)], whose band energy reads ϵ_K(k)=k^2/2 m_a-μ_a. The Δ_a term characterizes the uniform s-wave pairing within all pockets. We also assume identical parabolic band structures for all pockets as shown in fig_GS(d), implying a consistent effective mass m_a and chemical potential μ_a. This model aligns with hopping fermions in the π flux states, displaying well-nested, distinct pockets <cit.>. Importantly, the Luttinger sum rule for itinerant a-spinons, which arise from doped holes, is associated with the doping density δ, represented as ∑_k,σn_k,σ^a/N=δ [where n_k,σ^a denotes the a-spinon number operator and N denotes the total number of sites], rather than half-filling as in conventional spin liquids <cit.>. This relationship determines the chemical potential μ_a. The dynamical spin susceptibility of itinerant a-spinons is defined as χ_a(r_i-r_j)=⟨ S_a^z(r_i) S_a^z(r_j)⟩, with r_i=(τ_i, r_i) representing the time-space vector. The χ_a can be formulated in the frequency-momentum space as: χ_a(i v_n,q)=chia.png24pt=-1/2 N∑_k(1-Δ_a^2+ϵ_k+qϵ_k/E_k+q E_k) ×(1/i v_n-E_k+q-E_k-1/i v_n+E_k+q+E_k), where the term in the first parenthesis represents the coherence factor due to BCS-type pairing and the solid line Ga.png6pt formally denotes the a-spinon propagator. The q in chia denotes the momentum deviation from all the nesting vectors, such as (0,0), (π,π), (0,π), or (π, 0), and it can be easily verified that they are identical. The dynamic spin susceptibility is given by Imχ(ν + i0^+, q) after the analytic continuation iν_n →ν + i0^+, as depicted in fig_chi(b). The spin spectrum around the AFM wave vector Q_0, contributed by the scattering between Γ(M_x) and X(M_y) pockets, exhibits a continuum above the gap 2Δ_a. A significant feature is the complete disappearance of the weight at exact Q_0=(π, π) due to the coherence factor effect <cit.> of the uniform s-wave pairing, i.e., 1-(Δ_a^2+ϵ_k+qϵ_k)/E_k+q E_k 0, which is crucial in yielding an “hourglass” dispersion in the subsequent results. Hybrid model.— So far at the mean-field level, two-component a and b spinons are separated. 
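Before the two spinon components are coupled, the coherence-factor suppression noted above can be checked directly: the sketch below evaluates Im χ_a of Eq. (chia) on a small momentum grid with a Lorentzian broadening and compares the spectral weight exactly at Q_0 (q = 0) with that slightly away from it. The band parameters, the gap and the broadening are illustrative assumptions rather than the fitted values quoted below.

```python
import numpy as np

# Illustrative parameters (not the fitted values of the text): parabolic pocket, s-wave gap.
m_a, mu_a, Delta_a, eta = 1.0, 0.05, 0.06, 0.005
k = np.linspace(-0.6, 0.6, 201)
KX, KY = np.meshgrid(k, k)

def eps(kx, ky):
    return (kx ** 2 + ky ** 2) / (2.0 * m_a) - mu_a

def im_chi_a(nu, qx, qy):
    """Im chi_a(nu, q) of Eq. (chia); (qx, qy) is the deviation from the nesting vector."""
    e1, e2 = eps(KX, KY), eps(KX + qx, KY + qy)
    E1, E2 = np.sqrt(e1 ** 2 + Delta_a ** 2), np.sqrt(e2 ** 2 + Delta_a ** 2)
    coh = 1.0 - (Delta_a ** 2 + e1 * e2) / (E1 * E2)          # BCS coherence factor
    delta = (eta / np.pi) / ((nu - E1 - E2) ** 2 + eta ** 2)  # Lorentzian-broadened delta
    return np.pi * np.sum(coh * delta) / (2.0 * KX.size)

nus = np.linspace(0.0, 0.5, 120)
print("max spectral weight exactly at Q_0 (q = 0):", max(im_chi_a(w, 0.0, 0.0) for w in nus))
print("max spectral weight slightly away (q = 0.1):", max(im_chi_a(w, 0.1, 0.0) for w in nus))
```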
At the next step, the local spin constraint Sab will be incorporated at the RPA level via the following local coupling, which is given by: H_int= g ∑_i S_a^z(r_i) S_b^z(r_i), where g>0 represents the strength of this effective interaction. At the RPA level, the dynamical spin susceptibility based on Hint can be diagrammatically expressed as: χ^RPA(q) = Dyson.png22pt = χ_b(q)/1-g^2 χ_a(q) χ_b(q). The low-energy spin spectrum, Imχ^RPA(q), around the AFM wave vector Q_0 is depicted in fig_chiRPA(a) at δ=0.1, resembling the well-known “hourglass” spectrum observed in INS<cit.>[with experimental results<cit.> marked by yellow points in fig_chiRPA(a)]. In details, the lower branch of the “hourglass” can be interpreted as the resonance modes[shown in fig_chi(a)] originating from local moments, influenced by itinerant spin modes[displayed in fig_chi(b)] through the “level repulsion” of RPA correction, resulting in the transfer of spectral weight to lower energy around the Q_0. It is essential to emphasize that the resonance mode at the exact Q_0-point with characteristic energy E_g remains protected without any spectral weight transfer. This protection results from the complete disappearance of the a-spinon dynamical spin susceptibility χ_a at this momentum due to the coherence factor effects discussed earlier. On the other hand, the spin fluctuation from fermionic itinerant a-spinons near Q_0 is enhanced with the aid of that from local moments via the term 1-g^2 χ_a(q) χ_b(q) in RPA correction chiRPA, leading to the upper branch in fig_chi(b), which is relatively comparable to the lower branch primarily contributed by local moments. Additionally, the frequency slices of the calculated spin fluctuation spectrum for χ^RPA around Q_0 displayed in fig_chiRPA(b)-(d) exhibit circular features deviating from E_g. This is distinct from the experimentally observed four weight peaks<cit.> marked by yellow points in fig_chiRPA)(d), suggesting that a higher-order correction might be needed to enhance them. It is worth noting that all phenomenological parameters in our model include the resonance energy E_g, determined directly by the peak of weight in INS<cit.>, as well as m_a and Δ_a for fermionic itinerant a-spinons, and the coupling strength g. In this study, at δ=0.1, we choose 2Δ_a = 1.1 E_g, m_a= 1/J, and g=60meV to fit the experimental data, with J=120 meV representing the bare spin exchange interaction. Also, the doping evolution of m_a can be inferred from the relative change in the residual uniform spin susceptibility at low temperatures under strong magnetic fields<cit.>, the relationship with m_a will be discussed in subsequent sections. Furthermore, we show that the existence of the hourglass structure is insensitive to the specific choice of these parameters<cit.>, as long as the gap 2Δ_a does not differ too much from the resonance energy E_g. Incommensurate magnetic instability.— When the coupling strength g approaches a critical value g_c, sign changes in static susceptibility become possible, i.e., Reχ^RPA(ω=0, Q_in)<0 as illustrated in fig_stripe(b), at incommensurate momenta Q_in≡Q_0 + Δq, alongside the gapless spin excitation shown in fig_stripe(a) stemming from the extension of the lower branch of the “hourglass” structure[with Q_in marked by red arrows in fig_stripe(a)]. 
This results in the emergence of incommensurate magnetic instability with wave vectors Q_in, which may be associated with stripe order<cit.> once circular gapless modes further break rotational symmetry and select a specific direction due to higher-order corrections. Furthermore, the determination of the deviating incommensurate wave vector Δq for magnetic instability is related to the pocket size of itinerant a-spinon and the width of resonance modes, both of which increase with the rise in doping density δ. As depicted in fig_stripe(c), the doping evolution of Δq is consistent with experimental and theoretical conclusions<cit.>, i.e., 2πδ as indicated by the dashed line. Unifrom susceptibility.— The uniform static susceptibility in our study is contributed by both a-spinons and b-spinons, denoted as χ^loc=χ_b^loc+χ_a^loc. Due to the existence of an energy gap for both a-spinons and b-spinons, the uniform static susceptibility χ^loc appears to be significantly suppressed at temperatures close to zero. Nonetheless, in a specific situation where a strong magnetic field is applied, it is possible to suppress Δ_a at the conventional vortex cores mediated by the emergent U(1) gauge field between holons and a-spinons from the constraint Sab<cit.>. Consequently, a finite DOS of 𝒩(0)=a^2/2 πħ^2 m_a from the gapless Fermi pockets of a-spinon can be restored at these vortex cores, resulting in a finite residual χ_a^loc∝𝒩(0) at low temperatures in cuprates, which is in agreement with the observed NMR results<cit.>. Further details regarding the temperature evolution of χ^loc can be found in Ref. SM. In addition, our previous work<cit.> suggests that the emergence of gapless a-spinon Fermi pockets when Δ_a is suppressed by strong magnetic fields can also account for the observed linear-T heat capacity<cit.> and the quantum oscillations<cit.> associated with pocket physics. Discussion.—The hourglass-like spin excitation has been discussed as the consequence of a two-component RVB description of the doped Mott insulator at finite doping. Here two-component spinons characterize the local and itinerant spin moments emerging upon doping the single-band t-J model, in contrast to the single-component spinon in the original RVB theory proposed by Anderson<cit.>. Note that the separation of itinerant spins (electrons) and local moments is a natural concept in multi-band systems such as the heavy fermion systems with Kondo coupling <cit.> and iron-based superconductors with Hund's rule coupling <cit.>, where the mutual interaction between the two degrees of freedom produces the correct low-lying spin excitations. In the present study, the emergence of two distinct spin components is due to the unique strong correlation effect within a single-band system that results in fractionalization. Specifically, the itinerant fermionic a-spinons carry the spin degrees of freedom associated with hopping holes, while the b-spinons describe the background local moments persisting from the half-filling. The interaction between these two components, as described in Sab, arises from the no-double-occupancy constraint in the t-J model. In our study, the hourglass spectrum uniquely relies on the coherence factor effect <cit.> of the s-wave pairing Δ^a of the itinerant spinons. 
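As a rough numerical illustration of how the RPA form in Eq. (chiRPA) bends the lower branch below E_g away from Q_0 while leaving the resonance at Q_0 intact, the sketch below combines the analytic χ_b of Eq. (chib) with a brute-force evaluation of Eq. (chia) and locates the peak of Im χ^RPA along a momentum cut. All parameters are toy values in units of J, and the overall sign convention is fixed so that both spectral functions are positive for ω > 0; this is not the parameter set used for the figures.

```python
import numpy as np

# Toy parameters in units of J (assumed, not the fitted set): E_g, 2*Delta_a = 1.1*E_g, coupling g.
delta, Eg = 0.10, 0.30
Delta_a, m_a, mu_a = 0.55 * Eg, 1.0, 0.05
g, eta, D = 0.5, 0.01, 1.0
a_c2 = 1.0 / (np.pi * delta)                       # a_c^2 = a^2 / (pi * delta), with a = 1

k = np.linspace(-0.6, 0.6, 121)
KX, KY = np.meshgrid(k, k)
eps1 = (KX ** 2 + KY ** 2) / (2.0 * m_a) - mu_a

def chi_a(w, qx, qy):                              # Eq. (chia), analytically continued, i nu -> w + i eta
    eps2 = ((KX + qx) ** 2 + (KY + qy) ** 2) / (2.0 * m_a) - mu_a
    E1, E2 = np.sqrt(eps1 ** 2 + Delta_a ** 2), np.sqrt(eps2 ** 2 + Delta_a ** 2)
    coh = 1.0 - (Delta_a ** 2 + eps1 * eps2) / (E1 * E2)
    z = w + 1j * eta
    return -0.5 * np.mean(coh * (1.0 / (z - E1 - E2) - 1.0 / (z + E1 + E2)))

def chi_b(w, qx, qy):                              # Eq. (chib); sign fixed so that Im chi_b > 0 for w > 0
    z = w + 1j * eta
    W = a_c2 * D * np.exp(-0.5 * a_c2 * (qx ** 2 + qy ** 2))
    return W * (1.0 / (Eg - z) + 1.0 / (Eg + z))

def chi_rpa(w, qx, qy):                            # Eq. (chiRPA)
    ca, cb = chi_a(w, qx, qy), chi_b(w, qx, qy)
    return cb / (1.0 - g ** 2 * ca * cb)

ws = np.linspace(0.05, 0.6, 220)
for dq in [0.0, 0.15, 0.30]:                       # cut through Q_0 along one direction
    spec = np.array([chi_rpa(w, dq, 0.0).imag for w in ws])
    print(f"|Q - Q_0| = {dq:.2f}: Im chi_RPA peaks at w = {ws[np.argmax(spec)]:.3f} (E_g = {Eg})")
```

With these toy values the peak is expected to remain at E_g exactly at Q_0, where χ_a vanishes by the coherence factor, and to shift downward as the cut moves away from Q_0, mimicking the lower branch of the hourglass.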
It is worth pointing out that, within this framework (in the presence of holon condensation), the superconducting order parameters have a composition structure given by ⟨ĉ_i ↑ĉ_j ↓⟩∝Δ_ij^a ⟨ e^i 1/2(Φ_i^s+Φ_j^s)⟩, where the amplitude Δ^a is s-wave-like while the d-wave pairing symmetry as well as the phase coherence arise from the phase factor e^i 1/2(Φ_i^s+Φ_j^s) contributed by the b-spinons <cit.>. Such a hidden s-wave component with a BCS-like d-wave pairing order parameter leads to a novel pairing-symmetry dichotomy, which has been revealed and discussed in recent numerical<cit.> and may have important experimental implications<cit.>. Here the phase transition near T_c is dictated by the free b-spinon excitations carrying the π-vortices <cit.>. Finally, we shall show elsewhere how the spin excitations discussed in the present work may also naturally reduce to a commensurate AFM Goldstone mode in a dilute doping limit. Acknowledgments.— We acknowledge stimulating discussions with Zhi-Jian Song, Zhen Bi, and Ji-Si Xu. J.-X.Z., C.C., and Z.-Y.W. are supported by MOST of China (Grant No. 2017YFA0302902). C.C. acknowledges the support from the Shuimu Tsinghua Scholar Program. J.H.Z. is supported by a startup fund from the Pennsylvania State University (Zhen Bi), and thanks the hospitality of the Kavli Institute for Theoretical Physics, which is partially supported by the National Science Foundation under Grant No. NSF PHY-1748958. Supplementary Materials for: “Hourglass-Like Spin Excitation in a Doped Mott Insulator” In the following supplementary materials, we provide more analytical results to support the conclusions presented in the main text. In Sec. I., we present a detailed derivation of the dynamical spin susceptibility for itinerant fermionic a-spinons, χ_a(q), as given in chia. In Sec. II., we give the discrete energy levels for bosonic b-spinons, as well as a comprehensive derivation of the corresponding dynamical spin susceptibility χ_b(q) in chib. In Sec. III., we show that the four well-nested Fermi pockets of itinerant a-spinons, discussed in the main text, are consistent with the hopping fermions in the square lattice with uniform π-flux. In Sec. IV., we reveal the existence of two types of vortex excitations in different temperature regions and provide the temperature evolution of spin susceptibility related to vortex states. In Sec. V., we display the dynamical spin susceptibility χ^RPA(q) at the RPA level with various chosen parameters, illustrating that the “hourglass” feature is not sensitive to the specific parameters. § I. DERIVATION OF DYNAMICAL SPIN SUSCEPTIBILITY FOR ITINERANT FERMIONIC A-SPINONS IN CHIA Following the order of particle–hole and pocket degrees of freedom, we arrange the a-spinon operators as: ψ_k = ([ a_k ↑; a_-k ↓^† ]) ⊗([ Γ; X ]) Ψ_k = ([ a_k ↑; a_-k ↓^† ]) ⊗([ M_x; M_y ]), where k=(iω_n, k) refers to the fermionic momentum-frequency vector. This work is primarily focused on the magnetic fluctuation around Q_0=(π, π), thus only the particle-hole scattering between two pockets shifted by Q_0 is relevant. Specifically, scattering between Γ and X pockets, or M_x and M_y pockets, is considered. Consequently, the pocket indices consist of either (Γ, X) or (M_x, M_y) combinations. 
Using such representation, the Hamiltonian in Ha can be written as H_a=∑_kψ_k^† h_kψ_k + ∑_kΨ_k^† h_kΨ_k with h_k=ϵ_kσ_z ⊗τ_0+Δ_a σ_x ⊗τ_0, where ϵ_K(k)=k^2 / 2 m_a-μ_a is the dispersion for a-spinons, and σ and τ are Pauli matrices denoting the particle-hole and pocket degrees of freedom, respectively. Therefore, the Green's function for a-spinon G_a(k) = -⟨ψ_k ψ_k^†⟩ = -⟨Ψ_k Ψ_k^†⟩ is given by: G_a(k) ≡Ga.png7pt = (iω_n σ_0 ⊗τ_0-h_k)^-1 = iω_n σ_0 ⊗τ_0+Δ_a σ_x ⊗τ_0+ϵ_kσ_z ⊗τ_0/(iω_n)^2-E_k^2, where E_k=√(ϵ_k^2+Δ_a^2) is the dispersion for a-spinon with BCS pairing. The dynamical spin susceptibility from itinerant a-spinons is defined as χ_a(r_i-r_j)=⟨ S_a^z(r_i) S_a^z(r_j)⟩. χ_a can be expressed in the frequency-momentum space as follows: χ_a(q)=-2×1/4 N∑_k Tr G_a(k+q) s_a G_a(k) s_a=chia.png24pt, where s_a=σ_0 ⊗τ_x and s_a = σ_0 ⊗τ_0 denote the magnetic fluctuation near (π, π) and (0, 0), respectively. Note that the factor 2 in chia0 arises from the summation over ψ and Ψ components. Following the Matsubara summation, the expression for the dynamical spin susceptibility becomes chia, which reads: χ_a(i v_n,q)=-1/2 N∑_k(1-Δ_a^2+ϵ_k+qϵ_k/E_k+q E_k) ×(1/i v_n-E_k+q-E_k-1/i v_n+E_k+q+E_k), where q represents the momentum deviation from (0,0) and (π,π). Furthermore, by replacing pocket indexes in psi to ([ Γ M_y ])^T and ([ Γ M_x ])^T, the dynamical spin susceptibility χ_a around (π, 0) and (0, π) can be determined, respectively. χ_a is found to be identical across all scenarios where q deviates from (0, 0), (0, π), (π, 0), or (π, π). § II. DERIVATION OF DYNAMICAL SPIN SUSCEPTIBILITY FOR BACKGROUND BOSONIC B-SPINONS IN CHIB The b-spinons in the main text are in the RVB states on a square lattice under uniform magnetic flux. The corresponding Hamiltonian can be expressed as: H_b = -J_s ∑_⟨ ij ⟩, σ b_iσ^† b_j-σ^† e^i σ A_ij^h + h.c. + λ_b ∑_i,σ ( b_iσ^† b_iσ - N ), Here the assumed gauge field A_ij^h comes from the mutual Chern-Simons interaction between holons and background b-spinons. Therefore, with the holons condensed, the RVB-pairing b-spinons experience a uniform static gauge field with a δπ flux per plaquette. Now, the pairing component can be redefined as: ∑_i,j b_i,↑^† M_i,j b_j,↓^† + b_i,↓ M_i,j b_j,↑, where M is a hermitian matrix defined as: M_i,j = -J_s e^i A_ij^h j ∈NN(i) 0 others Then, with the standard diagonalization procedure as in Hofstadter system, we obtain: H_b=∑_m, σ E_m^b γ_m σ^†γ_m σ with the b-spinons spectrum: E_m^b=√(λ_b^2-(ξ_m^b)^2) via introducing the following Bogoliubov transformation: b_i σ=∑_mω_m σ(r_i)(u_mγ_m σ-v_mγ_m-σ^†), where the coherent factors are given by u_m = √(1/2(1+λ/E_m^b)) v_m = sgn(ξ_m^b) √(1/2(-1+λ/E_m^b)). Here, ξ_m^b as well as w_m(r_i)≡ w_m σ(r_i)=w_m-σ^*(r_i) in bogo are the eigenfunctions and eigenvalues of the following equation: ξ_m^b ω_m (r_i)=-J Δ^s/2∑_j=NN(i) e^i σ A_i j^hω_m (r_j). We select the Landau gauge along the x-axis, as expressed in A_i,i+ê_y^h=-δπ i_x. The resulting b-spinon dispersion E_m^b in Emb with the unit of J_s is depicted in fig_Eb(b), which manifests the dispersionless, “Landau-level-like” discrete energy levels<cit.> with a gap E_s[labeled by the red arrow]. For comparison, fig_Eb(a) displays the continuous spectra for conventional Schwinger bosons under zero flux conditions, highlighting the low-lying propagating modes. For the sake of clear representation, we depict all quantum numbers excluding k_y simultaneously in the figures. 
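A minimal numerical sketch of this diagonalization is given below; it uses open boundary conditions, sets the prefactor JΔ^s/2 to one and fixes λ_b by hand just above the band edge (all illustrative assumptions rather than the self-consistent values), and it reproduces the Landau-level-like clustering of the low-lying E_m^b together with the gap E_s.

```python
import numpy as np

# Illustrative setting: L x L open lattice, prefactor J*Delta^s/2 set to 1, flux delta*pi per plaquette,
# Landau gauge A^h_{i, i+y} = -delta*pi*x; lambda_b is put just above the band edge by hand.
delta, L = 0.1, 20
Ns = L * L
idx = lambda x, y: x * L + y

M = np.zeros((Ns, Ns), dtype=complex)
for x in range(L):
    for y in range(L):
        if x + 1 < L:
            M[idx(x, y), idx(x + 1, y)] = -1.0                              # x-bond, no Peierls phase
        if y + 1 < L:
            M[idx(x, y), idx(x, y + 1)] = -np.exp(-1j * delta * np.pi * x)  # y-bond carries the flux
M = M + M.conj().T                                 # hermitian kernel of the eigenvalue problem (Mmat)

xi = np.linalg.eigvalsh(M)                         # xi_m^b
lam = 1.02 * np.max(np.abs(xi))                    # lambda_b slightly above max|xi| (ad hoc here)
E = np.sqrt(lam ** 2 - xi ** 2)                    # E_m^b = sqrt(lambda_b^2 - (xi_m^b)^2)
print("spin gap E_s ~", round(float(E.min()), 4), "   E_g = 2 E_s ~", round(2 * float(E.min()), 4))
print("lowest levels (Landau-level-like clustering):", np.round(np.sort(E)[:6], 4))
```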
Subsequently, using the relation S_i^b, z=1/2∑_σσ b_i σ^†b_i σ, the Matsubara spin-spin correlation function can be expressed as: χ_b(τ, r_i- r_j) =⟨T̂ S_j^b, z(τ) S_i^b, z(0)⟩_0 = 1/4∑_σσ^'σσ^'⟨T̂ b_j σ^†(τ) b_j σ(τ) b_i σ^'^†(0) b_i σ^'(0)⟩_0 = 1/4∑_σσ^'σσ^'[⟨T̂ b_j σ^†(τ) b_i σ^'^†(0)⟩_0⟨T̂ b_j σ(τ) b_i σ^'(0)⟩_0 + ⟨T̂ b_j σ^†(τ) b_i σ^'(0)⟩_0⟨T̂ b_j σ(τ) b_i σ^'^†(0)⟩_0] where ⟨⟩_0 denotes the expectation value under the mean-field state, and the Wick's theorem is applied in the last line. Then, by using the Bogoliubov transformation bogo, together with the Green's function G_γ(m, i ω_n ; σ) ≡ -⟨γ_m σ(iω_n) γ_m σ^†(iω_n)⟩_0 = 1/i ω_n-E_m^b After performing the summation over σ and replacing w_m,σ with w_m, the Matsubara spin correlation function in chib1 at T=0 can be further simplified as: χ_b(i ν_n, r_i-r_j) = -1/4∑_m,n w_m^*(r_i)w_m(r_j) w_n^*(r_j) w_n(r_i) (u_m^2 v_n^2 + v_m^2 u_n^2 - 2 u_m v_m u_n v_n) ×( 1/i ν_n - E_m - E_n - 1/i ν_n + E_m + E_n) From the second line of chib2, the dominant contribution to χ_b evidently originates from the lowest Landau level(LLL), wherein E_m = E_n = E_s, leading to u_m = u_n according to Bogofactor. Thus, the only non-vanishing contributions are from the cases where v_n = -v_m, i.e., ξ_n^b = -ξ_m^b. Moreover, according to previous works<cit.>, in the LLL, there exists N_m eigenvectors of M matrix in Mmat (N_m is the number of magnetic unit cells), with w_m(r) peaking at the center of a magnetic unit cell located at 𝐑_m. We term these localized w_m's as local modes (LM). Moreover, for each local mode w_m(r), a corresponding eigenvector w_m^*(r) = (-1)^r w_m(r) exists, and ξ_m^*^b = -ξ_m^b, therefore u_m^* = u_m and v_m^* = -v_m. We term these w_m^*'s as π-shifted modes. The Bogoliubov quasiparticles corresponding to both local and π-shifted modes possess a common energy E_s and a common u_m, but v_m differs in sign between these two classes of modes. In essence, under this approximation, the low-lying spin spectrum χ_b will be dominated by the localized b-spinon excitations, which are non-propagating modes with an intrinsic size on the order of a “cyclotron length”, a_c. These spinon wave packets with magnetic Wannier wave functions w_m^*(𝐫) or w_m(𝐫) are situated in separate magnetic unit cells and are highly degenerate, as illustrated in fig_D(a). In the summation of chib2, m and n will be either local or π-shifted modes, thus we find: χ_b(i ν_n, r_i-r_j) = 1/4 (-1)^r_i-r_j 2 | ∑_m ∈LM w_m^*(r_i) w_m(r_j) |^2 (1-4 λ_b^2/E_g^2)( 1/i ν_n - E_g - 1/i ν_n + E_g) = 1/4 (-1)^r_i - r_j e^-1/2a_c^2(r_i - r_j)^21/2π^2 a_c^4(1-4 λ_b^2/E_g^2)( 1/i ν_n - E_g - 1/i ν_n + E_g). where E_g=2E_s is the resonance energy discussed in the main text. Here, we employ the fact that: |∑_m ∈LM w_m^*(r) w_m(r') | = 1/2π a_c^2 e^-(r - r')^2/4a_c^2, where a_c = 1/√(πδ) is the cyclotron length, and we assume lattice constants to be unit, i.e., a=1 for simplicity. By executing a Fourier transformation into the momentum space, we can obtain the expression in chib: χ_b(iν_n, Q) = chib.png4pt = 1/N∑_rχ_b(iν_n, r) e^-i Q·r = 1/41/π a_c^2(1-4λ_b^2/E_g^2) e^-a_c^2/2(Q - Q_0)^2( 1/i ν_n - E_g - 1/i ν_n + E_g) = a_c^2 D e^-a_c^2/2(Q-Q_0)^2×(1/iν_n-E_g-1/iν_n+E_g), where Q_0 = (π,π) is the AFM wave vector, and D is defined as: D≡1/41/π a_c^4(1-4 λ_b^2/E_g^2). The doping dependence of the weight of χ_b is mainly contributed from a_c^2 in the last line of chib3, rather than from the value of D. 
fig_D(b) shows the doping evolution of D based on the mean-field self-consistent calculation from the prior work<cit.>, demonstrating the insensitivity of D with respect to the doping density δ. § III. BCS STATES OF FERMIONS IN A SQUARE LATTICE WITH UNIFORM Π-FLUX Assume that fermions form nearest-neighbor (NN) pairing on a square lattice with uniform π flux, as depicted in fig_Ea(a). The Hamiltonian for this setup is provided in H_π=-t_a ∑_⟨ i j⟩, σ a_i σ^† a_j σ e^-i ϕ_i j^0- Δ_a∑_⟨ i j⟩, σσ a_i σ a_j σ̅ e^i ϕ_i j^0+ h.c. +λ_a(∑_i, σ a_i σ^† a_i σ-δ N), where ϕ_ij^0 is the π-flux gauge field, while Δ_a and λ_a denote the NN pairing amplitude and the chemical potential, respectively. The latter constrains the number of fermions to equal that of doping holes. Selecting the Landau gauge displayed in fig_Ea(a) yields the dispersion of Hasup as presented in E_k,±=√((ξ_k,±)^2+Δ_k^2), where ξ_k, ± and Δ_k is the dispersion for free fermions and the s-wave BCS type pairing order parameter, as specified in ξ_k, ± = ± 2 t_a √(cos ^2 k_x+cos ^2 k_y)+μ_a Δ_k = 2 Δ_a √(cos ^2 k_x+cos ^2 k_y). The lower branch dispersion ξ_k,- from piEa is portrayed in fig_Ea(a), exhibiting well-nested Fermi pockets denoted by red circles. Here, we can understand the origin of this gapless “Fermi pockets” as follows: according to the Hasup, in the absence of pairings, free fermions are in the π-flux lattices, of which the half-filled case corresponds to the well-known π-flux state in fermionic spin liquids, with the Fermi surface shrinking to the Dirac point marked by the red arrow in fig_Ea(a). However, the number of fermions corresponds to the doping density δ, not half-filling, which results in the Dirac point transforming into a gapless Fermi pocket, as illustrated by the red circles in fig_Ea(a). Furthermore, the calculated BCS type pairing order parameter Δ^k is shown in fig_Ea(b), demonstrating a strongly momentum-dependent s-wave without sign flip. As our focus lies on the physics near the Fermi surface of ξ_k,-—namely, around (0,0), (π,0), (0,π), and (π,π)—the anisotropy of the pairing amplitude is not crucial. Finally, the next-nearest neighbor (NNN) hopping term solely opens the gap of Dirac points, as indicated by the red arrows in fig_Ea(a), implying that such further neighbor term would not affect the Fermi pockets, which are our primary concern at low energy. As a result, coupled with the features of well-nested pockets and the s-wave pairing presented in Hasup, hopping fermions on the square lattice with uniform π-flux emerge as a potential model. This model could account for the low-lying physical behaviors of a-spinons discussed in the main text. Moreover, the mean-field phase string theory in earlier work<cit.> can provide the effective Hamiltonian Hasup. § IV. VORTEX TYPES AND TEMPERATURE EVOLUTION OF UNIFORM SPIN SUSCEPTIBILITY In phase string theory, we identify two distinct types of "vortices" generated by the magnetic fields. Specifically, under holon condensation, the experimentally observed superconducting order parameters are given by: ⟨ĉ_i ↑ĉ_j ↓⟩∝Δ_ij^a e^i 1/2(Φ_i^s+Φ_j^s), with the d-wave pairing symmetry arising from the phase e^i 1/2(Φ_i^s+Φ_j^s), which is contributed by b-spinons<cit.>. The magnetic field induces a novel magnetic π-vortex core which entraps a free b-spinon, and suppresses RVB pairing Δ_s, while Δ_a remains unaffected [illustrated in Figure 1(b)]. This gives rise to a phase transition near T_c, manifesting Kosterlitz-Thouless-like behavior<cit.>. 
This behavior disrupts only the phase e^i 1/2(Φ_i^s+Φ_j^s) in cc due to the novel magnetic π-vortices. On the other hand, there also exists the conventional magnetic vortex with a quantization of 2π. This causes the phase of Δ_a in cc to twist, resulting in the unpairing of a-spinons at the vortex cores mediated by the emergent U(1) gauge field, which comes from the constraint Sab. In contrast, b-spinons remain gapped[illustrated in fig_uniform(b)]. The two vortex types appear within distinct temperature domains. At temperatures much lower than E_g/k_B, novel magnetic π-vortices may be energetically unfavorable due to the minimum b-spinon gap E_s=E_g/2 required to break an RVB pair. This is in contrast to a conventional 2π-vortex where Δ_a=0. However, near T_c, π-vortices carrying b-spinons are more readily formed under external magnetic fields, preceding the disruption of superconducting phase coherence by thermally excited spinon-vortices. Furthermore, our study investigates the contribution of both a-spinons and b-spinons to the uniform static susceptibility χ^loc, as expressed by χ^loc=χ_b^loc+χ_a^loc. To derive the uniform static susceptibility χ_b^uni for b-spinons, we introduce the external magnetic field in Hb, represented as -2 μ_B ∑_i S_i^z H. This inclusion leads to the Zeeman splitting effect in the b-spinon dispersion given by: E_m,σ^b=E_m^b-σμ_B H, where E_m^b is defined in Emb. Consequently, the total magnetic moment induced by the magnetic field from b-spinons can be expressed as: M_b=μ_B ∑_m[n_B(E^b_m,↑)-n_B(E^b_m,↓)] where n_B(ω)=1 /(e^βω-1) denotes the bosonic distribution function. Therefore, the χ_b^loc at local site is defined by χ_b^loc = M_b/N B|_H → 0, resulting in χ_b^loc=2 βμ_B^2/N∑_m n_B(E_m)[n_B(E_m)+1], The temperature evolution of χ_b^uni as described in chibuni is depicted by the black solid line in fig_uniform(c). It can be observed that χ_b^uni decreases as the temperature decreases due to the strengthening antiferromagnetic correlations, which oppose the uniform polarization of the spin. Moreover, the existence of an energy gap in b-spinons leads to the opening of a gap at low temperatures, approximately below T_c. The values of the parameters used in our calculations are determined by the mean-field self-consistent equations presented in Ref. Weng.Ma.2014, Zhang.Weng_2022. Furthermore, the uniform static susceptibility χ_a^loc for a-spinons can be derived by setting q→ 0 and μ→ 0 in chia, resulting in χ_a^uni=2/N∑_k n_F(E_k)[1-n_F(E_k)] where n_F(ω)=1 /(e^βω+1) denotes the fermionic distribution function. At low temperature, χ_a^loc in chia2 can be further simplified as a temperature-independent Pauli susceptibility directly related to the density of states (DOS) 𝒩(0) at the Fermi surface. However, itinerant fermionic a-spinons possess a BCS-type gap Δ_a, leading to the disappearance of 𝒩(0) and uniform static susceptibility χ_a^loc. However, itinerant fermionic a-spinons possess a BCS-type gap Δ_a, resulting in the disappearance of 𝒩(0) and the uniform static susceptibility χ_a^loc. Nevertheless, the application of a strong magnetic field can suppress Δ_a at conventional 2π vortex cores, leading to the restoration of a finite DOS with 𝒩(0)=a^2/2 πħ^2 m_a contributed by the gapless Fermi pockets of a-spinons. 
This restoration induces a finite residual χ_a^loc given by: χ_a^loc = 2 𝒩(0) F(T) = a^2 m_a/(πħ^2) F(T), where an additional coefficient F(T) is introduced to account for the temperature effect of the conventional 2π-vortices, which only exist below the temperature T_c. The specific expression of F(T) is irrelevant for the structure of χ_a^loc, and for simplicity of representation, we assume F(T) = [exp[(T-0.75 T_c) / 0.1T_c]+1]^-1/4. Therefore, the black dashed line in fig_uniform(c) represents χ_a^loc. As a result, the total uniform static susceptibility χ^loc in totchi under a strong magnetic field is depicted by the blue line in fig_uniform(c). Comparing it with the case without magnetic fields, in which χ_a^loc vanishes completely [shown by the black solid line in fig_uniform(c)], we observe the emergence of a finite residual χ^loc under a strong magnetic field when T<T_c. This finding is consistent with the NMR measurements.

§ V. COMPARISON OF Χ^RPA IN CHIRPA FOR VARIOUS PARAMETERS

In the main text, we select specific values for the fitting parameters at δ=0.1, namely 2Δ_a = 1.1 E_g, m_a= 1/J, and g=60 meV, to match the experimental data, where J=120 meV represents the bare spin exchange interaction. Furthermore, we present the results of χ^RPA(q) determined by chiRPA for different parameter choices at δ=0.1 in fig_fitparameter. These results clearly demonstrate that the presence of the hourglass structure is not significantly affected by the specific values of these parameters, as long as the gap 2Δ_a is not too different from the resonance energy E_g=2E_s. This condition is reasonable because the BCS-type pairing Δ_a for a-spinons originates from the RVB pairing Δ_s for b-spinons, following the relation |Δ_a|^2 ≃δ^2|Δ_s|^2.
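As a self-contained numerical illustration of the temperature dependence encoded in chibuni and chia2 (not part of the original derivation), the following Python sketch evaluates both contributions on toy spinon bands; the gapped b-spinon dispersion and the π-flux BCS a-spinon band used here are placeholders with made-up parameter values, since the actual bands follow from the mean-field self-consistent equations.

import numpy as np

# Toy parameters (illustrative only; real values come from the
# mean-field self-consistent equations).
mu_B = 1.0                               # Bohr magneton in natural units
E_s  = 0.02                              # minimum b-spinon gap E_s = E_g/2 (toy value)
t_a, Delta_a, mu_a = 0.1, 0.03, -0.35    # a-spinon hopping, pairing, chemical potential

# Brillouin-zone grid
nk = 64
kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, nk, endpoint=False),
                     np.linspace(-np.pi, np.pi, nk, endpoint=False))
N = kx.size

# Toy gapped b-spinon band and pi-flux BCS a-spinon band (lower branch)
E_b = np.sqrt(E_s**2 + (0.05 * (2 - np.cos(kx) - np.cos(ky)))**2)
xi_minus = -2.0 * t_a * np.sqrt(np.cos(kx)**2 + np.cos(ky)**2) + mu_a
Delta_k  = 2.0 * Delta_a * np.sqrt(np.cos(kx)**2 + np.cos(ky)**2)
E_a = np.sqrt(xi_minus**2 + Delta_k**2)

def n_B(E, T):   # Bose distribution
    return 1.0 / np.expm1(E / T)

def n_F(E, T):   # Fermi distribution
    return 1.0 / (np.exp(E / T) + 1.0)

def chi_b(T):    # chibuni: 2*beta*mu_B^2/N * sum n_B (n_B + 1)
    nb = n_B(E_b, T)
    return 2.0 * mu_B**2 / (T * N) * np.sum(nb * (nb + 1.0))

def chi_a(T):    # chia2: 2/N * sum n_F (1 - n_F)
    nf = n_F(E_a, T)
    return 2.0 / N * np.sum(nf * (1.0 - nf))

for T in (0.005, 0.02, 0.05, 0.1):
    print(f"T = {T:5.3f}:  chi_b = {chi_b(T):8.4f}   chi_a = {chi_a(T):8.4f}")

The b-spinon contribution collapses once T drops well below the toy gap E_s, while the a-spinon term tracks the Pauli-like weight near the pocket minima, mirroring the qualitative behavior described above.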
http://arxiv.org/abs/2307.04974v1
20230711022924
Determination of matter radius and neutron-skin thickness of $^{60,62,64}$Ni from reaction cross section of proton scattering on $^{60,62,64}$Ni targets
[ "Shingo Tagami", "Tomotsugu Wakasa", "Masanobu Yahiro" ]
nucl-th
[ "nucl-th", "nucl-ex" ]
http://arxiv.org/abs/2307.03921v1
20230708072624
Social-Mobility-Aware Joint Communication and Computation Resource Management in NOMA-Enabled Vehicular Networks
[ "Tong Xue", "Haixia Zhang", "Hui Ding", "Dongfeng Yuan" ]
eess.SP
[ "eess.SP" ]
Social-Mobility-Aware Joint Communication and Computation Resource Management in NOMA-Enabled Vehicular Networks Tong Xue, Haixia Zhang, Senior Member, IEEE, Hui Ding, and Dongfeng Yuan, Senior Member, IEEE T. Xue, H. Zhang, H. Ding and D. Yuan are all with Shandong Key Laboratory of Wireless Communication Technologies, Shandong University, Jinan, Shandong, 250061, China. T. Xue and H. Zhang are also with School of Control Science and Engineering, Shandong University, Jinan, Shandong, 250061, China (e-mail: [email protected]; [email protected]). August 12, 2023 ================================================================================ The existing computation and communication (2C) optimization schemes for vehicular edge computing (VEC) networks mainly focus on the physical domain without considering the influence from the social domain. This may greatly limit the potential of task offloading, making it difficult to fully boost the task offloading rate with given power, resulting in low energy efficiency (EE). To address the issue, this letter devotes itself to investigating a social-mobility-aware VEC framework and proposes a novel EE-oriented 2C assignment scheme. In doing so, we assume that the task vehicular user (T-VU) can offload computation tasks to the service vehicular user (S-VU) and the road side unit (RSU) by non-orthogonal multiple access (NOMA). An optimization problem is formulated to jointly assign the 2C resources to maximize the system EE, which turns out to be a mixed-integer non-convex problem. To solve the problem, we transform it into separate computation and communication resource allocation subproblems. To deal with the first subproblem, we propose a social-mobility-aware edge server selection and task splitting algorithm (SM-SSTSA) to achieve edge server selection and task splitting. Then, by solving the second subproblem, the power allocation and spectrum assignment solutions are obtained by utilizing a tightening lower-bound method and the Kuhn-Munkres algorithm. Finally, we solve the original problem through an iterative method. Simulation results demonstrate the superior EE performance of the proposed scheme. VEC, NOMA, edge server selection, task splitting, spectrum assignment, power allocation.
§ INTRODUCTION

With the booming development of intelligent vehicles and wireless communications, a variety of advanced vehicular entertainment services such as high-definition maps have emerged in vehicular networks. Many of these emerging vehicular entertainment services are computationally intensive, but the vehicular users (VUs) with constrained computation capability cannot satisfy the quality of service (QoS) of such services. To overcome this, it is crucial to utilize vehicular edge computing (VEC) technology that leverages the abundant computation resources at proximity edge servers (i.e., road side units (RSUs) and idle service vehicular users (S-VUs)) <cit.>. However, when the VUs offload tasks to the edge servers, the power consumption increases significantly. Improving the transmission rate of offloaded tasks with limited power, i.e., the energy efficiency (EE), has become a major concern in VEC networks. One feasible method is to optimize the communication resources, such as spectrum and power. In addition, designing appropriate task computation policies, such as determining where to offload the computational tasks, is another way to enhance the EE <cit.>. There are works focusing on jointly optimizing communication and computation (2C) resource allocation strategies to maximize the EE in orthogonal multiple access (OMA)-enabled VEC networks <cit.>. In addition, non-orthogonal multiple access (NOMA) has also been regarded as a potential technology to further enhance the system EE <cit.>. With the help of successive interference cancellation (SIC) at the receiver, co-channel interference can be suppressed, which enhances the system sum-rate and ultimately yields a significant improvement in the system EE. Therefore, there are works focusing on optimizing 2C resources by integrating NOMA into VEC networks<cit.>. For instance, Cheng et al. <cit.> proposed a joint optimization strategy for binary task splitting and power control to maximize EE, where the task VU (T-VU) can offload its computation task to the S-VU or RSU by NOMA. With the same goal, based on the minimum distance S-VU selection (MDSS) strategy, Wen et al. studied a NOMA-enabled three-sided matching approach to jointly optimize the task splitting and power control in cognitive vehicular networks <cit.>. The works in <cit.> focused on 2C optimization strategies based on the physical domain without considering the influence from the social domain. This may greatly limit the potential of task offloading, making it difficult to fully boost the task offloading rate with given power, resulting in a low EE. Therefore, it is indispensable to improve the system EE by designing a social-mobility-aware 2C optimization strategy. Inspired by the aforementioned analysis, this work designs a social-mobility-aware VEC framework and proposes a novel EE-oriented 2C assignment scheme. In doing so, we assume that the T-VU offloads computation tasks to the S-VU and the RSU by NOMA.
Meanwhile, to improve the resource utilization, we enable T-VUs to reuse the spectrum resource with cellular users (CUs). An optimization problem is formulated to jointly allocate the 2C resource to maximize the system EE, while guaranteeing the QoS requirements of all CUs and T-VUs. The formulated optimization problem is a mixed integer non-convex. To solve this problem, we decompose it into separated computation and communication resource allocation subproblems. To deal with the computation subproblem, we propose a social-mobility-aware edge server selection and task splitting algorithm (SM-SSTSA) to determine edge server and task splitting. Then, by solving the communication subproblem, the power allocation and spectrum assignment solutions are obtained by using a tightening lower bound method and a Kuhn-Munkres algorithm. Finally, we solve the original problem by iteratively solve the two subproblems. Simulation results demonstrate the superiority of the proposed scheme in terms of the EE. § SYSTEM MODEL AND PROBLEM FORMULATION §.§ Physical and Social Domain Model This work studies a social-mobility-aware VEC network that utilizes NOMA technology to ensure the differentiated QoS requirements for each CU and T-VU, as shown in SystemModel. In the physical domain, a macro base station (MBS) is deployed to support high-rate data transmission of U CU indexed by u∈𝒰={1,2,...,U}, and S RSUs indexed by s∈𝒮={1,2,...,S} with coverage radius r are deployed to support the computationally-intensive services of M T-VU indexed by m∈ℳ={1,2,...,M}. Each RSU is equipped with a mobile edge computing (MEC) server. Given the limited computation capability of T-VUs, we allow the T-VUs to offload computational tasks to the proximity RSUs through vehicle-to-infrastructure (V2I) links and the idle S-VUs through vehicle-to-vehicle (V2V) links. It is assumed that there are N idle S-VUs indexed by n∈𝒩={1,2,...,N}. Based on the characteristic of task offloading, this work allows the T-VU offloading tasks to the RSU server and the S-VU by utilizing NOMA. In the social domain, leveraging social relationships can help build trustworthy V2V offloading links and improve the effective task offloading rate with limited power, i.e., EE <cit.>. In this work, the social relationship graph among VUs is denoted by 𝒢=(Z,δ), where Z denotes the set of all VUs with Z=ℳ∪𝒩, and δ_m,n∈δ={δ_1,1,δ_1,2,...δ_M,N} is a binary variable representing the social relationship between the mth T-VU and the nth S-VU. If the mth T-VU agrees to share computation task with the nth S-VU, then δ_m,n=1, otherwise, δ_m,n=0. §.§ Communication and Computation Model In the NOMA-enabled VEC network, it is assumed that there are totally F available sub-channels (SCs) indexed by f∈ℱ={1,2,…,F}. Without loss of generality, we assume F = U, and each CU uses a single SC. To improve the spectrum resource utilization, the CUs and the T-VUs are allowed to share the spectrum band. It is assumed that only one V2I link and one V2V link utilize NOMA mode to share the SC occupied by one CU. 
Therefore, the signal-to-interference-plus-noise ratio (SINR) of the uth CU at the time slot t, t∈𝒯={1,2,...,T}, can be expressed as R_u(t)=∑_f∈ℱBlog_2(1+ P_u^op(t)X_u,f(t)H_u(t)/∑_m∈ℳQ_1+σ^2), where B represents the bandwidth of each SC, Q_1=( ϵ_m,1(t)+ϵ_m,2(t))P_m^thX_m,f(t)H_m,u(t), with ϵ_m,1(t) and ϵ_m,2(t) represent the power allocation coefficients from the mth T-VU to the RSU and to the S-VU at the tth time slot, respectively, P_m^th is the maximum transmit power of the mth T-VU, P_u^op(t) denotes the optimal transmit power of the uth CU at the tth time slot, σ^2 is the noise power, the binary variable X_m,f(t)∈{0,1} is defined as the spectrum assignment factor. If the mth T-VU occupies the fth SC at the tth time slot, X_m,f(t)=1, otherwise, X_m,f(t)=0. Similarly, X_u,f(t) is also a spectrum assignment indicator of the uth CU at the tth time slot. H_u(t) and H_m,u(t) are the channel and interference channel power gain of the uth CU at the tth time slot. For each NOMA-enabled V2V link and V2I link's receiver, it assumes that each receiver is able to decode the received messages via SIC, and the decoding order is based on the increasing order of channel coefficients. If H_m,s<H_m,n, the mth T-VU tends to allocate higher power to the sth RSU than that of the nth S-VU, such that ϵ_m,1>ϵ_m,2. Through the NOMA protocol, the mth V2I receiver is firstly decoded. The mth V2V link is then decoded and the co-channel interference from the mth V2I link is removed[If H_m,s>H_m,n, the mth V2V link will be firstly decoded, and the SINR of receiver will be changed.] by SIC. Therefore, the SINR of the mth V2I link's receiver (i.e, the sth RSU) at the tth time slot can be expressed as R_m,s(t)=∑_f∈ℱBlog_2(1+ ϵ_m,1(t)P_m^thX_m,f( t)H_m,s(t)/Q_2+σ^2_γ_m,f(t) ), where Q_2=∑_u∈𝒰P_u^op(t)X_u,f(t)H_u,s(t)+ϵ_m,2(t)P_m^thX_m,f(t)H_m,s(t). The SINR of the mth V2V link's receiver (i.e., the nth S-VU) at the tth time slot can be expressed as R_m,n(t)=∑_f∈ℱBlog_2(1+ X_m,f(t)Ψ_m,n(t)Q_3/Q_4+σ^2_γ_m,n,f(t)), where Q_3=ϵ_m,2(t)P_m^thH_m,n(t), Q_4=∑_u∈𝒰P_u^op(t)X_u,f(t)Ψ_m,n(t)H_u,n(t), H_m,s(t) and H_m,n(t) are the channel power gains from the mth T-VU to the sth RSU server and to the nth S-VU at the tth time slot, respectively, H_u,s(t) denotes the interference channel power gain from the uth CU to the sth RSU at the tth time slot, H_u,n(t) is the interference channel power gain from the uth CU to the nth S-VU at the tth time slot. The binary variable Ψ_m,n(t) composed of both mobility and social relationships is denoted as Ψ_m,n(t)=k_m, n(t)·δ_m,n(t), where k_m, n(t) is the mobility relationship between the mth T-VU and the nth S-VU at the tth time slot, where k_m, n(t)= {[ 1, if ρ_m, n(t)<ζ_th,; 0, otherwise, ]. where ζ_th represents the threshold of physical domain, and ρ_m,n(t) is written as ρ_m,n(t)= ψ· f(Δ d_m,n(t))+(1-ψ) · f(Δ v_m,n(t)), where Δ d_m,n(t) is the distance between T-VU and S-VU, Δ v_m,n(t) represents the difference in velocity between T-VU and S-VU, ψ∈[0,1] is the weight of the distance, f(·) is the normalized function. We define a tuple (D_m(t), C_m,β_m(t)) to characterize the task of the mth T-VU at the tth time slot, where D_m(t) is the size of the computation task, C_m is the number of CPU cycles required for computing 1-bit data, β_m(t)={β_m,1(t),β_m,2(t)}∈[0,1], β_m,1(t) represents the computing task splitting factor from the mth T-VU to the RSU server. β_m,2(t) is the computed ratio by the S-VU. 
Thus, (1-β_m,1(t)-β_m,2(t)) denotes the portion of the computing task left for local executing (i.e., the mth T-VU). Therefore, the task executing delay at the mth T-VU is D_m(t)(1-β_m,1(t)-β_m,2(t)) C_m/y_m<T_tol, where y_m (in CPU cycle/s) is the assigned computing resource for executing local tasks, T_tol denotes the maximum tolerant delay of each T-VU. The task offloading and executing delay from the mth T-VU to the RSU server and to the nth S-VU can be expressed as D_m(t)β_m,1(t)/R_m,s(t)+D_m(t)β_m,1(t) C_m/y_m,s<T_tol, D_m(t)β_m,2(t)/R_m,n(t)+D_m(t)β_m,2(t)C_m/y_m,n<T_tol, where y_m,s (in CPU cycle/s) and y_m,n (in CPU cycle/s) are the computing resource allocated to the mth T-VU served by the RSU server and the nth S-VU, respectively. The EE of the NOMA-enabled VEC networks is expressed as ξ=∑_t∈𝒯R_total(t)/P_total(t)=∑_t∈𝒯∑_n∈𝒩∑_m∈ℳ∑_s∈𝒮R_m,s(t)+R_m,n(t)/P_cir+P̃_m,n,s(t), where P̃_m,n,s(t)=κ y_m^3+ϵ_m,1(t)P_m^th+κ y_m,s^3+ϵ_m,1(t)P_m^th+κ y_m,n^3, κ is the effective switched capacitance depending on the CPU architecture, and P_cir is the circuit power consumption. §.§ Problem Formulation In this work, our objective is to maximize the EE for task offloading of the NOMA-enabled VEC network by optimizing the edge server selection Ψ, the task splitting β, the spectrum assignment X and the power allocation ϵ. Notably, Ψ, β, X and ϵ are matrices composed of variables Ψ_m,n(t), {β_m,1(t),β_m,2(t)}, {X_u,f(t), X_m,f(t)} and {ϵ_m,1(t),ϵ_m,2(t)}, respectively. Mathematically, the problem is formulated as 𝒫1:max_Ψ_m,n(t),β_m,1(t),β_m,2(t),X_u,f(t), X_m,f(t),ϵ_m,1(t),ϵ_m,2(t)ξ s. t.  Ψ_m,n(t), X_m,f(t), X_u,f(t)∈{0,1},∀ m,s,n,u,f,t, 0≤β_m(t)≤1,∀ m,t, ϵ_m,1(t)≥ 0, ϵ_m,2(t)≥ 0,ϵ_m,1(t)+ϵ_m,2(t)≤ 1,∀ m,t, R_u(t)≥ R_th,u,∀ f,u,t, ∑_f∈ℱX_u,f(t)= ∑_f∈ℱX_m,f(t)=1, ∀ u,m,t, ∑_u∈𝒰X_u,f(t)=1, ∑_m∈ℳX_m,f(t)≤1,∀ f,t, (<ref>),(<ref>),(<ref>), where R_th,u is the minimum data rate thresholds for the uth CU, constraints (<ref>)-(<ref>) list the feasible task splitting and power allocation of the T-VUs, respectively, constraint (<ref>) represents the QoS requirements of CUs, constraint (<ref>) restricts that each user (T-VU and CU) can only access to one SC, each SC can be shared by one CU and at most one T-VU according to constraint (<ref>). It is obvious that (<ref>) is a fractional programming, which can be converted into a subtractive form <cit.>. Therefore, (<ref>) is reformulated as max_Ψ_m,n(t),β_m,1(t),β_m,2(t),X_u,f(t), X_m,f(t),ϵ_m,1(t),ϵ_m,2(t)∑_t∈𝒯(R_total(t)-ξ P_total(t)). § SOLUTION OF THE EE OPTIMIZATION PROBLEM Since the communication and computation resource decision of 𝒫1 is made in each time slot and there is no interdependence among time slots, we transform the optimization problem across the whole time slots into one time slot optimization problem. But, the obtained one time slot optimization problem is still non-convex, and it is difficult to obtain the global optimal solution. As an alternative, we decompose it into 1) computation resource optimization subproblem 𝒫2 and 2) communication resource optimization subproblem 𝒫3. 𝒫2 and 𝒫3 can be given by 𝒫2:  max_Ψ_m,n(t),β_m,1(t),β_m,2(t) (R_total(t)-ξ P_total(t)) s. t.   Ψ_m,n(t),Ψ_m,s(t)∈{0,1},∀ m,s,n,   (<ref>), (<ref>), 𝒫3: max_X_u,f(t),X_m,f(t),ϵ_m,1(t),ϵ_m,2(t)(R_total(t)-ξ P_total(t)) s. t.   X_m,f(t), X_u,f(t)∈{0,1},∀ m,u,f,   (<ref>), (<ref>), (<ref>)-(<ref>). It is seen that 𝒫2 is NP-hard. To find a tractable solution, we design a heuristic SM-SSTSA as shown in Algorithm 1. 
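Before turning to 𝒫3, it may help to make the objective concrete: once the channel gains and the allocation variables are fixed, the per-slot rates R_u(t), R_m,s(t), R_m,n(t) and the EE ξ defined above are direct evaluations. The short Python sketch below does this for a single (CU, T-VU, RSU, S-VU) tuple; all numerical values are arbitrary toy inputs rather than outputs of the proposed algorithms, and the power-consumption bookkeeping (each NOMA stream's transmit power counted once) is an assumption on our part.

import numpy as np

# Toy per-slot inputs (illustrative only)
B        = 1e6            # SC bandwidth [Hz]
sigma2   = 1e-13          # noise power [W]
P_u      = 0.1            # CU transmit power [W]
P_m_th   = 0.5            # T-VU maximum transmit power [W]
eps1, eps2 = 0.6, 0.3     # NOMA power-splitting coefficients (eps1 + eps2 <= 1)
kappa    = 1e-27          # effective switched capacitance
P_cir    = 0.2            # circuit power [W]
y_m, y_ms, y_mn = 5e8, 2e9, 1e9   # CPU cycles/s at the T-VU, RSU, S-VU

# Channel power gains (toy values)
H_u, H_mu  = 2e-10, 5e-12         # CU desired link; T-VU -> CU interference link
H_ms, H_us = 8e-11, 3e-12         # T-VU -> RSU; CU -> RSU interference link
H_mn, H_un = 3e-10, 2e-12         # T-VU -> S-VU; CU -> S-VU interference link

# CU rate: interference from both NOMA streams of the co-channel T-VU
R_u = B * np.log2(1 + P_u * H_u / ((eps1 + eps2) * P_m_th * H_mu + sigma2))

# V2I rate: decoded first, sees the CU and the weaker NOMA stream
R_ms = B * np.log2(1 + eps1 * P_m_th * H_ms /
                   (P_u * H_us + eps2 * P_m_th * H_ms + sigma2))

# V2V rate: decoded after SIC removes the V2I stream, sees only the CU
R_mn = B * np.log2(1 + eps2 * P_m_th * H_mn / (P_u * H_un + sigma2))

# Power model following the structure of the consumption term in the EE
P_total = P_cir + kappa * (y_m**3 + y_ms**3 + y_mn**3) + (eps1 + eps2) * P_m_th
EE = (R_ms + R_mn) / P_total      # offloading EE for this tuple [bit/J]

print(f"R_u  = {R_u/1e6:.2f} Mbit/s")
print(f"R_ms = {R_ms/1e6:.2f} Mbit/s, R_mn = {R_mn/1e6:.2f} Mbit/s, EE = {EE/1e6:.2f} Mbit/J")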
Then, to solve the communication resource allocation subproblem, we decouple 𝒫3 into a power allocation subproblem and a spectrum assignment subproblem, which can be solved iteratively. As proved in <cit.>, when the spectrum assignment variable is fixed, (<ref>) is written as R_m,s(t)+R_m,n(t)-ξ P_total(t) ≥ b_1log_2γ_m,f(t)+c_1 _Φ_1(t) + b_2log_2γ_m,n,f(t)+c_2 _Φ_2(t) -ξ P_total(t), where b_1, b_2, c_1 and c_2 are b_1=γ̃_m,f(t)/1+γ̃_m,f(t), b_2=γ̃_m,n,f(t)/1+γ̃_m,n,f(t), c_1=log_2(1+γ̃_m,f(t))-γ̃_m,f(t)/1+γ̃_m,f(t)log_2 γ̃_m,f(t), c_2=log_2(1+γ̃_m,n,f(t))-γ̃_m,n,f(t)/1+γ̃_m,n,f(t)log_2 γ̃_m,n,f(t). Then, the lower bound of the objective function in (<ref>) can be written as max_ϵ ( Φ_1(t) + Φ_2(t) -ξ P_total(t)). Denoting ϵ_m,1(t)=2^w_m,1(t) and ϵ_m,2(t)=2^w_m,2(t), the power control subproblem can be rewritten as 𝒫4:  max_w_m,1(t),w_m,2(t)( Φ_1(t) + Φ_2(t) -ξ P_total(t)) s. t.    2^w_m,1(t)≥ 0, 2^w_m,2(t)≥ 0,∀ m,  2^w_m,1(t)+2^w_m,2(t)≤ 1,∀ m,   (<ref>),(<ref>),(<ref>). Since (<ref>) is a standard convex optimization problem, we adopt Lagrange dual decomposition to solve it. Given ϵ, the spectrum assignment subproblem is a complicated matching among CUs, T-VUs and SCs, which is proved to be NP-hard. From (<ref>)-(<ref>), the assignment between CUs and SCs is a one-to-one match. To facilitate the solution, the complex match among CUs, T-VUs and SCs is therefore transformed into a new match between CUs and T-VUs. The new spectrum assignment variable between the uth CU and the mth T-VU at the tth time slot is denoted as X_u,m(t). Therefore, the spectrum assignment subproblem can be rewritten as 𝒫5:  max_X_u,m(t)(R_total(t)-ξ P_total(t)) s. t.  X_u,m(t)∈{0,1},∀ u,m, ∑_m∈ℳX_u,m(t)≤1, ∀ u, ∑_u∈𝒰X_u,m(t)=1,∀ m, which can be solved by the Kuhn-Munkres algorithm. To solve 𝒫1, the JCCRAA is proposed as shown in Algorithm 2, which consists of solving the computation resource allocation subproblem and the communication resource allocation subproblem. In Algorithm 2, by solving Algorithm 1, Ψ and β can be obtained. Then, the analytical expressions of X and ϵ can be derived by using the tightening lower-bound method and the Kuhn-Munkres algorithm. Next, by substituting the obtained spectrum assignment and power allocation into Algorithm 1, the S-VU selection and task splitting strategies are updated. Repeating this process until convergence solves the original problem.

§ SIMULATION RESULTS AND ANALYSIS

Extensive simulations are conducted to show the performance of the proposed algorithm. It is assumed that all the users are located within a target rectangular area 1000 m× 1000 m. The simulation parameters are set according to 3GPP TR 36.885 <cit.>, where an MBS is located at the center of the area and a number of RSUs with r= 150 m are located at the roadside in the area. The number of lanes is 6, and the width of each lane is 4 m. The average inter-VU distance in the same lane is 2.5v m, with v representing the moving speed of the vehicles in meters per second. Besides, we set P_u^op=20 dBm, P_m^th= [15, 30] dBm, D_m=[10^4,10^5] bits. The impacts of the number of T-VUs, the size of the offloaded tasks, and the number of SCs on the system EE are evaluated. The obtained results are shown in Figs. <ref>-<ref>. To show the superiority of the proposed JCCRAA, three baselines are simulated and compared: 1) the NOMA-MDSS-TSCRA algorithm, which is composed of the MDSS and the proposed task splitting and communication resource allocation algorithm.
2) the RSU-SAPC algorithm, which is composed of the RSU-based offloading strategy and the proposed communication resource allocation algorithm. 3) the OMA-JCCRA algorithm, which applies the proposed JCCRA algorithm under orthogonal multiple access. The system EE for different numbers of T-VUs is shown in Sim1, from which we see that the EE decreases as the number of T-VUs increases for all the simulated algorithms. The reason is that, as the number of T-VUs increases, the competition for limited communication resources intensifies, resulting in severe co-channel interference and a degradation in EE performance. In addition, we see that the proposed NOMA-JCCRAA performs best, and with the social-mobility-aware algorithm, a gain of approximately 17%-32% can be achieved. Sim2 shows the effect of the size of the offloaded tasks at each T-VU on the EE performance. The simulation results reveal that as the size of the offloaded tasks increases, the EE decreases. This is attributed to an increase in task delay, which makes it difficult to satisfy the delay constraint and ultimately reduces the EE performance of the VEC network. From Sim3, we see that the EE increases when the number of available SCs increases from 30 to 60 for all the simulated algorithms. This is because when the number of available SCs increases, more users can occupy spectrum bands individually, improving the system EE.

§ CONCLUSIONS

This letter focused on social-mobility-aware EE maximization in VEC networks, where the T-VUs can offload their computation tasks to the S-VUs and the RSUs by NOMA. An EE maximization problem was formulated to jointly assign the 2C resources. Since the optimization turned out to be NP-hard, an iterative JCCRAA was proposed to solve it. Simulation results have shown that the proposed JCCRAA not only helps allocate the communication and computation resources appropriately, but also achieves a system EE gain of approximately 17%-32% by using the proposed social-mobility-aware strategy.
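As a closing illustration of the spectrum-assignment step, the one-to-one matching between CUs and T-VUs in 𝒫5 can be handled by an off-the-shelf Hungarian (Kuhn-Munkres) routine once a per-pair utility has been tabulated. In the Python sketch below, the utility matrix is random and is purely a placeholder for the values of (R_total - ξ P_total) obtained from the power-allocation step.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
U_cu, M_tvu = 6, 4            # numbers of CUs and T-VUs (toy sizes)

# utility[u, m]: objective value if T-VU m reuses the SC occupied by CU u.
# Random here; in the letter it comes from the power-allocation solution.
utility = rng.uniform(0.0, 1.0, size=(U_cu, M_tvu))

# Kuhn-Munkres maximizes total utility (minimize the negated matrix).
rows, cols = linear_sum_assignment(-utility)

X = np.zeros((U_cu, M_tvu), dtype=int)   # spectrum-assignment indicator X_{u,m}
X[rows, cols] = 1

for u, m in zip(rows, cols):
    print(f"T-VU {m} reuses the SC of CU {u} (utility {utility[u, m]:.3f})")
print("total utility:", utility[rows, cols].sum())

Each T-VU is paired with exactly one CU and each CU carries at most one T-VU, which matches the constraints of 𝒫5.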
http://arxiv.org/abs/2307.05127v1
20230711090513
Optimal Coordinated Transmit Beamforming for Networked Integrated Sensing and Communications
[ "Gaoyuan Cheng", "Yuan Fang", "Jie Xu", "Derrick Wing Kwan Ng" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
Optimal Coordinated Transmit Beamforming for Networked Integrated Sensing and Communications Gaoyuan Cheng, Yuan Fang, Jie Xu, and Derrick Wing Kwan Ng Part of this paper has been presented at the IEEE International Conference on Communications (ICC) 2023 <cit.>. G. Cheng, Y. Fang, and J. Xu are with the School of Science and Engineering (SSE), Future Network of Intelligence Institute (FNii), The Chinese University of Hong Kong (Shenzhen), Shenzhen 518172, China (e-mail: [email protected], [email protected], [email protected]). J. Xu is the corresponding author. D. W. K. Ng is with the School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia (e-mail: [email protected]). August 12, 2023 =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== This paper studies a multi-antenna networked integrated sensing and communications (ISAC) system, in which a set of multi-antenna base stations (BSs) employ the coordinated transmit beamforming to serve multiple single-antenna communication users (CUs) and perform joint target detection by exploiting the reflected signals simultaneously. To facilitate target sensing, the BSs transmit dedicated sensing signals combined with their information signals. Accordingly, we consider two types of CU receivers with and without the capability of canceling the interference from the dedicated sensing signals, respectively. In addition, we investigate two scenarios with and without time synchronization among the BSs. For the scenario with synchronization, the BSs can exploit the target-reflected signals over both the direct links (BS-to-target-to-originated BS links) and the cross-links (BS-to-target-to-other BSs links) for joint detection, while in the unsynchronized scenario, the BSs can only utilize the target-reflected signals over the direct links. For each scenario under different types of CU receivers, we optimize the coordinated transmit beamforming at the BSs to maximize the minimum detection probability over a particular targeted area, while guaranteeing the required minimum signal-to-interference-plus-noise ratio (SINR) constraints at the CUs. These SINR-constrained detection probability maximization problems are recast as non-convex quadratically constrained quadratic programs (QCQPs), which are then optimally solved via the semi-definite relaxation (SDR) technique. The numerical results show that for each considered scenario, the proposed ISAC design achieves enhanced target detection probability compared with various benchmark schemes. In particular, enabling time synchronization and sensing signal cancellation at the BSs is always beneficial for further improving the joint detection and communication performance. Networked integrated sensing and communications (ISAC), coordinated transmit beamforming, target detection, semi-definite relaxation, likelihood ratio test. 
§ INTRODUCTION Integrated sensing and communications (ISAC) has been recognized as one of the usage scenarios of sixth-generation (6G) wireless networks <cit.> to support emerging applications such as auto-driving, smart city, industrial automation, and unmanned aerial vehicles (UAVs) <cit.>. Specifically, ISAC allows the sharing of cellular base station (BS) infrastructures, signal processing modules, as well as scarce spectrum and power resources for the dual roles of wireless communications and radar sensing. This not only enhances the utilization efficiency of limited resources, but also enables seamless coordination and mutual assistance between communication and sensing for improving their performances with reduced costs. In particular, by enabling the joint optimization of the sensing and communication transmit waveforms, beamforming, and resource allocation, ISAC can efficiently harness the co-channel interference and provide new design degrees of freedom for enhancing system performance. Conventionally, mono-static and bi-static ISAC systems have been widely investigated in the literature (see, e.g., <cit.> and the references therein), in which one BS serves as an ISAC transceiver or two BSs operate as the ISAC transmitter and the sensing receiver, respectively. For instance, the authors in <cit.> considered the transmit beamforming in a downlink ISAC system, where a BS sends combined information-bearing and dedicated sensing signals to perform downlink multiuser communication and radar target sensing simultaneously. In particular, two joint beamforming designs were investigated in <cit.>, one aimed to match the transmit beampattern with a desired one and the other aimed to maximize the transmit beampattern gains towards desired target directions, while ensuring the communication quality of service (QoS) requirements. Also, the mean square error of the beampattern as well as the cross correlation pattern were optimized in <cit.> to enhance the performance of multiple-input multiple-output (MIMO) radar, subject to the communication QoS constraints. Moreover, a multi-antenna ISAC system adopting rate splitting multiple access (RSMA) was studied in <cit.>, in which the joint transmission of communication streams and radar sequences was jointly optimized for optimizing the ISAC performance. However, the mono-static and bi-static ISAC systems can only offer limited service coverage and the resulting sensing and communication performances may degrade when there are rich obstacles in the environment and/or when the communication users (CUs) and sensing targets are located far apart from the BS. Recently, motivated by multi-BS cooperation for communications (e.g., coordinated multi-point transmission/reception <cit.>, cloud-radio access networks (C-RAN) <cit.>, cell-free MIMO <cit.>, etc.) and distributed MIMO radar sensing <cit.>, the notions of networked ISAC <cit.> or perceptive mobile networks <cit.> have drawn significant research momentum to address the aforementioned issues. On the one hand, as compared with conventional cellular architectures, C-RAN and cell-free MIMO allow centralized signal processing at the cloud to enable cooperative transmission and reception among distributed BSs, thus effectively mitigating or even exploiting the interference originating from different CUs to enhance their communication performance <cit.>. 
On the other hand, a distributed MIMO radar is able to exploit the inherent spatial diversity of target radar cross section (RCS) by designing orthogonal waveforms, thus enhancing both the sensing accuracy in estimating target parameters and the detection probability <cit.>. Besides, the coherent processing in MIMO radar can be further adopted to acquire high-resolution target detection exploiting both time and phase synchronization among different radar transceivers <cit.>. As such, by unifying the BSs' cooperative communications and distributed MIMO radar in integrated systems, networked ISAC is envisioned to provide seamless sensing and communication coverage, efficient interference management, enhanced communication data rate, high-resolution and high-accuracy detection and estimation <cit.>, as well as reduced energy and hardware costs. In the literature, there have been a handful of prior works, e.g., <cit.>, studying networked ISAC. For instance, the authors in <cit.> studied a single-antenna networked ISAC system, in which different BSs jointly optimized their transmit power control to minimize the total transmit power consumption, while fulfilling the required individual signal-to-interference-plus-noise ratio (SINR) constraints set by the associated CUs and the estimation accuracy or Cramér-Rao bound (CRB) constraint for localizing a target. In addition, the work <cit.> considered cell-free massive MIMO for networked ISAC with regularized zero-forcing (ZF) transmit beamforming, in which the BSs jointly optimized their transmit power control over different CUs to maximize the sensing signal-to-noise ratio (SNR) while ensuring the minimum required communication SINR at the CUs. Furthermore, the work <cit.> investigated the network utility maximization problem for a multi-UAV networked ISAC system, in which multiple UAVs serve a group of CUs and cooperatively sense the target simultaneously. Despite the research progress on networked ISAC, however, these prior works only considered the design of transmit power control at single-antenna BSs <cit.> or employed the simplified regularized ZF transmit beamforming at multi-antenna BSs <cit.>. Furthermore, these works only reused the information signals for performing target sensing <cit.> or employing only one additional common dedicated sensing beam for facilitating the target detection at a known location <cit.>. To the best of our knowledge, the joint optimization of transmit information beamforming and general-rank transmit sensing beamforming in multi-antenna networked ISAC systems has not been well investigated in the literature yet. This task, however, is particularly challenging due to the following reasons. First, while the information and sensing signals at multiple BSs may cause interference at different CUs, these signals can also be jointly exploited for performing target sensing. As a result, this introduces a new tradeoff between mitigating the co-channel interference to enhance the communication performance and increasing the sensing signal strength to improve the sensing performance. Next, practical dedicated sensing signals can be generated offline and are known to each CU prior to transmission <cit.>. Therefore, this presents a new opportunity to exploit interference cancellation to enhance the SINR at the CUs. However, the impact of such interference cancellation on the networked ISAC has not been investigated thus far. 
Furthermore, distributed MIMO sensing may exploit the cross-link echo signals from one sensing transmitter to a target captured by another sensing receiver for facilitating sensing <cit.>, when perfect synchronization can be achieved between them. Indeed, investigating the impact of such synchronization on the performance of networked ISAC is an intriguing problem that remains unexplored. To address the above issues, the joint design of target detection and multiuser communication in networked ISAC is of utmost importance. This paper studies a multi-antenna networked ISAC system, in which a set of multi-antenna BSs employ coordinated transmit beamforming to serve their associated CUs and at the same time reuse the reflected wireless signals to perform joint target detection. Our main results are summarized as follows.
* To fully utilize the degrees of freedom for sensing, the BSs send dedicated sensing signals in addition to communication signals. Accordingly, to exploit the benefit of these newly introduced dedicated sensing signals, we consider two types of CU receivers: those without the capability of canceling the interference from dedicated sensing signals (Type-I receivers) and those with this capability (Type-II receivers), respectively.
* We consider two target detection scenarios depending on the availability of time synchronization among the BSs. In Scenario I, these BSs are all synchronized in time such that they can exploit the target-reflected signals over both the direct links (BS-to-target-to-originated BS links) and the cross links (BS-to-target-to-other BSs links) for joint detection. In Scenario II, these BSs are not synchronized and thus they can only utilize the target-reflected signals over their direct links for joint detection. For each of the two scenarios, we analyze the likelihood ratio test for detection and accordingly derive the detection probability subject to a required false alarm probability at any given target location, showing that the detection probability is monotonically increasing with respect to the total received reflection-signal power (over the utilized links for each scenario) at the BSs.
* Based on the derivation in each scenario and by considering each type of CU receiver, we propose the coordinated transmit beamforming design at the BSs to maximize the minimum detection probability (or equivalently the minimum total received reflection-signal power) over a particular targeted area, while satisfying the minimum SINR constraints at the CUs, subject to the maximum transmit power constraints at the BSs. These problems are recast as non-convex quadratically constrained quadratic programs (QCQPs), which are then optimally solved via the semi-definite relaxation (SDR) technique. In particular, we rigorously prove that the adopted SDRs are tight for these QCQPs and the optimal rank-one solutions for information beamforming can be properly constructed based on the optimal solution of the SDRs.
* Finally, we provide numerical results to validate the performance of our proposed designs as compared to two benchmark schemes that perform ZF information beamforming and conduct the target detection only via dedicated sensing signals, respectively. It is shown that for each scenario, the proposed ISAC design achieves a higher detection probability than the benchmark schemes. It is also shown that ensuring time synchronization among the BSs in Scenario I consistently enhances the detection performance.
Moreover, under both Scenario I and Scenario II, we show that Type-II CUs equipped with sensing interference cancellation capability outperform their Type-I counterparts and other benchmark designs in terms of detection performance, due to the higher flexibility in interference management of the Type-II CUs, which is always beneficial. The remainder of this paper is organized as follows. Section II presents the networked ISAC system model. Section III derives the detection probability under a specific false alarm probability at any target location. Section IV presents the optimal coordinated transmit beamforming optimization problems for the two considered scenarios, respectively. Section V provides numerical results to validate the performance of our proposed schemes. Section VI concludes this paper. Notations: Vectors and matrices are denoted by boldface lowercase and uppercase letters, respectively. I denotes an identity matrix with appropriate dimension. 𝔼 (·) denotes the statistical expectation. var( ·) denotes the statistical variance. For a scalar a, | a | denotes its absolute value. For a vector v, v denotes its Euclidean norm. For a matrix M of arbitrary dimension, M^T and M^H denote its transpose and conjugate transpose, respectively. ℂ^x × y denotes the space of x × y complex matrices. Re( ·) denotes the real part of a complex number, vector, or matrix. N( x, Y) and CN( x, Y) denote the real-valued Gaussian and the circularly symmetric complex Gaussian (CSCG) distributions with mean vector x and covariance matrix Y, respectively, and “∼" means “distributed as". Q(·) denotes the Q-function. rank( ·) denotes the rank of a matrix. § SYSTEM MODEL We consider a multi-antenna networked ISAC system consisting of L BSs each with N_t>1 transmit and N_r>1 receive antennas, where each BS serves the same number of K single-antenna CUs. Note that the total number of CUs is LK, let L≜{1, … ,L} and K_l ≜{1, … ,K} denote the set of BSs and the set of CUs in each cell associated with BS l, respectively. In this system, the BSs send individual messages and dedicated sensing signals to their associated CUs. At the same time, the BSs receive and properly process the reflected signals and then convey them to a central controller (CC) for joint target detection, cf. Fig. 1. As such, the multi-antenna networked ISAC system unifies the multi-antenna coordinated beamforming system for communication <cit.> and the distributed MIMO radar for target detection <cit.>, as will be detailed next. Specifically, we focus on the ISAC transmission over a communication block with duration T that consists of N symbols, where T = N T_s with T_s denoting the duration of each symbol. Here, T or N is assumed to be sufficiently large for the ease of analysis <cit.>. Let 𝒯≜ (0,T] denote the ISAC period of interest and 𝒩≜{1, …, N} the set of symbols. First, we consider the communication from the BSs to the CUs, in which the coordinated transmit beamforming is employed at these BSs. Let s̅_l,i(t) ∈ℂ denote the communication signal sent by BS l ∈ L for CU i ∈ K_l at time t ∈ T, w_l,i∈ℂ^N_t× 1 denote the corresponding transmit beamforming vector by BS l ∈ℒ, and s̅_l^r( t ) ℂ^N_t× 1 denote the dedicated sensing signal sent by BS l at time t with zero mean and covariance matrix R_l^r = 𝔼[s̅_l^r( t )s̅_l^rH( t )] ≽ 0. 
Without loss of generality, we assume r_t = rank(R_l^r). As such, we express s̅_l^r ( t ) as s̅_l^r( t )= ∑_k = 1^r_tw_l,k^rs̅_l,k^r(t), where s̅_l,k^r(t) denotes the kth waveform of BS l that is modeled by an independently generated pseudorandom signal with zero mean and unit variance, and w_l,k^r denotes the corresponding transmit beamforming vector that can be determined based on R_l^r via eigenvalue decomposition (EVD). Denote s_l,i[n], s_l,k^r[n], and s_l^r[n] as the sampled signals of s̅_l,i(t), s̅_l,k^r( t ), and s̅_l^r( t ), respectively, at each symbol n∈𝒩. Here, {s_l,i[n]} are assumed to be independent and identically distributed (i.i.d.) random variables with zero mean and unit variance. Let h_l,m,i∈ℂ^ N_t× 1 denote the channel vector from BS l∈ℒ to CU i∈𝒦_m that is located in cell m ∈ L. Then, the received signal by CU k ∈𝒦_m at cell m in symbol n ∈𝒩 is y_m,k[ n ] = h_m,m,k^Hw_m,ks_m,k[ n ] + ∑_i ∈ K_m,i ≠ kh_m,m,k^Hw_m,is_m,i[n]_intra-cell  interference +∑_l ∈ L,l ≠ mh_l,m,k^H∑_i ∈ K_lw_l,is_l,i[n]_inter-cell interference+ ∑_l ∈ Lh_l,m,k^Hs_l^r[n]_sensing interference + z_m,k[ n ], where z_m,k[ n ] ∼ CN( 0,σ ^2_c) denotes the noise at the receiver of CU k at cell m, with σ_c^2 denoting the corresponding noise power. It is observed in (<ref>) that each CU suffers from both the intra-cell and inter-cell interference, as well as the interference from the dedicated sensing signals. In practice, the noise term z_m,k[ n ] may also include the background and clutter interference <cit.>. Notice that {s̅_l^r(t)} are predetermined pseudorandom signals that can be known a priori by all the BSs and CUs. Therefore, we consider two different types of CUs: those without the capability of canceling the interference caused by the dedicated sensing signal <cit.>, referred to as Type-I receivers, and those with the capability, referred to as Type-II receivers, respectively. Type-I receivers: Each Type-I receiver k in cell m is not equipped with the capability to cancel the interference generated by the dedicated sensing signal s_l^r[n]. The SINR of CU k in cell m is given by (<ref>). Type-II receivers: Each Type-II receiver k in cell m is specifically designed for the ISAC system with the capability to cancel the interference generated by the dedicated sensing signal s_l^r[n] before decoding its desired communication signal s_m,k[n]. In this case, the SINR of CU k in cell m is given by (<ref>). Next, we consider the distributed MIMO radar detection by the L BSs via reusing both the communication signals {s̅_l,i(t)} and the dedicated sensing signals {s̅_l^r(t)} concurrently. Let (x_l, y_l) denote the location of each BS l∈L. Suppose that there is one target present at location (x_0,y_0), for which the target angle with respect to BS l is denoted by θ_l. Let a_t,l( θ _l) ∈ℂ^ N_t× 1 and a_r,l( θ _l) ∈ℂ^ N_r× 1 denote the transmit and receive steering vectors at BS l∈ℒ, respectively, where ‖a_t,l(θ _l)‖^2/N_t = ‖a_r,l(θ _l)‖^2/N_r = 1 is assumed without loss of generality <cit.>. In the practical case with uniform linear arrays (ULAs) deployed at the BSs, we have a_t,l( θ _l) = [ 1,e^j2πd_a/λsin( θ _l), … ,e^j2πd_a/λ( N_t - 1)sin( θ _l)]^T, and a_r,l(θ _l) = [ 1,e^j2πd_a/λsin( θ _l), … ,e^j2πd_a/λ( N_r - 1)sin( θ _l)]^T, where j =√(-1), while parameters d_a and λ denote the antenna spacing and wavelength, respectively.
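As a side note, the ULA steering vectors above are straightforward to generate numerically. The following Python sketch (with a made-up angle and half-wavelength spacing) also evaluates the transmit beampattern term tr(R A_l(θ)) that appears later in the detection analysis, for a beam steered towards a chosen direction; it is an illustration only, not part of the system model.

import numpy as np

def ula_steering(theta_rad, n_ant, d_over_lambda=0.5):
    # Unit-modulus entries, so that ||a||^2 / N = 1 as assumed above.
    n = np.arange(n_ant)
    return np.exp(1j * 2.0 * np.pi * d_over_lambda * n * np.sin(theta_rad))

N_t = 32
theta0 = np.deg2rad(25.0)                 # toy target angle seen from one BS
a0 = ula_steering(theta0, N_t)

# Rank-one transmit covariance steered towards theta0 (the conjugate appears
# because the model uses a_t^T(theta) w for the signal radiated towards theta).
P_max = 1.0
w = np.sqrt(P_max) * a0.conj() / np.linalg.norm(a0)
R = np.outer(w, w.conj())

# Beampattern gain tr(R A(theta)) = a_t^T(theta) R a_t^*(theta) over look angles.
for deg in (-60, -30, 0, 25, 30, 60):
    a = ula_steering(np.deg2rad(deg), N_t)
    gain = np.real(a.T @ R @ a.conj())
    print(f"look angle {deg:4d} deg : beampattern gain = {gain:7.3f}")

The gain peaks (equal to P_max N_t) at the steered direction and falls off elsewhere according to the usual ULA array factor.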
Let H_m,l = ζ̂_m,la_r,m( θ _m)a_t,l^T( θ _l) ∈ℂ^ N_r× N_t denote the end-to-end target response matrix from BS l-to-the-target-to-BS m, in which ζ̂_m,l = √(β _m,l)ζ _m,l is the reflection coefficient incorporating both the RCS ζ_m,l and the equivalent round-trip path loss β _m,l. Specifically, we assume that β _m,l = κ^2 d_ ref^4/(d_m^2d_l^2), where κ denotes the path loss at the reference distance d_ ref and d_l = √((x_l - x_0)^2 + (y_l - y_0)^2) denotes the distance between the target and BS l. As such, the received echo signal at BS m is r_m( t ) = ∑_l ∈ LH_m,l( ∑_i ∈ K_lw_l,is̅_l,i( t - τ _m,l) + s̅_l^r( t - τ _m,l) ) + z̅_m( t ), where z̅_m(t) ∼ CN( 0,σ ^2_dI) denotes the noise at the receiver of BS m and τ_m,l = 1/c(d_m + d_l) denotes the transmission delay from BS l-to-the-target-to-BS m, with c denoting the speed of light. Without loss of generality, we assume that each information signal and each dedicated sensing signal has normalized power over the block 𝒯, i.e., 1/T∫_𝒯| s̅_l,i( t ) |^2dt = 1 and 1/T∫_𝒯| s̅_l,k^r( t )|^2dt = 1. Furthermore, notice that {s̅_l,i(t)} and {s̅_l,k^r(t)} have zero mean and are independent across different CUs and different time instants. As T is sufficiently large, we have 1/T∫_𝒯s̅_l,i( t )s̅_m,k^*( t - τ)dt= 0, ∀τ, l≠ m, k≠ i, and 1/T∫_𝒯s̅_l,k^r( t )s̅_m,i^r*( t - τ)dt = 0, ∀τ, l≠ m, k≠ i, as well as 1/T∫_𝒯s̅_l,i(t) s̅_l,i^* (t-τ) dt = 0 and 1/T∫_𝒯s̅_l,i^r (t)s̅_l,i^r*(t - τ )dt = 0 for any l, i and |τ| ≥ T_s. Based on the received signals { r_m(t)} in (<ref>), the L BSs jointly detect the existence of the target, as will be illustrated in the next section.

§ DETECTION PROBABILITY AT GIVEN TARGET LOCATION

In this section, we derive the detection probability and the false alarm probability at a given target location (x_0, y_0), by particularly considering the two joint detection scenarios with and without time synchronization among the BSs, namely Scenario I and Scenario II, respectively. Scenario 1: All the BSs are synchronized in time such that the mutual delays τ_m,l are known. As such, the target-reflected signals over both the direct links (i.e., H_m,m( ∑_i ∈ K_mw_m,is̅_m,i( t - τ _m,m ) + s̅_m^r( t - τ _m,m ) ) from each BS m-to-the-target-to-originated BS) and the cross links (i.e., H_m,l( ∑_i ∈ K_lw_l,is̅_l,i( t - τ _m,l ) + s̅_l^r( t - τ _m,l ) ) from other BSs l's-to-the-target-to-BS m, ∀ l≠ m) can be exploited for joint detection. Towards this end, each BS m performs the matched filtering (MF) processing based on r_m(t) by exploiting {s̅_l,i(t)}, {s̅_l,k^r(t)}, and the delays {τ_m,l}. Accordingly, the processed signal based on s̅_l,i(t) is d_m,l,i =1/T∫_𝒯r_m( t )s̅_l,i^*( t - τ _m,l)dt = 1/T∫_𝒯H_m,lw_l,i| s̅_l,i^*( t - τ _m,l)|^2dt_ desired signal + 1/T∫_𝒯z_m( t )s̅_l,i^*( t - τ _m,l)dt_ filtered  noise = H_m,lw_l,i + ẑ_m,l,i. Similarly, the processed signal based on s̅_l,k^r(t) is d_m,l,k^r =1/T∫_𝒯r_m( t )s̅_l,k^r*( t - τ _m,l)dt = 1/T∫_𝒯H_m,lw_l,k^r| s̅_l,k^r*( t - τ _m,l)|^2dt_ desired signal + 1/T∫_𝒯z_m( t )s̅_l,k^r*( t - τ _m,l)dt _ filtered  noise = H_m,lw_l,k^r + ẑ_m,l,k^r. In (<ref>) and (<ref>), ẑ_m,l,i∼ CN( 0,σ ^2_d I) and ẑ_m,l,k^r∼ CN( 0,σ ^2_d I) denote the equivalent noise after MF processing, where σ_d^2 is the per-antenna noise power at each BS. After obtaining { d_m,l,i}_l∈ℒ and { d_m,l,k^r}_l∈ℒ, each BS m delivers them to the CC, which then performs the joint radar detection based on { d_m,l,i} and { d_m,l,k^r}. For ease of illustration, the observed signals can be stacked as (<ref>).
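To see the MF step in discrete time, the sketch below correlates a received block against one known unit-power waveform: the other independent, zero-mean waveforms average out and the output concentrates around the corresponding reflected amplitude, i.e., the role played by H_m,l w_l,i above, with residual noise variance shrinking as 1/N. The single-antenna setting and all numerical values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
N = 200_000                       # symbols per block (large-N approximation)
sigma_d = 1.0                     # receiver noise std per sample

# Known unit-power waveforms: one "probed" stream and two interfering ones.
s_probe = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
s_int1  = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
s_int2  = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Effective reflected amplitudes (stand-ins for H_{m,l} w for each stream).
alpha_probe, alpha_1, alpha_2 = 0.05 + 0.02j, 0.08 - 0.03j, -0.04 + 0.06j

noise = sigma_d * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
r = alpha_probe * s_probe + alpha_1 * s_int1 + alpha_2 * s_int2 + noise

# Matched filtering: (1/N) * sum_n r[n] s_probe^*[n] -> alpha_probe + small noise
d = np.mean(r * s_probe.conj())
print("true  alpha_probe:", alpha_probe)
print("MF    output     :", np.round(d, 4))
print("residual noise std ~ sigma_d/sqrt(N) =", sigma_d / np.sqrt(N))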
Scenario 2: In this scenario, the BSs are not synchronized in time and thus the value of transmission delay τ _m,l with respect to other BS l≠ m is not available at BS m. In this scenario, the BSs can only utilize the target-reflected signals over their direct links, i.e., (H_m,m( ∑_i ∈ K_mw_m,is̅_m,i( t - τ _m,m ) + s_m^r( t - τ _m,m ) ), ∀ m∈ℒ), for joint detection. After the MF processing similarly as in Scenario I, we have the processed signal as (<ref>), where { d_m,l,i} and { d_m,l,k^r} are defined as (<ref>) and (<ref>), respectively. §.§ Detection Probability in Scenario 1 with BSs Synchronization To start with, we define two hypotheses for target detection, i.e., H_1 when the target exists and H_0 when the target does not exist. For notational simplicity, we define α _m,l,i^c = H_m,lw_l,i and α _m,l,k^r = H_m,lw_l,k^r as the reflected communication signal and dedicated sensing signal vectors from BS l-to-the target-to-BS m when the target exists. Also, we present the correspondingly accumulated signal vector as (<ref>). Furthermore, define the noise vector in Scenario I as (<ref>). Then, based on (<ref>), we have the processed signals after the MF processing as {[ H_1:d_I = α_I + ẑ_I,; H_0:d_I = ẑ_I. ]. Next, we adopt the likelihood ratio test for target detection. Based on (<ref>), the likelihood functions of vector d_ I under the hypothesis H_1 and H_0 are respectively given by p( d_ I| H_1) = c_0exp( - 1/σ ^2_d( d_ I-α_ I)^H( d_ I-α_ I)), p( d_ I| H_0) = c_0exp( - 1/σ ^2_dd _ I^Hd _ I), where c_0 = 1/π ^N_r(K+r_t)L^2σ ^2N_r(K+r_t)L^2_d. Accordingly, the Neyman-Pearson (NP) detector is given by the likelihood ratio test<cit.>: lnp( d_ I| H_1)/p( d_ I| H_0) = 1/σ_d ^2( 2Re( α_ I^Hd_ I) - α_ I^Hα_ I) ≷_ H_0^ H_1δ, where δ denotes the threshold determined by the tolerable level of false alarm. Notice that since α_ I^Hα_ I is given, the detector in (<ref>) can be equivalently simplified as T( d_ I) = Re( α_ I^Hd_ I) ≷_ H_0^ H_1δ^', where δ^' denotes the threshold related to T( d _ I). Then, we derive the distribution of T( d_ I). Towards this end, we consider x=α_ I^Hd_ I, whose expectation and variance under hypothesis H_1 and H_0 are respectively obtained as 𝔼( x| H_0) = 0, 𝔼( x| H_1) = E _ I≜∑_l ∈ L∑_m ∈ L ( ∑_i ∈ K_lH_m,lw_l,i^2 + ∑_k = 1^r_tH_m,lw_l,k^r ^2 ) =N_r∑_l ∈ L∑_m ∈ Lζ_m,l^2β _m,l( ∑_i ∈ K_l| a_t,l^T( θ _l)w_l,i|^2 + a_t,l^T( θ _l)R_l^ra_t,l( θ _l) ) , var( x| H_0) = var( x| H_1)= σ_d ^2 E_ I. By combining (<ref>), (<ref>), and (<ref>), we have {[ x ∼ CN( 0,σ_d ^2 E_ I), H_0,; x ∼ CN( E_ I,σ_d ^2 E_ I), H_1. ]. As a result, for T( d_ I) = Re( x), it follows that {[ T( d_ I) ∼ N( 0,σ ^2_d E_ I/ . - 2), H_0,; T( d_ I) ∼ N( E_ I ,σ ^2_d E_ I/ . - 2), H_1. ]. Finally, we derive the detection probability under a required false alarm probability. Based on (<ref>) and (<ref>), we obtain the detection probability p_D^ I and the false alarm probability p^ I_FA with respect to the detector threshold δ^' as p^ I_D = Q( (δ^'- E_ I ) √(2/σ _d^2 E_ I)), p^ I_FA = Q( δ^'√(2/σ _d^2 E_ I)), respectively. Based on (<ref>), we have δ^'√(2/σ _d^2 E_ I) =Q^ - 1( p^ I_FA). By substituting this into (<ref>), we obtain the detection probability for given false alarm probability p_FA^ I as p_D^ I = Q( Q^ - 1( p_FA^ I) - √(2 E_ I/σ _d^2)). It is observed that the detection probability p_D^ I in (<ref>) is monotonically increasing with respect to E_ I in (<ref>), which corresponds to the total received reflection-signal power over both direct and cross reflection links. 
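Numerically, the mapping from accumulated reflection power to detection probability is a one-line evaluation. The Python sketch below implements p_D = Q( Q^{-1}(p_FA) - √(2E/σ_d^2) ) with SciPy's normal survival function; the -102 dBm noise power matches the simulation setup later, while the reflection-power levels E are arbitrary illustrative values.

import numpy as np
from scipy.stats import norm

def detection_probability(E, sigma_d2, p_fa):
    # p_D = Q( Q^{-1}(p_FA) - sqrt(2 E / sigma_d^2) ), with Q(.) = norm.sf(.)
    return norm.sf(norm.isf(p_fa) - np.sqrt(2.0 * E / sigma_d2))

sigma_d2 = 10 ** (-102 / 10) * 1e-3      # -102 dBm receiver noise power [W]
p_fa = 1e-3

for E_dBm in (-100, -97, -94, -91, -88):
    E = 10 ** (E_dBm / 10) * 1e-3        # accumulated reflection power [W]
    p_d = detection_probability(E, sigma_d2, p_fa)
    print(f"E = {E_dBm:4d} dBm  ->  p_D = {p_d:.4f}")

The monotone growth of p_D with E is exactly the property exploited in the beamforming designs of the next section.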
Denote A_l(θ _l) = a_t,l^ * (θ _l)a_t,l^T(θ _l),∀ l∈ℒ, E_ I is reexpressed as E_I = N_r∑_l ∈ L∑_m ∈ Lζ_m,l^2β _m,l( ∑_i ∈ K_l tr( w_l,iw_l,i^H A_l( θ _l ) ) + tr( R_l^r A_l( θ _l ) ) ). As a result, maximizing the detection performance of the system is equivalent to maximizing the received reflection-signal power E_I in (<ref>). §.§ Detection Probability in Scenario 2 without BSs Synchronization Next, we consider Scenario II without synchronization among the BSs. The detection probability in this scenario can be similarly derived as that in Scenario I, by replacing d_ I by d_ II and accordingly replacing E_ I in (<ref>) by E_ II = N_r ∑_m ∈ Lζ_m,m^2β _m,m( ∑_i ∈ K_m| a_t,m^T( θ _m )w_m,i|^2 + a_t,m^T( θ _l)R_m^ra_t,m( θ _m) ) = N_r∑_m ∈ Lζ_m,m^2β _m,m( ∑_i ∈ K_mtr( w_m,iw_m,i^HA_m( θ _m)) + tr( R_m^rA_m( θ _m)) ), where A_m( θ _m) = a_t,m^*(θ _m)a_t,m^T(θ _m),∀ m ∈ L. Based on the similar derivation procedure as in Section <ref>, we have the detection probability p_D^ II for a given false alarm probability p_FA^ II as p_D^ II = Q( Q^ - 1( p_FA^ II) - √(2E_ II/σ _d^2)). It is observed from (<ref>) that the detection probability p_D^ II is monotonically increasing with respect to the total received reflection-signal power over the direct links only, i.e., E_ II in (<ref>). Therefore, maximizing p_D^ II is equivalent to maximizing the received reflection-signal power E_ II in (<ref>). § SINR-CONSTRAINED DETECTION PROBABILITY MAXIMIZATION VIA COORDINATED TRANSMIT BEAMFORMING In this section, we design the coordinated transmit beamforming {w_l,k} and dedicated sensing signal covariance matrix { R_l^r} to maximize the minimum detection probability with a given false alarm probability p_FA over a particular targeted area, subject to the minimum SINR requirement Γ_m,k at each cell m∈ L and CU k∈ K_m, and the maximum power constraint P_max at each BS. In particular, let Q denote the targeted area for detection. To facilitate the design, we select Q sample locations from Q, denoted by (x_0^(q), y_0^(q)), ∀ q∈ Q≜{1,…, Q}. For a potential target located at (x_0^(q), y_0^(q)), we denote the target angle with respect to BS l∈ℒ as θ_l^(q) and the round-trip path loss from BS l∈ℒ to target to BS m∈ℒ as β _m,l^(q). §.§ Scenario 1 with BSs Synchronization First, we consider Scenario I with BSs synchronization. In this scenario, for given target location q, maximizing the detection probability in this scenario is equivalent to maximizing E_ I in (<ref>). Based on this observation, the SINR-constrained minimum detection probability maximization problems over the given targeted area with Type-I and Type-II receivers are formulated as ( P1): max_{w_l,i, R_l^r}   min_q∈Q f^I( {w_l,i} ,{R_l^r})  s.t.   γ _m,k^ I( {w_l,i},{R_l^r})≥Γ _m,k,   ∀ k ∈ K_m, ∀ m ∈ L,   ∑_i ∈ K_lw_l,i^2 + Tr ( R_l^r) ≤ P_max,∀ l ∈ L,   R_l^r ≽ 0, ∀ l ∈ L, and ( P2): max_{w_l,i, R_l^r}   min_q∈Q f^I( {w_l,i} ,{R_l^r})  s.t.   γ _m,k^ II( {w_l,i},{R_l^r})≥Γ _m,k,   ∀ k ∈ K_m, ∀ m∈ L,   (<ref>) and (<ref>), respectively, where f^I( {w_l,i} ,{R_l^r}) = ∑_l ∈ L∑_m ∈ Lζ _m,l^2β _m,l^(q)tr( ( ∑_i ∈ K_lw_l,iw_l,i^H + R_l^r )A_l( θ _l^(q) ) ) , (<ref>) and (<ref>) denote the minimum SINR constraints at different types of CUs and (<ref>) denotes the maximum transmit power constraints at the BSs. Notice that problems (P1) and (P2) are non-convex due to the non-convex constraints in (<ref>) and (<ref>). In the following, we apply the SDR technique to solve problems (P1) and (P2). 
Towards this end, we introduce Ω as an auxiliary optimization variable and define W_l,i =w_l,iw_l,i^H ≽0 with rank (W_l,i)≤ 1, ∀ l ∈ L, i ∈ K_l. Problems (P1) and (P2) are equivalently reformulated as ( P1.1): max_{W_l,i≽0,R_l^r ≽0,Ω} Ω     s.t. f̂^I( {W_l,i} ,{R_l^r}) ≥Ω, ∀ q ∈ Q,         ∑_l ∈ L∑_i ∈ K_ltr( h_l,m,kh_l,m,k^HW_l,i)         + ∑_l ∈ L tr( h_l,m,kh_l,m,k^HR_l^r)         + σ _c^2 ≤ ( 1 + 1/Γ _m,k )tr( h_m,m,kh_m,m,k^HW_m,k),           ∀ k ∈ K_m, ∀ m∈ L,         ∑_i ∈ K_l tr( W_l,i) + tr ( R_l^r) ≤ P_max,∀ l ∈ L,         rank (W_l,i) ≤ 1, ∀ i ∈ K_l, ∀ l ∈ L and ( P2.1): max_{W_l,i≽0,R_l^r ≽0,Ω} Ω     s.t. ∑_l ∈ L∑_i ∈ K_ltr( h_l,m,kh_l,m,k^HW_l,i) + σ _c^2         ≤( 1 + 1/Γ _m,k)tr( h_m,m,kh_m,m,k^HW_m,k),           ∀ k ∈ K_m, ∀ m ∈ L,           (<ref>), (<ref>), and (<ref>), respectively, where f̂^I( {W_l,i} ,{R_l^r}) = ∑_l ∈ L∑_m ∈ Lζ _m,l^2β _m,l^(q)tr( ( ∑_i ∈ K_lW_l,i + R_l^r )A_l( θ _l^(q) ) ) However, problems (P1.1) and (P2.1) are still non-convex due to the rank-one constraints in (<ref>). To tackle this issue, we drop these rank-one constraints and obtain the SDR version of (P1.1) and (P2.1) as (SDR1.1) and (SDR2.1) <cit.> respectively, both of them are convex and can be optimally solved by standard convex optimization program solvers such as CVX <cit.>. Let {{W_l,i^ * } ,{R_l^r * }, Ω ^ *} and {{W_l,i^**} ,{R_l^r**} ,Ω ^**} denote the optimal solutions to (SDR1.1) and (SDR2.1), respectively. Notice that the obtained W_l,i^* and W_l,i^** are generally of high ranks, which do not necessarily satisfy the rank-one constraints in (P1.1) and (P2.1). As such, we introduce the following additional step to construct the equivalent optimal rank-one solutions to (P1.1) and (P2.1). The SDR of problem (P1.1) is tight. In particular, based on the optimal solution of {{W_l,i^ * } ,{R_l^r * }, Ω ^ *} to (SDR1.1), if any of {W_l,i^ * } is not rank-one, we can always construct the equivalent optimal rank-one solution of {{W̃_l,i} ,{R̃_l^r} ,Ω̃} to (P1.1) according to the following, which achieves the same objective value as (SDR1.1): w̃_l,i = ( h_l,m,k^HW_l,i^ * h_l,m,k)^-1/2W_l,i^ * h_l,m,k, W̃_l,i = w̃_l,iw̃_l,i^H, R̃_l^r = ∑_i ∈ K_lW_l,i^* + R̃_l^r * - ∑_i ∈ K_lW̃_l,i^* Ω̃ =Ω ^* . See Appendix <ref>. Similarly as for problem (P1.1), we find the optimal solution to (P2.1) by showing that the SDR is tight in the following proposition. The SDR of problem (P2.1) is tight. In particular, based on the optimal solution of {{W_l,i^ **} ,{R_l^r ** }, Ω ^ **} to (SDR2.1), if any of {W_l,i^ **} is not rank-one, we can always construct the equivalent optimal rank-one solution of {{W̅_l,i} ,{R̅_l^r} ,Ω̅} to (P2.1) in the following, which achieves the same objective value as (SDR2.1): w̅_l,i = ( h_l,m,k^HW_l,i^**h_l,m,k)^ - 1/2W_l,i^**h_l,m,k, W̅_l,i = w̅_l,iw̅_l,i^H, R̅_l^r = ∑_i ∈ K_lW_l,i^** + R_l^r** - ∑_i ∈ K_lW̅_l,i Ω̅ =Ω ^**. The proof is similar to that in Appendix <ref>, for which the details are omitted. §.§ Scenario 2 without BSs Synchronization Next, we consider Scenario II without BSs synchronization. In this scenario, the SINR-constrained minimum detection probability maximization problems of Type-I and Type-II CU receivers are respectively formulated as problems (P3) and (P4) in the following, which are similar to problems (P1) and (P2) by replacing E_I in (<ref>) as E_II in (<ref>), respectively: ( P3): max_{w_m,i, R_m^r≽0}   min_q∈Q f^II( {w_m,i} ,{R_m^r})  s.t.   (<ref>) and (<ref>). ( P4): max_{w_m,i,R_m^r≽0}   min_q∈Q f^II( {w_m,i} ,{R_m^r})  s.t.   (<ref>) and (<ref>). 
§.§ Scenario II without BSs Synchronization Next, we consider Scenario II without BSs synchronization. In this scenario, the SINR-constrained minimum detection probability maximization problems of Type-I and Type-II CU receivers are respectively formulated as problems (P3) and (P4) in the following, which are obtained from problems (P1) and (P2) by replacing E_I in (<ref>) with E_II in (<ref>), respectively: ( P3): max_{w_m,i, R_m^r≽0}   min_q∈Q f^II( {w_m,i} ,{R_m^r})  s.t.   (<ref>) and (<ref>). ( P4): max_{w_m,i, R_m^r≽0}   min_q∈Q f^II( {w_m,i} ,{R_m^r})  s.t.   (<ref>) and (<ref>), where f^II( {w_m,i} ,{R_m^r}) = ∑_m ∈ Lζ_m,m^2β_m,m^(q) tr( ( ∑_i ∈ K_m w_m,iw_m,i^H + R_m^r )A_m( θ_m^(q) ) ). As problems (P3) and (P4) have similar structures as problems (P1) and (P2), respectively, they can also be solved optimally based on the SDR. More specifically, by introducing the auxiliary variable Ω and defining W_m,i = w_m,iw_m,i^H ≽ 0 with rank(W_m,i)≤ 1, problems (P3) and (P4) are equivalently reformulated as ( P3.1): max_{W_m,i≽0, R_m^r≽0, Ω} Ω         s.t. f̂^II( {W_m,i} ,{R_m^r}) ≥Ω, ∀ q ∈ Q,         (<ref>),  (<ref>), and (<ref>), and ( P4.1): max_{W_m,i≽0, R_m^r ≽0, Ω} Ω         s.t. (<ref>), (<ref>), (<ref>), and (<ref>), respectively, where f̂^II( {W_m,i} ,{R_m^r}) = ∑_m ∈ Lζ_m,m^2β_m,m^(q) tr( ( ∑_i ∈ K_m W_m,i + R_m^r )A_m( θ_m^(q) ) ). Then, we drop the rank-one constraints on { W_m,i} in (<ref>) to obtain the SDR versions of (P3.1) and (P4.1) as (SDR3.1) and (SDR4.1), respectively, which are convex and can be solved optimally. Note that problems (SDR3.1) and (SDR4.1) generally have high-rank optimal solutions, which may not satisfy the rank-one constraints in (<ref>) for (P3.1) and (P4.1). Fortunately, by following concepts similar to those in Propositions 1 and 2, one can show that optimal rank-one solutions to (P3.1) and (P4.1) can always be constructed. The details of the derivations are thus omitted for brevity. Comparing problem (P1.1) with (P2.1) in Scenario I (and, likewise, (P3.1) with (P4.1) in Scenario II), we observe that the feasible solution set of (P1.1) (or (P3.1)) is a subset of that of (P2.1) (or (P4.1)), but not vice versa. Thus, we conclude that problem (P2.1) (or (P4.1)) always achieves an optimal objective value no smaller than that of (P1.1) (or (P3.1)), since (P2.1) and (P4.1) enjoy larger feasible solution sets than (P1.1) and (P3.1), respectively. Moreover, via extensive simulations in the next section, we observe that for Type-I receivers without sensing signal interference cancellation, the optimal solutions to (P1.1) and (P3.1) satisfy R_l^r = 0, which shows that employing dedicated sensing signals is not necessary in this case. Intuitively, this is because the dedicated sensing signals would introduce harmful interference for communications in this case. § NUMERICAL RESULTS In this section, we provide numerical results to validate the performance of our proposed coordinated transmit beamforming designs for the multi-antenna networked ISAC system. §.§ Benchmark Schemes First, we consider the following benchmark schemes for performance comparison. * ISAC with ZF information beamforming: In this scheme, we apply coordinated ZF beamforming <cit.>. Let H̅_m,m,k = [h_m,1,1, … ,h_m,m,k-1, h_m,m,k+1, …, h_m,L,K], and let H̅_m,m,k = U̅_m,m,kΛ̅_m,m,kV̅_m,m,k^H denote the SVD of H̅_m,m,k, where the last N_t - LK + 1 columns of U̅_m,m,k, collected in U̅_m,m,k^null∈ℂ^N_t× (N_t - LK + 1), span the orthogonal complement of the column space of H̅_m,m,k. The ZF transmit beamforming at BS m for CU k is designed as w_m,k^ZF = √(p_m,k^ZF) U̅_m,m,k^null U̅_m,m,k^null H h_m,m,k/‖U̅_m,m,k^null U̅_m,m,k^null H h_m,m,k‖,   ∀ m ∈ L, ∀ k ∈ K_m, where p_m,k^ZF denotes the power allocated by BS m to CU k, which is a variable to be optimized (an illustrative numerical sketch of this null-space projection is provided at the end of this subsection). Accordingly, the power constraint at each BS m becomes ∑_k ∈ K_m p_m,k^ZF ≤ P_max.
By substituting w_m,k^ZF in (<ref>) into problems (P1.1), (P2.1), (P3.1), and (P4.1), we obtain the corresponding power allocation problems (ZF1.1), (ZF2.1), (ZF3.1), and (ZF4.1), which can be optimized similarly as in Section IV for obtaining the optimal coordinated power control solutions. * Joint detection via dedicated sensing signals: In this scheme, the BSs only employ the dedicated sensing signals for joint detection. The corresponding detection probabilities in Scenario I and Scenario II are respectively given by p_D^I = Q( Q^-1( p_FA^I) - √(2Ê_I/σ_d^2)), p_D^II = Q( Q^-1( p_FA^II) - √(2Ê_II/σ_d^2)), where Ê_I = N_r∑_l ∈ L∑_m ∈ Lζ_m,l^2β_m,l^(q) tr( R_l^r A_l( θ_l^(q) ) ), Ê_II = N_r∑_m ∈ Lζ_m,m^2β_m,m^(q) tr( R_m^r A_m( θ_m^(q) ) ), denote the corresponding received reflection-signal powers of the dedicated sensing signals. Accordingly, we optimize the joint beamforming by solving problems (P1)-(P4) with E_I and E_II replaced by Ê_I and Ê_II, respectively.
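As referenced in the description of the ZF benchmark above, the null-space projection that defines w_m,k^ZF can be sketched numerically as follows. This is a minimal illustration with randomly generated channels and arbitrary dimensions; the per-CU powers p_m,k^ZF are left to the subsequent power-allocation step, and all names are ad hoc.

import numpy as np

def zf_direction(H_bar, h_desired):
    # Coordinated ZF direction: project the desired CU's channel onto the
    # orthogonal complement of the other CUs' channels (the columns of H_bar).
    U, s, _ = np.linalg.svd(H_bar)              # full SVD; the columns of U span C^{N_t}
    r = int(np.sum(s > 1e-10))                  # numerical rank of H_bar
    U_null = U[:, r:]                           # basis of the orthogonal complement
    d = U_null @ (U_null.conj().T @ h_desired)  # projected desired channel
    return d / np.linalg.norm(d)                # unit-norm direction; scaled by sqrt(p^ZF) later

# Toy example with random channels (illustrative dimensions only).
rng = np.random.default_rng(0)
Nt, n_interf = 8, 3
H_bar = (rng.standard_normal((Nt, n_interf)) + 1j * rng.standard_normal((Nt, n_interf))) / np.sqrt(2)
h = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)
w = zf_direction(H_bar, h)
print(np.abs(H_bar.conj().T @ w))               # approximately zero: interference nulled
print(np.abs(h.conj() @ w))                      # retained gain toward the desired CU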
§.§ Simulation Results In the simulation, we consider the networked ISAC scenario with L = 3 BSs as shown in Fig. 2, where each BS serves one CU or multiple CUs. Each BS is deployed with a ULA with half-wavelength spacing between adjacent antennas. The noise powers are set as σ^2_c = -84 dBm and σ^2_d = -102 dBm. The SINR constraints at the CUs are set to be identical, i.e., Γ_m,k=Γ, ∀ k ∈ K_m, m ∈ L. The coordinates of the three BSs are set as ( 80 m, 0 m), (-40 m, 40√(3) m), and ( -40 m, -40√(3) m), respectively. The numbers of transmit and receive antennas at each BS are N_t=N_r=N_a=32. Furthermore, the path loss between each BS and CU is given by μ_l,i = κ̂[d_0/d_l,i]^ν, where κ̂ denotes the path loss at the reference distance of d_0 = 1 meter and ν denotes the path loss exponent. In addition, the targeted area is set as a square region with an area of 2 × 2=4 m^2 centered at the origin ( 0 m, 0 m), and we take Q=9 sample locations that are uniformly distributed in the targeted area. Firstly, we consider the case when there is only K=1 CU served by each BS, and thus the system has LK=3 CUs in total. In particular, we consider Rayleigh fading channels for the communication links from each BS to the CUs, and the three CUs are located at (38.85 m, -20.97 m), (-1.26 m, 44.13 m), and (-37.58 m, -23.16 m), respectively. Fig. 3 shows the detection probability p_D versus the SINR requirement Γ at the CUs in Scenario I and Scenario II, in which the maximum transmit power constraint is P_max=15 W and the false alarm probability is p_FA=10^-3. It is observed that for all the schemes, the detection probability decreases with an increasing SINR requirement. This is due to the fact that when the communication requirement becomes stringent, the BSs need to steer the transmit beamformers towards the CUs, thus leading to less power being reflected by the target location and jeopardizing the performance of detection. It is also observed that the proposed design with Type-II receivers achieves the highest detection probability among all schemes under the same channel conditions in both Scenario I and Scenario II. When Γ becomes large, the performance gaps between the proposed design and the benchmark schemes are enlarged. This shows the importance of joint communication and sensing coordinated transmit beamforming. Furthermore, with a large value of Γ, the performance achieved by Type-I CU receivers approaches that of their Type-II counterparts. This is due to the fact that in this case, more power should be allocated to information signals. As a result, the power of dedicated sensing signals and the resultant interference become smaller, thus making the gain of sensing interference cancellation marginal. Fig. 4 shows the detection probability p_D versus the false alarm probability p_FA in the two scenarios for the two types of CU receivers with P_max=15 W and Γ = 25 dB. For all the schemes, it is observed that the detection probability increases towards one as the false alarm probability becomes large, as correctly predicted by (<ref>) and (<ref>). It is also observed that the detection probability achieved in Scenario II is much smaller than that in Scenario I under the same setup. The gain of Scenario I is attributed to the joint exploitation of both direct and cross echo links, thanks to the time synchronization among different BSs. Furthermore, the schemes with Type-II receivers are observed to achieve higher detection probability than their counterparts with Type-I receivers, thus validating again the benefit of dedicated sensing signals together with interference cancellation in enhancing the networked ISAC performance. Fig. 5 shows the detection probability p_D versus the maximum transmit power budget P_max at each BS with Γ = 25 dB and p_FA=10^-3. Similar observations can be made as in Figs. 3 and 4, demonstrating the performance gains achieved by the proposed joint optimization framework. Next, we consider the case where each CU's location is specified by its angle with respect to its home BS, in which the LoS channel is considered for the communication from each BS to its respective CU. In particular, as shown in Fig. 2, the CUs in different cells are located at the same angle θ with respect to the corresponding BS, and the distance between each BS and the correspondingly associated CU is 45 m. Fig. 6 shows the detection probability p_D versus the angle θ with P_max=12 W, Γ=30 dB, and p_FA=10^-3. It is observed that when θ is close to 0^∘ and 360^∘, the scheme with Type-II receivers significantly outperforms that with Type-I receivers. This is because the CUs are located at similar angles as the targeted area in this case, and accordingly, the interference caused by sensing signals becomes severe. By contrast, when θ is close to 110^∘, such performance gap is observed to become smaller. This is due to the fact that the CUs are located at different angles from the targeted area. As a result, the information and sensing beamformers can be steered toward different directions for communication and sensing, respectively, with minimized interference. Furthermore, we consider the case where each BS serves K=3 CUs, in which the Rayleigh fading channel model is adopted from each BS to its respective CUs. The locations of the CUs are generated as shown in Fig. 7. Fig. 8 shows the detection probability p_D versus the SINR requirement Γ at each CU, where P_max=15 W and p_FA=10^-3. Fig. 9 shows the detection probability p_D versus the false alarm probability p_FA with P_max = 15 W and Γ = 15 dB. Fig. 10 shows the detection probability p_D versus the power budget P_max at each BS, with Γ = 15 dB and p_FA = 10^-3. By comparing these figures to Figs. 3, 4, and 5 for the case with K=1, it is observed that the proposed design with Type-I receivers achieves similar performance as that with Type-II receivers, which means that the gain brought by the dedicated signals becomes marginal in this case.
This is due to the fact that when there are more CUs in each cell, we have a larger number of information beams that can provide sufficient degrees of freedom for target sensing, thus making the benefit of dedicated sensing signals limited or even not necessary. It is also observed that the benchmark scheme with ZF-based information beamforming performs significantly worse than the proposed designs. This is because with more CUs, the inter-user interference becomes more severe, and thus the ZF-based design leads to degraded performance due to the limited available degrees of freedom for effective interference suppression. § CONCLUSION This paper studied the joint multi-cell communication and distributed MIMO radar detection in a networked ISAC system, in which a set of multi-antenna BSs employed the coordinated transmit beamforming to serve their associated single-antenna CUs, and at the same time utilized the dedicated sensing signals together with their communication signals for target detection. Two joint detection scenarios with and without time synchronization among the BSs were considered, for which the detection probability and the false alarm probability were derived in closed forms. Accordingly, we developed the coordinated transmit beamforming ISAC design to maximize the minimum detection probability (or equivalently the total received reflection-signal power) over a particular targeted area, while ensuring the SINR constraints at the CUs for communication. By considering the transmission of dedicated sensing signals, we introduced two types of CU receivers, Type-I and Type-II, without and with the capability of dedicated sensing interference cancellation, respectively. For the proposed non-convex optimization problems, we adopted the SDR technique to obtain the optimal joint beamforming solutions. Finally, numerical results showed that the proposed ISAC design achieved higher detection probability than other benchmark schemes. It was also shown that the presence of time synchronization among the BSs and dedicated sensing interference cancellation can further enhance the sensing and communication performances for networked ISAC systems. §.§ Proof of Proposition 1 It can be verified based on (<ref>) and (<ref>) that W̃_l,i achieves the same objective values in (P1.1) as W_l,i^ *, and satisfy the power constraints in (<ref>). Next, we verify that W̃_l,i can satisfy the SINR constraints in (<ref>) for communications. From (<ref>) and (<ref>), we obtain that h_l,m,k^HW̃_l,ih_l,m,k = h_l,m,k^Hw̃_l,iw̃_l,i^Hh_l,m,k = h_l,m,k^HW_l,i^ * h_l,m,k. Thus, it follows that ( 1 + 1/Γ _m,k )h_m,m,k^HW̃_m,kh_m,m,k = ( 1 + 1/Γ _m,k )h_m,m,k^HW_m,k^ * h_m,m,k ≥∑_l ∈ Lh_l,m,k^H( ∑_i ∈ K_lW_l,i^* + R_l^r* )h_l,m,k + σ_c ^2 = ∑_l ∈ Lh_l,m,k^H( ∑_i ∈ K_lW̃_l,i + R̃_l^r )h_l,m,k + σ_c ^2. The first equality follows from (<ref>), the inequality follows from (<ref>), and the last equality follows from (<ref>). Therefore, we show that the constructed solution {W̃_l,i} and {R̃_l^r} also satisfy the SINR constraints in (<ref>) for problem (P1.1). Thus, this completes the proof of this proposition. IEEEtran
http://arxiv.org/abs/2307.07350v1
20230714140032
Beyond the worst case: Distortion in impartial culture electorate
[ "Ioannis Caragiannis", "Karl Fehrs" ]
cs.GT
[ "cs.GT", "cs.DS" ]
Beyond the worst case: Distortion in impartial culture electorate
Ioannis Caragiannis and Karl Fehrs
August 12, 2023
==========================================================================

Distortion is a well-established notion for quantifying the loss of social welfare that may occur in voting. As voting rules take as input only ordinal information, they are essentially forced to neglect the exact values the agents have for the alternatives. Thus, in worst-case electorates, voting rules may return low social welfare alternatives and have high distortion. Accompanying voting rules with a small number of cardinal queries per agent may reduce distortion considerably. To explore distortion beyond worst-case conditions, we introduce a simple stochastic model, according to which the values the agents have for the alternatives are drawn independently from a common probability distribution. This gives rise to so-called impartial culture electorates. We refine the definition of distortion so that it is suitable for this stochastic setting and show that, rather surprisingly, all voting rules have high distortion on average. On the positive side, for the fundamental case where the agents have random binary values for the alternatives, we present a mechanism that achieves approximately optimal average distortion by making a single cardinal query per agent. This enables us to obtain slightly suboptimal average distortion bounds for general distributions using a simple randomized mechanism that makes one query per agent. We complement these results by presenting new tradeoffs between the distortion and the number of queries per agent in the traditional worst-case setting. § INTRODUCTION Voting has been the subject of social choice theory for centuries. Traditionally, having elections as the main application area in mind, social choice theorists have made a lot of progress in understanding the axiomatic properties of voting rules (e.g., see <cit.> for an introduction to basic voting axioms). However, the use of voting goes far beyond elections, and today, it provides a compelling way of making collective decisions. So, in addition to the traditional axiomatic treatment of voting rules, other approaches that have an optimization flavour have attracted a lot of attention, starting with the work of Young (see <cit.> and references therein). Among others, the utilitarian approach <cit.> assumes that voters have values for the alternatives. The preferences expressed in the ballot of a voter are a very short summary of these values, e.g., a ranking of the candidates in terms of their values, a set of candidates with values exceeding a threshold, or just the alternative with the highest value for the voter. A voting rule takes as input the voters' ballots and computes an outcome (e.g., a winning alternative). As it has no access to the underlying values of the voters for the alternatives, the outcome depends only on the short summaries in the voters' ballots. Rather unsurprisingly, the outcome of the voting rule will be sub-optimal if evaluated in terms of the voters' values for the alternatives. How far can the outcome be from the optimal alternative? To answer this question, we need to specify a measure to assess the quality of alternatives. Following a rather standard approach in the literature, we use the social welfare —the total value the agents have for an alternative— as our quality measure. Then, the following question arises naturally.
How far from the optimum can the outcome of a voting rule be in terms of social welfare? The notion of distortion, introduced by <cit.>, comes to answer this question. The distortion of a voting rule is defined as the worst-case ratio, among all voting profiles on a set of alternatives, between the maximum social welfare among all alternatives and the social welfare of the winning alternative returned by the voting rule. The distortion of voting rules has been the subject of a long list of papers in computational social choice for more than ten years now; see <cit.> for a nice survey of the key results in the area. For example, under mild assumptions about the valuations, the ubiquitous plurality rule has a distortion of O(m^2), where m is the number of alternatives <cit.>. Nevertheless, the distortion is inherently high. Even when the values for the alternatives are restricted (e.g., the total value of each agent for all alternatives is normalized to 1), the distortion of any (possibly randomized) voting rule can be as bad as Θ(√(m)) <cit.>. A recent approach <cit.> aims to bypass such lower bounds by making very limited use of the underlying cardinal information the agents have for the alternatives. Here, besides the ranking submitted as a ballot by an agent, the voting rule (or, better, the mechanism) can pose queries to the agent regarding her value for particular alternatives. Even though optimal distortion results are now possible (by naively querying the values of all alternatives in each agent's ranking), the important problem to be solved is to design mechanisms that achieve low distortion by making a limited number of queries per agent. Among other results, <cit.> present an algorithm that achieves constant distortion by making at most O(log^2m) queries per agent. On the negative side, they show that any mechanism that achieves constant distortion must make at least Ω(logm/loglogm) per agent. Both results refer to deterministic mechanisms. By the definition of distortion, the results discussed above are of a worst-case flavour. Voting rules and mechanisms are evaluated on a profile for which they have the worst-possible performance, no matter how frequently such a profile may appear in practice. As such, they may not be suitable to explain the success of certain voting rules in practice. §.§ Overview of our contribution We follow the utilitarian framework but —motivated by the recent trend of analyzing algorithms from non-worst-case perspectives <cit.>— attempt an evaluation of voting rules on an average-case basis. At the conceptual level, we introduce the notion of average distortion. We consider a simple stochastic setting in which each agent draws a random value for each alternative according to a common probability distribution. The draws of each agent for each alternative are independent. Naturally, impartial culture electorates <cit.> —i.e., profiles with uniformly random rankings of alternatives— emerge in this way. The average distortion of a mechanism is defined as the expected maximum social welfare among the alternatives over the expected social welfare of the alternative returned by the mechanism. To distinguish between the two notions, we use the term worst-case distortion to refer to the traditional definition. It is not hard to see that the average distortion is always upper-bounded by the worst-case distortion. 
We warm up by showing (in Section <ref>) that, perhaps surprisingly, any (potentially randomized) voting rule has average distortion Ω(m), implying that the trivial voting rule that selects one of the alternatives uniformly at random (ignoring the profile) has an almost best possible average distortion. Our lower bound uses a very simple binary probability distribution (i.e., one that returns values of 1 and 0). Our first positive result is for distributions of this type. In Section <ref>, we present a deterministic mechanism, called , which makes a single query per agent and achieves constant average distortion. This is our most technically involved result and indicates that using queries (even in a minimal way) can provide a significant improvement to average distortion. Interestingly, mechanism can be used as the building block for our randomized mechanism , which works for a general distribution F. still requires only one query per agent and achieves an average distortion that depends on the parameters of F. For many distributions of interest, such as uniform or exponential, the average distortion bound obtained is only logarithmic in m. A similar idea is used to define our randomized mechanism for the traditional model of worst-case distortion. uses a logarithmic number of queries per agent and achieves logarithmic worst-case distortion. Such an upper bound is not known for deterministic mechanisms. These results are presented in Section <ref>. To the best of our knowledge, this is the first analysis of randomized mechanisms that make value queries in the distortion literature. Our upper bound on the average distortion of mechanism is not attainable by deterministic mechanisms in the traditional worst-case model. We prove that no deterministic mechanism that makes a single query per agent can achieve distortion better than Ω(√(m)), even when the valuations are binary. More importantly, for general valuations, we present a new lower bound of Ω(logm) on the number of queries per agent that allows for constant worst-case distortion. This improves the previously best-known lower bound of <cit.> by a sublogarithmic factor. These results appear in Section <ref>. §.§ Further related work Using different methodological approaches, utilities in voting have been considered in social choice theory since the work of Bentham in the 18th century <cit.>; see <cit.> for recent work on the topic. The study of the utilitarian framework in the recent CS literature goes beyond the study of single-winner voting. Caragiannis et al. <cit.> and Benade et al. <cit.> consider multiwinner voting and participatory budgeting settings, respectively. Filos-Ratsikas et al. <cit.> investigate voting in distributed settings. In the current paper, we assume that agents have non-negative values for the alternatives. In a series of recent papers, originating from the work of Anshelevich et al. <cit.>, agents and alternatives are assumed to be located in a metric space, and the preferences of each agent reflect her relative distance from the alternatives. In that different setting, the distortion quantifies the suboptimality of the outcome of voting rules in terms of the social cost. This line of research has led to challenging algorithmic problems with beautiful solutions <cit.>. The utilitarian approach has also been used in settings that are more general than voting, including matching, clustering, and facility location problems. 
The main idea is to explore how well algorithms that use only ordinal information about the underlying input (as opposed to cardinal values) can approximate the optimal solutions to these problems. The survey <cit.> covers early work on this hot topic, as well as on the other directions discussed above. We remark that stochastic models are ubiquitous in the EconCS literature, including a long line of research in mechanism design and auctions originating from the seminal paper of <cit.>, and a rich menu of other settings like matchings <cit.>, fair division <cit.>, kidney-exchange <cit.>, and many more. In the voting literature, two very recent papers with stochastic assumptions are <cit.> and <cit.>. Particularly, <cit.> addresses questions related to distortion on an average-case basis, using a different definition than ours and with no focus on mechanisms that make queries. § PRELIMINARIES Throughout the paper, we denote by N and A the sets of agents and alternatives and reserve n and m for their cardinalities, respectively. Each agent i∈ N has a ranking ≻_i of the alternatives, i.e., a strict ordering of the elements in A. A profile P={≻_i}_i∈ N is just a collection of the agents' rankings. We denote by the set of all possible profiles with n agents and m alternatives. A voting rule :→ A takes as input a profile and returns a single winning alternative. We assume that the ranking of each agent results from underlying hidden non-negative values that the agent has for each alternative. For i∈ N, the valuation function _i:A →_≥ 0 returns the values of agent i for the alternatives in A. Then, agent i's ranking ≻_i is consistent with _i. This means that a ≻_i a' if _i(a) ≥_i(a') for every pair of alternatives a,a' ∈ A. Given a ranking ≻, the function _≻:A → [m] returns the position of a given alternative in the ranking. We say that alternative a is the top-ranked alternative of agent i if _≻_i(a) = 1. Consider a set of valuation functions v = {_i}_i∈ N. We typically refer to the set v as the agents' valuations. The social welfare of an alternative a ∈ A is defined as (a,v) = ∑_i ∈ N_i(a). Let be the set of all possible valuations the agents in N can have for the alternatives in A. We denote by (_i) the set of rankings that are consistent with the valuation function _i of agent i. We say that a profile P={≻_i}_i∈ N is consistent with the valuations v if the ranking ≻_i of every agent i∈ N is consistent with her valuation function _i, i.e., ≻_i∈(_i). Then, (v) represents the set of profiles in which are consistent with the valuations v. Besides voting rules, we consider mechanisms that have, in addition to the profile, access to the valuations. Such a mechanism :×→ A takes as input the profile and the valuations and returns a winning alternative. We are particularly interested in mechanisms that use the whole profile on input but only a small part of the valuations by making a limited number of queries per agent. A query for the value of agent i∈ N for alternative a∈ A simply returns the value of _i(a). The worst-case distortion of a mechanism applied on profiles consistent with valuations from is defined as () = sup_v∈ P∈(v)max_a ∈ A(a,v)/((P,v), v), i.e., it is the worst-case ratio—among all valuations and consistent profiles—between the maximum social welfare and the social welfare of the alternative returned by the mechanism. 
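To illustrate the definitions above, the following small Python sketch draws one profile consistent with a given valuation matrix (ties broken uniformly at random), applies a voting rule to it, and reports the ratio between the maximum social welfare and the social welfare of the returned alternative; the worst-case distortion is the supremum of this ratio over all valuations and consistent profiles. Plurality is used as the example rule, and all function names and sampled values are ad hoc choices made only for this illustration.

import numpy as np

def consistent_profile(v, rng):
    # One ranking per agent, consistent with the valuations v (an n x m array),
    # with ties between equal-value alternatives broken uniformly at random.
    n, m = v.shape
    P = np.empty((n, m), dtype=int)
    for i in range(n):
        key = rng.permutation(m)
        P[i] = sorted(range(m), key=lambda a: (-v[i, a], key[a]))
    return P                                    # P[i, j] = alternative ranked at position j+1 by agent i

def plurality(P):
    # The alternative appearing most often in the top position of the rankings.
    return int(np.argmax(np.bincount(P[:, 0], minlength=P.shape[1])))

def welfare_ratio(rule, v, rng):
    # max_a sw(a, v) divided by sw(rule(P), v), for a single consistent profile P.
    P = consistent_profile(v, rng)
    sw = v.sum(axis=0)                          # sw(a, v) for every alternative a
    return sw.max() / sw[rule(P)]

rng = np.random.default_rng(1)
v = rng.random((4, 3))                          # 4 agents, 3 alternatives, values in [0, 1)
print(welfare_ratio(plurality, v, rng))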
For randomized mechanisms, in which the alternative returned is a random variable, we use the expectation [((P,v), v)] (taken over the random choices of ) instead of ((P,v), v) in the denominator. We extend the notion of distortion to stochastic environments with random valuations and consistent profiles. We assume that the values of the agents for the alternatives are drawn from a known common distribution F. In particular, for each agent i∈ N and alternative a∈ A, the value _i(a) is drawn independently from distribution F. Given valuations v selected in this way, a consistent profile P is selected uniformly at random among all profiles in (v) (essentially, in her ranking, each agent breaks ties among alternatives in terms of value uniformly at random). This gives rise to uniformly random profiles, which are known as impartial culture electorates in the social choice theory literature. For valuations v, we use v ∼ F to denote that, for every alternative a∈ A and every agent i∈ N, the value _i(a) is drawn independently from F. We use the notation ≻_i∼(_i) to refer to a ranking that is selected uniformly at random among all rankings that are consistent with the valuation function _i of agent i. Similarly, we use P∼(v) for a profile that is selected uniformly at random among all profiles that are consistent with valuations v. Then, the expected social welfare of the winning alternative picked by a deterministic mechanism is _v∼ F P ∼(v)[((P,v), v)]. When mechanism is randomized, the expectation is taken over the randomness of as well. We now adapt the notion of distortion to this stochastic setting. The average distortion of a mechanism on a family of distributions is (,) = sup_F∈_v∼ F[max_a ∈ A(a,v)]/_v∼ F P ∼(v)[((P,v), v)] . A fundamental family that is of central importance to our work is the family of binary distributions, consisting of the set of probability distributions {F_p}_p∈ [0,1], where F_p is such that the random variable z that is drawn according to F is equal to 1 with probability p and to 0 with probability 1-p. § AVERAGE DISTORTION CAN BE HIGH We begin our technical exposition with a lower bound of Ω(m) on the average distortion. Essentially, Theorem <ref> implies that the trivial voting rule that returns an alternative uniformly at random on every profile has approximately optimal average distortion. In our proof, we make use of a particular distribution F from the family . Furthermore, we exploit a class of voting rules called positional scoring rules. A positional scoring rule g is defined by a scoring vector ⟨α_1, α_2, ..., α_m⟩ with α_1≥α_2≥ ...≥ a_m≥ 0. On input a profile P, the voting rule g assigns a score _g(a,P) to every alternative a∈ A and the winning alternative is the one with the highest score (breaking ties according to some tie-breaking rule). The scores of alternatives is computed as follows. For j=1, ..., m, alternative a takes α_j points each time it appears in the j-th position in an agent's ranking, i.e., _g(a,P)=∑_i∈ Nα__≻_i(a). For example, plurality is the positional scoring rule that uses the m-entry scoring vector ⟨ 1, 0, ..., 0⟩. The plurality winner on profile P, denoted by (P), is the alternative that appears most often in the top position of the agents' rankings. For every (possibly randomized) mechanism , (,)∈Ω(m). Consider impartial culture electorates with m alternatives, n≥ 6mlnm agents, and binary values drawn from the probability distribution F∈ with p=1/nm. 
Then, the probability that some alternative has positive social welfare is 1-(1-1/nm)^nm≥ 1-e^-1. Thus, _v∼ F P∼(v)[max_a∈ A(a,v)]≥ 1-e^-1. Now consider any voting rule ; we will complete the proof by showing that _v∼ F P∼(v)[((P),P)] ≤4/m. Below, we prove inequality (<ref>), assuming that is deterministic. For randomized , the expectation in the LHS of (<ref>) should also be taken over the randomness of as well. In that case, inequality (<ref>) follows trivially by our arguments, interpreting as a probability distribution over deterministic voting rules. Let g be the positional scoring rule that uses the scoring vector ⟨α_1, α_2, ..., α_m⟩ with α_j=__i∼ F ≻_i∼(_i)[_i(a)|_≻_i(a)=j], i.e., score α_j is equal to the expected value according to F given by each agent to the alternative ranked at position j. Now, observe that for a given profile P={≻_i}_i∈ N and alternative a∈ A, we have _v∼ F[(a,P)|P∈(v)] =∑_i∈ N__i∼ F[_i(a)|≻_i ∈(_i)] =∑_i∈ Nα__≻_i(a)=_g(a,P), where _g(a,P) denotes the score of alternative a in profile P according to positional scoring rule g. Thus, _v∼ F P∼(v)[((P),v)] = ∑_P∈_v∼ F[((P),v)|P∈(v)]·_v∼ F[P∈(v)] = ∑_P∈_g((P),P) ·_v∼ F[P∈(v)] = _v∼ F P∼(v)[_g((P),P)]. Notice that, by the definitions of voting rules and g, for any profile P and alternative a∈ A, it holds _g(a,P) ≤α_1 ·_(a,P)+n·α_2 ≤α_1 ·_((P),P)+n· (mp-α_1). The first inequality follows since g gives to alternative a α_1 points for each of its _(a,P) appearances in the top position in the agents' rankings and at most α_2 points for its remaining appearances. The second inequality follows since the plurality winner (P) has the highest plurality score (and, hence, _(a,P)≤_((P),P) for every alternative a∈ A) and since α_2≤∑_j=1^mα_j-α_1 and ∑_j=1^mα_j=mp (as the expected total value given to all alternatives by an agent). We will also need the following claim. _v∼ F P∼(v)[_((P),P)] ≤3n/m. The proof will follow by a simple application of the Chernoff bound, e.g., see <cit.>. For every binomial random variable Q and any δ>0, we have [Q≥ (1+δ)[Q]] ≤exp(-δ^2[Q]/2+δ). Consider an alternative a∈ A and denote by X_i the random variable indicating whether agent i ranks a first (X_i=1; this happens with probability 1/m) or not (X_i=0). Observe that _(a,P)=∑_i∈ NX_i, i.e., _(a,P) is a binomial random variable with expectation n/m. By applying Lemma <ref> with δ=1, we obtain that [_(a,P)≥2n/m]≤exp(-n/3m)≤1/m^2. The last inequality follows since n≥ 6mlnm. Taking the union bound over the m alternatives then gives [_((P),P)≥2n/m]≤1/m. Finally, since the plurality score never exceeds n, we have _v∼ F P∼(v)[_((P),P)]≤2n/m·(1-1/m)+n·1/m≤3n/m. Notice that the top alternative of an agent has value 0 with probability (1-p)^m and value 1 otherwise. Thus, α_1=1-(1-p)^m. Now, observe that 1-(1-p)^m=1-(1-1/nm)^m≥ 1-1/e^1/n≥1/n+1, using the properties (1-r/t)^t≤ e^-r for t>0 and r≥ 0 and e^r≥ 1+r. Thus, mp-α_1≤1/n-1/n+1<1/n^2. Also, using the property (1-r)^t≥ 1-rt for t≥ 1, we get α_1=1-(1-p)^m≤ pm=1/n. Combining these two last inequalities with equations (<ref>) and (<ref>) and Claim <ref>, we obtain _v∼ F P∼(v)[((P),v] = _v∼ F P∼(v)[_g((P),P)] ≤α_1·_v∼ F P∼(v)[_((P),P)] + n(mp-α_1) ≤3/m+1/n≤4/m, as desired by (<ref>). The last inequality follows by the relation between n and m. § CONSTANT AVERAGE DISTORTION WITH A SINGLE QUERY PER AGENT In the previous section, we saw that the average distortion can be high even for binary distributions if the mechanism has no access to the agents' underlying valuations. 
Still, binary valuations may reveal a lot of information with a single query. Consider a query for the value that an agent has for the alternative at position j of her ranking. If the query returns a value of 1, this means that the agent has a value of 1 for all alternatives in positions higher than j as well. This observation motivates the following definition of implied social welfare. For a distribution F ∈, assume that a mechanism queried each agent for the value of the alternative in position k of her ranking. For each agent i∈ N, let _i,k denote the respective value. The implied social welfare then is (a,P,v,k) = ∑_i ∈ N{_i,k = 1 ∧ _≻_i(a) ≤ k}. We use the notion of implied social welfare in the definition of the following mechanism. Given a profile P with underlying valuations drawn from a distribution F_p∈, the mechanism queries each agent for the value of the alternative at position τ = max{1, ⌊ pm⌋}. The mechanism then returns the alternative that maximizes the implied social welfare, that is, (P,v) = _a ∈ A(a,P,v,τ). We show that mechanism has constant average distortion with a single query for the family of distributions . Notably, it is impossible for deterministic 1-query mechanisms to achieve similar guarantees in the traditional setting of worst-case distortion. Indeed, even for binary valuations, the worst case distortion of these mechanisms is Ω(√(m)); see Theorem <ref>. Mechanism has average distortion at most 27 in impartial culture electorates with n agents and m alternatives with underlying values drawn from any probability distribution in . We partition family into the subfamilies of probability distributions _1, _2, and _3 defined as follows: _1 = {F_p∈ : p≥ 1/m} _2 = {F_p∈ : 1-(1-1/n)^1/m≤ p < 1/m } _3 = {F_p∈ : p < 1-(1-1/n)^1/m} Let F_p∈; we will prove the theorem by distinguishing between the three cases F_p∈_1, F_p∈_2, and F_p∈_3. We introduce some notation that we use throughout this proof. Let τ denote the position that mechanism queried in each agent's ranking, i.e., τ=max{1,⌊ mp⌋}. For valuations v, let X_i(v) be the number of alternatives for which agent i draws a value of 1 and let X(v)=∑_i∈ NX_i(v). We will consider valuations v drawn from the probability distribution F_p. Thus, X_i(v) is a random variable following the binomial distribution with m trials and success probability p. Since X(v) is the sum over i.i.d. binomially distributed random variables, X(v) itself follows a binomial distribution with nm trials and success probability p. Furthermore, we denote by Z_i(v,τ) the random variable indicating whether the query at position τ of agent i's ranking returned a value of 1 (then, Z_i(v,τ)=1) or not (then, Z_i(v,τ)=0). Clearly, Z_i(v,τ)={X_i(v)≥τ} such that the random variable Z(v,τ)=∑_i∈ NZ_i(v,τ) follows a binomial distribution with n trials and success probability [X_i(v)≥τ]. We note that the variables X_i are identically distributed for every i∈ N. For the first two cases below, we will need a technical lemma which we prove in Appendix <ref>. For integer s∈ [nm], define the condition (v,s) to be X(v)=s with ⌊ s/n⌋≤ X_i(v) ≤⌈ s/n⌉ for i∈ N. Then, for any distribution F∈, it holds that _v∼ F[max_a∈ A(a,v)|X(v)=s] ≤_v∼ F[max_a∈ A(a,v)|(v,s)]. Case 1. Consider impartial culture electorates with n agents and m alternatives with underlying values drawn from the distribution F_p∈_1. Denote by (v) the condition X_i(v)=τ for i∈ S and X_i(v)=0 for i∈ N∖ S, where S is a subset of the agents of size exactly ⌊ n/2 ⌋. 
Now, define the quantity B =_v∼ F[max_a∈ A(a,v)|(v)], i.e., the expected maximum social welfare among all alternatives given that ⌊ n/2⌋ agents draw a value of 1 for exactly τ alternatives and the remaining agents draw values of 0 for all alternatives. The quantity B will be a benchmark that will help us compare the expected social welfare of (P,v) to the maximum social welfare. As stated above, each X_i(v) is a binomial random variable with m trials and success probability p. Hence, its median is either at the value ⌊ pm ⌋ or ⌈ pm ⌉, which are both at least τ since p≥ 1/m. Thus, _v∼ F[X_i(v) ≥τ]≥ 1/2. The random variable Z(v,τ) then follows a binomial distribution with n trials and success probability at least 1/2. By the same property of the median of the binomial distribution, it now holds that _v∈ F[Z(v,τ)≥⌊ n/2⌋]≥ 1/2. With this observation and by applying the law of total expectation, we have _v∼ F_p P∼(v)[((P,v),v)] = _v∼ F_p P∼(v)[((P,v),v)|Z(v,τ)≥⌊ n/2⌋]·_v∈ F_p[Z(v,τ)≥⌊ n/2⌋] +_v∼ F_p P∼(v)[((P,v),v)|Z(v,τ)< ⌊ n/2⌋]·_v∈ F_p[Z(v,τ)< ⌊ n/2⌋] ≥1/2·_v∼ F_p P∼(v)[((P,v),v)|Z(v,τ)≥⌊ n/2⌋] ≥1/2·_v∼ F_p P∼(v)[max_a∈ A(a,P,v,τ)|Z(v,τ)≥⌊ n/2⌋] ≥1/2·_v∼ F_p[max_a∈ A(a,v)|(v)] = 1/2· B. The first inequality above is obvious. By definition of , (P,v) is the alternative that maximizes the implied social welfare when querying each agent in position τ. Hence, the implied social welfare is a lower bound for the social welfare of (P,v), which yields the second inequality. Finally, under the condition that Z(v,τ)≥⌊ n/2⌋, there are at least ⌊ n/2⌋ agents i for each of which X_i(v) ≥τ. This includes the condition (v) and the third inequality follows. The last equality follows by the definition of B in (<ref>). We proceed to bound the expected maximum social welfare from above in terms of B. For this purpose, we require another technical lemma which we prove in Appendix <ref>. For every distribution F∈ and any positive integer j, it holds that _v∼ F_p[max_a∈ A(a,v)|(v,jnτ)] ≤ 3j· B. For a positive integer j, define the condition (v,j) to be (j-1)nτ < X(v)≤ jnτ. By applying the law of total expectation, we obtain _v∼ F_p[max_a∈ A(a,v)] = ∑_j=1^∞_v∼ F_p[max_a∈ A(a,v)|(v,j)]·_v∼ F_p[(v,j)] ≤∑_j=1^∞_v∼ F_p[max_a∈ A(a,v)|X(v)=jnτ]·_v∼ F_p[(v,j)] ≤∑_j=1^∞_v∼ F_p[max_a∈ A(a,v)|(v,jnτ)]·_v∼ F_p[(v,j)] ≤ 3B·∑_j=1^∞j·_v∼ F_p[(v,j)]. The first inequality follows since the condition (v,j) includes the condition X(v)=jnτ and since the quantity [max_a∈ A(a,v)|X(v)=t] is non-decreasing in terms of t. The second inequality follows by Lemma <ref> while the third one follows by Lemma <ref>. We now bound the term ∑_j=1^∞j·_v∼ F_p[(v,j)]. ∑_j=1^∞j·_v∼ F_p[(v,j)] = ∑_j=1^∞ j ∑_k=(j-1)nτ^jnτ_v∼ F_p[X(v)=k] = 1/nτ·∑_j=1^∞∑_k=(j-1)nτ^jnτjnτ·_v∼ F_p[X(v)=k] ≤_v∼ F_p[X(v)≤ nτ] + 2/nτ·∑_j=2^∞∑_k=(j-1)nτ^jnτ k ·_v∼ F_p[X(v)=k] ≤1/2 + 2/nτ·_v∼ F_p[X(v)] ≤ 4.5. The first inequality follows from the fact that jnτ/2 ≤ (j-1)nτ≤ k for j≥ 2. The second inequality follows by the definition of the expectation of random variable X(v). Since X(v) follows the binomial distribution with nm trials and success probability p, we have [X(v)]=nmp which is at most 2n⌊ mp⌋=2nτ since p≥ 1/m. Now, inequality (<ref>) implies that E_v∼ F[max_a∈ A(a,v)] ≤ 13.5B. Combining this bound with inequality (<ref>) lets us conclude that (,_1)≤ 27, as desired. Case 2. Consider impartial culture electorates with n agents and m alternatives with underlying values drawn from the distribution F_p∈_2. 
Since p<1/m, mechanism always queries the value of the top-ranked alternative in each agent, i.e., τ = 1. Then, picks the alternative that maximizes the implied social welfare (a,P,v,1). It is thus immediately clear that _v ∼ F_p P ∼(v)[((P,v),v)] ≥_v∼ F_p P∼(v)[max_a∈ A(a,P,v,1)]. We intend to also relate the expected maximum social welfare to the RHS of (<ref>). First, we observe that _v∼ F_p[max_a∈ A(a,v)] = ∑_t=1^n _v∼ F_p P∼(v)[max_a∈ A(a,v) | Z(v,1) = t] ·_v∼ F_p[Z(v,1) = t]. In the following, we will upper-bound the term [max_a∈ A(a,v) | Z(v,1) = t] and show that this quantity is within a constant factor of [max_a∈ A(a,P,v,1) | Z(v,1) = t] for every positive t. We will need another technical lemma (see Appendix <ref> for the proof). Indeed, the proof of the Lemma <ref> makes use of Lemma <ref> and can be seen as a refined formulation of Lemma <ref> for the present case where τ = 1. For every distribution F∈ and anypositive integer j, it holds that _v∼ F_p[max_a∈ A(a,v)|Z(v,1)=t,X(v)=j] ≤⌈ j/t ⌉·_v∼ F_p P∼(v)[max_a∈ A(a,P,v,1)|Z(v,1) = t]. With this lemma in hand, we have that _v∼ F_p[max_a∈ A(a,v) | Z(v,1) = t] =∑_j=t^∞_v∼ F_p[max_a∈ A(a,v)|Z(v,1)=t,X(v)=j]·_v∼ F_p[X(v)=j|Z(v,1)=t] ≤_v∼ F_p P∼(v)[max_a∈ A(a,P,v,1)|Z(v,1) = t] ∑_j=t^∞⌈j/t⌉_v∼ F_p[X(v)=j|Z(v,1)=t]. We proceed to bound the sum that appears in the previous inequality. ∑_j=t^∞⌈j/t⌉_v∼ F_p[X(v)=j|Z(v,1)=t] ≤∑_j=t^∞(j/t+1) _v∼ F_p[X(v)=j|Z(v,1)=t] = ∑_j=t^∞_v∼ F_p[X(v)=j|Z(v,1)=t] + 1/t∑_j=t^∞ j·_v∼ F_p[X(v)=j|Z(v,1)=t] = 1 + 1/t_v∼ F_p[X(v)=j|Z(v,1)=t] = 1 + _v∼ F_p[X_i(v)|X_i(v)≥ 1], for any agent i∈ N. The last equality is true since, under the condition Z(v,1)=t, X(v) is the sum ∑_i∈ SX_i(v) for a set S of t agents who are selected uniformly at random and each satisfy X_i(v)≥ 1. As the random variables X_i(v) are identically distributed for all agents in S, we obtain that [X(v)|Z(v,1)=t]=t·[X_i(v)|X_i(v)≥ 1] for every agent i, which yields the equality. Now, observe that _v∼ F_p[X_i(v) | X_i(v) ≥ 1] = _v∼ F_p[X_i(v)]/_v∼ F_p[X_i(v)≥ 1] = pm/1-(1-p)^m. The derivative of the RHS of (<ref>) with respect to p∈ (0,1) is m/(1-(1-p)^m)^2(1-(1-p)^m-1(1+p(m-1)) ≥m/(1-(1-p)^m)^2(1-((1-p)(1+p))^m-1) = m/(1-(1-p)^m)^2(1-(1-p^2)^m-1)>0. The first inequality follows by the inequality (1+t)^r≥ 1+rt for r≥ 1. Hence, [X_i(v) | X_i(v) ≥ 1] is strictly increasing in p. Since p<1/m in the current case, it follows from (<ref>) that _v∼ F_p[X_i(v) | X_i(v) ≥ 1] = pm/1-(1-p)^m≤1/1-(1-1/m)^m≤1/1-e^-1, using the inequality (1-1/m)^m ≤ e^-1 for any m≥ 1. Using this last observation together with inequalities (<ref>) and (<ref>), we obtain that _v∼ F_p[max_a∈ A(a,v)] ≤2e-1/e-1·_v∼ F_p P∼(v)[max_a∈ A(a,P,v,1)]. In combination with the lower bound on the social welfare of the mechanism in inequality (<ref>), this yields an average distortion of at most 2.6. Case 3. Again, mechanism queries the top-ranked alternative in each agent. Notice that if we have at most two agents, the average distortion of mechanism is at most 2. Indeed, mechanism always returns an alternative of positive social welfare whenever there exists one, and no alternative ever has a social welfare higher than 2. So, in the following, we assume that n≥ 3. We will show that the average distortion is lower than 2 in this case. Consider impartial culture electorates with n agents and m alternatives with underlying values drawn from the distribution F_p∈_3. Notice that the maximum social welfare is never larger than the total social welfare of all alternatives. 
Hence, _v∼ F_p[max_a∈ A(a,v)]≤ pnm. With probability 1-(1-p)^nm, there is at least one agent that gives a non-zero value to some alternative. Then, returns an alternative of social welfare at least 1. Hence, _v∼ F_p P∼(v)[((P,v),v)]≥ 1-(1-p)^nm. The average distortion of can therefore be upper-bounded by the term pnm/1-(1-p)^nm. Notice that the derivative with respect to p of this quantity is nm/(1-(1-p)^nm)^2· (1-(1-p)^nm-1(1-p+pnm)) ≥ nm/(1-(1-p)^nm)^2· (1-(1-p)^nm-1(1+p)^nm-1) = nm/(1-(1-p)^nm)^2· (1-(1-p^2)^nm-1)>0. The first inequality follows by the property (1+t)^r≥ 1+rt for r≥ 1 and the second (strict) inequality is due to the fact that p > 0. Hence, the average distortion of the mechanism is strictly increasing in p. Using p^*=1-(1-1/n)^1/m and since p<p^*, we have (,_3)=max_F_p∈_3_v∼ F_p[max_a∈ A(a,v)]/_v∼ F P∼(v)[(M(P,v),v)]≤pnm/1-(1-p)^nm<p^*nm/1-(1-p^*)^nm. Substituting p^*, we get that the denominator in the RHS of (<ref>) is equal to 1-(1-1/n)^n≥ 1-e^-1. To bound the numerator, observe that (1-3ln3/2/nm)^nm<8/27 and that (1-1/n)^n≥8/27 for n≥ 3. These inequalities follow since the expression (1-r/t)^t is strictly increasing for t≥ r and approaches e^-r from below as t goes to infinity. Thus, we have that (1-3ln3/2/nm)^m<1-1/n, which is equivalent to 3ln3/2/nm>1-(1-1/n)^1/m=p^*, implying that the numerator of the RHS of (<ref>) is at most 3ln3/2. We conclude that (,_3)<3ln3/2/1-e^-1<2, completing the proof. § RANDOMIZED MECHANISMS We now present two randomized mechanisms —for impartial culture and worst-case electorates, respectively— that are nevertheless similar in spirit. Both mechanisms randomly pick a single threshold from a suitably defined set of thresholds and query each agent to determine a set of alternatives that have value above the threshold. This information is then used to compute an approximation of the alternatives' respective social welfare, which is used to decide the winning alternative. For impartial culture electorates, our first “random threshold” mechanism uses mechanism as a building block. The mechanism uses k thresholds ℓ_1, ..., ℓ_k with 0<ℓ_1<...<ℓ_k as parameters. Given a profile P with underlying valuations drawn from a probability distribution F, selects an integer t uniformly at random from [k], and sets p=_z∼ F[z≥ℓ_t]. It then simulates an execution of on the distribution F_p∈ by * making the same value queries as for F_p, but * interpreting the answer _i(a) to a query as 1 if _i(a)≥ℓ_t and 0 otherwise. returns as output the alternative that selects. Notice that mechanism uses exactly one query per agent. The next statement relates the average distortion of to the structure of the distribution F. Let m>0 be an integer, z a non-negative random variable following a probability distribution F, and L,U>0 such that _z∼ F[z{z<L}+(z-U){z≥ U}] ≤_z∼ F[z]/2m. Then, there is a choice of thresholds ℓ_1, ℓ_2, ..., ℓ_k such that mechanism yields average distortion at most 108⌈logU/L⌉. Set k=⌈logU/L⌉ and define the thresholds of mechanism as ℓ_t=L· 2^t-1 for t=1, 2, ..., k and ℓ_k=U. We begin by observing that, for any z≥ 0, we have z ≤ z{z<ℓ_1}+ℓ_1{z≥ℓ_1}+∑_t=1^k-1(ℓ_t+1-ℓ_t){z≥ℓ_t}+(z-ℓ_k){z≥ℓ_k} ≤ z{z<ℓ_1}+(z-ℓ_k){z≥ℓ_k}+2∑_t=1^kℓ_t{z≥ℓ_t}. The second inequality follows since the definition of the thresholds implies that ℓ_t+1≤ 2ℓ_t for t=1, ..., k-1 and, hence ℓ_t+1-ℓ_t≤ℓ_t. We now have _v∼ F[max_a∈ A(a,v)] = _v∼ F[max_a∈ A∑_i=1^n_i(a)] ≤_v∼ F[max_a∈ A∑_i=1^n(_i(a){_i(a)<ℓ_1}+(_i(a)-ℓ_k){_i(a)≥ℓ_k}.. 
..+2 ·∑_t=1^kℓ_t{_i(a)≥ℓ_t})] ≤_v∼ F[max_a∈ A∑_i=1^n(_i(a){_i(a)<ℓ_1}+(_i(a)-ℓ_k){_i(a)≥ℓ_k})] + 2 ·_v∼ F[max_a∈ A∑_i=1^n∑_t=1^kℓ_t{_i(a)≥ℓ_t}] ≤_v∼ F[∑_a∈ A∑_i=1^n(_i(a){_i(a)<ℓ_1}+(_i(a)-ℓ_k){_i(a)≥ℓ_k})] + 2·_v∼ F[max_a∈ A∑_i=1^n∑_t=1^kℓ_t{_i(a)≥ℓ_t}] ≤1/2m·_v∼ F[∑_a∈ A∑_i=1^n_i(a)]+ 2·_v∼ F[max_a∈ A∑_i=1^n∑_t=1^kℓ_t{_i(a)≥ℓ_t}] ≤1/2·_v∼ F[max_a∈ A(a,v)]+ 2·_v∼ F[max_a∈ A∑_i=1^n∑_t=1^kℓ_t{_i(a)≥ℓ_t}]. The first inequality follows by (<ref>), the second and third inequalities use the fact that the maximum among non-negative values is upper-bounded by their sum, the fourth inequality uses the assumption in the statement of the theorem for the random variable _i(a), and the last inequality follows since the average among non-negative values is a lower-bound on the maximum value among them. The above inequality is equivalent to _v∼ F[max_a∈ A∑_i=1^n∑_t=1^kℓ_t{_i(a)≥ℓ_t}] ≥1/4·_v∼ F[max_a∈ A(a,v)]. For valuations v and t∈ [k], define the valuations v^t that consists of the binary values {_i(a)≥ℓ_t}. Notice that for every alternative a∈ A, it is (a,v) ≥ℓ_t·(a,v^t). Mechanism selects the integer t uniformly at random from [k] and, for every profile P that is consistent with the valuations v, it returns the alternative (P,v^t). We thus have _v∼ F P∼(v)[((P,v),v)] = 1/k·∑_t=1^k_v∼ F P∼(v)[((P,v^t),v)]≥1/k·∑_t=1^kℓ_t·_v∼ F P∼(v)[ ((P,v^t),v^t)] ≥1/27k∑_t=1^kℓ_t·_v∼ F[max_a∈ A(a,v^t)]≥1/27k_v∼ F[max_a∈ A∑_t=1^kℓ_t·(a,v^t)] = 1/27k·_v∼ F[max_a∈ A∑_i=1^n∑_t=1^kℓ_t {_i(a)≥ℓ_t}] ≥1/108k·_v∼ F[max_a∈ A(a,v)], as desired. The second inequality follows by the average distortion guarantee for mechanism from Theorem <ref>, and the fifth one by inequality (<ref>). Specifically for uniform and exponential distributions, we obtain the following corollary. The average distortion of mechanism when applied to impartial culture electorates with n agents and m alternatives, and the underlying values drawn according to a uniform or exponential distribution is at most O(logm). By setting U=α and L=α/√(2m) for the uniform distribution in [0,α] and U=ln(4m)/α and L=1/4α m for the exponential distribution with probability density function f(z)=α e^-α z (and expectation 1/α), we can verify that the condition of Theorem <ref> is satisfied. Our next mechanism uses a similar idea and achieves low worst-case distortion. For a given profile P with underlying valuations v over m alternatives, mechanism picks an integer r uniformly at random from the set {1,2,…,⌈log 2m⌉}. For every agent i∈ N, mechanism * queries the value ν_i of the agent's top-ranked alternative, and * finds the set of alternatives S_i,r, for each of which the agent has value more than ν_i/2^r using binary search. The mechanism then returns an alternative (P,v)∈_a∈ A∑_i∈ Nν_i {a∈ S_i,r}. Mechanism achieves worst-case distortion at most O(log m) with O(log m) queries per agent. Clearly, for any fixed r and any agent i, the alternatives S_i,r can be identified by finding the lowest-ranked alternative in ≻_i with value more than ν_i/2^r. This can be accomplished using binary search using O(log m) queries per agent. For the upper bound on the distortion of , we introduce the following notion of artificial social welfare. Given valuations v and an alternative a∈ A, we define the artificial social welfare of a as (a,v) = ∑_i∈ Nν_i ∑_r=1^∞{a∈ S_i,r}/2^r. With the next lemma, we show that for any alternative, its artificial social welfare is within a factor of 2 of its social welfare. 
For any set of valuations v and any alternative a∈ A, it holds that (a,v) ≤(a,v) ≤ 2(a,v). For any agent i∈ N, let r_i^* be a non-negative integer such that _i(a) ∈ (ν_i/2^r_i^*,ν_i/2^r_i^*-1]. Notice that thereby ∑_i∈ Nν_i/2^r_i^*-1≥∑_i∈ N_i(a) = (a,v), and ∑_i∈ Nν_i/2^r_i^*-1 < 2∑_i∈ N_i(a) = 2(a,v). Then, for every agent i and any alternative a, we have that ∑_r=1^∞{a∈ S_i,r}/2^r = ∑_r=r_i^*^∞1/2^r = 1/2^r_i^*∑_r=0^∞1/2^r = 1/2^r_i^*-1, and, thus, (a,v) = ∑_i∈ Nν_i ∑_r=1^∞{a∈ S_i,r}/2^r = ∑_i∈ Nν_i/2^r_i^*-1. The lemma follows from inequalities (<ref>) and (<ref>). For a given profile P, let a_r be the alternative returned by mechanism for a draw of r = 1,2,…,⌈log 2m⌉, respectively. Denote by a^* the alternative of maximum social welfare for the valuations v underlying P. Then, by definition of , it holds that ∑_i∈ Nν_i{a_r∈ S_i,r}/2^r≥∑_i∈ Nν_i{a^*∈ S_i,r}/2^r, for every r ∈{1,2,…,⌈log 2m⌉}, and, hence, ∑_r=1^⌈log 2m⌉∑_i∈ Nν_i{a_r∈ S_i,r}/2^r≥∑_r=1^⌈log 2m⌉∑_i∈ Nν_i{a^*∈ S_i,r}/2^r. We now have that ∑_r=1^⌈log 2m⌉(a_r,v) = ∑_r=1^⌈log 2m⌉∑_i∈ Nν_i∑_r'=1^∞{a_r∈ S_i,r'}/2^r'≥∑_i∈ Nν_i∑_r=1^⌈log 2m⌉{a_r∈ S_i,r}/2^r ≥∑_i∈ Nν_i∑_r=1^⌈log 2m⌉{a^*∈ S_i,r}/2^r = (a^*,v) - ∑_i∈ Nν_i∑_r=⌈log 2m⌉ +1^∞{a^*∈ S_i,r}/2^r ≥(a^*,v) - ∑_i∈ Nν_i (1/2^⌈log2m⌉+1∑_r=0^∞1/2^r) ≥(a^*,v) - 1/2m∑_i∈ Nν_i ≥1/2(a^*,v). Here, the inequality on the second line is due to inequality (<ref>). The last inequality follows since the average value of the top-ranked alternatives, that is, (1/m) ∑_i∈ Nν_i, cannot be higher than (a^*,v) and, by Lemma <ref>, (a^*,v)≤(a^*,v). Hence, for any set of valuations v and any profile P∈(v), we have that _r∼[⌈log 2m⌉][((P,v),v)] = 1/⌈log 2m⌉∑_r=1^⌈log 2m⌉(a_r,v) ≥1/2⌈log 2m⌉∑_r=1^⌈log 2m⌉(a_r,v) ≥1/4⌈log 2m⌉(a^*,v) ≥1/4⌈log 2m⌉(a^*,v), where the first inequality follows from Lemma <ref>, the second inequality follows from (<ref>), and the third inequality again follows from Lemma <ref>. This concludes the proof of the theorem. § WORST-CASE DISTORTION LOWER BOUNDS We conclude our technical exposition by presenting two lower bounds on the worst-case distortion that improve upon lower bounds given by <cit.>. Both lower bound constructions are significantly different than those in <cit.>. Our basic approach in both of them is as follows. First, for every (large enough) value of m and a value of n of our choice, we decide the agents' rankings. For every position in an agent's ranking, we pre-define a value that is revealed if this particular position is queried by a mechanism. Let a be the alternative that a mechanism picked as winning on the given profile. We then show that —for every choice of a— it is possible to fix (i.e., to choose) the agents' remaining concealed valuations in such a way that the distortion is high. That is, for any position not queried by the mechanism, we assume an adversarial set of valuations that is consistent with the agents' rankings and with the values revealed to the mechanism. §.§ Lower bounding the number of queries for constant distortion Our first lower bound on the number of queries per agent that are necessary to get constant worst-case distortion improves the previously best bound of <cit.> by a sublogarithmic factor. Any deterministic mechanism that achieves a constant worst-case distortion must make Ω(log m) queries per agent. Let m≥ 154 and λ be an integer such that 2≤λ≤logm. Consider a mechanism that makes at most λ queries per agent; we will show[In terms of m and λ, the lower bound of <cit.> can be expressed (asymptotically) as 1/λ+1· m^1/2(λ+1). 
This gives better lower bounds on the number of queries per agent for higher than logarithmic worst-case distortion, but is inferior to ours in the most interesting regimes in which the required worst-case distortion is (close to) constant.] that has worst-case distortion at least 1/8m^1/3λ. We define the symmetric profile P={≻_i}_i∈ N with n=m agents, so that agent i has the ranking i ≻_i i+1 ≻_i …≻_i m ≻_i 1 ≻_i …≻_i i-1. The ranking of every agent i is divided into 2λ + 1 sets —or buckets— B^(i)_1, …, B^(i)_2λ + 1 where |B^(i)_j| = b_j = ⌈ m^j/3 λ⌉, for j ∈ [2λ], and |B^(i)_2λ+1| = b_2λ + 1 = m - ∑_j=1^2λ b_j. Hence, B_j^(i)={i+∑_t=1^j-1b_t m, ..., i-1+∑_t=1^jb_t m}. We refer to the alternatives from bucket B^(i)_2λ + 1 as the tail alternatives of agent i.[Our assumptions m≥ 154 and λ≤logm guarantee that ∑_j=1^2λb_j≤ m and, thus, buckets B_2λ+1^(i) are well-defined.] We proceed to describe the agents' valuations v. Every agent i assigns a value of 0 to each of her tail alternatives, i.e., _i(a) = 0 for every a ∈ B^(i)_2λ + 1. For j∈ [2λ], agent i assigns to all the alternatives of bucket B_j^(i) either a low value of m^2λ - j/3 λ or a high value of m^2λ - j + 1/3 λ in the following way. Whenever the mechanism makes a query for the value of an alternative in bucket B_j^(i), the concealed value of each alternative in bucket B_j^(i) is set to the low value, i.e., _i(a)=m^2λ-j/3λ for every alternative a∈ B_j^(i); this value is also revealed as the outcome of the query. Now, consider a bucket B_j^(i), in which mechanism did not query the value of any alternative. The concealed values of all alternatives in this bucket are set to the low value m^2λ - j/3 λ if the winning alternative (P,v) belongs to the bucket and the high value m^2λ - j+1/3 λ otherwise. Figure <ref> shows an example that demonstrates this approach. Let a=(P,v). Observe that alternative a belongs to bucket B_j^(i) for b_j different choices of i∈ N. Hence, (a,v) = ∑_i ∈ N ∑_j ∈ [2λ]: a∈ B_j^(i) m^2λ - j/3 λ = ∑_j ∈ [2λ] b_j · m^2λ - j/3 λ = ∑_j ∈ [2λ]⌈ m^j/3 λ⌉· m^2λ - j/3 λ≤ 4λ m^2/3. We now compute the sum of the social welfare over all alternatives by summing up all the values in every bucket of every agent. To do so, define the subsets H and L of N× [2λ] as follows. L consists of the pairs (i,j) such that either (P,v) ∈ B_j^(i) or mechanism queried the value of some alternative in bucket B_j^(i). Let H=N× [2λ]∖ L. We have ∑_a∈ A(a,v) = ∑_i∈ N(∑_j∈ [2λ]:(i,j)∈ Lb_j· m^2λ - j/3 λ+∑_j∈ [2λ]:(i,j)∈ Hb_j· m^2λ - j+1/3 λ) = ∑_i∈ N(∑_j∈ [2λ]b_j· m^2λ - j/3 λ+∑_j∈ [2λ]:(i,j)∈ Hb_j·(m^2λ - j+1/3 λ-m^2λ - j/3 λ)) ≥∑_i∈ N∑_j∈ [2λ]:(i,j)∈ Hb_j· m^2λ - j+1/3 λ≥∑_i∈ N∑_j∈ [2λ]:(i,j)∈ Hm^2λ +1/3 λ =|H|· m^2λ +1/3 λ. Now, observe that for every agent i, the set L contains at most λ+1 pairs (i,·) for the up to λ buckets in which queried the value of some alternative and (possibly) one extra bucket that contains alternative a. Hence, |L|≤ (λ+1)· n and, consequently, |H|≥ (λ-1)· n. Recalling that n=m, inequality (<ref>) yields max_a∈ A(a,v) ≥1/m·∑_a∈ A(a,v)≥ (λ-1)· m^2λ +1/3 λ. The desired lower bound of λ-1/4λ· m^1/3λ≥1/8· m^1/3λ on the distortion now follows by inequalities (<ref>) and (<ref>). §.§ Lower bounding the distortion of 1-query mechanisms In Section <ref>, we saw that a single query per agent is sufficient to guarantee constant average distortion when the agents draw their valuations according to a binary distribution. 
In the traditional setting of worst-case distortion, <cit.> proved that any deterministic 1-query mechanism must have distortion Ω(m). However, their lower bound construction uses valuations that are more complex than binary valuations where the agents have a value of either 1 or 0 for each alternative. Our next result shows that, even in the case of valuations as simple as binary valuations, the worst-case distortion of any deterministic mechanism is still Ω(√(m)). Every deterministic 1-query mechanism has a worst-case distortion of at least Ω(√(m)). This is true even if the agents have binary values for each of the alternatives. Let m≥ 16 and t be the largest even integer such that t^2≤ m. Clearly, t∈Ω(√(m)). We consider the profile P = {≻_i}_i∈ N with n=t^2 agents so that for every agent i ∈ N, the ranking ≻_i has the form i ≻_i i+1 ≻_i …≻_i t^2 ≻_i 1 ≻_i ... ≻_i i-1 ≻_i t^2+1 ≻_i …≻_i m. We divide the t^2 agents into t groups, each containing t agents. We call these groups cohorts. For k ∈ [t], the k-th cohort _k contains the agents (k-1)t +1,…, kt. Due to the symmetry of P and the assumption that n=t^2, an element in _k may refer to an agent i∈_k as well as to an alternative j that is the top-ranked alternative of agent j∈_k. Figure <ref> shows an example of our lower bound construction. Let _i,j denote the value that agent i has for the alternative in the j-th position of her ranking. For every query that the mechanism makes, we reveal the following information: * If agent i is the first agent of cohort _k who is queried by the mechanism at a position j ≤ t, we reveal _i,j = 1. * Otherwise, we reveal _i,j = 0. The latter item includes the case where the mechanism queries an agent at any position j > t as well as the case where the mechanism already queried an agent of cohort _k at a position j ≤ t. With the next two lemmas, we distinguish two cases that —taken together— cover all possible choices a mechanism may make for querying positions in P. In each case, we are able to fix the agents' valuations in a way that is consistent with the values revealed to the mechanism and results in a high distortion. The following piece of notation will be helpful for this purpose. Let N_≤ j(a) be the set of agents that rank alternative a at position j or higher, that is, N_≤ j(a)={i∈ N : _≻_i(a) ≤ j}. Assume that there is a cohort _k such that mechanism does not query any agent i ∈_k at a position j ≤ t. Then, there exists a choice of valuations v that is consistent with profile P and the values revealed to , such that max_a ∈ A(a,v)/((P,v),v)≥ t/2. First, assume that (P,v) = a ∉_k. Consider the alternative a' = kt ∈_k. Note that N_≤ t(a') = _k, that is, all agents of cohort _k rank alternative a' at a position j ≤ t. Since the algorithm did not query any of these positions, we are free to fix the concealed values for every i ∈ N_≤ t(a') such that _i,j= 1 for every position j ≤_≻_i(a') 0 otherwise. The remaining concealed values (outside of cohort _k) are set to 0 except for those positions where a value of 1 is implied by a revealed value of 1 at a position further down in the ranking. Thereby, (a',v) = t. Furthermore, notice that the set N_≤ t(a) intersects with at most two cohorts and alternative a receives a value of 0 from every agent i∈ N_≤ t(a'). Thus, (a,v)≤ 2, which shows that the lemma holds for (P,v) = a ∉_k. Now, let (P,v) = a ∈_k. 
We will further distinguish between the cases where a ≤ (k-1/2)t (i.e., a is in the upper half of _k; see Figure <ref>) and a > (k-1/2)t (i.e., a is in the lower half of _k). For both cases, we first show that there is an alternative a'≠ a such that (a',v) ≥ t/2. Case 1: a ≤ (k-1/2)t. Consider alternative a' = kt. Note that N_≤ t(a') = _k. Furthermore, since a is in the upper half of the cohort, it holds that |N_≤ t(a') ∖ N_≤ t(a)| ≥ t/2. Since did not query any agent from _k at a position j ≤ t, we can fix the concealed values for every i∈ N_≤ t(a') ∖ N_≤ t(a) such that _i,j= 1 for every position j ≤_≻_i(a') 0 otherwise. Thereby, (a',v) ≥ t/2. Case 2: a > (k-1/2)t. Let a' = a-1 and note that |N_≤ t(a') ∩_k| ≥ t/2. Since did not query any agent of cohort _k at a position j ≤ t, we are free to fix the concealed values for every agent i ∈ N_≤ t(a') ∩_k such that _i,j= 1 for every position j ≤_≻_i(a') 0 otherwise. Thereby, (a',v) ≥ t/2 in this case as well. In both cases, the remaining concealed values are set to 0 except for those positions where a value of 1 is implied by a revealed value of 1 at a position further down in the ranking. In particular, this means that _i(a) = 0 for every agent i ∈ N_≤ t(a) ∩_k. By assumption, did not query any of these agents at a position j≤ t. In case 1, among cohort _k, only the agents N_≤ t(a') ∖ N_≤ t(a) = _k ∖ N_≤ t(a) assign a value of 1 to any alternative. In case 2, among cohort _k, only the agents N_≤ t(a') ∩_k have a value of 1 for any alternative and exclusively for those alternatives ranked above a. Finally, by our construction, there is at most one other cohort k' such that N_≤ t(a) ∩_k'≠∅. Hence, (a,v) ≤ 1 and the lemma follows. Assume that mechanism queries every cohort at at least one position j ≤ t. Then, there exists a choice of valuations v that is consistent with profile P and the values revealed to , such that max_a ∈ A(a,v)/((P,v),v)≥ t/2-1. Since no agent gives any value to alternatives t^2+1, ..., m, the distortion is infinite when (P,v) > t^2. Hence, assume that there is a cohort _k such that (P,v) = a ∈_k. Consider the alternative a' = a-1 t^2. The set of agents N_≤ t(a) and N_≤ t(a') intersect with at most two cohorts, namely, cohort k and cohort k' = k-1 t^2. For every cohort _ℓ where ℓ≠ k,k', it holds by our assumption that there is an agent i_ℓ who is the first agent in this cohort to be queried by at a position j_ℓ≤ t. At this position, there is an alternative other than a or a', and we revealed the value _i_ℓ,j_ℓ=1; see above. Since can make at most one query to each agent, we can set the remaining concealed values such that _i_ℓ,j= 1 for every position j ≤_≻_i_ℓ(a') 0 otherwise for every ℓ≠ k,k'. This implies that _i_ℓ(a) = 0 for every ℓ≠ k,k'. The remaining concealed values are set to 0 except for those positions in _k, _k' where a value of 1 is implied by a revealed value of 1 at a position further down in the ranking. Then, (a,v)≤ 2 since N_≤ t(a) can intersect only with cohorts _k and _k'. On the other hand, by setting the concealed values for every cohort ℓ≠ k,k' as described, it holds that (a',v) ≥ t-2. From this, the lemma follows. Theorem <ref> now follows by combining Lemmas <ref> and <ref>. § DISCUSSION AND OPEN PROBLEMS We have initiated the study of average distortion in a simple stochastic setting that creates impartial culture electorates. The main open problem is whether constant average distortion is possible for general probability distributions of valuations. 
Notice that, throughout the paper, we assume that the distribution is given as part of the input, and this information is crucial to make our mechanisms work. It would be interesting to explore whether this —admittedly strong— requirement can be removed. Other natural extensions of our model include different distributions per alternative, possibly correlated. For the worst-case setting, we have improved the previously best-known lower bound on the number of queries per agent that are necessary for constant worst-case distortion by deterministic mechanisms. Still, the conjecture of <cit.> that constant worst-case distortion is possible with Θ(logm) deterministic queries per agent is wide open. Furthermore, we have also demonstrated that the use of randomization can yield worst-case bounds that the known deterministic mechanisms cannot achieve. Exploring whether there is a separation between deterministic and randomized mechanisms in terms of their worst-case distortion for a given number of queries per agent is another challenging problem that deserves investigation. § PROOF OF LEMMA <REF> For the proof of Lemma <ref>, we present another technical statement as Lemma <ref>. The latter lemma formalizes the intuition that the expectation for the maximum social welfare only increases when a given number of values of 1 is distributed more evenly between two agents. The proof of the lemma leans heavily on notation. We therefore highlight two key properties that the proof of Lemma <ref> exploits. These properties depend crucially on the fact that the agents' rankings are uniformly random and independent. Let r be any non-negative integer. * Property 1: Let u = (u_1,u_2,...,u_n) be a vector with non-negative integer entries. Now, consider the question whether, for a random draw of v∼ F, P ∼(v), there exists an alternative a∈ A that appears in the rankings of at least r agents such that for each of these agents i it holds that _≻_i(a) ≤ u_i. Whether or not such an alternative exists does not depend on the number of values of 1 underlying the agents' rankings. * Property 2: Let a_1 be any alternative appearing in any position in the ranking of agent 1 and let a_2 be any alternative appearing in any position in the ranking of agent 2. The probability of having social welfare of exactly r for agents 3,...,n is exactly the same for a_1 and a_2. We continue with the statement and formal proof of Lemma <ref>. Consider two vectors t=(t_1,t_2,..., t_n) and t'=(t'_1,t'_2,..., t'_n) with non-negative integer entries such that t_j≥ t_k+2 for two entries j,k∈ [n] and t'_j = t_j -1, t'_k = t_k +1 and t'_i = t_i for all i ∈ [n]∖{j,k}. Then, _v∼ F[max_a∈ A(a,v)|X_i(v)=t_i, i∈ [n]] ≤_v∼ F[max_a∈ A(a,v)|X_i(v)=t'_i, i∈ [n]]. Without loss of generality, we assume that j=1 and k=2. For each choice of vector t∈{t,t'}, we will abbreviate the event X_i(v)=t_i for i∈ [n] by C(t). Define the vector u = (u_1,u_2,...,u_n) = (t_1-1,t_2,t_3,...,t_n) and, for a non-negative integer r, let M(r) be the event that there is an alternative a∈ A such that ∑_i∈[n]{_≻_i(a)≤ u_i}≥ r. We denote by M(r) the complement event. By the properties of the expectation and using the law of total expectation, we have _v∼ F[max_a∈ A(a,v)|C(t)] = ∑_r=1^n_v∼ F[max_a∈ A(a,v)≥ r|C(t)] = ∑_r=1^n(_v∼ F P∼(v)[M(r)|C(t)] ·_v∼ F[max_a∈ A(a,v)≥ r|M(r),C(t)]. + ._v∼ F P∼(v)[M(r)|C(t)] ·_v∼ F[max_a∈ A(a,v)≥ r|M(r),C(t)]) = ∑_r=1^n(_v∼ F P∼(v)[M(r)]+_v∼ F P∼(v)[M(r)] ·_v∼ F[max_a∈ A(a,v)≥ r|M(r),C(t)]). 
On the last line, the conditions M(r) and C(t) combined imply that there is indeed an alternative with social welfare at least r. We also used the observation (i.e., property 1 above) that the conditions M(r) and M(r) only depend on the realization of the agents' (uniformly random) rankings. Now, notice that in the previous equality only the term [max_a∈ A(a,v)≥ r|M(r),C(t)] depends on the choice of t among t,t'. In the following, we distinguish between t = t (case 1) and t = t' (case 2). Let a_1,a_2 be the random alternatives appearing in position t_1 of agent 1's ranking and position t_2+1 of agent 2's ranking. Under the conditions M(r), C(t) combined, only the alternatives a_1,a_2 may have social welfare of at least r. Case 1. t = t = (t_1,t_2,t_3,..., t_n). Alternative a_2 has social welfare of at most r-1 due to M(r), C(t) and the fact that _≻_2(a_2)>t_2. Hence, only alternative a_1 can have social welfare of at least r such that _v∼ F[max_a∈ A(a,v)≥ r|M(r),C(t)] = _v∼ F[(a_1,v)≥ r|M(r),C(t)] = _v∼ F P∼(v)[1+{_≻_2(a_1) ≤ t_2}+∑_i=3^n{_≻_i(a_1)≤ t_i}≥ r|M(r)] = _v∼ F P∼(v)[{_≻_2(a_1) ≤ t_2}+∑_i=3^n{_≻_i(a_1)≤ t_i}= r-1]. The last equality follows since, under the condition M(r), alternative a_1 can appear at positions t_i or above for at most r-1 agents among the agents 2,3,...,n. Applying the law of total probability yields _v∼ F P∼(v)[{_≻_2(a_1) ≤ t_2}+∑_i=3^n{_≻_i(a_1)≤ t_i}= r-1] = _v∼ F P∼(v)[∑_i=3^n{_≻_i(a_1)≤ t_i}= r-1]· 1 + _v∼ F P∼(v)[∑_i=3^n{_≻_i(a_1)≤ t_i}= r-2]·_v∼ F P∼(v)[{_≻_2(a_1) ≤ t_2}=1] + _v∼ F P∼(v)[∑_i=3^n{_≻_i(a_1)≤ t_i}< r-2]· 0 = _v∼ F P∼(v)[∑_i=3^n{_≻_i(a_1)≤ t_i}= r-1] + _v∼ F P∼(v)[∑_i=3^n{_≻_i(a_1)≤ t_i}= r-2]·t_2/m. Case 2. t = t'= (t_1-1,t_2+1,t_3,..., t_n). Similar to the previous case, we obtain _v∼ F[max_a∈ A(a,v)≥ r|M(r),C(t')] = _v∼ F P∼(v)[{_≻_1(a_2) ≤ t_2}+∑_i=3^n{_≻_i(a_2)≤ t_i}= r-1] = _v∼ F P∼(v)[∑_i=3^n{_≻_i(a_2)≤ t_i}= r-1] + _v∼ F P∼(v)[∑_i=3^n{_≻_i(a_2)≤ t_i}= r-2]·t_1-1/m. We now use the equalities obtained for the two cases to show that, for every r∈[n], the probability [max_a∈ A(a,v)≥ r|M(r),C(t)] is greater for t = t' than for t = t which proves the lemma. As highlighted above by property 2, it holds that _v∼ F P∼(v)[∑_i=3^n{_≻_i(a_1)≤ t_i}= r-1] = _v∼ F P∼(v)[∑_i=3^n{_≻_i(a_2)≤ t_i}= r-1], and _v∼ F P∼(v)[∑_i=3^n{_≻_i(a_1)≤ t_i}= r-2] = _v∼ F P∼(v)[∑_i=3^n{_≻_i(a_2)≤ t_i}= r-2] due to the agents' rankings being uniformly random. Finally, by the assumption that t_1 ≥ t_2+2, we have that (t_1-1)/m > t_2/m such that, for every r, _v∼ F[max_a∈ A(a,v)≥ r|M(r),C(t')] > _v∼ F[max_a∈ A(a,v)≥ r|M(r),C(t)] which concludes the proof. We are now ready to prove Lemma <ref>. Starting from any vector t with non-negative integer entries and applying Lemma <ref> repeatedly, we obtain that _v∼ F[max_a∈ A(a,v)|X_i(v)=t_i, i∈ [n]] ≤_v∼ F[max_a∈ A(a,v)|X_i(v)∈{⌊ s/n ⌋, ⌈ s/n ⌉}, i∈ [n], X(v)=s] = _v∼ F[max_a∈ A(a,v)| (v,s)]. Then, _v∼ F[max_a∈ A(a,v)|X(v)=s] = ∑_t=(t_1, ..., t_n): ∑_i∈ [n]t_i=s_v∼ F[max_a∈ A(a,v)|X_i(v)=t_i, i∈ [n]] ·_v∼ F[X_i(v)=t_i, i∈ [n]|X(v)=s] ≤∑_t=(t_1, ..., t_n): ∑_i∈ [n]t_i=s_v∼ F[max_a∈ A(a,v)| (v,s)]·_v∼ F[X_i(v)=t_i, i∈ [n]|X(v)=s] =_v∼ F[max_a∈ A(a,v)| (v,s)], implying the lemma. § PROOF OF LEMMA <REF> In the proof of Lemmas <ref> and <ref>, we will use the following simple claim. Let k be a positive integer, S and S' sets of agents with |S|≤ |S'|, and T_i a set of k consecutive positions in agent i∈ S. 
Then, _v∼ F P∼(v)[max_a∈ A∑_i∈ S{_≻_i(a)∈ T_i}] ≤_v∼ F P∼(v)[max_a∈ A∑_i∈ S'{_≻_i(a)≤ k}] To see why the claim holds, observe that the expectations are defined over uniformly random profiles. Hence, the probability that an alternative appears in any position of any agents' ranking is equal to 1/m, i.e., independent from both the agent and the position. For the proof of Lemma <ref>, define the sets of agents N_1={1, ..., ⌊ n/2⌋}, N_2={1, ..., ⌊ n/2⌋+1, ..., 2⌊ n/2 ⌋}, and N_3=N∖ (N_1∪ N_2). We have _v∼ F[max_a∈ A(a,v)|(v,jnτ)] = _v∼ F P∼(v)[max_a∈ A∑_i∈ N{_≻_i(a)≤ jτ}] = _v∼ F P∼(v)[max_a∈ A∑_ℓ∈ [3]∑_i∈ N_ℓ∑_k=1^j{(k-1)τ <_≻_i(a)≤ kτ}] ≤∑_ℓ∈ [3]∑_k=1^j_v∼ F P∼(v)[max_a∈ A∑_i∈ N_ℓ{(k-1)τ <_≻_i(a)≤ kτ}] ≤∑_ℓ∈ [3]∑_k=1^j_v∼ F P∼(v)[max_a∈ A∑_i∈ N_1{_≻_i(a)≤τ}] = 3j·_v∼ F[max_a∈ A(a,v)|(v)]=3j· B. The first equality is due to the fact that the condition (v,jnτ) requires that the contribution to the social welfare of each alternative comes from positions from 1 to jτ only. The first inequality uses a simple property of the maximum function and linearity of expectation, while the second one follows by Claim <ref>. The last equality follows since the condition (v) requires that the contribution to the social welfare of each alternative comes from positions from position 1 to τ of exactly ⌊ n/2 ⌋ agents. The last equality uses the definition of B. § PROOF OF LEMMA <REF> Let N_1=[t]. Define the condition (v,t,s) to be Z(v,1)=t, ⌊ s/t⌋≤ X_i(v)≤⌈ s/t ⌉ for i∈ [N_1] and X_i(v)=0 for i∈ N∖ N_1. Using Lemma <ref> in a similar way we used it to prove Lemma <ref>, we can show that _v∼ F[max_a∈ A(a,v)|Z(v,1)=t, X(v)=j] ≤_v∼ F[max_a∈ A(a,v)|(v,t,j)]. So, we have _v∼ F[max_a∈ A(a,v)|Z(v,1)=t, X(v)=j] ≤_v∼ F[max_a∈ A(a,v)|(v,t,j)] ≤_v∼ F[max_a∈ A(a,v)|(v,t,⌈ j/t⌉· t)] = _v∼ F P∼(v)[max_a∈ A∑_i∈ N_1{_≻_i(a)≤⌈ j/t⌉}] =_v∼ F P∼(v)[max_a∈ A∑_i∈ N_1∑_k=1^⌈ j/t⌉{_≻_i(a)=k}] ≤∑_k=1^⌈ j/t⌉_v∼ F P∼(v)[max_a∈ A∑_i∈ N_1{_≻_i(a)=k}] ≤⌈ j/t ⌉·_v∼ F P∼(v)[max_a∈ A∑_i∈ N_1{_≻_i(a)=1}] = ⌈ j/t ⌉·_v∼ F P∼(v)[max_a∈ A(a,P,v,1)|Z(v,1) = t]. The second inequality is due to the fact that the condition (v,t,⌈ j/t⌉· t) implies the condition (v,t,j) and the quantity _v∼ F[max_a∈ A(a,v)|(v,t,j)] is non-decreasing in j. The first equality is due to the fact that condition (v,jnτ) requires that the contribution to the social welfare of each alternative comes from positions 1 to ⌈ j/t⌉ of t agents only. Next, the fourth inequality follows by a simple property of the maximum function and linearity of expectation, while the fifth inequality follows by Claim <ref>. Finally, the last inequality is due to the fact that under the condition Z(v,1)=t, the implied social welfare of each alternative comes from the top position of a set of t agents.
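As a quick numerical sanity check of Claim <ref>, which drives the position-counting arguments in the proofs above, the following short Monte Carlo sketch (in Python; the function and variable names and the sample parameters are ours and purely illustrative) estimates both expectations under independent, uniformly random rankings; the left-hand side should come out no larger than the right-hand side.

```python
import random

def expected_max_window_hits(m_alts, windows, trials=20000, seed=0):
    """Estimate E[ max_a sum_i 1{ rank_i(a) in T_i } ] for independent, uniformly
    random rankings; windows[i] is agent i's set of positions T_i (0-indexed here,
    unlike the 1-indexed positions in the text)."""
    rng = random.Random(seed)
    alts = list(range(m_alts))
    total = 0
    for _ in range(trials):
        hits = [0] * m_alts
        for T in windows:
            ranking = rng.sample(alts, m_alts)   # position -> alternative, uniform
            for pos in T:
                hits[ranking[pos]] += 1
        total += max(hits)
    return total / trials

# Claim: with |S| <= |S'|, windows of k consecutive positions for the agents in S
# give an expected maximum no larger than top-k windows for the agents in S'.
k, m = 3, 10
lhs = expected_max_window_hits(m, [range(2, 2 + k), range(5, 5 + k)])  # |S| = 2
rhs = expected_max_window_hits(m, [range(0, k)] * 3)                   # |S'| = 3
print(lhs, "<=", rhs)
```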
http://arxiv.org/abs/2307.04313v1
20230710025609
Unknotted Curves on Seifert Surfaces
[ "Subhankar Dey", "Veronica King", "Colby T. Shaw", "Bülent Tosun", "Bruce Trace" ]
math.GT
[ "math.GT", "57K30, 57K10" ]
Department of Mathematics University of Alabama Tuscaloosa AL [email protected] Department of Mathematics University of Texas Austin Austin TX [email protected] School of Mathematics Georgia Institute of Technology Atlanta GA [email protected] Department of Mathematics University of Alabama Tuscaloosa AL [email protected] Department of Mathematics University of Alabama Tuscaloosa AL [email protected] [2010]57K33, 57K43, 32E20 We consider homologically essential simple closed curves on Seifert surfaces of genus one knots in S^3, and in particular those that are unknotted or slice in S^3. We completely characterize all such curves for most twist knots: they are either positive or negative braid closures; moreover, we determine exactly which of those are unknotted. A surprising consequence of our work is that the figure eight knot admits infinitely many unknotted essential curves up to isotopy on its genus one Seifert surface, and those curves are enumerated by Fibonacci numbers. On the other hand, we prove that many twist knots admit homologically essential curves that cannot be positive or negative braid closures. Indeed, among those curves, we exhibit an example of a slice but not unknotted homologically essential simple closed curve. We further investigate our study of unknotted essential curves for arbitrary Whitehead doubles of non-trivial knots, and obtain that there is a precisely one unknotted essential simple closed curve in the interior of the doubles' standard genus one Seifert surface. As a consequence of all these we obtain many new examples of 3-manifolds that bound contractible 4-manifolds. Unknotted Curves on Seifert Surfaces Bruce Trace ==================================== § INTRODUCTION Suppose K ⊆ S^3 is a genus g knot with Seifert Surface Σ_K. Let b be a curve in Σ_K which is homologically essential, that is it is not separating Σ_K, and a simple closed curve, that is it has one component and does not intersect itself. Furthermore, we will focus on those that are unknotted or slice in S^3, that is each bounds a disk in S^3 or B^4. In this paper we seek to progress on the following problem: Characterize and, if possible, list all such b's for the pair (K, Σ_K) where K is a genus one knot and Σ_K its Seifert surface. Our original motivation for studying this problem comes from the intimate connection between unknotted or slice homologically essential curves on a Seifert surface of a genus one knot and 3-manifolds that bound contractible 4-manifolds. We defer the detailed discussion of this connection to Section <ref>, where we also provide some historical perspective. For now, however, we will focus on getting a hold on the stated problem above for a class of genus one knots, and as we will make clear in the next few results, this problem is already remarkably interesting and fertile on its own. §.§ Main Results. A well studied class of genus one knots is so called twist knot K=K_t which is described by the diagram on the left of Figure <ref>. We note that with this convention K_-1 is the right-handed trefoil T_2,3 and K_1 is the figure eight knot 4_1. We will consider the genus one Seifert surface Σ_K for K=K_t as depicted on the right of Figure <ref>. The first main result in this paper is the following. Let t≤ 2. Then the genus one Seifert surface Σ_K of K=K_t admits infinitely many homologically essential, unknotted curves, if and only if t=1, that is K is the figure eight knot 4_1. 
Indeed, we can be more precise and characterize all homologically essential, simple closed curves on Σ_K, from which Theorem <ref> follows easily. To state this we recall an essential simple closed curve c on Σ_K can be represented (almost uniquely) by a pair of non-negative integers (m,n) where m is the number of times c=(m,n) runs around the left band and n is the number of times it runs around the right band in Σ_K. Moreover, since c is connected, we can assume gcd(m,n) = 1. Finally, to uniquely describe c, we adopt the notation of ∞ curve and loop curve for a curve c, if the curve has its orientation switches one band to the other and it has the same orientation on both bands, respectively (See Figure <ref>). Let K=K_t be a twist knot and Σ_K its Seifert surface as in Figure <ref>. Then; * For K =K_t, t≤ -1, we can characterize all homologically essential simple closed curves on Σ_K as the closures of negative braids in Figure <ref>. In case of the right-handed trefoil K_-1=T_2,3, exactly 6 of these, see Figure <ref>, are unknotted in S^3. For t<-1, exactly 5 of these, see Figure <ref>, are unknotted in S^3. * For K=K_1=4_1, we can characterize all homologically essential simple closed curves on Σ_K as the closures of braids in Figure <ref>. A curve on this surface is unknotted in S^3 if and only if it is (1) a trivial curve (1,0) or (0,1), (2) an ∞ curve in the form of (F_i+1,F_i), or (3) a loop curve in the form of (F_i,F_i+1), where F_i represents the i^th Fibonacci number, see Figure <ref>. For twist knot K=K_t with t>1 the situation is more complicated. Under further hypothesis on the parameters m,n we can obtain results similar to those in Theorem <ref>, and these will be enough to extend the theorem entirely to the case of K=K_2, so called Stevedore's knot 6_1 (here we use the Rolfsen's knot tabulation notation). More precisely we have; Let K=K_t be a twist knot and Σ_K its Seifert surface as in Figure <ref>. Then; * When t>1 and m<n, we can characterize all homologically essential simple closed curves on Σ_K as the closures of positive braids in Figure <ref>(a)(b). Exactly 5 of these, see Figure <ref>, are unknotted in S^3. * When t>1 and m>n. * If m-tn>0, then we can characterize all homologically essential simple closed curves on Σ_K as the closures of negative braids in Figure <ref> and  <ref>. Exactly 5 of these, see Figure <ref>, are unknotted in S^3. * If m-n<n and the curve is ∞ curve, then we can characterize all homologically essential simple closed curves on Σ_K as the closures of positive braids Figure <ref>. Exactly 5 of these, see Figure <ref>, are unknotted in S^3. * For K =K_2=6_1, we can characterize all homologically essential simple closed curves on Σ_K as the closures of positive or negative braids. Exactly 5 of these, see Figure <ref>, are unknotted in S^3. What Theorem <ref> cannot cover is the case t>2, m>n and m-tn<0 or when m-n<n and the curve is a loop curve. Indeed in this range not every homologically essential curve is a positive or negative braid closure. For example, when (m,n)=(5,2) and t=3 one obtains that the corresponding essential ∞ curve, as a smooth knot in S^3, is the knot 5_2, and for (m,n)=(7,3) and t=3, the corresponding knot is 10_132 both of which are known to be not positive braid closures–coincidentally, these knots are not unknotted or slice. 
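To illustrate the Fibonacci characterization for the figure eight knot stated in part (2) above, the following short Python sketch (the function names are ours and purely illustrative) lists the first few claimed unknotted curves, checks that each pair is coprime, and applies one step of the reduction (m,n) ∼ (m-n, 2n-m) that is used later in the proof.

```python
from math import gcd

def fibonacci(k):
    """First k Fibonacci numbers F_1, F_2, F_3, ... = 1, 1, 2, 3, 5, 8, ..."""
    fibs = [1, 1]
    while len(fibs) < k:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs[:k]

F = fibonacci(12)
# The unknotted infinity curves (F_{i+1}, F_i) and loop curves (F_i, F_{i+1}) of
# part (2); consecutive Fibonacci numbers are coprime, so each pair is a valid
# connected essential curve on the surface.
inf_curves = [(F[i + 1], F[i]) for i in range(len(F) - 1)]
loop_curves = [(F[i], F[i + 1]) for i in range(len(F) - 1)]
assert all(gcd(m, n) == 1 for m, n in inf_curves + loop_curves)

def reduce_inf(m, n):
    """One step of the isotopy (m, n) ~ (m - n, 2n - m) used later in the proof
    for infinity curves with m > n > m - n."""
    return (m - n, 2 * n - m)

print(inf_curves[:5])                       # [(1, 1), (2, 1), (3, 2), (5, 3), (8, 5)]
print(reduce_inf(8, 5), reduce_inf(5, 3))   # (3, 2) (2, 1): each step stays Fibonacci
```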
Moreover we can explicitly demonstrate, see below, that if one removes the assumption of “∞” from part 2(b) in Theorem <ref>, then the conclusion claimed there fails for certain loop curves when t>2. A natural question is then whether for knot K =K_t with t>2, m>n and m-tn<0 or m-n<n loop curve, there exists unknotted or slice curves on Σ_K other than those listed in Figure <ref>? A follow up question will be whether there exists slice but not unknotted curves on Σ_K for some K=K_t? We can answer the latter question in affirmative as follows: Let K=K_t be a twist knot with t>2 and Σ_K its Seifert surface as in Figure <ref> and consider the loop curve (m,n) with m=3, n=2 on Σ_K. Then this curve, as a smooth knot in S^3, is the pretzel knot P(2t-5, -3, 2). This knot is never unknotted but it is slice (exactly) when t=4, in which case this pretzel knot is also known as the curious knot 8_20. We note that the choices of m,n values made in Theorem <ref> are somewhat special in that they yielded an infinite family of pretzel knots, and that it includes a slice but not unknotted curve. Indeed, by using Rudolph's work in <cit.>, we can show (see Proposition <ref>) that the loop curve (m,n) with m-n=1, n>2 and t>4 on Σ_K, as a smooth knot in S^3, is never slice. The calculation gets quickly complicated once m-n>1, and it stays an open problem if in this range one can find other slice but not unknotted curves. We can further generalize our study of unknotted essential curves on minimal genus Seifert surface of genus one knots for the Whitehead doubles of non-trivial knots. We first introduce some notation. Let P be the twist knot K_t embedded (where t=0 is allowed) in a solid torus V⊂ S^3, and K denote an arbitrary knot in S^3, we identify a tubular neighborhood of K with V in such a way that the longitude of V is identified with the longitude of K coming from a Seifert surface. The image of P under this identification is a knot, D^±(K,t), called the positive/negative t–twisted Whitehead double of K. In this situation the knot P is called the pattern for D^±(K,t) and K is referred to as the companion. Figure <ref> depicts the positive -3–twisted Whitehead double of the left-handed trefoil, D^+(T_2,-3, -3). If one takes K to be the unknot, then D^+(K,t) is nothing but the twist knot K_t. Let K denote a non-trivial knot in S^3. Suppose that Σ_K is a standard genus one Seifert surface for the Whitehead double of K. Then there is precisely one unknotted homologically essential, simple closed curves in the interior of Σ_K. §.§ From unknotted curves to contractible 4-manifolds. The problem of finding unknotted homologically essential curves on a Seifert surface of a genus one knot is interesting on its own, but it is also useful for studying some essential problems in low dimensional topology. We expand on one of these problems a little more. An important and still open question in low dimensional topology asks: which closed oriented homology 3-sphere [A homology 3-sphere/4-ball is a 3-/4- manifold having the integral homology groups of S^3/B^4.] bounds a homology 4-ball or contractible 4-manifold (see <cit.>). This problem can be traced back to the famous Whitney embedding theorem and other important subsequent results due to Hirsch, Wall and Rokhlin <cit.> in the 1950s. Since then the research towards understanding this problem has stayed active. 
It has been shown that many infinite families of homology spheres do bound contractible 4-manifolds <cit.> and at the same time many powerful techniques and invariants, mainly coming from Floer and gauge theories <cit.> have been used to obtain constraints. In our case, using our main results, we will be able to list some more homology spheres that bound contractible 4-manifolds. This is because of the following theorem of Fickle <cit.>. Let K be a knot in S^3 which has a genus one Seifert surface F with a primitive element [b]∈ H_1(F) such that the curve b is unknotted in S^3. If b has self-linking s, then the homology sphere obtained by 1/(s± 1) Dehn surgery on K bounds a contractible [Indeed, this contractible manifold is a Mazur-type manifold, namely it is a contractible 4-manifold that has a single handle of each index 0, 1 and 2 where the 2-handle is attached along a knot that links the 1-handle algebraically once. This condition yields a trivial fundamental group.] 4-manifold. This result in <cit.> was generalized to genus one knots in the boundary of an acyclic 4–manifold W, and where the assumption on the curve b is relaxed so that b is slice in W. This will be useful for applying to the slice but not unknotted curve/knot found in Theorem <ref>. The natural task is to determine self-linking number s, with respect to the framing induced by the Seifert surface, for the unknotted curves found in Theorem <ref> and <ref>. For this we use the Seifert matrix given by S = [ -1 -1; 0 t ] where we use two obvious cycles–both oriented counterclockwise–in Σ_K. Recall that, if c=(m,n) is a loop curve then m and n strands are endowed with the same orientation and hence the same signs. On the other hand for ∞ curve they will have opposite orientation and hence the opposite signs. Therefore, given t, the self-linking number of c=(m,n) loop curve is s=-m^2-mn+n^2t, and the self-linking number of (m,n) ∞ curve is s=-m^2+mn+n^2t. A quick calculation shows that the six unknotted curves in Figure <ref> for K_-1=T_2,3 share self-linking numbers s=-1, -3. As we will see during the proof of Theorem <ref> the infinitely many unknotted curves for the figure eight knot K_1=4_1 reduce (that are isotopic) to unknotted curves with s=-1 or s=1. The five unknotted curves in Figure <ref> for K_t, t<-1 or t>1, share self-linking numbers s=-1, t and t-2 (see <cit.> and references therein for some relevant work). Finally, Theorem <ref> finds a slice but not unknotted curve which is the curve (3,2) with t=4. One can calculate from the formula above that this curve has self-linking number s=1. Finally, the unique unknotted curve from Theorem <ref> has self linking s=-1. Thus, as an obvious consequence of these calculations and Theorem <ref> and its generalization in <cit.> we obtain: Let K be any non-trivial knot. Then, the homology spheres obtained by * -1/2 Dehn surgery on D^+(K,t) * ±1/2 Dehn surgery on K_1=4_1 * -1/2 and -1/4 Dehn surgeries on K_-1=T_2,3 * -1/2 and 1/t±1 and 1/(t-2)±1 Dehn surgeries on K_t, t≠± 1 * 1/2 Dehn surgery on K_4 bound contractible 4-manifolds. The 3-manifolds in part (3) are Brieskorn spheres Σ(2,3,13) and Σ(2,3,25); they were identified by Casson-Harer and Fickle that they bound contractible 4-manifolds. Also, it was known already that the result of 1/2 Dehn surgery on the figure eight knot bounds a contractible 4-manifold (see <cit.>) from this we obtain the result in part (2) as the figure eight knot is an amphichiral knot. 
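For concreteness, the following small Python sketch evaluates the two self-linking formulas displayed above (the function name and the sample inputs are ours and purely illustrative; which (m,n) pairs the unknotted curves in the figures actually are is not needed here, the point is only how the Seifert matrix formulas are evaluated).

```python
def self_linking(m, n, t, curve_type):
    """Self-linking number of an (m, n) curve on Sigma_K for K = K_t, computed
    from the Seifert matrix [[-1, -1], [0, t]] exactly as in the formulas above."""
    if curve_type == "loop":   # both bands traversed with the same orientation
        return -m * m - m * n + t * n * n
    if curve_type == "inf":    # opposite orientations on the two bands
        return -m * m + m * n + t * n * n
    raise ValueError("curve_type must be 'loop' or 'inf'")

# Figure eight knot (t = 1): on the Fibonacci infinity curves the formula gives
# s = 1 or -1, matching the discussion above, so Fickle's theorem produces the
# +/- 1/2 surgeries of the corollary.
print([self_linking(a, b, 1, "inf") for (a, b) in [(2, 1), (5, 3), (3, 2), (8, 5)]])
# -> [-1, -1, 1, 1]

# General t: the trivial curve (1, 0) gives s = -1, while the (1, 1) loop and
# infinity curves give s = t - 2 and s = t; these are the three values quoted above.
print(self_linking(1, 0, 5, "loop"), self_linking(1, 1, 5, "loop"), self_linking(1, 1, 5, "inf"))
# -> -1 3 5
```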
It is known that the result of 1/n Dehn surgery on a slice knot K⊂ S^3 bounds a contractible 4-manifold. To see this, note that at the 4-manifold level with this surgery operation what we are doing is to remove a neighborhood of the slice disk from B^4 (the boundary at this stage is zero surgery on K) and then attach a 2-handle to a meridian of K with framing -n. Now, simple algebraic topology arguments shows that this resulting 4-manifold is contractible. It is a well known result that <cit.>; a nontrivial twist knot K=K_t is slice if and only if K=K_2 (Stevedore's knot 6_1). So, by arguments above we already know that result of 1/n surgery on K_2 bounds contractible 4-manifold for any integer n. But interestingly we do not recover this by using Theorem <ref>. The paper is organized as follows. In Section <ref> we set some basic notations and conventions that will be used throughout the paper. Section <ref> contains the proofs of Theorem <ref>,  <ref> and  <ref>. Our main goal will be to organize, case by case, essential simple closed curves on genus one Seifert surface Σ_K, through sometimes lengthy isotopies, into explicit positive or negative braid closures. Once this is achieved we use a result due to Cromwell that says the Seifert algorithm applied to the closure of a positive/negative braid closure gives a minimal genus surface. This together with some straightforward calculations will help us to determine the unknotted curves exactly. But sometimes it will not be obvious or even possible to reduce an essential simple closed curve to a positive or negative closure (see Section <ref>,  <ref> and  <ref>). Further analyzing these cases will yield interesting phenomenon listed in Theorem  <ref> and  <ref>. Section <ref> contains the proof of Theorem <ref>. §.§ Acknowledgments We thank Audrick Pyronneau and Nicolas Fontova for helpful conversations. The first, second and third authors were supported in part by a grant from NSF (DMS-2105525). The fourth author was supported in part by grants from NSF (CAREER DMS-2144363 and DMS-2105525) and the Simons Foundation (636841, BT). § PRELIMINARIES In this section, we set some notation and make preparations for the proofs in the next three sections. In Figure <ref> we record some basic isotopies/conventions that will be repeatedly used during proofs. Most of these are evident but for the reader's convenience we explain how the move in part (f) works in Figure <ref>. We remind the reader that letters on parts of our curve, as in part (e) of the figure, or in certain location is to denote the number of strands that particular curve has. Recall also an essential, simple closed curve on Σ_K can be represented by a pair of non-negative integers (m,n) where m is the number of times it runs around the left band and n is the number of times it runs around the right band in Σ_K, and since we are dealing with connected curves we must have that m,n are relatively prime. We have two cases: m>n or n>m. For an (m,n) curve with m>n, after the m strands pass under the n strands on the Seifert surface, it can be split into two sets of strands. For this case, assume that the top set is made of n strands. They must connect to the n strands going over the right band, leaving the other set to be made of m-n strands. Now, we can split the other side of the set of m strands into two sections. The m-n strands on the right can only go to the bottom of these two sections, because otherwise the curve would have to intersect itself on the surface. 
This curve is notated an (m,n) ∞ curve. See Figure <ref>(a). The other possibility for an (m,n) curve with m>n, has n strands in the bottom set instead, which loop around to connect with the n strands going over the right band. This leaves the other to have m-n strands. We can split the other side of the set of m strands into two sections. The m-n strands on the right can only go to the top of these two sections, because again otherwise the curve would have to intersect itself on the surface. The remaining subsection must be made of n strands and connect to the n strands going over the right band. This curve is notated as an (m,n) loop curve. See Figure <ref>(b). The case of (m,n) curve with n>m is similar. See Figure <ref>(c)&(d). § TWIST KNOTS In this section we provide the proofs of Theorem <ref>, <ref> and  <ref>. We do this in four parts. Section <ref> and <ref> contains all technical details of Theorem <ref>, Section <ref> contains details of Theorem <ref> and Section <ref> contains Theorem <ref> . §.§ Twist knot with t<0 In this section we consider twist knot K=K_t, t≤ -1. This in particular includes the right-handed trefoil K_-1. All essential, simple closed curves on Σ_K can be characterized as the closure of one of the negative braids in Figure <ref>. It suffices to show all possible curves for an arbitrary m and n such that gcd(m, n) = 1 are the closures of either braid in Figure <ref>. As mentioned earlier we will deal with cases where both m, n ≥1 since cases involving 0 are trivial. There are four cases to consider. The arguments for each of these will be quite similar, and so we will explain the first case in detail and refer to to the rather self-explanatory drawings/figures for the remaining cases. Case 1: (m,n) ∞ curve with m>n>0. This case is explained in Figure <ref>. The picture on top left is the (m,n) curve we are interested. The next picture to its right is the (m,n) curve where we ignore the surface it sits on and use the convention from Figure <ref>(e). The next picture is an isotopy where we push the split between n strands and m-n strands along the dotted blue arc. The next three pictures are obtained by applying simple isotopies coming from Figure <ref>. For example, the passage from the bottom right picture to one to its left is via Figure <ref>(c). Finally, the picture on the bottom left, one can easily see that, is the closure of the negative braid depicted in Figure <ref>(a). Case 2: (m,n) loop curve with m>n>0. By series isotopies, as indicated in Figure <ref>, the (m,n) curve in this case can be simplified to the knot depicted on the right of Figure <ref>, which is the closure of negative braid in Figure <ref>(b). Case 3: (m,n) ∞ curve with n>m>0. By series isotopies, as indicated in Figure <ref>, the (m,n) curve in this case can be simplified to the knot depicted on the bottom left of Figure <ref>, which is the closure of negative braid in Figure <ref>(c). Case 4: (m,n) loop curve with n>m>0. By series isotopies, as indicated in Figure <ref>, the (m,n) curve in this case can be simplified to the knot depicted on the right of Figure <ref>, which is the closure of negative braid in Figure <ref>(d). Next, we determine which of those curves in Proposition <ref> are unknotted. It is a classic result due to Cromwell <cit.> (see also <cit.>) that the Seifert algorithm applied to the closure of a positive braid gives a minimal genus surface. Let β be a braid as in Figure <ref> and K = β̂ be its closure. 
Let s(K) be the number of Seifert circles and l(K) be the number of crossings. Then:

(s(K), l(K)) =
(m, |t|n(n-1) + (m-n)(m-n-1) + n(m-n)) for β as in Figure <ref>(a),
(m+n, (|t|+1)n(n-1) + (m-n)(m-n-1) + nm + 2n(m-n)) for β as in Figure <ref>(b),
(n, (|t|-1)n(n-1) + (n-m)(n-m-1) + m(m-1) + m(n-m)) for β as in Figure <ref>(c),
(m+n, |t|n(n-1) + m(m-1) + nm) for β as in Figure <ref>(d).

Consider the braid β as in Figure <ref>(a). Clearly, it has m Seifert circles, as β has m strands. Next, we analyze the three locations in which crossings occur. First, the t negative full twists on n strands: since each strand crosses over the other n-1 strands, we obtain |t|n(n-1) crossings. Second, the negative full twist on m-n strands produces an additional (m-n)(m-n-1) crossings. Lastly, in the part of β where the m-n strands pass over the other n strands, each of the m-n strands contributes an additional n crossings. Hence for K=β̂ we calculate: l(β̂) = |t|n(n-1) + (m-n)(m-n-1) + n(m-n). The calculations for the other cases are similar.

We can now prove the first part of Theorem <ref>. Proposition <ref> proves the first half of our theorem. To see that there are exactly six unknotted curves when t=-1 and five when t<-1, let B be the set containing the six and five unknotted curves as in Figure <ref> and <ref>, respectively. It suffices to show that an essential, simple closed curve c on Σ_K with c ∉B cannot be unknotted in S^3. We know by Proposition <ref> that c is the closure of one of the braids in Figure <ref> in S^3, where m,n ≥ 1, gcd(m,n) = 1. We show, case by case, that the Seifert surface obtained via the Seifert algorithm for a curve c∉B has positive genus, and hence c cannot be unknotted.
* Let c=(m,n) be the closure of the negative braid as in Figure <ref>(a) and Σ_c its Seifert surface obtained by the Seifert algorithm. There are m Seifert circles and by Proposition <ref> l(c) = |t|n(n-1) + (m-n)(m-n-1) + n(m-n). Hence, g(Σ_c) = (1 + l - s)/2 = (m(m-n-2) + n(|t|(n-1)+1) + 1)/2. If m=n+1, then we get g(Σ_c)=|t|n(n-1)/2, which is positive as long as n>1 (note that when c=(2,1) we indeed get an unknotted curve). If m>n+1, then g(Σ_c) ≥ (n(|t|(n-1)+1)+1)/2 > 0 as long as n>0. So, c∉B is not an unknotted curve as long as m>n≥ 1.
* Let c=(m,n) be the closure of the negative braid as in Figure <ref>(b) and Σ_c its Seifert surface obtained by the Seifert algorithm. There are n+m Seifert circles and by Proposition <ref> l(c) = (|t|+1)n(n-1) + (m-n)(m-n-1) + nm + 2n(m-n). Hence, g(Σ_c) = (m(m+n-2) + n(|t|(n-1)-1) + 1)/2. One can easily see that this quantity is always positive as long as n≥ 1. So, c∉B is not an unknotted curve when m>n≥ 1.
* Let c=(m,n) be the closure of the negative braid as in Figure <ref>(c) and Σ_c its Seifert surface obtained by the Seifert algorithm. There are n Seifert circles and by Proposition <ref> l(c) = (|t|-1)n(n-1) + (n-m)(n-m-1) + m(m-1) + m(n-m). Hence, g(Σ_c) = (n(|t|(n-1)-m-1) + m^2 + 1)/2. This is always positive as long as m≥ 1 and |t|≠ 1 (note that when c=(1,2) and |t|=1 we indeed get an unknotted curve). So, c∉B is not an unknotted curve when n>m≥ 1.
* Let c=(m,n) be the closure of the negative braid as in Figure <ref>(d) and Σ_c its Seifert surface obtained by the Seifert algorithm. There are n+m Seifert circles and by Proposition <ref> l(c) = |t|n(n-1) + m(m-1) + nm. Hence, g(Σ_c) = (|t|n(n-1) + m(m-2) + n(m-1) + 1)/2. One can easily see that this quantity is always positive as long as m≥ 1.
So, c∉B is not an unknotted curve when n>m≥ 1. This completes the first part of Theorem <ref>. §.§ Figure eight knot The case of figure eight knot is certainly the most interesting one. It is rather surprising, even to the authors, that there exists a genus one knot with infinitely many unknotted curves on its genus one Seifert surface. As we will see understanding homologically essential curves for the figure eight knot will be similar to what we did in the previous section. The key difference develops in Case 2 and 4 below where we show how, under certain conditions, a homologically essential (m,n) ∞ (resp. (m,n) loop) curve can be reduced to the homologically essential (m-n, 2n-m) ∞ (resp. (2m-n, n-m) loop) curve, and how this recursively produces infinitely many distinct homology classes that are represented by the unknot, and we will show that certain Fibonacci numbers can be used to describe these unknotted curves. Finally we will show fort he figure eight knot this is the only way that an unknotted curve can arise. Adapting the notations developed thus far we start characterizing homologically essential simple closed curves on genus one Seifert surface Σ_K of the figure eight knot K. All essential, simple closed curves on Σ_K can be characterized as the closure of one of the braids in Figure <ref> (note the first and third braids from the left are negative and positive braids, respectively). The curves (1,0), (0,1) are clearly unknots. Moreover, because gcd(m,n)=1, the only curve with n=m is (1,1) curve, which is also unknot in S^3. For the rest of the arguments below, we will assume n>m or m>n. There are four cases to consider: Case 1: (m,n) loop curve with m>n>0. This curve can be turned into a negative braid following the process in Figure <ref>. Case 2: (m,n) ∞ curve with m> n>0. As mentioned at the beginning, this case (and Case 4) are much more involved and interesting (in particular the subcases of Case 2c and 4c). Following the process as in Figure <ref>, the curve can be isotoped as in the bottom right of that figure, which is the closure of the braid on its left–that is the second braid from the left in Figure <ref>. Case 3: (m,n) ∞ curve with n>m>0. This curve can be turned into a positive braid following the process in Figure <ref>. Case 4: (m,n) loop curve with n> m>0. This curve can be turned into the closure of a braid following the process in Figure <ref>. We next determine which of these curves are unknotted: A homologically essential curve c characterized as in Proposition <ref> is unknotted if and only if it is (a) a trivial curve (1,0) or (0,1), (b) an ∞ curve in the form of (F_i+1,F_i), or (c) a loop curve in the form of (F_i,F_i+1). Let c denote one of these homologically essential curve listed in Proposition <ref>. We will analyze the unknottedness of c in four separate cases. Case 1. Suppose c=(m,n) is the closure of the negative braid in the bottom left of Figure <ref>. Note the minimal Seifert Surface of c, Σ_c, has (n)(m-n)+(m)(m-1) crossings and m Seifert circles. Hence; g(Σ_c) = n(m-n)+(m-1)^2/2 This is a positive integer for all m,n with m>n. So c is never unknotted in S^3 as long m>n>0 . Case 2. Suppose c is of the form in the bottom right of Figure <ref>. Since this curve is not a positive or negative braid closure, we cannot directly use Cromwell's result as in Case 1 or the previous section. There are three subcases to consider. Case 2a: m-n=n. 
Because m and n are relatively prime integers, we must have that m=2, n=1, and we can easily see that this (2,1) curve unknotted. Case 2b: m-n>n. This curve can be turned into a negative braid following the process in Figure <ref>. More precisely, we start, on the top left of that figure, with the curve appearing on the bottom right of Figure <ref>. We extend the split along the dotted blue arc and isotope m strands to reach the next figure. We note that this splitting can be done as by the assumption we have m-2n>0. Then using Figure <ref>(a) and further isotopy we reach the final curve on the bottom right of Figure <ref> which is obviously the closure of the negative braid depicted on the bottom left of that picture. The minimal Seifert Surface coming from this negative braid closure contains m-n circles and (m-2n)n+(m-n)(m-n-1) twists. Hence; g(Σ_c)=(m-2n)n+(m-n)(m-n-2)+1/2. This a positive integer for all integers m,n with m-n>n. So, c is not unknotted in S^3. Case 2c: m-n<n. We organize this curve some more. We start, on the top left of Figure <ref>, with the curve that is appearing on the bottom left of Figure <ref>. We extend the split along the dotted blue arc and isotope m-n strands to reach the next figure, After some isotopies we reach the curve on the bottom left of Figure <ref>. In other words, this subcase of Case 2c leads to a reduced version of the original picture (top left curve in Figure <ref>), in the sense that the number of strands over either handle is less than the number of strands in the original picture. This case can be further subdivided depending on the relationship between 2n-m and m-n, but this braid (or rather its closure) will turn into a (m-n, 2n-m) ∞ curve when m-n>2n-m: Case 2c-i: 2n-m = m-n. This simplifies to 3n=2m. Because gcd(m,n)=1, this will only occur for m=3 and n=2, and the resulting curve is (1,1) ∞ curve. In other words here we observed that (3,2) curve has been reduced to (1,1) curve Case 2c-ii: 2n-m > m-n. This means that we are dealing with a curve under Case 3, and we will see that all curves considered there are positive braid closures. Case2c-iii: 2n-m < m-n. This means we are back to be under Case 2. So for m>n>m-n, the (m,n) ∞ curve is isotopic to the (m-n, 2n-m) ∞ curve. This isotopy series will be notated (m,n) ∼ (m-n, 2n-m). Equivalently, there is a series of isotopies such that (m-n, 2n - m) ∼ (m,n). If (k,l) denote a curve at one stage of this isotopy, then (k,l) ∼ ((k +l) + k, k+l). So, starting with k = l = 1, we recursively obtain: (1,1) ∼ (3,2) ∼ (8,5) ∼ (21, 13) ∼ (55, 34) ∼⋯ In a similar fashion, if we start with k = 2, l = 1 we obtain: (2,1) ∼ (5,3) ∼ (13,8) ∼ (34, 21) ∼ (89, 55) ∼⋯ Notice every curve c above is of the form c = (F_i + 1, F_i), i ∈ℤ_>0 where F_i denotes the i^th Fibonacci number. We will call these Fibonacci curves. We choose (1,1) and (2,1) because they are known unknots. As a result, this relation generates an infinite family of homologically distinct simple closed curves on Σ_K that are unknotted in S^3. Case 3. Suppose a curve, c, is of the form (3), which is the closure of the positive braid depicted in the bottom left of Figure <ref>. An argument similar to that applied to Case 1 can be used to show c is never unknotted in S^3. Case 4. Suppose c is of the form as in the bottom middle of Figure <ref>. Similar to Case 2, there are three subcases to consider. Case 4a: m = n - m. Then 2m=n. Because gcd(m,n)=1, m=1 and n=2, resulting in unknot. Case 4b: n-m>m. 
Then n-2m>0 and following the isotopies in Figure <ref>, the curve can be changed into the closure of positive braid depicted on the bottom right of that figure. Identical to Case 2b, the curve c in this case is never unknotted in S^3. Case 4c: m>n-m. Then 2m-n>0, and we can split the m strands into two: a n-m strands and a 2m-n strands. This case can be further subdivided depending on the relationship between n-m and 2m-n, but this braid will turn into a (2m-n, n-m) loop curve when n-m>2m-n: Case 4c-i: 2m-n = n-m. This simplifies to 3m=2n. Because gcd(m,n)=1, this will only occur for m=2 and n=3, and the resulting curve is a (1,1) loop curve. Case 4c-ii: n-m < 2m-n. This means that we are dealing with a curve under Case 1, and we saw that all curves considered there are negative braid closures. Case 4c-iii: n-m > 2m-n. This means that we are back to be under Case 4. So for n>m>n-m, an (m,n) loop curve has the following isotopy series: (m,n) ∼ (2m-n,n-m). If (k,l) denote a curve at one stage of this isotopy, then the reverse also holds: (k,l) ∼ (k+l, (k+l)+l). As a result, much like Case 2c, we can generate two infinite families of unknotted curves in S^3: (1,1) ∼ (2,3) ∼ (5,8) ∼ (13, 21) ∼ (34, 55) ∼⋯ and (1,2) ∼ (3,5) ∼ (8,13) ∼ (21, 34) ∼ (55, 89) ∼⋯ Notice every curve c is of the form c = (F_i, F_i + 1), i ∈ℤ_>0. Finally, we show that this is the only way one can get unknotted curves. That is, we claim: If a homologically essential curve c on Σ_K for K=4_1 is unknotted, then it must be a Fibonacci curve. From above, it is clear that if our curve c is Fibonacci, then it is unknotted. So it suffices to show if a curve is not Fibonacci then it is not unknotted. We will demonstrate this for loop curves under Case 4. Let c be a loop curve that is not Fibonacci but is unknotted. Since it is unknotted, it fits into either Case 4a or 4c. But the only unknotted curve from Case 4a is (1,1) curve which is a Fibonacci curve, so c must be under Case 4c. By our isotopy relation, (m,n) ∼ (2m-n, n - m). So, the curve can be reduced to a minimal form, say (a,b) where (a,b) ≠ (1,1) and (a,b) ≠ (2,1). We will now analyze this reduced curve (a,b): * If a = b, then (a,b) = (1,1); a contradiction. * If a > b, then (a,b) is under Case 1; none of those are unknotted. * If b - a < a < b, then (a,b) is still under Case 4c, and not in reduced form; a contradiction. * If a < b - a < b, then (a,b) is under Case 4b; none of those are unknotted. * If b-a = a<b, then (a,b) = (2,1); a contradiction. So, it has to be that either (a,b) ∼ (1,1) or (a,b) ∼ (2,1). Hence, it must be that c = (F_i, F_i+1) for some i. The argument for the case where c is an ∞ curve under Case 2 is identical. §.§ Twist knot with t>1–Part 1 In this section we consider twist knot K=K_t, t≥ 2, and give the proof of Theorem <ref>. All essential, simple closed curves on Σ_K can be characterized as the closure of one of the braids in Figure <ref>. It suffices to show all possible curves for an arbitrary m and n such that gcd(m, n) = 1 are the closures of braids in Figure <ref>. Here too there are four cases to consider but we will analyze these in slightly different order than in the previous two sections. Case 1: (m,n) ∞ curve with n>m>0. In this case the curve is the closure of a positive braid, and this is explained in Figure <ref> below. 
More precisely, we start with the curve which is drawn in the top left of the figure, and after a sequence of isotopies this becomes the curve in the bottom right of the figure which is obviously the closure of the braid in the bottom left of the figure. In particular, when n>m≥ 1, none of these curves will be unknotted. Case 2: (m,n) loop curve with n>m>0. In this case too the the curve is the closure of a positive braid, and this is explained in Figure <ref> below. In particular, when n>m>1, none of these curves will be unknotted. In the remaining two cases we will follow slightly different way of identifying our curves as braid closures. As we will see (which is evident in part (c) and (d) of Proposition <ref>) that the braids will not be positive or negative braids for general and m, n and t values. We will then verify how under the various hypothesis listed in Theorem <ref> these braids can be reduced to a positive or negative braids. Case 3: (m,n) ∞ curve with m>n>0. We explain in Figure <ref> below how the (m,n) ∞ curve with m>n>0 is the closure of the braid in the bottom left of the figure. This braid is not obviously a positive or negative braid. Case 3a (m,n) ∞ curve with m>n>0 and m-tn>0. We want to show the braid in the bottom left of Figure <ref> under the hypothesis that m-tn>0 can be made a negative braid. We achieve this in Figure <ref>. More precisely, in part (a) of the figure we see the braid that we are working on. We apply the move in Figure 7(f) and some obvious simplifications to reach the braid in part (d). In part (e) of the figure we re-organize the braid: more precisely, since m-tn>0 and m-n=m-tn+(t-1)n, we can split the piece of the braid in part (d) made of m-n strands as the stack of m-tn strands and set of t-1 n strands. We then apply the move in Figure <ref>(f) repeatedly (t-1 times) to obtain the braid in part (f). We note that the block labeled as “all negative crossings” is not important for our purpose to draw explicitly but we emphasize that each time we apply the move in Figure <ref>(f) it produces a full left handed twist between an n strands and the rest. Next, sliding -1 full twists one by one from n strands over the block of these negative crossings we reach part (g). After further obvious simplifications and organizations in parts (h)–(j) we reach the braid in part (k) which is a negative braid. Case 3b (m,n) ∞ curve with m>n>0 and m-n<n. We want to show in this case the braid in the bottom left of Figure <ref> under the hypothesis that m-n<n can be made a positive braid (regardless of t value). This is achieved in Figures  <ref>. Case 4: (m,n) loop curve with m>n>0. The arguments for this case are identical Case 3 and 3a above. The (m,n) loop curve with m>n>0 is the closure of the braid that is drawn in the bottom left of Figure <ref>. Case 4a (m,n) loop curve with m>n>0 and m-tn>0. We show the braid, which the (m,n) ∞ curve with m>n>0 is closure of, can be made a negative braid under the hypothesis m-tn>0. This follows very similar steps as in Case 3a which is explained through a series drawings in Figure <ref>. Case 4b (m,n) loop curve with m>n>0 and m-n<n. Finally, we consider the (m,n) loop curve with m>n>0 and m-n<n. Interestingly, this curve for t>2 does not have to the closure of a positive or negative braid. This will be further explored in the next section but for now we observe, through Figure <ref>(a)-(c) that when t=2 the curve is the closure of a negative braid: The braid in (a) in the figure is the braid from Figure <ref>(d). 
After applying the move in Figure <ref>(f) and simple isotopies, we obtain the braid in (c), which is clearly a negative braid when t=2. The proof of part (1) follows from Cases 1 and 2 above. Part (2)a/b follows from Cases 3a/b and Case 4a above. As for part (3), observe that when n>m, using Cases 1 and 2 we obtain that all homologically essential curves are the closures of positive braids. When m>n, we have either m-2n>0 or m-2n<0. In the former case we use Cases 3a and 4a to obtain that all homologically essential curves are the closures of negative braids. In the latter case, first note that m-2n<0 is equivalent to m-n<n. Now by Case 3b all homologically essential ∞ curves are the closures of positive braids, and by Case 4b all homologically essential loop curves are the closures of negative braids. Using Cromwell's result and some straightforward genus calculations we deduce that when m>n>1 or n>m≥ 1 there are no unknotted curves among the (positive/negative) braid closures obtained in Cases 1-4 above. Therefore, there are exactly 5 unknotted curves among the homologically essential curves on Σ_K for K=K_t in Theorem <ref>.

§.§ Twist knot with t>1–Part 2 In this section we consider the twist knot K=K_t, t≥ 3, and give the proof of Theorem <ref>. We show that the loop curve (3,2) when t≥ 3 is the pretzel knot P(2t-5, -3, 2). This is explained in Figure <ref>. The braid in (a) is from Figure <ref>(d) with m=3, n=2, where we moved the (t-2) full right-handed twists to the top right end. We take the closure of the braid and cancel the left-handed half twist on the top left with one of the right-handed half twists on the top right to reach the knot in (c). In (c)-(g) we implement simple isotopies, and finally reach, in (h), the pretzel knot P(2t-5, -3, 2). This knot has genus t-1 (<cit.>, Corollary 2.7), and so is never unknotted as long as t>1. This pretzel knot is slice exactly when 2t-5+(-3)=0, that is, when t=4. The pretzel knot P(3,-3,2) is also known as 8_20. An interesting observation is that although P(2t-5, -3, 2) for t>2 is not a positive braid closure, it is a quasi-positive braid closure. The (m,n) loop curve with m-n=1, n>3 and t>4 is never slice. By Rudolph <cit.>, we have that for a braid closure β̂ with k_+≠ k_-, g_4(β̂) ≥ (|k_+ - k_-| - n + 1)/2, where β is a braid in n strands, and k_± is the number of positive and negative crossings in β. For quasi-positive knots, equality holds, in which case the Seifert genus is also the same as the four-ball (slice) genus. Note that this formula can be thought of as a generalization of the Seifert genus formula we used for positive/negative braid closures, since for those braids |k_+ - k_-| is the number of crossings and n, the braid number, is exactly the number of Seifert circles. Thus Rudolph's inequality also shows that the calculations above, which rule out unknotted curves on the various genus one Seifert surfaces, rule out slice curves other than the unknotted ones already found. Now for the loop curve c=(m,n) as in Figure <ref>(c), we have that k_+ = (t-2)n(n-1) and k_- = (m-n)(m-n-1) + 3(m-n)n. Hence, when m-n = 1, we get that k_- = 3n. Notice also that for n ≥ 3, t ≥ 4, we have k_+ > k_-. Thus, for n > 3, t > 4 and m-n=1, we obtain that c=β̂ is never slice, as g_4(c) ≥ ((t-2)n(n-1) - 3n - m + 1)/2 = n((t-2)(n-1) - 4)/2 > 0. It can be manually checked that the (4,3) loop curve when t = 3 is not slice either.
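To make the last computation explicit, here is a minimal Python sketch (our own function names; it uses only Rudolph's inequality and the crossing counts k_± stated above) that evaluates the slice-genus bound for these loop-curve braids.

```python
def rudolph_bound(k_plus, k_minus, strands):
    """Rudolph's lower bound g_4 >= (|k_+ - k_-| - strands + 1)/2 for the closure
    of a braid with k_+ positive and k_- negative crossings (when k_+ != k_-)."""
    return (abs(k_plus - k_minus) - strands + 1) / 2

def loop_curve_bound(m, n, t):
    """Apply the bound to the (m, n) loop-curve braid of Figure (c), using the
    crossing counts from the text: k_+ = (t-2)n(n-1), k_- = (m-n)(m-n-1) + 3(m-n)n,
    with m strands."""
    k_plus = (t - 2) * n * (n - 1)
    k_minus = (m - n) * (m - n - 1) + 3 * (m - n) * n
    return rudolph_bound(k_plus, k_minus, m)

# m - n = 1, n > 3, t > 4: the bound is strictly positive, so these loop curves
# are not slice, as in the proposition.
assert all(loop_curve_bound(n + 1, n, t) > 0 for n in range(4, 9) for t in range(5, 9))

# (3, 2) loop curve with t = 4, i.e. the pretzel P(3, -3, 2) = 8_20: here
# k_+ = 4 and k_- = 6, so the bound is 0 and gives no obstruction; consistent
# with this knot being slice.
print(loop_curve_bound(3, 2, 4))   # 0.0
```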
§ WHITEHEAD DOUBLES In this section we provide the proof of Theorem <ref>. Let f:S^1× D^2→ S^3 denote a smooth embedding such that f(S^1×{0})=K. Set T=f(S^1× D^2). Up to isotopy, the collection of essential, simple closed, oriented curves in ∂ T is parameterized by {mμ+nλ | m, n∈ℤ and gcd(m,n)=1} where μ denotes a meridian in ∂ T and λ denotes a standard longitude in ∂ T coming from a Seifert surface. With this parameterization, the only curves that are null-homologous in T are ±μ and the only curves that are null-homologous in S^3∖int(T) are ±λ. Of course ±μ will bound embedded disks in T, but ±λ will not bound embedded disks in S^3∖int(T) as K is a non-trivial knot. In other words, the only compressing curves for ∂ T in S^3 are meridians. Suppose now that C is a smooth, simple closed curve in the interior of T, and there is a smoothly embedded 2-disk, say Δ, in S^3 such that ∂Δ=C. Since C lies in the interior of T, we may assume that Δ meets ∂ T transversely in a finite number of circles. Initially observe that if Δ∩∂ T=∅, then we can use Δ to isotope C in the interior of T so that the result of this isotopy is a curve in the interior of T that misses a meridional disk for T. Now suppose that Δ∩∂ T≠∅. We show that, in this case too, C can be isotoped to a curve that misses a meridional disk for T. To this end, let σ denote a simple closed curve in Δ∩∂ T such that σ is innermost in Δ. That is, σ bounds a sub-disk, Δ' say, in Δ and the interior of Δ' misses ∂ T. There are two cases, depending on whether or not σ is essential in ∂ T. If σ is essential in ∂ T, then, as has already been noted, σ must be a meridian. As such, Δ' will be a meridional disk in T and C misses Δ'. If σ is not essential in ∂ T, then σ bounds an embedded 2-disk, say D, in ∂ T. It is possible that Δ meets the interior of D, but we can still cut and paste Δ along a sub-disk of D to reduce the number of components in Δ∩∂ T. Repeating this process yields that if C is a smoothly embedded curve in the interior of T and C is unknotted in S^3, then C can be isotoped in the interior of T so as to miss a meridional disk for T. With all this in place, we return to discuss the Whitehead double of K. Suppose that F is a standard, genus 1 Seifert surface for a double of K. See Figure <ref>. The surface F can be viewed as an annulus A with a 1-handle attached to it. Here K is a core circle for A, and the 1-handle is attached to A as depicted in Figure <ref>. Observe that F can be constructed so that it lives in the interior of T. Now, the curve C that passes once over the 1-handle and zero times around A obviously misses a meridional disk for T, and it obviously is unknotted in S^3. On the other hand, if C is any other essential simple closed curve in the interior of F, then C must go around A some positive number of times. It is not difficult to see that, upon orienting, C can be isotoped so that the strands of C going around A are coherently oriented. As such, C is homologous to some non-zero multiple of K in T. This, in turn, implies that C cannot be isotoped in T so as to miss some meridional disk for T. It follows that C cannot be an unknot in S^3. CH A. Casson and J. Harer, Some homology lens spaces which bound rational homology balls, Pacific Journal of Mathematics 96 (1981), no. 1, 23–36. CG A. Casson and C. McA. Gordon, On slice knots in dimension three, Proc. Sympos. Pure Math. XXXII, Amer. Math. Soc. (1978), 39–53. CD T. D. Cochran and C. W. Davis, Counterexamples to Kauffman's conjectures on slice knots, Adv. Math. 274 (2015), 263–284. Cr P.
R. Cromwell, Homogeneous links, J. London Math. Soc. (series 2) 39 (1989), 535–552. 1002465 ET J. B. Etnyre and B. Tosun, Homology spheres bounding acyclic smooth manifolds and symplectic fillings, Michigan Math. J. (2022). Hirsch M. W. Hirsch, On imbedding differentiable manifolds in euclidean space, Ann. of Math. (2) 73 (1961), 566–571. 124915 Fickle H. C. Fickle, Knots, Z-homology 3-spheres and contractible 4-manifolds, Houston J. Math. 10 (1984), no. 4, 467–493. 774711 FintushelStern84 R. Fintushel and R. J. Stern, A μ-invariant one homology 3-sphere that bounds an orientable rational ball, Four-manifold theory (Durham, N.H., 1982), Contemp. Math., vol. 35, Amer. Math. Soc., Providence, RI, 1984, pp. 265–268. 780582 Kirby:problemlist R. Kirby, Problems in low dimensional manifold theory, Algebraic and geometric topology (Proc. Sympos. Pure Math., Stanford Univ., Stanford, Calif., 1976), Part 2, Proc. Sympos. Pure Math., XXXII, Amer. Math. Soc., Providence, R.I., 1978, pp. 273–312. 520548 KimLee D. Kim and J. Lee, Some invariants of pretzel links, Bull. Austral. Math. Soc. 75 (2007), 253–271. Manolescu:T C. Manolescu, Pin(2)-equivariant Seiberg-Witten Floer homology and the triangulation conjecture, J. Amer. Math. Soc. 29 (2016), no. 1, 147–176. 3402697 rudolph L. Rudolph, Quasipositivity as an obstruction to sliceness, Bull. Amer. Math. Soc. 29 (1993). Rohlin V. A. Rohlin, The embedding of non-orientable three-manifolds into five-dimensional Euclidean space, Dokl. Akad. Nauk SSSR 160 (1965), 549–551. 0184246 Rohlin:3manembedding V. A. Rohlin, The embedding of non-orientable three-manifolds into five-dimensional Euclidean space, Dokl. Akad. Nauk SSSR 160 (1965), 549–551. 0184246 Stern R. Stern, Some Brieskorn spheres which bound contractible manifolds, Notices Amer. Math. Soc. 25 (1978). St A. Stoimenow, Positive knots, closed braids and the Jones polynomial, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (5) Vol. II (2003), 237–285. 2004964 tosun:survey B. Tosun, Stein domains in ℂ^2 with prescribed boundary, Adv. Geom. 22(1) (2022), 9–22. 4371941 Wall:embedding C. T. C. Wall, All 3-manifolds imbed in 5-space, Bull. Amer. Math. Soc. 71 (1965), 564–567. 175139 Zeeman E. C. Zeeman, Twisting spun knots, Trans. Amer. Math. Soc. 115 (1965), 471–495. 195085
http://arxiv.org/abs/2307.04487v1
20230710112041
The abundance and excitation of molecular anions in interstellar clouds
[ "M. Agundez", "N. Marcelino", "B. Tercero", "I. Jimenez-Serra", "J. Cernicharo" ]
astro-ph.GA
[ "astro-ph.GA" ]
Molecular anions in the ISM Agúndez et al. Instituto de Física Fundamental, CSIC, Calle Serrano 123, E-28006 Madrid, Spain [email protected] Observatorio Astronómico Nacional, IGN, Calle Alfonso XII 3, E-28014 Madrid, Spain Observatorio de Yebes, IGN, Cerro de la Palera s/n, E-19141 Yebes, Guadalajara, Spain Centro de Astrobiología (CSIC/INTA), Ctra. de Torrejón a Ajalvir km 4, 28806, Torrejón de Ardoz, Spain We report new observations of molecular anions with the Yebes 40m and IRAM 30m telescopes toward the cold dense clouds TMC-1 CP, Lupus-1A, L1527, L483, L1495B, and L1544. We detected for the first time C_3N^- and C_5N^- in Lupus-1A and C_4H^- and C_6H^- in L483. In addition, we report new lines of C_6H^- toward the six targeted sources, of C_4H^- toward TMC-1 CP, Lupus-1A, and L1527, and of C_8H^- and C_3N^- in TMC-1 CP. Excitation calculations using recently computed collision rate coefficients indicate that the lines of anions accessible to radiotelescopes run from subthermally excited to thermalized as the size of the anion increases, with the degree of departure from thermalization depending on the H_2 volume density and the line frequency. We noticed that the collision rate coefficients available for the radical C_6H cannot explain various observational facts, which advises for a revisitation of the collision data for this species. The observations presented here, together with observational data from the literature, are used to model the excitation of interstellar anions and to constrain their abundances. In general, the anion-to-neutral ratios derived here agree within 50 % (a factor of two at most) with literature values, when available, except for the C_4H^-/C_4H ratio, which shows higher differences due to a revision of the dipole moment of C_4H. From the set of anion-to-neutral abundance ratios derived two conclusions can be drawn. First, the C_6H^-/C_6H ratio shows a tentative trend in which it increases with increasing H_2 density, as expected from theoretical grounds. And second, it is incontestable that the higher the molecular size the higher the anion-to-neutral ratio, which supports a formation mechanism based on radiative electron attachment. Nonetheless, calculated rate coefficients for electron attachment to the medium size species C_4H and C_3N are probably too high and too low, respectively, by more than one order of magnitude. The abundance and excitation of molecular anions in interstellar cloudsBased on observations carried out with the Yebes 40m telescope (projects 19A003, 20A014, 20A016, 20B010, 20D023, 21A006, 21A011, 21D005, 22B023, and 23A024) and the IRAM 30m telescope. The 40m radio telescope at Yebes Observatory is operated by the Spanish Geographic Institute (IGN; Ministerio de Transportes, Movilidad y Agenda Urbana). IRAM is supported by INSU/CNRS (France), MPG (Germany), and IGN (Spain). M. Agúndez1, N. Marcelino2,3, B. Tercero2,3, I. Jiménez-Serra4, J. 
Cernicharo1 Received; accepted § INTRODUCTION The discovery of negatively charged molecular ions in space has been a relatively recent finding <cit.>. To date, the inventory of molecular anions detected in interstellar and circumstellar clouds consists of four hydrocarbon anions, C_4H^- <cit.>, C_6H^- <cit.>, C_8H^- <cit.>, and C_10H^- <cit.>, and four nitrile anions, CN^- <cit.>, C_3N^- <cit.>, C_5N^- <cit.>, and C_7N^- <cit.>. The astronomical detection of most of these species has been possible thanks to the laboratory characterization of their rotational spectrum <cit.>. However, the astronomical detection of C_5N^-, C_7N^-, and C_10H^- is based on high level ab initio calculations and astrochemical arguments <cit.>. In fact, in the case of C_10H^- it is not yet clear whether the identified species is C_10H^- or C_9N^- <cit.>. The current situation is such that there is only one astronomical source where the eight molecular anions have been observed, the carbon-rich circumstellar envelope IRC +10216 <cit.>, while the first negative ion discovered, C_6H^- <cit.>, continues to be the most widely observed in astronomical sources <cit.>. Observations indicate that along each of the series C_2n+2H^- and C_2n-1N^- (with n = 1, 2, 3, 4) the anion-to-neutral abundance ratio increases with increasing molecular size <cit.>. This is expected according to the formation mechanism originally proposed by <cit.>, which involves radiative electron attachment to the neutral counterpart of the anion <cit.>. However, the efficiency of this mechanism in interstellar space has been disputed <cit.>, and alternative formation mechanisms have been proposed <cit.>. Currently there is no consensus yet on the formation mechanism of molecular anions in space (see discussion in ). Moreover, detections of negative ions other than C_6H^- in interstellar clouds are scarce, and thus our view of the abundance of the different anions in interstellar space is statistically very limited. Apart from the anion-to-anion behavior, it is also interesting to know what the source-to-source behavior is, that is, how the abundance of anions varies from one source to another. Based on C_6H^- detections, the C_6H^-/C_6H abundance ratio seems to increase with increasing H_2 volume density <cit.>, which is expected from chemical considerations (e.g., ; see also Sect. <ref>). However, most anion detections in interstellar clouds have been based on one or two lines, and their abundances have been estimated assuming that their rotational levels are populated according to local thermodynamic equilibrium (LTE), which may not be a good assumption given the large dipole moments, and thus high critical densities, of anions. Recently, rate coefficients for inelastic collisions with H_2 or He have been calculated for C_2H^- <cit.>, C_4H^- <cit.>, C_6H^- <cit.>, CN^- <cit.>, C_3N^- <cit.>, and C_5N^- <cit.>, which makes it possible to study the excitation of anions in the interstellar medium. Here we report new detections of anions in interstellar sources.
Concretely, we detected C_3N^- and C_5N^- in Lupus-1A and C_6H^- and C_4H^- in L483. We also present the detection of new lines of C_4H^-, C_6H^-, C_8H^-, C_3N^-, and C_5N^- in interstellar clouds where these anions have been already observed. We use the large observational dataset from this study, together with that available from the literature, to review the observational status of anions in interstellar clouds and to carry out a comprehensive analysis of the abundance and excitation of anions in the interstellar medium. § OBSERVATIONS §.§ Yebes 40m and IRAM 30m observations from this study The observations of cold dark clouds presented in this study were carried out with the Yebes 40m and IRAM 30m telescopes. We targeted the starless core TMC-1 at the cyanopolyyne peak position (hereafter TMC-1 CP)[TMC-1 CP: α_J2000=4^ h 41^ m 41.9^ s and δ_J2000=+25^∘ 41' 27.0”], the starless core Lupus-1A[Lupus-1A: α_J2000=15^ h 42^ m 52.4^ s and δ_J2000=-34^∘ 07' 53.5”], the prestellar cores L1495B[L1495B: α_J2000=4^ h 15^ m 41.8^ s and δ_J2000=+28^∘ 47' 46.0”] and L1544[L1544: α_J2000=5^ h 4^ m 18.0^ s and δ_J2000=+25^∘ 11' 10.0”], and the dense cores L1527[L1527: α_J2000=4^ h 39^ m 53.9^ s and δ_J2000=+26^∘ 03' 11.0”] and L483[L483: α_J2000=18^ h 17^ m 29.8^ s and δ_J2000=-4^∘ 39' 38.3”], which host a Class 0 protostar. All observations were done using the frequency switching technique to maximize the on-source telescope time and to improve the sensitivity of the spectra. The Yebes 40m observations consisted in a full scan of the Q band (31-50 GHz) acquired in a single spectral setup with a 7 mm receiver, which was connected to a fast Fourier transform spectrometer that provides a spectral resolution of 38 kHz <cit.>. The data of TMC-1 CP are part of the on-going QUIJOTE line survey <cit.>. The spectra used here were obtained between November 2019 and November 2022 and contain a total of 758 h of on-source telescope time in each polarization (twice this value after averaging both polarizations). Two frequency throws of 8 and 10 MHz were used. The sensitivity ranges from 0.13 to 0.4 mK in antenna temperature. The data of L1544 were taken between October and December 2020 toward the position of the methanol peak of this core, where complex organic molecules have been detected <cit.>, and are part of a high-sensitivity Q-band survey (31 h on-source; Jiménez-Serra et al. in prep.). The data for the other sources were obtained from July 2020 to February 2023 for L483 (the total on-source telescope time is 103 h), from May to November 2021 for L1527 (40 h on-source), from July 2021 to January 2023 for Lupus-1A (120 h on-source), and from September to November 2021 for L1495B (45 h on-source). Different frequency throws were adopted depending on the observing period, which resulted from tests done at the Yebes 40m telescope to find the optimal frequency throw. We used frequency throws of 10 MHz and 10.52 MHz for L483, 10 MHz for L1544, 8 MHz for L1527, and 10.52 MHz for Lupus-1A and L1495B. The antenna temperature noise levels, after averaging horizontal and vertical polarizations, are in the range 0.4-1.0 mK for L483, 1.3-1.8 mK for L1544, 0.7-2.7 mK for L1527, 0.7-2.8 mK for Lupus-1A, and 0.8-2.6 mK for L1495B. The observations carried out with the IRAM 30m telescope used the 3 mm EMIR receiver connected to a fast Fourier transform spectrometer that provides a spectral resolution of 49 kHz. Different spectral regions within the 3 mm band (72-116 GHz) were covered depending on the source. 
The data of TMC-1 CP consist of a 3 mm line survey <cit.> and spectra observed in 2021 <cit.>. The data of L483 consists of a line survey in the 80-116 GHz region (see ), together with data in the 72-80 GHz region, which are described in <cit.>. Data of Lupus-1A, L1495B, L1521F, L1251A, L1512, L1172, and L1389 were observed from September to November 2014 during a previous search for molecular anions at mm wavelengths (see ). Additional data of Lupus-1A were gathered during 2021 and 2022 during a project aimed to observe H_2NC <cit.>. In the case of L1527, the IRAM 30m data used were observed in July and August 2007 with the old ABCD receivers connected to an autocorrelator that provided spectral resolutions of 40 or 80 kHz <cit.>. The half power beam width (HPBW) of the Yebes 40m telescope is in the range 35-57 ” in the Q band, while that of the IRAM 30m telescope ranges between 21 ” and 34 ” in the 3 mm band. The beam size can be fitted as a function of frequency as HPBW(”) = 1763/ν(GHz) for the Yebes 40m telescope and as HPBW(”) = 2460/ν(GHz) for the IRAM 30m telescope. Therefore, the beam size of the IRAM 30m telescope at 72 GHz is similar to that of the Yebes 40m at 50 GHz. The intensity scale in both the Yebes 40m and IRAM 30m telescopes is antenna temperature, T_A^*, for which we estimate a calibration error of 10 %. To convert antenna temperature into main beam brightness temperature see foot of Table <ref>. All data were analyzed using the program CLASS of the GILDAS software[https://www.iram.fr/IRAMFR/GILDAS/]. §.§ Observational dataset of anions in dark clouds In Table <ref> we compile the line parameters of all the lines of negative molecular ions detected toward cold dark clouds, including lines from this study and from the literature. The line parameters of C_7N^- observed toward TMC-1 CP are given in <cit.> and are not repeated here. In the case of C_10H^- in TMC-1 CP we do not include line parameters here because the detection by <cit.> is not based on individual lines but on spectral stack of many lines. The lines of molecular anions presented in this study are shown in Fig. <ref> for C_6H^-, Fig. <ref> for C_4H^-, and Fig. <ref> for the remaining anions, i.e., C_8H^-, C_3N^-, and C_5N^-. Since we are interested in the determination of anion-to-neutral abundance ratios, we also need the lines of the corresponding neutral counterpart of each molecular anion, which are the radicals C_4H, C_6H, C_8H, C_3N, and C_5N. The velocity-integrated intensities of the lines of these species are given in Table <ref>. According to the literature, the most prevalent molecular anion, C_6H^-, has been detected in 11 cold dark clouds: TMC-1 CP <cit.>, L1527 and Lupus-1A <cit.>, L1544 and L1521F <cit.>, and L1495B, L1251A, L1512, L1172, L1389, and TMC-1 C <cit.>. All these detections were based on two individual or stacked lines lying in the frequency range 11-31 GHz (see Table <ref>). Here we present additional lines of C_6H^- in the Q band for TMC-1 CP, Lupus-1A, L1527, L1495B, and L1544, together with the detection of C_6H^- in a new source, L483, through six lines lying in the Q band (see Fig. <ref>). Molecular anions different to C_6H^- have turned out to be more difficult to detect as they have been only seen in a few sources. For example, C_4H^- has been only detected in three dark clouds, L1527 <cit.>, Lupus-1A <cit.>, and TMC-1 CP <cit.>. These detections rely on one or two lines (see Table <ref>). 
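The fitted beam-size relations quoted above are straightforward to tabulate; the short sketch below (Python, used here only for illustration) simply evaluates the two fits at the band edges.

```python
# Beam sizes from the fits quoted above:
#   Yebes 40m: HPBW(") = 1763 / nu(GHz)   (Q band, 31-50 GHz)
#   IRAM 30m:  HPBW(") = 2460 / nu(GHz)   (3 mm band, 72-116 GHz)

def hpbw_yebes40m(nu_ghz):
    return 1763.0 / nu_ghz

def hpbw_iram30m(nu_ghz):
    return 2460.0 / nu_ghz

for nu in (31.0, 50.0):
    print(f"Yebes 40m, {nu:.0f} GHz: {hpbw_yebes40m(nu):.0f} arcsec")
for nu in (72.0, 116.0):
    print(f"IRAM 30m,  {nu:.0f} GHz: {hpbw_iram30m(nu):.0f} arcsec")
# -> ~57" and ~35" (Yebes), ~34" and ~21" (IRAM), matching the ranges above.
```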
Here we report the detection of two additional lines of C_4H^- in the Q band toward these three sources, together with the detection of C_4H^- in one new source, L483 (see Fig. <ref>). The hydrocarbon anion C_8H^- has been observed in two interstellar sources. <cit.> reported the detection of four lines in the 12-19 GHz frequency range toward TMC-1 CP, while <cit.> reported the detection of this anion in Lupus-1A through two stacked lines at 18.7 and 21.0 GHz (see Table <ref>). Thanks to our Yebes 40m data, we present new lines of C_8H^- in the Q band toward TMC-1 CP (see Fig. <ref>). Finally, the nitrile anions C_3N^- and C_5N^- have resulted to be quite elusive as they have been only seen in one cold dark cloud, TMC-1 CP <cit.>. Here we present the same lines of C_3N^- and C_5N^- reported in <cit.> in the Q band, but with improved signal-to-noise ratios, plus two additional lines of C_3N^- in the 3 mm band. We also present the detection of C_3N^- and C_5N^- in one additional source, Lupus-1A (see Fig. <ref>). § PHYSICAL PARAMETERS OF THE SOURCES The interstellar clouds where molecular anions have been detected are 12 in total and comprise cold dense cores in different evolutionary stages, such as starless, prestellar, and protostellar (see Table <ref>). The classification as protostellar cores is evident in the cases of L1527 and L483 as the targeted positions are those of the infrared sources IRAS 04368+2557 and IRAS 18148-0440, respectively <cit.>. We also classified L1251A, L1172, and L1389 as protostellar sources based on the proximity of an infrared source (L1251A IRS3, CB17 MMS, and IRAS 21017+6742, respectively) to the positions targeted by <cit.>. The differentiation between starless and prestellar core is in some cases more ambiguous. In those cases we followed the criterion based on the N_2D^+/N_2H^+ column density ratio by <cit.>. In any case, for our purposes it is not very important whether a given core is starless or prestellar. To study the abundance and excitation of molecular anions in these 12 interstellar sources through non-LTE calculations we need to know which are the physical parameters of the clouds, mainly the gas kinetic temperature and the H_2 volume density, but also the emission size of anions and the linewidth. The adopted parameters are summarized in Table <ref>. Given that C_6H^- has not been mapped in any interstellar cloud to date, it is not known whether the emission of molecular anions in each of the 12 sources is extended compared to the telescope beam sizes, which are in the range 21-67 ” for the Yebes 40m, IRAM 30m, and GBT telescopes at the frequencies targeted for the observations of anions. Therefore one has to rely on maps of related species. In the case of TMC-1 CP we assume that anions are distributed in the sky as a circle with a diameter of 80 ” based on the emission distribution of C_6H mapped by <cit.>. Recent maps carried out with the Yebes 40m telescope <cit.> support the previous results of <cit.>. For the remaining 11 sources, the emission distribution of C_6H is not known and thus we assume that the emission of anions is extended with respect to the telescope beam. This assumption is supported by the extended nature of HC_3N emission in the cases of L1495B, L1251A, L1512, L1172, L1389, and TMC-1 C, according to the maps presented by <cit.>, and of multiple molecular species, including C_4H, in L1544, according to the maps reported by <cit.>. 
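How well an 80" source couples to these beams can be gauged with the usual Gaussian-source/Gaussian-beam approximation; the sketch below (Python) is only indicative, since the text assumes a uniform 80" disk for TMC-1 CP rather than a Gaussian, and the beam values are representative numbers from the 21-67" range quoted above.

```python
# Rough beam-coupling estimate for the 80" emission size adopted for TMC-1 CP.
# ff = theta_s^2 / (theta_s^2 + theta_b^2) is the Gaussian-coupling form and
# only approximates the uniform 80" disk assumed in the text.

def filling_factor(theta_source, theta_beam):
    return theta_source**2 / (theta_source**2 + theta_beam**2)

for beam in (21.0, 35.0, 57.0, 67.0):   # representative beam sizes (arcsec)
    print(f"{beam:.0f} arcsec beam: ff ~ {filling_factor(80.0, beam):.2f}")
# -> 0.94, 0.84, 0.66, 0.59: beam dilution matters mostly for the largest beams.
```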
The linewidth adopted for each source (see Table <ref>) was calculated as the arithmetic mean of the values derived for the lines of C_6H^- in the Q band for TMC-1 CP, Lupus-1A, L1527, L1495B, and L1544. In the case of L483 we adopted the value derived by <cit.> from the analysis of all the lines in the 3 mm band. For L1521F, L1251A, L1512, L1172, and L1389 the adopted linewidths come from IRAM 30m observations of CH_3CCH in the 3 mm band (see Sect. <ref>). Finally, for TMC-1 C we adopted as linewidth that derived for HC_3N by <cit.>. The gas kinetic temperature was determined for some of the sources from the J = 5-4 and J = 6-5 rotational transitions of CH_3CCH, which lie around 85.4 and 102.5 GHz, respectively. We have IRAM 30m data of these lines for TMC-1 CP, Lupus-1A, L483, L1495B, and L1521F, while for L1527 we used the data obtained with the Nobeyama 45m telescope by <cit.>. Typically, the K = 0, 1, and 2 components are detected, which allows us to use the line intensity ratio between the K = 1 and K = 2 components, belonging to the E symmetry species, to derive the gas kinetic temperature. Since transitions with Δ K ≠ 0 are radiatively forbidden, the relative populations of the K = 1 and K = 2 levels are controlled by collisions with H_2 and thus are thermalized at the kinetic temperature of H_2. We do not use the K = 0 component because it belongs to a different symmetry species, A, and interconversion between A and E species is expected to be slow in cold dense clouds and thus their relative populations may not necessarily reflect the gas kinetic temperature. For TMC-1 CP we derive kinetic temperatures of 8.8 ± 0.6 K and 9.0 ± 0.6 K from the J = 5-4 and J = 6-5 lines of CH_3CCH, respectively. Similarly, using the J = 8-7 through J = 12-11 lines of CH_3C_4H, which lie in the Q band, we derive temperatures of 9.1 ± 0.7 K, 8.7 ± 0.6 K, 9.0 ± 0.6 K, 8.1 ± 0.7 K, and 9.1 ± 0.8 K, respectively. We thus adopt a gas kinetic temperature of 9 K, which is slightly lower than values derived in previous studies, 11.0 ± 1.0 K and 10.1 ± 0.9 K at two positions close to the cyanopolyyne peak using NH_3 <cit.> and 9.9 ± 1.5 K from CH_2CCH <cit.>. In Lupus-1A we derive temperatures of 11.4 ± 1.7 K and 10.2 ± 1.1 K from the J = 5-4 and J = 6-5 lines of CH_3CCH, respectively. We thus adopt a gas kinetic temperature of 11 K, which is somewhat below the value of 14 ± 2 K derived in <cit.> using the K = 0, 1, and 2 components of the J = 5-4 transition of CH_3CCH. In L1527 we derive 13.6 ± 2.5 K and 15.1 ± 2.4 K from the line parameters of CH_3CCH J = 5-4 and J = 6-5 reported by <cit.>. We thus adopt a kinetic temperature of 14 K, which agrees perfectly with the value of 13.9 K derived by <cit.> using CH_3CCH as well. The gas kinetic temperature in L483 has been estimated to be 10 K by <cit.> using NH_3, while <cit.> derive values of 10 K and 15 ± 2 K using either ^13CO or CH_3CCH. A new analysis of the CH_3CCH data of <cit.> in which the weak K = 3 components are neglected and only the K = 1 and K = 2 components are used results in kinetic temperatures of 11.5 ± 1.1 K and 12.6 ± 1.5 K, depending on whether the J = 5-4 or J = 6-5 transition is used. We thus adopt a kinetic temperature of 12 K for L483. For L1495B we derive 9.1 ± 0.9 K and 9.2 ± 0.7 K from CH_3CCH J = 5-4 and J = 6-5, and we thus adopt a kinetic temperature of 9 K. In L1521F we also adopt a gas kinetic temperature of 9 K since the derived temperatures from CH_3CCH J = 5-4 and J = 6-5 are 9.0 ± 0.7 K and 8.9 ± 0.9 K.
The value agrees well with the temperature of 9.1 ± 1.0 K derived by <cit.> using NH_3. For the remaining cores, the gas kinetic temperatures were taken from the literature, as summarized in Table <ref>. To estimate the volume density of H_2 we used the ^13C isotopologues of HC_3N when these data were available. We have Yebes 40m data of the J = 4-3 and J = 5-4 lines of H^13CCCN, HC^13CCN, and HCC^13CN for TMC-1 CP, Lupus-1A, L1527, and L483. Data for one or various lines of these three isotopologues in the 3 mm band are also available from the IRAM 30m telescope (see Sect. <ref>) or from the Nobeyama 45 telescope (for L1527; see ). Using the ^13C isotopologues of HC_3N turned out to constrain much better the H_2 density that using the main isotopologue because one gets rid of optical depth effects. We carried out non-LTE calculations under the Large Velocity Gradient (LVG) formalism adopting the gas kinetic temperature and linewidth given in Table <ref> and varying the column density of the ^13C isotopologue of HC_3N and the H_2 volume density. As collision rate coefficients we used those calculated by <cit.> for HC_3N with ortho and para H_2, where we adopted a low ortho-to-para ratio of H_2 of 10^-3, which is theoretically expected for cold dark clouds (e.g., ). The exact value of the ortho-to-para ratio of H_2 is not very important as long as the para form is well in excess of the ortho form, so that collisions with para H_2 dominate. The best estimates for the column density of the ^13C isotopologue of HC_3N and the volume density of H_2 are found by minimizing χ^2, which is defined as χ^2 = ∑_i=1^N_l[ (I_calc - I_obs)/σ]^2, where the sum extends over the N_l lines available, I_calc and I_obs are the calculated and observed velocity-integrated brightness temperatures, and σ are the uncertainties in I_obs, which include the error given by the Gaussian fit and the calibration error of 10 %. To evaluate the goodness of the fit, we use the reduced χ^2, which is defined as χ^2_red = χ^2_min/(N_l-p), where χ^2_min is the minimum value of χ^2 and p is the number of free parameters. Typically, a value of χ^2_red ≲ 1 indicates a good quality of the fit. In this case we have p = 2 because there are two free parameters, the column density of the ^13C isotopologue of HC_3N and the H_2 volume density. Errors in these two parameters are given as 1 σ, where for p = 2, the 1 σ level (68 % confidence) corresponds to χ^2+2.3. The same statistical analysis is adopted in Sect. <ref> when studying molecular anions and their neutral counterparts through the LVG method. In some cases in which the number of lines is small or the H_2 density is poorly constrained, the H_2 volume density is kept fixed. In those cases p = 1 and the 1 σ error (68 % confidence) in the column density is given by χ^2+1.0. In Fig. <ref> we show the results for TMC-1 CP. In this starless core the H_2 volume density is well constrained by the four available lines of the three ^13C isotopologues of HC_3N to a narrow range of (0.9-1.1) × 10^4 cm^-3 with very low values of χ^2_red. We adopt as H_2 density in TMC-1 CP the arithmetic mean of the values derived for the three isotopologues, i.e., 1.0 × 10^4 cm^-3 (see Table <ref>). Similar calculations allow to derive H_2 volume densities of 1.8 × 10^4 cm^-3 for Lupus-1A, 5.6 × 10^4 cm^-3 for L483, and a lower limit of 10^5 cm^-3 for L1527 (see Table <ref>). 
The value for L483 is of the same order than those derived in the literature, 3.4 × 10^4 cm^-3 from the model of <cit.> and 3 × 10^4 cm^-3, from either NH_3 <cit.> or CH_3OH <cit.>. For L1495B we could only retrieve data for one of the ^13C isotopologues of HC_3N, HCC^13CN, from which we derive a H_2 density of 1.6 × 10^4 cm^-3 (see Table <ref>). In the case of L1521F, ^13C isotopologues of HC_3N were not available and thus we used lines of HCCNC, adopting the collision rate coefficients calculated by <cit.>, to derive a rough estimate of the H_2 volume density of 1 × 10^4 cm^-3 (see Table <ref>). Higher H_2 densities, in the range (1-5) × 10^5 cm^-3, are derived for L1521F from N_2H^+ and N_2D^+ <cit.>, probably because these molecules trace the innermost dense regions depleted in CO. For the remaining sources we adopted H_2 volume densities from the literature (see Table <ref>). For L1544 we adopted a value of 2 × 10^4 cm^-3 from the analysis of SO and SO_2 lines by <cit.>. This H_2 density is in agreement with the range of values, (1.5-4.0) × 10^4 cm^-3, found by <cit.> in their excitation analysis of HCCNC and HNC_3. Note that H_2 volume densities toward the dust peak are larger than 10^6 cm^-3. However, as shown by <cit.>, the emission of C_4H probes the outer shells and thus a density of a few 10^4 cm^-3 is appropriate for our calculations toward the CH_3OH peak. In the cases of L1251A, L1512, L1172, L1389, and TMC-1 C, we adopted the H_2 densities from the analysis of HC_3N lines by <cit.>. The reliability of the H_2 volume densities derived by these authors is supported by the fact that the densities they derive for TMC-1 CP and L1495B, 1.0 × 10^4 cm^-3 and 1.1 × 10^4 cm^-3, respectively, are close to the values determined in this study from ^13C isotopologues of HC_3N (see Table <ref>). In spite of the different evolutionary status of the 12 anion-containing clouds, the gas kinetic temperatures and H_2 volume densities at the scales proven by the Yebes 40m, IRAM 30m, and GBT telescopes are not that different. Gas temperatures are restricted to the very narrow range 9-14 K, while H_2 densities are in the range (1.0-7.5) × 10^4 cm^-3, at the exception of L1527 which has an estimated density in excess of 10^5 cm^-3 (see Table <ref>). § EXCITATION OF ANIONS: GENERAL CONSIDERATIONS One may expect that given the large dipole moments of molecular anions, as high as 10.4 D in the case of C_8H^- <cit.>, the rotational levels should be populated out of thermodynamic equilibrium in cold dark clouds. This is not always the case as it will be shown here. To get insight into the excitation of negative molecular ions in interstellar clouds we run non-LTE calculations under the LVG formalism adopting typical parameters of cold dark clouds, i.e., a gas kinetic temperature of 10 K, a column density of 10^11 cm^-2 (of the order of the values typically derived for anions in cold dark clouds; see references in Sect. <ref>), and a linewidth of 0.5 km s^-1 (see Table <ref>), and we varied the volume density of H_2 between 10^3 and 10^6 cm^-3. The sets of rate coefficients for inelastic collisions with H_2 adopted are summarized in Table <ref>. In those cases in which only collisions with He are available we scaled the rate coefficients by multiplying them by the square root of the ratio of the reduced masses of the H_2 and He colliding systems. When inelastic collisions for ortho and para H_2 are available, we adopted a ortho-to-para ratio of H_2 of 10^-3. In Fig. 
<ref> we show the calculated excitation temperatures (T_ ex) of lines of molecular anions as a function of the quantum number J of the upper level and the H_2 volume density. The different panels correspond to different anions and show the regimes in which lines are either thermalized (T_ ex ∼ 10 K) or subthermally excited (T_ ex < 10 K). To interpret these results it is useful to think in terms of the critical density, which for a given rotational level can be evaluated as the ratio of the de-excitation rates due to spontaneous emission and due to inelastic collisions (e.g., ). Collision rate coefficients for transitions with Δ J = -1 or -2, which are usually the most efficient, are of the order of 10^-10 cm^3 s^-1 at a temperature of 10 K for the anions for which calculations have been carried out (see Table <ref>). The Einstein coefficient for spontaneous emission depends linearly on the square of the dipole moment and the cube of the frequency. Therefore, the critical density (and thus the degree of departure from LTE) is very different depending on the dipole moment of the anion and on the frequency of the transition. Regarding the dependence of the critical density on the dipole moment, C_2H^- and CN^- have a similar weight, and thus their low-J lines, which are the ones observable for cold clouds, have similar frequencies. However, these two anions have quite different dipole moments, 3.1 and 0.65 Debye, respectively <cit.>, which makes them show a different excitation pattern. As seen in Fig. <ref>, the low-J lines of CN^- are in LTE at densities above 10^5 cm^-3 while those of C_2H^- require much higher H_2 densities to be in LTE. With respect to the dependence of the critical density on frequency, as one moves along the series of increasing weight C_2H^- → C_4H^- → C_6H^- or CN^- → C_3N^- → C_5N^- (see Fig. <ref>), the most favorable lines for detection in cold clouds (those with upper level energies around 10 K) shift to lower frequencies, which makes the Einstein coefficients, and thus the critical densities, decrease. That is, the lines of anions targeted by radiotelescopes are more likely to be thermalized for heavy anions than for light ones (see the higher degree of thermalization when moving from lighter to heavier anions in Fig. <ref>). The volume densities of H_2 in cold dark clouds are typically in the range 10^4-10^5 cm^-3 (see Table <ref>). Therefore, if C_2H^- is detected in a cold dark cloud at some point in the future, the most favorable line for detection, the J = 1-0, would be most likely subthermally excited, making it necessary to use the collision rate coefficients to derive a precise abundance. In the case of a potential future detection of CN^- in a cold interstellar cloud, the J = 1-0 line would be in LTE only if the H_2 density of the cloud is ≥ 10^5 cm^-3 and out of LTE for lower densities (see Fig. <ref>). The medium-sized anions C_4H^- and C_3N^- are predicted to have their Q band lines more or less close to LTE depending on whether the H_2 density is closer to 10^5 or to 10^4 cm^-3, while the lines in the 3 mm band are likely to be subthermally excited unless the H_2 density is above 10^5 cm^-3 (see Fig. <ref>). For the heavier anions C_6H^- and C_5N^-, the lines in the K band are predicted to be thermalized at the gas kinetic temperature, while those in the Q band may or may not be thermalized depending on the H_2 density (see Fig. <ref>).
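The ν^3 scaling of the critical density can be illustrated with a rough order-of-magnitude estimate; the sketch below (Python, for illustration) uses the standard Einstein-A expression for a J → J-1 transition of a linear rotor, which is an assumption here (the text only states that A scales with μ^2 ν^3), together with the ∼10^-10 cm^3 s^-1 collision rates quoted above. The frequencies are illustrative values, not specific measured transitions.

```python
import math

# Order-of-magnitude critical densities, n_crit ~ A_ul / k_ul. The Einstein A
# below is the standard linear-rotor expression for J -> J-1 (assumed here),
# and k_ul ~ 1e-10 cm^3 s^-1 is the typical collision rate quoted in the text.

H = 6.626e-27      # erg s
C = 2.998e10       # cm/s
DEBYE = 1e-18      # esu cm

def einstein_A(freq_hz, mu_debye, J_up):
    mu = mu_debye * DEBYE
    return (64 * math.pi**4 * freq_hz**3 * mu**2 / (3 * H * C**3)) * J_up / (2 * J_up + 1)

def n_crit(freq_hz, mu_debye, J_up, k_coll=1e-10):
    return einstein_A(freq_hz, mu_debye, J_up) / k_coll   # cm^-3

# Same dipole moment (the 3.1 D quoted for C2H^-), two illustrative frequencies:
print(f"{n_crit(90e9, 3.1, 1):.1e}")   # ~3e5 cm^-3 for a 3 mm line: subthermal at n(H2) ~ 1e4
print(f"{n_crit(20e9, 3.1, 1):.1e}")   # ~3e3 cm^-3 for a K-band line: easily thermalized
```

The two orders of magnitude between the two frequencies come almost entirely from the ν^3 factor, which is the point made above about heavy anions observed at low frequencies.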
Comparatively, the Q band lines of C_5N^- are more easily thermalized than those of C_6H^- because C_5N^- has a smaller dipole moment than C_6H^-. We note that the results concerning C_5N^- have to be taken with caution because we used the collision rate coefficients calculated for C_6H^- in the absence of specific collision data for C_5N^- (see Table <ref>). We did similar calculations for C_8H^-, C_10H^-, and C_7N^- (not shown) using the collision rate coefficients of C_6H^-. We find that the lines in a given spectral range deviate more from thermalization as the size of the anion increases. In the K band, the lines of C_6H^- and C_5N^- are thermalized, while those of C_10H^- become subthermally excited at low densities, around 10^4 cm^-3. In the Q band the deviation from thermalization is even more marked for these large anions. In summary, non-LTE calculations are particularly important to derive accurate abundances for anions when just one or two lines are detected and these lie in a regime of subthermal excitation, as indicated in Fig. <ref>. This becomes critical, in order of decreasing importance, for C_2H^-, CN^-, C_4H^-, C_3N^-, C_6H^-, C_8H^-, and C_5N^- (for the three latter only if observed at frequencies above 30 GHz). The drawback is that the H_2 volume density must be known with a good precision if one aims at determining the anion column density accurately with only one or two lines. In the case of the neutral counterparts of molecular anions, collision rate coefficients have been calculated for C_6H and C_3N with He as collider <cit.>. We thus carried out LVG calculations similar to those presented before for anions. In this case we adopt a higher column density of 10^12 cm^-2, in line with typical values in cold dark clouds (see references in Sect. <ref>). The results are shown in Fig. <ref>. It is seen that in the case of C_3N, the excitation pattern is similar to that of the corresponding anion, C_3N^-, shown in Fig. <ref>. The thermalization of C_3N occurs at densities somewhat higher compared to C_3N^-, mainly because the collision rate coefficients calculated for C_3N with He <cit.> are smaller than those computed for C_3N^- with para H_2 <cit.>. We note that this conclusion may change if the collision rate coefficients of C_3N with H_2 are significantly larger than the factor of 1.39 due to the change in the reduced mass when changing He by H_2. In the case of C_6H however the excitation behavior is very different to that of C_6H^- (compare C_6H^- in Fig. <ref> with C_6H in Fig. <ref>). The rotational levels of the radical are much more subthermally excited than those of the corresponding anion, with a difference in the critical density of about a factor of 30. This is a consequence of the much smaller collision rate coefficients calculated for C_6H with He <cit.> compared to those calculated for C_6H^- with para H_2 <cit.>, a difference that is well beyond the factor of 1.40 due to the change in the reduced mass when changing He by H_2. § ANION ABUNDANCES We evaluated the column densities of molecular anions and their corresponding neutral counterparts in the 12 studied sources by carrying out LVG calculations similar to those described in Sect. <ref> for the ^13C isotopologues of HC_3N. We used the collision rate coefficients given in Table <ref>. 
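The He → H_2 scaling applied to part of these collision data (the square root of the reduced-mass ratio, Sect. <ref>) reproduces the factors of 1.39 and 1.40 mentioned above for C_3N and C_6H; a quick check, with rounded integer masses in amu, is sketched below (Python, for illustration).

```python
import math

# He -> H2 scaling of collision rate coefficients: multiply by sqrt(mu_He/mu_H2),
# where mu is the reduced mass of the collider-molecule system.
# Molecular masses below are rounded integer amu values.

def he_to_h2_factor(mass_amu):
    mu_he = 4.0 * mass_amu / (4.0 + mass_amu)
    mu_h2 = 2.0 * mass_amu / (2.0 + mass_amu)
    return math.sqrt(mu_he / mu_h2)

print(f"C3N: {he_to_h2_factor(50):.2f}")   # -> 1.39
print(f"C6H: {he_to_h2_factor(73):.2f}")   # -> 1.40
```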
Gas kinetic temperatures and linewidths were fixed to the values given in Table <ref>, the ortho-to-para ratio of H_2, when needed, was fixed to 10^-3, and both the column density of the species under study and the H_2 volume density were varied. The best estimates for these two parameters were found by minimization of χ^2 (see Sect. <ref>). In addition, to evaluate the rotational temperature, and thus the level of departure from LTE, and to have an independent estimate of the column density, we constructed rotation diagrams. The LVG method should provide a more accurate determination of the column density than the rotation diagram, as long as the collision rate coefficients with para H_2 and the gas kinetic temperature are accurately known. If an independent determination of the H_2 volume density is available from some density tracer (in our case the ^13C isotopologues of HC_3N are used in several sources), a good agreement between the values of n(H_2) obtained from the species under study and from the density tracer supports the reliability of the LVG analysis. We note that densities do not need to be similar if the species studied and the density tracer are distributed over different regions, although in our case we expect similar distributions for HC_3N, molecular anions, and their neutral counterparts, as long as all them are carbon chains. A low value of χ^2_red, typically ≲ 1, is also indicative of the goodness of the LVG analysis. If the quality of the LVG analysis is not satisfactory or the collision rate coefficients are not accurate, a rotation diagram may still provide a good estimate of the column density if the number of detected lines is high enough and they span a wide range of upper level energies. Therefore, a high number of detected lines makes likely to end up with a correct determination of the column density. On the other hand, if only one or two lines are detected, the accuracy with which the column density can be determined relies heavily on whether the H_2 volume density, in the case of an LVG calculation, or the rotational temperature, in the case of the rotation diagram, are known with some confidence. In Table <ref> we present the results from the LVG analysis and the rotation diagram for all molecular anions detected in cold dark clouds and for the corresponding neutral counterparts, and compare the column densities derived with values from the literature, when available. In general, the column densities derived through the rotation diagram agree within 50 %, with those derived by the LVG analysis. The sole exceptions are C_8H in TMC-1 CP and C_6H in TMC-1 C. In the former case, the lack of specific collision rate coefficients for C_8H probably introduces an uncertainty in the determination of the column density. In the case of C_6H in TMC-1 C, the suspected problem in the collision rate coefficients used for C_6H (see below) is probably behind the too large column density derived by the LVG method. We first discuss the excitation and abundance analyses carried out for negative ions. For the anions detected in TMC-1 CP through more than two lines, i.e., C_6H^-, C_8H^-, C_3N^-, and C_5N^-, the quality of the LVG analysis is good (in Fig. <ref> we show the case of C_3N^-). First, the number of lines available is sufficiently high and they cover a wide range of upper level energies. Second, the values of χ^2_ red are ≲ 1. And third, the H_2 densities derived are on the same order (within a factor of two) of that obtained through ^13C isotopologues of HC_3N. 
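The χ^2 grid search used here (and in Sect. <ref> for the ^13C isotopologues of HC_3N) can be summarized in a few lines; the sketch below (Python) is not the actual analysis code, and predict_intensities is a stand-in for an LVG calculation. The 10 % calibration term and the Δχ^2 thresholds (2.3 for two free parameters, 1.0 when n_H2 is fixed) are the values quoted above.

```python
import numpy as np

# Sketch of the chi^2 grid minimization described in the text.
# predict_intensities(N, n_H2) stands in for an LVG calculation returning the
# velocity-integrated intensities of the observed lines for trial parameters.

def chi2(I_obs, sigma_fit, predict_intensities, N, n_H2, cal=0.10):
    I_calc = predict_intensities(N, n_H2)
    sigma = np.sqrt(sigma_fit**2 + (cal * I_obs)**2)   # Gaussian-fit + 10% calibration
    return np.sum(((I_calc - I_obs) / sigma) ** 2)

def grid_fit(I_obs, sigma_fit, predict_intensities, N_grid, n_grid):
    grid = np.array([[chi2(I_obs, sigma_fit, predict_intensities, N, n)
                      for n in n_grid] for N in N_grid])
    chi2_min = grid.min()
    chi2_red = chi2_min / (I_obs.size - 2)   # p = 2 free parameters
    onesigma = grid <= chi2_min + 2.3        # 68% confidence region for p = 2
    # with n_H2 held fixed, p = 1 and the threshold becomes chi2_min + 1.0
    return chi2_min, chi2_red, onesigma
```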
The rotational temperatures derived by the rotation diagram indicate subthermal excitation, which is consistent with the H_2 densities derived and the excitation analysis presented in Sect. <ref>. We note that the column densities derived by the rotation diagram are systematically higher, by ∼ 50 %, compared to those derived through the LVG analysis. These differences are due to the breakdown of various assumptions made in the frame of the rotation diagram method, mainly the assumption of a uniform excitation temperature across all transitions and the validity of the Rayleigh-Jeans limit. Only the assumption that exp(hν/kT_ex) - 1 = hν/kT_ex, implicitly made by the rotation diagram method in the Rayleigh-Jeans limit, already implies errors of 10-20 % in the determination of the column density for these anions. We therefore adopt as preferred values for the column densities those derived through the LVG method and assign an uncertainty of 15 %, which is the typical statistical error in the determination of the column density by the LVG analysis. The recommended values are given in Table <ref>. Based on the same arguments, we conclude that the LVG analysis is satisfactory for C_6H^- and C_5N^- in Lupus-1A , C_6H^- and C_4H^- in L1527, and C_6H^- in L483, and thus adopt the column densities derived by the LVG method with the same estimated uncertainty of 15 % (see Table <ref>). In other cases the LVG analysis is less reliable due to a variety of reasons: only one or two lines are available (C_4H^- in TMC-1 CP, C_8H^- and C_3N^- in Lupus-1A, C_4H^- in L483, and C_6H^- in the clouds L1521F, L1251A, L1512, L1172, L1389, and TMC-1 C), the parameter χ^2_ red is well above unity (C_4H^- in Lupus-1A), or the column density has a sizable error (C_6H^- in L1495B and L1544). In those cases we adopt the column densities derived by the LVG method but assign a higher uncertainty of 30 % (values are given in Table <ref>). In order to derive anion-to-neutral abundance ratios, we applied the same analysis carried out for the anions to the corresponding neutral counterparts. We first focus on the radical C_6H. There is one striking issue in the LVG analysis carried out for this species: the H_2 volume densities derived through C_6H are systematically higher, by 1-2 orders of magnitude, than those derived through the ^13C isotopologues of HC_3N (see Fig. <ref>). This fact, together with the previous marked difference in the excitation pattern compared to that of C_6H^- discussed in Sect. <ref>, suggests that the collision coefficients adopted for C_6H, which are based on the C_6H – He system studied by <cit.>, are too small. A further problem when using the collision coefficients of <cit.> is that the line intensities from the ^2Π_1/2 state, which in TMC-1 CP are around 100 times smaller than those of the ^2Π_3/2 state, are overestimated by a factor of ∼ 10. All these issues indicate that it is worth to undertake calculations of the collision rate coefficients of C_6H with H_2. The suspected problem in the collision rate coefficients of C_6H make us to adopt a conservative uncertainty of 30 % in the column densities derived. 
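The 10-20 % figure quoted above for the Rayleigh-Jeans assumption is easy to verify directly; the sketch below (Python) evaluates (exp(x) - 1)/x with x = hν/kT_ex for Q-band frequencies and excitation temperatures that are only illustrative of the subthermal conditions discussed here.

```python
import math

# Size of the Rayleigh-Jeans replacement exp(x) - 1 -> x, x = h*nu/(k*T_ex),
# implicit in the rotation diagram method. Frequencies and excitation
# temperatures below are illustrative of subthermally excited Q-band lines.

H_OVER_K = 4.799e-11   # h/k in K per Hz

for nu_ghz in (31.0, 50.0):
    for T_ex in (6.0, 9.0):
        x = H_OVER_K * nu_ghz * 1e9 / T_ex
        err = (math.exp(x) - 1.0) / x - 1.0
        print(f"nu = {nu_ghz:.0f} GHz, T_ex = {T_ex:.0f} K: ~{100 * err:.0f}% effect")
# Yields roughly 9-23%, consistent with the 10-20% figure quoted above.
```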
Moreover, in those sources in which C_6H is observed through just a few lines (L1521F, L1251A, L1512, L1172, L1389, and TMC-1 C) we need to fix the H_2 density to the values derived through other density tracer (see Table <ref>), and given the marked difference between the H_2 densities derived through C_6H and other density tracers, it is likely that the C_6H column densities derived by the LVG method are unreliable. In these cases we therefore adopted as preferred C_6H column densities those obtained from the rotation diagram (see Table <ref>). For the other neutral radicals, we adopted the column densities derived by the LVG method with an estimated uncertainty of 15 % when the LVG analysis was satisfactory (C_3N and C_5N in TMC-1 CP, C_4H, C_3N, and C_5N in Lupus-1A, and C_4H in L1527) and a higher uncertainty of 30 % otherwise (C_4H and C_8H in TMC-1 CP, C_8H in Lupus-1A, and C_4H in L483). The recommended column densities for molecular anions and their neutral counterparts, and the corresponding anion-to-neutral ratios, are given in Table <ref>. Since the lines of a given anion and its corresponding neutral counterpart where in most cases observed simultaneously, we expect the error due to calibration to cancel when computing anion-to-neutral ratios. We therefore subtracted the 10 % error due to calibration in the column densities when computing errors in the anion-to-neutral ratios. In general, the recommended anion-to-neutral abundance ratios agree within 50 % with the values reported in the literature, when available. Higher differences, of up to a factor of two, are found for C_6H^- in L1527 and L1495B and for C_5N^- in TMC-1 CP. The most drastic differences are found for the C_4H^-/C_4H abundance ratio, for which we derive values much higher than those reported in the literature. The differences are largely due to the fact that here we adopt a revised value of the dipole moment of C_4H (2.10 D; ), which is significantly higher than the value of 0.87 D calculated by <cit.> and adopted in previous studies. This fact makes the column densities of C_4H to be revised downward by a factor of ∼ 6, and consequently the C_4H^-/C_4H ratios are also revised upward by the same factor. § DISCUSSION Having at hand a quite complete observational picture of negative ions in the interstellar medium, as summarized in Table <ref>, it is interesting to examine which lessons can be learnt from this. There are at least two interesting aspects to discuss. First, how do the anion-to-neutral abundance ratio behave from one source to another, and whether the observed variations can be related to some property of the cloud. And second, within a given source, how do the anion-to-neutral abundance ratio vary for the different anions, and whether this can be related to the formation mechanism of anions. Regarding the first point, since C_6H^- is the most widely observed anion, it is very convenient to focus on it to investigate the source-to-source behavior of negative ions. The detection of C_6H^- in L1527 and the higher C_6H^-/C_6H ratio derived in that source compared to that in TMC-1 CP led <cit.> to suggest that this was a consequence of the higher H_2 density in L1527 compared to TMC-1 CP. This point was later on revisited by <cit.> with a larger number of sources detected in C_6H^-. 
These authors found a trend in which the C_6H^-/C_6H ratio increases with increasing H_2 density and further argued that this ratio increases as the cloud evolves from quiescent to star-forming, with ratios below 3 % in quiescent sources and above that value in star-forming ones. There are theoretical grounds that support a relationship between the C_6H^-/C_6H ratio and the H_2 density. Assuming that the formation of anions is dominated by radiative electron attachment to the neutral counterpart and that they are mostly destroyed through reaction with H atoms, as expected for the conditions of cold dense clouds <cit.>, it can be easily shown that at steady state the anion-to-neutral abundance ratio is proportional to the abundance ratio between electrons and H atoms, which in turn is proportional to the square root of the H_2 volume density (e.g., ). That is, C_6H^-/C_6H∝e^-/H∝n(H_2)^1/2. In Fig. <ref> we plot the observed C_6H^-/C_6H ratio as a function of the H_2 density for the 12 clouds where this anion has been detected. This is an extended and updated version of Figure 5 of <cit.>, where we superimpose the theoretical trend expected according to Eq. (<ref>). In general terms, the situation depicted by Fig. <ref> is not that different from that found by <cit.>. The main difference concerns L1495B, for which we derive a higher C_6H^-/C_6H ratio, 3.0 % instead of 1.4 %. Our value should be more accurate, given the larger number of lines used here. Apart from that, the C_6H^-/C_6H ratio tends to be higher in those sources with higher H_2 densities, which tend to be more evolved. This behavior is similar to that found by <cit.>. The data points in Fig. <ref> seem to be consistent with the theoretical expectation. We however caution that there is substantial dispersion in the data points. Moreover, the uncertainties in the anion-to-neutral ratios, together with those affecting the H_2 densities (not shown), make it difficult to end up with a solid conclusion on whether or not observations follow the theoretical expectations. If we restrict to the five best characterized sources (TMC-1 CP, Lupus-1A, L1527, L483, and L1495B), all them observed in C_6H^- through four or more lines and studied in the H_2 density in a coherent way, then the picture is such that all sources, regardless of its H_2 density, have similar C_6H^-/C_6H ratios, at the exception of L1527, which remains the only data point supporting the theoretical relation between anion-to-neutral ratio and H_2 density. It is also worth noting that when looking at C_4H^-, L1527 shows also an enhanced anion-to-neutral ratio compared to TMC-1 CP, Lupus-1A, and L483. Further detections of C_6H^- in sources with high H_2 densities, preferably above 10^5 cm^-3, should help to shed light on the suspected relation between anion-to-neutral ratio and H_2 density. This however may not be easy because chemical models predict that, although the C_6H^-/C_6H ratio increases with increasing H_2 density, an increase in the density also brings a decrease in the column density of both C_6H and C_6H^- <cit.>. The second aspect that is worth to discuss is the variation of the anion-to-neutral ratio for different anions within a given source. Unlike the former source-to-source case, where variations were small (a factor of two at most), here anion-to-neutral ratios vary by orders of magnitude, i.e., well above uncertainties. 
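Returning to the first point, the scaling in Eq. (<ref>) follows from a minimal steady-state balance; the argument is standard and is only sketched here. If C_6H^- forms by radiative electron attachment, C_6H + e^- → C_6H^- + hν (rate coefficient k_REA), and is destroyed mainly by reaction with H atoms (rate coefficient k_H), then at steady state k_REA n(C_6H) n(e^-) = k_H n(C_6H^-) n(H), so that C_6H^-/C_6H = (k_REA/k_H) n(e^-)/n(H). In the usual picture of cosmic-ray-ionized dense gas, n(e^-) scales as n(H_2)^1/2 while n(H) is roughly independent of density, which gives C_6H^-/C_6H ∝ n(H_2)^1/2, the trend superimposed in Fig. <ref>.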
Figure <ref> summarizes the observational situation of interstellar anions in terms of abundances relative to their neutral counterpart. The variation of the anion-to-neutral ratios across different anions is best appreciated in TMC-1 CP and Lupus-1A, which stand out as the two most prolific sources of interstellar anions. The lowest anion-to-neutral ratio is reached by far for C_4H^-, while the highest values are found for C_5N^- and C_8H^-. We caution that the C_5N^-/C_5N ratio could have been overestimated if the true dipole moment of C_5N is a mixture between those of the ^2Σ and ^2Π states, as discussed by <cit.>, in a case similar to that studied for C_4H by <cit.>. For the large anion C_7N^-, the anion-to-neutral ratio is not known in TMC-1 CP but it is probably large, as suggested by the detection of the lines of the anion and the non-detection of the lines of the neutral <cit.>. In the case of the even larger anion C_10H^-, the anion is found to be even more abundant than the neutral in TMC-1 CP by a factor of two, although this result probably has an important uncertainty since the detection is done by line stacking <cit.>. Moreover, it is yet to be confirmed that the species identified is C_10H^- and not C_9N^- <cit.>. In any case, a solid conclusion from the TMC-1 CP and Lupus-1A data shown in Fig. <ref> is that when looking at either the hydrocarbon series of anions or at the nitrile series, the anion-to-neutral ratio clearly increases with increasing size. The most straightforward interpretation of this behavior is related to the formation mechanism originally proposed by <cit.>, which relies on the radiative electron attachment (REA) to the neutral counterpart and for which the rate coefficient is predicted to increase markedly with increasing molecular size. If electron attachment is the dominant formation mechanism of anions and destruction rates are similar for all anions, we expect the anion-to-neutral abundance ratio to be proportional to the rate coefficient of radiative electron attachment. That is, A^-/A ∝ k_REA, where A^- and A are the anion and its corresponding neutral counterpart, respectively, and k_REA is the rate coefficient for radiative electron attachment to A. To get insight into this relation we plot in Fig. <ref> the rate coefficients calculated for the reactions of electron attachment forming the different anions, on a scale designed to visualize whether observed anion-to-neutral ratios scale with calculated electron attachment rates. We arbitrarily choose C_6H^- as the reference for the discussion. If we first focus on the largest anion C_8H^-, we see that the C_8H^-/C_8H ratios are systematically higher, by a factor of 2-3, than the C_6H^-/C_6H ones, while <cit.> calculate identical electron attachment rates for C_6H and C_8H. Similarly, the C_5N^-/C_5N ratios are higher, by a factor of 6-8, than the C_6H^-/C_6H ratios, while the electron attachment rate calculated for C_5N is twice that computed for C_6H in the theoretical scenario of <cit.>. That is, for the large anions C_8H^- and C_5N^- there is a deviation of a factor of 2-4 from the theoretical expectation given by Eq. (<ref>). This deviation is small given the various sources of uncertainties in both the observed anion-to-neutral ratio (mainly due to uncertainties in the dipole moments) and the calculated electron attachment rate coefficient. The situation is different for the medium size anions C_4H^- and C_3N^-.
In the case of C_4H^-, anion-to-neutral ratios are ∼ 100 times lower than for C_6H^-, while the electron attachment rate calculated for C_4H is just ∼ 6 times lower than that computed for C_6H. The deviation from Eq. (<ref>) of a factor ∼ 20, which is significant, is most likely due to the electron attachment rate calculated for C_4H by <cit.> being too large. In the case of C_3N^-, the observed anion-to-neutral ratios are 4-6 times lower than those derived for C_6H^-, while the electron attachment rate calculated by <cit.> for C_3N is 300 times lower than that computed for C_6H by <cit.>. Here the deviation is as large as two orders of magnitude, and it is probably caused by the calculated electron attachment rate for C_3N being too low. In summary, calculated electron attachment rates are consistent with observed anion-to-neutral ratios for the large species but not for the medium-sized species C_4H and C_3N, in which cases calculated rates are too large by a factor of ∼ 20 and too small by a factor of ∼ 100, respectively. Of course, the above conclusion holds in the scenario of anion formation dominated by electron attachment and similar destruction rates for all anions, which may not be strictly valid. For example, it has been argued <cit.> that the process of radiative electron attachment is much less efficient than calculated by <cit.>, with rate coefficients that are too small to sustain the formation of anions in interstellar space. <cit.> discuss this point, drawing the distinction between direct and indirect radiative electron attachment, where for long carbon chains the direct process would be slow, corresponding to the rates calculated by <cit.>, while the indirect process could be fast if a long-lived superexcited anion is formed, something that has some experimental support. <cit.> conclude that there are enough grounds to support rapid electron attachment to large carbon chains, as calculated by <cit.>. The formation mechanism of anions through electron attachment is very selective for large species and thus has the advantage of naturally explaining the marked dependence of anion-to-neutral ratios on molecular size illustrated in Fig. <ref>, something that would be difficult to explain through other formation mechanisms. Indeed, mechanisms such as dissociative electron attachment to metastable isomers such as HNC_3 and H_2C_6 <cit.> or reactions of H^- with polyynes and cyanopolyynes <cit.> could contribute to some extent but are unlikely to control the formation of anions since they can hardly explain why large anions are far more abundant than small ones. § CONCLUSIONS We reported new detections of molecular anions in cold dense clouds and considerably expanded the number of lines through which negative ions are detected in interstellar clouds. The most prevalent anion remains C_6H^-, which to date has been seen in 12 interstellar clouds, while the other interstellar anions have been observed in just 1-4 sources. We carried out excitation calculations, which indicate that subthermal excitation is common for the lines of interstellar anions observed with radiotelescopes, with the low-frequency lines of heavy anions being the easiest to thermalize. Important discrepancies between calculations and observations are found for the radical C_6H, which suggest that the collision rate coefficients currently available for this species need to be revisited.
We analyzed all the observational data acquired here and in previous studies through non-LTE LVG calculations and rotation diagrams to constrain the column density of each anion in each source. Differences in the anion-to-neutral abundance ratios with respect to literature values are small, less than 50 % in general and up to a factor of two in a few cases. The largest difference is found for the C_4H^-/C_4H ratio, which is shifted upward with respect to previous values due to the adoption of a higher dipole moment for the radical C_4H. The observational picture of interstellar anions brought by this study shows two interesting results. On the one hand, the C_6H^-/C_6H ratio seems to be higher in clouds with a higher H_2 density, which is usually associated with a later evolutionary status of the cloud, although error bars make it difficult to clearly distinguish this trend. On the other hand, there is a very marked dependence of the anion-to-neutral ratio on the size of the anion, which is in line with the formation scenario involving radiative electron attachment, the theory of which must still be revised for medium-sized species such as C_4H and C_3N. We acknowledge funding support from Spanish Ministerio de Ciencia e Innovación through grants PID2019-106110GB-I00, PID2019-107115GB-C21, and PID2019-106235GB-I00. [Agúndez et al.(2008)]Agundez2008 Agúndez, M., Cernicharo, J., Guélin, M., et al. 2008, , 478, L19 [Agúndez et al.(2010)]Agundez2010 Agúndez, M., Cernicharo, J., Guélin, M., et al. 2010, , 517, L2 [Agúndez et al.(2015)]Agundez2015 Agúndez, M., Cernicharo, J., & Guélin, M. 2015, , 577, L5 [Agúndez et al.(2019)]Agundez2019 Agúndez, M., Marcelino, N., Cernicharo, J., et al. 2019, , 625, A147 [Agúndez et al.(2022)]Agundez2022 Agúndez, M., Marcelino, N., Cabezas, C., et al. 2022, , 657, A96 [Agúndez et al.(2023)]Agundez2023 Agúndez, M., Roncero, O., Marcelino, N., et al. 2023, , in press [Alexander(1982)]Alexander1982 Alexander, M. H. 1982, , 76, 5974 [Alexander et al.(1986)]Alexander1986 Alexander, M. H., Smedley, J. E., & Corey, G. C. 1986, , 84, 3049 [Anglada et al.(1997)]Anglada1997 Anglada, G., Sepúlveda, I., & Gómez, J. F. 1997, , 121, 255 [Bacmann et al.(2002)]Bacmann2002 Bacmann, A., Lefloch, B., Ceccarelli, C., et al. 2002, , 389, L6 [Balança et al.(2021)]Balanca2021 Balança, C., Quintas-Sánchez, E., Dawes, R., et al. 2021, , 508, 1148 [Biswas et al.(2023)]Biswas2023 Biswas, R., Giri, K., González-Sánchez, L. et al. 2023, , 522, 5775 [Blanksby et al.(2001)]Blanksby2001 Blanksby, S. J., McAnoy, A. M., Dua, S., & Bowie, J. H. 2001, , 328, 89 [Bop et al.(2021)]Bop2021 Bop, C. T., Lique, F., Faure, A., et al. 2021, , 501, 1911 [Bop et al.(2022)]Bop2022 Bop, C. T., Desrousseaux, B., & Lique, F. 2022, , 662, A102 [Botschwina et al.(1995)]Botschwina1995 Botschwina, P., Seeger, S., Mladenovic, M., et al. 1995, , 14, 169 [Botschwina(2000)]Botschwina2000 Botschwina, P. 2000, 55th Ohio Symposium on Molecular Spectroscopy, TC06 [Botschwina & Oswald(2008)]Botschwina2008 Botschwina, P. & Oswald, R. 2008, , 129, 044305 [Brünken et al.(2007a)]Brunken2007a Brünken, S., Gupta, H., Gottlieb, C. A., et al. 2007a, , 664, L43 [Brünken et al.(2007b)]Brunken2007b Brünken, S., Gottlieb, C. A., Gupta, H., et al. 2007b, , 464, L33 [Cabezas et al.(2021)]Cabezas2021 Cabezas, C., Agúndez, M., Marcelino, N., et al. 2021, , 654, A45 [Cabezas et al.(2022)]Cabezas2022 Cabezas, C., Agúndez, M., Marcelino, N., et al.
2022, , 657, L4 [Carelli et al.(2013)]Carelli2013 Carelli, F., Satta, M., Grassi, T., & Gianturco, F. A. 2013, , 774, 97 [Cernicharo et al.(2007)]Cernicharo2007 Cernicharo, J., Guélin, M., Agúndez, M., et al. 2007, , 467, L37 [Cernicharo et al.(2008)]Cernicharo2008 Cernicharo, J., Guélin, M., Agúndez, M., et al. 2008, , 688, L83 [Cernicharo et al.(2012)]Cernicharo2012 Cernicharo, J., Marcelino, N., Roueff, E., et al. 2012, , 759, L43 [Cernicharo et al.(2020)]Cernicharo2020 Cernicharo, J., Marcelino, N., Pardo, J. R., et al. 2020, , 641, L9 [Cernicharo et al.(2021)]Cernicharo2021 Cernicharo, J., Agúndez, M., Kaiser, R. I., et al. 2021, , 652, L9 [Cernicharo et al.(2023a)]Cernicharo2023a Cernicharo, J., Pardo, J. R., Cabezas, C., et al. 2023a, , 670, L19 [Cernicharo et al.(2023b)]Cernicharo2023b Cernicharo, J., Tercero, B., Marcelino, N., et al. 2023b, , submitted [Codella et al.(1997)]Codella1997 Codella, C., Welser, R., Henkel, C., et al. 1997, , 324, 203 [Cordiner et al.(2011)]Cordiner2011 Cordiner, M. A., Charnley, S. B., Buckle, J. V., et al. 2011, , 730, L18 [Cordiner & Charnley(2012)]Cordiner2012 Cordiner, M. A. & Charnley, S. B. 2012, , 749, 120 [Cordiner et al.(2013)]Cordiner2013 Cordiner, M. A., Buckle, J. V., Wirström, E. S., et al. 2013, , 770, 48 [Crapsi et al.(2005)]Crapsi2005 Crapsi, A., Caselli, P., Walmsley, C. M., et al. 2005, , 619, 379 [Douguet et al.(2015)]Douguet2015 Douguet, N., Fonseca dos Santos, S., Raoult, M., et al. 2015, , 142, 234309 [Dumouchel et al.(2012)]Dumouchel2012 Dumouchel, F., Spielfiedel, A., Senent, M. L., & Feautrier, N. 2012, , 533, 6 [Dumouchel et al.(2023)]Dumouchel2023 Dumouchel, F., Quintas-Sánchez, E., Balança, C., et al. 2023, , 158, 164307 [Faure et al.(2016)]Faure2016 Faure, A., Lique, A., & Wiesenfeld, L. 2016, , 460, 2103 [Fehér et al.(2016)]Feher2016 Fehér, O., Tóth, L. V., Ward-Thompson, D., et al. 2016, , 590, A75 [Flower et al.(2006)]Flower2006 Flower, D. R., Pineau des Forêts, G., & Walmsley, C. M. 2006, , 449, 621 [Flower et al.(2007)]Flower2007 Flower, D. R., Pineau des Forêts, G., & Walmsley, C. M. 2007, , 474, 923 [Forer et al.(2023)]Forer2023 Forer, J., Kokoouline, V., & Stoecklin, T. 2023, , 107, 043117 [Fossé et al.(2001)]Fosse2001 Fossé, D., Cernicharo, J., Gerin, M., & Cox, P. 2001, , 552, 168 [Franz et al.(2020)]Franz2020 Franz, J., Mant, B. P., González-Sánchez, L., et al. 2020, , 152, 234303 [Frayer et al.(2018)]Frayer2018 Frayer, D. T., Ghigo, F., & Maddalena, R. J. 2018, GBT Memo #301 [Gianturco et al.(2016)]Gianturco2016 Gianturco, F. A., Satta, M., Mendolicchio, M., et al. 2016, , 830, 2 [Gianturco et al.(2019)]Gianturco2019 Gianturco, F. A., González-Sánchez, L., Mant, B. P., & Wester, R. 2019, , 151, 144304 [González-Sánchez et al.(2020)]Gonzalez-Sanchez2020 González-Sánchez, L., Mant, B. P., Wester, R., & Gianturco, F. A. 2020, , 897, 75 [Gottlieb et al.(2007)]Gottlieb2007 Gottlieb, C. A., Brünken, S., McCarthy, M. C., & Thaddeus, P. 2007, , 126, 191101 [Gupta et al.(2007)]Gupta2007 Gupta, H., Brünken, S., Tamassia, F., et al. 2007, , 655, L57 [Gupta et al.(2009)]Gupta2009 Gupta, H., Gottlieb, C. A., McCarthy, M. C., & Thaddeus, P. 2009, , 691, 1494 [Harada & Herbst(2008)]Harada2008 Harada, N. & Herbst, E. 2008, , 685, 272 [Herbst(1981)]Herbst1981 Herbst, E. 1981, , 289, 656 [Herbst & Osamura(2008)]Herbst2008 Herbst, E. & Osamura, Y. 2008, , 679, 1670 [Jiménez-Serra et al.(2016)]Jimenez-Serra2016 Jiménez-Serra, I., Vasyunin, A. I., Caselli, P., et al. 
2016, , 830, L6 [Jørgensen et al.(2002)]Jorgensen2002 Jørgensen, J. K., Schöier, F. L., & van Dishoeck, E. F. 2002, , 389, 908 [Khamesian et al.(2016)]Khamesian2016 Khamesian, M., Douguet, N., Fonseca dos Santos, S., et al. 2016, , 117, 123001 [Kłos & Lique(2011)]Klos2011 Kłos, J. & Lique, F. 2011, , 418, 271 [Kołos et al.(2008)]Kolos2008 Kołos, R., Gronowski, M., & Botschwina, P. 2008, , 128, 154305 [Lara-Moreno et al.(2017)]Lara-Moreno2017 Lara-Moreno, M., Stoecklin, T., & Halvick, P. 2017, , 467, 4174 [Lara-Moreno et al.(2019)]Lara-Moreno2019 Lara-Moreno, M., Stoecklin, T., & Halvick, P. 2019, , 486, 414 [Lara-Moreno et al.(2021)]Lara-Moreno2021 Lara-Moreno, M., Stoecklin, T., & Halvick, P. 2021, , 507, 4086 [McCarthy et al.(1995)]McCarthy1995 McCarthy, M. C., Gottlieb, C. A., Thaddeus, P., et al. 1995, , 103, 7820 [McCarthy et al.(2006)]McCarthy2006 McCarthy, M. C., Gottlieb, C. A., Gupta, H., & Thaddeus, P. 2006, , 652, L141 [Marcelino et al.(2007)]Marcelino2007 Marcelino, N., Cernicharo, J., Agúndez, M., et al. 2007, , 665, L127 [Martínez et al.(2010)]Martinez2010 Martínez Jr., O., Yang, Z., Demarais, N. J., et al. 2010, , 720, 173 [Millar et al.(2017)]Millar2017 Millar, T. J., Walsh, C., & Field, T. A. 2017, , 117, 1765 [Murakami et al.(2022)]Murakami2022 Murakami, T., Iida, R., Hashimoto, Y., et al. 2022, , 126, 9244 [Oyama et al.(2020)]Oyama2020 Oyama, T., Ozaki, H., Sumiyoshi, Y., et al. 2020, , 890, 39 [Pardo et al.(2023)]Pardo2023 Pardo, J. R., Cabezas, C., Agúndez, M., et al. 2023, , submitted [Petrie & Herbst(1997)]Petrie1997 Petrie, S. & Herbst, E. 1997, , 491, 210 [Punanova et al.(2018)]Punanova2018 Punanova, A., Caselli, P., Feng, S., et al. 2018, , 855, 112 [Remijan et al.(2007)]Remijan2007 Remijan, A. J., Hollis, J. M., Lovas, F. J., et al. 2007, , 664, L47 [Remijan et al.(2023)]Remijan2023 Remijan, A., Scolati, H. N., Burkhardt, A. M., et al. 2023, , 944, L45 [Sakai et al.(2007)]Sakai2007 Sakai, N., Sakai, T., Osamura, Y., & Yamamoto, S. 2007, , 667, L65 [Sakai et al.(2008)]Sakai2008 Sakai, N., Sakai, T., Hirota, T., & Yamamoto, S. 2008, , 672, 371 [Sakai et al.(2010)]Sakai2010 Sakai, N., Shiino, T., Hirota, T., et al. 2010, , 718, L49 [Senent et al.(2019)]Senent2019 Senent, M. L., Dayou, F., Dumouchel, F., et al. 2019, , 486, 422 [Spezzano et al.(2017)]Spezzano2017 Spezzano, S., Caselli, P., Bizzocchi, L., et al. 2017, , 606, A82 [Suzuki et al.(1992)]Suzuki1992 Suzuki, H., Yamamoto, S., Ohishi, M., et al. 1992, , 392, 551 [Tafalla et al.(2002)]Tafalla2002 Tafalla, M., Myers, P. C., Caselli, P., et al. 2002, , 569, 815 [Tchakoua et al.(2018)]Tchakoua2018 Tchakoua, T., Motapon, O., & Nsangou, M. 2018, , 51, 045202 [Tercero et al.(2021)]Tercero2021 Tercero, F., López-Pérez, J. A., Gallego, J. D., et al. 2021, , 645, A37 [Thaddeus et al.(2008)]Thaddeus2008 Thaddeus, P., Gottlieb, C. A., Gupta, H., et al. 2008, , 677, 1132 [Toumi et al.(2021)]Toumi2021 Toumi, I., Yazidi, O., & Najar, F. 2021, , 11, 13579 [Vastel et al.(2018)]Vastel2018 Vastel, C., Quénard, D., Le Gal, R., et al. 2018, , 478, 5514 [Visser et al.(2002)]Visser2002 Visser, A. E., Richer, J. S., & Chandler, C. J. 2002, , 124, 2756 [Vuitton et al.(2009)]Vuitton2009 Vuitton, V., Lavvas, P., Yelle, R. V., et al. 2009, , 57, 1558 [Walker et al.(2016)]Walker2016 Walker, K. M., Dumouchel, F., Lique, F., & Dawes, R. 2016, , 145, 024314 [Walker et al.(2017)]Walker2017 Walker, K. M., Lique, F., Dumouchel, F., & Dawes, R. 2017, , 466, 831 [Walker et al.(2018)]Walker2018 Walker, K. M., Lique, F., & Dawes, R. 
2018, , 473, 1407 [Walsh et al.(2009)]Walsh2009 Walsh, C., Harada, N., Herbst, E., Millar, T. J. 2009, , 700, 752 [Woon(1995)]Woon1995 Woon, D. E. 1995, , 244, 45 [Yoshida et al.(2019)]Yoshida2019 Yoshida, K., Sakai, N., Nishimura, Y., et al. 2019, , 71, S18 § SUPPLEMENTARY TABLE lcc@c@cccc@c@ll Observed line parameters of molecular anions in interstellar clouds. 1lSpecies 1cTransition 1cFrequency 1cV_ LSR 1cΔ v 1cT_A^* peak ^a 1c∫ T_A^* dv ^a Telescope Reference 1c 1c 1c(MHz) 1c(km s^-1) 1c(km s^-1) 1c(mK) 1c(mK km s^-1) continued. 1lSpecies 1cTransition 1cFrequency 1cV_ LSR 1cΔ v 1cT_A^* peak ^a 1c∫ T_A^* dv ^a Telescope Reference 1c 1c 1c(MHz) 1c(km s^-1) 1c(km s^-1) 1c(mK) 1c(mK km s^-1) 11cTMC-1 CP C_6H^- 4-3 11014.896 +5.80(2) 0.38(4) 25(3) 10.1(33) GBT <cit.> 5-4 13768.614 +5.80(11) 0.44(7) 24(3) 11.2(43) GBT <cit.> 10-9 27537.130 2*{ 2*41.6(90) ^b, c 2*} 2*GBT 2*<cit.> 11-10 30290.813 12-11 33044.488 +5.78(1) 0.73(1) 22.3(23) 17.4(18) Yebes 40m This work 13-12 35798.153 +5.78(1) 0.70(1) 20.9(22) 15.5(17) Yebes 40m This work 14-13 38551.808 +5.78(1) 0.64(2) 18.9(20) 12.8(14) Yebes 40m This work 15-14 41305.453 +5.79(2) 0.56(3) 17.2(19) 10.3(12) Yebes 40m This work 16-15 44059.085 +5.79(2) 0.57(3) 12.8(15) 7.7(10) Yebes 40m This work 17-16 46812.706 +5.81(2) 0.59(4) 9.6(13) 6.0(8) Yebes 40m This work 18-17 49566.313 +5.84(3) 0.56(5) 5.4(10) 3.2(5) Yebes 40m This work C_4H^- 2-1 18619.761 +5.70(5) 0.43(13) 1.0(3) ^b, d GBT <cit.> 4-3 37239.410 +5.81(2) 0.71(2) 6.0(7) 4.5(6) Yebes 40m This work 5-4 46549.156 +5.81(2) 0.55(3) 5.8(8) 3.4(4) Yebes 40m This work C_8H^- 11-10 12833.460 +5.71(5) 0.36(4) 8(1) 3.1(10) GBT <cit.> 12-11 14000.134 +5.86(5) 0.37(4) 7(1) 2.8(10) GBT <cit.> 13-12 15166.806 +5.84(6) 0.45(4) 6(1) 2.9(10) GBT <cit.> 16-15 18666.814 +5.80(7) 0.34(5) 10(2) 3.6(16) GBT <cit.> 27-26 31500.029 +5.82(4) 0.63(10) 1.28(28) 0.86(20) Yebes 40m This work 28-27 32666.670 +5.76(3) 0.76(6) 1.08(26) 0.87(15) Yebes 40m This work 29-28 33833.309 +5.90(12) 0.68(17) 0.78(19) 0.56(18) Yebes 40m This work 30-29 34999.944 +5.86(6) 0.60(10) 0.87(20) 0.56(14) Yebes 40m This work 31-30 36166.576 +5.83(8) 0.32(20) 1.01(24) 0.34(10) Yebes 40m This work 32-31 37333.205 +5.73(5) 0.66(11) 0.87(23) 0.61(16) Yebes 40m This work 33-32 38499.831 +5.81(9) 0.82(17) 0.68(20) 0.60(18) Yebes 40m This work 34-33 39666.453 +5.93(10) 0.40(12) 0.44(21) 0.19(7) ^e Yebes 40m This work C_3N^- 4-3 38812.797 +5.78(1) 0.88(2) 4.2(2) 3.9(5) Yebes 40m This work 5-4 48515.872 +5.86(2) 0.61(4) 6.3(9) 4.1(6) Yebes 40m This work 8-7 77624.540 +5.88(3) 0.52(8) 7.1(17) 3.9(9) IRAM 30m This work 10-9 97029.687 +5.77(4) 0.38(6) 2.7(8) 1.1(3) IRAM 30m This work C_5N^- 12-11 33332.570 +5.83(1) 0.71(3) 6.5(7) 4.9(6) Yebes 40m This work 13-12 36110.238 +5.80(1) 0.64(2) 6.1(7) 4.1(5) Yebes 40m This work 14-13 38887.896 +5.81(1) 0.63(2) 6.5(8) 4.4(5) Yebes 40m This work 15-14 41665.541 +5.82(2) 0.58(2) 5.7(7) 3.5(5) Yebes 40m This work 16-15 44443.173 +5.79(2) 0.56(2) 4.7(6) 2.8(4) Yebes 40m This work 17-16 47220.793 +5.81(2) 0.50(4) 3.6(6) 1.9(3) Yebes 40m This work 11cLupus-1A C_6H^- 7-6 19276.037 +5.046(8) 0.16(2) 85(8) ^b 14(2) ^b GBT <cit.> 8-7 22029.741 +5.034(10) 0.17(2) 94(11) ^b 15(3) ^b GBT <cit.> 12-11 33044.488 +5.06(2) 0.59(3) 30.1(37) 18.9(24) Yebes 40m This work 13-12 35798.153 +5.08(2) 0.51(3) 32.9(40) 17.8(25) Yebes 40m This work 14-13 38551.808 +5.05(2) 0.48(4) 30.4(38) 15.7(20) Yebes 40m This work 15-14 41305.453 +5.09(3) 0.40(7) 32.7(42) 13.8(19) Yebes 40m This work 16-15 44059.085 +5.07(3) 0.55(6) 24.2(35) 
14.2(22) Yebes 40m This work 17-16 46812.706 +5.10(6) 0.51(8) 17.1(33) 9.3(18) Yebes 40m This work C_4H^- 4-3 37239.410 +5.078(13) 0.34(3) 59(5) ^b 19(5) ^b GBT <cit.> 4-3 37239.410 +5.04(4) 0.78(7) 7.4(14) 6.1(11) Yebes 40m This work 5-4 46549.156 +5.05(9) 0.45(12) 9.8(27) 4.7(13) Yebes 40m This work 9-8 83787.297 +5.23(6) 0.47(12) 10.4(31) 5.3(13) IRAM 30m This work C_8H^- 16-15 18666.814 2*{ 2*+5.014(11) 2*0.09(3) 2*35(9) 2*4(1) ^b, c 2*} 2*GBT 2*<cit.> 18-17 21000.145 C_3N^- 4-3 38812.797 +5.16(15) 0.96(15) 2.8(10) 2.8(9) Yebes 40m This work C_5N^- 12-11 33332.570 +5.11(7) 0.50(9) 8.4(16) 4.4(10) Yebes 40m This work 13-12 36110.238 +5.11(7) 0.44(9) 6.5(13) 3.1(7) Yebes 40m This work 14-13 38887.896 +5.13(7) 0.64(8) 8.0(17) 5.4(11) Yebes 40m This work 15-14 41665.541 +5.14(9) 0.37(10) 9.2(19) 3.7(9) Yebes 40m This work 16-15 44443.173 +5.09(10) 0.58(15) 6.1(18) 3.8(11) Yebes 40m This work 11cL1527 C_6H^- 7-6 19276.037 +5.93(9) 0.45(11) 14(3) ^b 7(2) ^b GBT <cit.> 8-7 22029.741 +5.89(3) 0.49(10) 26(4) ^b 18(4) ^b GBT <cit.> 12-11 33044.488 +5.90(5) 0.85(10) 9.6(14) 8.6(16) Yebes 40m This work 13-12 35798.153 +5.85(4) 0.60(4) 11.4(20) 7.3(18) Yebes 40m This work 14-13 38551.808 +5.84(3) 0.61(5) 12.0(18) 7.8(12) Yebes 40m This work 15-14 41305.453 +5.90(3) 0.60(4) 16.4(25) 10.4(19) Yebes 40m This work 16-15 44059.085 +5.90(3) 0.52(4) 14.5(23) 8.0(16) Yebes 40m This work 17-16 46812.706 +5.83(5) 0.58(8) 11.1(23) 6.8(14) Yebes 40m This work C_4H^- 4-3 37239.410 +5.92(12) 0.80(20) 3.2(10) 2.7(7) Yebes 40m This work 5-4 46549.156 +6.05(15) 0.73(15) 4.9(19) 3.8(13) Yebes 40m This work 9-8 83787.297 +5.80(3) 0.62(9) 13(2) 8(1) IRAM 30m <cit.> 10-9 93096.550 +5.90(4) 0.59(9) 11(2) 7(1) IRAM 30m <cit.> 11cL483 C_6H^- 12-11 33044.488 +5.38(6) 0.66(8) 4.9(11) 3.4(8) Yebes 40m This work 13-12 35798.153 +5.33(5) 0.70(7) 5.8(10) 4.3(8) Yebes 40m This work 14-13 38551.808 +5.33(5) 0.78(7) 5.2(9) 4.3(9) Yebes 40m This work 15-14 41305.453 +5.29(6) 0.46(9) 5.3(12) 2.6(6) Yebes 40m This work 16-15 44059.085 +5.24(10) 0.75(12) 4.8(12) 3.8(10) Yebes 40m This work 17-16 46812.706 +5.34(7) 0.63(9) 5.0(14) 3.4(9) Yebes 40m This work C_4H^- 4-3 37239.410 +5.39(8) 0.73(12) 2.8(7) 2.2(5) Yebes 40m This work 5-4 46549.156 +5.37(10) 0.44(15) 2.7(12) 1.3(5) ^e Yebes 40m This work 11cL1495B C_6H^- 10-9 27537.130 2*{ 2*9.6(20) ^b, c 2*} 2*GBT 2*<cit.> 11-10 30290.813 12-11 33044.488 +7.66(5) 0.80(7) 5.9(12) 5.0(9) Yebes 40m This work 13-12 35798.153 +7.65(5) 0.50(8) 5.8(12) 3.1(6) Yebes 40m This work 14-13 38551.808 +7.58(7) 0.39(10) 4.3(11) 1.8(4) Yebes 40m This work 15-14 41305.453 +7.66(10) 0.36(14) 6.6(16) 2.6(6) Yebes 40m This work 16-15 44059.085 +7.61(8) 0.49(12) 4.1(11) 2.1(6) Yebes 40m This work 11cL1544 2*C_6H^- 2*7-6 2*19276.037 ^e 2*{ +7.08(3) 0.16(3) 16(2) 2*6.0(18) 2*} 2*GBT 2*<cit.> +7.30(3) 0.13(3) 26(2) 12-11 33044.488 +7.11(13) 0.67(28) 4.5(16) 3.2(14) Yebes 40m This work 13-12 35798.153 +7.04(10) 0.48(16) 4.1(12) 2.1(9) Yebes 40m This work 14-13 38551.808 +6.98(8) 0.50(13) 6.0(16) 3.2(12) Yebes 40m This work 15-14 41305.453 +7.34(18) 0.76(36) 4.6(15) 3.7(16) Yebes 40m This work 11cL1521F 2*C_6H^- 2*7-6 2*19276.037 ^e 2*{ +6.33(5) 0.18(3) 17(2) 2*7.0(17) 2*} 2*GBT 2*<cit.> +6.64(5) 0.35(9) 9(2) 11cL1251A C_6H^- 10-9 27537.130 2*{ 2*6.5(17) ^b, c 2*} 2*GBT 2*<cit.> 11-10 30290.813 11cL1512 C_6H^- 10-9 27537.130 2*{ 2*4.3(8) ^b, c 2*} 2*GBT 2*<cit.> 11-10 30290.813 11cL1172 C_6H^- 10-9 27537.130 2*{ 2*6.7(15) ^b, c 2*} 2*GBT 2*<cit.> 11-10 30290.813 11cL1389 C_6H^- 10-9 27537.130 2*{ 2*5.9(14) ^b, c 2*} 
2*GBT 2*<cit.> 11-10 30290.813 11cTMC-1 C C_6H^- 10-9 27537.130 2*{ 2*13.6(25) ^b, c 2*} 2*GBT 2*<cit.> 11-10 30290.813 ^a Unless otherwise stated, the intensity scale is antenna temperature (T_A^*). It can be converted to main beam brightness temperature (T_ mb) by dividing by B_ eff/F_ eff, where B_ eff is the main beam efficiency and F_ eff is the telescope forward efficiency. For the Yebes 40m telescope in the Q band B_ eff = 0.797 exp[-(ν(GHz)/71.1)^2] and F_ eff = 0.97 (), for the IRAM 30m telescope B_ eff = 0.871 exp[-(ν(GHz)/359)^2] and F_ eff = 0.95 (), and for the GBT telescope we adopt F_ eff = 1.0 and B_ eff = 1.32 × 0.71 exp[-(ν(GHz)/103.7)^2] <cit.>. The error in ∫ T_A^* dv includes the contributions from the Gaussian fit and from calibration (assumed to be 10 %). ^b Intensity scale is T_ mb. ^c Average of two lines. ^d Line neglected in the analysis. Intensity should be ∼ 3 times larger to be consistent with the other lines. ^e Line detected marginally. llccll Observed velocity-integrated line intensities of neutral counterparts of molecular anions in interstellar clouds. 1lSpecies 1cTransition 1cFrequency (MHz) 1c∫ T_A^* dv (mK km s^-1) ^a Telescope Reference continued. 1lSpecies 1cTransition 1cFrequency (MHz) 1c∫ T_A^* dv (mK km s^-1) ^a Telescope Reference 6cTMC-1 CP C_6H ^2Π_3/2 J=15/2-13/2 a 20792.907 133(24) ^b GBT <cit.> ^2Π_3/2 J=15/2-13/2 b 20794.475 112(22) ^b GBT <cit.> ^2Π_3/2 J=21/2-19/2 a 29109.658 332.4(420) ^b GBT <cit.> ^2Π_3/2 J=23/2-21/2 a 31881.860 175.6(176) Yebes 40m This work ^2Π_3/2 J=23/2-21/2 b 31885.541 173.5(175) Yebes 40m This work ^2Π_3/2 J=25/2-23/2 a 34654.037 158.9(160) Yebes 40m This work ^2Π_3/2 J=25/2-23/2 b 34658.383 158.5(160) Yebes 40m This work ^2Π_3/2 J=27/2-25/2 a 37426.192 141.5(180) Yebes 40m This work ^2Π_3/2 J=27/2-25/2 b 37431.255 141.1(175) Yebes 40m This work ^2Π_3/2 J=29/2-27/2 a 40198.323 119.3(149) Yebes 40m This work ^2Π_3/2 J=29/2-27/2 b 40204.157 118.6(147) Yebes 40m This work ^2Π_3/2 J=31/2-29/2 a 42970.432 93.4(106) Yebes 40m This work ^2Π_3/2 J=31/2-29/2 b 42977.089 93.3(106) Yebes 40m This work ^2Π_3/2 J=33/2-31/2 a 45742.519 73.0(98) Yebes 40m This work ^2Π_3/2 J=33/2-31/2 b 45750.052 73.4(99) Yebes 40m This work ^2Π_3/2 J=35/2-33/2 a 48514.584 52.6(73) Yebes 40m This work ^2Π_3/2 J=35/2-33/2 b 48523.044 52.2(70) Yebes 40m This work C_4H N=2-1 J=3/2-1/2 19054.476 411.3(418) ^b GBT <cit.> N=4-3 J=9/2-7/2 38049.654 1369(138) Yebes 40m This work N=4-3 J=7/2-5/2 38088.461 1007(102) Yebes 40m This work N=5-4 J=11/2-9/2 47566.792 1094(111) Yebes 40m This work N=5-4 J=9/2-7/2 47605.496 864(87) Yebes 40m This work N=9-8 J=19/2-17/2 85634.010 417(53) IRAM 30m <cit.> N=9-8 J=17/2-15/2 85672.580 386(49) IRAM 30m <cit.> N=10-9 J=21/2-19/2 95150.393 251(26) IRAM 30m This work N=10-9 J=19/2-17/2 95188.947 243(26) IRAM 30m This work N=11-10 J=23/2-21/2 104666.568 111(12) IRAM 30m This work N=11-10 J=21/2-19/2 104705.108 105(13) IRAM 30m This work N=12-11 J=25/2-23/2 114182.523 60(8) IRAM 30m This work N=12-11 J=23/2-21/2 114221.023 47(6) IRAM 30m This work C_8H ^2Π_3/2 J=53/2-51/2 a 31093.035 6.0(7) Yebes 40m This work ^2Π_3/2 J=53/2-51/2 b 31093.415 4.4(6) Yebes 40m This work ^2Π_3/2 J=55/2-53/2 a 32266.325 4.3(6) Yebes 40m This work ^2Π_3/2 J=55/2-53/2 b 32266.735 4.2(6) Yebes 40m This work ^2Π_3/2 J=57/2-55/2 a 33439.612 3.5(5) Yebes 40m This work ^2Π_3/2 J=57/2-55/2 b 33440.052 3.4(6) Yebes 40m This work ^2Π_3/2 J=59/2-57/2 b 34613.367 2.7(3) Yebes 40m This work ^2Π_3/2 J=61/2-59/2 a 35786.176 3.0(4) Yebes 40m This work 
^2Π_3/2 J=61/2-59/2 b 35786.679 2.4(3) Yebes 40m This work ^2Π_3/2 J=63/2-61/2 a 36959.452 2.3(3) Yebes 40m This work ^2Π_3/2 J=63/2-61/2 b 36959.989 2.2(3) Yebes 40m This work ^2Π_3/2 J=65/2-63/2 a 38132.725 1.7(2) Yebes 40m This work ^2Π_3/2 J=65/2-63/2 b 38133.297 1.5(2) Yebes 40m This work ^2Π_3/2 J=67/2-65/2 a 39305.995 1.4(2) Yebes 40m This work ^2Π_3/2 J=67/2-65/2 b 39306.602 1.4(2) Yebes 40m This work ^2Π_3/2 J=69/2-67/2 a 40479.260 1.2(2) Yebes 40m This work ^2Π_3/2 J=69/2-67/2 b 40479.904 1.2(2) Yebes 40m This work ^2Π_3/2 J=71/2-69/2 a 41652.522 0.8(1) Yebes 40m This work ^2Π_3/2 J=71/2-69/2 b 41653.203 0.9(1) Yebes 40m This work ^2Π_3/2 J=73/2-71/2 a 42825.779 0.7(1) Yebes 40m This work ^2Π_3/2 J=73/2-71/2 b 42826.499 0.7(1) Yebes 40m This work C_3N N=4-3 J=9/2-7/2 39571.347 332(34) Yebes 40m This work N=4-3 J=7/2-5/2 39590.181 240(25) Yebes 40m This work N=5-4 J=11/2-9/2 49466.421 244(25) Yebes 40m This work N=5-4 J=9/2-7/2 49485.224 198(20) Yebes 40m This work N=9-8 J=19/2-17/2 89045.583 64.2(73) IRAM 30m This work N=9-8 J=17/2-15/2 89064.347 58.6(68) IRAM 30m This work N=10-9 J=21/2-19/2 98940.087 28.1(36) IRAM 30m This work N=10-9 J=19/2-17/2 98958.770 22.7(30) IRAM 30m This work N=11-10 J=23/2-21/2 108834.254 11.6(24) IRAM 30m This work N=11-10 J=21/2-19/2 108853.012 21.2(35) IRAM 30m This work C_5N N=12-11 J=25/2-23/2 33668.234 5.6(7) Yebes 40m This work N=12-11 J=23/2-21/2 33678.966 5.9(7) Yebes 40m This work N=13-12 J=27/2-25/2 36474.308 5.8(7) Yebes 40m This work N=13-12 J=25/2-23/2 36485.042 5.5(7) Yebes 40m This work N=14-13 J=29/2-27/2 39280.369 5.1(7) Yebes 40m This work N=14-13 J=27/2-25/2 39291.105 5.0(7) Yebes 40m This work N=15-14 J=31/2-29/2 42086.415 4.7(6) Yebes 40m This work N=15-14 J=29/2-27/2 42097.151 4.4(6) Yebes 40m This work N=16-15 J=33/2-31/2 44892.444 4.6(6) Yebes 40m This work N=16-15 J=31/2-29/2 44903.182 4.4(6) Yebes 40m This work N=17-16 J=35/2-33/2 47698.457 3.7(5) Yebes 40m This work N=17-16 J=33/2-31/2 47709.196 3.4(5) Yebes 40m This work 6cLupus-1A C_6H ^2Π_3/2 J=15/2-13/2 a 20792.907 114(14) ^b GBT <cit.> ^2Π_3/2 J=15/2-13/2 b 20794.475 131(16) ^b GBT <cit.> ^2Π_3/2 J=23/2-21/2 a 31881.860 150.3(166) Yebes 40m This work ^2Π_3/2 J=23/2-21/2 b 31885.541 153.1(163) Yebes 40m This work ^2Π_3/2 J=25/2-23/2 a 34654.037 151.6(161) Yebes 40m This work ^2Π_3/2 J=25/2-23/2 b 34658.383 150.0(159) Yebes 40m This work ^2Π_3/2 J=27/2-25/2 a 37426.192 140.3(143) Yebes 40m This work ^2Π_3/2 J=27/2-25/2 b 37431.255 141.0(148) Yebes 40m This work ^2Π_3/2 J=29/2-27/2 a 40198.323 126.2(134) Yebes 40m This work ^2Π_3/2 J=29/2-27/2 b 40204.157 124.8(130) Yebes 40m This work ^2Π_3/2 J=31/2-29/2 a 42970.432 115.5(123) Yebes 40m This work ^2Π_3/2 J=31/2-29/2 b 42977.089 114.9(123) Yebes 40m This work ^2Π_3/2 J=33/2-31/2 a 45742.519 90.7(125) Yebes 40m This work ^2Π_3/2 J=33/2-31/2 b 45750.052 91.3(128) Yebes 40m This work ^2Π_3/2 J=35/2-33/2 a 48514.584 73.6(109) Yebes 40m This work ^2Π_3/2 J=35/2-33/2 b 48523.044 66.9(103) Yebes 40m This work C_4H N=4-3 J=9/2-7/2 38049.654 1219(123) Yebes 40m This work N=4-3 J=7/2-5/2 38088.461 921(94) Yebes 40m This work N=5-4 J=11/2-9/2 47566.792 1123(114) Yebes 40m This work N=5-4 J=9/2-7/2 47605.496 846(86) Yebes 40m This work N=8-7 J=17/2-15/2 76117.439 1124(114) IRAM 30m This work N=8-7 J=15/2-13/2 76156.028 1024(104) IRAM 30m This work N=9-8 J=19/2-17/2 85634.010 779(83) IRAM 30m This work N=9-8 J=17/2-15/2 85672.580 730(77) IRAM 30m This work N=11-10 J=23/2-21/2 104666.568 349(39) IRAM 30m This work N=11-10 J=21/2-19/2 
104705.108 334(38) IRAM 30m This work C_8H ^2Π_3/2 J=33/2-31/2 a 19359.975 10(2) ^b GBT <cit.> ^2Π_3/2 J=33/2-31/2 b 19360.123 9(2) ^b GBT <cit.> C_3N N=4-3 J=9/2-7/2 39571.347 251(30) Yebes 40m This work N=4-3 J=7/2-5/2 39590.181 175(19) Yebes 40m This work N=5-4 J=11/2-9/2 49466.421 177(19) Yebes 40m This work N=5-4 J=9/2-7/2 49485.224 138(15) Yebes 40m This work N=9-8 J=19/2-17/2 89045.583 141.5(150) IRAM 30m This work N=9-8 J=17/2-15/2 89064.347 126.7(136) IRAM 30m This work N=10-9 J=21/2-19/2 98940.087 74.6(83) IRAM 30m This work N=10-9 J=19/2-17/2 98958.770 66.0(74) IRAM 30m This work C_5N N=12-11 J=25/2-23/2 33668.234 4.5(12) Yebes 40m This work N=12-11 J=23/2-21/2 33678.966 7.0(14) Yebes 40m This work N=13-12 J=27/2-25/2 36474.308 4.8(11) Yebes 40m This work N=13-12 J=25/2-23/2 36485.042 5.7(11) Yebes 40m This work N=14-13 J=29/2-27/2 39280.369 7.8(24) Yebes 40m This work N=14-13 J=27/2-25/2 39291.105 5.7(15) Yebes 40m This work N=15-14 J=31/2-29/2 42086.415 4.1(9) Yebes 40m This work N=15-14 J=29/2-27/2 42097.151 4.8(11) Yebes 40m This work N=16-15 J=33/2-31/2 44892.444 3.2(9) Yebes 40m This work N=16-15 J=31/2-29/2 44903.182 1.8(8) ^d Yebes 40m This work 6cL1527 C_6H ^2Π_3/2 J=15/2-13/2 a 20792.907 24(5) ^b GBT <cit.> ^2Π_3/2 J=15/2-13/2 b 20794.475 21(5) ^b GBT <cit.> ^2Π_3/2 J=23/2-21/2 a 31881.860 34.8(75) Yebes 40m This work ^2Π_3/2 J=23/2-21/2 b 31885.541 26.0(59) Yebes 40m This work ^2Π_3/2 J=25/2-23/2 a 34654.037 29.3(34) Yebes 40m This work ^2Π_3/2 J=25/2-23/2 b 34658.383 31.8(37) Yebes 40m This work ^2Π_3/2 J=27/2-25/2 a 37426.192 31.7(46) Yebes 40m This work ^2Π_3/2 J=27/2-25/2 b 37431.255 32.2(51) Yebes 40m This work ^2Π_3/2 J=29/2-27/2 a 40198.323 32.7(50) Yebes 40m This work ^2Π_3/2 J=29/2-27/2 b 40204.157 32.3(48) Yebes 40m This work ^2Π_3/2 J=31/2-29/2 a 42970.432 30.2(47) Yebes 40m This work ^2Π_3/2 J=31/2-29/2 b 42977.089 31.1(49) Yebes 40m This work ^2Π_3/2 J=33/2-31/2 a 45742.519 30.5(48) Yebes 40m This work ^2Π_3/2 J=33/2-31/2 b 45750.052 31.3(49) Yebes 40m This work ^2Π_3/2 J=35/2-33/2 a 48514.584 27.3(48) Yebes 40m This work ^2Π_3/2 J=35/2-33/2 b 48523.044 26.9(47) Yebes 40m This work C_4H N=4-3 J=9/2-7/2 38049.654 388(39) Yebes 40m This work N=4-3 J=7/2-5/2 38088.461 295(30) Yebes 40m This work N=5-4 J=11/2-9/2 47566.792 434(44) Yebes 40m This work N=5-4 J=9/2-7/2 47605.496 347(35) Yebes 40m This work N=9-8 J=19/2-17/2 85634.010 747(86) IRAM 30m <cit.> N=9-8 J=17/2-15/2 85672.580 712(82) IRAM 30m <cit.> N=11-10 J=23/2-21/2 104666.568 542(64) IRAM 30m <cit.> N=11-10 J=21/2-19/2 104705.108 487(59) IRAM 30m <cit.> N=12-11 J=25/2-23/2 114182.523 462(59) IRAM 30m <cit.> N=12-11 J=23/2-21/2 114221.023 406(53) IRAM 30m <cit.> 6cL483 C_6H ^2Π_3/2 J=23/2-21/2 a 31881.860 29.4(34) Yebes 40m This work ^2Π_3/2 J=23/2-21/2 b 31885.541 31.0(36) Yebes 40m This work ^2Π_3/2 J=25/2-23/2 a 34654.037 28.4(32) Yebes 40m This work ^2Π_3/2 J=25/2-23/2 b 34658.383 27.7(31) Yebes 40m This work ^2Π_3/2 J=27/2-25/2 a 37426.192 26.2(29) Yebes 40m This work ^2Π_3/2 J=27/2-25/2 b 37431.255 26.2(30) Yebes 40m This work ^2Π_3/2 J=29/2-27/2 a 40198.323 24.4(28) Yebes 40m This work ^2Π_3/2 J=29/2-27/2 b 40204.157 23.2(27) Yebes 40m This work ^2Π_3/2 J=31/2-29/2 a 42970.432 19.7(23) Yebes 40m This work ^2Π_3/2 J=31/2-29/2 b 42977.089 20.4(24) Yebes 40m This work ^2Π_3/2 J=33/2-31/2 a 45742.519 13.6(22) Yebes 40m This work ^2Π_3/2 J=33/2-31/2 b 45750.052 14.2(21) Yebes 40m This work ^2Π_3/2 J=35/2-33/2 a 48514.584 13.0(23) Yebes 40m This work ^2Π_3/2 J=35/2-33/2 b 48523.044 13.7(24) Yebes 
40m This work C_4H N=4-3 J=9/2-7/2 38049.654 470(48) Yebes 40m This work N=4-3 J=7/2-5/2 38088.461 356(36) Yebes 40m This work N=5-4 J=11/2-9/2 47566.792 439(50) Yebes 40m This work N=5-4 J=9/2-7/2 47605.496 352(36) Yebes 40m This work N=8-7 J=17/2-15/2 76117.439 375(38) IRAM 30m This work N=8-7 J=15/2-13/2 76156.028 337(35) IRAM 30m This work N=9-8 J=19/2-17/2 85634.010 272(27) IRAM 30m <cit.> N=9-8 J=17/2-15/2 85672.580 249(24) IRAM 30m <cit.> N=10-9 J=21/2-19/2 95150.393 157(15) IRAM 30m <cit.> N=10-9 J=19/2-17/2 95188.947 147(14) IRAM 30m <cit.> N=11-10 J=23/2-21/2 104666.568 110(10) IRAM 30m <cit.> N=11-10 J=21/2-19/2 104705.108 100(9) IRAM 30m <cit.> N=12-11 J=25/2-23/2 114182.523 64(6) IRAM 30m <cit.> N=12-11 J=23/2-21/2 114221.023 64(6) IRAM 30m <cit.> 6cL1495B C_6H ^2Π_3/2 J=13/2-11/2 a 18020.606 55(10) ^c GBT <cit.> ^2Π_3/2 J=13/2-11/2 b 18021.783 55(10) ^c GBT <cit.> ^2Π_3/2 J=21/2-19/2 a 29109.658 141.6(164) ^b GBT <cit.> ^2Π_3/2 J=23/2-21/2 a 31881.860 51.9(59) Yebes 40m This work ^2Π_3/2 J=23/2-21/2 b 31885.541 47.9(53) Yebes 40m This work ^2Π_3/2 J=25/2-23/2 a 34654.037 46.8(52) Yebes 40m This work ^2Π_3/2 J=27/2-25/2 a 37426.192 45.4(51) Yebes 40m This work ^2Π_3/2 J=27/2-25/2 b 37431.255 42.8(49) Yebes 40m This work ^2Π_3/2 J=29/2-27/2 a 40198.323 36.2(42) Yebes 40m This work ^2Π_3/2 J=29/2-27/2 b 40204.157 37.7(42) Yebes 40m This work ^2Π_3/2 J=31/2-29/2 a 42970.432 33.3(40) Yebes 40m This work ^2Π_3/2 J=31/2-29/2 b 42977.089 33.6(40) Yebes 40m This work ^2Π_3/2 J=33/2-31/2 a 45742.519 24.7(38) Yebes 40m This work ^2Π_3/2 J=33/2-31/2 b 45750.052 24.0(35) Yebes 40m This work ^2Π_3/2 J=35/2-33/2 a 48514.584 19.4(32) Yebes 40m This work ^2Π_3/2 J=35/2-33/2 b 48523.044 18.6(33) Yebes 40m This work 6cL1544 C_6H ^2Π_3/2 J=13/2-11/2 a 18020.606 51(11) GBT <cit.> ^2Π_3/2 J=13/2-11/2 b 18021.783 50(11) GBT <cit.> ^2Π_3/2 J=23/2-21/2 a 31881.860 23.8(36) Yebes 40m This work ^2Π_3/2 J=23/2-21/2 b 31885.541 30.0(44) Yebes 40m This work ^2Π_3/2 J=25/2-23/2 a 34654.037 25.7(39) Yebes 40m This work ^2Π_3/2 J=25/2-23/2 b 34658.383 31.6(48) Yebes 40m This work ^2Π_3/2 J=27/2-25/2 a 37426.192 23.3(36) Yebes 40m This work ^2Π_3/2 J=27/2-25/2 b 37431.255 19.9(34) Yebes 40m This work ^2Π_3/2 J=29/2-27/2 b 40204.157 18.0(31) Yebes 40m This work ^2Π_3/2 J=31/2-29/2 a 42970.432 13.6(26) Yebes 40m This work ^2Π_3/2 J=31/2-29/2 b 42977.089 12.1(23) Yebes 40m This work 6cL1521F C_6H ^2Π_3/2 J=13/2-11/2 a 18020.606 36(10) GBT <cit.> ^2Π_3/2 J=13/2-11/2 b 18021.783 26(9) GBT <cit.> 6cL1251A C_6H ^2Π_3/2 J=21/2-19/2 a 29109.658 36(8) GBT <cit.> ^2Π_3/2 J=21/2-19/2 b 29112.730 35(8) GBT <cit.> ^2Π_3/2 J=21/2-19/2 a 29109.658 43.6(65) ^b GBT <cit.> 6cL1512 C_6H ^2Π_3/2 J=13/2-11/2 a 18020.606 20(7) ^c GBT <cit.> ^2Π_3/2 J=13/2-11/2 b 18021.783 20(7) ^c GBT <cit.> ^2Π_3/2 J=21/2-19/2 a 29109.658 27(5) GBT <cit.> ^2Π_3/2 J=21/2-19/2 b 29112.730 28(5) GBT <cit.> ^2Π_3/2 J=21/2-19/2 a 29109.658 26.3(35) ^b GBT <cit.> 6cL1172 C_6H ^2Π_3/2 J=21/2-19/2 a 29109.658 41.1(57) ^b GBT <cit.> 6cL1389 C_6H ^2Π_3/2 J=13/2-11/2 a 18020.606 10(6) ^c GBT <cit.> ^2Π_3/2 J=13/2-11/2 b 18021.783 10(6) ^c GBT <cit.> ^2Π_3/2 J=21/2-19/2 a 29109.658 27.1(40) ^b GBT <cit.> 6cTMC-1 C C_6H ^2Π_3/2 J=21/2-19/2 a 29109.658 88.1(105) ^b GBT <cit.> ^a Unless otherwise stated, the intensity scale is antenna temperature (T_A^*). It can be converted to main beam brightness temperature (T_ mb) by dividing by B_ eff/F_ eff (see caption of Table <ref>. 
The error in ∫ T_A^* dv includes the contributions from the Gaussian fit and from calibration (assumed to be 10 %). ^b Intensity scale is T_ mb. ^c Intensity distributed equally among the two fine components. ^d Marginal detection.
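For convenience, the conversion quoted in note ^a can be written as a short sketch; the efficiency curves are the ones given above, and the example line (the C_6H^- 12-11 transition of TMC-1 CP at 33.044 GHz observed with the Yebes 40m) is taken from Table <ref>.

```python
import numpy as np

# Convert antenna temperature T_A* to main beam brightness temperature:
# T_mb = T_A* / (B_eff / F_eff), with the efficiencies quoted in note (a).
def t_mb(t_a_star, freq_ghz, telescope):
    if telescope == "Yebes40m":
        b_eff, f_eff = 0.797 * np.exp(-(freq_ghz / 71.1) ** 2), 0.97
    elif telescope == "IRAM30m":
        b_eff, f_eff = 0.871 * np.exp(-(freq_ghz / 359.0) ** 2), 0.95
    elif telescope == "GBT":
        b_eff, f_eff = 1.32 * 0.71 * np.exp(-(freq_ghz / 103.7) ** 2), 1.0
    else:
        raise ValueError(f"unknown telescope: {telescope}")
    return t_a_star * f_eff / b_eff

# peak T_A* = 22.3 mK at 33.044 GHz (Yebes 40m)  ->  T_mb ~ 34 mK
print(t_mb(22.3e-3, 33.044, "Yebes40m"))
```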
http://arxiv.org/abs/2307.08577v1
20230714145252
Dark matter detection using nuclear magnetization in magnet with hyperfine interaction
[ "So Chigusa", "Takeo Moroi", "Kazunori Nakayama", "Thanaporn Sichanugrist" ]
hep-ph
[ "hep-ph", "cond-mat.mes-hall", "hep-ex", "quant-ph" ]
July 2023 Dark matter detection using nuclear magnetization in magnet with hyperfine interaction So Chigusa^(a,b,e), Takeo Moroi^(c,e), Kazunori Nakayama^(d,e) and Thanaporn Sichanugrist^(c) ^(a) Theoretical Physics Group, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA ^(b) Berkeley Center for Theoretical Physics, Department of Physics, University of California, Berkeley, CA 94720, USA ^(c) Department of Physics, The University of Tokyo, Tokyo 113-0033, Japan ^(d) Department of Physics, Tohoku University, Sendai 980-8578, Japan ^(e) International Center for Quantum-field Measurement Systems for Studies of the Universe and Particles (QUP), KEK, 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan We consider the possibility to detect cosmic light dark matter (DM), i.e., axions and dark photons, of mass ∼10^-6 eV and ∼10^-4 eV, by magnetic excitation in a magnet with strong hyperfine interaction. In particular, we consider a canted anti-ferromagnet, MnCO3, as a concrete candidate material. With spin transfer between nuclear and electron spins allowed by the hyperfine interaction, nuclear spins become naturally highly polarized due to an effective (electron-spin-induced) magnetic field, and have long-range interactions with each other. The collective precession of nuclear spins, i.e., a nuclear magnon, can be generated by the DM field through the nucleon-DM interaction, while it is also sensitive to the electron-DM interaction through the electron-nuclear spin mixing. Compared to conventional nuclear-spin precession experiments, this system as a DM sensor is sensitive to higher frequencies while needing only a small applied static magnetic field. The system also has collective precession of electron spins, mixed with nuclear spins, as additional channels that can be used for DM probes. We estimate the sensitivity under appropriate readout setups such as an inductive pick-up loop associated with an LC resonant circuit, or a photon cavity with a photon counting device. We show that this method covers an unexplored parameter region of light bosonic DM. § INTRODUCTION Dark matter (DM) has been an unanswered mystery of particle physics for decades. Its existence, accounting for about 80% of the matter in the universe, has been vividly suggested and supported by both astrophysical and cosmological evidence (for a review, see, e.g., Ref. <cit.>). However, DM cannot be reasonably accounted for by particles already known in the standard model (SM), and its particle-physics properties are still unknown. Undoubtedly, DM would provide us with priceless hints for understanding physics beyond the standard model (BSM), even if we discover just a few secrets of its anatomy directly. Light bosonic particles with mass in the range 10^-22 eV–keV are possible candidates of DM. These include axions, pseudo Nambu–Goldstone bosons emerging from the Peccei–Quinn (PQ) symmetry breaking <cit.>, including axion-like particles (ALPs) motivated by string theory <cit.>, and dark photons <cit.>, spin-1 vector bosons kinetically mixing with ordinary photons. Generally, they have a small mass and a large number density, which causes them to behave effectively like a coherent classical field.
This type of DM is out of reach of the nuclear (electron) recoil experiments, e.g., CDMS <cit.>, XENON <cit.>, and PandaX <cit.>; these experiments have excellent sensitivity for ∼ GeV (MeV) DM, but for DM of mass below the GeV (MeV) scale the sensitivities degrade rapidly because the recoil energy is severely suppressed due to the smallness of the DM momentum. So far, constraints on sub-MeV and lighter DM are weak, and dedicated methods are required for its detection. Interestingly, physical excitations in condensed matter cover a uniquely wide range of excitation energies and are sensitive to energy depositions at the sub-eV scale. They also provide a rich variety of interactions with DM. The scattering and absorption of light DM with/via excitations in a solid have been studied and probed experimentally by, e.g., SENSEI <cit.>, QUAX <cit.> and CASPEr <cit.>, which utilize electronic states, collective electron spins, and nuclear spins in condensed matter, respectively. Various types of excitation are showing their potential: for example, electrons <cit.>, phonons <cit.>, electron spins or magnons <cit.>, electron states in topological materials <cit.>, and qubits <cit.>. In this paper, we consider the nuclear magnetization system with a strong hyperfine interaction <cit.>, which allows the mixing between electron spin and nuclear spin. At low temperature, nuclear spins are highly polarized due to the electron-spin-induced effective magnetic field of O(10) T. Besides, nuclear spins have a long-range interaction with each other via the so-called Suhl–Nakamura exchange interaction <cit.>, which ensures the existence of nuclear spin waves (or nuclear magnons in the quantized version <cit.>). The presence of DM can excite the mixed state between electron and nuclear spin precessions, resulting in macroscopically observable magnetization, which is enhanced by a factor of O(10)-O(10^3) compared to the typical nuclear magnetization signal. According to the eigenfrequency profiles of materials with a strong hyperfine interaction such as MnCO3 <cit.>, CsMnF3 <cit.>, CoCO3 <cit.> and FeBO3 <cit.>, they may probe ∼ μeV–meV DM with good sensitivity to various interaction parameters through either the electron-DM or the nucleon-DM coupling.[To achieve sensitivity in such a frequency range, a nuclear magnetic system without a strong hyperfine interaction would technically need an applied magnetic field of magnitude larger than 30 T for the experimental setup.] This kind of system also has electron-spin precession modes, mixed with nuclear spins, as additional channels that can be used for DM probes. In order to show the possibility of using magnetically-ordered material with strong hyperfine interaction for DM detection, we focus on the canted antiferromagnetic material MnCO3, which has a strong hyperfine interaction and a 100% abundant magnetic isotope, as a concrete example. We explore in detail its resonance profile and response to the DM field composed of axions or dark photons under sensible readout setups such as an inductive pick-up loop associated with an LC resonant circuit, or a photon cavity with a photon counting device. We show that a sizable signal from DM axions or dark photons can be expected.
We summarize the advantages of a magnetic system with hyperfine interaction as follows: * nuclear spins are naturally highly polarized, which leads to a large resonance signal * sensitivity at high DM mass is achievable with a small applied static field, compared to a nuclear magnetic system without hyperfine interaction * there are sensitivities to both electron-DM and nucleon-DM couplings * readout is enhanced or becomes possible by the mixing of electron spins with nuclear spins The structure of this paper is as follows. We start with a review of DM focusing on the two promising candidates, axions and dark photons, in Sec. <ref>. We elaborate on the properties of the material in Sec. <ref>: the magnetic material with strong hyperfine interaction, and the excitation of the hybridized precession state of electron and nuclear spins induced by DM. In Sec. <ref>, we show the sensitivity of light DM detection with several concrete proposals for the experimental setup. We conclude in Sec. <ref>. § DARK MATTER TARGET We consider two candidates of light bosonic DM, axions and dark photons, whose masses correspond to the energy that can cause magnetic resonance in a magnet with hyperfine interaction. Because their masses are small, their number densities are large such that they behave coherently as a classical field and may act as an oscillating magnetic field from the viewpoint of the spins of SM particles. If the coupling is strong enough, spins inside the magnetic material can be perturbed from the ground state and macroscopically produce an observable signal as oscillating magnetization. We review models of axions and dark photons, and describe the oscillating effective magnetic field induced by them. We illustrate in Sec. <ref> the response of the magnet excited by DM, based on MnCO3 as a concrete example material with strong hyperfine interaction. In this section, natural units are adopted. §.§ DM axion QCD axions and ALPs are candidates for the DM particle. QCD axions are pseudo Nambu–Goldstone bosons arising from the PQ symmetry breaking <cit.>. They are proposed to solve the strong CP problem. QCD axions interact with gluons, photons, and SM fermions with interaction strength determined by the axion mass. On the other hand, ALPs are a generalization of QCD axions; they are particles that interact with SM particles through a similar form of interactions but play no role in solving the strong CP problem and have no specific relation between the mass and the couplings to SM particles. We collectively refer to QCD axions and ALPs simply as “axions”. §.§.§ Axion model and effective Lagrangian Typically, the QCD axion is introduced as the Nambu-Goldstone boson from spontaneous symmetry breaking of a new U(1) symmetry, the so-called U(1) PQ symmetry, with a chiral anomaly associated with the color SU(3) symmetry and, in some models, also the electromagnetic symmetry. With this mechanism, called the PQ mechanism, the axion effective Lagrangian reads ℒ_a = (1/2)(∂_μ a)^2 - (1/2) m_a^2 a^2 + (1/4) g_aγγ a F_μνF̃^μν + (g_aNN/2m_N) ∂_μ a N̅γ^μγ_5 N + (g_aee/2m_e) ∂_μ a e̅γ^μγ_5 e, where a, N (≡ p,n), and e represent axion, nucleon (proton, neutron), and electron fields, respectively, m_a is the axion mass, and F_μν is the field strength tensor of photons. The coupling constants g_aγγ, g_aNN, and g_aee are model dependent and can be written in the form g_app=m_p c_ap/f_a, g_ann=m_n c_an/f_a, g_aee=m_e c_ae/f_a, showing the dependence on the axion decay constant f_a and model-dependent constants c_af with f=p,n,e.
The QCD axion mass m_a is related to the scale f_a by the relation <cit.>: m_a ≃ 5.7 × (10^12 GeV/f_a) μeV, which implies a model-dependent relation between the coupling constant g_aff and the axion mass m_a. As explicit examples of the QCD axion model, there are several famous ones adopted as building blocks of others. The main difference lies in how the anomaly of the U(1)_PQ symmetry is introduced. The model of Kim–Shifman–Vainshtein–Zakharov (KSVZ) type introduces heavy quarks 𝒬s which transform chirally under the PQ charge <cit.>. The other one is the Dine–Fischler–Srednicki–Zhitnitsky (DFSZ) type model, which contains two Higgs doublets responsible for the electroweak symmetry breaking and assigns the PQ charges to SM particles and to Higgs bosons <cit.>. For the KSVZ model of axions, the model-dependent parameters are c^KSVZ_ap=-0.47(3), c^KSVZ_an=-0.02(3), c^KSVZ_ae=0. For the DFSZ model of axions, c^DFSZ_ap= -0.182 -0.435 sin^2 β± 0.025, c^DFSZ_an=0.160 +0.414 sin^2β± 0.025, c^DFSZ_ae=1/3sin^2 β, where tanβ is the ratio of the vacuum expectation values of the two Higgs doublets. It should be noted that the value of the parameter β is constrained by the perturbativity of the Yukawa coupling as 0.28 <tanβ < 140 <cit.>. Recently, flavorful axion models have also been considered <cit.>, which also predict sizable axion-fermion couplings. On the other hand, ALPs in general are expected to possess the same Lagrangian as shown in Eq. (<ref>), while there is no relationship between the mass m_a and the coupling g_aff. §.§.§ Axion-induced magnetic field We now estimate the magnitude of the axion-induced effective magnetic field acting on spins of SM particles, assuming that all of the DM is composed of axions. Recall that the axion-nucleon interaction is given by the fourth term of Eq. (<ref>). In the non-relativistic limit, the interaction reduces to the form ℒ_aNN = (g_aNN/m_N)(∇⃗ a)·S⃗_N, with S⃗_N representing the nucleon spin. This shows us that the axion field acts as an effective magnetic field interacting with nuclear spins. Below, we estimate the amplitude of the effective magnetic field from the axion properties. We consider the axion DM in the mass range 10^-6 eV ≲ m_a ≲ 10^-4 eV. We adopt the standard cold DM velocity profile, in which the DM velocity v_DM is expected to be ∼ 10^-3 with a spread Δ v_DM of the same order. The de Broglie wavelength of axion DM, λ_DM = 2π/(m_a v_DM) ∼ O(0.01)-O(1) km, is much longer than the size of the magnetic material used for the experiment. The occupation number of the axion DM is also large due to the small mass, so we treat the axion DM as a classical field that interacts coherently with nuclear spins inside the magnetic material within the coherence time τ_DM = 2π/(m_a v_DM^2) ∼ O(0.01)-O(1) ms. We parameterize the axion DM classical field as a(x⃗,t)=a_0 sin (m_a t + m_a v_DM^2 t/2 - m_a v⃗_DM·x⃗ + δ), where a_0 is the oscillation amplitude, with which the energy density is given by ρ_a = (1/2) m_a^2 a_0^2, while δ is a random phase. We assume ρ_a=ρ_DM, where ρ_DM is the local DM density around the earth. In our numerical calculation, we take ρ_DM = 0.43 GeV cm^-3. We assume that the oscillation persists in its coherent phase for a time interval τ_DM, each interval connected with a discrete jump of v⃗_DM and δ. Due to the velocity distribution, the bandwidth of the field is given by Δω_DM = m_a v_DM^2 = 2π/τ_DM, which translates to a quality factor Q_DM = m_a/Δω_DM ∼ 10^6.
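As a rough numerical illustration of these relations, the following sketch evaluates, at the two ends of the mass range considered, the decay constant f_a implied by the QCD axion mass relation, the corresponding DFSZ electron coupling (assuming sin^2 β = 1, a choice made only for illustration), and the de Broglie wavelength, coherence time, and quality factor of the axion DM field for v_DM = 10^-3.

```python
import numpy as np

HBAR_EV_S = 6.582e-16    # hbar in eV s
HBARC_EV_M = 1.973e-7    # hbar*c in eV m
V_DM = 1.0e-3            # typical DM velocity in units of c

def axion_dm_figures(m_a_ev, sin2beta=1.0):
    """f_a, DFSZ g_aee, de Broglie wavelength (m), coherence time (s), and Q."""
    f_a_gev = 5.7e12 * (1.0e-6 / m_a_ev)             # m_a ~ 5.7 ueV (1e12 GeV / f_a)
    g_aee = (sin2beta / 3.0) * 0.511e-3 / f_a_gev    # g_aee = c_ae m_e / f_a
    lambda_dm = 2.0 * np.pi * HBARC_EV_M / (m_a_ev * V_DM)
    tau_dm = 2.0 * np.pi * HBAR_EV_S / (m_a_ev * V_DM**2)
    q_dm = 1.0 / V_DM**2
    return f_a_gev, g_aee, lambda_dm, tau_dm, q_dm

for m_a in (1.0e-6, 1.0e-4):
    f_a, g_aee, lam, tau, q = axion_dm_figures(m_a)
    print(f"m_a = {m_a:.0e} eV: f_a = {f_a:.1e} GeV, g_aee = {g_aee:.1e}, "
          f"lambda = {lam / 1e3:.2f} km, tau = {tau * 1e3:.2f} ms, Q = {q:.0e}")
```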
To consider the interaction between the nuclear spin I⃗ and the axion, we need to take into account the nucleon spin contribution to the nuclear spin. This is shown in detail in Appendix <ref>. Matching the nucleon spin-axion interaction to the interaction between the nuclear magnetic moment μ⃗_I and an arbitrary magnetic field h⃗ of the form ℒ_int=μ⃗_I ·h⃗, we can define the effective magnetic field induced by the axion, which is felt by the nuclear spin, as h⃗_n^axion = (1/γ_n)(g̃_aI/m_N)√(2ρ_DM) v⃗_DM sin(m_a t + δ), where g̃_aI ≡ g_appσ_p + g_annσ_n with σ_p,n denoting the spin contributions of the proton and the neutron to the nuclear spin, and γ_n ≡ g_I μ_N is the nuclear gyromagnetic ratio defined by the nuclear g-factor g_I and the nuclear magneton μ_N ≡ e/(2m_p). Note that the value of γ_n depends on the magnetic isotope, while the typical order of magnitude is O(10^7) s^-1 T^-1. On the other hand, the axion also interacts with an electron through the last term of Eq. (<ref>). Similarly to the case of the nuclear spin, we introduce the axion-induced effective magnetic field for the electron spin as h⃗_e^axion = (1/γ_e)(g_aee/m_e)√(2ρ_DM) v⃗_DM sin(m_a t + δ), where γ_e = 1.760 × 10^11 s^-1 T^-1 is the electron gyromagnetic ratio. As a result, we obtain the amplitude of the effective magnetic field as: h^axion_n = 4.0 × 10^-18 T × (g_aNN/10^-10) (ρ_DM/0.4 GeV cm^-3)^1/2 (v_DM/c/10^-3) × (5 × 10^7 s^-1 T^-1/γ_n) ((σ_p+σ_n)/0.5), h^axion_e = 4.2 × 10^-18 T × (g_aee/10^-10) (ρ_DM/0.4 GeV cm^-3)^1/2 (v_DM/c/10^-3), for g_app=g_ann=g_aNN. §.§ DM dark photon The dark photon is introduced as a new gauge boson of a new dark U(1) gauge symmetry in addition to the SM symmetry. Even when SM particles do not have dark U(1) charges, kinetic mixing between the dark photon and the SM photon becomes a portal for the SM particles to interact with dark photons. A massive dark photon that is light enough and interacts weakly with SM particles is a viable candidate of DM from the viewpoint of present observational constraints. See Refs. <cit.> for production mechanisms of dark photon DM. A general review of the dark photon is given in Ref. <cit.>, while a careful treatment of the dark photon polarization and its effect on actual experiments is discussed in Ref. <cit.>. §.§.§ Dark photon model We introduce the dark photon, a massive vector field (denoted as A_μ') which couples to the SM fields only through the kinetic mixing with the ordinary photon. The Lagrangian includes the following terms: ℒ∋1/2 m_γ' A'_μ A'^μ - 1/4(F'_μν )^2-1/4( ℱ_μν )^2 + ϵ/2 F_μν' ℱ^μν+e J_μ^EM𝒜^μ, where 𝒜_μ,A'_μ are the gauge fields associated with the electromagnetic U(1) symmetry and dark U(1) symmetry, respectively, ℱ_μν≡∂_μ𝒜_ν-∂_ν𝒜_μ and F'_μν≡∂_μ A'_ν-∂_ν A'_μ are the corresponding field strength tensors, while J^EM_μ is the electromagnetic current of the SM particles. In addition, ϵ is the kinetic mixing parameter that should be much smaller than unity. In this basis, there is no direct coupling between the dark photon and the SM fermions. We can also work in the basis in which the kinetic mixing of dark and ordinary photons vanishes. There is a mass eigenstate with zero mass, which corresponds to the ordinary photon as A_μ≡𝒜_μ - ϵ A'_μ. The Lagrangian can be expressed as ℒ∋1/2 m^2_γ'A'_μA'^μ - 1/4 ( F'_μν )^2 -1/4 ( F_μν )^2 +e J^EM_μ (A^μ + ϵA'^μ). Here and hereafter, we neglect terms of O(ϵ^2) because ϵ≪ 1. We can see that, in this basis, the dark photon A_μ' obeys the Proca Lagrangian with ϵ e J^EM_μA'^μ as a source term.
By taking the Lorenz gauge for the SM photon, the equations of motion are given by □ A_μ =J^EM_μ, ∂_μ A^μ=0, (□ +m^2_γ ') A'_μ=ϵ J_μ^EM, ∂_μA'^μ=0, with □=∂_t^2-∇⃗^2. §.§.§ Dark-photon-induced magnetic field Let us focus on the interaction between the dark photon and the SM particles in the basis of mass eigenstates: ℒ_int = eϵ J^EM_μ A'^μ. We assume the following form of the vector potential of the dark photon: A⃗'(t) = A⃗_0' sin(m_γ' t + m_γ' v_DM^2 t/2 - m_γ' v⃗_DM·x⃗ + δ). The spread of the amplitude in frequency space is also given by Eq. (<ref>) with the mass replaced by m_γ'. Then, assuming that the whole DM abundance is explained by the dark photon, we obtain ρ_DM=(1/2) m^2_γ'A⃗_0'^2. Similarly to the ordinary relation between the vector potential and the magnetic field, the effective magnetic field induced by the dark photon is expressed as h⃗^γ'(t) = ϵ∇⃗×A⃗'(t) = ϵ v_DM√(2ρ_DM) sin(m_γ' t + δ) v̂_DM×Â', where the hat symbol represents the unit vector pointing in the direction of the original vector. Numerically, we obtain the amplitude of the effective magnetic field as h^γ' = 9 × 10^-19 T × (ϵ/10^-10) (ρ_DM/0.4 GeV cm^-3)^1/2 (v_DM/c/10^-3) (sinφ/√(0.5)), where sinφ ≡ |v̂_DM×Â'|, with φ the angle between the DM velocity and the vector potential A⃗'. Note that the effective magnetic/electric field induced by the dark photon is significantly affected by the conductor shield around the experimental apparatus <cit.>. As a rough estimate, for the typical length scale of the shield L_ shield, the effective magnetic field (<ref>) should be multiplied by an additional factor of (m_γ' L_ shield v_ DM)^-1 for 1< m_γ' L_ shield < v^-1_ DM and by m_γ' L_ shield/v_ DM for m_γ' L_ shield < 1, which can either enhance or suppress the signal depending on m_γ' L_ shield. Note that v_DM∼ 10^-3. Numerically, the signal is suppressed when L_ shield ≲ 20 cm × (10^-6 eV/m_γ'), which could be avoided by using a reasonably large magnetic shield. On the other hand, thanks to the absence of a suppression factor v_DM in the shielding effect, the effective magnetic field (<ref>) can be enhanced by up to a factor of 10^3. In this paper, we do not discuss a real experimental apparatus including the shield and simply use Eq. (<ref>) as a conservative estimate. § MAGNETIZATION DYNAMICS IN MNCO3 The DM-induced magnetic field discussed in the previous section can cause magnetic resonance in the material and give an observable oscillating magnetization signal. We propose to use materials with strong hyperfine interaction that allow spin transfer between nuclear and electron spins for DM detection. The hyperfine interaction between electron and nuclear spins originates from the magnetic dipole interaction between them. The following properties are realized due to this interaction. Nuclear spins are highly polarized by the large effective magnetic field due to the electron spins even at low external static magnetic field. It provides large magnetic signals in the high-frequency region not easily accessible by other approaches using ordinary nuclear magnetization. Through the exchange interaction with electron spins, nuclear spins effectively obtain an exchange interaction among themselves called the Suhl–Nakamura interaction <cit.>, which realizes the nuclear spin wave modes (or nuclear magnons in the quantum picture).
In this mode dominated by the precession of nuclear spins, the electron spins with a larger gyromagnetic ratio also contribute to the total observable magnetization, enhancing the overall magnetization signal compared to nuclear magnetization alone. On the other hand, there is also an electron-spin-dominated mode mixed with nuclear spins. These modes are sensitive to both the DM-electron and the DM-nucleon interactions. As a concrete example, we focus on a canted anti-ferromagnetic material, MnCO3. We introduce the properties of MnCO3 in Sec. <ref>. Then, we illustrate the magnetic dynamics of MnCO3 in Sec. <ref> and its response to the DM coupled only to either the nuclear spin or the electron spin in Sec. <ref>. The calculation in the following sections is done in SI units. However, we simply omit the vacuum permeability factor μ_0 in the formulae for convenience.[ The formulae are then the same as those in CGS units except for the definition of susceptibility. ] The factor μ_0 is restored in Sec. <ref>. §.§ Introduction to MnCO3 MnCO3 is a canted anti-ferromagnetic material. It has large nuclear spins (with I=5/2) associated with the 100 % abundant magnetic isotope ^55Mn, which couple to localized electron spins (with S=5/2) through strong hyperfine interactions. The electron ground state configuration of ^55Mn is [Ar] 4s^2 3d^5 with multiplet ^6S_5/2. The MnCO3 lattice structure can be represented by the rhombohedral unit cell. The parameters for this representation are the cell edge length a_rh = 5.84 Å and the angle α = 47°. In the following, lattice directions such as [111] or [101̅] are to be understood as referring to the rhombohedral representation. For the lattice structure, see, e.g., Refs. <cit.>. The hard-axis anisotropy is present along the [111] lattice direction, forcing the electron spins, which are localized at each Mn ion, to lie in the basal plane (the (111) plane). Importantly, due to an exchange interaction between electron spins, they align anti-parallel to those on the nearest sites, forming an anti-ferromagnetic order below the Néel temperature (T^*≈ 35 K <cit.>). There are also weak anisotropic fields present in the basal plane. Besides, the Dzyaloshinskii–Moriya interaction produces the weak-ferromagnetic properties of MnCO3 in the basal plane. In general, the interaction between the nuclear spin I⃗ and the electron spin s⃗ originates from the dipolar interaction between them. It is given as follows <cit.>: H_hy = γ_e γ_n ħ^2 [ (l⃗-s⃗)·I⃗/r^3 + (3/r^5)(s⃗·r⃗)(I⃗·r⃗) + (8/3)(s⃗·I⃗)δ(r⃗) ], where γ_e and γ_n are the gyromagnetic ratios of the electron and the nucleus, respectively; l⃗ is the orbital angular momentum of the electron, and r⃗ is the position vector of the electron with the origin r⃗=0 at the nucleus position. In the MnCO3 case, the unpaired 3d electrons interact through the Coulomb interaction with the 2s electrons of opposite spin more efficiently than with the 2s electrons of the same spin. Thus spin polarization occurs at the core and is proportional to the magnitude of the 3d electron spins. The last term of Eq. (<ref>) gives the interaction between the spin-polarized core and the nuclear spins, which indicates an effective but strong interaction between the 3d electron spins and the nuclear spins of the form H_hy ∝ I⃗·S⃗, where S⃗ is the total electron spin associated with each Mn atom. In MnCO3, the nuclear spin is sensitive to the hyperfine field and becomes highly polarized, pointing in the direction correlated with the electron spin.
Besides, nuclear magnetic resonance occurs at a very high frequency ∼500 compared to that of typical nuclear spin precession. Under the presence of the strong hyperfine interaction and the exchange interaction of electron spins, there exists an effective exchange interaction between nuclear spins ensuring the existence of nuclear spin wave. On the other hand, the system can also be viewed as a hybrid system of nuclear and electron spins. Detailed dynamics of such a system and its response to the DM will be discussed in the following subsections. Materials with ^55Mn ions are frequently used in nuclear-spin-wave experiments. The reasons are the following: * ^55Mn magnetic isotopes have large localized nuclear and electron spins (I=5/2,S=5/2). * The electron spin wave modes have low eigenfrequency close to that of nuclear modes, leading to a large mixing between nuclear and electron spins.[Note that the 3d electrons of ^55Mn are all unpaired and fill each 3d shell, and hence the total orbital angular momentum of the electron spin associated with each atom is zero. This only leads to a weak magnetocrystalline anisotropy, which causes a small gap of the electron-spin system.] Other example materials with ^55Mn ions besides the canted antiferromagnet MnCO3[ MnCO3 has a weak-ferromagnetic property due to the Dzyaloshinkii–Moriya interaction, which forces the ground state spins to be canted even in the absence of an applied static magnetic field. Therefore, there is no need to worry about the phase transition from the antiferromagnetic phase to the spin-flop phase as in usual antiferromagnetic materials. ] <cit.>, include the hexagonal antiferromagnet CsMnF3 <cit.> with biaxial anisotropy, the antiferromagnet RbMnF3 <cit.> and KMnF3 <cit.> with cubic crystalline anisotropy, the antiferromagnet MnF2 <cit.> with uniaxial anisotropy, and the ferrimagnet MnFe2O4 <cit.>. Other materials are also studied in the context of nuclear spin waves: CoCO3 <cit.> since ^59Co isotope is 100% abundance, and FeBO3 <cit.> since its electron magnetic precession has low eigenfrequencies. Recently, Shiomi et al. <cit.> and Kikkawa et al. <cit.> reported experiments combining nuclear spin waves with spintronics in MnCO3, showing nuclear spin pumping effect and nuclear spin Seebeck effect of the system and establishing a new area of spin technology, nuclear spintronics. The spin transfer between electron and nuclear spins allows us to deal with nuclear spins easier through more-accessible electron spins. Here, for DM detection, it provides a unique probe for nucleon-DM and electron-DM interactions of the favorable frequency range, with a “nature" tool (electron spins and their magnetization) supporting signal readout. At the same time, the precession of electron spins, mixed with nuclear spins, is also sensitive to DM (specifically to electron-DM coupling) and we then include it in the discussion. Next, we move to the details of the magnetic system of MnCO3. §.§ Magnetic system of MnCO3 We discuss the (macroscopic) magnetization dynamics of MnCO3 within a classical theory, with some details shown in Appendix <ref>. We also show that the results are consistent with those derived in the quantum magnon picture in Appendix <ref>. We define M⃗_1 and M⃗_2 as electron magnetizations vectors and m⃗_1 and m⃗_2 as nuclear magnetization vectors, where the subscripts indicate sub-lattices they belong to. Magnetization is defined as the magnetic dipole moment per unit volume, and thus[The gyromagnetic ratio of electron is negative. 
However, we define γ_e and γ_n to be positive and hence the additional minus sign appears for electron magnetization.] M⃗_1=-γ_e ħ∑^lattice1_i S⃗_i/V, M⃗_2=-γ_e ħ∑^lattice2_j S⃗_j/V, m⃗_1=γ_n ħ∑^lattice1_i I⃗_i/V, m⃗_2=γ_n ħ∑^lattice2_j I⃗_j/V, where S⃗_i and I⃗_i are the electron-spin operator and nuclear-spin operator at the spin site i, respectively, V is the volume of the sample. The summations of spins run over the sub-lattice 1 and 2 for the corresponding magnetization vector. At the ground state, the magnitude of magnetization of each lattice is assumed to be M_0 and m_0 for electron magnetization and nuclear magnetization, respectively; they can be expressed by M_0=γ_e ħS/VN_total/2, m_0 =γ_n ħ⟨ I ⟩/VN_total/2, where N_total is the total number of Mn ions (which is equal to the number of spin sites), S=5/2 is the total value of electron spins localized at each Mn ion, and ⟨ I ⟩ is thermal average of nuclear spins localized at each Mn ion, which generally takes a smaller value than I=5/2 as we will argue later. In the anti-ferromagnetic phase, the potential per unit volume for magnetizations in MnCO3 is given by U= H_E/M_0M⃗_1 ·M⃗_2 + H_D/M_0{ M_1^x M_2^z- M_1^z M_2^x } + H_K/2M_0{ [ M_1^y ]^2+ [ M_2^y ]^2 } -H_K'/2M_0{ [ M_1^z ]^2+ [ M_2^z ]^2 } -(M⃗_1+M⃗_2) · (H⃗+h⃗_e(t)) - (m⃗_1 +m⃗_2) ·( H⃗+h⃗_n(t)) -A_hyM⃗_1 ·m⃗_1 -A_hyM⃗_2 ·m⃗_2. The subscripts x,y,z attached to the magnetization parameters refer to the component of the vector which points in those directions. The potential contains the (1) anti-ferromagnetic exchange interaction, (2) Dzyaloshinskii–Moriya interaction, (3) hard-axis anisotropy, (4) in-plane uniaxial anisotropy, (5) Zeeman effect, and (6) hyperfine interaction. The coefficients H_E, H_D, H_K, H_K' and A_hy correspond to constants associated with the exchange interaction, Dzyaloshinskii–Moriya interaction, hard-axis anisotropic effect, in-plane anisotropic effect, and hyperfine interaction, respectively. The coordinate setup is assumed such that the y-direction is the hard-axis corresponding to the [111] lattice direction. External magnetic fields are assumed to include an applied static field H⃗ pointing in the [101̅] lattice-direction (which we call x): H⃗=H_0e⃗_x, with e⃗_i being the unit vector pointing in the i-direction, an oscillating field h⃗_e coupling to electron spins, and an oscillating field h⃗_n coupling to nuclear spins. The latter two account for the exotic fields originating from DM. For convenience of later discussion, we define effective fields H_a≡ A_hy m_0, H_n≡ A_hyM_0, which come from hyperfine interactions and are felt by electron and nuclear spins in their ground state, respectively. In the ground state of this system, there are two sub-lattices of spins pointing in almost anti-parallel directions along the z-axis and in the basal plane (xz-plane) due to the in-plane and hard-axis anisotropic effects, respectively. The Dyaloshinskii–Moriya interaction and an applied magnetic field in the x-direction tilt the spins of two sub-lattices; the tilt angle is denoted as ψ. Then, the magnetic system shows effective ferromagnetism in the basal plane. The schematic picture of the ground state and effective fields are shown in Fig. <ref>. For convenience, we use magnetization parameters based on the coordinate systems tilted by angles ψ, π-ψ. 
For electron magnetization, we apply [ M_1^x; M_1^y; M_1^z ] = [ cosψ 0 sinψ; 0 1 0; -sinψ 0 cosψ ][ M_1^x_1; M_1^y_1; M_1^z_1 ] , [ M_2^x; M_2^y; M_2^z ] = [ -cosψ 0 sinψ; 0 1 0; -sinψ 0 -cosψ ][ M_2^x_2; M_2^y_2; M_2^z_2 ], and similarly, for nuclear magnetization, we apply [ m_1^x; m_1^y; m_1^z ] = [ cosψ 0 sinψ; 0 1 0; -sinψ 0 cosψ ][ m_1^x_1; m_1^y_1; m_1^z_1 ] , [ m_2^x; m_2^y; m_2^z ] = [ -cosψ 0 sinψ; 0 1 0; -sinψ 0 -cosψ ][ m_2^x_2; m_2^y_2; m_2^z_2 ]. In these tilted frames with sinψ=(H_0+H_D)/(2H_E+H_K'), the ground state expectation values are given by M_1^x_1,y_1=0, M_2^x_2,y_2=0 with M_1^z_1=M_2^z_2=M_0 and m_1^x_1,y_1=0,m_2^x_2,y_2=0 with m_1^z_1=m_2^z_2=m_0. The coordinate system adopted in the transformation is shown in Fig. <ref>. The polarization factor ⟨ I ⟩/I is expressed by the thermal average in the presence of the magnetic field H_n exerted on the nuclear spins: ⟨ I⟩/I= B_5/2( (5/2)γ_n ħ H_n/(k T)), with B_J (x)≡ ((2J+1)/(2J)) coth( ((2J+1)/(2J)) x ) - (1/(2J)) coth( x/(2J) ). We plot it in Fig. <ref>, which clearly shows the paramagnetic property of the nuclear spins. However, due to the large magnetic field H_n ∼60 through the hyperfine interaction, the polarization is naturally large without any other external field. This is one benefit of the strong hyperfine interaction. For concrete estimation, we adopt the temperature of the sample and of the nuclear spins to be T=0.1. In these coordinates, the hyperfine interaction (the last line of (<ref>)) can be written in the form U_hy= U_∥ +U_mix, where U_∥ =-A_hy ( M^z_1_1 m_1^z_1 + M_2^z_2 m_2^z_2), U_mix=-A_hy( M_1^x_1 m_1^x_1 + M_1^y_1 m_1^y_1 + M_2^x_2 m_2^x_2 + M_2^y_2 m_2^y_2). The term U_∥ represents the hyperfine interaction in the direction of spin alignment in the ground state, which makes the Larmor frequencies of both the nuclear and electron spin precessions higher. On the other hand, the term U_mix causes the mixing of the nuclear and electron spin precessions. In the presence of an effective oscillating magnetic field induced by DM, magnetic resonance of this system may occur. We derive the equations of motion for the magnetizations similarly to Refs. <cit.>. Under the potential U(M⃗_1,2,m⃗_1,2), the magnetizations M⃗_1,2 and m⃗_1,2 feel the effective magnetic fields determined by H⃗^M_1,2_eff≡ -∂ U/∂M⃗_1,2, H⃗^m_1,2_eff≡ -∂ U/∂m⃗_1,2, from which they receive torque and precess according to the equations of motion given by dM⃗_1,2/dt= - γ_e M⃗_1,2×H⃗^M_1,2_eff, dm⃗_1,2/dt= γ_n m⃗_1,2×H⃗^m_1,2_eff. We can linearize these equations by focusing on small perturbations around the ground state. We consider the precession of magnetization with a small deflection angle such that M_1^x_1,y_1≪ M_1^z_1 and M_2^x_2,y_2≪ M_2^z_2, and approximate M^z_1_1, M^z_2_2 to be M_0. Similarly, we consider a situation where the precession angle of the nuclear magnetization vector from the ground state is small such that m_1^x_1,y_1≪ m_1^z_1 and m_2^x_2,y_2≪ m_2^z_2, and approximate m^z_1_1, m^z_2_2 to be m_0. Defining “plus” and “minus” modes as M^α_±≡ M_1^α_1± M_2^α_2, m^α_±≡ m_1^α_1± m_2^α_2, with α=x,y,z, it turns out that the equations of motion for the magnetization vectors decouple into the + and - combinations, which are usually called the in-phase and out-phase modes, respectively.
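To make the size of the nuclear polarization concrete, the following short sketch evaluates ⟨ I ⟩/I from the Brillouin function above. The hyperfine field H_n is read here as ≈60 T, the value quoted in the text, while the ^55Mn gyromagnetic ratio γ_n/2π ≈ 10.5 MHz/T is an assumed representative number rather than the tabulated value actually used in the paper; the script is only an order-of-magnitude cross-check.

```python
import numpy as np

def brillouin(J, x):
    """Brillouin function B_J(x)."""
    a = (2 * J + 1) / (2 * J)
    return a / np.tanh(a * x) - (1 / (2 * J)) / np.tanh(x / (2 * J))

I       = 2.5                   # 55Mn nuclear spin
hbar    = 1.0546e-34            # J s
kB      = 1.3807e-23            # J / K
gamma_n = 2 * np.pi * 10.5e6    # rad/s/T, assumed 55Mn gyromagnetic ratio
H_n     = 60.0                  # T, hyperfine field on the nuclei (text value)
T       = 0.1                   # K, nuclear spin temperature

x = I * gamma_n * hbar * H_n / (kB * T)
print(f"<I>/I = {brillouin(I, x):.2f}")   # ~0.3: sizable polarization at 0.1 K without any applied field
```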
For electron magnetization, we obtain the equation of motion for the in-phase mode as -1/γ_e dM^x_+/dt = { 2H_E +H_K+H_K'+ H_a +H_D(H_0+H_D)/2H_E}M_+^y -H_n m_+^y -2M_0 h_e^y , -1/γ_e dM^y_+/dt = - [{2H_E (H_K'+ H_a) + H_0 (H_0 + H_D)}/2H_E ] M_+^x - 2M_0 h_e^z sinψ+H_n m_+^x , and for the out-phase mode as -1/γ_e dM^x_-/dt = { H_K+H_K'+H_a+H_D(H_0+H_D)/2H_E} M_-^y-H_n m_-^y , -1/γ_e dM^y_-/dt = -M^x_- {2H_E+H_K'+H_a+(2H_D-H_0)(H_D+H_0)/2H_E} +H_n m^x_- +(2M_0 -M_0 (H_0+H_D)^2/4H^2_E)h_e^x , where the magnetic field h^x,y,z_e is an exotic field interacting only with electron spins and oscillating in the x-, y-, or z-direction, respectively. Note that the M⃗_+ modes can only be excited by oscillating fields in the y- and z-directions. On the other hand, the M⃗_- modes can be excited only via the x-direction. For nuclear magnetization, the equations of motion for the in-phase mode are 1/γ_n dm^x_+/dt = H_n m^y_+ -2 m_0 h_n^y - H_a M_+^y, 1/γ_n dm^y_+/dt = H_a M^x_+ - H_n m^x_+ - 2m_0 h_n^z sinψ, and those for the out-phase mode are 1/γ_n dm^x_-/dt = H_n m^y_- - H_a M_-^y , 1/γ_n dm^y_-/dt = H_a M^x_- - H_n m^x_- + 2 m_0 h_n^x , which include an exotic oscillating field h^x,y,z_n that interacts only with nuclear spins and oscillates in the x-, y-, or z-direction, respectively. The eigensystem of the in-phase mode can be solved with the ansatz: M^x_+(t)=M^x_+ e^iω t, M^y_+(t)=M^y_+ e^iω t, m^x_+(t)=m^x_+ e^iω t, m^y_+(t)=m^y_+ e^iω t, in the absence of exotic fields (h_e,n=0). The problem is reduced to an ordinary eigenproblem. The detailed calculation is shown in Appendix <ref>. The same can be done for the out-phase mode. For convenience, we first give the eigenfrequencies of the in-phase/out-phase precession modes of the electron and nuclear magnetization without mixing between them (i.e., neglecting U_mix). The eigenfrequencies of the electron-magnetization system are ω_e,-=γ_e√(2H_E (H_K+H_K'+H_a) +H_D(H_0+H_D)), ω_e,+=γ_e √(2H_E(H_K'+H_a)+H_0(H_0+H_D)), and the eigenfrequency of the nuclear-magnetization system is ω_n=γ_n H_n. Note that the nuclear magnetization precessions have two degenerate modes (with angular frequency ω_n) in the absence of the mixing term U_mix. Once we take into account the mixing U_mix, we obtain the following. For the out-phase mode, the eigenfrequencies are ω_ẽ,-≈ω_e,-, ω_ñ,-≈ω_n[ 1- (⟨ I ⟩/S)(ω_nω_E/ω^2_e,-)] ; ω_E=γ_e H_E, which correspond to the eigenmodes dominated by electron and nuclear spins, respectively. For the in-phase mode, we can obtain a similar expression: ω_ẽ,+≈ω_e,+, ω_ñ,+≈ω_n[ 1- 2 (⟨ I ⟩/S)(ω_nω_E/ω^2_e,+)]^1/2 , corresponding to the electron- and nuclear-dominated modes, respectively. Note that the nuclear-dominated modes receive the so-called pulling effect through the hyperfine mixing between electron and nuclear spins, such that their eigenfrequencies are shifted by an amount depending on the mixing angle and strength. The value of the gyromagnetic ratio γ_n of ^55Mn is taken from Refs. <cit.>, while the values of the other parameters in the Hamiltonian are taken from Refs. <cit.>. We show them in Table <ref>. There is a hierarchy among the magnitudes of the effective fields, H_K', H_a ≪ H_0, H_D, H_K ≪ H_E, H_n, while H_E H_K', H_E H_a ∼ H_0^2,H_D^2, H_0 H_D. The eigenfrequencies of the magnetic precession in MnCO3 are plotted in Fig. <ref>. Because of the large difference between the eigenfrequencies of the out-phase modes of nuclear and electron magnetizations, the mixing angle between their precessions due to the hyperfine interaction is expected to be small.
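The unmixed and pulled eigenfrequencies above are straightforward to evaluate once the effective fields are specified. The sketch below codes the formulas just given with placeholder field values chosen only to be of the right order of magnitude; they are not the tabulated values used in the paper, so the printed numbers are merely illustrative of the size of the pulling effect.

```python
import numpy as np

gamma_e = 1.76e11      # rad/s/T, electron gyromagnetic ratio
gamma_n = 6.6e7        # rad/s/T, assumed 55Mn value
# Placeholder effective fields in tesla (illustrative only, not the paper's table values)
H_E, H_n = 30.0, 60.0
H_0, H_D, H_Kp, H_a = 1.0, 0.4, 1e-3, 1e-3
I_avg_over_S = 0.3     # <I>/S, of order the polarization estimated above

# In-phase electron mode and bare nuclear frequency (no mixing)
w_e_plus = gamma_e * np.sqrt(2 * H_E * (H_Kp + H_a) + H_0 * (H_0 + H_D))
w_n      = gamma_n * H_n
w_E      = gamma_e * H_E

# Frequency pulling of the nuclear-dominated in-phase mode
w_n_tilde = w_n * np.sqrt(1 - 2 * I_avg_over_S * w_n * w_E / w_e_plus**2)

print(f"omega_e,+ /2pi  = {w_e_plus / 2 / np.pi / 1e9:.1f} GHz")
print(f"omega_n   /2pi  = {w_n / 2 / np.pi / 1e6:.0f} MHz (unpulled)")
print(f"omega_n~,+/2pi  = {w_n_tilde / 2 / np.pi / 1e6:.0f} MHz (pulled)")
```

With these placeholders the nuclear-dominated mode is pulled down by roughly 10–20%, illustrating why the accessible nuclear band quoted later sits somewhat below the bare ^55Mn NMR frequency.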
On the other hand, the in-phase modes of nuclear and electron magnetizations, which have similar eigenfrequencies, significantly mix with each other. Therefore, from now we focus on and discuss only the in-phase modes and their response to the external oscillating field, while neglecting the out-phase modes. Note that the magnetic resonance of the system has a relatively high frequency compared to nuclear magnetic system without hyperfine interaction. For instance, ^3He needs a magnetic field of 30 to realize a magnetic resonance of the same frequency. §.§ Response to DM field As discussed in Sec. <ref>, when the entire DM is composed of axions, a magnetized sample is affected by oscillating magnetic fields h⃗_n=h⃗_n^axion and h⃗_e=h⃗_e^axion defined in Eqs. (<ref>) and (<ref>), respectively. It should be noted that, based on the nuclear shell model of nuclei with odd atomic number, the nuclear spin of MnCO3 mainly comes from the proton spin <cit.> and hence is sensitive to the axion-proton coupling g_app, not to the axion-neutron coupling g_ann. For simplicity, we take σ_p=0.1,σ_n=0.0 as a spin contribution from protons and neutrons for the numerical estimation of axion-induced magnetic field h⃗^axion_n. See Appendix <ref> for more detail. For the case of dark photon DM, we have instead h⃗_n=h⃗_e=h⃗^γ ' defined in (<ref>). Here we show the response of the system by evaluating the magnetization and power absorbed by the system, in Secs. <ref> and <ref>, respectively, when the DM mass is equal to the excitation energy of the magnetic material. §.§.§ Magnetization signal Now suppose the existence of a non-zero oscillating field from axion or dark-photon DM, h⃗_n,e∝sinω_DM t with ħω_DM=m_a,m_γ'. We can write the response in terms of susceptibility χ defined by the relation of total magnetization M⃗_total and external oscillating field h⃗_n and h⃗_e as M_total^α≡ M_1^α+M_2^α+m_1^α+m_2^α= ∑_β(χ_n^αβh_n^β+χ_e^αβh_e^β), where α,β=x,y,z denoting the direction of vectors M⃗_total,h⃗_n,h⃗_e or the corresponding component of the tensor χ. The magnetization susceptibility χ can be derived by solving the special solution of the equation of motion for magnetization with the initial condition representing the ground state of the magnetic system. The relaxation time of magnetization should also be taken into account. In the present assumption that MnCO3 sample is magnetized along the x-direction and that the direction of the oscillating field induced by DM is the y- or z-direction, the precession of total magnetization occurs in the yz-plane for the in-phase mode. We define a magnetization signal M_signal as the z-component of the total magnetization vector: M_signal≡ M^z_total =∑_α=y,z(χ^zα_n h_n^α+χ^zα_e h_e^α). Note that because the gyromagnetic ratio of the electron spin is much larger than that of the nuclear spin, a large part of magnetization of the sample is induced by electrons. One can then estimate M^z_total from M^z_total≃ M_1^z+M_2^z=M^x_+sinψ. The detailed calculation for signal susceptibility is shown in Appendix <ref>. When the DM mass is close to the precession frequency, in particular | ω_DM - ω_ñ,+| ≲ 1/T_2n or | ω_DM - ω_ẽ,+| ≲ 1/T_2e, the collective spins would be excited, where T_2n and T_2e are the relaxation times for the nuclear-dominated mode and electron-dominated mode, respectively. We assume that these relaxation times are much smaller than the coherence time τ_DM of the DM field (T_2e,2n≪τ_DM), which is actually the case for MnCO3. 
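For orientation on the size of the driving fields just introduced, the sketch below evaluates the dark-photon-induced field h^γ' defined earlier and the axion-induced field on the ^55Mn nuclei (the expression derived in the appendix on the axion–nuclear-spin coupling), for ϵ=10^-10, g_app=10^-10 and the σ_p=0.1, g_I=1.38 values adopted here. The restoration of SI units (tesla) is our own reading of the estimates, and the script is only an order-of-magnitude cross-check.

```python
import numpy as np

# Common DM inputs
rho_GeV_cm3 = 0.4            # local DM density in GeV/cm^3
v_dm        = 1e-3           # v_DM / c
mu0         = 4e-7 * np.pi   # vacuum permeability

# --- dark photon: h^gamma' = eps * v_DM * sqrt(2 rho_DM) * sin(phi) ---
eps     = 1e-10
sin_phi = np.sqrt(0.5)
rho_SI  = rho_GeV_cm3 * 1e9 * 1.602e-19 / 1e-6        # J/m^3
B_dm    = np.sqrt(2 * mu0 * rho_SI)                   # sqrt(2 rho_DM) as a B field, ~1.3e-5 T
h_gamma = eps * v_dm * B_dm * sin_phi
print(f"h^gamma'  ~ {h_gamma:.1e} T")                 # ~9e-19 T, as quoted in the text

# --- axion: h_n = g~_aI * v_DM * sqrt(2 rho_DM) / (g_I mu_N m_p), natural units ---
g_app, sigma_p, g_I = 1e-10, 0.1, 1.38
g_aI    = g_app * sigma_p
rho_eV4 = rho_GeV_cm3 * 1e9 * (1.973e-5) ** 3         # eV^4  (hbar*c = 1.973e-5 eV*cm)
muN_mp  = np.sqrt(4 * np.pi / 137.04) / 2             # mu_N * m_p = e/2 (Heaviside-Lorentz)
h_axion = g_aI * v_dm * np.sqrt(2 * rho_eV4) / (g_I * muN_mp) / 195.35   # 1 T ~ 195.35 eV^2
print(f"h_n^axion ~ {h_axion:.1e} T")                 # a few x 1e-19 T for g_app = 1e-10
```

Both fields come out around the 1e-18 T normalization used for the signal estimates below.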
We assume the values of relaxation time at T=0.1 as T_2n=1 and T_2e=1 <cit.>. Within DM coherence time t ≲τ_DM, the magnetic field from DM is a coherent driving field, so the magnetization signal rotates coherently with frequency ω_DM (whereas its amplitude is determined by the relaxation time T_2e,2n). The spread Δω_signal in frequency space of the signal is determined by that of DM spectrum: Δω_signal= Δω_DM=2π/τ_DM. The susceptibility on resonance is shown in Tables <ref> and <ref>. The star symbols represent the sensitive channels for probing the listed coupling parameters of DM. We introduce a parameter η defined by η≡ H_n (H_0+H_D)γ_e^2/ω_e,+^2, which is an enhancement factor of signal magnetization compared to nuclear magnetization, in addition to the effect that nuclear spins are highly polarized by the hyperfine interaction with electron spins. Note that η→ 0 for sinψ→ 0 (see Eq. (<ref>)). At T=0.1, we obtain η≈ 50. For example, at H_0= 1 corresponding to ω_ñ,+/2π≈500 and ω_ẽ,+/2π≈40, we obtain numerically for sensitive channels: χ^zy_n(ω_DM≈ω_ñ,+) = 0.2 ×( T_2n/1), χ^zy_n(ω_DM≈ω_ẽ,+) = 0.2 × 10^-3×( T_2e/1), χ^zz_e(ω_DM≈ω_ñ,+) = 10 ×( T_2n/1), χ^zy,zz_e(ω_DM≈ω_ẽ,+) = 2 ×( T_2e/1). The order of magnitude does not change around the frequency range of interest (corresponding to H_0 ∼ 0.5–2) where the mixing between precessions of electron spins and nuclear spins remains to be large. (See, e.g., Fig. <ref> for the relation between H_0 and the eigenfrequency of the system.) The y-direction is the most sensitive to the DM-nuclear interaction, while only the z- or both y- and z-directions are to the DM-electron interaction depending on which mode is excited. We define the angle between the sensitive direction and the polarization direction of the DM-induced magnetic field by a parameter θ. To account for the unknown polarization (equivalently the unknown direction of the velocity of DM), we perform a substitution cos^2 θ→ 1/3 if the system is sensitive to a single polarization direction, or by cos^2 θ→ 2/3 if it is sensitive to any polarization direction in a plane <cit.>. §.§.§ Power absorbed to the magnetized sample The absorbed power into the material can be estimated by the relation P_absorb=V ·∑_α=y,z( dm^α/dth^α_n + dM^α/dth^α_e), where V is the sample volume. In Tables <ref> and <ref>, we show the time-averaged power absorbed under the resonance condition for each channel. For example, at H_0=1 corresponding to ω_ñ,+/2π≈500 and ω_ẽ,+/2π≈40, numerically, we obtain for the most sensitive y-direction of the DM-nucleon interaction (h_n^y≠0): P_absorb (ω_DM∼ω_ñ,+)=2.2e-21×( T_2n/1) (h^y_n/e-18)^2 ( W_MnCO_3/1.5), P_absorb (ω_DM∼ω_ẽ,+)=7.8e-25×( T_2e/1)(h^y_n/e-18)^2 ( W_MnCO_3/1.5), where W_MnCO_3 is the total mass of the MnCO3 sample. Also, at H_0=1, we obtain for the y- and z-directions of the DM-electron interaction (h_e^y,z≠0): P_absorb (ω_DM∼ω_ñ,+)=6.0e-18×( T_2n/1) (h^y_e/e-18)^2 ( W_MnCO_3/1.5), P_absorb (ω_DM∼ω_ẽ,+)=8.7e-17×( T_2e/1) (h^y,z_e/e-18)^2 ( W_MnCO_3/1.5). Within the frequency range of interest (corresponding to H_0 ∼ 0.5–2), magnitude of absorption power only slightly changes. In Sec. <ref>, we suggest experimental ways to detect the signal through a magnetic system with the hyperfine interaction. We estimate the sensitivity of the method based on the magnetization signal and power absorbed into the system, which we compare with power of relevant noises. 
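As a sanity check of the enhancement factor η ≈ 50 quoted above, the snippet below evaluates its definition with illustrative field values: H_n ≈ 60 T and H_0 = 1 T are taken from the text, while H_D and the electron-mode frequency are placeholders of the right order rather than the paper's tabulated values.

```python
import numpy as np

gamma_e  = 1.76e11           # rad/s/T, electron gyromagnetic ratio
H_n      = 60.0              # T, hyperfine field acting on the nuclei (text value)
H_0, H_D = 1.0, 0.4          # T, applied field and a placeholder Dzyaloshinskii-Moriya field
w_e_plus = 2 * np.pi * 40e9  # rad/s, electron-dominated in-phase frequency at H_0 = 1 T

eta = H_n * (H_0 + H_D) * gamma_e**2 / w_e_plus**2
print(f"eta ~ {eta:.0f}")    # ~40, i.e. of the same order as the eta ~ 50 quoted in the text
```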
§ DARK MATTER DETECTION WITH NUCLEAR MAGNETIC EXCITATION In this section, we present the sensitivity of DM detection by excitation of hybridized spin precession modes under a low-temperature environment T=0.1. Our basic strategy is as follows. We can set an eigenfrequency of the magnetic system to a desired value by tuning the applied magnetic field H_0. When the DM mass is close to the eigenfrequency, collective motion of spins can be excited, since the field of DM axions or DM dark photons oscillates with the frequency ω_DM≃ m_a,γ'/ħ. Then, a wide DM mass range can be searched for by sweeping H_0. The probe range is divided into two much different scales corresponding to the two bands of the nuclear- and electron-dominated modes in MnCO3. We discuss them in Secs. <ref> and <ref>, respectively. §.§ Sensitivity at the nuclear resonance frequency We consider the LC resonant circuit with a pick-up loop similar to Ref <cit.>, combined with the microstrip SQUID amplifier[ In the frequency range of interest, the dc-SQUID is not favorable as a magnetometer because there is a severe parasitic capacitance between dc-SQUID washer and the input coil. Detection with a reactive (dissipationless) ac-SQUID and the microwave resonator <cit.> is proposed to detect dark photons in the frequency range from 10 to 1 in <cit.>. Putting the input coil inside the hole in SQUID washer might cure the parasitic capacitance problem for the dc-SQUID as well <cit.>. Using a microstrip-coupled dc-SQUID as the amplifier is also a way to deal with this issue; we pursue this possibility in this work. ]  <cit.> to detect magnetization induced by DM. It has flexibility to tune the resonance frequency in the range of interest and the high quality factor can be achieved. Before going into the details of the measurement method and setup, let us consider the frequency range of interest and the quality factor of the resonator. At T=0.1, the spectrum of the nuclear-dominant magnetization precession of MnCO3 covers a high frequency range from O(10) to ∼600 (see Fig. <ref>). For a frequency range ω / 2 π≳500, the observed intrinsic relaxation time T_2n from spin echoes is of order O(10) at a low temperature O(1), while below 500 the relaxation time T_2n drops rapidly and the signal from DM might be suppressed (see, e.g., Refs. <cit.>). Here, for simplicity we focus on a frequencty range 500–600, which can be covered by sweeping the static field H_0 from 0.7 to 2 (see Fig. <ref>), while assuming constant T_2n=1. An important quantity for quantifying the signal and noise in the resonant approach is the quality factor Q_r of the circuit, which is related to the circuit bandwidth Δω_r and resonant frequency ω as Δω_r=ω/Q_r. We desire the value of Q_r as high as possible up to Q_DM=10^6 so that the signal is highly resonant while the circuit bandwidth Δω_r still covers bandwidth Δω_DM of DM signal. Here, in numerical calculation we assume Q_r=10^5, while a higher value such as Q_r∼ 10^6 might also be achieved by the LC circuit, which is discussed in Ref. <cit.>. The following steps can be used to scan DM masses within the frequency range ω / 2 π=500–600: * Match eigenfrequencies of the magnetic system and the resonant circuit to the frequency ω of interest at the same time. The former can be tuned by changing the external static field H_0, while the latter by changing the values of resistance and capacitance of the circuit. * Measure the magnetization signal from the magnetic material MnCO_3 for an interrogation time Δ t. 
* Shift the frequency ω by an interval Δω determined by Eq. (<ref>) after each step of the signal measurement until the frequency range of interest is fully covered. The interrogation time Δ t is determined by the total observation time T_total and the number of scan steps determined as a function of Q_r. Now we move to the discussion of experimental setup. We consider picking up the signal inductively by a pick-up loop L that is in parallel with the resistance R and capacitor C, while the resonant circuit is capacitively coupled to the amplifier through the capacitor C_c (Fig. <ref>). When DM induces a magnetic excitation in the magnetized sample, the oscillation of magnetization produces an oscillating flux through the pick-up loop, which generates a voltage V_p that can be detected through the load resistance R_load associated with the amplifier. Let us estimate the voltage over the load resistance R_load (which is the input voltage of the amplifier) so that we can compare the power due to the magnetization signal with that of the thermal noise to derive the signal-to-noise ratio (SNR). We apply the Thevenin theorem to simplify the task. For convenience, the Thevenin equivalent circuit is illustrated in Fig. <ref> (c). The Thevenin impedance Z(ω) is the impedance of the circuit when we look from the terminal of R_load with V_p neglected. It is given by Z(ω)= [ (1/iω L)+(1/R)+ iω C ]^-1 +1/(iω C_c). The parameters are to be chosen such that the impedance is matched to the amplifier impedance R_load=50 at resonance. Requiring Z=Z_0=50 at an arbitrary resonance frequency ω_0, we can choose <cit.> R=Q_0 ω_0 L, C=(1/ω_0^2 L)(1- (1/Q_0)√((R-Z_0)/Z_0)), C_c=(1/ω_0)√(1/(Z_0(R-Z_0))), where Q_0 is the unloaded quality factor (or, equivalently, the quality factor of the RLC circuit without C_c and R_load). Since the impedance is matched, the loaded quality factor Q_r is determined to be Q_r=Q_0/2 <cit.>. On the other hand, according to the Thevenin theorem, the equivalent voltage V_s is equal to the voltage between terminals of R_load when it is replaced by open terminals (the unloaded probe). Equivalently, this is the case when we take the limit of R_load→∞. Therefore, we obtain the Thevenin equivalent voltage V_s in the frequency space as Ṽ_s(ω) = Ṽ_p (1/R+ iω C)^-1/[ i ω L+(1/R+ iω C)^-1] = -i ωΦ̃_p (ω) (1/R+ iω C)^-1/[ i ω L+(1/R+ iω C)^-1] ≈ -iωΦ̃_p(ω) √(Q_0 Z_0/(ω_0 L)), where Q_0 ω_0 L ≫ Z_0 is assumed, and Φ_p is the flux signal at the pick-up coil induced by the transverse magnetization M_signal of the sample. The tilde symbol represents the value in the frequency space. One can derive the flux signal Φ_p from the Faraday induction law <cit.> and the reciprocity theorem: Φ_p=∫_V dV M_signalβ, where β is the rate of flux induction from one unit current. With the geometry of a one-turn pick-up loop with the sample at the center, we obtain Φ_p= M_signal V (μ_0/4π)( 2π a^2/(a^2+d^2)^3/2) ≈ M_signal V μ_0/(2 a) ( when d→ 0) , where a is the loop radius, d the distance of the sample from the loop and V the sample volume. Note that the sample volume V is limited by the size of the loop. Assuming a sample of cylindrical shape shrunk by ratio ν from a cylinder of radius a and height 2a, the sample volume is given by V= νπ a^2 (2a). We arrive at the Thevenin equivalent circuit with V_s given by Eq. (<ref>) and impedance Z(ω) given by Eq. (<ref>), whose value on resonance is set to R_load. Since the impedance is matched, half of the voltage V_s is applied to the input amplifier. (See Fig.
<ref> (c) for the Thevenin equivalent circuit.) We consider the Johnson-Nyquist thermal noise from the resonator and amplifier with voltage Ṽ_noise=k_B(T+T_a)Z_0 at the input amplifier, where T_a is the noise temperature of the amplifier, T the resonator temperature, and k_B the Boltzmann constant. Combining contributions from the DM signal and noise, we obtain the power density at the input amplifier as P̃(ω)=P̃_s(ω ) + P̃_noise (ω), P̃_s(ω ) = 1/4Ṽ^2_s (ω)/Z_0, P̃_noise (ω)=k_B(T+T_a). At the temperature T=0.1, with the microstrip SQUID amplifier tuned for a frequency ∼ O(100), combined with the hetero-structure field-effect transistors amplifier <cit.>, the noise temperature of the amplifier is less than T_a=0.1. The SNR is estimated from the Dicke radiometer equation <cit.> with Eqs. (<ref>) and (<ref>). We obtain SNR= P_s/P̃_noise (ω) Δ f/ √(Δ f Δ t ) =1/2( μ_0 M_signalνπ a^2 )^2 ω_0 Q_0/ L1/ 4k(T+T_a) √(Δ t/Δ f), where Δ f is the bandwidth, Δ t is the observation time at each frequency or the interrogation time mentioned above, and the overall coefficient 1/2 is added to account for the time average. By requiring SNR≥ 1, we obtain the expected sensitivity for the coupling constant between DM and nucleons or electrons. Here we assume that the material is placed in the proper direction to read out M_signal ≡ M^z_total, and is appropriately shielded from unwanted perturbations. The sensitivities based on sensitive channels of MnCO_3 discussed in Sec. <ref>, along with the present constraints of the relevant DM parameters, are shown in Figs. <ref>, <ref>, and <ref>. We have taken T=T_a=0.1, the pick-up coil inductance L ≈ 0.45 × (a/5) μ H, ν=0.5, a=5 and the total observation time T_total=1 year. Note that ν=0.5 and a=5 cm correspond to the total MnCO3 mass of 1.5. The quality factor Q_r is assumed to be 10^5. Although the quality factor of circuit might be affected by the relaxation time of the magnet through the coupling between the circuit and magnet, we neglect this effect just for simplicity. The interrogation time at each scan step is Δ t=1.7e3. The relaxation time T_2n is assumed to be 1 for a realistic setup and, in addition, 10 for an optimistic one. The latter choice corresponds to a situation where the (effective) magnetic relaxation time is longer thanks to the reduced inhomogeneity in material and applied field. For the dark photon DM detection, the observed flux at the pick-up loop is due to both the dark photon field itself h^γ' and the field generated by the magnetization M_signal. Since typically χ_e ≳ 1, the pick-up magnetic flux caused by M_signal is larger than that of h^γ'; thus, the sensitivity to ϵ is derived taking into account only the contribution of M_signal, neglecting that of h^γ'. §.§ Sensitivity at the electron resonance frequency The material MnCO3 is also sensitive to another frequency range of ω / 2 π≳10. It is due to excitation of the electron-dominated mode (see Fig. <ref>). However, because of the hyperfine interaction, it also contains a nuclear-spin component, and hence we also expect a sensitivity to g_app. Here we consider the frequency range ω_DM/2π≈ω_ẽ,+ /2π = 45–55. The relaxation time of this mode is mainly determined by that of the electron spin which is found to be T_2e≈ O(0.1) at T=4.2 <cit.> and T_2e≈ O(1) at T=1.5 <cit.>. We use T_2e=1 for estimating the sensitivity at T=0.1 in the following. Our strategy is to use a microwave cavity strongly coupled to the magnetization of the material <cit.>. 
When the DM mass is the same as the excitation energy of the homogeneous magnetic precession, the collective motion of spins is resonantly excited. With the setup of the microwave cavity strongly coupled to the magnetization of the material, half of the energy of the magnetic resonance is transferred to the excitation of cavity photons which can be detected as the signal. The readout can be done by a linear amplifier or photon counter, which are discussed in subsections <ref> and <ref>, respectively. However, our aim is to utilize the latter, a photon counter, to avoid the quantum limit that significantly suppresses the sensitivity at the frequency of interest when we use a linear amplifier <cit.>. The microwave cavity frequency should be tuned to the electron precession frequency, realizing a strong coupling between the magnet and the cavity. The overall relaxation time of a single mode of this coupled system is given by T_coupled=2 ( 1/T_2e + 1/τ_cav)^-1≈ 2 T_2e=2, where τ_cav is the cavity lifetime, which we assume to be O(1) as in Ref. <cit.>. The strong-coupling between the magnet and the cavity affects the signal calculation discussed in Sec. <ref> in several ways. Firstly, the relaxation time scale T_2e should be replaced by the coupled scale: T_2e→ T_coupled. Secondly, the interaction strength between a coupled mode and the DM field is reduced by a factor √(2) due to the maximal mixing, which results in 50% reduction of power absorbed to the material when only a single mode is excited[However, with T_coupled≈ 2 T_2e, the two mentioned contributions cancel with each other, giving the same absorption power P_absorb as originally derived in Sec. <ref>.]. Finally, the coupled relaxation time T_coupled also determines the bandwidth of the magnon-cavity coupled mode: Δω=2/T_coupled≈ 1/T_2e. Similarly to the case of the nuclear-frequency range, the following steps can be used to scan the DM mass over the frequency range ω / 2 π=45–55: * Match eigenfrequencies of the magnetic system and the cavity photon to the frequency ω of interest at the same time, realizing the strong coupling between them. The former can be tuned by changing the external static field H_0, while the latter by changing the size of the cavity. * Measure the power of cavity photons induced by the magnetic material MnCO_3 for an interrogation time Δ t. * Shift the frequency ω by an interval Δω=2/T_coupled after each step of the signal measurement until the frequency range of interest is fully covered. The interrogation time Δ t is determined by the total observation time T_total and the number of scan steps, which is fixed from Δω and the whole frequency range of interest. §.§.§ Microwave cavity with linear amplifier Using a linear amplifier for probing ω_DM/2π∼50, the effective temperature of the quantum noise is T_Q=ħω_DM/k_B ≃2.3. Therefore, under an experimental setup with the temperature T=0.1, we assume the quantum noise dominates over others. Let us assume that the resonance condition is satisfied: ω_DM≈ω_ẽ,+. Input power is discussed in Sec. <ref> with some subtle care from the magnet-cavity mixing mentioned below Eq. (<ref>). In the case of the strong coupling where half of power is transferred to the cavity <cit.>, we have P_out= P_absorb/2. The following is output power calculated at ω_DM/2π=48. 
For detecting the axion-induced field through the axion-proton coupling g_app and axion-electron coupling g_aee, χ_n^zy(ω_ẽ,+) and χ^zy,zz_e(ω_ẽ,+) are the most important, respectively; the corresponding output powers are P^g_app_out= 3.2e-32×( W_MnCO_3/1.5) ( T_2e/1) ( g_app/10^-10)^2 ( v_DM/c/10^-3)^2 ( cos^2 θ/1/3), and P^g_aee_out= 7.7e-26×( W_MnCO_3/1.5) ( T_2e/1) ( g_aee/10^-12)^2 ( v_DM/c/10^-3)^2 ( cos^2 θ/2/3). In the case of dark photon DM, the dominant channels are χ^zy,zz_e(ω_ẽ,+), and we obtain output power P^ϵ_out= 3.5e-27×( W_MnCO_3/1.5) ( T_2e/1) ( ϵ/10^-12)^2 ( v_DM/c/10^-3)^2 ( cos^2 θ/2/3) ( sin^2 φ/1/2). On the other hand, power of the quantum noise is P_noise=ħω_DM√(Δ f/Δ t)≃5.5e-22, with Δ f ≈0.16 and Δ t=5e5, corresponding to the case that T_2e=1 and T_total= 1 year. We can see that this method is not sensitive enough for detecting the DM signal, suffering from the quantum noise from the linear amplifier. §.§.§ Microwave cavity with photon counter Instead of using a linear amplifier, we may utilize a single photon counter avoiding the standard quantum limit <cit.>. In this case, the only relevant noise is the shot noise of the cavity photon of frequency ω=ω_DM, whose rate is determined by the effective relaxation time of the coupled system T_coupled and the cavity temperature T_c as <cit.> R_th= 2/T_coupled/e^ħω /kT_c-1. In this setup, we define the SNR as SNR≡R_signalΔ t/√(R_thΔ t). The constraints on DM couplings are given by requiring SNR≳ 1. The output power P_out transferring to the cavity photon system is related to the excitation rate R_signal (of spin-flip-induced photons) by the relation R_signal≡ P_out/ħω_DM. The sensitivities obtained for ω/2π=45–55, based on the sensitive channels of MnCO_3 discussed in Sec. <ref> are shown in Fig. <ref>, <ref> and <ref>, together with the relevant constraints. We assume the relaxation time T_2e=1 for a realistic setup. In addition, we add the case with T_2e=10, illustrating the optimistic situation when inhomogeneity in the material and the applied field could be reduced and hence the effective magnetic relaxation time is increased. The interrogation time at each scan step is Δ t=5e5 and Δ t=5e4 for the two cases, respectively. Again, the volume of MnCO_3 is assumed to be given by Eq. (<ref>) with ν=0.5 and a=5, corresponding to the total MnCO_3 mass of 1.5. § CONCLUSION AND DISCUSSION We have discussed a possibility of detecting axion or dark photon DM using nuclear magnetization in a magnet, which has a strong hyperfine coupling between electron and nuclear spins. We focus on the canted antiferromagnet MnCO_3 as a concrete example of such a magnet. Both axion and dark photon DMs may interact with nuclear and electron spins and exert (effective) oscillating magnetic fields on them. This can induce magnetic resonance when the DM mass is equal to the excitation energy of the magnet. Due to strong magnetic fields generated by electron spins through the hyperfine interaction, nuclear spins are highly polarized and can give a sizable resonance signal under relatively low external magnetic field. There is also an electron magnetic resonance, which is also sensitive to DM. Due to the hyperfine interaction, the nuclear and electron resonance modes are mixed with each other. Specifically in MnCO_3, both modes are sensitive to frequencies much higher than typical values achieved by ordinary nuclear spin precession experiments. 
Other materials such as CsMnF3 <cit.>, CoCO3 <cit.>, FeBO3 <cit.>, and Nd_2CuO_4 <cit.> are expected to share the same properties. We investigated the observable magnetization and energy stored in the magnetic material due to the DM-induced oscillating magnetic field. The details of measurement setup and strategy are discussed in Sec. <ref> for two frequency ranges: the nuclear frequency ω_DM / 2 π∼ 500–600 corresponding to m_DM∼e-6eV (Sec. <ref>) and the electron frequency ω_DM / 2 π∼ 45–55 corresponding to m_DM∼e-4eV (Sec. <ref>). As detection schemes, a pick-up loop with LC resonant circuit and cavity photons coupled with the spin systems are considered for the former and latter channels, respectively. The sensitivity for the 1-year measurement with 1.5 of MnCO_3 under temperature 0.1 is shown in Figs. <ref>, <ref>, and <ref>. We summarize the results in the following. * This system is sensitive to the axion-proton interaction. The axion of mass ∼e-6eV with g_app≳ 10^-11 could be probed by our method through the nuclear-dominated mode. So far it is hardly reached by other experiments. For example, a nuclear-precession experiment like CASPEr is sensitive to lower mass regions due to the limitation of the magnitude of the external magnetic field. We have also examined the electron magnetic resonance excited through the mixing with the nuclear-spins, which indeed has a sensitivity to the nucleon-DM interaction at the mass scale ∼e-4eV or O(10). However, it turns out that this electron-dominated mode is not sensitive enough to constrain the axion parameter space beyond astrophysical constraints, because the signal is suppressed by the short relaxation scale T_2e of the material. Other materials with longer relaxation times should be able to achieve better sensitivity, which is left as a future work. * The system is also sensitive to the axion-electron interaction assuming an optimistic magnetic relaxation time achieved by small inhomogeneity of the material and the applied magnetic field. The nuclear-dominated mode can be used to explore the axion parameter region with g_aee≈10^-14 and m_a ∼e-6eV. This region still survives the astrophysical constraints from stellar evolution such as white dwarf luminosity function and the tip of red giant branch. In addition, by using the electron magnetic resonance mixed with nuclear spins, we can search for the axion mass of e-4eV with parameter g_aee∼ 10^-13. * For the kinetic mixing parameter ϵ of the dark photon, there are already haloscope experiments that are searching for the mass range m_γ'∼e-6eV corresponding to the frequency regions to which the nuclear-dominated mode is sensitive. Even though we can only probe the already scanned regions with our method, a complementary check of such regions is plausible for ϵ∼ 10^-13. In addition, with the electron magnetic resonance mixed with nuclear spins, we might be able to probe the parameter region down to ϵ∼ 10^-13 at m_γ'∼e-4eV, which is beyond the current experimental limits and cosmological constraints. As shown above, DM detection with a magnet with a strong hyperfine interaction provides possibilities to detect both axion DM and dark photon DM. The unique feature of DM detection via the proposed system is the allowed spin transfer between electron and nuclear spins, which provides several available channels and supports the readout of the nuclear signal relying on more accessible electron spins. Nevertheless, further study is still needed to investigate its real potential and improvability. 
Further study of parameters of magnetic materials is encouraged, such as the magnetic relaxation time T_2n,2e and the susceptibility of materials with a strong hyperfine coupling other than MnCO_3. Taking into account the statistical behavior of the DM field would also help one to estimate the result more precisely. Further study to cure the weak points associated with the limited band of nuclear-dominated modes and the short relaxation time of electron-spin systems is also definitely useful to maximize the competence of this method. § ACKNOWLEDGMENT We thank Takahiko Makiuchi, Takashi Kikkawa and Eiji Saitoh for the valuable informations about nuclear magnon and magnetic excitation in MnCO_3. S.C. is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under the Contract No. DE-AC02-05CH1123. T.M. is supported by JSPS KAKENHI Grant Number 22H01215. T.S. is supported by the JSPS fellowship Grant number 23KJ0678. This work is supported by World Premier International Research Center Initiative (WPI), MEXT, Japan. § EXOTIC INTERACTION BETWEEN NUCLEAR SPINS OF MN AND DARK MATTER In this Appendix, we consider the interaction between nuclear spin I⃗ and axions based on the interaction between the nucleon spin and axion given by Eq. (<ref>). We mainly follow Ref. <cit.> in the discussion below. Natural unit is adopted in this Appendix. Both the spin and orbital angular momentum of protons and neutrons contribute to the nuclear spin I⃗ and nuclear magnetic moment μ⃗_I: I⃗=S⃗_p+S⃗_n+L⃗_p+L⃗_n, μ⃗_I=g_p μ_NS⃗_p+g_n μ_NS⃗_n + g_lpμ_NL⃗_p+g_lnμ_NL⃗_n, where S⃗_p,n and L⃗_p,n are the total spin and orbital angular momentum operators for protons or neutrons, respectively. Spin and orbital g-factors are given by g_p=5.586, g_n=-3.826, g_lp=1, and g_ln=0. For convenience, using contribution variable σ_ξ≡⟨I⃗·ξ⃗⟩ / ⟨I⃗·I⃗⟩ with the expectation value taken with respect to a nuclear state, and defining the nuclear g-factor g_I from ⟨μ_zI⟩=g_Iμ_N⟨ I_z⟩, we obtain the relation 1= σ_p+σ_n+σ_lp+σ_ln, g_I=g_pσ_p+g_n σ_n + g_lpσ_lp+ g_lnσ_ln. Now let us discuss the axion interaction with the nuclear spin. One should note that the axion only couples to the nucleon spin, not to its orbital motion. Starting from the axion interaction of the form (g_appS⃗_p+g_annS⃗_n)·∇⃗a, we obtain the interaction between the nuclear spin I⃗ and the axion of the form ℒ_int=g̃_aI/m_N ( ∇⃗ a) ·I⃗ = 1/g_I μ_Ng̃_aI/m_N (∇⃗a) ·μ⃗_I , where g̃_aI≡ g_appσ_p+g_annσ_n. Matching this to the magnetic interaction of the form ℒ_int=μ⃗_I ·h⃗, with h⃗ being the magnetic field and μ⃗_I the nuclear magnetic moment, we can define the effective magnetic field caused by the axion and felt by the nucleus as h⃗_I^axion= 1/g_I μ_Ng̃_aI/m_p√(2ρ_DM)v⃗_DMsin(m_a t + δ), where ρ_DM and v⃗_DM are density and velocity vector of DM, respectively. We have used ∇⃗ a ≈ m v⃗_DM a(t) and assumed that DM is composed solely of axions, hence ρ_DM=m^2_a a_0^2/2 where a_0 is the amplitude of axion coherent oscillation. The remaining task is to find a spin contribution for the case of magnetic isotope of interest; here we focus on ^55Mn. For example, we can adopt the semi-empirical approach following Engel and Vogel <cit.>, by assuming that the nucleon species with even number (in this case, neutrons) contributes little to the angular momentum of the system, and taking g_I as the observed value. 
With g_I=1.38 as the observed nuclear g-factor for ^55Mn and σ_n = 0, σ_ln=0, we obtain the spin contribution as σ_p ≈ 0.1, σ_lp≈ 0.9. However, there are some uncertainties in the above semi-empirical estimation. One of them comes from the deformed shell structure of the ^55Mn nucleus <cit.>, which implies the collective rotation of the nucleus in addition to the intrinsic state of nucleons. This can affect the ratio of the spin and orbital contributions to the nuclear spin. In particular, concerning the z-direction contribution to the magnetic moment, the nuclear magnetic moment is given by <cit.> μ=[I/(I+1)]⟨Ω | μ_z | Ω⟩ + [1/(I+1)]g_R I. The first term is the magnetic moment from the angular momentum of the proton in the state | Ω⟩, while the second term accounts for the collective rotation of nucleons with the rotational g-factor g_R. Due to the deformed configuration of the ^55Mn nucleus, particles experience an axially-symmetric potential (with z being the axial direction), and degenerate spherical shells split. Almost all nucleons couple pair-wise, contributing little to the angular momentum, and the last odd proton occupies the Nilsson state with the asymptotic quantum number [312]5/2^-. (For the Nilsson state [N n_z Λ]Ω^π of a nucleon <cit.>, N is the total quantum number, n_z the number of oscillator quanta in the z-direction, Λ the orbital angular momentum in the z-direction, Ω the total angular momentum in the z-direction, and π the parity.) This last odd proton and the collective rotation of nucleons contribute to the nuclear spin I=5/2. For the ground state of ^55Mn, the state of the last proton can be approximated by | Ω⟩=|l_z=2,S_z=1/2 ⟩. Since ⟨Ω | μ_z | Ω⟩= (1/2)g_p+2 g_lp, and the rotational g-factor g_R of the nucleus should not have a contribution from the spin of the nucleons, we know that σ_p =0.5/ (I+1) ≈ 0.14 is the proton-spin contribution realizing the magnetic moment μ. The value of σ_p from this deformed shell model is slightly higher than the one obtained from the simple semi-empirical model. Another uncertainty is the neutron-spin contribution, which has been neglected so far. For the axion coupling to the nuclear spin, the above two approximations imply that the ^55Mn nucleus is sensitive to the axion-proton coupling constant g_app, but not to the axion-neutron coupling g_ann. However, in reality there might also be a significant spin contribution from the core neutrons through their spin polarization via the spin-spin interaction <cit.>. It may make the ^55Mn nucleus also sensitive to the axion-neutron coupling. In this paper, we neglect this effect for simplicity. In this work, we use the values σ_p=0.1, σ_n=0, to numerically estimate the magnitude of the axion-induced magnetic field acting on the ^55Mn nuclear spin. § DETAILED CALCULATION OF MAGNETIC DYNAMICS IN MNCO3 In this Appendix we calculate the magnetic dynamics of MnCO3 under the DM background. In this Appendix and the following, SI units are adopted, with the vacuum permeability factor μ_0 omitted in the formulae for convenience. To estimate the eigensystem of the magnetic system in MnCO3, we first note that there is the following orders-of-magnitude relation among the effective fields and other constants: H_E → H_E, H_n → H_n, γ_e →γ_e, H_0 → H_0 δ, H_D → H_D δ, H_K→ H_K δ, H_a → H_a δ^2, H_K'→ H_K' δ^2, γ_n →γ_n δ^2, where δ∼ 10^-2 roughly accounts for the hierarchy among these parameters. Below we will consider only the leading-order result in δ, while the appearance of δ is omitted for notational simplicity.
Recall that the potential of the system and the equations of motion are given by Eqs. (<ref>) and (<ref>). The dynamics of the magnetic system concerning only the in-phase modes, following Eqs. (<ref>) and (<ref>), reads d u⃗ (t) /dt = Ωu⃗(t) + R⃗(t) + D⃗(t), where u⃗≡( M^x_+ , M^y_+ , m^x_+ , m^y_+)^T represents a collection of the perturbations of the magnetization parameters defined by Eq. (<ref>), and at leading order Ω= [ 0 -2 γ_e H_E 0 γ_e H_n; γ_e ( H_0(H_0+H_D) +2 H_E (H_a+H_K'))/(2 H_E) 0 -γ_e H_n; 0 -γ_n H_a 0 γ_n H_n; γ_n H_a 0 -γ_n H_n 0 ]. The term R⃗ represents the relaxation of the magnetizations. D⃗ is the source term due to the oscillating magnetic field induced by DM and is given by D⃗(t)= [ h_e^y M_0; h_e^z M_0 sinψ; h_n^y m_0; h_n^z m_0 sinψ ](e^iω_DM t+e^-iω_DM t). In the case that R⃗=0 and D⃗=0, with the ansatz u⃗(t)=u⃗e^iω t, the equation of motion reads iωu⃗=Ωu⃗, whose solutions give the eigenvalues of the precession modes of magnetization as ω_1,3=±ω_ñ,+, ω_2,4=±ω_ẽ,+, where ω_ñ,+ and ω_ẽ,+ are given by Eqs. (<ref>) and (<ref>). The corresponding eigenmodes are u⃗_1=u⃗_3^*= ( -2 i H_E γ_e^2 ω_n^2/(γ_n ω_ñ,+ω^2_e,+), - ω_n ( 2H_Eγ_e ω_n - ω^2_e,+)/(2H_E γ_n ω_e,+^2) , -i ω_n/ω_2, 1 )^T, u⃗_2= u⃗_4^*= (i ω_e,+/(H_a γ_n), ω_e,+^2/(2H_a H_E γ_e γ_n), -i (2 H_E γ_e ω_n-ω_e,+^2)/(2H_E γ_e ω_e,+), 1 )^T, up to an arbitrary normalization factor. Now, we take into account the damping effect R⃗ and the source term D⃗. We adopt here for simplicity a relaxation term of the form R⃗= -u_i/T_i, where T_1,3=T_2n and T_2,4=T_2e denote the relaxation time scales of the nuclear-dominated mode and the electron-dominated mode, respectively. With the ansatz u⃗ (t)=∑_i c_i u⃗_i e^iω_DM t+h.c., the equation of motion reduces to the problem of finding the c_i's satisfying ∑_i (i( ω_DM - ω_i ) + 1/T_i) c_i u⃗_i=D⃗. The signal magnetization (M_signal≡ M^z_total ) is then given by M_signal (t) =sinψ∑_i c_i (u⃗_i)_1 e^iω_DM t+h.c.. When ω_DM∼ω_1, the i=1 term is dominant, with the corresponding steady-state solution of the form c_1 ∝ T_2n/((ω_1-ω_DM)^2 T_2n^2+1). It is approximately the solution at times t ≳ T_2n after being excited. On the other hand, when ω_DM∼ω_2 the i=2 term is dominant, with a steady-state solution of the form c_2 ∝ T_2e/((ω_2-ω_DM)^2 T_2e^2+1), which is a good approximation for the solution at times t ≳ T_2e after being excited. That is, the resonance occurs when | ω_DM - ω_ñ,+| ≲ 1/T_2n or | ω_DM - ω_ẽ,+| ≲ 1/T_2e. Note that when t ≲τ_DM, the magnetization rotates with a coherent phase with frequency ω_DM, where τ_DM is the coherence time of DM, assumed to be larger than T_2n,2e. Due to the velocity distribution of DM, the signal spreads with bandwidth Δω_DM in frequency space. For the power estimation, we take into account the relevant component of magnetization and use Eq. (<ref>). The signal susceptibility and power are listed in Tables <ref>, <ref>, and <ref>, <ref>, respectively. If one instead uses a Bloch-type relaxation term, in which the electron magnetization components M^x,y_+ relax with time constant τ_2e and the nuclear components m^x,y_+ with τ_2n, the effective relaxation times T_2n, T_2e can be expressed as T_2n=τ_2nτ_2e (H_a γ_e ω_n (2H_E γ_e ω_n-ω_e,+^2)+ω_e,+^4)/(τ_2eω_e,+^4+H_a τ_2nγ_e ω_n (2H_Eγ_e ω_n - ω_e,+^2)), T_2e=τ_2nτ_2e (H_a γ_e ω_n (2H_E γ_e ω_n-ω_e,+^2)+ω_e,+^4)/(τ_2nω_e,+^4+H_a τ_2eγ_e ω_n (2H_Eγ_e ω_n - ω_e,+^2)) . § NUCLEAR MAGNON PICTURE In this Appendix, we discuss the dynamics of the magnetic system of MnCO3 using the quantum magnon picture. Based on the following derivation, we check that the response derived in the magnon picture is consistent with that derived in the classical picture illustrated in the main text (Sec.
<ref>). §.§ Hamiltonian in the magnon picture The following is the Hamiltonian for the spin of MnCO3 in the antiferromagnetic phase <cit.>: ℋ= 2J ∑_ i,j ≠ iS⃗_i ·S⃗_j + 2 ∑_ i,j≠ i S⃗_j ·(S⃗_i e⃗_y D) + K/2[ ∑_i (S_i^y)^2 + ∑_j (S_j^y)^2 ]- K'/2[ ∑_i (S_i^z)^2 + ∑_j (S_j^z)^2 ] + γ_e ħ[ ∑_i S⃗_i + ∑_j S⃗_j ] · (H⃗+h⃗_e) - γ_n ħ[ ∑_i I⃗_i + ∑_j I⃗_j ] · (H⃗+h⃗_n) + Ã_hy [ ∑_i S⃗_i ·I⃗_i + ∑_j S⃗_j ·I⃗_j], where i and j represent lattice cites in two sublattices; J,K,K' and Ã_hy are positive constants, and γ_n,γ_e are gyromagnetic ratios for nuclear and electron spin, respectively. The Hamiltonian includes (1) isotropic exchange interaction between electron spins of nearest sites between S⃗_i and S⃗_j; (2) Dzyaloshinskii–Moriya interactions; (3) easy-plane anisotropy; (4) in-plane uniaxial anisotropy; (5) Zeeman effects for electron spins S⃗ and nuclear spins I⃗; and (6) hyperfine interaction between electron spins S⃗ and nuclear spins I⃗. Here, we discuss the situation where there are static field H⃗=H_0 e⃗_x applied in x direction and exotic oscillating field induced by DM of which h⃗_e(t) interacting with electron spin and h⃗_n(t) interacting with nuclear spin. The coordinate is set to coincide with situation discussed in the main text using classical theory. The relations between spins and magnetizations are given by Eqs. (<ref>) and (<ref>). It reduces to the classical Hamiltonian (<ref>) when all the spins in each sublattice are aligned, which gives the relation H_E=2JzS/γ_e ħ, H_D=2DzS/γ_e ħ, H_K'=K' S/γ_eħ, H_K=KS/γ_eħ, A_hyM_0= Ã_hyS/γ_nħ. Note that the direction between magnetization and spin of electron is opposite while we take gyromagnetic ratio parameter γ_e,γ_n to be positive. In the ground state, electron spins align in the xz plane pointing (almost in) to +z and -z direction for each sublattice due to easy-plane and in-plane anisotropy. They slightly tilt toward -x direction due to the applied static field H_0 and the Dzyaloshinskii–Moriya interactions. For nuclear spins, they align opposite to electron spin of the same Mn due to hyperfine interaction. The configuration (of magnetizations) is shown in Fig. <ref>. For convenience to consider fluctuation around ground state, we rotate our spin parameters for each sub-lattice: S_i^x=S_i^x_1cosψ + S_i^z_1sinψ, S_i^z=-S_i^x_1sinψ + S_i^z_1cosψ, S_j^x=-S_j^x_2cosψ + S_j^z_2sinψ, S_j^z= -S_j^x_2sinψ - S_j^z_2cosψ, where the ground state expectation value gives ⟨ S_i^z_1⟩ =⟨ S_j^z_2⟩ =-S=-5/2 and ⟨ S_i,j^x_1,2,y⟩=0. The coordinate transformation is shown in Fig. <ref>. Due to hyperfine interaction, at ground state nuclear spins lie in the opposite direction to the electron spins of the same sites and similarly form two sub-lattices. We perform the same coordinate transformation for nuclear spins I⃗ (Eq. (<ref>) but with S↔ I). In this case, the thermal expectation value for nuclear spin temperature T is ⟨ I_i^z_1⟩ =⟨ I_j^z_2⟩ = ⟨ I⟩ given by Eq. (<ref>), and ⟨ I_i,j^x_1,2,y⟩=0. The function ⟨ I ⟩/I is shown in Fig. <ref>. Now the hyperfine interaction can be written in the form ℋ_hy= ℋ_∥ +ℋ_mix, where ℋ_∥ =Ã_hy∑_i ( S^z1_i I^z1_i )+ ∑_j ( S^z2_j I^z2_j ), ℋ_mix=Ã_hy∑_i ( S^x1_i I^x1_i + S^y_i I^y_i )+ ∑_j ( S^x2_j I^x2_j + S^y_j I^y_j ). The term ℋ_∥ represents the hyperfine interaction in the direction of spin alignment in the ground state, which make both the eigenfrequency of nuclear and electron spin precession higher. On the other hand, the term ℋ_mix causes the mixing between the nuclear and electron spin precession modes. 
Then we perform the Holstein–Primakoff transformation for expressing perturbations in terms of magnons: (-S_i^z_1) =S- a_i^† a_i, I_i^z_1 = ⟨ I ⟩ - c_i^† c_i S_i^+= (-S_i^x_1) +i S_i^y =√(2S) a_i, I_i^+= I_i^x_1 +i I_i^y = √(2⟨ I ⟩) c_i, S_i^-= (-S_i^x_1) -i S_i^y=√(2S) a^†_i, I_i^-= I_i^x_1 -i I_i^y = √(2⟨ I ⟩) c_i^† for the first sublattice, where a,a^† and c,c^† are magnon annihilation, creation operators satisfying bosonic commutation relation. Here, we consider only small perturbation and hence the higher order of creation/annihilation in the Holstein–Primakoff are neglected. We perform the same transformation for the other sublattice by using b_j and d_j representing the electron and nuclear magnon operators respectively. We also perform Fourier transformation: a_i=√(1/N)∑_k⃗ e^i k⃗r⃗_i a_k, b_j=√(1/N)∑_k⃗ e^i k⃗r⃗_j b_k, c_i=√(1/N)∑_k⃗ e^i k⃗r⃗_i c_k, d_j=√(1/N)∑_k⃗ e^i k⃗r⃗_j d_k, with N denoting number of spin sites in each sublattice. However, we focus only on the k⃗=0 mode since it is dominantly excited by DM which is assumed to be almost spatially uniform. As long as only the k=0 mode is considered, we simply omit the subscript k for the annihilation and creation operators. §.§ Diagonalization The quadratic terms in the Hamiltonian can be divided into several terms as ℋ=ℋ_e+ℋ_n +ℋ_mix+ℋ_DM, where ℋ_e is the electron-magnon term including ℋ_∥ part of the hyperfine interaction, ℋ_n is the nuclear-magnon term, ℋ_mix is the mixing term given by Eq. (<ref>), and ℋ_DM is the interaction term between the spin and DM. The electron-magnon Hamiltonian ℋ_e reads ℋ_e/ħ= A_e (a^† a +b^† b) + B_e (a b+ a^† b^†)+ 1/2 C_e(a a+ b b + h.c.) + D_e (a b^† + a^† b), with A_e =γ_e [ H_E - H_0^2 - H_D^2/2 H_E + H_K' + H_0 sinψ + H_a + H_K/2], B_e=γ_e ( -H_E + 1/2H_0^2 - H_D^2/2 H_E), C_e= -γ_e H_K/2, D_e=γ_e 1/2H_0^2 - H_D^2/2 H_E. Nuclear-magnon Hamiltonian ℋ_n reads ℋ_n /ħ=ω_n(c^† c+d^† d), where ω_n= Ã_hy S/ħ. The interaction Hamiltonian between the DM and spin is ℋ_DM= γ_e ħ[ ∑_i S⃗_i + ∑_j S⃗_j ] ·h⃗_e (t)- γ_n ħ[ ∑_i I⃗_i + ∑_j I⃗_j ] ·h⃗_n(t) with h⃗_n=h⃗_n^axion and h⃗_e^axion given by Eqs. (<ref>) and (<ref>) for the case of axion DM, and h⃗_n=h⃗_e=h⃗^γ' given by Eq.  (<ref>) for the case of dark photon DM. We can diagonalize ℋ_e with the following generalized Bogoliubov transformation <cit.> X=QZ, where X≡( a b a^† b^†)^T,Z ≡( α β α^† β^†)^T, and Q = [ Q_1 Q_2; Q_2^* Q_1^* ] , Q_1= [ Q_11 Q_12; -Q_11 Q_12 ] , Q_2= [ Q_13 Q_14; -Q_13 Q_14 ], with the following elements: Q_11 =- [ (A_e-D_e) + ω_e,-/4 ω_e,-]^1/2, Q_12 =[ (A_e+ D_e) + ω_e,+/4 ω_e,+ ]^1/2, Q_13 =[ (A_e-D_e) - ω_e,-/4 ω_e,-]^1/2, Q_14 =[ (A_e+ D_e) - ω_e,+/4 ω_e,+]^1/2. Note that the matrix Q ensures the canonical quantization properties of α and β through relation Q [ I_2 0; 0 -I_2 ] Q^†= [ I_2 0; 0 -I_2 ]. The diagonalized Hamiltonian is written as ℋ_e/ħ=ω_e,-α^†α+ω_e,+β^†β, where ω_e,- and ω_e,+ are equal to those given by Eqs. (<ref>) and (<ref>), respectively. Magnons α and β correspond to the out-phase and in-phase mode of electron-spin precession, respectively (see Sec. <ref>). Because the eigenenergy of out-phase mode is far from that of the nuclear magnon, its mixing with the nuclear magnon is expected to be small. We then focus only on the in-phase mode β. Nuclear magnons and electron magnons mix with each other by the precession component of hyperfine interaction: ℋ_ mix=-Ã_ hy√(S ⟨ I⟩) (a c+a^† c^† +b d+ b^† d^† ). 
When we focus only on the in-phase mode β of the electron magnon, we obtain ℋ_ mix=-Ã_ hy√(S ⟨ I ⟩)[ ( Q_12β + Q_14β^† ) η + ( Q_12β^† + Q_14β ) η^†], where η≡ (c+d)/√(2). Let us focus on the in-phase magnons. Neglecting the DM part for now, the Hamiltonian of interest is given by ℋ_0/ħ =ω_e,+β^†β+ω_nη^†η + ℋ_mix/ħ =A β^†β +A' η^†η + B (βη + β^†η^†) + D(βη^† + β^†η) with A=ω_e,+, A'=ω_n, B=-Ã_hy√(2⟨ I⟩ S) Q_12 / ħ, D=-Ã_hy√(2 ⟨ I⟩ S) Q_14 / ħ. We can diagonalize it by the transformation Y=RW, where Y ≡( β η β^† η^†)^T and W ≡( β̃ η̃ β̃^† η̃^†)^T. We find that the matrix R that preserves the canonical commutation relations of the magnon operators has the form R= [ U V; V^* U^* ], with the elements given by U= (u_1 u_2) , u_i= k_i[ +_i; 2AD ], +_i=ω_i^2+( A-A') ω_i -(A A' + D^2 -B^2), V= (v_1 v_2 ) , v_i= C_i k_i[ -_i; 2AD ], -_i=ω_i^2-(A-A') ω_i -(A A' + D^2 -B^2), where C_i=-(D/B)[ (ω_i-A)(ω_i-A')-(D^2-B^2) ]/[ (ω_i-A)(ω_i+A')-(D^2-B^2) ], k_1=1/(2A D)( C_2 ω_2/( -C_1 ω_1+C_1C_2^2 ω_1 + C_2 ω_2 -C_1^2C_2 ω_2 ) )^1/2, k_2=1/(2A D)( -C_1 ω_1/( -C_1 ω_1+C_1C_2^2 ω_1 + C_2 ω_2 -C_1^2C_2 ω_2 ) )^1/2, and ω^2_1,2 = D^2-B^2 + (A^2+A'^2)/2 ±(1/2)√((A-A')^2[(A+A')^2 + 4 (D^2-B^2)] + 16 A A' D^2). Note that ω_1=ω_ẽ,+ and ω_2=ω_ñ,+, which are equal to those given by Eqs. (<ref>) and (<ref>), respectively. Finally, the diagonalized full Hamiltonian of the electron and nuclear magnons takes the form ℋ_0/ħ=ω_ẽ,+β̃^†β̃+ ω_ñ,+η̃^†η̃. For convenience in the later discussion, we define the following mixing-angle matrices ϕ^±: [ β^†±β; η^†±η ] = [ ϕ^±_ββ̃ ϕ^±_βη̃; ϕ^±_ηβ̃ ϕ^±_ηη̃ ][ β̃^†±β̃; η̃^†±η̃ ], which can be obtained from the matrix R defined in Eq. (<ref>). §.§ Response Next, we discuss the magnetization response due to the DM-induced magnetic field. As defined in Eq. (<ref>), the magnetization signal M_signal is given by the total magnetization along the z direction, M^z_total, which is the oscillating component in this setup. We can write it in terms of the magnon operators β̃,η̃ as M_signalV =⟨ M_total^z⟩ V = -γ_e ħ⟨∑_i S^z_i + ∑_j S^z_j ⟩ =- γ_e ħsinψ√(N_totalS) (Q_12+Q_14) ( ϕ^+_βη̃⟨η̃ + η̃^†⟩ + ϕ^+_ββ̃⟨β̃ + β̃^†⟩), where N_total= 2N is the total number of spin sites. On the other hand, the interaction Hamiltonian between the DM-induced fields and the magnons β̃,η̃ reads ℋ_DM= -iγ_n ħ h_n^y (t) √(N_total⟨ I⟩/2)( ϕ^-_ηη̃ ( η̃^† - η̃ ) + ϕ^-_ηβ̃ (β̃^†- β̃) ) + γ_n ħ h_n^z (t) sinψ√(N_total⟨ I ⟩/2)( ϕ^+_ηη̃ ( η̃^† +η̃ ) + ϕ^+_ηβ̃ (β̃^†+ β̃) ) +γ_e ħ h^z_e(t) sinψ√(N_totalS) (Q_12+Q_14) ( ϕ^+_βη̃ ( η̃ + η̃^†) + ϕ^+_ββ̃ (β̃+ β̃^† ) ) +i γ_e ħ h_e^y(t) √( N_total S) (Q_12-Q_14) ( ϕ^-_βη̃ (η̃^†-η̃)+ ϕ^-_ββ̃ (β̃^†-β̃) ). One can then apply the linear excitation theory to find the expectation values of the operators in Eq. (<ref>), and hence the magnetization signal. Focusing on one particular mode ζ=β̃,η̃, the Hamiltonian is H=H_0+H_DM; H_0= ħω_ζζ^†ζ, H_DM=[ g_ζζ e^iω_DM t+ h.c. ], where g_ζ is a coupling constant depending on the amplitudes of the DM-induced fields h⃗_e, h⃗_n (see Eqs. (<ref>),(<ref>) and (<ref>)), while ω_DM is equal to the axion mass m_a or the dark photon mass m_γ'. The Heisenberg equation of motion reads d ζ/dt=-iω_ζζ-(1/T_2ζ)ζ - (i/ħ) g^*_ζ e^-i ω_DM t, with T_2ζ the relaxation time of the precession mode ζ, which is twice the magnon lifetime. The stationary steady-state solution for this system is given by ⟨ζ⟩= (g^*_ζ/ħ)/[ (ω_DM-ω_ζ) + i(1/T_2ζ) ] e^-iω_DM t, which is a good approximation to the solution at t ≳ T_2ζ. Substituting this expectation value into the expression for the magnetization signal (Eq. (<ref>)), one can derive the response of the system to DM.
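Since ω_1,2 and the driven steady-state amplitude are given in closed form above, they can be evaluated with a few lines of code. The snippet below is a hedged numerical sketch of those two formulas; the parameter values are left to the caller and are not taken from the experiment.

```python
import numpy as np

def hybridized_frequencies(A, Ap, B, D):
    """omega_{1,2} of the coupled electron/nuclear magnon modes."""
    base = D**2 - B**2 + 0.5 * (A**2 + Ap**2)
    disc = (A - Ap)**2 * ((A + Ap)**2 + 4.0 * (D**2 - B**2)) + 16.0 * A * Ap * D**2
    w1 = np.sqrt(base + 0.5 * np.sqrt(disc))   # electron-like mode
    w2 = np.sqrt(base - 0.5 * np.sqrt(disc))   # nuclear-like mode
    return w1, w2

def steady_state_amplitude(g, omega_dm, omega_mode, T2, hbar=1.054571817e-34):
    """<zeta> for the driven, damped mode: a Lorentzian response in omega_dm."""
    return (np.conj(g) / hbar) / ((omega_dm - omega_mode) + 1j / T2)
```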
Due to the velocity distribution of DM, the magnetization signal spreads with a width Δω_DM determined by that of the DM field. We find that the response of the system derived in the quantum magnon picture is consistent with that of the classical theory given in Tables <ref> and <ref>. On the other hand, the power dissipated by the system can be derived from the relation P= ħω_ζ⟨ n_ζ⟩ (2/T_2ζ), where n_ζ=ζ^†ζ is the number operator.
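As a short continuation of the previous sketch, the dissipated power can be estimated from the steady-state occupation via P = ħω⟨n⟩(2/T₂); again this is illustrative only, with ⟨n⟩ taken as the squared modulus of the steady-state amplitude.

```python
import numpy as np

def dissipated_power(g, omega_dm, omega_mode, T2, hbar=1.054571817e-34):
    """P = hbar * omega * <n> * (2/T2), with <n> = |<zeta>|^2 in steady state."""
    amp = (np.conj(g) / hbar) / ((omega_dm - omega_mode) + 1j / T2)
    return hbar * omega_mode * abs(amp)**2 * 2.0 / T2
```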
http://arxiv.org/abs/2307.06883v1
20230712153847
Cyber Framework for Steering and Measurements Collection Over Instrument-Computing Ecosystems
[ "Anees Al-Najjar", "Nageswara S. V. Rao", "Ramanan Sankaran", "Helia Zandi", "Debangshu Mukherjee", "Maxim Ziatdinov", "Craig Bridges" ]
cs.OH
[ "cs.OH", "physics.ins-det" ]
Cyber Framework for Steering and Measurements Collection Over Instrument-Computing Ecosystems
Anees Al-Najjar, Nageswara S. V. Rao, Ramanan Sankaran, Helia Zandi, Debangshu Mukherjee, Maxim Ziatdinov, Craig Bridges
August 12, 2023
=============================================== We propose a framework to develop cyber solutions to support the remote steering of science instruments and measurements collection over instrument-computing ecosystems. It is based on provisioning separate data and control connections at the network level, and developing software modules consisting of Python wrappers for instrument commands and Pyro server-client codes that make them available across the ecosystem network. We demonstrate automated measurement transfers and remote steering operations in a microscopy use case for materials research over an ecosystem of Nion microscopes and computing platforms connected over site networks. The proposed framework is currently under further refinement and being adapted to science workflows with automated remote experiment steering for autonomous chemistry laboratories and smart energy grid simulations. Keywords: science workflows, science instrument ecosystems. § INTRODUCTION Distributed workflows orchestrated by artificial intelligence (AI) over science instrument-computing ecosystems (ICE) hold enormous promise for accelerating productivity and discovery in science scenarios. Their success depends on the ability to orchestrate automated experiments at remote physical instruments by steering them and collecting the generated measurements. We consider physical science instruments and computing platforms existing at geographically dispersed locations that form an ecosystem whose goal is to seamlessly support these workflows <cit.>. The underlying tasks may involve configuring instruments, collecting and transferring measurements, and analyzing them at remote computing systems to extract parameters for the next step in a series of experiments. Currently, a number of these tasks are performed manually as part of a workflow that may span days to weeks, in which experiments are repeated using different parameters produced by simulation and analysis codes. These human-driven operations limit the scalability and performance of scientific workflows, leading to inadequate utilization of high-cost instruments. The science workflows may require computing and storage services provisioned by complex science ecosystems that utilize diverse instruments, for example, light sources <cit.>, chemistry instruments, microscopes <cit.>, and others. Generally, these instruments are controlled locally through Windows-based platforms, while the remote computing systems are GPU-based Linux platforms situated in different network domains separated by firewalls. So far, no general cyber framework exists to support the steering and measurement collection needed for these ICE workflows with diverse instruments, apart from solutions deployed at neutron and light-source facilities, such as the Experimental Physics and Industrial Control System (EPICS) <cit.>. To fully realize the potential of diverse ICE, software modules must be designed, implemented, and tested to collect and transfer the acquired data and to perform cross-facility instrument steering, as illustrated in Fig. <ref>. In addition, the underlying network connections need to be provisioned to support the custom, protected flows needed for concurrent steering and data transfer operations. We propose a cyber framework for implementing remote steering and measurements collection capabilities over ICE, as illustrated in Fig. <ref>.
Separate data and control connections are set up at the network level by configuring the physical ecosystem connections. Software modules are developed by wrapping instrument APIs and control commands in Python, and exposing them using Pyro client-server codes for network access. This framework is general and applicable to instruments that provide APIs (both Windows and Linux) and network interfaces. We describe the design and implementation of this framework, and workflow execution using a Jupyter notebook, in Section <ref>. We illustrate automatic data transfer and cross-facility control over ORNL site networks for microscopy workflows in Section <ref>. § CYBER FRAMEWORK: DESIGN AND IMPLEMENTATION We propose a cyber framework for implementing remote steering and measurements collection capabilities over ICE, illustrated in Fig. <ref>. Scalable and reliable science workflows are supported by utilizing a novel cyber framework of network connections and software modules, explained as follows. (a) Network connections: Separate data and control connections are set up at the network level by configuring physical connections between ecosystem components at different facilities. These cross-facility connections are established by aligning the firewalls and access policies of the networks, facilities, and hardware and system applications at the edge systems, to grant network traffic access. (b) Cross-platform software modules: Software modules are developed to support cyber-physical and analytical operations, including instrument control (typically run on Windows platforms), measurement analytics (typically run on multi-GPU Linux systems), and cross-facility measurement transfer. The instrument control modules utilize Python to wrap instrument APIs and control commands, which are exposed and called via Pyro client-server modules for remote instrument steering, possibly by AI codes. Data transfer is enabled across the ecosystem between Windows-based storage, such as network-attached storage (NAS), located near the instrument and remote Linux systems, as shown in Fig. <ref>. We utilize cross-platform file mounting methods specific to the facility setup, for example, the Secure Shell File System (SSHFS) between Windows and Linux platforms. §.§ Control Channel Steering science experiments and instruments (via their control computers) across an ecosystem is accomplished through remote control commands sent via a (network) control channel. The control computers respond to the executed control commands with measurements and possibly metadata and system messages. We developed Pyro client-server codes to support remote steering of such experiments across the ecosystem. Pyro provides a Python API for network access <cit.>. We developed and installed Pyro server modules on the control computer, as part of the instrument control software, while the client modules are installed on the remote computing systems. The client-server communication over a multi-site ecosystem is illustrated in Fig. <ref>. The Pyro modules at the client access the controls and programmable interfaces on the server. The client modules are run as part of an automated workflow from a console or a web-based interactive platform, such as a Jupyter Notebook. The communication passes the control node's IP address along with the control commands and parameters required to steer the instruments.
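To make the control-channel pattern concrete, the sketch below shows a Pyro-style server wrapping two instrument controls and a client calling them over the network. It assumes the Pyro4 flavor of the API; the class name, object id, port, and the method bodies are placeholders of our own (they loosely mirror the scan-status and probe-position controls discussed in the demonstration section) and are not the modules deployed at ORNL.

```python
# control_server.py -- runs on the instrument control computer (Windows)
import Pyro4

@Pyro4.expose
class MicroscopeControl:
    def scan_status(self):
        # wrap the vendor API call here; placeholder return value
        return {"scanning": False}

    def probe_position(self, x, y):
        # wrap the vendor API call that positions the probe; placeholder
        return {"x": x, "y": y, "ok": True}

if __name__ == "__main__":
    daemon = Pyro4.Daemon(host="0.0.0.0", port=9090)   # port opened by the firewall rules
    uri = daemon.register(MicroscopeControl(), objectId="microscope")
    print("Control channel ready at", uri)
    daemon.requestLoop()

# control_client.py -- runs on the remote computing system (Linux), e.g. from a notebook
#   import Pyro4
#   scope = Pyro4.Proxy("PYRO:microscope@CONTROL_NODE_IP:9090")
#   print(scope.scan_status())
#   scope.probe_position(0.5, 0.5)
```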
The proposed framework supports the concurrent execution of Pyro client modules on multiple remote systems, which communicate with the Pyro server to execute the exposed control commands on the control platforms. §.§ Data Channel The data channel allows the measurements acquired at a science facility, as a result of the execution of the instrument control commands, to be shared and made available to the remote servers across the ecosystem. Usually, instrument controller APIs are configured to store the measurements as files on Windows-based storage, as in the case of the Swift software (the Nion microscope controller API), which stores the scanning transmission electron microscopy (STEM) measurements on a NAS system. We implement the data channel by remotely cross-mounting the data files using a file-sharing technique (Common Internet File System (CIFS) or SSHFS) that provides access across different operating systems. The choice of technique depends on: (i) the setup of the instrument storage and remote computing systems, and (ii) the network domain configurations and traffic access policies on the incorporated systems. The file-sharing access privileges on the computing nodes are configured to grant authorized users access to the measurement files across the ecosystem. The access configuration is performed once for permanent file sharing across the ecosystem. §.§ Networks, Access and Firewalls The ecosystem components, computers and instruments dispersed over multiple facilities, are integrated into various domains isolated by network and host firewalls. To enable the data channel, firewall rules are inserted to permit file mounting between the storage and computing servers. Firewall rules are also added to open communication ports for the Pyro servers and clients over the control channel. § EXPERIMENTAL SETUP AND DEMONSTRATION The ecosystem capabilities described in the previous section are implemented on the Oak Ridge National Laboratory (ORNL) ecosystem comprising scientific instruments and remote computing systems. The microscopy workflow utilizes Nion microscopes located at the Center for Nanophase Materials Sciences (CNMS). The computing facility utilizes GPU-based Linux systems available at the K200 computing facility located in a different building. We utilize the proposed cyber framework design, discussed in Section <ref>, to integrate the science instruments with remote computing systems across the ORNL network to perform remote instrument steering and measurement transfers. This workflow uses the U200 Nion microscope with its NAS at CNMS, and a DGX workstation at the K200 computing facility, as part of the ORNL physical infrastructure shown in Fig. <ref>. §.§ Steering microscopy experiments over the ORNL ecosystem The microscopy control channel between the U200 microscope and the DGX computing system has been successfully tested to steer microscopy experiments across the ORNL physical infrastructure (Fig. <ref>). Several STEM Python-based controls are integrated into the scientific workflow to get the beam status, position the beam, and obtain instrument and experiment metadata and results. The functions scan_status and probe_position <cit.> are the corresponding Pyro server objects on the U200 control computer. The microscope is steered by the check_scan.py and probe_position.py Pyro client applications running on the DGX. §.§ Automated Data Transfers We tested the data channel over the ORNL ecosystem by mounting the microscope measurements directory on the U200 NAS onto the DGX using SSHFS file mounting.
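On the data-channel side, a minimal consumer on the Linux system only needs to watch the SSHFS-mounted directory for new measurement files. The sketch below is illustrative; the mount point, file pattern, and polling interval are assumptions rather than details of the ORNL setup.

```python
import time
from pathlib import Path

def watch_measurements(mount_point="/mnt/u200_nas", pattern="*.h5",
                       poll_seconds=5.0, handler=print):
    """Poll the mounted NAS directory and hand newly arrived files to `handler`."""
    seen = set()
    root = Path(mount_point)
    while True:
        for path in sorted(root.glob(pattern)):
            if path not in seen:
                seen.add(path)
                handler(path)   # e.g. trigger analysis on the GPU workstation
        time.sleep(poll_seconds)
```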
Authorized microscopists and automated codes seamlessly access the NAS files in the mounted directory and use them in computations on the remote computing system. § CONCLUSIONS We presented a cyber framework consisting of network provisioning and software modules for integrating science instruments into a complex ecosystem. It uses separate network connections for control commands and measurement transfers across the ecosystem, cross-platform software modules using Pyro for communications, and file mounting techniques for data transfers. This framework has been implemented and tested, and it is now part of operations supporting cross-facility Nion microscope workflows at ORNL. It is currently under refinement and further development to support workflows in other areas, including autonomous chemistry laboratories and smart grid simulations. Overall, this demonstration and the initial results show the applicability of this cyber framework to general instrument-computing ecosystems.
http://arxiv.org/abs/2307.04167v1
20230709132458
Dream Content Discovery from Reddit with an Unsupervised Mixed-Method Approach
[ "Anubhab Das", "Sanja Šćepanović", "Luca Maria Aiello", "Remington Mallett", "Deirdre Barrett", "Daniele Quercia" ]
cs.CY
[ "cs.CY", "cs.CL", "physics.soc-ph", "H.4.0; K.4.0" ]
Dream Content Discovery from Reddit with an Unsupervised Mixed-Method Approach
Anubhab Das, Sanja Šćepanović, Luca Maria Aiello, Remington Mallett, Deirdre Barrett, Daniele Quercia
August 12, 2023
============================================================================= § INTRODUCTION Dreaming is a fundamental human experience and a cornerstone of sleep psychology, yet its underlying mechanisms remain elusive. The fascination with the contents and meaning of our dreams dates back to early human civilizations,<cit.> but despite significant progress in dream research, fundamental questions about the physiological and psychological functions of dreaming remain unanswered, leaving us to ponder the question: why do we dream?<cit.> Previous literature on dream analysis suggests that dreaming plays a vital role in learning processes,<cit.> has psychotherapeutic effects,<cit.> and safeguards the brain's neuroplasticity.<cit.> Recent AI-inspired theories have drawn parallels between the brain's activity during sleep and the functioning of artificial neural networks. One theory suggests that dreams act as “garbage collectors” that clear the memory,<cit.> while another posits that they prevent over-fitting of the brain's neuronal network.<cit.> While the question of why we dream remains open, before we can even begin to answer it, we must first understand the nature of what we dream. This question is important not only for helping us understand the fundamental function of dreams but also because it offers a window into our psyche and into what is prominent in people's minds. Dream content is composed of fragments of waking life experiences and events,<cit.> but these are not veridical replays,<cit.> nor do they represent the entirety of dream content.<cit.> One popular approach to investigating dream content systematically is through content analysis.<cit.> This is a family of methods that quantitatively analyze the elements present in dreams to answer specific questions, such as whether depressed individuals experience more rejection in their dreams,<cit.> how dream content changes from the teenage years to adulthood,<cit.> or whether, in times of collective health crises, there is a shift in the medical symptoms people dream about.<cit.> Although these studies may seem narrow in scope, they provide a critical foundation for addressing the overarching question of why we dream. The importance of content analysis is evidenced by the development of over 130 scales and rating systems for dream content analysis.<cit.> Early scales tended to score based on the raters' subjective interpretation of dream symbolism and rarely reported inter-rater reliability.<cit.> Dream research became more systematic with the development of the Hall and van de Castle method <cit.> of dream content analysis, a quantitative system that scores dream reports based on the frequency and type of characters, interactions, activities, emotions, settings, and objects present in the dream. This method relies solely on the dream reports and does not use any additional information provided by the dreamer. Studies using the Hall and van de Castle scales have revealed consistent patterns and norms in dream content across different groups of people, such as gender, age, culture, and personality. For instance, women tend to dream more about family members, friends, and indoor settings, while men tend to dream more about strangers, violence, and outdoor settings.
Moreover, research using these scales has demonstrated that dream content correlates with an individual's waking concerns and interests, such as work, relationships, hobbies, and fears.<cit.> Limited content scope and representatives of existing scales. Traditionally, dream researchers had to manually sift through large numbers of dream reports to gain insights into the range of dream topics.<cit.> Even with recent developments in automated dream analysis,<cit.> most studies continued to rely on experimenter-driven content searches, i.e., supervised approaches for content analysis, such as the Hall and van de Castle method. These methods involve the use of predetermined categories that are often biased towards existing knowledge of dreams. For example, dreams are often characterized as bizarre, involving impossible or improbable events that deviate from everyday experiences,<cit.> which may not fit within the predetermined categories established in the literature. As a result, current approaches to dream analysis may miss important aspects of dream content that fall outside of these predetermined categories. In contrast to traditional methods of dream coding, unsupervised theme discovery from dream reports may provide a fresh perspective and a more comprehensive understanding of the categorization of dream content and its relationship with waking life events. Furthermore, previous studies relied on dream reports collected through retrospective surveys that are susceptible to memory biases, and laboratory studies that may be confounded by the strong impact of the laboratory setting on dream content.<cit.> Therefore, a more ecologically valid approach to studying dream content at the population level is necessary. Our unsupervised mixed-method approach for dream content discovery. Our study addresses the identified research gap (i.e., limited scope and representatives of dream content scales) by developing an unsupervised mixed-method approach for dream content discovery, and by applying it to new large-scale data that more closely approximates spontaneous dream recollections than survey studies. To achieve this, we i) leveraged recent advances in AI for Natural Language Processing (NLP) and ii) used a crowd-sourced dataset of dream self-reports from the community on Reddit. Unlike traditional lab studies, dream experiences shared on are reported voluntarily and spontaneously, enabling us to collect a large set of dream reports and conduct an ecological study. We collected over 44K dream reports from more than 34K Reddit users over the past five years, and applied the BERTopic content analysis method <cit.> to automatically discover topics in each dream report. The resulting taxonomy includes 22 themes that can be broken down into 217 more specific topics. Confirming its validity, we found that most of the themes in our taxonomy align with the dream element categories present in the Hall and Van de Castle scale, but the specific topics inside those themes provide a description of dreams that is much more detailed. Going beyond what was possible with existing scales, our method also allowed to uncover the importance and relationships among specific topics and themes. To demonstrate the applications enabled by our method, we used the metadata from the community, and classified dreams into four types: nightmares, recurring dreams, vivid dreams, and lucid dreams. Our analysis revealed that each type of dream has distinct characteristics and prominent topics. 
Notably, nightmares were associated with scary and shadowy imagery, and sexual-assault scenes. Vivid dreams featured rich expressions of feelings and topics inducing extreme emotions such as pregnancy and birth, religious figures, war, and aliens. Lucid dreams were characterized by topics of control, and by an overarching theme of mental reflections and interactions. For recurring dreams, we found that the most salient topics were dating, sex, and cheating, with recurring themes related to school and mentions of parts of the human body. Additionally, we investigated the relationship between dream topics and real-life experiences by studying the evolution of topics over the past five years. Our findings showed that the COVID-19 outbreak coincided with a gradual and collective shift in dream content. People started to dream less about people and relationships, feelings, sight and vision, outdoor locations, and movement and action, and more about the human body, especially teeth and blood, violence and death, religious and spiritual themes, and indoor locations. Similarly, after the war in Ukraine started the topics about soldiers and nuclear war both peaked. § RESULTS Figure <ref> outlines the framework of our study. It consist of three main stages: 1) data preprocessing, which includes collecting dream reports, ideally in an ecological setting, and using NLP methods for cleaning the content; 2) topic modelling, which is the core NLP stage for topic modelling. Once the topics and themes in the dream reports collection are discovered, various applications are supported, and we demonstrate three of those: 3) building a dream topics taxonomy, which allows to uncover the relationships between individual themes and topics, as well as the frequency of each of them in the dream collection; 4) finding topics and themes by dream types, which as an application that uses a proposed measure of topic or theme importance in a dream and odds-ratio analysis to discover topics that are specific to a dream type (or any other dream reports subset of choosing), and 5) finding topics and themes through time, which is an application using the proposed topic or theme importance in a dream to quantify the prevalence of dream topics and themes through time. §.§ Dream reports from Reddit In the established literature of dream analysis, dream reports are defined as “the recollection of mental activity which has occurred during sleep.”<cit.> To gather a dataset of such reports, we turned to Reddit, a social media platform organized in communities known as subreddits. Using the PushShift API<cit.>, we collected data from , a subreddit where members share their dreams and engage informally in their interpretation — which is common in therapy contexts <cit.> — as well as in providing social and emotional support to other members. The subreddit was established in September 2008, and as of June 2022, it had accumulated 280K subscribers. We collected over 185K posts published on from March 2016 to September 2022; prior to 2016, the community was almost inactive. Authors on annotate their posts with one or more tags selected from a fixed set community-specific labels called flairs. Flairs on denote posts that contain dream reports of a given type (Short Dream, Medium Dream, Long Dream, Nightmare, Recurring Dream, and Lucid Dream) or posts that contain discussions about dreams in general (e.g., Dream Help, Dream Art, or Question). 
To ensure that we considered only posts that contained dream reports, we only kept the 44,213 unique posts tagged with dream-type flairs (Figure <ref>). In our analysis, a dream report was the concatenation of the title and body of each of these posts. §.§ Our unsupervised mixed-method approach for dream content discovery As mentioned above, the core part of our approach shown in Figure <ref> is the unsupervised mixed-method topic modelling and it involves: 2.1) extracting topics using an advanced NLP topic model, such as BERTopic <cit.>; 2.2) grouping topics into themes using a clustering method to group embedding representations of individual topics discovered in the previous step into themes, and 2.3) filtering non-dream content and adjusting themes, which is a mixed-method part of the proposed methodology, requiring human, ideally dream experts knowledge to filter out topics or themes that do not pertain to the actual dream content, as well as to adjust any topic to theme associations, if needed. For more details about any of the steps of our approach, please refer to Methods section. §.§ The Reddit taxonomy of dream topics Using state-of-the-art topic modeling techniques, we have identified 217 semantically-cohesive topics that emerged from the dream reports analyzed (details in Methods). We assigned to each dream report the list of unique topics that our model was able to extract from the report text. The distribution of number of dreams associated with a topic is broad (Figure SI5), with only 38 topics being associated with at least 1000 dream reports. Table <ref> shows the 20 most frequent topics. To offer a more concise representation of dream topics, we automatically clustered the 217 fine-grained topics into 22 themes (see Methods), which we then manually parsed to assess their conformity to categories from the dream coding system by Hall and Van de Castle (HVdC for brevity). HVdC defines 12 categories and several subcategories of elements empirically relevant for quantitative dream analysis. All themes but one (a miscellaneous theme containing diverse topics) matched some HVdC category or subcategory (Table <ref>). At a high level of abstraction, HVdC views the dream as a cast of characters (i), interacting with each other (ii), while being immersed in some background setting (iii). These three aspects emerged in the most prominent themes extracted from Reddit dream reports. The largest theme, both in terms of the number of topics it contains (17) and the number of dream reports associated with it (20K), is People and relationships. The main topics included in this theme are family members and relationships, and intimacy and romance (see Table <ref> and Table 1 SI). Characters and interactions are also represented in themes concerning Animals, Supernatural entities, and Religious and Spiritual, which map directly to the two HVdC subcategories of Animals and Imaginary Characters. Interactions respectively of aggressive and friendly type are represented in the themes Violence and Death and Life Events. The second most frequent group of themes involves events or elements that are typical of well-characterized places, such as home or the workplace. Among them, the most prominent theme is Indoor locations, which includes 16 topics such as house, hospital, and mall. These themes map to different subcategories of the HVdC category of Settings. The remaining themes correspond to the categories of Activities, Objects, Emotions, and Time in the HVdC framework. 
With respect to Activities, Reddit users more frequently reported their mental activities and perceptions, rather than physical actions, when recounting their dreams. Specifically, visual and auditory sensory experiences were often recounted. The most commonly described categories of Objects included body parts (often with gruesome details) and personal objects, with a particular emphasis on technological devices such as phones or elements from virtual worlds of computer games. Emotions were represented by a single theme, characterized by common formulas for describing emotions of any type, with negative emotions being more prominently featured. Finally, while one theme captured temporal scales, such references were infrequent, appearing in fewer than 1,000 dreams. Some of the HVdC categories did not map directly to any of the themes in our taxonomy. This is the case with Misfortune/Fortune, for example. We found that these types of events are usually reported as elements in dreams that were predominantly characterised by other themes, such as Life events, People and relationships, or Violence and death, for example. The emerging themes in the dream reports do not exist in isolation; they often co-occur in the same reports and jointly construct their narratives. Figure <ref> presents the backbone of the network of co-occurrence of these themes, where the connections between themes are weighted proportional to the number of dreams in which they co-occur. People and relationships and Indoor locations are the most frequent and central themes, occurring often with many other ones. Feelings are mainly associated with actions and relationships rather than objects or settings; conversely, sensory elements are more associated with settings, especially outdoor locations. Some of the central topics, such as relationships and emotions, are highly valued in rating scales by dream researchers. However, other themes, such as indoor/outdoor settings, are frequently omitted in HVdC coding research. Besides the prominence of both types of settings, their proportions differ somewhat. Our data show a greater predominance of indoor settings, while HVdC norms indicate lower levels of indoor settings: 49% for males and 61% for females <cit.>. This difference could reflect our data, which includes two years of the COVID pandemic, whereas the comparison HVdC studies do not. Our findings challenge traditional dream analysis literature that found males mostly dream of male characters. <cit.> The two most recurring topics in our data concern the presence of female characters in dreams, despite the young male dominance in the Reddit user base. <cit.> A possible reason for this contradicting finding could have to do with HVdC ratings not being exactly equivalent to ours. In our method, the strength of association of a dream with topics concerning male characters and indoor settings is stronger the more mentions of male characters and indoor settings appear in the report. Conversely, in HVdC, a male character who is mentioned only once in the dream gets the same score as one who is referred to in every sentence of the report. Likewise, an indoor setting is scored in the same way regardless whether it is inferred from one mention in the dream or whether the dream account is largely a description of an indoor setting. In that sense, our methods allows for a richer characterisation and quantification of dream content. 
§.§ Topics and themes by dream type Using odds ratios (see Methods), we uncovered that certain topics (Table <ref>) and themes (Figure <ref>) appeared more frequently in dreams of specific types. §.§.§ Topics and themes in Nightmares Results in Table <ref> revealed among the top topics specific for nightmares, keywords such as shadows, rape, sexual-assault, scary, creepy, violence, demons, 911, and blood. In terms of themes, Religious and spiritual is the most prominent in nightmares, followed by Feelings, Supernatural entities, and Violence and death, while the least prominent ones were Media and tech and Time, time travel and timelines. §.§.§ Topics and themes in Vivid dreams Results presented in Table <ref> reveal among topics specific for vivid dreams, keywords such as felt-real, religious, felt-right, felt-wrong, apocalyptic, felt-pain, baby birth and pregnancy, aliens, mirrors, and nuclear war. This has translated into the most prominent themes being Feelings, followed by Sights and vision and Life events. §.§.§ Topics and themes in Lucid dreams Results presented in Table <ref> reveal among topics specific for lucid dreams, keywords such as control, alternate-reality, and reflection and mirrors, felt-real, couldn-speak, falling, heard-voices and demon. When considering themes, Mental reflection and interactions, followed by Movement and action, Sights and vision, and Feelings were the most prominent. §.§.§ Topics and themes in Recurring dreams Results in Table <ref> revealed among the top topics specific for recurring dreams, keywords such as cheating, ex, teeth, school, house and apartment, time-travel, and sex-dreams. Looking at the themes, we found School, Human body, especially teeth and blood, and Other topics (size, smell, apocalypse, etc.) being those that characterize recurring dreams. §.§ Topics and themes through time Finally, we studied the evolution of topics and themes over time. We focused on the period with at least 300 monthly dream reports, namely from January 2019 to September 2022. We found that soon after COVID-19 started, there was a gradual collective shift in the content of dreams from (Figure <ref>). At the very beginning of the COVID-19 outbreak (February-March 2020), and even more so after the first peak of recorded deaths (April 2020), people gradually dreamt less of Sight and vision, Outdoor locations, Movement and action, and Mental reflections and interactions, while they dreamt more of Religious and spiritual figures, Indoor locations, and Human body, especially teeth and blood. An example dream from this period talks about “spitting teeth and cornea onto the palm.” We found a sharp decrease in the frequency of Life events topics in February 2020, while in March 2020 there we recorded a peak of mentions of Human body, especially teeth and blood, which are predominantly found in nightmares, as our previous analysis showed. These trends continued throughout the time of the COVID-19 second death peak (January 2021), from when we also detect a stark decrease in dreams of other People and relationships, Feelings, and, for a while, of Animals and Work. Finally, the start of the war in Ukraine (February 2022), is associated with a strong transition on the content of people's dreams towards violent topics. We found a sharp increase in topics from the themes of Violence and death, and Other topics (size, smell, apocalypse, etc.). 
Example dreams from the former group talk about “being a murderer,”,“tooth falling out,” and “getting shot in the head.” Example dreams from the second group talked about “temple spirit monster,” and “being attacked by an opossum.” The changes in response to external events were also evident at the level of individual topics, such as those about soldiers and nuclear war, which both peaked after the war in Ukraine started (Figure <ref>). § DISCUSSION The current study advances the field of dream science by implementing a new methodology to study dreams in a more objective and ecological manner and also showcasing how this method can generate new insights into dreams. §.§ Reddit as a source of dreams Prior dream research has relied heavily on traditional laboratory, survey, and diary methods. Laboratory studies benefit from monitoring participants with PSG and waking them directly from REM sleep,<cit.> which is known to increase dream recall drastically.<cit.> However, these dreams are not representative of natural dream content, as they are highly influenced by the laboratory setting.<cit.> Survey studies avoid this contextual bias on dream content, but often ask of a “recent” dream, which might be days or weeks prior to the survey response, thus suffering from memory distortion. Lastly, diary studies track dream content in a participant sample through morning diaries.<cit.> While these studies benefit from having dream content collected from ecological settings and with less retrospective bias, these studies are often limited by small sample sizes due to the high burden of participation. In the current study, we extracted morning dream reports from social media, thus capturing dreams in an ecological setting and also at a much larger scale. While other studies have investigated dedicated online dream forums,<cit.> Reddit is one of the most popular social media sites and its usage continues to grow at higher rates than specialized forums, suggesting that the current approach might continue to be a source of population dream content for scientific analysis. §.§ Unsupervised generation of dream themes In using this ecological data source to generate common dream themes, our results complement previous studies using more traditional survey methods. Though it is widely accepted that dream content varies based on individual personality and cultural differences, previous research suggests there might also be thematic “universals” that appear in a disproportionately high amount of dreams. Universal dream themes are typically quantified using surveys with predetermined thematic content developed by the researcher,<cit.> which are biased towards existing knowledge of dreams. In the current study, our unsupervised approach to developing common dream themes confirms some previously developed themes while also offering more specificity within them. For example previous survey studies using the Typical Dreams Questionnaire<cit.> or Dream Motif Scale<cit.> have identified common dream themes of failure, paranoia, snakes/insects and animal symbolism, alien life, fighting, and sex. Our results identified similar themes, while offering a finer-grained view with the subtopics that formed each theme. For example, we observed a popular theme of animals, and animal subtopics included what might be positive animal dreams (kitten, birds) and negative animal dreams (spider, maggots; snake, bite). 
It is difficult to compare the ranking of our dream themes with prior work, since prior work is heavily-dependent on the method of data collection.<cit.> A notable advance of our approach is the ability to “map out” the relationship between dream themes presence. The co-occurence of common themes has not been studied extensively before, and future work comparing the co-occurence of waking and dreaming themes might help to uncover what is truly unique about dream content. §.§ Phenomenology of dream subtypes Dreams are highly varied experiences, and have long been grouped into subclasses or types of dreams (e.g., nightmares, lucid dreams). However, these dreams are often defined by a single feature (e.g., nightmares as intensely negative dreams) and the phenomenological variety within each subtype is not well understood. The present results offer new insights into the consistent-yet-variable content of nightmares, lucid dreams, vivid dreams, and recurring dreams, some of which have important clinical implications. Nightmares are defined as intensely negative dreams, sometimes with a secondary requirement that they result in a direct awakening, <cit.> and have immense clinical relevance in PTSD <cit.> and other psychiatric diagnoses. <cit.> Our observation of keywords relating to sexual assault highlight prior observations of uniquely episodic event replay in PTSD patients, <cit.> including survivors of sexual assault.<cit.> The additional observation of increased themes of Feeling and Violence suggest that the episodic replays are highly emotional recreations of violent events, and the inclusion of many escape-related words highlights the helplessness felt by many recurrent nightmare sufferers. <cit.> For nightmare sufferers, the negative affect during dreams <cit.> or the dream recall during the day might increase negative symptoms of other co-morbid diagnoses (e.g., anxiety). <cit.> Lastly, the heightened presence of supernatural entities in nightmares might relate to the common state of sleep paralysis, an under-studied and cross-cultural phenomenon that occurs during sleep-wake transitions and frequently involves a feeling of helplessness amidst a hallucinated “demon” or otherwise frightening figure.<cit.> Vivid dreams are highly realistic (or similarly, well-remembered) dreams. Our unsupervised approach suggests that vivid dreams are not only realistic (e.g., keywords felt-real, real-like), but also that these dreams often contain major life events, strong emotions, and supernatural/religious experiences. Vivid dreams included births and pregnancies, missiles and war, apocalyptic events, and alien invasions. This dream subtype overlaps almost directly with a class of dreams referred to as “big dreams.” <cit.> Big dreams occur rarely, but when they do, they are highly meaningful experiences that make a significant and long-lasting impression on waking life. Thus, our analysis of vivid dreams might be representative of these dreams, given that they consisted of major life events and religious experiences that likely influenced future thinking. Interestingly, the presence of an Alien invasion theme in vivid/realistic dreams suggests that prior reports of UFO abductions might result from cases of dream-reality confusion, <cit.> where a dreamt abduction is misinterpreted as a memory from waking life. <cit.> Lucid dreams are defined as those that include awareness of the dream while still dreaming, <cit.> sometimes with a secondary requirement of having control over the dream. 
<cit.> The present unsupervised analysis was consistent with these defining features, with common themes and keywords in lucid dreams such as mental reflection and control. Additionally, our results confirm more recent preliminary findings about lucid dream content, such as frequent episodes of flying <cit.> and an overlap with sleep paralysis. <cit.> Though lucid dreams are generally regarded as more positive in valence than non-lucid dreams, <cit.> there are more recent reports of extremely negative lucid dreams, or lucid nightmares. <cit.> Our results offer a cohesive explanation for these differential findings, in that we observed a general heightened realness and emotion in lucid dreams (themes of Feeling and Sights and visions and keywords of felt-real) without attachment to positive or negative valence. Our recent findings, focused on a different subreddit (r/LucidDreaming) suggest that positively-valenced lucid dreams are more likely to occur when dream control is involved, <cit.> and the current results highlight the importance of focusing future clinical applications of lucid dreaming on the dream control rather than simply awareness of the dream (see <cit.> for a review of the clinical efficacy of lucid dreams to treat nightmares). A recurrent dream is one that is experienced repeatedly, and these have been estimated to occur at least once in roughly 75% of the population.<cit.> It is likely that the many themes and keywords we observed in recurrent dreams are related to waking anxieties or worries. The limited amount of prior work on recurrent dream phenomenology suggests that recurrent dreams are primarily related to waking anxieties <cit.> and other negative content, <cit.> and also increase in frequency during periods of stress. <cit.> Our analysis extends these findings by observing more specific anxieties in recurrent dream content. The top two themes were related to relationships, particularly negative aspects of relationships (i.e., cheating, ex-partners), and other common themes were even still related to relationships (e.g., sex, dating, crush). Other recurrent dream themes were explicitly negative, and at times ultraviolent; themes regarding serial killers, paralysis, sexual assault, and the apocalypse suggest that recurrent dreams are far more negative than positive or neutral. We suspect that many of these recurrent dreams would also be classified as nightmares (see statistics in Figure SI1), and thus our analysis might help future work in the prediction/monitoring of nightmares via the inclusion of recurrent themes. §.§ Impact of major collective events on dreams Previous research has revealed that major personal and cultural events might influence dream content. For example, an increase in nightmares was observed after the terrorist attacks of September 11th, 2001 <cit.> and during the COVID-19 pandemic. <cit.> Dreams during the COVID-19 pandemic have also been shown to include pandemic-related content. The present results expand on these prior findings by offering a finer-grained view into the topical and temporal impact of major events on dreams. Rather than a categorical increase in nightmares or pandemic-related content, our analysis allowed us to identify specific sub-themes of pandemic-related topics and how they change over time. During the COVID-19 pandemic, population dreams transitioned from outdoor to indoor locations and decreased in social interactions, as did our waking experiences during the pandemic. 
Interestingly, these two effects had qualitatively different time courses, where the location change of dreams was longer-lasting and more persistent than social changes. This might reflect the mass adoption of technical communications (e.g., Zoom gatherings) that occurred while people were still mostly indoors. These effects are consistent with prior work showing a continuity between wake and dream content <cit.> and future work might evaluate how subtopics contribute uniquely to this continuity. <cit.> These results also contribute to hypothesized dream functions, such as the drop in social content contradicting the Social Simulation Theory that predicts a compensatory effect of social activity in dreams. <cit.> We also observed more mental reflections in dreams, which might reflect a “cognitive continuity” hypothesis <cit.> if the population was more reflective during the increased loneliness during the pandemic.<cit.> With the state of mental reflections during waking being less established than location and social changes, our finding suggests that it might be possible in the future to infer waking cognitive changes based on those observed in dream content. Alternatively, the rise in mental reflection during dreams might be representative of the increase in lucid dream frequency during the pandemic.<cit.> While dreams during the COVID-19 pandemic have been investigated at great length <cit.>, much less work has been dedicated to observing the influence of the Russo-Ukrainian war on sleep and dream patterns. We observed a population increase in negative war-related topics (e.g., violence, death) after the start of the war, which is notable because our social media sample is expected to be primarily American users and thus not those who were directly exposed to the war. The association between the war and population dreams could be a result of American media exposure (see also <cit.>, highlighting a cognitive continuity between dreams and wake, where it is not daily activities per se, but the thought processes and internal imagery that predicts dream content). The negative content of population dreams during the war has important implications, given recent findings that negative dreams, including specifically dreams of death, are predictive of next-day negative affect <cit.> and nightmares have extensive mental health implications.<cit.> §.§ Limitations The first limitation of our work has to do with potential biases in the dream reports that we analysed. The first bias is that Reddit users report dreams that they have recalled spontaneously. However, according to the salience hypothesis, dream content that is vivid, new, intense, or strange, is more easily remembered.<cit.> Moreover, personality traits also affect dream recall, so that people with higher openness to experience, and who are prone to imagination and fantasy, more often recall their dreams.<cit.> Finally, the users of Reddit are not a representative sample of the general population; they are known to be more male, young, educated, and urbanites.<cit.> The second limitation is about the elements from dream reports not reflecting the dream content. Given the free-form social media format, the users would not always describe only their dream report, but sometimes they would include some contextual information, for example recounting how they felt when they were woken up from the dream, or how the people they dreamt of are related to them in real life. 
For example, a Reddit user might drop in their dream report a sentence like “It was about 5 am when I woke up from this dream...” Such contextual information is not a direct description of the dream content, yet it often helps to qualify it and it is therefore useful to our analysis. Dream researchers analysing a small number of dreams could read through each report and manually remove such instances to focus on dream content only. Given the automatic topic extraction method that we employed, such a data cleaning step was not possible. Some common categories of contextual information that were unrelated to the dream experience (e.g., the author explaining when they last time met the person they dreamt of, or what time they woke up from the dream) emerged as independent topics in our topic analysis and we removed them (see Methods, Clustering Topics). The third limitation of our work is shared with other content analysis methods.<cit.> We inevitably lose some dream information that could not be captured by the topics. This also means that our approach cannot represent subtle aspects of the individuality of each dreamer. §.§ Implications Our work has three main implications: Developing an unsupervised dream content analysis method. The development of dream scoring systems has historically favored certain aspects of dream reports over others, based on assumptions about what are the most emotionally and socially important dimensions in dream analysis, rather than considering all potential topics or types of human experiences. Previous attempts to apply machine learning to dream content analysis have either replicated existing scoring systems<cit.> or focused on specific types of dream content (e.g., symptoms<cit.>). In contrast, our study used a deep learning approach that prioritizes no particular topic. By analyzing dream content from an exclusively linguistic standpoint, we were able to discover and quantify themes that have not been previously considered. Uncovering the first ecological taxonomy of dream topics. Our results demonstrate that many of the themes we discovered align with those captured by existing scoring systems, particularly the Hall and Van de Castle scale, as it includes includes topics such as interpersonal relationships, emotions, and friendly and violent interactions. However, our approach also revealed differences in the frequency of certain topics, such as a significant category of weather-related topics that have not been previously considered in any dream scoring system. The significance of weather as a conversational topic varies, being mentioned as a safe subject for small talk or as the subject of jokes regarding dull conversations. Nevertheless, considering the evolutionary perspective, weather played a crucial role during the era of limited shelters and the absence of temperature controls, which shaped human instincts. Hence, weather likely holds a deep-rooted importance for our well-being and survival. To sum up, our findings support the continued use of traditional scoring systems in clinical psychology research and psychotherapy, where there is a strong rationale for focusing on the more emotional and social content of dreams. At the same time, our results suggest that AI tools can provide a more detailed and nuanced understanding of dream content, and may be mature enough to support dream analysts in their work. Collaborating with AI in scientific discovery. 
The AI's ability to categorize dream content in ways unknown to human researchers is reminiscent of the scenario when chess and go-playing programs began surpassing human players. These programs didn't merely excel at the strategies employed by humans, instead, they developed unique strategies that had never been observed by humans before. It was assumed that humans approached both games based on evolutionary instincts developed for social interactions, while the AI adopted a more objective perspective, solely focusing on the game rules without any assumptions influenced by human endeavors. Similarly, AI comprehends texts based solely on their intrinsic content, without filtering them through instinctual categories primarily designed for interactions during waking states. § METHODS §.§ Data pre-processing For the purpose of topical analysis, we employed the model from Spacy (<spacy.io>) to segment dream reports into individual sentences. The corpus consisted of 44,213 dream reports, for a total of 761,619 sentences. On average, each dream report contains 17 sentences and 290 words. Notably, recurring dreams tended to be shorter, with 14 sentences and 253 words, while lucid dreams were longer, with 27 sentences and 465 words (Table <ref>). It is worth mentioning that the distribution of dream reports was not concentrated among a few individuals; the majority of users shared a single dream report, whereas only a small group of frequent users contributed more than 30 dream reports (Figure <ref>). We discarded the top frequent 10,000 sentences with the fewest characters. These sentences typically contained non-dream related content, such as greetings to the reader (hello, hi, etc.), sentences consisting solely of special characters, and similar instances. This step of removing a significant number of sentences that were not relevant to the dream report itself ensured that our subsequent labeling process for non-dream topics was sufficient to preserve predominantly dream topics. §.§ Topic Modelling Our topic modelling procedure consisted of the following five steps: i) extracting topics, ii) grouping topics into themes, iii) building dream topics taxonomy, iv) finding topics and themes by dream type, and v) finding topics and themes through time. §.§.§ Extracting topics BERTopic <cit.> is largely based on a neural model that is designed to identify latent topics within document collections. Unlike conventional topic modeling techniques such as Latent Dirichlet Allocation, BERTopic leverages semantic information by utilizing embeddings as an initial step to cluster documents into semantically cohesive topics. Each discovered topic in BERTopic is described by a list of 10 topic words, which are the most distinctive words associated with that particular topic. The topics are numbered based on their frequency rank, indicating their prevalence within the corpus. In its default configuration, BERTopic assigns a single topic to each document. However, our manual inspection of the dream reports revealed that most dreams cannot be adequately characterized by a single topic. Instead, they often encompass multiple topics, such as the dream's location, the people involved, and the emotions experienced by the dreamer. Additionally, dreams are known for combining various elements from waking experiences, resulting in a sense of bizarreness. To address this, our initial alternative was to modify BERTopic to associate a distribution of topics with each dream report. 
We tested this approach by allowing up to ten topics per report. Subsequently, we applied a threshold to the probabilities associated with each topic to identify the relevant topics for each report. However, we encountered two issues with this method. Firstly, due to the substantial variability in the length of dream reports, we often missed relevant topics in longer dreams. Secondly, even with varying the threshold, over 55% of dreams ended up being associated with no topic, resulting into a considerable loss of data. To mitigate this issue, we opted to consider individual dream sentences as input documents for BERTopic. Applying BERTopic at sentence-level is a practice recommended by the BERTopic authors on the official website of the tool. Such a solution enabled us to associate over 88% of dream reports with at least one topic. To establish robust topic representations, we configured the hyperparameters of BERTopic. We set the minimum frequency threshold () to 10, ensuring that a word appears in at least 10 sentences before it is considered for inclusion in the topic representation. This criterion helped to ensure that topics were formed based on words with a reasonable level of occurrence within the dream reports. Additionally, we employed the Maximal Marginal Relevance (MMR) algorithm with a diversity parameter set to 0.4. The MMR algorithm was utilized to enhance the diversity of the topic words, preventing dreams from being described solely by near-synonyms. By incorporating this diversity measure, the resulting topics encompassed a wider range of relevant terms, capturing distinct aspects and avoiding redundant or similar descriptions within the topic representation. Our model successfully extracted a total of 288 topics which included both dreaming and non-dreaming subjects. An exhaustive manual inspection of the topic words demonstrated a significant degree of semantic coherence across the majority of topics: for most topics, the top four topic words provided sufficient information to understand the essence of the topic. To ensure topic specificity, we excluded all instances of the terms “dream” and “dreams” from the list of topic words associated with each topic. Furthermore, we aggregated topics related to multiple sentences within the same dream, consolidating them into a comprehensive list. This approach allowed us to capture the overarching themes and content associated with individual dreams more effectively. §.§.§ Grouping topics into themes To provide a concise overview of the 288 identified topics, we used clustering techniques to group them into broader, yet semantically-coherent themes. To achieve this, we used the Sentence Transformer model all-mpnet-base-v2 <cit.> to project each topic word into a 768-dimensional embedding space. For each topic t, every word w^t within that topic was assigned a probability (p_w^t) by BERTopic, indicating its contribution to the overall representation of the topic. To compute an embedding for a topic (t), we calculated the weighted sum of the embeddings of its topic words, where each word's embedding was weighted by its normalized probability (p_w^t): t = ∑_w^temb(w^t) · p_w^t, where emb is the sentence transformer model that maps the topic words into embeddings, and w^t are all the words associated with topic t. To facilitate semantic clustering of the topics, we standardized their embeddings and reduced their dimensionality from 768 to 10 using UMAP. 
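To make the preceding steps concrete, the following minimal sketch illustrates the sentence-level topic extraction and the topic-embedding computation described above. It is our illustration rather than the authors' implementation: the variable `sentences`, the use of CountVectorizer's min_df as the "minimum frequency threshold", and the exact BERTopic version are assumptions on our part.

```python
import numpy as np
from bertopic import BERTopic
from bertopic.representation import MaximalMarginalRelevance
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import StandardScaler
from sentence_transformers import SentenceTransformer
from umap import UMAP

# `sentences` = list of dream-report sentences after the pre-processing above
vectorizer = CountVectorizer(min_df=10, stop_words="english", ngram_range=(1, 2))
mmr = MaximalMarginalRelevance(diversity=0.4)   # diversify the 10 topic words
topic_model = BERTopic(
    embedding_model="all-mpnet-base-v2",
    vectorizer_model=vectorizer,
    representation_model=mmr,
    min_topic_size=100,   # a topic must be supported by at least 100 sentences
    nr_topics="auto",     # merge near-duplicate topics
)
topics, _ = topic_model.fit_transform(sentences)

# Topic embeddings: weight-normalised sum of topic-word embeddings,
# then standardisation and UMAP reduction from 768 to 10 dimensions.
encoder = SentenceTransformer("all-mpnet-base-v2")
topic_ids = sorted(t for t in set(topics) if t != -1)   # -1 is BERTopic's outlier bin
topic_vecs = []
for t in topic_ids:
    words, weights = zip(*topic_model.get_topic(t))
    p = np.array(weights) / np.sum(weights)             # normalised word weights
    topic_vecs.append((encoder.encode(list(words)) * p[:, None]).sum(axis=0))
reduced = UMAP(n_components=10).fit_transform(
    StandardScaler().fit_transform(np.array(topic_vecs))
)
```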
We then explored two clustering algorithms, K-Means and DBSCAN, applied to the reduced topic embeddings. Through manual inspection, we observed that the K-Means clustering method effectively generated semantically coherent clusters, with most topics within each cluster sharing a cohesive theme. We also observed that K-Means with 20 clusters produced the result with the best quality. Further details on the analysis can be found in Section SI2.1. §.§.§ Filtering non-dream content and adjusting themes K-Means gave us 20 clusters, which contained dreaming and non-dreaming topics, as well as a mixture of both. Despite the potential inclusion of non-relevant elements in dream reports, BERTopic's remarkable semantic capabilities proved highly effective in distinguishing the actual recollections from other content. Leveraging this capability, we employed a filtering process to exclude non-dream themes and topics from our analysis. To achieve this, we manually classified each theme into one of five categories: * Dream-related content – encompassing elements such as dream locations, animals, and people. * Dream types – including nightmares and recurring dreams. * Dreaming-waking interface topics – covering experiences like waking up crying or feeling confused, as well as being awakened by an alarm. * Waking phenomena – addressing aspects such as mental health issues and the date and time of the dream. * Social media artifacts – encompassing elements like expressions of gratitude to the reader or requests for dream interpretation. To assign these categories, we relied on the 10 topic words associated with each topic within a theme. Where necessary, we also consulted three representative dream sentences generated by BERTopic, along with 20 randomly sampled dream sentences. In most instances, the topics naturally fell into one of the predefined categories, allowing us to filter out entire themes comprising non-dream content. We found three non-dream clusters, as discussed above: Dream types, Dreaming-waking interface topics, and Social media artifacts. Notably, Waking phenomena was not present as a coherent cluster. We had to manually inspect all 288 topics and their top corresponding dream sentences, which allowed us to separate out this category. For example, from the theme Time, Time Travel, and Timelines, we excluded topics such as `5am currently clock checked phone' and `date 2018 June.' Similarly, in the theme Mental Reflection and Interactions, we omitted topics such as `interpret, does really mean' and `don remember details.' After this filtering step, we ended up with the final 217 dream topics. In addition, some composite clusters contained divergent dream themes (i.e., semantically dissimilar overarching themes). For instance, there was a cluster talking about Outdoor environment and space, which could be deconstructed into three themes: Outdoor locations, Space, and Weather. We therefore manually split up such clusters, and also carried out a minimal re-adjustment of topics into appropriate clusters: e.g., our largest topic (0 lady, face, looked, head) was incorrectly present in the cluster we later termed Other topics, which acts as a catch-all for topics that did not fit into a semantically coherent cluster, and we moved it into People and relationships. After carrying out this whole process, we ended up with 217 topics clustered into 22 dream themes.
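Continuing the sketch above, the grouping of topics into themes amounts to running K-Means on the reduced topic embeddings and manually inspecting each cluster. This is again only an illustration, reusing the hypothetical variables `reduced`, `topic_ids`, and `topic_model` from the previous sketch.

```python
from collections import defaultdict
from sklearn.cluster import KMeans

# one 10-dimensional vector per topic, in the same order as `topic_ids`
labels = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(reduced)

clusters = defaultdict(list)
for topic_id, label in zip(topic_ids, labels):
    clusters[label].append(topic_id)

# Manual step described above: read the top topic words of every cluster member,
# drop non-dream themes, and split or re-assign incoherent clusters.
for label, members in sorted(clusters.items()):
    preview = [[w for w, _ in topic_model.get_topic(t)[:4]] for t in members[:3]]
    print(label, len(members), preview)
```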
§.§ Building dream topics taxonomy (co-occurrence network) To create a taxonomy of dream topics, we employed a network-based approach that explores the interplay between dream themes and their constituent topics within dream reports. To facilitate this analysis, we conducted an initial assessment of the frequency distributions of both topics and themes across the entire corpus of dream reports. For topics, we linked each one to a dream report (/dreamer) if the topic was found at least once in the report (in all dream reports of that dreamer), and counted the number of dreams (/dreamers) associated with each topic. In Table <ref>, we present the topic name, top 4 topic words, number of dreams, and number of dreamers for the top 20 most frequent topics. Full statistics (e.g., the 10 topic words and number of sentences associated with each topic) for these and the rest of the topics can be found in the Supplementary Information File (see Data Availability section). Similarly, we linked a dream report (/dreamer) to a theme if any of the theme's constituent topics was found at least once in the report (/in all dream reports of that dreamer). Additionally, we computed the number of topics in each theme associated with the corresponding dream reports (dreamers). In Table <ref>, we present the theme name, the top 3 topics associated with it, the total number of topics in it, the number of dreams, and the number of dreamers associated with it. Additionally, we linked each theme to the corresponding Hall and Van de Castle (HVdC) category, where such a link could be found. The full list of topics belonging to each theme can be found in Table SI1. Having these frequencies at hand, we built the co-occurrence network of dream themes as follows. Each node in the network represents a theme, and pairs of themes were connected by an edge if they co-occurred in a dream report. The edges were weighted by the number of dream reports in which such a co-occurrence was found. This undirected network had a single connected component with 22 nodes and edge weights ranging from 13 to 7643 (mean = 762.31 ± 928.54, median = 482.0). For the purpose of visualization, we used backboning to sparsify the network by preserving the most important edges. We used noise-corrected backboning <cit.>—a technique that relies on a statistical null-model to identify and prune non-salient edges—with a backboning threshold of 3.8 (which reduced the network from 231 to 46 edges). We used Gephi <cit.> to visualize this network (see Figure <ref>). We scaled the size of the nodes according to the number of dreams associated with each theme. §.§ Finding topics and themes by dream type (odds-ratio analysis) In addition to discovering common topics across all dreams, we studied whether specific types of dreams (i.e., nightmares, lucid, vivid, and recurring dreams) are characterised by particular topics and themes. The odds-ratio metric allowed us to do so, as it compares the odds of a topic occurring in a specific type of dream (e.g., recurring) to the odds of the same topic occurring in the rest of the dreams. We first assigned the dream reports to the corresponding dream types by searching for relevant keywords (‘nightmar’, ‘recurring’ or ‘re-occurring’, ‘lucid’, ‘vivid’) in the Reddit post title and body, or by the post flair where the dream type had a corresponding one (flairs were available for nightmares and recurring dreams only). If either of these conditions was satisfied, we assigned the dream to the experimental subset; else to the control subset.
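The dream-type assignment just described can be summarised in a few lines. The column names (`title`, `body`, `flair`) are hypothetical placeholders for whatever fields the Reddit export actually contains; the sketch only illustrates the keyword/flair rule.

```python
import pandas as pd

KEYWORDS = {
    "nightmare": ["nightmar"],
    "recurring": ["recurring", "re-occurring"],
    "lucid": ["lucid"],
    "vivid": ["vivid"],
}

def assign_dream_types(df: pd.DataFrame) -> pd.DataFrame:
    text = (df["title"].fillna("") + " " + df["body"].fillna("")).str.lower()
    flair = df["flair"].fillna("").str.lower()
    for dream_type, kws in KEYWORDS.items():
        by_keyword = text.str.contains("|".join(kws))   # keyword in title or body
        by_flair = flair.str.contains(dream_type)        # flair, where available
        df[dream_type] = by_keyword | by_flair            # True -> experimental subset
    return df
```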
First, we defined a topic or theme t's importance in a dream d as: I_t,d = (# sentences mentioning topic t in d) / (# total sentences in d) We then computed the odds ratio for all topics and themes across the 4 dream types as follows: Odds Ratio(DT, t) = (Odds of DT's association with t) / (Odds of the rest of the dreams' association with t) = [∑_d ∈ DT I_t,d / # dreams in DT that do not contain t] / [∑_d ∉ DT I_t,d / # dreams not in DT that do not contain t], where DT is a dream type and t is a topic or theme. §.§ Finding topics and themes through time Analyzing the temporal dynamics of dream topics and themes holds particular significance, especially in light of major events such as the COVID-19 pandemic, known to have a substantial impact on collective dreaming patterns.<cit.> For our analysis, we focused on a monthly timescale, leveraging data spanning from January 2019 to August 2022, encompassing a total of 44 months. The inclusion criterion for each month required a minimum of 300 dream reports, ensuring robust statistical representation. Throughout this period, February 2019 recorded the lowest number of dreams (n = 366), whereas January 2021 exhibited the highest dream count (n = 1573). The average number of dreams per month was 925 ± 332, with a median of 935 dreams per month. We used topic/theme importance in a dream (I_t,d) introduced in Equation <ref> to calculate topic/theme importance at a given point in time m (i.e., month) as follows: I_t,m = ∑_(d posted at time m) I_t,d / (# dreams posted at time m) We tracked z-scores of topic/theme importances I_t,m through time, to understand the relative change of a topic/theme with respect to itself: z_I_t,m = (I_t,m - μ_I_t) / σ_I_t, where μ_I_t and σ_I_t are the mean and standard deviation of I_t,m across all months. To additionally improve the quality of the signals, we smoothed the temporal plots with a centered moving average of window length 5: z̄_I_t,m = (1/5) ∑_m'=m-2^m+2 z_I_t,m'.
[mota2020dream] Mota-Rolim, S. A. et al. The dream of god: how do religion and science see lucid dreaming and other conscious states during sleep? Frontiers in Psychology 11, 555731 (2020).
[schredl2010characteristics] Schredl, M. Characteristics and contents of dreams. International Review of Neurobiology 92, 135–154 (2010).
[hobson1977brain] Hobson, J. A. & McCarley, R. W. The brain as a dream state generator: an activation-synthesis hypothesis of the dream process. The American Journal of Psychiatry (1977).
[hartmann1995making] Hartmann, E. Making connections in a safe place: Is dreaming psychotherapy? Dreaming 5, 213 (1995).
[eagleman2021defensive] Eagleman, D. M. & Vaughn, D. A. The defensive activation theory: REM sleep as a mechanism to prevent takeover of the visual cortex. Frontiers in Neuroscience 15, 632853 (2021).
[crick1983function] Crick, F. & Mitchison, G. The function of dream sleep. Nature 304, 111–114 (1983).
[hoel2021overfitted] Hoel, E. The overfitted brain: Dreams evolved to assist generalization. Patterns 2, 100244 (2021).
[schredl2010dream] Schredl, M. Dream content analysis: Basic principles (Universitätsbibliothek der Universität Heidelberg, 2010).
[payne2010memory] Payne, J. D. Memory consolidation, the diurnal rhythm of cortisol, and the nature of dreams: a new hypothesis. International Review of Neurobiology 92, 101–134 (2010).
[wamsley2010dreaming] Wamsley, E. J., Tucker, M., Payne, J. D., Benavides, J. A. & Stickgold, R. Dreaming of a learning task is associated with enhanced sleep-dependent memory consolidation. Current Biology 20, 850–855 (2010).
[revonsuo1995content] Revonsuo, A. & Salmivalli, C. A content analysis of bizarre elements in dreams. Dreaming 5, 169 (1995).
[beck1961dreams] Beck, A. T. & Ward, C. H. Dreams of depressed patients: Characteristic themes in manifest content. Archives of General Psychiatry 5, 462–467 (1961).
[fogli2020our] Fogli, A., Maria Aiello, L. & Quercia, D. Our dreams, our selves: automatic analysis of dream reports. Royal Society Open Science 7, 192080 (2020).
[vscepanovic2022epidemic] Šćepanović, S., Aiello, L. M., Barrett, D. & Quercia, D. Epidemic dreams: dreaming about health during the COVID-19 pandemic. Royal Society Open Science 9, 211080 (2022).
[winget1979dimensions] Winget, C. & Kramer, M. Dimensions of dreams (University of Florida, 1979).
[hall1966content] Hall, C. S. & Van de Castle, R. L. Dream content analysis: Basic principles (Appleton-Century-Crofts, 1966).
[elce2021language] Elce, V., Handjaras, G. & Bernardi, G. The language of dreams: application of linguistics-based approaches for the automated analysis of dream experiences. Clocks & Sleep 3, 495–514 (2021).
[colace2003dream] Colace, C. Dream bizarreness reconsidered. Sleep and Hypnosis 5, 105–128 (2003).
[picard2021dreaming] Picard-Deland, C., Nielsen, T. & Carr, M. Dreaming of the sleep lab. PLoS ONE 16, e0257738 (2021).
[devlin2018bert] Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
[baumgartner2020pushshift] Baumgartner, J., Zannettou, S., Keegan, B., Squire, M. & Blackburn, J. The Pushshift Reddit dataset. In Proceedings of the International AAAI Conference on Web and Social Media, vol. 14, 830–839 (2020).
[keller1995use] Keller, J. W. et al. Use of dreams in therapy: A survey of clinicians in private practice. Psychological Reports 76, 1288–1290 (1995).
[grootendorst2022bertopic] Grootendorst, M. BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv preprint arXiv:2203.05794 (2022).
[domhoff2003scientific] Domhoff, G. W. The scientific study of dreams: Neural networks, cognitive development, and content analysis (American Psychological Association, 2003).
[hargittai2020potential] Hargittai, E. Potential biases in big data: Omitted voices on social media. Social Science Computer Review 38, 10–24 (2020).
[siclari2013assessing] Siclari, F., LaRocque, J. J., Postle, B. R. & Tononi, G. Assessing sleep consciousness within subjects using a serial awakening paradigm. Frontiers in Psychology 4, 542 (2013).
[nemeth2022route] Nemeth, G. The route to recall a dream: Theoretical considerations and methodological implications. Psychological Research, 1–24 (2022).
[schredl2002questionnaires] Schredl, M. Questionnaires and diaries as research instruments in dream research: Methodological issues. Dreaming 12, 17–26 (2002).
[sanz2018experience] Sanz, C., Zamberlan, F., Erowid, E., Erowid, F. & Tagliazucchi, E. The experience elicited by hallucinogens presents the highest similarity to dreaming within a large database of psychoactive substance reports. Frontiers in Neuroscience 12, 7 (2018).
[nielsen2003typical] Nielsen, T. A. et al. The typical dreams of Canadian university students. Dreaming 13, 211 (2003).
[yu2012pornography] Yu Kai Ching, C. Pornography consumption and sexual behaviors as correlates of erotic dreams and nocturnal emissions. Dreaming (2012).
[mathes2014frequency] Mathes, J., Schredl, M. & Göritz, A. S. Frequency of typical dream themes in most recent dreams: An online study. Dreaming 24, 57 (2014).
[gieselmann2019aetiology] Gieselmann, A. et al. Aetiology and treatment of nightmare disorder: State of the art and future perspectives. Journal of Sleep Research 28, e12820 (2019).
[campbell2016nightmares] Campbell, R. L. & Germain, A. Nightmares and posttraumatic stress disorder (PTSD). Current Sleep Medicine Reports 2, 74–80 (2016).
[sheaves2022nightmares] Sheaves, B., Rek, S. & Freeman, D. Nightmares and psychiatric symptoms: A systematic review of longitudinal, experimental, and clinical trial studies. Clinical Psychology Review, 102241 (2022).
[phelps2008understanding] Phelps, A. J., Forbes, D. & Creamer, M. Understanding posttraumatic nightmares: An empirical and conceptual review. Clinical Psychology Review 28, 338–355 (2008).
[krakow2002nightmare] Krakow, B. et al. Nightmare frequency in sexual assault survivors with PTSD. Journal of Anxiety Disorders 16, 175–190 (2002).
[rousseau2018mechanisms] Rousseau, A. & Belleville, G. The mechanisms of action underlying the efficacy of psychological nightmare treatments: A systematic review and thematic analysis of discussed hypotheses. Sleep Medicine Reviews 39, 122–133 (2018).
[mallett2021relationship] Mallett, R. et al. The relationship between dreams and subsequent morning mood using self-reports and text analysis. Affective Science, 1–6 (2021).
[blagrove2004relationship] Blagrove, M., Farmer, L. & Williams, E. The relationship of nightmare frequency and nightmare distress to well-being. Journal of Sleep Research 13, 129–136 (2004).
[denis2018systematic] Denis, D., French, C. C. & Gregory, A. M. A systematic review of variables associated with sleep paralysis. Sleep Medicine Reviews 38, 141–157 (2018).
[bulkeley2011big] Bulkeley, K. & Hartmann, E. Big dreams: An analysis using central image intensity, content analysis, and word searches. Dreaming 21, 157 (2011).
[wamsley2014delusional] Wamsley, E., Donjacour, C. E., Scammell, T. E., Lammers, G. J. & Stickgold, R. Delusional confusion of dreaming and reality in narcolepsy. Sleep 37, 419–422 (2014).
[holden2002alien] Holden, K. J. & French, C. C. Alien abduction experiences: Some clues from neuropsychology and neuropsychiatry. Cognitive Neuropsychiatry 7, 163–178 (2002).
[mallett2021exploring] Mallett, R. et al. Exploring the range of reported dream lucidity. Philosophy and the Mind Sciences 2, 1–23 (2021).
[windt2018spontaneous] Windt, J. M. & Voss, U. Spontaneous thought, insight, and control in lucid dreams. The Oxford Handbook of Spontaneous Thought: Mind-Wandering, Creativity, and Dreaming, 385–410 (2018).
[picard2020flying] Picard-Deland, C., Pastor, M., Solomonova, E., Paquette, T. & Nielsen, T. Flying dreams stimulated by an immersive virtual reality task. Consciousness and Cognition 83, 102958 (2020).
[mainieri2021sleep] Mainieri, G. et al. Are sleep paralysis and false awakenings different from REM sleep and from lucid REM sleep? A spectral EEG analysis. Journal of Clinical Sleep Medicine 17, 719–727 (2021).
[mallett2020partial] Mallett, R. Partial memory reinstatement while (lucid) dreaming to change the dream environment. Consciousness and Cognition 83, 102974 (2020).
[schredl2022differences] Schredl, M., Fuchs, C. & Mallett, R. Differences between lucid and nonlucid dream reports: A within-subjects design. Dreaming (2022).
[voss2013measuring] Voss, U., Schermelleh-Engel, K., Windt, J., Frenzel, C. & Hobson, A. Measuring consciousness in dreams: the lucidity and consciousness in dreams scale. Consciousness and Cognition 22, 8–21 (2013).
[mallett2022benefits] Mallett, R., Sowin, L., Raider, R., Konkoly, K. R. & Paller, K. A. Benefits and concerns of seeking and experiencing lucid dreams: benefits are tied to successful induction and dream control. Sleep Advances 3, zpac027 (2022).
[schredl2020lucid] Schredl, M. & Bulkeley, K. Lucid nightmares: An exploratory online study. International Journal of Dream Research, 215–219 (2020).
[stumbrys2018lucid] Stumbrys, T. Lucid nightmares: A survey of their frequency, features, and factors in lucid dreamers. Dreaming 28, 193 (2018).
[ouchene2023effectiveness] Ouchene, R., El Habchi, N., Demina, A., Petit, B. & Trojak, B. The effectiveness of lucid dreaming therapy in patients with nightmares: A systematic review. L'Encéphale (2023).
[vallat2018sleep] Vallat, R., Eskinazi, M., Nicolas, A. & Ruby, P. Sleep and dream habits in a sample of French college students who report no sleep disorders. Journal of Sleep Research 27, e12659 (2018).
[zadra1996recurrent] Zadra, A. Recurrent dreams and their relation to life events and well-being. Trauma and Dreams, 231–247 (1996).
[robbins1992comparison] Robbins, P. R. & Tanck, R. H. A comparison of recurrent dreams reported from childhood and recent recurrent dreams. Imagination, Cognition and Personality 11, 259–262 (1992).
[weinstein2018linking] Weinstein, N., Campbell, R. & Vansteenkiste, M. Linking psychological need experiences to daily and recurring dreams. Motivation and Emotion 42, 50–63 (2018).
[duke2002ordinary] Duke, T. & Davidson, J. Ordinary and recurrent dream recall of active, past and non-recurrent dreamers during and after academic stress. Dreaming 12, 185–197 (2002).
[propper2007television] Propper, R. E., Stickgold, R., Keeley, R. & Christman, S. D. Is television traumatic?: Dreams, stress, and media exposure in the aftermath of September 11, 2001. Psychological Science 18, 334–340 (2007).
[gorgoni2022dreaming] Gorgoni, M., Scarpelli, S., Alfonsi, V. & De Gennaro, L. Dreaming during the COVID-19 pandemic: A narrative review. Neuroscience & Biobehavioral Reviews, 104710 (2022).
[margherita2022observatory] Margherita, G. & Caffieri, A. An observatory on changes in dreaming during a pandemic: a living systematic review (part 1). Journal of Sleep Research, e13742 (2022).
[schredl2003continuity] Schredl, M. et al. Continuity between waking and dreaming: A proposal for a mathematical model. Sleep and Hypnosis 5, 38–52 (2003).
[tuominen2022no] Tuominen, J., Olkoniemi, H., Revonsuo, A. & Valli, K. ‘No man is an island’: Effects of social seclusion on social dream content and REM sleep. British Journal of Psychology 113, 84–104 (2022).
[domhoff2017invasion] Domhoff, G. W. The invasion of the concept snatchers: The origins, distortions, and future of the continuity hypothesis. Dreaming 27, 14 (2017).
[kauhanen2022systematic] Kauhanen, L. et al. A systematic review of the mental health changes of children and young people before and during the COVID-19 pandemic. European Child & Adolescent Psychiatry, 1–19 (2022).
[kelly2022lucid] Kelly, P. et al. Lucid dreaming increased during the COVID-19 pandemic: An online survey. PLoS ONE 17, e0273281 (2022).
[solomonova2021stuck] Solomonova, E. et al. Stuck in a lockdown: Dreams, bad dreams, nightmares, and their relationship to stress, depression and anxiety during the COVID-19 pandemic. PLoS ONE 16, e0259040 (2021).
[watson2003dream] Watson, D. To dream, perchance to remember: Individual differences in dream recall. Personality and Individual Differences 34, 1271–1286 (2003).
[coscia2017network] Coscia, M. & Neffke, F. M. Network backboning with noisy data. In 2017 IEEE 33rd International Conference on Data Engineering (ICDE), 425–436 (IEEE, 2017).
[bastian2009gephi] Bastian, M., Heymann, S. & Jacomy, M. Gephi: an open source software for exploring and manipulating networks. In Proceedings of the International AAAI Conference on Web and Social Media, vol. 3, 361–362 (2009).
[barrett2020dreams] Barrett, D. Dreams about COVID-19 versus normative dreams: Trends by gender. Dreaming 30, 216 (2020).
[schredl2020dreaming] Schredl, M. & Bulkeley, K. Dreaming and the COVID-19 pandemic: A survey in a US sample. Dreaming 30, 189 (2020).
§ ACKNOWLEDGEMENTS L.M.A. acknowledges the support from the Carlsberg Foundation through the COCOONS project (CF21-0432).
The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. § AUTHOR CONTRIBUTIONS STATEMENT A.D., S.Š, and L.M.A. conceived and designed the experiment(s), A.D. conducted the experiment(s), A.D., S.Š, L.M.A., R.M., D.B., and D.Q. analysed the results. All authors wrote and reviewed the manuscript. § DATA AVAILABILITY STATEMENT The data generated in this study is publicly available and can be accessed at https://doi.org/10.6084/m9.figshare.23618064.v2https://doi.org/10.6084/m9.figshare.23618064.v2. § SUPPLEMENTARY INFORMATION § DATA STATISTICS (EXTENDED) Fig. <ref>, <ref>, <ref> & <ref> provide additional insights into the dream reports. We were able to geolocate 4.04% (n=1784) of all dream reports (distributed amongst 3.22% (n=1423) users) at the US state level. Fig. <ref> demonstrates representative coverage of our data in all 50 US states; using the aforementioned data subset. § METHODS & IMPLEMENTATION DETAILS (EXTENDED) §.§ Details of Topic Modelling BERTopic embeds all the documents into vectors, projects them onto a lower dimensional space using UMAP, clusters the reduced embeddings using HDBSCAN (Hierarchical Density-based Spatial Clustering of Applications with Noise), and finally generates the topic names using a class-based variation of TF-IDF, namely c-TF-IDF. In the application of BERTopic, we set the hyperparameters of the all-mpnet-base-v2 sentence transformer model such that the min. no. of dream sentences required to form a topic is 100 (min_topic_size = 100) and it automatically merges similar topics together using HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise) (nr_topic = `auto'). After fitting the model, we used its functionality to update the topic representation i.e., modify the topic words to remove English stop words and take into account both unigrams and bigrams as topic words/phrases. This is particularly important for 2-word phrases like sleep paralysis, recurring dreams and, serial killers which we found, quite frequently mentioned in the extracted dataset of topics. The probabilities of all topic words for each topic sum to 1; however since we removed “dream” and “dreams” topic words, we had to re-normalize the topic word probabilities to ensure they summed to 1. Fig. <ref> captures the distribution of no. of dreams reports, dreamers & the no. of dream sentences linked to a topic. There were 1982 (4.48%) dreams distributed amongst 1277 dreamers which weren't associated with any topic at all. While there were 4978 (11.26%) dreams amongst 3416 dreamers which weren't associated with any dreaming topics or themes. Fig. <ref> shows the plots for selecting the no. of clusters, post application of K-Means to topic embeddings. §.§ Details of building dream topics taxonomy (co-occurrence network) Fig. <ref> shows the plot used to determine the backboning threshold for visualizing the theme co-occurrence network. §.§ Details of finding topics and themes through time We did not include dreams from September 2022 (last month in our dataset) in temporal analyses as we only collected data up until 7th September 2022. Manual inspection revealed that, during 2018, the temporal curves for the raw z-scores were quite noisy to accurately infer anything, hence we further restricted our analyses from January 2019 to August 2022. 
For July and August 2022, since we did not have the data from the subsequent months for smoothing, we used min_periods = 0 for the rolling function in Pandas which helped to compute the average by excluding the subsequent months' data. For calculating the smoothed z-scores of January and February 2019, we leveraged the values from November and December 2018, prior to discarding them, as discussed above. § RESULTS (EXTENDED) Table <ref> provides an exhaustive list of dream topics and themes they belong to; as discovered by our proposed methodology.
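For completeness, a minimal sketch of the z-scoring and centred five-month smoothing described in the temporal-analysis details above. The layout of the `importance` dataframe is our assumption, not the authors' implementation.

```python
import pandas as pd

# `importance`: DataFrame indexed by month, one column per topic/theme,
# holding the monthly importance I_{t,m} defined in the Methods.
z = (importance - importance.mean()) / importance.std()
# centred moving average over a 5-month window; min_periods keeps the first and
# last months (the text above uses min_periods=0, which behaves the same for a mean)
z_smoothed = z.rolling(window=5, center=True, min_periods=1).mean()
```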
http://arxiv.org/abs/2307.04004v1
20230708161850
MAP-NBV: Multi-agent Prediction-guided Next-Best-View Planning for Active 3D Object Reconstruction
[ "Harnaik Dhami", "Vishnu D. Sharma", "Pratap Tokekar" ]
cs.RO
[ "cs.RO", "cs.MA" ]
MAP-NBV: Multi-agent Prediction-guided Next-Best-View Planning for Active 3D Object Reconstruction Harnaik Dhami* Vishnu D. Sharma* Pratap Tokekar *Equal contribution. Names are listed alphabetically. Authors are with the Department of Computer Science, University of Maryland, U.S.A. .This work is supported by the ONR under grant number N00014-18-1-2829. August 12, 2023 ===================================================================================================================================================================================================================================================================== We propose MAP-NBV, a prediction-guided active algorithm for 3D reconstruction with multi-agent systems. Prediction-based approaches have shown great improvement in active perception tasks by learning the cues about structures in the environment from data. But these methods primarily focus on single-agent systems. We design a next-best-view approach that utilizes geometric measures over the predictions and jointly optimizes the information gain and control effort for efficient collaborative 3D reconstruction of the object. Our method achieves 22.75% improvement over the prediction-based single-agent approach and 15.63% improvement over the non-predictive multi-agent approach. We make our code publicly available through our project website: <http://raaslab.org/projects/MAPNBV/> § INTRODUCTION Visual surveying and inspection with robots have been studied for a long time for a wide range of applications such as inspection of civil infrastructure <cit.> and large vehicles <cit.>, precision agriculture <cit.>, and digital mapping for real estate <cit.>. The utilization of robots in these applications is highly advantageous as they can access hard-to-reach areas with greater ease and safety compared to situations with direct human involvement. Recent work on making robots autonomous for these tasks make their use more appealing. This work focuses on one such long-studied problem of 3D object reconstruction <cit.>, where the objective is to digitally reconstruct the object of interest by combining observations from multiple vantage points. While it could be easier to achieve this in an indoor environment by carefully placing sensors around the object, the same can't be achieved for the outdoors and open areas. For the latter, the sensor(s), must be moved around the object to capture information from different viewpoints. This can be realized with sensors such as cameras and LiDARs mounted on unmanned aerial vehicles (UAVs). A UAV with unlimited power supply capacity could capture infinite observations for an almost perfect reconstruction of the object, but the real-world limitation of battery capacity adds another dimension to the problem: achieving an accurate 3D reconstruction as fast as possible. The trade-off between reconstruction accuracy and task duration in unknown environments is commonly addressed through Next-Best-View (NBV) planning, wherein a robot determines the optimal location for the next observation to maximize information gain. Numerous solutions have been proposed by the research community to tackle this problem, with a majority of them catering to single-agent systems <cit.>. However, deploying a team of robots instead of a single agent can enhance task efficiency multi-fold, while also offering additional benefits such as fault tolerance through redundancy. 
But the direct application of single-agent NBV methods to multi-agent systems does not translate well in terms of performance. This issue stems from the potential overlap between the individual observations. An efficient multi-agent NBV formulation requires coordination among robots to build a joint representation and minimize the overlap. In this work, we extend our previous work on prediction-driven single-agent NBV, Pred-NBV <cit.>, to a team of robots for 3D reconstruction to bring the advantages of the prediction-guided approach to a multi-agent system. We call this multi-agent prediction-based next-best-view method MAP-NBV. Pred-NBV <cit.> uses a 3D point cloud prediction network along with a geometric NBV approach while also considering the control effort required for object reconstruction. An important feature of Pred-NBV is that it doesn't require the partially observed point cloud to be centered at the full object center, an implicit assumption in many 3D reconstruction networks. Naively extending Pred-NBV to a team of robots would result in significant overlap as all the agents would move in the same direction to maximize individual information gain. This is inefficient as it would be more advantageous for the robots to move in different directions. MAP-NBV solves this issue by defining NBV measures over joint observation. We accomplish this by removing duplicate points in observations from multiple robots when calculating the information gain. Along with this, we account for the total control effort in our NBV objective, which results in efficient planning for the whole team. We make the following contributions in this work: * We propose a multi-agent, prediction-based NBV planning approach for active 3D reconstruction of various objects with a novel objective combining visual information gain and control effort. * We modify a single-agent baseline NBV algorithm based on <cit.> that uses frontier-based information gain, and extend its functionality to effectively operate in multi-agent settings. * We show that our method outperforms Pred-NBV <cit.>, a single-agent prediction-based algorithm, by 22.75% and the multi-agent version of a traditional NBV baseline <cit.> by 15.63%. We share the qualitative results and release the project code from our method on our project website[<http://raaslab.org/projects/MAPNBV/>]. § RELATED WORK The use of robots for data acquisition purposes is an extensively studied topic for various domains. Their usage range from infrastructure inspection <cit.> and environment monitoring <cit.> for real-world application to the real-world digitization for research datasets and simulations <cit.>. When the environment is unknown, active methods such as next-best-view (NBV) are used to construct an object model on the fly by capturing additional observations. A majority of the works on NBV planning use information-theoretic measures <cit.> for selection to account for uncertainty in observations <cit.>. The widely used frontier and tree-based exploration approaches also utilize uncertainty about the environment for guiding the robot motion <cit.>. Some works devise geometric methods which make inferences about the exact shape of the object of interest and try to align the observations with the inferred model <cit.>. Prediction-based NBV approaches have emerged as another alternative in recent years, where a neural network takes the robot and/or the environment state as the input and NBV pose or velocity as the output <cit.>. 
A majority of the existing work on NBV is focused on single robot systems. The task performance can be enhanced by adding more robots to the systems, but directly extending single-robot NBV approaches to multi-robot systems may result in sub-optimal performance due to significant overlap in observations. This issue led to the development of exploration algorithms specifically for multi-robot systems <cit.> with information-theoretic measures for determining NBV. Some recent works on multi-robot systems have explored the utilization of predictions for improvement in task efficiency. Almadhoun et al. <cit.> designed a hybrid planner that switches between a classical NBV approach and a learning-based predictor for NBV selection but uses a partial model obtained by robot observations only. Wu et al. <cit.> use a point cloud prediction model for plants, treating the predicted point cloud as an oracle, leading to better results than the traditional approaches. This method uses entropy-based information gain measures for NBV and is designed for plant phenotyping with robotic arms. These methods do not consider the control effort required, which is important for UAVs with energy constraints when deployed for observing large objects such as airplanes and ships. Also, these works employ information-theoretic NBV approaches. We aim to explore a prediction-based approach for geometric NBV selection. In this work, we extend Pred-NBV <cit.>, which also uses point cloud prediction, and build a multi-robot NBV planner. The prediction on the point cloud makes the pipeline modular and interpretable and can be improved by improving individual modules. We select NBV based on information gain, as well as control effort, making our approach more grounded in real-world limitations. § PROBLEM FORMULATION We are given a team of n robots, each equipped with a 3D sensor. The team flies around a closed object of volume 𝒱∈ℝ^3 and observes points on its surface 𝒮⊂𝒱. The surface points s_i observed by the robot r_j from the view-point ϕ_k ∈Φ are represented as a voxel-filtered point cloud and the relationship between them is defined as s_i = f(r_j, ϕ_k). The robot r_j follows a trajectory ξ_r_j, consisting of multiple viewpoints, and keeps track of the points observed so far. The distance traveled by a robot between two poses ϕ_i and ϕ_j is represented by d(ϕ_i, ϕ_j). The point cloud observed by the team of robots is the union of the surface points observed by the individual robots over their respective trajectories, i.e., s_ξ = ⋃_i=1^n ⋃_ϕ∈ξ_r_i f(r_i, ϕ) and ξ represents the set of trajectories for each robot, i.e., ξ = {ξ_r_1, ξ_r_2,..., ξ_r_n}. The objective is to find a set of feasible trajectories ξ^* = {ξ_r_1^*, ξ_r_2^*, ..., ξ_r_n^*}, such that the team observes the whole voxel-filtered surface, while also minimizing the total distance traveled by the robots on their respective trajectories. ξ^* = argmin_ξ∑_i=1^n ∑_j=1^|ξ_r_i| - 1 d(ϕ_j, ϕ_j+1) such that  ⋃_i=1^n ⋃_ϕ∈ξ_r_i f(r_i, ϕ) = 𝒮 Given a finite set of trajectories, if the object model 𝒮 is known, the optimal set of trajectories can be found with an exhaustive search. As the object model is not known a priori in an unknown environment, the optimal solution cannot be found beforehand. Thus, each robot needs to move based on the partial observations of the team to determine the NBV to reconstruct the object's surface.
Here we assume that each robot can observe the object at the start of the mission, which can be accomplished by moving the robots till they see the object. In this work, we define this problem in a centralized form; all the robots share their observations with a central entity that prescribes the NBV for each by solving the aforementioned objective. § PROPOSED APPROACH In this paper, we present Multi-Agent Pred-NBV (MAP-NBV), a model prediction-guided NBV approach for a team of robots. Figure <ref> shows the overview of our process, which consists of two parts: (1) 3D Model Prediction, where we combine the observations from all the robots to build a partial model of the object and use PoinTr-C <cit.>, a 3D point cloud completion network, to predict the full shape of the objects, and (2) Multi-Agent NBV Algorithm, which uses the partial model and the predicted model to determine the NBV for the team, while trying to minimize the distance traveled. Our NBV solution performs a greedy selection over the candidate points to generate the trajectory, which also reduces the computation complexity. The following subsections provide further details of our approach. §.§ 3D Model Prediction To start, the target object is segmented out from the rest of the environment in the captured RGB images for each UAV. This allows the algorithm to focus on only the target infrastructure as opposed to also including other obstacles. Then, each of these segmented images is aligned with the captured depth image per UAV to segment the target object out. Point clouds are then generated per segmented depth image. This gives us a point cloud per UAV that contains points belonging only to the target object. Assuming a centralized system, each segmented point cloud per UAV is transformed into a central reference frame and concatenated together into a singular point cloud. This point cloud represents the entire multi-agent system's observations of the target object at the current timestamp. The point cloud concatenation can be replaced with a registration algorithm <cit.>, but we use concatenation due to its ease of use. Lastly, this current timestamp's point cloud is then concatenated with previous observations to get an up-to-date observation point cloud. This process is shown in Figure <ref>. In order to get an approximation 𝒱̂ of the full model 𝒱, we use PoinTr-C <cit.>, a 3D point cloud completion network developed by fine-tuning PoinTr <cit.> using curriculum learning over the ShapeNet dataset <cit.>. Unlike PoinTr and similar point cloud completion networks, PoinTr-C doesn't make implicit assumptions about the knowledge of the center of the full model, thanks to fine-tuning over rotationally and translationally perturbed point clouds. Relaxing this assumption makes PoinTr-C more suitable for inputs from an unknown environment than PoinTr. The 3D point cloud of the object obtained as the union of the observed surface points goes as input to PoinTr-C and it predicts the full object point cloud 𝒱̂. PoinTr-C was trained over isolated point clouds and therefore requires object point clouds to be isolated from the scene. This can be realized with the help of distance-based filters and state-of-the-art segmentation networks <cit.> without any fine-tuning. An example of an input point cloud and a predicted point cloud is shown in Figure <ref>. §.§ Next-Best View Planner We use the predicted point cloud as an approximation of the ground truth point cloud for NBV planning.
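As a concrete illustration of the observation-merging step described above (a minimal sketch; the voxel size and the exact data layout are our assumptions, not part of the MAP-NBV implementation), each UAV's segmented point cloud can be transformed into the central frame, concatenated, and voxel-filtered with Open3D as follows.

```python
import numpy as np
import open3d as o3d

def merge_observations(clouds, poses, voxel_size=0.05):
    """clouds: list of (N_i, 3) arrays, one per UAV, in each camera frame.
    poses:  list of 4x4 homogeneous transforms from camera frame to the central frame."""
    merged = o3d.geometry.PointCloud()
    for pts, T in zip(clouds, poses):
        homogeneous = np.hstack([pts, np.ones((len(pts), 1))])
        central = (T @ homogeneous.T).T[:, :3]          # transform into central frame
        merged += o3d.geometry.PointCloud(o3d.utility.Vector3dVector(central))
    return merged.voxel_down_sample(voxel_size)          # concatenation + voxel filter
```

The merged cloud, together with the clouds accumulated at previous timestamps, is what would then be handed to the shape-completion network.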
For this, we first generate a set of candidate poses around the partially observed object. From these, we select a set of n poses, corresponding to each robot, based on information gain and control effort. The information gain for the set of n viewpoints is defined as the number of new, unique points expected to be observed after the robots move to these viewpoints. The control effort is defined as the total distance covered by the robots in moving to the viewpoints. The number of new points varies in each iteration since the robots observe more of the surface of the object as they move to new locations. While PoinTr-C predicts the point cloud for the whole object, the robots can observe only the surface points. Hence, before counting the number of new points, we apply hidden point removal <cit.> to the predicted point cloud. We represent this relationship between the number of points observed and the trajectories traversed till time t by I(ξ_t), where ξ_t = {ξ_r_1, ξ_r_2, ..., ξ_r_n}_t represents the set of trajectories for all the robots till time t. To balance the information gain and control effort, we use a hyperparameter τ which is kept fixed throughout an episode. The robots select the candidate pose set which results in at least a fraction τ of the total possible information gain over all candidate poses. Thus, we formulate our multi-agent NBV objective as follows. {ϕ_r_1, ϕ_r_2, ..., ϕ_r_n}_t+1 = argmin_ϕ∈𝒞∑_i=1^n d(ϕ_r_i, ϕ_r_it)  such that  (⋃_i =1^n I(ξ_r_it∪ϕ)) / (max_ϕ∈𝒞⋃_i =1^n I(ξ_r_it∪ϕ)) ≥τ In our experiments, we implement the information gain by first isolating the predicted points that can be observed from a given set of viewpoints and then taking a union of such points from each agent to identify the unique points in the joint observation. The number of the points thus obtained is used as the information gain. For finding the control effort, we use RRT-Connect <cit.> to find the path from a robot's current location to each candidate pose. The candidate poses are generated similarly to Pred-NBV <cit.>, i.e., on circles at different heights around the center of the predicted object point cloud. One circle is at the same height as the predicted object center with radius 1.5 × d_max, where d_max is the maximum distance of a point from the center of the predicted point cloud. The other two circles are located above and below this circle, 0.25 × z-range away, with a radius of 1.2 × d_max. The viewpoints are located at steps of 30^∘ on each circle. We set τ = 0.95 for all our experiments. § EXPERIMENTS AND EVALUATION In order to gauge our method's effectiveness, we compare it with a non-predictive multi-agent baseline and a prediction-driven NBV approach which was developed for a single agent. While the first highlights the benefits of including predictions in the NBV pipeline, the latter supports the argument for using a team of robots. §.§ Setup We extend the setup in Pred-NBV <cit.> to work in a multi-agent setting. Similarly, we use Robot Operating System (ROS) Melodic and AirSim <cit.> on Ubuntu 18.04 for our simulation experiments. Multiple UAVs are spawned into the AirSim environment. We equipped each of the UAVs with a depth camera and an RGB camera. Each UAV published a segmented image using AirSim's built-in segmentation. We adapted the depth segmentation package from Pred-NBV to work with multiple UAVs. We then converted these segmented depth images into 3D point clouds.
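For reference, the conversion from a segmented depth image to a 3D point cloud mentioned above is a standard pinhole back-projection; this sketch assumes the camera intrinsics (f_x, f_y, c_x, c_y) are available from the simulator settings.

```python
import numpy as np

def depth_to_cloud(depth, mask, fx, fy, cx, cy):
    """depth: (H, W) depth image in metres; mask: (H, W) bool, True on the target object."""
    v, u = np.nonzero(mask & (depth > 0))   # pixel coordinates of valid object points
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)      # (N, 3) points in the camera frame
```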
For our collision-free point-to-point planner, we use the MoveIt <cit.> software package implementing the work done by Köse <cit.>. §.§ Qualitative Example We evaluate MAP-NBV on the same 20 objects that were used in Pred-NBV to allow a direct comparison. The 20 objects consist of 5 different ShapeNet classes: airplane, rocket, tower, train, and watercraft. Examples of each class are shown in Figure <ref>. These classes represent diverse shapes and infrastructures that are regularly inspected. Figure <ref> shows the path followed by 2 UAVs as given by MAP-NBV in the C-17 airplane simulation. This environment includes other obstacles that are not of interest but still need to be accounted for in collision-free path planning. MAP-NBV finds a collision-free path for both UAVs while targeting the maximum coverage of the C-17 airplane. §.§ Comparison with Single-agent Baseline We compared the performance of MAP-NBV with a single-agent prediction-based NBV planner called Pred-NBV <cit.>. MAP-NBV is an extension of Pred-NBV designed for multi-agent scenarios. However, in single-agent cases, both algorithms function identically. In MAP-NBV, UAVs are spawned close together, ensuring that the initial environment information is virtually the same as in the single-agent Pred-NBV case. Consequently, the initial points observed and the initial shape completion predictions for both algorithms are highly similar. This means that MAP-NBV and Pred-NBV select their initial NBVs using nearly identical information. To demonstrate the immediate information gain of MAP-NBV over Pred-NBV, we compare the number of points observed after navigating to the first NBVs selected by the algorithms. Our findings, presented in Table <ref>, reveal that, on average, MAP-NBV observes 22.75% more points after the first iteration compared to Pred-NBV in the context of object reconstruction. These results are based on evaluations across 20 objects and 5 object classes. Furthermore, on average, each UAV in MAP-NBV flew a similar distance to the UAV in Pred-NBV. This similarity arises from both algorithms generating candidate viewpoints in the same manner and employing the same point-to-point planner. §.§ Comparison with Multi-agent Baseline We also compared the performance of MAP-NBV with a modified baseline NBV method <cit.> designed for multi-agent use. The baseline method employs frontiers to select the next-best views. Frontiers are points located at the edge of the observed space near unknown areas. We utilized the same modifications described in Pred-NBV <cit.>. Specifically, we used our segmented point cloud to choose frontiers near the target object. To ensure that the UAVs always face the target object, the orientation of all poses selected by the baseline aligns with the center of the observed target object point clouds. We further adapted this baseline method to function in a multi-agent setting. The pose for the first UAV is selected in the exact same manner as in the single-agent baseline. For each subsequent UAV, the remaining best pose is chosen, as long as it does not fall within a certain distance threshold compared to the previously selected poses in the current iteration of the algorithm. Both MAP-NBV and the baseline algorithm employ the same stopping criteria. The algorithm terminates if the total points observed in the previous step exceed 95% of the total points observed in the current step. 
Our evaluation, presented in Table <ref>, demonstrates that MAP-NBV observes, on average, 15.63% more points than the multi-agent baseline for object reconstruction across all 20 objects from the 5 different model classes. In our simulations, we utilized 2 UAVs for both algorithms. Furthermore, the MAP-NBV algorithm can be readily extended to accommodate more than just 2 robots. By incorporating additional UAVs, the algorithm can effectively leverage the collaborative efforts of a larger multi-agent system to improve object reconstruction performance and exploration efficiency. However, in our current evaluation, we utilized 2 UAVs for both algorithms due to limited computational resources. The simulations were computationally intensive, and our computer experienced significant slowdowns with just 2 robots in the simulation. Despite this limitation, the promising results obtained with 2 UAVs suggest that scaling up the algorithm to include more robots has the potential to yield even more significant improvements in performance. Additionally, Figure <ref> illustrates that MAP-NBV observes more points per step than the multi-agent baseline while also covering a shorter flight distance. § CONCLUSION We present a multi-agent, prediction-guided NBV planning approach for active 3D reconstruction. This method can be helpful in a variety of applications including civil infrastructure inspection. We show that our method is able to faithfully reconstruct the object point clouds efficiently compared to non-predictive multi-agent methods and single-agent prediction-based methods. Our NBV planning objective considers both information gain and control effort, making it more suitable for real-world deployment given the flight time limit imposed on UAVs by their battery capacity. In this work, we focus solely on geometric measures for information gain. Many existing works on NBV have developed sophisticated information theoretic measures. We will explore combining both types of measures in our future work. Also, we consider all possible viewpoint pairs for finding the NBV for the team, which hinders the scalability of MAP-NBV. We will look into methods to make this process more computationally efficient search over a larger candidate viewpoint set. IEEEtran
http://arxiv.org/abs/2307.04708v1
20230710171405
Topological recursion of the Weil-Petersson volumes of hyperbolic surfaces with tight boundaries
[ "Timothy Budd", "Bart Zonneveld" ]
math-ph
[ "math-ph", "hep-th", "math.AG", "math.GT", "math.MP" ]
Topological recursion of the Weil–Petersson volumes of hyperbolic surfaces with tight boundaries Timothy Budd and Bart Zonneveld IMAPP, Radboud University, Nijmegen, The Netherlands. August 12, 2023 ================================================================================================= The Weil–Petersson volumes of moduli spaces of hyperbolic surfaces with geodesic boundaries are known to be given by polynomials in the boundary lengths. These polynomials satisfy Mirzakhani's recursion formula, which fits into the general framework of topological recursion. We generalize the recursion to hyperbolic surfaces with any number of special geodesic boundaries that are required to be tight. A special boundary is tight if it has minimal length among all curves that separate it from the other special boundaries. The Weil–Petersson volume of this restricted family of hyperbolic surfaces is shown again to be polynomial in the boundary lengths. This remains true when we allow conical defects in the surface with cone angles in (0,π) in addition to geodesic boundaries. Moreover, the generating function of Weil–Petersson volumes with fixed genus and a fixed number of special boundaries is polynomial as well, and satisfies a topological recursion that generalizes Mirzakhani's formula. This work is largely inspired by recent works by Bouttier, Guitter & Miermont on the enumeration of planar maps with tight boundaries. Our proof relies on the equivalence of Mirzakhani's recursion formula to a sequence of partial differential equations (known as the Virasoro constraints) on the generating function of intersection numbers. Finally, we discuss a connection with JT gravity. We show that the multi-boundary correlators of JT gravity with defects (cone points or FZZT branes) are expressible in the tight Weil–Petersson volume generating functions, using a tight generalization of the JT trumpet partition function. § INTRODUCTION §.§ Topological recursion of Weil–Petersson volumes In the celebrated work <cit.> Mirzakhani established a recursion formula for the Weil–Petersson volume V_g,n(𝐋) of the moduli space of genus-g hyperbolic surfaces with n labeled boundaries of lengths 𝐋 = (L_1, …, L_n) ∈_>0^n. Denoting [n] = {1,2,…,n} and using the notation 𝐋_I = (L_i)_i∈ I, I⊂[n], for a subsequence of 𝐋 and 𝐋_I = 𝐋_[n]∖ I, the recursion can be expressed for (g,n)∉{(0,3),(1,1)} as V_g,n(𝐋) = 1/2L_1∫_0^L_1t∫_0^∞x∫_0^∞y xy K_0(x+y,t) [ V_g-1,n+1(x,y,𝐋_{1}) 370mu +∑_g_1+g_2=g I⨿ J={2,…,n} V_g_1,1+|I|(x,𝐋_I) V_g_2,1+|J|(y,𝐋_J)] +1/2L_1∫_0^L_1t∫_0^∞x∑_j=2^n x (K_0(x,t+L_j) + K_0(x,t-L_j)) V_g,n-1(x,𝐋_{1,j}), where K_0(x,t) =1/1+exp(x+t/2)+1/1+exp(x-t/2). Together with V_0,3(𝐋) = 1 and V_1,1(𝐋) = L_1^2/48 + π^2/12 this completely determines V_g,n as a symmetric polynomial in L_1^2, …, L_n^2 of degree 3g-3+n. This recursion formula remains valid <cit.> when we replace one or more of the boundaries by cone points with cone angle α_i ∈ (0,π) if we assign to it an imaginary boundary length L_i = i α_i. Cone points with angles in (0,π) are called sharp, as opposed to blunt cone points that have angle in (π,2π). The Weil–Petersson volume of the moduli space of genus-g surfaces with n geodesic boundaries or sharp cone points is thus correctly computed by the polynomial V_g,n(𝐋). It was recognized by Eynard & Orantin <cit.> that Mirzakhani's recursion (in the case of geodesic boundaries) fits the general framework of topological recursion. 
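For orientation, the first output of this recursion beyond the base cases is the well-known Weil–Petersson volume of the four-holed sphere; we quote it here purely as an illustration of the kind of polynomial the recursion produces (a standard result, not rederived here):

```latex
V_{0,4}(L_1,L_2,L_3,L_4) \;=\; 2\pi^2 + \tfrac{1}{2}\left(L_1^2+L_2^2+L_3^2+L_4^2\right),
```

a symmetric polynomial of degree 3g-3+n = 1 in the squared boundary lengths, as expected.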
To state this result explicitly one introduces for any g,n≥0 satisfying 3g-3+n ≥ 0 the Laplace transformed[Note that, due to the extra factors L_i in the integrand, 𝒲_g,n(𝐳) is (-1)^n times the partial derivative in each of the variables z_1,…,z_n of the Laplace transforms of V_g,n(𝐋), but we will refer to 𝒲_g,n(𝐳) as the Laplace-transformed Weil–Petersson volumes nonetheless.] Weil–Petersson volumes ω_g,n^(0)(𝐳)= ∫_0^∞[∏_i=1^n L_i L_i e^-z_i L_i] V_g,n(𝐋), which are even polynomials in z_1^-1, …,z_n^-1 of degree 6g-4+2n, while setting ω_0,1^(0)(𝐳)= 0, ω_0,2^(0)(𝐳) = 1/(z_1-z_2)^2. Then Mirzakhani's recursion (<ref>) translates into the recursion <cit.> ω_g,n^(0)(𝐳) = _u→0π/(z_1^2-u^2) sin2π u[ω^(0)_g-1,n+1(u,-u,𝐳_{1})+ ∑_g_1+g_2=g I⨿ J={2,…,n}ω^(0)_g_1,1+|I|(u,𝐳_I)ω^(0)_g_2,1+|J|(-u,𝐳_J)] valid when g,n≥ 0 and 3g-3+n ≥ 0, which one may recognize as the recursion for the invariants ω^(0)_g,n(𝐳) of the complex curve x(z) = z^2 y(z) = 1/πsin(2π z). The main purpose of the current work is to generalize these recursion formulas to hyperbolic surfaces with so-called tight boundaries, which we introduce now. §.§ Hyperbolic surfaces with tight boundaries Let S_g,n be a fixed topological surface of genus g with n boundaries and 𝒯_g,n(𝐋) the Teichmüller space of hyperbolic structures on S_g,n with geodesic boundaries of lengths 𝐋 = (L_1,…,L_n) ∈_>0^n. Denote the boundary cycles[Our constructions will not rely on an orientation of the boundary cycles, but for definiteness we may take them clockwise (keeping the surface on the left-hand side when following the boundary).] by ∂_1,…,∂_n and the free homotopy class of a cycle γ in S_g,n by [γ]_S_g,n. For a hyperbolic surface X ∈𝒯_g,n(𝐋) and a cycle γ, we denote by ℓ_γ(X) the length of γ, in particular ℓ_∂_i(X) = L_i. The mapping class group of S_g,n is denoted Mod_g,n and the quotient of 𝒯_g,n(𝐋) by its action leads to the moduli space ℳ_g,n(𝐋) = 𝒯_g,n(𝐋)/Mod_g,n. Let us denote by S_g,n,p⊃ S_g,n+p the topological surface obtained from S_g,n+p by capping off the last p boundaries with disks (Figure <ref>). Note that the free homotopy classes [·]_S_g,n+p of S_g,n+p are naturally partitioned into the free homotopy classes [·]_S_g,n,p of S_g,n,p. In particular, [∂_j]_S_g,n+p for j=n+1,…,n+p are all contained in the null-homotopy class of S_g,n,p. For i=1,…,n the boundary ∂_i of X ∈ℳ_g,n+p(𝐋) is said to be tight in S_g,n,p if ∂_i is the only simple cycle γ in [∂_i]_S_g,n,p of length ℓ_γ(X) ≤ L_i. Remark that both [∂_i]_S_g,n+p and [∂_i]_S_g,n,p for i=1,…,n are Mod_g,n+p-invariant, so these classes are well-defined at the level of the moduli space. This allows us to introduce the moduli space of tight hyperbolic surfaces ℳ^tight_g,n,p(𝐋) = { X ∈ℳ_g,n+p(𝐋) : ∂_1,…,∂_n are tight in S_g,n,p}⊂ℳ_g,n+p(𝐋). Note that ℳ^tight_g,n,0(𝐋) = ℳ^tight_g,0,n(𝐋) = ℳ_g,n(𝐋), while ℳ^tight_0,1,p(𝐋) = ∅ because ∂_1 is null-homotopic and ℳ^tight_0,2,p(𝐋) = ∅ because [∂_1]_S_0,2,p=[∂_2]_S_0,2,p and therefore ∂_1 and ∂_2 can never both be the unique shortest cycle in their class. In general, it is an open subset of ℳ_g,n+p(𝐋) and therefore it inherits the Weil–Petersson symplectic structure and Weil–Petersson measure μ_WP from ℳ_g,n+p(𝐋). The corresponding tight Weil–Petersson volumes are denoted T_g,n,p(𝐋) = ∫_ℳ^tight_g,n,p(𝐋)μ_WP ≤ V_g,n+p(𝐋), such that T_g,n,0(𝐋) = T_g,0,n(𝐋) = V_g,n(𝐋) and T_0,1,p(𝐋)=T_0,2,p(𝐋)=0. We can extend this definition to the case in which one or more of the boundaries ∂_n+1,…,∂_n+p is replaced by a sharp cone point with cone angle α_i ∈ (0,π). 
In this case we make the usual identification L_i = i α_i, and still denote the corresponding Weil–Petersson volume by T_g,n,p(𝐋). Our first result is the following. For g,n,p≥ 0 such that 3g-3 + n ≥ 0, the tight Weil–Petersson volume T_g,n,p(𝐋) of genus g surfaces with n tight boundaries and p geodesic boundaries or sharp cone points is a polynomial in L_1^2, …, L_n+p^2 of degree 3g-3+n+p that is symmetric in L_1,…, L_n and symmetric in L_n+1,…,L_n+p. For most of the upcoming results we maintain the intuitive picture that the tight boundaries are the “real” boundaries of the surface, whose number and lengths we specify, while we allow for an arbitrary number of other boundaries or cone points that we treat as defects in the surface. To this end we would like to encode the volume polynomials in generating functions that sum over the number of defects with appropriate weights. A priori it is not entirely clear what is the best way to organize such generating functions, so to motivate our definition we take a detour to a natural application of Weil–Petersson volumes in random hyperbolic surfaces. §.§ Intermezzo: Random (tight) hyperbolic surfaces If we fix g, n and 𝐋∈ ([0,∞) ∪ i (0,π))^n, then upon normalization by 1/V_g,n(𝐋) the Weil–Petersson measure μ_WP provides a well-studied probability measure on ℳ_g,n(𝐋) defining the Weil–Petersson random hyperbolic surface, see e.g. <cit.>. A natural way to extend the randomness to the boundary lengths or cone angles is by choosing a (Borel) measure μ on [0,∞) ∪ i (0,π) and first sampling 𝐋∈ ([0,∞) ∪ i (0,π))^n from the probability measure 1/μ^⊗ n(V_g,n) V_g,n(𝐋)μ(L_1)⋯μ(L_n), μ^⊗ n(V_g,n) ∫ V_g,n(𝐋)μ(L_1)⋯μ(L_n), and then sampling a Weil–Petersson random hyperbolic surface on ℳ_g,n(𝐋). If the genus-g partition function[We use the physicists' convention of writing the argument μ in square brackets to signal it is a functional dependence (in the sense of calculus of variations).] F_g[μ] = ∑_n≥ 0μ^⊗ n(V_g,n)/n! converges, we can furthermore make the size n≥ 0 random by sampling it with the probability μ^⊗ n(V_g,n)/(n! F_g[μ]). The resulting random surface (of random size) is called the genus-g Boltzmann hyperbolic surface with weight μ. See the upcoming work <cit.> for some of its statistical properties. A natural extension is to consider the genus-g Boltzmann hyperbolic surface with n tight boundaries of length 𝐋=(L_1,…,L_n), where the number p of defects and their boundary lengths/cone angles 𝐊=(K_1,…,K_p) are random. The corresponding partition function is T_g,n(𝐋;μ] = ∑_p≥ 0μ^⊗ p(T_g,n,p)(𝐋)/p!, μ^⊗ p(T_g,n,p)(𝐋) ∫ T_g,n,p(𝐋,𝐊)μ(K_1)⋯μ(K_p). If it is finite, we can sample p with probability μ^⊗ p(T_g,n,p)(𝐋)/(p!T_g,n(𝐋;μ]) and then 𝐊 from the probability measure 1/μ^⊗ p(T_g,n,p)(𝐋) T_g,n,p(𝐋,𝐊)μ(K_1)⋯μ(K_n) and then finally a random tight hyperbolic surface from the probability measure μ_WP/T_g,n,p(𝐋,𝐊) on ℳ^tight_g,n,p(𝐋,𝐊). Note that for μ=0 the genus-g Boltzmann hyperbolic surface with n tight boundaries reduces to the Weil–Petersson random hyperbolic surface we started with. The important observation for the current work is that the partition functions F_g[μ] and T_g,n(𝐋;μ] of these random surfaces can be thought of as (multivariate, exponential) generating functions of the volumes V_g,n(𝐋) and T_g,n,p(𝐋,𝐊) if we treat μ as a formal generating variable. 
Since we will not be concerned with the details of the measures (<ref>) and (<ref>) and F_g[μ] and T_g,n(𝐋;μ] only depend on the even moments ∫ L^2kμ(L), we can instead take these moments as the generating variables. §.§ Generating functions To be precise, we let a weight μ be a real linear function on the ring of even, real polynomials (i.e. μ∈[K^2]^*). For an even real polynomial f we use the suggestive notation μ(f) = ∫μ(K) f(K), making it clear that the notion of weight generalizes the Borel measure described in the intermezzo above. For L ∈ [0,∞)∪ i(0,π), the Borel measure given by the delta measure δ_L at L gives a simple example of a weight μ=δ_L satisfying δ_L(f) = f(L). The choice of weight μ is clearly equivalent to the choice of a sequence of times (t_0,t_1,…) ∈^_≥0 recording the evaluations of μ on the even monomials, up to a conventional normalization, t_k[μ]= 2/4^k k!μ(K^2k) = ∫μ(K) 2K^2k/4^kk!. Naturally we can interpret μ^⊗ p∈ ([K^2]^*)^⊗ p as an element of ([K^2]^⊗ p)^* ≅[K_1^2,…,K_p^2]^* by setting μ^⊗ p(f_1(K_1)⋯ f_p(K_p)) = μ(f_1)⋯μ(f_p) for even polynomials f_1,…,f_p and extending by linearity. More generally, we can view μ^⊗ p as a linear map [L_1^2,…,L_n^2,K_1^2,…,K_p^2] ≅[L_1^2,…,L_n^2][K_1^2,…,K_p^2] →[L_1^2,…,L_n]. We use the notation μ^⊗ p(f) = ∫ f(𝐊) μ(K_1)⋯μ(K_p), μ^⊗ p(f)(𝐋) = ∫ f(𝐋,𝐊) μ(K_1)⋯μ(K_p). One can then naturally introduce the generating function F[μ] of a collection of symmetric, even polynomials f_1(L_1),f_2(L_1,L_2),… via F[μ] = ∑_p≥ 0μ^⊗ p(f_p)/p!. Then the generating function of tight Weil–Petersson volumes is defined to be T_g,n(𝐋;μ] = ∑_p=0^∞μ^⊗ p(T_g,n,p)(𝐋)/p! = ∑_p=0^∞1/p!∫μ(K_1)⋯∫μ(K_p)T_g,n,p(𝐋,𝐊), which we interpret in the sense of a formal power series, so we do not have to worry about convergence. We could make this more precise by fixing a weight μ and considering T_g,n(𝐋;x μ] ∈[[x]] as a univariate formal power series in x. Or we could view T_g,n(𝐋;μ] as a multivariate formal power series in the times (t_0,t_1,…) defined in (<ref>). What is important is that we can make sense of the functional derivative δ/δμ(L) on these types of series defined by δ/δμ(L) P[μ] = ∂/∂ x P[μ + x δ_L] |_x=0. In particular, if f(𝐋,𝐊), with 𝐋 = (L_1,…,L_n) and 𝐊 = (K_1,…,K_p), is an even polynomial that is symmetric in K_1,…,K_p then δ/δμ(L)μ^⊗ p(f)(𝐋) = p μ^⊗ p-1(f)(𝐋,L). At the level of the generating function we thus have δ/δμ(L)T_g,n(𝐋;μ] = ∑_p=0^∞1/p!∫μ(K_1)⋯∫μ(K_p) T_g,n,p+1(𝐋,L,𝐊). In terms of formal power series in the times we may instead identify the functional derivative in terms of the formal partial derivatives as δ/δμ(L) = ∑_k=0^∞2 L^2k/4^kk!∂/∂ t_k, δ/δμ(0) = 2∂/∂ t_0. §.§ Main results To state our main results about T_g,n(𝐋;μ], we need to introduce the generating function R[μ] as the unique formal power series solution satisfying R[μ] = ∫μ(L) + O(μ^2) to Z(R[μ];μ] = 0, Z(r;μ]√(r)/√(2)πJ_1(2π√(2r)) - ∫μ(L) I_0(L√(2r)) where I_0 and J_1 are (modified) Bessel functions. Let also the moments M_k[μ] be the defined recursively via M_0[μ] = 1/ Rμ(0) , M_k[μ] = M_0[μ] M_k-1[μ]μ(0), k≥ 1, where the reciprocal in the first identity makes sense because Rμ(0) = 1 + O(μ). Alternatively, for k≥0 we may express M_k as M_k[μ] = Z^(k+1)(R[μ];μ]=(-√(2)π/√(R[μ]))^kJ_k(2π√(2R[μ])) - ∫μ(L) (L/√(2R[μ]))^k+1I_k+1(L√(2 R[μ])) , where Z^(k+1)(r;μ] denotes the (k+1)th derivative of Z(r;μ] with respect to r. 
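As a concrete (non-formal) illustration, consider the one-atom weight μ = c δ_{L_0} with c > 0 small, for which ∫μ(L) f(L) = c f(L_0) and Z(r;μ] is an ordinary function of r. The following numerical sketch (assuming SciPy; the bracketing interval for the root is ad hoc) solves Z(R;μ] = 0 and evaluates M_0[μ] and M_1[μ] from the Bessel-function expressions above; as c → 0 they approach 1 and -2π^2, the values of the case μ = 0.

import numpy as np
from scipy.optimize import brentq
from scipy.special import iv, jv

c, L0 = 1e-3, 1.0   # weight of the atom and its boundary length

def Z(r):
    return np.sqrt(r) / (np.sqrt(2) * np.pi) * jv(1, 2 * np.pi * np.sqrt(2 * r)) \
        - c * iv(0, L0 * np.sqrt(2 * r))

R = brentq(Z, 1e-12, 0.05)   # R[mu] = c + O(c^2), so the root sits near c

def M(k):
    return (-np.sqrt(2) * np.pi / np.sqrt(R))**k * jv(k, 2 * np.pi * np.sqrt(2 * R)) \
        - c * (L0 / np.sqrt(2 * R))**(k + 1) * iv(k + 1, L0 * np.sqrt(2 * R))

print(R, M(0), M(1))   # M(0) close to 1, M(1) close to -2*pi^2 for small c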
We further consider the series η(u;μ]=∑_p=0^∞ M_p[μ] u^2p/(2p+1)!!, X̂(u;μ] = sin(2π u)/2π u η(u;μ], which we both interpret as formal power series in u with coefficients that are formal power series in μ. The reciprocal in the second definition is well-defined because η(0) = M_0[μ] = 1 + O(μ). We can now state our main result that generalizes Mirzakhani's recursion formula. The tight Weil–Petersson volume generating functions T_g,n(𝐋) satisfy T_g,n(𝐋) = 1/2L_1∫_0^L_1t∫_0^∞x∫_0^∞y xy K(x+y,t;μ] [ T_g-1,n+1(x,y,𝐋_{1}) 370mu +∑_g_1+g_2=g I⨿ J={2,…,n} T_g_1,1+|I|(x,𝐋_I) T_g_2,1+|J|(y,𝐋_J)] +1/2L_1∫_0^L_1t∫_0^∞x∑_j=2^n x (K(x,t+L_j;μ] + K(x,t-L_j;μ]) T_g,n-1(x,𝐋_{1,j}), which is the same recursion formula (<ref>) as for the Weil–Petersson volumes V_g,n(𝐋) except that the kernel K_0(x,t) is replaced by the “convolution” K(x,t,μ] = ∫_-∞^∞ X(z) K_0(x+z,t), where X(z) = X(z;μ] is a measure on determined by its two-sided Laplace transform ∫_-∞^∞ X(z) e^-uz = X̂(u;μ] = sin(2π u)/2π u η(u;μ]. Furthermore, we have T_0,3(L_1,L_2,L_3;μ] =1/M_0[μ], T_1,1(L;μ] =-M_1[μ]/24M_0[μ]^2+L^2/48M_0[μ]. We will not specify precisely what it means to have a measure X(z;μ] that itself is a formal power series in μ. Importantly its moments ∫_-∞^∞ X(z) z^p = (-1)^p p![u^p]X̂(u;μ] are formal power series in μ, so for any x,t∈ K(x,t;μ] = ∑_p=0^∞ (-1)^p ∂^p/∂ x^pK_0(x,t) [u^p]X̂(u;μ] is a formal power series in μ as well. In the case μ=0, it is easily verified that M_k[0] = (-2π^2)^k/k!, η(u,0] = sin(2π u)/2π u, so X̂(u;0] = 1 and X(z;0] = δ_0(z) and therefore one retrieves Mirzakhani's kernel K(x,t) = K_0(x,t). Given that the form of Mirzakhani's recursion is unchanged except for the kernel, this strongly suggests that the Laplace transforms ω_g,n(𝐳)=ω_g,n(𝐳;μ]∫_0^∞[∏_i=1^n L_i L_i e^-z_i L_i] T_g,n(𝐋;μ] of the tight Weil–Petersson volumes can be obtained as invariants in the framework of topological recursion as well. When μ=0 this reduces to the Laplace-transformed Weil–Petersson volumes ω_g,n(𝐳;0] = ω_g,n^(0)(𝐳) defined in (<ref>). The following theorem shows that this is the case in general. Setting ω_0,2(𝐳)=(z_1-z_2)^-2 and ω_0,0(𝐳) = ω_0,1(𝐳)=0, the Laplace transforms (<ref>) satisfy for every g,n≥ 0 such that 3g-3+n ≥ 0 the recursion ω_g,n(𝐳) =_u→01/2u(z_1^2-u^2)η(u;μ][ω_g-1,n+1(u,-u,𝐳_{1})+ ∑_g_1+g_2=g I⨿ J={2,…,n}ω_g_1,|I|(u,𝐳_I)ω_g_2,|J|(-u,𝐳_J)]. These correspond precisely to the invariants of the curve x=z^2 y=2z η(z;μ]. Another consequence of Theorem <ref> is that the tight Weil–Petersson volumes T_g,n(𝐋;μ] for all g,n≥ 0, such that n≥ 3 for g=0 and n≥ 1 for g=1, are expressible as a rational polynomial in L_1^2,…, L_n^2 and M_0^-1, M_1 , M_2, …. Besides satisfying a recursion in the genus g and the number of tight boundaries n, these also satisfy a recurrence relation in n only. For all g,n≥ 0, such that n≥ 3 for g=0 and n≥ 1 for g=1, we have that T_g,n(𝐋;μ] = 1/M_0^2g-2+n𝒫_g,n(𝐋,M_1/M_0,…,M_3g-3+n/M_0), where 𝒫_g,n(𝐋,𝐦) is a rational polynomial in L_1^2,…,L_n^2,m_1,…,m_3g-3+n. This polynomial is symmetric and of degree 3g-3+n in L_1^2,…,L_n^2, while 𝒫_g,n(√(σ)𝐋,σ m_1,σ^2 m_2, σ^3 m_3, …) is homogeneous of degree 3g-3+n in σ. For all g≥ 0, n≥ 1 such that 2g-3+n>0 the polynomial 𝒫_g,n(𝐋,𝐦) can be obtained from 𝒫_g,n-1(𝐋,𝐦) via the recursion relation 𝒫_g,n(𝐋,𝐦) = ∑_p=1^3g-4+n(m_p+1 - L_1^2p+2/2^p+1(p+1)!-m_1 m_p + 1/2L_1^2 m_p) ∂𝒫_g,n-1/∂ m_p(𝐋_{1},𝐦) + (2g-3+n)(-m_1+12 L_1^2) 𝒫_g,n-1(𝐋_{1},𝐦) + ∑_i=2^n ∫L_i L_i 𝒫_g,n-1(𝐋_{1},𝐦), where we use the shorthand notation ∫L L f(L,…) = ∫_0^L x x f(x,…). 
Furthermore, we have 𝒫_0,3(𝐋) = 1, 𝒫_1,1(L_1,m_1) = 1/24(-m_1+1/2 L_1^2) and 𝒫_g,0 for g≥2 is given by 𝒫_g,0(m_1,…,m_3g-3) = ∑_d_2,d_3,…≥ 0 ∑_k≥ 2 (k-1)d_k = 3g-3⟨τ_2^d_2τ_3^d_3⋯⟩_g ∏_k≥ 2(-m_k-1)^d_k/d_k!, where ⟨τ_2^d_2τ_3^d_3⋯⟩_g are the ψ-class intersection numbers on the moduli space ℳ_g,n with n = ∑_k d_k ≤ 3g-3 marked points. For instance, the first few applications of the recursion yield 𝒫_0,4(𝐋,𝐦) = 1/2 (L_1^2+⋯+L_4^2) - m_1, 𝒫_0,5(𝐋,𝐦) = 1/8 (L_1^4+⋯+L_5^4) + 1/2(L_1^2L_2^2+⋯+L_4^2L_5^2) - 3/2(L_1^2+⋯+L_5^2)m_1 + 3 m_1^2 - m_2, 𝒫_1,2(𝐋,𝐦) = 1/192(L_1^4+L_2^4) + 1/96L_1^2L_2^2 -1/24(L_1^2+L_2^2)m_1 + 1/12m_1^2-1/24m_2. Note that this provides a relatively efficient way of calculating the Weil–Petersson volumes V_g,n(𝐋) from the polynomial 𝒫_g,0, since V_g,n(𝐋) = T_g,n(𝐋;0] = 𝒫_g,n(𝐋,𝐦) |_m_k = (-2π^2)^k/k!. A simple corollary of Theorem <ref> is that the volumes satisfy string and dilaton equations generalizing those for the Weil–Petersson volumes derived by Do & Norbury in <cit.>. For all g ≥ 0 and n ≥ 1, such that n≥ 4 when g=0 and n≥ 2 when g=1, we have the identities ∑_p=0^∞ 2^p p! M_p[μ] [L_1^2p] T_g,n(𝐋;μ] = ∑_j=2^n ∫ L_j L_j T_g,n-1(𝐋_{1};μ] + 1_{g=0,n=3}, ∑_p=1^∞ 2^p p! M_p-1[μ] [L_1^2p] T_g,n(𝐋;μ] = (2g-3+n)T_g,n-1(𝐋_{1};μ], where the notation [L_1^2p]T_g,n(𝐋;μ] refers to the coefficient of L_1^2p in the polynomial T_g,n(𝐋;μ]. As explained in <cit.>, the string and dilaton equations for symmetric polynomials, in particular for the Weil–Petersson volumes, give rise to a recursion in n for genus 0 and 1. Using Theorem <ref>, we also get such a recursion for higher genera in the case of tight Weil–Petersson volumes. §.§ Idea of the proofs This work is largely inspired by the recent work <cit.> of Bouttier, Guitter & Miermont. There the authors consider the enumeration of planar maps with three boundaries, i.e. graphs embedded in the triply punctured sphere, see the left side of Figure <ref>. Explicit expressions for the generating functions of such maps, also known as pairs of pants, with controlled face degrees were long known, but they show that these generating functions become even simpler when restricting to tight pairs of pants, in which the three boundaries are required to have minimal length (in the sense of graph distances) in their homotopy classes. They obtain their enumerative results on tight pairs of pants in a bijective manner by considering a canonical decomposition of a tight pair of pants into certain triangles and diangles, see Figure <ref>. Our result (<ref>) for the genus-0 tight Weil–Petersson volumes with three distinguished boundaries can be seen as the analogue of <cit.>, although less powerful because our proof is not bijective. Instead, we derive generating functions of tight Weil–Petersson volumes from known expressions in the case of ordinary Weil–Petersson volumes. The general idea is that a genus-0 hyperbolic surface with two distinguished (but not necessarily tight) boundaries can unambiguously be cut along a shortest geodesic separating those two boundaries, resulting in a pair of certain half-tight cylinders (Figure <ref>). Also a genus-g surface with n distinguished (not necessarily tight) boundaries can be shown to decompose into a tight hyperbolic surface and n half-tight cylinders. The first decomposition uniquely determines the Weil–Petersson volumes of the moduli spaces of half-tight cylinders, while the second determines the tight Weil–Petersson volumes. This relation is at the basis of Proposition <ref>.
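As an aside, the recursion in n of Theorem <ref> is straightforward to implement symbolically. The following minimal sketch (assuming SymPy; the helper names are ours and the truncation at six variables is arbitrary) reproduces the polynomials 𝒫_0,4, 𝒫_0,5 and 𝒫_1,2 listed above and recovers the Weil–Petersson volumes V_0,4 and V_1,2 through the substitution m_k = (-2π^2)^k/k!.

import sympy as sp

Ls = sp.symbols('L1:7')        # boundary lengths L1,...,L6
ms = sp.symbols('m1:7')        # moment ratios m1,...,m6
x = sp.symbols('x')
L = lambda i: Ls[i - 1]
m = lambda p: ms[p - 1]

def step(P_prev, g, n):
    """P_{g,n} from P_{g,n-1}; P_prev is given in the variables L1,...,L_{n-1}."""
    # re-express P_{g,n-1} in the variables L2,...,Ln (shift indices upwards)
    P = P_prev.subs([(L(i), L(i + 1)) for i in range(n - 1, 0, -1)])
    res = (2 * g - 3 + n) * (-m(1) + L(1)**2 / 2) * P
    for p in range(1, 3 * g - 4 + n + 1):
        coeff = (m(p + 1) - L(1)**(2 * p + 2) / (2**(p + 1) * sp.factorial(p + 1))
                 - m(1) * m(p) + sp.Rational(1, 2) * L(1)**2 * m(p))
        res += coeff * sp.diff(P, m(p))
    for i in range(2, n + 1):
        res += sp.integrate(x * P.subs(L(i), x), (x, 0, L(i)))
    return sp.expand(res)

P03 = sp.Integer(1)
P04 = step(P03, 0, 4)                                  # (L1^2+...+L4^2)/2 - m1
P05 = step(P04, 0, 5)                                  # matches the expression above
P11 = sp.Rational(1, 24) * (-m(1) + L(1)**2 / 2)
P12 = step(P11, 1, 2)

# ordinary Weil-Petersson volumes: substitute m_k = (-2*pi^2)^k / k!
wp = {m(k): (-2 * sp.pi**2)**k / sp.factorial(k) for k in range(1, 7)}
V04 = sp.expand(P04.subs(wp))
V12 = sp.expand(P12.subs(wp))
assert sp.expand(V04 - (2 * sp.pi**2 + sum(L(i)**2 for i in range(1, 5)) / 2)) == 0
assert sp.expand(V12 - ((L(1)**2 + L(2)**2)**2 / 192
                        + sp.pi**2 * (L(1)**2 + L(2)**2) / 12 + sp.pi**4 / 4)) == 0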
To arrive at the recursion formula of Theorem <ref> we follow the line or reasoning of Mirzakhani's proof <cit.> of Witten's conjecture <cit.> (proved first by Kontsevich <cit.>). She observes that the recursion equation (<ref>) implies that the generating function of certain intersection numbers satisfies an infinite family of partial differential equations, the Virasoro constraints. Mulase & Safnuk <cit.> have observed that the reverse implication is true as well. We will demonstrate that the generating functions of tight Weil–Petersson volumes and ordinary Weil–Petersson volumes are related in a simple fashion when expressed in terms of the times (<ref>) and that the former obey a modified family of Virasoro constraints. These constraints in turn are equivalent to the generalized recursion of Theorem <ref>. §.§ Discussion Mirzakhani's recursion formula has a bijective interpretation <cit.>. Upon multiplication by 2L_1 the left-hand side 2L_1 V_g,n(𝐋) accounts for the volume of surfaces with a marked point on the first boundary. Tracing a geodesic ray from this point, perpendicularly to the boundary, until it self-intersects or hits another boundary allows one to canonically decompose the surface into a hyperbolic pair of pants (3-holed sphere) and one or two smaller hyperbolic surfaces. The terms on the right-hand side of <cit.> precisely take into account the Weil–Petersson volumes associated to these parts and the way they are glued. It is natural to expect that Theorem <ref> admits a similar bijective interpretation, in which the surface decomposes into a tight pair of pants (a sphere with 3+p boundaries, three of which are tight) and one or two smaller tight hyperbolic surfaces. However, Mirzakhani's ray shooting procedure does not generalize in an obvious way. Nevertheless, working under the assumption that a bijective decomposition exists, one is led to suspect that the generalized kernel K(x,t,μ] of Theorem <ref> contains important information about the geometry of tight pairs of pants. Moreover, one would hope that this geometry can be further understood via a decomposition of the tight pairs of pants themselves analogous to the planar map case of <cit.> described above. Since a genus-0 surface 𝖷∈ℳ_0,3+p(0,0,0,𝐋) with three distinguished cusps is always a tight pair of pants (since the zero length boundaries are obviously minimal), a consequence of a bijective interpretation of Theorem <ref> is a conjectural interpretation of the series X̂(u;μ] = sin(2π u)/(2π uη) in (<ref>) in terms of the hyperbolic distances between the three cusps. To be precise, let c_1,c_2,c_3 be unit-length horocycles around the three cusps and Δ(𝖷) = d_hyp(c_1,c_2)-d_hyp(c_1,c_3) the difference in hyperbolic distance between two pairs, then it is plausible that X̂(u;μ] ?=∑_p≥ 01/p!∫(∫_ℳ_0,3+p(0,0,0,𝐋) e^2u Δ(𝖷)μ_WP(𝖷) ) μ(L_1)⋯μ(L_p). Or in the probabilistic terms of Section <ref>, the measure M_0[μ] X(z;μ] on ℝ, which integrates to 1 due to (<ref>), is the probability distribution of the random variable 2Δ(𝖷) in a genus-0 Boltzmann hyperbolic surface 𝖷 with weight μ. In upcoming work we shall address this conjecture using very different methods. Another natural question to ask is whether the generalization of the spectral curve (<ref>) of Weil–Petersson volumes to the one of tight Weil–Petersson volumes in Theorem <ref> can be understood in the general framework of deformations of spectral curves in topological recursion <cit.>. 
§.§ Outline The structure of the paper is as follows: In section <ref> we introduce the half-tight cylinder, which allows us to do tight decomposition of surfaces, which relates the regular hyperbolic surfaces to the tight surfaces. Using the decomposition we prove Proposition <ref>. In section <ref> consider the generating functions of (tight) Weil–Petersson volumes and their relations. Furthermore, we use the Virasoro constraints to prove Theorem <ref>, Theorem <ref> and Corollary <ref>. In section <ref> we take the Laplace transform of the tight Weil–Petersson volumes and prove Theorem <ref>. We also look at the relation between the disk function of the regular hyperbolic surfaces and the generating series of moment η. Finally, in section <ref> we briefly discuss how our results may be of use in the study of JT gravity. Acknowledgments This work is supported by the START-UP 2018 programme with project number 740.018.017 and the VIDI programme with project number VI.Vidi.193.048, which are financed by the Dutch Research Council (NWO). § DECOMPOSITION OF TIGHT HYPERBOLIC SURFACES §.§ Half-tight cylinder Recall that a boundary ∂_i of X ∈ℳ_g,n+p(𝐋) is said to be tight in S_g,n,p if ∂_i is the only simple cycle γ in [∂_i]_S_g,n,p of length ℓ_γ(X) ≤ L_i and we defined the moduli space of tight hyperbolic surfaces as ℳ^tight_g,n,p(𝐋) = { X ∈ℳ_g,n+p(𝐋) : ∂_i is tight in S_g,n,p}⊂ℳ_g,n+p(𝐋). We noted before that when g=0 and n=2 we have ℳ^tight_0,2,p(𝐋) = ∅ because ∂_1 and ∂_2 belong to the same free homotopy class of S_0,2,p and can therefore never both be the unique shortest cycle. Instead, it is useful for any p≥ 1 to consider the moduli space of half-tight cylinders ℋ_p(𝐋) = { X ∈ℳ_0,2+p(𝐋) : ∂_2 is tight in S_0,2,p}⊂ℳ_0,2+p(𝐋), which is non-empty whenever L_1 > L_2 > 0. We will also consider ℋ_p(𝐋) = { X ∈ℳ_0,2+p(𝐋) : ∂_2 has minimal length in [∂_2]_S_0,2,p}⊂ℳ_0,2+p(𝐋) and denote its Weil–Petersson volume by H_p(𝐋). By construction, it is non-zero for L_1 ≥ L_2 > 0 and H_p(𝐋) ≤ V_0,2+p(𝐋). ℋ_p(𝐋) is an open subset of ℳ_0,2+p(𝐋), and when it is non-empty (L_1>L_2) its closure is ℋ_p(𝐋). In particular, both have the same finite Weil–Petersson volume H_p(𝐋) when L_1 > L_2, but ℋ_p(𝐋) has 0 volume and ℋ_p(𝐋) non-zero volume H_p(𝐋) when L_1 = L_2. For L_1 > L_2, ℋ_p(𝐋) is the intersection of the open sets {ℓ_γ(X) > L_2 } indexed by the countable set of free homotopy classes γ in [∂_2]_S_0,2,p. It is not hard to see that in a neighbourhood of any X ∈ℋ_p(𝐋) only finitely many of these are important, so the intersection is open. Its closure is given by the countable intersection of closed sets {ℓ_γ(X) ≥ L_2 }, which is precisely ℋ_p(𝐋). §.§ Tight decomposition We are now ready to state the main result of this section. The Weil–Petersson volumes T_g,n,p(𝐋) and H_p(𝐋) satisfy V_g,n+p(𝐋) = ∑_I_0 ⊔⋯⊔ I_n = {n+1,…,n+p}∫ T_g,n,|I_0|(𝐊,𝐋_I_0)∏_1≤ i≤ n I_i ≠∅ H_|I_i|(L_i,K_i,𝐋_I_i) K_i K_i, (g≥ 1 or n≥ 3) V_0,2+p(𝐋) = H_p(𝐋) + ∑_I_1 ⊔ I_2 = {3,…,2+p} I_1,I_2≠∅∫_0^L_2 H_|I_1|(L_1,K,𝐋_I_1) H_|I_2|(L_2,K,𝐋_I_2) K K, (L_1 ≥ L_2) where in the first equation it is understood that K_i = L_i whenever I_i = ∅. The remainder of this section will be devoted to proving this result. But let us first see how it implies Proposition <ref>. Clearly H_1(𝐋) = V_0,3(𝐋) = 1 for L_1 ≥ L_2 and T_g,n,0(𝐋) = V_g,n(𝐋). 
Rewriting the equations as T_g,n,p(𝐋) = V_g,n+p(𝐋) - ∑_I_0 ⊔⋯⊔ I_n = {n+1,…,n+p} |I_0| < p∫ T_g,n,|I_0|(𝐊,𝐋_I_0)∏_1≤ i≤ n I_i ≠∅ H_|I_i|(L_i,K_i,𝐋_I_i) K_i K_i, H_p(𝐋) = V_0,2+p(𝐋) - ∑_I_1 ⊔ I_2 = {3,…,2+p} I_1,I_2≠∅∫_0^L_2 H_|I_1|(L_1,K,𝐋_I_1) H_|I_2|(L_2,K,𝐋_I_2) K K, it is clear that they are uniquely determined recursively in terms of V_g,n(𝐋). Moreover, by induction we easily verify that H_p(𝐋) in the region L_1 ≥ L_2 is a polynomial in L_1^2,…,L_2+p^2 of degree p-1 that is symmetric in L_3,…,L_2+p, and T_g,n,p is a polynomial in L_1^2,…,L_n+p^2 of degree 3g-3+n+p that is symmetric in L_1,…,L_n and symmetric in L_n+1,…,L_n+p. §.§ Tight decomposition in the stable case §.§.§ Shortest cycles The following parallels the construction of shortest cycles in maps described in <cit.>. Given a hyperbolic surface X ∈ℳ_g,n+p for g≥ 1 or n≥ 2, then for each i=1,…,n there exists a unique innermost shortest cycle σ^i_S_g,n,p(X) on X, meaning that it has minimal length in [∂_i]_S_g,n,p and such that all other cycles of minimal length (if they exist) are contained in the region of X delimited by ∂_i and σ^i_S_g,n,p(X). Moreover, if g≥ 1 or n ≥ 3, the curves σ^1_S_g,n,p(X), …,σ^n_S_g,n,p(X) are disjoint. First note that if a shortest cycle exists, it is a simple closed geodesic. As a consequence of <cit.>, there are only finitely many closed geodesics with length ≤ L_i in [∂_i]_S_g,n,p. Since ∂_i∈ [∂_i]_S_g,n,p has length L_i, this proves the existence of at least one cycle in [∂_i]_S_g,n,p with minimal length. Regarding the existence and uniqueness of a well-defined innermost shortest cycle, suppose α,β∈ [∂_i]_S_g,n,p are two distinct simple closed geodesics with minimal length ℓ (see left side of Figure <ref>). Since α∈ [∂_i]_S_g,n,p, cutting along α separates the surface in two disjoint parts. Therefore, α and β can only have an even number of intersections. If the number of intersections is greater than zero, we can choose two distinct intersections and combine α and β to get two distinct cycles γ_1 and γ_2 by switching between α and β at the chosen intersections, such that γ_1 and γ_2 are still in [∂_i]_S_g,n,p. Since the total length is still 2ℓ, at least one of the new cycles has length ≤ℓ. This cycle is not geodesic, so there will be a closed cycle in [∂_i]_S_g,n,p with length <ℓ, which contradicts that α and β have minimal length. We conclude that α and β are disjoint. Since all cycles in [∂_i]_S_g,n,p with minimal length are disjoint and separating, the notion of being innermost is well-defined. Consider α_i=σ^i_S_g,n,p(X) and α_j=σ^j_S_g,n,p(X) for i≠ j (see right side of Figure <ref>). Just as before, since α_i is separating and α_i and α_j are simple, the number of intersections is even. If α_i and α_j are not disjoint, we can choose two distinct intersections and construct two distinct cycles γ_i and γ_j by switching between α_i and α_j at the chosen intersections, such that γ_i and γ_j are in [∂_i]_S_g,n,p and [∂_j]_S_g,n,p respectively. Since the total length of the cycles stays the same, there is at least one a∈{i,j} such that γ_a has length less or equal than α_a. Since γ_a is not geodesic, there is a closed cycle in [∂_a]_S_g,n,p with length strictly smaller than α_a, which is a contradiction, so the innermost shortest cycles are disjoint. 
In particular the proof implies the following criterions are equivalent: * A simple closed geodesic α∈ [∂_i]_S_g,n,p is the innermost shortest cycle σ^i_S_g,n,p(X); * For a simple closed geodesic α∈ [∂_i]_S_g,n,p we have ℓ(α) ≤ L_i and each simple closed geodesic β∈σ^i_S_g,n,p(X) that is disjoint from α has length ℓ(β)≥ℓ(α) with equality only being allowed if β is contained in the region between α and ∂_i. §.§.§ Integration on Moduli space Let us recap Mirzakhani's decomposition of moduli space integrals in the presence of distinguished cycles <cit.>. A multicurve Γ = (γ_1,…,γ_k) is a collection of disjoint simple closed curves Γ = (γ_1,…,γ_k) in S_g,n which are pairwise non-freely-homotopic. Given a multicurve, in which each curve γ_i may or may not be freely homotopic to a boundary ∂_j of S_g,n, one can consider the stabilizer subgroup Stab(Γ) = { h∈Mod_g,n : h ·γ_i = γ_i }⊂Mod_g,n. Note that if γ_i ∈ [∂_j]_S_g,n is freely homotopic to one of the boundaries ∂_j then h ·γ_i = γ_i for any h∈Mod_g,n. The moduli space of hyperbolic surfaces with distinguished (free homotopy classes of) curves is the quotient ℳ_g,n(𝐋)^Γ = 𝒯_g,n(𝐋)/Stab(Γ). For a closed curve γ in S_g,n and X∈ℳ_g,n, let ℓ_γ(X) be the length of the geodesic representative in the free homotopy class of γ. For 𝐊 = (K_1,…,K_k) ⊂_>0^k we can restrict the lengths of the geodesic representatives of curves in Γ by setting ℳ_g,n(𝐋)^Γ(𝐊) = { X ∈ℳ_g,n(𝐋)^Γ : ℓ_γ_i(X) = K_i, i=1,…,k}⊂ℳ_g,n(𝐋)^Γ. If γ_i ∈ [∂_j]_S_g,n then this set is empty unless K_i = L_j. Denote by π^Γ : ℳ_g,n(𝐋)^Γ→ℳ_g,n(𝐋) the projection. If there are exactly p cycles among Γ that are not freely homotopic to a boundary, then this space admits a natural action of the p-dimensional torus (S^1)^p obtained by twisting along each of these p cycles proportional to their length. The quotient space is denoted ℳ_g,n(𝐋)^Γ*(𝐊) = ℳ_g,n(𝐋)^Γ(𝐊) / (S^1)^p and is naturally equipped with a symplectic structure inherited from the Weil–Petersson symplectic structure on ℳ_g,n(𝐋)^Γ. If we denote by S_g,n(Γ) the possibly disconnected surface obtained from S_g,n by cutting along all γ_i that are not freely homotopic to a boundary and by ℳ(S_g,n(Γ),𝐋,𝐊) its moduli space, then according to <cit.>, the canonical mapping ℳ_g,n(𝐋)^Γ*(𝐊) →ℳ(S_g,n(Γ),𝐋,𝐊) is a symplectomorphism. Given an integrable function F : ℳ_g,n(𝐋)^Γ→ that is invariant under the action of (S^1)^p, there exists a naturally associated function F̃ : ℳ(S_g,n(Γ),𝐋,𝐊) → such that (essentially <cit.>) ∫_ℳ_g,n(𝐋)^Γ F(X) μ_WP(X) = ∫∏_1≤ i≤ n γ_i ∉ [∂_j]_S_g,n K_i K_i ∫_ℳ(S_g,n(Γ),𝐋,𝐊)F̃(X) μ_WP(X). §.§.§ Shortest multicurves Suppose g≥ 1 or n≥ 3, meaning that we momentarily exclude the cylinder case (g=0, n=2). We consider now a special family of multicurves Γ = (γ_1,…,γ_n) on S_g,n+p for n≥ 1, p≥0. Namely, we require that γ_i ∈ [∂_i]_S_g,n,p is freely homotopic to the boundary ∂_i in the capped-off surface S_g,n,p for i=1,…,n. Then there exists a partition I_0 ⊔⋯⊔ I_n = {n+1,…,n+p} such that S_g,n+p(Γ) has n+1 connected components s_0,…,s_n, where s_0 is of genus g and is adjacent to all curves Γ as well as the boundaries (∂_j)_j∈ I_0 while for each i=1, …,n, s_i is of genus 0 and contains the ith boundary ∂_i as well as (∂_j)_j∈ I_i and is adjacent to γ_i. Note that I_i = ∅ if and only if γ_i ∈ [∂_i]_S_g,n+p. Finally, we observe that mapping class group orbits {Mod_g,n+p· [Γ]_S_g,n+p} of these multicurves Γ are in bijection with the set of partitions {I_0 ⊔⋯⊔ I_n = {n+1,…,n+p}}. 
With the help of Lemma <ref> we may introduce the restricted moduli space in which we require γ_i to be (freely homotopic to) the innermost shortest cycle in [∂_i]_S_g,n,p, ℳ̂_g,n,p(𝐋)^Γ = { X ∈ℳ_g,n+p(𝐋)^Γ : γ_i∈ [σ^i_S_g,n,p(X)] for i=1,…,n}. The natural projection _Γℳ̂_g,n,p(𝐋)^Γ⟶ℳ_g,n+p(𝐋), where the disjoint union is over (representatives of) the mapping class group orbits of multicurves Γ, is a bijection. If X,X'∈𝒯_g,n+p(𝐋) are representatives of hyperbolic surfaces in ℳ̂_g,n,p(𝐋)^Γ and ℳ̂_g,n,p(𝐋)^Γ' respectively, then by definition [γ_i] = [σ^i_S_g,n,p(X)] and [γ_i']=[σ^i_S_g,n,p(X')]. If X and X' represent the same surface in ℳ_g,n+p(𝐋), they are related by an element h of the mapping class group, X' = h· X, and therefore also σ^i_S_g,n,p(X) = h·σ^i_S_g,n,p(X') and [γ_i] = h·[γ_i']. So Γ and Γ' belong to the same mapping class group orbit and, if Γ and Γ' are freely homotopic, we must have h ∈Stab(Γ). Hence, X and X' represent the same element in the set on the left-hand side, and we conclude that the projection is injective. It is also surjective since any X∈𝒯_g,n+p(𝐋) is a representative of ℳ̂_g,n,p(𝐋)^Γ if we take Γ = (σ^1_S_g,n,p(X),…,σ^n_S_g,n,p(X)), which is a valid multicurve due to Lemma <ref>. We can introduce the length-restricted version ℳ̂_g,n,p(𝐋)^Γ(𝐊) ⊂ℳ_g,n+p(𝐋)^Γ(𝐊) as before. The subset ℳ̂_g,n,p(𝐋)^Γ(𝐊)⊂ℳ_g,n+p(𝐋)^Γ(𝐊) is invariant under twisting (the torus-action on ℳ_g,n+p(𝐋)^Γ(𝐊) described above). The image of the quotient ℳ̂_g,n,p(𝐋)^Γ*(𝐊) under the symplectomorphism (<ref>) is precisely ℳ^tight_g,n,|I_0|(𝐊,𝐋_I_0)×∏_1≤ i≤ n I_i ≠∅ℋ_|I_i|(L_i,K_i,𝐋_I_i). Let X ∈ℳ_g,n+p(𝐋)^Γ(𝐊) be a hyperbolic surface with distinguished multicurve Γ. The lengths of the geodesics associated to Γ as well as the lengths of the geodesics that are disjoint from those geodesics are invariant under twisting X along Γ. The criterion explained just below Lemma <ref> for γ_i to be the innermost shortest cycle σ^i_S_g,n,p(X) is thus also preserved under twisting, showing that the subset ℳ̂_g,n,p(𝐋)^Γ(𝐊) is invariant. Let X_0 ∈ℳ_g,n,|I_0|(𝐊,𝐋_I_0) and X_i ∈ℳ_0,2+|I_i|(L_i,K_i,𝐋_I_i) for those i=1,…, n for which I_i≠∅ be the hyperbolic structures on the connected components s_0,…,s_n of S_g,n+p(Γ) obtained by cutting X along the geodesics associated to Γ. For each i=1,…,n the criterion for γ_i to be the innermost shortest cycle σ^i_S_g,n,p(X) is equivalent to the following two conditions holding: * the ith boundary of X_0 is tight in the capped-off surface associated to s_0; * I_i = ∅ (meaning γ_i = ∂_i) or X_i ∈ℋ_|I_i|(L_i,K_i,𝐋_I_i) (recall the definition in (<ref>)). Hence, we have X ∈ℳ̂_g,n,p(𝐋)^Γ(𝐊) precisely when X_0 ∈ℳ^tight_g,n,|I_0|(𝐊,𝐋_I_0) and X_i ∈ℋ_|I_i|(L_i,K_i,𝐋_I_i) when I_i ≠∅. This proves the second statement of the lemma. It follows that the Weil–Petersson volume of ℳ̂_g,n,p(𝐋)^Γ*(𝐊) is equal to the product of the volumes of the spaces appearing in (<ref>). Combining with Lemma <ref> and the integration formula (<ref>) this shows that V_g,n+p(𝐋) = ∑_Γ∫_ℳ̂_g,n,p(𝐋)^Γμ_WP = ∑_I_0 ⊔⋯⊔ I_n = {n+1,…,n+p}∫ T_g,n,|I_0|(𝐊,𝐋_I_0)∏_1≤ i≤ n I_i ≠∅ H_|I_i|(L_i,K_i,𝐋_I_i) K_i K_i, where it is understood that K_i = L_i whenever I_i = ∅. This proves the first relation of Proposition <ref>. §.§ Tight decomposition of the cylinder The decomposition we have just described does not work well in the case g=0 and n=2, because ∂_1 and ∂_2 are in the same free homotopy class of the capped surface S_0,2,p. 
Instead, we should consider a multicurve Γ=(γ_1) consisting of a single curve γ_1 on S_0,2+p in the free homotopy class [∂_1]_S_0,2,p=[∂_2]_S_0,2,p, see Figure <ref>. In this case there exists a partition I_1 ⊔ I_2 = {3,…,p+2} such that S_0,2+p(Γ) has two connected components s_1 and s_2, with s_i a genus-0 surface with 2+|I_i| boundaries corresponding to ∂_i, γ_1 and (∂_j)_j∈ I_i. We consider the restricted moduli space ℳ̂_0,2,p(𝐋)^Γ = { X ∈ℳ_0,2+p(𝐋)^Γ : γ_1∈ [σ^1_S_0,2,p(X)]}, which thus treats the two boundaries ∂_1 and ∂_2 asymmetrically, by requiring that γ_1 is the shortest curve farthest from ∂_1. Lemma <ref> goes through unchanged: the projection _Γℳ̂_0,2,p(𝐋)^Γ⟶ℳ_0,2+p(𝐋), where the disjoint union is over the mapping class group orbits of Γ = (γ_1), is a bijection. Assuming L_1 ≥ L_2, we cannot have γ_1 ∈ [∂_1]_S_0,2+p so I_1 ≠∅. There are two cases to consider: * γ_1 ∈ [∂_2]_S_0,2+p and therefore I_2 = ∅: this means that ∂_2 has minimal length in [∂_2]_S_0,2,p, so ℳ̂_0,2,p(𝐋)^Γ = ℋ_p(𝐋). * I_2 ≠∅: by reasoning analogous to that of Lemma <ref> we have that ℳ̂_0,2,p(𝐋)^Γ*(K) is symplectomorphic to ℋ_|I_1|(L_1,K,𝐋_I_1) ×ℋ_|I_2|(L_2,K,𝐋_I_2). Hence, when L_1 ≥ L_2 we have V_0,2+p(𝐋) = H_p(𝐋) + ∑_I_1 ⊔ I_2 = {3,…,2+p} I_1,I_2≠∅∫_0^L_2 H_|I_1|(L_1,K,𝐋_I_1) H_|I_2|(L_2,K,𝐋_I_2) K K. This proves the second relation of Proposition <ref>. § GENERATING FUNCTIONS OF TIGHT WEIL–PETERSSON VOLUMES §.§ Definitions Let us define the following generating functions of the Weil–Petersson volumes, half-tight cylinder volumes and tight Weil–Petersson volumes: F_g[μ] = ∑_n=0^∞1/n!∫μ(L_1)⋯∫μ(L_n) V_g,n(𝐋), H(L_1,L_2;μ] = ∑_p=1^∞1/p!∫μ(L_3)⋯∫μ(L_2+p) H_p(𝐋), F̃_g[ν,μ] =∑_n=0^∞1/n!∫ν(L_1)⋯∫ν(L_n) T_g,n(L_1,…,L_n;μ]. Furthermore, for g≥ 2 we recall the polynomial 𝒫_g,0(m_1,…,m_3g-3) = ∑_d_2,d_3,…≥ 0 ∑_k≥ 2 (k-1)d_k = 3g-3⟨τ_2^d_2τ_3^d_3⋯⟩_g ∏_k≥ 2(-m_k-1)^d_k/d_k!, where ⟨τ_2^d_2τ_3^d_3⋯⟩_g are the ψ-class intersection numbers on the moduli space ℳ_g,n with n = ∑_k d_k ≤ 3g-3 marked points. Then according to <cit.>[Note that there has been a shift in conventions, e.g. regarding factors of 2. ] F_0[μ] = 1/2∫_0^R r Z(r;μ]^2, F_1[μ] = - 1/24log M_0[μ], F_g[μ] = 1/(M_0[μ])^2g-2 𝒫_g,0(M_1[μ]/M_0[μ],…, M_3g-3[μ]/M_0[μ]) for g≥ 2. In the genus-0 case we can take successive derivatives to find useful formulas for one, two or three distinguished boundaries of prescribed lengths, F_0[μ]μ(L_1) = -∫_0^R[μ] I_0(L_1√(2r)) Z(r;μ] r, , δ^2F_0[μ]/δμ(L_1)δμ(L_2) = ∫_0^R[μ] I_0(L_1√(2r)) I_0(L_2√(2r)) r, δ^3 F_0[μ]/δμ(L_1)δμ(L_2)δμ(L_3) = 1/M_0[μ][∏_i=1^3 I_0(L_i√(2R[μ]))]. §.§ Volume of half-tight cylinder The equations of Proposition <ref> turn into the equations δ^n F_g[μ]/δμ(L_1)⋯δμ(L_n) = ∫ T_g,n(𝐊;μ]∏_i=1^n (K_i H(L_i,K_i;μ] + δ(K_i - L_i)) K_i, (g≥ 1 or n≥ 3) δ^2 F_0[μ]/δμ(L_1)δμ(L_2) = H(L_1,L_2;μ] + ∫_0^L_2 H(L_1,K;μ] H(L_2,K;μ] K K. (L_1 ≥ L_2) Let us focus on the last equation, which should determine H(L_1,L_2;μ] uniquely. The left-hand side depends on μ only through the quantity R[μ], and the dependence on R is analytic, δ^2 F_0[μ]/δμ(L_1)δμ(L_2) = R + 1/4(L_1^2+L_2^2)R^2 + 1/48(L_1^4 + 4 L_1^2L_2^2 + L_2^4) R^3 + ⋯. Hence, the same is true for H(L_1,L_2;μ] and one may easily calculate order by order in R that H(L_1,L_2;μ] = R + 1/4(L_1^2-L_2^2)R^2 + 1/48(L_1^2-L_2^2)^2R^3+⋯. This suggests that H(L_1,L_2;μ] depends on L_1 and L_2 only through the combination L_1^2 - L_2^2. Let's prove this. 
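Before giving the proof, the claim can be checked symbolically to any finite order: the sketch below (assuming SymPy) expands δ^2 F_0[μ]/δμ(L_1)δμ(L_2) in powers of R, solves the relation above for H(L_1,L_2;μ] order by order, and confirms that the coefficients of R^2 and R^3 depend on L_1 and L_2 only through L_1^2-L_2^2.

import sympy as sp

L1, L2, K, R, r = sp.symbols('L1 L2 K R r', positive=True)
N = 4   # truncation order in R

# delta^2 F_0 / (delta mu(L1) delta mu(L2)) = int_0^R I_0(L1 sqrt(2r)) I_0(L2 sqrt(2r)) dr
I0 = lambda Lb: sum((Lb**2 * r / 2)**k / sp.factorial(k)**2 for k in range(N))
W2 = sp.integrate(sp.expand(I0(L1) * I0(L2)), (r, 0, R))

# solve  W2 = H + int_0^{L2} H(L1,K) H(L2,K) K dK  order by order in R
H = sp.Integer(0)
for j in range(1, N + 1):
    HK1 = H.subs(L2, K)                                    # H(L1, K)
    HK2 = H.subs([(L1, L2), (L2, K)], simultaneous=True)   # H(L2, K)
    corr = sp.integrate(sp.expand(K * HK1 * HK2), (K, 0, L2))
    H += sp.expand(W2 - corr).coeff(R, j) * R**j

assert sp.expand(H.coeff(R, 2) - (L1**2 - L2**2) / 4) == 0
assert sp.expand(H.coeff(R, 3) - (L1**2 - L2**2)**2 / 48) == 0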
The half-tight cylinder generating function satisfies (1/L_1∂/∂ L_1+1/L_2∂/∂ L_2) H(L_1,L_2;μ] = 0, and is therefore given by H(L_1,L_2;μ] = ∑_ℓ=0^∞2^-ℓ R[μ]^ℓ+1/ℓ!(ℓ+1)!(L_1^2-L_2^2)^ℓ = √(2R[μ]/L_1^2-L_2^2) I_1( √(L_1^2-L_2^2)√(2R[μ])) (L_1 ≥ L_2). By construction H(L_1,0;μ] = δ^2F_0[μ]/δμ(L_1)δμ(0) and the integral (<ref>) with L_2=0 evaluates to H(L_1,0;μ] = δ^2F_0[μ]/δμ(L_1)δμ(0) = √(2R)/L_1 I_1(L_1√(2R)). The identity ∂/∂ r( √(2r)/L_1 I_1(L_1√(2r))√(2r)/L_2 I_1(L_2√(2r))) = (1/L_1∂/∂ L_1+1/L_2∂/∂ L_2) I_0(L_1√(2r)) I_0(L_2√(2r)), which can be easily checked by calculating the derivatives, implies that H(L_1,0;μ] H(L_2,0;μ] = (1/L_1∂/∂ L_1+1/L_2∂/∂ L_2) δ^2F_0[μ]/δμ(L_1)δμ(L_2). Hence, by (<ref>) we find that (1/L_1∂/∂ L_1+1/L_2∂/∂ L_2) H(L_1,L_2;μ] = H(L_1,0;μ] H(L_2,0;μ] - (1/L_1∂/∂ L_1+1/L_2∂/∂ L_2) ∫_0^L_2 H(L_1,K;μ] H(L_2,K;μ] K K = H(L_1,0;μ] H(L_2,0;μ] - H(L_1,L_2;μ] H(L_2,L_2;μ] - ∫_0^L_2(1/L_1∂/∂ L_1+1/L_2∂/∂ L_2)H(L_1,K;μ] H(L_2,K;μ] K K = -∫_0^L_2∂/∂ K(H(L_1,K;μ] H(L_2,K;μ]) K - ∫_0^L_2(1/L_1∂/∂ L_1+1/L_2∂/∂ L_2)H(L_1,K;μ] H(L_2,K;μ] K K = - ∫_0^L_2[H(L_1,K;μ](1/L_2∂/∂ L_2+1/K∂/∂ K) H(L_2,K;μ] + H(L_2,K;μ](1/L_1∂/∂ L_1+1/K∂/∂ K) H(L_1,K;μ]] K K. Since the leading coefficient in R of H(L_1,L_2;μ] satisfies (<ref>), it follows that the same is true for the higher-order coefficients in R. As a consequence of (<ref>), H(L_1,L_2;μ] = H(√(L_1^2-L_2^2),0;μ] and the claimed expression (<ref>) follows from (<ref>). §.§ Rewriting generating functions Since the work of Mirzakhani <cit.> it is known that the Weil–Petersson volumes V_g,n(𝐋) are expressible in terms of intersection numbers as follows. The compactified moduli space ℳ_g,n of genus-g curves with n marked points comes naturally equipped with the Chern classes ψ_1,…,ψ_n associated with its n tautological line bundles, as well as the cohomology class κ_1 of the Weil–Petersson symplectic structure (up to a factor 2π^2). The corresponding intersection numbers are given by the integrals <κ_1^mτ_d_1⋯τ_d_n>_g,n = ∫_ℳ_g,nκ_1^m ψ_1^d_1⋯ψ_n^d_n, where d_1,…,d_n ≥ 0 and n = d_1+⋯ d_n + m + 3 - 3g. For g ≥ 0 we denote the generating function of these intersection numbers by G_g(s;x_0,x_1,…)=∑_n≥ 01/n!∑_m,d_1,…,d_n ≥ 0 d_1+⋯+d_n+m=3g-3+n<κ_1^mτ_d_1⋯τ_d_n>_g,n s^m/m! x_d_1⋯ x_d_n. We may sum over all genera to arrive at the generating function G(s;x_0,x_1,…)=∑_g=0^∞λ^2g-2 G_g(s;x_0,x_1,…). In order to lighten the notation we do not write the dependence on λ explicitly here, which only serves as a formal generating variable. Note that λ is actually redundant for organizing the series, since any monomial appears in at most one of the G_g as can be seen from (<ref>). Then the generating function of Weil–Petersson volumes can be expressed as ∑_g=0^∞λ^2g-22^3-3gF_g[μ]=G(π^2;t_0[μ],t_1[μ],…), where the times t_k[μ] are defined by t_k[μ]=∫μ(L) 2L^2k/4^kk!. See <cit.> based on <cit.>, where one should be careful that some conventions differ by some factors of two compared to the current work. We will show that the (bivariate) generating function F̃_g[ν,μ] of tight Weil–Petersson volumes, defined in (<ref>), is also related to the intersection numbers, but with different times. The generating function of the volumes T_g,n is related to the generating function of intersection numbers via F̃_g[ν,μ] = 2^3g-3 G_g(0;τ_0[ν,μ],τ_1[ν,μ],…), where the shifted times τ_k[ν,μ] are defined by τ_k[ν,μ]=t_k[ν]+δ_k,1-2^1-kM_k-1[μ]. This proposition will be proved in the remainder of this subsection, relying on an appropriate substitution of the weight ν. 
To this end, we informally introduce a linear mapping H_μ on measures on the half line [0,∞) as follows. If ρ is a measure on [0,∞) we let H_μρ be the measure given by ρ + (∫ H(L,K;μ] ρ(L)) K K. The effect of H_μ on the times can be computed using the series expansion (<ref>), t_k[H_μρ] = t_k[ρ] + 2/4^k k!∫_0^∞(∫ H(L,K;μ] ρ(L)) K^2k+1K = t_k[ρ] + ∑_p=1^∞2(2R[μ])^p/p!4^p+k(p+k)!∫ρ(L) L^2p+2k = ∑_p=0^∞(2R[μ])^p/p! t_p+k[ρ]. We observe that H_μ acts as an infinite upper-triangular matrix on the times. This matrix is easily inverted to give t_q[ρ] = ∑_p=0^∞(-2R[μ])^p/p! t_p+q[H_μρ]. This means that knowledge of the generating function F̃_g[H_μρ,μ] with substituted weight H_μρ is sufficient to recover the original generating function F̃_g[ν,μ]. Luckily the former is within close reach. The generating functions for tight Weil–Petersson volumes and regular Weil–Petersson volumes are related by F̃_g[H_μρ,μ]=F_g[ρ+μ] - δ_g,0F_corr[ρ,μ], where the correction term F_corr[ρ,μ]=∑_n=0^21/n!∑_p=0^∞1/p!∫ρ(L_1)⋯ρ(L_n) μ(L_n+1)⋯μ(L_n+p) V_0,n+p(𝐋) is necessary to subtract the constant, linear and quadratic dependence on ρ in the genus-0 case. If g,n≥ 0 (such that n≥ 3 if g=0) and L_1,…,L_n ∈ [0,∞)∪ i (0,π), then Proposition <ref> allows us to compute ∑_p=0^∞1/p!∫μ(L_n+1)⋯μ(L_n+p) V_g,n+p(𝐋) = ∑_p=0^∞1/p!∫μ(L_n+1)⋯μ(L_n+p)-20mu ∑_I_0 ⊔⋯⊔ I_n = {n+1,…,n+p}∫ T_g,n,|I_0|(𝐊,𝐋_I_0)∏_1≤ i≤ n I_i ≠∅ H_|I_i|(L_i,K_i,𝐋_I_i) K_i K_i = ∑_p_0,…,p_n=0^∞1/p_0!… p_n!∫μ(L_n+1)⋯μ(L_n+p_0+⋯+p_n) T_g,n,p_0(𝐊,𝐋^(0))∏_1≤ i≤ n p_i ≠ 0 H_p_i(L_i,K_i,𝐋^(i)) K_i K_i, where we use the notation 𝐋^(j) = (L_n+p_0+⋯+p_j-1+1, … ,L_n+p_0+⋯+p_j). In terms of the tight Weil–Petersson volume generating function (<ref>) and the half-tight cylinder generating function (<ref>) this evaluates to ∑_p=0^∞1/p!∫μ(L_n+1)⋯μ(L_n+p) V_g,n+p(𝐋)=∑_J ⊂{1,…,n}∫ T_g,n(𝐊;μ] ∏_i∈ J K_i H(L_i,K_i;μ] K_i, where it is understood that in the argument of T_g,n(𝐊;μ] we take K_i = L_i for i∉ J. Expanding F_g[ρ+μ] from its definition (<ref>) we find F_g[ρ+μ] -F_corr[ρ,μ]δ_g,0 = ∑_n,p=0^∞_g≥1 or n≥31/n!p!∫ρ(L_1)⋯ρ(L_n) μ(L_n+1)⋯μ(L_n+p)V_g,n+p(𝐋) = ∑_n=0^∞1/n!_g≥1 or n≥3∫(∏_i=1^nρ(L_i))∑_p=0^∞1/p!∫(∏_i=n+1^n+pμ(L_i))V_g,n+p(𝐋) Plugging in (<ref>) and using that T_0,n(𝐊;μ]=0 for n<3, yields F_g[ρ+μ] -F_corr[ρ,μ]δ_g,0 = ∑_n=0^∞1/n!∫ρ(L_1)⋯ρ(L_n)∑_J ⊂{1,…,n}∫ T_g,n(𝐊;μ] ∏_i∈ J K_i H(L_i,K_i;μ] K_i = ∑_n=0^∞1/n!∫(H_μρ)(L_1)⋯(H_μρ)(L_n) T_g,n(𝐋;μ] = F̃_g[H_μρ , μ] as claimed. Lemma <ref> and (<ref>) together lead to the relation ∑_g=0^∞λ^2g-22^3-3gF̃_g[H_μρ,μ]=G(π^2;t_0[ρ+μ], t_1[ρ+μ],…)-8λ^-2F_corr. The right-hand side can be specialized, making use of a variety of identities between intersection numbers. Firstly, a relation between intersection numbers involving κ_1 and pure ψ-class intersection numbers <cit.> leads to the identity <cit.> G(s;x_0,x_1,…)=G(0;x_0,x_1,x_2+γ_2(s),…), where the shifts are γ_k(s)=(-1)^k/(k-1)!s^k-1_k≥2. For us this gives ∑_g=0^∞λ^2g-22^3-3gF̃_g[H_μρ,μ]=G(0;𝐭[ρ+μ]+γ(π^2))-8λ^-2F_corr, where we use the notation G(0;𝐱)=G(0;x_0,x_1,x_2,…). This can be further refined using Witten's observation <cit.>, proved by Kontsevich <cit.>, that G(0;𝐱) satisfies the string equation (- x_0 + ∑_p=0^∞ x_p+1x_p + x_0^2/2λ^2) e^G(0;𝐱)=0. Following a computation of Itzykson and Zuber <cit.>, it implies the following identity. The solution to the string equation (<ref>) satisfies a formal power series identity in the parameter r, G(0;x_0,x_1,…) = G(0; r + ∑_k=0^∞(-r)^k/k!x_k, ∑_k=0^∞(-r)^k/k!x_k+1, ∑_k=0^∞(-r)^k/k!x_k+2, …) - 1/2λ^2∫_0^r s(s + ∑_k=0^∞(-s)^k/k! 
x_k)^2 . For x_0,x_1,… fixed, let us consider the sequence of functions 𝐲(s) = (y_0(s),y_1(s),…), y_i(s)=δ_i,0 s+∑_k=0^∞(-s)^k/k!x_k+i, such that y_p'(s) = δ_p,0 - y_p+1(s). The string equation (<ref>) then implies s G(0;𝐲(s)) = ∂ G/∂ x_0 (0;𝐲(s)) - ∑_p=0^∞ y_p+1(s) ∂ G/∂ x_p(𝐲(s))= y_0(s)^2/2λ^2. Integrating from s=0 to s=r gives the claimed identity. Before we can use this lemma, we establish a relation between τ_k[H_μρ,μ] and t_k[ρ+μ]. We can rewrite t_q[ρ+μ]+γ_q(π^2)=δ_q,0 2R[μ]+∑_p=0^∞(-2R[μ])^p/p!τ_p+q[H_μρ,μ] , where the shifted times τ_k[ν,μ] are defined in (<ref>). We first relate the moments M_i[μ] defined in (<ref>) to the times t_i[μ]. Note that Z(u;μ] defined in (<ref>) can be expressed in the times as Z(u;μ] =u-∑_k=0^∞(2u)^k/2k!(t_k[μ]+γ_k(π^2)) By taking p derivatives with respect to u, we get ∑_k=0^∞(2R[μ])^k/k!(t_k+p[μ]+γ_k+p(π^2)) = 2R[μ] if p=0 1-M_0[μ] if p=1 -2^1-pM_p-1[μ] if p≥ 2 . Just like before in obtaining (<ref>), this can be inverted to t_q[μ]+γ_q(π^2) =δ_q,1-∑_p=0^∞(-2R[μ])^p/p!2^1-p-qM_p+q-1[μ] The right-hand side of (<ref>) can thus be expressed as δ_q,0 2R[μ]+∑_p=0^∞(-2R[μ])^p/p!τ_p+q[H_μρ,μ] = δ_q,1+∑_p=0^∞(-2R[μ])^p/p!t_p+q[H_μρ] -∑_p=0^∞(-2R[μ])^p/p!2^1-p-qM_p+q-1[μ] =t_q[μ]+γ_q(π^2)+∑_p=0^∞(-2R[μ])^p/p! t_p+q[H_μρ]. From (<ref>) the last term is just t_q[ρ], so we have reproduced the left-hand side of (<ref>), since t_q[ρ+μ] = t_q[ρ]+t_q[μ]. The last two lemmas allow us to express (<ref>) as ∑_g=0^∞λ^2g-22^3-3gF̃_g[H_μρ,μ] =G(0;τ[H_μρ,μ]) +1/2λ^2∫_0^2R[μ]s(s + ∑_k=0^∞(-s)^k/k!τ_k[H_μρ,μ])^2 -8/λ^2F_corr[ρ,μ]. To finish the proof of Proposition <ref> we thus only need to check that the last two terms cancel. We have F_corr[ρ,μ]= 1/16∫_0^2R[μ]s(s + ∑_k=0^∞(-s)^k/k!τ_k[H_μρ,μ])^2. Let us denote the right-hand side by G_corr. By the definition (<ref>), G_corr=1/16∫_0^2R[μ]s(∑_k=0^∞(-s)^k/k! (t_k[H_μρ]-2^1-kM_k-1[μ]))^2 Changing integration variables to r=R[μ]-s/2 gives G_corr =1/8∫_0^R[μ]r(∑_k=0^∞(2r-2R[μ])^k/k! (t_k[H_μρ]-2^1-kM_k-1[μ]))^2 =1/8∫_0^R[μ]r(∑_k=0^∞(2r-2R[μ])^k/k! t_k[H_μρ] -2Z(r;μ])^2 =1/8∫_0^R[μ]r(∑_k=0^∞(2r)^k/k! t_k[ρ] -2Z(r))^2, where in the second equality we use the series expansion of Z(r)=Z(r;μ] around r=R[μ] (recall from (<ref>) that M_k[μ] = Z^(k+1)(R[μ];μ]), and in the third equality we expanded (2r-2R[μ])^k as a polynomial in r and made use of (<ref>). In terms of the weight ρ this can be written as G_corr = 1/2∫_0^R[μ]r(Z(r)-∫ρ(L) I_0(L√(2r)) )^2. In the constant, linear and quadratic term in ρ we then recognize exactly the expressions (<ref>), (<ref>) and (<ref>), G_corr = F_0[μ] + ∫ρ(L_1) δ F_0[μ]/δμ(L_1)+ 1/2∫ρ(L_1)ρ(L_2) δ F_0[μ]/δμ(L_1)δμ(L_2) = F_corr[ρ,μ]. §.§ Properties of the new kernel Recall that the new kernel is given by K(x,t,μ] = ∫_-∞^∞ K_0(x+z,t) X(z), K_0(x,t)=1/1+exp(x+t/2)+1/1+exp(x-t/2), where X(z)=X(z;μ] is determined by its two-sided Laplace transform X̂(u;μ], X̂(u;μ]=∫_-∞^∞X(z) e^-uz = sin(2π u)/2π u η(u;μ] and η(u;μ]=∑_m=0^∞M_m[μ]/(2m+1)!!u^2m. To prove Theorem <ref>, we need to relate K(x,t;μ] to the moments M_k[μ], since they appear in the shifted times. We define the reverse moments β_m[μ] as the coefficients of the reciprocal series 1/η(u;μ] =∑_m=0^∞β_m[μ] u^2m. Multiplying both series shows that the moments and reverse moments obey ∑_m=0^p M_m[μ]/(2m+1)!!β_p-m[μ]=δ_p,0 for each p≥ 0. Note in particular that β_0[μ] = 1/M_0[μ], β_1[μ] = - M_1[μ]/3M_0[μ]^2. For i,j≥1, the new kernel satisfies ∫_0^∞x x^2i-1/(2i-1)! K(x,t;μ]=∑_m=0^iβ_m[μ] t^2i-2m/(2i-2m)! and ∫_0^∞x∫_0^∞y x^2i-1y^2j-1/(2i-1)!(2j-1)! 
K(x+y,t;μ]=∑_m=0^i+jβ_m[μ] t^2i+2j-2m/(2i+2j-2m)!. We need two lemmas to prove this proposition. First we examine the one-sided Laplace transforms K̂_0(u,t) ∫_0^∞x e^-ux K_0(x,t), K̂(u,t;μ] ∫_0^∞x e^-ux K(x,t;μ]. K̂_0(u,t)- K̂_0(-u,t)= -4πcosh(tu)/sin(2π u)+2/u To compute the integral (<ref>), we only need positive values for x, so we assume x > 0. Since K_0(x,-t)=K_0(x,t) we can also assume t ≥ 0. For x<t we may expand K_0(x,t) =exp(-x-t/2)/1+exp(-x-t/2)+1/1+exp(x-t/2) = -∑_p=1^∞(-e^-x-t/2)^p + ∑_p=0^∞(-e^x-t/2)^p, while for x>t we may use K_0(x,t) =exp(-x-t/2)/1+exp(-x-t/2)+exp(-x+t/2)/1+exp(-x+t/2) = -∑_p=1^∞(-e^-x-t/2)^p - ∑_p=1^∞(-e^-x+t/2)^p. This gives K̂_0(u,t) = -∑_p=1^∞∫_0^∞x e^-ux(-e^-x-t/2)^p + ∑_p=0^∞∫_0^t x e^-ux(-e^x-t/2)^p - ∑_p=1^∞∫_t^∞x e^-ux(-e^-x+t/2)^p = -e^-ut∑_p=-∞^∞(-1)^p/u-p/2 -∑_p=1^∞ (-1)^p exp(-tp/2)/u+p/2 + ∑_p=0^∞ (-1)^p exp(-tp/2)/u-p/2 = -2π e^-ut/sin(2π u)+1/u + ∑_p=0^∞ (-1)^p e^-tp/2(1/u-p/2-1/u+p/2). When subtracting K̂_0(-u,t) it should be clear that the sum cancels and we easily obtain the claimed formula. K̂(u,t;μ]-K̂(-u,t;μ] = X̂(u;μ](K̂_0(u,t) - K̂_0(-u,t)-2/u) + 2X̂(0;μ]/u From the definition (<ref>) we obtain K̂(u,t;μ] -K̂(-u,t;μ] = ∫_-∞^∞X(z)∫_0^∞x (e^-ux-e^ux) K_0(x+z,t) =∫_-∞^∞X(z)∫_z^∞x (e^(z-x)u-e^(x-z)u) K_0(x,t) =∫_-∞^∞X(z) (e^zuK̂_0(u,t) - e^-zuK̂_0(-u,t)) -∫_-∞^∞X(z)∫_0^z x (e^(z-x)u-e^(x-z)u) K_0(x,t). The first integral evaluates to X̂(u;μ](K̂_0(u,t)-K̂_0(-u,t)). By changing variables (x,z)→(-x,-z) and using the symmetry of X(z), we observe that the second integral is unchanged when K_0(x,t) is replaced by K_0(-x,t). Since also K_0(x,t)+K_0(-x,t)=2, the second integral can be calculated to give ∫_-∞^∞X(z)∫_0^z x (e^(z-x)u-e^(x-z)u) K_0(x,t) = ∫_-∞^∞X(z)∫_0^z x (e^(z-x)u-e^(x-z)u) =- 2/u∫_-∞^∞X(z) (1-e^uz) =2/uX̂(u;μ]- 2X̂(0;μ]/u. Subtracting both integrals gives the desired result. We start by noting ∫_0^∞x x^2i-1/(2i-1)! K(x,t;μ]=-1/2[u^2i-1] (K̂(u,t;μ]-K̂(-u,t;μ]) Using Lemma <ref> and Lemma <ref>, we get for i≥1 ∫_0^∞x x^2i-1/(2i-1)! K(x,t;μ] =-1/2[u^2i-1] (X̂(u;μ](-4πcosh(tu)/sin(2π u)) + 2X̂(0;μ]/u) = [u^2i] cosh(tu)/η(u;μ] = ∑_m=0^i β_m[μ] t^2i-2m/(2i-2m)! The second identity follows easily from the first by performing the integration at constant x+y, since ∫_0^zx^2i-1(z-x)^2j-1/(2i-1)!(2j-1)! x = z^2i+2j-1/(2i+2j-1)!. §.§ Proof of Theorem <ref> We will prove the tight topological recursion by retracing Mirzakhani's proof <cit.> of Witten's conjecture, which relies on the observation that her recursion formula (<ref>), expressed as an identity on the coefficients of the volume polynomials V_g,n(𝐋), is equivalent to certain differential equations for the generating function G(s;x_0,x_1,…) of intersection numbers (see also <cit.>). These differential equations can be expressed as the Virasoro constraints <cit.> Ṽ_p e^G(0;x_0,x_1,…)=0. Here the Virasoro operators Ṽ_-1,Ṽ_0,Ṽ_1,Ṽ_2,… are the differential operators acting on the ring of formal power series in x_0,x_1,x_2,… via Ṽ_p =-(2p+3)!!/2^p+1x_p+1+1/2^p+1∑_n=0^∞(2n+2p+1)!!/(2n-1)!! x_n x_n+p +λ^2/2^p+2∑_i+j=p-1(2i+1)!!(2j+1)!!x_ix_j +δ_p,-1(λ^-2x_0^2/2)+δ_p,0/16. They satisfy the Virasoro relations Ṽ_mṼ_n=(m-n)Ṽ_m+n. Proposition <ref> suggests introducing the shift x_k → x_k + γ̃_k in G with γ̃_k = δ_k,1 - 2^1-kM_k-1[μ], which satisfies ( Ṽ_p + (2p+3)!!/2^p+1x_p+1 - 1/2^p+1∑_n=0^∞2^-nM_n[μ]/(2n+1)!!(2p+2n+3)!! x_p+n+1)e^G(0;x_0,x_1+γ̃_1,x_2+γ̃_̃2̃,…)=0. 
We use the reverse moments β_m[μ] of (<ref>) to introduce linear combinations V_p =∑_m=0^∞β_m[μ] 2^p( Ṽ_p+m + (2p+2m+3)!!/2^p+m+1x_p+m+1 - 1/2^p+m+1∑_n=0^∞2^-nM_n[μ]/(2n+1)!!(2p+2m+2n+3)!! x_p+m+n+1) of these operators for all p≥ -1, which therefore obey V_p exp(G(0;x_0,x_1+γ̃_1,x_2+γ̃_̃2̃,…))=0. Using (<ref>) the operators V_p can be expressed as V_p =∑_m=0^∞β_m[μ] 2^p( Ṽ_p+m + (2p+2m+3)!!/2^p+m+1x_p+m+1 - 1/2^p+m+1∑_n=0^∞2^-nM_n[μ]/(2n+1)!!(2p+2m+2n+3)!! x_p+m+n+1) =-1/2 (2p+3)!! x_p+1 +λ^2/4∑_m=0^∞∑_i+j=p+m-12^-mβ_m[μ](2i+1)!!(2j+1)!!x_ix_j +1/2∑_n,m=0^∞ 2^-mβ_m[μ] (2n+2p+2m+1)!!/(2n-1)!! x_nx_n+p+m +δ_p,-1(λ^-2x_0^2β_0[μ]/4+β_1[μ]/32)+δ_p,0β_0[μ]/16 In particular, after some rearranging (and shifting p→ p-1) we observe the identity 1/2 (2p+1)!! ∂ G/∂ x_p =λ^2/4∑_m=0^∞∑_i+j=p+m-22^-mβ_m[μ](2i+1)!!(2j+1)!!(∂^2 G/∂ x_i∂ x_j+∂ G/∂ x_i∂ G/∂ x_j) +1/2∑_n,m=0^∞ 2^-mβ_m[μ] (2n+2p+2m-1)!!/(2n-1)!! x_n∂ G/∂ x_n+p+m-1 +δ_p,0(λ^-2x_0^2β_0[μ]/4+β_1[μ]/32)+δ_p,1β_0[μ]/16, where G is understood to be evaluated at G=G(0;x_0,x_1+γ̃_1,x_2+γ̃_̃2̃,…). Substituting x_k = t_k[ν] such that x_k + γ̃_k = τ_k[ν,μ], Proposition <ref> links G to the generating function G(0;τ_0[ν,μ],τ_1[ν,μ],…) =∑_g=0^∞λ^2g-22^3-3gF̃_g[ν,μ] =∑_g=0^∞λ^2g-22^3-3g∑_n=0^∞1/n!∫ν(L_1)⋯ν(L_n) T_g,n(𝐋;μ] of tight Weil–Petersson volumes. The differential equations (<ref>) can then be reformulated as the functional differential equation (here all partial derivates of G are evaluated at 0, τ_0[ν,μ],τ_1[ν,μ],…) δ/δν(L_1)G(0;τ_0[ν,μ],τ_1[ν,μ],…) = ∑_p=0^∞2L_1^2p/4^p p!∂ G/∂ x_p = ∑_p=0^∞1/L_1∫_0^L_1 t t^2p/(2p)!2^1-p(2p+1)!!∂ G/∂ x_p = ∑_p=0^∞λ^2/L_1∫_0^L_1t t^2p/(2p)!∑_m=0^∞∑_i+j=p+m-22^-i-j-2β_m[μ](2i+1)!!(2j+1)!! (∂^2 G/∂ x_i∂ x_j+∂ G/∂ x_i∂ G/∂ x_j) +∑_p=0^∞1/L_1∫_0^L_1t t^2p/(2p)!∫ν(P) ∑_n,m=0^∞ 2^-m-p-n+2β_m[μ] (2n+2p+2m-1)!!/(2n)! P^2n∂ G/∂ x_n+p+m-1 +λ^-2t_0^2[ν]β_0[μ]+β_1[μ]/8+β_0[μ]L_1^2/48 Inserting the integral identities of Proposition <ref> this can also be expressed in terms of the kernel K(x,t;μ] as δ/δν(L_1)G(0;τ_0[ν,μ],τ_1[ν,μ],…) = λ^2 /4L_1∫_0^L_1t∫_0^∞x∫_0^∞y∑_i,j=0^∞ K(x+y,t;μ]x^2i+1y^2j+1/4^i i!4^j j!(∂^2 G/∂ x_i∂ x_j+∂ G/∂ x_i∂ G/∂ x_j) + 1/L_1∫_0^L_1t∫_0^∞x∫ν(P) ∑_q=0^∞ ( K(x,t+P;μ] + K(x,t-P;μ]) x^2q+1/4^q q!∂ G/∂ x_q +λ^-2t_0^2[ν]β_0[μ]+β_1[μ]/8+β_0[μ]L_1^2/48 = λ^2 /16L_1∫_0^L_1t∫_0^∞x∫_0^∞y xy K(x+y,t;μ] (δ^2 G/δν(x)δν(y)+δ G/δν(x)δ G/δν(y)) +1/2L_1∫_0^L_1t∫_0^∞x∫ν(P) x ( K(x,t+P;μ] + K(x,t-P;μ]) δ G/δν(x) +ν(L_1)(λ^-2t_0^3[ν]/6M_0[μ]-M_1[μ]t_0[ν]/48M_0[μ]^2+t_1[ν]/24M_0[μ]), where in the last line we used (<ref>). This equation at the level of the generating function (<ref>) is precisely equivalent to the recursion equation on its polynomial coefficients T_g,n(𝐋) = 1/2L_1∫_0^L_1t∫_0^∞x∫_0^∞y xy K(x+y,t;μ] [ T_g-1,n+1(x,y,𝐋_{1}) 370mu +∑_g_1+g_2=g I⨿ J={2,…,n} T_g_1,1+|I|(x,𝐋_I) T_g_2,1+|J|(y,𝐋_J)] +1/2L_1∫_0^L_1t∫_0^∞x∑_j=2^n x (K(x,t+L_j;μ] + K(x,t-L_j;μ]) T_g,n-1(x,𝐋_{1,j}), for (g,n)∉{(0,3),(1,1)} combined with the initial data T_0,3(L_1,L_2,L_3;μ] =1/M_0[μ] T_1,1(L;μ] =-M_1[μ]/24M_0[μ]^2+L^2/48M_0[μ]. This completes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> We follow a strategy along the lines of the proof of Theorem <ref>. Recall the relation (<ref>) between the intersection number generating function G and the tight Weil–Petersson volumes T_g,n. Let us denote by G_g,n(x_0,x_1,…;𝖬_0,𝖬_1,…) the homogeneous part of degree n in x_0,x_1,… of G_g(0;x_0,x_1 + 1 - 𝖬_0,x_2-12 𝖬_1, x_3 - 14𝖬_2, …). 
In other words, they are homogeneous polynomials of degree n in x_0,x_1,… with coefficients that are formal power series in 𝖬_0,𝖬_1,…, such that G_g(0;x_0,x_1 + 1 - 𝖬_0,x_2-12 𝖬_1, …) = ∑_n=0^∞ G_g,n(x_0,x_1,…;𝖬_0,𝖬_1,…). We will prove that there exist polynomials 𝒫̅_g,n(x_0,x_1,…;m_1,m_2…) such that G_g,n(x_0,x_1,…;𝖬_0,𝖬_1,…) = 1/n!1/𝖬_0^2g-2+n𝒫̅_g,n(x_0,x_1,…;𝖬_1/𝖬_0,𝖬_2/𝖬_0,…) and deduce a recurrence in n. For g≥ 2 and n=0, the existence of a polynomial 𝒫̅_g,0(m_1,m_2,…) follows from <cit.>, since G_g,0(x_0,x_1,…;𝖬_0,𝖬_1,…) = G_g(0;0,1 - 𝖬_0,-12 𝖬_1,…) = 𝖬_0^2-2gG_g(0;0,0,-12 𝖬_1/𝖬_0,-14 𝖬_2/𝖬_0,…) and G_g(0;0,0,x_2,x_3,…) is polynomial by construction. Also G_0,3 = x_0^3/61/𝖬_0, G_1,1 = x_1/241/𝖬_0-x_0/48𝖬_1/𝖬_0^2 Let us now assume 2g-2+n ≥ 2 and aim to express G_g,n in tems of G_g,n-1. By construction the series G_g,n obeys for k≥ 1, ∂ G_g,n/∂ x_k = -2^k-1∂ G_g,n-1/∂𝖬_k-1. The string equation, i.e. (<ref>) at p=-1, written in terms of G_g,n reads ∑_k=1^∞ x_k ∂ G_g,n-1/∂ x_k-1 - ∑_k=0^∞ 2^-k𝖬_k ∂ G_g,n/∂ x_k= 0, which after rearranging gives the relation ∂ G_g,n/∂ x_0 = ∑_k=1^∞(x_k/𝖬_0∂ G_g,n-1/∂ x_k-1 - 2^-k𝖬_k/𝖬_0∂ G_g,n/∂ x_k) Together with (<ref>) this is sufficient to identify the recursion relation G_g,n(x_0,x_1,…;𝖬_0,𝖬_1,…) = 1/n∑_k=0^∞ x_k ∂ G_g,n/∂ x_k = 1/n∑_k=1^∞(x_0x_k/𝖬_0∂ G_g,n-1/∂ x_k-1 + x_0𝖬_k/2𝖬_0∂ G_g,n-1/∂𝖬_k-1 - 2^k-1 x_k ∂ G_g,n-1/∂𝖬_k-1). By induction, we now verify that G_g,n is of the form (<ref>). If (<ref>) is granted for G_g,n-1, then G_g,n(x_0,x_1,…;𝖬_0,𝖬_1,…) = 1/n!1/𝖬_0^2g-2+n[∑_k=1^∞ x_0x_k∂𝒫̅_g,n-1/∂ x_k-1 - ∑_k=2^∞(-x_0𝖬_k/2𝖬_0 +2^k-1x_k)∂𝒫̅_g,n-1/∂ m_k-1 + (-x_0𝖬_1/2𝖬_0 +x_1)((2g+n-3)𝒫̅_g,n-1 - ∑_k=1^∞ m_k ∂𝒫̅_g,n-1/∂ m_k)] is indeed of the form (<ref>) provided 𝒫̅_g,n(x_0,x_1,…;m_1,m_2,…) = ∑_k=1^∞ x_0x_k∂𝒫̅_g,n-1/∂ x_k-1 - ∑_p=1^∞(-x_0m_p+1/2 +2^p x_p+1-x_0 m_1 m_k/2+x_1 m_k)∂𝒫̅_g,n-1/∂ m_p + (-x_0m_1/2 +x_1)(2g+n-3)𝒫̅_g,n-1 According to (<ref>) the series G_g,n and the tight Weil–Petersson volume T_g,n are related via G_g,n(t_0[ν],t_1[ν],…;M_0[μ],M_1[μ],…) = 2^3-3g/n!∫ν(L_1)⋯ν(L_n) T_g,n(𝐋;μ]. This naturally leads to the existence of polynomials 𝒫_g,n(𝐋,m_1,m_2,…) such that T_g,n(𝐋;μ] = 1/M_0^2g-2+n𝒫_g,n(𝐋,M_1/M_0,…,M_3g-3+n/M_0), to get 𝒫_g,n(𝐋,𝐦) = ∑_p=1^∞(m_p+1 - L_1^2p+2/2^p+1(p+1)!-m_1 m_p + 1/2L_1^2 m_p) ∂𝒫_g,n-1/∂ m_p(𝐋_{1},𝐦) + (2g-3+n)(-m_1+12 L_1^2) 𝒫_g,n-1(𝐋_{1},𝐦) + _{g=0,n=3} + ∑_i=2^n ∫L_i L_i 𝒫_g,n-1(𝐋_{1},𝐦). The claims about the degree of the polynomials 𝒫_g,n are easily checked to be valid for the initial conditions and to be preserved by the recursion formula (<ref>). This proves theorem <ref>. We note that [L_1^2p]𝒫_g,n(𝐋,𝐦) = δ_p,0[∑_q=1^∞(m_q+1-m_1 m_q) ∂𝒫_g,n-1/∂ m_q(𝐋_{1},𝐦)-(2g-3+n)m_1 𝒫_g,n-1(𝐋_{1},𝐦) + _{g=0,n=3} + ∑_i=2^n ∫L_i L_i 𝒫_g,n-1(𝐋_{1},𝐦)] +δ_p,1[ ∑_q=1^∞(1/2m_q) ∂𝒫_g,n-1/∂ m_q(𝐋_{1},𝐦)+ 12(2g-3+n) 𝒫_g,n-1(𝐋_{1},𝐦) ] +_p>1[ -1/2^pp!∂𝒫_g,n-1/∂ m_p-1(𝐋_{1},𝐦) ] Setting m_0=1 this gives ∑_p=0^∞ 2^p p! m_p [L_1^2p]𝒫_g,n(𝐋,𝐦) = _{g=0,n=3} + ∑_i=2^n ∫L_i L_i 𝒫_g,n-1(𝐋_{1},𝐦) and ∑_p=1^∞ 2^p p! m_p-1 [L_1^2p]𝒫_g,n(𝐋,𝐦) = (2g-3+n) 𝒫_g,n-1(𝐋_{1},𝐦). Using equation (<ref>), we get the desired result. § LAPLACE TRANSFORM, SPECTRAL CURVE AND DISK FUNCTION §.§ Proof of Theorem <ref> Let us consider the partial derivative operator Δ(z)=4∑_p=0^∞ (2z^2)^-1-p (2p+1)!! x_p on the ring of formal power series in x_0,x_1,… and 1/z. 
For later purposes we record several identities for the power series coefficients around z=∞, valid for a≥0, [u^-2-2a]Δ(u) =2^1-a(2a+1)!!x_a, [u^-4-2a]Δ(u)Δ(-u) =2^2-a∑_i+j=a(2i+1)!!(2j+1)!!∂^2/∂ x_i∂ x_j, [u^2a-1]1/u(z^2-u^2)η(u;μ] =∑_m=0^a z^2m-2a-2β_m[μ], where the reverse moments β[μ] were introduced in (<ref>). From the definition (<ref>) and the relation (<ref>) we deduce that for g≥1 or n≥3 ω_g,n(𝐳) = ∫_0^∞[∏_1≤ i≤ n L_i e^-z_i L_i] δ^n F̃_g[ν,μ]/δν(L_1)⋯δν(L_n)|_ν=0 L_1⋯ L_n =2^3g-3Δ(z_1)…Δ(z_n)G_g(0;x_0,x_1+γ̃_1,x_2+γ̃_2,…)_x_0=x_1=⋯=0, where γ̃_k = δ_k,1 - 2^1-k M_k-1[μ] as before. Recall the differential equation (<ref>) satisfied by this (shifted) intersection number generating function G(0;x_0,x_1+γ̃_1,x_2+γ̃_2,…), 1/2 (2p+1)!! ∂ G/∂ x_p =λ^2/4∑_m=0^∞∑_i+j=p+m-22^-mβ_m[μ](2i+1)!!(2j+1)!!(∂^2 G/∂ x_i∂ x_j+∂ G/∂ x_i∂ G/∂ x_j) +1/2∑_n,m=0^∞ 2^-mβ_m[μ] (2n+2p+2m-1)!!/(2n-1)!! x_n∂ G/∂ x_n+p+m-1 +δ_p,0(λ^-2x_0^2β_0[μ]/4+β_1[μ]/32)+δ_p,1β_0[μ]/16. With the help of the identities (<ref>) it can be recast in terms of the operator Δ(z) as Δ(z_1)G =2λ^2∑_a=0^∞∑_m=0^a+2∑_i+j=a(2z_1^2)^m-a-32^-mβ_m[μ](2i+1)!!(2j+1)!!(∂^2 G/∂ x_i∂ x_j+∂ G/∂ x_i∂ G/∂ x_j) +4∑_n,m,p=0^∞ (2z_1^2)^-1-p 2^-mβ_m[μ] (2n+2p+2m-1)!!/(2n-1)!! x_n∂ G/∂ x_n+p+m-1 +4/z_1^2(λ^-2x_0^2β_0[μ]/4+β_1[μ]/32)+β_0[μ]/8z_1^4[0] =λ^2/16∑_a=0^∞([u^2a+3]1/u(z_1^2-u^2)η(u;μ])[u^-4-2a](Δ(u)Δ(-u)G+(Δ(u)G)(Δ(-u)G)) +1/2∑_q=0^∞∑_n=0^q+12^n x_n/(2n-1)!!([u^2q-2n+1]1/u(z_1^2-u^2)η(u;μ]) [u^-2q-2]Δ(u)G +4/z_1^2(λ^-2x_0^2β_0[μ]/4+β_1[μ]/32)+β_0[μ]/8z_1^4[0] =λ^2/16_u→01/u(z_1^2-u^2)η(u;μ](Δ(u)Δ(-u)G+(Δ(u)G)(Δ(-u)G)) +1/2_u→0∑_n=0^∞(2u^2)^n x_n/(2n-1)!!1/u(z_1^2-u^2)η(u;μ]Δ(u)G +4/z_1^2(λ^-2x_0^2β_0[μ]/4+β_1[μ]/32)+β_0[μ]/8z_1^4. Extracting the genus-g contribution, which appears as the coefficient of λ^2g-2, the relation (<ref>) allows us to turn this into a recursion for ω_g,n, ω_g,n(𝐳) =1/2_u→01/u(z_1^2-u^2)η(u;μ][ω_g-1,n+1(u,-u,𝐳_{1})+ ∑_g_1+g_2=g I⨿ J={2,…,n}ω_g_1,|I|(u,𝐳_I)ω_g_2,|J|(-u,𝐳_J)] +_u→0∑_j=2^n∑_p=0^∞ u^2p z_j^-2-2p(2p+1)1/u(z_1^2-u^2)η(u;μ]ω_g,n-1(u,𝐳_{1,j}) +δ_g,0δ_n,3/M_0[μ]z_1^2z_2^2z_3^2+δ_g,1δ_n,1(-M_1[μ]/24M_0[μ]^2z_1^2+1/8M_0[μ]z_1^4) [0] =_u→01/2u(z_1^2-u^2)η(u;μ][ω_g-1,n+1(u,-u,𝐳_{1})+ ∑_g_1+g_2=g I⨿ J={2,…,n}ω_g_1,|I|(u,𝐳_I)ω_g_2,|J|(-u,𝐳_J)+ +∑_j=2^n(1/(z_j-u)^2+1/(z_j+u)^2)ω_g,n-1(u,𝐳_{1,j})] +δ_g,0δ_n,3/M_0[μ]z_1^2z_2^2z_3^2+δ_g,1δ_n,1(-M_1[μ]/24M_0[μ]^2z_1^2+1/8M_0[μ]z_1^4). Finally, if we set ω_0,2(𝐳)=(z_1-z_2)^-2 and ω_0,0(𝐳)=ω_0,1(𝐳)=0, this reduces to ω_g,n(𝐳) =_u→01/2u(z_1^2-u^2)η(u;μ][ω_g-1,n+1(u,-u,𝐳_{1})+ ∑_g_1+g_2=g I⨿ J={2,…,n}ω_g_1,|I|(u,𝐳_I)ω_g_2,|J|(-u,𝐳_J)]. §.§ Disk function Due to proposition <ref>, there is a relation between the regular and tight Weil–Petersson volumes. In this subsection we will look at this relation in the Laplace transformed setting. In particular, we are interested in the Laplace transformed generating functions of (regular) Weil–Petersson volumes 𝒲_g,n(𝐳) = ∫_0^∞L_1 L_1e^-z_1L_1⋯∫_0^∞L_nL_ne^-z_nL_nδ^n F_g[μ]/δμ(L_1)⋯δμ(L_n), where we recall that δ^n F_g[μ]/δμ(L_1)⋯δμ(L_n) = ∑_p=0^∞1/p!∫ V_g,n+p(𝐋,𝐊) μ(K_1)⋯μ(K_p). We define x_i=x_i(z_i;μ]=√(z_i^2-2R[μ]). For g≥ 1 or n≥ 3 we have 𝒲_g,n(𝐳) = ω_g,n(𝐱) ∏_i=1^n z_i/x_i, while for g=0 and n=1,2, 𝒲_0,1(𝐳) = - ∫_0^R z_1/(z_1^2-2r)^3/2 Z(r) r, 𝒲_0,2(𝐳) =z_1/x_1z_2/x_2ω_0,2(𝐱)- 1/(z_1-z_2)^2. For the first identity we wish to combine (<ref>) and (<ref>). It requires an expression for the Laplace transform of the half-tight cylinder. 
Using ∫_0^∞yz/√(4π y^3) e^-z^2/4y-yL^2 = e^-zL, ∫_0^∞pe^-yp^2/2RI_1(p) = e^R/2y-1, allows us to compute ∫_K^∞L L e^-z L (H(L,K) K + δ(L-K)) =K e^-zK+[t] ∫_K^∞L L (∫_0^∞yzK/√(4π y^3) e^-z^2/4y-yL^2) √(2R/L^2-K^2)I_1(√(2R(L^2-K^2))) =K e^-zK+∫_0^∞yzK/√(4π y^3) e^-z^2/4y-yK^2∫_0^∞pe^-yp^2/2RI_1(p) =K e^-zK+∫_0^∞yzK/√(4π y^3) e^-z^2/4y-yK^2(e^R/2y-1) =Kz/√(z^2 -2R)e^-K√(z^2-2R). Therefore, 𝒲_g,n(𝐳) = ∫_0^∞ T_g,n(𝐊;μ] ∏_i=1^n K_i z_i/x_i e^-x_i K_i K_i, which by (<ref>) gives the first stated identity. For the last two identities we use that the Laplace transform of the modified Bessel function I_0 is given by ∫_0^∞ I_0(L√(2r)) L e^-z L L = z/(z^2-2r)^3/2. Then (<ref>) follows directly from (<ref>), while for the cylinder case (<ref>) implies 𝒲_0,2(𝐳) = ∫_0^R z_1/(z_1^2-2r)^3/2z_2/(z_2^2-2r)^3/2 r = z_1/√(z_1^2-2R)z_2/√(z_2^2-2R)1/(√(z_1^2-2R)-√(z_2^2-2R))^2-1/(z_1-z_2)^2 = z_1/x_1z_2/x_2ω_0,2(𝐱)- 1/(z_1-z_2)^2. We finish this section by giving alternative expressions for the disk function and the series η(u;μ]. The disk function 𝒲_0,1(𝐳) is related to η via 𝒲_0,1(𝐳) = - z_1 √(z_1^2-2R) η(√(z_1^2-2R)) + z_1/2πsin2π z_1 -∫μ(L)cosh(L z_1), valid when 4|R| < |z_1|^2. Note that μ=0 gives 𝒲_0,1(𝐳)=0 as expected. The starting point is the standard generating function <cit.> 1/usin(√(u^2-2ut)) = ∑_n=0^∞t^n/n!y_n-1(u) for the spherical Bessel functions y_k(u) valid when 2|t| < |u|. Restricting to 2|R| < |x|^2 and using the series expansion of the ordinary and spherical Bessel functions we find x/2πsin(2πλ√(x^2+2R)) = ∑_k=0^∞ (-π)^k/k! 2^kx^2-kλ^k+1 y_k-1(2πλ x) R^k = ∑_k,m=0^∞(-1)^mπ^2m+1/2/k!m!2^k-1x^2(m-k+1)λ^2m+1R^k/Γ(m-k+32) = ∑_p=-∞^∞ x^2p∑_m=max(0,p-1)^∞(-1)^mπ^2m+1/2/(m+1-p)!m!2^m-pλ^2m+1R^m+1-p/Γ(12+p) = ∑_p=-∞^∞ x^2pλ^p Γ(1/2)/Γ(p+1/2)2^p(-√(2)π/√(R))^p-1J_p-1(2πλ√(2R)). Setting λ=1 now gives x/2πsin(2π√(x^2+2R))=∑_p=-∞^∞ x^2pΓ(1/2)/Γ(p+1/2)2^p(-√(2)π/√(R))^p-1J_p-1(2π√(2R)). On the other hand we can show that λ[x/√(x^2+2R)cos(2πλ√(x^2+2R))] =-2π x sin(2πλ√(x^2+2R)) =-4π^2 ∑_p=-∞^∞ x^2pλ^p Γ(1/2)/Γ(p+1/2)2^p(-√(2)π/√(R))^p-1J_p-1(2πλ√(2R)) =λ[∑_p=-∞^∞ x^2pλ^pΓ(1/2)/Γ(p+1/2)2^p(-2π/√(2R))^p J_p(2πλ√(2R))]. Integrating and setting λ=(iL)/(2π) gives x/√(x^2+2R)cosh(L√(x^2+2R)) = ∑_p=-∞^∞ x^2pΓ(1/2)/Γ(p+1/2)2^p(L/√(2R))^p I_p(L√(2R)), valid for 2|R| < |x|^2. Starting from (<ref>) and restricting to 2|R| < |x_1|^2 we find the series expansion 𝒲_0,1(√(x_1^2+2R))x_1/√(x_1^2+2R) = - ∫_0^R x_1/(x_1^2 + 2R - 2r)^3/2 Z(r) r =- x_1^-2∫_0^R 1/(1 + 2R - 2r/x_1^2)^3/2 Z(r) r = - ∑_p=1^∞(-2)^pΓ(1/2)/(p-1)!Γ(1/2-p) x_1^-2p∫_0^R (r-R)^p-1 Z(r) r. We can use <cit.>, ∫_0^R (r-R)^p-1√(r)/√(2)πJ_1(2π√(2r)) r = (-1)^p-1√(2)R^p+1/2/π∫_0^1 x^2 (1-x^2)^p-1 J_1(2π√(2R)x)x = (p-1)! (-√(R)/√(2)π)^p+1J_p+1(2π√(2R)), ∫_0^R (r-R)^p-1 I_0(L√(2r)) r = (-R)^p-12R ∫_0^1 x (1-x^2)^p-1 I_0(L√(2R)x)x = -(p-1)! (-√(2R)/L)^pI_p(L√(2R)). This yields 𝒲_0,1(√(x_1^2+2R))x_1/√(x_1^2+2R) = -∑_p=1^∞ x_1^-2p(-2)^pΓ(1/2)/Γ(1/2-p)[(-√(R)/√(2)π)^p+1 J_p+1(2π√(2R)) + ∫μ(L)(-√(2R)/L)^p I_p(L√(2R))] = ∑_p=-∞^-1 x_1^2pΓ(1/2)/Γ(1/2+p)2^p[(-√(2)π/√(R))^p-1 J_p-1(2π√(2R)) -∫μ(L)(L/√(2R))^p I_p(L√(2R))]. On the other hand, (<ref>) and (<ref>) imply x_1^2 η(x_1) = ∑_p=1^∞ x_1^2pΓ(1/2)/Γ(1/2+p)2^p[(-√(2)π/√(R))^p-1 J_p-1(2π√(2R)) -∫μ(L)(L/√(2R))^p I_p(L√(2R))] Together with Z(R)=0 we may now conclude that 𝒲_0,1(√(x_1^2+2R))x_1/√(x_1^2+2R) + x_1^2 η(x_1) = x_1/2πsin(2π√(x_1^2+2R)) - ∫μ(L)x_1/√(x_1^2+2R)cosh(L√(x_1^2+2R)), valid for 2|R| < |x_1|^2. Substituting x_1 = √(z_1^2 - 2R) gives the desired expression. 
For convenience, we record an explicit expression for η(u;μ] that follows from this proof. For a formal power series F(r,u) in r with coefficients that are Laurent polynomials in u, we denote by [u^≥ 0]F(r,u) the formal power series obtained by dropping the negative powers of u in the coefficients of F(r,u). Then we can write η(u;μ] as η(u;μ] = [u^≥ 0](u/2πsin(2π√(u^2+2r)) - ∫μ(L)u/√(u^2+2r)cosh(L√(u^2+2r)))|_r=R[μ]. § JT GRAVITY The Weil–Petersson volumes play an important role in Jackiw-Teitelboim (JT) gravity <cit.>, a two-dimensional toy model of quantum gravity. JT gravity has received significant attention in recent years because of the holographic perspective on the double-scaled matrix model it is dual to <cit.>. In this section we point to some opportunities to use our results in the context of JT gravity and its extensions in which hyperbolic surfaces with defects play a role <cit.>. But we start with a brief introduction to the JT gravity partition function in Euclidean signature. JT gravity is governed by the (Euclidean) action I_JT, bulk[g_μν,ϕ]=-1/2∫_ℳ√(g)ϕ(R+2), where ϕ is the scalar dilaton field, g_μν is a two-dimensional Riemannian metric and R the corresponding Ricci scalar curvature. Since we want this action to make sense when the manifold has boundaries, the boundary term I_JT, boundary[g_μν,ϕ]=-∫_∂ℳ√(h)ϕ(K-1) is included, where h_μν is the induced metric on the boundary and K is the extrinsic curvature at the boundary. Including the topological Einstein–Hilbert term, proportional to parameter S_0, gives the full (Euclidean) JT action I_JT[g_μν,ϕ]=-S_0χ+I_JT,bulk[g_μν,ϕ]+I_JT,boundary[g_μν,ϕ], where χ is the Euler characteristic of the manifold. The JT gravity partition function on a manifold ℳ with n boundaries of lengths β = (β_1,…,β_n) can formally be written as Z_n(β)=∫_ℳ𝒟g 𝒟ϕexp(I_JT). In the partition function, the dilation field ϕ acts as a Lagrange multiplier on (R+2), therefore enforcing a constant negative curvature R=-2 in the bulk. This is why the relevant manifolds will be hyperbolic surfaces. Due to the Einstein–Hilbert term, we can do a topological expansion by a formal power series expansion in e^-S_0, Z_n(β)=∑_g=0^∞(e^-S_0)^2g+n-2 Z_g,n(β). It has been shown by Saad, Shenker & Stanford <cit.> that the JT partition functions Z_g,n for 2g+n-2>0 can be further decomposed by splitting the surfaces into n trumpets and a hyperbolic surface of genus g and n geodesic boundaries with lengths 𝐛=(b_1,…,b_n), and that the partition function measure is closely related to the Weil–Petersson measure. To be precise, it satisfies the identity Z_g,n(β)=∫_0^∞(∏_i=1^n b_i b_i Z^Trumpet(β_i,b_i)) V_g,n(𝐛), where V_g,n(𝐛) are the Weil–Petersson volumes and the trumpet contributions are given by Z^Trumpet(β,b)=1/2√(πβ)e^-b^2/4β. This formula is the link between JT gravity and Weil–Petersson volumes. There are several natural extensions of the JT action. If we only allow up to two derivatives, the most general action can be transformed to <cit.> I_bulk[g_μν,ϕ]=-1/2∫_ℳ√(g) [ϕ(R+2)+U(ϕ)]. In the next subsection we will discuss a natural choice of the dilaton potential U(ϕ), which gives rise to defects in the hyperbolic surfaces. §.§ Conical defects One of the most natural dilaton potentials is U(ϕ)=μ e^-2π(1-α)ϕ, which adds a gas of conical defects of cone angle 2πα carrying weight μ each. It naturally arises <cit.> from Kaluza–Klein instantons when performing dimensional reduction on three-dimensional black holes. 
More generally, one can allow multiple types of defects by considering a measure μ on i[0,2π) and setting U(ϕ)=∫_0^1 μ(2π iα) e^-2π(1-α)ϕ. For instance, the choice μ = ∑_j=1^k μ_j δ_i γ_j gives k types of defects with cone angles γ_1,…,γ_k ∈ [0,2π], U(ϕ)=∑_i μ_i e^-2π(1-α_i)ϕ. The choice to consider the measure on the imaginary interval will be convenient later. It can be shown <cit.> that these potentials indeed lead to conical defects. For example, one can look at the term linear in μ in the integrand of the partition function for a single type of gas: [μ^1]exp(-I_bulk)=1/2exp(-I_JT, bulk)∫_ℳx_1√(g(x_1))exp(-2π(1-α)ϕ(x_1)) =1/2∫_ℳx_1√(g(x_1))exp(1/2∫_ℳx√(g(x))ϕ(x) (R(x)+2-4π(1-α)δ^2(x-x_1))). It follows that the surface has curvature R=-2 everywhere, except at point x_1, where we have a conical defect with cone angle 2πα. If one includes all orders of μ, any number of defects may appear and each defect carries a weight μ <cit.>. As already mentioned in the introduction, the Weil–Petersson volumes for surfaces with sharp cone points (cone angle 2πα< π) are obtained <cit.> from the usual Weil–Petersson volume polynomials by treating the defect angle 2πα as a geodesic boundary with imaginary boundary length 2π iα. This that partition function Z_g,n(β) is closely related to the generating function F_g[μ] of Weyl–Petersson volumes considered in this paper. To be precise, using (<ref>) and (<ref>), Z_g,n(β) =∫_0^∞(∏_i=1^n b_i b_i Z^Trumpet(β_i,b_i))∑_p=0^∞1/p!∫μ(b_n+1)⋯μ(b_n+p) V_g,n+p(𝐛) =∫_0^∞(∏_i=1^n b_i b_i Z^Trumpet(β_i,b_i))δ^n F_g[μ]/δμ(b_1)⋯δμ(b_n) =∫_0^∞(∏_i=1^n b_i b_i Z^Trumpet(β_i,b_i))(∏_i=1^n K_i(K_i H(b_i,K_i;μ]+δ(K_i-b_i))) T_g,n(𝐊;μ], which we can compute using the recursions described in this paper. In particular, its topological recursion can in principle be derived from that of T_g,n in Theorem <ref>. We can simplify (<ref>) by considering the tight trumpet, which is a genus-0 hyperbolic surface with an asymptotic boundary of length β, a tight boundary of length K and an arbitrary number of extra geodesic boundaries, with the constraint that the tight boundary cannot be separated from the asymptotic one by a curve of length β. See Figure <ref>. Since it can be obtained by gluing a trumpet to a half-tight cylinder, with the help of Lemma <ref> we find that the partition function associated to a tight trumpet is given by Z^TT(β,K) =∫_K^∞b/Kb Z^Trumpet(β,b)(K H(b,K;μ]+δ(K-b)) = 1/2√(πβ)e^-K^2/4β + ∫_K^∞ bb1/2√(πβ)e^-b^2/4β√(2R[μ]/b^2-K^2) I_1( √(b^2-K^2)√(2R[μ])) =1/2√(πβ)e^-K^2/4β+2R[μ]β = e^2R[μ]βZ^Trumpet(β,K). Remarkably it differs from the JT trumpet only in a factor exponential in the boundary length β. We conclude that for g≥ 1 or n≥ 3, Z_g,n(β)= ∫_0^∞(∏_i=1^n K_i K_iZ^TT(β_i,K_i)) T_g,n(𝐊;μ], which we understand as a gluing of tight trumpets to tight hyperbolic surfaces. In the case g=0 and n=2, we only need to glue two tight trumpets together to find the universal two-boundary correlator Z_0,2(β_1,β_2) = ∫_0^∞ Z^TT(β_1,K) Z^TT(β_2,K) K K = 1/2π√(β_1β_2)/β_1+β_2 e^2(β_1+β_2)R[μ]. We note that these expressions do not apply to the case of blunt cone points (cone angle 2πα∈ [π,2π]). The problem is that in the presence of such defects it is no longer true that every free homotopy class of closed curves necessarily contains a geodesic, because, informally, when shortening a closed curve it can be pulled across a blunt cone point, while that never happens for a short one. 
However, this is not an issue when considering tight cycles, because in that setting one is considering larger homotopy classes, namely of the manifold with its defects closed off. Such homotopy classes will always contain a shortest geodesic, which generically is unique. Whereas the JT trumpet cannot always be removed from a surface with blunt defects in a well-defined manner, the removal of a tight trumpet should pose no problem. It is natural to ask whether such reasoning can be used to connect to the recent works <cit.> in JT gravity dealing with blunt cone points. §.§ FZZT-branes Another well-studied extension of JT gravity, is the introduction of FZZT branes. With this extension the hyperbolic surfaces can end on a FZZT brane. In the random matrix model description of JT gravity, this corresponds to fixing some eigenvalues of the random matrix <cit.>. In the partition function, this leads to the addition of an arbitrary number of geodesic boundaries as defects with a certain weight ℳ(L)=-e^-zL, where L is the length of the boundary, Z_g,n(β)_FZZT = ∑_p=0^∞e^-S_0p/p!∫_0^∞(∏_i=1^n b_i b_i Z^Trumpet(β_i,b_i)) (∏_i=n+1^n+pb_iℳ(b_i)) V_g,n+p(𝐛). Such weights have been interpreted <cit.>[Please note that z in our work corresponds to z/(√(2)π) in <cit.>] as the action of a fermion with mass z. Using our setup, we can rewrite this to: Z_g,n(β)_FZZT = ∫_0^∞(∏_i=1^n b_i b_i Z^Trumpet(β_i,b_i))(∏_i=1^n K_i (K_i H(b_i,K_i;μ_FZZT] +δ(b_i-K_i))) T_g,n(𝐊;μ_FZZT] , with μ_FZZT=-e^-S_0-zL L, or again using the tight trumpet Z_g,n(β)_FZZT= ∫_0^∞(∏_i=1^n K_i K_iZ^TT(β_i,K_i;μ_FZZT]) T_g,n(𝐊;μ_FZZT], with Z^TT(β,K;μ] =1/2√(πβ)e^-K^2/4β+2R[μ]β. The behaviour of R[μ_FZZT] depends on z and S_0 and its critical points should give insight into critical phenomena of the partition function, see <cit.>. 10 Abramowitz1964 M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables, Courier Corporation, 1964. Blommaert_2021 A. Blommaert, T. G. Mertens, and H. Verschelde, Eigenbranes in Jackiw-Teitelboim gravity, Journal of High Energy Physics (2021) 2. Bouttier_Bijective_2022 J. Bouttier, E. Guitter, and G. Miermont, Bijective enumeration of planar bipartite maps with three tight boundaries, or how to slice pairs of pants, Annales Henri Lebesgue, 5 (2022), pp. 1035–1110. budd2020irreducible T. Budd, Irreducible metric maps and Weil-Petersson volumes, Comm. Math. Phys., 394 (2022), pp. 887–917. Budd_Statistics_ T. Budd and P. Koster, Statistics of critical boltzmann hyperbolic surfaces. in preparation. buser1992geometry P. Buser, Geometry and Spectra of Compact Riemann Surfaces, Birkhäuser, Boston, 1992. castro2023critical A. Castro, Critical jt gravity, arXiv preprint arXiv:2306.14823, (2023). Dijkgraaf_Loop_1991 R. Dijkgraaf, H. Verlinde, and E. Verlinde, Loop equations and virasoro constraints in non-perturbative two-dimensional quantum gravity, Nuclear Physics B, 348 (1991), pp. 435–456. do2011moduli N. Do, Moduli spaces of hyperbolic surfaces and their weil-petersson volumes, arXiv preprint arXiv:1103.4674, (2011). Do_Weil_2009 N. Do and P. Norbury, Weil-Petersson volumes and cone surfaces, Geom. Dedicata, 141 (2009), pp. 93–107. Eberhardt_2D_2023 L. Eberhardt and G. J. Turiaci, 2d dilaton gravity and the weil-petersson volumes with conical defects, arXiv preprint arXiv:2304.14948, (2023). Eynard_Invariants_2007 B. Eynard and N. Orantin, Invariants of algebraic curves and topological expansion, arXiv preprint math-ph/0702045, (2007). 
eynard2007weil height 2pt depth -1.6pt width 23pt, Weil-petersson volume of moduli spaces, mirzakhani's recursion and matrix models, arXiv preprint arXiv:0705.3600, (2007). Faber_conjectural_1999 C. Faber, A conjectural description of the tautological ring of the moduli space of curves, in Moduli of curves and abelian varieties, Aspects Math., E33, Friedr. Vieweg, Braunschweig, 1999, pp. 109–129. Gilmore_Short_2021 C. Gilmore, E. Le Masson, T. Sahlsten, and J. Thomas, Short geodesic loops and L^p norms of eigenfunctions on large genus random surfaces, Geom. Funct. Anal., 31 (2021), pp. 62–110. Gradshteyn_Table_2015 I. S. Gradshteyn and I. M. Ryzhik, Table of integrals, series, and products, Elsevier/Academic Press, Amsterdam, eighth ed., 2015. Translated from the Russian, Translation edited and with a preface by Daniel Zwillinger and Victor Moll, Revised from the seventh edition [MR2360010]. Guth_Pants_2011 L. Guth, H. Parlier, and R. Young, Pants decompositions of random surfaces, Geom. Funct. Anal., 21 (2011), pp. 1069–1090. Itzykson_Combinatorics_1992 C. Itzykson and J.-B. Zuber, Combinatorics of the modular group. II. The Kontsevich integrals, Internat. J. Modern Phys. A, 7 (1992), pp. 5661–5705. jackiw1985 R. Jackiw, Lower dimensional gravity, Nuclear Physics B, 252 (1985), pp. 343–356. Kaufmann_Higher_1996 R. Kaufmann, Y. Manin, and D. Zagier, Higher Weil-Petersson volumes of moduli spaces of stable n-pointed curves, Comm. Math. Phys., 181 (1996), pp. 763–787. Kontsevich1992 M. Kontsevich, Intersection theory on the moduli space of curves and the matrix Airy function, Comm. Math. Phys., 147 (1992), pp. 1–23. Maxfield_2021 H. Maxfield and G. J. Turiaci, The path integral of 3d gravity near extremality; or, JT gravity with defects as a matrix integral, Journal of High Energy Physics (2021) 1. Mirzakhani2007 M. Mirzakhani, Simple geodesics and Weil-Petersson volumes of moduli spaces of bordered Riemann surfaces, Invent. Math., 167 (2007), pp. 179–222. Mirzakhani2007a height 2pt depth -1.6pt width 23pt, Weil-Petersson volumes and intersection theory on the moduli space of curves, J. Amer. Math. Soc., 20 (2007), pp. 1–23. Mirzakhani_Growth_2013 height 2pt depth -1.6pt width 23pt, Growth of Weil-Petersson volumes and random hyperbolic surfaces of large genus, J. Differential Geom., 94 (2013), pp. 267–300. Mirzakhani_Lengths_2019 M. Mirzakhani and B. Petri, Lengths of closed geodesics on random surfaces of large genus, Comment. Math. Helv., 94 (2019), pp. 869–889. Monk_Benjamini_2022 L. Monk, Benjamini–schramm convergence and spectra of random hyperbolic surfaces of high genus, Analysis & PDE, 15 (2022), pp. 727–752. mulase2006mirzakhanis M. Mulase and B. Safnuk, Mirzakhani's recursion relations, virasoro constraints and the kdv hierarchy, arXiv preprint math/0601194, (2006). Mulase_Mirzakhanis_2008 M. Mulase and B. Safnuk, Mirzakhani's recursion relations, Virasoro constraints and the KdV hierarchy, Indian J. Math., 50 (2008), pp. 189–218. Okuyama2021FZZT K. Okuyama and K. Sakai, FZZT branes in JT gravity and topological gravity, Journal of High Energy Physics (2021) 9. Saad_JT_2019 P. Saad, S. H. Shenker, and D. Stanford, Jt gravity as a matrix integral, arXiv preprint hep-th/1903.11115, (2019). Tan_Generalizations_2006 S. P. Tan, Y. L. Wong, and Y. Zhang, Generalizations of McShane's identity to hyperbolic cone-surfaces, J. Differential Geom., 72 (2006), pp. 73–112. teitelboim1983 C. 
Teitelboim, Gravitation and hamiltonian structure in two spacetime dimensions, Physics Letters B, 126 (1983), pp. 41–45. Turiaci_2021 G. J. Turiaci, M. Usatyuk, and W. W. Weng, 2d dilaton-gravity, deformations of the minimal string, and matrix models, Classical and Quantum Gravity, 38 (2021), p. 204001. Witten_Two_1991 E. Witten, Two-dimensional gravity and intersection theory on moduli space, in Surveys in differential geometry (Cambridge, MA, 1990), Lehigh Univ., Bethlehem, PA, 1991, pp. 243–310. Witten_2020 E. Witten, Matrix models and deformations of JT gravity, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 476 (2020).
http://arxiv.org/abs/2307.05414v1
20230711163045
Duncode Characters Shorter
[ "Changshang Xue" ]
cs.CL
[ "cs.CL", "cs.DB", "cs.IR", "68P30, 68P20", "E.2" ]
This paper investigates the employment of various encoders in text transformation, converting characters into bytes. It discusses local encoders such as ASCII and GB-2312, which encode specific characters into shorter bytes, and universal encoders like UTF-8 and UTF-16, which can encode the complete Unicode set with greater space requirements and are gaining widespread acceptance. Other encoders, including SCSU, BOCU-1, and binary encoders, however, lack self-synchronizing capabilities. Duncode is introduced as an innovative encoding method that aims to encode the entire Unicode character set with high space efficiency, akin to local encoders. It has the potential to compress multiple characters of a string into a Duncode unit using fewer bytes. Despite offering less self-synchronizing identification information, Duncode surpasses UTF-8 in terms of space efficiency. The application is available at <https://github.com/laohur/duncode>. Additionally, we have developed a benchmark for evaluating character encoders across different languages. It encompasses 179 languages and can be accessed at <https://github.com/laohur/wiki2txt>. § INTRODUCTION The process of text processing with computers begins with the encoding of characters into bytes. An ideal character encoder should possess several features, such as the inclusion of all characters, compactness, and robustness (which includes losslessness). However, no single character encoder embodies all these features simultaneously. The inclusion of all characters and robustness are foundational features. Compactness is beneficial for conserving storage space and network bandwidth <cit.>. The Unicode (ISO/IEC 10646) <cit.> coded character set, also known as the Universal Character Set (UCS) <cit.>, is the largest of its kind. In the current context, the term 'including all characters' refers to the storage and transmission of Unicode characters. This is a strict requirement for global text exchange <cit.>. Furthermore, a character encoder should ideally perform well with any language. Symbol Length (Bytes/Characters) is a metric used to denote the average length of a symbol encoded into bytes within a byte sequence. This index is typically used to measure the space efficiency of encoders; however, it may vary for different characters within a single encoder. Symbol Length = Number of Bytes / Number of Characters Self-synchronization refers to the ability to determine whether a code unit initiates a character without referring to previous code units. 
This feature allows a reader to start from any point, promptly identify byte sequence boundaries, and ensure that the encoder is robust enough to exclude erroneous characters from the text. Lossy encoding is not our focus <cit.>. Some encoders specifically avoid false positives when searching for strings directly within bytes. There are two main types of encoders used for encoding thousands of languages: local and universal encoders <cit.>. Local encoders (for example, ASCII <cit.> and ISO8859 <cit.>) were developed earlier and typically produce shorter encoded byte lengths. However, their character sets often contain only specific symbols and are insufficient for exchange purposes. Some also lack self-synchronizing capabilities during decoding, which can lead to corruption during exchange, especially over the internet. Due to these shortcomings, the use of local encoders has declined. Universal encoders, on the other hand, are capable of encoding the entire range of Unicode symbols. Certain encoders, such as the Unicode Transformation Format (UTF) <cit.>, achieve this objective while limiting errors to a single unit, making them the most popular encoders for exchange. They typically work in conjunction with other binary compressors over the internet <cit.>. Other universal encoders, like SCSU <cit.> and BOCU-1 <cit.>, strive to enhance encoding efficiency by introducing a tag byte as the leading byte for new blocks in a sequence. While these encoders achieve space efficiency, they do so at the expense of robustness, rendering them incapable of decoding units within a byte stream. General compressors <cit.> often outperform character encoders, thanks to advanced compression algorithms <cit.> <cit.> <cit.>. However, these compressors lack self-synchronizing capabilities during decoding. Binary encoders, meanwhile, handle bytes after texts have already been encoded. Most character encoders, including local encoders and UTFs, encode each character as an isolated unit. Although the character set of a text could contain millions of characters, neighboring characters often belong to the same language. These characters can be encoded using a language-specific prefix coupled with a set of letters from a small alphabet. The Unicode Point of a character can be decomposed into Alphabet ID and Letter Index within its alphabet. Subsequently, continuous symbols can be compressed by sharing a single Alphabet ID in a Duncode Unit. This shared common Alphabet ID allows Duncode to reduce the size of the encoded bytes. In contrast to UTF-8, Duncode uses the Tail Byte (Last Byte) to segment Duncode units in the byte stream, rather than the Leading Byte (First Byte) employed by UTF-8. A Duncode unit ranges from 1-4 bytes. Given the luxury of encoding unit length into bytes, Duncode foregoes the unit length message in the segment identifier byte to maintain self-synchronizing capability. As Duncode maintains unidirectional compatibility with ASCII only, false-positive errors might occur when searching strings directly on Duncode bytes. More details on this issue are provided in Section <ref> and Section <ref>. This paper introduces a highly space-efficient text encoder – Duncoder. It represents the entire set of Unicode characters with high space efficiency and self-synchronization, albeit at the expense of occasional forbidden lookup false positives. 
To evaluate the compression performance of character encoders across all languages <cit.>, we developed a tool to collect a corpus of 179 languages from Wikipedia. The results and details of these tests are presented in Section <ref> and Section <ref>. § RELATED WORK §.§ Local Encoder ASCII <cit.>, one of the most renowned encoders, encompasses a set of 128 symbols. It efficiently encodes commonly used English letters, text symbols, and computer commands into a single byte, making it particularly suitable for English language texts and the rewriting of computer commands. Western European countries often utilize the remaining 128 positions in the ASCII byte to incorporate their custom characters. Most subsequent encoders retain compatibility with ASCII. However, encoding characters from other languages often requires more bytes. For example, GB2312 <cit.> uses 2 bytes to represent thousands of commonly used Chinese characters. These local encoders usually limit their character set, leading to high space efficiency. However, conversion becomes an unavoidable obstacle in exchanges, which is far from a simple task <cit.>. Unfortunately, character corruption and wide word injection are common issues in these scenarios <cit.>. In some extreme cases, a single erroneous byte can corrupt the entire text. §.§ Universal Encoder Character sets have become decoupled from encoders to increase universality. Unicode/ISO 10646 <cit.>, which includes over 140,000 characters and supports more than 300 languages, aims to encompass all characters globally. As a result, Unicode has become the most popular character set, and the Unicode Transformation Format (UTF) <cit.> has emerged as the most widely used text encoder. As the most prevalent encoders today, UTF-8 <cit.> and UTF-16 <cit.> can encode all Unicode characters. UTF-16 generally has a symbol length of 2 and requires twice the space of ASCII for English. The symbol length of UTF-8 ranges from 1 to 6. UTF-8 retains ASCII in its original form and encapsulates other characters into longer bytes, making it cost-effective primarily for Latin languages. Other universal encoders, such as SCSU <cit.> and BOCU-1 <cit.>, strive to improve encoding efficiency by inserting a tag byte as the leading byte for each new block in a sequence. However, they cannot decode units from a byte stream. Binary compressors typically outperform character encoders, particularly those utilizing general compression algorithms <cit.> <cit.> <cit.> <cit.> <cit.>. However, they operate on bytes rather than characters. <cit.> optimizes binary compression algorithms for UTF-8 byte sequences. <cit.> demonstrates that binary compressors effectively compress files. <cit.> and <cit.> advocate for compression outside of text. Some studies <cit.> <cit.> <cit.> <cit.> aim to accelerate text searches on compressed files. § METHODOLOGY §.§ Tail Byte for Self-Synchronizing In a UTF-8 byte sequence as shown in Table <ref>, only ASCII units start with the byte '0xxxxxxx' while all non-ASCII character units start with the byte "1xxxxxxx". In a multibyte unit, only the first byte resembles "11xxxxxx" and the rest resemble "10xxxxxx". The length of the "1" sequence in the first byte's head determines the number of unit bytes. The first byte of a UTF-8 unit not only segments the byte sequence but also indicates the length of this unit. However, the unit length varies only from 1 to 6. The UTF-8 unit length flag bits occupy many bits but convey a limited amount of information. 
Therefore, we discard this information in Duncode. The Duncode code unit provides less synchronization information than the UTF-8 code unit. Only the Tail Byte (last byte of the unit) of a Duncode unit is encoded as "0xxxxxxx", with the other bytes encoded as "1xxxxxxx", as shown in Table <ref>. For ASCII symbols, the singular byte is also the tail byte. It is straightforward to find unit boundaries in a Duncode byte sequence. The unit length, which varies from 1 to 4, is determined dynamically by decoding the deque. Only a three-byte unit (2ˆ21) is required to store all Unicode symbols. Each Duncode byte uses only the first bit as a flag. We classify zones where the unit length ranges from 1 to 3 bytes. These are referred to as "ascii", "byte2", and "isolate" zones in <ref>, with each holding only one character per unit. The tail byte could be an ASCII symbol or part of a longer unit, which might cause false positive errors when directly searching strings in the encoded byte sequence. §.§ Compress Multi Characters into One Duncode Unit Almost every self-character encoder maps a character to a single encoding unit. However, in most cases, the characters in a sentence belong to one common language. This often results in redundant information when encoding these characters as isolated units. We can deconstruct a character into two components: an Alphabet ID and a Letter Index within that alphabet. Each alphabet represents a character subset associated with a specific language, typically combining one or two Unicode blocks. The index of a Unicode alphabet is referred to as the "Alphabet ID", while the position of a character within that alphabet is called the "Letter Index". Multiple characters can be represented as several Letter Indexes with a shared Alphabet ID used as a prefix. Their associated alphabets typically contain no more than 128/256 letters, allowing a Letter Index to use only 7/8 bits for each alphabet. This approach enables us to compress three symbols into a 4-byte unit for these languages, reducing the symbol length to 1.33 in the "bit8" and "bit7" zones after compression. Consider the example of encoding a Greek string “αβγ” into Duncode. Firstly, we identify that these characters are Greek, obtaining an Alphabet ID of 0 from the mapping. Subsequently, we calculate their Letter Indexes within the Greek alphabet as 0,1,2. We then assign these Alphabet ID and Letter Indexes to the Duncode bit8 zone, using Alphabet ID 1. The end result is a 4-byte Duncode unit. §.§ Auto Adaptive If a longer unit is not fully occupied by symbols, it will convert to a shorter unit. As a result, the same character may be located in different zones. For instance, the string "α" will be encoded as a two-byte unit (in the byte2 zone) instead of a four-byte unit (in the bit8 zone). § EXPERIMENTS AND RESULTS §.§ Zones and Alphabets As depicted in Table <ref>, we preserve ASCII characters in their original form within the ASCII zone (noted in lowercase to differentiate from ASCII). Languages such as Latin, General Punctuation, Hiragana, Katakana, and widely-used Chinese characters are allocated to the byte2 zone, with a symbol length of 2. These are often intermixed with other languages or possess a large alphabet size that prevents compression. Languages with alphabets containing fewer than 256 symbols can be compressed into a four-byte unit. 
Common languages with 128 letters or fewer are placed in the bit7 zone, while languages with more expansive alphabets (Arabic, Russian, etc., with 128-256 letters) are allocated to the bit8 zone. Some Unicode Blocks share a single Alphabet. For instance, the Unicode Blocks "Greek and Coptic" and "Ancient Greek Numbers" share a single Duncode Alphabet. There are approximately 300 Unicode Blocks to accommodate all Unicode symbols. By uniting them, we create around 100+ Duncode Alphabets. To determine which Alphabet a character belongs to, the Duncoder stores approximately 300 Unicode Block ranges as a list and about 2ˆ14 characters as a map for the byte2 zone. §.§ Benchmark To evaluate compression performance across all languages, we have collected texts in 179 languages to form our corpus dataset. All corpus data for this experiment were obtained from <https://dumps.wikipedia.org> and extracted into texts of a maximum size of 1 MB using wiki2txt[<https://github.com/laohur/wiki2txt>]. Our primary comparison is between Duncoder and UTF-8, as the UTF (Unicode Transformation Format) shares many key attributes with Duncoder, and UTF-8 is the most widely used and efficient variant. The complete results are presented in Table <ref>. §.§ Results Analysis The Bytes/Characters (or Symbol Length) of Duncoder in these languages align with the expected performance. In most cases, Duncoder outperforms UTF-8 across all languages. We selected several typical languages for detailed analysis, with the results provided in Table <ref>. The performance of these encoders shows minimal variations in languages like English and French. These languages primarily consist of ASCII and Latin characters. All of these encoders encode ASCII symbols in one byte and Latin symbols in two bytes. Due to the dominance of ASCII symbols, texts in these languages are encoded at approximately one byte per character on average. Texts in Arabic and Russian can be compressed with Duncoder, resulting in smaller sizes than UTF-8 (reducing from 2 bytes/char to 1.33 bytes/char). Due to the high prevalence of blank spaces in these texts, their average byte length per character ranges from 1 to 2 in both Duncoder and UTF-8. Languages with extensive character sets, such as Chinese, Japanese, and Korean, are treated differently. For comparison, we have only optimized Duncoder for Hanzi (Chinese characters, CJKV Unified Ideographs), not Hangul Jamos or Syllables. The common Hanzi are stored in a map, allowing them to be encoded as two bytes. As a result, Chinese and Japanese texts undergo significant size reduction (from 3 bytes/char to 2 bytes/char). However, the size of Korean text in Duncoder remains the same (3 bytes/char) as in UTF-8. Rare languages such as Abkhazian, Burmese, Central Khmer, Tibetan, and Yoruba all benefit from Duncoder, with the exception of Yoruba. This is because Yoruba primarily uses ASCII and Latin symbols. Whether a language benefits from this approach depends on its character set distribution. Abkhazian primarily uses Cyrillic, while Burmese, Central Khmer, and Tibetan use their unique character sets. As a result, these languages reap significant benefits from Duncoder (with byte/char values shifting from [2,3] to [1.33,2]). § CONCLUSION In this paper, we introduced Duncode to encode various texts in high space efficiency. It is universal, robust, customizable and keep the unidirectional compatibility with ASCII. With the cost of lookup false positive, Duncode benefits a great space advantage than UTF-8. 
It is particular suitable for storing diverse texts. In addition, we released a corpus including 179 languages and a tool to collect them. And we built a benchmark for character encoders on all languages. acl_natbib § APPENDICES §.§ Encoding Steps Detect Alphabet ID The Alphabet may encompass one or two Unicode Blocks. We use a lookup table to identify to which Alphabet a given character belongs. Decompose Next, we decompose a character from a Unicode Point into an Alphabet ID and an Index within the Alphabet. The Unicode Point is equal to the sum of the Alphabet ID and the Letter Index. Compress In an uncompressed Duncode unit sequence, if the symbol in the last unit and the current symbol belong to the same alphabet and the last unit is not full, the current symbol will be inserted into the last unit, and the current Duncode unit will be discarded. As depicted in Figure <ref>, we insert 3 symbols into one Duncode unit. Assemble For each compressed Duncode unit, we assemble the Alphabet ID and Indexes into different Duncode Zones based on the Alphabet ID. §.§ Decoding Steps Segment For a Duncode byte sequence, Duncode units can be split by the Tail Byte (0xxxxxxx). Detect Zone, Alphabet ID, and Character Indexes For each compressed Duncode unit, we first detect the zone using flag bits. The Alphabet ID and Character Indexes are identified subsequently. Decompress We decompress a Duncode unit containing multiple Letter Indexes into several Duncode units, ensuring that each unit contains only one symbol. Generate Character For each decompressed Duncode unit, we can assemble the Alphabet ID and Letter Index into a character in the appropriate zone. @crrrll@ Benchmark Details on corpus. "hz.txt" and "kr.txt" are empty after extracted. 1lwiki_file_1m n_chars bytes_utf8 bytes_duncode bytes/utf8 bytes/duncoder aa.txt 135 148 147 1.096 1.089 ab.txt 1,049,130 1,846,144 1,411,219 1.760 1.345 af.txt 1,064,351 1,075,107 1,071,541 1.010 1.007 ak.txt 769,002 814,811 812,865 1.060 1.057 am.txt 1,050,665 2,437,382 2,431,631 2.320 2.314 an.txt 1,051,327 1,068,238 1,066,376 1.016 1.014 ar.txt 1,103,308 1,855,164 1,462,890 1.681 1.326 as.txt 1,052,400 2,262,353 1,376,996 2.150 1.308 av.txt 1,048,809 1,774,760 1,368,398 1.692 1.305 ay.txt 1,049,302 1,069,827 1,066,001 1.020 1.016 az.txt 1,048,850 1,211,125 1,201,764 1.155 1.146 ba.txt 1,048,631 1,850,444 1,403,582 1.765 1.338 be.txt 1,048,579 1,825,171 1,402,696 1.741 1.338 bg.txt 1,049,268 1,777,473 1,400,699 1.694 1.335 bh.txt 1,048,911 2,190,884 1,380,707 2.089 1.316 bi.txt 218,091 221,344 219,201 1.015 1.005 bm.txt 357,032 378,550 376,310 1.060 1.054 bn.txt 1,060,806 2,472,105 1,450,265 2.330 1.367 bo.txt 1,053,038 3,108,029 2,080,294 2.951 1.976 br.txt 1,055,652 1,079,657 1,075,782 1.023 1.019 bs.txt 1,049,786 1,072,485 1,070,267 1.022 1.020 ca.txt 1,073,230 1,100,117 1,098,909 1.025 1.024 ce.txt 1,048,602 1,837,755 1,406,538 1.753 1.341 ch.txt 129,361 134,508 132,508 1.040 1.024 co.txt 1,060,056 1,090,727 1,088,470 1.029 1.027 cr.txt 17,333 28,113 27,833 1.622 1.606 cs.txt 1,055,737 1,159,181 1,153,879 1.098 1.093 cu.txt 462,741 825,969 652,117 1.785 1.409 cv.txt 1,049,285 1,844,550 1,466,533 1.758 1.398 cy.txt 1,050,639 1,060,864 1,058,520 1.010 1.008 da.txt 1,050,196 1,077,241 1,075,205 1.026 1.024 de.txt 1,050,528 1,072,438 1,068,837 1.021 1.017 dv.txt 1,050,848 1,921,186 1,416,724 1.828 1.348 dz.txt 310,724 858,221 584,403 2.762 1.881 ee.txt 280,228 297,292 295,492 1.061 1.054 el.txt 1,049,405 1,770,860 1,369,767 1.687 1.305 en.txt 1,054,002 
1,066,474 1,061,949 1.012 1.008 eo.txt 1,048,873 1,078,033 1,073,017 1.028 1.023 es.txt 1,049,707 1,075,445 1,072,754 1.025 1.022 et.txt 1,049,247 1,085,530 1,079,530 1.035 1.029 eu.txt 1,048,842 1,060,860 1,057,453 1.011 1.008 fa.txt 1,058,698 1,816,881 1,457,016 1.716 1.376 ff.txt 503,152 617,832 538,409 1.228 1.070 fi.txt 1,050,348 1,085,591 1,082,824 1.034 1.031 fj.txt 497,382 500,275 498,381 1.006 1.002 fo.txt 1,049,176 1,122,285 1,120,126 1.070 1.068 fr.txt 1,054,065 1,096,721 1,094,085 1.040 1.038 fy.txt 1,065,075 1,080,793 1,079,544 1.015 1.014 ga.txt 1,048,950 1,106,459 1,103,747 1.055 1.052 gd.txt 1,048,838 1,086,757 1,078,849 1.036 1.029 gl.txt 1,050,492 1,077,723 1,076,035 1.026 1.024 gn.txt 1,051,576 1,131,966 1,118,462 1.076 1.064 gu.txt 1,065,827 2,499,704 1,448,002 2.345 1.359 gv.txt 1,049,881 1,060,374 1,057,448 1.010 1.007 ha.txt 1,049,904 1,059,909 1,057,872 1.010 1.008 he.txt 1,058,309 1,833,281 1,434,241 1.732 1.355 hi.txt 1,049,758 2,608,190 1,487,044 2.485 1.417 ho.txt 1,218 1,227 1,220 1.007 1.002 hr.txt 1,051,227 1,079,772 1,077,857 1.027 1.025 ht.txt 1,050,145 1,070,733 1,069,807 1.020 1.019 hu.txt 1,051,729 1,148,520 1,144,229 1.092 1.088 hy.txt 1,052,342 1,811,174 1,385,451 1.721 1.317 hz.txt 0 0 0 ­ ­ ia.txt 1,049,441 1,057,971 1,053,944 1.008 1.004 id.txt 1,051,882 1,063,624 1,059,494 1.011 1.007 ie.txt 1,048,912 1,146,682 1,110,250 1.093 1.058 ig.txt 1,052,913 1,166,756 1,113,920 1.108 1.058 ii.txt 261 478 415 1.831 1.590 ik.txt 83,095 87,540 86,225 1.053 1.038 io.txt 1,049,261 1,055,452 1,053,891 1.006 1.004 is.txt 1,049,702 1,149,928 1,146,471 1.095 1.092 it.txt 1,064,336 1,074,815 1,072,780 1.010 1.008 iu.txt 102,983 216,868 215,287 2.106 2.091 ja.txt 1,051,113 2,689,017 1,872,561 2.558 1.782 jv.txt 1,048,611 1,071,004 1,066,499 1.021 1.017 ka.txt 1,054,859 2,534,120 1,412,512 2.402 1.339 kg.txt 235,777 245,142 240,662 1.040 1.021 ki.txt 289,476 311,073 309,083 1.075 1.068 kj.txt 838 849 840 1.013 1.002 kk.txt 1,049,332 1,884,244 1,411,157 1.796 1.345 kl.txt 402,490 404,972 403,637 1.006 1.003 km.txt 1,081,258 2,890,359 1,516,227 2.673 1.402 kn.txt 1,051,506 2,709,242 1,448,139 2.577 1.377 ko.txt 1,048,759 2,103,649 2,087,029 2.006 1.990 kr.txt 0 0 0 ­ ­ ks.txt 358,381 619,065 461,352 1.727 1.287 ku.txt 1,050,172 1,149,542 1,145,047 1.095 1.090 kv.txt 1,049,872 1,744,416 1,404,876 1.662 1.338 kw.txt 1,059,998 1,068,603 1,064,158 1.008 1.004 ky.txt 1,072,845 1,914,993 1,441,986 1.785 1.344 la.txt 1,057,597 1,069,208 1,065,448 1.011 1.007 lb.txt 1,051,739 1,076,859 1,074,987 1.024 1.022 lg.txt 1,049,053 1,071,739 1,060,447 1.022 1.011 li.txt 1,048,897 1,066,137 1,065,019 1.016 1.015 ln.txt 1,049,167 1,115,033 1,110,845 1.063 1.059 lo.txt 1,051,413 2,656,032 1,383,759 2.526 1.316 lt.txt 1,054,230 1,132,785 1,121,031 1.075 1.063 lv.txt 1,050,215 1,150,328 1,139,656 1.095 1.085 mg.txt 1,049,224 1,064,549 1,061,628 1.015 1.012 mh.txt 3,148 3,270 3,261 1.039 1.036 mi.txt 1,048,616 1,090,831 1,088,423 1.040 1.038 mk.txt 1,050,407 1,807,308 1,412,958 1.721 1.345 ml.txt 1,049,136 2,564,675 1,389,893 2.445 1.325 mn.txt 1,050,075 1,828,576 1,395,636 1.741 1.329 mr.txt 1,051,589 2,560,750 1,441,600 2.435 1.371 ms.txt 1,049,273 1,056,111 1,053,374 1.007 1.004 mt.txt 1,052,893 1,095,095 1,090,599 1.040 1.036 my.txt 1,052,890 2,820,647 1,456,799 2.679 1.384 na.txt 224,739 234,089 229,751 1.042 1.022 ne.txt 1,052,018 2,634,100 1,473,330 2.504 1.400 ng.txt 21,339 21,378 21,352 1.002 1.001 nl.txt 1,189,301 1,194,604 1,192,975 1.004 1.003 nn.txt 1,056,326 1,087,947 1,083,779 
1.030 1.026 no.txt 1,052,080 1,079,185 1,075,334 1.026 1.022 nv.txt 1,048,735 1,345,272 1,340,291 1.283 1.278 ny.txt 1,049,312 1,055,213 1,053,021 1.006 1.004 oc.txt 1,049,379 1,080,530 1,078,600 1.030 1.028 om.txt 1,052,694 1,061,522 1,057,797 1.008 1.005 or.txt 1,049,082 2,287,591 1,380,361 2.181 1.316 os.txt 1,048,871 1,860,170 1,485,113 1.773 1.416 pa.txt 1,050,021 2,410,900 1,462,044 2.296 1.392 pi.txt 473,902 777,614 552,513 1.641 1.166 pl.txt 1,088,573 1,152,076 1,143,677 1.058 1.051 ps.txt 1,057,349 1,815,710 1,471,991 1.717 1.392 pt.txt 1,054,300 1,085,823 1,084,075 1.030 1.028 qu.txt 1,049,115 1,071,419 1,068,046 1.021 1.018 rm.txt 1,075,028 1,103,281 1,094,257 1.026 1.018 rn.txt 359,070 365,332 362,715 1.017 1.010 ro.txt 1,048,802 1,099,145 1,094,233 1.048 1.043 ru.txt 1,049,337 1,821,554 1,398,275 1.736 1.333 rw.txt 1,048,887 1,062,604 1,056,094 1.013 1.007 sa.txt 1,052,407 2,655,317 1,446,092 2.523 1.374 sc.txt 1,048,935 1,074,319 1,068,526 1.024 1.019 sd.txt 1,048,901 1,761,417 1,421,304 1.679 1.355 se.txt 1,048,913 1,126,991 1,109,431 1.074 1.058 sg.txt 157,336 169,263 168,661 1.076 1.072 sh.txt 1,050,057 1,072,112 1,068,282 1.021 1.017 si.txt 1,049,987 2,429,987 1,425,067 2.314 1.357 sk.txt 1,049,200 1,134,432 1,131,023 1.081 1.078 sl.txt 1,052,257 1,082,072 1,078,131 1.028 1.025 sm.txt 762,554 774,167 770,192 1.015 1.010 sn.txt 1,049,047 1,052,745 1,050,726 1.004 1.002 so.txt 1,049,058 1,055,824 1,053,031 1.006 1.004 sq.txt 1,048,696 1,114,188 1,111,205 1.062 1.060 sr.txt 1,050,605 1,717,480 1,364,645 1.635 1.299 ss.txt 546,788 554,288 551,570 1.014 1.009 st.txt 817,347 822,214 820,598 1.006 1.004 su.txt 1,048,681 1,085,020 1,082,192 1.035 1.032 sv.txt 1,052,621 1,104,884 1,097,727 1.050 1.043 sw.txt 1,049,738 1,055,492 1,052,862 1.005 1.003 ta.txt 1,048,999 2,460,455 1,374,203 2.346 1.310 te.txt 1,066,784 2,682,043 1,459,958 2.514 1.369 tg.txt 1,048,631 1,773,182 1,382,015 1.691 1.318 th.txt 1,049,536 2,747,511 1,385,799 2.618 1.320 ti.txt 158,935 377,391 376,330 2.374 2.368 tk.txt 1,048,962 1,159,714 1,152,289 1.106 1.099 tl.txt 1,049,152 1,060,988 1,055,790 1.011 1.006 tn.txt 1,048,609 1,052,937 1,051,558 1.004 1.003 to.txt 915,162 962,213 960,159 1.051 1.049 tr.txt 1,049,314 1,136,726 1,134,037 1.083 1.081 ts.txt 975,843 983,577 980,917 1.008 1.005 tt.txt 1,048,655 1,686,413 1,336,501 1.608 1.274 tw.txt 1,048,945 1,112,019 1,109,909 1.060 1.058 ty.txt 149,036 166,322 160,290 1.116 1.076 ug.txt 1,051,124 1,950,004 1,477,330 1.855 1.405 uk.txt 1,048,828 1,800,824 1,391,203 1.717 1.326 ur.txt 1,051,569 1,784,925 1,444,569 1.697 1.374 uz.txt 1,049,363 1,088,618 1,075,666 1.037 1.025 ve.txt 242,125 245,211 243,424 1.013 1.005 vi.txt 1,173,008 1,501,596 1,388,612 1.280 1.184 vo.txt 1,050,168 1,145,071 1,132,800 1.090 1.079 wa.txt 1,050,692 1,095,828 1,092,625 1.043 1.040 wo.txt 1,048,791 1,091,114 1,087,101 1.040 1.037 xh.txt 1,087,868 1,092,314 1,090,715 1.004 1.003 yi.txt 1,051,166 1,849,231 1,416,118 1.759 1.347 yo.txt 1,050,996 1,230,927 1,193,098 1.171 1.135 za.txt 505,418 606,031 555,463 1.199 1.099 zh.txt 1,052,649 2,420,113 1,740,409 2.299 1.653 zu.txt 1,050,600 1,059,590 1,054,789 1.009 1.004 § SUPPLEMENTAL MATERIAL §.§ Duncode Algorithm Duncode Block DataStructure A Duncode Block bears a resemblance to a Unicode Block. Typically, one Duncode Block corresponds to one Unicode Block. However, at times, two smaller Unicode Blocks are consolidated to constitute one Duncode Block. 
In such scenarios, the initial block is termed the Mother Block, while the subsequent ones are identified as Children Blocks. [ language=Go, breaklines=true, ] type Block struct BlockId int // Index of the Duncode Block Began int // Unicode Point of the first symbol in this Duncode Block End int // Unicode Point of the last symbol in this Duncode Block Size int // Symbol Capacity of this Duncode Block English string // English name of the Block Chinese string // Chinese name of the Block Mother string // Name of the Mother Duncode Block for a Child Duncode Block MotherId int // ID of the Mother Duncode Block for a Child Duncode Block Offset int // For a Child Duncode Block, Block Offset = Beginning of Mother Block + Block Size Child []string // Names of Child Duncode Blocks ZoneName string // Name of the Zone ZoneId int // ID of the Zone Zone2Id int // ID of the 'bit7' Zone Zone3Id int // ID of the 'bit8' Zone Duncode Unit DataStructure The Duncode struct represents a Duncode Unit, which can store 1 to 3 symbols from the same Duncode Block after compression. [language=Go] type Duncode struct CodePoint int // ID of the Duncode Zone ZoneId int // ID of the Duncode Zone BlockId int // ID of the Duncode Block MotherId int // ID of the Mother Duncode Block Index int // Index of the Duncode Unit = Unicode CodePoint - Beginning of the Duncode Block Symbols []int // Array storing 1 to 3 Duncode Indexes in compressed Duncode Units
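The self-synchronizing segmentation used throughout the encoding and decoding steps can be sketched in a few lines. The Python fragment below implements only the "Segment" step — splitting a byte stream into units at tail bytes (high bit 0) — under the stated tail-byte convention; it is not the full decoder, and the example byte values are hypothetical.

def segment(stream: bytes):
    # Yield code units: a unit ends at the first byte whose high bit is 0.
    unit = []
    for b in stream:
        unit.append(b)
        if b < 0x80:          # tail byte (0xxxxxxx) closes the unit
            yield bytes(unit)
            unit = []
    if unit:                  # bytes left without a tail byte are corrupt;
        yield bytes(unit)     # a robust decoder would flag or drop them

# A reader may also start mid-stream: skip bytes until the first tail byte,
# after which every unit boundary is unambiguous (self-synchronization).
example = bytes([0x41, 0xC3, 0x92, 0x05, 0x42])   # hypothetical byte values
print([u.hex() for u in segment(example)])        # ['41', 'c39205', '42']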
http://arxiv.org/abs/2307.04496v1
20230710113448
Distinguishing between Dirac and Majorana neutrinos using temporal correlations
[ "Bhavya Soni", "Sheeba Shafaq", "Poonam Mehta" ]
hep-ph
[ "hep-ph", "quant-ph" ]
http://arxiv.org/abs/2307.03995v1
20230708153652
Linear approximation to the statistical significance autocovariance matrix in the asymptotic regime
[ "V. Ananiev", "A. L. Read" ]
physics.data-an
[ "physics.data-an", "stat.ME" ]
§ INTRODUCTION In high energy physics searches for new particles that appear in the data as resonances <cit.>, one usually scans a mass region and hopes to find a peak of high significance at some mass. The significance at each mass of the scan is generally found by applying Wilks' theorem <cit.> to the likelihood-ratio test statistic (LRT) <cit.> for each point, and results in a field of significances measured across the search region. While the resonance may appear anywhere in the search region, the analysis usually targets the highest (local) significance, which leads to the recurring challenge of estimating the global significance of this observation. The necessity of calculating the probability for a background fluctuation to give such a peak of significance anywhere in the search region, and not simply where the significance is maximal, is commonly referred to as the look-elsewhere effect (LEE). There have been a number of studies investigating the LEE, and in our work we pay particular attention to those describing the significance field with a Gaussian process. While some studies <cit.> set the upper bound on the trials factor, which converts a local p-value into a global one, and only use a Gaussian process implicitly to link the low and high significance regions, other studies <cit.> require explicit values for the Gaussian process parameters. In this paper we establish a chain of lightweight steps from a non-linear parametric statistical model to the trials factor by estimating the covariance matrix of the significance field. To construct the estimate involving only one background only fit to the data, we apply linear expansion to the non-linear background shape. The way to calculate the covariance matrix starting from a linear model was briefly discussed by Demortier <cit.>. As part of our work, we give a strict mathematical formulation of the method and demonstrate a practical application of it to non-linear background shapes, with the estimated covariance matrix serving as a proxy for the straightforward trials factor estimate. A common input for the methods that quantify the LEE is a set of maximum likelihood fits to some number of Monte Carlo generated data realizations. They may be used to estimate the trials factor in the lower significance region, or the covariance matrix of the Gaussian process itself (the significance autocovariance). The challenge, then, is to fit enough datasets to estimate the trials factor with a satisfactory precision, while keeping the number of fits as small as possible. In high-energy physics searches for a new particle or a resonance, typically, the likelihood-ratio test statistic is used to construct the p-value for each point on a search grid. In the asymptotic regime, the test statistic follows a χ^2 distribution. For analyses that use a Gaussian process to model the significance, the number of degrees of freedom of the test statistic distribution is, typically, 1. For this case, in Chapter <ref>, we suggest a method to estimate the significance covariance matrix that makes use of a single background-only fit to the data. We replace the set of fits that were required in our previous work, with derivatives of the best-fit-to-the-data background model. Fortunately, the derivatives can often be extracted from the fit software. 
Core assumptions. In section <ref> we show that three quite generic requirements: * the background model should be well approximated by its linear expansion around the best fit parameters, * the assumption that the fluctuations in different bins of the data set are independent, * the fluctuations in each bin follow a Gaussian distribution, together, are consistent with the assumptions made in the empirical study by Ananiev & Read <cit.>, which relied on the additivity (superposition) principle for the fluctuations to empirically estimate the covariance matrix of the significances. We argue, therefore, that this work serves as a theoretical basis for the method of the Asimov set of background samples introduced in the study, and at the same time may rely on its validations. §.§ Statistical model The basic structure of a statistical model commonly used in high-energy physics experiments that search for a new particle or a resonance was described in detail in the empirical study <cit.>. For the present study, we chose the H→γγ inspired model as a benchmark, because it satisfies without approximation the second and third requirements above. The search is conducted with the likelihood ratio test statistic evaluated for each point M of the search grid ℳ. In this binned model, the expected background b_i(θ⃗), used as null-hypothesis H_0, together with the expected signal μ s_i(θ⃗) form the alternative H_1, expected signal + background estimate: n_i(μ, θ⃗, M) = b_i(θ⃗) + μ s_i(θ⃗, M), where i enumerates bins, θ⃗ denotes the vector of nuisance parameters and μ is the signal strength nuisance parameter. In the asymptotic regime (e.g. large sample), and neglecting constant terms, log-likelihoods for H_0 and H_1 may be approximated as follows: -2lnℒ_0(μ=0, θ⃗) = ∑_i ( d_i - b_i(θ⃗)/σ_i)^2, -2lnℒ_1(μ, θ⃗, M) = ∑_i ( d_i - b_i(θ⃗) - μ s_i(M, θ⃗)/σ_i)^2, where i enumerates bins, M ∈ℳ denotes the point in the search region ℳ of parameters which are not present under the background-only hypothesis, θ⃗ are the nuisance parameters, and d_i corresponds to the binned data with errors σ_i. We have assumed that the errors σ_i are independent of the nuisance parameters θ⃗. With a linear correction to σ_i it is still possible to get a closed form expression for the test statistic and significance. The calculation of the covariance would require sampling toys to average out the fluctuations. No additional fits would be required, however, so this may be a potential option for more sophisticated analyses. Our goal is to estimate the covariance matrix Σ_MN of the statistical significances Z_M and Z_N evaluated at two different points of the search region ℳ: Σ_MN = ⟨ Z_M Z_N ⟩_d, M, N ∈ℳ, Z_M = (μ̂) √(t_μ(M))∼𝒩[0, 1], t_μ(M) = -2 lnℒ_0(μ=0, θ⃗_0)/ℒ_1(μ̂, θ⃗_0 + θ⃗_1, M)∼χ^2_d.o.f=1, where t_μ(M) is the likelihood-ratio test statistic (LRT), Z_M is the so-called signed-root LRT, θ⃗_0 are the nuisance parameters that maximize the background-only likelihood ℒ_0, and θ⃗_0 + θ⃗_1 together with the signal strength μ̂ maximize the signal+background likelihood ℒ_1. We would like to remark that for the signal+background model we are fitting θ⃗ as a deviation from θ⃗_0. This is essential for the proper separation of variables in the subsequent calculations. We assume that the best fit of the backgound model b_i to the data d_i is available for the study as b_i(θ⃗̂⃗) = b̂_i. 
In order to simplify the notation, we make use of the freedom to choose the reference point for the model parameters θ⃗ and define the best fit parameters to be θ⃗̂⃗ = 0⃗. § METHOD To simplify the notation, we redefine d_i, s_i and b_i to include σ_i: d_i/σ_i↦ d_i, s_i/σ_i↦ s_i, b_i/σ_i↦ b_i. The log-likelihoods then become: -2lnℒ_0 = ∑_i ( d_i - b_i(θ⃗) )^2, -2lnℒ_1 = ∑_i ( d_i - b_i(θ⃗) - μ s_i(θ⃗) )^2. For every realization of the data (e.g. an LHC run), we expect the deviations of the fit parameters μ and θ⃗ from 0 to be small (in the absence of a signal), and therefore the first-order expansion of b_i(θ⃗) and s_i(θ⃗) around 0⃗ to be accurate enough. The log-likelihoods then are: -2lnℒ_0 = ∑_i ( d_i - b̂_i - Δ_i βθ^β)^2, -2lnℒ_1 = ∑_i ( d_i - b̂_i - Δ_i βθ^β - μ s_i(0⃗) )^2, where Δ_i α = ∂ b_i(θ⃗)/∂θ^α|_θ⃗ = 0⃗ is the Jacobian of the best-fit background model and the Einstein summation rule applies to the indices β. Since the signal model s_i contributes to the log-likelihoods eq. (<ref>) only at lowest order, thus is constant, we simplify s_i(0⃗) to s_i from now on. The equations that define optimal values of θ⃗_0, θ⃗_1, and μ then are: ∂ℒ_0/∂θ_α|_θ⃗_0∝ ∑_i (d_i - b̂_i - Δ_i βθ_0^β)·Δ_iα = 0, ∂ℒ_1/∂θ_α|_θ⃗_1, μ̂∝ ∑_i (d_i - b̂_i - Δ_i β (θ_0^β + θ_1^β) - μ̂ s_i)·Δ_iα = 0, ∂ℒ_1/∂μ|_θ⃗_1, μ̂∝ ∑_i (d_i - b̂_i - Δ_i β (θ_0^β + θ_1^β) - μ̂ s_i)· s_i = 0. To reduce the number of indices, we rewrite the expressions above with bra-ket notation: ⟨d -b̂|Δ = ⟨θ_0|Δ^⊺Δ, 0⃗ = ⟨θ_1|Δ^⊺Δ + μ̂⟨s|Δ, ⟨d - b̂|s⟩ = ⟨θ_0 + θ_1|Δ^⊺|s⟩ + μ̂⟨s|s⟩, where in eq. (<ref>) we used eq. (<ref>) to cancel the θ⃗_0 contribution. We can solve eq. (<ref>) and eq. (<ref>) for θ⃗_0 and θ⃗_1 correspondingly: ⟨θ_0| = ⟨d-b̂|Δ(Δ^⊺Δ)^-1, ⟨θ_1| = - μ̂⟨s|Δ(Δ^⊺Δ)^-1. It is important to mention that, although Δ itself is generally singular, the product Δ^⊺Δ appears to be a Hessian of -2lnℒ_1 with respect to θ⃗_1. For the background model best-fit point θ⃗ = 0⃗ to be a minimum, it is required that the Hessian be positive definite, thus Δ^⊺Δ is invertible. We substitute eq. (<ref>) and eq. (<ref>) into eq. (<ref>) and solve for μ̂: μ̂(M) = ⟨d-b̂| P |s_M⟩/⟨s_M| P |s_M⟩, P = 1 - Δ(Δ^⊺Δ)^-1Δ^⊺. An interesting and important fact is that P is a projector and it is symmetric: P^2 = P, P = P^⊺. A projector is always positive semi-definite, which means that the product below is non-negative for any non-zero s⃗: ⟨s| P |s⟩ = ⟨s| P^2 |s⟩ = ( P |s⟩)^2 ≥ 0, ∀s⃗≠0⃗ . Let us estimate the test statistic t_M: t_M = (-2 lnℒ_0) - (-2 lnℒ_1) = 2 ⟨d - b̂ - Δθ⃗_0|Δθ⃗_1 + μ̂ s⟩ + ⟨Δθ⃗_1 + μ̂ s|Δθ⃗_1 + μ̂ s⟩. We again use eq. (<ref>) to cancel the θ⃗_0 contribution and eq. (<ref>) to substitute the solution for θ⃗_1: t_M = μ̂⟨d-b̂| P |s_M⟩ = μ̂^2 ⟨s_M| P |s_M⟩. The significance Z_M, as defined in eq. (<ref>), is: Z_M = μ̂√(⟨s_M| P |s_M⟩) = ⟨d-b̂| P |s_M⟩/√(⟨s_M| P |s_M⟩). The square root in eq. (<ref>) is always defined, as the product under the square root is always positive (eq. (<ref>)). For the covariance matrix estimation, we would need to average over data. We are looking for a solution with uncorrelated fluctuations in each bin (sec. <ref>), and we recall that we normalized the errors to 1 in eq. (<ref>), therefore, the following is true: E_d{|d-b̂⟩⟨d-b̂|} = 1. 
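These closed-form quantities are cheap to evaluate. The sketch below (again with an illustrative quadratic background Jacobian standing in for Δ, and bin errors already normalized to one) builds the projector P once, then obtains μ̂ and Z_M for a whole grid of Gaussian signal templates without any per-point minimization; the last two lines anticipate the covariance expression derived in the next section.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 80)                       # bin centres (illustrative)
    Delta = np.vstack([np.ones_like(x), x, x**2]).T     # background Jacobian, already divided by sigma_i
    d_minus_b = rng.normal(size=x.size)                 # normalized fluctuations d - b_hat

    # P = 1 - Delta (Delta^T Delta)^-1 Delta^T, the projector orthogonal to the background shapes
    P = np.eye(x.size) - Delta @ np.linalg.solve(Delta.T @ Delta, Delta.T)

    masses = np.linspace(0.1, 0.9, 161)                 # search grid of points M
    S = np.exp(-0.5 * ((x[:, None] - masses[None, :]) / 0.02) ** 2)   # signal templates s_M as columns

    norms = np.sqrt(np.einsum("im,ij,jm->m", S, P, S))  # sqrt(<s_M|P|s_M>) for every scan point
    mu_hat = (d_minus_b @ P @ S) / norms**2             # best-fit signal strengths
    Z = (d_minus_b @ P @ S) / norms                     # signed-root LRT significances across the scan

    V = (P @ S) / norms                                 # columns v_M = P|s_M> / sqrt(<s_M|P|s_M>)
    Sigma = V.T @ V                                     # significance autocovariance over the grid
    print(np.abs(Z).max(), np.allclose(np.diag(Sigma), 1.0))

Because P is a projector, the diagonal of the resulting matrix is exactly one, as required for the covariance of unit-variance significances.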
The covariance matrix, then, is: Σ_MN = E_d{ Z_M Z_N } = E_d{⟨s_M| P |d-b̂⟩/√(⟨s_M| P |s_M⟩)⟨d-b̂| P |s_N⟩/√(⟨s_N| P |s_N⟩)} = ⟨s_M| P /√(⟨s_M| P |s_M⟩) E_d{|d-b̂⟩⟨d-b̂|} P |s_N⟩/√(⟨s_N| P |s_N⟩) = ⟨s_M|/√(⟨s_M| P |s_M⟩) P |s_N⟩/√(⟨s_N| P |s_N⟩), To see the parallel with Demortier <cit.>, one needs to think of the background model as a linear combination of vectors in Δ. Then eq. (<ref>) defines a vector |v_M⟩ = P|s_M⟩/√(⟨s_M|P|s_M⟩), which was introduced by Demortier and is orthogonal to each of the vectors constituting the background shape. The test statistic, then, can be rewritten as t_M = (⟨d - b̂|v_M⟩)^2, and the covariance can be expressed as Σ_MN = ⟨v_M|v_N⟩. where we used the symmetry and projector properties of P. It should be noted that from the data fluctuations d⃗ - b⃗̂⃗ contributing to the covariance matrix in the form Fluct. ∝ E_d{|d - b̂⟩⟨d - b̂|}, a superposition principle, relied on in ref. <cit.>, can be derived: Σ_MN = ∑_f Σ^f_MN, where f enumerates independent fluctuations in different bins. In summary, we can estimate the autocovariance matrix of the significance field from the signal model and derivatives of the background model: Σ_MN = ⟨s_M|/√(⟨s_M| P |s_M⟩) P |s_N⟩/√(⟨s_N| P |s_N⟩), M, N ∈ℳ P = 1 - Δ(Δ^⊺Δ)^-1Δ^⊺, Δ_i α = ∂ b_i(θ⃗)/∂θ^α|_θ⃗ = 0⃗. § JUSTIFICATION OF THE SET OF ASIMOV BACKGROUND SAMPLES In this section we would like to compare the derived expression eq. (<ref>) for the linear approximation of the significance covariance matrix to the empirical study <cit.> and the H →γγ inspired model introduced there. To carry out the calculations we used the SigCorr package that we developed specifically for trials factor studies, which now includes functionality for the linear approximation <cit.>. We estimate the linear approximation using eq. (<ref>) with the true parameters of the model, which were predefined in the paper. The resulting matrix shown in figure <ref> clearly resembles the one presented in the empirical study. We also show, in figure <ref>, the difference between the linear approximation computed on the model's true parameters (figure <ref>) and the empirical estimate. We confirm that the empirical covariance matrix is compatible with the linear approximation suggested in this paper within the accuracy of the empirical estimate. On the one hand, the compatibility of the linear approximation and the empirical study allows us to refer to the validations conducted in the empirical study, including those regarding trials factor estimation, and to re-apply them to the method suggested in this paper. The direct calculation of the up-crossings from the covariance matrix, described in <cit.>, becomes particularly appealing now, since it requires only a single fit of the statistical model to the data. The linear approximation, on the other hand, serves as the theoretical basis for the empirical set of Asimov background samples used to estimate the covariance matrix in the aforementioned work. § CONCLUSION In this work we proposed a novel method for the estimation of the covariance matrix of statistical significance in new particle searches using a linear expansion of the statistical model around its background-only best fit to the data. In addition to the closed form expression for the linear approximation of the significance covariance matrix, we also presented elegant expressions for the best fitted signal strength and statistical significance in this approximation. 
We proved that the suggested covariance matrix satisfies the superposition principle with regard to the fluctuations of the data, which makes it a good proxy for the covariance matrix constructed with the set of Asimov background samples <cit.>. Finally, we compared these two approaches with the example of an H →γγ inspired model and showed that the deviations are compatible with the error of the set of Asimov background samples. We, therefore, claim that all the validations conducted in the empirical study, including those regarding trials factor estimation, hold for the linear approximation suggested in this paper, and that the linear approximation serves as a theoretical basis for the construction of the empirical set of Asimov background samples. We would like to thank Elliot Reynolds for the encouraging discussion at the HDBS Workshop at Uppsala. This research was supported by the European Union Framework Programme for Research and Innovation Horizon 2020 (2014–2021) under the Marie Sklodowska-Curie Grant Agreement No. 765710.
http://arxiv.org/abs/2307.04381v1
20230710072646
ADAQ-SYM: Automated Symmetry Analysis of Defect Orbitals
[ "William Stenlund", "Joel Davidsson", "Viktor Ivády", "Rickard Armiento", "Igor A. Abrikosov" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
[email protected] Department of Physics, Chemistry and Biology, Linköping University, Linköping, Sweden Department of Physics, Chemistry and Biology, Linköping University, Linköping, Sweden Department of Physics, Chemistry and Biology, Linköping University, Linköping, Sweden Department of Physics of Complex Systems, Eötvös Loránd University, Egyetem tér 1-3, H-1053 Budapest, Hungary Department of Physics, Chemistry and Biology, Linköping University, Linköping, Sweden MTA–ELTE Lendület "Momentum" NewQubit Research Group, Pázmány Péter, Sétány 1/A, 1117 Budapest, Hungary Department of Physics, Chemistry and Biology, Linköping University, Linköping, Sweden Quantum technologies like single photon emitters and qubits can be enabled by point defects in semiconductors, with the NV-center in diamond being the most prominent example. There are many different semiconductors, each potentially hosting interesting defects. High-throughput methods and automated workflows become necessary when searching for novel point defects in a large chemical space. The symmetry properties of the point defect orbitals can yield useful information about the behavior of the system, such as the interaction with polarized light. We have developed an automated code to perform symmetry analysis of point defect orbitals obtained by plane-wave density functional theory simulations. The code, named ADAQ-SYM, calculates the characters for each orbital, finds the irreducible representations, and uses selection rules to find which optical transitions are allowed. The capabilities of ADAQ-SYM are demonstrated on several defects in diamond and 4H-SiC. The symmetry analysis explains the different zero phonon line (ZPL) polarization of the hk and kh divacancies in 4H-SiC. ADAQ-SYM is automated, making it suitable for high-throughput screening of point defects. ADAQ-SYM: Automated Symmetry Analysis of Defect Orbitals Igor A. Abrikosov August 12, 2023 ======================================================== § INTRODUCTION Point defects in semiconductors can provide a platform for solid state quantum technology, with applications such as qubits<cit.>, sensors <cit.> and single photon emitters <cit.>. One significant benefit of quantum applications made with solid state point defects is room temperature operation <cit.>. Theoretical calculations have been proven useful for identification of potentially interesting defects in wide-band gap semiconductors and quantitative estimations of their properties <cit.>. Indeed, first principles methods based on density functional theory (DFT) can simulate the electronic structures and predict multiple properties <cit.>. Each semiconductor material may host a multitude of intrinsic and extrinsic point defects. To probe the large combinatorically complex chemical space in an efficient manner, high-throughput workflows have been developed <cit.> to simulate thousands of defect combinations, calculate relevant properties and store the results into a searchable database <cit.>. Automatic Defect Analysis and Qualification (ADAQ) <cit.> is one such high-throughput workflow. There are many relevant properties to study, such as how the defect interacts with light. By analyzing symmetry of the defect orbitals and selection rules, one can deduce polarization of incoming and outgoing light. With symmetry analysis on the theoretical side, polarization specific PL measurements can be more accurately matched with simulated defects and orientation for single defects can be identified <cit.>. 
Before analyzing the orbitals, the point group symmetry of the crystal hosting the defect needs to be found. There are two broadly used codes for this, spglib <cit.> and AFLOW-SYM <cit.>. We use AFLOW-SYM because of its reported lowest mismatch when finding symmetry for known crystals <cit.>. In addition, there are several codes that calculate irreducible representations of bands but mostly with focus on topological insulators <cit.>. Within quantum chemistry, one method to quantitatively analyse the symmetry of molecular orbitals is with continuous symmetry measures (CSM) <cit.>, which provide a numerical measure of how close molecular orbitals are to certain irreducible representations. Defect orbitals in the band gap are localized much like molecular orbitals, yet methods similar to CSM have not been applied to point defects in solid host materials. Presently, a common method of symmetry analysis of defect orbitals is visually inspecting an isosurface of the wave function and how it behaves under the symmetry transformations, this may be prone to human error especially for high symmetry structures, and is not applicable in high-throughput workflows. Another method of analyzing the symmetry is to describe the defect orbitals as a linear combination of atomic orbitals and manually carrying out the group theory <cit.>. This method may be applicable in high-throughput, however, since the structure has already been relaxed with plane waves we focus on analyzing these directly without projecting to atomic orbitals. By omitting the projection we can keep all information present in the plane waves. This paper presents a quantitative symmetry analysis method for defect orbitals in solid host materials simulated with a plane wave basis set and the selection rules of optical transitions between defect orbitals. We introduce ADAQ-SYM, a Python implementation of this method. The code is fast and automated, requiring little user input, making it applicable as an analysis tool for high-throughput simulations of defects. Section <ref> presents an introduction to the group theory, specifically applied to defects. Section <ref> describes the ADAQ-SYM algorithm that performs the symmetry analysis, and Appendix <ref> deals with how the code is constructed and what approximations are used. Computational details of the simulations in this paper are described in Section <ref>. Section <ref> presents the results from symmetry analyses of several known defects; nitrogen vacancy (NV) center and silicon vacancy (SiV) center in diamond and the silicon vacancy (V_Si) and several divacancy (V_SiV_C) configurations in 4H-SiC. Section <ref> discusses these results and Appendix <ref> presents the recommended best practices when using the software. § THEORETICAL BACKGROUND In this paper we consider point groups. For convenience we summarize basic concepts following Ref. <cit.>. We refer to symmetry transformations as unitary transformations in three dimensional space which have at least one fixed point, meaning no stretching or translation. In the Schönflies notation, these transformations are: * Identity, E. * Rotation of 2π/n or 2π m/n, where n and m are integers, C_n or C_n^(m). * Reflection in a plane, σ_x. x = h, v or d, denoting reflection in a horizontal, vertical or diagonal plane. * Inversion, i. * Improper rotation of 2π/n or 2π m/n, where n and m are integers, which is a rotation 2π/n or 2π m/n followed by a reflection in a horizontal plane, S_n or S_n^(m). 
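As a small illustration (not part of ADAQ-SYM), each of these transformation types can be written as a 3×3 matrix acting on Cartesian coordinates, with the common fixed point placed at the origin; the choice of axes below is arbitrary.

    import numpy as np

    def rotation_z(n, m=1):
        """Proper rotation C_n^(m): rotation by 2*pi*m/n about the z axis."""
        a = 2.0 * np.pi * m / n
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    identity = np.eye(3)                       # E
    C3 = rotation_z(3)                         # C_3
    sigma_h = np.diag([1.0, 1.0, -1.0])        # reflection in the horizontal (xy) plane
    inversion = -np.eye(3)                     # i
    S3 = sigma_h @ rotation_z(3)               # improper rotation S_3: rotate, then reflect

    # all five are unitary and leave the origin (the fixed point) in place
    for op in (identity, C3, sigma_h, inversion, S3):
        assert np.allclose(op @ op.T, np.eye(3))
    print(np.linalg.det(C3), np.linalg.det(sigma_h), np.linalg.det(S3))  # +1 for proper rotations, -1 otherwise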
A set of these symmetry transformations, if they have a common fixed point and all leave the system or crystal structure invariant, constitutes the point group of that system or crystal structure. The axis around which the rotation with the largest n occurs is called the principal axis of that point group. The point groups relevant to solid materials are the 32 crystallographic point groups, of which the following four are used in this paper: C_1h, C_2h, C_3v and D_3d. In brief, a character describes how a physical object transforms under a symmetry transformation (1 = symmetric, -1 = anti-symmetric, 0 = orthogonal), and a representation Γ describes how an object transforms under the set of symmetry transformations in a point group. Each point group has a character table which has classes of symmetry transformations on the columns and irreducible representations (IRs) on the rows, with the entries in the table being characters. IRs can be seen as basis vectors for representations. Each point group has an identity representation, which is an IR that is symmetric with respect to all transformations of that point group. Character tables can have additional columns with rotations and polynomial functions, showing which IR they transform as. Appendix <ref> contains the character tables of the point groups used in this paper; these character tables also show how the linear polynomials (x, y and z) transform. For defects in solids, the point group is determined by the crystal structure, and the symmetry of the orbitals can be described by characters and IRs. Figure <ref> shows a divacancy defect in silicon carbide with the point group C_3v as an example. Comparing with the character table for C_3v, Table <ref> in Appendix <ref>, one sees that the orbital has the IR a_1. When defects are simulated with DFT, one obtains (one-electron) orbital wave functions ϕ_i and corresponding eigenvalues ϵ_i. An optical transition, where an electron moves from an initial state with orbital i to a final state with orbital f, has an associated transition dipole moment (TDM) μ⃗, which is expressed as: μ⃗ = ⟨ϕ_f|er⃗|ϕ_i⟩, where e is the electron charge and r⃗ is the position operator. Selection rules can be formulated with group theory <cit.>. For the TDM the following applies: for optical transitions to be allowed, the representation of the TDM Γ_μ must contain the identity representation, where Γ_μ = Γ_f ⊗Γ_r ⊗Γ_i, with ⊗ being the direct product, and Γ_r is the IR of the polarization direction of the light, corresponding to the linear functions in the character tables. § METHODOLOGY Figure <ref> shows the symmetry analysis process of ADAQ-SYM. Here, we describe the steps in detail. First, we perform a DFT simulation on a defect in a semiconductor host material. This produces a relaxed crystal structure and a set of orbital wave functions and their corresponding eigenvalues. These are the main inputs for ADAQ-SYM. The orbitals to be analyzed are chosen by the user. The electron orbitals associated with defects are localized around the defect, and the inverse participation ratio (IPR) is a good measure of how localized an orbital is <cit.>. The discrete evaluation of the IPR is χ = ∑_r |ϕ_i(r⃗)|^4/(∑_r |ϕ_i(r⃗)|^2)^2, and can be used to identify defect orbitals in the band gap, since they have much higher IPR than the bulk orbitals.
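A minimal sketch of this localization measure, evaluated for a synthetic delocalized orbital and a synthetic defect-like orbital on a toy real-space grid (grid size and orbital shapes are illustrative only):

    import numpy as np

    def inverse_participation_ratio(phi):
        """Discrete IPR, chi = sum |phi|^4 / (sum |phi|^2)^2; larger values mean more localized orbitals."""
        p2 = np.abs(phi.ravel()) ** 2
        return np.sum(p2**2) / np.sum(p2) ** 2

    n = 64
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

    bloch = np.exp(2j * np.pi * (X + Y))                                               # spread over the whole cell
    defect = np.exp(-((X - 0.5)**2 + (Y - 0.5)**2 + (Z - 0.5)**2) / (2 * 0.03**2))     # localized blob

    print("delocalized IPR:", inverse_participation_ratio(bloch))
    print("localized IPR:  ", inverse_participation_ratio(defect))

The absolute numbers depend on the grid, but the defect-like orbital comes out orders of magnitude more localized, which is the property used to pick out in-gap and hybridized defect states.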
There are also defect orbitals in the bands which are hybridized with the delocalized orbitals, their IPR are lower than the ones in the band gap, but still higher than the other orbitals in the bands. We employ IPR as a tool for identifying defect orbitals in the bands by spotting outliers. After the careful selection of ab initio data and inputs, ADAQ-SYM is able to perform the symmetry analysis. Second, the "center of mass" c⃗ of the each orbital is calculated according to c⃗ = ⟨ϕ(r⃗)|r⃗|ϕ(r⃗)⟩ = ∫ dr^3 ϕ^*(r⃗) r ϕ(r⃗)) = ∑_rϕ^*(r⃗) r ϕ(r⃗). These centers are used as the fixed points for the symmetry transformations. Orbitals are considered degenerate if the difference in their eigenvalues are less than a threshold. When calculating c for degenerate orbitals, they are considered together and the average center is used. This method does not consider periodic boundary conditions and necessitates the defect be in the middle of the unit cell. To mitigate skew of the center of mass, the wave function is sampled in real space, and points with moduli under a certain percentage p of the maximum are set to zero according to ϕ_trunc(r⃗) = 0 if |ϕ(r⃗)| < p max_r⃗ (ϕ(r⃗)) ϕ(r⃗) otherwise . Third, the point group and symmetry transformations of the crystal structure is found via existing codes. Each symmetry transformation has an operator Û. To get characters the overlap of an orbital wave function and its symmetry transformed counterpart, the symmetry operator expectation value (SOEV), is calculated ⟨Û⟩ = ⟨ϕ(r⃗)|Ûϕ(r⃗)⟩ = ∫ dr^3 ϕ^*(r⃗) (Ûϕ(r⃗)) , for each orbital and symmetry transformation. The wave function is expanded in a plane wave basis set, with G-vectors within the energy cutoff radius. Therefore, Eq. <ref> can be rewritten to be evaluated by summing over these G-vectors only once, the plane wave expansion is also truncated by reducing the cutoff radius when reading the wave function and renormalizing <cit.>. Fourth, the character of a conjugacy class is taken to be the mean of the overlaps of operators within that class, and the overlaps of degenerate orbitals are added. To find the representation of a set of characters, the row of characters is projected on each IR, resulting in how many of each IR the representation contains. Consider an IR Γ, where W⃗_Γ is a vector with the characters of Γ times their order. For example, for C_3v the vector for a_2 is W⃗_a_2 = (1·1, 1·2, -1·3) = (1,2,-3). Let V⃗ be the vector with a row of characters and h be the order of the point group, then N_Γ is the number of times the IR Γ occurs which is calculated as follows N_Γ = W⃗_Γ·V⃗1/h . For degenerate states the found representation should be an IR with dimension equal to the degeneracy, e.g. double degenerate orbital should have a two-dimensional e state. If an IR is not found, the overlap calculation is rerun with the center of another orbital as the fixed point. The CSM S for the IRs of molecular orbitals <cit.> is used for the defect orbitals and calculated with S(ϕ, Γ) = 100(1 - N_Γ), which produces a number between 0 and 100. S(ϕ, Γ)=0 means that the orbital is completely consistent with IR Γ, and S(ϕ, Γ)=100 means that the orbital is completely inconsistent with the IR Γ. Fifth, to calculate the IR of the TDM and find the allowed transitions, the characters of the TDM is calculated by taking the Hadamard (element-wise) product of the character vectors of each 'factor' V⃗_μ = V⃗_f ∘V⃗_r ∘V⃗_i and Eq. <ref> is used to calculate Γ_μ. 
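The IR projection, the CSM, and the TDM selection-rule test can be sketched compactly for the C_3v case used as the running example. The character-table values below are standard; the tolerance handling and the example numbers are illustrative and do not reproduce the actual ADAQ-SYM implementation.

    import numpy as np

    # C3v character table: rows = irreps, columns = the classes {E, 2C_3, 3sigma_v}
    IRREPS = {"a_1": np.array([1, 1, 1]),
              "a_2": np.array([1, 1, -1]),
              "e":   np.array([2, -1, 0])}
    CLASS_ORDER = np.array([1, 2, 3])
    H = CLASS_ORDER.sum()  # order of the group, h = 6

    def reduce_rep(chars):
        """N_Gamma = (1/h) sum_classes (class order) * chi_Gamma * chi, for each irrep."""
        chars = np.asarray(chars, dtype=complex)
        return {name: np.real_if_close(np.dot(CLASS_ORDER * irrep, chars) / H)
                for name, irrep in IRREPS.items()}

    def csm(chars, irrep_name):
        """Continuous symmetry measure S = 100 (1 - N_Gamma) for one irrep."""
        return 100.0 * (1.0 - float(np.real(reduce_rep(chars)[irrep_name])))

    def transition_allowed(chi_final, chi_r, chi_initial):
        """TDM selection rule: the element-wise product of character vectors must contain a_1."""
        chi_mu = np.asarray(chi_final) * np.asarray(chi_r) * np.asarray(chi_initial)
        return reduce_rep(chi_mu)["a_1"] >= 1 - 1e-6

    print(reduce_rep([0.99, 0.98, 0.97]), csm([0.99, 0.98, 0.97], "a_1"))   # nearly a_1, small CSM
    print(reduce_rep([4, 1, 0]))                                            # one copy of each irrep
    print(transition_allowed([1, 1, 1], [2, -1, 0], [2, -1, 0]))            # a_1 x e x e contains a_1

The second print reproduces the worked case discussed next, where a character vector of (4, 1, 0) reduces to a_1 + a_2 + e, so the corresponding transition is allowed.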
The representation of the resulting character vector is found in the same way as the IR of the orbital was found. As an example, consider the group C_3v with the three IRs a_1, a_2 and e. If some TDM in this group has the character vector V⃗_⃗μ⃗ = (4,1,0) calculating the representation would look like: W⃗_a_1 = (1,2,3), W⃗_a_2 = (1,2,-3), W⃗_e = (2,-2,0), h = 6 , Γ_μ = [N_a_1,N_a_2,N_e] = [4+2/6, 4+2/6, 8-2/6] = [1,1,1] . Since Γ_μ contains a_1, the transition is allowed. The code contains a function to convert a representation array of the above format to a string such as "a_1 + a_2 + e". Finally, the information produced by ADAQ-SYM is entered into a script to produce an energy level diagram which shows the position in the band gap, orbital occupation, IR and allowed transitions. § COMPUTATIONAL DETAILS The DFT simulations are executed with VASP <cit.>, using the projector augmented-wave method <cit.>. We apply the periodic boundary conditions, and the defects in the adjacent supercells cause a degree of self-interaction. To limit this, the supercell needs to be sufficiently large. In our case supercells containing more than 500 atoms are used. The defects are simulated with the semi-local Perdew, Burke and Ernzerhof (PBE) exchange-correlation functional <cit.>. These simulations only include the gamma point, run with a plane-wave cutoff energy of 600 eV, with the energy convergence parameters 1×10^-6 eV and 5×10^-5 eV for the electronic and ionic relaxations, respectively. The simulations are done without symmetry constraints so symmetry breaking due to the Jahn-Teller effect can occur when relaxing the crystal structure. Excited states are simulated by constraining the electron occupation <cit.>. § RESULTS AND DISCUSSION To illustrate the capability of our method, we apply ADAQ-SYM to several defects in two different host materials, diamond and 4H-SiC, and analyze the symmetry properties for the defects orbitals. The symmetry analysis provides a coherent picture of the known defects, finds the allowed optical transitions between defect orbitals, and, specifically, explains the different ZPL polarization of the hk and kh divacancies in 4H-SiC. §.§ Diamond Defects We first analyze the symmetry of the ground state of NV-, SiV0 and SiV- centers and silicon vacancy center in diamond. These defects were simulated in a cubic (4a,4a,4a) supercell containing 512 atoms, where a=3.57 Å. §.§.§ Negatively Charged NV Center Figure <ref> shows the ground state crystal structure and electronic structure of the NV- center in diamond. Figure <ref> (b) is the generated output from ADAQ-SYM, for each orbital in the band gap. It shows the eigenvalue, occupation and IR, as well as the allowed transitions for each polarization. In this case, the found IRs are in accordance with previous work <cit.>, and only one allowed transition is found, where the light is polarized perpendicular (⊥) to the principal axis. This selection rule has been experimentally confirmed <cit.>. §.§.§ Silicon Vacancy Center Figure <ref> shows the ground state electronic structure of the neutral (a) and negatively charged (b) silicon vacancy center in diamond, and the IPR for 30 KS-orbitals around the band gap. Our DFT calculations show that most orbitals in the VB are delocalized and have a low IPR. However, some orbitals have larger IPRs meaning that they are more localized and indicating that they are defect states. These defect states in the VB are ungerade (u), meaning anti-symmetric with respect to inversion. 
Both the charge states considered in this work have point groups with inversion symmetry which only allow optical transitions between orbitals of different symmetry with respect to inversion. To populate an orbital that is gerade (g), that is symmetric with respect to inversion, an electron from an u-state must be excited. When some orbitals in the valance band are taken into account, ADAQ-SYM finds two allowed transitions from defect states in the valance band to an empty state in the band gap, in agreement with previous calculations <cit.>. For SiV^- the behavior of the orbitals under inversion is clear. The IR of these states will depend on the point group being analyzed, and CSM is used measure how well the orbitals conform to different IRs. Table <ref> shows the CSM of the defect orbitals of SiV^- in different point groups. The orbitals conform well to the C_i point group, and with some tolerance they also conform to the IRs of C_2h. The orbitals do not conform IRs in D_3d, unless one considers the orbitals degenerate despite the difference in eigenvalues. §.§ Silicon Carbide In this subsection, we carry out the symmetry analysis of defects in 4H-SiC with ADAQ-SYM, in both the ground state and the lowest excited state. The IR of each KS-orbital in the band gap and the allowed polarization of light for both absorption and emission is shown in the figures below. 4H-SiC consists of alternating hexagonal (h) and quasi-cubic (k) layers, resulting in different defect configurations for the same stoichiometry. The defects were simulated in a hexagonal (6a,6a,2c) supercell containing 576 atoms, where a=3.09 Å and c=10.12 Å. For 4H-SiC, "in-plane" refers to the plane perpendicular to the c-axis. §.§.§ Negatively Charged Silicon Vacancy We simulated the ground and excited state of the negatively charged silicon vacancy in the h site. Figure <ref> (a) shows two allowed transitions with different polarization, where the parallel polarized transition has slightly lower energy than the perpendicular, this corresponds well to the V1 and V1' absorption lines <cit.> associated with the silicon vacancy in the h site <cit.>. Figure <ref> (b) shows that the transition back to the ground state emits light polarized parallel to the c-axis, in agreement with previous calculations and measurements of the V1 ZPL <cit.>. §.§.§ High Symmetry Divacancy Figure <ref> shows the ground and excited state of the hh configuration of the divacancy, and the allowed transitions. In the excited state, one electron occupies what was previously an empty degenerate state and causes an Jahn-Teller effect. Because of this, the point group symmetry is reduced from C_3v to C_1h and degenerate states split when the system is relaxed in our simulations. This also changes the principal axis from being parallel to the c-axis to being perpendicular to it, that is the principal axis now lies in-plane. The selection rule tells us that absorption (to the lowest excited state) happens only for light polarized perpendicular to the c-axis, and the transition from the excited state emits light polarized parallel to the in-plane principal axis, thus also perpendicular to the c-axis. This behavior corresponds well to previous calculations and measurements <cit.>. The kk divacancy is basically identical to the hh divacancy with respect to symmetry. §.§.§ Low Symmetry Divacancies The two low symmetry divacancy configurations hk and kh exhibit different behavior regarding the polarization of the ZPL <cit.>. 
Examining the symmetry of the orbitals and applying selection rules regarding the TDM allows us to distinguish between these configurations. For both of these low symmetry configurations, the only symmetry transformation is a reflection in a plane where the principal axis lies in-plane. Figure <ref> shows crystal- and electronic structure information of the hk divacancy. From panel (d) one sees that the relaxation to the ground state only emits light polarized parallel to the in-plane principal axis. Figure <ref> shows crystal- and electronic structure information of the kh divacancy. Panel (d) demonstrates that the relaxation to the ground state only emits light polarized perpendicular to the in-plane principal axis, meaning there are components both in-plane and along the c-axis. From the symmetry analysis by ADAQ-SYM, one can attribute the differing polarization behavior of the hk and kh configurations to the symmetry of the lowest excited state (symmetric and anti-symmetric respectively). Due to the principal axis laying in-plane, it is possible to experimentally determine the orientation of individual defects measuring the in-plane polarization angle of the PL detected along the c-axis, in an experiment similar to Alegre et al. <cit.>. In such a experiment, the hk divacancy will exhibit a luminescence intensity maxima when the polarization is parallel to the principal axis, and a minima when the polarization in perpendicular. The opposite would be true for the kh divacancy, and the two configurations could be distinguished by the approximately 30 meV difference in ZPL <cit.>, or by the 30 degree polarization differences between the respective maxima. § DISCUSSION The orbitals of the NV- center, seen in Figure <ref> (c)-(d), are a little asymmetric, despite this ADAQ-SYM reproduces the results of previous calculations <cit.> because there is a tolerance when finding the characters of an orbital. This shows that the code can produce correct IRs, even for systems that are not simulated with symmetry constraints and not very tightly converged, making this a useful tool for high-throughput calculations of defects where high convergence becomes costly. One issue that arose when analyzing SiV- is that the crystal symmetry was a little inconsistent with point group which the electronic structure seemed to conform to. Depending on the tolerance, AFLOW-SYM found either C_i or D_3d as the point group. The orbitals seem to conform to a C_2h point group, although with a strict tolerance on IR, it only matches with C_i. The crystal structure seems to be distorted in a way to break the symmetry by little and the difference between the distortion that would reduce D_3d to C_2h is of similar magnitude to the distortion that reduced the symmetry to C_i, meaning they both fall within or outside of the tolerance of AFLOW-SYM. A more accurate DFT simulation might address this and make the distortions distinguishable. In this case, it was solved manually by calculating overlaps in D_3d and then calculating CSM of various subgroups of D_3d and seeing in which subgroup the orbitals conformed reasonably to IRs. Having a loose tolerance parameter for the AFLOW-SYM crystal symmetry finder can be useful in ambiguous cases since ADAQ-SYM will then run for a larger set of symmetry operators, which gives an overview and can provide insight to what extent the orbitals are asymmetric with regards to each operator. It is also recommended to do this when multiple gradual distortions of the same defect are examined. 
The initial excited state calculation of the silicon vacancy seemed to show a case of the pseudo Jahn-Teller effect where the symmetry was reduced and the degenerate states split despite not being partially occupied in either spin channel. Upon running a simulation with more accurate tolerance parameters the point group remained C_3v and the splitting reduced to less than the threshold of 10 meV. For cases like this, convergence becomes more important and looser high-throughput simulations may exaggerate these effects, to resolve this one can have a higher degeneracy tolerance parameter which will cause more states to be grouped together as degenerate. § CONCLUSION We have presented a method of determining the symmetry of defect orbitals, and implemented this method in the software ADAQ-SYM. The implementation calculates the characters and irreducible representations of defect orbitals, the continuous symmetry measure is also calculated to get a numerical measure of how close the orbitals are described by the irreducible representations. Finally, ADAQ-SYM applies selection rules to the optical transitions between the orbitals. The code is applicable to efficient analysis of defects. We have applied the software to a variety of known defects with different point groups and host materials, and it reliably reproduces their symmetry properties. It is found that the polarization of the allowed transition for hk (kh) is parallel (perpendicular) to the in-plane principal axis, in accordance with experiments. A method to determine the orientation of individual hk and kh divacancies is also proposed. In summary, ADAQ-SYM is an automated defect symmetry analysis code which is useful for both manual and high-throughput calculations. § SOFTWARE AVAILABILITY For availability of ADAQ-SYM and instructions, see https://httk.org/adaq/https://httk.org/adaq/. § ACKNOWLEDGEMENTS This work was partially supported by the Knut and Alice Wallenberg Foundation through the Wallenberg Centre for Quantum Technology (WACQT). We acknowledge support from the Knut and Alice Wallenberg Foundation (Grant No. 2018.0071). Support from the Swedish Government Strategic Research Area Swedish e-science Research Centre (SeRC) and the Swedish Government Strategic Research Area in Materials Science on Functional Materials at Linköping University (Faculty Grant SFO-Mat-LiU No. 2009 00971) are gratefully acknowledged. JD and RA acknowledge support from the Swedish Research Council (VR) Grant No. 2022-00276 and 2020-05402, respectively. The computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) and the Swedish National Infrastructure for Computing (SNIC) at NSC, partially funded by the Swedish Research Council through grant agreements no. 2022-06725 and no. 2018-05973. This research was supported by the National Research, Development, and Innovation Office of Hungary within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004) and within grant FK 145395. § IMPLEMENTATION ADAQ-SYM is written in python using functional programming. Table <ref> provides an overview of the principal functions, and Table <ref> shows the settings ADAQ-SYM uses. To run the code, the user needs to provide three files from a VASP simulation; POSCAR or CONTCAR, the crystal structure; WAVECAR, wave function; EIGENVAL, eigenvalues and occupation of the bands. The user must also define which bands should be considered for the analysis. 
This should be a list of indices for each of the spin channels in EIGENVAL. In most cases, one should list the indices of the bands in the band gap. The functions and call AFLOW-SYM <cit.> to find the point group and symmetry operators of the input crystal structure, the symmetry operators are then sorted by their conjugacy class and arranged in the order the classes appear in the character table. These functions use the setting which determine tolerance for asymmetry AFLOW-SYM uses, values may be "tight" or "loose". The point group is used to load the right character table from text files by Gernot Katzer <cit.>. The vaspwfc module in the VaspBandUnfolding package <cit.> is used for reading the WAVECAR file and working with the plane wave expansion of the wave function, and it also serves as the basis of the IPR calculations. The calculates the "center of mass" of each of the considered bands using Eq. <ref> and <ref>, where the cutoff percentage p is read from the setting. The wave function is sampled in a real space grid where the setting makes the grid denser. The function loops through all considered orbitals and all symmetry operators and calculates the overlap, how Eq. <ref> is computed is described in more detail in <cit.> and Numpy <cit.> is used to accelerate the evaluation. The evaluation time of the overlap calculation scales linearly with the number of G-vectors in the plane wave expansion. To speed up the code the series is truncated by multiplying the cutoff energy by the factor . The cutoff energy corresponds to a radius in k-space and only G-vectors within the radius are used, so halving the cutoff energy gives roughly one eighth as many G-vectors. Truncating the series produces some error in the overlap, this error is relatively small for larger than 0.1 <cit.>, the symmetry does not depend strongly on the high frequency components of the plane wave expansion. Note that the overlap calculation will produce a complex number. The function reads the EIGENVAL file and groups the considered bands by degeneracy. Two bands are considered degenerate if the difference in eigenvalue is less than . This function also outputs the eigenvalue and occupation of the considered bands. The function takes the overlaps and the bands grouped by degeneracy and first adds the overlaps of degenerate bands for each symmetry operator, then the overlaps within each conjugacy class is averaged to produce the character. At this point, the character is complex valued but this is resolved with the following function. The function takes a set of characters and computes Eq. <ref> for all IRs of a point group, since the overlaps are in general complex N_Γ will also be a complex number. Doing this for a truly symmetric orbital will produce a complex number with a small imaginary component and a real component close to an integer. For a set of characters to be said to transform as IR Γ, the imaginary component must be smaller than and the real component must be within of a non-zero integer. For example with a tolerance of 0.05, characters producing N_Γ= 0.99 + 0.02i will be interpreted as transforming as IR Γ, while characters producing N_Γ= 0.96 + 0.07i or N_T= 0.92 + 0.03i will not. The same procedure is used when the CSM is calculated, since Eq. <ref> uses N_Γ. The function calculates Eq. <ref> for each occupied state i, each non-full state f and each linear function r. The representation is found with and if the trivial representation is contained, the transition is marked as allowed. 
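Two of these bookkeeping steps, grouping near-degenerate bands and accepting a complex-valued N_Γ only within tolerance, are easy to sketch. The snippet below is an illustration with hypothetical thresholds and band energies, not the ADAQ-SYM functions themselves.

    import numpy as np

    def group_degenerate(band_indices, eigenvalues, degeneracy_tol=0.01):
        """Group consecutive bands whose eigenvalue spacing is below degeneracy_tol (eV)."""
        order = np.argsort(eigenvalues)
        groups, current = [], [band_indices[order[0]]]
        for prev, nxt in zip(order[:-1], order[1:]):
            if eigenvalues[nxt] - eigenvalues[prev] < degeneracy_tol:
                current.append(band_indices[nxt])
            else:
                groups.append(current)
                current = [band_indices[nxt]]
        groups.append(current)
        return groups

    def interpret_n_gamma(n_gamma, imag_tol=0.05, int_tol=0.05):
        """Accept N_Gamma only if Im(N) is small and Re(N) is close to a non-zero integer."""
        if abs(n_gamma.imag) > imag_tol:
            return None
        nearest = int(round(n_gamma.real))
        if nearest != 0 and abs(n_gamma.real - nearest) <= int_tol:
            return nearest
        return None

    print(group_degenerate([130, 131, 132, 133], np.array([-1.20, -0.40, -0.395, 0.80])))
    print(interpret_n_gamma(0.99 + 0.02j), interpret_n_gamma(0.96 + 0.07j), interpret_n_gamma(0.92 + 0.03j))

With a 0.05 tolerance the three example values reproduce the cases above: 0.99 + 0.02i is accepted as one copy of the IR, while 0.96 + 0.07i and 0.92 + 0.03i are both rejected.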
The function uses Matplotlib <cit.> to create energy level diagrams of the considered states with the occupation and IR drawn, and all allowed transitions represented by arrows between the bands. The color of the arrow differs on the polarization of the transition. § BEST PRACTICES The following summarizes our recommendations when running the software: If no IR is found and several bands are close, increase degeneracy tolerance which will cause more states to be grouped together as degenerate. This may be preferential since actually degenerate orbitals split apart will not be assigned any IR, while accidentally degenerate orbitals grouped together as degenerate will assigned an IR which is the sum each orbitals IR, such as a_g+b_g, which makes it clear that the orbitals are accidentally degenerate. If no IR is found, check that the centers of mass are close to your defect. If not, recalculate the centers with higher grid density, setting 6 or 8. There is also an automated fallback where the atomic position of any unique atomic species will be used. If the crystal symmetry is unclear or you think it should be higher, increase AFLOWs tolerance. This way, the overlaps will be calculated for a larger set of symmetry operators.Then, check the overlaps manually, and look for subsets where the characters are close to integers, any such subset should be a point group which is a subset of the larger group. § CHARACTER TABLES The character tables used in this paper are presented here.
http://arxiv.org/abs/2307.05706v1
20230711181335
Data-driven Discovery of Diffuse Interstellar Bands with APOGEE Spectra
[ "Kevin A. McKinnon", "Melissa K. Ness", "Constance M. Rockosi", "Puragra Guhathakurta" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.SR", "stat.AP" ]
Kevin McKinnon [email protected] 0000-0001-7494-5910]Kevin A. McKinnon Department of Astronomy & Astrophysics, University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, USA 0000-0001-5082-6693]Melissa K. Ness Department of Astronomy, Columbia University, Pupin Physics Laboratories, New York, NY 10027, USA Center for Computational Astrophysics, Flatiron Institute, 162 Fifth Avenue, New York, NY 10010, USA 0000-0002-6667-7028]Constance M. Rockosi Department of Astronomy & Astrophysics, University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, USA 0000-0001-8867-4234]Puragra Guhathakurta Department of Astronomy & Astrophysics, University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, USA Data-driven models of stellar spectra are useful tools to study non-stellar information, such as the Diffuse Interstellar Bands (DIBs) caused by intervening gas and dust. Using ∼ 55000 spectra of ∼ 17000 red clump stars from the APOGEE DR16 dataset, we create 2nd order polynomial models of the continuum-normalized flux as a function of stellar parameters (, , , , and ). The model and data show good agreement within uncertainties across the APOGEE wavelength range, although many regions reveal residuals that are not in the stellar rest-frame. We show that many of these residual features – having average extrema at the level of ∼3% in stellar flux on average – can be attributed to incompletely-removed spectral lines from the Earth's atmosphere and DIBs from the interstellar medium (ISM). After removing most of the remaining contamination from the Earth's sky, we identify () absorption features that have less than a 50% (5%) probability of being explained by chance alone, including all 10 previously-known DIBs in the APOGEE wavelength range. Because many of these features occur in the wavelength windows that APOGEE uses to measure chemical abundances, characterization and removal of this non-stellar contamination is an important step in reaching the precision required for chemical tagging experiments. Proper characterization of these features will benefit Galactic ISM science and the currently-ongoing Milky Way Mapper program of SDSS-V, which relies on the APOGEE spectrograph. § INTRODUCTION Stellar spectra capture the parameters of a star's evolutionary state and record the chemical composition of the material in which it formed. Small samples of high resolution stellar spectra have been used to describe the individual element abundance distributions of the Milky Way (MW) in the local neighbourhood <cit.>. With the advent of large surveys – such as RAVE <cit.>, SEGUE <cit.>, APOGEE <cit.>, Gaia-ESO <cit.>, GALAH <cit.>, LAMOST <cit.>, and H3 <cit.> – has come the ability to map abundances across the disk, bulge, and halo of our Galaxy <cit.>. These large data ensembles have also enabled new, statistically-motivated questions to be tackled about topics such as the underlying dimensionality of individual abundance distributions and the information content of stellar spectra <cit.>. The answers to these questions are key to understanding the origin of individual elements and the utility of those elements to reconstruct the assembly history of the MW. Chemical tagging – the ability to distinguish co-natal stars based on chemical abundances derived from spectra – is one of the foundational ideas of stellar surveys. 
Understanding the conditions that create particular populations of stars informs our stellar physics models and puts constraints on models of galaxy formation and evolution. In theory, stars that are born together were formed from the same gas cloud and thus share a chemical signature in their atmospheres. In practice, the level of precision required for chemical tagging is not currently feasible <cit.>. The difficulties around chemical tagging become even more severe if there are unknown or unmodeled features in a spectrum, especially if those features impact wavelength regions used for measuring chemical abundances. In the visible and infrared (IR) regimes, the largest and most obvious source of non-stellar signal comes from the Earth's atmosphere. Because of detailed measurements of the night sky's effects as well as knowing the rest-frame that spectral features are produced in, astronomers are able to account for and remove the bulk of Earth's atmosphere's signature. However, many spectra suffer from imperfect skyline and telluric removal, which leaves residual features capable of confusing spectral analysis pipelines. Another (often ignored) source of contamination comes from intervening dust and gas along the line-of-sight (LOS) to a star. Due to the velocity offset between gas/dust clouds and stars, spectral features from the Interstellar Medium (ISM) can appear at different wavelength locations in a set of observations at different LOS in the Galaxy. This issue is complicated further when the identification or central wavelength of an ISM-based feature is unknown or poorly constrained. Without a complete and accurate model of a star's light, it is often difficult to know a priori whether a particular residual feature is caused by non-stellar sources or is simply unknown physics/missing chemical species in the model. One common detection and characterization method for diffuse interstellar bands (DIBs) is to measure a feature's presence in multiple spectra of different stars and then to show correlations between ISM properties (e.g. extinction from dust) and that feature's strength. Efforts to detect, characterize, and map these DIBs have historically been focused on the optical regime, though a growing number of studies have been exploring the near-IR <cit.>. For instance, the ten currently-known DIBs that fall in the near-IR H-band (1.51-1.7 μm) wavelengths seen by the APOGEE spectrograph are summarized in Table <ref>, which is in stark contrast to the thousands of known optical DIBs. It is particularly important to understand sources of IR features as this regime is able to peer through the dusty regions of our Galaxy's disk.

Table <ref>: Most precise measurements of rest-frame wavelengths for currently-known DIBs that fall inside of the wavelength regions covered by the APOGEE spectrograph^a. DIBs that fall between the wavelength coverage of the three APOGEE detectors have been omitted.

λ_0 (Å)            Reference
15225 ± 10         <cit.>
15272.42 ± 0.04    <cit.>
15616.13 ± 0.07    <cit.>
15651.38 ± 0.07    <cit.>
15671.82 ± 0.03    <cit.>
15990 ± 10         <cit.>
16231.1 ± 0.5      <cit.>
16571.5 ± 0.5      <cit.>
16582.5 ± 0.5      <cit.>
16592.5 ± 0.5      <cit.>

^a The <cit.> values have been converted from their reported Air wavelengths to Vacuum.

Astronomy's burgeoning “Big Data Era” has facilitated the development of novel data-driven approaches to understanding stellar spectra that are less reliant on underlying physical models.
A few successful techniques to characterize stellar light include using deep learning <cit.>, polynomial models of stellar labels <cit.>, and non-Gaussian Processes <cit.>. One significant benefit of data-motivated models is that they can describe stellar features – and correlations between features – in spectra that are currently unknown to physics-based models. Additionally, the data models do not rely on many of the simplifying assumptions that are common in synthetic models (e.g. local thermal equilibrium, 1D radial stellar models and atmospheres). Finally, data-driven models are especially useful when the physics is not well constrained, such as in the low-density environments of the ISM that are currently impossible to recreate on Earth. As a star's light passes through intervening gas and dust on its way to our telescopes, it is imprinted with ISM signatures from many chemical species whose identities and properties are generally unknown. Detailed characterization of all the signatures in a spectrum is therefore important in disentangling the origin of various spectral features – which furthers the science goals of both abundance measurements and ISM studies. Currently, the Milky Way Mapper (MWM) program of SDSS-V is using the APOGEE spectrograph to collect millions of stellar spectra across all regions of the MW to understand its formation history and the physics of its stars <cit.>; any improvements in the APOGEE reduction pipeline will therefore have compounding effects on the MWM science goals. Finally, better constraints on near-IR DIBs can be combined with tomography techniques <cit.> to develop a more complete picture of our Galaxy's ISM. In this paper, we describe the APOGEE spectra and stellar parameters used in our analysis in Section <ref> and then present a data-driven model of those spectra in Section <ref>. In Section <ref>, we study the spectral residuals to show that DIBs, tellurics, and skylines are responsible for many of the relatively-large remaining features. We remove the Earth-based residuals to detect and characterize the remaining DIB features in Section <ref>. Finally, we summarize our results in Section <ref>. § DATA This work makes extensive use of stellar spectra, abundances, and parameters of Red Clump (RC) stars in the MW as measured by the APOGEE spectrograph <cit.> on the Sloan Telescope at the Apache Point Observatory as a component of the Sloan Digital Sky Survey <cit.>. The RC sample – defined by <cit.> using stellar parameters and simulated stellar evolution – boasts high spectral signal-to-noise ratios (SNR) as well as precise stellar parameters and abundances. These properties make the RC sample an ideal population for data-driven modelling and for studying non-stellar residuals. The APOGEE spectra cover the H-band (∼ 15000 - 17000 Å) with high resolution (R∼ 22500) and fine pixel spacing (Δλ∼ 0.2 Å·pixel^-1). The publicly-available spectral data are given in the rest-frame of each star. Our analysis uses the individual visit spectra instead of the combined visit spectra to account for changes in the LOS velocity – and, therefore, the location of non-stellar features – of each observation. Distributions of the number of visits per star and the median spectral SNR of the individual visit spectra are given in Figure <ref>. The spectra were analysed by the APOGEE Stellar Parameter and Chemical Abundances <cit.> pipeline.
We use the ASPCAP , , , and measurements and uncertainties from Data Release 16 <cit.>, while stellar ages come from the catalogue of <cit.>. As exemplified in Figure <ref>, the stars in our sample occupy a relatively narrow range in stellar parameters and abundances. After noticing a minor secondary peak near (,)=(-0.6,+0.2) dex in the versus panel, we removed stars with abundances above the red line to ensure that our modelling only focuses on a single chemical population. For the individual visit spectra in our sample, we remove pixels that have SNR< 50 pixel^-1. We also set the maximum pixel SNR to 200 pixel^-1 as recommended by ASPCAP, which suggests that the “uncertainty floor is at the level of 0.5%”[see <https://www.sdss4.org/dr16/irspec/spectra/>]. Because of known superpersistence issues in the blue detector, we mask out spectral observations where the fiber number is ≤ 100; this selection removes approximately 6500 RC stars that do not have a single observation with a fiber number greater than 100. Finally, following the approach of <cit.>, we remove data from pixels that have any of the bitmask flags listed in Table <ref>. These APOGEE individual visit spectra in the stellar rest-frame, along with the <cit.> ages and ASPCAP parameters and abundances, are used in combination to build data-driven models in Section <ref>, which are the basis of our analysis. § MODELLING RC SPECTRA §.§ Preprocessing Spectra First, we continuum normalize the individual visit spectra using iterative B-spline fitting. At each iteration, the B-spline is defined by 50 Å-spaced knots. For the first iteration, all flux measurements in a spectrum are used to define the initial spline. For subsequent iterations, the new spline is measured using only flux values that are within 3σ (in flux uncertainty) for fluxes below the spline or 5σ for fluxes above the spline. This masking of fluxes is done to ensure that strong absorption features do not overly impact the continuum measurement. We iterate the spline fitting up to 100 times per spectrum, but stop iterating if the current iteration's spline is very similar to the previous one: ∑_i| c'_i,j,k-c_i,j,k|/c_i,j,k < 1× 10^-5 where c_i,j,k is the continuum spline value at pixel i from the previous iteration and c'_i,j,k is the current iteration's continuum spline value at pixel i for observation/visit number k of star j. This condition is such that the summed absolute fractional change is smaller than 0.001%, which tends to occur when the subset of fluxes being masked hasn't changed from one iteration to another. This process usually only takes a handful of iterations (i.e. ≤ 5), and virtually all of the spectra converge on a spline well before the 100 maximum iterations. To capture any remaining continuum, we repeat this continuum-fitting process after the first model fit to define a continuum-adjustment B-spline. We divide each individual visit spectrum by the previously-defined continuum spline and the best-fit model to get an approximate noise spectrum that may still have some continuum trends in it. We then use the same iterative B-spline fitting process as above, but using 25 Å-spaced knots and a 3σ threshold above and below for masking. The finer-spaced knots and the narrower threshold are because the residual spectrum ideally only consists of noise and any trends larger than ∼ 20 Å likely arise from an incomplete initial continuum removal.
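A minimal sketch of this iterative continuum fit on a synthetic spectrum is given below. It uses scipy's LSQUnivariateSpline as a stand-in for the spline implementation actually used, and the toy fluxes, errors, and knot placement are illustrative only.

    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    def fit_continuum(wave, flux, err, knot_spacing=50.0, max_iter=100, tol=1e-5):
        """Iterative B-spline continuum: knots every knot_spacing Angstroms; at each iteration,
        pixels more than 3 sigma below or 5 sigma above the current spline are masked before refitting."""
        knots = np.arange(wave[0] + knot_spacing, wave[-1] - knot_spacing, knot_spacing)
        keep = np.ones(wave.size, dtype=bool)            # first iteration uses every pixel
        cont = np.full(wave.size, np.median(flux))
        for _ in range(max_iter):
            spl = LSQUnivariateSpline(wave[keep], flux[keep], knots, w=1.0 / err[keep])
            new_cont = spl(wave)
            resid = flux - new_cont
            keep = (resid > -3.0 * err) & (resid < 5.0 * err)
            converged = np.sum(np.abs(new_cont - cont) / cont) < tol
            cont = new_cont
            if converged:
                break
        return cont

    rng = np.random.default_rng(1)
    wave = np.linspace(15100.0, 16900.0, 8000)
    err = np.full(wave.size, 0.01)
    flux = 1.0 + 2.0e-6 * (wave - 15100.0) + rng.normal(0.0, 0.01, wave.size)
    flux[3000:3010] -= 0.2                               # a mock absorption feature
    normalized = flux / fit_continuum(wave, flux, err)
    print(np.median(normalized), normalized.min())

Dividing the raw fluxes by the returned spline gives the continuum-normalized fluxes used in the modelling below.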
The continuum-normalized flux in pixel i for spectral observation k of star j is then y_i,j,k = f_i,j,k/c_i,j,k with corresponding uncertainty of σ_i,j,k = σ_f,i,j,k/c_i,j,k, where f_i,j,k and σ_f,i,j,k are the raw flux and uncertainty values. The continuum-normalized fluxes are then used as the inputs for the data-driven modelling. §.§ Modelling Flux using Stellar Labels Following the approach of <cit.>, we define the continuum-normalized flux in each pixel to be a 2nd-order polynomial of stellar labels <cit.>. We note that a key difference between these bodies of work is the stellar labels used in the fitting: <cit.> uses , , and , <cit.> uses the same, but also includes and mass, while our model uses , , , , and age. The vector of stellar labels in our analysis for star j is therefore given as x⃗_j = [ T_eff,j^2; T_eff,j×_j; T_eff,j×_j; T_eff,j×_j; T_eff,j×_j; T_eff,j; ^2_j; ⋮; ^2_j; ⋮; ^2_j; ⋮; ^2_j; ⋮; 1 ] such that the vector contains all the parameters to the first and second powers, cross terms, and a constant. Our model of the continuum-normalized fluxes is defined as y_i,j,k = x⃗_j^T ·θ⃗_i +ε_i,j,k where θ⃗_i are the coefficients for the stellar label terms in Equation <ref> for pixel i and ε_i,j,k∼𝒩(0,σ_i,j,k) describes the noise as a result of the uncertainty in a pixel's flux. This functional form implies that the data likelihood at pixel i is likelihood_i = ∏_j^n_*∏_k^n_obs,j𝒩(y_i,j,k | x⃗_j^T ·θ⃗_i, σ_i,j,k). We then see that the likelihood at a given pixel is maximized when ∑_j^n_*∑_k^n_obs,j(y_i,j,k - x⃗_j^T ·θ⃗_i/σ_i,j,k)^2 is minimized, which occurs when θ̂_i = [ ∑_j^n_*∑_k^n_obs,j1/σ_i,j,k^2x⃗_j ·x⃗_j^T ]^-1·[ ∑_j^n_*∑_k^n_obs,jy_i,j,k/σ_i,j,k^2x⃗_j ], which defines the best-fit coefficient vector at pixel i for a set of normalized fluxes, uncertainties, and stellar labels. To propagate the uncertainties on the stellar parameters to the model coefficients, we repeatedly draw realizations of the stellar parameters for each star and remeasure the best-fit coefficients at each pixel. After 500 iterations, we take the median of the best-fit coefficients to be the coefficients of final model. The residual flux for pixel i of observation k of star j is defined to be r_i,j,k,l = y_i,j,k - x⃗_j,l^T ·θ̂_i,l, for realization l of the the stellar parameters draws, x⃗_j,l, which yields a best-fit coefficient measurement of θ̂_i,l. To propagate the uncertainty on the stellar labels – and therefore the uncertainty on the best-fit coefficients – to the residuals, we also repeatedly measure the residual flux values for the 500 realizations, giving samples of r_i,j,k,l. The final residual flux, r̂_i,j,k, is taken to be the median of these realizations, with an uncertainty that is given by σ_r,i,j,k^2 = σ_i,j,k^2 + var( r_i,j,k,1, …, r_i,j,k,500), where var( r_i,j,k,1, …, r_i,j,k,500) is the variance of the 500 residual measurements at a given pixel for a given spectrum. In words, the residual uncertainty is a quadrature sum of the normalized flux uncertainty and the propagation of the best-fit coefficient uncertainty. A comparison of the the best fit model and data for a single observation of one star in our sample is shown in Figure <ref>. The uncertainty-scaled residual distribution for this star is close to the expected unit Gaussian distribution (mean of 0, standard deviation of 1), which shows that the model does a good job of capturing the information in the spectrum. 
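Because the model is linear in the coefficients θ⃗_i, the per-pixel training step is a single weighted least-squares solve. The sketch below illustrates this with made-up labels and a fake pixel; the label scaling (pivoting) is a numerical convenience added here and is not part of the description above.

    import numpy as np

    def label_vector(labels):
        """Quadratic design vector: squares and cross terms, the labels themselves, and a constant."""
        lab = np.atleast_1d(labels)
        quad = np.outer(lab, lab)[np.triu_indices(lab.size)]
        return np.concatenate([quad, lab, [1.0]])

    def fit_pixel(y, ivar, X):
        """Closed-form weighted least squares for one pixel: theta_hat = (X^T W X)^-1 X^T W y."""
        XtW = X.T * ivar
        return np.linalg.solve(XtW @ X, XtW @ y)

    rng = np.random.default_rng(2)
    n_star = 200
    labels = np.column_stack([
        rng.normal(4800.0, 100.0, n_star),   # Teff [K]   (illustrative values throughout)
        rng.normal(2.4, 0.1, n_star),        # logg
        rng.normal(0.0, 0.2, n_star),        # [Fe/H]
        rng.normal(0.05, 0.05, n_star),      # [alpha/Fe]
        rng.uniform(1.0, 10.0, n_star),      # age [Gyr]
    ])
    scaled = (labels - labels.mean(0)) / labels.std(0)   # pivot/scale so the quadratic terms are well behaved
    X = np.array([label_vector(l) for l in scaled])

    y = 1.0 - 0.02 * scaled[:, 0] + rng.normal(0.0, 0.005, n_star)   # fake normalized fluxes at one pixel
    ivar = np.full(n_star, 1.0 / 0.005**2)

    theta_hat = fit_pixel(y, ivar, X)
    print("rms residual at this pixel:", np.std(y - X @ theta_hat))

Repeating the solve pixel by pixel, and redoing it for each draw of the stellar labels, yields the coefficient medians and the residual uncertainties described above.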
If we assume that the fluxes and uncertainties reported by APOGEE accurately describe the data, then the best-fit Gaussian to the residual distribution (orange histogram) having a width of ∼ 1.15σ implies that there is information in this star's spectrum that the model does not capture at the level of ∼ 15% of the flux uncertainties. If we instead look at the uncertainty-scaled residual distribution across all observations and all pixels, we get the distribution shown in Figure <ref>. As before, the distribution is quite similar to the unit Gaussian, which suggests that the model performs well across all the spectral observations in our sample. Again, that the width is greater than 1σ reveals that there may be more information in the spectra than the model is able to describe. Using the residual histogram and best-fit Gaussian distribution in Figure <ref>, we vertically align the distributions so that they have the same height at their peaks so that we can quantify their difference in the wings (i.e. excess in the data over the expectation). First, we find that the data excess in the wings corresponds to ∼ 3.3% of the total pixels. Next, we draw many realizations of residual flux measurements from each bin in the blue data histogram where the data counts exceed the expected distribution. The results of this process are summarized in Figure <ref>, where the blue curve shows the median distribution of the size of flux residual in excess of the expectation. The blue point above the histogram shows the median and 68% region of the distribution, and the grey histograms show individual realizations of residual flux draws. In summary, the residuals that are not explained by the model have a median size of ∼ 2.9% of the stellar flux, with a 68% region covering 2% to 5% of the stellar flux. § STRUCTURE IN RESIDUALS To further explore the residuals, we begin by looking for trends in the flux residuals as a function of wavelength (in stellar rest-frame) and other explanatory variables. In many APOGEE pixels, we find that the model almost perfectly describes the data; for instance, the pixel summarized in the left panels of Figure <ref> shows a residual distribution (scaled by the residual uncertainties) that agrees very well with the unit Gaussian. When the stars are split up into different heliocentric velocity () bins – as is shown in the left middle panel – we see little difference between the residual distributions, and no obvious trend in those distributions' medians or standard deviations (left bottom panels). On the other hand, there are some pixels where the residual distributions are distinctly non-Gaussian and have widths that are much larger than 1σ. An example of this is shown in the right panels of Figure <ref> for a pixel near the central wavelength of the strongest known DIB in the APOGEE wavelength range <cit.>. Breaking the stars up into bins in the middle right panel, we now see significant differences between the residual distributions' shapes as well as their medians and widths (right bottom panels. In general, the extreme velocity bins have more positive residual medians and smaller standard deviations while the moderate velocity bins have negative residual medians and larger widths. We next look at the trends in the residuals with heliocentric velocity across neighbouring pixels at different wavelength cutouts. 
First, we sort the stars by their heliocentric velocity and then smooth the residuals (using a combination of inverse variance weighting and Gaussian smoothing based on at each pixel for each residual spectrum) to produce Figure <ref>. Each of the 18 panels show the smoothed residuals using 21 APOGEE pixels (i.e. width of ∼4 Å) centered on a particular feature, with the central wavelength of that feature listed on the right edge of the cutout. These regions correspond to 15 elemental features used by ASPCAP to measure abundances <cit.>, 1 region we identify as having minimal visible features (i.e. continuum), 1 region where we notice a large amount of residual Earth-based contamination, and 1 region around the strongest DIB in the APOGEE wavelength range. The y-axis of each panel in Figure <ref> corresponds to wavelength relative to the central wavelength label on the right of each plot, with blue wavelengths at the top and red wavelengths at the bottom. The x-axis is the sorted list of heliocentric velocities of the stars such that a vertical column in this figure corresponds to a single residual spectrum of a particular observation with that velocity. By eye, some of the wavelength cutouts (e.g. Fe, Ni, C, Cr, continuum) show small amplitude in smoothed residuals and not much correlation with heliocentric velocity. Other cutouts (e.g. K, DIB, P, sky, Na, Yb) show relatively strong trends in residuals with . Focusing on the DIB panel in particular, we see a residual minimum (i.e. an absorption feature) move through the cutout as a function of heliocentric velocity. This is because the strong DIB feature, while present in many of the APOGEE spectra, appears at different wavelength locations in the stellar rest-frame spectrum as a result of the offset between the velocity of the star and the velocity of the DIB-producing source. The wavelength of a DIB feature in a star's rest-frame spectrum is given by: λ_rest,* = λ_rest,DIB·(1+v_*+Δ v/c)·(1+v_*/c)^-1 where λ_rest,DIB is the wavelength of the DIB in its own rest-frame, v_* is the star's heliocentric velocity, and Δ v is the LOS velocity offset between the DIB source and the star. Because DIB features will show up at different wavelength locations in the stellar rest-frame spectra, our model is not able to describe the feature and thus leaves DIB residuals behind. Similarly, features from the Earth's sky are also likely to show up as residuals because they too appear at different wavelengths in the stellar rest-frame spectra: λ_rest,* = λ_rest,Earth·(1+v_*/c)^-1. For the Earth-based features, sorting by heliocentric velocity means that the features will occur at similar wavelength locations, which is why they appear strengthened in the velocity-sorted, smoothed residual panels. Comparing the features in the sky and the DIB panels, we also notice a difference in the horizontal width of the features; DIB features are known to be quite broad, so it agrees with expectations that the DIB panel residual has a visually larger width than, for example, the width of the local maximum diagonal stripe in the left half of the sky panel. Based on this difference, the narrow width of features in the P and Ce panels suggest these residuals are likely caused by the Earth instead of DIBs, while the large width of the residual minima stripes in the K and Na panels may originate from DIBs. 
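The two wavelength relations above translate directly into small helper functions; a hedged sketch (constants and names are ours):

```python
C_KMS = 299792.458  # speed of light in km/s

def dib_wave_in_stellar_frame(lam_dib_rest, v_star, delta_v):
    """Apparent wavelength of a DIB in the stellar rest-frame spectrum, for a star
    with heliocentric velocity v_star and a DIB source offset by delta_v (km/s)."""
    return lam_dib_rest * (1.0 + (v_star + delta_v) / C_KMS) / (1.0 + v_star / C_KMS)

def earth_wave_in_stellar_frame(lam_earth_rest, v_star):
    """Apparent wavelength of a skyline/telluric feature in the stellar rest frame."""
    return lam_earth_rest / (1.0 + v_star / C_KMS)
```

For velocity offsets of order ±30 km/s, the first relation moves a 15272 Å feature by roughly 1.5 Å, which is why the DIB signal drifts smoothly across the velocity-sorted panels.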
The particular pattern seen in the DIB panel of Figure <ref> is a result of the average RC heliocentric velocity having a correlation with the average ISM heliocentric velocity in the MW disk. Because the RC stars in our sample are generally quite old (average of ∼2.8 Gyr, from Figure <ref>), they have had a longer time to kinematically decouple from the gaseous disk they were likely born in. In the Galactocentric cylindrical radius range of our stellar sample – 95% of the APOGEE RC sample is within 6.4<R<13.4 kpc with a median at 10.1 kpc – the gaseous disk has been seen to have relatively flat rotation at ∼220 km/s <cit.>. Using RC stars from APOGEE and GALAH, <cit.> measure rotation velocities that are also relatively constant (210<V_ϕ<230 km/s) in the 6<R<11 kpc region for |Z|<0.75 kpc. In particular, their R>9 kpc measurements – where approximately 75% of our APOGEE RC stars fall – show remarkably stable V_ϕ∼215 km/s. This suggests that RC stars can be thought of as belonging to a rigidly rotating disk that is spinning ∼5 km/s slower than the gaseous disk. When we sort the DIB-based features by heliocentric velocity, we are largely sorting by azimuthal angle, as can be seen in Figure <ref>. If we assume both the RC stars and the ISM are described by rigidly rotating disks, then there is an average stellar heliocentric velocity and an average DIB heliocentric velocity for each bin in azimuthal angle; it is the relationship between the average stellar and average DIB velocities that produces the pattern in the DIB panel. We can explore this relationship explicitly by tracing the location of the DIB minimum as a function of stellar rest-frame wavelength. This is done in Figure <ref>, where the orange best fit line to the DIB minimum location and the known rest wavelength of the DIB feature allow us to estimate the DIB source velocity as a function of stellar heliocentric velocity. We next compare the relationship we measure for gas velocity as a function of stellar heliocentric velocity to our expectations. To that end, we assume the RC stars all belong to a rigidly rotating disk with V_ϕ∼215 km/s. Similarly, we assume there is also a gaseous disk rotating at V_ϕ∼220 km/s. For each RC star in Figure <ref>, we use the star's Galactic position to obtain the expected velocity vector from the rotating stellar disk, which is then transformed into a heliocentric velocity after accounting for the sun's position and motion[We use the same values for solar position and motion as <cit.>: (X,Y,Z)_⊙=(-8,0,0) kpc and (U,V,W)_⊙=(11.1,241.92,7.25) km/s. ]. We follow a similar procedure for generating the simulated heliocentric velocities of the gas, but we now need to assume a distance from the sun to the DIB source along each LOS. To be agnostic of the exact DIB source distance, we choose a fraction of the total LOS distance to each star, and use those new implied Galactic coordinates and the gas disk to measure the heliocentric velocity of the intervening gas. We repeat this process for distance fractions of 100%, 50%, 10%, and 1% of the total distance to each star. A comparison of the velocity difference between simulated gas and simulated stars is shown in Figure <ref>, where the data points are colored by the different fractional distances of the gas. The orange line is the result of using the best fit line in Figure <ref> with the known rest-frame wavelength of the DIB to measure the heliocentric velocity of the DIB as a function of stellar heliocentric velocity; to be clear, the orange line in Figure <ref> is not a fit to the simulated data.
We see good agreement between the orange line and the simulated cases when the DIB sources are, on average, between 1% and 50% of the distance to the stars. This implies that the velocity offset function we measure from the location of the DIB minimum in the stellar rest-frame is caused by a source that is in the foreground of the stars, as we would expect for an ISM-based absorption feature. Additionally, we see that the gas velocity offset in the simulated data has a relatively tight correlation with stellar velocity. This is ultimately what causes the pattern we see in the DIB panel of the smoothed residual in Figure <ref>; for the RC sample, stellar velocity has an approximately linear relationship with the gas offset velocity, so the wavelength location of a DIB in the stellar rest-frame, Equation <ref>, effectively becomes a function of stellar velocity alone. This explains why sorting by amplifies DIB signals and causes their location to move smoothly in wavelength. By comparing the apparent strength of residual spectral features with various parameters, we found a slight correlation in residual strength with spectral SNR. In particular, residual spectra with lower median SNR show higher levels of contamination, especially in wavelengths regions where tellurics and skylines are known to occur. We shift the residual spectra to the observer reference frame using each observation's LOS velocity to allow for Earth-based features to align in wavelength. We then inverse-variance combine residual spectra in different spectral SNR bins, which yields the coadded spectra shown in Figure <ref>; the combined uncertainty in each of the SNR bins are approximately the same, with median combined SNRs of ∼ 550 pixel^-1. The top panel shows the coadded residual flux in 10 different SNR bins, and this panel reveals that the extrema of the low SNR bins are generally larger in magnitude than the extrema of the higher SNR bins. The bottom panel is a similar plot, but now the spectrum of the highest SNR bin has been subtracted from all the spectra of the other SNR bins to highlight the change in residuals as a function of SNR. The faint blue regions in both panels corresponds to the strongest telluric absorption region in APOGEE <cit.>. In even the highest SNR bins, the residual features are commonly on the order of 1%-2% of the stellar flux, and as high as 4%-5% in the extreme cases. The structure we observe in our residual spectra reveals that a complicated combination of Earth-based features and DIBs are common across the APOGEE wavelength range. These trends of residuals with spectral SNR and heliocentric velocity highlighted in this work are what prompted the authors of <cit.> to include SNR and as explanatory variables in their regression fitting to measure the relationship between abundances and stellar evolutionary state. We posit that the non-stellar residuals seen in Figure <ref> may be responsible for the larger-than-expected scatter in the some of abundance distributions measured by ASPCAP <cit.>; some elemental abundances (e.g. Na) are determined by flux measurements at a small number of pixels, so the presence of DIBs at those wavelengths may have a relatively large impact. If we attribute a large portion of the residual excess in Figure <ref> to non-stellar features, then it wouldn't be unexpected to see a large effect from features with local minima that are ∼3% of the stellar flux. 
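The SNR-binned coadds described above only require a frame shift and an inverse-variance combination; a minimal sketch is below. It assumes each residual spectrum is sampled on a common wavelength grid and uses simple linear interpolation for the frame shift, which is only an approximation to however the pipeline actually resamples.

```python
import numpy as np

def to_observer_frame(grid, resid, v_star, c_kms=299792.458):
    """Resample a stellar rest-frame residual spectrum onto the same grid
    interpreted as observer-frame wavelengths (linear interpolation)."""
    wave_obs = grid * (1.0 + v_star / c_kms)   # where each rest-frame pixel lands
    return np.interp(grid, wave_obs, resid, left=np.nan, right=np.nan)

def ivar_coadd(resids, errs):
    """Inverse-variance coadd of residual spectra (rows); NaN pixels get zero weight."""
    ivar = 1.0 / errs**2
    ivar[~np.isfinite(resids)] = 0.0
    weight = ivar.sum(axis=0)
    good = weight > 0
    coadd = np.full(weight.shape, np.nan)
    err = np.full(weight.shape, np.nan)
    coadd[good] = (ivar * np.nan_to_num(resids)).sum(axis=0)[good] / weight[good]
    err[good] = 1.0 / np.sqrt(weight[good])
    return coadd, err
```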
§ REMOVING SKYLINES AND TELLURICS TO IDENTIFY DIFFUSE INTERSTELLAR BANDS To identify the DIBs present in the residual spectra, we first need to characterize and remove the more common contamination from Earth-based features. One advantage of isolating the Earth-based residuals versus the DIBs is that we are easily able to shift to the rest-frame of the skyline and telluric residuals. Because almost all the spectra have residual Earth-based features, but not all spectra have DIB contamination, it is useful to split the residual spectra up into two groups: those with and without the strongest DIB feature. Once we have these two groups, we can study the Earth-based contamination using the low-DIB-strength group. Those results can then be used to remove the Earth-based contamination from the high-DIB-strength group. To accomplish this, we need to rank the spectra by their DIB strength. We fit an inverted Gaussian to the strong DIB feature that is present around 15272 Å in each residual spectrum, after shifting to the heliocentric frame using the heliocentric velocity of each observation; an example of this is shown in Figure <ref>. This allows us to measure a LOS velocity and equivalent width for the strong DIB in each residual spectrum. We then use the EW measurements to define a low- and a high-DIB-strength group; residual spectra that failed to fit an absorption feature near 15272 Å were automatically assigned to the low-DIB-strength group, and then a threshold of EW=1×10^-2 Å was chosen to divide the spectra with DIB detections to ensure an approximately equal number of spectra (∼ 21000) in the low- and high-DIB-strength groups. For the residual spectra in the high-DIB-strength group, the 15272 Å DIB EW is compared to the K-band reddening in Figure <ref> with data points colored by the EW bins used in our later analysis. For the low-DIB-strength group, we fit a 2nd-order polynomial model to the observer-frame residual fluxes at each pixel using a process similar to what is described in Section <ref>. In this case, however, the stellar label vector in Equation <ref> is replaced by x⃗_j = [ SNR_j^2; SNR_j; 1 ] and the data we are fitting are the residuals in the observer frame. This model enables us to quantify the predictive power of SNR alone to describe the spectral residuals that are not captured by our five-label model from Section <ref>. Examples of the resulting best-fit model for different SNR bins are shown in Figure <ref>. Overall, this model captures much of the behaviour we see in the coadded residuals of Figure <ref>. There are a few regions that stand out visually (e.g. in the blue shaded regions where CO_2 telluric features are particularly strong) that have a saw-toothed shape to the residuals; this is a shape that can be created by subtracting two Gaussians with the same width but a slight offset in mean. We suggest that these Earth-based residuals are caused by a wavelength mismatch between the raw spectra and the sky models being subtracted. As shown in <cit.>, the APOGEE telluric and sky removal performs very well in most cases. <cit.> further shows improvement in APOGEE's Earth-feature removal; however, they also point out that the APOGEE reduction pipeline's approach is to flag and ignore pixels near particularly strong telluric and sky lines instead of a more computationally expensive approach that might achieve smaller residuals.
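A minimal version of the inverted-Gaussian fit used above to rank spectra by DIB strength might look as follows; the fitting window, initial guesses, and the Gaussian equivalent-width relation EW = depth × σ × √(2π) are our illustrative choices rather than values taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458
LAM_DIB = 15272.0   # rest wavelength of the strong APOGEE DIB, in Angstroms

def inverted_gaussian(lam, depth, center, sigma):
    """Absorption feature modelled as a negative Gaussian on a zero baseline."""
    return -depth * np.exp(-0.5 * ((lam - center) / sigma) ** 2)

def fit_strong_dib(wave, resid, err, window=15.0):
    """Fit the 15272 A DIB in a heliocentric-frame residual spectrum.
    Returns (v_los in km/s, EW in Angstroms), or None if the fit fails,
    in which case the spectrum would fall in the low-DIB-strength group."""
    sel = np.abs(wave - LAM_DIB) < window
    try:
        popt, _ = curve_fit(inverted_gaussian, wave[sel], resid[sel],
                            p0=[0.02, LAM_DIB, 2.0],
                            sigma=err[sel], absolute_sigma=True)
    except RuntimeError:
        return None
    depth, center, sigma = popt
    v_los = C_KMS * (center - LAM_DIB) / LAM_DIB
    ew = depth * abs(sigma) * np.sqrt(2.0 * np.pi)
    return v_los, ew
```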
Of particular interest, Figure 3 of <cit.> reveals the same saw-tooth residuals that we see in our Figure <ref>, with their figure focusing on the 15680-15800 Å region that is sensitive to tellurics from CO_2. While the APOGEE reduction process works well for their science goals, our analysis has shown that detailed accounting of Earth's atmosphere's light can reveal previously-hidden information. By looking at the wavelengths and shapes of the Earth-based features in Figure <ref>, we can see that some of the smoothed residual patterns in Figure <ref> are likely explained by the night sky. The Ce panel of Figure <ref>, for example, shows a repeating alternation of residual fluxes from low to high, which is visually dissimilar to the DIB panel which shows a single feature moving through the cutout window. The Ce panel's central wavelength of 15789.1 Å places it in a region of saw-toothed features in Figure <ref>, so this mottled pattern that appears after sorting by heliocentric velocity is likely the result of these saw-tooth shapes constructively and destructively combining. While this SNR-dependent Earth-residual model performs well in describing the observer-frame residuals, we choose to not use this model in our subsequent DIB analysis without extensive further testing. Instead, we remove skyline and telluric features by pairing up residual spectra in the low- and high-DIB-strength groups based on spectral SNR. That is, for each high-DIB-strength residual spectrum, we identify a low-DIB-strength residual spectrum with median spectral SNR that is within 5 pixel^-1 of the high-DIB-strength spectrum's SNR. We ensure that each low-DIB-strength residual spectrum is used only once. Once each high-DIB-strength residual spectrum has a corresponding low-DIB-strength residual spectrum, we subtract each pair and add their uncertainties in quadrature in the observer frame. This has the effect of removing the Earth-based residuals from the high-DIB-strength residual spectra while leaving the possible DIB features relatively untouched. The spectra are then shifted from the observer frame to the 15272 Å DIB rest-frame using the velocity measured in the inverted-Gaussian fitting for the high-DIB-strength spectrum in the pair. The next step is to break the spectra up into different bins based on the strength of the 15272 Å DIB. We define 10 DIB-strength bins such that approximately 2100 spectra are in each bin; these are shown by the different colors in Figure <ref>. We then combine the residual spectra in these bins using population fitting to measure a population mean and uncertainty/standard deviation at each pixel. The precise statistics used for this combination are explained in Appendix <ref>, but the key takeaway is that the population fitting takes into account both the uncertainty in each residual measurement as well as the spread in those measurements to give back realistic means and uncertainties at each pixel. The results of this population-fitting process are shown in Figure <ref>, where the colors denote the same DIB-strength bins as in Figure <ref>. The vertical red lines show the locations of the previously-known DIBs in Table <ref>; by eye, the increasing DIB-strength bins show increasing depth of features at most of these wavelengths. For the 15225 Å feature, the large literature uncertainty in central wavelength may suggest that the true central wavelength of that DIB should correspond to one of the features near 15250 Å instead. We also notice additional absorption features (e.g. 
slightly redward of 15700 Å) that have shapes similar to the previously-known DIBs. We recognize that all the DIB features are likely not in the same rest-frame as the 15272 Å DIB, as they might be produced by different intervening clouds along the sight-line to each star and therefore have different relative LOS velocities. However, the velocity difference between DIB sources along a single LOS aren't likely to be large enough to shift the wavelength by more than a few APOGEE pixels. For instance, the vast majority of the 15272 Å DIB velocities we measure are in the ± 30 km/s range, so at ∼ 16000 Å, a 10 km/s velocity offset would manifest as a ∼ 0.5Å offset in wavelength. When we combine spectra in each DIB-strength bin, we may slightly reduce the signal of DIB features from other sources because of a combination of wavelength offsets and their strength not correlating with the 15272 Å strength, but we still expect to detect their presence. Our decision to use the 15272 Å DIB rest-frame will, however, preserve the strength of features that are being produced by the 15272 Å DIB source, so this technique is particularly useful for identifying DIBs that correlate with the strongest-DIB. For future work where extremely precise central wavelengths are needed or the goal is to find every possible DIB in the APOGEE wavelength range, one approach would be to cross-correlate the residual spectra using different wavelength cutouts to find the velocities that amplify the signals of all DIBs. To identify possible locations of DIBs, we smooth the highest DIB-strength residual spectrum (i.e. the darkest purple line in Figure <ref>) with a Gaussian kernel width of 5 pixels; after some visual vetting, this yields local maxima and minima as possible DIBs in emission and absorption. Of the possible DIBs, we highlight the locations of 35 features in Figure <ref> that we will ultimately find are likely produced by the same source that produces the 15272 Å DIB; we choose to not show all possible features for visual clarity. As expected, this step finds features nearby to all of the previously-known DIBs. Many of the local extrema we find are likely spurious detections, but we choose to err on the side of testing too many extrema versus applying a more complicated thresholding at the detection step. For each local extremum, we then step redward and blueward from that wavelength in the smoothed spectrum until the slope is near 0 to define a useful wavelength region around each feature. We then manually check these regions to confirm that they cover the entirety of an apparent feature, tweaking the boundaries where necessary. This defines the wavelength cutouts we will later use for measuring the EW strength of the possible DIBs and for quantifying DIB detection probabilities. The list of possible features agree well with the features we find when we smooth a population-combined spectrum using all ∼ 21000 residual spectral pairs; we choose to use the feature locations from the highest DIB-strength spectrum instead of the total population because the signal from true DIBs will be strongest in the high DIB-strength bin. As mentioned in the last few paragraphs, the central wavelengths we measure in the 15272 Å DIB rest-frame for DIBs that are produced by a different source may be offset from their true value by a few pixels, but this is still an improvement on the wavelength location for a few of the previously-known DIBs in Table <ref>. 
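The smoothing-and-extrema search described above is straightforward to prototype. In this sketch we treat the quoted 5-pixel kernel width as the Gaussian sigma, and we approximate "step until the slope is near 0" by walking outward until the smoothed residual stops rising (for minima) or falling (for maxima); both choices are assumptions on our part.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelextrema

def candidate_features(wave, resid, kernel_pix=5):
    """Return (type, central wavelength, blue edge, red edge) for candidate
    DIB features in a combined residual spectrum."""
    smooth = gaussian_filter1d(resid, kernel_pix)
    minima = argrelextrema(smooth, np.less)[0]      # absorption candidates
    maxima = argrelextrema(smooth, np.greater)[0]   # emission candidates

    def edges(idx, kind):
        # away from a minimum the residual rises; away from a maximum it falls
        moving_away = (lambda a, b: a > b) if kind == "min" else (lambda a, b: a < b)
        lo, hi = idx, idx
        while lo > 0 and moving_away(smooth[lo - 1], smooth[lo]):
            lo -= 1
        while hi < smooth.size - 1 and moving_away(smooth[hi + 1], smooth[hi]):
            hi += 1
        return lo, hi

    out = []
    for idx in minima:
        lo, hi = edges(idx, "min")
        out.append(("absorption", wave[idx], wave[lo], wave[hi]))
    for idx in maxima:
        lo, hi = edges(idx, "max")
        out.append(("emission", wave[idx], wave[lo], wave[hi]))
    return sorted(out, key=lambda f: f[1])
```

The returned windows would then be visually vetted and tweaked where necessary, as described in the text.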
Additionally, because the spectra in each DIB-strength bin are a combination of ∼2100 individual residual spectra, a systematic velocity offset between different DIB sources would be required to produce a significant shift in the central wavelength location. This implies that our central wavelength measurements are likely within a handful of pixels of the true rest-frame wavelength of each DIB feature. To be conservative, we estimate the uncertainty on the central wavelengths of the possible DIBs to be ∼1 Å. We next measure the equivalent width of the local extrema in each of the DIB-strength bins of Figure <ref>. Using the wavelength cutouts defined above, we empirically integrate the flux. Specifically, we use the 5 pixels at the edges of each cutout (3 inside, 2 outside of the cutout) to define an average flux and wavelength on the blue and red edge; these blue and red average fluxes define a line for each DIB-strength bin that we use as the local continuum measurement. After subtracting off the local continuum line for each bin, we perform a trapezoid numerical integration to measure the area of the feature. To propagate uncertainties in the flux, we repeatedly sample flux measurements – including the fluxes used to measure the local continuum line – to get a distribution on the empirical EW for each bin. The EW measurements we report are the median and standard deviation of those realizations. An example of the resulting EW measurements is shown in the left panel of Figure <ref> for the possible DIB near 15706 Å, with a cutout of the feature shown in the right panel for the different DIB-strength bins. To characterize how much the EW measurements change between the bins, we perform a population fit to the EW measurements of the possible DIB feature (e.g. the y-axis values of the left panel); this population fit again follows the approach detailed in Appendix <ref>, and an example of the resulting medians on the population mean and width is shown with a black line and grey region in the left panel of Figure <ref>. We are particularly interested in the population width because a large width indicates that the EW measurements between the bins are quite different. Of course, each possible DIB feature may only be a spurious detection; that is, a feature may only show up in the highest DIB-strength bin by random chance alone. To determine the significance of a detection, we repeat the binning, combining, and EW-measuring process, but this time, we bin randomly. To be clear, instead of using the strong-DIB-strength ranking to bin our spectra, we assign spectra randomly into 10 different bins, with the same ∼2100 spectra per bin. Using random binning, we expect that the features in each bin will be quite similar to each other, as will their EW measurements. To quantify exactly how strong a detection is (i.e. its significance above randomness), we also measure the population width of the EW measurements from random binning. In particular, we are interested in how the population width distributions compare between the random binning and the binning by the strong-DIB-strength cases. To account for the variation in the random binning that might occur by chance alone, we repeat the random binning 5 times. The resulting population width distributions agree quite well between the realizations for each possible feature. This implies that our results aren't overly sensitive to the particular choice in random binning.
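The empirical equivalent-width measurement described above can be sketched as follows; the Monte Carlo loop mirrors the flux resampling in the text, while the sign convention (absorption features integrate to negative areas here) and the default of 500 realizations are our choices.

```python
import numpy as np

def empirical_ew(wave, flux, err, n_edge=5, n_mc=500, rng=None):
    """Equivalent width of a feature within one wavelength cutout.
    A local continuum line is defined from the average of the edge pixels on
    each side, subtracted, and the remainder is trapezoid-integrated; flux
    uncertainties are propagated by resampling."""
    rng = np.random.default_rng() if rng is None else rng
    ews = np.empty(n_mc)
    for m in range(n_mc):
        f = flux + rng.normal(0.0, err)                      # resampled fluxes
        xb, yb = wave[:n_edge].mean(), f[:n_edge].mean()     # blue edge
        xr, yr = wave[-n_edge:].mean(), f[-n_edge:].mean()   # red edge
        continuum = yb + (yr - yb) / (xr - xb) * (wave - xb)
        ews[m] = np.trapz(f - continuum, wave)               # np.trapezoid in newer numpy
    return np.median(ews), np.std(ews)
```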
To be careful, however, we average the population width distributions from the 5 random realizations together. Measurements from this point on that refer to “random” come from the “averaged random” results. Recognizing that strong-DIB-strength might not be the only/best parameter to explain the change in a feature's strength between bins, we explore a few additional binning options: , median spectral SNR, and A_K reddening. Respectively, these can be thought of as testing for features that are the result of residual chemical information that the model did not capture, remaining Earth-based contamination or other SNR effects, and DIBs that correlate with the amount of LOS dust but don't originate from the 15272 Å DIB source. As before, we sort by a given parameter and then bin the spectra so that there are approximately 2100 spectra per bin. In every case, we measure EWs in each bin at each possible feature, and then use those EWs to measure a population width distribution, which are then compared to the results from random binning. An example of these population width distributions for the different sorting parameters is shown in the left panel of Figure <ref>; in the right panel, samples from the random distribution have been subtracted from samples in the other distributions to get a distribution of σ_SORT- σ_RANDOM. We can then directly integrate these population width difference distributions to measure the probability that the population width for a given sorting parameter is greater than the population width from random sorting. In cases where none of the probabilities are greater than 50%, we decide that randomness alone likely produced the observed feature. We also use the medians of these distributions to determine which sorting parameter yields the greatest difference from random, implying that a particular feature is best explained by that parameter. Because the relationship between each feature's EW measurements and a given sorting parameter is potentially complicated (i.e. not a simple functional form), we argue that our approach is a cautious, statistically-motivated way of making detections without assuming a parameterized relationship. We are simply answering the question: “Which of the sorting parameters produces the largest difference in EW measurements between the bins?” If a feature has its best sorting from DIB-strength, it may suggest that feature is produced by the same source as the 15272 Å DIB. If the best sorting is A_K, it may be that feature is still truly a DIB, but that it has a different source than the 15272 Å DIB. We summarize the probabilities from all the sorting parameters for all possible features in Figure <ref>. At our best estimate of the wavelength for each feature, a vertical line of 4 points are plotted to show the probability above random for each of the different sorting parameters; the point with the highest median population width is enlarged compared to the others, provided that the maximum probability above random is greater than 50% (i.e. there is only one enlarged point per feature). These probabilities reveals that of the features are best explained by random chance alone (i.e. spurious detections), while the other , , , and features correlate best with the 15272 Å DIB strength, , A_K, and spectral SNR respectively. Comparing to the previously-measured DIBs (vertical red lines), we recover > 93% detection probabilities in each case. 
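Given posterior samples of the population width under a chosen sorting and under random binning, the "probability above random" reduces to differencing paired draws; a small sketch:

```python
import numpy as np

def prob_above_random(sigma_sort, sigma_random, n_draws=100_000, rng=None):
    """Pr(sigma_SORT > sigma_RANDOM) and the median width difference, computed
    from samples of the two population-width posteriors."""
    rng = np.random.default_rng() if rng is None else rng
    diff = (rng.choice(sigma_sort, n_draws, replace=True)
            - rng.choice(sigma_random, n_draws, replace=True))
    return float(np.mean(diff > 0.0)), float(np.median(diff))
```

The first returned value is the detection probability plotted above; the second, the median of σ_SORT − σ_RANDOM, is what decides which sorting parameter best explains a given feature.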
Apart from the 15225 Å DIB – which finds best sorting using A_K – all remaining previously-measured DIBs in Table <ref> have best sorting from DIB-strength. These results may suggest that most of the previously-known DIBs in APOGEE are produced by the same source/species along each LOS while the remaining DIB has a different origin. While we may not have perfectly captured the central wavelength or cutout region around each of the possible features, we emphasize that our process is likely biased towards calling a true DIB spurious rather than the other way around. A more rigorously-defined cutout region and fitting of the local continuum would likely yield stronger detection signals than we report, suggesting there may be even more DIB features than we discover. Highly-detailed analyses in the future may reveal DIBs that are not included in our results, and possible DIBs we classify as having non-ISM origin may be found to have significant signal. In general, the average probability above random for detected features is higher for the DIB-strength sorting compared to the other sorting parameters (e.g. mean Pr_DIB = 0.89 versus mean Pr_A_K = 0.83). This is likely a consequence of our choice to bin in the 15272 Å DIB rest-frame; as discussed previously, DIBs from other sources with slightly different rest-frames will experience a reduced signal, so their difference from random is not as strong. Table <ref> lists the number of features in each bin with detection probabilities (i.e. probability above random) above various thresholds. This is equivalent to asking how many of the enlarged points persist above changing heights of the shaded grey region in Figure <ref>. We notice that the number of detected features in the A_K sorting drops off faster than the number of features in the DIB-strength bin, and this again is likely caused by binning in the 15272 Å DIB rest-frame. Linking the resulting DIBs back to the features seen in Figure <ref>, many of the element windows occur near newly-detected DIBs. The Na panel, for instance, is near a detected feature at 16382.1 Å that is best explained by A_K sorting with 93% probability above random. This feature likely explains the diagonal streak in the smoothed residuals of the Na panel. It may also be responsible for increasing the scatter in Na abundances that ASPCAP reports, emphasizing the need to account for DIBs in future stellar abundance pipelines. To summarize the patterns in the heliocentric-velocity-sorted, smoothed residuals, the Fe, Ni, C, and Cr panels show minimal residual features suggesting these windows are free of significant non-stellar light. Like the DIB panel, the K, Mn, Co, Na, and Yb panels have nearby detected DIB features that likely explain their broad diagonal stripes. The P, Mg, Ce, Cu, Al, and even the continuum panels have narrower diagonal stripes like the sky panel and have no significant nearby detected DIBs, suggesting that these patterns are almost exclusively explained by Earth-based residuals. The O panel is unique in that it shows a mix of both DIB signal (i.e. broad emission near the right edge) and telluric residuals (i.e. narrower diagonals near the middle). As an initial step towards identifying the chemical species producing each DIB, we compare the central wavelength locations to known hydrogen transitions in the APOGEE wavelength range.
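The hydrogen cross-match is a one-line application of the Rydberg formula for the Brackett (n → 4) series; a sketch, with the Rydberg constant value and the 2 Å tolerance as our stated assumptions:

```python
R_H = 1.0967758e7   # hydrogen Rydberg constant in 1/m (reduced-mass value)

def brackett_wavelength(n_upper):
    """Vacuum wavelength in Angstroms of the Brackett n_upper -> 4 transition."""
    inv_lam_m = R_H * (1.0 / 4**2 - 1.0 / n_upper**2)
    return 1e10 / inv_lam_m

def match_hydrogen(centers, n_range=range(11, 21), tol=2.0):
    """Pair DIB central wavelengths with Brackett lines lying within tol Angstroms."""
    lines = {n: brackett_wavelength(n) for n in n_range}
    return [(w, f"Br{n}", lam) for w in centers
            for n, lam in lines.items() if abs(w - lam) < tol]

# brackett_wavelength(15) is ~15705 A, matching the Br15 identification quoted below
```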
The newly-detected DIB in Figure <ref>, for instance, is within 1 Å of the Brackett series n=15 to n=4 transition (λ_0=15705.0 Å), suggesting that atomic hydrogen in the ISM is a probable source of this feature. That we are able to detect features at many known hydrogen wavelengths without a priori searching there bodes well for our general methods. A complete summary of the / DIBs whose best sorting parameter is either DIB-strength or A_K is listed in Table <ref>. of the features are in emission and the remaining are in absorption. All wavelengths are given in the 15272 Å DIB rest-frame, and the Wavelength Range column gives the window we used to measure the EWs; the continuum wavelengths are defined by the 5 pixels nearest to the values in the Wavelength Range column. By summing the probabilities above random, we expect that of the DIB detections are truly DIBs. For features that have central wavelengths within 2 Å of a known hydrogen recombination line, we give the name of the series and level. Many of these newly-discovered DIBs occur in the same wavelength regions that were obscured by incomplete skyline and telluric removal; this, combined with their sub-percent sizes, explains why our analysis has been able to reveal these features for the first time. § SUMMARY We have created data-driven models of RC stellar spectra using ∼ 5.5× 10^5 individual observations of ∼ 1.7× 10^5 stars from the APOGEE dataset. The modeling uses five parameters – , , , , and age – to predict the spectra, and the resulting models agree quite well with the data (residual Gaussian mean near 0 and width of ∼ 1.16 σ) across the APOGEE wavelength range. This implies that these five labels are sufficient to explain the majority of information present in the spectra. Consequently, there is not a substantial amount of residual information that may be leveraged for pursuits such as chemical tagging. Though the residuals of the data-minus-model are relatively small (∼3% of stellar flux on average), it is very important to understand and isolate their astrophysical origin. We discover that there are many pixels where the residuals suggest that a significant number of non-stellar features are also present in the stellar spectra. We identify which of these features are likely Earth-based and which are likely Diffuse Interstellar Bands. We find possible DIBs in APOGEE spectra that have less than 50% probability of appearing by chance alone, including all previously-discovered DIBs in this wavelength region. Our key results include: * The residuals of our data compared to the model show correlations with heliocentric velocity, which we show is evidence that many of these residual features are not in the stellar rest-frame, such as skylines, tellurics, and DIBs. These residual features appear at the level of 3% of the stellar flux on average (Section <ref>, Figure <ref>; Section <ref>, Figures <ref>, <ref>, and <ref>); * The size of Earth-based residuals appear anti-correlated with spectral SNR. The shape of these residuals suggest that they may be removed by correcting for a wavelength offset between the sky model and raw observations (Section <ref>, Figure <ref>); * After removing the Earth-based residuals, we combine residual spectra in the rest-frame of the strongest DIB in the APOGEE wavelength range (λ_0 = 15272 Å). 
We detect DIB features in absorption (including all of previously-measured DIBs) and emission that show highest correlation in strength with either K-band reddening (A_K) or EW strength of the 15272 Å DIB feature (Section <ref>, Figures <ref> and <ref>, Tables <ref> and <ref>). Future work based on our results will focus on measuring the impact that unaccounted-for DIB and Earth-based features have on ASPCAP-measured abundances. It would also be worthwhile to compare the DIB wavelengths to lines produced by chemical species (i.e. more than just atomic hydrogen) that are known components of the ISM, as well as to correlate the new DIB strengths with additional tracers of ISM density. Another useful undertaking that would advance this work is detailed joint modelling of the DIBs, skylines, tellurics, and stellar spectra in APOGEE. A complete understanding of all sources of spectral features is necessary for chemical tagging experiments, and this will have the added benefit of improving our understanding of the ISM. KM, CMR, and PG were supported by NSF Grant AST-2206328. We thank Julianne Dalcanton and the Center for Computational Astrophysics Astrophysical Data Group for useful discussions. KM thanks his PhD advisors, CMR and PG, for enthusiastically supporting his decision to pursue a Designated Emphasis in Statistics, without which, this work would not have been possible. APO, LCO, SDSS Astropy <cit.>, Bovy's Code (<https://github.com/jobovy/apogee>), corner <cit.>, emcee <cit.>, IPython <cit.>, jupyter <cit.>, matplotlib <cit.>, numpy <cit.>, Price-Jones' Code (<https://github.com/npricejones/spectralspace>), scipy <cit.> aasjournal cccccccccccc Summary of the DIB features ( in absorption, in emission) where the best sorting parameter is either A_K or 15272 Å DIB-strength. Only features where Pr( σ_SORT > σ_RANDOM) > 0.5 are included. 
λ_0a Hydrogen Feature Wavelength Rangec 2cPr( σ_SORT > σ_RANDOM) Best Sort 4cEWd (mÅ) in bin 5-6 8-11 (Å) Transitionb Type (Å) DIB A_K Parameter min DIB max DIB min A_K max A_K 15172.08 Absorption 15169.6 - 15175.4 0.9607 0.8572 DIB -0.8 ± 0.9 5.6 ± 1.5 -0.8 ± 0.8 4.6 ± 1.5 15178.79 Absorption 15176.1 - 15181.3 0.9565 0.6423 DIB -1.4 ± 0.8 4.3 ± 1.4 0.3 ± 0.7 2.1 ± 1.3 15189.07 Emission 15186.8 - 15192.2 0.8240 0.6193 DIB -3.4 ± 0.8 -10.0 ± 1.6 -3.6 ± 0.8 -6.1 ± 1.4 15191.80 Absorption 15189.5 - 15194.5 0.2387 0.6785 A_K 2.3 ± 0.7 3.7 ± 1.3 1.4 ± 0.7 0.8 ± 1.2 15206.50 Absorption 15203.1 - 15210.1 0.8433 0.8584 A_K 3.0 ± 0.6 4.9 ± 1.3 4.4 ± 0.7 1.8 ± 1.3 15215.54 Absorption 15211.3 - 15220.8 0.9289 0.9387 DIB -1.8 ± 0.9 7.8 ± 1.8 -0.9 ± 1.0 4.3 ± 1.8 15224.37* Absorption 15221.8 - 15226.7 0.6093 0.9189 A_K -0.0 ± 0.5 1.2 ± 1.0 -1.5 ± 0.5 2.1 ± 0.9 15237.62 Absorption 15233.2 - 15245.0 0.8508 0.9808 A_K 16.0 ± 1.0 23.0 ± 2.4 13.2 ± 1.3 10.2 ± 2.2 15258.27 Absorption 15255.3 - 15264.0 0.9202 0.8982 DIB 2.4 ± 0.8 6.8 ± 1.5 6.0 ± 0.8 0.5 ± 1.3 15272.40* Absorption 15264.6 - 15281.0 0.9999 0.9999 DIB 59.4 ± 1.5 303.0 ± 2.8 75.3 ± 1.5 215.5 ± 2.6 15292.03 Absorption 15286.1 - 15296.3 0.9883 0.9240 DIB 5.8 ± 1.1 15.7 ± 2.2 3.5 ± 1.1 12.3 ± 2.0 15319.31 Emission 15315.7 - 15325.2 0.9906 0.9707 DIB -0.2 ± 1.0 -14.7 ± 1.8 2.7 ± 1.0 -8.5 ± 1.8 15347.49 Br18 Absorption 15344.5 - 15351.5 0.5131 0.8855 A_K -0.2 ± 0.6 3.7 ± 1.1 -0.4 ± 0.6 5.1 ± 1.2 15370.61 Absorption 15362.8 - 15375.5 0.9578 0.8730 DIB 3.4 ± 1.1 14.9 ± 2.3 5.0 ± 1.1 9.1 ± 1.9 15384.21 Absorption 15376.1 - 15391.9 0.8128 0.8068 DIB 5.3 ± 1.4 9.8 ± 2.8 1.8 ± 1.4 2.7 ± 2.7 15394.84 Absorption 15392.1 - 15397.8 0.4916 0.9186 A_K 2.6 ± 0.7 5.5 ± 1.3 4.3 ± 0.7 3.2 ± 1.3 15407.18 Absorption 15404.6 - 15410.6 0.7304 0.5966 DIB 1.3 ± 0.6 4.0 ± 1.1 1.2 ± 0.6 2.5 ± 1.0 15441.28 Br17 Absorption 15437.6 - 15450.2 0.8960 0.8995 DIB -1.3 ± 1.0 10.0 ± 2.0 3.4 ± 1.0 5.3 ± 1.8 15477.59 Emission 15469.9 - 15481.2 0.8836 0.8983 A_K -5.9 ± 1.0 -15.9 ± 2.0 -7.5 ± 1.0 -12.2 ± 1.7 15481.22 Absorption 15477.8 - 15485.7 0.9155 0.9664 A_K 0.2 ± 0.7 8.3 ± 1.3 6.1 ± 0.8 3.4 ± 1.4 15537.79 Absorption 15532.2 - 15541.2 0.3082 0.5685 A_K 0.1 ± 1.5 3.1 ± 2.7 0.8 ± 1.4 2.3 ± 2.8 15549.17 Absorption 15545.1 - 15554.5 0.9757 0.9101 DIB 3.4 ± 1.0 16.6 ± 1.8 6.0 ± 1.0 7.4 ± 1.8 15560.13 Br16 Absorption 15556.9 - 15564.4 0.9910 0.8949 DIB -2.4 ± 0.7 8.2 ± 1.2 0.3 ± 0.7 -0.2 ± 1.1 15601.89 Absorption 15599.5 - 15604.9 0.4632 0.8414 A_K -0.6 ± 0.6 2.5 ± 1.2 1.1 ± 0.6 -0.0 ± 1.0 15615.91* Absorption 15611.2 - 15624.3 0.9999 0.9999 DIB 10.0 ± 1.0 44.0 ± 2.2 16.1 ± 1.1 42.4 ± 1.7 15633.18 Absorption 15627.4 - 15637.7 0.8891 0.9183 A_K 3.1 ± 0.9 8.1 ± 1.6 2.4 ± 0.9 5.3 ± 1.6 15643.55 Absorption 15641.0 - 15646.4 0.8174 0.7844 DIB 0.5 ± 0.5 3.9 ± 0.8 -0.1 ± 0.5 1.4 ± 0.8 15652.63* Absorption 15646.1 - 15662.8 0.9999 0.9999 DIB 14.2 ± 1.6 76.6 ± 2.8 10.9 ± 1.5 62.0 ± 2.5 15671.89* Absorption 15663.7 - 15679.2 0.9999 0.9999 DIB 21.7 ± 1.2 76.0 ± 2.4 26.6 ± 1.3 59.3 ± 2.3 15706.13 Br15 Absorption 15701.8 - 15711.1 0.9953 0.9764 DIB 2.5 ± 1.2 20.7 ± 2.1 3.2 ± 1.0 14.0 ± 2.0 15722.41 Absorption 15718.9 - 15726.8 0.9921 0.9520 DIB -1.7 ± 0.9 7.6 ± 1.5 -0.8 ± 0.9 5.7 ± 1.4 15733.93 Absorption 15728.1 - 15738.9 0.8863 0.9278 A_K 1.6 ± 1.2 10.6 ± 2.3 0.9 ± 1.4 12.3 ± 2.1 15751.55 Emission 15746.8 - 15760.3 0.7631 0.8419 A_K -1.9 ± 1.2 -10.4 ± 2.5 -6.7 ± 1.3 -8.9 ± 2.3 15769.62 Absorption 15767.0 - 15772.5 0.8378 0.7406 DIB 1.5 ± 0.7 3.2 ± 1.2 2.3 ± 0.8 4.1 ± 1.2 15878.05 Absorption 15873.5 
- 15882.2 0.9611 0.6855 DIB 1.3 ± 0.9 13.3 ± 1.6 3.5 ± 0.9 9.4 ± 1.7 15889.24 Absorption 15886.6 - 15894.3 0.6431 0.8079 A_K 2.6 ± 0.8 5.6 ± 1.5 5.6 ± 0.9 3.5 ± 1.4 15919.35 Absorption 15913.9 - 15922.6 0.8737 0.5808 DIB -1.4 ± 0.8 5.1 ± 1.4 1.0 ± 0.7 0.5 ± 1.4 15927.93 Absorption 15925.1 - 15930.8 0.8308 0.2518 DIB 0.1 ± 0.5 3.0 ± 0.9 0.3 ± 0.5 2.0 ± 0.9 15937.83 Absorption 15931.9 - 15944.0 0.9847 0.7884 DIB 4.1 ± 1.0 16.5 ± 1.8 6.4 ± 0.9 10.3 ± 1.7 15949.95 Absorption 15944.7 - 15954.4 0.8917 0.9338 A_K 1.6 ± 0.8 8.5 ± 1.4 5.7 ± 0.8 6.4 ± 1.2 15977.29 Absorption 15974.4 - 15981.5 0.6406 0.5648 DIB 0.3 ± 0.7 4.0 ± 1.4 1.3 ± 0.7 2.5 ± 1.2 15989.22* Absorption 15983.7 - 15993.9 0.9771 0.9274 DIB 2.8 ± 0.9 14.3 ± 1.6 7.7 ± 1.0 8.2 ± 1.4 15999.38 Absorption 15995.0 - 16004.0 0.7028 0.4841 DIB 0.3 ± 0.8 5.6 ± 1.2 3.4 ± 0.7 4.5 ± 1.3 16029.47 Absorption 16025.7 - 16043.2 0.7538 0.8039 A_K -3.9 ± 2.1 12.0 ± 3.9 10.5 ± 2.0 2.5 ± 3.7 16046.53 Absorption 16044.3 - 16049.2 0.4056 0.7245 A_K -0.0 ± 0.8 3.7 ± 1.8 1.2 ± 0.7 0.8 ± 1.6 16059.40 Absorption 16054.7 - 16062.5 0.9423 0.6806 DIB 1.5 ± 0.8 4.7 ± 1.6 1.1 ± 0.8 5.8 ± 1.5 16113.62 Br13 Absorption 16101.8 - 16119.4 0.8491 0.8396 DIB 3.3 ± 1.8 17.1 ± 2.9 6.3 ± 1.7 15.4 ± 2.9 16132.78 Absorption 16130.5 - 16135.5 0.5925 0.3200 DIB 0.5 ± 0.6 3.2 ± 1.1 1.0 ± 0.6 2.7 ± 1.1 16141.92 Absorption 16138.4 - 16145.7 0.8523 0.2659 DIB 2.0 ± 0.6 3.4 ± 1.2 1.5 ± 0.6 1.1 ± 1.0 16147.94 Emission 16142.1 - 16157.3 0.6719 0.2360 DIB -4.0 ± 1.3 -12.2 ± 2.3 -5.4 ± 1.2 -8.3 ± 2.0 16157.09 Absorption 16152.9 - 16166.5 0.9850 0.9613 DIB -2.3 ± 1.2 14.0 ± 2.1 -0.2 ± 1.2 6.8 ± 2.2 16219.72 Absorption 16216.4 - 16224.9 0.9462 0.9046 DIB 0.9 ± 0.7 8.0 ± 1.5 1.1 ± 0.7 6.5 ± 1.4 16233.39 Absorption 16224.9 - 16237.6 0.9373 0.9335 DIB 4.3 ± 1.4 18.2 ± 3.2 7.8 ± 1.3 14.9 ± 2.5 16265.94 Absorption 16262.1 - 16270.0 0.9133 0.8788 DIB 2.4 ± 0.7 7.6 ± 1.3 3.3 ± 0.7 4.7 ± 1.2 16280.11 Absorption 16277.0 - 16286.6 0.8855 0.8796 DIB 1.4 ± 0.8 8.2 ± 1.5 -0.3 ± 0.8 4.4 ± 1.4 16290.68 Absorption 16288.0 - 16294.7 0.7890 0.9301 A_K 1.3 ± 0.6 4.6 ± 1.2 1.9 ± 0.6 4.4 ± 1.1 16371.00 Absorption 16368.5 - 16375.8 0.9926 0.7010 DIB 2.4 ± 0.7 9.1 ± 1.4 1.3 ± 0.7 4.7 ± 1.3 16382.09 Absorption 16378.9 - 16385.0 0.3357 0.9270 A_K 0.6 ± 0.7 3.7 ± 1.3 0.0 ± 0.6 2.7 ± 1.2 16411.31 Br12 Absorption 16406.3 - 16414.5 0.6866 0.9621 A_K 0.0 ± 1.0 8.6 ± 2.2 -5.1 ± 1.0 -0.3 ± 1.9 16518.68 Absorption 16515.5 - 16522.3 0.9870 0.8953 DIB -0.6 ± 0.7 8.1 ± 1.3 -0.2 ± 0.7 4.1 ± 1.1 16525.52 Absorption 16522.8 - 16528.5 0.9339 0.6967 DIB 1.0 ± 0.6 4.2 ± 1.1 0.4 ± 0.6 1.6 ± 1.0 16532.60 Absorption 16529.9 - 16536.3 0.8580 0.8020 DIB 1.5 ± 0.7 3.5 ± 1.3 2.4 ± 0.7 -2.0 ± 1.1 16536.49 Emission 16532.4 - 16550.2 0.9392 0.5342 DIB -3.3 ± 1.6 -19.3 ± 3.1 -8.1 ± 1.7 -2.9 ± 3.2 16549.51 Absorption 16541.7 - 16554.5 0.8462 0.8512 A_K -0.3 ± 1.5 5.7 ± 2.7 3.2 ± 1.5 8.5 ± 2.5 16573.31* Absorption 16567.8 - 16579.5 0.9998 0.9919 DIB 5.9 ± 1.1 30.3 ± 2.1 7.1 ± 1.0 22.9 ± 1.9 16584.99 Absorption 16580.4 - 16589.8 0.9906 0.9814 DIB 8.7 ± 0.9 18.3 ± 1.8 3.5 ± 0.9 14.7 ± 1.6 16594.62 Absorption 16590.5 - 16598.1 0.9346 0.6475 DIB 2.9 ± 0.7 9.0 ± 1.5 1.3 ± 0.7 4.5 ± 1.3 16601.73 Absorption 16598.7 - 16605.2 0.8588 0.7641 DIB 0.3 ± 0.6 5.3 ± 1.2 0.1 ± 0.6 2.6 ± 1.2 16608.38 Absorption 16605.8 - 16611.4 0.3847 0.5903 A_K 1.4 ± 0.6 2.2 ± 1.4 1.6 ± 0.6 4.2 ± 1.1 16621.92 Absorption 16617.3 - 16627.2 0.9017 0.6553 DIB -1.4 ± 0.9 7.8 ± 2.0 0.9 ± 0.9 -0.3 ± 1.8 16655.25 Absorption 16645.8 - 16661.2 0.6496 0.8407 A_K 1.2 ± 1.6 7.2 ± 3.5 
-2.4 ± 1.7 11.8 ± 2.6 16711.03 Absorption 16708.3 - 16715.2 0.7723 0.9000 A_K 1.0 ± 1.0 6.9 ± 2.0 1.5 ± 1.0 1.3 ± 2.0 16718.42 Absorption 16716.1 - 16722.8 0.7223 0.3377 DIB 0.2 ± 0.8 3.9 ± 1.7 1.0 ± 0.8 1.0 ± 1.4 16750.09 Absorption 16739.7 - 16753.3 0.6424 0.8165 A_K 0.9 ± 1.5 12.5 ± 3.3 3.7 ± 1.5 11.0 ± 2.9 16757.04 Absorption 16753.8 - 16762.1 0.8341 0.9163 A_K 0.2 ± 1.2 9.1 ± 2.2 2.9 ± 1.0 0.8 ± 2.0 16770.47 Absorption 16764.7 - 16777.7 0.3466 0.7885 A_K 5.1 ± 1.4 10.0 ± 3.2 5.4 ± 1.5 4.8 ± 3.0 16780.44 Absorption 16778.1 - 16783.0 0.6389 0.2577 DIB 0.9 ± 0.6 2.2 ± 1.2 0.7 ± 0.6 0.4 ± 1.1 16826.17 Absorption 16823.4 - 16828.7 0.8639 0.5821 DIB -0.5 ± 0.6 2.5 ± 1.5 0.0 ± 0.6 0.9 ± 1.2 16842.68 Emission 16835.5 - 16846.4 0.8304 0.1570 DIB -2.4 ± 1.3 -8.6 ± 2.9 -5.7 ± 1.3 -2.1 ± 2.7 16845.94 Absorption 16843.2 - 16849.0 0.4582 0.5016 A_K -0.8 ± 0.8 5.6 ± 1.7 2.7 ± 0.8 1.8 ± 1.5 16885.78 Absorption 16881.1 - 16888.8 0.7304 0.5061 DIB 2.1 ± 1.0 3.7 ± 2.4 0.3 ± 1.0 3.0 ± 1.8 16901.42 Absorption 16894.7 - 16905.4 0.4721 0.6623 A_K 6.6 ± 2.0 2.8 ± 5.5 1.6 ± 2.0 -0.5 ± 3.8 16916.84 Emission 16911.7 - 16921.8 0.9711 0.8659 DIB -2.0 ± 1.3 -14.2 ± 3.4 -4.8 ± 1.3 -9.4 ± 2.9 16920.58 Absorption 16917.3 - 16924.3 0.8827 0.7339 DIB 1.8 ± 0.9 8.2 ± 2.4 2.5 ± 0.9 2.6 ± 2.0 aWavelength, in the 15272 Å DIB rest-frame, of the local extremum found by smoothing the highest DIB-strength spectrum in Figure <ref> with a 5 pixel Gaussian kernel. bHydrogen transition wavelengths that are within 2 Å of the feature's central wavelength. cRegion, in the 15272 Å DIB rest-frame, that we measure the equivalent width from. The wavelengths used to estimate the local continuum are taken to be the 5 pixels nearest to the Wavelength Range values. dEquivalent width measured when binning by A_K or the 15272 Å DIB strength, in the minimum and maximum bins. *Nearest detected DIB to the previously-known DIBs in Table <ref>. § SPECTRAL COMBINATIONS Our analysis, like many studies involving spectroscopy, requires the combination of multiple spectral observations. The standard approach would be to use an inverse-variance weighted combination using the fluxes and corresponding uncertainties at each pixel, but this results in a combined uncertainty that is often overly constraining and which becomes smaller for any increase in the number of observations. Instead, we argue that fitting a population-level distribution is generally the better approach when combining spectra, particularly in cases where the uncertainty on the combined measurement is important. This technique returns population-level means and variances that incorporate the uncertainties in each individual measurement as well as the dispersion in those measurements. As an illustrative example, two flux measurements of the same star in one pixel might differ from one another by an amount larger than is described by their uncertainties; this case is displayed in Figure <ref>. Combining these measurements using inverse-variance weighting (black line) produces a mean that is, again, statistically far away from either measurement and has a high confidence (small uncertainty). If we instead fit a population-level distribution (red line), the resulting population width we measure is much larger than the inverse-variance width so that the population distribution can capture the large distance between the data[While this is a helpful visual example, we should be cautious about performing population fits on a very small number of measurements.]. 
For another example, consider a set of N normalized flux measurements that all have SNR of 10 pixel^-1. The inverse-variance weighted combination will return an uncertainty of ( 10/√(N))  pixel^-1, which will clearly decrease in size as we add more measurements, regardless of how similar or disparate those measurements may be to one another. Using population fitting instead, the distribution of the population width becomes narrower and can converge on an underlying true width, assuming one exists. This is illustrated in Figure <ref>. For this paper's analysis of residuals[NOTE: for this example, we are assuming that each star has one observation for clarity of the math, but the approach is easily expanded to include multiple observations of the same star.], we consider the measured/observed residual flux at pixel i for star j to be r_i,j with corresponding uncertainty σ_r,i,j. Then, we define the following hierarchical statistical model that describes the relationship between the individual measurements, their uncertainties, and the population distribution: p(σ̂_r,i) ∝ 1 p( r̂_i | σ̂_r,i) ∝ 1 ( r'_i,j | r̂_i, σ̂_r,i) ∼𝒩(r̂_i, σ̂_r,i^2 ) ( r_i,j | r'_i,j, σ_r,i,j) ∼𝒩(r'_i,j, σ_r,i,j^2 ) where r̂_i is the population mean of the residual fluxes in pixel i, σ̂_r,i is the population width/uncertainty/standard deviation of the residuals in pixel i, and r'_i,j is the true residual flux for star j in pixel i. We have chosen flat priors on the population parameters, though these could be changed to other distributions if there was good reason for it (e.g. Gaussian for p( r̂_i | σ̂_r,i)). The full posterior distribution is then: p(r̂_i, σ̂_r,i^2, r'_i,1, …, r'_i,n_* | r_i,1, σ^2_i,1, …, r_i,n_*, σ^2_i,n_*) ∝ p( r̂_i,σ_r,i) ·∏_j^n_*𝒩(r'_i,j | r̂_i, σ̂_r,i^2 ) ·𝒩(r_i,j | r'_i,j, σ_r,i,j^2 ) We next see that the posterior full conditional on r'_i,j is given by: p(r'_i,j | r̂_i, …) ∝𝒩(r'_i,j | r̂_i, σ̂_r,i^2 ) ·𝒩(r'_i,j | r_i,j, σ_r,i,j^2 ) = 𝒩( r'_i,j | μ_r',i,j, σ_r',i,j^2 ) where σ_r',i,j^2 = [σ̂_r,i^-2+σ_r,i,j^-2]^-1 and μ_r',i,j = σ_r',i,j^2·[σ̂_r,i^-2·r̂_i +σ_r,i,j^-2· r_i,j]. We can then use these results to integrate over r'_i,j in the full posterior distribution of Equation <ref> to find the marginal posterior of ( r̂_i | σ̂_r,i, data): p(r̂_i | σ̂_r,i, data) = 𝒩(μ_r̂,i, σ_r̂,i^2 ) where σ_r̂,i^2 = [∑_j^n_*( σ̂_r,i^2+σ_r,i,j^2)^-1]^-1 and μ_r̂,i = σ_r̂,i^2·[∑_j^n_*( σ̂_r,i^2+σ_r,i,j^2)^-1· r_i,j]. Using Bayes' Law, we can find the marginal posterior for p(σ̂_r,i | data) as: p(σ̂_r,i | data ) ∝ p(σ̂_r,i) ·σ̂_r,i^1/2·∏_j^n_*(σ̂_r,i^2+σ_r,i,j^2)^-1/2·exp( -(r_i,j-r̂_i)^2/2 ( σ̂_r,i^2+σ_r,i,j^2)) With the functional forms of the distributions in Equation <ref> and <ref> in hand, we are able to draw samples of (r̂_i, σ̂_r,i | data) fairly quickly. First, we evaluate p(σ̂_r,i | data ) for a reasonable range and number of σ̂_r,i values, then we use those probabilities to draw σ̂_r,i samples. Next, we use those σ̂_r,i samples to draw samples from the (r̂_i | σ̂_r,i, data) Gaussian distribution, which is a relatively easy and efficient step. Once we've repeated this process a sufficient number of times, we can take the median of the (r̂_i, σ̂_r,i | data) samples as our best estimate of the underlying population mean and width. Sometimes, like in the main text of this work, we are also interested in the distribution on (r̂_i, σ̂_r,i | data) itself, and this technique allows us to study this distribution.
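A direct numpy implementation of the two-step sampler described above is short. This sketch uses the standard flat-prior Gaussian marginal for the population width (the population mean is integrated out analytically, so r̂_i in the exponent is replaced by μ_r̂,i); grid limits, sample counts, and names are our illustrative choices.

```python
import numpy as np

def population_fit(r, sigma, n_samples=2000, n_grid=500, rng=None):
    """Draw samples of the population mean and width for one pixel, given
    residuals r and their uncertainties sigma (1-D arrays over stars)."""
    rng = np.random.default_rng() if rng is None else rng
    grid = np.linspace(1e-4, 5.0 * max(np.std(r), np.median(sigma)), n_grid)

    def mean_and_var(sig_pop):
        var = sig_pop**2 + sigma**2          # sigma_hat^2 + sigma_r,i,j^2
        v = 1.0 / np.sum(1.0 / var)          # sigma_rhat^2
        return v * np.sum(r / var), v        # mu_rhat, sigma_rhat^2

    # marginal posterior of the population width on a grid (flat priors)
    logp = np.empty(n_grid)
    for i, s in enumerate(grid):
        var = s**2 + sigma**2
        m, v = mean_and_var(s)
        logp[i] = (0.5 * np.log(v)
                   - 0.5 * np.sum(np.log(var))
                   - 0.5 * np.sum((r - m) ** 2 / var))
    prob = np.exp(logp - logp.max())
    prob /= prob.sum()

    # step 1: draw population widths from the gridded marginal
    sig_samples = rng.choice(grid, size=n_samples, p=prob)
    # step 2: draw the population mean from its Gaussian conditional
    rhat_samples = np.empty(n_samples)
    for k, s in enumerate(sig_samples):
        m, v = mean_and_var(s)
        rhat_samples[k] = rng.normal(m, np.sqrt(v))
    return rhat_samples, sig_samples   # medians give the reported mean and width
```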
http://arxiv.org/abs/2307.05092v1
20230711075206
Offline and Online Optical Flow Enhancement for Deep Video Compression
[ "Chuanbo Tang", "Xihua Sheng", "Zhuoyuan Li", "Haotian Zhang", "Li Li", "Dong Liu" ]
cs.CV
[ "cs.CV", "eess.IV" ]
Offline and Online Optical Flow Enhancement for Deep Video Compression Chuanbo Tang, Xihua Sheng, Zhuoyuan Li, Haotian Zhang, Li Li, Member, IEEE, Dong Liu, Senior Member, IEEE C. Tang, X. Sheng, Z. Li, H. Zhang, L. Li and D. Liu are with the CAS Key Laboratory of Technology in Geo-spatial Information Processing and Application System, Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, 230027, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]). Corresponding author: Dong Liu. ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Video compression relies heavily on exploiting the temporal redundancy between video frames, which is usually achieved by estimating and using the motion information. The motion information is represented as optical flows in most of the existing deep video compression networks. Indeed, these networks often adopt pre-trained optical flow estimation networks for motion estimation. The optical flows, however, may be less suitable for video compression due to the following two factors. First, the optical flow estimation networks were trained to perform inter-frame prediction as accurately as possible, but the optical flows themselves may cost too many bits to encode. Second, the optical flow estimation networks were trained on synthetic data, and may not generalize well enough to real-world videos. We address the twofold limitations by enhancing the optical flows in two stages: offline and online. In the offline stage, we fine-tune a trained optical flow estimation network with the motion information provided by a traditional (non-deep) video compression scheme, e.g. H.266/VVC, as we believe the motion information of H.266/VVC achieves a better rate-distortion trade-off. In the online stage, we further optimize the latent features of the optical flows with a gradient descent-based algorithm for the video to be compressed, so as to enhance the adaptivity of the optical flows. We conduct experiments on a state-of-the-art deep video compression scheme, DCVC. Experimental results demonstrate that the proposed offline and online enhancement together achieves on average 12.8% bitrate saving on the tested videos, without increasing the model or computational complexity of the decoder side. Video coding, versatile video coding, rate control, rate-distortion model. § INTRODUCTION To meet the demand for video transmission and storage, video compression has been developed for several decades. Video compression relies heavily on exploiting the temporal redundancy between video frames, which is usually achieved by estimating and using the motion information. The motion information is represented as optical flows in most of the existing deep video compression networks <cit.>. 
Indeed, these networks often adopt pre-trained optical flow estimation networks <cit.> to estimate the motions between video frames. Taking a widely acknowledged and highly flexible scheme, DCVC <cit.>, as an example, the pre-trained Spynet <cit.> is used for estimating the optical flows. The optical flows can be considered as pixel-wise motion vectors (MV) and are compressed by an autoencoder-based MV encoder <cit.>. In the training stage, the pre-trained Spynet is first loaded, and then the whole deep video compression network is optimized in an end-to-end manner. In the inference stage, the motion information of different video contents is obtained through the fixed networks. However, regarding the optical flows estimated by the commonly-used pre-trained optical flow estimation networks <cit.> as motion information in deep video compression schemes may be less suitable due to the following two factors. First, the pre-trained optical flow estimation networks are trained to perform inter-frame prediction as accurately as possible, but the optical flows themselves may cost too many bits to encode. Although they can be further optimized with the whole video compression networks in an end-to-end manner, the inappropriate initial point may affect the final optimization result. Second, the optical flow estimation networks are trained on synthetic data <cit.>, and may not generalize well enough to real-world videos. The end-to-end optimization in video compression networks can alleviate the domain gap between the synthetic data and the real-world videos to some degree. However, once the end-to-end optimization is finished, the optical flow estimation network is "optimal" in the sense that the average performance over the entire training set is optimal, but not "optimal" in the sense that the network produced optical flows may not be the optimal for any given video sequence. To address the twofold limitations, we consider learning the good traditions from the inter-frame prediction techniques in traditional (non-deep) video compression schemes. The latest traditional video compression standard H.266/VVC <cit.> has achieved great success in effectively estimating and using the motion information, which is represented by MV. Specifically, in the offline stage, various hand-crafted inter-frame prediction modes are first designed for different types of motions without optimization. Then, the optimal mode is searched online to achieve the best rate-distortion (RD) performance for each coding sequence. Such offline and online optimization is believed a promising direction for learning-based video compression as well in the reference <cit.>. Similar to the two-stage strategy in the traditional video compression scheme, we address the twofold limitations of the optical flows by enhancing them in two stages: offline and online. In this paper, we propose an offline and online enhancement on the optical flows to better estimate and utilize motion information under the RD constraint. Specifically, in the offline stage, the trained optical flow estimation network Spynet is fine-tuned by the MV provided by VTM (reference software of H.266/VVC), as we believe the MV of VTM achieves a better RD trade-off. With the guidance of the MV of VTM, the optical flow estimation network can provide a more appropriate initial point for end-to-end optimization in video compression networks. 
In terms of the online stage, we optimize the latent features of the optical flows with a gradient descent-based algorithm for the video to be compressed, so as to enhance the adaptivity of the optical flows. Inspired by the search-based online optimization algorithm in traditional video compression schemes, our scheme enables online updating the latent features of the optical flows by minimizing the RD loss in the inference stage, which has been introduced in deep image compression <cit.>. When online updating the latent features of the optical flows, the parameters of the whole video compression networks are fixed and the decoding time remains unchanged. With the online enhancement, the updated latent features can help the video compression networks achieve a better RD performance than the latent features obtained by a simple forward pass through the MV encoder. We conduct experiments on the widely acknowledged baseline DCVC <cit.> to verify the effectiveness of our scheme. Experimental results demonstrate our scheme can outperform DCVC without increasing the model size or computational complexity on the decoder side. It is worth noting that our scheme is a plug-and-play mechanism that can be easily integrated into any deep video compression framework with the same motion estimation and MV encoder. Our contributions are summarized as follows: * We propose an offline enhancement on the optical flows by fine-tuning the optical flow estimation network with the MV of VTM. With the guidance of the MV of VTM, the optical flow estimation network can provide a more appropriate initial point for end-to-end optimization in deep video compression networks. * We further enhance the adaptivity of the optical flows by online optimizing the latent features of the optical flows according to the contents of different coding sequences in the inference stage without changing the network parameters. * When equipped with our proposed offline and online optical flow enhancement methods, the baseline scheme DCVC achieves a better RD performance without increasing the model size and decoding complexity. § RELATED WORK §.§ Deep Video Compression Recently, deep video compression has explored a new direction for video compression. The mainstream of deep video compression frameworks can be divided into two categories: the motion-compensated prediction and residual coding framework and the motion-compensated prediction and conditional coding framework. DVC <cit.> is the pioneering work for the motion-compensated prediction and residual coding framework, which followed the traditional video compression framework and replaced all the modules with neural networks. Different from the residual coding-based framework, DCVC<cit.> introduced the motion-compensated prediction and conditional coding framework, which is able to utilize the learned temporal correlation between the current frame and predicted frame rather than the subtraction-based residual. Many existing works have followed these two frameworks and conducted research on the frameworks and optimization strategy. Research on the frameworks. For the motion-compensated prediction and residual coding framework, many existing works <cit.> have followed this framework to improve the compression ratio. Lu et al. <cit.> utilized the auto-regressive model <cit.> to compress the motion and residual, and added the motion and residual refine modules to further improve the compression performance. Lin et al. 
<cit.> proposed multi-frame-based motion estimation and motion compensation to reduce the temporal redundancy efficiently. The deformable convolution <cit.> was applied for motion estimation, compression, and compensation in the feature domain<cit.>. In <cit.>, the single-resolution motion compression was extended to resolution-adaptive motion compression in both the frame level and block level. Hu et al. <cit.> proposed coarse-to-fine motion compensation to reduce the residual energy and hyperprior-guided adaptive motion and residual compression to realize the block partition and residual skip. Shi et al. <cit.> has improved the compression performance by introducing the conditional-I frame, pixel-to-feature motion prediction, and probability-base entropy skipping method. Following the motion-compensated prediction and conditional coding framework, Sheng et al. <cit.> further proposed the multi-scale temporal context mining to better utilize the temporal correlation. The following work <cit.> designed a parallel-friendly entropy model which explores both temporal and spatial dependencies. Li et al. <cit.> further increased the context diversity in both temporal and spatial dimensions by introducing the group-based offset diversity and quadtree-based partition. Most existing deep video compression schemes often adopt pre-trained optical flow estimation networks <cit.> for motion estimation. However, the optical flow estimation networks are trained on synthetic data <cit.> to only perform inter-frame prediction as accurately as possible without considering the bits cost on the estimated optical flows. In comparison, in our offline enhancement stage, the pre-trained optical flow estimation network is fine-tuned by the MV of VTM, which is extracted from real-world videos and achieves a better RD trade-off than the optical flows. Research on the optimization strategy. Lu et al. <cit.> applied a new training objective with multiple time steps to prevent the error accumulation and adopted an online encoder updating scheme to realize the content adaptive. The online encoder updating scheme needs to update the parameters of the encoder while keeping the parameters of the decoder fixed in the inference stage. BAO <cit.> is a pixel-level implicit bit allocation method for deep video compression by using iterative optimization. Xu et al. <cit.> proved the BAO's sub-optimality and recursively applied back-propagating on non-factorized latent through gradient ascent-based on the corrected optimal bit allocation algorithm. In contrast, as video compression relies heavily on estimating and using the motion information to exploit the temporal redundancy between video frames, in our online enhancement stage, we only optimize the latent features of the optical flows with a gradient descent-based algorithm. §.§ Inter-frame Prediction in Traditional Video Compression Traditional video compression have been developed for several decades, and many video compression standards have been proposed, such as H.264/AVC <cit.>, H.265 /HEVC <cit.>, and H.266/VVC <cit.>. These video compression schemes follow a similar hybrid video coding framework, where inter-frame prediction techniques play a crucial role among the techniques. In the latest coding standard (H.266/VVC <cit.>), many advanced inter-frame prediction techniques have been proposed to attain high inter-frame coding efficiency. To estimate the accurate motion, various motion situations (translation, rotation motion model, etc.) 
corresponding to different inter-frame prediction modes (AMVP <cit.>, Affine <cit.>, etc.) are executed to search for the optimal MV for each coding region via Rate-Distortion-Optimization (RDO) <cit.>. For each coding sequence, the optimal mode is searched online from multiple inter-frame prediction modes. With the increase in the number of inter-frame prediction modes and the ever-rising searching cost, traditional video compression frameworks have achieved great compression performance. Considering the latest traditional video codec VTM <cit.> searches each MV for the best RD performance for each coding sequence, the MV can achieve a better RD trade-off than the optical flows. Thus, in this paper, we enhance the optical flows in two stages according to the inter-frame prediction techniques in the traditional video compression scheme. In the offline stage, we propose an offline enhancement on the optical flows by fine-tuning the trained optical flow estimation network Spynet <cit.> with the MV searched by VTM. In the online stage, inspired by the VTM searching-based strategy in motion estimation, we further optimize the latent features of the optical flows with a gradient descent-based algorithm while the parameters of the whole network are fixed. Our strategy is to search the optimal latent features of the optical flows for the best RD performance by varying only the latent values themselves. § PROPOSED METHOD §.§ Overview In this paper, our proposed offline and online enhancement is integrated into the baseline scheme DCVC<cit.> to demonstrate the effectiveness. The encoding procedure of our scheme, as illustrated in Fig. <ref>(a), can be divided into three parts: motion estimation, motion compression, and contextual compression. Motion Estimation. The input frame x_t and the reference frame x̂_t-1 are fed into our proposed offline enhanced optical flow estimation network to estimate the optical flows, which are considered as pixel-wise MV v_t. Following DCVC, the network is based on Spynet<cit.>, but we fine-tune it with the MV of VTM. Details are presented in Section <ref>. Motion Compression. The estimated MV v_t is compressed by an autoencoder-based MV encoder <cit.>. The latent features of the optical flows, MV features y_t and MV hyperprior z_t, are online enhanced by updating with a gradient descent-based algorithm in the inference stage. More information is provided in Section <ref>. Contextual Compression. Following DCVC, the input frame x_t is compressed conditioned on the context ẍ_t, which is extracted by the context extractor using the reference frame x̂_t-1 and the decoded MV v̂_t as input. §.§ Offline Enhancement To alleviate the domain gap between the synthetic data and the real-world videos, and provide a more appropriate initial point for the end-to-end optimization in deep video compression networks, we propose the offline enhancement on the optical flows. Different from DCVC, we fine-tune the pre-trained Spynet with the MV searched by VTM for the best RD performance on real-world videos, which has a better RD trade-off than the optical flows. Preliminaries. To provide the optical flow estimation network with accurate and learnable labels, we extract the block-level MV from each frame by VTM under certain configuration. To match the low-delay mode of DCVC, the reference list of VTM is set to only include the previous frame of the current frame. 
Besides, for acquiring finer MV on the encoder side, we set the quantization parameter (QP) to 22 and turn off the decoder-side MV refinement technique (PROF <cit.>). As coding blocks predicted in intra mode are not appropriate for training the optical flow estimation network, we turn off the intra-prediction mode and the intra-related inter technique (combined inter-intra prediction, CIIP <cit.>) in VTM when extracting the MV. The extracted block-level MV is at quarter resolution, so we use nearest interpolation to obtain the full-resolution block-level MV v̄_t and scale the upsampled MV by a factor of 16. Specifically, as shown in Fig. <ref>(b), we fine-tune the pre-trained Spynet under the guidance of the extracted MV v̄_t, which is searched by VTM for the best RD performance on real-world videos. To better match the warp operation in video compression, our training objective for Spynet is to minimize both the End Point Error (EPE) loss and the Mean Squared Error (MSE) loss between the input frame and the corresponding warp frame. Let v_t denote the optical flow estimated by Spynet and x̌_t denote the warp frame, x̌_t = w(x_t-1, v_t), where w(·) denotes the warp operation. Therefore, our training objective is to minimize a weighted sum of the EPE and MSE losses, L_ME = 1/mn ∑_i,j √((v_i - v̄_i)^2 + (v_j - v̄_j)^2) + λ_ME · d(x_t, x̌_t). Here, m × n in Eq. (<ref>) is the image dimension, and the subscripts i and j indicate the horizontal and vertical components of the flow vector v_t and the motion vector v̄_t, respectively. d(x_t, x̌_t) represents the MSE metric for measuring the difference between the input frame x_t and the warp frame x̌_t. λ_ME is the Lagrange multiplier that controls the trade-off between the EPE and MSE losses. In Fig. <ref>, we provide the visual results of the optical flows generated by Spynet and our enhanced Spynet, the MV of VTM, and their corresponding warp frames. The MV of VTM is extracted in the luminance (Y) channel and interpolated on the chrominance (U, V) components to obtain a complete motion vector field, so only the visual results in the luminance channel are presented here. Under the guidance of the block-level MV searched by VTM, the enhanced optical flow can better recover the sharp motion boundaries and the regions with rich details visually. Compared with Spynet, the warp frames of the enhanced Spynet have an average improvement of 1.15dB (33.23dB vs. 32.08dB) on the JVET CTC test sequences <cit.>. The improvement in inter-frame prediction accuracy indicates that the offline enhancement can alleviate the domain gap between the synthetic data and the real-world videos to some degree. §.§ End-to-End Training After fine-tuning the pre-trained Spynet, we deploy it into DCVC and then train the whole video compression network in an end-to-end manner, the same as DCVC. Thus, the training loss is as follows: L = λ· D + R = λ d(x_t, x̂_t) + H(ŷ_t) + H(ẑ_t) + H(ĝ_t), where ŷ_t=Q(y_t), ẑ_t=Q(z_t), and ĝ_t=Q(g_t). Q(·) represents the quantization operator. The term R in Eq. (<ref>) denotes the number of bits used to encode the frame, and R is computed by adding up the number of bits H(ŷ_t) and H(ẑ_t) for encoding the latent features of the motion information and H(ĝ_t) for encoding the latent features of the context. d(x_t, x̂_t) denotes the distortion between the input frame x_t and the reconstruction frame x̂_t. λ is a hyperparameter that determines the trade-off between the number of bits R and the distortion D.
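To make the two training objectives above concrete, the following is a minimal PyTorch-style sketch, assuming dense (N, 2, H, W) flow tensors and a bilinear backward warp; the function names, the warp implementation, and the rate arguments are illustrative assumptions rather than the authors' actual code. Here mv_vtm plays the role of v̄_t, flow_est the role of the Spynet output v_t, and the default λ_ME follows the value reported in the experimental section.

import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    # Warp the previous frame towards the current one using a dense flow (N,2,H,W).
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=frame.device),
                            torch.arange(w, device=frame.device), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float()            # (2,H,W) in (x,y) order
    coords = base.unsqueeze(0) + flow                       # absolute sampling positions
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0                 # normalize to [-1, 1]
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                    # (N,H,W,2)
    return F.grid_sample(frame, grid, align_corners=True)

def finetune_loss(flow_est, mv_vtm, x_t, x_prev, lambda_me=100.0):
    # L_ME: EPE between the estimated flow and the VTM MV, plus MSE of the warped frame.
    epe = torch.sqrt(((flow_est - mv_vtm) ** 2).sum(dim=1) + 1e-12).mean()
    mse = F.mse_loss(x_t, backward_warp(x_prev, flow_est))
    return epe + lambda_me * mse

def rd_loss(x_t, x_hat, bits_y, bits_z, bits_g, lam):
    # End-to-end objective: lambda * distortion + total estimated rate.
    return lam * F.mse_loss(x_t, x_hat) + bits_y + bits_z + bits_g

In this reading, d(·,·) is the MSE in both objectives, and the rate arguments correspond to the entropy estimates H(ŷ_t), H(ẑ_t), and H(ĝ_t).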
The MV of VTM is searched for the best trade-off between the bits cost and the MSE loss, so we only optimize our scheme with D representing the MSE. §.§ Online Enhancement To further enhance the adaptivity of the optical flows and achieve a better compression performance, we propose the online enhancement on the optical flows. In the inference stage, we online optimize the latent features of the optical flows with a gradient descent-based algorithm minimizing the RD loss for the videos to be compressed. Single-frame online optimization. As shown in Fig. <ref>(c), for the input frame x_t and reference frame x̂_t-1 in a group of pictures (GOP), we online update the latent features of the optical flows (MV feature y_t and the MV hyperprior z_t) by a gradient descent-based algorithm in the inference stage. After N iterations, we obtain the latent features of the optical flows ŷ^op_t and ẑ^op_t which are optimal for the consecutive two frames x_t and x̂_t-1. Then, the latent features of context ĝ_t and reconstruction frame x̂_t are generated by the ŷ^op_t and ẑ^op_t, and we start to online update the latent features of the optical flows generated by the next input frame x_t+1 and the reference frame x̂_t in a GOP. The pipeline of the optical flow latent updating algorithm is shown in Algorithm <ref>. Firstly, the initial latent features of the optical flows y_t^0 and z_t^0 are generated by the input frame x_t and the reference frame x̂_t-1. Secondly, the initial RD cost L̂_RD_t^op can be computed by feeding the initial latent features of the optical flows to the video decoder without gradient 𝐃𝐞𝐜_𝐈. Then, the latent features of the optical flows are online updated iteratively to minimize the RD loss of each iteration L̃_RD_t^i: L̃_RD_t^i = λ· d(x_t, x̃_t^i) + H(ỹ_t^i) + H(z̃_t^i) + H(g̃_t^i), where y_t^i denotes the MV feature of the current frame after i steps of update, and z_t^i denotes the MV hyperprior of the current frame after i steps of update. Following the work <cit.>, we use adding uniform noise to approximate the rounding during training, ỹ_t^i = y_t^i + u, z̃_t^i = z_t^i + u, and u∼𝒰(-0.5,0.5). The latent features of context g̃_t^i and reconstruction frame x̃_t^i are generated by feeding the latent features of the optical flows ỹ_t^i and z̃_t^i to the video decoder with gradient 𝐃𝐞𝐜_𝐓 during the online optimization. The only difference between 𝐃𝐞𝐜_𝐓 and 𝐃𝐞𝐜_𝐈 lies in the quantization. To allow online optimization via gradient descent, the quantization in 𝐃𝐞𝐜_𝐓 is replaced by adding uniform noise, while the quantization in 𝐃𝐞𝐜_𝐈 is using rounding operation directly. During the updating iterations, the latent features of the optical flows are updated by minimizing the RD loss of each iteration L̃_RD_t^i, which is computed by sending latent features of the optical flows to 𝐃𝐞𝐜_𝐓. Then, the updated latent features of the optical flows are sent to 𝐃𝐞𝐜_𝐈 to compute the RD cost of each iteration L̂_RD_t^i, and we only save the optimal latent features of the optical flows ŷ_t^op and ẑ_t^op which lead to the minimal RD cost L̂_RD_t^op. Multi-frame online optimization. Considering the error propagation in deep video compression frameworks <cit.>, we further extend the single-frame online optimization algorithm to a multi-frame online optimization algorithm. 
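Before detailing the multi-frame extension, the single-frame refinement loop just described can be summarized by the following sketch. It assumes a DCVC-like model exposing an MV encoder and a differentiable decoding path that returns the reconstruction together with an estimated rate; these interfaces (mv_encoder, decode_with_latents) are hypothetical placeholders, not DCVC's actual API. Additive uniform noise stands in for quantization when computing gradients (as in 𝐃𝐞𝐜_𝐓), and rounding is used when measuring the actual RD cost (as in 𝐃𝐞𝐜_𝐈); the default step count and learning rate follow the values reported in the experimental section.

import torch

def refine_mv_latents(model, x_t, x_ref, lam, steps=1500, lr=5e-3):
    with torch.no_grad():
        y0, z0 = model.mv_encoder(x_t, x_ref)               # initial MV latents
    y = y0.clone().requires_grad_(True)
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([y, z], lr=lr)
    best_cost, best_y, best_z = float("inf"), y0.round(), z0.round()
    for _ in range(steps):
        # Dec_T: additive uniform noise as a differentiable quantization proxy
        y_noisy = y + (torch.rand_like(y) - 0.5)
        z_noisy = z + (torch.rand_like(z) - 0.5)
        x_hat, bits = model.decode_with_latents(y_noisy, z_noisy, x_ref)
        loss = lam * torch.mean((x_t - x_hat) ** 2) + bits   # RD loss of this iteration
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Dec_I: evaluate the actual RD cost with rounded latents and keep the best
        with torch.no_grad():
            x_hat_r, bits_r = model.decode_with_latents(y.round(), z.round(), x_ref)
            cost = lam * torch.mean((x_t - x_hat_r) ** 2) + bits_r
            if float(cost) < best_cost:
                best_cost, best_y, best_z = float(cost), y.round().clone(), z.round().clone()
    return best_y, best_z

Only the latents are optimized; the network parameters stay fixed, so the decoder and the decoding time are unchanged.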
We design a sliding-window-based online optimization algorithm to update the latent features of the optical flows by minimizing the multi-frame RD loss of each iteration L̃_mulRD_t^i for all frames inside a window: L̃_mulRD_t^i = ∑_j=t^W α_j[λ d(x_j, x̃_j^i) + H(ỹ_j^i) + H(z̃_j^i) + H(g̃_j^i)], where window size W denotes the number of frames inside a window and α_j is a hyperparameter that determines the weight of RD loss for different frames. Specifically, the overview of three-frame online optimization is shown in Fig. <ref>. The consecutive three frames in a GOP x̂_t-1, x_t, and x_t+1 are sent into the sliding window to update the latent features of the optical flows y_t and z_t iteratively minimizing multi-frame RD loss of each iteration L̃_mulRD_t^i. After N iterations, we obtain the latent features of the optical flows ŷ^op_t and ẑ^op_t which are optimal for the consecutive three frames x̂_t-1, x_t, and x_t+1, leading to the minimal multi-frame RD cost L̂^op_mulRD_t. The reconstruction frame x̂_t is generated by the updated latent features ŷ^op_t and ẑ^op_t, then the next consecutive three frames x̂_t, x_t+1, and x_t+2 will be sent to the sliding window to update the next latent features of the optical flows. When the sliding window includes the last frame of the GOP, the window size W will decrease by 1 until it equals to 2. The study in section <ref> shows the performance improvement on multi-frame online optimization with the increase of window size W. § EXPERIMENTAL RESULTS §.§ Experimental Setup Training Data. We use BVI-DVC <cit.> dataset for fine-tuning Spynet <cit.>. The BVI-DVC dataset contains 800 sequences at various spatial resolutions from 270p to 2160p. During the fine-tuning procedure, videos are cropped into non-overlapping 256 × 256 patches, and the motion vectors are extracted by VTM-10.0[<https://vcgit.hhi.fraunhofer.de/jvet/VVCSoftware_VTM/-/tree/VTM-10.0>] under the low-delay mode. The commonly-used Vimeo-90k <cit.> dataset is used for training DCVC in an end-to-end manner. During the training, all the videos are randomly cropped into 256 × 256 patches. Testing Data. We use the JVET CTC test sequences <cit.> for evaluating the fine-tuning of Spynet. UVG <cit.> and HEVC <cit.> datasets are used for testing our scheme. The UVG dataset has 7 1080p sequences and the HEVC dataset contains 16 sequences including Class B, C, D, and E. In addition, we follow <cit.>, and also evaluate on the HEVC RGB dataset <cit.>. Testing Conditions. We test 96 frames for each video, and the intra period is set to 12 for each dataset. Besides, the same as DCVC, we use Cheng2020Anchor <cit.> implemented by CompressAI <cit.> for intra-frame coding. Implementation Details. Our scheme includes three training stages, which consist of the fine-tuning of Spynet, offline training of DCVC, and online optimization of DCVC with the enhanced Spynet. In the first stage, we set λ_ME to 100, and fine-tune the Spynet using the extracted MV for 1,000,000 iterations. In the second stage, we deploy the enhanced Spynet into DCVC and train the whole video compression network for 5,000,000 iterations with different λ values (256, 512, 1024, 2048) until converge. The anchor DCVC is also trained for the same steps. Finally, we set the updating times N in Algorithm <ref> to 1500 according to the ablation study in <ref>, then online update the latent features of the optical flows for 1500 iterations. 
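The learning-rate schedule and batch sizes follow in the next paragraph; as a compact, purely illustrative summary of the three stages just listed, the configuration could be organized as below (the numerical values are taken from this section, while the structure itself is an assumption, not the authors' actual configuration format).

TRAINING_STAGES = {
    "offline_flow_finetune": {            # Spynet fine-tuning with VTM MV labels
        "loss": "EPE + lambda_ME * MSE(warp)",
        "lambda_ME": 100,
        "iterations": 1_000_000,
    },
    "end_to_end": {                       # whole DCVC network, same procedure as the baseline
        "loss": "lambda * MSE + rate",
        "lambda": [256, 512, 1024, 2048],
        "iterations": 5_000_000,
    },
    "online_latent_refinement": {         # inference-time update of the MV latents only
        "updating_times_N": 1500,
        "optimized_variables": ["y_t", "z_t"],
    },
}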
The initial learning rate for the first two steps is 1e-4, then decrease to 5e-5 at the 800,000th iteration and 4,000,000th iteration respectively. The initial learning rate for online optimization is 5e-3, which is decreased by 50% at the 1200th iteration. The Adam optimizer <cit.> is used, and the batch size is set to 16 for the first training stage and 4 for the second training stage. We train Spynet on a single NVIDIA 2080TI GPU for about 5 days and DCVC on four NVIDIA 2080TI GPUs for about 7 days. Besides, the online optimization experiments are conducted on NVIDIA 3090 GPU. §.§ Comparisons with Baseline Method Fig. <ref> shows RD curves on HEVC Class B, Class C, Class D, Class E, Class RGB, and UVG datasets. Our baseline scheme is DCVC, and it's obvious that DCVC with the offline and online enhancement on the optical flows can outperform DCVC in all rate points. Besides, both the offline and online enhancement on the optical flows don't change the network structure of DCVC and only optimize the encoder side of DCVC, leading to no increase in the model size or computational complexity on the decoder side. The proposed offline and online enhancement together achieves an average of 12.8% bitrate saving on all testing datasets over DCVC. The offline enhancement can achieve an average of 3.9% bitrate saving on all testing datasets over DCVC, which verifies that the offline enhancement can provide a more appropriate initial point for the end-to-end optimization in the deep video compression network. In Fig <ref>, we present visual results of the reconstruction frames of DCVC and our scheme across different sequences. With the offline and online enhancement on the optical flows, our scheme can achieve higher quality reconstruction, retaining more details in the boundaries of the motion and the regions with rich texture, while using fewer bits than DCVC. §.§ Ablation Study Effectiveness of Offline and Online Enhancement. To verify the effectiveness of the offline and online enhancement on the optical flows, we compare the compression performance of the baseline scheme (DCVC) with or without the enhancement. We report the BD-rate <cit.> results in Table <ref>. For online enhancement, the updating times are set to 1500. From the comparison results, we find that the offline enhancement on the optical flows brings 3.9% bitrate saving and the online enhancement brings 10.6% bitrate saving. With both offline and online enhancement, 12.8% bitrate saving is achieved. The experimental results indicate that our offline enhancement on the optical flows can also provide a more appropriate initial point for the online optimization. In Fig. <ref> (a) - (f), we provide the visual results of the decoded optical flows and their corresponding prediction frames. It is observed that with our offline enhancement on the optical flows, accurate motion-compensated results can be generated in the homogeneous region, resulting in a 0.57dB improvement with fewer bits cost over DCVC. The online enhancement on the optical flows further improves the decoded optical flow in areas close to motion boundaries, which has further achieved 0.17dB improvement with even fewer bits cost. Compared with the raw frame shown in Fig. <ref> (g), the prediction frames can recover increasingly more details in the raw frame by adopting the offline and online enhancement on the optical flows. Influence on updating times of online enhancement. 
To study the influence of the total updating times in online enhancement, we change the updating times from 100 to 2500. For simplification, we only use the HEVC Class C and Class D datasets to explore a reasonable number of updating times, considering the trade-off between compression performance and encoding time. The anchor is DCVC with the offline enhancement on the optical flows (DCVC + Offline). Table <ref> reports the BD-rate <cit.>, which indicates that the RD performance improves as the updating times N increases. To balance the trade-off between compression performance and encoding time complexity, in this paper, we set the updating times N to 1500 for online enhancement. Besides, the decoding time in Table <ref> demonstrates that our proposed method does not increase the computational complexity on the decoder side. §.§ Multi-frame Online Optimization In this paper, we adopt the single-frame online optimization in the inference stage to improve the compression performance and achieve content-adaptive encoding. Besides, we also provide the compression results of the multi-frame online optimization, which updates the latent features of the optical flows by minimizing the multi-frame RD loss. We wish to explore the potential of multi-frame online optimization on the motion information with a limited number of frames in a GOP. For simplification, we only conduct experiments on the HEVC Class C and Class D datasets. We set DCVC with offline and online enhancement on the optical flows (DCVC + Offline + Online) as the anchor, which adopts the single-frame online optimization. The single-frame online optimization is the same as setting the window size W in Eq. (<ref>) to 2. The hyperparameters α_0, α_1, α_2, and α_3 in Eq. (<ref>) are set to 1, 0.5, 0.2, and 0.1, respectively. Table <ref> reports the BD-rate <cit.> with the updating times N set to 2000. We compare the compression performance, encoding time, and decoding time of DCVC with multi-frame online enhancement on the optical flows, with the window size W set from 2 to 5, on the HEVC Class C and Class D datasets. Table <ref> shows that increasing the window size does not improve the compression ratio greatly, while the encoding time increases considerably when the window size exceeds 2. In this paper, we therefore adopt the single-frame online optimization. § CONCLUSION In this paper, we have proposed an offline and online enhancement on the optical flows to better estimate and utilize the motion information in the deep video compression network. Specifically, in the offline enhancement, we fine-tune the optical flow estimation network with the MV of VTM, which is searched for the best RD performance on real-world videos. In the online enhancement, we online update the latent features of the optical flows under the RD metric for different coding sequences in the inference stage. Our scheme can effectively improve the compression performance without increasing the model size or computational complexity on the decoder side. The experimental results show that our scheme outperforms the baseline scheme DCVC by an average of 12.8% bitrate saving (with PSNR as the quality metric) under the same configuration. It is worth noting that our scheme is a plug-and-play mechanism that can be easily integrated into any deep video compression framework with the same motion estimation and MV encoder. We believe that our scheme could help researchers further explore what constitutes better motion information and how to compress it effectively in deep video compression frameworks.
http://arxiv.org/abs/2307.07621v1
20230714204503
The fundamental solution of the fractional p-laplacian
[ "Leandro M. Del Pezzo", "Alexander Quaas" ]
math.AP
[ "math.AP", "Primary 35A08: 35R11: 35B53, Secondary 35J70: 35J75" ]
In this article, we find the fundamental solution of the fractional p-laplacian and use it to prove two different Liouville-type theorems: a classical non-existence Liouville-type theorem for fractional p-superharmonic functions, and a Liouville-type result for an Emden-Fowler type equation involving the fractional p-laplacian. Fundamental solution; Liouville-type theorems; fractional p-laplacian; non-local operator. Mathematics Subject Classification (2010). Primary 35A08, 35R11, 35B53; Secondary 35J70, 35J75. § INTRODUCTION In the last decades, Liouville-type non-existence theorems have been studied intensively because they have emerged as a crucial tool for many applications in PDEs. They mostly appear in establishing qualitative properties of solutions. The best known example is the Gidas-Spruck a priori bound, and nowadays Liouville-type theorems are also used in regularity issues. Observe that the non-existence results are used, in most cases, after a rescaling and compactness argument. See, for instance, <cit.> and references therein. Our purpose here is to establish Liouville-type theorems for equations that involve the fractional p-laplacian, defined by (-Δ_p)^s u(x) := 2C(s,p,N) lim_ε→0^+ ∫_ℝ^N∖ B_ε(x) |u(x)-u(y)|^p-2(u(x)-u(y))/|x-y|^N+sp dy, x∈ℝ^N, where p∈(1,∞), s∈(0,1), and C(s,p,N) is a normalization factor. By stability results as s→1, the fractional p-laplacian is a nonlocal version of the p-laplacian and an extension of the fractional laplacian (p=2). We omit the constant C(s,p,N) throughout this article for simplicity, since we do not analyze our results as s→1. Nonlocal operators have gained relevance because they arise in several applications in many fields, for instance, game theory, mathematical physics, finance, image processing, Lévy processes in probability, and some optimization problems; see <cit.> and the references therein. From a mathematical point of view, the fractional p-laplacian is particularly attractive since two phenomena are present in it: the nonlinearity of the operator and its nonlocal character. See for instance <cit.> and the references therein. §.§ Main results Our first result gives the general formula for a fundamental solution of the fractional p-laplacian. Let N≥2, 0<s<1, and 1<p<∞. (a) If ps≠N then v_β(x)=|x|^β, β∈(-N/(p-1), ps/(p-1)), is a weak solution of (-Δ_p)^s u(x) = 𝒞(β)|x|^β(p-1)-sp in ℝ^N∖{0}, where 𝒞(β) := 4πα_N ∫_0^1 |1-ρ^β|^p-2(1-ρ^β)[ρ^N-1 - ρ^ps-β(p-1)-1] G(ρ^2) dρ, with α_N := π^(N-3)/2/Γ((N-1)/2) and G(t,N,ps) := B((N-1)/2, 1/2) F((N+ps)/2, (ps+2)/2; N/2; t). Here Γ, B and F denote the gamma, the beta, and the Gauss (2,1)-hypergeometric functions, respectively. Additionally, we have 𝒞(β)=0 if β=0 or β=(ps-N)/(p-1); 𝒞(β)>0 if min{(ps-N)/(p-1), 0} < β < max{(ps-N)/(p-1), 0}; and 𝒞(β)<0 otherwise. (b) If ps=N then v(x)=log(|x|) is a weak solution of (-Δ_p)^s u(x) = 0 in ℝ^N∖{0}. It should be noted that point (a) of the above theorem is mentioned in Example 1.5 in <cit.>, but only in the case 2<p≤N+1 and 0<s<(p-1)/p, and without detailed proof. Here we give a complete proof in all cases, which requires delicate estimates together with some nontrivial explicit computations. By <cit.>, both solutions are viscosity solutions.
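As a quick illustration of the sign of 𝒞(β) stated in the theorem, the following Python sketch numerically evaluates the defining integral, dropping the positive prefactor 4πα_N since it does not affect the sign. It is only a rough check under illustrative parameter choices (not values used in the paper), and the quadrature near ρ=1, where the integrand has an integrable singularity in general, is handled naively.

from scipy.integrate import quad
from scipy.special import beta as Beta, hyp2f1

def C_sign(N, s, p, b):
    # Integral defining C(beta) for v_b(x)=|x|^b, without the positive factor 4*pi*alpha_N.
    def G(t):  # G(t, N, ps) = B((N-1)/2, 1/2) * 2F1((N+ps)/2, (ps+2)/2; N/2; t)
        return Beta((N - 1) / 2, 0.5) * hyp2f1((N + p * s) / 2, (p * s + 2) / 2, N / 2, t)
    def integrand(r):
        return (abs(1.0 - r**b) ** (p - 2) * (1.0 - r**b)
                * (r ** (N - 1) - r ** (p * s - b * (p - 1) - 1)) * G(r**2))
    value, _ = quad(integrand, 0.0, 1.0, limit=400)
    return value

# Example with N=2, s=1/2, p=3, so (ps-N)/(p-1) = -1/4:
print(C_sign(2, 0.5, 3, -0.1))  # beta in (-1/4, 0): expected positive
print(C_sign(2, 0.5, 3, 0.3))   # beta outside that interval: expected negative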
Furthermore, since our solutions are C^∞(ℝ^N∖{0}) viscosity solutions that have non-zero gradients in ℝ^N∖{0}, they are also classical solutions. For the definitions of viscosity and weak solutions, see Section <ref>. The fundamental solution in the literature corresponds to the case β=(ps-N)/(p-1), and in the standard case (p=2 and s=1) we have β=2-N. Here, we abuse the name fundamental solution since we include all values of β in the corresponding admissible range. In a parabolic context, the fundamental solution for the fractional p-laplacian in self-similar variables is found in <cit.>. As mentioned before, our main application of the fundamental solution of the fractional p-laplacian is to establish two types of Liouville-type theorems, depending on the order between N and ps. We start with the case N≤ ps, which corresponds to the classical type of Liouville result. Let N≥2, 0<s<1, and 1<p<∞. If N≤ ps and u is a non-negative lower semi-continuous weak solution of (-Δ_p)^s u ≥ 0 in ℝ^N, then u is constant. Again, as in the previous remark, by <cit.> we know that a non-negative lower semi-continuous weak solution of (<ref>) is also a viscosity solution. Then the previous theorem also holds if we consider viscosity solutions instead of weak ones. See also <cit.>. Our second Liouville-type theorem is for an equation involving the fractional p-laplacian operator (with N>ps) and a zero-order power nonlinearity, which is known in the literature as a Lane-Emden type equation. Let N≥2, 0<s<1, 1<p<∞, and N>ps. * If 0<q<N(p-1)/(N-ps) and u∈ C(ℝ^N) is a non-negative viscosity solution of (-Δ_p)^s u - u^q ≥ 0 in ℝ^N then u≡0. * If q>N(p-1)/(N-ps) then there is a positive solution of (<ref>). For the proofs of our Liouville-type theorems we proceed similarly to <cit.>, but we need some extra delicate estimates together with some new ideas due to the strongly nonlinear character of the operator. To end this introduction, we want to mention that for the p-laplacian (case s=1), the last two theorems were proved in <cit.>. Furthermore, in <cit.>, it is shown that the first point of Theorem <ref> also holds when q=N(p-1)/(N-ps), sometimes known as the critical case for super-solutions. Unfortunately, we have not yet been able to prove this result in our framework, so we leave it as an open problem, stated below. The difficulty in our approach is to compute a log perturbation of the fundamental solution, which is related to a product rule for the fractional p-laplacian. Open problem: Let N≥2, 0<s<1, 1<p<∞, and N>ps. If q=N(p-1)/(N-ps) and u∈ C(ℝ^N) is a non-negative viscosity solution of (<ref>) then u≡0. §.§ The paper is organized as follows. In Section <ref>, we give the definitions of weak and viscosity solutions. In Section <ref>, we prove Theorem <ref>. Afterward, in Section <ref>, we prove some Hadamard properties that will be fundamental to proving our Liouville results. Finally, in Sections <ref> and <ref>, we prove Theorems <ref> and <ref>. § PRELIMINARIES Throughout this paper, Ω is an open set of ℝ^N, s∈(0,1), and p∈(1,∞). The fractional Sobolev space W^s,p(Ω) is defined to be the set of functions u∈ L^p(Ω) such that |u|_W^s,p(Ω)^p := ∫_Ω^2 |u(x)-u(y)|^p/|x-y|^N+sp dxdy < ∞, where Ω^2 denotes Ω×Ω. The fractional Sobolev space admits the norm ‖u‖_W^s,p(Ω) := (‖u‖_L^p(Ω)^p + |u|_W^s,p(Ω)^p)^1/p, where ‖u‖_L^p(Ω)^p := ∫_Ω |u(x)|^p dx. We also denote L_s^p-1(ℝ^N) := { u∈ L^p-1_loc(ℝ^N) : ∫_ℝ^N |u|^p-1/(1+|x|)^N+sp dx < ∞ }. Let us note that, in Theorem <ref>, we chose β∈(-N/(p-1), ps/(p-1)) so that v_β(x)=|x|^β ∈ L_s^p-1(ℝ^N).
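For completeness, here is the short computation behind this choice of β (an added verification note; passing to polar coordinates):

\int_{\mathbb{R}^N} \frac{|v_\beta(x)|^{p-1}}{(1+|x|)^{N+sp}}\,dx
  = |\mathbb{S}^{N-1}| \int_0^\infty \frac{r^{\beta(p-1)+N-1}}{(1+r)^{N+sp}}\,dr .

The integral is finite near r=0 if and only if β(p-1)+N-1>-1, that is β>-N/(p-1), and finite near r=∞ if and only if β(p-1)+N-1-(N+sp)<-1, that is β<sp/(p-1). Hence v_β∈ L_s^p-1(ℝ^N) precisely when β∈(-N/(p-1), ps/(p-1)).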
Let fΩ×ℝ→ℝ be a continuous function. A function u∈ L_s^p-1(ℝ^N) is a weak super-solution (sub-solution) of (-Δ_p)^su(x)=f(x,u) in Ω, if for any bounded open U⊆Ω we have that u∈ W^s,p_loc(U) and ∫_ℝ^2N|u(x)-u(y)|^p-2(u(x)-u(y))(φ(x)- φ(y))/|x-y|^N+spdxdy ≥(≤)∫_ℝ^Nf(x,u)φ(x)dx, for any non-negative function φ∈ C^∞_c(U). We say that u is a weak solution of (<ref>) if it is both a weak super-solution and sub-solution to the problem. Following <cit.>, we define our notion of viscosity super-solution of (<ref>). We start to introduce some notation. The set of critical points of a differential function u and the distance from the critical points are denoted by N_u{x∈Ω∇ u(x)=0}, and d_u(x)dist(x,N_u), respectively. Let D⊂Ω be an open set. We denote the class of C^2-functions whose gradient and Hessian are controlled by d_u as C^2_γ(D){ u∈ C^2(Ω)sup_x∈ D(min{d_u(x),1}^γ-1|∇ u(x)|+|D^2u(x)|d_u(x)^γ-2)<∞}. Observe that, if γ≥ 2 then u(x)=|x|^γ∈ C^2_γ. Let fΩ×ℝ→ℝ be a continuous function. We say that a function uℝ^N→[-∞,∞] is a viscosity super-solution (sub-solution) of (<ref>) if it satisfies the following four assumptions: (VS1) u<∞ (u>-∞) a.e. in ℝ^N and u>-∞ (u<∞) everywhere in Ω; (VS2) u is lower (upper) semi-continuous in Ω; (VS3) If ϕ∈ C^2(B_r(x_0)) for some B_r(x_0)⊂Ω such that ϕ(x_0)=u(x_0) and ϕ≤ u (ϕ≥ u) in B_r(x_0), and one of the following holds (a)p>22-s or ∇ϕ(x_0)≠0; (b)1<p≤22-s; ∇ϕ(x_0)=0 such that x_0 is an isolate critical point of ϕ, and ϕ∈ C^2_γ(B_r(x_0)) for some γ>spp-1; then (-Δ_p)^sϕ_r(x_0)≥ (≤) f(x_0,u), where ϕ_r(x)= ϕ if x∈ B_r(x_0), u(x) otherwise; (VS4) u_-max{-u,0} (u_+max{u,0}) belongs to ∈ L_s^p-1(ℝ^N). Finally, u is a viscosity solution if it is both a viscosity super-solution and sub-solutions. § FUNDAMENTAL SOLUTION In this section, we will prove the main result of this article (Theorem <ref>). To simplify the presentation, we split the proof into two cases. §.§ Case This subsection aims to prove the following result. Let N≥2,0<s<1, and 1<p<∞. If ps≠ N then v_β(x)=|x|^β β∈(-Np-1,psp-1), is a weak solution of (-Δ_p)^s u(x)= 𝒞(β)|x|^β(p-1)-sp in ℝ^N∖{0}, where 𝒞(β) is defined by (<ref>). We introduce a bit of notation to take advantage of the fact that v_β is a radial function. Given ε>0 and we define A_ε(x)={y∈ℝ^N ||x|-|y||<ε} and J_ε v_β(x) 2∫_ℝ^N∖ A_ε(x)|v_β(x)-v_β(y)|^p-2(v_β(x)-v_β(y))|x-y|^N+sp dy. Notice that J_ε v_β(x) is finite for all x∈ℝ^N since β∈(-Np-1,psp-1). Ideas of the proof of Theorem <ref>. We proceed somewhat as in the proof of <cit.>. First, we will show that J_ε v_β(x)=h_β,ε(x)|x|^β(p-1)-sp- g_β,ε(x) in ℝ^N∖{0}, where h_β,ε(x) πα_N∫_0^1-ε/|x||1-ρ^β|^p-2(1-ρ^β)[ρ^N-1-ρ^ps-β(p-1)-1] G(ρ^2,N,ps)dρ, g_β,ε(x) =4πα_N|x|^β(p-1)-sp∫_1-ε/|x|^|x|/|x|+ε|1-ρ^β|^p-2(1-ρ^β)ρ^sp-β(p-1)-1G(ρ^2,N,ps)dρ. Then we will show that h_β,ε(x)|x|^β(p-1)-sp→𝒞(β)|x|^β(p-1)-sp and g_β,ε(x)→ 0, strongly in L^1_loc(ℝ^N∖{0}) as ε→ 0^+. Finally, we will conclude our result. Throughout this proof, we assume that ps≠ N, β∈(-Np-1,psp-1), ε>0 and x∈ℝ^ℕ∖{0}, and we simply use the notations G(t) instead of G(t,N,ps). We split the proof into four steps. Step 1. The first part of the proof shows (<ref>). We begin by observing that J_ε v_β(x) = 2∫_ℝ^N∖ A_ε(x)|v_β(x)-v_β(y)|^p-2(v_β(x)-v_β(y))|x-y|^N+sp dy =2∫_ℝ^N∖ A_ε(x)||x|^β-|y|^β|^p-2(|x|^β-|y|^β)|x-y|^N+sp dy =2|x|^β(p-1)-sp∫_ℝ^N∖ A_ε(x)|1-(|y||x|)^β|^p-2(1-(|y||x|)^β)/|x|x|-y|x||^N+spdy|x|^N. 
Thus, by a simple change of variable and by rotation invariance, we have that J_ε v_β(x)=2|x|^β(p-1)-sp∫_ℝ^N∖ A_ε/|x|(e_1)|1-|y|^β|^p-2(1-|y|^β)|e_1-y|^N+sp dy, where e_1=(1,0,…,0)∈ℝ^N. We now take y=ρ z with ρ>0 and z∈𝕊^N-1{w∈ℝ^N-1 |w|=1}, we get J_ε v_β(x)=2|x|^β(p-1)-sp∫_|1-ρ|≥ε|x||1-ρ^β|^p-2(1-ρ^β) ∫_𝕊^N-1dℋ^N-1(z)|e_1-ρ z|^N+spρ^N-1dρ =2|x|^β(p-1)-sp∫_|1-ρ|≥ε|x||1-ρ^β|^p-2(1-ρ^β) ∫_𝕊^N-1dℋ^N-1(z)|1-2ρ e_1· z+ρ^2|^N+sp/2ρ^N-1dρ. Using <cit.> and <cit.>, we have that J_ε v_β(x)=4πα_N|x|^β(p-1)-sp∫_|1-ρ|≥ε|x||1-ρ^β|^p-2(1-ρ^β)ρ^N-1𝒦(ρ) dρ, where 𝒦(ρ)∫_0^πsin^N-2(θ)dθ|1-2ρcos(θ)+ρ^2|^N+sp/2 = G(ρ^2) if ρ<1, G(ρ^-2)/ρ^N+ps if ρ>1. Therefore, J_ε v_β(x)=4πα_N|x|^β(p-1)-sp{∫_1+ε/|x|^∞|1-ρ^β|^p-2(1-ρ^β)G(ρ^-2)/ρ^ps+1dρ. +.∫_0^1-ε/|x||1-ρ^β|^p-2(1-ρ^β)ρ^N-1G(ρ^2)dρ} =4πα_N|x|^β(p-1)-sp{∫_0^|x|/|x|+ε|1-ρ^-β|^p-2(1-ρ^-β)ρ^ps-1G(ρ^2)dρ. +.∫_0^1-ε/|x||1-ρ^β|^p-2(1-ρ^β)ρ^N-1G(ρ^2)dρ} =4πα_N|x|^β(p-1)-sp{∫_0^1-ε/|x||1-ρ^β|^p-2(1-ρ^β)[ρ^N-1-ρ^ps- β(p-1)-1]G(ρ^2)dρ. - . ∫_1-ε/|x|^|x|/|x|+ε|1-ρ^β|^p-2(1-ρ^β)ρ^ps-β(p-1)-1G(ρ^2)dρ}. So, J_ε v_β(x)=h_β,ε(x)|x|^β(p-1)-sp- g_β,ε(x) in ℝ^N∖{0}. Step 2. We now show that h_β,ε(x)|x|^β(p-1)-sp→𝒞(β)|x|^β(p-1)-sp strongly in L^1_loc(ℝ^N∖{0}), as ε→ 0^+. Given a bounded set U⊂ℝ^N∖{0} such that U⊂ℝ^N∖{0}, we want to show that 0=1/4πα_Nlim_ε→ 0^+∫_U |h_β,ε(x)|x|^β(p-1)-sp- 𝒞(β)|x|^β(p-1)-sp|dx =1/4πα_Nlim_ε→ 0^+∫_U|h_β,ε(x)- 𝒞(β)| |x|^β(p-1)-spdx =lim_ε→ 0^+∫_U |x|^β(p-1)-sp|∫_1-ε/|x|^1 |1-ρ^β|^p-2(1-ρ^β) (ρ^N-1-ρ^ps-β(p-1)-1)G(ρ^2)dρ| dx. Since U⊂ℝ^N∖{0} is bounded, we have that |x|^β(p-1)-sp∈ L^∞(U). Therefore, it is enough to prove that lim_ε→ 0^+∫_U∫_1-ε/|x|^1 |1-ρ^β|^p-1 |ρ^N-1-ρ^ps-β(p-1)-1|G(ρ^2)dρ dx=0. Let H(ρ) (1-ρ)^1 +ps G(ρ^2). By <cit.>, we have that lim_ρ→ 1^- H(ρ) exists. Then, |1-ρ^β|^p-1 |ρ^N-1-ρ^ps-β(p-1)-1|G(ρ^2)= |1-ρ^β|^p-1 |ρ^N-1-ρ^ps-β(p-1)-1||1-ρ|^1+ps H(ρ), belongs to L^1(0,1). Therefore, by the dominated convergence theorem, we get (<ref>). Step 3. Our next goal is to show that, g_β,ε(x)→ 0 strongly in L^1_loc(ℝ^N∖{0}). as ε→0^+. Again, let U⊂ℝ^N∖{0} be a bounded set such that U⊂ℝ^N∖{0}. In this case, we want to show that 0 =lim_ε→0^+∫_U|g_β,ε(x)|dx =4πα_Nlim_ε→0^+∫_U |x|^β(p-1)-sp|∫_1-ε/|x|^|x|/|x|+ε|1-ρ^β|^p-2(1-ρ^β)ρ^sp-β(p-1)-1G(ρ^2)dρ| dx. Since U⊂ℝ^N∖{0} is bounded, we have that |x|^β(p-1)-sp∈ L^∞(U), and |1-ρ^β|≤ C |1-ρ|, where C is a positive constant that depends on β,p, and dist(U,0). Then ∫_U|g_β,ε(x)|dx ≤ C∫_U ∫_1-ε/|x|^|x|/|x|+ε|1-ρ|^p-1 G(ρ^2)dρ dx= C∫_U ∫_1-ε/|x|^|x|/|x|+εH(ρ)|1-ρ|^p(s-1)-2dρ dx, where C is a positive constant that depends on |x|^β(p-1)-sp_L∞(U), β,p, and dist(U,0). Therefore, to show (<ref>), it is enough to show that lim_ε→0^+∫_U ∫_1-ε/|x|^|x|/|x|+εH(ρ)|1-ρ|^p(s-1)+2dρ dx=0. To prove this, we consider three cases. Case 1: p>11-s. By (<ref>), H(ρ)|1-ρ|^p(s-1)+2∈ L^1(0,1). Thus, by the dominated convergence theorem, we get (<ref>). Case 2: We now assume p<11-s. By <cit.>, we have that H is differentiable and lim_ρ→1^-H^'(ρ) exists. Notice that for any x∈ U we have ∫_1-ε/|x|^|x|/|x|+εH(ρ)|1-ρ|^p(s-1)+2dρ = 1p(s-1)+1{ H(|x||x|+ε)(ε|x|+ε)^p(s-1)+1 - H(1-ε|x|)(ε|x|)^p(s-1)+1 - ∫_1-ε/|x|^|x|/|x|+εH^'(ρ)|1-ρ|^p(s-1)+1dρ}. Taking δ=dist(U,0), and using (<ref>) and (<ref>), there are two positive constants C_1 and C_2 such for any x∈ U we have | H(|x||x|+ε)(ε|x|+ε)^p(s-1)+1. - . 
H(1-ε|x|)(ε|x|)^p(s-1)+1|= ≤ε^p(1-s) (|x|+ε)^p(s-1)+1|H(|x||x|+ε) -H(1-ε|x|)|ε +H(1-ε|x|) |(|x|+ε)^p(s-1)+1-|x|^p(s-1)+1|ε ≤ε^p(1-s){H^'_L∞(0,1)δ^p(1-s)+1 +p(1-s)δ^p(s-1)H_L∞(0,1)} ≤ C_1ε^p(1-s), and ∫_1-ε/|x|^|x|/|x|+εH^'(ρ)|1-ρ|^p(s-1)+1dρ≤ C_2ε^p(1-s). Then ∫_1-ε/|x|^|x|/|x|+εH(ρ)|1-ρ|^p(s-1)+2dρ≤ C_1ε^p(1-s)+C_2ε^p(1-s) ∀ x∈ U. This implies (<ref>). Case 3: Finally we consider the case p=11-s. In this case, for any x∈ U we have ∫_1-ε/|x|^|x|/|x|+εH(ρ)|1-ρ|^p(s-1)+2dρ = ∫_1-ε/|x|^|x|/|x|+εH(ρ)|1-ρ|dρ = H(|x||x|+ε) log(ε|x|+ε) - H(1-ε|x|) log(ε|x|) - ∫_1-ε/|x|^|x|/|x|+ε H^'(ρ)log(1-ρ)dρ ={ H(|x||x|+ε)-H(1-ε|x|) }log(ε|x|+ε) + H(1-ε|x|) {log(ε|x|+ε)- log(ε|x|)} - ∫_1-ε/|x|^|x|/|x|+ε H^'(ρ)log(1-ρ)dρ ={ H(|x||x|+ε)-H(1-ε|x|) }log(ε|x|+ε) + H(1-ε|x|)log(|x||x|+ε) -∫_1-ε/|x|^|x|/|x|+ε H^'(ρ)log(1-ρ)dρ. Again, taking δ=dist(U,0), and using (<ref>) and (<ref>), there are three positive constants C_1, C_2 and C_3 such for any x∈ U we have ∫_1-ε/|x|^|x|/|x|+εH(ρ)|1-ρ|^p(s-1)+2dρ≤ ≤ -H^'_L∞(0,1)δ^2ε^2 log(εδ+ε)+ H _L∞(0,1)εδ -H^'_L∞(0,1)∫_1-ε/δ^1log(1-ρ)dρ ≤ -H^'_L∞(0,1)δ^2ε^2 log(εδ+ε)+ H _L∞(0,1)εδ -H^'_L∞(0,1)ε/δ(log(ε)-log(δ)-1) ≤ -C_1ε^2log(εδ+ε) -C_2εlog(ε)+C_3ε, from which (<ref>) follows. Step 4. Finally, we will show that v_β is a weak solution of (<ref>). By step 1, we have that J_ε v_β(x)=h_β,ε(x)|x|^β(p-1)-sp- g_β,ε(x) in ℝ^N∖{0}. Then, given φ∈ C^∞_c(ℝ^N∖{0}) we have that ∫_ℝ^N J_ε v_β(x) φ(x) dx = ∫_ℝ^N(h_β,ε(x)|x|^β(p-1)-sp- g_β,ε(x))φ(x) dx, that is ∫_ℝ^2N (1-_A_ε(x)(y)) |v_β(x)-v_β(y)|^p-2(v_β(x)-v_β(y))|x-y|^N+spφ(x) dy dx =1/2∫_ℝ^N(h_β,ε(x)|x|^β(p-1)-sp- g_β,ε(x))φ(x) dx. Interchanging the roles of x and y ∫_ℝ^2N (1-_A_ε(y)(x)) |v_β(x)-v_β(y)|^p-2(v_β(y)-v_β(x))|x-y|^N+spφ(y) dx dy =1/2∫_ℝ^N(h_β,ε(x)|x|^β(p-1)-sp- g_β,ε(x))φ(x) dx Now, adding (<ref>) and (<ref>) and using that 1-_A_ε(y)(x)=1-_A_ε(x)(y), we get ∫_ℝ^N(h_β,ε(x)|x|^β(p-1)-sp- g_β,ε(x)))φ(x) dx= ∫_ℝ^2N (1-_A_ε(x)(y)) |v_β(x)-v_β(y)|^p-2 (v_β(x)-v_β(y))(φ(x)-φ(y))|x-y|^N+sp dy dx. By steps 2 and 3 ∫_ℝ^N(h_β,ε(x)|x|^β(p-1)-sp- g_β,ε(x)))φ(x) dx →𝒞(β) ∫_ℝ^N|x|^β(p-1)-spφ(x) dx. as ε→ 0^+. On the other hand, we have that |v_β(x)-v_β(y)|^p-2 (v_β(x)-v_β(y))(φ(x)-φ(y))|x-y|^N+sp∈ L^1(ℝ^N×ℝ^N), and _A_ε(x)(y)→ 0 a.e. in ℝ^2N as ε→ 0^+. Then lim_ε→ 0^+∫_ℝ^2N (1-_A_ε(x)(y)) |v_β(x)-v_β(y)|^p-2 (v_β(x)-v_β(y))(φ(x)-φ(y))|x-y|^N+sp dy dx =∫_ℝ^2N|v_β(x)-v_β(y)|^p-2 (v_β(x)-v_β(y))(φ(x)-φ(y))|x-y|^N+sp dy dx. Since φ is arbitrary, by (<ref>), (<ref>) and (<ref>), we conclude that v_β is a weak solution of (<ref>). §.§ Case To completethe study of the fundamental solution of the fractional p-laplacian, we prove the following result. Let N≥2,0<s<1, and 1<p<∞. If ps= N then v(x)=log(|x|), is a weak solution of (-Δ_p)^s u(x)= 0 in ℝ^N∖{0}. Ideas of the proof. Given ε>0, we define J_ε v(x) 2∫_ℝ^N∖ A_ε(x)|v(x)-v(y)|^p-2(v(x)-v(y))|x-y|^N+sp dy. First, we will show that J_ε v(x)→ 0 strongly in L^1_loc(Ω) as ε→ 0^+. Then, arguing as in step 4 of the proof of Theorem <ref>, we conclude that v is a weak solution of (<ref>). Throughout this proof, we assume that ps=N, ε>0 and x∈ℝ^ℕ∖{0}. Let us begin by observing that J_ε v(x) = 2∫_ℝ^N∖ A_ε(x)|v(x)-v(y)|^p-2(v(x)-v(y))|x-y|^N+sp dy =-2|x|^N∫_ℝ^N∖ A_ε(x)|log(|y|/|x|)|^p-2log(|y|/|x|)|x/|x|-y/|x||^2Ndy/|x|^N. Now, proceeding as in step 1 of the proof of Theorem <ref> J_ε v(x) =-4α_N|x|^N∫_|1-ρ|≥ε/|x| |log(ρ)|^p-2log(ρ)ρ^N-1𝒦(ρ) dρ, where 𝒦(ρ) is defined by (<ref>). Then J_ε v(x)= -4α_N|x|^N{∫_0^1-ε/|x| |log(ρ)|^p-2log(ρ)ρ^N-1G(ρ^2,N,N) dρ. 
+.∫_1+ε/|x|^∞ |log(ρ)|^p-2log(ρ)ρ^-N-1G(ρ^-2,N,N) dρ} = -4α_N|x|^N{∫_0^1-ε/|x| |log(ρ)|^p-2log(ρ)ρ^N-1G(ρ^2,N,N) dρ. -.∫_0^|x|/|x|+ε |log(ρ)|^p-2log(ρ)ρ^N-1G(ρ^2,N,N) dρ} = 4α_N|x|^N∫_1-ε/|x|^|x|/|x|+ε |log(ρ)|^p-2log(ρ)ρ^N-1G(ρ^2,N,N) dρ. That is, J_ε v(x)=g_ε(x) in ℝ^N∖{0} where g_ε(x)=4α_N|x|^N∫_1-ε/|x|^|x|/|x|+ε |log(ρ)|^p-2log(ρ)ρ^N-1G(ρ^2,N,N) dρ We claim g_ε→ 0 strongly in L^1_loc(ℝ^N∖{0}), as ε→ 0^+. To see this, notice that |g_ε (x)|≤ 4α_N|x|(|x|+ε)^N-1∫_1-ε/|x|^|x|/|x|+ε|log(ρ)|^p-1(1-ρ)^N+1(1-ρ)^N+1G(ρ^2,N,N) dρ ≤ 4α_N|x|^p-2(|x|+ε)^N-1(|x|-ε)^p-1∫_1-ε/|x|^|x|/|x|+ε(1-ρ)^N+1G(ρ^2,N,N)(1-ρ)^N-p+2 dρ. On the other hand, by <cit.>, we have that H(ρ)=(1-ρ)^N+1G(ρ^2,N,N) is differentiable and lim_ρ→1^-H(ρ) and lim_ρ→1^-H^'(ρ), exist. As in step 3 of the proof of Theorem <ref>, we consider three cases to prove our claim. Case 1: We start assuming that p>N+1. In this case, owing to (<ref>) and (<ref>), it is easy to check that there is a positive constant C (independent of ε) |g_ε(x)|≤ 4α_N4α_N|x|^p-2/(|x|+ε)^N-1(|x|-ε)^p-1∫_1-ε/|x|^|x|/|x|+εH(ρ)(1-ρ)^N-p+2 dρ ≤ 4α_N|x|^p-2/(|x|+ε)^N-1(|x|-ε)^p-1(1/|x|^p-N-1-1/(|x|+ε)^p-N-1) ε^p-N-1. This implies (<ref>). Case 2: We now study the case p<N+1. Due to (<ref>), we have that ∫_1-ε/|x|^|x|/|x|+εH(ρ)|1-ρ|^N-p+2dρ = =1N-p+1{ H(|x||x|+ε)(ε|x|+ε)^N-p+1 - H(1-ε|x|)(ε|x|)^N-p+1 - ∫_1-ε/|x|^|x|/|x|+εH^'(ρ)|1-ρ|^N-p+1dρ} ≤H^'_L∞(0,1)N-p+1{ε(|x|+ε)^p-N|x| +1p-N( 1|x|^p-N- 1(|x|+ε)^p-N) }ε^p-N +H_L∞(0,1)|x|^p-Nε^p-N. This implies, using again (<ref>), |g_ε (x)|≤ Cε^p-N|x|^p-2(|x|+ε)^N-1(|x|-ε)^p-1{ε(|x|+ε)^p-N|x| +1|x|^p-N- 1(|x|+ε)^p-N} where C is a positive constant independent of ε. Now it is easy to check (<ref>). Case 3: To conclude the proof of our claim, we study the case p=N+1. Again by (<ref>), we have that ∫_1-ε/|x|^|x|/|x|+εH(ρ)|1-ρ|^N-p+2dρ =∫_1-ε/|x|^|x|/|x|+εH(ρ)|1-ρ|dρ= H(|x||x|+ε) log(ε|x|+ε) - H(1-ε|x|)log(ε|x|) - ∫_1-ε/|x|^|x|/|x|+εH^'(ρ)log(1-ρ)dρ ≤H^'_L∞(0,1){ -εlog(ε/|x|+ε)(|x|+ε)|x| - ( ε(log(ε)-1)|x|(|x|+ε) +log(|x|+ε)(|x|+ε) -log(|x|)|x|) }ε +H_L∞(0,1)|x|ε. This implies, using again (<ref>), |g_ε (x)|≤ Cε |x|^N-2 -2εlog(ε) +2ε+ (ε-|x|)log(|x|+ε) +(|x|+ε)log(|x|)+|x|/(|x|+ε)^N(|x|-ε)^N where C is a positive constant independent of ε. This implies (<ref>). Finally, we prove that v(x)=log(|x|), is a weak solution of (<ref>). By (<ref>), we have that J_ε v(x)=g_ε(x) in ℝ^N∖{0}. Then, given φ∈ C^∞_c(ℝ^N∖{0}) we have that ∫_ℝ^N J_ε v(x) φ(x) dx = ∫_ℝ^Ng_ε(x)φ(x) dx that is ∫_ℝ^2N (1-_A_ε(x)(y)) |v(x)-v(y)|^p-2(v(x)-v(y))|x-y|^N+spφ(x) dy dx =1/2∫_ℝ^N g_ε(x)φ(x)dx. Interchanging the roles of x and y ∫_ℝ^2N (1-_A_ε(y)(x)) |v(x)-v(y)|^p-2(v(y)-v(x))|x-y|^N+spφ(y) dx dy =1/2∫_ℝ^N g_ε(x)φ(x) dx Now, adding (<ref>) and (<ref>) and using that 1-_A_ε(y)(x)=1-_A_ε(x)(y), we get ∫_ℝ^N g_ε(x)φ(x) dx= ∫_ℝ^2N (1-_A_ε(x)(y)) |v(x)-v(y)|^p-2 (v(x)-v(y))(φ(x)-φ(y))|x-y|^N+sp dy dx. By (<ref>) we have that lim_ε→ 0^+∫_ℝ^N g_ε(x)φ(x) dx =0. On the other hand, we have that |v(x)-v_β(y)|^p-2 (v(x)-v(y))(φ(x)-φ(y))|x-y|^N+sp∈ L^1(ℝ^2N) and _A_ε(x)(y)→ 0 a.e. in ℝ^2N as ε→ 0^+. Then lim_ε→ 0^+∫_ℝ^2N (1-_A_ε(x)(y)) |v(x)-v(y)|^p-2 (v(x)-v(y))(φ(x)-φ(y))|x-y|^N+sp dy dx =∫_ℝ^2N|v(x)-v(y)|^p-2 (v(x)-v(y))(φ(x)-φ(y))|x-y|^N+sp dy dx. Since φ is arbitrary, by (<ref>), (<ref>) and (<ref>), we conclude that v_β is a weak solution of (<ref>). § HADAMARD PROPERTIES To prove our Hadamard properties, we use comparison techniques that require modifying the fundamental solution near the origin to put it below a weak fractional superharmonic function near the origin. 
§.§ Two fractional subharmonic functions We start by building two weak fractional subharmonic functions from the fundamental solution. Throughout this section N≥2,0<s<1, 1<p<∞, N>ps, and 0<ε<1<r<R. Now, define B_ρ B_ρ(0) (ρ>0), A_r,R{x∈ℝ^N r<|x|<R}, and ϕ_ε(x)ε^β if 0≤ |x|<ε, |x|^β if ε≤ |x|. There is ε_0∈(0,1) independent of R such that for any ε∈(0,ε_0), ϕ_ε is a weak solution of (-Δ_p)^s u(x)≤ 0 in A_r,R. Let φ∈ C_c^∞(A_r,R) be non-negative.Then ∫_ℝ^2N|ϕ_ε(x)-ϕ_ε(y)|^p-2 (ϕ_ε(x) -ϕ_ε(y))(φ(x)-φ(y))/|x-y|^N+sp dxdy=I_1+I_2, where I_1 =2∫_B_ε∫_ℝ^N∖ B_ε|ϕ_ε(x)-ϕ_ε(y)|^p-2 (ϕ_ε(x)-ϕ_ε(y))(φ(x)-φ(y))/|x-y|^N+sp dxdy =-2∫_ℝ^N∖ B_r∫_B_ε|ε^β-|x|^β|^p-1/|x-y|^N+sp dy φ(x) dx and I_2 =∫_(ℝ^N∖ B_ε)^2 |ϕ_ε(x)-ϕ_ε(y)|^p-2 (ϕ_ε(x)-ϕ_ε(y))(φ(x)-φ(y))/|x-y|^N+sp dxdy = ∫_(ℝ^N∖ B_ε)^2 ||x|^β-|y|^β|^p-2 (|x|^β-|y|^β) (φ(x)-φ(y))/|x-y|^N+sp dxdy =∫_ℝ^2N||x|^β-|y|^β|^p-2 (|x|^β-|y|^β)(φ(x)-φ(y))/|x-y|^N+sp dxdy + 2∫_ℝ^N∖ B_r∫_B_ε||y|^β-|x|^β|^p-1/|x-y|^N+sp dy φ(x) dx. By Theorem <ref>, we have that I_2=𝒞(β)∫_ℝ^N|x|^β(p-1)-spφ(x) dx +2∫_ℝ^N∖ B_r∫_B_ε||y|^β-|x|^β|^p-1/|x-y|^N+sp dy φ(x) dx. Then I_1+I_2=𝒞(β)∫_ℝ^N|x|^β(p-1)-spφ(x) dx +2 ∫_ℝ^N∖ B_r F(x)φ(x) dx, where F(x)∫_B_ε||y|^β-|x|^β|^p-1- |ε^β-|x|^β|^p-1/|x-y|^N+sp dy=∫_B_ε k(x,y|x|) dy |x|^β(p-1)-N-sp with k(x,z)|z^β-1|^p-1- |(ε|x|)^β-1|^p-1/|x|x|-z |^N+sp. Making the change of variables z=y/|x|, we have that F(x)=∫_B_ε k(x,y|x|) dy |x|^β(p-1)-N-sp =∫_B_ε/|x| k(x,z) dz |x|^β(p-1)-sp. On the other hand, since |x|>r and β<0, we have that (ε|x|)^β≥(εr)^β, and therefore k(x,z)≤||z|^β-1|^p-1- |(εr)^β-1|^p-1/|x|x|-z |^N+sp. Thus, by a simple change of variable and by rotation invariance, we have that 0≤∫_B_ε/|x| k(x,z) dz≤∫_B_ε/|x|||z|^β-1|^p-1- |(εr)^β-1|^p-1/|e_1-z |^N+sp dz. Now, since |e_1-z|≥ 1-εr for any z∈ B_ε/|x|, p>1, β∈(-Np-1,ps-Np-1), we get 0 ≤∫_B_ε/|x|||z|^β-1|^p-1- |(εr)^β-1|^p-1/|e_1-z |^N+sp dz ≤1/(1-εr)^N+sp∫_B_ε/|x||z|^β(p-1) dz= α_N/(1-εr)^N+sp(εr)^β(p-1)+N. Then, by (<ref>), (<ref>), and (<ref>), we have that 0≤ F(x)≤α_N/(1-εr)^N+sp(εr)^β(p-1)+N|x|^β(p-1)-sp, for any x∈ℝ^N∖ B_r. By (<ref>), (<ref>), and (<ref>), we obtain ∫_ℝ^2N|ϕ_ε(x)-ϕ_ε(y)|^p-2 (ϕ_ε(x)-ϕ_ε(y))(φ(x)-φ(y))/|x-y|^N+sp dxdy≤ ≤(𝒞(β)+ 2α_N/(1-εr)^N+sp(εr)^β(p-1)+N) ∫_ℝ^N∖ B_r |x|^β(p-1)-spφ(x) dx. Thus, by Remark <ref>, we get ∫_ℝ^2N|ϕ_ε(x)-ϕ_ε(y)|^p-2 (ϕ_ε(x)-ϕ_ε(y))(φ(x)-φ(y))/|x-y|^N+sp dxdy ≤ 0 for any ε small enough. Before showing our next result, we define A_r A_r2,2r, r_ε r(ε/1+ε 2^β)^-1β, and ψ_ε(x) r_ε^β if 0≤ |x|≤ r_ε, |x|^β if r_ε<|x|≤ 2r, (2r)^β if 2r<|x|. Observe that r_ε<r2 and r_ε→ 0^+ as ε→ 0^+. There is ε_0∈(0,1) independent of r such that for any ε∈(0,ε_0), ψ_ε is a weak solution of (-Δ_p)^s u(x)≤ 0 in A_r. Let φ∈ C_c^∞(B_r) be non-negative. Then ∫_ℝ^2N|ψ_ε(x)-ψ_ε(y)|^p-2 (ψ_ε(x)-ψ_ε(y))(φ(x)-φ(y))/|x-y|^N+sp dxdy= J_1+2J_2, where J_1 =∫_A_r^2|ϕ_ε(x)-ϕ_ε(y)|^p-2 (ϕ_ε(x)-ϕ_ε(y))(φ(x)-φ(y))/|x-y|^N+sp dxdy, J_2 =∫_A_r∫_ℝ^ℕ∖ A_r|ϕ_ε(x)-ϕ_ε(y)|^p-2 (ϕ_ε(x)-ϕ_ε(y))/|x-y|^N+spφ(x) dydx. Observe that ℝ^ℕ∖ A_r=C_r^1∪ C_r^2∪ C_r^3 with C_r^1 A_r_ε,r2, C_r^2ℝ^N∖ B_2r and C_r^3 B_r_ε. Then J_2 =∫_A_r∫_C_r^1||x|^β-|y|^β|^p-2(|x|^β-|y|^β)|x-y|^N+spdy φ(x) dx +∫_A_r∫_C_r^2||x|^β-(2r)^β|^p-1|x-y|^N+spdy φ(x) dx -∫_A_r∫_C_r^3|r_ε^β-|x|^β|^p-1|x-y|^N+spdy φ(x) dx. Thus J_1+2J_2= ∫_ℝ^2N||x|^β-|y|^β|^p-2 (|x|^β-|y|^β)(φ(x)-φ(y))/|x-y|^N+sp dxdy +2∫_A_r(G_1(x)+G_2(x))φ(x)dx, where G_1(x) =∫_C^2_r||x|^β-(2r)^β|^p-1-||x|^β-|y|^β|^p-1|x-y|^N+spdy, G_2(x) =∫_C^3_r||y|^β-|x|^β|^p-1 -|r_ε^β-|x|^β|^p-1|x-y|^N+spdy. By Theorem <ref>, we have J_1+J_2≤ 2∫_A_r(G_1(x)+G_2(x))φ(x)dx. 
On the other hand, for any x∈ A_r G_2(x)≤α_N r_ε^β(p-1)+Nr^N+sp, by calculation as in the proof of Lemma <ref>. Now, taking x∈ A_r, we get G_1(x) =∫_C^2_r||x|^β-(2r)^β|^p-1-||x|^β-|y|^β|^p-1|x-y|^N+spdy =∫_C^2_r|1-(2r|x|)^β|^p-1 -|1-(|y||x|)^β|^p-1|x|x|-y|x||^N+spdy|x|^β(p-1)-N-sp. Making the change of variable z=y|x|, we obtain that G_1(x)=∫_ℝ^N∖ B_2r/|x||1-(2r|x|)^β|^p-1 -|1-|z|^β|^p-1|x|x|-|z| |^N+spdy|x|^β(p-1)-sp. Since x∈ A_r and β<0, we have that 2<2r|x|<4 and G_1(x)≤∫_ℝ^N∖ B_1|1-4^β|^p-1 -|1-|z|^β|^p-1|x|x|-|z| |^N+spdy(r/2)^β(p-1)-sp. As the last integration is invariant under the rotation of coordinate axes, we get G_1(x)≤ D_N r^β(p-1)-sp ∀ x∈ A_r, here D_N denotes a negative constant depending only on N. Finally by (<ref>), (<ref>), (<ref>) and (<ref>), ∫_ℝ^2N |ψ_ε(x)-ψ_ε(y)|^p-2 (ψ_ε(x)-ψ_ε(y))(φ(x)-φ(y))/|x-y|^N+sp dxdy ≤(D_N r^β(p-1)-sp + α_N r_ε^β(p-1)+Nr^N+sp) ∫_A_rφ(x) dx ≤(D_N + α_N (ε/1+ε 2^β)^-(p-1)-Nβ)r^β(p-1)-sp∫_A_rφ(x) dx. Since D_N<0 and (ε1+ε 2^β)^-(p-1)-Nβ→0^+ as ε→0^, there is a positive ε_0 independent of r such that ∫_ℝ^2N|ψ_ε(x)-ψ_ε(y)|^p-2 (ψ_ε(x)-ψ_ε(y))(φ(x)-φ(y))/|x-y|^N+sp dxdy≤0. §.§ Hadamard-type results We are now in a position to prove our Hadamard properties. Let N≥2, 0<s<1 and 1<p<∞. If N>ps then for any β∈(-Np-1,ps-Np-1) and any r_0>1 there is a positive constant C>0 such that for any u≢0 non-negative lower semi-continuous weak solution of (-Δ_p)^su≥0 in ℝ^N, we have that m(r)≥ Cm(r_0)r^β ∀ r>r_0, where m(r)min{u(x) x∈B_r(0)}. Let r_0>1. By Lemma <ref>, there is ε_0>0 such that ϕ_ε is a weak solution of (-Δ_p)^s v(x)≤0 in A_r_0,R for any R>r_0. For any ε>ε_0 and R>r_0, we define H_ε,R(x)m(r_0)ε^β-R^βϕ_ε(x)-R^β if |x|≥ε, 0 if |x|≤ε. Observe that H_ε,R is a weak solution of (<ref>) and u≥ H_ε,R in ℝ^N∖ A_r_0,R. Thus, by the comparison principle (see, for instance, <cit.>), we have that u≥ H_ε,R in ℝ^N. Then u(x)≥m(r_0)ε^β-R^β (|x|^β-R^β) in A_r_0,R. Passing to limits as R→∞, we have that u(x)≥m(r_0)/ε^β|x|^β in {x∈ℝ^N |x|>r_0}. Finally, taking C=ε_0^-β we have m(r)≥ Cm(r_0)r^β ∀ r≥ r_0. Our second Hadamard property is Let N≥2, 0<s<1 and 1<p<∞. If N>ps then there is a positive constant C>0 such that for any u≢0 non-negative lower semi-continuous weak solution of (-Δ_p)^su≥ 0 in ℝ^N we have that m(r2)≤ Cm(r) ∀ r>1, where m(r)min{u(x) x∈B_r(0)}. Let r>1 and β∈(-Np-1,ps-Np-1). By Lemma <ref>, there is ε_0>0 independent of r such that ψ_ε is a weak solution of (-Δ_p)^s v(x)≤0 in A_r. For any ε>ε_0, we define J_ε,r(x) m(r2) ψ_ε(x)-(2r)^βr_ε^β-(2r)^β. Observe that J_ε,r is also weak solution of (<ref>) and u≥ J_ε,r in ℝ^N∖ A_r. Thus, by the comparison principle, we have that u≥ J_ε,r in ℝ^N. Then m(r) ≥ m(r2) min{ψ_ε(x)-(2r)^βr_ε^β-(2r)^β |x|=r}=m(r2)r^β-2^β r^β/r^β(ε/1+ε 2^β)^-1-2^β r^β = m(r2)ε(1-2^β). Finally, taking C=ε (1-2^β) we have m(r)≥ Cm(r2). § A LIOUVILLE-TYPE THEOREM We now prove a Liouville-type theorem. We split the proof into two cases. §.§ Case N<sp Let β∈(0,ps-Np-1), 0<ε<1<r<R, and u be a non-negative lower semi-continuous weak solution of (-Δ_p)^s u ≥ 0 in ℝ^N. We know, by Theorem <ref>, v_β(x)=|x|^β is a weak solution of (-Δ_p)^s v(x)= 𝒞(β)|x|^β(p-1)-sp in ℝ^N∖{0} with 𝒞(β)>0 (see Remark <ref>). We now define θ^ε_β(x) m(r) 1 if 0≤ |x|<ε, R^β-|x|^βR^β-ε^β if ε≤ |x|<R, 0 if R≤ |x|, where m(r)min{u(x) x∈B_r(0)}. First, we prove the following auxiliary result. For ε sufficiently small, θ^ε_β is a weak solution of (-Δ_p)^s v(x)≤0 in A_r,R. Let φ∈ C^∞_0(A_r,R) be non-negative. 
Then ∫_ℝ^2N|θ^ε_β(x)-θ^ε_β(y)|^p-2 (θ^ε_β(x) -θ^ε_β(y))(φ(x)-φ(y))/|x-y|^N+spdxdy=I_1+I_2+I_3, where I_1= -2m(r)/|R^β-ε^β|^p-1∫_A_r,R∫_B_ε(0)||x|^β-ε^β|^p-1|x-y|^N+spφ(x) dydx I_2= 2m(r)/|R^β-ε^β|^p-1∫_A_r,R∫_ℝ^N∖ B_R(0)|R^β-|x|^β|^p-1|x-y|^N+spφ(x) dydx I_3= m(r)/|R^β-ε^β|^p-1∫_A_ε,R^2||y|^β-|x|^β|^p-2 (|y|^β-|x|^β)(φ(x)-φ(y))/|x-y|^N+spdxdy. Observe that I_3 = m(r)/|R^β-ε^β|^p-1∫_ℝ^2N||y|^β-|x|^β|^p-2 (|y|^β-|x|^β)(φ(x)-φ(y))/|x-y|^N+spdxdy +J_1+J_2 with J_1 =2m(r)/|R^β-ε^β|^p-1∫_A_r,R∫_B_ε(0)||x|^β-|y|^β|^p-1|x-y|^N+spφ(x) dydx, J_2= -2m(r)/|R^β-ε^β|^p-1∫_A_r,R∫_ℝ^N∖ B_R(0)||y|^β-|x|^β|^p-1|x-y|^N+spφ(x) dydx. Then, using that v_β(x)=|x|^β is a weak solution of (<ref>), we have that ∫_ℝ^2N|θ^ε_β(x)-θ^ε_β(y)|^p-2 (θ^ε_β(x) -θ^ε_β(y))(φ(x)-φ(y))/|x-y|^N+spdxdy= =2m(r)/|R^β-ε^β|^p-1∫_A_r,R∫_B_ε(0)||x|^β-|y|^β|^p-1-||x|^β -ε^β|^p-1|x-y|^N+spφ(x) dydx -2m(r)/|R^β-ε^β|^p-1∫_A_r,R∫_ℝ^N∖ B_R(0)||y|^β-|x|^β|^p-1-|R^β-|x|^β|^p-1|x-y|^N+spφ(x) dydx -𝒞(β)m(r)/|R^β-ε^β|^p-1∫_A_r,R|x|^β(p-1)-spφ(x) dx ≤2m(r)/|R^β-ε^β|^p-1∫_A_ε,R∫_B_ε(0)||x|^β-|y|^β|^p-1-||x|^β -ε^β|^p-1|x-y|^N+spφ(x) dydx -𝒞(β)m(r)/|R^β-ε^β|^p-1∫_A_ε,R|x|^β(p-1)-spφ(x) dx. Note that, for any x∈ A_r,R, using the change of variable z=y|x|, we have ∫_B_ε(0) ||x|^β-|y|^β|^p-1-||x|^β -ε^β|^p-1|x-y|^N+spdy= =∫_B_ε(0)|1-(|y||x|)^β|^p-1-|1 -(ε|x|)^β|^p-1||x|x| -y|x||^N+spdy|x|^β(p-1)-N-sp =∫_B_ε/|x|(0)|1-|z|^β|^p-1-|1 -(ε|x|)^β|^p-1|x|x| -z|^N+spdz|x|^β(p-1)-sp ≤α_Nε^N/(1-εR)^N+sp |x|^β(p-1)-sp. Therefore, ∫_ℝ^2N |θ^ε_β(x)-θ^ε_β(y)|^p-2 (θ^ε_β(x) -θ^ε_β(y))(φ(x)-φ(y))/|x-y|^N+spdxdy= ≤m(r)/|R^β-ε^β|^p-1(2α_Nε^N/(1-εR)^N+sp-𝒞(β)) ∫_A_ε,R|x|^β(p-1)-spφ(x) dx, <0 provided ε is small enough. We now prove our Liuoville-type theorem for the case sp>N. Let N≥2,0<s<1, and 1<p<∞. If N<ps and u is a non-negative lower semi-continuous weak solution of (<ref>), then u is constant. Let β∈(0,ps-Np-1) and 0<ε<1<r<R. By Lemma <ref>, for ε sufficiently small, θ^ε_β is a weak solution of (<ref>) and it is easy to see that θ^ε_β≤ u in ℝ^N∖ A_r,R. Thus, by the comparison principle, we have that θ^ε_β≤ u in ℝ^N. Therefore m(r)≤ u(x) for any |x|≥ r. Then there is x_0∈B_r(0) such that u(x_0)≤ u(x) for any x∈ℝ^N. On the other hand, by <cit.> (see also <cit.>), we know the u is a viscosity solution of (<ref>). Finally, since u attains its minimum, we can conclude that u is constant. §.§ Cases N=sp Let 0<ε<1<r<R. In this case, we take a non-negative function ζ_ε∈ C^∞_c(Ω) such that ζ_ε(x)= 1 if x∈ B_ε2(0), 0 if x∈ℝ^N∖ B_ε(0), and _ε (x)log(|x|)-log(ε)+κζ_ε(x) if x∈ B_ε(0), 0 if x∈ℝ^N∖ B_ε(0), where κ is a positive constant to be chosen later. By Theorem <ref> and <cit.>, we have that _ε(x)_ε(x)-log(|x|) satisfies (-Δ_p)^s_ε(x)=h(x) in A_r,R where for a.e. Lebesgue point x∈ A_r,R h(x) 2∫_B_ε(0)F(x,y)/|x-y|^N+spdy with F(x,y) |-log(|x|)+log(|y|)- _ε(y)|^p-2(-log(|x|)+log(|y|) -_ε(y)| -log(|x|)+log(|y|)|^p-2(-log(|x|)+log(|y|)). Observe that, for a.e. Lebesgue point x∈ A_r,R and for any y∈ B_ε(0), we have F(x,y) =|-log(|x|)+log(ε) -κζ_ε(y)|^p-2(-log(|x|)+log(ε) -κζ_ε(y)) +(log(|x|)-log(|y|)|^p-1 =(log(|x|)-log(|y|)|^p-1-(log(|x|)-log(ε)+ κ_ε(y))^p-1. Then for a.e. Lebesgue point x∈ A_r,R h(x)= 2∫_B_ε(0)(log(|x|)-log(|y|))^p-1-(log(|x|)-log(ε)+ κ_ε(y))^p-1/|x-y|^N+spdy. Now we choose κ large enough so that h(x)≤ 0 for a.e. Lebesgue point x∈ A_r,R. Then, we can prove the second case of our Liuoville-type theorem. Let N≥2,0<s<1, and 1<p<∞. If N=ps and u is a non-negative lower semi-continuous weak solution of (<ref>), then u is constant. 
Let u be a non-negative lower semi-continuous weak solution of (<ref>) and _ε(x)=m(r) _ε(x)+log(R)-log(ε)+ κ+log(R) where m(r)min{u(x) x∈B_r(0)}. Then _ε is a weak solution of (<ref>) and it is easy to see that _ε≤ u in ℝ^N∖ A_r,R. Thus, by the comparison principle, we have that _ε≤ u in ℝ^N. Therefore m(r)≤ u(x) for any |x|≥ r. Then there is x_0∈B_r(0) such that u(x_0)≤ u(x) for any x∈ℝ^N. On the other hand, by <cit.> (see also <cit.>), we know that u is a viscosity solution of (<ref>). Finally, since u attains its minimum, we can conclude that u is constant. § A NONLINEAR LIOUVILLE-TYPE THEOREM In this last section, we prove our non-linear Liouville-type theorem (see Theorem <ref>). As before, we split the proof in two cases. §.§ The sub-critical case First, we show the following result. Let N≥2,0<s<1, 1<p<∞, and N>ps. If 0<q< N(p-1)N-ps and u∈ C(ℝ^N) is a non-negative weak solution of (-Δ_p)^s u-u^q ≥ 0 in ℝ^N. then u≡0. Let u be a non-negative lower semi-continuous weak solution of (<ref>). By <cit.>, u is a viscosity solution of (<ref>). We suppose by contradiction that u≢0 in ℝ^N. By <cit.>, we have that u>0 a.e. in ℝ^N. On the other hand, by <cit.> (see also <cit.>), we know the u is a viscosity solution of (-Δ_p)^s u(x)≥ 0 in ℝ^N. Therefore u>0 in ℝ^N. Let's take a function μ∈ C^∞([0,∞),ℝ) such that μ is non-increasing, 0≤μ≤1, and μ(t)= 1 if 0≤ t≤12, 0 if t≥ 1. Then, by <cit.>, there is a positive constant C such that w(x)=μ(|x|) satisfies in strongly sense (-Δ_p)^s w(x)≤ C in B_1(0). Now, for any R>1, we take (x)= m(R2)μ(|x|R), where m(R2)= min{u(x) x∈B_R2(0)}. Observe that satisfies in strongly sense (-Δ_p)^s (x)≤ m(R2)^p-1C/R^ps in B_R(0). Here, C is a positive constant independent of R. On the other hand, since (x)≤ u(x) in ℝ^N∖ A_R/2,R and u is lower semi-continuous function, there is x_R∈ B_R(0) such that u(x_R)-(x_R)≤ u(x)-(x) for any x∈ℝ^N. We divide the rest of the proof into two cases. Case 1: R2<|x_R|<R. We take r≪dist(x_R,∂ B_R(0)), and ϕ_r(x)= (x)- (x_R)+u(x_R) if x∈ B_r(x_R), u(x) if x∈ℝ^N∖ B_r(x_R). Note that ϕ_r(x)∈ C^∞(B_r(x_R)), ϕ_r(x_R)=u(x_R), ϕ_r(x)≤ u(x) in B_r(x_R) and ∇ϕ_r(x_R)=∇_r(x_R)≠ 0. Then, since u is a viscosity solution of (<ref>), we have that u(x_R)^q≤(-Δ_p)^sϕ_r(x_R). Now, using that u(x_R)-(x_R)≤ u(x)-(x) for any x∈ℝ^N, x_R∈ B_R(0), and (<ref>), we get m(R)^q≤ (-Δ_p)^sϕ_r(x_R)≤(-Δ_p)^s_r(x_R)≤ m(R2)^p-1C/R^ps. Then, by Lemma <ref>, we have m(R)^q≤C/R^psm(R)^p-1 ∀ R>1 where C is a positive constant independent of R. If 0<q≤ p-1, we obtain a contradiction. On the other hand, if q>p-1, we have m(R)≤ CR^κ where κ=-psq-p+1. Since p-1<q<N(p-1)N-ps, there is β∈ (Np-1,ps-Np-1) such that κ<β. Then, by Lemma <ref>, there is r_0>1 and a positive constant such that m(R)≤ CR^κ≤ C R^κ-β m(R) ∀ R>r_0. We again obtain a contradiction. Case 2: |x_R|≤R2. Then 0≤ u(x_R)-m(R2)=u(x_R)-(x_R)≤ u(x)-(x) ∀ x∈ℝ^N. In particular, if we x̃∈ B_R/2(0) so that u(x̃)=m(R2) we have that 0≤ u(x_R)-m(R2)≤ 0. Therefore u(x_R)=m(R2) and (x)≤ u(x) for any x∈ℝ^N. Thus if p>22-s, we can proceed as in the previous case. But if 1<p≤22-s, we have a problem because x_R is a critical point of but it is not isolated. Then ϕ_r is not an admissible test function. To solve this problem, we take ϕ̃_r (x)= (x)- m(R2) |x-x_R|^γ if x∈ B_r(x_R), u(x) if x∈ B_r(x_R), as a test function with γ >spp-1 and r<R^-spγ(p-1)-sp. Observe that m(R)^q≤ u(x_R)^q≤ (-Δ_p)^sϕ̃_r(x_R) ≤ ∫_B_r(x_R) |(x_R)-(x)+ m(R2)|x-x_R|^γ|^p-1|x-x_R|^N+spdx + ∫_ℝ^N∖ B_r(x_R) |u(x_R)-u(x)|^p-2(u(x_R)-u(x))|x-x_R|^N+spdx. 
Note that, 0<p-1≤s2-s<1 and (x_R)-(x)≥ 0 for any x∈ℝ^N. Then, using that (a+b)^q≤ a^q+b^q ∀ a,b≥0 q∈(0,1] (see <cit.>), (x)≤ u(x) for any x∈ℝ^N, and (<ref>), we have that m(R)^q≤ u(x_R)^q≤ (-Δ_p)^sϕ̃_r(x_R) ≤ m(R2)^p-1∫_B_r(x_R) |x-x_R|^γ(p-1)|x-x_R|^N+spdx+ (-Δ_p)^s(x_R) ≤ C(r^γ(p-1)-sp+ 1R^ps) m(R2)^p-1 ≤ C/R^ps m(R2)^p-1 ∀ R>1, where C is a positive constant independent of R. Now the proof follows exactly the proof of Case 1. §.§ The super-critical To conclude this article, we prove the following result. Let N≥2,0<s<1, 1<p<∞, and N>ps. If q>N(p-1)N-ps then there is a positive solution of (<ref>). In this case, we take =spq-p+1 and w(x)=1(1+|x|)^. Observe that, since q>N(p-1)N-ps we have that 0<<N-psp-1. For any x∈ℝ^N∖{0}, we have that 2 ∫_ℝ^N |w(x)-w(y)|^p-2(w(x)-w(y))|x-y|^N+sp dy= =2∫_ℝ^N|1(1+|x|)^- 1(1+|y|)^|^p-2( 1(1+|x|)^- 1(1+|y|)^)|x-y|^N+sp dy =21(1+|x|^2)^(p-1)+sp+N∫_ℝ^N|1- ( 1+|x|1+|y|)^|^p-2(1- ( 1+|x|1+|y|)^) |x1+|x|-y1+|x||^N+sp dy. Applying a rotation, we may assume that x|x|=e_1=(1,0,…,0)∈ℝ^N, and via the change of variable z=11+|x|(y+x|x|), and since x/1+|x|-y/1+|x|= x/|x|-x/(1+|x|)|x| -y/1+|x|= x/|x|-1/1+|x|(y+x/|x|), and (p-1)+sp= q, we get 2(1+|x|)^ q∫_ℝ^N|w(x)-w(y)|^p-2(w(x)-w(y))|x-y|^N+sp dy= = 2·∫_ℝ^N|1- ( 1+|x|1+|(1+|x|)z-e_1|)^|^p-2(1- ( 1+|x|1+|(1+|x|)z-e_1|)^) |e_1-z|^N+sp dz. Now, using that (1+|x|)|z|≤ 1+|(1+|x|)z-e_1| we have ·∫_ℝ^N|1- ( 1+|x|1+|(1+|x|)z-e_1|)^|^p-2(1- ( 1+|x|1+|(1+|x|)z-e_1|)^) |e_1-z|^N+sp dz≥ ∫_ℝ^N|1- 1|z|^|^p-2(1- 1|z|^) |e_1-z|^N+sp dz=𝒞(-). Since, 0>->ps-Np-1, by Remark <ref>, we have that 𝒞(-)>0. See also Remark <ref>. Then, 𝒞(-)^1/q-p+1w , is a positive solution of (<ref>). § ACKNOWLEDGMENTS L.M.D.P. partially supported by CONICET grant PIP GI No 11220150100036CO (Argentina), PICT-2018-03183 (Argentina) and UBACyT grant 20020160100155BA (Argentina). A. Q. was partially supported by Fondecyt Grant No. 1231585. abbrv
http://arxiv.org/abs/2307.04987v1
20230711025249
Inflationary magnetogenesis with a self-consistent coupling function
[ "Y. Li", "L. Y. Zhang" ]
astro-ph.CO
[ "astro-ph.CO", "gr-qc" ]
Y. Li and L. Y. Zhang Inflationary magnetogenesis with a self-consistent coupling function School of Science, Dalian Maritime University, Dalian 116026, China [email protected] School of Science, Dalian Maritime University, Dalian 116026, China [email protected] Inflationary magnetogenesis with a self-consistent coupling function Le-Yao Zhang ===================================================================== Received (Day Month Year) Revised (Day Month Year) In this paper, we discuss the inflationary magnetogenesis scenario, in which a coupling function is introduced to break the conformal invariance of the electromagnetic action. Unlike in conventional models, we deduce Maxwell's equations under the perturbed FRW metric. We find that the self-consistency of the action depends on the form of the coupling function once scalar mode perturbations are taken into account. Therefore, this self-consistency can be seen as a restriction on the coupling function. In this paper, we give the restrictive equation for the coupling function and then obtain its specific form in a simple model. We find that the coupling function depends on the potential of the inflaton and is thus model dependent. We obtain the power spectra of the electric and magnetic fields in the large-field inflation model. We also find that, as in most inflationary magnetogenesis models, the coupling function is an increasing function of time during the slow-roll era, which leads to the strong coupling problem. This issue is discussed qualitatively by introducing a correction function during the preheating. PACS Nos.: 98.80.Cq. § INTRODUCTION Observations indicate that the universe is magnetized on a wide range of length scales <cit.>. The sources of these magnetic fields are still unclear. There are two types of models that can be used to explain the origin of these magnetic fields: the astrophysical scenario <cit.> and the primordial scenario (see Refs. <cit.> for reviews). The former holds that these magnetic fields originate from astrophysical processes. The origin of magnetic fields in galaxies and clusters can be explained in such models. However, it is difficult for this type of model to explain the origin of the magnetic fields in cosmic voids. The magnetic fields in the cosmic voids are more likely to have originated in the early universe <cit.>. The latter, i.e. the primordial scenario, assumes that these large-scale magnetic fields originated in the early stages of the universe. One class of possible sources of the primordial magnetic fields is phase transitions, such as the electroweak phase transition <cit.> or the QCD transition <cit.>. However, in these scenarios only very tiny fields on galactic scales are obtained unless helicity is also generated, in which case one can have an inverse cascade of energy to large scales <cit.>. The other class of possible sources of primordial magnetic fields is inflationary magnetogenesis <cit.>. Inflation provides an ideal setting for the generation of primordial large-scale fields <cit.>; therefore we focus on inflationary magnetogenesis in this paper. Because the standard electromagnetic action is conformally invariant and the FRW metric is conformally flat, the electromagnetic field is not amplified during the inflation era <cit.>. Therefore, in order to generate large-scale magnetic fields during inflation, it is necessary to break this conformal invariance <cit.>. One way to do this is to introduce a time-dependent coupling function f^2(ϕ) into the action <cit.>.
On the other hand, an effective way of linking theoretical models to observations is to consider the effects of the existence of large-scale magnetic fields on cosmological perturbations. Cosmological perturbations will, in turn, also affect the evolution of large-scale magnetic fields, so a complete discussion should solve for the evolution of the electromagnetic field and the cosmological perturbations together. In other words, it is necessary to consider magnetogenesis in an inflation model in which cosmological perturbations have been included. However, it is difficult to solve the equations that include all fields (perturbations and electromagnetic field). There are two methods to discuss this issue approximately. One way is to consider magnetogenesis in the unperturbed FRW metric and discuss its backreaction on the perturbations, e.g. on the CMB. Most of the current work is done in this way (see <cit.> for example). The other way, which is used in this paper, is to consider magnetogenesis in the perturbed FRW metric and discuss the influence of the perturbations on the electromagnetic field. As we will discuss in this paper, the existence of cosmological perturbations restricts the form of the coupling function. Under the FRW background, the introduction of the coupling function does not destroy the self-consistency of the action, which means that the secondary constraint equation for the electromagnetic field, ∇⃗·E⃗=0, is satisfied automatically. However, if one considers the perturbed FRW background, this constraint equation is no longer trivial. In this situation, we can treat this equation as a restriction on the coupling function, i.e. the form of a self-consistent coupling function f(ϕ) should satisfy this equation. The purpose of this paper is to discuss inflationary magnetogenesis with this self-consistent coupling function. This paper is organized as follows: we deduce Maxwell's equations under the perturbed FRW metric and then obtain the restrictive equation for f(ϕ) in section <ref>. We apply this restrictive equation to slow-roll inflation at the end of section <ref> and obtain the power spectrum of the electromagnetic field in the large-field inflation model in section <ref>. We also discuss the backreaction in section <ref> and the strong coupling problem in section <ref>; the summary is given in section <ref>. § MAXWELL'S EQUATIONS UNDER THE PERTURBED FRW BACKGROUND To obtain the restrictive equation for f(ϕ), let us consider the FRW metric with scalar-mode inhomogeneous perturbations in the longitudinal gauge: ds^2 = -(1+2Φ)dt^2+a^2(t)(1-2Φ)δ_ijdx^idx^j = a^2(η)[-(1+2Φ)dη^2+(1-2Φ)δ_ijdx^idx^j] where Φ is the Bardeen potential, t is cosmic time and η is conformal time. The action of the matter sector during inflation can be written as <cit.>: S=-1/16π∫ d^4x √(-g)[g^αβg^μνf^2(φ)F_μαF_νβ] -∫ d^4x √(-g)[1/2g^μν∂_μφ∂_νφ+V(φ)] where φ(t, x)=ϕ(t)+δϕ(t, x) is the inflaton together with its perturbation. F_αβ=A_β;α-A_α;β=A_β,α-A_α,β is the electromagnetic field tensor, with A_α being the standard electromagnetic 4-potential. f(φ) is the coupling function, which is introduced to break the conformal invariance of the standard electromagnetic action <cit.>. For the convenience of discussion, we expand the coupling function as: f^2(φ)=f^2(ϕ+δϕ)≈[f(ϕ)+df/dφ|_ϕδϕ]^2 ≈ f^2(ϕ)[1+𝒢(ϕ)δϕ] where 𝒢(ϕ)≡2/f(ϕ)df/dφ|_ϕ It is worth noticing that 𝒢 depends only on time or, in other words, it is scale-independent.
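As a quick consistency check of the expansion above, one can verify symbolically that expanding f^2(ϕ+δϕ) to first order in δϕ reproduces f^2(ϕ)[1+𝒢(ϕ)δϕ] with 𝒢=2f'/f. The following sympy sketch is our own illustration and not part of the original derivation:

```python
# Minimal check that f(phi + dphi)^2 ≈ f(phi)^2 * (1 + G*dphi) with G = 2 f'/f.
import sympy as sp

phi, dphi = sp.symbols('phi deltaphi')
f = sp.Function('f')

lhs = sp.series(f(phi + dphi)**2, dphi, 0, 2).removeO().doit()   # first-order expansion
G = 2*sp.diff(f(phi), phi)/f(phi)                                # G(phi) = 2 f'/f
rhs = f(phi)**2*(1 + G*dphi)

print(sp.simplify(sp.expand(lhs - rhs)))                         # expected output: 0
```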
In the model we discuss here, we treat the electromagnetic field as a “test" field which means that F_αβ do not affect the evolution of the background (a and ϕ) and perturbations (Φ and δϕ), but the background and perturbations can affect the evolution of electromagnetic field. The Maxwell's equations can be obtain by using the action (<ref>): ∂_ρ[√(-g)f^2(φ)g^σμg^ρνF_μν]=0 For conformal time, the time component of Eq.(<ref>) (σ=0) lead to: ∂_i[f^2(ϕ)(1-2Φ+𝒢δϕ)δ^ijF_0j]=0 In Minkowski spacetime, Eq.(<ref>) is noting but ∇⃗·E⃗=0. This is the secondary constraint equation for source-free electromagnetic field (one can refer to Appendix E in <cit.>).We will see later, this equation is trivial equation in FRW background and is non-trivial in perturbed background. The space component of Eq.(<ref>) (σ=i) can also be obtain similarly: (1-2Φ+𝒢δϕ)A”_j-(1+2Φ+𝒢δϕ)∇^2A_j +[𝒢ϕ'(1-2Φ+𝒢δϕ)-2Φ'+𝒢'δϕ+gδϕ']A'_j +(2Φ_,k+𝒢δϕ_,k) δ^kℓ(A_ℓ,j-A_j,ℓ)=0 where ∇^2≡δ^kℓ∂_k∂_ℓ is Laplace operator and ' denote the derivative with respect to conformal time. For the convenience of discussion, we assume that A_i can be expressed as perturbation expansion: A_i=A_i^(0)+A_i^(1)+A^(2)_i+⋯ where O[A_i^(0)]∼ O[Φ]. In this paper we adopt the Coulomb gauge:A_0(η, x)=0, ∂_jA^j(η, x)=0. Under this gauge, Maxwell's equations can be rewritten as: ∂_i{𝒢δϕ[A^(0)_j]'-4Φ[A^(0)_j]'-2A^(0)_jΦ'}δ^ij=0 and {[A^(0)_j]”+𝒢ϕ'[A^(0)_j]'-∇^2A^(0)_j} + {[A^(1)_j]”+𝒢ϕ'[A^(1)_j]'-∇^2A^(1)_j}-Q_j^(1)=0 where: Q_j^(1) ≡ (2Φ-𝒢δϕ) {[A^(0)_j]”+𝒢ϕ'[A^(0)_j]'-∇^2A^(0)_j} + (2Φ-𝒢δϕ)'[A^(0)_j]' +4Φ∇^2A^(0)_j+δ^kℓ(𝒢δϕ+2Φ)_,k(A^(0)_j,ℓ-A^(0)_ℓ,j) Notice that, all the order in the left of Eq.(<ref>) should be zero, therefore the space component of Maxwell equations lead to [A^(0)_j]”+𝒢ϕ'[A^(0)_j]'-∇^2A^(0)_j = 0  [A^(1)_j]”+𝒢ϕ'[A^(1)_j]'-∇^2A^(1)_j = Q_j^(1) If scalar perturbations are not taken into account (Φ=δϕ=0), then time component of Maxwell's equations Eq.(<ref>) becomes a trivial equation, and source term Q_j^(1) vanish. One can choose the form of the coupling function f(ϕ) (and 𝒢) freely from the beginning and solve the space component Eq.(<ref>,<ref>) directly. However, once the scalar perturbations have been considered, Eq.(<ref>) is no longer trivial. This means that not any coupling function can make the theory self-consistent. The form of the coupling function is to ensure that this constraint equation holds. Although Eq.(<ref>) is order O[Φ^2], it does not means that the restriction on coupling function is order of perturbations. The trivial or non-trivial of Eq.(<ref>) is essentially different. This fact is not surprising. It is rooted in the fact that the choice of Lagrangian density is not arbitrary and it needs to satisfy self-consistent conditions<cit.>. From Eq.(<ref>,<ref>,<ref>) one can see that, there are two aspects to the influence of cosmological perturbations on electromagnetic fields: * The perturbations restrict the form of coupling function by Eq.(<ref>). * The perturbations (and A^(0)_j) provide a source term Q_j^(1) for A^(1)_j. An consistent discussion should be solving Eq.(<ref>,<ref>,<ref>) together. However, one can notice that, there is no A^(1)_j in Eq.(<ref>) and Eq.(<ref>). This means that we can solve Eq.(<ref>,<ref>) to obtain the 𝒢 and A^(0)_j first, and then insert the 𝒢 and A^(0)_j into Eq.(<ref>) to get the solution of A^(1)_j. Furthermore, the source term Q_j^(1) in Eq.(<ref>) is order O[Φ^2], and then the A^(1)_j will be more smaller than A^(0)_j. 
Therefore in this paper, we only focus on the evolution of the main part of A_j, i.e. A_j^(0). In other words, we only focus on the influence (i) of cosmological perturbations. In fact, Eq.(<ref>) is the same as the evolution equation of the electromagnetic 4-potential in the conventional model <cit.> besides the coupling function and A^(0)_j should satisfy the Eq.(<ref>). We treat Eq.(<ref>) as a restriction on the form of the coupling function. One can choose a coupling function which satisfy Eq.(<ref>), then to solve the Eq.(<ref>). From Eq.(<ref>), one can obtain that 𝒢δϕ[A^(0)_j]'-4Φ[A^(0)_j]'-2A^(0)_jΦ'=ℂ(η) where ℂ(η) is the function of η. The choice of function ℂ(η) will affect the form of the coupling function. In order to get the specific form of the coupling function, we consider the slow-roll era of inflation first. During the slow-roll inflation, the super-Hubble scale Fourier mode of Bardeen potential Φ_k satisfy<cit.> Φ'_k = 0,    Φ_k≈ϵℋδϕ_k/ϕ' where ϵ≡-Ḣ/H^2 is slow-roll parameter, and dot denote the derivative with respect to cosmic time. ℋ≡ a'/a is conformal Hubble parameter and δϕ_k is Fourier mode of δϕ. If one consider the large scales only, then we have: Φ' ≈ 0,    Φ≈ϵℋδϕ/ϕ' Insert Eq.(<ref>) into Eq.(<ref>) one can get 𝒢=4ϵℋ/ϕ'+ℂ(η)/δϕ[A^(0)_j]' From the Eq.(<ref>), one can see that 𝒢 is scale-independent, therefore the second term in righthand of Eq.(<ref>) should vanish, which means that we should consider the models in which ℂ(η)=0. This consideration give 𝒢=4ϵℋ/ϕ' Although the above discussion only considers the behavior at large scales, the form of the 𝒢 is scale-independent, so Eq.(<ref>) holds at all scales. During the slow-roll inflation, the Klein-Gorden equation of ϕ is 3Hϕ̇≈-V_,ϕ. Note that the Friedmann equation in the slow-roll inflation is H^2≈ V/3, then Eq.(<ref>) change to 𝒢=-2V_,ϕ/V Using Eq.(<ref>) one can get the form of coupling function f(ϕ) as: f(ϕ)∝exp(-∫^ϕV_,ϕ/Vdϕ)∝ V^-1 Eq.(<ref>) shows that the form of self-consistent coupling functiong depend on the potential of inflaton. In other words, the form of the coupling function is model dependent. § POWER SPECTRUM OF ELECTROMAGNETIC FIELD IN LARGE-FIELD MODEL To obtain the power spectrum of electromagnetic field, we consider the large-field inflation with polynomial potentials: V(ϕ)=Λ^4(ϕ/μ)^p    (p>0) where Λ is the “height" of potential, corresponding to the vacuum energy density during inflation, and μ is the “width" of the potential, corresponding to the change in the field value Δϕ during inflation<cit.>. According to Eq.(<ref>), the coupling function in this large-field model is f(ϕ)∝ϕ^-p By using ϕ̇≈-V,_ϕ/3H,  3H^2≈ V during slow-roll inflation one can have<cit.>: lna/a_i=-1/2p(ϕ^2-ϕ_i^2) where a_i and ϕ_i is the scale factor and value of inflaton at the beginning of the inflation. And then the coupling function can be written as the function of scale factor: f(a)∝(-2plna/a_i+ϕ_i^2)^-p/2 This form of f(ϕ) is different from conventional models (see <cit.> for review). In these models, it is often assumed that the coupling function has a power law form of scale factor. However, Eq.(<ref>) shows that the form of the self-consistent coupling function is not a power law form as in conventional models. This means that the power-law-like coupling function does not satisfy the self-consistent condition Eq.(<ref>). This is the main difference between the model we discuss here and the conventional models. 
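To make the last point concrete, the short numerical sketch below (our own illustration; the values p=4 and ϕ_i≈22 in reduced Planck units are an arbitrary choice giving roughly 60 e-folds) evaluates the effective power-law index d ln f/d ln a of the coupling function derived above. For a genuine power law f∝a^n this index would be constant, whereas here it drifts as ϕ^2=ϕ_i^2-2p ln(a/a_i) shrinks:

```python
# Effective power-law index of f(a) ∝ (phi_i^2 - 2 p ln(a/a_i))^(-p/2).
import numpy as np

p, phi_i = 4.0, 22.0                  # illustrative choices (phi_i in reduced Planck units)
N = np.linspace(0.0, 55.0, 12)        # e-folds, N = ln(a/a_i)

f = (phi_i**2 - 2.0*p*N)**(-p/2)
n_eff = np.gradient(np.log(f), N)     # d ln f / d ln a

for Ni, ni in zip(N, n_eff):
    print(f"ln(a/a_i) = {Ni:5.1f}   d ln f/d ln a = {ni:5.3f}")
# A power-law coupling f ∝ a^n would give a constant column; here the index grows
# from ~0.03 to ~0.4 as the inflaton rolls toward small phi.
```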
During slow-roll inflation era, the scale factor as the function of confomal time a(η) can be assumed as: a(η)=a_i|η/η_i|^1+β where η_i is the conformal time when the inflation begin. The case β=-2 corresponds to de Sitter space-time. During the inflation, η→0_-. Insert Eq.(<ref>) into Eq.(<ref>) we have: f∝[-2p(1+β)ln|η/η_i|+ϕ^2_i]^-p/2 It is worth noting that the coupling function will diverges at the conformal time |η_∞|=|η_i|exp[ϕ^2_i/2p(1+β)] When η=η_∞, ϕ=0, which means that the slow roll phase has been ended before η_∞. Insert Eq.(<ref>) into Eq.(<ref>) one can get: f∝[2p(1+β)ln|η_∞/η|]^-p/2∝[ln|η_∞/η|]^-p/2 The next step is to solve Eq.(<ref>) by using coupling function Eq.(<ref>) or Eq.(<ref>). Before that, it is convenient to set 𝒜≡ a(η)f(ϕ(η))A^(0)_j(η,k), where A^(0)_j(η,k) is Fourier mode of A^(0)_j. Eq.(<ref>) can be written as<cit.>: 𝒜”(η,k)+(k^2-f”/f)𝒜(η,k)=0 At the beginning of inflation, η≈η_i. This means f”/f≈ 0 and the Eq.(<ref>) change to 𝒜”+k^2𝒜=0  ⇒  𝒜∝exp(± i kη) At small scale limit, the solution should be “negative frequency"<cit.>. In order to satisfy the Wronskian condition<cit.>, the solution of 𝒜 should be 𝒜=1/√(2k)exp(-ikη) At the late time of inflation when η≈η_∞ we have f”/f≈p/2(p/2+1)T^-2 where T≡|η_∞|-|η|=η-η_∞, therefore T<0 during the slow-roll inflation. Notice that dT=d η, then the Eq.(<ref>) change to d^2/dT^2𝒜+[k^2-p/2(p/2+1)T^-2]𝒜=0 The general solution of Eq.(<ref>) is 𝒜=(-kT)^1/2[C_1(k)J_ν(-kT)+C_2(k)J_-ν(-kT)] where ν≡(1+p)/2. When |η|≫|η_∞|, Eq.(<ref>) should be approximated to Eq.(<ref>). While, when |η|≫|η_∞|, T≈η, therefore Eq.(<ref>) just need to be approximated to 𝒜=1/√(2k)exp(-ikT) Eq.(<ref>) can be used to determine the coefficient C_1, C_2. We focus on the behavior on large scale, so we take the large scale limit: -kη→0⇒-kT→0. After ignoring the decay term, we have: 𝒜≈ k^-1/2c(γ)(-kT)^1-γ where c(γ)=√(π/2^3-2γ)exp[iπ(1+γ)/2]/Γ(3/2-γ)cos(πγ) and γ≡1+p/2>1. Therefore the power spectrum of magnetic field is dρ_B/dk=1/kdρ_B/dln k=c^2(γ)/2π^2kH^4(kη)^6-2γ(1-η_∞/η)^2-2γ The power spectrum of electric field can also be obtain: dρ_E/dk=1/kdρ_E/dln k≈d^2(γ)/2π^2kH^4(kη)^4-2γ(1-η_∞/η)^-2γ where d(γ)=√(π)exp[iπ(1+γ)/2]/2^-γ+1/2Γ(-γ+1/2)cos(πγ) From Eq.(<ref>,<ref>) we can see that the spectral index of magnetic power spectrum is n_B=6-2γ and the spectral index of electric power spectrum is n_E=4-2γ. Therefore, scale invariant spectrum of magnetic field can be got when γ=3 i.e. p=4. When γ=2 i.e. p=2 one can get the scale invariant spectrum of electric field. If the magnetic field spectrum is scale invariant (γ=3), the electric field spectrum is red. Because of γ>1, it is can be seen from Eq.(<ref>) and Eq.(<ref>) that the spectrum of magnetic and electric field will increase rapidly when η→η_∞. This can cause backreaction problem. However, as we discussed above, the slow-roll inflation will ended before η_∞. We assume that the moment when the slow-roll ends is η_end, and the value of inflaton at this moment is ϕ_end. 
Then we can get approximately that |η_∞/η_end|-1≈ln|η_∞/η_end|=-ϕ_end^2/2p(1+β)≡𝒴 Insert Eq.(<ref>) into Eq.(<ref>,<ref>) one can estimate the power spectrum of magnetic and electric field at the end of slow-roll inflation: dρ_B/dln k|_end ≈ c^2(γ)/2π^2H^4_end(k/a_endH_end)^6-2γ𝒴^2-2γ dρ_E/dln k|_end ≈ d^2(γ)/2π^2H^4_end(k/a_endH_end) ^4-2γ𝒴^-2γ To avoid the backreaction problem, the energy density of electromagnetic field can not exceed the energy density of the inflaton, this require that dρ_B/dln k|_end+dρ_E/dln k|_end<ρ_end We focus on scale invariant spectrum of magnetic field, i.e. γ=3, p=4, then Eq.(<ref>) means H_end^4/2π^2𝒴^-6d^2(γ)(k/a_0H_0)^-2(a_0H_0/a_endH_end)^-2 <3/8πH^2_endM_pl^2 The ratio (a_0H_0)/(a_endH_end) can be estimated as<cit.>: a_0H_0/a_endH_end≈1.51×10^-29h/R,      (h≈0.72) where R depend on the reheating phase, and for simple estimate one can chose R≈ρ_end^1/4 as in <cit.>. Insert these into Eq.(<ref>) we can have ρ_end<(ϕ_endM_pl^1/3)^8×10^-42 If one require the ρ_end should be satify the requirements of nuleosynthesis (ρ_nuc≈10^-85M_pl^4), then ϕ_end>10^-43M_pl^4/3 This means that as long as the slow-roll era ends before the inflaton decays too small, the backreaction problem can be avoided. We also can estimate the present day value of magnetic field strength simply. From Eq.(<ref>) we know that the power spectrum of the model in this paper is amplified by the factor 𝒴^2-2γ compare with the conventional model <cit.>. This factor is independ on the scale factor. If one assume the instant reheating, then the present day power spectrum of magnetic field is also amplified by this factor. While, this factor is depend on the detail of the inflation model, specifically on ϕ_end. For example, for scale invariant magnetic pectrum, i.e. V(ϕ)∝ϕ^4 , this factor is (8/ϕ_end^2)^4. In ϕ^4 inflation model, the slow-roll era ends at ϕ_end≈ M_pl/2 <cit.>, and this factor change to (4/π)^4. Therefore, the present day value of magnetic field strength is amplified by (4/π)^2≈1.6 compare with the conventional model <cit.>. This means that the magnetic field on coherence scale 1Mpc today is B_0≈8×10^-10G(H/10^-5M_pl) This result satisfy the lower bound of γ-ray observation B∼10^-15G <cit.>. Notice that the lower limit of ϕ_end is very small (see Eq.(<ref>)), then the magnitude of the 𝒴 factor has a large span. Therefore, we can use ϕ_end as a tunable parameter of the model, and adjust the value of it to make the predicted magnetic field strength of the model consistent with today's observations. § STRONG COUPLING As the last part of this paper, we will discuss the problem of strong coupling qualitatively. From Eq.(<ref>) we know that the coupling function is monotonically increasing function during the slow-roll of inflation. This will lead to the strong coupling problem which was first pointed out in <cit.>. To avoid this problem, one can assume a decreasing coupling function during the preheating era like in <cit.>. However, in this paper, we found that the coupling fucntion should satisfy the Eq.(<ref>). An attractive possibility is that this equation can lead to an decrease coupling function after the slow-roll of inflation. Therefore it is interesting to discuss the behavior of coupling function during the preheating. It should be noted that, Eq.(<ref>) is satisfied only in slow-roll era. During the preheating, the coupling function should be obtain by solving the Eq.(<ref>,<ref>) together. 
One can eliminate the A_j by combining Eq.(<ref>,<ref>) and get 𝒟_1h'+𝒟_2h+𝒟_3h^2+𝒟_4=0 where 𝒟_1=ϕ'^2-2ℋq 𝒟_2=4qϕ'^2-8ℋq^2-2ϕ'ϕ”+2ℋ'q+2ℋq' 𝒟_3=-ϕ'^2+2ℋq 𝒟_4=-4ϕ'^2q'-ϕ'^4+4ℋqϕ'^2-4ℋ^2q^2       +8ϕ'ϕ”q-8ℋ'q^2 q≡Φ/δϕϕ',  h≡𝒢ϕ'=2f'/f Eq.(<ref>) is the equation that the coupling function needs to satisfy during the preheating. On large scale, the Fourier mode of Bardeen potential and perturbation of inflaton can be written as <cit.> Φ_k=𝒞(1-H/a∫ adt),   δϕ_k≈𝒞ϕ̇(a^-1∫ adt) Consider the case where the magnetic field is scale invarant p=4, which means that we should consider the ϕ^4 preheating. In this model of preheating, a∝√(t) and the evolution of ϕ can be approximated as <cit.> ϕ≈ϕ̃/acos(0.8472√(λ)ϕ̃η) For qualitative discussion, we assume ϕ=1/ηcosη The evolution curve of the coupling function given in the Fig.<ref>. It can be seen that the coupling function will increase during the preheating era. This will lead to strong coupling problem. In conventional models, one strategy to solve strong coupling problem is to ansatz a decreasing coupling function during the preheating era directly as in <cit.>. However, in the models we consider here, the coupling function should satisfy Eq.(<ref>), which means that the coupling function is model dependent. In other words, to get the decreasing coupling function, we should modify the preheating model. We consider a toy model in which there are some modifications at the beginning of ϕ^4 preheating. As the preheating process goes on, the modificatons will vanish and the preheating model will approximate to the ϕ^4 model. These modifications affect both the dynamics of background ϕ, ℋ... and perturbation Φ, δϕ..., and affect the Eq.(<ref>). We assume that the effect of these modifications of preheating model change the Eq.(<ref>) to 𝒟_1ĥ'+𝒟_2ĥ+𝒟_3ĥ^2+𝒟_4=0 where ĥ(η)≡ h(η)+r(η) and r(η) is a decreasing function of η. In Eq.(<ref>), the dynamics of ϕ,ℋ,δϕ,Φ are the same as in ϕ^4 model, which means that we assume the modifications of preheating model equivalent to introduce a correction function r(η) in Eq.(<ref>). As the preheating proceeds, the correction function will tend to zero and ĥ→ h, therefore h will satisfy the Eq.(<ref>) at late time of preheating and the preheating model approximate to the ϕ^4 model as we assumed. One simple choice of r(η) is r(η)=1/η^n Notice that the form of r(η) depend on the modifications of preheating model, then the parameter n can be seen as a parameter of preheating model. This parameter can be choosen to avoid the strong problem. The evolution of coupling function during preheating with different choices of r(η) are given in Fig.<ref>. It can be seen that a decreasing coupling function can be obtained by selecting the appropriate parameter n (i.g. n=1 in Fig.<ref>), and then the strong coupling can be avoided. § SUMMARY AND DISCUSSION In this paper, we discuss the inflationary magnetogenesis with a coupling function which can keep the action to be self-consistent. This self-consistence coming from the time component of Maxwell's equation which is the secondary constraint for electromagnetic field. Under the FRW metric, this is a trival equation. However, once we consider the perturbed metric, this equation become non-trival and can be seen as a restrict equation for coupling function f(ϕ), see Eq.(<ref>). Taking this as a starting point, we calculated the power spectrum of the electric and magnetic fields in the large-scale inflation model. 
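For readers who want to reproduce the qualitative behavior discussed above, the following sketch (entirely our own construction) integrates the equation for h=2f'/f with scipy. The background functions are placeholders: we take ϕ=cos η/η as in the text, and ℋ=1/η, q=2/η as simple stand-ins motivated by a∝η together with the quoted large-scale relations; these are assumptions made only for illustration, not the exact choices behind the figures. A growing positive h signals an increasing f, i.e., the strong-coupling trend discussed above; the correction r(η)=1/η^n can be explored by replacing h with ĥ=h+r in the same routine.

```python
# Qualitative integration of D1 h' + D2 h + D3 h^2 + D4 = 0 during preheating.
import numpy as np
from scipy.integrate import solve_ivp

def background(eta):
    phi1 = -np.sin(eta)/eta - np.cos(eta)/eta**2                             # phi'
    phi2 = -np.cos(eta)/eta + 2*np.sin(eta)/eta**2 + 2*np.cos(eta)/eta**3    # phi''
    H, Hp = 1.0/eta, -1.0/eta**2    # conformal Hubble rate and its derivative (a ∝ eta)
    q, qp = 2.0/eta, -2.0/eta**2    # placeholder for q(eta) = (Phi/deltaphi) phi'
    return phi1, phi2, H, Hp, q, qp

def rhs(eta, y):
    h = y[0]
    p1, p2, H, Hp, q, qp = background(eta)
    D1 = p1**2 - 2*H*q
    D2 = 4*q*p1**2 - 8*H*q**2 - 2*p1*p2 + 2*Hp*q + 2*H*qp
    D3 = -p1**2 + 2*H*q
    D4 = -4*p1**2*qp - p1**4 + 4*H*q*p1**2 - 4*H**2*q**2 + 8*p1*p2*q - 8*Hp*q**2
    return [-(D2*h + D3*h**2 + D4)/D1]

sol = solve_ivp(rhs, (1.0, 10.0), [0.01], rtol=1e-8)
print("eta reached:", sol.t[-1], "  h(eta) at the end:", sol.y[0, -1])
```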
We estimate the present day value of magnetic field, and the result satisfy the lower bound of γ-ray observation. We found that, the power spectrum obtained in this paper is multiplied by a factor related to ϕ_end compared to the conventional model<cit.>. This means that one can generate the required magnetic field by choosing a suitable inflation model, or conversely use the magnetic field observed today to limit ϕ_end. This provides a possibility to use today's large-scale magnetic field strength to estimate the value of the inflaton field at the end of the slow-roll. On the other hand, to avoid the backreaction problem at the end of inflation, the value of inflaton at the end of slow-roll era ϕ_end should have a lower limit (∼10^-43)(see Eq.(<ref>)). This lower limit is so small that the upper limit of the factor 𝒴 can be large. This means that the model discussed in this paper can generate a sufficiently strong magnetic field without causing backreaction problem. The strong coupling problem can also appear in the situation which is discussed here. One way to solve the problem of strong coupling is to introduce an decreasing coupling function in the preheating era. In this paper, the coupling function is determined by Eq.(<ref>) or Eq.(<ref>) in preheating era. Unfortunately these equations give an increasing coupling function. To make the coupling function change to a decreasing function, it is necessary to introduce a correction function in the early stage of preheating. So it is a very interesting open topic to find a suitable preheating model in which the coupling function can naturally be determined as an decreasing function. § ACKNOWLEDGMENTS This work was supported by the Fundamental Research Funds for the Central Universities of Ministry of Education of China under Grants No. 3132018242, the Natural Science Foundation of Liaoning Province of China under Grant No.20170520161 and the National Natural Science Foundation of China under Grant No.11447198 (Fund of theoretical physics). ws-mpla
http://arxiv.org/abs/2307.04877v1
20230710195631
Engineering bound states in continuum via nonlinearity induced extra dimension
[ "Qingtian Miao", "Jayakrishnan M. P. Nair", "Girish S. Agarwal" ]
quant-ph
[ "quant-ph" ]
[email protected] Institute for Quantum Science and Engineering, Texas A&M University, College Station, TX 77843, USA Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843, USA [email protected] Institute for Quantum Science and Engineering, Texas A&M University, College Station, TX 77843, USA Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843, USA [email protected] Institute for Quantum Science and Engineering, Texas A&M University, College Station, TX 77843, USA Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843, USA Department of Biological and Agricultural Engineering, Texas A&M University, College Station, TX 77843, USA Bound states in continuum (BICs) are localized states of a system possessing significantly large life times with applications across various branches of science. In this work, we propose an expedient protocol to engineer BICs which involves the use of Kerr nonlinearities in the system. The generation of BICs is a direct artifact of the nonlinearity and the associated expansion in the dimensionality of the system. In particular, we consider single and two mode anharmonic systems and provide a number of solutions apposite for the creation of BICs. In close vicinity to the BIC, the steady state response of the system is immensely sensitive to perturbations in natural frequencies of the system and we illustrate its propitious sensing potential in the context of experimentally realizable setups for both optical and magnetic nonlinearities. Engineering bound states in continuum via nonlinearity induced extra dimension Girish S. Agarwal August 12, 2023 ============================================================================== § INTRODUCTION The localization of electromagnetic waves has been a subject of intense research over the past few decades <cit.>. It is well known that the solutions of the Schrödinger equation below the continuum threshold possess discrete energies and are square integrable in nature. In contrast, above the continuum threshold, energy eigenvalues are continues and the solutions are unbounded. It has been, however, shown that there exist localized states within the continuum of energies, namely the bound states in continuum. BICs were first proposed in 1929 by von Neumann and Wigner <cit.> in an electronic system and Stillinger and Herrick later extended it to a two electron wave function <cit.>. However, a fist experimental observation of BICs came only in 1992 by Capasso et al, where they demonstrated an electronic bound state in a semiconductor superlattice <cit.>. The emergence of BICs in electromagnetic systems can be explicated by investigating the effective non-Hermitian Hamiltonian ensuing from the Maxwell's equations, resulting in complex resonance frequencies ω. BICs are, in essence, non-radiating solutions of the wave equations, ie., modes of the system with Im(ω) approaching zero. In the last decade, BICs have been realized in a multitude of settings involving, for example, electronic <cit.>, acoustic <cit.> and photonic <cit.> subsystems. In particular, owing to their excellent tunability, photonic systems have emerged as an excellent candidate in recent years with applications including, but not limited to the design of high-Q resonators <cit.>, lasing <cit.>, sensing <cit.>, filters <cit.>, etc. 
In <cit.>, Romano et al reported an optical sensor underpinned by BIC for the fine grained estimation of perturbations in a dielectric environment. Another recent work <cit.> reported the development of nanophotonic sensor based on high-Q metasurface elements for molecular detection with applications in biological and environmental sensing. Some other recent intriguing research include the enhanced sensing of spontaneous emission <cit.>, vortex generation <cit.>, switches <cit.>, efficient higher harmonic generation <cit.> and many more. In this paper, we propose Kerr nonlinearities as a resource to engineer BICs. Such nonlinearities can be observed in a plentitude physical systems ranging from optical cavities <cit.> to magnetic systems <cit.>, which has been a prime subject of interest, with many exotic effects <cit.>. Here, we present a variety of solutions for BICs relevant to single and two mode bosonic systems having a Kerr type of anharmonicity. The resulting BICs are strongly sensitive to perturbations in the system parameters, in particular variations in characteristic detunings which owes its origin to the existence of first and second order poles in the response function. In addition, we discuss a number of experimental platforms germane to our analysis of the nonlinear systems. In particular, we specifically illustrate its sensing capabilities of the two mode anharmonic system in the context of a few experimentally realizable systems. The manuscript is organized as follows. In section <ref>, we discuss the well known schemes for the generation of BICs without involving the use of Kerr nonlinearities. Subsequently, in section <ref>, we provide a detailed analysis of the protocol to achieve BICs in a single mode system with passive Kerr nonlinearity and the accompanying sensitivity to perturbations in the system. We extend the study into the domain two mode active nonlinear system in section <ref> and establish its equivalence with the single mode results in Appendix <ref>. Finally, we conclude our results in section <ref>. § BIC IN A COUPLED TWO-MODE SYSTEM We commence our analysis by revisiting the emergence of BICs in a generic two-mode system without any nonlinearities. To this end, we consider a system comprising of modes a and b coupled through a complex parameter J and driven externally at frequency ω_d. The dynamics of the system in the rotating frame of the drive is given by Ẋ=-iℋ X+F_in, where, X^T=[a b], F_in describes the modality of external driving and ℋ is the effective non-Hermitian Hamiltonian provided by ℋ=[ Δ_a-iκ J; J Δ_b-iγ ]. Here, Δ_i=ω_i-ω_d where i∈{a,b}, ω_a and ω_b are the characteristic resonance frequencies of the modes a and b, and κ, γ denote their respective decay rates. Note that the real and imaginary parts of J=g-iΓ represent the coherent and dissipative form of coupling between the modes. The eigenvalues of ℋ are given by λ_±=Δ_a+Δ_b/2-iγ̅±√((Δ_a-Δ_b/2-iγ̃)^2+(g-iΓ)^2), where γ̅=κ+γ/2 and γ̃=κ-γ/2. One of the ways to bring to naught the imaginary part of the eigenvalues is to employ engineered gain into the system, that is to make κ=-γ. This in conjunction with the absence of dissipative coupling, viz, Γ=0 and Δ_a=Δ_b yield the eigenvalues λ_±=Δ_a±√((g^2-γ^2)). Palpably, the system in the parameter domain g≥γ is earmarked by the observation of real eigenspectra <cit.>. 
Note, en passant, that the system under this parameter choice lends itself to a PT-symmetric description of the effective Hamiltonian, featuring an exceptional point (EP) in the parameter space at g=γ. On the other hand, the region g<γ affords eigenvalues which form a complex-conjugate pair, wherein the amplitude of one of the modes grows exponentially in time whereas the other decays. In the context of PT-symmetric systems, it is important to notice that EPs, which have found applications in sensing <cit.>, are functionally analogous to BICs. There exists another interesting parameter domain, conformable with anti-PT symmetry, i.e., {PT,ℋ}=0, that can spawn a BIC without involving external gain. Such a system necessitates the absence of coherent coupling, that is to say g=0, κ=γ and Δ_a=-Δ_b, begetting λ_±=-iκ±√(Δ_a^2-Γ^2), which take a purely imaginary form when |Δ_a|≤Γ. In contrast, the |Δ_a|>Γ phase leads to decaying solutions with the real parts of the eigenvalues flanked on either side of the external drive frequency. Observe that when Δ_a=0 and as Γ approaches κ, the system entails a BIC, marked by the existence of a vanishing eigenvalue, i.e., λ_+→ 0, thereby eliciting a pole at the origin in the response to the external drive. The anti-PT symmetric system does not warrant the use of gain; however, it stipulates the use of dissipative coupling, which can be engineered by coupling the subsystems via a common intermediary reservoir <cit.>. It makes for a relevant observation that, in general, the effective Hamiltonian in Eq. (1) does not yield non-radiating solutions of the Maxwell equations, especially when J=0, i.e., when the modes are decoupled. In the following section, we provide a mechanism to engineer a BIC in a nonlinear system which does not depend on the underlying symmetries of the system. More importantly, the protocol can be implemented even in the limit where the subsystems are completely decoupled. In fact, the existence of the BIC is an inalienable consequence of anharmonicities present in the system and the concomitant magnification of the dimensionality. The mechanism can be extended to two-mode nonlinear systems, and we provide a detailed analysis in section <ref>.
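Before moving on, a minimal numerical illustration of the anti-PT route to a BIC sketched above may be useful. The snippet below is our own addition (κ=γ=1 sets arbitrary units): it diagonalizes the 2×2 effective Hamiltonian written above with g=0 and Δ_a=Δ_b=0 and shows the least-damped eigenvalue approaching the real axis as the dissipative coupling Γ approaches κ:

```python
# Eigenvalues of H = [[Delta_a - i kappa, J], [J, Delta_b - i gamma]] with J = g - i Gamma.
import numpy as np

kappa = gamma = 1.0          # equal decay rates
Delta_a = Delta_b = 0.0      # driven on resonance (consistent with Delta_a = -Delta_b)
g = 0.0                      # purely dissipative coupling (anti-PT configuration)

for Gamma in (0.2, 0.5, 0.8, 0.95, 0.999):
    J = g - 1j*Gamma
    H = np.array([[Delta_a - 1j*kappa, J],
                  [J, Delta_b - 1j*gamma]])
    lam = np.sort_complex(np.linalg.eigvals(H))
    print(f"Gamma = {Gamma:5.3f}   Im(lambda) = {np.round(lam.imag, 4)}")
# As Gamma -> kappa, one eigenvalue's imaginary part -> 0: the bound state in continuum.
```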
Notably, there exist two turning points characterized by the coordinates (I_±,α_±) of the I-α curve, subject to dI/dα=0, beyond which we observe an abrupt change in α. The exact form of α_± is given by α_±=-4Δ̃±2√(Δ̃^2-3)/3, while I± can be obtained from Eq. (4) by substituting the above-mentioned solutions. Moreover, there is a cut off for the pump power beyond which the bistable characteristics set in. The critical magnitude of I^c is defined by the inflection point in the I-α graph described by the condition dI/dα=d^2I/dα^2=0, providing us I^c=-α^2/2(Δ̃+α/2). For a given set of parameters U, α and γ, we would like to perturb the system in Δ, modifying the mode variable into a=a_0+δ a, in which δ a characterizes the perturbations of the mode a about a_0. The dynamics of the perturbations are governed by the following effective Hamiltonian ℋ̃=[ Δ̃+α-i β; -β^* -Δ̃-α-i ], where β=2U/γa_0^2. The complex eigenvalues of the Eq. (7) denoted as λ refers to the normal modes of the system and they can be obtained by solving the characteristic polynomial equation λ^2+2iλ+|β|^2-(Δ̃+α)^2-1=0. Notably, in the limit when the determinant of the Hamiltonian (α/2)^2-(Δ̃+α)^2-1→0, one of the solutions of the Eq. (7) becomes vanishingly small. Note that we are working in the frame rotating at frequency ω_d. Therefore, under this condition, the imaginary part of one of the eigenvalues approaches zero, alluding to the generation of a BIC, as depicted in the Fig. <ref> (b). It is worth noting that α≠0, i.e., U≠0 is a prerequisite for the existence of such a state. In other words, the generated BIC owes its origin entirely to the Kerr anharmonicities of the mode a. For a given value of the parameter Δ, the BICs exist at (I_±,α_±), which are exactly the turning points of the I-α curve as depicted in the Fig. <ref>(a,c). Application of nonlinearity induced BIC in sensing: The existence of BICs also leads to the enhanced sensitivity of the nonlinear response to perturbations in the system parameters. This can be accredited to the existence of the first and second order poles at α=α_± in the first order derivative of the nonlinear response dα/dΔ̃=-8α(Δ̃+α/2)/3(α-α_-)(α-α_+), obtained from differentiating Eq. (4) by Δ̃. To further elucidate the origin of sensitivity, we expand I around the turning points of the I-α curve, that is, α=α_±, I=I_±+∂ I/∂αϵ+∂^2 I/∂α^2ϵ^2+O(ϵ^3), where ϵ=α-α_± and I_± are obtained by substituting α_± in Eq. (4). Consequently, at the turning points of the curve, ∂ I/∂α=0 and we have dα/dΔ̃∼|I-I_±|^-1/2. On the other hand, close to inflection point sensitivity has the functional dependence dα/dΔ̃∼|I-I_c|^-2/3. In practice, one can choose a value of Δ and the Eq. (8) in conjunction with Eq. (4) respectively determine the corresponding α_± and I_± appropriate for sensing. As I is varied tantalizingly close to I_±, any perturbations in the parameter Δ translate into a prodigious shift in the mode response as perceptible from Fig. <ref>(d). Note that the sensitivity to aberrations in Δ is a direct artifact of the existence of a BIC. Bearing in mind the generality of our analysis, it is interesting to observe the variety of experimental platforms available to implement our scheme for investigating BICs produced by nonlinearity induced extra dimensions. Some of the well-known examples in the context of passive nonlinearities and bistability include Sodium vapor <cit.>, Ruby <cit.>, Kerr liquids like CS_2, nitrobenzene, electronic nonlinearity of Rb vapor etc. to name a few <cit.>. 
In the subsequent section, we stretch the analysis into the case of a two mode anharmonic system. § ENGINEERING BIC IN A TWO MODE KERR NONLINEAR SYSTEM We begin this section by considering a two mode active Kerr nonlinear system that consists of modes a and b coupled coherently through a real parameter g, and b is externally pumped at a frequency of ω_d. The Hamiltonian of the system can be expressed as H/ħ =ω_aa^† a+ω_b b^† b+g(b^† a+ba^†) +Ub^† bb^† b+iΩ(b^† e^-iω_d t-be^iω_d t), where ω_a and ω_b represent the resonance frequencies of the modes a and b, the coefficient U quantifies the strength of Kerr nonlinearity, and Ω denotes the Rabi frequency of external driving. The systems characterized by the aforementioned Hamiltonian are prevalent in nature, for example, a collection of two-level atoms under the conditions of no saturation which act as an active Kerr nonlinear medium in a driven resonant cavity. The dynamics of the system in the rotating frame of the drive is provided by ȧ=-(iδ_a+γ_a)a-igb, ḃ=-(iδ_b+γ_b)b-2iUb^† bb-iga+Ω, where δ_a=ω_a-ω_d, δ_b=ω_b+U-ω_d, and γ_a and γ_b denote the dissipation rates of the modes a and b, respectively. In the long-time limit, the system decay into a steady state, i.e., a→ a_0, b→ b_0 lending the following nonlinear cubic equation I=4 x^3+4δ̃_Rx^2+|δ̃|^2x, where I=UΩ^2, x=U|b_0|^2, δ̃=δ_b-iγ_b-g^2/δ_a-iγ_a, and we define δ̃_R=δ_b- g^2δ_a/δ_a^2+γ_a^2 and δ̃_I=-γ_b-g^2γ_a/δ_a^2+γ_a^2 as the real and imaginary parts of δ̃, respectively. Notice that δ̃_I is negative. Under the criterion δ̃_R<√(3)δ̃_I, there exist three possible roots for x, leading to a bistable response, wherein, two of the roots are stable while the third is unstable. Conditions for the existence of BIC: To analyze the effect of perturbations around the steady state, we use a linearized approximation by letting a=a_0+𝒜 and b=b_0+ℬ, where 𝒜 and ℬ signify the perturbations of mode a about a_0 and mode b about b_0, respectively. The dynamics of the perutrbations ψ^T = [𝒜,ℬ,𝒜^†,ℬ^†] are governed by the following equation, ∂ψ/∂ t=-iℋψ+ℐ, where ℋ is the effective Hamiltonian ℋ=( [ δ_a-iγ_a g 0 0; g δ_b+4x-iγ_b 0 2Ub_0^2; 0 0 -δ_a-iγ_a -g; 0 -2Ub_0^*2 -g -δ_b+4x-iγ_b; ]), and ℐ=0 for the steady state. The normal modes of the system are hallmarked by complex eigenvalues of Eq. (<ref>), which can be obtained by solving the characteristic polynomial equation (ℋ-λ𝐈)=0. Conspicuously, when ℋ=0, one of the eigenvalues can approach zero (in the rotating frame of the drive), spawning real eigenvalues and thereby indicating the emergence of a BIC. Therefore, we first determine the parameter domain consistent with condition 0 =ℋ =12(δ_a^2+γ_a^2)x^2+8(-δ_a g^2+δ_bδ_a^2+δ_bγ_a^2)x +(g^2-δ_aδ_b+γ_aγ_b)^2+(δ_aγ_b+δ_bγ_a)^2. It is worth noting that the existence of BIC relies on the prerequisite x=U|b_0|^2≠0. In other words, the Kerr anharmonicities of the mode b are solely responsible for the creation of the BIC. Upon solving Eq. (<ref>), we discover that BICs can exist at points x_±=-1/3δ̃_R±1/6√(δ̃_R^2-3δ̃_I^2), which are exactly the turning points of the I – x curve given in Eq. (<ref>), obtained from solving the condition dI/dx=0. While invoking the linearized dynamics, one must make sure that the dynamical system is stable, which is to ensure that the eigenvalues of ℋ have negative imaginary parts. Consequently, we define λ_R and λ_I as the real and imaginary parts of the complex eigenvalues, respectively, and let λ'=-iλ. 
The characteristic polynomial equation can then be written as 0=(ℋ-iλ'𝐈)=λ'^4+a_1λ'^3+a_2λ'^2+a_3λ'+a_4, where a_1=2(γ_a+γ_b), a_2=δ_a^2+2g^2+(γ_a^2+4γ_aγ_b+γ_b^2)+(12 x^2+8δ_b x+δ_b^2), a_3 =2δ_a^2γ_b+2δ_b^2γ_a+2(γ_aγ_b+g^2)(γ_a+γ_b) +16δ_bγ_ax+24γ_a x^2, a_4=ℋ. The stability conditions of the system can be obtained by employing the Routh-Hurwitz Criteria, yielding the constraints a_1>0, a_3>0, a_4>0, and a_1a_2a_3>a_3^2+a_1^2a_4. Apparently, the first two conditions are met automatically, and we find a_1a_2a_3-a_3^2-a_1^2 a_4=4γ_aγ_b(12 x^2+8δ_b x-δ_a^2+δ_b^2)^2 +4γ_aγ_b(γ_a+γ_b)^2[24x^2+16δ_b x+2(δ_a^2+δ_b^2)+(γ_a+γ_b)^2] +4g^2(γ_a+γ_b)^2[12x^2+8(δ_a+δ_b)x+(δ_a+δ_b)^2+(γ_a+γ_b)^2], which is manifestly positive fulfilling the final criterion. The only remaining criterion a_4=ℋ>0 is satisfied along with δ̃_R<√(3)δ̃_I and x∈ (0,x_-)∪(x_+,∞). Sensing capabilities of nonlinearity induced BIC: The importance of the above results can be legitimized in the optical domain with several well known systems, including, for instance Sagnac resonators <cit.> among other settings <cit.>. The presence of BICs at points x_± contributes to the significantly improved sensitivity of the nonlinear response to variations in the system parameters, in particular, to perturbations in natural frequency of the active nonlinear medium. The remarkable sensitivity is a direct upshot of the existence of first or second order poles at x=x_± in the first derivative of the nonlinear response which has the functional form d x/dδ_b=-x(x+δ̃_R/2)/3(x-x_-)(x-x_+), analogous to Eq. (9). Therefore, it immediately follows that adjacent to the turning points, we have d x/d δ_b∼I(x_±)-I^-1/2. By the same token, close to the inflection point, the sensitivity scales as I_c-I^-2/3. Sensing in magnetic systems: In view of the extensive studies on nonlinearities <cit.> in ferrimagnetic spheres, it is worthwhile to consider magnetic systems to implement the sensing scheme. Note that the anharmonicities in optical systems is a direct consequence of the nonlinear response of electrical polarization. In stark contrast, the anharmonic component in a magnetic system originates from the nonlinear magnetization. We consider a single ferromagnetic YIG interacting with a microwave cavity as portrayed in Fig. <ref>. The ferromagnet couples strongly with the microwave field at room temperature, giving rise to quasiparticles, namely cavity-magnon polaritons. The YIG acts as an active Kerr medium, which can be pinned down to the magnetocrystalline anisotropy <cit.> of the sample. A strong microwave pump of power P_d and frequency ω_d is used to stimulate the weak anharmonicity of the YIG, which is of the order 10^-9 Hz. The full Hamiltonian of the cavity-magnon system is consistent with Eq. (<ref>) where the mode operators a, b are respectively superseded by cavity and magnon annihilation operators. The quantities ω_a and ω_b represent the cavity and Kittel mode resonance frequencies. Rabi frequency of external pumping takes the form Ω=γ_e√(5πρ d P_d/3c), where γ_e is the gyromagnetic ratio, ρ denotes the spin density of the YIG with a diameter d and c stands for the velocity of light. For experimentally realizable parameters of the system, we plot in Fig. <ref> x from Eq. (12) by varying δ_b and I and the results replicate the physics described in Fig. <ref>. § CONCLUSIONS In conclusion, we have demonstrated a new scheme apropos of single and two-mode Kerr nonlinear systems to engineer BICs. 
In the context of single-mode systems, we considered a passive Kerr nonlinearity in an optical cavity that demonstrates bistability. As the system parameters are tuned in close proximity to the turning points of the hysteresis, a BIC springs into existence, marked by a vanishing linewidth of the mode. In the neighborhood of the BIC, the steady state response was observed to show pronounced sensitivity to perturbations in the detunings. This remarkable sensitivity can be traced down to the existence of poles in the first order derivative of the response with respect to the perturbation variable. The sensitivity to perturbations scales as the inverse square root of the deviation of the external pump power from its value at the turning points. Further, we extended the analysis into the regime of two-mode systems possessing an active nonlinear medium. Our analysis is generic, applicable to a large class of systems, including both optical and magnetic systems. Some of the passive nonlinear optical platforms include nonlinear media like CS_2, nitrobenzene and Rb vapor, whereas high-quality Sagnac resonators support active Kerr nonlinearities. In addition, we considered an active Kerr medium provided by magnetic systems interacting with a microwave cavity, where research activity has flourished of late. In the domain of large detunings of the active Kerr medium, the two-mode setup can be described by an effectively single-mode anharmonic system, in lockstep with the results from the passive Kerr nonlinearity in an optical cavity. § ACKNOWLEDGEMENTS The authors acknowledge the support of The Air Force Office of Scientific Research [AFOSR award no FA9550-20-1-0366], The Robert A. Welch Foundation [grant no A-1943] and the Herman F. Heep and Minnie Belle Heep Texas A&M University endowed fund. Q. M. and J. M. P. N. contributed equally to this work. § EQUIVALENCE BETWEEN SINGLE-MODE AND TWO-MODE ANHARMONIC SYSTEM So far, we have discussed schemes for the creation of BICs in single and two mode nonlinear systems. It is worth mentioning that there exists a close correspondence between the two-mode and single-mode results in the limit of large δ_b. To enunciate this, let us delve into the second part of Eq. (11). In the long-time limit, we have -(iδ_b+γ_b)b-2iUb^† bb-iga+Ω=0. Note that the effect of γ_b pales in comparison with δ_b, and we can recast the above equation into b=-(ga+iΩ)/δ_b[1+x]^-1, where x=2U|b|^2/δ_b. For the purpose of simplification, we set Ω=0 and assume that the external drive is on the cavity at Rabi frequency ℰ. Owing to the largeness of δ_b, it is discernible that x≪1. Therefore, we can revise the above equation as b=-(ga+iΩ)/δ_b[1-x+O(x^2)]. Keeping only terms up to first order in x, we are left with b=-ga/δ_b[1-2U|b|^2/δ_b]. Upon iterating the solution and omitting the higher order terms, the approximate solution for b morphs into b=-ga/δ_b+2(g/δ_b)^3(U/δ_b)|a|^2a. Substituting this into the first part of Eq. (11), we obtain an effective single-mode description of the dynamics of the system, ȧ=-(iδ̃_a+γ_a)a-iŨ|a|^2a+ℰ, where δ̃_a=δ_a-g^2/δ_b and Ũ=2(g/δ_b)^4U. Strikingly, the preceding equation reproduces Eq. (3) with Δ, γ and U respectively replaced by δ̃_a, γ_a and Ũ, unfolding the equivalence between two-mode and single-mode nonlinear systems in the realm of large δ_b.
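The bookkeeping of the adiabatic elimination above can be verified in a few lines. This check is our own addition; the cavity amplitude is treated as a real symbol purely for brevity, which does not affect the counting of powers of g, U and δ_b:

```python
# Check that -i g b, with b adiabatically eliminated to first order in x = 2U|b|^2/delta_b,
# yields the detuning shift -g^2/delta_b and the effective Kerr coefficient 2 (g/delta_b)^4 U.
import sympy as sp

g, db, U, a = sp.symbols('g delta_b U a', positive=True)

b = -g*a/db + 2*(g/db)**3*(U/db)*a**3          # eliminated mode; |a|^2 a -> a^3 for real a
coupling = -sp.I*g*b                            # the term -i g b entering the equation for a

expected = sp.I*(g**2/db)*a - sp.I*2*(g/db)**4*U*a**3
print(sp.simplify(coupling - expected))         # expected output: 0
```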
http://arxiv.org/abs/2307.10049v1
20230714080509
Reciprocal microswimming in fluctuating and confined environments
[ "Yoshiki Hiruta", "Kenta Ishimoto" ]
cond-mat.soft
[ "cond-mat.soft", "physics.flu-dyn" ]
APS/123-QED [email protected] [email protected] Research Institute for Mathematical Sciences, Kyoto University, Kyoto 606-8502, Japan From bacteria and sperm cells to artificial microrobots, self-propelled microscopic objects at low Reynolds number frequently perceive fluctuating mechanical and chemical stimuli, and contact exterior wall boundaries both in nature and in the laboratory. In this study, we theoretically investigate the fundamental features of microswimmers, focusing on their reciprocal deformation. Although the scallop theorem prohibits the net locomotion of a reciprocal microswimmer, by analyzing a two-sphere swimmer model, we show that in a fluctuating and geometrically confined environment, reciprocal deformations can give rise to a displacement as a statistical average. To elucidate this symmetry breakdown, by introducing an impulse response function, we derived a general formula that predicts the non-zero net displacement of the reciprocal swimmer. With this theory, we revealed the relationship between the shape gait and the net locomotion, as well as how the net diffusion constant is enhanced or suppressed by the swimmer's deformation. These findings, together with a theoretical formulation, provide a fundamental basis for environment-coupled statistical locomotion. Thus, this study will be valuable in understanding biophysical phenomena in fluctuating environments, designing artificial microrobots, and conducting laboratory experiments. Reciprocal microswimming in fluctuating and confined environments Kenta Ishimoto August 12, 2023 ============================================================================== § INTRODUCTION Even a single water droplet in a pond contains thousands of swimming cells with diverse shape morphologies. These microorganisms respond to physical and chemical stimuli from external environments, and these response behaviors can be active or passive. Active responses are well known in chemotaxis, e.g., sperm cells responding to molecules released from eggs or bacteria swimming toward nutrients and oxygen <cit.>. Some microorganisms and cells swim upstream, which is considered passive behavior because of the hydrodynamical interactions between the self-propelled object and the external fluid flow <cit.>. These microscopic, self-propelled objects are called microswimmers, and they have inspired the study of artificial active agents in recent years <cit.>. More recently, enabled by the rapid development of machine learning, a new research field has emerged, which is focused on the creation and design of smart particles that can change their dynamics by perceiving and learning from outer environments <cit.>. The natural environment, however, fluctuates over time. In particular, for such microscopic objects, environmental fluctuations are frequently modeled as Brownian noise. Another important physical feature of environments is external boundaries, such as the air–water surface and wall substrates. Microswimmers are also known to accumulate near a wall purely by physical mechanisms, such as hydrodynamic and contact interactions <cit.>. In this study, we consider the general aspect of microswimming in such noisy and confined environments. Due to the small size of microswimmers, all the inertia effects are negligible, and thus, the flow around the swimmer is governed by the Stokes equations of a low-Reynolds-number flow. 
The linearity of the flow equation allows us to determine the swimmer's locomotion purely by its shape gait, as highlighted by the scallop theorem, which states that the reciprocal deformation of a swimmer cannot generate net locomotion <cit.>. Therefore, one degree of freedom is insufficient for propulsion. An example is a two-sphere model, which comprises two spheres connected by a rod. Although this swimmer can vary the length of the rod, only reciprocal deformation is possible; thus, it cannot go anywhere. Therefore, microswimmers frequently utilize non-reciprocal deformations, such as multiple degrees of freedom, and this behavior can be well studied using a three-sphere model <cit.>. The scallop theorem only holds when the fluid satisfies the Stokes equation and the swimmer inertia is negligible. The finite inertia effects, therefore, give rise to net locomotion even with a reciprocal deformation <cit.>. Additionally, non-Newtonian fluids, such as viscoelastic fluids, allow reciprocal swimmers to generate net locomotion <cit.>. With external boundaries, the scallop theorem still holds as long as the flow obeys the Stokes equations. When the environment fluctuates, the motion of the reciprocal swimmer should also fluctuate. Therefore, the aspect of interest becomes the statistical average. The effects of noise on the active swimmer have been intensively studied using non-deforming active agents, such as active Brownian particles <cit.>. Furthermore, the swimming problem with fluctuating shape gaits has been well studied, for example, using the three-sphere model <cit.>. In this case, the swimming velocity is determined by the statistical average of an area enclosed in the shape space, and the reciprocal swimmer does not generate net locomotion <cit.>. This is, however, distinct from the problem where the swimmer's position fluctuates with the environment. In contrast, studies on swimmers with deformations in noisy environments are still limited. Hosaka et al. <cit.> considered a three-sphere swimmer with elastic springs. Each sphere possessed a different temperature and, thus, a different magnitude of noise. They theoretically demonstrated that the two-sphere model cannot generate net locomotion in a statistical sense even with inhomogeneous environmental noise. If the noise is spatially homogeneous, such as Brownian motion in free space, the statistical average of the particle position does not change. With an external boundary, the environmental noise is no longer spatially homogeneous because of the position-dependent hydrodynamic resistance <cit.>. Therefore, the primary aim of this study is to examine the effects of environmental noise on reciprocal swimmers, particularly in a geometrically confined environment. By considering a two-sphere swimmer model, we numerically and theoretically demonstrate that a reciprocal swimmer can generate net locomotion in a statistical sense. To analyze a precise effect from the environmental noise, we derive a statistical theory with lower-order moments in the probability distribution function. Accordingly, the secondary aim of this study is to apply this theory to examine the effects of the shape gait on its net velocity and net diffusion. The contents of the paper are arranged as follows. In Sec. <ref>, we introduce the governing equation of the two-sphere swimmer model with an external boundary. Considering a small amplitude and far-field asymptotic regime, we provide an explicit form of the stochastic equation of the swimmer in a noisy environment. In Sec. 
<ref>, we provide numerical results to demonstrate that the reciprocal deformation leads to net locomotion. In Sec. <ref>, we derive a theory to predict the non-zero displacement, and in Sec. <ref>, we apply this theory to understand the symmetric properties of the shape gait and its impact on the net velocity and net diffusion of the swimmer. Concluding remarks are provided in Sec. <ref>. § MODEL MICROSWIMMERS IN A NOISY AND BOUNDED DOMAIN §.§ Equations of motion for a two-sphere swimmer In this section, we introduce the equations of motion for a two-sphere swimmer under geometrical confinement. For simplicity, we assume that the swimmer's position is restricted to one dimension and neglect the rotational motion. The two-sphere swimmer consists of two spheres of radius a connected by a rod, whose length is controlled as a function of time. Let x_1 and x_2 > x_1 be the center positions of the spheres. We define the swimmer position by X=(x_1+x_2)/2. Thus, the relative distance between the spheres, l=x_2-x_1, represents a configuration of the swimmer. Subsequently, we calculate the swimmer velocity, U=dX/dt, for a given deformation l(t) [See FIG.<ref>]. The function l(t) designates the shape gait, and we will consider a time-periodic deformation with period T. We assume that the surrounding fluid is governed by the Stokes equation, i.e., for the velocity field, u, and the pressure function, P, ∇ P =μ∇^2u, ∇·u =0, where the constant μ is the viscosity of the fluid. Due to the negligible inertia of the swimmer, the force on the swimmer is balanced, and we may derive the one-dimensional equations of the swimmer without noise, via <cit.>: dX/dt =M(X,l)dl/dt, where the function M(X,l) encodes the hydrodynamic interactions, which are determined only by the instantaneous configuration of the swimmer and the geometry of the surrounding objects. Note that Eq.(<ref>) holds for a general reciprocal swimmer moving in one direction, once the function l(t) is taken to be any function specifying its shape gait. Notably, most of the theoretical results shown in the following sections are not restricted to a specific model. Nonetheless, to analyze the effects of the geometry, below, we focus on the two-sphere swimmer inside a spherical container and with a spherical obstacle or a flat wall [See FIG.<ref>]. To derive a handy expression, we assume the asymptotic length-scale regime, a≪ l ≪ L_W and a≪ X ≪ L_W, where L_W is the distance between the swimmer position and the external boundary. Note that the position X is assumed to be positive without loss of generality. First, we estimate the force, F_i, acting on sphere i (i∈{1,2}) from the surrounding fluid, which follows the Stokes resistance law, with γ=6πμ a, and is given by F_i=-γ(dx_i/dt-U^I_i), where U^I_i is the fluid velocity at position x_i, induced by the motion of the other sphere and the presence of the external boundary. In this asymptotic regime, the induced velocity U^I_i may be written by using Green's function of the Stokes flow with the boundary geometry, as follows: U_i^I=-G̃_ijF_j, where G̃_ij is the summation of the Stokeslet along the x axis and the corrections due to the presence of a wall boundary, G^ W(x_i,x_j). The Stokeslet part is explicitly given by G^ Stokes_ij=1/4πμ|x_i-x_j|  for   i≠ j , and is otherwise zero since the self-induced velocity should be removed. Green's function in (<ref>) is, therefore, written as G̃_ij=G^ Stokes(x_i-x_j)+G^ W(x_i,x_j). Note that G̃_ij=G̃_ji, which is valid for any boundary shape <cit.>. 
From Eqs.(<ref>) and (<ref>), we may derive an equation to determine the force acting on each sphere, as follows: F_i =-(I/γ+G̃)_ij^-1dx_j/dt, where I is the identity matrix. Generally, the sum of the forces acting on the microswimmer, F_1+F_2, may be decomposed into the drag force F_ d, which is proportional to U, and F_ t, which is proportional to dl/dt, as in <cit.> F_ d =-1/ det(γ^-1I+G̃) ×(2γ^-1+G̃_11+G̃_22-G̃_12-G̃_21)dX/dt, F_ t =-1/ det(γ^-1I+G̃)(G̃_11-G̃_22)dl/dt. In the asymptotic regime of a≪ l, the leading-order expression is det(γ^-1I+G̃)=γ^-2(1+O(a/l)), which yields the following asymptotic form: F_ d ≃-2γdX/dt, F_ t ≃-γ^2 (G̃_11-G̃_22)dl/dt, where the neglected error terms are at the order of O(a/l). In addition, an external force F_ ext(t) acts on the swimmer through the environmental noise. These are balanced as F_ d+F_ t+F_ ext=0 due to the negligible inertia. Next, we introduce a Gaussian white noise ξ(t) as an external force from the environment. Due to the position-dependent drag coefficient M(X, ℓ), the equations of motion generally contain a multiplicative noise term <cit.>, leading to a position-dependent diffusion coefficient. Such a stochastic differential equation is analyzed as a Stratonovich type and produces an additional virtual background flow term. This effective drift leads to a non-zero net displacement for a reciprocal swimmer even with the noise. In the asymptotic regime of a≪ l, however, the position dependence of the diffusion constant is negligible, as detailed and discussed in Appendix <ref>. Therefore, we obtain the equations of motion in the following form: dX/dt=M(X, l)dl/dt+√(D) ξ(t) , where the drag coefficient M is given by M(X,l)= γ/2[G^W(X+l/2,X+l/2)-G^W(X-l/2,X-l/2)], and the random variable, ξ(t), is the zero-mean white Gaussian noise, i.e., ⟨ξ(t)⟩ =0 and ⟨ξ(t_1)ξ(t_2) ⟩ =δ(t_1-t_2). Here, δ(t) is the Dirac delta function, and the constant D denotes the diffusion coefficient, representing the strength of the noise. From thermodynamic constraints, the diffusion constant D should relate to the drag coefficient γ through the fluctuation-dissipation theorem, an example of which is the Einstein relation: D=2K_BTγ^-1, for a thermal noise of temperature T and Boltzmann constant K_B. In this study, however, we take D as an arbitrary positive constant. The stochastic equations of motion (<ref>) are equivalent to the Fokker–Planck equation for the probability distribution P(X,t) given by ∂/∂ tP =-∂/∂ X(M(X,l)dl/d tP)+D/2∂^2/∂ X^2P . Therefore, a statistical spatial average of an arbitrary function A(X) is computed as ⟨ A ⟩_t≡∫ dX A(X)P(X,t). We define the average position ⟨ X⟩_t and the variance ⟨ X^2⟩_t, both of which are functions of time. In the subsequent sections, we numerically solve the Fokker–Planck equation (<ref>) by the finite volume method for the spatial discretization and the fourth-order Runge–Kutta method for the time stepping. We impose no-flux boundary conditions at the spatial boundary, which is taken at an adequate distance from the swimmer. The initial condition is set to be a Gaussian distribution, i.e., P(X)∝exp(-(X-X(0))^2/σ_0^2) with the parameter σ_0 being taken sufficiently small. 
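As a rough sketch of this numerical scheme, the short Python fragment below advances the Fokker–Planck equation on a uniform grid with a finite-volume discretization and no-flux boundaries. It uses a simple forward-Euler update instead of the fourth-order Runge–Kutta stepping described above, and the drift function standing for M(X,l)dl/dt is a placeholder to be supplied for the geometry of interest; all numerical values are illustrative assumptions.

import numpy as np

def fp_step(P, X, t, dt, D, drift):
    # One forward-Euler finite-volume step of
    # dP/dt = -d/dX( drift(X,t) P ) + (D/2) d^2P/dX^2 with no-flux boundaries.
    dX = X[1] - X[0]
    v = drift(X, t)                               # advection velocity M(X,l) dl/dt
    vf = 0.5*(v[:-1] + v[1:])                     # velocities at interior interfaces
    Pup = np.where(vf > 0, P[:-1], P[1:])         # upwind value of P at each interface
    flux = vf*Pup - 0.5*D*(P[1:] - P[:-1])/dX     # advective + diffusive flux
    flux = np.concatenate(([0.0], flux, [0.0]))   # zero flux at both boundaries
    return P - dt*(flux[1:] - flux[:-1])/dX

# Gaussian initial condition, as in the text (illustrative parameter values)
X = np.linspace(0.1, 20.0, 2000)
X0, sigma0, D = 5.0, 0.05, 1e-3
P = np.exp(-(X - X0)**2/sigma0**2)
P /= np.trapz(P, X)                               # normalize the distribution
# P = fp_step(P, X, t, dt, D, drift) would then be iterated in time.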
§.§ Swimmer-wall hydrodynamic interactions To determine the equations of motion Eqs.(<ref>) and (<ref>), we proceed to specify Green's function in a bounded domain in addition to the deformation {l(t)}, focusing on the three different situations: a swimmer inside a spherical container of radius R, in the presence of an infinite flat wall, and in the presence of an external spherical obstacle of radius R. We assume that the flat wall and the sphere center are situated at x=0. The form of the boundary-induced Green function, G^W(X, X), is explicitly solved by the method of images and obtained as the superpositions of a Stokeslet and its multipoles located in the region outside the fluid domain <cit.>. The expression is, therefore, written as G^W(X, X)=α(X) G^ Stokes(X-X_IM(X)) , with the strength and position of the mirror image represented by α and X_IM, respectively, both of which are functions of the position X(>0). Next, we approximate G^W(X, X) in the asymptotic regime of a ≪ X ≪ L_W < R to derive the leading order contribution of Green's function from the full expression. When a flat wall is located at x=0, by evaluating the weights of the mirror singularities to impose the no-slip condition at the wall boundary, we obtain α=-1 and X_IM=-X. Thus, we have the leading order contribution: G^W(X,X)=-1/4πμ|X-X_IM|=-1/8πμ X, and the Taylor expansion around X provides the expression of M(X, l) from Eq. (<ref>), as follows: M =-γ/16πμl/(X+l/2)(X-l/2)≃-γ/16πμl/X^2. When the swimmer is situated exterior or interior to a sphere, similar analysis of the weights of the mirror singularities provides the parameters, α=-3R/2X and X_IM=R^2/X. Thus, we obtain a similar asymptotic form of M: G^W(X,X)=± 3R/8πμ (X^2-R^2), where the upper and lower signs correspond to the swimmer inside the spherical container and around the spherical obstacle, respectively. The drag coefficient is, therefore, given at the leading order of the asymptotic regime, l ≪ X ≪ R, by M≃ -9a Xl/4 R^3( 1+2X^2/R^2) , for the swimmer in the spherical container (X<R), with neglecting the O( (X/R)^4) term. With an outer sphere (X>R), the drag coefficient is evaluated similarly, as follows: M≃9a Rl/4 X^3. § RECIPROCAL SWIMMING IN A NOISY ENVIRONMENT Subsequently, we proceed to examine the mechanism through which the reciprocal swimmer generates net locomotion under geometrical confinement in a noisy environment. The arguments in this section do not depend on the specific form of M(X, l), and thus, they hold for a general reciprocal swimmer. When the noise is neglected (D=0), from the scallop theorem, the reciprocal swimmer cannot gain any net displacement. Here, we introduce the solution to Eq. (<ref>) with D=0 as the zero-noise solution and denote it by X_0(t). By integrating Eq. (<ref>) over time, as is readily found, the zero-noise solution is determined by the instantaneous configuration of the swimmer to be X_0(t)=X_0(l(t)) once the initial condition is fixed. Subsequently, we consider the effects of noise (D≠ 0). As is expected from the Brownian dynamics, if the model swimmer does not deform (dl/dt=0), the Fokker–Planck equation (<ref>) becomes the one-dimensional diffusion equation with a uniform diffusion constant D. Let the position be bounded at a wall of x=0 so that ∂_X P|_x=0=0 is imposed. 
When the swimmer is initially situated at X=x_0, the solution of the one-dimensional diffusion equation, ∂/∂ tP=D/2d^2/dX^2P   for   X∈(0,∞) with P(X,0)=δ(X-x_0), is analytically obtained, yielding ⟨ X⟩_t=x_0 erf(√(τ_W/t))+√(Dt/π)exp(-τ_W/t), with τ_W=D^-1x_0^2. The probability flux moves the mean position away from the boundary at long times. However, as our asymptotic regime, l ≪ X ≪ L_W, focuses on the short-time behavior (t≪τ_W), the displacements are negligibly small, following ⟨ X⟩_t=x_0 +x_0/2√(π)(t/τ_W)^3/2+O((t/τ_W)^5/2) . When the swimmer deforms, however, the statistical average of the displacement may have a non-zero value even in the short-time regime. In FIG.<ref>, we show a numerical demonstration of the paths of a reciprocal swimmer following Eq. (<ref>). FIG. <ref> presents the individual and averaged dynamics of the swimmer inside a sphere. We employed the Euler–Maruyama method <cit.> for the time integration over t∈[0, T] and the drag coefficient of Eq. (<ref>). The shape gait is given by a symmetric form, as follows: l_1(t)= l_0+ϵ_1exp[-(t/T-0.5)^2/s^2] , where the parameter s is fixed as s=0.2. In FIG.<ref>(a), we plot the results for a swimmer in a spherical container using expression (<ref>). The parameters are set as R/a=10, X(0)/a=5, DT/a^2=10^-3, l_0/a=2, and ϵ_1/a=-1. The rod between the spheres is shortened and subsequently elongated to recover the initial length at time t=T, as schematically indicated in the figure. Each realization of the stochastic dynamics is shown as a thin line with a different color. The statistical average ⟨ X⟩_t is computed from the Fokker–Planck equation (<ref>) and shown as a thick line. Furthermore, we plot the zero-noise solution as a broken line and observe that the statistical average indeed deviates from the zero-noise solution, although the two lines appear to overlap. To show the difference, in the inset, we plot the deviation from the zero-noise solution, defined as Y(t)≡⟨ X ⟩_t-X_0(t). This confirms that the statistical average deviates from the zero-noise solution X_0(t) at the last part of the deformation, whereas the zero-noise solution X_0(t) returns to the initial position after the deformation cycle. The displacement by the reciprocal swimmer after one beat cycle is computed as ⟨ X⟩_t=1≈ 5.0× 10^-7a [inset of FIG.<ref>(a)]. To highlight the non-zero displacement, we also consider an example model of M=λ^-3lX^2, where λ represents a characteristic length scale of this example. The simulation results are shown in FIG. <ref>(b) for the same stroke of l_1(t) with the parameters being set as X(0)/λ=10, DT/λ^2=10, l_0/λ=0.5, and ϵ_1/λ=-0.1. As is the case for a swimmer inside a spherical container, the spatial dependence of the drag coefficient produces a non-zero averaged displacement for a reciprocal swimmer in a noisy environment. If the swimmer repeats its deformation, the net displacement is expected to be proportional to the time. Thus, it scales as ⟨ X ⟩_t =O(t) at a shorter timescale. This physically intuitive argument is further numerically and theoretically demonstrated in a later section [Sec. <ref>]. At the short timescale with t≪τ_W (τ_W/T=2.5× 10^4 and τ_W/T=10 for FIG.<ref>(a) and (b), respectively), this locomotion is faster than the displacement achieved purely by simple diffusion, which scales as O(t^3/2) from Eq. (<ref>). 
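Stochastic paths of this kind can also be generated directly from the Langevin equation with a few lines of Euler–Maruyama integration. The sketch below uses the container drag coefficient of Eq. (<ref>) and the gait l_1(t) with the parameter values quoted above; the number of realizations, the time step, and the finite-difference derivative of l(t) are illustrative implementation choices rather than those of the paper.

import numpy as np

a, R, T = 1.0, 10.0, 1.0                        # sphere radius, container radius, period
D, l0, eps1, s = 1e-3, 2.0, -1.0, 0.2           # DT/a^2 = 1e-3, l_0/a = 2, eps_1/a = -1
nsteps, npaths = 10000, 2000
dt = T/nsteps

def l(t):
    return l0 + eps1*np.exp(-((t/T - 0.5)/s)**2)

def dldt(t):
    return (l(t + 1e-6) - l(t - 1e-6))/2e-6     # numerical derivative of the gait

def M(X, t):                                    # drag coefficient inside the container
    return -9.0*a*X*l(t)/(4.0*R**3)*(1.0 + 2.0*X**2/R**2)

rng = np.random.default_rng(0)
X = np.full(npaths, 5.0*a)                      # initial position X(0)/a = 5
for i in range(nsteps):
    t = i*dt
    X += M(X, t)*dldt(t)*dt + np.sqrt(D*dt)*rng.normal(size=npaths)
print(X.mean())                                 # Monte Carlo estimate of <X>_T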
At the long timescale, the displacement by (<ref>) scales as O(t^1/2) by diffusion, and the net displacement by reciprocal motion is again considerably faster than that by pure diffusion. This statistical non-zero displacement results from the fact that the statistical average of a function generally differs from the function of the averaged variable, i.e., ⟨ M⟩_t≠ M(⟨ X⟩_t). Thus, the statistical average for Eq. (<ref>) simply yields ⟨ U⟩_t=⟨ M(X, l)⟩_t dl/dt≠ M(⟨ X ⟩_t, l) dl/dt, where the right-hand side provides the swimming velocity of the swimmer without environmental noise. When the function M(X,l) is convex with respect to X, Jensen's inequality holds, and we have ⟨ M⟩_t≥ M(⟨ X⟩_t). The equality holds only when M(X,l) is constant or linear in X, which includes swimming in free space. If so, the statistical average is identical to the zero-noise solution, and a reciprocal deformation cannot generate net statistical displacement in the noisy environment. For the two-sphere models in different geometrical confinements, according to the functions M(X, l) in Eqs. (<ref>), (<ref>), and (<ref>), one readily infers from (<ref>) that the two-sphere swimmer can generate non-zero statistical displacements when located in these three types of geometrical confinement. The strict inequality, however, does not guarantee non-zero net statistical displacements, because we cannot rule out the case where the velocity difference in Eq. (<ref>) cancels out after the whole beat cycle. To evaluate the net locomotion of a reciprocal swimmer more precisely, we formulate a theory to calculate the displacement for a given M(X, l) in the next section. § SMALL-DEVIATION THEORY To derive a formula to calculate the deviation from the zero-noise solution, we introduce an imaginary probe force f_p to the system as an arbitrarily small external force, instead of the environmental noise. We focus on the short-time behavior, in which the deviation from the zero-noise solution, Y= X(t)-X_0(t), is sufficiently small to expand the equations of motion (<ref>). Thus, we obtain dY/dt= M'(X_0(t),l(t))dl/dtY +1/2M”(X_0(t),l(t))dl/dtY^2+f_p(t) , with O(Y^3) errors, where the primes denote the derivative with respect to X. We may readily solve Eq. (<ref>) by a standard method for a linear ordinary differential equation, as follows: Y(t)= ∫_0^tduexp(ℒ(t,u)) × (f_p(u)+1/2M”(X_0(u),l(u))dl/duY(u)^2) , where the kernel, expℒ, corresponds to an impulse response. The two-time function, ℒ(t, u), is explicitly given by ℒ(t,u)=∫^t_uds M'(X_0(s),l(s))dl/ds, which we now call a response generator for later use. Applying the white Gaussian noise as the probe force such that f_p(t)=√(D)ξ(t) and taking the statistical average, we first obtain the mean deviation up to the second order of Y, as follows: ⟨ Y⟩_t =1/2∫_0^tduexp(ℒ(t,u))M”(X_0(u),l(u))dl/du⟨ Y^2⟩_u. Afterward, we derive the expression for ⟨ Y^2⟩_t by substituting the form of the probe force, f_p(t)=√(D)ξ(t), into Y^2(t) after squaring Eq.(<ref>). Thus, the statistical average becomes ⟨ Y^2⟩_t = ∫_0^tdu∫_0^tdsexp(ℒ(t,u)+ℒ(t,s))D⟨ξ(u)ξ(s)⟩, by neglecting the O(Y^4) term and dropping other terms using ⟨ξ(u)⟩=0. By the relation, ⟨ξ(u)ξ(s)⟩=δ(u-s), we further simplify the equation to obtain ⟨ Y^2⟩_t =D∫_0^tduexp(2ℒ(t,u)). Equations (<ref>) and (<ref>) provide a closed form and, thus, enable us to calculate the statistical averages when the swimming gait l̇ and its drag coefficient M(X, l) are given. From Eq. 
(<ref>), one may readily reproduce the zero net displacements for the deterministic swimmer (D=0) and rigid swimmer without deformation (dl/dt=0). Let us further assume that the rod length l(t) is a periodic function with period T; thus, the integrands in Eqs.(<ref>) and (<ref>), as well as the response generator ℒ, are also periodic functions with the same period, T. Thus, we may introduce the net swimming velocity and the net diffusion coefficient by the time-average of ⟨ Y ⟩_t and ⟨ Y^2 ⟩_t over the deformation cycle. Here, we define the net swimming velocity as U_ net=⟨ Y⟩_nT/nT, with a positive integer, n. This velocity is equivalent to the net swimming velocity since the zero-noise solution vanishes as X_0=0 at time t=nT. Similarly, we define the net diffusion coefficient as D_ net=⟨ Y^2⟩_nT/nT, which is explicitly obtained from formula (<ref>) as D_ net =D/T∫_0^Tdtexp(-2ℒ(t,0)). In the following sections, we employ formulas (<ref>) and (<ref>) to examine the statistical properties of the swimmer. § NET SWIMMING VELOCITY AND NET DIFFUSION OF THE SWIMMER §.§ Net swimming velocity Next, we proceed to analyze the ensemble-averaged swimming velocity and variance using the formula derived in the previous section. Thereafter, we analyze the net velocity for particular swimmer models from the formulas (Eqs. (<ref>) and (<ref>)) for comparison with the solution of the Fokker–Planck equation. As in FIG.<ref>, we employ the symmetric deformation l(t)=l_1(t) for t∈[0, T] and recursively repeat this deformation at a later time using l(t+T)=l(t). Other parameters on the shape gait were the same as those in FIG. <ref>, except the sign of ϵ_1 and the value of the diffusion constant, which is now set to be DT/a^2=10. In FIG.<ref>, we plot the averaged deviation from the zero-noise solution or, equivalently, the net displacement ⟨ Y⟩ at every period of deformation. FIG. <ref>(a) shows the analysis for the swimmer inside a spherical container with the drag coefficient M given in Eq. (<ref>). The red and black plots are the results with ϵ_1/a=1 and ϵ_1/a=-1, respectively. The results from the formulas (Eqs. (<ref>) and (<ref>)) plotted by the plus symbol (+) are in excellent agreement with the solution to the Fokker–Planck equation plotted by circles (∘). The displacement is linear in time both in the theoretical prediction and in the direct simulation of the Fokker–Planck equation. This linear increase in the deviation confirms the physical arguments made in Sec. <ref>, that the net displacement by the deformation-environment coupling dominates the passive diffusion estimated by Eq. (<ref>). To investigate the limit of our theory based on the small deviation, we again examine the model of M=λ^-3 lX^2, and the same parameter set is used as in FIG.<ref>(b). The deviation from the zero-noise solution is relatively larger in this case, and the assumption of a small deviation is expected to be violated at an earlier time. In FIG.<ref>(b), we plot the net displacement at every deformation period, and the statistical average obtained by the direct numerical simulation deviates from the theoretical prediction, which assumed a small deviation and neglected O(Y^4) terms. The motion of a swimmer in the spherical container is shown in FIG.<ref>(a), where the small-deviation assumption of the theory is not violated until t/T∼ 10. §.§ Net swimming velocity and shape gait Next, we examine the effects of the shape gait on the locomotion, focusing on the shorter-time regime where the small-deviation theory is valid. 
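Before specializing to particular gaits, we note that the closed-form expressions for ⟨ Y⟩_t, ⟨ Y^2⟩_t, U_net, and D_net can be evaluated for any gait and drag coefficient by straightforward nested quadrature. A minimal Python sketch of such an evaluation is given below; the function signature, the discretization, and the trapezoidal quadrature are illustrative implementation choices.

import numpy as np

def small_deviation(M, dMdX, d2MdX2, l, dldt, X0_init, D, T, n=2000):
    # Evaluate <Y>_T, <Y^2>_T, U_net and D_net from the small-deviation formulas.
    # M, dMdX, d2MdX2: the drag coefficient M(X,l) and its first and second X-derivatives;
    # l, dldt: the gait l(t) and its time derivative.
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    # zero-noise solution from dX_0/dt = M(X_0,l) dl/dt
    X0 = np.empty(n); X0[0] = X0_init
    for i in range(n - 1):
        X0[i+1] = X0[i] + M(X0[i], l(t[i]))*dldt(t[i])*dt
    # response generator L(t) = calligraphic L(t,0) = int_0^t M'(X_0,l) dl/ds ds
    integrand = dMdX(X0, l(t))*dldt(t)
    L = np.concatenate(([0.0], np.cumsum(0.5*(integrand[1:] + integrand[:-1])*dt)))
    # <Y^2>_t = D int_0^t exp(2[L(t)-L(u)]) du
    Y2 = np.array([D*np.trapz(np.exp(2.0*(L[i] - L[:i+1])), t[:i+1]) for i in range(n)])
    # <Y>_t = (1/2) int_0^t exp(L(t)-L(u)) M''(X_0,l) (dl/du) <Y^2>_u du
    g = d2MdX2(X0, l(t))*dldt(t)*Y2
    Y1 = np.array([0.5*np.trapz(np.exp(L[i] - L[:i+1])*g[:i+1], t[:i+1]) for i in range(n)])
    U_net = Y1[-1]/T
    D_net = D/T*np.trapz(np.exp(-2.0*L), t)
    return Y1[-1], Y2[-1], U_net, D_net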
At the short timescale, t≪τ_W, we approximate the response generator as ℒ≃ 0 and M(X_0(t),l(t))≃ M(X_0(0),l(t)). In the current asymptotic regime of lengths, a ≪ l, X ≪ L_W <R, function M is proportional to l, as expressed in Eqs. (<ref>), (<ref>), and (<ref>); thus, we decompose M into the form, M(X(t),l(t))=N(X(t))l(t). Further, in the short-time regime under consideration, function N is represented by the value at the initial time, and we may approximate its second spatial derivative as N”(X(t))=N”(X(0))+O(N”'(X(t)-X(0))), leading to the following relation: M”dl/dt=κ/2dl^2/dt+O(N”'(X(t)-X(0))) , where κ=N”(0) is a constant only determined by the initial position. Subsequently, we plug these equations into Eqs. (<ref>) and (<ref>). The integrands are no more dependent on time, and we readily obtain ⟨ Y⟩_t =Dtκ/2( (l(t)^2-1/t∫_0^t dt' l(t')^2). Next, we introduce a T-periodic function η (t) such that l(t)=l_0+η (t) and η (T)=η(0)=0. By substituting this form into Eq. (<ref>), the net velocity at t=nT is given by U_ net =-Dκ/2( 2 l_0/T∫_0^T du Sym[η(u)] +1/T∫_0^T du η^2(u) ) , where the time-reversal part of the deformation is defined by Sym[η(t)]≡ (η(t)+η(T-t))/2. From Eq. (<ref>), at the first order of η, only the time-reversal term contributes to the net velocity. Similarly, we can define the skew-symmetric part of the deformation as Skew[η]≡ (η(t)-η(T-t))/2, and the skew-symmetric part of the deformation rate is computed as Skew[dl/dt(t)] = 1/2(dl/dt(t)-dl/dt(T-t))=d/dtSym[η(t)] . To examine the effects of the shape gait and its symmetry, we consider two examples of the deformation function with different symmetries. The first example is the shape gait given by l(t)=l_1(t), which was already introduced as Eq. (<ref>) in Sec.<ref>, satisfying the symmetric property in time-reversal as l_1(t)=l_1(T-t). As the second example, we consider a skew-symmetric oscillatory deformation, given by l_2(t)= l_0+ϵ_2sin(ω t) , where ω=2π/T is the angular frequency. This function satisfies the relation, l_2(t)-l_0=-(l_2(T-t)-l_0). In these two cases, by substituting the shape gait functions into Eq. (<ref>), we obtain exact solutions that estimate the net displacement during one period of deformation. For the symmetric deformation, l_1(t), the net velocity is expressed as U^(1)_net=-Dκ√(π)ϵ_1 s(ϵ_1/√(2)+2l_0) . Conversely, the expression for the skew-symmetric deformation, l_2(t), possesses a qualitatively different form: U^(2)_net=1/4Dκϵ_2^2 , which is independent of ω. The symmetric deformation yields contributions of the order of ϵ_1, whereas the skew-symmetric deformation only generates net velocities of the order of O(ϵ_2^2). In FIG. <ref>, we plot the numerical result of the net displacement via the Fokker–Planck equation, together with the theoretical results for a swimmer in a spherical container. We used the form of M given by Eq.(<ref>) and the parameters, (X_0,l_0,ϵ_1, ϵ_2)=(5,2,1,1)a, with the diffusion constant DT/a^2=10 to integrate over time t ∈ [0,T]. In both shape gaits, the analytical solutions are in excellent agreement with the numerical solution to the Fokker–Planck equation, which, in turn, validates the approximations for deriving Eq. (<ref>). §.§ Net diffusion and shape gait We further investigate the relations between the shape gait and the emerging net velocity and net diffusion by analyzing the impulse response function exp(ℒ(t_1, t_2)). From the definition in Eq. 
(<ref>), this function characterizes the effect of noise at time t_2 on the locomotion at time t_1 and is, therefore, interpreted as the sensitivity to the environmental noise. Furthermore, as expressed in Eq. (<ref>), the impulse function affects ⟨ Y^2⟩, hence the net diffusion constant. To visually understand this effect, we first decompose the response generator into ℒ(t_1,t_2)=ℒ(t_1,0)- ℒ(t_2,0). A schematic of this function ℒ(t,0) is shown in FIG.<ref>. Therefore, the response generator ℒ(t_1,t_2) is regarded as the difference between the two points on the graph of ℒ(t,0). To gain an intuitive understanding, we consider a polynomial function M(X,l), i.e., M(X,l)=λ^-(m+1)lX^m, with m being an integer m≠ 1, noting that the expressions derived in Sec. <ref> are written by a sum of the polynomial functions. With this simplified form, we exactly solve the response generator, ℒ(t,0). Expectedly, substituting the form of M and integrating (<ref>) lead to the expression of the zero-noise solution, as follows: X_0(t)=[ (1-m)/2λ^-(m+1)l^2(t)+c]^1/(1-m), where c is the constant of the integral given by c=X(0)^(1-m)-(1-m)Kl^2(0)/2. Thereafter, using this expression to integrate Eq. (<ref>), we obtain the response generator in the following form: ℒ(t,0)=mlog( X_0(t)/X_0(0)) . Notably, when m>1, the shape gait l(t) cannot be taken arbitrarily. Since the first term in the bracket of Eq.(<ref>) is negative, the second term, c, needs to be positive, leading to a diverging X_0 parameter at some finite values of l^2(t). The exact solution is plotted in FIG.<ref>(a) for m=2 with the initial position given as X_0(0)=10λ and a shape gait of l_1(t) in Eq. (<ref>). We employed the same parameter set as in FIGs.<ref>(b) and <ref>(b), where l_0=0.5λ and ϵ_1=± 0.1λ. When m=2, the exact solutions, Eqs. (<ref>)-(<ref>), are simply written as ℒ(t,0)=-2log( 1-X_0(0)/2λ^3(l^2(t)-l^2(0))) . Thus, the response generator becomes positive when the swimmer is elongated from the initial configuration (ϵ_1>0) and negative when the swimmer is shortened from the initial configuration (ϵ_1<0). After one period of the deformation at t=T, the response generator satisfies ℒ(T,0)=ℒ(0,0)=0. To examine the impact on the variance ⟨ Y^2 ⟩_t, we numerically evaluated this function by a direct numerical simulation of the Fokker–Planck equation and by the small-deviation theory. The results are shown in FIG.<ref>(b), together with the normal diffusion Dt/λ^2 plotted by the black dotted line. As shown in the figure, the numerical results and the theoretical predictions are in good agreement. In particular, the shape deformation contributes to enhancing or suppressing the net diffusion depending on the sign of ℒ(t_1, t_2), as expressed in Eq. (<ref>). The response generator does not change its sign for t<0.5 T because function ℒ(t, 0) is monotonic. The value of the variance ⟨ Y^2 ⟩_t therefore deviates from the normal diffusion with enhancement (ϵ_1>0) or suppression (ϵ_1<0) [FIG.<ref>(b)]. Later, during the deformation, however, the sign and impact of the impulse response on the variance ⟨ Y^2 ⟩_t change and subsequently reverse, i.e., suppression when ϵ_1>0 or enhancement when ϵ_1<0, as evidenced in the figure. The fluctuating diffusion during one beat cycle holds for a general swimmer with an arbitrary M(l, t), according to the small-deviation theory. Indeed, by integrating Eq.(<ref>) from l(t=0) to l(t=T) after a change of variable, we have ℒ(T,0)=ℒ(0,0)=0. 
Thus, the function, ℒ(t,0), possesses at least one extremum in t∈(0, T), if the swimmer's shape is deformed. Therefore, for the general deforming object under a noisy environment, we conclude that there exist both time periods where the net diffusion exceeds the normal diffusion D and where the net diffusion is suppressed more than the normal diffusion. § CONCLUDING REMARKS In this study, we theoretically investigated the dynamics of a reciprocal microswimmer in a noisy environment. Focusing on a microswimmer moving in one direction, we showed that the statistical average of its displacement ⟨ X ⟩_t can produce net locomotion if the swimmer dynamics are confined by external boundaries. As the scallop theorem states, such a reciprocal swimmer returns to its original position after one beat cycle in the absence of environmental noise. Its dynamics were introduced as the zero-noise solution and denoted by X_0(t). For a quantitative analysis of the net locomotion in a statistical sense, by an impulse response function, we established a theory to analyze the deviation from the zero-noise solution, given by ⟨ Y⟩_t=⟨ X ⟩_t-X_0(t). Focusing on a two-sphere model swimmer located in a spherical container, we numerically demonstrated that the theoretical prediction is in excellent agreement with the direct numerical solutions of the Fokker–Planck equation associated with the stochastic Langevin dynamics. Although our theory is based on a small deviation in a short timescale, it is sufficient to analyze the key mathematical structures of the shape gait that produce net statistical locomotion. Based on the theory, we found that a time-reversal deformation produces the net swimming velocity at the leading order, whereas the skew-symmetric part of the deformation only contributes to the second order of the deformation amplitude. Further, we found that the shape gait affects the net diffusion constant through the behavior of the impulse response function and demonstrated that the net diffusion is both enhanced and suppressed during one beat cycle. In this study, we focused on a reciprocal swimmer moving along an unidirectional axis. Nonetheless, the theoretical methods used can be extended to swimming dynamics in multiple spatial dimensions or the dynamics with rotation. A similar mathematical structure will allow us to demonstrate that a reciprocal swimmer generally produces net locomotion in a confined geometry. The small-deviation theory for multiple dimensions, however, requires a non-commutative integral for an impulse response, yielding theoretical challenges. Therefore, details of these extensions will be reported elsewhere. The perturbation method used in this study is similar to that for nonlinear phenomena with very high dimensions, such as high-Reynolds-number turbulence. The higher-order moments of the probability distribution function are generally not negligible in these systems. Thus, further theoretical treatments are required to include the higher-order moments, such as the direct interaction approximation in turbulent flows <cit.> and the renormalization group theory in quantum field theory. In contrast, the perturbation analysis around a given non-linear solution as used in this study does not require higher order moments, for which our small-deviation theory was matched with the full dynamics <cit.>. Another remark on the small-deviation theory is that the theory is limited to the short-time behavior, i.e., where the deviation from the zero-noise solution is sufficiently small. 
The long-time asymptotic behavior is also important since it predicts the accumulation or depletion of swimmer distribution around the external boundaries. In this paper, we also assumed an asymptotic region, where the swimmer is situated sufficiently far from the outer boundaries as X ≪ L_W. Therefore, the statistical average of the net displacement is limited to small values. When the swimmer approaches the wall boundaries, as is well known, the hydrodynamic drag should diverge. This causes the deterministic part of the dynamics to be strongly dependent on the distance from the wall. Thus, the spatial derivative of the M(X, l) function will be a large number when the swimmer is very close to the wall. As argued in our theory (e.g., Eqs. (<ref>) and (<ref>)), the net velocity ⟨ Y⟩_t is significantly enhanced. In these situations, the deformation-driven net locomotion of a reciprocal microswimmer will be observable and significant in physical and biological problems, such as the surface accumulation of microswimmers <cit.> as well as the fluctuation of polymers and enzymes inside biological and artificial cell membranes. <cit.>. Numerical works with such a singular function, however, require careful treatment to satisfy the no-penetration condition at the boundary. Further, precise cell-wall hydrodynamic interactions frequently incur intense numerical costs because of the finer numerical meshes <cit.>. In conclusion, we theoretically and numerically studied the fundamental feature of the statistical properties of a microscopic object with reciprocal deformation in a fluctuating and geometrically confined environment. We focused on the short-time, small deviation situation to elucidate the non-trivial, deformation-driven net locomotion of such a reciprocal swimmer. Our findings on the relation between the symmetry of the shape gait and the net locomotion, together with a theoretical formulation based on the impulse response, provide a fundamental basis for the environment-coupled statistical locomotion. Thus, this study will be beneficial in understanding such biophysical dynamics in fluctuating environments, designing artificial microrobots, and conducting laboratory experiments. § ACKNOWLEDGMENTS K.I. acknowledges the Japan Society for the Promotion of Science (JSPS) KAKENHI for Transformative Research Areas A (Grant No. 21H05309), and the Japan Science and Technology Agency (JST), FOREST (Grant No. JPMJFR212N). Y.H. and K.I. were supported in part by the Research Institute for Mathematical Sciences, an International Joint Usage/Research Center located at Kyoto University. § DERIVATION OF EQ.(<REF>) In this appendix, we provide an error estimate to derive the equation of motion (<ref>). The force balance among the drag force, propulsion, and the external noise force, F_d+F_t+F_ext=0, should be formulated by a stochastic differential equation of the Stratonovich type, as follows: C(X, t)dX=A(X, t)dt+B(X,t)dW , where C(x, t)=2γ and A=F_t from Eqs. (<ref>)-(<ref>); dW indicates a Wiener process. Thus, the corresponding equation motion in the sense of Itô is given by ( C+C'/2dX) dX=Adt+( B+B'/2dX) dW , where the prime denotes the derivative with respect to X. With Itô's rules, Eq.(<ref>) is rewritten in the order of dt as C(X, t)dX=( A+ BB'/2-C'B^2/2C^2)dt+BdW . 
By introducing the diffusion D as D=B^2/C^2, we arrive at the equation of motion in the following form: dX/dt=F_t/2γ+( 1/2γd/dX(γ^2 D)-√(D)dγ/dX)+√(D)ξ, where ξ is the zero mean Gaussian noise, which satisfies ⟨ξ(t) ⟩=0 and ⟨ξ(t_1)ξ(t_2)⟩δ(t_1-t_2), as in the main text. The respective terms on the right-hand side of Eq.(<ref>) represent the deterministic swimming velocity, effective drift velocity, and noise-induced velocity. Let us rewrite Eq.(<ref>) as dX/dt=V_s+V_d+V_n and evaluate the magnitude of each velocity. We first consider the ratio between the drift velocity and the swimming velocity. With the expressions of V_d=-Dγ'/2γ and F_t=-γ^2(G̃_11-G̃_22)(dl/dt), we estimate its ratio by neglecting prefactors, as follows: V_d/V_s∼V_d/γ^2G̃(dl/dt)∼G̃'/G̃D/(dl/dt)∼D/L_W (a/T), where G̃ represents the characteristic size of G̃_11-G̃_22. T is the time of the deformation, and L_W is the distance between the swimmer and the wall boundary of the geometrical confinement, as introduced in the main text. Here, we define the characteristic timescale of the (passive) diffusion as τ_D=a^2/D. Therefore, the ratio of the two terms is evaluated as V_d/V_s∼T/τ_Da/L_W≪ 1 , which is asymptotically small. The ratio between the swimmer and the noise term, contrarily, is evaluated through the displacement during the time interval Δ t∼ T and the spatial interval Δ l: V_n/V_s∼√(D Δ t)/γG̃Δ l∼√(D)/a√(Δ t)/γG̃∼√(T)/√(τ_D)1/M. Although the magnitude of M depends on the geometrical confinement, for a swimmer in a spherical container, it can be estimated from Eq. (<ref>) as M∼ a^2 X /L_W^3≪ 1, indicating that the noise term is dominant. Notably, Sancho et al. <cit.> also suggested using the Stratonovich product for the drag force, as in the left-hand side of Eq. (<ref>). However, they suggested using the Itô product in the stochastic part of Eq. (<ref>) based on physical reasoning. Nonetheless, this difference in interpretations only affects the prefactor of the drift velocity, and the estimate argued in this appendix remains unchanged. In summary, in our asymptotic regime, the drift terms are negligible, and thus, we arrive at our equation of motion (<ref>). apsrev4-2
http://arxiv.org/abs/2307.05696v1
20230709011908
A Personalized Reinforcement Learning Summarization Service for Learning Structure from Unstructured Data
[ "Samira Ghodratnama", "Amin Beheshti", "Mehrdad Zakershahrak" ]
cs.IR
[ "cs.IR", "cs.AI", "cs.CL" ]
A Personalized Reinforcement Learning Summarization Service for Learning Structure from Unstructured Data Samira Ghodratnama Macquarie University, Australia W.W. Grainger, USA [email protected] [email protected] Amin Behehsti Macquarie University, Australia [email protected] Mehrdad Zakershahrak Macquarie University, Australia [email protected] Received August 12, 2023; accepted August 12, 2023 ============================================================================================================================================================================================================================================================================================================================ The exponential growth of textual data has created a crucial need for tools that assist users in extracting meaningful insights. Traditional document summarization approaches often fail to meet individual user requirements and lack structure for efficient information processing. To address these limitations, we propose Summation, a hierarchical personalized concept-based summarization approach. It synthesizes documents into a concise hierarchical concept map and actively engages users by learning and adapting to their preferences. Using a Reinforcement Learning algorithm, Summation generates personalized summaries for unseen documents on specific topics. This framework enhances comprehension, enables effective navigation, and empowers users to extract meaningful insights from large document collections aligned with their unique requirements. Document summarization, personalized summarization, hierarchical summarization, concept-based summarization. § INTRODUCTION The availability of a vast amount of information on various topics has led to a phenomenon known as information overload, where the volume of data exceeds an individual's capacity for effective processing within a reasonable timeframe. While this abundance of data can be valuable for analytical applications, it necessitates efficient exploration tools to harness its potential benefits without succumbing to information overload, which can strain cognitive resources. Data summaries serve as effective tools for gathering relevant information, organizing it into a coherent and manageable form, and facilitating complex question answering, insight generation, and conceptual boundary discovery <cit.>. Automatic document summarization has been extensively studied to address the challenges of data reduction for analysis, commercialization, management, and personalization purposes. Furthermore, users often seek information in an organized and coherent structure. However, despite the speed of document generation and the massive collections of unstructured documents, producing personalized summaries comparable to human-written ones remains challenging. Most previous work on automatic text summarization has focused on generating textual summaries rather than structured ones. These approaches typically produce a single, short, general, and flat summary that applies to all users, lacking interpretability and personalization. Moreover, they are incapable of producing more extended and detailed summaries, even if users express interest in obtaining additional information. Additionally, the lack of structure in these summaries hampers further processing, and they heavily rely on reference or gold summaries created by humans, which are subjective and costly <cit.>. 
To address these limitations, we propose Summation, a hierarchically interactive structured summarization approach that generates personalized summaries. We emphasize the significance of the following aspects in our contribution: i) Structured summaries, ii) Personalization, iii) Interaction, and iv) The elimination of reference summaries. Structured Summaries. Studies have demonstrated that when individuals encounter numerous documents, they seldom formulate fully-fledged summaries. Instead, they attempt to extract concepts and understand the relationships among them <cit.>. Consequently, structured data has become crucial in various domains. It offers a concise overview of the document collection's contents, unveils interesting relationships, and serves as a navigational structure for further exploration of the documents. Our approach, Summation, provides summaries in the form of a hierarchical concept map, which caters to diverse user requirements by being interpretable, concise, and simultaneously providing an overview and detailed information. Personalization. Existing summarization approaches typically generate a generic summary comprising a few selected sentences intended to meet the needs of all users. In contrast to such generic summaries, there is a dearth of user-centric summarization approaches that allow users to specify the desired content in the summaries <cit.>. Interaction. Conventional summarization approaches treat a topic-related document set as input and generate a summary that captures the most salient aspects. However, research on this topic often neglects the usefulness of the approach for users, focusing primarily on the accuracy of the generated summaries. As a result, these approaches produce short (3-6 sentences), inflexible, and flat summaries that are the same for all users. Consequently, these approaches fail to provide more extensive summaries even when users express interest in obtaining additional information. Reference Summaries. Traditional document summarization techniques rely on reference summaries created by humans for training their systems. However, this approach is subjective and, more importantly, resource-intensive. For instance, Lin <cit.> reported that creating summaries for the Document Understanding Conferences (DUC) required 3,000 hours of human effort. Personalized summaries eliminate the need for such reference summaries by generating specific summary for a user instead of optimizing a summary for all users. Our Contribution. We study the automatic creation of personalized, structured summaries, allowing the user to overview a document collection's content without much reading quickly. The goal here is to dynamically maintain a federated summary view incrementally, resulting in a unified framework for intelligent summary generation and data discovery tools from a user-centered perspective. The unique contribution of this paper includes: * We provide summaries in the form of a hierarchical concept map, labeled graphs representing concepts and relationships in a visual and concise format. Their structured nature can reveal interesting patterns in documents that users would otherwise need to discover manually. It enables providing more information than traditional approaches within the same limit size. It can be used as a navigator in the document collection. Such visualization is beneficial for decision-making systems. * We introduce and formalize a theoretically grounded method. 
We propose a personalized interactive summarization approach utilizing a reinforcement learning algorithm to learn generating user-adapted results. It is the first approach to predict users' desired structured summary to the best of our knowledge. * We provide various evidence evaluating different aspects to prove Summation's usability using human and automatic evaluation. We divide the proposed framework into two steps. The first step is organizer which structure unstructured data by making a hierarchical concept map. Then summarizer is responsible for: i) predicting users' preferences based on the given feedback by employing preference learning and ii) learning to provide personalized summaries by leveraging reinforcement learning. A general overview of the algorithm is depicted in Figure <ref>. § RELATED WORK We categorize previous approaches into three groups including traditional approaches, structured approaches, personalized and interactive approaches discussed below. Traditional Approaches. A good summary should provide the maximum information about the input documents within a size limit and be fluent and natural. Different aspects for categorizing traditional multi-document summarization approaches exist, such as the input type, the process, and the summarization goal <cit.>. However, the main category considers the process and the output type of the summarization algorithm: extractive and abstractive approaches. The input in both cases is a set of documents, and the output is a few sentences. Abstractive summaries are generated by interpreting the main concepts of a document and then stating those contents in another format. Therefore, abstractive approaches require deep natural language processing, such as semantic representation and inference <cit.>. However, extractive text summarization selects some sentences from the original documents as the summary. These sentences are then concatenated into a shorter text to produce a meaningful and coherent summary <cit.>. Early extractive approaches focused on shallow features, employing graph structure, or extracting the semantically related words <cit.>. Different machine learning approaches, such as naive-Bayes, decision trees, neural networks, and deep reinforcement learning models are used for this purpose <cit.>. Structured Approaches. While traditional summarization approaches produce unstructured summaries, there exist few attempts on structured summaries. Structured summaries are defined by generating Wikipedia articles and biographies to extract the significant aspects of a topic using approaches such as topic modeling or an entity-aspect LDA model <cit.>. Discovering threads of related documents is another category of structured summaries. They mostly use a machine algorithm to find the threads using a supervised approach and features such as temporal locality of stories for event recognition and time-ordering to capture dependencies <cit.>. A few papers have examined the relationship between summarization and hierarchies. However, the concept of hierarchy in these approaches is the relation between different elements of a document. An example is creating a hierarchy of words or phrases to organize a set of documents <cit.>. There is a related thread of research on identifying the hierarchical structure of the input documents and generating a summary which prioritizes the more general information according to the hierarchical structure <cit.>. 
However, the information unit is a sentence, and the hierarchy is based on time measures. Concept-based multi-document summarization is a variant of traditional summarization that produces structured summaries using concept maps. It learns to identify and merge coreferent concepts to reduce redundancy and finds an optimal summary via integer linear programming. However, it produces a single flat summary for all users <cit.>. Personalized and Interactive Approaches. There have been a few recent attempts at personalized and interactive approaches in different NLP tasks. Unlike non-interactive systems that only present the system output to the end-user, interactive NLP algorithms ask the user to provide certain forms of feedback to refine the model and generate higher-quality outcomes tailored to the user. Multiple forms of feedback have also been studied, including mouse-clicks for information retrieval <cit.>, post-edits and ratings for machine translation <cit.>, error markings for semantic parsing <cit.>, and preferences for translation <cit.>. A significant category of interactive approaches presents the output of a given automatic summarization system to users as a draft summary, asking them to refine the results without further interaction. The refining process includes cutting, pasting, and reorganizing the essential elements to formulate a final summary <cit.>. Other interactive summarization systems include the iNeATS <cit.> and IDS <cit.> systems that allow users to tune several parameters for customizing the produced summaries. Avinesh and Meyer <cit.> proposed the most recent interactive summarization approach that asks users to label important bigrams within candidate summaries. Their system can achieve near-optimal performance. However, labeling important bigrams is an enormous burden on the users, as users have to read through many potentially unimportant bigrams. Besides, it produces extractive summaries that are unstructured. § THE PROPOSED APPROACH (SUMMATION) The ultimate goal of summarization is to provide a concise, understandable, and interpretable summary tailored to the users' needs. However, making such a summary is challenging due to the massive size of document collections, the speed at which documents are generated, and their unstructured format. In this regard, Summation aims to produce structured summaries that facilitate further processing and remain concise and easily understandable, while engaging users in creating their personalized summaries. This novel framework has two components: an organizer and a summarizer. First, we discuss the problem definition, and then each component is explained. Problem Definition. The input is a set of documents D={D_1,D_2, ... ,D_N}, and each document consists of a sequence of sentences S=[s_1,s_2,...,s_n]. Each sentence s_i is a set of concepts {c_1,c_2, ..,c_k}, where a concept can be a word (unigram) or a sequence of words. The output is a personalized hierarchical concept map. The two components, the organizer and the summarizer, are explained in Sec. <ref> and <ref>, respectively. §.§ Adding Structure to Unstructured Data The first step is to structure unstructured information by making a hierarchical concept map. A concept map is a graph with directed edges, where nodes indicate concepts and edges indicate relations. Both concepts and relations are sequences of related words representing a semantic unit. Consequently, the first step in creating a concept map is to identify all concepts and relations. 
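For concreteness, the target output can be pictured as a directed labelled graph over concepts, with parent-to-child links between coarser and finer maps. The short Python sketch below is only an illustrative container for such propositions, not the paper's implementation.

from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ConceptMap:
    # Directed labelled graph: nodes are concepts, edges carry relation labels.
    level: int = 0                                  # depth in the summary hierarchy
    edges: dict = field(default_factory=lambda: defaultdict(list))
    children: list = field(default_factory=list)    # finer-grained child maps

    def add_proposition(self, concept1, relation, concept2):
        self.edges[concept1].append((relation, concept2))

cmap = ConceptMap()
cmap.add_proposition("cancer treatment", "is underpinned by",
                     "the Pharmaceutical Benefits Scheme")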
Here, we propose hierarchical clustering to form the hierarchical concept map. §.§.§ Concept and Relation Extraction. Concepts come in different syntactic types, including nouns, proper nouns, more complex noun phrases, and verb phrases that describe activities <cit.>. For this purpose, we used open information extraction (OIE) <cit.> through which the entities and relations are obtained directly from the text. OIE finds binary propositions from a set of documents in the form of (con_1,R,con_2), which are equivalent to the desired concepts and relations. For example, the output for the sentence, ‘cancer treatment is underpinned by the Pharmaceutical Benefits Scheme’, is: Cancer treatment by the Pharmaceutical Benefits Scheme Balancing precision and recall in extracting concepts is a challenging task. A high precision causes to define all identified spans as mentions of concepts. Therefore, some constructions are usually missed, which leads to lowering the recall. On the other hand, a high recall is necessary since missed concepts can never be in summary. Obtaining a higher recall may extract too many mentions, including false positives. Generalizability is also essential. The reason is that extracting a particular syntactic structure might generate only correct mentions, causing too broad mentions. Ideally, a proper method applies to many text types. To avoid meaningless and long concepts, we processed the OIE results such that concepts with less than one noun token or more than five tokens are omitted. The original nouns also replace pronouns. If an argument is a conjunction indicating conj-dependency in the parse tree, we split them. §.§.§ Concept Map Construction. Among various extracted concepts and relations, multiple expressions can refer to the same concept while not using precisely the same words; that is, they can also use synonyms or paraphrases. However, distinguishing similar concepts to group them is challenging and subjective. For example, adding a modifier can completely change the meaning of a concept based on the purpose of summarization. Consequently, grouping them may lead to propositions that are not stated in the document. Therefore, we need to group every subset that contains mentions of a single, unique concept. Scalability is another critical issue. For example, pairwise comparisons of concepts cause a quadratic run-time complexity applicable only to limited-sized document sets. The same challenges exist for relation grouping. However, we first grouped all mentions by the concepts' pairs, and then performed relation grouping. Therefore, this task’s scope and relevance are much smaller than when concepts are used. Therefore, in practise, comparison-based quadratic approaches are feasible. Moreover, as the final goal is to create a defined size summary, the summary size significantly affects the level of details in grouping concepts. This is because the distinction between different mentions of a concept might not be required, as it is a subjective task. Ideally, the decision to merge must be made based on the final summary map’s propositions to define the necessary concept granularity. We further propose hierarchical conceptual clustering using k-means with word embedding vectors to tackle this problem, as it spans a semantic space. Therefore, word embedding clusters give a higher semantic space, grouping semantically similar word classes under the Euclidean metric constraint defined below. 
Before defining the proposed hierarchical conceptual clustering, we review word embedding schemes used in the proposed model. Word Embedding. Word embedding is a learnt representation of text such that the same meaning words have similar representations. Different techniques can be used to learn a word embedding from the text. Word2Vec <cit.> is an example of a statistical model for learning a word embedding representation from a text corpus, utilising different architectures. As such, we used skip-gram and bag of character n-grams in our experiments. The skip-gram model uses the current word for predicting the surrounding words by increasing the weights of nearby context words more than other words using a neural network model. One drawback of skip-gram is its inability to detect rare words. In another model, authors define an embedding method by representing each word as the sum of the vector representations of its character n-grams, known as ‘bag of character n-grams’ <cit.>. If the training corpus is small, character n-grams will outperform the skip-gram (of words) approach. [We used fastText for word embedding: https://fasttext.cc/docs/en/support.html] Conceptual Hierarchical Clustering. Given word (concept) embeddings learnt from a corpus, {v_w_1,v_w_2,...,v_w_T}, we propose a novel recursive clustering algorithm to form a hierarchical concept map, H. This variable denotes a set of concept maps organised into a hierarchy that incrementally maintains hierarchical summaries from the most general node (root) to the most specific summary (leaves). Within this structure, any non-leaf summary generalises the content of its children nodes. Hierarchical summarization has two critical strengths in the context of large-scale summarization. First, the initial information under review is small and grows upon users’ request, so as not to overwhelm them. Second, the parent-to-child links facilitate user navigation and drilling down for more details on interesting topics. The hierarchical conceptual clustering minimizes the objective function Eq. <ref> over all k clusters as C={c_1,c_2,..,c_k}. J = ∑_k=1^K∑_t=1^| T | |v_w_t- c_k|^2 +αmin _c∈ C size(c), where c_k is the randomly selected centre k-th cluster, and T is the number of word vectors. The second term is the evenness of the clusters, added to avoid clusters with small sizes. α tunes the evenness factor, which was defined by employing a grid search over a development set. We also implemented hierarchical clustering top-down at each time, optimising Eq. <ref>. After defining the clusters, we must find the concept that best represents every concept at the lower levels to ensure hierarchical abstraction. A concise label is the desired label for each node; however, shortening mentions can introduce propositions that are not asserted by a text. For example, the concept labelled ‘students’ can change in meaning where the emphasis is on a few students or some students. To this end, a centre of a cluster at each level of the hierarchy was defined as a label. The inverse distance to the cluster centres is the membership degree or the similarity to each label. The cluster distance for a word w_t is defined as d_v_w_t. Consequently, the membership of each word w_t in cluster c_k to its label is the inverse distance defined in Eq. <ref>. m_v_w_t=1/d_v_w_t =1/|c_k-v_w_t|^2∀ w_t ∈ c_k We then fine-tuned K within the 5–50 range based on the dataset size and chose the cluster number according to gap statistic value <cit.>. 
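To make the clustering step above concrete, the sketch below implements a top-down recursive k-means over concept embeddings. It is only an illustration of the objective and membership definitions given above: the evenness weight, the number of clusters, the recursion depth, and the random vectors standing in for fastText embeddings are placeholder assumptions rather than the tuned values used in our experiments.

```python
import numpy as np
from sklearn.cluster import KMeans


def evenness_objective(X, labels, centers, alpha):
    """Clustering objective: within-cluster squared distances plus an
    evenness term on the smallest cluster (alpha is tuned on a dev set)."""
    sq_dist = ((X - centers[labels]) ** 2).sum()
    sizes = np.bincount(labels, minlength=len(centers))
    return sq_dist + alpha * sizes.min()


def build_hierarchy(words, vectors, k=3, depth=2):
    """Top-down recursive k-means over concept embeddings.
    Returns a nested dict {label: subtree-or-members}; the concept closest
    to each cluster centre is used as that node's label."""
    if depth == 0 or len(words) <= k:
        return list(words)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors)
    tree = {}
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        centre = km.cluster_centers_[c]
        # membership degree is the inverse squared distance to the centre;
        # the closest concept acts as the cluster label
        label = words[idx[np.argmin(((vectors[idx] - centre) ** 2).sum(axis=1))]]
        members = [words[i] for i in idx]
        tree[label] = build_hierarchy(members, vectors[idx], k, depth - 1)
    return tree


# toy usage: random vectors stand in for fastText concept embeddings
rng = np.random.default_rng(0)
concepts = [f"concept_{i}" for i in range(30)]
vectors = rng.normal(size=(30, 50))
print(build_hierarchy(concepts, vectors))
```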
The output H can be directly used as a new dataset for other actions, such as browsing, querying, data mining process, or any other procedures requiring a reduced but structured version of data. The hierarchical clustering can also be pruned at each level to represent a summarised concept map for different purposes or users. Therefore, H is fed to the summariser for pruning to generate a personalized summary. Moreover, by using preference-based learning and RL, we learn users’ preferences in making personalized summaries for unseen topic-related documents, discussed in Sec. <ref>. §.§ Summarizer The hierarchical concept map produced in the previous step is given to the summariser to make the desired summaries for users based on their given preferences. Therefore, the summariser consists of two phases—(i) predicting user preferences and (ii) generating the desired summary. §.§.§ Predicting User Preference. The first step towards creating personalized summaries is to understand users’ interests. It can be extracted implicitly based on users' profiles, browsing history, likes or dislikes, or retweeting in social media <cit.>. When this information is not available, interaction with users is an alternative to retrieve user's perspectives. The user feedback can be in any form, such as mouse-click or post-edits, as explained in Section <ref>. Preference-based interactive approaches are another form of feedback that puts a lower cognitive burden on human subjects <cit.>. For instance, asking users to select one concept among “cancer treatment" and “cancer symptoms" is more straightforward than asking for giving a score to each of these concepts. Therefore, in this paper, to reduce users' cognitive load, queries are in the form of concept preference. Preference learning is a classification method that learns to rank instances based on the observed preference information. It trains based on a set of pairwise preferred items and obtaining the total ranking of objects <cit.>. H is the hierarchical concept map, where at the i-th level of the hierarchy there exist m_i nodes defining a label l. L={l_11,...,l_nm_i} is the set of all labels, where l_i1 indicates the first node at i-th level of the hierarchy and n is the number of levels, and L_i indicates the labels at i-th level. We queried users with a set of pairwise concepts at the same levels, {p(l_i1,l_i2),p(l_i2,l_i3),...,p(l_im_i-1,p(l_im_i)}, where p(l_i1,l_i2) is defined in Eq. <ref>. p(l_i1,l_i2)= 1, if l_i1>l_i2 0, otherwise where > indicates the preference of l_i1 over l_i2. Preference learning aims to predict the overall ranking of concepts, which requires transforms concepts into real numbers, called utility function. The utility function U such that l_i > l_j U(l_i) > U (l_j), where U is a function U: CR. In this problem, the ground-truth utility function (U) measures each concept’s importance based on users’ attitudes, defined as a regression learning problem. According to U, we defined the ranking function, R, measuring the importance of each concept towards other concepts based on users’ attitude. This is defined in Eq. <ref>. R(l_i)=∑1{U(l_i)>U(l_j)} , ∀ l_i , l_j∈L where 1 is the indicator function. The Bradley–Terry model <cit.> is a probability model widely used in preference learning. Given a pair of individuals l_i and l_j drawn from some population, the model estimates the probability that the pairwise comparison l_i > l_j is true. 
Having n observed preference items, the model approximates the ranking function R by computing the maximum likelihood estimate in Eq. <ref>. J_x(w)= ∑_i ∈ n[p(l_i,l_j)log F(l_i,l_j;w)+ p(l_j,l_i)log F(l_j,l_i;w)] where F(l) is the logistic function defined in Eq. <ref>. F(l_i,l_j;w)= 1/1+exp[U^*(l_j;w)-U^*(l_i;w)] Here, U^* is the approximation of U parameterised by w, which can be learnt using different function approximation techniques. In our problem, a linear regression model was designed for this purpose, defined as U(l;w)=w^Tϕ(l), where ϕ(l) is the representation feature vector of the concept l. For any l_i,l_j ∈ L, the ranker prefers l_i over l_j if w^Tϕ(l_i)> w^Tϕ(l_j). By maximizing the J_x(w) in Eq. <ref>, w^* = arg max_w J_x(w), the resulting w^* using stochastic gradient ascent optimisation will be used to estimate U^*, and consequently the approximated ranking function R^*: C R. Thus, Summation learns a ranking over concepts and uses the ranking to generate personalized summaries. §.§.§ Generating Personalized Summaries. The summarization task is to transform the input (a cluster of documents) d to the best summary among all possible summaries, called Y(d), for the learnt preference ranking function. This problem can be defined as a sequential decision-making problem, starting from the root, sequentially selecting concepts and adding them to a draft summary.Therefore, it can be defined as an MDP problem. An MDP is a tuple (S,A,R,T), where S is the set of states, A is the set of actions, R(s,a) is the reward for performing an action (a) in a state (s), and T is the set of terminal states. In our problem, a state is a draft summary, and A includes two types of action—either adding a new concept to the current draft summary or terminating the construction process if it reaches users’ limit size. The reward function R returns an evaluation score in one of the termination states or 0 in other states. A policy π(s,a): S × A R in an MDP defines the selection of actions in state s. The goal of RL algorithms is to learn a policy that maximises the accumulated reward. The learnt policy trained on specific users’ interests is used on unseen data at the test time (in this problem to generate summaries in new and related topic documents). We defined the reward as the summation of all concepts’ importance included in the summary. A policyπ defines the strategy to add concepts to the draft summary to build a user’s desired summary. We defined π as the probability of choosing a summary of y among all possible summaries within the limit size using different hierarchy paths, Y(d), denoted as π(y). The expected reward of performing policy π, where R(y) is the reward for selecting summary y, is defined in Eq. <ref>. R^RL(π|d)= E_y ∈ Y(d)R(y)= ∑_y∈ Y(d)π(y)R(y) The goal of MDP is to find the optimal policy π^* that has the highest expected reward. Therefore, the optimal policy, π^*, is the function that finds the desired summary for a given input based on user feedback (Eq. <ref>). π^* = arg max R^RL(π|d) = arg max ∑_y ∈ Y(d)π(y) R(y) We also used the linear temporal difference algorithm to obtain π^*. The process is explained in Algorithm <ref>. § EVALUATION In this section, we present the experimental setup for assessing our summarization model's performance. We discuss the datasets, give implementation details, and explain how system output was evaluated. 
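As an implementation-level illustration of the preference-learning step of the summarizer, the following minimal numpy sketch trains a Bradley–Terry-style ranker with a linear utility by stochastic gradient ascent on pairwise preferences. The feature values, learning rate, and number of epochs are illustrative assumptions, not the settings used in our experiments.

```python
import numpy as np


def fit_preference_ranker(phi, prefs, lr=0.1, epochs=200, seed=0):
    """Bradley-Terry-style ranker with a linear utility U(l; w) = w^T phi(l).

    phi   : (n_concepts, n_features) feature matrix
    prefs : list of (i, j) pairs meaning concept i is preferred over concept j
    Returns the weight vector w learnt by stochastic gradient ascent on the
    pairwise log-likelihood."""
    rng = np.random.default_rng(seed)
    w = np.zeros(phi.shape[1])
    for _ in range(epochs):
        for i, j in rng.permutation(prefs):
            # P(i > j) under the logistic (Bradley-Terry) model
            p = 1.0 / (1.0 + np.exp(phi[j] @ w - phi[i] @ w))
            # gradient of log P(i > j) with respect to w
            w += lr * (1.0 - p) * (phi[i] - phi[j])
    return w


def rank_concepts(phi, w):
    """Order concepts by utility; equivalently, R(l_i) counts how many
    concepts have a lower utility than l_i."""
    u = phi @ w
    return np.argsort(-u), u


# toy usage: 4 concepts, 3 features, the user prefers concept 0 over the rest
phi = np.array([[1.0, 0.2, 0.0],
                [0.1, 1.0, 0.3],
                [0.0, 0.1, 1.0],
                [0.5, 0.5, 0.5]])
prefs = [(0, 1), (0, 2), (0, 3), (1, 2)]
order, utilities = rank_concepts(phi, fit_preference_ranker(phi, prefs))
print("ranking (best first):", order, "utilities:", utilities.round(2))
```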
§.§ Datasets and Evaluation We evaluated Summation using three commonly employed benchmark datasets from the Document Understanding Conferences (DUC) [Produced by the National Institute Standards and Technology (https://duc.nist.gov/)]. Each dataset contains a set of document clusters accompanied by several human-generated summaries used for training and evaluation. Details are explained in Table <ref> Automatic Evaluation. We evaluate the quality of summaries using ROUGE_N measure <cit.>[We run ROUGE 1.5.5: http://www.berouge.com/Pages/defailt.aspx with parameters -n 2 -m -u -c 95 -r 1000 -f A -p 0.5 -t 0] defined as: The three variants of ROUGE (ROUGE-1, ROUGE-2, and ROUGE-L) are used. We used the limited length ROUGE recall-only evaluation (75 words) to avoid being biased. Human Evaluation. For this purpose, we hired fifteen Amazon Mechanical Turk (AMT)[https://www.mturk.com/] workers to attend tasks without any specific prior background required. Then five document clusters are randomly selected from the DUC datasets. Each evaluator was presented with three documents to avoid any subjects' bias and was given two minutes to read each article. To make sure human subjects understood the study's objective, we asked workers to complete a qualification task first. They were required to write a summary of their understanding. We manually removed spam from our results. §.§ Results and Analysis Summation was evaluated from different evaluation aspects, first from the organiser’s output, and then concerning the hierarchical concept map (H), which can be served individually to users as the structured summarised data. Next, we evaluated H using both human and automatic evaluation techniques to answer the following questions: * Do users prefer hierarchical concept maps to explore new and complex topics? * How much do users learn from a hierarchical concept map? * How coherent is the produced hierarchical concept map? * How informative are summaries in the form of a hierarchical concept map? Personalized summaries generated on test data were also evaluated from various perspectives to analyse the effect of RL and preference learning, including: * The impact of different features in approximating the proposed preference learning. * The role of the query budget in retrieving pairwise preferences. * The performance of RL algorithm and the information coverage in terms of ROUGE. * Users' perspectives on learned summaries based on their given feedback. Hierarchical Concept Map Evaluation. To answer the questions in Sec. <ref>, we performed three experiments. First, within the same limit size as the reference summaries, we compared the summaries produced by three models—using ExDos, which is a traditional approach; using a traditional hierarchical approach <cit.>; and using a structured summarization approach <cit.> on selected documents (with ROUGE-1 and ROUGE-2 scores based on the reference summaries). The average ROUGE-1 for Summation was 0.65 and ROUGE-2 was 0.48. The structured approach <cit.> showed similar performance with ROUGE-1 and ROUGE-2 at 0.65 and 0.45, respectively. Meanwhile, traditional hierarchical approaches <cit.> produced a ROUGE-1 of 0.27 and ROUGE-2 of 0.18. In the same task, the percentage of covered unigrams and bigrams based on documents were also compared. Both Summation and the structured approach covered approximately 4% unigrams and 2% bigrams, but dropped below 1% in both cases when testing the hierarchical approaches. 
In the third experiment, all competitors’ outputs were rated based on three measures, including usability in exploring new topics, level of informativeness, and coherency. Summation’s rate for the first and second criteria was 96% and 94%, respectively. However, it was 34% for coherency. We removed all concepts with low similarity to their parents based on a different threshold at each level. After repeating the same experiment, and rate of coherency increased to 76%. Feature Analysis. Before evaluating the effect of conceptual preference, it is important to explain the ground-truth concept ranker function (U) and the approximate function (U^*), indicating the importance of concepts. To estimate the approximate function (U^*), we defined a linear model U^*(c)=W^Tϕ(c), where ϕ are the features. To this end, a set of features (whose importance was validated in ExDos) was used, including surface-level and linguistic-level features. Surface-level features include frequency-based features (TF-IDF, RIDF, gain and word co-occurrence), word-based features (upper-case words and signature words), similarity-based features (Word2Vec and Jaccard measure) and named entities. Linguistic features are generated using semantic graphs and include the average weights of connected edges, the merge status of concepts as a binary feature, the number of concepts merged with a concept, and the number of concepts connected to the concept. We defined different combinations of features with different sizes,{2,5,8,10}, starting from the most critical one. Then, we repeated the experiments for 10 cluster documents. We used the concepts included in the reference summary as preferences, and then evaluated the concept coverage in a concept map compared to the reference summaries using ROUGE-1 and ROUGE-2. The results reported in Fig. <ref> show that the model’s performance improved after adding more features. Summary Evaluation. To avoid subjectivity in the evaluation process, we used the reference summaries as feedback. The mentioned concepts that exist in reference summaries receive the maximum score by the ranked function. We compared the summaries produced by three models, including the traditional approach (ExDos), a range of hierarchical approaches <cit.>, and a structured summarization approach <cit.>, each tested on randomly selected documents from three datasets using ROUGE-1, ROUGE-2 and ROUGE-L scores based on the references summaries. The average results reported in Table <ref> show the supremacy of Summation in selecting specific contents. Query Budget Size. We also measure the effectiveness of the users' query budget size in the process. The pairwise preferences are defined based on the reference summaries, defining in a dictionary format. We selected the query size among the selection of {10,15,20,25,30,35}, demonstrating the user's number of feedback. The results are reported in Figure <ref>. As expected, by increasing the number of feedback, the ROUGE score increases significantly. However, the difference rate decreases through the process. Human Analysis. Since the goal of Summation is to help users make their desired summary, we conducted two human experiments to evaluate the model. In the first experiments, to assess the possibility of finding their desired information, they were asked to answer a given question about each topic. Their level of confidence in answering questions and their answers were recorded. An evaluator assessed their accuracy in answering questions. 
Among the fifteen workers, 86.67% were completely confident in their answers. However, 57% answered completely accurately. In another task, after querying users for feedback, we ask them to select some concepts as the summary for the test data. Then the outputs were also shown to users, and they all approved their satisfaction. Besides, an evaluator manually compared them and reported more than 80% correlation between outputs. § CONCLUSION AND FUTURE WORK Extensive information in various formats is producing from single or multiple simultaneous sources in different systems and applications. For instance, data can be structured, such as data in SQL databases, unstructured stored in NoSQL systems, semi-structured like web server logs, or streaming data from a sensor. We propose a summarization approach based on a hierarchical concept map to tackle the variety and volume of big generated data. We trained our approach using document collections as input and employed users' feedback to generate desired summaries for users, which can be extended to other data types. Many future directions are possible. First, capturing users' interests is a significant challenge in providing practical personalized information. The reason is that users are reluctant to specify their preferences as entering lists of interests may be a tedious and time-consuming process. Therefore, techniques that extract implicit information about users' preferences are the next step for making useful personalized summaries. Another potential direction is to use human feedback records to provide personalized summaries on new domains using transfer learning. Moreover, we aim to use fuzzy clustering to make a hierarchical concept map. § ACKNOWLEDGEMENT We acknowledge the Centre for Applied Artificial Intelligence at Macquarie University, Sydney, Australia, for funding this research. IEEEtran
http://arxiv.org/abs/2307.04498v1
20230710113918
RCS-based Quasi-Deterministic Ray Tracing for Statistical Channel Modeling
[ "Javad Ebrahimizadeh", "Evgenii Vinogradov", "Guy A. E. Vandenbosch" ]
cs.NI
[ "cs.NI", "cs.SY", "eess.SY" ]
RCS-based Quasi-Deterministic Ray Tracing for Statistical Channel Modeling Javad Ebrahimizadeh, Evgenii Vinogradov, Guy A.E. Vandenbosch J. Ebrahimizadeh and G. Vandenbosch are with WaveCoRE of the Department of Electrical Engineering (ESAT), KU Leuven, Leuven, Belgium. E-mail: {Javad.Ebrahimizade,Guy.Vandenbosch}@kuleuven.be E. Vinogradov is with ESAT, KU Leuven, Leuven, Belgium, also with Autonomous Robotics Research Center, Technology Innovation Institute (TII), Abu Dhabi, UAE. E-mail: [email protected]. =================================================================================================================================================================================================================================================================================================================================================================================================================================================================== This paper presents a quasi-deterministic ray tracing (QD-RT) method for analyzing the propagation of electromagnetic waves in street canyons. The method uses a statistical bistatic distribution to model the Radar Cross Section (RCS) of various irregular objects such as cars and pedestrians, instead of relying on exact values as in a deterministic propagation model. The performance of the QD-RT method is evaluated by comparing its generated path loss distributions to those of the deterministic ray tracing (D-RT) model using the Two-sample Cramer-von Mises test. The results indicate that the QD-RT method generates the same path loss distributions as the D-RT model while offering lower complexity. This study suggests that the QD-RT method has the potential to be used for analyzing complicated scenarios such as street canyon scenarios in mmWave wireless communication systems. quasi-deterministic, ray tracing, Radar Cross Section, statistical distribution, EM propagation, Cramer-von Mises test. § INTRODUCTION Wireless communication has been rapidly evolving with the advent of new technologies and the increasing demand for high-speed data transmission. Millimeter-Wave (mmWave) wireless communication is considered a promising technology for the next generation of wireless communication due to its ability to provide multi-Gbps average data rates with low latency <cit.>. This high data rate is particularly necessary for dense urban areas such as the street canyon scenario, where a large number of users demand high-speed data transmission. In this scenario, radio frequencies at mmWave bands are used to transmit data, which requires an understanding of the propagation characteristics of mmWave signals in street canyons. Recently, Facebook introduced an affordable solution for deploying high-speed data access in street canyons using mmWave Terragraph radios operating at 60 GHz for rooftop-to-rooftop or light-pole-to-light-pole links <cit.>. Since there is no closed-form scattering model available for bistatic Radar Cross Section (RCS) of irregular objects such as pedestrians and cars, numerical methods such as the Method of Moments (MoM), Geometrical Optics (GO), Physical Optics (PO), or their combinations, are typically used to calculate the bistatic RCS of these objects. However, this increases the computational complexity of the analysis, which can be especially challenging in the case of the street canyon scenario, where a large number of irregular objects need to be considered. 
While the use of the bistatic RCS model of a sphere in the METIS channel model is simple, it may not accurately represent the scattering from irregular objects in all directions. This is because a large sphere, relative to the wavelength, exhibits a constant RCS. To address this limitation, Lahuerta-Lavieja et al. developed a fast mmWave scattering model based on the 3D Fresnel model for rectangular surfaces. However, while these models are useful for certain types of objects, they may not accurately model more complex or irregular objects <cit.>. Therefore, further research is needed to develop more accurate bistatic RCS models that can be incorporated into channel models for a more comprehensive analysis of wireless communication systems in real-world scenarios. Myint et al. demonstrated the feasibility of modeling the bistatic RCS of intricate objects using a closed-form statistical distribution function. They found that the bistatic RCS of cars conforms to a logistic distribution and applied this model to various vehicle types, including passenger cars, vans, and trucks, at sub-6 GHz frequency. However, they did not validate this Probability Density Function (PDF) model in a practical channel environment <cit.>. The present paper introduces a low-complexity quasi-deterministic ray tracing method that takes advantage of the statistical distribution of bistatic RCS of irregular objects for calculating scattering instead of its exact values, as done in deterministic ray tracing. The method uses the Physical Optics (PO) method to calculate the bistatic RCS of irregular objects at mmWave and assigns a suitable Probability Density Function (PDF) to them. This approach significantly reduces the complexity of the ray tracing method. The QD-RT method is verified numerically by calculating the path loss due to irregular objects in a realistic street canyon scenario. The main contributions of the paper are: * Development of a quasi-deterministic ray tracing technique based on dedicated PDFs of bistatic RCSs of objects. * Deriving the probability density function of the area coverage for a specific street canyon scenario in spherical coordinates. The rest of the paper is organized as follows. Section <ref> describes the quasi-deterministic ray tracing method. Section <ref> validates the quasi-deterministic propagation technique. Finally, the paper is concluded in Section <ref>. § QUASI-DETERMINISTIC RAY TRACING METHOD In this section, we provide a comprehensive overview of the street canyon topology and the theory of deterministic electromagnetic (EM) propagation in the scenario. Additionally, we outline the quasi-deterministic and statistical channel models used in the study and their corresponding parameterization. §.§ street canyon scenario topology The topology of the street canyon scenario is shown in Figure <ref> with two tall buildings on either side of the street. The street has a length of W_2 and a width of L_1, and there is a sidewalk on both sides of the street with a width of W_1. In this scenario, there are scattering objects such as lampposts, parked cars, and pedestrians placed on the street. The walls of the buildings have a thickness of D_w and are made of bricks with a relative permittivity of ϵ_r,w at operational frequency f_0. The transmitter and receiver antennas are omnidirectional antennas with vertical polarization, and they are located at positions (X_tx, Y_tx, Z_tx) and (X_rx, Y_rx, Z_rx), respectively. 
The lampposts have a radius of R_l and a length of L_l, and they are equidistantly positioned on both sides of the street with a separation distance of d_l. The scenario dimensions and parameter values are provided in Table <ref>. §.§ deterministic propagation The propagation of EM waves in the street canyon scenario includes the Line-of-Sight (LOS), reflection, and scattering paths, without considering shadowing and diffraction. The LOS, reflection (from walls and ground), and scattering components can be modeled as: §.§.§ LOS propagation H_0(ω)=a_0e^jωτ_0, where | a_0| ^2=(λ/4π r_0)^2 is the LOS propagation loss, the corresponding path loss in dB is PL=-20 log_10(|a_0|), and τ_0=r_0/c_0 is the propagation time. §.§.§ reflections from ground and walls H^r(ω)=a^re^jωτ_r, where | a^r| ^2=(R^TE/TMλ/4π (r_1+r_2))^2, the corresponding path loss in dB due to reflection is PL=-20 log_10(|a^r|), and τ^r=(r_1+r_2)/c_0 is the propagation time; r_1 is the distance from the TX to the specular point and r_2 is the distance from the specular point to the RX. The reflection coefficient R^TE/TM is evaluated for both TE and TM polarization for dielectric slab (wall) and half-space (ground) media <cit.>. §.§.§ Scattering from objects H^s(ω)=a^se^jωτ_s, where | a^s| ^2= 1/4π r_1^2×σ_rcs×1/4π r_2^2×λ^2/4π, the corresponding path loss in dB due to scattering is PL=-20 log_10(|a^s|), and the propagation time is τ^s=(r_1+r_2)/c_0; r_1 and r_2 are the distances between the scatterer and the RX and TX, respectively, and σ_rcs is the bistatic RCS of the scatterer. In this paper, the bistatic RCS values of complex objects (such as cars or pedestrians) are computed with the Physical Optics Gordon method, and those of regular-shaped objects (e.g., lampposts) are computed with the closed-form model of the RCS of a conducting cylinder <cit.>. §.§ quasi-deterministic propagation In the quasi-deterministic ray tracing (QD-RT) method, the PDF of the bistatic RCS in (<ref>) is used instead of its exact value, so that the computational complexity decreases drastically. The QD-RT method is a low-complexity technique for statistical analysis and modeling of the channel, for which Monte-Carlo simulations are performed. A Monte-Carlo simulation has variables that are randomly varied during each iteration. For example, for the statistical analysis of the path loss due to an irregular object using (<ref>), the distances between the object and the TX and RX antennas, denoted by r_1 and r_2, are treated as the Monte-Carlo variables and modeled as the independent random variables X_1 and X_2. Therefore, based on (<ref>), the path loss is a random variable given by PL(X_1,X_2) ∼ A_0- 40× log_10(X_1+X_2)-10× log_10(σ_rcs) where A_0=-10log_10((4π)^3 ×λ^2) is a constant value. According to (<ref>), using the PDF of the bistatic RCS of objects generates the same distribution for the path loss as using the exact values of the bistatic RCS. To model the bistatic Radar Cross Section (RCS) of an irregular object using a Probability Density Function (PDF), a dataset of bistatic RCS values for all incident and scattered angles must be generated. It is important to note that the contribution of the bistatic RCS at different angles to this dataset is not uniform and depends on the specific scenario being tested.
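The following short Python sketch illustrates how such a Monte-Carlo evaluation of the scattering path loss can be set up: the bistatic RCS (in dBsm) is drawn from a fitted logistic distribution instead of being recomputed for every geometry. The uniform distance sampling, the logistic scale parameter, and the 60 GHz wavelength used below are illustrative assumptions only, not the exact scenario geometry of the paper.

```python
import numpy as np


def qd_rt_path_loss_samples(n_samples, wavelength, rcs_mean_db, rcs_scale_db,
                            street_length, seed=0):
    """Monte-Carlo sketch of the QD-RT scattering path loss,
    PL = A0 - 40*log10(X1 + X2) - 10*log10(sigma_rcs),
    with the bistatic RCS drawn from a logistic PDF instead of being
    computed deterministically for every geometry."""
    rng = np.random.default_rng(seed)
    A0 = -10.0 * np.log10((4.0 * np.pi) ** 3 * wavelength ** 2)
    # X1, X2: random TX-object and object-RX distances along the street
    x1 = rng.uniform(1.0, street_length, n_samples)
    x2 = rng.uniform(1.0, street_length, n_samples)
    # bistatic RCS in dBsm drawn from the fitted logistic distribution
    rcs_db = rng.logistic(loc=rcs_mean_db, scale=rcs_scale_db, size=n_samples)
    sigma = 10.0 ** (rcs_db / 10.0)
    return A0 - 40.0 * np.log10(x1 + x2) - 10.0 * np.log10(sigma)


# toy usage at 60 GHz with placeholder logistic parameters for a pedestrian
pl = qd_rt_path_loss_samples(1000, wavelength=0.005, rcs_mean_db=6.17,
                             rcs_scale_db=2.0, street_length=100.0)
print(f"median path loss: {np.median(pl):.1f} dB")
```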
In the case of a street canyon scenario, the angular dependency of the bistatic RCS in creating the dataset follows a specific equation: f_Θ, Φ(θ,ϕ)= (Δ z)^2sin(θ)/2L_1W_2 cos^3(θ) , if a/Δ z × sin(ϕ)<θ<b/Δ z × sin(ϕ) ϕ_0 < ϕ < π - ϕ_0, 0 ,  otherwise where θ and ϕ are the elevation and azimuth angles in spherical coordinates. Δ z is the differential height between the object and the TX (RX). Here, the elevation angle is limited by the lines Y_w = a and Y_w = b, and the azimuth angle is bounded by ϕ_0 = 2a/L_1, see Fig. <ref>. § SIMULATION RESULTS The purpose of this study is to validate the quasi-deterministic ray tracing (QD-RT) method by comparing it with the deterministic ray tracing (D-RT) method for a street canyon scenario. To accomplish this, the path loss and excess time delay distributions due to a pedestrian (parked cars) located randomly on the sidewalk (along the street) in the street canyon scenario shown in Fig. <ref>, with dimensions listed in Table <ref>, are numerically calculated using both methods. The PDFs of the bistatic RCS for the pedestrians and parked cars are first obtained using the Physical Optics method. Both the pedestrian and the parked car are observed to follow logistic distributions, as listed in Table <ref>. The mean values for a car and a pedestrian are 11 and 6.17 dBsm, respectively. However, the maximum values (corresponding to the specular points) for a car and a pedestrian are around 60 dBsm and 40 dBsm, which yields a considerable difference of approximately 20 dBsm. A total of 1000 Monte-Carlo simulations with n ∈{1, ..., 10} pedestrians randomly positioned on the sidewalks is then performed, and the resulting path loss distributions are fitted to Weibull distributions with scale and shape parameters. Excess time delay distributions for both pedestrians and parked cars are also calculated, with lognormal distributions observed. The statistical parameters of the path loss and excess time delay distributions are presented in Fig. <ref> and Table <ref>, respectively. This study demonstrates that the QD-RT method offers the same path loss distributions as the D-RT method with lower complexity, making it a promising approach for analyzing complex scenarios such as street canyon scenarios in mmWave wireless communication systems. § CONCLUSION In conclusion, the proposed quasi-deterministic ray tracing method, which uses a statistical bistatic distribution to model the Radar Cross Section of various irregular objects, showed promising results in analyzing the propagation of electromagnetic waves in street canyon scenarios. The method provided the same path loss and excess time delay distributions as the deterministic ray tracing model while offering lower complexity. The study also found that the scenario-specific PDF of the bistatic RCS of irregular objects followed logistic distributions, and that the path loss and excess time delay followed Weibull and lognormal distributions, respectively. This study highlights the potential of the QD-RT method for analyzing complicated scenarios, such as street canyon scenarios, in mmWave wireless communication systems. § ACKNOWLEDGMENTS The present work received funding from the European Union’s Framework Programme for Research and Innovation Horizon 2020 under Grant Agreement No. 861222 (MINTS project). IEEEtran
http://arxiv.org/abs/2307.03980v1
20230708140837
Building and Road Segmentation Using EffUNet and Transfer Learning Approach
[ "Sahil Gangurde" ]
cs.CV
[ "cs.CV", "cs.LG", "eess.IV" ]
Building and Road Segmentation Using EffUNet and Transfer Learning Approach Sahil Gangurde ABV-Indian Institute of Information Technology & Management, Gwalior, India [email protected] =========================================================================================================================== In city, information about urban objects such as water supply, railway lines, power lines, buildings, roads, etc., is necessary for city planning. In particular, information about the spread of these objects, locations and capacity is needed for the policymakers to make impactful decisions. This thesis aims to segment the building and roads from the aerial image captured by the satellites and UAVs. Many different architectures have been proposed for the semantic segmentation task and UNet being one of them. In this thesis, we propose a novel architecture based on Google's newly proposed EfficientNetV2 as an encoder for feature extraction with UNet decoder for constructing the segmentation map. Using this approach we achieved a benchmark score for the Massachusetts Building and Road dataset with an mIOU of 0.8365 and 0.9153 respectively. segmentation,urban planning, state-of-the-art, mask, road, building § INTRODUCTION With the increasing population, city areas will increase, so the road network and building networks will get congested and intertwined. It will be difficult for humans to look at the aerial views of the scene and create proper layouts of the roads and buildings. Land cover segmentation has been in the picture for a very long time. The area of unmanned aerial vehicles (UAVs) has seen significant growth in attention in recent years, particularly in research and industry. As unmanned aerial vehicles become more commercially successful, aerial photographs provide a new and intriguing study avenue. Integrating drones with computer vision is a unique and demanding notion that allows unmanned aerial vehicles to grasp the overflown region. The process of aerial image interpretation entails inspecting aerial images for the express goal of detecting numerous distinguishing qualities of the objects of interest. Several stages are required to acquire complete scene comprehension from an aerial photograph. Given a picture, a segmentation phase is used to separate the scene into sections of specific categories (such as residential areas, flood, woodland, roads, and so on), essentially seeing the entire environment as a completely linked location with all categories interacting with each other. Semantic segmentation is the process of segregating different parts of images into predefined classes. It helps identify different labels in the image and pinpoint the exact map of it. Various problems related to medical imagery, satellite imagery, and urban planning can be solved by automating the process of detecting and segmenting multiple objects associated with the corresponding domain. The ability to recognize various objects from UAV images, such as railway lines, water bodies, forests, and other categories, could be beneficial in multiple applications, including creating and maintaining maps of cities, improving urban planning, noting environmental changes, and disaster relief. Our study focuses on creating effective ways of recognizing buildings from top-down aerial photos and establishing an efficient automatic system capable of identifying individual structures. In this paper segmentation on aerial images is performed to extract building mask. 
Then the project explores road segmentation and can be further extended for other classes. § RELATED WORK performed road network segmentation from SAR images using FCN <cit.>. They evaluated three models, FCN-8s, VGG19 with UNet, and DeepLabv3+. The paper uses inferior backbone models along with UNet <cit.>. This is a major drawback and achieved very low accuracy over the custom dataset. proposed stacking two UNets and generating the output mask <cit.>. The input image is first divided into blocks of 224x224 pixels and trained of the two UNets. The patches are then again converted into the real segmented mask. Though it gave promising results on the Massachusetts Building Dataset and Inria's Aerial dataset, the problem of converting the image into different patches and then reconstructing the image again is computationally expensive. proposed EU-Net to perform building segmentation <cit.>. The paper uses a dense spatial pyramid pooling structure(DSPP) after the encoder network to increase the multi-class feature extraction. The decoder used is a UNet architecture. The DSPP block achieved better results than normal UNet in performing segmentation of different sizes of buildings. proposed using an attention mechanism in the encoder to extract the features <cit.>. The encoder then produces the segmentation mask, transferring it to the edge detection block, which makes road edges. Thus, using a hybrid encoder mechanism provided very high accuracy for the road segmentation task. used of attention mechanism in the encoder to extract the features <cit.>. The proposed model employed a hybrid encoder separated into two parts: the first harvests full-resolution features, while the second creates high-resolution feature encoding. The second half, on the other hand, employs max-pooling layers to expand our network's entire receptive field, providing the network with adequate context information to operate with. Before the features from both sections are combined, a 2D activation map is constructed for each portion, letting the network choose how much attention to devote to the features from each encoder step. This helped the segmentation of huge roadways and the development of fine-edged segmentation masks used a novel vision transformer network to perform building segmentation <cit.>. The transformer simultaneously captured global and spatial detail contexts using a dual path structure for accurate building segmentation. The disadvantage of this approach was that to gather the global context; the search window size has to be large, which causes very high computation resources. § PROPOSED WORK The problem statement can be formulated as follows: * Develop models to segment buildings and roads from the urban environment and generate the mask for the same. * Evaluate the models for segmentation metrics and choose the best one. In this paper, we combine the state-of-the-art CNN architectures like Resnet50<cit.> and variations of EfficientNet<cit.> as encoders for UNet architecture and train them on the Massachusetts dataset<cit.>. This will generate the mask of the roads and buildings; hence, the two can be identified from the actual image. Figure <ref> shows the complete process of steps involved to achieve our goal. § DATASET The datasets used in this project are Massachusetts Road dataset and Massachusetts Building dataset<cit.>. Both the datasets have a aeriel view of the Boston city and the corresponding segmented masks of roads and building. 
§.§ Building Dataset The dataset used is the Massachusetts Buildings Dataset. It includes a total of 151 images shot from a UAV in the Boston region. A single image has dimensions of 1500 x 1500 pixels. Each image covers an approximate area of 2.25 sq km of land. The whole dataset spans a region of 340 sq km. The dataset has been split into three parts: * Training Data: 137 images * Validation Data: 4 images * Test Data: 10 images The segmentation masks are created by using the building footprints of the OpenStreetMap project. The dataset covers urban and suburban parts of Boston. The building labels include houses, buildings, and garages of various sizes. The images are made available by the Massachusetts government. The computationally generated segmentation masks were further hand-corrected for higher accuracy during model training. Figure <ref> shows sample images and their masks from the building dataset. §.§ Road Dataset The Massachusetts Roads Dataset contains 1171 photos captured from a UAV. Each picture has a resolution of 1500 x 1500 pixels. A single photo covers an area of 2.25 sq km. The images were randomly divided into three sets: * Training Data: 1108 images * Validation Data: 14 images * Test Data: 49 images The dataset spans over 2600 square kilometers and includes many urban, suburban, and rural areas. The test site alone spans 110 square kilometers. The segmentation masks are created by using the road centerline footprints of the OpenStreetMap project. The labels, i.e., the centerlines, are then given a thickness of 7 pixels. All pictures are rescaled to a resolution of 1 pixel per square metre. Figure <ref> shows sample images and their masks from the road dataset. § MECHANISM/ALGORITHMS We will train the following models as encoders to the UNet on the above datasets. The encoder's general functionality is to extract the features present in the image using the mask labels. These models will extract the necessary details from the images, and the decoder will reconstruct the mask for the input. §.§ Encoder-Decoder Architecture The encoder is a CNN model which extracts the features from the image. The encoder downsamples the image and reduces the feature map resolution so that it captures the high-level details from the original image. This practice is followed by many SOTA models, such as ResNet<cit.>. It is a common practice in CNN architectures to reduce the size of the input image to extract the high-level details. It is challenging to create a segmentation map based on the final feature map of the encoder due to its reduced size. A decoder network consists of a set of layers that upsample the feature maps extracted from the encoder network to recover the information. Figure <ref> illustrates this architecture. §.§ UNet The UNet architecture was created for biomedical image segmentation<cit.>. The UNet has two parts: the encoder and the decoder. The encoder extracts the features from the input image, and the decoder achieves exact localization using transposed convolutions. The encoder consists of only convolutional and max pooling layers. It was mainly developed for medical image segmentation, but for our task we will use this model along with other encoders to obtain the segmentation mask. §.§ EfficientNet EfficientNet was introduced with a compound scaling strategy for convolutional networks. Figure <ref> displays the architecture of EfficientNet<cit.>. The ResNet<cit.> architecture showed that accuracy increases as the depth of the network increases.
But at some point the accuracy of the network cannot be increased due to the problem of vanishing gradient. To solve this issue, scaling must be performed in all dimensions i.e. depth, width and resolution. EfficientNet introduced a new method called 'compound scaling' through which each of these above mentioned parameters get scaled by a factor ϕ. The parameters for scaling are given in <ref>. depth(d) = α^ϕ width(w) = β^ϕ resolution(r) = γ^ϕ such that, (αβ^2γ^2)^ϕ≈2 where α≥1, β≥1and γ≥1 With ϕ = 1 using grid search authors came up with the value of α=1.2, β=1.1 and γ=1.15. Now keeping these value as constant we can change the factor ϕ to get the scaled models from EfficientNetB1 upto EfficientNetB7. §.§ EfficientNetV2 Compound scaling in EfficientNet scales all the parameters of the model by the factor of ϕ. This type of scaling is not necessary as this scales in all parameters. So EfficientNet gives less control over the model parameters. Also in EfficientNet as the size of the image increases we need to decrease the batch size. This increase in image size needs more time to compute the features. EfficientNet uses MBConv layer which uses depth wise convolution which is an expensive operation. The motive behind EfficientNetV2<cit.> was to create a CNN model with the motive to increase the accuracy(A) while decreasing the training step time(S) and having less parameters(P). Basically max(A) while min(S^w, P^v) where w and v are experimentally determined. To solve the problem of less parameters and less training time NAS was used to create a model with the above given objective function. To reduce the depthwise convolution time, proposed Fused MBconv method. In Fused MBConv instead of performing a depthwise convolution and we perform a convolution with a filter of 3x3. As the depthwise convolution performs multiplication over all channels by removing it we reduce the computation cost and create faster models. The Figure <ref> shows the MBConv operation. § TECHNOLOGIES USED FOR IMPLEMENTATION The problem involves solving the segmentation task of various deep learning libraries, matrix manipulation libraries, image processing libraries, and plotting libraries are used. Table <ref> shows the different libraries, frameworks, and other technologies used in this project. Most of the code runs were done in the Kaggle environment. Kaggle environments are backed by Google Cloud, which provides free computation power to run ML tasks. § DATA PROCESSING Following are the steps involved in data processing: * One-hot encoding: For all the images, perform one hot encoding. One hot encoding is the process of converting the pixel values into number of the class we want the image to be segmented. Figure <ref> shows the original image, real mask and the constructed one hot encoded mask of the image. * Augmentation: Perform random horizontal flip, vertical flip and 90 degree rotation on the images and their corresponding masks. * Padding: The encoder models are implemented in such a way that the padding is added to arbitrary input size to match the input size of various encoders. * Dataset Loader: Create a data loader for model to train with input as image and the label as the one hot encoded mask. § RESULTS AND DISCUSSION §.§ System Configuration All the models are trained on Kaggle with Google Cloud backbone. Table <ref> shows the system parameters of the environment under which the models are trained. 
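For concreteness, the sketch below shows one minimal PyTorch-style realization of the preprocessing steps listed above (one-hot mask encoding, a random flip augmentation, and a dataset loader). The tile size, class count, and random arrays standing in for the 1500 x 1500 aerial images are placeholders, not the actual pipeline used for the reported results.

```python
import numpy as np
import torch
from torch.utils.data import Dataset

NUM_CLASSES = 2  # background and building (or road)


def one_hot_mask(mask, num_classes=NUM_CLASSES):
    """Turn an HxW integer label mask into a num_classes x H x W one-hot array."""
    return np.stack([(mask == c).astype(np.float32) for c in range(num_classes)])


class AerialSegDataset(Dataset):
    """Pairs an aerial image with its one-hot segmentation mask."""

    def __init__(self, images, masks, augment=True):
        self.images, self.masks, self.augment = images, masks, augment

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        img = self.images[idx].astype(np.float32) / 255.0   # HxWx3
        mask = self.masks[idx]                               # HxW integer labels
        if self.augment and np.random.rand() < 0.5:          # random horizontal flip
            img, mask = img[:, ::-1].copy(), mask[:, ::-1].copy()
        x = torch.from_numpy(img.transpose(2, 0, 1))          # 3xHxW
        y = torch.from_numpy(one_hot_mask(mask))               # CxHxW
        return x, y


# toy usage with random data standing in for the aerial tiles
imgs = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(4)]
msks = [np.random.randint(0, NUM_CLASSES, (64, 64), dtype=np.uint8) for _ in range(4)]
ds = AerialSegDataset(imgs, msks)
x, y = ds[0]
print(x.shape, y.shape)
```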
§ EVALUATION METRICS The given models will be evaluated on two metrics - Intersection Over Union(IOU) and F1 Score. * Intersection Over Union (IOU) - Intersection Over Union is also known as the Jaccard index, is used to calculate the percentage of overlap between the true mask and the predicted output mask. IOU = y ∩ y^'/y ∪ y^' The intersection consists of the pixels found in the true mask and the predicted mask and the union consists of the pixels containing the true mask as well as the predicted mask. Equation <ref> shows the formula for IOU calculation. * F1 Score - F1 score or dice coefficient calculates the overlap of two masks. The values of the dice coefficient lie between 0 and 1 inclusive where 1 denotes perfect overlap and 0 represent no overlap. The equation <ref> shows the formula for Dice coefficient. Dice Coefficient = 2 * |y ∩ y^'|/|y| ∩ |y^'| The loss function for the neural network to minimise is given by 'Dice Loss' and is shown in <ref>. Dice Loss = 1 - 2∑_pixels^yy^'/∑_pixelsy^2 + ∑_pixelsy^'2 * Accuracy - Accuracy is defined as the ratio of sum of how many pixels in the image are correctly identified as the true segmented pixel and the number of pixels not identified as true segmented correctly to all the pixels present. Basically in terms of true position, negative and false positive negative in terms of pixel values accuracy can be defined as given in equation <ref> Accuracy = TP + TN/TP + TN + FP + FN * Precision - It shows the purity of the positive detection relative to the ground truth values. In precision TP mask having an IOU of above threshold, while FP represent the mask having an IOU of below threshold. Precision = TP/TP + FP * Recall - Recall determines the completeness of positive prediction with respect to ground truth label. Equation <ref> represents the formula to calculate the recall. Recall = TP/TP + FN §.§ Building Segmentation The goal of this experiment is to detect building mask from the input aerial image. Five different encoders are tested on UNet and the experiments are carried out by training the models on dataset and finding the IOU score and dice loss. The parameters for all the models are same. Table <ref> shows in-detail configuration of the parameters. §.§ Road Segmentation The goal of this experiment is to detect road mask from the input aerial image. Similiar to the building segmentation, five different encoders are tested on UNet and the experiments are carried out by training the models on dataset and finding the IOU score and dice loss. The parameters for road segmentation task are given in the Table <ref>. These parameters are common for all the models in this experiment. §.§ Results and discussion The models are tested on the test data, and the results obtained are shown in Table <ref> and <ref> for building and road segmentation respectively. The scores written in bold represent the best score achieved under a particular metric. §.§ Benchmarks The results derived from the experiments outperform the benchmark scores for both datasets. Table <ref> shows the recent papers presented for the building dataset, and the best accuracy achieved has been presented in this paper. Also, in Table <ref> existing models are compared concerning mIOU and mDice for the road dataset. The models presented in the paper set new benchmark scores for the Massachusetts dataset. § LIMITATIONS AND FUTURE SCOPE The size of the input image is a massive problem for UAV-based segmentation. 
Very high GPU memory is required for images with higher dimensions to load the model with weights. Standard images consist of 3 channels, but satellite images can contain more than three channels. In that case, the whole UNet architecture must be changed to fit the extra 3rd dimension data. In this paper, only roads and buildings are segmented as a part of urban object segmentation. Aerial images from different cities can be taken, and masks for various classes such as manholes, power lines, railway tracks, etc. must be created to expand the segmentation classes and allow more objects to be segmented in the urban environment. Attention mechanism must be explored on EfficientNet+Unet architecture to improve the accuracy further. § CONCLUSION Based on the experiments, we can conclude that for building and road segmentation, UNet architecture with a pre-trained encoder is the best architecture to be used. Using transfer learning, the training time and GPU cost are reduced, and the accuracy of the models is very high. The problems discussed in the research gaps regarding transfer learning are filled, and models pre-trained with an imagenet dataset were used. The thesis presents the new benchmark score for the Massachusetts Building and Road dataset. For the building segmentation task, EfficientNetV2L+UNet achieved an IOU of 0.8365, and for the road segmentation task, EfficientNetB7+UNet gave an IOU of 0.9153. IEEEtranN
http://arxiv.org/abs/2307.04348v2
20230710051628
Full statistics of non-equilibrium heat and work for many-body quantum Otto engines and universal bounds: A non-equilibrium Green's function approach
[ "Sandipan Mohanta", "Bijay Kumar Agarwalla" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "cond-mat.mes-hall" ]
http://arxiv.org/abs/2307.05956v2
20230712070012
Language-Routing Mixture of Experts for Multilingual and Code-Switching Speech Recognition
[ "Wenxuan Wang", "Guodong Ma", "Yuke Li", "Binbin Du" ]
cs.SD
[ "cs.SD", "eess.AS" ]
An adaptive approach to remove tensile instability in SPH for weakly compressible fluids [ August 12, 2023 ======================================================================================== Multilingual speech recognition for both monolingual and code-switching speech is a challenging task. Recently, based on the Mixture of Experts (MoE), many works have made good progress in multilingual and code-switching ASR, but present huge computational complexity with the increase of supported languages. In this work, we propose a computation-efficient network named Language-Routing Mixture of Experts (LR-MoE) for multilingual and code-switching ASR. LR-MoE extracts language-specific representations through the Mixture of Language Experts (MLE), which is guided to learn by a frame-wise language routing mechanism. The weight-shared frame-level language identification (LID) network is jointly trained as the shared pre-router of each MoE layer. Experiments show that the proposed method significantly improves multilingual and code-switching speech recognition performances over baseline with comparable computational efficiency. Index Terms: mixture of experts, language identification, multilingual, code-switch, speech recognition § INTRODUCTION Multilingualism is a widespread phenomenon in the world. Multilingual speakers often communicate in multiple languages simultaneously, such as interspersing English in Mandarin. Therefore, a practical multilingual speech recognition system needs to support the recognition of monolingual and code-switching utterances in multiple languages. End-to-end (E2E) ASR systems<cit.> have become more and more popular recently due to the simple pipeline, excellent performance and less dependence on linguistic knowledge compared to traditional hybrid methods<cit.>. Prior works based on the E2E model have also made good progress in the field of multilingual ASR, including code-switching corpus synthesis<cit.>, multi-task training with joint language identification<cit.>, self-supervised speech representation learning<cit.>, cross-lingual transfer learning<cit.>, etc. The MoE architecture is an effective method to improve the performance of multilingual speech recognition in both monolingual and code-switching scenarios, which has been widely concerned and studied recently. The existing MoE-based methods <cit.> extract language-specific representations separately by independent encoders and fuse them to decode. Mostly, the computational complexity of the models will increase significantly with the number of supported languages. In this work, we propose a computation-efficient network named Language-Routing Mixture of Experts (LR-MoE) to improve the performance of the multilingual and code-switching ASR task. The LR-MoE architecture consists of a shared block and a Mixture-of-Language-Experts (MLE) block. Unlike the sparsely-gated mixture of experts (sMoE) <cit.>, the expert layers in the MLE block are language-dependent, which is called Language-Specific Experts (LSE). The shared block generates the global representation, while the LSE of the MLE block extracts language-specific representations. In the MLE block, we design a Frame-wise Language Routing (FLR) mechanism, which guides the expert layers to learn language specialization at the training stage. A weight-shared frame-level language identification (LID) network is jointly trained as the shared pre-router of each LSE layer, and the alignment of frame-wise LID will be used as the routing path of the LSE layers. 
We also compare utterance-wise and frame-wise language routing for LR-MoE in the multilingual and code-switching experiment. To distinguish them, we name the two networks with different routing as ULR-MoE and FLR-MoE, respectively. Our contributions are summarized as follows: * We propose a computation-efficient LR-MoE architecture, which is suitable for application to more languages with little increase in computational complexity. * We investigate multiple routing strategies of MoE and propose an FLR mechanism to guide the expert layers to learn language specialization, which is compatible with both multiple monolingual and code-switched ASR. * In Mandarin-English code-switching and multilingual experiments, the proposed method significantly improves the performances of multilingual and code-switching speech recognition over the baseline with comparable computational efficiency and outperforms previous MoE-based methods with less computational complexity. § RELATED WORKS AND MOTIVATION §.§ Previous MoE-based works More recently, many works <cit.> focus on exploring MoE architectures to recognize monolingual and intra-sentence code-switching speech. These MoE-based methods mainly utilize language-specific expert encoders to generate parallel language-specific representations and fuse them; they differ primarily in the fusion mode of the expert encoders and the training strategy. For example, the Bi-encoder transformer network <cit.> uses a gated network to dynamically output the MoE interpolation coefficients to mix the two encoding representations. The weights of the expert encoders are initialized with the corresponding pretrained monolingual models. The conditional factorized neural transducer <cit.> defines monolingual sub-tasks with label-to-frame synchronization to achieve unified modeling of code-switching and monolingual ASR. The language-aware encoder <cit.> learns language-specific representations through language-aware training with a language-specific auxiliary loss instead of monolingual pretraining and uses frame-wise addition to fuse them. §.§ Motivations As mentioned above, the previous MoE-based works achieved considerable improvement in monolingual and code-switching ASR, but the following problems remain: * These approaches need to compute all language-specific blocks, even though only one is active in a monolingual scene. This means a large amount of redundant computational overhead, and the more languages are supported, the larger this redundant overhead becomes. * Language-specific blocks are isolated from each other and lack interaction. As a result, cross-linguistic contextual information is easily lost in code-switching speech. In order to alleviate the above two issues, we propose the LR-MoE architecture inspired by the sparsely-gated mixture of experts <cit.>. Please refer to Section 3 for more details. § PROPOSED METHOD §.§ Sparsely-Gated Mixture of Experts The sMoE module is shown in Fig. <ref>(a). As a representative, Switch Transformer <cit.> adopts a top-1 expert routing strategy in the MoE architecture to route the data samples to the expert model with the highest probability in the gated network. The computational complexity of the whole network increases only slightly as the number of experts increases, and the extra computational overhead comes only from the gated network. The inputs of the expert layer and the gated network are the outputs of the previous non-expert layer o_ne.
The router probability p can be expressed as follows: p = Softmax(W_r· o_ne + b_r), (1) where W_r and b_r are the weights and bias of the router, respectively. An auxiliary loss is added to guarantee load balance across the experts during training. The balance loss is expressed as: ℒ_b = n ·∑_i=1^n f_i· p_i, (2) where f_i is the fraction of samples dispatched to the i-th expert and n is the number of experts. §.§ Architecture of LR-MoE In order to strengthen the interaction of cross-lingual context information, we further expand the shared parts, such as the attention layers in the Transformer <cit.> network. This is different from the separate language-specific encoder <cit.>. Fig. <ref> shows the LR-MoE-based Transformer model. The shared block is a stack of standard transformer blocks. In contrast to the standard transformer block, we introduce the MLE FFN module, as shown in Fig. <ref>(b), to strengthen language-specific representations in the MLE block. All MLE modules share the same language router, a frame-level LID-gated network placed in front of them. According to the top-1 language predicted by the FLR, data samples are routed to the corresponding LSE layer. For each time frame, only one LSE is routed to, so the computational complexity of the model does not increase with more languages. §.§ Language Routing §.§.§ LID-Gated Network Considering that LID can be regarded as a low-dimensional subtask of ASR and that the output of the non-expert layer o_ne already contains rich high-dimensional linguistic information, we model the frame-level LID task with a linear layer as follows: r = W_r· o_ne + b_r. (3) A frame-level LID auxiliary loss is added to jointly train the LID and ASR tasks at the training stage. We get the auxiliary labels Y_lid by replacing the tokens in the text labels with the corresponding language IDs. Then, based on r, the LID-CTC loss is adopted to get the token-to-frame alignment, as shown in Eq. (<ref>): ℒ_lid-ctc = - log P_CTC(Y_lid | r). (4) Due to the sparse spike property of Connectionist Temporal Classification (CTC), the greedy decoding result of the LID output z_t will contain a large number of blanks. Therefore, we adopt a simplified alignment strategy to get dense frame-wise language routing information as follows: z_t = { z_f, if t = 0; z_t, if z_t ≠ ϕ; z_t-1, if z_t = ϕ }, (5) where z_t ∈ {language IDs} ∪ {ϕ}, t = 0, 1, 2, …, T, and z_f is the first non-blank element. Besides, we also use an utterance-wise LID-gated network with the cross-entropy (CE) loss for comparison. The utterance-wise LID loss is as follows: ℒ_lid-utt = CE(r_u, U_lid), (6) where r_u is the global average pooling of r over the time dimension and U_lid is the language ID of the utterance. The final objective function is shown in Eq. (<ref>): ℒ_mtl = ℒ_asr + λ_lid ℒ_lid, (7) where λ_lid is selected by hand, and the ℒ_lid of ULR and FLR corresponds to ℒ_lid-utt and ℒ_lid-ctc, respectively. §.§.§ Shared Router Unlike sMoE <cit.>, we use a shared router instead of an independent router for each MLE layer, mainly due to the following considerations: the independent router of each MoE layer in sMoE is helpful in obtaining more diverse routing paths and a larger model capacity. However, in LR-MoE the expert layers are language-specific and the desired routing paths are determined a priori. Therefore, the shared LID router might be helpful in reducing additional computation and the multi-level error accumulation caused by alignment drift of the language routing.
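To make the frame-wise routing concrete, the following is a minimal NumPy sketch of how the dense routing path of Eq. (5) could be obtained from the frame-level LID output and used for top-1 expert dispatch. It is an illustrative sketch only: the function and variable names, the blank index, and the dispatch convention are assumptions of this sketch, not the authors' implementation.

import numpy as np

def frame_language_routing(lid_logits, blank_id=0):
    # lid_logits: (T, n_languages + 1) frame-level LID scores, with `blank_id` the CTC blank.
    z = lid_logits.argmax(axis=-1)            # greedy decoding of the LID-CTC output
    non_blank = z[z != blank_id]
    if non_blank.size == 0:
        return np.zeros_like(z)               # degenerate case: no language predicted
    route = np.empty_like(z)
    route[0] = non_blank[0]                   # z_f: the first non-blank element
    for t in range(1, len(z)):
        # keep non-blank predictions; otherwise copy the previous frame's language
        route[t] = z[t] if z[t] != blank_id else route[t - 1]
    return route                              # route[t] indexes the LSE used at frame t

Each MLE layer would then dispatch frame t only to expert route[t], which is what keeps the per-frame cost roughly independent of the number of supported languages.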
§.§ Pretrained Shared Block The bottleneck features of ASR are effective in transfer learning for LID <cit.>. Therefore, we utilize a pretrained shared block to speed up the convergence of the LID-gated network and reduce the bad gradient back-propagation caused by erroneous routing paths, especially at the early training stage. § EXPERIMENTS §.§ Datasets Our experiments are conducted on the ASRU 2019 Mandarin-English code-switching Challenge dataset<cit.> and a four-language dataset including Aishell-1 (CN) <cit.>, the train-clean-100 subset of Librispeech <cit.> (EN), Japanese (JA), Zeroth-Korean (KR) [https://openslr.org/40/] and Mandarin-English code-switching (CN-EN) data. JA and CN-EN are collected from Datatang [https://www.datatang.com/]. Tables <ref> and <ref> show the details of all experimental datasets. For all the experiments, the acoustic features are 80-dimensional log filter-bank energies extracted with a stride size of 10ms and a window size of 25ms. SpecAugment<cit.> is applied during all training stages. The Mandarin-English vocabulary and the vocabulary of the 4 languages consist of 12064 and 15492 unique characters and BPE <cit.> tokens, respectively. §.§ Experimental setup The experiments are conducted with the ESPnet toolkit <cit.>. We use the Transformer CTC/Attention model with a 12-layer encoder and a 6-layer decoder as our baseline, called the Vanilla model. The LR-MoE encoder consists of a 6-layer shared block and a 6-layer MLE block in the experiments. We also compare various MoE-based encoders with our proposed method, including the Bi-Encoder <cit.> and LAE <cit.>. The Bi-Encoder contains a 12-layer encoder for each language. The LAE contains a 9-layer shared block and a 3-layer language-specific block for each language. Besides, we implement a 12-layer sMoE <cit.> with 4 experts in each MoE block for comparison in the multilingual and code-switching experiment. All encoders and decoders are stacked transformer-based blocks with an attention dimension of 256, 4 attention heads and a feed-forward dimension of 2048. We implement multi-task learning with λ_ctc=0.3 and λ_lid=0.3 for ASR and LID at the training stage of the LR-MoE model. We use the Adam optimizer with a transformer-lr scale of 1 and warmup steps of 25k to train for 100 epochs on 8 Tesla V100 GPUs. The training process adopts a dynamic batch size strategy with a maximum batch size of 128. We train a 4-gram language model on all training transcriptions and adopt CTC prefix beam search for decoding with a fixed beam size of 10. §.§ Experimental Results §.§.§ Results on Mandarin-English ASR As shown in Table <ref>, our proposed method outperforms the previous MoE-based methods with comparable or fewer parameters in the Mandarin-English ASR system, including CTC and Attention-based Encoder-Decoder (AED) models. The proposed method achieves a significant performance improvement over the baseline. The relative improvements on the mono-Mandarin, mono-English and code-switch evaluation sets are 28.2%, 18.5% and 13.9% in the CTC-based model and 25.4%, 17.9% and 13.4% in the AED-based model, respectively. §.§.§ Results on multilingual ASR Table <ref> shows the results of the models in the CTC-based multilingual ASR system. Compared with previous MoE-based methods, our proposed method achieves a significant performance improvement in both monolingual and code-switching scenarios. Regarding FLOPs, the proposed architecture's computational complexity increases little as more languages are added, so it is easier to scale in multilingual ASR. The proposed method significantly improves performance over the baseline with comparable computational complexity.
The relative average improvements on the monolingual and code-switch evaluation sets are 28.4% and 26.8%, respectively. We also compare different routing strategies of LR-MoE. Experiments show that FLR-MoE achieves a slight performance improvement compared to ULR-MoE, especially in code-switching scenarios. The shared language routing strategy for FLR-MoE outperforms the layer-wise language routing strategy on both the monolingual and code-switching evaluation sets. §.§ Ablation study and analysis §.§.§ Position of MLE As shown in Table <ref>, we explore the effect of the position of the MLE layers on performance. According to our analysis, the deeper the shared block, the more accurate the LID and the weaker the language-specific representation in the fixed-depth FLR-MoE model, which is a trade-off in the model design. Experiments show that the proposed method has a strong ability to distinguish languages, and we achieve the best results with the language router in the middle position. §.§.§ LID and Routing Analysis We summarize the results of utterance-level language classification based on the frame-level LID routing information and the results of ASR. The proposed method's language classification accuracies and confusion matrix are shown in Fig. <ref>. Compared to the 98.8% average LID accuracy of the baseline, the proposed method's average LID accuracy is 99.4%. This shows that the proposed method can effectively reduce the confusion between languages. As shown in Fig. <ref>, for code-switching input, the proposed method obtains the routing of the language experts by token-to-frame LID alignment and routes the layer inputs of the language segments to the corresponding language experts, which demonstrates the effectiveness of the proposed method for code-switching ASR. § CONCLUSIONS This paper proposes the LR-MoE architecture to improve multilingual ASR systems for monolingual and code-switching situations. Based on the frame-wise language-routing (FLR) mechanism, the proposed LR-MoE can switch to the corresponding language expert block to extract language-specific representations adaptively and efficiently. Experiments show that LR-MoE significantly improves multilingual and code-switching ASR over the standard Transformer model with comparable computational complexity and outperforms the previous MoE-based methods with less computational complexity. In the future, we will explore more efficient MoE routing mechanisms for multilingual and code-switching speech recognition.
http://arxiv.org/abs/2307.07357v1
20230714140347
Inverse Optimization for Routing Problems
[ "Pedro Zattoni Scroccaro", "Piet van Beek", "Peyman Mohajerin Esfahani", "Bilge Atasoy" ]
math.OC
[ "math.OC", "cs.LG" ]
Inverse Optimization for Routing Problems Pedro Zattoni Scroccaro, Piet van Beek, Peyman Mohajerin Esfahani, Bilge Atasoy August 12, 2023 ======================================================================================================================================================= We propose a method for learning decision-makers' behavior in routing problems using Inverse Optimization (IO). The IO framework falls into the supervised learning category and builds on the premise that the target behavior is an optimizer of an unknown cost function. This cost function is to be learned through historical data, and in the context of routing problems, can be interpreted as the routing preferences of the decision-makers. In this view, the main contributions of this study are to propose an IO methodology with a hypothesis function, loss function, and stochastic first-order algorithm tailored to routing problems. We further test our IO approach in the Amazon Last Mile Routing Research Challenge, where the goal is to learn models that replicate the routing preferences of human drivers, using thousands of real-world routing examples. Our final IO-learned routing model achieves a score that ranks 2nd compared with the 48 models that qualified for the final round of the challenge. Our results showcase the flexibility and real-world potential of the proposed IO methodology to learn from decision-makers' decisions in routing problems. § INTRODUCTION Last mile delivery is the last stage of delivery in which shipments are brought to end customers. Optimizing delivery routes is a well-researched topic, but most of the classical approaches for this problem focus on minimizing the total travel time, distance and/or cost of the routes. However, the routes driven by expert drivers often differ from the routes that minimize a time or distance criterion. This phenomenon is related to the fact that human drivers take many different factors into consideration when choosing routes, e.g., good parking spots, support facilities, gas stations, avoiding narrow streets or streets with slow traffic, etc. This contextual knowledge of expert drivers is hard to model and incorporate into traditional optimization strategies, leading to expert drivers choosing potentially more convenient routes under real-life operational conditions, contradicting the optimized route plans. Thus, developing models that capture and effectively exploit this tactical knowledge could significantly improve the real-world performance of optimization-based routing tools. For instance, in 2021, Amazon.com, Inc. proposed the Amazon Last Mile Routing Research Challenge <cit.> (referred to as the Amazon Challenge in the following). For this challenge, Amazon released a large dataset of real-world delivery requests and respective expert human routes. The goal was for participants to propose novel methods that use this historical data to learn how to route like an expert human driver, thus incorporating their tactical knowledge when routing vehicles for new delivery requests. In the literature, several approaches have been proposed to incorporate information from historical route data into the planning of new routes. Some of those methods use discrete choice models, and the routes of the drivers are used to determine a transition probability matrix <cit.>.
Other approaches use inverse reinforcement learning to learn a routing policy that approximates the ones from historical data <cit.>. The Technical Proceedings of the Amazon Challenge <cit.> contains 31 articles with approaches that were submitted to the Amazon Challenge. Many of these approaches rely on learning specific patterns in the sequence of predefined geographical city zones visited by expert drivers. Some of the methods try to learn constraints based on observed zone sequences in the dataset and enforce them on a routing heuristic <cit.>. Other methods try to adjust the distance matrix between zones based on the training dataset <cit.>. Inverse reinforcement learning was also used to approximate the expert drivers' routing policy <cit.>. <cit.> uses a sequential probability model to encode the drivers' behavior and uses a policy iteration method to sample zone sequences from the learned probability model. The model that won the Amazon Challenge <cit.> is based on a constrained local search method. In this approach, given a new delivery request, they extract precedence and clustering constraints by analyzing similar historical human routes in the training dataset. Thus, their model is nonparametric, in the sense that the entire training dataset is required as an input, whenever the route for a new delivery request needs to be computed. In this paper, we propose to use Inverse Optimization (IO), as a special form of supervised learning, to learn from decision-makers in general routing problems. In particular, our IO models are parametric, that is, our models are parametrized by a learned vector of parameters, with a dimension that does not depend on the number of examples in the training dataset. In IO problems, the goal is to model the behavior of an expert agent, which given an exogenous signal, returns a response action. It is assumed that to compute its response, the expert agent solves an optimization problem that depends on the exogenous signal. In this work, we assume that the constraints imposed on the expert are known, but its cost function is unknown. Therefore, the goal of IO is to, given examples of exogenous signals and corresponding expert responses, model the cost function being optimized by the expert. As an example, in a Capacitated Vehicle Routing Problem (CVRP) scenario, the exogenous signal can be a particular set of customers and their respective demands, and the expert's response can be the CVRP routes chosen by the decision-maker to serve these customers and their demands. <cit.>, <cit.>, and <cit.> use IO to learn the cost matrix of shortest path problems. <cit.> investigate IO for the Traveling Salesperson Problem (TSP). They study the problem of, given an edge-weighted complete graph, a single TSP tour, and a TSP solving algorithm, finding a new set of edge weights so that the given tour can be an optimal solution for the algorithm, and is closest to the original weights. Moreover, IO has also been used to learn household activity patterns <cit.>, for network learning <cit.>, and more recently for learning complex model predictive control schemes <cit.>. For more examples of applications of IO, we refer the reader to the recent review paper by <cit.> and references therein. In this paper, we focus on IO for routing problems, and we describe a general methodology and a tailored algorithmic solution. 
Different from previous IO approaches in the literature, our approach is flexible with respect to the type of routing problem the expert is assumed to solve, and handles cases where there is a large number of routing examples, as well as cases where solving the routing problem is computationally expensive. These attributes allow us to use our IO methodology to tackle a wide range of learning problems, in particular, the Amazon Challenge. The main contributions of this paper are summarized as follows: * (IO methodology for routing problems) We propose a novel IO methodology, which, in view of a supervised learning framework, adopts the following specifications tailored for routing problems: * Hypothesis class: We introduce a new hypothesis class of affine cost functions with nonnegative cost vectors representing the weights of edges of a graph, along with an affine term that can capture extra desired properties of the model (Section <ref>). * Loss function: We introduce a new training loss function based on a reformulation of the Augmented Suboptimality loss <cit.>, tailoring it to our affine hypothesis class and exploiting the fact that the decision variables of routing problems can be modeled using binary variables (Section <ref>). * First-order algorithm: We also design a first-order algorithm specialized to minimize our tailored IO loss function, which is particularly efficient for IO problems with large datasets and with computationally expensive decision problems, e.g., large VRPs (Section <ref>). * Modeling flexibility: Finally, we showcase the flexibility of our IO methodology by demonstrating how VRPs and TSPs can be modeled using our framework (Sections <ref> and <ref>). * (Application to the Amazon Challenge) We evaluate our IO methodology on the Amazon Challenge, namely, we learn the drivers' preferences in terms of geographical city zones using our IO learning approach. We present results for two different hypothesis functions and showcase how new insights about the structure of the problem at hand can be seamlessly integrated into our IO methodology, illustrating its flexibility and modeling power (Sections <ref>). Our approach achieves a final Amazon score of 0.0319, which ranks 2nd compared to the 48 models that qualified for the final round of the Amazon Challenge (Figure <ref>). Moreover, using an approximate TSP solver and a fraction of the training dataset, we are able to learn a good routing model in just a couple of minutes, demonstrating the possibility of using our IO approach for real-time learning problems (Table <ref>). All of our experiments are reproducible, and the underlying source code is available at <cit.>. The rest of the paper is organized as follows. In the remainder of this section, we define the mathematical notation used in this paper. In Section <ref>, we introduce the IO methodology used in this paper and our IO approach for routing problems. In Section <ref> we introduce the Amazon Challenge, its datasets, objective, and score metric, and our IO methodology to tackle it. In Section <ref> we present our numerical results, and in Section <ref> we conclude the paper. Notation. For vectors x,y ∈ℝ^m, x ⊙ y, exp(x) and max(x,y) mean element-wise multiplication, element-wise exponentiation and element-wise maximum, respectively. The Euclidean inner product between two vectors x,y ∈ℝ^m is denoted by xy. The set of integers from 1 to N is denoted as [N]. A set of indexed values is compactly denoted by {x^[i]}_i=1^N := {x^[1],…,x^[N]}.
Given a set A, we denote its complement by A̅ and its cardinality by |A|. § INVERSE OPTIMIZATION In this section, we give a brief introduction to the IO methodology used in this paper and describe our IO approach to learning from routing problems. Let us begin by formalizing the IO problem. Consider an exogenous signal s ∈𝕊, where 𝕊 is the signal space. Given a signal, an expert agent is assumed to solve the following parametric optimization problem to compute its response action: min_x ∈𝕏(s) F(s,x), where 𝕏(s) is the expert's known constraint set, F: 𝕊×𝕏→ℝ is the expert's unknown cost function, where we define 𝕏⋃_s ∈𝕊𝕏(s). The expert's decision x̂ is chosen from the set of optimizers of (<ref>), i.e., x̂∈_x ∈𝕏(s) F(s,x). Assume we have access to N pairs of exogenous signals and respective expert optimal decisions {(ŝ^[i], x̂^[i])}_i=1^N, that is, x̂^[i]∈_x ∈𝕏(ŝ^[i]) F(ŝ^[i],x) ∀ i ∈ [N], where we use the hat notation “·̂” to indicate signal-response data. Using this data, our goal is to learn a cost function that, when optimized for the same exogenous signal, (approximately) reproduces the expert's actions. §.§ Hypothesis class Naturally, we can only search for cost functions in a restricted function space. In this work, given our focus on routing problems, we consider an affine hypothesis space with a nonnegative cost vector ℱ_θ{ F_θ(s,x) = θx + f(s,x): θ≥ 0 }, where θ∈ℝ^p is the cost vector that parametrizes the cost function, and f: 𝕊×𝕏→ℝ is a function that can be used to model terms in the hypothesis function that do not depend on θ. We consider nonnegative cost vectors because, for routing problems, it represents the weights the expert assigns to the edges of a graph. For instance, given a complete graph with n nodes, common cost functions to routing problems are the two-index or three-index formulations θx = ∑_i=1^n∑_j=1^n θ_ij x_ij and θx = ∑_i=1^n∑_j=1^n∑_k=1^K θ_ijk x_ijk, where x_ij and x_ijk are binary variables equal to 1 if the edge connecting node i to node j is used in the route, and 0 otherwise (for the three-index formulation, we have an extra index k specifying which of the K available vehicles uses the edge) <cit.>. Moreover, we could also have, for instance, the extra term f(s,x) = ∑_i=1^n∑_j=1^n M_ij(s) x_ij in the cost function, where the penalization terms M_ij(s) ∈ℝ are used to encode some behavior we would like to enforce in the model, for example, by adding extra penalization terms to some edges of the graph. In summary, our goal with IO is to find a nonnegative parameter vector θ such that when solving the so-called Forward Optimization Problem (FOP) FOP(θ, ŝ) min_x ∈𝕏(ŝ){θx + f(ŝ,x) }, we are able to reproduce (or approximate) the response the expert would have taken given the same signal ŝ. For a more detailed discussion on the formalization of the IO problem, please refer to <cit.>, <cit.> and <cit.>. §.§ Loss function Given a signal-response dataset {(ŝ^[i], x̂^[i])}_i=1^N, in this work we propose to solve the IO problem (i.e., find a parameter vector θ) by solving the following regularized loss minimization problem: min_θ∈Θ κℛ(θ) + 1/N∑_i=1^N ℓ_θ (ŝ^[i], x̂^[i]), where ℛ : ℝ^p →ℝ is a regularization function, κ is a nonnegative regularization parameter, Θ⊆ℝ^p is the set used to encode prior information on θ and ℓ_θ : 𝕊×𝕏→ℝ is called the loss function. The loss function should be designed in such a way that it is a proxy for our IO objective, i.e., when solving (<ref>), we learn a cost function that reproduces the behavior from the data. 
In this work, we use the Augmented Suboptimality loss (ASL), defined as ℓ_θ(ŝ,x̂) F_θ(ŝ,x̂) - min_x ∈𝕏(ŝ){ F_θ(ŝ,x) - d(x̂, x) }, where d : 𝕏×𝕏→ℝ_+ is a distance penalization function. This loss function has been shown to perform better than, for instance, the Suboptimality Loss (i.e., the ASL without the distance function d), and moreover, automatically excludes the trivial solution θ = 0, even when F_θ is linear in θ. For more details on this loss function, see <cit.>. A major practical challenge when using the ASL is that the cost function of the inner minimization problem min_x ∈𝕏(ŝ) F_θ(ŝ,x) - d(x̂, x) is nonconvex in x in general, due to the fact that a distance function d is never concave. However, for the special case of affine hypothesis functions (<ref>), by choosing d(x̂, x) = x̂ - x_1, and exploiting the fact that routing problems can be modeled using binary decision variables (e.g., x_ij = 1 if the edge connecting nodes i and j is used, and x_ij = 0 otherwise), we show how to reformulate the cost function of inner minimization problem as a convex function in x (assuming f is also convex in x). We do it by noting that if a,b ∈{0,1}, we have that |a - b| = (1-a)b + (1-b)a. Thus, in the case of binary decision variables, using the definition of the ℓ_1-norm, and defining 1∈ℝ^p to be the all-ones vector, one can show that x̂ - x_1 = 1 - 2x̂x + 1x̂, that is, and affine (convex) function of x. Therefore, combining this observation with Eq. (<ref>) and Problem (<ref>), we arrive at the IO regularized loss minimization problem for routing problems min_θ≥ 0 κℛ(θ) + 1/N∑_i=1^N (θx̂^[i] + f(ŝ^[i],x̂^[i]) - min_x ∈𝕏(ŝ^[i]){θ + 2x̂^[i] - 1x + f(ŝ^[i],x) + 1x̂^[i]}), where we used the affine hypothesis space (<ref>), the ASL with ℓ_1-norm distance function, and a binary decision space, i.e., 𝕏(ŝ) ⊆{0,1}^p. §.§ First-order algorithm In this work, we propose to solve problem (<ref>) using stochastic first-order algorithms. In particular, our algorithm is inspired by the Stochastic Approximate Mirror Descent from <cit.> and is tailored for optimizing (<ref>), i.e., IO applied to routing problems. In particular, our algorithm uses update steps customized to affine hypothesis functions with a nonnegative cost vector. Moreover, it also exploits the finite sum structure of the problem (i.e., the sum over the N examples) by using a single training example per iteration of the algorithm. Before presenting our algorithm, we discuss how to compute subgradients of (<ref>), which is necessary when using first-order methods to minimize this loss. For this, we define A-FOP(θ, ŝ, x̂) min_x ∈𝕏(ŝ){θ + 2x̂ - 1x + f(ŝ,x) }, that is, the set of optimizers of the FOP with augmented edge weights θ + 2x̂ - 1 instead of θ. Being able to solve the Augmented FOP (<ref>) is important because to compute a subgradient of the ASL (and thus, a subgradient of (<ref>)), we need to compute an element of A-FOP(θ, ŝ, x̂) (this follows from Danskin's theorem <cit.>). Here we emphasize an important consequence of our reformulation of the ASL that led to Problem (<ref>): solving the A-FOP has the same complexity as solving the FOP. For instance, if the FOP is a TSP with edge weights θ, then the A-FOP is also a TSP, but with augmented edge weights θ + 2x̂ - 1. This is of particular practical interest since the same optimization solver can be used both for learning the model (i.e., for the A-FOP) and for using the model (i.e., for the FOP). Algorithm <ref> shows our reshuffled stochastic first-order algorithm for Problem (<ref>). 
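Since the pseudocode of Algorithm <ref> is not reproduced in the text, the following is a minimal, illustrative Python rendering of the loop it describes, for the unregularized case κ = 0. The routine solve_afop(weights, signal) is a placeholder for any solver of the augmented FOP, e.g., a TSP or VRP solver called with the augmented edge weights; all names are assumptions of this sketch rather than the paper's implementation, and the step-size choice is only an example.

import numpy as np

def reshuffled_io_subgradient(signals, responses, theta0, solve_afop,
                              epochs=5, step=lambda t: 0.0005 / np.sqrt(t),
                              update="standard"):
    # signals[i], responses[i]: the i-th example (signal, binary response vector x_hat)
    theta = theta0.copy()
    t = 1
    for _ in range(epochs):
        for i in np.random.permutation(len(signals)):         # random reshuffling
            x_hat = responses[i]
            # Augmented FOP: same solver as the FOP, with weights theta + 2*x_hat - 1
            x_star = solve_afop(theta + 2.0 * x_hat - 1.0, signals[i])
            g = x_hat - x_star                                 # subgradient of the ASL (Danskin's theorem)
            if update == "exponentiated":
                theta = theta * np.exp(-step(t) * g)           # multiplicative update keeps theta >= 0
            else:
                theta = np.maximum(theta - step(t) * g, 0.0)   # projected (standard) subgradient step
            t += 1
    return theta

Regularization (κ > 0), gradient normalization for the exponentiated update, and averaging of the iterates are omitted from this sketch but follow the same structure.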
The algorithm runs for T epochs, and at the beginning of each epoch, we sample a permutation of [N] (line 3), which simply means that we shuffle the order of the examples in the dataset. This is known as random reshuffling, as has been shown to perform better in practice compared to standard uniform stochastic sampling <cit.>. Moreover, since our random reshuffling strategy uses only one example per update step, it is particularly efficient for problems with large datasets. Next, for each epoch, we perform one update step for each example in the dataset. In particular, in line 5 we compute one element of A-FOP, and in line 6 we compute a subgradient of the ASL. For the update step (line 7), we offer two possibilities: exponentiated and standard updates. * Exponentiated update rule: This update step can be interpreted as using an Exponentiated subgradient algorithm <cit.> to minimize (<ref>), where we choose ℛ(θ) = θ_1 and reformulate the regularization term κθ_1 in the cost function as the regularization constraint θ∈{θ∈ℝ^p_+ : κθ_1 ≤ 1}. Also, we add the option to normalize the gradients when using exponentiated updates. This is a known idea used when optimizing nonsmooth, not strongly convex functions (such as the ASL), and stems from the fact that for this class of functions, the norm of the gradient does not necessarily correlate with how close we are to an optimizer of the function <cit.>. * Standard update rule: This update step can be interpreted as using the standard subgradient method to minimize (<ref>), where we choose ℛ(θ) = 1/2θ_2^2, and project onto the nonnegative cone after each iteration. In practice, the question of what update step is the best should be answered on a case-by-case basis. An important component of our algorithm (and of first-order algorithms in general) is the step size η_t. In theory, in order for Algorithm <ref> to converge to a minimizer of (<ref>), we must use a diminishing step-size, that is, η_t must decrease as t increases, for example, η_t = c/√(t) or η_t = c/t, where c is some fixed constant. However, in practice, constant step sizes can also be used. Another important component of Algorithm <ref> is the regularization parameter κ. Similar to the case in standard supervised learning problems, by making κ larger, it can be used to avoid overfitting the training data, improving the generalization performance of the learned model. Finally, the output of Algorithm <ref> is some function of the iterates, e.g., θ_T = θ_T^[N+1] (last iterate), θ_T = 1/T∑_t=1^T θ_t^[N+1] (average) or θ_T = 2/T(T+1)∑_t=1^T t θ_t^[N+1] (weighted average) <cit.>. Notice that an element of A-FOP needs to be computed at each iteration of Algorithm <ref> (line 5). However, if the A-FOP is a hard combinatorial problem (e.g., a large TSP or VRP), it may not be practical to solve it to optimality multiple times. Thus, in practice, one may use an approximate A-FOP, that is, in line 5 of Algorithm <ref> we compute an approximate solution to the augmented FOP instead of an optimal one. Fortunately, an approximate solution to the A-FOP can be used to construct an approximate subgradient <cit.>, which in turn can be used to compute an approximate solution of Problem <ref> <cit.>. In practice, using approximate solvers may lead to a much faster learning algorithm, in exchange for a worse learned model. This trade-off is explored in Section <ref>. Next, we present two examples of how our IO methodology can be used for learning from routing data. 
Namely, we first exemplify how a CVRP scenario can be modeled with our IO methodology, and present a simple numerical example to illustrate the intuition behind how Algorithm <ref> works. Second, we define a class of TSPs and show how to learn from them using IO, which will later be used to formalize the Amazon Challenge as an IO problem. §.§ IO for CVRPs We define the K-vehicle Symmetric Capacitated Vehicle Routing Problem (SCVRP) as min_x_e ∈{0,1} ∑_e ∈ E w_e x_e, s.t. ∑_e ∈δ(i) x_e = 2 ∀ i ∈ V ∖{0} ∑_e ∈δ(0) x_e = 2K ∑_e ∈δ(S) x_e ≥ 2 r(S, D, c) ∀ S ⊂ V ∖{0}, S ≠∅, where 𝒢 = (V, E, W) is an edge-weighted graph, with node set V (node 0 being the depot), undirected edges E, and edge weights W. For this problem, each node i ∈ V represents a customer with demand d_i ∈ D. There are K vehicles, each with a capacity of c. Given a set S ⊂ V, let δ(S) denote the set of edges that have only one endpoint in S. Moreover, given a set S ⊂ V ∖{0}, we denote by r(S, D, c) the minimum number of vehicles with capacity c needed to serve the demands of all customers in S. The x_e's are binary variables equal to 1 if the edge e ∈ E is used in the solution, and equal to 0 otherwise, and w_e ∈ W is the weight of edge e ∈ E <cit.>. Next, we show how to use IO to learn edge weights that can be used to replicate the behavior of an expert, given a set of example routes. Consider the signal ŝ D, where D is a set of demands of the customers, and the response x̂∈{0, 1}^|E|, which is the vector with components x_e encoding the optimal solution of the Problem (<ref>) for the signal ŝ). Defining the linear hypothesis function (i.e., with no affine term f) θx∑_e ∈ Eθ_e x_e, and the constraint set 𝕏(ŝ) {x ∈{0,1}^|E| : ∑_e ∈δ(i) x_e = 2 ∀ i ∈ V ∖{0}, ∑_e ∈δ(0) x_e = 2K, ∑_e ∈δ(S) x_e ≥ 2 r(S, D, c) ∀ S ⊂ V ∖{0}, S ≠∅}, we can interpret the signal-response pair (ŝ, x̂) as coming from an expert agent, which given the signal ŝ of demands, solves the SCVRP to compute its response x̂. Thus, in order to learn a cost function (i.e., learn a vector of edge weights) that replicates the SCVRP route x̂, we can use Algorithm <ref> to solve Problem (<ref>) with hypothesis (<ref>) and constraint set (<ref>). To illustrate how Algorithm <ref> works to learn edge weights in routing problems on graphs, consider a simple SCVRP with K=2 vehicles, each with capacity c=3, and 5 customers, each customer i with demand d_i=1. In this example, for simplicity, we use w_e equal to the Euclidean distance between the customers, however, any other set of weights could be used instead. We create one training example using these weights. Figure <ref> shows the location of the customers (black dots), the depot (red square), and the optimal SCVRP routes using weights w_e. Figure <ref> shows a representation of the weights of each edge of the graph, where the smaller the weight, the thicker and darker the edge. We use Algorithm <ref> with exponentiated updates, with κ = 1, η_t = 0.0002, and we initialize θ_0 with the same weight for all edges. In Figure <ref>, we graphically show two iterations of the algorithm for this problem. In the first column, we show the evolution of the learned weights θ_t. 
In the second column, we show optimal SCVRP routes computed using the weights in the first column (i.e., computed by solving the A-FOP in line 5 of Algorithm <ref>), and in the third column, we show the difference between the optimal routes using the true weights (Figure <ref>) and the optimal routes using the current learned weights (the route in the second column). The difference between these two routes is the subgradient computed in line 6 of Algorithm <ref>, which is used to update the learned weights θ_t. For the subgradient representation in the third column, red edges represent a negative subgradient (i.e., edges with weights that should be increased) and green edges represent a positive subgradient (i.e., edges with weights that should be decreased). This is the main intuition behind Algorithm <ref>: at each iteration, we compare the route we want to replicate with the one we get with the current edge weights. Then, comparing which edges are used in these two routes, we either increase or decrease their respective weights, in such a way that we “push” the optimal route using the learned weights to be closer to the route we want to replicate. In the example shown in Figure <ref>, we can see that after two iterations of Algorithm <ref>, the optimal route using the learned weights coincides with the example route. §.§ IO for TSPs Let 𝒢 = (V, E, W) be a complete edge-weighted directed graph, with node set V, directed edges E, and edge weights W. Next, given S ⊂ V (i.e., a subset of the nodes of 𝒢), we define the Restricted Traveling Salesperson Problem (R-TSP) as min_x_ij ∑_i ∈ V∑_j ∈ V w_ij x_ij, s.t. ∑_j ∈ S x_ij = ∑_j ∈ S x_ji = 1 ∀ i ∈ S; ∑_i ∈ Q∑_j ∈ Q x_ij≤ |Q| -1 ∀ Q ⊂ S, Q ≠∅, Q̅≠∅; x_ij∈{0,1} ∀ (i,j) ∈ S × S; x_ij = 0 ∀ (i,j) ∉ S × S, where x_ij is a binary variable equal to 1 if the edge from node i to node j is used in the solution, and 0 otherwise, and w_ij is the weight of the edge connecting node i to node j. Problem (<ref>) is based on the standard formulation of a TSP as a binary optimization problem <cit.>. The only difference from a standard TSP is that instead of being required to visit all nodes of the graph, for an R-TSP we compute the optimal tour over a subset S of the nodes V. Notice that the standard TSP can be interpreted as an R-TSP, for the special case when S = V. In practice, any TSP solver can be used to solve an R-TSP by simply ignoring all nodes of the graph that are not required to be visited. Next, we show how to use IO to learn edge weights that can be used to replicate the behavior of an expert, given a set of example routes. Consider the dataset {(ŝ^[i], x̂^[i])}_i=1^N, where the signal ŝ^[i] := Ŝ^[i] ⊆ V is a set of nodes required to be visited and the response x̂^[i]∈{0,1}^|V|^2 is the respective optimal R-TSP tour (i.e., a vector with components x_ij for (i,j) ∈ V × V). Defining the affine hypothesis function θx + f(s,x) := ∑_i ∈ V∑_j ∈ V(θ_ij + M_ij) x_ij, and the constraint set 𝕏(ŝ) := {x ∈{0,1}^|V|^2 : ∑_j ∈ S x_ij = 1, ∀ i ∈ S; ∑_i ∈ S x_ij = 1, ∀ j ∈ S; ∑_i ∈ Q∑_j ∈ Q x_ij≤ |Q| -1, ∀ Q ⊂ S, Q ≠∅, Q̅≠∅; x_ij = 0 ∀ (i,j) ∉ S × S }, we can interpret this dataset as coming from an expert agent, which given the signal ŝ^[i], solves an R-TSP to compute its response x̂^[i]. For the hypothesis function, the term M_ij can be used as a penalization term to enforce some kind of expected behavior on the model, e.g., to add penalizations to some edges of the graph. Figure <ref> illustrates a signal and expert response for an R-TSP.
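For very small instances, the R-TSP above can be solved by plain enumeration, which is enough to illustrate the forward problem the expert is assumed to solve. The sketch below is purely illustrative (realistic instances require a proper TSP solver such as the ones used later in the paper); the weight container w and the node labels are assumptions of the sketch.

from itertools import permutations

def solve_rtsp_bruteforce(w, S):
    # w[i][j]: weight of the directed edge from node i to node j; S: nodes required to be visited
    S = list(S)
    if len(S) <= 1:
        return S
    first, rest = S[0], S[1:]
    best_tour, best_cost = None, float("inf")
    for order in permutations(rest):          # fixing the first node avoids enumerating rotations of the same cycle
        tour = [first, *order]
        cost = sum(w[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour

Because the augmented FOP has the same structure as the FOP, the same routine could also be called inside the learning loop with the augmented weights θ + 2x̂ - 1, after mapping them back to a weight matrix.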
Thus, in order to learn a cost function (i.e., learn a vector of edge weights) that replicates (or approximates as well as possible) the example routes in the dataset, we can use Algorithm <ref> to solve Problem (<ref>) with hypothesis (<ref>) and constraint set (<ref>). This formulation will serve as the basis of our IO approach to tackle the Amazon challenge. We formally introduce the Amazon challenge and our complete approach in the next sections. We conclude this section with some general comments about our IO approach. First, we emphasize that it does not require the dataset {(ŝ^[i], x̂^[i])}_i=1^N to be consistent with a single cost function (i.e. a single set of edge weights), which is to be expected in any realistic setting, due to model uncertainty, noisy measurements or bounded rationality <cit.>. Also, we showed how to use our IO approach for SCVRP and R-TSP scenarios, but we emphasize that the methodology developed in this section could be easily adapted to different kinds of routing problems. For instance, if the problem was a VRP with time windows, backhauls, or pickup and delivery, we could account for these characteristics by simply changing the constraint set 𝕏(ŝ) of our IO model <cit.>, in other words, by modifying the problem we assumed the expert agent is solving to generate its response. Notice that in any case, the methodology developed in sections <ref>, <ref>, and <ref> would not change, which highlights the generality and flexibility of our IO approach. As a final comment, we mention that our approach can easily be adapted to the scenario where new signal-response examples arrive in an online fashion. That is, instead of learning from an offline dataset of examples, we gradually update the edge weights (i.e., θ_t) with examples that arrive online, similar to <cit.>. This can be done straightforwardly by adapting Algorithm <ref> to use examples that arrive online in the same way it uses the signal-response pairs (ŝ^[π_i], x̂^[π_i]). § AMAZON CHALLENGE In this section, we describe in more detail the Amazon Challenge, which we use as a real-world application to assess our IO approach. A detailed description of the data provided for the challenge can be found in <cit.>. In summary, Amazon released two datasets for this challenge: a training dataset and a test dataset. The training dataset consists of 6112 historical routes driven by experienced drivers. This dataset is composed of routes performed in the metropolitan areas of Seattle, Los Angeles, Austin, Chicago, and Boston, and each route is characterized by a number of features. Figure <ref> shows a high-level description of the features available for each example route. Each of the training routes originates at a depot, visits a collection of drop-off stops assigned to the driver in advance, and ends at the same depot. Thus, each route can be interpreted as an R-TSP route. Figure <ref> shows 8 example routes leaving from a depot in Boston, where different colors represent different routes. Each stop in every route was given a Zone ID, which indicates to which geographical zone of the city the stop belongs. Some stops in the dataset are not given a Zone ID, so for these stops, we assign them the Zone ID of the closest zone (in terms of Euclidean distance). Turns out, this predefined zoning of the stops is a key piece of information about the Amazon Challenge. This will be discussed in detail in the subsequent sections of this paper. 
As previously mentioned, the goal of the challenge was to incorporate the preferences of experienced drivers into the routing of last-mile delivery vehicles. Thus, rather than coming up with TSP strategies that minimize time or distance given a set of stops to be visited, the goal of the challenge was to learn from historical data how to route like the expert drivers. To this end, a test dataset consisting of 3072 routes was also made available to evaluate the proposed approaches. To compare the routes from expert drivers to the routes from the approaches submitted to the challenge, Amazon devised a scoring metric that computes the similarity between two routes, where the lower the score, the more similar the routes <cit.>. In summary, a dataset of 6112 historical routes from expert human drivers was made available for the Amazon Challenge. Using this dataset, the goal is to come up with routing methods that mimic the way human drivers route vehicles. To evaluate the proposed approaches, Amazon used a test dataset consisting of 3072 unseen examples. In order to compare how similar the routes from this dataset are to the ones computed by the submitted approaches, a similarity score was devised. The final score is the average score over the 3072 test instances. A summary of the scores of the top 20 submissions to the Amazon Challenge can be found at <cit.>. Since each historical route in the dataset of the challenge refers to a driver's route that starts at a depot, visits a predefined set of customers, and then returns to the depot, the expert human routes from the Amazon Challenge can be interpreted as solutions to R-TSPs, and we can use the IO approach to tackle the Amazon Challenge. In other words, we can use IO to estimate the costs they assign to the street segments connecting stops. Ultimately, this will allow us to learn the drivers' preferences, and replicate their behavior when faced with new requests for stops to be visited. §.§ Zone IDs and time windows In Section <ref>, we describe how IO can be used to learn drivers' preferences from R-TSP examples. Although the Amazon Challenge training dataset consists of 6,112 historical routes, it contains only a few cases of stops that are visited in multiple route examples. This makes it difficult to learn any meaningful preference of the drivers at the stop level (i.e., individual customer level). However, recall that each stop in the dataset is assigned a Zone ID, which refers to a geographical zone in the city (see Figure <ref>), and each zone contains multiple stops. Analyzing how the human drivers' routes relate to these zones, a critical observation can be made: in the vast majority of the examples, the drivers visit all stops within a zone before moving to another zone. This behavior is illustrated in Figure <ref>. Also, the same zone is usually visited in multiple route examples in the dataset. Thus, instead of learning drivers' preferences at the stop level, we can learn their preferences at the zone level. In other words, we consider each zone as a hypernode containing all stops with the same Zone ID. Thus, we can create a hypergraph with nodes corresponding to the zone hypernodes (see Figure <ref>). This way, we can view the expert human routes as routes over zones, and we can use our IO approach to learn the weights the drivers use for the edges between zones. Another piece of information in the dataset is that time window targets for package delivery are included for a subset of the stops. 
As observed by some contestants of the challenge (e.g., <cit.>), these constraints are often trivially satisfied, and ignoring them altogether had minimal impact on their final score. Therefore, time windows are ignored in our approach. Moreover, we also ignore all information about the size of the vehicle and the size of the packages to be delivered, as these do not seem to influence the routes chosen by the drivers. §.§ Complete method In this section, we outline all steps involved in our IO approach to the Amazon Challenge. As explained in the previous section, due to the nature of the provided data, we focus on learning the preferences of the driver at the zone level. However, the historical routes of the datasets are given in terms of a sequence of stops. Moreover, given a new request for stops to be visited, the learned model should return the sequence of stops, not the sequence of zones. Therefore, intermediate steps need to be taken in order to go from a sequence of stops to a sequence of zones, and vice-versa. A block diagram of our method is shown in Figure <ref>. A detailed description of each step of our method is given in the following. Step 1 (pre-process the data). The first step is to transform the datasets from stop-level information to zone-level information. Namely, for each data pair of stops to be visited ŝ and respective expert route x̂ (see modeling in Section <ref>), we transform them into a signal ŝ^z_t containing the zones to be visited and respective expert zone sequence x̂^z_t. This is the process illustrated in Figure <ref>. However, differently from Figure <ref>, there are cases in the dataset where the human driver visits a certain zone, leaves it, and later returns to the same zone. Thus, to enforce that the sequence of zones respects the TSP constraint that each zone is visited only once, when transforming a sequence of stops into a sequence of zones, we consider that a zone is visited at the time the most consecutive stops in that zone are visited. To illustrate it, consider the case when a driver visits 7 stops belonging to zones A,B,C, where the sequence of visited stops, in terms of their zones, is A → B → B → A → A → C → C. In this case, the driver visits zone A, leaves it, and then visits it again. Following our transformation rule, we consider the sequence of zones to be B → A → C. Step 2 (Inverse Optimization). Next, considering the hypergraph of zones (i.e., each node represents a zone), we use our IO approach to learn the weights the expert drivers give to the edges connecting the zones. Namely, given a dataset of N examples of zones to be visited and respective zone sequences, we solve Problem (<ref>) using Algorithm <ref> to learn a vector θ, that is, a cost vector with components corresponding to the learned edge weights between zones. Step 3 (compute the zone sequence). Let ŝ be a set of stops to be visited from the testing dataset. In order to use the weights learned in Step 2 to construct a route for these stops, we first need to transform the signal from the stops to be visited into the zones that need to be visited by the driver ŝ^z (see Step 1). Given the signal of zones to be visited, and the weights θ learned in Step 2, we solve the R-TSP over zones min_x ∈𝕏(ŝ^z)θx, where the constraint set is defined in (<ref>). The solution to this problem contains the sequence of zones the driver needs to follow. In some cases, routes in the test dataset contain zones that are not visited in the training dataset. 
In these cases, since the vector of learned weights θ does not contain information about these zones, we set their weights equal to the Euclidean distance between the centers of the zones. Step 4 (from a zone sequence to a stop sequence). The final step of our method consists of computing the complete route at the stop level. In other words, given the zone sequence computed in Step 3 (e.g., Figure <ref>), we want to find a respective stop sequence (e.g., Figure <ref>). We do this using a penalization method. Let c_ij be the transit time from stop i to stop j (which is provided in the Amazon Challenge dataset, see Figure <ref>). In order to enforce the zone sequence found in Step 3, we create the penalized weights c̃_ij, defined as c̃_ij := { c_ij, if stops i and j are in the same zone; c_ij + M, if the zone of stop j should be visited directly after the zone of stop i; c_ij + 2M, otherwise }, where M is a large penalization constant. For a large enough M, this modification ensures that all stops within a zone are visited before moving to another zone and that the sequence of zones from Step 3 is respected. Thus, we compute the complete route over a set of stops Ŝ by solving the R-TSP over stops min_x ∈𝕏(Ŝ)∑_i=1^m ∑_j=1^m c̃_ij x_ij, where 𝕏 is the R-TSP constraint set (<ref>) and m is the total number of stops. As explained in Section <ref>, the dataset comprises example routes from 5 cities in the USA, and each city can have multiple depots. It turns out that each zone is always served by the same depot, thus, we can learn the preferences of the drivers separately for each depot. Consequently, when using our approach in the Amazon Challenge, we perform steps 1 to 4 separately for each depot. § NUMERICAL RESULTS In this section, we numerically evaluate our Inverse Optimization approach to the Amazon Challenge. To compute the zone sequence (i.e., step 3 of our method) we use a Gurobi-based TSP solver <cit.>, and to compute the complete route at the stop level (i.e., step 4 of our method), we use Google OR-Tools <cit.>. Our experiments are reproducible, and the underlying source code is available at <cit.>. In particular, we use the InvOpt python package <cit.> for the IO part of our approach. For the following numerical results, we use Algorithm <ref> with standard update steps for T=5 epochs, with κ = 0, η_t = 0.0005/√(t), and we set θ_1 as the Euclidean distance between the centers of the zones in the training dataset, where we compute the center of a zone by taking the mean of the longitudinal and lateral coordinates of all stops within the zone.
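As an illustration of this initialization, the sketch below computes zone centers as the mean of the stop coordinates within each zone and sets the initial zone-to-zone weights to the pairwise Euclidean distances between these centers. The data structures (dictionaries mapping stops to coordinates and zones) are assumptions of the sketch, not the format of the released dataset.

import numpy as np

def initial_zone_weights(stop_coords, stop_zone):
    # stop_coords: {stop_id: (lat, lng)}; stop_zone: {stop_id: zone_id}
    zones = sorted(set(stop_zone.values()))
    centers = np.array([
        np.mean([stop_coords[s] for s in stop_coords if stop_zone[s] == z], axis=0)
        for z in zones
    ])
    # theta_1[a, b]: initial weight of the edge from zone zones[a] to zone zones[b]
    diff = centers[:, None, :] - centers[None, :, :]
    theta_1 = np.linalg.norm(diff, axis=-1)
    return zones, theta_1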
§.§ IO for the Amazon Challenge In this section, we present results for two IO approaches, where their difference lies in the hypothesis function used in the IO model. Namely, the first approach uses a linear hypothesis (i.e., (<ref>) with M_ij = 0), whereas the second approach uses an affine hypothesis, which we use to enforce the resulting zone sequences to respect certain behaviors observed in the Amazon dataset. First approach: linear hypothesis function. As a first attempt, we apply our IO methodology to the Amazon challenge as explained in the previous section. In particular, we use the linear hypothesis (<ref>). Evaluating the resulting IO model on the test dataset, we achieve a final Amazon score of 0.048. This score ranks 5th compared to the 48 models that qualified for the final round of the Amazon Challenge <cit.>. Although this is already a good result, we can significantly improve this score by using our IO approach to exploit patterns in the behavior of expert human drivers. Second approach: linear hypothesis function with penalizations. As noticed by some of the contestants of the original Amazon challenge <cit.>, by carefully analyzing the sequence of zones followed by the human drivers, one can uncover patterns that can be exploited. These patterns are related to the specific encoding of the Zone ID given to the zones. Namely, the Amazon Zone IDs have the form W-x.yZ, where W and Z are upper-case letters and x and y are integers. Table <ref> shows an example of a zone sequence from the Amazon Challenge dataset. Although the zone sequence shown in Table <ref> is just a small example, it contains the patterns that we exploit to improve our approach, which are the following: * Area sequence: For a zone with Zone ID W-x.yZ, define its area as W-x.Z. It is observed that the drivers tend to visit all zones within an area before moving to another area. * Region sequence: For a zone with Zone ID W-x.yZ, define its region as W-x. It is observed that the drivers tend to visit all areas within a region before moving to the next region. * One unit difference: Given two zone IDs z_1 = W-x.yZ and z_2 = A-b.cD, we define the difference between two zone IDs as d(z_1, z_2) := |ord(W)-ord(A)| + |x-b| + |y-c| + |ord(Z)-ord(D)|, where the ord function maps characters to integers (in our numerical results, we use Python's built-in ord function). In particular, letters that come one after the other in the alphabet are mapped to integers that differ by 1, e.g., ord(G) = 71 and ord(H) = 72. It is observed that for subsequent zone IDs in the zone sequences from the Amazon dataset, the difference between these zone IDs tends to be small (most often 1). Next, we incorporate these observations into our IO learning approach. One way to force the routes from our IO model to respect these behaviors (i.e., the “area sequence”, “region sequence” and “one unit difference” behaviors) is to augment the IO linear hypothesis function with penalization constants. In a sense, modifying the hypothesis function can be interpreted as modifying what we believe is the optimization problem the expert human drivers solve to compute their routes. For this particular case, we use ∑_i ∈ V∑_j ∈ V( θ_ij + M^A_ij + M^R_ij + M^d_ij) x_ij, where M^A_ij = 0 if zones i and j are in the same area, and M^A_ij = 1 otherwise, M^R_ij = 0 if zones i and j are in the same region, and M^R_ij = 1 otherwise, and M^d_ij = d(i, j), that is, the difference between zones i and j. Since for Algorithm <ref> we initialize θ_1 as the Euclidean distance between zone centers, where the coordinates of the centers are given by their latitudes and longitudes, each component of θ_1 is much smaller than 1. This makes a penalization of one unit (such as the ones used for M^A_ij and M^R_ij) enough to enforce that the resulting routes respect the area sequence and region sequence behaviors. The same idea applies to the “one unit difference” penalization. Regarding the steps described in Section <ref>, for the second approach we just need to modify the hypothesis function we use. Namely, in steps 2 and 3, we use (<ref>) instead of (<ref>), while keeping everything else the same. This also exemplifies the modularity of our IO approach, where new information/structure can be exploited by simply modifying some part of the IO pipeline.
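For concreteness, the sketch below shows one way the three penalty terms could be computed from a pair of zone IDs of the form W-x.yZ. The parsing assumes well-formed IDs (the example ID string in the comment is made up for illustration) and is not taken from the paper's code.

def zone_penalties(z1, z2):
    # Parse a zone ID of the form 'W-x.yZ' into (W, x, y, Z), e.g. a string like 'G-21.2B'.
    def parse(z):
        w, rest = z.split("-")
        x, tail = rest.split(".")
        return w, int(x), int(tail[:-1]), tail[-1]

    w1, x1, y1, l1 = parse(z1)
    w2, x2, y2, l2 = parse(z2)
    m_area = 0 if (w1, x1, l1) == (w2, x2, l2) else 1        # area: W-x.Z
    m_region = 0 if (w1, x1) == (w2, x2) else 1              # region: W-x
    m_diff = (abs(ord(w1) - ord(w2)) + abs(x1 - x2)
              + abs(y1 - y2) + abs(ord(l1) - ord(l2)))       # d(z1, z2)
    return m_area, m_region, m_diff

The augmented hypothesis then uses θ_ij + m_area + m_region + m_diff as the effective weight of the edge between zones i and j, which is what steers the learned model toward the observed area, region, and one-unit-difference behaviors.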
The final Amazon challenge score achieved by this approach is 0.0319. Figure <ref> shows a complete report of the progress of the training (i.e., in-sample) and test (i.e., out-of-sample) scores of the first and second approaches w.r.t. the iterations of Algorithm <ref>. In this figure, we can see that the training scores are smaller than the test scores, as expected for any learning algorithm. Moreover, it is also evident that our IO approach was able to effectively exploit the new information encoded in the hypothesis function (<ref>). Figure <ref> shows the scores of the top 20 submissions of the Amazon Challenge, plus the score of the Amazon Web Services team <cit.>. As can be seen, our score ranks 2nd compared to the 48 models that qualified for the final round of the Amazon Challenge <cit.>. Moreover, if instead of using our tailored Augmented Suboptimality Loss (<ref>), we used the Suboptimality loss (i.e., (<ref>) without the distance function d), the performance of the IO model would drop to 0.033, which demonstrates the impact of our tailored loss function. §.§ Computational and time complexity In this section, we present further numerical experiments using the Amazon Challenge datasets, focusing on the computational and time complexity of Algorithm <ref>. Before we present our results, as discussed at the end of Section <ref>, recall that we apply our IO learning method separately for each depot in the training dataset. Thus, assuming we can run Algorithm <ref> in parallel for all depots, the complexity of computing the final IO model for all depots equals the complexity of computing the IO model for the largest depot in the dataset. For the Amazon Challenge, the largest depot dataset is DLA7 in Los Angeles, which we thus use to discuss the complexity of our approach. Dataset size vs performance. First, we study the performance of our IO approach by changing the size of the training dataset. That is, instead of using the entire training dataset of the Amazon Challenge to train the IO model, we test the impact of using only a fraction of the available data. Figure <ref> shows the results of this experiment. Figure <ref> shows the Amazon score achieve, per epoch of Algorithm <ref> using different fractions of the Amazon training dataset. Figure <ref> shows the time it took for 5 epochs of Algorithm <ref>, for the different fractions of the training dataset. As expected, the more data we feed to Algorithm <ref>, the better the score gets, and the longer the training takes. Interestingly, notice that using only 20% of the data provided for the challenge, our IO approach is already able to learn a routing model that scores 0.0348, which would still rank 2nd compared to the 48 models that qualified for the final round of the Amazon Challenge. Time complexity and approximate A-FOP. In practice, the most time consuming component of Algorithm <ref> is solving the A-FOP (line 5). As previously explained, for the Amazon Challenge, this problem consists of a TSP over zones (see Step 2 in Section <ref>). Thus, for each epoch of Algorithm <ref>, we need to solve N TSPs, where N is the number of examples in the training dataset. For the depot DLA7, N = 1133, and each example contains, on average (rounded up), 23 zones, where the largest instance has 37 zones and the smallest has 9 zones. Thus, for each epoch of Algorithm <ref>, we need to solve 1133 TSPs, each with 23 zones on average. 
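For concreteness, one such zone-level TSP instance can be solved with Google OR-Tools, the approximate solver discussed next; the sketch below uses an illustrative first-solution strategy and is not our exact configuration (OR-Tools expects integer arc costs, hence the cast).

from ortools.constraint_solver import pywrapcp, routing_enums_pb2

def solve_zone_tsp(dist, depot=0):
    # dist: square matrix of (penalized) travel costs, scaled to integers
    n = len(dist)
    manager = pywrapcp.RoutingIndexManager(n, 1, depot)
    routing = pywrapcp.RoutingModel(manager)

    def cost(i, j):
        return int(dist[manager.IndexToNode(i)][manager.IndexToNode(j)])

    transit_idx = routing.RegisterTransitCallback(cost)
    routing.SetArcCostEvaluatorOfAllVehicles(transit_idx)
    params = pywrapcp.DefaultRoutingSearchParameters()
    params.first_solution_strategy = (
        routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
    solution = routing.SolveWithParameters(params)
    tour, idx = [], routing.Start(0)
    while not routing.IsEnd(idx):
        tour.append(manager.IndexToNode(idx))
        idx = solution.Value(routing.NextVar(idx))
    return tour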
Using an exact Gurobi-based TSP solver, and 5 epochs of Algorithm <ref> for our second approach (using the entire training dataset) took 133.67 minutes (see Figure <ref>). However, recall that as discussed in Remark <ref>, Algorithm <ref> can be used with an approximate A-FOP instead of an exact one. The idea here is that solving A-FOP approximately can be much faster in practice, which may compensate for a potentially worse performance of the learned IO model. We test this idea by using Algorithm <ref> with an approximate TSP solver instead of the exact Gurobi-based one. For the approximate TSP solver, we use Google OR-Tools <cit.>. The final Amazon score after 5 epochs of Algorithm <ref> with the approximate solver is 0.0319, the same score when using the exact solver, but taking only 60.36 minutes in total, that is, more than twice as fast. Interestingly, we can push this time even further. As can be seen in Figure <ref>, a good IO model can be achieved using Algorithm <ref> is used for only one epoch. Moreover, from Figure <ref>, it can also be seen that a good IO model can be learned using only 20% of the training dataset. Thus, using 20% of the training dataset and running Algorithm <ref> using the Google OR-Tools approximate TSP solver for 5 epochs, we achieve a final score of 0.0349 (0.0352 after only one epoch) in only 13.39 minutes (2.68 minutes per epoch on average). This showcases the learning efficiency of our IO methodology, making it also suitable for real-time applications, where models need to be learned/updated frequently, and the training time should not take more than a couple of minutes. Table <ref> summarizes the numerical results of this section. § CONCLUSION AND FURTHER WORK In this work, we propose an Inverse Optimization (IO) methodology for learning the preferences of decision-makers in routing problems. To exemplify the potential and flexibility of our approach, we first apply it to a simple CVRP problem, where we give insight into how our IO algorithm works by modifying the learned edge weights by comparing the example routes to the optimal route we get using the current learned weights. Then, we show the real-world potential of our approach by using it to tackle the Amazon Challenge, where the goal of the challenge was to develop routing models that replicate the behavior of real-world expert human drivers. To do so, we first define what we call Restricted TSPs (i.e., TSPs for which only a subset of the nodes is required to be visited). Given a dataset of signals (nodes to be visited) and expert responses (R-TSPs tours), we have shown how to use IO to learn the edge weights that explain the observed data. In the context of the Amazon Challenge, learning these edge weights translates to learning the sequence of city zones preferred by expert human drivers. Finally, from a sequence of zones, we constructed a complete TSP tour over the required stops. The final score of our approach is 0.0319, which ranks 2nd compared to the 48 models that qualified for the final round of the Amazon Challenge. As future research directions, it would be interesting to apply our methodology to different and more complex classes of routing problems, for instance, dynamic VRPs, routing problems with time windows, backhauls, etc. Given the modularity/flexibility of our IO methodology, we believe it has the potential to be used for a wide range of real-world decision-making problems. apalike
http://arxiv.org/abs/2307.04210v1
20230709154627
Investigating the Edge of Stability Phenomenon in Reinforcement Learning
[ "Rares Iordan", "Marc Peter Deisenroth", "Mihaela Rosca" ]
cs.LG
[ "cs.LG" ]
Investigating the Edge of Stability Phenomenon in Reinforcement Learning Rares Iordan [email protected] University College London, United Kingdom Marc Peter Deisenroth [email protected] University College London, United Kingdom Mihaela Rosca [email protected] University College London, United Kingdom Also at Google Deepmind. August 12, 2023 ======================================================================= Recent progress has been made in understanding optimisation dynamics in neural networks trained with full-batch gradient descent with momentum, with the uncovering of the edge of stability phenomenon in supervised learning <cit.>. The edge of stability phenomenon occurs as the leading eigenvalue of the Hessian reaches the divergence threshold of the underlying optimisation algorithm for a quadratic loss, after which it starts oscillating around the threshold, and the loss starts to exhibit local instability but decreases over long time frames. In this work, we explore the edge of stability phenomenon in reinforcement learning (RL), specifically off-policy Q-learning algorithms across a variety of data regimes, from offline to online RL. Our experiments reveal that, despite significant differences to supervised learning, such as non-stationarity of the data distribution and the use of bootstrapping, the edge of stability phenomenon can be present in off-policy deep RL. Unlike supervised learning, however, we observe strong differences depending on the underlying loss, with DQN — using a Huber loss — showing a strong edge of stability effect that we do not observe with C51 — using a cross entropy loss. Our results suggest that, while neural network structure can lead to optimisation dynamics that transfer between problem domains, certain aspects of deep RL optimisation can differentiate it from domains such as supervised learning. § THE EDGE OF STABILITY PHENOMENON <cit.> use a thorough experimental study to shed light on deep learning optimisation dynamics by showing that full-batch gradient descent training in supervised learning exhibits two phases: progressive sharpening and edge of stability. In the first stage of training, progressive sharpening, the leading eigenvalue of the Hessian, λ_1, increases steadily and the loss decreases monotonically. As λ_1 increases, it reaches the divergence threshold of the underlying optimisation algorithm under a quadratic loss assumption (we will call this “the quadratic divergence threshold”); for gradient descent with learning rate η and momentum decay rate β, the quadratic divergence threshold is (2 + 2β)/η.
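As an illustration of the quantities involved, λ_1 can be estimated by power iteration on Hessian-vector products and compared against this threshold. The PyTorch-style sketch below is schematic (the function name is ours, and the loss is assumed to be a scalar built from the listed parameters); the η and β values correspond to the settings reported in our appendix.

import torch

def leading_hessian_eigenvalue(loss, params, iters=50):
    # power iteration on Hessian-vector products; returns an estimate of lambda_1
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    v = torch.randn_like(flat_grad)
    v /= v.norm()
    eig = flat_grad.new_zeros(())
    for _ in range(iters):
        hv = torch.autograd.grad(flat_grad @ v, params, retain_graph=True)
        hv = torch.cat([h.reshape(-1) for h in hv]).detach()
        eig = v @ hv                      # Rayleigh quotient with the current v
        v = hv / (hv.norm() + 1e-12)
    return eig.item()

eta, beta = 0.01, 0.8                     # optimiser settings used in our experiments
threshold = (2 + 2 * beta) / eta          # quadratic divergence threshold (= 360 here)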
As λ_1 reaches the quadratic divergence threshold, the edge of stability phenomenon occurs: the loss starts to exhibit short-term instabilities, while decreasing over long time scales; λ_1 no longer steadily increases, but fluctuates around the threshold. For cross entropy losses, the edge of stability area is followed by a decrease in λ_1, while for mean squared error losses λ_1 stays in the edge of stability area. Similar results are shown for stochastic gradient descent, across batch sizes, though the effect is less pronounced as the batch sizes decrease <cit.>. The edge of stability phenomenon shows that neural network training is strongly affected by the quadratic divergence threshold of the underlying optimisation algorithm, and exceeding it leads to training instabilities. This observation has garnered a lot of interest, with recent works having analysed the edge of stability phenomenon in supervised learning with both theoretical and empirical tools <cit.>. To the best of our knowledge, no studies on the edge of stability phenomenon have been made outside of supervised learning. We complement this body of work by empirically investigating whether the edge of stability phenomenon occurs in the off-policy deep RL algorithms DQN and C51 across a variety of data regimes, from offline to online learning. Upon acceptance, we will make the code and data used publicly available. § CHALLENGES WITH OPTIMISATION IN OFF-POLICY DEEP REINFORCEMENT LEARNING To investigate whether the edge of stability phenomenon translates to deep RL, we conduct experiments using DQN <cit.> and C51 <cit.>, two off-policy algorithms that model the state-action value function Q(s, a; θ) with a neural network with parameters θ. We study both DQN and C51 as their losses correspond to the mean squared error and cross entropy loss used in supervised learning, studied by <cit.> when investigating the edge of stability phenomenon. For DQN, we investigate the more commonly used Huber loss, which is quadratic around 0: min_θ E(θ) = 𝔼_(s, a, s', r) ∼ℛ 1/2(Q(s, a; θ) - (r + γmax_a' Q(s', a'; θ) ) )^2 , if (·)^2 ≤ 1 𝔼_(s, a, s', r) ∼ℛ|Q(s, a; θ) - (r + γmax_a' Q(s', a'; θ) ) | - 1/2 , otherwise. C51 <cit.>, the distributional counterpart of DQN, models a categorical distribution over returns instead of operating in expectation as in Eq (<ref>), leading to a cross entropy loss; for details, we refer to Appendix <ref>. Off-policy deep RL algorithms like DQN and C51 differ from supervised learning both through their objectives—which use bootstrapping—and the data present in the replay buffer ℛ used for learning the agent, which can be non-stationary, have noise inserted to help exploration, and can be obtained from the agent's own experience. All the above affect optimisation dynamics in deep RL, and have been studied individually <cit.>. Bootstrapping — the dependence of the regression target in Eq (<ref>) on the Q-function — can lead to increased variance and bias in model updates <cit.>. To mitigate instabilities introduced by bootstrapping, a target network is often used, where old parameters updated at regular intervals are used to construct the target instead of the current parameters; both DQN and C51 use target networks. In online RL, where the replay buffer ℛ is filled with the agent's own experience, the non-stationarity of the data present in the replay buffer ℛ violates the i.i.d. assumption required by many optimisation algorithms and gradient updates might not form a gradient vector field <cit.>.
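For reference, a minimal PyTorch-style sketch of the DQN objective in Eq. (<ref>) reads as follows; the batch layout, the network interfaces and the terminal-state handling via a done flag are assumptions, and smooth_l1_loss reproduces the piecewise quadratic/linear form above.

import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # bootstrapped target built from the periodically updated target network
        target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    # quadratic for |error| <= 1 and linear beyond, as in Eq. (<ref>)
    return F.smooth_l1_loss(q_sa, target)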
Offline RL <cit.>, where the agent learns from a fixed replay buffer often gathered from another agent or expert demonstrations, can mitigate some of the training challenges with online RL, but can suffer from poor agent performance due to distributional shift <cit.> — the discrepancy between the data distribution used for learning, present in the offline dataset (the replay buffer), and the distribution the policy encounters during execution. Given these peculiarities of RL, it is unclear how optimisation effects observed in supervised learning, such as the edge of stability results, transfer to deep RL, and how they interact with the behaviour of the loss function. Since the value of the loss function in RL has not been connected with agent performance, many RL works do not study or report loss function behaviour, and focus on the agent reward instead. Here, we focus on the behaviour of the loss function in deep RL and aim to connect it with the value of the leading Hessian eigenvalue through edge of stability results. § INVESTIGATING THE EDGE OF STABILITY PHENOMENON IN REINFORCEMENT LEARNING When studying the edge of stability phenomenon in deep RL, we aim to isolate the effect of the data distribution from the other aspects of RL, such as the use of bootstrapping. We thus train agents across multiple data regimes, ranging from offline learning to online learning. We use gradient descent with and without momentum on MinAtar <cit.>, a subset of Atari games with reduced visual complexity; MinAtar results have been shown to translate to Atari <cit.>. We show results on Breakout in the main text, with Space Invaders results in Appendix <ref>. Since not all the data regimes we consider allow for full-batch training, results in this section use mini-batch training; we provide full-batch training results in Appendix <ref>. Experimental details are provided in Appendix <ref>, and training details of the pre-trained agent used in the offline RL experiments are in Appendix <ref>. §.§ DQN . We use a pre-trained agent's greedy policy to generate a replay buffer of 10^6 transitions and use this to train a new DQN agent. This setup is closest to that of supervised learning, and isolates the effect of the RL losses and bootstrapping from RL specific effects on the data distribution. Figure <ref> shows a clear edge of stability effect: the leading eigenvalue λ_1 grows until reaching the quadratic divergence threshold, after which it fluctuates around it and the loss function shows increasing instabilities; this is consistent with results using the mean squared error in supervised learning. Consistent with existing results <cit.>, the performance of the agent is poor, likely due to distributional shift. . To address the challenges with distributional shift and increase the diversity of agent experience, instead of using greedy actions taken by the pre-trained agent to generate the replay buffer, we use an ϵ-greedy policy with 0.7 probability of taking the greedy action from the pre-trained agent and 0.3 probability of a random action. This is akin to the concurrent setting in <cit.>. Results in Figure <ref> show that this intervention vastly improves the reward obtained by the agent compared to the previous setting explored, but optimisation dynamics retain the edge of stability behaviour. . We train an agent using the last 10^6 transitions obtained from the pre-trained agent's online phase; this is known as the final buffer setting <cit.>.
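As an illustration of how such a mixed dataset can be generated, the sketch below fills a buffer with the pre-trained agent's greedy action with probability 0.7 and a random action otherwise; the agent and environment interfaces (greedy_action, a gym-style step/reset) are assumptions rather than our exact data-collection code.

import random

def collect_mixed_buffer(env, pretrained_agent, num_transitions=1_000_000,
                         greedy_prob=0.7):
    buffer, s = [], env.reset()
    while len(buffer) < num_transitions:
        if random.random() < greedy_prob:
            a = pretrained_agent.greedy_action(s)   # assumed agent interface
        else:
            a = env.action_space.sample()           # assumed gym-style environment
        s_next, r, done, _ = env.step(a)
        buffer.append((s, a, r, s_next, done))
        s = env.reset() if done else s_next
    return buffer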
This brings us closer to online RL: we still use a fixed replay buffer to train the agent, but that dataset contains transitions from a changing distribution. This allows us to isolate the effect of the replay buffer being obtained from a series of changing policies from the interactions of the agent's behaviour affecting its own replay buffer, as we see in online learning. Results in Figure <ref> show that here too, we observe the edge of stability phenomenon. . In online RL, the replay buffer is obtained from the agent's own experience, leading to the related challenges mentioned in Section <ref>. Figure <ref> shows that the leading eigenvalues of the Hessian grow early in training but plateau below the quadratic divergence threshold; despite λ_1 not reaching the quadratic divergence threshold, once it plateaus we observe increased instability in the loss function. We show full-batch results across the above offline RL cases above in Figures <ref>, <ref>, <ref> in the Appendix, which consistently show that as λ_1 fluctuates around the quadratic divergence threshold the loss function exhibits increased instabilities. We note that while changing the target network can have a short term effect on λ_1, it does not drastically affect it's long term trajectory and the edge of stability phenomenon. §.§ C51 When investigating the edge of stability effect using C51 across all the above data regimes, we observe that C51 does not clearly exhibit the edge of stability behaviour; we show selected results in Figure <ref> and additional results in Figures <ref> and <ref> in the Appendix. However, similar to the observations in supervised learning with cross-entropy loss <cit.>, we notice that λ_1 grows early in training, after which it consistently decreases. In offline learning (Figure <ref>), the leading eigenvalue λ_1, usually stays under the quadratic divergence threshold, but this is not the case in online learning (Figure <ref>), where λ_1 is consistently significantly above the quadratic divergence threshold. This observation might explain why we observed increased challenges with training C51 in this setting compared to DQN (results in Figure <ref> use stochastic gradient descent without momentum, as using momentum led to very poor results, see Figure <ref> in the Appendix). § DISCUSSION We examined the edge of stability effect in DQN and C51, two off-policy deep RL algorithms on simple environments. Our findings suggest that the edge of stability phenomenon can be induced by neural network optimisation in deep RL, but whether this occurs depends on underlying algorithm. We observed that DQN exhibits the edge of stability behaviour in offline RL, with a diminished effect in online RL. In contrast, we did not observe a consistent edge of stability effect when using C51, but nonetheless did observe a connection between large leading Hessian eigenvalues and challenges in training C51 agents. Caveats and future work. Our results were obtained on the MinAtar environment; we hope that future studies will expand our results to a wider range of environments. Following <cit.>, we investigate the edge of stability phenomenon in RL when using gradient descent with momentum; we believe future work can expand our exploration to adaptive optimisers commonly used in RL, such as Adam <cit.>, as has recently been done in supervised learning <cit.>. 
We further hope future research can connect the leading eigenvalue of the Hessian to the agent's performance, not only the loss, as has been done in supervised learning with generalisation <cit.>. 0.2in § ADDITIONAL EXPERIMENTAL RESULTS §.§ SGD with momentum results for C51 on Breakout In Figure <ref> we present C51 results on Breakout which do not clearly show an edge of stability effect. In the online regime, the eigenvalues consistently rise orders of magnitude above the quadratic threshold but reach low levels and plateau later in training. §.§ Experiments on Breakout using SGD with momentum and full-batch In Figure <ref> we present full-batch experiments for DQN and C51 in the setting , with a zoom in on the first time the quadratic threshold is achieved in Figure <ref>. DQN shows a clear edge of stability effect which is broken later during training where we see increased instabilities. C51 does not show an edge of stability effect with the eigenvalues plateauing over time. In Figure <ref> we present full-batch experiments for DQN and C51 in the setting , with a zoom in on the first time the quadratic threshold is achieved in Figure <ref>. Similar to the previous setting, DQN shows a clear edge of stability effect which is broken later during training where we see increased instabilities. C51 does not show an edge of stability effect with the eigenvalues plateauing over time. In Figure <ref> we present full-batch experiments for DQN and C51 in the setting , with a zoom in on the first time the quadratic threshold is achieved in Figure <ref>. Similar to the previous two settings, DQN shows a clear edge of stability effect which is broken later during training where we see increased instabilities. C51 does not show an edge of stability effect with the eigenvalues plateauing over time. §.§ SGD results for DQN and C51 on Breakout In Figure <ref> we present DQN results on Breakout with SGD without momentum which do not clearly show an edge of stability effect. In the online regime, the eigenvalues consistently fail to rise to the quadratic threshold but reach low levels and plateau later in training. There is a clear edge of stability effect offline. In Figure <ref> we present C51 results on Breakout with SGD without momentum which show an edge of stability effect. In the offline regimes, the eigenvalues rise sligtly past the quadratic threshold and then decrease to hover around it. In the online regime, the eigenvalues consistently rise orders of magnitude above the quadratic threshold. §.§ Results for DQN in the Space Invaders environment with and without momentum In Figure <ref> we present DQN results on Space Invaders with SGD with momentum which clearly show an edge of stability effect in every offline setting. In the online regime, the eigenvalues consistently rise above the quadratic threshold and it has no effect on the trend of the sharpness (λ_1). In Figure <ref> we present DQN results on Space Invaders with SGD without momentum which clearly show an edge of stability effect in every offline setting. In the online regime, there exists a trace of the edge of stability behaviour, however, later during training the principal eigenvalue consistently rises above the quadratic threshold. § EXPERIMENTAL DETAILS §.§ Neural architectures We used a similar neural network architecture for both DQN and C51. The network consists of 1 Convolutional Layer, followed by 1 Fully Connected (FC) Layer with and an Output Layer which depends on the algorithm. 
The Convolutional Layer has a kernel size of 3 and a stride of 1 and is configured differently based on the game due to different channel numbers. The rest of the details can be found in Table <ref>. §.§ Algorithms The pseudocode for the DQN algorithm is presented below[A detailed description can be found in <cit.>]: As an extension of DQN, <cit.> proposed to look at the entire value distribution dubbed Z instead of considering expectations. Such a view permits the definition of distributional Bellman equations and operators[The Mathematics behind Categorical DQN is expanded in <cit.>.]. Z is described discretely by a number N ∈ℕ and V_MIN, V_MAX∈ℝ, whose support is the set of atoms { z_i = V_MIN + i Δ z : 0 ≤ i < N } with Δ z = (V_MAX - V_MIN)/(N - 1). These atoms represent the "canonical returns" of the distribution and each has probability given by a parametric model θ : 𝒮×𝒜→ℝ^N: Z_θ (s, a) = z_i with probability p_i(s, a) = e^θ_i(s, a)/∑_j e^θ_j(s, a) The update is computed via 𝒯̂ Z_θ where 𝒯̂ is an operator but a discrete distributional view poses a problem because Z_θ and 𝒯̂ Z_θ almost always have disjoint supports. To combat this issue, the update is reduced to multi-class classification by being projected onto the support of Z_θ. Assume that π is the greedy policy w.r.t. 𝔼 [Z_θ]. Given a tuple (s, a, r, s', γ) the term 𝒯̂ z_j = r + γ z_j for each atom z_j. The probability p_j(s', π(s')) is distributed to the immediate neighbours of 𝒯̂ z_j via a projection operator Φ whose i^th component is given by[The quantity [ · ]_a^b bounds the argument in the interval [a, b].]: ( Φ𝒯̂ Z_θ (s, a) )_i = ∑_j=0^N-1 [ 1 - | [ 𝒯̂ z_j ]_V_MIN^V_MAX - z_i |/Δ z ]_0^1 p_j (s', π(s')) A minimal numerical sketch of this projection is given at the end of this appendix. In the end, as a DQN derivative, a policy network and a target network model Z_θ and Z_θ̂ (respecting the notation of Algorithm <ref>) with the loss ℒ given by the cross-entropy term of the KL Divergence: D_KL ( Φ𝒯̂ Z_θ̂ (s, a) || Z_θ (s, a) ) The routine of Categorical DQN is the same as before with the only exception being the loss computation, which is given by: §.§ Offline RL reproduction details In this paper we examined three different offline RL replay buffers for Breakout on Minatar: * 10^6 transitions that were obtained from the experience of a pre-trained agent with no action perturbation. * 10^6 transitions that were obtained from the experience of a pre-trained agent where during game-play 30% of the actions taken were random (instead of the greedy action). * 10^6 transitions that were obtained from the last 10^6 transitions of the replay buffer of an agent that was trained with Adam, online. §.§ Optimisation details and how to replicate results We studied both the full-batch and the mini-batch settings of GD and momentum for both algorithms. The mini-batch experiments always used a batch size of 512 and the full-batch experiments were performed on a sub-sample of 10^4 transitions from the original replay buffers which consisted of 10^6 transitions. When experimenting with gradient descent, the learning rate was 0.01 and when adding momentum the learning rate was 0.01 and the momentum coefficient was β = 0.8. The γ parameter used to discount rewards was 0.99. The agent which generated the replay buffers was trained with Adam with a batch size of 64, learning rate of 0.00025, β_1 = 0.9, β_2 = 0.999 and ϵ = 10^-8. Figure <ref> shows the return obtained for Breakout online with these settings. During offline training, the action executed by the agent is the action present in the replay buffer.
During online training, the first 5000 iterations are used to accumulate experiences in the replay buffer, after which the training of the agent starts. During the first 5000 steps the agent is taking random actions. Afterwards, the actions are taken based on a decaying ϵ-greedy policy where ϵ decreases linearly from 1.0 to 0.1 for 100000 iterations. For C51, we used 51 atoms with V_MIN = -10 and V_MAX = 10. The eigenvalues were logged at every 100 iterations. Whenever "Avg Return" was mentioned, that referred to a moving average of the return per episode. It was calculated based on the following routine: avg_return[i] = 0.99 * avg_return[i-1] + 0.01 * return_per_episode[i] where avg_return[0] = return_per_episode[0]. The datasets for Breakout and Space Invaders used for experiments are available https://github.com/riordan45/rl-edge-of-stabilityhere. In addition, we are able to provide datasets for performing similar experiments on Asterix, Freeway and Seaquest, the other games present in the Minatar testbed.
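Finally, to make the projection operator Φ of the Algorithms appendix concrete, the following batched PyTorch sketch implements the categorical projection with the atom settings reported above (N = 51, V_MIN = -10, V_MAX = 10); the function name, tensor layout and terminal-state handling via a done flag are illustrative.

import torch

def categorical_projection(next_probs, rewards, dones, gamma,
                           v_min=-10.0, v_max=10.0, n_atoms=51):
    delta_z = (v_max - v_min) / (n_atoms - 1)
    support = torch.linspace(v_min, v_max, n_atoms, device=rewards.device)
    # T z_j = r + gamma * z_j, clipped to [v_min, v_max]
    tz = rewards.unsqueeze(1) + gamma * (1 - dones).unsqueeze(1) * support.unsqueeze(0)
    tz = tz.clamp(v_min, v_max)
    b = (tz - v_min) / delta_z                      # fractional atom index of T z_j
    lower, upper = b.floor().long(), b.ceil().long()
    # keep the full mass when T z_j lands exactly on an atom (lower == upper)
    lower[(upper > 0) & (lower == upper)] -= 1
    upper[(lower < n_atoms - 1) & (lower == upper)] += 1
    proj = torch.zeros_like(next_probs)
    proj.scatter_add_(1, lower, next_probs * (upper.float() - b))
    proj.scatter_add_(1, upper, next_probs * (b - lower.float()))
    return proj

The C51 loss is then the cross entropy between this projected distribution and the distribution predicted by the policy network, as in the KL expression above.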
http://arxiv.org/abs/2307.05699v1
20230711180433
Electrons interacting with Goldstone modes and the rotating frame
[ "Konstantinos Vasiliou", "Yuchi He", "Nick Bultinck" ]
cond-mat.str-el
[ "cond-mat.str-el" ]
Rudolf Peierls Centre for Theoretical Physics, Parks Road, Oxford, OX1 3PU, UK Rudolf Peierls Centre for Theoretical Physics, Parks Road, Oxford, OX1 3PU, UK Rudolf Peierls Centre for Theoretical Physics, Parks Road, Oxford, OX1 3PU, UK Department of Physics, Ghent University, Krijgslaan 281, 9000 Gent, Belgium We consider electronic systems with a spontaneously broken continuous symmetry. The scattering vertex between electrons and Goldstone modes is calculated over the entire Brillouin zone using the random phase approximation. This calculation reveals two things: (1) electrons always couple to both ϕ and ∂_tϕ, where ϕ is the Goldstone field, and (2) quasi-particles in a state with continuous symmetry breaking have to be defined in a rotating frame, which locally follows the fluctuations of the order parameter. The implications of these findings for electron spectral functions in both symmetry-broken and thermally disordered systems are discussed, and the examples of anti-ferromagnetism in the Hubbard model and spin spiral order in the three-band model are worked out in detail. Electrons interacting with Goldstone modes and the rotating frame Konstantinos Vasiliou, Yuchi He, and Nick Bultinck ================================================================= § INTRODUCTION Many strongly-correlated materials display continuous symmetry breaking in some part of their phase diagram. Some paradigmatic examples are spin-density wave orders in the cuprate and iron-based superconductors, and valley/spin orders in 2D moiré materials. As a result of the continuous symmetry-breaking, the low-energy spectrum contains gapless Goldstone modes. In this work we set out to calculate the scattering vertex between these Goldstone modes and the electrons. Specifically, our goal is to obtain an expression for the scattering vertex which is valid over the entire Brillouin zone, and not just in the long-wavelength limit for the Goldstone modes. One reason for going beyond a long-wavelength approximation is that just like acoustic phonons, Goldstone modes of the electron liquid should decouple from the electrons at long wavelengths. Hence one expects the most important scattering processes to be those where an electron emits or absorbs a zone-boundary Goldstone mode. The starting point of our analysis is a simple mean-field description of the broken-symmetry state. From the optimal broken-symmetry Slater determinant, we then construct a path-integral representation of the partition function in order to study fluctuations on top of the mean-field state. Via the random phase approximation (RPA)[Note that in this work RPA means summing both bubble and ladder diagrams, such that it is a conserving approximation in the sense of Baym and Kadanoff <cit.>. It is also equivalent to applying the time-dependent variational principle <cit.>, and hence is not a perturbative calculation which is only applicable to weakly-interacting systems.] we obtain both the Bethe-Salpeter equation for the collective modes (whose solution gives the Goldstone mode energies and wavefunctions), and an explicit expression for the electron-Goldstone mode scattering vertex. Using these results, we then construct an effective electron-boson model with both fermionic and bosonic fields, which emulates the interacting electron-Goldstone mode system. Towards the end of the paper we illustrate our formalism with two examples: anti-ferromagnetism in the Hubbard model, and circular spin-spiral order in the three-band model.
§.§ Summary of results and connection to previous works Even though both the goal (obtaining the electron-Goldstone mode scattering vertex) and the methods (mean-field theory and RPA) of this work are straightforward and have previously been discussed in classic works such as e.g. Refs. <cit.>, we nevertheless find that obtaining a result which is in line with physical expectations requires some non-trivial (and to the best of our knowledge new) steps. In this section we briefly discuss what these non-trivial steps are, why they are necessary, and what interesting physics is hiding behind them. Via the RPA analysis we obtain a scattering vertex which in the long-wavelength limit agrees with the previous result of Ref. <cit.>. However, this result is not entirely satisfactory, as it seems to imply that long-wavelength Goldstone modes do not decouple from the electrons. Instead, they give rise to inter-band scattering processes for the electrons with a scattering vertex of the order of the mean-field bandgap which results from the spontaneous symmetry breaking. Hence the long-wavelength inter-band scattering is of the order of the interaction energy scale. Already at tree-level, exchange of Goldstone modes therefore introduces an effective interaction with a magnitude corresponding to the square of the bare repulsive interaction, divided by a Goldstone propagator which vanishes in the low-energy and long-wavelength limit. This leads to strong self-energy corrections for the mean-field electrons, casting doubt on their validity as well-defined quasi-particles. However, we observe that the spurious contribution to the electron-Goldstone mode vertex is `pure gauge', meaning that it can be removed by a gauge transformation in the path integral. This gauge transformation implements a spatially-dependent symmetry transformation on the mean-field electrons. In particular, the transformed electron fields are defined in a rotating frame which follows the local fluctuations of the order parameter. Crucially, electrons in the rotating frame completely decouple from long-wavelength Goldstone modes. We note that a similar rotating frame has already been introduced in previous studies of superconducting <cit.> and magnetic <cit.> orders, but for different reasons. In particular, these previous works did not obtain the Goldstone mode energies and wavefunctions by solving the Bethe-Salpeter equation, and did not motivate the rotating frame by requiring a decoupling between electrons and Goldstone modes at long wavelengths. The mean-field electrons are related via a frequency-independent, and hence non-dynamical, transformation to the microscopic electrons. The linear response functions which are measured in experiment are therefore directly related to correlation functions of the mean-field electrons. But we argued above that the mean-field electrons are not a good approximation to the true quasi-particles of the system – one should instead use the electrons in the rotating frame to approximate the quasi-particles. However, the electron propagator can be approximated as a convolution (in frequency and momentum space) of two propagators of well-defined quasi-particles: fermions in the rotating frame, and the Goldstone modes (we momentarily ignore the possibility of Landau damping for the Goldstone modes). This is explained in detail in Sec. <ref>. This result has previously also been obtained in Ref. <cit.>, where, similarly to Ref. 
<cit.>, the rotating frame is introduced to correctly describe fluctuations beyond mean-field theory via a Hubbard-Stratanovich decoupling of the interaction. Again, this work did not motivate the necessity of a rotating frame by deriving the electron-Goldstone mode scattering vertex in the RPA formalism as we do here. We note that a similar expression for the microscopic fermion two-point function as a convolution of two propagators is also obtained in theories where the electron is assumed to fractionalize in a spinon and a holon <cit.>. The RPA approach describes small fluctuations around the mean-field state. In the effective electron-boson model, which we construct to emulate the interacting electrons and Goldstone modes at the RPA level, the Goldstone fields are a collection of real scalar fields. However, once the dispersion relation of small order parameter fluctuations is known, it is straightforward to improve the theory and incorporate the correct global structure of the order parameter manifold by describing the Goldstone mode dynamics with a non-linear sigma model. This is especially important in 2D, where the non-linear sigma model is disordered at any non-zero temperature, and hence ensures that the effective electron-boson model respects the Hohenberg-Mermin-Wagner theorem. As a final point, let us elaborate on the physical behaviour which can be extracted from the expression for the microscopic fermion propagator as a convolution of the Goldstone propagator and the propagator of the fermions in the rotating frame. In particular, let us consider 2D systems, and focus on temperatures T which are much smaller than the mean-field bandgap. If we use the correct non-linear sigma model action for the Goldstone mode dynamics, the Goldstone mode propagator at finite T will be invariant under global symmetry transformations, and acquire a thermal mass. As a result, the propagator for the microscopic fermions (which is a convolution containing the Goldstone mode propagator) will also be symmetric (this is explained in detail in Sec. <ref>, and illustrated via the examples in Sec. <ref> and <ref>). However, the spectral weight of the microscopic electrons is still predominantly determined by the poles of the fermion propagator in the rotating frame. As we assume T to be much smaller than the mean-field bandgap, the poles of the rotating-frame fermion propagator will be located at the same energies of the symmetry-broken mean-field band spectrum. As a result, we find that even though the microscopic fermion propagator is invariant under the continuous global symmetry, it nevertheless produces a spectral weight that is predominantly determined by the spectrum of the broken-symmetry state – and hence is very different from the spectral weight of the non-interacting fermions. We thus see that the introduction of a rotating frame, which is necessary to have the electrons decouple from the long-wavelength Goldstone modes, automatically leads to an expression for the electron propagator which captures the physically intuitive behaviour of an electron system with an order parameter that has well-defined magnitude, but whose orientation is disordered by thermal fluctuations. §.§ Structure of the paper The remainder of this work is organized as follows. We start in Sec. <ref> by introducing the mean-field starting point of our analysis. In Sec. 
<ref> we use the optimal Slater determinant, which we obtain from the mean-field analysis, to construct a path integral representation of the partition function, which can be used to study fluctuations beyond mean-field. Using this path integral, we introduce the infinite series of RPA Feynman diagrams that give rise to the Bethe-Salpeter equation, and also the effective interaction between electrons mediated by collective mode exchange, in Sec. <ref>. In Sec. <ref> we first discuss the general properties of the Goldstone mode wavefunctions that we obtain by solving the Bethe-Salpeter equation. We then construct an effective electron-boson model which mimics the interacting electron-Goldstone mode system at tree-level. In Sec. <ref> we use the properties of the Goldstone mode wavefunctions to study the electron-Goldstone mode scattering vertex obtained from the RPA analysis in the long-wavelength limit. Here we uncover the the problematic inter-band scattering processes which do not vanish in the long-wavelength limit. In the same section, we then explain how this problem can be solved by doing a gauge transformation in the path integral, and defining fermions in a rotating frame. The implications of the rotating frame for the electron spectral functions are discussed at the end of this section. Finally, we illustrate the general formalism for anti-ferromagnetism in the Hubbard model in Sec. <ref>, and for circular spin-spiral order in the three-band model in Sec. <ref>. The appendix reviews some properties of the Bethe-Salpeter equation and its solutions. § MEAN-FIELD STARTING POINT In this section we set the stage for our main results presented below. In particular, we introduce the mean-field theory which is the starting point for our calculations. All the concepts discussed here are standard, and the main purpose of this section is therefore to introduce the context and notation necessary to understand the following sections. We consider interacting electron systems described by the following general Hamiltonian: H = ∑_,̊'̊∑_a,b h(-̊'̊)_ab c^†_,̊a c_'̊,b + 1/2∑_,̊'̊V(-̊'̊):n_ n_'̊: , where $̊ denote the sites of a Bravais lattice indspatial dimensions,h()̊is a general hopping Hamiltonian, andn_=̊ ∑_a c^†_,̊ac_,̊ais the electron density at site$̊. We also assume that h()̊ is short range, and V()̊ has a well-defined Fourier transform (for example it could be a screened Coulomb potential). Our results can be generalized to other Hamiltonians, for example with other (not density-density) interactions, but for ease of presentation we do not consider these generalizations here. We will make use of the translation invariance and work in momentum space, where the Hamiltonian in Eq. (<ref>) takes the form H = ∑_∑̨_a,b h()̨_abc^†_,̨ac_,̨b + 1/2∑_ V() :n_ n_-: , where n_ = 1/√(N)∑_∑̨_a c^†_+̨,ac_,̨a , with N the number of sites, is the Fourier transform of the electron density. We are interested in Hamiltonians with a continuous symmetry, such as spin rotation symmetry, which gets broken spontaneously. The starting point of our analysis is a mean-field treatment of the symmetry breaking at zero temperature. In particular, we assume that the Slater determinants which minimize the energy of H are not invariant under the continuous symmetry of the Hamiltonian. Let us choose one such Slater determinant and write it as |ψ_0⟩ = ∏_∏_i=1^N_o()̨c^†_,̨i|0⟩ , where c^†_,̨i creates an electron in a mean-field single-particle state, and N_o()̨ is the number of occupied states at momentum $̨. 
Note that we have assumed that the optimal Slater determinants preserve the translation symmetry. Also this restriction can be relaxed, but we will not do this here. The mean-field single-particle states (both occupied and unoccupied) will be labeled by Greek indices, and the corresponding creation operators are defined as c^†_,̨α = ∑_a u_α^a()̨ c^†_,̨a , where|u_α()̨⟩are a set of orthonormal vectors at every$̨: ⟨ u_α()̨|u_β()̨⟩ = δ_αβ. We will use the convention that i,j,k denote occupied mean-field states, whereas m,n,o denote the unoccupied states. The single-particle correlation matrix of the optimal Slater determinant in Eq. (<ref>), which by assumption is diagonal in momentum space, is thus given by P()̨_αβ := ⟨ψ_0|c^†_,̨βc_,̨α|ψ_0⟩=δ_αβn_α()̨ , with n_α()̨∈{0,1}, n_i/j/k()̨=1 and n_m/n/o()̨=0. Next we construct the Hartree-Fock mean-field Hamiltonian using P()̨. For future use we find it convenient to first rewrite the exact Hamiltonian in the mean-field basis: H = ∑_h̨()̨_αβ c^†_,̨α c_,̨β + 1/2∑_ V() :n_ n_-: n_ = 1/√(N_s)∑_∑̨_α,βc^†_-̨,α[Λ_()̨]_αβc_,̨β , where h()̨_αβ = ∑_a,bu^a*_α()̨h()̨_abu^b_β()̨, and [Λ_()̨]_αβ := ⟨ u_α(-̨)|u_β()̨⟩ . As we are only considering translationally invariant states, the Hartree Hamiltonian is trivial and is simply given by H_H[P] = V(0)N_e/N∑_∑̨_αc^†_,̨α c_,̨α , where N_e is the number of electrons in the system. The Fock Hamiltonian is given by H_F[P] = -1/N∑_,∑_α,β,iV() [Λ_-(-̨)]_α i[Λ_()̨]_i βc^†_,̨αc_,̨β The assumption that the Slater determinant in Eq. (<ref>) is a variational energy minimum is equivalent to the statement that the Hartree-Fock self-consistency equation is satisfied, which in our notation takes the following form: ∑_α,βh()̨_αβc^†_,̨αc_,̨β + H_H[P] + H_F[P] = ∑_α E_,̨αc^†_,̨αc_,̨α , where E_,̨α are the mean-field single-particle energies satisfying sgn(E_,̨α) = 1-2n_α()̨. § HARTREE-FOCK PATH INTEGRAL We are interested in fluctuations beyond mean-field theory, and in particular in the role of the Goldstone modes associated with the spontaneous symmetry breaking. To study these fluctuations, we will use the path integral formalism. However, we will not use the standard path integral construction which relies on fermionic coherent states constructed on top of the Fock vacuum. Instead, we will construct coherent states on top of the Slater determinant in Eq. (<ref>). Working in the single-particle mean-field basis, coherent states associated with unoccupied states take on the conventional form: |ψ_,̨n⟩ = (1-ψ_,̨nc^†_,̨n)|0⟩ ⟨ψ̅_,̨n| = ⟨ 0|(1 - c_,̨nψ̅_,̨n) , where ψ_,̨n and ψ̅_,̨n are Grassmann numbers. These coherent states have the usual properties: c_,̨n|ψ_,̨n⟩ = ψ_,̨n|ψ_,̨n⟩ ⟨ψ̅_,̨n|ψ_,̨n⟩ = e^ψ̅_,̨nψ_,̨n . For the occupied states, we will use particle-hole transformed coherent states, which we define as |ψ̅_,̨i⟩ = (1-ψ̅_,̨ic_,̨i)c^†_,̨i|0⟩ ⟨ψ_,̨i| = ⟨ 0|c_,̨i(1-c^†_,̨iψ_,̨i) , where ψ̅_,̨i and ψ_,̨i are again Grassmann numbers. 
The particle-hole transformed coherent states satisfy following properties: c^†_,̨i|ψ̅_,̨i⟩ = ψ̅_,̨i|ψ̅_,̨i⟩ ⟨ψ_,̨i|ψ̅_,̨i⟩ = e^ψ_,̨iψ̅_,̨i , To construct a path integral representation of the partition function, we need to insert resolutions of the identity in terms of the coherent states, which are given by ∫dψ_,̨n∫dψ̅_,̨ne^-ψ̅_,̨nψ_,̨n|ψ_,̨n⟩⟨ψ̅_,̨n| = 1 ∫dψ̅_,̨i∫dψ_,̨ie^-ψ_,̨iψ̅_,̨i|ψ̅_,̨i⟩⟨ψ_,̨i| = 1 Using the above resolutions of the identity, the partition function at temperature T can be written as Z(T) = tr(e^-H/T) = e^-E_0^HF/T∫ [Dψ]∫[Dψ̅] e^-S , where E_0^HF = ⟨ψ_0|H|ψ_0⟩ is the Hartree-Fock variational ground state energy, and the action S is given by S = ∫_0^1/Tdτ ∑_∑̨_αψ̅_,̨α (∂_τ + E_,̨α)ψ_,̨α +1/2N ∑_,,̨'̨V() (ψ̅_-̨Λ_()̨ψ_)( ψ̅_'̨+Λ_-('̨) ψ_'̨) , where in the last line we have suppressed the indices α,β,…. A few comments are in order. First, note that the kinetic term of the action contains the mean-field single-particle energies, and not the bare band energies (which would correspond to the eigenvalues of h()̨). So it will be the properties of the renormalized mean-field bands, e.g. their nesting or density-of-states, which controls the perturbation theory in the interaction. Secondly, we emphasize that no approximation is involved in our derivation of the path integral – Eqs. (<ref>), (<ref>) constitute an exact representation of the partition function. Thirdly, the unusual form of the action, and the additional factor exp(-E_0^HF/T) in Eq. (<ref>), result from the different form of normal ordering required by the use of particle-hole transformed coherent states: one has to normal order the Hamiltonian with respect to the Hartree-Fock ground state, and not with respect to the Fock vacuum state. This different choice of normal ordering produces additional quadratic terms, which exactly correspond to the Hartree and Fock Hamiltonians, which combined with the bare kinetic term produce the mean-field single-particle energies via the Hartree-Fock self-consistency equation (<ref>). In working with the path integral in Eqs. (<ref>), (<ref>) the conventional imaginary-time Feynman rules can be used, except for equal-time diagrams. These diagrams are usually defined by inserting a factor e^iϵω_n, where ω_n is the fermionic Matsubara frequency, and taking the limit ϵ→ 0 at the end of the calculation. Here, to reflect the different normal ordering, one has to add the factor e^iϵω_nsgn(E_,̨α) to the equal-time propagators. With this regularization, the Hartree and Fock self-energy diagrams vanish at zero temperature. This makes sense physically, as the Hartree-Fock path integral already takes these self-energy effects into account from the outset, and one should not double count these terms. To conclude this section, we present the diagrammatic Feynman rules that will be used in this work. First, the electron propagator is represented in the usual was as a straight line with an arrow: (a) at (-0.3,0); (b) at (1,0); (c) at (2.7,0) =(iω_n - E_,̨α)^-1 .; * (a) – [fermion] (b), ; The interaction will be represented as a dashed line: (a) at (0,-1) ,̨α; (b) at (0,1) -̨,β; (c) at (0.5,0); (d) at (2,0); (e) at (2.5,-1) '̨-,σ; (f) at (2.5,1) '̨,λ; (g) at (3,0) =; (h) at (5.8,0) -V()[Λ_()̨]_βα[Λ_^†('̨) ]_λσ .; * (a) – [fermion] (c), (c) – [fermion] (b), (c) – [scalar] (d), (e) – [fermion] (d), (d) – [fermion] (f), ; With these definitions and conventions in place, we can now turn our attention to the RPA fluctuations on top of the mean-field result, which is the topic of the next section. 
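Before turning to the RPA fluctuations, it may help to recall how the mean-field single-particle energies and states entering the action above are obtained in practice. The numpy sketch below is a schematic self-consistency loop in the spirit of the Hartree-Fock equation of the previous section, with band and momentum indices suppressed and the Hartree-Fock potential supplied as a generic callable; it is an illustration, not the solver used for the examples later in the paper.

import numpy as np

def hartree_fock_iteration(h, v_mf, n_occ, tol=1e-8, max_iter=200):
    # h: one-body matrix; v_mf(P): Hartree + Fock potential built from the
    # single-particle correlation matrix P; n_occ: number of occupied states
    dim = h.shape[0]
    P = np.diag([1.0] * n_occ + [0.0] * (dim - n_occ))   # initial guess
    for _ in range(max_iter):
        e, u = np.linalg.eigh(h + v_mf(P))
        occ = u[:, :n_occ]                               # fill the lowest states
        P_new = occ @ occ.conj().T
        if np.max(np.abs(P_new - P)) < tol:
            break
        P = P_new
    return e, u, P_new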
§ EFFECTIVE RPA INTERACTION In the previous section we derived the Hartree-Fock path integral, which contains the complete bare Coulomb repulsion in the action. Here we study the effective action which arises from summing all contributions from RPA diagrams. In Fig. <ref> we show these diagrams. They contain, among others, the familiar bubble/polarization diagrams, and Berk-Schrieffer diagrams <cit.>. The central object defined in Fig. <ref> is called G_2(,iν), which is a matrix diagonal in momentum and bosonic Matsubara frequency iν. Writing out the indices explicitly, this matrix is [G_2(,iν)]^α̨β_'̨λσ. It is defined diagramatically in Fig. <ref>(b) as the infinite sum of direct and exchange diagrams concatenated with electron propagators. An explicit expression for this infinite sum can be obtained by solving the familiar Bethe-Salpeter equation, which is shown diagramatically in Fig. <ref>. Written out explicitly, the Bethe-Salpeter equation for G_2(,iν) is [G_2(,iν)]^α̨β_'̨λσ = [-T∑_iω_n1/iω_n - E_,̨α1/i(ω_n-ν) - E_+̨,β]× [δ_αλδ_βσδ_,̨'̨ -1/N_s∑_”̨μν( V() [Λ_()̨]_βα[Λ_-(”̨-)]_μν - V(”̨-'̨)[Λ_-̨”̨()̨]_μα[Λ_”̨-(”̨-)]_βν)[G_2(,iν)]^”̨μν_'̨λσ] The details of the Bethe-Salpeter equation and its solution are presented in Appendix <ref>. As the Bethe-Salpeter equation is well-known, we simply present the results here. At T=0, the general form of G_2(,iν) is [G_2(iν,)]^,̨αβ_'̨,λσ = ∑_s φ^s_,αβ()̨1/ω_,s - iν η_,sφ^s*_,λσ('̨) , where ω_,s are the energies of the RPA collective modes, which are guaranteed to be real and positive if the Slater determinant in Eq. (<ref>) is a local energy minimum, i.e. if it satisfies the Hartree-Fock self-consistency equations (<ref>). The sum in Eq. (<ref>) runs only over those modes which have non-zero energy. Eq. (<ref>) shows that G_2(,iν) takes the form of a propagator for the collective modes, which consist of electron-hole pairs with a pair-wavefunction φ^s_,αβ()̨. The collective mode wavefunctions φ^s_,αβ()̨ and energies ω_,s are obtained by solving the following generalized eigenvalue equation: (n_α()̨ - n_β(-̨))φ^s_,αβ()̨η_,sω_,s = |E_α()̨- E_β(-̨)|φ^s_,αβ()̨ + V()1/N∑_'̨tr(φ^s_('̨)Λ_('̨) )[ Λ^†_()̨]_αβ - 1/N∑_'V(') [Λ^†_'()̨φ^s_(-̨')Λ_'(-̨) ]_αβ . Note that φ^s_,αβ()̨ is defined to be zero when n_α()̨= n_β(-̨). For collective modes with non-zero energy ω_,s, the signs η_,s are related to the collective mode wavefunctions through the following relation: ∑_∑̨_i,n(|φ^s_,ni()̨|^2 - |φ^s_,in()̨|^2 ) = η_,s = ± 1 , where we have again used the convention that i labels occupied states, and n labels empty states. The physical meaning of the η_,s sign factors and their appearance in the denominator of the collective mode propagator in Eq. (<ref>) can be understood as follows. The Bethe-Salpeter equation at momentum describes both the collective mode creation operators, which we denote as b_^†, and the collective mode annihilation operators, denoted as b_-. The collective mode propagator therefore contains contributions of the form ∫dτ e^iντ⟨T̂b_-(τ)b^†_-(0)⟩ and ∫dτ e^iντ⟨T̂b^†_(τ)b_(0)⟩, where T̂ is the time-ordering operator. The latter can be rewritten as ∫dτ e^-iντ⟨T̂b_(τ)b^†_(0)⟩, hence the origin of the minus sign in front of iν in some of the collective mode propagators. From this we conclude that η_,s = 1 means that φ^s_ corresponds to a collective mode creation operator, whereas η_,s=-1 means that φ^s_ corresponds to a collective mode annihilation operator. 
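Numerically, the generalized eigenvalue problem above has the standard RPA block structure and can be solved as a non-Hermitian eigenvalue problem once its blocks have been assembled from the interaction and the form factors. The numpy sketch below is schematic and uses the conventional (A, B) notation rather than the band and momentum indices of the equation above; for a state that is a Hartree-Fock minimum the eigenvalues come in real ±ω_s pairs, matching the creation/annihilation structure just discussed.

import numpy as np

def rpa_collective_modes(A, B):
    # A: particle-hole block (Hermitian); B: anomalous block
    n = A.shape[0]
    H = np.block([[A, B], [B.conj(), A.conj()]])
    eta = np.diag([1.0] * n + [-1.0] * n)       # the sign structure eta_{q,s}
    omega, phi = np.linalg.eig(eta @ H)
    order = np.argsort(omega.real)
    return omega[order], phi[:, order]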
The collective mode propagator is guaranteed to have a particle-hole symmetry, which implies that if φ^s_ is a solution to the generalized eigenvalue equation (<ref>) with energy ω_,s, then so is its particle-hole conjugate partner φ^s'_- defined as φ^s'_-,αβ()̨ := 𝒫[ φ^s_,αβ()̨] := φ^s*_,βα(+̨) , where 𝒫 is the (anti-unitary) particle-hole conjugation operator. Note that the particle-hole transformation inverts the momentum , but preserves the energy ω_,s. From Eq. (<ref>) we also see that the particle-hole transformation flips the signs η_,s. When the self-consistent Hartree-Fock state has a time-reversal symmetry 𝒯, the combined action 𝒫𝒯 is a unitary symmetry which preserves momentum but flips the signs η_,s defined in Eq. (<ref>). The 𝒫𝒯 symmetry guarantees that non-zero energies appear in degenerate pairs ω_,s = ω_,s'. Furthermore, the corresponding wavefunctions are related by φ^s'_,αβ()̨ := 𝒫𝒯[φ^s_,αβ()̨] := φ^s_,βα(-+̨) , where we have without loss of generality chosen to work in a gauge where 𝒯|u_α()̨⟩ = |u_α(-)̨⟩. Below, we will focus on time-reversal symmetric self-consistent Hartree-Fock states and make use of this property. At this point we have introduced all the properties of the collective mode propagator in Eq. (<ref>) that we will need for our further analysis. It is now straightforward to obtain the RPA effective interaction by plugging in the collective mode propagator in the diagrams in Fig. <ref> (a). The last four diagrams in this figure can then be interpreted as the effective interaction between electrons which results from the exchange of collective modes. In the following section, we will return to the main topic of this work, which is interacting electron systems with a spontaneously broken continuous symmetry. The collective modes of interest will be the Goldstone modes, and we will use the general formalism discussed in this section to obtain an expression for the scattering vertex between electrons and the Goldstone modes. § INTERACTING ELECTRONS AND GOLDSTONE MODES When the self-consistent Hartree-Fock state spontaneously breaks a continuous symmetry the collective mode spectrum is guaranteed to have gapless branches corresponding to the Goldstone modes. From now on we will exclusively focus on these Goldstone modes, and ignore the other collective modes. In this section we (1) elaborate on the properties of the Goldstone mode wavefunctions, (2) construct the scattering vertex between electrons and Goldstone modes, and (3) synthesize our findings in an effective electron-boson model. §.§ Properties of the Goldstone wavefunctions Assuming that the mean-field state preserves time-reversal symmetry, for every non-zero Goldstone mode energy ω_,s, there are two collective mode wavefunctions φ^s_ which are related by the action of 𝒫𝒯, and which have opposite signs η_,s as defined in Eq. (<ref>). We will use the convention that for every s>0, the Goldstone mode wavefunctions corresponding to ω_,s are denoted as φ^± s_, where the sign is determined by the corresponding value of η_,s. In this work, we are mainly interested in linearly dispersing Goldstone modes (quadratically dispersing Goldstone modes are simpler, and do not require the full power of the formalism that we develop here). So we assume that at small momenta ω_,s = c_s || + 𝒪(^2), where c_s are the velocities of the Goldstone modes. 
For a Goldstone branch corresponding to broken symmetry generator Q_s, we can write down the exact analytic expression for the zero mode at = 0 [Note that as we are only considering linearly dispersing Goldstone modes, the broken symmetry generators Q_s are in one-to-one correspondence with the Goldstone modes <cit.>.]: Q̃_s,αβ()̨ = i⟨ u_α()̨|Q_s|u_β()̨⟩[n_α()̨-n_β()̨] , where the factor i is added to ensure that φ̃^s_=0 is an eigenstate of the particle-hole conjugation operator 𝒫 defined in Eq. (<ref>) with eigenvalue 1. Given that Q_s commutes with h()̨, one can check that Q̃_s is indeed a zero mode of the generalized eigenvalue equation in Eq. (<ref>). Furthermore, due to the factor [n_α()̨-n_β()̨] the wavefunction Q̃_s,αβ()̨ is non-zero only when the symmetry generator Q_s is broken (if it is not broken, then Q_s commutes with the mean-field Hamiltonian and hence is diagonal in the mean-field basis |u_α()̨⟩). The zero mode is also an eigenstate of the 𝒫𝒯 symmetry in Eq. (<ref>) with eigenvalue (-1)^κ+1, where κ encodes whether Q_s is time-reversal even or odd: 𝒯Q_s𝒯^-1 = (-1)^κ Q_s . At non-zero , for every broken generator Q_s there are two collective mode wavefunctions φ̃^± s_ which are mapped to each other by the action of 𝒫𝒯. In order to have a non-singular and smooth → 0 limit, we define the following rescaled collective mode wavefunctions at non-zero : φ̃^± s_ := √(2ω_,saw_s N/c_s)φ_^± s , where a is the lattice constant, and w_s a dimensionless number that will be determined below. The components φ̃^± s_,αβ()̨ are generically order one numbers which for fixed non-zero do not go to zero in the thermodynamic limit. Moreover, in the appendices we show that in the thermodynamic limit φ̃^± s_,αβ()̨ also remains finite as → 0. The two wavefunctions φ̃^± s_,αβ()̨ at non-zero become identical (possibly up to a phase factor) as → 0. This is possible because the Goldstone mode wavefunctions are not orthogonal eigenvectors obtained from a Hermitian eigenvalue problem, but instead are obtained from the generalized eigenvalue equation in Eq. (<ref>). Exactly at =0, there is only one Goldstone wavefunction – which corresponds to the exact zero mode in Eq. (<ref>). This happens because in the presence of zero modes, solutions to the generalized eigenvalue equation (<ref>) do not form a complete basis <cit.>. As a next step, we fix the phase and norm of the rescaled wavefunctions such that lim_→ 0φ̃^± s_,αβ()̨ = Q̃_s,αβ()̨ , where Q̃_s is defined in Eq. (<ref>). As a first consistency check, let us note that it follows from Eq. (<ref>) that the rescaled wavefunctions satisfy 1/N∑_∑̨_i,n(|φ̃^± s_,ni()̨|^2 - |φ̃^± s_,in()̨|^2 ) = ± 2ω_,saw_s/c_s . The exact zero mode Q̃_s defined in Eq. (<ref>) also satisfies this equation with ω_=0,s=0. To satisfy Eq. (<ref>) we have to adjust the phase and norm of the wavefunctions φ̃^± s_. The norm we fix via the dimensionless parameter w_s defined in Eq. (<ref>). The phase can be partially fixed using the symmetries 𝒫 and 𝒫𝒯. In particular, as the zero mode Q̃_s is even under 𝒫 and has eigenvalue (-1)^κ +1 under 𝒫𝒯, we require that 𝒫𝒯[φ̃^± s_] = (-1)^κ +1φ̃^∓ s_ 𝒫[φ̃^± s_] = φ̃^∓ s_- . After this partial gauge fixing, there is a remaining phase freedom over half of the Brillouin zone for one of the two wavefunctions, say φ̃_^+s. We fix this phase by requiring φ̃_^+s to be a continuous function of , and Eq. (<ref>) to hold. 
Note that if there is an additional inversion symmetry ℐ under which → -, then we can fix the remaining phase of φ̃_^+s over half of the Brillouin zone up to a minus sign by requiring that φ̃_^+s is either even or odd under ℐ𝒯 (depending on whether Q̃_s is even or odd under ℐ𝒯). The requirements of continuity and a smooth → 0 limit as in Eq. (<ref>) then uniquely fix the phase. With the gauge choice in Eq. (<ref>), it follows that (φ̃^s_±φ̃^-s_)/2 is an eigenstate of 𝒫𝒯 with eigenvalue ±(-1)^κ + 1. In the limit → 0, the symmetric combination therefore becomes the unique zero mode Q̃_s, which has 𝒫𝒯 eigenvalue (-1)^κ +1. The anti-symmetric combination becomes a zero mode with 𝒫𝒯 eigenvalue -(-1)^κ +1, which must be the zero vector. From this it follows that with the gauge choice in Eq. (<ref>), equation (<ref>) indeed holds. This concludes our discussion of the Goldstone wavefunctions. Below we will make use of the properties of these wavefunctions when we go to the rotating frame in Sec. <ref>. §.§ Electron-Goldstone scattering vertex In Sec. <ref> we defined the effective RPA interaction, which contains the collective mode propagator G_2(,iν) that we obtained by solving the Bethe-Salpeter equation. We also discussed the properties of the solutions to the Bether-Salpeter equation (the details of which are contained in the appendices), and in the previous subsection we focused in particular on solutions that correspond to Goldstone modes. In this section, we come back to the effective RPA interaction. In particular, now that we understand better the properties of G_2(,iν) and the associated collective mode wavefunctions, we will plug G_2(,iν) into the diagrams in Fig. <ref> (a) and study the interaction between electrons which results from the exchange of Goldstone modes. Our starting point is Eq. (<ref>), which is the general form for G_2(,iν) as a solution to the Bethe-Salpeter equation. Taking this expression, and plugging it into the diagrams in Fig. <ref> (a), we see that the effective RPA interaction is given by the bare interaction (the left most diagram in Fig. <ref> (a)), plus an interaction where the electrons emit and absorb a Goldstone mode, which can be denoted by the following single diagram: < g r a p h i c s > In this diagram, the wavy line denotes the boson progagator (iνη_,s-ω_,s)^-1, and the scattering vertex is given by < g r a p h i c s > , where the box represents the Goldstone wavefunction φ^s_,αβ()̨. Written out in equations, the vertex is given by ĝ^s_,αβ()̨ = V()1/N∑_'̨ tr(φ^s_('̨)Λ_('̨) )[ Λ^†_()̨]_αβ - 1/N∑_'V(') [Λ^†_'()̨φ^s_(-̨')Λ_'(-̨) ]_αβ Note that ĝ^s_,αβ()̨ is generically non-zero for all α and β, whereas φ^s_,αβ()̨ is only non-zero if n_α()̨≠ n_β(-̨). The terms in Eq. (<ref>) which define ĝ_^s have appeared previously as part of the generalized eigenvalue equation in Eq. (<ref>). Since the Goldstone wavefunctions φ_^s are obtained as solutions to that equation, we immediately obtain from Eqs. (<ref>) and (<ref>) that ĝ^s_,αβ()̨ = ([n_α()̨-n_β(-̨)]η_,sω_,s - |E_α()̨-E_β(-̨)|)φ^s_,αβ()̨ if n_α()̨≠ n_β(-̨) . For the components ĝ^s_,αβ()̨ with n_α()̨=n_β(-̨) we have no general analytic expression. Using the 𝒫 symmetry in the gauge of Eq. (<ref>), Eq. (<ref>) also makes it explicit that the electron-Goldstone vertex satisfies ĝ^s*_,αβ()̨ = ĝ^-s_-,βα(-̨) , where we have used that η_,s = -η_-,-s and ω_,s=ω_-,-s. This property ensures that the Goldstone-mediated interaction between electrons is Hermitian. 
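In the particle-hole sector the vertex therefore follows from the Goldstone wavefunction by multiplication with simple energy and occupation factors. A minimal sketch at fixed 𝐪 (array conventions are assumptions; the components with equal occupations are not covered by this relation and would have to be computed from the full expression involving the interaction and the form factors):

```python
import numpy as np

def vertex_ph_sector(phi, E, occ, kmq, omega, eta):
    """g^s_{q,ab}(k) = ([n_a(k) - n_b(k-q)] eta omega - |E_a(k) - E_b(k-q)|) phi^s_{q,ab}(k).

    phi[k, a, b]       : Goldstone wavefunction at fixed q
    E[k, a], occ[k, a] : Hartree-Fock energies and occupations
    kmq[k]             : grid index of k - q
    omega, eta         : mode energy and the sign eta_{q,s}
    """
    g = np.zeros_like(phi)
    for k in range(phi.shape[0]):
        dn = occ[k][:, None] - occ[kmq[k]][None, :]          # n_a(k) - n_b(k-q)
        dE = np.abs(E[k][:, None] - E[kmq[k]][None, :])      # |E_a(k) - E_b(k-q)|
        g[k] = (dn * eta * omega - dE) * phi[k]
    return g
```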
§.§ Effective electron-boson model In the previous section we have derived an explicit expression for the interaction between electrons mediated by the exchange of Goldstone modes. The Goldstone modes are collective excitations of the electron fluid, and hence are not described by independent microscopic degrees of freedom. In this section, however, we will write down an effective theory which does contain both independent electronic and bosonic degrees of freedom, and which mimics the behaviour of the coupled electron-Goldstone-boson system. The action for the effective electron-boson model is given by S = S_el + S_V + S_B + S_el-B + S_C , where S_el and S_V respectively denote the quadratic and interacting part of the electronic action that appeared previously in the Hartree-Fock path integral in Eq. (<ref>). The terms S_B, S_el-B, and S_C are not present in the Hartree-Fock action. Of these, the first two describe the Goldstone dynamics and the electron-Goldstone-mode scattering. The Goldstone dynamics is described by S_B = ∫_0^1/Tdτ ∑_s>0∑_b̅_,s(∂_τ + ω_,s)b_,s , where b̅_,s,b_,s are bosonic fields corresponding to the Goldstone creation and annihilation operators. The electron-Goldstone coupling is given by S_el-B = ∫_0^1/Tdτ 1/√(N)∑_,ψ̅_,̨αψ_-̨,β× ∑_s>0 √(c_s/2ω_ aw_s)( g̃^s_,αβ()̨b̅_-,s + g̃^-s_,αβ()̨ b_,s) , where for future use we have defined the vertex functions g̃^s_ using the same rescaling as in Eq. (<ref>), i.e. g̃^s_ = ĝ^s_√(2ω_ aw_s N/c_s). It follows from Eq. (<ref>) that tree-level boson exchange in the effective electron-boson model indeed generates the interaction in Eq. (<ref>). So far we have taken the Hartree-Fock action, and supplemented it with the bosonic degrees of freedom. The theories with and without the bosonic fields are obviously not the same. In particular, the bosonic degrees of freedom are designed to capture the RPA fluctuations which result from the electronic interaction. As a result, working at tree level with the effective action in Eq. (<ref>) (without S_C) is the same as doing the infinite sum of RPA diagrams in the original theory. Once loops are taken into account in the effective theory, however, one has to be careful that the loop diagrams do not correspond to a double counting of RPA diagrams in the original theory. For example, ω_,s is already the complete boson dispersion which is generated by the electron dynamics, and it should therefore not be further renormalized by coupling to the electrons. To ensure this, we have to add a counter term which is quadratic in the boson fields, and which removes all self-energy diagrams for the bosons. Another counter term quartic in the fermion fields has to be added to remove the RPA renormalization of the electron repulsion, as this effect is already contained in the interaction mediated by boson exchange. All these counter terms are contained in S_C. It is not necessary to construct these terms explicitly – all one needs to know is that they are present, and that they eliminate certain diagrams such that working with the effective theory beyond tree level is the same as working with the original theory. §.§ Transforming to the real basis In this final section we bring the effective electron-boson model introduced in the previous section in a more standard form. 
First, we rescale the boson fields as follows b̅_,s,b_,s→√(aw_s/c_s)b̅_,s, √(aw_s/c_s)b_,s , and then define the usual (Fourier transformed) canonically conjugate real fields as ϕ_,s = 1/√(2ω_)(b̅_,s + b_-,s) π_,s = i√(ω_/2)(b̅_,s - b_-,s) , where we adopted the standard notation. In terms of the new fields, the quadratic boson action becomes S_B = ∫_0^1/Tdτ ∑_s>0aw_s/c_s∑_(-iπ_,s∂_τϕ_-,s + 1/2π_,sπ_-,s + 1/2ω_,s^2 ϕ_,sϕ_-,s) , and the electron-boson vertex is given by S_el-B = ∫_0^1/Tdτ 1/√(N)∑_,ψ̅_,̨αψ_-̨,β× ∑_s>0 ( g^s_,αβ()̨ϕ_-,s + f^s_,αβ()̨π_-,s) , with g^s_,αβ()̨ = 1/2(g̃^s_,αβ()̨ + g̃^-s_,αβ()̨) f^s_,αβ()̨ = -i/2ω_,s(g̃^s_,αβ()̨ - g̃^-s_,αβ()̨) . As a final step, we integrate out the π_,s fields. This produces the following quadratic boson action: S_B = 1/2∫dτ ∑_s>0aw_s/c_s∑_( -ϕ_-∂^2_τϕ_ + ω_,s^2 ϕ_,sϕ_-,s) . In the long-wavelength continuum limit, using ω_,s∼ c_s^2^2, we can write this as the standard action for relativistic real scalar fields: S_B = 1/2∫dτ∫d^2∑_s>0χ_s (∂_τϕ_s)^2 + ρ_s (∇ϕ_s)^2 , with χ_s = w_s/ac_s, and ρ_s = w_s c_s/a is the stiffness. After integrating out π_,s, the electron-Goldstone vertex becomes S_el-B = ∫_0^1/Tdτ 1/√(N)∑_,ψ̅_,̨αψ_-̨,β× ∑_s>0 ( g^s_,αβ()̨ϕ_-,s + f^s_,αβ()̨ i∂_τϕ_-,s) , where in the second term π_,s is now replaced with i∂_τϕ_,s. Finally, we also obtain following two-body interaction from integrating out π_,s: S_Vπ =∫dτ∑_s>0-c_s/2aw_s N ∑_,̨'̨,(ψ̅_f̨^s_()̨ψ_-̨)× (ψ̅_'̨ f^s_-('̨)ψ_'̨+) , where we have suppressed the Greek indices, and instead adopted a matrix notation. From Eqs. (<ref>) and (<ref>) it follows that we can equivalently write Eq. (<ref>) as S_Vπ = ∫dτ∑_s>0-c_s/2aw_s N ∑_,̨'̨,(ψ̅_f̨^s_()̨ψ_-̨)× (ψ̅_'̨- f_^s†('̨)ψ_'̨) , which makes it explicit that this is a negative definite interaction. § THE ROTATING FRAME We continue to work with the effective electron-boson model derived in the previous section. Even though this model was shown to exactly reproduce the RPA result, we will argue below that it has an (apparent) unphysical property, which can be removed by working in the so-called `rotating frame'. §.§ The problem: strong interband scattering in the → 0 limit Using Eqs. (<ref>), (<ref>) and (<ref>) we see that the vertices g^s_ and f^s_ which appear in the effective electron-boson model for n_α()̨≠ n_β(-̨) are given by g^s_,αβ()̨ = 1/2( [n_α()̨-n_β(-̨)]ω_,s( φ̃^s_,αβ()̨- φ̃^-s_,αβ()̨) -|E_α()̨-E_β(-̨)|( φ̃^s_,αβ()̨+φ̃^-s_,αβ()̨)) f^s_,αβ()̨ = -i/2ω_,s( [n_α()̨-n_β(-̨)]ω_,s( φ̃^s_,αβ()̨+φ̃^-s_,αβ()̨) -|E_α()̨-E_β(-̨)|( φ̃^s_,αβ()̨-φ̃^-s_,αβ()̨)) , where we have used that η_,s=-η_,-s. From these results, and Eqs. (<ref>) and (<ref>), we obtain an explicit expression for the → 0 limit of g^s_: lim_→ 0 g^s_,αβ()̨ = i[E_α()̨-E_β()̨]⟨ u_α()̨|Q_s|u_β()̨⟩ if n_α()̨≠ n_β()̨ , where we used that [n_α()̨-n_β()̨]×|E_α()̨-E_β()̨| = -|n_α()̨-n_β()̨| × [E_α()̨ - E_β()̨] . Equation (<ref>) shows that = 0 Goldstone modes induce scattering between electrons in different mean-field bands. This is because the mean-field bands have gaps which result from the presence of a non-zero symmetry-breaking order parameter. The way the original, non-interacting bands are split to generate the mean-field bands depends on the orientation of the order parameter. So a =0 Goldstone mode, which corresponds to a global rotation of the order parameter, will induce a mixing of the mean-field bands which enter the Hartree-Fock path integral, as they are defined using an order parameter with a chosen fixed orientation. We note Eq. 
(<ref>) has previously also been obtained in Ref. <cit.>, although the authors of that work did not use the RPA approach adopted here. The exchange of Goldstone modes thus generates an interaction between the mean-field electrons which contains a contribution of the form ∼Δ^2/(χ_s (iν)^2 - ρ_s ^2), where Δ is the gap in the mean-field bands induced by the symmetry breaking. This interaction is singular in the ν,→ 0 limit. Moreover, Δ is of the interaction energy scale. So for strongly interacting electron systems, one finds that the energy scale of the interaction mediated by Goldstone exchange is much larger than the energy scale of the bare density-density interaction. This results in a situation where the mean-field electrons acquire large self-energy corrections, which casts doubt on their existence as well-defined quasi-particles. In the next section we will show that electrons which are defined in a rotating frame are not affected by =0 Goldstone modes, and do not experience any singular interactions. Finally, from Eq. (<ref>) it follows that also lim_→ 0f^s_≠ 0. However, as f^s_ is the vertex which couples the mean-field electrons to ∂_τϕ, this vertex does not induce a singular interaction. §.§ The solution: the rotating frame To remove the singular interaction between the mean-field electrons discussed in the previous section we perform a change of integration variables in the path integral of the electron-boson model. In particular, in the local orbital basis we change the Grassmann fields as ψ_a()̊→∑_b R_ab()̊ψ_b()̊ , with R()̊ = exp(-i∑_s>0ϕ_s()̊Q_s ) . As this is a unitary transformation, the corresponding change of integration variables has a trivial Jacobian. In momentum space, and working in the mean-field basis, the transformation in Eq. (<ref>) can be written as ψ_α()̨→ψ_α()̨ - i/√(N)∑_s,,βϕ_,s Q^s_,αβ()̨ψ_β(-̨) + 𝒪(ϕ^2) , where we have defined Q^s_,αβ()̨ = ⟨ u_α()̨|Q_s|u_β(-̨)⟩ . Performing this change of variables in the kinetic term of the mean-field fermions, ∑_,̨αψ̅_,̨α(∂_τ + E_α()̨)ψ_,̨α, produces terms which can be absorbed in S_el-B. In particular, using Eq. (<ref>) we find that the change of Grassmann variables induces following shift in the vertex functions: g^s_,αβ()̨ → g^s_,αβ()̨ - iQ^s_,αβ()̨[E_α()̨-E_β(-̨)] =: g^Rs_,αβ()̨ f^s_,αβ()̨ → f^s_,αβ()̨ -Q^s_,αβ()̨ =: f^Rs_,αβ()̨ The change of variables also introduces higher-order interaction terms of the form ϕ^n ψ̅ψ with n>1, but we ignore these terms here. From Eq. (<ref>) we see that lim_→ 0 g^Rs_,αβ()̨ = 0 , i.e. the ν, = 0 Goldstone modes decouple from the electrons. This shows that the electrons defined in Eq. (<ref>), which live in a rotating frame that locally follows the order parameter fluctuations, will have much smaller self-energy corrections than the mean-field electrons, and hence will be much closer to the true quasi-particles of the symmetry-broken system. Before concluding this section let us mention the effect of going to the rotating frame on the other terms in the action of the electron-boson model. First, the change of Grassmann variables in Eq. (<ref>) leaves the bare interaction term S_V invariant. However, in general it will change the interaction contained in S_Vπ (<ref>). For some applications it might be important to keep this in mind, but in the remainder of this work we will not use the interactions contained in S_V and S_Vπ anymore. 
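Numerically, the rotating-frame vertices are obtained from the lab-frame ones by subtracting the generator-induced terms, after which the decoupling at 𝐪 = 0 can be verified directly, as is done for the examples below. A sketch under the same assumed array conventions as before (g̃^{±s} are the rescaled vertices at fixed 𝐪, and Qq holds the matrix elements ⟨u_α(𝐤)|Q_s|u_β(𝐤−𝐪)⟩):

```python
import numpy as np

def rotating_frame_vertices(gt_plus, gt_minus, Qq, E, kmq, omega):
    """Return (gR, fR) from the rescaled vertices g~^{+s}, g~^{-s}.

    g  = (g~^{+s} + g~^{-s}) / 2,        f  = -i (g~^{+s} - g~^{-s}) / (2 omega)
    gR = g - i Qq (E_a(k) - E_b(k-q)),   fR = f - Qq
    """
    g = 0.5 * (gt_plus + gt_minus)
    f = (-0.5j / omega) * (gt_plus - gt_minus)
    dE = E[:, :, None] - E[kmq][:, None, :]        # E_a(k) - E_b(k-q)
    return g - 1j * Qq * dE, f - Qq

# Diagnostic used in the examples below: max |gR| should tend to zero as q -> 0, whereas
# the lab-frame g stays of the order of the interaction scale through the inter-band
# matrix elements of Q_s.
```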
§.§ Implications for electron spectral functions In this section we investigate the consequences of the fact that quasi-particles are defined in a rotating frame for the correlation functions of microscopic fermions. In particular, we are interested in the two-point functions G_ab(τ,-̊'̊) = -⟨T̂ c_,̊a(τ)c^†_'̊,b(0)⟩_β = -1/Z∫_ψ̅,ψ,ϕψ_,̊aψ̅_'̊,b e^-S[ψ̅,ψ,ϕ] , where T̂ is again the time-ordering operator, ⟨·⟩_β is the thermal average, and ∫_ψ̅,ψ,ϕ a path integral for the fields ψ̅,ψ and ϕ. Note that the fermion fields in Eq. (<ref>) are in the original, unrotated frame. Performing the change of integration variables as in Eq. (<ref>), we obtain G_ab(τ,)̊ = -∑_c,d⟨ R^*_ca(τ,)̊ψ_c,(τ) R_db(0,0)ψ̅_d,0(0) ⟩_β , where ψ̅,ψ now live in the rotated frame, and hence describe the quasi-particles of the broken-symmetry state. As a first approximation, we ignore the interaction between the electrons and the Goldstone modes, such that the correlation function of the microscopic electrons factorizes as G_ab(τ,)̊ = -∑_c,d⟨ R^*_ca(τ,)̊R_db(0,0)⟩ ⟨ψ_c,(τ) ψ̅_d(0,0) ⟩_β . By working with this approximation we make a physical assumption that corrections resulting from electron-Goldstone mode interactions will be subleading compared to the above `zeroth order' contribution. This assumption is partly justified by the fact that the fermions in the rotating frame decouple from the Goldstone modes at long wavelengths. Going to frequency and momentum space we then obtain G_ab(iω,)̨ = -T/N∑_iν, ∑_cdα D^R_ab,cd(iν,) ×u^c_α(-̨)u^d*_α(-̨)/i(ω-ν)-E_-̨,α , where D^R_ab,cd(iν,)=-⟨ R^*_ca(iν,)R_db(iν,)⟩. We thus find that the fermion correlation function in frequency-momentum space is a convolution of the fermionic quasi-particle propagator with the propagator of the Goldstone modes. Note that an expression very similar to Eq. (<ref>) is obtained for the electron Green's function in theories where the electron is assumed to fractionalize in a spinon and a holon <cit.>. Up to now our effective electron-boson model describes a collection of independent generalized `spin-waves', and does not take topological order parameter configurations, such as e.g. vortices or skyrmions, or interactions between Goldstone modes into account. To remedy this, one can improve the model to correctly capture the compact nature of the order parameter manifold by rewriting the Goldstone action as a non-linear sigma model. We will do this explicitly in the next section for the anti-ferromagnetic ground state of the Hubbard model. With the non-linear sigma model description of the Goldstone modes, it is possible that thermal fluctuations destroy the long-range order at a much lower temperature than the mean-field transition temperature at which the symmetry-breaking gap disappears from the Hartree-Fock band spectrum. This is especially relevant in 2D, where the Hohenberg-Mermin-Wagner theorem states that long-range order from continuous symmetry breaking disappears at any non-zero temperature. In the resulting thermally disordered phase, the propagator D^R_ab,cd(iν,) is symmetric, i.e. D^R_ab,cd(iν,) ∝δ_ab, and acquires a mass. From Eq. (<ref>), we see that in that case the electron Green's function G_ab(iω,)̨ also becomes symmetric in the thermally disordered phase, even though the order-parameter-induced gap can still be present in the mean-field spectrum of the fermions in the rotating frame. This effect therefore naturally leads to `pseudo-gap' physics in thermally disordered broken-symmetry states. 
Below we illustrate this explicitly in the two example sections. § EXAMPLE I: ANTI-FERROMAGNETISM IN THE HUBBARD MODEL In this first example section we apply the general formalism introduced above to the anti-ferromagnetic ground state of the square-lattice Hubbard model. §.§ Mean-field state and Goldstone mode energies We are interested in the Hubbard model on the square lattice, which is defined by the following Hamiltonian: H = -t ∑_⟨ ij⟩∑_s c^†_i,s c_j,s -t'∑_⟨⟨ ij⟩⟩ c^†_i,s c_j,s + h.c. + U∑_i n_i,↑ n_i,↓ , where the first sum is over nearest neighbors, and the second sum over next nearest neighbors. In the interaction term, n_i,s = c^†_i,sc_i,s is the density of electrons with spin s at site i. We choose units of energy such that t≡ 1, and take t' = -0.35. For our purposes, it suffices to only consider half filling, where it is well-known that the ground state is an insulating anti-ferromagnet for sufficiently large U. An anti-ferromagnet (AFM) breaks translation symmetry, so at first sight the general formalism introduced above, which explicitly assumes translation invariance, does not seem to apply. However, an AFM is invariant under the combined action of translating by one lattice site and flipping all the spins. Let's call this modified translation symmetry T_x/y'. If we consider the AFM state with periodic boundary conditions, i.e. on a torus of size L_x× L_y (note that both L_x and L_y have to be even), then we see that T_x^'L_x = T_y^'L_y = 1. This shows that the eigenvalues of T'_x (T'_y) are phases e^ik̃_x (e^ik̃_y), with k̃_x = 2π n/L_x (k̃_y = 2π n/L_y), just as for the conventional translation operators. Because the AFM is invariant under T'_x/y, the correlation functions satisfy ⟨ c^†_c_'⟩∝δ_,', i.e. the single-particle density matrix is diagonal in . So by working in the basis where T'_x/y is diagonal, we can treat the AFM in the same way as conventional translationally invariant states, by making use of the conserved pseudo-momentum . Let us assume without loss of generality that the AFM order is in the XY plane. T'_x/y can then be defined as a translation followed by a π rotation along the z axis: T'_ = e^i·̊ s^z/2 T_ , where =̊ (n,m) with n,m∈ℤ describes a general lattice vector, = (π,π), and s^i are the Pauli matrices acting on the spin indices. In this case, the connection between the pseudo-momentum and the crystal momentum $̨ is given by: c^†_,↑ = c^†_-̨/2,↑ c^†_,↓ = c^†_+̨/2,↓ The self-consistent Hartree-Fock mean-field Hamiltonian is diagonal in the pseudo-momentum, and, assuming AFM order along thex-direction, can be written as H_HF = ∑_ c^†_(ε_,↑ Δ_ Δ_ ε_,↓) c_ , wherec^†_ = (c^†_,↑, c^†_,↓). We have numerically solved the Hartree-Fock self-consistency equations on a34×34pseudo-momentum grid usingU = 5, and we find an AFM order parameterΔ_ = Δ≈1.93. We next obtain the Goldstone mode energies and wavefunctions by numerically solving the generalized eigenvalue equation in Eq. (<ref>), where each momentum label$̨ is to be replaced with a pseudo-momentum label . The numerically obtained energy for the lowest-energy collective mode (i.e. the Goldstone mode) is shown in Fig. <ref> over the entire pseudo-momentum Brillouin zone, and along two cuts q̃_y =0 and q̃_y =π. As expected, we find two linearly dispersing Goldstone modes, one at pseudo-momentum =(0,0), and one at =(π,π). 
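The mean-field input quoted above (an order parameter Δ ≈ 1.93 at U = 5 on a 34 × 34 grid) can be reproduced with a short self-consistency loop. The sketch below is a restricted calculation in the pseudo-momentum basis, with the uniform Hartree shift dropped since at fixed filling it only moves the chemical potential; it is meant as an illustration of the setup, not the full unrestricted computation.

```python
import numpy as np

# Restricted Hartree-Fock for the AFM gap of the half-filled square-lattice Hubbard model,
# in the pseudo-momentum basis used above (order along x, momentum-independent Delta).

t, tp, U, L = 1.0, -0.35, 5.0, 34
Qx = Qy = np.pi

k = 2 * np.pi * np.arange(L) / L
KX, KY = np.meshgrid(k, k, indexing="ij")

def eps(kx, ky):
    return -2 * t * (np.cos(kx) + np.cos(ky)) - 4 * tp * np.cos(kx) * np.cos(ky)

e_up = eps(KX - Qx / 2, KY - Qy / 2).ravel()     # spin-up branch at pseudo-momentum k~
e_dn = eps(KX + Qx / 2, KY + Qy / 2).ravel()     # spin-down branch

delta = 1.0
for _ in range(500):
    h = np.zeros((e_up.size, 2, 2))
    h[:, 0, 0], h[:, 1, 1] = e_up, e_dn
    h[:, 0, 1] = h[:, 1, 0] = delta
    _, v = np.linalg.eigh(h)                     # v[:, :, 0] is the lower band
    m = np.mean(v[:, 0, 0] * v[:, 1, 0])         # <c^dag_up c_dn>, lower band occupied
    delta_new = -U * m                           # gap equation
    if abs(delta_new - delta) < 1e-10:
        break
    delta = 0.5 * (delta + delta_new)            # simple linear mixing

print(delta)   # should come out close to the value ~1.93 quoted in the text
```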
As the AFM order is along the x-direction, we can understand this by noting that the two broken-symmetry generators s^z and s^y at crystal momentum = (0,0) respectively have pseudo-momentum =(0,0) and =(π,π). From a fit to the linear part of the dispersion relation ω_, we find a Goldstone velocity c ≈ 0.988. §.§ Electron-Goldstone scattering vertices By solving the generalized eigenvalue equation (<ref>), we also obtain the collective mode wavefunctions φ^±_,αβ(), which can be used to construct the electron-Golstone scattering vertices g_,αβ() and f_,αβ() as explained in Sec. <ref>. Recall that α and β label the mean-field bands, i.e. the eigenstates of H_HF defined in Eq. (<ref>), and not the spin indices. To go to the rotating frame, we use R()̊ = exp(-i[ϕ_z()̊s^z + ϕ_y()̊(-1)^· s^y ] ) , where ϕ_z()̊ and ϕ_y()̊ contain pseudo-momenta which lie in the magnetic Brillouin zone, i.e. the Brillouin zone with reciprocal vectors (π,±π). It thus follows that Q_,αβ(), as defined in Eq. (<ref>), is given by Q_,αβ() = ⟨ u_α()|s^z|u_β(-)⟩ , if lies in the first magnetic Brillouin zone, and Q_,αβ() = ⟨ u_α()|s^y|u_β(-)⟩ , if lies in the complement of the first magnetic Brillouin zone in the full pseudo-momentum Brillouin zone. From Eq. (<ref>), it is clear that the mean-field states satisfy |u_α(+)⟩ = ± s^x |u_α()⟩ (assuming that we work in a gauge where the mean-field states are real). From this it follows that Q_ is periodic under shifts by and multiplication by i, up to gauge-dependent factors ± 1. The rotating frame is introduced to ensure that the scattering vertex g^R_,αβ(), defined in Eq. (<ref>), is zero as → 0,. As explained in Sec. <ref>, this requires both a rescaling of the collective mode wavefunctions by a factor w^1/2, and a gauge fixing procedure. The gauge fixing procedure is simplified by choosing AFM order along the x-direction, such that the collective mode wavefunctions φ_ can initially be taken to be real. We then fix the gauge of the φ_ by multiplication with ± i when lies in the first magnetic Brillouin zone, and ± 1 otherwise, such that the phase of an arbitrary off-diagonal element in φ_,αβ(0) agrees with the phase of the corresponding off-diagonal element in iQ_,αβ(0)[n_α(0) - n_β()]. After the gauge fixing, we find that lim_→ 0,g^R_ = 0 if we take w ≈ 0.922. To illustrate our numerical results for the electron-Goldstone scattering vertices, we define ⟨ g_,D⟩^2 = 1/L_xL_y∑_ |g_,00()|^2 + |g_,11()|^2 , ⟨ g_,O⟩^2 = 1/L_xL_y∑_ |g_,01()|^2 + |g_,10()|^2 , as a pseudo-momentum averaged version of respectively the intra-band (diagonal) and inter-band (off-diagonal) scattering vertices in the mean-field basis. Similarly, we define the averaged scattering vertices in the rotating frame as ⟨ g^R_,D⟩^2 = 1/L_xL_y∑_ |g^R_,00()|^2 + |g^R_,11()|^2 , ⟨ g^R_,O⟩^2 = 1/L_xL_y∑_ |g^R_,01()|^2 + |g^R_,10()|^2 . In Fig. <ref> (a) and (b) we show the averaged scattering vertices in the mean-field basis. As expected, the intra-band scattering vertex ⟨ g_,D⟩ goes to zero as → 0,, whereas the inter-band vertex ⟨ g_,O⟩ remains non-zero. We see that ⟨ g_=0,O⟩≈ U = 5, which illustrates the strong inter-band scattering for mean-field electrons induced by =0, Goldstone modes. Fig. <ref> (c-d) shows the averaged scattering vertices in the rotating frame. From Fig. <ref> (d) we see that the inter-band scattering in the rotating frame indeed goes to zero as → 0,, and is now maximal at the magnetic Brillouin zone boundary. 
We also see that the intra-band scattering vertex ⟨ g^R_⟩ is slightly enhanced in the rotating frame, and is almost identical (up to an overall scale factor) to the Goldstone dispersion ω_ shown in Fig. <ref>. We can also similarly define the averaged intra/inter-band scattering vertices ⟨ f_,D/O⟩ and ⟨ f^R_,D/O⟩ in the mean-field and rotating frame respectively. These quantities are shown in Fig. <ref>. Both the intra- and inter-band scattering vertices become smaller in the rotating frame, are maximal near the magnetic Brillouin zone boundary, but remain non-zero and order one as → 0,. §.§ Electron spectral function In this last section, we go back to the original crystal momentum basis, and again assume that the AFM order is along the x-direction. The free fermion part of the effective action can then be written as S_ψ = ∫_0^βdτ ∑_∑̨_s ψ̅_,̨s(∂_τ + ε_)̨ψ_,̨s + Δ (ψ̅_,̨↑ψ_+̨,↓ + ψ̅_+̨,↓ψ_,̨↑) . We will assume that these fermion fields are already in the rotated frame, such that their scattering with the Goldstone modes vanishes as → 0. The components of the fermion Green's function in the rotated frame which are diagonal in the spin-z indices are given by [G^R_↑↑(iω,)̨]^-1 = [G^R_↓↓(iω,)̨]^-1 = iω - ε_-̨Δ^2/iω - ε_+̨ . In experiments one obviously does not measure the electrons in the rotated frame. One measures the original physical fermions in the unrotated frame, which are given by R^†()̊ψ̅_, with R()̊ = exp(i[φ_y()̊s^y + φ_z()̊s^z]/2) , where φ_y and φ_z are the Goldstone boson fields. To encode the compactness of the Goldstone modes we need to write the quadratic Goldstone mode action in terms of a general SU(2) matrix R()̊ (and not in terms of φ_y and φ_z). For this, it is easiest to start with the continuum version of the Goldstone mode action S_G = 1/2∫dτ∫d^2∑̊_n=y,zχ(∂_τφ_n)^2+ ρ (∇φ_n)^2 . As a next step, we note that R^† (-i∂_μ) R = 1/2∑_n=y,z∂_μφ_n s^n + 𝒪(φ^2) As a result, the following action agrees with Eq. (<ref>) to lowest order in φ: S_G = 1/2∫_τ,∑_n=y,zχ [tr(R^† i ∂_τ R s^n)]^2 + ρ [tr(R^† i ∇ R s^n)]^2 This is the improved action which correctly incorporates the compactness of the order parameter manifold. Note that this action is invariant under R()̊→ R()̊e^iθ()̊s^z , with θ()̊ an arbitrary function. This is the U(1) gauge invariance which is familiar from the CP^1 formulation of the AFM Goldstone action <cit.>. The Green's function of the physical fermions is given by G_s_1s'_1(τ,)̊ = -⟨ R^*_s_2s_1(τ,)̊ψ_s_2,(τ)R_s'_2s'_1(0,0) ψ̅_s'_2(0,0) ⟩_β , with repeated indices summed over. As a first approximation, we ignore the interactions between the electrons and the Goldstone modes, in which case the Green's function factorizes: G_s_1s'_1(τ,)̊≈ -⟨ R^*_s_2s_1(τ,)̊R_s'_2s'_1(0,0) ⟩⟨ψ_s_2,(τ)ψ̅_s'_2(0,0) ⟩_β . This is expected to be a reasonable approximation as the fermions in the rotating frame decouple from the low-energy Goldstone modes. An approximate expression for the Goldstone propagator can be obtained from a large-N analysis of the CP^1 model <cit.>, which leads to the following result <cit.>: ⟨ R^*_s_2,s_1(τ,)̊ R_s'_2,s'_1(0,0) ⟩ = - D(τ,)̊δ_s_1s'_1δ_s_2,s'_2 , where D(τ,)̊ is the Fourier transform of D(iν,) = χ^-1/(iν)^2 - c^2^2 - m^2 . Here m^2 is a small mass for the Goldstone modes generated by thermal fluctuations, which restore the symmetry in 2 spatial dimensions due to the Hohenberg-Mermin-Wagner theorem. 
Putting everything together, we find that the fermion Green's function is given by G_ss'(iω,)̨ = -δ_ss'T∑_iν1/L_xL_y× ∑_χ^-1/(iν)^2 - ω_^2 - m^2 2G^R_↑↑(iω-iν,-̨) This result has previously also been obtained by Borejsza and Dupuis <cit.> starting from the rotating-frame mean-field approach motivated by Schultz <cit.>. To simplify Eq. (<ref>), we first rewrite G^R_↑↑ as G^R_↑↑(iω,)̨ = iω - ε_+̨/(iω - E_^̨+)(iω - E_^̨-) = |u_+()̨|^2/iω -E_^+ + |u_-()̨|^2/iω -E_^- , where E_^̨± = 1/2(ε_+̨ε_+̨±√((ε_-̨ε_+̨)^2 + 4Δ^2)) , |u_±()̨|^2 = 1/2(1 ±ε_-ε_+̨/√((ε_-̨ε_+̨)^2 + 4 Δ^2)) . It is now straightforward to perform the summation over iν in Eq. (<ref>), which gives G_ss'(iω,)̨ = δ_ss'/L_xL_y∑_∑_σ=±|u_σ(-̨)|^2 /χω̃_× (n(ω̃_) + f(-E^σ_-̨)/iω - E^σ_-̨-ω̃_ + n(ω̃_) + f(E^σ_-̨)/iω - E^σ_-̨+ω̃_) , where n(ω) and f(E) are respectively the Bose-Einstein and Fermi-Dirac distributions, and ω̃_ = √(c^2^2 + m^2). The spectral function, defined for spin-rotation invariant systems as, 𝒜(ω,)̨ = 2/πIm G_↑↑(ω - iϵ,)̨ is now easily obtained <cit.>: 𝒜 (ω,)̨ = 2/L_xL_y∑_∑_σ=±|u_σ(-̨)|^2 /χω̃_× ϵ/π(n(ω̃_) + f(-E^σ_-̨)/(ω - E^σ_-̨-ω̃_)^2 + ϵ^2 + n(ω̃_) + f(E^σ_-̨)/(ω - E^σ_-̨+ω̃_)^2 + ϵ^2) . In Fig. <ref> (a) we show the numerically obtained spectral weight 𝒜(ω,)̨ at frequency ω = -1.73. To obtain this result, we have kept a small non-zero ϵ = T = 0.05 to smear out the numerical results obtained on a finite-size discrete momentum grid. We see that at this negative frequency, the spectral weight is located along contours demarcating the boundaries of `hole pockets' centered at (±π/2,±π/2). The part of the contour oriented towards the center of the Brillouin zone is brighter than the backside facing (π,π) as a result of the coherence factors |u_σ()̨|^2 in Eq. (<ref>). Fluctuations will cause the AFM order to decrease. To illustrate the effect of a reduced order in the rotating frame on the spectral function, we show the spectral weight at ω = -0.53 obtained using a smaller Δ = 0.5 in Fig. <ref> (b). Note that both in Fig. <ref> (a) and (b), the area inside the `hole pockets' is ∼ 10 % of the total Brillouin zone area. From Fig. <ref> (b) we see that reducing Δ elongates the contours of high spectral weight in the directions toward (0,±π) and (±π,0). Also the spectral weight on the backside of the contours is further reduced. Before concluding we want to reiterate that the spectral weights shown in Figs. <ref> (a-b) are those of a thermally disordered state, with no long-range AFM order. Nevertheless, the spectral weight retains important features of the T=0 AFM state, and is strikingly different from the spectral weight of the conventional Fermi liquid at small U. § EXAMPLE II: SPIN-SPIRAL ORDER IN THE THREE-BAND MODEL Having discussed the anti-ferromagnetic Mott insulator in the Hubbard model in the previous section, we now turn to the slightly more involved example of spin spiral order in the hole-doped three-band Hubbard model <cit.>. §.§ Mean-field state and Goldstone mode energies The three-band model is described by the following Hamiltonian: H = ∑_iϵ_d c^†_1ic_1i + ϵ_p( c^†_2ic_2i + c^†_3ic_3i) + ∑_⟨ ij ⟩ t^ij_pd( c^†_1ic_2j + c^†_1ic_3j + h.c.) + ∑_⟨ ij ⟩ t^ij_pp( c^†_2ic_3j + h.c.) + ∑_i U_d n_1i↑n_1i↓ + U_p( n_2i↑n_2i↓ + n_3i↑n_3i↓) , where a=1,2,3 respectively refers to the Copper d_x^2-y^2, Oxygen 2p_x and 2p_y orbitals, s denotes the spin, and i,j label the unit cells. Note that following most of the literature on this model, we have formulated the Hamiltonian in terms of the hole degrees of freedom. 
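Returning briefly to the single-band example, the Lorentzian-broadened expression above can be evaluated directly on a momentum grid. In the sketch below the chemical potential (placed mid-gap), the Goldstone thermal mass m and the susceptibility χ are illustrative assumptions; Δ, t′, the Goldstone velocity, the grid size and the broadening ϵ = T = 0.05 follow the values quoted in the text.

```python
import numpy as np

# Spectral function of the thermally disordered AFM, evaluated from the closed-form
# expression of the single-band example.  mu (mid-gap), m and chi are assumptions.

t, tp, Delta = 1.0, -0.35, 1.93
L, T, brd = 34, 0.05, 0.05             # momentum grid, temperature, Lorentzian broadening
c, m, chi, mu = 0.988, 0.1, 1.0, 0.0   # Goldstone velocity, thermal mass, susceptibility, mu
Qafm = np.pi

k1 = 2 * np.pi * np.arange(L) / L
KX, KY = np.meshgrid(k1, k1, indexing="ij")

def eps(kx, ky):
    return -2 * t * (np.cos(kx) + np.cos(ky)) - 4 * tp * np.cos(kx) * np.cos(ky)

def bands(kx, ky):
    """Quasiparticle energies E^{+/-} (measured from mu) and coherence factor |u_+|^2."""
    e1, e2 = eps(kx, ky), eps(kx + Qafm, ky + Qafm)
    r = np.sqrt((e1 - e2) ** 2 + 4 * Delta ** 2)
    return 0.5 * (e1 + e2 + r) - mu, 0.5 * (e1 + e2 - r) - mu, 0.5 * (1 + (e1 - e2) / r)

nB = lambda w: 1.0 / np.expm1(w / T)                  # Bose-Einstein
nF = lambda E: 0.5 * (1.0 - np.tanh(0.5 * E / T))     # Fermi-Dirac (overflow-safe form)

qw = (k1 + np.pi) % (2 * np.pi) - np.pi               # wrap q into (-pi, pi]
QX, QY = np.meshgrid(qw, qw, indexing="ij")
wq = np.sqrt(c ** 2 * (QX ** 2 + QY ** 2) + m ** 2)   # Goldstone energy with thermal mass

def A(omega, kx, ky):
    Ep, Em, up2 = bands(kx - KX, ky - KY)             # bands at k - q on the whole q grid
    out = 0.0
    for E, w2 in ((Ep, up2), (Em, 1.0 - up2)):
        out += np.sum(w2 / (chi * wq) * (
            (nB(wq) + nF(-E)) * (brd / np.pi) / ((omega - E - wq) ** 2 + brd ** 2)
          + (nB(wq) + nF(E)) * (brd / np.pi) / ((omega - E + wq) ** 2 + brd ** 2)))
    return 2.0 * out / L ** 2

# Mapping A(-1.73, kx, ky) over the Brillouin zone should give contours of high spectral
# weight around (+/- pi/2, +/- pi/2), similar to the hole-pocket-like features discussed above.
```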
The first two lines in Eq. (<ref>) represent a potential energy difference for the Copper and Oxygen orbitals, and nearest and next-nearest neighbour hopping (in these terms, the spin summation is implicit). Note that the signs of the hopping parameters t^ij_pd and t^ij_pp depend on the relative orientation of the sites i and j. See Fig. <ref> for a graphical representation of the different hopping processes in the three-band model, and how the corresponding signs depend on the orientation. The last line in Eq. (<ref>) contains the on-site Hubbard interactions for electrons in the Copper and Oxygen orbitals. An important parameter in the three-band model is the charge-transfer parameter δ = ϵ_p-ϵ_d > 0. Previous works <cit.> have investigated the groundstate properties of the three-band model for a range of values of δ using different numerical methods, including Hartree Fock, and identified a parameter regime with spin-spiral order. To make contact with the results of <cit.>, we choose the following values for the parameters of the three-band model: U_d=8.4,U_p=2.0,ϵ_d = -7.6, ϵ_p=-6.1, |t^ij_pd|=1.2, |t^ij_pp|=0.7. Note that for these parameter values the charge-transfer parameter is δ = 1.5. At half filling, these parameters lead to an insulating AFM ground state, with orbital hole densities n_1 ≈ 0.46,n_2 = n_3 ≈0.27. Upon hole doping, the anti-ferromagnet becomes a circular spin spiral with wave vector = (π,π-2πη) or = (π-2πη,π), where η is the incommensurability of the spiral. In this work we investigate the ground state at a hole doping of 1/8. Let us assume without loss of generality that the spiral order is in the XY plane. In this case, the spin expectation value is given by ⟨𝐒_i⟩ = Δ( cos(_sp·𝐫_i)𝐱̂+sin(·𝐫_i)ŷ) , where 𝐒_i = 1/2∑_a,s,s' c^†_j,s,aσ_ss' c_j,s',a. Similar to the AFM, the circular spiral order breaks both SU(2) spin symmetry and translation symmetry, but is invariant under the modified translation symmetry T'_ = e^i·̊ s^z/2 T_ , which allows us to define a conserved pseudo-momentum 𝐤̃ as in Eq.(<ref>). The self-consistent Hartree-Fock mean-field Hamiltonian can then be written as H_HF = ∑_ c^†_(ε_,↑ Δ_ Δ_ ε_,↓) c_ , where c^†_ = (c^†_1,,↑,c^†_2,,↑,c^†_3,,↑,c^†_1,↓,c^†_2,↓,c^†_3,↓) and Δ_ = diag(Δ_1,,Δ_2,,Δ_3,). Here, ε_ are not the bare band energies since the contribution from the Hartree term is not just a simple overall energy shift, but rather, each orbital gets shifted by a different amount, corresponding to the orbital's hole density. To solve the HF self-consistency equations, we have applied the following procedure. First, we used restricted HF with conserved pseudo-momentum, and minimized the energy for different values of . Even though the optimal varies slightly with system size, it was consistently found to be ≈(π,0.80π) (assuming spiral order along the y-direction). As a next step we used the optimal solution of the restricted HF as an initial seed for completely unrestricted HF, which did not assume any translational symmetry. The converged unrestricted HF yielded ground states with the same spiral order as the restricted HF, but with a small δ n /n ∼ 10^-5-10^-3 charge density modulation, which may be a result of the incommensurability of the exact optimal with the finite system size. The difference in energy between the spiral states with and without charge density modulation was found to decrease with system size. Our results are thus consistent with a circular spin spiral state with = (π,π - 2πη) and η≈ 0.10, in agreement with Refs. <cit.>. 
The spin hybridization parameters in the mean-field Hamiltonian were found to be (Δ_1,,Δ_2,,Δ_3,) = (Δ_1,Δ_2,Δ_3 )≈(1.31,0.00,0.013 ). The spiral order has three broken symmetry generators, and three Goldstone modes <cit.>. Two of the modes are associated with rotating the plane of the spiral order (out-of-plane modes), and one mode corresponds to in-plane rotations. For a XY spiral, the out-of-plane modes are related to the broken s^x,s^y generators while the in-plane mode is related to the broken s^z generator. We can understand the location of the Goldstone modes in the pseudo-momentum frame by relating them to the original frame, where they are all at =(0,0). The in-plane mode remains at zero, =(0,0), while the two out-of-plane modes go to = ±. We obtain the collective mode spectrum by numerically solving Eq.(<ref>) in the pseudo-momentum basis. This was first done over the entire pseudo-momentum Brillouin zone on a 24×24 grid, and subsequently on a larger 40×40 grid near the locations of the Goldstone modes. The energies for the low-energy collective modes are shown in Fig.<ref>, near =(0,0) and =±. We see that the Goldstone modes near ±, which correspond to the out-of-plane fluctuations, lie outside the particle-hole continuum and hence will not be damped. The Goldstone mode near , on the other hand, does lie in the particle-hole continuum, and hence will be Landau damped. These observations agree with Refs. <cit.>. The velocities of the out-of-plane Goldstone modes are anisotropic, and from a linear fit we find c_x≈ 0.32 and c_y≈ 0.27. §.§ Electron spectral function Working in the original crystal momentum basis, and taking the spiral order to be in the XY plane, the fermion action is given by form S_ψ = ∫_0^βdτ∑_∑̨_s,a,bψ̅_,̨s,a(δ_ab∂_τ + ε_,̨ab) ψ_,̨s,b + ∑_aΔ_a (ψ̅_,̨↑,aψ_+̨,↓,b + ψ̅_+̨,↓,bψ_,̨↑,a) . As before, we assume that these fermion fields are already in the rotated frame. In a basis (ψ_,̨↑,ψ_+̨,↓), suppressing the orbital indices, the rotated-frame Green's function is given by [G^R(iω,)̨]^-1 = [ iω1 - ε_ -Δ; -Δ iω1 - ε_+̨; ] , where Δ,ε_ are 3×3 matrices with orbital indices, with values as described in the previous section. The spin-diagonal part of the Green's function can be written as G^Rab_ss(iω,)̨ = ∑_αu^as_α()̨u^bs *_α()̨/iω-E_α()̨ , which is the equivalent of Eq. (<ref>) in the one-band Hubbard model case. Note that for the spiral order, it no longer holds that G^R_↑↑(iω,)̨ = G^R_↓↓(iω,)̨. The microscopic fermions R^†()̊ψ̅_ are obtained by acting with R()̊ = exp(i[φ_y()̊1⊗ s^y + φ_z()̊1⊗ s^z]/2) = exp(i[φ_x()̊s̅^x+φ_y()̊s̅^y + φ_z()̊s̅^z]/2) , on the fermions in the rotating frame. Here we have defined s̅^i = 1⊗ s^i are the correct generators which acts trivially on the orbital indices. The quadratic Goldstone action is then given by S_G = 1/2∫dτ ∫d^2∑̊_n=x,y ( χ^⊥(∂_τφ_n)^2+ ∑_i=x,yρ^⊥_i (∂_i φ_n)^2 ) + χ^(∂_τφ_z)^2+ ∑_i=x,yρ^_i (∂_iφ_z)^2 , where the out-of-plane and in-plane parameters are denoted by ⊥ and . Note that terms involving ∂_iφ∂_jφ with i≠ j are forbidden by the reflection symmetry of the spiral state. By again using Eq. (<ref>), we obtain following non-linear sigma action for the Goldstone modes: S_G = 1/2∫_τ,∑_n=x,yχ^⊥[ tr( R^† i∂_τ R s^n]^2 + ρ_j^⊥[ tr( R^† i∂_jR s^n]^2 + χ^[ tr( R^† i∂_τ R s^z]^2 + ρ_j^[ tr( R^† i∂_jR s^z]^2 , where the sum over j=x,y is implicit. 
A large-N analysis of the non-linear sigma model <cit.>, leads to the following result for the Goldstone propagator: ⟨ R^c,a *_s_2,s_1(τ,)̊ R^d,b_s'_2,s'_1(0,0) ⟩ = - D(τ,)̊δ_s_1s'_1δ_s_2s'_2δ_adδ_cb , where D(τ,)̊ is the Fourier transform of D(iν,) = (χ^⊥)^-1/(iν)^2 - ω̃_^2 . Here, ω̃_= √(ρ^⊥_αq_α^2/χ^⊥ + m^2) is the Goldstone mode dispersion, including the exponentially small (in T) spin-gap mass generated by the thermal fluctuations at non-zero temperature, as determined via the saddle point equations. The velocities c_α = √(ρ^⊥_α/χ^⊥) are obtained from the collective mode spectrum. Finally, we express the fermion Green's function as G^ab_s_1s'_1(τ,)̊ = -⟨ R^ca *_s_2s_1(τ,)̊ψ^c_s_2(τ,)̊ψ̅^d_s_2'(0,0) R^db_s'_2s'_1(0,0) ⟩_β ≈ -⟨ R^ca *_s_2s_1(τ,)̊ R^db_s'_2s'_1(0,0) ⟩ G^R cd_ s_2 s_2' (τ,)̊ where we have again ignored the interaction between the Goldstone modes and the electrons in the rotating frame, so that the Green's function factorizes. Going to momentum space, and using Eqs. (<ref>) and (<ref>), we find that the fermion Green's function is given by G^ab_ss'(iω,)̨ = -δ_ss' T ∑_iν1/N_x N_y× ∑_(χ^⊥)^-1/(iν)^2 - ω̃_^2 ∑_σ,αu^a,σ_α(-̨)u^b,σ *_α(-̨)/i(ω-ν)-E_-̨,α , from which we obtain the spectral weight. In Fig <ref>(a) we show the Fermi surfaces of the spiral state mean-field band structure in the pseudo-momentum Brillouin zone. The band spectrum clearly breaks C_4 symmetry, in contrast to the AFM case, but retains a reflection symmetry about x. The spectral weight of the physical electrons involves a spin sum, as seen in Eq. (<ref>), which adds two copies of the Fermi levels of Fig <ref>(a), with a relative shift of and the associated coherence factors. This restores the reflection symmetry about y, but not the C_4 symmetry. To illustrate this, we have calculated the spectral weight of the following fermions: f^†_(θ) = cosθ c^†_1, + sinθ/2( c^†_2, - c^†_3, - c^†_2,-̊x + c^†_3,-̊y) , where the combination of orbitals in this operator is chosen to obey all the point group symmetries of the three-band model. We have optimized the spectral weight at the Fermi energy as a function of θ, and found that the maximal spectral weight occurs at θ = θ^* ≈ 0.27π. In Fig. <ref>(b) we plot the spectral weight of f^†_(̊θ^*), which is indeed reflection symmetric, but breaks C_4. Similar to the AFM case discussed above, the highest spectral weight is along four contours with a suppressed backside due to the coherence factors |u^a_α()̨|^2. However, compared to the AFM, the centers of the high spectral-weight contours are shifted along k_y. Also, the fraction of the Brillouin zone area contained in the contours is twice the number of doped holes per unit cell relative to half filling, i.e. 1/4 for the doping that we consider here. The implications of this were previously discussed in Ref. <cit.>. § CONCLUSIONS We have shown that a naive application of mean-field theory + RPA produces scattering vertices between electrons and Goldstone modes which do not vanish in the long-wavelength limit. However, fermions which live in a frame that locally follows the order parameter fluctuations do decouple from =0 Goldstone modes. This has important consequences for electron correlation functions, which by making use of the rotating frame naturally reflect the properties of thermally disordered systems which nevertheless retain features associated with a non-zero magnitude for the order parameter. 
Although we have illustrated our formalism only for square-lattice Hubbard and three-band models, its applicability is far more general. In particular, it can also be used to study the broken-symmetry states in moiré materials, such as e.g. the Incommensurate Kekulé Spiral (IKS) state <cit.> which is observed experimentally in both magic-angle twisted bilayer <cit.> and trilayer <cit.> graphene. The IKS order is in fact very similar to the circular spin-spiral order that we have studied in the three-band model in Sec. <ref>. We leave these applications for near-future work. Acknowledgements – N.B. would like to thank Patrick Ledwidth for helpful discussions about the manuscript, and Steve Kivelson for bringing Ref. <cit.> to our attention. This work was supported by a Leverhulme Trust International Professorship grant [number LIP-202-014]. For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. The research was also in part supported by the National Science Foundation under Grant No. NSF PHY-1748958. N.B. was supported by a University Research Fellowship of the Royal Society, and thanks École Normale Supérieure Paris, where part of this work was completed, for hospitality. YH acknowledges support from the European Research Council (ERC) under the European Union Horizon 2020 Research and Innovation Programme (Grant Agreement Nos. 804213-TMCS) — Supplementary Material — § SOLUTION OF THE BETHE-SALPETER EQUATION AND ITS PROPERTIES In this appendix we elaborate on the Bethe-Salpeter equation, its solutions, and their properties. We will closely follow the seminal works <cit.>. To ease the presentation, we will consider a general Hartree-Fock action of the form ∫_0^1/Tdτ ∑_αψ̅(∂_τ + E_α)ψ_α + 1/2∑_αβλσV_αλ,βσψ̅_αψ̅_λψ_σψ_β , where the Greek indices run over the Hartree-Fock single-particle states. Note that compared to the main text we have dropped the momentum labels, which one can think of as being absorbed into the Greek indices. As explained in the main text, the RPA collective mode propagator is obtained by summing the infinite set of diagrams in Eq.(<ref>). Summing this infinite series is equivalent to solving the Bethe-Salpeter equation, whose graphical representation we repeat here: < g r a p h i c s > Written out explicitly, the Bethe-Salpeter (BS) equation is G_2(α,β;λ,σ;iν) = T ∑_ω_n(-δ_βσδ_λα1/i(ω_n-ν) - E_β1/iω_n - E_α) + ∑_μνV_αμ,βν G_2(ν,μ;λ,σ;iν) T ∑_ω_n1/i(ω_n-ν) - E_β1/iω_n - E_α - ∑_μνV_αμ,νβ G_2(ν,μ;λ,σ;iν) T ∑_ω_n1/i(ω_n-ν) - E_β1/iω_n - E_α = -f(E_β)-f(E_α)/iν - E_α + E_β(δ_αλδ_βσ - ∑_μν(V_αμ,βν - V_αμ,νβ) G_2(ν,μ;λ,σ;iν) ), where f(E) is the Fermi-Dirac distribution. By going to zero temperature we find that G_2 satisfies the following equation: ∑_μ,ν( (n_β-n_α)(E_α - E_β - iν)δ_ανδ_βμ + V_αμ,βν - V_αμ,νβ) G_2(ν,μ;λ,σ;iν) = δ_αλδ_βσ , where n_α∈{0,1} are the fermion occupation numbers. Again adopting the notation where i,j,k,l represent occupied states, and m,n,o,p unoccupied states, we can rewrite the BS equation as ( A - iν1 B B^* A^* + iν1)( G^(1)(iν) G^(2)(iν) G^(3)(iν) G^(4)(iν)) = (1 0 0 1) , where we have defined G^(1)_mi,nj(iν) = G_2(m,i;n,j;iν) G^(2)_mi,nj(iν) = G_2(m,i;j,n;iν) G^(3)_mi,nj(iν) = G_2(i,m;n,j;iν) G^(4)_mi,nj(iν) = G_2(i,m;j,n;iν) , and the matrices A and B are given by A_mi,nj = (E_m - E_i) δ_ijδ_mn + V_mj,in - V_mj,ni B_mi,nj = V_mn,ij - V_mn,ji The complex conjugate in the lower rows in Eq. 
(<ref>) comes from the following property of the interaction potential: V^*_αλ,βσ = V_βσ,αλ , which is a simple consequence of hermiticity of the Hamiltonian. The anti-symmetry of the fermion creation and annihilation operators implies that the interaction potential also satisfies V_αλ,βσ = V_λα,σβ , from which it follows that A is hermitian, and B is symmetric. As a result, the following matrix H_B = (A B B^* A^* ) is hermitian. It also has a particle-hole symmetry: H_B^* = XH_BX , with X = (0 1 1 0 ) . Note that this is a true particle-hole transformation for the excitations on top of the HF state, as X maps from (i,m) space to (m,i) space, so it maps electron excitations to hole excitations and vice versa. The solution to Eq. (<ref>) is given by G_2(iν) = (ZH_B - iν1)^-1Z , where Z = (1 0 0 -1) . The matrix ZH_B satisfies X(ZH_B)^*X = -ZH_B, so all its non-zero eigenvalues come in pairs ±ω. Here we used that ω is real whenever H_B is a positive matrix, which is the condition for the Hartree-Fock state to be a variational energy minimum <cit.>). Let us now assume that ZH_B has no zero modes, which will generically be true if we add for example a very small symmetry breaking field to the Hamiltonian which gaps the Goldstone modes. In the absence of zero modes, the eigenvectors of ZH_B span a complete basis <cit.>, which means that the matrix S whose columns consist of these eigenvectors is invertible. We can therefore write ZH_B as ZH_B = S(Z_ηΩ)S^-1 , where Ω is a positive diagonal matrix, and Z_ηΩ is a diagonal matrix containing the eigenvalue pairs ±ω (Z_η is thus a diagonal matrix with ± 1 on the diagonal). The particle-hole symmetry in Eq. (<ref>) implies that if S_(αβ),s is an eigenvector with eigenvalue ω_s, then [XS^*]_(αβ),s is an eigenvector with eigenvalue -ω_s. Using the hermiticity of H_B and Eq. (<ref>), we can write [S^† H_B S]_ss' = [S^† Z S]_ss' [Z_ηΩ]_s's' = [Z_ηΩ]_ss [S^† Z S]_ss' Comparing Eqs. (<ref>) and (<ref>), we conclude that if [Z_ηΩ]_s's'≠ [Z_ηΩ]_ss, then [S^† Z S]_ss' = 0. Moreover, as H_B is positive if the Hartree-Fock state is a variational energy minimum, which we assume to be the case here, it follows that sgn([S^† Z S]_ss) = [Z_η]_ss if Ω_ss=ω_s ≠ 0. As we assume the Goldstone modes to be gapped, and hence that there are no zero modes, we can normalize the columns of S (i.e. the eigenvectors of ZH_B) such that the following equation holds: S^† Z S = Z_η , Equation (<ref>) is Eq. (<ref>) in the main text. To see this, note that if we restore the momentum labels, we have that [S()]_(α̨β),s = φ^s_,αβ()̨, i.e. the columns of S (which is block-diagonal in momentum ) are labeled by s and correspond to the collective mode wavefunctions. The diagonal elements of Z_η are the signs η_,s defined in the main text. Using the eigenbasis of ZH_B, normalized such that Eq. (<ref>) holds, we can write the collective mode propagator as G_2(iν) = S(Z_ηΩ - iν1)^-1S^-1Z = S(Ω -iν Z_η)^-1Z_η S^-1Z = S(Ω -iν Z_η)^-1S^† . The last line is Eq. (<ref>) in the main text. Let us now consider translationally invariant systems, for which H_B is block-diagonal in the momentum . We will write the momentum-dependent blocks as H_B(). We are interested in the case where H_B() has a zero mode at = 0, and we want to understand the behaviour of the eigenvectors contained in S() as a function of . Much can be learned from studying the following simple example: H_B() = ( 1 + ϵ_^2 1 1 1+ϵ_^2) , where ϵ_=0=0, and ϵ_≪ 1 for all . 
For this example we have Z = σ^z and X=σ^x, where σ^i are the Pauli matrices. Working to second order in ϵ_, we find that the eigenvalues of ZH_B() are given by ω_±,=±√(2)ϵ_. The corresponding eigenvectors, normalized to have unit Euclidean norm, are given by S()_:,+ = 1/√(2)-ϵ_ + 3√(2)ϵ_^2/4( 1 -1 + √(2)ϵ_ -ϵ_^2 ) S()_:,- = 1/√(2)-ϵ_ + 3√(2)ϵ_^2/4( 1 - √(2)ϵ_ +ϵ_^2 -1 ) , where we have again only kept terms up to second order in ϵ_. Note that as → 0, and thus ϵ_→ 0, the two eigenvectors S()_:,± become identical, reflecting the fact that the generalized eigenvalue equation H_B(0)|v⟩ = λ Z|v⟩ has only one solution in the presence of a zero mode, and the matrix S() becomes degenerate. The eigenvectors also satisfy [S^†()ZS()]_++ = √(2)ϵ_ [S^†()ZS()]_– = -√(2)ϵ_ . These equations show that if we want to adopt the normalization of the eigenvectors such that Eq. (<ref>) holds, then the Euclidean norm of the eigenvectors will diverge in the → 0 limit as ϵ_^-1/2. This is a generic feature of the generalized eigenvalue problem considered here, and is not an artefact of the simple example. To see this, let us write ZH_B = ZU D U^†, where U is the unitary matrix which diagonalizes H_B. If H_B has a zero mode with eigenvector corresponding to the first column of U, denoted as U_:,0, then U_:,0 is also a zero mode of ZH_B, i.e. we can take S_:,0 = U_:,0. Furthermore, as H_B is particle-hole symmetric [Eq. (<ref>)], it follows that [XU^*]_:,0 = e^iα U_:,0, where e^iα is an unimportant phase. As X anti-commutes with Z, the particle-hole symmetry implies that [S^† ZS]_ss = 0 if Ω_ss=ω_s = 0 . This shows that the zero mode eigenvectors cannot be normalized such that Eq. (<ref>) holds. Previously, we assumed a small non-zero symmetry breaking field to gap the Goldstone modes at =0. In this case, the Euclidean norm of the Goldstone mode eigenvectors at =0, normalized such that Eq. (<ref>) holds, will diverge as ω_^-1/2 when we take this symmetry-breaking field to zero at the end of the calculation. To avoid this divergence, we defined the rescaled wavefunctions in Eq. (<ref>) of the main text.
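The constructions of this appendix are compact enough to spell out in code. The sketch below assembles A and B from the Hartree-Fock energies and interaction matrix elements, solves the eigenproblem for ZH_B with the normalization S^†ZS = Z_η (assuming a stable mean-field state and a non-degenerate spectrum), and then uses the 2×2 example above to display the ϵ^{-1/2} growth of the Euclidean norm of the Z-normalized eigenvectors. All array conventions are assumptions made for illustration.

```python
import numpy as np

def build_A_B(E, V, occ, vir):
    """A_{mi,nj} = (E_m - E_i) d_mn d_ij + V_{mj,in} - V_{mj,ni},
       B_{mi,nj} = V_{mn,ij} - V_{mn,ji},
    with V[a, l, b, s] ~ V_{al,bs} in the Hartree-Fock basis and occ/vir index lists."""
    no, nv = len(occ), len(vir)
    A = (np.einsum('mjin->minj', V[np.ix_(vir, occ, occ, vir)])
         - np.einsum('mjni->minj', V[np.ix_(vir, occ, vir, occ)]))
    B = (np.einsum('mnij->minj', V[np.ix_(vir, vir, occ, occ)])
         - np.einsum('mnji->minj', V[np.ix_(vir, vir, occ, occ)]))
    dE = E[vir][:, None] - E[occ][None, :]
    A = A + np.einsum('mi,mn,ij->minj', dE, np.eye(nv), np.eye(no))
    return A.reshape(nv * no, nv * no), B.reshape(nv * no, nv * no)

def rpa_modes(HB):
    """Eigen-decompose Z H_B and impose |v^dag Z v| = 1; the sign of v^dag Z v is eta_s."""
    n = HB.shape[0] // 2
    Z = np.diag(np.concatenate([np.ones(n), -np.ones(n)]))
    w, S = np.linalg.eig(Z @ HB)
    eta = np.empty(2 * n)
    for s in range(2 * n):
        zn = (S[:, s].conj() @ Z @ S[:, s]).real
        eta[s] = np.sign(zn)
        S[:, s] = S[:, s] / np.sqrt(abs(zn))
    return w.real, eta, S

def G2(HB, nu):
    """Collective-mode propagator G_2(i nu) = S (Omega - i nu Z_eta)^{-1} S^dagger."""
    w, eta, S = rpa_modes(HB)
    return S @ np.diag(eta / (w - 1j * nu)) @ S.conj().T

# The 2x2 example: omega = +/- sqrt(2) eps to leading order, and the Z-normalized
# eigenvectors acquire a Euclidean norm growing like eps^(-1/2) as eps -> 0.
for eps in (0.3, 0.1, 0.03, 0.01):
    HB = np.array([[1 + eps ** 2, 1.0], [1.0, 1 + eps ** 2]])
    w, eta, S = rpa_modes(HB)
    print(f"eps={eps:5.2f}  omega={np.sort(w)}  norms={np.linalg.norm(S, axis=0)}")
```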
http://arxiv.org/abs/2307.04899v1
20230710205644
Application of the Duperier method to the analysis of the cosmic muon flux dependence on the meteorological parameters, based on the DANSS detector data
[ "I. Alekseev", "V. Belov", "M. Danilov", "D. Filosofov", "M. Fomina", "S. Kazartsev", "A. Kobyakin", "A. Kuznetsov", "I. Machikhiliyan", "D. Medvedev", "V. Nesterov", "D. Ponomarev", "I. Rozova", "N. Rumyantseva", "V. Rusinov", "E. Samigullin", "Ye. Shevchik", "M. Shirchenko", "Yu. Shitov", "N. Skrobova", "D. Svirida", "E. Tarkovsky", "A. Yakovleva", "E. Yakushev", "I. Zhitnikov", "D. Zinatulina" ]
physics.ins-det
[ "physics.ins-det", "hep-ex" ]
1. Introduction. The counting rate of cosmic-ray muons registered by a shallow underground detector depends on the state of the atmosphere above it. The meteorological effects are conventionally parametrized as

(I-⟨I⟩)/⟨I⟩ = α (T_eff-⟨T_eff⟩)/⟨T_eff⟩ + β(P-⟨P⟩),

where I is the muon counting rate, T_eff the effective temperature of the atmosphere, P the atmospheric pressure, and α and β the temperature and barometric coefficients. The barometric coefficient β is negative, while the sign of the temperature coefficient depends on the depth of the detector, the change of regime occurring at about 20–40 m w.e. <cit.>. In the Duperier method <cit.> the temperature effect is instead described through the parameters of the 100 mbar isobaric level,

(I-⟨I⟩)/⟨I⟩ = β(P-⟨P⟩) + μ'(H_100-⟨H_100⟩) + μ''(T_100-⟨T_100⟩),

where H_100 and T_100 are the height and the temperature of the 100 mbar level and β, μ' and μ'' are the corresponding coefficients. This method is applied here to the muon data of the DANSS detector <cit.>, which operates at a shallow depth of ∼50 m w.e.

2. The DANSS detector. DANSS <cit.> is located at the Kalinin nuclear power plant (57.91° N, 35.06° E), directly below a VVER-1000 reactor, at a distance of 10.9–12.9 m from the reactor core; the reactor building provides an overburden of ∼50 m w.e. The sensitive volume of 1 m³ of plastic scintillator is divided into 2500 strips of 100 × 4 × 1 cm³ and is surrounded by a composite passive shield with layers of 5, 8, 5 and 8 cm thickness and by active muon veto counters. The detector is mounted on a movable platform. Muons crossing the detector are reconstructed as tracks, which gives access to their zenith angles.

3. Data and atmospheric parameters. The analysis uses muon data collected from 05.10.2016 to 31.08.2020, i.e. almost 4 years. Besides the full sample, two angular sub-samples are considered: near-vertical muons with cosθ > 0.9 and inclined muons with cosθ < 0.36. The atmospheric parameters are taken from the ERA5 reanalysis <cit.>, which provides data on a 0.25° × 0.25° grid at 37 pressure levels from 1 to 1000 mbar; the grid point closest to the DANSS site (57.9° N, 35.1° E) is used. The height difference between two isobaric levels can also be obtained from the barometric formula

ΔH = 18400 (1 + aT) lg(p_1/p_2),

where a = 0.00366 K^-1 is the thermal expansion coefficient of air and T is the mean temperature of the layer between the pressures p_1 and p_2. The 100 mbar level lies at a height of ∼16 km. The coefficients β, μ' and μ'' are obtained from a multiple regression of the relative variations of the muon rate on P, H_100 and T_100; the results for the full sample and for the two angular sub-samples are given in Table <ref>.

4. Comparison with other experiments. The coefficients are compared with measurements at comparable shallow depths, in particular with the underground muon telescopes at Budapest (40 m w.e.) <cit.>, Hobart (42 m w.e.) <cit.> and London (60 m w.e.) <cit.>, and with detectors of the Global Muon Detector Network <cit.>. Such a comparison has to account for the different muon threshold energies E_thr of the detectors, which are estimated from the muon range in the overburden using Particle Data Group data <cit.>.

5. Conclusions. The dependence of the cosmic muon flux measured by the DANSS detector on the atmospheric pressure and on the height and temperature of the 100 mbar isobaric level has been studied with the Duperier method. The barometric coefficient β and the temperature coefficients μ' and μ'' are determined for the full muon sample and for the near-vertical and inclined sub-samples and are compared with the Budapest, Hobart and London results.

Acknowledgements. The work was supported under state contracts № .4.44.90.13.1119 and № .4.44.9.16.1006 (2013–2016), by the Russian Science Foundation grant № 17-12-01145 (2017–2021) and by grant № 23-12-00085.
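As an illustration of the regression underlying the Duperier method, the coefficients β, μ' and μ'' can be estimated from time series of the muon rate and the atmospheric parameters by ordinary least squares. The sketch below is illustrative only: the variable names, the plain least-squares estimator and the helper implementing the barometric height formula are assumptions, not the analysis code of the experiment.

```python
import numpy as np

def duperier_fit(I, P, H100, T100):
    """Least-squares estimate of (beta, mu', mu'') from time series of the muon rate I,
    the surface pressure P, and the height H100 and temperature T100 of the 100 mbar level."""
    y = (I - I.mean()) / I.mean()                        # relative rate variation
    X = np.column_stack([P - P.mean(),                   # barometric term
                         H100 - H100.mean(),             # positive temperature (decay) term
                         T100 - T100.mean()])            # negative temperature term
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return tuple(coef)                                   # (beta, mu_prime, mu_double_prime)

def isobar_height_difference(T_mean, p1, p2, a=0.00366):
    """Barometric (hypsometric) formula quoted above: Delta H = 18400 (1 + a T) lg(p1/p2), in m."""
    return 18400.0 * (1.0 + a * T_mean) * np.log10(p1 / p2)
```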
http://arxiv.org/abs/2307.04110v1
20230709065359
Learning Space-Time Continuous Neural PDEs from Partially Observed States
[ "Valerii Iakovlev", "Markus Heinonen", "Harri Lähdesmäki" ]
cs.LG
[ "cs.LG" ]
We introduce a novel grid-independent model for learning partial differential equations (PDEs) from noisy and partial observations on irregular spatiotemporal grids. We propose a space-time continuous latent neural PDE model with an efficient probabilistic framework and a novel encoder design for improved data efficiency and grid independence. The latent state dynamics are governed by a PDE model that combines the collocation method and the method of lines. We employ amortized variational inference for approximate posterior estimation and utilize a multiple shooting technique for enhanced training speed and stability. Our model demonstrates state-of-the-art performance on complex synthetic and real-world datasets, overcoming limitations of previous approaches and effectively handling partially-observed data. The proposed model outperforms recent methods, showing its potential to advance data-driven PDE modeling and enabling robust, grid-independent modeling of complex partially-observed dynamic processes.

§ INTRODUCTION

All source code and datasets will be made publicly available after review.
Modeling spatiotemporal processes allows to understand and predict the behavior of complex systems that evolve over time and space <cit.>. Partial differential equations (PDEs) are a popular tool for this task as they have a solid mathematical foundation <cit.> and can describe the dynamics of a wide range of physical, biological, and social phenomena <cit.>. However, deriving PDEs can be challenging, especially when the system's underlying mechanisms are complex and not well understood. Data-driven methods can bypass these challenges <cit.>. By learning the underlying system dynamics directly from data, we can develop accurate PDE models that capture the essential features of the system. This approach has changed our ability to model complex systems and make predictions about their behavior in a data-driven manner. While current data-driven PDE models have been successful at modeling complex spatiotemporal phenomena, they often operate under various simplifying assumptions such as regularity of the spatial or temporal grids <cit.>, discreteness in space or time <cit.>, and availability of complete and noiseless observations <cit.>. Such assumptions become increasingly limiting in more realistic scenarios with scarce data and irregularly spaced, noisy and partial observations. We address the limitations of existing methods and propose a space-time continuous and grid-independent model that can learn PDE dynamics from noisy and partial observations made on irregular spatiotemporal grids. Our main contributions include: * Development of an efficient generative modeling framework for learning latent neural PDE models from noisy and partially-observed data; * Novel PDE model that merges two PDE solution techniques – the collocation method and the method of lines – to achieve space-time continuity, grid-independence, and data efficiency; * Novel encoder design that operates on local spatiotemporal neighborhoods for improved data-efficiency and grid-independence. Our model demonstrates state-of-the-art performance on complex synthetic and real-world datasets, opening up the possibility for accurate and efficient modeling of complex dynamic processes and promoting further advancements in data-driven PDE modeling. § PROBLEM SETUP In this work we are concerned with modeling of spatiotemporal processes. For brevity, we present our method for a single observed trajectory, but extension to multiple trajectories is straightforward. We observe a spatiotemporal dynamical system evolving over time on a spatial domain Ω. The observations are made at M arbitrary consecutive time points t_1:M:=(t_1, …, t_M) and N arbitrary observation locations _1:N:=(_1, …, _N), where _i ∈Ω. This generates a sequence of observations _1:M:=(_1, …, _M), where _i ∈ℝ^N × D contains D-dimensional observations at the N observation locations. We define _i^j as the observation at time t_i and location _j. The number of time points and observation locations may vary between different observed trajectories. We assume the data is generated by a dynamical system with a latent state (t, ) ∈ℝ^d, where t is time and ∈Ω is spatial location. The latent state is governed by an unknown PDE and is mapped to the observed state (t, ) ∈ℝ^D by an unknown observation function g and likelihood model p: ∂(t, x)/∂ t = F((t,), ∂_(t,), ∂^2_(t,),…), (t,) ∼ p(g((t,))), where ∂^∙_(t,) denotes partial derivatives wrt . In this work we make two assumptions that are highly relevant in real-world scenarios. 
First, we assume partial observations, that is, the observed state (t,) does not contain all information about the latent state (t,) (e.g., (t,) contains pressure and velocity, but (t,) contains information only about the pressure). Second, we assume out-of-distribution time points and observation locations, that is, their number, positions, and density can change arbitrarily at test time. § MODEL [9]r0.4 < g r a p h i c s > Model sketch. Initial latent state (t_1,) is evolved via F_θ_dyn to the following latent states which are then mapped to the observed states by g_θ_dec. Here we describe the model components (Sec. <ref>) which are then used to construct the generative model (Sec. <ref>). §.§ Model components Our model consists of four parts: space-time continuous latent state (t, ) and observed state (t, ), a dynamics function F_θ_dyn governing the temporal evolution of the latent state, and an observation function g_θ_dec mapping the latent state to the observed state (see Figure <ref>). Next, we describe these components in detail. Latent state. To define a space-time continuous latent state (t, ) ∈ℝ^d, we introduce (t):=(^1(t), …, ^N(t)) ∈ℝ^N × d, where each ^i(t) ∈ℝ^d corresponds to the observation location _i. Then, we define the latent state (t, ) as a spatial interpolant of (t): (t, ) := Interpolate((t))(), where Interpolate(·) maps (t) to an interpolant which can be evaluated at any spatial location ∈Ω (see Figure <ref>). We do not rely on a particular interpolation method, but in this work we use linear interpolation as it shows good performance and facilitates efficient implementation. Latent state dynamics. [13]r0.3 < g r a p h i c s > Latent state (t,) defined as an interpolant of (t) := (^1(t), ..., ^4(t)). Given a space-time continuous latent state, one can naturally define its dynamics in terms of a PDE: ∂(t, x)/∂ t = F_θ_dyn((t,), ∂_(t,), ∂^2_(t,),…), where F_θ_dyn is a dynamics function with parameters θ_dyn. This is a viable approach known as the collocation method <cit.>, but it has several limitations. It requires us to decide which partial derivatives to include in the dynamics function, and also requires an interpolant which has all the selected partial derivatives (e.g., linear interpolant has only first order derivatives). To avoid these limitations, we combine the collocation method with another PDE solution technique known as the method of lines <cit.>, which approximates spatial derivatives ∂^∙_(t,) using only evaluations of (t,), and then let the dynamics function approximate all required derivatives in a data-driven manner. To do that, we define the spatial neighborhood of as 𝒩_S(), which is a set containing and its spatial neighbors, and also define (t, 𝒩_S()), which is a set of evaluations of the interpolant (t, ) at points in 𝒩_S(): 𝒩_S() := {' ∈Ω : '= or ' is a spatial neighbor of }, (t, 𝒩_S()) := {(t, ') : ' ∈𝒩_S() }, and assume that this information is sufficient to approximate all required spatial derivatives at . This is a reasonable assumption since, e.g., finite differences can approximate derivatives using only function values and locations of the evaluation points. Hence, we define the dynamics of (t, ) as ∂(t, )/∂ t = F_θ_dyn(𝒩_S(), (t, 𝒩_S())), which is defined only in terms of the values of the latent state, but not its spatial derivatives. [17]r0.225 < g r a p h i c s > Example of 𝒩_S(_i). Instead of using the observation locations (dots) to define spatial neighbors, we use spatial locations arranged in a fixed predefined pattern (crosses). 
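As a concrete illustration of the interpolation-based latent dynamics above, the following is a minimal NumPy/SciPy sketch (not the paper's PyTorch implementation): the latent state is stored at scattered observation locations, extended to the domain by linear interpolation, evaluated on a small circular neighborhood around each location, and fed to a stand-in dynamics network. The neighborhood pattern, the random weights of the toy dynamics function, and all array sizes are illustrative assumptions.

```python
# Minimal NumPy/SciPy sketch of neighborhood-based latent dynamics on an
# irregular grid (illustrative stand-in, not the learned model).
import numpy as np
from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator

rng = np.random.default_rng(0)
N, d = 64, 3                      # observation locations, latent dimension
x_obs = rng.random((N, 2))        # scattered locations in the unit square
z = rng.normal(size=(N, d))       # latent state at the observation locations

# hypothetical neighborhood pattern: the center point plus 8 points on a circle
r = 0.1
angles = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
offsets = np.vstack([[0.0, 0.0], np.c_[r * np.cos(angles), r * np.sin(angles)]])

# stand-in for the learned dynamics network F_theta_dyn: maps the stacked
# neighborhood values to dz/dt at the center point (offsets are fixed, so they
# are omitted from the input here)
W = 0.1 * rng.normal(size=(offsets.shape[0] * d, d))
def F_dyn(neighborhood_values):
    return np.tanh(neighborhood_values.reshape(-1)) @ W

def dzdt(z_flat):
    z_now = z_flat.reshape(N, d)
    interp = LinearNDInterpolator(x_obs, z_now)      # space-continuous state
    fallback = NearestNDInterpolator(x_obs, z_now)   # outside the convex hull
    out = np.empty_like(z_now)
    for i, xi in enumerate(x_obs):
        pts = xi + offsets
        vals = interp(pts)
        nan = np.isnan(vals).any(axis=1)
        vals[nan] = fallback(pts[nan])
        out[i] = F_dyn(vals)
    return out.reshape(-1)

# one explicit Euler step of the resulting ODE system; a real run would hand
# dzdt to an adaptive solver such as scipy's solve_ivp or torchdiffeq's odeint
z_next = z.reshape(-1) + 1e-2 * dzdt(z.reshape(-1))
print(z_next.shape)   # (N*d,)
```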
One way to define the spatial neighbors for is in terms of the observation locations _1:N (e.g., use the nearest ones) as was done, for example, in <cit.>. Instead, we utilize continuity of the latent state (t, ), and define the spatial neighbors in a grid-independent manner as a fixed number of points arranged in a predefined patter around (see Figure <ref>). This allows to fix the shape and size of the spatial neighborhoods in advance, making them independent of the observation locations. In this work we use the spatial neighborhood consisting of two concentric circles of radius r and r/2, each circle contains 8 evaluation points as in Figure <ref>. In Appendix <ref> we compare neighborhoods of various shapes and sizes. Equation <ref> allows to simulate the temporal evolution of (t, ) at any spatial location. However, since (t, ) is defined only in terms of a spatial interpolant of (t) (see Eq. <ref>), with ^i(t) = (t, _i), it is sufficient to simulate the latent state dynamics only at the observation locations _1:N. Hence, we can completely characterize the latent state dynamics in terms of a system of N ODEs: d(t)/dt := [ d^1(t)/dt; ⋮; d^N(t)/dt ] = [ ∂(t, _1)/∂ t; ⋮; ∂(t, _N)/∂ t ] = [ F_θ_dyn(𝒩_S(_1), (t, 𝒩_S(_1))); ⋮; F_θ_dyn(𝒩_S(_N), (t, 𝒩_S(_N))) ]. For convenience, we define (t; t_1, _1, θ_dyn) := ODESolve(t;t_1,_1,θ_dyn) as the solution of the ODE system in Equation <ref> at time t with initial state (t_1)=_1 and parameters θ_dyn. We also define (t, ; t_1, _1, θ_dyn) as the spatial interpolant of (t; t_1, _1, θ_dyn) as in Equation <ref>. We solve the ODEs using off the shelf differentiable ODE solvers from torchdiffeq package <cit.>. Note that we solve for the state (t) only at the observation locations _1:N, so to get the neighborhood values (t, 𝒩_S(_i)) we perform interpolation at every step of the ODE solver. Observation function. We define the mapping from the latent space to the observation space as a parametric function g_θ_dec with parameters θ_dec: (t,) ∼𝒩(g_θ_dec((t, )), σ_u^2I_D), where 𝒩 is the Gaussian distribution, σ_u^2 is noise variance, and I_D is D-by-D identity matrix. §.§ Generative model [18]r0.3 < g r a p h i c s > Multiple shooting splits a trajectory with one initial state (top) into two sub-trajectories with two initial states (bottom) and tries to minimize the gap between sub-trajectories (orange arrow). Training models of dynamic systems is often challenging due to long training times and training instabilities <cit.>. To alleviate these problems, various heuristics have been proposed, such as progressive lengthening and splitting of the training trajectories <cit.>. We use multiple shooting <cit.>, a simple and efficient technique which has demonstrated its effectiveness in ODE learning applications <cit.>. We extent the multiple shooting framework for latent ODE models presented in <cit.> to our PDE modeling setup by introducing spatial dimensions in the latent state and designing an encoder adapted specifically to the PDE setting (Section <ref>). Multiple shooting splits a single trajectory {(t_i)}_i=1,...,M with one initial state _1 into B consecutive non-overlapping sub-trajectories {(t_i)}_i ∈ℐ_b, b=1,…,B with B initial states _1:B:=(_1,…,_B) while imposing a continuity penalty between the sub-trajectories (see Figure <ref>). The index set ℐ_b contains time point indices for the b'th sub-trajectory. 
We also denote the temporal position of _b as t_[b] and place _b at the first time point preceding the b'th sub-trajectory (except _1 which is placed at t_1). Note that the shooting states _b have the same dimension as the original latent state (t) i.e., _b ∈ℝ^N × d. Multiple shooting allows to parallelize the simulation over the sub-trajectories and shortens the simulation intervals thus improving the training speed and stability. In Appendix <ref> we demonstrate the effect of multiple shooting on the model training and prediction accuracy. We begin by defining the prior over the unknown model parameters and initial states: p(_1:B, θ_dyn, θ_dec) = p(_1:B|θ_dyn)p(θ_dyn)p(θ_dec), where p(θ_dyn) and p(θ_dec) are zero-mean diagonal Gaussians, and the continuity inducing prior p(_1:B|θ_dyn) is defined as in <cit.> p(_1:B| θ_dyn) = p(_1) ∏_b=2^Bp(_b|_b-1, θ_dyn). Intuitively, the continuity prior p(_b|_b-1, θ_dyn) takes the initial latent state _b-1, simulates it forward from time t_[b-1] to t_[b] to get μ_[b] = ODESolve(t_[b] ; t_[b-1], _b-1, θ_dyn), and then forces μ_[b] to approximately match the initial state _b of the next sub-trajectory, thus promoting continuity of the full trajectory. We assume the continuity inducing prior factorizes across the grid points, i.e., p(_1:B| θ_dyn) = [ ∏_j=1^N p(_1^j) ] [ ∏_b=2^B∏_j=1^N p(_b^j|_b-1, θ_dyn)], = [ ∏_j=1^N p(_1^j) ] [ ∏_b=2^B∏_j=1^N𝒩( _b^j|(t_[b], _j; t_[b-1], _b-1, θ_dyn), σ_c^2I_d )], where p(_1^j) is a diagonal Gaussian, and parameter σ_c^2 controls the strength of the prior. Note that the term (t_[b], _j; t_[b-1], _b-1, θ_dyn) in Equation <ref> equals the ODE forward solution ODESolve(t_[b] ; t_[b-1], _b-1, θ_dyn) at grid location _j. Finally, we define our generative in terms of the following sampling procedure: θ_dyn, θ_dec, _1:B ∼ p(θ_dyn)p(θ_dec) p(_1:B | θ_dyn), (t_i) = (t_i; t_[b], _b, θ_dyn), b ∈{1, ..., B}, i ∈ℐ_b, _i^j ∼ p(_i^j | g_θ_dec((t_i, _j)), i = 1, …, M, j=1,…,N, with the following joint distribution (see Appendix <ref> for details about the model specification.): p(_1:M, _1:B, θ_dyn, θ_dec) = ∏_b=1^B∏_i ∈ℐ_b^∏_j=1^N[ p(_i^j|_b, θ_dyn, θ_dec) ] p(_1:B | θ_dyn) p(θ_dyn) p(θ_dec). § PARAMETER INFERENCE §.§ Amortized variational inference We approximate the true posterior over the model parameters and initial states p(_1:B, θ_dyn, θ_dec | _1:M) using variational inference <cit.> with the following approximate posterior: q(θ_dyn, θ_dec, _1:B) = q(θ_dyn) q(θ_dec) q(_1:B) = q_ψ_dyn(θ_dyn) q_ψ_dec(θ_dec) ∏_b=1^B∏_j=1^Nq_ψ_b^j(_b^j), where q_ψ_dyn, q_ψ_dec and q_ψ_b^j are diagonal Gaussians, and ψ_dyn, ψ_dec and ψ_b^j are variational parameters. To avoid direct optimization over the local variational parameters ψ_b^j, we use amortized variational inference <cit.> and train an encoder h_θ_enc with parameters θ_enc which maps observations _1:M to ψ_b^j (see Section <ref>). For brevity, we sometimes omit the dependence of approximate posteriors on variational parameters and simply write e.g., q(_b^j). 
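To make the continuity-inducing prior introduced above concrete, here is a toy NumPy sketch under simplifying assumptions: the shooting states are flat vectors rather than per-node latent fields, and a linear ODE with a matrix-exponential solution stands in for the learned latent dynamics. Only the structure of the penalty between consecutive sub-trajectories is meant to carry over.

```python
# Toy sketch of the multiple-shooting continuity prior: each block b has its
# own initial state s_b, and p(s_b | s_{b-1}) is a Gaussian centred on the
# forward ODE solution of s_{b-1}.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
M, dim, B = 24, 4, 4                        # time points, latent dim, blocks
t = np.sort(rng.random(M))                  # irregular time grid
blocks = np.array_split(np.arange(M), B)    # index sets I_1, ..., I_B
A = -0.5 * np.eye(dim)                      # toy dynamics dz/dt = A z

def ode_solve(s0, t0, t1):
    # closed-form solution of the toy linear ODE (a neural model would call a
    # numerical ODE solver here)
    return expm(A * (t1 - t0)) @ s0

def block_time(b):
    # temporal position of s_b: t_1 for the first block, otherwise the time
    # point just before the block
    return t[0] if b == 0 else t[blocks[b][0] - 1]

s = rng.normal(size=(B, dim))               # shooting (initial) states
sigma_c = 0.1                               # continuity prior strength

log_prior = 0.0
for b in range(1, B):
    mu = ode_solve(s[b - 1], block_time(b - 1), block_time(b))
    diff = s[b] - mu
    log_prior += (-0.5 * np.sum(diff ** 2) / sigma_c ** 2
                  - 0.5 * dim * np.log(2 * np.pi * sigma_c ** 2))
print("continuity log-prior:", log_prior)
```

Smaller values of sigma_c penalize gaps between sub-trajectories more strongly, which is the knob the prior exposes for trading training stability against strict continuity.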
In variational inference the best approximation of the posterior is obtained by minimizing the Kullback-Leibler divergence: KL[q(θ_dyn, θ_dec, _1:B) ‖ p(θ_dyn, θ_dec, _1:B|_1:N)], which is equivalent to maximizing the evidence lower bound (ELBO), defined for our model as: ℒ = ∑_b=1^B∑_i ∈ℐ_b∑_j=1^N𝔼_q(_b, θ_dyn, θ_dec)[ log p (_i^j | _b, θ_dyn, θ_dec) ] _(i) observation model -∑_j=1^NKL[ q(_1^j) ‖ p(_1^j) ]_(ii) initial state prior - ∑_b=2^B∑_j=1^N𝔼_q(θ_dyn, _b-1)[ KL[ q(_b^j) ‖ p(_b^j|_b-1, θ_dyn) ] ]_(iii) continuity prior -KL[q(θ_dyn) ‖ p(θ_dyn)]_(iv) dynamics prior -KL[q(θ_dec) ‖ p(θ_dec)]_(v) decoder prior. The terms (ii), (iv), and (v) are computed analytically, while terms (i) and (iii) are approximated using Monte Carlo integration for expectations, and numerical ODE solvers for initial value problems. See Appendix <ref> and <ref> approximate posterior details and derivation and computation of the ELBO. §.§ Encoder Here we describe our encoder which maps observations _1:M to local variational parameters ψ_b^j required to sample the initial latent state of the sub-trajectory b at time point t_[b] and observation location _j. Similarly to our model, the encoder should be data-efficient and grid-independent. Similarly to our model (Section <ref>), we enable grid-independence by making the encoder operate on spatial interpolants of the observations _1:M (even if they are noisy): _i() := Interpolate(_i)(), i=1,…,M, where spatial interpolation is done separately for each time point i. We then use the interpolants _i() to define the spatial neighborhoods 𝒩_S() in a grid-independent manner. To improve data-efficiency, we assume ψ_b^j does not depend on the whole observed sequence _1:M, but only on some local information in a spatiotemporal neighborhood of t_[b] and _j. We define the temporal neighborhood of t_[b] as 𝒩_T(t_[b]) {k : |t_k - t_[b]| ≤δ_T, k=1,…,M}, where δ_T is a hyperparameter controlling the neighborhood size, and then define the spatiotemporal neighborhood of t_[b] and _j as [t_[b], _j] := {_k() : k ∈𝒩_T(t_[b]), ∈𝒩_S(_j) }. Our encoder operates on such spatiotemporal neighborhoods [t_[b], _j] and works in three steps (see Figure <ref>). First, for each time index k ∈𝒩_T(t_[b]) it aggregates the spatial information {_k()}_∈𝒩(_j) into a vector α_k^S. Then, it aggregates the spatial representations α_k^S across time into another vector α_[b]^T which is finally mapped to the variational parameters ψ_b^j as follows: ψ_b^j = h_θ_enc([t_[b], _j]) = h_read(h_temporal(h_spatial([t_[b], _j]))). Spatial aggregation. Since the spatial neighborhoods are fixed and remain identical for all spatial locations (see Figure <ref>), we implement the spatial aggregation function h_spatial as an MLP which takes elements of the set {_k()}_∈𝒩_S(_j) stacked in a fixed order as the input. Temporal aggregation. We implement h_temporal as a stack of transformer layers <cit.> which allows it to operate on input sets of arbitrary size. We use time-aware attention and continuous relative positional encodings <cit.> which were shown to be effective on data from dynamical systems observed at irregular time intervals. Each transformer layer takes a layer-specific input set {ξ_k^in}_k ∈𝒩_T(t_[b]), where ξ_k^in is located at t_k, and maps it to an output set {ξ_k^out}_k ∈𝒩_T(t_[b]), where each ξ_k^out is computed using only the input elements within distance δ_T from t_k, thus promoting temporal locality. 
Furthermore, instead of using absolute positional encodings the model assumes the behavior of the system does not depend on time and uses relative temporal distances to inject positional information. The first layer takes {α_k^S}_k ∈𝒩_T(t_[b]) as the input, while the last layer returns a single element at time point t_[b], which represents the temporal aggregation α_[b]^T. Variational parameter readout. Since α_i^T is a fixed-length vector, we implement h_read as an MLP. § EXPERIMENTS We use three challenging datasets: Shallow Water, Navier-Stokes, and Scalar Flow which contain observations of spatiotemporal system at N ≈ 1100 grid points evolving over time (see Figure <ref>). The first two datasets are synthetic and generated using numeric PDE solvers (we use scikit-fdiff <cit.> for Shallow Water, and PhiFlow <cit.> for Navier-Stokes), while the third dataset contains real-world observations (camera images) of smoke plumes raising in warm air <cit.>. In all cases the observations are made at irregular spatiotemporal grids and contain only partial information about the true system state. All datasets contain 60/20/20 training/validation/testing trajectories. See Appendix <ref> for details. We train our model for 20k iterations with constant learning rate of 3e-4 and linear warmup. The latent spatiotemporal dynamics are simulated using differentiable ODE solvers from the torchdiffeq package <cit.> (we use dopri5 with rtol=1e-3, atol=1e-4, no adjoint). Training is done on a single NVIDIA Tesla V100 GPU, with a single run taking 3-4 hours. We use the mean absolute error (MAE) on the test set as the performance measure. Error bars are standard errors over 4 random seeds. For forecasting we use the expected value of the posterior predictive distribution. See Appendix <ref> for all details about the training, validation, and testing setup. Latent state dimension. Here we show the advantage of using latent-space models on partially observed data. We change the latent state dimension d from 1 to 5 and measure the test MAE. Note that for d=1 we effectively have a data-space model which models the observations without trying to reconstruct the missing states. Figure <ref> shows that in all cases there is improvement in performance as the latent dimension grows. For Shallow Water and Navier-Stokes the true latent dimension is 3. Since Scalar Flow is a real-world process, there is no true latent dimension. As a benchmark, we provide the performance of our model trained on fully-observed versions of the synthetic datasets (we use the same architecture and hyperparameters, but fix d to 3). Figure <ref> also shows examples of model predictions (at the final time point) for different values of d. We see a huge difference between d=1 and d=3,5. Note how apparently small difference in MAE at d=1 and d=5 for Scalar Flow corresponds to a dramatic improvement in the prediction quality. Grid independence. Here we show the grid-independence property of our model by training it on grids with ≈ 1100 observation locations, and then testing on a coarser, original, and finer grids. For Shallow Water and Navier-Stokes the coarser/finer grids contain 290/4200 nodes, while for Scalar Flow we have 560/6420 nodes, respectively. Figure <ref> shows the model's performance on different spatial grids. We see that A performance drop on coarse grids is expected since as we get less accurate information about the system's initial state and simulate the dynamics on coarse grids. 
Figure <ref> also shows examples of model predictions (at the final time point) for different grid sizes. Comparison to other models. Here we compare our model with two recent models from the literature: MAgNet <cit.> and DINo <cit.>. Similarly to our model, these models produce space-time continuous predictions: MAgNet uses neural network-based interpolation and Euler time discretization, while DINo uses an implicit neural representation-based decoder and continuous-time dynamics. These two methods also use an encoder that takes a history of observations and maps it to an initial state in the latent space, where the latent dynamics are learned and the latent state is mapped to the observation space via a decoder (we use the non-Markovian version of DINo). We use the official implementations of both models and tune the hyperparameters for the best performance. For Shallow Water and Navier-Stokes we use a history size of 5 and predict the next 20 steps, while for Scalar Flow the history size is 10 and we predict the next 10 steps. See Appendix <ref> for hyperparameter details. The results are shown in Table <ref>, and the model predictions are shown in Figure <ref>.

Test MAE for different models:
           Shallow Water    Navier-Stokes    Scalar Flow
MAgNet     0.061 ± 0.001    0.103 ± 0.003    0.056 ± 0.003
DINo       0.063 ± 0.003    0.113 ± 0.002    0.059 ± 0.001
Ours       0.016 ± 0.002    0.041 ± 0.003    0.042 ± 0.001

Our model shows the best performance, achieving very accurate predictions on the synthetic data, and also shows the capacity for modeling real-world data, managing to predict the smoke speed, direction, and even the smoke separation. In Figure <ref> we also test the data efficiency of the models and show that our model requires much less data to converge to its lowest error. In Appendix <ref> we further demonstrate our model's capability to learn dynamics from noisy data.

§ RELATED WORK Closest to our work is <cit.>, where they considered the problem of learning PDEs from partial observations and proposed a discrete and grid-dependent model that is restricted to regular spatiotemporal grids. Another related work is that of <cit.>, where they proposed a variational inference framework for learning ODEs from noisy and partially-observed data; however, they consider only low-dimensional ODEs and are restricted to regular grids. Other works considered learning latent-space PDE dynamics using the “encode-process-decode” approach. <cit.> use a GNN-based encoder and dynamics function, map the observations to the same spatial grid in the latent space, and learn the latent-space dynamics. <cit.> use a similar approach but with CNNs, mapping the observations to a coarser latent grid and learning the coarse-scale dynamics. <cit.> use CNNs to map observations to a low-dimensional latent vector and learn the latent dynamics. However, all these approaches are grid-dependent, limited to regular spatial/temporal grids, and require fully-observed data. Interpolation has been used in numerous studies for various applications. Works such as <cit.> use interpolation to map latent states on coarse grids to observations on finer grids. <cit.> used interpolation as a post-processing step to obtain continuous predictions, while <cit.> used it to recover observations at missing nodes.

§ CONCLUSION We proposed a novel space-time continuous, grid-independent model for learning PDE dynamics from noisy and partial observations on irregular spatiotemporal grids.
Our contributions include an efficient generative modeling framework, a novel latent PDE model merging collocation and method of lines, and a data-efficient, grid-independent encoder design. The model demonstrates state-of-the-art performance on complex datasets, highlighting its potential for advancing data-driven PDE modeling and enabling accurate predictions of spatiotemporal phenomena in diverse fields. However, our model and encoder operate on every spatial and temporal location which might not be the most efficient approach and hinders scaling to extremely large grids, hence research into more efficient latent state extraction and dynamics modeling methods is needed. plainnat § APPENDIX A §.§ Model specification. Here we provide all details about our model specification. The joint distribution for our model is p(_1:M, _1:B, θ_dyn, θ_dec) = p(_1:N|_1:B, θ_dyn, θ_dec) p(_1:B | θ_dyn) p(θ_dyn) p(θ_dec). Next, we specify each component in detail. Parameter priors. The parameter priors are isotropic zero-mean multivariate normal distributions: p(θ_dyn) = 𝒩(θ_dyn | 0, I), p(θ_dec) = 𝒩(θ_dec | 0, I), where 𝒩 is the normal distribution, 0 is a zero vector, and I is the identity matrix, both have an appropriate dimensionality dependent on the number of encoder and dynamics parameters. Continuity prior. We define the continuity prior as p(_1:B| θ_dyn) = p(_1) ∏_b=2^Bp(_b|_b-1, θ_dyn), = [ ∏_j=1^N p(_1^j) ] [ ∏_b=2^B∏_j=1^N p(_b^j|_b-1, θ_dyn)], = [ ∏_j=1^N𝒩(_1^j | 0, I) ] [ ∏_b=2^B∏_j=1^N𝒩( _b^j|(t_[b], _j; t_[b-1], _b-1, θ_dyn), σ_c^2I ).], where 𝒩 is the normal distribution, 0∈ℝ^d is a zero vector, I ∈ℝ^d × d is the identity matrix, and σ_c ∈ℝ is the parameter controlling the strength of the prior. Smaller values of σ_c tend to produce smaller gaps between the sub-trajectories. Observation model p(_1:N|_1:B, θ_dyn, θ_dec) = ∏_b=1^B∏_i ∈ℐ_b^∏_j=1^N p(_i^j|_b, θ_dyn, θ_dec) = ∏_b=1^B∏_i ∈ℐ_b^∏_j=1^Np(_i^j | g_θ_dec((t_i, _j; t_[b], _b, θ_dyn))) = ∏_b=1^B∏_i ∈ℐ_b^∏_j=1^N𝒩(_i^j | g_θ_dec((t_i, _j; t_[b], _b, θ_dyn)), σ_u^2 I), where 𝒩 is the normal distribution, σ_u^2 is the observation noise variance, and I ∈ℝ^D × D is the identity matrix. Note again that (t_i, _j; t_[b], _b, θ_dyn) above equals the ODE forward solution ODESolve(t_i ; t_[b], _b, θ_dyn) at grid location _j. §.§ Approximate posterior specification. Here we provide all details about the approximate posterior. We define the approximate posterior as q(θ_dyn, θ_dec, _1:B) = q(θ_dyn) q(θ_dec) q(_1:B) = q_ψ_dyn(θ_dyn) q_ψ_dec(θ_dec) ∏_b=1^B∏_j=1^Nq_ψ_b^j(_b^j). Next, we specify each component in detail. Dynamics parameters posterior. We define q_ψ_dyn(θ_dyn) as q_ψ_dyn(θ_dyn) = 𝒩(θ_dyn | γ_dyn, diag (τ_dyn^2)), where γ_dyn and τ_dyn^2 are vectors with an appropriate dimension (dependent on the number of dynamics parameters), and diag (τ_dyn^2) is a matrix with τ_dyn^2 on the diagonal. We define the vector of variational parameters as ψ_dyn = (γ_dyn, τ_dyn^2). We optimize directly over ψ_dyn and initialize γ_dyn using Xavier <cit.> initialization, while τ_dyn is initialized with each element equal to 9 · 10^-4. Decoder parameters posterior. We define q_ψ_dec(θ_dec) as q_ψ_dec(θ_dec) = 𝒩(θ_dec | γ_dec, diag (τ_dec^2)), where γ_dec and τ_dec^2 are vectors with an appropriate dimension (dependent on the number of decoder parameters), and diag (τ_dec^2) is a matrix with τ_dec^2 on the diagonal. We define the vector of variational parameters as ψ_dec = (γ_dec, τ_dec^2). 
We optimize directly over ψ_dec and initialize γ_dec using Xavier <cit.> initialization, while τ_dec is initialized with each element equal to 9 · 10^-4. Shooting variables posterior. We define q_ψ_b^j(_b^j) as q_ψ_b^j(_b^j) = 𝒩(_b^j | γ_b^j, diag ([τ_b^j]^2))), where the vectors γ_b^j, τ_b^j ∈ℝ^d are returned by the encoder h_θ_enc, and diag ([τ_b^j]^2) is a matrix with [τ_b^j]^2 on the diagonal. We define the vector of variational parameters as ψ_b^j = (γ_b^j, [τ_b^j]). Because the variational inference for the shooting variables is amortized, our model is trained w.r.t. the parameters of the encoder network, θ_enc. § APPENDIX B §.§ Derivation of ELBO. For our model and the choice of the approximate posterior the ELBO can be written as ℒ = ∫q(θ_dyn, θ_dec, _1:B) lnp(_1:M, _1:B, θ_dyn, θ_dec)/q(θ_dyn, θ_dec, _1:B)dθ_dyn dθ_dec d_1:B = ∫q(θ_dyn, θ_dec, _1:B) lnp(_1:M|_1:B, θ_dyn, θ_dec)p(_1:B|θ_dyn)p(θ_dyn)p(θ_dec)/q(_1:B)q(θ_dyn)q(θ_dec)dθ_dyn dθ_dec d_1:B = ∫q(θ_dyn, θ_dec, _1:B) lnp(_1:M | _1:B, θ_dyn, θ_dec)dθ_dyn dθ_dec d_1:B - ∫q(θ_dyn, θ_dec, _1:B) lnq(_1:B)/p(_1:B | θ_dyn)dθ_dyn dθ_dec d_1:B - ∫q(θ_dyn, θ_dec, _1:B) lnq(θ_dyn)/p(θ_dyn)dθ_dyn dθ_dec d_1:B - ∫q(θ_dec, θ_dec, _1:B) lnq(θ_dec)/p(θ_dec)dθ_dyn dθ_dec d_1:B = ℒ_1 - ℒ_2 - ℒ_3 - ℒ_4. Next, we will look at each term ℒ_i separately. ℒ_1 = ∫q(θ_dyn, θ_dec, _1:B) lnp(_1:M | _1:B, θ_dyn, θ_dec)dθ_dyn dθ_dec d_1:B = ∫q(θ_dyn, θ_dec, _1:B) ln[∏_b=1^B∏_i ∈ℐ_b∏_j=1^Np(_i^j | _b, θ_dyn, θ_dec)]dθ_dyn dθ_dec d_1:B = ∑_b=1^B∑_i ∈ℐ_b∑_j=1^N∫q(θ_dyn, θ_dec, _1:B) ln[p(_i^j | _b, θ_dyn, θ_dec)]dθ_dyn dθ_dec d_1:B = ∑_b=1^B∑_i ∈ℐ_b∑_j=1^N∫q(θ_dyn, θ_dec, _b) ln[p(_i^j | _b, θ_dyn, θ_dec)]dθ_dyn dθ_dec d_b = ∑_b=1^B∑_i ∈ℐ_b∑_j=1^N𝔼_q(θ_dyn, θ_dec, _b)ln[p(_i^j | _b, θ_dyn, θ_dec)]. ℒ_2 = ∫q(θ_dyn, θ_dec, _1:B) lnq(_1:B)/p(_1:B | θ_dyn)dθ_dyndθ_dec d_1:B = ∫q(θ_dyn, θ_dec, _1:B) ln[q(_1)/p(_1)∏_b=2^Bq(_b)/p(_b|_b-1, θ_dyn)]dθ_dyndθ_dec d_1:B = ∫q(θ_dyn, θ_dec, _1:B) ln[∏_j=1^Nq(_1^j)/p(_1^j)]dθ_dyndθ_dec d_1:B + ∫q(θ_dyn, θ_dec, _1:B) ln[∏_b=2^B∏_j=1^Nq(_b^j)/p(_b^j|_b-1, θ_dyn)]dθ_dyndθ_dec d_1:B = ∑_j=1^N∫q(θ_dyn, θ_dec, _1:B) ln[q(_1^j)/p(_1^j)]dθ_dyndθ_dec d_1:B + ∑_b=2^B∫q(θ_dyn, θ_dec, _1:B) ∑_j=1^Nln[q(_b^j)/p(_b^j|_b-1, θ_dyn)]dθ_dyndθ_dec d_1:B = ∑_j=1^N∫q(_1^j) ln[q(_1^j)/p(_1^j)]d_1^j + ∑_b=2^B∫q(θ_dyn, _b-1, _b) ∑_j=1^Nln[q(_b^j)/p(_b^j|_b-1, θ_dyn)]dθ_dyn d_b-1 d_b = ∑_j=1^N∫q(_1^j) ln[q(_1^j)/p(_1^j)]d_1^j + ∑_b=2^B∫q(θ_dyn, _b-1) ∑_j=1^N[ ∫ q(_b^j) lnq(_b^j)/p(_b^j|_b-1, θ_dyn)d_b^j]dθ_dyn d_b-1 = ∑_j=1^NKL( q(_1^j) ‖ p(_1^j) ) + ∑_b=2^B𝔼_q(θ_dyn, _b-1)[ ∑_j=1^NKL( q(_b^j) ‖ p(_b^j|_b-1, θ_dyn) ) ], where KL is Kullback–Leibler (KL) divergence. Both of the KL divergences above have a closed form but the expectation w.r.t. q(θ_dyn, _b-1) does not. ℒ_3 = KL(q(θ_dyn) ‖ p(θ_dyn)), ℒ_4 = KL(q(θ_dec) ‖ p(θ_dec)). §.§ Computation of ELBO. We compute the ELBO using the following algorithm: * Sample θ_dyn, θ_dec from q_ψ_dyn(θ_dyn), q_ψ_dec(θ_dec). * Sample _1:B by sampling each _b^j from q_ψ_b^j(_b^j) with ψ_b^j = h_θ_enc([t_[b], _j]). * Compute _1:M from _1:B as in Equations <ref>-<ref>. * Compute ELBO ℒ (KL terms are computed in closed form, for expectations we use Monte Carlo integration with one sample). Sampling is done using reparametrization to allow unbiased gradients w.r.t. the model parameters. § APPENDIX C §.§ Datasets. Shallow Water. The shallow water equations are a system of partial differential equations (PDEs) that simulate the behavior of water in a shallow basin. 
These equations are effectively a depth-integrated version of the Navier-Stokes equations, assuming the horizontal length scale is significantly larger than the vertical length scale. Given these assumptions, they provide a model for water dynamics in a basin or similar environment, and are commonly utilized in predicting the propagation of water waves, tides, tsunamis, and coastal currents. The state of the system modeled by these equations consists of the wave height h(t, x, y), velocity in the x-direction u(t, x, y) and velocity in the y-direction v(t, x, y). Given an initial state (h_0, u_0, v_0), we solve the PDEs on a spatial domain Ω over time interval [0, T]. The shallow water equations are defined as: ∂ h/∂ t + ∂ (hu)/∂ x + ∂ (hv)/∂ y = 0, ∂ u/∂ t + u∂ u/∂ x + v∂ u/∂ y + g∂ h/∂ x = 0, ∂ v/∂ t + u∂ v/∂ x + v∂ v/∂ y + g∂ h/∂ y = 0, where g is the gravitational constant. We set the spatial domain Ω to be a unit square and use periodic boundary conditions. We set T=0.1. The solution is evaluated at randomly selected spatial locations and time points. We use 1089 spatial locations and 25 time points. The spatial end temporal grids are the same for all trajectories. Since we are dealing with partially-observed cases, we assume that we observe only the wave height h(t,x,y). For each trajectory, we start with zero initial velocities and the initial height h_0(x,y) generated as: h̃_0(x, y) = ∑_k,l = -N^Nλ_klcos(2π (kx+ly)) + γ_klsin(2π (kx+ly)), h_0(x, y) = 1 + h̃_0(x, y) - min(h̃_0)/max(h̃_0) - min(h̃_0), where N = 3 and λ_kl, γ_kl∼𝒩(0, 1). The datasets used for training, validation, and testing contain 60, 20, and 20 trajectories, respectively. We use scikit-fdiff <cit.> to solve the PDEs. Navier-Stokes. For this dataset we model the propagation of a scalar field (e.g., smoke concentration) in a fluid (e.g., air). The modeling is done by coupling the Navier-Stokes equations with the Boussinesq buoyancy term and the transport equation to model the propagation of the scalar field. The state of the system modeled by these equations consists of the scalar field c(t,x,y), velocity in x-direction u(t,x,y), velocity in y-direction v(t,x,y), and pressure p(t,x,y). Given an initial state (c_0, u_0, v_0, p_0), we solve the PDEs on a spatial domain Ω over time interval [0, T]. The Navier-Stokes equations with the transport equation are defined as: ∂ u/∂ x + ∂ v/∂ y = 0, ∂ u/∂ t + u ∂ u/∂ x + v ∂ u/∂ y = - ∂ p/∂ x + ν( ∂^2 u/∂ x^2 + ∂^2 u/∂ y^2), ∂ v/∂ t + u ∂ v/∂ x + v ∂ v/∂ y = - ∂ p/∂ y + ν( ∂^2 v/∂ x^2 + ∂^2 v/∂ y^2) + c, ∂ c/∂ t = - u ∂ c/∂ x - v ∂ c/∂ y + ν( ∂^2 c/∂ x^2 + ∂^2 c/∂ y^2), where ν = 0.002. We set the spatial domain Ω to be a unit square and use periodic boundary conditions. We set T=2.0, but drop the first 0.5 seconds due to slow dynamics during this time period. The solution is evaluated at randomly selected spatial locations and time points. We use 1089 spatial locations and 25 time points. The spatial and temporal grids are the same for all trajectories. Since we are dealing with partially-observed cases, we assume that we observe only the scalar field c(t,x,y). For each trajectory, we start with zero initial velocities and pressure, and the initial scalar field c_0(x,y) is generated as: c̃_0(x, y) = ∑_k,l = -N^Nλ_klcos(2π (kx+ly)) + γ_klsin(2π (kx+ly)), c_0(x, y) = c̃_0(x, y) - min(c̃_0)/max(c̃_0) - min(c̃_0), where N = 2 and λ_kl, γ_kl∼𝒩(0, 1). The datasets used for training, validation, and testing contain 60, 20, and 20 trajectories, respectively. 
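Both synthetic datasets draw their initial fields (the wave height h_0 and the scalar field c_0 described above) from the same truncated random Fourier construction. A small NumPy sketch of that sampler is given below; the grid resolution and random seed are illustrative choices.

```python
# Sketch of the random initial-field construction used for both the Shallow
# Water height h_0 and the Navier-Stokes scalar field c_0: a truncated random
# Fourier series rescaled to a convenient range.
import numpy as np

def random_initial_field(grid_size=64, n_modes=3, low=0.0, high=1.0, seed=0):
    rng = np.random.default_rng(seed)
    xs = np.linspace(0.0, 1.0, grid_size, endpoint=False)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    field = np.zeros_like(X)
    for k in range(-n_modes, n_modes + 1):
        for l in range(-n_modes, n_modes + 1):
            lam, gam = rng.normal(), rng.normal()
            phase = 2 * np.pi * (k * X + l * Y)
            field += lam * np.cos(phase) + gam * np.sin(phase)
    # rescale: the text uses [1, 2] for h_0 (N = 3) and [0, 1] for c_0 (N = 2)
    field = (field - field.min()) / (field.max() - field.min())
    return low + (high - low) * field

h0 = random_initial_field(n_modes=3, low=1.0, high=2.0)   # Shallow Water height
c0 = random_initial_field(n_modes=2, low=0.0, high=1.0)   # Navier-Stokes scalar
print(h0.shape, h0.min(), h0.max())
```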
We use PhiFlow <cit.> to solve the PDEs. Scalar Flow. r0.2 < g r a p h i c s > Spatial grid used for Scalar Flow dataset. This dataset, proposed by <cit.>, consists of observations of smoke plumes rising in hot air. The observations are post-processed camera images of the smoke plumes taken from multiple views. For simplicity, we use only the front view. The dataset contains 104 trajectories, where each trajectory has 150 time points and each image has the resolution 1080 × 1920. To reduce dimensionality of the observations we sub-sample the original spatial and temporal grids. For the temporal grid, we remove the first 50 time points, which leaves 100 time points, and then take every 4th time point, thus leaving 20 time points in total. The original 1080 × 1920 spatial grid is first down-sampled by a factor of 9 giving a new grid with resolution 120 × 213, and then the new grid is further sub-sampled based on the smoke density at each node. In particular, we compute the average smoke density at each node (averaged over time), and then sample the nodes without replacement with the probability proportional to the average smoke density (thus, nodes that have zero density most of the time are not selected). See example of a final grid in Figure <ref>. This gives a new grid with 1089 nodes. We further smooth the observations by applying Gaussian smoothing with the standard deviation of 1.5 (assuming domain size 120 × 213). We use the first 60 trajectories for training, next 20 for validation and next 20 for testing. §.§ Model architecture and hyper-parameters. Dynamics function. For all datasets we define F_θ_dyn as an MLP. For Shallow Water/Navier-Stokes/Scalar Flow we use 1/3/3 hidden layers with the size of 1024/512/512, respectively. We use ReLU nonlinearities. Observation function. For all datasets we define g_θ_dec as a selector function which takes the latent state (t, x) ∈ℝ^d and returns its first component. Encoder. Our encoder h_θ_enc consists of three function: h_θ_spatial, h_θ_temporal, and h_θ_read. The spatial aggregation function h_θ_spatial is a linear mapping to ℝ^128. The temporal aggregation function h_θ_temporal is a stack of transformer layers with temporal attention and continuous relative positional encodings <cit.>. For all datasets, we set the number of transformer layers to 6. Finally, the variational parameter readout function h_θ_read is a mapping defined as ψ_b^j = h_θ_read(α_[b]^T) = [ γ_b^j; τ_b^j ]= [ Linear(α_[b]^T); exp(Linear(α_[b]^T)) ], where Linear is a linear layer (different for each line), and γ_b^j and τ_b^j are the variational parameters discussed in Appendix A. Spatial and temporal neighborhoods. We use the same spatial neighborhoods 𝒩_S() for both the encoder and the dynamics function. We define 𝒩_S() as the set of points consisting of the point and points on two concentric circles centered at , with radii r and r/2, respectively. Each circle contains 8 points spaced 45 degrees apart (see Figure <ref> (right)). The radius r is set to 0.1. For Shallow Water/Navier-Stokes/Scalar Flow the size of temporal neighborhood (δ_T) is set to 0.1/0.1/0.2, respectively. Multiple Shooting. For Shallow Water/Navier-Stokes/Scalar Flow we split the full training trajectories into 4/4/19 sub-trajectories, or, equivalently, have the sub-trajectory length of 6/6/2. §.§ Training, validation, and testing setup. Data preprocessing. We scale the temporal grids, spatial grids, and observations to be within the interval [0, 1]. Training. 
We train our model for 20000 iterations using Adam <cit.> optimizer with constant learning rate 3e-4 and linear warmup for 200 iterations. The latent spatiotemporal dynamics are simulated using differentiable ODE solvers from the torchdiffeq package <cit.> (we use dopri5 with rtol=1e-3, atol=1e-4, no adjoint). The batch size is 1. Validation. We use validation set to track the performance of our model during training and save the parameters that produce the best validation performance. As performance measure we use the mean absolute error at predicting the full validation trajectories given some number of initial observations. For Shallow Water/Navier-Stokes/Scalar Flow we use the first 5/5/10 observations. The predictions are made by taking one sample from the posterior predictive distribution (see Appendix C.4 for details). Testing. Testing is done similarly to validation, except that as the prediction we use an estimate of the expected value of the posterior predictive distribution (see Appendix C.4 for details). §.§ Forecasting. Given initial observations _1:m at time points t_1:m, we predict the future observation _n at a time point t_n > t_m as the expected value of the approximate posterior predictive distribution: p(_n | _1:m, _1:M) ≈∫ p(_n | _m, θ_dyn, θ_dec) q(_m) q(θ_dyn) q(θ_dec) d_m dθ_dyn dθ_dec. The expected value is estimated via Monte Carlo integration, so the algorithm for predicting _n is: * Sample θ_dyn, θ_dec from q(θ_dyn), q(θ_dec). * Sample _m from q(_m) = ∏_j=1^Nq_ψ_m^j(_m^j), where the variational parameters ψ_m^j are given by the encoder h_θ_enc operating on the initial observations _1:m as ψ_m^j = h_θ_enc([t_m, _j]). * Compute the latent state (t_n) = (t_n; t_m, _m, θ_dyn). * Sample _n by sampling each _n^j from 𝒩(_n^j | g_θ_dec((t_n, _j))), σ_u^2 I). * Repeat steps 1-4 n times and average the predictions (we use n=10). §.§ Model comparison setup. DINo. We use the official implementation of DINo <cit.>. The encoder is an MLP with 3 hidden layers, 512 neurons each, and Swish non-linearities. The code dimension is 100. The dynamics function is an MLP with 3 hidden layers, 512 neurons each, and Swish non-linearities. The decoder has 3 layers and 64 channels. MAgNet. We use the official implementation of MAgNet <cit.>. We use the graph neural network variant of the model. The number of message-passing steps is 5. All MLPs have 4 layers with 128 neurons each in each layer. The latent state dimension is 128. § APPENDIX D §.§ Spatiotemporal neighborhood shapes and sizes. Here we investigate the effect of changing the shape and size of spatial and temporal neighborhoods used by the encoder and dynamics functions. We use the default hyperparameters discussed in Appendix C and change only the neighborhood shape or size. A neighborhood size of zero implies no spatial/temporal aggregation. Initially, we use the original circular neighborhood displayed in Figure <ref> for both encoder and dynamics function and change only its size (radius). The results are presented in Figures <ref> and <ref>. In Figure <ref>, it is surprising to see very little effect from changing the encoder's spatial neighborhood size. A potential explanation is that the dynamics function shares the spatial aggregation task with the encoder. However, the results in Figure <ref> are more intuitive, displaying a U-shaped curve for the test MAE, indicating the importance of using spatial neighborhoods of appropriate size. 
Interestingly, the best results tend to be achieved with relatively large neighborhood sizes. We then examine the effect of changing the shape of the dynamics function's spatial neighborhood. We use n-circle neighborhoods, which consist of n equidistant concentric circular neighborhoods (see examples in Figure <ref>). Effectively, we maintain a fixed neighborhood size while altering its density. The results can be seen in Figure <ref>. We find that performance does not significantly improve when using denser (and presumably more informative) spatial neighborhoods, indicating that accurate predictions only require a relatively sparse neighborhood of appropriate size. §.§ Multiple shooting. Here we demonstrate the effect of using multiple shooting for model training. In Figure <ref> (left), we vary the sub-trajectory length (longer sub-trajectories imply more difficult training) and plot the test errors for each sub-trajectory length. We observe that in all cases, the best results are achieved when the sub-trajectory length is considerably smaller than the full trajectory length. In Figure <ref> (right) we further show the training times, and as can be seen, multiple shooting noticeably reduces the training time. § APPENDIX E Noisy Data. Here we show the effect of observation noise on our model and compare the results against other models. We train all models with data noise of various strengths, and then compute test MAE on noiseless data (we still use noisy data to infer the initial state at test time). Figure <ref> shows that our model can manage noise strength up to 0.1 without significant drops in performance. Note that all observations are in the range [0, 1].
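As a closing illustration of how the trained model is used, the following schematic NumPy sketch mirrors the forecasting procedure of Appendix C.4: sample the initial latent state from its approximate posterior, roll it forward, decode, and average over Monte Carlo samples. The toy dynamics, the selector-style decoder, and the Gaussian posterior parameters below are stand-ins for the learned components.

```python
# Schematic sketch of posterior-predictive forecasting: average decoded
# forecasts over reparameterized samples of parameters and initial states.
import numpy as np

rng = np.random.default_rng(2)
N, d, D = 50, 3, 1                       # grid nodes, latent dim, observed dim
n_samples = 10                           # Monte Carlo samples (n = 10 in the text)

# toy approximate posterior over the initial latent state: a diagonal Gaussian
mean_s = rng.normal(size=(N, d))
std_s = 0.05 * np.ones((N, d))

def dynamics_step(z, dt=0.05, steps=20):
    # stand-in for ODESolve with the learned dynamics F_theta_dyn
    for _ in range(steps):
        z = z + dt * np.tanh(z.mean(axis=0) - z)
    return z

def decode(z):
    # stand-in for g_theta_dec; a selector that reads out the first channel(s)
    return z[:, :D]

preds = []
for _ in range(n_samples):
    z0 = mean_s + std_s * rng.normal(size=(N, d))   # reparameterized sample
    zT = dynamics_step(z0)
    preds.append(decode(zT))
forecast = np.mean(preds, axis=0)        # estimate of the predictive mean
print(forecast.shape)                    # (N, D)
```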
http://arxiv.org/abs/2307.03889v1
20230708034331
Quantum techniques for eigenvalue problems
[ "Dean Lee" ]
quant-ph
[ "quant-ph", "cond-mat.quant-gas", "nucl-th" ]
Quantum techniques for eigenvalue problems
Dean Lee (email: [email protected]), Facility for Rare Isotope Beams and Department of Physics and Astronomy, Michigan State University, East Lansing, 48824 MI, USA.

This article is a brief introduction to quantum algorithms for the eigenvalue problem in quantum many-body systems. Rather than a broad survey of topics, we focus on providing a conceptual understanding of several quantum algorithms that cover the essentials of adiabatic evolution, variational methods, phase detection algorithms, and several other approaches. For each method, we discuss the potential advantages and remaining challenges.

§ INTRODUCTION Quantum computing has the potential to address many of the unsolved problems of quantum many-body physics. By allowing for arbitrary linear combinations of tensor products of qubits, one can store exponentially more information than classical bits. This opens the possibility of calculations of strongly-interacting systems with many degrees of freedom without the need for Monte Carlo methods and their accompanying problems associated with sign oscillations <cit.>. Furthermore, qubits naturally evolve with unitary real-time dynamics, providing access to non-equilibrium processes, which are often well beyond the reach of first-principles calculations using classical computers. But there are also great challenges to realizing the promise of quantum computing. One of the main problems is the fact that the quantum computing devices available today have significant limitations due to gate errors, qubit decoherence, faulty measurement readout, small numbers of qubits, and limited qubit connectivity. These problems severely limit the class of problems that one can address at present. Nevertheless, significant advances are being made in quantum hardware performance and scale <cit.>, and it is useful to consider the design and performance of quantum algorithms as quantum resources grow and become more reliable. There is an excellent and comprehensive review on quantum computing and quantum many-body systems in Ref. <cit.>. Instead of writing another review with similarly broad scope, in this article we focus on several algorithms of interest for eigenvalue problems. The aim is to provide a readable introduction for novice readers with enough detail to demonstrate the concepts and execution of each method. We should note that there are many useful algorithms of relevance to eigenvalue problems that we do not cover here. These include cooling algorithms <cit.>, coupled heat bath approaches <cit.>, dissipative open system methods <cit.>, spectral combing <cit.>, symmetry projection techniques <cit.>, linear combinations of unitaries <cit.>, and imaginary time evolution <cit.>. In the following, we start with a review of the adiabatic theorem and the performance of adiabatic evolution for the preparation of eigenstates. After this, we cover the broad class of variational methods. We discuss gradient calculation techniques for optimization and several specific variational algorithms. Thereafter we present several phase detection algorithms. These include phase estimation, iterative phase estimation, and the rodeo algorithm. We then conclude with a summary and outlook for the future.
§ ADIABATIC EVOLUTION The adiabatic theorem states that if a quantum state is an eigenstate of an initial Hamiltonian H(0) = H_0, then the quantum state will remain trapped in an exact eigenstate of the instantaneous Hamiltonian H(t) in the limit that the time dependence of H(t) is infinitely slow <cit.>. If this evolution has only finite duration, then the error will scale inversely with the total time evolution, T. We can use quantum adiabatic evolution to prepare the eigenstates of any Hamiltonian H_1 by preparing an exact eigenstate of some simple initial Hamiltonian H_0. For the purpose of analysis, it is convenient to scale out the dependence on the total duration of time T and work with the rescaled variable s = t/T. We then make a smooth interpolation H(s) with s ranging from s= 0 to s=1, with H(0)=H_0 and H(1)=H_1 <cit.>. Let us define the adiabatic evolution operator U(s) = Texp[-i T∫_0^s H(s') ds' ], where T indicates time ordering where operators at later times are placed on the left. In the limit of large time T, the unitary transformation U(1) will map any eigenstate of H_0 to an eigenstate of H_1. In Ref. <cit.>, it is observed that the unitarily-transformed Hamiltonian, H'(1) = U^†(1)H_1 U(1), is a Hamiltonian whose eigenvalues are equal to H_1 but whose eigenvectors are equal to H_0. For this reason, the term “Hamiltonian translator” was used to describe the unitary transformation U(1). Suppose we start from the Hamiltonian H(0) and perform a perturbation theory expansion in the difference, H'(1)-H(0), H'(1) = H(0) + [H'(1)-H(0)]. Since H(0) and H'(1) share the same eigenvectors, we find that first-order perturbation for the energy is exact and all other terms in perturbation theory for the energy or wave function vanish. Let us now consider the one-parameter eigenvector |ψ(s)⟩, which is an instantaneous eigenvector of H(s) for s in the interval [0,1]. Let Δ(s) be the spectral gap between |ψ(s)⟩ and the rest of the energy spectrum of H(s). In computing the spectral gap, we can ignore sectors that are orthogonal to |ψ(s)⟩ due to symmetries that are respected by H(s). We note that |ψ(0)⟩ is an eigenstate of H_0. Let us define |ψ_U(s)⟩ as U(s)|ψ(0)⟩. We use the symbol || · || to denote the operator norm. Building upon the work of Ref. <cit.>, Jansen et al. <cit.> derived the rigorous bound that T ≥1/δ{∫^s_0 [ || ∂_s^2 H(s')||/Δ^2(s) + 7 || ∂_s H(s')||^2/Δ^3(s)] ds' + B } is sufficient to satisfy the error bound |⟨ψ(s)|ψ_U(s)||⟩≥ 1-δ, where B is a boundary term that vanishes when ∂_s H(0) and ∂_s H(1) both equal zero <cit.>. We see from Eq. (<ref>) that, for any fixed system, the required time T is scaling inversely with the error δ. The challenge with adiabatic state preparation for eigenstates of quantum many-body systems is the fact that Δ(s) may be extremely small for large systems. This is especially true when H_0 and H_1 have very different eigenstates, and H(s) must pass through one or more quantum phase transitions. This motivates the search for initial Hamiltonians H_0 for which the starting eigenstate can be prepared on a quantum computer, but the eigenstate structure of H_0 is not completely trivial and has some resemblance to that of H_1 <cit.>. Even in cases where |ψ_U(1)⟩ is not a good approximation to the eigenstate |ψ(1)⟩ of H_1, the state |ψ_U(1)⟩ can still be a useful starting vector for other state preparation algorithms which converge more rapidly. In order to perform the time evolution in Eq. 
(<ref>), one usually uses some version of the Trotter approximation. The conceptual starting point for the Trotter approximation is the Baker-Campbell-Hausdorff formula, which states that when e^Ae^B = e^C, we have the formal series C = A + B + 1/2[A,B] + 1/12[A,[A,B]] - 1/12[B,[A,B]] + ⋯. If our Hamiltonian has two non-commuting pieces, H = H_A + H_B. At first order in the Trotter-Suzuki expansion, we can use <cit.> e^-iHΔ t = e^-iH_AΔ te^-iH_BΔ t + O[(Δ t)^2] = e^-iH_BΔ te^-iH_AΔ t + O[(Δ t)^2]. At second order we have e^-iHΔ t = e^-iH_BΔ t/2e^-iH_AΔ te^-iH_BΔ t/2 + O[(Δ t)^3] = e^-iH_AΔ t/2e^-iH_BΔ t e^-iH_AΔ t/2+ O[(Δ t)^3]. The generalization to higher-order expressions can be found in Ref. <cit.>. The performance of the Trotter-Suzuki expansion can be improved in numerous ways, such as using random orderings <cit.>, sums of Trotter products at different orders <cit.>, extrapolation methods <cit.>, and renormalization <cit.>. § VARIATIONAL METHODS Variational quantum algorithms encompass a broad class of methods that are among the most popular approaches to the preparation of eigenstates using current and near-term quantum hardware. While the examples we consider here are optimizing a single vector, there are also many different variational methods that use subspaces <cit.>. The typical strategy is a hybrid approach where the quantum device is used to prepare a parameterized family of possible wave functions, and then a classical computation is performed to minimize the associated cost function. Let θ be an L-dimensional vector of parameters θ_j. The most common example is the search for the ground state of a quantum Hamiltonian H by minimizing a cost function C(θ) given by the energy expectation value C(θ) = ⟨θ|H|θ|$⟩<cit.>. We consider a general ansatz for the wave function|θ⟩that is a product of unitary operators acting upon some simple initial state|ψ_I⟩<cit.>, |θ⟩ = V_L U_L(θ_L) ⋯ V_1 U_1(θ_1)|ψ_I⟩. where eachV_jis a fixed unitary operator. It is convenient to take eachU_j(θ_j)as an exponential of a Hermitian operatorH_j, U_j(θ_j) = exp(-iH_jθ_j/2), where we restrictH_jto be its own inverse so thatH_j^2 = I. This involutory condition is satisfied by any product of Pauli matrices on any multi-qubit system. In such cases, we have the simple trigonometric relation, U_j(θ_j) = cos(θ_j/2)I -i sin(θ_j/2)H_j. For any operatorO, we find that U_j^†(θ_j) O U_j(θ_j) = O_1 + O_sinsin(θ_j) + O_coscos(θ_j), for some operatorsO_1,O_sin, andO_cosindependent ofθ_j. It follows that <cit.> ∂/∂θ_j U_j^†(θ_j) O U_j(θ_j) = O_sincos(θ_j) - O_cossin(θ_j) = U_j^†(θ_j+α_j) O U_j(θ_j+α_j) - U_j^†(θ_j-α_j) O U_j(θ_j-α_j)/2 sin(α_j), for anyα_jsuch thatsin(α_j)0. We note that this parameter shift formula is exact and not simply a finite-difference approximation. This allows values forα_jofO(1), which is helpful to measure gradient components in the presence of stochastic and systematic errors. One can now compute the components of the gradient using ∂/∂θ_jC(θ ) = C(θ+ α_j)-C(θ- α_j)/2 sin(α_j), where the vectorα_jhas components[α_j]_k = α_j δ_jk. These gradients can be used to minimize the cost functionC(θ). Consider adiabatic evolution with initial HamiltonianH_0, final HamiltonianH_1, and interpolating HamiltonianH(s) = sH_1 + (1-s)H_0. We then have a string of exponentials for our adiabatic evolution operator, U(1) = e^-iH(1)ds⋯ e^-iH(s)ds⋯ e^-iH(0)ds, which we apply to the ground state ofH_0. LetNbe the number of time steps and letds = 1/(N+1). 
If we now use the Trotter approximation to write e^-iH(s)ds = e^-i[sH_1 + (1-s)H_0]ds≈ e^-isH_1dse^-i(1-s)H_0ds, then the adiabaic evolution operator has the form U(1) ≈ e^-iγ_NH_1dse^-iβ_NH_0ds⋯ e^-iγ_jH_1dse^-iβ_jH_0ds⋯ e^-iγ_0H_1dse^-iβ_0H_0ds, forβ_j = 1 - jdsandγ_j = jds. This structure provides the theoretical motivation for the quantum approximate optimization algorithm (QAOA) <cit.>. Instead of using the values forγ_jandβ_jas prescribed by adiabatic evolution, they are treated as free variational parameters optimized to minimize the energy expectation of the HamiltonianH_1. For large quantum systems, the required number of variational parameters will grow with system size. The number of variational parameters needed as a function of the size of the system with fixed error tolerance remains an open question. There are at least two major challenges that arise in quantum variational algorithms for large systems. The first challenge is the problem of barren plateaus. For parameterized random quantum circuits, the components of the cost function gradient will become exponentially small in the number of qubits of the quantum system <cit.>. The second challenge is the appearance of many local minima, making gradient descent optimization difficult. Before discussing the problem of local minima, we first review some terminology from computational complexity theory. A decision problem is one where the two possible answers are yes or no. P refers to the set of decision problems that can be solved using a deterministic Turing machine in polynomial time. NP refers to the set of decision problems whose solution, once given, can be confirmed by a deterministic Turing machine in polynomial time. Equivalently, NP refers to decision problems that can be solved using a non-deterministic Turing machine in polynomial time, where a general non-deterministic Turing machine is endowed with the ability to branch over all possible outcomes in parallel. A problempis NP-hard if all problems in NP can be obtained in polynomial time from the solution ofp. If a decision problem in NP is NP-hard, then it is called NP-complete. Consider a graph withdvertices and an adjacency matrixA_i,jmarking the edges of the graph that equal0or1for each pair of vertices{i,j}. The MaxCut problem poses the task of finding the subsetSof the vertices that maximizes the number of edges connectingSand its complement, ∑_i∈ S∑_j ∉ S A_i,j. The MaxCut problem was shown to be NP-complete <cit.>. The continuous MaxCut problem consists of finding thed-dimensional vectorϕ = [0,2π)^dthat minimizes μ(ϕ) = 1/4∑_i=1^d ∑_j=1^d A_i,j[cos(ϕ_i)cos(ϕ_j)-1] . In Ref. <cit.>, the continuous MaxCut problem is shown to be equivalent to the MaxCut problem and therefore also NP-hard. Furthermore, the continuous MaxCut problem can also be recast as a variational quantum optimization problem for the Ising model Hamiltonian, 1/4∑_i=1^d ∑_j=1^d A_i,j (σ^z_i σ^z_j - 1), with variational wave function e^-i σ^y_dϕ_d/2⋯ e^-i σ^y_1ϕ_1/2|0⟩^⊗ d. Although there is no proof that NP contains problems outside of P, there is much speculation that this is true. NP-hard problems would then belong to the set of difficult problems outside of P, and this would include the problem of minimizing the variational cost function for an Ising Hamiltonian. 
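The parameter-shift rule from the previous section can be tried directly on the continuous MaxCut cost μ(ϕ), since the product ansatz of single-qubit y-rotations makes the expectation value a single sinusoid in each angle. The sketch below, with an illustrative random graph and shift α = π/2, checks the shift-rule gradient against the analytic one and then runs a few gradient-descent steps; it is a classical simulation of the cost function, not a circuit execution.

```python
# Exact parameter-shift gradients for the continuous MaxCut cost
# mu(phi) = (1/4) * sum_ij A_ij (cos(phi_i) cos(phi_j) - 1).
import numpy as np

rng = np.random.default_rng(3)
d = 8                                            # number of vertices / qubits
A = rng.integers(0, 2, size=(d, d))
A = np.triu(A, 1)
A = A + A.T                                      # symmetric adjacency, zero diagonal

def cost(phi):
    c = np.cos(phi)
    return 0.25 * np.sum(A * (np.outer(c, c) - 1.0))

def parameter_shift_grad(phi, alpha=np.pi / 2):
    grad = np.empty_like(phi)
    for j in range(len(phi)):
        shift = np.zeros_like(phi)
        shift[j] = alpha
        grad[j] = (cost(phi + shift) - cost(phi - shift)) / (2.0 * np.sin(alpha))
    return grad

phi = 2 * np.pi * rng.random(d)

# the shift rule is exact here: it reproduces the analytic gradient
analytic = -0.5 * np.sin(phi) * (A @ np.cos(phi))
assert np.allclose(parameter_shift_grad(phi), analytic)

# a few steps of gradient descent on the variational cost
for _ in range(200):
    phi -= 0.1 * parameter_shift_grad(phi)
print("final cost:", cost(phi))
```

On hardware, cost(phi ± shift) would be estimated from measurements of the Ising energy rather than evaluated in closed form, which is exactly where the O(1) shift angle helps against shot noise.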
Although the general performance of variational methods for large quantum systems is challenging, there are many cases in which major simplifications arise because of some simplification, such as the emergence of a mean-field picture. There are many examples of such approaches for fermionic quantum many-body systems <cit.>. One popular example is the unitary coupled cluster (UCC) method. In the UCC method, one starts with an initial state |ψ_I⟩, which is a mean-field reference state. For the unitary transformation, U, we take the form U = e^T(θ) - T^†(θ), where T(θ) = ∑_m T_m(θ), and T_m(θ) is an m-body operator that produces excitations. The singles excitation has the form, T_1(θ) = ∑_i ∑_a θ^i_a a^†_a a_i, where a^†_a and a_i and fermionic creation and annihilation operators for orbitals a and i respectively. The doubles terms has the structure, T_2(θ) = 1/4∑_i<j∑_a<bθ^i,j_a,b a^†_a a^†_b a_j a_i. For the general case, we have T_m(θ) = 1/(m!)^2∑_i<j<⋯∑_a<b<⋯θ^i,j,⋯_a,b,⋯ a^†_a a^†_b ⋯ a_j a_i. There are several ways to encode ferimonic antisymmetrization properties on a quantum computer. Although often not the most efficient, the simplest approach is the Jordan-Wigner transformation <cit.>. We define σ^+_j = (σ^x_j + i σ^y_j)/2, σ^-_j = (σ^x_j - i σ^y_j)/2, and use the convention that |0⟩ corresponds to occupation number 0, and |1⟩ corresponds to occupation number 1. We then have a faithful representation of the algebra of creation and annihilation operators with the mapping a^†_j = σ^-_j ⊗σ^z_j-1⊗⋯⊗σ^z_1, a_j = σ^+_j ⊗σ^z_j-1⊗⋯⊗σ^z_1. This gives the required anticommutation relations, {a_j,a^†_k} = δ_j,k, { a_j,a_k } = { a^†_j,a^†_k } = 0. Many other antisymmetrization techniques <cit.> have been designed that are computationally more efficient in cases where the products of creation and annihilation operators in the Hamiltonian appear in combinations with some locality restriction with respect to the orbital index. A convenient choice for the mean-field reference state |ψ_I⟩ is a Hartree-Fock state, corresponding to a Slater determinant of single-particle orbitals achieving the lowest energy expectation value. The Thouless theorem <cit.> shows how to prepare any desired Slater determinant state starting from any other Slater determinant state. Let α_p( r) label the original orbitals and let β_p( r) label the new orbitals. We take a^†_p, a_p to be the creation and annihilation operators for α_p( r), and b^†_p, b_p to be the creation and annihilation operators for β_p( r). Let N be the number of particles in our system of interest. The aim is to derive a simple relation between b^†_N ⋯ b^†_1 | vac⟩ and a^†_N ⋯ a^†_1 | vac⟩. Without loss of generality, we use a linear transformation to redefine the orbitals β_1( r), ⋯, β_N( r) so that for each p = 1, ⋯, N, we have b^†_p = a^†_p + ∑_q=N+1^∞ a^†_q u_q,p for some coefficient matrix u_q,p. The linear transformation on β_1( r), ⋯, β_N( r) has no effect on b^†_N ⋯ b^†_1 | vac⟩ except for introducing an overall normalization factor. Our convention will ensure that b^†_N ⋯ b^†_1 | vac⟩ and a^†_N ⋯ a^†_1 | vac⟩ have the same normalization. The Thouless theorem is based on the observation that for each p = 1, ⋯, N, ( a^†_p + ∑_q=N+1^∞ a^†_q u_q,p) F( no a^†_p) | vac⟩ = ( 1 + ∑_q=N+1^∞ a^†_q u_q,p a_p ) a^†_p F( no a^†_p) | vac⟩, where F( no a^†_p) is an arbitrary function of the creation and annihilation operators where a^†_p does not appear. 
We then have b^†_N ⋯ b^†_1 | vac⟩ = ( a^†_N + ∑_q=N+1^∞ a^†_q u_q,N) ⋯( a^†_1 + ∑_q=N+1^∞ a^†_q u_q,1) | vac⟩ = ( 1 + ∑_q=N+1^∞ a^†_q u_q,N a_N ) a^†_N ⋯( 1 + ∑_q=N+1^∞ a^†_q u_q,1 a_1 ) a^†_1 | vac⟩. This leads to the simple relation, b^†_N ⋯ b^†_1 | vac⟩ = ( 1 + ∑_q=N+1^∞ a^†_q u_q,N a_N ) ⋯( 1 + ∑_q=N+1^∞ a^†_q u_q,1 a_1 ) a^†_N ⋯ a^†_1 | vac⟩ = exp( ∑_p=1^N ∑_q=N+1^∞ a^†_q u_q,p a_p ) a^†_N ⋯ a^†_1 | vac⟩. Once the Hartree-Fock orbitals are determined using classical computing, one can prepare a simple N-particle Slater determinant state with orbitals given by the computational basis of the quantum computer and then apply the transformation in Eq. (<ref>) <cit.>. § PHASE DETECTION ALGORITHMS Quantum phase estimation <cit.> is a well-known example of a phase detection algorithm that can be used to find energy eigenvalues and prepare energy eigenstates of the quantum many-body problem <cit.>. Suppose for the moment that |ψ⟩ is an eigenstate of the unitary operator U with eigenvalue e^2π iθ. Of particular interest is the case where the unitary operator U is the time evolution operator for some Hamiltonian H over some fixed time step Δ t. The goal is to efficiently determine the phase angle θ. Since U|ψ⟩=e^2π i θ|ψ⟩, we have U^2^j|ψ⟩ = e^2π i θ 2^j|ψ⟩ for any nonnegative integer j. Together with the state |ψ⟩, we take n ancilla qubits with each initialized as |0⟩. The resulting state is |0⟩^⊗ n⊗|ψ⟩. The Hadamard gate is a single qubit gate that maps |0⟩ to 1/√(2)(|0⟩ + |1⟩) and maps |1⟩ to 1/√(2)(|0⟩ - |1⟩). The action of the Hadamard gate for a general linear combination of |0⟩ and |1⟩ is determined by linearity. We apply Hadamard gates to each of the ancilla qubits so that we get 1/2^n/2( |0⟩ + |1⟩)^⊗ n⊗|ψ⟩. For each of the ancilla qubits j = 0, ⋯, n-1, we use the ancilla qubit to control the unitary gate U^2^j. This means that U^2^j is applied when the ancilla qubit j is in state |1⟩, but no operation is performed if the ancilla qubit is in state |0⟩. The result we get is <cit.> 1/2^n/2( |0⟩ + e^2π i θ 2^n-1|1⟩) ⊗⋯⊗( |0⟩ + e^2π i θ 2^0|1⟩) ⊗|ψ⟩ = |f(θ)⟩⊗|ψ⟩, where |f(θ)⟩ = 1/2^n/2∑_m=0^2^n-1( e^2π i θ 2^n-1 m_n-1|m_n-1⟩) ⊗⋯⊗( e^2π i θ 2^0 m_0|m_0⟩) = 1/2^n/2∑_m=0^2^n-1 e^2π i θ m|m_n-1⟩⊗⋯⊗|m_0⟩, and m_n-1⋯ m_0 are the binary digits of the integer m. Let k be an integer between 0 and 2^n-1 with binary representation k_n-1⋯ k_0. We note that when θ equals k divided by 2^n, then |f(k/2^n)⟩ is the quantum Fourier transform of the state |k_n-1⟩⊗⋯⊗|k_0⟩, |f(k/2^n)⟩ = 1/2^n/2∑_m=0^2^n-1 e^2π i k m/2^n|m_n-1⟩⊗⋯⊗|m_0⟩. We can therefore extract information about the value of θ by applying the inverse quantum Fourier transform to |f(θ)⟩, QFT^-1|f(θ)⟩ = 1/2^n∑_k=0^2^n-1∑_m=0^2^n-1 e^-2π i (k/2^n-θ) m|k_n-1⟩⊗⋯⊗|k_0⟩. We see that if θ equals k/2^n for some integer k in the summation, then QFT^-1|f(θ)⟩ equals |k_n-1⟩⊗⋯⊗|k_0⟩. In the general case, we get a superposition of such states |k_n-1⟩⊗⋯⊗|k_0⟩ that is highly peaked for integers k, where k/2^n is close to θ. We simply measure each ancilla qubit and determine k/2^n to obtain an estimate of θ. This is repeated over several trials to build a probability distribution and refine the estimate of θ. Suppose now that |ψ⟩ is not an eigenstate of U but rather a general superposition of eigenstates |ψ_a⟩ with eigenvalues e^2π iθ_a, |ψ⟩ = ∑_a c_a |ψ_a⟩. We can now apply phase estimation to the general state |ψ⟩ in exactly the same manner as before. Let us assume that the separation between each θ_a is large compared to 1/2^n.
This ensures that the peaked distributions we get for each eigenvector have negligible overlap. The outcome after measuring the n ancilla qubits will be |k_n-1⟩⊗⋯⊗|k_0⟩⊗|ψ_a⟩, for some eigenstate |ψ_a⟩. The probability of |ψ_a⟩ being selected will equal |c_a|^2. The error of quantum phase estimation in determining eigenvalues will scale inversely with 2^n. This arises from the discretization of energy values k/2^n, where k is an integer from 0 to 2^n-1. If we relate U to the time evolution of a Hamiltonian H for time step Δ t, the error in the energy scales inversely with the total time evolution required. This scaling of the uncertainty matches the lower bound one expects from the Heisenberg uncertainty principle. The error of phase estimation for eigenstate preparation arises from the admixture of terms from different eigenstates, 1/2^n∑_a ∑_m=0^2^n-1 c_a e^-2π i (k/2^n-θ_a) m|k_n-1⟩⊗⋯⊗|k_0⟩⊗|ψ_a⟩. When the spacing between θ_a is much larger than 1/2^n, then the contamination of other eigenstates will be O(2^-n). For the case when U is the time evolution of a Hamiltonian H for time duration Δ t, then the error of eigenstate preparation scales inversely with the total amount of time evolution needed. We have mentioned the quantum Fourier transform, but have not yet discussed how it is implemented. It suffices to describe its action on the state |k_n-1⟩⊗⋯⊗|k_0⟩. We again use the notation that k_n-1⋯ k_0 are the binary digits of the integer k. The desired action of the quantum Fourier transform upon |k_n-1⟩⊗⋯⊗|k_0⟩ is 1/2^n/2( |0⟩ + e^2π i k 2^n-1/ 2^n|1⟩) ⊗⋯⊗( |0⟩ + e^2π i k 2^0/ 2^n|1⟩) = 1/2^n/2∑_m=0^2^n-1 e^2π i k m/2^n|m_n-1⟩⊗⋯⊗|m_0⟩. The first few steps of the quantum Fourier transform algorithm will actually produce the desired result with the tensor product in the reverse order, 1/2^n/2( |0⟩ + e^2π i k 2^0/ 2^n|1⟩) ⊗⋯⊗( |0⟩ + e^2π i k 2^n-1/ 2^n|1⟩) . But this can be fixed by pairwise swap gates between qubits 0 and n-1, 1 and n-2, etc. The quantum Fourier transform begins with the state |k_n-1⟩⊗|k_n-2⟩⊗⋯⊗|k_0⟩. We first act upon qubit n-1 with a Hadamard gate and this gives 1/2^1/2( |0⟩ + e^2π i k_n-1 2^n-1/ 2^n|1⟩) ⊗|k_n-2⟩⊗⋯⊗|k_0⟩. The coefficient in front of |1⟩ equals 1 if k_n-1=0 and equals -1 if k_n-1=1. We use qubit n-2 to apply a controlled phase rotation to qubit n-1 by a phase e^2π i k_n-22^n-2/2^n. The result is 1/2^1/2( |0⟩ + e^2π i (k_n-1 2^n-1 + k_n-2 2^n-2)/ 2^n|1⟩) ⊗|k_n-2⟩⊗⋯⊗|k_0⟩. We continue in this manner with qubit j applying a controlled phase rotation on qubit n-1 by a phase e^2π i k_j2^j/2^n. After doing this for all of the remaining qubits, we get 1/2^1/2( |0⟩ + e^2π i k/ 2^n|1⟩) ⊗|k_n-2⟩⊗⋯⊗|k_0⟩. We perform the analogous process for qubits n-2, ⋯, 1. For the qubit 0, we simply apply the Hadamard gate. In the end, we get the desired result, 1/2^n/2( |0⟩ + e^2π i k 2^0/ 2^n|1⟩) ⊗⋯⊗( |0⟩ + e^2π i k 2^n-1/ 2^n|1⟩). As described above, we now apply swap gates between pairs of qubits 0 and n-1, 1 and n-2, etc. and then we obtain the desired quantum Fourier transform. Iterative phase estimation performs the determination of the binary digits of θ one at a time <cit.>. Let |ψ⟩ again be an eigenstate of U with eigenvalue e^2π i θ. We first consider the case where θ is equal to k/2^n where k is an integer between 0 and 2^n-1. We start with |0⟩⊗|ψ⟩ and apply a Hadamard gate to obtain 1/2^1/2(|0⟩ + |1⟩)⊗|ψ⟩. We now use the ancilla qubit to perform the controlled unitary operator U^2^n-1. 
The result is then |f_0(θ)⟩⊗|ψ⟩, where |f_0(θ)⟩ = 1/2^1/2(|0⟩ + e^2π i θ 2^n-1|1⟩). Applying a Hadamard gate to |f_0(θ)⟩ gives ( 1 + e^2π i θ 2^n-1/2|0⟩ + 1 - e^2π i θ 2^n-1/2|1⟩) = δ_k_0,0|0⟩ + δ_k_0,1|1⟩. Therefore, we can determine the digit k_0. Let us assume that we have determined the digits from k_0, ⋯, k_j-1. We can determine k_j by taking 1/2^1/2(|0⟩ + |1⟩)⊗|ψ⟩ and using the ancilla qubit to perform the controlled unitary operator U^2^n-j-1 followed by the phase gate |0⟩⟨0| + e^- 2 π i (k_j-12^-2 + ⋯ + k_02^-j-1)|1⟩⟨1|, on the ancilla qubit. This phase gate removes the complex phase associated with the binary digits k_0, ⋯, k_j-1 that have already been determined. The net result is |f_j(θ)⟩⊗|ψ⟩, where |f_j(θ)⟩ = 1/2^1/2(|0⟩ + e^2π i θ 2^n-j-1 - 2 π i (k_j-12^-2 + ⋯ + k_02^-j-1) |1⟩). Applying a Hadamard gate to |f_j(θ)⟩ gives δ_k_j,0|0⟩ + δ_k_j,1|1⟩. For the general case where θ is not equal to k/2^n for some integer k between 0 and 2^n-1, there will be some distribution of values associated with the measurements of the binary digits k_n-1, ⋯, k_0. As with regular phase estimation, the error in energy resolution scales inversely with 2^n and is therefore inversely proportional to the number of operations of U needed. If U is the time evolution of a Hamiltonian H over time step Δ t, then the error in the energy scales inversely with the total time evolution required. Iterative phase estimation is not designed to perform eigenstate preparation. If we start from a general linear combination of energy eigenstates, then the uncertainty in the sequential measurements of the binary digits k_n-1, ⋯, k_0 arising from the different eigenvalues e^2π i θ_a will prevent the algorithm from functioning as intended. The rodeo algorithm is another phase detection algorithm <cit.> that shares some structural similarities with iterative phase estimation. In contrast to iterative phase estimation, however, the rodeo algorithm is efficient in preparing energy eigenstates starting from a general initial state. Let H be the Hamiltonian for which we want to prepare energy eigenstates. To explain the algorithm, we first consider the case where the initial state is an eigenstate of H. We call it |ψ_j⟩ with eigenvalue E_j. We use one ancilla qubit and start with the state |0⟩⊗|ψ_j⟩, and apply the Hadamard gate on the ancilla qubit, 1/2^1/2( |0⟩ + |1⟩) ⊗|ψ_j⟩. We then use the ancilla to perform the controlled unitary for e^-i H t_1 and apply the phase gate |0⟩⟨0| + e^iEt_1|1⟩⟨1|, on the ancilla. This produces 1/2^1/2( |0⟩ + e^-i(E_j-E)t_1|1⟩) ⊗|ψ_j⟩. We now apply a Hadamard gate to the ancilla qubit, which then gives 1/2[ ( 1 + e^-i(E_j-E)t_1) |0⟩+ ( 1 - e^-i(E_j-E)t_1) |1⟩] ⊗|ψ_j⟩. If we measure the ancilla qubit, the probability of measuring |0⟩ is cos^2[(E_j-E)t_1/2] and the probability of measuring |1⟩ is sin^2[(E_j-E)t_1/2]. We call the measurement of |0⟩ a success and the measurement of |1⟩ a failure. We repeat this process for n cycles with times t_1, ⋯, t_n. The probability of success for all n cycles is ∏_k=1^n cos^2[(E_j-E)t_k/2]. If we take random times t_1, ⋯, t_n to be chosen from a Gaussian normal distribution with zero mean and σ standard deviation, then the success probability averaged over many trials will equal P_n(E) = [1+e^-(E_j-E)^2σ^2/2]^n /2^n. We see that the peak value is equal to 1 when E_j = E and the width of the peak is O(σ^-1n^-1/2). Let us now consider a general linear combination of energy eigenstates |ψ⟩ = ∑_j c_j |ψ_j⟩. 
For this case, the probability of success for n cycles is P_n(E) = ∑_j [1+e^-(E_j-E)^2σ^2/2]^n | c_j |^2 /2^n. When scanning over the input parameter E, peaks in P_n(E) will appear at places where there are eigenvalues E_j and the overlap with the initial state is not too small. For fixed n, the error of the energy determination scales inversely with σ. Similarly to phase estimation and iterative phase estimation, the rodeo algorithm saturates the Heisenberg bound, where the error in the energy scales inversely with the total duration of time evolution. In contrast with both phase estimation and iterative phase estimation, the rodeo algorithm is exponentially fast for eigenstate preparation. There are several other energy projection and filtering methods with similar characteristics <cit.>. Once the peak of the eigenstate energy in P_n(E) is located approximately, we set E as the peak value. With E fixed and σ fixed, the error estimates for the eigenvector scale as 1/2^n for small n and accelerate to 1/4^n for asymptotically large values of n<cit.>. The 1/2^n comes from the fact that the arithmetic mean of cos^2(θ) equals 1/2, while the 1/4^n comes from the fact that the geometric mean of cos^2(θ) equals 1/4. In Ref. <cit.>, it was shown that the use of progressive smaller values for the time evolution parameters t_j accelerates the convergence of the rodeo algorithm towards 1/4^n. The main limitation of the rodeo algorithm for large quantum many-body systems is the requirement that the initial state have non-negligible overlap with the eigenstate of interest. This is a difficult problem that is common to nearly all eigenstate preparation algorithms that use measurement projection. Nevertheless, one can use techniques such as adiabatic evolution, variational methods, or some other approach as a preconditioner to significantly increase the overlap with the eigenstate of interest <cit.>. § SUMMARY AND OUTLOOK In this article, we have presented several methods that show the essential features of adiabatic evolution, variational methods, and phase detection algorithms. All of the algorithms have their strengths and limitations, and one common theme is that the techniques can be combined with each other to produce something that is potentially greater than the sum of its parts. For example, adiabatic evolution provides a theoretical foundation for the QAOA variational method. In turn, the variational method can be used to find a good starting Hamiltonian for adiabatic evolution. Both adiabatic evolution and variation methods can be used as an initial-state preconditioner for phase detection algorithms. There has been great interest by both scientists and the general public on the question of quantum advantage, if and when quantum computers are able to perform tasks exceeding the capabilities of classical computers. It is generally believed that calculations of real-time dynamics and spectral functions of quantum many-body systems are areas ripe for possible quantum advantage. However, the dynamics of some quantum many-body system starting from a trivial initial state is not something that connects directly with real-world phenomena. To make connections with real-world experiments and observations, one also needs the ability to prepare energy eigenstates. It is not clear whether quantum advantage will be achievable for the task of eigenstate preparation. However, this may not be necessary. 
It may be enough for quantum eigenstate preparation to be competitive with classical computing methods to achieve quantum advantage for calculating the real-time dynamics or spectral functions for real-world applications. The algorithms described in this article provide some of the tools needed, but much more work is needed to address the remaining challenges. Acknowledgments The author acknowledges support from the U.S. Department of Energy (DE-SC0021152, DE-SC0013365, DE-SC0023658) and the SciDAC-4 and SciDAC-5 NUCLEI Collaborations. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
http://arxiv.org/abs/2307.05719v1
20230710131845
Systemic risk indicator based on implied and realized volatility
[ "Paweł Sakowski", "Rafał Sieradzki", "Robert Ślepaczuk" ]
q-fin.RM
[ "q-fin.RM" ]
WNEUW]Paweł Sakowski 1 [email protected] NYU]Rafał Sieradzki 2 [email protected] WNEUW]Robert Ślepaczuk cor1 3, 4, 5 [email protected] [WNEUW]Quantitative Finance Research Group, Department of Quantitative Finance, University of Warsaw, Faculty of Economic Sciences, ul. Dluga 44-50, 00-241, Warsaw, Poland [NYU]New York University Stern School of Business; Cracow University of Economics [cor1]Corresponding author [1]ORCID: https://orcid.org/0000-0003-3384-3795 [2]ORCID: https://orcid.org/0000-0002-4702-7716 [3]ORCID: https://orcid.org/0000-0001-5227-2014 [4]This document is the result of the research project funded by the IDUB program BOB-IDUB-622-187/2022 at the University of Warsaw [5]We want to thank Linda Allen from Baruch College for providing us with CATFIN data, and Viral Acharya and Rob Capellini from the NYU Stern School of Business and NYU Stern Volatility and Risk Institute for sending us the sRisk data on a single constituent level. We have benefited from discussions with Tobias Adrian, Markus Brunnenmeier, Ruggero Japelli, Yi Cao, Zexun Chen. We would also like to thank participants of the 49th Eastern Economic Conference in New York (February 2023), the 33rd Quantitative Finance Research Group and Data Science Lab Research Seminar at the University of Warsaw (April 2023), the MSBE research seminar at the University of Edinburgh, School of Business (April 2023), the 43rd International Symposium on Forecasting at the University of Virginia Darden School of Business, Charlottesville, VA, USA (June 2023), and the 16th edition of the International Risk Management Conference Florence, Italy (July 2023) for their inspiring comments and insightful discussions. We propose a new measure of systemic risk to analyze the impact of the major financial market turmoils in the stock markets from 2000 to 2023 in the USA, Europe, Brazil, and Japan. Our Implied Volatility Realized Volatility Systemic Risk Indicator (IVRVSRI) shows that the reaction of stock markets varies across different geographical locations and the persistence of the shocks depends on the historical volatility and long-term average volatility level in a given market. The methodology applied is based on the logic that the simpler is always better than the more complex if it leads to the same results. Such an approach significantly limits model risk and substantially decreases computational burden. Robustness checks show that IVRVSRI is a precise and valid measure of the current systemic risk in the stock markets. Moreover, it can be used for other types of assets and high-frequency data. The forecasting ability of various SRIs (including CATFIN, CISS, IVRVSRI, SRISK, and Cleveland FED) with regard to weekly returns of S&P 500 index is evaluated based on the simple linear, quasi-quantile, and quantile regressions. We show that IVRVSRI has the strongest predicting power among them. systemic risk implied volatility realized volatility volatility indices equity index options market volatility JEL: G14, G15, C61, C22 introduction § INTRODUCTION The magnitude and the speed of the contagion of the financial market turmoils is the main point of interest in numerous studies. This topic is of special importance because the reactions of the financial markets to any existing or forthcoming crisis are fast, and it is hard to identify them on time based on the real economic measures, as they are announced with a delay. 
The main aim of this paper is to analyze and compare the systemic impact of the major financial market turmoils in the equity markets in the USA, Europe, Brazil, and Japan from 2000 to 2023. For this purpose, we construct an indicator based on implied and realized volatility measures (IV and RV, respectively) for each market, which are easily available to all market participants. Moreover, we construct a general indicator at the worldwide level. Our partial motivation to undertake this study is to show that such Systemic Risk Indicators can be constructed from simple metrics, and there is no need to use any sophisticated risk models for this purpose (<cit.>). In other words, we want to show that the model risk can be significantly reduced while the results are similar to the ones obtained by the use of much more complex tools. We set four research hypotheses: * RH1: It is possible to construct a robust Systemic Risk Indicator based on the well-known concepts of realized and implied volatility measures. * RH2: The indication of the proposed Systemic Risk Indicator depends on the geographical location of a given equity market. * RH3: The robustness of the proposed Systemic Risk Indicator depends on various parameters selected: the memory parameter for RV, time to expiration for IV, the percentile selected for the risk map, the length of the history selected for the calculation of percentile in case of risk map. * RH4 IVRVSRI has the highest forecasting ability of S&P500 index amongst other benchmark SRIs, especially in the moments of systemic risk The robustness of proposed systemic risk measure is particularly important, as in many studies (e.g. <cit.>, <cit.>) researchers do not consider extent to which the initial parameters of the model affect the final results, especially those regarding the speed of reaction to unexpected market turmoils. We check the sensitivity of the proposed Systemic Risk Indicator to the change of the selected parameters like: the memory parameter for the realized volatility (RV), time to expiration for the implied volatility (IV), the percentile selected for the risk map, and the length of the history selected for the calculation of percentile in case of the risk map. Systemic risk refers to the risk of the collapse of an entire financial system, as opposed to the risk associated with any individual entity, which is a credit default risk. One of the main distinguishing features of systemic risk is that an idiosyncratic event affecting one or a group of market entities is exacerbated by interlinkages and interdependencies in a system, leading to a domino effect that can potentially bring down the entire system. The fragility of the system as a whole is being built over time, and it “only” materializes in times of crisis. One reason is a comparable business model among market participants that leads to accumulating similar assets on a balance sheet and comparable investment strategies that resemble herd behavior. In fact, those entities, although legally separated, can be considered as one large entity from a systemic point of view. The recent collapses of Silicon Valley Bank and Signature Bank, and the abrupt nature of the Covid-19 pandemic, suggest the timeliness of the indicators is one of the key characteristics of a systemic risk indicator. One may claim that relying on the measures primarily based on the market variables, basically the prices of the financial instruments, may be potentially misleading as they may generate false positive signals. 
On the other hand, measures that are based primarily on accounting-based data are slow in reacting to potential problems in the financial system, as they are available with a delay. Therefore, we argue that it is better to get a signal of a potential problem that sometimes may be false that to get it when the crisis has already started. Most of the research recognizes that problem and tries to combine market and accounting data (See the literature review for details). In general, the function of the market-based variables is to detect potential crises in a timely manner, and the accounting-based measures serve to identify systematically important institutions, whose collapse may create spill-over effects in the system. This approach may be applied by the market regulators that can monitor more closely entities that are systematically important, and try to introduce regulatory solutions to lower their impact on the system. These entities also have access to more data on individual institutions and also collect them earlier than market participants. Although, we agree that this approach is a good way to identify the “too-big-to-fail” institutions. On the other hand, we argue that one of the drawbacks of this approach is that combining both types of data leads to higher model risk and is computationally more intensive than using only market-based variables. Moreover, it seems to be hard to detect the interconnectedness in the market due to its complex and dynamic nature and therefore classifying “too-interconnected-to-fail” is a difficult task[ Some systemic risk measures, like ΔCoVaR, try to capture the potential for the spreading of financial distress across institutions by gauging spill-over by observing the tail comovement using the VaR approach]. At the same time, the aforementioned collapses of the SVB and Signature Bank show that some risk built-up in the system we were not aware of and they were not taken into account by the existing models[Those banks had bonds on their balance sheets which were “held to maturity”. When the customers started to withdraw their deposits, banks had to sell those bonds in the market at much lower prices than they were reported on the balance sheet, leading to huge losses.]. In this work, we focus on one part of the systemic risks, which is the timely identification of the potential crisis by market participants and not only by the regulators. We also claim that a “wide market” may have superior knowledge about systemic risk, and to some extent, their coordinated actions can trigger a systemic event[There is anecdotal evidence that some depositors withdrew all their funds from the SVB two days before its collapse. It seems, that presumably the involuntarily coordinated action of a group of investors to take out their deposits from that bank in a short stretch of time was the trigger of the bank’s collapse. At the same time, one may assume that those investors who were very convinced that the SVB was going to have serious problems were short-selling its stocks or going long deep out-of-the-money options on its stocks, further exacerbating the problems of the bank and finally leading to its collapse.]. The structure of this paper is as follows. The second section presents a literature review. The third section describes Data and Methodology. The fourth section presents the Results, and the fifth one includes Conclusions. 
literature-review-and-classification-the-selected-systemic-risk-indicators § LITERATURE REVIEW AND CLASSIFICATION THE SELECTED SYSTEMIC RISK INDICATORS literature-review §.§ Literature review The major approach in the literature to measure systemic risk is based either on market data or a mix of market and balance sheet data. Those combined risk indicators use i.a. such metrics as VaR and CoVaR. The results obtained for one country, market segment, or economic sector are aggregated to get a general measure of systemic risk. In general, various methods yield similar results as in <cit.>, <cit.>, <cit.>, <cit.> or <cit.>. One of the first attempts focusing on systemic risk was <cit.> who reminded the last resort lending function of the central bank, which has digressed from its overall strategy of monetary control to also undertake a tactical rescue of individual banks and segments of the financial market. <cit.> developed a broad concept of systemic risk, the basic economic concept for the understanding of financial crises. They claimed that any such concept must integrate systemic events in banking and financial markets as well as in the related payment and settlement systems. At the heart of systemic risk are contagion effects, and various forms of external effects. The concept also includes simultaneous financial instabilities following aggregate shocks. They surveyed the quantitative literature on systemic risk, which was evolving swiftly in the last couple of years. <cit.> point out that systemic risk is a multifaceted problem in an ever-changing financial environment, any single definition is likely to fall short and may create a false sense of security as financial markets evolve in ways that escape the scrutiny of any one-dimensional perspective. They provide an overview of over 30 indicators of systemic risk in the literature, chosen to address key issues in measuring systemic risk and its management. The measures are grouped into six various categories including: macroeconomic, granular foundations and network, forward-looking risk, stress-test, cross-sectional, illiquidity and finally insolvency measures. They analyze them from the supervisory, research, and data perspectives, and present concise definitions of each risk measure. At the same time, they point out that the system to be evaluated is highly complex, and the metrics considered were largely untested outside the GFC crisis. Indeed, some of the conceptual frameworks that they reviewed were still in their infancy and had yet to be applied. <cit.> agreed that governments and international organizations worried increasingly about systemic risk, under which the world's financial system could have collapsed like a row of dominoes. There is widespread confusion, though, about the causes and, to some extent, even the definition of systemic risk, and uncertainty about how to control it. His paper offers a conceptual framework for examining what risks are truly “systemic,” what causes those risks, and how, if at all, those risks should be regulated. Scholars historically have tended to think of systemic risk primarily in terms of financial institutions such as banks. 
However, with the growth of disintermediation, in which companies can access capital-market funding without going through banks or other intermediary institutions[In the US, more than 50% of funding of non-financial corporations comes from equity and bond issuance.], greater focus should be devoted to financial markets and the relationship between markets and institutions. This perspective reveals that systemic risk results from a type of tragedy of the commons in which market participants lack sufficient incentives, in the absence of regulation, to limit risk-taking in order to reduce the systemic danger to others. In this light, <cit.> models systemic risk as the endogenously chosen correlation of returns on assets held by banks. The limited liability of banks and the presence of a negative externality of one bank's failure on the health of other banks give rise to a systemic risk-shifting incentive where all banks undertake correlated investments, thereby increasing economy-wide aggregate risk. Regulatory mechanisms such as bank closure policy and capital adequacy requirements that are commonly based only on a bank's own risk fail to mitigate aggregate risk-shifting incentives, and can, in fact, accentuate systemic risk. Prudential regulation is shown to operate at a collective level, regulating each bank as a function of both its joint (correlated) risk with other banks as well as its individual (bank-specific) risk. <cit.> introduce SRISK to measure the systemic risk contribution of a financial firm. SRISK captures the capital shortfall of a firm conditional on a severe market decline and is a function of its size, leverage and risk. They use the measure to study the top financial institutions in the recent financial crisis. SRISK delivers useful rankings of systemic institutions at various stages of the crisis and identifies Fannie Mae, Freddie Mac, Morgan Stanley, Bear Stearns, and Lehman Brothers as the top contributors as early as 2005-Q1. Moreover, aggregate SRISK provides early warning signals of distress in indicators of real activity. The ΔCoVaR method proposed by <cit.> estimates the systemic risk of a financial system conditional on institutions being in distress based on publicly traded financial institutions. They define an institution's contribution to systemic risk as the difference between ΔCoVaR conditional on the institution being in distress and ΔCoVaR in the median state of the institution. They quantify the extent to which characteristics such as leverage, size, and maturity mismatch predict systemic risk contribution. <cit.> examine the aftermath of the postwar financial crises in advanced countries through the construction of a semiannual series of financial distress in 24 OECD countries for the period 1967–2012. The series is based on assessments of the health of countries' financial systems and classifies financial distress on a relatively fine scale. They find that the average decline in output following a financial crisis is statistically significant and persistent, but only moderate in size. More importantly, the average decline is sensitive to the specification and sample, and the aftermath of the crises is highly variable across major episodes. Following this research, <cit.>, using a crisis severity variable constructed by <cit.>, estimated a Tobit model for 23 developed economies. They developed a probability of crisis measure and SRISK capacity measure from the Tobit estimates.
These indicators reveal an important global externality whereby the risk of a crisis in one country is strongly influenced by the undercapitalization of the rest of the world. <cit.> present an economic model of systemic risk in which undercapitalization of the financial sector as a whole is assumed to harm the real economy, leading to a systemic risk externality. Each financial institution's contribution to systemic risk can be measured as its systemic expected shortfall (SES), that is, its propensity to be undercapitalized when the system as a whole is undercapitalized. The research by <cit.> addresses the measurement of the systemic risk contribution (SRC) of country-level stock markets to understand the rise of extreme risks worldwide to prevent potential financial crises. The proposed measure of SRC is based on quantifying tail risk propagation's domino effect using CoVaR and the cascading failure network model. While CoVaR captures the tail dependency structure among stock markets, the cascading failure network model captures the nonlinear dynamic characteristics of tail risk contagion to mimic tail risk propagation. The validity test demonstrated that this method outperforms seven classic methods as it helps early warning of global financial crises and correlates to many systemic risk determinants, e.g., market liquidity, leverage, inflation. The results highlight that considering tail risk contagion's dynamic characteristics helps avoid underestimating SRC and supplement a “cascading impact” perspective to improve financial crisis prevention. The micro-level methods have been criticized by <cit.>. They base their research on the assumption that financial intermediaries including commercial banks, savings banks, investment banks, broker/dealers, insurance companies, mutual funds, etc. are special because they are fundamental to the operation of the economy. The specialness of banks is reflected in the economic damage that results when financial firms fail to operate properly. They proposed a new measure to forecast the likelihood that systemic risk-taking in the banking system as a whole, called CATFIN. It captures the tail risk of the overall banking market using VaR methodology at a 1% level with monthly data. This early warning system should signal whether aggressive aggregate systemic risk-taking in the financial sector presages future macroeconomic declines. <cit.> showed that among 19 different risk measures, CATFIN performs the best in predicting macro-level shocks. <cit.> introduced TALIS (TrAffic LIght System for Systemic Stress) that provides a comprehensive color-based classification for grouping companies according to both the stress reaction level of the system when the company is in distress and the company's stress. level. This indicator can integrate multiple signals from the interaction between different risk metrics. Starting from specific risk indicators, companies are classified by combining two loss functions, one for the system and one for each company, evaluated over time and as a cross-section. An aggregated index is also obtained from the color-based classification of companies. <cit.> compare different approaches to Value-at-Risk measurement based on parametric and non-parametric approaches for different portfolios of assets, including cryptocurrencies. They checked if the analyzed models accurately estimate the Value-at-Risk measure, especially in the case of assets with various returns distribution characteristics (eg. low vs. high volatility, high vs. 
moderate skewness). <cit.> checked which of the VaR models should be used depending on the state of the market volatility. They showed that GARCH(1,1) with standardized student's t-distribution is least affected by changes in volatility among analysed models. <cit.> point out that under the conditions of sudden volatility increase, such as during the global economic crisis caused by the Covid-19 pandemic, no classical VaR model worked properly even for the group of the largest market indices. In general, there is an agreement between market risk researchers that an ideal model for VaR estimation does not exist, and different models' performance strongly depends on current economic circumstances. Some spectacular crash events, including the FTX collapse in November 2022, followed by a dramatic slump in prices of most of the cryptocurrencies triggered a question about the resiliency of this financial market segment to shocks and the potential spillover effect. In one of the latest research, <cit.> studied systemic risk in the cryptocurrency market based on the FTX collapse. Using the CATFIN measure to proxy for the systemic risk they claimed that the FTX crisis did not engender higher systemic and liquidity risks in this market compared to previous negative shocks. Various rigorous models of bank and payment system contagion have now been developed, although a general theoretical paradigm is still missing. Direct econometric tests of bank contagion effects seem to be mainly limited to the United States. Empirical studies of the systemic risk in foreign exchange and security settlement systems appear to be non-existent. Moreover, the literature surveyed reflects the general difficulty to develop empirical tests that can make a clear distinction between contagion in the proper sense and joint crises caused by common shocks, rational revisions of depositor or investor expectations when information is asymmetric (“information-based” contagion) and “pure” contagion as well as between “efficient” and “inefficient” systemic events. Bearing in mind the huge dynamics of the recent shocks (e.g. the Covid-19 pandemic, and the FTX collapse), we claim that the monthly data frequency (like in the case of CATFIN) is not enough to create a valid early warning indicator. At the same time, we claim that the existing indicators of systemic risk are over sophisticated and some of them require huge computing power or access to paid datasets. Therefore, there is a need to create a precise and simple indicator of systemic risk based on a publicly available date with relatively high frequency. In this study, we base on the macro-level data which is easily accessible to the general public to construct a robust systemic risk indicator. We show that our simple metrics can yield similar (or better) results than complex methods and can be computed with a relatively high-frequency using publicly available data, which is a great advantage. a-comparison-of-the-selected-systemic-risk-indicators §.§ A comparison of the selected systemic risk indicators Following the Cleveland Fed's commentary on the performance of their systemic risk indicator (Craig 2020), we agree that a good financial-stress indicator (we may also say a good systemic risk indicator) is reliable, timely, straightforward, valid, and ongoing. Most of the indicators miss some of those features. 
For example, indicators that are based on balance-sheet data are neither timely nor ongoing, as financial data is provided on a monthly basis to the regulators and is publicly released on a quarterly basis and with a delay. This means that those indicators can be computed by market regulators with a higher frequency than by the wide public, which is a disadvantage for the market participants. Moreover, some of the indicators are complex and thus they involve a significant model risk. In other words, if two indicators perform the same, the better one is the simpler one. In Table <ref> we provide an overview of the selected systemic risk indicators. methodology § METHODOLOGY Our methodology is based on combining the information hidden in the latent volatility process using the concepts of implied and realized volatility. We do this by utilizing the methodology for volatility indices based on <cit.> and <cit.> and the concept of realized volatility for various data frequencies introduced by <cit.> and <cit.>. Similarly to <cit.>, we construct a dynamic historical ranking evaluating the systemic risk day by day, both on the global and the country level. More importantly, our methodology can easily be adapted to high-frequency data, so that such a systemic risk indicator can monitor risk on a real-time basis. The general formula of the IVRVSRI consists of two component indices which are based on implied (IVSRI) and realized (RVSRI) volatility. implied-volatility—volatility-indices §.§ Implied volatility - Volatility indices One of the first and most widely known volatility indices is the VIX index, introduced by CBOE in 2003 and recalculated backward to 1987. Its formula, based on the seminal paper of <cit.>, was described in detail in <cit.> and it can be summarized by the following equation: σ^2 = 2/T∑_iΔ K_i/K_i^2 e^RT Q(K_i) - 1/T[F/K_0-1]^2 where: σ = VIX/100, T - time to expiration, K_i - strike price of the i-th out-of-the-money option; a call if K_i > K_0 and a put if K_i < K_0; both put and call if K_i = K_0, Δ K_i - the interval between strike prices around K_i, R - risk-free interest rate to expiration, F - forward index level derived from index option prices, K_0 - first strike below the forward index level (F). The formulas for other volatility indices used in this study (VSTOXX, VNKY, and VXEWZ) are based on a similar methodology and their details can be found in <cit.>, <cit.>, and <cit.>. realized-volatility-measure §.§ Realized volatility measure In the case of the historical volatility measure, we use the realized volatility concept (<cit.>). It is based on the summation of squared log returns during a given period of time, annualized in order to combine it later with IV. The formula used in this paper is as follows: RV_t,i^1M = √(252/21∑_k=0^20 r_t-k,i^2) = RVSRI_i, r_t,i = log(P_t,i/P_t-1,i) where RV_t,i^1M is the realized volatility for the i-th equity index on day t with the memory of 1 calendar month (i.e. 21 trading days), while P_t,i is the price of the i-th equity index on day t. The memory of the realized volatility estimator was set to 21 trading days in order to make it comparable with the 30 calendar days used in the case of VIX.
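For concreteness, a minimal Python sketch of the realized-volatility estimator above is given below; the price series is synthetic and purely illustrative, and the function name and data are assumptions rather than material from the original study.

import numpy as np
import pandas as pd

def realized_vol_1m(prices: pd.Series, memory: int = 21) -> pd.Series:
    # RV_t = sqrt( (252 / memory) * sum of squared daily log returns over the last `memory` trading days ),
    # i.e. the annualized 1-month realized volatility used in this paper.
    log_ret = np.log(prices / prices.shift(1))
    return np.sqrt((252.0 / memory) * (log_ret ** 2).rolling(memory).sum())

# Illustrative synthetic daily price series (geometric random walk).
rng = np.random.default_rng(1)
dates = pd.bdate_range("2020-01-01", periods=300)
prices = pd.Series(100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.01, size=len(dates)))), index=dates)

rv = realized_vol_1m(prices)
print(rv.dropna().tail())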
ivrvsri—implied-volatility-realized-volatility-systemic-risk-indicator §.§ IVRVSRI - Implied Volatility Realized Volatility Systemic Risk Indicator Our methodology has significant advantages compared to other approaches presented in the literature (<cit.>, <cit.>). First, IVRVSRI provides a systemic risk indication based on two simple risk measures (IV and RV) that are well grounded among market participants. Second, we analyze various financial market turmoils from 2000 until 2023, uncovering the characteristics and severity of the major market crises during the last 23 years. Third, we construct a dynamic ranking (day by day) showing the current level of stress on the global level and additionally separately for the USA, Europe, Brazil and Japan. Finally, our methodology can be simply extended by using high-frequency price data for the selected equity indices and the same frequency for volatility indices to track systemic risk on a real-time basis. In order to accomplish this task we construct two component systemic risk indicators based on implied (IVSRI) and realized (RVSRI) volatility measures for each country separately and additionally on the aggregated level for all countries. implied-volatility-sri §.§.§ Implied Volatility SRI IVSRI is based on the separate volatility index for each country, or group of countries, and its share in the total market capitalization. The formula for IVSRI is as follows: IVSRI = ∑_i=1^N w_i * IV_i where N is the number of analyzed countries, IV_i denotes the implied volatility index for the i-th country, and w_i is the weight of the given country in SRI, calculated according to: w_i = MC_i/∑_j=1^N MC_j where MC_i is the market capitalization of the given country. Based on Table <ref> and Equation <ref>, we construct the weights vector w = {77.7%, 8.1%, 12%, 2.2%} which will be used in calculations of our risk metrics. realized-volatility-sri §.§.§ Realized Volatility SRI RVSRI is based on a similar concept as IVSRI (section <ref>): RVSRI = ∑_i=1^N w_i * RV_i where RV_i is the realized volatility for the i-th country. ivrvsri—implied-volatility-realized-volatility-systemic-risk-indicator-on-the-country-level §.§.§ IVRVSRI - Implied Volatility Realized Volatility Systemic Risk Indicator on the country level IVRVSRI can be calculated on the country level (IVRVSRI_i) as the weighted sum of the IV_i and RV_i measures for the given country, and analogously on the global level: IVRVSRI_i = w_IV * IVSRI_i + w_RV * RVSRI_i ivrvsri—implied-volatility-realized-volatility-systemic-risk-indicator-on-the-global-level-ivrvsri §.§.§ IVRVSRI - Implied Volatility Realized Volatility Systemic Risk Indicator on the global level (IVRVSRI) IVRVSRI can be calculated in both ways, based on IVSRI and RVSRI on the global level (formulas <ref> and <ref>): IVRVSRI = w_IV * IVSRI + w_RV * RVSRI, w_IV + w_RV = 1 where w_IV is the weight of the IVSRI component in IVRVSRI (equal to 50%), and w_RV is the weight of the RVSRI component in IVRVSRI (equal to 50%). Alternatively, we can calculate the IVRVSRI measure based on the country-specific IVRVSRI (i.e. IVRVSRI_i): IVRVSRI = ∑_i=1^N w_i * IVRVSRI_i The weights w_IV and w_RV are assumed to be equal; however, different weights may also be used. dynamic-quartile-ranking-based-on-ivrvsri-dqr_ivrvsri §.§ Dynamic quartile ranking based on IVRVSRI (DQR_IVRVSRI) In the next step, we construct the dynamic quartile ranking (DQR_IVRVSRI) based on RVSRI, IVSRI, and IVRVSRI indications, both on the country and on the global level. The DQR_IVRVSRI on the country level is constructed based on the following steps:
* We create a quartile map chart based on IVRVSRI_i for each country under investigation, * This map chart shows, on a daily level, a color-coded systemic risk indicator, * Colors indicate the following: * RED, if IVRVSRI_i is in its 4th quartile based on historical indications → VERY HIGH country-systemic risk, * ORANGE, if IVRVSRI_i is in its 3rd quartile based on historical indications → HIGH country-systemic risk, * LIGHT GREEN, if IVRVSRI_i is in its 2nd quartile based on historical indications → LOW country-systemic risk, * GREEN, if IVRVSRI_i is in its 1st quartile based on historical indications → VERY LOW country-systemic risk. To construct the index at the global level, we follow the same approach as for the IVRVSRI_i for each country separately, however the map is constructed on the global level. benchmark-systemic-risk-indicators §.§ Benchmark systemic risk indicators From an array of systemic risk measures described in Section <ref>, we select four indicators that will serve as benchmarks for the IVRVSRI. We choose the sRisk, the CATFIN, the Cleveland FED Systemic Risk Indicator, and the CISS. Although this choice may seem arbitrary, it is partly based on the availability of the data. Moreover, as the selected measures use different methodologies, we want to check how their indications compare with each other and how accurate they are at predicting financial turmoils. Data for the sRisk is available for many countries, and at the global level, while the CATFIN and Cleveland FED's measures are available only for the US market, and the CISS indicator is only available for Europe. srisk §.§.§ SRISK SRISK is defined as the expected capital shortfall of a financial entity conditional on a prolonged market decline. It is a function of the size of the firm, its degree of leverage, and its expected equity loss conditional on the market decline, which is called the Long Run Marginal Expected Shortfall (LRMES). The SRISK calculation is analogous to the stress tests that are regularly applied to financial firms. It is done with only publicly available information, making the index widely applicable and relatively inexpensive to implement for a single entity. The measure can readily be computed using balance sheet information and an appropriate LRMES estimator. Firms with the highest SRISK are the largest contributors to the undercapitalization of the financial system in times of distress. The sum of SRISK across all firms is used as a measure of overall systemic risk in the entire financial system. It can be thought of as the total amount of capital that the government would have to provide to bail out the financial system in case of a crisis. SRISK combines market and balance sheet information in order to construct a market-based measure of financial distress, which is the expected capital shortfall of a financial firm conditional on a systemic event. SRISK depends not only on equity volatility and correlation (or other moments of the equity return distribution), but also explicitly on the size and the degree of leverage of a financial firm. According to <cit.>, SRISK can be calculated based on the following formulas. They start from the definition of the capital shortfall of firm i on day t: CS_it = kA_it-W_it = k(D_it+W_it)-W_it where: W_it - is the market value of equity, D_it - is the book value of debt, A_it - is the value of quasi assets, k - is the prudential capital fraction (it is assumed on the level of 8%).
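A minimal numerical sketch of this capital-shortfall definition is given below; the balance-sheet figures are purely hypothetical, and k = 8% follows the prudential fraction assumed above.

def capital_shortfall(equity_mv: float, debt_bv: float, k: float = 0.08) -> float:
    # CS_it = k * A_it - W_it = k * (D_it + W_it) - W_it.
    # Positive values indicate a capital shortfall (distress); negative values a capital surplus.
    return k * (debt_bv + equity_mv) - equity_mv

# Hypothetical firm with market equity 20 and book debt 400 (same currency units):
print(capital_shortfall(equity_mv=20.0, debt_bv=400.0))   # 13.6  -> shortfall
print(capital_shortfall(equity_mv=100.0, debt_bv=400.0))  # -60.0 -> surplus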
Then, they note that when the capital shortfall is negative, i.e., the firm has a capital surplus, the firm functions properly. However, when this quantity is positive, the firm experiences distress. They add that the main focus will be put on the prediction of the capital shortfall of a financial entity in case of a systemic event defined as a market decline below a threshold C over a time horizon h (<cit.>). Next, they define the multiperiod arithmetic market return between period t+1 and t+h as R_mt+1:t+h and the systemic event as R_mt+1:t+h < C. Additionally, they set the horizon h to 1 month (approx. 22 periods) and the threshold C to -10%. Then, the SRISK is defined as the expected capital shortfall conditional on a systemic event: SRISK_it = E_t(CS_it+h|R_mt+1:t+h<C) = kE_t(D_it+h|R_mt+1:t+h<C)-(1-k)E_t(W_it+h|R_mt+1:t+h<C) Additionally, they assume that in the case of a systemic event debt cannot be renegotiated, which implies that E_t(D_it+h|R_mt+1:t+h<C) = D_it, and finally provide the SRISK definition using this assumption: SRISK_it = kD_it-(1-k)W_it(1-LRMES_it) = W_it[kLVG_it+(1-k)LRMES_it-1] where LVG_it is the quasi-leverage ratio (D_it+W_it)/W_it and LRMES_it is the Long Run Marginal Expected Shortfall, i.e. Long Run MES, defined as: LRMES_it = -E_t(R_it+1:t+h|R_mt+1:t+h<C) where R_it+1:t+h is the multiperiod arithmetic firm equity return between period t+1 and t+h. The authors claim that SRISK depends on the size of the firm, its degree of leverage, and its expected equity devaluation conditional on a market decline. SRISK increases when these variables increase. Finally, after pointing out that the SRISK measure of Equation <ref> provides a point prediction of the level of capital shortfall a financial entity would experience in case of a systemic event, they provide the formula for the SRISK_t measure across all firms to construct a system-wide measure of financial distress: SRISK_t = ∑_i=1^N(SRISK_it)_+ where (x)_+ denotes max(x, 0). At the end, they add that aggregate SRISK_t should be thought of as the total amount of capital that the government would have to provide to bail out the financial system conditional on the systemic event. At the same time, they admit that they ignore the contribution of negative capital shortfalls (that is, capital surpluses) in the computation of aggregate SRISK. The last, and probably the most time-consuming, step in calculating SRISK_t requires specifying a model for the market and firm returns that can be used to obtain estimators of the LRMES. Nevertheless, a number of different specifications and estimation techniques can be used to obtain this prediction. To construct LRMES predictions, the GARCH-DCC model (<cit.>) is used. cleveland-feds-systemic-risk-indicator §.§.§ Cleveland Fed's Systemic Risk Indicator The Cleveland FED's indicator captures the risk of widespread stress in the US banking system (<cit.>). This method of computing the systemic risk indicator (cfSRI) is based on the difference, or spread, between two measures of insolvency risk. The first one is an average of default risk across individual banking institutions (average distance-to-default), while the other one is a measure of risk for a weighted portfolio of the same institutions (portfolio distance-to-default). cfSRI_t = ADD_t-PDD_t where ADD_t is the average distance-to-default, while PDD_t is the portfolio distance-to-default.
The narrowing of the spread, resulting from the rising insolvency risk of the banking system as a whole, reflects market perceptions of imminent systematic disruption of the banking system. Fragility in the banking system is indicated when falling PDD converges toward ADD (the narrowing of the spread), even when both PDD and ADD are well in positive territory. A spread that is lower than 0.1 for more than two days indicates major financial stress when the average insolvency risk is rising and major banks are stressed by a common factor. When it stays below 0.5 for an extended period of time, it indicates that the markets are signaling major stress about the banking system. To gauge the level of systemic risk in the banking system, the component parts of cfSRI have to be calculated, i.e. the average distance-to-default, the portfolio distance-to-default, and then the spread between these two should be interpreted jointly. The average distance-to-default (ADD) reflects the market's perception of the average risk of insolvency among a sample of approximately 100 US banks[The constituents of an exchange-traded fund (ETF) that reflects the banking system in the aggregate: State Street Global Advisors’ SPDR S&P Bank ETF, commonly referred to as “KBE".]: ADD_t = 1/N∑_i=1^NDD_i,t where DD_i is a distance-to-default (DD) for an individual bank T periods ahead, which is calculated using the Merton model (<cit.>) for equity valuation as a European call option on the bank's assets A at maturity T. The portfolio distance-to-default (PDD) is a similar measure that is based on options on a weighted portfolio of the same banks[For this case, it is calculated based on options on an exchange-traded fund (ETF): State Street Global Advisors’ SPDR S&P Bank ETF (KBE), instead its constituents like it was in the case of ADD.]. It is calculated using options on an exchange-traded fund that reflects the banking system in the aggregate. A decreasing ADD or decreasing PDD indicates the market's perception of rising average insolvency risk in the banking sector. catfin §.§.§ CATFIN CATFIN measure (<cit.>) tries to capture the risk of catastrophic losses in the financial system and the statistical approaches to estimating VaR are used for modeling the losses. There are three methodologies used in the VaR estimation: a) direct estimate of the tail risk based on the extreme value distributions, specifically the generalized Pareto distribution (GPD), b) investigation of the shape of the entire return distribution, while providing flexibility of modeling tail thickness and skewness by applying the skewed generalized error distribution (SGED), and c) estimation of VaR based on the left tail of the actual empirical distribution without any assumptions about the underlying return distribution. The first two approaches are known as the parametric methods, whereas the latter one is considered a non-parametric one. In the CATFIN, VaR is estimated at the 99% confidence level using all three methodologies. Then the first principal component is extracted from the three measures. The CATFIN measure bases on the notion that banks are special for a country's economy and the excess monthly returns on all financial firms are used for the estimation. The extreme returns are defined as the 10% left tail of the cross-sectional distribution of excess returns on financial firms. Final formula for CATFIN combines three VaR measures on a monthly level. 
Principal component analysis (PCA) is used to extract the common component of catastrophic risk embedded in the three proxies in a parsimonious manner, while suppressing potential measurement error associated with the individual VaR measures. This leads to a measure of the catastrophic risk in the financial system as of month t, denoted CATFIN, as: CATFIN_t = 0.570υ_GPD^STD + 0.5719υ_SGED^STD + 0.5889υ_NP^STD where υ_GPD^STD, υ_SGED^STD, and υ_NP^STD correspond to the standardized VaR measures based on the GPD, the SGED, and the non-parametric methods, respectively. composite-indicator-of-systemic-stress-ciss §.§.§ Composite indicator of systemic stress (CISS) A Composite Indicator of Systemic Stress (CISS) is a broad measure of systemic risk in the financial system. The financial system can be divided into three main building blocks: markets, intermediaries, and infrastructures. Each of these building blocks can be split into specific segments. The financial markets segment can be separated into individual markets like money, equity, bond, currency, and derivatives. Within financial intermediaries, the most important are banks and insurance companies. The market infrastructure is composed of payment settlement and clearing systems. CISS aggregates financial stress at two levels: it first computes five segment-specific stress subindices, and then aggregates these five subindices into the final composite stress index. It mostly relies on realized asset return volatilities and on risk spreads to capture the main symptoms of financial stress in the various market segments. The subindices are aggregated analogously to the aggregation of individual asset risks into overall portfolio risk, by taking into account the cross-correlations between all individual asset returns and not only their variances. It is essential for the purpose of constructing the systemic risk indicator to allow for time-variation in the cross-correlation structure between subindices. In this case, the CISS puts more weight on situations in which high stress prevails in several market segments at the same time. The stronger financial stress is correlated across subindices, the more widespread is the state of financial instability according to the “horizontal view” of the definition of systemic stress. The second element of the aggregation scheme potentially featuring systemic risk is the fact that the subindex weights can be determined on the basis of their relative importance for real economic activity. This specific feature in the design of the CISS not only offers a way to capture the “vertical view” of systemic stress, but in doing so it also implicitly accounts for country differences in the structure of their financial systems as long as these actually matter for the transmission of financial stress to the real economy. For the Euro area the subindex weights are: money market 15%, bond market 15%, equity market 25%, financial intermediaries 30%, and foreign exchange market 15%. comparison-and-forecasting-ability-of-ivrvsri §.§ Comparison and Forecasting ability of IVRVSRI correlation-matrix-and-rolling-correlation §.§.§ Correlation matrix and rolling correlation In order to compare existing SRIs with our IVRVSRI, we first visualize their levels, returns and descriptive statistics of weekly returns, which are used in the regressions in the next step.
We also calculate the correlation matrices for the four distinct benchmark SRIs, the IVRVSRI and the S&P500: for weekly returns of all of them and between returns of the S&P500 index and lagged weekly returns of the SRIs. forecasting-ability §.§.§ Forecasting ability simple-regression-model Simple Regression Model   The first model (Equation <ref>) is based on a simple regression of S&P 500 index weekly returns regressed on the lagged weekly returns of the given SRI (either a benchmark one, or the IVRVSRI): r_t^w,SP500 = β_0 + ∑_k=1^pβ_k^ir_t-k^w,SRI_i+ε_t where: r_t^w,SP500 - weekly return of the S&P 500 index on day t, β_k^i - sensitivity of r_t^w,SP500 to lag k of the given SRI_i return, r_t-k^w,SRI_i - lag k of the weekly return of the given SRI_i on day t. Additionally, the forecasting abilities of all SRIs are investigated jointly: r_t^w,SP500 = β_0 + ∑_k = 1^p∑_i = 1^Nβ_k^ir_t-k^w,SRI_i+ε_t where: β_k^i - sensitivity of r_t^w,SP500 to lag k of the given SRI_i return, r_t-k^w,SRI_i - lag k of the weekly return of the given SRI_i on day t. quasi-quantile-regression-model Quasi-quantile Regression Model   The quasi-quantile regression model is our innovative idea, in which we estimate the models described by Equation <ref> and Equation <ref> only for those values of r_t^w,SP500 which fulfill the following conditions: * r_t^w,SP500≤r̅_SP500,t^w * r_t^w,SP500≤ 1st quartile of r_t^w,SP500 * r_t^w,SP500≤ 1st decile of r_t^w,SP500 * r_t^w,SP500≤ 5th percentile of r_t^w,SP500 * r_t^w,SP500≤ 2.5th percentile of r_t^w,SP500 * r_t^w,SP500≤ 1st percentile of r_t^w,SP500 Hence, the model specification for this approach is given by: r_t^w,SP500(p) = β_0 + ∑_k = 1^pβ_k^ir_t-k^w,SRI_i+ε_t where r_t^w,SP500(p) is the weekly return of the S&P 500 index on day t conditional on percentile p of its distribution. quantile-regression-model Quantile Regression Model   Following <cit.>, we estimate the quantile regression model (Equation <ref>) in order to present a comprehensive picture of the forecasting ability of aggregated systemic risk indicators on the equity market, conditioned on the location of the equity index return over its density. Q_τ(r_t^w,SP500) = β_0(τ) + ∑_k = 1^pβ_k(τ)r_t-k^w,SRI_i+ε_t where: Q_τ(r_t^w,SP500) - τ quantile of S&P 500 returns on day t, r_t-k^w,SRI_i - lag k of the return of the given SRI_i, β_k(τ) - percentage change in the τ quantile of the S&P500 index returns produced by the change in the predictor k intervals earlier. data § DATA Our data set is based on daily data for volatility indices (VIX, VSTOXX, VNKY, and VXEWZ) and daily price and market cap data for equity indices (S&P500, EuroStoxx50, Nikkei 225, Bovespa) in the period from 2000 to 2023. Figure <ref> presents the fluctuations of the analyzed time series, while Figure <ref> presents the fluctuations of returns. Figure <ref> informs us about the different magnitudes of upward and downward movements on the analyzed markets, while Figure <ref> additionally visualizes volatility clustering, with high and low volatility periods indicating calm and more stressful periods of time. Drawdowns of the analyzed equity indices, depicted in Figure <ref>, show the length of the most important turmoils and additionally visualize their speed and magnitude. Descriptive statistics of returns, presented in Table <ref>, confirm the well-known facts about equity returns, i.e. high kurtosis, negative skewness and the associated non-normality of returns. In order to calculate the proper weights in the IVSRI, RVSRI and IVRVSRI indicators, we decided to use market capitalization data for each of the equity indices used (Table <ref>).
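Before turning to the results, a compact sketch of how the three regression families introduced above (simple, quasi-quantile and quantile) can be estimated with standard tools is given below. It uses statsmodels for the OLS and quantile fits, a single lag and synthetic placeholder data, and implements the Koenker and Machado pseudo-R^2 used when evaluating the quantile regressions; it is an illustration of the estimation recipe, not the code used for the reported results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data: weekly S&P 500 returns and one lagged SRI return.
rng = np.random.default_rng(1)
n = 600
sri = rng.normal(0.0, 0.03, n)
sp500 = -0.25 * np.roll(sri, 1) + rng.normal(0.0, 0.02, n)   # weak lead-lag link
d = pd.DataFrame({"sp500": sp500, "sri_lag1": np.roll(sri, 1)}).iloc[1:]

# (1) simple regression on the lagged SRI return
ols = smf.ols("sp500 ~ sri_lag1", data=d).fit()

# (2) quasi-quantile regression: the same OLS, but restricted to weeks in which
#     the S&P 500 return falls at or below its 1st decile
cut = d["sp500"].quantile(0.10)
ols_tail = smf.ols("sp500 ~ sri_lag1", data=d[d["sp500"] <= cut]).fit()

# (3) quantile regression at tau = 0.05 with the Koenker-Machado pseudo-R^2
tau = 0.05
qr = smf.quantreg("sp500 ~ sri_lag1", data=d).fit(q=tau)
rho = lambda u: u * (tau - (u < 0))                    # quantile check function
V_full = rho(d["sp500"] - qr.predict(d)).sum()
V_null = rho(d["sp500"] - d["sp500"].quantile(tau)).sum()
pseudo_r2 = 1.0 - V_full / V_null

print(ols.rsquared_adj, ols_tail.rsquared_adj, pseudo_r2)
```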
results § RESULTS Based on the logic “keep it simple”, we want to check if it is possible to create Systemic Risk Indicator based on widely available (most often publicly available and free of charge) volatility risk measures which can have similar properties as systemic risk indicators introduced in highly cited papers (<cit.>, <cit.>, <cit.> or <cit.>) or in the most recent study of <cit.>. In the Results section, we present Figures and map charts visualizing systemic risk indicators and theirs components. ivrvsris-on-the-country-level §.§ IVRVSRIs on the country level Figure <ref> shows the fluctuations of IV indices for each country separately and shows the most significant turmoils affecting the equity market in each country under investigation, i.e. GFC (20087-2009), COVID pandemic (March 2020), and a few of lower magnitude like Eurozone debt crisis (2009-2014), and turmoils in August 2015, February 2018 and November-December 2018. On the other hand, Figure <ref> presents RV indices for each country separately. Comparing Figure (<ref> with Figure <ref>) shows that the anticipated reaction (IV indices in Figure <ref>) to the current market stress is not always the same as the current reaction revealed in realized volatility of returns (IV versus RV for Japan during Covid pandemic in March 2020). Overall, our results show that the magnitude of reactions to the risk events varies across countries. Analyzing IVRVSRI indications on the country levels presented on Figure <ref>, we observe a very weak reaction of Japanese markets to COVID-19 pandemic in March 2020 in comparison to the USD and Eurozone, and literally no reaction of Japanese and Brazilian markets to the European sovereign debt crisis in 2009-2014. Only in the case of the GFC 2007-2009 all analyzed markets reacted strongly but the persistence of the crisis was not the same (Figure <ref>). Brazil and Japan recovered quickly with regard to the speed of the decrease of IVRVSRI indications while the USA and Europe were struggling much longer. Next, Figure <ref> shows the map chart with colored quartile levels of IVRVSRI indications on the country level. It shows that in the case of Eurozone, the GFC extended into the debt crisis and lasted with a small break in 2014 until 2016. In general, before the GFC the Eurozone, Japanese and Brazilian markets were more resilient than the American one to worldwide turmoils while the situation reversed after the Eurozone sovereign debt crisis, with Brazil and Japan being the least resilient in that period among all analyzed countries. ivrvsris-on-the-global-level §.§ IVRVSRIs on the global level Figure <ref> shows the aggregated results for IVRVSRI and its components (IVSRI and RVSRI) on the global level. We can see that after aggregation of the country specific indices all the major financial crises are indicated and additionally we can observed their severity. GFC and Covid were the most severe turmoils, but other ones line the end of downward trend after the Dotcom bubble (2002-2003) and Eurozone debt crisis (2009-2014) are revealed as well. What is more, the reaction of IVSRI and RVSRI components on the global level to the above mentioned turmoils differs with regard to the magnitude of their reaction. Most often, the fear revealed in IVSRI (Panel (1) of Figure <ref>), especially in case of less severe turmoils (Eurozone debt crisis or the bottom of the Dotcom bubble), was not realized in the same magnitude of RVSRI indications (Panel (2) of Figure <ref>). 
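To illustrate how the country indicators can be rolled up to the global level, the sketch below forms a market-capitalization-weighted average of the four country IVRVSRIs and assigns quartile labels of the kind used in the map charts. The file name, column names, the specific weights and the simple weighted-average form are assumptions made only for illustration; the actual weights come from the capitalization data referenced in the Data section.

```python
import pandas as pd

# Country-level indicators (placeholder file and column names), one column per market.
country = pd.read_csv("ivrvsri_country.csv", index_col="date", parse_dates=True)

# Assumed market-cap weights for the four equity markets (illustrative values only).
weights = pd.Series({"USA": 0.60, "Eurozone": 0.20, "Japan": 0.15, "Brazil": 0.05})
weights = weights / weights.sum()

global_ivrvsri = country[weights.index].mul(weights, axis=1).sum(axis=1)
quartile = pd.qcut(global_ivrvsri, 4, labels=["low", "moderate", "elevated", "high"])
print(global_ivrvsri.tail(), quartile.tail())
```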
Figure <ref> presents a colored map chart indicating quartiles of IVSRI, RVSRI, and IVRVSRI on the global level stressing the major turmoils on the aggregated level. The IVSRI and RVSRI show slightly different risk levels in the “transition” periods when systemic risk changes. In general, we can state that the reaction of the implied-volatility-based metrics is faster than the realized volatility one, which is something we have expected. Moreover, the correlation between the IV-based indicator and the general systemic risk indicator (IVRVSRI) is higher than that of the RV-based ones. At the same time, the general systemic risk indicator (IVRVSRI) is a better indicator of systemic risk than any individual indicator based on only one measure of volatility (RVSRI or IVSRI), and this result is robust even after the change of the weights of the RVSRI and IVSRI in the general systemic risk measure. Figure <ref> depicts the comparison of fluctuations of S&P500 index and IVRVSRI on the global level. It clearly shows that each major financial turmoil was reflected on our IVRVSRI almost immediately informing market participants about increased level of stress. comparison-between-sris-and-sp500-index §.§ Comparison between SRIs and S&P500 index In this section, in order to see a broader picture for comparison purposes, we present fluctuations of S&P 500 and analyzed SRIs (Figure <ref>), their weekly returns (Figure <ref>) with descriptive statistics (Table <ref>) and correlations (Table <ref>), and finally correlation between weekly returns of S&P 500 index and lagged SRIs (Table <ref>). forecasting-ability-of-sris §.§ Forecasting ability of SRIs In this section we refer to the forecasting ability of IVRVSRI and benchamrk SRIs with regard to the weekly returns of S&P 500 index. Therefore, we present two set of six models for overlapping and non-overlapping weekly returns. In Tables for simple and quasi-quantile regression we report th adjusted R^2, while in tables for quantile regression the pseudo R^2. Adjusted R^2 was calculated according to the formula: R^2_𝖺𝖽𝗃 = 1 - (1 - R^2) n - 1/n - p - 1 where n is the number of observations and p is the number of parameters to estimate. To calculate pseudo R^2 for the quantile regressions, we follow the <cit.> approach, who propose R_𝗉𝗌𝖾𝗎𝖽𝗈^2 as a local measure of goodness of fit at the particular τ quantile. We assume that: V(τ) = min_b ∑ρ_τ(y_i - x_i^'b) Let β̂(τ) and β̃(τ) and be the coefficient estimates for the full model, and a restricted model, respectively. Similarly, V̂ and Ṽ are the corresponding V terms. The goodness of fit criterion is then defined as R_𝗉𝗌𝖾𝗎𝖽𝗈^2 = 1 - V̂ / Ṽ. The list of tested models is presented in the form two sets of models for overlapping and non-overlapping weekly returns: Simple Regression Models (1-lag and p-lag), Quasi Quantile Regression Models (1-lag and p-lag), and Quantile Regression Models (1-lag and p-lag). regression-models-based-on-the-overlapping-data §.§.§ Regression models based on the overlapping data The results (Tables <ref>, <ref>, and <ref>) of the regressions based on the overlapping data (the simple linear regression, the quasi-quantile regression, and the quantile regression) indicate that the explanatory power of SRIs increase when the larger number of lags is used in the model, and when the regression is performed in a deeper part of the tail of the distribution (quasi-quantile for S&P500 index returns in the lowest decile and quantile regressions for τ<0.1). 
However, the most important conclusion from the presented regressions is that the lagged weekly returns of the IVRVSRI have the largest explanatory power for the weekly returns of the S&P 500 index across all presented regression models. regression-models-based-on-the-non-overlapping-data §.§.§ Regression models based on the non-overlapping data The results of the regressions based on the non-overlapping data (Tables <ref>, <ref>, and <ref>) yield similar conclusions to those for the overlapping data. The explanatory power of the lagged weekly returns of the IVRVSRI is still the highest among all analyzed models, measured by the adjusted R^2 and pseudo-R^2 statistics. Moreover, in the case of the IVRVSRI, the predictive power increases as deeper and deeper parts of the tail distribution are analyzed, which is crucial from the point of view of indicators that measure systemic risk. conclusions § CONCLUSIONS In this study, we propose a robust Systemic Risk Indicator based on the well-known concepts of realized and implied volatility measures. The main contribution of this paper to the broad body of studies on systemic risk indicators is the simplicity of the metrics that we propose, which at the same time yield similar results as more complex tools, thus significantly reducing the model risk. At the same time, the proposed methodology enables calculation of the IVRVSRI on high-frequency data (even on the tick level), which significantly shortens the response time of our indicator to the starting point of each major financial turmoil. Moreover, in comparison with many existing metrics, it is also much less computationally demanding and does not rely on paid data sets or data that is available only to market regulators. Referring to the research hypotheses, we were able to draw the following conclusions. We cannot reject RH1, as we show that it is possible to construct a robust Systemic Risk Indicator (IVRVSRI) based on the well-known concepts of realized and implied volatility measures. Moreover, we cannot reject RH2, as the indication of the proposed Systemic Risk Indicator (IVRVSRI_i) depends on the geographical location of a given equity market. As expected, the robustness of the proposed Systemic Risk Indicator depends on various parameters selected in the process of its calculation: the memory parameter for RV, the time to expiration for IV, the percentile selected for the risk map, and the length of the history used for the calculation of the percentile in the case of the risk map, which supports RH3. Finally, the forecasting ability of the IVRVSRI is undoubtedly the highest among all SRIs used in this study (SRISK, CATFIN, the Cleveland FED Systemic Risk Indicator, and the CISS). This conclusion is backed by three versions of our regression models (simple, quasi-quantile, quantile) for two types of weekly returns (overlapping and non-overlapping data). This study can be extended by adding more countries to the analysis or other asset classes like currencies, commodities, real estate, cryptocurrencies, and hedge funds.
Moreover, using high-frequency data would allow the construction of a real-time early implied volatility and realized volatility systemic risk indicator (rteIVRVSRI) that would serve as an early warning indicator of systemic risk. We plan to design a website within the resources of QFRG in order to publish real-time data on the IVRVSRI and its components for theoretical and practical purposes. Finally, a detailed sensitivity analysis of the IVRVSRI with respect to all crucial parameters assumed in the process of its calculation is still needed.
http://arxiv.org/abs/2307.04717v1
20230710172826
Biomass dust explosions: CFD simulations and venting experiments in a 1 m$^3$ silo
[ "A. Islas", "A. Rodríguez-Fernández", "C. Betegón", "E. Martínez-Pañeda", "A. Pandal" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
1 .001 Islas et al. mode = title]Biomass dust explosions: CFD simulations and venting experiments in a 1 m3 silo 1]Alain Islas[] [1]Department of Energy, University of Oviedo - 33203 Gijón, Asturias, Spain 1]Andrés Rodríguez Fernández[] 2]Covadonga Betegón[ ] [2]Department of Construction and Manufacturing Engineering, University of Oviedo - 33203 Gijón, Asturias, Spain 3]Emilio Martínez-Pañeda[] [3]Department of Civil and Environmental Engineering, Imperial College London - London, SW7 2AZ, United Kingdom 1]Adrián Pandal[orcid=0000-0001-6006-2199] [1] [email protected] [cor1]Corresponding author: This study presents CFD simulations of biomass dust explosions in a newly developed experimental 1 m3 silo apparatus with variable venting, designed and fabricated to operate similarly to the explosivity test standards. The aim of the study is to validate a CFD model under development and investigate its capability to capture the transient effects of a vented explosion. The model is based on OpenFOAM and solves the multiphase (gas-particle) flow using an Eulerian-Lagrangian approach in a two-way regime. It considers the detailed thermochemical conversion of biomass, including moisture evaporation, devolatilization, and char oxidation, along with the homogeneous combustion of gases, turbulence, and radiative heat transfer. The explosion is analyzed in all stages, i.e., dust cloud dispersion, ignition, closed explosion, and vented explosion. The results indicate excellent agreement between the CFD model and experimental tests throughout the sequence. Our findings highlight the critical role of particle size in dust cloud distribution and pre-ignition turbulence, which significantly influences flame dynamics and the explosion itself. This model shows great promise and encourages its application for future investigations of biomass dust explosions in larger-scale geometries, especially in venting situations that fall out of the scope of the NFPA 68 or EN 14491 standards, and to help design effective safety measures to prevent such incidents. Vented dust explosions Biomass CFD OpenFOAM [ [ ===== § INTRODUCTION As global efforts to achieve net zero emissions by 2050 continue, the demand for bio-energy has increased, making biomass combined heat & power (CHP) an attractive option for greenhouse gas (GHG) abatement due to its CO2 neutral characteristics <cit.> and potential to become carbon negative if combined with carbon capture and storage (CCS) <cit.>. From biomass co-firing to dedicated routes, long-term fuel delivery concepts and contracts are essential for the successful operation of biomass power plants <cit.>. However, the supply and availability of feedstock must be carefully considered to ensure continuous and stable power generation <cit.>. Unfortunately, experience has shown that dust explosions are a potential hazard that must be addressed through the implementation of necessary precautions to guarantee safe and reliable plant operation, particularly during fuel handling and storing phases <cit.>. Dust explosions are a significant threat in power plants and other industrial facilities, posing a peril to worker safety and property damage <cit.>. These explosions can occur in a range of equipment, from silos and mills to conveyors and dust collection systems, and can be triggered by various sources, including hot surfaces, electrical sparks, and self-heating processes <cit.>. 
While prevention and inherent safety measures are the primary means of reducing the hazards of dust explosions <cit.>, it is often necessary to implement operational and dynamic risk assessments to better comprehend the probability of occurrence, the potential severity of dust explosions, and to look for mitigation solutions such as venting panels <cit.>. However, determining the appropriate vent size remains a controversial issue <cit.>, despite the existence of established standards, e.g., the EN 14491 or the NFPA 68 codes <cit.>. Likewise, as newly built plants scale up, more cost-efficient, high-volume storage solutions are needed to secure continuing plant operation. Although mammoth silos may seem an attractive option <cit.>, they often fall outside the scope of these standards, highlighting the need for further research. To address these challenges, besides the traditional dust explosion testing activities <cit.>, modeling research <cit.> has emerged as an alternative to predict the consequences of dust explosions with reduced labor and capital. Especially, computational fluid dynamics (CFD) simulations can play a meaningful role in assessing risk analysis, providing a more nuanced understanding of explosion development and designing mitigation systems beyond the simplified scenarios considered by guidelines and standards. These tools have been successfully applied to the study of various aspects related to dust explosions, including dust cloud formation <cit.> and the determination of the explosion severity parameters <cit.>. However, the practical application of CFD codes to large industrial settings requires a pragmatic approach that involves a compromise with accuracy and precision, and initial validation through repeated small-scale experiments is necessary <cit.>. In this paper, we aim to contribute to safety engineering and consequence analysis by presenting the next step in our efforts to develop a reliable computational tool for simulating dust explosions in industrial equipment. Specifically, we validate the performance of our previous CFD model <cit.> by revisiting it and conducting experiments on dust explosion venting. To do this, we designed a self-made silo with a capacity of 1 m3 that features adjustable venting, and we used it to perform biomass dust explosions. We based our test procedure on the EN 14034 <cit.> and ASTM E1226 <cit.> standards, and we constructed the silo based on the design references for standardized test vessels, including the 20L Siwek sphere and 1 m3 ISO chamber. The purpose of this study is to gather experimental data and use it to validate our CFD model's ability to capture the transient behavior that occurs during the different stages of a dust explosion. These stages include: (1) dust cloud dispersion, (2) ignition, (3) pressure development and flame propagation, and (4) pressure relief. Our end goals for this research are two-fold: (1) to enhance the potential of CFD codes to accurately simulate biomass dust explosions and (2) to improve the accuracy of vent sizing calculations to reduce the risk of dust explosions in industrial settings. § MATERIALS AND METHODS §.§ Experimental setup A 1 m3, pressure-resistant silo with adjustable venting was designed in partnership with PHB Weserhütte S.A. and the R&D center IDONIAL in Asturias, Spain. The silo was manufactured in carbon steel ASME SA516-GR70, resistant up to 5 bar g overpressure and its dimensions were scaled down from a typical design of large-scale silo ( >10,000 m3). 
The bottom is beveled to imitate a hopper design and the roof is cone-shaped. The vent openings consist of 8 hinged hatches (198x185 mm), distributed equiangularly on the roof, see Fig. <ref>. The venting area varies between 0.036 to 0.293 m2 representing up to a total venting efficiency of 30.64%. The opening is regulated by polymer bolts specifically designed to withstand a 570 mbar g overpressure. The sizing of these bolts was calculated based on the tensile strength of the material, which required drilling the threaded shank to obtain the appropriate cross-section. To create the dust cloud inside the test vessel, a system of pressurized air injection was employed for dispersing the dust sample into the silo. This system consists of a dust canister that has a volume of 5 L and a length-to-diameter ratio L/D=3.6. Its lower part is cone-shaped to facilitate dust outflow. As in the standards, the canister is pressurized up to 20 bar g, but the 1 m3 silo is vacuumed to -0.125 bar g prior the start of the dispersion process. This condition is important to ensure that the normal pressure at the start of the deflagration test is exactly 0.0 bar g. The air discharge is controlled by a Nordair® U150 electropneumatic valve with an ATEX II 2GD actuator. The ignition delay time t_d was set to 600 ms in all the experiments, matching the value used for the tests in the 1 m3 ISO chamber <cit.>. To favor the radial spread of the dust, a new nozzle was designed. Specifically, an axisymmetric version of the traditional rebound nozzle <cit.> was manufactured and installed at the bottom of the silo. The dust cloud is ignited by means of 2x5 kJ Sobbe® chemical igniters placed right above the dispersion nozzle and upheld by two slender rods. The igniters are fired oppositely at an angle of 45^∘ with respect to the horizontal. The pressure reading is recorded by two pressure transmitters Siemens® Sitrans P320 positioned at the top and on opposite extremes of the cylindrical walls. The control and data acquisition system consists of a programmable logic controller (PLC) Siemens® Simatic HMI and a videocamera. All the tests were conducted in the experimental test site of Applus+ TST® (Tunnel Safety Testing, S.A.) in Asturias, Spain. §.§ Dust sample The aim of this research is to study dust explosions with a representative sample found in industrial processes that manipulate pellets. The test sample is a commercial biomass from a local pellet manufacturer in Asturias, Spain, and is comprised of natural wood sub-products (saw dust, wood chips and debarked wood). The commercial pellets were received in a 15 kg bag format, whose percentage of fine particulates (d<1 mm) was less than 1%, see Fig. <ref>. As only a few grams could be used for the explosion tests, the pellets were ground in a gently-rotating ball mill to generate additional combustible dust. The particle size distribution (PSD) of both the raw fine particulates in the bag and the post-milling samples was determined by Sieve analysis. The cumulative distributions and other size statistics are shown in Fig. <ref>. As noted, the PSD generated by pellet milling contains slightly more fine particle diameters than the PSD of the raw dust. However, the difference is small as indicated by the polydispersity index. 
The polydispersity index σ_D is a measure of the breadth in a size distribution <cit.> and is calculated as σ_D=D_90-D_10/D_50 where the median D_50 and the D_10 and the D_90 values represent the 50%, 10% or 90% point in the cumulative undersize PSD, respectively. A polydispersity index σ_d≪ 1 indicates a high homogeneity in particle size or a narrow PSD, while σ_D≫ 1 represents a heterogeneous particle size or a broad PSD <cit.>. Considering that in industrial applications, the dust particles can vary in size largely, the polydispersity indices of both the original and ground PSDs are comparable. Moreover, the two PSD cover the same order of magnitude and the median varies in ∼ 5%. So, for practical purposes the former size distribution is considered to be representative of typical transporting, handling, and stacking activities of pellets. The ultimate and proximate analysis, as well as the lower calorific value (LCV) of the sample were taken from the manufacturer's specifications sheet, see Table <ref>. §.§ Ignition When the ignition energy is too strong relative to the chamber size, it can cause a significant increase in pressure <cit.>. Several studies have shown that using a 10 kJ ignition source in a small volume such as the 20L Siwek sphere, leads to an overdriving effect that cannot be ignored <cit.>. In closed vessel testing, the overdriving effect can lead to an overestimation of the explosion severity parameters <cit.>, particularly the deflagration index K_st and the minimum explosive concentration (MEC). Each of the 5kJ Sobbe® chemical igniters is charged with 1.2 g of a pyrotechnic powder mixture of 40% w.t. zirconium, 30% barium nitrate, and 30% barium peroxide <cit.>. They are activated electrically with internal fuse wires. To determine whether an overdriving effect takes place in the 1 m3 silo, we conducted a blank test experiment. The pressure-time trace developed by the 2x5 kJ igniters alone was measured in the closed silo and free of any combustible dust. Fig. <ref> presents a comparison between the resulting overpressures in the 1 m3 silo and a 1 m3 ISO chamber from the literature <cit.>. Clearly, there is jump-like behavior during the first moments after the igniters are triggered. This is because the igniters deliver their energy in very short times (∼ 10 ms) <cit.>. The maximum overpressure is registered as approximately 30 mbar g and matches reasonably well the same time-dependency as the experiment in the 1 m3 ISO chamber. Moreover, the 30 mbar g overpressure is consistent with the values reported in other studies <cit.>. According to data collected from blank test experiments in the 20L Siwek sphere, the pressure increase due to 10kJ igniters can vary between 0.8 and 1.6 bar <cit.>. Therefore, when compared to the overpressure in any 1 m3 volume, the overdriving effect can be safely regarded as negligible. § GAS AND PARTICLE PHASE MODELING In this work, the vented biomass dust explosions were simulated in the 1 m3 silo by employing our customized version of OpenFOAM's coalChemistryFoam code <cit.>. This CFD code is a transient solver of two-phase (gas-solid) flow suitable to model compressible flow with turbulence, combustion, chemical reactions and radiative heat transfer. The solver uses an Eulerian-Lagrangian method to solve the particle-laden flow within a two-way coupling regime. Source terms are computed to represent the exchange of mass, momentum, energy and chemical species between the two phases. 
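Returning briefly to the size statistics used earlier in this section, the short sketch below shows how D_10, D_50, D_90 and the polydispersity index can be recovered from a cumulative sieve-analysis curve by interpolation. The mesh sizes and passing fractions are made-up placeholders, not the measured distribution of the Pellets Asturias sample.

```python
import numpy as np

# Placeholder sieve data: mesh size (micron) and cumulative mass fraction passing.
size   = np.array([63, 125, 250, 355, 500, 710, 1000])
cumpct = np.array([0.04, 0.12, 0.38, 0.58, 0.80, 0.93, 1.00])

def d_percentile(p):
    """Particle size below which a fraction p of the sample mass lies."""
    return np.interp(p, cumpct, size)

d10, d50, d90 = (d_percentile(p) for p in (0.10, 0.50, 0.90))
sigma_D = (d90 - d10) / d50        # polydispersity index defined above
print(f"D10={d10:.0f} um, D50={d50:.0f} um, D90={d90:.0f} um, sigma_D={sigma_D:.2f}")
```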
The Lagrangian framework allows for a detailed analysis of biomass burning, including modeling of sensible heating and thermochemical conversion of biomass. To reduce the computational burden, physical particles are replaced with computational parcels, which group together particles with similar properties and whose extensive properties are scaled by a number density. §.§ Gas phase governing equations In CFD simulations of dust explosions, the Reynolds-Averaged Navier Stokes (RANS) closure is often used. The gas phase governing equations consist of the Reynolds-averaged mass, momentum, energy and species transport equations. The mass transport is ∂ρ̅/∂t + ∂/∂x_i(ρ̅ ũ_i)= Γ_i where the overbar denotes that the scalar is Reynolds-averaged and the tilde denotes density-weighted time averaged or Favre-averaged. As reacting particles can exchange mass with the gas phase, the source term Γ_i is included in Eq. (<ref>) to account for the fluid/particle interaction. The momentum transport equations are ∂/∂t(ρ̅ ũ_i) + ∂/∂x_j(ρ̅ ũ_i ũ_j)= - ∂p̅/∂x_j + ∂τ̅^ij/∂x_j+∂/∂x_j(-ρ̅ u_i^' u_j^') +ρ̅ g_i + Λ_i where the Reynolds stress term is calculated using the Bousinessq hypothesis -ρ̅u_i^' u_j^'=2μ_tS_ij-2/3ρ̅k. The standard k-ε turbulence model is used to determine the eddy viscosity μ_t=ρ̅C_μk^2/ε, where k is the turbulent kinetic energy and ε is the turbulence dissipation rate. k and ε are modeled using the following transport equations ∂/∂t(ρ̅k) + ∂/∂x_i(ρũ_̃ĩk)=∂/∂x_i[(μ+μ_t/σ_k)∂k/∂x_i]+P_k-ρ̅ε ∂/∂t(ρ̅ε) + ∂/∂x_i(ρũ_̃ĩε)=∂/∂x_i[(μ+μ_t/σ_ε)∂ε/∂x_i]+C_ε1ε/kP_k-C_ε2ρ̅ε^2/k Again, a source term Λ_i is included in Eq. (<ref>) to represent the momentum exchange due to particles. The enthalpy transport equation is ∂/∂t(ρ̅ h) + ∂/∂x_i(ρ̅ ũ_i h) = D p̅/D t - ∂q̅_̅i̅/∂x_i + τ^ij∂u_i/∂x_j+Θ_i where Θ_i is a source term that accounts for the combined effect of: (1) the homogeneous gas phase reactions, (2) the enthalpy exchange due to the thermochemical conversion of the biomass particles, and (3) the radiative heat transfer. The species transport equation is ∂/∂t (ρ̅ Y_k) + ∂/∂x_i (ρ̅ ũ_̃ĩY_k) = ∂/∂x_i(ρ̅ D_k∂Y_k/∂x_i) + ω̇_k + Φ_k where Y_k is the mass fraction of species k in the gas mixture and ω̇_k is the chemical reaction rate. The source term Φ_k represents the species released/consumed by the particle devolatilization and char conversion. §.§ Particle governing equations In biomass dust explosions, the interaction between the particles and the surrounding medium is through mass and momentum exchange and heat transfer. In the CFD model, each biomass particle is a reactive multi-phase entity, whose content of liquid, gaseous and solid matter is based on the proximate analysis. The mass conservation for each particle is written as dm_p/dt=ṁ_moisture+ṁ_volatiles+ṁ_char where ṁ_moisture, ṁ_volatiles, ṁ_char denote the rate of evaporation, devolatilization, and char oxidation. After all the reactive content is depleted, the biomass is reduced to an inert ash particle. Along its entire thermal history, the particle temperature is obtained from the energy conservation m_pC_pdT_p/dt = πd_pk_gNu(T_∞-T_p) + dm_p/dtΔH + πd_p^2ε_0σ(θ_R^4-T_p^4) where dm_p/dt and Δ H denote the rate of mass consumption within a particle and its associated latent heat due to one of the three mechanisms dm_p/dt = { [ πd_pD_0Sh(p_sat,T/RT_m-X_wp/RT_m)M_w evaporation; -k(T)(m_p-(1-f_VM_0)m_p_0) devolatilization; -πd_p^2 p_o(1/R_diff+1/R_kin)^-1 char oxidation ] . 
The heat and mass transfer numbers, Nu and Sh are found using the Ranz-Marshall correlations for spherical particles <cit.>. Eq. (<ref>) states that biomass combustion can be seen as a three-stage, sequential process: (1) evaporation of moisture, (2) thermal cracking of biomass into light gases, and (3) the heteregenous conversion of char. The evaporation of moisture consists of the endothermic phase change of liquid water contained within the particle into water vapor that is added to the gas phase. The devolatilization or thermal cracking of biomass is the release of volatile gases that are further combusted in the gaseous phase. Contrarily to other solid fuels (e.g., coal), the overall heat release of biomass samples is dominated by the combustion of these gases. For example, the volatile matter in Pellets Asturias represents more than 75% of the total mass, see Table. <ref>. The remaining char is burned by the heteregeneous reaction with oxygen. Depending on various characteristics, e.g., the heating rate, particle residence time or particle temperature, the gas species composition during devolatilization can be quite diverse. For the sake of model simplification, the volatiles are represented as a postulate substance C_xH_yO_z, whose x, y or z subscripts are calculated from the ultimate and proximate analysis. During devolatilization each biomass particle breaks down into the following 4 light gases <cit.> CxHyOz ν_1^'' CO + ν_2^'' CO2 + ν_3^'' CH4 + ν_4^'' H2 LCVVM = ∑_i=1^4Y_i×ΔH_R,i where the lower calorific value (LCV) of volatiles VM is found assuming that the LCV of biomass can be split into the combustion of its separate elements <cit.> LCV_biomass = Y_VM^daf×LCV_VM + Y_FC^daf×LCV_FC Under these considerations, the postulate volatile substance is C_1.03H_2.13O_0.97 and the stoichiometric coefficients ν_i^'' in Eq. (<ref>) are 0.07, 0.44, 0.51, and 0.03 for CO, CO2, CH4, H2 respectively. In all simulations, these gases are combusted following the 4-step reaction mechanism proposed by Jones & Lindstedt <cit.>. The kinematics of the particles is governed by Newton's 2nd law du_p_i/dt=18μ/ρd_p^2C_DRe_p/24(u_i-u_p_i)+g_i(1-ρ/ρ_p) C_D = { [ 0.424 Re_p> 1000; 24/Re_p(1+1/6Re_p^2/3) Re_p≤1000 ] . where the RHS terms of Eq. (<ref>) represent all the forces acting on the particle, namely drag, gravity and buoyancy. The drag factor is determined by the correlation for spherical particles proposed by Putnam <cit.>, Eq. (<ref>). Moreover, we use a stochastic dispersion approach to include the effect of instantaneous turbulent velocity fluctuations on the particle trajectories. With a given u_p_i the position of the particle is computed by integrating the equation dx_p_i/dt=u_p_i. Our customized solver includes more comprehensive submodels for the simulation of the biomass devolatilization and radiative heat transfer phenomena. Specifically, it uses the BioCPD model <cit.> to determine the devolatilization kinetics of biomass samples at elevated heating rates and uses Mie theory calculations to estimate the radiative properties of particles. Moreover it uses a a dry/wet weighted-sum of gray gase model (WSGGM) model to calculate the absorption coefficient of the gaseous mixture <cit.>. For a detailed description of the complete method and the other submodels, the reader is referred to our previous works <cit.>. 
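As an illustration of how the postulated-volatile split above can be obtained in practice, the sketch below solves the C, H and O element balances of the devolatilization reaction together with the calorific-value closure as a small linear system. The heating values of the light gases and the volatile LCV used here are generic literature-style numbers inserted for illustration, not the values behind Table <ref>, so the resulting coefficients only approximately reproduce those quoted in the text.

```python
import numpy as np

# Postulated volatile substance C_x H_y O_z from the ultimate/proximate analysis.
x, y, z = 1.03, 2.13, 0.97
M   = {"CO": 28.01, "CO2": 44.01, "CH4": 16.04, "H2": 2.016}        # kg/kmol
LHV = {"CO": 10.1e3, "CO2": 0.0, "CH4": 50.0e3, "H2": 120.0e3}      # kJ/kg (assumed)
M_vm   = 12.011 * x + 1.008 * y + 15.999 * z                        # kg/kmol of volatiles
LCV_vm = 14.6e3                                                     # kJ/kg, assumed volatile LCV

# Unknowns: moles of CO, CO2, CH4, H2 released per mole of volatiles.
A = np.array([
    [1.0, 1.0, 1.0, 0.0],                                              # C balance
    [0.0, 0.0, 4.0, 2.0],                                              # H balance
    [1.0, 2.0, 0.0, 0.0],                                              # O balance
    [M["CO"]*LHV["CO"], 0.0, M["CH4"]*LHV["CH4"], M["H2"]*LHV["H2"]],  # energy closure
])
b = np.array([x, y, z, M_vm * LCV_vm])
nu = np.linalg.solve(A, b)
print(dict(zip(["CO", "CO2", "CH4", "H2"], nu.round(3))))
```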
§.§ Computational grids In order to simulate the full range of stages in a dust explosion (including dispersion, explosion, and venting), three separate computational grids were employed, see Fig. <ref>. Mesh 1 corresponded to the entire silo, encompassing both the dispersion system and the silo itself. Mesh 2, on the other hand, focused solely on the inner region of the silo, excluding the dispersion system. Finally, mesh 3 was created as an exact copy of mesh 2, but with additional cells to represent the far field region. All grids were manually constructed using the ANSYS ICEM meshing software and subsequently converted to OpenFOAM format for simulation purposes. §.§ Solution strategy The vented dust explosion was simulated in 3 stages, namely: (1) dust dispersion, (2) explosion in closed silo, and (3) vented explosion. * Dust dispersion: the dust particles are initially placed in the dust container at stagnant conditions and the pressure field is initialized accordingly (i.e. 0.875 bar a in the silo and 21 bar a in the dust canister). The ensuing pressure gradient drives the particles from the canister to the silo. The dust injection is simulated for an ignition delay time t_d=600 ms. Right afterwards, the case is stopped and all the Eulerian and Lagrangian fields are mapped from mesh 1 to mesh 2. * Explosion in closed silo: starting from the cold flow solution, the reactive features of the solver (combustion, chemistry, radiation, etc…) are switched on. The ignition mechanism is activated and the dust cloud starts burning. In these simulations, the chemical igniters are again represented by a 10kJ enthalpy source term distributed over a kernel sphere of r=13 cm <cit.> placed right above the axisymmetric rebound nozzle. During the run-time, the instantaneous pressure is monitored each time-step as the weighted-area-average value on the roof surface. The simulation is stopped as soon as the monitor hits the rupture pressure of the polymer bolts, i.e. p_stat=1.570 mbar a. Next, the Eulerian and Lagrangian fields are mapped from mesh 2 to mesh 3. * Vented explosion: once mesh 3 is initialized with the latest reactive solution, the boundary condition at the venting areas are switched from walls to interior cells. The number of venting areas is changed based on the venting scenario. For the rest of simulation, the deflagration is allowed to escape to the surroundings and the pressure inside the 1 m3 silo decays. This approach enabled us to simulate the entire dust explosion using computational meshes tailored to specific purposes. Specifically, the grids 2 and 3 are structured and were meshed manually with topologies comprising hexagonal blocks. This helps ensure that all fluxes in the discretized equations, which involve numerous physics, pose high orthogonality and converge correctly. The boundary conditions and initialization settings of each stage of the simulation are provided in Table <ref> and Table <ref>, respectively. The conservation equations of the Eulerian phase were discretized using first-order upwind schemes and second-order difference schemes for the convective terms and diffusive terms, respectively. The gradients were evaluated using a cell-limited scheme scheme with cubic interpolation. The transient discretization was treated with a first-order Euler scheme with an adaptive time-stepping method to satisfy a Courant-Friedrich-Lewy (CFL) condition of CFL=1.0. The pressure-velocity coupling was solved by the PIMPLE algorithm with 3 correctors per time step. 
The flow residuals were set to 10^-8 for continuity/pressure, and to 10^-12 for momentum, energy, species and turbulence equations, respectively. § RESULTS AND DISCUSSION §.§ Dispersion system The transient behavior of the pressurized air injection can be estimated if we assume that the air is an ideal gas and that the discharge is modeled as a poly-tropic process (p^n=C), see Fig. <ref>. If the pressure, temperature, and density in both the canister and 1 m3 silo are given at t=0, the pressure evolution p_C,t+Δ t in either control volume C 1 or 2 can be found as p_C,t+Δt=p_C,t(ρ_C,t+Δt/ρ_C,t)^n where n is the poly-tropic exponent. The density at time t+Δ t is estimated by applying conservation of mass to the corresponding control volume d ρ_C/dt=±ρ_0A_tM_t(γR T_0)^1/2/(1+γ-1/2M_t^2)^-γ+1/2(γ-1) where a positive value represents a charging process and a negative value represents a discharging process. Since the vessels are connected, the mass leaving the canister equals the mass entering the silo. The RHS of Eq. (<ref>) refers to the mass flow rate at the throat area A_t considering the compressibility effects. The properties at the throat are calculated using isoentropic relations and assuming that the canister is at stagnant conditions (T_0=T_c and ρ_0=ρ_c) T_c/T_t =1+γ-1/2M_t^2 ρ_c/ρ_t =(1+γ-1/2M_t^2)^1/(γ-1) The velocity at the throat V_t is related to the Mach number as V_t=M_t(γ R T_t)^1/2 whose sonic or subsonic behavior is determined by the choked flow condition. Finally, the temperature evolution T_C,t+Δ t in any of the control volumes is predicted using the ideal gas law T_C,t+Δt=p_C,t+Δt/ρ_C,t+ΔtR Performance of the dispersion system was checked by comparing the experimental measurements with the 0D poly-tropic model implemented in Matlab. After iterating over various poly-tropic exponents, the best agreement of the final pressure reading was found for n=1.3. The pressure trends in both the canister and 1 m3 silo are shown together with the model results in Fig. <ref>. Although the ignition delay time was 600 ms, the experimental reading suggests that the pressure in the silo and canister stabilizes around 500 ms. In contrast, the time for stabilizing the pressures calculated by the model varies and is slightly ahead of the experimental data. Such offset can be attributed to various factors: (1) the spatial effects are not resolved (such as the length and curvature of the conduit), (2) friction effects are neglected, and (3) the time delay of the electropneumatic valve is ignored. As illustrated in the figure, the pressure discharge in the experiment does not commence immediately. Instead, it appears to be delayed by approximately 20 ms before it gradually develops. To account for irrecoverable losses and improve the model prediction, a discharge coefficient C_d could be introduced; however, this is difficult to estimate based on the number of parameters in the experiment. Despite its simplicity, the poly-tropic model is a useful tool to get reasonable estimates of the behavior of the pressurized air injection. It helps to calculate the desired pressure upon commencing the explosion test and to establish an according ignition delay time. In addition, it can provide other useful information, such as the fact that in our current setting, the flow is highly turbulent for almost half of the air blast. During this period, the Reynolds number is in the order of 𝒪(Re)∼10^6 and the flow velocity becomes subsonic only after approximately 300 ms. 
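Since the 0D model above is straightforward to reproduce, the following sketch integrates the poly-tropic discharge equations in time for the canister/silo pair. The effective throat area, time step and stopping criterion are assumptions chosen for illustration (the original calculation was carried out in Matlab with the rig's actual geometry), so the resulting times and pressures should only be read qualitatively.

```python
import numpy as np

# 0D poly-tropic model of the canister/silo discharge described above.
# The throat area is an assumed placeholder; duct geometry, valve delay and
# friction losses are not represented.
gamma, R, n_poly = 1.4, 287.0, 1.3
V_can, V_silo = 5e-3, 1.0              # m^3
A_t = 2.0e-4                           # m^2, assumed effective throat area
p_can, p_silo = 21.0e5, 0.875e5        # Pa absolute (20 bar g and -0.125 bar g)
T0 = 293.0
rho_can, rho_silo = p_can / (R * T0), p_silo / (R * T0)

dt, t_end, t = 1.0e-4, 0.6, 0.0
while t < t_end and p_can - p_silo > 1.0e3:
    T_can = p_can / (rho_can * R)                       # canister stagnation temperature
    pr_crit = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))
    if p_silo / p_can < pr_crit:                        # choked flow at the throat
        M_t = 1.0
    else:                                               # subsonic discharge
        M_t = np.sqrt(2.0 / (gamma - 1.0) *
                      ((p_can / p_silo) ** ((gamma - 1.0) / gamma) - 1.0))
    T_t = T_can / (1.0 + 0.5 * (gamma - 1.0) * M_t**2)
    rho_t = rho_can / (1.0 + 0.5 * (gamma - 1.0) * M_t**2) ** (1.0 / (gamma - 1.0))
    mdot = rho_t * A_t * M_t * np.sqrt(gamma * R * T_t)

    rho_can_new = rho_can - mdot * dt / V_can           # mass conservation, canister
    rho_silo_new = rho_silo + mdot * dt / V_silo        # mass conservation, silo
    p_can *= (rho_can_new / rho_can) ** n_poly          # poly-tropic update, p ~ rho^n
    p_silo *= (rho_silo_new / rho_silo) ** n_poly
    rho_can, rho_silo = rho_can_new, rho_silo_new
    t += dt

print(f"pressures equalised after ~{t*1e3:.0f} ms: "
      f"canister {p_can/1e5:.2f} bar a, silo {p_silo/1e5:.2f} bar a")
```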
§.§ Non-reactive flow In dust explosions the particle size is a crucial factor, as it influences not only the dynamic behavior of the dust cloud, but also the reactivity of the fuel. Thus, it is essential to analyze dust dispersion in order to fully comprehend the development of the explosion. Dust dispersion by pressurized air is the most common method for dust explosion testing. In this method, the dispersion nozzle plays a leading role in the whole dispersion system, as it distributes the dust inside the test vessel and regulates the flow pattern and turbulence intensity. In this work, the axisymmetric rebound nozzle produces a recirculating flow pattern characterized by four large vortices that emerge from the nozzle outlet and extend up to the top of the silo. These vortices are distributed symmetrically and rotate in clockwise direction (right vortex) and counter-clockwise direction (left vortex), as depicted by the streamlines in Fig. <ref>. The top row of the figure corresponds to the dust-free flow injection, while the bottom row illustrates the behavior with a nominal dust concentration C_0=500 g m-3. In all the experiments, the ignition delay time is 600 ms, and throughout the following discussion, all graphs and figures represent this time interval from -600 ms to 0 ms. In the early stage (as shown in Fig. <ref>), the cores of the vortices are located at the top, specifically at a height of 0.6 < y/H < 0.8, which is very close to the corners. When dust is injected, the vortices are still able to form, but their location is altered. The vortices shift downward to a height of approximately y/H = 0.5 for the same time interval. This occurs because the first injected particles have high velocities and collide strongly with the silo roof. As a result, they descend and hinder the vortices from extending all the way to the top. As the flow progresses and particles are fully injected, the vortices rise vertically and the cores return to their original positions at the corners (refer to Fig. <ref>). Moreover, by the end of the dispersion process, small vortices form at tip of the roof. Unfortunately, these recirculating flow structures do not facilitate dust cloud mixing, as they behave as stagnant zones, where only air is trapped. This flow pattern bears striking resemblance to that of the 20L Siwek sphere, where Benedetto et al. <cit.> exposed that the vortices tended to deposit most of the particles near the vessel walls. However, in case of the 1 m3 silo, the particles accumulate in the central section of the silo, particularly in a central column. Fig. <ref> depicts the particle distribution in two half-plane slices, revealing that the cloud is highly concentrated in the region between -0.5 ≤ x/R ≤ 0.5. The concentration in this central column varies locally along the height and is notably highest in the lower zone, where it reaches a maximum of 20 kg m-3 before decreasing to 2.5 kg m-3 in the uppermost part of the column. When injecting a dust sample with large particle diameters, the particles may behave ballistically and interact very little with the air flow. A convenient way to determine if the particles trajectories adjust to the air streamlines is by calculating the Stokes number. The Stokes number is a parameter that relates the particle response time to a characteristic time scale of the fluid, which can be used to determine whether the particles are in equilibrium with the air or not. 
The Stokes number is: Stk = τ_p/τ_f τ_p = ρ_pd_p^2/18μ_f24/C_DRe_p where C_D, Re_p and τ_f are the particle drag coefficient, the particle Reynolds number, and the characteristic fluid time scale τ_f=l_e/|ũ_i|, respectively. In these calculations, l_e is the integral length scale C_μk^3/2/ε, the drag coefficient obeys the correlation shown in Eq. (<ref>) and the particle Reynolds number is Re_p=|ũ_i-u_p_i|d_p/ν_f For particles with a Stokes number less than unity, the fluid can alter their trajectories and cause them to closely follow the motion of vortices. However, when the Stokes number of particles is greater than unity, their strong inertia impedes the fluid streamlines from altering their trajectories. Fig. <ref> clearly shows that the cloud has a small radial aperture and very few particles are contained within the vortices. Fig. <ref> shows that the Stokes number logarithmically varies as a function of particle diameter, where a threshold value of Stk = 1.0 indicates a critical particle diameter of apparently 100 microns. Particles larger than this size display a response time up to two orders of magnitude greater than the fluid time scale, meaning that it takes significantly longer for them to adapt to vortex motion. Based on the particle size distribution depicted in Fig. <ref>, over 90% of biomass dust particles are affected by this condition, as their adaptation time is over 100 times longer than the fluid time scale. Only particles with diameters less than 100 microns are able to interact with recirculating flow. Furthermore, the figure illustrates that the larger the particle diameter, the shorter its distance from the central dust column. Conversely, as particle diameter decreases, the particles can be dispersed over a wider range of radial distances. In addition, Fig. <ref> shows that the particle Reynolds number also displays a logarithmic dependence on particle diameter, with particle inertia being up to four orders of magnitude greater than fluid inertia. This confirms that large particles exhibit a ballistic behavior and thus, the dust cloud is unable to mix homogeneously with the air. Another important aspect of dust explosions is the initial turbulence at which the dust cloud ignites. Turbulence is a function of many aspects, such as the dispersion nozzle, the flow pattern, the dust concentration, the particle size or the ignition delay time. In explosivity tests in standardized vessels, it is well known that an increase in turbulence levels increases the rate of pressure rise. Fig. <ref> shows a comparison of the velocity fluctuations between the air-only case and the dust case. For the air-only case, a maximum value of u^'_RMS=16 m/s is reached just about 40 ms after the start of the injection, whereafter the turbulence is strictly decreasing until the end of the injection delay time. This happens because the intensity of the pressure gradient decreases steeply and continuously until the pressures of both, the canister and the 1 m3 silo stabilize. In addition, the mechanisms of turbulence production such as wall friction and shear layers are not strong enough to counteract the decay of the pressure gradient. On the contrary, for the case with dust injection, there are two periods of turbulence decay. The first of these periods corresponds to the times between -600 and -400 ms, where the velocity fluctuations are smaller than in the case with air injection only. 
This is because the discharge of particle-laden air obstructs and slows down the flow entering the silo, debilitating the baroclinic force and decreasing the incoming velocity. The second period corresponds to the times between -260 and 0 ms, where the velocity fluctuations reached a maximum value of u^'_RMS=8 m/s and is higher than the case of injection with only air. The intermediate time between -400 and -260 ms is a period of stabilization and turbulence production. Here, the decay of the first period is counteracted mainly for two reasons. The first is that by this time 70% of the nominal dust concentration has already entered the 1 m3 silo, which frees the flow from local blockages caused by the particles. The second is that the particles already injected create local distortions in the flow, generating self-induced wakes and vortices around the particles. In this period, the velocity fluctuations double, rising from u^'_RMS=4 m/s to 8 m/s. Interestingly, the turbulence of the particulate flow is consistently higher than that of the air-only flow from -300 to 0 ms, as shown in Fig. <ref>. This period is referred to as "turbulence enhancement". This finding is significant because the intensity of turbulence before dust cloud ignition can considerably impact the rates of heat transfer and chemical reactions during the explosion, a phenomenon known as turbulence modulation. Turbulence modulation is the inertial effect of particles on the flow turbulence relative to non-particle flow. Crowe <cit.> identified some factors that influence turbulence modulation: surface effects, inertial effects, and response effects. On the one hand, the inertial and response effects are determined by the dimensionless numbers mentioned earlier. Because the particle Reynolds number and Stokes number cover several orders of magnitude above unity, the dispersion of this biomass sample is likely to significantly affect the flow turbulence compared to the flow without particles. On the other hand, surface effects are determined by the particle diameter normalized by a length scale representative of the flow. As the dispersion time comes to a close, turbulence decreases for both dust-free flow and particle-laden flow. The Kolmogorov scale, which represents the smallest eddies where viscosity is the dominant force and turbulent kinetic energy is dissipated in heat, serves as a characteristic length scale for turbulent eddies. Fig. <ref> shows the change in turbulence intensity versus particle diameter normalized by the Kolmogorov length scale. The percentage change in turbulence intensity is defined as ΔTurb. Intensity %=Turb. Int.TP-Turb. Int.SP/Turb. Int.SP×100 where the turbulent intensity is calculated based on the hypothesis of isotropic turbulence, Turb. Int=√(2k/3)/|ũ_i| and the subscripts TP and SP refer to the two-phase and single-phase flows, respectively. All values were calculated locally at the particle positions inside the 1 m3 silo. The Gore-Crowe classification identifies the critical value d_p/η = 0.1, which marks the threshold at which higher values of d_p/η will cause an increase in the turbulence intensity of the entrained gas, and lower values will cause a decrease. The figure shows that the particles generally follow this criterion, with most of the data points lying to the right of the threshold value and above the horizontal line representing zero. Notably, particles located closest to the central dust column exhibit the highest d_p/η ratios. 
A possible explanation for this phenomenon is that the particles generate turbulence in their wake at the length scale of the smallest eddies, resulting in an increase in the turbulence intensity of the air. In this case, the energy is transferred from the particles to the turbulent kinetic energy of the trailing gas. Therefore, according to the map, when injecting a dust loading of 500 g m-3, the local increase in turbulence intensity can be as high as 4000%. §.§ Reactive flow The venting experiments were conducted during the last quarter of year 2022 in the experimental test tunnel of Applus+ TST. A delimiting explosive area was established and the 1 m3 silo was safely installed and anchored to the ground. The tests were carried out under the supervision and support from IDONIAL and Applus+ TST personnel and were operated from a remote control unit. All tests were performed for a concentration of 500 g m-3 with an ignition delay time of 600 ms and an activation energy of 10 kJ. Fig. <ref> shows a qualitative comparison of the flame propagation between the experiment and the simulation during a vented explosion with 1 hatch open (venting percentage of 3.83%). Upon the rupture of the polymer bolt, the hinged hatch opens violently and lets a jet flame to escape perpendicularly to the silo roof. The gaseous flame comes out accompanied with burning biomass particles and an elevated inertia. Remarkably, the CFD simulation predicts a flame shape that resembles very well to the experimental observation, with a flame temperature that is close to 1500 K and that extinguishes after a few seconds. Fig. <ref> displays the pressure profile recorded in the 1 m 3 silo and canister during the test. The graph illustrates the pressure increase in the silo on the left side and the pressure discharge in the canister on the right side. As noted, the pressure profile predicted by the CFD model agrees very well with the experimental test, both showing a maximum pressure value of 1570 mbar a. This value corresponds to the rupture pressure of the polymer bolts, which triggers the opening of the hinged hatch, allowing the pressure wave and flame to escape from the silo to the surroundings. Shortly after, the pressure inside the silo drops abruptly to atmospheric pressure. A key metric of the test is the time taken to vent the explosion. The CFD model predicts that the static pressure of the bolts is reached 785 ms after activating the igniters, while the experimental test records a time of 757 ms. The relative error is 3.5% and endorses that the model is in excellent agreement with the experiment, not only capturing the maximum overpressure or the flame propagation, but the transient behavior in general. To quantify the fuel burned during the explosion, we analyzed the consumption of each reactive component in the dust cloud, as shown in Fig. <ref>. The graph is divided into two regions: (1) before the opening of the hatch (p < p_stat) and (2) after the opening of the hatch (p > p_stat). The data in the first region of the graph shows that prior the opening of the hatch, the O2 mass fraction decreased from 0.23 to 0.17. This suggests that the dust cloud burned in small amounts, with only approximately 10% of the volatile gases being released from the particles. Despite this, the combustion of such small amount of gases was enough to create an overpressure of 570 mbar g inside the silo. 
In the second region of the graph, it is evident that the other reactive components of the cloud burned slowly, with a slight increase in the consumption rates starting at 1500 ms. However, even after 2 seconds, only 40% of the volatile gases had been released and significant amounts of moisture and char remained in the particles. Based on the proximate analysis of the biomass dust, only 50 g of the available 500 g of mass had been consumed by the time the hatch opened, and only 194 g had been consumed after 2 s of ignition. Fig. <ref> illustrates the evolution of the dust explosion during all the stages of the experiment. The first 6 contours represent the dust injection phase, which is non-reactive flow. These contours are colored by dust concentration and are spaced every 100 ms until the time 0 is reached. From that point, the temperature contours of the reactive flow are displayed up to 1500 ms. The flame development begins with the activation of the pyrotechnic igniters, which, as mentioned before, are modeled as a 10kJ sphere of radius 13 cm placed above the axisymmetric rebound nozzle. These igniters generate the initial flame that induces the dust particles to produce a self-propagating flame kernel. The resulting flare propagates vertically, creating a mushroom-shaped flame. This is primarily due to 2 factors: * First, hot gases are drawn upward by the flow pattern and velocity field achieved at the end of the dispersion process. This is evident from Fig. <ref>. * Second, the flame spreads in the direction where the fuel is present, which is linked to the distribution of the dust cloud during the injection phase. As discussed in the previous section, the large particle size of the dust resulted in a thin central cloud distribution. Once the flame reaches the uppermost part of the silo, it hits the roof and expands radially, seeking areas where the local equivalence ratio allows for stoichiometric combustion. The mushroom appearance is due to the fact that the local dust concentration is higher near the silo roof than within the vortex cores, where the dust/air mixture is extremely lean, as shown in Fig. <ref>. Starting at 785 ms, the temperature contours correspond to the vented explosion. The jet flame can be observed escaping perpendicular to the roof and reaching its maximum length and temperature within the next 100 ms. This phenomenon can be attributed to the pressure gradient that propels the flame from the silo to the surrounding atmosphere. Once the pressure in the silo stabilizes with atmospheric pressure, the jet flame partially extinguishes. After 1000 ms, the flame is reignited due to the unburned particles that left the silo and encountered fresh oxygen in the atmosphere. However, the flame weakens and bends down since the velocity magnitude of the jet has decayed significantly. An additional experimental test was conducted with two open hatches and identical operating conditions as the previous test. Pressure curves are shown in Fig. <ref>. Again, once the rupture pressure of the bolts is reached, the pressure in the silo decreases rapidly until it reaches atmospheric pressure. However, this time the CFD model is slightly delayed compared to the experimental results, with the static pressure of the bolts being reached at 785 ms in the simulation compared to 600 ms in the test. This can be attributed to an unexpected incidence with the experimental test. 
According to the pressure discharge in the canister, the experimental reading registered a bump right below the 5 bar g and during almost the second half of the ignition delay time. This delay may have occurred due to a dust blockage in the dispersion duct, which slowed pressure balancing in both vessels. A weakened pressure gradient may cause the initial pressure in the silo to decrease at the moment of dust cloud ignition. Igniting the air/dust mixture at pressures below 1 bar may affect mass transfer rates, particularly the evaporation rates due to the pressure-dependent nature of water's phase change from liquid to vapor state. At pressures below 1 bar, the evaporation point of moisture is reduced, so during the first 100 ms of the experimental test, the dust cloud may have burned faster than if it were ignited at atmospheric pressure. This faster evaporation of moisture may result in the earlier release of volatile gases, advancing the pressure rise curve. Nevertheless, the CFD model predicts transient effects quite accurately. Fig. <ref> illustrates the evolution of this experiment. As only the number of open hatches was changed for this test, all contours up to 785 ms are identical to those of the previous explosion. Two flame jets escape perpendicularly to the silo roof, developing their maximum length and temperature within a few milliseconds after opening. The flames partially extinguish after 900 ms and appear again after 1100 ms. The first jet flame is a consequence of the pressure wave that drags the hot gases and particles towards the far field, while the second flame is due to the fact that both the particles and the fresh gases escaping from the silo encounter abundant oxygen in the surroundings. The forming flames attach to the periphery of the hatches and bend downward because of a debilitated velocity field. Also, it is interesting to note that the flame inside the silo maintains its mushroom shape and does not propagate into the interior of the vortices. This suggests that the particles never mixed with the recirculating flow pattern and remained distributed in the central column previously studied. Finally, we identified the particle size with higher reactivity at the time of hatch opening, Fig. <ref> classifies the consumption of each fuel component based on particle diameter. The amount of gases released from the dust cloud is higher than the burned mass of the fixed char, as expected due to the higher volatile matter content in biomass combustion. Furthermore, particles with a diameter close to 300 μm not only reached the highest temperatures, but also exhibited the highest reactivity with respect to devolatilization. While this may seem counter-intuitive, in our previous works <cit.>, we have consistently emphasized that a dust explosion, specifically flame propagation, cannot be attributed solely to isolated factors such as particle size, dust distribution, velocity field, or residence time. Instead, it is the combined effect of all these factors that drives the phenomena. Within our 1m3 silo, we have demonstrated the non-uniform dispersion of dust, with particles concentrated in a central column, particularly larger particles (>100 μm) that represent the majority of the particle size distribution (as shown in Fig. <ref>). Additionally, particles smaller than 100 μm exhibit some radial scattering, as depicted in Fig. <ref>. Consequently, it is the larger particles that ignite the air/dust mixture in our specific case, contradicting conventional expectations. 
This counter-intuitive behavior arises because the larger particles are aligned with the ignition source and effectively convect the flame vertically upward, as evidenced in Fig. <ref> in conjunction with either Fig. <ref> or Fig. <ref>. A notable distinction arises when comparing the flow characteristics achieved in apparatuses like the Godbert Greenwald (G-G) furnace to that of our 1m3 silo. The (G-G) furnace, often used for pyrolysis studies at high-heating rates <cit.>, exhibits a flow with relatively lower turbulence levels and enhanced uniformity due to its simplified geometry and absence of dispersion nozzles. The small pressure gradient used for dust dispersion in the (G-G) furnace leads to a flow environment that is potentially more uniform. This results in a lower level of turbulence and enhanced consistency in terms of velocity field, temperature, and dust distribution. As a consequence, all particles, irrespective of their size, have the opportunity to react under similar conditions, making it easier to establish correlations between devolatilization times and particle sizes. In contrast, our 1m3 silo experiences significantly higher turbulence and attains sonic velocities. The complex dust dispersion pattern observed within our silo further contributes to the non-homogeneous nature of the combustion process. Therefore, “turbulent dust flames depend on the nature of the turbulent flow field and thus on the experimental apparatus and are not basic to the dust itself” as suggested by Smoot <cit.>. § CONCLUSIONS In this study we conducted experimental tests and CFD simulations of biomass dust explosions in a newly developed 1 m3 silo apparatus designed for analyzing explosions in situations with variable venting. We examined all stages of a biomass dust explosion, including dust dispersion, ignition, closed and vented explosion. Our CFD results indicate that the flow characteristics after dust dispersion plays a crucial role in flame propagation and the explosion itself, and depends largely on particle size and the dispersion system. The turbulence prior to ignition and distribution of the dust particles also significantly affect the reactive characteristics of the cloud. During the explosion, our CFD model accurately predicted the time evolution of the pressure, particularly with regard to maximum overpressure and pressure relief. We observed similar pressure drops for the two venting scenarios studied. The promising results obtained from our CFD simulations encourage the use of our CFD model to simulate larger scale geometries for further investigation of dust explosions. Future work will involve simulating additional test cases to gain a deeper understanding of the explosion behavior of biomass dust, especially in venting situations that fall out of the scope of the NFPA 68 or EN 14491 standards, and to help design effective safety measures to prevent such incidents. § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. § CREDIT AUTHORSHIP CONTRIBUTION STATEMENT A. Islas: Conceptualization, Formal analysis, Data curation, Methodology, Software, Validation, Investigation, Resources, Writing - original draft, Writing - review & editing, Visualization. A. Rodríguez Fernández: Methodology, Software, Validation, Investigation, Resources, Writing - review & editing. E. 
Martínez-Pañeda: Conceptualization, Writing - review & editing, Funding acquisition. C. Betegón: Writing - review & editing, Supervision, Project administration, Funding acquisition. A. Pandal: Conceptualization, Methodology, Software, Investigation, Resources, Writing - review & editing, Supervision, Funding acquisition.

§ ACKNOWLEDGEMENTS

The authors acknowledge that this work was partially funded by CDTI (Centro para el Desarrollo Tecnológico Industrial de España, IDI-20191151), Universidad de Oviedo and PHB WESERHÜTTE, S.A., under the project "FUO-047-20: Desarrollo de silo metálico de grandes dimensiones ante los condicionantes de explosividad de la biomasa". Likewise, the authors acknowledge the computer resources provided by the Altamira Supercomputer at the Institute of Physics of Cantabria (IFCA-CSIC), member of the Spanish Supercomputing Network, and the technical support provided by the Advance Computing group at University of Cantabria (UC) (RES-IM-2022-3-0002). A. Islas acknowledges support from the research grant #BP20-124 under the 2020 Severo Ochoa Predoctoral Program of the Principality of Asturias.
http://arxiv.org/abs/2307.04605v1
20230710144526
Moire-enabled artificial topological superconductivity in twisted bilayer graphene
[ "Maryam Khosravian", "Elena Bascones", "Jose L. Lado" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
http://arxiv.org/abs/2307.03942v1
20230708093617
Ariadne's Thread:Using Text Prompts to Improve Segmentation of Infected Areas from Chest X-ray images
[ "Yi Zhong", "Mengqiu Xu", "Kongming Liang", "Kaixin Chen", "Ming Wu" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Ariadne's Thread[Ariadne's thread: the name comes from the ancient Greek myth that tells of Theseus walking out of the labyrinth with the help of Ariadne's golden thread.]: Using Text Prompts to Improve Segmentation of Infected Areas from Chest X-ray images

Yi Zhong, Mengqiu Xu, Kongming Liang, Kaixin Chen, Ming Wu
Beijing University of Posts and Telecommunications, China
{xiliang2017, xumengqiu, liangkongming, chenkaixin, wuming}@bupt.edu.cn

August 12, 2023

Segmentation of the infected areas of the lung is essential for quantifying the severity of lung diseases such as pulmonary infections. Existing medical image segmentation methods are almost exclusively uni-modal methods based on images. However, these image-only methods tend to produce inaccurate results unless trained with large amounts of annotated data. To overcome this challenge, we propose a language-driven segmentation method that uses text prompts to improve the segmentation result. Experiments on the QaTa-COV19 dataset indicate that our method improves the Dice score by at least 6.09% compared to the uni-modal methods. Besides, our extended study reveals the flexibility of multi-modal methods in terms of the information granularity of text and demonstrates that multi-modal methods have a significant advantage over image-only methods in terms of the size of training data required.

§ INTRODUCTION

Radiology plays an important role in the diagnosis of some pulmonary infectious diseases, such as COVID-19 pneumonia, which broke out in late 2019 <cit.>. With the development of deep learning, deep neural networks are increasingly used to process radiological images for assisted diagnosis, such as disease classification, lesion detection and segmentation. With the fast processing of radiological images by deep neural networks, some diagnoses can be obtained immediately, such as the classification of bacterial or viral pneumonia and the segmentation mask for pulmonary infections, which is important for quantifying the severity of the disease as well as its progression <cit.>. Besides, these diagnoses given by the AI allow doctors to predict risks and prognoses in a "patient-specific" way <cit.>. Radiologists usually take more time to complete lesion annotation than AI, and annotation results can be influenced by individual bias and clinical experience <cit.>. Therefore, it is important to design automatic medical image segmentation algorithms to assist clinicians in developing accurate and fast treatment plans.

Most biomedical segmentation methods <cit.> are improvements based on U-Net <cit.>. However, the performance of these image-only methods is constrained by the amount of training data, which is a persistent dilemma in the medical imaging field. Radford et al. proposed CLIP <cit.> in 2021, where they used 400M image-text pairs for contrastive learning. With the rise of multi-modal learning in recent years, there are also methods <cit.> that focus on vision-language pretraining/processing and apply them to downstream tasks. Li et al. proposed a language-driven medical image segmentation method, LViT <cit.>, using a hybrid CNN-Transformer structure to fuse text and image features.
However, LViT uses an early fusion approach, and the information contained in the text is not well represented. In this paper, we propose a multi-modal segmentation method that uses independent text and image encoders, and we design a GuideDecoder to fuse the features of both modalities at the decoding stage. Our main contributions are summarized as follows:

* We propose a language-driven segmentation method for segmenting infected areas from lung X-ray images. The source code of our method is available at: https://github.com/Junelin2333/LanGuideMedSeg-MICCAI2023
* The designed GuideDecoder in our method can adaptively propagate sufficient semantic information of the text prompts into pixel-level visual features, promoting consistency between the two modalities.
* We have cleaned the errors contained in the text annotations of QaTa-COV19 <cit.> and contacted the authors of LViT to release a new version.
* Our extended study reveals the impact of the information granularity of text prompts on the segmentation performance of our method, and demonstrates the significant advantage of multi-modal methods over image-only methods in terms of the size of training data required.

§ METHOD

The overview of our proposed method is shown in Fig. <ref>(a). The model consists of three main components: an Image Encoder, a Text Encoder and a GuideDecoder that enables multi-modal information fusion. Our proposed method uses a modular design, which is more flexible than the early-stage fusion used in LViT. For example, when our method is applied to brain MRI images, thanks to the modular design, we could first load pre-trained weights trained on the corresponding data into the separate visual and text encoders, and then only need to train the GuideDecoders.

§.§.§ Visual Encoder & Text Encoder

The Visual Encoder used in the model is ConvNeXt-Tiny <cit.>. For an input image I∈ℝ^H× W×1, we extract multiple visual features from the four stages of ConvNeXt-Tiny, which are defined as f_4∈ℝ^H/4×W/4× C_1, f_8∈ℝ^H/8×W/8× C_2, f_16∈ℝ^H/16×W/16× C_3 and f_32∈ℝ^H/32×W/32× C_4. Note that C is the feature dimension, and H and W are the height and width of the original image. For an input text prompt T ∈ℝ^L, we adopt CXR-BERT <cit.> to extract text features g_t ∈ℝ^L× C. Note that C is the feature dimension and L is the length of the text prompt.

§.§.§ GuideDecoder

Due to our modular design, visual features and textual features are encoded independently by different encoders. Therefore, the design of the decoder is particularly important, as we can only fuse the multi-modal features from the different encoders at a later stage. The structure of the GuideDecoder is shown in Fig. <ref>(b). The GuideDecoder first processes the input textual features and visual features before performing multi-modal interaction. The input textual features first go through a projection module (i.e. Project in the figure) that aligns the dimensionality of the text tokens with that of the image tokens and reduces the number of text tokens. The projection process is shown in Equation 1. f_t = σ(Conv(T W_T)) where W_T is a learnable matrix, Conv(·) denotes a 1×1 convolution layer, and σ(·) denotes the ReLU activation function. Given an input feature T ∈ℝ^L× D, the output projected features are f_t ∈ℝ^M × C_1, where M is the number of tokens after projection and C_1 is the dimension of the projected features, consistent with the dimension of the image tokens.
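As a concrete illustration of Equation 1, a minimal PyTorch sketch of such a projection module is given below. The module and tensor names are ours, and the way the token count is reduced to M (a 1×1 convolution acting over the token dimension) is an assumption made for illustration; the released implementation may differ in these details.

```python
import torch
import torch.nn as nn

class TextProjection(nn.Module):
    """Sketch of Eq. 1: f_t = ReLU(Conv(T @ W_T)).

    Maps L text tokens of width D to M tokens of width C1, so that the text
    tokens match the image-token dimension before multi-modal interaction.
    """
    def __init__(self, d_text: int, c1: int, l_tokens: int, m_tokens: int):
        super().__init__()
        self.w_t = nn.Linear(d_text, c1, bias=False)               # learnable matrix W_T
        self.conv = nn.Conv1d(l_tokens, m_tokens, kernel_size=1)   # 1x1 conv over the token axis
        self.act = nn.ReLU(inplace=True)

    def forward(self, text_tokens: torch.Tensor) -> torch.Tensor:
        # text_tokens: (batch, L, D) -> (batch, L, C1) -> (batch, M, C1)
        return self.act(self.conv(self.w_t(text_tokens)))

# usage sketch with illustrative sizes: L=24 BERT-like tokens of width 768,
# projected to M=9 tokens of width C1=96
f_t = TextProjection(d_text=768, c1=96, l_tokens=24, m_tokens=9)(torch.randn(2, 24, 768))
```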
For the input visual features I∈ℝ^H× W× C_1, after adding the position encoding we use self-attention to enhance the visual information in them to obtain the evolved visual features. The process is shown in Equation 2. f_i = I + LN(MHSA(I)) where MHSA(·) denotes Multi-Head Self-Attention layer, LN(·) denotes Layer Normalization, and finally the evolved visual features f_i ∈ℝ^H× W× C_1 with residuals could be obtained. After those, the multi-head cross-attention layer is adopted to propagate fine-grained semantic information into the evolved image features. To obtain the multi-modal feature f_c ∈ℝ^H× W× C_1, the output further computed by layer normalization and residual connection: f_c = f_i + α (LN(MHCA(f_i,f_t))) where MHCA(·) denotes multi-head cross-attention and α is a learnable parameter to control the weight of the residual connection. Then, the multi-modal feature f_c ∈ℝ^(H× W)× C_1 would be reshaped and upsampling to obtain f'_c ∈ℝ^H'× W'× C_1. Finally the f'_c is concatenated with f_s∈ℝ^H'× W'× C_2 on the channel dimension, where f_s is the low-level visual feature obtained from visual encoder via skip connection. The concatenated features are processed through a convolution layer and a ReLU activation function to obtain the final decoded output f_o ∈ℝ^H'× W'× C_2 f'_c = Upsample(Reshape(f_c)) f_o = σ(Conv([f'_c, f'_s])) where [·,·] represents the concatenate operation on the channel dimension. § EXPERIMENTS §.§ Dataset The dataset used to evaluate our method performance is the QaTa-COV19 dataset<cit.>, which is compiled by researchers from Qatar University and Tampere University. It consists of 9258 COVID-19 chest radiographs with pixel-level manual annotations of infected lung areas, of which 7145 are in the training set and 2113 in the test set. However, the original QaTa-COV19 dataset does not contain any matched text annotations. Li et al. <cit.>have made significant contributions by extending the text annotations of the dataset, their endeavors are worthy of commendation. We conducted a revisitation of the text annotations and found several notable features. Each sentence consists of three parts, containing position information at different granularity. However, these sentences cannot be considered as medical reports for lacking descriptions of the disease, we consider them as a kind of "text prompt" just as the title of the paper states. Besides, we found some obvious errors (e.g. misspelled words, grammatical errors and unclear referents) in the extended text annotations. We have fixed these identified errors and contacted the authors of LViT to release a new version of the dataset. Dataset see Github link: https://github.com/HUANGLIZI/LViThttps://github.com/HUANGLIZI/LViT §.§ Experiment Settings Following the file name of the subjects in the original train set, we split the training set and the validation set uniformly in the ratio of 80% and 20%. Therefore, the training set has a total of 5716 samples, the validation set has 1429 samples and the test set has 2113 samples. All images are cropped to 224×224 and the data is augmented using a random zoom with 10% probability. We used a number of open source libraries including but not limited to PyTorch, MONAI<cit.> and Transformers<cit.> to implement our method and baseline approach. We use PyTorch Lightning for the final training and inference wrapper. All the methods are training on one NVIDIA Tesla V100 SXM3 32GB VRAM GPU. 
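As an implementation-oriented aside, Equations 2-5 above can be assembled into a single decoder block. The PyTorch sketch below is our own reconstruction for illustration: module names are ours, the positional encoding of Equation 2 is omitted, and square feature maps are assumed, so it should not be read as the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuideDecoderBlock(nn.Module):
    """Sketch of one GuideDecoder step: self-attention on image tokens (Eq. 2),
    text-guided cross-attention (Eq. 3), upsampling (Eq. 4) and skip fusion (Eq. 5)."""
    def __init__(self, c1: int, c2: int, n_heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(c1, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(c1, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(c1)
        self.norm2 = nn.LayerNorm(c1)
        self.alpha = nn.Parameter(torch.zeros(1))        # learnable residual weight of Eq. 3
        self.out_conv = nn.Conv2d(c1 + c2, c2, kernel_size=3, padding=1)

    def forward(self, vis, f_t, skip):
        # vis: (B, H*W, C1) image tokens; f_t: (B, M, C1) projected text tokens;
        # skip: (B, C2, H', W') low-level visual features from the encoder.
        f_i = vis + self.norm1(self.self_attn(vis, vis, vis, need_weights=False)[0])      # Eq. 2
        f_c = f_i + self.alpha * self.norm2(
            self.cross_attn(f_i, f_t, f_t, need_weights=False)[0])                        # Eq. 3
        b, n, c1 = f_c.shape
        h = w = int(n ** 0.5)                            # assumes square feature maps
        f_c = f_c.transpose(1, 2).reshape(b, c1, h, w)
        f_c = F.interpolate(f_c, size=skip.shape[-2:], mode="bilinear",
                            align_corners=False)                                          # Eq. 4
        return F.relu(self.out_conv(torch.cat([f_c, skip], dim=1)))                       # Eq. 5
```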
We use the Dice loss plus Cross-entropy loss as the loss function, and train the network using AdamW optimization with a batch size of 32. We utilize the cosine annealing learning rate policy, the initial learning rate is set to 3e-4 and the minimal learning rate is set to 1e-6. We used three metrics to evaluate the segmentation results objectively: Accuracy, Dice coefficient and Jaccard coefficient. Both Dice and Jaccard coefficient calculate the intersection regions over the union regions of the given predicted mask and ground truth, where the Dice coefficient is more indicative of the segmentation performance of small targets. §.§ Comparison Experiments We compared our method with common mono-modal medical image segmentation methods and with the LViT previously proposed by Li et al. The quantitative results of the experiment are shown in Table <ref>. UNet++ achieves the best performance of the mono-modal approach. Comparing to UNet++, our method improves accuracy by 1.44%, Dice score by 6.09% and Jaccard score by 9.49%. Our method improves accuracy by 1.28%, Dice score by 4.86% and Jaccard coefficient by 7.66% compared to the previous multi-modal method LViT. In general, using text prompts could significantly improve segmentation performance. The results of the qualitative experiment are shown in Fig. <ref>. The image-only mono-modal methods tend to generate some over-segmentation, while the multi-modal approach refers to the specific location of the infected region through text prompts to make the segmentation results more accurate. §.§ Ablation Study Our proposed method introduces semantic information of text in the decoding process of image features and designs the GuideDecoder to let the semantic information in the text guide the generation of the final segmentation mask. We performed an ablation study on the number of GuideDecoder used in the model and the results are shown in the Table <ref>. As can be seen from the Table <ref>, the segmentation performance of the model improves as the number of GuideDecoders used in the model increases. The effectiveness of GuideDecoder could be proved by these results. §.§ Extended Study Considering the application of the algorithm in clinical scenarios, we conducted several interesting extension studies based on the QaTa-COV19 dataset with the text annotations. It is worth mentioning that the following extended studies were carried out on our proposed method. §.§.§ Impact of text prompts at different granularity on segmentation performance. In section 3.1 we mention that each sample is extended to a text annotation with three parts containing positional information at different granularity, as shown in the Fig. <ref>. Therefore we further explored the impact of text prompts at different granularity on segmentation performance of our method and the results are shown in Table <ref>. The results in the table show that the segmentation performance of our proposed method is driven by the granularity of the position information contained in the text prompt. Our proposed method achieved better segmentation performance when given a text prompt with more detailed position information. Meanwhile, we observed that the performance of our method is almost identical when using two types of text prompts, i.e. Stage3 alone and Stage1 + Stage2 + Stage3. It means the most detailed position information in the text prompt plays the most significant role in improving segmentation performance. 
However, this does not mean that coarser-grained position information in the text prompt does not contribute to the improvement in segmentation performance. Even when the input text prompts contain only the coarsest location information (Stage1 + Stage2 items in Table <ref>), our proposed method yielded a 1.43% higher Dice score than the method without a text prompt.

§.§.§ Impact of the size of training data on segmentation performance.

As shown in Table <ref>, our proposed method demonstrates highly competitive performance even with a reduced amount of training data. With only a quarter of the training data, our proposed method achieves a 2.69% higher Dice score than UNet++, the best performing mono-modal model trained on the full dataset. This provides sufficient evidence for the superiority of multi-modal approaches and the fact that suitable text prompts could significantly help improve the segmentation performance. We observed that only when the training data was reduced to 10% did our method begin to exhibit inferior performance compared to UNet++ trained with all available data. Similar experiments can be found in the LViT paper. Therefore, it can be argued that multi-modal approaches require only a small amount of data (less than 15% in the case of our method) to achieve performance equivalent to that of mono-modal methods.

§ CONCLUSION

In this paper, we propose a language-driven method for segmenting infected areas from lung X-ray images. The designed GuideDecoder in our method can adaptively propagate sufficient semantic information of the text prompts into pixel-level visual features, promoting consistency between the two modalities. The experimental results on the QaTa-COV19 dataset indicate that the multi-modal segmentation method based on text and image achieves better performance compared to image-only segmentation methods. Besides, we have conducted several extended studies on the information granularity of the text prompts and the size of the training data, which reveal the flexibility of multi-modal methods in terms of the information granularity of text and demonstrate that multi-modal methods have a significant advantage over image-only methods in terms of the size of training data required.

§.§.§ Acknowledgements

This work was supported by NSFC under Grant 62076093 and the MoE-CMCC "Artificial Intelligence" Project No. MCM20190701.
http://arxiv.org/abs/2307.06866v1
20230710004749
Modeling correlated uncertainties in stochastic compartmental models
[ "Konstantinos Mamis", "Mohammad Farazmand" ]
q-bio.PE
[ "q-bio.PE", "math.DS", "math.PR", "37N25, 60H10" ]
Modeling correlated uncertainties in stochastic compartmental models

Konstantinos Mamis^1 ([email protected]) and Mohammad Farazmand^2 ([email protected])
These authors contributed equally to this work.

^1 Department of Applied Mathematics, University of Washington, Seattle, 98195-3925, WA, USA
^2 Department of Mathematics, North Carolina State University, 2311 Stinson Drive, Raleigh, 27695-8205, NC, USA

We consider compartmental models of communicable disease with uncertain contact rates. Stochastic fluctuations are often added to the contact rate to account for uncertainties. White noise, which is the typical choice for the fluctuations, leads to significant underestimation of the disease severity. Here, starting from reasonable assumptions on the social behavior of individuals, we model the contacts as a Markov process which takes into account the temporal correlations present in human social activities. Consequently, we show that the mean-reverting Ornstein–Uhlenbeck (OU) process is the correct model for the stochastic contact rate. We demonstrate the implication of our model on two examples: a Susceptibles-Infected-Susceptibles (SIS) model and a Susceptibles-Exposed-Infected-Removed (SEIR) model of the COVID-19 pandemic. In particular, we observe that both compartmental models with white noise uncertainties undergo transitions that lead to the systematic underestimation of the spread of the disease. In contrast, modeling the contact rate with the OU process significantly hinders such unrealistic noise-induced transitions. For the SIS model, we derive its stationary probability density analytically, for both white and correlated noise. This allows us to give a complete description of the model's asymptotic behavior as a function of its bifurcation parameters, i.e., the basic reproduction number, noise intensity, and correlation time. For the SEIR model, where the probability density is not available in closed form, we study the transitions using Monte Carlo simulations. Our study underscores the necessity of temporal correlations in stochastic compartmental models and the need for more empirical studies that would systematically quantify such correlations.

August 12, 2023

§ INTRODUCTION

Compartmental models describe the spread of communicable diseases in a population <cit.>. In such models, the population of a community is partitioned into disjoint compartments, e.g., the susceptible and the infected, each one containing all individuals with the same disease status <cit.>. The state variables in a compartmental model are the numbers of individuals in each compartment. The model parameters, e.g., the contact, incubation, and curing rates, determine the flow of individuals between compartments. In the present work, we study how the model predictions are affected by the presence of uncertainties in the compartmental model parameters. Determining the value of model parameters is a delicate task that involves estimation and averaging of data over the whole population <cit.>. As such, model parameters are subject to uncertainties, arising from the variation of social and biological factors among individuals. Among the model parameters, average contact rate is the most volatile, due to its strong dependence on the social activity that varies from person to person, and also changes over time <cit.>.
Uncertainties in the contact rate λ(t) are often modeled as a stochastic perturbation ξ(t) with intensity σ around a constant mean λ̅, so that λ(t)=λ̅+σξ(t). A common choice for ξ(t) is Gaussian white noise, see e.g. <cit.>. The remaining model parameters, such as the average incubation or curing rate, depend mainly on the biology of the virus, and, while every individual responds to the infection differently, they vary less compared to the contact rate and can be considered constant. In a recent study <cit.>, we studied the role of temporal correlations, which are present in social activities of individuals, on the contact rate λ(t). Using standard results from the theory of stochastic processes, and assuming that the perturbation ξ(t) has an exponentially decreasing autocorrelation function, we showed that the only admissible model for the stochastic contact rate is the Ornstein–Uhlenbeck (OU) process. However, the assumption of exponentially decreasing autocorrelation had remained unjustified in Ref. <cit.>. A main contribution of the present paper is to derive the OU process without making such an onerous assumption. In fact, as we show in Sec. <ref>, the OU process emerges naturally by making quite simple and realistic assumptions on the contacts of each individual in the population. Then, we focus on determining the final size of disease in a population, as predicted by compartmental models with white or OU noise fluctuations in contact rate. We study two compartmental models. First, we consider the stochastic Susceptibles-Infected-Susceptibles (SIS) model, which is adequate for modeling sexually transmitted or bacterial diseases such as gonorrhea or syphilis. For the SIS model, we determine the stationary probability density (PDF) of the infected population fraction in closed form. This allows us to completely classify the asymptotic state of the model as a function of its bifurcation parameters, i.e., basic reproduction number, noise intensity, and the correlation time of the noise. As the second model, we consider the stochastic Susceptibles-Exposed-Infected-Removed (SEIR) model for COVID-19 pandemic in the US during the Omicron variant. Using Monte Carlo simulations, we study the bifurcations of the asymptotic probability density as in the SIS model. Our main qualitative result is that, for increasing levels of white noise in the contact rate, both compartmental models undergo a noise-induced transition, whereby the stationary PDF of the infected exhibits an additional peak near zero, and far away from the deterministic equilibrium. This unrealistic behavior leads to significant underestimation of the severity of the disease. In contrast, under OU noise this transition is suppressed, with most of the probability mass of the stationary PDF being concentrated around the deterministic equilibrium. §.§ Related work Stochasticity has been incorporated into many epidemiological models <cit.>. One principled approach for deriving stochastic compartmental models begins by modeling of the number of individuals in each compartment as continuous-time Markov chain birth–death processes or as branching processes, see e.g. <cit.>. Then, by assuming their state variables to be continuous, the Markov chain models result in a system of stochastic differential equations (SDEs) whose parameters contain white noise uncertainties. Another approach is to directly add noise to the model parameters; see e.g. <cit.>. 
This approach is more straightforward, but the choice of the type of noise is largely arbitrary: The most common choice in literature is Gaussian white noise <cit.>. However, OU noise has also been proposed for parameter perturbation in biological systems, see e.g., <cit.>, because OU noise combines the modeling of stochastic fluctuations with the stabilization around an equilibrium point, due to its mean-reverting property. Recently, the COVID-19 pandemic has renewed interest in stochastic modeling of disease spread; see e.g. <cit.> for a survey of existing forecast models for COVID-19, and <cit.> where a lognormally distributed process has been considered for the stochastic fluctuations in contact rate of a COVID-19 compartmental model, to account for the presence of superspreaders in the population. However, most of the studies that use stochastic compartmental models to make predictions for the COVID-19 pandemic rely primarily on simulations, see e.g., <cit.>. Most of the analysis performed on stochastic compartmental models has been focused on the derivation of conditions for the eradication or persistence of the disease in the population; see e.g., <cit.> for compartmental models with white noise uncertainties. Recently, this line of work has been extended to models with OU uncertainties <cit.>, and Lévy noises <cit.> to account for abrupt changes (jumps) in disease transmission. The focus of the present work is different; apart from conditions for the disease to become endemic, we are also interested in the predictions of stochastic compartmental models on the final disease size. Analytic work in this direction is scarce; see e.g. Ref. <cit.> which uses the Fokker–Planck equation to study a scalar compartmental model under white noise fluctuations in contact rate. §.§ Outline This paper is organized as follows. In Sec. <ref>, we present our model of uncertainties in the contact rate. In Sec. <ref>, we study the SIS model, for both cases of white and OU noise fluctuations in contact rate. We analytically determine the noise-induced transitions the stochastic SIS model undergoes, and we quantify the effect of noise correlations in contact rate. In Sec. <ref>, we study the noise-induced transitions of a stochastic SEIR model for the Omicron wave of the COVID-19 pandemic in the US, by using direct Monte Carlo simulations. In Sec. <ref>, we make our concluding remarks and outline possible directions for future work. § MODELING UNCERTAINTIES IN CONTACT RATE The average contact rate λ, defined as the average number of adequate contacts per individual per unit time <cit.>, <cit.>, is the main source of uncertainty in compartmental models. In order to determine its properties as a random process, we begin with the cumulative number of contacts C_n(t) of the n-th individual up to time t. We denote the incremental number of contacts by Δ C_n(t) which measures the number of constants that the n-th individual makes during the time interval [t,t+Δ t], with Δ t being a reference time interval, e.g., a day or a week. The contact rate λ_n of the n-th individual is then given by λ_n(t) = Δ C_n(t)/Δ t. Next, we make the following assumptions on the social behavior of each individual. * The average number of contacts that individuals make in a time interval is proportional to the length of the time interval, so that 𝖤[Δ C_n(t)]=μ_n Δ t, for some constant μ_n>0. 
* The number of contacts Δ C_n(t) is subject to time-varying random fluctuations, the intensity of which is also proportional to reference unit time Δ t. For instance, this assumption implies that the contacts of an individual per week are prone to more uncertainty than the contacts of the same individual per day. * After a period of relatively high or low contacts compared to the average number μ_nΔ t, the contacts of the individual will tend towards the mean μ_nΔ t. In other words, high or low numbers of contacts are not sustained for prolonged periods of time. Under the above assumptions, we formulate the conditional probability of Δ C_n(t+δ t)-Δ C_n(t), given the number of contacts Δ C_n(t), where δ t≪Δ t is a small time increment. Note that Δ C_n(t+δ t) is the number of contacts that the individual makes over the time interval [t+δ t,t+δ t+Δ t]. Therefore, Δ C_n(t+δ t)-Δ C_n(t) measures the variations in the number of contacts as the reference time interval [t,t+Δ t] is shifted ever so slightly (see Fig. <ref> for an illustration). Based on the above assumptions, for given positive integers i and j, we define the conditional probability, 𝖯[Δ C_n(t+δ t) -Δ C_n(t)=j|Δ C_n(t)=i]= 1/2[(κ_n Δ t)^2-θ_n(μ_n Δ t-i)]δ t j=-1 1/2[(κ_n Δ t)^2+θ_n(μ_n Δ t-i)]δ t j=+1 1-(κ_n Δ t)^2δ t j=0 0 otherwise, where κ_n, θ_n and μ_n are positive constants. We will see shortly that κ_n controls the noise intensity and θ_n determines the time correlation of the resulting stochastic process. For simplicity, we assume that these constants are identical across the population. Therefore, we omit the subscript n and simply denote them by κ, θ and μ. The conditional probability (<ref>) dictates the following. If the current number of contacts Δ C_n(t) = i is greater than the mean μΔ t, then it is more probable for the number of contact to decrease by one after a short time interval δ t has passed (case j=-1). Conversely, if the number of contacts Δ C_n(t) = i is less than the mean, it is more likely for the number of contacts to increase by one in the near future (case j=+1). Furthermore, it assumes that the probability that the number of contacts jump by more than one within a short time δ t is negligible. Finally, the probability for case j=0 (no change within time δ t) is defined to ensure that the total probability adds up to one. We note that the constant θ plays a crucial role here. If θ =0, the number of contacts increase or decrease with the same probability and regardless of their past history (Brownian motion). In contrast, θ >0 introduces time correlations into the process so that the number of contacts have a tendency to revert back to their mean value. From Eq. (<ref>), we calculate the conditional mean value and variance, 𝖤[Δ C_n(t+δ t)-Δ C_n(t)|Δ C_n(t)=i]=θ(μΔ t-i)δ t, 𝖵𝖺𝗋[Δ C_n(t+δ t)-Δ C_n(t)|Δ C_n(t)=i]=(κΔ t)^2δ t. Using Eq. (<ref>), and the definition λ_n(t) = Δ C_n(t)/Δ t, we calculate the conditional mean and variance, 𝖤[λ_n(t+δ t) -λ_n(t)|λ_n(t)=α]= 1/Δ t𝖤[Δ C_n(t+δ t)-Δ C_n(t)|Δ C_n(t)=αΔ t]=θ(μ-α)δ t, 𝖵𝖺𝗋[λ_n(t+δ t) -λ_n(t)|λ_n(t)=α]= 1/(Δ t)^2𝖵𝖺𝗋[Δ C_n(t+δ t)-Δ C_n(t)|Δ C_n(t)=αΔ t]=κ^2δ t, where α=i/Δ t. Assuming no dependence between the incremental contacts of different individuals, λ_n(t) are independent random variables. This means that λ_n(t+δ t)-λ_n(t) are also independent random variables, with the same mean value and variance given by Eq. (<ref>). 
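As a side check before the averaging step that follows, the jump process defined by the conditional probability above can be simulated directly. The sketch below uses illustrative parameter values (not values from the paper) and verifies that the population-averaged contact rate fluctuates around μ in a mean-reverting fashion, anticipating the Ornstein–Uhlenbeck limit derived next.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative parameters (not fitted values)
N = 1_000                        # number of individuals
mu, theta, kappa = 10.0, 0.5, 3.0
dt_ref = 1.0                     # reference interval Delta t (e.g. one day)
d_t, T = 1e-3, 50.0              # small step delta t and simulation horizon

C = np.full(N, mu * dt_ref)      # contacts Delta C_n(t) of each individual
lam_path = []
for _ in range(int(T / d_t)):
    base = (kappa * dt_ref) ** 2
    p_up = np.clip(0.5 * (base + theta * (mu * dt_ref - C)) * d_t, 0.0, None)
    p_down = np.clip(0.5 * (base - theta * (mu * dt_ref - C)) * d_t, 0.0, None)
    u = rng.random(N)
    C += np.where(u < p_up, 1.0, 0.0) - np.where((u >= p_up) & (u < p_up + p_down), 1.0, 0.0)
    lam_path.append(C.mean() / dt_ref)   # population-averaged contact rate lambda(t)

lam_path = np.asarray(lam_path)
# lam_path hovers around mu, with stationary variance close to kappa^2 / (2 * theta * N)
```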
Hence, the central limit theorem implies that the average over the whole population of N individuals, λ(t+δ t)-λ(t)=1/N∑_n=1^N(λ_n(t+δ t)-λ_n(t)), follows a normal distribution with mean θ(μ-α)δ t and variance κ^2δ t/N. As a result, we have λ(t+δ t)-λ(t)=θ(μ-α)δ t+D√(δ t)𝒩(t), where D=κ/√(N), and 𝒩(t) is the standard normal distribution, with 𝒩(t) and 𝒩(s) being independent for t≠ s. Recall that the expressions in Eq. (<ref>) are conditioned on λ_n(t)=α for all n=1,…,N, which implies λ(t)=(1/N)∑_n=1^Nλ_n(t)=α. Therefore Eq. (<ref>) is equivalent to λ(t+δ t)-λ(t)=θ(μ-λ(t))δ t+D√(δ t)𝒩(t). Dividing by δ t and taking the limit δ t→ 0, we obtain the Langevin equation, 𝕀λ(t)/𝕀 t = θ (μ - λ(t)) + Dξ^WN(t), where ξ^WN(t) is the standard white noise. Equation (<ref>) is the SDE for an Ornstein–Uhlenbeck process. Therefore, the average contact rate λ(t) is an OU process. The stationary solution of Eq. (<ref>) is a Gaussian process with the following mean and autocovariance <cit.>, 𝖤[λ(t)]=μ, 𝖢𝗈𝗏[λ(t)λ(s)]=D^2/2θexp(-θ| t-s|). Introducing the new parameters τ=1/θ, σ^2=D^2τ^2, mean value and autocovariance of Eq. (<ref>) are recast into 𝖤[λ(t)]=μ, 𝖢𝗈𝗏[λ(t)λ(s)]=σ^2/2τexp(-| t-s|/τ). Now it is easy to see that τ=1/θ is the correlation time of the average contact rate λ(t). It can be shown that, as τ→ 0, the autocovariance (<ref>) tends to the delta function, corresponding to white noise with intensity σ <cit.>. We note that the autocovariance in Eq. (<ref>) was assumed in the earlier derivation of Mamis and Farazmand <cit.>. Here, we have shown that this property can be deduced naturally from the conditional probability (<ref>). In the following sections, to simplify the notation, we write λ(t)=λ̅+σξ^OU(t), where λ̅=μ is the mean value, σ is the noise intensity, and ξ^OU(t) is the standard OU process. The standard OU process ξ^OU(t) has zero mean and its autocovariance is given by 𝖤[ξ^OU(t)ξ^OU(s)]=1/2τexp(-| t-s|/τ). With this expression, λ(t) =λ̅+σξ^OU(t) satisfies the Langevin Eq. (<ref>) and its mean and covariance are given by Eq. (<ref>). § SIS MODEL The Susceptibles-Infected-Susceptibles (SIS) model is described by the equations 𝕀 S(t)/𝕀 t=-λ/NS(t)I(t)+γ I(t), 𝕀 I(t)/𝕀 t=λ/NS(t)I(t)-γ I(t), where S(t), I(t) are the numbers of susceptible and infected individuals, respectively, and N is the total population. SIS model parameters are the average contact rate λ and the average curing rate γ, which is the inverse of the average time an individual needs to recover. SIS Eq. (<ref>) is suitable for modeling diseases that are curable, and whose infection does not confer protective immunity; thus the infected become susceptibles again after their recovery. This is the case for most bacterial and sexually transmitted diseases <cit.>. Note that (λ/N)S(t)I(t) is the simplest form for the disease transmission term, and it is based on the assumption of homogeneous mixing of population <cit.>. Under this assumption, out of the total number of contacts that each susceptible individual makes on average per unit time, λ I/N contacts are with the infected, resulting in disease transmission. Transmission term without the division with N is sometimes used <cit.>; however, this choice is not supported by empirical evidence <cit.>. Under the usual assumption of constant population S(t)+I(t)=N, SIS model (<ref>) can be reduced to one scalar ordinary differential equation (ODE) <cit.>. 
Defining the infected fraction of the population, X(t)=I(t)/N∈[0,1], as the state variable, the scalar ODE is written as 𝕀 X(t)/𝕀 t=λ X(t)(1-X(t))-γ X(t). The equilibrium points of ODE (<ref>) are x_0=0, and x_1=(λ-γ)/λ. The stability of equilibrium points depends on the basic reproduction number R_0=λ/γ: * For R_0<1, equilibrium point x_0=0 is stable. In this case, the disease is eventually eradicated from the population. * For R_0>1, equilibrium point x_0=0 is unstable and x_1=(λ-γ)/λ is stable. In this case, the disease persists in the population and becomes endemic. In the endemic case, R_0>1, we derive a characteristic time scale for ODE (<ref>). For this, we linearize ODE (<ref>) around the stable equilibrium x_1, and calculate its Lyapunov exponent λ-γ (see also <cit.>). The characteristic time scale is determined as the inverse of the Lyapunov exponent, η=(λ-γ)^-1. Under the stochastic perturbation of the contact rate λ(t)=λ̅+σξ(t), the SIS model reads 𝕀 X(t)/𝕀 t=λ̅ X(t)(1-X(t))-γ X(t)+σ X(t)(1-X(t))ξ(t). Eq. (<ref>) is a stochastic differential equation under multiplicative noise excitation, since noise excitation ξ(t) is multiplied by a state-dependent function. In the remainder of this section, we determine the asymptotic behavior of SIS model (<ref>) for two cases: 1. when ξ(t) is the standard Gaussian white noise, and 2. when ξ(t) is the standard OU process. In particular, we show that the OU process, as derived in Sec. <ref>, is more suitable for modeling uncertainties in the contact rate. In contrast to the deterministic SIS model (<ref>), stochastic SIS model (<ref>) exhibits a richer asymptotic behavior that includes regions of bistablity, and regions with R_0>1 where x_0=0 is stable. For the SIS model under white noise, its stationary PDF is easily determined as the stationary solution of the classical Fokker–Planck equation, see, e.g., <cit.>. For the case of OU fluctuations in contact rate, determining the stationary PDF is not as straightforward; the derivation and solution of Fokker–Planck-like equations, corresponding to stochastic differential equations (SDEs) excited by correlated noise, has been the topic of research for many decades <cit.>. Recently <cit.>, we have proposed a nonlinear Fokker–Planck equation whose validity is not limited to small correlation times of the stochastic excitation (see Appendix <ref>). As we show in Sec. <ref>, the stationary solution to this nonlinear Fokker–Planck equation is given in explicit closed form for the case of the stochastic SIS model. Thus, stochastic SIS model under OU perturbation is a rare instance of a nonlinear SDE under correlated noise whose stationary solution can be analytically determined. By having the stationary PDFs in explicit form for both white and OU models, we are able to systematically investigate the noise-induced transitions that the stochastic SIS model undergoes, for increasing levels of noise. §.§ SIS model under white noise In this section, we consider ξ(t) to be the standard white noise ξ^WN(t) with zero mean value and autocorrelation 𝖤[ξ^WN(t)ξ^WN(s)]=δ(t-s), where 𝖤[·] denotes the expected value and δ(t-s) is Dirac's delta function. For the stochastic SIS model (<ref>) under white noise, we calculate the stationary PDF of X(t) as the stationary solution to the corresponding Fokker–Planck equation (see Appendix <ref>), p_0(x)=Cx^2(1-R_0^-1)/(σ^2/λ̅)-2+ϖ(1-x)^-2(1-R_0^-1)/(σ^2/λ̅)-2+ϖexp(-2R_0^-1/(σ^2/λ̅)1/1-x), where C is a normalization factor, so that ∫_ℝp_0(x)𝕀 x=1. 
Parameter ϖ models the difference, on the level of stationary PDF, between the Itō (ϖ=0) and Stratonovich (ϖ=1) solution of SDE (<ref>) under white noise. This difference stems from the different definition of integrals with respect to Wiener process in the two approaches <cit.>. Stationary PDF (<ref>) depends on two dimensionless parameters: the basic reproduction number of the underlying deterministic model, R_0=λ̅/γ, and the relative variance of the noise, σ^2/λ̅, measuring the noise intensity. Using these dimensionless parameters, we study the bifurcation diagram of the stationary PDF as shown in Fig. <ref> (see Appendix <ref> for calculations). As we derive in Appendix <ref>, both Itō and Stratonovich solutions result in a disease eradication for R_0<1, as is the case for the deterministic SIS model. Therefore, we only consider the range R_0>1 in Fig. <ref>. The different regions in Fig. <ref>, marked by roman numerals, correspond to different shapes of the stationary PDF of the infected population fraction X: * Unimodal with mode at a non-zero x_m: The most probable outcome is the disease to become endemic in the population. * Bimodal with one mode at zero and one at a non-zero x_m: In this case, the most probable outcomes is either the disease being eradicated, or to attain the level x_m in the population. * Unimodal with mode at zero: The disease is most likely eradicated from the population. * Delta function at zero, present only for Itō solution: disease eradication is certain. This is the only case of absolute eradication of the disease for R_0>1. In the Itō solution, the disease persists in the population for (σ^2/λ̅)<2(1-1/R_0) (region below the green curve in Fig. <ref>A), written equivalently as R_0-σ^2/2γ>1. Eq. (<ref>) is the disease persistence condition derived in <cit.>, as expressed for the stochastic SIS model (<ref>). Thus, for the Itō solution, increase in noise intensity results eventually in the eradication of the disease from the population, regardless of the value of R_0. On the other hand, region IV is absent from Fig. <ref>B, meaning that, under Stratonovich interpretation, the disease is never surely eradicated from the population for R_0>1, see also <cit.>. Apart from the PDF shape, another important measure of disease severity is the value of x_m, at which the non-zero PDF mode is exhibited. As we observe in Fig. <ref>, for low levels of noise, the stationary PDFs of the infected population fraction are narrow and unimodal, exhibiting their mode at the stable equilibrium x_1 of the underlying deterministic SIS model. As the noise level increases, the PDF mode x_m moves away from the deterministic equilibrium. This phenomenon is called the peak drift <cit.>, and is commonplace in SDEs with multiplicative noise excitation such as SDE (<ref>). The color in Fig. <ref> encodes the peak drift phenomenon quantifying the difference between the coordinates x_m (non-zero PDF mode) and x_1 (deterministic equilibrium point), as a percentage of x_1. Figure <ref> revels two opposite trends in peak drift. In regions Ia and IIa to the left of the vertical dashed line where R_0<2, higher noise intensity σ^2/λ̅ results in the non-zero PDF mode x_m to drift towards zero. In contrast, in regions Ib and IIb where R_0>2, higher noise intensity σ^2/λ̅ results in the non-zero PDF mode x_m to drift towards one. 
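The classification of PDF shapes described above is straightforward to reproduce numerically from the closed-form stationary PDF. The sketch below transcribes that PDF as we read it (exponent of x, exponent of 1−x, and the exponential boundary term), normalizes it on a grid, and locates the non-zero mode x_m; the parameter values are illustrative.

```python
import numpy as np

def sis_stationary_pdf_white(x, R0, r, varpi=1):
    """Unnormalized stationary PDF of the infected fraction X under white noise.

    r = sigma^2 / lambda_bar is the relative noise variance; varpi = 0 selects
    the Ito and varpi = 1 the Stratonovich interpretation.
    """
    a1 = 2.0 * (1.0 - 1.0 / R0) / r - 2.0 + varpi      # exponent of x
    a2 = -2.0 * (1.0 - 1.0 / R0) / r - 2.0 + varpi     # exponent of (1 - x)
    return x**a1 * (1.0 - x)**a2 * np.exp(-2.0 / (R0 * r) / (1.0 - x))

# illustrative parameters: R0 = 1.7, moderate noise, Stratonovich interpretation
x = np.linspace(1e-4, 1.0 - 1e-4, 200_000)
p = sis_stationary_pdf_white(x, R0=1.7, r=0.5)
p /= np.trapz(p, x)                   # numerical normalization constant C
x_m = x[np.argmax(p)]                 # non-zero mode, drifted below x_1 = 1 - 1/R0
# increasing r eventually makes the exponent of x negative, i.e. a peak appears at x = 0
```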
The above discussion shows that, by increasing the relative noise intensity σ^2/λ̅, the stochastic SIS model undergoes a noise-induced transition <cit.>, i.e., a bifurcation in the shape of its stationary PDF. The type of noise-induced transition is determined by the value of the deterministic dimensionless parameter R_0: * Type 1: For 1<R_0<1.5, the stationary PDF stays always unimodal. By increasing σ^2/λ̅, the PDF peak drifts from the deterministic equilibrium x_1 towards zero. When the relative noise intensity σ^2/λ̅ crosses the level marked by the blue curve in Fig. <ref>, the PDF mode is located at zero. Further increase of σ^2/λ̅ results in more probability mass being accumulated at zero. In Figs. <ref>A, B, we show an example of this noise-induced transition, for R_0=1.4. * Type 2: For 1.5<R_0<2, the PDF mode shifts towards zero as σ^2/λ̅ increases, which is similar to the previous case. However, in this case, when σ^2/λ̅ crosses the blue curve level, the PDF becomes bimodal, with the additional peak located at zero. By increasing σ^2/λ̅ further, more probability mass accumulates at zero, and, after σ^2/λ̅ crosses the level marked by the magenta curve in Fig.<ref>, the PDF becomes unimodal at zero. In Figs. <ref>C, D, we show an example of this noise-induced transition for R_0=1.7. * Type 3: For R_0>2, PDF peak drift phenomenon has the opposite trend; by increasing σ^2/λ̅, the PDF peak drifts towards higher values. When σ^2/λ̅ crosses the blue curve level, an additional PDF mode appears at zero, whose magnitude increases by further increase of σ^2/λ̅. In Figs. <ref>E, F, we show an example of this noise-induced transition for R_0=2.2. We observe that, by increasing noise levels, a PDF peak at zero appears eventually, making the eradication of disease more likely. However, for diseases with R_0<2 (corresponding to noise-induced transitions of types 1 and 2), the most likely final size of disease in the population, i.e., the non-zero mode at x_m, drifts towards zero, even for low white noise levels, for which no PDF peak at zero has appeared yet. This means that, for R_0<2, white noise in contact rate always results in less severe predictions for disease spread. Note that, many SIS-modeled diseases lie in the range of 1<R_0<2, such as gonorrhea, R_0=1.4 <cit.>, syphilis, R_0=1.32-1.50 <cit.>, streptococcus pneumoniae (pneumococcus), R_0=1.4 <cit.>, tuberculosis, R_0=1.78 <cit.>. On the other hand, for highly contagious diseases with R_0>2 (e.g. pertussis, R_0=5.5 <cit.>) increase in noise levels results in more spread of the disease, since, in this case, the most likely endemic point x_m drifts towards larger values. We note that bifurcation diagrams were also analyzed by Méndez et al. <cit.> for a stochastic SIS model slightly different than Eq. (<ref>). However, in <cit.>, only the Stratonovich solution was considered, the peak drift phenomenon was not studied, and a dimensionless parameter involving noise intensity σ and curing rate γ was chosen, instead of the more easily interpretable relative variance σ^2/λ̅. §.§ SIS model under Ornstein–Uhlenbeck noise In this section, we let the stochastic perturbation ξ(t) to be the standard OU process ξ^OU(t) with zero mean and autocorrelation (<ref>). Recall that τ>0 is the correlation time of the OU noise. For an SDE under OU excitation, we can approximate its stationary PDF by the equilibrium solution of a nonlinear Fokker–Planck equation which was only recently formulated <cit.>. 
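Before turning to that closed form, note that stationary PDFs under OU noise can always be cross-checked by direct Monte Carlo simulation of SDE (<ref>). A minimal Euler–Maruyama sketch, with illustrative parameter values rather than those used in the figures, is:

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameters: R0 = lam_bar / gamma = 1.7
lam_bar, gamma = 0.85, 0.5
sigma2_rel, tau = 0.5, 2.0                 # sigma^2 / lam_bar and OU correlation time
sigma = np.sqrt(sigma2_rel * lam_bar)
dt, n_steps, n_samples = 1e-2, 20_000, 10_000

X = np.full(n_samples, 1.0 - gamma / lam_bar)               # start at the deterministic equilibrium
xi = rng.normal(0.0, 1.0 / np.sqrt(2.0 * tau), n_samples)   # stationary initial condition for xi^OU

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_samples)
    xi += (-xi / tau) * dt + dW / tau                        # standard OU process
    drift = lam_bar * X * (1.0 - X) - gamma * X
    X += (drift + sigma * X * (1.0 - X) * xi) * dt           # SIS SDE with correlated noise
    np.clip(X, 0.0, 1.0, out=X)                              # numerical safeguard only

# a normalized histogram of X now approximates the stationary PDF p_0(x)
```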
For the case of stochastic SIS model (<ref>) under OU noise, we are able to derive an approximate stationary PDF for the infected population fraction (see Appendix <ref>). This stationary PDF is available in the explicit closed form, p_0(x)=Cx^Q_1(1-x)(Gx^2-Dx+F)^Q_2exp[Q_3arctan(2Gx-D/√(|Δ|))], where C is the normalization factor, and Q_1=P/F-1, Q_2=-P/2F-1, Q_3=P(D-2BF)/F√(|Δ|), G=A^2B^2+AB+1, D=A+AB+2A^2B+2, F=A^2+A+1, with |Δ|=4GF-D^2>0, and P=2(1+a)/B(σ^2/λ̅), B=R_0/R_0-1, A=a/1+a, a=τ(λ̅-γ). Despite its convoluted form, stationary PDF (<ref>) depends on three dimensionless parameters only. Two of them, R_0 and σ^2/λ̅, are the same as in the white noise case. The additional parameter a=τ/η is the relative correlation time of the OU noise, defined as the ratio of the correlation time τ of the noise and the Lyapunov characteristic time scale η=(λ̅-γ)^-1 of the underlying deterministic model (<ref>). As discussed in Appendix <ref>, for the white noise limit τ→ 0, PDF (<ref>) results in the Stratonovich stationary PDF (<ref>) with ϖ=1. Using PDF (<ref>), we formulate the bifurcation diagrams shown in Fig. <ref>, which depend on the dimensionless parameters R_0, σ^2/λ̅ and a. To the best of our knowledge, such bifurcation diagrams for the correlated noise case are considered here for the first time. As shown in Fig. <ref>, although PDF (<ref>) is approximate, it is in excellent agreement with the stationary PDFs obtained from direct Monte Carlo simulations of SDE (<ref>). Bifurcation diagrams in Fig. <ref> corresponding to the correlated OU process are similar to those in Fig. <ref> for the uncorrelated white noise. In addition, the types of noise-induced transitions are similar to those in the white noise case. However, there are some important quantitative differences: * Region III, where disease eradication is most likely, is smaller when using correlated OU process. Moreover, as the relative correlation time a increases, this region shrinks further. * As the relative correlation time a increases, the range of R_0 values corresponding to transitions of types 1 and 2 reduces. Furthermore, transitions of type 3 occur for R_0 that are significantly less than 2 (vertical dashed line). * PDF peak drift towards zero, that occurs in transitions of types 1 and 2, becomes less pronounced, as a increases. To summarize, correlations in contact rate suppress the drift of the PDF mode towards zero and delay the emergence of a PDF mode at zero. This results in stationary PDFs whose probability mass is mainly located around the equilibrium of the deterministic SIS model. We can also observe the stabilizing effect of correlated noise by comparing Figs. <ref>A, B for OU noise with a=0.5, to the respective Figs. <ref>B, D for white noise (Stratonovich interpretation). We also observe the change in type of noise-induced transitions due to correlated noise: for R_0=1.4 (resp., R_0=1.7), stationary PDF exhibits a type 1 (resp., type 2) noise-induced transition under white noise, while it exhibits a type 2 (resp., type 3) transition under OU noise with a=0.5. § SEIR MODEL In this section, we consider the Susceptibles-Exposed-Infected-Removed (SEIR) model, 𝕀 S(t)/𝕀 t=-λ/NS(t)I(t), 𝕀 E(t)/𝕀 t=λ/NS(t)I(t)-α E(t), 𝕀 I(t)/𝕀 t=α E(t)-γ I(t), 𝕀 R(t)/𝕀 t=γ I(t). 
Compared to the SIS model, the SEIR model has two additional compartments: the exposed E(t) containing the individuals that have contracted the disease but are not infectious yet, and R(t) containing the individuals that have been removed from the population, comprising the deceased and the immune due to vaccination or prior infection. The additional model parameter α is the average incubation rate, defined as the inverse of the average incubation (or latency) period during which the individual has contracted the disease but is not infectious yet. SEIR models are suitable for describing the spread of airborne diseases such as flu and COVID-19, whose infection follows after a latency period, and also confers immunity after recovery, albeit temporarily <cit.>. In our study, we use SEIR model (<ref>) to model the Omicron wave of COVID-19 pandemic in the US, i.e., the period between December 3, 2021 and April 22, 2022. We use the data for cumulative infections from the COVID-19 Dashboard by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University <cit.>. The total population N is considered constant and equal to the US population of 329.5 million, and the initial values of the exposed E(t_0), the infected I(t_0), and the removed R(t_0) at the beginning of the Omicron wave were chosen consistent with the Johns Hopkins data base to be 0.14%, 0.18% and 14.88% of the total population, respectively. Then, SEIR model parameters R_0=λ/γ, α and γ are determined by least square fitting so that the cumulative number of COVID-19 cases during Omicron wave, as predicted by the model, agrees with the Johns Hopkins data. To determine the cumulative number of COVID-19 cases from the SEIR model, we use the relation ∫_t_0^tI(s)𝕀 s=1/γ(R(t)-R(t_0)), which is derived from Eq. (<ref>). By this process, we obtain the values R_0=1.85, α=1/3.5 days^-1, γ=1/1.2 days^-1. After fitting the deterministic SEIR model to data, we add noise fluctuations to the average contact rate. We consider both cases of white and OU noise, for noise levels 0.5≤σ^2/λ̅≤ 3.0. We note that, in prior studies which use stochastic compartmental models for COVID-19, the parameter σ/λ̅ is chosen to model the noise level <cit.>. However, here we use σ^2/λ̅, since this is a dimensionless parameter. Contrary to the stochastic SIS model of Sec. <ref>, a stationary PDF for the stochastic SEIR model is not available in analytic form. Thus, we perform Monte Carlo simulations of SEIR model (<ref>) with sample size 50,000 and stochastically perturbed contact rate λ(t) = λ̅+σξ(t). In the white noise case, ξ(t)=ξ^WN(t), the system of SDEs of SEIR model is numerically solved under the Stratonovich interpretation, using the predictor-corrector scheme of Cao et al. <cit.>. In the case of OU noise, ξ(t)=ξ^OU(t), stochastic SEIR model is augmented by the linear SDE, 𝕀ξ^OU(t)/𝕀 t=-1/τξ^OU(t)+1/τξ^WN(t), that generates the standard OU process ξ^OU(t). The resulting coupled system is again solved using a predictor-corrector scheme <cit.>. The time series of the mean cumulative COVID cases obtained from the Monte Carlo simulations are shown in Fig. <ref>. These simulations are also used to determine the stationary PDFs of COVID cases shown in Fig. <ref>. As we show in Fig. <ref>, the choice between white or OU noise for modeling uncertainties in the contact rate is consequential since they lead to very different forecasts for the spread of the pandemic. 
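A condensed version of this Monte Carlo experiment is sketched below. The simple Euler stepping, the sample size, and the definition of cumulative cases as the integral of αE are simplifications of the predictor-corrector setup described above, and the noise level shown is one illustrative choice from the range studied.

```python
import numpy as np

rng = np.random.default_rng(2)

# parameters quoted above (rates per day)
N_pop = 329.5e6
alpha, gamma, R0 = 1 / 3.5, 1 / 1.2, 1.85
lam_bar = R0 * gamma
sigma2_rel, tau = 1.0, 7.0                    # e.g. sigma^2/lam_bar = 1 and one-week correlation
sigma = np.sqrt(sigma2_rel * lam_bar)

n_samples, T, dt = 20_000, 140.0, 0.02        # Omicron wave spans roughly 140 days
E = np.full(n_samples, 0.0014 * N_pop)
I = np.full(n_samples, 0.0018 * N_pop)
R = np.full(n_samples, 0.1488 * N_pop)
S = N_pop - E - I - R
cum_cases = np.zeros(n_samples)               # new entries into I: one convention for cumulative cases
xi = rng.normal(0.0, 1.0 / np.sqrt(2.0 * tau), n_samples)

for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt), n_samples)
    xi += (-xi / tau) * dt + dW / tau         # OU fluctuation of the contact rate
    lam = lam_bar + sigma * xi                # may rarely dip below zero, as discussed in the text
    new_inf = lam * S * I / N_pop
    S += -new_inf * dt
    E += (new_inf - alpha * E) * dt
    I += (alpha * E - gamma * I) * dt
    R += gamma * I * dt
    cum_cases += alpha * E * dt

print("mean cumulative cases over the wave:", cum_cases.mean())
```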
For increasing levels of white noise intensity, SEIR model significantly underestimates the severity of the pandemic on average. On the other hand, OU noise leads to forecasts whose mean trajectory of COVID cases stays always close to the actual data. The best fit is obtained for OU noise with correlation time of 1 week, which is in agreement with the weekly social patterns observed in human behavior <cit.>. Note however that, despite the abundance of data collected from the COVID-19 pandemic, the correlation time of the contact rate has not been quantified yet <cit.>. The reason the SEIR model under white noise underestimates the pandemic spread is that it undergoes a noise-induced transition similar to type 2 transition of the stochastic SIS model, see Fig. <ref>. As the white noise intensity increases, the PDF peak drifts from the deterministic equilibrium towards lower values. Due to this peak drift, the mean trajectories of cumulative COVID cases for σ^2/λ̅=0.5 and σ^2/λ̅=1 lie below the actual data. For the noise level σ^2/λ̅=2, an additional peak emerges in the regime of low number of cases (≈5×10^7). Further increase of white noise intensity makes the additional peak more pronounced. This results in stochastic SEIR model to greatly underestimate the pandemic severity for σ^2/λ̅=3. In contrast, when the contact rate is perturbed by the OU noise, the stationary PDF of COVID cases remains unimodal for a wider range of noise levels. The presence of correlations in OU noise hinders the emergence of the additional peak at lower case values; only for the combination of small correlation time (τ=1 day) and high intensity (σ^2/λ̅=3) of the OU noise does an additional peak start forming around 5×10^7 cases (see Fig. <ref>B). Also, the stationary PDF for OU noise exhibits the opposite trend in peak drift compared to the PDFs for white noise; increasing the noise intensity makes the peak to drift towards higher values. Thus, presence of temporal correlation in the noise changes the type of the noise-induced transitions the SEIR model undergoes. This is similar to our results for correlated noise in the stochastic SIS model. In Figs. <ref>C-E, we also see that larger correlation times make the PDFs less diffusive, and the peak drift less pronounced. This is the expected sharpening effect of correlated noise <cit.> as a result of the mean-reverting property of the OU process <cit.>, which becomes stronger as the correlation time increases. The effect of mean-reverting property of OU noise is also shown in Fig. <ref>, where the OU noise is more concentrated around its mean value, compared to the white noise with the same intensity. This also means that the OU noise becomes negative less frequently than the respective white noise. Nonetheless, since OU noise is Gaussian and thus unbounded, it can always attain negative values, which is unrealistic for the contact rate. Prior work on stochastic oncology remedies these unwanted negative values by considering bounded noise <cit.>. Noise-induced transitions in compartmental models under bounded noise have not been studied extensively yet (see, e.g., <cit.>), thus constituting an interesting direction for future work. § CONCLUSIONS It was shown recently that time correlations are essential for modeling uncertainties in the contact rate of an infectious disease <cit.>. 
Using standard results from the theory of stochastic processes, Mamis and Farazmand <cit.> showed that the only feasible process for modeling such uncertainties is the Ornstein–Uhlenbeck (OU) process. However, to arrive at this conclusion, the authors assumed that the autocorrelation function of the contact rate has an exponentially decreasing form. In the present work, we proved the same result without making such onerous assumptions. Modeling the contacts of each individual as a Markov process, assuming a reasonable conditional probability for such contacts, and using the central limit theorem, we proved that the contact rate averaged over the population satisfies the Langevin equation corresponding to the OU process. We studied the implications of this result on two typical examples of stochastic compartmental models in epidemiology; the SIS model which describes bacterial and sexually transmitted diseases, and the SEIR model which describes airborne diseases such as COVID-19. Stochasticity enters into the compartmental models by considering stochastic fluctuations in the contact rate, to account for uncertainties in social behavior of individuals in the population. For the stochastic SIS model, we derived the exact stationary PDF of the infected population fraction for both cases of white and Ornstein–Uhlenbeck noise fluctuations in contact rate. As a result, we were able to determine the noise-induced transitions that a stochastic SIS model undergoes, as well as the effect of temporal correlations in contact rate. Our main result is that, for a range of R_0 corresponding to many SIS-modeled diseases (see Remark <ref>) white noise in contact rate makes the eradication of the disease more likely. This is an unrealistic behavior since greater uncertainty in measuring a model parameter should not lead to the eradication of the disease. On the other hand, the inclusion of correlations has a stabilizing effect on the stationary PDF of the infected population fraction, mitigating the unrealistic transitions towards zero infected population. The results for noise-induced transitions of stochastic SEIR models are similar to those for SIS models. By performing Monte Carlo simulations of a SEIR model fitted to data from the Omicron wave of COVID-19 pandemic in the US, we observed that white noise models of the contact rate lead to systematic underestimation of the pandemic severity. On the other hand, when the contact rate is modeled as an OU process, the predicted number of COVID cases is always close to the actual data. An important direction for future work is to develop analytic tools for stochastic SEIR models, similar to those that we have already developed for SIS models. Our work demonstrates that the inclusion of correlated uncertainties in compartmental models is a central component for a realistic stochastic model of disease spread. If overlooked, this would lead to unrealistic, less severe forecasts. However, despite the abundance of data collected, especially during the COVID-19 pandemic, the intensity and temporal correlations of noise in compartmental model parameters have not been determined with precision. This calls for more empirical studies that would systematically quantify the nature of uncertainties, and especially their correlation time, in the parameters of compartmental epidemiological models. Acknowledgments K.M. 
would like to acknowledge the hospitality of the Department of Mathematics at North Carolina State University where most of this work was carried out when he was a postdoctoral associate in the research group of M.F. Data availability The data used in this work is available from COVID-19 Dashboard by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University, <https://github.com/CSSEGISandData/COVID-19>. Authors' contributions M.F. conceptualized and supervised the research. K.M. conducted the research and wrote the paper. M.F. revised the paper. Funding The authors received no funding for this work. Competing interests The authors declare that they have no competing financial interests. § CALCULATION OF STATIONARY PDFS Consider the general form of the scalar SDE 𝕀 X(t)/𝕀 t=h(X(t))+σ(X(t))ξ(t), where X(t) and ξ(t) are the stochastic processes of the response and excitation respectively, h(x) is the continuous drift function, and σ(x) is the differentiable function of the noise intensity. In the case where excitation ξ(t) is Gaussian white noise (see Eq. (<ref>)), the evolution of the PDF p(x,t) of the response X(t) is governed by the classical Fokker–Planck equation (see, e.g., <cit.>): ∂ p(x,t)/∂ t+∂/∂ x[(h(x)+ϖ/2σ'(x)σ(x))p(x,t)]=1/2∂^2/∂ x^2[σ^2(x)p(x,t)]. In Eq. (<ref>), the drift term h(x) is augmented by (1/2)σ'(x)σ(x) which is the Wong-Zakai correction (see <cit.>) modeling the difference between Itō (ϖ=0) and Stratonovich (ϖ=1) interpretations of SDEs under multiplicative white noise excitation. The stationary solution p_0(x)=lim_t→∞p(x,t) of Fokker–Planck Eq. (<ref>) is given in the closed form <cit.>: p_0(x)=C/σ^2-ϖ(x)exp(2∫^xh(y)/σ^2(y)𝕀 y), where ∫^x𝕀 y denotes the antiderivative, and C is the normalization factor. In our recent papers <cit.>, we derived an approximate nonlinear Fokker–Planck equation corresponding to SDE (<ref>) under correlated excitation: ∂ p(x,t)/∂ t+∂/∂ x {[h(x)+σ'(x)σ(x)A(x,t;p)]p(x,t)}= =∂^2/∂ x^2[σ^2(x)A(x,t;p)p(x,t)], where A(x,t;p)=∑_m=0^2D_m(t;p)/m!{ζ(x)-𝖤[ζ(X(t))]}^m, with ζ(x)=σ(x)(h(x)/σ(x))', and D_m(t;p)=∫_t_0^tC_ξ(t,s)exp(∫_s^t𝖤[ζ(X(u))]𝕀 u)(t-s)^m𝕀 s. where C_ξ(t,s) is the autocorrelation function of noise excitation ξ(t). Fokker–Planck equation (<ref>) is nonlinear, due to the dependence of coefficient A(x,t;p) on the response moment 𝖤[ζ(X(t))], which in turn depends on the unknown PDF p(x,t). As we have proven in <cit.>, for diminishing correlation time of ξ(t), τ→0, coefficient A in Eq. (<ref>) becomes 1/2. Thus, in the white noise limit, nonlinear Fokker–Planck Eq. (<ref>) becomes the Stratonovich Fokker–Planck Eq. (<ref>) for ϖ=1. Also, note that, by keeping only the zeroth-order term in the sum of Eq. (<ref>), we obtain the widely-used Hänggi's approximate Fokker–Planck equation <cit.>. For ξ(t) being the standard OU process (see Eq. (<ref>)), the stationary solution of the nonlinear Fokker–Planck Eq. (<ref>) reads p_0(x,M)=C/σ(x)A(x,M)exp(∫^xh(y)/σ^2(y)A(y,M)𝕀 y), where A(x,M) is the stationary value of coefficient A(x,t;p), given by A(x,M)=1/2∑_m=0^2[τ(ζ(x)-M)]^m/(1-τ M)^m+1, and M is the stationary value of response moment 𝖤[ζ(X(t))]: M=∫_ℝζ(x)p_0(x,R)𝕀 x. As derived in <cit.>, solution (<ref>) is valid under the condition M<τ^-1. Due to the presence of the unknown response moment M, Eq. (<ref>) is an implicit stationary solution of the nonlinear Fokker–Planck Eq. (<ref>). 
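For concreteness, the white-noise stationary solution (<ref>) can be evaluated numerically for the SIS drift h(x)=λ̄x(1−x)−γx and noise intensity σ(x)=σx(1−x). The sketch below is a minimal illustration using simple quadrature; the function name, grid resolution and the choice γ=1 (which only fixes the time scale) are arbitrary, and this is not the implementation used to produce the figures.

```python
import numpy as np

def sis_white_noise_pdf(R0, s, varpi=1, n=4000, eps=1e-4):
    """Stationary PDF of the infected fraction for the SIS model under white
    contact-rate noise, p0(x) ~ sigma(x)**(varpi-2) * exp(2 * int h/sigma^2 dy),
    with varpi = 0 (Ito) or 1 (Stratonovich) and s = sigma^2 / lam_bar."""
    gamma = 1.0
    lam = R0 * gamma
    sig2 = s * lam
    x = np.linspace(eps, 1.0 - eps, n)
    h = lam * x * (1.0 - x) - gamma * x
    sig = np.sqrt(sig2) * x * (1.0 - x)
    f = 2.0 * h / sig**2
    # antiderivative of f by the cumulative trapezoidal rule
    antider = np.concatenate(([0.0],
                              np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))))
    logp = (varpi - 2.0) * np.log(sig) + antider
    p = np.exp(logp - logp.max())
    p /= np.trapz(p, x)   # valid only where p0 is integrable (see Appendix B)
    return x, p
```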
In <cit.>, we proposed an iteration scheme for the calculation of M, by substituting the implicit form (<ref>) for p_0(x,M) into the definition relation (<ref>) for M. The initial value of moment M for the iteration scheme is calculated from the corresponding Stratonovich Fokker–Planck equation. Implicit closed-form solution (<ref>), supplemented by the iteration scheme for M, constitutes a semi-analytic form for the stationary response PDF for SDE (<ref>) under OU stochastic excitation. However, for the special case of stochastic SIS model (<ref>), we are able to calculate moment M analytically. Note that the calculation of moment M in explicit closed form is, in general, not possible. For stochastic SIS model (<ref>), moment M defined by Eq. (<ref>), is M=-(λ̅-γ). For stochastic SIS model, (<ref>), and by substituting Eq. (<ref>) into Eq. (<ref>), the definition relation for M is specified as M=C∫_ℝζ(x)/σ(x)A(x,M)exp(∫^xh(y)/σ^2(y)A(y,M)𝕀 y)𝕀 x, where h(x)=λ̅x(1-x)-γ x, σ(x)=σ x(1-x) and ζ(x)=-γ x/(1-x). By performing integration by parts, we obtain M =C∫_ℝζ(x)σ(x)/h(x)[exp(∫^xh(y)/σ^2(y)A(y,M)𝕀 y)]'𝕀 x= =-C∫_ℝ(ζ(x)σ(x)/h(x))'exp(∫^xh(y)/σ^2(y)A(y,M)𝕀 y)𝕀 x= =Cσγ(λ̅-γ)∫_ℝ1/[λ̅(1-x)-γ]^2exp(∫^xh(y)/σ^2(y)A(y,M)𝕀 y)𝕀 x. On the other hand, normalization factor C of p_0 is defined as C^-1=∫_ℝ1/σ(x)A(x,M)exp(∫^xh(y)/σ^2(y)A(y,M)𝕀 y)𝕀 x, and after integration by parts: C^-1 =∫_ℝσ(x)/h(x)[exp(∫^xh(y)/σ^2(y)A(y,M)𝕀 y)]'𝕀 x= =-∫_ℝ(σ(x)/h(x))'exp(∫^xh(y)/σ^2(y)A(y,M)𝕀 y)𝕀 x= =-σγ∫_ℝ1/[λ̅(1-x)-γ]^2exp(∫^xh(y)/σ^2(y)A(y,M)𝕀 y)𝕀 x. By substituting Eq. (<ref>) into Eq. (<ref>), we obtain Eq. (<ref>). Using Eq. (<ref>), we calculate coefficient A to A(x)=1/2(1+a)∑_m=0^2(a/1+a)^m(1-x)^-m(1-R_0x/R_0-1)^m, where a=τ(λ̅-γ)>0. By having coefficient A(x) in explicit form, we can perform the integration in Eq. (<ref>) analytically, obtaining thus the expression (<ref>) for the stationary response PDF for SIS model (<ref>) under OU noise. § ANALYSIS OF STATIONARY PDFS FOR SIS MODELS §.§ White noise model - Itō solution In the vicinity of zero, response PDF (<ref>) for ϖ=0 is p_0(x)∼ x^2(1-R_0^-1)/(σ^2/λ̅)-2. For 2(1-R_0^-1)/(σ^2/λ̅)-2<-1⇒ (σ^2/λ̅)>2(1-R_0^-1), p_0(x) is not integrable, and p_0(0)=+∞. Thus, under condition (<ref>), p_0(x) is a delta function at zero. Eq. (<ref>) always holds true for R_0<1, resulting in disease eradication, as in the deterministic case. This is the reason for choosing R_0∈[1,+∞) in our analysis. The green curve in Fig. <ref>A corresponds to (σ^2/λ̅)=2(1-1/R_0). For -1<2(1-R_0^-1)/(σ^2/λ̅)-2<0⇒ 1-R_0^-1<(σ^2/λ̅)<2(1-R_0^-1), p_0(x) is integrable, and has a peak at zero. The blue curve in Fig. <ref>A corresponds to (σ^2/λ̅)=1-1/R_0. For 2(1-R_0^-1)/(σ^2/λ̅)-2>0⇒ (σ^2/λ̅)<1-R_0^-1, p_0(x) is integrable, and p_0(0)=0. For the case (σ^2/λ̅)<2(1-1/R_0), where p_0(x) is integrable, its local extrema points for x∈(0,1) are specified, by the first derivative test, as the roots of quadtratic equation 2(σ^2/λ̅)x^2+[1-3(σ^2/λ̅)]x+(σ^2/λ̅)+R_0^-1-1=0. The requirement of nonnegative discriminant results in the condition R_0≥8(σ^2/λ̅)/[1+(σ^2/λ̅)]^2. Eq. (<ref>), for the case of equality, is the magenta curve in Fig. <ref>A. The two roots of Eq. (<ref>) are x_±=3(σ^2/λ̅)-1±√([1+(σ^2/λ̅)]^2-8(σ^2/λ̅)R_0^-1)/4(σ^2/λ̅). By the additional requirement of x_±∈(0,1), we summarize the conditions for roots x_± to be extrema points of p_0(x). x_+ is extremum point for{(σ^2/λ̅)<1/3⋀(σ^2/λ̅)<1-1/R_0, (σ^2/λ̅)>1/3⋀ R_0≥8(σ^2/λ̅)/[1+(σ^2/λ̅)]^2.. 
x_- is extremum point for (σ^2/λ̅)>1/3⋀(σ^2/λ̅)>1-1/R_0⋀R_0≥8(σ^2/λ̅)/[1+(σ^2/λ̅)]^2. Furthermore, we determine that x_+ is a maximum point, and x_- is a minimum point. Thus, we identify x_+ as the non-zero mode coordinate x_m. By using de l'Hôpital rule, we calculate lim_σ→0x_m=(λ̅-γ)/λ̅, which is the expected result that, in the deterministic limit, PDF mode x_m coincides with the deterministic equilibrium. Last, in order to capture the peak drift phenomenon, we calculate the first derivative of x_m with respect to (σ^2/λ̅). After algebraic manipulations, we obtain that x_m'(σ^2/λ̅)≥0⇒ R_0≥2. The dashed line in Fig. <ref>A is R_0=2. §.§ White noise model - Stratonovich solution We repeat the procedure we followed in Sec. <ref>, for Stratonovich solution, Eq. (<ref>) for ϖ=1. The results we obtain are the following: Stratonovich solution is a delta function at zero only for R_0<1; for R_0>1, it is always integrable. For (σ^2/λ̅)>2(1-1/R_0), p_0(x) has a peak at zero. The blue curve in Fig. <ref>B corresponds to (σ^2/λ̅)=2(1-1/R_0). For x∈(0,1), the local extrema are roots of the equation 2(σ^2/λ̅)x^2+[2-3(σ^2/λ̅)]x+(σ^2/λ̅)+2(R_0^-1-1)=0. Thus, the possible extrema points in (0,1) are x_±=3(σ^2/λ̅)-2±√([2+(σ^2/λ̅)]^2-16(σ^2/λ̅)R_0^-1)/4(σ^2/λ̅), under the condition for nonnegative discriminant of Eq. (<ref>) R_0≥16(σ^2/λ̅)/[2+(σ^2/λ̅)]^2. Eq. (<ref>) for the case of equality is the magenta curve in Fig. <ref>B. We further determine that x_+ is maximum point for{(σ^2/λ̅)<2/3⋀(σ^2/λ̅)<2(1-1/R_0), (σ^2/λ̅)>2/3⋀ R_0≥16(σ^2/λ̅)/[2+(σ^2/λ̅)]^2.. x_- is minimum point for (σ^2/λ̅)>2/3⋀(σ^2/λ̅)>2(1-1/R_0)⋀R_0≥16(σ^2/λ̅)/[2+(σ^2/λ̅)]^2. Also, we calculate that condition (<ref>) is true for Stratonovich solution too. §.§ Ornstein–Uhlenbeck noise model In the vicinity of zero, response PDF (<ref>) is p_0(x)∼ x^P/F-1. We calculate that, for R_0<1, solution (<ref>) is a delta function at zero, similarly to the Stratonovich solution. For (σ^2/λ̅)>2(1+a)/F(1-R_0^-1), PDF (<ref>) exhibits a peak at zero. The blue curve in Fig. <ref> corresponds to (σ^2/λ̅)=2(1+a)(1-R_0^-1)/F. For x∈(0,1), the local extrema are roots of the cubic equation f(x)=0, with f(x)=2G(σ^2/λ̅)x^3+[2(1+a)-(3G+D)(σ^2/λ̅)]x^2+ [2D(σ^2/λ̅)-2(1+a)(2-R_0^-1)]x+2(1+a)(1-R_0^-1)-F(σ^2/λ̅). The calculation of the exact roots of a cubic equation is cumbersome. However, f(1)=-(σ^2/λ̅)A^2(B-1)^2<0, f(+∞)=+∞, and thus, by intermediate value theorem, cubic polynomial f(x) has always a real root that is greater than 1, which is not admissible as extremum point of p_0(x). Thus, the regions III in Fig. <ref> correspond to Δ_3<0, where Δ_3 is the discriminant of the cubic polynomial f(x). Note also that f(-∞)=-∞, and f(0)>0 under the condition (σ^2/λ̅)<2(1+a)/F(1-R_0^-1). Using the intermediate value theorem again, we deduce that, under condition (<ref>), polynomial f(x) has three distinct real roots, with only one of them in (0,1). By combining this result to the behavior of PDF (<ref>) at zero (see Eq. (<ref>)), we conclude that, under condition (<ref>), PDF (<ref>) is unimodal, with its mode at a non-zero x_m. Last, we observe that, for a=0, condition (<ref>) is identical to condition (<ref>), and the cubic polynomial f(x) is factorized to f(x)= (x-1){2(σ^2/λ̅)x^2+[2-3(σ^2/λ̅)]x+(σ^2/λ̅)+2(R_0^-1-1)}. We identify the second factor in the right-hand side of Eq. (<ref>) as the quadratic polynomial, whose roots detetermine the PDF extrema points in (0,1) of the white noise case, under Stratonovich interpretation (see Eq. (<ref>)). 
This finding confirms the consistency of the OU-noise results in the limit a=0 with the Stratonovich solution for the white-noise case.
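As a numerical illustration of this consistency, the closed-form OU stationary PDF (<ref>) can be evaluated directly from its parameters Q_1, Q_2, Q_3, G, D, F; for small relative correlation time a it approaches the Stratonovich white-noise PDF. The sketch below assumes R_0>1 and a>0, and the function name and the numerical normalisation (which stands in for the constant C) are illustrative choices.

```python
import numpy as np

def sis_ou_stationary_pdf(R0, sigma2_over_lam, a, n=4000, eps=1e-4):
    """Approximate stationary PDF of the SIS infected fraction under OU
    contact-rate noise, evaluated from the closed form quoted in the text.
    a = tau*(lam_bar - gamma) > 0 is the relative correlation time."""
    B = R0 / (R0 - 1.0)
    A = a / (1.0 + a)
    G = A**2 * B**2 + A * B + 1.0
    D = A + A * B + 2.0 * A**2 * B + 2.0
    F = A**2 + A + 1.0
    disc = 4.0 * G * F - D**2                 # |Delta| > 0 for a > 0
    P = 2.0 * (1.0 + a) / (B * sigma2_over_lam)
    Q1 = P / F - 1.0
    Q2 = -P / (2.0 * F) - 1.0
    Q3 = P * (D - 2.0 * B * F) / (F * np.sqrt(disc))
    x = np.linspace(eps, 1.0 - eps, n)
    logp = (Q1 * np.log(x) + np.log(1.0 - x)
            + Q2 * np.log(G * x**2 - D * x + F)
            + Q3 * np.arctan((2.0 * G * x - D) / np.sqrt(disc)))
    p = np.exp(logp - logp.max())
    p /= np.trapz(p, x)                       # numerical normalisation in place of C
    return x, p

# e.g. comparing sis_ou_stationary_pdf(1.7, 0.8, a=0.01) with the Stratonovich
# white-noise PDF for the same R0 and sigma^2/lam_bar illustrates the a -> 0 limit.
```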
http://arxiv.org/abs/2307.04117v1
20230709081351
SPar: estimating stellar parameters from multi-band photometries with empirical stellar libraries
[ "Mingxu Sun", "Bingqiu Chen", "Helong Guo", "He Zhao", "Ming Yang", "Wenyuan Cui" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.GA" ]
UTF8gbsn Stellar parameters from multi-band photometries] SPar: estimating stellar parameters from multi-band photometries with empirical stellar libraries 0000-0002-2473-9948]Mingxu Sun (孙明旭) Department of Physics, Hebei Key Laboratory of Photophysics Research and Application, Hebei Normal University, Shijiazhuang 050024, P. R. China 0000-0003-2472-4903]Bingqiu Chen(陈丙秋) South-Western Institute for Astronomy Research, Yunnan University, Kunming 650500, P. R. China 0000-0001-5737-6445]Helong Guo(郭贺龙) South-Western Institute for Astronomy Research, Yunnan University, Kunming 650500, P. R. China /0000-0003-2645-6869]He Zhao(赵赫) Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210023, P. R. China 0000-0001-8247-4936]Ming Yang(杨明) Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, P. R. China 0000-0003-1359-9908]Wenyuan Cui(崔文元) Department of Physics, Hebei Key Laboratory of Photophysics Research and Application, Hebei Normal University, Shijiazhuang 050024, P. R. China Bingqiu Chen [email protected] Modern large-scale photometric surveys have provided us with multi-band photometries of billions of stars. Determining the stellar atmospheric parameters, such as the effective temperature () and metallicities (), absolute magnitudes (M_G), distances (d) and reddening values () is fundamental to study the stellar populations, structure, kinematics and chemistry of the Galaxy. This work constructed an empirical stellar library which maps the stellar parameters to multi-band photometries from a dataset with Gaia parallaxes, LAMOST atmospheric parameters, and optical to near-infrared photometry from several photometric surveys. Based on the stellar library, we developed a new algorithm, SPar (Stellar Parameters from multiband photometry), which fits the multi-band stellar photometries to derive the stellar parameters (, , M_G, d and ) of the individual stars. The algorithm is applied to the multi-band photometric measurements of a sample of stars selected from the SMSS survey, which have stellar parameters derived from the spectroscopic surveys. The stellar parameters derived from multi-band photometries by our algorithm are in good agreement with those from the spectroscopic surveys. The typical differences between our results and the literature values are 170 K for , 0.23 dex for , 0.13 mag for M_G and 0.05 mag for . The algorithm proved to be robust and effective and will be applied to the data of future large-scale photometric surveys such as the Mephisto and CSST surveys. § INTRODUCTION Modern large-scale photometric surveys, such as the Sloan Digital Sky Survey (SDSS; ), the Pan-STARRS 1 Survey (PS1; ), the Two Micron All Sky Survey (2MASS; ) and the Wide-Field Infrared Survey Explorer (WISE; ), have provided us board band photometry covering the wavelength from the optical to the infrared (IR) bands of billions of stars. The multi-band photometric measurements of stars contain the spectral energy distribution (SED) information of these stars, from which we can obtain the stellar atmosphere parameters (i.e. the effective temperature , the metallicity and the surface gravity log g), as well as the distance and extinction values of stars <cit.>. Stellar atmospheric parameters are typically determined from their spectra. 
While large-scale multi-fibre spectroscopic surveys like the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST; ), the Apache Point Observatory Galactic Evolution Experiment (APOGEE; ), the Galactic Archaeology with HERMES (GALAH, ), and the Dark Energy Spectroscopic Instrument (DESI; ) have provided spectra for tens of millions of stars, we can now determine the parameters of billions of stars with comparable precision using photometric data of exceptionally high accuracy or data obtained through specially designed filters. This approach allows us to obtain accurate stellar parameters without relying solely on spectroscopic data. For example, <cit.> have measured metallicities of ∼ 27 million stars from the Gaia Early Data Release 3 (Gaia EDR3; ) photometric data of unprecedented millimagnitude precision. The typical metallicity precision is about δ[Fe/H] = 0.2 dex. Based on the narrowband photometry from the SkyMapper Southern Survey (SMSS; ), <cit.> have determined stellar atmospheric parameters for ∼ 24 million stars. The precision of their metallicity estimates has typical values around 0.05 to 0.15 dex. <cit.> obtained stellar parameters (effective temperature, , surface gravity, , and metallicity, ) from the narrow-band photometries of the J-PLUS survey <cit.>. They have achieved precisions of δ∼ 55 K, δ∼ 0.15 dex, and δ∼ 0.07 dex, respectively. We note that these works are only for stars located at high Galactic latitudes, where the interstellar extinction is small. For stars at low Galactic latitudes, where the extinction effects are large, the stellar atmospheric parameters need to be estimated along with the stellar extinction values. The future Multi-channel Photometric Survey Telescope (Mephisto; ) photometric survey and the Chinese Space Station Telescope (CSST; ; ) optical survey will have both high precision and specially designed filters, which will provide great opportunities for us to obtain accurate stellar parameters of billions of stars. Mephisto is a wide-field survey telescope with a 1.6 m primary mirror. It is equipped with three CCD cameras and is capable to image the same patch of sky in three bands simultaneously, which will provide us with real-time colours of stars with unprecedented accuracy. The filters of Mephisto (uvgriz) are similar to those of SkyMapper <cit.>. The Mephisto-W Survey (; Chen et al. in prep.) will target the northern sky of declination (Dec) between -21 and 75 with a coverage of over 27,000 deg^2. CSST is a 2 m space survey telescope, which shares the same orbit as the Chinese Space Station. The CSST optical survey (CSST-OS) will observe a large sky area of ∼ 18,000 deg^2 in seven photometric filters (NUV,  u,  g,  r,  i,  z and y) covering the wavelengths from the near-ultraviolet (NUV) to near-infrared (NIR). Deriving the stellar atmospheric parameters as well as the distance and extinction values is a fundamental task for the Mephisto and CSST surveys. In this work, we present a new algorithm,SPar (Stellar Parameters from multiband photometry), to estimate stellar atmospheric parameters, such as the effective temperature () and metallicities (), absolute magnitudes (M_G), distances (d) and reddening values () of a large sample of stars from their multi-band photometries. Previous algorithms such as the Star-Horse <cit.> and the General stellar Parameterized from photometry (GSP-PHot; ) rely on the theoretical stellar models, which may suffer systematic effects <cit.>. 
One method of dealing with these inaccuracies in theoretical models is to apply empirical corrections based on the observed photometry of stars of known type. <cit.>, <cit.> and <cit.> measure stellar parameters and extinction based on the empirical stellar locus in the colour-colour space. <cit.>, <cit.> and <cit.> calculated the intrinsic colours and reddening values of the individual stars based on a spectroscopic sample selected from the spectroscopic surveys. In this work, we will construct an empirical stellar library which maps the stellar parameters to multi-band photometries from a dataset with Gaia parallaxes, LAMOST atmospheric parameters, and optical to near-infrared photometry from several photometric surveys. The paper is structured as follows: Section <ref> presents the relevant dataset. Section <ref> describes in detail the empirical stellar library we constructed. Section <ref> describes our algorithm and Section <ref> tests it. Finally, Section <ref> summarizes the algorithm. § DATA As the Mephisto and CSST surveys have not yet started, in the current work we have used photometric data from the SMSS survey for the experiment. This work is based on the Gaia Data Release 3, the broad-band photometry from SMSS, Two Micron All Sky Survey (2MASS; ) and the Wide-field Infrared Survey Explorer survey (WISE; ), and the spectroscopic data from LAMOST, APOGEE and GALAH. The SMSS is an ongoing photometric survey of the Southern sky <cit.>. The survey depth is between 19.7 and 21.7 mag in six optical bands: u, v, g, r, i and z. In this work, we adopt the data from its second data release (SMSS DR2; ). The photometry has a internal precision of 1 per cent in the u and v bands, and 0.7 per cent in the other four bands (g,  r,  i and z). To break the degeneracy of effective temperature (or intrinsic colours) and extinction for the individual stars, we combine the SMSS photometry with the IR photometry of 2MASS and WISE. The 2MASS survey is a full-sky survey undertaken in three filters: J,  H and . The systematic errors of the 2MASS photometry are estimated to be less than 0.03 mag <cit.>. The WISE survey is a full-sky survey undertaken in four bands: W1,  W2,  W3 and W4. In the current work, we adopt the AllWISE Source Catalog <cit.>. We use only the data in the W1 and W2 bands, as the W3 and W4 measurements have lower sensitivities and poorer angular resolutions. The Gaia DR3 <cit.> photometric data and parallax measurements are also adopted to derive the stellar parameters when available. Gaia DR3 was released by the Gaia mission <cit.>. It contains more than a billion sources with five astrometric parameters (position, parallaxes, and proper motions) and three-band photometry (G, and ). The median uncertainty of the parallax is ∼ 0.02 - 0.03 mas for G < 15 mag sources, 0.07 mas at G = 17 mag, and 0.5 mas at G = 20 mag <cit.>. At G = 20 mag, the typical uncertainties for the Gaia DR3 photometries are 6, 108, and 52 mmag for the Gaia G, and bands, respectively <cit.>. The LAMOST, APOGEE and GALAH spectroscopic data are adopted in the current work for two aims: to construct the empirical stellar library and to validate the resulting stellar properties. In this work, we use the `LAMOST LRS Stellar Parameter Catalog of A, F, G and K Stars' from the LAMOST data release 8 (LAMOST DR8; ). The catalogue contains stellar atmospheric parameters (, and ) derived from over 6 million low-resolution spectra. 
For the APOGEE data, we use the APOGEE stellar parameters catalogue from the SDSS data release 17 (SDSS DR17; ). The catalogue contains stellar atmospheric parameters (T_eff, log g and [M/H]) derived from over 0.6 million near-IR spectra. For the GALAH data, we adopt its DR3 catalogue <cit.>, which contains stellar parameters (T_eff, log g and [Fe/H]) of over 0.5 million nearby stars. § EMPIRICAL STELLAR LIBRARY An empirical stellar library is first created based on a sample of stars selected from the LAMOST spectroscopic data and Gaia DR3, for which atmospheric parameters, distances, and extinction values can be well measured. We cross-match the LAMOST and Gaia stars with the optical and near-IR photometric data, i.e. the SMSS, 2MASS and WISE, using a radius of 1 arcsec. To exclude stars with bad observations, the sample is selected by the following criteria: * LAMOST spectral signal-to-noise ratio (SNR) larger than 20 and effective temperature between 4000 and 8000 K, * All Gaia G photometric errors smaller than 0.1 mag and phot_bp_rp_excess_factor > 1.3 + 0.06(G_BP - G_RP)^2, * Photometric errors in any SMSS uvgriz, 2MASS JHK_s and WISE W1W2 bands less than 0.1 mag. To obtain the absolute magnitude of our selected stars in each filter, the extinction values in the individual filters and the distance of each star need to be calculated. §.§ Correcting the extinction effect of the individual stars In this work, the reddening values of our sample stars are calculated with a star-pair method <cit.>. In selecting the control sample for the extinction correction, we impose more rigorous constraints on the LAMOST spectral signal-to-noise ratio (SNR) and on the photometric accuracies than for the empirical stellar library, and only low-extinction stars are chosen. The control stars are selected via the following criteria: * LAMOST spectral signal-to-noise ratio (SNR) larger than 50, * Gaia G photometric errors smaller than 0.01 mag, and all SMSS uvgriz, 2MASS JHK_s and WISE W1W2 photometric errors less than 0.08 mag, * E(B-V) values from the extinction map of <cit.> smaller than 0.025 mag. For a given target star in our sample, its intrinsic colours, (G_BP - x)_0 (where x denotes the magnitude in another band, i.e., G, u, v, g, r, i, z, J, H, K_s, W1 or W2), are estimated simultaneously from the corresponding values of the pair stars in the control sample. The reddening values E(G_BP - x) of the target star are then obtained from the differences between its observed and intrinsic colours, i.e. E(G_BP - x) = (G_BP - x) - (G_BP - x)_0. Stars with resulting E(G_BP - x) errors larger than 0.1 mag are excluded from our catalogue. Based on the resultant reddening values E(G_BP - x), a T_eff- and reddening-dependent extinction law similar to that of <cit.> has been built. <cit.> obtained the empirical reddening coefficients for the individual colours as a function of effective temperature and reddening. In the current work, we derive the empirical reddening coefficients, defined as R(G_BP - x) = E(G_BP - x)/E(G_BP - G_RP), as a function of T_eff and E(G_BP - G_RP). We adopt a quadratic function of the two variables: R(G_BP - x) = C_0 + C_1x + C_2x^2 + C_3y + C_4xy + C_5y^2, where x = T_eff - 6000 and y = E(G_BP - G_RP) - 0.5. The reddening values derived from the star-pair method are used to fit Eq. <ref> and obtain the individual coefficients C_0 to C_5. The fitting results are listed in Table <ref>. Finally, we assume that A_W2/E(G_BP - G_RP) = 0.063 <cit.>, since the W2 band has the longest wavelength and experiences the least extinction of all the bands.
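A minimal sketch of the coefficient fit in Eq. (<ref>) is given below. It assumes that arrays of T_eff, E(G_BP - G_RP) and star-pair reddening ratios R are available for the control-sample stars; the function names are placeholders, and plain linear least squares is used, which the text does not specify.

```python
import numpy as np

def fit_reddening_coefficients(teff, ebprp, R_obs):
    """Least-squares fit of R = C0 + C1*x + C2*x**2 + C3*y + C4*x*y + C5*y**2,
    with x = Teff - 6000 and y = E(GBP-GRP) - 0.5, for one colour index.
    teff, ebprp, R_obs: 1-D arrays for the control-sample stars (hypothetical inputs)."""
    x = np.asarray(teff, dtype=float) - 6000.0
    y = np.asarray(ebprp, dtype=float) - 0.5
    design = np.column_stack([np.ones_like(x), x, x**2, y, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(design, np.asarray(R_obs, dtype=float), rcond=None)
    return coeffs                     # C0 ... C5

def reddening_coefficient(coeffs, teff, ebprp):
    """Evaluate the fitted coefficient R(GBP - x) for given Teff and E(GBP-GRP)."""
    x = teff - 6000.0
    y = ebprp - 0.5
    C0, C1, C2, C3, C4, C5 = coeffs
    return C0 + C1*x + C2*x**2 + C3*y + C4*x*y + C5*y**2
```

One such fit is performed per colour index, and the resulting coefficients, together with the adopted A_W2/E(G_BP - G_RP), convert the measured reddening values into extinction in every band.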
Combining these reddening values with the extinction-coefficient relations derived above, the extinction value in each filter can be calculated for all our sample stars. The extinction values are then subtracted to obtain the intrinsic magnitudes of the stars. In the left and middle panels of Fig. <ref>, we show the observed and intrinsic colour–magnitude diagrams (CMDs) of our sample stars, respectively. §.§ The empirical HR diagrams We then obtain the absolute magnitudes of the sample stars. The distances of the sample stars are calculated from the Gaia DR3 parallaxes via a simple Bayesian approach <cit.>. A simple posterior probability is adopted: p(d|ϖ) ∝ d^2 exp(-(ϖ - ϖ_zp - 1/d)^2 / (2σ_ϖ^2)) p(d), where σ_ϖ and ϖ_zp are respectively the uncertainty and the global zero point of the Gaia parallaxes, and p(d) is the space-density prior for the sample stars. In the current work, a zero point of ϖ_zp = -0.026 mas from <cit.> is adopted. The Galactic structure model of <cit.> is adopted as the spatial density prior. With the resultant distances, the absolute magnitude in each band is then calculated by M_x = x - 5 log_10 d + 5 - A_x, with d in pc. Stars with parallax errors larger than 20 per cent are excluded. This leads to a final sample of 3,842,671 stars. In the sample, more than 3.5 million stars have all Gaia absolute magnitude estimates, over 3.2 million stars have all IR-band (2MASS JHK_s and WISE W1W2) absolute magnitudes, and over 0.3 million stars have all SMSS absolute magnitude estimates. In the right panel of Fig. <ref>, we show the resulting Hertzsprung–Russell diagram (HRD) of our final sample stars. The final sample obtained above is too large and is not evenly distributed across the parameter space. Therefore, we use this sample to create a gridded stellar library. To map the stellar parameters, namely effective temperature (T_eff), metallicity ([Fe/H]) and Gaia G-band absolute magnitude (M_G), to the absolute magnitude in each filter, we divide the parameter space into 100 bins in T_eff (ranging from 4000 to 8000 K), 50 bins in [Fe/H] (ranging from -2.5 to 0.5 dex), and 100 bins in M_G (ranging from 8 to -4 mag). We use Random Forest regression to obtain the absolute magnitudes in each passband for each bin, using the final sample stars as the training dataset. We exclude any grid cells with fewer than 5 stars, resulting in an empirical stellar library with 39,905 grid points in the parameter space. In Fig. <ref>, we present the Hertzsprung–Russell diagrams (HRDs) of both the final sample stars and the resultant gridded stellar library. We compared our empirical stellar library with the PARSEC theoretical stellar isochrones <cit.> to verify our results. To do so, we linearly interpolated the PARSEC absolute magnitudes onto our T_eff, [Fe/H] and M_G grids and compared them with our corresponding results. As illustrated in Fig. <ref>, our stellar library is in good agreement with the PARSEC theoretical models. Specifically, the mean differences between our empirical library and the PARSEC models range between -0.04 and 0.04 mag, with dispersions of 0.01 to 0.03 mag for most of the filters. The dispersions are slightly larger for the WISE W1W2 bands, 0.04-0.05 mag, and for the SMSS uv bands, 0.08-0.11 mag.
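The distance and absolute-magnitude step above can be sketched as follows. The snippet adopts a flat space-density prior in place of the Galactic structure model used here, summarises the posterior by its median, and works in kpc so that the parallax term is simply 1/d; these are illustrative simplifications rather than the actual pipeline.

```python
import numpy as np

def distance_and_absmag(plx_mas, plx_err_mas, m_obs, A_x,
                        zp_mas=-0.026, d_max_kpc=20.0, n_grid=40000):
    """Posterior-median distance from a Gaia parallax (mas) and the resulting
    absolute magnitude M_x = m_x - 5 log10(d/pc) + 5 - A_x.

    The posterior is p(d|plx) ~ d^2 exp(-(plx - zp - 1/d)^2 / (2 sigma^2)) p(d),
    with a flat prior p(d) assumed here for simplicity."""
    d = np.linspace(1e-3, d_max_kpc, n_grid)            # kpc; plx[mas] = 1/d[kpc]
    chi = (plx_mas - zp_mas - 1.0 / d) / plx_err_mas
    log_post = 2.0 * np.log(d) - 0.5 * chi**2           # + log prior(d) if desired
    post = np.exp(log_post - log_post.max())
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    d_med = np.interp(0.5, cdf, d)                      # median distance in kpc
    M_x = m_obs - 5.0 * np.log10(d_med * 1000.0) + 5.0 - A_x
    return d_med, M_x
```

Applied star by star, this yields the (T_eff, [Fe/H], M_G) training set from which the gridded library is built; the Random Forest regression step can then be performed with any standard implementation.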
We attribute the slightly larger dispersions in the WISE W1W2 and SMSS uv bands to the relatively large calibration errors and the uncertainties in the filter response curves of these filters. For the u band, we find a slightly larger mean difference, -0.065 mag, than in the other filters. This difference could be due to a variety of factors, including the larger photometric errors, extinction uncertainties and model uncertainties in this passband. § ESTIMATING STELLAR PARAMETERS FROM MULTI-BAND PHOTOMETRIES This section introduces how we derive the stellar parameters from the multi-band photometries. To allow our algorithm SPar to be applied to samples of billions of stars, we need to minimise the computational cost. Our algorithm fits only four stellar parameters: T_eff, [Fe/H], M_G and E(G_BP - G_RP). The distances d of the stars can then be derived from the fitting results. SPar uses an ensemble Markov chain Monte Carlo (MCMC) method to obtain the best parameters of the individual stars, adopting a set of initial values derived from a minimum χ^2 method. §.§ Initial parameters from the minimum χ^2 method Based on the reference stellar library and the extinction law derived in Sect. <ref>, and assuming a reddening value, we can predict the `distance-modulus corrected' magnitudes M^'_x of a star in the individual filters, i.e., M^'_x = M_x + A_x. The distance modulus μ can then be derived by subtracting M^'_x from the observed magnitude m_x of the star: μ = m_x - M^'_x. By substituting the resulting distance modulus into the standard magnitude equation, m_x = M_x + A_x + μ, we can simulate the magnitude of the star in the individual passbands. With T_eff, [Fe/H], M_G and E(G_BP - G_RP) as free parameters, we can thus model the observed magnitude in each filter. We define χ^2 = 1/(N-K) ∑^N_x=1 ((m^obs_x - m^mod_x)/σ_x)^2, where m^obs_x and m^mod_x are respectively the observed and simulated magnitudes in filter x, σ_x are the photometric errors, and N and K are the numbers of adopted filters and free parameters, respectively. If a Gaia parallax is available, χ^2 = 1/(N+1-K) (∑^N_x=1 ((m^obs_x - m^mod_x)/σ_x)^2 + ((ϖ_obs + ϖ_zp - ϖ_mod)/σ_ϖ)^2), where ϖ_obs and ϖ_mod are respectively the observed and simulated parallaxes, σ_ϖ is the parallax error, and ϖ_zp (ϖ_zp ≡ -0.026 mas) is the zero point of the observed parallax. We search for the minimum-χ^2 parameters by running over a series of E(G_BP - G_RP) values ranging from -0.1 to 6.0 mag in steps of 0.02 mag and over all grid points in the reference stellar library. We use only the optical filters, i.e. Gaia G and SMSS gri, to derive the distances of the stars. This procedure yields best-fit values of T_eff, [Fe/H], M_G and E(G_BP - G_RP) for the individual stars, which are adopted as the initial parameters of the following MCMC analysis. If the resulting χ^2 is too large (χ^2 > 10), the stellar parameters in our template library do not fit the observed values well. It is possible that the star in question is not a normal AFGK star, and such stars are not used in the subsequent MCMC analysis. §.§ Final parameters from the MCMC analysis In order to determine the final parameters and their uncertainties for the individual stars, we adopt the MCMC procedure described in <cit.>. The initial values of the effective temperature T_eff, metallicity [Fe/H], absolute magnitude M_G and reddening E(G_BP - G_RP) are set to the values derived in Sect. <ref>. The likelihood is defined as follows: L = ∏^N_x=1 1/(√(2π)σ_x) exp(-(X^obs_x - X^mod_x)^2 / (2σ_x^2)), where X^obs_x and X^mod_x are respectively the observed and simulated magnitudes in filter x, or the parallax if available.
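The two stages of SPar can be sketched as below. The minimum-χ² initialisation scans the reddening grid and the library grid, and the refinement stage uses the emcee ensemble sampler with the Gaussian likelihood above. The container layouts, the inverse-variance mean used for the distance modulus (the actual pipeline uses only the Gaia G and SMSS gri filters), the flat priors, and the choice of emcee are assumptions made for illustration, not a description of the released code.

```python
import numpy as np
import emcee

def chi2_init(m_obs, m_err, lib_mag, lib_par, red_coeff,
              plx=None, plx_err=None, zp=-0.026):
    """Brute-force minimum-chi^2 initialisation.
    lib_mag  : (n_grid, n_band) template absolute magnitudes M_x,
    lib_par  : (n_grid, 3) template (Teff, [Fe/H], M_G),
    red_coeff: (n_band,) assumed A_x per unit E(GBP-GRP); the real pipeline uses
               Teff- and reddening-dependent coefficients (all hypothetical inputs)."""
    w = 1.0 / m_err**2
    best_chi2, best_theta = np.inf, None
    for E in np.arange(-0.1, 6.0, 0.02):
        Mp = lib_mag + red_coeff * E                        # M'_x = M_x + A_x
        mu = np.sum((m_obs - Mp) * w, axis=1) / w.sum()     # distance modulus per template
        resid = (m_obs - Mp - mu[:, None]) / m_err
        chi2 = np.sum(resid**2, axis=1)
        dof = len(m_obs) - 4
        if plx is not None:
            d_kpc = 10.0 ** (0.2 * mu - 2.0)                # mu = 5 log10(d/pc) - 5
            chi2 = chi2 + ((plx + zp - 1.0 / d_kpc) / plx_err) ** 2
            dof += 1
        chi2 /= dof
        i = int(np.argmin(chi2))
        if chi2[i] < best_chi2:
            best_chi2, best_theta = chi2[i], np.append(lib_par[i], E)
    return best_chi2, best_theta          # reduced chi^2, (Teff, [Fe/H], M_G, E)

def log_prob(theta, obs, err, model_fn, bounds):
    """Gaussian log-likelihood with flat priors inside `bounds`.
    model_fn(theta) is a placeholder for interpolation in the empirical library;
    obs/err may include the parallax as an extra element."""
    if any(t < lo or t > hi for t, (lo, hi) in zip(theta, bounds)):
        return -np.inf
    mod = model_fn(theta)
    return -0.5 * np.sum(((obs - mod) / err) ** 2 + np.log(2.0 * np.pi * err**2))

def run_mcmc(theta0, obs, err, model_fn, bounds,
             n_walkers=10, n_steps=20, n_burn=5, seed=42):
    """Short-chain MCMC refinement around the minimum-chi^2 starting point."""
    rng = np.random.default_rng(seed)
    ndim = len(theta0)
    # small isotropic jitter; per-parameter scales should be used in practice
    p0 = theta0 + 1e-3 * rng.standard_normal((n_walkers, ndim))
    sampler = emcee.EnsembleSampler(n_walkers, ndim, log_prob,
                                    args=(obs, err, model_fn, bounds))
    sampler.run_mcmc(p0, n_steps)
    chain = sampler.get_chain(discard=n_burn, flat=True)
    return (np.percentile(chain, 50, axis=0),   # adopted parameters
            np.percentile(chain, 16, axis=0),   # lower uncertainties
            np.percentile(chain, 84, axis=0))   # upper uncertainties
```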
To run the MCMC analysis, we use 10 walkers and 20-step chains, discarding the first 5 steps as burn-in. The final parameters are given by the 50th percentiles of the posterior distributions, and their uncertainties are obtained from the 16th and 84th percentiles. § TESTS OF OUR ALGORITHM In this section, we test the SPar algorithm. We cross-match the SMSS data with the LAMOST, APOGEE and GALAH spectroscopic data, and obtain a test sample of stars with parameter measurements from the spectroscopic data. The SPar algorithm is then applied to the sample stars to obtain their parameters, which are compared with those obtained from the spectra. The test sample contains 1,046,722 stars, of which 408,671, 200,902 and 437,149 stars are from the LAMOST, APOGEE and GALAH surveys, respectively. Fig. <ref> displays the distribution of χ^2 values for all sources in the test sample. A typical χ^2 distribution is observed, with a prominent peak at χ^2 ≈ 2 and a long tail. Some stars in the sample exhibit large χ^2 values, indicating that our empirical models are unable to match their observed data well. This may be because the atmospheric parameters of these stars lie outside the range of our models. To optimise the computational efficiency, we exclude these stars from the subsequent MCMC analysis. By adopting a conservative threshold of χ^2 < 10, we retain 96% of the LAMOST sources, 85% of the APOGEE sources, and 97% of the GALAH sources. After conducting the MCMC analysis for the remaining 982,927 stars, we obtained their final parameters. To evaluate the accuracy of our algorithm, we first compared the differences between our predictions and the observations. We simulated the observed magnitudes of the individual stars at the various wavelengths, and their parallaxes, based on the MCMC results, and compared them with the observed data in Fig. <ref>. Our simulations show good agreement with the observations. The mean values of the differences are negligible. The dispersions of the differences are about 0.020-0.024 mag for the optical magnitudes (G, g, r, i and z), 0.034-0.039 mag for the UV and near-IR magnitudes (u, v, J, H, K_s, W1 and W2), and 0.042 mas for the parallax. §.§ Comparison of the resulting parameters We then compare the stellar parameters obtained by SPar with those obtained from the spectroscopic surveys to test the accuracy of SPar. Fig. <ref> shows the comparisons of the effective temperatures and metallicities between the current work and the spectroscopic surveys, including LAMOST, APOGEE and GALAH. The effective temperatures from SPar and LAMOST are in good agreement, with no obvious trend with temperature. The dispersion of the difference between the effective temperatures is only 170 K. Compared to LAMOST, APOGEE has more low-temperature stars and fewer high-temperature stars. Some of the stars in APOGEE have T_eff below 4000 K, which is outside the temperature range of our empirical stellar templates. For those low-temperature stars, we would overestimate their temperatures. Regarding the GALAH data, our effective temperatures are systematically higher than the GALAH values for high-temperature stars. A possible reason is that the effective temperatures measured by GALAH are systematically higher than those measured by LAMOST for stars of high temperatures, as discussed in <cit.>.
The sensitivity of the SkyMapper uv filters to stellar metallicities enables accurate measurements of based on the SMSS multiband photometric data, as illustrated in Fig. <ref>. Our resulting metallicities show good agreement with those obtained from the LAMOST, APOGEE, and GALAH surveys, even for extremely metal-poor stars with ∼ -2.5 dex. However, the dispersion of the differences is relatively high, with values of 0.23, 0.28, and 0.24 dex, respectively, for LAMOST, APOGEE, and GALAH measurements. These values are larger than the dispersion values reported by <cit.> (0.05 to 0.15 dex). This difference may be attributed to the fact that <cit.> restricted their analysis to stars at high Galactic latitudes, where dust extinction values are small and readily derived from two-dimensional extinction maps. In contrast, many of the stars in our sample are located in the Galactic disk, which is subject to high extinction effects. Therefore, errors arising from dust extinction hinder accurate determination of the intrinsic colors of stars, leading to relatively large uncertainties in the derived metallicities. We plotted the differences in effective temperature and metallicities between our work and the spectroscopic surveys against the reddening values of our sample stars in Fig. <ref>. As the reddening values increase, the dispersions of the differences in the two parameters also increase. On average, the mean values of the differences do not vary with the reddening. However, for highly reddened stars, the mean value of the differences in the effective temperature deviates from zero. This may be due to the small number of stars in such regions and the relatively larger errors associated with them. In addition to the stellar parameters, reddening and distance are also results from SPar. We compare our resultant values with those derived from the star-pair method, our derived distances and those from Gaia DR3 parallaxes for all stars in the test sample in Fig. <ref>. For , the consistencies are good. There are no offsets between our results and those from the spectroscopic results. The dispersion values of the differences for the LAMOST and GALAH stars are about 0.05 mag. While for the APOGEE stars, the dispersion is larger, of about 0.08 mag. This is partly due to the high proportion of stars with large reddening values in APOGEE, and partly due to the fact that there are some low-temperature stars in APOGEE with temperatures below 4000 K. As mentioned above, we may overestimate their effective temperatures, which would lead us to overestimate their reddening values at the same time. Regarding the distance measurements, our results are consistent with the Gaia measurements, with no significant offsets observed. The dispersions of the relative differences for both LAMOST and GALAH stars are only around 7%, while for APOGEE stars, the value is about 19%. This is because LAMOST and GALAH stars, being mainly dwarfs that are relatively close to us, have accurate parallax measurements from Gaia, resulting in smaller relative distance dispersions. Conversely, the APOGEE catalogue contains many giant stars that are further away, leading to larger parallax measurement errors and, therefore, a larger dispersion in relative distance measurements. Finally, we compared the absolute magnitudes in the Gaia G band, M_G values obtained from SPar, with those from the three surveys of all the test stars and found that, in general, the agreement is good (Fig. <ref>). 
The dispersion value of the differences is about 0.35 mag, but the parallax error has a significant impact on the accuracy of the distances, and therefore, it has a great impact on the accuracy of the absolute magnitude we obtain. For stars with small relative parallax errors (less than 5%), the dispersion value of the M_G difference is only 0.13 mag. However, for stars with relative parallax errors greater than 20%, these sources exhibit significant dispersion, and the dispersion value of the M_G difference is 1.62 mag. §.§ Comparison of results with longer MCMC chains To enable SPar to run on large samples of stars, we employed a smaller number of chains and steps in the final MCMC analysis phase. To evaluate the effect of chain and step numbers on the results, we randomly selected 30 sources (10 sources each from LAMOST, APOGEE, and GALAH) and performed 100 walkers and 1000 step chains for each of these sources. The results obtained from SPar and longer chains and steps are presented in Fig. <ref>. Overall, there are good agreements between the obtained parameters. However, four sources showed relatively large deviations in , mainly due to the large uncertainties. Despite this, we believe it is reasonable for SPar to use relatively short chains and steps. Using longer chains and steps can significantly increase the computational time without bringing any significant improvement in the results. § SUMMARY In this work, we have developed a new algorithm called SPar to derive stellar parameters from multi-band photometries, which can be applied to large samples of stars. The algorithm takes advantage of empirical stellar libraries constructed from Gaia, LAMOST, and other photometric surveys. It leverages the minimum χ^2 fit of the stellar SEDs to obtain the initial values for the MCMC analysis, which results in the stellar parameters, including , , and M_G, of the individual stars. Our algorithm is tested on the LAMOST, APOGEE, and GALAH stars. The typical dispersion values of the differences between our results and literature values were 170 K for , 0.23 dex for , 0.13 mag for M_G and 0.05 mag for . In the future, our new SPar algorithm will be implemented on large samples of stars obtained from the Mephisto and CSST surveys to derive atmospheric parameters, distances, and extinction values for billions of stars. This will give us crucial insights into the structure, chemistry, and other properties of the Milky Way. § ACKNOWLEDGEMENTS We would like to thank the referee for providing us with detailed and constructive feedback that has significantly enhanced the quality of the manuscript. We are grateful to Professor Biwei Jiang for help and discussion. This work is partially supported by the National Key R&D Program of China No. 2019YFA0405500, National Natural Science Foundation of China 12173034, 11833006, 12203016 and 12173013, Natural Science Foundation of Hebei Province No. A2022205018, A2021205006, 226Z7604G, and Yunnan University grant No. C619300A034, and Science Foundation of Hebei Normal University No. L2022B33. We acknowledge the science research grants from the China Manned Space Project with NO. CMS-CSST-2021-A09, CMS-CSST-2021-A08 and CMS-CSST-2021-B03. We are grateful for the support of the Postdoctoral Research Station in Physics at Hebei Normal University. This research made use of the cross-match service provided by CDS, Strasbourg. 
Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. This work presents results from the European Space Agency (ESA) space mission Gaia. Gaia data are being processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the Gaia MultiLateral Agreement (MLA). The Gaia mission website is https://www.cosmos.esa.int/gaia. The Gaia archive website is https://archives.esac.esa.int/gaia. The national facility capability for SkyMapper has been funded through ARC LIEF grant LE130100104 from the Australian Research Council, awarded to the University of Sydney, the Australian National University, Swinburne University of Technology, the University of Queensland, the University of Western Australia, the University of Melbourne, Curtin University of Technology, Monash University and the Australian Astronomical Observatory. SkyMapper is owned and operated by The Australian National University’s Research School of Astronomy and Astrophysics. The survey data were processed and provided by the SkyMapper Team at ANU. The SkyMapper node of the All-Sky Virtual Observatory (ASVO) is hosted at the National Computational Infrastructure (NCI). Development and support the SkyMapper node of the ASVO has been funded in part by Astronomy Australia Limited (AAL) and the Australian Government through the Commonwealth’s Education Investment Fund (EIF) and National Collaborative Research Infrastructure Strategy (NCRIS), particularly the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service Projects (ANDS). This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This publication makes use of data products from the Widefield Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. aasjournal