http://arxiv.org/abs/2307.05550v1
20230709072109
Exploring high scale seesaw models through a supersymmetric portal
[ "Yi Liu", "Stefano Moretti", "Harri Waltari" ]
hep-ph
[ "hep-ph" ]
Exploring high scale seesaw models through a supersymmetric portal Yi Liu Stefano Moretti Harri Waltari § INTRODUCTION Neutrino masses have been known to be non-zero for 25 years <cit.>. As they are so much smaller than all other Standard Model (SM) fermion masses, one usually assumes that they are generated by some kind of seesaw mechanism <cit.>. The masses are still generated through the Higgs mechanism, but suppressed by a heavy seesaw particle, which can be a singlet neutrino (Type-I), a triplet of Higgs bosons (Type-II) or a triplet of exotic leptons (Type-III) (see Refs. <cit.> for reviews). The seesaw scale is a priori unknown. If the seesaw scale is around the Electro-Weak (EW) scale, one may be able to produce the seesaw particles directly at the Large Hadron Collider (LHC) <cit.>. One of the original ideas <cit.> was that the smallness of the neutrino masses could be related to the breaking of a Grand Unification Theory (GUT), i.e., the relevant Yukawa couplings would be of order unity and the seesaw scale somewhere around α M_GUT∼ 10^14 GeV. Such energy scales are obviously out of the reach of present and future colliders. Supersymmetry, the symmetry between fermions and bosons, is often a necessary ingredient in formulating models with large separations of scales. Due to the cancellation between the bosonic and fermionic loops, the separation of scales is radiatively stable <cit.>, once it has been generated by some dynamics. Thus, in the supersymmetric framework, scalar masses would not get quadratic corrections proportional to the seesaw scale and an EW scale Higgs boson would not be unnatural even if the seesaw scale were close to the GUT scale. In the context of high scale seesaw models, supersymmetry has one remarkable property. The scalar potential, and especially its F-terms of the form V=∑_i| ∂ W/∂φ_i|^2, leads to four-scalar interactions that do not involve the seesaw particle but do involve the seesaw couplings. If these couplings are of order unity, they are among the largest ones in the model and could lead to observable consequences. For definiteness, let us consider the Type-I seesaw model, where the extra superpotential terms in addition to those of the Minimal Supersymmetric Standard Model (MSSM) are W=W_MSSM+ y^ν L· H_u N^c+M_NN^cN^c, where we assume y^ν∼ 1 and M_N∼ 10^14 GeV. When differentiating with respect to N^c, one gets the term ∑_k y^ν *_iky^ν_jkL̃^†_i· H_u^†L̃_j· H_u, involving only Higgs bosons and left-handed sleptons, which we assume to be at the TeV scale. If there are significant mass splittings between the sfermion generations, which could well be generated through Renormalisation Group Evolution (RGE) due to the large couplings, one might get processes like ν̃_i→ν̃_jh with a large Branching Ratio (BR). If the sneutrinos decay visibly, the decays can be distinguished from mono-Higgs signatures that could arise from dark matter <cit.>. Slepton decays with Higgs bosons in the final state could offer an indication of a high scale seesaw model and thus provide us with a window to scales otherwise beyond our experimental reach. Our aim is to investigate how one could observe such slepton decay patterns involving Higgs bosons in seesaw models of Type-I and Type-III, which have a similar structure in terms of the TeV scale Lagrangian.
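To make the origin of this quartic term concrete, the following is a minimal single-generation sketch (our own illustration, not code from the paper) of extracting the F-term contribution |∂W/∂N^c|^2 from the schematic superpotential terms above; the symbol names are ours and the flavour structure is suppressed.

```python
import sympy as sp

# Schematic, single-generation version of W ⊃ y^nu L·H_u N^c + M_N N^c N^c.
y, M = sp.symbols('y M_N', positive=True)
L, Hu, N = sp.symbols('L H_u N', complex=True)

W = y * L * Hu * N + M * N**2

F_N = sp.diff(W, N)                     # F-term of the heavy singlet N^c
V = sp.expand(F_N * sp.conjugate(F_N))  # |dW/dN^c|^2 contribution to V

print(V)
# The output contains y**2 * L * H_u * conj(L) * conj(H_u): the
# slepton-slepton-Higgs-Higgs quartic that survives with seesaw-sized
# couplings even when the heavy state N^c itself is integrated out.
```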
Our paper is organised as follows. Higgs-slepton interactions are described in the next section, which is followed by a discussion of the production and decay modes relevant to our research. Our numerical analysis is introduced in the following section, after which we conclude. § HIGGS-SLEPTON INTERACTIONS IN SEESAW MODELS We shall now look in some detail at how the Higgs-slepton interactions arise from our seesaw models. In particular, we look at Type-I and Type-III seesaw models. Both have Yukawa couplings that connect the lepton and Higgs doublets to the seesaw particles, which form a singlet and a triplet under SU(2), respectively. The superpotential of Type-I seesaw is given in Eq. (<ref>) and for Type-III seesaw it is W = W_MSSM + y^ν L Σ H_u + M_ΣTr(Σ^2), where L is the left-chiral lepton doublet and H_u = (H^+ , H^0)^T is the up-type Higgs doublet. The Σ is an antilepton (L=-1) chiral superfield which transforms as (1,3,0) under the SM gauge group SU(3)_c× SU(2)_L × U(1)_Y. The mass term for Σ violates lepton number by two units. The superfield Σ can be represented as Σ = σ^iΣ^i= ( [ Σ^0/√(2) Σ^+; Σ^- -Σ^0/√(2) ]), Σ^± = (Σ^1 ∓ iΣ^2)/√(2), Σ^0 = Σ^3. The models look very similar as far as neutrino mass generation is concerned, both having a lepton and a Higgs doublet coupling to the companion neutrinos. The only difference is that the L and H_u superfields combine to a singlet in the case of Type-I and to a triplet in the case of Type-III seesaw. This difference between the two seesaw models leads to a difference in the scalar potential, which contributes to the processes that lead to slepton decays containing a Higgs boson. When we expand the neutrino Yukawa terms in the superpotential, we get W = y^ν_ij( e^-_iH_u^+-1/√(2)ν_i H_u^0)N^c_j +…, W = y^ν_ij( 1/√(2)e^-_iH_u^+Σ^0_j -ν_iΣ^-_jH_u^++1/√(2)e^-_iΣ^+_jH_u^0+1/2ν_iΣ^0_jH_u^0)+…, for Type-I and Type-III, respectively. Here we have included a factor of 1/√(2) into the definition of the neutral Higgs field. Differentiating with respect to the heavy seesaw fields leads to the scalar potentials V = ∑_k1/2y^ν_iky^ν *_jkν̃_iν̃^*_jH_u^0H_u^0 *+…, V = ∑_k1/4 y^ν_iky^ν *_jk(ν̃_iν̃^*_jH_u^0H_u^0 *+2ẽ^-_iẽ^+_jH_u^0H_u^0 *)+… , for Type-I and Type-III, respectively. Hence one in general gets Higgs interactions with sleptons that are non-diagonal in flavour space and, in the case of a high scale seesaw, have large couplings. After EW Symmetry Breaking (EWSB) we have ⟨ H_u^0⟩ = vsinβ (v=246 GeV), which generates a three-point coupling between sleptons and the SM-like Higgs. One may also note that in Type-III seesaw there is a non-flavour-diagonal coupling between charged sleptons and Higgs bosons, while there is no such coupling in the case of Type-I seesaw. As we discuss below, this leads to a stronger signal arising from Type-III than from Type-I seesaw. We further notice that, while the usual D-terms of the scalar potential also contain large couplings between sneutrinos, charged sleptons and Higgs bosons, such couplings are always flavour-diagonal and cannot result in decays of the type ν̃_2→ν̃_1h, which is our smoking-gun signature for high scale seesaw models. Besides the decay modes containing Higgs bosons, there are other decay channels and the visibility of the signal depends on the branching ratios. If the Lightest Supersymmetric Particle (LSP) is a higgsino-like neutralino and the gauginos are heavier than the sleptons, the decays of the left-handed sleptons arise from the superpotential term y^ℓ LH_dE^c, so one gets the decays ν̃→χ̃^±ℓ^∓ and ℓ̃^±→χ̃^0ℓ^±.
These lead to the partial widths Γ(ν̃_j→ℓ^±_jχ̃^∓_i) = |y^ℓ_jj|^2|U_i2|^2(m_ν̃^2-m_χ̃^2)^2/(32π m_ν̃^3), Γ(ℓ̃^±_j→ℓ^±_jχ̃^0_i) = |y^ℓ_jj|^2|N_i3|^2(m_ℓ̃^2-m_χ̃^2)^2/(16π m_ℓ̃^3), where U_i2 gives the higgsino component of the chargino (for our benchmarks |U_i2|≃ 1) and N_i3 gives the down-type higgsino component of the neutralino (for our benchmarks |N_13|≃ 1/√(2)). If the soft slepton masses are not flavour diagonal, an appropriate linear combination of the leptonic Yukawas corresponding to the flavour composition of the sleptons must be used. If the LSP is a gaugino, there are additional decay channels ν̃→νχ̃^0 and ℓ̃^±→χ̃^±ν (if winos are light) and the decay widths are proportional to g^2 instead of |y^ℓ|^2 and to gaugino components instead of higgsino components. Since we have the hierarchy y^ℓ_11≪ y^ℓ_22≪ y^ℓ_33≪ g, the strength of our signal will depend on the nature of the light neutralinos and charginos and, in the case of higgsinos, on the flavour of the heavier sleptons. As the electron and muon Yukawas are so tiny, in practice the mixing between the gaugino and higgsino components will be significant for the overall decay widths of the sneutrinos and charged sleptons unless the gauginos are extremely heavy. We shall concentrate on the higgsino case since, as we shall see, already the tau Yukawa is so large that the signal containing Higgs bosons would have too small a branching ratio if the stau were the heavy slepton that decays. Hence in all our benchmarks we make our gauginos heavier than the sleptons.
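As a numerical illustration of the two partial widths above, here is a minimal sketch (our own; the input values are illustrative placeholders, not the paper's benchmark parameters):

```python
import math

def width_sneutrino_to_lepton_chargino(y_l, U_i2, m_snu, m_cha):
    """Gamma(snu_j -> l_j chi^-+_i) in GeV, first formula above."""
    return (abs(y_l)**2 * abs(U_i2)**2
            * (m_snu**2 - m_cha**2)**2 / (32 * math.pi * m_snu**3))

def width_slepton_to_lepton_neutralino(y_l, N_i3, m_slep, m_chi):
    """Gamma(slep_j -> l_j chi^0_i) in GeV, second formula above."""
    return (abs(y_l)**2 * abs(N_i3)**2
            * (m_slep**2 - m_chi**2)**2 / (16 * math.pi * m_slep**3))

# Illustrative inputs: a 1 TeV slepton, a 500 GeV higgsino-like neutralino
# with |N_13| ~ 1/sqrt(2) as quoted above, and a muon-sized Yukawa ~1e-3.
print(width_slepton_to_lepton_neutralino(1e-3, 1 / math.sqrt(2),
                                         m_slep=1000.0, m_chi=500.0))
```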
§ THE PRODUCTION AND DECAY MECHANISMS To study the high-scale seesaw signatures with Higgs bosons, we build some Benchmark Points (BPs) with m(ẽ^±)<m(μ̃^±)<m(τ̃^±) and mass splittings between generations larger than m_h≈ 125 GeV (the mass of the SM-like state h). As we shall see, this will be the limiting case where we can still see a signal. If the second slepton (assuming the third one to be too heavy to be produced efficiently) were a selectron, the signal would be similar (as the mixing with gauginos dominates the other decay modes already for smuons), while in the case of a stau, the signal would almost vanish due to the larger partial widths from equations (<ref>) and (<ref>). We consider the charged current process pp→ℓ̃_2^±ν̃_2, where the subscript indicates mass ordering. The charged current portal is more promising as the final state contains charged leptons even when the sneutrino decays invisibly. As discussed above, in Type-III seesaw both sneutrinos and charged sleptons can decay to final states with Higgs bosons. The dominant process is ℓ̃_2→ℓ̃_1 h while ν̃_2 →ℓ^±χ̃_1^∓, νχ̃^0. The Feynman diagram for such a process is shown in Fig. <ref>. There is also a process where the Higgs originates from a sneutrino decay, but that has a smaller BR, as can be seen from equation (<ref>). In Type-I seesaw, only the sneutrino can decay into a Higgs boson via ν̃_2 → h ν̃_1. The corresponding Feynman diagram is shown in Fig. <ref>. These processes can lead to a variety of final state topologies. Currently the limit for charged slepton masses is m(ẽ^±),m(μ̃^±)> 700 GeV for neutralino masses below 350 GeV <cit.>, which we take as our lower limit of charged slepton masses[With more compressed spectra, m(ℓ̃)-m(χ̃^0)≲ 100 GeV, one can obviously have significantly lighter sleptons. Such cases need a different analysis strategy than the one adopted here, as we rely on large E_T to suppress SM backgrounds.]. This means that the overall production rate of slepton-sneutrino pairs will be low, especially as we have to produce second generation sleptons with a large mass splitting compared to the first generation ones. In fact, the production rate at the LHC even with nominal collision energy (√(s)=14 TeV) is so low (∼ 30 ab for 1 TeV sleptons) that there will not be sufficient statistics even at the High-Luminosity LHC (HL-LHC) <cit.>. Hence we turn to the proposed High-Energy LHC (HE-LHC) <cit.> with a nominal collision energy of √(s)=27 TeV. This increases the production cross section by an order of magnitude compared to the standard LHC. In Tab. <ref> we show the lepton multiplicities for some typical benchmark points (BP1 and BP3, defined in Tab. <ref>). We see that the single lepton final state has the highest multiplicity for both seesaw models. As we will lose a part of the signal due to the different BRs involved in the model, it is reasonable to look at the state with the highest multiplicity first. We also pick the Higgs decay mode to b-quarks, as that has the highest BR and allows one to reconstruct the Higgs boson, although not with very high mass precision. Unfortunately the channels with good mass resolution (i.e., γγ and ZZ^*→ 4 leptons) are too rare to be useful with such a small event rate. Our signal events will then consist of events with a single lepton, two b-tagged jets and missing momentum carried by the LSP. The largest SM backgrounds to this final state arise from the following processes: * tt̅ production where one of the top (anti)quarks decays semileptonically and the other one hadronically; * W^±h production in the case where the W^± boson decays into a lepton and a neutrino. These have been considered to be the dominant backgrounds in similar types of experimental analyses (e.g., <cit.>). § SIMULATION AND RESULTS In this section we describe our numerical toolbox and the Monte Carlo (MC) simulations that we have pursued with it. §.§ Analysis strategy The model files are produced by the Mathematica package Sarah v4.14 <cit.>. This code also generates source code for Spheno v4.0.4 <cit.> to obtain the mass spectrum and couplings as well as for Madgraph5 v2.8.2 <cit.> to simulate collider events. We use Pythia v8.2 <cit.> for parton showering and hadronisation, while we simulate the detector response using Delphes3 <cit.>. We simulate the analysis and present our numerical results with Madanalysis5 v1.8 <cit.>. We prepare two BPs for Type-III seesaw and two for Type-I seesaw, which can be detected at the HE-LHC with 27 TeV collision energy and an integrated luminosity of 10 ab^-1. We simulate proton-proton collisions to produce the second generation sneutrino (ν̃_2) and slepton (ℓ̃_2), which in our cases are smuon-like, and select decays to the SM-like Higgs boson plus the corresponding first generation particles. The masses of ν̃_2 and ℓ̃_2 must be large enough to allow for the decay kinematics. At the same time, the mass of the lightest slepton is required to be larger than 700 GeV <cit.>. The particle mass spectra and relevant BRs are shown in Tab. <ref>. All of the BPs have the same LSP and Next-to-LSP (NLSP), which are higgsino-like neutralinos and charginos. BP1 has a mass spectrum similar to BP3 and the same situation arises between BP2 and BP4. However, there is a significant difference in the Higgs production cross section times BRs between Type-III seesaw and Type-I seesaw.
For the sneutrino decay process, Type-I seesaw has BRs larger than the Type-III ones, which can be traced back to the factors in equations (<ref>) and (<ref>). However, the charged slepton decay channel does not exist in Type-I seesaw, whereas it dominates the Higgs signal in Type-III seesaw, consistent with equations (<ref>) and (<ref>). As the slepton masses increase, the BR shows a decreasing trend. The BR for μ̃^±→ẽ^±h is high in Type-III seesaw, since the competing decay mode of eq. (<ref>) is proportional to the small muon Yukawa coupling squared or the small gaugino-higgsino mixing factor squared. Had the second slepton been a selectron, the BR would have been similar, as the gaugino-higgsino mixing would dominate the decays to neutralinos/charginos, while for staus the corresponding BR is only a few percent, as the tau Yukawa is large enough to make the competing decay modes dominate. As a pre-selection, we require a single lepton and at least two b-jets, as shown in Tab. <ref>. We use a working point where the b-jet tagger achieves a 70% efficiency and only a 1.5% probability of misidentifying a light-parton jet as a b-jet <cit.>. Then several cuts are imposed to select the Higgs signal as per the process in Fig. <ref>. The leading lepton is dominantly produced from the process ν̃_1→ e + χ̃_1^±. As the mass difference between the sneutrino and the lightest chargino is larger than 500 GeV for BP1 and 400 GeV for BP2, we choose the transverse momentum of the leading lepton to be larger than 400 GeV to preserve the single lepton signal and reduce the background, as shown in Fig. <ref>. The E_T (MET) cut is chosen to be 500 GeV as the NLSP mass is around that value. In order to handle the MC generation of the tt̅ background properly, we add a cut at the generation level (MET above 300 GeV) so as to generate this SM process directly in the signal region of interest. The Higgs selection is done by requiring the invariant mass of the leading and next-to-leading b-jets to lie in the interval from 100 GeV to 150 GeV. Fig. <ref> shows a peak around the SM-like Higgs mass for the signal and the W^± h background, while the tt̅ noise is rather flat therein. Hence, this requirement proves effective against the latter. Finally, the 100 GeV cut on the transverse mass defined using the highest p_T lepton plus missing transverse momentum, M_T(l_1,E_T), can also significantly reduce the background, especially tt̅, as is evident from Fig. <ref>.
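To summarise the event selection just described, here is a minimal cut-and-count sketch (our own schematic, not the actual Madanalysis5 implementation; the event-record field names are hypothetical):

```python
def passes_selection(ev):
    """Schematic version of the selection described above (GeV units)."""
    return (ev["n_leptons"] == 1              # pre-selection: a single lepton
            and ev["n_bjets"] >= 2            # at least two b-tagged jets
            and ev["pt_lep1"] > 400.0         # hard leading lepton
            and ev["met"] > 500.0             # MET around the NLSP mass scale
            and 100.0 < ev["m_bb"] < 150.0    # Higgs window, leading b-jet pair
            and ev["mt_lep1_met"] > 100.0)    # transverse mass of lepton + MET

# One example event record; a real analysis would loop over all events:
example = {"n_leptons": 1, "n_bjets": 2, "pt_lep1": 450.0,
           "met": 620.0, "m_bb": 122.0, "mt_lep1_met": 180.0}
print(passes_selection(example))  # True: this event enters the signal region
```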
§.§ Numerical analysis We have applied the cuts of Tab. <ref> to all BPs as well as to the backgrounds and the results are presented in Tab. <ref>, for the discussed HE-LHC energy and luminosity. As expected, Type-III seesaw preserves more signal events (25.8 for BP1 and 27.7 for BP2) than Type-I seesaw (15.5 for BP3 and 9.2 for BP4). Furthermore, BP2 and BP4 show the interesting feature of having fewer initial events (compared to BP1 and BP3, respectively) but displaying a similar final result. This is because the sneutrino and smuon in BP2(BP4) are heavier than those in BP1(BP3), leading to a larger MET and a higher transverse momentum of the leading lepton (p_T(ℓ_1)), thereby increasing the efficiency of the corresponding selections. The significances are shown in Tab. <ref>, for the usual HE-LHC parameters, wherein one can appreciate rather significant signal excesses above the SM backgrounds for Type-III seesaw, while for Type-I seesaw the sensitivity is somewhat limited (but larger values of Yukawa couplings could be probed and there could be room to improve the analysis or increase the amount of data). We also tested a benchmark similar to BP1, but with the mass ordering m(ẽ)<m(τ̃)<m(μ̃) and the smuon too heavy to be produced. This gave just 0.6 events after the cuts, so we can get a significant signal only arising from selectrons or smuons and their sneutrinos. In addition, it is essential for our analysis that there is a significant mass splitting between the sleptons and the LSP. With a softer MET cut the tt̅ background would be problematic, while the cut on the transverse mass of the lepton and MET would keep W^±h under control. In summary, though, it is clear that the HE-LHC is a machine with the potential to access high scale seesaw models (like Type-III and Type-I embedded within the MSSM) by exploiting the SM-like Higgs (eventually decaying to bb̅) plus a hard lepton and MET signature. § CONCLUSIONS How neutrino mass generation occurs in Nature is one of the outstanding questions in particle physics. Current probes of neutrinos hardly include colliders, as herein such particles appear as E_T, thereby offering no scope to identify their properties. However, in a supersymmetric world, there exist sneutrinos, which share with neutrinos their interactions. Therefore, given that sneutrinos can decay visibly at the LHC (i.e., inside the detectors), it makes sense, in order to study neutrino properties in supersymmetry, to study sneutrinos. One, however, needs a paradigm for supersymmetry to do so, i.e., a model realisation of it, which we assumed here to be the MSSM, supplemented with two kinds of seesaw mechanism for (s)neutrino mass generation, the so-called Type-I and Type-III. These mechanisms have a similar structure for generating neutrino masses and hence both lead to Higgs-sneutrino interactions which are non-diagonal in flavour space. These two are examples of high scale seesaw mechanisms, wherein the companion neutrinos (to the SM ones) can have masses of order 10^12-10^14 GeV. However, left-handed sneutrino and slepton masses are necessarily linked to the typical supersymmetry breaking scale, which ought to be 10 TeV or so at the most (in order to preserve gauge coupling unification, successful dynamical EWSB, etc.). In the case of a high seesaw scale the neutrino Yukawa couplings are among the largest ones in the model and, due to the structure of the supersymmetric scalar potential, they can lead to observable consequences at the supersymmetry breaking scale. We found that the current LHC, for which √(s)=14 TeV (in turn recalling that √(ŝ) is only a fraction of that), cannot test such seesaw scenarios. However, a possible energy upgrade has been proposed for it: the so-called HE-LHC. This offers √(s)=27 TeV (and ∫ L dt=10 ab^-1) and is therefore in a position to test the aforementioned seesaw scenarios of neutrino mass generation. In this paper, we have, in particular, tested the scope of a particular signal stemming from these two seesaw mechanisms. In fact, the signature is common to both, i.e., charged current induced slepton-sneutrino production and subsequent decay into the SM-like Higgs boson (in turn decaying to bb̅ pairs), a single lepton (l=e,μ) and MET (or E_T).
Upon assessing that the single lepton channel (as opposed to the multi-lepton ones also stemming from these two scenarios) is the most sensitive one, for any number of b-jets beyond 1, we have devised a simple cut-and-count analysis, deployed identically for both Type-I and -III, that has enabled us to reach evidence-to-discovery significances at the HE-LHC for the Type-III case, while for the Type-I case a more refined selection and/or additional data would be required. This was shown, in both cases, for BPs currently compliant with standard theoretical requirements as well as current experimental searches. Parameter-wise, the signature requires the gauginos to be heavier than the sleptons, a sufficient mass splitting (≳ 300 GeV) between the sleptons and the higgsino-like LSP and a sufficient mass splitting between the slepton generations, so that the decay with a Higgs boson is kinematically allowed. Even though this signal is common to the two seesaw models, the fact that in Type-I seesaw only sneutrinos have decay modes containing Higgs bosons, while for Type-III also charged sleptons have such decay channels, allows us to distinguish the models. This distinction might be more difficult at a hadron collider but, if there were an electron-positron collider with sufficient collision energy, the pair production of charged sleptons above √(s)=2m_ℓ̃ would lead to an enhanced signal with Higgs bosons in the case of Type-III, while no such enhancement would be present in Type-I. As an outlook of our work, we would like to highlight that a Future Circular Collider in hadron-hadron mode (FCC-hh) <cit.>, running at √(s) values up to 100 TeV, will not improve the scope of the HE-LHC since, herein, background rates increase more than the signal ones that we pursued (although this may not be true for other channels not considered here). Altogether, we have shown that there exist cases where, in supersymmetric theories, it is possible to probe the neutrino mass generation mechanism through sneutrino physics while the (seesaw) scale related to this mechanism is extremely high, roughly, up to 10^14 GeV. § ACKNOWLEDGEMENTS SM is supported in part through the NExT Institute and STFC Consolidated Grant No. ST/L000296/1. HW is supported by the Carl Trygger Foundation under grant No. CTS18:164. We finally acknowledge the use of the IRIDIS5 High-Performance Computing Facility and associated support services at the University of Southampton in the completion of this work. Super-Kamiokande:1998kpq Y. Fukuda et al. [Super-Kamiokande], Phys. Rev. Lett. 81 (1998), 1562-1567 [arXiv:hep-ex/9807003 [hep-ex]]. Minkowski:1977sc P. Minkowski, Phys. Lett. B 67 (1977), 421. Konetschny:1977bn W. Konetschny and W. Kummer, Phys. Lett. B 70 (1977), 433. Gell-Mann:1979vob M. Gell-Mann, P. Ramond and R. Slansky, Conf. Proc. C 790927 (1979), 315 [arXiv:1306.4669 [hep-th]]. Mohapatra:1980yp R. N. Mohapatra and G. Senjanovic, Phys. Rev. D 23 (1981), 165-180. Foot:1988aq R. Foot, H. Lew, X. G. He and G. C. Joshi, Z. Phys. C 44 (1989), 441. Khalil:2022toi S. Khalil and S. Moretti, CRC Press, 2022, ISBN 978-1-138-33643-8. Moretti:2019ulc S. Moretti and S. Khalil, CRC Press, 2019, ISBN 978-0-367-87662-3. CMS:2017ybg A. M. Sirunyan et al. [CMS], Phys. Rev. Lett. 119 (2017) no.22, 221802 [arXiv:1708.07962 [hep-ex]]. CMS:2018jxx A. M. Sirunyan et al. [CMS], JHEP 01 (2019), 122 [arXiv:1806.10905 [hep-ex]]. ATLAS:2019kpx G. Aad et al. [ATLAS], JHEP 10 (2019), 265 [arXiv:1905.09787 [hep-ex]]. ATLAS:2020wop G. Aad et al. [ATLAS], Eur. Phys. J.
C 81 (2021) no.3, 218 [arXiv:2008.07949 [hep-ex]]. Dimopoulos:1981zb S. Dimopoulos and H. Georgi, Nucl. Phys. B 193 (1981), 150. Petrov:2013nia A. A. Petrov and W. Shepherd, Phys. Lett. B 730 (2014), 178 [arXiv:1311.1511 [hep-ph]]. Berlin:2014cfa A. Berlin, T. Lin and L. T. Wang, JHEP 06 (2014), 078 [arXiv:1402.7074 [hep-ph]]. ATLAS:2019lff G. Aad et al. [ATLAS], Eur. Phys. J. C 80 (2020) no.2, 123 [arXiv:1908.08215 [hep-ex]]. Gianotti:2002xx F. Gianotti, M. L. Mangano, T. Virdee, S. Abdullin, G. Azuelos, A. Ball, D. Barberis, A. Belyaev, P. Bloch and M. Bosman, et al. Eur. Phys. J. C 39 (2005), 293 [arXiv:hep-ph/0204087 [hep-ph]]. FCC:2018bvk A. Abada et al. [FCC], Eur. Phys. J. ST 228 (2019) no.5, 1109. ATLAS:2022enb G. Aad et al. [ATLAS], JHEP 06 (2023), 016 [arXiv:2207.00230 [hep-ex]]. Staub:2015kfa F. Staub, Adv. High Energy Phys. 2015 (2015), 840780 [arXiv:1503.04200 [hep-ph]]. Porod:2003um W. Porod, Comput. Phys. Commun. 153 (2003), 275 [arXiv:hep-ph/0301101 [hep-ph]]. Porod:2011nf W. Porod and F. Staub, Comput. Phys. Commun. 183 (2012), 2458 [arXiv:1104.1573 [hep-ph]]. Alwall:2011uj J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer and T. Stelzer, JHEP 06 (2011), 128 [arXiv:1106.0522 [hep-ph]]. Sjostrand:2014zea T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen and P. Z. Skands, Comput. Phys. Commun. 191 (2015), 159 [arXiv:1410.3012 [hep-ph]]. deFavereau:2013fsa J. de Favereau et al. [DELPHES 3], JHEP 02 (2014), 057 [arXiv:1307.6346 [hep-ex]]. Conte:2012fm E. Conte, B. Fuks and G. Serret, Comput. Phys. Commun. 184 (2013), 222 [arXiv:1206.1599 [hep-ph]]. ParticleDataGroup:2022pth R. L. Workman et al. [Particle Data Group], PTEP 2022 (2022), 083C01 CMS:2012feb S. Chatrchyan et al. [CMS], JINST 8 (2013), P04013 [arXiv:1211.4462 [hep-ex]]. FCC:2018byv A. Abada et al. [FCC], Eur. Phys. J. C 79 (2019) no.6, 474.
http://arxiv.org/abs/2307.03882v1
20230708024835
The Busboy Problem: Efficient Tableware Decluttering Using Consolidation and Multi-Object Grasps
[ "Kishore Srinivas", "Shreya Ganti", "Rishi Parikh", "Ayah Ahmad", "Wisdom Agboh", "Mehmet Dogar", "Ken Goldberg" ]
cs.RO
[ "cs.RO" ]
The Busboy Problem: Efficient Tableware Decluttering Using Consolidation and Multi-Object Grasps Kishore Srinivas^1, Shreya Ganti^1, Rishi Parikh^1, Ayah Ahmad^1, Wisdom Agboh^1,2, Mehmet Dogar^2, Ken Goldberg^1 ^1The AUTOLab at UC Berkeley (automation.berkeley.edu). ^2University of Leeds, UK. We present the "Busboy Problem": automating an efficient decluttering of cups, bowls, and silverware from a planar surface. As grasping and transporting individual items is highly inefficient, we propose policies to generate grasps for multiple items. We introduce the metric of Objects per Trip (OpT) carried by the robot to the collection bin to analyze the improvement seen as a result of our policies. In physical experiments with singulated items, we find that consolidation and multi-object grasps result in a 1.8x improvement in OpT, compared to methods without multi-object grasps. See https://sites.google.com/berkeley.edu/busboyproblem for code and supplemental materials. § INTRODUCTION The post-meal task of clearing a dining table, commonly referred to as "bussing," requires moving cups, bowls, and utensils that are dispersed across the surface into a bin or tray to be cleaned in the kitchen. This is a common task that occurs after any event involving food service and dish collection, from daily household meals to casual picnics to formal cocktail parties and dinners. Automating this tedious and repetitive task could reduce fatigue and busy work for the skilled waiters who typically perform it. We define the "Busboy Problem" as the efficient transfer of cups, bowls, and utensils (collectively called tableware) from the table into a designated collection bin while minimizing the time required for completion. This is an interesting problem for automation because the tableware items are of varying shapes, requiring low-level planning to execute grasps and high-level planning to consolidate tableware for efficient transport. Even small inaccuracies can lead to toppling or dropping delicate and expensive tableware, so the system must be extremely reliable. Previous work in multi-object grasping, object manipulation, and grasp candidate generation highlights the efficiency of grasping pre-stacked objects as well as objects manually oriented for multi-object grasps <cit.>. Whereas these works explore situations where objects are already positioned for such grasps, our work investigates methods of stacking and clustering objects into these favorable positions for multi-object grasps. In this paper, we present a framework and algorithms for the Busboy Problem. We consider a scenario where multiple items are placed on a work surface (see Fig. <ref>), under an RGBD camera. We use the concept of multi-object grasping, which enables the robot to move multiple items simultaneously, thus reducing the number of pick-and-place actions needed. This paper makes the following contributions: * Formulation of the Busboy Problem. * Action primitives for rearranging and grasping cups, bowls, and utensils. * Two algorithms that leverage consolidation and multi-object grasps. * Experimental results indicating a 1.8x improvement in OpT.
§ RELATED WORK §.§ Multi Object Grasping Prior work on multi-object grasping includes different grasping techniques to facilitate multi-object grasps <cit.>, detecting the number of objects in a grasp <cit.>, decluttering surfaces <cit.>, and multi-object grasping to place objects in virtual reality <cit.>. Yamada et al. considered the simplified multi-object grasping problem, where the objects are already in a configuration where they can be grasped at once <cit.>. Agboh et al. <cit.> showed that friction can increase picks per hour for convex polygonal objects. Some prior work has focused on the design of grippers for multi-object grasping. Jiang et al. <cit.> proposed a vacuum gripper with multiple suction cups, while Nguyen et al. <cit.> proposed a soft gripper based on elastic wires for multi-object grasping. Object stacking <cit.> has the potential to improve the number of objects per trip. We take inspiration from these works to include a stacking primitive. §.§ Pulling Prior work by Berretty et al. has examined the use of inside-out pulling to orient convex polygonal parts <cit.>. We utilize a similar technique for circular cups and bowls. Furthermore, a planner for ensuring convergence to the final pose of pulling trajectories is proposed by Huang et al. <cit.>, where they examine the motion of planar objects undergoing quasi-static movement. §.§ Grasp Candidates Satish et al. discuss using a synthetic data sampling distribution that combines grasps sampled from the policy action set with guiding samples from a robust grasping supervisor to construct grasp candidates <cit.>. Additionally, Mahler et al. <cit.> discuss the use of energy-bounded caging to evaluate grasp candidates. They efficiently compute candidate rigid configurations of obstacles that form energy-bounded cages of an object, where the generated push-grasps are robust to perturbations. Mousavian et al. describe the process of using a variational autoencoder to generate grasps by mapping the partial point cloud of an observed object to a diverse set of grasps for the object <cit.>. Because of the relative simplicity of our setup, we found that an analytical approach to constructing grasp candidates is sufficient. In the case of bowls and cups, we sample a random point uniformly on the rim and then orient the gripper perpendicular to the tangent of the circle at that point. In the case of utensils, we identify the axis of the utensil and pick the highest-depth point along that line, with the gripper perpendicular to the axis.
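As an illustration of this analytical rule for circular dishes, here is a minimal geometric sketch (our own, with hypothetical function names; positions and radii in metres):

```python
import numpy as np

def rim_grasp_candidate(center_xy, radius, rng):
    """Sample a grasp on a circular dish: a uniform point on the rim, with
    the gripper yaw perpendicular to the rim tangent (i.e. radial)."""
    phi = rng.uniform(0.0, 2.0 * np.pi)
    point = center_xy + radius * np.array([np.cos(phi), np.sin(phi)])
    yaw = phi  # radial direction is perpendicular to the tangent at `point`
    return point, yaw

rng = np.random.default_rng(0)
point, yaw = rim_grasp_candidate(np.array([0.30, 0.20]), radius=0.085, rng=rng)
```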
§.§ Object Manipulation in Cluttered Environments Efficiently finding object manipulation plans in high-dimensional environments with a large number of objects is a challenging problem. Hasan et al. <cit.> addressed this problem by identifying high-level manipulation plans in humans, and transferring these skills to robot planners. Other work by Tirumala et al. <cit.> used tactile sensing to singulate layers of cloth from a stack. Different from these works, our goal in the cluttered environment is to bring objects together, or stack them, to enable multi-object grasps. § THE BUSBOY PROBLEM The Busboy Problem involves the task of decluttering a workspace containing cups, bowls, and utensils, with the objective of minimizing both the time and the number of trips required for completion. §.§ Assumptions In the initial configuration, a planar workspace is defined in a Cartesian grid (x, y) and has n_c cups, n_b bowls, and n_u utensils scattered across its surface. All items are assumed to be face up, visible by the camera, and within a workspace defined by the constraints of the robot arm. These items may be initially stacked on top of one another or resting individually on the surface, and we assume that the initial state meets the following criteria: * All items are of known dimensions, and cups and bowls are circular when viewed from top-down. Cups have radius 4.5cm, bowls have radius 8.5cm, and utensils are at most 17cm × 1.8cm. * Cups and bowls are upright, and utensils are laid flat on the surface. * Any stacks that exist are stable, such that r_0 ≥ r_1 ≥ ... ≥ r_s, where r_0 represents the radius of the vertically lowest item, and r_s the highest one. * Initially, no two items are touching (items are singulated). §.§ State We use cups, bowls, and utensils (forks and spoons), collectively called "tableware", in this work. Each cup and bowl has a position [x, y], and each utensil has a position [x, y] and orientation θ. § DECLUTTERING TABLEWARE §.§ Action primitives We propose to use a combination of manipulation primitives to solve the Busboy Problem. We specifically propose to use single object grasps, multi-object grasps, pull-grasps, and stack-grasps to efficiently clear a work surface of items (Figure <ref>). §.§.§ Grasp We use both single and multi-object grasps in this work. Let u⃗_G be the grasp to pick up objects, single or multiple. We represent this action as: u⃗_G = [p⃗_G, θ_G] where p⃗_G = [x_G, y_G, z_G] is the center point of the grasp, and θ_G is the grasp orientation. §.§.§ Pull-Grasp A pull-grasp action involves two steps: a pull of one object to another, then a multi-object grasp of both objects. We represent a pull action as: u⃗_P = [p⃗_S, θ_S, p⃗_E, θ_E] where p⃗_S = [x_S, y_S, z_S] is the pull start point, θ_S is the gripper orientation at the start point, p⃗_E = [x_E, y_E, z_E] is the pull end point, and θ_E is the gripper orientation at p⃗_E. For circular objects such as bowls and cups, the gripper pulls outwards from the center of the dish using an internal pull, and for utensils, the gripper cages the utensil around its center point while moving it (Figure <ref>). Then, we denote a pull-grasp action as: u⃗_PG = [u⃗_P, u⃗_G] §.§.§ Stack-Grasp A stack-grasp action involves two steps: a stack of one object onto another, then a multi-object grasp of both objects. We represent a stack action as: u⃗_S = [u⃗_G_i, p⃗_L, θ_L] where u⃗_G_i is a grasp on the lifted object, p⃗_L = [x_L, y_L, z_L] is the placement point on the stationary object, and θ_L is the gripper orientation at p⃗_L. Then, we denote a stack-grasp action as: u⃗_SG = [u⃗_S, u⃗_G]
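For concreteness, here is a minimal sketch (our own rendering, with hypothetical field names) of these action primitives as plain data containers:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Grasp:                 # u_G = [p_G, theta_G]
    p: Tuple[float, float, float]   # grasp center point (x, y, z)
    theta: float                    # gripper orientation

@dataclass
class Pull:                  # u_P = [p_S, theta_S, p_E, theta_E]
    p_start: Tuple[float, float, float]
    theta_start: float
    p_end: Tuple[float, float, float]
    theta_end: float

@dataclass
class PullGrasp:             # u_PG = [u_P, u_G]
    pull: Pull
    grasp: Grasp

@dataclass
class StackGrasp:            # u_SG = [u_S, u_G], with u_S = [u_G_i, p_L, theta_L]
    lift: Grasp                     # grasp on the lifted object
    p_place: Tuple[float, float, float]
    theta_place: float
    grasp: Grasp                    # final multi-object grasp of the stack
```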
§.§ Determining allowable actions §.§.§ Grasp A single-object grasp is always allowable. We can safely assume this since any dish or stack of items is already top-down graspable. When no other actions are allowed, the single-object grasp action is used as a default to clear the workspace. A multi-object grasp is allowable when the grasp heights of both items are similar (within an adjustable threshold value) and the lateral distance between the grasp points of both items is less than the width of the gripper. If the grasp heights of the items are significantly different, the gripper will have to either collide with the taller dish while attempting to grasp the shorter dish or grasp only the taller dish to avoid the collision, and either case results in a failure to grasp multiple items at once. Similarly, if the items are separated by more than the maximum inside width of the grippers, an attempt to grasp both at the same time will fail. §.§.§ Pull A pull of two items is allowable if a multi-object grasp can be executed on those items and if no other objects lie between the two items on the workspace. We disallow pull actions on items for which a multi-object grasp cannot be executed, since the pull becomes a wasted action. We also disallow pull actions on items with other objects between them, to ensure that the intermediate objects are not displaced in a non-deterministic manner. §.§.§ Stack A stack of dish d_a with radius r_a onto dish d_b with radius r_b is allowable if r_a ≤ r_b. This means that a cup can be stacked onto a bowl, but not vice versa, and that a utensil can be stacked onto any other dish, including another utensil. This ensures that the stack stability assumption present at the initial state remains valid after each action.
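These rules translate directly into simple predicates; the sketch below is our own (the gripper width comes from the experimental setup later in the paper, and `height_tol` stands in for the adjustable height threshold):

```python
GRIPPER_WIDTH = 0.085  # max gripper opening w = 8.5 cm, in metres

def multigrasp_allowable(z_a, z_b, lateral_dist, height_tol=0.02):
    """Similar grasp heights, and grasp points closer than the gripper width."""
    return abs(z_a - z_b) <= height_tol and lateral_dist < GRIPPER_WIDTH

def pull_allowable(can_multigrasp, n_objects_between):
    """A pull is only useful if a multi-object grasp follows, and it must
    not sweep through intermediate objects."""
    return can_multigrasp and n_objects_between == 0

def stack_allowable(r_lifted, r_base):
    """Cup onto bowl: yes; bowl onto cup: no; utensil onto anything: yes."""
    return r_lifted <= r_base
```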
§.§ Robustness of action primitives We design the three action primitives to execute robustly, as follows. §.§.§ Grasp When executing a grasp at location x, y, z, the robot will open its grippers centered around x, y, and then move down to the appropriate height, as measured by the depth sensor, before closing the gripper to grasp the object. The affordances granted by the max gripper opening, gripper height, and gripper width mean that an off-center grasp point x, y, z will still successfully complete the single-object or multi-object grasp of the object (Figure <ref>). §.§.§ Pull For cups and bowls, the gripper pulls outwards from the center of the dish, contacting the inner surface of the dish (Figure <ref>). This action is successful as both r_b and r_c are larger than the width of the gripper when closed. If the gripper is anywhere within the opening of the object, it will be able to move the target object to a specified location. For utensils, the gripper cages the utensil around its center point while moving it, preventing unwanted rotation and moving the utensil to its specified location. §.§.§ Stack For bowls and cups, the top lip radius is larger than the radius of the base, giving the sides a taper. Because a dish d_a is only stacked onto another dish d_b of equal or larger size, the base radius of d_a is guaranteed to be smaller than the top radius of d_b, allowing the tapered sides of the items to funnel d_a into place even if there is a slight error in the placement of the dish. Placing a utensil onto a bowl is extremely robust to error because of the relative radii of the items, and placing a utensil onto another utensil is robust due to the curvature of the utensils themselves, which slides a misplaced utensil into place, making them naturally conducive to stacking. §.§ Policies §.§.§ Pull Policy The pull policy combines Pull-Grasp and Grasp actions. From the initial scene, it checks if any multi-object grasps can be executed right away, and executes those first. Then, it runs the Pull-Grasp action for all remaining items, pulling together items that do not cause collisions and executing multi-object grasps to clear them from the workspace. If any items remain after all possible multi-object grasps are executed, those items are cleared with single-object Grasp actions. After each action, a new image of the workspace is taken and the state representation is updated to reflect the new state of the workspace, including any tableware that has been moved or left behind by the previous action. This policy is formalized in Algorithm <ref>. §.§.§ Stack Policy The stack policy combines Stack-Grasp and Grasp actions. It repeatedly executes the Stack-Grasp action to clear the workspace, and if there are any remaining items they are cleared with single-object Grasp actions. It prioritizes stacking utensils onto bowls and transporting them to the bin, and then tries to stack the remaining dishes. Stacking utensils first is an efficient way to improve the OpT for this policy. The policy is formalized in Algorithm <ref>. After utensils are cleared, the stacks created by this policy are limited to a combination of at most 2 existing stacks (i.e., once a Stack action is executed, the next action is necessarily a Grasp on the resulting stack, not another Stack action onto that stack). This is because when 4 or more bowls or cups are stacked, the height difference between the lip of the top dish and the lip of the bottom dish exceeds the height of the gripper jaws, causing many attempted grasps to fail. By limiting stacks to at most 2 existing stacks, we significantly reduce the chances of creating a stack with more than 3 dishes. § EXPERIMENTS AND RESULTS We evaluate through physical experiments the robustness of the pulling action primitive and then evaluate the pull and stack policies on a real-world table clearing task. §.§ Experimental Setup We use a UR5 robot arm with a Robotiq 2F-85 gripper and an Intel RealSense 455D RGBD camera mounted 83cm above the workspace. The workspace is a flat 78cm × 61cm surface with 4 cups, 4 bowls, and 4 utensils, n_b = n_c = n_u = 4. In our experimental setup, we calculated a max gripper opening of w = 8.5cm, a gripper height of h = 4.5cm, bowl radius r_b = 8.5cm, cup radius r_c = 4.5cm and utensil width r_u = 1.8cm. We identify and locate tableware on the workspace with a vision pipeline. Since the surface of the workspace is white, we use darker colored tableware to be easily visible. To locate cups and bowls, we first use edge detection, contour forming, and HoughCircles to identify circular shapes on the workspace, then filter these circles based on the known image radius of cups and bowls. We cluster these circles by their centers and remove circles that overlap beyond a specified threshold, allowing an unambiguous detection of cups and bowls. To locate utensils, we use edge detection and contour forming, and then filter out the contours that are too "square", as determined by the aspect ratio of the identified contour. We draw an imaginary line through the lengthwise center of the bounding rectangle of the contour, and sample depth values along that line; we use the highest depth point as the grasp point of the utensil to allow the gripper maximum clearance with the surface.
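A minimal OpenCV sketch of the circle-detection stage of this pipeline follows (our own illustration; the Hough parameters and pixel radii are hypothetical stand-ins, not the paper's calibrated values, and the overlap clustering step is omitted for brevity):

```python
import cv2
import numpy as np

def detect_dishes(bgr_image, r_cup_px=40, r_bowl_px=75, tol_px=6):
    """Find cups and bowls as circles and classify them by known image radius."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT,
                               dp=1.2, minDist=2 * r_cup_px,
                               param1=120, param2=40,
                               minRadius=r_cup_px - tol_px,
                               maxRadius=r_bowl_px + tol_px)
    cups, bowls = [], []
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            if abs(r - r_cup_px) <= tol_px:
                cups.append((x, y))
            elif abs(r - r_bowl_px) <= tol_px:
                bowls.append((x, y))  # radii between the two classes are dropped
    return cups, bowls
```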
We define three tiers to evaluate the performance of our algorithm on scenes of increasing complexity. * Tier 0: scenes contain 6 items, either all cups, all bowls, or all utensils, with no stacks in the initial state. * Tier 1: scenes contain 4 items each of cups, bowls, and utensils, and have no stacks in the initial state. * Tier 2: scenes contain 4 items each of cups, bowls, and utensils, but we allow stacks of at most 3 objects in the initial state. For Tier 2, we limit initial stacks to at most 3 objects because of the dimensions of the gripper, as mentioned in Section <ref>. The number of objects in a stack, and not the actual dimensions of individual dishes, is the main limiting factor for the grasp, because we grasp dishes from the rim. The dishes could actually be much larger and still be graspable as long as the walls are thin enough to allow the gripper to slide over them, and the weight of the dish does not exceed the payload limitations of the gripper itself. We limit ourselves to a small set of known kitchenware objects for consistency in our experiments. We evaluate the performance of the pull and stack policies against a baseline single-item policy, referred to as "Random" in Table <ref>. This policy picks a dish at random, and if the dish is a cup or bowl, it uniformly samples a point on the rim and grasps the dish at that point. If the dish is a utensil, it identifies the grasp point of the utensil as described above and grasps the utensil at that point. This policy is stack-agnostic, so even in Tier 2, when there are stacks present in the initial state, it treats each item in the stack as its own object and clears the stack by transporting one item at a time. §.§ Scene Generation In order to evaluate our policies, we generate multiple scenes at each tier, and every policy is run once on each scene. To generate each scene, we use the dimensions of the workspace (78cm × 61cm), and r_b, r_c, r_u for the dimensions of the objects. We randomly sample x, y locations within the scene for each object. If an object intersects with another object, we create a stack of the two objects if the maximum number of intersections has not been exceeded, and resample a position for the object if it has. Tiers 0 and 1 allow no such intersections, whereas Tier 2 allows 4 intersections. For each trial we manually reset the scene to maintain consistency.
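The following is a minimal sketch of such a scene generator (our own, under the stated workspace dimensions; the rejection/stacking logic is a simplified reading of the procedure above):

```python
import random

W, H = 0.78, 0.61                                  # workspace, metres
RADII = {"bowl": 0.085, "cup": 0.045, "utensil": 0.018}

def generate_scene(counts, max_intersections, rng=random.Random(0)):
    placed, used = [], 0
    for kind in ("bowl", "cup", "utensil"):        # larger items first: stable stacks
        for _ in range(counts[kind]):
            while True:
                x, y = rng.uniform(0, W), rng.uniform(0, H)
                hit = next((o for o in placed
                            if (o[1] - x)**2 + (o[2] - y)**2
                               < (RADII[o[0]] + RADII[kind])**2), None)
                if hit is None:
                    placed.append((kind, x, y))
                    break
                if used < max_intersections and RADII[kind] <= RADII[hit[0]]:
                    placed.append((kind, hit[1], hit[2]))  # stack on the hit item
                    used += 1
                    break
                # otherwise the intersection is not allowed: resample the position
    return placed

scene = generate_scene({"cup": 4, "bowl": 4, "utensil": 4}, max_intersections=4)  # a Tier 2 draw
```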
§.§ Evaluation We evaluated on 9 scenes at Tier 0 (3 scenes per type of dish), 3 scenes at Tier 1, and 3 scenes at Tier 2. A trial is one execution of one policy on one scene, so we have a total of (9+3+3)*3 = 45 trials. For each trial, we record the time in seconds to clear the table, the OpT, and the number of failures. A failure occurs when the robot is unable to move all items to the collection bin, either because of a perception failure that leaves items behind on the workspace or a policy failure that drops a dish off the workspace. We report our results in Table <ref>. To evaluate the performance of our policies in a more realistic scenario, we present the theoretical improvement in execution time when the bin is placed further away from the workspace, as might be seen in a home or professional kitchen. Given the physical limitation of the UR5 arm length, we simulated the lengthened distance by adding time delays of 3 and 5 seconds in both directions of motion (to and from the collection bin). We find that moving the bin further away causes the stack and pull policies to perform significantly better than the baseline policy, because motions to and from the bin are penalized, favoring policies with fewer total actions. We report these results in Table III in the appendix of the project website. § DISCUSSION Results show that using consolidation and multi-object grasps allows clearing the workspace efficiently, with the pull policy transporting at least 1.6x as many objects per trip, and the stack policy at least 1.8x. A discussion of the resulting execution time improvement is in the appendix of the project website. § LIMITATIONS AND FUTURE WORK An overhead RGBD camera gives a clear view only from the top, which affects state estimation and can lead to failures. We assume circular cups and bowls, which makes it easy to compute grasps. For more general dishes, advanced grasp generation methods will be needed. In future work, we will loosen the assumption of starting with singulated objects. We also hope to combine the pull and stack policies into a higher-level policy that can efficiently clear the workspace. § ACKNOWLEDGMENTS This research was performed at the AUTOLAB at UC Berkeley in affiliation with the Berkeley AI Research (BAIR) Lab, and the CITRIS "People and Robots" (CPAR) Initiative. The authors were supported in part by donations from Toyota Research Institute, Bosch, Google, Siemens, and Autodesk and by equipment grants from PhotoNeo, NVidia, and Intuitive Surgical. Mehmet Dogar was partially supported by an EPSRC Fellowship (EP/V052659).
http://arxiv.org/abs/2307.05705v1
20230711181230
Measure transfer via stochastic slicing and matching
[ "Shiying Li", "Caroline Moosmueller" ]
math.NA
[ "math.NA", "cs.NA", "math.ST", "stat.ML", "stat.TH", "65C20, 49Q22, 68T05, 60D05" ]
This paper studies iterative schemes for measure transfer and approximation problems, which are defined through a slicing-and-matching procedure. Similar to the sliced Wasserstein distance, these schemes benefit from the availability of closed-form solutions for the one-dimensional optimal transport problem and the associated computational advantages. While such schemes have already been successfully utilized in data science applications, not too many results on their convergence are available. The main contribution of this paper is an almost sure convergence proof for stochastic slicing-and-matching schemes. The proof builds on an interpretation as a stochastic gradient descent scheme on the Wasserstein space. Numerical examples on step-wise image morphing are demonstrated as well. Keywords: measure transfer, stochastic iterative scheme, optimal transport, sliced Wasserstein distance 2020 Mathematics Subject Classification: 65C20, 49Q22, 68T05, 60D05 Measure transfer via stochastic slicing and matching Shiying Li Caroline Moosmueller § INTRODUCTION Optimal transport and the Wasserstein distance have gained widespread interest in the machine learning community, as they provide a natural framework for dealing with data sets consisting of point clouds or measures. Some areas of application include generative models <cit.>, (semi-supervised) learning <cit.>, signal processing <cit.>, and imaging <cit.>. The optimal transport problem seeks to find the best way (in the sense of cost-minimizing) to transport one measure into another <cit.>. In the Monge formulation, one tries to seek a map T which transports the original measure σ to the target measure μ (i.e. T_♯σ = μ) and minimizes the cost W_2^2(σ,μ):=min_T: T_♯σ = μ∫_ℝ^n‖T(x)-x‖^2 dσ(x). Here W_2 is the 2-Wasserstein distance and the argmin is the optimal transport map from σ to μ. Note that in this set-up, a solution to (<ref>) may not exist; <cit.> provides conditions for existence (such as absolute continuity of σ), and Kantorovich <cit.> relaxed this framework, seeking joint distributions rather than maps (see <Ref> for more details). We introduce the Monge setting here, as the main focus of this paper is studying transportation maps which mimic certain behaviors of the optimal transport map. We also mention that (<ref>) is a special case of p-Wasserstein distances, and the Euclidean distance can also be replaced by more general cost functions <cit.>. While very successful in applications, optimal transport can be computationally expensive, especially in high dimensions. The problem (<ref>) becomes a linear program of computational order O(n^3log(n)). For this reason, there is interest in approximation schemes both for the Wasserstein distance as well as for the optimal transport map. A well-studied approach is entropic regularized optimal transport (Sinkhorn distances) <cit.>, which significantly reduces the computational cost by using matrix scaling algorithms <cit.>. Other approximation schemes take advantage of particular properties of the underlying set of measures; linearized optimal transport, for example, uses linear distances in a tangent space, which approximate the Wasserstein distance if the set of measures has an almost flat structure <cit.>.
In this paper, we are interested in a particular type of approximation, namely sliced Wasserstein distances <cit.>, which make use of the fact that Wasserstein distances in one dimension can be computed easily through the cumulative distribution functions (CDFs). More concretely, we are interested in the slicing idea underlying these distances, which can be used to construct transport maps. These maps, in turn, give rise to iterative schemes for measure approximation. We introduce these ideas in the next section. §.§ Sliced Wasserstein distances and iterative approximation schemes The sliced Wasserstein distance between two measures σ, μ is given by SW_2^2(σ,μ) = ∫_S^n-1 W_2^2(σ^θ,μ^θ) du(θ), where μ^θ is the one-dimensional measure defined by μ^θ =(𝒫_θ)_♯μ with 𝒫_θ(x)= ⟨ x,θ⟩ = x·θ the projection onto the unit vector θ. Here u denotes the uniform probability measure over S^n-1. The Wasserstein distance W_2 under the integral is between one-dimensional measures, and can therefore be computed by the CDFs of σ^θ and μ^θ (no optimization necessary). Due to its simplicity, the sliced Wasserstein distance is often used as a replacement for the full Wasserstein distance, and has been successful in many applications, such as texture mixing and barycenters <cit.>, shape retrieval <cit.>, neural style transfer <cit.> and radiomics studies <cit.>. The sliced Wasserstein distance has also been extended to different settings, such as unbalanced and partial transport problems <cit.> and generalized slicing <cit.>. Closely related to sliced Wasserstein distances is the idea of using slices to define transport maps. In the simplest setting <cit.>, one chooses a line θ∈ S^n-1, and defines T_σ,μ;θ(x) := x + (T_σ^θ^μ^θ(x·θ)-x·θ) θ, where T_σ^θ^μ^θ is the one-dimensional optimal transport map between the slices σ^θ,μ^θ, which can again be explicitly computed through the CDFs. Similarly, one can use multiple directions θ_i to define a map in the spirit of (<ref>). For example, <cit.> uses an orthogonal matrix P= [θ_1,…,θ_n] and defines T_σ,μ;P(x) := x + P[ T_σ^θ_1^μ^θ_1(x·θ_1)-x·θ_1; T_σ^θ_2^μ^θ_2(x·θ_2)-x·θ_2; ⋮; T_σ^θ_n^μ^θ_n(x·θ_n)-x·θ_n ].
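As an illustration of these definitions for empirical measures, here is a minimal numpy sketch (our own) of the Monte Carlo sliced Wasserstein estimate and the single-slice map (<ref>), assuming two point clouds of equal size with uniform weights, in which case the one-dimensional optimal transport map reduces to matching sorted projections:

```python
import numpy as np

def sliced_w2(X, Y, n_dirs=200, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of SW_2 between equal-size point clouds."""
    thetas = rng.normal(size=(n_dirs, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)  # uniform on S^{n-1}
    sq = [np.mean((np.sort(X @ th) - np.sort(Y @ th))**2) for th in thetas]
    return np.sqrt(np.mean(sq))

def slice_map(X, Y, theta):
    """T_{sigma,mu;theta}: move each point along theta by its 1D OT displacement."""
    px, py = X @ theta, Y @ theta
    t = np.empty_like(px)
    t[np.argsort(px)] = np.sort(py)          # 1D OT map via quantile matching
    return X + np.outer(t - px, theta)

X = np.random.default_rng(1).normal(size=(500, 2))
Y = X + np.array([3.0, 0.0])                 # a pure translation
print(sliced_w2(X, Y))                       # roughly 3/sqrt(2) in two dimensions
```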
While (<ref>) and (<ref>) are not necessarily the optimal transport map between σ and μ, they can be used in approximation schemes. Similar to efforts in <cit.>, the Knothe-Rosenblatt rearrangement <cit.>, and no-collision transport maps <cit.>, these maps can be used as an easy-to-compute replacement for the actual optimal transport map or to approximate μ itself. To this end, <cit.> suggested to iteratively apply maps of the form (<ref>) to σ, with the aim of approximating μ in some distance (<cit.> studies KL-divergence, we are concerned with the Wasserstein and the sliced Wasserstein distance). In particular, <cit.> proposes the following iterative scheme: Choose a sequence of orthogonal matrices {P_k}⊂ O(n) and let σ_0=σ. Then σ_k+1 = (T_σ_k,μ;P_k)_♯σ_k, k≥ 0. The main motivation in <cit.> is color transfer of images, but this idea generalizes to a myriad of applications, including texture mixing <cit.> and shape retrieval <cit.>, barycenter problems <cit.>, sampling <cit.>, generative modeling with normalizing flows <cit.>, in addition to many interesting connections to gradient descent and Procrustes analysis <cit.> as well as various gradient flows of probability measures <cit.>, see <Ref>. To justify the use of iterative schemes of this type in data science applications, convergence results of the form σ_k →μ as k→∞ are crucial. Currently, not too many results in this direction are available: <cit.> shows convergence of (<ref>) in KL-divergence, when both σ and μ are Gaussians. These results were further refined by <cit.>. In addition, <cit.> shows convergence of estimators in the Wasserstein distance, i.e. when the number of samples goes to infinity, assuming that the number of iterations k is large enough (suitable sizes of k depend on the dimension of the space). The present manuscript aims at contributing towards these efforts by providing a rigorous convergence proof for a variant of the scheme (<ref>), which we introduce in the next section. §.§ Stochastic iterative approximation schemes In this paper, we are interested in a stochastic version of the scheme (<ref>), namely σ_k+1 = ((1-γ_k)id+γ_kT_σ_k,μ;P_k)_♯σ_k, k≥ 0, where γ_k ∈ [0,1] is a sequence satisfying the classical gradient descent assumptions ∑_k=0^∞γ_k = ∞ and ∑_k=0^∞γ_k^2 < ∞. Stochasticity is obtained by choosing P_k as i.i.d. samples from the Haar measure on O(n). We mention that the original scheme (<ref>) does not fall into this class of iteration schemes, since the constant step-size γ_k=1 does not satisfy the assumptions. While our convergence results, as outlined in <Ref>, hold for a large class of iteration schemes, they do not hold for (<ref>). The version (<ref>) was first studied in <cit.>, where the main focus was on applications and Wasserstein barycenters. Further significant contributions were made in <cit.>, as will be outlined throughout this manuscript. These papers furthermore observe that (<ref>) can be interpreted as a stochastic gradient descent scheme for a loss function closely related to L(σ) = 1/2 SW_2^2(σ,μ). The main focus of <cit.> is point cloud data, which translates the iteration (<ref>) into a stochastic gradient descent scheme on ℝ^N. Our paper is concerned with measures and therefore uses stochastic gradient descent schemes in the Wasserstein space, building on the recent results of <cit.>. We summarize our main contribution in the next section. §.§ Main contribution The main contribution of this paper is an almost sure (a.s.) convergence proof for the stochastic iterative scheme (<ref>), when σ and μ are measures rather than point clouds and slices are i.i.d. drawn from the Haar measure on O(n). The proof uses stochastic gradient descent on the Wasserstein space with a modified version of the loss (<ref>). Our result is motivated by the observations in <cit.> on point cloud data and the proof techniques of the recent paper <cit.>, which studies stochastic gradient descent schemes and population barycenters in the Wasserstein space. In particular, we derive the following result: Consider two measures σ_0,μ over ℝ^n and let P_k i.i.d.∼ u_n, k≥ 0, where u_n is the Haar probability measure on O(n). Define σ_k by the iteration (<ref>) using σ_0,μ and P_k. Then, under some technical assumptions, σ_k →μ in the sliced Wasserstein distance and σ_k →μ in the Wasserstein distance, a.s. as k→∞. In fact, we show an a.s. convergence result for any "inbetween" slicing scheme using 1≤ j≤ n slices, and <Ref> is a special case with n slices. Our result also includes the "single-slice scheme", which uses the map T_σ,μ;θ from (<ref>) in the iteration with i.i.d. samples θ_k drawn from the uniform probability measure on S^n-1 (<Ref>). The main reason the proof techniques of <cit.> are useful for our setting is a reformulation to a special type of barycenter problem, as we outline in the proof of <Ref>.
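To make the scheme concrete in the empirical setting, here is a minimal point-cloud sketch of the stochastic iteration (our own; it assumes equal-size clouds with uniform weights, so each one-dimensional transport map reduces to sorted matching, and uses scipy's Haar sampler on O(n)):

```python
import numpy as np
from scipy.stats import ortho_group

def matched_positions(X, Y, P):
    """Apply T_{sigma_k, mu; P} to the cloud X: match each slice by sorting."""
    Xr, Yr = X @ P, Y @ P                 # columns of P are the slices theta_i
    T = np.empty_like(Xr)
    for j in range(X.shape[1]):
        T[np.argsort(Xr[:, j]), j] = np.sort(Yr[:, j])   # 1D OT map per slice
    return T @ P.T                        # rotate back to original coordinates

def stochastic_slicing_matching(X, Y, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    for k in range(n_iter):
        gamma = 1.0 / (k + 2)             # sum gamma_k = inf, sum gamma_k^2 < inf
        P = ortho_group.rvs(X.shape[1], random_state=rng)  # Haar sample on O(n)
        X = (1 - gamma) * X + gamma * matched_positions(X, Y, P)
    return X
```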
§.§ Relation to gradient flows In the case of functions F:ℝ^n →ℝ, a gradient flow equation is of the form ẋ = - ∇ F (x). The implicit Euler scheme for (<ref>) can be reformulated as a “minimizing movement scheme” <cit.>, which can then be generalized to a scheme operating on measures rather than points in Euclidean space (replacing the Euclidean distance by, for example, the Wasserstein distance) <cit.>. This gives the W_2-gradient flow scheme <cit.>. In this paper we are interested in problems of the form min_σ∈𝒲_2(ℝ^n)ℱ(σ), which, if formulated for F:ℝ^n →ℝ, could be tackled by a gradient descent scheme x_n+1 = x_n -h ∇ F(x_n), i.e. by the explicit Euler scheme for (<ref>). For ℱ: 𝒲_2(ℝ^n) →ℝ, we consider the following version of (<ref>): σ_n+1 = (𝕀 -h ℱ^')_♯σ_n, where ℱ^' is a (formal) Fréchet derivative of ℱ <cit.>. This provides an interpretation of gradient descent on the Wasserstein space <cit.>, and shows the close relation of this idea to gradient flows. In the case of empirical measures, <cit.> shows that the iterative scheme (<ref>) can be interpreted as a kind of gradient descent scheme for the functional ℱ_P_k(σ) = ∑_i=1^n W_2^2(σ^θ_i^k, μ^θ_i^k) with orthogonal matrix P_k = [θ_1^k,⋯,θ_n^k] and step-size h=1. The functional, however, depends on the iteration variable k. To deal with this issue, <cit.> suggested to integrate over O(n), i.e. to consider the functional ℱ(σ) = ∫_O(n)∑_i=1^n W_2^2(σ^θ_i, μ^θ_i) dP, which is closely related to the loss L of (<ref>) and hence the sliced Wasserstein distance. <cit.> furthermore studies the gradient flow related to this functional. Motivated by the recent results in <cit.>, we give a stochastic gradient descent interpretation for the iterative scheme (<ref>) in terms of the functional (<ref>). While (<ref>) can be alternatively viewed as a batch gradient descent procedure with respect to the functional (<ref>), see <cit.>, a version of (<ref>) conveniently leads to a unifying stochastic gradient descent convergence analysis for a more general framework of iterative slice-matching schemes. Other work in this area includes: W_2-gradient flows for functionals defined using generalized sliced probability metrics <cit.>, for barycenter problems using functionals of the form ∫_𝒲_2(ℝ^n) W_2^2(σ,μ) dΠ(μ) for some Π <cit.>, for SW_2 with entropy functionals with applications in generative modeling <cit.>, and SW_2-gradient flows (replacing the Euclidean distance by the sliced Wasserstein distance) for general functionals <cit.>. §.§ Structure of the paper <Ref> summarizes the necessary preliminaries such as optimal transport, the Wasserstein distance and the slicing procedure, which can be used to define the sliced Wasserstein distances as well as “sliced” transport maps. In <Ref> we introduce a generalized form of the slice-matching maps of <cit.> and show some basic properties of these transports, including a connection to compatible maps as introduced in <cit.>. <Ref> then shows the relation of a generalized version of the iterative scheme (<ref>) to stochastic gradient descent on the Wasserstein space. The main result on a.s. convergence of this scheme is stated in <Ref>, along with a corollary showing convergence for the “single-slice” scheme, which is based on iterating (<ref>). The proofs are made available in <Ref>. The paper closes with numerical experiments on morphing images in <Ref>.
§ PRELIMINARIES §.§ Optimal transport preliminaries By 𝒫(ℝ^n) we denote the space of probability measures on ℝ^n, and by 𝒫_ac(ℝ^n) the space of absolutely continuous measures (with respect to the Lebesgue measure). We are furthermore interested in the quadratic Wasserstein space, which is the space of probability measures σ with finite second moment ∫_ℝ^n‖x‖^2 dσ(x)<∞. We denote this space by 𝒲_2(ℝ^n). We also define 𝒲_2,ac(ℝ^n)=𝒲_2(ℝ^n) ∩𝒫_ac(ℝ^n). There is a natural metric on 𝒲_2(ℝ^n), the quadratic Wasserstein metric, which is defined as W_2(σ,μ) := inf_π∈Γ(σ,μ)(∫_ℝ^2n‖x-y‖^2 dπ(x,y))^1/2, where Γ(σ,μ):={γ∈𝒫(ℝ^2n): γ(A×ℝ^n) = σ(A), γ(ℝ^n× A)=μ(A) for measurable A⊂ℝ^n} denotes the set of couplings (measures on the product space with marginals σ and μ). If σ∈𝒫_ac(ℝ^n), then the following optimization problem has a solution: min_T:T_♯σ=μ∫_ℝ^n‖T(x)-x‖^2 dσ(x), where T is a map in L^2(σ) and ♯ denotes the pushforward operation, T_♯σ(A)=σ(T^-1(A)) for A measurable. Furthermore, the optimal coupling (<ref>) has the form π = (id,T_σ^μ)_♯σ, where T_σ^μ is the (σ-a.e. unique) solution to (<ref>) <cit.>. The Wasserstein metric can then be written as W_2(σ,μ)= ‖T_σ^μ-id‖_σ :=( ∫_ℝ^n‖T_σ^μ(x)-x‖^2 dσ(x))^1/2. The optimal transport map T_σ^μ can be written as the gradient of a convex function, i.e. T_σ^μ=∇φ with φ convex <cit.>. We call a map S, which is the gradient of a convex function (but not necessarily the optimal transport map between two measures), a Brenier map. In the case of one-dimensional measures, the optimal transport map (and the Wasserstein distance) can be explicitly computed: For σ,μ∈𝒫(ℝ) and σ∈𝒫_ac(ℝ), we get T_σ^μ = F_μ^-1∘ F_σ, where F_σ: ℝ→ [0,1] is the cumulative distribution function (CDF) of σ, defined by F_σ(x)=σ((-∞,x]). Here F_μ^-1 denotes the pseudo-inverse F_μ^-1(y)=min{z:F_μ(z)≥ y}. Then we get W_2(σ,μ) =( ∫_0^1 |F_μ^-1(x)-F_σ^-1(x)|^2 dx)^1/2. Note that the assignment σ↦ F_σ^-1 is an isometry to the space L^2((0,1)) with the linear L^2 metric. In this paper, we deal with probability measures both on ℝ^n and on ℝ, and we denote the Wasserstein distance and optimal transport maps between such measures with the same symbol, independent of the dimension of the measures. It will be clear from context if the measures are on ℝ^n or ℝ. §.§ Sliced Wasserstein distances The computation of the Wasserstein distance (<ref>) can be expensive, in particular in high dimensions (solving the linear program (<ref>) for empirical measures with N support points leads to a cost of order O(N^3log(N))). Therefore, there is interest in approximations of the Wasserstein distance which can be computed more efficiently. A well-studied class of approximations are the Sinkhorn distances <cit.>, which add a regularization term to the linear program (<ref>). The resulting problem can then be solved with matrix scaling algorithms <cit.>. In this paper we are interested in sliced Wasserstein distances, which make use of the fact that the Wasserstein distance can be computed easily for one-dimensional measures, see (<ref>). The main idea is to project σ, μ∈𝒫(ℝ^n) to a line parallel to θ, compute the one-dimensional distances between the projected measures, and then sum (or integrate) over all directions θ. We now introduce this idea formally: Let θ∈ S^n-1 and define the projection onto θ by 𝒫_θ: ℝ^n →ℝ, 𝒫_θ(x) = x ·θ = ⟨x,θ⟩. Denote the projection of σ∈𝒫(ℝ^n) by σ^θ:=(𝒫_θ)_♯σ. Then the (continuous) sliced Wasserstein distance between σ and μ is defined by SW_2^2(σ,μ) = ∫_S^n-1 W_2^2(σ^θ,μ^θ) du(θ), where integration is over the uniform measure u on S^n-1.
Note that the Wasserstein distance under the integral is between one-dimensional measures, hence can be computed by (<ref>). The sliced Wasserstein distance defines a metric. For a finite set {θ_i}_i=1^N, we can consider a discrete version of (<ref>): SW_2^2(σ,μ) ≈1/N∑_i=1^N W_2^2(σ^θ_i,μ^θ_i). In practice, the sliced Wasserstein distance can be computed in this way by drawing i.i.d. samples θ_i from the uniform distribution on S^n-1. We list some results on the relation between the sliced Wasserstein distance and the regular Wasserstein distance: Let θ∈ S^n-1 and σ,μ∈𝒫(ℝ^n) with σ∈𝒫_ac(ℝ^n). Then we have * W_2(σ^θ,μ^θ)≤ W_2(σ,μ), which implies SW_2(σ,μ) ≤ W_2(σ,μ). * If both σ and μ have compact supports, then W_2^2(σ,μ)≤ C_n SW_2(σ,μ)^1/(n+1), where C_n is a positive constant depending on the dimension n and the supports of the measures. The first part follows from the fact that the projection map 𝒫_θ is 1-Lipschitz, see e.g., Proposition 5.1.3 in <cit.> for details. The second part is a special case of Theorem 5.1.5 in <cit.>. Through the computation of Wasserstein distances between slices σ^θ_i,μ^θ_i, we also obtain one-dimensional optimal transport maps T_{σ^{θ_i}}^{μ^{θ_i}} (<ref>). As suggested in <cit.>, these can be stacked together to construct a map ℝ^n →ℝ^n, which is not necessarily the optimal transport map between σ and μ, but can be used to define an iterative approximation scheme. These types of schemes, and in particular their convergence, are the main topic of the present manuscript. We introduce them in <Ref>. § SLICE-MATCHING MAPS We are interested in the two types of slice-matching maps defined in <cit.>. In this section we present a unifying framework which is closely related to compatible maps <cit.>. Consider σ∈𝒲_2,ac(ℝ^n), μ∈𝒲_2(ℝ^n). * Single-slice matching map: Let θ∈ S^n-1. The map from σ to μ is defined by T_σ,μ;θ(x) = x + (T_{σ^θ}^{μ^θ}(x·θ)-x·θ) θ. * Matrix-slice matching map: Let {θ_1,⋯, θ_n}⊂ S^n-1 be an orthonormal set. The map from σ to μ is defined by T_σ,μ;P(x) = x + P[ T_{σ^{θ_1}}^{μ^{θ_1}}(x·θ_1)-x·θ_1; T_{σ^{θ_2}}^{μ^{θ_2}}(x·θ_2)-x·θ_2; ⋮; T_{σ^{θ_n}}^{μ^{θ_n}}(x·θ_n)-x·θ_n ], where P = [θ_1, ⋯, θ_n]. The following representation follows immediately from this definition: T_σ,μ;P(x) = ∑_i=1^n T_{σ^{θ_i}}^{μ^{θ_i}}(x·θ_i) θ_i. Similarly, if we choose an orthonormal set {θ_1,⋯, θ_n}⊂ S^n-1 with θ_1=θ, then T_σ,μ;θ(x) = T_{σ^θ}^{μ^θ}(x·θ)θ + ∑_i=2^n (x·θ_i) θ_i. This motivates the following unified framework: Let σ∈𝒲_2,ac(ℝ^n), μ∈𝒲_2(ℝ^n). Choose 1≤ j ≤ n and an orthonormal set {θ_1,⋯, θ_n}⊂ S^n-1. The j-slice matching map is defined by T_σ,μ;P^j(x) = ∑_i=1^j T_{σ^{θ_i}}^{μ^{θ_i}}(x·θ_i) θ_i + ∑_i=j+1^n (x·θ_i) θ_i, where P = [θ_1, ⋯, θ_n]. We remark on some important properties of the j-slice matching map: * Note that T_σ,μ;P^n = T_σ,μ;P and T_σ,μ;P^1 = T_σ,μ;θ_1, where θ_1 is the first column of P. * The map T_σ,μ;P^j is a Brenier map since it is the gradient of the convex function x↦∑_i=1^j F_i(x·θ_i) + 1/2∑_i=j+1^n (x·θ_i)^2, where F_i is the anti-derivative of T_{σ^{θ_i}}^{μ^{θ_i}}. Therefore, T_σ,μ;P^j = T_σ^μ if and only if (T_σ,μ;P^j)_♯σ = μ. * A computation shows that 𝒫_θ_ℓ∘ T_σ,μ;P^j = T_{σ^{θ_ℓ}}^{μ^{θ_ℓ}}∘𝒫_θ_ℓ for ℓ=1,…,j and 𝒫_θ_ℓ∘ T_σ,μ;P^j = 𝒫_θ_ℓ for ℓ=j+1,…,n. * Note that (3) implies: If ν = (T_σ,μ;P^j)_♯σ, then ν^θ_i=μ^θ_i for 1≤ i ≤ j. This property is crucial for the proof of our main result in <Ref> and is the motivation for the name slice-matching map.
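On point clouds, the j-slice matching map and its defining property (4) can be sketched as follows, reusing `ot_map_1d` and `haar_orthogonal` from the earlier snippets (again an illustration of ours, not code from the cited works).

```python
def j_slice_matching_map(X, Y, P, j):
    """T^j_{sigma,mu;P} on equal-size point clouds: match the first j slices
    and leave the remaining n-j coordinates in the basis P untouched."""
    X_new = X.copy()
    for theta in P.T[:j]:                  # first j columns of P
        s, t = X @ theta, Y @ theta
        X_new += np.outer(ot_map_1d(s, t) - s, theta)
    return X_new

# Property (4): for nu = (T^j)_# sigma, the first j slices of nu equal those of mu.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))
Y = rng.normal(size=(300, 3)) + 2.0
P = haar_orthogonal(3, rng)
nu = j_slice_matching_map(X, Y, P, j=2)
for i in range(2):
    assert np.allclose(np.sort(nu @ P[:, i]), np.sort(Y @ P[:, i]))
```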
The map T_σ,μ;P^j is a special type of a compatible map, as introduced in <cit.>: For a fixed P ∈ O(n), the set of compatible maps is defined by 𝔖(P)={x↦ P [ f_1((P^t x)_1); f_2((P^t x)_2); ⋮; f_n((P^t x)_n) ]: f_i:ℝ→ℝ is increasing}. Note that functions in this set can be written as ∑_i=1^n f_i(x·θ_i) θ_i with P=[θ_1,…,θ_n], which directly shows the relation to T_σ,μ;P^j. Direct verification shows that 𝔖(P) has a semi-group structure with composition as the group operation. Moreover, if each f_i associated with a compatible map given in (<ref>) has an inverse, then the compatible map has an inverse in 𝔖(P), for which the corresponding f_i's are simply replaced by their inverses. Note that the slice-matching maps of <Ref> can be inverted easily. Compatible maps have been identified as a set of maps which allow for linear separability of two classes of measures in a tangent space, see <cit.>. Results in these papers concerning supervised learning in the Wasserstein space therefore also hold for slice-matching maps. §.§ Iterative schemes via slice-matching maps Following <cit.>, we define an iterative scheme using the slice-matching maps of <Ref>: Let σ_0∈𝒲_2,ac(ℝ^n) and μ∈𝒲_2(ℝ^n). For k≥ 0, choose P_k ∈ O(n) and γ_k ∈ [0,1]. Fix 1≤ j ≤ n. Define σ_k+1= ((1-γ_k)id +γ_k T_σ_k,μ;P_k^j)_♯σ_k, k≥ 0. Note that for γ_k=1 and j=n, we obtain the original scheme of <cit.>, see (<ref>). We mention that our convergence results of <Ref> do not hold for this scheme, since γ_k=1 does not satisfy <Ref>. Results detailed in this section, however, hold for all schemes of <Ref>. We note that (1-γ_k)id +γ_k T_σ_k,μ;P_k^j is a Brenier map, since it is a convex combination of two gradients of convex functions. It is furthermore the optimal transport map from σ_k to σ_k+1. We show two results for the iterative scheme of <Ref>: Let σ_k ∈𝒲_2,ac(ℝ^n) and μ∈𝒲_2(ℝ^n). If γ_k=1 and P_k+1 = P_k, then the iterative scheme (<ref>) gives σ_k+2 = σ_k+1. We first note that σ_k+1^θ^k_ℓ = μ^θ^k_ℓ for ℓ=1,…,j, where P_k = [θ_1^k,…,θ_n^k]. Using <Ref>, we obtain σ_k+1^θ^k_ℓ = (𝒫_θ^k_ℓ∘ T_σ_k,μ;P_k^j)_♯σ_k = (T_{σ_k^{θ^k_ℓ}}^{μ^{θ^k_ℓ}})_♯σ_k^{θ^k_ℓ} = μ^θ^k_ℓ, ℓ=1,…,j. This implies T_{σ_{k+1}^{θ^k_ℓ}}^{μ^{θ^k_ℓ}} = id, ℓ=1,…,j, and therefore T_σ_k+1,μ;P_k^j(x) = ∑_ℓ=1^j T_{σ_{k+1}^{θ^k_ℓ}}^{μ^{θ^k_ℓ}}(x·θ^k_ℓ)θ^k_ℓ + ∑_ℓ=j+1^n (x·θ^k_ℓ)θ^k_ℓ = x, which implies σ_k+2 = σ_k+1. Note that for P_k+1=P_k and γ_k≠ 1, the scheme does not necessarily become stationary: σ_k+1^θ^k_ℓ = ((1-γ_k)𝕀 + γ_k T_{σ_k^{θ^k_ℓ}}^{μ^{θ^k_ℓ}})_♯σ_k^{θ^k_ℓ}, ℓ = 1,…, j. The iteration evaluates on the geodesic connecting the slices σ_k^{θ^k_ℓ} and μ^{θ^k_ℓ}, but is not necessarily equal to μ^{θ^k_ℓ}. The following Lemma is crucial for the proof of our main result, <Ref>, as it relates consecutive iterates of (<ref>) to a type of discrete sliced Wasserstein distance: Let σ_k ∈𝒲_2,ac(ℝ^n) and μ∈𝒲_2(ℝ^n). Let σ_k be defined through the iteration (<ref>). Then W_2^2(σ_k+1,σ_k) = γ_k^2 ∑_i=1^j W_2^2(σ_k^θ_i^k,μ^θ_i^k), where P_k = [θ_1^k,…,θ_n^k]∈ O(n). This result follows from direct computation: W_2(σ_k+1,σ_k)^2 = ‖T_σ_k^σ_k+1 - id‖^2_L^2(σ_k) = ∫_ℝ^n‖(1-γ_k)x +γ_k T_σ_k,μ;P_k^j(x) -x‖^2 dσ_k(x) = γ_k^2 ∫_ℝ^n‖∑_i=1^j(T_{σ_k^{θ^k_i}}^{μ^{θ_i^k}}(x·θ^k_i) -x·θ^k_i)θ^k_i ‖^2 dσ_k(x) = γ_k^2 ∫_ℝ^n∑_i=1^j(T_{σ_k^{θ^k_i}}^{μ^{θ_i^k}}(x·θ^k_i) -x·θ^k_i)^2 dσ_k(x) = γ_k^2 ∑_i=1^j ∫_ℝ (T_{σ_k^{θ^k_i}}^{μ^{θ_i^k}}(y) -y)^2 dσ_k^θ^k_i(y) = γ_k^2 ∑_i=1^j W_2(σ_k^θ_i^k,μ^θ_i^k)^2. Proposition 5.2.7 in <cit.> provides an analogous result for the case of discrete measures with finite supports and γ_k=1, j=n.
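The identity in the last lemma is easy to check numerically; the following snippet (ours) continues the point-cloud example above, using that the damped map (1-γ)id + γT^j is a Brenier map, so its L^2(σ_k) displacement cost equals W_2^2(σ_{k+1},σ_k).

```python
# Numerical check of W_2^2(sigma_{k+1}, sigma_k) = gamma^2 * sum_i W_2^2(slices):
gamma, j = 0.3, 2
T_X = j_slice_matching_map(X, Y, P, j)
X_next = (1 - gamma) * X + gamma * T_X

lhs = np.mean(np.sum((X_next - X) ** 2, axis=1))   # transport cost of the Brenier map
rhs = gamma ** 2 * sum(
    np.mean((np.sort(X @ P[:, i]) - np.sort(Y @ P[:, i])) ** 2) for i in range(j)
)
assert np.isclose(lhs, rhs)
```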
§ CONVERGENCE OF THE STOCHASTIC ITERATIVE SCHEME To illustrate the idea of the iterative schemes (<ref>) as a stochastic gradient descent procedure in the 2-Wasserstein space for a certain functional, we first highlight the single-slice matching case. We then show a general interpretation for the j-slice matching scheme (<ref>) as well as an a.s. convergence proof. To see the connection between the single-slice and matrix-slice schemes, we note the following relation between the uniform probability measure on the sphere S^n-1 and the Haar probability measure on the orthogonal group O(n): By an explicit geometric construction of the Haar measure (see e.g., <cit.>), a random orthogonal matrix P distributed according to u_n can be constructed by choosing the first column θ_1 as a random vector distributed according to the uniform probability measure u on S^n-1, and then choosing the subsequent columns according to the surface area measures on subsets of S^n-1 that are orthogonal to the previous columns. §.§ Stochastic gradient descent interpretation §.§.§ Stochastic single-slice matching When j=1, i.e., a single directional vector is chosen at each iteration, by <Ref>, the iterative scheme in <Ref> becomes: Given θ_k∈ S^n-1 and γ_k ∈ [0,1], σ_k+1= ((1-γ_k)id +γ_k T_σ_k,μ;θ_k)_♯σ_k, k≥ 0. When each θ_k is chosen independently according to u, the uniform probability measure on S^n-1, the above iterative scheme (<ref>) can be interpreted as a stochastic gradient descent for the following functional minimization problem: min_σ∈𝒲_2,ac(ℝ^n) L(σ), where L(σ):= 1/2 SW_2^2(σ,μ) = 1/2∫_S^n-1 W_2^2(σ^θ, μ^θ) du(θ). The unique minimizer is μ by the fact that SW_2 is a metric. Applying <Ref> with γ_k=1 and j=1, one can equivalently write (<ref>) as L(σ)= 1/2∫_S^n-1 W_2^2(σ,(T_σ,μ;θ)_♯σ) du(θ). The connection to the stochastic gradient descent method can be observed by computing the formal Fréchet derivative of L, following the ideas in <cit.>. We first note that for the functional F(σ):= 1/2 W_2^2(σ,μ) defined on 𝒲_2(ℝ^n) with a fixed μ∈𝒲_2(ℝ^n), the following differentiability property holds: For any σ∈𝒲_2,ac(ℝ^n), lim_σ_1→σ [F(σ_1)-F(σ)-⟨𝕀-T_σ^μ, T_σ^{σ_1}-𝕀⟩_L^2(σ)] / W_2(σ_1,σ) = 0, see <cit.>. Then the Fréchet-type derivative of F at σ, denoted as F^'(σ), is a functional on the tangent space (see e.g. <cit.> and <cit.>) Tan_σ, defined as the L^2(σ)-closure of {λ(T-𝕀): T is a Brenier map, λ>0}, and is given by F^'(σ)= 𝕀 - T_σ^μ, using the Riesz representation theorem on L^2(σ). Relating F and (<ref>) and observing that T_σ,μ;θ is a Brenier map (see <Ref>), we define the formal Fréchet derivative [<cit.> defines the formal Fréchet derivative corresponding to the barycenter problem in a similar spirit.] L^'(σ)(x) := ∫_S^n-1(x-T_σ,μ;θ(x)) du(θ), where T_σ,μ;θ is as defined in (<ref>). Note that again the functional L^'(σ) is identified with a function in L^2(σ) by the Riesz representation theorem. For the analysis in this manuscript, only a formal notion of Fréchet derivative as defined in (<ref>) is needed. Under additional assumptions, e.g., if σ,μ∈𝒲_2,ac(K) where K is a compact subset of ℝ^n, we get ∫_K L^'(σ)(x)·ζ(x) dσ(x) = lim_ϵ→0 [SW_2^2((𝕀+ϵζ)_♯σ,μ)-SW_2^2(σ,μ)] / (2ϵ), for any test diffeomorphism ζ of K, see <cit.>. To develop an intuitive understanding of the scheme (<ref>) as a stochastic gradient descent step for the minimization problem stated in (<ref>), we note that the stochastic version of the Fréchet derivative (<ref>) at a random θ_k∈ S^n-1 is 𝕀-T_σ,μ;θ_k.
Hence given a step size γ_k∈ [0,1], the corresponding push-forward map between measures is given by x ↦ x - γ_k (x-T_σ,μ;θ_k(x)), which gives (<ref>). §.§.§ Stochastic j-slice matching More generally, when the first j columns of an orthogonal matrix are used in each iteration, and each P_k is chosen independently according to u_n, the Haar probability measure on the orthogonal group O(n), the iterative j-slice matching scheme (<ref>) can be viewed as a stochastic gradient descent scheme for the following functional minimization problem: min_σ∈𝒲_2,ac(ℝ^n) L_j(σ) = 1/2∫_O(n)∑_i=1^j W_2^2(σ^θ_i,μ^θ_i) du_n(P), where P = [θ_1,⋯,θ_n]. Applying <Ref> with γ_k=1, one can equivalently write <Ref> in the following way: L_j(σ)=1/2∫_O(n) W_2^2(σ,(T^j_σ,μ;P)_♯σ) du_n(P). Following a similar analysis to <Ref>, we define the formal Fréchet derivative L^'_j(σ)(x) = ∫_O(n) (x-T^j_σ,μ;P(x)) du_n(P). <Ref> implies * L_1 = L, as defined in (<ref>), * L(σ)≤ L_j(σ)≤ nL(σ), and hence L_j(σ)=0 if and only if σ = μ. §.§ Almost sure convergence of stochastic j-slice matching For the convergence analysis of the iterative scheme defined in (<ref>), we need the following assumptions, which are adapted from <cit.>: Fix σ_0,μ∈𝒲_2,ac(ℝ^n). Let P_k∈ O(n) and γ_k ∈ [0,1] for k≥ 0. (A1) Given σ_0,μ∈𝒲_2,ac(ℝ^n), the sequence {σ_k}_k≥ 0 given in (<ref>) stays in some compact set K_σ_0,μ⊆𝒲_2,ac(ℝ^n) independent of the choices of {γ_k}_k≥ 0 and {P_k}_k≥ 0. (A2) Step-size assumption: ∑_k=0^∞γ_k = ∞, ∑_k=0^∞γ_k^2 <∞. (A3) Given σ_0,μ∈𝒲_2,ac(ℝ^n), there exists some K_σ_0,μ given by (A1) such that for η∈ K_σ_0,μ, L^'_j(η) = 0 (η-a.e.) implies that η = μ [This condition is more generally referred to as the uniqueness of the Karcher mean, see <cit.>.]. For the stochastic single-slice matching case, i.e., j=1, <cit.> shows a sufficient condition for (A1). Moreover <cit.> provides a sufficient scenario where (A3) is satisfied. In particular, K_σ_0,μ can be any compact subset of measures in 𝒲_2,ac(B_r(0)) with strictly positive density that also satisfies (A1). Here B_r(0)⊆ℝ^n is the ball centered at the origin of radius r. Choices of γ_k satisfying (A2) include γ_k = 1/k, k≥ 1, and γ_k = (1+log_2(k))/k. Note that γ_k=1 does not satisfy the assumptions and therefore, our convergence result of <Ref> does not hold for the original scheme of <cit.>. Let σ_0,μ∈𝒲_2,ac(ℝ^n) and P_k i.i.d.∼ u_n, k≥ 0, where u_n is the Haar probability measure on O(n). Fix 1≤ j ≤ n and assume that (A1), (A2), (A3) hold for L_j of (<ref>). Then the j-slice matching scheme (<ref>) satisfies W_2(σ_k,μ) → 0 and SW_2(σ_k,μ) → 0, a.s. as k →∞. The proof is based on a careful modification of the proof of Theorem 1.4 in <cit.>, a result by Backhoff-Veraguas et al. solving a population barycenter problem through min_σ∈𝒲_2,ac(ℝ^n) ℱ(σ)= 1/2∫_𝒲_2(ℝ^n) W_2^2(σ,m) dΠ(m), where Π is a probability measure defined on the space 𝒲_2(ℝ^n) of probability measures, which gives full measure to a compact subset. In the present manuscript, we seek to recover μ through min_σ∈𝒲_2,ac(ℝ^n) L_j(σ)=1/2∫_O(n) W_2^2(σ, (T^j_σ,μ;P)_♯σ) du_n(P), see <Ref>. Note that by the change-of-variables formula, L_j becomes very similar to ℱ: L_j(σ)= 1/2∫_φ_σ,μ^j(O(n)) W_2^2(σ,m) dΠ_σ,μ^j(m), where φ_σ,μ^j:O(n) →𝒲_2(ℝ^n) is defined by φ_σ,μ^j(P) = (T_σ,μ;P^j)_♯σ and Π_σ,μ^j = (φ_σ,μ^j)_♯ u_n. Through this interpretation, we arrive at a barycenter problem on a subset of 𝒲_2(ℝ^n) which is parametrized by O(n).
Note that φ_σ,μ^j(O(n)) is a compact subset to which Π_σ,μ^j gives full measure; however, the key difference to ℱ lies in the dependence of both φ_σ,μ^j(O(n)) and Π_σ,μ^j on σ. Therefore, the proof of <cit.> does not directly carry over, and it is more natural to work with integration over O(n) than over Π_σ,μ^j. We show a careful adaptation of the proof of <cit.> in <Ref>. The key modification is to show analogs of Proposition 3.2 and Lemma 3.3 in <cit.>, which are given by <Ref> in <Ref>. In essence, <Ref> shows that the sequence {L_j(σ_k)}_k∈ℕ is decreasing in expectation, and <Ref> verifies continuity properties of L_j and L_j^'. The outline as well as other details of the proof are provided in <Ref>. We note that <Ref> with j=n shows a.s. convergence of the matrix-slice matching scheme (<ref>). For j=1 we also get convergence for the single-slice scheme, which is summarized in the following: Let σ_0,μ∈𝒲_2,ac(ℝ^n) and θ_k i.i.d.∼ u, k≥ 0, where u is the uniform probability measure on S^n-1. Assume that (A1), (A2), (A3) hold for L of (<ref>). Then the single-slice matching scheme (<ref>) satisfies W_2(σ_k,μ) → 0 and SW_2(σ_k,μ) → 0, a.s. as k →∞. The result follows directly from combining <Ref> with <Ref>. We close this section with an example on translations. The above theorem offers a straightforward application to verify the almost sure convergence of the stochastic single-slice scheme (<ref>) when the initial and target measure are related through a translation. Let σ∈𝒲_2,ac(ℝ^n) and let μ = (T_b)_♯σ with T_b(x)= x+b, b∈ℝ^n, b≠ 0. We apply the single-slice iterative scheme ((<ref>) with j=1 and θ_k i.i.d.∼ u, k≥ 0, on S^n-1) with σ_0=σ. Then for any choice of sequence γ_k satisfying (A2), we get σ_k→μ a.s. in W_2. By <Ref>, it suffices to show that the assumptions (A1) and (A3) are satisfied. (A1) Let K_μ^b= {ν_z = (T_z)_♯μ: T_z(x)=x+z, ‖z‖≤‖b‖}. The compactness of K_μ^b can be seen via the compactness of the ball {z∈ℝ^n: ‖z‖≤‖b‖} and the continuity of the operator from (ℝ^n,‖·‖) to (𝒫(ℝ^n),W_2), z↦ (T_z)_♯μ. Note that σ∈ K_μ^b. The fact that {σ_k} stays in K_μ^b can be seen from the geodesic convexity of K_μ^b, i.e., for any γ∈ [0,1] and ν_z∈ K_μ^b we have ((1-γ)𝕀 + γ T_ν_z,μ;θ)_♯ν_z ∈ K_μ^b. This follows from T_ν_z,μ;θ(x) = x-(θ· z)θ and from (1-γ)x + γ T_ν_z,μ;θ(x) = x - γ (θ· z)θ. Therefore ‖z - γ(θ· z)θ‖≤‖z‖≤‖b‖ and ((1-γ)𝕀 + γ T_ν_z,μ;θ)_♯ν_z ∈ K_μ^b. (A3) We will show that for any ν_z∈ K_μ^b, L^'(ν_z)= 0 if and only if ν_z= μ. Let ν_z = (T_z)_♯μ, where T_z(x)=x+z. A direct computation gives L^'(ν_z)(x)= ∫_S^n-1 (θ· z)θ du(θ). Let Az := ∫_S^n-1 (θ· z)θ du(θ), where A∈ℝ^n× n, and observe that for the standard basis vector e_i∈ℝ^n, Ae_i= ∫_S^n-1 (θ· e_i)θ du(θ) = [0,⋯,0, w_i, 0,⋯, 0]^t, where all but the i-th entry of the RHS vector are zero and w_i>0 (see <Ref>). Since A is a diagonal matrix with positive diagonal entries, it follows that Az = 0 if and only if z= 0. § NUMERICAL EXPERIMENTS We show numerical experiments morphing MNIST digits <cit.> into one another. Here we chose to morph a digit 5 into a digit 1 using both the matrix-slicing scheme (j=n=2) and the single-slicing scheme (j=1). In both cases, the step-wise morphing obtained through these schemes is visibly similar to a Wasserstein interpolation process, see <Ref> and <Ref>. §.§ Matrix-slicing example We show a numerical experiment morphing a digit 5 into a digit 1 using the iterative scheme (<ref>) with j=n=2 and the choice γ_k = (1+log_2(k))/k.
The random orthogonal matrix P = [ cosβ, sinβ; -sinβ, cosβ ] is generated via choosing a uniformly distributed random angle β in [0,π/2). A subset of the first 13 iterations is shown in the top panel of <Ref>. The bottom panel of <Ref> shows how SW_2(σ_k,μ) decreases as the iteration variable k increases. The first step σ_1 creates the biggest drop, as it translates the digit 5 into the correct location. The other iteration steps “stretch” the 5 into the 1. There is some fluctuation in the convergence SW_2(σ_k,μ) → 0 since this is a stochastic iteration. There are still some artifacts present in σ_13, which are due to a combination of a small number of iterations and numerical errors. The sliced Wasserstein distance in the last step is ≈ 0.17. §.§ Single-slicing example For comparison, we apply the single-slice scheme to the same digits as in <Ref>. We use the iterative scheme (<ref>) with j=1 (note that n=2) and the choice γ_k = (1+log_2(k))/k. The random unit vector θ = [cosβ,sinβ]^t is generated by choosing a uniformly distributed random angle β in [0,π). A subset of the first 30 iterations is shown in the top panel of <Ref>, as well as the plot of the associated sliced Wasserstein distance to the target image in the bottom panel. Note that the morphing is slower than with the matrix-slicing scheme of <Ref> since a single direction is chosen rather than a pair of orthonormal directions, and 30 iterations are needed to obtain comparable results. At the 30th iteration step, the sliced Wasserstein distance also drops to ≈ 0.17. § CONCLUSION Motivated by the availability of closed-form formulae for one-dimensional optimal transport maps and the associated computational advantages, we are interested in transferring measures through slice-matching schemes as introduced in <cit.>. We derive a generalized framework for these types of schemes and establish an interpretation as “compatible maps”, which in turn allows for direct application to supervised learning tasks in the Wasserstein space. The main result of this paper is an a.s. convergence proof of a stochastic variant of the slice-matching schemes of <cit.>, using stochastic gradient descent iterations in the Wasserstein space as suggested by <cit.>. This convergence result contributes towards efforts in justifying the use of slice-matching schemes in data science applications <cit.>. In addition, we show numerical experiments on image morphing. § ACKNOWLEDGEMENTS CM is supported by NSF award DMS-2306064 and by a seed grant from the School of Data Science and Society at UNC. Special thanks are extended to Soheil Kolouri for introducing SL to the convergence question related to the single-slice matching scheme. § REFERENCES § PROOF OF <REF> We consider a joint probability space for the sequence of random variables {P_k}_k≥ 0, where P_k i.i.d.∼ u_n and u_n is the Haar probability measure on O(n). The almost sure convergence result in <Ref> is in terms of this joint probability space coupled with the product sigma-algebra and the associated product measure <cit.>. We follow the outline of the proof of Theorem 1.4 in <cit.> and establish the analogs of Proposition 3.2 and Lemma 3.3 of the proof in <cit.>, shown in the key lemmas section below. Note that the unique minimizer for (<ref>) is μ and L_j(μ)=0. We similarly introduce l_i := L_j(σ_i). Following the same arguments as well as assumption (A2), one can derive that l_i→ l_∞ a.s. for some non-negative random variable l_∞∈ L^1 and lim inf_i→∞‖L^'_j(σ_i)‖^2_L^2(σ_i)=0 a.s.
Now let K_σ_0,μ be a compact set such that σ_i∈ K_σ_0,μ for all i (see (A1)). Let ϵ>0 and K_ϵ= {σ : L_j(σ)≥ϵ}∩ K_σ_0,μ. Since K_ϵ is compact, we have inf_σ∈ K_ϵ‖L_j^'(σ)‖_L^2(σ)>0. Otherwise, it would imply the existence of σ_ϵ∈ K_ϵ such that L^'_j(σ_ϵ) =0 by part (ii) of Lemma <ref>. However, this would lead to contradictory statements: σ_ϵ=μ according to (A3), and L_j(σ_ϵ)≥ϵ by part (i) of Lemma <ref>. The remaining arguments closely resemble those in Theorem 1.4 of <cit.>, which yield the same conclusions: l_∞ =0 a.s. and σ_i→μ in W_2 a.s., which trivially implies that SW_2(σ_i,μ)→ 0 a.s. since SW_2≤ W_2 (see e.g., <Ref>). Hence we also have that σ_i→μ in SW_2 a.s. §.§ Key lemmas We show two lemmas, which are analogs of Proposition 3.2 and Lemma 3.3 in <cit.>, adapted to our functional defined in (<ref>). To quantify the behavior of L_j(σ_k), given the first k randomly chosen orthogonal matrices, we introduce (in a similar fashion to <cit.>) the following (natural) filtration associated with the stochastic process {P_k}_k∈ℕ: Let P_k i.i.d.∼ u_n, k≥ 0, where u_n is the Haar probability measure on O(n). Define ℱ_0 as the trivial σ-algebra and ℱ_k+1 as the σ-algebra generated by P_0,…,P_k. Let σ_k+1= ((1-γ_k)id +γ_k T^j_σ_k,μ;P_k)_♯σ_k as defined in (<ref>), where P_k, ℱ_k are as in Definition <ref>. Then 𝔼[L_j(σ_k+1)|ℱ_k]≤ (1+γ_k^2)L_j(σ_k)-γ_k‖L^'_j(σ_k)‖^2_L^2(σ_k). Based on <Ref>, an analogous argument to Proposition 3.2 in <cit.> yields the following: L_j(σ_k+1)= 1/2∫_O(n)∑_i=1^j W_2^2(σ_k+1^θ_i, μ^θ_i) du_n(P) ≤ 1/2∫_O(n)‖𝕀-T^j_σ_k,μ;P‖^2_L^2(σ_k) du_n(P)+γ_k^2/2∫_O(n)‖𝕀-T^j_σ_k,μ;P_k‖^2_L^2(σ_k) du_n(P) -γ_k⟨∫_O(n)(𝕀-T^j_σ_k,μ;P) du_n(P), 𝕀-T^j_σ_k,μ;P_k⟩_L^2(σ_k) = 1/2∫_O(n)∑_i=1^j W_2^2(σ_k^θ_i, μ^θ_i) du_n(P)+γ_k^2/2∫_O(n)∑_i=1^j W_2^2(σ_k^θ_i^k, μ^θ_i^k) du_n(P) -γ_k ⟨L^'_j(σ_k), 𝕀-T^j_σ_k,μ;P_k⟩_L^2(σ_k), where P = [θ_1,…,θ_n] is a generic orthogonal matrix and P_k = [θ_1^k,…,θ_n^k ] is the orthogonal matrix at step k of the iteration scheme (<ref>). Note that the first two terms of the last inequality follow essentially from <Ref> for the case where γ_k =1, and the last term follows from the definition of the formal Fréchet derivative in (<ref>). Rewriting the above using the definition of L_j and the fact that u_n is a probability measure, we have L_j(σ_k+1)≤ L_j(σ_k)+γ_k^2/2∑_i=1^j W_2^2(σ_k^θ_i^k, μ^θ_i^k)-γ_k ⟨L_j^'(σ_k), 𝕀-T^j_σ_k,μ;P_k⟩_L^2(σ_k). Based on the above inequality, the final estimation for the conditional expectation 𝔼[L_j(σ_k+1)|ℱ_k] parallels the reasoning used in the last inequality in the proof of Proposition 3.2 in <cit.>. Let {ρ_m}⊆𝒲_2,ac(ℝ^n) be such that ρ_m →ρ∈𝒲_2,ac(ℝ^n) in W_2. Then as m→∞, we get * L_j(ρ_m)→ L_j(ρ) and * ‖L^'_j(ρ_m)‖_L^2(ρ_m)→‖L^'_j(ρ)‖_L^2(ρ). The arguments closely follow the structure of those in Lemma 3.3 in <cit.>, with necessary adjustments made for the specific functional L_j. For the convenience of the reader, we repeat the required set-up. Suppose that (Ω, 𝒢, ℙ) is a common probability space for random vectors {X_m} of laws {ρ_m} and X of law ρ such that X_m converges ℙ-a.s. to X, and enlarge it (when needed) with an independent random variable 𝐏∈ O(n) distributed according to the Haar measure u_n. By similar arguments as in <cit.> using (<ref>), one can observe that (X_m, T^j_ρ_m,μ;𝐏(X_m)) converges ℙ-a.s. to (X, T^j_ρ,μ;𝐏(X)).
To see that (X_m, T^j_ρ_m,μ;𝐏(X_m)) → (X, T^j_ρ,μ;𝐏(X)) entrywise in L^2(Ω,𝒢,ℙ), using parallel arguments to <cit.>, like the Vitali convergence theorem, it suffices[To show that a sequence of vector fields {h_m:ℝ^n→ℝ^n}_m, where h_m = (h_m^1, ⋯, h_m^n)^t, is uniformly integrable entrywise, i.e., ∀ i, {h_m^i}_m is uniformly integrable, it suffices to show that sup_m∫‖h_m(x)‖^2 1_{‖h_m(x)‖^2≥ M} dx → 0 as M→∞, since ∫ |h_m^i(x)|^2 1_{|h_m^i(x)|^2≥ M} dx≤∫‖h_m(x)‖^2 1_{‖h_m(x)‖^2≥ M} dx.] to verify that for any M>0, sup_m 𝔼[‖X_m‖^2 1_{‖X_m‖^2≥ M}] = sup_m ∫_O(n)∫_{‖y‖^2≥ M}‖y‖^2 dρ_m(y) du_n(P) → 0, as M→∞, and sup_m 𝔼[‖T^j_ρ_m,μ;𝐏(X_m)‖^2 1_{‖T^j_ρ_m,μ;𝐏(X_m)‖^2≥ M}] = sup_m ∫_O(n)∫_{‖y‖^2≥ M}‖y‖^2 d((T^j_ρ_m,μ;P(X_m))_♯ℙ)(y) du_n(P) = sup_m ∫_O(n)∫_{‖y‖^2≥ M}∑_i=1^n |θ_i · y|^2 d((T^j_ρ_m,μ;P(X_m))_♯ℙ)(y) du_n(P) = sup_m ∫_O(n)(∑_i=1^j ∫_{θ_i· y: ‖y‖^2≥ M} |z_i|^2 dμ^θ_i(z_i) + ∑_i=j+1^n∫_{θ_i· y: ‖y‖^2≥ M} |z_i|^2 dρ_m^θ_i(z_i)) du_n(P) ≤ sup_m∫_O(n)∫_{‖y‖^2≥ M}∑_i=1^n |θ_i· y|^2 d(μ(y)+ρ_m(y)) du_n(P) = sup_m∫_O(n)∫_{‖y‖^2≥ M}‖y‖^2 d(μ(y)+ρ_m(y)) du_n(P)→ 0 as M→∞, where the last steps in (<ref>) and (<ref>) follow from the dominated convergence theorem with the observations (a): the associated second moments of ρ_m converge to that of ρ and hence are uniformly bounded, given the fact that ρ_m→ρ in 𝒲_2(ℝ^n), and (b): the second moment of μ is bounded, given the fact that μ∈𝒲_2,ac(ℝ^n). Here P = [θ_1,⋯,θ_n] denotes a generic orthogonal matrix. Note that the third equality of (<ref>) follows from the fact that (𝒫_θ_i∘ T^j_ρ_m,μ;𝐏(X_m))_♯ℙ = μ^θ_i for 1≤ i ≤ j and = ρ_m^θ_i for j+1≤ i ≤ n, which can be directly verified using the definition of T^j_ρ_m,μ;𝐏 in (<ref>). Then by (<ref>) and the triangle inequality, we have (X_m -T^j_ρ_m,μ;𝐏(X_m))→(X -T^j_ρ,μ;𝐏(X)) entrywise in L^2(Ω,𝒢,ℙ). Hence for part (i) we get L_j(ρ_m)= 1/2𝔼[‖X_m -T^j_ρ_m,μ;𝐏(X_m)‖^2] → L_j(ρ)= 1/2𝔼[‖X -T^j_ρ,μ;𝐏(X)‖^2], where the two identities above can be obtained from (<ref>) by the change-of-variables formula. For part (ii), by (<ref>) and an implication of the contraction property of the conditional expectation in L^2 (see <Ref>), we have 𝔼[X_m-T^j_ρ_m,μ;𝐏(X_m)|𝒢_∞] →𝔼[X-T^j_ρ,μ;𝐏(X)|𝒢_∞] entrywise in L^2(Ω,𝒢,ℙ), where 𝒢_∞ represents the sigma-field generated by (X_1,X_2,…). The desired convergence then follows from (<ref>) and the observations L_j^'(ρ_m)(X_m) = 𝔼[X_m-T^j_ρ_m,μ;𝐏(X_m)|𝒢_∞], and ∫_ℝ^n‖L_j^'(ρ_m)(x)‖^2 dρ_m(x) =𝔼[‖𝔼[X_m-T^j_ρ_m,μ;𝐏(X_m)|𝒢_∞]‖^2], as well as the analogs of (<ref>) and (<ref>) associated with X and ρ, which can be derived from (<ref>) using the change-of-variables formula. §.§ Auxiliary lemmas Let P, P_k∈ O(n), where P = [θ_1, ⋯, θ_n], and let σ_k+1 be defined in (<ref>) using P_k and for fixed 1≤ j ≤ n. Then we get ∑_i=1^j W_2^2(σ_k+1^θ_i,μ^θ_i) ≤‖𝕀-T^j_σ_k,μ;P‖^2_L^2(σ_k) +γ_k^2‖𝕀-T^j_σ_k,μ;P_k‖^2_L^2(σ_k) -2γ_k⟨𝕀-T^j_σ_k,μ;P, 𝕀-T^j_σ_k,μ;P_k⟩_L^2(σ_k). Let σ_k,μ;P,j = (T^j_σ_k,μ;P)_♯σ_k. We first note that by <Ref> (4), μ^θ_i= σ_k,μ;P,j^θ_i for 1≤ i ≤ j. Observe that σ_k,μ;P,j^θ_i = (𝒫_θ_i∘ T^j_σ_k,μ;P)_♯σ_k and, by definition, σ_k+1^θ_i= (𝒫_θ_i∘((1-γ_k)𝕀 + γ_k T^j_σ_k,μ;P_k))_♯σ_k. Then, by the Lipschitz continuity of the push-forward operation <cit.>, for 1≤ i ≤ j we get W_2^2(σ_k+1^θ_i,μ^θ_i) ≤‖𝒫_θ_i∘( T^j_σ_k,μ;P-((1-γ_k)𝕀 + γ_k T^j_σ_k,μ;P_k))‖^2_L^2(σ_k) = ‖𝒫_θ_i∘((𝕀-T^j_σ_k,μ;P)-γ_k(𝕀-T^j_σ_k,μ;P_k))‖^2_L^2(σ_k). Note that for an orthonormal basis {θ_1,…,θ_n} of ℝ^n and a function F∈ L^2(σ) we have ‖F‖^2_L^2(σ) = ∑_i=1^n ‖𝒫_θ_i∘ F‖^2_L^2(σ). Therefore, for an orthonormal set {θ_1,…,θ_j} with j≤ n we get ∑_i=1^j ‖𝒫_θ_i∘ F‖^2_L^2(σ)≤‖F‖^2_L^2(σ).
This implies ∑_i=1^j W_2^2(σ_k+1^θ_i,μ^θ_i) ≤‖(𝕀-T^j_σ_k,μ;P)-γ_k(𝕀-T^j_σ_k,μ;P_k)‖^2_L^2(σ_k). The desired inequality hence follows from expanding the right-hand side. § OTHER TECHNICAL DETAILS Let {e_i}_i=1^n be the standard basis for ℝ^n. Then ∫_S^n-1 (θ· e_i)θ du(θ) = [0,⋯,0, w_i, 0,⋯, 0]^t, where the i-th entry w_i>0. Note that θ· e_i = θ(i). The natural symmetry inherent in S^n-1 allows us to readily observe that ∫_S^n-1θ(i)θ(j) du(θ) > 0 for i=j and = 0 for i≠ j. Let σ∈𝒲_2,ac(ℝ^n). Then for any θ∈ S^n-1, σ^θ∈𝒲_2,ac(ℝ). The fact that σ^θ is absolutely continuous follows from the co-area formula, see also Box 2.4 in <cit.>. The fact that σ^θ has finite second moments follows from the change-of-variables formula ∫_ℝ |t|^2 dσ^θ(t)=∫_ℝ^n |x·θ|^2 dσ(x)≤∫_ℝ^n‖x‖^2 dσ(x)<∞. The conditional expectation is a contraction in L^p(Ω) for all p≥ 1, i.e., given X∈ L^p(Ω,ℱ,ℙ) and a σ-algebra 𝒢⊆ℱ, ‖𝔼[X|𝒢]‖_L^p(Ω)≤‖X‖_L^p(Ω), where ‖𝔼[X|𝒢]‖_L^p(Ω)^p = 𝔼[|𝔼[X| 𝒢]|^p] and ‖X‖_L^p(Ω)^p = 𝔼[|X|^p]. Let Y_n→ Y in L^p(Ω,ℱ,ℙ) for p≥ 1 and let 𝒢⊂ℱ be a sub-sigma-algebra. Then 𝔼[Y_n|𝒢]→𝔼[Y|𝒢] in L^p(Ω,ℱ,ℙ). By linearity of conditional expectation and <Ref>, we have 𝔼[|𝔼[Y_n|𝒢]- 𝔼[Y|𝒢]|^p ] = 𝔼[|𝔼[Y_n-Y|𝒢]|^p] ≤𝔼[|Y_n-Y|^p]→ 0.
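In fact, by symmetry ∫_S^n-1 θθ^t du(θ) = (1/n) I, so the diagonal entries above are w_i = 1/n. A quick Monte-Carlo sanity check (our illustration, in the same style as the earlier snippets):

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 4, 200_000
theta = rng.normal(size=(N, n))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # uniform samples on S^{n-1}

A = theta.T @ theta / N   # Monte-Carlo estimate of the matrix z -> integral (theta.z) theta du
print(np.round(A, 3))     # approximately (1/n) * identity, i.e. w_i = 1/n > 0
```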
http://arxiv.org/abs/2307.04897v1
20230710204356
Spin-EPR-pair separation by conveyor-mode single electron shuttling in Si/SiGe
[ "Tom Struck", "Mats Volmer", "Lino Visser", "Tobias Offermann", "Ran Xue", "Jhih-Sian Tu", "Stefan Trellenkamp", "Łukasz Cywiński", "Hendrik Bluhm", "Lars R. Schreiber" ]
quant-ph
[ "quant-ph", "cond-mat.mes-hall" ]
JARA-FIT Institute for Quantum Information, Forschungszentrum Jülich GmbH and RWTH Aachen University, Aachen, Germany ARQUE Systems GmbH, 52074 Aachen, Germany JARA-FIT Institute for Quantum Information, Forschungszentrum Jülich GmbH and RWTH Aachen University, Aachen, Germany Helmholtz Nano Facility (HNF), Forschungszentrum Jülich, Jülich, Germany Institute of Physics, Polish Academy of Sciences, Warsaw, Poland [email protected] JARA-FIT Institute for Quantum Information, Forschungszentrum Jülich GmbH and RWTH Aachen University, Aachen, Germany ARQUE Systems GmbH, 52074 Aachen, Germany Long-ranged coherent qubit coupling is a missing functional block for scaling up spin-qubit-based quantum computing solutions. Spin-coherent conveyor-mode electron shuttling could enable spin quantum chips with a scalable and sparse qubit architecture. Its key feature is the operation by only a few easily tuneable input terminals and compatibility with industrial gate fabrication. Single electron shuttling in conveyor mode in a 420 nm long quantum bus has been demonstrated previously. Here we investigate the spin coherence during conveyor-mode shuttling by separating and rejoining an Einstein-Podolsky-Rosen (EPR) spin pair. Compared to previous work we boost the shuttle velocity by a factor of 10000. We observe a rising spin-qubit dephasing time with longer shuttle distances due to motional narrowing and estimate the spin-shuttle infidelity due to dephasing to be 0.7 % for a total shuttle distance of nominal 560 nm. Shuttling several loops up to an accumulated distance of 3.36 μm, spin entanglement of the EPR pair is still detectable, giving good perspective for our approach of a shuttle-based scalable quantum computing architecture in silicon. Spin-EPR-pair separation by conveyor-mode single electron shuttling in Si/SiGe Lars R. Schreiber August 12, 2023 Silicon-based electron-spin qubits show single- and two-qubit gate <cit.> as well as readout <cit.> fidelities reaching the prerequisite for topological quantum error correction <cit.>. This underlines the need to increase the number of spin-qubits on a chip in an architecture that preserves the qubit's manipulation and readout performance. New qubit readout strategies <cit.> and ideas for architectures with sparse <cit.> and dense <cit.> qubit-grids have emerged. Sparse qubit grids have good perspective to eliminate the qubit cross-talk issues of their dense counterpart <cit.> and to solve the signal-fanout problem <cit.> by employing tiles of on-chip control-electronics <cit.>. Sparse qubit architectures require high-fidelity coherent spin couplers that can bridge distances of several micrometers. One type of coupler involves high-impedance superconducting resonators, which necessitate a complex interface between the spin and the electric dipole <cit.>. Other demonstrations focus on spin-qubit shuttling of one spin-qubit towards another qubit across an array of tunnel-coupled static quantum dots (QDs), named bucket-brigade shuttling <cit.>. This approach, however, is complicated by the sensitivity of adiabatic Landau-Zener transitions to potential disorder in the quantum well <cit.>. In this respect, spin shuttling using a moving QD—referred to as conveyor-mode shuttling—is more scalable, as it requires only four easily tunable input signals, independent of its length <cit.>.
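To build intuition for why four signals suffice, the following minimal sketch (ours) models each gate set as coupling to the channel with an idealized cosine lever arm of period λ, shifted by λ/4 between neighbouring sets. The phases and λ = 280 nm match the device parameters given in the Methods section below; the cosine lever-arm shape is a simplifying assumption, not a device simulation.

```python
import numpy as np

lam = 280e-9                # period of the gate pattern (m)
f = 10e6                    # drive frequency (Hz); expected QD velocity v = f*lam = 2.8 m/s
phases = [-np.pi/2, 0.0, np.pi/2, np.pi]   # phases applied to gate sets S1..S4

def potential(x, t):
    """Idealized channel potential: four sinusoidally driven gate sets with
    cosine lever arms shifted by lam/4 sum to -2*cos(2*pi*(f*t - x/lam)),
    i.e. a wave propagating in +x at velocity f*lam."""
    V = np.zeros_like(x)
    for i, phi in enumerate(phases):
        V += np.sin(2*np.pi*f*t + phi) * np.cos(2*np.pi*x/lam + i*np.pi/2)
    return V

x = np.linspace(0.0, 2*lam, 4000)
dt = 10e-9                                # 10 ns
x0 = x[np.argmin(potential(x, 0.0))]      # the moving QD sits in the potential minimum
x1 = x[np.argmin(potential(x, dt))]
print((x1 - x0) / dt)                     # ~2.8 m/s, matching v = f*lam
```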
While coherent spin shuttling preserving entanglement has been demonstrated with surface acoustic waves in piezoelectric materials <cit.>, an array of top-gates connected to four gate sets can induce a moving QD in a Si/SiGe one-dimensional electron channel (1DEC) <cit.>. A spin qubit shuttle device (SQS), also called QuBus, employing conveyor-mode shuttling in Si/SiGe has been demonstrated, with a shuttle distance of 420 nm and a charge shuttling fidelity of (99.42 ± 0.02) % <cit.>. Subsequent improvements pushed the cumulative shuttle distance to 19 μm with a charge shuttling fidelity of (99.7 ± 0.3) % <cit.>. Here, we go one step further and characterise the spin coherence of a SQS operated in conveyor mode. To probe the spin coherence, we initialize the SQS by creating a spin-entangled Einstein-Podolsky-Rosen (EPR) pair at one end, separate the EPR pair by conveyor-mode shuttling over a variable distance and at variable velocity, and recombine them to detect the preservation of the spin entanglement by Pauli-spin blockade (PSB). Compared to previous work <cit.>, we increased the shuttle velocity by four orders of magnitude to 2.8 m/s while preserving the charge shuttle fidelity at (99.72 ± 0.01) % over a distance of nominal 560 nm in total. By observing coherent oscillations from the singlet (S) to the unpolarised triplet (T_0) during the shuttle process, we demonstrate the coherence of the shuttled spin-qubit up to a cumulative distance of nominal 3.36 μm. The dephasing time T_2^* of the EPR pair is initially on par with ST_0 dephasing in a tunnel-coupled double quantum-dot (DQD) in Si/SiGe with a natural abundance of isotopes <cit.>. We observe an increase of T_2^* with the shuttle distance, which demonstrates the predicted enhancement of the dephasing time of the shuttled qubit by motional narrowing <cit.>. §.§ Device Layout and Method First, we introduce the SQS device and the experimental methods. The three metallic (Ti/Pt) gate-layers of the SQS device (Fig. <ref>a) are isolated by conformally deposited 7.7 nm thick Al_2O_3 and fabricated by electron-beam lithography and metal lift-off on an undoped Si/Si_0.7Ge_0.3 quantum well with natural abundance of isotopes, similar to Ref. <cit.>. The 1DEC of the SQS is formed in the Si/SiGe quantum well by an approximately 1.2 micron long split-gate with 200 nm gate spacing (purple in Fig. <ref>a). Seventeen so-called clavier gates are fabricated on top with 70 nm gate pitch. Eight gates are fabricated on the second gate layer, labelled P1, P8, 3×S1 and 3×S3. Nine gates are on the third layer, labelled B1, B2, B8, B9, 3×S2 and 2×S4. Characteristic for our SQS in conveyor mode, the shuttle gates S1, S2, S3, S4 each represent one of the four gate-sets containing two to three clavier gates. Clavier gates of one gate set are electrically connected and thus always on the same electrical potential <cit.>. Since every fourth clavier gate is on the same potential within the shuttle section, the period λ of the electrostatic potential is 280 nm. The SQS contains two single electron transistors (SETs) at both ends, which are used as electron reservoir and as proximate charge sensors sensitive to the electron filling at the ends of the SQS. Due to a broken clavier gate B8 on the right side of the device, only the left side of the SQS is used. §.§ Pulse Sequence Fig. <ref>b shows the simplified sequence for a shuttling experiment (details in the method section). It starts with loading four electrons from the left tunnel-coupled SET into the SQS (red and blue triangle stages in Fig.
<ref>a,c). Then, we decouple this electron reservoir by raising B1, such that the four electrons are trapped in the first QD confined by gates B1 and B2. Next, we form a DQD under P1 and S1 with B2 controlling the inter-dot tunnel coupling. We initialise the electron system to a spin-singlet state by waiting in (n,m)=(4,0) (stage I) for approximately 1 ms, where n and m are the electron filling numbers of the left and right QD, respectively. Then, we adiabatically pulse to the (3,1) charge state (stages I → S) and close the DQD's tunnel barrier via B2 (S → T in Fig. <ref>b,c). The electron in the right QD forms a spin-singlet with the remaining three electrons. We load four electrons into our system to enhance the energy splitting between singlet and triplet states and thus increase the PSB region in gate space (Fig. <ref>c) <cit.>. The analogy to the two-spin EPR pair is reasonable, since the simple picture holds that two of the three electrons fill one valley-orbit shell and the remaining electron is in a singlet state with the electron in the right QD <cit.>. Afterwards, we initiate the electron shuttling process by applying sinusoidal voltage pulses to the shuttle gates S1-S4 (see details in the method section and in Fig. <ref>). During shuttling, the three electrons remain confined in the outermost left QD and only the separated electron is shuttled in a moving QD. After shuttling forward and backward by the same distance (Fig. <ref>b), we increase the tunnel coupling within the DQD again and tune the DQD into PSB (stages T → S → P in Figs. <ref>b and c). In this way, only the EPR pair in the singlet state can tunnel into the (4,0) charge state. For all three triplet states this charge transition is energetically forbidden. Finally, we close the barrier once more to freeze the charge state <cit.> (stage F in Figs. <ref>b and c) and read it out by the current I_SET. §.§ Coherent Shuttling In this section, we demonstrate coherent shuttling by measuring ST_0 oscillations as a function of shuttle velocity v_S, distance d and two values of the global magnetic field B (Fig. <ref>a and b). For each measurement of the singlet probability P_S, 50000 shuttle cycles are evaluated. Note that the QD always shuttles the distance d twice, forward and backward. We apply a simple sinusoidal signal of frequency f to the gates S1, S2, S3 and S4 (see the method section Charge Shuttling for details), thus the shuttle velocity should be approximately constant and the electron is in motion throughout the entire shuttle process, from initialisation to readout. The total shuttle time τ_S is adjusted by varying the shuttle velocity v_S=fλ. The maximum velocity is v_max=2.8 m/s and the amplitude of the sinusoidal signals is chosen to be in the regime of large charge shuttling fidelity, ℱ_C=(99.72 ± 0.01) % across a shuttle distance d=λ (see the methods section Charge Shuttling). We managed to extend this distance to d=1.2λ=336 nm, finally limited by a drastic drop in electron return probability. The upper bound of v_S does not allow access to data points in the grey triangular areas (labelled with τ_S) of Fig. <ref>a,b at small τ_S and large d. We fit each line of measured ST_0 oscillations for both B (Fig.
<ref>c,d) with P_S(τ_S) = e^{-(τ_S/T_2^*)^2}( a_<cos(2πν_<τ_S+φ_<) +a_>cos(2πν_>τ_S+φ_>))+c, where P_S is the probability of detecting the EPR pair in a singlet state, T_2^* is the ensemble dephasing time of the EPR pair, and a_<,>, ν_<,>, φ_<,> and c are the visibility, frequency, phase and offset of the ST_0 oscillations, respectively. Variations in the offset c may arise from singlet initialisation and detection errors and fluctuate randomly among scan lines. We empirically find that the data can be best fitted by two oscillations, hence the two cosine terms with their respective frequencies and phases are used. We speculate that this might result from initialising a mixed valley state, which requires further investigation elsewhere. Our fits (Fig. <ref>c,d) match the measured raw data in Fig. <ref>a,b well. First, we discuss the fitted ν_<,>. The origin of the measured ST_0 oscillations is the Zeeman energy difference between the spin in the shuttled QD and the spin in the static QD, which is filled by three electrons. The difference originates from slightly different electron g-factors Δ g and Overhauser energies Δ E_hf due to the hyperfine contact interaction <cit.>. This is the same mechanism that leads to ST_0 oscillations in the case of a DQD without any conveyor-mode shuttling. These oscillations, which are effectively at d=0 nm, are discussed in the method section about Singlet-Triplet oscillations. The dynamics of the nuclear spins is slow compared to a shuttle pulse sequence, but the Overhauser field might vary along the 1DEC. The electron g-factor depends on the valley state and QD confinement and might vary for the moving QD along the 1DEC as well <cit.>. Hence, the Zeeman energy difference of the entangled spins and thus the ST_0 oscillation frequency ν_i depends on the position x of the moving QD. As this position is changing during the shuttle process, the frequency ν_i(d) becomes a function of shuttle distance d and is given by an average over the shuttling distance d: ν_i(d)= 1/(h d)∫_0^d [Δ g(x) μ_B B + Δ E_hf(x)] dx, where h is the Planck constant and μ_B is the Bohr magneton. We idealize by neglecting the time-dependence of Δ E_hf and Δ g and by assuming a deterministic, thus reproducible, trajectory x(t) of the shuttled QD, when averaging over several shuttling cycles. Due to the integral, we expect that changes in ν_i(d) smooth out for increasing d. Indeed, we observe a shuttle-distance dependence of the ST_0 oscillation with a smoothing trend towards larger d (Fig. <ref>e). Furthermore, we observe that ν_<,> scale with the external magnetic field, which underlines that the origin of our observed oscillations is spin dynamics, in agreement with Eq. <ref>. Calculating pairwise the ratios of ν_< and ν_> measured at B=0.6 T and B=0.8 T, we arrive close to the expected ratio of 0.75 (Fig. <ref>f). This demonstrates the linearity in magnetic field strength and indicates that the contribution of Δ E_hf(x) is small compared to the contribution of the electron g-factor difference. Furthermore, it shows that the two oscillation components have distinct, but reproducible Δ g(x). For small d, the difference of φ_<,> is small, increasing the fitting error, but deviations from the ratio 0.75 cannot be fully excluded here. §.§ Spin-dephasing during shuttling Most important is the evaluation of the ensemble spin dephasing time T_2^* of the EPR pair as a function of d, since it contains information on the impact of conveyor-mode shuttling on the spin dephasing.
We observe that T_2^* increases with larger shuttle distance (Fig. <ref>g). Since qubit shuttling opens up new dephasing mechanisms <cit.>, this result might be surprising at first sight, but it is expected due to a motional narrowing enhancement of the shuttled qubit's dephasing time <cit.>. We quantify the phenomenon by the fit f_1(d) in Fig. <ref>g using ( 1/T_2^*)^2 = ( 1/T_2,L^*)^2 + ( 1/T_2,R^*)^2 l_c/(d+l_c). To incorporate the dependence of the Gaussian decay T_2^* of the EPR pair on the shuttle distance d, we use the quadratic addition of inverse T_2^* times for the left (L) and right (R) electron spin and include a factor for motional narrowing for the shuttled qubit, where T_2,L^* is the ensemble spin dephasing time of the electron spin that remains static in the outermost left QD, and T_2,S^*(d)≡ T_2,R^*√((d+l_c)/l_c) represents the ensemble spin dephasing time of the forward and backward shuttled electron spin (total distance 2d), averaging over a d-long spatial range of quasistatic noise of its Zeeman energy E_z(x(t)) having a correlation length l_c <cit.>. Note that we distinguish between the static ensemble dephasing times T_2,L^* and T_2,R^*, since we expect the confinement strength within the static QD to be less than in the moving QD. Our fit to the ensemble dephasing time of the EPR pair (Fig. <ref>g) results in T_2,L^*=(1110±90) ns, T_2,R^*=(520±20) ns and l_c=(13±3) nm. In total, this yields T_2,S^*(280 nm) = (2460±310) ns. This result implies that the shuttled qubit increases its dephasing time by a factor of ≈ 4 when shuttled twice across a distance of nominal 280 nm due to motional narrowing. Note that the data points in Fig. <ref>g tend to be lower than the fit for the largest d, which might be due to dephasing mechanisms induced by the shuttle process, such as motion-induced valley excitations <cit.>. At very short shuttle distance d, a deformation of the moving QD might add to the change in spin dephasing time. Assuming a constant shuttle velocity, a constant shape of the moving QD and only motional narrowing of E_hf(x), we derive the fit function f_2(d) exhibiting a modified motional narrowing factor (Fig. <ref>g). Remarkably, we arrive at very similar fitting parameters (see supplementary material). §.§ Long distance shuttling In order to increase the distance of shuttling, we always shuttle at the maximum velocity v_max and, once the shuttled electron returns to the right QD of the DQD (stage S), we record the ST_0 oscillations by waiting an additional 0 to 1 μs prior to measuring the EPR spin-state. We plot the spin-singlet probability P_S of the EPR pair as a function of the total time τ_S,DQD of shuttling and waiting (Fig. <ref>h). Due to the limited length of the shuttle zone, we increase the cumulative distance by shuttling in and out for one period λ multiple times. The total number of periods (D) shuttled forward plus backward is indicated on the left as the accumulated shuttle distance. For example, for the trace labelled D=2, the voltage pulses applied to S1-S4 are designed to shuttle the electron one period λ=280 nm forward and the same distance back towards the spin-detector. For D=1, the electron is shuttled half a period forward, and the same distance back towards the detector. Strikingly, we still observe ST_0 oscillations for the trace labelled D=12, for which the electron shuttles alternately six times forward and backward by λ, nominally equivalent to an accumulated distance of 3.36 μm.
The appearance of ST_0 oscillations shows that the EPR pair remained entangled after such a long shuttling distance. §.§ Mapping local ν variations Coherent shuttling of a spin qubit and EPR separation allow us to collect information about Δ g(x) along the SQS. Instead of shuttling the spin qubit forward and backward with a τ_S-dependent v_S, we shuttle it by a distance x along the 1DEC at the maximum v_max=2.8 m/s, wait there for a time τ_W to let the ST_0 oscillations evolve, and then shuttle back at maximum v_S for PSB detection. We observe ST_0 oscillations (Fig. <ref>) and, similar to Fig. <ref>a and b, their frequency ν(x,B) scales with the B-field as expected (cmp. Fig. <ref>a and b). Compared to Fig. <ref>a and b, ν(x,B) tends to fluctuate faster as a function of x. This is expected, since ν_<,> result from averaging many positions x(t) in the coherent shuttle experiment (Eq. <ref>) in Fig. <ref>, while here ν dominantly depends on the fixed position x. Note that x(t) and thus d are not measured in any case, but deduced from the expected position of the ideal propagating wave potential x=λΔφ/(2π), where Δφ is the phase of the voltages applied to gates S1-S4 relative to the initialisation potential. Hence, we neglect potential disorder and wobbling effects of the propagating wave potential, which are exemplarily simulated in Ref. <cit.>. Notably, ν(x) starts to become nearly constant at x>210 nm. This could be an indication that the electron stops moving at this point. If we try to shuttle to x>330 nm>λ, the electron predominantly does not return, indicating potential disorder which is sufficiently high to break the QD confinement in the propagating QD. §.§ Conclusion This work shows progress on electron shuttling in conveyor mode, building up on earlier demonstrations of charge shuttling <cit.>. We improved the shuttle velocity by four orders of magnitude to a regime at which coherent shuttling becomes feasible <cit.>. When moving into and out of the device once, we demonstrate coherent shuttling by EPR pair separation and recombination across a total distance of nominal 560 nm, and at least 420 nm in case the electron spin halts at x=210 nm. Furthermore, we detect entanglement when moving the electron for an accumulated shuttle distance of nominal 3.36 μm (at least 2.4 μm). Remarkably, the dephasing time of the shuttled qubit T_2,S^* is enhanced by motional narrowing, while the static electron spin dominates the dephasing of the spin-entangled EPR pair. Based on the fitted T_2,S^*(280 nm) ≈ 2460 ns (≈ 2130 ns for fitting with f_2(d)), we can estimate the phase-infidelity caused by the shuttle time τ_S at maximum shuttle velocity v_S using the Gaussian decay 1-ℱ = 1 - exp(-(τ_S/T_2,S^*)^2 ) ≈(2d/(T_2,S^* v_S))^2. We estimate a shuttling-induced phase-infidelity of 1-ℱ=(0.66 ± 0.17) % for a total shuttle distance of nominal 2d=2λ=560 nm (at least 420 nm). Assuming a constant shuttle velocity, a constant shape of the moving QD and only motional narrowing of E_hf(x) (for the fit equation f_2(d), see the supplementary material) yields a matching infidelity of 1-ℱ=(0.88 ± 0.18) % within the error range. Next, we have to increase the shuttle distance by improving the confinement of the moving QD. We already achieved a charge shuttle fidelity of (99.7 ± 0.3) % for a total shuttle distance of 20 μm in a 10 μm long Si/SiGe QuBus <cit.>. The spin dephasing time can be enhanced by isotopically purified ^28Si.
Adding spin-manipulation zones will grant more flexibility in performing coherent shuttling experiments to explore the dephasing channels and the role of the valley states. In the long run, we aim at the integration of our spin shuttle device into a scalable semiconductor qubit architecture <cit.>. § METHODS §.§ Charge Shuttling A prerequisite for spin-coherent shuttling is that the electron stays confined in the moving QD, which we call charge shuttling. Fig. <ref> depicts the pulse procedure for benchmarking the charge shuttling in the same device that we used for spin-coherent shuttling. Firstly, we load four electrons into the first QD by lowering B1 (Fig. <ref>a inset). Due to cross-talk, we need to compensate on P1 and B2. Thereafter, the barrier is raised again to isolate the system. Loading takes approximately 2 ms, as the voltage on B1 is 10 kHz lowpass-filtered. Subsequently, one electron is moved into the second QD (Fig. <ref>a, red triangle → S) and the barrier B2 is closed by pulsing it down by 120 mV (S → T). After stage T, the shuttle pulse (Fig. <ref>b lower part) is applied to the gate-sets S1-S4: V_Si(τ_S)=U_i·sin(2 π f τ_S+φ_i)+C_i. The amplitudes (U_1, U_3) applied to the gate-sets S1 and S3 on the second layer (blue in Fig. <ref>a) are U_lower=150 mV, whereas the amplitudes (U_2, U_4) applied to the gate-sets S2 and S4 on the third metal layer are slightly higher (U_upper=1.28· U_lower=192 mV) to compensate for the difference in the capacitive coupling of these layers to the quantum well. This compensation extends to the DC part of the shuttle gate voltages. The offsets C_1=C_3= 0.7 V are chosen to form a smooth DQD, whilst C_2= C_4= 0.896 V are chosen to form a smooth DC potential. The phases are chosen in order to build a travelling wave potential across the one-dimensional electron channel (φ_1=-π/2, φ_2=0, φ_3=π/2, φ_4=π). This travelling wave potential is illustrated in Fig. <ref>b at the top part. The barrier B2 is pinched off to limit the cross-talk influence from the shuttle pulse on the static electrons. The electron is moved adiabatically by one period of the travelling wave potential (280 nm) to the right. After one period, the absolute gate voltages are exactly identical to the prior state, at which the charge scan in Fig. <ref>a has been recorded. Hence, we can check whether the electron is shuttled away by going back to the electrostatic configuration corresponding to the red triangle and measuring the SET current. By time-reversing the voltage pulses on S1-S4, we shuttle the electron back and perform a measurement in a similar manner. Then, we calculate a histogram as shown in the inset of Fig. <ref>, fit two Gaussian distributions and take the fits' crossing point to define the ranges of I_SET assigned to three- and four-electron detection events. Only if the first measurement yields three electrons (i.e. the electron is shuttled away from the detector) and the second measurement four electrons (i.e. the electron is shuttled back to the detector) is a shuttling event counted as successful. The same approach for counting successful charge shuttle events has been used in Ref. <cit.>. In Fig. <ref>d, we plot the charge shuttling fidelity ℱ_C as a function of the lower layer amplitude U_lower; the upper layer amplitude is U_upper=1.28· U_lower to compensate for the larger distance to the 1DEC. We find a steep rise of ℱ_C at U_lower>110 mV. The histogram of I_SET for all U_lower>125 mV (inset of Fig.
<ref>d) shows well-separated Gaussians assigned to either four- or three-electron filling of the QD underneath gate P1. Due to nonlinear effects on the SET, the peak for four electrons is narrower than the peak for three electrons. Fig. <ref>e shows charge shuttling fidelities as a function of the shuttle frequency f as defined in Eq. <ref>, which corresponds to a shuttle velocity v_S=fλ. From the green points we read off high fidelities up to 10 (2.8/). By averaging ℱ_C(U_lower>125) in Fig. <ref>d, we calculate the mean charge shuttling fidelity for shuttling a nominal total distance of 2λ=560 nm (λ forwards and backwards) to be ℱ_C=(99.72 ± 0.01)%. This value is slightly better than the charge shuttling fidelity of 99.42% obtained in Ref. <cit.>. Moreover, we found charge shuttling across 2λ and back at f=2. We tracked the charge by measuring the charge state after every shuttle pulse, which moves the electron by one period λ (cf. the shuttle tomography method in Ref. <cit.>), and calculated a transfer fidelity of 98.7% at the same voltage amplitudes. §.§ Singlet-Triplet Oscillations To demonstrate that the single-electron spin qubit shuttles coherently, we use the preservation of the entanglement with the static electron spin, which we detect by the coherent oscillations between the spin-singlet S and the unpolarised spin-triplet T_0 state of this EPR pair: H = [[ -J(ε), Δg μ_B B + ΔE_hf/2 ], [ Δg μ_B B + ΔE_hf/2, 0 ]] in the (|S⟩,|T_0⟩)-basis. Here, J(ε) represents the exchange interaction as a function of the detuning ε (=V_P1) between the left and right QD. Δg is the g-factor difference between the two QDs <cit.>, and ΔE_hf is the Overhauser energy difference between the two dots. After loading four electrons as shown in Fig. <ref>a, we initialise the system to S(4,0) by waiting at stage I (Fig. <ref>a) for 2. Next, we step V_P1 by 20, which reduces J(ε) and turns on Δg μ_B B by letting one electron adiabatically tunnel into the right QD. As the two electrons are laterally separated, they are subject to different electron g-factors, resulting in different Zeeman energies in the global B-field of 0.8. At stage S, we wait for a time τ_DQD and pulse to the PSB in stage P, where spin information is converted into charge information. The conversion takes approximately 500, after which raising the inter-dot barrier freezes the charge state for readout (stage F). Iterating over this pulse scheme, we record the singlet return probability P_S (Fig. <ref>b), which is fitted by P_S(τ_DQD) = a · e^{-(τ_DQD/T_2^*)^2} cos(2πν τ_DQD + φ) + c (see the fitting sketch below). We obtain a spin dephasing time of the entangled spin state of T_2^*=(565±10) and a frequency ν=(7.29±0.01). Fig. <ref>c summarises the pulse sequence schematically. For coherent shuttling experiments, instead of waiting at the separation stage, the sequence presented in Fig. <ref>d is inserted between the separation and PSB-freeze-RO pulse segments shown in Fig. <ref>c. §.§ Experimental Setup All experiments are conducted in a dilution refrigerator with a base temperature of 40. All DC lines to the device are filtered by pi-filters (f_c=5 MHz) at room temperature and by 2nd-order RC filters with f_c=10 kHz at base temperature. The clavier gates, B2, P1, P8 and B8 are connected to resistive bias-tees with a cutoff frequency of 5 Hz. Signals are applied to the AC and DC input terminals of the bias-tees, in order to allow the inclusion of millisecond-long pulse segments.
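Before the remaining setup details, a note on the fitting procedure: the Gaussian-damped cosine fit of the singlet return probability above can be sketched as follows. The data here are synthetic placeholders, and the time unit (ns) is an assumption consistent with the quoted T_2^* and ν values:

import numpy as np
from scipy.optimize import curve_fit

# Sketch of the Gaussian-damped cosine fit for the singlet return probability.
# Synthetic data stand in for the measured trace; units are assumptions.
def p_singlet(tau, a, T2_star, nu, phi, c):
    return a * np.exp(-(tau / T2_star) ** 2) * np.cos(2 * np.pi * nu * tau + phi) + c

rng = np.random.default_rng(0)
tau = np.linspace(0.0, 1500.0, 300)                       # separation time (ns, assumed)
p_meas = p_singlet(tau, 0.4, 565.0, 7.29e-3, 0.0, 0.5) \
         + 0.02 * rng.standard_normal(tau.size)           # synthetic noisy data

popt, pcov = curve_fit(p_singlet, tau, p_meas, p0=[0.4, 500.0, 7e-3, 0.0, 0.5])
perr = np.sqrt(np.diag(pcov))
print(f"T_2^* = {popt[1]:.0f} +/- {perr[1]:.0f} ns, nu = {popt[2] * 1e3:.2f} MHz")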
A series resistor is added to the low-frequency terminal; its value is tuned by flattening the sensor signal response. The SETs are DC-biased by 100 μV and read out by a transimpedance amplifier and an analog-to-digital converter. § DATA AVAILABILITY The data is available from the authors upon reasonable request. § ACKNOWLEDGEMENTS This work has been funded by the German Research Foundation (DFG) under Germany's Excellence Strategy - Cluster of Excellence "Matter and Light for Quantum Computing" (ML4Q) EXC 2004/1 - 390534769 and by the Federal Ministry of Education and Research under Contract No. FKZ: 13N14778. Project Si-QuBus received funding from the QuantERA ERA-NET Cofund in Quantum Technologies implemented within the European Union's Horizon 2020 Programme. The device fabrication has been done at HNF - Helmholtz Nano Facility, Research Center Juelich GmbH <cit.>. § AUTHOR CONTRIBUTIONS T.S., M.V. and L.V. conducted the experiments; T.S., M.V., T.O., L.V. and L.R.S. analysed the data. J.T. and R.X. fabricated the device. S.T. wrote the e-beam layers. Ł.C. derived the motional narrowing effect of nuclear spins. L.R.S. designed and supervised the experiment. L.R.S. and H.B. provided guidance to all authors. T.S., M.V., L.V., T.O. and L.R.S. wrote the manuscript, which was commented on by all other authors. § COMPETING INTERESTS L.R.S. and H.B. are co-inventors of patent applications that cover conveyor-mode shuttling and its applications. L.R.S. and H.B. are founders and shareholders of ARQUE Systems GmbH. The other authors declare no competing interest.
http://arxiv.org/abs/2307.04158v1
20230709121449
Four-loop splitting functions in QCD -- The gluon-to-quark case
[ "G. Falcioni", "F. Herzog", "S. Moch", "A. Vogt" ]
hep-ph
[ "hep-ph" ]
http://arxiv.org/abs/2307.07414v1
20230714154214
An Embedded Auto-Calibrated Offset Current Compensation Technique for PPG/fNIRS System
[ "Sadan Saquib Khan", "Sumit Kumar", "Benish Jan", "Laxmeesha Somappa", "Shahid Malik" ]
eess.SY
[ "eess.SY", "cs.SY" ]
An Embedded Auto-Calibrated Offset Current Compensation Technique for PPG/fNIRS System. Sadan Saquib Khan, Sumit Kumar, Benish Jan, Laxmeesha Somappa, and Shahid Malik. Sadan Saquib Khan, Sumit Kumar, Benish Jan, and Shahid Malik are with the Centre for Sensors, Instrumentation and Cyber Physical System Engineering (SeNSE), Indian Institute of Technology Delhi (IIT Delhi). Laxmeesha Somappa is with the Department of Electrical Engineering, Indian Institute of Technology Bombay (IIT Bombay). Usually, the current generated by the photodiode in proportion to the oxygenated blood in photoplethysmography (PPG) and functional near-infrared spectroscopy (fNIRS) based recording systems is small compared to the offset current. The offset current is the combination of the dark current of the photodiode, the current due to ambient light, and the current due to the light reflected from fat and skull <cit.>. The relatively large value of the offset current limits the amplification of the signal current and affects the overall performance of PPG/fNIRS recording systems. In this paper, we present a mixed-signal auto-calibrated offset current compensation technique for PPG and fNIRS recording systems. The system auto-calibrates the offset current, compensates for it using a dual discrete-loop technique, and amplifies the signal current. Thanks to the amplification, the system provides better sensitivity. A prototype of the system is built and tested for PPG signal recording. The prototype is developed for a 3.3 V single supply. The results show that the proposed system is able to effectively compensate for the offset current. § INTRODUCTION Optical sensors are used in many applications thanks to their unique features, such as immunity to electromagnetic interference, small size, light weight, and high sensitivity <cit.>. They are particularly useful in biomedical applications since they non-invasively provide information about biomarkers. Various biomedical diagnostic devices such as oximeters, cerebral oximeters, and fNIRS-based brain-imaging systems utilize optical sensors <cit.>. In addition, they are also being explored for the non-invasive detection of breast cancer <cit.>. However, optical sensors suffer from offset current. Typically, a photodiode is used to convert photons into current. The offset current may arise from ambient light, the dark current, and the light reflected from fat, bones, and the skull <cit.>. Usually, the dark current is fixed and can be compensated. However, the offset current due to ambient light and reflection varies with time and can be significantly higher than the current due to the actual signal <cit.>.
The current from the photodiode is converted into an equivalent voltage using a trans-impedance amplifier. The value of the feedback resistor is selected appropriately to convert the current into an output voltage. A high-value feedback resistor is desirable to amplify the signal and keep the current noise at a low level. However, the large value of the offset current limits the amplification. To compensate for the effect of the offset current on the output voltage of the trans-impedance amplifier, various approaches have been explored. A current source using a digital-to-analog converter (DAC) for offset current compensation is integrated with many system-on-chip (SoC) based PPG/fNIRS systems. Such current-DAC-based SoC systems are able to compensate wide-range offset currents from 1 μA to 128 μA with a 7-bit resolution. Offset current compensation techniques based on a negative feedback loop (analog or digital) that nullifies the DC voltage at the output of the amplifier are reported in <cit.>. This continuous nullification of the DC signal affects the shape of the signal, which can be misinterpreted as the measurand. The problem is especially critical for biomedical signals such as PPG and functional near-infrared spectroscopy (fNIRS) for brain-computer interfacing applications. In addition, the implementation of an analog low-pass filter in the feedback path to nullify the offset current introduces a delay in the sensor signal <cit.>. Since the cut-off frequency of the filter is near DC, it introduces a delay causing a phase error in the PPG and fNIRS signals <cit.>. Typically, the cutoff frequency varies from 0.1 Hz to 5 Hz for PPG and fNIRS. Due to the extremely low cutoff frequency, the analog low-pass filter may influence the frequencies present in the signal of interest <cit.>. In this paper, we present a dual-loop auto-compensation technique for wide-range offset current compensation. The circuit utilizes voltage feedback to compensate for offset currents smaller than 1 μA. For a wider range of currents, the circuit utilizes a digitally controlled current source <cit.>. § THE PROPOSED SYSTEM The block diagram of the proposed discrete offset compensation loop-based system is shown in Fig. <ref>. The system consists of a trans-impedance amplifier, with a digital potentiometer R_F for programmable gain. A programmable current source i_DAC is used to compensate for the offset current from the photodiode. The proposed system utilizes a dual-cancellation technique to effectively remove the offset from the output voltage V_out of the trans-impedance amplifier. A low-pass filter is used to extract the DC offset from the voltage V_out. The DC-offset voltage V_dc is acquired using an analog-to-digital converter. The offset-free output is further amplified by a variable-gain non-inverting amplifier OA_2. The output voltage V_sig of this amplifier is proportional to the AC signal. The frequency of the PPG and fNIRS signals is usually between 0.05 Hz and 5 Hz. The selection of an appropriate cut-off frequency for the compensation of the DC offset is important for the effective design of the overall embedded sensing system. For instance, ideally, a cut-off frequency of less than 0.05 Hz is preferred for DC-offset compensation. However, the compensation time will increase accordingly. A continuous compensation technique will also affect the shape morphology of the PPG/fNIRS signals.
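As a rough illustration of the dual-loop idea described above (and detailed in the next paragraphs), the following simulation nulls a large offset current first with a quantized current DAC and then trims the residual via V_REF. All component values, the DAC resolution, and the signal amplitudes are assumptions for illustration only, not the prototype's actual values:

import numpy as np

# Illustrative simulation of the dual-loop compensation: a large DC offset
# current plus a small AC (PPG-like) current is nulled first by a quantized
# current DAC (coarse loop), then the residual is trimmed via V_REF (fine loop).
t = np.arange(0.0, 10.0, 0.01)                      # 10 s sampled at 100 Hz
i_signal = 50e-9 * np.sin(2 * np.pi * 1.2 * t)      # 50 nA PPG-like component at 1.2 Hz
i_offset = 4.73e-6                                  # 4.73 uA offset current (assumed)

R_cal, R_F = 10e3, 1e6                              # calibration / high-gain feedback resistors
dac_lsb = 1e-7                                      # current-DAC resolution: 0.1 uA (assumed)

# Coarse loop: estimate offset from the low-pass (mean) output at low gain
v_dc = np.mean((i_signal + i_offset) * R_cal)
i_dac = np.round((v_dc / R_cal) / dac_lsb) * dac_lsb

# Fine loop: the residual appears as DC at high gain and is nulled via V_REF
v_ref_trim = np.mean((i_signal + i_offset - i_dac) * R_F)
v_out = (i_signal + i_offset - i_dac) * R_F - v_ref_trim

print(f"residual offset current: {(i_offset - i_dac) * 1e9:.0f} nA")
print(f"output DC after trim:    {np.mean(v_out) * 1e6:.2f} uV")
print(f"AC swing preserved:      {(v_out.max() - v_out.min()) * 1e3:.1f} mV")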
In this paper, the cut-off frequency of the low-pass filter is chosen to be around 1 Hz. The proposed system utilizes a discrete offset compensation approach by monitoring the offset voltage. The loop will automatically compensate for the offset voltage if the DC voltage crosses a threshold voltage limit. The flow of operation of the proposed system is shown in Fig. <ref>. The auto-calibration phase starts with opening the switch S_1 and connecting the switch S_2 to V_cm. During this phase, both the AC and DC currents through the photodiode are amplified by the trans-impedance amplifier. The average DC value of V_out is extracted using the low-pass filter. The voltage V_dc is acquired using an analog-to-digital converter (ADC 1), as shown in Fig. <ref>. The DC current is estimated from the voltage V_dc and the resistor R_F. Next, the magnitude of the digitally controlled current source (i_DAC) is tuned to compensate for the DC current. Once the current value is set, the switch S_1 is closed. Consequently, the DC offset current from the photodiode is sunk by i_DAC. The output voltage of the trans-impedance amplifier is then proportional to the AC signal. The DC offset is thus compensated by the digitally controlled current source, whose magnitude is controlled using a digital potentiometer. However, the digital potentiometer suffers from limited resolution and tolerance. Once the value of i_DAC is set to compensate for the DC offset, the gain of the trans-impedance amplifier is increased using the digital potentiometer R_F. However, due to the finite resolution of the current source, the uncompensated DC current is also amplified, which results in a DC offset at the output of the trans-impedance amplifier. To compensate for that, in this letter, we incorporate a second offset-compensation loop by controlling the voltage V_REF of the trans-impedance amplifier. § EXPERIMENTAL SETUP AND RESULTS A prototype of the proposed embedded auto-calibrated system is fabricated and tested. The components used for the prototyping are tabulated in Table <ref>. The low-voltage, low-power operational amplifier (OPA333) is used to implement the trans-impedance amplifier, the voltage follower, and the non-inverting amplifier. A microcontroller with an integrated ADC and DAC is used. The resolution of the ADC can be tuned from 12 to 16 bits with oversampling and decimation. A digital potentiometer with 8-bit resolution and 1 MΩ full-scale value is used. The amplitude of the current source can be tuned from 1 μA to 10 mA. This study's primary goal was to look into the DC offset in the PPG signal. In order to acquire the PPG signal, infrared light is shone on the fingertip and reflected to a photodiode, so that the reflected light carries information about the blood and oxygen flow. From Fig. <ref>a, we can clearly visualize the DC offset present in the signal, which limits the gain in the succeeding stage. Fig. <ref>(b) shows the DC-offset-compensated PPG signals obtained by acquiring a stable DC value using the proposed mixed-signal system. The DC offset in the PPG signal has been eliminated to a very high degree, with no delay or shape morphing of the PPG signal. § CONCLUSION A mixed-signal loop-based approach for acquiring high-fidelity PPG/fNIRS signals has been demonstrated. The proposed architecture overcomes the issues of large delay and shape morphing of the signals associated with the traditional continuous offset cancellation technique.
Finally, PPG measurement results acquired with the proposed system are shown, and the offset cancellation with high-fidelity data acquisition is verified. b1 Opto-Mechatronic Systems Handbook: Techniques and Applications, CRC Press. b2 Congcong Huo, Gongcheng Xu, Wenhao Li, Hui Xie, Tengyu Zhang, Ying Liu, Zengyong Li, A review on functional near-infrared spectroscopy and application in stroke rehabilitation, Medicine in Novel Technology and Devices, Volume 11, 2021, 100064, ISSN 2590-0935, https://doi.org/10.1016/j.medntd.2021.100064. b3 J. Kim, J. Kim, and H. Ko, “Low-power photoplethysmogram acquisition integrated circuit with robust light interference compensation,” Sensors, vol. 16, no. 1, 2016. [Online]. Available: https://www.mdpi.com/1424-8220/16/1/46. b4 A. K. Y. Wong, K.-P. Pun, Y.-T. Zhang, and K. N. Leung, “A low-power CMOS front-end for photoplethysmographic signal acquisition with robust DC photocurrent rejection,” IEEE Transactions on Biomedical Circuits and Systems, vol. 2, no. 4, pp. 280–288, 2008. b5 G. Wang, M. Atef, and Y. Lian, “Towards a continuous non-invasive cuffless blood pressure monitoring system using PPG: Systems and circuits review,” IEEE Circuits and Systems Magazine, vol. 18, no. 3, pp. 6–26, 2018. b6 G. Sciortino, A. Ragni, A. De la Cadena, M. Sampietro, G. Cerullo, D. Polli, and G. Ferrari, “Four-channel differential lock-in amplifiers with autobalancing network for stimulated Raman spectroscopy,” IEEE Journal of Solid-State Circuits, vol. 56, no. 6, pp. 1859–1870, 2021. b7 Q. Lin, S. Song, R. Van Wegberg, W. Sijbers, D. Biswas, M. Konijnenburg, C. Van Hoof, F. Tavernier, and N. Van Helleputte, “A 134 dB dynamic range noise shaping slope light-to-digital converter for wearable chest PPG applications,” IEEE Transactions on Biomedical Circuits and Systems, vol. 15, no. 6, pp. 1224–1235, 2021. b8 Q. Lin, J. Xu, S. Song, A. Breeschoten, M. Konijnenburg, C. Van Hoof, F. Tavernier, and N. Van Helleputte, “A 119 dB dynamic range charge counting light-to-digital converter for wearable PPG/NIRS monitoring applications,” IEEE Transactions on Biomedical Circuits and Systems, vol. 14, no. 4, pp. 800–810, 2020. b9 Castaneda, D., Esparza, A., Ghamari, M., Soltanpur, C., and Nazeran, H. (2018). A review on wearable photoplethysmography sensors and their potential future applications in health care. International Journal of Biosensors and Bioelectronics, 4(4), 195–202. https://doi.org/10.15406/ijbsbe.2018.04.00125. b10 Han S, Roh D, Park J, Shin H. Design of Multi-Wavelength Optical Sensor Module for Depth-Dependent Photoplethysmography. Sensors. 2019; 19(24):5441. https://doi.org/10.3390/s19245441. b11 N. De Pinho Ferreira, C. Gehin, B. Massot, A Review of Methods for Non-Invasive Heart Rate Measurement on Wrist, IRBM, Volume 42, Issue 1, 2021, Pages 4-18, ISSN 1959-0318, https://doi.org/10.1016/j.irbm.2020.04.001. b12 Ysehak Abay T, Shafqat K, Kyriacou PA. Perfusion Changes at the Forehead Measured by Photoplethysmography during a Head-Down Tilt Protocol. Biosensors. 2019; 9(2):71. https://doi.org/10.3390/bios9020071. b13 Charlton, Peter and Kyriacou, Panicos and Mant, Jonathan and Alastruey, Jordi. (2020). Acquiring Wearable Photoplethysmography Data in Daily Life: The PPG Diary Pilot Study. Engineering Proceedings. 2. 80. 10.3390/ecsa-7-08233. b14 Bortolotto, L.A.; Blacher, J.; Kondo, T.; Takazawa, K.; Safar, M.E. Assessment of vascular aging and atherosclerosis in hypertensive subjects: Second derivative of photoplethysmogram versus pulse wave velocity. Am.
J. Hypertens. 2000, 13, 165–171. b15 Poon, C.; Zhang, Y. Cuff-less and noninvasive measurements of arterial blood pressure by pulse transit time. In Proceedings of the 27th Annual International Conference of the Engineering in Medicine and Biology Society (IEEE-EMBS 2005), Shanghai, China, 1–4 September 2005; pp. 5878–5880. b16 Sant, L.; Fant, A.; Torta, P.; Dorrer, L. A system containing an ambient light and a proximity sensor with intrinsic ambient light rejection. In Proceedings of the 38th European Solid-State Circuit Conference (ESSCIRC 2012), Bordeaux, France, 17–21 September 2012; pp. 97–100. b17 He, D.; Morgan, S.P.; Trachanis, D.; van Hese, J.; Drogoudis, D.; Fummi, F.; Hayes-Gill, B.R. A single-chip CMOS pulse oximeter with on-chip lock-in detection. Sensors 2015, 15, 17076–17088. b18 R. Rawassizadeh, B. A. Price, and M. Petre, “Wearables: Has the age of smartwatches finally arrived?,” Commun. ACM, vol. 58, no. 1, pp. 45–47, 2014. b19 L. Nilsson et al., “Combined photoplethysmographic monitoring of respiration rate and pulse: A comparison between different measurement sites in spontaneously breathing subjects,” Acta Anaesthesiologica Scandinavica, vol. 1, no. 9, pp. 1250–1257, 2007. b20 M. Kramer et al., “Wearable pulse oximetry measurements on the torso, arms, and legs: A proof of concept,” Mil. Med., vol. 182, pp. 92–98, 2017. b21 A. Kiruthiga et al., “Reflectance pulse oximetry for blood oxygen saturation measurement from diverse locations - a preliminary analysis,” in Proc. IEEE Int. Symp. Med. Meas. Appl., 2018, pp. 1–6. b22 A. A. Alian and K. H. Shelley, “Photoplethysmography,” Best Pract. Res. Clin. Anaesthesiol., vol. 28, no. 4, pp. 395–406, 2014. b23 D. G. Wyser et al., “Wearable and modular functional near-infrared spectroscopy instrument with multidistance measurements at four wavelengths,” Neurophotonics, vol. 4, 2017, Art. no. 041413. b24 Q. Lin et al., “Wearable multiple modality bio-signal recording and processing on chip: A review,” IEEE Sensors J., vol. 21, no. 2, pp. 1108–1123, Jan. 2021. b25 R. M. Gagliardi and S. Karp, Optical Communications, Wiley, second edition, 1995. b26 G. P. Agrawal, Fiber-Optic Communications Systems, Wiley, third edition, 1992. b27 Demirtaş, M., Erişmiş, M.A. and Güneş, S. Analysis and design of a transimpedance amplifier based front-end circuit for capacitance measurements. SN Appl. Sci. 2, 280 (2020). https://doi.org/10.1007/s42452-020-2104-x. b28 Petterson M T, Begnoche V L, Graybeal J M. The effect of motion on pulse oximetry and its clinical significance. Anaesthesia and Analgesia, 2007, pp. 78–84. b29 Fodor L, Ullman Y, Elman M. Aesthetic Applications of Intense Pulsed Light. London: Springer London, 2011, p. 133. b30 Bashkatov A N, Genina E A, Kochubey V I, Tuchin V V. Optical properties of human skin, subcutaneous and mucous tissues in the wavelength range from 400 to 2000 nm. Journal of Physics D: Applied Physics, 2005, pp. 2543–2555. b31 Rovas G, Bikia V, Stergiopulos N. Quantification of the Phenomena Affecting Reflective Arterial Photoplethysmography. Bioengineering (Basel). 2023 Apr 10;10(4):460. doi: 10.3390/bioengineering10040460. PMID: 37106647; PMCID: PMC10136360. b32 Tamura, T., Maeda, Y., Sekine, M., and Yoshida, M. (2014). Wearable Photoplethysmographic Sensors - Past and Present. Electronics, 3, 282-302. b33 K. Budidha and P. A. Kyriacou, “In vivo investigation of ear canal pulse oximetry during hypothermia,” Journal of Clinical Monitoring and Computing, vol. 32, no. 1, pp. 97–107, 2018.
https://doi.org/10.1007/s10877-017-9975-4. b34 S. Chatterjee, K. Budidha, and P. A. Kyriacou, “Investigating the origin of photoplethysmography using a multiwavelength Monte Carlo model,” Physiological Measurement, vol. 41, no. 8, p. 084001, 2020. https://doi.org/10.1088/1361-6579/aba008. b35 A. V. Moço, S. Stuijk, and G. De Haan, “New insights into the origin of remote PPG signals in visible light and infrared,” Scientific Reports, vol. 8, no. 1, pp. 1–15, 2018. https://doi.org/10.1038/s41598-018-26068-2.
http://arxiv.org/abs/2307.04777v1
20230709223047
MentalHealthAI: Utilizing Personal Health Device Data to Optimize Psychiatry Treatment
[ "Manan Shukla", "Oshani Seneviratne" ]
cs.LG
[ "cs.LG", "cs.CY" ]
§ ABSTRACT Mental health disorders remain a significant challenge in modern healthcare, with diagnosis and treatment often relying on subjective patient descriptions and past medical history. To address this issue, we propose a personalized mental health tracking and mood prediction system that utilizes patient physiological data collected through personal health devices. Our system leverages a decentralized learning mechanism that combines transfer and federated machine learning concepts using smart contracts, allowing data to remain on users' devices and enabling effective tracking of mental health conditions for psychiatric treatment and management in a privacy-aware and accountable manner. We evaluate our model using a popular mental health dataset that demonstrates promising results. By utilizing connected health systems and machine learning models, our approach offers a novel solution to the challenge of providing psychiatrists with further insight into their patients' mental health outside of traditional office visits. § INTRODUCTION Mental health conditions such as depression and anxiety are some of the most challenging medical problems to diagnose and treat. Current treatment guidelines for these disorders primarily utilize subjective assessments, relying on patient self-report or clinician evaluation to inform clinical decisions. As such, the lack of objective markers for clinical outcomes presents a significant bottleneck in psychiatry. Furthermore, a patient's mood or emotions may change over time, but clinicians only have access to a patient's data at the time of the visit, leading to a potentially biased sampling of the patient's mental state. To address this, collecting data from the patient over a long period would be ideal for effective diagnosis and treatment. However, collecting such data is also challenging due to privacy concerns. Connected health applications enable data to be generated and stored in a decentralized manner, where the data may reside across devices. A common challenge in health informatics in federated and decentralized settings is that training and test data are not independently and identically distributed (non-IID), which is especially true in scenarios that aim to predict the mental health of individuals using a combination of medical and environmental signals. Because health data is typically not identically distributed, the generalization performance tends to be worse, and lower accuracy can result from overlooking the distribution shift between the training and testing data <cit.>. More importantly, since non-IID data in healthcare applications comes from different clients, protecting data privacy is crucial in decentralized learning settings <cit.>. Furthermore, applying connected health technologies in a mental health population poses multiple problems <cit.>. First is the concern about data security and privacy. Studies have shown that mental health populations typically consider their data sensitive and vary in their willingness to share this information due to perceived mental health stigmas. Surveys have shown that 65% of patients with mental health disorders are unlikely to share their data with their psychiatrists <cit.>.
If psychiatrists aim to rely on patient history, studies <cit.> have shown patient histories to be only 62% accurate, leading to psychiatric misdiagnoses as high as 65.9% for major depressive disorders and 85.8% for panic disorders. Therefore, a technological solution is necessary to provide psychiatrists with health insights without collecting raw data from the patient's smart health devices. Second, current models do not account for the granularity of mental health disorders. As explained in the American Psychiatric Association's Clinical Practice Guidelines <cit.>, patient emotions are subject to rapid changes within the span of a day or a week, and elements such as sleep or diet can lead to quick changes in mood. While many have utilized information from Electronic Health Records (EHR) to predict mental health crises <cit.>, these models overlook granular patient changes. Therefore, they cannot generate a patient baseline (in fact, getting data through facial expressions or EHR systems can lead to biased results). Understanding the immediate effects of medication, such as antidepressants, is crucial for psychiatrists and requires granular patient data that cannot be retrieved otherwise. Currently, the most feasible way to collect this granular patient data is through a smartphone and a patient's health devices. This method, however, has the issue of unequal data streams. Different patients have different personal health devices. For example, while one patient may have five devices, another may only have one. While training a model on the patients with five devices may lead to better results, the portion of the patient population that can provide input to this model will decrease (as not many patients have so many personal health devices). Therefore, there is a need to obtain insights even when the feature types are unequal from patient to patient. We present a decentralized federated learning algorithm called MentalHealthAI to alleviate these challenges. First, MentalHealthAI uses on-device machine learning to prevent data from leaving the patient's smartphone. In addition, smart contracts are utilized in this framework to elect an aggregator, thereby creating a decentralized aggregator instead of a traditional centralized server. A self-executing piece of code, called a smart contract, can encode rules that will be executed in a decentralized manner on a blockchain <cit.>. The data remains on the patient's device in each epoch, and only the model parameters are transferred from that device to the aggregator. Second, as each smartphone may collect a different set of patient features, MentalHealthAI utilizes a decision-tree-based methodology to derive model insights even when features and labels are not necessarily uniform. § SYSTEM DESIGN AND IMPLEMENTATION [Figure: System Architecture] At its core, the current system is a decentralized-learning infrastructure that utilizes physiological data to predict patient moods and therefore provides mental health insights to a patient's psychiatrist without the requirement of a uniform set of features (as is necessary for typical machine learning algorithms). The overall architecture can be found in <Ref>, and its specific features are described below. Clients X, Y, and Z each represent different patients. Each patient owns several IoT devices, indicated by the different data streams A, B, and C. Each data stream has the same dimensions but different content (A may be heart rate data, B may be blood pressure, and C may be skin temperature).
The final model is created by adding the union of models trained from different combinations of data streams to the random forest decision tree classification system. During the evaluation, we use the features in the POPANE dataset <cit.> as individual data streams to simulate this concept. Personal health data do not contain uniform features from patient to patient. The common problem is that the devices used by different individuals are different. Traditional machine learning is limited in cases where one patient has data collected from many disparate data streams, such as heart rate, blood pressure, electroencephalogram (EEG), and electrocardiogram (ECG), while another patient has only one (for example, just the heart rate). This limitation exists because only the intersection of feature types is considered rather than every feature type present. For each patient, we assume a smartphone acts as the gateway for the patient's IoT devices, as depicted in <Ref>. We utilized the POPANE dataset <cit.> for the simulation, where we divided the patient population based on the type and number of data streams each patient has. For example, we place patients with six of the same data streams (set A) in a different cohort than patients with only three data streams (set B). Here, data streams can refer to heart rate and blood pressure data. However, if set B is a subset of set A, A ∩ B can be added to cohort B's training set (as the data streams used in B and A ∩ B are the same). Now, consider a patient population where the number of data streams a patient has varies from 1 to 6. For any patient with more than one data stream, a power set excluding the empty set is generated, as shown in <Ref>. For example, given a patient with data streams d = {A, B, C}, P(d) = [{}, {A}, {B}, {A, B}, {C}, {A, C}, {B, C}, {A, B, C}]; excluding {}, we divert each element of this power set (representing a set of data streams) into a separate cohort. This process maximizes the utility of patients with multiple data streams, as it maximizes the amount of data found in each cohort (in comparison to simply dividing the patient population based on the number of data streams present). We then train multiple machine learning models on these cohorts, where one model is trained from the data of one cohort (a code sketch of this construction is given below). Note that the labels are unaltered, regardless of the feature subset. When combined with MentalHealthAI's decentralized AI architecture, it is also important to note that multiple smartphones will be selected as aggregators but will be training different models with different training subsets. Based on the patient population and the available data streams at a given time, certain models will be more accurate than others, and this relationship can change frequently. Furthermore, every patient's mood may differ, as patients have different baseline emotion levels. While models can predict a large portion of the population successfully, they may not be accurate enough for a specific patient. <Ref> shows the model generation process for each client based on the available datasets. Using a smart contract deployed on a blockchain as the “secure model aggregator,” the client interacts with it by emitting events to indicate learning has finished. The corresponding smart contract code is depicted in <Ref>. The smart contract employs a voting process to elect the next “leader” to perform model aggregation. As each model has been trained on different feature subsets, decentralized aggregation occurs independently for each model.
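Before turning to the aggregation example below, here is a minimal sketch of the power-set cohort construction just described. The data layout (a dictionary of named feature columns per patient) is a hypothetical stand-in for the on-device storage, not the paper's implementation:

from itertools import combinations

# Sketch of the power-set cohort construction: every non-empty subset of a
# patient's data streams contributes those feature columns (labels unchanged)
# to the cohort for that subset.
def nonempty_subsets(streams):
    streams = sorted(streams)
    for r in range(1, len(streams) + 1):
        yield from combinations(streams, r)

def build_cohorts(patients):
    """patients: {patient_id: (streams_dict, labels)}; returns {subset: [(features, labels)]}."""
    cohorts = {}
    for pid, (streams, labels) in patients.items():
        for subset in nonempty_subsets(streams):
            features = [streams[s] for s in subset]   # feature columns for this subset
            cohorts.setdefault(subset, []).append((features, labels))
    return cohorts

patients = {
    "X": ({"A": [72, 74], "B": [120, 118], "C": [36.5, 36.6]}, [5, 6]),  # 3 streams -> 7 cohorts
    "Y": ({"A": [80, 79]}, [4, 4]),                                      # 1 stream  -> 1 cohort
}
cohorts = build_cohorts(patients)
print(sorted(cohorts))            # cohort ('A',) pools data from both X and Y
print(len(cohorts[("A",)]))       # 2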
For example, if we have three models trained on the feature sets [A], [A, B], and [A, B, C], each of these models will be aggregated with other models trained on the same set of features from a different patient on a different client. If the patient's smartphone is not elected as an aggregator, the smart contract will direct the model parameters from the patient's smartphone to the smartphone elected as the model aggregator. The clients interact with the smart contract as shown in <Ref>. Once the clients have finished training, they notify the smart contract and are considered for the next “leader” election. Each client also monitors events emitted from the smart contract to see if it is elected as an aggregator. If it is, then it will receive models from other smartphones. Once the model parameters have been received, the “leader” client selects, utilizing a decision tree, the best prediction model for the patient, as shown in <Ref>. It collects a mapping from the smart contract with the data stream as key and the aggregator smart contract address as value. For example, assume a patient has three devices/data streams. This patient's models include every model trained on the following data stream combinations: [{A}, {B}, {A, B}, {C}, {A, C}, {B, C}, {A, B, C}]. Then, a calibration period is set to collect new patient emotional features/labels (in <Ref>, it is set to 7 days). Each set of features in our simulation contributes to the models generated daily. The random forest decision tree (such as the one shown in <Ref>) then uses these models to predict the patient's emotional labels (a code sketch of this selection step is given below). The decision tree is run on the patient's smartphone after the data-stream-based models have been generated and distributed back to the individual nodes from the aggregator. § EVALUATION AND RESULTS [Figure: Learning Results After Leader Election and Model Aggregation. Nodes refer to the other smartphones contributing to the combined model.] We evaluated our system using a mental health dataset named POPANE <cit.>. The POPANE dataset contains a set of 142 patients whose ECG, Electrodermal Activity (EDA), Skin Temperature (ST), Respiration (Resp), Systolic Blood Pressure (SBP), and Diastolic Blood Pressure (DBP) have been measured and labeled with positive and negative affect, rated on a scale of 0-10, with 0 indicating negative affect and 10 indicating positive affect. We chose this dataset primarily because it closely matches our use case. A personal health device can measure each of the physiological parameters given above, and training on such a dataset can provide insight into the utility of such a system in a much larger population. Secondly, the data provided is collected on a second-to-second basis, similar to the collection rates found in many current IoT devices, such as smartwatches that measure heart rate or ECGs on the patient's skin. Finally, a major advantage of utilizing the POPANE dataset is its non-IID distributed data, as seen in <Ref>. The figure clearly shows that the affect is not equally distributed throughout the dataset (and is not likely to represent the standard population), which is more akin to what may be present in real-world situations, where random samples proportionate to the overall population are unlikely. Thus, through this dataset, we aim to investigate the resilience of MentalHealthAI in non-IID settings.
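As a sketch of the calibration-and-selection step described earlier in this section: each feature-subset model emits an affect prediction, and a random forest is fit on these stacked predictions against the labels collected during the calibration window. The model objects and data layout are hypothetical placeholders, not the paper's code:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch of the on-device model selection: stack each feature-subset model's
# affect prediction into one feature vector per sample, then fit a random
# forest on the calibration labels.
def stacked_predictions(subset_models, data_by_subset):
    """subset_models: {subset: model with .predict}; returns shape (n_samples, n_models)."""
    keys = sorted(subset_models)
    return np.column_stack([subset_models[k].predict(data_by_subset[k]) for k in keys])

def fit_selector(subset_models, calib_data, calib_labels):
    X = stacked_predictions(subset_models, calib_data)
    selector = RandomForestClassifier(n_estimators=100, random_state=0)
    selector.fit(X, calib_labels)          # labels gathered during the 7-day window
    return selector

# Inference after calibration:
# y_hat = selector.predict(stacked_predictions(subset_models, new_data))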
[Figure: Frequency of Various Affects in the POPANE Dataset <cit.>] First, we assessed the training results from a model run on a centralized server. The model was a simple Artificial Neural Network (ANN) with three dense layers with softmax activation, as shown in <Ref> (a minimal code sketch is also given below). We decided upon the activation function based on favorable learning results. As mentioned, we used six physiological features to assess the patient's affect, rated from 0-10, which serves as the output. Each physiological feature is considered a separate data stream for this evaluation, containing data from different IoT devices. We split the data into a 70-30% train-test split and used a sparse categorical cross-entropy loss function due to the categorical nature of the output labels. Multiple checkpoints were implemented, such as early stopping (which stops training if accuracy does not improve over multiple epochs in a row) and learning-rate adjustment (which lowers the learning rate by a factor of ten if the accuracy does not improve). We ran the model for 107 epochs and stopped the learning process because there was no change in training accuracy. After multiple trials, this epoch value led to the best learning result for our model. The overall accuracy was approximately 86%. [Figure: A Simplified View of the Neural Network Model Architecture] Based on these results, we can conclude that there is a link between physiological parameters and a patient's emotional state. We chose accuracy as our primary evaluation metric to ensure the model is clinically viable. We then evaluated the decentralized learning aspects of this system. Since we could not acquire physical devices to test the model's performance in the real world, we evaluated the decentralized learning components through simulation. In this simulation, we assume a consortium of 142 patients modeled using the POPANE dataset, each with data collected through IoT devices. A global model updates itself based on data from each patient to form the final trained model. We trained the models in the same fashion as the ANN discussed above. <Ref> shows the test-set accuracy after training on each node. As shown in <Ref>, successful learning can happen in a discontinuous situation. While the nodes had an initial training accuracy of 51%, this increased immediately to 86% after training on two additional nodes, confirming that MentalHealthAI can obtain high accuracy even in distributed settings. However, note that such results may not be obtainable in real-world conditions. Primarily, the data in the POPANE dataset have been obtained in a controlled environment rather than during regular day-to-day activities. Therefore, if truly deployed in a community, there is a greater chance of false positives, false negatives, and inaccurate readings from IoT devices. However, given accurate input data, we assert that MentalHealthAI can be deployed in such a community setting. We then evaluated the decision tree aspect of the system to determine the model accuracy in non-ideal settings. We divided the 142 patients in the POPANE dataset into four patient cohorts. Each cohort represented patients with a certain number of IoT devices (1 device, 2 devices, 3 devices, or 4 devices). Similar to what we explained in the methods section, we extracted data streams from each cohort based on the power set of their features. For example, in the cohort with 2 data streams (A and B), three sets of data were created: {A}, {B}, {A, B}, each with the same labels.
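A minimal version of the centralized baseline described above might look as follows. The text specifies only the three dense layers with softmax activation, the sparse categorical cross-entropy loss, and the two callbacks; the hidden-layer widths and the optimizer are assumptions, and X_train/y_train are placeholders:

import tensorflow as tf

# Sketch of the centralized baseline: three dense layers with softmax activation,
# sparse categorical cross-entropy, early stopping and learning-rate reduction.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(6,)),               # six physiological features
    tf.keras.layers.Dense(64, activation="softmax"),
    tf.keras.layers.Dense(32, activation="softmax"),
    tf.keras.layers.Dense(11, activation="softmax"), # affect classes 0..10
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="accuracy", patience=5,
                                     restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(monitor="accuracy", factor=0.1, patience=3),
]
# history = model.fit(X_train, y_train, epochs=107,
#                     validation_split=0.3, callbacks=callbacks)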
We repeated this process for each patient cohort. Models were trained on the following data stream combinations: {ST}, {ECG}, {ST, ECG}, {EDA}, {ST, EDA}, {ECG, EDA}, {ST, ECG, EDA}, {Resp}, {ST, Resp}, {ECG, Resp}, {ST, ECG, Resp}, {EDA, Resp}, {ST, EDA, Resp}, {ECG, EDA, Resp}, and {ST, ECG, EDA, Resp}. Next, we simulated the “calibration” period, where each model generated emotion predictions based on new data to which the models had not been exposed. These predictions then served as the input to the random forest model, which provided the emotional predictions based on the predictions of the previously trained models. <Ref> depicts the decision tree for a single client (i.e., a smartphone belonging to a patient). We simulated a standard baseline solution, MentalHealthAI-Baseline, to the above problem as a means of comparison. As a typical machine learning model cannot utilize different feature sets, the most optimized results will likely only come from the cohort with 4 data streams (35 patients). We trained a standard ANN with the same hyperparameters as above on this data set, with an overall accuracy of 86%. In comparison, MentalHealthAI-Fed achieved an overall accuracy of 80% while drawing on the entire patient population, a substantial improvement in applicability. By utilizing unequal feature sets through multiple model combinations and a random forest model, one can improve learning results compared to a model that requires uniform features. While this accuracy level is lower than that of the original baseline model, it is important to acknowledge the differences in data. The baseline ANN model simulated an ideal world with 142 willing patients having access to 6 separate IoT devices. However, finding 142 patients with more than three personal health devices in the real world is intuitively infeasible for many reasons, such as cost and access. However, through this unique MentalHealthAI framework, we demonstrate that high accuracy is achievable even in less-than-ideal settings. We believe that this occurs for multiple reasons. First, MentalHealthAI utilizes models without noise and irrelevant features, making them less susceptible to their effects. Second, models trained on fewer features can succeed by having access to a greater number of patients. Third, a random forest model can select the best model for the patient, a choice that can change over time. Finally, MentalHealthAI was compared to current state-of-the-art emotion prediction systems and machine learning methods in adjacent domains. As shown in <Ref>, it is clear that compared to other past AI models, MentalHealthAI can produce greater accuracy with both the baseline ANN model and the decentralized decision tree architecture. Note that due to the novelty of the POPANE dataset at the time we developed our model, we were unable to compare our results to similar models that may have been trained on the same data. § RELATED WORK Federated learning, introduced by McMahan et al. <cit.>, enables learning from decentralized data sources, where clients volunteer to participate, i.e., they can join or leave the system whenever they want <cit.>. A variant of federated learning in blockchain settings is swarm learning <cit.>, where a smart contract elects a node to perform model updates at each epoch instead of a central aggregator. This selected node aggregates and broadcasts the model parameters to all other nodes. We drew inspiration from this methodology in the work presented in this paper.
However, swarm learning nodes are essentially large and powerful hospital servers utilized in applications such as leukemia and tuberculosis prediction <cit.>. Our work involves learning in a much more decentralized setting that leverages IoT devices and smartphones with far less memory and compute capability. At the same time, input features are all uniform in the original swarm learning implementation <cit.>, which we believe is an assumption that may not hold in other decentralized settings. We have embraced the non-IID assumption in our implementation. Pfitzner et al. <cit.> conducted a systematic literature review on the concept of and research into federated learning and its applicability to confidential healthcare datasets. In particular, Lee and Shin <cit.> conducted an experiment using the Modified National Institute of Standards and Technology (MNIST), Medical Information Mart for Intensive Care-III (MIMIC-III), and ECG datasets to evaluate the performance of a federated learning system compared to the state-of-the-art method for in-hospital mortality prediction using imbalanced data. Additionally, a small but growing number of works have focused on the application of federated learning to mental health. FedMood <cit.> uses mobile phone and IoT data in a “multi-view” federated learning setting to detect the emotions of individuals. However, a central aggregator is still necessary for federated learning to be possible, which is risky, especially for patients in vulnerable populations. Chhikara et al. <cit.> describe a federated learning framework that uses images to detect human emotion. While they achieved successful learning results, such a model is infeasible for tracking granular changes in emotion/mood. While physiological data can provide hour-by-hour changes in a patient's emotion (such as changes after taking an antidepressant), obtaining patient pictures every hour is infeasible. Garriga et al. <cit.>'s work is similar to the previously described work but utilizes EHRs to predict mental health crises. While this is a valuable model for predicting specific mental health events, the model cannot assess granular emotional changes. Instead, crises are only extrapolated based on patient visits to the clinic or the hospital (which often occur when the patient is already sick). Through our work, we aim to see both granular and longitudinal changes in emotions, i.e., how mood changes throughout the day (especially with medication interventions) and how moods change over a week. A novel variant of federated learning is personalized federated learning <cit.>. The generated model is adapted to better fit a local dataset (for example, data belonging to a single patient). Such a model adaptation can lead to a more personalized model for the specific patient. In other words, each client's model does not need to be the same. While this strategy could prove useful for this application, we have not yet employed it due to problems with categorical overfitting (where the model chooses one category for any given input). Such overfitting is likely to occur in this setting, as emotion/mood may remain the same for many hours; subjecting this to a model may lead the model to assume that the patient's emotions always remain the same. A more generalized model can be exposed to a greater variation of training data. Architecturally, MentalHealthAI provides many learning advantages that other AI strategies in this domain do not.
Compared to traditional machine learning, MentalHealthAI introduces privacy-sensitive strategies to address a previously stigmatized population. We specifically allow learning on decentralized edge nodes, i.e., smartphones, and require only the transfer of model parameters rather than raw data, which also makes MentalHealthAI less susceptible to noise and irrelevant features. However, MentalHealthAI takes this further by introducing decentralized aggregators, preventing attacks on a centralized aggregator. The unique contribution of MentalHealthAI lies in its ability to utilize the available data, regardless of variations in the feature set. § CONCLUSION AND FUTURE WORK Advances in AI techniques and IoT devices have transformed how chronic illnesses such as asthma, hypertension, and diabetes are treated today. However, an area of medicine where connected health has remained relatively untapped is mental health. In most situations, patient history, which is inaccurate and imperfect, has predominantly been used to treat these disorders. At the same time, psychiatry poses many unique challenges to connected health adoption. First, a sensitive patient population may not support releasing personal data, i.e., information potentially harmful if leaked. Secondly, significant variations exist in the number of data streams a patient has, thus potentially limiting learning. Finally, qualitative elements such as mood or emotions can change rapidly throughout the day and the week. For example, simple changes in diet, medications, or even sleep can lead to different emotions. Therefore, monitoring granular emotional changes is key to successfully monitoring mental health. While many have focused on using facial expressions or EHRs to predict crises, these models overlook the small changes that can lead to mental health issues. Therefore, to solve these problems, we utilized a unique combination of AI and blockchain techniques to enhance data privacy and ownership in a system called MentalHealthAI. It is an innovative combination of smart contracts and decentralized learning to create models useful for psychiatrists, but in a way that protects the patient's privacy. IoT devices provide second-by-second changes in the patient's outward physiological signs, allowing for a granular understanding of the patient's health, and the system allows for successful learning even when the data stays on the patient's smartphone. As part of the evaluation, we used a novel mental health dataset and divided the patient population into cohorts based on the data streams available for each patient. We demonstrated that we could predict emotions/moods from physiological data in a decentralized and privacy-preserving manner. Our methodology for predicting mental health disorders has several benefits. First, it increases accessibility. For example, if 20% of the patient population has only one IoT device, a traditional machine learning algorithm would be trained on only this limited population. In comparison, MentalHealthAI can utilize the entire patient population for model training in a decentralized and privacy-preserving manner, which can provide greater model utility for patients who do not have access to physiological data generators (i.e., IoT devices). Secondly, it can increase model accuracy, especially in fields that have yet to be studied extensively due to limited (or no) data availability.
Therefore, certain data streams may contribute to the model's accuracy, and different combinations of data stream features enable better establishment of the links between features and labels. Finally, this method can better adapt to non-IID settings. Intuitively, patient populations are unique, and most patients are likely to have between 1 and 3 IoT devices. Therefore, a model trained on more patients but with fewer features can have greater accuracy than one trained on more features but with a smaller patient cohort. This relationship can change from community to community and region to region. We address this issue by having different population cohorts to provide accurate results while being resilient to changes in the patient population composition. There are various limitations to this work. We have yet to evaluate our work with real smartphones in a decentralized setting in real life. Therefore, an initial user study is necessary to determine the effectiveness and impact of the prediction accuracy. Secondly, understanding physiological changes during emotions such as surprise, fear, or agitation would be valuable in addition to detecting moods and emotions in patients as a baseline. Therefore, the current model may need to be retrained on a separate dataset, with hyperparameters tuned appropriately, to recognize these emotions. Other important challenges are model approximation and optimization: is there a model that performs well on all clients, and how can such a model be found? By continuing to work on these limitations, we can deploy such infrastructure in the mental health patient population and provide utility to psychiatrists needing an objective metric to assess their patients.
http://arxiv.org/abs/2307.03923v1
20230708073717
New Methods for MLE of Toeplitz Structured Covariance Matrices with Applications to RADAR Problems
[ "Augusto Aubry", "Prabhu Babu", "Antonio De Maio", "Massimo Rosamilia" ]
eess.SP
[ "eess.SP" ]
Submitted to IEEE Trans. on Signal Processing. New Methods for MLE of Toeplitz Structured Covariance Matrices with Applications to RADAR Problems. Augusto Aubry, Senior Member, IEEE, Prabhu Babu, Antonio De Maio, Fellow, IEEE, and Massimo Rosamilia, Member, IEEE. A. Aubry and A. De Maio are with the Department of Electrical Engineering and Information Technology, Università degli Studi di Napoli “Federico II”, DIETI, Via Claudio 21, I-80125 Napoli, Italy (E-mail: [email protected], [email protected]). P. Babu is with CARE, IIT Delhi, New Delhi, 110016, India (E-mail: [email protected]). M. Rosamilia is with the National Inter-University Consortium for Telecommunications, 43124 Parma, Italy (e-mail: [email protected]). This work considers Maximum Likelihood Estimation (MLE) of a Toeplitz structured covariance matrix. In this regard, an equivalent reformulation of the MLE problem is introduced and two iterative algorithms are proposed for the optimization of the equivalent statistical learning framework. Both strategies are based on the Majorization Minimization (MM) paradigm and hence enjoy nice properties such as monotonicity and ensured convergence to a stationary point of the equivalent MLE problem. The proposed framework is also extended to deal with MLE of other practically relevant covariance structures, namely, the banded Toeplitz, block Toeplitz, and Toeplitz-block-Toeplitz. Through numerical simulations, it is shown that the new methods provide excellent performance levels in terms of both mean square estimation error (which is very close to the benchmark Cramér-Rao Bound (CRB)) and signal-to-interference-plus-noise ratio, especially in comparison with state-of-the-art strategies. § INTRODUCTION Estimation of the data covariance matrix has diverse applications in radar signal processing, such as direction of arrival estimation, target detection, adaptive beamforming, and sidelobe canceller design <cit.>. In these situations, the interference covariance matrix is estimated from the secondary/training data, which are assumed target-free and collected from spatial and/or temporal returns corresponding to range cells close to the one of interest. When the data follow a complex, zero-mean, circular Gaussian distribution, it is well known that the Sample Covariance Matrix (SCM) is the unstructured Maximum Likelihood (ML) estimate of the covariance matrix. However, in the presence of a small number of training data and/or when mismatches in the training data spectral properties occur, it does not always represent a reliable choice for covariance inference <cit.>. A well-known strategy, often discussed in the open literature to improve the performance of a covariance estimator, relies on the incorporation of some a priori knowledge about its underlying structure.
For instance, in some radar applications, it is customary to suppose that data come from a stationary Gaussian random process, leading to a Hermitian Toeplitz Structured Covariance (TSC) matrix. Leveraging this information, one can obtain (under the design conditions) a more reliable estimator than the SCM <cit.>. Aside from radar applications, the estimation of a TSC matrix is encountered in speech recognition <cit.>, spectral estimation <cit.>, gridless compressive sensing <cit.>, and hyperspectral imaging <cit.>. So far, several algorithms have been proposed for estimating a TSC matrix. Let us first discuss those for ML Estimation (MLE). According to the Caratheodory parametrization <cit.>, a Toeplitz covariance matrix R ∈ ℍ^m×m can always be decomposed as[Notice that the parametrization is unique provided that rank(R) < m <cit.>.]

R = A P̃ A^H, [P̃]_k,k ≥ 0,

where

A = [ 1 ⋯ 1; e^jω_1 ⋯ e^jω_r; ⋮ ⋱ ⋮; e^j(m-1)ω_1 ⋯ e^j(m-1)ω_r ], P̃ = [ p̃_1 … 0; ⋮ ⋱ ⋮; 0 … p̃_r ],

ω_i and p̃_i, i = 1, 2, ⋯, r ≤ m, denote some angular frequencies and their corresponding powers, while r indicates the rank of R. Capitalizing on this parametrization, Circulant Embedding (CE) of a Toeplitz matrix (<cit.>) can be used to compute approximately the ML estimate of R. According to CE, a Positive SemiDefinite (PSD) m×m Toeplitz matrix is modeled as

R = F̄ P F̄^H, P = diag([p_1, p_2, ⋯, p_L]), p_k ≥ 0,

where F̄ = [I_m×m 0_m×(L-m)] F, I_m×m is the identity matrix of size m×m, 0_m×(L-m) is the zero matrix of size m×(L-m), F is the normalized Discrete Fourier Transform (DFT) matrix of size L ≥ 2m−1, and P is a diagonal matrix of size L×L with diagonal elements p_k ≥ 0. Therefore, the matrix R is completely parameterized by the diagonal matrix P. Although estimating the Toeplitz covariance matrix using CE seems attractive, the representation in (<ref>) is valid only for a subset of Toeplitz covariance matrices. This can be intuitively justified because the Caratheodory parametrization in (<ref>) does not place restrictions on the frequency spacing, whereas the CE in (<ref>) strictly requires the frequencies to lie on the Fourier grid. Hence, for some Toeplitz matrices, the parametrization in (<ref>) is only approximate. Based on CE, <cit.> and <cit.> have proposed an iterative algorithm based on Expectation-Maximization (EM) for MLE of R. By modifying the M step in the EM procedure, in <cit.> the technique has been extended to deal with the banded Toeplitz covariance case. In <cit.>, still leveraging the CE framework, a Majorization Minimization (MM) based optimization, with faster convergence than the EM of <cit.> and <cit.>, has been introduced. In <cit.> a closed-form estimator has been designed by invoking the extended invariance principle to deal with the Toeplitz constraint. Finally, in <cit.>, an efficient approximation of a Toeplitz covariance matrix under a rank constraint has been handled by forcing the eigenvectors to be the same as those of the SCM, whereas the Toeplitz constraint has been explicitly imposed while estimating the eigenvalues. Other than MLE, several alternative paradigms have been considered for the problem at hand. Recently, in <cit.> the Toeplitz structure is forced together with a condition number constraint via projection of the SCM onto a suitable constraint set. Other geometry-based approaches for TSC estimation have also been proposed in <cit.>.
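As a quick illustration of the CE model in (<ref>) — a minimal NumPy sketch under the unitary-DFT convention, written by us rather than taken from the paper — the following builds an m×m PSD Toeplitz matrix from a vector of grid powers:

```python
import numpy as np

def circulant_embedding_cov(p, m):
    """CE model R = F_bar P F_bar^H: p holds L >= 2m-1 nonnegative powers
    on the Fourier grid; F_bar = [I_m, 0] F collects the first m rows of
    the normalized L x L DFT matrix F."""
    p = np.asarray(p, dtype=float)
    L = p.size
    grid = np.arange(L)
    F = np.exp(-2j * np.pi * np.outer(grid, grid) / L) / np.sqrt(L)
    F_bar = F[:m, :]
    return (F_bar * p) @ F_bar.conj().T  # Hermitian, PSD, and Toeplitz
```

By construction [R]_i,k depends only on i−k, so the result is indeed Toeplitz; frequencies falling between grid points, however, cannot be represented exactly, which is precisely the limitation discussed above.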
In this manuscript, two iterative algorithms, referred to as Alternating Projection Based TOeplitz Covariance Matrix Estimation 1 (ATOM1) and ATOM2, are devised leveraging a suitable reformulation of the MLE problem and the MM framework. Both ATOM1 and ATOM2 involve the construction of a bespoke surrogate function (s.f.) along with its optimization. Specifically, the two procedures construct distinct s.f.s and therefore solve different surrogate minimization problems. While ATOM1 addresses the surrogate minimization problem using the Alternating Direction Method of Multipliers (ADMM), ATOM2 handles it either via alternating projection or Dykstra's algorithm. However, both procedures directly estimate the Toeplitz covariance matrix without forcing a reparametrization via the CE. ATOM2 is also extended to include other constraints, such as banded Toeplitz, block-Toeplitz, and Toeplitz-block-Toeplitz structures. The major contributions of this paper can be summarized as follows: * Two iterative algorithms, ATOM1 and ATOM2, are proposed based on the MM framework to address MLE of a Toeplitz covariance matrix. Their computational complexities are thoroughly discussed, and the convergence of the procedures to a stationary point of the equivalent MLE problem is established. * The extension of ATOM2 to handle additional covariance structures, such as banded Toeplitz, block-Toeplitz, and Toeplitz-block-Toeplitz, is presented. * The derivation of the Cramér-Rao Bound (CRB) for the estimation of Toeplitz, banded Toeplitz, and Toeplitz-block-Toeplitz covariance matrices is provided. * Performance comparisons of the proposed algorithms (including their extensions) with some state-of-the-art procedures via numerical simulations are illustrated, using the Mean Square Error (MSE) and the Signal-to-Interference-plus-Noise Ratio (SINR) (for case studies related to radar applications) as performance metrics. The organization of the paper is as follows. The MLE problem for a Toeplitz covariance matrix with complex, zero-mean, circular Gaussian observations is formulated in Section <ref>. In Section <ref>, the ATOM1 and ATOM2 algorithms are proposed, along with a discussion of their computational complexity and implementation aspects; their convergence properties are also studied. At the end of this section, the extension of ATOM2 to handle additional constraints along with the Toeplitz requirement is discussed. In Section <ref>, the CRB for the estimation of Toeplitz, banded Toeplitz, and Toeplitz-block-Toeplitz covariance matrices is computed. In Section <ref>, the proposed algorithms are compared with some state-of-the-art techniques, and finally, concluding remarks are given in Section <ref>. §.§ Notation Throughout the paper, bold capital and bold small letters denote matrices and vectors, respectively. A scalar is represented by a small letter. The value taken by an optimization vector x at the t^th iteration is denoted by x_t. Furthermore, ℝ is used to denote the set of real numbers, ℝ^m and ℂ^m represent the sets of m-dimensional vectors of real and complex numbers, respectively, whereas ℝ^m×m, ℂ^m×m, and ℍ^m×m represent the sets of m×m real matrices, m×m complex matrices, and m×m Hermitian matrices, respectively. Superscripts (·)^T, (·)^*, (·)^H, and (·)^-1 indicate the transpose, complex conjugate, complex conjugate transpose, and inverse, respectively. For any x ∈ ℝ, ⌈x⌉ returns the least integer greater than or equal to x.
The trace and the determinant of a matrix A are denoted by Tr(A) and |A|, respectively. The notation [A]_i is used to represent the i^th column of the matrix A. The symbol ⊗ indicates the Kronecker product, while the gradient of a function f is denoted by ∇f. The symbol ≽ (and its strict form ≻) is used to denote the generalized matrix inequality: for any A ∈ ℍ^m×m, A ≽ 0 means that A is a PSD matrix (A ≻ 0 for positive definiteness). Besides, for any A ∈ ℍ^m×m, eig(A) is the vector collecting the eigenvalues of A (sorted in increasing order). The Euclidean norm of the vector x is denoted by ‖x‖_2, while |x| indicates the element-wise modulus of x. The notation E[·] stands for statistical expectation. Finally, for any A, B ∈ ℝ^m×m, max(A, B) refers to the matrix containing the element-wise maximum between A and B.

§ PROBLEM FORMULATION Let us assume the availability of n independent and identically distributed vectors {x_1, x_2, ⋯, x_n}, where each x_i is of size m and follows an m-variate complex, zero-mean, circular Gaussian distribution with covariance matrix R ≻ 0. The maximum likelihood covariance estimation problem can be formulated as

minimize_{R ≻ 0}  f̄(R) = (1/n) ∑_{i=1}^n x_i^H R^{-1} x_i + log|R|.

If n ≥ m, Problem (<ref>) has, with probability one, a unique minimizer given by the SCM, i.e., R_SCM = (1/n) ∑_{i=1}^n x_i x_i^H. However, if the random process from which the observations are drawn is stationary (at least in the wide sense), then the covariance matrix also exhibits a Toeplitz structure, which can be capitalized on in the estimation process. By doing so, Problem (<ref>) becomes

MLE:  minimize_{R ∈ Toep, R ≻ 0}  f̄(R),

where Toep is used to denote the set of Hermitian Toeplitz matrices of size m×m. The above problem has two constraints: a structural constraint and a positive definiteness constraint. Even though the structural constraint is convex, the non-convexity of the objective function makes Problem (<ref>) challenging to solve, and no analytical solution seems to be available. In the following, two iterative solution procedures for (<ref>) are designed exploiting the MM principle. Briefly, the MM technique mainly consists of two steps: * constructing a s.f. g(R|R_t) (where R_t is the estimate of R at the t^th iteration) for the objective function in (<ref>); * minimizing the resulting surrogate problem at each iteration. For more details, <cit.> provide an in-depth discussion of MM-based algorithms.

§ ALGORITHMS FOR TOEPLITZ COVARIANCE MATRIX ESTIMATION In this section, ATOM1 and ATOM2 are proposed to tackle the MLE problem for a TSC matrix. Both exploit the MM principle (applied to an equivalent reformulation of the MLE problem) and differ in the way they construct and handle the surrogate minimization problem. ATOM1 solves the surrogate optimization using ADMM, while ATOM2 tackles it using either alternating projection or Dykstra's algorithm. Subsequently, the computational complexity and proof of convergence of the procedures are established. Finally, the extension of ATOM2 to deal with additional covariance constraints along with the Toeplitz structure is provided. Before proceeding further, let us observe that Hermitian Toeplitz matrices intrinsically endow the centro-Hermitian symmetry structure <cit.>, i.e., R = J R^* J, with the m×m permutation matrix J given by

J = [ 0 0 ⋯ 0 1; 0 0 ⋯ 1 0; ⋮ ⋮ ⋱ ⋮ ⋮; 1 0 ⋯ 0 0 ].
As a consequence, Problem (<ref>) is tantamount to

minimize_{R ∈ Toep, R ≻ 0}  f(R),

where f(R) = Tr(R_FB R^{-1}) + log|R| refers to the restriction of f̄(·) to the centro-Hermitian covariance matrices, with R_FB the forward-backward (FB) averaged sample covariance matrix[Hereafter, Problem (<ref>) (and thus (<ref>)) is assumed solvable, i.e., there exists a global optimizer R^* ≻ 0, and any limit point of a feasible sequence of matrices whose corresponding objectives converge to the optimal value is feasible for the optimization problem. As a consequence, without loss of generality, the constraint R ≻ 0 can be relaxed into R ≽ 0. Notably, a sufficient condition ensuring the aforementioned properties is n ≥ ⌈m/2⌉, corresponding to R_FB ≻ 0 with probability one.] given by R_FB = (1/2)(R_SCM + J R_SCM^* J) <cit.>. Now, decomposing R_FB = D D^H, e.g., via LDL factorization, with D ∈ ℂ^m×r, where r = rank(R_FB) ≤ m, Problem (<ref>) can be equivalently cast as[A similar constraint reformulation is used in some studies involving the atomic norm for sparse reconstruction <cit.>.] (the interested reader may refer to Appendix A of the supplementary material to this paper)

min_{R ∈ Toep, X ∈ ℍ^r×r}  Tr(X) + log|R|  s.t.  [ X D^H; D R ] ≽ 0,

where the objective is a concave differentiable function of X and R. Before proceeding with the next important lemma, it is worth pointing out that Problem (<ref>) holds true even if the Toeplitz structural constraint in Problems (<ref>) and (<ref>) is replaced by any set of positive definite matrices, provided that the estimation problem is solvable.

Lemma: Given a concave differentiable[For a non-differentiable function, the inequality in (<ref>) can be cast as h(X) ≤ h(X_t) + Tr(G(X_t)^H (X − X_t)), where G(X_t) is a subgradient of the concave function h(X) at X_t <cit.>.] function h(X): ℍ^r×r → ℝ, it can be majorized as

h(X) ≤ h(X_t) + Tr(∇h(X_t)^H (X − X_t)),

where X_t ∈ ℍ^r×r. The upper bound to h(X) is linear and differentiable with respect to (w.r.t.) X. Since h(X) is a concave function w.r.t. X, (<ref>) stems from linearizing h(X) via its first-order Taylor expansion <cit.>.

In order to tackle the challenging optimization problem (<ref>), two MM-based methods <cit.>, denoted ATOM1 and ATOM2, are now developed. To this end, let us observe that the term log|R| in (<ref>) is a concave function w.r.t. R <cit.>. Hence, it can be majorized using Lemma <ref> to get the following s.f.

g(X, R|R_t) = Tr(X) + Tr(R_t^{-1} R) + c_1 = Tr(M_t S) + c_1,

where the constant c_1 = log|R_t| − m, M_t = diag(I, R_t^{-1}), and S = diag(X, R) is the block-diagonal matrix with blocks X and R along the main diagonal. Given R_t, which in our case is the value assumed by the variable at the t-th iteration of the algorithm, the MM method demands the solution of the following surrogate minimization task

(X_{t+1}, R_{t+1}) = arg min_{R ∈ Toep, X ∈ ℍ^r×r}  g(X, R|R_t)  s.t.  [ X D^H; D R ] ≽ 0,

which is an SDP problem. Unfortunately, the computational complexity necessary to handle an SDP using interior point methods is 𝒪((r+m)^6.5) <cit.>. In order to alleviate the computational issue, two different approaches are pursued. The former directly handles Problem (<ref>) via the iterative ADMM algorithm. The latter, by means of a suitable manipulation of (<ref>), constructs a different s.f. for the objective function in Problem (<ref>). By doing so, as clearly explained in the following, a computationally efficient and flexible estimation procedure capable of including additional constraints can be developed.
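For concreteness, the FB averaging and a factorization R_FB = D D^H can be sketched as follows (our illustration, continuing the NumPy sketch above; an eigendecomposition-based factor is used here in place of the LDL factorization mentioned in the text):

```python
def fb_scm(X):
    """Forward-backward averaged SCM from the m x n snapshot matrix X:
    R_FB = (R_SCM + J R_SCM^* J) / 2."""
    m, n = X.shape
    R_scm = (X @ X.conj().T) / n
    J = np.fliplr(np.eye(m))
    return 0.5 * (R_scm + J @ R_scm.conj() @ J)

def factor(R_fb, tol=1e-12):
    """Any D with R_FB = D D^H and r = rank(R_FB) columns will do."""
    w, U = np.linalg.eigh(R_fb)
    keep = w > tol * w.max()
    return U[:, keep] * np.sqrt(w[keep])
```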
To this end, let us observe that, adding and subtracting γ Tr(S²), (<ref>) is equivalent to

Tr(M_t S) + γ Tr(S²) − γ Tr(S²),

with γ > 0 a real parameter of the surrogate construction stage (for γ ↓ 0, the function in (<ref>) reduces to (<ref>)). Now, −Tr(S²) being a concave function of S, and invoking Lemma <ref> applied to the feasible solution S_t = diag(X_t, R_t), with X_t = D^H R_t^{-1} D and R_t provided by the t-th iteration step of the estimation process, it is possible to construct the following s.f. for (<ref>)

g̃(X, R|R_t) = Tr(M_t S) + γ Tr(S²) − 2γ Tr(S_t S) + γ Tr(S_t²).

It is worth pointing out that g̃(X, R|R_t) represents a surrogate to a s.f. Nonetheless, since g̃(X, R|R_t) is a tight approximation of g(X, R|R_t), it is straightforward to show that (<ref>) provides a direct surrogate for the objective function in Problem (<ref>). Hence, given S_t and after some algebraic manipulations, the resulting surrogate minimization problem at the t-th iteration can be cast as

S_{t+1} = arg min_{S = diag(X, R), R ∈ Toep}  ‖S − V_t‖_F²  subject to  S + B ≽ 0,

where V_t = S_t − γ' M_t, with γ' = 0.5/γ, and B = [ 0 D^H; D 0 ]. In the following subsections <ref> and <ref>, two iterative methods, i.e., ATOM1 and ATOM2, are proposed to solve the surrogate minimization problems in (<ref>) and (<ref>), respectively.

§.§ ATOM1 The surrogate minimization problem in (<ref>) is solved using ADMM <cit.>. To this end, an auxiliary variable Z ∈ ℍ^(r+m)×(r+m) is introduced in (<ref>) and the problem is framed in the equivalent form

min_{R ∈ Toep, Z ≽ 0, X ∈ ℍ^r×r}  Tr(X) + Tr(R_t^{-1} R)  s.t.  [ X D^H; D R ] − Z = 0.

The augmented Lagrangian associated with (<ref>) is

ℒ_ρ(X, R, Z, Λ) = Tr(X) + Tr(R_t^{-1} R) + Tr[Λ^H ([ X D^H; D R ] − Z)] + (ρ/2) ‖[ X D^H; D R ] − Z‖_F²,

where ρ > 0 is the penalty parameter and Λ is the Lagrange multiplier of size (r+m)×(r+m). Problem (<ref>) can be further rewritten as

ℒ_ρ(S, Z, Λ) = Tr(M_t S) + Tr(Λ^H (S + B − Z)) + (ρ/2) ‖S + B − Z‖_F².

The (inner) iterative steps of the ADMM algorithm <cit.> are

Z_{k+1}^t = arg min_{Z ≽ 0}  Tr((Λ_k^t)^H (S_k^t + B − Z)) + (ρ/2) ‖S_k^t + B − Z‖_F²,
S_{k+1}^t = arg min_{S = diag(X,R), R ∈ Toep}  Tr(M_t S) + Tr((Λ_k^t)^H (S + B − Z_{k+1}^t)) + (ρ/2) ‖S + B − Z_{k+1}^t‖_F²,
Λ_{k+1}^t = Λ_k^t + ρ (S_{k+1}^t + B − Z_{k+1}^t),

where (·)_k^t is used to denote the k-th inner iteration of the ADMM algorithm in correspondence of the t-th MM outer loop. Problems (<ref>) and (<ref>) have closed-form solutions, which can be computed via the projection of appropriate matrices onto the respective feasible sets. Indeed, Problem (<ref>) can be equivalently cast as

Z_{k+1}^t = arg min_{Z ≽ 0}  ‖Z − E_k^t‖_F²,

where E_k^t = S_k^t + B + (1/ρ) Λ_k^t. Hence, solving (<ref>) is tantamount to performing the orthogonal projection of the matrix E_k^t onto the set of PSD matrices, which can be computed as

Z_{k+1}^t = U_k^t max(Σ_k^t, 0) U_k^{tH},

where the diagonal matrix Σ_k^t and the matrix U_k^t contain the eigenvalues and the corresponding orthonormal eigenvectors of E_k^t, respectively. Similarly, the update step of S in (<ref>) can be rewritten as

S_{k+1}^t = arg min_{S = diag(X,R), R ∈ Toep}  ‖S − Ψ_k^t‖_F²,

whose solution is Ψ̄_k^t = 𝒫_D–Toep(Ψ_k^t), with Ψ_k^t = Z_{k+1}^t − B − (1/ρ)(Λ_k^t + M_t) and 𝒫_D–Toep(·) computed as follows. Partitioning a matrix M as M = [ M_11 M_12; M_12^H M_22 ], with M_12 of size r×m, the orthogonal projection of interest amounts to setting the upper diagonal block to M_11, whereas the second diagonal block is obtained by averaging the elements along each diagonal of M_22 and constructing the corresponding Toeplitz matrix. Now, partitioning Ψ̄_k^t as Ψ̄_k^t = [ Ψ̄_11,k^t Ψ̄_12,k^t; Ψ̄_12,k^tH Ψ̄_22,k^t ], with Ψ̄_11,k^t and Ψ̄_22,k^t being r×r and m×m matrices, respectively, it follows that X_{k+1}^t = Ψ̄_11,k^t and R_{k+1}^t = Ψ̄_22,k^t.
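Both closed-form projections admit a compact implementation; the sketch below (ours, reusing NumPy from the earlier snippets and SciPy's `toeplitz`) mirrors the two operations just described:

```python
from scipy.linalg import toeplitz

def proj_psd(M):
    """Orthogonal projection onto the PSD cone (M assumed Hermitian):
    clip the eigenvalues at zero."""
    w, U = np.linalg.eigh(M)
    return (U * np.maximum(w, 0)) @ U.conj().T

def proj_toeplitz(M):
    """Nearest Hermitian Toeplitz matrix: average each diagonal together
    with the conjugate of its mirrored diagonal."""
    m = M.shape[0]
    r = np.array([0.5 * (np.diagonal(M, k).mean()
                         + np.conj(np.diagonal(M, -k)).mean())
                  for k in range(m)])
    r[0] = r[0].real
    return toeplitz(r.conj(), r)

def proj_dtoep(M, r):
    """P_D-Toep: keep the r x r upper block (Hermitian part, a no-op for
    Hermitian inputs), Toeplitz-average the lower m x m block, and zero
    the off-diagonal blocks."""
    out = np.zeros_like(M)
    out[:r, :r] = 0.5 * (M[:r, :r] + M[:r, :r].conj().T)
    out[r:, r:] = proj_toeplitz(M[r:, r:])
    return out
```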
Before concluding, it is worth pointing out that, since the surrogate minimization problem in (<ref>) is convex and only an equality constraint is enforced, ADMM is guaranteed to converge to a supposedly existing[A sufficient condition for the existence of the optimal solution to Problem (<ref>) is provided by the solvability of (<ref>).] unique optimal solution to (<ref>) (see Section 3.2 in <cit.>, <cit.>). The pseudocode of the proposed algorithm is shown in Algorithm 1. From Algorithm 1 it can be seen that ATOM1 requires the initialization of the matrices R_0, Z_0^t, and Λ_0^t. R_0 can be set using the initialization scheme discussed in <cit.>; at t = 0, Z_0^t can be set equal to the block matrix [ X_0 D^H; D R_0 ], with X_0 = D^H R_0^{-1} D, while Λ_0^t can be constructed as Λ_0^t = W W^H, where the elements of W are drawn randomly from a uniform distribution over [0,1]. For t ≥ 1, the matrices Z_0^t and Λ_0^t can be initialized with their last values after convergence of the ADMM at the previous outer iteration. Another input parameter required by ATOM1 is the penalty weight ρ, introduced during the construction of the augmented Lagrangian in the ADMM framework. It is shown in <cit.> that the ADMM algorithm converges for any value of ρ > 0. However, the numerical stability and the convergence rate depend on the choice of ρ. Simulation results have highlighted that, for ρ = 1, the ADMM algorithm is stable for different values of n and m. Hence, unless otherwise stated, ρ = 1 is used in all the numerical analysis. §.§.§ Computational complexity and discussion about ATOM1 ATOM1 is iterative in nature with two loops: the outer loop updates the Toeplitz matrix R_t, while the inner loop solves the surrogate minimization problem using ADMM. Note that in the inner loop it is required to construct the data-based matrix B = [ 0 D^H; D 0 ], which is iteration independent and hence can be pre-computed and stored. Let us now discuss the complexity related to the outer and inner loops of ATOM1. The inner loop of ATOM1 requires the computation of the matrix M_t, which is outer-loop iteration dependent; therefore, this matrix can be evaluated once in each outer loop. Consequently, apart from the computations involved in the inner loop, an outer-loop cycle just involves the evaluation of the matrix R_t^{-1}. Since R_t is Toeplitz, its inverse can be efficiently computed with a complexity of 𝒪(m log m) <cit.>. The computational complexity of an inner-loop cycle is related to the projection of E_k^t onto the set of PSD matrices and the projection of Ψ_k^t onto the set of block-diagonal matrices whose upper part (of size r×r) is unconstrained and whose lower block (of size m×m) is Toeplitz structured. The cost of the latter operation mainly involves the projection of Ψ_22,k^t onto the set of Toeplitz matrices; thus, it is substantially dictated by the computation of the averages of the elements along the diagonals of Ψ_22,k^t. Hence, the cost of inner step 4) is 𝒪(m²). Next, the projection onto the set of PSD matrices mainly involves the computation of the eigenvalues and eigenvectors of the matrix E_k^t, whose complexity is 𝒪((r+m)³) <cit.>. Therefore, the per-outer-iteration computational complexity of ATOM1 is 𝒪(η(r+m)³), where η is the total number of inner-loop iterations required by the algorithm to converge. A drawback of ATOM1 is the lack of a theoretical quality guarantee when it has to handle additional constraints on the covariance matrix.
This is because ATOM1 implements the ADMM algorithm at each inner iteration, which requires (to endow the process with convergence guarantees) the optimization problem to exhibit the standard form <cit.>

minimize_{x_1, x_2}  h_1(x_1) + h_2(x_2)  subject to  A_1 x_1 + A_2 x_2 = b,

where h_1(x_1), h_2(x_2) are convex functions and A_1, A_2, b are matrices and a vector of appropriate dimensions, respectively. Therefore, to incorporate additional inequality constraints (such as those resulting from an upper bound on the condition number of the matrix, a lower bound on the strength of the diagonal elements, or, more generally, an intersection of closed convex sets that can be described by additional auxiliary variables), one needs to replace each inequality constraint with an appropriate equality constraint. This can be done by introducing a slack variable for each inequality constraint in addition to the existing optimization variables x_1 and x_2. However, there is no convergence guarantee for ADMM when there are more than two optimization variables <cit.>. This issue can be addressed by the low-complexity algorithm, referred to as ATOM2, proposed to solve Problem (<ref>). §.§ ATOM2 Problem (<ref>) is tantamount to seeking the block-diagonal matrix belonging to the intersection of two sets — the former defined by block-diagonal matrices with the lower diagonal block of size m×m fulfilling a Toeplitz structure, and the latter given by the Linear Matrix Inequality (LMI) <cit.> S + B ≽ 0 — with minimum distance from V_t. Being the feasible set of (<ref>) characterized by the intersection of convex sets, a viable, even though heuristic, means to tackle Problem (<ref>) is provided by the alternating projection or Projection Onto Convex Sets (POCS) technique <cit.>, which has already been successfully applied in the signal processing context, e.g., <cit.>. Let us denote by 𝒫_LMI(M) the orthogonal projection of an arbitrary matrix M onto the set defined by S + B ≽ 0. Now, to proceed further and employ the POCS framework, the 𝒫_D–Toep(·) and 𝒫_LMI(·) projections must be employed. Remarkably, both can be obtained in closed form: the former is computed as described in subsection <ref>; as to the latter, the orthogonal projection onto the set defined by the LMI S + B ≽ 0 is computed by first evaluating the EigenValue Decomposition (EVD) of the matrix M + B, i.e., obtaining [Σ, U] = eig(M + B), where Σ and U are the matrices containing the eigenvalues and eigenvectors of the spectral decomposition, respectively. Then, the orthogonal projection 𝒫_LMI(M) is given by U max(Σ, 0) U^H − B. According to the POCS method, given an initial value S_0^t = V_t, at the k-th inner iteration one first computes S̄_{k+1}^t = 𝒫_D–Toep(S_k^t) and then, using S̄_{k+1}^t, determines S_{k+1}^t = 𝒫_LMI(S̄_{k+1}^t), which represents the starting point of the next inner iteration. Hence, the POCS-based solution approach finds a sequence of iterates {S_k^t} by alternately projecting between the two convex sets. Nevertheless, as reported in <cit.>, POCS may suffer from slow convergence. Even more crucially, convergence to the global optimal solution to (<ref>) is, in general, not ensured <cit.>. A possible solution to the aforementioned shortcoming is provided by Dykstra's projection <cit.>, which is a refinement of POCS capable of finding the point closest to V_t by adding correction matrices P_k and Q_k before each projection is performed, which in turn ensures convergence of the sequence {S_{k+1}} to the optimal solution S^* of (<ref>) <cit.>. The pseudocode of Dykstra's algorithm is shown in Algorithm 2.
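Algorithm 2 is not reproduced here, but the structure of Dykstra's refinement can be sketched as follows (our rendering, under the two-set formulation above; `proj_psd` and `proj_dtoep` are from the previous sketch):

```python
def proj_lmi(M, B):
    """Projection onto {S : S + B >= 0}: shift by B, clip eigenvalues,
    shift back."""
    return proj_psd(M + B) - B

def dykstra(V, B, r, n_iter=300):
    """Dykstra's algorithm for the point of the intersection of the
    D-Toep set and the LMI set closest to V; P and Q are the correction
    matrices mentioned in the text."""
    S = V.copy()
    P = np.zeros_like(V)
    Q = np.zeros_like(V)
    for _ in range(n_iter):
        Y = proj_dtoep(S + P, r)   # first convex set (D-Toep)
        P = S + P - Y
        S = proj_lmi(Y + Q, B)     # second convex set (LMI)
        Q = Y + Q - S
    return S
```

Dropping the corrections P and Q recovers plain POCS, which explains why the two methods share the same per-iteration cost while only Dykstra's variant targets the closest point to V.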
Once the optimal solution S^* is obtained via Dykstra's projection, the matrix R_{t+1} can be constructed from its lower diagonal block of size m×m. This process is repeated until the whole MM procedure, i.e., including the outer loop, converges. The complete ATOM2 is summarized in Algorithm 3. It requires the initialization of the matrix R. In this respect, a scheme similar to that of ATOM1 is followed, i.e., at each outer iteration, the initial guess required to determine R_{t+1} in the inner loop is obtained starting from R_t. §.§ Computational complexity of ATOM2 Like ATOM1, ATOM2 is an iterative algorithm with outer and inner loops. The outer loop updates the Toeplitz matrix R_t, and the inner loop implements Dykstra's algorithm, which requires the computation of the matrices B and R_t^{-1}. The former is an iteration-independent data matrix and can therefore be pre-constructed. The latter is outer-loop iteration dependent and can therefore be computed once in each outer loop. Consequently, apart from the inner-loop computations, the outer loop demands only the computation of R_t^{-1}, which can be computed efficiently with complexity 𝒪(m log m). Meanwhile, the computational load of the inner loop stems from the evaluation of the EVD of the matrix (S̄_k + Q_k) plus a data matrix, which has a complexity of about 𝒪((r+m)³). In Table <ref>, the computational complexity of ATOM1 and ATOM2 is compared with that of the state-of-the-art iterative algorithms <cit.>. Unlike the proposed algorithms, the state-of-the-art methods are single-loop iterative algorithms. Therefore, in the case of <cit.>, η is used to represent the number of iterations required by the algorithm to converge. Inspection of Table <ref> shows that ATOM1 and ATOM2 have the highest complexity when compared to MELT and EM. Nevertheless, it is worth anticipating that this complexity increase is complemented by superior performance in terms of generality of the problem solved (ATOM1 and ATOM2 do not exploit the CE; ATOM2 permits handling additional structural constraints with a quality guarantee, as shown in subsection <ref>), covariance matrix MSE, and achieved SINR. §.§ Proof of convergence In this subsection, the proof of convergence of ATOM1 and ATOM2 is established. In this regard, it is worth pointing out that the two algorithms differ in the way they construct and optimize the s.f. for Problem (<ref>). Nonetheless, since ATOM1 and ATOM2 are both based on the MM framework, the proof of convergence based on the following Theorem holds for both algorithms. Before stating the Theorem, let us first introduce the first-order optimality condition for minimizing a function over a convex constraint set. A point x is a stationary point of f(·) if f'(x; d) ≥ 0 for all d such that x + d ∈ 𝒞, where 𝒞 is the convex constraint set and f'(x; d) is the directional derivative of f(·) at point x in direction d, defined as <cit.>

f'(x; d) = lim inf_{λ ↓ 0} [f(x + λd) − f(x)]/λ.

Based on the following Theorem, both ATOM1 and ATOM2 are guaranteed to converge to a stationary point of Problem (<ref>). Theorem: Denoting by {R_t} the sequence of matrices generated by either ATOM1 or ATOM2, the objective function of Problem (<ref>) monotonically decreases along the iterations. Besides, any positive definite cluster point[Under the assumption n ≥ ⌈m/2⌉, all the cluster points are demanded to be positive definite.] of {R_t} is a stationary point of Problem (<ref>). See Appendix B of the supplementary material for details.
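To make the overall recursion concrete, the following sketch (ours, not the authors' MATLAB implementation) assembles V_t and B from the definitions in (<ref>) and runs the MM loop with the Dykstra solver above; γ is kept fixed here, whereas the paper adapts it across iterations (see Section <ref>), and the hyperparameter values are purely illustrative:

```python
from scipy.linalg import block_diag

def surrogate_target(R_t, D, gamma):
    """V_t = S_t - gamma' M_t with gamma' = 0.5/gamma, S_t = diag(X_t, R_t),
    X_t = D^H R_t^{-1} D, M_t = diag(I, R_t^{-1}); B = [0 D^H; D 0]."""
    m, r = D.shape
    X_t = D.conj().T @ np.linalg.solve(R_t, D)
    V_t = (block_diag(X_t, R_t)
           - (0.5 / gamma) * block_diag(np.eye(r), np.linalg.inv(R_t)))
    B = np.block([[np.zeros((r, r)), D.conj().T],
                  [D, np.zeros((m, m))]])
    return V_t, B

def atom2(X_data, gamma=1e-2, outer=50, inner=300):
    """ATOM2 sketch: MM outer loop with a Dykstra inner solver."""
    R_fb = fb_scm(X_data)
    D = factor(R_fb)
    m, r = D.shape[0], D.shape[1]
    R = proj_toeplitz(R_fb)          # FB estimate projected onto Toep
    for _ in range(outer):
        V, B = surrogate_target(R, D, gamma)
        S = dykstra(V, B, r, n_iter=inner)
        R = proj_toeplitz(S[r:, r:]) # lower-right block -> next iterate
    return R
```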
§.§ Extensions of ATOM2 The augmentation of ATOM2 to handle additional constraints beyond the Toeplitz structure in the covariance estimation process is now addressed. In particular, it is shown that ATOM2 can be generalized to account for the following scenarios: banded Toeplitz, block-Toeplitz, and Toeplitz-block-Toeplitz matrices. On the other side, as already mentioned in subsection <ref>, ATOM1 cannot be directly extended to tackle general constraints such as, for instance, an upper bound requirement on the condition number. §.§.§ MLE of banded Toeplitz covariance matrix The covariance matrix is constrained to exhibit a banded Toeplitz structure of bandwidth b (see <cit.> for relevant applications). For instance, assuming a bandwidth b = 2 and dimension m = 5, the covariance matrix enjoys the following structure

R = [ r_1 r_2 r_3 0 0; r_2^* r_1 r_2 r_3 0; r_3^* r_2^* r_1 r_2 r_3; 0 r_3^* r_2^* r_1 r_2; 0 0 r_3^* r_2^* r_1 ].

Then, the MLE problem for a banded Toeplitz covariance matrix can be formulated as

minimize_{R ∈ Band-Toep, R ≻ 0}  (1/n) ∑_{i=1}^n x_i^H R^{-1} x_i + log|R|,

where Band-Toep is used to denote the set of banded Toeplitz matrices. As in (<ref>), the above problem can be cast in the following equivalent form

minimize_{R ∈ Band-Toep, X}  Tr(X) + log|R|  subject to  [ X D^H; D R ] ≽ 0.

Hence, (<ref>) is handled via the MM framework by solving the following surrogate minimization problem

minimize_S  ‖S − V_t‖_F²  subject to  S + B ≽ 0,  S = diag(X, R) with R a banded Toeplitz matrix.

The above problem involves two convex sets: the set defined by the LMI S + B ≽ 0 and the set of block-diagonal matrices whose second block has a banded Toeplitz structure with bandwidth b. Consequently, Dykstra's projection algorithm or POCS can be used to solve Problem (<ref>). The projection of a matrix onto the LMI set can be calculated as discussed earlier in Subsection <ref>. The projection of a matrix M = [ M_11 M_12; M_12^H M_22 ] onto the set of block-diagonal matrices with the second block banded Toeplitz can be obtained as follows: the first diagonal block is the same as M_11, and the second diagonal block is constructed by averaging the entries of the main and the first b upper diagonals of the matrix M_22 and computing the corresponding (banded) Toeplitz matrix, with the remaining diagonals set to zero <cit.>. §.§.§ MLE of block-Toeplitz or Toeplitz-block-Toeplitz covariance matrix In space-time adaptive processing radar applications, the covariance matrix exhibits a block-Toeplitz (BT) or a Toeplitz-block-Toeplitz (TBT) structure. An example of a BT-structured covariance matrix with p blocks is shown below

R = [ R_0 R_1 … R_{p-1}; R_1^H R_0 … R_{p-2}; ⋮ ⋱ ⋱ ⋮; R_{p-1}^H … R_1^H R_0 ].

When each block exhibits a Toeplitz structure, then R is TBT <cit.>. The MLE problem for a BT or a TBT covariance matrix is formulated as

minimize_{R ∈ BT (TBT), R ≻ 0}  (1/n) ∑_{i=1}^n x_i^H R^{-1} x_i + log|R|,

where the notation BT (TBT) is used to indicate the set of BT (TBT) matrices. A feasible solution to Problem (<ref>) can be obtained by solving at any given step the following surrogate optimization problem

minimize_S  ‖S − V_t‖_F²  subject to  S + B ≽ 0,  S a block-diagonal matrix whose second diagonal block is BT (TBT).

Problem (<ref>) exhibits two constraints: 1) an LMI constraint and 2) a structural constraint, whereby the optimization variable is confined to be a block-diagonal matrix with the second block having a BT (TBT) structure. Since both constraints are convex, Dykstra's projection or POCS can be applied to solve Problem (<ref>). The projection of a matrix onto the LMI set can be calculated as discussed earlier in Subsection <ref>.
The projection of a given matrix onto the set of matrices whose second diagonal block has the BT (TBT) structure can be obtained as follows. For the first diagonal block, the submatrix M_11 is directly used. Then, the second diagonal block is obtained following two (three) steps. First, p matrices are obtained by averaging the (upper-right) diagonal blocks of the matrix M_22. Then, only for TBT, each of the p matrices is projected onto the Toeplitz set as described in subsection <ref>. Finally, the resulting matrix is constructed according to (<ref>). § CRB CALCULATION In this section, the CRB is derived for the estimation of a Toeplitz structured covariance matrix (the interested reader may refer to Appendix C of the supplementary material with reference to the CRBs of the banded Toeplitz, BT, and TBT covariance models). The CRB provides a lower bound on the variance of any unbiased estimator <cit.>. To proceed further, let θ represent the real-valued vector parametrizing a given covariance matrix structure of interest. Then, the CRB is the inverse of the Fisher Information Matrix (FIM), whose (i,k)^th element is

[F]_{i,k} = −E[∂² ln f(x; θ)/(∂θ_i ∂θ_k)],

where f(x; θ) denotes the likelihood of the data and θ_i the i-th element of θ. Due to the Gaussian assumption, the (i,k)^th element of the FIM can be computed using the Slepian–Bangs formula <cit.>

[F]_{i,k} = n Tr(R^{-1} (∂R/∂θ_i) R^{-1} (∂R/∂θ_k)).

In the following subsection, the FIM is derived for the Toeplitz covariance structure. §.§ Toeplitz matrix As the entries of the TSC matrix are completely characterized by its first row, i.e., [r_1, r_2, ⋯, r_m]^T, the covariance matrix R ∈ ℍ^m×m can be parameterized by θ = [r_1, Re(r_2), ⋯, Re(r_m), Im(r_2), ..., Im(r_m)]^T ∈ ℝ^2m−1, where Re(r_i) and Im(r_i) denote the real and imaginary parts of r_i, respectively. Then, the covariance matrix can be expressed in terms of θ and the basis matrices A_g^Toep (defined as in (<ref>)), g = 1, 2, ⋯, m <cit.>:

R = ∑_{g=1}^m θ_g Re(A_g^Toep) + j ∑_{g=m+1}^{2m−1} θ_g Im(A_{g−m+1}^Toep).

The (i,k)^th element of the matrix A_g^Toep is given as

[A_g^Toep]_{i,k} = 1+j if i−k = g−1 = 0; 1+j if k−i = g−1 ≠ 0; 1−j if i−k = g−1 ≠ 0; 0 otherwise.

Using (<ref>), ∂R/∂θ_i can be obtained as

∂R/∂θ_i = Re(A_i^Toep) for 1 ≤ i ≤ m; j Im(A_{i−m+1}^Toep) for m+1 ≤ i ≤ 2m−1.

Substituting ∂R/∂θ_i into (<ref>) yields the FIM for the Toeplitz covariance matrix. § NUMERICAL SIMULATIONS In this section, the performance of the proposed covariance matrix estimators ATOM1 and ATOM2 is numerically analyzed in comparison with the following state-of-the-art algorithms: the EM-based method <cit.>, MELT <cit.>, the SCM, and the FB estimator <cit.>. First, a convergence analysis of the derived methods is provided, also in comparison with the aforementioned counterparts. Then, the estimation capabilities are analyzed in three different scenarios, using the MSE as performance metric, defined as[In the following, (<ref>) is computed via Monte Carlo techniques.]

MSE = E[‖R̂ − R‖_F²],

where R̂ indicates the estimate of the unknown R, obtained according to one of the aforementioned strategies. First of all, the covariance matrix is assumed to share the Toeplitz structure. Then, the banded Toeplitz, BT, and TBT constraints are considered. The CRB-based benchmark, computed as CRB = Tr(F^{-1}), is reported too, whereby, for each case study, the FIM is appropriately derived; see Section <ref>. Furthermore, assuming a typical radar signal processing scenario, the performance is also evaluated in terms of the average SINR achievable by an adaptive spatial filter.
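Before turning to the numerical results, the Slepian–Bangs computation just described can be coded directly; the sketch below (ours) uses the equivalent derivative matrices ∂R/∂θ_i in place of the A_g^Toep basis, which yields the same FIM:

```python
def crb_toeplitz(R, n):
    """CRB = Tr(F^{-1}) for theta = [r_1, Re r_2..Re r_m, Im r_2..Im r_m],
    with [F]_{i,k} = n Tr(R^{-1} dR_i R^{-1} dR_k) (Slepian-Bangs)."""
    m = R.shape[0]
    derivs = [np.eye(m)]                                           # d/d r_1
    derivs += [np.eye(m, k=g) + np.eye(m, k=-g) for g in range(1, m)]
    derivs += [1j * (np.eye(m, k=g) - np.eye(m, k=-g)) for g in range(1, m)]
    Rinv = np.linalg.inv(R)
    A = [Rinv @ dR for dR in derivs]
    q = len(derivs)
    F = np.array([[n * np.trace(A[i] @ A[k]).real for k in range(q)]
                  for i in range(q)])
    return np.trace(np.linalg.inv(F))
```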
It is also worth reporting that, in the aforementioned scenarios, the ATOM1 and ATOM2 procedures are initialized using the FB estimate R_FB projected onto the set of Toeplitz matrices. Moreover, for the execution of ATOM2, the parameter γ is updated adaptively in each outer-loop iteration according to the following law[As to the adaptive ATOM2 surrogate construction stage, it has been empirically shown that the updating rule (<ref>), with γ_0 = 10^-4 and k_1 = 5, provides satisfactory performance in all the scenarios; therefore, unless otherwise stated, the ATOM2 s.f. (and the subsequent processing) is constructed using (<ref>) with the aforementioned values.]

γ = γ_0 (t log t + k_1)².

To illustrate the role of γ in the optimization process performed by ATOM2, a notional representation of the objective function (conceptually depicted as a one-dimensional curve and corresponding to a specific portion of a restriction of the multivariate objective) and the s.f.s of ATOM1 and ATOM2 is reported in Fig. <ref>. Remarkably, the value of γ affects the trade-off between performance and convergence speed of ATOM2. Indeed, while a smaller γ leads to better performance (the ATOM2 s.f. approaches the ATOM1 one as γ → 0), it demands more inner-loop iterations to achieve convergence, due to the almost singular resulting metric. On the other hand, a larger γ reduces the overall computational cost but introduces a growth in the approximation error. However, as the outer-loop iterations increase, the approximation error of the ATOM2 s.f. w.r.t. the objective function decreases, as the updated point becomes closer and closer to a local minimum at which the sequence is “converging”. That said, slowly increasing γ with the number of iterations allows speeding up the computation without degrading performance. §.§ Assessment of iterative algorithms' convergence for on-grid and off-grid frequencies In this simulation, the convergence of ATOM1 and ATOM2 (whose inner loop was implemented via Dykstra's algorithm) is assessed in comparison with the MELT and EM algorithms. To this end, each data snapshot x_k ∈ ℂ^m is modeled as

x_k = R^{1/2} w_k, k = 1, 2, ⋯, n,

where w_k ∈ ℂ^m, k = 1, …, n, are independent and identically distributed zero-mean circularly symmetric Gaussian random vectors with unit mean square value. Two different experimental setups are considered, assuming m = 6 and n = 20. In the former, the true underlying Toeplitz covariance matrix is constructed by choosing the 2nd, 3rd, 5th, 7th, 8th, and 11th columns of the DFT matrix with L = 2m−1 in (<ref>), corresponding to the frequencies [0.5712, 1.1424, 2.2848, 3.4272, 3.9984, 5.7120] rad, and powers [p_1, …, p_6]^T = [3, 6, 4, 1, 7, 5]^T, respectively. Figs. *fig:negLL_obj_ON_GRID_a and *fig:negLL_obj_ON_GRID_b show the negative log-likelihood (<ref>) and the objective function of problem (<ref>) versus the number of iterations, respectively. It can be seen that all the algorithms numerically improve the negative log-likelihood as the number of iterations increases and almost converge to the same value, with negligible differences. Moreover, Fig. *fig:negLL_obj_ON_GRID_b indicates that the proposed algorithms monotonically decrease the problem objective function, which is expected since they optimize (<ref>) using the MM framework. In the other experimental setup, the true underlying Toeplitz covariance matrix is constructed such that two of the frequencies do not lie on the Fourier grid.
Therefore, the same parameters as in case study 1 are considered, with the exception that the Fourier frequencies 0.5712 rad and 3.9984 rad are replaced with 0.5 rad and 5.3 rad, respectively. For the case study at hand, the negative log-likelihood (<ref>) and the objective function of (<ref>) are reported in Figs. *fig:negLL_obj_OFF_GRID_a and *fig:negLL_obj_OFF_GRID_b versus the number of iterations, respectively. Inspection of Fig. *fig:negLL_obj_OFF_GRID_a reveals that, while MELT and EM converge to a value of ≈ 22.4, ATOM1 and ATOM2 converge to 22. Therefore, when two of the frequencies do not lie on the Fourier grid, the state-of-the-art iterative algorithms converge to a larger value of the negative log-likelihood than the proposed methods. This is due to the fact that, unlike the counterparts, the proposed algorithms estimate the Toeplitz covariance matrix without reparametrizing it via the CE technique, and are thus able to cover the whole set of Toeplitz covariance matrices. Furthermore, remarks similar to those made for the on-grid case hold true with reference to the results depicted in Fig. *fig:negLL_obj_OFF_GRID_b. In the following, the mean computational time[The simulation has been executed using MATLAB R2020b on a desktop computer equipped with an Intel i5 processor and 16 GB of RAM.] (averaged over 1000 Monte Carlo trials) of the proposed techniques and the counterparts is examined. As case studies, four different values of m are considered, i.e., m ∈ {4, 8, 16, 32}. Moreover, the data samples x_k are generated as in (<ref>) using n = 4m samples, with R = T + I. The Toeplitz matrix T is generated assuming 3 equal-power sources, i.e., with p = [5, 5, 5], whose frequencies are randomly selected (at each trial) such that two of them lie on the Fourier grid of the DFT matrix, with L = 2m−1, whereas the third one is drawn from a uniform distribution over [0, 2π]. The iterative algorithms have been run until the following condition is met[For the execution of the EM and MELT procedures, the exit condition is set as f(R_{t-1}) − f(R_t) ≤ 10^-4.]

p(X_{t-1}, R_{t-1}) − p(X_t, R_t) ≤ 10^-4,

with p(X, R) = Tr(X) + log|R| the objective function of problem (<ref>), or until the maximum number of iterations (set equal to 1000) is reached. The average computational times of the different algorithms (possibly with different values of the hyperparameters) are reported in Table <ref>. The results show that ATOM2 has, in general, a longer execution time than ATOM1. This is because the inner loop of ATOM2 (based on Dykstra's algorithm) requires a higher number of iterations, and hence a longer run time to converge, than the ATOM1 inner loop (implemented via ADMM), especially when γ_0 is small, since the metric in which the distance is minimized becomes more and more ill-conditioned. However, when γ_0 = 10^-1, the run times of ATOM1 and ATOM2 are comparable and similar to those of MELT and EM. Interestingly, Table <ref> pinpoints that, for γ_0 sufficiently small, i.e., 10^-4, ATOM2 is generally able to reach MSE values smaller than ATOM1, reasonably due to its adaptive step-size strategy (<ref>), which allows it to provide better quality estimates than ATOM1 as the outer-loop iterations increase. It can also be seen that EM has the least computational time (at large values of m). Nevertheless, as shown in Table <ref>, although the proposed algorithms have a slightly longer computational time, the obtained estimates are superior, in terms of MSE, to those provided by MELT and EM.
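For reproducibility of setups like the above, the snapshot model x_k = R^{1/2} w_k and the gridless Caratheodory-type construction of (<ref>) can be sketched as follows (our code; the frequency/power values in the usage line are taken from the on-grid experiment above):

```python
def toeplitz_cov(freqs, powers, m):
    """Gridless construction R = A P A^H as in (1)."""
    A = np.exp(1j * np.outer(np.arange(m), np.asarray(freqs)))
    return (A * np.asarray(powers, dtype=float)) @ A.conj().T

def snapshots(R, n, rng=None):
    """n i.i.d. circular Gaussian snapshots x_k = R^{1/2} w_k, with
    unit mean square value per entry of w_k."""
    rng = np.random.default_rng(rng)
    m = R.shape[0]
    w, U = np.linalg.eigh(R)
    R_half = (U * np.sqrt(np.maximum(w, 0))) @ U.conj().T
    W = (rng.standard_normal((m, n))
         + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    return R_half @ W

# e.g., R = toeplitz_cov([0.5712, 1.1424, 2.2848], [3, 6, 4], m=6)
#       X = snapshots(R, n=20)
```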
Interestingly, as the data dimension increases, the average MSE values reached by ATOM2 with different γ_0 parameters become closer and closer. Therefore, for sufficiently large data sizes, i.e., m ≥ 32, γ_0 = 10^-1 represents an appropriate choice for the ATOM2 implementation, as it offers good performance with a reduced computational burden. §.§ MSE vs n for Toeplitz covariance matrix For this case study, it is assumed m = 15, with the number of samples n ranging between 50 and 500 in steps of 50. The data x_k ∈ ℂ^15 are again simulated according to (<ref>). Precisely, two different experiments are considered, whereby the true Toeplitz covariance matrix is generated using on-grid[The frequencies used in the first experiment are: [0.2167, 0.6500, 1.0833, 1.3, 1.5166, 1.9500, 2.3833, 2.8166, 3.2499, 3.6832, 4.1166, 4.5499, 4.9832, 5.4165, 5.8499] rad. Their corresponding powers increase linearly from 1 to 15 with a unit step.] and off-grid frequencies[For the off-grid simulation, the frequencies [1.3, 2.8166, 4.9832, 5.8499] rad are replaced with [1.25, 3.01, 5.20, 5.8] rad, respectively.], respectively. The resulting MSE, computed over 1000 Monte Carlo trials, is illustrated in Fig. <ref>. Inspection of the curves depicted in Fig. *fig:MSE_a shows that, regardless of the number of samples n, in the first experiment ATOM1 and ATOM2 almost reach the CRB, whereas EM and MELT yield slightly better performance, resulting in a deviation from the CRB. This can be explained by observing that the derived CRB does not exploit the information that the frequencies lie on the grid. Fig. *fig:MSE_b highlights that in the second experiment ATOM1 attains the best performance, with results quite close to the CRB and slightly better than ATOM2, with a limited gap between the corresponding curves. Furthermore, MELT and EM exhibit similar MSE values, which seem to saturate as n increases. The performance behavior of Fig. *fig:MSE_b stems from the observation that, unlike MELT and EM, ATOM1 and ATOM2 are gridless methods, delivering the same performance regardless of the source frequencies. §.§ MSE vs n for banded Toeplitz covariance matrix This subsection analyzes the performance in the case of a covariance matrix belonging to the set of banded Toeplitz matrices. In particular, the same simulation setup as in Section <ref> is considered, but enforcing the underlying covariance matrix to have bandwidth b = 6. To this end, R is constructed by alternately projecting a random Hermitian matrix onto the set of banded Toeplitz matrices and the set of PSD matrices. Moreover, for this case study, ATOM2 is implemented according to the procedure described in Section <ref>, namely explicitly including the banded Toeplitz structure in the constraint set. Fig. <ref> highlights that the bespoke implementation of ATOM2 delivers the best performance, with MSE values very close to the CRB. Furthermore, MELT and EM share the same performance, with a noticeable gap w.r.t. ATOM2, which is expected since these algorithms do not leverage the banded structure of the covariance matrix. §.§ MSE vs n for BT (TBT) covariance matrix Here, the capabilities of ATOM2 are analyzed in the context of a covariance matrix with TBT structure. To this end, assuming m = 16 and p = 4 blocks (each having block size l = 4), the covariance matrix is modeled as R = R_1 ⊗ R_1, where R_1 ∈ ℂ^l×l is a Toeplitz matrix constructed as in subsection <ref>, with frequencies [0.6, 1.4, 3.2, 5.1] rad and powers [3, 6, 4, 1].
Thus, each data snapshot x_k is drawn according to (<ref>). The resulting MSE values (averaged over 1000 Monte Carlo trials) are displayed in Fig. <ref> versus the number of snapshots. Specifically, the performance of both the BT and the TBT extensions of ATOM2 (described in Section <ref>) is reported and compared with the CRB (see Appendix C in the supplementary material to this paper) as well as with two EM-based estimators, tailored respectively for BT/TBT covariance matrices <cit.>. Inspection of the results reveals that ATOM2 TBT uniformly achieves the lowest MSE, with ATOM2 BT ranking second. As previously highlighted, the superior performance of the proposed method stems from the design criterion, which does not require reparametrizing the covariance matrix using the CE. §.§ Radar Application In this subsection, the performance of the covariance estimation algorithms is evaluated with reference to the average achievable SINR in an adaptive radar spatial processing context. To this end, let us consider a radar system equipped with a uniform linear array of m = 6 sensors, pointing toward the boresight direction. The inter-element distance between the sensors is set equal to d = λ/2, where λ is the radar operating wavelength. For this simulation scenario, the interference covariance matrix is modeled as R = R_s + σ_a² I, where σ_a² is the power level of the white disturbance noise (assumed without loss of generality equal to 0 dB) and R_s is given by

R_s = ∑_{l=1}^J σ_l² s(ϕ_l) s(ϕ_l)^H,

where J is the number of uncorrelated narrow-band jammers and, for the l-th jammer, s(ϕ_l) = (1/√m)[1, e^{j(2π/λ) d sin(ϕ_l)}, …, e^{j(m−1)(2π/λ) d sin(ϕ_l)}]^T is the steering vector in its direction of arrival ϕ_l, and σ_l² the corresponding interferer power. The capabilities of the estimation methods are analyzed by means of the average SINR, computed as

SINR_avg = (1/K) ∑_{i=1}^K |ŵ_i^H s(θ)|² / (ŵ_i^H R ŵ_i),

where K = 500 is the number of Monte Carlo trials and ŵ_i = R̂_i^{-1} s(θ) is the estimate of the optimal weight vector for adaptive spatial processing, with R̂_i the estimate of the interference-plus-noise covariance matrix for the i-th trial, computed either via the sample covariance matrix or by enforcing the Toeplitz structure in the covariance matrix and employing the estimators ATOM1, ATOM2, EM, and MELT. More precisely, J = 2 jammers, with powers σ_1² = 30 dB and σ_2² = 20 dB, impinging on the array from θ_1 = 9.8° and θ_2 = −8.8°, respectively, are considered. As comparison terms, the optimum SINR, i.e., SINR_OPT = s(θ)^H R^{-1} s(θ), and the performance of the Sample Matrix Inversion (SMI) beamformer are included too. The average SINR versus θ ∈ 𝒯, with 𝒯 = [−π/2, π/2] discretized with 500 equally spaced points, is shown in Fig. <ref> for n ∈ {m, 2m, 3m}. Inspection of the plots highlights that, as the number of samples n increases, the results achieved by ATOM1 and ATOM2 get closer and closer to the optimum, yielding superior performance w.r.t. the counterparts. § CONCLUSION In this paper, the MLE problem for TSC matrices has been addressed. Precisely, by suitably reformulating the MLE optimization problem and leveraging the MM framework, two iterative algorithms, ATOM1 and ATOM2, have been developed. Both inherit the key properties of MM, i.e., they monotonically decrease the underlying cost function with guaranteed convergence to a stationary point of the equivalent MLE problem. Subsequently, ATOM2 has been extended to handle covariance matrix MLE under other Toeplitz-related structures, such as banded Toeplitz, BT, and TBT.
Simulation results have indicated that the proposed algorithms can perform better than some state-of-the-art techniques in terms of the MSE and SINR metrics. Some possible future research directions are now outlined. In particular, ATOM2 could be further extended to include the case of low-rank TSC matrices, with the rank assumed either known or unknown at the design stage, as well as covariance matrices with an upper bound on the condition number. Another possible extension of the proposed technique could be MLE of a Toeplitz covariance matrix assuming a compound Gaussian distribution for the underlying data, which has a significant application in low-grazing-angle target detection <cit.>. Moreover, acceleration methods inspired, for instance, by the SQUAREd iterative Methods (SQUAREM) <cit.> could be investigated. Finally, the design of sub-optimal optimization strategies (e.g., based on the gradient projection method) with an improved computational burden (a valuable feature for real-time applications) is definitely worth pursuing.

§ APPENDIX A: PROOF OF EQUIVALENCE BETWEEN (8) AND (10) Let R^⋆ be an optimal solution to (8); then (R^⋆, X^⋆), with X^⋆ = D^H R^{⋆−1} D, is feasible for (10) and the two problems have the same objective values. This means that

v(8) ≥ v(10),

where v(·) indicates the optimal value of the corresponding optimization problem. Moreover, for any fixed R_1 ≻ 0, concentrating the objective function of (10) with respect to X (which is tantamount to placing X = D^H R_1^{-1} D), it follows that the concentrated optimization problem is

minimize_{R_1 ≽ 0}  Tr(R_FB R_1^{-1}) + log|R_1|,

due to the Schur complement Theorem and the monotonicity of the trace operator with respect to the generalized matrix inequality “≽”. Finally, (8) being solvable by assumption, any minimizer of (<ref>) satisfies R_1^⋆ ≻ 0, with a corresponding optimal solution to (10) given by (R_1^⋆, D^H R_1^{⋆−1} D). This implies that

v(8) ≤ v(10).

Capitalizing on (<ref>) and (<ref>) as well as the above considerations, it follows that v(8) = v(10); given an optimal solution (R_1^⋆, X_1^⋆) to (10), R_1^⋆ is also optimal to (8) and, vice versa, given an optimal solution R^⋆ to (8), (R^⋆, X^⋆) is an optimal point to (10).

§ APPENDIX B: PROOF OF THEOREM 3.2 To begin with, let us denote by h(S|S_t) either the objective function involved in the surrogate optimization problem of ATOM1 (12) or that of ATOM2 (15), where S = diag(X, R). This function, regardless of the method, satisfies the following two inequalities

h(S_t|S_t) = l(S_t),
h(S_{t+1}|S_t) ≥ l(S_{t+1}),

where l(S) = Tr(X) + log|R|. Leveraging the above inequalities, it follows that

l(S_{t+1}) ≤(a) h(S_{t+1}|S_t) ≤(b) h(S_t|S_t) =(c) l(S_t).

In (<ref>), the inequality (a) and the equality (c) stem from (<ref>) and (<ref>), respectively; besides, the inequality (b) is obtained by exploiting the fact that ATOM1 and ATOM2 globally solve the corresponding convex surrogate optimization problem. Therefore, (<ref>) implies that the sequence of objective values of Problem (16) generated by the proposed algorithms is monotonically decreasing, i.e.,

l(S_0) ≥ l(S_1) ≥ l(S_2) ≥ ⋯

Next, let us denote by S̄ a cluster point of {S_t}, and let {S_{r_t}} be a subsequence of {S_t} converging to S̄. Then, from (<ref>), (<ref>), and (<ref>),

h(S_{r_{t+1}}|S_{r_{t+1}}) = l(S_{r_{t+1}}) ≤ l(S_{r_t + 1}) ≤ h(S_{r_t + 1}|S_{r_t}) ≤ h(S|S_{r_t}), ∀ S.

Thus, letting t → ∞,

h(S̄|S̄) ≤ h(S|S̄),

which implies that h'(S̄|S̄; d) ≥ 0, where h'(·|S̄; d) is the directional derivative of the surrogate function at the point S̄ in a feasible direction d.
Finally, by Proposition 1 in <cit.>, the surrogate function h(·|S̄) and the objective function l(·) have the same first-order behavior at S̄. Therefore, h'(S̄|S̄; d) ≥ 0 implies that l'(S̄; d) ≥ 0. Hence, S̄ is a stationary point of the objective function l(S).

§ APPENDIX C: CRB OF THE BANDED TOEPLITZ, BT, AND TBT COVARIANCE MODELS Herein, the CRBs of the banded Toeplitz, BT, and TBT covariance models are provided. §.§ Banded Toeplitz matrix In the case of a banded Toeplitz matrix with bandwidth b, the first row of the covariance matrix R ∈ ℍ^m×m has only b+1 non-zero terms. Therefore, R can be parameterized via θ = [r_1, Re(r_2), ⋯, Re(r_{b+1}), Im(r_2), ..., Im(r_{b+1})]^T ∈ ℝ^2b+1. Besides, R can be expressed in terms of the basis matrices A_g^Toep and real coefficients as

R = ∑_{g=1}^{b+1} θ_g Re(A_g^Toep) + j ∑_{g=b+2}^{2b+1} θ_g Im(A_{g−b}^Toep),

and consequently

∂R/∂θ_i = Re(A_i^Toep) for 1 ≤ i ≤ b+1; j Im(A_{i−b}^Toep) for b+2 ≤ i ≤ 2b+1.

Substituting ∂R/∂θ_i into (34) yields the FIM for the banded Toeplitz covariance matrix. §.§ Toeplitz-block-Toeplitz matrix Before proceeding further, it is worth noting that a TBT matrix composed of p blocks of size l can be parameterized by the vector θ = [θ_0^T, θ_1^T, …, θ_{p−1}^T]^T ∈ ℝ^{2l−1+(p−1)(4l−2)}, whereby θ_0 = [r_{0,1}, Re(r_{0,2}), …, Re(r_{0,l}), Im(r_{0,2}), …, Im(r_{0,l})]^T ∈ ℝ^{2l−1} and θ_w = [Re(r_{w,1}), …, Re(r_{w,l}), Im(r_{w,1}), …, Im(r_{w,l}), Re(c_{w,2}), …, Re(c_{w,l}), Im(c_{w,2}), …, Im(c_{w,l})]^T ∈ ℝ^{4l−2}, w = 1, …, p−1, with r_{w,n} and c_{w,n} the n-th element of the first row and of the first column of the block R_w, respectively. Indeed, the TBT covariance matrix can be expressed as

R^TBT = I_p ⊗ R_0 + ∑_{w=1}^{p−1} (T_w ⊗ R_w^H + T_w^T ⊗ R_w),

where

R_0 = ∑_{g=1}^l θ_{0,g} Re(A_g^Toep) + j ∑_{g=l+1}^{2l−1} θ_{0,g} Im(A_{g−l+1}^Toep)

and, for w = 1, …, p−1,

R_w = ∑_{g=1}^l [θ_{w,g} + j θ_{w,g+l}] B_g + ∑_{g=2l+1}^{3l−1} [θ_{w,g} + j θ_{w,g+l−1}] B_{g−2l+1}^T,

with θ_{w,g} the g-th element of θ_w, B_g = A_g^Toep for g = 1 and B_g = (1/2)(Re(A_g^Toep) + Im(A_g^Toep)) elsewhere, whereas the (i,k)^th element of the matrix T_w ∈ ℝ^p×p is given by

[T_w]_{i,k} = 1 if i−k = w; 0 otherwise.

That said, ∂R^TBT/∂θ_{w,g} is given by

∂R^TBT/∂θ_{w,g} = I_p ⊗ Re(A_g^Toep) for 1 ≤ g ≤ l, w = 0; I_p ⊗ j Im(A_{g−l+1}^Toep) for l+1 ≤ g ≤ 2l−1, w = 0; T_w ⊗ B_g^T + T_w^T ⊗ B_g for 1 ≤ g ≤ l, w > 0; T_w ⊗ (−j) B_{g−l}^T + T_w^T ⊗ j B_{g−l} for l+1 ≤ g ≤ 2l, w > 0; T_w ⊗ B_{g−2l+1}^T + T_w^T ⊗ B_{g−2l+1} for 2l+1 ≤ g ≤ 3l−1, w > 0; T_w ⊗ (−j) B_{g−3l+2}^T + T_w^T ⊗ j B_{g−3l+2} for 3l ≤ g ≤ 4l−2, w > 0;

which, employed in (34), yields the FIM for the TBT covariance matrix.
http://arxiv.org/abs/2307.04292v1
20230710005828
A Demand-Driven Perspective on Generative Audio AI
[ "Sangshin Oh", "Minsung Kang", "Hyeongi Moon", "Keunwoo Choi", "Ben Sangbae Chon" ]
eess.AS
[ "eess.AS", "cs.AI" ]
A Demand-Driven Perspective on Generative Audio AI. Sangshin Oh*, Minsung Kang*, Hyeongi Moon, Keunwoo Choi, Ben Sangbae Chon (Gaudio Lab, Inc., Seoul, South Korea; * equal contribution). Correspondence: Ben Sangbae Chon <[email protected]>.

To achieve successful deployment of AI research, it is crucial to understand the demands of the industry. In this paper, we present the results of a survey conducted with professional audio engineers, in order to determine research priorities and define various research tasks. We also summarize the current challenges in audio quality and controllability based on the survey. Our analysis emphasizes that the availability of datasets is currently the main bottleneck for achieving high-quality audio generation. Finally, we suggest potential solutions for some revealed issues with empirical evidence. § INTRODUCTION The use of audio generative models has the potential to significantly impact a variety of industries. Although essential, the process of creating foley effects is often tedious, non-reproducible, and lacks scalability. Moreover, the utilization of pre-recorded sounds is not conducive to real-time or interactive applications, rendering it inadequate for fields like gaming, the metaverse, or any domain requiring the simulation of lifelike environments. The advent of generative audio AI offers a promising solution to address these limitations, significantly impacting areas like film production, gaming, social platforms, and more. Audio synthesis research has a long history <cit.>, but we will focus on data-driven approaches, as they are the recent pioneers with huge potential. Current generative audio AI is still in its early stages, necessitating further advancements in various aspects. We present this paper to provide a demand-driven perspective on task definitions, challenges, and potential solutions within audio generation. Specifically, our focus is on general audio, excluding speech and music. The key contributions of this paper include: * A survey of individuals working in movie sound production to share insights into industry-side demands. * Detailed definitions and a review of distinct tasks in audio generation regarding input types and conditions. * A summary of the related challenges towards industrial demands and a proposal of potential solutions supported by empirical evidence, including a method with which we achieved 2nd place in the foley synthesis challenge at DCASE 2023. § DEMANDS FROM INDUSTRY To gather insights regarding the impact of audio generative models on the industry, we first interviewed two professionals from the field of movie sound production. They highlighted that their role extends beyond that of sound technicians, as they contribute to the artistic dimension of creating immersive and captivating sound experiences. Despite the inevitably laborious nature of foley and sound effect recording, they are compelled to record new sounds since existing sounds are hardly reusable. While they have a vast library of previous sound stems, there is effectively no efficient method at hand for searching and finding suitable sounds. Even if they find a suitable sound, they have to spend time editing the time synchronization and sound tone. Based on this knowledge, we conducted a survey involving 18 individuals working in movie sound production, addressing the topic of AI audio generation.
We first presented them with some examples of AI image generation applications and a demo page[<https://audioldm.github.io/>] of a recent text-to-audio model <cit.>. We then asked the following three primary questions with multiple-choice options.

Q1. What are the major challenges faced in foley recording? The most frequently selected option for this question was the time synchronization problem. Following that, respondents expressed the importance of audio quality and consistency in tone with the synchronous recording. In the additional comments, respondents emphasized again that for foley sound, audio quality, synchronization with the scene, and consistency in tone with other sound sources are crucial, to the point that, without good synchronization, some might only consider using AI generation for ambient sounds. This indicates that relying solely on text-based conditioning may not be sufficient for a majority of use cases.

Q2. What are the limitations of the current text-conditioned audio generation as a product? The survey result is plotted in Figure <ref>. For this question, it was found that audio quality presents the most significant challenge for practical usage. According to the comments, the concerns about quality encompass aspects such as low fidelity, low sampling rate, roughness, and other related factors. A majority of respondents expressed complaints regarding the sample rate. It is noteworthy that while the industry requires full-band signals at 48 kHz or higher, most current systems still operate within the 16-24 kHz range <cit.>. Creativity, the second most frequently chosen category, refers to the generation of new sounds that fulfill artistic intentions, e.g., creating "the sound of a lightsaber in Star Wars." The terms edit and text, which received the third and fourth highest numbers of votes, indicate problems of controllability.

Q3. How would you like to condition the audio generation? As shown in Figure <ref>, the most frequently chosen option is the utilization of video for time synchronization and achieving an appropriate sound tone. More than half of the respondents were interested in generating sounds similar to reference audio samples. The third and fourth most popular options, namely interpolation (interp.) and consistency (consistn.), are related to refining the generated audio based on reference audio samples. The respondents seemed to express their hope for a more efficient workflow in Q3, in contrast to stating their expectations in Q2.

This survey presents important remarks for generative audio research. First, texts and videos are complementary to each other towards a more complete generative audio system. Second, sound and event synchronization is an important topic that deserves more attention. Third, although somewhat beyond our topic, high-quality audio indexing, search, and separation may also be a solution for some of the problems generative audio AI aims to solve. Based on this understanding, we delve into the current state and challenges of the audio generation field in the following sections.

§ TASK DEFINITIONS

In a recent proposal paper on the foley sound synthesis challenge <cit.>, the audio generative AI task is specified based on the input and output types. The authors outline three distinct input types: i) category index, ii) text description, and iii) videos.
While the categorization of output types is not explicitly stated, it can be inferred as follows: i) individual foley sounds representing a single event, ii) a combination of multiple events and/or ambient sounds, and iii) a comprehensive soundtrack comprising foley sounds, ambient elements, and spatially enhanced mixing. We focus on the input types, since the determination of output types is primarily governed by technical feasibility, allowing only a limited scope with the current technology.

§.§ Input Types

First, a category index, indicating a single type of audio event, is the simplest form of input for a sound synthesis system. This was adopted in some previous works <cit.> and in this year's DCASE Task 7 <cit.>. Solutions with this approach would improve foley recording processes for some popular categories such as dog barks, door slams, or footsteps. The second type is text descriptions, as employed in recent research <cit.> relying on audio caption datasets. There are several promising aspects associated with this text-to-audio approach: i) extensive research has already been conducted on text-to-X generation (e.g., text-to-image generation studies <cit.>), which simplifies its adaptation for audio generation purposes; ii) the familiarity of users with UI/UX utilizing text inputs further supports the feasibility of this approach. However, there are difficulties as well: i) compared to text-image pairs, there is a scarcity of text-audio pairs available for training models <cit.>; for example, the number of items in AudioCaps <cit.>, the largest audio captioning dataset, is 0.013% of (or 7561 times smaller than) that of LAION-400M, a text-image pair dataset <cit.>; ii) text input has limitations in providing highly detailed descriptions at a professional level, as audio engineers rely on precise controls like knobs and sliders to make fine adjustments to the sound (e.g., equalizers). Third, video input types have their own pros and cons. Unlike the previous input types, videos may provide the exact timings of events <cit.>. As discussed in Section 2, there is huge potential for improving the workflow of video creation in this scenario through efficient time synchronization. However, the video itself does not provide complete information: it is common that not everything visible should sound, and not everything that sounds is visible. Additionally, there are deliberate artistic intentions involved in video creation, such as muting or exaggerating certain sounds, and these artistic decisions may vary significantly. Therefore, when developing video-to-sound generation methods, the ability to edit and manipulate the generated audio becomes crucial, just as it is for text-based generation approaches, as we will discuss in the following section.

§.§ Conditioning

Conditioning can be viewed as a form of input in a broader sense and is deeply related to controllability and editability. AudioLDM pioneered sound editing through text-based approaches <cit.>, and we believe that this direction of research will continue toward more diverse, intuitive, and fine-grained conditioning. For example, users may want to control factors such as sound bandwidth, F0 contours, and temporal and spectral envelopes. Our exploration of these product development considerations will continue in the following sections.

§ CHALLENGES

§.§ Dataset Improvement for Audio Quality

Recently, some generative AI products have been successfully deployed for language and images <cit.>.
However, the current state of audio generation research does not seem mature enough to be adopted into professional sound production. As audio quality was the most prominent issue in Figure <ref>, we focus in this section on dataset issues and potential solutions to improve the generated audio quality.

Sources of the commercial sound-effect libraries listed in Table <ref>: BBC Sound Effects <https://sound-effects.bbcrewind.co.uk>, Epidemic Sound <https://www.epidemicsound.com/sound-effects/>, Free To Use Sounds <https://www.freetousesounds.com/all-in-one-bundle/>, Sonniss GameAudioGDC <https://sonniss.com/gameaudiogdc>, We Sound Effects <https://wesoundeffects.com/we-sound-effects-bundle-2020/>, and Paramount Odeon Sound Effects <https://www.paramountmotion.com/odeon-sound-effects>.

First of all, the current data scarcity deteriorates model training and the resulting audio quality. Compared to image generation datasets that go beyond a few billion pairs <cit.>, there is much less text-paired audio data available <cit.>. Moreover, most of such paired datasets are weakly labeled, i.e., their labels or captions lack time resolution. This is problematic because it is common practice to slice audio signals for ease of training and for memory-related reasons: since the text in each pair describes the audio only coarsely along the time axis, there is a risk of mismatch when the audio signal is sliced into smaller segments. Augmentation methods <cit.> or a contrastive embedding network <cit.> can mitigate this, but not cure it entirely. The characteristics of audio itself exacerbate the problem further. Separating foreground and background audio sources is a difficult problem, and obtaining isolated audio recordings remains costly. The spatial characteristics of the recording environment often have negative effects on the recording quality. Altogether, there are many factors that make it tricky to create a studio-quality audio dataset. We list available audio datasets in Table <ref>. Since the largest datasets in the list are collected or curated from crowd-sourced audio <cit.> or video <cit.>, their recording conditions vary and are usually not good. Thus, samples from those datasets often suffer from severe background noise, low recording bandwidth and bit rate, and various types of distortion. Clean datasets are limited to several commercial sound effect libraries.

For this trade-off between more data and clean data, we propose a solution called quality-aware training (QAT). This can be done simply by prompting, i.e., appending labels indicating the quality of the dataset to the text input. QAT enables the utilization of a broader range of datasets. During the training phase, a model can learn from both clean and noisy datasets with quality labels. As a result, the model learns not only the concepts of different audio events but also their audio quality; i.e., the model acquires compositionality of audio events and audio quality. During the inference phase, we can force the model to generate clean signals by conditioning it accordingly, i.e., by appending 'clean' labels to the text input. This enabled us to use all data pairs regardless of their quality without deteriorating the output quality. In our experience, this approach let us control the audio quality, reverberation, signal bandwidth, and audio event independently, and achieve 2nd place in the recent foley synthesis challenge at DCASE 2023 <cit.>. Details about the experiments are provided in Appendix B.
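As an illustration of how QAT can be implemented by prompting, the following minimal sketch appends quality tags to captions during training and forces the 'clean' mode at inference; the tag vocabulary and dataset names are illustrative rather than the exact labels used in our system.

```python
# Minimal sketch of quality-aware training (QAT) by prompting, assuming a
# generic text-conditioned generator. Tag vocabulary and dataset names are
# illustrative placeholders, not the actual labels used in our experiments.

DATASET_QUALITY = {
    "commercial_sfx": "clean",   # studio-quality sound effect library
    "crowd_sourced": "noisy",    # variable recording conditions
}

def tag_caption(caption: str, dataset: str) -> str:
    """Append a quality label to the text condition during training."""
    return f"{caption}, {DATASET_QUALITY[dataset]} recording"

def training_pair(example: dict) -> tuple:
    """example = {'caption': str, 'audio': ..., 'dataset': str}"""
    return tag_caption(example["caption"], example["dataset"]), example["audio"]

def inference_prompt(user_prompt: str) -> str:
    """At inference, condition on the 'clean' tag regardless of the
    quality mixture seen during training."""
    return f"{user_prompt}, clean recording"

print(inference_prompt("footsteps on gravel"))
```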
§.§ Methodological Improvement for Controllability

Controllability was another major concern in our survey, as audio engineers have specific intents about how the generated output should sound. Audio generation may take a long time, hence it is crucial for deployable audio AI systems to have effective controllability.

Classifier-free guidance is a widely adopted solution for this problem across diffusion-based and Transformer-based generative models. By extrapolating intermediate features or logits, it trades off some sample quality for diversity, which makes exploration easier for the users of generative audio AI systems. Most of the recent text-to-audio generation research has adopted this technique <cit.>. Controllability can also be attained by introducing new features or new modalities, for example a reference audio or a conditioning video, as in Figure <ref>. As AudioLDM demonstrated audio manipulation without fine-tuning <cit.>, we believe text-guided audio-to-audio generation is a compelling research direction towards deployable generative audio AI. Video-based foley generation has been less popular, but it would be an interesting direction for future research along with the existing work <cit.>. Finally, conventional signal features such as the F0 contour or envelopes can be a great user interface for experienced audio engineers. As those features are easy to extract from audio signals, it is plausible to use them as one of the inputs during the training phase, and then build a user interface that allows control of the generated output by modifying the features.

§ CONCLUSION

In this paper, we presented a survey conducted with sound engineers in the movie industry. Based on the survey results, we have provided task definitions for audio generation research and identified related research challenges. Our objective was to bridge the gap between current research and industry practices, offering potential solutions to address the challenges of audio quality and controllability. Surprisingly, there are limited opportunities for researchers to gain insights from the industry side. We believe that this work serves as a valuable starting point for understanding the difficulties faced by both researchers and potential users, ultimately aligning our efforts to solve real-world problems. While our perspective focuses on the movie industry, it is important to acknowledge that neighboring industries may face different challenges with varying priorities. For example, the demand for real-time generation systems may be stronger in the virtual reality or gaming industry, while the standards for audio quality or artistic intent may be lower for non-professional movie creation platforms such as YouTube. We hope that our work represents a meaningful step towards comprehending the diverse demands placed on generative audio AI and its diverse applications.

§ DETAILS OF SURVEY IN SECTION <REF>

§.§ Exact expression of the options in Figure <ref> and Figure <ref>

§.§ Results on the other questionnaire

§ EXPERIMENT RESULTS FOR SECTION <REF>
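As a supplement to the experiment details, a minimal sketch of the classifier-free guidance computation discussed in Section <ref> is given below; the guidance scale and model calls are placeholders, and the same extrapolation applies to diffusion noise predictions or Transformer logits.

```python
import numpy as np

def cfg_combine(out_uncond: np.ndarray, out_cond: np.ndarray, w: float) -> np.ndarray:
    """Classifier-free guidance: extrapolate from the unconditional output
    towards the conditional one. w = 1 recovers the conditional model;
    larger w strengthens the text condition at some cost in diversity."""
    return out_uncond + w * (out_cond - out_uncond)

# Inside a diffusion sampling loop the two model calls would be (placeholders):
#   eps_c = model(x_t, t, text_embedding)
#   eps_u = model(x_t, t, null_embedding)
#   eps   = cfg_combine(eps_u, eps_c, w=3.0)
print(cfg_combine(np.zeros(4), np.ones(4), w=3.0))
```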
http://arxiv.org/abs/2307.04602v1
20230710144153
Inverse cascading for initial MHD turbulence spectra between Saffman and Batchelor
[ "Axel Brandenburg", "Ramkishor Sharma", "Tanmay Vachaspati" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
Inverse cascading for initial MHD turbulence spectra between Saffman and Batchelor
Axel Brandenburg, Ramkishor Sharma, Tanmay Vachaspati

In decaying magnetohydrodynamic (MHD) turbulence with a strong magnetic field, the spectral magnetic energy density increases with time at small wavenumbers k, provided the spectrum at low k is sufficiently steep. This is inverse cascading and occurs for an initial Batchelor spectrum, where the magnetic energy per linear wavenumber interval increases like k^4. For an initial Saffman spectrum that is proportional to k^2, however, inverse cascading is known not to occur. We study here the case of an intermediate k^3 spectrum, which may be relevant for magnetogenesis in the early Universe during the electroweak epoch. This case is not well understood in view of the standard Taylor expansion of the magnetic energy spectrum for small k. Using high resolution MHD simulations, we show that also in this case there is inverse cascading with a strength just as expected from the conservation of the Hosking integral, which governs the decay of an initial Batchelor spectrum.

§ INTRODUCTION

Standard hydrodynamic turbulence exhibits forward cascading, whereby kinetic energy cascades from large scales (small wavenumbers) to smaller scales (larger wavenumbers) <cit.>. This also happens in decaying turbulence, except that the rate of energy transfer to smaller scales then decreases with time <cit.>. In magnetohydrodynamic (MHD) turbulence, the situation is in many ways rather different. This is primarily owing to magnetic helicity <cit.>, which is a conserved quantity in the absence of magnetic diffusivity <cit.>. Magnetic helicity is an important property of MHD turbulence that is not shared with hydrodynamic turbulence, even though there is kinetic helicity, which is also an invariant if the viscosity is strictly vanishing <cit.>. However, this is no longer true when the viscosity is finite <cit.>. This is because kinetic helicity dissipation occurs faster than kinetic energy dissipation, whereas magnetic helicity dissipation occurs more slowly than magnetic energy dissipation for finite magnetic diffusivity <cit.>. The importance of magnetic helicity conservation was recognized long ago by <cit.> and <cit.> for cases when it is finite on average. In that case, it leads to the phenomenon of an inverse cascade. In forced turbulence, this means that part of the injected energy gets transferred to progressively larger scales <cit.>. This process is at the heart of large-scale dynamos, which can be described by what is known as the α effect <cit.>, and is relevant for explaining the large-scale magnetic fields in stars and galaxies <cit.>. In decaying turbulence, on the other hand, inverse cascading leads to a temporal increase of the magnetic energy at the smallest wavenumbers. A similar phenomenon has never been seen in hydrodynamic turbulence, where the spectrum at small k remains unchanged. Even when the magnetic helicity vanishes on average, there can still be inverse cascading. In that case, it is no longer the conservation of the mean magnetic helicity density that is important, but that of the magnetic helicity correlation integral, also known as the Hosking integral <cit.>.
In nonhelical turbulence, the possibility of inverse cascading with an increase of spectral magnetic energy at small wavenumbers was originally only seen for steep initial magnetic energy spectra, E_M(k) ∝ k^4. Here, E_M(k) is defined as the spectral magnetic energy per linear wavenumber interval and is normalized such that ∫ E_M(k,t) dk = ⟨B^2⟩/2 ≡ ℰ_M(t) is the mean magnetic energy density. Those k^4 spectra were motivated by causality arguments, requiring that magnetic field correlation functions strictly vanish outside the light cone <cit.>. Such a field can be realized by a random vector potential that is δ-correlated in space, i.e., the values of any two neighboring mesh points are completely uncorrelated. The magnetic vector potential has therefore a k^2 spectrum, which implies that the magnetic field B = ∇ × A has a k^4 spectrum. For the case of a shallower E_M(k) ∝ k^2 spectrum, no inverse cascading has been found <cit.>. This was explained by the conservation of the magnetic Saffman integral <cit.>, which constitutes the coefficient of the leading quadratic term in the Taylor expansion of the magnetic energy spectrum for small k. The intermediate case of a k^3 spectrum may be realized during the electroweak epoch in cosmology due to a distribution of magnetic charges, as shown in <cit.> and <cit.>. The evolution of the magnetic field in this case is less clear. <cit.> reported weak inverse cascading, but it is not obvious whether this agrees with what should be expected based on the conservation of the Hosking integral, or whether it is some intermediate case in which the possible conservation of both the magnetic Saffman integral and the Hosking integral can play a role. Investigating this in more detail is the purpose of the present work.

§ PRELIMINARY CONSIDERATIONS

§.§ Relevant integral quantities in MHD

Three important integrals have been discussed in the context of decaying MHD turbulence. The first two are the magnetic Saffman and magnetic Loitsyansky integrals <cit.>, I_SM = ∫ ⟨B(x)·B(x+r)⟩ d^3r and I_LM = -∫ ⟨B(x)·B(x+r)⟩ r^2 d^3r, respectively. Here, angle brackets denote ensemble averages, which we approximate by volume averages. These two integrals are analogous to those in hydrodynamics, but with B being replaced by the velocity u. The third relevant quantity is the Hosking integral <cit.>, I_H = ∫ ⟨h(x) h(x+r)⟩ d^3r, where h = A·B is the magnetic helicity density. By defining the longitudinal correlation function M_L(r) through ⟨B(x)·B(x+r)⟩ = (1/r^2) d/dr (r^3 M_L), the integrals I_SM and I_LM emerge in the coefficients of the Taylor expansion of the magnetic energy spectrum <cit.>. A similar expansion also applies to the magnetic helicity variance spectrum <cit.>. For power spectra that decay sufficiently rapidly, a Taylor expansion of sin(kr)/(kr) gives Sp(B)|_k→0 = (2k^2/π) ∫ d/dr (r^3 M_L) (1 - k^2 r^2/6 + ...) dr ≡ (I_SM/2π^2) k^2 + (I_LM/12π^2) k^4 + ..., and Sp(h)|_k→0 = (I_H/2π^2) k^2 + ... . Here, Sp(h) = (k^2/8π^3 L^3) ∮_4π |h̃|^2 dΩ_k is the shell-integrated spectrum in a volume L^3, the tilde marks a quantity in Fourier space, and Ω_k is the solid angle in Fourier space, so that ∫ Sp(h) dk = ⟨h^2⟩, and likewise ∫ Sp(B) dk = ⟨B^2⟩. The definition of shell integration implies that Parseval's theorem in the form ⟨h^2⟩ L^3 = ∫ |h̃|^2 d^3k/(2π)^3 is obeyed. The magnetic energy spectrum is defined as E_M(k,t) = Sp(B)/(2μ_0), where μ_0 is the magnetic permeability, but in the following we measure B in units where μ_0 is set to unity. According to the expansion above, Sp(B) seems to be constrained to having only even powers of k in the limit k → 0.
Furthermore, Sp(B) ∝ k^2 when I_SM is finite and dominant, and likewise, Sp(B) ∝ k^4 when I_LM is finite and dominant. The expansion in powers of k in (<ref>) holds, however, only if the coefficients in the expansion are finite. This is the case if, for example, M_L is an exponentially decaying function of r. If, on the other hand, M_L decays only as a power law, the expansion does not hold, since higher order coefficients will be divergent. In such cases the leading order behavior in k may consist of odd (or even arbitrary) powers of k. A simple counterexample to the expansion in (<ref>) is provided by considering the case r^3 M_L ∝ r for large r in (<ref>). The specific case of E_M(k) ∝ k^3 occurs for magnetic fields produced during electroweak symmetry breaking, as discussed in <cit.> and <cit.>. In our numerical work we will compute both the compensated shell-integrated spectra, i.e., Sp(B) and Sp(h) divided by suitable powers of k, as well as the integrals using their definitions above.

§.§ Competition between I_SM and I_H

Using the Taylor expansion of the magnetic energy spectrum in equation (<ref>), we see that for initial Saffman scaling (E_M ∝ k^α with α=2), the magnetic Saffman integral I_SM must be non-vanishing. For initial Batchelor scaling (α=4), on the other hand, I_SM vanishes initially and cannot play a role. In that case, the conservation of I_H becomes important and leads to inverse cascading, which then also implies the non-conservation of I_SM <cit.>. For α=2, there are indications <cit.> that I_SM is slightly better conserved than the Hosking integral I_H, which enters the Taylor expansion of the magnetic helicity variance spectrum in equation (<ref>). Therefore, for α=2, Sp(B) continues to be determined by equation (<ref>), and I_H begins to decline in equation (<ref>). For α=4, on the other hand, I_SM=0 initially, but then both I_SM and I_LM begin to grow <cit.>. Our question here is what happens in the intermediate case when α=3. In that situation, Sp(B) and Sp(h) cannot be Taylor expanded, and it is unclear whether there is inverse cascading in that case, because it would require violation of the conservation of I_SM, or whether I_SM is conserved, as for α=2, and there is no inverse cascading.

§.§ Growth of spectral energy at small wavenumbers

We now want to quantify the growth of spectral energy at small wavenumbers. As in <cit.>, we use self-similarity, i.e., the assumption that the magnetic energy spectra at different times can be collapsed on top of each other by suitable rescaling. Thus, we write E_M(k,t) = ξ_M^-β ϕ(k ξ_M), where ξ_M(t) = ∫ k^-1 E_M(k) dk / ℰ_M is the integral scale and β depends on the relevant conservation law: β=2 for Saffman scaling and β=3/2 for Hosking scaling. This follows from the dimensions of the conserved quantity; see <cit.> for details. Next, we assume a certain initial subinertial range scaling, E_M ∝ k^α. Thus, for k ξ_M ≪ 1, we have E_M(k,t) ∝ ξ_M^(α-β) k^α. Assuming power-law scaling, ξ_M(t) ∝ t^q, we get lim_k→0 E_M(k,t) ∝ t^(α-β)q. Thus, inverse cascading is possible for α > β, so α=2 and β=3/2 could, in principle, still give rise to inverse cascading. Following <cit.>, we have q = 2/(β+3), so q=2/5 for β=2 and q=4/9 for β=3/2; see Table <ref> for a comparison of different theoretical possibilities for the various exponents. Thus, unless I_SM is conserved and there is therefore no inverse cascading, we expect lim_k→0 E_M(k,t) ∝ t^2/3 for cubic scaling (E_M ∝ k^3, i.e., between Saffman and Batchelor scalings) when the Hosking integral is conserved (β=3/2 and q=4/9).
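As a quick consistency check of the exponents used here and listed in Table <ref>, the following Python sketch evaluates the self-similar scaling relations; it assumes only the relations q = 2/(β+3), the scale-invariance line p = 2(1-q), and the low-k growth exponent (α-β)q stated above.

```python
from fractions import Fraction

def exponents(alpha: Fraction, beta: Fraction):
    """Exponents implied by self-similarity: xi_M ~ t^q with q = 2/(beta+3),
    E_M ~ t^-p on the scale-invariance line p = 2(1-q), and
    lim_{k->0} E_M(k,t) ~ t^{(alpha-beta) q}."""
    q = Fraction(2, 1) / (beta + 3)
    p = 2 * (1 - q)
    growth = (alpha - beta) * q
    return q, p, growth

for alpha in map(Fraction, (2, 3, 4)):
    for beta, label in ((Fraction(2), "Saffman"), (Fraction(3, 2), "Hosking")):
        q, p, g = exponents(alpha, beta)
        print(f"alpha={alpha}, {label}: q={q}, p={p}, low-k growth ~ t^({g})")
# For alpha=3 and Hosking scaling this gives q=4/9, p=10/9, growth t^(2/3),
# and for alpha=4 it gives t^(10/9), as quoted in the text.
```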
In the following, we present numerical simulations demonstrating that this is indeed the case.

§ SIMULATIONS

We perform simulations in a domain of size (2π)^3, so the lowest nonvanishing wavenumber is k ≡ k_1, where k_1=1 for Runs B and C, but 0.02 for Runs A and D. For Run B, we assume that the initial magnetic energy spectrum peaks at k_0 = 60 k_1, and therefore we consider spectral values at k=k_1 to approximate the limit k→0. We use N^3 = 2048^3 mesh points in all of our simulations, so the largest wavenumber is 1024. Our setup is similar to a run of <cit.> with α=4, which here corresponds to Run C. We also compare with some other runs that we discuss later. All simulations are performed with the Pencil Code <cit.>, which solves the compressible, isothermal equations using finite differences. In the numerical simulations, the sound speed is always chosen to be unity, i.e., c_s = 1. The initial position of the spectral peak is at k = k_0, whose numerical value is chosen to be 60 when the lowest wavenumber in the domain is unity, or, when using the data of <cit.>, k_0 = 1 and k_1 = 0.02, so that k_0/k_1 = 50. The magnetic diffusivity is η k_1/c_s = 2×10^-6 in Runs B and C, so η k_0/c_s = 1.2×10^-4. In some runs with α=2, we also present results for larger values of η. The magnetic Prandtl number, i.e., the ratio of kinematic viscosity ν to magnetic diffusivity, Pr_M = ν/η, is unity for Runs B and C. For Runs A and D, we have η k_0/c_s = 5×10^-5 and ν k_0/c_s = 2×10^-4, so Pr_M = 4.

§.§ Inverse cascading

The results for the magnetic energy and helicity variance spectra are shown in Figure <ref>, which demonstrates inverse cascading with E_M(k_1,t) ∝ t^2/3 and an approximately constant Sp(h) for k→0. The temporal increase at low k is compatible with Table <ref> for α=3, β=3/2, q=4/9, and thus (α-β)q = 2/3. Next, we compare in Figure <ref> compensated spectra, which would allow us to determine I_SM → 2π^2 Sp(B)/k^2, if it were flat for small k (but this is not the case here), and I_H → 2π^2 Sp(h)/k^2, which is approximately flat for small k. The upward trend with time in the peaks of the curves in Figure <ref>(a,b) reflects the fact that β=3/2 in equation (<ref>), so the compensated spectrum, E_M(k,t)/k^2 = ξ_M^-β ϕ(k ξ_M)/k^2 = ξ_M^(-β+2) ϕ̃(k ξ_M), scales with the decreasing peak wavenumber k_peak(t) ∝ ξ_M^-1 like k_peak^-1/2. Here, ϕ̃(κ) ≡ ϕ(κ)/κ^2 is a compensated version of ϕ(κ), and so the peak increases with time like ξ_M^1/2 ∝ t^q/2. The fact that the magnetic Saffman integral is not conserved is also demonstrated by the fact that the compensated curves are not flat, but show a bump. For comparison with earlier work, it may still be useful to quote approximate values of I_SM. Those are here based on the approximate height of the bumps; see the dotted horizontal lines in Figure <ref>(a,b). In Figure <ref>(d), we see that Sp(h) shows a distinctly downward trend with k for the smallest k values. This suggests that the conservation property of I_H begins to deteriorate, especially at late times, and that higher order terms begin to play a role. To clarify this further, more scale separation would be useful, i.e., a run with a larger value of k_0. Such runs at a resolution of 2048^3 mesh points are, however, rather expensive, but it is interesting to note that, even for the case of an initial k^4 spectrum, the compensated spectra show a similar downward trend with k when the numerical resolution is only 1024^3; see Figure 3(d) of <cit.>, which corresponds to our Run D.
It should also be noted that in Figure <ref>(d), the last time is t k_1 = 190, while in Figure <ref>(c), the last time is only t k_1 = 60. The two times correspond to t η k_0^2 ≈ 1.4 and 0.4, respectively.

§.§ Universal scaling constants

Given that I_H is reasonably well conserved and enters the evolution of the magnetic energy and correlation length, as well as the spectral envelope of the peak, through dimensional arguments, it is useful to determine the nondimensional coefficients in these relations. This was done recently for the cases α=2 and α=4; see <cit.>, who computed the coefficients C_H^(ξ), C_H^(ℰ), and C_H^(E), defined through the relations ξ_M(t) = C_i^(ξ) I_i^1/σ t^q, ℰ_M(t) = C_i^(ℰ) I_i^2/σ t^-p, and E_M(k) = C_i^(E) I_i^(3+β)/σ (k/k_0)^β, where the index i on the integrals I_i and the coefficients C_i^(ξ), C_i^(ℰ), and C_i^(E) stands for SM or H for magnetic Saffman and Hosking scalings, respectively, and σ is the exponent with which length enters in I_i: σ=5 for the magnetic Saffman integral (i=SM) and σ=9 for the Hosking integral (i=H). In the following, we focus on the case i=H, but refer to <cit.> for comparisons with i=SM. We recall that k_0 is the initial position of the spectral peak. Note that the last of these expressions describes an envelope under which E_M(k,t) evolves; see Figure <ref>(a) for an example. In principle, the nondimensional coefficients C_H^(ξ), C_H^(ℰ), and C_H^(E) could depend on other quantities characterizing the system, for example the magnetic Reynolds number, but they may also be universal, just like the Kolmogorov constant in the kinetic energy spectrum. To begin assessing the degree of universality of these nondimensional coefficients, we now consider the empirical laws for ξ_M(t), ℰ_M(t), and E_M(k,t) for the new case of α=3. As in earlier work, the nondimensional constants in the scaling laws for α=3 agree with those found earlier for α=4 <cit.>. Specifically, we have ξ_M(t) ≈ 0.12 I_H^1/9 t^4/9, ℰ_M(t) ≈ 3.7 I_H^2/9 t^-10/9, and E_M(k,t) ≲ 0.025 I_H^1/2 (k/k_0)^3/2. The quality of these asymptotic laws can be seen from the red lines in the last two panels of Figure <ref>. The blue lines show the expectation if the Saffman integral were conserved. As explained above, those are based on the position of the bumps in Figure <ref>(a,b), and are therefore only of limited use. A comparison of the coefficients with those found by <cit.> is given in Table <ref>. Note that in both panels, the solid and dashed blue lines show an asymptotic upward trend, reflecting again that the magnetic Saffman integral is not conserved.

§.§ Normalized Hosking and Saffman integrals

The runs of <cit.> had different mean magnetic energy densities, and also the minimum wavenumber k_1 was not unity, but k_1=0.02, unlike in the present cases, where k_1=1. Instead, the peak of the initial spectrum, k_0, was then chosen to be unity. To compare such different runs, it is necessary to normalize I_SM and I_H appropriately. On dimensional grounds, I_SM is proportional to ℰ_M ξ_M^3 and I_H is proportional to ℰ_M^2 ξ_M^5. By approximating the spectrum as a broken power law, as in <cit.>, E_M(k) = E_peak (k/k_peak)^α for k ≤ k_peak and E_M(k) = E_peak (k/k_peak)^-s for k > k_peak, where s=5/3 and s=2 were used to represent the inertial range slopes at early and late times, respectively, we find k_peak = ξ_M^-1 (α^-1+s^-1)/[(α+1)^-1+(s-1)^-1] and E_peak = ℰ_M ξ_M/(α^-1+s^-1). For α=2, we find the following reference values for the Saffman integral: I_SM^ref = 2π^2 ℰ_M ξ_M^3 × 250/99 for s=5/3, and 2π^2 ℰ_M ξ_M^3 × 16/9 for s=2.
For other values of α, the value of I_SM^ref is not meaningful, and only I_H^ref is computed, using equations (2.14) and (4.5) in <cit.>. It is given by I_H^ref = 8π^2 ℰ_M^2 ξ_M^5 [(α+1)^-1+(s-1)^-1]^3/(α^-1+s^-1)^5 × [1/(2α-3) + 1/(2s+3)]. In calculating the above expression, we assumed the magnetic field distribution to be Gaussian and its spectrum to be of the broken power-law form given above. These reference values are summarized in Table <ref>. In Table <ref>, we also list the ratios I_H/I_H^ref and I_SM/I_SM^ref, where I_H^ref ∝ ℰ_M^2 ξ_M^5 and I_SM^ref ∝ ℰ_M ξ_M^3 are defined quantitatively in Table <ref>. We have used here the actual values of α=2, 3, or 4, and s=2 in all cases, which describes the late-time inertial range well; see Figure <ref>(a). The former ratio, I_H/I_H^ref, varies only little, because the Hosking integral is always reasonably well conserved, except when the magnetic diffusivity is large. Near t η k_0^2 ≈ 0.1, this ratio has for all runs a well distinguished maximum; these maxima tend to be about 20% larger than the values at the end of the run, which are the reference values quoted in Table <ref>. The ratio I_SM/I_SM^ref, on the other hand, is not at all conserved for Runs B–D, and is then best characterized by a mild minimum at early times, which is the value quoted here. It is interesting to note that I_H/I_H^ref is about twice as large on the larger mesh (Run C) than on the smaller mesh (Run D). This is somewhat surprising. It should be noted, however, that Run C with the larger mesh actually had a larger magnetic diffusivity (η k_0/c_s = 7×10^-3) than Run D (η k_0/c_s = 5×10^-5); see Table <ref>. It is therefore possible that Run D was actually underresolved and that this has not yet been noticed. To reexamine the idea that for α=2, I_SM is better conserved than I_H, we compare their evolution for different models in Figure <ref>. In addition to the two high resolution models (with 2048^3 mesh points) with α=3 and 4, we also present the dependencies for the lower resolution models of <cit.> (with 1024^3 mesh points) with α=2 and 4. We see that in all cases, I_H is reasonably well conserved, except when the magnetic diffusivity is large. By contrast, I_SM is conserved only for α=2, and not at all for any other value of α. It is also remarkable that for α=2, I_SM appears to be much better conserved than I_H is for α=4 and 3. In fact, by comparing runs for α=2 with larger magnetic diffusivities (Runs A1 and A2), we find that I_H declines more rapidly (as expected), but I_SM seems completely unaffected by this. This reflects mainly the fact that the magnetic field at the lowest wavenumbers is indeed unchanged.

§.§ Limitations of the Taylor expansion

Given that the expansion in equation (<ref>) cannot be justified for α=3, the calculation of the magnetic Saffman integral as I_SM → 2π^2 Sp(B)/k^2 may be problematic. We therefore also compare with a direct calculation using a spectral method through Sp(B), analogous to the "box-counting method" of <cit.>; see their equation (2.9). This corresponds here to calculating first the function I_SM(R) = ∫ w_sph^BC(k;R) Sp(B) dk, where w_sph^BC(k;R) = (4π R^3/3) [6 j_1(kR)/(kR)]^2 is the weight function of <cit.>; see their equation (2.8). We then obtain I_SM as the limit of I_SM(R) for large values of R, but smaller than the system size L. Here we choose R = R_* = L/3, but we note that the exact choice of this value is not crucial.
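To make the box-counting procedure concrete, the following is a minimal numerical sketch of the estimate of I_SM(R) from a shell-integrated spectrum sampled on a uniform k grid; the broken power-law spectrum used as input is a synthetic stand-in rather than simulation data, and the grid parameters are illustrative.

```python
import numpy as np
from scipy.special import spherical_jn

def I_SM_of_R(k: np.ndarray, spB: np.ndarray, R: float) -> float:
    """Box-counting estimate I_SM(R) = integral of w(k;R) Sp(B) dk with the
    spherical weight w(k;R) = (4 pi R^3 / 3) [6 j_1(kR)/(kR)]^2 (see text)."""
    x = k * R
    w = (4.0 * np.pi * R**3 / 3.0) * (6.0 * spherical_jn(1, x) / x) ** 2
    return np.trapz(w * spB, k)

# Synthetic example: a broken power-law spectrum peaked at k0 (illustrative).
L = 2 * np.pi
k = np.linspace(1.0, 1024.0, 4096)
k0 = 60.0
spB = np.where(k <= k0, (k / k0) ** 2, (k / k0) ** -2)

# I_SM is read off as the plateau of I_SM(R) for R large but < L; here R* = L/3.
print(I_SM_of_R(k, spB, R=L / 3))
```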
In Figure <ref>(b), we compare the two methods of obtaining I_SM. We see that the box-counting method tends to give somewhat better conserved estimates of I_SM. For completeness, we also show the evolution of I_H from the box-counting method; see Figure <ref>(a). Here, both methods give virtually indistinguishable results.

§.§ Comments on non-Gaussianity

The question of non-Gaussianity is important in many aspects of cosmology. Not all its aspects are captured by kurtosis or skewness. In the work of <cit.>, it was already pointed out that, although the kurtosis was only slightly below the Gaussian value of three, there was a very strong effect on the statistics of the fourth order moments that enter in the calculation of I_H and Sp(h). In Figure <ref>, we compare Sp(h) at the initial and a later time from the numerical calculation with the semi-analytical calculation based on the actual magnetic energy spectra, assuming Gaussian statistics. As in <cit.>, we find also here a ten-fold excess of the actual spectra compared with the value expected based on the assumption of Gaussianity.

§.§ How special is the Saffman scaling for α=2?

We now address in more detail the case α=1.7, for which equation (<ref>) would predict lim_k→0 E_M(k,t) ∝ t^(α-β)q = t^4/45 ≈ t^0.09. This run is listed in Table <ref> as Run O. We have seen that, for small magnetic diffusivity, I_H is well conserved for all values of α; see Figure <ref>(a). On the other hand, I_SM appears to be well conserved only in the special case of α=2; see Figure <ref>(b). One possibility is therefore that, as long as α>2, we have inverse cascading, but not for α≤2. But the argument for not expecting inverse cascading relies heavily on the existence of I_SM and on it being non-vanishing. If we accept that for α=3, Sp(B) cannot be expanded in terms of k^2 and k^4, then this would also be true for α=1.7, which is a value between 3/2 and 2. One might therefore expect that also in this case, I_SM would not be conserved, and that the decay is governed by the conservation of I_H. This possibility was already listed in Table <ref>. In Figure <ref>(a) we show that there is no noticeable growth of lim_k→0 E_M(k,t). The inset, however, does show that there is an intermediate phase with very weak growth ∝ t^0.05. Given that the theoretically expected growth ∝ t^0.09 is also already very small, and that the degree of conservation of I_H is likewise limited, as seen in Figure <ref>(b), it is indeed possible that at larger resolution and smaller magnetic diffusivity, clearer inverse cascading might emerge.

§.§ Evolution in the pq diagram

There is a range of tools for assessing the decay properties of MHD turbulence. We have already discussed the determination of I_H and I_SM, and the potentially universal coefficients C_H^(ξ), C_H^(ℰ), and C_H^(E). We also discussed the close relation between the envelope parameter β in the relations above and the parameter q characterizing the growth of the correlation length, ξ_M ∝ t^q. There is also the parameter p characterizing the decay of magnetic energy, ℰ_M ∝ t^-p. Both p and q can also be determined as instantaneous scaling parameters through p(t) = -d ln ℰ_M/d ln t and q(t) = d ln ξ_M/d ln t, and their parametric representation p(t) versus q(t) gives insights into the properties of the system and how far it is from a self-similar evolution <cit.> and from the scale-invariance line, p = 2(1-q) <cit.>.
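The instantaneous exponents can be extracted from time series of ℰ_M(t) and ξ_M(t) by logarithmic differentiation; the sketch below illustrates this, using pure Hosking scaling as a placeholder input.

```python
import numpy as np

def instantaneous_pq(t, EM, xiM):
    """Instantaneous scaling exponents p(t) = -dln(E_M)/dln(t) and
    q(t) = dln(xi_M)/dln(t), via finite differences in log-log space."""
    lnt = np.log(t)
    p = -np.gradient(np.log(EM), lnt)
    q = np.gradient(np.log(xiM), lnt)
    return p, q

# Illustrative check with pure Hosking scaling, E_M ~ t^(-10/9), xi_M ~ t^(4/9):
t = np.logspace(0, 2, 200)
p, q = instantaneous_pq(t, t**(-10/9), t**(4/9))
print(p[100], q[100])   # close to 10/9 and 4/9; points lie on p = 2(1-q)
```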
In Figure <ref>, we show such a pq diagram for Runs B and C. We see that the points (q,p) for different times and for both runs cluster around (q,p) = (4/9, 10/9), as expected for Hosking scaling. The locations for Loitsyansky and Saffman scalings, (2/7, 10/7) and (2/5, 6/5), respectively, as well as for the fully helical case, (2/3, 2/3), are also indicated for comparison. A detailed assessment of the full range of scaling parameters is important for establishing the validity of Hosking scaling. Assessments based on comparisons of the parameter p for different runs may not be sufficient, and have led to inconclusive results; see <cit.> for recent results. The idea behind the Hosking phenomenology is therefore not yet universally accepted. In this connection, it should be noted that additional support for the validity of Hosking scaling came from two rather different numerical experiments. First, in applications to the Hall cascade, the Hosking phenomenology predicts the scalings q=4/13 and p=10/13, which were confirmed by simulations <cit.>. Second, in relativistic plasmas where the mean magnetic helicity density is finite, but the total chirality vanishes because the helicity is exactly balanced by the fermion chirality, the Hosking phenomenology predicts a decay of the mean magnetic helicity ∝ t^-2/3, which, again, was confirmed by simulations <cit.>.

§ CONCLUSIONS

Our work has shown that the decay dynamics of an initial magnetic field with power-law scaling proportional to k^3 is similar to that for k^4, but very different from that for k^2. Even the case α=1.7 may be different from α=2. This suggests that the case of an initial k^2 spectrum may be singular. At the same time, it underlines the importance of the Hosking integral in determining the decay dynamics for a large class of initial magnetic energy spectra. We also confirmed that the nondimensional coefficients in the empirical scaling relations for ξ_M(t), ℰ_M(t), and E_M(k,t) are compatible with those found earlier for an initial k^4 subinertial range spectrum. Based on a simple argument involving self-similarity, we showed and confirmed that the temporal growth of the magnetic energy spectra at small k is proportional to t^2/3, while for α=4, we have t^10/9. At the moment, even with a resolution of 2048^3 mesh points, we cannot make very firm statements about the case α=1.7, because I_H is not sufficiently well conserved and the value of α is close to 3/2. It would be useful to reconsider also the case α=2 with a more accurate analysis, to see whether even here one could find a violation of the conservation of the magnetic Saffman integral, and thus weak inverse cascading ∝ t^0.2.

§ ACKNOWLEDGEMENTS

We are grateful to Antonino Midiri, Alberto Roper Pol, and Kandaswamy Subramanian for encouraging discussions.

§ FUNDING

A.B. and R.S. were supported in part by the Swedish Research Council (Vetenskapsrådet, 2019-04234); Nordita is sponsored by Nordforsk. T.V. was supported by the U.S. Department of Energy, Office of High Energy Physics, under Award No. DE-SC0019470. We acknowledge the allocation of computing resources provided by the Swedish National Allocations Committee at the Center for Parallel Computers at the Royal Institute of Technology in Stockholm and Linköping.

§ DECLARATION OF INTERESTS

The authors report no conflict of interest.

§ DATA AVAILABILITY STATEMENT

The data that support the findings of this study are openly available on Zenodo at doi:10.5281/zenodo.8128611 (v2023.07.09).
All calculations have been performed with the Pencil Code <cit.>; DOI:10.5281/zenodo.3961647.

§ AUTHOR'S ORCIDS

A. Brandenburg, https://orcid.org/0000-0002-7304-021X
R. Sharma, https://orcid.org/0000-0002-2549-6861
T. Vachaspati, https://orcid.org/0000-0002-3017-9422
http://arxiv.org/abs/2307.07211v1
20230714081342
Pyxis: A ground-based demonstrator for formation-flying optical interferometry
[ "Jonah T. Hansen", "Samuel Wade", "Michael J. Ireland", "Tony D. Travouillon", "Tiphaine Lagadec", "Nicholas Herrald", "Joice Mathew", "Stephanie Monty", "Adam D. Rains" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.EP" ]
Pyxis: A ground-based demonstrator for formation-flying optical interferometry
Jonah T. Hansen, Samuel Wade, Michael J. Ireland, Tony D. Travouillon, Tiphaine Lagadec, Nicholas Herrald, Joice Mathew, Stephanie Monty, Adam D. Rains

In the past few years, there has been a resurgence in studies towards space-based optical/infrared interferometry, particularly with the vision to use the technique to discover and characterise temperate Earth-like exoplanets around solar analogues. One of the key technological leaps needed to make such a mission feasible is demonstrating that formation flying precision at the level needed for interferometry is possible. Here, we present Pyxis, a ground-based demonstrator for a future small satellite mission with the aim to demonstrate the precision metrology needed for space-based interferometry. We describe the science potential of such a ground-based instrument, and detail the various subsystems: three six-axis robots, a multi-stage metrology system, an integrated optics beam combiner and the control systems required for the necessary precision and stability. We end by looking towards the next stage of Pyxis: a collection of small satellites in Earth orbit.

*Email: [email protected]

§ INTRODUCTION AND BACKGROUND

High angular resolution astrophysics, in particular optical/infrared (IR) interferometry, is in a golden age. Instruments such as GRAVITY <cit.> and MATISSE <cit.> at the VLTI and MIRC-X <cit.> on the CHARA array have produced stunning scientific results, including imaging the starspots on a distant giant star<cit.>, the first astrometric confirmation of a planet<cit.>, and of course, the characterisation of a supermassive compact object at the centre of our galaxy<cit.>, leading to the 2020 Nobel Prize in physics. However, there are still unexplored questions that only interferometry will be able to probe. One such avenue is the direct imaging of exoplanets. One of the major goals of exoplanet research is identifying potentially habitable worlds that may harbour life, and to that end one needs access to the atmosphere of the planet to look for biosignatures <cit.>. Transmission spectroscopy is promising for the characterisation of hydrogen-rich atmospheres, but is challenging for terrestrial atmospheres<cit.>.
Hence direct imaging is one of the only techniques available to obtain atmospheric spectra of terrestrial planets. In order to accomplish this, however, we need to minimise the contrast between a planet and its host star, which for terrestrial planets occurs in the mid-infrared (MIR). To obtain the sensitivity needed, as well as to avoid telluric contamination, we also require these telescopes to be in space; this makes it challenging to launch coronagraphic apertures large enough to deliver the required angular resolution. Hence space-based interferometry has long been recognised as the only way to take MIR spectra of Earth-like planets<cit.>, and is simulated to have a similar or greater yield in detecting habitable planet biosignatures compared to the largest launchable >$10B coronagraphic telescopes <cit.>. Once the required aperture diameter becomes too large to construct mechanically or to launch, the only option is to launch parts of the mirror as separate light collector spacecraft, with light combined in a beam combiner spacecraft, thus becoming a space interferometer. Many previous studies of space interferometer missions have been made, falling into one of two categories: connected-element interferometers<cit.>, which are much more limited than formation-flying designs, or formation-flying interferometers at the Sun-Earth L2 point<cit.>, which are vastly more expensive than low-Earth-orbit compatible designs, mostly due to launch costs. In the late 2000s, both NASA and ESA shelved plans for large scale space-based interferometers (TPF-I<cit.> and Darwin<cit.> respectively), primarily due to a lack of understanding of the potential planet yield of such a mission, as well as technical unreadiness. Since that time, however, missions such as Kepler <cit.> and more recent exoplanet missions (e.g. TESS<cit.> and CHEOPS <cit.>) have greatly increased our knowledge of planet demographics, to the point where simulations of a large MIR space interferometer have shown it would detect approximately 20 Earth-like planets <cit.>. This renewal of interest in space interferometry has led to the development of the Large Interferometer For Exoplanets (LIFE) initiative; a revival of the TPF-I/Darwin concept to detect and characterise planets in the MIR <cit.>. In late 2021, ESA released its Voyage 2050 plan, in which the characterisation of planets in the MIR was one of the top priorities for a large scale mission <cit.>. A caveat to this recommendation, however, was the requirement to prove that such a space interferometer mission would be feasible technologically, as well as scientifically. Two critical technology areas have not been at an adequate level to progress formation-flying optical and infrared interferometry missions: compact, cryogenic-compatible nulling beam combiners (a target of the Nulling Interferometer Cryogenic Experiment (NICE) <cit.>), and formation flying itself, including metrology systems. It is this second technology that is the primary purpose of Pyxis, the subject of this paper, though other investigations into formation-flying interferometry are currently ongoing <cit.>. Pyxis is a multi-platform, linear-formation, robotic ground-based optical interferometer in development at the Australian National University's (ANU) Research School of Astronomy and Astrophysics (RSAA), located at Mt Stromlo Observatory.
It will serve as a crucial technology demonstration of formation control and metrology for future formation-flying space-interferometry missions, and enable more flexible ground-based stellar interferometry. A schematic of the Pyxis interferometer is found in Figure <ref>, highlighting the major components and subsystems. Here we refer to the side collector platforms as "deputies", and the central beam-combining platform as the "chief". Pyxis has a number of novel key features that will allow it to achieve its goal of formation-flying interferometry. Firstly, it utilises a frame of reference tied to a precision star tracker on a movable platform, rather than the Earth itself. This is made possible through the use of newly affordable MEMS (microelectromechanical system) accelerometers and a fibre laser gyroscope to define this frame of reference, and will be discussed further in Section <ref>. Secondly, we implement a multi-stage metrology system, using camera-based coarse metrology supplemented by a time-of-flight (TOF) sensor, and a sub-wavelength path-differential interferometric metrology sensor using laser diodes. These systems allow us to vastly simplify our system architecture and will be discussed in Section <ref>. Finally, Pyxis has a linear array architecture that can be used in Low Earth Orbit (LEO) <cit.> and has a continuous range of reconfigurable baselines, nominally between 1 and 60 m. This paper is structured as follows: the remainder of Section <ref> details the scientific potential and aims of Pyxis; Section <ref> describes the mechanical interface and architecture of the interferometer; Section <ref> details the metrology system; Section <ref> discusses the beam combiner; and finally Section <ref> describes the control systems.

§.§ Scientific Aims

While the primary purpose of Pyxis is to act as a demonstrator for formation-flying interferometry, it will be well placed to make important, unique astrophysical measurements on its own. Pyxis is designed to work nominally in the R band, with a wavelength range between ∼620 and 760 nm; the upper bound at 760 nm is set by the atmospheric telluric band corresponding to the Fraunhofer A O_2 band, and the lower bound by the single-mode cutoff of our 630 nm fibres. The science telescopes on each deputy platform (see Figure <ref>) have an aperture diameter of 94 mm, and are expected to achieve a 10% throughput including fibre coupling. We thus expect to achieve a limiting magnitude of approximately R∼6 with 5 ms exposures, corresponding to approximately 10 pixels per spectral channel. The aperture size was chosen due to the only moderate sensitivity gains obtained when increasing the aperture beyond Fried's parameter (r_0 = 5-10 cm) without adaptive optics. Instead, the sensitivity of Pyxis is increased through a simple optical design (covered in Section <ref>). With Pyxis working in the R band, it is well placed to complement the existing suite of interferometers globally, particularly as it will be the only visible-light interferometer in the Southern Hemisphere following the decommissioning of the SUSI interferometer in the mid-2010s. Figure <ref> shows the current angular resolution capabilities of the VLTI in Chile, and of NPOI and the CHARA array in the USA, as a function of wavelength, compared with Pyxis. We also include the non-redundant baselines of the Keck Aperture Masking experiment<cit.> as a comparison for short baseline capabilities.
As can be seen, Pyxis will span a unique combination of wavelength and angular resolution within the Southern Hemisphere, and its ability to span a continuous range of baselines and position angles (and therefore UV-plane data points) allows it to probe the optimal spatial frequencies for a given object's visibility curve. One of the key areas where Pyxis will provide insight is the measurement of fundamental parameters of stars. In recent years, obtaining the masses, ages and radii of stars with high precision has become especially important due to the burgeoning fields of Galactic archaeology and exoplanet research, where the properties of the planet host star are critical for extracting the exoplanet parameters <cit.>. Pyxis will build on the success of PAVO in measuring stellar diameters <cit.>, utilising an even simpler architecture, single-mode spatial filtering, and polarisation control. Precise stellar masses are another key parameter in studies of stellar evolution, as they determine stellar ages and Galactic evolution timescales. Masses can be obtained through giant-star asteroseismology, but this technique suffers from a lack of calibration and benchmarks <cit.>. Simple stellar diameter measurements have been shown to help calibrate these techniques <cit.>, and so Pyxis will be able to augment Gaia astrometry of the brightest astrometric binaries with precise interferometric separation measurements, thus providing precise stellar masses with which we can calibrate other measurements. Arguably the more exciting and unique science case for Pyxis, however, is its polarimetry functionality. First pioneered by Ireland (2005) <cit.>, multi-wavelength interferometric polarimetry has resulted in exciting observations resolving spherically symmetric dust shells around very large stars that would be otherwise undetectable with a non-interferometric instrument <cit.>. Such studies have probed the dust grain size distribution around giant stars, which in turn informs the processes of dust formation and stellar mass loss. However, these studies have been rare due to the lack of available instrumentation for making such measurements, and questions remain regarding when dust scattering provides the dominant mechanism for giant-star mass loss <cit.>. Pyxis requires a polarisation split in order to accurately calibrate the visibilities, and due to its much simpler geometry compared with existing long-baseline interferometers, it is straightforward to split the polarisation for scientific measurements; Pyxis should achieve an estimated calibrated differential polarisation fringe visibility of 2% precision, comparable to that of previous measurements <cit.>. Hence Pyxis should be able to make simple time- and wavelength-dependent interferometric polarimetry measurements around these bright giant stars to resolve some of these questions.

§ MECHANICAL DESIGN

§.§ Robotic Platforms

The Pyxis robotic platforms comprise a vibration-isolated upper platform, where lasers, telescopes, cameras, and fibre injection systems are mounted, and a lower platform housing the beam combiner, circuitry, and control computers. The upper platform payload on each robot is mounted on a stepper-motor-controlled goniometer to achieve precise elevation control at the level of a few arcseconds.
These goniometers are coupled to the upper platform through a set of passive mechanical vibration isolators, intended to attenuate vibrations from roughness in the surface the robot traverses, as well as from the motors themselves. The platforms are designed to allow control of all six degrees of freedom of the upper platform payload, as a ground-based simulation of satellite control. The three degrees of freedom in the "ground" plane (two horizontal translations and rotation about the vertical axis) are controlled by stepper motors, coupled to precision planetary gearboxes and bi-directional omni-wheels in a "Kiwi drive" arrangement. The three remaining degrees of freedom (vertical translation, tip and tilt) are controlled by a set of three linear actuators (also stepper-motor driven) with a three-way rotationally symmetric, ball and V-groove kinematic interface. The V-grooves are created by pairs of hardened dowel pins, creating a pair of point contacts with the truncated and threaded balls attached to the ends of the linear actuators. A photograph of one of the deputy platforms is shown in Figure <ref>. Within the optical subsystems, fine tip/tilt control is achieved using piezo actuators, and optical path difference is controlled with a piezo stick/slip stage. In order for the system to achieve stable fringes using these fine control subsystems, we determined the following mechanical requirements for the robotic platforms: * The RMS motion above 100 Hz for anything on the upper platform goniometer must be below 50 nm, so that fringe visibility remains high in 5 ms exposures. * The RMS velocity must not exceed 10 µm/s at frequencies <100 Hz, so that with a 5 ms servo lag, fringe tracking at this 50 nm level is possible, with residual fringe motion being dominated by astronomical seeing. * The absolute positioning must be accurate to within 3 mm, in order to have a 1 part in 1,000 baseline knowledge for a 3 m science baseline, and in order to keep the fringes within the range of the position actuator. * The attitude of the platforms must be accurate to within 30", in order to inject the light into the field of view of the injection unit. * The attitude velocity error cannot exceed 100"/s, so that with a 5 ms servo lag, there is no more than a 0.5" angular error (required for single-mode fibre injection). Since precise positioning is essential for the successful operation of the optics, the mechanical vibration isolators were characterised both through simulation and testing. The simulated transmissibility plot of their frequency response in the horizontal and vertical directions is displayed in Figure <ref>, produced by fitting a damped oscillator function to the listed parameters of the upper platform springs. The resonance peaks with over-unity gain in the 5-10 Hz range will cause fringe motions that are measurable and partly controllable by the fringe tracker operating at a 200 Hz loop speed. From these peaks, attenuation steadily improves over the 5-75 Hz range, and it is expected that vibrations exceeding 75 Hz will not be of much concern. It is also clear that the resonance in the vertical direction has greater gain, as well as a higher frequency, and so is likely to be more of an issue than the horizontal resonance.
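For reference, a minimal sketch of the base-excitation transmissibility of a damped harmonic oscillator, of the kind fitted to produce the simulated curves above; the resonance frequency and damping ratio below are placeholders, not the actual spring parameters.

```python
import numpy as np

def transmissibility(f: np.ndarray, f0: float, zeta: float) -> np.ndarray:
    """Base-excitation transmissibility of a damped harmonic oscillator:
    |X_out/X_in| = sqrt((1 + (2 zeta r)^2) / ((1 - r^2)^2 + (2 zeta r)^2)),
    with r = f/f0. Gain exceeds unity near r = 1 and rolls off for r >> 1."""
    r = f / f0
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)

f = np.linspace(0.1, 75.0, 1000)
# Placeholder values: a ~7 Hz vertical resonance with light damping.
T = transmissibility(f, f0=7.0, zeta=0.05)
print(f[np.argmax(T)], T.max())
```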
Simulations were also carried out on a two-dimensional model of the system, and showed how horizontal motion would cause significant angular perturbation of the platform given the high centre of mass, leading to the installation of a 1 kg counterweight suspended below the upper platform. The system response between the lower and upper platform was physically tested, by applying a sinusoidal frequency sweep between 0 and 75 Hz using the motors in each axis (X, Y, Z), and recording the data from a set of three 3-axis accelerometers placed on the vibration-isolated platform. The accelerometer readings were bias-corrected and transformed into body-frame accelerations, before being Fourier transformed to find the peak amplitude for the given test frequency. The transmissibility – the ratio of upper platform (output) to lower platform (input) amplitudes – is plotted for all six axes of motion in Figure <ref> in both a linear and logarithmic scale. Between 1 and 15 Hz, these results largely agree with the simulation, showing peaks close to where they were expected for horizontal (X/Y) and vertical (Z) inputs, and significantly higher gain in the vertical direction. Unlike the theoretical plot, however, we do see some amplification of frequencies around 20-30 Hz in all three axes. These resonances are likely due to the coupling of the full system to the springs, and while the addition of the counterweight suppressed these resonances, they could not be removed completely. Beyond 30 Hz, once again the springs attenuate to below unity gain. It is clear from both model and test results that the vibration-isolators alone are insufficient to attenuate vibration to the required levels over the full range of frequencies. However, given their good performance at very high (>100 Hz) frequencies, it is expected that an active control system will be able to handle the lower frequency vibrations, discussed further in Section <ref>. §.§ Diamond-turned Telescope The science telescopes are one of the subsystems that the project designed to be “space ready”. The optics need to be significantly compressed to be compatible with the CubeSat format without the need for refocusing, while sustaining the standard NASA GEVS vibration qualification profile. Current demonstration units have passed this requirement. In addition, each telescope requires a wide enough field of view so that it can correct angular errors in the deputy position without moving parts. Our solution was to design and prototype the telescopes from diamond-turned aluminium. A photograph of one of the telescopes is shown in Figure <ref>. Each deputy telescope has a 5:1 magnification conjugated at infinity, in a Cassegrain design with two paraboloids. The primary has a 94 mm clear aperture diameter with a 200 mm radius of curvature, while the secondary is 25 mm across with a 40 mm radius of curvature. This size allows the telescope to fit in one end of a 3U CubeSat, and can be seen attached to the upper platform in Figure <ref>. A 45 degree flat tertiary mirror reflects the beam at 90 degrees to the optical axis. The telescopes were manufactured at Optofab-ANU, part of the Australian National Fabrication Facility, using a diamond lathe turning RSA-6061 aluminium. The complete telescope structure, including mirrors, is formed from aluminium, so that the telescope is naturally resistant to optical aberrations caused by thermal expansion.
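Because both mirrors are paraboloids conjugated at infinity, the 5:1 magnification follows directly from the quoted radii of curvature (the focal length of a paraboloid is half its radius of curvature); the short check below uses only the numbers stated above.

# Afocal two-paraboloid (Mersenne-type) compressor: magnification = f1/f2.
R_primary = 200.0                # mm, stated radius of curvature
R_secondary = 40.0               # mm
f_primary = R_primary / 2.0      # 100 mm
f_secondary = R_secondary / 2.0  # 20 mm
print(f_primary / f_secondary)   # 5.0, matching the stated 5:1 design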
The telescope manufacturing and assembling process went through several prototypes, with the best iteration of the primary mirror exhibiting aberrations at the level of a 96% Strehl ratio. We have also minimised stress on the primary mirror caused by the assembly of the two parts of the telescope: we moved away from an original shrink-fit design and have developed an assembly that minimises distortions by using a 50 micron thick gap around the edge of the primary mirror filled with a thermal-expansion matched vacuum compatible adhesive. Coma and defocus are actively set to have negligible amplitudes interferometrically during adhesive curing, by fine adjustment of primary mirror tilt and piston. While a residual amount of spherical aberration remains, along with some minor astigmatism caused by the gluing and mounting of the tertiary mirror, the final complete telescopes exhibit a ∼70% Strehl ratio. § METROLOGY SYSTEM One of the primary goals of Pyxis is to demonstrate a metrology system capable of sustaining satellite formation flight at an adequate level to make precise interferometric measurements. Pyxis's distance metrology approach is divided into two broad parts: a coarse metrology system that measures the large distances between the platforms, and a fine metrology system that complements the coarse metrology using interferometry and fringe patterns to achieve the necessary sub-wavelength precision. The fine metrology system concept was already described in Lagadec et al. (2020)<cit.>. This system is not yet fully commissioned, in particular with a decision point remaining as to whether temperature-stabilised Fabry-Perot laser diodes are enough, or if three single-frequency laser diodes are required. However, the combination of coarse metrology described below, combined with a fringe search (see Section <ref>) using starlight, will be enough for initial on-sky fringes. §.§ Coarse Metrology The coarse metrology itself has two parts: measuring the parallax between two LEDs, and a time-of-flight (TOF) laser system. The former consists of two LEDs mounted 90 mm apart on the sides of each of the deputy platforms, and a camera mounted on the chief robot. We measure the angular separation of these LEDs to 0.05 pixels or 0.7" using a 7.6 mm focal length F/2 lens attached to a FLIR Firefly FFY-U3-16S2M-S camera. This in turn will enable a position measurement of better than 1 cm for our maximum chief-deputy separation of 30 m. However, for the purposes of designing a future space-based mission with a baseline of ∼300 m, this camera-based metrology system is almost certainly too challenging, even with an extended LED spacing of 300 mm. This is due to the precision required for angular separation measurements, which is proportional to the square of the spacecraft separation. Hence, while we implement this system for Pyxis, the TOF system is more critical for demonstrating a complete space compatible metrology system. The TOF system is also required to obtain a sub-centimetre position estimate, in order to be within the fine metrology system's capture range. The TOF distance ranging system utilises the time it takes for light to travel from the chief platform to a retroreflector mounted on a deputy platform and back. Pulses of light are generated by a laser diode and the distance the light travels determines the delay before the reflected light returns. This is equivalent to a phase shift in the pulses of the returning light relative to the transmitted pulses.
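A minimal numerical sketch of this phase-shift ranging principle follows; the modulation frequency is an assumed value for illustration, and the distance relation used is the one derived immediately below.

import math

c = 299_792_458.0    # speed of light, m/s
f_mod = 10e6         # assumed (illustrative) pulse/modulation frequency, Hz

def distance_from_phase(phi):
    # The returning pulse train is delayed by t_d = phi/(2*pi*f_mod);
    # the one-way distance is then d = t_d * c / 2.
    t_d = phi / (2 * math.pi * f_mod)
    return 0.5 * c * t_d

# The phase measurement is unambiguous over half a modulation wavelength:
ambiguity_range = c / (2 * f_mod)   # 15 m for the assumed 10 MHz
print(distance_from_phase(math.pi / 2), ambiguity_range)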
The distance to the reflecting object, d, is half the total distance the light has travelled and is related to the delay in the returning light pulse t_d by the speed of light, c. Hence this can be expressed as d = t_d·c/2. Implementing a TOF system with analogue electronics requires two capacitors that act as timing elements to measure the delay and a clock source to generate pulses of light. The same clock pulses are used to switch between the two timing elements so that the first capacitor is charged while the pulse is high and the second capacitor charges while the pulse is low. Each capacitor, when connected, is charged by a photodiode which allows a current to flow when it receives the returning pulses of light. An FPGA with an integrated microcontroller and some additional circuitry including the laser, photodiodes, timing capacitors, fast analogue switches and an analogue-to-digital converter (ADC) is used to implement the system. A high frequency, square wave signal is produced by clock conditioning circuitry within the FPGA fabric. Two signals with the same frequency are produced, one to drive the laser modulator and the other to drive the switching between timing elements. The timing is implemented using a pair of capacitors that are charged by the photocurrent from a photodiode detector, and a high-speed analogue 2:1 multiplexer (MUX) is used to direct the current between the two charge-integrating capacitors. After an integrating cycle, the capacitors are sequentially sampled by an ADC, which has a built-in MUX, and the capacitors are reset by a command from the microcontroller. In the full system, there are four pairs of capacitors and accompanying photodiodes, so that the system can measure two distances to each of the deputy platforms. The light from a single modulated laser diode is optically split for the four light paths. All four channels are charged in parallel and then sampled sequentially. The laser modulator is controlled by the clock signal and provides the required DC bias and modulation current to drive the laser diode. The laser diodes used in this system are shared with the fine metrology system (described next in Section <ref>), with the laser diodes and TOF photodiodes shown in the metrology schematic in Figure <ref>. §.§ Fine Interferometric Metrology To achieve sub-wavelength precision metrology, we require the use of interferometric fringe measurement. However, interferometric metrology itself relies on the coherence of its beams to the sub-wavelength level. Such required precision poses a challenge for a multiple platform system like Pyxis, for which fringe scanning - necessary to unambiguously determine the phase and optical path length difference - is complex and time consuming. Fortunately, it is possible to employ the paradigm of Multi-Wavelength Interferometry<cit.> to broaden the wavelength range where the phase can be unambiguously measured; using several wavelengths can broaden the range by three orders of magnitude compared to the narrow ±λ/2 possible with a single wavelength. The details of this technique as applied to Pyxis are described in Lagadec et al. (2020)<cit.>, but in brief a `synthetic' wavelength is created between each combination of wavelengths, given by: Λ = λ_1λ_2/(λ_2-λ_1), for two wavelengths λ_1 and λ_2.
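As a concrete illustration, the sketch below evaluates this synthetic wavelength for a set of assumed diode wavelengths (the actual Pyxis metrology wavelengths may differ); a nanometre-scale wavelength difference yields a synthetic wavelength, and hence an unambiguous ±Λ/2 capture range, hundreds of times larger than the optical wavelength, with wider pairs extending the chain further.

from itertools import combinations

wavelengths_nm = [630.0, 632.0, 635.0]   # assumed, illustrative values

def synthetic_wavelength(lam1, lam2):
    # Lambda = lam1*lam2/(lam2 - lam1) for lam2 > lam1.
    return lam1 * lam2 / (lam2 - lam1)

for lam1, lam2 in combinations(sorted(wavelengths_nm), 2):
    L_nm = synthetic_wavelength(lam1, lam2)
    print(f"{lam1:.0f} nm + {lam2:.0f} nm -> Lambda = {L_nm/1e3:.0f} um")
# 630 nm and 632 nm give ~199 um, i.e. ~300x the single-wavelength range.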
Hence, as long as the coarse metrology can provide an estimate of the distance to within the range of the longest synthetic wavelength, we can provide a precise measurement using this system. To implement this in Pyxis, we use a similar setup to Lagadec et al. (2020)<cit.>, which is shown in Figures <ref> and <ref>. First describing Figure <ref>, the light from each diode is collimated by a 4.5 mm focal length lens (L1) and injected into an optical fibre by a 6.25 mm focal length lens (L2). The injection unit involves an in-house constructed diode-to-fibre injection system that couples into an APC fibre connector of a polarisation maintaining fibre via a polarising element. This results in a high (∼30 dB) polarisation extinction ratio into one of the orthogonal fibre modes. Note the schematic only shows one of the two laser injection units. The fibres are fed through a V-groove into a photonic chip, where the light is mixed by a directional x-coupler. This chip is currently being manufactured through the Australian National Fabrication Facility OptoFab node using direct-write ultrafast laser inscription (ULI) <cit.>, and has the advantage of being able to combine elements of the coarse and fine metrology systems in a compact form factor ideal for a future space mission. The light is split into four separate beams (only one beam path is shown in Figure <ref>) and output into a series of optical fibres. These are then fed into four transceiver units, which consist of a 25 mm focal length collimating lens (L3) and a quarter-wave plate. The collimated light is sent out of the chief platform at positions A, B, C and D shown in Figure <ref>, reflected off a retro-reflector mounted on each deputy platform (element 10 in Figure <ref>), and back into the transceiver unit. The quarter-wave plate ensures that the outgoing and incoming polarisations are orthogonal and will not interfere with each other. A test configuration using two Fabry-Perot laser diodes as injection units has been constructed. Continued development of this system, including thermal stabilisation of the Fabry-Perot diodes and investigation of single-frequency diodes, is ongoing. § INJECTION AND BEAM COMBINATION §.§ Fibre Injection With Pyxis being a relatively simple, single-baseline interferometer, we aimed to produce a beam combiner that was as simple as possible to maximise throughput. We also aimed to have limited moving parts, to maximise the amount of coherent integration. This will also serve well in translating this beam combiner design to a future space-based mission. In this vein, we chose to base the beam combiner around an integrated optics (IO) photonic chip, modelled after the success of IO chips in the GRAVITY <cit.> and GLINT <cit.> instruments. The ∼18 mm diameter beams from each of the two deputy collector platforms are reflected towards the central platform and enter into the fibre injection system; shown in Figure <ref> for one of these light paths. This system is also designed to be CubeSat compatible, measuring 200 by 100 mm. The beam is transmitted through an f=100 mm focal length infinity corrected tube lens assembly (L1), then a piezo-controlled translating f=6 mm, 4 mm diameter collimating lens (L2), which acts as the tip/tilt actuator through X-Y translation orthogonal to the optical axis. We used Piezosystem Jena PXY 200 D12 stages, which, when accounting for PWM filtering and voltage conversions, provide us with about 150 µm of stroke.
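The internal steering range implied by these numbers can be estimated from the thin-lens relation for a laterally translated collimating lens (deviation angle ~ translation/focal length); the conversion to an on-sky angle depends on the full magnification chain and is only indicated qualitatively here.

stroke = 150e-6        # m, usable X-Y stroke of the piezo stage
f_collimator = 6e-3    # m, focal length of the translating lens (L2)

tilt_range = stroke / f_collimator    # ~25 mrad of beam tilt at the 4 mm beam
print(f"{tilt_range * 1e3:.0f} mrad")
# On sky this is reduced by the angular magnification of the telescope
# and tube-lens relay (an assumption about the chain, not a stated figure).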
A 595 nm cutoff dichroic is used to split the shorter wavelengths used for alignment from the longer science wavelengths, which are transmitted towards the fibre injection unit. The <595 nm wavelengths are polarisation-split and recombined so that a single FLIR Firefly FFY-U3-16S2M-S camera can coarsely image both input pupils for pupil alignment with one polarisation while simultaneously performing fast (∼200 Hz sampling rate) tip/tilt sensing with the other polarisation. The control system is explained in Section <ref>. The pupil viewing path is designed such that the light passes through an f=48 mm lens (L4), which is focused at a pupil imaging plane located before the dichroic (seen in Figure <ref>). The camera images both paths through an f=12 mm imaging lens (L5) focused at infinity with a right-angled prism glued in front to ensure the camera fits within the tight footprint. It is also worth mentioning that while the footprint is tight, space inside the unit has been allocated to allow the insertion of a star tracking camera. This will be utilised for the CubeSat version of Pyxis. §.§ Science Beam Combiner The longer wavelengths, used for science, are injected into polarisation maintaining 630-HP fibres using two f=4.5 mm, 3 mm diameter achromat lenses (L3). One of the arms is mounted onto a small SmarAct SLC-1720-L translation stage, with 20 nm steps and 8 mm of stroke, for fine path control and fringe tracking. The fibres transport the light from the upper platform towards the bottom platform, where the beam combiner is located. The fibres are then fed into a V-groove, which is attached to the photonic chip. The chip features a “tricoupler” waveguide scheme, where three inputs are fed towards each other in an equilateral triangle formation, and then fed back out in three outputs. As described in Hansen et al. (2022) <cit.>, the tricoupler results in the output beams having mutual phase shifts of 2π/3, and allows for the full recovery of the complex coherence in a single frame without modulation (see Sections 2.3 and 2.4 in that paper; an idealised numerical illustration is also given below). The three outputs that this chip provides are the minimum possible while retaining the above qualities of full information without modulation, and thus maximise throughput. The photonic chip was measured to have a mean throughput of 85 ± 7%, and a coupling ratio between 33:33:33% and 51:31:17% over the science bandpass of approximately 620 nm to 760 nm. Because we only have two inputs from the telescopes, the central input of the chip was not used. More details regarding this photonic chip, including experimentally retrieved visibilities and group delay, and a schematic of the chip and attached V-groove, can be found in Hansen et al. (2022) <cit.>. This paper also details the reduction algorithms using a pixel-to-visibility matrix (P2VM), and group delay extraction for fringe tracking based on Fourier transform numerical integration. More details on the fringe tracking control loop can be found in Section <ref>. From the beam combiner chip, the three outputs are fed through an f=12.5 mm, 6.25 mm diameter collimating lens into a custom spectrograph. This spectrograph includes a Wollaston prism for splitting linear polarisations (and hence allowing for both polarisation calibration and polarimetry on astrophysical sources) and a 45^∘ BK7 dispersive prism. This results in the spectrograph having a spectral resolution of R∼ 50, which was chosen as a balance between throughput (i.e. reducing the number of channels on the detector) and scientific usefulness.
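To see why three outputs with mutual 2π/3 phase shifts recover the complex coherence in a single frame, the sketch below inverts an idealised (lossless, perfectly balanced 33:33:33) tricoupler intensity model; the real reduction instead uses the measured P2VM described in Hansen et al. (2022), which absorbs the true coupling ratios.

import numpy as np

# Idealised model: I_k = (F0/3) * (1 + |gamma| * cos(phi + 2*pi*k/3)).
k = np.arange(3)
M = np.column_stack([np.cos(2 * np.pi * k / 3),
                     -np.sin(2 * np.pi * k / 3),
                     np.ones(3)]) / 3.0

def coherence_from_frame(I):
    # Solve the 3x3 linear system for (Re(gamma)*F0, Im(gamma)*F0, F0).
    re_g_f0, im_g_f0, f0 = np.linalg.solve(M, I)
    return (re_g_f0 + 1j * im_g_f0) / f0

gamma_true = 0.8 * np.exp(0.3j)          # example fringe: |gamma|=0.8, phase 0.3
I = (1 + np.abs(gamma_true) * np.cos(0.3 + 2 * np.pi * k / 3)) / 3.0
print(coherence_from_frame(I))           # recovers ~0.8*exp(0.3j)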
The dispersed light is then fed through an f=15 mm imaging lens onto the scientific camera. Due to the custom optical components and small space limitations, a resin-printed optical mount was designed (shown in Figure <ref>) to hold the spectrograph lenses and prisms. The mount contains a variety of screw holes and sprung inserts to ensure kinematic mounting of all components. The full beam combiner design can be seen in Figure <ref>. The photonic chip is mounted to a manual translation stage for focus adjustment; due to the limited thermal expansion expected in the system, we do not anticipate needing to constantly adjust this axis of motion. The chip is also mounted on top of a piezo stack to allow for small translations in the vertical direction, ensuring that the centre of the output beam lines up with detector pixels. The scientific detector chosen was a QHY2020 camera with the GPixel Gsense2020 BSI sensor. This was chosen for its low readout noise capabilities and high frame rate. The camera is mounted with a set of telescope tube rings and is connected to the spectrograph with a series of adapting rings. The whole system fits on a 150x300 mm breadboard, and in principle the footprint could be made smaller by choosing a much smaller camera. The beam combiner also features a small back-illumination setup, consisting of a 590 nm LED that is reflected off a miniature 45^∘ mirror that can be flipped in and out of the optical path, illuminating the output waveguides of the photonic chip. This will assist in fibre injection alignment, and show which pixel on the pupil tracking camera corresponds to the science fibres. §.§ Pupil Alignment Procedure Before on-sky observations can be made, a variety of offsets and calibrations are required to ensure that the system can be easily aligned. Notable among these is a method to ensure an initial pupil alignment to within the capture range of the pupil camera. This comes from the coarse metrology system, with which we can identify a pixel offset from the two metrology LEDs corresponding to the exit aperture. Let us consider the two LEDs with coordinates x_1 and x_2 that are measured from the mechanical design, as well as the target exit aperture coordinate x_t. These coordinates are a projection of the absolute coordinates onto the plane perpendicular to the baseline vector. We can then define a relationship between these three positions with respect to two parameters, β and γ: x_t = x_1 + β(x_2 - x_1) + γR_90(x_2 - x_1), where R_90 is a 90 degree rotation matrix. On the coarse metrology camera, we then measure the two-dimensional angles α_i corresponding to the two LEDs and can thus measure the angle of the exit aperture: α_t = (β + γR_90)α_2 + (1-β-γR_90)α_1. This angle can also be calculated when considering the frame of reference of the chief: α_t = (x_0 + α_c d)/d, where x_0 is the aperture coordinate also taking into account the offset from the chief metrology camera and d is the distance between the two platforms. The angle α_c is calibrated in the lab using the back-illuminated beam from the science camera, with the tip/tilt piezos at the centre of their range. This reference pixel is identified ahead of time through back illumination and a set of retro-reflectors, and is the position on the tip/tilt sensing camera where the light is injected into the fibre.
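A short sketch of this geometry is given below; the β, γ values and the measured LED angles are placeholders, and vectors are two-dimensional in the plane perpendicular to the baseline.

import numpy as np

R90 = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation matrix

def target_angle(alpha1, alpha2, beta, gamma):
    # alpha_t = (beta + gamma*R90) @ alpha2 + (1 - beta - gamma*R90) @ alpha1
    A = beta * np.eye(2) + gamma * R90
    return A @ alpha2 + (np.eye(2) - A) @ alpha1

# Placeholder measurements (radians) for the two LED directions.
alpha1 = np.array([0.0, 0.0])
alpha2 = np.array([0.009, 0.0])
print(target_angle(alpha1, alpha2, beta=0.5, gamma=0.1))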
Hence, after calibrating the system, we simply need to adjust the deputy such that the LED angles α_1 and α_2 satisfy: (β + γR_90)α_2 + (1-β-γR_90)α_1 = (x_0 + α_c d)/d, while simultaneously measuring the distance d through the approximation: d ≈ |x_1-x_2|/|α_1 - α_2|. Using the small angle approximation here is adequate for all platform separations exceeding 0.5 m. We mention here that while this calibration and alignment procedure will work for Pyxis, it is insufficient for the space version; in a space environment, we are not able to move the deputies in all directions. Instead, we change the angle α_c by changing the central target positions of the tip/tilt piezo stages. § CONTROL SYSTEM §.§ Sensors and Architecture A key part of achieving the stability and positioning accuracy required for the interferometry is the navigation and control system. This system comprises a variety of sensors, actuators, and processing computers spread across the three robotic platforms and interfaces with all Pyxis systems. Here we describe an overview of the physical elements that contribute to the control of Pyxis, followed by the software architecture and the control system design. It is clear from Figure <ref> that many of the Pyxis subsystems are devoted to metrology and navigation at varying levels of granularity in order to achieve sub-wavelength precision, with coarse sensors providing sufficient accuracy for unambiguous operation of higher-resolution sensors. Absolute attitude measurement is achieved using star tracker cameras on each deputy, described further in Section <ref>. Attitude is also tracked via inertial measurements, using six 3-axis accelerometers on each robot, and a fibre laser gyroscope (FLG; VG035LND from Fizoptica) on the chief robot. The coarse metrology camera also captures the position of the deputy satellites in 3 dimensions with respect to the chief body frame. The central frame of reference, however, is defined through a high precision star tracker on the chief robot. This star tracker consists of a finite-conjugate version of the diamond-turned telescopes described earlier in Section <ref> combined with a FLIR Blackfly camera containing an IMX178 sensor. This allows us to obtain a ±40 arcminute field of view, with 1.5" per pixel; ample enough to sample the FWHM of a guide star under the seeing conditions of Mt Stromlo without pixel phase errors. This, together with the attitude measurements supplied from the sensors listed above, provides Pyxis with a reference frame independent of the Earth and co-moving with the platforms themselves. Critically, the FLG is used to measure the angle of rotation about the axis orthogonal to the star and baseline vector. The FLG and star tracker together enable the fine metrology measurements to be moved from the chief rigid-body frame to an inertial frame, in order to predict open-loop fringe motion. We characterised the FLG to ensure it was within specifications, by driving the gyroscope with a sinusoidal input at 0.1 Hz. A plot of the FLG voltage, converted into angular velocity measurements through a gain of 0.152 rad/s/V, against the integrated accelerations of the accelerometers, again in rad/s, is shown in Figure <ref>. We see that the FLG does not drift substantially over a long 100 s test.
The RMS noise of the FLG was found to be 9.21×10^-6 rad/s, and the bias was calculated as 5.74×10^-6 rad/s; indicating that at a power bandwidth of 60 Hz, the FLG exhibits a random walk of 0.26"/s^1/2, which is within the desired specification of 0.003 deg/hr^1/2. Interfacing the array of sensors and actuators is a set of computers comprising the physical elements of the control system. On the chief robot is a PC with a 6-core i5-8500 processor, responsible for tracking the dynamic state of all three robots, as well as coordinating the many systems mounted on the chief robot (fibre laser gyroscope, beam combiner, time-of-flight metrology, etc.). Multiple Teensy 4.1 microcontrollers are also connected to the PC in order to manage all non-USB interfaces. The PC is connected to WiFi, over which requests can be sent to its servers, and over which it can send requests to servers running on the deputy robot computers. The deputy robots mirror this configuration, but with a mini PC (Intel NUC with 4-core i5-10210U processors), reflecting the smaller number of USB connections required and the reduced computational demands. A schematic of this interface architecture for the control system can be found in Figure <ref>. In order to integrate this large number of different physical components, each requiring different communication channels and protocols, a server-based software architecture has been designed and implemented for a number of the systems. In this architecture, each interface is given a server that can respond to a variety of requests depending on the nature of the system it interfaces. For example, a microcontroller on each deputy robot is used to monitor voltage and current draw, as well as control the LEDs used by the coarse camera metrology. This microcontroller connects via USB interface to the control computer, where a server program runs managing it. The server has a set of requests it can respond to by calling various functions, such as reporting the latest voltage measurement, with requests coming either locally from other servers running on the control computer, or over WiFi from the chief or the user interface. The control system itself is implemented through a PID (proportional–integral–derivative) controller, that aims to minimise the error in linear and angular position and velocity provided from the sensors through a feedback loop. That is, for a given desired state r(t), and a measured state y(t), the control function u(t) in terms of the error value e(t) is: e(t) = r(t) - y(t), u(t) = K_1e(t) + K_2∫^t_0 e(τ)dτ + K_3 de(t)/dt, where K_1, K_2 and K_3 are tuning variables that are used to maximise the performance of the control loop (a minimal discrete-time sketch of this law is given below). A linear-quadratic-Gaussian (LQG) controller was also considered, but was deemed unnecessary for the types of error correction needed in Pyxis. §.§ Pointing Control and Star Tracking To ensure that Pyxis has sufficient attitude control, we rely on a pair of star trackers on each deputy platform to provide an estimate of the position and orientation of the robot. These star trackers take an image of the sky using a 5^∘ FOV, f=50 mm lens at a rate of approximately 3 Hz. This provides a balance between attitude update speed and the ability to detect numerous stars.
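As a concrete (if simplified) illustration of the PID law written above, a minimal discrete-time sketch is given below; the gains and loop period are placeholders rather than the tuned Pyxis values.

class PID:
    # Discrete PID: u = K1*e + K2*integral(e dt) + K3*de/dt.
    def __init__(self, k1, k2, k3, dt):
        self.k1, self.k2, self.k3, self.dt = k1, k2, k3, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement            # e(t) = r(t) - y(t)
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.k1 * error + self.k2 * self.integral
                + self.k3 * derivative)

controller = PID(k1=1.0, k2=0.1, k3=0.01, dt=0.005)   # e.g. a 200 Hz loop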
An algorithm incorporating the Tetra3[GitHub: <https://github.com/esa/tetra3>] and Astrometry.net<cit.>[Website: <https://astrometry.net>] plate solvers then extracts the centroid positions of the brightest stars in the image and matches them to a set of known stellar positions located in an index file. These index files were compiled for Astrometry.net utilising the Tycho-2 catalogue<cit.>. The matched positions are then used to provide an estimate for the right ascension (α) and declination (δ) of the centre of the image, as well as the position angle of the image (angle of the top centre of the image from the North Celestial Pole). These angles are converted into an altitude/azimuth/position angle quaternion for use in correcting and adjusting the attitude of the robot. We tested the plate solving algorithm for two quantities: speed and accuracy. The former is particularly important, as it cannot be slower than the frame rate of the camera and, by extension, the attitude update speed. It is for this reason that we adapted numerous plate solving algorithms to create a high-speed version. In our tests, we found that the program could output an attitude quaternion from an image in 0.2 to 0.3 seconds (∼4 Hz), which is sufficient for a frame rate of 3 Hz. We then tested the accuracy by obtaining a number of random on-sky images using the same lens, running the plate solver, and comparing the extracted positions with the positions located in the index file. Each image was manually checked to ensure that the plate solver matched with the correct stars. We converted the (α,δ) coordinates of each star into polar coordinates to give us an estimate of the radial position error and the azimuthal orientation error. We found that the RMS error in the radial direction was about 2", and the RMS error in orientation was 30". Our requirements are that the deputy platforms are able to measure their angle to ±100" about the star pointing vector (that is, the position angle of the image) and ±20" in the other two axes. Hence, our plate solver will be adequate in accuracy to function as an attitude estimator for Pyxis. The fine star tracker on the chief platform also utilises a plate solving scheme, although it has a much tighter angular position requirement of 0.2" along the optical path axis, and as such utilises a much smaller field of view (approximately 1 degree). This leads to a much slower solve rate of between 1.5 and 2 Hz. To compensate, we implement a centroiding algorithm that supplements the star tracker at 3-4 Hz and tracks the position of the target star, using a sufficiently long exposure time so that the star is blurred through seeing (avoiding the noise introduced through the movement of a star's position by the same seeing). Of course, the critical angle of alignment of the chief platform is managed by the FLG, and so the star tracking system acts as a fallback for the rest of the attitude determination. §.§ Tip/Tilt Control The tip/tilt error of the starlight injection into the chief platform, described in Section <ref>, is measured through a weighted centre-of-gravity (WCOG) centroiding algorithm, given by c = ∑_ij w(x_ij)F(x_ij)x_ij / ∑_ij w(x_ij)F(x_ij), where F(x_ij) is the flux of a given pixel x_ij and w(x_ij) is a super-Gaussian weighting function of the form: w(x,y) = e^{-((x-x_0)^2+(y-y_0)^2)^2/(4σ^4)}. The centroid is measured with respect to the tip/tilt reference position in pixel coordinates, described in Section <ref>.
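The sketch below implements this WCOG estimator; the super-Gaussian width σ and the reference pixel are placeholders.

import numpy as np

def wcog(image, x0, y0, sigma):
    # Weighted centre of gravity with the super-Gaussian weight
    # w = exp(-((x-x0)^2 + (y-y0)^2)^2 / (4*sigma^4)).
    y, x = np.indices(image.shape)
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    w = np.exp(-(r2 ** 2) / (4.0 * sigma ** 4))
    wf = w * image
    return (np.sum(wf * x) / np.sum(wf) - x0,
            np.sum(wf * y) / np.sum(wf) - y0)   # offset from reference

frame = np.random.poisson(5.0, (32, 32)).astype(float)   # synthetic frame
frame[15, 18] += 500.0                                   # bright spot
print(wcog(frame, x0=16.0, y0=16.0, sigma=4.0))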
Now, the difference between the measured centroid and the reference position is controlled to zero through two control systems with differing timescales. On short timescales (around 200 Hz), the position error is directly sent to the X-Y piezo stages and controlled through a proportional controller. Due to the relatively small stroke of the actuators, however, we then implement a second control loop on longer timescales (about 5 Hz). The positional error is sent to the star tracker on the relevant deputy platform, and converted into a motion of the deputy angle to alleviate the reliance on the piezos. This motion is calculated by applying an offset to the reference pixel of the star tracker. That is, the plate solver solves for the location of the field of view offset from the centre of the image; this is also required to account for the offset between the star tracking camera and the telescope. To convert between the tip/tilt camera pixel frame (x) and the star tracker camera frame (x'), we use the following conversion matrix: x = [ ±cos^2ϵ  ±5; cosϵsinϵ + 5cos^2ϵ  1 ]x', where ϵ is the elevation pointing angle of the deputy, the factor of 5 comes from the telescope magnification and the relevant signs are flipped for the two deputies. We also note that before the control loop is closed during alignment, the server controlling the tip/tilt system can still send alignment corrections to the star tracker, so that the centroid spot is located well within the range of the tip/tilt piezos. §.§ Fringe Tracking Control The fringe tracker relies on a group delay estimator of the spectrally dispersed fringes. This estimator comes from the production of an array of P2VM matrices for each spectral channel and polarisation, calculated using the output flux ratios of each input following the method outlined in Hansen et al. (2022)<cit.>. To begin with, a matrix of trial delay phasors, τ, is calculated: x = [-a, -a+δ, ..., a-δ, a], τ = e^{2π i (x⊗1/λ)}, where a is half the coherence length, δ is the group delay resolution, x is the vector of trial delays and λ is the vector of wavelength channels. As described in Hansen et al. (2022)<cit.>, we obtain the complex coherence of each i-th wavelength channel and polarisation by multiplying the instantaneous intensity by the P2VM matrix: [ ℜ(γ_i)F_0,i; ℑ(γ_i)F_0,i; F_0,i ] = P2VM_i·I_i, γ_i = ℜ(γ_i) + iℑ(γ_i). The fringe group delay envelope, H, is simply the multiplication of the complex coherence vector with the trial delay phasor matrix, effectively sampling the Fourier transform of the coherence: H(x) = τ·γ. The group delay is then the delay x corresponding to the maximum of the power spectrum P(x) = |H(x)|^2 of this envelope. To reduce the effect of calibration error in the P2VM matrix, we subtract a “foreground” power spectrum derived from the intensity of the combined beams a long way away from the fringe envelope. Furthermore, to mitigate some of the effects of scintillation and rapid variance in the group delay estimator, we employ a fading memory controller, where each instantaneous power spectrum is combined with the average of the previous spectra (P̅(x)) scaled by a factor α. Combining these two effects, we obtain: P̅(x)_i = α (P(x)_i - P(x)_foreground) + (1-α)P̅(x)_{i-1}, where the group delay associated with P̅(x)_i is sent to the controller. The fringe tracker utilises a proportional velocity controller, where the speed of the delay line is proportional to the group delay estimator, scaled by a gain factor β.
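A condensed sketch of this estimator, including the fading-memory smoothing, is given below; the wavelength grid, delays and α are placeholders, and the velocity command built from the resulting estimate is described next.

import numpy as np

lam = np.linspace(620e-9, 760e-9, 10)   # placeholder wavelength channels, m
a, delta = 20e-6, 0.05e-6               # half coherence length, GD resolution
x = np.arange(-a, a + delta, delta)     # trial delays
tau = np.exp(2j * np.pi * np.outer(x, 1.0 / lam))   # trial delay phasors

def group_delay(gamma, P_fg, P_prev, alpha=0.95):
    # gamma: per-channel complex coherence from the P2VM.
    H = tau @ gamma                     # sample the FT of the coherence
    P = np.abs(H) ** 2
    P_avg = alpha * (P - P_fg) + (1 - alpha) * P_prev
    return P_avg, x[np.argmax(P_avg)]

gamma = np.exp(-2j * np.pi * 3e-6 / lam)   # synthetic fringes at 3 um delay
P_avg, gd = group_delay(gamma, P_fg=0.0, P_prev=np.zeros_like(x))
print(gd)   # ~3e-06 m (sign depends on the adopted phase convention)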
Specifically, at each estimation of the group delay, the SmarAct stage is given a command to move a set distance, clocking with steps at a period given by: p = 20β/x_gd, where the prefactor is used to scale the units of group delay into step counts and the gain into units of milliseconds. Each estimation overrides the previous command, providing smoother control than a positional controller. The performance of the servo loop can be seen in Figure <ref>, using a gain of β = 50 ms and a fading memory parameter of α = 0.95. The external delay was purposely moved forwards and backwards by approximately 7 µm to be corrected by the fringe tracker. Note that the external delay was not perfectly accurate due to the tolerances on the stage used, and the external stage also exhibited oscillatory behaviour (as shown by the large oscillations of the group delay estimate). Nevertheless, we see that the controller responds well to external delay movements, and holds the group delay constant at zero with an RMS of approximately 200 nm. Further optimisation of the parameters α and β will be done on-sky. The fringe envelope can also be used to perform a signal-to-noise ratio (SNR) estimation for fringe searching in the case of a failing or absent fine metrology system. The signal is given by the maximum of the foreground-subtracted power spectrum, and the background is provided by the RMS of the medians of the real and imaginary components of the envelope: r = median(|ℜ(H(x))|^2), i = median(|ℑ(H(x))|^2), SNR = max(P(x)_i - P(x)_foreground)/√(r^2+i^2). The SmarAct stage, assuming that the coarse metrology system has equalised the baselines to within its 8 mm stroke, can then scan for the fringe envelope and stop when the SNR reaches a predetermined threshold. § SUMMARY The Pyxis interferometer, when complete, will be a critical step towards verifying the technological readiness of space-based interferometry in the search for Earth-like exoplanets. Specifically, utilising its free-form platform nature and multi-stage metrology system, it will provide a demonstration of satellite-like formation-flying without the cost of space qualification and launch. Furthermore, due to its relatively simple optical design and beam combiner, it will be able to do unique visible-light polarimetric interferometry, making it the only instrument of its kind in the Southern Hemisphere. Over the next few years, our goal will be to conduct on-sky demonstrations and observations, while simultaneously preparing for the next phase in the project: a satellite version of the same system (see Hansen and Ireland (2020)<cit.>). To our knowledge, this would be the first demonstration of optical interferometric fringe tracking in space, and furthermore the first utilising multiple separate spacecraft in formation. With these demonstrators, it is the authors' hope that the technological barriers will be sufficiently eased such that the final goal of a large scale, mid-infrared space interferometer such as LIFE will be achievable in the coming decades. For it is only such a mission that will truly begin to probe one of the greatest scientific questions of our time: “are there habitable worlds out there?” §.§ Disclosures This paper was derived and adapted from two 2022 SPIE Astronomical Telescopes and Instrumentation conference proceedings: Paper 12183-1B “The Pyxis Interferometer (I): Scientific Context, Metrology System and Optical Design”<cit.> and Paper 12183-1C “The Pyxis Interferometer (II): Control System, Telescope and Mechanical Design” <cit.>.
The fine metrology section (Section <ref>) also adapts a portion of the 2020 SPIE Astronomical Telescopes and Instrumentation conference proceeding paper 11446-2F "Compact unambiguous differential path-length metrology with dispersed Fabry-Perot laser diodes for a space interferometer array"<cit.>. §.§ Acknowledgments We acknowledge and celebrate the traditional custodians of the land on which the Australian National University is based, the Ngunnawal and Ngambri peoples, and pay our respects to elders past and present. The authors also acknowledge the substantial work of the entire Pyxis team and associates in progressing this project and its results: Julien Bernard, Nicholas Bohlsen, Logan Corry, Michael Ellis, Steven Ellis, Alex Fan, Shanae King, Weihao Luo, Stephen Madden, Joseph Mangos, Patrick Miller, Michael Polkinghorne, Laura Schlueter, Thomas Scott, Hancheng Shao and Kunlun Yan. This research was supported by funding from Australian Research Council grant No. DP200102383. JH acknowledges support from the Australian Government Research Training Program, and the College of Science's Dean's Merit HDR supplementary scholarship. §.§ Code, Data, and Materials Availability The data used in this paper is available upon reasonable request to the authors. Jonah Hansen is a PhD candidate at the Australian National University, researching space interferometry. In particular, he is assisting to design and build the Pyxis interferometer - a ground-based pathfinder for an eventual space interferometer mission, LIFE (Large Interferometer For Exoplanets), designed to find and characterise exoplanets. He is also assisting in modelling different array architectures and beam combiners for this latter mission. Samuel Wade is a mechatronics and project engineer at the Australian National University, assisting in the engineering aspects of Pyxis - a ground-based pathfinder for an optical space interferometer. Samuel has had previous experience in satellite sensors and ground-based space surveillance. Michael Ireland is a Professor of Astrophysics and Instrumentation Science at the Australian National University. He obtained his PhD from the University of Sydney in 2006, and has since held positions at the California Institute of Technology, the Australian Astronomical Observatory, Macquarie University and the University of Sydney. His research focuses on stellar astrophysics, exoplanet formation, the search for life on other worlds and technologies needed to support these endeavours. Tony Travouillon is an associate professor at the Australian National University and is responsible for the instrumentation and telescope development of its school of astronomy. He received his PhD in 2005 from the University of New South Wales. Tiphaine Lagadec is an instrument scientist whose interest is the development of interferometric techniques and technologies to investigate stellar systems at high resolution. Tiphaine obtained a PhD from the University of Sydney in 2020 and has participated in the fields of intensity interferometry, nulling interferometry, space interferometry and solar coronagraphy. Joice Mathew is working as an instrumentation scientist at the Australian National University (ANU). His research interests include electro-optical payload development, instrument modelling, systems engineering, space instrumentation, and qualification. Joice obtained his Ph.D. in astronomical space instrumentation from the Indian Institute of Astrophysics, Bangalore.
Before joining ANU, he worked as a visiting instrument scientist at the Max Planck Institute for Solar System Research, Germany, on the Solar Orbiter mission. Stephanie Monty is a research associate at the University of Cambridge. She completed her PhD at the Australian National University in 2022. Her research focuses on Galactic archaeology, stellar dynamics, fibre-fed spectrographs and adaptive optics. Adam Rains completed his PhD in astronomy and astrophysics at the Australian National University in 2021 and is now a postdoctoral researcher at Uppsala University in Sweden. His research interests sit at the intersection of stellar and exoplanetary astrophysics – in particular the spectroscopic characterisation of low-mass stars and their planets. Biographies of the other authors are not available at this time.
http://arxiv.org/abs/2307.04060v1
20230708233916
Double instability of Schwarzschild black holes in Einstein-Weyl-scalar theory
[ "Yun Soo Myung" ]
gr-qc
[ "gr-qc", "hep-th" ]
Double instability of Schwarzschild black holes in Einstein-Weyl-scalar theory Yun Soo Myung^a[e-mail address: [email protected]] ^aInstitute of Basic Sciences and Department of Computer Simulation, Inje University, Gimhae 50834, Korea We study the stability of the Schwarzschild black hole in Einstein-Weyl-scalar (EWS) theory with a quadratic scalar coupling to the Weyl term. Its linearized theory admits the Lichnerowicz equation for the Ricci tensor as well as a scalar equation. The linearized Ricci tensor carries a regular mass term (m^2_2), whereas the linearized scalar has a tachyonic mass term (-1/m^2_2). It turns out that the double instability of the Schwarzschild black hole in EWS theory is given by the Gregory-Laflamme and tachyonic instabilities. In the small mass regime of m_2<0.876, the Schwarzschild black hole becomes unstable against Ricci-tensor perturbations, while the tachyonic instability is achieved for m_2<1.174. The former would provide a single branch of scalarized black holes, whereas the latter would induce infinite branches of scalarized black holes. § INTRODUCTION Recently, black hole solutions with scalar hair obtained from Einstein-Gauss-Bonnet-scalar (EGBS) theories <cit.> and Einstein-Maxwell-scalar theory <cit.> have received much attention because they easily evade the no-hair theorem <cit.> by introducing a non-minimal (quadratic) scalar coupling function f(ϕ) to the Gauss-Bonnet and Maxwell terms. We note that these scalarized black hole solutions are closely related to the appearance of tachyonic instability for bald black holes. In these linearized theories, the instability of the Schwarzschild black hole is determined solely by the linearized scalar equation where the Gauss-Bonnet term acts as an effective mass term <cit.>, while the instability of the Reissner-Nordström (RN) black hole is given just by the linearized scalar equation where the Maxwell term plays the role of an effective mass term <cit.>. This is allowed because their linearized Einstein and Einstein-Maxwell equations reduce to those for the linearized Einstein theory around the Schwarzschild black hole and the Einstein-Maxwell theory around the RN black hole, which turned out to be stable against tensor (metric) and vector-tensor perturbations. It is well known that higher curvature gravity (Einstein-Weyl theory) with a mass coupling parameter m^2_2 provides the non-Schwarzschild black hole solution which crosses the Schwarzschild black hole solution at the bifurcation point of m_2=0.876 <cit.>. This solution describes a black hole with non-zero Ricci tensor (R̅_μν≠0), compared to the zero Ricci tensor (R̅_μν=0) of the Schwarzschild black hole. We note that the trace no-hair theorem for the Ricci scalar played an important role in obtaining the non-Schwarzschild black hole solution. It is worth noting that the instability of the Schwarzschild black hole was found in massive gravity theory <cit.>, since the Schwarzschild black hole was known to be dynamically stable against tensor perturbations in Einstein theory <cit.>. In the linearized Einstein-Weyl theory, the instability bound of the Schwarzschild black hole was found as m_2<0.876 with r_+=1 when solving the Lichnerowicz equation for the linearized Ricci tensor <cit.>, which is the same equation as the linearized Einstein equation around a (4+1)-dimensional black string where the Gregory-Laflamme (GL) instability first appeared <cit.>.
A slight difference is that the instability of the Schwarzschild black hole arose from the massiveness m_2≠0 in the Einstein-Weyl theory, whereas the GL instability appeared from the geometry of an extra z dimension in (4+1)-dimensional black string theory. This means that the mass m_2 trades for the extra dimension z. In the present work, we wish to study two instabilities of Schwarzschild black holes simultaneously by introducing the Einstein-Weyl-scalar theory with a quadratic scalar coupling to the Weyl term, instead of the Gauss-Bonnet term. In this case, the linearized Ricci tensor δ R_μν has a regular mass term m^2_2, whereas the linearized scalar δϕ possesses a tachyonic mass term (-1/m^2_2). The linearized scalar equation around the Schwarzschild black hole undergoes tachyonic instability for m_2<1.174, while the Lichnerowicz equation for the linearized Ricci tensor reveals GL instability for m_2<0.876. We expect that the former may induce infinite branches (n=0,1,2,⋯) of scalarized black holes, while the latter admits a single branch (m_2≠0) of scalarized black holes. This means that the roles of their mass terms are quite different for producing scalarized black holes. § EINSTEIN-WEYL-SCALAR (EWS) THEORY We introduce the EWS theory defined by S_EWS=1/16π∫ d^4 x√(-g)[ R-2∂_μϕ∂^μϕ-(f(ϕ)/2m^2_2) C^2], where f(ϕ)=1+ϕ^2 is a quadratic scalar coupling function, m_2^2 denotes a mass coupling parameter, and C^2 represents the Weyl term (Weyl scalar invariant) given by C^2(≡ C_μνρσC^μνρσ)=2(R_μνR^μν-R^2/3)+ R_GB^2 with the Gauss-Bonnet term R_GB^2=R^2-4R_μνR^μν+R_μνρσR^μνρσ. In the limit of m_2^2→∞, the Weyl term decouples and the theory reduces to the tensor-scalar theory. We wish to emphasize that scalar couplings to the Gauss-Bonnet term were mostly used to find black holes with scalar hair within EGBS theory because such a coupling provides an effective mass term for the linearized scalar without modifying metric perturbations <cit.>. This is so because the Gauss-Bonnet term is a topological term in four dimensions. Actually, the Weyl term is similar to the Maxwell term (F^2) because both are conformally invariant and their variations with respect to g_μν are traceless. From the action (<ref>), we derive the Einstein equation G_μν=2∂_μϕ∂_νϕ -(∂ϕ)^2g_μν+2(1+ϕ^2)B_μν/m^2_2-Γ_μν/m^2_2, where G_μν=R_μν-(R/2)g_μν is the Einstein tensor. Here, B_μν (B^μ_μ=0), coming from the first part of (<ref>), is the Bach tensor defined as B_μν = R_μρνσR^ρσ-(g_μν/4) R_ρσR^ρσ- (R/3)(R_μν-(g_μν/4)R) + (1/2)(∇^2R_μν-(g_μν/6)∇^2 R-(1/3)∇_μ∇_ν R) and Γ_μν is given by Γ_μν = -(4/3)R∇_(μΨ_ν)-∇^αΨ_α(3R_μν-(4g_μν/3)R)+ 6R_(μ|α|∇^αΨ_ν) - 3 R^αβ∇_αΨ_β g_μν +4R^β_ μαν∇^αΨ_β with Ψ_μ= 2ϕ∂_μϕ. Its trace is non-zero: Γ^μ_μ=R∇^ρΨ_ρ-2R^ρσ∇_ρΨ_σ. Importantly, the scalar equation is given by ∇^2 ϕ +(C^2/4m^2_2)ϕ=0. Considering ϕ̅=0, the Schwarzschild solution is found from Eqs.(<ref>) and (<ref>) as ds^2_SBH= g̅_μνdx^μ dx^ν=-(1-r_+/r)dt^2+dr^2/(1-r_+/r)+r^2dΩ^2_2 with horizon radius r_+=2M. This Schwarzschild background gives us R̅_μνρσ≠0, R̅_μν=0, and R̅=0. In this case, one easily finds that C̅^2=R̅_μνρσR̅^μνρσ=12r_+^2/r^6=R̅^2_GB. § DOUBLE INSTABILITY FOR SCHWARZSCHILD BLACK HOLE For the stability analysis of the Schwarzschild black hole, we need the two linearized equations which describe the metric perturbation h_μν in (g_μν=g̅_μν+h_μν) and the scalar perturbation δϕ in (ϕ=0+δϕ) propagating around (<ref>).
They are obtained by linearizing Eqs.(<ref>) and (<ref>) as ∇̅^2δ G_μν+2R̅_μρνσδ G^ρσ-(1/3)(∇̅_μ∇̅_ν-g̅_μν∇̅^2)δ R-m^2_2 δ G_μν=0, (∇̅^2+ 3r_+^2/m^2_2r^6)δϕ= 0 with δ G_μν=δ R_μν-δ R g̅_μν/2 the linearized Einstein tensor. Here, we note that `m^2_2' in Eq.(<ref>) is regarded as a regular mass term, while `3r_+^2/m^2_2r^6' in Eq.(<ref>) corresponds to a tachyonic mass term for m^2_2>0. Taking the trace of Eq.(<ref>) leads to m^2_2 δ R=0, which implies the non-propagation of the linearized Ricci scalar: δ R=0. We confirm Eq.(<ref>) by linearizing R=2(∂ϕ)^2+Γ^μ_μ/m^2_2. This non-propagation of the linearized scalar plays an important role in obtaining a linearized theory of the EWS theory. Plugging Eq.(<ref>) into Eq.(<ref>), one finds the Lichnerowicz-Ricci tensor equation for the traceless and transverse Ricci tensor δ R_μν, (Δ̅_L+m^2_2) δ R_μν=0, where the Lichnerowicz operator on the Schwarzschild background is given by Δ̅_Lδ R_μν=-∇̅^2δ R_μν-2R̅_μρνσδ R^ρσ. Here, we consider m^2_2>0 for the non-tachyonic case. Actually, Eq.(<ref>) describes a massive spin-2 mode (δ R_μν) with mass m_2 propagating on the Schwarzschild black hole background. Let us solve the Lichnerowicz-Ricci tensor equation (<ref>) by adopting δ R_μν(t,x)=e^Ω tδR̃_μν(x). Its s(l=0)-mode in the polar sector satisfies the Schrödinger-type equation when introducing a tortoise coordinate r_*=∫[dr/(1-r_+/r)]: d^2δR̃^l=0_μν/dr^2_*-[Ω^2+V_Z(r)]δR̃^l=0_μν=0, where the Zerilli potential V_Z(r) is given by <cit.> V_Z(r)=(1-r_+/r)[m^2_2 +r_+/r^3-(12m^2_2r_+(r-0.5r_+)+6m^4_2r^3(2r_+-r))/(r_++m^2_2r^3)^2]. As is shown in (Left) Fig. 1, all potentials with m_2≠0 develop a negative region near the horizon, while their asymptotic values are given by m^2_2>0. The negative region becomes wide and deep as the mass parameter m_2 decreases, implying GL instability of the Schwarzschild black hole. In the case of m_2=0, however, there is no GL instability because the potential V_Z(r) is positive definite outside the horizon. Solving Eq.(<ref>) numerically with appropriate boundary conditions, one finds the GL instability bound from (Left) Fig. 2 as 0<m_2<m_2^th=0.876, for r_+=1, where m_2^th denotes the threshold of GL instability. It is important to note that this bound is found in the EWS theory, but there is no such bound in the EGBS theory. In the study of the instability of the Euclidean Schwarzschild black hole within Einstein gravity, Gross, Perry, and Yaffe found that there is just one normalizable negative-eigenvalue mode of the Lichnerowicz operator [(Δ^E_L-λ_GPY)h_μν=0] <cit.>. This connection can be realized from Eq.(<ref>) because, when one considers δ R_μν=Δ̅_Lh_μν/2 for ∇̅^μ h_μν=0 and h^μ_μ=0, Eq.(<ref>) implies that Δ̅_Lh_μν=0 or (Δ̅_L+m^2_2)h_μν=0. Its eigenvalue is given by λ_GPY[=-(m_2^th)^2]=-0.768/r_+^2, which was noted in the early study of the Schwarzschild black hole within higher curvature gravity <cit.>. Indeed, λ_GPY is related to the thermodynamic instability of negative heat capacity C=-2π r_+^2 for the Schwarzschild black hole in the canonical ensemble. On the other hand, we focus on the linearized scalar equation (<ref>), which takes the same form as in the linearized EGBS theory. Considering δϕ(t,r,θ,φ)=(u(r)/r)e^-iω tY_lm(θ,φ), the radial equation for the s(l=0)-mode scalar leads to the Schrödinger-type equation d^2u/dr_*^2+[ω^2-V_S(r)]u(r)=0, where the scalar potential V_S(r) is given by V_S(r)=(1-r_+/r)[r_+/r^3-3r_+^2/m^2_2r^6], where the last term corresponds to a tachyonic mass term.
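To make the role of the tachyonic term concrete, the sketch below evaluates V_S(r) for r_+=1 and checks the integral criterion employed in the next paragraph; this reproduces the sufficient-condition bound m_2 < sqrt(12/10) ≈ 1.095 but, as emphasised below, not the true numerical threshold of 1.174.

import numpy as np
from scipy.integrate import quad

r_plus = 1.0

def V_S(r, m2):
    # Scalar s-mode potential; the 1/m2^2 piece is the tachyonic term.
    return (1 - r_plus / r) * (r_plus / r**3 - 3 * r_plus**2 / (m2**2 * r**6))

# Sufficient condition: integral over (r_+, inf) of V_S/(1 - r_plus/r) < 0.
for m2 in [0.8, 1.095, 1.3]:
    val, _ = quad(lambda r: r_plus / r**3 - 3 * r_plus**2 / (m2**2 * r**6),
                  r_plus, np.inf)
    print(m2, val)   # sign change at m2^2 r_+^2 = 6/5, i.e. m2 ~ 1.095
# Analytically: 1/(2 r_+) - 3/(5 m2^2 r_+^3) < 0  <=>  m2^2 r_+^2 < 12/10.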
Considering ∫^∞_r_+ dr [V_S(r)/(1-r_+/r)]<0, one may introduce a sufficient condition of tachyonic instability for the mass parameter m_2 <cit.>: m^2_2r_+^2<12/10 ⇒ m_2<m_2^sc=1.095/r_+. However, Eq.(<ref>) is not a necessary and sufficient condition for tachyonic instability. Observing (Right) Fig. 1, one finds that the negative region becomes wide and deep as the mass parameter m_2 decreases, implying tachyonic instability of the Schwarzschild black hole. To determine the threshold of tachyonic instability, one has to solve the second-order differential equation (<ref>) with ω=iΩ numerically, which may allow an exponentially growing mode of e^Ω t as an unstable mode. In this case, we choose two boundary conditions: a normalizable solution of u(∞)∼ e^-Ω r_* at infinity and a solution of u(r_+)∼(r-r_+)^Ω r_+ near the horizon. By observing (Right) Fig. 2 together with r_+=1, we read off the bound for tachyonic instability as m_2<m_2^sth=1.174, which implies that the threshold of tachyonic instability, 1.174, is greater than 1.095 (the sufficient-condition bound). This corresponds to a bifurcation point between the Schwarzschild branch and the n=0 branch of scalarized black holes. In the limit of m^2_2 → 0, one has an infinitely negative potential, which implies a large Ω as seen from (Right) Fig. 2. Finally, we obtain an inequality between the thresholds of the GL and tachyonic instabilities: m_2^th<m_2^sth. However, we remind the reader that the linearized Ricci tensor δ R_μν carries a regular mass term (m^2_2), whereas the linearized scalar δϕ has a tachyonic mass term (-1/m^2_2). In this sense, the GL instability is quite different from the tachyonic instability <cit.>. § DISCUSSIONS In this work, we have investigated two instabilities of Schwarzschild black holes simultaneously by introducing the EWS theory with a quadratic scalar coupling to the Weyl term. Here, the linearized Ricci tensor has a regular mass term (m^2_2), whereas the linearized scalar possesses a tachyonic mass term (-1/m^2_2). The linearized scalar equation around the black hole indicates tachyonic instability for m_2<1.174, while the Lichnerowicz equation for the linearized Ricci tensor shows GL instability for m_2<0.876. This suggests that their mass terms play different roles in generating scalarized black holes because the GL instability is quite different from the tachyonic instability. We expect that the former may induce infinite branches (n=0,1,2,⋯) of scalarized black holes, while the latter admits a single branch (m_2>0) of scalarized black holes. Now, we would like to mention the non-Schwarzschild black hole solutions obtained from the Einstein-Weyl theory (the ϕ=0 EWS theory with m_2^2>0). This solution can be obtained numerically by requiring the no-hair theorem for the Ricci scalar (R=0) <cit.>. Actually, it corresponds to a single branch of non-Schwarzschild black holes with Ricci-tensor hair <cit.>. Recently, it was shown that the long-wavelength instability bound for non-Schwarzschild black holes is given by m_2<0.876 <cit.>, which is the same bound as the GL instability for the Schwarzschild black hole <cit.>, but it contradicts the conjecture from black hole thermodynamics addressed in <cit.>. We expect that a single branch of non-Schwarzschild black holes with Ricci-tensor and scalar hairs would be found from the EWS theory with f(ϕ)=1+ϕ^2. On the other hand, we consider the scalar equation (<ref>) with tachyonic mass.
From its static equation with ω=0, we obtain an infinite spectrum of the parameter m_2: m_2∈ [1.174=m_2^sth, 0.453, 0.280, 0.202, ⋯], which defines infinite branches of scalarized black holes: n=0((0,1.174]), n=1((0,0.453]), n=2((0,0.28]), n=3((0,0.202]),⋯. Also, n=0, 1, 2, 3,⋯ are identified with the number of nodes of the δϕ(z) = u(z)/z profile. Thus, it is expected that infinite branches (n=0, 1, 2, 3,⋯) of black holes with scalar hair would be found when solving Eqs.(<ref>) and (<ref>) numerically. However, this computation seems not to be easy because Eq.(<ref>) includes fourth-order derivatives and its Ricci scalar is not zero (R=2(∂ϕ)^2+Γ^μ_μ/m^2_2). We wish to mention the conventional case of a quadratic coupling function f(ϕ)=ϕ^2. In this case, there is no GL instability because the Bach-tensor term does not contribute to the linearized Einstein equation (<ref>). Here, the linearized EWS theory reduces to the linearized EGBS theory, which provides an n=0 band with bandwidth 1.174 < m_2 < 1.272 <cit.>. This band of black holes with scalar hair is unstable against radial perturbations <cit.>. This is the reason why we choose the EWS theory with the quadratic coupling function f(ϕ)=1+ϕ^2. Finally, for the EWS theory with a quartic coupling function f(ϕ)=(1-e^-κϕ^4)/4κ <cit.>, the linearized scalar equation leads to ∇̅^2δϕ=0, which implies that there is no tachyonic instability. Also, its linearized Einstein equation is given by δ G_μν=0, which indicates that there is no GL instability. In this quartic coupling case, the linearized EWS theory reduces to the linearized EGBS theory, showing tachyonic stability. Without tachyonic instability, one expects to have a single branch of nonlinearly scalarized black holes but not infinite branches of scalarized black holes. Acknowledgments The author thanks De-Cheng Zou for helpful discussions. Antoniou:2017acq G. Antoniou, A. Bakopoulos and P. Kanti, Phys. Rev. Lett. 120, no.13, 131102 (2018) doi:10.1103/PhysRevLett.120.131102 [arXiv:1711.03390 [hep-th]]. Doneva:2017bvd D. D. Doneva and S. S. Yazadjiev, Phys. Rev. Lett. 120, no.13, 131103 (2018) doi:10.1103/PhysRevLett.120.131103 [arXiv:1711.01187 [gr-qc]]. Silva:2017uqg H. O. Silva, J. Sakstein, L. Gualtieri, T. P. Sotiriou and E. Berti, Phys. Rev. Lett. 120, no.13, 131104 (2018) doi:10.1103/PhysRevLett.120.131104 [arXiv:1711.02080 [gr-qc]]. Herdeiro:2018wub C. A. R. Herdeiro, E. Radu, N. Sanchis-Gual and J. A. Font, Phys. Rev. Lett. 121, no.10, 101102 (2018) doi:10.1103/PhysRevLett.121.101102 [arXiv:1806.05190 [gr-qc]]. Bekenstein:1995un J. D. Bekenstein, Phys. Rev. D 51, no.12, R6608 (1995) doi:10.1103/PhysRevD.51.R6608 Myung:2018iyq Y. S. Myung and D. C. Zou, Phys. Rev. D 98, no.2, 024030 (2018) doi:10.1103/PhysRevD.98.024030 [arXiv:1805.05023 [gr-qc]]. Myung:2018vug Y. S. Myung and D. C. Zou, Eur. Phys. J. C 79, no.3, 273 (2019) doi:10.1140/epjc/s10052-019-6792-6 [arXiv:1808.02609 [gr-qc]]. Lu:2015cqa H. Lu, A. Perkins, C. N. Pope and K. S. Stelle, Phys. Rev. Lett. 114, no.17, 171601 (2015) doi:10.1103/PhysRevLett.114.171601 [arXiv:1502.01028 [hep-th]]. Babichev:2013una E. Babichev and A. Fabbri, Class. Quant. Grav. 30, 152001 (2013) doi:10.1088/0264-9381/30/15/152001 [arXiv:1304.5992 [gr-qc]]. Brito:2013wya R. Brito, V. Cardoso and P. Pani, Phys. Rev. D 88, no.2, 023514 (2013) doi:10.1103/PhysRevD.88.023514 [arXiv:1304.6725 [gr-qc]]. Regge:1957td T. Regge and J. A. Wheeler, Phys. Rev. 108, 1063-1069 (1957) doi:10.1103/PhysRev.108.1063 Zerilli:1970se F. J. Zerilli, Phys. Rev. Lett.
24, 737-738 (1970) doi:10.1103/PhysRevLett.24.737 Myung:2013doa Y. S. Myung, Phys. Rev. D 88, no.2, 024039 (2013) doi:10.1103/PhysRevD.88.024039 [arXiv:1306.3725 [gr-qc]]. Gregory:1993vy R. Gregory and R. Laflamme, Phys. Rev. Lett. 70, 2837-2840 (1993) doi:10.1103/PhysRevLett.70.2837 [arXiv:hep-th/9301052 [hep-th]]. Lu:2017kzi H. Lü, A. Perkins, C. N. Pope and K. S. Stelle, Phys. Rev. D 96, no.4, 046006 (2017) doi:10.1103/PhysRevD.96.046006 [arXiv:1704.05493 [hep-th]]. Gross:1982cv D. J. Gross, M. J. Perry and L. G. Yaffe, Phys. Rev. D 25, 330-355 (1982) doi:10.1103/PhysRevD.25.330 Whitt:1985ki B. Whitt, Phys. Rev. D 32, 379 (1985) doi:10.1103/PhysRevD.32.379 Held:2022abx A. Held and J. Zhang, Phys. Rev. D 107, no.6, 064060 (2023) doi:10.1103/PhysRevD.107.064060 [arXiv:2209.01867 [gr-qc]]. Stelle:2017bdu K. S. Stelle, Int. J. Mod. Phys. A 32, no.09, 1741012 (2017) doi:10.1142/S0217751X17410123 Blazquez-Salcedo:2018jnn J. L. Blázquez-Salcedo, D. D. Doneva, J. Kunz and S. S. Yazadjiev, Phys. Rev. D 98, no.8, 084011 (2018) doi:10.1103/PhysRevD.98.084011 [arXiv:1805.05755 [gr-qc]]. Doneva:2021tvn D. D. Doneva and S. S. Yazadjiev, Phys. Rev. D 105, no.4, L041502 (2022) doi:10.1103/PhysRevD.105.L041502 [arXiv:2107.01738 [gr-qc]]. Blazquez-Salcedo:2022omw J. L. Blázquez-Salcedo, D. D. Doneva, J. Kunz and S. S. Yazadjiev, Phys. Rev. D 105, no.12, 124005 (2022) doi:10.1103/PhysRevD.105.124005 [arXiv:2203.00709 [gr-qc]]. Lai:2023gwe M. Y. Lai, D. C. Zou, R. H. Yue and Y. S. Myung, [arXiv:2304.08012 [gr-qc]].
http://arxiv.org/abs/2307.04874v1
20230710195039
Chern-Kuiper's inequalities
[ "Diego Guajardo" ]
math.DG
[ "math.DG" ]
Given a Euclidean submanifold g:M^n→ℝ^n+p, Chern and Kuiper provided inequalities between μ and ν_g, the ranks of the nullity of M^n and of the relative nullity of g, respectively. Namely, they proved that ν_g≤μ≤ν_g+p. In this work, we study the submanifolds with ν_g≠μ. More precisely, we characterize locally the ones with 0≠(μ-ν_g)∈{p,p-1,p-2} under the hypothesis ν_g≤ n-p-1. § INTRODUCTION There are two distributions associated to a submanifold g:M^n→ℝ^n+p, the nullity Γ⊆ TM of the curvature tensor and the relative nullity Δ_g⊆ TM, i.e., the nullity of the second fundamental form α of g. The relative nullity plays a fundamental role in many works of submanifold theory; for example <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. In many of them, this distribution coincides with the nullity, turning the problem into an intrinsic one; besides the ones already cited, see <cit.>, <cit.>, <cit.>. We want to understand the submanifolds whose relative nullity does not coincide with the nullity. There are two natural families of submanifolds with ν_g≠μ. Firstly, if M^n is flat and g is not (an open subset of) an affine subspace, then Δ_g≠ TM=Γ. Secondly, we have the compositions: if ĝ:M^n→ℝ^n+ℓ has nontrivial nullity and h:U⊆ℝ^n+ℓ→ℝ^n+p is a flat submanifold with ĝ(M^n)⊆ U, then generically g=h∘ĝ:M^n→ℝ^n+p has smaller relative nullity, and in particular Δ_g≠Γ. Theorem 1 of <cit.> is an example of this phenomenon. As a starting point, the Gauss equation shows that Δ_g⊆Γ. Furthermore, Chern and Kuiper provided a complementary relation in <cit.>. Namely, they showed that the ranks μ:=dim(Γ) and ν_g:=dim(Δ_g) are related by the Chern-Kuiper inequalities ν_g≤μ≤ν_g+p. Straightforward computations show that if ν_g=μ-p then M^n is flat and ν_g=μ-p=n-p. Proposition 7 of <cit.> analyzes the next case of the Chern-Kuiper inequalities in a restricted situation. It shows that if g:M^n→ℝ^n+2 has ν_g=μ-1=n-3 then g is locally a composition. However, the authors' approach seems difficult to generalize. The first result of this work extends that proposition, and the generalization is in two directions: we allow higher codimensions and do not impose a particular rank for the nullity. Let g:M^n→ℝ^n+p be a submanifold with p≥ 2 and ν_g=μ-p+1≤ n-p-1. Then g=G∘ĝ is a composition, where G:N^n+1→ℝ^n+p is a flat submanifold and ĝ:M^n→ N^n+1 is an isometric embedding. Moreover, Δ_ĝ=Γ and ν_G=(n+1)-(p-1). In particular, with this composition theorem for the case μ=ν_g+p-1, we characterize locally the submanifolds g:M^n→ℝ^n+2 with μ≠ν_g. Observe that the inequality condition in the last result is equivalent to M^n being nowhere flat. Using our technique, we analyze the next case of the Chern-Kuiper inequalities. We show that if p≥ 3 and ν_g=μ-p+2≤ n-p-1 then, on connected components of a dense subset of M^n, g is also a composition. Let g:M^n→ℝ^n+p be an isometric immersion with p≥ 3 and ν_g=μ-p+2≤ n-p-1. Let U be a connected component of an open dense subset of M^n where (p-ℓ):=dim(𝒮(α|_TM×Γ)) is constant. Then ℓ∈{1,2} and g|_U=G∘ĝ is a composition, where G:N^n+ℓ→ℝ^n+p is a flat submanifold and ĝ:U⊆ M^n→ N^n+ℓ is an isometric embedding. Moreover, Δ_ĝ=Γ and ν_G∈{(n+1)-(p-j)}_j=ℓ^2. The organization of this paper is as follows. In Section 2, we recall flat bilinear forms, properties of the nullities, and ruled extensions, among others. Section 3 is devoted to analyzing the submanifolds with ν_g≠μ; more precisely, it is divided into subsections analyzing each possible value of μ-ν_g. Lastly, in the final section, we give some closing comments about this work.
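We close this introduction with two standard examples illustrating both extremes of the inequalities; they are folklore and are our addition for orientation, not taken from the paper's references. For the round sphere g:S^n→ℝ^n+1 with n≥2 (so p=1), the curvature tensor has trivial nullity and the second fundamental form is nondegenerate, hence ν_g=μ=0 and the lower bound is attained. For the cylinder g:S^1×ℝ^n-1→ℝ^n+1, the induced metric is flat, so Γ=TM and μ=n, while the relative nullity consists precisely of the ruling directions, so ν_g=n-1; hence μ=ν_g+p and the upper bound is attained by a flat manifold, in agreement with the case ν_g=μ-p discussed above.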
Acknowledgment. This work is a portion of the author's Ph.D. thesis at IMPA - Rio de Janeiro. The author would like to thank his adviser, Prof. Luis Florit, for his guidance. § PRELIMINARIES In this section, we introduce the main techniques used in this article. Firstly, we discuss the basic properties of bilinear forms. Then, we analyze the two principal distributions of this work, namely the nullity and the relative nullity. The final subsection summarizes the properties of ruled extensions. §.§ Flat bilinear forms Given a bilinear map β:𝕍×𝕌→𝕎 between real vector spaces, set 𝒮(β)=span{β(X,Y):X∈𝕍, Y∈𝕌}⊆𝕎. The (left) nullity of β is the vector subspace Δ_β={X∈𝕍:β(X,Y)=0, ∀ Y∈𝕌}⊆𝕍. For each Y∈𝕌 we denote by β^Y:𝕍→𝕎 the linear map defined by β^Y(X)=β(X,Y). Let Re(β)={Y∈𝕌:dim(Im(β^Y)) is maximal} be the set of (right) regular elements of β, which is open and dense in 𝕌. There are similar definitions for left regular elements and the right nullity. Assume now that 𝕎 has a positive definite inner product ⟨·,·⟩:𝕎×𝕎→ℝ. We say that β is flat if ⟨β(X,Y),β(Z,W)⟩=⟨β(X,W),β(Z,Y)⟩ ∀ X,Z∈𝕍, ∀ Y,W∈𝕌. The next result is due to Moore <cit.>. It lets us determine the nullity of a flat bilinear form. Let β:𝕍×𝕌→𝕎 be a flat bilinear form. If Z_0∈𝕌 is a right regular element, then Δ_β=ker(β^Z_0). In particular, dim(Δ_β)=dim(𝕍)-dim(Im(β^Z_0))≥dim(𝕍)-dim(𝕎). §.§ Intrinsic and relative nullities We now describe the two main distributions of this work, the nullity of a Riemannian manifold and the relative nullity of a submanifold. Given a Riemannian manifold M^n and x∈ M^n, the nullity of M^n at x is the nullity of its curvature tensor R at x, that is, the subspace of T_xM given by Γ(x)={X∈ T_xM: R(X,Y)=0, ∀ Y∈ T_xM}. The rank of M^n at x is defined as n-μ, where μ=dim(Γ(x)). As the results that we are looking for are of local nature and our subspaces are all either kernels or images of smooth tensor fields, without further notice we will always work on each connected component of an open dense subset of M^n where all these dimensions are constant and thus all the subbundles are smooth. In particular, we assume that μ is constant, and hence the second Bianchi identity implies that Γ is a totally geodesic distribution, namely, ∇_ΓΓ⊆Γ. For an isometric immersion g:M^n→ℝ^n+p we denote by α^g:TM× TM→ T^⊥_gM its second fundamental form. We define the relative nullity of g at x as the nullity of α^g(x), that is, Δ_g(x):=Δ_α^g(x). The rank of g is the number n-ν_g, where ν_g=dim(Δ_g). The Gauss equation implies that Δ_g⊆Γ, while the Codazzi equation implies that it is a totally geodesic distribution of M^n. The isometric immersion g:M^n→ℝ^n+p is said to be R^d-ruled if R^d⊆ TM is a d-dimensional totally geodesic distribution whose leaves are mapped by g onto (open subsets of) affine subspaces of ℝ^n+p. §.§ Revisiting ruled extensions Given a submanifold g:M^n→ℝ^n+p with μ≠ν_g≤ n-p-1, we want to describe g as a composition G∘ĝ, where G:N^n+ℓ→ℝ^n+p is flat, as in Theorems <ref> and <ref>. For this, we will use ruled extensions. The present subsection describes the basic properties of these extensions, many of which are already present in the literature; see <cit.> and <cit.> for example. In order to describe g as such a composition, the first step is to find a rank ℓ subbundle L=L^ℓ⊆ T^⊥_gM to be a candidate for the normal bundle of ĝ.
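Before introducing the tensor ϕ, a toy illustration of Moore's result above may be helpful (the example is ours, not from <cit.>). Take β:ℝ^2×ℝ^2→ℝ given by β((x_1,x_2),(y_1,y_2))=x_1y_1. Flatness holds since ⟨β(X,Y),β(Z,W)⟩=x_1y_1z_1w_1 is unchanged when Y and W are exchanged. Any Y with y_1≠0 is a right regular element, and ker(β^Y)={x_1=0}=Δ_β, exactly as the statement predicts; note also that dim(Δ_β)=1=dim(𝕍)-dim(𝕎).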
Then, we consider the tensor ϕ:=ϕ_L:TM×(TM⊕ L)→ L^⊥ given by ϕ(X,v)=(∇̃_Xv)_L^⊥, where ∇̃ is the connection of ℝ^n+p and the subindex denotes the orthogonal projection onto the respective subspace, in this case L^⊥. Proposition 17 of <cit.> shows the importance of this tensor for our work. Namely, the flatness of ϕ is equivalent to the local existence of an isometric immersion ĝ:U⊆ M^n→ℝ^n+ℓ whose normal bundle is L (up to a parallel identification) and whose second fundamental form is the orthogonal projection of α^g onto L. However, meaningful cases also occur when ϕ is not necessarily flat, as shown by the rank(L)=1 theorem of the last subsection. Consider the covariant derivative of ϕ, (∇_Xϕ)(Y,v):=(∇̃_X(ϕ(Y,v)))_L^⊥-ϕ(∇_XY,v)-ϕ(Y,(∇̃_Xv)_TM⊕ L). Notice that (∇_Xϕ)(Y,v)-(∇_Yϕ)(X,v) =(∇̃_X(ϕ(Y,v)))_L^⊥-(∇̃_Y(ϕ(X,v)))_L^⊥-ϕ([X,Y],v) +ϕ(X,(∇̃_Yv)_TM⊕ L)-ϕ(Y,(∇̃_Xv)_TM⊕ L) =(∇̃_X∇̃_Yv-∇̃_Y∇̃_Xv-∇̃_[X,Y]v)_L^⊥, but the curvature of the ambient space is zero, so ϕ satisfies the following Codazzi equation: (∇_Xϕ)(Y,v)=(∇_Yϕ)(X,v), ∀ X,Y∈ TM, ∀ v∈ TM⊕ L. We denote by Δ_ϕ^l and Δ_ϕ^r the left and right nullities of ϕ, respectively. Certainly, Δ_ϕ^l⊆Δ_ϕ^r∩ TM. Moreover, the Codazzi equation implies that Δ_ϕ^l⊆ TM is integrable and ∇̃_Δ_ϕ^lΔ_ϕ^r⊆Δ_ϕ^r. In particular, if Δ_ϕ^l=Δ_ϕ^r then g is Δ_ϕ^l-ruled. Given such a ϕ, we define the curvature of ϕ as the tensor R_ϕ given by R_ϕ(X,Y,v,w)=⟨ϕ(X,w),ϕ(Y,v)⟩-⟨ϕ(X,v),ϕ(Y,w)⟩, ∀ X,Y∈ TM, ∀ v,w∈ TM⊕ L. In particular, ϕ is flat when its curvature is zero. Intuitively, ϕ and R_ϕ are the second fundamental form and the curvature of the extension, respectively. The curvature of ϕ satisfies the following Bianchi identities. [Bianchi's identities] The curvature of ϕ satisfies the first and second Bianchi identities ∑ R_ϕ(S,T,U,v)=0, ∀ S,T,U∈ TM, ∀ v∈ TM⊕ L, and ∑(∇_SR_ϕ)(T,U,v,w)=0, ∀ S,T,U∈ TM, ∀ v,w∈ TM⊕ L, where the sum denotes the cyclic sum over S, T, and U. The first Bianchi identity follows from expanding the curvature terms and simplifying. To prove the second identity, we compute at a fixed point q∈ M^n and take smooth sections such that the derivatives between S, T, U, v, and w vanish at q, that is, (∇_ST)(q)=0, (∇̃_Sv)_TM⊕ L(q)=0, and so on. Denote by B the left-hand side of the second identity, and notice that B =∑ S(R_ϕ(T,U,v,w)) =∑[⟨(∇_Sϕ)(T,w),ϕ(U,v)⟩+⟨ϕ(T,w),(∇_Sϕ)(U,v)⟩]-∑[⟨(∇_Sϕ)(T,v),ϕ(U,w)⟩+⟨ϕ(T,v),(∇_Sϕ)(U,w)⟩]. Rearranging the terms of both sums, we get B=∑⟨(∇_Sϕ)(T,w)-(∇_Tϕ)(S,w),ϕ(U,v)⟩+∑⟨ϕ(T,w),(∇_Sϕ)(U,v)-(∇_Uϕ)(S,v)⟩, which is zero since ϕ satisfies the Codazzi equation. We denote by Γ_ϕ^l and Γ_ϕ^r the left and right nullities of R_ϕ, that is, Γ_ϕ^l:={X∈ TM | R_ϕ(X,TM,TM⊕ L,TM⊕ L)=0}⊆ TM, and Γ_ϕ^r:={v∈ TM⊕ L | R_ϕ(TM,TM,TM⊕ L,v)=0}⊆ TM⊕ L. Certainly, Δ_ϕ^l⊆Γ_ϕ^l and Δ_ϕ^r⊆Γ_ϕ^r. The first Bianchi identity shows that Γ_ϕ^l⊆Γ_ϕ^r∩ TM. Moreover, the second one implies that Γ_ϕ^l⊆ TM is integrable and (∇̃_Γ_ϕ^lΓ_ϕ^r)_TM⊕ L⊆Γ_ϕ^r. In particular, ∇̃_Δ_ϕ^lΓ_ϕ^r⊆Γ_ϕ^r. Consider the vector bundle Λ:=Δ_ϕ^r∩(Δ_ϕ^l)^⊥⊆ TM⊕ L, and suppose that rank(Λ)=ℓ=rank(L). The ruled extension G:Λ→ℝ^n+p of g is given by G(ξ_q)=g(q)+ξ_q, ∀ q∈ M^n, ∀ξ_q∈Λ_q. We restrict G to a neighborhood N^n+ℓ of the zero section ĝ:M^n→ N^n+ℓ⊆Λ so that G is an immersion, and we endow N^n+ℓ with the metric induced by G. Assume that Δ_ϕ^l=Δ_ϕ^r∩ TM and rank(Λ)=ℓ=rank(L). Then Δ_ϕ^r is the nullity of G, that is, Δ_G=Δ_ϕ^r up to a parallel identification along Δ_ϕ^l. Similarly, the nullity of N^n+ℓ is given by Γ_ϕ^r.
First, G is Δ_ϕ^r-ruled since ∇̃_Δ_ϕ^lΔ_ϕ^r⊆Δ_ϕ^r. Take a section ξ of N^n+ℓ⊆Λ and Y∈ TM; then G_*(ξ_*Y)=g_*Y+∇̃_Yξ∈ TM⊕ L=G_*(TN). Notice that TM⊕ L is parallel along Δ_ϕ^r since Δ_ϕ^l=Δ_ϕ^r∩ TM, so Λ⊆Δ_G. As TN≅ TM⊕Λ, to compute the second fundamental form of G it is enough to understand α^G|_TM×(TM⊕ L). If X∈ TM then ∇̃_X(G_*(ξ_*Y))=g_*∇_XY+α(X,Y)+∇̃_X∇̃_Yξ, so α^G(X,ξ_*Y)=(∇̃_X(G_*(ξ_*Y)))_L^⊥=(α(X,Y))_L^⊥+(∇̃_X∇̃_Yξ)_L^⊥=ϕ(X,Y)+ϕ(X,∇̃_Yξ)=ϕ(X,ξ_*Y), up to parallel identifications. This proves that Δ_ϕ^r=Δ_G. Finally, the Gauss equation shows that the curvature tensor R_N of N^n+ℓ is given by R_N(X,Y,v,w)=⟨α^G(X,w),α^G(Y,v)⟩-⟨α^G(X,v),α^G(Y,w)⟩=R_ϕ(X,Y,v,w), ∀ X,Y∈ TM, ∀ v,w∈ TM⊕ L, which shows that Γ_ϕ^r is the nullity of N^n+ℓ, since the remaining values of R_N involve terms of relative nullity. We can give a weaker version of the last proposition for 0≤rank(Λ)<rank(L). In that case, there is an orthogonal decomposition T_G^⊥N=ℒ⊕ E such that rank(ℒ)=rank(L)-rank(Λ), G is Δ_ϕ^r-ruled, and this distribution coincides with the nullity of the E-component of α^G. § CHERN-KUIPER'S INEQUALITIES In this section, we describe the basic properties of the submanifolds g:M^n→ℝ^n+p whose relative nullity Δ_g does not coincide with the intrinsic nullity Γ. In the following subsections, we analyze the cases ν_g=μ-p, ν_g=μ-p+1, and ν_g=μ-p+2, respectively. Let g:M^n→ℝ^n+p be a submanifold with non-trivial intrinsic nullity Γ≠0. Call α its second fundamental form and Δ_g its relative nullity. The Gauss equation implies that Δ_g⊆Γ and that the bilinear tensor β:=α|_TM×Γ is flat. Let Δ_β be the (left) nullity of β. The flatness of β implies that α(Y,X)∈𝒮(β)^⊥, ∀ Y∈Δ_β, ∀ X∈ TM. So in particular α(Y,X)∈𝒮(β)∩𝒮(β)^⊥=0, ∀ Y∈Δ_β∩Γ, ∀ X∈ TM, which shows that Δ_g=Δ_β∩Γ. Then we have the following dimension relation: ν_g+dim(Δ_β+Γ)=dim(Δ_β)+μ. Notice that Δ_β⊆ TM is an integrable distribution. Indeed, the Codazzi equation for T_1,T_2∈Δ_β gives α([T_1,T_2],Z)=α(T_1,∇_T_2Z)-α(T_2,∇_T_1Z), ∀ Z∈Γ, but the left-hand side belongs to 𝒮(β) and the right-hand side to 𝒮(β)^⊥ by the relation α(Δ_β,TM)⊆𝒮(β)^⊥ above, so [T_1,T_2]∈Δ_β. Let us recall the Chern-Kuiper inequalities and provide a quick proof. Let g:M^n→ℝ^n+p be a submanifold; then ν_g≤μ≤ν_g+p holds. As Δ_g⊆Γ, we get ν_g≤μ. Take Z_0∈Re(β)⊆Γ a (right) regular element of β; then, by Moore's lemma and the dimension relation above, we get ν_g+n≥ν_g+dim(Δ_β+Γ)=dim(Δ_β)+μ=n-dim(Im(β^Z_0))+μ≥ n-p+μ, which proves the second inequality. Before analyzing the equality cases of the Chern-Kuiper inequalities, we present a result that gives bounds for the rank of 𝒮(β) under the hypothesis ν_g≤ n-p-1. Let g:M^n→ℝ^n+p be a submanifold with ν_g≤ n-p-1. Then μ-ν_g≤dim(𝒮(β))≤ p-1. The first inequality comes from the dimension relation and Moore's lemma, since n+ν_g≥dim(Δ_β+Γ)+ν_g=dim(Δ_β)+μ≥ n-dim(𝒮(β))+μ. On the other hand, suppose by contradiction that 𝒮(β)=T^⊥_gM. Then, by the relation above, we have Δ_β=Δ_g. However, in this case Moore's lemma implies that ν_g=dim(Δ_β)≥ n-dim(𝒮(β))=n-p, which is absurd. §.§ The case μ=ν_g+p In this subsection, we analyze the maximal case of the Chern-Kuiper inequalities. We also describe the technique that will be used for the following cases. The next result shows that only flat submanifolds attain the second inequality. Let g:M^n→ℝ^n+p be a submanifold with μ=ν_g+p. Then M^n is flat; in particular μ=ν_g+p=n. In this case we must have equalities throughout the chain of inequalities in the proof above, hence Δ_β+Γ=TM and Im(β^Z_0)=𝒮(β)=T^⊥_gM.
Then the relation α(Δ_β,TM)⊆𝒮(β)^⊥ implies that Δ_β=Δ_g⊆Γ, and thus Γ=Δ_β+Γ=TM. There are natural parametrizations for the flat submanifolds attaining this extremal equality; see <cit.> for p=1 and <cit.> for p=2. This is generalized in <cit.> for any p≤ n. The Chern-Kuiper inequalities and the last proposition characterize the hypersurfaces with Δ_g≠Γ by means of the Gauss parametrization. Hence, we assume from now on that p≥ 2. There is a natural way to produce submanifolds g:M^n→ℝ^n+p with Δ_g≠Γ using compositions. Consider a submanifold ĝ:M^n→ℝ^n+ℓ with Γ=Δ_ĝ≠0, ℓ<p, and let G:U⊆ℝ^n+ℓ→ℝ^n+p be an isometric immersion of an open subset U of ℝ^n+ℓ with ĝ(M^n)⊆ U. Then g:=G∘ĝ generically has smaller nullity than ĝ, so Δ_g≠Γ. Conversely, we will use the following strategy to prove that such a g must be a composition. Naively, 𝒮(β) should be (or at least be contained in) the normal bundle of U, and so L:=𝒮(β)^⊥⊆ T^⊥_gM is a candidate to be T^⊥_ĝM. Hence, we can use the techniques of the subsection on ruled extensions. Namely, we will study the properties of the tensor ϕ=ϕ_L associated with L, and then use the ruled-extension proposition to obtain the desired composition. §.§ The case μ=ν_g+p-1 This subsection is dedicated to analyzing the following case of the Chern-Kuiper inequalities. We will prove a more general statement: we characterize the submanifolds such that the first inequality of the bound μ-ν_g≤dim(𝒮(β))≤ p-1 is attained; they are all flat compositions. Suppose that g:M^n→ℝ^n+p is a submanifold with μ=ν_g+p-1 and p≥2. The bound just mentioned implies that dim(𝒮(β))=μ-ν_g=p-1. In particular, we are in a situation where the lower bound is attained. The following result analyzes this equality in complete generality; the composition theorem of the introduction is a direct consequence of it. Consider a submanifold g:M^n→ℝ^n+p with Δ_g≠Γ. Let β=α|_TM×Γ and suppose that p-ℓ:=dim(𝒮(β))=μ-ν_g<p. Then g=G∘ĝ is a composition, where G:N^n+ℓ→ℝ^n+p is a flat submanifold and ĝ:M^n→ N^n+ℓ is an isometric embedding. Moreover, Δ_ĝ=Γ and ν_G=(n+1)-(p-ℓ). Let Z_0∈Γ be a (right) regular element of β. Moore's lemma and the dimension relation imply that dim(Δ_β+Γ)=dim(Δ_β)+dim(Γ)-dim(Δ_g)= n-dim(Im(β^Z_0))+μ-ν_g≥ n, which shows that α(Z_0,TM)=L^⊥ and Δ_β+Γ=TM. In particular, 𝒮(β)=𝒮(α|_Γ×Γ). Let L:=𝒮(β)^⊥⊆ T^⊥_gM and consider the tensor ϕ=ϕ_L as above. We will use the ruled-extension proposition to prove that g is such a composition. Hence, we need to show that ϕ is flat, that dim(Δ_ϕ^r)=dim(Δ_ϕ^l)+ℓ=n+ℓ-(p-ℓ), and that Δ_β=Δ_ϕ^l=Δ_ϕ^r∩ TM. Notice that ϕ(Δ_β,TM)=0 by the relation α(Δ_β,TM)⊆ L, and so Δ_β=Δ_ϕ^r∩ TM. Moreover, if Y∈Δ_β then the Codazzi equation for ξ∈ L and Z_1,Z_2∈Γ gives ⟨ϕ(Y,ξ),α(Z_1,Z_2)⟩=⟨∇^⊥_Yξ,α(Z_1,Z_2)⟩=-⟨ξ,(∇_Y^⊥α)(Z_1,Z_2)⟩=⟨ξ,α(Y,∇_Z_1Z_2)⟩=0, ∀ Z_1,Z_2∈Γ, since Γ⊆ TM is totally geodesic. Hence, ϕ(Y,ξ)=0 since 𝒮(α|_Γ×Γ)=𝒮(β)=L^⊥, and so Δ_β=Δ_ϕ^l. This proves the claimed equality Δ_β=Δ_ϕ^l=Δ_ϕ^r∩ TM. As TM=Δ_β+Γ and this claim holds, the flatness of ϕ is equivalent to the flatness of ϕ|_Γ×(Γ⊕ L). Notice that ϕ|_Γ×Γ=α|_Γ×Γ is flat by the Gauss equation. On the other hand, if Z_1,Z_2,Z_3∈Γ and ξ∈ L then ⟨ϕ(Z_1,ξ),ϕ(Z_2,Z_3)⟩=⟨∇^⊥_Z_1ξ,α(Z_2,Z_3)⟩=-⟨ξ,(∇^⊥_Z_1α)(Z_2,Z_3)⟩, which is symmetric in Z_1 and Z_2 by the Codazzi equation. Hence, to prove flatness it is enough to show that ⟨ϕ(T_1,ξ_1),ϕ(T_2,ξ_2)⟩=⟨ϕ(T_1,ξ_2),ϕ(T_2,ξ_1)⟩, ∀ T_1,T_2∈Γ, ∀ξ_1,ξ_2∈ L. Notice first that the nullity of α|_Γ×Γ is Δ_β∩Γ=Δ_g. Thus, α|_Γ×Γ is completely described by Theorem 2 of <cit.>.
Namely, there are vectors Z_1,…,Z_p-ℓ∈Γ∩Δ_g^⊥ such that α(Z_i,Z_j)=0 for i≠ j and the set {ρ_i:=α(Z_i,Z_i)}_i=1^p-ℓ is an orthonormal basis of L^⊥. Given ξ∈ L, the Codazzi equation implies that ⟨ϕ(Z_i,ξ),ρ_j⟩=-⟨ξ,(∇^⊥_Z_iα)(Z_j,Z_j)⟩=-⟨ξ,∇^⊥_Z_j(α(Z_i,Z_j))-α(∇_Z_jZ_i,Z_j)-α(Z_i,∇_Z_jZ_j)⟩=0, ∀ i≠ j. Then ϕ(Z_i,ξ)=λ_i(ξ)ρ_i for some 1-forms λ_i:L→ℝ. The required symmetry now holds since {Z_1,…,Z_p-ℓ} spans Γ modulo Δ_g and ⟨ϕ(Z_i,ξ_1),ϕ(Z_j,ξ_2)⟩=δ_ijλ_i(ξ_1)λ_j(ξ_2), ∀ i,j, ∀ξ_1,ξ_2∈ L. Finally, by Moore's lemma, we have for a regular element Z_0∈Γ of β that dim(Δ_ϕ^r)=n+ℓ-dim(Im(ϕ^Z_0))=(n+ℓ)-(p-ℓ)=n-dim(Im(β^Z_0))+ℓ=dim(Δ_β)+ℓ=dim(Δ_ϕ^l)+ℓ. The result now follows from the ruled-extension proposition. Notice that the second fundamental form of ĝ is the orthogonal projection of α onto L, but as α(Γ,TM)⊆ L^⊥, we get Γ=Δ_ĝ. We can describe locally all the submanifolds g:M^n→ℝ^n+2 with Δ_g≠Γ. Let g:M^n→ℝ^n+2 be a submanifold with Γ≠Δ_g. Then, on each connected component U of an open dense subset of M^n, we have one of the following possibilities: * μ=ν_g+1 and g|_U=j∘ĝ is a composition, where ĝ:U→ V⊆ℝ^n+1 and j:V→ℝ^n+2 are isometric immersions with Γ=Δ_ĝ; * μ=ν_g+2 and U is flat. By the flatness proposition of Section 3.1 and the composition theorem above, it only remains to analyze the case μ=n=ν_g+1; however, this case is a direct consequence of the composition theorem. Each case of this corollary is naturally parametrizable. For (i) we use the Gauss parametrization described in <cit.>, and Corollary 18 of <cit.> describes the second case. §.§ The case μ=ν_g+p-2 In this final subsection, we discuss the next case of the Chern-Kuiper inequalities. For this, we prove a theorem analyzing in generality the case ℓ=1. This result and the composition theorem of Section 3.2 imply the theorem for the case ν_g=μ-p+2 stated in the introduction. The composition theorem describes the submanifolds that attain the first inequality of the bound μ-ν_g≤dim(𝒮(β))≤ p-1; we now analyze when the second one is attained. Namely, let us consider a submanifold g:M^n→ℝ^n+p with Δ_g≠Γ and suppose that L:=𝒮(β)^⊥ has rank ℓ=1. As before, consider the tensor ϕ=ϕ_L. We begin with the next result. If L has rank 1 and ν_g≤ n-p-1, then Δ_β=Δ_ϕ^l=Δ_ϕ^r∩ TM. Furthermore, if α(Δ_β,Δ_β)≠ 0 then Γ⊆Γ_ϕ^l. In the second Bianchi identity, take S=Z∈Γ, T=d_1∈Δ_β, v=d_2∈Δ_β, and U,w∈ TM to obtain 0=R_ϕ(Z,d_1,α(U,d_2),w)+R_ϕ(U,Z,α(d_1,d_2),w), ∀ Z∈Γ, ∀ d_1,d_2∈Δ_β, ∀ U,w∈ TM. In the last equation, fix d_1 and choose 0≠ d_2∈Δ_β∩Δ_g^⊥ such that α̂(d_1,d_2)=0. This is possible since ℓ=1 and dim(Δ_β∩Δ_g^⊥)=dim(Δ_β)-dim(Δ_g)≥ n-(p-1)-(n-p-1)=2, where the last inequality comes from Moore's lemma. Let ρ∈ L be a fixed unit generator of L and take U∈ TM such that ρ=α̂(U,d_2) to obtain 0=R_ϕ(Z,d_1,ρ,w)=⟨ϕ(Z,w),ϕ(d_1,ρ)⟩, ∀ Z∈Γ, ∀ d_1∈Δ_β, ∀ w∈ TM, but the image ϕ(Γ,TM)=β(TM,Γ) spans L^⊥, so ϕ(d_1,ρ)=0 for any d_1∈Δ_β. Thus Δ_β⊆Δ_ϕ^l⊆Δ_ϕ^r∩ TM⊆Δ_β. Finally, suppose that α(Δ_β,Δ_β)≠0. Then α(Δ_β,Δ_β)=L by the relation α(Δ_β,TM)⊆ L. Take d_1,d_2∈Δ_β such that α(d_1,d_2)=ρ, and use them in the displayed identity above to obtain 0=R_ϕ(Z,U,ρ,w), ∀ Z∈Γ, ∀ U,w∈ TM, which proves that Γ⊆Γ_ϕ^l, since R_ϕ(Γ,TM,TM,TM)=0 by the Gauss equation. The lemma above holds under the weaker assumption dim(Δ_β∩Δ_g^⊥)≥2 instead of ν_g≤ n-p-1. Let g:M^n→ℝ^n+p be an isometric immersion with μ≠ν_g≤ n-p-1. Suppose that L:=𝒮(α|_TM×Γ)^⊥⊆ T^⊥_gM has rank 1. Then, on each connected component U of an open dense subset of M^n where μ, ν_g, and k=dim(α(Δ_β,Δ_β)) are constant, we have the following possibilities: * k=1 and g|_U is a composition of a ruled extension G:N^n+1→ℝ^n+p and an isometric embedding ĝ:U⊆ M^n→ N^n+1.
Moreover, Δ_ĝ=Γ, (n+1)-(p-1)≤ν_G≤ (n+1)-(μ-ν_g), and ĝ_*(Γ)⊆Γ̂, where Γ̂⊆ TN is the nullity of N^n+1 and satisfies dim(Γ̂)≥μ-ν_g+ν_G; * k=0 and g is Δ_β-ruled. Moreover, the rank of the ruling is at least (n-p+1). We want to use the ruled-extension proposition to prove this result. By the previous lemma we know that Δ_β=Δ_ϕ^l=Δ_ϕ^r∩ TM. Suppose first that k=1, so that Γ⊆Γ_ϕ^l by the previous lemma. This implies that the tensor β̂:(TM⊕ L)×Γ→ L^⊥ given by β̂(v,Z)=ϕ(Z,v) is a flat extension of β. Notice that the left nullity of β̂ coincides with Δ_ϕ^r. Indeed, let us verify the non-trivial inclusion. Take v_0∈ TM⊕ L such that β̂(v_0,Z)=0 for all Z∈Γ. Then, as Γ⊆Γ_ϕ^l, we have 0=R_ϕ(Z,X,v_0,w)=⟨ϕ(X,v_0),ϕ(Z,w)⟩, ∀ Z∈Γ, ∀ X,w∈ TM, but ϕ(Γ,TM) spans L^⊥, and so v_0∈Δ_ϕ^r. In particular, Moore's lemma applied to β and β̂ shows that the dimensions of Δ_ϕ^r and Δ_ϕ^l=Δ_β differ by at most 1. However, if Δ_β=Δ_ϕ^l=Δ_ϕ^r then g would be Δ_β-ruled, which is absurd since k=1. The ruled-extension proposition then shows that g has a ruled extension G:N^n+1→ℝ^n+p. Moreover, Moore's lemma implies that the nullity of G satisfies ν_G=dim(Δ_ϕ^r)=dim(Δ_β)+1≥ n+1-(p-1). On the other hand, using the dimension relation we get ν_G=dim(Δ_β)+1=dim(Δ_β+Γ)+ν_g+1-μ≤ n+1-(μ-ν_g). The bound on the nullity of N^n+1 follows from the dimension relation and Γ⊆Γ_ϕ^l⊆Γ_ϕ^r, since dim(Γ̂)=dim(Γ_ϕ^r)≥dim(Δ_ϕ^r+Γ)=1+dim(Δ_β+Γ)=1+dim(Δ_β)+μ-ν_g=ν_G+μ-ν_g. Finally, suppose that k=0, that is, α(Δ_β,Δ_β)=0. The Codazzi equation for ϕ applied to d_1,d_2∈Δ_β gives 0=(∇_Xϕ)(d_1,d_2)-(∇_d_1ϕ)(X,d_2)=ϕ(X,∇_d_1d_2), ∀ X∈ TM, ∀ d_1,d_2∈Δ_β, which proves that Δ_β is totally geodesic since Δ_ϕ^r∩ TM=Δ_β. Hence, g is Δ_β-ruled, and Moore's lemma gives the desired bound on the rank of the rulings. We now prove the theorem for the case ν_g=μ-p+2 stated in the introduction. By the bound μ-ν_g≤dim(𝒮(β))≤ p-1 we know that ℓ∈{1,2}. The case ℓ=2 follows from the composition theorem of Section 3.2. Assume now that ℓ=1, so we can apply the theorem above. Then, if k=1, N^n+1 must be flat, since dim(Γ̂)≥ p-2+ν_G≥ p-2+(n+1)-(p-1)=n, but dim(Γ̂)=n is not possible by the symmetries of the curvature tensor, so Γ̂=TN. It remains to exclude the second possibility of that result, that is, k=0. Suppose, by contradiction, that ℓ=1 and g is Δ_β-ruled on an open subset of M^n. Notice that ϕ|_TM×Γ=β is flat and Δ_β=Δ_ϕ^r∩ TM, so ϕ|_TM×(Δ_β+Γ) is flat. However, by the dimension relation and Moore's lemma we know that dim(Δ_β+Γ)=dim(Δ_β)+μ-ν_g≥ n-(p-1)+p-2=n-1, so ϕ|_TM× TM must be flat. Then, fixing a unit generator ρ of L, the shape operator A=A_ρ satisfies the Gauss equation. However, as g is Δ_β-ruled and AΓ=0, we have ⟨A(Δ_β+Γ),Δ_β+Γ⟩=0, which implies that μ=dim(ker A)≥ n-2. This is a contradiction, since μ=ν_g+p-2≤ (n-p-1)+(p-2)= n-3.
This concept seems to be related to our work, but we did not deal with deformations here. For this reason, it may be more appropriate to adopt a notion of honesty that depends only on the submanifold itself. IMPA – Estrada Dona Castorina, 110 22460-320, Rio de Janeiro, Brazil E-mail address: [email protected]
http://arxiv.org/abs/2307.07614v1
20230714202150
Towards Generalizable Detection of Urgency of Discussion Forum Posts
[ "Valdemar Švábenský", "Ryan S. Baker", "Andrés Zambrano", "Yishan Zou", "Stefan Slater" ]
cs.LG
[ "cs.LG", "cs.CL" ]
Towards Generalizable Detection of Urgency of Discussion Forum Posts Valdemar Švábenský University of Pennsylvania [email protected] Ryan S. Baker University of Pennsylvania [email protected] Andrés Zambrano University of Pennsylvania [email protected] Yishan Zou University of Pennsylvania [email protected] Stefan Slater University of Pennsylvania [email protected] August 12, 2023 ============================================================================================================== Students who take an online course, such as a MOOC, use the course's discussion forum to ask questions or reach out to instructors when encountering an issue. However, reading and responding to students' questions is difficult to scale because of the time needed to consider each message. As a result, critical issues may be left unresolved, and students may lose the motivation to continue in the course. To help address this problem, we build predictive models that automatically determine the urgency of each forum post, so that these posts can be brought to instructors' attention. This paper goes beyond previous work by predicting not just a binary decision cut-off but a post's level of urgency on a 7-point scale. First, we train and cross-validate several models on an original data set of 3,503 posts from MOOCs at University of Pennsylvania. Second, to determine the generalizability of our models, we test their performance on a separate, previously published data set of 29,604 posts from MOOCs at Stanford University. While the previous work on post urgency used only one data set, we evaluated the prediction across different data sets and courses. The best-performing model was a support vector regressor trained on the Universal Sentence Encoder embeddings of the posts, achieving an RMSE of 1.1 on the training set and 1.4 on the test set. Understanding the urgency of forum posts enables instructors to focus their time more effectively and, as a result, better support student learning. § INTRODUCTION In computer-supported learning environments, students often ask questions via email, chat, forum, or other communication media. Responding to these questions is critical for learners' success since students who do not receive a timely reply may struggle to achieve their learning goals. In a small-scale qualitative study of online learning <cit.>, students who received delayed responses to their questions from the instructor reported lower satisfaction with the course. Another study showed that students who received instructor support through personalized emails performed better on both immediate quizzes and delayed assessments <cit.>.
Massive Open Online Courses (MOOCs) are a prevalent form of computer-supported learning. MOOCs enable many students worldwide to learn at a low cost and in a self-paced environment. However, many factors cause students to drop out of MOOCs, including psychological, social, and personal reasons, as well as time, hidden costs, and course characteristics <cit.>. A MOOC’s discussion forum is central to decreasing the risk of student drop-out since it promotes learner engagement with the course. Students use the forum to ask questions, initiate discussions, report problems or errors in the learning materials, interact with peers, or otherwise communicate with the instructor. Andres et al. <cit.> reviewed studies on MOOC completion and discovered that certain behaviors, such as spending above-average time in the forum or posting more often than average, are associated with a higher likelihood of completing the MOOC. Similarly, Crues et al. <cit.> showed that students who read or write forum posts are more likely to persist in the MOOC. At the same time, instructor participation in the forum and interaction with students promotes engagement with the course <cit.>. For the reasons above, the timely response of instructors to students’ posts is important. In a study with 89 students, 73 of them preferred if the instructor responded to discussion forum posts within one or two days <cit.>. However, this is not always feasible. Students’ posts that require an instructor’s response may be unintentionally overlooked due to MOOCs’ scale. Instructors can feel overwhelmed by a large number of posts and often lack time to respond quickly enough or even at all. As a result, issues that students describe in the forum are left unsolved <cit.>, leaving the learners discouraged and frustrated. §.§ Problem Statement Since MOOCs tend to have far more students than other computer-supported learning environments, identifying urgent student questions is crucial. We define urgency in discussion forum posts as the degree of how quickly the instructor’s response to the post is needed. Urgency is expressed on an ordinal scale from 1 (not urgent at all) to 7 (extremely urgent). This scale is adopted from the Stanford MOOCPosts data set <cit.>, arguably the most widely used publicly available data set of MOOC discussion forum posts. It contains 29,604 anonymized, pre-coded posts that have been employed in numerous past studies (see Section 2). Educational data mining and natural language processing techniques may allow us to automatically categorize forum posts based on their urgency. Our goal is to build models that will perform such categorizations to determine whether a timely response to a post would be valuable. Ultimately, we aim to help instructors decide how to allocate their time where it is needed the most. Automatically determining the urgency of forum posts is a challenging research problem. Since posts highly vary in content – the students can type almost anything – the data may contain a lot of noise that is not indicative of urgency. In addition, it is difficult to generalize the trained models to other contexts because of linguistic differences caused by different variants of English or by non-native speakers of English, as well as terms that are highly specific to a course topic. §.§ Contributions of This Research We collected and labeled an original data set of 3,503 forum posts, which we used to train and cross-validate several classification and regression models. 
From the technical perspective, we tested two different families of features and compared the performance of the regressors, multi-class classifiers, and binary classifiers. Subsequently, we tested the generalizability of the results by using the independent Stanford MOOCPosts data set <cit.> of 29,604 forum posts as our holdout test set. § RELATED WORK Almatrafi et al. <cit.> used the Stanford MOOCPosts data set to extract three families of features: Linguistic Inquiry and Word Count (LIWC) attributes, term frequency, and post metadata. They represented the problem of urgency prediction as binary classification, considering the post not urgent if it had a label below 4, and urgent for 4 and above. The study evaluated five classification approaches: Naive Bayes, Logistic Regression, Random Forest, AdaBoost, and Support Vector Machines. The best-performing model was AdaBoost, able to classify the forum post urgency with the weighted F1-score of 0.88. Sha et al. <cit.> systematically surveyed approaches for classifying MOOC forum posts. They discovered that previous research used two types of features: textual and metadata. Textual features consist of n-grams, post length, term frequency-inverse document frequency (TF-IDF), and others. Metadata features include the number of views of the post, the number of votes, and creation time. Furthermore, the survey compared six algorithms used to construct urgency models from these features, building on the methods by Almatrafi et al. <cit.>. Four traditional machine learning (ML) algorithms included Naive Bayes, Logistic Regression, Random Forest, and Support Vector Machines. The best results were yielded by combining textual and metadata features and training a Random Forest model (AUC = 0.89, F1 = 0.89). Two deep learning algorithms examined in the survey were CNN-LSTM and Bi-LSTM. Using the same metrics, these models performed even better than the traditional ones. However, in their follow-up work, Sha et al. <cit.> concluded that deep learning does not necessarily outperform traditional ML approaches overall. The best urgency classifier, again a Random Forest model, achieved an F1-score of 0.90 (AUC was not reported). Several studies employed the Stanford MOOCPosts data set to train a neural network (NN) for identifying urgent posts. Capuano and Caballé <cit.> created a 2-layer feed-forward NN on the Bag of Words representation of the posts, reaching an F1-score of 0.80. Alrajhi et al. <cit.> used a deep learning model that combined text data with metadata about posts. They reported an F1-score of 0.95 for predicting non-urgent posts (defined by labels 1–4) and 0.74 for predicting urgent posts (label > 4). Yu et al. <cit.> also transformed the problem into binary classification. They compared three models, the best being a recurrent NN achieving an F1-score of 0.93 on non-urgent posts and 0.70 on urgent posts. More advanced approaches include those by Guo et al. <cit.>, who proposed an attention-based character-word hybrid NN with semantic and structural information. They achieved much higher F1-scores overall, ranging from 0.88 to 0.92. Khodeir <cit.> represented the Stanford MOOCPosts data set using BERT embeddings and trained gated recurrent NNs to predict the posts’ urgency. The best model achieved weighted F1-scores from 0.90 to 0.92. Previous work used the Stanford MOOCPosts data set to train the models but did not evaluate them on other data. Therefore, the models may overfit to that data set but be ineffective in other contexts. 
By training models on our own data and testing it on the Stanford MOOCPosts data set, we provide a new perspective within the current body of work in post urgency prediction. We aim to achieve a more generalizable modeling of forum posts’ urgency and provide valuable information for instructors who support large numbers of learners. In doing so, we also build upon work by Wise et al. <cit.>, who researched techniques for determining which MOOC forum posts are related content-wise. They used the Bag of Words representation of posts and extracted unigrams and bigrams as features. Using a Logistic Regression model, they reached an accuracy between 0.73 and 0.85, depending on the course topic. We use similar methods but for a different purpose. In designing responses to urgent posts, it is valuable to consider the work by Ntourmas et al. <cit.>, who analyzed how teaching assistants respond to students’ forum posts in two MOOCs. The researchers combined content, linguistic, and social network analysis to discover that teaching assistants mostly provide direct answers. The researchers suggested that this approach does not adequately promote problem-solving. Instead, they argued that more indirect and guiding approaches could be helpful. § RESEARCH METHODS This section describes the data and approaches used to train and evaluate predictive models of forum post urgency. §.§ Data Collection and Properties We collected posts from students who participated in nine different MOOCs at the University of Pennsylvania (UPenn) from the years 2012 to 2015. The nine MOOCs focused on a broad range of domains (in alphabetical order): accounting, calculus, design, gamification, global trends, modern poetry, mythology, probability, and vaccines. This breadth of covered topics enables us to prevent bias towards certain course topics and support generalization across courses. To construct the research data set, we started by randomly sampling 500 forum posts for each of the nine courses. Then, we removed posts that: * were in a language other than English * contained only special symbols and characters * contained only math formulas * contained only website links As a result, we ended up with 3,503 forum posts from 2,882 students. This data set included a similar number of posts from each course (between 379 and 399 per course), adding up to the total of 3,503. Each data point consists of three fields: a unique numerical student ID, the timestamp of the forum post submission, and the post text. All remaining post texts are in the English language, though not all students who wrote them were native speakers of English. The posts contain typos, grammatical errors, and so on, which we did not correct. §.§ Data Anonymization To preserve student privacy, two human readers manually redacted personally identifiable information in the posts. The removed pieces of text included names of people or places, contact details, and any other information that could be used to determine who a specific poster was. Each of the two readers processed roughly half of the post texts from each of the nine courses (195 posts per course per reader on average). The split was selected randomly. After this anonymization procedure was completed, the data were provided to the research team. To support the replicability of our results, the full data set used in this research can be found at <https://github.com/pcla-code/forum-posts-urgency>. 
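The sketch below reconstructs filtering rules of the kind described in Section 3.1; it is our illustration, not the authors' published code (the regular expressions and the langdetect-style language check are our own choices, and formula-only posts are approximated by the no-letters rule).

```python
import re
from langdetect import detect  # assumption: a langdetect-style detector

URL_ONLY = re.compile(r"^(\s*https?://\S+\s*)+$")  # posts that are only links
NO_LETTERS = re.compile(r"^[^A-Za-z]+$")           # only symbols/digits/formulas

def keep_post(text: str) -> bool:
    """Return True if a post survives the four removal rules."""
    if not text.strip():
        return False
    if URL_ONLY.match(text) or NO_LETTERS.match(text):
        return False
    try:
        return detect(text) == "en"  # drop posts in languages other than English
    except Exception:
        return False  # undetectable content is treated as removable
```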
Since we use only de-identified, retrospective data, and the numerical student IDs cannot be traced back to the students’ identity, this research study received a waiver from the university’s institutional review board. §.§ Data Labeling Three human coders (distinct from those individuals who anonymized the data) manually and independently labeled the 3,503 anonymized post texts. To ensure the approach was unified, they completed coder training and followed a predefined protocol that specified how to assign an urgency label to each post. The protocol is available alongside our research data at <https://github.com/pcla-code/forum-posts-urgency>. The three coders initially practiced on a completely separate data set of 500 labeled posts with the urgency label hidden. After each coded response, they revealed the correct label and consulted an explanation if they were off by more than 1 point on the scale. At the end of the training, we computed the inter-rater reliability of each coder within the practice set. Specifically, we calculated continuous (i.e., weighted) Cohen’s Kappa using linear weighting. The three coders achieved the Kappa of 0.57, 0.49, and 0.56, respectively. We note that the weighted values are typically lower than regular Kappa. For instance, weighted Kappa values are lower when there is a relatively large number of categories <cit.>, as is seen in our data sets. They are also lower in cases where, for example, one coder is generally stricter than another (i.e., different means by coder) even though their ordering of cases is identical <cit.>. When the coders felt confident in coding accurately, the study coordinator sent them 20 different posts from the separate data set with the urgency label removed. If they coded them accurately, they received a batch of 50 original posts (out of our 3,503 collected) for actual coding. In case a coder was unsure, discrepancies were resolved by discussion. As stated in Section 1.1, we use the term urgency to indicate how fast an instructor should respond to the post. For example, if a post is very urgent, then the instructor or teaching assistant (TA) should respond to it as soon as possible. If a post is not urgent, then the instructor and TA might not have to respond to the post at all. Degrees of urgency were mapped to ordinal scores proposed by Agrawal and Paepcke <cit.> (and later adopted by related work <cit.>) as follows: * 1: No reason to read the post * 2: Not actionable, read if time * 3: Not actionable, may be interesting * 4: Neutral, respond if spare time * 5: Somewhat urgent, good idea to reply, a teaching assistant might suffice * 6: Very urgent: good idea for the instructor to reply * 7: Extremely urgent: instructor definitely needs to reply Example for label 1: “Hi my name is [REDACTED] and I work in the healthcare industry, looking forward to this course!“ Example for label 5: “When will the next quiz be released? I'd like to get a head start on it since I've got some extra time these days.” Example for label 7: “The website is down at the moment, [link] seems down and I'm not able to submit the Midterm. Still have the "Final Submit" button on the page, but it doesn't work. Are the servers congested?” Table 1 lists the frequencies of individual urgency labels in the training data across each of the nine courses, as well as their total count. We also detail the frequencies of urgency labels in our test set (see Section 3.6). 
As the table shows, the frequencies of the labels differ between the training and test set; thus, if our models perform well in this case, they are likely to be robust when predicting data with various distributions. §.§ Data Automated Pre-Processing Before training the models, we performed automated data cleaning and pre-processing that consisted of the following steps in this order: * Converting all text in the posts to lowercase. * Replacing all characters, except the letters of the English alphabet and numbers, with spaces. * Removing duplicate whitespace. * Removing common stopwords in the English language, such as articles and prepositions. * Stemming, that is, automatically reducing different grammatical forms of each word to its root form <cit.>. Each pre-processed post contained 51 words on average (stdev 76, min 1, max 1390). §.§ Model Training and Cross-Validation The problem of assigning a forum post into one of seven ordered categories corresponds to multi-class ordinal classification or regression (Section 3.5.1). In addition, we also converted the problem to binary classification (Section 3.5.2) to provide a closer comparison with related work. §.§.§ Multi-class Classification and Regression We hypothesized that regression algorithms would be more suitable for our use case because they can capture the order on the 1–7 scale, which categorical classifiers cannot achieve. We used a total of six classification and regression algorithms: * Random Forest (RF) classifier, * eXtreme Gradient Boosting (XGB), * Linear Regression (LR), * Ordinal Ridge Regression (ORR), * Support Vector Regression (SVR) with a Radial Basis Function (RBF) kernel, and * Neural Network (NN) regressor. We used Python 3.10 and standard implementations of the algorithms in the Scikit-learn module <cit.>, using TensorFlow <cit.> and Keras <cit.> for the neural networks. The Python code we wrote to train and evaluate the models is available at <https://github.com/pcla-code/forum-posts-urgency>. All algorithms had default hyperparameter values provided by Scikit-learn. The only exception was the neural network with the following settings discovered experimentally: * Input layer with 128 nodes, 0.85 dropout layer, and ReLU activation function, * One hidden layer with 128 nodes, 0.85 dropout layer, and ReLU activation function, * Output layer with 1 node and ReLU activation function. Each algorithm was evaluated on two families of features: one based on word counts (Bag of Words or TF-IDF representations of the forum post texts), the other based on Universal Sentence Encoder v4 (USE) <cit.> numerical feature embeddings of the forum post texts. During model training, we used 10-fold student-level cross-validation in each case. The metrics chosen to measure classification/regression performance were Root Mean Squared Error (RMSE) and Spearman ρ correlation between the predicted and actual values of urgency on the validation set. We chose Spearman instead of Pearson correlation because the urgency labels are ordinal data. The output of the regression algorithms was left as a decimal number, i.e., we did not round it to the nearest whole number. §.§.§ Binary Classification In addition, we trained separate models for binary classification. Following the precedent from the related work <cit.>, the urgency label was converted to 0 if it was originally between 1–4, and converted to 1 if it was originally larger than 4. We did not adopt the approach of Almatrafi et al. 
<cit.>, who considered a post urgent if it was labeled 4 or above, since based on the scale description defined by Agrawal and Paepcke <cit.> (see Section 3.3), we do not consider “Neutral” posts to be urgent. (When we tried doing this, it caused only a slight improvement in the model performance.) Then, we trained RF, XGB, and NN classifier models. The performance evaluation metrics were macro-averaged AUC ROC and weighted F1-score. §.§ Model Generalizability Evaluation To determine the generalizability of our models, we evaluated them on held-out folds of the training set, then tested them on the Stanford MOOCPosts data set. This data set is completely separate from the training and validation sets and should, therefore, indicate how well our models would perform in different courses and settings. The test set uses the 1–7 labels but with .5 steps, meaning that some posts can be labeled as 1.5 or 6.5, for example. We did not round these during model training to verify generalizability across both types of labels. However, when labeling our training set, we did not consider .5 labels since the coders felt it added too much granularity. Earlier work did not explicitly differentiate the .5 labels from the integers. § RESULTS AND DISCUSSION This section details the results from both families of models: one based on word counts and the other on Universal Sentence Encoder. Then, we compare our models with those from related literature. §.§ Models with Word Count Features These models used the Bag of Words or TF-IDF representations of the forum post texts. §.§.§ Multi-class Classification and Regression We tested the following combinations of settings and hyperparameters for the word count models on the training and cross-validation set: * Method of feature extraction. TF-IDF performed slightly better than Bag of Words. * Range of n-grams extracted from the data. We tried unigrams, bigrams, and a combination of the two. The best results were obtained when using unigrams only. Models based on bigrams only or those that combined unigrams and bigrams performed worse. In the 3,503 posts, we had 774 unigram and 226 bigram features. * Minimal/maximal allowed document frequency for each term. Here, the best-performing cut-off was to discard the bottom/top 1% of extreme document frequencies, so the ranges were set to 0.01 and 0.99, respectively. Using this approach made the algorithms run substantially faster, but given the extreme cut-offs, it did not appreciably change the values. Without setting the cut-offs, the training of some models took several hours. * Feature unitization. It either did not impact or slightly worsened the model performance in all cases, so we did not use it. Table 2 summarizes the performance of all models. Support vector regression performed best overall on the training and cross-validation set in terms of both metrics: RMSE and Spearman ρ correlation. It also outperformed the other approaches on the separate test set. Figure 1 shows the predictions of the best model on the test set. Most urgency labels are under-predicted, but they are still predicted in the increasing order of urgency, which demonstrates that the model is detecting the ranking. After SVR, other regressors followed, with neural networks being the second best. Overall, the classifier models performed more poorly than the regression models. We expected this result since the urgency classes are ordinal, and the categorical classifiers cannot capture their ordering. 
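As an illustration of this configuration, the sketch below wires the stated settings together in scikit-learn. It is a minimal reconstruction under the assumption that posts, labels, and students hold the pre-processed texts, the 1-7 urgency labels, and the student IDs; the authors' actual code is in the repository cited in Section 3.

```python
from scipy.stats import spearmanr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GroupKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

# TF-IDF over unigrams with the 1%/99% document-frequency cut-offs,
# feeding an RBF-kernel SVR with default hyperparameters.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 1), min_df=0.01, max_df=0.99),
    SVR(kernel="rbf"),
)

cv = GroupKFold(n_splits=10)  # student-level folds: no student in two folds
preds = cross_val_predict(model, posts, labels, cv=cv, groups=students)

rmse = mean_squared_error(labels, preds, squared=False)
rho, _ = spearmanr(labels, preds)  # Spearman, since the labels are ordinal
```

Note that the predictions stay as decimals, matching the unrounded regression outputs described above.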
§.§.§ Binary Classification Table 3 summarizes the performance of all models. The NN outperformed the remaining two classifiers, though the differences in AUC are more visible than for F1-score compared to XGBoost. Although the fit of RF and NN is non-deterministic, the results did not change substantially when we re-ran the model training multiple times. When considering the prediction of non-urgent posts only, all models achieved a very high F1-score between 0.9512 (NN) and 0.9589 (RF) on the training set, and 0.8924 (NN) to 0.8971 (XGBoost) on the test set. For the urgent posts only, the predictive power was much lower: between 0.1841 (RF) and 0.4168 (NN) on the training set, and 0.0025 (RF) to 0.2761 (NN) on the test set. Due to the imbalance in favor of the non-urgent class, experimenting with decision cut-offs lower than the default 50% visibly improved the RF and XGBoost models' AUC (up to 0.7771) but improved the F1-score only slightly. The best results were achieved for decision thresholds of 10 or 15%. §.§ Models with Feature Embeddings Using the Universal Sentence Encoder (USE) §.§.§ Multi-class Classification and Regression Table 4 summarizes the performance of all models. Again, SVR performed best on the training set, followed by NN. After that, other regressors and classifiers followed in the same order as with the word-count-based models. However, for the test set, while SVR still obtained the best ρ, it had slightly worse RMSE than the other three regressors. Overall, the model quality was better for USE than for TF-IDF. Figure 2 shows the predictions made by the best model on the test set, with the trend being similar to Figure 1. §.§.§ Binary Classification Table 5 summarizes the performance of all models. Compared to using the TF-IDF features, the results are surprisingly slightly worse, even though the differences are minimal in some cases. The overall order of models is preserved – again, the NN outperformed the other two models. As previously, we observed similar imbalances in F1-scores when predicting non-urgent and urgent posts separately. For non-urgent posts, all models achieved a high F1-score between 0.9544 (NN) and 0.9597 (XGBoost) on the training set, and 0.8954 (RF) to 0.8974 (XGBoost) on the test set. For predicting the urgent posts only, the predictive power is much lower: between 0.0366 (RF) and 0.3799 (NN) on the training set, and 0.0007 (RF) to 0.2563 (NN) on the test set. Again, the respective performance of the individual classifiers corresponds to the case with word count features. As expected, decreasing the decision cut-off below 50% again substantially improved the overall model performance. The best results were again achieved for decision thresholds of 10 or 15%. §.§ Comparison with the Results Published in Previous Literature We now compare our results with the binary classification models reported in Section 2, which were trained on the Stanford MOOCPosts data set. We cannot compare our multi-class classification and regression analyses to past work since it treated this problem only as binary classification. Almatrafi et al. <cit.> and Sha et al. <cit.> slightly differed from our approach in using the label 4 as the cut-off for post urgency, as opposed to 4.5. The best model by Almatrafi et al. <cit.>, an AdaBoost classifier, achieved a weighted F1-score of 0.88. Our binary classifiers slightly outperformed this model, even though we used fewer types of features. 
This indicates that combining features from various sources does not necessarily improve model quality. Sha et al. <cit.> reported a RF model that scored F1 = 0.89 and AUC = 0.89. While we achieved similar F1-scores, our AUC was much lower. This could have been caused by the smaller training set, in which the class imbalance had a larger effect. The NN approaches by Capuano and Caballé <cit.>, Guo et al. <cit.>, and Khodeir <cit.> reported F1-scores ranging from 0.80 to 0.92. Even though our NN models were much simpler and trained on a smaller data set, they achieved a similarly high F1 of 0.91. Finally, Alrajhi et al. <cit.> and Yu et al. <cit.> reported the model performance separately for non-urgent and urgent posts. When considering non-urgent posts only, they reached F1-scores of 0.95 and 0.93, respectively. Our best-performing model on this task achieved F1 = 0.96 on the training set (RF, word count features) and 0.90 on the test set (XGBoost, USE features). When considering urgent posts only, they reported F1-scores of 0.74 and 0.70. Here, our models scored much worse, 0.42 on the training set and 0.28 on the test set (both approaches used NN on the word count features). The AUC scores were not reported in this case. Overall, we achieved comparable or even slightly better performance in most cases. In addition, we evaluated the models for multi-class classification and regression, which the previous work did not consider. We could not fully replicate past work because the feature set and the code used to produce the previous results were unavailable. This prevented us from testing the prior work on our data set, which would have helped to establish the generalizability of those earlier approaches. §.§ Opportunities for Future Work In future work, the urgency rating of forum posts can also be treated as a ranking problem. Using an ML algorithm, posts can be sorted from the most to the least urgent instead of classifying them as high or low priority. Even among the posts with the same urgency level, some messages should be addressed first. Therefore, reframing the problem to ranking learning would lead to a different model that suggests the most urgent post to address instead of estimating the level of urgency. Our current approach shows that regardless of the regression outputs, regressor models such as the SVR correctly estimate a higher urgency for more urgent posts. For this reason, ML models could show promising results for sorting the posts based on their urgency. In addition, the post labeling scale could be improved, perhaps by simplifying it to fewer categories. In this study, we adopted the scale from previous work <cit.>, used additionally in <cit.> in order to be able to study the generalizability of findings across data sets. Finally, experimenting with over- or undersampling of the training set using algorithms such as SMOTE might improve model performance for certain labels. To ensure even a higher degree of generalizability, future research could validate the models on data from different populations than those employed in our paper. § CONCLUSION Responding to students’ concerns or misunderstandings is vital to support students’ learning in both traditional and MOOC courses. Since instructors cannot read all forum posts in large courses, selecting the posts that urgently require intervention helps focus instructors’ attention where needed. The presented research aims to automatically determine the urgency of forum posts. 
§ CONCLUSION Responding to students' concerns or misunderstandings is vital to supporting students' learning in both traditional and MOOC courses. Since instructors cannot read all forum posts in large courses, selecting the posts that urgently require intervention helps focus instructors' attention where needed. The presented research aims to automatically determine the urgency of forum posts. We used two separate data sets with different distributions and different approaches to the urgency scale (using .5 values or not) to support generalizability. Support vector regression models showed the highest performance in almost all aspects and cases. The best models from both categories of features (word counts or numerical embeddings) performed similarly, with Universal Sentence Encoder embeddings being slightly better. The results of this work can contribute to supporting learners and improving their learning outcomes by providing feedback to instructors and staff managing courses with large enrollment. The model quality has implications for practical use. Based on the RMSE values, it is unlikely that a highly urgent post will be labeled non-urgent and vice versa. From a practical perspective, implementing the urgency rating into MOOC platforms or large courses would help instructors, for example, by providing automated notifications on posts with high urgency. In this case, however, students should not be aware of the inner workings of such a system. This is to prevent abuse by inserting certain words or phrases to trigger instructor notifications. § ACKNOWLEDGMENTS This research was supported by the National Science Foundation (NSF) (NSF-OAC#1931419). Any opinions, findings and conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the NSF. We also thank Nathan Levin and Xiaodan Yu for their work in preparing and labeling the research data set.
http://arxiv.org/abs/2307.07317v1
20230714125112
Hybrid moderation in the newsroom: Recommending featured posts to content moderators
[ "Cedric Waterschoot", "Antal van den Bosch" ]
cs.IR
[ "cs.IR", "cs.CL", "cs.LG" ]
Cedric Waterschoot (KNAW Meertens Instituut, Oudezijds Achterburgwal 185, 1012 DK Amsterdam, The Netherlands) and Antal van den Bosch (Institute for Language Sciences, Utrecht University, Utrecht, The Netherlands). Online news outlets are grappling with the moderation of user-generated content within their comment sections. We present a recommender system based on ranking class probabilities to support and empower the moderator in choosing featured posts, a time-consuming task. By combining user and textual content features we obtain an optimal classification F1-score of 0.44 on the test set. Furthermore, we observe an optimum mean NDCG@5 of 0.87 on a large set of validation articles. As an expert evaluation, content moderators assessed the output of a random selection of articles by choosing comments to feature based on the recommendations, which resulted in an NDCG score of 0.83. We conclude that, first, adding text features yields the best score and, second, while choosing featured content remains somewhat subjective, content moderators found suitable comments in all but one of the evaluated recommendations. We end the paper by analyzing our best-performing model, a step towards transparency and explainability in hybrid content moderation. § INTRODUCTION Online newspapers allowing user comments have been facing moderation challenges with large and increasing content streams <cit.>. Whether to filter out toxicity, counter misinformation, or promote constructive posts, platforms are looking towards computational solutions to support moderator decisions <cit.>. Overall, moderation strategies are focused on two polar opposites. On the one hand, the moderator is required to safeguard the comment space from toxic and negative content <cit.>. On the other hand, platforms aim to promote what they deem good contributions, for example by pinning certain content to the top of the page <cit.>. In this paper we present a recommender based on ranking class probabilities to support the content moderator in picking such featured posts. Using Dutch comment data with human labeling of featured posts, we train a set of models which present the human moderator with a set of posts that might qualify for being featured. We hypothesize that the optimal post representation for ranking includes both user features and textual content features, information used by content moderators as well. Furthermore, we validate our models separately on a collection of articles. Validation on unseen articles reflects the real-life setting of moderating and choosing only a few comments to be featured, as opposed to artificially split and balanced test sets. The output of the best-performing model is assessed in an expert evaluation by content moderators currently employed at the platform in question, who evaluated a random selection of articles by deciding whether the recommended comments are worthy of being featured.
§ BACKGROUND §.§ Online Content Moderation As online comment platforms grow, content moderators have had to adapt moderation strategies to the changing online environments. Dealing with negative content has been a particular focus, e.g. detecting trolling and online harassment <cit.> or even organized misinformation campaigns <cit.>. <cit.> describes these forms of negative content under the umbrella of 'dark participation'. Recently, however, moderators are seeking to promote good comments as well. On the opposite side of the comment spectrum from dark participation, platforms and moderators are selecting what they deem good, high-quality comments and manually pinning them to the top of the comment space. Promoting what news outlets see as high-quality contributions has, for example, taken the form of New York Times (NYT) picks <cit.>, Guardian Picks at The Guardian, or featured posts at the Dutch news outlet NU.nl. On their FAQ pages (<https://www.nu.nl/nujij/5215910/nujij-veelgestelde-vragen.html> and <https://help.nytimes.com/hc/en-us/articles/115014792387-The-Comments-Section>), these outlets describe such promotion-worthy comments as "substantiated", "respectful" and representing "a range of viewpoints". <cit.> assigns a set of twelve editorial criteria to such featured posts, ranging from argumentative quality to entertainment value and relevance. Overall this procedure may be seen as "a norm-setting strategy" <cit.>. The authors argue that exposure to these promoted posts may also improve the quality of succeeding comments <cit.>. Supplementary to the aforementioned goal of promoting high-quality content and the positive normative effects these posts may have on other commenters, user engagement may increase as well. <cit.> find that after a user received their first featured comment, their commenting frequency increased. §.§ Hybrid Moderation While featured content ranking for moderators is a novel task, recommender systems have been used in the context of news platforms before. Plenty of research and workshops (e.g. the INRA workshops) focus on news recommendation and personalization aimed at readers of these platforms <cit.>. While this application is adjacent to content moderation, it differs from ours in that it is mostly aimed at users of a platform (as opposed to moderators) to optimize news consumption (instead of improving moderation tasks). Moderators of online news outlets have been increasingly working with computational systems to perform their tasks and to ward off toxic and unwanted content <cit.>. The result is a hybrid setting in which the roles of the human moderator on the one hand and the computational system on the other have become intertwined. <cit.> argue that, ideally, AI should offer decision support to the human moderator. Taking the final decision on publishing content is a task exclusively for the human moderator, and tools should be focused on assisting this function <cit.>. <cit.> emphasize this hybrid relation, stipulating that journalists do not want automatic editorial decision-making. When computational moderation tools support the human in carrying out their tasks, the moderators themselves can adapt to the nuances of changing online contexts and apply human interpretation and judgement <cit.>. Classifying toxic comments in online comment spaces has received substantial attention <cit.>. The classification of featured comments or editor picks, however, has not been explored quite as often.
<cit.> uses cosine similarity to calculate relevance scores relative to the conversation and the article, using New York Times editor picks. The author discovers an association between these picks and relevance and concludes that such computational assistance may speed up comment curation <cit.>. As part of their CommentIQ interface, <cit.> train an SVM classifier on unbalanced, but limited, data (94 NYT picks, 1,574 non-picks) and achieve a precision score of 0.13 and a recall of 0.6. Their data includes user history criteria as well as comment variables <cit.>. <cit.> annotated comments from Yahoo News in terms of what they present as "ERICs: Engaging, Respectful, and/or Informative Conversations". The authors look at the constructiveness of a thread rather than a single comment, and do not use editorial choices as their labelling <cit.>. <cit.> combined the Yahoo comments with NYT picks. The authors achieve an F1-score of 0.81 by training a BiLSTM on GloVe embeddings, using the NYT picks as benchmark and a balanced test set <cit.>. Furthermore, they combine a set of variables, including comment length features and named entities, and achieve a best F1-score of 0.84 using SVMs <cit.>. In a follow-up study, the authors achieved an F1-score of 0.87 on a similar task using crowdsourced annotations and logistic regression <cit.>. To sum up, these classifiers mostly relied on comment-level information alone, whether as text representations or otherwise. Additionally, the validation of these models was performed on large, balanced test sets, which does not resemble the real-life practice of picking featured posts. The moderator chooses editor picks at the article level, and any model should therefore be evaluated on such tasks. In this paper, we combine user information with comment data and text representations, all information used by the moderators themselves. §.§ Platform Specifics The comment platform discussed in this paper is called NUjij, part of the Dutch online newspaper NU.nl[<https://nu.nl/>]. NUjij allows commenting on a wide range of pre-selected news articles. Pre-moderation safeguards the comment space, in the form of automatic toxicity filtering and human moderators checking the uncertain comments <cit.>. NUjij employs a selection of moderation strategies, including awarding expert labels to verified users and presenting featured comments above the comment section, similar to the New York Times or Guardian picks <cit.>. Featured comments are picked by human moderators and are described as "substantiated and respectful reactions that contribute to constructive discussion"[<https://www.nu.nl/nujij/5215910/nujij-veelgestelde-vragen.html>]. NUjij states in its FAQ that moderators aim to present balanced selections and not to pick based on political affiliations. This paper aims to address this specific task by making use of the information available to moderators, which includes user information and history. Other platforms might have different editorial guidelines for moderators to choose featured comments. To best support the moderator, it is important that the approach fully suits their context, which may include the (intended) human bias in picking featured content. § METHODOLOGY §.§ Data We obtained a Dutch-language dataset containing a total of 821,408 pseudonymized posts from the year 2020, spanning 2,952 articles from NU.nl. Major topics within this dataset are climate change, the 2020 US election and the COVID-19 pandemic.
A binary variable indicates whether each post was featured by a moderator during the time interval in which commenting on the page was allowed. User variables were obtained by grouping and aggregating the information across the pseudonymized user keys. In total we have 8,262 featured posts. An article has on average 2.8 featured posts (sd=3), with a median of 2. The average article has 278 comments (sd=358, median=173). This shows that, while large variation exists in the number of comments per article, the number of featured posts per article remains low and relatively stable. The number of featured posts does not grow along with the number of comments posted per article; articles with many comments therefore make it particularly hard to find the featured posts. (Code and data description: <https://anonymous.4open.science/r/HybridModeration_RecSys2023/README.md>.) We group the comment data by article_id and sort these chronologically. We split this data 50%/50%, resulting in two sets of 1,476 articles, with the split date at June 16th 2020. The first set of articles is used for training and testing classifiers. We further split this set into 80%/10%/10%, generating a training, validation and test set, respectively. Table <ref> shows the distribution of posts in each set. Using the validation data, we tested the downsampling of the non-featured posts in the training set, using all the featured posts in the training data (n=3,047). Using the features listed in Table <ref>, we trained a random forest to predict whether a post was featured on six different downsampled training sets (Figure <ref>). The 95/5 ratio, i.e. 95% non-featured posts and 5% featured posts, yielded the best result and will be used as the training data henceforth. While the 95/5 ratio remains unbalanced, it is important to note that the unsampled actual ratio approximates 99/1. Thus, the training data represents a marked downsampling of non-featured posts. The second dataset contains 1,476 articles, published after June 16th 2020, with a total of 500,191 posts, of which 4,484 are featured. This second article set is used for evaluating the ranking of unseen discussions. §.§ Models The first model is trained exclusively on the non-textual features listed in Table <ref>. These features are available to the moderator and can be taken into account when deciding to feature a comment. The other two random forest classifiers, detailed later, include either textual Bag-of-Words or word embedding features, while we have also finetuned a transformer-based model on textual input only. Other models were trained (including SVM and logistic regression) but did not perform as well as the random forest implementations. §.§.§ Baseline We defined a simple threshold-based model to determine whether a post is classified as featured. More specifically, comments posted by users that have a featured-post ratio above 3% are labelled as such. The threshold constitutes the 95th percentile. Users with a history of often writing featured comments might do so in a new discussion. To make recommendations, the featured ratio is sorted in descending order. §.§.§ Random Forest (RF) We trained a random forest on the non-textual variables presented in Table <ref>. We used the standard scikit-learn implementation of the random forest (<https://scikit-learn.org/stable/index.html>, v1.2.0) and performed a hyperparameter grid search. The final model has a max depth of 50, 200 estimators and a minimum sample split of 10.
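A hedged sketch of such a grid search with scikit-learn; the exact parameter grid used in the study is not reported, so the candidate values below are illustrative:

```python
# Hypothetical sketch of the hyperparameter search behind the non-textual
# random forest; X_train/y_train are placeholders for the 95/5 sample.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "max_depth": [10, 50, 110],
    "n_estimators": [200, 600, 1200],
    "min_samples_split": [2, 10],
}
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    scoring="f1",
    cv=5,
    n_jobs=-1,
)
# search.fit(X_train, y_train)
# best_rf = search.best_estimator_
```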
§.§.§ RobBERT The previous models were entirely trained on non-textual data. To obtain a model that uses pure text as input, we employed the pre-trained Dutch transformer-based language model RobBERT and finetuned it on our training data <cit.>. This training data consists of the text belonging to the exact comments that the non-textual variables represent. The sequence classification employs a linear classification head on top of the pooled output <cit.>. We trained for 10 epochs with a batch size of 64, the AdamW optimizer and a learning rate of 5e-5 <cit.>. §.§.§ RF_Emb & RF_BoW By extracting the embeddings from the previously discussed RobBERT model, we are able to combine the textual input with the set of non-textual variables. We extracted the CLS tokens from the input and added those to the training data as features, adding up to a total of 797 features. We trained a scikit-learn random forest on this combined data. A hyperparameter grid search was performed, leading to a final random forest model with a max depth of 64, 1,200 estimators and a min_samples_split of 2. Our final model adds another text representation to the mix. Instead of the CLS token used for the RobBERT embeddings, we represented the content by a standard Bag-of-Words approach, counting the occurrences of the tokens in each comment. First, the text was lowercased and both punctuation and stopwords were removed. The words were added to the non-textual training data, resulting in a set of 426 features. We once again performed a hyperparameter grid search, which resulted in RF_BoW with 1,200 estimators, a min_samples_split of 10 and a max depth of 110. § RESULTS The initial evaluation is done on the test set that was obtained from our original 80/10/10 split on the first set of articles. Evaluation on the test set follows the standard procedure of a classification problem, in which comments are not yet ranked by class probability. Next, we use the second set of articles to calculate recommendation scores. In order to recommend comments to the moderator, the model ranks all the posts by class probability. The top comments, i.e. those with the highest probability of belonging to the featured class, are recommended. We evaluate recommendations on the basis of Normalized Discounted Cumulative Gain (NDCG). We calculated NDCG at different recommendation set sizes (k=3, 5, 10) across unseen articles. §.§ Classification results The initial evaluation step concerned the classifiers' generalization performance on the test set, containing a total of 379 featured posts and 31,743 non-featured posts. The 'Informed Baseline' achieved an F1-score of 0.17. This model performed well in terms of recall, but not in precision. The transformer-based RobBERT model, which lacks the non-textual information that the other models have, underperformed as well (Table <ref>). This might be due to the fact that identical comments are sometimes featured and other times not: because of the limit on featured comments per article, only a small set of well-written comments carries the featured label. The RF without textual representation achieved the best F1-score on the test set, while RF_BoW outperformed the other models in terms of precision (Table <ref>). §.§ Validation on unseen articles Beyond the standard classification of rare featured posts, the positively classified comments also have to be assembled into a recommendation set. Ranking is done by sorting individual comments by class probability for the featured class in descending order. A recommendation consists of the top k posts derived from this ranking; a sketch of this procedure follows below.
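The following minimal sketch (with best_rf, X_article and y_rel as placeholder names for a fitted model, one article's feature rows, and its binary featured labels) illustrates ranking by class probability and scoring the result with NDCG@k via scikit-learn:

```python
# Hypothetical sketch: rank an article's comments by predicted probability
# of the featured class, then score the ranking with NDCG@k.
import numpy as np
from sklearn.metrics import ndcg_score

def recommend_top_k(model, X_article, k=5):
    """Return indices of the k comments most likely to be featured."""
    proba = model.predict_proba(X_article)[:, 1]  # P(featured)
    return np.argsort(proba)[::-1][:k], proba

# ndcg_score expects one row per query (here: per article):
# top_k, proba = recommend_top_k(best_rf, X_article, k=5)
# score = ndcg_score(y_rel.reshape(1, -1), proba.reshape(1, -1), k=5)
```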
To validate the models, we calculated NDCG@k, with k, the number of comments recommended, at 3, 5 and 10, across all 1,476 articles <cit.>. An article has on average 3 featured posts, while k=5 and k=10 leave the moderator room to choose. The results are shown in Table <ref>. The best-performing models in the initial evaluation also yield the best rankings of unseen comments. RF and RF_BoW performed best at all recommendation sizes, with the latter yielding the highest score. This result indicates that added text representation in the form of Bag-of-Words slightly improves the recommendations shown to the moderators (Table <ref>). Simply ranking comments based on featured history scored better than ranking based on content, potentially because well-written comments are not featured consistently. §.§ Expert evaluation by moderators Using the best-performing model, a random set of unseen articles was collected alongside the recommendations. We created a survey consisting of 30 articles combined with a set of comments. This set consisted of the recommended comments (comments with a class probability above 0.5, and a maximum of 10 per article) and an equal number of random non-recommended comments from the discussion. These were randomly shuffled so the moderators did not know which comments were recommended by our system. Along with the article and comments, the evaluation included features that moderators have access to in real-life practice: the number of previously posted and featured comments by the user, the rejection rate of the user and the respect points of the comment. The content moderators had to decide for each individual comment whether they thought it was a candidate to feature on NUjij. In total, four moderators took the survey and each of them labelled comments from 15 articles. The first five articles were shown to all moderators in order to calculate inter-annotator agreement, while the other 10 were randomly selected from the pool. We calculated a Krippendorff's alpha inter-rater agreement of 0.62. This result, combined with the fact that 42.3% of the comments featured in the original data were not chosen, indicates that picking featured content remains somewhat subjective. However, in all but one article, moderators found comments to feature among the recommendations, resulting in an NDCG score of 0.83. While there is subjectivity involved in picking featured comments, the moderators do find featured content within the recommendations made by the model. They might not all choose the exact same comments, but all find worthy content in the recommended set. § DISCUSSION The context of hybrid moderation asks for insight into the computational models employed in the pipeline. Transparency being a key value in the field of journalism, moderators and users alike demand explanations as to how models come to a certain output <cit.>. Transparency is a prerequisite for user trust in content moderation <cit.>. Here, we offer an error analysis of our best-performing model. Moderators may use this information to counter potential bias towards certain comment characteristics. Furthermore, we discuss the limitations of our approach. To explain our model's behavior in general terms, we explore the erroneous recommendations the model has made, more specifically which features repeatedly contributed to false positives (FP) and false negatives (FN). For the error analysis, we processed all 1,476 validation articles and collected the top false positives in each recommendation (at k=5) and all false negatives. The latter are gathered from the entire article dataset, since they were incorrectly omitted from the actual recommendation. We used the Python library treeinterpreter (<https://github.com/andosa/treeinterpreter>) to collect the feature contributions for each prediction; a usage sketch follows below. The contribution (c) equals the share (as a decimal) that the feature contributed to the predicted class probability, calculated by following the decision paths in the trees.
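A minimal usage sketch of treeinterpreter; the stand-in data and forest below replace the study's fitted model and the collected error cases:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from treeinterpreter import treeinterpreter as ti

# Stand-in data and model; in the paper this would be the fitted RF_BoW
# model and the feature rows of the collected false positives/negatives.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

prediction, bias, contributions = ti.predict(rf, X[:25])
# contributions has shape (n_samples, n_features, n_classes); the average
# contribution c of each feature to the positive ("featured") class:
mean_c = contributions[:, :, 1].mean(axis=0)
print(mean_c.round(3))
```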
Respect_count (c=0.14) and respect_uptime (c=0.11) contributed strongly to incorrect recommendations, indicating that our model often incorrectly recommended posts with a high number of likes (Table <ref>). Additionally, the model is biased towards users who have often been featured before (c=0.06), and towards longer posts (c=0.04). Next, we looked at the false negatives (FN). Similar to the FPs, the history of being featured is a crucial factor in incorrectly omitting posts. Posts by users that have not been featured before or have an extremely low ratio of featured posts (c=0.05) were missed, as can be seen in Table <ref>. Furthermore, featured posts with a noticeably low respect_count (c=0.09) were missed as well. Another source of erroneous rankings was wordcount (c=0.02). Featured posts tend to be longer (mean word count: 100 for featured vs. 53 for non-featured posts), so shorter comments may have been overlooked and omitted from the recommendation. §.§ Limitations & future research We see at least two limitations to our approach. The first relates to the platform. Our models make use of a wide range of variables, including aggregated user information, which may not be available for other platforms. Furthermore, our recommendations are based on historical moderation choices and may therefore be biased towards certain content. These choices reflect the platform's editorial interpretation of a constructive comment. Future research could compare different criteria for featuring posts. Another platform-related limitation is the language. All text in this study was Dutch. Although we did not test the approach on data in another language, our approach, which assumes the presence of pre-labeled featured-post data and a transformer language model for that language, is language-independent. Second, while we have validated our models on a large collection of articles which resembles the real-life application, we do not know the precise moment at which the moderator selected featured posts. Knowing which posts were available to the moderator at that point in time would allow us to replay the recommendation process in time-realistic detail. Future research will specifically address this issue, using time-stamped data that documents the precise moment moderators selected featured posts. § CONCLUSION In this paper, we presented a classifier-based recommender system for featured posts to offer decision support to the online content moderator. Using comment and moderation data from a Dutch news platform, we showed that supplementing the non-textual data with a text representation achieves the best ranking scores. More specifically, our random forest supplemented with Bag-of-Words representations achieved the best ranking.
While previous research on classifying constructive comments validated models only on artificially balanced test sets, we validated our models on a large set of articles, replicating real-life practice. Furthermore, content moderators of the platform in question evaluated the output, yielding an NDCG of 0.83. We unpacked our best-performing model in terms of an error analysis, showing that our model favoured posts from users with a history of being featured before and might omit comments with a lower respect count. With our novel approach, combined with this transparency, we aim to support and empower the online content moderator in their tasks, while not obscuring the nuance and contextuality of picking featured posts. This study is part of the project Better-MODS with project number 410.19.006 of the research programme 'Digital Society - The Informed Citizen', which is financed by the Dutch Research Council (NWO).
http://arxiv.org/abs/2307.05098v1
20230711081243
Fabry-Pérot interference in Josephson junctions
[ "Sushil Kumar Sahu", "Abhiram Soori" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.mes-hall" ]
http://arxiv.org/abs/2307.07358v1
20230714140401
Learn from Incomplete Tactile Data: Tactile Representation Learning with Masked Autoencoders
[ "Guanqun Cao", "Jiaqi Jiang", "Danushka Bollegala", "Shan Luo" ]
cs.RO
[ "cs.RO" ]
The missing signals caused by objects being occluded or by an unstable sensor are a common challenge during data collection. Such missing signals will adversely affect the results obtained from the data, and this issue is observed particularly frequently in robotic tactile perception. In tactile perception, due to the limited working space and the dynamic environment, the contact between the tactile sensor and the object is frequently insufficient and unstable, which causes a partial loss of signals, thus leading to incomplete tactile data. The tactile data will therefore contain fewer tactile cues, with low information density. In this paper, we propose a tactile representation learning method, named TacMAE, based on the Masked Autoencoder, to address the problem of incomplete tactile data in tactile perception. In our framework, a portion of the tactile image is masked out to simulate the missing contact region. By reconstructing the missing signals in the tactile image, the trained model can achieve a high-level understanding of surface geometry and tactile properties from limited tactile cues. The experimental results of tactile texture recognition show that our proposed TacMAE can achieve a high recognition accuracy of 71.4% in the zero-shot transfer and 85.8% after fine-tuning, which are 15.2% and 8.2% higher than the results without using masked modeling. The extensive experiments on YCB objects demonstrate the knowledge transferability of our proposed method and its potential to improve efficiency in tactile exploration. § INTRODUCTION The tactile properties of an object's surface are important information for robots to gain an understanding of the physical environment. Surface tactile properties, such as texture, stiffness, softness, etc., are embedded in the tactile data acquired through physical interaction between the tactile sensor and the target objects, which enables robots to facilitate manipulation tasks and interact with their surroundings effectively <cit.>. During the tactile data collection process, the tactile sensor is expected to have adequate contact with the target objects, leading to the activation of larger perception fields and distinct tactile signals being recorded. However, due to the dynamic environment and the limited working space of robots, it often happens that the tactile sensor fails to make contact or only makes partial contact with the target object, especially for soft objects like clothes. In such a case, the recorded incomplete tactile data contains fewer tactile cues than the data from adequate contact, because a smaller perception field is stimulated. Moreover, some tactile properties, such as compressibility and stiffness, which need to be obtained by squeezing the target, would be absent from partial contact. In tactile perception, the tactile image collected from an optical tactile sensor is one of the most popular tactile data types <cit.>. A typical example is the tactile image from the GelSight sensor, which embeds tactile textures, height information and friction information at high resolution. Since tactile images share the same data format as RGB images, they have been processed using advanced techniques from the field of computer vision <cit.>. However, most methods are limited to treating adequate and partial contact events equally.
We argue that current methods ignore the effect of incomplete tactile data and that there is no specific optimisation or representation method to address this issue. As shown in Fig. <ref>, the major difference between tactile data from adequate contact and from partial contact lies in the information density: some contact regions are missing in partial contact. However, humans are able to identify an object by touching only a small portion of the object's surface area <cit.>. This indicates that the tactile signals on an object's surface have a high degree of redundancy, and that the patterns of the object's surface tend to have repetitive parts with identical properties. Consequently, it is possible to reconstruct a missing signal in tactile data from neighbouring signals in the spatial space, using the semantic information of the existing tactile cues. In this paper, we propose a tactile representation method based on the Masked Autoencoder <cit.>, named TacMAE, to simulate the missing contact regions of incomplete tactile data caused by partial contact. Motivated by a recent work <cit.>, which uses the amount of remaining voxels to represent the degree of occlusion in 3D point clouds, we employ a similar approach that uses the contact area in tactile images as the criterion to distinguish adequate contact from partial contact. In the training stage, a portion of the input patches of tactile images is masked out, using the data collected from adequate contact, and the missing tactile signals are then reconstructed. Moreover, a supervised classification head is designed, which allows us to learn additional information from the corresponding labels. After training, the encoder can be applied to incomplete tactile images to learn latent representations with low information density effectively. In the tactile texture recognition experiments, we observe a significant improvement in recognising objects with partial contact, in both the zero-shot transfer and the fine-tuning setting. Moreover, our proposed method can be seen as a sensor-agnostic representation method, since signals from any other tactile sensor can be converted into the image format. The contributions of this paper are summarised as follows: * We propose a tactile representation method, TacMAE, to address the problem of incomplete tactile data in robotic perception, which is the first of its kind. * We use a masked autoencoder with a high masking ratio to simulate the absence of contact area in partial contact. By reconstructing the missing signals using the observed information, the model can effectively learn from incomplete tactile data. To the best of the authors' knowledge, no previous studies have investigated masked modeling in tactile representation learning. * The experimental results demonstrate that our method can significantly improve tactile texture recognition performance by learning more robust tactile representations, and the transfer learning experiments indicate that our method can be used to improve the efficiency of tactile exploration. § RELATED WORKS In this section, we first review works on tactile perception with tactile images, followed by a discussion of masked modeling in representation learning. §.§ Tactile perception with tactile images Tactile sensing has been widely used in robotic exploration with different kinds of sensing mechanisms, such as strain gauges <cit.>, capacitive sensors <cit.>, and microphones <cit.>.
The motion of the sensor provides tactile sensory information about the contacted surface, such as friction information and textures, for tactile perception. Compared with other tactile sensors, optical tactile sensors use high-resolution cameras and record more detailed tactile information in tactile images. Recently, camera-based optical tactile sensors, such as the GelSight sensor, have gained popularity in tactile perception tasks. In <cit.>, the GelSight is applied to enable a robot to recognise clothes and their corresponding properties autonomously. In <cit.>, a spatio-temporal attention model is proposed to process tactile images from the GelSight sensor, which is capable of highlighting salient tactile features in both the spatial and the temporal dimension for texture recognition. In <cit.>, tactile images are fused with visual images to learn the features shared between vision and tactile sensing for cloth texture recognition. However, the previous methods treat each contact event equally, ignoring the effect of incomplete tactile data from partial contact on tactile perception. §.§ Masked modeling in representation learning In representation learning, masking has been used as a way of hiding some of the input and training the model to predict the masked input, so as to improve the model's capability when some input is absent. Two popular paradigms are Masked Language Modeling (MLM) and Masked Image Modeling (MIM). MLM has become a successful paradigm in the field of NLP, with examples such as BERT <cit.>, RoBERTa <cit.>, and the GPT models <cit.>. Models are trained to predict the value of masked tokens of input sentences in order to understand the context of the sentences. Due to their superior performance, MLMs have been applied to a variety of downstream tasks, including machine translation <cit.>, speech recognition <cit.>, question answering <cit.>, and sentiment analysis <cit.>. MIM also exhibits great potential in the field of computer vision. Similar to the mechanism in MLM, the representation is learnt by predicting the missing information from the remaining cues. In <cit.>, patches of raw pixels are masked at a high masking ratio and are then reconstructed using the visible patches. In <cit.>, the latent representation is predicted based on a view of the masked input. Additionally, MIM has benefited various visual applications. In <cit.>, a segmentation method is developed on top of the MAE by leveraging synthesized images with shifted objects. In <cit.>, a multi-scale MAE framework is proposed to learn 3D point clouds for shape classification and object detection. In <cit.>, face privacy is considered and preserved by masking the face images when training the face recognition model. In representation learning, the concept of the dropout method <cit.> is similar to masked modeling, i.e., dropping a portion of the elements to improve generalisation ability. However, the two techniques differ in how they drop elements: masked modeling drops a portion of the input and reconstructs the missing content, while the dropout method discards random neurons in a layer. This results in different effects on the capabilities of the model: masked modeling enhances the representation learning of the data, whereas dropout prevents overfitting to the training data. To the best of the authors' knowledge, there are no prior works that apply masked modeling in tactile representation learning.
In this work, we develop a tactile representation method based on masked modeling to solve the problem caused by partial contact in tactile perception, for the first time. § METHODOLOGIES Our proposed TacMAE masks a portion of the signals to simulate the contact area missing in partial contacts, and reconstructs the missing signals from limited tactile cues. As shown in Fig. <ref>, our framework mainly includes three parts: 1) an encoder E that encodes the unmasked patches of tactile images to obtain a latent representation; 2) a decoder D that reconstructs the missing patches of tactile images; and 3) a classification head that classifies the input tactile image from its unmasked patches. TacMAE encoder. Let 𝐱∈ℝ^H × W × C denote the input tactile image obtained from adequate contact, where H, W, C represent the height, width and channel of the tactile image, respectively. First, the input tactile image is reshaped into N patches (tokens) 𝐱_p ∈ℝ^N ×(P^2 · C), serving as token embeddings, where (P,P) is the shape of each patch and N=H W / P^2. Then, a portion (e.g., 70%) of the patches is masked out and the remaining unmasked patches are fed into the encoder to obtain the latent representation. Specifically, we use the structure of the Vision Transformer (ViT) <cit.> as the encoder in our proposed TacMAE. TacMAE decoder. A decoder is applied to reconstruct the missing patches of tactile images from the unmasked patches, which enables the model to learn from limited tactile cues. Specifically, the decoder receives two components as input: the latent features from the encoder, and trainable mask tokens that stand in for the missing patches to be reconstructed <cit.>. Concretely, the decoder network is also implemented with ViT blocks. The decoder reconstructs the tactile image, and the mean square error (MSE) is calculated between the original and reconstructed tactile patches. The reconstruction loss can be represented as: ℒ_rec=1/|Ω|∑_p ∈Ω|x_p-x̂_p|^2, where p denotes the index of a masked patch, Ω represents the set of masked patches (so |Ω| is their number), x_p represents the original values of masked patch p, and x̂_p represents the reconstructed patch. TacMAE classification head. Apart from the reconstruction by the decoder, which focuses on the correlation of the surface geometry in the tactile image, we also apply a classification head to make the model learn additional information from the corresponding labels. Unlike traditional supervised learning, which utilises all patches of the tactile images, our approach employs only the latent features of the unmasked patches during the training phase. This is due to the fact that the surface patterns of objects often contain repetitive elements with redundancies. For example, when humans use tactile sensing to recognise objects, the perceived contact area is usually much smaller than the object surface <cit.>, which means that tactile images collected from neighbouring areas of an object surface can have similar textures. Consequently, it is possible to train the model to learn from partial tactile information effectively. Specifically, a global pooling function is first applied to the latent features to obtain a global representation. Then, two fully connected layers are used in the classification head, each followed by a ReLU activation function. Consequently, the cross-entropy loss is calculated between the predicted labels and the human-annotated labels in supervised training.
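The following PyTorch sketch illustrates the MAE-style random masking step described above; it is a generic implementation of the published MAE recipe, not the authors' code, and the tensor sizes are illustrative:

```python
# A minimal sketch of MAE-style random masking, assuming patch embeddings
# of shape (batch, num_patches, dim); all names are illustrative.
import torch

def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.7):
    """Keep a random subset of patch tokens; return them plus the mask."""
    B, N, D = tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                   # one random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)  # random permutation
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(
        tokens, 1, ids_keep.unsqueeze(-1).expand(B, n_keep, D)
    )
    mask = torch.ones(B, N)                    # 1 = masked, 0 = visible
    mask.scatter_(1, ids_keep, 0)
    return visible, mask, ids_keep

tokens = torch.randn(4, 196, 768)              # e.g. 14x14 patches
visible, mask, ids_keep = random_masking(tokens, mask_ratio=0.7)
print(visible.shape, int(mask.sum(dim=1)[0]))  # (4, 58, 768), 138 masked
```

Only the visible tokens would then be fed to the ViT encoder, while the decoder receives the encoded tokens plus mask tokens at the positions flagged in mask.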
The cross-entropy classification loss is given as: ℒ_ce=-∑_i=1^K y_i logŷ_i, where ŷ represents the predicted softmax probability, y represents the one-hot vector of the correct category, and K is the number of samples in our dataset. By minimising both the classification loss and the reconstruction loss, a robust tactile representation is learnt from limited tactile cues during the training stage. The overall objective is expressed as: ℒ=λ_recℒ_rec+λ_ceℒ_ce, where λ_rec and λ_ce are set to 1 and 0.01, respectively, after a grid search. Implementation on downstream tasks. After training, fine-tuning can be performed for different downstream tasks, such as recognition and detection. It should be noted that only the encoder needs to be retained as a backbone to obtain the tactile representation, while the other structures are removed. Moreover, in contrast to the aforementioned training, which uses only a subset of the divided patches, uncorrupted tactile images are used in the downstream tasks. § DATA PREPARATION In tactile perception, one distinct example with a wide range of surface properties is fabric or clothing. In this paper, we use the dataset from <cit.>. This dataset contains 118 fabrics with a size of 1m×1m, which display various properties, such as textures, colors, density, and stiffness. The dataset contains visual, tactile, and semantic data, of which only the tactile data is used in our study. During the tactile data collection, an optical tactile sensor, a GelSight sensor, is used to collect a sequence of tactile images by pressing against the fabrics while they are placed on a hard plane. The fabrics are placed with three different appearances for data collection: laying the fabric flat (flat data), laying it with one fold (fold data), and laying it randomly (random data). Approximately 10 flat data samples, 15 fold data samples, and 15 random data samples are collected for each fabric. To simulate incomplete tactile images from partial contact, we randomly select three tactile images with contact areas ranging from 10% to 40% of the perception field from each contact event. Tactile images with contact areas over 50% of the perception field are used as the data from adequate contact. Some samples of adequate contact and partial contact are shown in Fig. <ref>. Specifically, we determine the contact area using the OpenCV findContours function; a sketch of this computation follows at the end of this section. Accordingly, there are 14,961 tactile images that represent adequate contact and 14,823 tactile images that represent incomplete tactile data. Both datasets are divided in a ratio of 7:2:1 for training, validation and test, respectively. In this study, we use the contact area as the criterion to distinguish between adequate contact and partial contact. However, there are several alternatives that could be considered, such as the contact force or the entropy <cit.> of the tactile image. We plan to explore this open question in our future work.
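A minimal sketch of the contact-area computation with OpenCV, assuming grayscale-convertible GelSight images in which contact regions appear brighter; the threshold value and file name are illustrative assumptions, not taken from the paper:

```python
import cv2
import numpy as np

def contact_area_ratio(tactile_img: np.ndarray, thresh: int = 30) -> float:
    """Fraction of the perception field covered by detected contact."""
    gray = cv2.cvtColor(tactile_img, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    area = sum(cv2.contourArea(c) for c in contours)
    return area / (gray.shape[0] * gray.shape[1])

# ratio = contact_area_ratio(cv2.imread("tactile_frame.png"))
# Per the paper's criterion: >50% -> adequate contact; 10-40% -> partial.
```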
§ EXPERIMENTS AND ANALYSIS §.§ Tactile Representation Learning To validate the tactile representation capability of our proposed method, we test the results under two different settings, i.e., zero-shot transfer and fine-tuning, in tactile recognition of incomplete tactile data. Zero-shot learning usually involves the ability to recognise novel objects whose categories are not included in the training set. In this study, following <cit.>, we extend the concept of zero-shot learning and investigate the generalisation to unobserved datasets. The motivation is to use this setting as a proxy for unobserved tasks <cit.>. Concretely, the tactile representation is learnt using the data from adequate contact, and the incomplete tactile data from partial contact are tested directly, without any additional training. In addition to zero-shot learning, fine-tuning is a widely used method to evaluate the capability of a representation. Compared to zero-shot learning, fine-tuning is a practical and adaptable approach that can modify the representation to suit the new dataset, thereby mitigating failures of the representation learned during the pre-training phase. Specifically, the representation model is first trained on the data from adequate contact. Then, a linear classifier on top of the learnt model is fine-tuned using the incomplete data from partial contact. In our framework, we use a high masking ratio to simulate the absence of contact area in partial contact, and reconstruct the masked patches to make the model learn the surface geometry and tactile features from the limited tactile signals. First, we investigate how the masking ratio affects the results of tactile representation learning. As shown in Fig. <ref>, the recognition results are given for the two different settings, zero-shot transfer and fine-tuning, with the masking ratio ranging from 10% to 90%. When the masking ratio is 90%, the recognition results are inferior to most others, with only 55.6% in zero-shot transfer and 71.7% in fine-tuning. This indicates the challenge of establishing the correlation of the surface geometry and predicting the missing patches from only a small portion of unmasked tactile signals. On the other hand, if the masking ratio is very small (e.g., 10%), the reconstruction of the tactile signals is trivial, as most of the tactile features are already present in the unmasked patches. The optimal point for the masking ratio is around 70%, where the recognition results are highest in our experiment: 71.4% in zero-shot learning and 85.8% in fine-tuning. Moreover, the computation cost can also be reduced significantly because of the high masking ratio. As shown in Fig. <ref>, we visualise the reconstructed tactile images for different masking ratios. When the masking ratio is 10%, we are able to predict fine details of the missing patches. If the masking ratio is increased to 70%, the predicted details become less clear, but the textures and geometry are preserved. At a masking ratio of 90%, only the outlines of the contact area can be reconstructed. This also illustrates that a 70% masking ratio is a good compromise between reconstruction quality and the benefits of a high masking ratio. To further analyse how the proposed TacMAE method works in tactile representation learning, we conduct an ablation study. TacMAE consists of a reconstruction component and a classification component. In the ablation study, we explore the effect of removing these two components, one at a time. As shown in Table <ref>, our proposed TacMAE achieves the highest texture recognition results for incomplete tactile data. Concretely, the accuracy decreases by 15.2% and 8.2% in zero-shot transfer and fine-tuning, respectively, when we remove the reconstruction branch and use the full patches for training. When the classification head is discarded, there is an obvious drop in performance, by 39.5% and 24.1% in zero-shot learning and fine-tuning, respectively.
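The two evaluation settings can be sketched as follows in PyTorch; the stand-in encoder, feature dimension and module names are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 256))  # stand-in
cls_head = nn.Linear(256, 118)  # 118 fabric classes, as in the dataset

def zero_shot_predict(x):
    # Zero-shot transfer: apply the model trained on adequate-contact data
    # directly to partial-contact images, with no extra training.
    with torch.no_grad():
        return cls_head(encoder(x)).argmax(dim=1)

# Fine-tuning: keep the pre-trained encoder and retrain a fresh linear
# classifier on the partial-contact training split.
new_head = nn.Linear(256, 118)
optimizer = torch.optim.AdamW(new_head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def finetune_step(x, y):
    optimizer.zero_grad()
    loss = criterion(new_head(encoder(x).detach()), y)  # encoder frozen
    loss.backward()
    optimizer.step()
    return loss.item()
```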
§.§ Comparison against other methods We compare TacMAE against existing methods in tactile texture recognition on the incomplete tactile data from partial contact. Concretely, two CNN-based methods <cit.> are tested in both the zero-shot transfer and the fine-tuning setting. In <cit.>, a multi-label classification is performed, and we modify the method for categorical classification. In <cit.>, spatio-temporal attention is applied to address the salient features in a tactile sequence, and we remove the temporal attention function, as the input in our experiment is a single tactile frame. From Table <ref>, we see that our proposed method achieves the highest recognition accuracy in both zero-shot learning and fine-tuning. Specifically, we notice that the baseline methods <cit.> lack the ability to generalise in zero-shot learning, obtaining only 26.4% and 34.1%, respectively. This is because the baseline methods, trained on data from adequate contact, are unable to extract useful tactile features from incomplete tactile data due to the difference in data distributions. In our proposed framework, masking a portion of the tactile signals and reconstructing the missing signals allows us to obtain semantic feature information from limited tactile signals. This gives TacMAE strong robustness and generalisation ability. Although the performance of the baseline methods clearly improves after fine-tuning, with recognition accuracies of 60.6% and 74.5%, respectively, it is still inferior to the performance of TacMAE. §.§ Transfer Learning by exploring YCB Objects The goal of this experiment is two-fold: 1) to demonstrate the cross-task knowledge transferability of our tactile representation method by performing an object recognition task with YCB objects <cit.>, and 2) to show that this method is able to improve the efficiency of active tactile exploration. Experimental setup. Six YCB objects with different surface textures are selected, including an abrasive sponge, a tomato soup can, a tuna fish can, a baseball, a spatula, and a metal plate. First, tactile textures are collected from every single object to fine-tune the whole model. Then, as shown in Fig. <ref>, the objects are intentionally placed in such a way that the target object is obstructed by other objects. As a result, in an active tactile exploration task, the target object can only be partially contacted by the tactile sensor without being relocated. In the robotic experiment, a GelSight sensor is mounted on a UR5 robot arm as the end effector to obtain partial contacts for texture recognition. Experimental results. In the robotic tactile exploration, we let the sensor contact the objects partially to recognise them, with 12 attempts per object. The contact location is changed randomly by about 3 mm on the horizontal plane in each attempt. Notably, the average contact area makes up only about 11.5% of the sensor's perception field in these contacts, while the other areas remain empty. Table <ref> compares the recognition success rate against the other baseline methods <cit.>. The results show that our TacMAE has the highest performance, with 83.3% recognition accuracy over a total of 72 attempts. These results demonstrate the unique ability of TacMAE to transfer the learnt knowledge of tactile features to different kinds of objects.
It also indicates that TacMAE can improve the efficiency of active tactile exploration, i.e., obtaining more information with fewer touch attempts, especially for an obstructed object that cannot be moved. § CONCLUSION In this paper, we proposed TacMAE, a robust tactile representation method based on the MAE to effectively learn features from partial contacts. During training, we create a simulation of partial contact by masking out a portion of the tactile signals. By reconstructing the missing signals via self-supervised learning, the model is capable of learning the surface geometry and the correlations between limited tactile cues. The experimental results show that TacMAE obtains accurate tactile representations in both the zero-shot learning and the fine-tuning setting. Furthermore, the results on the YCB objects indicate the generalisation ability and knowledge transferability of the method across different tasks. Moreover, the ability to acquire knowledge from partial contact can increase the efficiency of tactile exploration, especially for obstructed objects. TacMAE has the potential to be used as a sensor-agnostic representation learning method by converting the signals from any tactile sensor, not just the GelSight sensor, into the image format. We plan to explore these possibilities in our future work. Moreover, we will investigate different downstream tasks using our proposed method, such as defect detection and robotic manipulation tasks.
http://arxiv.org/abs/2307.04212v1
20230709155233
Delay-Adaptive Control of First-order Hyperbolic PIDEs
[ "Shanshan Wang", "Jie Qi", "Miroslav Krstic" ]
math.AP
[ "math.AP", "cs.SY", "eess.SY", "math.OC", "physics.class-ph", "physics.flu-dyn" ]
Shanshan Wang (Department of Control Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China), Jie Qi (College of Information Science and Technology, Donghua University, Shanghai, China; corresponding author: [email protected]), and Miroslav Krstic (Department of Mechanical and Aerospace Engineering, University of California, San Diego, California, USA). Summary: We develop a delay-adaptive controller for a class of first-order hyperbolic partial integro-differential equations (PIDEs) with an unknown input delay. By employing a transport PDE to represent the delayed actuator states, the system is transformed into a transport partial differential equation (PDE) with unknown propagation speed cascaded with a PIDE. A parameter update law is designed using a Lyapunov argument and the infinite-dimensional backstepping technique to establish global stability results. Furthermore, the well-posedness of the closed-loop system is analyzed. Finally, the effectiveness of the proposed method is validated through numerical simulations. § INTRODUCTION First-order hyperbolic PIDEs are widely used in various engineering applications, including traffic flow <cit.>, pipe flow <cit.>, heat exchangers <cit.>, and oil well drilling <cit.>. These applications often involve time delays due to the transportation of matter, energy, and information, which negatively affect the stability and performance of the system. Maintaining a stable fluid temperature is critical for the normal operation of heat exchangers, but the response speed is often limited when regulating the fluid temperature, resulting in a time delay <cit.>. The exact value of the delay is usually hard to measure, which makes it a significant source of uncertainty within the controlled process <cit.>. Controlling the advection process in the presence of unknown delays is therefore a challenging task of practical significance. Thus, addressing the stabilization problem of first-order hyperbolic PIDEs with unknown input delays is of great practical importance. Recently, there have been many studies on the stability of first-order hyperbolic PIDEs <cit.>, and the development of infinite-dimensional backstepping techniques in <cit.> has provided effective methods for PDE control problems. <cit.> applied this method to the control of open-loop unstable hyperbolic PIDEs and developed a backstepping-based controller to stabilize the system. Subsequently, control problems for 2× 2 first-order PDEs <cit.>, n+1 coupled first-order hyperbolic PDEs <cit.>, and m+n anisotropic hyperbolic systems <cit.> were investigated by employing the infinite-dimensional backstepping approach. In reference <cit.>, a state feedback controller was designed for hyperbolic PIDEs with time-varying system parameters using this infinite-dimensional backstepping method, and the controller ensures that the system state converges to zero in the H_∞ norm within a finite time. Furthermore, a stabilizing controller and observer for hyperbolic PIDEs with Fredholm integrals were constructed in <cit.>, and the results of <cit.> were extended to output regulation problems. <cit.> demonstrated the equivalence between finite-time stabilization and exact controllability properties for first-order hyperbolic PIDEs with Fredholm integrals.
For linear anisotropic hyperbolic systems without integral terms, finite-time output regulation problems were addressed in <cit.>, and stabilization problems for linear ODEs coupled with linear anisotropic PDEs were solved in <cit.>. Infinite-dimensional backstepping has also been applied to adaptive control of hyperbolic PDEs. The pioneering work was presented in <cit.>, where an adaptive stabilization method was developed for a one-dimensional (1-D) hyperbolic system with a single uncertain parameter. Since then, this method has been extensively applied to various types of hyperbolic PDEs with unknown parameters, as presented in the extensive literature <cit.>. The aforementioned results are built on three traditional adaptive schemes, namely the Lyapunov design, the passivity-based design, and the swapping design, which were initially proposed for nonlinear ODEs <cit.> and extended to the boundary adaptive control of PDEs <cit.>. Combined with the backstepping design, a novel control strategy was proposed for coupled hyperbolic PDEs with multiplicative sensor faults in <cit.>, which utilizes a filter-based observer and a model-based fault-parameter estimation technique to achieve the tracking objective.

In recent years, attention has turned to the time delays that occur in first-order PIDE systems, since delays are commonly encountered in engineering practice. For instance, in <cit.>, input delays were considered, and a backstepping boundary control was designed for first-order hyperbolic PIDEs. An observer-based output feedback control law was proposed for a class of first-order hyperbolic PIDEs with non-local coupling terms in the domain and measurement delay compensation <cit.>. Reference <cit.> addressed the output boundary regulation problem for a first-order linear hyperbolic PDE considering disturbances in the domain and on the boundary as well as state and sensor delays. Recently, the robustness of output feedback for hyperbolic PDEs with respect to small delays in actuation and measurements was discussed in <cit.>. Research on adaptive control for unknown, arbitrarily large delays in PDE systems is relatively scarce. In contrast, there have been significant research achievements in the adaptive control of ODE systems with unknown delays. A notable theoretical breakthrough was achieved in <cit.>, which developed adaptive control methods to compensate for uncertain actuator delays. Subsequently, the delay-adaptive control technique has been applied to various types of unknown delays in ODE systems, including single input delay <cit.>, multi-input delay <cit.>, and distributed input delay <cit.>. Inspired by these studies, recent work on parabolic systems with unknown input delays is presented in <cit.>. However, research on hyperbolic PDE systems with delays remains relatively limited. For first-order hyperbolic systems with uncertain transport speed, parameter estimators and adaptive controllers were designed in <cit.> by using swapping filters. Different from these two studies, in this paper we apply a Lyapunov argument combined with the infinite-dimensional backstepping technique to design a delay-adaptive controller that achieves global stability, since Lyapunov-based adaptive methods are known to provide better transient performance <cit.>.

In this paper, we consider a hyperbolic PIDE with an arbitrarily large unknown input delay. We extend the previous work on parabolic PDEs <cit.> to a first-order PIDE system.
We employ the infinite-dimensional backstepping method and choose the classic update law for the unknown delay, so that the target system takes a cascade structure in which the target transport PDE carries two extra nonlinear terms governed by the delay estimation error and the delay update law. The L^2 global stability of the target system is proven using appropriate Lyapunov functionals. The inverse Volterra/backstepping transformation establishes the norm equivalence between the target system and the original one, thereby yielding L^2 global stability of the PDE system under the designed adaptive delay-compensation controller. Furthermore, the well-posedness of the closed-loop system is analyzed. The main contributions of this paper are: (1) We develop a combined approach of infinite-dimensional backstepping and the Lyapunov functional method for delay-adaptive control design for a class of hyperbolic PIDEs with unknown input delay. In <cit.>, the presence of nonzero boundary conditions in the parabolic PDE target system with unknown input delay restricts the result to local stability of the closed-loop system with the delay update law. Here, in contrast, we leverage the first-order hyperbolic structure of the system to attain global stability of the closed-loop system. (2) The well-posedness of the closed-loop system is established. Due to the presence of nonlinear terms and non-zero boundary conditions in the target system, the proof of well-posedness is not straightforward. We use the semigroup method to analyze the well-posedness of the target system, and construct Lyapunov functions to establish the system's asymptotic stability in the H^1 norm, thereby ensuring the global existence of the classical solution. Owing to the invertibility of the backstepping transformation, the equivalence between the target system and the closed-loop system can be established, so that the closed-loop system is well-posed.

The structure of this paper is as follows: Section <ref> briefly describes the design of a nonadaptive controller for the considered hyperbolic PIDE system. Section <ref> discusses the design of the delay-adaptive control law. Section <ref> is dedicated to the stability analysis of the resulting adaptive closed-loop system and its well-posedness. Section <ref> provides simulation results that demonstrate the feasibility of our approach. The paper ends with concluding remarks in Section <ref>.

Notation: Throughout the paper, the L^2-norm of χ∈ L^2[0,1] is defined as ‖χ‖^2_L^2=∫_0^1|χ(x)|^2dx, and we set ‖χ‖^2=‖χ‖^2_L^2. For any given function ψ(·,D̂(t)), we write ∂ψ(·,D̂(t))/∂ t= Ḋ̂̇(t) ∂ψ(·,D̂(t))/∂D̂(t).

§ PROBLEM STATEMENT AND NON-ADAPTIVE CONTROLLER

Consider the first-order PIDE with an input delay D>0, u_t(x,t)= u_x(x,t)+g(x) u(0,t)+∫_0^xf(x,y)u(y,t)dy, u(1,t)=U(t-D), u(x,0)=u_0(x), for (x,t)∈ (0,1)×ℝ_+, where g(x) and f(x,y) are known continuous coefficient functions. Following <cit.>, the delayed input U(t-D) is represented by a transport equation coupled with (<ref>) as follows: u_t(x,t)= u_x(x,t)+g(x) u(0,t)+∫_0^xf(x,y)u(y,t)dy, u(1,t)=v(0,t), u(x,0)=u_0(x), Dv_t(x,t)=v_x(x,t),   x∈[0,1), v(1,t)=U(t), v(x,0)=v_0(x), where the infinite-dimensional actuator state is solved as v(x,t)=U(t+D(x-1)).
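This transport representation of the delay is also convenient numerically. The following Python sketch (grid sizes, the delay value, and the test signal are illustrative assumptions, not taken from the paper's simulations) propagates the actuator state with a first-order upwind scheme and recovers v(0,t)=U(t-D):

```python
import numpy as np

# Sketch: the delayed input U(t-D) realized as the transport PDE
# D*v_t = v_x on [0,1] with v(1,t) = U(t), so that v(x,t) = U(t + D(x-1))
# and, in particular, v(0,t) = U(t-D).
D = 2.0                          # input delay
nx, dt = 201, 1e-3               # spatial points and time step (illustrative)
dx = 1.0 / (nx - 1)
v = np.zeros(nx)                 # actuator state, v(x,0) = 0

def U(t):                        # any test input signal
    return np.sin(t)

t = 0.0
for _ in range(5000):            # CFL number dt/(D*dx) ~ 0.1 < 1
    # first-order upwind step: the characteristic enters at x = 1
    v[:-1] += (dt / (D * dx)) * (v[1:] - v[:-1])
    t += dt
    v[-1] = U(t)                 # boundary condition v(1,t) = U(t)

print(v[0], U(t - D))            # v(0,t) approximates U(t-D) once t > D
```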
To design the delay-compensated controller U(t), the following backstepping transformations can be employed: w(x,t)= u(x,t)-∫_0^xk(x,y) u(y,t)dy, z(x,t)= v(x,t)-∫_0^1γ(x,y) u(y,t)dy-D∫_0^xq(x-y)v(y,t)dy, where the kernel functions k(x,y) and q(x-y) are defined on 𝒯_1={(x,y): 0≤ y ≤ x≤ 1} and γ(x,y) on 𝒯_2 ={(x,y): 0≤ y, x ≤ 1}, which gives the following target system: w_t(x,t)=w_x(x,t), w(1,t)=z(0,t), w(x,0)= w_0(x), Dz_t(x,t)=z_x(x,t), z(1,t)=0, z(x,0)=z_0(x), with the mild solution for z given by z(x,t)= z_0(x+t/D) for 0≤ x+t/D≤ 1 and z(x,t)=0 for x+t/D>1. Using the backstepping method, one gets the kernel equations k_x(x,y)= -k_y(x,y)+∫_y^xf(τ,y)k(τ,y)dτ-f(x,y), k(x,0)= ∫_0^x k(x,y)g(y)dy-g(x), γ_x(x,y)= -Dγ_y(x,y)+D∫_y^1f(τ,y)γ(x,τ)dτ, γ(x,0)= ∫_0^1g(y)γ(x,y)dy, γ(0,y)= k(1,y), q(x)= γ(x,1). From the boundary conditions (<ref>) and (<ref>), the associated control law is straightforwardly derived: U(t)=∫_0^1γ(1,y) u(y,t)dy+D∫_0^1q(1-y)v(y,t)dy. The transformations (<ref>)–(<ref>) are invertible, with inverses u(x,t)= w(x,t)+∫_0^xl(x,y) w(y,t)dy, v(x,t)= z(x,t)+∫_0^1η(x,y) w(y,t)dy-D∫_0^xp(x-y)z(y,t)dy, where the kernels l(x,y), η(x,y) and p(x-y) satisfy the following PDEs: l_x(x,y)+l_y(x,y)=-∫_y^xf(τ,y)l(τ,y)dτ-f(x,y), l(x,0)=-g(x), η_x(x,y)+Dη_y(x,y)=0, η(x,0)=0, η(0,y)=l(1,y), p(x)=η(x,1). Next, we develop an adaptive controller with a delay update law to stabilize (<ref>)–(<ref>) for an arbitrarily long unknown delay.

§ DESIGN OF A DELAY-ADAPTIVE FEEDBACK CONTROL

§.§ Adaptive control design

Considering the plant (<ref>)–(<ref>) with an unknown delay D>0, which is equivalent to the cascade system (<ref>)–(<ref>) with an unknown propagation speed 1/D, we design an adaptive boundary controller that ensures a global stability result.

Assumption 1. The lower and upper bounds D and D̅ of the delay D>0 are known.

Based on the certainty-equivalence principle, we rewrite controller (<ref>), replacing D with the estimated delay D̂(t), as the delay-adaptive controller U(t)=∫_0^1γ(1,y,D̂(t)) u(y,t)dy+D̂(t)∫_0^1q(1-y,D̂(t))v(y,t)dy.

§.§ Target system for the plant with unknown input delay

Rewriting the backstepping transformation (<ref>) as z(x,t)=v(x,t)-∫_0^1γ(x,y,D̂(t)) u(y,t)dy-D̂(t)∫_0^xq(x-y,D̂(t))v(y,t)dy, and its inverse (<ref>) as v(x,t)=z(x,t)+∫_0^1η(x,y,D̂(t)) u(y,t)dy+D̂(t)∫_0^xp(x-y,D̂(t))z(y,t)dy, where the kernels γ(x,y,D̂(t)), q(x-y,D̂(t)), η(x,y,D̂(t)), p(x-y,D̂(t)) satisfy the same PDEs (<ref>)-(<ref>) and (<ref>)-(<ref>) except with D replaced by D̂(t), and using the transformations (<ref>) and (<ref>), we get the following target system: w_t(x,t)= w_x(x,t), w(1,t)=z(0,t), w(x,0)= w_0(x), Dz_t(x,t)=z_x(x,t)-D̃(t)P_1(x,t)-DḊ̂̇(t)P_2(x,t), z(1,t)=0, z(x,0)=z_0(x), where D̃(t)=D-D̂(t) is the estimation error and the functions P_i(x,t), i=1,2, are given by P_1(x,t)= z(0,t)M_1(x,t)+∫_0^1 w(y,t)M_2(x,y,t)dy, P_2(x,t)= ∫_0^1 z(y,t)M_3(x,y,t)dy+∫_0^1w(y,t)M_4(x,y,t)dy, with M_1(x,t)= γ(x,1,D̂(t)), M_2(x,y,t)= γ(x,1,D̂(t))l(1,y)-γ_y(x,y,D̂(t)) +∫_y^1(-γ_y(x,ξ,D̂(t))l(ξ,y)+γ(x,ξ,D̂(t))f(ξ,y)+∫_ξ^1γ(x,τ,D̂(t))f(τ,ξ)l(ξ,y)dτ)dξ, M_3(x,y,t)= q(x-y,D̂(t))+ q_D̂(t)(x-y,D̂(t))+D̂(t)∫_y^xq(x-ξ,D̂(t))p(ξ-y,D̂(t))dξ +D̂(t)^2∫_y^xq_D̂(t)(x-ξ,D̂(t))p(ξ-y,D̂(t))dξ, M_4(x,y,t)= γ_D̂(t)(x,y,D̂(t))+∫_y^1γ_D̂(t)(x,ξ,D̂(t))l(ξ,y)dξ+∫_0^xq(x-ξ,D̂(t))η(ξ,y,D̂(t))dξ +D̂(t)∫_0^xq_D̂(t)(x-ξ,D̂(t))η(ξ,y,D̂(t))dξ.
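Given numerically computed kernels, evaluating the certainty-equivalence controller amounts to two quadratures. Below is a minimal Python sketch; the kernel arrays are assumed to have been precomputed (e.g., by solving the kernel PDEs on a grid), and all names and the toy inputs are illustrative:

```python
import numpy as np

def control_law(u, v, gamma1, q, D, x):
    """Sketch of the backstepping control law
         U(t) = int_0^1 gamma(1,y) u(y,t) dy + D int_0^1 q(1-y) v(y,t) dy
    by trapezoidal quadrature on a uniform grid x.  The kernel samples
    gamma1[j] ~ gamma(1, x[j]) and q[j] ~ q(x[j]) are assumed precomputed."""
    term_u = np.trapz(gamma1 * u, x)
    term_v = D * np.trapz(q[::-1] * v, x)   # q(1-y) on a uniform symmetric grid
    return term_u + term_v

# toy usage with placeholder kernels (illustrative only)
x = np.linspace(0.0, 1.0, 101)
u, v = np.sin(np.pi * x), np.zeros_like(x)
print(control_law(u, v, gamma1=np.ones_like(x), q=np.ones_like(x), D=2.0, x=x))
```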
§.§ The parameter update law

We choose the following update law: Ḋ̂̇(t)=θProj_[D,D̅]{τ(t)},    0<θ<θ^*, where τ(t) is given as τ(t)=-b_1∫_0^1(1+x)z(x,t)P_1(x,t)dx/N(t), with N(t)=1/2∫_0^1 (1+x)w (x,t)^2dx+b_1/2∫_0^1(1+x)z(x,t)^2d x, b_1>2D̅, and θ^*=min{D,b_1-2D̅}min{1,b_1}/(2b_1^2L̅^2), where L̅ is defined in (<ref>). The standard projection operator is defined as Proj_[D,D̅]{τ(t)}=0 if D̂(t)=D and τ(t)<0; Proj_[D,D̅]{τ(t)}=0 if D̂(t)=D̅ and τ(t)>0; and Proj_[D,D̅]{τ(t)}=τ(t) otherwise. The projection keeps the parameter estimate D̂(t) within the known bounds [D, D̅]; it cannot be viewed as a robustness tool <cit.>, as it prevents adaptation transients only by limiting the size of the adaptation gain. The projection set can be taken conservatively and can be large; however, in order to ensure stability, its size needs to be inversely proportional to the adaptation gain.

§ THE GLOBAL STABILITY OF THE CLOSED-LOOP SYSTEM UNDER THE DELAY-ADAPTIVE CONTROL

The following theorem states the global stability result for the closed-loop system (<ref>)–(<ref>) with the update law (<ref>) and the adaptive controller (<ref>).

Theorem 1. Consider the closed-loop system consisting of the plant (<ref>)–(<ref>), the control law (<ref>), and the update law (<ref>)–(<ref>) under Assumption <ref>. There exist positive constants ρ and R such that Ψ(t)≤ R(e^ρΨ(0)-1), ∀ t≥0, where Ψ(t)= ∫_0^1 u(x,t)^2dx+∫_0^1 v(x,t)^2dx+D̃(t)^2. Furthermore, lim_t→∞max_x∈[0,1]|u(x,t)|=0, lim_t→∞max_x∈[0,1]|v(x,t)|=0.

The global stability of the (u, v)-system is established by the following steps:

* We establish the norm equivalence between (u, v) and (w, z).
* We introduce a Lyapunov function to prove the global stability of the (w, z)-system (<ref>)–(<ref>), and then obtain the stability of the (u, v)-system by using the norm equivalence.
* We arrive at the regulation of the states u(x,t) and v(x,t).

§.§ Global stability of the closed-loop system

First, we discuss the equivalence of the stability properties of the plant (<ref>)–(<ref>) and the target system (<ref>)–(<ref>). Denote by k̅, γ̅, q̅, l̅, η̅, and p̅ the bounds of the kernel functions k(x,y), γ(x,y), q(x-y), l(x,y), η(x,y), and p(x-y) in their respective domains. From (<ref>), (<ref>), (<ref>), and (<ref>), it is easy to find, by using the Cauchy–Schwarz inequality, that ‖ u(t)‖^2+‖ v(t)‖^2≤ r_1 ‖ w (t)‖^2+r_2‖ z(t)‖^2, ‖ w (t)‖^2+‖ z(t)‖^2≤ s_1‖ u(t)‖^2+s_2‖ v(t)‖^2, where r_i and s_i, i=1,2, are positive constants given by r_1=2+2l̅^2+3η̅^2, r_2=3+3D^2p̅^2, s_1=2+2k̅^2+3γ̅^2, s_2=3+3D^2q̅^2. Next, we prove the global stability of the closed-loop system consisting of the (u, v)-system under the control law (<ref>) and the update law (<ref>)-(<ref>). Introduce the Lyapunov–Krasovskii-type function V_1(t)= Dlog (1+N(t))+D̃(t)^2/2θ, where N(t)=1/2∫_0^1 (1+x)w (x,t)^2dx+b_1/2∫_0^1(1+x)z(x,t)^2d x, based on the target system (<ref>)–(<ref>), and b_1 is a positive constant. Taking the time derivative of (<ref>) along (<ref>)–(<ref>), we get V̇_1(t)= D/N(t)(∫_0^1 (1+x)w (x,t)w_t (x,t)dx+b_1∫_0^1(1+x)z(x,t)z_t(x,t)dx)-D̃(t)Ḋ̂̇(t)/θ = 1/N(t)(D∫_0^1 (1+x)w (x,t)w_x(x,t)dx+b_1∫_0^1(1+x)z(x,t)(z_x(x,t)-D̃(t)P_1(x,t) -DḊ̂̇(t)P_2(x,t))dx)-D̃(t)Ḋ̂̇(t)/θ = 1/N(t)(Dw(1,t)^2-D/2w(0,t)^2-D/2‖ w‖^2-b_1/2z(0,t)^2-b_1/2‖ z‖^2 -b_1D̃(t)∫_0^1(1+x)z(x,t)P_1(x,t)dx-b_1DḊ̂̇(t)∫_0^1(1+x)z(x,t)P_2(x,t)dx)-D̃(t)Ḋ̂̇(t)/θ, where we have used integration by parts and the Cauchy–Schwarz and Young inequalities. Using (<ref>)–(<ref>) and the standard properties of the projection operator leads to V̇_1(t)≤ 1/N(t)(-D/2‖ w‖^2-b_1/2‖ z‖^2-(b_1/2-D)z(0,t)^2 -b_1DḊ̂̇(t)∫_0^1(1+x)z(x,t)P_2(x,t)dx), where b_1>2D̅.
After a lengthy but straightforward calculation, employing the Cauchy–Schwarz and Young inequalities along with (<ref>) and (<ref>), one obtains the estimates ∫_0^1(1+x)z(x,t)P_1(x,t)dx≤ L̅(‖ w‖^2+‖ z‖^2+z(0,t)^2), ∫_0^1(1+x)z(x,t)P_2(x,t)dx≤ L̅(‖ w‖^2+‖ z‖^2), where the parameter L̅ is defined as L̅=max{M̅_1+M̅_2,2M̅_3+M̅_4}, with M̅_1=max_0≤ x≤1, t≥0{|M_1(x,D̂(t))|} and M̅_i=max_0≤ x≤ y≤1, t≥0{|M_i(x,y,D̂(t))|} for i=2,3,4. According to the equivalence of the stability properties of the plant (<ref>)–(<ref>) and the target system (<ref>)–(<ref>), we get V̇_1≤ -(min{D/2,b_1/2-D̅}-θ b_1^2L̅^2/min{1,b_1})(‖ w‖^2+‖ z‖^2+z(0,t)^2)/N(t). Choosing θ∈(0,θ^⋆), where θ^⋆ is defined by (<ref>), we obtain V̇_1(t)≤0, which gives V_1(t)≤ V_1(0) for all t≥0. Hence, we get the following estimates from (<ref>): ‖ w‖^2≤ 2(e^V_1(t)/D-1), ‖ z‖^2≤2/b_1(e^V_1(t)/D-1), D̃(t)^2≤2θ V_1(t). Furthermore, from (<ref>), (<ref>) and (<ref>)-(<ref>), it follows that ‖ u‖^2+‖ v‖^2≤(2r_1+2r_2/b_1)(e^V_1(t)/D-1), and combining (<ref>) and (<ref>), we get Ψ(t)≤(2r_1+2r_2/b_1+2θ/D)(e^V_1(t)/D-1). So, we have bounded Ψ(t) in terms of V_1(t) and thus, using (<ref>), in terms of V_1(0). It remains to bound V_1(0) in terms of Ψ(0). From (<ref>), it follows that V_1(t)=D log(1+1/2∫_0^1(1+x)w(x,t)^2dx+b_1/2∫_0^1(1+x)z(x,t)^2dx)+D̃(t)^2/2θ ≤ D̅‖ w‖^2+b_1D̅‖ z‖^2+D̃(t)^2/2θ ≤ D̅max{1,b_1}(s_1+s_2)(‖ u‖^2+‖ v‖^2)+D̃(t)^2/2θ ≤ (D̅max{1,b_1}(s_1+s_2)+1/2θ)Ψ(t), leading to the relation V_1(0)≤ (D̅max{1,b_1}(s_1+s_2)+1/2θ)Ψ(0). Then, combining (<ref>), (<ref>) and (<ref>), we have Ψ(t)≤ R(e^ρΨ(0)-1), where R=2r_1+2r_2/b_1+2θ/D, ρ=D̅max{1,b_1}(s_1+s_2)+1/2θ, which completes the proof of the stability estimate (<ref>).

§.§ Pointwise boundedness and regulation of the distributed states

Now, we ensure the regulation of the distributed states. From (<ref>) and (<ref>), we get the boundedness of ‖ w‖, ‖ z‖ and D̂(t). Knowing that ∫_0^t‖ w(τ)‖^2dτ≤sup_0≤τ≤ tN(τ)∫_0^t‖ w(τ)‖^2/N(τ)dτ, and using (<ref>), the following inequality holds: N(τ)≤ N(0)e^D̃(0)^2/2θ. Integrating (<ref>) over [0, t], we have ∫_0^t‖ w(τ)‖^2/N(τ)dτ≤ (D̅log(1+N(0))+D̃(0)^2/(2θ)) / (min{D/2,b_1/2-D̅}-θ b_1^2L̅^2/min{1,b_1}). Substituting (<ref>) and (<ref>) into (<ref>), we conclude that ‖ w‖ is square integrable in time. One can similarly establish that ‖ z‖ and z(0,t) are square integrable in time. Thus, ‖ P_1‖ and ‖ P_2‖ are bounded and integrable functions of time. To prove the boundedness of ‖ w_x‖, we define the Lyapunov function V_2(t)=1/2∫_0^1 (1+x)w _x(x,t)^2dx+b_2D/2∫_0^1 (1+x)z_x(x,t)^2dx, where b_2 is a positive constant. Using integration by parts, the time derivative of (<ref>) is V̇_2(t)= ∫_0^1 (1+x)w _x(x,t) w _xt(x,t)dx+b_2D∫_0^1 (1+x)z_x(x,t)z_xt(x,t)dx = ∫_0^1 (1+x)w _x(x,t) w _xx(x,t)dx+b_2∫_0^1 (1+x)z_x(x,t)z_xx(x,t)dx -b_2D̃(t)∫_0^1(1+x)z_x(x,t)P_1x(x,t)dx-b_2DḊ̂̇(t)∫_0^1(1+x)z_x(x,t)P_2x(x,t)dx = w_x(1,t)^2-1/2w_x(0,t)^2-1/2‖ w_x‖^2+b_2z_x(1,t)^2-b_2/2z_x(0,t)^2-b_2/2‖ z_x‖^2 -b_2D̃(t)∫_0^1(1+x)z_x(x,t)P_1x(x,t)dx-b_2DḊ̂̇(t)∫_0^1(1+x)z_x(x,t)P_2x(x,t)dx. Based on (<ref>) and (<ref>), one gets w_x(1,t)= w_t(1,t)=z_t(0,t) = z_x(0,t)-D̃(t)P_1(0,t)-DḊ̂̇(t)P_2(0,t), and we arrive at the following inequality: V̇_2(t)≤ -1/2‖ w_x‖^2-b_2/2‖ z_x‖^2-(b_2/2-3)z_x(0,t)^2+3D̃(t)^2P_1(0,t)^2+3D^2Ḋ̂̇(t)^2P_2(0,t)^2 +2b_2D̃(t)^2 P_1(1,t)^2+2b_2D^2Ḋ̂̇(t)^2 P_2(1,t)^2+2b_2|D̃(t)|‖ z_x‖‖ P_1x(x,t)‖ +b_2D|Ḋ̂̇(t)|‖ z_x‖‖ P_2x(x,t)‖.
Choosing b_2>6, we get V̇_2(t)≤ -1/2‖ w_x‖^2-b_2/2‖ z_x‖^2+3D̃(t)^2P_1(0,t)^2+3D^2Ḋ̂̇(t)^2P_2(0,t)^2+2b_2D̃(t)^2 P_1(1,t)^2 +2b_2D̅^2Ḋ̂̇(t)^2 P_2(1,t)^2+2b_2D|Ḋ̂̇(t)|‖ z_x‖‖ P_2x‖+2b_2|D̃(t)|‖ z_x‖‖ P_1x‖ ≤ -c_1V_2(t)+f_1(t)V_2(t)+f_2(t), where we use Young's and Agmon's inequalities. Here, c_1=1/2min{1,1/D}, and the functions f_1(t) and f_2(t) are given by f_1(t)= b_2D^2(|Ḋ̂̇(t)|^2+4), f_2(t)= b_2‖ P_1x‖^2+b_2‖ P_2x‖^2+12D̅^2P_1(0,t)^2+3D̅^2Ḋ̂̇(t)^2P_2(0,t)^2+8b_2D̅^2 P_1(1,t)^2 +2b_2D^2Ḋ̂̇(t)^2 P_2(1,t)^2. Knowing that P_1(0,t)^2≤ 2M̅_1^2z(0,t)^2+2M̅_2^2‖ w ‖^2 ≤ 2M̅_1^2(‖ z‖^2+‖ z_x‖^2)+2M̅_2^2‖ w ‖^2, P_2(0,t)^2≤ 2M̅_3^2‖ z‖^2+ 2M̅_4^2‖ w ‖^2, together with (<ref>) and (<ref>), we get that |Ḋ̂̇(t)|, P_1(0,t)^2, P_2(0,t)^2, P_1(1,t)^2 and P_2(1,t)^2 are integrable. Then, f_1(t) and f_2(t) are also integrable functions of time. Using Lemma D.3 in <cit.>, we get that ‖ w _x‖ and ‖ z _x‖ are bounded, and combining this with Agmon's inequality, one can deduce the boundedness of w(x,t) and z(x,t) for all x∈[0,1]. Next, we establish the boundedness of d/dt(‖ w‖^2), d/dt(‖ z‖^2) and d/dt(‖ z_x‖^2) using the Lyapunov function V_3(t)=1/2∫_0^1 (1+x)w_x (x,t)^2dx+b_3D/2∫_0^1 (1+x)z(x,t)^2dx+b_3D/2∫_0^1 (1+x)z_x(x,t)^2dx, where b_3 is a positive constant. Taking the time derivative of (<ref>), we obtain V̇_3(t)= ∫_0^1 (1+x)w_x(x,t) w _xt(x,t)dx+b_3D∫_0^1 (1+x)z(x,t)z_t(x,t)dx +b_3D∫_0^1 (1+x)z_x(x,t)z_xt(x,t)dx = w_x(1,t)^2-1/2w_x(0,t)^2-1/2‖ w_x‖^2-b_3/2z(0,t)^2-b_3/2‖ z‖^2+b_3z_x(1,t)^2-b_3/2z_x(0,t)^2 -b_3/2‖ z_x‖^2-b_3D̃(t)(∫_0^1(1+x)z(x,t)P_1(x,t)dx+∫_0^1(1+x)z_x(x,t)P_1x(x,t)dx) -b_3DḊ̂̇(t)(∫_0^1(1+x)z(x,t)P_2(x,t)dx+∫_0^1(1+x)z_x(x,t)P_2x(x,t)dx). Using integration by parts and Young's inequality, the following holds: V̇_3(t)≤ -1/2‖ w_x‖^2-b_3/2‖ z‖^2-(b_3/2-1)z(0,t)^2-b_3/2‖ z_x‖^2-(b_3/2-3)z_x(0,t)^2 +3D̃(t)^2P_1(0,t)^2+3D^2Ḋ̂̇(t)^2P_2(0,t)^2+2b_3D̃(t)^2 P_1(1,t)^2+2b_3D^2Ḋ̂̇(t)^2 P_2(1,t)^2 +2b_3|D̃(t)|‖ z‖‖ P_1‖+2b_3D|Ḋ̂̇(t)|‖ z‖‖ P_2‖+2b_3|D̃(t)|‖ z_x‖‖ P_1x‖ +2b_3D|Ḋ̂̇(t)|‖ z_x‖‖ P_2x‖. Choosing b_3>6, we have V̇_3(t)≤ -1/2‖ w_x‖^2-b_3/2‖ z‖^2-b_3/2‖ z_x‖^2+3D̃(t)^2P_1(0,t)^2+3D^2Ḋ̂̇(t)^2P_2(0,t)^2+2b_3D̃(t)^2 P_1(1,t)^2 +2b_3D^2Ḋ̂̇(t)^2 P_2(1,t)^2+2b_3|D̃(t)|‖ z‖‖ P_1‖+2b_3D|Ḋ̂̇(t)|‖ z‖‖ P_2‖+2b_3|D̃(t)|‖ z_x‖‖ P_1x‖+2b_3D|Ḋ̂̇(t)|‖ z_x‖‖ P_2x‖ ≤ -c_1V_3(t)+f_3(t)V_3(t)+f_4(t)<∞, where we again use Young's and Agmon's inequalities, and f_3(t)= 2b_3D^2(|Ḋ̂̇(t)|^2+4), f_4(t)= 3D̃(t)^2P_1(0,t)^2+3D̅^2Ḋ̂̇(t)^2P_2(0,t)^2+b_3(‖ P_1‖^2+‖ P_2‖^2+‖ P_1x‖^2+‖ P_2x‖^2 +8D̅^2 P_1(1,t)^2+2D̅^2Ḋ̂̇(t)^2 P_2(1,t)^2) are bounded functions. Thus, from (<ref>), one deduces the boundedness of d/dt(‖ w‖^2), d/dt(‖ z‖^2) and d/dt(‖ z_x‖^2). Moreover, by Lemma D.2 in <cit.>, we get V_3(t)→0, and thus ‖ w_x ‖, ‖ z‖, ‖ z_x‖→ 0 as t→∞. Consequently, from (<ref>), we have ‖ u_x‖, ‖ v‖, ‖ v_x‖→ 0 as t→∞. From (<ref>), we have ‖ u_x‖^2≤ 2‖ w_x‖^2+2‖ w‖^2l̅_x^2, where l̅_x bounds l_x(x,y). Since ‖ w‖ and ‖ w_x‖ are bounded, ‖ u_x‖ is also bounded. By Agmon's inequality, u(x,t)^2≤2‖ u‖‖ u_x‖, which enables one to state the regulation of u(x,t) to zero uniformly in x. Similarly, one can prove the regulation of v(x,t). Since ‖ v‖ and ‖ v_x‖→ 0 as t→∞, by Agmon's inequality, v(x,t)^2≤2‖ v‖‖ v_x‖, which yields the regulation of v(x,t) to zero uniformly in x and completes the proof of Theorem <ref>.

§.§ Well-posedness of the closed-loop system

Following the approach in <cit.>, we prove the well-posedness of the closed-loop system of Theorem 1.
Consider the closed-loop target system (w,z,D̃(t)): w_t(x,t)= w_x(x,t), w(1,t)=z(0,t), z_t(x,t)=1/Dz_x(x,t)-D̃(t)/D P_1(x,t)-θProj_[D,D̅]{τ(t)} P_2(x,t), z(1,t)=0, Ḋ̃̇(t)=-θProj_[D,D̅]{τ(t)}. We set Z=( w,z,D̃(t))^T and introduce the operator A=[ -∂/∂ x 0 0; 0 -(1/D)∂/∂ x 0; 0 0 0 ], with F(Z)=[ 0; -D̃(t)/D P_1(x,t)-θProj_[D,D̅]{τ(t)} P_2(x,t); -θProj_[D,D̅]{τ(t)} ]. Then (<ref>)-(<ref>) can be written in the abstract form Z_t=-AZ+F(Z), Z(0)=Z_0, on the space H=L^2(0,1)× L^2(0,1)×ℝ, with ℬ(A)={(f,g,l): f∈ H^1(0,1), f(1)=g(0); g∈ H^1(0,1), g(1)=0; l∈ℝ} and the norm ‖ Z‖_H^2=‖ w‖^2+‖ z‖^2+D̃^2. Now, we establish the well-posedness of (<ref>)–(<ref>) with the following theorem (see also Theorem 8.2 in <cit.> and Theorem 2.5.6 in <cit.>, where a similar method is employed to establish well-posedness). Consider the system (<ref>)–(<ref>), where A is a maximal accretive operator from a dense subset ℬ(A) of a Banach space H into H. If F is a nonlinear operator from ℬ(A) to ℬ(A) and satisfies a local Lipschitz condition, then for any Z_0 ∈ℬ(A), the problem (<ref>)–(<ref>) admits a unique classical solution Z such that Z∈ C^1([0,T_max),H)∩ C([0,T_max),ℬ(A)), where (i) either T_max=+∞, i.e., there is a unique global classical solution, or (ii) T_max<+∞ and lim_t→ T_max-0‖ Z(t)‖_H=+∞. Combining this with the proof for the hyperbolic case (see, e.g., Example 2.3.1 in <cit.>), we obtain that A is a maximal accretive operator. Then, it is straightforward to establish that for any Z_1, Z_2 ∈ H, ‖ F(Z_1)-F(Z_2)‖_H≤ C‖ Z_1-Z_2‖_Hmax{‖ Z_1‖_H,‖ Z_2‖_H}, where C is a constant independent of Z_1 and Z_2. Hence, F is locally Lipschitz on H, and the system (<ref>)–(<ref>) has a unique classical solution. Next, we establish that the classical solution exists globally. In order to prove that T_max = +∞, i.e., that there is no blowup, we need a priori estimates of the higher-order norms of w and z. Based on the proof of the boundedness of w and z in the L^2 norm, one can obtain that w and z are bounded in H^1 by using the following new Lyapunov function: V_4(t)=1/2∫_0^1 (1+x)w _xx(x,t)^2dx+b_4D/2∫_0^1 (1+x)z_xx(x,t)^2dx. Using integration by parts, the time derivative of (<ref>) is V̇_4(t)= ∫_0^1 (1+x)w_xx(x,t) w _xxt(x,t)dx+b_4D∫_0^1 (1+x)z_xx(x,t)z_xxt(x,t)dx = ∫_0^1 (1+x)w _xx(x,t) w _xxx(x,t)dx+b_4∫_0^1 (1+x)z_xx(x,t)z_xxx(x,t)dx -b_4D̃(t)∫_0^1(1+x)z_xx(x,t)P_1xx(x,t)dx-b_4DḊ̂̇(t)∫_0^1(1+x)z_xx(x,t)P_2xx(x,t)dx = w_xx(1,t)^2-1/2w_xx(0,t)^2-1/2‖ w_xx(x,t)‖^2+b_4z_xx(1,t)^2-b_4/2z_xx(0,t)^2-b_4/2‖ z_xx(x,t)‖^2 -b_4D̃(t)∫_0^1(1+x)z_xx(x,t)P_1xx(x,t)dx-b_4DḊ̂̇(t)∫_0^1(1+x)z_xx(x,t)P_2xx(x,t)dx. Based on (<ref>) and (<ref>), one gets w_xx(1,t)= w_tx(1,t)=w_tt(1,t)=z_tt(0,t) = 1/D^2z_xx(0,t)-D̃(t)/D^2P_1x(0,t)-Ḋ̂̇(t)/DP_2x(0,t)+1/DḊ̂̇(t)P_1(0,t)-D̈̂̈(t)P_2(0,t) -1/DD̃(t)P_1t(0,t)-Ḋ̂̇(t)P_2t(0,t), z_xx(1,t)= D̃(t)P_1x(1,t)+DḊ̂̇(t)P_2x(1,t)-DḊ̂̇(t)P_1(1,t)+D^2D̈̂̈(t)P_2(1,t) +DD̃(t)P_1t(1,t)+D^2Ḋ̂̇(t)P_2t(1,t). Substituting (<ref>) and (<ref>) into (<ref>), we arrive at the following inequality: V̇_4(t)≤ -1/2w_xx(0,t)^2-1/2‖ w_xx‖^2-b_4/2‖ z_xx‖^2-(b_4/2-7/D^4)z_xx(0,t)^2+2b_4|D̃(t)|‖ z_xx‖‖ P_1xx‖ +2b_4D|Ḋ̂̇(t)|‖ z_xx‖‖ P_2xx‖ +7D̃(t)^2/D^4P_1x(0,t)^2+7Ḋ̂̇(t)^2/D^2P_2x(0,t)^2+7Ḋ̂̇(t)^2/D^2P_1(0,t)^2 +7D̈̂̈(t)^2P_2(0,t)^2+7/D^2D̃(t)^2P_1t(0,t)^2+7Ḋ̂̇(t)^2P_2t(0,t)^2+6b_4D̃(t)^2P_1x(1,t)^2 +6b_4D^2Ḋ̂̇(t)^2P_2x(1,t)^2+6b_4D^2Ḋ̂̇(t)^2P_1(1,t)^2+6b_4D^4D̈̂̈(t)^2P_2(1,t)^2+6b_4D^2D̃(t)^2P_1t(1,t)^2 +6b_4D^4Ḋ̂̇(t)^2P_2t(1,t)^2.
Choosing b_4>14/D^4, we get V̇_4(t)≤ -1/2‖ w_xx‖^2-b_4/2‖ z_xx‖^2+b_4D̃(t)^2‖ z_xx‖^2+b_4 ‖ P_1xx‖^2 +b_4D^2Ḋ̂̇(t)^2‖ z_xx‖^2 +b_4‖ P_2xx‖^2 +7D̃(t)^2/D^4P_1x(0,t)^2+7Ḋ̂̇(t)^2/D^2P_2x(0,t)^2+7Ḋ̂̇(t)^2/D^2P_1(0,t)^2+7D̈̂̈(t)^2P_2(0,t)^2 +7/D^2D̃(t)^2P_1t(0,t)^2+7Ḋ̂̇(t)^2P_2t(0,t)^2+6b_4D̃(t)^2P_1x(1,t)^2+6b_4D^2Ḋ̂̇(t)^2P_2x(1,t)^2 +6b_4D^2Ḋ̂̇(t)^2P_1(1,t)^2+6b_4D^4D̈̂̈(t)^2P_2(1,t)^2+6b_4D^2D̃(t)^2P_1t(1,t)^2+6b_4D^4Ḋ̂̇(t)^2P_2t(1,t)^2 ≤ -c_1V_4(t)+f_5(t)V_4(t)+f_6(t), where we use Young's and Agmon's inequalities. Here, c_1=1/2min{1,1/D}, and the functions f_5(t) and f_6(t) are given by f_5(t)= b_4D^2(Ḋ̂̇(t)^2+4), f_6(t)= b_4 ‖ P_1xx‖^2+b_4‖ P_2xx‖^2+28D̅^2/D^4P_1x(0,t)^2+7Ḋ̂̇(t)^2/D^2P_2x(0,t)^2+7Ḋ̂̇(t)^2/D^2P_1(0,t)^2 +7D̈̂̈(t)^2P_2(0,t)^2+28/D^2D̅^2P_1t(0,t)^2+7Ḋ̂̇(t)^2P_2t(0,t)^2+24b_4D̅^2P_1x(1,t)^2 +6b_4D̅^2Ḋ̂̇(t)^2P_2x(1,t)^2+6b_4D̅^2Ḋ̂̇(t)^2P_1(1,t)^2+6b_4D̅^4D̈̂̈(t)^2P_2(1,t)^2+24b_4D̅^4P_1t(1,t)^2 +6b_4D̅^4Ḋ̂̇(t)^2P_2t(1,t)^2. Based on all the above results, all terms in (<ref>) and (<ref>) are integrable in time. Using Lemma D.3 in <cit.>, we get that ‖ w_xx‖ and ‖ z_xx‖ are bounded. Then, from (<ref>) and (<ref>), w_tx(x,t)= w_xx(x,t), Dz_tx(x,t)=z_xx(x,t)-D̃(t)P_1x(x,t)-DḊ̂̇(t)P_2x(x,t), we get that ‖ w_tx‖ and ‖ z_tx‖ are bounded. Combining this with ‖ w_x‖, ‖ z_x‖→ 0 as t→∞ and the regulation of w(x,t) and z(x,t), one gets ‖ w_t ‖, ‖ z_t‖→ 0 as t→∞; then, by Agmon's inequality, the regulation of w_t(x,t) and z_t(x,t) is proven for all x∈[0,1]. Therefore, we have proved that ‖ Z‖_H is bounded and a global classical solution exists. Finally, the well-posedness of the closed-loop system consisting of the plant (<ref>)–(<ref>), the control law (<ref>), and the update law (<ref>)–(<ref>) under Assumption 1 follows from the invertibility of the backstepping transformations (<ref>) and (<ref>).

§ SIMULATION

To illustrate the feasibility of the proposed adaptive controller design, we simulate the closed-loop system consisting of (<ref>)–(<ref>), the control law (<ref>), and the update law defined through (<ref>)–(<ref>). The actual delay is set to D = 2, with known upper and lower bounds D̅ = 4 and D=0.1, respectively. The adaptation gain is set to θ = 0.021, and the plant coefficients are chosen as g(x)=2(1-x) and f(x,y)=cos(2π x)+4sin(2π y). The simulations are performed with the initial conditions u_0(x)=4sin(π x) and v_0(x)=0, with D̂_0=1 and D̂_0=3, respectively. Figure <ref> shows the convergence of the plant state u(x,t) with and without adaptation. In the non-adaptive case, the estimate is fixed at the mismatched input delay D̂(t) = 3 (the true delay being D = 2). Figure <ref> (a) shows the dynamics of the L^2-norm ‖ u(·,t)‖_L^2 of the plant state with and without adaptation. The control effort is displayed in Figure <ref> (b) and the update law in Figure <ref> (c). Finally, Figure <ref> (d) shows a good estimate of the delay, with D̂(t) converging to the true value D=2.

§ CONCLUSION

We have studied a class of first-order hyperbolic PIDE systems with an input subject to an unknown time delay. By utilizing an infinite-dimensional representation of the actuator delay, the system was transformed into a cascade structure consisting of a transport PDE and a PIDE. We established global stability results by designing a parameter update law using the well-known infinite-dimensional backstepping technique and a Lyapunov argument.
Furthermore, we analyzed the well-posedness of the system, taking into account the added difficulty caused by the presence of nonlinear terms. Through numerical simulations, we have demonstrated the effectiveness of the proposed method. This research contributes to the understanding and control of systems with unknown time delays and provides valuable insights into the stability analysis and parameter-update design for such systems. Future work may involve extending these findings to more complex systems or considering additional constraints and uncertainties.

§ ACKNOWLEDGMENTS

This work is partially supported by the National Natural Science Foundation of China (62173084, 61773112) and the Natural Science Foundation of Shanghai (23ZR1401800).
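As a companion to the simulation study above, the projected update law admits a direct numerical implementation. The Python sketch below shows the projection operator and an explicit Euler integration of D̂(t); the gain and bounds follow the simulation section, but tau_of_state is a toy surrogate — in the full scheme τ(t) must be evaluated from the transformed states (w,z) through the backstepping kernels:

```python
import numpy as np

def proj(tau, Dhat, D_lo, D_hi):
    # standard projection: freeze adaptation when the estimate sits on a
    # bound and tau points further outward
    if (Dhat <= D_lo and tau < 0.0) or (Dhat >= D_hi and tau > 0.0):
        return 0.0
    return tau

theta, D_lo, D_hi = 0.021, 0.1, 4.0   # gain and bounds from the simulations
D_true, Dhat, dt = 2.0, 1.0, 1e-3     # true delay and initial estimate

def tau_of_state(Dhat):
    # toy surrogate sharing the fixed point Dhat = D_true; the actual tau(t)
    # is computed from (w, z) via the backstepping kernels
    return D_true - Dhat

for _ in range(300000):
    Dhat += dt * theta * proj(tau_of_state(Dhat), Dhat, D_lo, D_hi)

print(Dhat)                            # approaches the true delay D = 2
```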
http://arxiv.org/abs/2307.04342v1
20230710045252
Realization of an extremely anisotropic Heisenberg magnet in Rydberg atom arrays
[ "Kangheun Kim", "Fan Yang", "Klaus Mølmer", "Jaewook Ahn" ]
quant-ph
[ "quant-ph", "cond-mat.quant-gas", "physics.atom-ph" ]
These authors contributed equally to this work; Department of Physics, KAIST, Daejeon 34141, Republic of Korea
These authors contributed equally to this work; Center for Complex Quantum Systems, Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C, Denmark
Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen, Denmark
[email protected]; Department of Physics, KAIST, Daejeon 34141, Republic of Korea

Strong mutual interactions correlate elementary excitations of quantum matter and play a key role in a range of emergent phenomena <cit.>, from binding and condensation <cit.> to quantum thermalization and many-body localization <cit.>. Here, we employ a Rydberg quantum simulator to experimentally demonstrate strongly correlated spin transport in anisotropic Heisenberg magnets, where the magnon-magnon interaction can be tuned to be two orders of magnitude larger than the magnon hopping strength. In our approach, the motion of magnons is controlled by an induced spin-exchange interaction through Rydberg dressing <cit.>, which enables coherent transport of a single Rydberg excitation across a chain of ground-state atoms. As the most prominent signature of a giant anisotropy, we show that nearby Rydberg excitations form distinct types of magnon bound states, where a tightly bound pair exhibits frozen dynamics in a fragmented Hilbert space, while a loosely bound pair propagates and establishes correlations beyond a single lattice site. Our scheme complements studies using resonant dipole-dipole interactions between Rydberg states, and opens the door to exploring quantum thermodynamics with ultrastrong interactions and kinetic constraints <cit.>.

Realization of an extremely anisotropic Heisenberg magnet in Rydberg atom arrays
Jaewook Ahn
August 12, 2023
================================================================================

Quantum simulation of spin models has become established as a powerful tool for unraveling exotic many-body phases and dynamics <cit.>. As a pivotal process in quantum magnetism, quasiparticle spin excitations (magnons) can propagate through the system by coherent spin exchanges that conserve the total magnetization <cit.>. The inclusion of strong magnon-magnon interactions complicates the underlying spin transport, as the motion of different magnons can no longer be separated <cit.>. Similar correlated transport dynamics has been observed in various quantum systems, including ultracold atoms engineered by the superexchange mechanism <cit.>, trapped atomic ions with phonon-mediated spin-spin couplings <cit.>, and Rydberg atom arrays subjected to resonant dipole-dipole interactions <cit.>. These works aim to construct a spin-1/2 Heisenberg model, where the correlations can be tuned by the anisotropy of the XXZ-type Hamiltonian, defined as the strength of the magnon-magnon interaction relative to the spin-exchange rate. One of the biggest challenges in previous experiments has been to achieve a very large anisotropy, for which the strongly correlated dynamics is constrained to flip-flops that conserve not only the total magnetization but also the number of domain walls. This kinetic constraint is key to exotic non-ergodic dynamics, such as Hilbert space fragmentation <cit.> and quantum many-body scars <cit.>.
In this work, we demonstrate an approach that can access such an extremely anisotropic regime on a neutral-atom quantum simulator, where ground-state atoms are off-resonantly dressed to a Rydberg state to induce an effective excitation exchange <cit.>. As evidence of the large anisotropy, we show that the propagation of a single Rydberg excitation significantly slows down in the presence of a nearest-neighbor Rydberg excitation, due to the formation of a tightly bound state. While similar magnon bound states have been identified in systems with short-range interactions <cit.> or moderate anisotropies <cit.>, the large long-range anisotropy in our work can further support a new type of bound state with a bond length beyond the nearest neighbor.

Effective spin exchange in a Rydberg Ising model

Our experiments are carried out in a chain of ^87 Rb atoms initially trapped in an optical tweezer array [see Fig. <ref>(a)]. We use a two-photon excitation scheme to couple the ground state |↓⟩=|5S_1/2,F=2,m_F = 2⟩ to the Rydberg state |↑⟩=|71S_1/2,m_J=1/2⟩, which maps the system onto a spin-1/2 chain described by the tilted Ising Hamiltonian (taking ħ=1, where ħ is the reduced Planck constant) Ĥ_ Ryd = Ω/2∑_i σ̂_i^x - Δ∑_i n̂_i + 1/2∑_i≠ jV_ijn̂_i n̂_j. Here, σ̂_i^α are Pauli matrices, n̂_i = |r_i⟩⟨ r_i|=(1+σ̂_i^z)/2 denotes the Rydberg-state projector, and Ω and Δ are the Rabi frequency and the detuning of the two-photon transition, respectively. The interaction strength V_ij between Rydberg atoms at sites i and j takes the form V_ij=C_6/r_ij^6, where r_ij is the distance between the atoms and C_6>0 is the van der Waals (vdW) coefficient. To understand the dynamics of this Rydberg Ising model, we decompose the original Hamiltonian into Ĥ_ Ryd = Ĥ_0 + Ω̂_D, where Ĥ_0 is the diagonal part, and Ω̂_D=(Ω/2)∑_iσ̂_i^x is the off-diagonal driving term that can create or annihilate a single Rydberg excitation. If we label the eigenstates of Ĥ_0 according to the total Rydberg excitation number 𝒩̂_R=∑_i n̂_i, then Ω̂_D only couples states where 𝒩̂_R changes by one. As a result, the coupling generally admixes different 𝒩̂_R subspaces. However, if the energy difference between adjacent blocks of Ĥ_0 is much larger than the coupling strength Ω, these subspaces become dynamically decoupled, and only states of the same 𝒩̂_R are coupled with each other via a perturbation process. This perturbation effect occurs predominantly at second order and can be described by an effective Hamiltonian Ĥ_eff (see Methods), which has a U(1) symmetry corresponding to the conserved Rydberg excitation number 𝒩̂_R. Figure <ref>(b) visualizes the perturbation process for two atoms, where the states |↑↓⟩ and |↓↑⟩ are coupled by a spin-exchange interaction J(σ̂_1^+σ̂_2^-+σ̂_1^-σ̂_2^+) between the ground state and the Rydberg state, with σ̂^±_n=(σ̂^x_n ± iσ̂^y_n)/2. Crucially, the nonvanishing interaction strength J = Ω^2 V_12/[4Δ(Δ-V_12)] is enabled by the unequal energy differences between adjacent 𝒩̂_R sectors. These nonuniform level spacings arise from the vdW interaction and can lead to more complicated density-dependent spin exchanges. For example, in a three-atom chain with the central site excited to the Rydberg state [see Fig. <ref>(c)], the spin exchange between the first and the third atom is described by a three-body interaction term Q(σ̂_1^+σ̂_3^-n̂_2+σ̂_1^-σ̂_3^+n̂_2), where Q = Ω^2 V_13 /[4(Δ-V_12)(Δ - V_12 - V_13)] is the density-dependent coupling strength.
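The perturbative exchange rate can be checked against exact diagonalization of the two-atom Hamiltonian. Below is a minimal Python sketch of the model (not an analysis of the measured data); frequencies are in units of 2π×MHz, and the parameter values are illustrative choices close to the two-atom experiment, with the C_6 coefficient quoted in Methods:

```python
import numpy as np
from scipy.linalg import expm

Omega, Delta, r = 1.52, 5.0, 4.95      # Rabi frequency, detuning, distance (um)
C6 = 1023e3                            # 2*pi x GHz um^6 -> 2*pi x MHz um^6
V = C6 / r**6                          # van der Waals shift

sx = np.array([[0., 1.], [1., 0.]])
nr = np.array([[0., 0.], [0., 1.]])    # Rydberg projector |r><r|
I2 = np.eye(2)
H = 0.5 * Omega * (np.kron(sx, I2) + np.kron(I2, sx)) \
    - Delta * (np.kron(nr, I2) + np.kron(I2, nr)) \
    + V * np.kron(nr, nr)

psi0 = np.zeros(4); psi0[1] = 1.0      # |down up> in the basis {gg, gr, rg, rr}
for t in np.linspace(0.0, 4.0, 9):     # time in microseconds
    psi = expm(-2j * np.pi * H * t) @ psi0
    print(t, abs(psi[2])**2)           # population transferred to |up down>

# second-order perturbative exchange rate for comparison
print(Omega**2 * V / (4 * Delta * (Delta - V)))
```

The exact evolution also contains a small leakage out of the single-excitation sector, which the perturbative rate neglects; comparing the printed oscillation with the perturbative J illustrates the accuracy of the effective U(1)-symmetric description.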
To observe these virtual spin-exchange processes, it is preferable to work in the weak-dressing regime Ω≪|Δ|, which, however, results in weaker interaction strengths. Concerning this trade-off, which could be relaxed by a larger Rabi frequency, our experiments are typically performed with |Δ/Ω|∈ [1.5,4]. In this intermediate regime, we demonstrate that the U(1) symmetry is largely preserved and the deviation from the effective theory can be suppressed by a postselection measurement. In fact, we can accurately count Rydberg excitations in each experimental run by single-site-resolved fluorescence imaging, which projects the spins onto an exact microstate. Therefore, when exploring the dynamics of a specific 𝒩̂_R subspace, events subject to processes breaking the U(1) symmetry can be discarded, while only states remaining in the given symmetry sector are retained <cit.>. This postselection scheme has a high success probability and shows good tolerance to imperfect state initialization.

Quantum walk of a single magnon

We first investigate the dynamics within the 𝒩̂_R=1 subspace of a single Rydberg excitation (magnon). The effective Hamiltonian for this symmetry sector is a simple XY model describing coherent hopping of a single magnon: Ĥ_ eff = ∑_i< j J_ij (σ̂_i^+ σ̂_j^- + σ̂_i^- σ̂_j^+) + ∑_i μ_i n̂_i, where J_ij= Ω^2 V_ij/[4Δ(Δ-V_ij)] is the rate of the effective spin exchange, and μ_i = -Δ +2δ +∑_j≠ iJ_ij is the on-site potential of the magnon, with δ=Ω^2/4Δ. As a minimal yet nontrivial example, we begin with two sites and measure the spin-exchange process |↓↑⟩↔|↑↓⟩. To this end, two atoms are loaded into the tweezers and prepared in the state |↓↓⟩ via optical pumping. Then, the trap is turned off, and the first atom is addressed with an 820-nm laser, making it off-resonant with respect to the transition driven by the global Rydberg beam. The second atom is on-resonant and subsequently driven to the Rydberg state by a π-pulse, creating the desired initial state |↓↑⟩. After that, the global Rydberg beam is significantly detuned to induce the effective spin exchange. The experimental sequence is shown in Fig. <ref>(d), and more details can be found in Refs. <cit.>. Figure <ref>(e) depicts the characteristic oscillation dynamics measured with Ω = 2π× 1.52   MHz, Δ = 2π× 5   MHz, and r=4.95 μ m, where r is the interatomic distance. It is clearly seen that the oscillation is approximately U(1) symmetric, as it mainly occurs in the single-excitation subspace, while the states |↓↓⟩ and |↑↑⟩ are rarely populated. The oscillation frequency ∼ 0.80   MHz drawn from the experiment agrees well with the perturbation analysis, which gives |J| ≈ 0.78   MHz. Here, the damping of the coherent spin exchange is mainly caused by uncorrelated dephasing from intermediate-state scattering, while the scheme is intrinsically robust against correlated dephasing from laser phase noise. We next measure the distance dependence of the interaction J_ij=J(r_ij) by varying the distance r between the two atoms. As shown in Fig. <ref>(f), the measured potential perfectly matches the theoretical prediction J_±(r) = δ/[(r/r_c)^6∓1], where ± denotes the sign of the detuning and r_c = (C_6/|Δ|)^1/6 is a characteristic length. For a negative detuning (Δ<0), J_-(r) is a soft-core potential that plateaus at δ for r<r_c and decays with a vdW tail ∼ 1/r^6, similar to the Rydberg-dressing-induced interaction between ground-state atoms <cit.>.
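The soft-core shape of the exchange potential follows directly from the formula above; a short evaluation sketch of the red-detuned branch (parameter values illustrative):

```python
import numpy as np

Omega, Delta = 1.52, -5.0               # 2*pi x MHz; Delta < 0 selects J_-(r)
C6 = 1023e3                             # 2*pi x MHz um^6
delta = Omega**2 / (4 * Delta)          # plateau value of the potential
r_c = (C6 / abs(Delta))**(1.0 / 6.0)    # soft-core radius (~7.7 um here)

for r in np.linspace(2.0, 12.0, 6):     # interatomic distance (um)
    J = delta / ((r / r_c)**6 + 1.0)    # J_-(r); for Delta > 0 the +1 becomes
    print(r, J)                         # -1 and J_+(r) diverges at r = r_c
```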
The potential for a positive detuning (Δ>0) has a distinct behavior: while it has the same plateau value and asymptotic scaling, J_+(r) diverges at r=r_c. This singularity is caused by the facilitation dynamics, where the condition V_i,i+1=Δ makes single-magnon states resonantly coupled to the two-magnon state |↑↑⟩, leading to a breakdown of perturbation theory and of the U(1) symmetry. In the facilitation regime, it has been shown previously that a small thermal fluctuation of the atomic positions can lead to strong Anderson localization, hindering the transport of the excitation <cit.>. In contrast, for the U(1)-symmetric regime studied in this work, the plateau of the potential makes the dynamics insensitive to fluctuations of the interatomic distance, and a magnon is expected to be highly delocalized. To demonstrate that the magnon can exhibit a quantum walk that is robust against atomic positional disorder, we now create a larger array containing 7 atoms with a spacing of 4.95 μm. In order to prepare the initial state |↓↓↓↑↓↓↓⟩, we apply the individual addressing beam to shift the detuning of the central site, followed by an adiabatic ramp of the global Rydberg beam, which drives only the atom at the center to the Rydberg state [Fig. <ref>(a)]. After the initialization, the addressing beam is turned off, and a red-detuned (Δ<0) Rydberg driving field is applied to induce the effective dynamics. The propagation of the initial excitation can be traced by observing the evolution of the local Rydberg density ⟨n̂_i⟩, as shown in Fig. <ref>(b), where an approximate light-cone wavefront can be identified. The staggered pattern of ⟨n̂_i⟩ during the evolution is clear evidence of quantum interference [Fig. <ref>(c)], as opposed to the Gaussian distribution of a classical random walk. In the current system, the presence of uncorrelated dephasing will eventually destroy the coherence of the system and lead to a uniform steady distribution. To quantify the role of the dephasing, we extract the mean square displacement ⟨ x^2 ⟩ of the magnon [Fig. <ref>(d)] and find good agreement with simulations based on the Haken-Reineker-Strobl (HRS) model <cit.>, which includes both coherent magnon hopping and on-site dephasing (with a rate γ=2π× 0.2  MHz). For a larger system, the HRS model predicts that the magnon will continue to spread with no steady-state distribution, but its motion exhibits a quantum-classical crossover: while the initial propagation for t<1/γ is governed by ballistic transport (⟨ x^2 ⟩∝ t^2), the spreading gradually becomes diffusive, with ⟨ x^2 ⟩∝ t. Such a scaling crossover can be identified in future experiments with increased system size.

Dynamics of magnon bound states

Having explored the single-magnon dynamics, we proceed to the observation of correlated motion of multiple magnons. In the two-excitation subspace (𝒩̂_R=2), neglecting the essentially uniform on-site potential, the effective Hamiltonian now reads Ĥ_eff = ∑_i<j≠ kQ_ijk(σ̂_i^+ σ̂_j^-n̂_k + n̂_kσ̂_i^-σ̂_j^+) + ∑_i<j U_ijn̂_i n̂_j, where Q_ijk = (G_ijk+G_jik)/2 is the density-dependent hopping strength with G_ijk = Ω^2 V_ij /[4(Δ-V_ik)(Δ - V_ik - V_ij)], and U_ij=V_ij-4J_ij+ ∑_l≠ i,j(G_lij-J_li) denotes the density interaction between magnons. Note that the density interaction U_ij∼ V_ij stems mainly from the zeroth-order Hamiltonian Ĥ_0, while the exchange interaction Q_ijk is induced by the second-order perturbation. This leads to the important characteristic |U_ij/Q_ijk|∼ (2Δ/Ω)^2≫1, which makes Eq.
(<ref>) a long-ranged, highly anisotropic Heisenberg model. One direct consequence of this large anisotropy is the emergence of a family of magnon bound states. In an infinite spin chain, the two-magnon eigenstate |ψ_K⟩=∑_i≠ jψ_K(i,j)σ̂_i^+σ̂_j^+|↓↓⋯↓⟩ can be labeled by the center-of-mass momentum K, and the wavefunction can be factorized as ψ_K(i,j) = e^iKRϕ_K(r) by introducing the center-of-mass position R = (i+j)/2 and the relative distance r= i-j <cit.>. The bound state has a localized relative wavefunction, ϕ_K(r)→0 as r→∞, and its energy is isolated from the scattering continuum. Therefore, a system initially in a bound state remains localized in the relative coordinate, in stark contrast to a scattering state, in which the individual excitations propagate freely. Figure <ref>(a) shows the energy spectrum and the bound-state wavefunction for the typical parameters Δ/Ω=-3 and V_i,i+1/Δ=-8. The extremely large nearest-neighbor (NN) anisotropy ξ_1=U_i,i+1/Q_i-1,i,i+1≈ 684 in this case gives rise to a high-energy bound state (red curve), in which the magnons are tightly bound at a relative distance r=1 (nearest neighbors) for all momenta. The strong density interaction also has a significant long-range effect absent in short-range interacting systems <cit.>: the next-nearest-neighbor (NNN) anisotropy ξ_2=U_i,i+2/Q_i-1,i,i+2≈ 4 is also quite large, and can thus support a low-energy loosely bound state (blue curve), whose wavefunction ϕ_K(r) has a larger bond length r>1. We focus on these two types of bound pairs in the experiment and expect that the same system gives rise to further varieties of bound states at larger anisotropy or in different lattice configurations. To probe the correlated dynamics of the tightly bound Rydberg pair, we prepare the initial state |↓↓↑↑↓↓⟩ in a 6-atom chain via an adiabatic anti-blockade excitation scheme, where the detuning for the central two atoms is swept across the resonance point Δ=V_i,i+1/2. We then quench the system to a fixed detuning and measure the evolution of the two-site correlator Γ_ij=⟨σ̂_i^+ σ̂_j^+ σ̂_i^- σ̂_j^-⟩. For a positive detuning Δ=2π× 12  MHz, the observed correlation function propagates almost perfectly along the directions j=i±1 [see the upper panels of Fig. <ref>(c)], demonstrating that the two Rydberg excitations move in a correlated manner, as expected [see Fig. <ref>(b)]. In fact, the large NN anisotropy ξ_1≈ -35 in our experiment makes the total number of NN Rydberg bonds, 𝒩̂_RR=∑_i n̂_in̂_i+1, another conserved charge. The tightly bound Rydberg pairs constitute the symmetry sector (𝒩̂_R=2, 𝒩̂_RR=1), whose dynamics is governed by the NNN hopping term Q∑_i (σ̂_i^+ σ̂_i+2^-n̂_i+1 +H.c.). Here, the strength Q=Q_i,i+2,i+1 corresponds to the exchange process illustrated in Fig. <ref>(c) and determines the propagation speed of the tightly bound pair. To further confirm this analysis, we tune the detuning to a negative value, Δ=-2π× 3.3  MHz, for which the single-magnon hopping strength J=J_i,i+1 remains unchanged but the density-dependent hopping is significantly reduced (Q=0.13  MHz→ 0.01  MHz). Consistent with the theoretical prediction, the dynamics of the system becomes almost frozen within the time scale T∼ 2π/J [see the lower panels of Fig. <ref>(c)], over which a single Rydberg excitation would already spread over the lattice. Note that the slight spreading of the correlator at late times is mainly caused by imperfect state initialization rather than by excitation hopping.
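The bound-state structure discussed above can be reproduced by diagonalizing the effective Hamiltonian in the two-excitation sector. The Python sketch below keeps only the dominant density interaction U_ij ≈ V_ij (its perturbative corrections are omitted for brevity) together with the density-dependent hopping Q_ijk; the chain length and the threshold used to flag a tight pair are illustrative choices, with parameters in the spirit of Fig. 3(a):

```python
import numpy as np

N, a = 9, 4.95                          # chain length and lattice spacing (um)
Omega = 1.52
Delta = -3.0 * Omega                    # anisotropic regime, Delta/Omega = -3
C6 = 1023e3                             # 2*pi x MHz um^6

def V(i, j):
    return C6 / (a * abs(i - j))**6

def G(i, j, k):                         # second-order hopping building block
    return Omega**2 * V(i, j) / (4 * (Delta - V(i, k)) * (Delta - V(i, k) - V(i, j)))

pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
idx = {p: m for m, p in enumerate(pairs)}
H = np.zeros((len(pairs), len(pairs)))
for p in pairs:
    H[idx[p], idx[p]] = V(*p)           # dominant magnon-magnon interaction
    for q in pairs:
        common = set(p) & set(q)
        if len(common) == 1:            # exactly one magnon hopped, one spectator
            k = common.pop()
            i = (set(p) - {k}).pop()    # initial position of the moving magnon
            j = (set(q) - {k}).pop()    # final position
            H[idx[q], idx[p]] = 0.5 * (G(j, i, k) + G(i, j, k))   # Q with spectator k

evals, evecs = np.linalg.eigh(H)
for E, vec in zip(evals, evecs.T):
    w1 = sum(abs(vec[idx[p]])**2 for p in pairs if p[1] - p[0] == 1)
    if w1 > 0.9:                        # weight at bond length 1 flags a tight pair
        print("tightly bound state: E =", E, " NN weight =", w1)
```

Eigenstates whose weight concentrates at relative distance r=2 instead identify the loosely bound branch discussed next.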
The frozen dynamics observed here is a clear signature of Hilbert space fragmentation: while all tightly bound states |⋯↑_i↑_i+1⋯⟩ share the local symmetry (𝒩̂_R and 𝒩̂_RR), they form dynamically disconnected Krylov subspaces of dimension 1 (frozen states). In fact, taking only NN vdW interactions into consideration (in accordance with a vanishing NNN hopping strength Q), the effective Hamiltonian can be mapped to a folded XXZ model <cit.>, where spin exchanges are constrained by the conservation of 𝒩̂_RR, leading to a strongly fragmented Hilbert space in the thermodynamic limit. Unlike the tightly bound state, which has a nearly flat band in most parameter regimes (corresponding to the frozen dynamics), the loosely bound pair displays a finite bandwidth and is therefore more mobile [Fig. <ref>(a)]. To observe the propagation of this longer-range bound state, we prepare a 7-site chain and excite the third and the fifth atom to the Rydberg level. We first choose a small lattice spacing of 4.95 μm to achieve large anisotropies ξ_1=539 and ξ_2≈ 1.24, for which the prepared initial state |↓↓↑↓↑↓↓⟩ has a considerable overlap (≈ 0.24) with the loosely bound state. The upper panels of Fig. <ref>(e) depict the evolution of the experimentally extracted correlation function Γ_ij. In contrast to the tightly bound pair, whose transport is determined by an NNN hopping term, the correlated motion of the loosely bound pair is mediated by two successive NN hopping processes [Fig. <ref>(d)], as evident from the predominant spreading of Γ_ij along the directions i=j±2. As a comparison, we then increase the interatomic distance to 8.5 μm, at which the NNN anisotropy ξ_2≈ -0.52 is too small to support the long-range bound state for most momenta. In this regime, the observed correlator Γ_ij rapidly spreads over the entire zone with no preferred propagation direction [see the lower panels of Fig. <ref>(e)], which suggests that the two Rydberg excitations are not bound to each other but propagate freely <cit.>. To further confirm the existence of the bound states, we extract their participation ratios (BR) from the measured correlation map, where the ratios for the tightly bound state and the long-range bound state are defined as BR_1 = ∑_iΓ_i,i+1/Γ_tot and BR_2 = ∑_iΓ_i,i+2/Γ_tot, respectively, with Γ_tot = ∑_i<jΓ_ij. For the system size realized in our experiment, reflection from the boundary can lead to finite BR_1 and BR_2 even in the absence of magnon interactions. To estimate this finite-size effect and obtain a lower reference value for the participation ratio, we assume a uniform thermal distribution of the magnons, i.e., equal Γ_ij for all pairs. As confirmed by Fig. <ref>, the measured ratio is much larger than this lower bound (dashed curves) over the free-magnon relaxation time ∼ 1/J. Here, the damping of the bound pair at late times is mainly caused by the local dephasing. It is worth pointing out here that atomic positional disorder may slow down the propagation of bound magnons more easily than that of single magnons, because it contributes a large disordered binding interaction U_ij (especially for the tightly bound pair). To account for the decoherence, the positional disorder, as well as other imperfections, we carry out full numerical simulations based on realistic experimental conditions and the original Rydberg Ising model (see Methods). The full simulation agrees very well with the experimental data (see Fig.
<ref>) and suggests that improving the coherence of the correlated spin-exchange dynamics is the main target for future studies.

Conclusions and outlook

In conclusion, we have demonstrated a new approach to constructing Heisenberg-type spin models in a Rydberg atom array. In contrast to previous schemes realized by dipolar exchange interactions and Floquet engineering <cit.>, our approach is based on Rydberg dressing of an Ising Hamiltonian, which can offer a large and widely tunable anisotropy. In the current experiment, we focused on the single-magnon and the two-magnon sectors. By creating more excitations in a large-scale array, the system may allow exploration of emergent Hilbert space fragmentation <cit.> and the Krylov-restricted thermalization of multiple magnons <cit.>. The scheme also allows dynamical engineering of spin transport, topological pumping protocols, and programmable entanglement distribution <cit.>. Generalizations to higher dimensions could lead to richer physics. In particular, in a 2D lattice, the inclusion of a multicolor dressing field could enable the application of a synthetic gauge flux <cit.>, which can give rise to topologically protected chiral motion of the magnon bound state and holds promise for the observation of a chiral spin liquid <cit.>.

This research was supported by the Samsung Science and Technology Foundation (SSTF-BA1301-52) and the National Research Foundation of Korea (2017R1E1A1A01074307). F. Yang and K. Mølmer acknowledge support from the Carlsberg Foundation through the "Semper Ardens" Research Project QCooL and from the Danish National Research Foundation (DNRF) through the Center of Excellence "CCQ" (Grant No. DNRF156). We thank L. You, T. Pohl, A. E. B. Nielsen, H. Yarloo, H. Zhang, A. Cooper, and X. Wu for valuable discussions.

§ METHODS

§.§ Effective Hamiltonian of the system

The effective U(1)-symmetric model can be constructed from the Schrieffer-Wolff (SW) transformation <cit.>. Up to second-order perturbation theory, the effective Hamiltonian is given by Ĥ_eff = Ĥ_0 + Ĥ_eff^(2) with Ĥ_eff^(2)=𝒫̂(1/2[𝒮̂,Ω̂_D])𝒫̂, where 𝒮̂ is a generator satisfying [𝒮̂,Ĥ_0]+Ω̂_D=0, and 𝒫̂ projects out terms that do not conserve 𝒩̂_R. Formally, the generator can be expressed as 𝒮̂=iΩ/2∑_iσ̂_i^y/(Δ - ∑_j≠ iV_ijn̂_j). It is difficult to obtain an explicit effective Hamiltonian from this expression. Therefore, we expand 𝒮̂ in orders of the Rydberg excitation number that can influence the spin flip of a single atom at the i-th site, i.e., 𝒮̂ = (2i/Ω) δ∑_i σ̂_i^y + (2i/Ω) ∑_i≠ jJ_ijσ̂_i^yn̂_j + (i/Ω)∑_i≠ j≠ k(G_ijk-J_ij)σ̂_i^yn̂_jn̂_k +⋯ , where δ = Ω^2/4Δ, J_ij = Ω^2V_ij/[4Δ(Δ-V_ij)], G_ijk = Ω^2V_ij/[4(Δ-V_ik)(Δ-V_ik-V_ij)]. The above expansion then leads to an effective Hamiltonian Ĥ_eff^(2) = ℋ̂_1-body+ℋ̂_2-body+ℋ̂_3-body+⋯, where ℋ̂_1-body = δ∑_iσ̂_i^z, ℋ̂_2-body = ∑_i≠ jJ_ij/2(σ̂_i^+σ̂_j^- + σ̂_i^-σ̂_j^+ -2σ̂_i^zn̂_j), ℋ̂_3-body = ∑_i≠ j≠ kG_ijk-J_ij/2(σ̂_i^+ σ̂_j^- + σ̂_i^-σ̂_j^+ -σ̂_i^zn̂_j)n̂_k are the one-body self-energy shift, the two-body XXZ-type Hamiltonian, and the three-body XXZ term, respectively. The Hamiltonian can be further simplified by the substitution σ̂_i^z = 2n̂_i -1 in a given state sector. For the single-magnon sector (𝒩̂_R=1), the quadratic term n̂_in̂_j can be neglected, which leads to the XY model given in the main text. For the two-magnon sector (𝒩̂_R=2), the cubic term n̂_in̂_jn̂_k can be discarded, and the resulting Hamiltonian can be mapped to Eq. (<ref>).
For the general multi-magnon case, the dynamics is governed by a folded XXZ model exhibiting Hilbert space fragmentation (HSF) <cit.>.

§.§ Experimental setup and procedure

The experimental setup is a Rydberg quantum simulator using a neutral-atom array of ^87 Rb atoms, similar to our previous experiments <cit.>. The atomic ensembles are cooled and gathered inside a magneto-optical trap (MOT), while the single atoms are trapped inside an 820-nm optical tweezer array of 1  mK depth and sub-Doppler cooled to ∼ 35 μK with polarization gradient cooling. Atoms are then optically pumped to |↓⟩ = |5S_1/2,F=2,m_F=2⟩. After the ground-state preparation, the traps are turned off and the atoms are driven to the Rydberg state |↑⟩ = |71S_1/2,m_J=1/2⟩ via a two-photon transition using two Rydberg beams at 780 nm (home-built ECDL) and 480 nm (Toptica TA-SHG Pro), with an intermediate detuning Δ_I = 2π× 660  MHz from the intermediate state |m⟩ =|5P_3/2,F=3,m_F=3⟩. The quantum operation is performed by a series of Rydberg and addressing laser pulses. After the quantum operation, the atoms are trapped again by turning on the optical tweezers, and atoms in Rydberg states are anti-trapped from the tweezers. The remaining atoms are imaged with an electron-multiplying charge-coupled device (EMCCD, Andor iXon Ultra 888) by illuminating the imaging beam. By distinguishing the fluorescence of a trapped atom from the background, we determine the internal state of each individual atom. The optical tweezer trap and the addressing beam for the state initialization use the same 820-nm laser, derived from a Ti:Sapphire oscillator (Avesta TiC) pumped by a 532-nm laser (Coherent Verdi G18). The laser beam passes through an acousto-optic modulator (AOM) and is split into zeroth- and first-order beams. The first-order beam is sent to a spatial light modulator (SLM, Meadowlark Optics ODPDM512), and the optical tweezer array of target and reservoir traps is formed and rearranged with the weighted Gerchberg-Saxton (GSW) algorithm calculated in real time on a GPU (NVIDIA Titan-X Pascal). The phase pattern for the atom arrays is calculated on an array zero-padded to four times the initial size, to achieve a positioning resolution smaller than the trap size <cit.>. The zeroth-order beam propagates along a different path, passing an additional AOM followed by an acousto-optic deflector (AOD, AA Opto-Electronic DTSXY-400-820), which is used to address the target atom. This 820-nm addressing beam is off-resonant to the 5S→ 5P transition, inducing an a.c. Stark shift on the target-atom Rydberg transition. The quantum operation is programmed using a delay generator (Stanford Research Systems DG645) and an arbitrary waveform generator (AWG, Moglabs XRF Agile RF Synthesizer), controlling the AOMs of both the addressing beams and the Rydberg beams. The sequence is depicted in Fig. <ref>(d) of the main text, and a more detailed one is given in Extended Data Fig. <ref>. The sequence is divided into two parts: an initialization process driving the target atoms to Rydberg states, and the spin-exchange process inducing the many-body quench dynamics. For the two-atom experiment, the initial state is prepared by addressing one of the atoms to make it off-resonant to the Rydberg beams and applying a resonant π-pulse to the other atom [see Extended Data Fig. <ref>(a) and Fig. <ref>(c)].
For all other experiments, the target atoms are addressed, and the Rabi frequency Ω and the detuning Δ of the global Rydberg beams are adiabatically swept according to the following sequence: (1) 0 μs→ 0.1 μs, (0,Δ_i)→ (Ω_ exp,Δ_i); (2) 0.1 μs→ 0.9 μs, (Ω_ exp,Δ_i)→ (Ω_ exp,Δ_f); and (3) 0.9 μs→ 1 μs, (Ω_ exp,Δ_f)→ (0,Δ_f), as depicted in Extended Data Fig. <ref>(b), where Ω_ exp is the Rabi frequency used in the spin-exchange step. The values of these parameters are summarized in Extended Data Table <ref>. With the above initialization, the addressed target atom is adiabatically excited to the Rydberg state [see Extended Data Fig. <ref>(d)].

§.§ Experimental parameters and measured values

The experimental parameters are given in the following tables. Extended Data Table <ref> shows the parameters and measured values for the two-atom spin-exchange dynamics, where Δ is the detuning for the spin exchange, r is the distance between the two atoms, Ω is the Rabi frequency, and J is the spin-exchange frequency fitted from each experiment, e.g., from the data in Fig. <ref>(e) of the main text. The vdW interaction strength V = C_6/r^6 is determined by the distance r, with C_6 = 2π× 1023  GHz·μm^6 corresponding to the Rydberg state |71S_1/2, m_J = 1/2⟩ used in the experiment <cit.>. The values of Ω and J are fitted with the expression P=a+bcos(2π× c× t)×exp(-t/d), with fit parameters a, b, c, d and P the population of the initial state, where c corresponds to Ω/2π or J/4π, respectively. The error in r, which is plotted in Fig. <ref>(f) of the main text, has the same value 0.3 μm for all distances; it is limited by the resolution of the image plane, where the beam waist is ∼ 1.2 μm and the resolution is ∼ 0.3 μm =1.2/4  μm because of the zero-padding. Extended Data Table <ref> shows the experimental parameters for the rest of the experiments. Here, Ω_ exp is the Rabi frequency for the spin-exchange dynamics and the maximum Rabi frequency for the quantum annealing in the initial-state preparation, Δ_A is the detuning applied to the target atom by the addressing beam (two values, for the left and the right atom, in the two-magnon experiments), Δ_i and Δ_f are the initial and final detunings of the detuning sweep in the state initialization, and Δ_ exp is the detuning for the spin-exchange quench dynamics.

§.§ Experimental imperfections and numerical simulations

The full numerical simulations in Fig. <ref> of the main text take the experimental errors into consideration. Extended Data Table <ref> lists the types of experimental imperfections and their treatment in the numerical simulations. The dominant errors in the dressing scheme are the uncorrelated individual dephasing, mainly due to spontaneous decay from the intermediate state, the vdW interaction fluctuations due to the finite temperature of the atoms, and the state-measurement error. The collective dephasing, mainly induced by laser phase noise, does not play a significant role in the dynamics because of the decoherence-free feature of the effective model <cit.>. Both individual and collective dephasing are treated with the Lindblad master equation dρ/dt = -i [H, ρ ] + ℒ_ ind(ρ) + ℒ_ col(ρ) <cit.>, where the superoperators ℒ_ ind and ℒ_ col denote the individual (on-site) and the collective phase noise, respectively. The individual dephasing rate γ_ ind≈ 2π× 0.2  MHz was fitted from the three-level model of |g⟩, |r⟩, and the intermediate state |m⟩.
§.§ Experimental imperfections and numerical simulations The full numerical simulations in Fig. <ref> of the main text take the experimental errors into consideration. Extended Data Table <ref> lists the types of experimental imperfections and their treatment in the numerical simulations. The dominant errors in the dressing scheme are the uncorrelated individual dephasing, mainly due to spontaneous decay from the intermediate state, the vdW interaction fluctuations due to the finite temperature of the atoms, and the state-measurement error. The collective dephasing, mainly induced by laser phase noise, does not play a significant role in the dynamics because of the decoherence-free feature of the effective model <cit.>. Both individual and collective dephasing are treated with the Lindblad master equation dρ/dt = -i [H, ρ ] + ℒ_ ind(ρ) + ℒ_ col(ρ) <cit.>, where the superoperators ℒ_ ind and ℒ_ col denote the individual (on-site) and the collective phase noise, respectively. The individual dephasing rate γ_ ind≈ 2π× 0.2  MHz was fitted from the three-level model of |g⟩, |r⟩, and the intermediate state |m⟩. The collective phase noise was fitted from the single-atom Rabi oscillation with γ_ ind fixed, and its value is γ_ col≈ 2π× 0.4  MHz. The temperature of the atomic thermal motion, T_ atom = 34.27(5)  μ K, was measured using the release-and-recapture method. From this temperature, we calculate the motional spread of each atom, with a positional standard deviation σ_i = √(k_BT/(mω_i^2)) for trap frequency ω_i. In the simulation, the average effect of this atomic positional disorder is evaluated with the Monte-Carlo method. The radial and longitudinal positional standard deviations are σ_r≈ 0.1 μ m and σ_a≈ 0.3 μ m, respectively. The detection error is treated similarly to <cit.>, where the dominant contribution to the conditional error probability P(g|r) is the Rydberg-state decay and the dominant contribution to P(r|g) is the finite temperature of the atom. The former is calculated as P(g|r) = 1-exp(-t_ trap/t_1), where t_ trap is the time at which the trap is turned off, and the Rydberg lifetime t_1=43(15) μ s is measured with an additional Ramsey experiment <cit.>. The latter probability, P(r|g)=P_ recap(t_ trap), is obtained from the release-and-recapture probability curve.
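To make the noise model concrete, here is a minimal two-atom sketch of the Lindblad equation above using QuTiP's mesolve. The dephasing rates are the fitted values quoted in the text, while the exchange coupling J is a hypothetical placeholder, so this is an illustrative toy model under stated assumptions rather than the full many-body simulation.

    import numpy as np
    import qutip as qt

    gamma_ind = 2 * np.pi * 0.2      # MHz, fitted individual dephasing rate
    gamma_col = 2 * np.pi * 0.4      # MHz, fitted collective dephasing rate
    J = 2 * np.pi * 1.0              # MHz, hypothetical spin-exchange coupling

    # Two-atom Pauli operators
    sx = [qt.tensor(qt.sigmax(), qt.qeye(2)), qt.tensor(qt.qeye(2), qt.sigmax())]
    sy = [qt.tensor(qt.sigmay(), qt.qeye(2)), qt.tensor(qt.qeye(2), qt.sigmay())]
    sz = [qt.tensor(qt.sigmaz(), qt.qeye(2)), qt.tensor(qt.qeye(2), qt.sigmaz())]

    H = (J / 2) * (sx[0] * sx[1] + sy[0] * sy[1])    # flip-flop (spin-exchange) term

    # L_ind: independent on-site dephasing; L_col: correlated laser phase noise.
    # The collective operator sz[0] + sz[1] acts trivially on the single-excitation
    # exchange subspace, reflecting its decoherence-free property.
    c_ops = [np.sqrt(gamma_ind / 2) * sz[0],
             np.sqrt(gamma_ind / 2) * sz[1],
             np.sqrt(gamma_col / 2) * (sz[0] + sz[1])]

    psi0 = qt.tensor(qt.basis(2, 0), qt.basis(2, 1))   # |up, down>
    tlist = np.linspace(0.0, 5.0, 200)                 # us
    result = qt.mesolve(H, psi0, tlist, c_ops, e_ops=[sz[0]])
    # result.expect[0] traces the damped spin-exchange oscillation of atom 1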
http://arxiv.org/abs/2307.05964v1
20230712071359
Virtual Screening of Chemical Space based on Quantum Annealing
[ "Takuro Tanaka", "Masami Sako", "Mahito Chiba", "Chul Lee", "Hyukgeun Cha", "Masayuki Ohzeki" ]
quant-ph
[ "quant-ph" ]
Quantum computers are expected to be a key technology that goes beyond conventional computers. Based on quantum annealing, combinatorial optimization over a wide search range can be performed. The quantum annealer is sometimes regarded as a simulator for quantum many-body dynamics <cit.>. Practical applications of quantum annealers have been presented across various fields, such as finance <cit.>, traffic <cit.>, routing optimization <cit.>, logistics <cit.>, manufacturing <cit.>, and marketing <cit.>, as well as in decoding problems <cit.>. Their potential for solving optimization problems with inequality constraints has also been extended <cit.>, especially for cases that are hard to formulate directly <cit.>. Comparative benchmark studies of quantum annealers on optimization problems have also been performed <cit.>. The quantum effect in cases with multiple optimal solutions has also been discussed <cit.>. Further, applications of quantum annealing to machine learning for solving optimization problems have been reported <cit.>. As with materials informatics (MI) on classical computers, optimization schemes based on quantum annealing can be a very useful approach. Recently, with the need for higher performance and diversification of materials, it has become indispensable to accelerate research and develop new materials with unprecedented properties and functions. However, since the properties of materials depend on many microscopic factors, a huge number of combinations in chemical space must be examined. To shorten development time, one therefore needs algorithms that find materials satisfying the desired properties in few trials. In previous studies, combinatorial optimization has been applied to materials research <cit.>. Hatakeyama et al. <cit.> extracted high-melting-temperature molecules using an open experimental database (DB) of organic molecules <cit.>. Kitai et al. <cit.> studied the design of complex thermofunctional metamaterials consisting of SiO2, SiC, and poly(methyl methacrylate); in their study, the DB was constructed using atomistic simulations to calculate the performance as a thermal radiator. Experiments on D-Wave quantum annealer hardware platforms have demonstrated that the quantum annealing hardware behaves like a Gibbs-Boltzmann sampler at a hardware-specific effective temperature <cit.>. Using a quantum annealer, it is therefore possible to perform sampling according to the Gibbs-Boltzmann distribution. When this characteristic is used effectively, it may help in solving Boltzmann-machine and probability-distribution-based optimization problems. However, to the best of our knowledge, there are no examples of MI problems in which a quantum annealer has been used as a sampling machine. In order to search for luminescent materials based on quantum chemistry, the HOMO-LUMO gap energy is indispensable in the DB. In this paper, we use QM9 as the database and extract the feature importance for the emission wavelength by generating sampling data with a quantum annealer. Here we note that the emission wavelength corresponds to the HOMO-LUMO gap as a microscopic feature.
The QM9 dataset, published in 2014 by Ramakrishnan et al. <cit.>, consists of more than 133k organic molecules with up to 9 heavy atoms (C, N, O, and F), with corresponding geometries as well as thermodynamic and electronic properties, e.g., the HOMO-LUMO gap, and the simplified molecular-input line-entry system (SMILES) representation. Note that SMILES strings can be converted to fingerprints, which are well-known, essential cheminformatics tools for mapping chemical space. The prediction model is constructed with a Stochastic Gradient Descent (SGD) Regressor using the fingerprint as a descriptor. The cost function y is given as the distance between the HOMO-LUMO gap value Δ_DB in the DB and the target HOMO-LUMO gap value Δ^∗ as follows: y = (Δ_DB - Δ^∗)^2. Using a quantum annealer, the combination of fingerprint bits that minimizes the cost function, together with the optimized value, is obtained. Even if there are many descriptors, the annealer probabilistically searches and samples various combinations to obtain the combination with the lowest energy. Feature-importance analysis can easily be performed by utilizing the sampling results, and by extracting the dominant features, a dimensional reduction of chemical space becomes possible. Molecular fingerprints encode molecular structure in a series of binary digits (bits) that represent the presence or absence of particular substructures in the molecule. Typical fingerprints in cheminformatics are Morgan, the extended-connectivity fingerprint (ECFP), the molecular access system (MACCS) key, Avalon, etc. These molecular-structure fingerprints correspond to the spin states up (S_z = 1) and down (S_z = -1) of an Ising model. In this paper, we use the 512-bit Avalon fingerprint computed with the free library RDKit <cit.>. Comparing fingerprints allows us to determine the similarity between two molecules, to find matches to a query substructure, etc. To extract the feature importance of material properties using a quantum annealer, we first construct an Ising-type prediction model and confirm that its prediction accuracy is high; we then verify the accuracy of the optimal solution obtained. To predict the target HOMO-LUMO gap, the cost function y is given as the distance from the target HOMO-LUMO gap, y = f_pred, as given in Eq. (<ref>). Here, Δ^∗ is set to 0.32 eV in this paper. We define the Ising (quadratic) model as the prediction model of the target HOMO-LUMO gap as follows: f_pred = ∑_i ≠ j Q_ij x_i x_j + ∑_i h_i x_i. Here, the descriptor x_i represents a fingerprint bit: x_i = 0 or 1. To fit the prediction model on the training data, we redefine the quadratic model as a linear model using the new variables X_ij = x_i x_j; the quadratic model can then be regarded as the linear model f_pred = ∑_ij Q_ij X_ij, with Q_ii = h_i, and we perform standard regression on this linear model (a minimal sketch of this construction is given below). In terms of prediction accuracy, a model using the full fingerprint would be appropriate. However, from the viewpoint of memory it is very difficult to handle, for the following two reasons: 1) since the cost function contains quadratic terms, _nC_2 interactions (with n ≥ 10^2) must be included, which makes the regression hard to compute; and 2) the QM9 database contains more than 10^5 entries. Therefore, compressing the fingerprint is indispensable. The compression threshold is set to 15000: a fingerprint bit is discarded when the number of times it takes the value 1 in the training data is less than 15000, i.e., in less than 14% (≈ 15000/110000) of the samples. The 512-bit Avalon fingerprint is thereby compressed to 207 bits.
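The following Python sketch illustrates this construction under stated assumptions: the Avalon fingerprints are computed with RDKit, the quadratic model is linearized through the products X_ij = x_i x_j, and the regression hyperparameters match those quoted in the next paragraph (α = 0.1, η = 0.001, L_2 penalty). The SMILES input and target values are placeholders standing in for the QM9 data.

    import numpy as np
    from rdkit import Chem
    from rdkit.Avalon.pyAvalonTools import GetAvalonFP
    from sklearn.linear_model import SGDRegressor

    def avalon_bits(smiles, n_bits=512):
        """SMILES -> 512-bit binary Avalon fingerprint (x_i = 0 or 1)."""
        mol = Chem.MolFromSmiles(smiles)
        return np.array(list(GetAvalonFP(mol, nBits=n_bits)), dtype=np.int8)

    def quadratic_features(x):
        """Upper-triangular products X_ij = x_i * x_j (with X_ii = x_i),
        which turn the quadratic Ising model into a linear one."""
        iu = np.triu_indices(x.size)
        return np.outer(x, x)[iu]

    # Placeholders for the compressed fingerprints and targets y = (gap - 0.32 eV)^2
    rng = np.random.default_rng(0)
    X_bits = rng.integers(0, 2, size=(1000, 207))
    y = rng.random(1000)

    X_quad = np.array([quadratic_features(x) for x in X_bits])
    reg = SGDRegressor(loss="squared_error", penalty="l2",
                       alpha=0.1, learning_rate="constant", eta0=0.001)
    reg.fit(X_quad, y)
    # reg.coef_ holds the flattened QUBO coefficients Q_ij used for annealing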
For the regression, the ratio between training and test data is set to 9:1. To build the prediction model, the SGD Regressor is used, which yields accurate models for datasets with more than 10^5 entries. First, a regression with squared loss and L_1-norm regularization is calculated with regularization strength α = 0.1 and learning rate η = 0.001. In this case, the loss function of the SGD regression does not decrease below 0.000382 after a few iterations, and the prediction accuracy (coefficient of determination R^2) is negative. This result suggests that it is difficult to predict the cost function y from the descriptors x due to a lack of learning. Next, an L_2-norm regularization with the same parameters (α=0.1, η = 0.001) is calculated. The loss function decreases to 0.000073 after 160 iterations, and the prediction accuracy (coefficient of determination R^2) is 0.82 (0.81) for the training (test) data, as shown in Fig. <ref>. This result shows that with L_2-norm regularization, learning works satisfactorily and the model is able to predict with sufficient accuracy. For the annealing calculation explained below, the quadratic coefficients (the QUBO matrix) obtained with the L_2-norm regularization are used, as shown in Fig. <ref>. In this study, a simulator of quantum annealing called openjij <cit.> is used to obtain the optimized combination of fingerprint bits in Eq. (<ref>); a minimal usage sketch is given at the end of this section. Openjij is an open-source Python library for heuristic optimization problems, based on the quantum Monte Carlo method via the Suzuki-Trotter decomposition <cit.>: x^opt = argmin_x f_pred (x). Here, the results of the calculation based on the quantum annealer are explained. First, to confirm the accuracy of the calculation, we compute the relative difference between the optimized cost-function value E^opt=min f_pred(x) and the value E^* = ∑ Q_ij x_i^opt x_j^opt calculated back from the optimized fingerprint x_i^opt and the coefficients Q_ij of the prediction model in Eq. (<ref>): δ = | E^opt -E^* |/E^opt = | E^opt - ∑ Q_ij x_i^opt x^opt_j |/E^opt. As a result, the calculated difference δ is 7.51 × 10^-15, i.e., agreement at the level of machine precision. Second, the optimized cost-function value E^opt is compared to the minimum value in the training data, which is 9.99 × 10^-5. The dependence of the optimized value on the number of annealing sweeps is also calculated. When the number of sweeps is 500, the optimized value is 1.40 × 10^-5, which is lower than the minimum value in the training data. The optimized fingerprint x^opt_i is not included in the training data, as confirmed by calculating the Hamming distance between them. For 100 sweeps, on the other hand, the optimized value is 1.63 × 10^-4, which is larger than the minimum value in the training data. As a property of quantum annealing, the sampling follows the Gibbs-Boltzmann distribution <cit.>, and this property is expected to be useful in machine learning. In this study, Gibbs-Boltzmann sampling data are generated with openjij. The feature importance of each fingerprint bit is evaluated from the frequencies in 1000 samples using a LightGBM (gradient-boosting decision tree) model. From this result, the top 20 features by importance are selected, as shown in Fig. <ref>(A). To judge whether these 20 features should be used as 0 or 1, the optimized fingerprint x_i^opt is referenced. Finally, by filtering the 110000 training entries with the 20 most important features, 600 molecular structures are extracted that have a small distance from the target HOMO-LUMO gap Δ^∗ given in Eq. (<ref>).
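A minimal sketch of the sampling and feature-importance steps is given below. The openjij sampler and QUBO interface are standard, but the QUBO values here are random placeholders, and training LightGBM to predict the sampled energy from the bit strings is one plausible reading of the importance-extraction step rather than a verbatim reproduction of the authors' pipeline.

    import numpy as np
    import openjij as oj
    import lightgbm as lgb

    n = 207
    rng = np.random.default_rng(0)
    # Placeholder QUBO; in the paper Q comes from the L2-regularized regression
    Q = {(i, j): rng.normal() for i in range(n) for j in range(i, n)}

    sampler = oj.SQASampler()               # simulated-quantum-annealing sampler
    response = sampler.sample_qubo(Q, num_reads=1000)

    X = np.array([[s[i] for i in range(n)] for s in response.samples()])
    E = np.asarray(response.record.energy)

    # Rank fingerprint bits by their influence on the sampled energies
    gbm = lgb.LGBMRegressor(n_estimators=200)
    gbm.fit(X, E)
    top20 = np.argsort(gbm.feature_importances_)[::-1][:20]
    print("20 most important fingerprint bits:", top20)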
This filtering suggests that the search range is screened down to 0.5% (= 600/110000), as shown in Fig. <ref>(B), which demonstrates the effect of virtual screening based on quantum annealing. In this paper, feature importance was extracted from sampling data generated by a quantum annealer, and the possibility of reducing chemical space was studied. Based on the QM9 database, SMILES strings are converted to fingerprints, whose bits correspond to the 0/1 states of an Ising model. An SGD regressor, applied to the linearized form of the quadratic model, is used to predict the target HOMO-LUMO gap. Note that the coefficients of the quadratic terms constitute the QUBO, which is used in the annealing calculation to minimize the cost function. The cost function is given as the distance from the target value of the HOMO-LUMO gap. The feature importance of the sampling data is extracted, which makes it possible to screen the whole search space down to 0.5%. This result suggests an acceleration of materials research through virtual screening. This is the first application that uses quantum annealing to extract material feature importance, and it has proven to be effective. In order to verify the effectiveness of this method, a comparison with existing methods is being conducted on actual materials; the results of this comparison and verification will be published in the near future. The authors gratefully acknowledge Dr. Yoshida, president of the LG Japan Lab Inc., for helpful discussions and for providing an environment suitable for our research. ref1 Yuki Bando, Yuki Susa, Hiroki Oshiyama, Naokazu Shibata, Masayuki Ohzeki, Fernando Javier Gómez-Ruiz, Daniel A. Lidar, Sei Suzuki, Adolfo del Campo, and Hidetoshi Nishimori, Probing the universality of topological defect formation in a quantum annealer: Kibble-Zurek mechanism and beyond. Phys. Rev. Res. 2, 033369 (2020). ref2 Bando, Y. and Nishimori, H. Simulated quantum annealing as a simulator of nonequilibrium quantum dynamics. Phys. Rev. A 104, 022607 (2021). ref3 Gili Rosenberg, Poya Haghnegahdar, Phil Goddard, Peter Carr, Kesheng Wu, Marcos López de Prado, Solving the Optimal Trading Trajectory Problem Using a Quantum Annealer, IEEE J. Sel. Top. Signal Process. 10, 1053-1060 (2016). ref4 Orús, R., Mugel, S. and Lizaso, E. Forecasting financial crashes with quantum computing. Phys. Rev. A 99, 060301 (2019). ref5 Venturelli, D. and Kondratyev, A. Reverse quantum annealing approach to portfolio optimization problems. Quantum Mach. Intell. 1, 17-30 (2019). ref6 Florian Neukart, Gabriele Compostella, Christian Seidel, David von Dollen, Sheir Yarkoni, Bob Parney, Traffic Flow Optimization Using a Quantum Annealer, Front. ICT 4, 29 (2017). ref7 Hussain, H., Javaid, M. B., Khan, F. S., Dalal, A. and Khalique, A. Optimal control of traffic signals using quantum annealing. Quantum Inf. Process. 19, 312 (2020). Volkswagon Florian Neukart, Gabriele Compostella, Christian Seidel, David von Dollen, Sheir Yarkoni and Bob Parney, "Traffic Flow Optimization Using a Quantum Annealer", Frontiers in ICT 4, 29 (2017). Ohzeki Masayuki Ohzeki, Akira Miki, Masamichi J. Miyama and Masayoshi Terabe, "Control of Automated Guided Vehicles Without Collision by Quantum Annealer and Digital Devices", Front. Comput. Sci., 19 (2019). Haba Renichiro Haba, Masayuki Ohzeki and Kazuyuki Tanaka, "Travel time optimization on multi-AGV routing by reverse annealing", arXiv:2204.11789 [quant-ph].
ref8 Sebastian Feld, Christoph Roch, Thomas Gabor, Christian Seidel, Florian Neukart, Isabella Galter, Wolfgang Mauerer, Claudia Linnhoff-Popien, A Hybrid Solution Method for the Capacitated Vehicle Routing Problem Using a Quantum Annealer. Front. ICT 6, 13 (2019). ref9 Ding, Y., Chen, X., Lamata, L., Solano, E. and Sanz, M. Implementation of a Hybrid Classical-Quantum Annealing Algorithm for Logistic Network Design. SN Comput. Sci. 2, 68 (2021). ref10 Venturelli, D., Marchand, D. J. J. and Rojo, G. Quantum Annealing Implementation of Job-Shop Scheduling. arXiv:1506.08479 [quant-ph] (2016). ref11 Nishimura, N., Tanahashi, K., Suganuma, K., Miyama, M. J. and Ohzeki, M. Item Listing Optimization for E-Commerce Websites Based on Diversity. Front. Comput. Sci. 1, 2 (2019). ref12 Ide, N., Asayama, T., Ueno, H. and Ohzeki, M. Maximum Likelihood Channel Decoding with Quantum Annealing Machine. In 2020 International Symposium on Information Theory and Its Applications (ISITA), 91-95 (2020). ref13 Arai, S., Ohzeki, M. and Tanaka, K. Mean field analysis of reverse annealing for code-division multiple-access multiuser detection. Phys. Rev. Res. 3, 033006 (2021). ref14 Yonaga, K., Miyama, M. J. and Ohzeki, M. Solving Inequality-Constrained Binary Optimization Problems on Quantum Annealer. arXiv:2012.06119 [quant-ph] (2020). ref15 Koshikawa, A. S., Ohzeki, M., Kadowaki, T. and Tanaka, K. Benchmark test of black-box optimization using d-wave quantum annealer. J. Phys. Soc. Jpn. 90, 064001(2021). ref16 Oshiyama, H. and Ohzeki, M. Benchmark of quantum-inspired heuristic solvers for quadratic unconstrained binary optimization. Sci. Reports 12, 2146 (2022). ref17 Yamamoto, M., Ohzeki, M. and Tanaka, K. Fair sampling by simulated annealing on quantum annealer. J. Phys. Soc. Jpn. 89, 025002 (2020). ref18 Maruyama, N., Ohzeki, M. and Tanaka, K. Graph minor embedding of degenerate systems in quantum annealing. arXiv:2110.10930 [quant-ph] (2021). ref19 Amin, M. H., Andriyash, E., Rolfe, J., Kulchytskyy, B. and Melko, R. Quantum boltzmann machine. Phys. Rev. X 8, 021050 (2018). ref20 Kumar, V., Bass, G., Tomlin, C. and Dulny, J. Quantum annealing for combinatorial clustering. Quantum Inf. Process. 17, 39 (2018). ref21 Adachi, S. H. and Henderson, M. P. Application of Quantum Annealing to Training of Deep Neural Networks. arXiv:1510.06356 [quant-ph, stat] (2015). ref22 Benedetti, M., Realpe-Gómez, J., Biswas, R. and Perdomo-Ortiz, A. Estimation of effective temperatures in quantum annealers for sampling applications: A case study with possible applications in deep learning. Phys. Rev. A 94, 022308 (2016). ref23 Arai, S., Ohzeki, M. and Tanaka, K. Teacher-student learning for a binary perceptron with quantum fluctuations. J. Phys. Soc. Jpn. 90, 074002 (2021). ref24 Sato, T., Ohzeki, M. and Tanaka, K. Assessment of image generation by quantum annealer. Sci. Reports 11, 13523 (2021) Oyaizu Kan Hatakeyama-Sato,Takahiro Kashikawa,Koichi Kimura,Kenichi Oyaizu, "Tackling the Challenge of a Huge Materials Science Search Space with Quantum-Inspired Annealing", Adv. Intell. Syst. (2019) Tamura Koki Kitai, Jiang Guo, Shenghong Ju, Shu Tanaka, Koji Tsuda, Junichiro Shiomi, and Ryo Tamura, "Designing metamaterials with quantum annealing and factorization machines", Phys. Rev. Research 2, 013319 (2020). Oyaizu-ref "Jean-Claude Bradley Open Melting Point Dataset", DOI: https://doi.org/10.6084/m9.figshare.1031638. Sampling_LosAlamos Jon Nelson,Marc Vuffray, Andrey Y. Lokhov, Tameem Albash,and Carleton Coffrin, arXiv:2109.01690v2. 
QM9_1 L. Ruddigkeit, R. Van Deursen, L. C. Blum, and J.L. Reymond, "Enumeration of 166 billion organic small molecules in the chemical universe database GDB-17", Journal of Chemical Information and Modeling, vol. 52, no. 11, pp. 2864-2875, 2012. QM9_2 R. Ramakrishnan, P. O. Dral, M. Rupp, and O. A. Von Lilienfeld, "Quantum chemistry structures and properties of 134 kilo molecules", Scientific Data, vol. 1, no. 1, pp. 1-7, 2014. rdkit "RDKit: Open-source cheminformatics; http://www.rdkit.org". Jij OpenJij https://github.com/OpenJij/OpenJij. ST Hatano Naomichi, Suzuki Masuo (2005-11-16). "Finding Exponential Product Formulas of Higher Orders". Quantum Annealing and Other Optimization Methods. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 37-68. arXiv:math-ph/0506007v1.
http://arxiv.org/abs/2307.03972v1
20230708131059
Evaluating the Capability of Large-scale Language Models on Chinese Grammatical Error Correction Task
[ "Fanyi Qu", "Yunfang Wu" ]
cs.CL
[ "cs.CL" ]
Large-scale language models (LLMs) have shown remarkable capability in a variety of Natural Language Processing (NLP) tasks and have attracted much attention recently. However, some studies have indicated that large language models fail to achieve promising results beyond state-of-the-art models on English grammatical error correction (GEC) tasks. In this report, we aim to explore how large language models perform on Chinese grammatical error correction tasks and to provide guidance for future work. We conduct experiments with 3 different LLMs of different model scales on 4 Chinese GEC datasets. Our experimental results indicate that the performance of LLMs on automatic evaluation metrics (e.g., the F_0.5 score) falls short of the previous state-of-the-art models because of the problem of over-correction. Furthermore, we also discover notable variations in the performance of LLMs when evaluated on different data distributions. Our findings demonstrate that further investigation is required for the application of LLMs to the Chinese GEC task. § INTRODUCTION Building on InstructGPT <cit.>, ChatGPT has demonstrated its powerful ability to understand complex instructions and generate reasonable responses on a variety of NLP tasks. Following the technical trajectory of ChatGPT, a significant number of high-quality LLMs have emerged recently in both academia and industry, such as LLaMA <cit.>, ChatGLM <cit.> and PaLM <cit.>. Previous studies found that these LLMs achieve great performance on a wide range of NLP tasks, including machine translation <cit.>, named entity recognition <cit.> and text summarization <cit.>. Certain studies have taken comprehensive investigations into the performance of LLMs in the domain of English grammatical error correction, yielding some interesting findings <cit.>: LLMs are not able to outperform state-of-the-art models in terms of automatic evaluation metrics. This is primarily because LLMs tend to make unnecessary modifications to make the input sentences more fluent, which may result in an over-correction problem and, in some cases, even alter the original semantics of the input sentences. In this report, we aim to explore the performance of LLMs on the Chinese GEC task. We conduct experiments on various LLMs to investigate the influence of model size on the GEC results. Additionally, we use test datasets from various data sources to explore the impact of the data distribution on the outcomes. § EXPERIMENTAL SETUP §.§ Dataset We conduct experiments on four Chinese GEC datasets to provide a comprehensive demonstration of LLMs' capability. The detailed statistics of these datasets are shown in Table <ref>. §.§.§ GEC data from Chinese learners We apply the test set of NLPCC-2018 <cit.> and the validation set of MuCGEC <cit.> for evaluation. These two datasets collect the grammatical errors made by foreigners during their process of learning Chinese. §.§.§ GEC data from Chinese native speaker examinations We apply the validation set of FCGEC <cit.> and the validation set of NaCGEC <cit.> for evaluation. These two datasets are collected from Chinese native speakers' language examinations. §.§ Model We conduct experiments on 3 LLMs with different model scales: * ChatGPT[https://platform.openai.com/docs/api-reference]: we evaluate the performance of ChatGPT with OpenAI's official API (see the sketch after this list). We choose gpt-3.5-turbo as the evaluated model, which stands out as the most advanced and is specifically optimized for chat functionality. * ChatGLM-6B <cit.>: ChatGLM is an open bilingual language model based on the GLM framework, which is optimized for Chinese QA and dialogue and exhibits a robust capacity for Chinese understanding. * LLaMA-7B <cit.>: LLaMA is a collection of foundation LLMs ranging from 7B to 65B parameters proposed by Meta AI; we apply the 7B model for evaluation.
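A minimal sketch of the ChatGPT evaluation call is shown below, using the pre-1.0 openai Python interface that was current at the time. The prompt wording is a stand-in with the same intent as the actual prompts given in Figure <ref>; only the model name and temperature are taken from the text.

    import openai  # pre-1.0 interface of the openai package

    # Stand-in prompt; the actual prompts are shown in Figure <ref>
    PROMPT = ("Please correct the grammatical errors in the following Chinese "
              "sentence and output only the corrected sentence:\n{sentence}")

    def correct(sentence: str) -> str:
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            temperature=0.6,  # value used in the experiments
            messages=[{"role": "user",
                       "content": PROMPT.format(sentence=sentence)}],
        )
        return resp["choices"][0]["message"]["content"].strip()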
§.§ Evaluation Metric We evaluate the models' performance with Precision, Recall, and F_0.5 at both the word level and the character level. We adopt the official implementation of the MaxMatch (M^2) scorer <cit.> to calculate the word-level F_0.5 score and choose PKUNLP as our word segmentation tool. We apply ChERRANT [https://github.com/HillZhang1999/MuCGEC/tree/main/ scorers/ChERRANT] for the character-level metric calculation. §.§ Prompt Considering the differences in performance of large language models, we designed different prompts for them. These prompts are roughly the same in semantics, but there are some differences in details. The prompts are shown in Figure <ref>. §.§ Setting details We set the temperature to 0.6 when applying ChatGPT, for reliable generated results. For ChatGLM-6B and LLaMA-7B, we conduct experiments on 4 NVIDIA GeForce 3080 Ti GPUs. § EXPERIMENT RESULTS The experimental results are shown in Table <ref>; several of them are worth discussing. First, different data sources result in distinct evaluation results. LLMs exhibit significantly superior performance when evaluated on Chinese learner data (NLPCC and MuCGEC), as opposed to Chinese native speaker examination data (FCGEC and NaCGEC). According to our observations, the grammatical errors made by Chinese learners primarily involve the misuse of similar words or phrases, rather than incorrect sentence structures. In contrast, GEC data from Chinese native speaker examinations maintains a higher level of regularity and consists of more complex structural errors. It is noteworthy that gaps exist between GEC data from Chinese examinations and Chinese native speakers' daily spoken habits. Second, different model scales also lead to distinct performance. The consistent trend is that ChatGPT performs similarly to the other two, smaller models on Precision while achieving a significant improvement in Recall. This implies that the evaluated LLMs have similar error-correction capability while their error-detection abilities differ considerably. Third, great gaps still exist between state-of-the-art models and LLMs on automatic evaluation metrics. Previous work <cit.> found the problem of over-correction for LLMs, which has also been noticed in our experiments. Furthermore, it is hard to explain why the character-level evaluation metrics are significantly lower than the word-level evaluation metrics, which has not been noted in previous work. § CONCLUSION In this report, we explore the performance of various LLMs on the Chinese grammatical error correction task. Experimental results indicate that a gap still remains between LLMs' performance and current state-of-the-art models. Furthermore, the performance of different LLMs is greatly impacted by the distribution of the test data. Future work can focus on addressing the over-correction problem of LLMs and explore the untapped potential of LLMs in the field of grammatical error correction.
http://arxiv.org/abs/2307.04778v2
20230710054331
Formulating A Strategic Plan Based On Statistical Analyses And Applications For Financial Companies Through A Real-World Use Case
[ "Saman Sarraf" ]
cs.LG
[ "cs.LG", "cs.CE" ]
Business statistics play a crucial role in implementing a data-driven strategic plan at the enterprise level that employs various analytics; the outcomes of such a plan enable an enterprise to enhance its decision-making process and to mitigate risks to the organization. In this work, a strategic plan informed by statistical analysis is introduced for a financial company called LendingClub, where the plan comprises exploring the possibility of onboarding a big data platform along with advanced feature-selection capabilities. The main objectives of such a plan are to increase the company's revenue while reducing the risk of granting loans to borrowers who cannot repay them. In this study, different hypotheses formulated to address the company's concerns are examined, and the results reveal that the loan amount profoundly impacts the number of borrowers charging off their loans. Also, the proposed strategic plan includes onboarding advanced analytics, such as machine learning technologies, that allow the company to build better-generalizing data-driven predictive models. § INTRODUCTION Formulating a strategic plan aligned with a company's business scope allows the company to explore data-driven ways of business improvement and risk mitigation quantitatively while utilizing collected data for statistical applications. The company's business leadership generally organizes joint meetings with internal or external data analysis teams to design a plan for executing business-related statistical analysis. Such projects demonstrate in which areas the company should invest and how to adjust the budget for business verticals with low revenue. Furthermore, statistical applications can determine how to improve staff performance in the workplace. LendingClub, as a peer-to-peer lending company, offers loans and investment products in different sectors, including personal and business loans, automobile loans, and health-related financing loans. LendingClub's business model comprises three primary players: borrowers, investors, and portfolios for issued loans. LendingClub aims to expand its statistical analytics, consisting of infrastructure and software applications, to ultimately develop two meaningful solutions: a) estimating the durations in which clients will pay off loans; and b) 30-minute loan-approval decision-making. To implement these two capabilities, the company has collected data on loans that were granted or rejected over 12 years, including 145 attributes and more than 2 million observations, where 32 features have no missing values across the dataset. To achieve its ultimate targets, LendingClub performs a multi-step statistical analysis to determine whether to accept or reject hypotheses, which enables data scientists and statisticians to select attributes for predictive modeling. LendingClub seeks patterns in the loan data to discover relationships between the loan amount and borrowers who have charged off, as reported by LendingClub <cit.>.
The company assumes a potential correlation between the two features, which would allow it to establish specific loan criteria for the group of applicants who might encounter such an issue. Discovering the correlation enables LendingClub to enhance its risk management portfolio and minimize the risk of losing financial resources, aiming to mitigate the negative impacts of issuing loans to borrowers of this category. Using business statistics, the company seeks a proof of concept for the mentioned ideas before recruiting a third-party software developer to implement a standalone product; therefore, the internal data scientists explore various aspects of such data, not limited to the questions listed above <cit.>. In the first phase, demographic information is extracted from the datasets, and data preprocessing steps, such as data cleaning, are performed to remove any broken data from the database. Next, further investigation of specific data (e.g., the types of loans issued, loans issued by region, and a more in-depth analysis of bad loans) is performed <cit.>. In the second phase, which addresses the business perspective, the company's experts explore the operational side of the business and analyze applicants' income categories. The third phase refers to the risk assessment of issuing loans, which consists of five steps: a) identifying existing risks in the business; b) assessing the importance and role of credit scores in loan approval or denial; c) defining bad loans and risky borrowers; d) examining loans issued by default (pre-approved); and e) exploring risks by targeted criteria <cit.>. The ultimate goals of such extensive analysis are to lead LendingClub's data scientists to explore the feasibility of answering the two questions above based on current data, to provide recommendations for data collection, or to modify the business scope <cit.>. § PROBLEM STATEMENT AND HYPOTHESIS The problem addressed in this work concerns statistical applications at LendingClub, for which three hypotheses are established regarding the relationship between the "Loan Amount" and "Charge OFF Flag" features, examined using various statistical analyses, including hypothesis testing <cit.> and correlation analysis <cit.>. The hypotheses are as follows: * Accepting or rejecting the hypothesis that any relationship exists between the loan amounts and charge-offs * Accepting or rejecting the hypothesis that any relationship exists between higher loan amounts and charge-offs * Accepting or rejecting the hypothesis that any relationship exists between lower loan amounts and charge-offs § STATISTICAL ANALYSIS PIPELINE DESIGN The problem statement consists of three main components: a) data exploration, b) descriptive analysis of loan duration, and c) real-time (fast) loan approval (or denial). Data exploration includes preprocessing, data cleaning, and feature engineering and selection, leading to a meaningful descriptive analysis and an accurate loan-duration prediction. In the real-time step, various statistical techniques are explored, including hypothesis testing, Student's T-Test, and ANOVA testing, as well as statistical models such as linear regression, logistic regression, cluster analysis, and correlation analysis <cit.>.
§.§ Data Exploration Missing values are removed from the loan data, and two attributes are extracted from the preprocessed data shown in Figure <ref>: "loanAmnt", which refers to "the listed amount of the loan applied for by the borrower; if, at some point in time, the credit department reduces the loan amount, then it will be reflected in this value", and "debtsettlementflag", which flags whether or not a charged-off borrower is working with a debt-settlement company. The "debtsettlementflag" – a binary feature – is considered a categorical attribute requiring conversion to numerical equivalents for statistical analysis <cit.>. Also, the histogram of loan amounts shows how borrowers are distributed with respect to the loan amount. §.§ Hypothesis Testing In this experiment, the T-Test is the primary method for deciding whether to accept or reject each hypothesis. The T-Test is a hypothesis-testing method with broad applications in industry due to its simplicity and its convergence capability with a small sample of data <cit.>. Since the T-Test requires only a relatively small subset of data, the loan dataset is shuffled, and a subsample of 1000 observations is randomly selected from the charged-off samples along with 1000 observations randomly selected from the on-time borrowers for further analysis <cit.>. To explore the consistency of the T-Test results, analysis of variance (ANOVA) tests are applied to the same subsets as those used in the previous method. ANOVA tests demonstrate whether such groups exhibit statistically significant differences <cit.>. §.§ Correlation Analysis Correlation analysis is applied to the subsets to show the dependency between two features <cit.>. This analysis can indicate whether the loan amount impacts the number of borrowers charged off. Correlation analysis provides additional exposure to the data, which might strengthen the acceptance or rejection of the three hypotheses <cit.>. §.§ Results Visualization and Interpretation The results of the statistical analysis methods are visualized and interpreted to verify whether the hypotheses are accepted. Also, the visualization of results allows the company's data scientists to explore whether the outcomes from the various techniques converge, for decision-making and conclusion purposes. § SUMMARY OF RESULTS To perform an accurate T-Test, several data requirements must be met: a) test variables are continuous; b) test variables (observations) are independent; c) subsets are randomly selected; d) the data distribution is approximately normal; e) the variances of the subsets and the population are approximately consistent; and f) there are no outliers <cit.>. In addition to these criteria, a balanced dataset design is required to conduct a meaningful ANOVA test, where the number of subjects in each group needs to be equal <cit.>. Also, an ideal correlation analysis requires data to be independently collected as paired samples, preferably with continuous numeric values <cit.>. §.§ Data Analysis The first step of the data analysis is exploring the distribution of observations regarding the number of on-time borrowers versus those who have charged off. The next step is to downsample the charged-off samples into subsets of 1000 observations. The same procedure was applied to the on-time (non-charged-off) borrowers' observations, and 1000 samples were randomly selected; thus, each subset included 2000 samples with the two classes equally distributed <cit.>. The mean, standard deviation, and variance of each subset were calculated; a sketch of this procedure is given below.
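The following Python sketch illustrates the balanced-subset construction, the descriptive statistics, a two-sample T-Test, and a point-biserial correlation between the two features. The file and column names are illustrative assumptions (the text refers to the attributes as "loanAmnt" and "debtsettlementflag"), and the paper itself performed the tests with G*Power rather than scipy.

    import pandas as pd
    from scipy import stats

    # Assumed file/column names; 'charged_off' is a 0/1 flag derived from
    # the "debtsettlementflag" attribute of the preprocessed loan data.
    loans = pd.read_csv("lendingclub_clean.csv")

    charged = loans[loans.charged_off == 1].sample(1000, random_state=0)
    on_time = loans[loans.charged_off == 0].sample(1000, random_state=0)
    subset = pd.concat([charged, on_time])      # 2000 balanced observations

    print(subset.groupby("charged_off").loan_amnt.agg(["mean", "std", "var"]))

    # Two-sample T-Test on the loan amounts of the two groups
    t, p = stats.ttest_ind(charged.loan_amnt, on_time.loan_amnt)
    print(f"t = {t:.3f}, p = {p:.4f}")

    # Point-biserial correlation between charge-off status and loan amount
    r, p_r = stats.pointbiserialr(subset.charged_off, subset.loan_amnt)
    print(f"r = {r:.4f}, p = {p_r:.4f}")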
The statistical measures of the subsets are highly similar, which suggests the need for statistical testing to produce interpretable results. Figure <ref> shows a histogram of each subset, where the number of bins is automatically calculated from the data (bins = 10). The histogram results indicate that most of the issued loan amounts are in the range of [$5000,$20000]. §.§.§ Hypothesis 1 The G*Power statistical software application <cit.> was used to perform a T-Test on each subset, comprising 2000 equally distributed samples of charged-off and on-time borrowers' observations. One-tailed T-Tests were conducted using an alpha error probability of 0.05 and a power of 0.95 (1 – beta error probability) to produce an actual power (the decision-making criterion) for each subset. The results demonstrated that the actual power values were greater than 0.95, suggesting that the null hypothesis can be rejected, meaning that the "Loan Amount" affects whether a borrower charges off. An ANOVA test was conducted on each subset using G*Power; the outcomes demonstrate that the actual power values are higher than 0.95, suggesting that the null hypothesis can be rejected, which means the two groups exhibit variance differences, so that the "Loan Amount" affects whether a borrower charges off. The correlation analysis was performed on each subset and produced scores of -0.005255, 0.061228, and 0.007396 per subset, indicating no strong correlation between the loan amount and the status of charged-off borrowers. The correlation results are not aligned with the T-Tests, suggesting that further analysis is needed. §.§.§ Hypothesis 2 To explore the second hypothesis, regarding a relationship between higher "Loan Amount" and "Charged-off," each subset was sorted in descending order by loan amount, and the top 25% of observations were selected for analysis. The results revealed that all actual power values were higher than 0.95, suggesting that the null hypothesis should be rejected and indicating a strong relationship between the loan amount and charged-off borrowers. §.§.§ Hypothesis 3 The third hypothesis is that the bottom 25% of loan amounts would also show a statistical relationship with charged-off borrowers. Each subset was again sorted in descending order by loan amount, and the bottom 25% of observations were selected. The two-tailed T-Test (conducted with G*Power) revealed a strong relationship between the loan amount and charged-off accounts. § DISCUSSION The company formulated a hypothesis to explore the impact of the "Loan Amount," as an independent variable, on the dependent attribute "Charge OFF Flag," which shows whether a borrower has repaid the loan or charged it off. To do so, LendingClub decided to conduct T-Test and ANOVA hypothesis testing and correlation analysis. The hypothesis testing revealed a statistically significant difference at p-values less than .05, which is interpreted as an indication of the impact of the loan amount on loan repayment. However, the correlation analysis produced a low score, which disagreed with the results of the hypothesis testing, and the company decided to perform a more in-depth analysis to locate the source of the divergence. §.§ Steps in Statistical Analysis Statistical analysis includes various steps, such as data exploration, hypothesis testing, and visualization, with the interpretation of results as the last step, which aims to explain the findings of each step (or most steps) of the analysis <cit.>.
In general, an explanation of statistical results often covers four main areas: a) sample size, b) metrics of central tendency, c) distribution of data, and d) hypothesis testing <cit.>. §.§.§ Dataset or Sample Size The number of observations available for statistical analysis plays a crucial role in interpreting results. This number demonstrates whether the samples (observations) can be considered representative of the analyzed data <cit.>. A significant difference between statistics and machine learning exists in terms of the number of samples required for experiments: for example, 50 observations can represent a population for statistical analysis, whereas a significantly larger dataset is often required for developing a machine learning model. §.§.§ Measures of Central Tendency The mean, median, and mode of the observations used for statistical analysis, along with the variance and standard deviation, reveal the central tendency and spread of the observations <cit.>. Interpreting those metrics enables practitioners to discover outliers in the observations and explore the possibility of removing them from the analysis. Unlike machine learning model development, where outliers might not impact results significantly, outliers here can affect statistical results by biasing them towards the extreme values. §.§.§ Data Distribution The spread of the data, obtained by calculating the observation variance, can show how samples are distributed within a population <cit.>. Also, exploring the data distribution by calculating a histogram of the data can reveal the type of distribution (e.g., a normal distribution). It also indicates whether the data are skewed towards the left or right of the histogram <cit.>. Interpreting the data distribution also reveals whether the data are multimodal, where observations come from two or more distributions. Moreover, such interpretation can be used for accurate data normalization, removing outliers, and properly formulating hypotheses for future analyses or reiterations of the current analysis <cit.>. §.§.§ Hypothesis Testing Interpretation of hypothesis testing comprises two steps: a) exploring the logic of formulating such a hypothesis and b) exploring the results of the hypothesis testing <cit.>. In the first step, statisticians review the reasons for forming the hypothesis by studying documents related to the business aspects of the organization. For example, statisticians might formulate a hypothesis for analysis because they have considered the types/amounts of loans granted as input variables when predicting whether borrowers could repay <cit.>. The logic behind such a hypothesis is explored and interpreted once the data are analyzed and the results produced. The second step is to interpret the hypothesis-testing results, determine whether the hypothesis is accepted or rejected, and explore the confidence interval of such interpretations <cit.>. For example, the interpretation of hypothesis-testing results for types of loans and successful repayment could potentially reveal a) whether the types/amounts of loans are adequate metrics for predicting the risks associated with a borrower; and b) how an organization can mitigate potential risks and update its criteria for granting loans <cit.>. §.§ Limitations in Statistical Analysis Statistical analysis encounters various limitations that make the interpretation of results challenging.
As discussed earlier, the primary challenge of statistical analysis, relative to machine learning techniques, is the number of observations required to perform the analysis <cit.>. A standard practice in statistical analysis is to sample a population randomly and test hypotheses against the resulting subset of data, which can raise concerns about whether the generated subset truly represents the data <cit.>. By contrast, training machine learning algorithms requires a significant amount of data, so practitioners assume that the samples or observations used to train the algorithms represent the entire population <cit.>. Another limitation in interpreting the analysis results is how to relate findings to business problems and interpret the outcomes of hypothesis testing to address business problem statements <cit.>. §.§.§ Small Dataset The size of the dataset or sample used for statistical analysis plays a crucial role in determining the extent to which the results can be generalized <cit.>. A small sample size imposes significant limitations on statistical analysis, since a small dataset may be an unrepresentative sample of the entire population, causing different types of bias in the analysis results <cit.>. Also, a small dataset increases the risk that outliers in the population will negatively impact measures of central tendency calculated from out-of-distribution samples. In addition to the problem of outliers discussed earlier, a small dataset makes splitting the data into training and testing sets highly challenging. Although statistical analysis methods employ all provided samples to implement models based on hypothesis testing, practitioners in the field often use unseen data to validate the hypothesis-testing results <cit.>. Another issue caused by a small sample size is an unpredictable increase in measurement errors, where the error metrics used to evaluate the models produce highly varying results. To overcome the limitations imposed by a small dataset, the primary practice is to randomly shuffle the dataset and generate several subsets of data, repeating the statistical analysis to ensure the results converge <cit.>. §.§.§ Cause and Effect One of the challenges in interpreting statistical results relates to inconsistency between the hypotheses formulated and the outcomes of the testing methods. Practitioners interpreting the statistical results might notice that the results are misaligned with the logic of the hypothesis tests <cit.>. In such ambiguous circumstances, discovering cause and effect in statistical analysis results for specific business use cases is challenging, since the interpretation disagrees with the predefined scenario <cit.>. This issue can arise when the hypothesis-testing design does not cover the useful parameters or when less powerful features and attributes of the data are used for hypothesis testing <cit.>. It sometimes happens that practitioners or business teams helping design such statistical analyses misinterpret the results or overlook some findings and/or implications <cit.>. Another source of issues is a low confidence-interval level and results lacking statistical significance <cit.>. §.§.§ Divergence of Results Obtained from Various Methods A common challenge in interpreting statistical analysis results occurs when the results obtained from various techniques diverge <cit.>.
It is a widespread practice that statisticians design a statistical analysis using multiple techniques, such as the T-Test, ANOVA, or regression, to explore whether the results produced by these techniques align. Agreement between the results from different methods enables an organization to interpret analytical results clearly and make firm recommendations. However, research shows that hypothesis testing and other methods, such as correlation analysis or machine learning, sometimes produce results that contrast with one another <cit.>. Such an issue indicates that a systematic problem might exist in preparing samples or conducting the hypothesis testing. The solution to this type of problem is found case by case, where practitioners more familiar with the organization's business scope can suggest methods that produce results closer to the problem statement. §.§ Business Statistical Analysis and Interpretation Business statistics, which include various types of analysis, focus on statistical methodologies aligned with an organization's business scope to improve the decision-making process, mitigate risks to the organization, and increase revenue <cit.>. Interpretation of such analysis is crucial to the organization, and the process is expected to go beyond a simple report or presentation. The areas covered by business statistics include a) customer behavior prediction and trend extraction; b) data exploration, hypothesis testing, and interpretation, including extensive visualization; c) enhancing business performance from various angles; and d) improving decision-making processes <cit.>. To achieve such targets, business data analysts must understand their organization's business objectives and explore the data and results. Also, root-cause analysis is performed to extract in-depth technical insights regarding the organization's vulnerabilities, enabling the organization to inform its decision-making process <cit.>. §.§ Reflection on the Statistical Analysis Process The findings from the initial statistical application enable the company to redesign its statistical analysis processes to concentrate on those attributes that more substantially impact its business. Feature engineering, a systematic methodology, is necessary to reveal the relationships between dependent attributes and target variables <cit.>. Also, the company aims to explore other features that are highly correlated with potential target variables from the business perspective but uncorrelated with other dependent attributes <cit.>. §.§.§ Potential Improvement The process of statistical analysis at LendingClub requires several changes to better serve the company's business needs. The primary targets are to enhance the process of issuing loans, such as the duration of the loan-approval process, and to mitigate financial risks to the company by offering borrowers a data-driven loan amount. LendingClub intends to apply such changes to the statistical analysis and decision-making process by employing big data infrastructure for advanced multi-model data collection and analytics. In the first step, the company needs a plan demonstrating how to onboard the new technology and its costs. The second step includes a broader statistical analysis, such as hypothesis testing, and uses the current data to assess whether specific statistical applications could broadly improve the company's performance. In the third step, LendingClub conducts research and recruits a third party to develop the required infrastructure.
§.§.§ Required Infrastructure Onboarding a large-scale system, such as a big-data-enabled analytics platform, is a significant change for LendingClub, requiring modifications to everything from databases to reporting systems. The first stage is to decide whether LendingClub should adapt a big data platform to the current system or migrate entirely to the new model. This decision allows the stakeholders to estimate the cost of a big data platform and start planning. Although estimating the cost of system adaptation or migration to the big data platform requires detailed information, migration to a cloud environment offering various big data services, for example, would be a potential expansion of LendingClub's analytics in the future. Figure <ref> illustrates the proposed steps for migrating the LendingClub data collection and analytics pipeline to a cloud-based environment that offers big data services, such as Amazon Web Services (AWS) <cit.>. These steps consist of a) cloud assessment, b) proof of concept, c) data migration, d) application migration, e) leveraging of the cloud, and f) optimization. §.§ Proposed Large-Scale Plan The large-scale plan to enhance the current statistical analysis pipeline consists of two primary phases: a) designing and implementing an end-to-end data collection and processing pipeline that offers big data analytics, and b) increasing the number and quality of features <cit.>. The current pipeline collects data from various sources, and no broadly systematic methodology is employed to acquire such data. Gathering data from different providers (in-house or third-party) involves an extensive preprocessing pipeline, which might remove many observations in order to prepare a consistent dataset. The proposed pipeline illustrated in Figure <ref> offers various capabilities, including big data collection and data-stream processing. The first component of the architecture is a user interface that receives data from external sources; the data can either be stored in a multi-model database or arrive as real-time messages routed into an allocated database. The collected data can be transferred between the data storage and real-time messaging placeholders, which offer big data capabilities for hosting structured and unstructured data. The next architecture layer includes big data components for batch processing, which oversee data preparation and preprocessing for further analysis <cit.>. A similar component, the stream-processing unit, prepares and preprocesses data streams for real-time analysis and applications. The preprocessed data are sent to the next component of the architecture, which encompasses the statistical analysis and machine learning methods; this block is considered the brain that orchestrates the data analytics. Statistical analysis or machine learning outcomes are stored in a "results database." The last layer of this orchestration is the user-interface block, which enables practitioners in the organization to generate reports with visualizations that can be provided to leadership for decision-making purposes. An extra capability of the new architecture is the scheduling of automatic machine learning model training and statistical analyses. The second phase of the new data analytics platform aims to enhance the quality of feature selection, concentrating on those attributes that contribute most to the target variables.
Quarterly statistical analysis and feature engineering demonstrate which features should be collected at higher resolution. The advantage of targeted data collection through particular data attributes is that it reduces the cost of on-demand infrastructure by reducing the load on the architecture's servers and analytical blocks. However, the main disadvantage of such a step is that it decreases the amount of data that can be collected, which might harm statistical analysis or predictive model development. Therefore, the organization must weigh the cost of massive data streaming and collection against the impact of selective data collection. § CONCLUSIONS Statistical applications enable enterprises to establish a data-driven business plan that provides clear objectives to enhance the enterprise's performance, revenue, and risk management. This work summarized a strategic plan informed by an already-performed analysis for LendingClub, a financial company that grants various forms of loans. The statistical results showed that useful logic could be extracted from the currently collected data. Such results enabled LendingClub to improve its business scope and encouraged the company to onboard a big data platform. The plan recommended exploring enhanced feature-engineering capabilities, acquiring large amounts of data per year, and developing predictive models to increase the company's revenue and lessen potential risks. LendingClub's plan also seeks to utilize artificial intelligence and machine learning technologies to implement robust models aligned with the company's business scope.
http://arxiv.org/abs/2307.05452v1
20230711172833
A reproduction of the Milky Way's Faraday rotation measure map in galaxy simulations from global to local scales
[ "Stefan Reissl", "Ralf S. Klessen", "Eric W. Pellegrini", "Daniel Rahner", "Rüdiger Pakmor", "Robert Grand", "Facundo Gomez", "Federico Marinacci", "Volker Springel" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.SR" ]
^1Universität Heidelberg, Zentrum für Astronomie, Institut für Theoretische Astrophysik, Albert-Ueberle-Str. 2, 69120 Heidelberg, Germany ^2Universität Heidelberg, Interdisziplinäres Zentrum für Wissenschaftliches Rechnen, Im Neuenheimer Feld 205, 69120 Heidelberg, Germany ^3Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85748 Garching, Germany ^4Astrophysics Research Institute, Liverpool John Moores University, 146 Brownlow Hill, Liverpool, L3 5RF, UK ^5Instituto de Investigación Multidisciplinar en Ciencia y Tecnología, Universidad de La Serena, Raúl Bitrán 1305, La Serena, Chile ^6Departamento de Física y Astronomía, Universidad de La Serena, Av. Juan Cisternas 1200 Norte, La Serena, Chile ^7Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA Magnetic fields are of critical importance for our understanding of the origin and long-term evolution of the Milky Way. This is due to their decisive role in the dynamical evolution of the interstellar medium (ISM) and their influence on the star-formation process <cit.>. Faraday rotation measures (RM) along many different sightlines across the Galaxy are a primary means to infer the magnetic field topology and strength from observations <cit.>. However, the interpretation of the data has been hampered by the failure of previous attempts to explain the observations in theoretical models and to synthesize a realistic multi-scale all-sky RM map <cit.>. We here utilize a cosmological magnetohydrodynamic (MHD) simulation of the formation of the Milky Way, augment it with a novel star cluster population synthesis model for a more realistic structure of the local interstellar medium <cit.>, and perform detailed polarized radiative transfer calculations on the resulting model <cit.>. This yields a faithful first-principles prediction of the Faraday sky as observed on Earth. The results reproduce the observations of the Galaxy not only on global scales, but also on local scales of individual star-forming clouds. They also imply that the Local Bubble <cit.> containing our Sun dominates the RM signal over large regions of the sky. Modern cosmological MHD simulations of the Milky Way's formation, combined with a simple and plausible model for the fraction of free electrons in the ISM, explain the RM observations remarkably well, thus indicating the emergence of a firm theoretical understanding of the genesis of magnetic fields in our Universe across cosmic time. Magnetic fields significantly influence the kinematical and morphological properties of the ISM and contribute to regulating the birth of new generations of stars <cit.>. To better understand this connection, several observational techniques have been developed and perfected over the last century. For example, dust polarization measurements of aligned dust grains <cit.> and synchrotron emission <cit.> allow us to infer the projected field orientation on the plane of the sky, while estimates of the line-of-sight (LOS) field strength can be obtained from the Zeeman effect <cit.>. However, these methods are only applicable to a limited set of parameters <cit.>, and additional uncertainties arise from our incomplete understanding of the microphysics involved, such as the grain alignment mechanisms <cit.>.
A complementary approach is to determine the characteristic Faraday rotation measure (RM). It is based on the fact that polarized radiation can change its polarization angle as it passes through a magnetized and ionized medium. The observed signal depends on the magnetic field strength and direction as well as on the density of free electrons, and on the radiation frequency <cit.>. Since the early 1960s, numerous attempts have been made to reconstruct an all-sky RM map of the Milky Way <cit.> from observations of pulsars and extra-galactic background sources. Complementary synthetic data has proven its worth for systematically interpreting and analyzing this plethora of observations <cit.>. However, all existing approaches are hampered by the fact that the distribution of thermal electrons as well as the detailed structure of the magnetic field are not well known. While the large-scale properties are usually well constrained <cit.>, crucial information about individual star-forming regions and clouds is missing, which is needed to reproduce the observed small-scale features. Our approach goes beyond the current state-of-the-art and employs data from a high-resolution cosmological simulation to reconstruct the large-scale properties of the galaxy combined with a novel star cluster population synthesis model, which introduces the missing small-scale physics. Specifically, we take the Au-6 galaxy from the Auriga project <cit.>, which is able to reproduce the global star-formation rate and structure of the Milky Way very well, while at the same time predicting the amplification of minute primordial magnetic seed fields to micro-Gauss strength over secular timescales. We keep the overall gas density and magnetic field structure, but we discard the original stellar population and distribution of free electrons. We then synthesize a new population of star clusters and calculate the corresponding radiative and mechanical feedback based on the WARPFIELD cloud-cluster evolution method <cit.>. Next, we obtain the corresponding emission from each cluster across the electromagnetic spectrum <cit.>, use the polarized radiative transfer code POLARIS <cit.> to build up the spatially varying interstellar radiation field in the galaxy, and from that reconstruct the distribution of free electrons in the ISM, as discussed by Pellegrini and colleagues <cit.>. Finally, we perform a second POLARIS sweep to calculate synthetic Faraday RM all-sky maps <cit.> from varying positions within the galaxy. The total integrated angle of linear polarization for an observer follows as χ_obs = χ_source + RM × (c/ν)^2, where χ_source is the polarization angle of the source and ν is the observed frequency. The integral RM = e^3/(2π m_e^2 c^4) ∫_0^s_obs n_th(s) B_||(s) ds defines the rotation measure <cit.> in the non-relativistic limit along the LOS s towards the observer, where c is the speed of light, e and m_e are the electron charge and mass, respectively, and B_|| is the LOS magnetic field component. Altogether we compute high-resolution Faraday RM all-sky maps with a total number of N=786,432 pixels for ten distinct observer positions within the Au-6 galaxy (which we denote P01 - P10). They are placed at roughly the same distance from the galactic center as the Sun (that is at galactocentric radii 8 kpc ≤ R ≤ 10 kpc) and are located in a gas density cavity similar to the Local Bubble that defines our own Galactic environment <cit.>, as illustrated in Figure 6 of Pellegrini and colleagues <cit.>.
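As a concreteness check, the RM integral above can be evaluated numerically for a discretized sightline. The following Python sketch is our own illustration, not part of the published pipeline; it uses the standard prefactor 0.812 rad m^-2, which is the value of e^3/(2π m_e^2 c^4) when n_th is given in cm^-3, B_|| in μG, and the path length in pc, and the sample sightline values are hypothetical.

```python
import numpy as np

def rotation_measure(n_e, B_par, ds_pc):
    """Faraday rotation measure along one discretized sightline.

    n_e   : free-electron density samples [cm^-3]
    B_par : line-of-sight magnetic field samples [uG]
    ds_pc : path-length elements [pc]
    Returns RM in rad m^-2; the prefactor 0.812 is the standard
    value of e^3 / (2 pi m_e^2 c^4) in these mixed units.
    """
    return 0.812 * np.sum(n_e * B_par * ds_pc)

# toy sightline: 1 kpc in 1 pc steps through a warm ionized medium
s = np.arange(1000.0)                       # pc
n_e = 0.03 * np.ones_like(s)                # cm^-3, typical WIM value
B_par = 2.0 * np.cos(2 * np.pi * s / 500)   # uG, field with one reversal
print(rotation_measure(n_e, B_par, np.gradient(s)))
```

A field reversal along the path, as in this toy example, largely cancels the integrated RM, which is why sign transitions in the maps trace the global field morphology.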
The existing RM all-sky maps of the Milky Way are built from an ensemble of extragalactic polarized radio sources. Certain patches of the sky with missing data or with reduced coverage are reconstructed by applying Bayesian statistics, and consequently, these regions appear to be smoother than the average. We compare our results with the observed all-sky RM maps presented by Oppermann and collaborators <cit.> and by Hutschenreuter and colleagues <cit.>, who included a larger source number and employed a more sophisticated reconstruction algorithm (O12 and H22 hereafter). We note that it has recently been pointed out <cit.> that high RM sightlines may have gone undetected for decades, either from instrumental limitations or from biases in the source selection, and that this introduces a systematic shift of these maps towards low RM values. Figure <ref> shows the O12 and H22 data (top row) as well as two synthetic RM maps from the Auriga galaxy (bottom row) from one exemplary observer's position (P01) at the solar circle. The synthetic map at the left takes the galaxy as is <cit.>, based on the data of the cosmological simulation only, and the right one combines the simulation data with the detailed model for star-cluster formation and evolution <cit.> described above. It is immediately obvious that the Auriga-only map misses small-scale features, whereas the full model agrees remarkably well with the observations. It has a comparable level of fluctuations on small angular scales and gives the right RM magnitude in all regions of the sky. We also note that all maps exhibit a reversal of the magnetic field direction at the Galactic center, as indicated by a transition from positive to negative RM values. This transition is a distinct feature of the magnetic field morphology of the Milky Way, revealing a globally toroidal field structure. It is also well visible in Figure <ref>, where we plot the average RM along the Galactic longitude l (left) and latitude b (right) coordinates for all observer positions in the model galaxy. The field reversal in the center is a global property of the disk and therefore independent of the location of the measurement. Similarly, all models exhibit the highest amount of RM near the disk midplane (b=0^∘) with decreasing values towards the poles (b=± 90^∘), again in agreement with O12 and H22, indicating that the magnetic field quickly decreases further up and down into the Galactic halo. We also mention that our model slightly overestimates the amount of RM at large b with an offset of 0-30 rad m^-2. Since the signal at large latitudes b is mostly produced by material that is nearby <cit.>, this deviation emphasizes again the importance of properly accounting for the immediate surroundings of the Sun, and it is an indication of the limitations of current galaxy formation simulations when it comes to reproducing the Local Bubble <cit.>. For a more quantitative comparison, we compute the multipole expansion of the RM maps and display the result in Figure <ref>. Our synthetic RM spectra are consistent with the O12 and H22 data for all multipole moments ℓ≲ 30 at all observer positions, whereas we predict more small-scale structure at larger ℓ. We speculate that this may be a result of the bias in the Bayesian statistics of O12 and H22 towards lower RM values <cit.>.
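The multipole expansion quoted above is the angular power spectrum of the HEALPix RM map. A minimal sketch of how such a spectrum can be computed with the healpy package is given below; the Gaussian random map only stands in for the actual synthetic POLARIS output, so the numbers are purely illustrative.

```python
import healpy as hp
import numpy as np

nside = 256                       # matches the H22 map resolution
npix = hp.nside2npix(nside)       # = 786,432 pixels, as quoted in the text

# placeholder map: random RM values standing in for the POLARIS output
rm_map = np.random.normal(0.0, 50.0, npix)   # rad m^-2

# multipole expansion: angular power spectrum C_ell of the RM map
cl = hp.anafast(rm_map, lmax=64)
ell = np.arange(cl.size)

# comparing synthetic and observed spectra then amounts to plotting
# cl for the Auriga-only map, the full model, and the O12/H22 maps
```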
In contrast, the expansion of the original Au-6 galaxy <cit.> deviates from the observations already above ℓ≈10, indicating again the importance of small-scale physics for the interpretation of the Milky Way data. We stress that including a realistic star-cluster population synthesis model is indispensable for reproducing the observed small-scale (large ℓ) features. We also note that similar conclusions have been reached by Beck and collaborators <cit.>; however, they did so by introducing a small-scale random magnetic field component, while still lacking the contribution of a proper multi-scale thermal electron model. Altogether, we find that the general trends in the RM maps presented here agree well for different positions within the galaxy, and they all reproduce the observed data. Consequently, we conclude that current high-resolution MHD simulations of the formation and evolution of the Milky Way in a cosmological context, combined with adequate models of star formation and stellar feedback, can well explain the properties of magnetic fields in spiral galaxies. This marks an important step forward in our theoretical understanding of magnetic field amplification by various forms of the dynamo process acting in these systems <cit.>. We note that the methods presented here can also be applied to RM observations in the high-redshift Universe <cit.> and can thus help to monitor the genesis of magnetic fields over cosmic timescales. However, we also caution that most of the RM signal above and below the Galactic plane might be dominated by the local environment <cit.>. Consequently, maps of the Faraday rotation of the Milky Way cannot be adequately interpreted without knowledge of the conditions in our Local Bubble <cit.>. This has been proposed before <cit.>, and it is also implied by the dust polarization measurements of the Planck satellite <cit.>. Distinct observers in different parts of the Galaxy would see different local magnetic field configurations and electron densities. Our results suggest that current measurements of the Milky Way RM carry a level of uncertainty that was previously not fully appreciated and that can only be accounted for on a statistical basis by detailed modeling efforts as presented here. § METHODS To construct the synthetic all-sky Faraday RM map of the Milky Way, we take one of the simulations from the Auriga project <cit.>. The galaxy Au-6 is the result of a very high-resolution cosmological MHD zoom-in simulation from initial conditions that are specifically selected to reproduce key features of the Local Group. The calculation includes line cooling, stellar evolution, galactic winds, and the growth of black holes and their associated AGN feedback <cit.>. All simulations include self-consistently evolving magnetic fields on a Voronoi grid <cit.> with the moving mesh code AREPO <cit.>. The galaxy Au-6 is selected as a Milky Way analogue based on its size, total mass, and star formation rate. In the next step we follow the procedure as outlined by Pellegrini and colleagues <cit.> (hereafter P21) and synthesize a new cluster population and electron fractions that are more faithful to star-forming physics and small-scale density structures known from observations. To do so, we replace the original star particles and electron fractions from the simulation and instead reconstruct this information from first principles employing the WARPFIELD cloud/cluster evolution model <cit.>.
It follows the time evolution and structure of the stellar wind bubble, HII region, and photodissociation region (PDR) surrounding a cluster of massive stars in spherical symmetry. WARPFIELD accounts self-consistently for the physics of stellar winds, supernovae, radiation pressure, ionization, and gravity. It solves explicitly for the density structure adopted by the gas in response to the action of these various feedback processes, and therefore allows one to account for the evolution of the luminosity and emerging spectrum. We determine the local gas mass within equidistant annuli of the Au-6 galaxy, using linearly spaced radial bins originating at the galactic center-of-mass, and sample the cluster mass function by randomly depositing mass with a rate obtained from the Kennicutt-Schmidt <cit.> relation. For each annulus and cluster, a random location is selected. If the total gas mass within a distance less than 50 pc from that location is larger than the cluster mass we have drawn, then the cluster is placed there, and the corresponding gas mass is subtracted from the original Voronoi grid of the Au-6 galaxy. As a result of this procedure, the star clusters are typically inserted near density peaks (the acceptance step is illustrated in the code sketch below). This is consistent with observations, where young clusters are seen in the vicinity of dense molecular gas, but no longer are deeply embedded into their parental clouds due to efficient stellar feedback <cit.>. This leads to a realistic stellar population that is characterized by the number of clusters for a given mass and age in each annulus. An illustration of this approach is presented in Figure 6 of P21. The physical properties of each cluster including all emission properties are obtained from the WARPFIELD database <cit.> combined with the spectral synthesis code CLOUDY[http://www.nublado.org/] v17.00 <cit.>. This information <cit.> is then used to compute the impact of ionizing radiation from each individual cluster on the ambient Galactic ISM in order to (re)populate the entire Au-6 Voronoi grid with thermal electrons. We assume solar metallicity and cosmic-ray ionization rates, and an interstellar radiation field (ISRF) that is representative of the Milky Way <cit.>. Note that we neglect the population of old field stars in our analysis, because they do not contribute to the ISRF at ionizing frequencies. The resulting distribution of free electrons agrees very well with the observationally-inspired models for the Milky Way by Cordes & Lazio <cit.> and by Yao et al. <cit.>. This is illustrated in Figure 8 of P21. For a detailed description of the magnetic field structure of the Auriga-6 galaxy, we refer to Pakmor et al. <cit.>. With this approach, we have all the necessary information to compute the integral (<ref>) along arbitrary sightlines through the galaxy. Finally, we employ the radiative transfer (RT) code POLARIS[http://www1.astrophysik.uni-kiel.de/∼polaris/] <cit.> capable of dust polarization calculations <cit.> as well as Zeeman splitting line RT <cit.>, and solve the RT problem on the native Voronoi grid of AREPO simulations. We calculate the resulting synthetic Faraday RM all-sky maps from ten selected positions in the Au-6 galaxy, integrating equation (<ref>) for many different sightlines across the entire galaxy and covering the entire sky with high angular resolution <cit.>. For our calculations of the synthetic RM maps we apply a HEALPIX[http://healpix.jpl.nasa.gov] resolution of N_side=256, leading to a total of 786,432 pixels.
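The acceptance step of the cluster placement described above can be summarized in a few lines of Python. The sketch below is a strongly simplified illustration under stated assumptions: it omits the per-annulus bookkeeping and the Kennicutt-Schmidt sampling rate, and the function and variable names are our own, not those of the actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

def place_clusters(gas_pos, gas_mass, cluster_masses, r_max_pc=50.0):
    """Simplified cluster placement: accept a trial location only if the
    gas within r_max_pc exceeds the drawn cluster mass, then remove that
    mass from the grid (distributed proportionally over nearby cells).

    gas_pos        : (N, 3) Voronoi cell positions [pc]
    gas_mass       : (N,) cell gas masses [Msun], modified in place
    cluster_masses : masses drawn from the cluster mass function [Msun]
    """
    placed = []
    for m_cl in cluster_masses:
        i = rng.integers(len(gas_pos))                # random trial cell
        d = np.linalg.norm(gas_pos - gas_pos[i], axis=1)
        near = d < r_max_pc
        m_near = gas_mass[near].sum()
        if m_near > m_cl:                             # enough local gas?
            placed.append(gas_pos[i])
            gas_mass[near] -= m_cl * gas_mass[near] / m_near
    return np.array(placed)

# hypothetical toy data: 5000 cells in a 1 kpc box
pos = rng.uniform(-500, 500, size=(5000, 3))
mass = rng.uniform(10.0, 1.0e3, size=5000)
clusters = place_clusters(pos, mass, rng.uniform(1e2, 1e4, size=50))
print(len(clusters), "clusters placed")
```

Because dense regions contain more gas within 50 pc, trial locations there are accepted preferentially, which reproduces the tendency to insert clusters near density peaks.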
This resolution is identical to that of the observed H22 map <cit.> but larger than that of the O12 map by a factor of four. The locations of the fictitious observers in the Au-6 galaxy are selected such that the distance to the galactic center is between 8 kpc and 10 kpc, with the solar value being 8.5 kpc, and that they are placed in a Local Bubble-like region, where previous supernovae have created a low-density cavity, as illustrated in Figure 6 of P21. § CODE AVAILABILITY Cluster properties and ionization are calculated with the WARPFIELD code <cit.> and the spectral synthesis code CLOUDY v17.00 (http://www.nublado.org/), respectively. Cosmological simulations are performed by the moving mesh code AREPO <cit.> (https://arepo-code.org/wp-content/userguide/index.html), and for the RT post-processing we utilize the RT code POLARIS <cit.> (https://portia.astrophysik.uni-kiel.de/polaris/). We used Python and its associated libraries including astropy, numpy, and matplotlib for data analysis and presentation. § CORRESPONDING AUTHOR The corresponding author is Stefan Reissl. Please send any requests for further information or data to [email protected]. § ACKNOWLEDGEMENTS S.R., R.S.K., E.W.P., and D.R. acknowledge support from the Deutsche Forschungsgemeinschaft in the Collaborative Research Center (SFB 881, ID 138713538) “The Milky Way System” (subprojects A1, B1, B2, and B8) and from the Heidelberg Cluster of Excellence (EXC 2181, ID 390900948) “STRUCTURES: A unifying approach to emergent phenomena in the physical world, mathematics, and complex data”, funded by the German Excellence Strategy. R.S.K. also acknowledges funding from the European Research Council in the ERC Synergy Grant “ECOGAL – Understanding our Galactic ecosystem: From the disk of the Milky Way to the formation sites of stars and planets” (ID 855130). RG acknowledges support from an STFC Ernest Rutherford Fellowship (ST/W003643/1). F.A.G. acknowledges financial support from CONICYT through the project FONDECYT Regular Nr. 1181264, and funding from the Max Planck Society through a Partner Group grant. The project benefited from computing resources provided by the State of Baden-Württemberg through bwHPC and DFG through grant INST 35/1134-1 FUGG, and from the data storage facility SDS@hd supported through grant INST 35/1314-1 FUGG. The Heidelberg team also thanks the Leibniz Computing Center (LRZ) for computing time provided for project pr74nu. § AUTHOR CONTRIBUTIONS S.R. has run all polarized radiative transfer calculations and has performed most of the analysis. The text was jointly written by S.R. and R.S.K. The WARPFIELD cloud-cluster evolution model was mostly contributed by E.W.P. and D.R. The Auriga-6 data and support with the data handling have been provided by R.P., R.G., F.G., F.M., and V.S.
http://arxiv.org/abs/2307.05585v1
20230710150855
Speed and Acceleration of CMEs Associated with Sustained Gamma-Ray Emission Events Observed by Fermi/LAT
[ "P. Mäkelä", "N. Gopalswamy", "S. Akiyama", "H. Xie", "S. Yashiro" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.SR" ]
Pertti Mäkelä [email protected], [email protected] 0000-0002-0786-7307]Pertti Mäkelä The Catholic University of America 620 Michigan Ave., N.E. Washington, DC 20064, USA NASA Goddard Space Flight Center 8800 Greenbelt Road Greenbelt, MD 20771, USA NASA Goddard Space Flight Center 8800 Greenbelt Road Greenbelt, MD 20771, USA The Catholic University of America 620 Michigan Ave., N.E. Washington, DC 20064, USA NASA Goddard Space Flight Center 8800 Greenbelt Road Greenbelt, MD 20771, USA The Catholic University of America 620 Michigan Ave., N.E. Washington, DC 20064, USA NASA Goddard Space Flight Center 8800 Greenbelt Road Greenbelt, MD 20771, USA The Catholic University of America 620 Michigan Ave., N.E. Washington, DC 20064, USA NASA Goddard Space Flight Center 8800 Greenbelt Road Greenbelt, MD 20771, USA The sustained gamma-ray emission (SGRE) from the Sun is a prolonged enhancement of >100 MeV gamma-ray emission that extends beyond the flare impulsive phase. The origin of the >300 MeV protons resulting in SGRE is debated, both flares and shocks driven by coronal mass ejections (CMEs) being the suggested sites of proton acceleration. We compared the near-Sun acceleration and space speed of CMEs with 'Prompt' and 'Delayed' (SGRE) gamma-ray components <cit.>. We found that 'Delayed'-component-associated CMEs have higher initial acceleration and space speed than 'Prompt-only'-component-associated CMEs. We selected halo CMEs (HCMEs) associated with type II radio bursts (shock-driving HCMEs) and compared the average acceleration and space speed between HCME populations with or without SGRE events, major solar energetic particle (SEP) events, metric, or decameter-hectometric (DH) type II radio bursts. We found that the SGRE-producing HCMEs associated with a DH type II radio burst and/or a major SEP event have higher space speeds and especially initial accelerations than those without an SGRE event. We estimated the radial distance and speed of the CME-driven shocks at the end time of the 2012 January 23 and March 07 SGRE events using white-light images of STEREO Heliospheric Imagers and radio dynamic spectra of Wind WAVES. The shocks were at the radial distances of 0.6–0.8 au and their speeds were high enough (≈975 km s^-1 and ≈750 km s^-1, respectively) for high-energy particle acceleration. Therefore, we conclude that our findings support the CME-driven shock as the source of >300 MeV protons. § INTRODUCTION The sustained gamma-ray emission (SGRE) from the Sun is a prolonged enhancement of >100 MeV gamma-ray emission that extends beyond the flare impulsive phase. SGRE typically lasts for several hours, extending well beyond the end of the associated soft X-ray flare emission. The first SGRE event at energies above 100 MeV was detected on 1991 June 15 by the Gamma-1 telescope on board the Gamma spacecraft and it lasted at least 2.16 hours <cit.>. Similar observation of a long-duration >50 MeV gamma-ray emission was reported by <cit.> during the 1991 June 11 flare. The >100 MeV SGRE is produced by >300 MeV protons precipitating from the solar corona into the solar chromosphere, where their interactions with the dense plasma layers create pions, which then decay into the observed >100 MeV gamma-rays <cit.>. The dominant source of >100 MeV gamma-rays is neutral pion decay <cit.>. <cit.> first reported a clear detection of >40 MeV gamma-rays that require pion production during the extended phase of the 1982 June 3 gamma-ray flare. 
SGRE events were originally called long duration gamma-ray flares (LDGRFs) <cit.>. Nowadays they are also known as late-phase >100 MeV gamma-ray emission <cit.> events. A review of gamma-ray observations analogous to Gamma-1 measurements by <cit.> listed 13 LDGRFs between 1982–1991. A few more early events have been discovered from observations by non-dedicated gamma-ray telescopes <cit.>. Most recently, observations by the Large Area Telescope <cit.> on board the Fermi satellite have shown that SGRE events are relatively common <cit.>. The >100 MeV SGRE event on 2012 March 7 was observed to last over 20 hours <cit.>. The origin of the >300 MeV protons producing SGRE is still debated. <cit.> studied the 1982 June 03 event and suggested a two-phase particle acceleration scenario, where a short-duration impulsive-phase acceleration is followed by a second acceleration phase, probably due to protons accelerated by coronal shocks and resulting in SGREs <cit.>. <cit.> investigated the same 1982 June 03 gamma-ray flare and found a good agreement between their model of turbulent solar flare loops and the observed gamma-ray light-curves, including the extended emission phase, which their model attributed to delayed protons diffusing both in momentum space and spatially in the flare loops. The flare loop scenario requires that flare-accelerated protons remain trapped and/or be continuously re-accelerated in the coronal loops long after the X-ray flare itself has ended. However, trapping of high-energy protons in coronal loops for several hours requires force-free loops <cit.> with a sufficiently low density and turbulence level <cit.>. As an alternative to particle trapping in the coronal loops, <cit.> suggested a continuous stochastic acceleration due to additional pulses of energy that could explain gamma-ray observations during the extended phase of the 1991 June 15 LDGRF <cit.>. Gamma-ray-line observations of the behind-the-limb flare on 1989 September 29 were interpreted to require a spatially extended gamma-ray source and hence to suggest shocks driven by fast and wide coronal mass ejections (CMEs) as a likely source of the gamma-ray-emission producing particles <cit.>. Recent LAT observations of SGRE events during eruptions occurring behind the solar limb have confirmed that an extended source of gamma-rays must exist at the Sun <cit.>. The CME-driven shock naturally extends over large regions of solar surface, allowing the shock-accelerated protons to have access to areas far from the behind-the-limb eruption site. <cit.> forward modelled the CME flux rope and the surrounding shock in the 2014 September 1 behind-the-limb event and found that the Fermi/LAT SGRE source was located far from the flare site, in the space between the flux rope and shock, confirming the extended nature of the emission. <cit.> suggested another scenario where closed magnetic loops extended up to the height of several solar radii will capture high-energy protons that might be accelerated by a CME shock and subsequently the loops retract and enable a sufficiently large number of >300 MeV protons to interact with the solar atmosphere. Recently, <cit.> compared the estimated fluxes of gamma-ray producing particles precipitating into the solar atmosphere with the fluxes of SEPs escaping into interplanetary space and did not find significant correlation. They suggested that the lack of correlation rules out the CME-driven shock as a common source of both fluxes.
However, <cit.> pointed out that the correlation is high when systematic effects are corrected differently. <cit.> compared the SGRE time profile observed during the ground level enhancement (GLE) on 2017 September 10 with the time profiles of simulated shock parameters and found a good match between them, supporting the CME shock as a common source of SGRE-producing protons at the Sun and GLE protons at 1 au. <cit.> studied properties of flares and CMEs with and without SGREs. They found that SGRE events are associated with intense X-class flares but only one-third of the X-class solar flares Fermi/LAT observed have an SGRE event. They also note that fast and wide CMEs are associated with SGRE events. Therefore, their results on the flare and CME associations favor the CME-driven shock as the source of >300 MeV protons. Additional support for the CME-shock scenario is provided by the correlation of the SGRE durations with the durations and the end frequencies of type II radio bursts <cit.>. Figure <ref> shows two examples of concurrent SGRE events and type II radio bursts in January and March 2012. Although type II radio bursts are produced by CME-shock accelerated electrons, they indicate the presence of a strong shock that could also accelerate protons to high energies. Therefore, the correlations suggest that CME-driven shocks could be the source of both the electrons resulting in the decameter-hectometric (DH) type II radio bursts and the >300 MeV protons generating the SGRE events. <cit.> investigated the EUV wave connection to the behind-the-limb (BTL) flare at S20E140 on 2021 July 17. They found that the time when the EUV wave crosses the limb onto the visible disk and the onset of the LAT >100 MeV flux enhancement are concurrent. They also found a coupling between the peak times of the time derivative of the EUV wave intensity profile observed at 193 Å and the >100 MeV gamma-ray flux, suggesting that the EUV wave and the acceleration of the SGRE-producing protons are connected. They found the correlation to be valid in three other Fermi/LAT BTL flares. <cit.> conclude that the correlation between the derivative of the EUV wave intensity and gamma-ray flux and the near-simultaneous appearance of a complex type II radio burst indicates that radio, EUV and gamma-ray emissions share the same source (CME-shock), although the emissions originate at different heights in the corona. Back-precipitation of shock-accelerated protons has been studied using numerical simulations, but the results so far have not been consistent with one another. <cit.> modelled particle precipitation including enhanced turbulence and found that scattering increases back-precipitation; even so, the fraction of protons able to precipitate down to the radial distance of 1 R_⊙, relative to the injected back-propagating protons, is less than 1%. The precipitation fraction decreases as a function of the radial distance of the CME shock. Therefore, they conclude that the CME-driven shocks cannot provide a sufficient flux of >300 MeV protons to explain the SGRE events. Opposite conclusions in support of a CME-shock as the source of the gamma-ray-producing protons have been obtained by <cit.> who studied the Fermi behind-the-limb flare on 2014 September 1.
Their simulations of the CME-driven shock indicated that the quasi-perpendicular part of the shock had a magnetic connection to the gamma-ray source at the front-side of the Sun and the shock compression ratio increase matched the increase in the observed gamma-ray emission. <cit.> simulated proton acceleration in the CME-driven shocks during the 2012 January 23 and May 17 SGRE events. The 2012 May 17 SGRE event was also observed as a GLE by neutron monitors. They concluded that proton acceleration by coronal shocks and diffusive downstream particle transport could explain the SGRE events. However, the authors of the above-mentioned studies suggest that more elaborate MHD models for the particle transport back to the Sun are required because of the complex structure of the magnetic fields near the Sun, which current simulation efforts cannot fully replicate. The lack of direct observations of the precipitating protons close to the Sun leaves open the question of whether they can propagate deep enough back into the solar atmosphere. The initial acceleration and speed of the CME in part control the formation height and strength of the shock, which in turn affect the particle acceleration efficiency of the shock. Therefore, the CME acceleration and speed provide a proxy for the effectiveness of high-energy particle acceleration in the CME-driven shocks. <cit.> studied SGRE association with on-disk CMEs producing major SEP events and HCMEs with sky-speeds ≥1800 km s^-1 during cycle 24. They investigated the initial acceleration and space speed of the CMEs, which they defined to be the instantaneous peak space speed and acceleration obtained from forward fitting of the graduated cylindrical shell (GCS) flux rope model <cit.> to the EUV and coronagraph images of the CMEs. They found that the peak space speed and peak initial acceleration of the SGRE-producing CMEs are 2516 km s^-1 and 3.87 km s^-2, respectively. <cit.> suggest that the close connection they found between CME kinematics and the SGRE events gives support to the CME-shock scenario. In addition to SEP events, type II radio bursts are related to particle acceleration by CME-driven shocks. In this report we estimate the initial acceleration and space speed of the CMEs associated with the Fermi/LAT solar flares (FLSFs) during solar cycle 24 listed by <cit.>. In order to evaluate the feasibility of the CME-driven shocks in producing SGRE events, we compare the average initial acceleration and space speed of CME populations associated with SGRE and SEP events and type II radio bursts. We use space speeds obtained by applying a geometrical correction to close-to-the-limb CMEs or by applying the model by <cit.> to HCMEs. Initial acceleration is estimated by assuming that the CME obtains its estimated space speed during the interval extending from the onset time to the peak time of the associated soft X-ray flare <cit.>. In addition, we estimate the radial distance and the space speed of shocks at the end time of the two longest-duration SGRE events on 2012 January 23 and March 07. § DATA In the analysis we use the catalog published by <cit.> that contains 45 FLSFs with >30 MeV gamma-ray emission in the period 2010 January–2018 January. We do not repeat here all the details of the event data analysis, which are given in <cit.>. We briefly describe their categorization method of FLSFs. <cit.> characterized the light curves of the FLSFs based on the associated hard X-ray (HXR) observations made by the Fermi Gamma-ray Burst Monitor <cit.>.
If the early evolution of the gamma-ray emission was synchronous with the Fermi/GBM HXR evolution, the flare was deemed to have an impulsive 'Prompt' component lasting ≲10 minutes. If the flare had a second phase of gamma-ray emission without a corresponding HXR evolution, the flare was deemed to have a gradual 'Delayed' component that could last up to ≈20 hours. <cit.> found that a total of 39 out of the 45 FLSFs had a detectable level of >100 MeV emission. One should note that Fermi/LAT does not observe the Sun continuously; the average LAT measurement interval lasts about 30 minutes <cit.>. Of those 45 FLSFs, they classified 6 flares as 'Prompt only' and 4 flares as 'Delayed only'. In 10 flares both the 'Prompt' and 'Delayed' emission were detected by LAT and 6 flares were detected with LAT Low Energy (LLE) analysis only. The existence of the DH type II radio bursts is based on Wind spacecraft's radio and plasma wave instrument <cit.> observations (<https://cdaw.gsfc.nasa.gov/CME_list/radio/waves_type2.html>), STEREO/WAVES instrument <cit.> observations, and on the analysis by <cit.>. The metric type II radio burst and soft X-ray flare observations are obtained from the NOAA Solar and Geophysical Event Reports. We adjusted the NOAA-reported flare onset times in some events after inspecting concurrent EUV images and soft X-ray curves of the solar eruption. The CME data near the Sun is provided by the Large Angle and Spectrometric Coronagraph <cit.> on the Solar and Heliospheric Observatory <cit.> spacecraft. The CME data is collected from the SOHO/LASCO CME Catalogs (<https://cdaw.gsfc.nasa.gov/CME_list/index.html>, <https://cdaw.gsfc.nasa.gov/CME_list/halo/halo.html>). SEP event data are from the Major SEP Event list (<https://cdaw.gsfc.nasa.gov/CME_list/sepe/>) and from the GOES-equivalent >10 MeV intensities calculated using data provided by the High Energy Telescope <cit.> onboard STEREO. For the shock distance estimation at the end of the SGRE event, we used white-light images of the Sun Earth Connection Coronal and Heliospheric Investigation <cit.> Heliospheric Imagers <cit.> onboard the Solar Terrestrial Relations Observatory <cit.> spacecraft. The HI images were provided by the STEREO Archive maintained by the UK Solar System Data Centre (<https://www.ukssdc.ac.uk/solar/stereo/data.html>). To identify the associated CMEs, we inspected the CME catalogues provided by the Heliospheric Cataloguing, Analysis and Techniques Service (HELCATS, <https://www.helcats-fp7.eu/>). § ESTIMATION METHOD OF THE CME INITIAL ACCELERATION The initial acceleration of the CME near the Sun is difficult to measure because the cadence of white-light coronagraphs is limited. In our study we follow the method previously used by <cit.> and <cit.>. We assume that the CME accelerates from rest to its final maximum speed, which it reaches at the peak time of the associated soft X-ray flare. <cit.> have shown that the main acceleration phase of the CME coincides with the impulsive phase of the associated X-ray flare. Therefore, we calculate the initial acceleration a of the CME with the formula a=V_Space/(t_FlarePeak-t_FlareOnset), where V_Space is the estimated space speed of the CME and t_FlarePeak and t_FlareOnset are the flare peak and onset times, respectively. The space speed of halo CMEs (HCMEs) has been estimated by using a cone model for HCMEs <cit.> and the space speeds are listed in the SOHO/LASCO HALO CME catalog (<https://cdaw.gsfc.nasa.gov/CME_list/halo/halo.html>).
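The estimation method reduces to a few lines of code. The sketch below is our own illustration: only the 03:38 UT onset of the 2012 January 23 M8.7 flare is taken from the text, while the flare peak time and sky-plane speed are hypothetical; the geometric de-projection for non-halo CMEs is described in the next paragraph.

```python
import numpy as np
from datetime import datetime

def space_speed(v_sky_kms, theta_deg):
    """De-projection for a close-to-the-limb CME: V_space = V_sky / cos(theta),
    where theta is the angle of the propagation direction away from the
    sky plane; applied only for theta <= 30 deg, as in the text."""
    if theta_deg > 30.0:
        raise ValueError("correction too large; use the cone model instead")
    return v_sky_kms / np.cos(np.radians(theta_deg))

def initial_acceleration(v_space_kms, t_onset, t_peak):
    """a = V_space / (t_FlarePeak - t_FlareOnset), in km s^-2."""
    dt_s = (t_peak - t_onset).total_seconds()
    return v_space_kms / dt_s

v = space_speed(1500.0, 20.0)          # hypothetical sky speed and angle
a = initial_acceleration(v, datetime(2012, 1, 23, 3, 38),   # onset (text)
                            datetime(2012, 1, 23, 3, 59))   # peak (invented)
print(f"V_space = {v:.0f} km/s, a = {a:.2f} km/s^2")
```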
For non-HCMEs, the space speed, V_Space, is calculated from the measured CME speed on the sky plane, V_Sky, by using a geometrical correction V_Sky/cosθ, where θ is the angle the CME propagation direction makes away from the sky plane. The angle θ depends on the longitude of the flare location. To avoid unrealistically large corrections, we have included in the analysis only non-HCMEs for which θ is ≤30^∘ as seen either from the SOHO or STEREO spacecraft. The method calculates an average over the acceleration phase of the CME. The peak initial acceleration of the CME can be higher than the obtained average initial acceleration, as was shown by <cit.>. In general, we know that the CME speed profiles near the Sun vary from event to event and the CME speed is an important parameter governing the particle acceleration efficiency of the CME-driven shocks. <cit.> showed that CMEs associated with major SEP events have a hierarchical relationship between the initial acceleration and speed of the CME and the SEP fluence spectral indices <cit.>: CMEs associated with filament eruptions have low initial speeds and acceleration and produce the softest SEP spectra at 1 au, while the CMEs with the highest initial speed and acceleration have the hardest SEP spectra. The CMEs with an intermediate speed and acceleration result in moderately hard SEP spectra at 1 au. Therefore, initial acceleration and speed provide a proxy for the effectiveness of high-energy particle acceleration in the CME-driven shocks. §.§ Initial Acceleration and Space Speed of CMEs Associated with LAT Gamma-ray Flares In our analysis we use the on-disk gamma-ray events listed in <cit.>. Their list contains 45 gamma-ray flares during cycle 24. <cit.> categorized the flares based on whether 'Prompt' or 'Delayed' component (SGRE event) of gamma-ray emission was detected. In 6 of the 45 events, only 'Prompt' (impulsive) emission was detected, 4 events had no detected 'Prompt' emission at all, 10 events had both 'Prompt' and 'Delayed' emission and the remaining 25 had 'Delayed' emission, but the presence of 'Prompt' emission could not be excluded because LAT was not pointing at the Sun at the appropriate time. 32 of the flares were associated with a HCME, 10 were associated with non-HCMEs, and 3 had no associated CME. Based on our own estimations, we changed the CME of the 2014 September 10 flare to the 08:00 UT HCME. We have excluded the 3 backside flares, the 3 flares without a CME and the 2017 September 06 X2.2 flare for which we could not estimate the space speed of the CME at 09:48 UT, because there is no suitable side-view either from SOHO or STEREO-A. Table <ref> lists the total number and the average value of the initial acceleration and space speed of CMEs in different categories. First, we divided the CMEs into two main categories: those associated with flares showing only a 'Prompt' component, labelled as 'Prompt Only', and those with a 'Delayed' component, labelled 'All Delayed' in Table <ref>. The 'Prompt Only' flares are impulsive gamma-ray flares and the 'All Delayed' ones are SGRE events. Clearly, the impulsive gamma-ray flares are associated with significantly slower CMEs (775 km s^-1) than flares with an SGRE event (1708 km s^-1). The difference in the initial acceleration is not as clear, but again the CMEs with SGRE events show a larger initial acceleration than those without an SGRE event.
From SEP event comparisons <cit.> we know that higher acceleration and speed indicate that the CME-driven shock produces harder energy spectra, i.e., it is more likely to accelerate >300 MeV protons. Similar high initial acceleration and fast speed characteristics are shared by CMEs associated with GLEs, which are guaranteed to have >300 MeV protons. Then we divided the 'All Delayed' CMEs into three subcategories: the 'Prompt Delayed' CMEs are associated with gamma-ray flares having both emission components, the 'No-Prompt Delayed' CMEs do not have a detectable 'Prompt' component and the 'Delayed' CMEs have a 'Delayed' component but the existence of the 'Prompt' component is uncertain because of the lack of LAT observations during the impulsive phase of the flare. Differences are now less significant (the sample sizes also become small), but the 'Prompt Delayed' CMEs appear to have the highest average initial acceleration and space speed and the 'No-prompt Delayed' the lowest ones among the three groups. Most likely the CMEs without an associated 'Prompt' gamma-ray component are more slowly accelerating CMEs but are still able to produce >300 MeV protons as their space speed becomes high enough in the later phase. Again, similar slower initial acceleration but high later-phase speed has been detected for CMEs producing major SEP events <cit.>. Table <ref> in the Appendix lists the data for events included in the calculations of Table <ref>.

Table 1. Initial acceleration and space speed of CMEs associated with LAT gamma-ray flares

Quantity                      Prompt Only   All Delayed   Delayed   Prompt Delayed   No-Prompt Delayed
Count                         6             32            18        8                4
Mean Acceleration (km s^-2)   1.37          1.75          1.73      1.87             1.62
Mean Space Speed (km s^-1)    775           1708          1745      1753             1663

Note: 'Delayed', 'Prompt Delayed', and 'No-Prompt Delayed' are subtypes of 'All Delayed'.

§ COMPARISON WITH HCMES ASSOCIATED WITH TYPE II RADIO BURSTS AND MAJOR SEP EVENTS Because the 'All Delayed' gamma-ray flares are mainly associated with HCMEs, we compare their initial acceleration and space speed with HCMEs associated with type II radio bursts and major SEP events. Major SEP events are defined as those with the peak proton flux in the GOES >10 MeV integral channel above 10 particles cm^-2 s^-1 sr^-1. Since SEPs are charged particles, they spiral along the interplanetary magnetic field lines as they propagate away from the acceleration source. Therefore, at Earth we can detect mostly SEP events originating from eruptions occurring in the western hemisphere of the Sun. Some very intense eruptions from the eastern limb can produce particle events at Earth but in that case only at the lower energies. In general, DH type II radio bursts are well correlated with major SEP events <cit.>. Both radio and gamma-ray emissions can be detected from all on-disk eruptions because electromagnetic emission can propagate away from the Sun without being significantly affected by the coronal or interplanetary medium. Type II solar radio bursts occur at the fundamental and second harmonic of the local plasma frequency, which depends on the electron density upstream of the CME shock. Because the electron number density decreases as a function of the radial distance, the plasma frequency decreases away from the Sun and higher-frequency emissions originating from a lower height can propagate freely outwards.
Therefore, the type II burst can be identified in the radio dynamic spectra as an intensity feature slowly drifting towards lower frequencies at a rate that depends on the shock speed and the density scale height of the ambient medium.

Table 2. Initial acceleration and space speed of cycle-24 HCMEs with metric type II radio bursts

                          'Delayed' Component (SGRE Event)      No 'Delayed' Component
HCME Category             Count  Accel.      Speed              Count  Accel.      Speed
                                 (km s^-2)   (km s^-1)                 (km s^-2)   (km s^-1)
DH Type II                23     1.85        1869               37     1.09        1211
No DH Type II             1      (too low statistics)           26     1.11        959
SEP Event                 17     1.83        2004               13     1.15        1499
SEP Event (w/STEREO)      21     1.70        1858               18     1.07        1396
No SEP Event              7      1.81        1360               50     1.09        1006
No SEP Event (w/STEREO)   3      2.68        1524               45     1.11        992

In Table <ref> we have divided the 87 cycle-24 HCMEs with metric type II radio emission into CMEs with and without a 'Delayed' gamma-ray component. The existence of the metric type II radio burst indicates that a shock forms early, making these HCMEs good candidates for SGRE production. We investigated how many of the metric type II-associated HCMEs are with and without a DH type II radio burst or a major SEP event. We selected major SEP events as they are intense events and could have enhancements of >300 MeV protons, which are unlikely to be present in the inherently low-intensity SEP events. Because the observer's connection to the SEP source affects the possibility of detecting SEPs, the group without a major SEP event could still contain events that were able to accelerate particles; especially the poorly connected eastern hemisphere events could have produced high-energy particles that were not detected. We account for this possibility by using GOES >10 MeV equivalent STEREO intensities to identify major SEP events observed by STEREO. The STEREO >10 MeV flux is estimated using data from the STEREO/HET <cit.>, which covers the energy range of 13–100 MeV. The flux is estimated by fitting a power law to HET data points and integrating the flux in the 10–150 MeV range <cit.> (a schematic example is given below). In Tables <ref> and <ref>, we have separated the two SEP event sets and marked the one containing both GOES and STEREO events as "(w/STEREO)", although in Table <ref> the statistics for the SGRE events are mostly too low. One should note that the STEREO spacecraft drift around the Sun, so their magnetic connection to the Sun changes continuously. In addition, STEREO-A observations have significant data gaps during the solar conjunction period in 2014–2015 and contact with STEREO-B was lost on October 1, 2014. We also surveyed STEREO/WAVES data for additional DH type II radio bursts but found only one, on 2011 August 03. The STEREO-A data showed a short-duration, slanted feature in the 10–14 MHz frequency range starting at 13:38 UT, which we added to our DH type II burst list. All other STEREO/WAVES DH type II bursts were accompanied by a Wind/WAVES DH type II burst, so we study STEREO and Wind DH type II bursts together. The westernmost event of the 7 SGRE events without a major GOES SEP event occurred at the heliographic longitude W18 and 4 of the 7 SGRE events occurred less than 30^∘ from the eastern limb. The two bottom rows of Table <ref> are difficult to interpret, but we have added them mainly for completeness.
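The GOES-equivalent >10 MeV flux described above amounts to a log-log power-law fit to the HET channel intensities followed by an integral over 10–150 MeV. A minimal sketch under stated assumptions follows; the channel energies and intensities are hypothetical, not actual HET data.

```python
import numpy as np

def goes_equivalent_flux(E_MeV, dJdE):
    """Fit a power law J(E) = A * E^slope to differential intensities
    (log-log least squares) and integrate over 10-150 MeV.

    E_MeV : channel energies [MeV]
    dJdE  : differential intensities [cm^-2 s^-1 sr^-1 MeV^-1]
    Returns the integral flux [cm^-2 s^-1 sr^-1].
    """
    slope, lnA = np.polyfit(np.log(E_MeV), np.log(dJdE), 1)
    A = np.exp(lnA)
    E = np.linspace(10.0, 150.0, 1000)
    return np.trapz(A * E**slope, E)

# hypothetical HET channels with a spectral index of -2.5
E = np.array([16.0, 25.0, 40.0, 60.0, 100.0])        # MeV
J = 50.0 * E**-2.5                                   # cm^-2 s^-1 sr^-1 MeV^-1
print(goes_equivalent_flux(E, J))
```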
Results show that all HCMEs associated with an SGRE event have similar average initial acceleration values (1.70–1.85 km s^-2) with the exception of the group without a GOES or STEREO SEP event, which contains only three events and whose average initial acceleration (2.68 km s^-2) is very high, possibly indicating missed major SEP event identifications. This range is considerably higher than that of HCMEs without an SGRE event (1.07–1.15 km s^-2). The SGRE and SEP-associated HCMEs have the highest average space speed, whereas two groups of HCMEs, SGRE-associated HCMEs without an SEP event and SEP-associated HCMEs without an SGRE event, seem to have similar speeds. However, the average initial accelerations of SGRE-associated HCMEs without a major SEP event are higher (even when we ignore the group without a GOES or STEREO SEP event that has only 3 events in total) than those of the HCMEs without an SGRE event but with a major SEP event. The DH type II-associated HCMEs without an SGRE event have an only slightly lower average space speed (1211 km s^-1), but we know that DH type II bursts are associated with SEP events and this mixed population includes 12 HCMEs with an SEP event, which have high space speeds. If we exclude these 12 SEP-associated events, then the average space speed of the remaining 25 events decreases to 1064 km s^-1. Clearly, the existence of >300 MeV protons is connected to a high initial acceleration and speed of the associated HCME. The HCMEs without an SEP and SGRE event have the lowest average speed (992 km s^-1). Therefore, the SGRE-associated HCMEs conform to the hierarchy between the initial acceleration and speed of the CME and the fluence spectral index as described by <cit.>. The high initial acceleration in particular seems to be crucial for SGRE production. The average accelerations and speeds of the 20 cycle-24 HCMEs associated with only a DH type II radio burst are shown in Table <ref>. These HCMEs are mostly without SGRE events (only 4 SGRE events) or major SEP events (only 6 SEP events if STEREO observations are included, two of which also have an SGRE event). Therefore, statistics for SGRE events are low, but the average space speed of SGRE events without an SEP event (≈1579 km s^-1) is below the space speed of the SGRE events with an SEP event (≈2004 km s^-1; ≈1858 km s^-1 if STEREO observations are included) in Table <ref>. None of the three eruptions without a major SEP event detected by GOES were magnetically well-connected to Earth, so they probably accelerated high-energy particles efficiently but the particles did not reach Earth. One of them, the 10 June 2014 HCME with a solar source at S17E82, actually had a major SEP event observed by STEREO-B, which was located at the heliographic longitude E164. The 05 March 2012 HCME had a solar source at N17E52, but the GOES-equivalent >10 MeV intensities observed by STEREO-B at longitude E117 were already elevated above 100 pfu due to a preceding HCME on 04 March that was not associated with an SGRE event. At the onset of the 10 March 2012 HCME launched from N17W24, the >10 MeV intensities were elevated above 10 pfu at all three spacecraft. In fact, the March 5 and 10 events are the first and last events in a cluster of 4 SGRE events accompanied by a high level of SEP flux <cit.>. So, it is quite possible that the two March 2012 events also accelerated particles.
The average initial acceleration value is lower than the respective value for SGRE events with SEP events in Table <ref>, but this is expected because CMEs associated with only a DH type II radio burst accelerate slowly and the shock forms later. This probably explains the lower average space speed near the Sun. The initial acceleration and average space speed of HCMEs without an SGRE event and an SEP event are lower or similar, respectively, to the respective values in Table <ref>. Table <ref> in the Appendix lists the data for events included in the calculations of Tables <ref> and <ref>.

Table 3. Initial acceleration and space speed of cycle-24 HCMEs with DH type II radio bursts only

                          'Delayed' Component (SGRE Event)      No 'Delayed' Component
HCME Category             Count  Accel.      Speed              Count  Accel.      Speed
                                 (km s^-2)   (km s^-1)                 (km s^-2)   (km s^-1)
SEP Event                 1      (too low statistics)           3      0.46        1263
SEP Event (w/STEREO)      2      (too low statistics)           4      0.38        1265
No SEP Event              3      1.00        1579               13     0.44        1164
No SEP Event (w/STEREO)   2      (too low statistics)           12     0.47        1155

§ RADIAL DISTANCE OF THE SHOCK AT THE END OF THE SGRE EVENTS We selected two SGRE events, the 2012 January 23 and March 07 events, with the longest durations of the associated type II radio bursts (Gopalswamy et al. 2019) and for which the STEREO observations provided sideview white-light images of the HCMEs. The 2012 January 23 04:00 UT HCME produced a DH type II burst with a duration of about 25.0±9.6 hr, while the estimated duration of the SGRE event was 15.4±0.8 hr. The SGRE ended around 19:25 UT. The estimated space speed was 2511 km s^-1, and the interplanetary shock arrived at the SOHO spacecraft at 14:33 UT on January 24. The eruption was associated with an M8.7 X-ray flare starting at 03:38 UT at the heliographic location of N28W21. The STEREO-A and STEREO-B longitudes were W108 and E114, respectively. The eruption produced a major SEP event at Earth with a GOES >10 MeV peak proton flux of 6310 cm^-2 s^-1 sr^-1. The 2012 January 23 eruption close to the Sun has been studied extensively because the eruption involved two flux ropes that merged below the radial distance of 15 R_⊙ <cit.>. The second event, the HCME at 00:24 UT on 2012 March 07, was associated with an SGRE event that had an even longer duration, about 21.3±1.6 hr <cit.>. The 2012 March 07 SGRE event thus had a longer estimated duration of about 21 hr, but we note that SGRE durations cannot be measured accurately because LAT does not observe the Sun continuously. The SGRE end time was 21:40 UT. The estimated duration of the DH type II burst was 27.9±6.8 hr. The LASCO space speed of the HCME was 3146 km s^-1 and it was associated with an X5.4 X-ray flare at 00:02 UT from N17E27. A second X1.3-class flare started about an hour later at 01:05 UT. The associated HCME at 01:30 UT had a slightly slower space speed of 2160 km s^-1. The STEREO-A and STEREO-B longitudes were W109 and E118, respectively. The shock arrival time at SOHO was 10:53 UT on March 08. The GOES >10 MeV peak proton flux was 6530 cm^-2 s^-1 sr^-1. The onset of the HCME has been studied by <cit.> and the heliospheric propagation by <cit.> and <cit.>. §.§ Distance Estimation We estimated the radial distance and the space speed of the shock by forward fitting a spheroidal shock model to white-light images of STEREO/HIs <cit.> around the end time of the SGRE event.
For shock fitting, we used IDL programs in the Solar Corona Ray-Tracing Software package developed for forward modeling of structures of the solar corona <cit.>. The fitting of the spheroidal shock model to HI observations is shown in Figure <ref>. The propagation direction of the shock is difficult to estimate; our estimates were N25W05 for the 2012 January 23 CME and N34E27 for the 2012 March 07 CME. For the 2012 January 23 HCME we obtained the radial distance r=121 R_⊙ and the space speed 975 km s^-1. In the case of the 2012 March 07 HCME the estimated radial distance was r=140 R_⊙ and the space speed 750 km s^-1. The obtained speeds are reasonably high for a strong CME-driven shock to exist. We compared these results with radial distances estimated using Wind/WAVES observations of the type II radio burst. First, we measured the mid-frequency of the type II emission lane at the time the CME leading edge was around 20 R_⊙, because type II emissions are often very complex and overlapped by more intense type III emission during the early phase of the eruption, which makes radio measurements at frequencies corresponding to shock distances close to the Sun difficult. From the frequency formula f_plasma = 9.0 × √(N × n(r)), where the radial distance r is in units of R_⊙, n(r) is in cm^-3, and the frequency f is in kHz, we calculated the multiplier N for the Leblanc density model n(r) <cit.>. The measurement time was obtained by extrapolating the CME height-time profiles obtained by forward fitting a flux rope model to LASCO and SECCHI/COR images to a radial distance of 20 R_⊙. We then estimated the radial distance at the SGRE end time from the mid-frequency of the type II emission lane (a numerical sketch of this inversion is given further below): for the 2012 January 23 HCME we obtained the multiplier N=4.51, which then gave for the mid-frequency f=83 kHz the radial distance of r=132 R_⊙. For the 2012 March 07 HCME the respective values were N=9.07, f=90 kHz and r=173 R_⊙. The distances estimated from the radio burst data are 9% and 24% larger than those estimated from the STEREO/HI images. The STEREO/HI height-time measurements are complicated because the actual shape, location, and propagation direction of the shock ahead of the CME body are difficult to discern from the white-light images. The CME structure in white light is also transparent, so we may confuse structures <cit.>, and brightness depends on the local density and Thomson-scattering geometry <cit.>. On the other hand, type II radio emissions are sporadic and depend on the local density at the radio source, which the general density model cannot capture. We also assume that the location of the radio source is at the shock nose <cit.> and the type II emission in interplanetary space occurs at the fundamental of the plasma frequency <cit.>. § DISCUSSION In the first part of our analysis, we showed that the near-Sun kinematics of the CMEs correlate with the properties of the gamma-ray emission observed by Fermi/LAT. The population of the CMEs (8 CMEs in total) that were associated with a gamma-ray event whose light-curve indicated both 'Prompt' and 'Delayed' emission components, as defined by <cit.>, had the highest average initial acceleration (1.87 km s^-2) and the fastest average space speed (1753 km s^-1). The mixed 'Delayed' category, where the existence of the 'Prompt' component is uncertain due to the lack of LAT measurements around the flare onset, has a similar average space speed (1745 km s^-1) but somewhat lower initial acceleration (1.73 km s^-2).
The population of CMEs (6 CMEs in total) associated with gamma-ray flares showing only a 'Prompt' emission component, i.e., with no SGRE emission detected by LAT, had the lowest average values (1.37 km s^-2 and 775 km s^-1, respectively). The speeds correspond well to those obtained by <cit.> who studied CME properties for X-class flares with and without gamma-ray emission. They found a median CME linear speed of 768 km s^-1 for X-class flares without gamma-ray emission. If Fermi detected gamma-rays during the X-class flare, the median speed of the associated CMEs was 1828 km s^-1. CMEs associated with SGRE events had the highest median speed of 2125 km s^-1. The definition of SGRE in their study was that the >100 MeV gamma-ray duration is ≳∼2 hr. The <cit.> definition used here is based on details of the hard X-ray and gamma-ray light-curves, which probably explains why the <cit.> SGRE events were associated with faster CMEs. In addition, we divided cycle-24 on-disk HCMEs associated with type II radio bursts into groups with and without (a) SGRE events, (b) DH type II bursts, and (c) major SEP events observed. For SEP events, we analyzed major events observed by GOES only and a second group of major SEP events observed by either GOES or STEREO spacecraft. Our statistical analysis shows that the metric type II-associated HCMEs with an SGRE event have a considerably higher initial acceleration, and also a higher space speed if a major SEP event was also detected, than the metric type II-associated HCMEs without an SGRE event. The average space speeds of the SGRE-associated HCMEs without an SEP event and the non-SGRE-associated HCMEs with a major SEP event were similar. The analysis of the HCMEs associated with only DH type II emission shows that the three SGRE-producing HCMEs without an SEP event observed by the GOES spacecraft have a higher space speed than any studied population of HCMEs not associated with an SGRE event. However, one of those three HCMEs had a major SEP event observed by STEREO-B and the other two had elevated backgrounds at least at the best-connected spacecraft, so all three events could have accelerated protons. Their average initial acceleration is slightly lower than that of the metric type II-associated HCMEs without an SGRE event, but clearly higher than that of the DH type II-associated HCMEs without an SGRE event. The lower value is expected because CMEs associated with only a DH type II radio burst accelerate slowly and the shock forms later. This result resembles the kinematic hierarchy of CMEs with major SEP events, where rare, slowly accelerating but eventually fast CMEs associated with filament eruptions outside active regions can produce large SEP events at 1 au. In the case of filament eruptions, we know that the resulting energy spectrum is soft, but the general idea, that occasionally the initial acceleration of the CME is slower while the acceleration continues long enough for a sufficiently strong shock to form later at higher altitudes, is comparable. In general, our results are similar to those reported by <cit.>. The SGRE-associated HCMEs seem to conform to the hierarchy between the initial acceleration and speed of the CME and the fluence spectral index as described by <cit.>. Clearly, the existence of >300 MeV protons is connected to a high initial acceleration and speed of the associated HCME. Therefore, our results suggest that CME-driven shocks are the likely source for the >300 MeV protons required to produce SGREs at the Sun.
The mirror effect near the Sun limits the number of protons that can penetrate deep enough: particles with a pitch angle α in the sheath region can penetrate the near-Sun region only if μ=cosα exceeds the critical value μ_c, |μ| ≥μ_c ≡√(1-B_sheath/B_⊙). Because the foot points of the field lines crossing the shock nose could be connected to areas outside the source active region, where the average magnetic field strength B_⊙ is considerably lower than in active regions, the mirror ratio B_sheath/B_⊙ increases and the width of the loss cone α_c=cos^-1μ_c=sin^-1√(B_sheath/B_⊙) becomes larger. Because the CME flux ropes have a pile-up region in front of them, the magnetic field within the sheath could be significantly larger than the ambient field, which will further increase the mirror ratio and widen the loss cone (illustrated numerically below). Therefore, more protons can precipitate deep into the solar atmosphere. As mentioned earlier, enhanced turbulence increases scattering into the loss cone, which in turn increases the number of precipitating particles at the foot points. The level of the turbulence and its time evolution along the flux-rope-wrapping field lines and in the atmospheric layers close to the Sun are difficult to estimate. For example, EUV waves associated with large solar eruptions and propagating long distances over the solar surface clearly indicate that coronal shocks and CME lateral expansion affect the solar atmosphere far from the eruption site, most likely resulting in large volumes of enhanced turbulence around the source active region. It should be noted that fast CME shocks are the only sites for which we have clear corroborating observational evidence for acceleration of >300 MeV protons over extended times long after the end of the solar flare. <cit.> studied the properties of the soft X-ray flares, CMEs, and SEP events associated with SGRE events. They found that SGRE events are not produced by the brightest, most intense X-ray flares. In their reverse study, they found that during the period from 2011 March to 2015 June, 45 X-class soft X-ray flares were detected, but only 15 of those were associated with an SGRE event. Similarly, their study showed that SGRE events are associated with fast CMEs and that the SGRE duration increases as the CME speed increases. The reverse study of the fast HCMEs with speeds above 1500 km s^-1 found only four HCMEs without a reported gamma-ray event. Two of the four HCMEs, the 2011 September 22 10:48 UT and 2012 July 19 05:24 UT halos, had concurrent Fermi/LAT observations. In both events, the LAT spectra showed slight increases that were not significant enough to be characterized as detections. In the study of related SEP events observed by GOES, <cit.> list only the 2011 March 07 SGRE event as magnetically well-connected to GOES and without a significant background increase due to a preceding event, but it did not show any increase in the GOES >300 MeV flux. <cit.> suggest that the lack of high-energy protons is due to a poor latitudinal magnetic connection of the shock nose to Earth, because the flare occurred at the heliographic latitude of N31 and the northern polar region of the Sun is tilted away from Earth in March. Similarly, <cit.> showed that the soft energy spectrum observed by GOES during the 2014 January 7 SGRE event was due to poor magnetic connectivity of the shock nose to an Earth observer. The final conclusion of <cit.> is that their results favor the CME shock as the source of the SGRE-producing protons.
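Both simple quantitative relations used in this work, the loss-cone geometry above and the type II frequency-to-distance inversion used earlier, are easy to evaluate numerically. The following Python sketch is our own illustration and not part of the analysis pipeline: the three Leblanc density coefficients are the published values of that model, while the field strengths passed to loss_cone are arbitrary assumed numbers, not measurements.

```python
import numpy as np
from scipy.optimize import brentq

def leblanc_density(r):
    """Leblanc et al. (1998) electron density [cm^-3]; r in units of R_sun."""
    return 3.3e5 / r**2 + 4.1e6 / r**4 + 8.0e7 / r**6

def shock_distance(f_khz, N):
    """Invert f = 9.0*sqrt(N*n(r)) [kHz] for the shock radial distance r [R_sun]."""
    return brentq(lambda r: 9.0 * np.sqrt(N * leblanc_density(r)) - f_khz, 2.0, 400.0)

def loss_cone(b_sheath, b_sun):
    """Critical pitch-angle cosine mu_c and loss-cone half-width [deg]."""
    ratio = b_sheath / b_sun
    mu_c = np.sqrt(1.0 - ratio)          # protons precipitate only if |mu| >= mu_c
    return mu_c, np.degrees(np.arcsin(np.sqrt(ratio)))

# Distances quoted in the text: ~132 R_sun and ~173 R_sun.
print(f"2012 Jan 23: r = {shock_distance(83.0, 4.51):.0f} R_sun")
print(f"2012 Mar 07: r = {shock_distance(90.0, 9.07):.0f} R_sun")

# Assumed fields: the same 1 G sheath above a quiet-Sun and an active-region footpoint.
for b_sun in (10.0, 100.0):
    mu_c, alpha_c = loss_cone(1.0, b_sun)
    print(f"B_sun = {b_sun:5.1f} G: mu_c = {mu_c:.3f}, loss cone = {alpha_c:.1f} deg")
```

In this illustration the weaker footpoint field widens the loss cone (18.4 versus 5.7 degrees), in line with the argument above, and the inversion reproduces the radial distances quoted from the Wind/WAVES measurements.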
The >300 MeV protons are accelerated near the nose of the CME-driven shock <cit.>, with a possible exception of the earliest phase of the eruption, where the fast lateral expansion of the shock could result in efficient particle acceleration away from the nose region. Some fraction of the shock-accelerated protons escapes into interplanetary space and is detected as an SEP event, while the rest propagate along the magnetic field lines deep down into the solar atmosphere and generate SGRE. In addition, the magnetic field lines that are pushed ahead of the CME body maintain a continuous connection between the shock and the solar atmosphere. Therefore, if the shock can accelerate protons to energies >300 MeV, the protons will have a propagation path back to the Sun and can generate SGREs. The key aspect of the CME-shock model is that the magnetic field lines along which protons travel sunwards cross the shock front into the sheath region behind the shock, wrap around the CME flux rope, and connect back to the Sun in areas outside the foot points of the CME flux rope, and possibly also outside the source active region. Therefore, the locations of the foot points are widely separated, providing a natural explanation for the spatially extended source of gamma-ray emission. We estimated the radial distance of the CME-driven shock at the end times of the 2012 January 23 and 2012 March 07 SGRE events. These events were associated with the two longest-duration type II radio bursts. We estimated the shock radial distance by forward fitting a spheroidal shock model to STEREO/HI white-light images of the CME and obtained for the 2012 January 23 SGRE event the shock radial distance r=121 R_⊙ and for the 2012 March 07 SGRE event r=140 R_⊙. In addition, we used the frequency of the type II radio burst obtained from the radio dynamic spectra of Wind/WAVES, together with a radial density model, to get another estimate for the shock radial distance. The distances obtained from the radio measurements were slightly larger: for the 2012 January 23 SGRE event r=132 R_⊙, and for the 2012 March 07 SGRE event r=173 R_⊙. <cit.> estimated the speed of the 2012 March 07 CME using the standard aerodynamic drag-force model approach, where the CME travelled through quiet or perturbed solar wind (SW). Based on their results (their Figure 8), the estimated speed of the CME at 22 UT for the quiet SW model was ≈740 km s^-1 and for the perturbed SW model ≈820 km s^-1. The perturbed SW model matches the CME arrival time and speed at the Wind spacecraft better. Therefore, our estimated CME speed of 750 km s^-1 seems to be slightly below the one obtained from the perturbed SW model. Recently, <cit.> studied the arrival signatures of the 2012 March 07 CME at several heliospheric locations. They report that Venus Express detected the arrival of the CME ejecta at 13:28 UT, when Venus was at a radial distance of 154 R_⊙. Therefore, the radial distance estimated from the forward fitting of the spheroidal shock model, r=140 R_⊙, is clearly too low, and most likely the radial distance obtained from the radio observations, r=173 R_⊙, is closer to the actual distance. The model fitting to images of the 2012 March 07 CME was difficult because the CME structure was very faint in the STEREO-B images (see Figure <ref>c). Therefore, the sensitivity of the imaging system may not be high enough to detect the shock in front of the CME. The estimation of the radial distance of the shock at the end time of an SGRE event is thus quite complicated.
The location and the shape of the shock front are difficult to discern from white-light images of the CME <cit.>. CMEs are transparent structures, and the intensity of Thomson scattering depends on the viewing angle relative to the structure. The location of the type II radio source on the shock front is also difficult to measure. Imaging radio instruments operate at higher frequencies, which correspond to heights of a couple of solar radii above the solar surface. Direction finding and triangulation can be used to locate interplanetary type II radio sources at lower frequencies. However, the scattering of radio waves and the low intensity of type II emission limit the accuracy of direction-finding measurements. The CME-driven shock in both events reached the SOHO spacecraft (on 2012 January 24 14:33 UT and 2012 March 08 10:53 UT, respectively), and when the shocks passed 1 au about 30 minutes later, the GOES spacecraft observed a clear particle increase around the shock time, visible in Figure <ref>. Both events showed a particle flux increase in the GOES 350–429 MeV channels, indicating that the CME-driven shock did accelerate >300 MeV protons. In both cases the 1-au enhancement continued beyond the end times of the SGRE events, estimated to be at 2012 January 23 19:25 UT and 2012 March 07 21:40 UT, respectively. During the March 07 event, the particle increase was detectable at even higher energies, up to the 510–700 MeV energy range. The event-integrated fluence spectrum of the 2012 January 23 event provided by PAMELA indicates that the SEP flux at 1 au extended above 300 MeV <cit.>. Therefore, the CME-driven shock clearly must accelerate >300 MeV protons far away from the Sun, providing support for the CME shock as the source of the SGRE-producing protons. The 2012 March 07 SGRE event was so bright that the location of the >100 MeV gamma-ray emission source could be estimated over several time intervals during a period of about 10 hours <cit.>. The emission centroid seemed to move away from the flare site across the solar disk towards the west. <cit.> studied two other bright SGRE flares, on 2014 February 25 and 2017 September 10. The gamma-ray intensity of the 2014 February flare was weaker, so they could determine the location of the emission centroid only during two intervals over three hours. The September 2017 flare was brighter, and the location was determined in three intervals over 7 hours, but the flare occurred over the western limb of the Sun, making the detection of possible source movement difficult. In both events the centroid remained consistent with the AR location. Therefore, the movement of the SGRE source during the 2012 March 07 event supports the CME-shock scenario, whereas the detection of possible source movement in the 2014 February 25 and 2017 September 10 events is complicated by the weaker gamma-ray intensity or the unfavorable location of the flare.
§ CONCLUSIONS
We compared the acceleration and speed of CMEs associated with gamma-ray flares with a 'Prompt' and/or 'Delayed' (SGRE event) component as defined by <cit.>. In addition, we divided the on-disk HCMEs associated with type II radio bursts into groups with or without SGRE events, SEP events, and metric or DH type II radio bursts, and compared the average acceleration and speed between the HCME groups.
We showed that the CMEs associated with the 'Delayed' gamma-ray component, and the metric type II-producing HCMEs associated with SGRE events together with a DH type II radio burst and/or a major SEP event, have higher initial acceleration and space speed than the CMEs associated with the 'Prompt'-only gamma-ray component or the SEP- or type II-associated HCMEs without SGRE. The only exception was the space speed of the metric type II-associated HCMEs with a major SEP event but without an SGRE event, which had an average space speed similar to that of the SGRE-associated HCMEs without a major SEP event. Similar high initial acceleration and fast speed characteristics are shared by CMEs associated with GLEs, which are guaranteed to have >300 MeV protons. The SGRE-associated CMEs also conform to the hierarchy between the initial acceleration and speed of the CME and the fluence spectral index described by <cit.>. Therefore, our findings support the CME-driven shock as the source of the >300 MeV protons producing SGRE events. We estimated the radial distance of the CME-driven shock at the end times of the SGRE events with the long-duration type II radio bursts on 2012 January 23 and 2012 March 07, using STEREO/HI white-light images of the CMEs and radio dynamic spectra from Wind/WAVES. The shock radial distances for the 2012 January 23 SGRE event were r=121 R_⊙ and r=132 R_⊙, and for the 2012 March 07 SGRE event r=140 R_⊙ and r=173 R_⊙, respectively. The distances derived from the white-light and radio observations are reasonably consistent, indicating that the radio source is near the shock nose, as assumed. The distances are also consistently longer than the estimated shock height of ≈70 R_⊙ for the shorter-duration 2014 February 25 SGRE event <cit.>. Because the shock location is not visible in the white-light images, the radial distances estimated from the forward fitting of the spheroidal shock model are probably underestimates. At the end times of the SGRE events, the shock speeds were still high enough (975 km s^-1 and 750 km s^-1) for high-energy particle acceleration. Therefore, we conclude that strong CME-driven shocks accelerate >300 MeV protons up to radial distances of 0.6–0.8 au. We thank the Fermi/LAT, GOES, SOHO/LASCO, STEREO/SECCHI, Wind/WAVES, and HELCATS teams for providing the data. PM and SA were partially supported by NSF grant AGS-2043131. NG was supported by NASA's STEREO project and the Living With a Star program. HX was partially supported by NSF grant AGS-2228967.
§ CMES AND X-RAY FLARES ASSOCIATED WITH ON-DISK FERMI/LAT SOLAR FLARES
Table <ref> contains the CME and X-ray flare data used in Table <ref>. The first column gives the first observation date and time of the CME, followed by the measured sky-plane speed and the projection-corrected space speed in the second and third columns. The fourth column lists the estimated initial acceleration of the CME. Columns 5–7 list the location in heliographic coordinates and the onset and peak times of the GOES soft X-ray flare. The last column lists the gamma-ray components detected by Fermi/LAT, taken from <cit.>.
Table A. CME and X-ray flare data for Fermi/LAT solar flares
Columns: CME (UT), Sky Speed (km s^-1), Space Speed (km s^-1), Acc (km s^-2), Location, Flare Onset (UT), Flare Peak (UT), Gamma-ray Components
2010/06/12 01:31 620 674 0.42 N23W43 2010/06/12 00:30 2010/06/12 00:57 LLE-Prompt
2011/03/07 20:00 2125 2223 1.28 N30W48 2011/03/07 19:43 2011/03/07 20:12 Delayed
2011/06/07 06:49 1255 1321 0.88 S21W54 2011/06/07 06:16 2011/06/07 06:41 Delayed
2011/08/04 04:12 1315 1477 1.54 N19W36 2011/08/04 03:41 2011/08/04 03:57 Delayed
2011/08/09 08:12 1610 1640 1.61 N17W69 2011/08/09 07:48 2011/08/09 08:05 Prompt Short-Delayed
2011/09/06 23:05 575 830 1.73 N14W18 2011/09/06 22:12 2011/09/06 22:20 LLE-Prompt Short-Delayed
2011/09/07 23:05 710 735 2.04 N14W28 2011/09/07 22:32 2011/09/07 22:38 Delayed
2011/09/24 09:48 1936 2235 1.96 N12E60 2011/09/24 09:21 2011/09/24 09:40 LLE-Prompt Short-Delayed
2012/01/23 04:00 2175 2511 1.99 N28W21 2012/01/23 03:38 2012/01/23 03:59 Delayed
2012/01/27 18:27 2508 2541 0.71 N27W78 2012/01/27 17:37 2012/01/27 18:37 Delayed
2012/03/05 04:00 1531 1627 0.52 N17E52 2012/03/05 03:17 2012/03/05 04:09 Delayed
2012/03/07 00:24 2684 3146 2.38 N17E27 2012/03/07 00:02 2012/03/07 00:24 Delayed
2012/03/09 04:26 950 1229 0.66 N15W03 2012/03/09 03:22 2012/03/09 03:53 No-Prompt Delayed
2012/03/10 18:00 1296 1638 0.94 N17W24 2012/03/10 17:15 2012/03/10 17:44 Delayed
2012/05/17 01:48 1582 1596 1.21 N11W76 2012/05/17 01:25 2012/05/17 01:47 Delayed
2012/06/03 18:12 772 786 1.87 N16E38 2012/06/03 17:48 2012/06/03 17:55 LLE-Prompt Short-Delayed
2012/07/06 23:24 1828 1907 4.54 S13W59 2012/07/06 23:01 2012/07/06 23:08 Delayed
2012/08/06 05:12 198 199 0.66 S14E84 2012/08/06 04:33 2012/08/06 04:38 LLE-Prompt
2012/11/13 02:24 980 1002 2.78 S25E46 2012/11/13 01:58 2012/11/13 02:04 Prompt
2013/04/11 07:24 861 1369 1.09 N09E12 2013/04/11 06:55 2013/04/11 07:16 No-Prompt Short-Delayed
2013/05/13 02:00 1270 1270 0.88 N11E90 2013/05/13 01:53 2013/05/13 02:17 Delayed
2013/05/13 16:07 1850 1852 1.82 N11E85 2013/05/13 15:48 2013/05/13 16:05 Delayed
2013/05/14 01:25 2625 2645 4.01 N08E77 2013/05/14 01:00 2013/05/14 01:11 No-Prompt Delayed
2013/05/15 01:48 1366 1408 0.71 N12E64 2013/05/15 01:15 2013/05/15 01:48 No-Prompt Delayed
2013/10/25 08:12 587 599 1.25 S08E73 2013/10/25 07:53 2013/10/25 08:01 Delayed
2013/10/28 02:24 695 726 0.55 N04W66 2013/10/28 01:41 2013/10/28 02:03 LLE-Prompt
2013/10/28 04:48 1201 1270 2.35 N08W71 2013/10/28 04:32 2013/10/28 04:41 LLE-Prompt
2013/10/28 15:36 812 1098 2.29 S06E28 2013/10/28 15:07 2013/10/28 15:15 Delayed
2013/10/28 21:25 771 777 1.44 N07W83 2013/10/28 20:48 2013/10/28 20:57 LLE-Prompt
2014/01/07 18:24 1830 2246 1.34 S15W11 2014/01/07 18:04 2014/01/07 18:32 Delayed
2014/02/25 01:25 2147 2153 3.59 S12E82 2014/02/25 00:39 2014/02/25 00:49 LLE-Prompt Delayed
2014/06/10 13:30 1469 1473 1.53 S17E82 2014/06/10 12:36 2014/06/10 12:52 LLE-Prompt Delayed
2014/06/11 09:24 829 915 2.18 S18E65 2014/06/11 08:59 2014/06/11 09:06 Short-Delayed
2014/09/10 18:00 1267 1652 1.15 N14E02 2014/09/10 17:21 2014/09/10 17:45 Short-Delayed
2015/06/21 02:36 1366 1740 0.97 N12E16 2015/06/21 02:06 2015/06/21 02:36 Prompt Delayed
2015/06/25 08:36 1627 1805 2.15 N09W42 2015/06/25 08:02 2015/06/25 08:16 Delayed
2017/09/06 12:24 1571 1819 3.37 S08W33 2017/09/06 11:53 2017/09/06 12:02 Delayed
2017/09/10 16:00 3163 3163 1.70 S09W92 2017/09/10 15:35 2017/09/10 16:06 Prompt Delayed
Gamma-ray Components are taken from <cit.>.
§ CYCLE 24 HCMES WITH TYPE II RADIO BURSTS AND ON-DISK X-RAY FLARES
Table <ref> contains the cycle 24 HCME and X-ray flare data used in Tables <ref> and <ref>. Columns 1–7 are the same as in Table <ref>. Columns 8–9 list the onset times of the reported metric and DH type II radio bursts. The DH type II onset times are listed for Wind/WAVES, except on 03 August 2011, when only STEREO-A/WAVES detected a DH type II burst. Column 10 indicates which WAVES instruments detected a DH type II radio burst (W=Wind, A=STEREO-A, B=STEREO-B, '-'=no report). Columns 11–12 indicate whether the event had a major SEP event (G=GOES, A=STEREO-A, B=STEREO-B, '-'=data gap) and an SGRE event ('Delayed' component detected), respectively.
Table B. Cycle 24 HCMEs with type II radio bursts
Columns: HCME (UT), Sky Speed (km s^-1), Space Speed (km s^-1), Acc (km s^-2), Location, Flare Onset (UT), Flare Peak (UT), m-Type II (UT), DH-Type II (UT), WAVES S/C, SEP (G/A/B), SGRE
2010/08/01 13:42 850 1030 0.34 N20E36 2010/08/01 07:36 2010/08/01 08:26 2010/08/01 09:20 W/A/B 0/-/0 0
2010/08/07 18:36 871 1102 0.63 N11E34 2010/08/07 17:55 2010/08/07 18:24 2010/08/07 18:08 2010/08/07 18:35 W/A/B 0/0/1 0
2010/08/14 10:12 1205 1280 0.51 N17W52 2010/08/14 09:23 2010/08/14 10:05 2010/08/14 09:52 -/-/- 1/0/0 0
2011/02/14 18:24 326 544 1.51 S20W04 2011/02/14 17:20 2011/02/14 17:26 2011/02/14 17:28 -/-/- 0/0/0 0
2011/02/15 02:24 669 960 1.33 S20W10 2011/02/15 01:44 2011/02/15 01:56 2011/02/15 01:52 2011/02/15 02:10 W/A/B 0/0/1 0
2011/03/07 20:00 2125 2223 1.28 N30W48 2011/03/07 19:43 2011/03/07 20:12 2011/03/07 19:54 2011/03/07 20:00 W/A/- 1/1/0 1
2011/06/02 08:12 976 1147 0.64 S19E25 2011/06/02 07:16 2011/06/02 07:46 2011/06/02 08:00 W/A/B 0/0/0 0
2011/06/07 06:49 1255 1321 0.88 S21W54 2011/06/07 06:16 2011/06/07 06:41 2011/06/07 06:25 2011/06/07 06:45 W/A/B 1/0/0 1
2011/06/21 03:16 719 882 0.12 N16W08 2011/06/21 01:22 2011/06/21 03:25 2011/06/21 03:07 W/-/- 0/0/0 0
2011/08/03 14:00 610 785 0.37 N16W30 2011/08/03 13:13 2011/08/03 13:48 2011/08/03 13:35 2011/08/03 13:38 -/A/- 0/0/0 0
2011/08/04 04:12 1315 1477 1.54 N19W36 2011/08/04 03:41 2011/08/04 03:57 2011/08/04 03:54 2011/08/04 04:15 W/A/B 1/0/0 1
2011/08/09 08:12 1610 1640 1.61 N17W69 2011/08/09 07:48 2011/08/09 08:05 2011/08/09 08:01 2011/08/09 08:20 W/-/- 1/0/0 1
2011/09/06 02:24 782 1232 1.37 N14W07 2011/09/06 01:35 2011/09/06 01:50 2011/09/06 01:46 2011/09/06 02:00 W/-/- 0/0/0 0
2011/09/06 23:05 575 830 1.73 N14W18 2011/09/06 22:12 2011/09/06 22:20 2011/09/06 22:19 2011/09/06 22:30 W/A/- 0/0/0 1
2011/09/22 10:48 1905 1905 0.99 N09E89 2011/09/22 10:29 2011/09/22 11:01 2011/09/22 10:39 2011/09/22 11:05 W/-/B 1/1/1 0
2011/09/24 12:48 1915 2018 0.58 N10E56 2011/09/24 12:22 2011/09/24 13:20 2011/09/24 12:50 W/-/B 0/0/0 0
2011/09/24 19:36 972 1076 1.49 N12E42 2011/09/24 19:09 2011/09/24 19:21 2011/09/24 19:14 -/-/- 0/0/1 0
2011/11/09 13:36 907 1012 0.54 N24E35 2011/11/09 13:04 2011/11/09 13:35 2011/11/09 13:11 2011/11/09 13:30 W/-/B 0/0/0 0
2011/11/26 07:12 933 1001 0.60 N17W49 2011/11/26 06:42 2011/11/26 07:10 2011/11/26 07:15 W/A/- 1/1/0 0
2012/01/19 14:36 1120 1269 0.15 N32E22 2012/01/19 13:44 2012/01/19 16:05 2012/01/19 15:00 W/A/B 0/0/1 0
2012/01/23 04:00 2175 2511 1.99 N28W21 2012/01/23 03:38 2012/01/23 03:59 2012/01/23 03:43 2012/01/23 04:00 W/A/- 1/1/1 1
2012/01/27 18:27 2508 2541 0.71 N27W78 2012/01/27 17:37 2012/01/27 18:37 2012/01/27 18:10 2012/01/27 18:30 W/A/B 1/1/0 1
2012/03/05 04:00 1531 1627 0.52 N17E52 2012/03/05 03:17 2012/03/05 04:09 2012/03/05 04:00 W/A/B 0/0/0 1
2012/03/07 00:24 2684 3146 2.38 N17E27 2012/03/07 00:02 2012/03/07 00:24 2012/03/07 00:17 2012/03/07 01:00 W/A/B 1/0/1 1
2012/03/07 01:30 1825 2160 4.00 N15E26 2012/03/07 01:05 2012/03/07 01:14 2012/03/07 01:09 -/-/- 0/0/0 0
2012/03/09 04:26 950 1229 0.66 N15W03 2012/03/09 03:22 2012/03/09 03:53 2012/03/09 03:43 2012/03/09 04:10 W/-/- 0/1/0 1
2012/03/10 18:00 1296 1638 0.94 N17W24 2012/03/10 17:15 2012/03/10 17:44 2012/03/10 17:55 W/A/- 0/0/0 1
2012/03/13 17:36 1884 1931 0.89 N17W66 2012/03/13 17:05 2012/03/13 17:41 2012/03/13 17:15 2012/03/13 17:35 W/A/- 1/0/0 0
2012/04/05 21:25 828 1065 0.66 N18W29 2012/04/05 20:43 2012/04/05 21:10 2012/04/05 21:08 -/-/- 0/0/0 0
2012/04/09 12:36 921 945 0.38 N20W65 2012/04/09 12:02 2012/04/09 12:44 2012/04/09 12:28 2012/04/09 12:20 W/A/- 0/0/0 0
2012/04/23 18:24 528 769 0.99 N14W17 2012/04/23 17:38 2012/04/23 17:51 2012/04/23 17:42 -/-/- 0/0/0 0
2012/05/17 01:48 1582 1596 1.21 N11W76 2012/05/17 01:25 2012/05/17 01:47 2012/05/17 01:31 2012/05/17 01:40 W/A/- 1/0/0 1
2012/07/04 17:24 662 830 2.31 N14W34 2012/07/04 16:33 2012/07/04 16:39 2012/07/04 16:42 2012/07/04 17:00 W/-/- 0/0/0 0
2012/07/06 23:24 1828 1907 4.54 S13W59 2012/07/06 23:01 2012/07/06 23:08 2012/07/06 23:09 2012/07/06 23:10 W/A/- 1/0/0 1
2012/07/12 16:48 885 1405 0.51 S15W01 2012/07/12 16:03 2012/07/12 16:49 2012/07/12 16:25 2012/07/12 16:45 W/-/- 1/0/1 0
2012/07/19 05:24 1631 1631 0.37 S13W88 2012/07/19 04:45 2012/07/19 05:58 2012/07/19 05:24 2012/07/19 05:30 W/-/- 1/0/0 0
2012/07/28 21:12 420 463 0.64 S25E54 2012/07/28 20:44 2012/07/28 20:56 2012/07/28 20:52 -/-/- 0/0/0 0
2012/07/31 11:24 567 605 0.23 N19E59 2012/07/31 10:46 2012/07/31 11:30 2012/07/31 11:04 -/-/- 0/0/0 0
2012/08/13 13:25 435 705 1.68 N22W03 2012/08/13 12:33 2012/08/13 12:40 2012/08/13 12:41 -/-/- 0/0/0 0
2012/08/31 20:00 1442 1495 0.35 S19E50 2012/08/31 19:32 2012/08/31 20:43 2012/08/31 19:42 2012/08/31 20:00 W/A/- 1/0/1 0
2012/09/28 00:12 947 1093 0.87 N09W31 2012/09/27 23:36 2012/09/27 23:57 2012/09/27 23:44 2012/09/27 23:55 W/A/- 1/0/1 0
2012/11/08 02:36 855 855 0.95 N13E89 2012/11/08 02:08 2012/11/08 02:23 2012/11/08 02:21 -/-/- 0/0/0 0
2012/11/21 16:00 529 942 0.79 N05E05 2012/11/21 15:10 2012/11/21 15:30 2012/11/21 15:33 -/-/- 0/0/0 0
2013/03/15 07:12 1063 1366 0.39 N11E12 2013/03/15 06:00 2013/03/15 06:58 2013/03/15 07:00 W/-/- 1/0/0 0
2013/04/11 07:24 861 1369 1.09 N09E12 2013/04/11 06:55 2013/04/11 07:16 2013/04/11 07:02 2013/04/11 07:10 W/-/B 1/0/1 1
2013/05/13 02:00 1270 1270 0.88 N11E90 2013/05/13 01:53 2013/05/13 02:17 2013/05/13 02:10 2013/05/13 02:20 W/-/B 0/0/1 1
2013/05/13 16:07 1850 1852 1.82 N11E85 2013/05/13 15:48 2013/05/13 16:05 2013/05/13 15:57 2013/05/13 16:15 W/A/B 0/0/1 1
2013/05/14 01:25 2625 2645 4.01 N08E77 2013/05/14 01:00 2013/05/14 01:11 2013/05/14 01:07 2013/05/14 01:16 W/A/B 0/0/0 1
2013/05/15 01:48 1366 1408 0.71 N12E64 2013/05/15 01:15 2013/05/15 01:48 2013/05/15 01:37 2013/05/15 01:49 W/-/- 1/0/0 1
2013/05/17 09:12 1345 1412 1.68 N12E57 2013/05/17 08:43 2013/05/17 08:57 2013/05/17 08:50 -/-/- 0/0/0 0
2013/05/22 13:25 1466 1491 0.71 N15W70 2013/05/22 12:57 2013/05/22 13:32 2013/05/22 12:59 2013/05/22 13:10 W/A/B 1/1/0 0
2013/06/28 02:00 1037 1254 0.91 S18W19 2013/06/28 01:36 2013/06/28 01:59 2013/06/28 01:53 W/-/- 0/0/0 0
2013/08/17 19:12 1202 1418 0.54 S05W30 2013/08/17 18:49 2013/08/17 19:33 2013/08/17 18:56 2013/08/17 20:25 W/-/- 0/0/0 0
2013/08/30 02:48 949 1031 0.31 N15E46 2013/08/30 01:51 2013/08/30 02:46 2013/08/30 02:12 2013/08/30 02:34 W/-/- 0/0/0 0
2013/09/29 22:12 1179 1370 0.21 N17W29 2013/09/29 21:43 2013/09/29 23:31 2013/09/29 21:53 2013/09/29 21:53 W/A/B 1/0/0 0
2013/10/22 21:48 459 1070 3.57 N04W01 2013/10/22 21:15 2013/10/22 21:20 2013/10/22 21:21 2013/10/22 21:33 W/-/- 0/0/0 0
2013/10/24 01:25 399 766 1.42 S10E08 2013/10/24 00:21 2013/10/24 00:30 2013/10/24 00:31 -/-/- 0/0/0 0
2013/10/25 08:12 587 599 1.25 S08E73 2013/10/25 07:53 2013/10/25 08:01 2013/10/25 07:59 -/-/- 0/0/1 1
2013/10/25 15:12 1081 1103 1.53 S06E69 2013/10/25 14:51 2013/10/25 15:03 2013/10/25 14:58 2013/10/25 15:08 W/-/B 0/0/0 0
2013/10/28 02:24 695 726 0.55 N04W66 2013/10/28 01:41 2013/10/28 02:03 2013/10/28 02:00 -/-/- 0/0/0 0
2013/10/28 15:36 812 1098 2.29 S06E28 2013/10/28 15:07 2013/10/28 15:15 2013/10/28 15:10 2013/10/28 15:24 W/-/- 0/0/0 1
2013/10/29 22:00 1001 1001 1.39 N05W89 2013/10/29 21:42 2013/10/29 21:54 2013/10/29 21:48 -/-/- 0/0/0 0
2013/11/19 10:36 740 761 1.06 S14W70 2013/11/19 10:14 2013/11/19 10:26 2013/11/19 10:24 2013/11/19 10:39 W/-/- 0/0/0 0
2013/12/07 07:36 1085 1165 1.62 S16W49 2013/12/07 07:17 2013/12/07 07:29 2013/12/07 07:27 2013/12/07 07:43 W/-/- 0/0/0 0
2014/01/07 18:24 1830 2246 1.34 S15W11 2014/01/07 18:04 2014/01/07 18:32 2014/01/07 18:17 2014/01/07 18:33 W/A/B 1/1/1 1
2014/01/20 22:00 721 750 0.18 S07E67 2014/01/20 21:39 2014/01/20 22:49 2014/01/20 22:24 W/-/- 0/0/0 0
2014/02/20 08:00 948 960 0.53 S15W73 2014/02/20 07:26 2014/02/20 07:56 2014/02/20 07:45 2014/02/20 08:05 W/-/- 1/0/0 0
2014/02/25 01:25 2147 2153 3.59 S12E82 2014/02/25 00:39 2014/02/25 00:49 2014/02/25 00:56 2014/02/25 00:56 W/A/B 1/1/1 1
2014/03/20 04:36 740 921 1.10 S14E35 2014/03/20 03:42 2014/03/20 03:56 2014/03/20 03:52 -/-/- 0/0/0 0
2014/03/29 18:12 528 679 0.87 N11W32 2014/03/29 17:35 2014/03/29 17:48 2014/03/29 17:53 2014/03/29 17:59 W/-/- 0/0/0 0
2014/04/02 13:36 1471 1564 0.55 N11E53 2014/04/02 13:18 2014/04/02 14:05 2014/04/02 13:23 2014/04/02 13:42 W/-/B 0/0/1 0
2014/04/18 13:25 1203 1359 0.71 S20W34 2014/04/18 12:31 2014/04/18 13:03 2014/04/18 12:55 2014/04/18 13:05 W/-/- 1/0/0 0
2014/06/10 13:30 1469 1473 1.53 S17E82 2014/06/10 12:36 2014/06/10 12:52 2014/06/10 12:58 W/-/B 0/0/1 1
2014/07/08 16:36 773 841 1.00 N12E56 2014/07/08 16:06 2014/07/08 16:20 2014/07/08 16:14 -/-/- 0/-/0 0
2014/08/01 18:36 789 1256 1.16 S10E11 2014/08/01 17:55 2014/08/01 18:13 2014/08/01 18:18 2014/08/01 18:58 W/-/- 0/0/0 0
2014/08/22 11:12 600 993 1.18 N12E01 2014/08/22 10:13 2014/08/22 10:27 2014/08/22 10:37 W/-/- 0/-/0 0
2014/08/24 12:36 551 569 0.56 S07E75 2014/08/24 12:00 2014/08/24 12:17 2014/08/24 12:14 -/-/- 0/-/0 0
2014/08/25 15:36 555 697 0.46 N05W36 2014/08/25 14:46 2014/08/25 15:11 2014/08/25 15:08 2014/08/25 15:20 W/-/- 0/-/0 0
2014/09/09 00:06 920 1080 0.33 N12E29 2014/09/08 23:34 2014/09/09 00:29 2014/09/09 00:05 W/-/- 0/0/0 0
2014/09/10 18:00 1267 1652 1.15 N14E02 2014/09/10 17:21 2014/09/10 17:45 2014/09/10 17:45 W/-/- 1/-/0 1
2014/12/17 05:00 587 855 0.46 S20E09 2014/12/17 04:20 2014/12/17 04:51 2014/12/17 04:44 2014/12/17 05:00 W/-/- 0/-/- 0
2014/12/19 01:04 1195 1513 1.48 S11E15 2014/12/18 21:41 2014/12/18 21:58 2014/12/18 22:22 2014/12/18 22:31 W/-/- 0/-/- 0
2014/12/21 12:12 669 906 0.28 S14W25 2014/12/21 11:24 2014/12/21 12:17 2014/12/21 12:05 W/-/- 0/-/- 0
2015/02/09 23:24 1106 1148 0.53 N12E61 2015/02/09 22:59 2015/02/09 23:35 2015/02/09 23:14 -/-/- 0/-/- 0
2015/03/07 22:12 1261 1304 0.59 S19E74 2015/03/07 21:45 2015/03/07 22:22 2015/03/07 21:57 -/-/- 0/-/- 0
2015/03/10 00:00 995 1081 0.75 S18E45 2015/03/09 23:29 2015/03/09 23:53 2015/03/10 00:05 2015/03/10 00:10 W/-/- 0/-/- 0
2015/03/10 03:36 1040 1156 3.85 S15E40 2015/03/10 03:19 2015/03/10 03:24 2015/03/10 03:28 -/-/- 0/-/- 0
2015/03/15 01:48 719 932 0.27 S22W25 2015/03/15 01:15 2015/03/15 02:13 2015/03/15 01:27 -/-/- 0/-/- 0
2015/04/23 09:36 857 864 0.29 N12W89 2015/04/23 09:18 2015/04/23 10:07 2015/04/23 09:22 -/-/- 0/-/- 0
2015/05/05 22:24 715 721 2.00 N15E79 2015/05/05 22:05 2015/05/05 22:11 2015/05/05 22:12 2015/05/05 22:24 W/-/- 0/-/- 0
2015/05/13 18:48 438 730 1.35 N13W16 2015/05/13 18:09 2015/05/13 18:18 2015/05/13 18:21 -/-/- 0/-/- 0
2015/06/18 17:24 1305 1398 0.35 N15E50 2015/06/18 16:30 2015/06/18 17:36 2015/06/18 17:42 W/-/- 0/-/- 0
2015/06/21 02:36 1366 1740 0.97 N12E16 2015/06/21 02:06 2015/06/21 02:36 2015/06/21 02:24 2015/06/21 02:33 W/-/- 1/-/- 1
2015/06/22 18:36 1209 1573 0.60 N12W08 2015/06/22 17:39 2015/06/22 18:23 2015/06/22 18:05 2015/06/22 18:20 W/-/- 0/-/- 0
2015/06/25 08:36 1627 1805 2.15 N09W42 2015/06/25 08:02 2015/06/25 08:16 2015/06/25 08:16 2015/06/25 08:35 W/-/- 1/-/- 1
2015/08/22 07:12 547 817 1.36 S15E20 2015/08/22 06:39 2015/08/22 06:49 2015/08/22 06:50 2015/08/22 07:07 W/-/- 0/-/- 0
2015/09/20 18:12 1239 1458 0.78 S22W50 2015/09/20 17:32 2015/09/20 18:03 2015/09/20 18:16 2015/09/20 18:23 W/-/- 0/-/- 0
2015/11/04 14:48 578 987 0.01 N09W04 2015/11/04 14:08 2015/11/05 13:31 2015/11/04 13:43 2015/11/04 14:07 W/-/- 0/-/- 0
2015/12/16 09:36 579 937 0.43 S13W04 2015/12/16 08:27 2015/12/16 09:03 2015/12/16 08:45 W/-/- 0/-/- 0
2015/12/28 12:12 1212 1471 0.29 S23W11 2015/12/28 11:20 2015/12/28 12:45 2015/12/28 11:50 W/-/- 0/-/- 0
2016/01/01 23:24 1730 1734 2.22 S25W82 2016/01/01 23:58 2016/01/02 00:11 2016/01/01 23:21 2016/01/02 00:55 W/A/- 1/0/- 0
2016/02/11 21:17 719 1174 0.43 N11W07 2016/02/11 20:18 2016/02/11 21:03 2016/02/11 20:35 -/-/- 0/0/- 0
2017/04/18 19:48 926 932 0.32 N14E77 2017/04/18 19:21 2017/04/18 20:10 2017/04/18 19:49 -/-/- 0/1/- 0
2017/07/14 01:25 1200 1422 0.38 S06W29 2017/07/14 01:07 2017/07/14 02:09 2017/07/14 01:18 W/-/- 1/0/- 0
2017/09/04 20:36 1418 1831 6.10 S10W12 2017/09/04 20:28 2017/09/04 20:33 2017/09/04 20:42 2017/09/04 20:27 W/-/- 1/0/- 0
2017/09/06 12:24 1571 1819 3.37 S08W33 2017/09/06 11:53 2017/09/06 12:02 2017/09/06 12:02 2017/09/06 12:05 W/A/- 1/0/- 1
2017/09/10 16:00 3163 3163 1.70 S09W92 2017/09/10 15:35 2017/09/10 16:06 2017/09/10 16:08 2017/09/10 16:02 W/A/- 1/1/- 1
The SEP column gives major SEP events observed by GOES and the SGRE column gamma-ray flares with a 'Delayed' component observed by Fermi/LAT <cit.>.
http://arxiv.org/abs/2307.03905v1
20230708053355
A novel high-order linearly implicit and energy-stable additive Runge-Kutta methods for gradient flow models
[ "Xuelong Gu", "Wenjun Cai", "Yushun Wang" ]
math.NA
[ "math.NA", "cs.NA" ]
This paper introduces a novel paradigm for constructing linearly implicit and high-order unconditionally energy-stable schemes for general gradient flows, utilizing the scalar auxiliary variable (SAV) approach and the additive Runge-Kutta (ARK) methods. We provide a rigorous proof of energy stability, unique solvability, and convergence. The proposed schemes generalize some recently developed high-order, energy-stable schemes and address their shortcomings. On the one hand, the proposed schemes can incorporate existing SAV-RK type methods after judiciously selecting the Butcher tableaux of the ARK methods <cit.>. The order of a SAV-RKPC method can thus be confirmed theoretically by the order conditions of the corresponding ARK method. Several new schemes are constructed based on our framework, which prove to be more stable than existing SAV-RK type methods. On the other hand, the proposed schemes are not limited to a specific form of the nonlinear part of the free energy and can achieve high order with fewer intermediate stages than the convex splitting ARK methods <cit.>. Numerical experiments demonstrate the stability and efficiency of the proposed schemes.
Keywords: Energy-stable schemes, Scalar auxiliary variable approach, Additive Runge-Kutta methods, Linearly implicit schemes.
§ INTRODUCTION
Phase field models are versatile mathematical equations widely used in physics, materials science, and mathematics to simulate various physical phenomena, including the diffusion of two-phase interfaces, phase transitions in materials, and mechanical properties <cit.>. These models describe the different phases of a material as well as the phase transitions and microstructural changes that occur in non-equilibrium states. A phase field model is usually represented as a gradient flow of a free energy functional ℱ(u) as follows: ∂ u/∂ t= 𝒢δℱ/δ u, (𝐱, t) ∈Ω× (0, T], with the initial condition u(𝐱, 0) = u_0(𝐱), where u is a state variable, Ω⊂ℝ^n represents the computational domain, δℱ/δ u denotes the variational derivative of ℱ with respect to u, and 𝒢 is a non-positive mobility operator. Classical phase field models include the Allen-Cahn (AC) equation <cit.>, the Cahn-Hilliard (CH) equation <cit.>, the molecular beam epitaxy (MBE) equation <cit.>, etc. <cit.>. A significant aspect of (<ref>) is that the system preserves the following energy dissipation law when appropriate boundary conditions are imposed on u: d ℱ/dt = (δℱ/δ u, ∂ u/∂ t) = (δℱ/δ u, 𝒢δℱ/δ u) ≤ 0. Due to the nonlinearity of (<ref>), its analytical solution is typically intractable. Therefore, developing efficient and stable numerical schemes is imperative. One approach is to construct schemes that inherit a discrete counterpart of (<ref>), known as energy-stable methods <cit.>. As demonstrated in <cit.>, energy-stable methods can prevent numerical oscillations and unphysical solutions, and they have thus been the focus of extensive research over the past few decades. Classical energy-stable methods include convex splitting (CS) methods <cit.> and discrete variational derivative (DVD) methods <cit.>. CS and DVD methods are fully implicit and thus require solving a nonlinear system at each time step.
To improve computational efficiency, researchers have suggested linearly implicit or explicit energy-stable schemes, such as stabilized semi-implicit methods <cit.>, exponential time differencing methods <cit.>, and the leapfrog methods <cit.>. The numerical methods discussed above are exclusive to particular gradient flow models and cannot easily be adapted to others. This status quo did not change until the energy quadratization (EQ) methods <cit.> were proposed. EQ methods provide an elegant platform for constructing linearly implicit schemes, but they involve solving linear systems with variable coefficients at each time step. In <cit.>, Shen et al. proposed the scalar auxiliary variable (SAV) methods. Besides their unconditional stability, SAV methods require only the solution of a linear system with constant coefficients at each step. Furthermore, SAV approaches provide a universal framework for developing linearly implicit energy-stable schemes that can be extended to a variety of complex models <cit.>. Due to these advantages, SAV methods have received considerable attention and have been further developed in <cit.>. However, the above methods are limited to second-order accuracy, which may not accommodate high-precision requirements. The nonlinearity of phase field models makes it difficult to develop high-order energy-stable schemes. In <cit.>, the authors present high-order energy-stable schemes by combining additive Runge-Kutta (ARK) methods with CS techniques (CS-ARK). To guarantee energy stability, these approaches impose stringent criteria on the coefficients of the ARK methods, necessitating a large number of intermediate stages even for a second-order scheme. Thus, the currently identified energy-stable CS-ARK methods are limited to third order. In <cit.>, energy-stable schemes based on the Hamiltonian boundary value or discrete gradient methods are presented. These schemes are fully implicit and thus computationally expensive. Akrivis et al. introduced in <cit.> novel linearly implicit schemes based on a combination of the SAV and RK (SAV-RK) approaches. For the explicit discretization of nonlinear terms, they incorporated extrapolation techniques to predict solutions at specified time levels. The resulting methods are referred to as SAV-RKEX. However, excessive interpolation points lead to highly oscillatory interpolation polynomials, resulting in inaccurate predictions. Li et al. developed SAV-RKPC methods in <cit.> to obtain a more accurate prediction of numerical solutions at intermediate stages, significantly improving the stability and accuracy of SAV-RKEX methods. Nevertheless, such a technique increases the computational cost, and there is no theoretical guarantee of the number of iterations necessary to achieve adequate accuracy. In this paper, we propose a novel paradigm for constructing linearly implicit and high-order unconditionally energy-stable schemes, combining the SAV approach with the ARK methods. The proposed methods overcome the limitations of both the CS-ARK and SAV-RK methods and can be applied to gradient flow systems with general nonlinear functionals. On the one hand, to guarantee energy stability, the proposed methods require only the algebraic stability of the implicit part of the ARK methods. This enables the methods to achieve high accuracy and energy stability with fewer intermediate stages.
On the other hand, our approach can be regarded as a novel prediction-correction technique that avoids the imprecision of the extrapolation used in the SAV-RKEX methods and does not require the iterative prediction procedures of the SAV-RKPC methods. Thus, the proposed approach guarantees both efficiency and stability. Additionally, our framework can accommodate all SAV-RK type integrators with some appropriate modifications, enabling us to theoretically analyze the consistency of the SAV-RKPC (or EQ) methods proposed in <cit.> by exploiting the order conditions of ARK methods. The overall structure of the remaining sections is summarized below. In Section <ref>, we briefly review the ARK and SAV methods. In Section <ref>, we reformulate the gradient flow model into an equivalent one and propose our new algorithms. Then, we prove the unconditional energy stability and solvability of the proposed methods. Moreover, we demonstrate the order conditions of the SAV-RKPC methods by regarding them as ARK methods. Numerical examples and comparisons are presented in Section <ref>. Finally, we conclude the work in Section <ref>.
§ OVERVIEW OF ARK METHODS AND SAV REFORMULATION OF GRADIENT FLOWS
In this section, we briefly review the additive Runge-Kutta (ARK) methods. Some basic notations and concepts are also presented. By incorporating a scalar auxiliary variable, the original gradient flow model is transformed into an equivalent one (known as the SAV reformulation). The reformulated system preserves the quadratic energy and provides an elegant platform for developing high-order and linearly implicit unconditionally energy-stable numerical methods. §.§ ARK methods We provide an overview of ARK methods, which are commonly used to solve the initial value problem for the following additive partitioned system: u_t(𝐱, t) = f(u) + g(u), u(𝐱, 0) = u_0(𝐱). Here, the right-hand side of (<ref>) is subdivided with respect to stiffness, nonlinearity, dynamical behavior, etc. Before we proceed, it is helpful to introduce the Butcher notations for two s-stage RK methods: [ c A; b^T ] = [ c_0 a_00 ⋯ a_0s-1; c_1 a_10 ⋯ a_1s-1; ⋮ ⋮ ⋯ ⋮; c_s-1 a_s-1 0 ⋯ a_s-1 s-1; b_0 ⋯ b_s-1 ] , [ c̄ Ā; b̄^T ] = [ c̄_0 ā_00 ⋯ ā_0s-1; c̄_1 ā_10 ⋯ ā_1s-1; ⋮ ⋮ ⋯ ⋮; c̄_s-1 ā_s-1 0 ⋯ ā_s-1 s-1; b̄_0 ⋯ b̄_s-1 ] , where A ∈ℝ^s × s, b ∈ℝ^s, and c = A 1 with 1 = (1, 1, ⋯, 1)^T∈ℝ^s; Ā, b̄, and c̄ are defined in a similar manner. [Explicit RK (ERK) methods] An RK method is explicit if a_ij = 0 for j ≥ i. [Diagonally implicit RK (DIRK) methods] An RK method is diagonally implicit if a_ij = 0 for j > i and there exists 0 ≤ i ≤ s-1 with a_ii≠ 0. [Algebraically stable RK method <cit.>] Let us consider the symmetric matrix with entries M_ij = b_i a_ij + b_j a_ji - b_i b_j. An RK method is algebraically stable if its coefficients satisfy the following stability criteria: * b_i ≥ 0, ∀ i = 0, 1, ⋯, s-1, * M is positive semi-definite. We partition the time interval uniformly with a step size τ and denote the time grid points by t_n = n τ. Let N_t = [T/τ]. Assuming that u^n has been computed, an ARK method updates u^n+1 in two steps. First, the intermediate stages u_ni (i = 0, 1, ⋯, s-1) are computed from u_ni = u^n + τ∑_j=0^s-1 a_ij f(u_nj) + τ∑_j=0^s-1ā_ij g(u_nj). Then, the solution is updated by u^n+1 = u^n + τ∑_i=0^s-1 b_i f(u_ni) + τ∑_i=0^s-1b̄_i g(u_ni). It is worth mentioning that the above ARK methods have been employed to develop energy-stable schemes for phase field models in <cit.> and maximum bound principle preserving methods for the AC equations in <cit.>.
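The algebraic stability condition above can be verified mechanically for any given tableau. The following Python sketch is our own illustration (the tableaux below are standard textbook examples, not ones taken from the cited works): it tests the two conditions b_i ≥ 0 and M positive semi-definite.

```python
import numpy as np

def is_algebraically_stable(A, b, tol=1e-12):
    """Check b_i >= 0 and M_ij = b_i*a_ij + b_j*a_ji - b_i*b_j positive semi-definite."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    B = np.diag(b)
    M = B @ A + A.T @ B - np.outer(b, b)   # symmetric by construction
    return bool((b >= -tol).all() and (np.linalg.eigvalsh(M) >= -tol).all())

# Two-stage Gauss method (order 4): algebraically stable; in fact M = 0 here.
r = np.sqrt(3.0) / 6.0
print(is_algebraically_stable([[0.25, 0.25 - r], [0.25 + r, 0.25]], [0.5, 0.5]))  # True

# Explicit midpoint method: not algebraically stable.
print(is_algebraically_stable([[0.0, 0.0], [0.5, 0.0]], [0.0, 1.0]))              # False
```

In the schemes constructed below, only the implicit part of the ARK pair needs to pass this test.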
We emphasize that each ARK method can be considered as a partitioned Runge-Kutta (PRK) method <cit.>. Specifically, let us introduce an equivalent reformulation of (<ref>) as follows: { u̇_f(𝐱, t) = f(u), u̇_g(𝐱, t) = g(u), u(𝐱, t) = u_f(𝐱, t) + u_g(𝐱, t). . It is straightforward to see that (<ref>) is equivalent to (<ref>) if the consistent initial condition u_f(𝐱, 0) + u_g(𝐱, 0) = u^0(𝐱) is imposed. Applying a PRK method to (<ref>) and eliminating the intermediate variables u_f, u_g, we readily obtain the ARK method mentioned above. By Remark <ref>, we can readily infer that an ARK method has order p if the corresponding PRK method has order p, as an ARK method is essentially a PRK method applied to the extended system (<ref>). Adrian et al. conducted an extensive study of generalized ARK methods in <cit.> and provided a comprehensive list of their order conditions. For convenience, Table <ref> summarizes the order conditions of ARK methods up to third order. §.§ Gradient flow systems and their SAV reformulation A gradient flow model can be expressed generally as u_t(𝐱, t) = 𝒢δℱ/δ u, 𝐱∈Ω, where u is a state variable, 𝒢∈ℝ^d × d is a negative semi-definite mobility operator, and δℱ/δ u is the variational derivative of the free energy functional ℱ with respect to u. The triple (u, 𝒢, ℱ) uniquely specifies a gradient flow system. When appropriate boundary conditions are imposed on u, system (<ref>) dissipates the free energy as follows: d ℱ/dt = ( δℱ/δ u, ∂ u/∂ t) = ( δℱ/δ u, 𝒢δℱ/δ u) ≤ 0, where (u, v) = ∫_Ω u v d𝐱, ∀ u, v ∈ L^2(Ω), is the inner product. Moreover, we denote by ‖u‖ = √((u, u)) the corresponding norm. For illustration, let us assume a free energy functional of the form ℱ(u, ∇ u) = 1/2(u, ℒu) + (F(u, ∇ u), 1), where ℒ is a linear, self-adjoint, and positive definite operator, and F represents a bulk energy bounded from below. The SAV approach introduces a new scalar variable q(t) = √( (F(u, ∇ u), 1) + C ), where C is a sufficiently large positive constant that guarantees the square root in (<ref>) is well-defined. The energy functional (<ref>) can then be rewritten in the quadratic form ℱ(u, q) = 1/2(u, ℒu) + q^2 - C. Let W(u) = √( (F(u, ∇ u), 1) + C) for simplicity. The model (<ref>) is reformulated into an equivalent system using the SAV approach <cit.>, as shown below: { u_t = 𝒢 (ℒu + 2q δ W/δ u - 2q ∇·δ W/δ∇ u ), q_t = (δ W/δ u, u_t ) + ( δ W/δ∇ u, ∇ u_t ), . equipped with the consistent initial conditions u(𝐱, 0) = u_0(𝐱), q(0) = √((F(u_0, ∇ u_0), 1) + C). Taking the inner products of both sides of the first and second equations of (<ref>) with ℒu + 2q δ W/δ u - 2q ∇·δ W/δ∇ u and 2q, respectively, and then combining the resulting equations, it is straightforward to confirm that system (<ref>) admits the following energy dissipation law: d/dtℱ(u, q) = ( ℒu + 2q δ W/δ u - 2q ∇·δ W/δ∇ u, 𝒢 (ℒu + 2q δ W/δ u - 2q ∇·δ W/δ∇ u )) ≤ 0.
§ HIGH-ORDER LINEARLY IMPLICIT AND ENERGY-STABLE SCHEMES
§.§ Construction of time integrators Let us further reformulate (<ref>) as follows: { v_t = 𝒢( ℒv + 2q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v] ), u_t = 𝒢( ℒu + 2 q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v]), q_t = ( δ W/δ u[v], u_t ) + ( δ W/δ∇ u[v], ∇ u_t ), . equipped with the initial conditions u(𝐱, 0) = v(𝐱, 0) = u_0(𝐱), q(0) = √((F(u_0(𝐱), ∇ u_0(𝐱)), 1) + C). We first demonstrate the equivalence between the reformulated system (<ref>), (<ref>) and the original system (<ref>). Suppose that ℒ is a linear, self-adjoint, and positive definite operator.
The reformulation (<ref>) together with the initial conditions (<ref>) is equivalent to (<ref>). According to the definition (<ref>) of q and introducing v(t) = u(t), it is evident that the original system (<ref>) implies (<ref>). We now demonstrate that the combination of (<ref>) and (<ref>) leads to (<ref>). Subtracting the second equation from the first equation of (<ref>) yields u_t - v_t = 𝒢( ℒ u - ℒ v ). Taking the inner product with ℒ u - ℒ v on both sides of the above equation produces 1/2d/dt(ℒ(u - v), u - v ) = (𝒢ℒ(u - v), ℒ(u - v)) ≤ 0. Due to the positive definiteness of ℒ and (<ref>), we conclude that u(t) = v(t), ∀ 0 ≤ t ≤ T. Inserting (<ref>) into the third equation of (<ref>), we obtain q_t = ( δ W/δ u[v] , v_t ) + ( δ W/δ∇ u[v], ∇ v_t ) = d W[v]/dt. Combining (<ref>), (<ref>), and (<ref>) results in q = W[v] = W[u]. Finally, it holds from the definition of W that 2q δ W/δ u = δ F/δ u, 2q ∇·δ W/δ∇ u = ∇·δ F/δ∇ u. Substituting the above results into (<ref>) yields (<ref>), which completes the proof. The positive definiteness of ℒ is a reasonable assumption for most phase field models. For the CH equation with Neumann or periodic boundary conditions, we have ℒ = -Δ and 𝒢 = Δ. The mass conservation law guarantees the invertibility of ℒ. A similar argument applies to the MBE equation. For the AC equation, we have ℒ = -Δ and 𝒢 = -I. Although ℒ is only positive semi-definite in this case, we can introduce a stabilization parameter κ and equivalently recast the AC equation as u_t = - ( (κ I - Δ) u - (κ u + f(u)) ) := - (ℒ_κ u + f_κ (u)). Then, ℒ_κ = κ I - Δ is positive definite. The extension of (<ref>) results in a more complex system (<ref>). However, this reformulation provides an elegant platform for developing high-order, linearly implicit, and energy-stable schemes, as will be demonstrated subsequently. It should be noted that the equivalent reformulation of (<ref>) is not unique, and other similar reformulations can be employed to develop numerical schemes through the framework described in this paper. For simplicity, we only consider (<ref>) in this section. System (<ref>) is an extension of the original SAV approach (<ref>) proposed in <cit.>. Some other SAV approaches have recently gained popularity, including the exponential SAV approach <cit.> and the generalized SAV approach <cit.>. In <cit.>, Ju et al. have also introduced a novel exponential SAV approach that preserves both the maximum bound principle (MBP) and the energy dissipation law (EDL) for the AC equations. These approaches can also be extended similarly to (<ref>) and discretized by the methods outlined below to obtain high-order and energy-stable schemes. For simplicity, we will only use the original SAV approach for illustration. Assuming that u^n, v^n, and q^n have already been determined, the SAV-ARK methods are outlined below. [SAV-ARK] The intermediate variables v_ni, u_ni, and q_ni are solved from { v_ni = v^n + τ∑_j=0^s-1 (a_ijv̇^ℒ_nj + ā_ijv̇^𝒩_nj) , u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj, q_ni = q^n + τ∑_j=0^s-1 a_ijq̇_nj, v̇^ℒ_ni = 𝒢ℒ v_ni, v̇^𝒩_ni = 𝒢 ( 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni] ), u̇_ni = 𝒢 ( ℒ u_ni + 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni] ), q̇_ni = (δ W/δ u[v_ni], u̇_ni ) + ( δ W/δ∇ u[v_ni], ∇u̇_ni ). . Then, the solution at t_n+1 is v^n+1 = v^n + τ∑_i=0^s-1 b_i(v̇_ni^ℒ + v̇_ni^𝒩), u^n+1 = u^n + τ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + τ∑_i=0^s-1 b_i q̇_ni. We note here that linearly implicit schemes can be obtained by carefully choosing the RK coefficients in Algorithm <ref>.
One effective method is to discretize u and q with DIRK methods and v with ERK methods. These methods will be referred to as SAV-DIARK methods in what follows. It is important to emphasize that, by introducing z = (v, u, q)^T, Algorithm <ref> can be regarded as an ARK method: z_ni = z^n + τ∑_j=0^s-1 (a_ijΦ(z_nj) + ā_ijΨ(z_nj) ), z^n+1 = z^n + τ∑_i=0^s-1 b_i ( Φ(z_ni) + Ψ(z_ni) ), where Φ(z) = ( [ 𝒢ℒ v; 𝒢( ℒu + 2q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v]); ( δ W/δ u[v], u̇) + ( δ W/δ∇ u[v], ∇u̇) ]), Ψ(z) = ( [ 𝒢( 2q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v] ); 0; 0 ]). This allows us to easily derive the order conditions of the proposed schemes from the order conditions of ARK methods. To further simplify and improve the stability of Algorithm <ref>, we introduce the following modified SAV-ARK (SAV-MARK) scheme. [SAV-MARK] The intermediate variables v_ni, u_ni, q_ni are solved from { v_ni = u^n + τ∑_j=0^s-1 ( a_ijv̇^ℒ_nj + ā_ijv̇^𝒩_nj) , u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj, q_ni = q^n + τ∑_j=0^s-1 a_ijq̇_nj, v̇^ℒ_ni = 𝒢ℒ v_ni, v̇^𝒩_ni = 𝒢( 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni]), u̇_ni = 𝒢 ( ℒ u_ni + 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni] ), q̇_ni = (δ W/δ u[v_ni], u̇_ni) + ( δ W/δ∇ u[v_ni], ∇u̇_ni). . Then, the solution at t_n+1 is u^n+1 = u^n + τ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + τ∑_i=0^s-1 b_i q̇_ni. In contrast to Algorithm <ref>, Algorithm <ref> does not require updating the variable v at integer time steps. This modification not only reduces the computational cost but also improves the stability of the scheme in practice. Additionally, thanks to (<ref>), this modification does not affect the accuracy of Algorithm <ref>. §.§ Energy stability and solvability Suppose the RK method applied to u in Algorithms <ref> and <ref> is algebraically stable. Then, the SAV-ARK and SAV-MARK methods are unconditionally energy-stable in the sense that ℱ(u^n+1, q^n+1) ≤ℱ(u^n, q^n), 0 ≤ n ≤ N_t - 1. By the definition of (<ref>) and the self-adjointness of ℒ, we can derive 1/2 (u^n+1, ℒ u^n+1) - 1/2(u^n, ℒ u^n) = τ∑_i=0^s-1 b_i (u̇_ni, ℒ u^n) + τ^2/2∑_i=0^s-1∑_j=0^s-1 b_ib_j (u̇_ni, ℒu̇_nj). Substituting u^n = u_ni - τ∑_j=0^s-1a_iju̇_nj into the above equation and observing that ∑_i = 0^s-1∑_j = 0^s-1 b_i a_ij(u̇_ni, ℒu̇_nj) = ∑_i=0^s-1∑_j=0^s-1 b_ja_ji(u̇_ni, ℒu̇_nj), we obtain 1/2 (u^n+1, ℒu^n+1) - 1/2(u^n, ℒ u^n) = τ∑_i=0^s-1 b_i (u̇_ni, ℒu_ni) - τ^2/2∑_i = 0^s-1∑_j=0^s-1 M_ij (u̇_ni, ℒu̇_nj) ≤τ∑_i=0^s-1 b_i (u̇_ni, ℒu_ni). The last inequality is a result of the positive semi-definiteness of M and the positive definiteness of ℒ. Using a similar procedure, we have (q^n+1)^2 - (q^n)^2 ≤ 2τ∑_i=0^s-1 b_i q_niq̇_ni. Taking the inner products of the sixth and last equations of (<ref>) with ℒu_ni and 2q_ni, respectively, and adding the results together yields (u̇_ni, ℒu_ni) + 2 q_niq̇_ni = (𝒢μ_ni, μ_ni) ≤ 0, where μ_ni = ℒ u_ni + 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni]. The desired result is thus obtained by combining (<ref>)–(<ref>) with the condition b_i ≥ 0. The proposed approach uses the quadratic energy (as given in (<ref>)) instead of the original one. When a higher-order time discretization is applied to (<ref>), the resulting quadratic energy becomes a high-order approximation of the original energy. Although the SAV method may be criticized for this weakness, recent studies have attempted to overcome it. For example, Jiang et al.
introduced the relaxed SAV approach in <cit.> to connect the modified and original energies at the discrete level, while <cit.> proposed an alternating approach that combines the SAV and Lagrange multiplier methods to preserve the original energy. Our technique can also be utilized to develop higher-order schemes based on these approaches. It should be noted that Theorem <ref> guarantees the boundedness of the numerical solutions {u^n}_n=0^N_t in the energy norm ‖·‖_ℒ, where ‖u‖_ℒ := √((ℒu, u)). However, the solutions {v^n}_n=0^N_t obtained from Algorithm <ref> may not be bounded. Hence, Algorithm <ref> is expected to be more stable in practical applications, since it does not involve the update of v^n. Let us now concentrate on the solvability of the SAV-MDIARK methods (the modified SAV-DIARK methods). Notice that the proof for the SAV-DIARK methods is similar, and we omit it here. Assume that the mobility operator satisfies 𝒢 = - ℬ^⋆ℬ and that the RK coefficients in Algorithm <ref> satisfy a_ii≥ 0. The semi-discrete SAV-MDIARK scheme is then uniquely solvable when the time step is sufficiently small. Here, ℬ is a linear operator and ℬ^⋆ represents its adjoint. Since we are considering a DIRK method, the scheme for the intermediate variable v_ni can be reformulated as follows: v_ni = u^n + τ a_ii𝒢ℒ v_ni + τ∑_j=0^i-1 (a_ijv̇_nj^ℒ + ā_ijv̇_nj^𝒩). Notably, we can solve the above system successively for i from 0 to s-1, where the only unknown in each step is v_ni. Combining the self-adjointness of ℒ with the assumption on 𝒢, it is readily asserted that the decomposition 𝒢ℒ = -𝒜^⋆𝒜 holds. Therefore, the solution v_ni can be regarded as the minimizer of the convex functional 𝒮[v] = 1/2 (‖v‖^2 + τ a_ii‖𝒜 v‖^2) - (u^n + τ∑_j=0^i-1 (a_ijv̇_nj^ℒ + ā_ijv̇_nj^𝒩), v). Therefore, the unique solvability of v_ni is straightforward. Next, we prove the solvability of the system coupling u_ni and q_ni. Let f_ni = δ W/δ u[v_ni] - ∇·δ W/δ∇ u[v_ni]. Thanks to the fact that q_ni is independent of space, it can be updated by q_ni = ( q^n + τ∑_j=0^i-1a_ijq̇_nj + τ a_ii (𝒜f_ni, 𝒜 u^1_ni ) ) / ( 1 + 2τ a_ii‖ℬ f_ni‖^2 - τ a_ii (𝒜f_ni, 𝒜 u_ni^2) ), where u^1_ni and u_ni^2 are defined by u_ni^1 = argmin _u1/2 (‖u‖^2 + τ a_ii‖𝒜 u‖^2) - (u^n + τ∑_j=0^i-1 a_iju̇_nj, u ), u_ni^2 = argmin _u1/2 (‖u‖^2 + τ a_ii‖𝒜 u‖^2) - 2τ a_ii (𝒢 f_ni, u ). Since the time step is assumed to be sufficiently small, the solvability of the system is then straightforward.
§ THEORETICAL ANALYSIS
§.§ Estimates of the global error In this section, we present global error estimates for the semi-discrete SAV-MARK methods. To simplify the presentation, we consider only the classical L^2 gradient flow, i.e., 𝒢 = -1, ℒ = -Δ, and ℱ(u) = 1/2‖∇ u‖^2 + ∫_Ω F(u) d𝐱. Without loss of generality, our subsequent analysis is based on the following assumptions 𝒜1–𝒜3: 𝒜1: The implicit component of the ARK method is algebraically and diagonally stable. 𝒜2: The exact solution of the system is sufficiently smooth in both space and time. 𝒜3: The nonlinearity F(·) is twice differentiable. The SAV-MARK scheme for the AC equation is given by v_ni = u^n + τ∑_j = 0^s-1 (a_ijΔ v_nj - 2 ā_ij q_nj W^'(v_nj)), u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj, q_ni = q^n + τ∑_j=0^s-1 a_ijq̇_nj, u^n+1 = u^n + τ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + τ∑_i=0^s-1 b_i q̇_ni, where u̇_ni = Δ u_ni - 2 q_ni W^'(v_ni), q̇_ni = (W^'(v_ni), u̇_ni), W^'(u) = F^'(u)/(2 √(∫_Ω F(u) d𝐱 + C_0 )). The major obstacle in establishing error estimates for the SAV-MARK method is obtaining an a priori L^∞ bound for the intermediate stages v_ni.
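Before addressing this bound, we record a minimal, self-contained sketch (our own illustration, not the authors' code) of the scheme above in its simplest instance: one stage with a_00 = b_0 = 1 (implicit Euler) for the linear and SAV parts and ā_00 = 0 for the explicit nonlinear stage, on a 1D periodic grid with a pseudo-spectral Laplacian. The double-well potential, grid size, and step size are assumptions made purely for illustration.

```python
import numpy as np

# One-stage SAV-MDIARK step for the 1D periodic AC equation (G = -1, L = -Laplacian).
Nx, Lx, tau, C0 = 128, 2.0 * np.pi, 1.0e-2, 1.0
dx = Lx / Nx
lam = (2.0 * np.pi * np.fft.fftfreq(Nx, d=dx)) ** 2      # Fourier symbol of L

F = lambda u: 0.25 * (u ** 2 - 1.0) ** 2                 # double-well bulk energy
dF = lambda u: u ** 3 - u

def inv_IplusTauL(w):
    """Apply (I + tau*L)^{-1} spectrally; L is diagonal in Fourier space."""
    return np.real(np.fft.ifft(np.fft.fft(w) / (1.0 + tau * lam)))

def step(u, q):
    # Stage v_00 solves (I + tau*L) v = u^n; with abar_00 = 0 the nonlinear
    # term does not enter the (single) v-stage.
    v = inv_IplusTauL(u)
    g = dF(v) / (2.0 * np.sqrt(np.sum(F(v)) * dx + C0))  # W'(v_00)
    # (u_00, q_00) solve a LINEAR system; the ansatz u_00 = u1 - 2*tau*q_00*u2
    # with (I + tau*L) u1 = u^n and (I + tau*L) u2 = g decouples it.
    u1, u2 = v, inv_IplusTauL(g)                         # here u1 coincides with v
    q_new = (q + np.sum(g * (u1 - u)) * dx) / (1.0 + 2.0 * tau * np.sum(g * u2) * dx)
    u_new = u1 - 2.0 * tau * q_new * u2
    return u_new, q_new                                  # stiffly accurate: u^{n+1} = u_00

def modified_energy(u, q):
    Lu = np.real(np.fft.ifft(lam * np.fft.fft(u)))
    return 0.5 * np.sum(u * Lu) * dx + q ** 2 - C0

rng = np.random.default_rng(0)
u = 0.05 * rng.standard_normal(Nx)
q = np.sqrt(np.sum(F(u)) * dx + C0)
for n in range(500):
    u, q = step(u, q)
    if n % 100 == 0:
        print(f"step {n:3d}: modified energy = {modified_energy(u, q):.6f}")
```

Since implicit Euler is algebraically stable, the energy-stability theorem above applies, and the printed modified energy decreases monotonically; because b_0 = a_00, the stage value is also the updated solution. We now return to the boundedness issue raised above.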
To address this issue, previous studies truncated the nonlinearity to a globally Lipschitz function with compact support. This technique is reliable when the continuous solution is bounded and the numerical solution is sufficiently close to it. Here, we adopt a similar approach. Let U(𝐱, t) be the exact solution to the L^2 gradient flow and Q(t) = √(∫_Ω F(U(𝐱, t)) d𝐱 + C ). We define M_u = ‖U(𝐱, t)‖_C([0, T]; L^∞ (Ω)), Ṁ_u = ‖U̇(𝐱, t)‖_C([0, T]; L^∞(Ω)), M_q = max_0 ≤ t ≤ T|Q(t)|. The constants above are well-defined by assumption 𝒜2 and the definition of Q(t). We set ℬ = M_u + 1 and let W^'_ℬ(s) = W^'(s) ρ(s/ℬ), where ρ(s) is a smooth function with compact support such that ρ(s) = { 1, 0 ≤ |s| ≤ 1, ∈ [0, 1], 1 ≤ |s| ≤ 2, 0, |s| ≥ 2. . It is readily confirmed that W^'_ℬ(·) is globally Lipschitz continuous and that W^'_ℬ (s) = W^'(s), ∀ 0 ≤ |s| ≤ℬ, |W^'_ℬ(s)| ≤ L_1, | W^'_ℬ (r) - W^'_ℬ (s) | ≤ L_2 |r - s|. Following <cit.>, we introduce reference solutions 𝒱_ni, 𝒰_ni, 𝒬_ni, 𝒰^n, and 𝒬^n such that 𝒱_ni = U(t_n) + τ∑_j = 0^s-1 (a_ijΔ𝒱_nj - 2 ā_ij𝒬_nj W^'_ℬ (𝒱_nj)), 𝒰_ni = U(t_n) + τ∑_j=0^s-1 a_ij𝒰̇_nj, 𝒰̇_ni = Δ𝒰_ni - 2 𝒬_ni W^'_ℬ(𝒱_ni), 𝒬_ni = Q(t_n) + τ∑_j=0^s-1 a_ij𝒬̇_nj, 𝒬̇_ni = (W^'_ℬ(𝒱_ni), 𝒰̇_ni). These reference solutions play an important role in obtaining global estimates for the SAV-MARK methods. Suppose that the time step satisfies τ≤min{ (2c_2)^-1, (4c(c_3 + c_4))^-1}, where the constants will be specified in the subsequent derivations. We then have the following estimates for the intermediate solutions 𝒱_ni: ‖𝒱_ni‖_L^∞≤ M_u + 1/2, 0 ≤ n ≤ N_t, 0 ≤ i ≤ s-1. Moreover, ∑_i= 0^s-1 (|Q(t_ni) - 𝒬_ni| + ‖U(t_ni) - 𝒱_ni‖ + ‖U(t_ni) - 𝒰_ni‖) ≤ c_3 τ^2, ∑_i=0^s-1‖Δ (U(t_ni) - 𝒱_ni)‖≤ c_4 τ. Since W^'_ℬ(U(t_ni)) = W^'(U(t_ni)), the exact solutions satisfy U(t_ni) = U(t_n) + τ∑_j = 0^s-1 (a_ijΔ U(t_nj) - 2 ā_ij Q(t_nj) W^'_ℬ (U(t_nj))) + η^v_ni, U(t_ni) = U(t_n) + τ∑_j=0^s-1 a_ijU̇(t_nj) + η_ni^u, U̇(t_ni) = Δ U(t_ni) - 2 Q(t_ni) W^'_ℬ(U(t_ni)), Q(t_ni) = Q(t_n) + τ∑_j=0^s-1 a_ijQ̇(t_nj) + η^q_ni, Q̇(t_ni) = (W^'_ℬ(U(t_ni)), U̇(t_ni)), where ∑_i=0^s-1 (‖η_ni^v‖ + ‖η_ni^u‖ + |η_ni^q|) ≤ c_1 τ^2. Subtracting the second and sixth equations of (<ref>) from those of (<ref>) yields U(t_ni) - 𝒱_ni = τ∑_j=0^s-1 ( a_ijΔ (U(t_nj) - 𝒱_nj) - 2 ā_ijξ_nj ) + η_ni^v, U(t_ni) - 𝒰_ni = τ∑_j=0^s-1 a_ij (U̇(t_nj) - 𝒰̇_nj) + η_ni^u, Q(t_ni) - 𝒬_ni = τ∑_j=0^s-1 a_ij (Q̇(t_nj) - 𝒬̇_nj) + η_ni^q, where U̇(t_ni) - 𝒰̇_ni = Δ (U(t_ni) - 𝒰_ni) - 2 ξ_ni, ξ_ni = Q(t_ni) W^'_ℬ(U(t_ni)) - 𝒬_ni W^'_ℬ (𝒱_ni), Q̇(t_ni) - 𝒬̇_ni = (W^'_ℬ (U(t_ni)) - W^'_ℬ(𝒱_ni), U̇(t_ni)) + ( W^'_ℬ(𝒱_ni), U̇(t_ni) - 𝒰̇_ni ). There is no difficulty in confirming that ‖ξ_ni‖ ≤ M_u ‖U(t_ni) - 𝒱_ni‖ + L_1 |Q(t_ni) - 𝒬_ni|, |Q̇(t_ni) - 𝒬̇_ni| ≤Ṁ_u L_2 ‖U(t_ni) - 𝒱_ni‖ + L_1 ‖U̇(t_ni) - 𝒰̇_ni‖. According to assumption 𝒜1, there exists a positive definite diagonal matrix H = diag{h_0, h_1, ⋯, h_s-1} such that M = H A+ A^TH is positive definite. Therefore, we can find a sufficiently small constant l such that M_l = (m_ij^l) = A^-TM A^-1 - 2 l H = A^-TH + H A^-1 - 2 l H is positive definite. Moreover, let M_d = (m_ij^d) = H A^-1 and M_s = (m_ij^s) = H A^-1Ā.
Then, 0 ≤ 2 l ∑_i=0^s-1h_i ‖U(t_ni) - 𝒱_ni‖^2 - 2τ∑_i=0^s-1h_i (Δ ( U(t_ni) - 𝒱_ni ), U(t_ni) - 𝒱_ni) = 2 ∑_i,j = 0^s-1m_ij^d (U(t_ni) - 𝒱_ni, U(t_nj) - 𝒱_nj) - ∑_i,j = 0^s-1m_ij^l (U(t_ni) - 𝒱_ni, U(t_nj) - 𝒱_nj) - 2τ∑_i=0^s-1h_i (Δ ( U(t_ni) - 𝒱_ni ), U(t_ni) - 𝒱_ni) = 2 ∑_i,j = 0^s-1m_ij^d (U(t_ni) - 𝒱_ni, η_nj^v) - ∑_i,j = 0^s-1m_ij^l (U(t_ni) - 𝒱_ni, U(t_nj) - 𝒱_nj) - 4 τ∑_i,j = 0^s-1m^s_ij (U(t_ni) - 𝒱_ni, ξ_nj) ≤ 2 λ̅_d ∑_i=0^s-1‖U(t_ni) - 𝒱_ni‖∑_i=0^s-1‖η_ni^v‖ - λ_l ∑_i=0^s-1‖U(t_ni) - 𝒱_ni‖^2 + 4 λ̅_s (M_q L_2 + L_1) τ∑_i=0^s-1 ‖U(t_ni) - 𝒱_ni‖ ( ∑_i=0^s-1 |Q(t_ni) - 𝒬_ni| + ∑_i=0^s-1 ‖U(t_ni) - 𝒱_ni‖ ), where λ̅_α and λ_α, α = d,l,s,h, are the maximum and minimum eigenvalues of M_d, M_l, M_s, and H, respectively. Consequently, ∑_i=0^s-1‖U(t_ni) - 𝒱_ni‖ ≤ (4 s λ̅_s (M_q L_2 + L_1)/λ_l) τ∑_i=0^s-1 ( ‖U(t_ni) - 𝒱_ni‖ + |Q(t_ni) - 𝒬_ni| ) + (2 s λ̅_d/λ_l) ∑_i=0^s-1‖η_ni^v‖. Following the same procedure, we can derive ∑_i=0^s-1‖U(t_ni) - 𝒰_ni‖ ≤ (4 s λ̅_h (M_q L_2 + L_1)/λ_l) τ∑_i=0^s-1 ( ‖U(t_ni) - 𝒱_ni‖ + |Q(t_ni) - 𝒬_ni| ) + (2 s λ̅_h/λ_l) ∑_i=0^s-1‖η_ni^u‖. Combining (<ref>) with the second equation of (<ref>), we have ∑_i=0^s-1‖U̇ (t_ni) - 𝒰̇_ni‖ ≤ (4s λ̅_d λ̅_h (M_q L_2 + L_1)/(λ_l λ_h)) ∑_i=0^s-1 ( ‖U(t_ni) - 𝒱_ni‖ + |Q(t_ni) - 𝒬_ni| ) + (s λ̅_d (2 λ̅_h + λ_l)/(λ_l λ_h)) τ^-1∑_i=0^s-1‖η_ni^u‖. Subtracting the fourth equation of (<ref>) from that of (<ref>) gives Q(t_ni) - 𝒬_ni = τ∑_j=0^s-1 a_ij (Q̇(t_nj) - 𝒬̇_nj) + η_ni^q. Repeatedly using the above technique and combining (<ref>) and (<ref>) then results in ∑_i=0^s-1|Q(t_ni) - 𝒬_ni| ≤ (2s λ̅_h (λ̅_h + 2s λ̅_d λ̅_h L_1)(Ṁ_u L_2 + L_1)/(λ_l λ_h)) τ∑_i=0^s-1 ( ‖U(t_ni) - 𝒱_ni‖ + |Q(t_ni) - 𝒬_ni|) + (2s^2 λ̅_h λ̅_d (2 λ̅_h + λ_l) L_1/(λ_l^2 λ_h)) ∑_i=0^s-1 (‖η_ni^u‖ + |η_ni^q|). Adding (<ref>), (<ref>) and (<ref>) together yields ∑_i= 0^s-1 ( ‖U(t_ni) - 𝒱_ni‖ + ‖U(t_ni) - 𝒰_ni‖ + |Q(t_ni) - 𝒬_ni| ) ≤ c_2 τ∑_i= 0^s-1 ( ‖U(t_ni) - 𝒱_ni‖ + ‖U(t_ni) - 𝒰_ni‖ + |Q(t_ni) - 𝒬_ni| ) + (c_3/2) τ^2. It follows by setting τ≤ (2c_2)^-1 that ∑_i= 0^s-1 ( ‖U(t_ni) - 𝒱_ni‖ + ‖U(t_ni) - 𝒰_ni‖ + |Q(t_ni) - 𝒬_ni| ) ≤ c_3 τ^2. Using equation (<ref>), we then demonstrate the boundedness of 𝒱_ni for sufficiently small τ. Inserting (<ref>) into the first equation of (<ref>) yields ∑_i=0^s-1‖Δ (U(t_ni) - 𝒱_ni)‖ ≤ ((λ̅_d + 2 λ̅_s (M_q L_2 + L_1))/(λ_h τ)) ∑_i=0^s-1 ( ‖U(t_ni) - 𝒱_ni‖ + |Q(t_ni) - 𝒬_ni| + ‖η_ni^v‖ ) ≤ c_4 τ. The Sobolev inequality ‖f‖_L^∞≤ c‖f‖_H^2 and the triangle inequality give us ‖𝒱_ni‖_L^∞ ≤ ‖U(t_ni)‖_L^∞ + ‖U(t_ni) - 𝒱_ni‖_L^∞ ≤ M_u + c ‖U(t_ni) - 𝒱_ni‖_H^2 ≤ M_u + 2c(c_3 + c_4) τ. The estimate for 𝒱_ni in Lemma <ref> is straightforward after setting τ≤ (4c(c_3 + c_4))^-1. Therefore, we have completed the proof. Using Taylor's formula and Lemma <ref>, it is straightforward to confirm that when the time step satisfies the condition of Lemma <ref>, the reference solutions further satisfy U(t_n+1) = U(t_n) + τ∑_i=0^s-1 b_i 𝒰̇_ni + η_n+1^u, Q(t_n+1) = Q(t_n) + τ∑_i=0^s-1 b_i 𝒬̇_ni + η_n+1^q, with ‖η_n+1^u‖_H^1 + |η_n+1^q| ≤ c_5 τ^p+1. We proceed to prove the convergence of the modified scheme obtained by replacing the nonlinear term W^'(·) in (<ref>) with W^'_ℬ(·). For clarity, we continue to use the original notation to denote the solution of this modified scheme. Our proof demonstrates that ‖v_ni‖_L^∞≤ M_u + 1 for sufficiently small time steps. Consequently, W^'_ℬ(v_ni) = W^'(v_ni), which indirectly confirms the convergence of the SAV-MARK method (<ref>). Let 𝒥_ni = 𝒱_ni - v_ni, ℰ_ni = 𝒰_ni - u_ni, 𝒟_ni = 𝒬_ni - q_ni. Define the solution errors E^n+1 = U(t_n+1) - u^n+1, D^n+1 = Q(t_n+1) - q^n+1.
Let c_⋆ =((3c_5^2 + c_11)Texp(2c_12T))^1/2, and let the time step satisfy τ≤min{ (2c_2)^-1, (4c(c_3 + c_4))^-1, (2c_6)^-1, (4c(c_⋆ c_7 + c_8))^-1/(p-1), (2c_12)^-1}. Then, the SAV-MARK method is convergent in the sense ‖E^n‖ + |D^n| ≤ c_⋆τ^p, 0 ≤ n ≤ N_t. We will complete the proof by mathematical induction. As SAV-MARK is a one-step method, it is enough to prove the result for n = l+1 while assuming it holds for n = l. Subtracting (<ref>) and (<ref>) from (<ref>), we get 𝒥_li = E^l + τ∑_j=0^s-1 (a_ijΔ𝒥_lj - 2 â_ijζ_lj), ℰ_li = E^l + τ∑_j=0^s-1 a_ijℰ̇_lj, 𝒟_li = D^l + τ∑_j=0^s-1 a_ij𝒟̇_lj, E^l+1 = E^l + τ∑_i=0^s-1 b_i ℰ̇_li + η_l+1^u, D^l+1 = D^l + τ∑_i=0^s-1 b_i 𝒟̇_li + η_l+1^q, where ℰ̇_li = Δℰ_li - 2ζ_li, ζ_li = 𝒬_li (W^'_ℬ(𝒱_li) - W^'_ℬ(v_li) ) + 𝒟_li W^'_ℬ(v_li), 𝒟̇_li = (W^'_ℬ (𝒱_li) - W^'_ℬ(v_li), 𝒰̇_li) + (W^'_ℬ (v_li), ℰ̇_li). Based on the proof of Lemma <ref>, we can conclude that |𝒬_li| ≤ℳ_q and ‖𝒰̇_li‖_L^∞≤ℳ̇_u. Applying the properties of W^'_ℬ(·) then yields ‖ζ_li‖ ≤ (ℳ_q L_2 + L_1) (‖𝒥_li‖ + |𝒟_li| ), |𝒟̇_li| ≤ (ℳ̇_u L_2 + L_1)( ‖𝒥_li‖ + ‖ℰ̇_li‖ ). Furthermore, using (<ref>) and the same technique employed in Lemma <ref>, we can still arrive at ∑_i=0^s-1‖𝒥_li‖ ≤ (2 s λ̅_s (ℳ_q L_2 + L_1)/λ_l) τ∑_i=0^s-1 ( ‖𝒥_li‖ + |𝒟_li| ) + (2s λ̅_d/λ_l) ‖E^l‖, ∑_i=0^s-1‖ℰ_li‖ ≤ (2 s λ̅_h (ℳ_q L_2 + L_1)/λ_l) τ∑_i=0^s-1 ( ‖𝒥_li‖ + |𝒟_li| ) + (2s λ̅_h/λ_l) ‖E^l‖, ∑_i=0^s-1 |𝒟_li| ≤ (s λ̅_h (λ̅_d λ̅_h + 2 s λ̅_d λ̅_h L_1)(ℳ̇_u L_2 + L_1)/(λ_l^2 λ_h)) τ∑_i=0^s-1 ( ‖𝒥_li‖ + |𝒟_li| ) + (s^2 λ̅_h λ̅_d (2λ̅_h + λ_l) (ℳ̇_u L_2 + L_1)/(λ_l^2 λ_h)) |D^l|, ∑_i=0^s-1‖ℰ̇_li‖ ≤ (2s λ̅_d λ̅_h(ℳ_q L_2 + L_1)/(λ_l λ_h)) ∑_i=0^s-1 ( ‖𝒥_li‖ + |𝒟_li| ) + (s λ̅_d (λ_l + 2 λ̅_h)/(λ_l λ_h)) τ^-1‖E^l‖. Consequently, ∑_i=0^s-1 ( ‖𝒥_li‖ + ‖ℰ_li‖ + |𝒟_li| ) ≤ c_6 τ∑_i=0^s-1 ( ‖𝒥_li‖ + ‖ℰ_li‖ + |𝒟_li| ) + (c_7/2)(‖E^l‖ + |D^l|). The restriction τ≤ (2c_6)^-1 and the induction hypothesis produce ∑_i=0^s-1 ( ‖𝒥_li‖ + ‖ℰ_li‖ + |𝒟_li| ) ≤ c_⋆ c_7 τ^p. Combining the above estimate with the first equation of (<ref>) then yields ‖Δ𝒥_li‖≤ c_8 τ^p-1, where c_8 = (λ̅_d c_⋆ c_7 + s λ̅_d c_⋆ + 2 λ̅_s (ℳ_q L_2 + L_1)c_⋆ c_7)/λ_h. Employing the inequalities ‖∇ f‖^2 ≤‖f‖‖Δ f‖ and ‖f‖_L^∞≤ c ‖f‖_H^2, it can be shown that if τ≤ (4c(c_⋆ c_7 + c_8))^-1/(p-1), ‖v_li‖_H^2 ≤ ‖𝒱_li‖_H^2 + ‖𝒥_li‖_H^2 ≤ ‖𝒱_li‖_H^2 + 2(c_⋆ c_7 + c_8) τ^p-1 ≤ c_9 , ‖v_li‖_L^∞ ≤ ‖𝒱_li‖_L^∞ + 2c(c_⋆ c_7 + c_8) τ^p-1 ≤ M_u + 1. Let us now provide estimates for E^l+1 and D^l+1. Taking the difference between ‖E^l+1‖^2 and ‖E^l‖^2 and using the fourth equation of (<ref>) yields ‖E^l+1‖^2 - ‖E^l‖^2 = 2τ∑_i=0^s-1 (E^l, b_i ℰ̇_li) + τ^2 ∑_i=0^s-1∑_j=0^s-1 b_i b_j (ℰ̇_li, ℰ̇_lj) + 2 (E^l + τ∑_i=0^s-1 b_i ℰ̇_li, η^u_l+1) + ‖η_l+1^u‖^2. Next, we individually estimate each of the terms on the right-hand side of (<ref>). Based on the second equation of (<ref>) and the algebraic stability condition, we deduce 2τ∑_i=0^s-1 (E^l, b_i ℰ̇_li) + τ^2 ∑_i=0^s-1∑_j=0^s-1 b_i b_j (ℰ̇_li, ℰ̇_lj) + 2 τ∑_i=0^s-1 b_i ‖∇ℰ_li‖^2 = - τ^2 ∑_i=0^s-1∑_j=0^s-1 m_ij (ℰ̇_li, ℰ̇_lj) + 2τ∑_i=0^s-1 b_i (ℰ_li, ℰ̇_li) + 2 τ∑_i=0^s-1 b_i ‖∇ℰ_li‖^2 ≤ 4(ℳ_q L_2 + L_1 + 1) τ∑_i=0^s-1 b_i (‖𝒥_li‖^2 + ‖ℰ_li‖^2 + |𝒟_li|^2). Using the Cauchy-Schwarz inequality and ab ≤ (τ/2) a^2 + (1/(2τ)) b^2 yields (E^l + τ∑_i=0^s-1 b_i ℰ̇_li, η^u_l+1) = (E^l, η_l+1^u) + τ∑_i=0^s-1 b_i (Δℰ_li - 2ζ_li, η^u_l+1) ≤ (τ/2)‖E^l‖^2 + (1/(2τ))‖η_l+1^u‖^2 + τ∑_i=0^s-1 b_i (‖∇ℰ_li‖‖∇η_l+1^u‖ + 2 ‖ζ_li‖‖η_l+1^u‖) ≤ (τ/2)‖E^l‖^2 + (τ/2)∑_i=0^s-1 b_i ‖∇ℰ_li‖^2 + 2(ℳ_q L_2 + L_1) τ∑_i=0^s-1 b_i (‖𝒥_li‖^2 + |𝒟_li|^2) + 2 c^2_5 τ^2p+1. Inserting (<ref>) and (<ref>) into (<ref>) yields ‖E^l+1‖^2 + τ∑_i=0^s-1 b_i ‖∇ℰ_li‖^2 ≤ (1 + τ )‖E^l‖^2 + 8(ℳ_q L_2 + L_1 + 1)τ∑_i=0^s-1 b_i (‖𝒥_li‖^2 + ‖ℰ_li‖^2 + |𝒟_li|^2) + 3c_5^2 τ^2p+1.
Analogously, |D^l+1|^2 ≤ (1 + c_9 τ) |D^l|^2 + c_10τ∑_i=0^s-1 (‖𝒥_li‖^2 + ‖ℰ_li‖^2 + |𝒟_li|^2) + c_11τ^2p+1 . Moreover, ∑_i=0^s-1 (‖𝒥_li‖^2 + ‖ℰ_li‖^2 + |𝒟_li|^2) ≤ ( ∑_i=0^s-1 (‖𝒥_li‖ + ‖ℰ_li‖ + |𝒟_li|) )^2 ≤ 2 c_7^2 ( ‖E^l‖^2 + |D^l|^2 ). Collecting (<ref>), (<ref>), (<ref>) produces ‖E^l+1‖^2 + |D^l+1|^2 ≤ (1 + c_12τ) (‖E^l‖^2 + |D^l|^2) + (3c_5^2 + c_11) τ^2p+1. We observe that c_5, c_11, c_12 are independent of c_⋆ and of the discrete parameters according to the derivations. By selecting τ≤ (2c_12)^-1 and applying the discrete Gronwall inequality, we can derive the desired result with c_⋆ = ((3c_5^2 + c_11)Texp(2c_12T))^1/2. Therefore, the proof is completed. §.§ Relationships with the SAV-RK methods In <cit.>, Li et al. developed high-order unconditionally energy-stable schemes based on SAV techniques and RK methods. To obtain arbitrarily high-order and linearly implicit schemes, they proposed an iterative procedure to get a sufficiently accurate prediction of u, which was then used to discretize the nonlinear terms. In this section, we demonstrate that every SAV-RK method can be viewed as an ARK method applied to an appropriate reformulation of (<ref>). This new perspective enables us to systematically investigate the order conditions of existing works, utilizing the order conditions of ARK approaches. Applying their SAV-RKPC(M) method to gradient flows leads to the following algorithm. [SAV-RKPC(M)] Given a fundamental RK method with coefficients (A, b, c), the intermediate variables are calculated by the prediction-correction procedure as follows. 1. Prediction: We initialize u_ni^(0) = u^n, q_ni^(0) = q^n. Let M be a positive integer. Then, we iteratively compute u_ni^(m) and q_ni^(m) for m = 0 to M-1 by { u_ni^(m+1) = u^n + τ∑_j=0^s-1 a_iju̇_nj^(m+1), q_ni^(m+1) = q^n + τ∑_j=0^s-1a_ijq̇_nj^(m+1), u̇_ni^(m+1) = 𝒢( ℒu_ni^(m+1) + 2q_ni^(m)δ W/δ u [u_ni^(m)] - 2 q_ni^(m)∇·δ W/δ∇ u[u_ni^(m)] ), q̇_ni^(m+1) = (δ W/δ u [u_ni^(m+1)], u̇_ni^(m+1)) + (δ W/δ∇ u[u_ni^(m+1)], ∇u̇_ni^(m+1)). . If max_i ‖u_ni^(m+1) - u_ni^(m)‖_∞≤ TOL, we stop the iterations and set u_ni^⋆ = u_ni^(m+1). Otherwise, we set u_ni^⋆ = u_ni^(M). 2. Correction: For the predicted u_ni^⋆, we compute the intermediate stages u̇_ni and q̇_ni as follows: { u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj, q_ni = q^n + τ∑_j=0^s-1 a_ijq̇_nj, u̇_ni = 𝒢 ( ℒu_ni + 2q_niδ W/δ u[u_ni^⋆] - 2q_ni∇·δ W/δ∇ u[u_ni^⋆] ), q̇_ni = ( δ W/δ u [u_ni^⋆], u̇_ni) + ( δ W/δ∇ u [u^⋆_ni], ∇u̇_ni ), . and then update u^n+1, q^n+1 by u^n+1 =u^n + τ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + τ∑_i=0^s-1 b_i q̇_ni. We show that Algorithm <ref> can be regarded as an ARK method for the following alternative reformulation of (<ref>). { w_t = 𝒢( ℒv + 2 r δ W/δ u[v] - 2 r ∇·δ W/δ∇ u[v] ), v_t = 𝒢( ℒv + 2r δ W/δ u[v] - 2r ∇·δ W/δ∇ u[v] ), u_t = 𝒢( ℒu + 2 q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v]), r_t = (δ W/δ u[v], v^ℒ_t) + (δ W/δ u[w], v^𝒩_t) + ( δ W/δ∇ u[v], ∇ v_t^ℒ) + ( δ W/δ∇ u[w], ∇ v^𝒩_t ), q_t = ( δ W/δ u[v], u_t ) + ( δ W/δ∇ u[v], ∇ u_t ), . where v_t^ℒ = 𝒢ℒ v, v^𝒩_t = 𝒢 (2 r δ W/δ u[v] - 2 r ∇·δ W/δ∇ u[v]). Let us explain the equivalence between (<ref>) and (<ref>). Subtracting the second from the first equation of (<ref>) and taking the initial condition into account, we obtain: v(t) = w(t), ∀ 0 < t ≤ T. Substituting this formula into the fourth equation of (<ref>), subtracting the third equation of (<ref>) from the second and the fifth equation of (<ref>) from the fourth, we obtain u_t - v_t = 𝒢( ℒ(u - v) + 2(q - r) (δ W/δ u[v] - ∇·δ W/δ∇ u[v]) ), q_t - r_t = ( δ W/δ u[v], u_t - v_t) + ( δ W/δ∇ u[v], ∇ u_t - ∇ v_t ).
Taking the inner products on both sides of the first and the second equations in (<ref>) with ℒ(u - v) + 2(q - r) (δ W/δ u[v] - ∇·δ W/δ∇ u[v]) and 2(q - r), respectively, adding the resulting equations together and integrating in time (using the zero initial differences) yield: 1/2 (u - v, ℒ(u - v)) + (q - r)^2 ≤ 0. This implies u(t) = v(t), q(t) = r(t). The remaining steps follow the proof of Lemma <ref>, which we omit here for brevity. Let z = (w, v, u, r, q)^T. We split the reformulated system (<ref>) as follows z_t = Φ_1(z) + Φ_2(z) + Φ_3(z) + Φ_4(z), where Φ_1(z) = ( [ 0; 𝒢ℒ v; 𝒢 ( ℒu + 2 q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v]); ( δ W/δ u[v], v^ℒ_t ) + ( δ W/δ∇ u[v], ∇ v_t^ℒ ); ( δ W/δ u[v], u_t ) + ( δ W/δ∇ u[v], ∇ u_t ) ]), Φ_2(z) = ( [ 0; 𝒢 ( 2 r δ W/δ u[v] - 2 r ∇·δ W/δ∇ u[v] ); 0; ( δ W/δ u[w], v^𝒩_t ) + ( δ W/δ∇ u[w], ∇ v_t^𝒩 ); 0 ]), Φ_3(z) = ( [ 𝒢ℒ v; 0; 0; 0; 0 ]), Φ_4(z) = ( [ 𝒢 ( 2 r δ W/δ u[v] - 2 r ∇·δ W/δ∇ u[v] ); 0; 0; 0; 0 ]). Employing four different RK methods to (<ref>) yields the following SAV-ARKII method {z_ni = z^n + τ∑_j=0^s-1( a_ijΦ_1(z_nj) + â_ijΦ_2(z_nj) + ã_ijΦ_3 (z_nj) + ā_ijΦ_4(z_nj) ), z^n+1 = z^n + τ∑_i=0^s-1 b_i (Φ_1 (z_ni) + Φ_2(z_ni) + Φ_3 (z_ni) + Φ_4(z_ni) ). . Furthermore, we rewrite the above scheme componentwise and employ the techniques outlined in Section <ref> to modify the obtained scheme, ultimately resulting in the SAV-MARKII method as shown below. [SAV-MARKII] We solve the intermediate stages from { w_ni = u^n + τ∑_j=0^s-1 ( ã_ijv̇_nj^ℒ + ā_ijv̇^𝒩_nj ), v_ni = u^n + τ∑_j=0^s-1 (a_ijv̇_nj^ℒ + â_ijv̇^𝒩_nj), r_ni = r^n + τ∑_j=0^s-1( a_ijṙ_nj^ℒ + â_ijṙ_nj^𝒩), u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj, q_ni = q^n + τ∑_j=0^s-1 a_ijq̇_nj, v̇_ni^ℒ = 𝒢ℒ v_ni, ṙ_ni^ℒ = ( δ W/δ u[v_ni], v̇_ni^ℒ) + (δ W/δ∇ u[v_ni], ∇v̇_ni^ℒ ), v̇_ni^𝒩 = 𝒢 ( 2r_niδ W/δ u[v_ni] - 2r_ni∇·δ W/δ∇ u[v_ni] ) , ṙ_ni^𝒩 = ( δ W/δ u[w_ni], v̇_ni^𝒩) + (δ W/δ∇ u[w_ni], ∇v̇_ni^𝒩 ), u̇_ni = 𝒢(ℒ u_ni + 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni]) , q̇_ni = (δ W/δ u [v_ni], u̇_ni ) + (δ W/δ∇ u [v_ni], ∇u̇_ni). . Then, we update u^n+1 = u^n + τ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + τ∑_i=0^s-1 b_i q̇_ni. Consider a SAV-RKPC(M) method associated with the fundamental RK method (A, b, c) of stage s. Then, it can be regarded as a SAV-MARKII method with the tableaux [ 𝐜 𝐀; 𝐛^T ] = [ 0 O O; 1_M ⊗ c O I_M ⊗ A; 0^T (𝐞_M ⊗ b)^T ], [ 𝐜̂ 𝐀̂; 𝐛̂^T ] = [ 0 O O; 1_M ⊗ c I_M ⊗ A O; 0^T (𝐞_M ⊗ b)^T ], [ 𝐜̃ 𝐀̃; 𝐛̃^T ] = [ 1_M ⊗ c O I_M ⊗ A; c O A; 0^T (𝐞_M ⊗ b)^T ], [ 𝐜̄ 𝐀̄; 𝐛̄^T ] = [ 1_M ⊗ c I_M ⊗ A O; c A O; 0^T (𝐞_M ⊗ b)^T ], where I_M represents the identity matrix, 𝐞_M = (0, 0, ⋯, 1)^T, and ⊗ denotes the Kronecker product. Notice that we have w_n,i+ms = v_n,i+(m+1)s and r_n,i+ms = q_n,i+(m+1)s in Algorithm <ref>. In addition, the intermediate stages of Algorithms <ref> and <ref> are related as follows: (u̇_ni^(m), q̇_ni^(m), q_ni^(m), u_ni^(m)) = (v̇^ℒ_n,i+ms+v̇^𝒩_n,i+ms, q̇_n,i+ms, q_n,i+ms, v_n,i+ms), u_ni^⋆ = v_n,i+Ms. By Theorem <ref>, the consistency error of the SAV-RKPC(M) method can be investigated straightforwardly via the order conditions of the generalized ARK methods. Readers are referred to <cit.> for details. Taking the fourth-order Gauss SAV-RKPC (SAV-GRK4PC) used in <cit.> as an example, the SAV-GRK4PC(1), SAV-GRK4PC(2), SAV-GRK4PC(3) methods arrive at second-, third- and fourth-order accuracy, respectively, which agrees with the numerical experiments reported in <cit.>.
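The Kronecker-product structure of these extended tableaux is easy to assemble programmatically. The following NumPy sketch (function and variable names are ours, for illustration only, and not code from the paper) builds the first two tableaux of the theorem from a base method (A, b, c) and an iteration count M:

import numpy as np

def savrkpc_tableaux(A, b, c, M):
    # Assemble the first two extended ARK tableaux of the theorem above
    # for a SAV-RKPC(M) method built on a base RK scheme (A, b, c).
    s = len(b)
    e_M = np.zeros(M); e_M[-1] = 1.0                 # e_M = (0, ..., 0, 1)^T
    c_ext = np.concatenate([np.zeros(s), np.kron(np.ones(M), c)])
    b_ext = np.concatenate([np.zeros(s), np.kron(e_M, b)])
    # first tableau: I_M (x) A on the diagonal of the iteration blocks
    A1 = np.zeros(((M + 1) * s, (M + 1) * s))
    A1[s:, s:] = np.kron(np.eye(M), A)
    # second tableau: I_M (x) A shifted one block to the left, so iteration
    # m+1 only uses stage values of iteration m (the linearly implicit
    # prediction structure)
    A2 = np.zeros(((M + 1) * s, (M + 1) * s))
    A2[s:, :M * s] = np.kron(np.eye(M), A)
    return c_ext, A1, A2, b_ext

For instance, with the two-stage Gauss method (s = 2) and M = 3, the routine returns 8 × 8 coefficient matrices, and the row sums of A1 reproduce c_ext, as expected for an internally consistent tableau.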
Although we have demonstrated that the SAV-GRK4PC(3) achieves fourth-order accuracy, it is advisable to carry out additional iterative steps in practical computations to guarantee the stability of the proposed method. § NUMERICAL EXPERIMENTS In this section, we demonstrate the effectiveness of our methods in solving the 2D AC, CH, and MBE equations. The spatial domain is Ω = (x_L, x_R) × (y_L, y_R), and periodic boundary conditions are employed in all examples. To guarantee both accuracy and efficiency, we use the Fourier pseudo-spectral method for spatial discretization. Let N_x and N_y be positive integers. The spatial domain is uniformly partitioned with step sizes h_x = (x_R - x_L)/N_x and h_y = (y_R - y_L)/N_y. We define Ω_N = { (x_i, y_j) | x_i = x_L + i h_x, y_j = y_L + j h_y }, and 𝕄_N denotes the space of periodic grid functions on Ω_N. We use the notations ∇_N, ∇_N ·, and Δ_N to represent the discrete gradient, divergence, and Laplace operators corresponding to the Fourier pseudo-spectral method, respectively. Readers are referred to <cit.> for details. Given u, v ∈𝕄_N, the discrete L^2 inner product, discrete L^2 and L^∞ norms are (u, v)_N = h_x h_y ∑_j=0^N_x-1∑_k=0^N_y-1 u_jk v_jk, ‖u‖_N = √((u, u)_N), ‖u‖_∞ = max_0 ≤ j ≤ N_x-1, 0 ≤ k ≤ N_y-1 |u_jk|. §.§ AC equation To validate the convergence results presented in Theorem <ref>, we consider the following AC equation u_t = ε^2 Δ u + u - u^3, which can be obtained by setting 𝒢 = -1 and ℱ[u] = ∫_Ωε^2/2 |∇ u|^2 + 1/4 (u^2 - 1)^2 d𝐱 in (<ref>). Applying the Fourier pseudo-spectral method to (<ref>), the fully discrete system of the AC equation is to find (u_ni, v_ni, q_ni) ∈𝕄_N ×𝕄_N ×ℝ and (u^n+1, q^n+1) ∈𝕄_N ×ℝ, such that v_ni = u^n +τ∑_j=0^s-1 (a_ijΔ_N v_nj - 2 â_ij q_nj W^'(v_nj)), u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj, q_ni = q^n + τ∑_j=0^s-1 a_ijq̇_nj, u^n+1 = u^n + τ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + τ∑_i=0^s-1 b_i q̇_ni, where u̇_ni = Δ_N u_ni - 2q_ni W^' (v_ni), q̇_ni = (W^'(v_ni), u̇_ni)_N, W^'(u) = F^'(u)/(2√( (F(u), 1 )_N +C_0 )). It is worth mentioning that the discrete operator Δ_N satisfies the summation-by-parts formula. By following the procedure outlined in the proofs of Theorems <ref> and <ref>, we can confirm the energy stability and solvability of the above fully discrete scheme. We set the computational domain as Ω = (0, 1)^2, the parameter as ε = 0.01, and the initial condition as u_0 = 0.1 sin (2 π x) sin (2 π y). Since the exact solution is unavailable, we use the solution obtained by the SAV-MDIARK(5,6,4) method with N = 512 and τ = 10^-4 at the final time T = 1 as a reference. Then, a refinement test in time is conducted with N = 128 and different time steps τ = 0.1 × 2^-k (k = 1,2,3,4,5). Figure <ref> displays the discrete L^2-norm error of the solution at T = 1 computed by various methods as a function of the time step size in the logarithmic scale. All the methods achieve their expected orders of accuracy. §.§ CH equation We consider the following Cahn-Hilliard model for immiscible binary fluids u_t = λΔ (-ε^2 Δ u + u^3 - u), where λ is a mobility parameter, and ε represents the width of the diffuse interface. The corresponding free energy functional is ℱ[u] = ∫_Ωε^2/2|∇ u|^2 + 1/4(u^2 - 1)^2 d𝐱. We introduce an auxiliary variable q = √(1/4∫_Ω (u^2 - 1 -κ)^2 d𝐱 + C|Ω|), where κ is a stabilization parameter. The energy functional (<ref>) is transformed into ℱ[u, q] = ε^2/2‖∇ u‖^2 + κ/2‖u‖^2 + q^2 - (κ^2 + 2κ + 4C)/4 |Ω|. (<ref>) is then reformulated into an equivalent model, as shown below { u_t = λΔ( -ε^2 Δ u + κ u + f_κ(u) q), q_t = 1/2(f_κ(u), u_t), . where f_κ(u) = (u^2 - 1 - κ)u/√(1/4∫_Ω (u^2 - 1 - κ)^2 d𝐱 + C|Ω|).
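To make the reformulated CH model concrete, the following sketch (names and default parameter values are ours, for illustration, and not code from the paper) evaluates the discrete SAV q and the nonlinearity f_κ(u) on a uniform periodic grid:

import numpy as np

def ch_sav_quantities(u, hx, hy, kappa=1.0, C=1.0):
    # Discrete SAV q and nonlinearity f_kappa(u) of the reformulated CH model.
    area = u.size * hx * hy                          # |Omega| on the grid
    integral = 0.25 * np.sum((u**2 - 1 - kappa)**2) * hx * hy
    q = np.sqrt(integral + C * area)                 # q = sqrt(1/4 int (u^2-1-kappa)^2 + C|Omega|)
    f_kappa = (u**2 - 1 - kappa) * u / q             # consistent with q_t = 1/2 (f_kappa(u), u_t)
    return q, f_kappa

Dividing by the full square root (rather than freezing it) is what keeps the chain rule q_t = 1/2 (f_κ(u), u_t) exact at the continuous level.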
We perform convergence tests in time by considering (<ref>) in the spatial domain Ω = (0, 2π)^2 with the parameters λ = 0.01 and ε = 1. As the exact solution of (<ref>) is not available, we construct a manufactured solution u(x, y, t) = sin(x)sin(y)cos(t) to (<ref>) by introducing a nonhomogeneous source term on the right-hand side of (<ref>). We use 128 × 128 wave numbers for the spatial discretization. Subsequently, (<ref>) is integrated using various methods until T = 1 with different time steps τ = 0.2 × (2k)^-1 (k = 1,2,3,4,5,6,7,8). The numerical solution at the final time is recorded to evaluate errors in the refinement tests. Figure <ref> plots the L^2 and L^∞ errors of different methods against the time step in a logarithmic scale. All the methods achieve the expected convergence rate. Among the second-order schemes, the SAV-MDIARK(2,2,2) method exhibits higher accuracy than the SAV-MCNRK2 method. Additionally, when γ = (3 + √3)/6, the SAV-MDIARK(2,2,2) scheme performs better than when γ = 1/4. Although the latter preserves the dissipative rate, the former is more stable in practice. Among the third-order schemes, the SAV-MDIARK(4,5,3) exhibits the highest accuracy and unexpectedly results in superconvergence in this test. This phenomenon can be attributed to the smoothness of the provided solution. Further accuracy tests of this method will be conducted in subsequent examples. When investigating the fourth-order schemes, we present the results of both the SAV-MARK methods and their corresponding SAV-ARK methods. Notably, the convergence rate of the SAV-MARK methods is consistent with that of the SAV-ARK methods, confirming that the modified Algorithm <ref> possesses the same accuracy as Algorithm <ref>. To thoroughly investigate the performance of the proposed schemes, we consider the CH equation (<ref>) with the initial condition u_0(x, y) = 0.05 ( cos(6 π x)cos(8 π y) + (cos(8 π x)cos(6 π y))^2 + cos(2 π x - 10 π y)cos(4 π x - 2π y)). We specify the spatial domain Ω = (0, 2π)^2, and set the parameters in (<ref>) as λ = 1, ε = 0.01. The spatial discretization is carried out using 128 × 128 Fourier modes. Several methods are employed to solve the governing system until the final time T = 0.1. It should be noted that, due to the chosen initial condition (<ref>), the solution of (<ref>) undergoes rapid changes at the beginning. Therefore, if a method is not stable, it will fail to depict the solution accurately at a large time step size. As a benchmark, Figure <ref> illustrates the snapshot obtained by the SAV-MDIARK(5,6,4) method with a step size of τ = 1 × 10^-5. During the test, the time step is progressively reduced until the correct solution snapshot is obtained, and the maximum step size that yields the correct solution profile for each method is recorded. To facilitate comparisons, we display numerical results for several existing methods in Figure <ref>, including the SAV-CN method, the fully implicit second-order convex splitting scheme (CS2), and the SAV-GRK4PC(5). It can be seen that the SAV-CN method fails to produce a correct result at a large time step, while the convex splitting scheme is capable of producing an accurate result with a relatively large time step. Due to the high precision and stability achieved through multiple iterations, the SAV-GRK4PC(5) can also compute a correct solution with a larger time step. The numerical results obtained by the proposed schemes are presented in Figure <ref>.
It is evident that our second- and third-order schemes achieve accurate results at larger step sizes compared to the SAV-CN and CS2 methods, and even outperform the SAV-GRK4PC(5) method. Among these methods, the SAV-MDIARK(5,4,3) method performs best, yielding the correct solution at a step size of τ = 5.2 × 10^-4. Although the proposed fourth-order methods require smaller step sizes to obtain accurate results, their step sizes remain competitive with those used in other publications, even though only the order of accuracy was considered during their construction. In addition to verifying the effectiveness of the proposed schemes through profiles and step sizes, we also present the evolution of the following discrete free energy ℱ^n_N = ε^2/2‖∇_N u^n‖_N^2 + 1/4(((u^n)^2 - 1)^2, 1)_N. It is worth noting that although the above methods have only been proven to dissipate the quadratic energy, we still investigate the original discrete energy in our experiments. Figure <ref> summarizes the evolution of the free energy for different numerical schemes under different time steps. It can be observed that the SAV-CN method fails to dissipate the original energy at larger step sizes due to its lower precision and weaker stability, while all our methods monotonically decrease the discrete free energy. This indicates that the proposed methods are robust and unconditionally energy-stable, as predicted by the theoretical results. §.§ MBE equation To further demonstrate the accuracy and robustness of the proposed schemes, let us consider the following MBE model u_t = - λ (δΔ^2 u - ∇· f(∇ u)), which is the L^2 gradient flow with respect to the following free energy functional ℱ[u] = ∫_Ωδ/2 |Δ u|^2 + F(∇ u) d𝐱. In (<ref>) and (<ref>), u represents the height function of a thin film in a co-moving frame, δ is a positive constant, and f = F^'. If we set F(∇ u) = -1/2ln(1 + |∇ u|^2), (<ref>) is usually called the MBE equation without slope selection. Correspondingly, (<ref>) is named the MBE equation with slope selection when taking F(∇ u) = 1/4(|∇ u|^2 - 1)^2. Introducing an SAV q = √(1/4∫_Ω (|∇ u|^2 - 1 - κ)^2 d𝐱 + C|Ω| ), the free energy is modified into ℱ[u, q] = δ/2‖Δ u‖^2 + κ/2‖∇ u‖^2 + q^2 - (κ^2 + 2κ + 4C)/4|Ω|. Correspondingly, (<ref>) is reformulated into { u_t = -λ( δΔ^2 u - κΔ u - ∇· (f_κ(∇ u) q) ), q_t = 1/2 (f_κ(∇ u), ∇ u_t), . where f_κ(∇ u) = (|∇ u|^2 - 1 - κ)∇ u/√(1/4∫_Ω ( |∇ u|^2 - 1 - κ )^2 d𝐱 + C|Ω| ). We remark that although the nonlinearity of the MBE equation without slope selection seems to be unbounded, the SAV can still be introduced as q = √(∫_Ω (κ/2|∇ u|^2 -1/2ln(1 + |∇ u|^2)) d𝐱 + C|Ω| ). Due to the Lipschitz continuity of F, there is no difficulty in confirming that κ/2|∇ u|^2 -1/2ln(1 + |∇ u|^2) > 0, as soon as κ≥1/8. We again begin by performing a refinement test in time. We specify the computational domain Ω = (0, 2π)^2 and consider a classical example with the initial condition u_0(x, y) = 0.1(sin3xsin5y + sin5xsin5y), which was studied in <cit.> to observe the morphological instability due to the nonlinear interaction. The parameters are λ = 1 and δ = 0.1. Since the exact solution of (<ref>) is not available, the SAV-MDIARK(5,6,4) method is employed to compute a reference solution of (<ref>) using 256 × 256 Fourier modes and a step size of τ = 5 × 10^-6. Then the refinement test in time is carried out by varying the temporal step size τ = 2^3-k× 10^-4 (k = 0,1,⋯,6). The space is discretized using 128 × 128 Fourier modes.
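Before turning to the results, we note that the gradient inside f_κ(∇ u) can be formed spectrally. A hedged sketch (our own helper and default parameters, assuming the (0, 2π)^2 domain used above):

import numpy as np

def mbe_f_kappa(u, kappa=2.0, C=1.0):
    # f_kappa(grad u) of the slope-selection MBE reformulation, with FFT
    # derivatives on a uniform n x n periodic grid over (0, 2pi)^2.
    n = u.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)                 # integer wave numbers
    kx, ky = np.meshgrid(k, k, indexing='ij')
    u_hat = np.fft.fft2(u)
    ux = np.real(np.fft.ifft2(1j * kx * u_hat))      # u_x
    uy = np.real(np.fft.ifft2(1j * ky * u_hat))      # u_y
    g2 = ux**2 + uy**2                               # |grad u|^2
    cell = (2 * np.pi / n)**2                        # grid cell area
    q = np.sqrt(0.25 * np.sum((g2 - 1 - kappa)**2) * cell + C * (2 * np.pi)**2)
    return (g2 - 1 - kappa) * ux / q, (g2 - 1 - kappa) * uy / q, q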
The discrete L^2 and L^∞ errors between the reference and numerical solutions at T = 0.1 are recorded. Figure <ref> displays the solution error at T = 0.1 as a function of the step size in the logarithmic scale. It can be observed that all methods attain their corresponding convergence rates. Moreover, the superconvergence of the SAV-MDIARK(4,5,3) disappears under this circumstance, suggesting that the results appearing in Figure <ref> are coincidental. Then, we simulate (<ref>) under the same initial condition until T = 30. Figure <ref> displays the height profiles solved by the SAV-MDIARK(5,6,4) with τ = 5 × 10^-3 at different times t = 0, 0.05, 2.5, 5.5, 8, 30. The results agree with those reported in <cit.>. We remark that the simulation results of the other schemes are indistinguishable from these and are thus omitted due to space limitations. Figure <ref> summarizes the evolution of the free energy from t = 0 to t = 15 solved by different methods with different time steps. Notice that the energy curves for the fully implicit backward difference (BDF) methods, which are recognized to have good stability, are also plotted for comparison. For the third- and fourth-order schemes, the energy curves predicted by the proposed methods are comparable with those predicted by the BDF methods. Moreover, among the second-order schemes, the proposed methods provide more accurate energy predictions than the BDF2 method when τ = 3.125× 10^-2. This suggests that our methods are comparable to the fully discrete BDF methods in terms of stability. However, it should be noted that our methods are linearly implicit and only require the solution of a linear system at each step. Table <ref> lists the CPU times for these methods when conducting the above experiments with a time step of τ = 1× 10^-2. Despite the ARK methods needing to solve more intermediate stages, particularly for the higher-order schemes, our proposed methods are more efficient than the BDF methods. § CONCLUSION Combining the SAV approach and ARK methods, we develop a novel paradigm for constructing linearly implicit and high-order unconditionally energy-stable methods for general gradient flows. The proposed schemes are rigorously proved to be unconditionally energy-stable, uniquely solvable, and convergent. We also reveal that each SAV-RKPC method can be regarded as a SAV-ARK method, and the orders of the SAV-RKPC methods are then confirmed theoretically using the order conditions of ARK methods. Numerical examples demonstrate the efficiency and robustness of the proposed methods. § ACKNOWLEDGMENTS This work is supported by the National Key Research and Development Project of China (2018YFC1504205), the National Natural Science Foundation of China (12171245, 11971242). § EXAMPLES OF SOME SAV-ARK METHODS In this section, we list the SAV-ARK methods utilized in the above contexts. We will refer to a SAV-DIARK (or SAV-MDIARK) method with an s-stage implicit part, an r-stage explicit part and order p as SAV-DIARK(s,r,p) (resp. SAV-MDIARK(s,r,p)). §.§ SAV-DIARK(2,2,2) A = [ γ 0; 1-2γ γ ], b = [ 1/2 1/2 ]^T, Â = [ 0 0; 1 0 ], b̂ = b. The discriminant of the implicit part of the above method reads M = (γ - 1/4) [ 1 -1; -1 1 ]. Therefore, the implicit part of the method is algebraically stable iff γ≥1/4. §.§ SAV-DIARK(2,3,3) A = [ 0 0 0; 0 (3 + √3)/6 0; 0 -√3/3 (3 + √3)/6 ], b = [ 0 1/2 1/2 ]^T, Â = [ 0 0 0; (3 + √3)/6 0 0; (-3 + √3)/6 (3 - √3)/3 0 ], b̂ = b. The eigenvalues of the diagonally implicit part are [1.0774, 0, 0, 0].
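The algebraic stability claims in this appendix can be sanity-checked numerically. The small sketch below (our own helper, not from the paper) forms M = BA + A^TB - bb^T for the implicit part of SAV-DIARK(2,2,2) and reproduces the discriminant computed above:

import numpy as np

def stability_matrix(A, b):
    # M = B A + A^T B - b b^T with B = diag(b); the implicit tableau is
    # algebraically stable iff b >= 0 and M is positive semi-definite.
    B = np.diag(b)
    return B @ A + A.T @ B - np.outer(b, b)

gamma = 0.3                                          # any gamma >= 1/4 should pass
A = np.array([[gamma, 0.0], [1.0 - 2.0 * gamma, gamma]])
b = np.array([0.5, 0.5])
M = stability_matrix(A, b)                           # equals (gamma - 1/4) * [[1, -1], [-1, 1]]
print(np.linalg.eigvalsh(M) >= -1e-12)               # PSD check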
§.§ SAV-DIARK(3,4,3) A = [ 0 0 0 0; 0 σ 0 0; 0 1/2 - σ σ 0; 0 2σ 1 - 4σ σ ], b = [ 0 μ 1-2μ μ ]^T, Â = [ 0 0 0 0; σ 0 0 0; 0 1/2 0 0; 0 (9μσ - 3μ - 3σ + 1)/(3μ(2σ - 1)) 1 - σ - (9μσ - 3μ - 3σ + 1)/(3μ(2σ - 1)) 0 ], b̂ = b, where σ = (√3/3)cos(π/18) + 1/2, μ = 1/(6(2σ - 1)^2). Then, the eigenvalues of the diagonally implicit part are [1.5530, 0, 0, 0]. §.§ SAV-DIARK(5,6,4) A = [ 0 0 0 0 0 0; 0 3/8 0 0 0 0; 3/8 0 3/16 0 0 0; 0 0 0 σ 0 0; 0 0 0 1/2 - σ σ 0; 0 0 0 2σ 1 - 4σ σ ], b = [ 0 0 0 μ 1-2μ μ ]^T, Â = [ 0 0 0 0 0 0; 3/8 0 0 0 0 0; 0 9/16 0 0 0 0; 25/(162μ) (-104σμ^2 + 6μ^2 + 20μ)/(108μ^2 - 90μ + 9) (112σμ^2 + 36μ^2 - 37μ)/(324μ^2 - 270μ + 27) 0 0 0; 0 0 1/2 0 0 0; 0 (56σμ^2 - 2μ^2 - 12μ)/(36μ^2 - 30μ + 3) (16σμ^2 - 4μ^2 + 3μ)/(36μ^2 - 30μ + 3) 0 0 0 ], b̂ = b. The eigenvalues of the diagonally implicit part are [1.5530, 0, 0, 0, 0, 0]. §.§ SAV-GARK(4,5,4) A = [ 0 0 0 0 0; 0 1/4 0 0 0; 1/4 0 1/4 0 0; 0 0 0 1/4 1/4-√3/6; 0 0 0 1/4+√3/6 1/4 ], b = [ 0 0 0 1/2 1/2 ]^T, Â = [ 0 0 0 0 0; 1/4 0 0 0 0; 0 1/2 0 0 0; 1/6 0 1/3-√3/6 0 0; 1/6 0 1/3+√3/6 0 0 ], b̂ = b. The implicit part of the above method is based on the Gauss RK method (see <cit.>). The eigenvalues of Â are [0, 0, 0, 0, 0]. sav_rk_extra G. Akrivis, B. Li, and D. Li. Energy-decaying extrapolated RK-SAV methods for the Allen–Cahn and Cahn–Hilliard equations. SIAM J. Sci. Comput., 41(6):A3703–A3727, 2019. 001 M. Ambati, T. Gerasimov, and L. De Lorenzis. A review on phase-field models of brittle fracture and a new fast hybrid formulation. Comput. Mech., 55:383–405, 2015. burrage_efficiently_1982 K. Burrage. Efficiently implementable algebraically stable Runge–Kutta methods. SIAM J. Numer. Anal., 19(2):245–258, 1982. burrage_stability_1979 K. Burrage and J. C. Butcher. Stability criteria for implicit Runge–Kutta methods. SIAM J. Numer. Anal., 16(1):46–57, 1979. intr_ac J. W. Cahn and S. M. Allen. A microscopic theory for domain wall motion and its experimental verification in Fe-Al alloy domain growth kinetics. J. Phys. Colloq, 38:C7–51–C7–54, 1977. intr_ch J. W. Cahn and J. E. Hilliard. Free energy of a nonuniform system. I. Interfacial free energy. J. Chem. Phys., 28:258–267, 1958. mbe_leapfrog L. Chen, J. Zhao, and Y. Gong. A novel second-order scheme for the molecular beam epitaxy model with slope selection. Commun. Comput. Phys., 25(4):1024–1044, 2019. mbe_etd3 K. Cheng, Z. Qiao, and C. Wang. A third order exponential time differencing numerical scheme for No-Slope-Selection epitaxial thin film model with energy stability. J. Sci. Comput., 81:154–185, 2019. lag Q. Cheng, C. Liu, and J. Shen. A new Lagrange multiplier approach for gradient flows. Comput. Methods Appl. Mech. Engrg., 367:113030, 2020. GSAV1 Q. Cheng, C. Liu, and J. Shen. Generalized SAV approaches for gradient systems. J. Comput. Appl. Math., 394:113532, 2021. intr_other1 Q. Cheng, X. Yang, and J. Shen. Efficient and accurate numerical schemes for a hydro-dynamically coupled phase field diblock copolymer model. J. Comput. Phys., 341:44–60, 2017. tang_splitting Y. Cheng, A. Kurganov, Z. Qu, and T. Tang. Fast and stable explicit operator splitting methods for phase-field models. J. Comput. Phys., 303:45–65, 2015. intr_mbe S. Clarke and D. Vvedensky. Origin of reflection high-energy electron-diffraction intensity oscillations during molecular-beam epitaxy: A computational modeling approach. Phys. Rev. Lett., 58:2235–2238, 1987. du_2019 Q. Du, L. Ju, X. Li, and Z. Qiao. Maximum principle preserving exponential time differencing schemes for the nonlocal Allen-Cahn equation. SIAM J. Numer.
Anal., 57:875–898, 2019. du_2021 Q. Du, L. Ju, X. Li, and Z. Qiao. Maximum bound principles for a class of semilinear parabolic equations and exponential time-differencing schemes. SIAM Rev., 63:317–359, 2021. du_jsc_analysis Q. Du, L. Ju, and J. Lu. Analysis of fully discrete approximations for dissipative systems and application to time-dependent nonlocal diffusion problems. J. Sci. Comput., 78(3):1438–1466, 2019. convex_splitting1 C. Elliot and A. Stuart. The global dynamics of discrete semilinear parabolic equations. SIAM J. Numer. Anal., 30:1622–1663, 1993. intr_crystal M. Elsey and B. Wirth. A simple and efficient scheme for phase field crystal simulation. ESIAM: M2AN, 47:1413–1432, 2013. grad_stable_ch D. J. Eyre. Unconditionally Gradient Stable Time Marching the Cahn-Hilliard Equation. Mater. Res. Soc. Sympos. Proc., 529:39–46, 1998. cn_ab X. Feng, T. Tang, and J. Yang. Stabilized Crank-Nicolson/Adams-Bashforth schemes for phase field models. East Asian J. Appl. Math., 3(1):59–80, 2013. dvd2 D. Furihata. A stable and conservative finite difference scheme for the Cahn-Hilliard equation. Numer. Math., 87:675–699, 2001. dvd D. Furihata and T. Matsuo. Discrete Variational Derivative Method: A Structure-Preserving Numerical Method for Partial Differential Equations. Chapman and Hall/CRC, 1st edition, 2010. svm Y. Gong, Q. Hong, and Q. Wang. Supplementary variable method for thermodynamically consistent partial differential equations. Comput. Methods Appl. Mech. Engrg., 381:113746, 2021. gong_nls Y. Gong, Q. Wang, Y. Wang, and J. Cai. A conservative Fourier pseudo-spectral method for the nonlinear Schrödinger equation. J. Comput. Phys., 328:354–370, 2017. ieq_gong Y. Gong, J. Zhao, and Q. Wang. Arbitrarily high-order linear energy stable schemes for gradient flow models. J. Comput. Phys., 419:109610, 2020. ieq1 F. Guillén-González and G. Tierra. On linear schemes for a Cahn–Hilliard diffuse interface model. J. Comput. Phys., 234:140–171, 2013. ac_hbvm F. Guo and W. Dai. Arbitrarily high-order accurate and energy-stable schemes for solving the conservative Allen–Cahn equation. Numer. Methods Partial Differential Eq., 39:187–212, 2022. 002 Z. Guo and P. Lin. A thermodynamically consistent phase-field model for two-phase flows with thermocapillary effects. J. Fluid Mech., 766:226–271, 2015. hairer_book E. Hairer, C. Lubich, and G. Wanner. Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations. Springer-Verlag, Berlin, 2nd edition, 2006. hou_leapfrog T. Hou, D. Xiu, and W. Jiang. A new second-order maximum-principle preserving finite difference scheme for Allen-Cahn equations with periodic boundary conditions. Appl. Math. Lett., 104:106256, 2020. sav_ns_err F. Huang and J. Shen. Stability and error analysis of a class of high-order IMEX schemes for Navier–Stokes equations with periodic boundary conditions. SIAM J. Numer. Anal., 59:2926–2954, 2021. dvd_high J. Huang. Energy stable schemes for gradient flows based on the DVD method. arXiv:2210.11960v1, 2022. dvd1 T. Ide. Some energy preserving finite element schemes based on the discrete variational derivative method. Appl. Math. Comput., 175:277–296, 2006. relaxation_sav M. Jiang, Z. Zhang, and J. Zhao. Improving the accuracy and consistency of the scalar auxiliary variable (SAV) method with relaxation. J. Comput. Phys., 456:110954, 2022. ESAV_AC L. Ju, X. Li, and Z. Qiao. Stabilized Exponential-SAV schemes preserving energy dissipation law and maximum bound principle for the Allen–Cahn type equations. J. Sci. Comput., 92(2):66, 2022. ju_jcp_if L. Ju, X. Li, Z. Qiao, and J. Yang.
Maximum bound principle preserving integrating factor Runge-Kutta methods for semilinear parabolic equations. J. Comput. Phys., 439(110405):18, 2021. ju_mbe L. Ju, X. Li, Z. Qiao, and H. Zhang. Energy stability and error estimates of exponential time differencing schemes for the epitaxial growth model without slope selection. Math. Comp., 87(312):1859–1885, 2017. mbe_model1 B. Li and J.-G. Liu. Thin film epitaxy with or without slope selection. Eur. J. Appl. Math., 14:713–743, 2003. mbe_model2 B. Li and J.-G. Liu. Stability analysis of large time-stepping methods for epitaxial growth models. SIAM J. Numer. Anal., 44:1759–1779, 2006. sav_li D. Li and W. Sun. Linearly implicit and high-order energy-conserving schemes for nonlinear wave equations. J. Sci. Comput., 83:17, 2020. sav_nlsw X. Li, Y. Gong, and L. Zhang. Linear high-order energy-preserving schemes for the nonlinear Schrödinger equation with wave operator using the scalar auxiliary variable approach. J. Sci. Comput., 88:25, 2021. sav_ns_err2 X. Li, J. Shen, and Z. Liu. New SAV-pressure correction methods for the Navier-Stokes equations: stability and error analysis. Math. Comp., 92(340):141–167, 2022. liao_bdf H. Liao, T. Tang, and T. Zhou. On energy stable, maximum-principle preserving, second-order BDF scheme with variable steps for the Allen-Cahn equation. SIAM J. Numer. Anal., 58:2294–2314, 2020. ESAV Z. Liu and X. Li. The exponential scalar auxiliary variable (E-SAV) approach for phase field models and its explicit computing. SIAM J. Sci. Comput., 42:B630–B655, 2020. relaxation_lag Z. Liu and X. Li. A novel Lagrange multiplier approach with relaxation for gradient flows. arXiv:2210.02723v1 [math.NA], 2022. lu_lsp L. Lu, Q. Wang, Y. Song, and Y. Wang. Local structure-preserving algorithms for the molecular beam epitaxy model with slope selection. Am. J. Math, 26:4745–4765, 2021. 003 W. Marth, S. Aland, and A. Voigt. Margination of white blood cells: a computational approach by a hydrodynamic phase field model. J. Fluid Mech., 790:389–406, 2016. qiao_mixed_fe Z. Qiao, T. Tang, and H. Xie. Error analysis of a mixed finite element method for the molecular beam epitaxy model. SIAM J. Numer. Anal., 53:184–205, 2015. ark_general A. Sandu and M. Günther. A generalized-structure approach to additive Runge–Kutta methods. SIAM J. Numer. Anal., 53(1):17–42, 2015. sav_shen J. Shen, J. Xu, and J. Yang. The scalar auxiliary variable (SAV) approach for gradient flows. J. Comput. Phys., 353:407–416, 2018. sav_shen_siam J. Shen, J. Xu, and J. Yang. A new class of efficient and robust energy stable schemes for gradient flows. SIAM Rev., 61:474–506, 2019. csrk J. Shin, H. G. Lee, and J.-Y. Lee. Unconditionally stable methods for gradient flow using convex splitting Runge-Kutta scheme. J. Comput. Phys., 347:367–381, 2017. tan_jcp_msrksav Z. Tan and H. Tang. A general class of linear unconditionally energy stable schemes for the gradient flows. J. Comput. Phys., 464(111372):32, 2022. tang_imex T. Tang and J. Yang. Implicit-explicit scheme for the Allen-Cahn equation preserves the maximum principle. J. Comput. Math., 34:471–481, 2016. intr_other2 C.-H. Teng, I.-L. Chern, and M.-C. Lai. Simulating binary fluid-surfactant dynamics by a phase field model. Discrete Contin. Dyn. Syst. Ser. B, 4(17):1289–1307, 2012. 004 A. Wheeler, W. Boettinger, and G. McFadden. Phase-field model for isothermal phase transitions in binary alloys. Phys. Rev. A, 45:7424–7438, 1992. sav_ch3 J. Yang, J. Wang, Z. Tan, and J. Kim.
Efficient IMEX and consistently energy-stable methods of diffuse-interface models for incompressible three-component flows. Comput. Phys. Commun., 282, 2023. ieq3 X. Yang. Linear, first and second-order, unconditionally energy stable numerical schemes for the phase field model of homopolymer blends. J. Comput. Phys., 327:294–316, 2016. ieq4 X. Yang and L. Ju. Efficient linear schemes with unconditional energy stability for the phase field elastic bending energy model. Comput. Methods Appl. Mech. Eng., 135:691–712, 2017. ieq2 X. Yang, J. Zhao, Q. Wang, and J. Shen. Numerical approximations for a three components Cahn–Hilliard phase-field model based on the invariant energy quadratization method. Math. Models Methods Appl. Sci., 27(11):1993–2030, 2017. imex_fac H. Zhang, J. Yan, X. Qian, X. Gu, and S. Song. On the preserving of the maximum principle and energy stability of high-order implicit-explicit Runge-Kutta schemes for the space-fractional Allen-Cahn equation. Numer. Algorithms, 88(3):1309–1336, 2021. GSAV2 Y. Zhang and J. Shen. A generalized SAV approach with relaxation for dissipative systems. J. Comput. Phys., 464:111311, 2022. sav_chhs_err N. Zheng and X. Li. Error analysis of the SAV Fourier-spectral method for the Cahn-Hilliard-Hele-Shaw system. Adv. Comput. Math., 47(71), 2021.
http://arxiv.org/abs/2307.05143v1
20230711094754
Process-Algebraic Models of Multi-Writer Multi-Reader Non-Atomic Registers
[ "Myrthe Spronck", "Bas Luttik" ]
cs.LO
[ "cs.LO", "cs.DC", "F.3.1" ]
We present process-algebraic models of multi-writer multi-reader safe, regular and atomic registers. We establish the relationship between our models and alternative versions presented in the literature. We use our models to formally analyse by model checking to what extent several well-known mutual exclusion algorithms are robust for relaxed atomicity requirements. Our analyses refute correctness claims made about some of these algorithms in the literature. § INTRODUCTION The mutual exclusion problem was first outlined by Dijkstra <cit.>. Given n threads executing some code with a special section called the “critical section”, the problem is to ensure that at any one time at most one of the threads is executing its critical section. Dijkstra explicitly states that communication between threads should be done through shared registers, and that reading from and writing to these registers should be considered atomic operations; when two threads simultaneously interact with the register, be it through reading or writing, the register behaves as though these operations took place in some total order. Lamport argued that solutions to the mutual exclusion problem that assume atomicity of register operations do not fundamentally solve it <cit.>. After all, implementing atomic operations would require some form of mutual exclusion at a lower level. Many algorithms have been proposed that solve the mutual exclusion problem without requiring atomicity of register operations, most famously Lamport's own Bakery algorithm <cit.>. Analysing distributed algorithms using non-atomic registers for communication between threads can be difficult, and correctness proofs are error-prone. Due to the vast number of execution paths of distributed algorithms, especially when overlapping register operations need to be taken into account, manual correctness proofs are likely to miss issues. One is better off using computer tools (e.g., model checkers or theorem provers) to support correctness claims with a detailed and preferably exhaustive analysis. This introduces the need for formal models of non-atomic registers. Lamport proposed a general mathematical formalism for reasoning about the behaviour of concurrent systems that do not rely on the atomicity of operations, which he then used to analyse the correctness of four solutions to the mutual exclusion problem not relying on atomicity <cit.>. In <cit.>, he studies in more detail the notion of single-writer multi-reader (SWMR) non-atomic register to implement communication between concurrent threads of computation; there, he distinguishes two variants, which he refers to as safe and regular. When a read operation to a SWMR safe register does not overlap with any write operations, then it will return the value stored in the register, but when it does overlap with a write operation then it may return a completely arbitrary value in the domain of the register. A SWMR regular register is a bit less erratic in the sense that a read operation overlapping with write operations will at least return one of the values actually being written. Raynal presented a straightforward generalisation of the notion of SWMR safe register to the multi-writer case <cit.>.
How the notion of SWMR regular register should be generalised to the multi-writer case, however, is less obvious. Shao et al. discuss four possibilities <cit.>. The formalisms in <cit.> for studying the behaviour of non-atomic registers are not directly amenable to analysing the correctness of distributed algorithms by explicit-state model checking, e.g., using the mCRL2 toolset <cit.>. In fact, it is not clear whether the four variants of MWMR regular registers presented in <cit.> will lead to a finite-state model even if the number of readers and writers and the set of data values of the register are finite. In <cit.>, Lamport demonstrates a method of modelling SWMR safe registers through repeatedly writing arbitrary values before settling on the desired value, but this approach does not generalise to multi-writer registers. The main contribution of this paper is to present process-algebraic models of multi-writer multi-reader safe, regular and also atomic registers that can be directly used in mCRL2 to analyse the correctness of distributed algorithms. We have used our process-algebraic models to analyse to what extent various mutual exclusion algorithms are robust for relaxed atomicity requirements. We find that Peterson's algorithm <cit.> no longer guarantees mutual exclusion if the atomicity requirement is relaxed for the turn register. A variant of Peterson's algorithm presented in <cit.> does guarantee mutual exclusion even if registers are only safe. The variant presented in <cit.>, however, does not guarantee mutual exclusion with regular registers, despite a claim that it does. We also find that some of the algorithms proposed in <cit.> do not guarantee mutual exclusion for regular registers, which seems to contradict claims that they are immune to the problem of flickering bits during writes. When analysing Lamport's 3-bit algorithm <cit.> we discovered that its mutual exclusion guarantee crucially depends on how one of the more complex statements of the algorithm is implemented. Finally, we confirm that Aravind's BLRU algorithm <cit.>, Dekker's algorithm <cit.>, Dijkstra's algorithm <cit.> and Knuth's algorithm <cit.> guarantee mutual exclusion even with safe registers. This paper is organised as follows. In <ref> we present some basic definitions pertaining to SWMR registers, including formalisations of Lamport's notions of SWMR safe, regular and atomic registers. In <ref> we present and discuss our process-algebraic definitions of MWMR safe, regular and atomic registers, and establish formal relationships with their SWMR counterparts. In <ref> we compare our notion of MWMR regular register with the variants of MWMR regular registers proposed by <cit.>. In <ref> we report on our analyses of the various mutual exclusion algorithms. Finally, we present conclusions and some ideas for future work in <ref>. § SINGLE-WRITER MULTI-READER REGISTERS The definitions presented in this section are adapted from <cit.> and <cit.>. We consider n threads operating on a register with values in a finite set 𝔻 of register values; the initial value of the register will be denoted by d_𝑖𝑛𝑖𝑡. Threads are identified by a natural number in the set 𝕋 = {0,…,n-1}. A read operation by thread i∈𝕋 on the register, with return value d∈𝔻, is a sequence [i]d=[i][i]d consisting of an invocation [i] (for “thread i starts to read”), and a matching response [i]d (for: “the read by thread i finishes with return value d”).
A write operation of thread i on the register, with write value d, is a sequence [i]d=[i]d[i] consisting of an invocation [i]d (for: “thread i starts to write value d”) and a matching response [i] (for: “the write by thread i finishes”). An operation of thread i is either a read operation or a write operation of that thread. For every i∈𝕋, let A_i={[i],[i]d,[i]d,[i] | d∈𝔻}, and let A=⋃_i∈𝕋 A_i. If σ is a sequence of elements of A, then we denote by σi the subsequence of σ consisting of the elements in A_i. A schedule on a register is a finite or infinite sequence σ of elements of A such that σi consists of alternating invocations and matching responses, beginning with an invocation, and, if σi is finite, ending with a response. Note that, by these requirements and our definition of the notion of operation, σi can then be obtained as the concatenation of read and write operations o_0o_1o_2… executed by thread i.[The same operation may occur multiple times in σi. Henceforth, when we consider an operation in σi we actually mean to refer to a specific occurrence in σi of the operation. To disambiguate between two different occurrences of the same operation o we could, e.g., annotate each occurrence of o with its position in σi. We will not do so explicitly, because it would unnecessarily clutter the presentation. But the reader should keep in mind that, whenever we refer to an operation in a schedule σ, we actually mean to refer to a particular occurrence of that operation in σi.] We shall denote by σ,i the set of all operations executed by thread i (i.e., σ,i={o_0,o_1,o_2,…}) and by σ the set of all operations executed by any of the threads. It is technically convenient to include in σ a special write operation w_𝑖𝑛𝑖𝑡 that writes the initial value of the register. Then σ={w_𝑖𝑛𝑖𝑡}∪⋃_i∈𝕋σ,i. We also use σ and σ for the subsets of σ respectively consisting of the read operations and the write operations only. A schedule σ induces a partial order on σ: if o,o'∈σ, then we write o <_σ o' if, and only if, the response of o precedes the invocation of o' in σ. We stipulate that w_𝑖𝑛𝑖𝑡 <_σ o for all o∈σ\{w_𝑖𝑛𝑖𝑡}. Let r∈σ be a read operation and let w∈σ be a write operation. We say that w is fixed for r if w <_σ r; σ,r denotes the set of all writes that are fixed for r. We say that w is relevant for r if r≮_σw; σ,r denotes the set of all writes in σ that are relevant for r. Note that, by the inclusion of w_𝑖𝑛𝑖𝑡, the sets σ,r and σ,r are non-empty for all r∈σ. We say that r∈σ can read from w∈σ if w is relevant for r and there does not exist w'∈σ such that w<_σw'<_σr. An operation o has overlapping writes if there exists w∈σ such that o≮_σw and w≮_σo. In <cit.>, a register model is defined as a set of schedules satisfying certain conditions. Restricting attention to single-writer multi-reader (SWMR) registers only, Lamport considers three register models: safe, regular and atomic <cit.>. We proceed to define Lamport's models by formulating conditions on single-writer schedules, i.e., schedules in which all write operations are by one particular thread. If σ is a single-writer schedule, then, since a write cannot have overlapping writes, every non-empty finite set W of writes has a <_σ-maximum, i.e., an element w∈ W such that w'<_σw for all w'∈ W∖{w}. Since writes that are fixed for r have their responses in the finite prefix of σ preceding the invocation of r, we have that σ,r is finite for every r. Since σ,r is non-empty, it always has a <_σ-maximum.
A SWMR register is safe if a read that does not have overlapping writes returns the most recently written value. A read that does have overlapping writes may return any arbitrary value in the domain of the register. A single-writer schedule σ is safe if every read r without overlapping writes returns the value written by the <_σ-maximum of the set σ,r. A SWMR register is regular if it is safe, and a read that has overlapping writes returns the value of one of the overlapping writes or the most recently written value. A single-writer schedule σ is regular if every read r returns either the value written by the <_σ-maximum of the set σ, r or the value of an overlapping write. A SWMR register is atomic if all reads and writes behave as though they occur in some definite order. A serialisation is a total order on a subset O of σ that is consistent with <_σ in the sense that for all o,o'∈ O we have that o<_σo' implies oo'. A serialisation (O,) is legal if every read operation returns the value of the most recent write operation according to , that is, whenever r∈ O is a read operation with return value v, then v is the write value of the -maximum of σ,r. A single-writer schedule σ is atomic if σ has a legal serialisation. § MULTI-WRITER MULTI-READER REGISTERS We now want to define multi-writer multi-reader (MWMR) safe, regular and atomic registers. Since our goal is to verify the correctness of mutual exclusion algorithms by model checking, we prefer operational, process-algebraic definitions of register models over definitions in terms of schedules. We are going to define register models by giving recursive process definitions that, given the state of the register, admit certain interactions with the register, resulting in an update of the state of the register. Which information needs to be maintained in the state of the register depends on the register model, but the state of the register should at least reflect which operations are currently active. So, with each register model m ∈{s,r,a} we associate a set of states, and we assume that the following functions are defined on it: [ , ,: →𝒫(𝕋); ,,: →; : ×𝔻→ . ] The first of these mappings returns the set of all threads that are currently reading, i.e., i∈s if, and only if, thread i has invoked a read operation but the matching response has not yet occurred. Similarly, the second returns the set of all threads that are currently writing, and the third returns the set of all threads that are currently not reading and not writing. The remaining mappings perform update operations on the state of the register, corresponding to whether the most recent interaction with the register was an invocation or a response of a read, or an invocation or a response of a write; the update operation for the invocation of a write also takes the write value into account. In the remainder of this section we shall first present our models of MWMR safe, regular and atomic registers, and then comment on the representation of these models in mCRL2. §.§ MWMR Safe Registers Lamport's SWMR safe register model (see <ref>) accounts for how reads and writes behave when they do not have overlapping writes, and how reads behave when they do have overlapping writes. To generalise Lamport's notion to MWMR registers, we need to define how writes behave when they have overlapping writes. We follow Raynal's approach and define that when a write has overlapping writes, then its effect is that some arbitrary value in 𝔻 is written to the register <cit.>. Our process-algebraic definition of a MWMR safe register is shown in <ref>.
The equation defines the behaviour of processes R_s(d,s); the parameter d∈𝔻 reflects the current value of the register, and the parameter s reflects its current state. For the behaviour of the safe register it must be determined for every read or write operation of a thread whether, during its interaction with the register, there was an overlapping write operation by some other thread. Therefore, in addition to the functions specified in <ref>, we presuppose a predicate [i] such that s holds if during the interaction of thread i with the register there was an overlapping write by another thread. At the response of a write that is not overlapping with other writes, the current value d of the register needs to be replaced by the write value. Hence, whenever a write is invoked, the write value is stored in s through s; this value can be retrieved with the corresponding mapping if the write had no overlapping writes. If there were overlapping writes, the mapping is undefined. The right-hand side of the equation in <ref> specifies the behaviour of the register using standard process-algebraic operations: · denotes sequential composition, + denotes non-deterministic choice, → denotes a conditional, and ∑ denotes choice quantification <cit.>. The definition in <ref> induces transition relations a (a∈ A) on the set of tuples ⟨ d,s⟩ (d∈𝔻, s∈). For instance, if i∈s and [i]s, then there is a transition ⟨ d,s⟩[i]d⟨ d,[i]s⟩ , according to the third summand of the definition in <ref>; and if i∈s and [i]s, then, for every d'∈𝔻, there is a transition ⟨ d,s⟩[i]⟨ d', s⟩ , according to the last summand of the definition in <ref>. We let s_𝑖𝑛𝑖𝑡 denote the initial state of the safe register, and we define s_𝑖𝑛𝑖𝑡=𝕋, s_𝑖𝑛𝑖𝑡=s_𝑖𝑛𝑖𝑡=∅, [i]s_𝑖𝑛𝑖𝑡 is false, and s_𝑖𝑛𝑖𝑡=d_𝑖𝑛𝑖𝑡. Henceforth, we shall abbreviate R_s(d_𝑖𝑛𝑖𝑡,s_𝑖𝑛𝑖𝑡) by R_s. A trace of R_s is a finite or infinite sequence a_0a_1⋯ a_n-1a_n⋯ of elements of A such that there exist d_0,d_1,d_2, …,d_n,…∈𝔻 and s_0,s_1,s_2,…,s_n,…∈ with d_0=d_𝑖𝑛𝑖𝑡 and s_0=s_𝑖𝑛𝑖𝑡 and ⟨ d_0,s_0⟩a_0⟨ d_1,s_1⟩a_1⋯a_n-1⟨ d_n,s_n⟩a_n⋯. We denote by the set of all traces of R_s. A trace α∈ is complete if, for all i∈𝕋, either αi is infinite or αi ends with a response. A single-writer trace is a trace in which all invocations and responses of write operations are by the same thread. We argue that there is a one-to-one correspondence between the single-writer safe schedules and the single-writer complete traces of R_s. First, note that schedules and complete traces adhere to exactly the same restrictions regarding the order in which invocations and responses of read and write operations can occur: the invocation of an operation by some thread can only occur when that same thread is not currently executing another operation, and a response to some thread for an operation can only occur if the last interaction of that thread was, indeed, an invocation of that same operation. Write values are not restricted in schedules, nor in complete traces. Moreover, in the single-writer case the value of the parameter d of the process R_s will always be the write value of the write operation whose execution finished last. Finally, note that both in schedules and in complete traces of R_s, if a read operation overlaps with a write operation, then it may return any value, and if it does not, then it will, indeed, return the value of the most recent write operation. Every single-writer safe schedule is a trace of R_s, and every complete single-writer trace of R_s is a safe schedule.
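To connect the correspondence above to something executable, the following Python sketch (our own encoding of schedules as event lists; it is not part of the paper's mCRL2 models) checks the single-writer safety condition of <ref> for finite, complete schedules:

def collect_operations(events):
    # Extract completed operations from a finite schedule given as a list of
    # ('sr', i) | ('fr', i, d) | ('sw', i, d) | ('fw', i) events (our own,
    # hypothetical encoding). Returns (start_index, end_index, value) tuples.
    writes, reads, open_w, open_r = [], [], {}, {}
    for k, ev in enumerate(events):
        if ev[0] == 'sw':                 # thread ev[1] starts writing ev[2]
            open_w[ev[1]] = (k, ev[2])
        elif ev[0] == 'fw':               # write by thread ev[1] finishes
            start, val = open_w.pop(ev[1])
            writes.append((start, k, val))
        elif ev[0] == 'sr':               # thread ev[1] starts reading
            open_r[ev[1]] = k
        elif ev[0] == 'fr':               # read by thread ev[1] returns ev[2]
            reads.append((open_r.pop(ev[1]), k, ev[2]))
    return writes, reads

def is_safe_schedule(events, d_init):
    writes, reads = collect_operations(events)
    for rs, rf, rv in reads:
        # a read with overlapping writes may return any value
        if any(not (wf < rs or rf < ws) for ws, wf, _ in writes):
            continue
        # otherwise it must return the value of the maximum fixed write
        fixed = [w for w in writes if w[1] < rs]
        expected = max(fixed)[2] if fixed else d_init
        if rv != expected:
            return False
    return True

# e.g. a read entirely after a write must see its value:
assert is_safe_schedule([('sw', 0, 5), ('fw', 0), ('sr', 1), ('fr', 1, 5)], 0)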
§.§ MWMR Regular Registers According to Lamport's definition of SWMR regular registers (see <ref>), a read r either returns the value written by the <_σ-maximum of the writes that precede r in σ or the value written by one of its overlapping writes. When writes may have overlapping writes, then the set of writes preceding r may not have a <_σ-maximum. It is then necessary to determine, for every read r, which of the <_σ-maximal writes preceding r should be taken into account when determining the return value of r, and to what extent different reads should agree on this choice. Our considerations are as follows. First, we want our MWMR regular register model to coincide with Lamport's SWMR regular register model when there are no writes overlapping other writes, so that our analyses of algorithms that rely on SWMR regular registers are valid with respect to Lamport's model. Second, our model should be suitable for explicit-state model checking. This precludes any definition that requires keeping track of unbounded information pertaining to the history of the computation. To limit the amount of information that the model is required to remember, we let the register commit to a unique value when there are no active writes. In this respect, our model deviates from three of the four models considered in <cit.>; in <ref> we provide a more detailed comparison. To be consistent with Lamport's SWMR regular registers, a read r should be able to return the value of any overlapping write. To determine which of the fixed writes is taken into account when determining the return value of r, our model non-deterministically inserts a special order action order(i) somewhere between the invocation and the response of every write of every thread i∈𝕋. One may think of the order action as marking the moment at which the write truly takes place. Note that this order action is purely for modelling purposes; we make no claims about the implementation of a regular register. The write value associated with the most recent order action preceding the invocation of a read (or the initial value if no order actions have occurred yet) is taken into account as a possible return value for that read. Thus, a serialisation of all writes is generated on-the-fly through the order actions: all read operations agree on the order of the writes. Our process-algebraic definition of a MWMR regular register is given in <ref>. Here, 𝒮_r denotes the set of possible states of the MWMR regular register. The register keeps track of the readers, writers and idle threads, similar to the safe register. It additionally keeps track of the set pending(s) of threads that have invoked a write but for which the order action has not yet occurred. The update function order[i]: 𝒮_r →𝒮_r associated with the order action removes thread i from pending(s). For every thread i ∈ pending(s), wval[i](s) is the write value of that write; it is used to correctly update the current value d of the register when order(i) occurs. For every thread i ∈ readers(s), poss[i](s) is the set of values that a read r invoked by thread i may return. That is, it consists of the values of all writes overlapping with r (thus far) and the value of the write with the most recent order action before the invocation of r. For i∈𝕋, let A_i^r = A_i∪{order(i)}, and let A^r = ⋃_i ∈𝕋A_i^r. The process definition in <ref> induces transition relations --a--> (a∈ A^r) on the set of tuples ⟨ d,s⟩ (d∈𝔻, s∈𝒮_r). As before, idle(s_𝑖𝑛𝑖𝑡) = 𝕋 and readers(s_𝑖𝑛𝑖𝑡) = writers(s_𝑖𝑛𝑖𝑡) = ∅. We also have pending(s_𝑖𝑛𝑖𝑡) = ∅, and poss[i](s_𝑖𝑛𝑖𝑡) = ∅ for all i ∈𝕋. The initial values of wval[i](s_𝑖𝑛𝑖𝑡) do not matter, since wval[i](s) only matters when i ∈ pending(s).
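In the same illustrative style as before, the following self-contained mCRL2 sketch shows how the order action and the per-reader set of admissible return values can be realised. All names (ord, pnd, wv, poss, and the auxiliary mappings vals and addv) are again our own; the sketch only mirrors the structure described above, under our assumptions, and is not the exact model of <ref>.

    sort Value = struct v0 | v1;
    sort Tid = struct t0 | t1;

    act  inv_r, res_w, ord: Tid;
         res_r, inv_w: Tid # Value;

    map  vals: Set(Tid) # (Tid -> Value) -> Set(Value);
         addv: (Tid -> Set(Value)) # Set(Tid) # Value -> (Tid -> Set(Value));
    var  S: Set(Tid); f: Tid -> Value; g: Tid -> Set(Value); v: Value;
    eqn  vals(S, f) = { w: Value | exists i: Tid . i in S && f(i) == w };
         addv(g, S, v) = lambda i: Tid . if(i in S, g(i) + {v}, g(i));

    proc R_r(d: Value,                  % value of the most recent order action
             rd, wr, pnd: Set(Tid),     % readers, writers, writes not yet ordered
             wv: Tid -> Value,          % write value per active writer
             poss: Tid -> Set(Value)) = % admissible return values per active reader
        sum i: Tid . (!(i in rd) && !(i in wr)) ->
            inv_r(i) . R_r(d, rd + {i}, wr, pnd, wv,
                poss[i -> {d} + vals(wr, wv)])   % last ordered value plus overlapping writes
      + sum i: Tid, e: Value . (i in rd && e in poss(i)) ->
            res_r(i, e) . R_r(d, rd - {i}, wr, pnd, wv, poss)
      + sum i: Tid, e: Value . (!(i in rd) && !(i in wr)) ->
            inv_w(i, e) . R_r(d, rd, wr + {i}, pnd + {i}, wv[i -> e],
                addv(poss, rd, e))               % the new write overlaps all active reads
      + sum i: Tid . (i in pnd) ->
            ord(i) . R_r(wv(i), rd, wr, pnd - {i}, wv, poss)   % the write takes effect now
      + sum i: Tid . (i in wr && !(i in pnd)) ->
            res_w(i) . R_r(d, rd, wr - {i}, pnd, wv, poss);

    init R_r(v0, {}, {}, {}, lambda i: Tid . v0, lambda i: Tid . {});

The guard on the write response enforces that every write performs its order action before responding, which is the on-the-fly serialisation of writes discussed above.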
We use R_r to abbreviate R_r(d_𝑖𝑛𝑖𝑡,s_𝑖𝑛𝑖𝑡), and define a trace of R_r, also as before, as a finite or infinite sequence of elements of A^r appearing as labels in a transition sequence starting at ⟨ d_𝑖𝑛𝑖𝑡,s_𝑖𝑛𝑖𝑡⟩. We denote by traces(R_r) the set of all traces of R_r. Compared to schedules, the traces of R_r have extra actions. If α is a finite or infinite sequence of elements of A^r, then we denote by α̅ the sequence of elements of A obtained from α by deleting all occurrences of order(i) (i∈𝕋). We can then formulate a correspondence between the single-writer traces of R_r (i.e., the traces in which all invocations and responses of write operations are by the same thread) and single-writer regular schedules. If writes have no overlapping writes, then the most recent order action when a read r is invoked either corresponds to the <_σ-maximum of the writes preceding r, or to a write that overlaps with r. In the first case, the set of possible values that can be returned by the read according to our model will coincide with the set of possible values that it can return according to <ref>. In the latter case, our model allows a subset of the values possible according to <ref> to be returned. Hence, a read in our model never returns a value that could not be returned according to Lamport's SWMR definition of regular registers. Moreover, if there is a trace of R_r in which the order action of a write that overlaps with r occurs before the invocation of r, then there also exists a trace in which it occurs after the invocation of r. Thus, the set of traces described by our model includes all regular schedules according to <ref> whenever there are no writes overlapping other writes. For every single-writer regular schedule σ there is a trace α of R_r such that α̅=σ, and if α is a complete single-writer trace of R_r, then α̅ is a regular schedule. §.§ MWMR Atomic Registers <ref>, formalising Lamport's notion of SWMR atomic register, straightforwardly generalises to MWMR registers by omitting the single-writer restriction on schedules. Our process-algebraic model should generate the legal serialisation of all operations on-the-fly. To this end, we introduce, for every thread i, execution actions exec_r(i) and exec_w(i) to mark the exact moment at which an operation is treated as occurring. An operation's execution action must, of course, occur between its invocation and response. The value that is returned at the response of a read is the value that the register stored at the moment of that read's execution; the register's stored value is updated to a write's value at that write's execution. The process-algebraic model of our MWMR atomic register is shown in <ref>. The set of states of the MWMR atomic register is denoted by 𝒮_a. In addition to the standard update functions, there are extra update functions exec_r[i], exec_w[i]: 𝒮_a →𝒮_a for the execution actions. The effect of applying exec_r[i] on s is to store the current value d of the register as the value that should be returned at the response of the active read by thread i; this value can then be retrieved with val[i](s), and val[i](s)=⊥ until then. The effect of applying exec_w[i] is to update the current value d of the register to the write value of the active write by thread i; this value can also be retrieved with val[i](s), and val[i](s)=⊥ thereafter. Note that, by setting val[i](s) to ⊥ before a read has been executed and after a write has been executed, we can use val[i](s) in combination with readers(s) and writers(s) to determine whether the execution of an operation has taken place. For i∈𝕋, let A_i^a = A_i∪{exec_r(i), exec_w(i)}, and let A^a = ⋃_i ∈𝕋A_i^a.
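The corresponding illustrative sketch for an atomic register is even simpler. Here an option type MaybeV plays the role of the ⊥-convention for val[i] described above; as before, every name in the sketch is our own assumption, not the notation of <ref>.

    sort Value = struct v0 | v1;
    sort Tid = struct t0 | t1;
    sort MaybeV = struct nothing | just(getv: Value);

    act  inv_r, res_w, exec_r, exec_w: Tid;
         res_r, inv_w: Tid # Value;

    proc R_a(d: Value, rd, wr: Set(Tid),
             rv: Tid -> MaybeV,    % value an executed read will return (nothing = not yet executed)
             wv: Tid -> MaybeV) =  % value of a not-yet-executed write (nothing = already executed)
        sum i: Tid . (!(i in rd) && !(i in wr)) ->
            inv_r(i) . R_a(d, rd + {i}, wr, rv[i -> nothing], wv)
      + sum i: Tid . (i in rd && rv(i) == nothing) ->
            exec_r(i) . R_a(d, rd, wr, rv[i -> just(d)], wv)    % the read takes effect now
      + sum i: Tid . (i in rd && rv(i) != nothing) ->
            res_r(i, getv(rv(i))) . R_a(d, rd - {i}, wr, rv, wv)
      + sum i: Tid, e: Value . (!(i in rd) && !(i in wr)) ->
            inv_w(i, e) . R_a(d, rd, wr + {i}, rv, wv[i -> just(e)])
      + sum i: Tid . (i in wr && wv(i) != nothing) ->
            exec_w(i) . R_a(getv(wv(i)), rd, wr, rv, wv[i -> nothing])  % the write takes effect now
      + sum i: Tid . (i in wr && wv(i) == nothing) ->
            res_w(i) . R_a(d, rd, wr - {i}, rv, wv);

    init R_a(v0, {}, {}, lambda i: Tid . nothing, lambda i: Tid . nothing);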
The process definition in <ref> induces transition relations --a--> (a∈ A^a) on the set of tuples ⟨ d,s⟩ (d∈𝔻, s∈𝒮_a). As before, idle(s_𝑖𝑛𝑖𝑡)=𝕋 and readers(s_𝑖𝑛𝑖𝑡) = writers(s_𝑖𝑛𝑖𝑡) = ∅; the initial values of val[i](s_𝑖𝑛𝑖𝑡) do not matter. We use R_a to abbreviate R_a(d_𝑖𝑛𝑖𝑡,s_𝑖𝑛𝑖𝑡), and define a trace of R_a, also as before, as a finite or infinite sequence of elements of A^a appearing as labels in a transition sequence starting at ⟨ d_𝑖𝑛𝑖𝑡,s_𝑖𝑛𝑖𝑡⟩. We denote by traces(R_a) the set of all traces of R_a. Compared to schedules, the traces of R_a have extra exec_r(i) and exec_w(i) actions. If α is a finite or infinite sequence of elements of A^a, then we denote by α̅ the sequence obtained from α by deleting all occurrences of exec_r(i) and exec_w(i) for i ∈𝕋. The correspondence between atomic schedules and complete traces of R_a follows straightforwardly. It suffices to prove that R_a admits exactly those traces α such that there exists a legal serialisation of α̅. To this end, note that the execute actions provide such a serialisation, and the definition of R_a has the responses of operations behave in accordance with this serialisation. For every atomic schedule σ there is a trace α of R_a such that α̅=σ, and if α is a complete trace of R_a, then α̅ is an atomic schedule. §.§ mCRL2 Implementation The mCRL2 toolset <cit.> provides tools for model checking and equivalence checking. Models are defined in the mCRL2 language <cit.>, which comprises a process-algebraic specification language and facilitates the algebraic specification of data types. Properties defined in the modal μ-calculus can be checked on those models. One nice feature of mCRL2 is that when a property does not hold, a counterexample can be generated. For more information we refer to <cit.> as well as the toolset's website[https://www.mcrl2.org]. We have implemented the models presented in Figures <ref>, <ref> and <ref> in the mCRL2 language. By adding processes that model the threads executing the desired algorithm in a manner compatible with the interface of the register models, we can easily verify the same algorithm under different atomicity assumptions. An added benefit is that we can assume different levels of atomicity for different registers simultaneously, so that we can pinpoint exactly to what extent an algorithm is robust against non-atomicity. The model can be found as part of the examples delivered with the mCRL2 distribution[<https://github.com/mCRL2org/mCRL2/tree/master/examples/academic/non-atomic_registers> ()]. The mCRL2 language has support for standard data types such as sets, bags and arrays (implemented as mappings), as well as an algebraic specification facility to define new data types. This allows us to model the registers while staying close to the process-algebraic models presented in this paper. § ALTERNATIVE DEFINITIONS OF MWMR REGULAR REGISTERS In <cit.>, four definitions of MWMR regular registers are proposed. These are formulated as conditions on schedules. We discuss how our definition of MWMR regular registers relates to these definitions. The following definition captures the weakest condition on schedules presented in <cit.>. A schedule σ satisfies the weak condition if, for every read operation r in σ, there exists a legal serialisation of writes(σ)∪{r}, where writes(σ) denotes the set of write operations in σ. It follows straightforwardly from our MWMR regular register definition that any complete trace α∈ traces(R_r), when transformed into a schedule α̅ by deleting the order actions, satisfies <ref>. As explained in <ref>, our model generates a serialisation of all writes.
For every read r by thread i, it returns either the value of the last write in this serialisation whose order action occurs before the invocation of r, or the value of one of the writes overlapping this read. In both cases, we may obtain a legal serialisation of writes(α̅)∪{r} by taking the serialisation of writes associated with α̅ and inserting r right after the write that it reads from. This is consistent with <_σ because the serialisation of the writes is, and r will only be placed after a write that either has its response before the invocation of r, or that r overlaps with. If α∈ traces(R_r) is complete, then the schedule α̅ satisfies the weak condition. In all our MWMR register definitions it is the case that, when no writes are active on a register, it stores a unique value. This reduces the burden of storing elaborate information on the execution history of the register, as would be necessary with the definitions of <cit.>, and thus leads to a smaller state space. A consequence of our choice is that not all schedules satisfying the weak condition can be generated by our model. Consider the schedule depicted in <ref>. It is argued in <cit.> that it satisfies the weak condition, but it cannot be generated by our regular register model R_r because once w_1 and w_2 have ended, the register will have stored a unique value (either 1 or 2). Hence, the return values of r_1 and r_2 cannot be different. Note that, for the same reason, the schedule cannot be generated by our safe register model R_s. As illustrated in the preceding example, there exist schedules satisfying the weak condition that cannot be generated by our safe register model R_s. Conversely, it is easy to see that there exist complete traces generated by our safe register model R_s (e.g., with overlapping writes resulting in a value that is not written by any of the writes) that do not satisfy the weak condition. The second condition in <cit.> associates with every read operation a serialisation and formulates a consistency requirement on these serialisations. Let relevant(σ,r) denote the set of writes in σ that are relevant for r, i.e., the writes w with r ≮_σ w. If r is a read in σ, then an r-serialisation is a serialisation ⊏_r on relevant(σ,r)∪{r}.[By considering serialisations of the relevant writes for r, instead of all writes, we deviate from <cit.>. Since a serialisation ⊏ on writes(σ)∪{r} must be consistent with <_σ, we will have that r ⊏ w for all w∈ writes(σ)∖ relevant(σ,r). It follows that the restriction of a serialisation ⊏ on writes(σ)∪{r} to relevant(σ,r)∪{r} is an r-serialisation, and ⊏ is legal if, and only if, its restriction is.] A schedule σ satisfies write-order if for each read r in σ there exists a legal r-serialisation ⊏_r of relevant(σ,r)∪{r} satisfying the following condition: for all reads r_1, r_2 in σ, and for all writes w_1, w_2 ∈ relevant(σ,r_1)∩ relevant(σ,r_2), it holds that w_1 ⊏_r_1 w_2 if and only if w_1 ⊏_r_2 w_2. For every schedule σ satisfying the write-order condition, there exists a trace α in traces(R_r) such that α̅ = σ. We give a brief, informal description of how such a trace α can be constructed here; a more formal argument is presented in <ref>. The idea is that order actions can be inserted between the invocation and response of every write in σ, such that the return values of the reads match this placement of order actions. Note that for reads that return the value of an overlapping write, this return value is possible according to <ref> regardless of how the order actions are placed. In our placement of order actions, we therefore only need to carefully consider reads that return the value of a write that is fixed for them. According to <ref>, reads in σ agree on the relative ordering of all writes that are relevant to them.
Since every write that is fixed for a read r is also relevant for r, the reads also agree on the relative ordering of the fixed writes. We use this information to construct an ordering on all writes that is consistent both with <_σ and with the return values of reads that read from writes that are fixed for them. Effectively, we find a single view on the relative order of all the write operations that is possible for every read in the schedule that returns the value of a fixed write. Using this ordering, we can then place the order actions in the schedule σ to create the trace α∈ traces(R_r) such that α̅ = σ. Whilst every schedule satisfying <ref> corresponds to a trace of our model, not every schedule with a corresponding trace in our model is allowed by the write-order condition. Consider <ref>. This schedule is allowed by our model: r_1 can read 2 in x because it overlaps with w_2, and it is possible for r_2 to read 1 if the order action of w_2 is done before the order action of w_1. This schedule does not meet <ref>, however; since both writes to x are relevant for both reads, the two reads must agree on the respective order of the writes. For r_2 to read 1, it must be the case that w_2 ⊏_r_2 w_1. But since w_1 <_σ r_1 according to the schedule, this means that w_2 ⊏_r_1 w_1 ⊏_r_1 r_1, so r_1 cannot read 2. The third and fourth conditions on schedules proposed in <cit.> we refer to as reads-from <cit.> and no-inversion <cit.>, respectively. We do not recall these conditions here and instead refer to <cit.> for more details. Our notion of MWMR regular register is incomparable with the notions induced by the reads-from and no-inversion conditions on schedules. First, as already indicated, every schedule that satisfies the write-order condition is also allowed by our model. As it is proven in <cit.> that the write-order condition is incomparable with the reads-from and no-inversion conditions, this means our model admits schedules not admitted by these definitions. To see that not all schedules satisfying reads-from and no-inversion are admitted by our model, it suffices to observe that the schedule presented in <ref>, which is not admitted by our MWMR regular register model, satisfies the reads-from and the no-inversion conditions. (See, e.g., <cit.> and <cit.>, which satisfy the reads-from and no-inversion conditions, respectively, and have the schedule in <ref> as prefix.) § VERIFYING MUTUAL EXCLUSION PROTOCOLS We have used the register processes described in <ref> to analyse several well-known mutual exclusion algorithms. To this end, we have modelled the behaviour of the threads as prescribed by the algorithm also as processes, which interact with the register processes. That a thread is executing its non-critical section is represented in our model by the action 𝑛𝑜𝑛𝑐𝑟𝑖𝑡, and that it is executing its critical section is represented by the action 𝑐𝑟𝑖𝑡; both actions are parameterised with the thread id. We have checked the following two properties. [Mutex] There is no state reachable from the initial state of the model in which there are two distinct threads i and j such that 𝑐𝑟𝑖𝑡(i) and 𝑐𝑟𝑖𝑡(j) are both enabled in this state. [Reach] For all threads i, always after an occurrence of a 𝑛𝑜𝑛𝑐𝑟𝑖𝑡(i) action it holds that, as long as a 𝑐𝑟𝑖𝑡(i) action has not happened, a state is reachable in which 𝑐𝑟𝑖𝑡(i) is enabled. The Reach property is implied by starvation freedom, and so if it does not hold, then neither does starvation freedom.
We chose to analyse this property rather than starvation freedom itself because the presence of busy waiting loops in our models would require us to use fairness assumptions to dismiss spurious counterexamples. The question of how to interpret fairness assumptions when dealing with non-atomic registers is outside the scope of this paper. The results of our verification are shown in <ref>. When doing model checking, we have to instantiate a specific number of threads. We have restricted our verification to three threads for all algorithms, except for Dekker, Attiya-Welch and Peterson, which are only defined for two threads. In this section, we discuss some of our most interesting findings. For complete descriptions of counterexamples, as well as further discussion of our results, we refer to <ref>. All models are available through GitHub[<https://github.com/mCRL2org/mCRL2/tree/master/examples/academic/non-atomic_registers> ()]. §.§ Peterson's Algorithm Peterson's classic algorithm (see <ref>) was not designed to be correct under non-atomic register assumptions. An analysis of the mutual exclusion violation with safe registers still gives interesting insights into the algorithm and some of the unexpected behaviour of safe registers. As expected, mCRL2 reports that mutual exclusion does not hold when using non-atomic registers. We present a visualisation of the counterexample generated by mCRL2 for safe registers in <ref>. There are two instances of overlapping operations. First, since the two writes to 𝑡𝑢𝑟𝑛, labelled w_3 and w_4 in <ref>, overlap, according to the safe register model the register can have any arbitrary value after they both have ended. In this counterexample, 𝑡𝑢𝑟𝑛 has the value 1, which allows thread 0 to read the value 1 (the read labelled r_4) and enter the critical section. Second, thread 1's read of 𝑡𝑢𝑟𝑛 (labelled r_2) overlaps with thread 0's write (labelled w_4). The read can therefore return an arbitrary value, in this case the value 0, which allows thread 1 to enter the critical section. This counterexample shows only overlaps on the 𝑡𝑢𝑟𝑛 register. We can initialise our model such that the 𝑡𝑢𝑟𝑛 register is atomic, but both 𝑓𝑙𝑎𝑔 registers behave as safe registers. We find that mutual exclusion does hold then. This confirms that overlapping operations on the 𝑡𝑢𝑟𝑛 register are the sole cause of the mutual exclusion violation for Peterson's algorithm. We discuss Peterson's algorithm with regular registers in <ref>. §.§ Szymanski's Flag Algorithm There are several variants of Szymanski's algorithm, which all seem to have been derived from the flag-based algorithm shown as <ref>. In <cit.>, Szymanski proposes this flag-based algorithm and claims that an implementation of it representing the flags using three bits is robust for flickering of bits (i.e., is correct for non-atomic registers). As indicated in <ref>, we find that neither the integer nor the bits variant ensures mutual exclusion when using non-atomic registers. The full analysis of the bits version, as well as of a variant of it known as the 3-bit linear wait algorithm <cit.>, is presented in <ref>. Here, we only discuss the integer version of the flag algorithm, as the counterexample against Mutex that we have found illustrates the core issue shared by all mentioned variants of Szymanski's algorithm. The pseudocode for the flag algorithm is shown in <ref>.
It is originally presented in <cit.>, but note that we have repaired an obvious typo: <cit.> erroneously has a conjunction instead of a disjunction in line <ref>. All 𝑓𝑙𝑎𝑔 registers are initialised at 0. See <ref> for a visualisation of the counterexample for mutual exclusion with two threads and regular registers that we found using the mCRL2 toolset. The first instance of a read overlapping with a write is irrelevant: reading 𝑓𝑙𝑎𝑔[1] = 1 would also have been possible without overlap. The other two instances of overlap are of interest. Thread 0 is writing the value 3 to 𝑓𝑙𝑎𝑔[0] and thread 1 reads 𝑓𝑙𝑎𝑔[0] twice while this write is active. The first time it reads the new value (3), while the second time it reads the old value (1). Lamport specifically highlights that such a sequence is possible when using regular registers <cit.>. Since only single-writer registers are used, and write-order reduces to Lamport's definition of regular registers in that case <cit.>, this counterexample is also valid for write-order. §.§ Implementation Details Our analyses have also revealed that seemingly minor implementation subtleties can make the difference between a correct and an incorrect algorithm. A non-atomic register that is read multiple times in a row may return different values, even if no new writes to this register have started. This means that when the value of a register needs to be checked several times in an algorithm, there is a difference between reading it once and subsequently checking a local copy of the value, and reading it again whenever needed. For an example where this affects correctness, consider the Attiya-Welch algorithm. While the presentation in <cit.> ensures reachability of the critical section with safe registers, the seemingly equivalent reformulation of this same algorithm in <cit.> does not. The latter suggests that a thread needs to read a particular register twice as part of two different conditions that in the former are handled simultaneously. In <cit.>, that presentation of the algorithm is claimed to be correct under all four of their MWMR regular register models; our counterexample shows that it is not. A similar phenomenon occurs with Lamport's 3-bit algorithm, in which each thread i has a bit z_i. As part of the algorithm, a computation is done on z (the function assigning z_i to i). Lamport states that "evaluating [z] at j requires a read of the variable z_j." This may lead one to implement this algorithm by having threads re-read variables whenever needed. It turns out this implementation leads to a deadlock. Locally saving all required z-values at the start of the computation and then only referencing this local copy during the computation solves this issue. Consequently, these algorithms have a correct implementation, but they are also easily implemented incorrectly. See the discussions of Attiya-Welch and Lamport in <ref> for more details. §.§ Other Verifications There have been many mechanical verifications of mutual exclusion algorithms with atomic registers. For instance, in recent tutorials on the verification of distributed algorithms in mCRL2, verifications of Dekker's and Peterson's algorithms are presented <cit.>. Several such verifications have also been done with the CADP toolset; see, e.g., <cit.> for the results of verifying a large number of mutual exclusion algorithms, including Szymanski, Dekker and Peterson, with atomic registers.
To the best of our knowledge, we are the first to propose a systematic approach to mechanically verifying the correctness of mutual exclusion algorithms with respect to non-atomic registers, but there have been some mechanical verifications for specific algorithms. Lamport himself modelled the Bakery algorithm in TLA+, representing the non-atomic writes as sequences of write actions of arbitrary length, where every action results in an arbitrary value being written, except for the last, which writes the intended value <cit.>. This approach for modelling safe registers only works for SWMR registers; it does not work for MWMR registers. This approach for modelling safe SWMR registers, as well as a similar approach for modelling regular SWMR registers, is presented in <cit.>. It is also used in several verifications done by Wim Hesselink, including of the Lycklama–Hadzilacos–Aravind algorithm in <cit.> and the Bakery algorithm in <cit.>. In <cit.>, several mutual exclusion algorithms are verified with atomic registers using timed automata in UPPAAL. Additionally, the Block & Woo algorithm is checked with bit flickering. Their model does not account for writes that overlap with other writes; moreover, their model for the behaviour of safe registers is specific to the registers used in the algorithm. Dekker's algorithm with safe registers is considered in <cit.>. There it is demonstrated that Dekker's algorithm does not satisfy starvation freedom when safe registers are used, and a fixed version of the algorithm is presented. Szymanski's flag algorithm with atomic registers is proven correct in <cit.>. This paper demonstrates the importance of checking all threads in the "forall" and "exists" statements in the pseudocode in the same order every time. This is also how we model the algorithm. There have been other verifications of Szymanski's algorithms <cit.>, the former paper using the STeP tool. However, the exact pseudocode in those proofs differs from the pseudocode in <cit.> and <cit.>. § CONCLUSIONS We have presented process-algebraic models of safe, regular and atomic multi-writer multi-reader registers and used them to determine the robustness of various mutual exclusion algorithms under relaxed atomicity assumptions. Our analyses revealed issues with several of the algorithms discussed. There are many more mutual exclusion algorithms that could be analysed in the same way as the ones shown in <ref>. In <cit.>, Szymanski presents three other mutual exclusion algorithms. There also exist several variants of Szymanski's algorithm <cit.>, all of which are similar to the 3-bit linear wait algorithm but differ in small ways. In <cit.> it is shown that Dekker's algorithm does not ensure starvation freedom when safe registers are used, and a modified version of the algorithm is presented which does satisfy this property. When we add verification of starvation freedom to our analysis, we can confirm their work. We have only considered to what extent various algorithms guarantee mutual exclusion and whether the critical section is always reachable for every thread. Our next step will be to consider starvation freedom. Van Glabbeek proves that starvation freedom cannot hold for any mutual exclusion algorithm whose correctness, on the one hand, relies on atomicity of memory interactions and, on the other hand, does not rely on assumptions regarding the relative speeds of threads <cit.>.
A crucial presupposition for his argument is that a convincing verification hinges on not more than a component-based fairness assumption called justness. In <cit.> a method is proposed for verifying liveness properties under justness assumptions using the mCRL2 toolset. The method requires a classification of the roles of components in interactions. It should be investigated how to classify the roles of threads and registers in invocations and responses, and, in particular, how to deal with the order and execute actions in the method. § PROOF OF <REF> This section presents our proof that every schedule σ satisfying the write-order condition can be simulated by our model R_r. Throughout, we write reads(σ) and writes(σ) for the sets of read and write operations in σ, and relevant(σ,r) for the set of writes relevant for a read r, i.e., the writes w∈ writes(σ) with r ≮_σ w. The return value of a read operation r∈ reads(σ) is either the initial value of the register or the write value associated with some write operation w∈ writes(σ) such that r≮_σw. Let σ be a schedule that satisfies the write-order regular register condition; the reads-from mapping ρ for σ is the mapping ρ: reads(σ)→ writes(σ) that associates with every r ∈ reads(σ) its direct predecessor in ⊏_r (recall that we have included a special write operation w_𝑖𝑛𝑖𝑡 in writes(σ) that precedes all other operations, so that every r∈ reads(σ) indeed has a direct predecessor in ⊏_r). Let σ be a schedule satisfying the write-order regular register condition and let ρ be the associated reads-from mapping. Then * ρ(r)∈ relevant(σ,r), * there does not exist a write w∈ writes(σ) such that ρ(r)<_σw <_σ r, and * the write value of ρ(r) equals the return value of r. Our goal is now to show that every schedule satisfying the write-order regular register condition can be transformed into a trace of our regular register model by appropriately inserting order actions. Let w∈ writes(σ). If the set ρ^-1(w)={r∈ reads(σ)|ρ(r)=w} is infinite, then it has an infinite subset R⊆ρ^-1(w) such that for all r∈ R and for all w'∈ writes(σ) we have that w' <_σ r. Let W={w'∈ writes(σ)| w≮_σ w'}. Since the invocations of all w'∈ W must appear in σ before the response of w, we have that W is finite. At most finitely many reads can have their invocations appear before the last occurrence of a response of a write in W, and so for infinitely many reads r∈ρ^-1(w) we have that w' <_σ r for all w'∈ W. Let R be the set of all those reads, i.e., R = { r∈ρ^-1(w) |∀ w'∈ W. w'<_σ r} . It remains to argue that there cannot exist w'∈ writes(σ) such that w<_σw'. To this end, we derive a contradiction from the assumption that there does exist such a w'∈ writes(σ). Since only finitely many reads can have their invocations before the occurrence of the response of w' in σ (for the prefix of σ including the response of w' is finite), it follows that there exists r∈ R such that w' <_σ r. But then we have that ρ(r) <_σ w' <_σ r, which contradicts the statements in Proposition <ref>. We say that a read r∈ reads(σ) is non-overlapping if ρ(r)<_σ r. We denote the set of all non-overlapping reads in σ by reads_no(σ). Let σ be a schedule that satisfies the write-order condition. Then there exists an enumeration o⃗=o_0,o_1,o_2,… of writes(σ)∪ reads_no(σ) satisfying the following properties: * o_0=w_𝑖𝑛𝑖𝑡; * if o_i<_σ o_j, then i<j for all relevant i,j; * for every r∈ reads_no(σ) we have that ρ(r) appears before r in o⃗ and between ρ(r) and r there is no other write; and * for all reads r,r'∈ reads_no(σ), if r and r' are distinct, ρ(r)=ρ(r') and r appears before r' in o⃗, then the invocation of r occurs before the invocation of r' in σ. We define o⃗ in three steps: first, we define an enumeration of W = ⋃_r∈ reads_no(σ) relevant(σ,r); then we extend this enumeration to an enumeration of writes(σ); and finally we suitably insert the elements of reads_no(σ) in this enumeration.
After defining o⃗ we shall establish that it satisfies the required properties. Define the relation ⊏ on W by ⊏ = ⋃_r∈ reads_no(σ)(⊏_r ∩(W× W)) . To prove that ⊏ is irreflexive, it suffices to note that ⊏_r is irreflexive for all r∈ reads_no(σ). To prove that ⊏ is transitive, let w_1,w_2,w_3∈ W and suppose that w_1 ⊏ w_2 and w_2 ⊏ w_3. From the definition of ⊏ and w_1 ⊏ w_2 it follows that w_2≠ w_𝑖𝑛𝑖𝑡; similarly, from w_2⊏w_3 it follows that w_3≠ w_𝑖𝑛𝑖𝑡. Then there exist r and r' such that w_1⊏_r w_2 and w_2⊏_r' w_3. If r=r', then w_1⊏w_3 immediately follows by the transitivity of ⊏_r=⊏_r'. Otherwise, either the response of r' occurs later in σ than the response of r, or vice versa. In the first case, we have that relevant(σ,r)⊆ relevant(σ,r') and therefore we get by the write-order regular register condition that w_1⊏_r'w_2; since ⊏_r' is transitive, it follows that w_1 ⊏_r'w_3 and hence w_1 ⊏ w_3. In the second case, we have that relevant(σ,r')⊆ relevant(σ,r) and therefore we get by the write-order regular register condition that w_2⊏_rw_3; since ⊏_r is transitive, it follows that w_1⊏_r w_3 and hence w_1 ⊏ w_3. To see that for all distinct w,w'∈ W we either have that w ⊏ w' or w' ⊏ w, note that there exists r∈ reads_no(σ) such that w and w' are both relevant for r. Hence, either w⊏_r w' or w'⊏_r w, so we have w ⊏ w' or w' ⊏ w. Thus, we have now established that ⊏ is a total order on W. Let W'= writes(σ)∖ W. If W'≠∅, then W must be finite, for the writes in W' are not relevant for any read in reads_no(σ) and hence their invocations all occur after the responses of all reads in reads_no(σ). This means that there are finitely many reads in reads_no(σ), and that σ must have a finite prefix σ' in which all responses of all reads in reads_no(σ) occur. The writes in W must have their invocations before the occurrence of the response of some read in reads_no(σ), and so all occurrences of invocations of writes in W must occur in the finite prefix σ' of σ. This means that W is finite. Now, let w⃗=w_0,w_1,w_2,… be the enumeration of writes(σ) that starts with an enumeration of W that is consistent with ⊏ (i.e., for all natural numbers i,j we have that i<j implies w_i ⊏ w_j), and that, if W is finite, is followed by an enumeration of W' consistent with the order of the invocations of its elements in σ (i.e., if w_i,w_j∈ W', then we have that i<j implies that the invocation of w_i occurs before the invocation of w_j in σ). We proceed to extend w⃗ to an enumeration of all operations in writes(σ)∪ reads_no(σ) by inserting directly after each w_i the elements of ρ^-1_𝑛𝑜(w_i)={r∈ reads_no(σ)|ρ(r)=w_i} . Let r⃗_⃗i⃗=r_i,0,r_i,1,r_i,2,… be the enumeration of ρ^-1_𝑛𝑜(w_i) that is consistent with the order of the invocations of the reads in σ (i.e., for all relevant natural numbers j,k we have that j<k implies that the invocation of r_i,j precedes the invocation of r_i,k in σ). Note that, by Lemma <ref>, r⃗_⃗i⃗ can only be infinite if w_i is the last element of w⃗. So we can now define an enumeration o⃗ of the operations in writes(σ)∪ reads_no(σ) as follows: o⃗ = o_0,o_1,o_2,… = w_0,r⃗_⃗0⃗,w_1,r⃗_⃗1⃗,w_2,r⃗_⃗2⃗,… . We proceed to argue that o⃗ satisfies the required properties. * It is immediate by our assumptions about w_𝑖𝑛𝑖𝑡 that it is the ⊏-least element of W. So it is the first element of the enumeration of W, and hence of o⃗. * To prove that o_i <_σ o_j implies i<j for all relevant i,j, we assume that o_i<_σ o_j and distinguish seven cases. * If o_i,o_j∈ W, then from o_j∈ W and o_i<_σo_j it follows that there exists r∈ reads_no(σ) such that o_i,o_j∈ relevant(σ,r). Then o_i⊏_r o_j, so o_i ⊏ o_j and hence o_i appears before o_j in o⃗.
* If o_i,o_j∈ W', then from o_i<_σo_j it follows that the response of o_i, and hence also the invocation of o_i, appears before the invocation of o_j, and therefore o_i appears before o_j in o⃗. * If o_i∈ W and o_j∈ W', then it is immediately clear from the definition of o⃗ that i<j. * We show that o_i∈ W' and o_j∈ W is impossible. For suppose it is; then, since o_j∈ W, there exists r∈ reads_no(σ) such that o_j is relevant for r, while, since o_i∈ W', we have that r<_σ o_i. From o_i<_σo_j it follows that r <_σ o_j, contradicting that o_j is relevant for r. * Consider o_i,o_j∈ reads_no(σ). From o_i <_σ o_j it follows that the response, and hence the invocation, of o_i appears before the invocation of o_j in σ, so if ρ(o_i)=ρ(o_j), then it immediately follows that i<j. So we proceed with the assumption that ρ(o_i)≠ρ(o_j). Since o_i and o_j are non-overlapping reads, we have that ρ(o_i)<_σ o_i and ρ(o_j)<_σ o_j, and from o_i<_σo_j it, moreover, follows that ρ(o_i)<_σ o_j. Since both ρ(o_i) and ρ(o_j) are relevant for o_j, we have that ρ(o_i) and ρ(o_j) are ordered by the o_j-serialisation ⊏_o_j. Moreover, we must have ρ(o_i)⊏_o_jo_j and ρ(o_j)⊏_o_j o_j, and since ρ(o_j) must be the direct ⊏_o_j-predecessor of o_j, it follows that ρ(o_i)⊏_o_jρ(o_j). Hence, ρ(o_i) appears before ρ(o_j) in o⃗ and therefore i<j. * If o_i∈ writes(σ) and o_j∈ reads_no(σ), then both o_i and ρ(o_j) are relevant for o_j. Since ⊏_o_j must be consistent with <_σ, we have that o_i⊏_o_j o_j. Furthermore, since ρ(o_j) is defined as o_j's direct predecessor according to ⊏_o_j, it follows that either o_i⊏_o_jρ(o_j) or o_i=ρ(o_j). In both cases, we find that i<j. * Suppose that o_i∈ reads_no(σ) and o_j∈ writes(σ). If o_j∈ W', then ρ(o_i)<_σ o_i <_σ o_j, so according to the definition of o⃗, o_i will appear in o⃗ between ρ(o_i) and o_j, from which it follows that i<j. If o_j∈ W, then there exists a read r such that o_j∈ relevant(σ,r). Since ρ(o_i)<_σo_i<_σo_j, it follows that ρ(o_i) is also relevant for r. We must have ρ(o_i)⊏_r o_j, and, according to the definition of o⃗, o_i will appear between ρ(o_i) and o_j, so i<j. * Let r∈ reads_no(σ). Then r∈ρ^-1_𝑛𝑜(ρ(r)), so r is an element of the sequence of reads directly following ρ(r). It follows that ρ(r) appears before r in o⃗ and between ρ(r) and r there is no other write. * Let r,r'∈ reads_no(σ) be distinct, suppose that ρ(r)=ρ(r') and r appears before r' in o⃗. Then we have r,r'∈ρ^-1_𝑛𝑜(ρ(r)), so r and r' both appear in the sequence of reads that directly follows ρ(r) in o⃗. The reads in that sequence are ordered in accordance with the order of their invocations. It follows that the invocation of r occurs before the invocation of r' in σ. The enumeration delivered by <ref> allows us to define a procedure that transforms a schedule σ into a trace α∈ traces(R_r) as follows. Simultaneously iterate through the enumeration and the schedule. If the current operation in the enumeration is a read, then we move forward in the schedule until after the invocation of that read and move to the next operation in the enumeration. If the current operation in the enumeration is a write, then there are two cases: if the invocation of the write has already occurred, then we insert the associated order action in the schedule, move past the order action, and move to the next operation; if the invocation of the write has not yet occurred, then we first move in the schedule until right after the invocation of the write, insert the associated order action, move past the order action, and move to the next operation. This establishes <ref>.
§ MUTUAL EXCLUSION COUNTEREXAMPLES In this appendix we give in-depth discussions and example traces for our more interesting results presented in <ref>. For all the models used, as well as the exact pseudocode we modelled to arrive at the conclusions in <ref>, we point to the examples included with the mCRL2 distribution. These can be found at the same link as the register model itself. In the counterexample discussions, we at times refer to the "entry protocol" and the "exit protocol" of an algorithm. The former is the part of the algorithm before the critical section is entered, the latter is the part after the critical section is released. §.§ Properties In mCRL2, properties must be encoded in the modal μ-calculus. We encoded the mutual exclusion property as ∀ i, j ∈𝕋. (i ≠ j) ⇒ [𝑡𝑟𝑢𝑒^⋆] ¬(⟨𝑐𝑟𝑖𝑡(i)⟩𝑡𝑟𝑢𝑒 ∧⟨𝑐𝑟𝑖𝑡(j)⟩𝑡𝑟𝑢𝑒) and the reachability of the critical section property as ∀ i ∈𝕋. [𝑡𝑟𝑢𝑒^⋆·𝑛𝑜𝑛𝑐𝑟𝑖𝑡(i)] ν X.(⟨𝑡𝑟𝑢𝑒^⋆·𝑐𝑟𝑖𝑡(i)⟩𝑡𝑟𝑢𝑒 ∧ [¬𝑐𝑟𝑖𝑡(i)]X) §.§ Peterson In <ref> we demonstrate a counterexample for Peterson's algorithm using safe registers. Here, we note that the same counterexample is also valid for our regular register model: r_2 can still return a 0 because it overlaps with w_4; and by placing the 𝑜𝑟𝑑𝑒𝑟_𝑤𝑟𝑖𝑡𝑒 of w_4 before the 𝑜𝑟𝑑𝑒𝑟_𝑤𝑟𝑖𝑡𝑒 of w_3 we can also have thread 0 read a 1 at r_4. However, this counterexample is not valid for the write-order definition from <cit.>. Since both writes to 𝑡𝑢𝑟𝑛 are relevant for both the reads from 𝑡𝑢𝑟𝑛, the two threads must agree on their respective order. For thread 0 to read a 1 in 𝑡𝑢𝑟𝑛, it must be the case that w_4 precedes w_3 in the agreed order of writes. And since w_3 ends before r_2 starts, r_2 cannot possibly read a 0. This is effectively the same situation as in <ref>. We cannot conclude from this that Peterson's algorithm is correct under write-order, only that this specific counterexample is not valid under those assumptions. §.§ Attiya-Welch The algorithm we refer to as the Attiya-Welch algorithm is presented in both <cit.> and <cit.> as being Peterson's algorithm from <cit.>. While the algorithm indeed has some similarities to Peterson's, it is not identical, and in particular it behaves differently when using non-atomic registers. Hence we refer to it as the Attiya-Welch algorithm, rather than as a version of Peterson's. As shown in the table, the Attiya-Welch algorithm does ensure mutual exclusion when non-atomic registers are used. Of interest is that while the original presentation of the algorithm from <cit.> also ensures reachability of the critical section when using non-atomic registers, the version of the algorithm presented in <cit.> does not. The two algorithms are shown in <ref> and <ref>, respectively. When using regular registers and two threads, we get the following counterexample for reachability of the critical section on the variant presentation: * Thread 1 gains uncontested access to the critical section. Because thread 0 is not competing, thread 1 can reach line <ref> without issue. * Thread 1 starts its exit protocol by starting the write 𝑡𝑢𝑟𝑛 ← 1. This operation does not have its 𝑜𝑟𝑑𝑒𝑟_𝑤𝑟𝑖𝑡𝑒 action yet. At this point, a read of 𝑡𝑢𝑟𝑛 can return both a 0 and a 1: 0 is the initial value and no 𝑜𝑟𝑑𝑒𝑟_𝑤𝑟𝑖𝑡𝑒 actions have occurred yet; 1 can be read because of overlap. * Thread 0 starts the competition. Whenever it reads 𝑓𝑙𝑎𝑔[1] it sees a 1, but it can read whatever value it needs from 𝑡𝑢𝑟𝑛 to get through the entry protocol.
It escapes the first await-loop by reading 𝑡𝑢𝑟𝑛 = 1; it escapes the repeat-until loop by again reading 𝑡𝑢𝑟𝑛 = 1; it avoids the second await-loop entirely by reading 𝑡𝑢𝑟𝑛 = 0 on line <ref>. * The order action for thread 1's write takes place, but the write is not finished yet. * Thread 0 enters the critical section. * In the exit protocol, thread 0 writes 𝑡𝑢𝑟𝑛 ← 0. The order and finish actions take place immediately. At this point, the most recent order action on 𝑡𝑢𝑟𝑛 has the value 0, and the write by thread 1 is still active. So once again, reads of 𝑡𝑢𝑟𝑛 can return either a 0 or a 1. * Thread 0 finishes the exit protocol with 𝑓𝑙𝑎𝑔[0] ← 0. It re-enters the competition. Just like before, even though 𝑓𝑙𝑎𝑔[1] = 1, thread 0 can get through most of the entry protocol by reading the required values for 𝑡𝑢𝑟𝑛. It gets until right after line <ref>, having just escaped the repeat-until loop. Unlike the previous execution of the entry protocol by thread 0, it reads 𝑡𝑢𝑟𝑛 = 1 on line <ref>; as a result it starts awaiting 𝑓𝑙𝑎𝑔[1] to be 0. * Thread 1 stops writing to 𝑡𝑢𝑟𝑛. At this point, any read of 𝑡𝑢𝑟𝑛 will return the value of the most recent order action, which was 0. * Thread 1 ends the exit protocol by setting 𝑓𝑙𝑎𝑔[1] to 0. Thread 0 does not read 𝑓𝑙𝑎𝑔[1] yet. * Thread 1 re-enters the competition. Since 𝑡𝑢𝑟𝑛 is now 0, it can get through most of the entry protocol. It cannot get through the entry protocol entirely, however, because on line <ref> it once reads 𝑡𝑢𝑟𝑛 = 0. It then has to start awaiting 𝑓𝑙𝑎𝑔[0] to be equal to 0. * At this point, both threads have their respective 𝑓𝑙𝑎𝑔s set to 1, and both are waiting for the other's 𝑓𝑙𝑎𝑔 to be 0. This is a deadlock; neither thread will ever be able to reach the critical section again. This same counterexample (ignoring the ordering of writes) also works for safe registers. This counterexample does not work for the write-order definition from <cit.> (it is once again similar to <ref>), but it does hold for the weak definition. This contradicts the claim in <cit.> that this algorithm ensures starvation freedom under weak: if one thread can never reach the critical section again after having done a 𝑛𝑜𝑛𝑐𝑟𝑖𝑡 action, then starvation freedom cannot hold. As stated earlier, this counterexample is only present in the presentation from <cit.>; it is not present in the pseudocode given in <cit.>. At first glance, the algorithms seem to be equivalent: the goto-statement on line <ref> of <ref> is removed, and instead that part of the code is turned into a logically equivalent repeat-until loop. There is only a minor implementation difference: where in the original presentation 𝑡𝑢𝑟𝑛 is read only once to determine whether the loop should be taken and whether the thread needs to wait for the other to lower its flag, in the variant presentation 𝑡𝑢𝑟𝑛 is read twice. As shown in the counterexample, the deadlock requires an overlapping write on 𝑡𝑢𝑟𝑛, which can only occur when both threads are in the exit protocol simultaneously. Since mutual exclusion is guaranteed, it must be the case that one thread writing to 𝑡𝑢𝑟𝑛 is what allows the other thread to reach the exit protocol. In <ref> this is not possible: on line <ref>, if 𝑡𝑢𝑟𝑛 = i is read then, since 𝑓𝑙𝑎𝑔[j] = 1, the other thread will be forced back to line <ref>. If on the other hand 𝑡𝑢𝑟𝑛 = j is read, then the thread gets stuck in the waiting loop on line <ref>. In <ref> it is possible: by reading 𝑡𝑢𝑟𝑛 = j on line <ref> and 𝑡𝑢𝑟𝑛 = i on line <ref>. This is mentioned in part 3 of the counterexample.
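The operational difference can be made concrete in process-algebraic terms. The following self-contained mCRL2 sketch does not reproduce the algorithm's actual control flow; under our own naming assumptions (read_turn, take_loop, await_flag, enter are all hypothetical actions), it only isolates the one-read-versus-two-reads difference: in variant (a) both decisions are taken on the same bound copy of 𝑡𝑢𝑟𝑛 and necessarily agree, while in variant (b) the second read may return a different value on a regular register.

    sort Value = struct v0 | v1;
    sort Tid = struct t0 | t1;

    act  read_turn: Tid # Value;         % thread receives a value for turn
         take_loop, await_flag, enter: Tid;

    % (a) One read of turn: both decisions inspect the same local copy t.
    proc TestOnce(i: Tid) =
        sum t: Value . read_turn(i, t) .
            ( (t == v1) -> take_loop(i) . TestOnce(i)
              <> await_flag(i) . TestOnce(i) );

    % (b) Turn is read again for the second decision: the two reads may
    %     observe different values when a write overlaps, which is the
    %     situation exploited by the deadlock above.
    proc TestTwice(i: Tid) =
        sum t1: Value . read_turn(i, t1) .
            ( (t1 == v1) -> take_loop(i) . TestTwice(i)
              <> sum t2: Value . read_turn(i, t2) .
                   ( (t2 == v1) -> await_flag(i) . TestTwice(i)
                     <> enter(i) . TestTwice(i) ) );

    init TestTwice(t0);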
§.§ Lamport (3-bit) Lamport highlighted the (theoretical) importance of mutual exclusion algorithms that are resistant to safe registers <cit.>. He also proposed the first algorithm specifically designed to ensure mutual exclusion under safe registers: the Bakery algorithm <cit.>. But the Bakery algorithm requires unbounded memory registers and is therefore in many ways impractical, and at the very least hard to analyse. In <cit.>, Lamport proposes four different solutions for this problem, each offering a different trade-off between using a small number of communication variables and satisfying stronger properties. We focus on the second algorithm, which requires only three SWMR bits per thread and still ensures both mutual exclusion and starvation freedom. The algorithm can be seen in <ref>. We first introduce the variables used in the algorithm and the definitions required for understanding it. This algorithm is for an arbitrary number of threads. We use ids 0 to N-1 when there are N threads. The j, y and f variables are private variables in the range 0 to N-1. The x_i, y_i and z_i registers are all Boolean variables initially set to 0/false. Lamport's Three Bit Algorithm makes extensive use of cycles. A cycle, as defined in <cit.>, is an object of the form ⟨ i_0, .., i_m ⟩ of distinct elements. Two cycles are the same if they contain the same elements in the same order up to a cyclic permutation. The first element of a cycle is its smallest element, so we take as the representative of a cycle a list where the smallest element is at index 0. An ordered cycle has all elements in order from smallest to largest, possibly only after cyclic permutation. Since our representation of a cycle is a list with the smallest element at the first position, an ordered cycle can be represented with a sorted list. The operation ORD S takes a set S and returns the ordered cycle containing exactly the elements from S. In the algorithm, the Boolean function CG(v, γ, i_j) is used. Here, v is a Boolean function mapping each element in the cycle γ = ⟨ i_0, …, i_m ⟩ to either true or false, and i_j is an element from γ. It is defined by CG(v, γ, i_j) ≜ (v(i_j) ≡ CGV(v, γ, i_j)), where CGV(v, γ, i_j) = ¬ v(i_j-1) if j > 0, and CGV(v, γ, i_j) = v(i_m) if j = 0. The phrase "i ≔ j cyclically to k" means that the iteration starts with i = j; then i gets incremented by 1, modulo the length of the cycle. This continues until i = k, at which point the iteration stops without executing the loop body for i = k. ⊕ is used for addition modulo the length of the cycle. We find that the algorithm indeed satisfies mutual exclusion and reachability of the critical section when using non-atomic registers. However, in the process of modelling it became clear that these results are only valid when the computation on line <ref> is implemented in a very particular manner, something which is not emphasised in the algorithm's presentation. In our first model, we handled this line as follows: we went through the constructed ordered list γ from the smallest to the largest element. For each element, we read the associated z-value as well as the z-value of the element it has to be compared against according to the CGV function. As soon as we found one which satisfies the equality, that value was chosen for f. This seems to match the description of the algorithm from <cit.>: "evaluating [z] at j requires a read of the variable z_j." Yet, this implementation leads to a violation of the reachability of the critical section, even when atomic registers are used.
The violation is caused when one thread is updating its z-value in the exit protocol while a different thread has to read this z-value multiple times for the computation on line <ref>. Consider the following situation with two threads and atomic registers: * Thread 1 executes the entry protocol successfully because it is the only competing thread. It gets to its exit protocol and sets z_1 to 1. It then completes the rest of the exit protocol. * Thread 1 starts the competition again. Once again, it is the only competing thread and hence can reach the critical section and start the exit protocol. It has not yet updated z_1 to be 0 again. * Thread 0 starts the competition. At line <ref> it finds that y_0 = y_1 = 1, so γ = [0, 1]. * On line <ref>, thread 0 tests whether 𝐶𝐺(z,γ,0) = 1. For this, it needs to compare z_0 and z_1. It finds z_0 = 0 and z_1 = 1. We find 𝐶𝐺(z, γ, 0) = 0, so 0 is not deemed a valid value for f. * Thread 1 now updates z_1 to be 0. * Next, thread 0 checks 𝐶𝐺(z, γ, 1). It once again reads z_0 and z_1, but now it wants those to have different values. It reads z_0 = 0 and z_1 = 0. Hence, 𝐶𝐺(z, γ, 1) = 0, so 1 is not deemed a valid value for f either. The algorithm does not account for there being no valid value for f, because this situation is not meant to occur. As a consequence, in our model thread 0 simply cannot take actions anymore once this situation is reached and will therefore never reach the critical section again. In <cit.> a lemma is proven which states that there is always some i for which 𝐶𝐺(v, γ, i) = 1. But that proof does not account for a value changing while the comparisons are being done. This situation can be avoided by reading all the z-values once before starting line <ref> and storing the result locally. Indeed, once we modelled the algorithm using this approach, we found the results shown in <ref>. This once again highlights the importance of minor implementation details when dealing with non-atomic registers. §.§ Szymanski (flag) As stated in <ref>, Szymanski does not claim the flag algorithm is valid when using non-atomic registers. He instead claims that when the 𝑓𝑙𝑎𝑔 register is implemented as three bits as shown in <ref>, the algorithm is resistant to regular registers. Pseudocode is given in <cit.> that incorporates this change, as well as other changes to make the algorithm resistant to thread failures and restarts. However, some formatting issues in the presentation of that pseudocode mean we are not certain exactly what was intended there. Instead of modelling that pseudocode, we therefore made the translation from the flag algorithm to a three-bit implementation ourselves and modelled that. The result is shown in <ref>. Interestingly enough, we find that mutual exclusion no longer holds even with atomic registers when this change is made. With two threads, mCRL2 reports that mutual exclusion holds with atomic registers (although not with safe or regular ones). For three threads, the following counterexample is found: * Thread 2 runs through the entire algorithm until line <ref>. At this point, 𝑖𝑛𝑡𝑒𝑛𝑡[2] = 0, 𝑑𝑜𝑜𝑟_𝑖𝑛[2] = 1 and 𝑑𝑜𝑜𝑟_𝑜𝑢𝑡[2] = 1. * Threads 0 and 1 can both get past line <ref>, since 𝑑𝑜𝑜𝑟_𝑖𝑛[0] = 𝑑𝑜𝑜𝑟_𝑖𝑛[1] = 0 and 𝑖𝑛𝑡𝑒𝑛𝑡[2] = 0. * Thread 1 continues further; at line <ref> it sees 𝑖𝑛𝑡𝑒𝑛𝑡[0] = 1 and 𝑑𝑜𝑜𝑟_𝑖𝑛[0] = 0, so it has to execute lines <ref> and <ref>. It can immediately escape line <ref>, however, because 𝑑𝑜𝑜𝑟_𝑜𝑢𝑡[2] = 1. * Thread 1 continues through lines <ref>, <ref> and <ref>.
On line <ref>, it sees 𝑑𝑜𝑜𝑟_𝑖𝑛[0] = 0, so it can enter the critical section. * Thread 0 continues on line <ref>. There is no thread with 𝑖𝑛𝑡𝑒𝑛𝑡 set to true and 𝑑𝑜𝑜𝑟_𝑖𝑛 set to false, so it directly gets to lines <ref> and <ref>. Since it has the lowest thread id, it can immediately enter the critical section. This counterexample relies on the resetting of the variables in the exit protocol happening separately, rather than all at once. The properties can be made true if the order of the resets in the exit protocol is changed. If the order is 𝑑𝑜𝑜𝑟_𝑜𝑢𝑡, 𝑑𝑜𝑜𝑟_𝑖𝑛, 𝑖𝑛𝑡𝑒𝑛𝑡, then both properties hold with three threads and atomic registers. This is not a desirable solution, however. While we did not analyse starvation freedom formally, reconsidering the above counterexample makes it easy to observe that the following scenario is possible if 𝑑𝑜𝑜𝑟_𝑜𝑢𝑡 and 𝑑𝑜𝑜𝑟_𝑖𝑛 are reset before 𝑖𝑛𝑡𝑒𝑛𝑡 is: * Thread 2 runs through the entire algorithm until it gets to the exit protocol. Here it resets 𝑑𝑜𝑜𝑟_𝑜𝑢𝑡 and 𝑑𝑜𝑜𝑟_𝑖𝑛 but not yet 𝑖𝑛𝑡𝑒𝑛𝑡. * Threads 0 and 1 both get past line <ref> because 𝑑𝑜𝑜𝑟_𝑖𝑛[0] = 𝑑𝑜𝑜𝑟_𝑖𝑛[1] = 𝑑𝑜𝑜𝑟_𝑖𝑛[2] = 0. On line <ref>, both see 𝑖𝑛𝑡𝑒𝑛𝑡[2] = 1 and 𝑑𝑜𝑜𝑟_𝑖𝑛[2] = 0, meaning they both go to lines <ref> and <ref>. * If thread 2 never chooses to re-attempt access of the critical section, then 𝑑𝑜𝑜𝑟_𝑜𝑢𝑡[2] will never become true again. And so threads 0 and 1 will never escape line <ref>. So while reachability of the critical section holds, it relies on thread 2 always wanting to re-enter the competition. We foresee no such issues if the order of resets is 𝑑𝑜𝑜𝑟_𝑜𝑢𝑡, 𝑖𝑛𝑡𝑒𝑛𝑡, 𝑑𝑜𝑜𝑟_𝑖𝑛, although further formal verification is needed to confirm that this reset order has no complications. Note that regardless of the exit protocol's exact implementation, the algorithm still does not ensure mutual exclusion with safe or regular registers. For example, with two threads and safe registers, mCRL2 generates the following counterexample for mutual exclusion: * Both threads 0 and 1 get through lines <ref> and <ref>. * Thread 0 starts writing 1 to 𝑑𝑜𝑜𝑟_𝑖𝑛[0], but does not finish this write yet. * Thread 1 continues through line <ref>. On line <ref>, it reads 𝑑𝑜𝑜𝑟_𝑖𝑛[0] = 1 with overlap. It also sees 𝑑𝑜𝑜𝑟_𝑖𝑛[1] = 1, so it can continue to line <ref>. * Thread 1 can simply continue through lines <ref> and <ref>. On line <ref>, it reads 𝑑𝑜𝑜𝑟_𝑖𝑛[0] = 0 with overlap, and can therefore enter the critical section. * Thread 0 finishes the write on line <ref>. On line <ref> it sees 𝑑𝑜𝑜𝑟_𝑖𝑛[0] = 𝑑𝑜𝑜𝑟_𝑖𝑛[1] = 1, so it continues on line <ref>. It executes lines <ref> and <ref>, and then on line <ref> it does not need to check any other thread's 𝑑𝑜𝑜𝑟_𝑖𝑛 values because it has the lowest thread id. Thread 0 can enter the critical section. This counterexample is also valid for SWMR regular registers, since those allow two reads overlapping the same write to first read the new and then the old value. §.§ Szymanski (3-bit linear wait) An alternative version of Szymanski's algorithm is the 3-bit linear wait algorithm from <cit.>. The pseudocode is presented in <ref>. Surprisingly, our verification shows this algorithm does not ensure mutual exclusion even when using atomic registers. The counterexample once again requires at least three threads. The first counterexample generated by mCRL2 relies on reading w_j and s_j separately on line <ref>.
This was likely unintended behaviour, but as far as we can tell it is not excluded in the algorithm's description in <cit.>; not to mention that enforcing it would require a semaphore, which returns us to the issue of assuming a lower-level solution to the mutual exclusion problem. For completeness, we still model a variant where a semaphore is used to protect every write to a w- or s-register, as well as the two reads on line <ref>, but nothing else. This leads to the following counterexample with three threads and atomic registers: * Threads 0, 1 and 2 all execute lines <ref> and <ref>. We now have that all a's are 1, all w's are 0 and all s's are 0. * Thread 1 continues the competition. It executes lines <ref> and <ref>; on line <ref> it sees s_1 = 0, so it goes into the while-loop. On line <ref> it sees a_0 = 1, so it breaks out with j = 0, which means the condition on line <ref> evaluates to false and the condition on line <ref> evaluates to true. * Thread 0 does the same. On line <ref>, it also breaks early because a_2 = 1; hence it also ends up in the body of the if-statement on line <ref>. * Thread 0 reads w_0 = 1, s_0 = 0 on line <ref>; it continues with j = 1. * Thread 1 reads w_0 = 1, s_0 = 0, w_1 = 1, s_1 = 0 on line <ref>, so it continues with j = 2. * Thread 2 now continues. It executes lines <ref> to <ref>. On line <ref>, it finds that all a-values are 0, so it continues on line <ref>. Here it sets s_2 to 1. On line <ref> it once again sees that all a-values are 0, so it continues on line <ref>, where it sets w_2 to 0. * Thread 1 reads w_2 = 0, s_2 = 1; hence it breaks out of the loop on line <ref> with j = 2. The condition on line <ref> evaluates to true, so thread 1 executes both s_1 ← 1 and w_1 ← 0. * Thread 1 goes back to line <ref>. Here it sees s_1 = 1, so it goes to line <ref>. It reads s_0 = 0 and can therefore enter the critical section. * Thread 0 now reads w_1 = 0, s_1 = 1, so it breaks out of the loop on line <ref> with j = 1. The condition on line <ref> evaluates to true, so it executes s_0 ← 1 and w_0 ← 0. * Thread 0 goes back to line <ref>, where it sees s_0 = 1. It then goes to line <ref>, where it does not need to check the s-value of any other thread since it has the lowest id. Thread 0 can enter the critical section. The issue is that the third thread allows the other two to satisfy exists-conditions when this should not be happening. This same counterexample is of course also valid with regular and safe registers and three threads. We did find that with only two threads, the semaphore does ensure the two properties even when safe registers are used.
http://arxiv.org/abs/2307.04453v1
20230710100605
Tracking the Long-Term GW Phase Evolution for HM Cancri-like Binaries with LISA
[ "Naoki Seto" ]
gr-qc
[ "gr-qc", "astro-ph.HE", "astro-ph.IM" ]
http://arxiv.org/abs/2307.04921v1
20230710220542
Brown dwarf companions in binaries detected from the 2021 season high-cadence microlensing surveys
[ "Cheongho Han", "Youn Kil Jung", "Ian A. Bond", "Sun-Ju Chung", "Michael D. Albrow", "Andrew Gould", "Kyu-Ha Hwang", "Chung-Uk Lee", "Yoon-Hyun Ryu", "In-Gu Shin", "Yossi Shvartzvald", "Hongjing Yang", "Jennifer C. Yee", "Weicheng Zang", "Sang-Mok Cha", "Doeon Kim", "Dong-Jin Kim", "Seung-Lee Kim", "Dong-Joo Lee", "Yongseok Lee", "Byeong-Gon Park", "Richard W. Pogge", "Fumio Abe", "Richard Barry", "David P. Bennett", "Aparna Bhattacharya", "Hirosame Fujii", "Akihiko Fukui", "Ryusei Hamada", "Yuki Hirao", "Stela Ishitani Silva", "Yoshitaka Itow", "Rintaro Kirikawa", "Naoki Koshimoto", "Yutaka Matsubara", "Shota Miyazaki", "Yasushi Muraki", "Greg Olmschenk", "Clément Ranc", "Nicholas J. Rattenbury", "Yuki Satoh", "Takahiro Sumi", "Daisuke Suzuki", "Mio Tomoyoshi", "Paul J. Tristram", "Aikaterini Vandorou", "Hibiki Yama", "Kansuke Yamashita" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.EP" ]
Microlensing brown-dwarf companions in binaries Department of Physics, Chungbuk National University, Cheongju 28644, Republic of Korea, Korea Astronomy and Space Science Institute, Daejon 34055, Republic of Korea Korea University of Science and Technology (UST), 217 Gajeong-ro, Yuseong-gu, Daejeon, 34113, Republic of Korea Institute of Natural and Mathematical Science, Massey University, Auckland 0745, New Zealand Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA University of Canterbury, Department of Physics and Astronomy, Private Bag 4800, Christchurch 8020, New Zealand Max-Planck-Institute for Astronomy, Königstuhl 17, 69117 Heidelberg, Germany Department of Astronomy, Ohio State University, 140 W. 18th Ave., Columbus, OH 43210, USA Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Rehovot 76100, Israel Department of Astronomy, Tsinghua University, Beijing 100084, China School of Space Research, Kyung Hee University, Yongin, Kyeonggi 17104, Republic of Korea Institute for Space-Earth Environmental Research, Nagoya University, Nagoya 464-8601, Japan Code 667, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA Department of Astronomy, University of Maryland, College Park, MD 20742, USA Komaba Institute for Science, The University of Tokyo, 3-8-1 Komaba, Meguro, Tokyo 153-8902, Japan Instituto de Astrofísica de Canarias, Vía Láctea s/n, E-38205 La Laguna, Tenerife, Spain Department of Earth and Space Science, Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan Institute of Astronomy, Graduate School of Science, The University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015, Japan Department of Physics, The Catholic University of America, Washington, DC 20064, USA Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo, Sagamihara, Kanagawa 252-5210, Japan Sorbonne Université, CNRS, UMR 7095, Institut d'Astrophysique de Paris, 98 bis bd Arago, 75014 Paris, France Department of Physics, University of Auckland, Private Bag 92019, Auckland, New Zealand University of Canterbury Mt. John Observatory, P.O. Box 56, Lake Tekapo 8770, New Zealand As a part of the project aiming to build a homogeneous sample of binary-lens (2L1S) events containing brown-dwarf (BD) companions, we investigate the 2021 season microlensing data collected by the Korea Microlensing Telescope Network (KMTNet) survey. For this purpose, we first identify 2L1S events by conducting systematic analyses of anomalous lensing events. We then select candidate BD-companion events by applying the criterion that the mass ratio between the lens components is less than q_ th∼ 0.1. From this procedure, we find four binary-lens events including KMT-2021-BLG-0588, KMT-2021-BLG-1110, KMT-2021-BLG-1643, and KMT-2021-BLG-1770, for which the estimated mass ratios are q∼ 0.10, 0.07, 0.08, and 0.15, respectively. The event KMT-2021-BLG-1770 is selected as a candidate despite the fact that the mass ratio is slightly greater than q_ th because the lens mass expected from the measured short time scale of the event, ∼ 7.6 days, is small. From the Bayesian analyses, we estimate that the primary and companion masses are (M_1/M_⊙, M_2/M_⊙)= (0.54^+0.31_-0.24, 0.053^+0.031_-0.023) for KMT-2021-BLG-0588L, (0.74^+0.27_-0.35, 0.055^+0.020_-0.026) for KMT-2021-BLG-1110L, (0.73^+0.24_-0.17, 0.061^+0.020_-0.014) for KMT-2021-BLG-1643L, and (0.13^+0.18_-0.07, 0.020^+0.028_-0.011) for KMT-2021-BLG-1770L. 
It is estimated that the probabilities of the lens companions being in the BD mass range are 82%, 85%, 91%, and 59% for the individual events. For confirming the BD nature of the lens companions found in this and previous works by directly imaging the lenses from future high-resolution adaptive-optics (AO) followup observations, we provide the lens-source separations expected in 2030, which is the approximate year of the first AO light on 30 m class telescopes. Brown dwarf companions in binaries detected from the 2021 season high-cadence microlensing surveys Cheongho Han01 Youn Kil Jung02,03 Ian A. Bond04 (Leading authors) Sun-Ju Chung02, 05 Michael D. Albrow06 Andrew Gould07,08 Kyu-Ha Hwang02 Chung-Uk Lee02 Yoon-Hyun Ryu02 In-Gu Shin05 Yossi Shvartzvald09 Hongjing Yang10 Jennifer C. Yee05 Weicheng Zang05,10 Sang-Mok Cha02,11 Doeon Kim01 Dong-Jin Kim02 Seung-Lee Kim02 Dong-Joo Lee02 Yongseok Lee02,11 Byeong-Gon Park02 Richard W. Pogge08 (The KMTNet collaboration) Fumio Abe12 Richard Barry13 David P. Bennett13,14 Aparna Bhattacharya13,14 Hirosame Fujii12 Akihiko Fukui15,16 Ryusei Hamada17 Yuki Hirao18 Stela Ishitani Silva13,19 Yoshitaka Itow12 Rintaro Kirikawa17 Naoki Koshimoto17 Yutaka Matsubara12 Shota Miyazaki20 Yasushi Muraki12 Greg Olmschenk13 Clément Ranc21 Nicholas J. Rattenbury22 Yuki Satoh17 Takahiro Sumi17 Daisuke Suzuki17 Mio Tomoyoshi17 Paul J. Tristram23 Aikaterini Vandorou13,14 Hibiki Yama17 Kansuke Yamashita17 (The MOA Collaboration) August 12, 2023
§ INTRODUCTION
Because it does not depend on light from the lens, microlensing is well suited to finding and studying faint and dark astronomical objects. One scientifically important class of objects to which this trait has been successfully applied is extrasolar planets. With the proposals of <cit.> and <cit.>, extensive searches for extrasolar planets using the microlensing method have been carried out since the 1990s. Since the first discovery of a giant planet in 2003 by <cit.>, 200 microlensing planets have been reported according to the NASA Exoplanet Archive[https://exoplanetarchive.ipac.caltech.edu], making microlensing the third most productive planet-detection method after the transit and radial-velocity methods. Brown dwarfs (BDs) are another population of astronomical objects to whose detection microlensing is well suited. Microlensing BDs can be detected through two channels. The first channel is via a single-lens single-source (1L1S) event with a short time scale t_E.
The event time scale is related to the lens mass M as t_E = θ_E/μ; θ_E = (κ M π_rel)^1/2, and thus short time-scale events may be produced by BDs with masses lower than those of stars. Here θ_E represents the angular Einstein radius, μ is the relative lens-source proper motion, κ = 4G/(c^2 AU), π_rel = AU(D_L^-1 - D_S^-1) is the relative lens-source parallax, and D_L and D_S denote the distances to the lens and source, respectively. However, it is difficult to confirm the BD nature of a lens based on the event time scale alone, because the time scale depends additionally on μ and π_rel. The mass and distance to the lens can be unambiguously determined by measuring the extra observables of the angular Einstein radius θ_E and the microlens parallax π_E from the relations M = θ_E/(κ π_E); D_L = AU/(π_E θ_E + π_S). The microlens parallax is related to the relative lens-source parallax and Einstein radius by π_E = π_rel/θ_E <cit.>. For a 1L1S event, the probability of measuring the angular Einstein radius is very low because θ_E can be measured for only a very minor fraction of events in which the lens passes over the surface of the source, for example, the 1L1S events presented in <cit.>, <cit.>, and <cit.>. The probability of measuring the microlens parallax, which is generally measured from the deviation of the lensing light curve caused by the departure of the relative lens-source motion from rectilinear induced by the orbital motion of Earth, is even lower because the parallax-induced deviation in the lensing light curve is generally too small to be measured for a short time-scale BD event. The microlens parallax for a short time-scale event can be measured only under special observational circumstances, and there exist only three cases for which the nature of the single BD lens was confirmed from the mass determination by measuring the microlens parallax. The first case is OGLE-2007-BLG-224, for which π_E was measured from the subtle differences among the light curves constructed from observations using telescopes lying at multiple sites on Earth when the magnifications of the event were extremely high <cit.>. For the other two cases of OGLE-2015-BLG-1268 <cit.> and OGLE-2017-BLG-0896 <cit.>, π_E values were measured from simultaneous observations of the events using ground-based telescopes and the space-based Spitzer satellite. In the case of OGLE-2015-BLG-1482 <cit.>, which was also simultaneously observed using the Spitzer and ground-based telescopes, the light curve was almost equally well explained by two solutions, in which the lens was a very low-mass star with a mass 0.10 ± 0.02 M_⊙ according to one solution, and a BD with a mass 0.052 ± 0.008 M_⊙ according to the other solution, and thus the BD nature of the lens could not be confirmed. Another channel for detecting microlensing BDs is via a binary-lens single-source (2L1S) event. Compared to a 1L1S event, the analysis of a 2L1S event yields an additional constraint of the companion-to-primary mass ratio q. This constraint can be used to select candidate BD companions of binary lenses based on the fact that typical Galactic lensing events are produced by low-mass stars <cit.>, and thus companions with mass ratios q ≲ 0.1 are very likely to be BDs. Furthermore, the probability of measuring the Einstein radii for these events is high because the light curves of these events usually exhibit anomaly features resulting from source crossings over or approaches very close to caustics.
In these cases, the light curves are likely to be affected by finite-source effects, from which θ_E can be measured and the lens mass can be further constrained. In order to find BDs through the second channel, <cit.>, hereafter paper I, investigated the microlensing data collected during the 2016–2018 period by the high-cadence surveys and reported 6 binaries with candidate BD companions, including OGLE-2016-BLG-0890LB, MOA-2017-BLG-477LB, OGLE-2017-BLG-0614LB, KMT-2018-BLG-0357LB, OGLE-2018-BLG-1489LB, and OGLE-2018-BLG-0360LB. From continued analyses of the lensing events found during the 2018–2020 period, <cit.>, hereafter paper II, reported another 4 binaries with candidate BD companions, including KMT-2018-BLG-0321LB, KMT-2018-BLG-0885LB, KMT-2019-BLG-0297LB, and KMT-2019-BLG-0335LB. In this work, we report four additional candidate BD companions to binary lenses found from the inspection of the 2021 season microlensing data, including KMT-2021-BLG-0588LB, KMT-2021-BLG-1110LB, KMT-2021-BLG-1643LB, and KMT-2021-BLG-1770LB. The main scientific purpose of this and the previous works is to build a homogeneous sample of binary-lens events containing BD companions found from the KMTNet survey by applying a consistent criterion. The sample will be useful for future statistical analyses of BDs, such as the distributions of mass ratios and separations and the occurrence rate of star-BD binary pairs. For the presentation of the findings and analyses of the BD events, we organize the paper as follows. In Sect. <ref>, we describe the procedure of selecting candidate events produced by binary lenses possessing BD companions. In Sect. <ref>, we describe the data used in the analyses and the observations carried out to obtain the data. In Sect. <ref>, we start by explaining the common procedure applied to analyze the events and detail the analyses of the individual events in the following subsections: KMT-2021-BLG-0588L in Sect. <ref>, KMT-2021-BLG-1110L in Sect. <ref>, KMT-2021-BLG-1643L in Sect. <ref>, and KMT-2021-BLG-1770L in Sect. <ref>. In Sect. <ref>, we describe the procedure of specifying the source stars and estimating the Einstein radii of the individual events. In Sect. <ref>, we explain the Bayesian analyses conducted to estimate the physical lens parameters of the events and present the obtained parameters. In Sect. <ref>, we summarize the results from the analyses and discuss future followup observations that can confirm the BD nature of the lens companions reported in this work and those found from the previous analyses in papers I and II.
§ SELECTIONS OF BD CANDIDATES
The binary-lens events with BD companions were found from the inspection of the microlensing events detected in the 2021 season by the Korea Microlensing Telescope Network <cit.> survey. For a 2L1S event possessing a planetary lens companion, with a companion-to-primary mass ratio of order 10^-3 or less, the signal of the companion can, in general, be readily identified from its characteristic short-term anomaly feature in the lensing light curve <cit.>. For a 2L1S event with a BD companion, which has a mass ratio of order 10^-2, however, it is difficult to promptly identify the BD nature of the companion, because the lensing light curves are, in many cases, similar to those produced by binary lenses with approximately equal-mass components. In the searches for BD companions in binary lenses, therefore, we conducted systematic analyses of all anomalous lensing events detected by the KMTNet survey.
We selected events with BD companions by imposing the criterion of q ≲ 0.1 among the 2L1S events identified from the first-round analyses. We note that this criterion is the same as the one adopted in papers I and II, and thus the BD events presented in this and the previous works constitute a uniform sample. From this procedure, we identified four candidate BD-companion events including KMT-2021-BLG-0588, KMT-2021-BLG-1110, KMT-2021-BLG-1643, and KMT-2021-BLG-1770. In Table <ref>, we list the equatorial coordinates, (RA, DEC)_J2000, of the individual events together with the corresponding Galactic coordinates, (l, b), and I-band extinction, A_I, toward the field. Here the extinction values were adopted from the OGLE Internet archive <cit.>.[ http://ftp.astrouw.edu.pl/ogle/ogle3/ext/blg/] The event KMT-2021-BLG-1770 was picked out despite the fact that the estimated mass ratio between the lens components, q ∼ 0.15, was slightly greater than the adopted threshold mass ratio q_th ∼ 0.1, because the mass of the lens expected from the short time scale of the event, t_E ∼ 7.6 days, was low, and thus the probability for the mass of the companion to be in the BD mass regime was high. For this reason, this event is not a part of the uniformly selected sample for future statistical studies, although its analysis is presented in this work. For the identified candidate events, we then checked whether the events were additionally observed by other lensing surveys, so that such data could be included in the analyses. We found that KMT-2021-BLG-0588 was additionally observed by the Microlensing Observations in Astrophysics <cit.> group, who referred to the event as MOA-2021-BLG-139, and that the other events were observed solely by the KMTNet group. For KMT-2021-BLG-0588, we use the KMTNet ID reference because the KMTNet group first found the event.
§ OBSERVATIONS AND DATA
The KMTNet group has carried out a high-cadence survey since 2016 by monitoring stars lying toward the Galactic bulge field in search of light variations of stars caused by microlensing. The survey group utilizes three wide-field telescopes, which are distributed over three sites in the Southern Hemisphere for continuous and dense coverage of lensing events. The sites of the individual telescopes are the Siding Spring Observatory in Australia (KMTA), the Cerro Tololo Inter-American Observatory in Chile (KMTC), and the South African Astronomical Observatory in South Africa (KMTS). The telescopes are identical, and each telescope, with a 1.6 m aperture, is equipped with a camera that yields a 4 deg^2 field of view. KMTNet observations were mainly conducted in the I band, which is relatively less affected by extinction, and about one tenth of the images were acquired in the V band for the source color measurements of lensing events. Photometry of the events was conducted using the automated pySIS pipeline <cit.>, which is based on the difference image method <cit.>. For the color measurements of the source stars, we additionally used the pyDIA code <cit.> to construct a set of the I- and V-band light curves and color-magnitude diagrams (CMDs) of stars that lie in the neighborhoods of the source stars. For the events analyzed in this work, we conducted a rereduction of the data to obtain optimized photometry data after the events were selected as BD candidates. We normalized the error bars of the data to make them consistent with the scatter of the data and to make χ^2 per degree of freedom (dof) for each data set become unity.
In the error-bar normalization process, we used the routine described in <cit.>. Among the four analyzed events, the lensing event KMT-2021-BLG-0588 was additionally observed by the MOA survey. The observations of the event by the MOA survey were done with the use of the 1.8 m telescope of the Mt. John Observatory in New Zealand. The camera mounted on the telescope yields a 2.2 deg^2 field of view. The MOA observations were mostly conducted in the customized MOA-R band, and the photometry was done using the MOA pipeline. Normalization of the MOA data set was done using the same routine that was applied to the KMTNet data sets. [The photometry data are available at the following site: http://astroph.chungbuk.ac.kr/∼cheongho/download.html.]
§ ANALYSES
The events were analyzed under the common interpretation of the lens-system configuration that the lenses are binaries, because the light curves of all events exhibit caustic features that arise due to the multiplicity of the lens masses. Under the assumption of a rectilinear relative lens-source motion, the lensing light curve of a 2L1S event is described by 7 basic lensing parameters. Among these parameters, the first three parameters (t_0, u_0, t_E) describe the lens-source approach, and the individual parameters represent the time of the closest lens-source approach, the lens-source separation at t_0, and the event time scale, respectively. Another three parameters (s, q, α) describe the binarity of the lens, and the individual parameters describe the projected separation (scaled to θ_E) and mass ratio between the lens components, and the angle between the source trajectory and the axis connecting the binary lens components. The last parameter ρ represents the ratio of the angular source radius θ_* to the Einstein radius, ρ = θ_*/θ_E (normalized source radius), and it describes the deformation of the light curve during the caustic crossings of a source caused by finite-source effects. A 2L1S lensing light curve can deviate from the standard form due to the departure of the relative lens-source motion from rectilinear. The first cause of such a deviation is the microlens-parallax effect, which is caused by the positional change of the observer due to the orbital motion of Earth around the Sun <cit.>. The second cause is the lens-orbital effect, which is caused by the change of the lens positions due to the orbital motion of the binary lens <cit.>. These higher-order effects induce subtle deviations in the lensing light curve from the standard form, and describing these deviations requires additional lensing parameters in the modeling. We checked these higher-order effects by conducting additional modeling in which the corresponding parameters were included. The two parameters describing the parallax effect are (π_E,N, π_E,E), which represent the north and east components of the microlens-parallax vector π_E = (π_rel/θ_E)(μ/μ), respectively. Under the assumption that the positional change of the lens by the orbital motion is minor, the lens-orbital effect is described by two parameters (ds/dt, dα/dt), which denote the annual change rates of the binary separation and the source trajectory angle, respectively. It was found that secure detections of the higher-order effects were difficult for KMT-2021-BLG-0588, KMT-2021-BLG-1110, and KMT-2021-BLG-1770, for which the event time scales are less than 40 days.
For KMT-2021-BLG-1643 with t_E ∼ 105 days, the higher-order effects are minor, but the amplitude of the parallax parameters yielded a useful constraint on the physical lens parameters. See Sect. <ref> for the detailed discussion on the parallax constraint. In the 2L1S modeling, we searched for a lensing solution, which refers to the set of the lensing parameters that best depicts the observed lensing light curve. In the first round of modeling, we divided the lensing parameters into two groups: the binary parameters (s, q) of the first group were found via a grid approach with multiple initial values of α, and the other lensing parameters of the second group were searched for by minimizing χ^2 using the Markov Chain Monte Carlo (MCMC) method with an adaptive step size Gaussian sampler <cit.>. In the second round, we refined the local solutions identified from the first-round modeling by further reducing the χ^2 value using the MCMC method. We adopt this two-step approach because the change of the lensing magnification with the variation of the grid parameters is discontinuous, while the magnification changes smoothly with the variation of the downhill parameters. Furthermore, the Δχ^2 map obtained from the first-round grid search enables us to identify local solutions that are caused by various types of degeneracy. We consider the limb-darkening variation of the source surface brightness in the computation of finite-source magnifications by adopting the linear limb-darkening coefficients of <cit.> corresponding to the stellar types of the source stars. In the following subsections, we present the detailed analyses conducted for the individual events.
§.§ KMT-2021-BLG-0588
Figure <ref> shows the lensing light curve of the event KMT-2021-BLG-0588. The source, with an I-band baseline magnitude I_base ∼ 19.11, was in the KMT32 field, toward which observations were conducted with a 2.5 hr cadence. The source flux magnification induced by lensing was first found by the KMTNet group on 2021 April 26, which corresponds to the abridged heliocentric Julian date HJD′ ≡ HJD - 2450000 = 9331, when the source was brighter than the baseline by Δ I ∼ 0.46 mag. The light curve exhibited a strong anomaly, which peaked at HJD′ ∼ 9354.25 with a strong deviation of Δ I ∼ 3 mag from the baseline 1L1S model. The MOA group independently found the event on 2021 May 22 (HJD′ = 9357), which was about 3 days after the strong peak. The zoom-in view of the strong peak, which was covered by the combination of the MOA and KMTA data sets, is shown in the top panel of Figure <ref>. From the sharp rise and fall, the strong peak is likely to be produced by the source star's crossing over the tip of a caustic formed by a binary lens. In Table <ref>, we list the lensing parameters of the solutions found from the 2L1S modeling of the light curve together with the χ^2 values of the fits and degrees of freedom (dof). We identified a pair of local solutions, in which one solution has a binary separation s < 1 (close solution) and the other solution has a separation s > 1 (wide solution). Although the solutions are designated as the "close" and "wide" solutions, we note that the similarity between the model curves of the two solutions is caused by an accidental degeneracy rather than the well-known close–wide degeneracy, which arises due to the similarity between the central caustics induced by a pair of solutions with separations s and 1/s <cit.>. We further discuss the cause of the degeneracy in the following paragraph.
It is found that the wide solution with s ∼ 1.17 yields a better fit than the close solution with s ∼ 0.77 by Δχ^2 = 71.8, and thus the degeneracy is resolved with strong statistical confidence. In Figure <ref>, we draw the model curve of the wide solution in the bottom panel, which shows the whole view of the light curve, and plot the model curves and residuals of both the close and wide solutions in the upper panels, which show the zoom-in view of the region around the strong peak. According to the wide solution, the estimated event time scale and mass ratio between the lens components are t_E ∼ 39 days and q ∼ 0.10, respectively. From the fact that the time scale is in the range of events produced by stellar lenses, together with the fact that the mass ratio is low, the probability of the binary-lens companion being a BD is high. The normalized source radius, ρ ∼ 0.7× 10^-3, was securely measured from the analysis of the strong peak, which was affected by finite-source effects. The lens-system configurations of the close and wide solutions are presented in the two insets of the bottom panel of Figure <ref>. According to the wide solution, the binary lens forms a single six-sided resonant caustic, and the strong peak was produced by the source passage through the tip of the lower left cusp of the caustic. According to the close solution, on the other hand, the lens induces 3 sets of caustics, in which a single central caustic around the primary lens is detached from the two peripheral caustics, and the strong peak was generated by the source crossing over the slim cusp extending from the lower left cusp of the central caustic. The two sets of caustics of the close and wide solutions do not appear to be similar to each other, and this suggests that the degeneracy between the two solutions is accidental.
§.§ KMT-2021-BLG-1110
We present the light curve of the lensing event KMT-2021-BLG-1110 in Figure <ref>. The lensing magnification of the source, which had a baseline magnitude I_base ∼ 19.52 before lensing, was found by the KMTNet group on 2021 June 2 (HJD′ = 9367), when the source was brighter than the baseline by Δ I ∼ 0.5 mag. The source lies in the overlapping region of the KMTNet prime fields BLG01 and BLG41, toward which observations were done with a 0.5 hr cadence for each field, and a 0.25 hr cadence in combination. The light curve is characterized by double spikes appearing at t_1 ∼ 9370.85 and t_2 ∼ 9371.56. The rising and falling sides of both spikes were densely and continuously resolved from the high-cadence observations conducted with the use of the three KMTNet telescopes. The first spike was resolved by the KMTC data, and the second one was covered by the combined data from KMTS and KMTC. The spike features are very likely to be produced by caustic crossings of the source, and thus we modeled the light curve under the 2L1S interpretation. The modeling yielded two local solutions: one with s < 1 (close solution) and the other with s > 1 (wide solution). It is found that the wide solution is preferred over the close solution by Δχ^2 = 33.8, which is large enough to resolve the degeneracy between the solutions. The model curve of the wide solution is drawn in the bottom panel of Figure <ref>, and the model curves and residuals of both the close and wide solutions in the region around the two peaks are presented in the upper panels. The similarity between the models of the two solutions is caused by the classic close–wide degeneracy.
The lensing parameters of the solutions are listed in Table <ref> together with the values of χ^2/dof. The binary lensing parameters are (s, q)_close ∼ (0.44, 0.07) for the close solution, and (s, q)_wide ∼ (2.43, 0.07) for the wide solution. From the fact that the estimated mass ratio q ∼ 0.07 between the lens components is low, together with the fact that the event time scale t_E ∼ 27–29 days is a typical value for a stellar lens event, the companion of the lens is a strong BD candidate. The normalized source radius, ρ ∼ 0.79× 10^-3 for the wide solution, is precisely measured from the well-resolved spike features. In the two insets of the bottom panel of Figure <ref>, we present the lens-system configurations of the close and wide solutions. Both solutions result in central caustics of similar shape, in which the caustic is elongated along the binary-lens axis. The source passed through the back-end side of the caustic at an acute source trajectory angle of ∼ 69^∘ with respect to the binary axis. According to the model, the two spikes were produced by the successive passages of the source through the on-axis cusp and the upper off-axis cusp of the caustic.
§.§ KMT-2021-BLG-1643
The lensing light curve of KMT-2021-BLG-1643 is presented in Figure <ref>. The event was found in its early stage by the KMTNet survey on 2021 June 8 (HJD′ = 9374), at which time the source was brighter than the baseline magnitude I_base = 18.91 by Δ I ∼ 1.2 mag. The source lies in the KMTNet BLG04 field, toward which the event was monitored with a 1 hr cadence. The event exhibited a pair of caustic spikes, which occurred at HJD′ ∼ 9401.1 and 9403.4, and a weak bump, which was centered at HJD′ ∼ 9409. The region between the two caustic spikes exhibited a characteristic U-shape pattern, indicating that the spikes occurred when the source entered and exited a caustic. The first caustic spike was not resolved because the sky at the KMTA site was clouded out, but the second caustic crossing was partially covered by two KMTS and one KMTC data points. From the 2L1S modeling of the light curve, we found a pair of solutions resulting from the close–wide degeneracy. The binary lensing parameters are (s, q)_close ∼ (0.69, 0.08) and (s, q)_wide ∼ (1.52, 0.08) for the close and wide solutions, respectively. We list the full lensing parameters of the two solutions in Table <ref>, and the model curves and residuals are presented in Figure <ref>. From the comparison of the fits, it is found that the wide solution is preferred over the close solution by Δχ^2 = 38.3, indicating that the degeneracy is lifted with a fairly strong confidence level. Despite the fact that the caustic exit was partially covered by only a small number of data points, the normalized source radius, ρ ∼ 0.3× 10^-3, could be constrained. The measured event time scale, t_E ∼ 105 days, comprises an important portion of a year, and thus it may be possible to constrain the microlens-parallax parameters. We conducted an additional modeling considering the higher-order effects. Figure <ref> shows the scatter plot of points in the MCMC chain on the π_E,E–π_E,N parameter plane. It was found that the improvement of the model fit with the inclusion of the higher-order effects is very minor, but the amplitude of the scatter plot provided a constraint on the physical lens parameters. We present the configurations of the close and wide lens systems in the two insets of the bottom panel of Figure <ref>. Similar to the case of KMT-2021-BLG-1110, the source passed the back-end side of the caustic.
The spike features were produced by the source passage through the lower left cusp of the caustic, and the weak bump was generated by the source approach close to the left-side on-axis cusp of the caustic.
§.§ KMT-2021-BLG-1770
Figure <ref> shows the light curve of the lensing event KMT-2021-BLG-1770. The event was found by the KMTNet group on 2021 July 16 (HJD′ ∼ 9406). The source, which had a baseline magnitude I_base = 19.06, was in the KMTNet prime field BLG03, for which images were taken with a 0.5 hr cadence. Most of this field overlaps with the region covered by the BLG43 field, but the event lies in the offset region that was not covered by the BLG43 field. In our analysis, we do not use the KMTA data set due to its low photometric quality. Similar to the event KMT-2021-BLG-1643, the light curve of KMT-2021-BLG-1770 is characterized by a pair of caustic spikes and a following weak bump. The first caustic spike, which occurred at HJD′ = 9412.2, was not covered, but the second spike, which occurred at HJD′ = 9412.4, and the U-shape region between the two spikes were resolved by the combination of the KMTS and KMTC data sets. The weak bump is centered at HJD′ ∼ 9414, which was about 2 days after the caustic spikes. From the analyses of the light curve, we identified two local solutions, in which one solution has a binary separation s < 1 (close solution) and the other has a separation s > 1 (wide solution). The model curves of the solutions are drawn over the data points, and the residuals from the models are shown in Figure <ref>. The binary lensing parameters of the individual solutions are (s, q)_close ∼ (0.81, 0.15) and (s, q)_wide ∼ (1.14, 0.19). As stated, the event was chosen as a BD candidate despite the fact that the mass ratio between the lens components is slightly greater than the threshold mass ratio q_th = 0.1, because the event time scale, t_E ∼ 7 days, is substantially shorter than the several-week time scales of typical lensing events. The normalized source radius, ρ ∼ (6-7)× 10^-3, was measured from analyzing the caustic-exit part of the light curve. The lens-system configurations of the close and wide solutions are presented in the two insets of the bottom panel of Figure <ref>. It is found that the configurations of the close and wide solutions are very similar to those of the corresponding solutions of KMT-2021-BLG-0588. That is, the caustic spikes were generated by the passage of the source through the slim bridge connecting the central and peripheral caustics according to the close solution, and by the source passing through the tip of the lower left cusp of the six-sided resonant caustic according to the wide solution. The difference between the solutions of the two events is that the close solution is preferred over the wide solution by Δχ^2 = 8.9 in the case of KMT-2021-BLG-1770, while the wide solution yields a better fit than the close solution in the case of KMT-2021-BLG-0588. For the same reason mentioned in Sect. <ref>, the similarity between the model curves of the close and wide solutions is caused by an accidental degeneracy rather than a close–wide degeneracy.
§ SOURCE STARS AND EINSTEIN RADII
In this section, we specify the source stars of the events. Specifying the source star of a caustic-crossing 2L1S event is important to estimate the angular Einstein radius from the relation θ_E = θ_*/ρ, where the normalized source radius ρ is measured by analyzing the caustic-crossing parts of the light curve, and the angular source radius θ_* can be deduced from the source type.
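As a concrete illustration of how these quantities combine, the short Python sketch below evaluates θ_E = θ_*/ρ together with the proper-motion relation μ = θ_E/t_E used later in the paper. The θ_* value is an assumed round number chosen for illustration; ρ and t_E are the KMT-2021-BLG-0588 wide-solution values quoted above.

```python
# Illustrative sketch (not the authors' code) of theta_E = theta_*/rho and
# mu = theta_E/t_E. theta_star is an assumed value; rho and t_E are the
# KMT-2021-BLG-0588 wide-solution values quoted in the text.

theta_star_mas = 0.63e-3   # angular source radius theta_* [mas] (assumed)
rho            = 0.7e-3    # normalized source radius (from the light curve)
t_E_days       = 39.0      # event time scale [days]

theta_E_mas = theta_star_mas / rho              # angular Einstein radius [mas]
mu_mas_yr   = theta_E_mas / t_E_days * 365.25   # relative proper motion [mas/yr]

print(f"theta_E ~ {theta_E_mas:.2f} mas, mu ~ {mu_mas_yr:.1f} mas/yr")
```

With these inputs the sketch returns θ_E ∼ 0.9 mas, consistent with the value quoted for this event in the physical-parameter discussion below.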
We specified the source stars of the individual events by measuring their de-reddened colors and magnitudes. To estimate the de-reddened color and magnitude, (V-I, I)_0, from the instrumental values, (V-I, I)_s, we applied the <cit.> method, in which the centroid of the red giant clump (RGC) is used as a reference for the calibration. Following the routine procedure of the method, we first estimated the instrumental I- and V-band magnitudes of the source by regressing the photometry data of the individual passbands processed using the pyDIA code, and placed the source in the instrumental CMD of stars around the source constructed using the same pyDIA code. We then measured the offsets in color and magnitude, Δ (V-I, I), of the source from the RGC centroid, and estimated the de-reddened color and magnitude as (V - I, I)_s,0 = (V - I, I)_RGC,0 + Δ (V - I, I), where (V - I, I)_RGC,0 are the de-reddened color and magnitude of the RGC centroid known from <cit.> and <cit.>, respectively. Figure <ref> shows the positions of the source (blue dot) and the RGC centroid (red dot) in the instrumental CMDs of the individual events. In Table <ref>, we list the values of (V-I, I)_s, (V-I, I)_RGC, (V-I, I)_RGC,0, and (V-I, I)_s,0 estimated from the procedure described in the previous paragraph. According to the estimated colors and magnitudes, the spectral types of the source stars are G0V, G9V, K3V, and G9V for KMT-2021-BLG-0588, KMT-2021-BLG-1110, KMT-2021-BLG-1643, and KMT-2021-BLG-1770, respectively. With the measured source color and magnitude, we estimated the angular radius of the source star by first converting the V-I color into V-K color using the <cit.> relation, and then deducing θ_* from the <cit.> relation between (V-K, V) and θ_*. With the measured source radii, the angular Einstein radii were estimated using the relation in Equation (<ref>). We list the estimated values of θ_* and θ_E of the individual events in the bottom two lines of Table <ref>. Also marked in Figure <ref> are the positions of the blend (green dots) in the CMDs of the individual events. We list the measured values of the color and magnitude of the blend, (V-I, I)_b, in Table <ref>. Besides KMT-2021-BLG-0588, for which the blended light is similar to the flux of the source, it is found that the blended fluxes are substantially greater than the source fluxes. In order to check the possibility that the lens is the main origin of the blended flux, we measured the astrometric offset δθ between the centroid of the source measured at the peak time of the lensing magnification and that measured at the baseline. If the lens were the main origin of the blended flux, the offset would be very small because the relative lens-source proper motions are μ < 10 mas/yr for all events. In the case that the origin of the blended flux is a nearby star, which is typically separated from the source by of order 100 mas, the resulting astrometric offset would be substantially greater than the typical astrometric precision of order 10 mas. In Table <ref>, we list the measured centroid offsets of the individual events. For all events, it is found that the astrometric offsets are much greater than the measurement precision, and this indicates that the origins of the blended light are nearby stars rather than the lenses.
§ PHYSICAL LENS PARAMETERS
The mass M and distance D_L to the lens can be constrained by measuring the lensing observables t_E, θ_E, and π_E.
The event time scale t_E is the basic observable that is measurable for general lensing events, and the angular Einstein radius θ_E is another observable that is measurable for events with light curves affected by finite-source effects. These two observables are related to the physical lens parameters by the relations in Equation (<ref>). With the measurement of the extra observable π_E, the physical lens parameters would be uniquely determined from the relations in Equation (<ref>). For the analyzed events, the observables t_E and θ_E were measured, but π_E was not securely measured for any of the events. Without the constraint of π_E, we estimated the physical lens parameters by conducting Bayesian analyses of the events using models of the physical and dynamical distributions and mass function of objects in our Galaxy, together with the constraints provided by the measured blended flux. In the first step of the Bayesian analysis, we conducted a Monte Carlo simulation to generate a large number of artificial lensing events. For each artificial event, the distances of the lens and source and their relative proper motion were assigned using a Galactic model, and the mass of the lens was assigned using a model mass function. In the simulation, we adopted the Galactic model of <cit.> and the mass function model of <cit.>. In the mass function, we included white-dwarf remnants but excluded black holes and neutron stars. In the second step, we computed the lensing observables (t_E,i, θ_E,i) corresponding to the assigned values (M, D_L, D_S, μ) of each artificial event using the relations in Equation (<ref>). In the final step, we constructed the Bayesian posteriors of the lens mass and distance by imposing a weight w_i = exp(-χ_i^2/2) on each event. Here the χ^2 value was calculated as χ_i^2 = [(t_E,i - t_E)/σ(t_E)]^2 + [(θ_E,i - θ_E)/σ(θ_E)]^2, where [t_E, σ(t_E)] and [θ_E, σ(θ_E)] represent the measured values and uncertainties of the observables t_E and θ_E, respectively. For the event KMT-2021-BLG-1643 with a long event time scale, we imposed the π_E constraint by including an additional term ∑_j=1^2 ∑_k=1^2 b_j,k (π_E,j,i - π_E,j)(π_E,k,i - π_E,k), in which b_j,k denotes the inverse covariance matrix of the measured parallax parameters, on the right side of Eq. (<ref>). Besides the constraints from the lensing observables, we additionally imposed the blending constraint in the Bayesian analyses. This constraint is provided by the fact that the flux from the lens comprises a portion of the total blending flux, and thus the lens flux should be less than the total blending flux. For the imposition of this constraint, we calculated the lens brightness as I_L = M_I,L + 5 log(D_L/pc) - 5 + A_I,L, where M_I,L denotes the absolute I-band magnitude corresponding to the lens mass, and A_I,L is the extinction to the lens lying at a distance D_L. The extinction was modeled as A_I,L = A_I,tot[ 1 - exp(-|z|/h_z,dust)], where A_I,tot denotes the total extinction toward the field, h_z,dust = 100 pc is the adopted vertical scale height of dust, and z = D_L sin b + z_0 and z_0 = 15 pc represent the vertical positions of the lens and the Sun above the Galactic plane, respectively. The values of A_I,tot for the individual events are listed in Table <ref>. It was found that the blending constraint had important effects on the determined physical parameters of the events KMT-2021-BLG-0588 and KMT-2021-BLG-1643, for which the lenses are expected to be located relatively close to the Sun based on their large Einstein radii. Below we discuss this issue in more detail.
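For illustration, the following Python sketch mimics the weighting scheme just described. The priors are crude stand-ins for the quoted Galactic and mass-function models, and the measurement uncertainties are assumed values; only the relations of Equation (<ref>) and the weight w_i = exp(-χ_i^2/2) are taken from the text.

```python
# Minimal sketch of the Bayesian weighting described above. The lognormal
# priors are placeholders, not the paper's Galactic/mass-function models.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
kappa = 8.144  # mas / M_sun, kappa = 4G/(c^2 AU)

# Stand-in draws for artificial events (assumed, for illustration only):
M      = rng.lognormal(np.log(0.3), 0.8, n)    # lens mass [M_sun]
pi_rel = rng.lognormal(np.log(0.03), 0.9, n)   # relative parallax [mas]
mu     = rng.lognormal(np.log(6.0), 0.5, n)    # proper motion [mas/yr]

theta_E_i = np.sqrt(kappa * M * pi_rel)        # theta_E = (kappa M pi_rel)^(1/2)
t_E_i     = theta_E_i / mu * 365.25            # t_E = theta_E / mu, in days

# Measured observables (KMT-2021-BLG-0588-like; uncertainties assumed):
t_E_obs, sig_t_E         = 39.0, 2.0
theta_E_obs, sig_theta_E = 0.90, 0.09

chi2 = ((t_E_i - t_E_obs) / sig_t_E)**2 + ((theta_E_i - theta_E_obs) / sig_theta_E)**2
w = np.exp(-chi2 / 2.0)  # Bayesian weight of each artificial event

# Weighted posterior median and 16th/84th percentiles of the lens mass:
order = np.argsort(M)
cdf = np.cumsum(w[order]) / w[order].sum()
lo, med, hi = (M[order][np.searchsorted(cdf, p)] for p in (0.16, 0.5, 0.84))
print(f"M_1 = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f}) M_sun")
```

In the actual analysis the blending constraint additionally zeroes the weight of any artificial event whose implied lens brightness I_L exceeds the measured blended flux.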
In Figures <ref> and <ref>, we present the Bayesian posteriors of the mass of the binary-lens companion and the distance to the lens system, respectively. The estimated values of the primary (M_1) and companion (M_2) masses, distance, and projected separation between the lens components (a_⊥ = s θ_E D_L) are listed in Table <ref>. For each parameter, the median value was adopted as a representative value, and the upper and lower ranges of the uncertainty were chosen as the 16th and 84th percentiles of the posterior distribution, respectively. According to the estimated masses, it is found that the masses of the lens companions are well within the BD mass range 0.012 < M_2/M_⊙ ≤ 0.076 (or 13 < M_2/M_J ≤ 80), although there is some variation of the primary masses, which lie in the mass range of main-sequence stars with spectral types from K to M. In Table <ref>, we list the probabilities for the companions of the individual lenses being in the BD mass range, P_BD. It is found that the probabilities are greater than 59% in all cases of the events. For KMT-2021-BLG-1770L, the mass of the primary is so small that it can be a BD as well, with a probability of P_BD ∼ 35%. In this case, the lens is a BD binary like OGLE-2009-BLG-151L, OGLE-2011-BLG-0420L <cit.>, OGLE-2016-BLG-1266L <cit.>, OGLE-2016-BLG-1469L <cit.>, MOA-2016-BLG-231L <cit.>, and OGLE-2017-BLG-1038L <cit.>. In Table <ref>, we list the probabilities of the lenses being in the disk, P_disk, and bulge, P_bulge. For the events KMT-2021-BLG-0588 and KMT-2021-BLG-1643, it is very likely that the lenses lie in the disk, while the lens of KMT-2021-BLG-1770 is likely to lie in the bulge. For KMT-2021-BLG-1110, on the other hand, the disk and bulge probabilities are approximately the same. It is found that the constraint on the lens location comes mainly from the estimated radius of the Einstein ring. For the events KMT-2021-BLG-0588 and KMT-2021-BLG-1643, the respective Einstein radii are θ_E ∼ 0.90 mas and ∼ 1.08 mas, which are approximately two times bigger than the typical Einstein radius of ∼ 0.5 mas for an event produced by a low-mass stellar lens with a mass M ∼ 0.3 M_⊙ lying about halfway between the Sun and a bulge source. By contrast, the Einstein radius θ_E ∼ 0.16 mas of KMT-2021-BLG-1770 is substantially smaller than the typical value, and thus P_bulge is substantially higher than P_disk. The Einstein radius θ_E ∼ 0.58 mas of KMT-2021-BLG-1110 is close to the typical value, and thus P_disk and P_bulge are approximately the same. In the posterior distributions presented in Figures <ref> and <ref>, we mark the contributions of the disk and bulge lens populations by blue and red curves, respectively.
§ SUMMARY AND DISCUSSION
Following the works in papers I and II, we reported the BD companions in binary lenses found from the inspection of the microlensing data collected in the 2021 season by the high-cadence surveys, including KMT-2021-BLG-0588LB, KMT-2021-BLG-1110LB, KMT-2021-BLG-1643LB, and KMT-2021-BLG-1770LB. Modeling the light curve of each event yielded a pair of solutions with projected separations smaller and greater than the Einstein radius, but the degeneracy between the solutions was resolved with a strong confidence level except for KMT-2021-BLG-1770, for which the resolution of the degeneracy was less clear than for the others.
From the Bayesian analyses conducted with the constraints provided by the observables of the event time scale and Einstein radius together with the constraint from the blended light, it was estimated that the masses of the primary and companion of the individual events are (M_1/M_⊙, M_2/M_⊙)= (0.54^+0.31_-0.24, 0.053^+0.031_-0.023) for KMT-2021-BLG-0588L, (0.74^+0.27_-0.35, 0.055^+0.020_-0.026) for KMT-2021-BLG-1110L, (0.73^+0.24_-0.17, 0.061^+0.020_-0.014) for KMT-2021-BLG-1643L, and (0.13^+0.18_-0.07, 0.020^+0.028_-0.011) for KMT-2021-BLG-1770L. The estimated masses of the binary companions were well within the BD mass range, although there was some variation of the primary masses, which were in the mass range of main-sequence stars with spectral types from K to M. The probabilities of the lens companions being in the BD mass range were estimated as 82%, 85%, 91%, and 59% for the individual events. The BD nature of the lens companions presented in this work and papers I and II can be confirmed by directly imaging the lenses from future high-resolution adaptive-optics (AO) followup observations when the lenses are separated from the source stars <cit.>. For these followup observations, we compute the lens-source separations Δθ_2030 expected in 2030, which is the approximate year of the first AO light on 30 m class telescopes. In Table <ref>, we list the relative lens-source proper motions, expected lens-source separations, and K-band source magnitudes of the BD events reported in this work and papers I and II. The K-band source magnitude was estimated as K = I_s,0 + (V-I)_0 - (V-K)_0 + A_I/7, and the separation is estimated as Δθ_2030 = μ Δ t, where the relative lens-source proper motion is computed by μ = θ_E/t_E and Δ t indicates the time gap between the peak of the event and the year 2030. We note that Δθ_2030 of the event OGLE-2017-BLG-0614 is not listed because the Einstein radius and the resulting proper motion could not be measured, and that only lower limits are listed for KMT-2018-BLG-0321 and KMT-2018-BLG-0885 because only the lower limits of θ_E were constrained for these events. From the table, one finds that the separations are greater than 30 mas for all events with measured proper motions, and, except for the two events KMT-2019-BLG-0335 and KMT-2021-BLG-1643, the separations are greater than ∼ 50 mas, which will be adequate for the clear resolution of the lens from the source. By comparing the relative lens-source proper motion estimated from the model with the value measured from followup AO observations, one can confirm the solution. Furthermore, from the stellar type of the primary lens, which comprises most of the flux from the lens, the approximate mass of the lens can be estimated. This, together with the estimated mass ratio, enables one to confirm the BD nature of the lens companion. We note that this test of the presented solutions will be most useful for events for which the relative proper motion is measured with a relative accuracy better than 10%. Work by C.H. was supported by the grants of National Research Foundation of Korea (2019R1A2C2085965). This research has made use of the KMTNet system operated by the Korea Astronomy and Space Science Institute (KASI) at three host sites of CTIO in Chile, SAAO in South Africa, and SSO in Australia. Data transfer from the host site to KASI was supported by the Korea Research Environment Open NETwork (KREONET). This research was supported by the Korea Astronomy and Space Science Institute under the R&D program (Project No.
2023-1-832-03) supervised by the Ministry of Science and ICT. The MOA project is supported by JSPS KAKENHI Grant Number JP24253004, JP26247023, JP23340064, JP15H00781, JP16H06287, JP17H02871 and JP22H00153. J.C.Y., I.G.S., and S.J.C. acknowledge support from NSF Grant No. AST-2108414. Y.S. acknowledges support from BSF Grant No 2020740. [Alard & Lupton(1998)]Alard1998 Alard, C., & Lupton, R. H. 1998, , 503, 325 [Albrow et al.(2009)]Albrow2009 Albrow, M., Horne, K., Bramich, D. M., et al. 2009, , 397, 2099 [Albrow(2017)]Albrow2017 Albrow, M. 2017, MichaelDAlbrow/pyDIA: Initial Release on Github,Versionv1.0.0, Zenodo, doi:10.5281/zenodo.268049 [Albrow et al.(2018)]Albrow2018 Albrow, M. D., Yee, J. C., Udalski, A., et al. 2018, , 858, 107 [An(2005)]An2005 An, J. H. 2005, , 356, 1409 [Bensby et al.(2013)]Bensby2013 Bensby, T. Yee, J.C., Feltzing, S. et al. 2013, , 549, A147 [Bessell & Brett(1988)]Bessell1988 Bessell, M. S., & Brett, J. M. 1988, , 100, 1134 [Bond et al.(2001)]Bond2001 Bond, I. A., Abe, F., Dodd, R. J., et al. 2001, , 327, 868 [Bond et al.(2004)]Bond2004 Bond, I. A., Udalski, A., Jaroszyński, M., et al. 2004, , 606, L155 [Choi et al.(2013)]Choi2013 Choi, J. -Y., Han, C., Udalski, A., et al. 2013, , 768, 129 [Chung et al.(2017)]Chung2017 Chung, S. -J., Zhu, W., Udalski, A., et al. 2017, , 838, 154 [Chung et al.(2019)]Chung2019 Chung, S.-J., Gould, A., Skowron, J., et al. 2019, , 871, 179 [Claret(2000)]Claret2000 Claret, A. 2000, , 363, 1081 [Dominik(1998)]Dominik1998 Dominik, M. 1998, , 329, 361 [Dominik(1999)]Dominik1999 Dominik, M. 1999, , 349, 108 [Doran & Mueller(2004)]Doran2004 Doran, M., & Mueller, C. M. 2004, JCAP, 09, 003 [Gould(1992)]Gould1992a Gould, A. 1992, , 392, 442 [Gould & Loeb(1992)]Gould1992b Gould, A., & Loeb, A, 1992, , 396, 104 [Gould(2000)]Gould2000 Gould, A. 2000, , 542, 785 [Gould et al.(2009)]Gould2009 Gould, A., Udalski, A., Monard, B., et al. 2009, , 698, L147 [Gould et al.(2022a)]Gould2022a Gould, A., Han, C., Zang, W., et al. 2022a, , 664, A13 [Gould et al.(2022b)]Gould2022b Gould, A., Jung, Y. K., Hwang, K.-H., et al. 2022b, JKAS, 55, 173 [Griest & Safizadeh(1998)]Griest1998 Griest, K., & Safizadeh, N. 1998, , 500, 37 [Han & Gould(2003)]Han2003 Han, C., & Gould, A. 2003, , 592, 172 [Han et al.(2017)]Han2017 Han, C., Udalski, A., Sumi, T., et al. 2017, , 843, 59 [Han et al.(2020)]Han2020 Han, C., Lee, C.-U., Udalski, A., et al. 2020, , 159, 134 [Han et al.(2022)]Han2022 Han, C., Ryu, Y.-H., Shin, I.-G., et al. 2022, , 667, A64 [Han et al.(2023)]Han2023 Han, C., Jung Y. K., Kim, D., et al. 2023, , 675, A71 [Jung et al.(2018)]Jung2018 Jung, Y. K., Udalski, A., Gould, A., et al. 2018, , 155, 219 [Jung et al.(2021)]Jung2021 Jung, Y. K., Han, C., Udalski, A., et al. 2021, , 161, 293 [Kervella et al.(2004)]Kervella2004 Kervella, P., Thévenin, F., Di Folco, E., & Ségransan, D. 2004, , 426, 29 [Kim et al.(2016)]Kim2016 Kim, S.-L., Lee, C.-U., Park, B.-G., et al. 2016, JKAS, 49, 37 [Koshimoto et al.(2023)]Koshimoto2023 Koshimoto, N., Sumi, T., Bennett, D. P., et al. 2023, arXiv:2303.08279 [Malpas et al.(2022)]Malpas2022 Malpas, A., Albrow, M. D., Yee, J. C., et al. 2022, , 164, 102 [Mao & Paczyński(1991)]Mao1991 Mao, S., & Paczyński, B. 1991, , 374, L37 [Nataf et al.(2013)]Nataf2013 Nataf, D. M., Gould, A., Fouqué, P. et al. 2013, , 769, 88 [Shvartzvald et al.(2019)]Shvartzvald2019 Shvartzvald, Y., Yee, J. C., Skowron, J., et al. 2019, , 157, 106 [Tomaney & Crotts(1996)]Tomaney1996 Tomaney, A. B., & Crotts, A. P. S. 
1996, , 112, 2872 [Yee et al.(2012)]Yee2012 Yee, J. C., Shvartzvald, Y., Gal-Yam, A., et al. 2012, , 755, 102 [Yoo et al.(2004)]Yoo2004 Yoo, J., DePoy, D.L., Gal-Yam, A. et al. 2004, , 603, 139 [Zhu et al.(2016)]Zhu2016 Zhu, W., Calchi Novati, S., Gould, A., et al. 2016, , 825, 60
http://arxiv.org/abs/2307.05028v1
20230711060226
Magnetic and quadrupole moments of the $Z_{c}(4020)^+$, $Z_{c}(4050)^+$ and $Z_{c}(4600)^{+}$ states in the diquark-antidiquark picture
[ "U. Ozdem" ]
hep-ph
[ "hep-ph", "hep-ex", "hep-lat" ]
[email protected] Health Services Vocational School of Higher Education, Istanbul Aydin University, Sefakoy-Kucukcekmece, 34295 Istanbul, Turkey The magnetic and quadrupole moments of the Z_c(4020)^+, Z_c(4050)^+ and Z_c(4600)^+ states are calculated within QCD light-cone sum rules. To extract the magnetic and quadrupole moments of these states, the compact diquark-antidiquark interpolating currents and distribution amplitudes of the on-shell photon are employed. The magnetic moments are obtained as μ_Z_c = 0.55 ^+0.23_-0.22 μ_N, μ_Z^1_c = 1.11 ^+0.27_-0.29 μ_N, and μ_Z^2_c = 2.44 ^+0.53_-0.48 μ_N for the Z_c(4020)^+, Z_c(4050)^+ and Z_c(4600)^+ states, respectively. We see that the magnetic moment results evaluated for the Z_c(4020)^+, Z_c(4050)^+ and Z_c(4600)^+ states are large enough to be measured experimentally. We obtain nonzero but small values for the quadrupole moments of the Z_c states, indicating a nonspherical charge distribution. The comparison of any future experimental data on the magnetic and quadrupole moments of the Z_c(4020)^+, Z_c(4050)^+ and Z_c(4600)^+ states with the results of the present study can shed light on the nature and inner structure of these states. Magnetic and quadrupole moments of the Z_c(4020)^+, Z_c(4050)^+ and Z_c(4600)^+ states in the diquark-antidiquark picture Ulaş Özdem August 12, 2023
§ INTRODUCTION
Since the observation of the X(3872) state by the Belle Collaboration in 2003 <cit.>, numerous hadronic states that cannot be classified in the conventional two- or three-quark picture have been observed over the last twenty years. The LHCb, BESIII, CDF, BABAR, CMS, Belle and D0 Collaborations have observed numerous states that cannot be classified in the conventional quark picture and are referred to as the XYZ particles, hybrid, and pentaquark states; however, some still await confirmation, and their quantum numbers have not been assigned. The greatest achievement concerning the nonconventional states was the discovery of the charged tetraquark states. So far, there are eleven members in the set of charged hidden-charm tetraquark states: Z_c(3900), Z_c(4020), Z_c(4050), Z_c(4100), Z_c(4200), Z_2(4250), Z_c(4430), Z_c(4600), Z_cs(3985), Z_cs(4000), and Z_cs(4220), reported in decays into final states containing a pair of light and charm quarks <cit.>. Because of their electric charge, they cannot be classified as conventional charmonium mesons; they must be nonconventional states with a minimum quark content c c̅ u d̅/c c̅ d u̅/c c̅ s u̅/c c̅ u s̅. The charged tetraquark states receive much attention as they show distinctive properties. The study of these states can help us not only elucidate their nature and substructure but also obtain useful information on the nature of the strong interaction inside these particles. Since they are considered tetraquark states with c c̅ u d̅/c c̅ d u̅/c c̅ s u̅/c c̅ u s̅ quark content, this family of charged states has generally been investigated in molecular and compact diquark-antidiquark pictures. It is already possible to find many comprehensive reviews on the subject in the literature <cit.>. Like masses and decays, magnetic and quadrupole moments are important parameters of a hadron that can be measured and calculated.
The magnetic and quadrupole moments of the hidden-charm tetraquark states are of particular interest when examining the internal structure as well as the possible deformation of these states. There are several studies in the literature in which magnetic and quadrupole moments of hidden-charm tetraquark states were extracted <cit.>. In this study, we evaluate the magnetic and quadrupole moments of the Z_c(4020)^+, Z_c(4050)^+ and Z_c(4600)^+ (hereafter Z_c, Z^1_c and Z^2_c, respectively) tetraquark states by considering them in the diquark-antidiquark picture within QCD light-cone sum rules. The QCD light-cone sum rule method is a powerful technique for studying exotic hadron characteristics and has been applied successfully to calculate masses, form factors, magnetic moments, decay constants, and so on. In this technique, the correlation function is evaluated both in terms of hadrons (the hadronic part) and in terms of quark-gluon degrees of freedom (the QCD part). Then, the physical quantities, i.e., the magnetic and quadrupole moments, are extracted by equating these two different descriptions of the correlation function <cit.>. The organizational structure of this paper is as follows. In Sec. <ref>, we construct the QCD light-cone sum rules for the magnetic and quadrupole moments of the hidden-charm tetraquark states. In Sec. <ref>, we present the numerical results and the corresponding discussion of the magnetic and quadrupole moments of compact hidden-charm tetraquark states. The obtained results are summarized and discussed in Sec. <ref>. The expressions of the correlation function for the Z_c tetraquark and the distribution amplitudes are presented in the Appendices for brevity. § QCD LIGHT-CONE SUM RULES FOR THE MAGNETIC AND QUADRUPOLE MOMENTS OF THE Z_C STATES In this section, we explore the magnetic and quadrupole moments of hidden-charm tetraquark states built of compact diquark-antidiquarks. To do this, we write down the two-point correlation function in the QCD sum rules in the presence of an external electromagnetic background field, Π _μν(p,q)=i∫ d^4xe^ip· x⟨ 0|𝒯{J_μ(x) J_ν^†(0)}|0⟩_γ, where γ stands for the external electromagnetic background field, q is the momentum of the photon, and J_μ(ν)(x) is the interpolating current of the Z_c states with quantum numbers J^P=1^+; the currents are given as J_μ^Z_c(x) =ϵϵ̃/√(2){ [ u^bT(x) C σ_αμγ_5 c^c(x)] [ d̅^d(x) γ^α C c̅^eT(x)] -[ u^bT(x) C γ^α c^c(x)][ d̅^d(x) γ_5 σ_αμ C c̅^eT (x)] }, J_μ^Z^1_c(x) =ϵϵ̃/√(2){ [ u^bT(x) C σ_αμ c^c(x)] [ d̅^d(x) γ_5 γ^α C c̅^eT(x)] +[ u^bT(x) C γ^αγ_5 c^c(x)][ d̅^d(x) σ_αμ C c̅^eT (x)] }, J_μ^Z^2_c(x) =ϵϵ̃/√(2){ [ u^bT(x) C σ_αμ c^c(x)] [ d̅^d(x) γ_5 γ^α C c̅^eT(x)] -[ u^bT(x) C γ^αγ_5 c^c(x)][ d̅^d(x) σ_αμ C c̅^eT (x)] }, where ϵ =ϵ _abc, ϵ̃=ϵ _dec, the a, b, c, d, and e are color indexes, σ_μν=i/2[γ_μ,γ_ν], and C is the charge conjugation matrix. Now let us first calculate the hadronic side of the correlation function. It is acquired by inserting into Eq. (<ref>) complete sets of intermediate hadronic states having the same quantum numbers as the corresponding interpolating currents J_μ. After isolating the contributions of the ground-state Z_c we obtain Π_μν^Had (p,q) = ⟨ 0 | J_μ (x) | Z_c(p, ε^θ) ⟩/p^2 - m_Z_c^2 ⟨ Z_c(p, ε^θ) | Z_c(p+q, ε^δ) ⟩_γ ⟨ Z_c(p+q,ε^δ) |J^†_ν (0) | 0 ⟩/(p+q)^2 - m_Z_c^2 +··· , where the dots stand for the contributions of higher states and the continuum.
The matrix elements of the interpolating current between the vacuum and the one-hadron states are written in terms of the polarization vectors and residues as ⟨ 0 | J_μ(x) | Z_c(p,ε^θ) ⟩ = λ_Z_cε_μ^θ , ⟨ Z_c(p+q,ε^δ) |J^†_ν (0) | 0 ⟩ = λ_Z_cε_ν^δ . The radiative transition matrix element in Eq. (<ref>) is written in terms of three Lorentz invariant form factors G_1(Q^2), G_2(Q^2) and G_3(Q^2) as: ⟨ Z_c(p,ε^θ) | Z_c (p+q,ε^δ)⟩_γ = - ε^τ (ε^θ)^α (ε^δ)^β{ G_1(Q^2) (2p+q)_τ g_αβ + G_2(Q^2) ( g_τβ  q_α - g_τα  q_β) - 1/2 m_Z_c^2 G_3(Q^2)  (2p+q)_τ q_α q_β}, where ε^τ and ε^δ(θ) are the polarization vectors of the photon and Z_c states, respectively. Employing Eqs. (<ref>)-(<ref>), the hadronic part of the correlation function becomes, Π_μν^Had(p,q) = ε_ρ λ_Z_c^2/ [m_Z_c^2 - (p+q)^2][m_Z_c^2 - p^2]{G_1(Q^2)(2p+q)_ρ(g_μν-p_μ p_ν/m_Z_c^2 -(p+q)_μ (p+q)_ν/m_Z_c^2 +(p+q)_μ p_ν/2m_Z_c^4 (Q^2+2m_Z_c^2) ) + G_2 (Q^2) (q_μ g_ρν - q_ν g_ρμ - p_ν/m_Z_c^2(q_μ p_ρ - 1/2 Q^2 g_μρ) + (p+q)_μ/m_Z_c^2(q_ν (p+q)_ρ+ 1/2 Q^2 g_νρ) - (p+q)_μ p_ν p_ρ/m_Z_c^4 Q^2 ) -G_3(Q^2)/m_Z_c^2(2p+q)_ρ( q_μ q_ν -p_μ q_ν/2 m_Z_c^2 Q^2 +(p+q)_μ q_ν/2 m_Z_c^2 Q^2 -(p+q)_μ q_ν/4 m_Z_c^4 Q^4) } . The magnetic and quadrupole moments of hadrons are related to their magnetic and quadrupole form factors; more precisely, the magnetic and quadrupole moments are equal to the magnetic and quadrupole form factors at zero momentum squared. The magnetic (F_M(Q^2)) and quadrupole (F_𝒟 (Q^2)) form factors, which are more directly accessible in experiments, are expressed via the form factors G_1(Q^2), G_2(Q^2) and G_3(Q^2) as F_M(Q^2) = G_2(Q^2) , F_𝒟(Q^2) = G_1(Q^2)-G_2(Q^2)+(1+λ) G_3(Q^2) , where λ=Q^2/4 m_Z_c^2 with Q^2=-q^2. In the static limit, i.e., Q^2 = 0, the form factors F_M(Q^2=0) and F_𝒟(Q^2=0) are proportional to the magnetic (μ_Z_c) and quadrupole (𝒟_Z_c) moments in the following way: e F_M(Q^2=0) = 2 m_Z_cμ_Z_c , e F_ D(Q^2=0) = m_Z_c^2 𝒟_Z_c . Let us now evaluate the QCD part of the correlation function. The QCD side of the above-mentioned correlation function is computed in terms of quark-gluon degrees of freedom in the deep Euclidean region. To do this, we need to insert the interpolating currents in Eqs. (<ref>)-(<ref>) into the correlation function.
After substituting the explicit forms of the interpolating currents into the correlation function and applying contractions through Wick's theorem, we obtain the QCD side as Π _μν^QCD-Z_c(p,q) =ϵϵ̃ϵ^'ϵ̃^'/2∫ d^4xe^ipx⟨ 0 | { Tr[γ^αS̃_c^e^'e(-x)γ ^βS_d^d^'d(-x)] Tr[σ_μαγ _5 S_c^cc^'(x)γ _5σ_νβS̃_u^bb^'(x)] -Tr[ γ^αS̃_c^e^'e(-x)γ _5σ_νβS_d^d^'d(-x)] Tr[ σ_μαγ_5 S_c^cc^'(x)γ^βS̃_u^bb^'(x)] -Tr[σ_μαγ _5S̃_c^e^'e(-x)γ^βS_d^d^'d(-x)] Tr[ γ^αS_c^cc^'(x)γ_5σ_νβS̃_u^bb^'(x)] +Tr[σ_μαγ_5 S̃_c^e^'e(-x)γ _5σ_νβS_d^d^'d(-x)] Tr[γ^αS_c^cc^'(x) γ^βS̃_u^bb^'(x)] }| 0 ⟩_γ, Π _μν^QCD-Z_c^1(p,q) =ϵϵ̃ϵ^'ϵ̃^'/2∫ d^4xe^ipx⟨ 0 | { Tr[ γ _5 γ^αS̃_c^e^'e(-x)γ ^βγ _5 S_d^d^'d(-x)] Tr[σ_μαS_c^cc^'(x)σ_νβS̃_u^bb^'(x)] +Tr[ γ _5 γ^αS̃_c^e^'e(-x)σ_νβS_d^d^'d(-x)] Tr[ σ_μαS_c^cc^'(x)γ^βγ_5 S̃_u^bb^'(x)] +Tr[σ_μαS̃_c^e^'e(-x)γ^βγ _5S_d^d^'d(-x)] Tr[ γ _5γ^αS_c^cc^'(x)σ_νβS̃_u^bb^'(x)] +Tr[σ_μαS̃_c^e^'e(-x)σ_νβS_d^d^'d(-x)] Tr[γ_5γ^αS_c^cc^'(x) γ^βγ_5S̃_u^bb^'(x)] }| 0 ⟩_γ, Π _μν^QCD-Z_c^2(p,q) =ϵϵ̃ϵ^'ϵ̃^'/2∫ d^4xe^ipx⟨ 0 | { Tr[ γ _5 γ^αS̃_c^e^'e(-x)γ ^βγ _5 S_d^d^'d(-x)] Tr[σ_μαS_c^cc^'(x)σ_νβS̃_u^bb^'(x)] -Tr[ γ _5 γ^αS̃_c^e^'e(-x)σ_νβS_d^d^'d(-x)] Tr[ σ_μαS_c^cc^'(x)γ^βγ_5 S̃_u^bb^'(x)] -Tr[σ_μαS̃_c^e^'e(-x)γ^βγ _5S_d^d^'d(-x)] Tr[ γ _5γ^αS_c^cc^'(x)σ_νβS̃_u^bb^'(x)] +Tr[σ_μαS̃_c^e^'e(-x)σ_νβS_d^d^'d(-x)] Tr[γ_5γ^αS_c^cc^'(x) γ^βγ_5S̃_u^bb^'(x)] }| 0 ⟩_γ, where S_c(x) and S_q(x) stand for the propagators of the heavy and light quarks. The explicit forms of the quark propagators are written as <cit.> S_q(x) = S_q^free - ⟨q̅q ⟩/12(1-im_q/x/4) - ⟨q̅σ.G q ⟩/192x^2 (1-im_q/x/6) -i g_s /32 π^2 x^2 G^μν (x) [/xσ_μν + σ_μν/x], S_c(x) =S_c^free -g_sm_c/16π ^2∫_0^1 dv G^μν(vx)[ (σ _μν/x +/xσ _μν) K_1( m_c√(-x^2)) /√(-x^2) +2σ_μνK_0( m_c√(-x^2))], where S_q^free =1/2 π^2 x^2( i /x/x^2-m_q/2 ), S_c^free = m_c^2/4 π^2[ K_1(m_c√(-x^2)) /√(-x^2) +i /x K_2( m_c√(-x^2))/(√(-x^2))^2]. The correlation functions in Eqs. (<ref>)-(<ref>) contain both short-distance (perturbative) and long-distance (nonperturbative) contributions. To obtain the contributions when the photon is radiated at short distance, it is sufficient to modify one of the propagators in Eqs. (<ref>)-(<ref>) as follows S^free(x) →∫ d^4y S^free (x-y) /A(y) S^free (y) , where the other propagators in Eqs. (<ref>)-(<ref>) are taken as full propagators. To obtain the contributions when the photon is radiated at long distance, the correlation function is acquired from Eqs. (<ref>)-(<ref>) by substituting one of the u/d-quark propagators by S_μν^ab(x) → -1/4[q̅^a(x) Γ_i q^b(x)](Γ_i)_μν, where Γ_i = I, γ_5, γ_μ, iγ_5 γ_μ, σ_μν/2. Under this approach, the three remaining quark propagators are treated as full quark propagators containing perturbative as well as nonperturbative contributions. When a photon interacts nonperturbatively with the light-quark fields, the matrix elements of the nonlocal operators ⟨γ(q)|q̅(x) Γ_i G_μνq(0)|0⟩ and ⟨γ(q)|q̅(x) Γ_i q(0)|0⟩ between the photon state and the vacuum appear, which are parameterized in terms of photon distribution amplitudes (DAs) (for details see Ref. <cit.>). Together with these matrix elements, nonlocal operators such as four-quark (q̅q q̅ q) and two-gluon (q̅ G G q) ones are expected to appear. It is known that the contributions of such terms are small, which is confirmed by the conformal spin expansion <cit.>, and hence we neglect them. The QCD side of the correlation function is evaluated by employing Eqs. (<ref>)-(<ref>).
Then, the Fourier transformation is carried out to transfer the expressions from x-space to momentum space. QCD sum rules for the hadron parameters are obtained by equating the two descriptions of the correlation function, in terms of hadronic parameters and in terms of quark-gluon parameters, through quark-hadron duality. Then, we choose the structures (ε.p) (p_μ q_ν -p_ν q_μ) and (ε.p) q_μ q_ν for the magnetic and quadrupole moments, respectively. As a result, we get μ_Z_c λ_Z_c^2 = e^m_Z_c^2/M^2 Δ_1^QCD(M^2,s_0),       𝒟_Z_c λ_Z_c^2 = m_Z_c^2 e^m_Z_c^2/M^2 Δ_2^QCD(M^2,s_0), μ_Z^1_c λ_Z^1_c^2 = e^m_Z^1_c^2/M^2 Δ_3^QCD(M^2,s_0),       𝒟_Z_c^1 λ_Z_c^1^2 = m_Z_c^1^2 e^m_Z_c^1^2/M^2 Δ_4^QCD(M^2,s_0), μ_Z^2_c λ_Z^2_c^2 =e^m_Z^2_c^2/M^2 Δ_5^QCD(M^2,s_0),       𝒟_Z_c^2 λ_Z_c^2^2 = m_Z_c^2^2 e^m_Z_c^2^2/M^2 Δ_6^QCD(M^2,s_0), where M^2 is the Borel mass and s_0 is the continuum threshold parameter. For the sake of simplicity, only the explicit expression of the Δ_1^QCD(M^2,s_0) function is presented in Appendix A, since the remaining functions have a similar form. § NUMERICAL ANALYSIS The present section encompasses the numerical analysis for the magnetic and quadrupole moments of the Z_c states. The following QCD parameters are used in our calculations: m_u=m_d=0, m_c = (1.275± 0.025)GeV, ⟨u̅u⟩= ⟨d̅d⟩=(-0.24±0.01)^3GeV^3 <cit.>, m_0^2 = 0.8 ± 0.1 GeV^2, ⟨ g_s^2G^2⟩ = 0.88GeV^4 <cit.> and f_3γ=-0.0039GeV^2 <cit.>. From Eqs. (<ref>)-(<ref>), it follows that the residues of the Z_c states are needed for the determination of the magnetic and quadrupole moments. These residues are calculated in Ref. <cit.>. The photon DAs are among the main nonperturbative inputs of the QCD light-cone sum rules. The parameters used in the photon DAs are presented in Appendix B. As mentioned in the previous section, the sum rules also depend on two auxiliary parameters: the Borel mass squared M^2 and the continuum threshold s_0. The physical observables, i.e., the magnetic and quadrupole moments, should be independent of these parameters. Hence, we search for working regions of these parameters in which the magnetic and quadrupole moments are nearly independent of them. While determining the working regions of M^2 and s_0, the standard prescriptions of the technique, namely the convergence of the operator product expansion (OPE) and the dominance of the pole contribution (PC), are considered. To characterize these restrictions, it is convenient to use the following quantities: PC = Δ (M^2,s_0)/Δ (M^2,∞) ≥ 30%, and R = Δ^Dim 8 (M^2,s_0)/Δ (M^2,s_0) ≤ 5%, where Δ^Dim 8 (M^2,s_0) stands for the contribution of the highest dimensional term in the OPE. As a result of these restrictions, the OPE convergence and PC values for each state, together with the working regions acquired for M^2 and s_0, are presented in Table <ref>. From the values given in Table <ref>, it can be seen that the working regions determined for M^2 and s_0 meet the above-mentioned requirements. Having determined the working regions of M^2 and s_0, we now study the dependence of the magnetic and quadrupole moments on M^2 at several fixed values of s_0. From Fig. <ref>, we observe that the magnetic and quadrupole moments are indeed stable against variations of M^2 within its working region. Our final results for the magnetic and quadrupole moments of the Z_c(4020)^+, Z_c(4050)^+ and Z_c(4600)^+ states are presented in Table <ref>.
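The working-region determination just described amounts to a two-dimensional scan over (M^2, s_0). As an illustration only, the following Python sketch encodes the PC ≥ 30% and Dim-8 ≤ 5% criteria above; the callables delta and delta_dim8 are placeholders standing in for the problem-specific functions Δ^QCD(M^2,s_0) and Δ^Dim 8(M^2,s_0) of Appendix A, and all function names are ours rather than part of the original analysis.

def pole_contribution(delta, M2, s0, s_inf=1.0e3):
    # PC = Delta(M^2, s0) / Delta(M^2, infinity); s_inf approximates infinity
    return delta(M2, s0) / delta(M2, s_inf)

def dim8_fraction(delta_dim8, delta, M2, s0):
    # Relative weight of the highest-dimensional (Dim 8) term in the OPE
    return delta_dim8(M2, s0) / delta(M2, s0)

def working_region(delta, delta_dim8, M2_grid, s0_grid):
    # Keep the (M^2, s0) points satisfying PC >= 30% and Dim-8 fraction <= 5%
    region = []
    for M2 in M2_grid:
        for s0 in s0_grid:
            pc = pole_contribution(delta, M2, s0)
            r8 = dim8_fraction(delta_dim8, delta, M2, s0)
            if pc >= 0.30 and abs(r8) <= 0.05:
                region.append((M2, s0, pc))
    return region

Within the retained region, the stability of the extracted moments against M^2 at fixed s_0 can then be checked exactly as above.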
The uncertainties result from the variation of the Borel parameter M^2 and the continuum threshold s_0, as well as from the uncertainties in the input parameters. Examining the results, we can say that the magnetic moments of these states are large enough to be measured in future experiments. We obtain a nonzero but small value for the quadrupole moments of the Z_c states, indicating a nonspherical charge distribution. The sign of the quadrupole moments is positive for the Z_c tetraquark states, which corresponds to a prolate charge distribution. Before closing this section, we need to make a few comments on how the magnetic moments of unstable hadrons can be measured. While the short lifetimes of the Z_c states make the magnetic moments difficult to measure at present experimental facilities, the accumulation of more data in different experiments may make this possible in the future. The Δ^+(1232) baryon also has a very short lifetime, but its magnetic moment has been obtained from the experimental data on the γ N →Δ→Δγ→π N γ process <cit.>. Therefore, one technique for the determination of the magnetic and higher multipole moments is based on soft photon emission off the hadron, as proposed in Ref. <cit.>. The photon carries information about the magnetic and higher multipole moments of the hadron it is emitted from. The radiative transition matrix element can be expanded in the photon energy as M ∼ A (E_γ)^-1 + B (E_γ )^0 + C E_γ +..., where E_γ denotes the photon energy. The electric charge contributes to the amplitude at the (E_γ )^-1 order, while the contribution of the magnetic moment is defined by the (E_γ)^0 term. Hence, by measuring the decay width or cross-section of the radiative transition process and ignoring the small contributions of terms linear and higher order in E_γ, one can determine the magnetic moment of the hadron under examination. § DISCUSSION AND CONCLUDING REMARKS The magnetic and quadrupole moments of the Z_c(4020)^+, Z_c(4050)^+ and Z_c(4600)^+ states are calculated in the framework of the QCD light-cone sum rule method, assuming that these states are compact diquark-antidiquark states with J^P = 1^+ quantum numbers. Magnetic and quadrupole moments are among the most promising observables for gathering data about the electromagnetic features of hadrons, which are key to revealing their internal structure. Measurement of the magnetic and quadrupole moments of hidden-charm tetraquark states at future experimental facilities can be very helpful in determining the quantum numbers, as well as in understanding the substructure, of these states. Besides, we hope that the magnetic and quadrupole moments of the Z_c(4020)^+, Z_c(4050)^+ and Z_c(4600)^+ states will be calculated with other approaches in the future; such studies would enrich our knowledge of the magnetic and quadrupole moments of these states.
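To make the soft-photon strategy discussed above concrete, the magnetic-moment term B can be isolated from the low-energy behavior of the radiative amplitude by a linear fit in the basis {E_γ^-1, 1, E_γ}. The Python sketch below uses mock data generated inside the code itself; every number is an illustrative placeholder and not an experimental value.

import numpy as np

rng = np.random.default_rng(0)

# Low-energy expansion M ~ A/E + B + C*E with placeholder coefficients;
# A plays the role of the electric-charge term, B of the magnetic-moment term.
A_true, B_true, C_true = 1.0, 0.4, 0.1
E = np.linspace(0.05, 0.5, 40)                 # photon energies, arbitrary units
amp = A_true / E + B_true + C_true * E
amp += rng.normal(scale=0.01, size=E.size)     # mock measurement noise

# Linear least-squares fit in the basis {1/E, 1, E}
basis = np.column_stack([1.0 / E, np.ones_like(E), E])
(A_fit, B_fit, C_fit), *_ = np.linalg.lstsq(basis, amp, rcond=None)
print(f"fitted magnetic-moment term B = {B_fit:.3f} (input {B_true:.3f})")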
§ APPENDIX A: EXPLICIT FORMS OF THE FUNCTION In the present appendix, we present explicit expressions of the analytical expressions obtained for the magnetic moment of the Z_c(4020) state as follows: Δ_1^QCD(M^2,s_0) = 27 (e_d - e_u+e_c)/655360 π^5[ I[0, 5, 3, 1] - 3 I[0, 5, 3, 2] + 3 I[0, 5, 3, 3] - I[0, 5, 3, 4] - 3 I[0, 5, 4, 1] + 6 I[0, 5, 4, 2] - 3 I[0, 5, 4, 3] + 3 I[0, 5, 5, 1] - 3 I[0, 5, 5, 2] - I[0, 5, 6, 1]] +m_c^2 (e_d - e_u) /32768 π^5 (I[0, 4, 2, 2] - 2 I[0, 4, 2, 3] + I[0, 4, 2, 4] - 2 I[0, 4, 3, 2] + 2 I[0, 4, 3, 3] + I[0, 4, 4, 2]) +P_2^2 m_0^2/ 5308416 π^3[e_c ( I[0, 1, 1, 0] +3 I[0, 1, 1, 1] -2 I[0, 1, 1, 2] + 2 I[0, 1, 2, 0] -3 I[0, 1, 2, 1] + I[0, 1, 3, 0] )] +m_c P_ 1 P_ 2 /7077888 π^3[(e_c + e_d) (15 I[0, 1, 1, 0] - 32 I[0, 1, 1, 1] + 16 I[0, 1, 1, 2] - 30 I[0, 1, 2, 0] + 32 I[0, 1, 2, 1] + 15 I[0, 1, 3, 0])+ (234 e_u I_3[𝒮] + 120 e_u I_ 3[𝒮̃] + 195 e_d I_ 4[𝒮] + 82 e_d I_ 4[𝒮̃] + 384 (e_d - e_u) I_ 6[h_γ]) × I[0, 1, 3, 0]+ 576 (e_d - e_u) (I[0, 1, 1, 0] - 2 I[0, 1, 1, 1] + I[0, 1, 1, 2] - 2 I[0, 1, 2, 0] + 2 I[0, 1, 2, 1] + I[0, 1, 3, 0])] -P_ 1 f_3γ m_c^2 /110592 π^3 (e_d - e_u) (48 I[0, 1, 2, 0] + I[0, 1, 3, 0]) I_6[ψ^ν] +7 P_ 1 m_c^2 (e_d - e_u) /147456 π^5(I[0, 2, 1, 2] - I[0, 2, 1, 3] - I[0, 2, 2, 2]) +P_1 f_ 3 γ/9437184 π^3 (e_u I_1[𝒜] + 6 e_u I_ 1[𝒱] + e_d (I_ 2[𝒜] + 6 I_ 2[𝒱]) + 256 (-e_d + e_u) I_ 6[ψ^ν]) I[0, 2, 4, 0] +P_1 (e_d - e_u + e_c)/7077888 π^5[ 27 I[0, 3, 2, 0] - 53 I[0, 3, 2, 1] + 25 I[0, 3, 2, 2] + I[0, 3, 2, 3] - 81 I[0, 3, 3, 0] + 106 I[0, 3, 3, 1]- 25 I[0, 3, 3, 2] + 81 I[0, 3, 4, 0] - 53 I[0, 3, 4, 1] - 27 I[0, 3, 5, 0]] +m_c P_ 2/393216 π^3[-4 (10 e_u I_ 3[𝒮] + 38 e_u I_ 3[𝒮̃] + 6 e_d I_ 4[𝒮] + 45 e_d I_ 4[𝒮̃]) I[0, 3, 4, 0] - 288 (e_d - e_u) × (I[0, 3, 2, 0]- 3 I[0, 3, 2, 1] + 3 I[0, 3, 2, 2] - I[0, 3, 2, 3] - 3 I[0, 3, 3, 0] + 6 I[0, 3, 3, 1] - 3 I[0, 3, 3, 2] + 3 I[0, 3, 4, 0]) + 864 (e_d - e_u) I[0, 3, 4, 1] + 3 (4 e_d I_ 4[𝒮] + 5 e_d I_ 4[𝒮̃] + 6 e_u (I_ 3[𝒮] + I_ 3[𝒮̃] - 32 I_ 6[h_ γ]) + 96 (e_d - e_u + 2 e_d )I_ 6[h_ γ]) I[0, 3, 5, 0]] +f_ 3 γ/2097152 π^3[ 4 (22 e_u I_ 1[𝒜] - 25 e_u I_ 1[𝒱] + 22 e_d I_ 2[𝒜] - 25 e_d I_ 2[𝒱]) I[0, 4, 5, 0] - 3 (6 e_u I_ 1[𝒜] + e_u I_ 1[𝒱] + e_d (6 I_ 2[𝒜] + I_ 2[𝒱]) + 448 (e_d - e_u) I_ 6[ψ^ν]) I[0, 4, 6, 0]], where P_1 =⟨ g_s^2 G^2⟩ and P_2 =⟨q̅ q ⟩ are gluon and u/d-quark condensates, respectively. The functions I[n,m,l,k], I_1[𝒜], I_2[𝒜], I_3[𝒜], I_4[𝒜],  I_5[𝒜], and  I_6[𝒜] are defined as: I[n,m,l,k] = ∫_4 m_c^2^s_0 ds ∫_0^1 dt ∫_0^1 dw  e^-s/M^2  s^n (s-4 m_c^2)^m t^l w^k, I_1[𝒜] =∫ D_α_i∫_0^1 dv 𝒜(α_q̅,α_q,α_g) δ'(α_ q +v̅α_g-u_0), I_2[𝒜] =∫ D_α_i∫_0^1 dv 𝒜(α_q̅,α_q,α_g) δ'(α_q̅+ v α_g-u_0), I_3[𝒜] =∫ D_α_i∫_0^1 dv 𝒜(α_q̅,α_q,α_g) δ(α_ q +v̅α_g-u_0), I_4[𝒜] =∫ D_α_i∫_0^1 dv 𝒜(α_q̅,α_q,α_g) δ(α_q̅+ v α_g-u_0), I_5[𝒜] =∫_0^1 du  A(u)δ'(u-u_0), I_6[𝒜] =∫_0^1 du  A(u), where 𝒜 represents the corresponding photon DAs. 
§ APPENDIX B: THE ON-SHELL PHOTON DISTRIBUTION AMPLITUDES In this Appendix, we give the descriptions of the matrix elements of the form ⟨γ(q)q̅(x) Γ_i q(0) 0⟩ and ⟨γ(q)q̅(x) Γ_i G_μνq(0) 0⟩ concerning the on-shell photon DAs together with the explicit expressions of the photon DAs entering into the matrix elements <cit.> : ⟨γ(q) |q̅(x) γ_μ q(0) | 0 ⟩ = e_q f_3 γ(ε_μ - q_με x/q x) ∫_0^1 du e^i u̅ q xψ^v(u) ⟨γ(q) |q̅(x) γ_μγ_5 q(0) | 0 ⟩ = - 1/4 e_q f_3 γϵ_μναβε^ν q^α x^β∫_0^1 du e^i u̅ q xψ^a(u) ⟨γ(q) |q̅(x) σ_μν q(0) | 0 ⟩ = -i e_q ⟨q̅ q ⟩ (ε_μ q_ν - ε_ν q_μ) ∫_0^1 du e^i u̅ qx(χφ_γ(u) + x^2/16𝔸 (u) ) -i/2(qx) e_q q̅q [x_ν(ε_μ - q_με x/qx) - x_μ(ε_ν - q_νε x/q x) ] ∫_0^1 du e^i u̅ q x h_γ(u) ⟨γ(q) | q̅(x) g_s G_μν (v x) q(0) | 0 ⟩ = -i e_q ⟨q̅ q ⟩(ε_μ q_ν - ε_ν q_μ) ∫ Dα_i e^i (α_q̅ + v α_g) q x S(α_i) ⟨γ(q) | q̅(x) g_s G̃_μν(v x) i γ_5 q(0) | 0 ⟩ = -i e_q ⟨q̅ q ⟩(ε_μ q_ν - ε_ν q_μ) ∫ Dα_i e^i (α_q̅ + v α_g) q xS̃(α_i) ⟨γ(q) |q̅(x) g_s G̃_μν(v x) γ_αγ_5 q(0) | 0 ⟩ = e_q f_3 γ q_α (ε_μ q_ν - ε_ν q_μ) ∫ Dα_i e^i (α_q̅ + v α_g) q x A(α_i) ⟨γ(q) |q̅(x) g_s G_μν(v x) i γ_α q(0) | 0 ⟩ = e_q f_3 γ q_α (ε_μ q_ν - ε_ν q_μ) ∫ Dα_i e^i (α_q̅ + v α_g) q x V(α_i) ⟨γ(q) |q̅(x) σ_αβ g_s G_μν(v x) q(0) | 0 ⟩ = e_q ⟨q̅ q ⟩{[(ε_μ - q_με x/q x)(g_αν - 1/qx (q_α x_ν + q_ν x_α)) . . q_β - (ε_μ - q_με x/q x)(g_βν - 1/qx (q_β x_ν + q_ν x_β)) q_α - (ε_ν - q_νε x/q x)(g_αμ - 1/qx (q_α x_μ + q_μ x_α)) q_β + . (ε_ν - q_νε x/q.x)( g_βμ - 1/qx (q_β x_μ + q_μ x_β)) q_α] ∫ Dα_i e^i (α_q̅ + v α_g) qx T_1(α_i) + [(ε_α - q_αε x/qx) (g_μβ - 1/qx(q_μ x_β + q_β x_μ)) . q_ν - (ε_α - q_αε x/qx) (g_νβ - 1/qx(q_ν x_β + q_β x_ν)) q_μ - (ε_β - q_βε x/qx) (g_μα - 1/qx(q_μ x_α + q_α x_μ)) q_ν + . (ε_β - q_βε x/qx) (g_να - 1/qx(q_ν x_α + q_α x_ν) ) q_μ] ∫ Dα_i e^i (α_q̅ + v α_g) qx T_2(α_i) +1/qx (q_μ x_ν - q_ν x_μ) (ε_α q_β - ε_β q_α) ∫ Dα_i e^i (α_q̅ + v α_g) qx T_3(α_i) + . 1/qx (q_α x_β - q_β x_α) (ε_μ q_ν - ε_ν q_μ) ∫ Dα_i e^i (α_q̅ + v α_g) qx T_4(α_i) }, where the measure Dα_i is defined as ∫ Dα_i = ∫_0^1 d α_q̅∫_0^1 d α_q ∫_0^1 d α_g δ(1-α_q̅-α_q-α_g) . Here, φ_γ(u) denotes the leading twist-2 of the photon DA, ψ^v(u), ψ^a(u), A(α_i) and V(α_i), are the twist-3 DAs, and h_γ(u), 𝔸(u), S(α_i), S̃(α_i), T_1(α_i), T_2(α_i), T_3(α_i) and T_4(α_i) are the twist-4 photon DAs. The explicit expressions of the on-shell photon DAs with different twists are φ_γ(u) = 6 u u̅( 1 + φ_2(μ) C_2^3/2(u - u̅) ), ψ^v(u) = 3 (3 (2 u - 1)^2 -1 )+3/64(15 w^V_γ - 5 w^A_γ) (3 - 30 (2 u - 1)^2 + 35 (2 u -1)^4 ), ψ^a(u) = (1- (2 u -1)^2)(5 (2 u -1)^2 -1) 5/2(1 + 9/16 w^V_γ - 3/16 w^A_γ), h_γ(u) = - 10 (1 + 2 κ^+) C_2^1/2(u - u̅), 𝔸(u) = 40 u^2 u̅^2 (3 κ - κ^+ +1) + 8 (ζ_2^+ - 3 ζ_2) [u u̅ (2 + 13 u u̅) . + . 2 u^3 (10 -15 u + 6 u^2) ln(u) + 2 u̅^3 (10 - 15 u̅ + 6 u̅^2) ln(u̅) ], A(α_i) = 360 α_q α_q̅α_g^2 (1 + w^A_γ1/2 (7 α_g - 3)), V(α_i) = 540 w^V_γ (α_q - α_q̅) α_q α_q̅α_g^2, T_1(α_i) = -120 (3 ζ_2 + ζ_2^+)(α_q̅ - α_q) α_q̅α_q α_g, T_2(α_i) = 30 α_g^2 (α_q̅ - α_q) ((κ - κ^+) + (ζ_1 - ζ_1^+)(1 - 2α_g) + ζ_2 (3 - 4 α_g)), T_3(α_i) = - 120 (3 ζ_2 - ζ_2^+)(α_q̅ -α_q) α_q̅α_q α_g, T_4(α_i) = 30 α_g^2 (α_q̅ - α_q) ((κ + κ^+) + (ζ_1 + ζ_1^+)(1 - 2α_g) + ζ_2 (3 - 4 α_g)), S(α_i) = 30α_g^2{(κ + κ^+)(1-α_g)+(ζ_1 + ζ_1^+)(1 - α_g)(1 - 2α_g) +ζ_2[3 (α_q̅ - α_q)^2-α_g(1 - α_g)]}, S̃(α_i) = -30α_g^2{(κ -κ^+)(1-α_g)+(ζ_1 - ζ_1^+)(1 - α_g)(1 - 2α_g) +ζ_2 [3 (α_q̅ -α_q)^2-α_g(1 - α_g)]}. The numerical values of the constants in the above wave functions are given as φ_2(1 GeV) = 0, w^V_γ = 3.8 ± 1.8, w^A_γ = -2.1 ± 1.0, κ = 0.2, κ^+ = 0, ζ_1 = 0.4, and ζ_2 = 0.3.
http://arxiv.org/abs/2307.05821v1
20230711220201
Quantum Relax-and-Round Algorithm for Combinatorial Optimization
[ "Maxime Dupont", "Bhuvanesh Sundar" ]
quant-ph
[ "quant-ph" ]
Rigetti Computing, 775 Heinz Avenue, Berkeley, California 94710, USA Rigetti Computing, 775 Heinz Avenue, Berkeley, California 94710, USA We introduce a relax-and-round approach embedding the quantum approximate optimization algorithm (QAOA) with p≥ 1 layers. We show for many problems, including Sherrington-Kirkpatrick spin glasses, that at p=1, it is as accurate as its classical counterpart and better than the QAOA at any depth p. Employing a different rounding scheme, we prove the method shares the performance of the Goemans-Williamson algorithm for the maximum cut problem on certain graphs. We pave the way for an overarching quantum relax-and-round framework with performance on par with some of the best classical algorithms. Quantum Relax-and-Round Algorithm for Combinatorial Optimization Maxime Dupont and Bhuvanesh Sundar August 12, 2023 ================================================================ Solving combinatorial optimization problems <cit.>, or Ising models <cit.>, is a formidable challenge connecting basic sciences, such as mathematical optimization, statistical physics, and condensed matter, with everyday life problems in logistics, scheduling, routing, finance, chemistry, biology, etc <cit.>. The advent of controllable quantum simulators has inspired the development of quantum-based approaches for tackling combinatorial optimization. Notable examples include quantum annealing <cit.> and the quantum approximate optimization algorithm (QAOA) <cit.> for programmable quantum computers. These quantum approaches have been successfully implemented at various scales on a wide range of platforms. For instance, superconducting quantum computers executed the QAOA for finding the ground state of spin glasses, solving the maximum cut problem, and performing a machine learning task for up to 23 qubits <cit.>. Trapped ions simulated the QAOA for solving a long-range Ising model on 40 spins <cit.>. Researchers observed a super-linear speedup in finding the maximum independent set on various graphs up to 289 vertices using ultracold Rydberg atoms <cit.>. A superconducting quantum annealer considered a spin glass with 5,000 variables <cit.>. Despite ever-improving practical implementations in terms of the number of qubits, coherence, operating fidelity, and programmability, it remains an open question whether quantum machines can deliver, even in the fault-tolerant regime <cit.>, an advantage in speed or solution quality over the best classical methods. The quality of a solution z∈ℤ^N is typically characterized by the approximation ratio α=C(z)/C(z_opt), where C(z) is a problem-dependent objective function that scores the solution and z_opt is the optimal solution. Many combinatorial optimization problems are NP-hard with no efficient way of obtaining the optimal solution. Hence, an overarching goal is to develop approximate algorithms that return α as close to one as possible <cit.>, noting that it can also be NP-hard to get α past a certain threshold. Celebrated classical examples include the Goemans-Williamson algorithm for the maximum cut problem with α≃ 0.878 <cit.>, the Christofides-Serdyukov α=3/2 algorithm for the traveling salesman problem <cit.>, and the α=7/8 approximation algorithm for the class MAX-E3-SAT of boolean satisfiability problems <cit.>. One of the most promising quantum algorithms for solving quadratic binary optimization problems on near-term quantum computers is the QAOA <cit.>.
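As an aside on reproducibility, the approximation ratio α defined above can be benchmarked on small instances by exhaustively enumerating the quadratic objective C(z)=∑_ij𝖶_ijz_iz_j introduced below. A minimal Python sketch follows; the naming is ours, and the enumeration is exponential in N, so it is only practical for N ≲ 20.

import itertools
import numpy as np

def brute_force_optimum(W):
    # Exhaustively minimize C(z) = sum_ij W_ij z_i z_j over z in {+/-1}^N
    N = W.shape[0]
    best_z, best_val = None, np.inf
    for bits in itertools.product((-1.0, 1.0), repeat=N):
        z = np.asarray(bits)
        val = z @ W @ z
        if val < best_val:
            best_z, best_val = z, val
    return best_z, best_val

def approximation_ratio(W, z, z_opt):
    # alpha = C(z) / C(z_opt)
    return (z @ W @ z) / (z_opt @ W @ z_opt)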
Given an objective function C(z)=∑_ij𝖶_ijz_iz_j, where 𝖶∈ℝ^N× N is the adjacency matrix of an N-vertex undirected weighted graph encoding the problem, the optimization task is to minimize C(z) over z∈{± 1}^N. This is done by preparing a parameterized quantum state |Ψ⟩_p=[∏_ℓ=1^pe^-iβ_ℓ∑_j=1^NX̂_je^-iγ_ℓĈ]Ĥ^⊗ N|0⟩^⊗ N, where Ĥ is the one-qubit Hadamard gate, X̂_i is the Pauli operator on qubit i, Ĉ is the operator corresponding to the objective function of Eq. (<ref>) obtained by replacing the binary variables z_i with Pauli operators Ẑ_i, and {γ_ℓ, β_ℓ} are real-valued angles. These angles are optimal when they minimize the expectation value of the objective function ⟨Ĉ⟩_p over |Ψ⟩_p of Eq. (<ref>). The depth p acts as a control parameter of the quantum algorithm, such that the quality of the solutions improves as p is increased <cit.>. The solution from the QAOA is guaranteed to converge to the optimal solution (α=1) for p→+∞ due to the adiabatic theorem <cit.>. However, it is more difficult to ascertain its performance at low p. Yet, the low-p regime is particularly relevant for near-term quantum devices, which, in the absence of quantum error correction <cit.>, can only execute shallow algorithms. In some cases, the average approximation ratio is known for the QAOA at low p. For instance, the average approximation ratio at p=1 for the maximum cut problem on random 3-regular graphs is α≃ 0.692 <cit.>, and for ring graphs is α=(2p+1)/(2p+2) <cit.>. For paradigmatic Sherrington-Kirkpatrick (SK) spin glasses <cit.>, the QAOA yields α≃ 0.397 at p=1, and α≃ 0.901 at p=20 <cit.>. Devising quantum algorithms with the lowest depth and highest performance compared to other known algorithms is a highly desired goal in the quest for quantum advantage. Here, we introduce an efficient quantum relax-and-round (QRR) algorithm that builds on top of the QAOA to enhance its approximation ratio on a range of problems. QRR requires no more data from the quantum computer than what is already computed as part of the QAOA, making it an attractive plug-and-play addition to quantum optimization workflows. We show that at p=1, QRR has the same performance as a classical relax-and-round algorithm on a large class of problem instances, much higher than that of the raw QAOA at p=1. The solution from QRR converges asymptotically to the optimal solution for p→+∞ and displays robustness to certain types of quantum noise. Relax-and-round approaches are ubiquitous in approximate classical algorithms, such as those based on semidefinite programming <cit.>. Indeed, the difficulty in solving binary optimization problems comes from the solutions being restricted to the integer domain. If this constraint is relaxed (e.g., z∈{±1}^N to z∈ℝ^N with fixed norm ‖z‖^2=N), the problem becomes an eigenvalue problem and is, therefore, solvable efficiently. A judicious rounding scheme is then employed to map the relaxed solution back to a valid one. For instance, for quadratic binary optimization problems [Eq. (<ref>)] where 𝖶 is drawn from the Gaussian orthogonal ensemble, which corresponds to SK spin glasses <cit.>, a relax-and-round scheme leads to an approximation ratio α=2/π P^*≃ 0.834 <cit.>, where P^* is the Parisi constant <cit.>. Here, we propose to perform a relax-and-round step on the correlation matrix resulting from the QAOA at depth p, 𝖹_ij^(p) = (δ_ij - 1)⟨Ẑ_iẐ_j⟩_p, where δ_ij is the Kronecker delta and ⟨Ẑ_iẐ_j⟩_p is the expectation value of the two-point correlation between qubits i and j over |Ψ⟩_p [Eq. (<ref>)].
Specifically, we do an eigendecomposition of 𝖹^(p) to obtain its eigenvectors {z∈ℝ^N}. We round the eigenvectors entrywise to their sign {z←sign(z)∈{±1}^N} to recover a valid solution to the original problem. Finally, the best rounded eigenvector with respect to the objective function is returned. The intuition is that the correlation matrix elements encode the similarity between variables i and j: A positive matrix element means the variables z_i and z_j are negatively correlated in the ensemble of measurements, and a negative element means the variables are positively correlated in the ensemble. Thus, implementing a relax-and-round algorithm on 𝖹^(p) returns one solution z such that pairs of variables (z_i,z_j) tend to minimize 𝖹^(p)_ijz_iz_j. A similar argument explains the intuition for the classical relax-and-round algorithm <cit.>. However, the quantum relax-and-round algorithm is more powerful, since there can potentially be more nontrivial information in 𝖹^(p) than in 𝖶. For example, consider a twofold degenerate optimal solution ±z_opt because of the global ℤ_2 sign flip symmetry z_i→-z_i ∀i for problems in the form of Eq. (<ref>). In the p→+∞ limit, the correlation matrix becomes 𝖹^(∞) = 𝖨 - z_opt⊗z_opt, where ⊗ denotes the outer product and 𝖨 is the identity matrix. The second term is a rank one matrix with eigenvector ±z_opt/√(N), where the sign depends on the numerical solver. This eigenvector gets rounded to ±z_opt. For problems with degenerate optimal solutions beyond the global ℤ_2 symmetry, vanishing one-body terms lim_h_i→ 0h_iz_i can be added to the original objective function to favor a single solution. For a nondegenerate solution, the rounding procedure should attempt both z←±sign(z) to ensure the optimal solution is recovered in the infinite-depth limit [This is because the correlation matrix 𝖹^(∞) captures z_opt up to a global sign. In practice, the eigenvectors are real and defined up to a global ± 1 sign. Either can be returned depending on the numerical implementation of the eigendecomposition. Yet, only one corresponds to the nondegenerate optimal solution.]. We show that the QRR at p=1 performs as well as its classical counterpart for a large class of problems. We exemplify this on SK spin glasses with random weights 𝖶_i≠ j=± 1 (see Figs. <ref>a and <ref>b) and extend the analysis in the supplemental material <cit.>. The correlation matrix can be evaluated analytically through back-propagation at p=1. At the optimal angles for the QAOA and employing a large-N expansion, the correlation matrix elements are given by <cit.> lim_N→+∞𝖹^(p=1)_ij≃𝖶_ij/√(eN) + 𝖭_ij/eN, where 𝖭_ij≡ -[𝖶^2]_ij/2 =-∑_k𝖶_ik𝖶_kj/2 measures the sign imbalance in weights 𝖶_ik and 𝖶_jk for all nodes k≠ i,j. Because of the random nature of the weights, it is distributed similarly to a random walk with a total number N/2-1 of ± 1 steps. Consequently, 𝖶 and 𝖭 commute, and at the optimal angles and large N, the adjacency and correlation matrices also commute. Therefore, they have the same eigenvectors and the QRR implemented on 𝖹^(1) gives the same approximation ratio α=2/π P^* as the classical relax-and-round algorithm based on 𝖶. This is evidenced by the perfect agreement between marker symbols in Fig. <ref>c, and between the bars for the SK model in Fig. <ref>. For comparison, the raw QAOA leads to α≃ 0.397 at p=1 and crosses the threshold α≃ 0.838>2/π P^* only for p≥ 11 <cit.>.
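To make the rounding step concrete, the following is a minimal numpy sketch of the QRR post-processing, assuming the correlation matrix has already been estimated from ±1-valued bitstring samples; the function names are ours and not those of an official implementation. Note also that a global rescaling 𝖹→ F𝖹, such as the one induced by the depolarizing channel discussed below, leaves the eigenvectors, and hence the rounded output, unchanged.

import numpy as np

def objective(W, z):
    # C(z) = sum_ij W_ij z_i z_j
    return z @ W @ z

def correlation_matrix(samples):
    # samples: (n_ex, N) array of +/-1 outcomes; Z_ij = (delta_ij - 1) <Z_i Z_j>
    Z = -samples.T @ samples / samples.shape[0]
    np.fill_diagonal(Z, 0.0)
    return Z

def quantum_relax_and_round(Z, W):
    # Eigendecompose Z^(p), sign-round every eigenvector with both global
    # signs, and return the best rounded candidate for the objective.
    _, vecs = np.linalg.eigh(Z)
    best_z, best_val = None, np.inf
    for v in vecs.T:
        for s in (+1.0, -1.0):
            z = np.sign(s * v)
            z[z == 0] = 1.0      # break exact-zero ties in the rounding
            val = objective(W, z)
            if val < best_val:
                best_z, best_val = z, val
    return best_z, best_val

Trying both global signs implements the prescription above for nondegenerate solutions; the loop adds an O(N^2) objective evaluation per eigenvector on top of the eigendecomposition.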
To ascertain the performance of the QRR at finite p>1, we perform numerical experiments <cit.> for finite-N SK spin glasses and report the data in Fig. <ref>c. We observe a systematic improvement as p increases, approaching the p→∞ value. Additional data analyses show that the QRR algorithm converges asymptotically faster to the optimal solution than the underlying QAOA at the same depth p <cit.>. The QRR algorithm has a complexity of O(N^2) for each QAOA circuit executed. Executing one circuit takes a time ∼ pN <cit.> for SK instances, and building the correlation matrix takes ∼ N^2; assuming that the desired eigenvector is among the leading k≪ N ones, the leading k eigenvectors of the correlation matrix 𝖹^(p) can be found in ∼ N^2 operations. Finally, the rounding requires ∼ N steps and computing the objective function value ∼ N^2, i.e., the number of edges in the graph. To date, the best performing classical algorithm for SK problems is an approximate message-passing algorithm that can return a solution with approximation ratio α=1-ε and complexity ∼ Q(ε)N^2, where Q(ε) is an inverse polynomial of ε controlling the desired accuracy <cit.>. We note that there is a nontrivial control of the desired accuracy with the circuit depth p in the quantum case. In particular, there is an overhead in the QAOA algorithm for finding the optimal angles, which are only known up to p≤ 17 <cit.>, an overhead which could scale exponentially with N and p <cit.>. Besides, we find that achieving optimal accuracy with the QRR algorithm requires a number of circuit executions for computing the correlation matrix elements going as n_ex∼ N^κ, where κ≈ 1.5 for p=1 and where, by definition, κ→ 0 for p→+∞ <cit.>. The QRR algorithm is robust to certain types of incoherent quantum noise, such as those captured by a depolarizing noise channel. Under this channel, the system is in the mixed state <cit.>, ρ̂_p,F=F|Ψ⟩⟨Ψ|_p+(1-F)𝖨̂/2^N, where 𝖨̂ is the N-qubit identity matrix and F∈[0,1] is the total circuit fidelity. The expectation value of the two-point correlation of Eq. (<ref>) reads 𝖹_ij^(p,F)=(δ_ij - 1) tr(ρ̂_p,FẐ_iẐ_j)=F 𝖹_ij^(p,F=1), which is just a rescaling of the noiseless correlation matrix by the total fidelity F. Therefore, the eigenvectors of 𝖹_ij^(p,F), and hence the relax-and-round approach, are unaffected by depolarizing noise. In practice, simulating an N-variable SK problem requires a minimum of pN^2 two-qubit gates <cit.>. Assuming the average gate fidelity is f∈[0,1], and that errors are uncorrelated, the circuit fidelity is F≃ f^pN^2. Thus, an exponentially large number of circuit executions n_ex is required to obtain a reliable signal-to-noise ratio √(n_ex)f^pN^2≫ 1. Depending on the problem (N), circuit (p), and hardware (f), the algorithm can remain practical in the absence of quantum error correction <cit.>. In particular, the algorithm is based on expectation values, enabling expectation-based error mitigation techniques <cit.> which are otherwise inapplicable with the QAOA for mitigating solutions. This reinforces its usability on the current generation of noisy quantum hardware. We perform numerical experiments on various types of graph problems in the form of Eq. (<ref>) <cit.>, and report the approximation ratio in Fig. <ref>. At p=1, we empirically find that the QRR algorithm is always at least on par with its classical counterpart. For unit-weight random 3-regular graphs, this is because the adjacency and correlation matrices commute in the large-N limit <cit.>, similar to SK problems.
For the other two cases, the QRR algorithm outperforms the classical one. This is confirmed numerically on much larger problem instances up to N=256 <cit.>. As p increases and correlations get closer to those of the optimal solution, we find a systematic improvement in the QRR algorithm's results. Moreover, the approximation ratio is always larger than that of the underlying QAOA at the same depth p. Noting that the classical relax-and-round algorithm that we used as a baseline does not have a performance guarantee for problems besides SK spin glasses, including the various types of graphs in Fig. <ref>, next we exemplify how the QRR algorithm can be modified such that it parallels other classical algorithms with a performance guarantee. As a concrete example, we consider the Goemans-Williamson (GW) algorithm. In the GW algorithm, the optimization task is to find the maximum cut of a graph, which is given by maximizing the objective function C(z)=∑_ij𝖫_ijz_iz_j/4 subject to z∈{± 1}^N, where 𝖫=𝖣-𝖶 is the Laplacian matrix with 𝖶_ij≥ 0 and 𝖣_ij=δ_ij∑_k 𝖶_ik is the degree matrix. The largest eigenvalue of N𝖫/4 gives an upper bound to the actual maximum cut C(z_opt) <cit.>. Noting that the objective function C(z) is invariant under the transformation 𝖣→𝖣+𝖽𝗂𝖺𝗀(u), where 𝖽𝗂𝖺𝗀(u) is a traceless diagonal matrix formed by the so-called correcting vector u∈ℝ^N, it follows that the maximum cut is also bounded by max_‖z‖=1z^TN/4[𝖣-𝖶+𝖽𝗂𝖺𝗀(u)]z, with z∈ℝ^N, for any vector u satisfying ∑_iu_i=0. Thus, one can make the eigenvalue bound tighter by implementing the relax-and-round algorithm on the following u-augmented version. First, solve min_∑_iu_i=0max_‖z‖=1z^TN/4[𝖣-𝖶+𝖽𝗂𝖺𝗀(u)]z, and then sign-round the leading eigenvector z∈ℝ^N. This is equivalent to the GW algorithm and guarantees a maximum cut such that α≃ 0.878 <cit.>. A first task for this classical relaxed problem is to find the optimal correcting vector u_opt minimizing the maximum eigenvalue, which is an upper bound to the optimal solution C(z_opt). Finding u_opt is a convex optimization problem, and thus numerically straightforward <cit.>. Eq. (<ref>) can naturally be adapted to our QRR algorithm by replacing 𝖶 with 𝖹^(p). In some cases, the quantum relax-and-round algorithm with 𝖣-𝖹^(1)+𝖽𝗂𝖺𝗀(u) can be analytically shown to perform at least as well as the classical relax-and-round algorithm on 𝖣-𝖶+𝖽𝗂𝖺𝗀(u) for some correcting vectors u. As a first example, we show this in the p→+∞ limit, where 𝖹^(∞) = 𝖨 - z_opt⊗z_opt. Choosing 𝖽𝗂𝖺𝗀(u) = tr(𝖣)𝖨/N - 𝖣 makes the eigenvalue problem trivial. The leading eigenvector is ±z_opt/√(N), which gets rounded to ±z_opt. Our second example is for certain vertex-transitive graphs. Due to the vertex-transitive nature, all the elements u_i are equal, and this is valid in both the classical and quantum algorithms. Since ∑_i u_i=0, each element u_i is zero <cit.>. For such graphs, the degree matrix 𝖣 is proportional to the identity matrix. Thus, the classical and QRR algorithms round the leading eigenvectors of -𝖶 and -𝖹^(p), respectively. The last step is to show that the leading eigenvector is the same in both cases, which is verified for ring and complete graphs <cit.>. We emphasize that the performance guarantee of the classical GW algorithm is agnostic to the graph, and showing whether a similarly strong statement can be made for the quantum version remains an open question for future work. Regardless, numerical experiments in Fig.
<ref> show that the QRR is a powerful heuristic even without theoretically established performance guarantees, on par with the classical version at p=1, and surpassing it for p>1. Our approach can embed other algorithms than the QAOA and directly applies to higher-order problems and those with one-body terms. For instance, we consider the maximum independent set problem on random unit disk graphs using a quantum annealing protocol <cit.>. It is a straightforward testbed for Rydberg atoms thanks to their intrinsic blockade mechanism acting as a penalty for nonindependent set solutions <cit.>. A penalty disfavors invalid solutions but will not prevent them outside the p→+∞ limit. Here, the QRR algorithm provides a heuristic to post-process for hard constraints: The squared values of the normalized eigenvectors of the correlation matrix 𝖹^(p) can be interpreted as probabilities to belong to their ± 1 sign-rounded groups. For instance, this probabilistic interpretation can guide a post-processing greedy approach for resolving out-of-constraint solutions, such as the one employed for maximum independent set problems with Rydberg atoms <cit.>. Furthermore, algorithms iteratively freezing variables to their classical values through consecutive executions of the QAOA on smaller and smaller problems <cit.> can also leverage this probabilistic interpretation as a natural freezing selection strategy. A slightly different version of our algorithm would optimize the QAOA angles {γ,β}_p with respect to the solution returned by the relax-and-round step instead of ⟨Ĉ⟩_p directly. In addition to the proven guarantees at p=1 and p→+∞, this variant would ensure that the QRR algorithm with p layers is necessarily at least as good as the QRR algorithm with p-1 layers for all p, with the lower bound obtained by setting the variational angles γ_p=β_p=0. Relax-and-round strategies are at the root of classical semidefinite programming methods, which are among the best combinatorial problem solvers <cit.>. This makes our algorithm a fierce competitor and paves the way toward a quantum advantage for combinatorial optimization. We are grateful to A. Arrasmith, N. Didier, G. Enos, M. J. Hodson, M. Paini, and M. J. Reagor for interesting discussions. This work is supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00112090058. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award DDR-ERCAP0024427. Supplemental Material for “Quantum Relax-and-Round Algorithm for Combinatorial Optimization” We provide an overview of all the problem types considered. We derive analytical expressions for the expectation value of two-point correlation functions of the QAOA at p=1. We provide analytical results on the equivalence between the quantum relax-and-round (QRR) algorithm and the classical relax-and-round algorithm for many problems. We detail how the numerical experiments are performed. We provide additional data showing the improvement in the approximation ratio of the QRR algorithm compared to the QAOA on which it is based. We compute numerically the norm of the commutator between the correlation and adjacency matrices for different graph problems.
We study the effect on the QRR algorithm of the finite number of circuit executions used to estimate expectation values. We extend the data of Fig. 2 of the main text at p=1 to much larger problem sizes. We consider the relax-and-round algorithm on maximum independent set problems using a quantum annealing protocol instead of the QAOA to obtain the correlation matrix. We discuss a relax-and-round algorithm analogous to the Goemans-Williamson algorithm for the maximum cut problem. § DEFINITION OF PROBLEM AND GRAPH TYPES We define the different problem and graph types considered throughout this work. The objective function of the following problems, which one seeks to minimize, is defined over a graph. The structure of the graph is encoded into its adjacency matrix 𝖶 with 𝖶_ii=0 and 𝖶_i≠ j the weight between vertices i and j (it is zero for nonadjacent vertices). The following problem types define effectively different adjacency matrices 𝖶. When needed for numerical experiments, we generate these graphs using the Python package NetworkX <cit.>. Sherrington-Kirkpatrick (SK) with random ± 1 weights SK problem instances <cit.> can be represented as complete graphs (i.e., all-to-all) with a random ± 1 weight of equal probability between all vertices. Random 3-regular graphs with a unit weight Random 3-regular graphs correspond to graphs where each vertex is connected to three other vertices. The weight of the edges connecting those vertices is chosen to be one. Random Newman-Watts-Strogatz small-world graphs with uniform random weights on the unit line The first step for building a Newman-Watts-Strogatz graph is to construct a ring graph. Then, an edge is added between all next-nearest neighbors, i.e., each node is connected to its k=4 nearest-neighbors. Then, with probability p=1/2, we randomly pick an edge between vertices i and j, and rewire it to vertices i and k <cit.>. Finally, each edge is given a random weight drawn uniformly from the range [0,1]. Random Barabási-Albert graphs with normally distributed random weights The first step for building a Barabási-Albert graph with N vertices is to generate a star graph with N/4+1 vertices. Then, the graph is grown by attaching 3N/4-1 new nodes, each with m=N/4 edges that are preferentially attached to existing nodes with high degree <cit.>. Finally, a random weight drawn from a normal distribution of mean zero and unit variance is given to each edge. The ring graph The ring graph is a d=2 regular graph with unit weight on the edges. The Bethe lattice The Bethe lattice is an infinite connected cycle-free graph where all vertices have the same number of nearest-neighbors. We consider the unit-weight Bethe lattice with d nearest-neighbors. The ring graph with additional next-nearest neighbor connections The starting point is the ring graph. Then, a connection between next-nearest neighbors is added. Thus, each node has four connections. Each edge has unit weight. The honeycomb lattice The honeycomb lattice is a hexagonal Bravais lattice with two nodes per unit cell. Each vertex has degree three. We either consider infinite honeycomb lattices, or lattices with periodic boundary conditions. Each edge has unit weight. Random two-dimensional geometric graphs The graphs are generated by placing uniformly at random N vertices in the unit plane of dimensions 1× 1. Then, nodes are connected by an edge if they are within a Euclidean distance of r=√(ρ/N). Here ρ is a parameter of the graph leading to an average vertex degree of πρ for the nodes.
This graph is also known as random unit disk by rescaling r→ 1 and the dimensions of the plane to √(N/ρ)×√(N/ρ). § ANALYTICAL EXPRESSION FOR THE EXPECTATION VALUES IN THE QAOA AT P=1 We calculate analytically the expectation value of the two-point correlation function ⟨Ẑ_iẐ_j⟩_1 from the quantum state |Ψ⟩_1 resulting from a one-layer (p=1) QAOA circuit. We denote C(z)=∑_ij𝖶_ijz_iz_j the objective function for a graph problem with adjacency matrix 𝖶 and Ising variables z_i=± 1. The quantum state reads, |Ψ⟩_1=e^-iβ_1∑_j=1^NX̂_je^-iγ_1ĈĤ^⊗ N|0⟩^⊗ N, where Ĥ is the one-qubit Hadamard gate, X̂_i is the Pauli operator on qubit i, Ĉ is the operator corresponding to the objective function obtained by substituting the binary variables z_i for Pauli operators Ẑ_i, and {γ_1≡γ, β_1≡β} are real-valued angles. Hence, the expectation value of the two-point correlation function reads, ⟨Ẑ_iẐ_j⟩_1 = ⟨+| e^iγĈ e^iβ∑_j=1^NX̂_jẐ_iẐ_j e^-iβ∑_j=1^NX̂_j e^-iγĈ|+⟩, where, |+⟩≡Ĥ^⊗ N|0⟩^⊗ N=(|0⟩+|1⟩/√(2))^⊗ N, corresponds to an equal superposition of all the basis states. The inner term e^iβ∑_j=1^NX̂_jẐ_iẐ_j e^-iβ∑_j=1^NX̂_j involves exponentials of sums of terms acting independently on the qubits, and can be easily expanded to, e^iβ∑_j=1^NX̂_jẐ_iẐ_j e^-iβ∑_j=1^NX̂_j = [cos(2β)Ẑ_i + sin(2β)Ŷ_i][cos(2β)Ẑ_j + sin(2β)Ŷ_j], where Ŷ_i is the Pauli operator acting on qubit i. Expanding this product gives four terms. To compute ⟨Ẑ_iẐ_j⟩, one should first multiply these four terms on either side by e^iγĈ and e^-iγĈ, and then compute the expectation value with respect to |+⟩. We use the Baker-Campbell-Hausdorff formula to write, e^iγĈẐ_iẐ_j e^-iγĈ = Ẑ_iẐ_j, e^iγĈẐ_iŶ_j e^-iγĈ = Ẑ_i [ cos(∑_k 2γ𝖶_jkẐ_k) Ŷ_j + sin(∑_k 2γ𝖶_jkẐ_k) X̂_j ], e^iγĈŶ_iẐ_j e^-iγĈ = [cos(∑_k 2γ𝖶_ikẐ_k)Ŷ_i + sin(∑_k 2γ𝖶_ikẐ_k)X̂_i]Ẑ_j, e^iγĈŶ_iŶ_j e^-iγĈ = [cos(∑_k 2γ𝖶_ikẐ_k)Ŷ_i + sin(∑_k 2γ𝖶_ikẐ_k)X̂_i ] ×[cos(∑_k 2γ𝖶_jkẐ_k)Ŷ_j + sin(∑_k 2γ𝖶_jkẐ_k)X̂_j ]. In Eqs. (<ref>), (<ref>), (<ref>), and (<ref>), all the arguments inside the cosine and sine functions commute with each other. Therefore, we can use standard trigonometric formulae to expand the cosines and sines. Specifically, cos(A+B) = cos Acos B-sin Asin B and sin(A+B) = sin Acos B+cos Asin B. Next, we compute the expectation values of Eqs. (<ref>), (<ref>), (<ref>), and (<ref>) with respect to |+⟩, one by one. The expectation value of Eq. (<ref>) is zero due to the global ℤ_2 sign flip symmetry of the objective function, i.e., ⟨+|Ẑ_iẐ_j|+⟩=0. The expectation value of the first term in Eq. (<ref>) is also zero, because the expectation value must be real but Ŷ_i has a purely imaginary matrix representation. For the same reason, the first term in Eq. (<ref>), and two out of four terms in Eq. (<ref>) are zero, ⟨+|Ẑ_icos(∑_k 2γ𝖶_jkẐ_k)Ŷ_j|+⟩ = 0, ⟨+|cos(∑_k 2γ𝖶_ikẐ_k)Ŷ_iẐ_j|+⟩ = 0, ⟨+|cos(∑_k 2γ𝖶_ikẐ_k)Ŷ_isin(∑_k 2γ𝖶_jkẐ_k)X̂_j|+⟩ = 0, ⟨+|sin(∑_k 2γ𝖶_ikẐ_k)X̂_i cos(∑_k 2γ𝖶_jkẐ_k)Ŷ_j|+⟩ = 0. The second term in Eq. (<ref>) is ⟨+|Ẑ_i sin(∑_k 2γ𝖶_jkẐ_k)X̂_j|+⟩. The state |+⟩ is an eigenstate of X̂_j, so we replace X̂_j with its eigenvalue, one. We expand the sine function and obtain, ⟨+|Ẑ_isin(∑_k 2γ𝖶_jkẐ_k)|+⟩ = ⟨+| Ẑ_isin(2γ𝖶_ijẐ_i)cos(∑_k≠ i,j 2γ𝖶_jkẐ_k) + Ẑ_i cos(2γ𝖶_ijẐ_i) sin(∑_k≠ i,j 2γ𝖶_jkẐ_k)|+⟩. The second term in Eq. (<ref>), ⟨+|Ẑ_icos(2γ𝖶_ijẐ_i)⋯|+⟩, evaluates to zero due to the ℤ_2 symmetry. The first term is nonzero. Using the identity sin(2γ𝖶_ijẐ_i)=Ẑ_isin(2γ𝖶_ij), it reduces to sin(2γ𝖶_ij)⟨+|cos(∑_k≠ i,j 2γ𝖶_jkẐ_k)|+⟩.
Now, repeatedly expanding the cosine again, and using ℤ_2 symmetry to send appropriate terms to zero at each step, we are left with ⟨+|Ẑ_i sin(∑_k 2γ𝖶_jkẐ_k)X̂_j|+⟩ = sin(2γ𝖶_ij)∏_k≠ i,jcos(2γ𝖶_jk). A similar argument to the above yields that the second term in Eq. (<ref>) is, ⟨+|sin(∑_k 2γ𝖶_ikẐ_k)X̂_iẐ_j|+⟩ = sin(2γ𝖶_ij) ∏_k≠ i,jcos(2γ𝖶_ik). The nonzero terms in Eq. (<ref>) are ⟨+|cos (∑_k 2γ𝖶_ikẐ_k)Ŷ_icos(∑_k 2γ𝖶_jkẐ_k)Ŷ_j|+⟩ and ⟨+|sin (∑_k 2γ𝖶_ikẐ_k) X̂_i sin (∑_k 2γ𝖶_jkẐ_k)X̂_j|+⟩. Note that X̂_i commutes with sin(∑_k 2γ𝖶_ikẐ_k) and recall that |+⟩ is an eigenstate of X̂_i. First, let us compute ⟨+|sin (∑_k 2γ𝖶_ikẐ_k) X̂_i sin (∑_k 2γ𝖶_jkẐ_k)X̂_j|+⟩ = ⟨+|sin (∑_k 2γ𝖶_ikẐ_k) sin (∑_k 2γ𝖶_jkẐ_k)|+⟩. We use the trigonometric formula sin A sin B = 1/2[cos(A-B) - cos(A+B)] and obtain, ⟨+|sin(∑_k 2γ𝖶_ikẐ_k)sin(∑_k 2γ𝖶_jkẐ_k)|+⟩ = 1/2⟨+|cos[∑_k 2γ(𝖶_ik-𝖶_jk) Ẑ_k]|+⟩ -1/2⟨+|cos[∑_k 2γ(𝖶_ik+𝖶_jk)Ẑ_k]|+⟩. Repeatedly expanding the cosines and using ℤ_2 symmetry at each step to keep only nonzero terms, it simplifies to, ⟨+|sin(∑_k 2γ𝖶_ikẐ_k) sin(∑_k 2γ𝖶_jkẐ_k)|+⟩ = 1/2∏_k cos 2γ(𝖶_ik-𝖶_jk) - 1/2∏_k cos 2γ(𝖶_ik+𝖶_jk) = cos^2(2γ𝖶_ij)/2∏_k≠ i,jcos 2γ(𝖶_ik-𝖶_jk) - cos^2(2γ𝖶_ij)/2∏_k≠ i,jcos 2γ(𝖶_ik+𝖶_jk). Next, let us compute ⟨+|cos (∑_k 2γ𝖶_ikẐ_k)Ŷ_i cos (∑_k 2γ𝖶_jkẐ_k) Ŷ_j|+⟩ = ⟨+|Ŷ_i cos (∑_k 2γ𝖶_ikẐ_k) cos (∑_k 2γ𝖶_jkẐ_k) Ŷ_j|+⟩. Using the trigonometric formula cos Acos B = 1/2[cos(A+B)+cos(A-B)], repeatedly expanding the cosines, and again using ℤ_2 symmetry, we obtain, ⟨+|cos(∑_k 2γ𝖶_ikẐ_k)Ŷ_i cos(∑_k 2γ𝖶_jkẐ_k)Ŷ_j|+⟩ = sin^2(2γ𝖶_ij)/2∏_k≠ i,jcos 2γ(𝖶_ik-𝖶_jk) - sin^2(2γ𝖶_ij)/2∏_k≠ i,jcos 2γ(𝖶_ik+𝖶_jk). Finally, adding together the results from Eqs. (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), multiplied by the appropriate coefficients, the two-point correlation between variables i and j after a one-layer (p=1) QAOA circuit reads, ⟨Ẑ_iẐ_j⟩_1= sin(2β)cos(2β)sin(2γ𝖶_ij)[∏_k≠ i,jcos(2γ𝖶_ik)+∏_k≠ i,jcos(2γ𝖶_jk)] -sin^2(2β)/2[∏_k≠ i,jcos2γ(𝖶_ik+𝖶_jk)-∏_k≠ i,jcos2γ(𝖶_jk-𝖶_ik)]. It follows that the expectation value of the objective function is, ⟨Ĉ⟩_1=∑_ij𝖶_ij{ sin(2β)cos(2β)sin(2γ𝖶_ij)[∏_k≠ i,jcos(2γ𝖶_ik)+∏_k≠ i,jcos(2γ𝖶_jk)] -sin^2(2β)/2[∏_k≠ i,jcos2γ(𝖶_ik+𝖶_jk)-∏_k≠ i,jcos2γ(𝖶_jk-𝖶_ik)]}. § ANALYTICAL RESULTS FOR THE QRR ALGORITHM §.§ Sherrington-Kirkpatrick with random ±1 weights §.§.§ Rewriting the correlation matrix We define the correlation matrix element 𝖹^(p=1)_ij=(δ_ij-1)⟨Ẑ_iẐ_j⟩_1, where δ_ij is the Kronecker delta. For N→+∞, it is shown that the optimal QAOA angles are β=-π/8 and γ≃ 1/2√(N) <cit.>. Plugging this into Eq. (<ref>), it follows that, -𝖹^(p=1)_i≠ j=(-√(2)/2)(√(2)/2)sin(𝖶_ij/√(N))× 2cos^N-2(1/√(N))-1/4[∏_k≠ i,jcos(𝖶_ik+𝖶_jk/√(N))-∏_k≠ i,jcos(𝖶_jk-𝖶_ik/√(N))]. We use sin x∼ x for x≪ 1 and cos^x-2(1/√(x))∼ e^-1/2 for x→+∞, 𝖹^(p=1)_i≠ j=𝖶_ij1/√(eN)+1/4[∏_k≠ i,jcos(𝖶_ik+𝖶_jk/√(N))-∏_k≠ i,jcos(𝖶_jk-𝖶_ik/√(N))]. If we compare terms for a given k one by one in the two products above, one is equal to 1 while the other is equal to cos(2/√(N)). We denote 𝖪_ij∈[0,N-2] as the number of times 𝖶_jk-𝖶_ik≠ 0 for the edge (i,j). Then, 𝖹^(p=1)_i≠ j=𝖶_ij1/√(eN)+1/4[cos^N-2-𝖪_ij(2/√(N)) - cos^𝖪_ij(2/√(N))]. The distribution of 𝖪_ij follows from a discrete one-dimensional random walk with steps +1 or 0 of equal probability. On average, 𝔼[𝖪_ij]=N/2-1, and for a given walk, we denote 𝖭_ij∈[-N/2+1, N/2-1] the distance from the expected average position. This is now a random walk on the variable 𝖭_ij with steps ± 1 and a total number of steps of N/2-1.
We introduce, 𝖵_i≠ j(N) = √(eN)/4cos^N/2-1(2/√(N))[cos^-𝖭_ij(2/√(N)) - cos^𝖭_ij(2/√(N))], which implies that, lim_N→+∞𝖵_i≠ j(N)∼𝖭_ij/√(eN). In the limit of large N, because 𝖭_ij is normally distributed with mean zero and variance N/4, 𝖵_ij also follows a normal distribution with mean zero and variance 1/4e. We write, 𝖹^(p=1)_i≠ j=1/√(eN)(𝖶_ij+𝖵_ij). §.§.§ Equivalence with classical relax-and-round approach In the large N limit, we show that the matrices 𝖶 and 𝖹^(p=1) share the same eigenvectors, and thus provide the same solution when used in the relax-and-round approach. To do so, we show that 𝖶 and 𝖵 commute (this also assumes that 𝖶 has distinct eigenvalues, or that 𝖵 does). For this, we first write 2𝖭_ij=-∑_k𝖶_ik𝖶_kj. We have, [𝖶,𝖹^(p=1)] =[𝖶,𝖵] ⇒[𝖶𝖵]_ij-[𝖵𝖶]_ij =1/√(eN)(∑_k 𝖶_ik𝖭_kj - ∑_k 𝖭_ik𝖶_kj) =-1/2√(eN)(∑_k,q𝖶_ik𝖶_kq𝖶_qj - ∑_k,q𝖶_iq𝖶_qk𝖶_kj) =-1/2√(eN)([𝖶^3]_ij - [𝖶^3]_ij)=0. This proves that in the large N limit, 𝖶 and 𝖵 share the same eigenvectors, although they may not be ordered in the same way with respect to their corresponding eigenvalues. Thus, the relax-and-round approach based on 𝖹^(p=1) has the same solution as the classical RR algorithm based on 𝖶. §.§ Unit-weight d-regular graphs §.§.§ The correlation matrix We consider unit-weight d-regular graphs, i.e., graphs where each node is connected to exactly d others. The weights 𝖶_ij of the adjacency matrix of the graph are 1 if two nodes are connected, and 0 otherwise. The correlation matrix element 𝖹^(p=1)_ij=(δ_ij-1)⟨Ẑ_iẐ_j⟩_1, where δ_ij is the Kronecker delta, reads, -𝖹^(p=1)_i≠ j= 𝖶_ij×sin(2β)cos(2β)sin(2γ)cos^d-1(2γ) -sin^2(2β)/2[∏_k≠ i,jcos2γ(𝖶_ik+𝖶_jk)-∏_k≠ i,jcos2γ(𝖶_jk-𝖶_ik)]. We introduce the scalar, f=-2sin(2β)cos(2β)sin(2γ)cos^d-1(2γ), and rewrite the correlation matrix as, 𝖹^(p=1)_i≠ j=f(𝖶_ij+tan(2β)/4tan(2γ)cos^d(2γ)[∏_k≠ i,jcos2γ(𝖶_ik+𝖶_jk)-∏_k≠ i,jcos2γ(𝖶_jk-𝖶_ik)]). For a pair of nodes (i,j), we denote ν_ij as the number of nodes k≠ i,j which are nearest-neighbors to both i and j but for which i and j are not nearest-neighbors (𝖶_ij=0). Then, * there will be a total of 2d-ν_ij≥ 0 nontrivial nonunit terms in the first product. When both i and j are nearest-neighbors with k, we get a contribution cos4γ. When k is the nearest-neighbor of either i or j, we get a contribution cos2γ. Else the contribution is trivially cos0=1. Thus, ∏_k≠ i,jcos2γ(𝖶_ik+𝖶_jk)=cos^ν_ij(4γ)cos^2d-2ν_ij(2γ). * for the second product, we get nontrivial nonunit terms only if k is a nearest-neighbor to either i or j, but not both. Thus ∏_k≠ i,jcos2γ(𝖶_ik-𝖶_jk)=cos^2d-2ν_ij(2γ). For a pair of nodes (i,j), we denote λ_ij as the number of nodes k≠ i,j which are nearest-neighbors to both i and j and for which i and j are also nearest-neighbors (𝖶_ij=1). Then, * each of the nodes i and j has a total of d-1 nearest-neighbors other than j and i, respectively. When k is a nearest-neighbor of both i and j, the first product will give a contribution cos4γ. When k is the nearest-neighbor of either i or j, but not both, we get a contribution cos2γ. Else, the contribution will be trivially cos0=1. Thus, ∏_k≠ i,jcos2γ(𝖶_ik+𝖶_jk)=cos^λ_ij(4γ)cos^2d-2-2λ_ij(2γ). * for the second product, we get trivial unit contributions for triangles and when k is neither a nearest-neighbor of i nor of j. Thus ∏_k≠ i,jcos2γ(𝖶_jk-𝖶_ik)=cos^2d-2-2λ_ij(2γ). Plugging everything together, we get, 𝖹^(p=1)_i≠ j/f=𝖶_ij + tan(2β)/4tan(2γ)cos^d(2γ)×{cos^2d-2ν_ij(2γ)[cos^ν_ij(4γ) - 1]     if 𝖶_ij=0 cos^2d-2-2λ_ij(2γ)[cos^λ_ij(4γ)-1]     if 𝖶_ij=1 .
which we can rewrite in a single expression, 𝖹^(p=1)_i≠ j/f=𝖶_ij + tan(2β)cos^d(2γ)/4tan(2γ)×{  (1-𝖶_ij)cos^-2ν_ij(2γ)[cos^ν_ij(4γ) - 1] +𝖶_ijcos^-2-2λ_ij(2γ)[cos^λ_ij(4γ)-1]}. We can express λ_ij and ν_ij directly with the weights. We introduce n_ij=∑_k𝖶_ik𝖶_kj, which counts the number of times nodes i and j share a common nearest-neighbor k, and this is independent of whether i and j are nearest-neighbors themselves. We note that n_ij is different from 𝖭_ij previously introduced for the SK problems. Therefore, we can write, λ_ij = 𝖶_ijn_ij,   and  ν_ij=(1-𝖶_ij)n_ij =n_ij - λ_ij, and express 𝖹^(p=1)_i≠ j as a function of λ_ij and n_ij exclusively, 𝖹^(p=1)_i≠ j/f=𝖶_ij + tan(2β)cos^d(2γ)/4tan(2γ)×{  (1-𝖶_ij)cos^-2n_ij+2λ_ij(2γ)[cos^n_ij-λ_ij(4γ) - 1] +𝖶_ijcos^-2-2λ_ij(2γ)[cos^λ_ij(4γ)-1]}. §.§.§ The ring graph We consider the special case of a d=2 regular graph with unit weight, which is a ring. In that case, λ_ij=0 ∀ i,j and n_ij=1 if and only if i and j are next-nearest neighbors, and zero otherwise. If i and j are next-nearest neighbors, then by definition, 𝖶_ij=0. We get, 𝖹^(p=1)_ij/f=𝖶_ij + n_ijtan(2β)/4tan(2γ)[cos(4γ) - 1]. Ordering the variables according to the ring structure makes the matrix 𝖹^(p=1) circulant, with nonzero entries when (i,j) are nearest or next-nearest neighbors. Its eigenvectors are the Fourier modes. Because the adjacency matrix 𝖶 of the problem is also circulant, it shares the same eigenvectors as 𝖹^(p=1). Thus, the QRR algorithm on 𝖹^(p=1) has the same solution as the classical RR algorithm on 𝖶. For an even number of variables, one such Fourier mode is |ϕ⟩=(+1,-1,+1,…, -1)^T/√(N), which solves the initial problem exactly when rounded. It follows that the QRR algorithm at p=1 also solves the ring graph exactly. The result holds independently of the values of the QAOA angles β and γ. For completeness, the optimal angles are β=γ=π/8 and the correlation matrix takes the following form, 𝖹^(p=1) = [ 0 1/2 -1/8 0 ⋯ 0 -1/8 1/2; 1/2 0 1/2 -1/8 0 ⋱ 0 -1/8; -1/8 1/2 0 1/2 -1/8 0 ⋱ 0; 0 ⋱ ⋱ ⋱ ⋱ ⋱ ⋱ ⋮; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; -1/8 0 ⋱ ⋯ -1/8 1/2 0 1/2; 1/2 -1/8 0 ⋯ 0 -1/8 1/2 0 ], with minimum eigenvalue -5/4 and corresponding eigenvector |ϕ⟩=(+1,-1,+1,…, -1)^T/√(N). Thus the leading eigenvector of the correlation matrix rounds to the optimal solution of the problem. §.§.§ The Bethe lattice The result for the ring graph extends straightforwardly to the Bethe lattice. We consider a unit-weight Bethe lattice where each node has d nearest-neighbors. In that case, λ_ij=0 ∀ i,j and n_ij=1 if and only if i and j are next-nearest neighbors, and zero otherwise. If i and j are next-nearest neighbors, then by definition, 𝖶_ij=0. We get, 𝖹^(p=1)_ij/f=𝖶_ij + tan(2β)cos^d-2(2γ)/4tan(2γ)[cos(4γ) - 1](∑_k𝖶_ik𝖶_kj). We now compute the commutator, [𝖶𝖹^(𝗉=1)]_ij-[𝖹^(𝗉=1)𝖶]_ij =ftan(2β)cos^d-2(2γ)/4tan(2γ)[cos(4γ) - 1](∑_k,q𝖶_ik𝖶_kq𝖶_qj - ∑_k,q𝖶_iq𝖶_qk𝖶_kj) =ftan(2β)cos^d-2(2γ)/4tan(2γ)[cos(4γ) - 1]([𝖶^3]_ij - [𝖶^3]_ij)=0. Thus, the QRR algorithm on 𝖹^(p=1) has the same solution as the classical RR algorithm on 𝖶. §.§.§ The ring graph with additional next-nearest neighbor connections We consider the special case of a d=4 regular graph with unit weight, which is a ring with additional next-nearest neighbor connections between the vertices. In that case, λ_ij=0, 1, or 2, and ν_ij=0, 1, or 2. When limited to these values, we have the trigonometric identity, cos^-2ν_ij(2γ)[cos^ν_ij(4γ) - 1]=-2ν_ijtan^2(2γ) for ν_ij∈{0,1,2}      (ν_ij↔λ_ij).
Thanks to the linearization with respect to ν_ij and λ_ij, we get, 𝖹^(p=1)_i≠ j/f=𝖶_ij - 1/2tan(2β)cos^4(2γ)tan(2γ)[(1-𝖶_ij)ν_ij+𝖶_ijcos^-2(2γ)λ_ij]. We recall that we consider a graph with unit weights where 𝖶_ij=0,1. There is redundancy because ν_ij=0 when 𝖶_ij=1 and λ_ij=0 when 𝖶_ij=0. Thus, we drop 𝖶_ij and (1-𝖶_ij) in the second term, 𝖹^(p=1)_i≠ j/f=𝖶_ij - 1/2tan(2β)cos^4(2γ)tan(2γ)[ν_ij+cos^-2(2γ)λ_ij]. We now express ν_ij and λ_ij in terms of n_ij and 𝖶_ij, 𝖹^(p=1)_i≠ j/f =𝖶_ij - 1/2tan(2β)cos^4(2γ)tan(2γ)[(1-𝖶_ij)n_ij+𝖶_ijn_ijcos^-2(2γ)] =𝖶_ij - 1/2tan(2β)cos^4(2γ)tan(2γ)n_ij{1+𝖶_ij[cos^-2(2γ)-1]} =𝖶_ij - n_ij/2tan(2β)cos^4(2γ)tan(2γ) - n_ij𝖶_ij/2tan(2β)cos^4(2γ)tan(2γ)[cos^-2(2γ)-1]. Depending on (i,j), there are four possible cases for large N: * If i and j are nearest-neighbors, then 𝖶_ij=1 and n_ij=2. * If i and j are next-nearest-neighbors, then 𝖶_ij=1 (they are connected by construction in this graph) and n_ij=1. * If i and j are next-next-nearest-neighbors, then 𝖶_ij=0 and n_ij=2. * If i and j are next-next-next-nearest-neighbors, then 𝖶_ij=0 and n_ij=1. Ordering the variables according to the ring structure makes the matrix 𝖹^(p=1) circulant. Its eigenvectors are the Fourier modes. Because the adjacency matrix 𝖶 of the problem is also circulant, it shares the same eigenvectors as 𝖹^(p=1). Thus, the QRR algorithm based on 𝖹^(p=1) has the same solution as the classical RR algorithm on 𝖶. When N is a multiple of 4, it is known <cit.> that the ground state of the initial Ising model is the antiphase “⟨ 2⟩” state of the form ±±∓∓±±…∓∓, with a four-fold degeneracy obtained by translating this state by one, two, or three entries. Because this sign structure is captured by a Fourier mode, the QRR algorithm based on 𝖹^(p=1) solves this problem exactly, as does the RR algorithm on 𝖶. §.§.§ The complete graph The adjacency matrix 𝖶 of a unit-weight complete graph is filled with ones except on its diagonal which is zero. The QAOA circuit conserves the symmetry of this graph, resulting in a correlation matrix 𝖹^(p)∝𝖶. Hence, they trivially share the same eigenvectors and their eigenvalues are equivalent up to a global rescaling factor. §.§.§ Circulant graphs For circulant graphs, the QAOA circuit will lead to a correlation matrix 𝖹^(p) which is also circulant. The eigenvectors of circulant matrices are Fourier modes. Thus, both 𝖶 and 𝖹^(p) will share the same eigenvectors. This is the property we used previously for the cycle graph and the cycle graph with additional edges between next-nearest neighboring vertices. §.§.§ The honeycomb lattice We can extend the above calculations to a two-dimensional honeycomb lattice, either infinite or with periodic boundary conditions for simplicity. It has d=3, λ_ij=0 ∀ i,j, and n_ij=1 if and only if i and j are next-nearest neighbors, and zero otherwise. If i and j are next-nearest neighbors, then by definition, 𝖶_ij=0. We get, 𝖹^(p=1)_ij/f=𝖶_ij + tan(2β)cos^2(2γ)/4sin(2γ)[cos(4γ) - 1](∑_k𝖶_ik𝖶_kj). We now compute the commutator, [𝖶𝖹^(𝗉=1)]_ij-[𝖹^(𝗉=1)𝖶]_ij =ftan(2β)cos^2(2γ)/4sin(2γ)[cos(4γ) - 1](∑_k,q𝖶_ik𝖶_kq𝖶_qj - ∑_k,q𝖶_iq𝖶_qk𝖶_kj) =ftan(2β)cos^2(2γ)/4sin(2γ)[cos(4γ) - 1]([𝖶^3]_ij - [𝖶^3]_ij)=0. Thus, the QRR algorithm based on 𝖹^(p=1) has the same solution as the classical RR algorithm on 𝖶 for the unit-weight honeycomb lattice. §.§.§ Random 3-regular graphs By definition, d=3 for 3-regular graphs, as all vertices have exactly 3 nearest-neighbors. Because of that, λ_ij=0, 1, or 2, and ν_ij=0, 1, 2, or 3.
We can use the same trigonometric identity as for the ring graph with additional next-nearest neighbor connections, except for the ν_ij=3 case. We handle this case separately by introducing the Kronecker delta δ_ν_ij3. We get, 𝖹^(p=1)_i≠ j/f=𝖶_ij + tan(2β)cos^3(2γ)/4tan(2γ){(1-𝖶_ij)[δ_ν_ij3[cos^3(4γ) - 1]/cos^6(2γ) - (1-δ_ν_ij3)2ν_ijtan^2(2γ)]-𝖶_ij2λ_ijtan^2(2γ)/cos^2(2γ)}, and expand the terms in the bracket, 𝖹^(p=1)_i≠ j/f=𝖶_ij + tan(2β)cos^3(2γ)/4tan(2γ){  δ_ν_ij3[cos^3(4γ) - 1]/cos^6(2γ)-(1-δ_ν_ij3)2ν_ijtan^2(2γ)-𝖶_ijδ_ν_ij3[cos^3(4γ) - 1]/cos^6(2γ) +𝖶_ij(1-δ_ν_ij3)2ν_ijtan^2(2γ)-𝖶_ij2λ_ijtan^2(2γ)/cos^2(2γ)}. Because we consider unit-weight graphs, we remove redundancies that may exist when multiplying 𝖶_ij with λ_ij, ν_ij, and δ_ν_ij3, 𝖹^(p=1)_i≠ j/f=𝖶_ij + tan(2β)cos^3(2γ)/4tan(2γ){ δ_ν_ij3[cos^3(4γ) - 1]/cos^6(2γ)-2ν_ijtan^2(2γ) +6δ_ν_ij3tan^2(2γ)-2λ_ijtan^2(2γ)/cos^2(2γ)}. Using the trigonometric identity [cos^3(4γ) - 1]cos^-6(2γ) + 6tan^2(2γ)=-2tan^6(2γ), this can be further simplified to, 𝖹^(p=1)_i≠ j/f=𝖶_ij - tan(2β)cos^2(2γ)sin(2γ)/2[δ_ν_ij3tan^4(2γ)+ ν_ij + λ_ijcos^-2(2γ)], and then to, 𝖹^(p=1)_i≠ j/f=𝖶_ij - tan(2β)cos^2(2γ)sin(2γ)/2[n_ij + δ_n_ij3tan^4(2γ) + 𝖶_ijn_ijtan^2(2γ)]. Unlike the other cases previously discussed, it cannot be easily shown that the last two terms commute with the adjacency matrix. However, relying on numerical experiments, we find in Sec. <ref> that the operator norm of the commutator [𝖶,𝖹^(p=1)] slowly goes to zero as N→+∞. We find strong numerical evidence that the performance of the QRR algorithm is on par with that of the classical RR algorithm, similarly to the SK models. § NUMERICAL EXPERIMENTS The relax-and-round algorithm requires expectation values of Pauli Ẑ observables to construct the correlation matrix. When the optimal angles of the QAOA are unknown, these expectation values are also necessary to compute the cost in order to optimize the angles. Numerical experiments are performed using the Python packages NumPy <cit.>, SciPy <cit.>, and Numba <cit.>. Graph problems are generated using the Python package NetworkX <cit.>. For the quantum relax-and-round version of the Goemans-Williamson algorithm, which requires finding the optimal correction vector, we use the Python convex optimization solver CVXPY <cit.>. §.§ Running the QAOA algorithm For QAOA depth p=1, we compute these expectation values using the analytical formulas of Sec. <ref> (a minimal code sketch is given below). We use these formulas for up to several hundred variables N. For p>1, we rely on state vector simulations, which explicitly compute the quantum state |Ψ⟩_p resulting from the QAOA circuit. We use state vector simulations for up to N=26 variables and compute expectation values exactly, without sampling the quantum state, unless specified otherwise. When needed, we optimize the angles of the QAOA circuit using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm <cit.> with a maximum number of 100 iterations. For a problem of size N using the QAOA with depth p (there are 2p parametric angles), we run the BFGS algorithm independently min(2^4+p,2^10) times with random initial angles drawn uniformly from [0,2π]^2p. The best result of these simulations is kept and the angles are considered optimal for computing expectation values. However, it should be noted that there is no guarantee that these angles are actually optimal. §.§ Collecting statistics Unless specified otherwise, the data are averaged over 10^3 to 10^5 randomly generated problem instances.
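To make the p=1 pipeline concrete, the following is a minimal sketch implementing the analytical two-point formula derived above to assemble the correlation matrix 𝖹^(p=1) (Python/NumPy; the function and variable names are ours, and this is an illustration under our own conventions rather than the exact code used for the experiments):

import numpy as np

def correlation_matrix_p1(W, beta, gamma):
    """Correlation matrix Z^(p=1) with entries (delta_ij - 1) <Z_i Z_j>_1,
    built from the analytical one-layer QAOA formula."""
    N = W.shape[0]
    C = np.cos(2.0 * gamma * W)  # C[i, k] = cos(2 gamma W_ik)
    Z = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            mask = np.ones(N, dtype=bool)
            mask[[i, j]] = False
            term1 = (np.sin(2 * beta) * np.cos(2 * beta)
                     * np.sin(2 * gamma * W[i, j])
                     * (np.prod(C[i, mask]) + np.prod(C[j, mask])))
            term2 = (np.sin(2 * beta) ** 2 / 2.0
                     * (np.prod(np.cos(2 * gamma * (W[i, mask] + W[j, mask])))
                        - np.prod(np.cos(2 * gamma * (W[j, mask] - W[i, mask])))))
            Z[i, j] = -(term1 - term2)  # off-diagonal entries; diagonal stays zero
    return Z

# Example: an SK-type instance with random +/-1 weights and the large-N angles.
rng = np.random.default_rng(0)
N = 64
W = np.triu(rng.choice([-1.0, 1.0], size=(N, N)), k=1)
W = W + W.T
beta, gamma = -np.pi / 8, 1.0 / (2.0 * np.sqrt(N))
Z1 = correlation_matrix_p1(W, beta, gamma)

The double loop with masked products costs 𝒪(N^3) operations, consistent with using the analytical formulas for up to several hundred variables.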
§.§ Relax-and-round step We use a standard numerical eigendecomposition method to perform the relax-and-round approach on the adjacency and correlation matrices 𝖶 and 𝖹^(p). For an N-variable problem both matrices are of size N× N, real, and symmetric. The eigendecomposition returns N eigenvectors {z}, which are real and normalized to unity. The eigenvectors are rounded entrywise {z←sign(z)∈{±1}^N}. We also consider sign-flipped eigenvectors {-z}, also rounded entrywise. Therefore, we obtain a total of 2N potential solutions to the initial problem. The objective value of each of these solutions is computed, and the one extremizing the objective function is returned by the relax-and-round algorithm. A zero entry is rounded at random to ± 1. § IMPROVEMENT OF THE APPROXIMATION RATIO FROM QAOAₚ(W) TO QRR(Z⁽ᴾ⁾) In Fig. <ref>, we plot the ratio of (1-α) obtained from the relax-and-round approach and the QAOA. In all cases, the numerical experiments show that the relax-and-round algorithm outperforms the QAOA at the same depth. Whether this remains true for larger N when p>1 is an open question. In the limit p→+∞, both methods should converge to the optimal answer. At low p, our numerical results show that the relax-and-round approach is better. § COMMUTATOR BETWEEN THE ADJACENCY AND THE CORRELATION MATRICES W AND Z⁽ᴾ⁾ We consider the operator norm ‖·‖_2 of the commutator [𝖶, 𝖹^(p)] between the adjacency and the correlation matrices 𝖶 and 𝖹^(p), ‖[𝖶, 𝖹^(p)]‖_2 = ‖𝖶𝖹^(p) - 𝖹^(p)𝖶‖_2 = σ_max, where σ_max is the largest singular value of the commutator. We have used the commutator to prove analytically for several problem instances that the relax-and-round approach on 𝖶 or 𝖹^(p=1) leads to the same result. In Fig. <ref>, we numerically compute and plot the norm as a function of the problem size N at various values of the QAOA depth p for different problem types. We find that the norm of the commutator increases with p. § EFFECT OF THE FINITE NUMBER OF CIRCUIT EXECUTIONS We investigate the effect of a finite number of circuit executions n_ex to compute the entries of the correlation matrix 𝖹^(p) used in the relax-and-round approach. After collecting n_ex bit strings {b∈{0,1}^N}^n_ex, expectation values are estimated as, ⟨Ẑ_iẐ_j⟩≈1/n_ex∑_{b}⟨b|Ẑ_iẐ_j|b⟩, with an error decreasing as ∼ 1/√(n_ex) as per the central limit theorem. The effect of a finite number of circuit executions on the correlation matrix can be modeled by a random component (see the sketch below), 𝖹_i≠ j^(p)(n_ex) = 𝖹_i≠ j^(p)(∞) + 𝒩(0,1)/√(n_ex), with 𝒩(0,1) a random normal variable of mean zero and variance unity, such that 𝖹^(p) remains symmetric. The robustness of the eigendecomposition of noisy matrices has a rich history; we suggest Ref. <cit.> for a recent review. The size N of the matrix, the strength of the noise 1/√(n_ex), the level spacing between eigenvalues, and the condition number of the matrix are parameters typically playing a role in the distance between the eigenvectors of 𝖹^(p)(n_ex) and those of 𝖹^(p)(∞). Here, the fact that we round the entries after computing the eigenvectors complicates the analysis. A noisy toy model for clustering, based on the stochastic block model <cit.>, suggests that rounding adds robustness to noise. In the absence of a theoretical framework for a more general case, we rely on numerical experiments. We run the QAOA at p=1 for SK problem instances with a finite number of circuit executions to evaluate the correlation matrix 𝖹^(p=1)(n_ex). We consider n_ex=4, 16, 64, 256, 1,024, 4,096, 16,384, and 65,536.
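A minimal sketch of the finite-shot noise model above (names are ours, and the exact correlation matrix is replaced by a random symmetric stand-in purely for illustration):

import numpy as np

def sampled_correlation_matrix(z_exact, n_ex, rng):
    """Finite-shot model: Z(n_ex) = Z(inf) + N(0,1)/sqrt(n_ex),
    kept symmetric with a zero diagonal."""
    n = z_exact.shape[0]
    noise = np.triu(rng.normal(size=(n, n)) / np.sqrt(n_ex), k=1)
    return z_exact + noise + noise.T

# Stand-in for an exact p=1 correlation matrix of a small instance.
rng = np.random.default_rng(1)
n = 16
z_exact = np.triu(rng.normal(size=(n, n)), k=1)
z_exact = z_exact + z_exact.T
for n_ex in [4, 16, 64, 256, 1024, 4096, 16384, 65536]:
    z_noisy = sampled_correlation_matrix(z_exact, n_ex, rng)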
We then perform the relax-and-round approach and report the data in Fig. <ref>. We find that the average objective function value follows a scaling relation relating n_ex to the number of variables N, C_𝖹^(1)(n_ex)=C_𝖹^(1)(∞) 𝒢(n_ex N^-3/2), where 𝒢(x) is a scaling function such that 𝒢(x→+∞)=1. Hence, achieving a constant relative performance with respect to the ideal n_ex=∞ case only requires a polynomial number of circuit executions with respect to the problem size N. We empirically find that this number is well-fitted by an exponent κ≈ 3/2, i.e., n_ex∼ N^κ with κ≈ 3/2. It remains an open question to theoretically establish this scaling law, and in particular the exponent κ≈ 3/2. We extend the analysis by running the same numerical experiments for various values of the QAOA depth p. In the p→+∞ limit, a single circuit execution is enough to capture the correlation matrix that will lead to the optimal solution. This corresponds to κ=0. We show in Fig. <ref> that for a fixed accuracy, the number of circuit executions required decreases as p increases. Moreover, the scaling relation with κ≈ 1.5 does not hold as p increases, in line with the expected behavior as p→+∞. § SIZE DEPENDENCE FOR THE DATA OF FIGURE 2 IN THE MAIN TEXT AT P=1 Figure 2 of the main text suggests that, on average, the QRR algorithm on 𝖹^(p=1) performs better than the classical RR on 𝖶 for the NWS and BA problem types at N=16. Here, we consider different problem sizes N≤ 256 with the QAOA depth p=1 using the analytical formula. The data of Fig. <ref> seem to confirm, on average, the superiority of the QRR algorithm for the NWS and BA problem types: The ratio of the objective values—or equivalently the ratio of the approximation ratios—is larger than 1. For the SK and 3REG problem instances, the two approaches perform, on average, equally well. We prove analytically in this work that this is expected for the SK models in the N→+∞ limit, and numerical experiments show that this result is robust at finite N. § RELAX-AND-ROUND FOR THE MAXIMUM INDEPENDENT SET PROBLEM WITH QUANTUM ANNEALING The objective function to minimize for the weighted maximum independent set problem is given by <cit.>, min_z_i=± 1C (z)=J∑_i<j𝖶_ijz_iz_j+∑_i[Jdeg(i)-2u_i]z_i, where 𝖶_ij=0 or 1 depending on whether two vertices of the graph problem are connected by an edge, u_i∈ℝ^+ is a vertex-dependent weight, deg(i) is the degree of vertex i, and J>u_i ∀ i is a penalty parameter. We draw u_i∈[0,1] at random from a uniform distribution and choose J=2. The maximum independent set of the graph is given by the variables with the value z_i=+1 in the optimal solution z_opt. We consider random unit-disk graphs with N variables and density parameter ρ=7 (a code sketch of this construction is given below). Unlike the rest of this work focusing on the QAOA, we now use a quantum annealing protocol <cit.> to obtain the correlation matrix 𝖹. We define the Hamiltonian, ℋ̂(T,t)=-(1-t/T)∑_j=1^NX̂_j+t/TĈ, where the operator Ĉ is obtained by replacing the binary variables z_j with Pauli operators Ẑ_j, X̂_j is the Pauli operator, t∈[0,T] the time, and T the total evolution time. The quantum state at time T reads, |Ψ⟩_T = 𝒯exp[-i∫^T_0dtℋ̂(T,t)]Ĥ^⊗ N|0⟩^⊗ N, where Ĥ is the Hadamard gate and 𝒯 indicates a time-ordered exponential. In the limit T→+∞, the quantum state will converge to the ground state of the objective function Ĉ, i.e., the optimal solution z_opt to the combinatorial optimization problem.
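Before turning to the discretized dynamics, a minimal sketch of an instance and its objective function reads as follows; we use NetworkX's random_geometric_graph as a stand-in for the random unit-disk instances, and the choice of radius (in place of the density parameter ρ) is our own assumption, not the authors' prescription:

import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
N = 30
# Unit-disk-style instance: random points in the plane, edges below a cutoff radius.
graph = nx.random_geometric_graph(N, radius=0.3, seed=2)
W = nx.to_numpy_array(graph)         # W_ij = 1 if (i, j) is an edge, else 0
u = rng.uniform(0.0, 1.0, size=N)    # vertex weights u_i drawn in [0, 1]
J = 2.0                              # penalty parameter, J > u_i for all i

def mis_objective(z):
    """C(z) = J sum_{i<j} W_ij z_i z_j + sum_i [J deg(i) - 2 u_i] z_i."""
    deg = W.sum(axis=1)
    quad = J * np.sum(np.triu(W, k=1) * np.outer(z, z))
    lin = np.sum((J * deg - 2.0 * u) * z)
    return quad + lin

z = rng.choice([-1.0, 1.0], size=N)  # a candidate spin configuration
value = mis_objective(z)             # variables with z_i = +1 form the candidate set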
In practice, we discretize the above unitary by introducing a finite time step δ t, |Ψ⟩_T = [∏^p_ℓ=1e^iδ tβ_ℓ∑_j=1^NX̂_je^-iδ tγ_ℓĈ]Ĥ^⊗ N|0⟩^⊗ N, where p=T/δ t, β_ℓ=1-ℓδ t/T, and γ_ℓ=ℓδ t/T. From the quantum state time-evolved for a total time T, we compute the correlation matrix 𝖹^(T) on which to perform the QRR algorithm, just like we did with the QAOA. In the following, we use δ t=0.1 and perform numerical experiments. We consider the objective function value from the relax-and-round approach divided by the expectation value ⟨Ĉ⟩_T. The data plotted in Fig. <ref> converge to zero as T→+∞, since both methods are expected to return the optimal solution. At finite T, the QRR algorithm systematically outperforms, on average, the quantum annealing protocol. This advantage increases with N. Note that we do not post-process the solutions to enforce independent sets. We observe that the advantage of the QRR algorithm over the underlying quantum annealing protocol decreases algebraically with T, approximately as T^-3/2. § A QUANTUM RELAX-AND-ROUND VERSION OF THE GOEMANS-WILLIAMSON ALGORITHM §.§ The p→+∞ limit In the p→+∞ limit, when the QAOA algorithm returns the optimal solution z_opt, the correlation matrix takes the form 𝖹^(∞)=𝖨 - z_opt⊗z_opt. The matrix in the bracket becomes 𝖣-𝖨+𝖽𝗂𝖺𝗀(u)+z_opt⊗z_opt, where z_opt⊗z_opt is a rank-one matrix. What is the optimal correcting vector u_opt such that the maximum eigenvalue is minimized? More generally, this eigenvalue problem is that of a diagonal matrix modified by a rank-one matrix, which has been studied mathematically. We denote d_i=𝖣_ii-1+u_i with i=1,… N the diagonal entries of 𝖣̃=𝖣-𝖨+𝖽𝗂𝖺𝗀(u) and λ_i the eigenvalues of 𝖣̃+z_opt⊗z_opt. Eigenvalues are ordered such that λ_i≤λ_i+1 and d_i≤ d_i+1. It can be shown that <cit.>, d_i≤λ_i≤ d_i+1    for i=1,2,… N-1, d_N≤λ_N≤ d_N + z_opt^⊤z_opt. The goal is to minimize the eigenvalue λ_N, which is bounded from below by d_N=𝖣_NN-1+u_N, where d_N is at least as large as d_N-1, etc. We recall the constraint ∑_iu_i=0. Hence, the optimal solution is such that the d_i are constant for all i, which is given by 𝖣+𝖽𝗂𝖺𝗀(u_opt)=tr(𝖣)𝖨/N. Thus, the leading eigenvector for the optimal correcting vector is that of the matrix -𝖨 + tr(𝖣)𝖨/N+z_opt⊗z_opt, which is simply ±z_opt/√(N), as a constant diagonal matrix is irrelevant for computing eigenvectors. Therefore, the above algorithm with the correlation matrix 𝖹^(p) substituted for the adjacency matrix 𝖶 solves the problem exactly in the limit p→+∞. §.§ The case of finite p It is more difficult to establish performance bounds for the finite p case for arbitrary graphs. We focus on vertex-transitive graphs. For such graphs, it is known that the optimal correcting vector for the Goemans-Williamson algorithm is null, i.e., u_opt=0 <cit.>. The correlation matrix 𝖹^(p) will have the same symmetries as the adjacency matrix: The correlation matrix can be pictured as the adjacency matrix of a graph which will also be vertex-transitive. It follows that the optimal vector is also u_opt=0. From there, demonstrating that 𝖶 and 𝖹^(p) share the same leading eigenvector is enough for the algorithm based on either 𝖶 or 𝖹^(p) to be equivalent. We have shown this is the case for different graphs such as the ring and complete graphs. We emphasize that the performance guarantee of the classical Goemans-Williamson algorithm is agnostic to the graph, and showing whether a similarly strong statement can be made for a quantum relax-and-round version remains an open question.
Here, we used the fact that the optimal correcting vector can be shown to be the same in the classical and quantum versions, but this is not necessary for the two methods to have equivalent performance.
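For completeness, the rounding stage described in the relax-and-round-step section admits a compact implementation; the following minimal sketch (Python/NumPy, names of our choosing) rounds every eigenvector and its sign flip entrywise and keeps the candidate that extremizes the objective — here, minimizes it:

import numpy as np

def relax_and_round(M, objective, rng):
    """Round every eigenvector of the real symmetric matrix M, and its
    sign flip, to {-1, +1}^N; return the best of the 2N candidates."""
    _, vecs = np.linalg.eigh(M)
    best_z, best_val = None, np.inf
    for v in vecs.T:
        for w in (v, -v):
            z = np.where(w == 0,
                         rng.choice([-1.0, 1.0], size=w.size),
                         np.sign(w))  # zero entries rounded at random
            val = objective(z)
            if val < best_val:
                best_z, best_val = z, val
    return best_z, best_val

# Example on a toy Ising objective C(z) = sum_{i<j} W_ij z_i z_j.
rng = np.random.default_rng(3)
N = 32
W = np.triu(rng.choice([-1.0, 1.0], size=(N, N)), k=1)
W = W + W.T
cost = lambda z: 0.5 * z @ W @ z  # equals sum_{i<j} W_ij z_i z_j for zero diagonal
z_rr, val_rr = relax_and_round(W, cost, rng)

Passing the adjacency matrix 𝖶 gives the classical RR baseline, while passing a correlation matrix 𝖹^(p) in its place gives the QRR variant.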
http://arxiv.org/abs/2307.10194v1
20230710134643
Important Clues that Facilitate Visual Emergence: Three Psychological Experiments
[ "Jingmeng Li", "Hui Wei" ]
q-bio.NC
[ "q-bio.NC", "cs.CV" ]
Important Clues that Facilitate Visual Emergence: Three Psychological Experiments Jingmeng Li, Hui Wei ======================================================================== Visual emergence is the phenomenon in which the visual system obtains a holistic perception after grouping and reorganizing local signals. The picture Dalmatian dog is known for its use in explaining visual emergence. This type of image, which consists of a set of discrete black speckles (speckles), is called an emerging image. Not everyone can find the dog in Dalmatian dog, and among those who can, the time spent varies greatly. Although Gestalt theory summarizes perceptual organization into several principles, it remains ambiguous how these principles affect the perception of emerging images. This study, therefore, designed three psychological experiments to explore the factors that influence the perception of emerging images. In the first, we found that the density of speckles in the local area and the arrangements of some key speckles played a key role in the perception of an emerging case. We set parameters in the algorithm to characterize these two factors. We then automatically generated diversified emerging-test images (ETIs) through the algorithm and verified their effectiveness in two subsequent experiments. Keywords: biological intelligence, visual emergence, perceptual organization, emerging image § INTRODUCTION Perception is defined as the process of transforming signals from one's surroundings into the experience of objects, events, sounds, and tastes <cit.>. About 80% of the information we receive each day comes from vision; this suggests that studying visual perception is an important way to explore human intelligence. We recognize words in books, objects on desks, or people in rooms so easily that we ignore the process leading from the initial visual signal to the emergence of perception. Studies have shown that about half of the cerebral cortex of primates participates in visual perception <cit.>; visual emergence therefore involves high computational complexity. Visual emergence is a phenomenon in which the visual system perceives meaningful wholes by integrating seemingly meaningless pieces <cit.>. The picture Dalmatian dog shown in Figure <ref> is often used to explain visual emergence. In the first half of the twentieth century, psychologists <cit.> summarized the laws of perceptual processing and proposed Gestalt theory, which emphasizes the holistic nature of human perception and is the most widely accepted theory of perceptual organization. Discovering a dalmatian dog in the emerging image shown in Figure <ref> is an object recognition task. The whole process can be divided into two stages: bottom-up and top-down <cit.>. In the bottom-up process, the visual system groups and integrates physical signals according to Gestalt principles and collects clues used for object recognition. In the top-down process, the visual system forms cognitive hypotheses by combining a priori knowledge and cognitive clues. Although Gestalt theory offers several valuable principles with regard to grouping and reorganizing stimuli, it can neither demonstrate the adequacy and necessity of these grouping principles nor reveal how the visual system perceives a dalmatian dog from the emerging image. It is necessary, therefore, to explore what specific factors hinder or facilitate the occurrence of visual emergence.
This not only has theoretical value for investigating visual perception in cognitive psychology but also helps promote the rational design of computer vision algorithms, thereby alleviating practices such as reliance on massive amounts of training data, expensive manual labeling, and huge computing power usage. In object recognition, we use object features such as color, texture, and shape. Traditional object recognition theories emphasize that shape is more important in object recognition <cit.>. Psychological-behavioral experiments have shown that surface information (color, texture) speeds up recognition but does not significantly improve recognition accuracy <cit.>. Biederman's recognition-by-components asserts that surface information only plays a role in low-level vision and provides cues for the organization and integration of visual signals while object recognition tasks rely on shape <cit.>. However, this view cannot explain discrimination between horses and zebras. If we only provide subjects with the shape of a zebra, they will likely mistake the zebra for a horse. The “shape + surface” computational framework for object recognition suggests that surface and shape information play a joint role in high-level visual processing, and that the role of surface information depends heavily on differences in structural properties between the objects in question <cit.>. According to “shape + surface” theory, the process of discovering the dalmatian dog in the emergent image shown in Figure <ref> can be described as follows. The visual system first obtains shape information, such as edges or contour segments, based on the physical features of visual signals in the bottom-up process. Then, it reorganizes the shape and surface information in the top-down process to discover more holistic combinations and form a cognitive hypothesis of a dalmatian dog based on a priori knowledge. Parsing the process backward gives us the following insight. Under the condition that a priori knowledge is available, the visual system reorganizes signals and collects clues in an iterative way. The results of clue collection affect the speed and accuracy of hypothesis formation, which in turn affects the speed and accuracy of finding the dalmatian dog in the emerging image. We believe, therefore, that the quality, quantity, distribution, and accessibility of recognition clues all have an effect on visual emergence. Emerging images differ from normal natural images in that they contain only black and white colors and no detailed texture information. Research on the visual cortex has found that neuronal cluster activity in primate areas V1 and V2 primarily reflects local luminance changes, and that neuronal activity in higher visual cortex areas represents global second-order features <cit.>. We infer, therefore, that the visual system appropriately uses some speckles to obtain recognition clues and also appropriately discards some speckles that might interfere. The criterion for the trade-off is whether a holistic hypothesis is promoted. This study aimed to identify the factors influencing the occurrence of visual emergence using three psychological experiments. In the first, we recorded the behavior of subjects observing a typical emerging case. 
We analyzed the experimental data inductively to discover two factors that might affect visual emergence: the density of speckles (speckle-density), which affects the speed and accuracy of locating objects, and the arrangements of key speckles (speckle-arrangement), which contain discriminative texture and shape information that affect the accuracy of object recognition. We set parameters to describe these two factors in an algorithm and then automated the generation of emerging-test images (ETIs) using control variables. In the two subsequent experiments, we used these ETIs to verify the effectiveness of the two factors in influencing the occurrence of visual emergence. § EXPERIMENT 1 This experiment was designed to discover the factors that might influence the occurrence of visual emergence. Subjects were presented with a typical emerging case for them to observe. Their responses to it were recorded for analysis to generalize the factors that might have a significant effect on visual emergence. §.§ Participants A total of 120 students participated in this experiment (mean age = 22.8 years; 60 female). The participants come from the School of Computer Science, School of Psychology, School of Life Sciences, and School of Mathematics. None of the subjects had visual cognitive impairment. §.§ Procedure Figure <ref> shows the flow of experiment 1. The whole process was divided into two stages. Subjects were required to perform the following operations using an iPad and an electronic pen (e-pen). In the first stage, the stimuli were presented on the iPad. The subjects observed them and then circled the corresponding regions in order of their perceived saliency with the e-pen. In the second stage, subjects were first presented with six pictures containing animals to ensure they had a priori knowledge of the object to be identified. Each image contained only one type of animal, and each was played for five seconds. Subjects were then asked to draw an outline of the object as completely as possible based on the range circled in the first stage and to identify which type of object it was. §.§ Results and Analysis The experimental data from stage 1 showed that the regions drawn by the subjects were not identical but highly overlapping, indicating that all subjects perceived the region as containing meaningful contours. Combining the results for all subjects, the emerging image can be roughly divided into Region 1 and Region 2, as shown in Figure <ref>(a). Region 1 covers 62.4% of the whole image but contains 93.3% of the contours drawn by the subjects. If the speckle-density in a local region is defined as the proportion of speckles to the total area of the region (i.e., density = Area_bs/ Area_r), then the density value of Region 1 is significantly larger compared with Region 2. Was it the speckle-density factor that made Region 1 more significant in the eyes of the subjects? Neurophysiological studies have found that as the visual pathway deepens, the receptive fields (RF) of neurons gradually increase. In a setting with four different sizes of receptive fields, RF = 40 p, 80 p, 160 p, and 320 p, where p denotes pixels, the speckle-density at each location is the mean value of density at the four RFs. Figure <ref>(b) shows the normalized result of the emerging case after calculating the density at all locations. If the boundary of density is set to 0.45, the emerging case can be divided into two parts, as shown in Figure <ref>(c). 
Comparing (a) and (c) in Figure <ref>, we find that the region with density>0.45 in (c) overlaps well with Region 2 in (a). Therefore, speckle-density might significantly affect the occurrence of visual emergence. According to “shape+surface” recognition theory, both shape and texture play a role in object recognition, and their importance depends on differences in the structural properties of the objects in question. Since color and its distribution is a texture feature, we only discuss the shape and texture information of the object. In this experiment, we provided the subjects with pictures of six animals as a priori knowledge—elephant, giraffe, cow, tiger, dog, and leopard. Among them, the elephant, giraffe, and tiger had large differences in shape and texture, and the dog, cow, leopard, and tiger had small differences in shape but large differences in texture. Therefore, texture is perhaps more effective than shape information for distinguishing tigers from other animals. The experimental results of stage 2 showed that 93 of the 120 subjects successfully identified the tiger in the emerging case. Based on the results drawn by the subjects in stage 1, and combined with the object topology, the region containing the recognition clues can be divided into three parts—body, legs, and head—as shown in the legend of Figure <ref>. In the group of subjects who successfully identified the tiger, the percentages of subjects who correctly drew the body, legs, and head were 96.8%, 71.0%, and 29.0%, respectively. In the group of subjects who did not successfully identify the tiger, the percentages of these three parts were 18.5%, 55.6%, and 37.0%, respectively. Some subjects drew multiple parts of the recognition clues at the same time; thus, the statistics would show that the sum of the proportions of the three parts exceeded 1. We can conclude that the correct discovery of body and texture features was positively correlated with the successful identification of the tiger. The speckle-arrangement located on the body might have been responsible for the presentation of recognition clues. We suggest, therefore, that speckle-arrangement might affect the occurrence of visual emergence. Based on the abovementioned experimental results and analysis, we identified two factors that might affect visual emergence: speckle-density and speckle-arrangement. When looking at the emerging image, we preferentially focus on the regions with greater speckle-density, and clues in these regions are more easily found. In addition, the quality of the recognition clues is critical to the final recognition. Occlusion often occurs between objects in complex environments, which can result in incomplete objects perceived by the visual system. The reason minor occlusions do not lead to false recognition is that high-quality clues, also called key clues, greatly facilitate recognition tasks. The texture presented by the arrangements of speckles on the body part is the key clue for identifying the tiger. § DIVERSIFIED ETI GENERATION It is unknown whether the two factors proposed in the first experiment are necessarily valid in the perception of emerging images. Therefore, their validity needed to be verified through corresponding psychological experiments. This required diversified ETIs in bulk. 
From an engineering standpoint, we can automate ETI generation with the help of a computer program with adjustable parameters—that is, controlling the values of the abovementioned two factors to be verified to automate the generation of stimuli under various settings. §.§ Natural Image Dataset We generated the corresponding ETIs based on natural images selected from the AM-2K dataset <cit.>. This dataset was created by Li et al. for natural image matting studies in computer vision, and it includes 2000 images in 20 animal categories. Most images in the AM-2K contain only one animal, and the positions and poses of the animals are rich and diverse, thus meeting the various needs of the second two experiments. In addition, the images in the dataset are high resolution, which makes data processing easier. §.§ Generation Process To automate the efficient generation of ETIs similar to the emerging image Dalmatian dog, the algorithm uses three parameters to characterize the two factors of speckle-density and speckle-arrangement. Figure <ref> uses a zebra image as an example to explain the generation process. Since both shape and texture can be used as key clues for object recognition, they are extracted as two independent dimensions. Object contours express the shape information; the parts of the contours with large curvature variations tend to include more specific shape information, and these contour segments are often more critical for discrimination. Thus, the normalized local curvature of the object contour was used to assess the importance of the contour segments. The first parameter, PoS, set in the program controls the proportion of contours rendered by speckles. For example, PoS = 0.2 means the first 20% of the important parts of the contours are rendered by speckles in the ETI. The second parameter, PoT, controls the proportion of texture information. For example, PoT = 0.2 means 20% of the texture will be randomly selected to be rendered by speckles in the ETI. When PoS and PoT are set, the speckle-density in the object region can be calculated. Then, the third parameter, the density contrast (DC), controls the density of noise speckles around the object. For example, DC=0.2 means the speckle-density of the surroundings is 20% of the object region. § EXPERIMENT 2A The purpose of this experiment was to test the validity of the first-factor speckle-density. Subjects were first presented with ETIs that reflected only changes in the parameter DC to reduce the influence of other factors on the experimental results. We then verified its validity by analyzing the differences in the performance of subjects observing multiple groups of ETIs with different DCs. §.§ Participants The 120 subjects from Experiment 1 were invited to participate in this experiment because they already had some experience and could better cooperate with the experiment. In the experiment, the 120 subjects were divided equally into three groups: G_1 (mean age = 22.5 years), G_2 (mean age = 23.2 years), and G_3 (mean age = 22.7 years). The ratio of male to female in each group was kept the same as the overall ratio. §.§ Stimulus This experiment required subjects to perceive the region where the object was present from the ETIs. Therefore, the animals in the selected natural images had as much diversity as possible in terms of size, position, and pose to reduce the effect of visual habituation on the results. 
To avoid shape and texture interfering with the subjects' perceptions during the test, we set PoS=0 and PoT=0, and adjusted DC= 0.2, 0.6, and 1, to generate the ETIs of 10 natural images. In the experiment, the three ETIs corresponding to an image were presented to subjects in the three groups. For example, ETI_DC=0.2^1, ETI_DC=0.6^1, and ETI_DC=1^1, corresponding to the first natural image, were presented to subjects in the three groups, G_1, G_2, and G_3, respectively. In addition, any two successive ETIs presented to a subject had different DCs, to avoid visual habituation. For example, if a subject was currently presented with ETI_DC=0.2^1, then the next stimulus presented was ETI_DC=0.6^2 or ETI_DC=1^2. §.§ Procedure The stimuli presented in this experiment were generated by MATLAB Psychtoolbox for more accurate data collection. Subjects were seated 35 cm from a 24-inch 1920×1080 resolution monitor and given the following instructions: You are presented with a set of 10 ETIs, one at a time, with only one animal present in each ETI. Your task is to observe the currently presented ETI, and when you perceive the region of object presence from the ETI, draw a polygon with your mouse by clicking on the window to frame the region where you think the animal is present. When you are finished with the current ETI, click on the “Next” button and start to do the same for a new ETI. §.§ Results and Discussion During the experiment, the subject's reaction time (RT) in perceiving the object from the ETI was recorded. RT was measured from the presentation of the ETI until the subject finished drawing the region where the object might be present by clicking on the window where the ETI was presented. This reflected the speed at which the subject perceived the object from the ETI. A smaller RT value indicated that the subject perceived the object from the ETI more easily. The time for an object to be perceived in an ETI was the average RT of the 40 subjects. Figure <ref>(a) shows the average RTs of ETIs corresponding to 10 natural images under the parameter settings of DC = 0.2, 0.6, and 1. The statistical results showed that although there were differences in the average RTs of ETIs with the same DC for the 10 natural images, they all showed a trend of average RT_DC=0.2 < average RT_DC=0.6 < average RT_DC=1.0. The Pearson correlation coefficient is often used to assess the strength of correlation between variables <cit.>. The Pearson correlation coefficient between the average RT and DC, r = 0.96, indicated a strong positive correlation between them. The experimental results demonstrated that a smaller DC means fewer interference speckles relative to the object region and more concentrated effective speckles, so that visual emergence is more likely to occur. Intersection over union (IoU) was used to measure the accuracy of the object region framed out by the subjects. IoU is the ratio of the intersection and union between the region drawn by a subject and the actual region of the object; i.e., IoU = (Area_s∩ Area_o)/(Area_s∪ Area_o). If the region drawn by the subject is too large or too small, the IoU is less than 1; IoU = 1 only when the drawn region and the region where the object is located precisely coincide. Each ETI was ultimately seen by 40 subjects. The accuracy of objects perceived in one ETI is the average IoU of the 40 subjects. The closer the average IoU value is to 1, the more accurately subjects perceived the object from the ETI. Figure <ref>(b) shows the average IoUs of the ETIs with DC = 0.2, 0.6, and 1 for the 10 images.
The Pearson correlation coefficient between the average IoU and DC was -0.92, indicating a strong negative correlation between the two variables. The experimental results demonstrated that as DC increased, subjects' accuracy in perceiving objects from the ETIs decreased. § EXPERIMENT 2B This experiment verified the second factor, speckle-arrangement. Subjects were first presented with ETIs reflecting only changes in speckle-arrangement. Then, the subjects observed them and identified the animals contained therein. The correlation between recognition accuracy and the parameters PoS and PoT indicates whether speckle-arrangement is effective for the occurrence of visual emergence. §.§ Participants We invited the same 120 subjects to participate in this experiment. The subjects were divided equally into two groups: G_1 (average age = 22.5 years) and G_2 (average age = 23.1 years). The proportion of male and female subjects in the two groups was the same as the overall proportion. We ensured that the 120 subjects had the required prior knowledge before the experiment. §.§ Stimulus When generating ETIs, the parameter DC was set to 1 to avoid the influence of speckle-density. Then, PoS and PoT were sequentially adjusted to generate two sets of ETIs: ETI_PoT = 0.2,0.4,0.6,0.8,1 and ETI_PoS = 0.2,0.4,0.6,0.8,1. Since the subjects recognized objects based on the acquired shape and texture information, the animals in the selected natural images had to have intact shapes and normal poses. We selected one image from the AM-2K for each of the six animals—tiger, zebra, leopard, camel, rhinoceros, and rabbit—to generate the corresponding two sets of ETIs. These two sets of ETIs were presented to the subjects in the order of progressively increasing parameters PoS and PoT. In addition, the two sets of ETIs of an image were presented to subjects in G_1 and G_2, and a group could not be presented with the same set of ETIs consecutively. For example, if the subjects in the two groups G_1 and G_2 were currently presented with ETI_PoT^1 and ETI_PoS^1, respectively, then in the next round the two groups would be presented with ETI_PoS^2 and ETI_PoT^2. §.§ Procedure The visual stimuli presented to the subjects in this experiment were generated by MATLAB Psychtoolbox. The subject sat 35 cm from a 24-inch 1920×1080 resolution monitor and was instructed as follows: You are presented with a set of visual stimuli, one at a time, each containing only one animal and all containing the same animal in the same location. Your task is to look at the stimuli and determine what animal they contain. Click on the window when you are done to display the next stimulus, and then judge again. §.§ Results and Discussion In the experiment, the subjects' recognition results for each ETI were recorded. A correct recognition was recorded as 1; otherwise, 0. Each stimulus had 60 values of 0 or 1, and the mean value was the recognition accuracy, which reflected the difficulty of perceiving and recognizing the animals from the ETIs. Figure <ref> shows the recognition accuracies of the ETIs corresponding to the six animals with different PoT and PoS settings. The statistical results showed a significant positive correlation between the speckle-arrangement parameters and accuracy. However, the statistical results clearly reflected that varying the parameters PoT and PoS had different degrees of influence on accuracy for the six animals.
The discriminative clue for tiger, leopard, and zebra is texture, while that for camel, rhinoceros, and rabbit is shape. Thus, adjusting PoT resulted in more significant changes in the accuracies for tiger, leopard, and zebra, while adjusting PoS resulted in more significant changes in the accuracies for camel, rhinoceros, and rabbit. The results of this experiment demonstrated that the speckle-arrangement is crucial for correct identification. § GENERAL DISCUSSION Speckle-density had a significant effect on the occurrence of visual emergence. Although Gestalt psychology explains the processing and organization of visual information in terms of several grouping principles, it is unclear how these principles function with emerging images. In fact, speckle-density and these principles are not contradictory. When the number of speckles in two equally sized regions is similar, the greater the speckle-density, the closer the speckles are to each other. If speckle-density continues to increase, then some speckles will overlap, which makes the object contour more continuous and complete. Among the Gestalt grouping principles, the process of speckle-density change exhibits proximity and continuity. Therefore, speckle-density is more accurate for explaining the occurrence of visual emergence. We also found that some speckle-arrangements were important for accurate object recognition because they indirectly provided texture and contour information. This finding is instructive for current object recognition research in computer vision. Object recognition research based on deep networks has made considerable progress over the last decade. In terms of accuracy, some methods have even outperformed humans on some public datasets <cit.>. However, deep learning techniques rely heavily on the number of learned samples. By contrast, humans can easily recognize objects using only a small number of learning samples. Deep networks essentially rely on the denseness of the distribution of various samples to exhaust possibilities. Biological intelligence, meanwhile, cannot use resources so extravagantly, and biological brains learn more from interpretation <cit.>. Speckle-density and speckle-arrangement are factors that affect the occurrence of visual emergence, and the reason for this occurrence is the holistic precedence nature of human visual perception <cit.>. We perceive objects in terms of the global rather than the local, and even if part of the stimulus is altered, it does not affect the correct perception. By contrast, some studies have found that adding noise to images that could be correctly recognized or making changes that have no effect on human perception can lead to false recognition results. This can make deep networks unreliable for real-world applications (e.g., autonomous driving) and raise safety concerns <cit.>. The abovementioned discussion implies that introducing certain biological mechanisms in the engineering domain could be very promising. The images for the emerging test discussed in this paper can help computer vision improve its ability to cope with unintended inputs. § CONCLUSION This study explored the factors that influence the perception of emerging images. The visual emergence process was divided into two stages—sense and recognition—and we separately examined the specific factors affecting the two parts. In the first experiment, we discovered two factors: speckle-density and speckle-arrangement. 
We automated the generation of ETIs in bulk with a computer program using a controlled-variable approach and then verified the effectiveness of the two factors in the two subsequent psychological experiments.
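To make the speckle-density factor reproducible, a minimal sketch of the multi-scale density map of Experiment 1 (mean coverage within receptive fields of RF = 40, 80, 160, and 320 pixels, normalized and thresholded at 0.45) is given below in Python/NumPy; the square-window box filter, the border handling, and all names are our own assumptions rather than the authors' implementation:

import numpy as np

def local_density(speckles, rf):
    """Mean speckle coverage in an rf x rf window centered at each pixel,
    computed with an integral image; windows are clipped at the borders."""
    h, w = speckles.shape
    ii = np.zeros((h + 1, w + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(speckles, axis=0), axis=1)
    half = rf // 2
    ys, xs = np.arange(h), np.arange(w)
    y0, y1 = np.clip(ys - half, 0, h), np.clip(ys + half, 0, h)
    x0, x1 = np.clip(xs - half, 0, w), np.clip(xs + half, 0, w)
    area = (y1 - y0)[:, None] * (x1 - x0)[None, :]
    s = ii[y1][:, x1] - ii[y1][:, x0] - ii[y0][:, x1] + ii[y0][:, x0]
    return s / area

def density_map(speckles, rfs=(40, 80, 160, 320), threshold=0.45):
    """Normalized mean of the densities over the four receptive-field sizes,
    and the binary partition obtained with the given threshold."""
    d = np.mean([local_density(speckles, rf) for rf in rfs], axis=0)
    d = (d - d.min()) / (d.max() - d.min() + 1e-12)
    return d, d > threshold

# speckles: binary image, 1 on black speckles and 0 on the background.
speckles = (np.random.default_rng(4).random((480, 640)) < 0.2).astype(float)
density, region = density_map(speckles)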
http://arxiv.org/abs/2307.06200v1
20230712144128
Binary coalescences as sources of Ultra-High Energy Cosmic Rays
[ "Jonas P. Pereira", "Carlos H. Coimbra-Araújo", "Rita C. dos Anjos", "Jaziel G. Coelho" ]
astro-ph.HE
[ "astro-ph.HE", "gr-qc" ]
http://arxiv.org/abs/2307.07466v1
20230714164834
Comparing Scale Parameter Estimators for Gaussian Process Regression: Cross Validation and Maximum Likelihood
[ "Masha Naslidnyk", "Motonobu Kanagawa", "Toni Karvonen", "Maren Mahsereci" ]
math.ST
[ "math.ST", "stat.TH", "60G15, 62G05, 62G08" ]
Comparing Scale Parameter Estimators for Gaussian Process Regression: Cross Validation and Maximum Likelihood Masha Naslidnyk, Motonobu Kanagawa, Toni Karvonen, Maren Mahsereci ============= Gaussian process (GP) regression is a Bayesian nonparametric method for regression and interpolation, offering a principled way of quantifying the uncertainties of predicted function values. For the quantified uncertainties to be well-calibrated, however, the covariance kernel of the GP prior has to be carefully selected. In this paper, we theoretically compare two methods for choosing the kernel in GP regression: cross-validation and maximum likelihood estimation. Focusing on the scale-parameter estimation of a Brownian motion kernel in the noiseless setting, we prove that cross-validation can yield asymptotically well-calibrated credible intervals for a broader class of ground-truth functions than maximum likelihood estimation, suggesting an advantage of the former over the latter. § INTRODUCTION Gaussian process (GP) regression (or kriging) is a Bayesian nonparametric method for regression and interpolation that has been extensively studied in statistics and machine learning <cit.>. Its key property is that it enables uncertainty quantification of estimated function values in a principled manner, which is crucial for applications involving decision-making, safety concerns, and scientific discovery. As such, GP regression has been a core building block of more applied algorithms, including Bayesian optimisation <cit.>, probabilistic numerical computation <cit.>, and calibration and emulation of computer models <cit.>, to name just a few. GP regression estimates an unknown function f from its observations as follows. One first defines a prior distribution for f as a GP by specifying its covariance kernel (and mean function). Given N observations of f, one then derives the posterior distribution of f, which is another GP with mean function m_N and covariance kernel k_N. One can then predict the function value f(x) at any input x by the posterior mean m_N(x) and quantify its uncertainty using the posterior standard deviation √(k_N(x)) ≔√(k_N(x,x)). Specifically, one can construct a credible interval of f(x) as the interval [ m_N(x) - α√(k_N(x)), m_N(x) + α√(k_N(x)) ] for a constant α > 0 (for example, α≈ 1.96 leads to the 95% credible interval). Such uncertainty estimates constitute key ingredients in the above applications of GP regression. For GP uncertainty estimates to be reliable, the posterior standard deviation √(k_N(x)) should, ideally, decay at the same rate as the prediction error |m_N(x) - f(x)| as the sample size N increases. Otherwise, GP uncertainty estimates are either asymptotically overconfident or underconfident. For example, if √(k_N(x)) goes to 0 faster than the error |m_N(x) - f(x)|, then the credible interval [m_N(x) - α√(k_N(x)), m_N(x) + α√(k_N(x)) ] will not contain the true value f(x) as N increases, for any fixed constant α > 0 (asymptotically overconfident). If √(k_N(x)) goes to 0 slower than the error |m_N(x) - f(x)|, then the credible interval [m_N(x) - α√(k_N(x)), m_N(x) + α√(k_N(x)) ] will get larger than the error |m_N(x) - f(x)| as N increases (asymptotically underconfident). Neither of these cases is desirable in practice, as GP credible intervals will not be accurate estimates of prediction errors.
Unfortunately, in general, the posterior standard deviation √(k_N(x)) does not decay at the same rate as the prediction error | f(x) - m_N(x) |, because, as is well-known, √(k_N(x)) does not depend on the true function f; see (<ref>) in Section <ref>. Exceptionally, if the function f is a sample path of the GP prior (the well-specified case), GP uncertainty estimates can be well-calibrated. However, in general, the unknown f is not exactly a sample path of the GP prior (the misspecified case), and the posterior standard deviation √(k_N(x)) does not scale with the prediction error | f(x) - m_N(x) |. Figures <ref> and <ref> (the left panels) show examples where the true function f is not a sample of the GP prior and where the GP uncertainty estimates are not well-calibrated. §.§ Scale Parameter Estimation To obtain sensible uncertainty estimates, one thus needs to adapt the posterior standard deviation √(k_N(x)) to the function f. One simple way to achieve this is to introduce the scale parameter σ^2 > 0 and parametrize the kernel as k_σ(x,x') ≔σ^2 k(x,x'), where k is the original kernel. GP regression with this kernel k_σ yields the posterior mean function m_N, which is not influenced by σ^2, and the posterior covariance function σ^2 k_N, which is the original posterior covariance scaled by σ^2. If one estimates σ^2 from observed data of f, the estimate σ̂^2 depends on f, and so does the resulting posterior standard deviation σ̂√(k_N(x)). One approach to scale-parameter estimation is the method of maximum likelihood (ML), which optimizes σ^2 to maximize the marginal likelihood of the observed data <cit.>. The ML approach is popular for general hyperparameter optimization in GP regression. Another less common way in the GP literature is cross-validation (CV), which optimizes σ^2 to maximize the average predictive likelihood of held-out data <cit.>. For either approach, the optimized scale parameter can be obtained analytically in computational complexity 𝒪(N^3). Figures <ref> and <ref> (middle and right panels) demonstrate that both approaches yield uncertainty estimates better calibrated than the original estimates without the scale parameter. Do these scale parameter estimators lead to asymptotically well-calibrated uncertainty estimates? To answer this question, one needs to understand their convergence properties as the sample size N increases. Most existing theoretical works focus on the well-specified case where there is a “true” scale parameter σ_0^2 such that the unknown f is a GP with the covariance kernel σ_0^2 k. In this case, both the ML and CV estimators have been shown to be consistent in estimating the true σ_0^2 <cit.>. However, in general, no “true” scale parameter σ^2_0 exists such that the unknown f is a GP with the covariance σ_0^2 k. In such misspecified cases, not much is known about the convergence properties of either estimator. <cit.> analyze the ML estimator for the scale parameter, assuming that f is a deterministic function. They derive upper bounds (and lower bounds in some cases) for the ML estimator; see <cit.> for closely related work. To our knowledge, no theoretical work exists for the CV estimator for the scale parameter in the misspecified case. <cit.> and <cit.> empirically compare the ML and CV estimators under different model misspecification settings. We will review other related works in Section <ref>.
§.§ Contributions This work studies the convergence properties of the ML and CV estimators, σ̂_ ML^2 and σ̂_ CV^2, of the scale parameter σ^2 in GP regression, to understand whether they lead to asymptotically well-calibrated uncertainty estimates. In particular, we provide the first theoretical analysis of the CV estimator σ̂_ CV^2 when the GP prior is misspecified, and also establish novel results for the ML estimator σ̂_ ML^2. To facilitate the analysis, we focus on the following simplified setting. For a constant T > 0, let [0, T] ⊂ℝ be the input domain. Let k in (<ref>) be the Brownian motion kernel k(x, x') = min(x, x') for x, x' ∈ [0, T]. With this choice, a sample path of the GP prior has roughly a smoothness of 1/2 (in terms of the differentiability; we will be more rigorous in later sections). We assume that the true unknown function f has the smoothness l + α, where l ∈{0}∪ℕ and 0 < α≤ 1. The GP prior is well-specified if l = 0 and α = 1/2. Other settings of l and α represent misspecified cases. If l = 0 and α <1/2, the true function f is rougher than the GP prior (Figure <ref>); if l = 0 and α > 1/2 or l ≥ 1, the function f is smoother than the GP prior. We focus on the noise-free setting where one observes the function values f(x_1), …, f(x_N) at input points x_1, …, x_N ∈ [0,T]. Our main results are new upper and lower bounds for the asymptotic rates of the CV estimator σ̂_ CV^2 and the ML estimator σ̂_ ML^2 as N →∞ (Section <ref>). The results suggest that the CV estimator can yield asymptotically well-calibrated uncertainty estimates for a broader class of functions f than the ML estimator; thus, the former has an advantage over the latter (Section <ref>). More specifically, asymptotically well-calibrated uncertainty estimates may be obtained with the CV estimator for the range 0 < l + α≤ 3/2 of smoothness of the true function, while this range becomes 0 < l + α≤ 1 with the ML estimator and is narrower. This finding is consistent with the example in Figure <ref>, where the true function has smoothness l + α = 3/2 and is thus smoother than the GP prior. The uncertainty estimates of the CV estimator appear to be well-calibrated, while those of the ML estimator are unnecessarily wide, failing to adapt to the smoothness. This paper is structured as follows. After reviewing related works in <Ref>, we introduce the necessary background on the ML and CV approaches to scale parameter estimation for GP regression in <Ref>. We describe the setting of the theoretical analysis in <Ref>, present our main results in <Ref>, and discuss its consequences on uncertainty quantification in <Ref>. We report simulation experiments in <Ref>, conclude in <Ref>, and present proofs in <Ref>. §.§ Related work We review here related theoretical works on hyper-parameter selection in GP regression. We categorize them into two groups based on how the true unknown function f is modelled: random and deterministic. Random setting. One group of works models the ground truth f as a random function, specifically as a GP. Most of these works model f as a GP with a Matérn-type covariance kernel and analyze the ML estimator. Under the assumption that the GP prior is correctly specified, asymptotic properties of the ML estimator for the scale parameter and other parameters have been studied <cit.>. Recently <cit.> and <cit.> have constructed consistent estimators of various parameters for many commonly used kernels, including Matérns. 
<cit.> and <cit.> consider a periodic version of Matérn GPs, and show the consistency of the ML estimator for its smoothness parameter. To our knowledge, no theoretical result exists for the ML estimation of the scale parameter in the misspecified random setting, which we provide in <Ref> (Theorem <ref>). In contrast, few theoretical works exist for the CV estimator. <cit.> study the leave-one-out (LOO) CV estimator for the Matérn-1/2 model (or the Laplace kernel) with one-dimensional inputs, in which case the GP prior is an Ornstein–Uhlenbeck (OU) process. Assuming the well-specified case where the true function is also an OU process, they prove the consistency and asymptotic normality of the CV estimator for the microergodic parameter in the fixed-domain asymptotic setting. <cit.> and <cit.> discuss another CV estimator that uses the mean square prediction error as the scoring criterion of CV (thus different from the one discussed here) in the increasing-domain asymptotics. <cit.> and <cit.> perform empirical comparisons of the ML and CV estimators under different model misspecification settings. Thus, to our knowledge, no theoretical result exists for the CV estimator of the scale parameter in the random misspecified setting, which we provide in <Ref> (Theorem <ref>). Deterministic setting. Another line of research assumes that the ground truth f is a fixed function belonging to a specific function space <cit.>. <cit.> assumed that the ground truth f is a monomial on [0,1] and proved some asymptotic results for the ML estimator when the kernel k is Gaussian. As mentioned earlier, <cit.> proved asymptotic upper (and, in certain cases, also lower) bounds on the ML estimator σ̂_ML^2 of the scale parameter σ^2; see <cit.> for a closely related work. <cit.> has studied the ML and LOO-CV estimators for the smoothness parameter in the Matérn model; see also <cit.>. <cit.> and <cit.> proved non-asymptotic results on the length-scale parameter in the Matérn and related models. Thus, there has been no work for the CV estimator of the scale parameter σ^2 in the deterministic setting, which we provide in Section <ref> (Theorem <ref>); we also prove a corresponding result for the ML estimator (Theorem <ref>). § BACKGROUND This section briefly reviews GP regression and the ML and LOO-CV estimators of kernel parameters. §.§ Gaussian process regression We first explain GP regression (or interpolation). Let Ω be a set, and f: Ω→ℝ be an unknown function of interest. Suppose one observes N function values f(x_1), …, f(x_N) at pairwise distinct input points x_1, …, x_N ∈Ω. The task here is to estimate f based on the data (𝐗, f(𝐗)), where f(𝐗) := [f(x_1), …, f(x_N)]^⊤∈ℝ^N and 𝐗 := [x_1, …, x_N]^⊤∈Ω^N. In GP regression, one first defines a prior distribution of the unknown f as a GP by specifying its mean function m: Ω→ℝ and covariance function (kernel) k: Ω×Ω→ℝ; we may write f ∼𝒢𝒫(m,k) to indicate this. Conditioned on the data (𝐗, f(𝐗)), the posterior distribution of f is again a GP whose mean function m_N: Ω→ℝ and covariance function k_N: Ω×Ω→ℝ are given by m_N(x) := m(x) + k(x, 𝐗)^⊤ k(𝐗, 𝐗)^-1(f(𝐗) - m(𝐗)), x ∈Ω, k_N(x, x') := k(x, x') - k(x, 𝐗)^⊤ k(𝐗, 𝐗)^-1 k(x', 𝐗), x, x' ∈Ω, where m(𝐗) := [m(x_1), …, m(x_N)]^⊤∈ℝ^N and k(x, 𝐗) := [k(x, x_1), …, k(x, x_N)]^⊤∈ℝ^N, and k(𝐗, 𝐗) := [ k(x_1, x_1) … k(x_1, x_N); ⋮ ⋱ ⋮; k(x_N, x_1) … k(x_N, x_N) ] ∈ℝ^N × N is the Gram matrix. Throughout this paper, we assume that the points are such that the Gram matrix is non-singular. For notational simplicity, we may write the posterior variance as k_N(x) := k_N(x, x), x ∈Ω.
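For concreteness, the following Python sketch (our illustrative addition, not part of the paper's code) implements the posterior mean and variance formulas above for noise-free observations and a zero prior mean, as assumed below; the Brownian motion kernel and the test data in the example are hypothetical stand-ins.

```python
import numpy as np

def gp_posterior(k, X, fX, x_new):
    """Posterior mean m_N and variance k_N of noise-free GP regression
    with zero prior mean, following the formulas in the text."""
    K = np.array([[k(a, b) for b in X] for a in X])        # Gram matrix k(X, X)
    kx = np.array([[k(a, b) for b in X] for a in x_new])   # k(x, X) for each test x
    m = kx @ np.linalg.solve(K, fX)                        # m_N(x)
    v = np.array([k(a, a) for a in x_new]) \
        - np.sum(kx * np.linalg.solve(K, kx.T).T, axis=1)  # k_N(x, x)
    return m, v

# Example with the Brownian motion kernel k(x, x') = min(x, x') studied later.
brownian = lambda x, y: min(x, y)
X = np.linspace(0.1, 1.0, 10)          # positive, ordered inputs
fX = np.sin(10 * X)                    # hypothetical stand-in for the unknown f
m, v = gp_posterior(brownian, X, fX, np.linspace(0.05, 1.0, 50))
```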
For simplicity and as commonly done, we henceforth assume that the prior mean function m is the zero function, m(·) ≡ 0. While the GP prior assumes that the unknown function f is a sample path of the GP with the specified kernel k, this assumption does not hold in general, i.e., model misspecification occurs. In this case, as described in Figures <ref> and <ref> (left), the posterior standard deviation √(k_N(x)), which is supposed to quantify the uncertainty of the unknown function value f(x), may not be well calibrated with the prediction error | m_N(x) - f(x) |. One could address this issue by selecting the kernel k or its parameters from the data (𝐗, f(𝐗)); we will explain this topic next. §.§ Kernel parameter estimation The selection of the kernel k is typically performed by defining a parametric family of kernels {k_θ}_θ∈Θ and selecting the parameter θ based on an appropriate criterion. Here Θ is a parameter set, and k_θ: Ω×Ω→ℝ for each θ∈Θ is a kernel. Maximum likelihood (ML) estimation. The ML estimator maximises the log-likelihood of the data (𝐗, f(𝐗)) given that f is a GP with kernel k_θ: log p(f(𝐗) | 𝐗, θ) = -1/2( f(𝐗)^⊤ k_θ(𝐗, 𝐗)^-1 f(𝐗) + log det k_θ(𝐗, 𝐗) + N log (2π) ), where det k_θ(𝐗, 𝐗) is the determinant of the Gram matrix k_θ(𝐗, 𝐗) (see, e.g., <cit.>). With the additive terms that do not depend on θ removed from log p(f(𝐗) | 𝐗, θ), this is equivalent to minimising the loss function ℒ_ML(θ) := f(𝐗)^⊤ k_θ(𝐗, 𝐗)^-1 f(𝐗) + log det k_θ(𝐗, 𝐗). In general, ℒ_ML(θ) may not have a unique minimiser, so that any ML estimator satisfies θ̂_ML ∈ arg min_θ∈Θ ℒ_ML(θ). Leave-one-out cross-validation (LOO-CV). The LOO-CV estimator <cit.>, which we may simply call the CV estimator, is an alternative to the ML estimator. It maximizes the average log-predictive likelihood ∑_n=1^N log p( f(x_n) | x_n, 𝐗_∖ n, f(𝐗_∖ n), θ) of the held-out data (x_n, f(x_n)), where n = 1, …, N, based on the data (𝐗_∖ n, f(𝐗_∖ n)), where 𝐗_∖ n denotes the input points with x_n removed: 𝐗_∖ n = [x_1, …, x_n-1, x_n+1, … , x_N]^⊤∈Ω^N - 1. Let m_θ, ∖ n and k_θ, ∖ n denote the posterior mean and covariance functions of GP regression with the kernel k_θ and the data (𝐗_∖ n, f(𝐗_∖ n)). Because each p( f(x_n) | x_n, 𝐗_∖ n, f(𝐗_∖ n), θ) is the Gaussian density of f(x_n) with mean m_θ, ∖ n(x_n) and variance k_θ, ∖ n(x_n) := k_θ, ∖ n(x_n,x_n), removing additive terms that do not depend on θ and reversing the sign in (<ref>) yields the following CV objective function: ℒ_CV(θ) = ∑_n=1^N [f(x_n) - m_θ, ∖ n(x_n)]^2/ k_θ, ∖ n(x_n) + log k_θ, ∖ n (x_n). The CV estimator is then defined as its minimizer: θ̂_CV ∈ arg min_θ∈Θ ℒ_CV(θ). As for the ML estimator, the CV objective function and its first-order gradients can be computed in closed form in 𝒪(N^3) time <cit.>. Scale parameter estimation. As explained in Section <ref>, we consider the family of kernels k_σ (x,x') := σ^2 k(x,x') parametrized with the scale parameter σ^2 >0, where k is a fixed kernel, and study the estimation of σ^2 using the CV and ML estimators, denoted as σ̂_ CV^2 and σ̂_ ML^2, respectively. In this case, both σ̂_ CV^2 and σ̂_ ML^2 can be derived in closed form by differentiating (<ref>) and (<ref>). Let m_n-1 and k_n-1 be the posterior mean and variance functions of GP regression using the kernel k and the first n-1 training observations (x_1, f(x_1)), …, (x_n-1, f(x_n-1)). Let m_0(·) := 0 and k_0(x,x) := k(x,x). Then the ML estimator is given by σ̂_ML^2 = f(𝐗)^⊤ k(𝐗, 𝐗)^-1 f(𝐗)/N = 1/N∑_n=1^N [ f(x_n) - m_n-1(x_n) ]^2/k_n-1(x_n). This expression of the ML estimator is relatively well known; see e.g. Section 4.2.2 in <cit.> or Proposition 7.5 in <cit.>.
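To make the closed form concrete, a minimal Python sketch (ours, with hypothetical inputs and our own function names) evaluates σ̂_ML^2 both as f(𝐗)^⊤k(𝐗,𝐗)^{-1}f(𝐗)/N and through the sequential decomposition above, and checks that the two expressions agree:

```python
import numpy as np

def sigma2_ml(k, X, fX):
    """Closed form: f(X)^T k(X, X)^{-1} f(X) / N."""
    K = np.array([[k(a, b) for b in X] for a in X])
    return float(fX @ np.linalg.solve(K, fX)) / len(X)

def sigma2_ml_sequential(k, X, fX):
    """Equivalent form: average of [f(x_n) - m_{n-1}(x_n)]^2 / k_{n-1}(x_n),
    with m_0 = 0 and k_0(x, x) = k(x, x) as in the text."""
    N, total = len(X), 0.0
    for n in range(N):
        if n == 0:
            m, v = 0.0, k(X[0], X[0])
        else:
            Kn = np.array([[k(a, b) for b in X[:n]] for a in X[:n]])
            kx = np.array([k(X[n], b) for b in X[:n]])
            w = np.linalg.solve(Kn, kx)
            m, v = w @ fX[:n], k(X[n], X[n]) - kx @ w
        total += (fX[n] - m) ** 2 / v
    return total / N

brownian = lambda x, y: min(x, y)
X = np.linspace(0.1, 1.0, 30)
fX = np.sin(10 * X)                    # hypothetical data
assert np.isclose(sigma2_ml(brownian, X, fX),
                  sigma2_ml_sequential(brownian, X, fX))
```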
On the other hand, the CV estimator σ̂_ CV^2 is given by σ̂_CV^2 = 1/N∑_n=1^N [f(x_n) - m_∖ n(x_n)]^2/ k_∖ n(x_n), where m_∖ n and k_∖ n are the posterior mean and covariance functions of GP regression using the kernel k and data (𝐗_∖ n, f(𝐗_∖ n)) with (x_n, f(x_n)) removed: m_∖ n(x) = k(𝐗_∖ n, x)^⊤ k(𝐗_∖ n, 𝐗_∖ n)^-1 f(𝐗_∖ n), k_∖ n(x, x') = k(x, x') - k(𝐗_∖ n, x)^⊤ k(𝐗_∖ n, 𝐗_∖ n)^-1 k(𝐗_∖ n, x'). Notice the similarity between the two expressions (<ref>) and (<ref>). The difference is that the ML estimator uses k_n-1 and m_n-1, which are based on the first n-1 training observations, while the CV estimator uses k_∖ n and m_∖ n obtained with N-1 observations, for each n = 1,…, N. Therefore, the CV estimator uses all the data points more evenly than the ML estimator. This difference may be the source of the difference in their asymptotic properties established later. As suggested by the similarity between (<ref>) and (<ref>), there is a deeper connection between ML and CV estimators in general. For instance, <cit.> have shown that the Bayesian marginal likelihood equals the average of leave-p-out CV scores. We prove this result for the special case of scale parameter estimation in GP regression in <Ref>. § SETTING This section describes the settings and tools for our theoretical analysis: the Brownian motion kernel in Section <ref>; sequences of partitions in Section <ref>; the Hölder class of functions in Section <ref>; fractional Brownian motion in Section <ref>; and functions of finite quadratic variation in Section <ref>. §.§ Brownian motion kernel As explained in Section <ref>, for the kernel k we focus on the Brownian motion kernel on the domain Ω = [0, T] for some T > 0: k(x, x') = min(x, x'). The resulting kernel k_σ(x,x') = σ^2 k(x,x') induces a Brownian motion prior for GP regression. We assume that the input points 𝐗 = [x_1, …, x_N]^⊤ for GP regression are positive and ordered: 0 < x_1 < x_2 < … < x_N ≤ T. The positivity ensures that the Gram matrix (<ref>) is non-singular. As is well known and can be seen in Figures <ref> and <ref>, the posterior mean function m_N in (<ref>) using the Brownian motion kernel becomes the piecewise linear interpolant of the observations (𝐗, f(𝐗)). See (<ref>) and (<ref>) in Section <ref> for the explicit expressions of the posterior mean and covariance functions. §.§ Sequences of partitions For our asymptotic analysis, we assume that the input points x_1, …, x_N ∈ [0,T] cover the domain [0,T] more densely as the sample size N increases. To make the dependence on the size N explicit, we write 𝒫_N := (x_N,n)_n=1^N ⊂ [0,T] as a point set of size N, and assume that they are ordered as 0 =: x_N,0 < x_N, 1 < x_N, 2 < … < x_N, N = T. Then 𝒫_N defines a partition of [0,T] into N subintervals [x_N,n, x_N, n+1]. When there is no risk of confusion, we may write x_n instead of x_N,n for simplicity. Note that we do not require the nesting 𝒫_N ⊂𝒫_N+1 of partitions. We define the mesh of partition 𝒫_N as the longest subinterval in the partition: |𝒫_N| := max_n ∈{0, 1,…,N-1} (x_N, n+1 - x_N, n). The decay rate of the mesh |𝒫_N| quantifies how quickly the points in 𝒫_N cover the interval [0,T]. In particular, the decay rate |𝒫_N| = 𝒪(N^-1) implies that the length of every subinterval is asymptotically upper bounded by 1/N. At the same time, if each subinterval is asymptotically lower bounded by 1/N, we call the sequence of partitions (𝒫_N)_N ∈ℕ quasi-uniform, as more formally defined as follows. For each N ∈ℕ, let 𝒫_N := (x_N,n)_n=1^N ⊂ [0,T]. Define Δ x_N, n := x_N, n+1 - x_N, n.
Then the sequence of partitions (𝒫_N)_N ∈ℕ is called quasi-uniform if there exists a constant 1 ≤ C_qu < ∞ such that sup_N ∈ℕmax_n Δ x_N, n/min_n Δ x_N, n = C_qu. The quasi-uniformity, as defined here, requires that the ratio of the longest subinterval, max_nΔ x_N, n, to the shortest one, min_nΔ x_N, n, is upper-bounded by C_qu for all N ∈ℕ. Quasi-uniformity implies that all subintervals are asymptotically upper and lower bounded by 1/N, as we have, for all N ∈ℕ and n ∈{1, …, N}, T N^-1/C_qu ≤min_n Δ x_N, n≤Δ x_N, n≤max_n Δ x_N, n≤ T C_qu N^-1. For example, equally-spaced points (or uniform grids) satisfy the quasi-uniformity with C_qu = 1. §.§ Hölder spaces <Ref> studies the deterministic setting where the true unknown function f is assumed to belong to a Hölder space of functions. To define this space, we first need the following definition. For 0 < α≤ 1, a function f: [0, T] →ℝ is α-Hölder continuous if there exists a constant L ≥ 0 such that, for all x, x' ∈ [0, T], |f(x) - f(x')| ≤ L |x - x' |^ α. Any such constant L is called a Hölder constant of f. For l ∈ℕ∪{ 0 }, denote by C^l([0,T]) the space of functions f: [0, T] →ℝ such that the lth derivative f^(l) exists and is continuous. For l = 0, this is the space of continuous functions. Hölder spaces are now defined as follows. Let l ∈ℕ∪{0} and 0 < α≤ 1. The Hölder space C^l, α([0, T]) consists of functions f ∈ C^l([0 ,T]) whose lth derivative f^(l) is α-Hölder continuous. Intuitively, l + α represents the smoothness of least-smooth functions in C^l, α ( [0, T] ). It is well known that a sample path of Brownian motion is almost surely α-Hölder continuous if and only if α < 1/2 <cit.>, and thus it belongs to the Hölder space C^l, α ([0,T]) with l = 0 and α = 1/2 - ε almost surely for arbitrarily small ε > 0; in this sense, the smoothness of a Brownian motion is 1/2. As such, as is well known <cit.>, a Brownian motion is almost nowhere differentiable almost surely. Note that we have the following strict inclusions:[These inclusions follow from the following facts: By the definition of Hölder continuity, an α_1-Hölder continuous function is α_2-Hölder continuous if α_1 > α_2; continuously differentiable functions are α-Hölder continuous for any 0< α≤ 1; not all Lipschitz functions are differentiable.] * C^l_1, α_1([0, T]) ⊊ C^l_2, α_2([0, T]) if (a) l_1 > l_2 or (b) l_1 = l_2 and α_1 > α_2, * C^l+1([0, T]) ⊊ C^l, 1([0, T]). §.§ Fractional Brownian motion <Ref> considers the random setting where f is a fractional (or integrated fractional) Brownian motion <cit.>. Examples of these processes can be seen in Figures <ref>, <ref>, <ref> and <ref>. A fractional Brownian motion on [0,T] with Hurst parameter 0 < H < 1 is a Gaussian process whose covariance kernel is given by k_0,H(x,x') = ( | x |^2H + | x' |^2H - | x-x'|^2H) / 2. Note that if H = 1/2, this is the Brownian motion kernel: k_0,1/2(x,x') = min (x,x'). The Hurst parameter H quantifies the smoothness of the fractional Brownian motion: for any 0 < H < 1, if f_ FBM∼𝒢𝒫(0, k_0,H), we have f_ FBM∈ C^0, H-ε ( [0, T] ) almost surely for arbitrarily small ε > 0  <cit.>. An integrated fractional Brownian motion with Hurst parameter H is defined via the integration of a fractional Brownian motion with the same Hurst parameter: if f_ FBM∼𝒢𝒫(0, k_0, H), then f_iFBM(x) = ∫_0^x f_FBM(z) d z, x ∈ [0,T] is an integrated fractional Brownian motion with Hurst parameter H.
It is a zero-mean GP with the covariance kernel k_1,H(x, x') = ∫_0^x ∫_0^x'( | z |^2H + | z' |^2H - | z-z'|^2H) / 2 d z d z' = 1/2(2H+1)( x' x^2H+1 + x (x')^2H+1 - 1/2(H+1)[ x^2H+2 + (x')^2H+2 - |x - x'|^2H+2] ). Because differentiating an integrated fractional Brownian motion f_ iFBM∼𝒢𝒫(0, k_1, H) yields a fractional Brownian motion f_ FBM∼𝒢𝒫(0, k_0,H), a sample path of the former satisfies f_ iFBM∈ C^1,H-ε([0, T]) almost surely for arbitrarily small ε > 0; therefore the smoothness of f_ iFBM is 1 + H. §.§ Functions of finite quadratic variation Some of our asymptotic results use the notion of functions of finite quadratic variation, defined below. For each N ∈ℕ, let 𝒫_N := (x_N,n)_n=1^N ⊂ [0,T], and suppose that |𝒫_N| → 0 as N →∞. Then a function f : [0, T] →ℝ is defined to have finite quadratic variation with respect to (𝒫_N)_N ∈ℕ, if the limit V^2(f) := lim_N →∞∑_n=1^N-1[ f(x_N, n+1) - f(x_N, n) ]^2 exists and is finite. We write V^2(f, (𝒫_N)_N ∈ℕ) when it is necessary to indicate the sequence of partitions. Quadratic variation is defined for a specific sequence of partitions (𝒫_N)_N ∈ℕ and may take different values for different sequences of partitions <cit.>. For conditions that guarantee the invariance of quadratic variation on the sequence of partitions, see, for instance, <cit.>. Note also that the notion of quadratic variation differs from that of p-variation for p=2, which is defined as the supremum over all possible sequences of partitions whose meshes tend to zero. If f ∈ C^0,α([0, T]) with α > 1/2 and |𝒫_N| = 𝒪(N^-1) as N →∞, then we have V^2(f) = 0, because in this case ∑_n=1^N-1[ f(x_N, n+1) - f(x_N, n) ]^2 ≤ N L^2 max_n (Δ x_N, n)^2 α = 𝒪(N^1 - 2α) → 0 as N →∞, where L is a Hölder constant of f. Therefore, given the inclusion properties of Hölder spaces (see Section <ref>), we arrive at the following standard proposition. Suppose that the partitions (𝒫_N)_N ∈ℕ are such that |𝒫_N| = 𝒪(N^-1). If f ∈ C^l, α([0, T]) for l+α>1/2, then V^2(f) = 0. If the mesh tends to zero faster than 1/log N, in the sense that |𝒫_N| = o(1/log N), then the quadratic variation of almost every sample path of the Brownian motion on the interval [0, T] equals T <cit.>. This is of course true for partitions which have the faster decay |𝒫_N| = 𝒪(N^-1). § MAIN RESULTS This section presents our main results on the asymptotic properties of the CV and ML estimators, σ̂_CV^2 and σ̂_ML^2, for the scale parameter. <Ref> considers the deterministic setting where the true function f is fixed and assumed to belong to a Hölder space. <Ref> studies the random setting where f is an (integrated) fractional Brownian motion. §.§ Deterministic setting We present our main results for the deterministic setting where the true function f is fixed and assumed to be in a Hölder space C^l, α([0, T]). <Ref> below provides asymptotic upper bounds on the CV estimator for different values of the smoothness parameters l and α of the Hölder space. Suppose that f is an element of C^l, α([0, T]), with l ≥ 0 and 0 < α≤ 1 such that l+α > 1/2, f(0)=0, and the interval partitions (𝒫_N)_N ∈ℕ have bounded meshes |𝒫_N| = 𝒪(N^-1) as N →∞. Then σ̂_CV^2 = 𝒪( N^1 - min{2(l + α), 3}) = 𝒪(N^1 - 2 α) if l = 0 and α > 1/2, 𝒪(N^-1 - 2α) if l = 1 and α < 1/2, 𝒪(N^- 2) if l = 1 and α≥ 1/2, 𝒪(N^- 2) if l ≥ 2. See Section <ref>. <Ref> below is a corresponding result for the ML estimator σ̂_ML^2. Note that a similar result has been obtained by <cit.>, where the function f is assumed to belong to a Sobolev space and the kernel is a Matérn-type kernel.
<Ref> is a version of this result where f is in a Hölder space and the kernel is the Brownian motion kernel; we provide it for completeness and ease of comparison. Suppose that f is a non-zero element of C^l, α([0, T]), with l ≥ 0 and 0 < α≤ 1 such that l+α > 1/2, f(0)=0, and the interval partitions (𝒫_N)_N ∈ℕ have bounded meshes |𝒫_N| = 𝒪(N^-1) as N →∞. Then σ̂_ML^2 = 𝒪( N^1 - min{2(l + α), 2}) = 𝒪(N^1 - 2 α) if l = 0 and α > 1/2, Θ(N^- 1) if l ≥ 1. See Section <ref>. The proof is similar to that of <Ref>. Figure <ref> summarises the rates of <Ref>. When l + α≤ 1 (i.e., l = 0 and α≤ 1), the rates of σ̂_CV^2 and σ̂_ML^2 are 𝒪(N^1-2α), so both of them may decay adaptively to the smoothness l+α of the function f. However, when l + α > 1, the situation is different: the decay rate of σ̂_ML^2 is always Θ(N^-1) and thus insensitive to α, while that of σ̂_CV^2 is 𝒪(N^-1 - 2α) for l=1 and α∈ (0, 1/2]. Therefore the CV estimator may be adaptive to a broader range of the smoothness 0 < l + α≤ 3/2 of the function f than the ML estimator (whose range of adaptation is 0 < l + α≤ 1). Note that <Ref> provide asymptotic upper bounds (except for the case l ≥ 1 of <Ref>) and may not be tight if the function f is smoother than “typical” functions in C^l,α([0,T]).[For example, if f(x) = | x - 1/2 | with T = 1, we have f ∈ C^0,1 ([0,T]), as f is Lipschitz continuous in this case. However, f is almost everywhere infinitely differentiable except at one point x = 1/2, so it is, in this sense, much smoother than “typical” functions in C^0,1 ([0,T]).] In <Ref>, we show that the bounds are indeed tight in expectation if f is a fractional (or integrated fractional) Brownian motion with smoothness l + α. The proof of <Ref> shows that for l = 1 we have σ̂_ML^2 = Θ(N^-1) whenever |𝒫_N| → 0 as N →∞. More precisely, it establishes that N σ̂_ML^2 → ‖ f' ‖_ℒ^2([0, T])^2 := ∫_0^T f'(x)^2 d x as N →∞. Note that the ℒ^2([0, T]) norm of f' in the right hand side equals the norm of f in the reproducing kernel Hilbert space of the Brownian motion kernel <cit.>. Therefore, this fact is consistent with a similar more general statement in <cit.>. In addition to the above results, <Ref> below shows the limit of the CV estimator if the true function f is of finite quadratic variation. For each N ∈ℕ, let 𝒫_N ⊂ [0,T] be the equally-spaced partition of size N. Suppose that f: [0,T] →ℝ has finite quadratic variation V^2(f) with respect to (𝒫_N)_N ∈ℕ, f(0) = 0, and f is continuous on the boundary, i.e., lim_x → 0^+ f(x) = f(0) and lim_x → T^- f(x) = f(T). Moreover, suppose that the quadratic variation V^2(f) remains the same for all sequences of quasi-uniform partitions with constant C_qu=2.[In <Ref>, we discuss the relaxation of this requirement.] Then lim_N →∞σ̂_CV^2 = V^2(f)/T. See <Ref>. For the ML estimator σ̂_ML^2, it is straightforward to obtain a similar result by using (<ref>) and (<ref>) in Section <ref>: Under the same conditions as <Ref>, we have lim_N →∞σ̂_ML^2 = V^2(f)/T. <Ref> and (<ref>) are consistent with <Ref>, which assume f ∈ C^l, α([0,T]) with l + α > 1/2 and imply σ̂_CV^2 → 0 and σ̂_ML^2 → 0 as N →∞. Indeed, as summarized in <ref>, we have V^2(f) = 0 for f ∈ C^l, α([0,T]) with l + α > 1/2, so <Ref> and (<ref>) again give σ̂_CV^2 → 0 and σ̂_ML^2 → 0 as N →∞. When f is a Brownian motion, in which case the Brownian motion prior is well-specified, the smoothness of f is l + α = 1/2, and the quadratic variation V^2(f) becomes a positive constant  <cit.>. <Ref> in the next subsection shows that this fact, <Ref>, and (<ref>) lead to the consistency of the ML and CV estimators in the well-specified setting.
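The quadratic-variation limit above is easy to probe numerically. The following Python sketch (our addition; the jump location 1/2 and the grid sizes are arbitrary choices) evaluates the explicit LOO-CV estimator for f(x) = sin(10x) + 1[x > 1/2] on equally-spaced partitions of [0,1]; this function has V^2(f) = 1, so the estimates should approach V^2(f)/T = 1.

```python
import numpy as np

def sigma2_cv_brownian(X, fX):
    """LOO-CV scale estimate for the Brownian kernel k(x, x') = min(x, x')."""
    N, total = len(X), 0.0
    for n in range(N):
        idx = np.r_[0:n, n + 1:N]
        K = np.minimum.outer(X[idx], X[idx])   # Gram matrix of the held-in points
        kx = np.minimum(X[n], X[idx])          # cross-covariances with x_n
        w = np.linalg.solve(K, kx)
        m = w @ fX[idx]                        # LOO posterior mean at x_n
        v = X[n] - kx @ w                      # LOO posterior variance at x_n
        total += (fX[n] - m) ** 2 / v
    return total / N

T = 1.0
for N in (50, 100, 200):
    X = np.linspace(T / N, T, N)               # equally-spaced partition with x_N = T
    fX = np.sin(10 * X) + (X > 0.5)            # V^2(f) = 1 from the unit jump
    print(N, sigma2_cv_brownian(X, fX))        # should tend to V^2(f)/T = 1
```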
§.§ Random setting In <Ref>, we obtained asymptotic upper bounds on the CV and ML scale estimators when the true function f is a fixed function in a Hölder space. This section shows that these asymptotic bounds are tight in expectation when f is a fractional (or integrated fractional) Brownian motion. That is, we consider the asymptotics of the expectations 𝔼[σ̂_CV^2] and 𝔼[σ̂_ML^2] under the assumption that f ∼𝒢𝒫(0, k_l, H), where k_l, H is the kernel of a fractional Brownian motion (<ref>) for l = 0 or that of an integrated fractional Brownian motion (<ref>) for l = 1, with 0 < H <1 being the Hurst parameter. Recall that f ∼𝒢𝒫(0, k_l, H) belongs to the Hölder space C^l, H - ε([0,T]) almost surely for arbitrarily small ε > 0, so its smoothness is l + H. Figure <ref> summarises the obtained upper and lower rates, corroborating the upper rates in Figure <ref>. <Ref> below establish the asymptotic upper and lower bounds for the CV and ML estimators, respectively. Suppose that (𝒫_N)_N ∈ℕ are quasi-uniform and f ∼𝒢𝒫(0, k_l,H) with l ∈{0, 1} and 0 < H < 1. Then 𝔼[σ̂_CV^2] = Θ ( N^1 - min{2(l + H),3} ) = Θ(N^1 - 2 H) if l = 0 and H ∈ (0, 1), Θ(N^-1 - 2H) if l = 1 and H < 1/2, Θ(N^- 2) if l = 1 and H ≥ 1/2. See Section <ref>. Suppose that (𝒫_N)_N ∈ℕ are quasi-uniform and f ∼𝒢𝒫(0, k_l,H) with l ∈{0, 1} and 0 < H < 1. Then 𝔼[σ̂_ML^2] = Θ ( N^1 - min{2(l + H),2} ) = Θ(N^1 - 2 H) if l = 0 and H ∈ (0, 1), Θ(N^-1) if l = 1 and H ∈ (0, 1). See Section <ref>. The proof is similar to that of <Ref>. <Ref> show that the CV estimator is adaptive to the unknown smoothness l + H of the function f for a broader range 0< l+H ≤ 3/2 than the ML estimator, whose range of adaptation is 0 < l+H ≤ 1. These results imply that the CV estimator can be asymptotically well-calibrated for a broader range of unknown smoothness than the ML estimator, as discussed in <Ref>. When the smoothness of f is less than 1/2, i.e., when l + H < 1/2, the Brownian motion prior, whose smoothness is 1/2, is smoother than f. In this case, the expected rates of σ̂_CV^2 and σ̂_ML^2 are Θ(N^1 - 2 H) and increase as N increases. The increase of 𝔼[σ̂_CV^2] and 𝔼[σ̂_ML^2] can be interpreted as compensating the overconfidence of the posterior standard deviation √(k_N(x)), which decays too fast to be asymptotically well-calibrated. This interpretation agrees with the illustration in Figure <ref>. On the other hand, when l+ H > 1/2, the function f is smoother than the Brownian motion prior. In this case, 𝔼[σ̂_CV^2] and 𝔼[σ̂_ML^2] decrease as N increases, compensating the under-confidence of the posterior standard deviation √(k_N(x)). See Figure <ref> for an illustration. When l + H = 1/2, this is the well-specified case in that the smoothness of f matches the Brownian motion prior. In this case, <Ref> yield 𝔼[σ̂_CV^2] = Θ(1) and 𝔼[σ̂_ML^2] = Θ(1), i.e., the CV and ML estimators converge to a constant. The following proposition, which follows from <Ref> and (<ref>), shows that this limiting constant is the true value of the scale parameter σ_0^2 in the well-specified setting f ∼𝒢𝒫(0, σ_0^2 k), recovering similar results in the literature <cit.>. Suppose that f ∼𝒢𝒫(0, σ_0^2 k) for σ_0 > 0 and that partitions (𝒫_N)_N ∈ℕ are equally-spaced. Then lim_N →∞σ̂_CV^2 = lim_N →∞σ̂_ML^2 = σ_0^2 almost surely. Since the quadratic variation of almost all sample paths of the unscaled (i.e., σ_0 = 1) Brownian motion on [0, T] equals T <cit.>, the claim follows from (<ref>) and (<ref>). We next discuss the implications of the obtained asymptotic rates of σ̂_CV^2 and σ̂_ML^2 on the reliability of the resulting GP uncertainty estimates.
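Before moving on, the expected rate for the ML estimator with l = 0 can be checked with a few lines of code. Because σ̂_ML^2 is a quadratic form in f, its expectation under f ∼𝒢𝒫(0, k_0,H) is available exactly as 𝔼[f(𝐗)^⊤ K^{-1} f(𝐗)]/N = tr(K^{-1}C)/N, where K is the Brownian Gram matrix and C the FBM covariance matrix. The following sketch (ours, with arbitrary grid sizes) uses this identity; the printed log-log slopes should be close to the predicted 1 - 2H.

```python
import numpy as np

def k_fbm(x, y, H):
    """Fractional Brownian motion covariance k_{0,H}."""
    return 0.5 * (np.abs(x) ** (2 * H) + np.abs(y) ** (2 * H)
                  - np.abs(x - y) ** (2 * H))

def expected_sigma2_ml(H, N, T=1.0):
    """Exact E[sigma2_ML] for f ~ GP(0, k_{0,H}): tr(K^{-1} C) / N."""
    X = np.linspace(T / N, T, N)                  # uniform grid, 0 excluded
    C = k_fbm(X[:, None], X[None, :], H)          # true covariance of f(X)
    K = np.minimum.outer(X, X)                    # Brownian model Gram matrix
    return np.trace(np.linalg.solve(K, C)) / N

# The theorem predicts E[sigma2_ML] = Theta(N^{1-2H}) for l = 0, so the
# empirical slope between N = 100 and N = 400 should be close to 1 - 2H.
for H in (0.25, 0.5, 0.75):
    e1, e2 = expected_sigma2_ml(H, 100), expected_sigma2_ml(H, 400)
    print(H, np.log(e2 / e1) / np.log(4.0))
```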
§ CONSEQUENCES FOR CREDIBLE INTERVALS This section discusses whether the estimated scale parameter, given by the CV or ML estimator, leads to asymptotically well-calibrated credible intervals. With the kernel σ̂^2 k(x,x'), where σ̂^2 = or σ̂^2 =, a GP credible interval at x ∈ [0,T] is given by [m_N(x) - ασ̂√(k_N(x)), m_N(x) + ασ̂√(k_N(x))] where α > 0 is a constant (e.g., α≈ 1.96 leads to the 95% credible interval). As discussed in Section <ref>, this credible interval (<ref>) is asymptotically well-calibrated, if it shrinks to 0 at the same speed as the decay of the error |m_N(x) - f(x)| as N increases, i.e., the ratio |f(x) - m_N(x)| /σ̂√(k_N(x)) should neither diverge to infinity nor converge to 0. If this ratio diverges to infinity, the credible interval (<ref>) is asymptotically overconfident, in that (<ref>) shrinks to 0 faster than the actual error |f(x) - m_N(x)|. If the ratio converges to 0, the credible interval is asymptotically underconfident, as it increasingly overestimates the actual error. Therefore, the ratio (<ref>) should ideally converge to a positive constant for the credible interval (<ref>) to be reliable. For ease of analysis, we focus on the random setting in Section <ref> where f is a fractional (or integrated fractional) Brownian motion and where we obtained asymptotic upper and lower bounds for and . We study how the expectation of the posterior variance σ̂^2 k_N(x) scales with the expected squared error [ f(x) - m_N(x) ]^2. Specifically, we analyze their ratio for σ̂^2 = and σ̂^2 =: R_^(x, N) [ f(x) - m_N(x) ]^2/σ̂_^2 k_N(x) and R_^(x, N) [ f(x) - m_N(x) ]^2/σ̂_^2 k_N(x) . The ratio diverging to infinity (resp. converging to 0) as N →∞ suggests that the credible interval (<ref>) is asymptotically overconfident (resp. underconfident) for a non-zero probability of the realisation of f. Thus ideally, the ratio should converge to a positive constant. <Ref> below establishes the asymptotic rates of the ratios in (<ref>). Suppose that (_N)_N ∈ℕ are quasi-uniform and f ∼(0, k_l, H) for l ∈{0,1 } and 0 < H < 1. Then, sup_x ∈ [0, T] R_^(x, N) = Θ(1) if l = 0 and H ∈ (0, 1), Θ(1) if l = 1 and H ∈ (0, 1/2), Θ(N^1-2H) if l = 1 and H ∈ (1/2, 1) and sup_x ∈ [0, T] R_^(x, N) = Θ(1) if l = 0 and H ∈ (0, 1), Θ(N^-2H) if l = 1 and H ∈ (0, 1). See <Ref>. We have the following observations from <Ref>, which suggest an advantage of the CV estimator over the ML estimator for uncertainty quantification: * The ratio for the CV estimator neither diverges to infinity nor decays to 0 in the range 0 < l+H < 3/2, which is broader than that of the ML estimator, 0 < l+H < 1. This observation suggests that the CV estimator can yield asymptotically well-calibrated credible intervals for a broader range of the unknown smoothness l + H of the function f than the ML estimator. * The ratio decays to 0 for the CV estimator in the range 3/2 < l+H < 2 and for the ML estimator in the range 1 < l+H < 2. Therefore, the ML estimator may yield asymptotically underconfident credible intervals for a broader range of the smoothness l+H than the CV estimator. § EXPERIMENTS This section describes numerical experiments to substantiate the theoretical results in <Ref>. We define test functions in <Ref>, show empirical asymptotic results for the CV estimator in <Ref>, and report comparisons between the CV and ML estimators in <Ref>. To this end, for a continuous function f, define l[f] ∈ℕ∪{ 0 } and α∈ (0, 1] as l[f] := sup{ l ∈ℕ∪{ 0 } : f ∈ C^l([0, T]) }, α[f] := sup{α∈ (0, 1] : f ∈ C^l[f],α([0, T]) }. 
Then, for arbitrarily small ε_1 ∈ℕ and ε_2 > 0, we have f ∈ C^max ( l[f]-ε_1, 0),α[f]-ε_2([0, T]) and f ∉ C^l[f]+ε_1,α[f]+ε_2([0, T]). In this sense, l[f] and α[f] characterize the smoothness of f. §.§ Test functions We generate test functions f: [0,1] →ℝ as sample paths of stochastic processes with varying degrees of smoothness, as defined below. The left columns of <Ref> show samples of these functions. * To generate nowhere differentiable test functions, we use the Brownian motion (BM), the Ornstein–Uhlenbeck process (OU), and the fractional Brownian motion (FBM[We use <https://github.com/crflynn/fbm> to sample from FBM.]) which are zero-mean GPs with covariance kernels k_(x,x') = min(x, x'), k_(x,x') = (e^- λ| x-x' | - e^-λ (x+x')) / 4, k_(x,x') = ( | x |^2H + | x' |^2H - | x-x'|^2H) / 2, where λ > 0 and 0<H<1 is the Hurst parameter (recall that the FBM = BM if H = 1/2). We set λ = 0.2 in the experiments below. Almost all samples f from these processes satisfy l[f] = 0. For BM and OU we have α[f] = 1/2 and for FBM α[f] = H (see <Ref>). It is well known that the OU process with the kernel k_ above satisfies the stochastic differential equation d f(t) = -λ f(t) d t + √(λ/2) d B(t), where B is the standard Brownian motion whose kernel is k_ BM. * To generate differentiable test functions, we use once (iFBM) and twice (iiFBM) integrated fractional Brownian motions f_(x) =∫_0^x f_(z) d z and f_(x) =∫_0^x f_(z) d z, where f_∼(0, k_ FBM). See (<ref>) for the iFBM covariance kernel. With H the Hurst parameter of the original FBM, almost all samples f from the above processes satisfy l[f] = 1 and α[f] = H (iFBM) or l[f] = 2 and α[f] = H (iiFBM). * We also consider a piecewise infinitely differentiable function f(x) = sin 10x + [x>x_0], where x_0 is randomly sampled from the uniform distribution on [0,1] and [x > x_0] is 1 if x > x_0 and 0 otherwise. This function is of finite quadratic variation with V^2(f) = 1. Denote σ̂^2 = lim_N →∞. For the above test functions, with equally-spaced partitions, we expect the following asymptotic behaviours for the CV estimator from <Ref>, <Ref>, the definition of quadratic variation, and Equation (<ref>): BM (l[f]=0, α[f]=1/2): = (1) and σ̂^2 = 1, OU (l[f]=0, α[f]=1/2): = (1) and σ̂^2 = λ/2, FBM (l[f]=0, α[f]=H): = (N^1 - 2H) and σ̂^2 = 0, iFBM (l[f]=1, α[f]=H): = (N^-1 - 2H) and σ̂^2 = 0, iiFBM (l[f]=2, α[f]=H): = (N^-2) and σ̂^2 = 0, sin 10x + [x > x_0]: = (1) and σ̂^2 = 1. Note that the above rate for the iFBM holds for 0 < H ≤ 1/2. The chosen functions allow us to cover a range of α[f] and l[f] relevant to the varying rate of convergence in <Ref>, as well as a range of V^2(f) relevant to the limit in <Ref>, lim_N →∞ = V^2(f) / T. §.§ Asymptotics of the CV estimator <Ref> shows the asymptotics of , where each row corresponds to one stochastic process generating test functions f; the rows are displayed in the increasing order of smoothness as quantified by l[f] + α[f]. The estimates are obtained for equally-spaced partitions of sizes N=10,10^2,…,10^5. In each row, the left panel plots a single sample of generated test functions f. The middle panel shows the mean and confidence intervals (of two standard deviations) of for 100 sample realisations of f for each sample size N. The right panel describes the convergence rate of to its limit point σ̂^2 = lim_N →∞ on the log scale. 
We have the following observations: * The first two rows (the FBM and OU) and the last (the piece-wise infinitely differentiable function) confirm <Ref>, which states the convergence → V^2(f) / T as N →∞. While <Ref> does not provide convergence rates, the rates in the first two rows appear to be N^-1/2. In the last row the rate is N^-2. * The remaining rows show that the observed rates of to 0 are in complete agreement with the rates predicted by <Ref>. In particular, the rates are adaptive to the smoothness l[f] + α [f] of the function if l[f] + α[f] ≤ 3/2, as predicted. §.§ Comparison of CV and ML estimators <Ref> shows the decay rates of and σ̂^2_ to 0 for test functions f with l[f] = 1, under the same setting as for <Ref>. In this case, <Ref> predict that decays at the rate Θ(N^-1) regardless of the smoothness; this is confirmed in the right column. In contrast, the middle column shows again that decays with a rate that adapts to l[f] and α[f] as long as l[f] + α[f] ≤ 3/2, as predicted by <Ref>. These results empirically support our theoretical finding that the CV estimator is adaptive to the unknown smoothness l[f] + α[f] of a function f for a broader range of smoothness than the ML estimator. § CONCLUSION AND FUTURE WORK We have analysed the asymptotics of the CV and ML estimators for the scale parameter in GP interpolation with the Brownian motion kernel. As a novel contribution, our analysis covers the misspecified case where the smoothness of the true function f is different from that of the samples from the GP prior. Our main results in <ref> indicate that both CV and ML estimators can adapt to the unknown smoothness of f, but the range of smoothness for which this adaptation happens is broader for the CV estimator. Accordingly, the CV estimator can make GP uncertainty estimates asymptotically well-calibrated for a wider range of smoothness than the ML estimator, as indicated in <Ref>. In this sense, the CV estimator has an advantage over the ML estimator. The experiments provide supporting evidence for the theoretical results. Natural next steps include the following: * Supplement the asymptotic upper bounds in <Ref> of the deterministic setting with matching lower bounds. * Extend the analyses (of both the deterministic and random settings) to more generic finitely smooth kernels and higher dimensions. The matching lower bounds, if obtained, would allow one to analyse the ratio between the prediction error |f(x) - m_N(x)| and the posterior standard deviation σ̂√([b]k_N(x)) in the deterministic setting, corresponding to the one in <Ref> for the random setting. Such an analysis would need additional assumptions on the true function f, such as the homogeneity of the smoothness of f across the input space. It also requires a sharp characterisation of the error | f(x) - m_N(x) |, which could use super convergence results in <cit.> and <cit.>. Most natural kernel classes for extension are Matérns and other kernels whose RKHS are norm-equivalent to Sobolev spaces. To this end, it would be possible to adapt the techniques used in <cit.> for analyzing the ML estimator to the CV estimator. In any case, one would need much more advanced techniques than those used here. § PROOFS This section provides the proofs of the main results and other lengthy computations. For x_0 = 0 and x_1, …, x_N ∈ [0,T], we will use the following notation whenever it can improve the readability or highlight a point: Δ x_n x_n+1 - x_n, n = 0, 1, …, N-1, f_n f(x_n), n = 0, 1, …, N. 
§.§ Explicit expressions for the CV and ML estimators Let us define x_0 = 0 and use the convention f(x_0) = 0. Then one can show that the posterior mean and covariance functions in (<ref>) can be expressed as m_N(x) = (x_n - x) f(x_n-1) + (x - x_n-1) f(x_n) /x_n - x_n-1 if x ∈ [x_n-1, x_n] for some 1 ≤ n ≤ N, f(x_N) if x ∈ [x_N, T] and k_N(x, x') = ( x_n - x') ( x - x_n-1) /x_n - x_n-1 if x_n-1≤ x ≤ x' ≤ x_n for some 1 ≤ n ≤ N, x - x_N if x_N ≤ x ≤ x' ≤ T, 0 otherwise. We omit the case x' ≤ x for k_N(x,x') as this case is obtained by the symmetry k_N(x,x') = k_N(x', x). Using these expressions, we have, for each 1 ≤ n < N: m_∖ n(x_n) = (x_n - x_n + 1) f(x_n-1) + (x_n - 1 - x_n) f(x_n+1) /x_n-1 - x_n + 1 and k_∖ n (x_n) = k_∖ n(x_n, x_n) = ( x_n - x_n + 1) ( x_n - x_n - 1) /x_n-1 - x_n + 1 For n = N, we have m_∖ N(x_N) = f(x_N-1) and k_∖ N(x_N) = x_N - x_N-1. Inserting these expressions in (<ref>) and using the notation (<ref>), the CV estimator can be written as = 1/N[ (x_2 f_1 - x_1 f_2 )^2/ x_1 x_2 Δ x_1 + ∑_n=2^N-1( Δ x_n-1 [f_n+1 - f_n] - Δ x_n [f_n - f_n-1] )^2/ (Δ x_n + Δ x_n-1) Δ x_n Δ x_n-1 + (f_N - f_N-1)^2/Δ x_N-1]. For the ML estimator (<ref>), we obtain the explicit expression = 1/N∑_n=1^N [ f(x_n) - f(x_n-1) ]^2/Δ x_n-1 by observing that m_n-1(x_n) = f(x_n) and k_n-1(x_n) = x_n - x_n-1. §.§ Proofs for Section <ref> The estimator in (<ref>) may be written as = B_1, N + I_N + B_2,N in terms of the boundary terms B_1,N = 1/N·(x_2 f_1 - x_1 f_2 )^2/ x_1 x_2 Δ x_1 and B_2,N = 1/N·(f_N - f_N-1)^2/Δ x_N-1 and the interior term I_N = 1/N∑_n=2^N-1( Δ x_n-1 [f_n+1 - f_n] - Δ x_n [f_n - f_n-1] )^2/ (Δ x_n + Δ x_n-1) Δ x_n Δ x_n-1 . The claimed rate in (<ref>) is ( N^-2 ) if l ≥ 2 or l = 1 and α≥ 1/2. By the inclusion properties of Hölder spaces in Section <ref>, it is therefore sufficient to consider the cases (a) l = 0 and α∈ (1/2, 1] and (b) l=1 and α∈ (0, 1/2]. Suppose first that l = 0 and α∈ (1/2, 1]. Let L be a Hölder constant of a function f ∈ C^0, α([0,T]). Using the Hölder condition, the bounding assumption on Δ x_n, and f_0 = f(0) = 0, the boundary terms can be bounded as B_1,N = 1/N· (x_1 (f_1 - f_2) + Δ x_1 (f_1 - f_0) )^2/x_1 x_2 Δ x_1 ≤1/N·2(x_1^2 (f_1 - f_2)^2 + Δ x_1^2 (f_1 - f_0)^2)/ x_1 x_2 Δ x_1 ≤1/N· 2L^2 (x_1^2 Δ x_1^2α + x_1^2αΔ x_1^2 ) /x_1 x_2 Δ x_1 = (N^-1Δ x_1^2α - 1) = (N^-2 α) and B_2,N = 1/N·(f_N - f_N-1)^2/Δ x_N-1≤1/N L^2 Δ x_N-1^2α - 1 = (N^- 2α). Similarly, the interior term is bounded as I_N ≤2/N∑_n=2^N-1Δ x_n-1^2 (f_n+1 - f_n)^2 + Δ x_n^2 ( f_n - f_n-1)^2/ (Δ x_n + Δ x_n-1) Δ x_n Δ x_n-1 ≤2L^2/N∑_n=2^N-1Δ x_n-1^2 Δ x_n^2α + Δ x_n^2 Δ x_n-1^2α/ (Δ x_n + Δ x_n-1) Δ x_n Δ x_n-1 = 2L^2/N∑_n=2^N-1Δ x_n-1Δ x_n^2α - 1 + Δ x_n Δ x_n-1^2α - 1/Δ x_n + Δ x_n-1 = 2L^2/N∑_n=2^N-1( Δ x_n-1/Δ x_n + Δ x_n-1Δ x_n^2α - 1 + Δ x_n /Δ x_n + Δ x_n-1Δ x_n-1^2α - 1) ≤2L^2/N∑_n=2^N-1( Δ x_n^2α - 1 + Δ x_n-1^2α - 1) = (N^1 - 2α). Inserting the above bounds in (<ref>) yields = (N^-2α + N^1 - 2α) = (N^1-2α), which is the claimed rate when l=0. Suppose then that l = 1 and α∈ (0, 1/2], so that the first derivative f' of f ∈ C^1, α([0, T]) is α-Hölder and hence continuous. Because a continuously differentiable function is Lipschitz, we may set α = 1 in the estimates (<ref>) and (<ref>) for the boundary terms B_1,N and B_2,N in the preceding case. This shows these terms are (N^-2). 
Because f is differentiable, we may use the mean value theorem to write the interior term as I_N = 1/N∑_n=2^N-1Δ x_n-1Δ x_n/Δ x_n-1 + Δ x_n( f_n+1 - f_n/Δ x_n - f_n - f_n-1/Δ x_n-1)^2 = 1/N∑_n=2^N-1Δ x_n-1Δ x_n/Δ x_n-1 + Δ x_n[ f'(x̃_n) - f'(x̃_n-1) ]^2, where x̃_n ∈ (x_n, x_n+1). Let L' be a Hölder constant of f'. Then the Hölder continuity of f' and the assumption that _N =(N^-1) yield I_N ≤L^2/N∑_n=2^N-1Δ x_n-1Δ x_n/Δ x_n-1 + Δ x_n|x̃_n - x̃_n-1|^2α ≤L^2/N∑_n=2^N-1Δ x_n-1Δ x_n/Δ x_n-1 + Δ x_n (Δ x_n-1 + Δ x_n)^2α ≤L^2/N∑_n=2^N-1Δ x_n (Δ x_n-1 + Δ x_n)^2α = (N^-2α - 1). Using the above bounds in (<ref>) yields = (N^-2 + N^-2α-1) = (N^-2α-1), which is the claimed rate when l=1. From (<ref>) we have = 1/N∑_n=1^N (f_n - f_n-1)^2/Δ x_n-1. Suppose first that l = 0 and α∈ (1/2, 1]. As in the proof of <Ref>, we get = 1/N∑_n=1^N (f_n - f_n-1)^2/Δ x_n-1≤L^2/N∑_n=1^N Δ x_n-1^2α - 1 = ( N^1-2α) when _N =(N^-1). Suppose then that l = 1. By the mean value theorem there are ξ_n ∈ (x_n-1, x_n) such that = 1/N∑_n=1^N (f_n - f_n-1)^2/Δ x_n-1 = 1/N∑_n=1^N Δ x_n-1( f_n - f_n-1/Δ x_n-1)^2 = 1/N∑_n=1^N Δ x_n-1 f'(ξ_n)^2. Since f' is continuous on [0, T] and hence Riemann integrable, we obtain the asympotic equivalence N →∫_0^T f'(x)^2 d x as N →∞ when _N → 0 as N →∞. The integral is positive because f has been assumed non-constant. For equally-spaced partitions, Δ x_n = x_1 = T/N for all n ∈{2, …, N}, the estimator in (<ref>) takes the form = 1/T[ (x_2 f_1 - x_1 f_2 )^2/ x_1 x_2 + 1/2∑_n=2^N-1 ( (f_n+1 - f_n) - (f_n - f_n-1) )^2 + (f_N - f_N-1)^2], Recall from the proof of <Ref> the decomposition = B_1, N + I_N + B_2,N in terms of the boundary terms B_1,N and B_2,N in (<ref>) and the interior term I_N in (<ref>). Because f is assumed continuous on the boundary and equispaced partitions are quasi-uniform, both B_1,N and B_2,N tend to zero as N →∞. We may therefore focus on the interior term, which decomposes as I_N = 1/2∑_n=2^N-1( (f_n+1 - f_n) - (f_n - f_n-1) )^2 = ∑_n=2^N-1 (f_n+1 - f_n)^2 + ( f_n - f_n-1 )^2 - 1/2 ( f_n+1 - f_n-1 )^2 The sums ∑_n=2^N-1 (f_n+1 - f_n)^2 and ∑_n=2^N-1 ( f_n - f_n-1 )^2 tend to V^2(f) by definition. To establish the claimed bound we are therefore left to prove that ∑_n=2^N-1 (f_n+1 - f_n-1 )^2 → 2V^2(f) as N →∞. We may write the sum as ∑_n=2^N-1 (f_n+1 - f_n-1 )^2 = ∑_n=1^⌊N-1/2⌋ (f_2n+1 - f_2n-1 )^2 + ∑_n=1^⌊N-2/2⌋ (f_2n+2 - f_2n )^2. Consider a sub-partition of _N that consists of odd-index points x_1, x_3, … x_⌊N-1/2⌋ of _N. The sequence of these sub-partitions is quasi-uniform with constant 2. The assumption that the quadratic variation is V^2(f) for all partitions with quasi-uniformity constant 2 implies that lim_N →∞∑_n=1^⌊N-1/2⌋ (f_2n+1 - f_2n-1 )^2 = V^2(f). The same will hold for sub-partitions formed of even-index points of _N, giving lim_N →∞∑_n=1^⌊N-2/2⌋ (f_2n+2 - f_2n )^2 = V^2(f). Thus, (<ref>) holds. This completes the proof. §.§ Proofs for Section <ref> Recall the explicit expression of in (<ref>): = 1/N[ (x_2 f_1 - x_1 f_2 )^2/ x_1 x_2 Δ x_1 + ∑_n=2^N-1( Δ x_n-1 [f_n+1 - f_n] - Δ x_n [f_n - f_n-1] )^2/ (Δ x_n + Δ x_n-1) Δ x_n Δ x_n-1 + (f_N - f_N-1)^2/Δ x_N-1]. We consider the cases l = 0 and l = 1 separately. Recall that f ∼(0, k_l, H) implies that [f(x)f(x')] = k_l,H(x, x'). Suppose first that l = 0, in which case f ∼(0, k_0, H) for the fractional Brownian motion kernel k_0,H in (<ref>). 
In this case the expected values of squared terms in the expression for are [x_2 f_1 - x_1 f_2 ]^2 = x_1 x_2 Δ x_1 (x_1^2H - 1 - x_2^2H - 1 + Δ x_1^2H-1 ), [ Δ x_n-1 (f_n+1 - f_n) - Δ x_n (f_n - f_n-1) ]^2 = ( Δ x_n^2H-1 + Δ x_n-1^2H-1 - (Δ x_n-1 +Δ x_n)^2H-1) Δ x_n-1Δ x_n (Δ x_n + Δ x_n-1 ), and [f_N - f_N-1]^2 = Δ x_N-1^2H. Substituting these in the expectation of and using the fact that Δ x_n=Θ(N^-1) for all n by quasi-uniformity we get = 1/N[ (x_1^2H - 1 - x_2^2H - 1 + Δ x_1^2H-1 ) + ∑_n=2^N-1(Δ x_n-1^2H-1 + Δ x_n^2H-1 - (Δ x_n-1 + Δ x_n)^2H-1) + Δ x_N-1^2H - 1] = Θ(N^-2H)+ Θ(N^1-2H) + Θ(N^-2H) = Θ(N^1-2H). Suppose then that l = 1, in which case f ∼(0, k_1, H) for the integrated fractional Brownian motion kernel k_1,H in (<ref>). It is straightforward (though, in the case of the second expectation, somewhat tedious) to compute that the expected values of squared terms in the expression (<ref>) for are [x_2 f_1 - x_1 f_2 ]^2 = x_1 x_2 Δ x_1 /2(H+1)(2H+1)( x_2^2H+1 - x_1^2H+1 - Δ x_1^2H+1) and [ Δ x_n-1 (f_n+1 - f_n) - Δ x_n (f_n - f_n-1) ]^2 = Δ x_n Δ x_n-1 (Δ x_n + Δ x_n-1) /2(H+1)(2H+1)[ (Δ x_n + Δ x_n-1)^2H+1 - Δ x_n^2H+1 - Δ x_n-1^2H+1] and [f_N - f_N-1]^2 = Δ x_N-1/2H+1( x_N^2H+1 - x_N-1^2H+1 - 1/2(H+1)Δ x_N-1^2H+1). Therefore, by (<ref>), = ( x_2^2H+1 - x_1^2H+1 - Δ x_1^2H+1)/2(H+1)(2H+1)N + 1/2(H+1)(2H+1)N∑_n=2^N-1[ (Δ x_n + Δ x_n-1)^2H+1 - Δ x_n^2H+1 - Δ x_n-1^2H+1] + 1/(2H+1)N( x_N^2H+1 - x_N-1^2H+1 - 1/2(H+1)Δ x_N-1^2H+1) 1/2(H+1)(2H+1) B_1,N + 1/2(H+1)(2H+1) I_N + 1/(2H+1) B_2,N. By quasi-uniformity, B_1,N≤ N^-1 x_2^2H+1 = (N^-2-2H). Consider then the interior term I_N = 1/N∑_n=2^N-1Δ x_n^2H+1[ (1 + Δ x_n-1/Δ x_n)^2H+1 - (1 + (Δ x_n-1/Δ x_n)^2H+1) ] 1/N∑_n=2^N-1Δ x_n^2H+1 c_n. Because the function x ↦ (1 + x)^c - (1 + x^c) is positive and increasing for x > 0 if c > 1 and C_^-2≤Δ x_n-1 / Δ x_n ≤ C_ by quasi-uniformity, we have 0 < (1 + C_^-2)^2H+1 - (1 + C_^-2H(2H+1)) ≤ c_n ≤(1 + Δ x_n-1/Δ x_n)^2H+1≤ (1 + C_)^2H+1 for every n. Because N^-1∑_n=2^N-1Δ x_n^2H+1 = Θ(N^-1-2H) by quasi-uniformity, we conclude from (<ref>) that I_N = Θ(N^-1-2H). For the last term B_2, N, recall that we have set x_N = T. Thus B_2,N = 1/N( T^2H+1 - (T - Δ x_N-1)^2H+1 - 1/2(H+1)Δ x_N-1^2H+1). By the generalised binomial theorem, T^2H+1 - (T - Δ x_N-1)^2H+1 = (2H+1) T^2HΔ x_N-1 + ( Δ x_N-1^2 ) as Δ x_N-1→ 0. It follows that under quasi-uniformity we have B_2,N = Θ(N^-2) for every H ∈ (0, 1). Putting these bounds for B_1,N, I_N and B_2,N together we conclude that = 1/2(H+1)(2H+1) B_1,N + 1/2(H+1)(2H+1) I_N + 1/(2H+1) B_2,N = (N^-2-2H) + Θ(N^-1-2H) + Θ(N^-2), which gives = Θ(N^-1-2H) if H ∈ (0, 1/2] and = Θ(N^-2) if H ∈ [1/2, 1). Observe that in the proof of Theorem <ref> it is the boundary term B_2,N that determines the rate when there is sufficient smoothness, in that l = 1 and H ∈ [1/2, 1). Similar phenomenon occurs in the proof of Theorem <ref>. The smoother a process is, the more correlation there is between its values at far-away points. Because the Brownian motion (as well as fractional and integrated Brownian motions) has a zero boundary condition at x = 0 but no boundary condition at x = T and no information is available at points beyond T, the importance of B_2,N is caused by the fact that around T one has the least information about the process. From (<ref>) we get σ̂_^2 = 1/N∑_n=1^N [f_n - f_n-1]^2/Δ x_n-1. 
We may then proceed as in the proof of Theorem <ref> and use quasi-uniformity to show that σ̂_^2 = 1/N∑_n=1^N [f_n - f_n-1]^2/Δ x_n-1 = 1/N∑_n=1^N Δ x_n-1^2H/Δ x_n-1 = 1/N∑_n=1^N Δ x_n-1^2H-1 = Θ(N^1-2H) when l = 0 and σ̂_^2 = ∑_n=1^N [f_n - f_n-1]^2/Δ x_n-1 = 1/(2H+1)N∑_n=1^N ( x_n^2H+1 - x_n-1^2H+1 - 1/2(H+1)Δ x_n-1^2H+1) = 1/(2H+1)N∑_n=1^N ( (2H+1) x_n^2HΔ x_n-1 + (Δ x_n-1^2) - 1/2(H+1)Δ x_n-1^2H+1) = Θ(N^-1) when l = 1. §.§ Proofs for <Ref> We only provide the proof for the case l = 1 and leave the simpler case l = 0 to the reader. Let x ∈ (x_n-1, x_n). From the expression for m_N in <Ref>, we get [ f(x) - m_N(x) ]^2 = [ f(x) - (x_n - x) f(x_n-1) + (x - x_n-1) f(x_n)/Δ x_n-1]^2 = 1/Δ x_n-1^2[ (x - x_n-1)(f(x_n) - f(x)) - (x_n - x)(f(x) - f(x_n-1)) ]^2. Then, we can use (<ref>) with x_n instead of x_n+1 and x instead of x_n to get [ f(x) - m_N(x) ]^2 = (x_n - x)(x - x_n-1)/C_H Δ x_n-1[ Δ x_n-1^2H+1 - (x_n - x)^2H+1 - (x - x_n-1)^2H+1], where C_H = 2(H+1)(2H+1). The expression for k_N in <Ref> gives [ f(x) - m_N(x) ]^2/k_N(x) = 1/C_H[ Δ x_n-1^2H+1 - (x_n - x)^2H+1 - (x - x_n-1)^2H+1]. By removing the negative terms and using the quasi-uniformity (<ref>), we obtain sup_x ∈ [0, T][ f(x) - m_N(x) ]^2/k_N(x)≤ (T C_)^2H+1/C_H N^-1-2H, To see that this bound is tight, observe that for the midpoint x = (x_n + x_n-1) / 2 we have x_n - x = x - x_n-1 = Δ x_n-1 / 2 and [ f(x) - m_N(x) ]^2 / k_N(x) = 1/ C_H(1 - 1/2^2H) Δ x_n-1^2H+1≥T^2H + 1/C_H C_^2H + 1(1 - 1/2^2H) N^-1-2H by the quasi-uniformity. Therefore sup_x ∈ [0, T][ f(x) - m_N(x) ]^2/k_N(x) = Θ(N^-1-2H) when l = 1. One can similarly show that sup_x ∈ [0, T][ f(x) - m_N(x) ]^2/k_N(x) = Θ(N^1-2H) when l = 0. The claims then follow from the rates for and in <Ref>. § ACKNOWLEDGEMENTS MN acknowledges support from the U.K. Research and Innovation under grant number EP/S021566/1. MK has been supported by the French government, through the 3IA Cote d’Azur Investment in the Future Project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002. TK was supported by the Academy of Finland postdoctoral researcher grant #338567 “Scalable, adaptive and reliable probabilistic integration”. Part of this research was carried out during a visit by TK to EURECOM in May 2023 that was funded by the Institut français de Finlande, the Embassy of France to Finland, and the Finnish Society of Sciences and Letters. MM gratefully acknowledges financial support by the European Research Council through ERC StG Action 757275 / PANAMA; the DFG Cluster of Excellence “Machine Learning - New Perspectives for Science”, EXC 2064/1, project number 390727645; the German Federal Ministry of Education and Research (BMBF) through the Tübingen AI Center (FKZ: 01IS18039A); and funds from the Ministry of Science, Research and Arts of the State of Baden-Württemberg. § CONNECTION BETWEEN THE ML AND CV ESTIMATORS Here we prove a connection between the ML and CV estimators; see <Ref>. Let C(N, p) = Np = N!/p! (N - p)! denote the binomial coefficient. The leave-p-out cross-validation (LPO-CV) estimator of σ^2 is σ̂_(p)^2 = 1/C(N,p)∑_i=1^C(N, p)1/p∑_n=1^p [f(x_p,i,n) - m_∖{p, i}(x_p,i,n)]^2/ k_∖{p, i}(x_p,i, n), where i indexes the N-choose-p possible sets of held-out datapoints, _∖{p,i}, among and n ≤ p the data points left out of each of these sets. That is, for each p and i we have = _∖{p, i}∪{x_p,i,1, …, x_p,i,p}. The functions m_∖{p, i} and k_∖{p, i} are the GP conditional mean and variance based on the set _∖{p,i}, which contains N-p points. 
The purpose of this section is to prove that σ̂_^2 = 1/N∑_p=1^N σ̂_(p)^2. Denote ν() = f()^⊤ k(, )^-1 f(). The block matrix inversion formula applied to g(_∖{p,i}∪{x}) and the equations in <Ref> for the conditional mean and variance yield [f(x) - m_∖{p,i}(x)]^2/k_∖{p,i}(x) = ν( _∖{p,i}∪{x} ) - ν( _∖{p,i} ) for any 1 ≤ p ≤ N and x ∉_{p,i}, where we use the convention ν(_∖{N,i}) = ν(∅) = 0. For each 1 ≤ p ≤ N, i ≤ C(N, p) and n ≤ p there is a unique index j(p, i, n) ≤ C(N, p-1) such that _∖{p,i}∪{x_p,i,n} = _∖{p-1, j(p,i,n)}. Setting x = x_p,i,n in (<ref>) gives [f(x_p,i,n) - m_∖{p,i}(x_p,i,n)]^2/k_∖{p,i}(x_p,i,n) = ν( _∖{p,i}∪{x_p,i,n}) - ν( _∖{p,i} ). Therefore ∑_p=1^N σ̂_(p)^2 = 1/N∑_p=1^N 1/C(N,p)∑_i=1^C(N, p)1/p∑_n=1^p [f(x_p,i,n) - m_∖{p, i}(x_p,i,n)]^2/ k_∖{p, i}(x_p,i, n) = ∑_p=1^N 1/C(N,p)∑_i=1^C(N, p)1/p∑_n=1^p [ ν( _∖{p,i}∪{x_p,i,n}) - ν( _∖{p,i} ) ]. By (<ref>) from each set _∖{p,i} on level p (i.e., sets from which p points have been left out) one can obtain p sets on level p-1 by adding one of the left-out datapoints. However, there are C(N,p) sets on level p and C(N, p-1) sets on level p-1. Hence for each set _∖{p-1,j} on level p-1 there are p ·C(N,p)/C(N,p-1) = p ·N!(p-1)!(N-p+1)!/N!p!(N-p)! = N-p+1 combinations of sets _∖{p,i} on level p and points x_p,i,n left out of these sets such that _∖{p,i}∪{x_p,i,n} = _∖{p-1,j}. Therefore ∑_i=1^C(N, p) 1/p∑_n=1^p [ ν( _∖{p,i}∪{x_p,i,n}) - ν( _∖{p,i} ) ] = ∑_i=1^C(N, p)1/p∑_n=1^p ν( _∖{p,i}∪{x_p,i,n}) - ∑_i=1^C(N, p)1/p∑_n=1^p ν( _∖{p,i} ) = N-p+1/p∑_j=1^C(N, p-1)ν( _∖{p-1,j}) - ∑_i=1^C(N, p)ν( _∖{p,i} ) and consequently (<ref>) writes ∑_p=1^N σ̂_(p)^2 = ∑_p=1^N 1/C(N,p)[ N - p + 1/p∑_j=1^C(N, p-1)ν( _∖{p-1,j}) - ∑_i=1^C(N,p)ν( _∖{p,i} ) ] = ∑_p=1^N [ 1/C(N,p-1)∑_j=1^C(N, p-1)ν( _∖{p-1,j}) - 1/C(N,p)∑_i=1^C(N,p)ν( _∖{p,i} ) ], which is a telescoping sum. We are left with ∑_p=1^N σ̂_(p)^2 = 1/C(N, 0)∑_j=1^C(N,0)ν( _∖{0,j}) - 1/C(N,N)∑_i=1^C(N,N)ν( _∖{N,i} ), where ν( _∖{0,j}) = f()^⊤ k(, )^-1 f() and ν( _∖{N,i} ) = ν(∅) = 0. Thus 1/N∑_p=1^N σ̂_(p)^2 = f()^⊤ k(, )^-1 f()/N = , which establishes (<ref>). § FURTHER DISCUSSION ON <REF> The requirement of having the same V^2(f) for all sequences of partitions quasi-uniform with constant 2 can be relaxed somewhat: trivially, it is sufficient that the quadratic variation is V^2(f) specifically with respect to even-points and odd-points sequences of sub-partitions used in the proof in <Ref>. Furthermore, we may even have different quadratic variations with respect to said sequences. Then the results becomes lim_N →∞σ̂^2_CV = ν/T for ν = V_0^2(f) + V_1^2(f)/2, where V_0^2(f) and V_1^2(f) are quadratic variations with respect to the even- and odd-points sub-partitions respectively, meaning that V^2(f) = lim_N →∞∑_n=1^N-1 (f_n+1 - f_n )^2, V_0^2(f) = lim_N →∞∑_n=1^⌊N-2/2⌋ (f_2n+2 - f_2n )^2, V_1^2(f) = lim_N →∞∑_n=1^⌊N-1/2⌋ (f_2n+1 - f_2n-1 )^2.
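As a closing numerical remark, the identity σ̂_ML^2 = (1/N)∑_{p=1}^N σ̂_(p)^2 proved in the previous appendix can be verified by brute force for a small N, enumerating all C(N,p) held-out subsets. The following Python sketch (our addition, with arbitrary test data) does exactly this, using the relation [f(x_n) - m_{kept}(x_n)]^2 / k_{kept}(x_n) = ν(kept ∪ {x_n}) - ν(kept) from the proof.

```python
import numpy as np
from itertools import combinations

def nu(K, f, idx):
    """nu(X_idx) = f(X_idx)^T k(X_idx, X_idx)^{-1} f(X_idx), with nu(empty) = 0."""
    if len(idx) == 0:
        return 0.0
    sub = np.ix_(idx, idx)
    return float(f[idx] @ np.linalg.solve(K[sub], f[idx]))

def sigma2_lpo(K, f, p):
    """Leave-p-out CV estimate, averaging over all C(N, p) held-out subsets."""
    N = len(f)
    total, count = 0.0, 0
    for held_out in combinations(range(N), p):
        kept = [i for i in range(N) if i not in held_out]
        for n in held_out:
            total += nu(K, f, kept + [n]) - nu(K, f, kept)
        count += 1
    return total / (count * p)

X = np.linspace(0.2, 1.0, 6)                 # small N so all subsets are cheap
f = np.sin(10 * X) + (X > 0.55)              # hypothetical data
K = np.minimum.outer(X, X)                   # Brownian motion Gram matrix
sigma2_ml = float(f @ np.linalg.solve(K, f)) / len(X)
lpo_avg = np.mean([sigma2_lpo(K, f, p) for p in range(1, len(X) + 1)])
assert np.isclose(sigma2_ml, lpo_avg)        # sigma2_ML = (1/N) sum_p sigma2_(p)
```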
http://arxiv.org/abs/2307.06139v2
20230709045849
Constructing Maximal Extensions of the Vaidya Metric in Israel Coordinates: I. Integration of the Field Equations
[ "Sheref Nasereldin", "Kayll Lake" ]
gr-qc
[ "gr-qc" ]
[email protected] [email protected] Department of Physics, Queen's University, Kingston, Ontario, Canada, K7L3N6 This paper explores a complete representation of the Vaidya model, a radial flux of radiation in the eikonal approximation, used for modeling various phenomena in both classical and semi-classical General Relativity and Astrophysics. The majority of the applications of the Vaidya model have been formulated in an incomplete representation. A complete representation is obtained here by direct integration of the Einstein field equations. We present the methodology to obtain this complete representation, and its utility in the modeling of general relativistic phenomena. Constructing Maximal Extensions of the Vaidya Metric in Israel Coordinates: I. Integration of the Field Equations Sheref Nasereldin and Kayll Lake August 12, 2023 =================================================================================================================== § INTRODUCTION The Schwarzschild metric <cit.> has been used to study the exterior geometry of spherical stellar objects undergoing gravitational collapse <cit.>, where it is assumed that the radiation emitted by the object is insignificant. However, during the advanced stages of stellar collapse, these objects are expected to emit a considerable amount of mass in the form of radiation, see for example <cit.>. Therefore, the exterior of a collapsing stellar object is no longer empty, and the Schwarzschild vacuum metric is no longer suitable for its description. The Vaidya metric <cit.> is more suitable for this situation and has been widely used to classically study the geometry outside [With suitable boundary conditions, such as Israel's conditions, see <cit.>, on the spherical surface, this exterior solution can be matched to some proper interior solution, see for example <cit.> and <cit.>.] radiating spherical stellar objects, see for example <cit.>. Thus, one can treat this dynamical mass distribution with its envelope of radiation as an isolated system existing in an otherwise vacuum, asymptotically flat spacetime that is described by the Schwarzschild vacuum metric. The “self-similar" Vaidya metric has been used to construct spacetimes that exhibit a visible strong singularity, demonstrating the potential for the failure of the Penrose “Cosmic censorship hypothesis" <cit.>. This conjecture states that singularities arising from regular initial conditions do not have any causal influence on spacetime. If the hypothesis were to fail, it would be a major flaw in the theory of general relativity and would make it impossible to predict the events in any region of spacetime containing a singularity, as new information could emerge in an unpredictable manner. The growth of curvature along non-spacelike geodesics has been examined (see for example, <cit.>), and the visible singularity in self-similar spacetimes has been classified as strong. Furthermore, Lake and Zannias <cit.> showed that the emergence of naked singularities in these spacetimes is due to the self-similarity assumption, rather than spherical symmetry. On the semi-classical level, the Vaidya metric has been utilized to explore black hole evaporation, possibly due to Hawking's radiation <cit.> (see for example <cit.>).
Furthermore, the Vaidya metric in the double-null coordinates (the mass function must be linear) <cit.> has been used to study the quasi-normal modes (QNM) as a model that supposedly will give deeper insights on the gravitational excitations of black holes (see for example <cit.>). Despite the fact that the majority of applications were structured with the Vaidya metric written in the Eddington-Finkelstein-Like (EFL) coordinates, these coordinates have been known for some time to be incomplete (see for example <cit.>), leaving the Vaidya manifold not maximally covered. Thus, to ensure the accuracy of all applications, it is required to construct a complete set of coordinates and thoroughly assess the impact of this set of coordinates. This is the primary objective of this paper. We organize this paper as follows. In the next section, we review the EFL coordinates and provide a proof of incompleteness of this set of coordinates, which is the main motivation for any subsequent coordinate representation. In Section <ref>, we review the use of Israel coordinates <cit.> to write the Vaidya metric <cit.>, and discuss why the derivation of these coordinates resulted in unsatisfactory results when attempting to obtain maximal coverings of the Vaidya manifold. The main results of this paper are outlined in Section <ref>, in which we introduce an algorithmic method to obtain Israel coordinates by direct integration of the field equations, without relying on any coordinate transformation. In Section <ref>, we present necessary physical restrictions that must be imposed on the flux of radiation. In Section <ref>, we provide a general derivation regarding the location of the apparent horizon in the Vaidya manifold. It is emphasized that the location of the apparent horizon is established before introducing any expressions to the characterizing functions. In Section <ref>, we demonstrate that our construction can be used to obtain both EFL and Israel coordinates by choosing different expressions for the functions that arise from integrating the field equations; such functions, as well as the coefficient of the cross term in the general metric that is presented, shall be referred to as the “characterizing functions". In Section <ref>, we briefly calculate some of the invariants of the Vaidya metric in Israel coordinates. The last section highlights the main results of the paper and discusses the possible extensions of the current work. § THE EFL COORDINATES The Vaidya metric, in the EFL coordinates, is a spherically symmetric solution to the Einstein field equations with the energy momentum tensor approximated in “the eikonal form" <cit.>, which expresses a unidirectional radial flow of unpolarized radiation, T_αβ = Φ k_αk_β= ϵ/4π r^2dm(u)/duk_αk_β, where ϵ = ± 1 and k_α = δ^u_α is tangent to radial inward or outward-going null geodesics. The spacetime line element in the EFL coordinates takes the form ds^2 = -(1-2m(u)/r)du^2+2ϵ dudr+r^2dΩ^2_2, where dΩ^2_2 = dθ^2+sin^2θ dϕ^2 is the metric of a unit 2-sphere. For ϵ = +1, the metric expresses inward-directed radiation (towards smaller values of the radius r) with a monotonically increasing m as a function of the “advanced time" coordinate u. If ϵ = -1, the metric is that of outgoing radiation (towards larger values of the radius r) with m being monotonically decreasing as a function of the “retarded time" coordinate u. However, it is conventional, as stated in <cit.>, to assign u as the retarded time and v as the advanced time. 
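The approach of radial null rays of (<ref>) to the surface r = 2m(∞) can be illustrated numerically. The following Python sketch — our illustrative addition, with a hypothetical mass function m(u) = 1 + 1/(1 + e^u) and arbitrary initial data — integrates the ingoing branch of the null condition for the outgoing (ϵ = -1) metric, du/dr = 2ϵ r/(r - 2m(u)) (derived in the next subsection), rewritten as dr/du. The retarded time u grows without bound while r only asymptotes to 2m(∞), anticipating the incompleteness argument below.

```python
import numpy as np

# Hypothetical, monotonically decreasing mass function with 0 < m(inf) < inf,
# chosen purely for illustration: m(-inf) = 2, m(+inf) = 1.
def m(u):
    return 1.0 + 1.0 / (1.0 + np.exp(u))

def drdu(u, r, eps=-1.0):
    """Ingoing branch of ds^2 = 0 for the EFL metric: dr/du = (r - 2m(u))/(2 eps r)."""
    return (r - 2.0 * m(u)) / (2.0 * eps * r)

# Fourth-order Runge-Kutta integration in the retarded time u.
u, r, du = -10.0, 6.0, 1e-3
while u < 60.0:
    k1 = drdu(u, r)
    k2 = drdu(u + du / 2, r + du * k1 / 2)
    k3 = drdu(u + du / 2, r + du * k2 / 2)
    k4 = drdu(u + du, r + du * k3)
    r += du * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    u += du
print(r, 2 * m(np.inf))   # r approaches 2 m(inf) = 2 while u grows without bound
```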
Furthermore, it is worthwhile to note that the quantity Φ, usually called the energy density of the radiation flux, does not have a direct operational meaning because the tangent null vector k_α does not have a natural normalization. Thus, it is preferable, see also <cit.>, to consider the following quantity: ρ = Φ (k_αu^α)^2, which defines the energy density as measured locally by an observer with a timelike 4-velocity u^α. §.§ Incompleteness of the EFL Coordinates In this subsection, we demonstrate why the EFL coordinates (u,r,θ,ϕ) do not provide a complete description of the Vaidya manifold. The incompleteness of these coordinates is the primary motivation for the search for new coordinates in which the manifold is complete, allowing radial null geodesics to continue to infinite values of their affine parameter or to be terminated upon encountering a gravitational singularity. The incompleteness of the coordinates (u,r,θ,ϕ) becomes evident when studying the behavior of the ingoing radial null geodesics, emanating from the past null infinity ^- or from the past singularity surface r=0, for the case 0<m(∞)<∞. It was suggested, but not proven, in <cit.> that the geodesics appear to approach the future event horizon (FEH) surface, r=2m(∞), as u →∞, though they actually reach it at finite values of their affine parameter, see Fig. <ref>. To support these insightful claims, we present a more articulated proof. We draw attention to the fact that, whereas Fig. <ref> is only valid for outgoing radiation, the forthcoming proof is valid for both ingoing and outgoing radiation. Let us consider the two branches of radial null curves, for which ds^2=0 and θ = ϕ = const. The first branch is given by u=const (red), and the second branch (blue) is given by the solution of the following ordinary differential equation [This differential equation is a special case of Chini's equation <cit.>, which does not have a general solution.], du/dr = 2ϵ r/(r-2m(u)). We assume the following to hold: 0 < m(±∞) < ∞. The question now arises as to whether the affine parameter λ remains finite as r → 2m(±∞) along the second branch. In order to answer this question we write the second branch (<ref>) as a system of 1^st order ODEs, ṙ = (r-2m(u))/λ, u̇ = 2ϵ r/λ, where an overdot indicates d/dλ, so that differentiation of the previous system with respect to λ produces the geodesic equations of (<ref>), r̈ = -4ϵ m^'(u)r/λ^2, ü = -4ϵ m(u)/λ^2, where use has been made of both (<ref>) and (<ref>). Now let us assume that λ→±∞ as r → 2m(±∞); then by virtue of (<ref>) and (<ref>) we obtain lim_λ→±∞u̇ = lim_λ→±∞ü = 0, which is not possible, as this changes the second geodesic branch into the first [Note that the first branch is characterized by u=const, which entails u̇ = ü = 0.]. Therefore, our assumption is wrong, and we conclude that λ along the second branch remains finite as r → 2m(±∞). If we write this value of λ as λ_0, we obtain lim_λ→λ_0 ṙ = 0, and lim_λ→λ_0 u̇ = 4ϵ m(±∞)/λ_0. Evidently, the last limit remains finite because the mass function m(±∞) is assumed finite from the beginning. By virtue of (<ref>), we conclude that the region r<2m(±∞) is inaccessible in the EFL coordinates. Therefore, an extension is necessary.
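The behavior established in this proof is easily illustrated numerically. The following minimal sketch (ours, not part of the original derivation) integrates the second null branch of an outgoing (ϵ = -1) Vaidya spacetime with an assumed illustrative mass function m(u) = 1 + e^{-u}, so that m(∞) = 1 and the FEH sits at r = 2, while tracking the affine parameter through dλ/du = λ/(2ϵ r), which follows from u̇ = 2ϵ r/λ:

```python
# Minimal sketch, assuming the toy mass function m(u) = 1 + exp(-u)
# (so m(inf) = 1 and the FEH sits at r = 2m(inf) = 2). The ingoing branch
# is integrated in u; lambda obeys dlambda/du = lambda/(2*eps*r).
import numpy as np
from scipy.integrate import solve_ivp

eps = -1.0                               # outgoing radiation, m'(u) < 0
m = lambda u: 1.0 + np.exp(-u)           # hypothetical smooth mass function

def rhs(u, y):
    r, lam = y
    return [(r - 2.0 * m(u)) / (2.0 * eps * r),   # dr/du, second branch
            lam / (2.0 * eps * r)]                # dlambda/du

sol = solve_ivp(rhs, (0.0, 60.0), [4.0, 1.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
for u in (0.0, 10.0, 30.0, 60.0):
    r, lam = sol.sol(u)
    print(f"u = {u:5.1f}   r - 2m(inf) = {r - 2.0:.3e}   lambda = {lam:.3e}")
```

The output shows r → 2m(∞) and λ converging to a finite limiting value (zero, in this normalization) while u grows without bound: the FEH lies at finite affine distance, yet is never reached at any finite u, in accordance with the proof above.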
§ ISRAEL COORDINATES In order to overcome the "incompleteness problem" of the EFL coordinates, Israel <cit.> introduced what he described as the analytic completion of the Vaidya manifold (<ref>). In Israel coordinates (u,w,θ,ϕ), the Vaidya line element reads ds^2 = (w^2/(2m(u)r(u,w)) + 4m^'(u)/U(u)) du^2 + 2dudw + r(u,w)^2dΩ^2_2, where U(u) = ∫_0^u dx/(4m(x)), r(u,w) = U(u)w + 2m(u), and the function m(u) is always positive. Notice that (<ref>) suffers from a true singularity at r(u,w) = 0, see (<ref>), and at u=0, if m'(u) does not vanish there, as explained below. To avoid any possible confusion about what is to be said, let us label the EFL retarded coordinate, u, as t. This then shows that (<ref>) reduces to the outgoing Vaidya metric, (<ref>) with u=t and ϵ=-1, by the transformation t(u) = -∫_0^u dx/U(x), regular for u>0, t<∞. Apart from the cumbersome nature of Israel coordinates, the Vaidya metric in Israel coordinates (<ref>) does not adequately represent both the internal and external fields as long as the mass function m(u) is only defined for u ≥ 0. Since u=0 corresponds to t=+∞ (t(u) ∝ -log U(u)), it is impossible to extend the line element to the range u<0 via a coordinate transformation, as it would require knowledge of the mass function m(t>∞), i.e., beyond the FEH. Hence, we believe that the "maximal" extension of the Vaidya manifold, as given by the line element (<ref>), is imprecise. It is worth noting that there was an attempt <cit.> to extend the Vaidya metric in terms of Israel coordinates. However, this approach faced the same problems as the original Israel extension: reliance on coordinate transformations and the need to know the mass function m(u) beyond the FEH in advance. It is also worth noting that, although Israel coordinates have obvious advantages over the EFL coordinates, the Vaidya metric in Israel coordinates has not gained much attention. To our knowledge, the metric has only been used once (see <cit.>) to study the complete gravitational collapse of a radiating shell of matter. Prior to the attempt given in <cit.>, the work done to investigate gravitational collapse in the presence of radiation was incomplete; that is, the collapse was not followed beyond the event horizon because the Vaidya manifold in the EFL coordinates only describes the external field around a collapsing radiating object. § GENERAL COORDINATE CONSTRUCTION Consider the following general spherically symmetric metric expressed in the coordinates (u,w,θ,ϕ) <cit.>, ds^2 = f(u,w) du^2 + 2h(u,w) du dw + r(u,w)^2dΩ^2_2, where r(u,w) measures the area of the 2-sphere u=w=const. The energy momentum tensor is once more taken to be of the eikonal form, T^αβ = Φ k^αk^β, where k^α = δ^α_w is a radial null vector and the quantity Φ(k^αu_α)^2 is the energy flux measured by an observer with tangent u_α. Straightforward calculations <cit.> show that the only non-zero component of the Einstein tensor is G^ww, from which Φ can be directly obtained. If we take the radial null trajectories with four-tangent k^α to be radial null geodesics affinely parametrized by w, i.e., k^β∇_βk^α = 0, this yields ∂ h(u,w)/∂ w = 0. Thus, the function h(u,w) reduces to a function of u only, h(u,w) ≡ h(u). While we will limit ourselves to the choice h(u) = ±1, we will keep the function as is for potential future use. §.§ Solving the Einstein Field Equations First [This approach of solving the field equations was first introduced in <cit.> to express the Schwarzschild-de Sitter vacuum metric in Israel coordinates, and was later utilized in <cit.> to obtain the Vaidya metric in the same set of coordinates.], we benefit from the vanishing of the G^uu component to obtain ∂^2 r(u,w)/∂ w^2 = 0.
This leads, by integration, to a general expression [We also note that this expression can be deduced by assuming that (<ref>) has a vanishing second Ricci invariant <cit.>. This result is particularly important because it is directly obtained from the geometry of the spacetime before considering the matter content.] for r(u,w): r(u,w) = f_1(u)w + f_2(u). In the sequel all the functions f_n(u) are assumed suitably smooth [All the functions are assumed to be at least C^2.]. Second, by solving G^θθ = 0, with the aid of (<ref>), we obtain r(u,w) ∂^2 f(u,w)/∂ w^2 + 2f_1(u) ∂ f(u,w)/∂ w - 4h(u) df_1(u)/du = 0. Integrating (<ref>) yields f(u,w) = (2f_1^'(u)h(u)f_2(u)^2 - f_1(u)f_3(u))/(f_1(u)^2 r(u,w)) + 2f_1^'(u)h(u)w/f_1(u) + f_4(u), where (') denotes ordinary differentiation with respect to the coordinate u. By solving G^uw = 0, we find that f_4(u) is given by f_4(u) = h(u)(2f_1(u)f_2^'(u) - h(u))/f_1(u)^2, where use has been made of (<ref>) and (<ref>). By virtue of (<ref>), (<ref>), and (<ref>), the only non-zero component of the Einstein tensor can be given as G^ww = (1/χ(u,w))(2h(u)^2f_2(u)^2f_1^''(u) + 4h(u)^2f_2(u)f_1^'(u)f_2^'(u) - h(u)f_3(u)f_1^'(u) - 2h(u)f_2(u)^2h^'(u)f_1^'(u) - h(u)f_1(u)f_3^'(u) + 2f_1(u)f_3(u)h^'(u)), where χ(u,w) = h(u)^4f_1(u)r(u,w)^2. G^ww is conveniently expressed in the following way. First define the Hernandez-Misner mass <cit.>, m ≡ (r(u,w)^3/2) R_θϕ^θϕ, where R is the Riemann tensor. By calculating R_θϕ^θϕ for (<ref>) and making the necessary simplifications, (<ref>) can be given in terms of the characterizing functions f_n(u) as m = m(u) = (2h(u)f_2(u)^2f_1^'(u) - f_1(u)f_3(u))/(2h(u)^2), where the mass function must always remain positive-valued over its domain. As a result, G^ww can be expressed in the more succinct form G^ww = 2m^'(u)/(h(u)f_1(u)r(u,w)^2) = 8πΦ. Similarly, a more convenient expression for the function f(u,w) can be obtained with the aid of (<ref>), (<ref>), (<ref>), and (<ref>): f(u,w) = (𝒜(u)r(u,w)^2 + ℬ(u)r(u,w) + 𝒞(u))/(f_1(u)^2 r(u,w)), where 𝒜(u) = 2h(u)f_1^'(u), ℬ(u) = 2h(u)f_1(u)f_2^'(u) - 2h(u)f_2(u)f_1^'(u) - h(u)^2, 𝒞(u) = 2h(u)^2m(u). § PHYSICAL RESTRICTIONS ON THE CHOICE OF THE CHARACTERIZING FUNCTIONS The first restriction that we impose, using (<ref>), is given by the following inequality: 2h(u)f_2(u)^2f_1^'(u) > f_1(u)f_3(u). This is necessary to ensure that the mass function, m(u), is always positive. The second restriction is that the measured radiation flux is a positive quantity, Φ(k^αu_α)^2 > 0. Substituting (<ref>) in (<ref>) and simplifying, we obtain m^'(u)/(h(u)f_1(u)) > 0, which dictates that the signs of m^'(u) and h(u)f_1(u) have to be identical. As our attention is confined to classical matter fields (radiation), a minimum requirement is that this matter distribution must satisfy the Weak Energy Condition (WEC). This requirement implies, with the aid of (<ref>), the following stipulations on the different forms of radiation, summarized in Table <ref>. Table <ref> clearly illustrates that both ingoing and outgoing radiation can be obtained without changing the sign of the function h(u). However, as will be seen shortly, the direction of radiation in the EFL coordinates is dictated by the sign of the function h(u). § THE APPARENT HORIZON AND THE EVENT HORIZON We begin this section by providing a general derivation of the location of the apparent horizon of (<ref>).
To this end, let us examine the congruence of radial null trajectories characterized by the four-tangent ℓ^α = δ^α_u - (f(u,w)/(2h(u)))δ^α_w. This congruence does not satisfy the geodesic equation in affine-parameter form, as is evident from the equations ℓ^α∇_αℓ^u = κℓ^u and ℓ^α∇_αℓ^w = κℓ^w, where κ = κ(u,w) is called the inaffinity. The geodesic equations are: ℓ^α∇_αℓ^u = ((2 dh(u)/du - ∂ f(u,w)/∂ w)/(2h(u)))(1) = κℓ^u, and ℓ^α∇_αℓ^w = ((2 dh(u)/du - ∂ f(u,w)/∂ w)/(2h(u)))(-f(u,w)/(2h(u))) = κℓ^w, with the inaffinity κ given by κ = (2 dh(u)/du - ∂ f(u,w)/∂ w)/(2h(u)). The associated expansion scalar Θ^(ℓ) of this non-affinely parametrized congruence of radial null geodesics, see <cit.> for the definition of the expansion in this case, is given by Θ^(ℓ) = ∇_αℓ^α - κ = -(r(u,w) ∂ f(u,w)/∂ w - 2r(u,w) dh(u)/du)/(2h(u)r(u,w)) - (2f(u,w) ∂ r(u,w)/∂ w - 4h(u) ∂ r(u,w)/∂ u)/(2h(u)r(u,w)) - κ = -(1/(h(u)r(u,w)))(f(u,w) ∂ r(u,w)/∂ w - 2h(u) ∂ r(u,w)/∂ u). The apparent horizon is characterized by Θ^(ℓ) = 0, and thus by virtue of (<ref>) we obtain the following condition: 2h(u) ∂ r(u,w)/∂ u = f(u,w) ∂ r(u,w)/∂ w. We substitute (<ref>) in (<ref>), which yields 2h(u)(f_1^'(u)w + f_2^'(u)) = f(u,w)f_1(u). With the aid of (<ref>), the previous equation takes the form 0 = 2f_1^'(u)r(u,w)^2 + 2h(u)m(u) - (2wf_1(u)f_1^'(u) + 2f_2(u)f_1^'(u) + h(u))r(u,w). We can use (<ref>) once more to reduce the last equation to -h(u)(r(u,w) - 2m(u)) = 0, which immediately gives the sought-after result: r(u,w) = 2m(u). It is thus established that the apparent horizon is located at r = 2m(u). We also note that this result is established before making any choices for the characterizing functions f_n(u). Determining the location of the event horizon in the Vaidya metric is not as straightforward as locating the apparent horizon. In fact, the entire future history of the metric, as specified by the functions f(u,w) and h(u), must be predetermined in order to identify the null generators of the event horizon <cit.>. However, we may generically define the future (past) event horizon as a causal boundary for the timelike geodesics terminating at future (past) timelike infinity, i^+ (i^-) [For the definitions of these infinities we refer to <cit.>.]. § SPECIFIC COORDINATE REPRESENTATIONS OF THE VAIDYA METRIC In this section, we demonstrate that we can obtain various coordinate representations of the Vaidya metric by selecting different expressions for the characterizing functions h(u) and f_n(u). Additionally, we emphasize that the meaning of the coordinate u depends on the choice of the characterizing functions, and thus the coordinate u in the EFL coordinates has a different interpretation from that in Israel coordinates. §.§ The Vaidya Metric in the EFL Coordinates Let us choose the characterizing functions such that h(u) = ±1, f_1(u) = 1, and f_2(u) = 0; then we obtain w = r with the help of (<ref>). Furthermore, we get f_3(u) = -2m(u) from (<ref>). Substituting these values in (<ref>) yields f(u,r) = -(r - 2m(u))/r, and thus the metric (<ref>) becomes ds^2 = -(1 - 2m(u)/r)du^2 ± 2dudr + r^2dΩ_2^2, with G^ww = ±2m^'(u)/r^2. It is clear that, with the help of Table <ref>, we can obtain h(u) = -1 for the outgoing radiation version of the Vaidya metric, where the coordinate u is a retarded time. Similarly, selecting h(u) = +1 yields the ingoing radiation version of the Vaidya metric, with u as an advanced time.
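Both the integrated form of f(u,w) and the horizon condition above lend themselves to a quick symbolic verification. The following sketch (ours; it assumes only sympy and the definitions of this section) checks that f(u,w) of (<ref>) solves the G^θθ equation and that the expansion condition 2h ∂_u r = f ∂_w r reduces identically to r = 2m(u):

```python
# Symbolic consistency check of the field-equation integration and of the
# apparent-horizon location; a sketch, not part of the original derivation.
import sympy as sp

u, w = sp.symbols('u w')
h = sp.Function('h')(u)
f1, f2, f3 = (sp.Function(n)(u) for n in ('f1', 'f2', 'f3'))

r = f1 * w + f2                                  # r(u,w) = f1(u) w + f2(u)
f4 = h * (2 * f1 * sp.diff(f2, u) - h) / f1**2
f = ((2 * sp.diff(f1, u) * h * f2**2 - f1 * f3) / (f1**2 * r)
     + 2 * sp.diff(f1, u) * h * w / f1 + f4)
m = (2 * h * f2**2 * sp.diff(f1, u) - f1 * f3) / (2 * h**2)

# G^theta-theta equation: r f_ww + 2 f1 f_w - 4 h f1' = 0
eq_theta = r * sp.diff(f, w, 2) + 2 * f1 * sp.diff(f, w) - 4 * h * sp.diff(f1, u)
print(sp.simplify(eq_theta))                     # -> 0

# Horizon condition 2h r_u - f r_w equals h^2 (r - 2m) / (f1 r),
# so the expansion vanishes exactly at r = 2m(u)
horizon = 2 * h * sp.diff(r, u) - f * sp.diff(r, w)
print(sp.simplify(horizon - h**2 * (r - 2 * m) / (f1 * r)))   # -> 0
```

Both printed expressions simplify to zero, confirming (<ref>) and the location r = 2m(u) of the apparent horizon independently of the choice of the f_n(u).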
§.§ The Vaidya Metric in Israel Coordinates In this subsection, we explore how, by introducing different choices for the functions f_n(u), we obtain Israel coordinates. Let us consider the following choices: f_1(u) = U(u), f_2(u) = 2M(u), and f_3(u) = 0. It follows from (<ref>) that for M(u)=m(u) (which is a choice), U^'(u) = h(u)/(4m(u)). Thus, with the aid of the first fundamental theorem of calculus we write U(u) = ∫_0^u h(x)/(4m(x)) dx. However, since our choices for the function h(u) will be confined to either +1 or -1, we set h(u)=h=±1. Consequently, expression (<ref>) takes the form U(u) = h∫_0^u dx/(4m(x)). It follows that the spacetime line element (<ref>) can be written as ds^2 = (w^2/(2m(u)r) + 4hm^'(u)/U(u)) du^2 + 2hdudw + r^2dΩ^2_2, where r is no longer a coordinate; it is now a function, r = r(u,w) = U(u)w + 2m(u), and G^ww = 2hm^'(u)/(U(u)r(u,w)^2). Here, u is a null coordinate and (<ref>) describes both outgoing and ingoing radiation. It is interesting to note that the presence of h is not necessary for (<ref>), as demonstrated in <cit.>, particularly when m^'(u)=0. It is noteworthy that, in accordance with (<ref>), the apparent horizon is now located at w=0. There is some ambiguity regarding the sign of u appearing in the definition of the function U(u) (<ref>); for example, in <cit.> u is always positive, whereas in <cit.> u can be either positive or negative. We shall resolve this ambiguity and demonstrate when u can be negative or positive. To this end, recall that U^'(u) = h/(4m(u)), which means that the sign of U^'(u) is solely determined by the sign of h. Also, with the aid of the WEC, (<ref>), and (<ref>), we have m^'(u)/(hU(u)) = m^'(u)/∫_0^u dx/(4m(x)) > 0, where in the last equation we have taken h^2 = 1. Hence, for m^'(u)>0 the integral must be positive (u in the integral must be positive) and for m^'(u)<0 the integral has to be negative (u in the integral must be negative). Consequently, we have seen that the sign of u in the integral is not always positive as in <cit.>, and the dichotomy in the function U(u) based on the sign of u is explained in a more articulated way. We have summarized all the choices we have considered thus far in Table <ref>. Finally, we introduce a restriction on the w coordinate corresponding to the surface r(u,w) = 0, the physical singularity, see below. Since r(u,w) = U(u)w + 2m(u), for r(u,w) = 0 we obtain w = -2m(u)/U(u) ≡ w_0(u), and so w_0 > 0 for U(u)<0 and w_0 < 0 for U(u)>0. It turns out that this is exactly the case when we study the radial null geodesics in the proposed maximal extensions of the Vaidya metric <cit.>. § INVARIANTS Up to syzygies <cit.>, we find that the only non-differential non-vanishing invariant of (<ref>) is the first Weyl invariant, w1R ≡ (1/8)C_αβγδC^αβγδ = (3/(2h(u)^4r(u,w)^6))(f_1(u)f_3(u) - 2h(u)f_1^'(u)f_2(u)^2)^2, which reduces to the following expression in Israel coordinates: w1R ≡ (1/8)C_αβγδC^αβγδ = 6m(u)^2/r(u,w)^6, where C_αβγδ is the Weyl tensor. However, as (<ref>) makes clear, it would be informative to have invariant information for m^'(u). This is obtained by way of the Bach tensor <cit.>, see also <cit.>. First define A_αβδ = ∇^γC_αγβδ, where ∇^γ denotes the contravariant derivative. The Bach tensor is given by B_αβ = ∇^δA_αβδ + R^γδC_αγβδ/2. Since the Bach tensor is trace-free, the first Bach invariant is B ≡ B_αβB^αβ. In the present case we find, with the aid of (<ref>), that B = (4U(u)m^'(u)/r(u,w)^4)^2.
Nevertheless, the preceding result does not provide the desired invariant definition of m'(u), due to its dependence on the functions r(u,w) and U(u). § SUMMARY AND DISCUSSION We have examined the construction of Israel coordinates for the Vaidya metric and have reduced the problem to finding appropriate expressions for the characterizing functions that arise from integrating the field equations. This construction is systematic and does not necessitate any coordinate transformation, which gives us the opportunity to identify potential extensions of the Vaidya manifold by introducing distinct expressions for the characterizing functions f_n(u). Nonetheless, the main focus of this paper is to reconstruct Israel coordinates for the Vaidya metric. By utilizing the WEC, we have understood the role of the function h(u) in the Vaidya metric. Although the sign of h(u) is paramount in determining the direction of radiation in the EFL coordinates, we have demonstrated that this is not the case for Israel coordinates. That is, both ingoing and outgoing radiation can be achieved with h=+1 or h=-1. However, the impact of changing the sign of the function h(u) will be further investigated when we discuss the completeness of Israel coordinates in <cit.>. The next step, see <cit.>, is to introduce explicit mass functions as candidates for the three possible Vaidya models and assess the completeness of Israel coordinates in relation to these mass functions. § ACKNOWLEDGEMENT This work was supported (in part) by a grant from the Natural Sciences and Engineering Research Council of Canada (to KL).
Quantum Chemical and Kinetic Study of the CO + OH -> CO2 + H Reaction. Department of Astronomy, Graduate School of Science, The University of Tokyo, Tokyo 113 0033, Japan [email protected] Leiden Institute of Chemistry, Gorlaeus Laboratories, Leiden University, PO Box 9502, 2300 RA Leiden, The Netherlands [email protected] CO2 is one of the dominant components of the interstellar ice. Recent observations show that CO2 exists more abundantly in polar (H2O-dominated) ice than in apolar (H2O-poor) ice. CO2 ice formation is primarily attributed to the reaction between CO and OH, which has a barrier. We investigate the title reaction in H2O ice and CO ice to quantify the efficiency of the reaction in polar ice and apolar ice. Highly accurate quantum chemical calculations were employed to analyze the stationary points of the potential energy surfaces of the title reaction in the gas phase and on H2O and CO clusters. Microcanonical transition state theory was used as a diagnostic tool for the efficiency of the reaction under ISM conditions. We simulate the kinetics of ice chemistry, considering different scenarios involving non-thermal processes and energy dissipation. The CO + OH reaction proceeds through the remarkably stable intermediate HOCO radical. On the H2O cluster, the formation of this intermediate is efficient, but the subsequent reaction leading to CO2 formation is not. Conversely, HOCO formation on the CO cluster is inefficient without external energy input. Thus, CO2 ice cannot be formed by the title reaction alone either on the H2O cluster or on the CO cluster. In polar ice, CO2 ice formation is possible via CO + OH -> HOCO, followed by HOCO + H -> CO2 + H2, as demonstrated by an abundant experimental literature. In apolar ice, CO2 formation is less efficient because HOCO formation requires external energy. Our finding is consistent with the JWST observations. Further experimental work using low-temperature OH radicals is encouraged. Cracking the Puzzle of CO2 Formation on Interstellar Ices G. Molpeceres 1 J. Enrique-Romero 2 Y. Aikawa 1 Received August 12, 2023; accepted August 12, 2023 ================================================================================================================ § INTRODUCTION In the cold molecular clouds of the interstellar medium (ISM), a significant fraction of the molecules is contained in the solid phase in the form of ice. While most of the molecules present in the ISM have been detected in the gas phase using radio telescopes through their rotational transitions, the direct observation of ices requires studying their vibrational transitions, which are commonly affected by telluric contamination. In this context, space telescopes, such as Spitzer or, more recently, JWST, are essential. Ice observations <cit.> reveal the presence of several components such as H2O, CO, CH3OH, and the object of this study, CO2. The abundance of these species, as well as their speciation in the ice or their presence in specific regions of the ISM, can only be explained by considering their formation routes and the chemical conditions necessary for their appearance. The different components of interstellar ice may be formed in situ on the surface of refractory material. Such is the case of H2O, which is formed from the hydrogenation of atomic oxygen <cit.>, or of CH3OH, which is formed from the hydrogenation of CO <cit.>. Other significant components, like CO, are primarily synthesized in the gas phase and accrete onto the grain under extremely cold and dense conditions.
Interstellar carbon dioxide, CO2, is thought to form via reactions on the surface (see, e.g., <cit.>). The postulated reactions contributing to CO2 formation are: CO + OH -> CO2 + H HCO + O -> CO2 + H CO + O -> CO2 Of these three reactions, Reaction <ref> has an energy barrier when atomic oxygen is in its ground state, (^3P)O <cit.>. Reaction <ref> is barrierless, and Reaction <ref>, the reaction whose study we tackle in this paper, is assumed to have a minimal activation energy (∼100 K; <cit.>). The assumption of a tiny activation energy for the CO + OH -> CO2 + H reaction is supported by a plethora of surface chemistry experiments <cit.>. These experiments vary in several factors, including the formation route of the OH radical, either by hydrogenation of O2 <cit.>, dissociation of H2O molecules before deposition on the ice <cit.>, or direct photodissociation of H2O ice molecules <cit.>. Other variations between experiments include the substrate under consideration: amorphous silicates <cit.>, CO <cit.>, matrix isolation <cit.> or H2O <cit.>. On the modelling side, <cit.> built on the experimental knowledge and coarse-grained it into a combination of a direct formation route, CO + OH -> CO2 + H, operating at T≥12 K, coinciding with the onset of CO diffusion on H2O, and an indirect three-body route on CO ices that relies on the formation of a kinetically excited OH radical, O + H -> OH^*, which subsequently partakes in the CO + OH^* reaction. The latter route on CO ices makes it possible to explain the CO2 bands in non-polar media observed in infrared observations of ices <cit.>. In summary, there is ample evidence for Reaction <ref> being efficient on dust grains. However, the same reaction in the gas phase is relatively slow, with rate constants as low as ∼2x10^-13 cm^3 molecule^-1 s^-1 at 300 K <cit.>. The title reaction in the gas phase has also been the object of extensive theoretical attention. It has been simulated using both semi-classical and quantum dynamics on highly accurate potential energy surfaces (PES) <cit.>. It was also studied in the presence of other CO2 molecules <cit.>. These theoretical works find rate constants even lower than the values reported in <cit.>. The different reactivity on surfaces and in the gas phase is puzzling and counterintuitive. In both phases, the reaction is acknowledged to proceed through the highly stable HOCO radical. The evolution from this radical is the primary source of uncertainty because of the high activation energies required to form the bimolecular CO2 + H products. In the gas, where a third body to stabilize HOCO is unavailable, the reaction is more likely to occur owing to the energy redistribution into the few vibrational degrees of freedom, ultimately leading to an irreversible reaction. On the surface, the ice molecules dissipate a significant fraction of this energy, ideally leading to the thermalization of HOCO, hence slowing or impeding the formation of CO2. This was proved by <cit.>, initiating the conundrum we tackle in this work, one that has also been debated from different angles <cit.>. If the reaction is slow in the gas, it should not proceed on the ice, where little energy is left for the reaction after dissipation into the ice. Hence, how is the mismatch between gas and solid phase experiments possible? In this article, we aim to shed light on this particular issue.
The two main possibilities to explain the disagreement are, in the first place, the operation of an external energy input, either chemical, from the O2 + H or O + H reactions required to form the OH radical, or the excess energy used to photodissociate H2O. Secondly, free H atoms from the experiment may promote H abstraction reactions, HOCO + H -> CO2 + H2. While these two possibilities are often assumed when interpreting the experimental results, it is fundamental to distinguish which is dominant, if any, to establish under which conditions the laboratory measurements apply to the ISM. Determining the factors contributing to the reaction yield in the experiments is complicated because the detection techniques are suited for identifying only the final products. Quantum chemical calculations are instrumental here and provide an atomistic perspective on the different elementary processes relevant to the reaction. In this work, we simulate the title reaction on two different model ices, H2O and CO, and perform kinetic simulations using a microcanonical formalism to determine the importance of non-thermal effects in the reaction, including dissipation over different numbers of molecules, and complete the picture left by the different experimental studies. The paper is structured as follows. In <Ref>, we describe the employed computational methodology. In <Ref> we present the structural models for the ices (<Ref>), the PES for the reactions on each of the surfaces (<Ref> and <Ref>) and the associated kinetic analysis (<Ref>). <Ref> is dedicated to interpreting our results from an astrophysical point of view, contextualizing the preceding experiments. We finally summarize our main findings in <Ref>. § METHODOLOGY §.§ Quantum chemical calculations The stationary points on the PES were characterized using density functional theory (DFT) calculations on model clusters mimicking H2O and CO ices. Because this work aims to determine the impact of energy redistribution on the formation of CO2 on ice, we need to use sufficiently large structural models to allow for (ergodic) energy equipartition. In a preceding calculation, <cit.> used a cluster containing 33 H2O molecules and discussed the suitability of a model of this size, indicating that it should describe energy dissipation well. This was later confirmed with dedicated studies using ab-initio molecular dynamics simulations <cit.>. Therefore, in this study, we use the same 33 H2O cluster to simulate the H2O ice <cit.>, and we constructed a 33 CO cluster to simulate the CO ice. To construct such a cluster, we used Packmol <cit.> in an 8 Å radius sphere, ensuring that every molecule is at a minimum initial distance of 3 Å from each other (a sketch of such an input is given below). This initial cluster is later refined at the level of theory described below. The geometries of the initial clusters were optimized at the MN15-D3BJ/6-31+G(d,p) level of theory <cit.>, with parameters for the D3BJ dispersion correction taken from <cit.>. The DFT calculations and optimizations utilize the Gaussian16 (rev.C.01) suite of programs <cit.>. We later place the CO and OH admolecules on the clusters sequentially, first occupying a binding site for the CO molecule and later for OH. Once the two admolecules are located on the clusters, we followed the gas-phase reaction mechanism presented in <cit.> for both clusters, except for an alternative exit path on CO ice (<Ref>). Additional differences between the gas-phase and surface-like profiles are highlighted in <Ref>.
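For reproducibility, a cluster-generation step of the kind described above might look as follows; this is a sketch under our own assumptions (the file names co.xyz and co33_initial.xyz and the sphere centre are illustrative, not taken from the original work):

```python
# Sketch: write a Packmol input for a 33 CO cluster inside an 8 Angstrom
# radius sphere with a 3 Angstrom pairwise tolerance, then run Packmol.
packmol_input = """\
tolerance 3.0
filetype xyz
output co33_initial.xyz

structure co.xyz
  number 33
  inside sphere 0.0 0.0 0.0 8.0
end structure
"""

with open("co33.inp", "w") as fh:
    fh.write(packmol_input)

# The packing would then be executed with:  packmol < co33.inp
# and the resulting geometry relaxed at the DFT level quoted above.
```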
After locating every stationary point, we confirmed them as either true minima or first-order saddle points, i.e., transition states (TS), on the PES by computing the molecular Hessian of the system. The electronic energies of the stationary points on the PES were further refined using the domain-based local pair-natural orbital coupled cluster singles and doubles with a perturbative treatment of triple excitations, DLPNO-CCSD(T) <cit.>, with a two-point complete basis set (CBS) extrapolation to the basis-set limit employing the cc-pVDZ and cc-pVTZ basis sets <cit.>. The internal options for the PNO localization scheme were set to normal, and resolution of the identity (RI) techniques were used to evaluate exchange and Coulomb integrals (RIJK) using a cc-pVTZ/JK auxiliary basis set. We apply the frozen-core approximation in the correlated calculations. The ORCA (v.5.0.4) code was used for the DLPNO-CCSD(T)/CBS calculations <cit.>. In addition to the cluster calculations, we also carried out gas-phase calculations at the same level of theory for comparative purposes, which are indicated throughout the paper in square brackets. Finally, we assessed the quality of our theoretical method of choice by comparing our gas-phase results with those of <cit.>, finding excellent agreement for all the relevant parts of the PES. These results are presented in <Ref>. It is worth noting here that our theoretical method does not predict the correct energetics for the high-energy intermediate HCO2. This intermediate is not relevant to the kinetics of the system because its formation requires surmounting a barrier that emerges ∼8-9 kcal mol^-1 above the bimolecular OH + CO asymptote (38-39 kcal mol^-1 from the HOCO potential well) <cit.>. Moreover, we could not find this intermediate in the simulations on the H2O cluster. We, therefore, skip the search for this intermediate in all cluster calculations. Nonetheless, we discuss the origin of this disagreement in <Ref>. §.§ Kinetic Analysis We employed the microcanonical flavour of transition state theory, Rice-Ramsperger-Kassel-Marcus (RRKM) theory, to compute the energy-dependent rate constants k(E) for the transitions between reaction wells, given by: k(E) = N^‡(E - E_0)/(hρ(E)), where h is Planck's constant, N^‡(E - E_0) is the sum of states of the transition state, counted from the transition state energy E_0 up to E, and ρ(E) is the density of states of the reactant at energy E. In addition, the sum of states contains tunnelling corrections, for which the non-symmetric Eckart potential model was employed <cit.>. We did not include rotational symmetry factors in our calculations due to the symmetry breaking induced by the amorphous surface. The rigid-rotor harmonic oscillator model is used throughout the kinetic calculations. The application of RRKM to interstellar reactions is discussed in <cit.> and used or implied in several other works <cit.>.
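To make the working equation concrete, the following self-contained sketch evaluates k(E) for a toy unimolecular step by direct Beyer-Swinehart counting of harmonic states. The frequencies and barrier are invented placeholders, not the HOCO values, and no tunnelling correction is included:

```python
# Toy RRKM evaluation, k(E) = N(E - E0) / (h * rho(E)), with assumed
# (non-HOCO) frequencies. Energies in cm^-1; k = c[cm/s] * N / rho is in s^-1.
import numpy as np

C_CM = 2.9979e10      # speed of light in cm s^-1 (converts cm^-1 counts to s^-1)
GRAIN = 10            # energy grain, cm^-1
NBIN = 4000           # grid extends to 40000 cm^-1

def bs_states(freqs):
    """Beyer-Swinehart direct count of harmonic states per energy grain."""
    t = np.zeros(NBIN)
    t[0] = 1.0
    for f in freqs:
        step = max(1, round(f / GRAIN))
        for i in range(step, NBIN):
            t[i] += t[i - step]
    return t

freqs_well = [100, 250, 400, 800, 1200, 1900, 3600]  # assumed well modes, cm^-1
freqs_ts = [120, 300, 500, 900, 1500, 2100]          # one mode lost: reaction coordinate
E0 = 10000                                           # assumed barrier, cm^-1 (~28.6 kcal/mol)

rho = bs_states(freqs_well) / GRAIN                  # density of states, per cm^-1
n_ts = np.cumsum(bs_states(freqs_ts))                # sum of states of the TS
shift = E0 // GRAIN

for e in (12000, 20000, 35000):                      # energies above the barrier
    i = e // GRAIN
    k = C_CM * n_ts[i - shift] / rho[i]
    print(f"E = {e:6d} cm^-1   k(E) ~ {k:.3e} s^-1")
```

The steep growth of k(E) with E that this toy model displays mirrors the behaviour discussed below for the entrance and exit channels; the production calculations in this work use the actual DLPNO-CCSD(T)/CBS data.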
As will be explained later on (<Ref>), the title reaction occurs strictly non-thermally at 10 K. Hence we base our analysis on k(E) for the entrance CO + OH -> t-HOCO/c-HOCO and exit channels: c-HOCO -> CO2 + H (and, alternatively, c-HOCO/t-HOCO + CO -> CO2 + HCO, <Ref>). We provide k(E) considering several energy dissipation scenarios, each with a different number of molecules, n, over which instantaneous energy dissipation is allowed. We studied n=16, 10, 5, and 0 (CO/H2O) molecules. In the latter (n=0), energy redistribution occurs only within the CO + OH system. We carried out this study by projecting out the molecular Hessian matrix elements of the m molecules (where m = 33 - n) farther from the t-HOCO minimum, the global minimum of our study. The microcanonical rate constants obtained in this study are calculated with the MESS code <cit.>. We note that the sizes of the clusters (see Figure <ref>) and the highest number of dissipating water molecules are sufficient according to previous studies, e.g., <cit.>. Although no specific studies have addressed this issue for CO ice, we have made the reasonable assumption that the same holds true. It is worth highlighting again that we considered different numbers of dissipating CO ice molecules. § RESULTS §.§ Cluster model The fully optimized H2O and CO clusters mimicking ice surfaces are presented in <Ref>. While the CO ice model has a more spherical and compact shape with dimensions 10×12×13 Å, the water one is slightly more elongated, 15×9×10.5 Å. The latter hosts a cavity, where the CO + OH -> CO2 + H reaction is simulated. On the contrary, the more compact CO cluster does not have any clearly deeper binding site; hence the reaction site was chosen at random. The binding energies of the reactants and reaction intermediates on the surfaces are presented in <Ref>. These were calculated as the energy difference between the complexes containing the surface and the admolecule and the sum of the isolated fragments, including ZPVE. In the H2O cluster cavity, we find a binding energy for CO of 4.64 kcal mol^-1, higher than the values reported by <cit.> (≤3.71 kcal mol^-1). This indicates that our cavity is a particularly deep binding site with a maximized number of neighbouring water molecules. For the OH radical, on the contrary, the cavity binding site yields a lower-than-average binding energy (6.45 kcal mol^-1) compared with other reported values, e.g., 10.33 kcal mol^-1 <cit.> and 10.6 kcal mol^-1 <cit.>. The observed differences arise from the specific structure of our cavity, where the number of dangling H-bonds is saturated, and from the binding mode of OH, whose acceptor/donor H-bonds are about 0.1 Å shorter than in the cavity case reported by <cit.>. On the CO cluster, the CO/CO binding energy corresponds to the lower bound of the values presented in <cit.>, while values for OH/CO are unreported. We note that the dual-level error introduced by our calculations is relevant for determining binding energies for CO/CO due to the mismatch of geometries arising from the weak CO-CO interaction in the ice <cit.>. In the subsequent reactivity studies, the relative magnitude of this error is diminished because energy differences between reaction steps are much higher than the CO-CO interaction energy. For the reactivity studies, we keep the CO binding site determined above, while the OH radical is placed on a different binding site. We justify this choice based on two arguments. First, when both adsorbates are thermalized, the higher interstellar abundance of CO makes it more likely to be located in deep binding sites, such as the cavity formed in the H2O cluster. Second, in <Ref>, we investigate the effect of a translationally excited OH radical colliding with a pre-adsorbed CO. §.§ Potential energy surface construction All the energy diagrams are referenced to the asymptotes, i.e., to the sum of energies of the surface, the reacting CO and the reacting OH radical. We will refer to this as the bimolecular system, and for the sake of simplicity it will be denoted as CO + OH, regardless of the ice surface.
This was done for the sake of clarity, as it makes much clearer what the influence of the substrate is in stabilizing the reactants, as well as its catalytic effect on the barriers. §.§.§ H2O ice We include two pre-reactant complexes following the literature <cit.>. First, a pre-reactant complex with large dihedral ∠HOCO angles, PRC, which leads to the formation of the t-HOCO intermediate. Second, a pre-reactant complex with a near 0° dihedral angle, PRC', which forms the c-HOCO intermediate (PRC' was not found on CO ice, as discussed in <Ref>). The transition states that connect the PRCs with the reaction wells are named TS1 and TS1', respectively, and the transition state connecting these two wells is TS2. Finally, the transition state leading to CO2 + H from c-HOCO is named TS4. The reason for not naming it TS3 is that the TS3 label (specifically TS3') is reserved for the exit transition state from t-HOCO, a stationary point we do not find on water ice. The stationary points on the reaction profile are gathered in <Ref>. The reaction profile has, for the most part, the same shape as in the gas phase, with two notable exceptions. The first concerns the absence of the HCO2 intermediate, as already discussed in <Ref>. The second is the inversion in energy between PRC and PRC'. This inversion appears following the formation of a HO–H2O hydrogen bond that locks the PRC' geometry in the binding site contiguous to the CO binding site. Snapshots of the stationary points are collated in <Ref>, where this effect can be visualized. The higher stabilization of PRC' also results in a higher activation energy to c-HOCO through TS1'. The binding energies of t-HOCO and c-HOCO in the cavity are 15.51 kcal mol^-1 (7805 K) and 12.30 kcal mol^-1 (6190 K), respectively. These binding energies are significantly higher than the ones for CO and OH presented in <Ref>, and are closer to the average values reported for the related molecule HC(O)OH, formic acid (e.g., ∼12.30 kcal mol^-1 <cit.>, 10.7–21.0 kcal mol^-1 <cit.>). The t-HOCO and c-HOCO wells are significantly stabilized on the surface, evinced by the 13–16 kcal mol^-1 difference in energy with respect to the same intermediates in the gas phase. As a consequence, the activation energy of TS4 is higher on water. When breaking the O–H bond in c-HOCO, the energy corresponding to the OH moiety must be overcome, i.e., a significant fraction of the binding energy. The binding energy of the CO2 + H system on H2O was found to be 7.30 kcal mol^-1 (3673 K). Finally, from <Ref>, it is evident that the reaction, if viable, must proceed through quantum tunnelling. The c-HOCO -> CO2 + H barrier is 32.1 kcal mol^-1, which is extremely high for ISM conditions. However, contrary to what happens in the gas phase, TS4 is submerged with respect to the reactant asymptote, thanks to the stabilization promoted by the H2O surface. The product of the reaction, CO2 + H, is higher in energy than both radicals, and the reaction is significantly less exothermic because of the breaking of hydrogen bonds. Nonetheless, once CO2 + H is formed, H is susceptible to diffusing or evaporating, thus concluding the reaction. §.§.§ CO ice The reaction profile on CO ice is shown in Figure <ref> and the stationary points in Figure <ref>. With respect to the gas-phase process, as previously discussed, the profile lacks the HCO2 intermediate.
When comparing with the results for the water cluster presented above, the main difference is the lack of PRC', so that the reaction must go through the t-HOCO intermediate to reach CO2. While PRC' exists on the CO ice, we found it to be a first-order saddle point. Unlike on water, where PRC' is stabilized thanks to the interaction of the OH radical with a dangling bond of H2O, on CO this interaction is unavailable, and the weak OH-CO interaction promotes the rotation to PRC. There is still the possibility that the lack of PRC' is an effect of the random selection of the binding site; however, a full binding site sampling is beyond our computational resources. To reach the t-HOCO intermediate, however, TS1 must be crossed at the same energy level as the asymptote. Hence, significant energy dissipation would suppress the whole reaction unless enough energy input is provided via non-thermal mechanisms. Additionally, the much reduced intermolecular interaction of the admolecules with the surface, due to the lack of electrostatic and H-bonding interactions on CO ices, affects the energetics of the stationary points. The most prominent examples are the lower stabilization of the intermediates and the barrier at TS4, which sits above the energy of the asymptote. In general, the energetics on CO ice is closer to the gas-phase case, with small differences, e.g., the isomerization barrier for the t-HOCO -> c-HOCO reaction on CO is about 1 kcal mol^-1 lower (and about 2 kcal mol^-1 lower for the reverse reaction). The fact that more CO molecules surround the reaction site opens a new possibility, not available on water ice or in the gas phase. It involves the reactivity of the t-HOCO and c-HOCO intermediates with a neighbouring CO, leading to CO2 + HCO, see Figure <ref>. Interestingly, these reactions possess lower activation energy barriers than TS4, see Figure <ref>, and in the case of the c-HOCO + CO -> CO2 + HCO reaction, the barrier sits below the asymptote. §.§ Microcanonical rate constants We estimated the microcanonical rate constants for the PES entrance and exit channels described in the previous sections. The entrance channels start with the pre-reactant complexes and finish with t/c-HOCO, and the exit channels start with t/c-HOCO and finish with CO2 + H, and additionally CO2 + HCO on CO. These channels present the relevant rate constants for the kinetics of the reaction because the t-HOCO -> c-HOCO isomerization is much faster, even when energy redistribution is at play. Notice that, due to the barriers (TS1 and TS1'), if the stationary points of the PES were populated according to a thermal distribution, the formation of the HOCO intermediates would be slow, and the formation of products would likely not happen at all. To simulate non-thermal reactions, an initial amount of energy is given to the system; see below. The experiments of <cit.> show the formation of HOCO with an apparently small or null barrier. We note that for the exit channel (c/t)-HOCO -> CO2 + H/HCO, the starting potential well is very deep, and thermalization is more likely <cit.>. Nevertheless, as we will show, under a microcanonical formalism the formation of CO2 + H is found to be slow. Finally, different degrees of energy dissipation are allowed by changing the number of ice molecules considered in the microcanonical calculations, n. Our PESs indicate that the adsorption energy (released on formation of PRC/PRC') is not completely dissipated but is employed in forming HOCO. The energy reference is again the energy of the asymptotes.
One could consider that this is not the best choice, since the initial energy lies above the energy of the PRC/PRC', meaning that the initial state is higher in energy than a fully thermalized reactant set. However, it must be noted that (i) if the reaction is not plausible even with a reference state that is an upper bound of the real one, then starting from a more stable reference will not change the qualitative picture, and (ii) incomplete energy dissipation following certain exothermic processes, e.g., diffusion into deeper binding sites and possible Eley-Rideal mechanisms [That may be of relevance for CO molecules given their abundance in ISM ices.], would actually involve initial energies higher than those of PRC/PRC'. This effect is irrelevant when the activation energy of a reaction is much higher than the exothermicity caused by the mentioned processes, but for CO + OH -> HOCO the activation energy of the reaction falls below the adsorption energy, and it is of small magnitude. The correct energy reference would lie somewhere in between that of the asymptote and that of the PRC/PRC'. The microcanonical rate constants for the entrance step are shown in <Ref> and <Ref> for H2O and CO ice. In these plots, we show the reaction rate constants as a function of the energy, where k(E=0) corresponds to the separated, no-adsorption asymptote (CO + OH in <Ref> and <Ref>). Energies above zero indicate extra energy from non-thermal excitation mechanisms. In this work, to compare with experimental observations, we consider the presence of extra energy from either (i) a prior O + H -> OH reaction (ΔU = 102.1 kcal mol^-1) or (ii) half the energy deposited by a single Ly-α photon, assuming equal energy partition into the products of the H2O -> OH + H reaction (ΔU = 118.7 kcal mol^-1). Notice that the amount of extra energy actually used to promote the title reaction through these non-thermal mechanisms is unknown. Hence, we represent fractions of that energy, 0.10, 0.25, 0.50, as vertical dashed lines in <Ref> and <Ref> to serve as a guide to evaluate how the rate constants would increase under these assumed scenarios. As we introduced in <Ref>, we evaluated the behaviour of the reaction assuming dissipation into a set of n molecules. The four different cases, n=0, 5, 10, 16, are illustrated in <Ref> and <Ref>. The rate constants for the entrance step on H2O ice are, for all n dissipating molecules, fast for the PRC -> t-HOCO step, indicating that external energy input is unnecessary for this reaction, as determined experimentally by <cit.> and computationally by <cit.>. However, for the alternative PRC' → c-HOCO reaction, we observe k(E=0) ≤ 10^8 s^-1 for the models with 10 and 16 H2O dissipating molecules. This means that if the timescale for thermalization is shorter than tens of nanoseconds, the adsorption energy alone is insufficient to overcome the entrance barrier. This constraint is lifted by considering extra energy. The difference between the rate constants for the reactions starting from PRC and PRC' stems from the significantly higher activation energy in the latter case. For the CO model, we observe systematically lower values of k(E=0) than on water, owing to the lower stabilization of the PRC complex on CO than on H2O, leading to higher energy barriers than in the best case for H2O. This, in turn, yields k(E=0) ≤ 10^8 s^-1 for all of our models.
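For orientation when reading the vertical dashed guide lines, the assumed energy fractions translate into the following offsets; this is simple arithmetic under the numbers quoted above, with the cm^-1 column added as our convenience (1 kcal mol^-1 = 349.75 cm^-1):

```python
# Energy offsets for the dashed guide lines: fractions of the O + H reaction
# energy and of half a Ly-alpha photon, per the values quoted in the text.
KCAL_TO_CM = 349.75   # standard conversion, 1 kcal/mol in wavenumbers

for label, e_tot in (("O + H -> OH", 102.1), ("1/2 Ly-alpha via H2O", 118.7)):
    for frac in (0.10, 0.25, 0.50):
        e = frac * e_tot
        print(f"{label:>22s}  {frac:4.2f} -> {e:6.1f} kcal/mol "
              f"({e * KCAL_TO_CM:8.0f} cm^-1)")
```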
Because k(E) is a very steep function around E=0, the reaction is viable with a small input of energy, which can come from reactions, e.g., O2 + H <cit.>. This finding reinforces the scenario presented in <cit.> for the three-body formation of CO2 on CO ice, as we will discuss in <Ref>. An important comment on each of these rate constants is that we implicitly assumed an infinitely fast energy partition into n molecules, which may not be a good representation of this reaction on CO. At this research stage, we warn that extracting strong conclusions for a limit case like the one found for PRC -> t-HOCO on CO ice is difficult, and more sophisticated approaches are necessary. We are currently working on a molecular dynamics study of this reaction to illuminate this issue. Similarly to the entrance rate constants, the exit c-HOCO -> CO2 + H rate constants on H2O ice and the c/t-HOCO -> CO2 + H/HCO rate constants on CO ice are plotted in <Ref> and <Ref> for the different dissipation scenarios. It is important to recall that, while the entrance channels are unaffected by quantum tunnelling, all the exit channels involve the migration of an H atom, turning quantum tunnelling into an important driver for the reaction, as already evinced by nuclear quantum dynamics calculations <cit.>. Still, even with the influence of quantum tunnelling, the reactions are, in all cases, significantly slower than in the entrance step. The importance of the energy dissipation scheme is major for these reactions. There is a clear gap in exit rate constant values between the (ideal) n=0 dissipation model and the 5, 10 and 16 molecule dissipation models, which in all cases yield rate constants k(E=0) ≤ 10^0 s^-1. We remind the reader that these values must be confronted against the thermalization timescale, i.e., if thermalization is faster, the reaction will not proceed. A rate constant of k(E=0) ≤ 10^0 s^-1 means reaction times of seconds, and we find it hard to believe that thermalization would not happen on those timescales, precluding all the c/t-HOCO -> CO2 + H/HCO reactions in all the conditions and substrates considered in this work. We conclude then that, without the input of any external energy other than the adsorption energy of the reactants, the reaction can proceed neither microcanonically nor from thermalized HOCO.
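The competition invoked here can be made explicit with a one-line estimate: treating thermalization as a pseudo-first-order loss with an assumed timescale τ (of the order of the few nanoseconds quoted below for HOCO-like systems), the branching toward reaction is k/(k + 1/τ):

```python
# Back-of-envelope branching between reaction and energy dissipation,
# p = k / (k + 1/tau); tau = 1 ns is an assumed thermalization timescale.
tau = 1e-9
for k in (1e0, 1e6, 1e8, 1e11):          # representative k(E) values, s^-1
    p = k / (k + 1.0 / tau)
    print(f"k = {k:8.0e} s^-1  ->  reaction branching ~ {p:.2e}")
```

With τ ≈ 1 ns, only channels with k(E) of at least ∼10^8-10^9 s^-1 convert an appreciable fraction of the population, which is the criterion applied in the discussion that follows.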
When including a degree of external energy from the mechanisms explained above (chemical and H2O photodissociation), the exit reaction is faster, as expected. However, only the n=0 dissipation model yields rate constants sufficiently high (≥ 10^8 s^-1) to compete with thermalization. The upper bound of the timescale for (almost) complete thermalization of HOCO is estimated to be similar to that of CO2 formed from the CO + (^1D)O -> CO2 reaction, that is, a few nanoseconds <cit.>. While the energy dissipation in RRKM is instantaneous, and an incomplete energy dissipation may increase the values of the rate constants, our assumption for the external energy input is also rather ideal. Thus, we conclude that, even in the presence of an external energy input, it is hard to justify the formation of CO2 and H/HCO from the title reaction. This suggests that the formation of CO2 relies on the subsequent reaction: t/c-HOCO + H -> CO2 + H2. Reaction <ref> involves two radicals, and even though an activation barrier may be present on ice <cit.>, quantum tunnelling should play a major role, as is the case for H abstraction reactions <cit.>. Thus, reaction <ref> must be viable. The inclusion of reaction <ref> in the CO2 reaction network was already in place for the non-energetic formation of CO2, for example in <cit.>. Still, this article shows that it also applies to the energetic formation of CO2. We put our results in a laboratory/simulation and astrophysical context in <Ref>. Finally, although it does not affect the outcome of the reactions studied in this work (e.g., the t/c-HOCO (+ CO) -> CO2 + H/HCO reactions remain non-viable under ISM conditions), it is interesting from a purely chemical perspective to comment on the effect observed for the two competing reactions c-HOCO -> CO2 + H and t/c-HOCO + CO -> CO2 + HCO. The competition between these two processes is energy dependent. Low values of E, e.g., k(E=0), favour t/c-HOCO + CO -> CO2 + HCO, whereas c-HOCO -> CO2 + H is the preferred exit channel at higher energies, between 10–120 kcal mol^-1, depending on the number of dissipating molecules. The dependence on the energy and on the number of dissipating molecules clearly reveals that the dominance of the c-HOCO -> CO2 + H route at high energies is an entropic effect. For both routes, the count of states at the TS energy (the numerator of <Ref>) depends on the height of the barrier and the number of low-frequency vibrational modes. Because HCO, in contrast with H, carries high-frequency molecular vibrations (H-C and C=O stretches at 2800 and 1900 cm^-1), the count of states will be smaller at high energies. Low-frequency vibrations overwhelm the purely kinetic effect arising from the lower barrier. § DISCUSSION §.§ The CO + OH -> CO2 + H reaction in the laboratory The experiments carried out on the CO + OH -> CO2 + H reaction were reviewed in <Ref>. For most of them, the biggest experimental conundrum is the generation of the OH radical, which is very unstable under laboratory conditions and needs to be generated in situ. The experimental methods for forming the OH radical in these experiments are, in most cases, different. However, all the possible formation pathways involve the co-deposition or co-generation of H atoms, e.g., formation via O2 + H, fragmentation of H2O in a microwave discharge, or H2O photodissociation. In general, it is impossible to experimentally discern whether the CO + OH reaction proceeds directly to CO2 + H or, in turn, stops at t-HOCO, which is converted to CO2 via reaction <ref>. A rigorous study of the reaction using molecular dynamics <cit.> showed that the probability of direct formation of CO2 on H2O ice is lower than 1%. It is important to remark that in <cit.> the OH was generated with excess energy coming from the photodissociation of H2O. Our results support the latter scenario and discard the direct reaction. Compared with our results, the small fraction observed for the direct formation of CO2 + H in <cit.> may come from the slower and more realistic non-ergodic energy dissipation present in the molecular dynamics study. On CO ice, the reaction proceeds similarly to H2O, both in our calculations and in the experiments of <cit.>, where HOCO is explicitly included as the intermediate of the reaction. <cit.> discuss the competition of Reaction <ref> with the formation of formic acid (HC(O)OH) through the reaction HOCO + H -> HC(O)OH.
Our results complement these experiments, showing that, in addition to what was already known, the formation of the HOCO complex has to surmount an activation energy of 2.2 kcal mol^-1 with a mere adsorption energy of 2.5 kcal mol^-1, in contrast with H2O ice, where the higher stabilization of the PRC complex increases the energy budget available for the formation of HOCO. The consequence of this effect for the overall reaction scheme is that the formation of HOCO cannot be taken for granted on CO ice under a non-energetic regime. In <cit.>, such energy input is given by a preceding chemical reaction. The more impeded formation of the HOCO radical on CO is the main difference from H2O ice and is illustrated by the rate constants in <Ref> (top panel) and <Ref>. This different reactivity on different substrates may explain the recent JWST observations of a higher degree of mixing of CO2 with H2O than with CO <cit.>. However, and as we indicated in <Ref>, further studies are being undertaken to understand the precise behaviour of the CO + OH -> t-HOCO association step on CO ices. On the other hand, <cit.> used matrix isolation, electron paramagnetic resonance and FT-IR techniques, which made it possible to observe several radicals, among them HOCO, and CO2. HC(O)OH is also detected, although its formation seems to be due to HCO + OH rather than reaction <ref>. In this experiment, methanol molecules embedded in an argon matrix are photolysed at 14 K. The resulting photo-products can relax as the matrix acts as a third body. Later, the sample is warmed up to 35 K and the Ar matrix is removed, allowing light species to diffuse. The peak of CO2 production occurs in this last stage. According to our results and interpretation, if CO2 is formed via reaction <ref>, then either there is some extra energy input, not all the energy from the photolysis step was completely dissipated, or H-abstraction reactions are in place. In the latter case, this can be triggered by radicals other than those of reaction <ref>, something we did not consider in this work, and it would require either diffusion at warmer temperatures or the presence of a nearby radical species. In addition, an efficient H-abstraction radical-radical channel should be present, which will certainly depend on the relative orientation of the radicals. Notice that in this experiment no ice surface is present, but rather the bare copper plate on top of which the matrix and reactant mixture is prepared. Finally, we would like to encourage more experiments on CO2 formation starting from thermalized reactants, especially on CO surfaces. §.§ The CO + OH -> CO2 + H reaction in the ISM The comparison between the experiments and our calculations presented in the last section motivates us to contextualize our results under the expected conditions of the ISM. We concluded that the sole CO + OH reaction is insufficient for the formation of CO2 on ices and that Reaction <ref> is the most promising candidate for the follow-up reaction. Considering this, is it justified to consider a small activation energy for the OH + CO -> CO2 + H reaction in astrochemical models of molecular clouds and prestellar cores? In light of our simulations, we consider that there are at least four different cases: * High coverage of H2O ice and high abundance of H atoms. * High coverage of H2O ice and low abundance of H atoms. * High coverage of CO ice and high abundance of H atoms. * High coverage of CO ice and low abundance of H atoms.
On H2O ice (Cases 1 and 2 above), the formation of the HOCO complex is facile and does not require any energy input, with a fast reaction occurring thanks to the adsorption energy (or a fraction of it) on water ice. Moreover, the dominance of H2O in the early stages of a molecular cloud's life, during the translucent cloud phase <cit.>, ensures mild temperature conditions (15–50 K) that allow for the diffusion of CO molecules, and relatively low extinction (A_v ∼ 1-2 mag). Under these conditions, Case 1 is the most likely one, with H atoms produced from the photodissociation of H2O and other hydrogenated molecules both in the gas phase and on the grain. Other mechanisms, such as cosmic-ray ionization, also contribute to these fragmentation processes. Under these conditions, we find that adopting a null or low activation barrier for Reaction <ref> in astrochemical models is justified, because H atoms will ensure the prompt conversion of HOCO to CO2 through reaction <ref>. However, we warn that the HC(O)OH abundance could be underestimated with this approach. At higher extinctions, but without enough CO surface coverage (Case 2, molecular cloud stage), the abundance of H atoms on grain surfaces is reduced and the HOCO complex survives longer on the grain. Under these conditions, we recommend treating Reactions <ref> and <ref> separately. The next two cases (Cases 3 and 4) can be treated jointly. Our simulations show that forming the HOCO radical from CO + OH is not straightforward on CO ice and requires an initial energy input. While the energy required to initiate the reaction is not very high, the very low temperatures at which Cases 3 and 4 would dominate (dense prestellar cores with T = 10 K) rule out thermal energy as the initiator of the reaction. This energy input can come from a neighbouring chemical reaction, since H2O photodissociation should be a minor factor in CO ices. Therefore, we consider the approach presented in <cit.>, modelling CO2 formation as a three-body reaction, e.g. H + O + CO, to be a good compromise for modelling the reaction on CO ice. Whether the three-body reaction can be coarse-grained to yield CO2 + H directly, or HOCO (later proceeding through reaction <ref>), is likely to depend on the H-atom abundance. For example, an important factor should be the local cosmic-ray ionization rate (ζ), which determines the dissociation of H2 into 2H and thus the ratio of HOCO radicals to H atoms. We must emphasize that coarse-graining the formation of CO2 through the title reaction may be acceptable only when the H-atom abundance overwhelms the HOCO abundance; in doing so, however, the abundance of other HOCO-derived molecules like HC(O)OH will be underestimated. Caution is advised when the models target these molecules. Finally, we would like to discuss other possible scenarios. One possibility is that the excited formation of OH leads to non-thermal diffusion out of the reaction site or to its desorption (the latter being more plausible on CO ices owing to the lower binding energy); in either case the reaction would not take place. Another possible scenario concerns the energy dissipation after HOCO is formed. Because of the high exothermicity of the CO + OH -> HOCO reaction and the low binding energies of these radicals on CO ice, there is the possibility that HOCO chemically desorbs, or triggers the desorption of a nearby CO molecule of the ice.
In addition, if these reactions were to take place in the inner layers of the ice, one must take into account that energy dissipation would be even more efficient, owing to the larger number of intermolecular interactions and the higher number of surrounding molecules, rendering each reaction step less and less efficient.

§ CONCLUSIONS

Using accurate quantum chemical calculations and microcanonical kinetic modelling, we found that the CO + OH -> CO2 + H reaction, which has been considered the most important producer of interstellar CO2, is rather inefficient, and its occurrence cannot be taken for granted. The reaction proceeds through a rather stable intermediate, HOCO, and more specifically through its two structural isomers, t-HOCO and c-HOCO. On H2O ice, the formation of HOCO is feasible, but its evolution to CO2 requires a further reaction step that most likely involves H abstraction through reaction <ref>. On CO ice, we found, for the first time, that the formation of HOCO is not as efficient as currently assumed, owing to the lower adsorption energy of OH and CO molecules on CO ice. We indicate that non-thermal effects are necessary to form HOCO, and thus CO2, on CO ice. This limitation may be behind the recent ice observations showing a higher fraction of CO2 in water-dominated environments <cit.> compared with apolar (CO-dominated) ices. Because our calculations assume ideal energy redistribution in an infinitely short time after the reactions, our results represent a lower bound for the production of HOCO and CO2 from the CO + OH reaction. We aim to improve the description of energy dissipation in forthcoming works to resolve ambiguous cases. We encourage further experimental work on the topic, especially on CO ices following <cit.>. Nonetheless, with our results we were able to provide atomistic insight into the formation of CO2, one of the most important interstellar ice constituents, and to indicate the cases where coarse-graining of the CO + OH reaction in astrochemical models is, to a first approximation, acceptable and where it is not.

G.M. thanks the Japan Society for the Promotion of Science (JSPS International Fellow P22013, and Grant-in-aid 22F22013) for its support. The authors acknowledge support by the Research Center for Computational Science in Okazaki, Japan (Projects: 22-IMS-C301, 23-IMS-C128), the state of Baden-Württemberg through the bwHPC consortium and the German Research Foundation (DFG) through grant no INST 40/575-1 FUGG (JUSTUS 2 cluster) (Project: 22-IMS-C301). Y.A. acknowledges support by Grant-in-Aid for Transformative Research Areas (A) grant Nos. 20H05847.

§ GAS-PHASE COMPARISON WITH <CIT.>

We compare our energetics of the CO + OH -> CO2 + H gas-phase reaction profile at the DLPNO-CCSD(T)/CBS//MN15-D3BJ/6-31+G(d,p) level with the high-quality CCSD(T)/AVTZ results presented in <cit.> in <Ref>. Note that the energies presented here are not ZPVE corrected, unlike in the main manuscript. We observe excellent agreement between methods (deviations of 0.0–1.3 kcal mol^-1, i.e. within chemical accuracy) for all structures except HCO2. As introduced in the methods section, this intermediate and the associated entrance and exit transition states, TS5 and TS6, are irrelevant to the reaction kinetics or dynamics <cit.>. Hence, a wrong prediction of the energetics of this intermediate does not affect our results, and we do not include it in our kinetic simulations. Yet, it is interesting to mention the reason for the discrepancy.
In <cit.>, the authors show that the HCO2 intermediate belongs to the C_2v symmetry point group at the CCSD(T)/AVTZ level of theory. However, the geometries at the MN15-D3BJ/6-31+G(d,p) level converge to a C_s intermediate. The T_1 diagnostic at the DLPNO-CCSD(T)/cc-pVTZ level of theory for the HCO2 intermediate hints at a strong multireference character (T_1=0.068), so it is not clear whether the CCSD(T) or the MN15-D3BJ calculations better predict the correct HCO2 geometry. However, it is clear that a dual-level approach like DLPNO-CCSD(T)/CBS//MN15-D3BJ/6-31+G(d,p) will fail here owing to the mismatch of geometries. Despite the discrepancy found for HCO2, the excellent agreement for all the relevant parts of the PES indicates that the studies on the H2O and CO clusters will yield the correct energetics for the system.
http://arxiv.org/abs/2307.05234v1
20230711130828
CR-Lasso: Robust cellwise regularized sparse regression
[ "Peng Su", "Garth Tarr", "Samuel Muller", "Suojin Wang" ]
stat.ME
[ "stat.ME", "stat.CO" ]
Peng Su (1), Garth Tarr (1), Samuel Muller (1,2; corresponding author, [email protected]), Suojin Wang (3)
(1) School of Mathematics and Statistics, The University of Sydney, NSW 2006, Australia
(2) School of Mathematical and Physical Sciences, Macquarie University, NSW 2109, Australia
(3) Department of Statistics, Texas A&M University, College Station, Texas 77843, USA

Cellwise contamination remains a challenging problem for data scientists, particularly in research fields that require the selection of sparse features. Traditional robust methods may be neither feasible nor efficient in dealing with such contaminated datasets. We propose CR-Lasso, a robust Lasso-type cellwise regularization procedure that performs feature selection in the presence of cellwise outliers by simultaneously minimising a regression loss and a cell deviation measure. To evaluate the approach, we conduct empirical studies comparing its selection and prediction performance with several sparse regression methods. We show that CR-Lasso is competitive under the settings considered. We illustrate the effectiveness of the proposed method on real data through an analysis of a bone mineral density dataset.

Keywords: cellwise contamination, cellwise regularization, robust sparse regression, feature selection

§ INTRODUCTION

Identifying the most important features in an n× p design matrix X to predict an outcome vector y is a fundamental problem in statistics, where n denotes the sample size and p the number of feature columns in X. This problem is challenging, especially when there is contamination in the data, that is, when some elements of the full data matrix [y, X] are corrupted. It is commonly believed that raw real data, prior to any cleaning, contain about 1%-10% outliers <cit.>. Potential issues caused by these outliers are often ignored <cit.>, even though they may negatively impact estimation and variable selection <cit.>. Based on their characteristics, outliers can be classified as either rowwise or cellwise. Rowwise outliers are observation vectors whose components are entirely contaminated, as visualised in rows 2 and 15 of Figure <ref>. One way to overcome this challenge is through robust sparse estimators that deal with rowwise outliers under sparse settings by combining traditional robust estimators with Lasso-type regularization. For example, <cit.> and <cit.> proposed combining robust estimation with the adaptive Lasso <cit.> and with MM-estimation <cit.>, respectively. These estimators work by downweighting outlying observations. Cellwise outliers are cells contaminated in a way that makes them differ from the true data-generating process. It is typically assumed that these occur independently across the design matrix X. Observations may experience some form of corruption in one or more of the predictors, as visualised by the scattered white cells in Figure <ref>. Cellwise outliers are more challenging than rowwise outliers because they propagate to many of the rows: even a small proportion of cellwise outliers in each predictor can cause a large proportion of outlying observation vectors <cit.>. Outlier detection is often the first step taken when dealing with cellwise outliers in a dataset.
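To quantify the propagation effect described above: if each cell is contaminated independently with probability e, an observation vector escapes contamination entirely with probability (1-e)^p. A short worked example with illustrative numbers:

```latex
P(\text{row fully clean}) = (1-e)^p:\qquad
(1-0.02)^{50} \approx 0.36,\qquad (1-0.05)^{50} \approx 0.08 .
```

Thus, with only 5% cellwise contamination across p = 50 predictors, more than 90% of the rows contain at least one outlying cell, which is why rowwise-robust methods that downweight entire observations can end up discarding most of the sample.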
One common detection approach, suggested by <cit.>, is to predict the value of a cell and flag it as an outlier if the difference between its predicted value and its observed value exceeds a certain threshold. Another technique, described by <cit.>, treats all outliers as rowwise outliers and identifies the cells that contribute most to the outlying rows. <cit.> presented "cellflagger", a method that detects outlying cells for each row by combining Lasso regularization with a stepwise application of cutoff values. In the field of robust regression under cellwise contamination, several methods have been proposed to address the issue of outliers. For example, <cit.> introduced the shooting S-estimator, which combines the shooting algorithm and the S-estimator to perform robust regression. <cit.> proposed a three-step regression method that involves detecting rowwise and cellwise outliers, followed by covariance matrix estimation and regression. Alternatively, <cit.> proposed a cellwise robust M-estimator that replaces the detected outliers with imputed values. Recently, there has been substantial interest in robust sparse regression under cellwise contamination. <cit.> investigated giving adaptive weights to predictors and observations based on detected outliers. <cit.> proposed to filter and impute outliers before regression modelling to prevent the possible damage caused by outliers. <cit.> considered solving this problem by running successive penalized S-regressions over all predictors, a method called sparse shooting S (SSS). In this paper, we provide a new perspective on dealing with cellwise outliers in regression models. A cellwise outlier in an active predictor will typically increase both the magnitude of the regression residual and the cell deviation. Building on this idea and on the work in <cit.>, we propose the cellwise regularized Lasso (CR-Lasso), which incorporates a regression loss and a cell deviation measure into one loss function. We then apply a cellwise regularized and sparse regression procedure that helps identify active predictors and possible outliers. The structure of this paper is as follows. Section <ref> describes the proposed method and algorithm details. Section <ref> illustrates the empirical results in low- and high-dimensional settings. A real data application is presented in Section <ref>. Finally, we state some conclusions in Section <ref>. R functions and sample code that implement the proposed approach are available on the GitHub page of the first author (https://github.com/PengSU517/regcell).

§ CELLWISE ROBUST SPARSE REGRESSION

We first explore the limitations of traditional regularization techniques when dealing with cellwise contamination. We then introduce a novel approach that addresses these issues through a constrained loss function, discussed in detail below. Consider an observed response y_i, a set of p predictors x_i and a corresponding coefficient vector β in a linear regression model framework, y_i = x_i^⊤β + ε_i, i = 1, 2, …, n, where in classical settings the error terms ε_i are assumed to be independently N(0,σ^2) distributed. We can write this as y = Xβ + ε, where y is an n-dimensional response vector and X is an n× p design matrix that may include some outliers. Without loss of generality, unless otherwise specified, we assume all predictors have zero mean and unit variance and that the variance of the error term ε equals one. Unless otherwise mentioned, we do not include an intercept term in the linear regression model, as we work with centered data.
§.§ Modification of the regression loss

For high-dimensional data under sparse settings, only a small subset of predictors is active, which means β is a sparse vector with only a few nonzero coefficients, typically far fewer than min(n,p). Many techniques have been proposed over the last thirty years to recover the sparse β. For instance, the popular Lasso <cit.> solves an L_1-regularized objective loss, argmin_β 1/2 ||y - Xβ||_2^2 + λ|β|_1, where λ is a tuning parameter. When X is well-conditioned, Lasso-type estimators can guarantee a high recovery rate for β when the chosen λ is appropriate, meaning that the estimate of β depends on λ. More general regularized objective loss functions can be written as argmin_β 1/2 ||y - Xβ||_2^2 + P_λ(|β|), where P_λ(|β|) is a well-defined penalty function. For instance, <cit.> proposed the smoothly clipped absolute deviation (SCAD) penalty, which is used to solve an objective loss with a non-convex regularization. The iterative procedure for outlier detection (IPOD) method <cit.> is a modification of the Lasso that handles outlying rows by adding an extra term ζ, argmin_{β, ζ} 1/2 ||y - Xβ - ζ||_2^2 + θ|ζ|_1, where ζ indicates possible outlying parts in the response y and θ is a tuning parameter for ζ. <cit.> showed the equivalence between IPOD and Huber's M-estimate β̂_H = argmin_β ρ_θ(y - Xβ) <cit.>, where Huber's loss function is ρ_θ(z) = z^2/2 for |z| ≤ θ, and ρ_θ(z) = θ|z| - θ^2/2 for |z| > θ. With some non-convex penalty P_θ(ζ), IPOD is equivalent to other M-estimates. However, like M-estimates, IPOD is only robust against rowwise outliers. To be robust against cellwise outliers, <cit.> suggested the modification argmin_{β, Δ} 1/2 ||y - (X - Δ)β||_2^2 + η|Δ|_1, where Δ is an n× p matrix indicating possible outlying parts in the design matrix X, and η is a tuning parameter for Δ. However, the solution of (<ref>) is non-convex and intractable because of the bilinear term Δβ <cit.>. Similarly, <cit.> considered using the Frobenius norm ||Δ||_F^2 as a penalty, which is better suited to measurement errors (dense and bounded Δ) than to cellwise outliers (sparse Δ), as outliers should be sparse while measurement errors are densely present. To solve this problem, we propose an additional constraint in Equation (<ref>). A cellwise outlier in an active predictor will typically increase both the magnitude of the regression residual and the deviation of this observation. From this view, we modify the loss function as the sum of the regression loss and the deviation loss, argmin_{β, Δ} 1/2 ||y - (X - Δ)β||_2^2 + 1/2 ||X - Δ||_F^2 + η|Δ|_1, where ||y - (X - Δ)β||_2^2 measures the regression loss and ||X - Δ||_F^2 measures the deviation loss. The goal of the minimisation problem in (<ref>) is to decrease the sum of the regression loss and the deviation loss by shrinking only a few outlying cells in the design matrix. However, shrinking cells in the predictors can only deal with outliers in X. Thus, we add a ζ term that addresses possible outliers in y: argmin_{β, Δ, ζ} 1/2 ||y - (X - Δ)β - ζ||_2^2 + 1/2 ||X - Δ||_F^2 + η|Δ|_1 + θ|ζ|_1, where ζ indicates possible outlying parts in the response y and θ is a tuning parameter for ζ. To illustrate the mechanism, Figure <ref> shows a simple linear regression model, y_i = β_1 x_i + ε_i, i = 1,…, n. The distribution of data points is depicted in Figure <ref>. The majority of the clean observations lie within the ellipse, such as point A. The grey dashed lines represent the boundaries of cellwise regularization in x.
Any cells outside the grey boundaries will be regarded as outliers in x and be regularized. Regularizing point B will simultaneously lead to a decrease in both the regression residual and the deviation magnitude. Conversely, regularizing point C would reduce the deviation loss but increase the regression loss. In this case, point C will be regarded as a good high-leverage point and will not be regularized, despite having a large deviation magnitude. The blue dashed lines indicate the boundaries of cellwise regularization in the response y. Point D has a large regression residual but a small deviation. In this case, point D will be regarded as an outlier in y and be regularized.

§.§ Cellwise regularized sparse regression

To simultaneously select active variables and detect outlying cells under cellwise contamination, we combine equations (<ref>) and (<ref>) as follows: argmin_{β, Δ, ζ} 1/2 ||y - (X - Δ)β - ζ||_2^2 + 1/2 ||X - Δ||_F^2 + λ|β|_1 + η|Δ|_1 + θ|ζ|_1. To solve the optimization problem (<ref>), we propose an iterative block coordinate descent algorithm that successively estimates β, Δ and ζ. The algorithm can be divided into two stages: cellwise regularization and sparse regression. Given an estimate β̂, the loss function (<ref>) becomes argmin_{Δ, ζ} 1/2 ||y - (X - Δ)β̂ - ζ||_2^2 + 1/2 ||X - Δ||_F^2 + η|Δ|_1 + θ|ζ|_1. Note that Equation (<ref>) is an L_2 loss with L_1 regularization, which can be solved directly by gradient descent. The cellwise regularization algorithm proceeds as in Algorithm <ref>. After obtaining Δ̂ and ζ̂, a regularized design matrix X̃ = X - Δ̂ is formed, as well as a regularized response ỹ = y - ζ̂. We then obtain a new β̂ by solving argmin_β 1/2 ||ỹ - X̃β||_2^2 + λ|β|_1, which is a classical Lasso-type optimization. Combining the cellwise regularization technique with the Lasso, we obtain the cellwise regularized Lasso (CR-Lasso), which selects variables and detects outliers simultaneously. Details of the CR-Lasso algorithm are given in Algorithm <ref> below. The block coordinate descent iterations guarantee that the loss in Equation (<ref>) is non-increasing; therefore, the final estimates converge at least to a local minimum.

§.§ Data initialization

For a given dataset [y, X], it is necessary to standardize the data before executing Algorithm <ref>. Since the dataset may include cellwise outliers, we use the median and the Q_n scale estimator <cit.> to standardize variables robustly. This results in a robustly standardized design matrix X^⋆. We also obtain a robust estimate of the residual standard deviation, σ̂, using RLars <cit.>. We then obtain standardized estimates β̂^⋆, Δ̂^⋆, ζ̂^⋆ from a standardized version of the objective loss: argmin_{β^⋆, Δ^⋆, ζ^⋆} 1/2 ||(y - (X^⋆ - Δ^⋆)β^⋆)/σ̂ - ζ^⋆||_2^2 + 1/2 ||X^⋆ - Δ^⋆||_F^2 + λ|β^⋆|_1 + η|Δ^⋆|_1 + θ|ζ^⋆|_1, where β^⋆, Δ^⋆, ζ^⋆ are the standardized versions of β, Δ, ζ, respectively. To accelerate the algorithm's convergence and improve its accuracy, we recommend using RLars, available in the R package <cit.>, to obtain an initial estimate of β.

§.§ The choice of tuning parameters

For sparse regression models, two classical selection criteria are the AIC <cit.> and the BIC <cit.>, AIC = L + 2k and BIC = L + log(n)k, where L denotes the corresponding loss and k is the number of active predictors. For the proposed method, we define L = ||(y - (X^⋆ - Δ̂^⋆)β̂^⋆)/σ̂ - ζ̂^⋆||_2^2 + 2θ|ζ̂^⋆|_1. We use the BIC as the default selection criterion because of its ease of implementation and good performance in our empirical work.
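To make the two-stage procedure concrete, here is a minimal Python sketch of the block coordinate descent with BIC-based selection of λ. This is our illustrative reading of the method, not the authors' R implementation (regcell); the proximal-gradient inner solver, the step size, the iteration counts and the omission of the σ̂ scaling are all simplifying assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

def soft(u, t):
    """Elementwise soft-thresholding (the proximal map of the L1 norm)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def cellwise_step(X, y, beta, eta, theta, n_iter=200):
    """Approximately solve the cellwise regularization problem for fixed
    beta by proximal gradient descent on (Delta, zeta)."""
    n, p = X.shape
    D = np.zeros((n, p))                   # estimated cellwise shifts
    z = np.zeros(n)                        # estimated outlying parts of y
    lr = 1.0 / (2.0 + float(beta @ beta))  # step below a Lipschitz bound
    for _ in range(n_iter):
        r = y - (X - D) @ beta - z         # current regression residuals
        gD = np.outer(r, beta) - (X - D)   # gradient of the smooth part in D
        gz = -r                            # gradient of the smooth part in z
        D = soft(D - lr * gD, lr * eta)
        z = soft(z - lr * gz, lr * theta)
    return D, z

def cr_lasso(X, y, lams, eta=2.576, theta=1.0, n_outer=20):
    """Block coordinate descent with BIC selection of lambda.
    Assumes X, y are already robustly standardized (median / Qn)."""
    n, p = X.shape
    best = None
    for lam in lams:
        beta = np.zeros(p)
        for _ in range(n_outer):
            D, z = cellwise_step(X, y, beta, eta, theta)
            # sklearn's Lasso minimises ||.||^2/(2n) + alpha|b|_1, so alpha = lam/n
            beta = Lasso(alpha=lam / n, fit_intercept=False).fit(X - D, y - z).coef_
        k = np.count_nonzero(beta)
        L = np.sum((y - (X - D) @ beta - z) ** 2) + 2 * theta * np.abs(z).sum()
        bic = L + np.log(n) * k
        if best is None or bic < best[0]:
            best = (bic, beta, D, z)
    return best
```

For example, cr_lasso(Xs, ys, lams=np.geomspace(0.01, 10, 25)) would return the BIC-optimal fit over that grid of tuning parameters.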
For conciseness, we do not report results for criteria other than the BIC. We implement a hard threshold for model selection to avoid potential over-regularisation, where the algorithm would select only a few predictors and shrink almost all the cells in the selected predictors. Specifically, we exclude any model in which the number of shrunken cells in one active predictor exceeds 30% of the number of observations. It is worth noting that the Lasso estimator can introduce bias. To counteract this, we perform a post-cellwise-regularized regression based on the selected predictors to enhance the model's performance. Regarding η, a natural choice is to set η = z_0.995 for all cells, where z_0.995 = 2.576 is the 99.5% quantile of the standard normal distribution. A quantile threshold is commonly used, for example in Huberization <cit.> and in DDC <cit.>. Similarly, setting θ = z_0.995 is also a natural choice. However, this parameter is sensitive to the estimated error scale, σ̂, which may not be sufficiently accurate under cellwise contamination. Therefore, to ensure robustness in our empirical studies, we set θ = 1. <cit.> showed that good efficiency under Gaussian assumptions can be achieved despite a conservative choice of θ. Regarding the convergence tolerance levels, we recommend ϵ_1 = ϵ_2 = 10^-6, which is used in our empirical studies.

§ SIMULATION

To demonstrate the effectiveness of the proposed method, we ran empirical studies comparing the performance of six methods in a moderate-dimensional and a high-dimensional setting: CR-Lasso, sparse shooting S <cit.>, robust Lars <cit.>, MM-Lasso <cit.>, sparse LTS <cit.> and Lasso <cit.>. Among the compared methods, CR-Lasso, SSS and RLars are relatively robust to cellwise outliers, MM-Lasso and SLTS are rowwise-robust methods, and Lasso is a classical variable selection technique that is not robust to any outliers and is included here for completeness.

§.§ Moderate-dimensional setting

In our simulations, we set n = 200, p = 50 and β = (1_10^⊤, 0_{p-10}^⊤)^⊤. Clean observation vectors x̌_i were sampled from N(0,Σ), and errors ε_i were sampled from N(0,3^2), corresponding to a relatively high noise level in the data. The correlation structure among predictors was given by Σ_ij = ρ^|i-j|, and we set ρ = 0.5. We also generated x̌_i from the multivariate t(4) distribution to simulate distributions with heavier tails. To introduce cellwise outliers, we set the contamination proportion e to 0%, 2% and 5% for all predictors and generated outliers independently. Outlying cells Δ_ij were randomly generated from N(γ, 1) and N(-γ, 1) with equal probability, where γ varies to simulate outliers of different magnitudes. We ran 200 simulations for each scenario and used the root mean squared prediction error (RMSPE) to assess the prediction accuracy of the considered methods. In addition, to assess the accuracy of variable selection, we employed F_1 = 2TP/(2TP + FP + FN), where TP, FP and FN denote true positives, false positives and false negatives, respectively. While the F_1 score is commonly used for classification problems, it is also used to evaluate the performance of variable selection techniques, as in <cit.>. The advantage of using the F_1 score to measure variable selection is that it takes into account both the precision and the recall of the selected variables. Precision measures the proportion of selected variables that are relevant, while recall measures the proportion of relevant variables that are selected.
By combining precision and recall, the F_1 score provides a balanced evaluation of variable selection performance. This allows us to measure the effectiveness of each method in both selection and prediction. Figure <ref> reports the distributions of the RMSPEs for the data generated with p = 50. When the predictors are generated from a normal distribution, as depicted in the top row of the figure, CR-Lasso outperforms the other methods. Without contamination (e = 0%), CR-Lasso and Lasso exhibit similar performance, with an average RMSPE of around 3.2, while MM-Lasso, RLars, SLTS and SSS perform slightly worse. At a 2% contamination rate, CR-Lasso exhibits superior and stable performance, even with a high magnitude of outlyingness, whereas Lasso, MM-Lasso, RLars, SLTS and SSS deteriorate with increasing γ. At a 5% contamination rate, CR-Lasso maintains stable performance, with prediction results similar to those under 2% contamination; in contrast, the other methods show inferior results compared with their counterparts under 2% contamination. The behaviour of the six compared methods differs significantly when the predictors follow a t(4) distribution, which is known to generate many high-leverage points. CR-Lasso performs worse than in the normal cases, since it treats high-leverage points as outliers and shrinks their values, resulting in biased estimates. However, even in this challenging scenario, CR-Lasso outperforms the other methods under 2% or 5% contamination with high magnitudes of outlyingness, when γ is as high as 6 or 8. The overall performance of all compared methods is similar to the Gaussian setting, except for some extreme RMSPE values observed for MM-Lasso, RLars and SLTS. Figure <ref> presents the mean F_1 scores across all evaluated methods. When the predictors are generated from a normal distribution, as depicted in the top row of the figure, CR-Lasso achieves excellent F_1 scores in all scenarios. RLars also exhibits good F_1 scores, followed by MM-Lasso. On the other hand, SLTS, SSS and Lasso exhibit inferior F_1 scores, as they select many inactive variables. When the predictors follow a t(4) distribution, CR-Lasso does not perform as well as when the predictors are multivariate normal. RLars is competitive with CR-Lasso, since there are more good high-leverage points, which strengthen the estimates. The overall performance of the other compared methods for heavy-tailed predictors is similar to the normal setting.

§.§ High-dimensional setting

In this subsection, we present the empirical results for high-dimensional settings. Specifically, we set the number of predictors to p = 300 and β = (1_10^⊤, 0_{p-10}^⊤)^⊤, keeping all other settings the same as in the moderate-dimensional setting. Figure <ref> reports the RMSPE results for the data generated with p = 300. With such a high noise level, all compared methods perform significantly differently in the high-dimensional cases compared with their moderate-dimensional counterparts. Specifically, with e = 0% or 2% contamination, only CR-Lasso and RLars perform well, while Lasso also performs well when the magnitude of outlyingness is low; MM-Lasso, SLTS and SSS are less effective. When the contamination rate increases to 5%, only CR-Lasso maintains stable performance, and the other methods perform worse, particularly as γ increases.
Similar to the moderate-dimensional cases, CR-Lasso performs better with normally distributed predictors than when the predictors follow a multivariate t(4) distribution. In contrast, RLars performs better in the latter setting because of the presence of good high-leverage points. Even in this case, CR-Lasso still performs best with a 5% contamination rate and a large magnitude of outliers. In addition, MM-Lasso, SLTS and SSS exhibit many extreme RMSPE values, indicating that they are not robust to outliers at such a high noise level. Figure <ref> depicts the mean F_1 scores across all evaluated methods in the high-dimensional cases. Compared with the moderate-dimensional cases, all methods show lower F_1 scores. RLars exhibits competitive F_1 scores under the t(4) distribution. Conversely, Lasso, MM-Lasso, SLTS and SSS show poor F_1 scores, as they tend to select too many inactive predictors.

§ THE BONE MINERAL DENSITY DATA

To illustrate the proposed methodology, we considered the bone mineral density (BMD) data from <cit.>. The BMD dataset consists of gene expression measurements of 54,675 probes from 84 Norwegian women and is publicly available at the European Bioinformatics Institute ArrayExpress repository (accession number E-MEXP-1618). It should be noted that microarray measurements are often contaminated (noisy), as <cit.> highlighted. This contamination can stem from multiple sources <cit.>, thereby obscuring the gene expression signal in the data. Given the large number of variables in the dataset, a pre-screening step was implemented to identify the subset of variables most correlated with the outcome of interest, the total hip T-score. To accomplish this, we first log-transformed all the predictors and then used the robust correlation estimate based on winsorization as in <cit.>, instead of the Pearson correlation, since winsorization is more robust to outliers that may occur in the dataset. The screened data comprise measurements of p = 100 genes from n = 84 Norwegian women. Figure <ref> displays the outlier detection results for the screened predictors using DDC <cit.>. On average, the screened genes exhibit a contamination rate of 3.61%, with probe 236831_at having the highest contamination rate of 9.52%. Among the observations, the thirteenth shows the highest contamination rate, at 22%. Given the cellwise contamination, we first standardized all variables with the median and Q_n. We then conducted a simple simulation study to validate the effectiveness of CR-Lasso, Lasso, MM-Lasso, RLars, SLTS and SSS on the bone mineral density data. We obtained a clean (imputed) dataset X̌ using DDC <cit.> and generated an artificial response y = X̌β + ε from the screened clean predictors, with ε ∼ N(0, 0.5^2 I). We randomly picked ten active predictors in each simulation run and set β_j ∼ U(1, 1.5) for each of them. We then randomly selected 80% of the observations from the original (contaminated) dataset for model training, while the remaining 20% of the imputed (clean) dataset was used to assess the prediction accuracy. We repeated this simulation procedure 200 times. For each method, we report the MAPE (mean absolute prediction error) and RMSPE values in Table <ref>, as well as the true positive counts (TP), true negative counts (TN) and F_1 scores. Table <ref> leads to conclusions similar to those of the simulation study: CR-Lasso shows superior performance over the other methods considered, as evidenced by the lowest average RMSPE and MAPE values.
The Lasso method also performs reasonably well on this dataset. This is expected, as there are no extreme outliers in these data and the fraction of cellwise outliers is modest. To demonstrate the performance on the real data, a sparse regression model was fitted with each of the aforementioned methods separately, using the original response (the total hip T-score). The model selection results (model sizes) are shown in Table <ref>. Out of the 100 pre-screened genes, nine were commonly chosen by CR-Lasso, Lasso, MM-Lasso, RLars and SLTS. We ran leave-one-out cross-validation to assess the performance of the selected models; the RMSPE and MAPE values from the leave-one-out prediction residuals are also shown in Table <ref>. From Table <ref>, it is clear that there are significant differences in the prediction outcomes obtained by the various methods evaluated. For these real data, Lasso outperforms the other methods, exhibiting the lowest RMSPE and MAPE values, followed by MM-Lasso and CR-Lasso. We note that for these data RLars and SSS have slightly larger prediction errors, highlighting that, in this example, cellwise-robust methods would require at least some minimal amount of extreme cellwise outliers to show superior prediction performance.

§ DISCUSSION

Cellwise outliers can create significant challenges when building a regression model. Because cellwise outliers propagate, most observations in a regression model may eventually be contaminated, and most existing methods may not effectively identify and manage such outliers. Motivated by this challenge, we proposed a novel approach called CR-Lasso, which incorporates a constraint on the deviation of each cell into the loss function to detect cellwise outliers in regression models. We developed an iterative procedure for sparse regression and outlier detection by combining the Lasso with cellwise outlier regularisation. Our empirical studies and real data analysis demonstrate that CR-Lasso generally has superior variable selection performance and estimation accuracy compared with other methods when outliers are present, especially when the noise ratio is high and the magnitudes of the outliers are extreme. In datasets with only a few outliers that are not too extreme, traditional non-robust estimators, such as the Lasso, perform considerably well. However, we acknowledge two critical issues that require further investigation. First, estimating σ accurately under cellwise contamination is challenging. Although we used RLars <cit.> to obtain an initial estimate, it can overestimate σ when the noise ratio is high; obtaining a more precise initial estimate of σ therefore needs further exploration. Second, we only considered symmetric, light-tailed predictors in this paper. Handling asymmetric or heavy-tailed predictors with outliers is challenging, as the algorithm may identify clean tails as outliers. In real data applications, it is recommended to perform data transformations as a preprocessing step to improve model performance. For instance, we applied the log transformation in our real data application; other transformations, such as the Box-Cox transformation <cit.>, may also be beneficial. Nevertheless, further investigation is needed to develop more robust methods that can handle asymmetric or heavy-tailed predictors effectively.
§ ACKNOWLEDGEMENTS

Su's research was supported by the Chinese Scholarship Council #201906360181. Muller and Tarr's research was supported by the Australian Research Council Discovery Project #210100521. Wang's research was supported in part by the Sydney Mathematical Research Institute, University of Sydney, and the Australian National University International Visitor Program.
http://arxiv.org/abs/2307.04904v1
20230710210827
Fast dynamic time warping and clustering in C++
[ "Volkan Kumtepeli", "Rebecca Perriment", "David A. Howey" ]
eess.SP
[ "eess.SP", "cs.LG", "cs.SY", "eess.SY" ]
§ ABSTRACT

We present an approach for computationally efficient dynamic time warping (DTW) and clustering of time-series data. The method frames the dynamic warping of time-series datasets as an optimisation problem solved using dynamic programming, and then clusters the time-series data by solving a second optimisation problem using mixed-integer programming (MIP). There is also an option to use k-medoids clustering for increased speed, when a certificate of global optimality is not essential. The improved efficiency of our approach is due to task-level parallelisation of the clustering alongside the DTW computation. Our approach was tested using the UCR Time Series Archive and was found to be, on average, 33% faster than the next fastest option when using the same clustering method. This increases to 64% faster when considering only larger datasets (with more than 1000 time series). The MIP clustering is most effective on small numbers of longer time series, because the DTW computation is faster than in other approaches, but the clustering problem becomes increasingly computationally expensive as the number of time series to be clustered increases.

§ INTRODUCTION

Time-series datasets are ubiquitous in science, engineering and many other fields, such as economics. Applications range from finding patterns in energy consumption to detecting brain activity in medical applications and discovering patterns in stock prices in the financial industry. Tools for analysing time-series data are widely available, and one such tool is clustering: a form of unsupervised learning that groups datasets into 'similar' subsets, providing useful insights. Most time-series clustering algorithms depend on dimension reduction or feature extraction techniques to achieve computational efficiency at scale <cit.>, but these can introduce bias into the clustering. Distance-based approaches have the significant advantage of directly using the raw data, so the results are not biased by a feature selection process. However, choosing which distance metric to use is not obvious, and an incorrect choice can lead to illogical clusters. Dynamic time warping <cit.> is a well-known technique for manipulating time series to enable comparisons between datasets, using local warping (stretching or compressing along the time axis) of the elements within each time series to find an optimal alignment between series. This emphasises the similarity of the shapes of the respective time series rather than the exact alignment of specific features. Finding similarities in shape is often preferable to finding similarities in time whenever the time of occurrence is not relevant to the clustering problem <cit.>. The approach can detect similarity in time series when lags or shifts in time occur, which are undetectable using Euclidean distances <cit.>. This is beneficial even when using time series of the same length and time frame, such as power load demand time series <cit.>. Finally, a user-defined warping constraint allows flexibility over which time shifts or lags may be considered 'similar' in each clustering problem <cit.>. The warping constraint uses a 'window' to limit which points in one dataset can be mapped to another <cit.>. For example, a warping window of 99 means the first data point in one time series can be mapped only up to the hundredth data point of the time series it is being compared to.
Unfortunately, DTW does not scale well in computational speed as the length and number of time series to be compared increase: the computational complexity grows quadratically with the total number of data points. This complexity is a barrier to DTW being widely implemented in time-series clustering <cit.>. In this paper, we present a novel approach to speed up the computation of DTW distances and the subsequent clustering problem, allowing longer time series and larger datasets to be analysed. We use dynamic programming to solve the DTW problem and then perform clustering of the warped time series, using the pairwise DTW distances, by formulating the clustering problem as a mixed-integer program (MIP). The user must specify the number of clusters required, and the algorithm then finds the optimal clusters, including a centroid for each cluster, where the centroid is the time series within each cluster that minimises the intra-cluster distance, i.e., the sum of the distances between each time series within the cluster and the respective centroid. The software associated with this paper, DTW-C++, is freely available from https://github.com/Battery-Intelligence-Lab/dtw-cpp. While there are other packages available for time-series clustering using DTW <cit.>, DTW-C++ offers significant improvements in speed and memory use, especially for larger datasets. As an aside, there are also innovative methods for speeding up DTW by solving approximate versions of the problem. For example, Deriso and Boyd <cit.> considered DTW as a continuous-time optimal control problem and solved it by discretisation with iterative refinement, using regularisation instead of hard band constraints. In our approach, the speed-up is achieved by task-level parallelisation, allowing multiple pairwise comparisons between time series to be evaluated simultaneously. Additionally, DTW-C++ implements more efficient memory management by solving the DTW problem using only the preceding vector rather than storing the entire warping matrix (see the Mathematical Background section for details). This means that the complete warping path between each pair of time series is not stored; this is not required for the clustering process, since only the final cost is needed. The reduction in memory use also paves the way for a future GPU implementation of the algorithm <cit.>. Our approach uses MIP for clustering; this is preferable to other DTW clustering packages that use k-based methods, since the iterative nature of the latter makes them susceptible to getting stuck in local optima, whereas MIP provides a certificate of global optimality. However, where a global optimality certificate is not required, DTW-C++ also provides the necessary functions to solve the clustering problem iteratively.

§ OVERVIEW OF METHOD

The current functionality of the software is as follows:
* Load time-series data from CSV file(s).
* Calculate DTW pairwise distances between time series, using a vector-based approach to reduce memory use. There is also the option to use a Sakoe-Chiba band to restrict warping in the DTW distance calculation <cit.>. This speeds up the computation as well as being a useful constraint in some time-series clustering scenarios (e.g., if an event must occur within a certain time window to be considered similar).
* Produce a distance matrix containing all pairwise comparisons between each time series in the dataset.
* Split all time series into a predefined number of clusters, with a representative centroid time series for each cluster.
This can be done using MIP or k-medoids clustering, depending on user choice.
* Output the clustering cost, which is the sum of the distances between every time series within each cluster and its cluster centroid.
* Find the silhouette score and elbow score for the clusters, to aid the user's decision on how many clusters, k, to include.

§ MATHEMATICAL BACKGROUND

Consider a time series to be a vector of some arbitrary length. Consider that we have p such vectors in total, each possibly differing in length. To find a subset of k clusters within the set of p vectors using the MIP formulation, we must first make p(p-1)/2 pairwise comparisons between all vectors within the total set and find the 'similarity' between each pair. In this case, the similarity is defined as the DTW distance. Consider two time series x and y of differing lengths n and m respectively, x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_m). The DTW distance is the sum of the Euclidean distances between each point and its matched point(s) in the other vector, as shown in Fig. <ref>. The following constraints must be met:
* The first and last elements of each series must be matched.
* Only unidirectional forward movement through relative time is allowed, i.e., if x_1 is mapped to y_2 then x_2 may not be mapped to y_1 (monotonicity).
* Each point is mapped to at least one other point, i.e., there are no jumps in time (continuity).
Finding the optimal warping arrangement is an optimisation problem that can be solved using dynamic programming, which splits the problem into easier sub-problems and solves them recursively, storing intermediate solutions until the final solution is reached. To understand the memory-efficient method used in DTW-C++, it is useful first to examine the full-cost-matrix solution, as follows. For each pairwise comparison, an n by m matrix C^{n× m} is calculated, where each element represents the cumulative cost between the series up to the points x_i and y_j:

c_i,j = (x_i - y_j)^2 + min(c_i-1,j-1, c_i-1,j, c_i,j-1).

The final element c_n,m is then the total cost, C_x,y, which provides the comparison metric between the two series x and y. Fig. <ref> shows an example of this cost matrix C and the warping path through it. For the clustering problem, only this final cost for each pairwise comparison is required; the actual warping path (or mapping of each point in one time series to the other) is superfluous for k-medoids clustering. The memory complexity of the cost matrix C is 𝒪(nm), so as the lengths of the time series increase, the memory required increases greatly. Therefore, significant reductions in memory can be made by not storing the entire C matrix. When the warping path is not required, only a vector containing the previous row of the current step of the dynamic programming sub-problem is required (i.e., the previous three values c_i-1,j-1, c_i-1,j, c_i,j-1), as indicated in Eq. <ref>. The DTW distance C_x,y is found for each pairwise comparison, and, as shown in Fig. <ref>, the pairwise distances are then stored in a separate symmetric matrix, D^{p× p}, where p is the total number of time series in the clustering exercise. In other words, the element d_i,j gives the distance between time series i and j. Using this matrix, D, the time series can be split into k separate clusters with integer programming. The problem formulation begins with a binary square matrix A^{p× p}, where A_ij = 1 if time series j is assigned to the cluster whose centroid is time series i, and 0 otherwise, as shown in Fig. <ref>.
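Before completing the clustering formulation, the memory-efficient DTW evaluation just described can be sketched as follows. The package itself is written in C++; this Python rendering is purely illustrative, and the optional Sakoe-Chiba band argument reflects the windowing option described earlier:

```python
import numpy as np

def dtw_distance(x, y, band=None):
    """DTW cost C_{x,y}, keeping only the previous row of the cost matrix.
    The warping path itself is not stored, since clustering only needs
    the final cost. `band` is an optional Sakoe-Chiba half-width."""
    n, m = len(x), len(y)
    prev = np.full(m + 1, np.inf)
    prev[0] = 0.0                            # enforces matching the first elements
    for i in range(1, n + 1):
        curr = np.full(m + 1, np.inf)
        lo = 1 if band is None else max(1, i - band)
        hi = m if band is None else min(m, i + band)
        for j in range(lo, hi + 1):
            d = (x[i - 1] - y[j - 1]) ** 2
            curr[j] = d + min(prev[j - 1],   # diagonal match
                              prev[j],       # advance in x only
                              curr[j - 1])   # advance in y only
        prev = curr
    return prev[m]

def distance_matrix(series, band=None):
    """Symmetric matrix D of all pairwise DTW distances."""
    p = len(series)
    D = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1, p):
            D[i, j] = D[j, i] = dtw_distance(series[i], series[j], band)
    return D
```

Note that the saving is in memory, not time: the run time is still O(nm) per pair, but only a vector of length m+1 is retained instead of the full cost matrix.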
Returning to the MIP formulation: as each centroid has to be in its own cluster, non-zero diagonal entries in A represent centroids. In summary, the following constraints apply:
* Only k series can be centroids, ∑_i=1^p A_ii = k.
* Each time series must be in one and only one cluster, ∑_i=1^p A_ij = 1 ∀ j ∈ [1,p].
* In any row, there can only be non-zero entries if the corresponding diagonal entry is non-zero, so a time series can only be in a cluster where the row corresponds to a centroid time series, A_ij ≤ A_ii ∀ i,j ∈ [1,p].
The optimisation problem to solve, subject to the above constraints, is A^⋆ = min_A ∑_i ∑_j D_ij × A_ij. After solving this integer program, the non-zero diagonal entries of A represent the centroids, and the non-zero entries in the corresponding rows of A identify the members of each cluster. In the example in Fig. <ref>, the clusters are time series 1, 2, 5 and 3, 4, with the bold time series being the centroids. Finding global optimality can increase the computation time, depending on the number of time series within the dataset and on the DTW distances. Therefore, there is also a built-in option to cluster using k-medoids, as used in other packages such as <cit.>. The k-medoids method is often quicker, as it is an iterative approach; however, it is subject to getting stuck in local optima. The results in the next section show the timing and memory performance of both MIP clustering and k-medoids clustering using DTW-C++, compared with other packages.

§ PERFORMANCE COMPARISON AND DISCUSSION

We compared our approach with two other DTW clustering packages, <cit.> and <cit.>. The datasets used for the comparison are from the UCR Time Series Classification Archive <cit.> and consist of 128 time-series datasets with up to 16,800 data series of lengths up to 2,844. The full results can be found in Table <ref> in the Appendix. Benchmarking against the latter package was stopped after the first 22 datasets because its results were consistently over 20 times slower than DTW-C++. Table <ref> shows the results for datasets down-selected to have a number of time series (N) greater than 100 and individual series longer than 500 points; this is because DTW-C++ is aimed at larger datasets, where the speed improvements are more relevant. As can be seen in these results, DTW-C++ is the fastest package for 90% of the datasets, and all 13 datasets on which the alternative was faster were cases where the entire clustering process completed in 1.06 seconds or less. Across the whole collection of datasets, DTW-C++ was on average 32% faster, and when looking at larger datasets with N > 1000, it is on average 65% faster. In all but 2 of the 115 cases where DTW-C++ is the fastest, it uses the k-medoids algorithm; this is to be expected, as the latter is an iterative clustering method and therefore does not compute all DTW distances. Fig. <ref> clearly shows the increasing superiority of DTW-C++ as the number of time series increases. In this comparison, both algorithms use k-medoids, so the speed improvement is due to faster dynamic time warping. MIP clustering was on average 16 times slower than k-medoids over all samples, and Fig. <ref> shows that as the number of time series increases, MIP clustering becomes increasingly slower. This is to be expected, because the computational complexity of the MIP clustering optimisation increases significantly. However, as the length of the time series increases, the performance of MIP converges to the speed of k-medoids, while additionally finding global optimality. This confirms the improved performance of the DTW step in DTW-C++.
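The assignment program above can be sketched with an off-the-shelf solver. The following Python/PuLP rendering is illustrative only (the package itself does not use PuLP), with D the precomputed DTW distance matrix:

```python
import pulp

def mip_cluster(D, k):
    """Solve the centroid-assignment integer program for a p x p distance matrix D."""
    p = len(D)
    prob = pulp.LpProblem("dtw_clustering", pulp.LpMinimize)
    A = [[pulp.LpVariable(f"A_{i}_{j}", cat="Binary") for j in range(p)]
         for i in range(p)]
    # objective: total distance from members to their centroids
    prob += pulp.lpSum(D[i][j] * A[i][j] for i in range(p) for j in range(p))
    prob += pulp.lpSum(A[i][i] for i in range(p)) == k        # exactly k centroids
    for j in range(p):
        prob += pulp.lpSum(A[i][j] for i in range(p)) == 1    # one cluster per series
    for i in range(p):
        for j in range(p):
            prob += A[i][j] <= A[i][i]                        # members need a centroid row
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    centroids = [i for i in range(p) if A[i][i].value() == 1]
    return {i: [j for j in range(p) if A[i][j].value() == 1] for i in centroids}
```

The formulation has p^2 binary variables, which is consistent with the benchmark behaviour reported above: the solver cost grows quickly with the number of series, while long series only make the separate DTW stage more expensive.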
Given these benchmark results, the MIP approach is recommended for occasions when the time series to be clustered are very long but their number is smaller. It is also worth noting that the lengths of the time series in the UCR Time Series Classification Archive are relatively small compared with many time-series datasets, and therefore the performance and relevance of the MIP clustering approach in DTW-C++ is understated by these results.

§ ACKNOWLEDGEMENTS

We gratefully acknowledge contributions by https://howey.eng.ox.ac.uk Battery Intelligence Lab members, and thank BBOXX for project funding and access to data. This work was also funded by the UKRI PFER Energy Superhub Oxford demonstrator and the "Data-driven exploration of the carbon emissions impact of grid energy storage deployment and dispatch" project (EP/W027321/1).

§ APPENDIX A

We include here the full benchmarking comparison between DTW-C++ (using k-medoids and MIP) and the other packages. As stated in the main text, benchmarking of the latter was discontinued once it was apparent that it was significantly slower on all datasets. Additionally, datasets with more than 4000 time series were not included for the MIP clustering, as the computation time becomes significantly longer and MIP is not suitable for clustering problems of this size.
http://arxiv.org/abs/2307.07299v1
20230714122156
Sources of primary cosmic rays forming the bump near E0=100 PeV
[ "S. E. Pyatovsky" ]
astro-ph.HE
[ "astro-ph.HE" ]
Introduction.

The causes and the form of the irregularities of the PCR (primary cosmic radiation) spectrum in E_0 remain a subject of scientific debate. Open questions include the localisation of the so-called knees of the spectrum, the 'sharpness' of the knees, the energies at which knees appear in the spectra of light and heavy nuclei in the PCR mass composition, and others. Particular attention is drawn to the localisation and the source of the bump in the PCR spectrum in E_0 near 100 PeV. An analysis of the irregularities of the PCR spectrum at E_0 = 1-100 PeV was performed, in particular, in <cit.>. Figure <ref> shows the results of the experiments of <cit.> and Tunka and the PCR energy spectrum. The results of the GAMMA <cit.> (Armenia, Aragats) and Hadron (Tien Shan high-altitude scientific station) experiments deserve separate attention: at E_0 ≅ 70-100 PeV they registered an intensity peak substantially exceeding the data of other experiments. The nature of this irregularity has not been established, given that other experiments do not report an analogous peak. It is also unusual that this peak was not observed in exposures of the GAMMA experiment itself for other time periods (e.g., -08 and others). Nevertheless, it is quite possible that this peak is not a methodological artefact of the experimental data processing. In <cit.> it was shown that the irregularities in the PCR spectrum in E_0 following the knee at E_0 = 3-5 PeV are caused by the departure of nuclei from the PCR mass composition, beginning with protons. Using the 'EAS (extensive air shower) age mini-max' method <cit.>, which relies on the substantial statistics of experimental EAS characteristics obtained, in particular, in the KASCADE-Grande experiment, it was shown that at E_0 = 2-35 PeV the mass composition of PCR nuclei remains mixed and corresponds to the CNO group. However, the knee in the spectrum of the heaviest group of nuclei in the PCR mass composition is localised below the bump observed at E_0 = 50-100 PeV, which indicates that the bump in the PCR spectrum at E_0 = 50-100 PeV is formed by other sources of nuclei with different acceleration features.

1. Experimental data for the analysis of the bump at E_0 = 50-100 PeV.

The analysis of the bump of the PCR spectrum in E_0 was performed using the data of the KASCADE-Grande experiment <cit.>, whose database contains the characteristics of more than 150 million EAS, including the global registration time of each EAS. The characteristic of this irregularity (the bump) is the slope index γ of the PCR spectrum in E_0. To estimate the variation of γ, the range E_0 = 20-75 PeV was chosen, located after the knee of the heaviest group of nuclei in the PCR mass composition and before the maximum of the bump at E_0 = 80 PeV confirmed in the GAMMA and Hadron experiments. The study of the variation of γ was carried out with a lag of 10 days, which provided sample statistics of ≅ 1 million events. Examples of the obtained spectra are given in Figure <ref>, which shows spectra with slope indices γ near E_0 = 80 PeV ranging from the minimum value γ = 1.60 ± 0.02 to the maximum γ = 2.31 ± 0.04. The spectra constructed from samples of the KASCADE-Grande experimental data are shown in comparison with the full dataset of the experiment. Although the index γ averaged over the entire observation statistics is obtained with high accuracy, the values of γ over different time intervals differ substantially.
This variation of γ may be related either to fluctuations of the EAS characteristics or to a change of the PCR intensity in this range of E_0. From Figure <ref> it follows that the outlying event registered in the GAMMA experiment is not unique and has analogues among the events registered in the KASCADE-Grande experiment. From the database of the KASCADE-Grande experiment, the values of the index γ in the range E_0 = 20-75 PeV were obtained for 248 time intervals.

2. Spectral analysis of the variation of the index γ.

The spectral analysis of the variation of γ was performed in order to reveal possible maxima in the periods of variation of the values of γ. A Fourier transform with a Hamming window was applied for the analysis. The resulting spectral density as a function of the log-period is shown in Figure <ref>. The analysis revealed two maxima of the period of variation of γ in the interval of 40-300 days (66 and 229 days). The width of a peak of the spectral density characterises the 'locality' of the PCR source: the closer a peak is to a normal distribution, the more probable it is that a single PCR source dominates the formation of that peak. In Figure <ref>, the peaks with maxima at periods of 66 and 229 days are described by normal distributions with R_a^2 > 98%. To search for possible PCR sources in the range E_0 = 20-100 PeV, the stellar catalogues 'General Catalogue of Variable Stars (GCVS)' <cit.> and 'Zwicky Transient Facility Catalog (ZTF)' <cit.> were used. The GCVS lists more than 60 thousand stars of more than 250 types, with their periods, locations and other characteristics. Figure <ref> shows that the first harmonic of 66 days corresponds mainly to stars with variability of type SR, whereas the second harmonic of 229 days is formed predominantly by Mira variables. It should also be noted that stars in the final stages of their evolution usually possess strong magnetic fields. The region of transition from semi-regular giants to Mira variables (Figure <ref>) is characterised by a local violation of scaling in the PCR spectrum at E_0 = 3-20 PeV <cit.>. There are several local regions in the PCR spectrum in E_0, analogous to those shown in Figure <ref>, where scaling is violated: the regions of scaling violation are associated with the transition from one dominant type of star to another, and the degree of the scaling violation is determined by the energy distributions provided by the dominant type of star of the considered variability.

3. The period spectrum of the stars.

The integral spectrum of variable stars as a function of the log-period is presented in Figure <ref>, which shows the periods averaged over the types of variable stars. Stars of types ranging from white dwarfs to supergiants of the recurrent-nova type are considered. The more substantial the irregularities in the period spectrum of the PCR sources, the more substantial the irregularities in the PCR spectrum in E_0 formed by those sources. The largest irregularities in the period spectrum are marked in Figure <ref> by the known values of E_0: a period of 17 days corresponds to E_0 = 0.1 PeV (the region of red dwarfs), 120 days to E_0 = 5 PeV (the bump at E_0 = 3-5 PeV in the PCR spectrum), and 176 days to E_0 = 20 PeV (the onset of the bump with its maximum at E_0 = 80 PeV). The acceleration of PCR up to E_0 = 0.1 PeV in flares of red dwarfs was established in the works of Yu. I. Stozhkov <cit.>. Figure <ref> shows that the sources of low-energy PCR, E_0 < 0.1 PeV, are dwarfs localised mainly in the constellations Sagittarius, Ophiuchus and Centaurus; the sources at medium energies, E_0 = 0.2-2 PeV, are subgiants and giants from Sagittarius and Ophiuchus; and the sources at high energies, E_0 > 5 PeV, are giants and supergiants from Sagittarius.
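As an implementation note on the spectral analysis of Section 2, the Hamming-windowed Fourier estimate used to extract the 66- and 229-day harmonics can be sketched as follows (illustrative Python; uniform resampling onto the 10-day lag grid and the FFT grid size are our assumptions):

```python
import numpy as np

def gamma_period_spectrum(t_days, gamma, n_grid=2048):
    """Hamming-windowed power spectrum of the slope index gamma(t).
    `t_days` are the (sorted) centres of the 248 time intervals."""
    t_uniform = np.arange(t_days.min(), t_days.max(), 10.0)  # 10-day lag grid
    g = np.interp(t_uniform, t_days, gamma)                  # uniform resampling
    g = g - g.mean()                                         # remove the DC term
    w = np.hamming(len(g))
    power = np.abs(np.fft.rfft(g * w, n=n_grid)) ** 2
    freq = np.fft.rfftfreq(n_grid, d=10.0)                   # cycles per day
    periods = np.divide(1.0, freq, out=np.full_like(freq, np.inf),
                        where=freq > 0)
    return periods, power

# Peaks of `power` near periods of 66 and 229 days would reproduce the two
# harmonics reported in the text; plotting against log10(periods) gives the
# log-period spectral density of the figure.
```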
From the analysis of the data presented in Figure <ref>, a regression was obtained that relates the mean period of a given type of PCR source star to the maximum E_0: lg(T, days) = (0.45 ± 0.05) lg(E_0, PeV) + (1.71 ± 0.05). (1) From regression (1) it follows that the lower boundary of the subgiant region, at a period of 23 days, is E_0 = 0.2 PeV, while the onset region of the first bump in the PCR spectrum in E_0 corresponds to a period of 74 days and E_0 = 2-3 PeV. The maximum PCR energy can also be estimated: according to the GCVS catalogue <cit.>, the maximum registered period is 29,000 days (80 years), for recurrent novae of variability type NR, which corresponds to the maximum E_0 ≅ 1-2 ZeV registered in cosmic rays. 4. Types of variable stars and the PCR spectrum in E_0. Figures <ref> and <ref> show that semi-regular giants and Mira variables constitute the main stellar population providing the PCR sources at E_0 = 1-100 PeV. Using the main-array method, we consider how stars of variability types SR, SRA, SRB, and M form the PCR spectrum at these E_0. The distribution of stars over log-periods follows a normal distribution, N ∼ exp(-(ln(T)-ln(T̅))^2/(2σ^2)). The closest to the 229-day period obtained from the Fourier analysis of the variation of the index γ of the PCR spectrum in E_0 (Figure <ref>) are stars of variability types SRA (193 days), M (280 days), and SRC (372 days). Given that Mira variables are considerably more numerous than stars of other types with similar periods, one can assume that the bump in the PCR spectrum in E_0 near 100 PeV is formed mainly by Mira variables. As E_0 grows within the range 1-100 PeV, stars in the final stages of stellar evolution begin to make the dominant contribution to PCR, which makes the PCR mass composition heavier. However, for each value of E_0 the PCR mass composition is determined by the dominant stellar type of the corresponding period, which can lead to substantial fluctuations of the fractions of different nuclei in the PCR mass composition as E_0 changes. The PCR spectra in E_0 obtained in the GAMMA and other experiments, in comparison with the spectra of the dominant stars of variability types SR, SRA, SRB, SRC, and M, are shown in Figure <ref>. In this case the maximum of the bump of the PCR spectrum is at E_0 ≅ 67 PeV. However, as follows from Figure <ref>, the bump should be less pronounced and located at E_0 < 67 PeV. The mean period of SRC variables (supergiants) is 372 days, which by (1) gives lg(E_0) = 1.91 (81 PeV). This value of E_0 is consistent with the results of the experiment. At the same time, the mean period of Mira variables is 280 days, i.e., lg(E_0) = 1.64 (44 PeV). Since the number of observed Mira variables is an order of magnitude larger than the number of SRC variables, the localization of the bump maximum at E_0 = 80 PeV according to the KASCADE-Grande data is overestimated, and the bump is formed both by Mira variables and by SRC supergiants. The acceleration of cosmic rays to ultra-high energies occurs in eruptive and nova-like stars, for example in recurrent novae, whose maximum registered period provides the maximum E_0 ∼ 1-2 ZeV. In the interval E_0 = 200 PeV - 3.5 EeV, the ZAND variability type corresponds to a mean period T = 553 days, or by (1) E_0 = 200 PeV; it is followed by variables of type N with T = 2000 days, or E_0 = 3.5 EeV, which provides the minimum in the PCR spectrum at these E_0. An example of a binary stellar system in which acceleration to ultra-high energies takes place may be a star of the EA+SRC variability type, with a registered period of 7430 days (20 years), which should provide a bump in the PCR spectrum at E_0 = 60 EeV, or lg(E_0, PeV) = 4.80. This could be a star of the μ Cephei type (Herschel's Garnet Star, a red supergiant at the last stage of stellar evolution with a He-C cycle, which indicates that the PCR mass composition becomes lighter relative to the CNO group at these E_0) together with an Algol-type star, β Persei.
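All of the period-to-energy conversions quoted in this section follow directly from regression (1). A minimal sketch using only the central values of the fitted coefficients (the uncertainties are ignored here):

import numpy as np

A, B = 0.45, 1.71    # lg(T/days) = A * lg(E0/PeV) + B, regression (1)

def period_from_energy(e0_pev):
    return 10 ** (A * np.log10(e0_pev) + B)          # days

def energy_from_period(t_days):
    return 10 ** ((np.log10(t_days) - B) / A)        # PeV

for t_days in (372, 280, 553, 2000, 7430, 11900, 29000):
    e0 = energy_from_period(t_days)
    print(f"T = {t_days:6d} d  ->  E0 = {e0:.3g} PeV (lg E0 = {np.log10(e0):.2f})")
# 372 d -> ~81 PeV (SRC), 280 d -> ~44 PeV (Mira), 553 d -> ~200 PeV (ZAND),
# 2000 d -> ~3.5 EeV (N), 7430 d -> ~60 EeV, 11900 d -> ~180 EeV,
# 29000 d -> ~1.3 ZeV (recurrent novae)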
An example of a triple stellar system in which acceleration to ultra-high energies takes place may be a system of stars with a registered period of 11,900 days (33 years), which should provide a bump in the PCR spectrum at E_0 = 180 EeV, or lg(E_0, PeV) = 5.26. If this source is considered the only one providing the PCR flux at E_0 = 180 EeV, the variations of the PCR flux must be substantial, from a maximum down to complete fading, which is what is observed in experiments. Summarizing the results obtained in this study, we construct the dependence of the log-period and of E_0 on the types of eclipsing variable stars, shown in Figure <ref>. This spectrum is characterized by three main regions of irregularity relative to the linear component: starting with RS variables, an "early" knee in the PCR spectrum in E_0 is observed; starting with stars of type SRD (giants and supergiants of spectral classes F, G, K), the knee at E_0 = 3-5 PeV; and starting with Mira variables, the so-called "bump" at E_0 near 100 PeV. As follows from Figure <ref>, after the steepening of the PCR spectrum in E_0 beyond the knee at 3-5 PeV, the index γ of the PCR spectrum in E_0 decreases again and becomes approximately the same as it was before the knee. Conclusions. * The sources of PCR are variable stars of various types at various stages of evolution, from subdwarfs to supergiants. * There is a relationship between the mean period of a star of a given variability type and the maximum PCR energy E_0 provided by the acceleration mechanisms in these stars. * Stars of each type define, through their flares, a range of PCR E_0. To each range of E_0 there corresponds a PCR mass composition that is determined by the type of the source star and that changes as E_0 changes. * The bump in the PCR spectrum at E_0 near 100 PeV is formed by giants and supergiants of M and SRC variability of late spectral classes. Other types of stars are responsible for the other irregularities of the PCR spectrum in E_0. References: [1] A. D. Erlykin, V. S. Puchkov, S. E. Pyatovsky, "Change in the mass composition of primary cosmic radiation at energies in the range of E_0 = 1-100 PeV according to data of the experiment", Physics of Atomic Nuclei 84(3), 279-286 (2021). DOI: 10.1134/S1063778821030170. [2] T. Antoni, W. D. Apel, F. Badea, K. Bekk, A. Bercuci, H. Blumer, H. Bozdog, I. M. Brancus, C. Buttner, A. Chilingarian, K. Daumiller, P. Doll, J. Engler, F. Febler, H. J. Gils, R. Glasstetter, et al., Nucl. Instrum. Methods Phys. Res., Sect. A 513, 490 (2003). DOI: 10.1016/S0168-9002(03)02076-X. [3] A. P. Garyaka, R. M. Martirosov, S. V. Ter-Antonyan, A. D. Erlykin, N. M. Nikolskaya, Y. A. Gallant, L. W. Jones, J. Procureur, J. Phys. G: Nucl. Part. Phys. 35, 115201 (2008); arXiv:0808.1421 [astro-ph]. DOI: 10.1088/0954-3899/35/11/115201. [4] W. Apel, J. C. Arteaga, et al., "The experiment", Nucl. Instrum. Methods Phys. Res. A 620(2-3), 202-216 (2010). DOI: 10.1016/j.nima.2010.03.147. [5] General Catalogue of Variable Stars, The Sternberg Astronomical Institute and The Institute of Astronomy of the Russian Academy of Sciences. URL: http://www.sai.msu.su/gcvs/. [6] C. Xiaodian, W. Shu, D. Licai, et al., "The Zwicky Transient Facility Catalog of Periodic Variable Stars", The Astrophysical Journal Supplement Series 249, 18 (2020). DOI: 10.3847/1538-4365/ab9cae.
[7] S. B. Shaulov, V. A. Ryabov, A. L. Schepetov, S. E. Pyatovsky, et al., "Strange quark matter and the astrophysical nature of anomalous effects in cosmic rays at energies of 1-100 PeV", Letters to the Journal of Experimental and Theoretical Physics 116(1-2), 3-12 (2022). DOI: 10.31857/S1234567822130018. [8] V. G. Sinitsyna, V. Yu. Sinitsyna, Yu. I. Stozhkov, "Red dwarf stars as a new source type of galactic cosmic rays", Astronomische Nachrichten 342(1-2), 342-346 (2021). DOI: 10.1002/asna.202113931.
http://arxiv.org/abs/2307.03973v1
20230708131320
Autonomy 2.0: The Quest for Economies of Scale
[ "Shuang Wu", "Bo Yu", "Shaoshan Liu", "Yuhao Zhu" ]
cs.RO
[ "cs.RO", "cs.AI", "cs.CY" ]
Autonomy 2.0: The Quest for Economies of Scale Shuang Wu, Bo Yu, Shaoshan Liu, Yuhao Zhu August 12, 2023 ============================================== § INTRODUCTION With the advancement of robotics and AI technologies in the past decade, we have now entered the age of autonomous machines. In this new age of information technology, autonomous machines such as service robots, autonomous drones, delivery robots, and autonomous vehicles, rather than humans, will provide services <cit.>. The rise of autonomous machines promises to completely transform our economy. However, after more than a decade of intense R&D investment, autonomy has yet to deliver on its promise <cit.>. In this article, by examining the technical challenges and economic impact of the digital economy, we argue that scalability is both necessary from a technical perspective and significantly advantageous from an economic perspective, and is thus the key for the autonomy industry to achieve its full potential. Nonetheless, the current development paradigm, dubbed Autonomy 1.0, scales with the number of engineers rather than with the amount of data or compute resources, preventing the autonomy industry from fully benefiting from the economies of scale, especially the exponentially cheapening cost of compute and the explosion of available data. We further analyze the key scalability blockers and explain how a new development paradigm, dubbed Autonomy 2.0, can address these problems and greatly boost the autonomy industry. § SCALABILITY OF THE DIGITAL ECONOMY The digital economy refers to the use of information technology to create, market, distribute, and consume goods and services. It has been the key driving force of the world's economic growth in the past two decades. Consider the internet industry, for instance. The internet industry accounted for 21% of GDP growth in mature economies from 2005 to 2010 <cit.>. In 2019, the internet industry contributed $2.1 trillion to the U.S. economy, about 10% of U.S. GDP, making it the fourth largest industry in the U.S. economy (behind only real estate, government, and manufacturing) <cit.>. Along with its contribution to the economy, the internet industry provides nearly 6 million direct jobs, accounting for 4% of U.S. employment. Two key forces fuel the continuous growth of the digital economy, both of which have to do with scalability: * The commoditization of computing power, as exemplified by Moore's law <cit.>, is the greatest driving force behind the digital industry. The most successful digital economy companies have developed core technology stacks that scale with the available compute resources and data, not with the size of their engineering teams. One remarkable example is WhatsApp: when acquired by Facebook for $19 billion, WhatsApp had only 32 engineers serving over 450 million users. * The breakthrough of artificial intelligence in the last decade has demonstrated that, in addition to many technical improvements and tuning, scaling neural network models and training datasets has been our most effective strategy for achieving continuous performance gains <cit.>. Autonomy technologies such as those found in autonomous driving are widely seen as the pillar of the next digital economy era. However, today's autonomous machine technologies, dubbed Autonomy 1.0, represent everything a scalable industry should not do.
To illustrate the problem facing autonomous driving companies, Figure <ref> analyzes the R&D expenditures and revenue per employee of two leading public digital economy companies, Microsoft representing the software industry and Alphabet representing the internet industry, and two public autonomous driving companies, TuSimple representing the robot truck industry and Aurora representing the robotaxi industry. We selected these autonomous driving companies for the accessibility of their financial data. Both Alphabet and Microsoft spend less than 20% of their total operating expenditures on R&D. For instance, Google employs fewer than 30,000 engineers while serving over 4.3 billion users. Their scalability is constrained mainly by available compute resources and data rather than by the number of engineers. In comparison, both TuSimple and Aurora spend more than 70% of their operating expenditures on R&D. Often, to reach new users or to deploy services to new locations, autonomous driving companies need to pour additional R&D resources into re-calibrating their existing technology stacks for new environments. Hence, their scalability is constrained by R&D investment or, more directly, by the number of engineers. As a result, Alphabet and Microsoft are able to generate $1.5 million and $0.8 million of revenue per employee respectively while maintaining a high growth rate, whereas TuSimple and Aurora generate negligible revenue per employee and struggle with growth. For the autonomy industry to achieve economies of scale, we have to revolutionize the R&D paradigm. In the following sections, we describe the key scalability issues with Autonomy 1.0 and outline promising solutions, already on the horizon, for achieving scalability in Autonomy 2.0. § AUTONOMY 1.0: THE END OF THE ROAD FOR AN AGING ARCHITECTURE Current commercial autonomous driving systems mostly inherited their software architecture from the competitors in the DARPA Grand Challenges between 2005 and 2007 <cit.>. This software architecture, while it represented a great leap in autonomy technology at the time, has shown its age and become difficult to scale after more than a decade of intense industry efforts to improve and adapt it. Figure <ref> illustrates Autonomy 1.0's scalability problems using autonomous driving operation data from California from 2018 to 2022. Over the past five years, although an enormous amount of investment has been poured into autonomous driving, we did not observe significant growth in the number of vehicles under operation, which increased only from 400 in 2018 to 1,500 in 2022. The operation mileage per year increased only from 2 million miles to 5 million miles. Most importantly, there are still over 2,000 disengagement incidents per year. Given this trend in Autonomy 1.0, we are still years away from serious commercial operation of autonomous vehicles. Autonomy 1.0 is modular and consists of functional modules such as sensing, perception, localization, high-definition maps, prediction, planning, and control <cit.>, each of which further consists of several functional sub-modules integrated by explicit, hand-crafted logic. Most decision-making tasks, such as planning, which is responsible for generating optimal and drivable paths, are solved with constraint optimization under a set of hand-tuned rules.
When a disengagement incident happens, engineers usually have to go through a long debugging process to identify which specific module or rule was the root cause of the disengagement, and then optimize that module or develop logic changes to handle the specific problem. Often, due to intricate dependencies and coupling among modules or rules, the new software version leads to other problems that need to be addressed, greatly slowing down the development process. The Autonomy 1.0 software stack has over time become a complicated collection of ad-hoc rules and a set of interdependent modules for handling various long-tail events, which has become increasingly difficult to debug, maintain, and evolve for improved performance. Taking the open-source project Apollo <cit.> as an example, its perception module alone consists of multiple individual learning-based sub-modules that accomplish object detection in 2D images, LiDAR point cloud segmentation, traffic light detection, lane detection, and other tasks. To integrate information from these perception sub-modules, a post-processing module then fuses the 2D and 3D information and outputs an integrated representation of the environment to the downstream prediction module. The planning module makes decisions and plans routes based on the data from the prediction, localization, and map modules. These modules often have strong dependencies among themselves. Making changes to one module not only impacts the overall system performance, possibly violating real-time constraints and resource allocation, but also impacts the algorithmic performance of downstream modules due to distributional shift of the data. The whole system has become complicated and even brittle, demanding an enormous amount of engineering resources to maintain, let alone to scale. We summarize Autonomy 1.0's three major scalability bottlenecks below. * Complexity Bottleneck: The design of Autonomy 1.0 systems demands extensive engineering efforts to define software interfaces, distribute data among modules, and map various workloads onto a heterogeneous computing system. Given this complexity, it is challenging to debug and continuously update the software stack. The myriad components also make it challenging to schedule tasks and optimize the latency of the unwieldy stack at run-time. As a result, typical Autonomy 1.0 systems exhibit large latency variations <cit.>, which can harm the reliability of the autonomous driving system. * Human-Data Bottleneck: Autonomy 1.0 systems depend on fleets of physical vehicles operated by humans to collect data and perform system-level tests. This is a time-consuming and expensive process that is difficult to scale out. The scalability issue will only get worse as more and more modules of the autonomy stack adopt data-driven approaches, which require continuous data collection and labeling, because any specific instance of the recorded data reflects only a particular subset of the world states. * Generalization Bottleneck: Autonomy 1.0 systems consist of rule-based processing logic and hand-crafted interfaces, which makes them difficult to generalize to new environments. The complexity and diversity of real-world environments make it difficult to design an autonomy system that anticipates all possible challenging scenarios, whether for perception or for planning. As a result, Autonomy 1.0 systems are often over-fitted to frequently operated regions and common situations.
To handle new environments and newly encountered rare cases, additional changes to the system are required, which is increasingly difficult and time-consuming. § AUTONOMY 2.0: SCALABILITY IS EVERYTHING Recent research breakthroughs in artificial intelligence, such as the Transformer <cit.>, large language models (LLMs) <cit.>, and offline reinforcement learning <cit.>, have sparked new ideas in the architecture design, data and model infrastructure, and engineering practices of autonomous driving, leading to a new development paradigm, which we dub Autonomy 2.0. The key to Autonomy 2.0 is scalability, which is delivered through two ingredients: 1) a software stack that improves continuously with increasing scale of data and compute resources, and 2) a simulation paradigm based on digital twins for algorithmic exploration using large-scale, real-time, realistic data before deployment. Figure <ref> illustrates the differences between the Autonomy 1.0 and Autonomy 2.0 system architectures. Table <ref> summarizes how Autonomy 2.0 addresses the three bottlenecks of Autonomy 1.0. §.§ Learning-Native Software Stack Any autonomous machine performs two main tasks: perception and action, reflecting the natural dichotomy of the past and the future. The perception task observes the environment and infers its current state based on the observations so far. The action task, based on these observations, chooses an appropriate sequence of actions to achieve goals while considering how the environment may evolve in the near future. The software stack in Autonomy 2.0 thus naturally consists of a perception module and an action module. Unlike in Autonomy 1.0, where each module is implemented by a number of sub-modules, there is strong evidence that in Autonomy 2.0 each of the two modules will be implemented as a single large deep learning model, likely based on the transformer or its variants due to their ability to generalize, as demonstrated by their recent successes in LLMs. Benefits. Before describing how the two-model architecture will look in Autonomy 2.0, we first discuss why such an architectural design choice is key to scalability. The two-model architecture addresses the Complexity Bottleneck by drastically reducing the amount of code that needs to be maintained and reasoned about. Figure <ref>a) compares the lines of code in the Apollo Perception module <cit.>, which represents the Autonomy 1.0 approach, with an example of a perception module in Autonomy 2.0, BEVFormer <cit.>. The Apollo Perception module is ten times larger than BEVFormer, while BEVFormer has achieved state-of-the-art perception results. The software architecture also handles corner cases through data-driven model learning instead of hand-crafted logic, and thus addresses the Generalization Bottleneck of Autonomy 1.0. In Figure <ref>b), we analyze over 400 issues associated with the Apollo planning modules: 47% of the issues are related to Apollo failing to handle a specific usage case, and 30% are related to software engineering problems such as interfaces with other modules. In Autonomy 1.0, many hand-crafted rules are implemented to handle specific use cases. As the rules accumulate, software quality naturally becomes an issue. Architectural Design. The perception and action modules have different goals and traditionally require distinct algorithmic approaches. The perception module is trained using supervised and self-supervised learning to infer the unique ground truth of the world states.
In contrast, the action module needs to search over and choose from many acceptable action sequences while anticipating the behaviors of other agents. Therefore, the action module makes use of methods from reinforcement learning, imitation learning, and model predictive control. Interestingly, while the fundamental distinction between the two modules has not changed in Autonomy 2.0, there is a growing convergence in their implementation: the recent successes of large language models (LLMs) <cit.> in comprehending large amounts of information to perform multiple sub-tasks suggest that both modules can be implemented using a similar architecture based on the Transformer <cit.>. The transformer is a good algorithmic substrate for both the perception and action modules because of its ability to generalize. For perception, a transformer can effectively fuse perceptual data from multiple sensors and multiple moments in time into a unified representation, avoiding the information loss caused by sparsification and module serialization. For action, the sequential nature of the transformer makes it a natural fit for processing and generating temporal data, especially for sampling multiple possible future paths. Perception. In Autonomy 1.0, the perception module consists of multiple DNNs, each trained separately to support individual tasks such as 2D/3D object detection, segmentation, and tracking. In contrast, the perception module in Autonomy 2.0 uses a single transformer backbone to provide a unified representation of the ego-vehicle's environment (e.g., 2D Bird's Eye View (BEV) <cit.> or 3D occupancy <cit.>), to which a number of decoder "heads" are attached, each tuned for an individual task. This single-transformer approach to the perception module has been gaining popularity across the AV industry. For instance, it is the approach described by Tesla engineers at their "AI Day 2022" event <cit.>, and it has been deployed by another leading intelligent electric vehicle company, XPENG <cit.>. Action. The action module anticipates a combinatorially large number of possible "world trajectories", hypothesizes multiple action sequences, and evaluates them to send the optimal one to the actuators. In Autonomy 1.0, the action module is implemented as a set of sub-modules for prediction, planning, and control. The action module in Autonomy 2.0 is learned end-to-end using transformer-inspired architectures for sequential decision making <cit.>. The action transformer incorporates two models: a policy model and a world model. First, the pre-trained, transformer-based policy model leverages the large amount of historical data for agent behavior prediction and for ego-vehicle decision making and trajectory planning <cit.>. Second, the world model is essentially a behaviorally realistic simulator of the world, validated against real-world data. The two models are connected in a closed loop within the transformer so that the policies can be fine-tuned online <cit.>.
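To make the "single backbone with task-specific decoder heads" pattern concrete, here is a minimal PyTorch sketch. It is an illustration only, not BEVFormer's or any vendor's actual architecture; all module names, token counts, and head definitions are invented for the example.

import torch
import torch.nn as nn

class UnifiedPerception(nn.Module):
    """Toy single-backbone perception model: one transformer encoder produces
    a shared scene representation; small task heads decode it per task."""
    def __init__(self, d_model=256, n_classes=10):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                           batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.heads = nn.ModuleDict({
            "detection": nn.Linear(d_model, n_classes + 4),  # class logits + box
            "occupancy": nn.Linear(d_model, 1),              # per-token occupancy
        })

    def forward(self, tokens):                # tokens: (batch, n_tokens, d_model)
        scene = self.backbone(tokens)         # shared representation (e.g., a BEV grid)
        return {name: head(scene) for name, head in self.heads.items()}

model = UnifiedPerception()
fused_sensor_tokens = torch.randn(2, 1024, 256)   # stand-in for camera/LiDAR features
outputs = model(fused_sensor_tokens)
print({k: tuple(v.shape) for k, v in outputs.items()})

Adding a new task under this design means adding a head and its loss, not a new pipeline stage, which is one way the two-model architecture keeps code volume down.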
§.§ Digital-Twin Based Development and Deployment Autonomy 1.0 relies almost exclusively on human effort for tasks such as manual data labeling and physical testing, posing a scalability bottleneck. Autonomy 2.0 addresses the Human-Data Bottleneck using an emerging simulation technology called digital twins, in which a virtual representation acts as the counterpart of the physical world. As highlighted by the recent National Artificial Intelligence R&D Strategic Plan 2023 published by the White House <cit.>, digital twins have fueled many real-world applications (e.g., urban planning and management of smart cities, and additive manufacturing) and are a main strategy for sustaining AI technologies. Under the digital-twin paradigm, one instruments the physical system to collect real-world, real-time data, which is then interactively shared with the digital counterpart. In the digital world, one can further synthesize scenarios (e.g., traffic) with statistically significant fidelity, i.e., with behavioral distributions similar to those of human driving. Developing and testing autonomous driving software using synthesized virtual scenarios accelerates the evaluation process by 10^3 to 10^5 times <cit.> and reduces testing costs by two orders of magnitude <cit.> compared to the physical-only approach of Autonomy 1.0. Figure <ref>c) demonstrates the R&D cost efficiency of Autonomy 1.0, which costs $180/hr through physical testing, vs. that of Autonomy 2.0, which costs $2/hr through virtual testing, a roughly 100-fold improvement <cit.>. Figure <ref>d) demonstrates the R&D efficiency of Autonomy 1.0, which achieves around 3,000 miles per physical vehicle per year through physical testing <cit.>, vs. that of Autonomy 2.0, which achieves over 3 million miles per virtual vehicle per year through simulation, a 1000-fold improvement <cit.>. Combining these two factors brings an over 10^5-fold improvement for the same engineering investment in Autonomy 2.0; scalability is thus constrained only by the available compute resources instead of by the number of engineers, effectively eliminating the Human-Data Bottleneck. § SUMMARY The autonomy economy, or the use of autonomous machines to provide goods and services, will fuel the world's economic growth in the coming decades, and huge investments are pouring into it. Such investments will only be justified if autonomous machines can reach, and provide utility for, every person on the planet. As in today's digital economy, scalability will necessarily be the winning formula in this process. The current practice of developing and deploying autonomous machines carries the historical baggage of the complexity bottleneck, the human-data bottleneck, and the generalization bottleneck, and is thus unscalable. We must start from a clean slate and rethink the architecture design of autonomous machines. We posit that Autonomy 2.0 will embrace a learning-native software stack, which addresses the complexity bottleneck through software simplicity and the generalization bottleneck through end-to-end learning. Digital-twin technologies will have to be integrated throughout the development, evaluation, and deployment cycle of Autonomy 2.0 to address the human-data bottleneck.
http://arxiv.org/abs/2307.05632v1
20230711071130
Belief Revision from Probability
[ "Jeremy Goodman", "Bernhard Salow" ]
cs.AI
[ "cs.AI", "cs.LO" ]
Belief Revision from Probability Jeremy Goodman School of Philosophy University of Southern California, USA [email protected] Bernhard Salow Faculty of Philosophy University of Oxford, UK [email protected] August 12, 2023 ======================================================================================================================================================================================================= In previous work (<cit.>), we develop a question-relative, probabilistic account of belief. On this account, what someone believes relative to a given question is (i) closed under entailment, (ii) sufficiently probable given their evidence, and (iii) sensitive to the relative probabilities of the answers to the question. Here we explore the implications of this account for the dynamics of belief. We show that the principles it validates are much weaker than those of orthodox theories of belief revision like AGM <cit.>, but still stronger than those valid according to the popular Lockean theory of belief <cit.>, which equates belief with high subjective probability. We then consider a restricted class of models, suitable for many but not all applications, and identify some further natural principles valid on this class. We conclude by arguing that the present framework compares favorably to the rival probabilistic accounts of belief developed by Leitgeb <cit.> and Lin and Kelly <cit.>. § PROBABILITY STRUCTURES We will work with the following simplification of the models in <cit.>: A probability structure is a tuple ⟨ S,ℰ,Q,Pr,t⟩ such that: * S is a non-empty set (of states), * ℰ⊆𝒫(S)\{∅} (the possible bodies of evidence), * Q (the question) is a partition of S, * Pr (the prior) is a probability distribution over S, and * t∈ [0,1] (the threshold) Propositions are modeled as subsets of S, where p is true in s if and only if s∈ p. We say that E'∈ℰ is the result of discovering p in E∈ℰ just in case E'=E∩ p; this will allow us to talk about how beliefs evolve in response to changes in one's evidence. Which propositions an agent believes is a function of their evidence and is also given by a set of states, so that an agent with evidence E believes p if and only if B(E)⊆ p. This ensures that their beliefs are closed under entailment, and thus already marks a departure from popular `Lockean' accounts of belief <cit.>, according to which one believes a proposition if and only if its probability exceeds a particular threshold. But it is compatible with the more plausible direction of Lockeanism, namely: threshold: You believe p only if p is sufficiently probable given your evidence. If B(E)⊆ p, then Pr(p|E)≥ t. We can think of the members of the question Q as its answers; we write [s]_Q for the member of Q containing s. The proposal in <cit.> then boils down to claiming that s∈ B(E) if and only if s∈ E and the answers to Q that are more probable than [s]_Q have total probability less than the threshold t. Writing Pr_E for Pr(· |E), this can be formalized as follows: B(E)={s∈ E: Pr_E({s': Pr_E([s']_Q) > Pr_E([s]_Q)})<t} This means that one believes as much as possible subject to two constraints: (i) threshold, and (ii) that the totality of one's beliefs corresponds to the conjunction of one's evidence with a disjunction of answers to Q that includes any answer at least as probable (given one's evidence) as any other it includes.
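For finite structures, the definition of B(E) is directly computable. The following Python sketch is not part of the original paper; it encodes states as labels, the question as a list of disjoint answer sets, and the prior as a dict, and it reproduces the Drawing a Card verdicts discussed in the next section.

def belief_set(E, Q, prior, t):
    """B(E): keep the states of E whose answer-cells are not dominated by
    strictly more probable answers carrying total conditional mass >= t."""
    pe = sum(prior[s] for s in E)                     # Pr(E)
    def cell_prob(s):                                 # Pr([s]_Q | E)
        cell = next(q for q in Q if s in q)
        return sum(prior[x] for x in cell & E) / pe
    return {s for s in E
            if sum(prior[x] for x in E if cell_prob(x) > cell_prob(s)) / pe < t}

# Drawing a Card: 52 fair-deck states plus the trick-deck state T
states = [f"F{i}" for i in range(1, 53)] + ["T"]
prior = {s: 0.9 / 52 for s in states[:-1]}
prior["T"] = 0.1
Q = [set(states[:-1]), {"T"}]                         # is the deck fair?
S = set(states)
print(belief_set(S, Q, prior, t=0.85) == set(states[:-1]))   # True: believe "fair"
print(belief_set({"F52", "T"}, Q, prior, t=0.85))            # {'T'}: belief reversal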
One notable attraction of this proposal is that what one believes corresponds to the discrete analogue of the highest posterior-density region typically used to define `credible intervals' from probability density functions in Bayesian statistics. A logically significant feature of the proposal, to which we will return later, is that it involves not only local probability comparisons between different answers to Q, but also a global probability comparison between a collection of such answers and the threshold t. § PRINCIPLES AND RESULTS A core idea behind the orthodox AGM <cit.> theory of belief revision is that belief revisions are trivial whenever what you learn is compatible with your initial beliefs: you should simply add the discovery to your beliefs, draw out the logical consequences of these beliefs, and leave everything else unchanged. Here we will focus on five principles that encode various aspects of this idea. Exploring when and how these principles can fail will be a useful way of exploring the extent to which our account of belief requires departing from orthodoxy when it comes to belief dynamics. These principles are:[The indicates that the discovery is compatible with your initial beliefs, while the indicate that it is something you initially believe. - is often referred to as `preservation'; <cit.> call - `weak preservation' and R `very weak preservation'. If we interpret the non-monotonic consequence relation p q as saying that B(p)⊆ q, then - corresponds to `rational monotony', + to `cut', and - to `cautious monotony' in the standard terminology from <cit.>.] - If you don't believe not-p and then discover p, you shouldn't give up any beliefs. If B(E)∩ p ≠∅, then B(E∩ p) ⊆ B(E). R If you don't believe not-p and then discover p, you shouldn't reverse any of your beliefs (i.e. go from believing something to believing its negation). If B(E)∩ p ≠∅, then B(E)∩ B(E∩ p) ≠∅. + If you believe p and then discover p, you shouldn't form any new beliefs. If B(E)⊆ p, then B(E) ⊆ B(E∩ p). - If you believe p and then discover p, you shouldn't give up any beliefs. If B(E)⊆ p, then B(E∩ p) ⊆ B(E) R If you believe p and then discover p, you shouldn't reverse any of your beliefs. If B(E)⊆ p, then B(E) ∩ B(E∩ p) ≠∅. These principles are not logically independent: the principles entail the corresponding principles, and the + and - principles each entail the corresponding R principles. All of them are valid according to AGM. By contrast, only R is valid according to Lockean theories that equate believing a proposition with assigning it a sufficiently high probability (for some probability threshold less than 1), and it is valid only if this probability threshold is above √(5)-1/2≈ .62 (as discussed in <cit.>). The present account falls in between these extremes: - and R are valid on the class of probability structures. -, R, and + can all fail in probability structures. We will illustrate Proposition <ref> with two examples. Consider a much discussed thought experiment: Flipping for Heads A coin flipper will flip a fair coin until it lands heads. A natural model of this case is as follows: S={s_1,s_2,…} ℰ={{s_i,s_i+1,s_i+2,…}: s_i∈ S} Q={{s_i}:s_i∈ S} Pr({s_i})=1/2^i t=.99 Here s_i is the state in which the coin lands heads on the ith flip, and {s_i,s_i+1,s_i+2,…} is your evidence if you have just observed the coin land tails on the first i-1 flips. The question Q is maximally fine-grained, and the probabilities match the known objective chances. 
In this model, B({s_i,s_i+1,s_i+2,…})={s_i, s_i+1,…, s_i+6}: you always believe that the coin will land heads within the next seven flips. - is violated whenever you observe the coin land tails. For example, let p={s_2,s_3,…}. Then B(S)∩ p≠∅, but B(S∩ p)={s_2,…,s_8}⊈{s_1,…,s_7}=B(S). We think this is exactly the right prediction. To turn this into a counterexample to +, we add a new body of evidence E'={s_1, s_2, …, s_7} to ℰ. Intuitively, we can think of this as the evidence you receive if you walk away from the experiment before the first flip and are later told that the coin landed heads within the first seven flips. It is easy to verify that B(E')={s_1,…,s_6}. So B(S)⊈B(S∩ E'), even though B(S) ⊆ E'. That this can happen should be unsurprising in a framework like ours in which agents have `inductive' beliefs that go beyond what is strictly entailed by their evidence: discovering something that you previously believed only inductively will strengthen your evidence, putting you in a position to draw further inductive conclusions. Counterexamples to R are subtler, for reasons we will explain in the next section. But here is one: Drawing a Card You are holding a deck of cards, which is either a fair deck consisting of 52 different cards or a trick deck consisting of 52 Aces of Spades. Your background evidence makes it 90% likely that the deck is fair. You draw a card at random; it is an Ace of Spades. Here is a possible model of the example: S={F_1,F_2,…, F_52, T} ℰ={S, {F_1},…,{F_51},{F_52,T}} Q={{F_1,F_2,…, F_52},{T}} Pr({F_i})=.9/52≈.017, Pr({T})=.1, t=.85 The states F_i are all states in which the deck is fair; they are distinguished only by which card you will draw, with F_52 being the one where you draw the Ace of Spades. State T is the state in which the deck is the trick deck (and you thus draw an Ace of Spades). Your evidence settles all and only what card you drew; so when you draw an Ace of Spades, it leaves open both that you did so by chance and that you did so because it is a trick deck. The question is simply whether the deck is fair. It is easy to see that, according to this model, you should initially believe only that the deck is fair. Your initial beliefs are thus compatible with it being fair and you drawing the Ace of Spades by chance. Yet when you discover that you drew an Ace of Spades, you should reverse your opinion and conclude that you're holding the trick deck, since Pr({T}|{F_52,T})≈.1/(.1+.017)≈ .855>t. Note that, in this model, your discovery is not a disjunction of answers to the question Q. If we changed the question to a more fine-grained one, so that your discovery was a disjunction of its answers, then the case would no longer yield a counterexample to R. For example, relative to the question is the deck fair and will I draw an Ace of Spades – i.e. relative to Q'={{F_1,F_2,…, F_51}, {F_52},{T}} – you will initially believe that you won't draw an Ace of Spades, in which case your subsequent discovery isn't compatible with your initial beliefs. And relative to the question is the deck fair and what will I draw – i.e. relative to the maximally fine-grained Q”={{s}:s∈ S} – you will initially have no non-trivial beliefs, and in particular you won't start out believing that the deck is fair. In the next section, we will see that this is part of a more general pattern about counterexamples to R.
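The coin-model belief sets above can also be checked mechanically. The sketch below truncates the infinite state space at 40 flips, an assumption that changes the relevant probabilities only negligibly; for a maximally fine-grained question, answer probabilities are just the (renormalized) state priors.

def belief_set(E, prior, t=0.99):
    """B(E) when each state is its own answer to the question."""
    pe = sum(prior[s] for s in E)
    return {s for s in E
            if sum(prior[x] for x in E if prior[x] > prior[s]) / pe < t}

prior = {i: 0.5 ** i for i in range(1, 41)}     # state i: first heads on flip i
S = set(prior)
print(sorted(belief_set(S, prior)))             # [1, ..., 7]: heads within 7 flips
tails_first = {i for i in S if i >= 2}          # you observe tails on flip 1
print(sorted(belief_set(tails_first, prior)))   # [2, ..., 8]: old beliefs dropped
E_prime = set(range(1, 8))                      # told: heads within first 7 flips
print(sorted(belief_set(E_prime, prior)))       # [1, ..., 6]: new beliefs formed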
§ ORTHOGONALITY In the previous section, we saw that some of the surprising belief dynamics in probability structures depended on discoveries that cross-cut the question. Notice that structures in which this cannot happen, because every member of ℰ is the union of some subset of Q, satisfy the following constraint: orthogonality: Pr([s]_Q)/Pr([s']_Q)=Pr([s]_Q|E)/Pr([s']_Q|E) for all s,s'∈ E∈ℰ s.t. Pr([s']_Q|E)>0 This says that the only way that getting new evidence can change the relative probability of two answers to Q is by completely ruling out one of those answers. While we can ensure orthogonality by making the question fine-grained enough to capture all possible discoveries, this isn't always necessary. For example, we could fine-grain the states and bodies of evidence in our model of Flipping for Heads to capture the fact that you discover where on the table the coin lands. The bodies of evidence in such a fine-grained model will cross-cut the question how many times will the coin be flipped; but, plausibly, orthogonality will still hold for this question, since the added information about where the coin lands is probabilistically independent of how many times it will be flipped. orthogonality is interesting because it leads to a stronger logic of belief revision. Firstly, R is valid on the class of probability structures satisfying orthogonality. Secondly, consider the following principle. It says (roughly) that if you're sure that, whatever you're about to discover, you won't believe a given proposition afterwards, then you already don't believe it: Π - If Π is a partition any member of which you could discover, then there is a p∈Π such that you shouldn't give up any beliefs upon discovering p. If Π⊆ℰ is a partition of E, then B(E∩ p)⊆ B(E) for some p∈Π. We then have the following result: Π - can fail in probability structures. But it is valid on the class of probability structures satisfying orthogonality. It is also worth noting that - and + can still fail in structures satisfying orthogonality. In particular, orthogonality holds in the structures we used in the last section to argue that Flipping for Heads yields counterexamples to - and +. In our view, a good deal of ordinary talk about what people believe is well-modelled by structures satisfying orthogonality. This is because we think that the question Q, to which attributions of belief are implicitly relativized, typically coincides with the question under discussion in the conversational context in which those attributions are made. Moreover, when a discovery is salient, it is natural to consider a question that is sufficiently fine-grained to capture all the aspects of this discovery that are relevant to its answers. Counterexamples to orthogonality (and thus to R and Π -) therefore tend to be `elusive' in Lewis's <cit.> sense: attending to these cases often changes the context in such a way that they can no longer be described as counterexamples. That being said, we do not think that orthogonality is plausible as a general constraint. This is because, very often, the only way to ensure orthogonality is to adopt a very fine-grained question; and, often, such fine-grained questions make overly skeptical predictions about what we can believe. Consider, for example, the following case: One Hundred Flips You will flip a fair coin 100 times and watch how it lands each time. There are natural contexts in which you can be correctly described as initially believing that the coin will not land heads more than 90 times.
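A quick numerical check of this claim for the question how often will the coin land heads, using exact binomial probabilities (the threshold t = 0.99 is an illustrative choice, not mandated by the example):

from math import comb

n, t = 100, 0.99
pr = {k: comb(n, k) / 2 ** n for k in range(n + 1)}    # Pr(exactly k heads)

def believed_counts(t):
    """Answers k surviving in B(S) for Q = 'how many heads?' (definition above)."""
    return {k for k in pr if sum(p for p in pr.values() if p > pr[k]) < t}

b = believed_counts(t)
print(min(b), max(b))   # roughly 37 and 63: you believe "not more than 90 heads"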
Our theory predicts this for various natural questions, even for very high thresholds t – for example, the polar question will the coin land heads more than 90 times or the slightly more fine-grained question how often will the coin land heads. But neither of these questions satisfies orthogonality. For example, discovering that the coin lands tails on the first flip will favor `no' over `yes' for the first question, and `51' over `49' for the second question, without ruling out any of these answers. In fact, the only natural question that satisfies orthogonality is the maximally fine-grained question what will the exact sequence of heads and tails be. But all answers to this question are equally likely, and so this question prevents you from having any non-trivial beliefs about what will happen. We conclude that orthogonality should be rejected as a general constraint, even if it will often hold when we are considering a particular case with a limited range of discoveries. R and Π - are thus not fully general principles of belief revision; but counterexamples are likely to be difficult to pin down. orthogonality is also a fruitful principle in that it helps to facilitate comparisons between our framework and other probabilistic theories of belief. Let us now turn to these. § COMPARISONS In this section, we consider two influential probabilistic accounts from the literature, and compare them with our own account. The first can be seen as a version of our account with an additional constraint imposed on probability structures, and validates - but not +; the second can be seen as defining belief from probability structures in a related but different way, and validates + but not -. §.§ A Stability Theory The first theory we want to consider is inspired by Leitgeb's stability theory of belief <cit.>. The guiding idea behind this theory is a probabilistic analogue of - that Leitgeb calls the Humean thesis. However, despite the `stability' moniker, the constraints imposed by Leitgeb's theory are synchronic ones relating probabilities, partitions, and thresholds at a single time. So both to facilitate comparison with our framework, and to be (in our view) more faithful to its motivating idea, we will consider a strengthening of Leitgeb's theory according to which the requirements it imposes on one's beliefs prior to a discovery continue to hold after one has made that discovery. We can then interpret the view as proposing the following constraint on probability structures:[Our stability strengthens Leitgeb's theory in two ways: first, by identifying the threshold that characterizes the minimal probability of anything one believes with the threshold in terms of which stability is defined, and, second, by not allowing this threshold to be different for different possible bodies of evidence. It also departs from his formulation in quantifying over ℰ rather than {⋃ Y: Y⊆ Q}; however, we read him as identifying ℰ with {⋃ Y: ∅≠ Y⊆ Q}, so this is not a substantive departure.] stability: For all E∈ℰ and X⊆ Q, if Pr(⋃ X)≥ t and E∩⋃ X ≠∅, then Pr(⋃ X|E)≥ t. We then have the following results: - is valid in probability structures satisfying stability and orthogonality. But + can still fail; and - can fail in structures satisfying stability but not orthogonality.
This illustrates how a kind of qualitative stability of belief can be secured by a kind of probabilistic stability (given orthogonality), without entailing the full strength of AGM.[Leitgeb <cit.> describes his theory as compatible with AGM (and thus with +) since, upon getting new evidence, one may adopt a different, higher threshold than before. But doing so is in no way required by the demands of stability. ] We reject stability because we reject - (even in cases where orthogonality holds), and along with it the informal idea that rational belief should be stable in anything like the way that Leitgeb claims it should be. stability also places implausible constraints on what agents can believe at a given time. For example, <cit.> show, in effect, that in Flipping for Heads stability entails that the only way to have any non-trivial beliefs about how many times the coin will be flipped is to believe that it will be flipped only once. (This argument depends only on the symmetries of the example, and doesn't depend on whether the coin is fair, biased towards heads, or biased towards tails.) See also <cit.> and <cit.>. §.§ The Tracking Theory Lin and Kelly <cit.> defend a theory which (for reasons we can't explain here) they call the `tracking theory' of belief. This theory can be seen as an alternative way of defining belief in probability structures, with the parameter t playing a rather different role. Put informally, a state s is compatible with your LK-beliefs if there is no answer to Q that is more than 1/t times more likely than [s]_Q. Formally: B_LK(E)={s∈ E: (∀ q ∈ Q) (Pr_E([s]_Q) ≥ t× Pr_E(q))} In many cases – such as Flipping for Heads – the subject will have similar beliefs according to our theory and according to Lin and Kelly's (provided t is chosen judiciously: low values of t for Lin and Kelly correspond to high values of t for us). However, there are important structural differences between the theories. In particular, LK-beliefs are sensitive only to local comparisons of probability between particular answers, while beliefs as we understand them depend also on the probabilities of sets of answers. A consequence of this locality is that, as Lin and Kelly note, their theory validates a reasonably strong theory of belief revision (assuming orthogonality, which they essentially build in): +, -, R, R, and Π - are all valid for LK-belief on the class of probability structures satisfying orthogonality. The major shortcoming of the tracking theory, in our view, is that it fails to entail threshold. Consider a case like Drawing a Card, in which one state initially has very low probability (.1) but every other state has even lower probability (.017). Then relative to a fine-grained question such as is the deck fair and which card will you draw, you will LK-believe that the deck is a trick deck even for reasonably low values of t (such as .2). But this belief is only .1 likely on your evidence! And we can, of course, make the case more extreme by increasing the number of distinct cards in the fair deck; so the believed proposition can be arbitrarily improbable for any fixed value of t. One might defend the tracking theory against such cases by insisting that we choose a more coarse-grained question; while the theory still fails to entail threshold, this response at least prevents it from recommending the extreme violations just discussed. However, moving to coarser-grained questions is often in conflict with orthogonality.
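The local character of LK-belief, and the resulting threshold violation, are easy to see in code. A sketch, using the same toy encoding as before (states, answer cells, a prior dict); note that the normalizing constant Pr(E) cancels on both sides of the defining inequality.

def lk_belief_set(E, Q, prior, t):
    """B_LK(E): keep s if no answer is more than (1/t) times likelier than [s]_Q."""
    def cell_mass(s):
        cell = next(q for q in Q if s in q)
        return sum(prior[x] for x in cell & E)
    answer_masses = [sum(prior[x] for x in q & E) for q in Q]
    return {s for s in E if all(cell_mass(s) >= t * m for m in answer_masses)}

# Fine-grained Drawing a Card: trick-deck state T has prior .1,
# each fair-deck state .9/52, roughly .017
states = [f"F{i}" for i in range(1, 53)] + ["T"]
prior = {s: 0.9 / 52 for s in states[:-1]} | {"T": 0.1}
Q = [{s} for s in states]                    # maximally fine-grained question
S = set(states)
print(lk_belief_set(S, Q, prior, t=0.2) == {"T"})
# True: LK-belief in the trick deck, even though Pr(trick deck) is only .1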
Moreover, the reasons we gave previously for rejecting orthogonality as a general constraint apply to the tracking theory as well: just like our theory, the tracking theory will make implausibly skeptical predictions in One Hundred Flips unless combined with an orthogonality-violating question such as how many heads will there be. Without orthogonality, the dynamics of LK-belief are substantially less constrained: + and - are valid for LK-belief on the class of probability structures. R and Π - can both fail in such structures. + is then the only principle valid for LK-beliefs but not for beliefs as we understand them. Moreover, without orthogonality, the tracking theory violates a further principle that holds for belief as we understand it (assuming we restrict to probability structures with t>.5). Consider the following variant of Drawing a Card (taken from <cit.>, who also makes parallel observations as an objection to Levi's <cit.> account of belief): Drawing a Card v.2 You are holding a deck which could be either a `fair' deck of 52 different cards, or one of 52 different `trick' decks that just contain the same card 52 times. Given your background evidence, the probability that you are holding the fair deck is 1/5, with the remaining 4/5 distributed evenly across the 52 trick decks. You are about to draw and turn over one card from your deck. Let us assume that Q is which of the 53 possible decks am I holding and t>.25. According to the tracking theory, you initially believe that you hold the fair deck, but after drawing a card you believe that you are holding the relevant trick deck. So we have a failure of the following principle, which says (roughly) that if you're sure that, whatever you're about to discover, you'll believe that a given proposition is false, then you shouldn't currently believe that the proposition is true: Π R If Π is a partition any member of which you could discover, there is a p∈Π such that you shouldn't reverse any of your beliefs upon discovering p. If Π⊆ℰ is a partition of E, then B(E) ∩ B(E∩ p)≠∅ for some p∈Π. By contrast, if belief requires probability over a threshold greater than .5 (as it does on our account), this principle cannot fail.[Failures of Π R are to be expected for certain notions of belief that are weaker than the one we are operating with here. For example, your `best guess' about what deck you are holding plausibly does change no matter what card you draw; and arguably what we `believe' (in ordinary English) often aligns with our best guesses. See <cit.> and <cit.> for discussion.] Overall, then, we see few advantages for the tracking theory over our own. Given orthogonality, which Lin and Kelly essentially build into their formalism, the tracking theory offers a stronger theory of belief revision. However, the theory violates threshold, often in dramatic ways. Moreover, to make reasonable predictions in cases like One Hundred Flips, both theories need to appeal to coarse-grained questions that conflict with orthogonality. Having done so, both theories invalidate many principles of belief revision, although the details differ slightly (with our theory invalidating + and Lin and Kelly's invalidating Π R).
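Drawing a Card v.2 can also be checked mechanically, verifying that LK-belief reverses no matter which card is drawn. The sketch repeats the lk_belief_set definition from above to stay self-contained; the state encoding is an assumption made for the example.

def lk_belief_set(E, Q, prior, t):
    def cell_mass(s):
        cell = next(q for q in Q if s in q)
        return sum(prior[x] for x in cell & E)
    answer_masses = [sum(prior[x] for x in q & E) for q in Q]
    return {s for s in E if all(cell_mass(s) >= t * m for m in answer_masses)}

# States (deck, card drawn): fair deck has mass 1/5 (card drawn uniformly);
# each of the 52 trick decks has mass (4/5)/52 and forces its own card.
states = [("fair", c) for c in range(52)] + [(f"trick{c}", c) for c in range(52)]
prior = {("fair", c): (1 / 5) / 52 for c in range(52)}
prior |= {(f"trick{c}", c): (4 / 5) / 52 for c in range(52)}
Q = [{s for s in states if s[0] == "fair"}] + \
    [{(f"trick{c}", c)} for c in range(52)]          # which deck am I holding?
S, t = set(states), 0.3

print(lk_belief_set(S, Q, prior, t) == {s for s in states if s[0] == "fair"})
# True: you initially LK-believe the deck is fair
print(all(lk_belief_set({s for s in states if s[1] == c}, Q, prior, t)
          == {(f"trick{c}", c)} for c in range(52)))
# True: after any draw you LK-believe the matching trick deck, so Pi-R fails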
We think that distinguishing one's evidence from one's beliefs that go beyond one's evidence offers a productive way of thinking about nonmonotonic consequence, and that the logic resulting from our framework contrasts in interesting ways with the one resulting from Lockean theories of belief (explored in <cit.>). The second direction concerns constraints on ℰ. Consider, for example, the Monty Hall problem, in which it is crucial that when one gets new evidence about one's environment, one also gets evidence that one has gotten such evidence. We argue in <cit.> that such cases motivate a nestedness requirement on ℰ: if two possible bodies of evidence are mutually consistent, then one entails the other. This requirement induces new subtleties in the resulting nonmonotonic logic. A third question for future work concerns what happens when probability structures are generalized by making the relevant question a function of one's evidence. <cit.> motivate this generalization, in order to vindicate certain judgments about a family of examples discussed in <cit.>. We hope to explore these models in future work; one notable feature is that they invalidate - but still validate Π R. § ACKNOWLEDGEMENTS We thank Kevin Dorst, Josh Pearson, and three anonymous referees for TARK for very helpful comments on earlier versions of this material. § PROOFS - and R are valid on the class of probability structures. Since - entails R, it's sufficient to prove the former. We suppose that B(E) ⊆ p, and show that B(E∩ p)⊆ B(E). Note that if s∈ B(E), [s]_Q⊆ B(E)⊆ p. So for any q∈ Q, if Pr([s]_Q|E)≥ Pr(q|E), then also Pr([s]_Q|E∩ p)≥ Pr(q|E∩ p). Contraposing, this means that if Pr(q|E∩ p)≥ Pr([s]_Q|E∩ p) and s∈ B(E), then Pr(q|E)≥ Pr([s]_Q|E), and so q ⊆ B(E). Moreover, since B(E) ⊆ p, Pr(B(E)|E∩ p)≥ Pr(B(E)|E)≥ t. Now, note that B(E∩ p) is the minimal X⊆ E∩ p such that (i) if s ∈ X and Pr(q|E∩ p)≥ Pr([s]_Q|E∩ p) for q∈ Q, then q ⊆ X, and (ii) Pr(X|E∩ p)≥ t. By the above, B(E) satisfies both (i) and (ii); so it contains the minimal such X as a subset. So B(E∩ p)⊆ B(E), as required. -, R, and + can all fail in probability structures. Counter-models are given in the main text. R is valid in probability structures satisfying orthogonality. Suppose that B(E)∩ p ≠∅. Let s∈ B(E)∩ p be such that Pr_E([s]_Q)≥ Pr_E([s']_Q) for any s' ∈ B(E)∩ p. We will show that, given orthogonality, there can be no q∈ Q such that Pr_E∩ p(q)> Pr_E∩ p([s]_Q). It follows that s ∈ B(E ∩ p), thus establishing B(E)∩ B(E∩ p)≠∅. By orthogonality, if q∈ Q and Pr_E∩ p(q)> Pr_E∩ p([s]_Q), then either Pr_E(q)> Pr_E([s]_Q) or else Pr_E∩ p([s]_Q)=0. But s∈ E∩ p, so Pr_E∩ p([s]_Q)≠ 0. So suppose Pr_E(q)> Pr_E([s]_Q). By the way s was chosen, it follows that q∩ (B(E)∩ p) = ∅. But q∩ p ≠∅, since Pr_E∩ p(q)>0. So q∩ B(E) = ∅. But since s ∈ B(E), this contradicts the assumption that Pr_E(q)> Pr_E([s]_Q). Π - can fail in probability structures. But it is valid in probability structures satisfying orthogonality. To see that Π - can fail, consider S={s_1,s_2,s_3,s_4,s_5,s_6} ℰ={S,{s_1,s_3,s_5},{s_2,s_4,s_6}} Q={{s_1,s_2},{s_3,s_4},{s_5},{s_6}} Pr is uniform t=.65 Let p={s_1,s_3,s_5} and Π={p, S∖ p}. Then B(S∩ p)={s_1,s_3,s_5}⊈{s_1,s_2,s_3,s_4} = B(S) and B(S∩ S∖ p)={s_2,s_4,s_6}⊈B(S). Now suppose ⟨ S,ℰ,Q,Pr,t⟩ satisfies orthogonality. To show that Π - holds, we suppose that B(E∩ p_i) ⊈B(E) for each p_i∈Π, and deduce a contradiction. For each i, let s_i∈ B(E∩ p_i)∖ B(E) be such that Pr_E∩ p_i([s_i]_Q)≥ Pr_E∩ p_i([s]_Q) for every s ∈ B(E∩ p_i)∖ B(E).
Since s_i ∈ B(E∩ p_i), Pr_E∩ p_i({s:Pr_E∩ p_i([s]_Q)≥ Pr_E∩ p_i([s_i]_Q)})< t. By orthogonality, Pr_E([s]_Q)≥ Pr_E([s_i]_Q) entails that either Pr_E∩ p_i([s]_Q)≥ Pr_E∩ p_i([s_i]_Q) or Pr_E∩ p_i([s]_Q)=0. So Pr_E∩ p_i({s:Pr_E([s]_Q)≥ Pr_E([s_i]_Q)})=Pr_E∩ p_i({s:Pr_E∩ p_i([s]_Q)≥ Pr_E∩ p_i([s_i]_Q)})< t. Now let k be such that, for every i, Pr_E([s_k]_Q)≥ Pr_E([s_i]_Q). Then {s:Pr_E([s]_Q)≥ Pr_E([s_k]_Q)}⊆{s:Pr_E([s]_Q)≥ Pr_E([s_i]_Q)}, and so Pr_E∩ p_i({s:Pr_E([s]_Q)≥ Pr_E([s_k]_Q)})≤ Pr_E∩ p_i({s:Pr_E([s]_Q)≥ Pr_E([s_i]_Q)})<t for every i. But then by the law of total probability, Pr_E({s:Pr_E([s]_Q)≥ Pr_E([s_k]_Q)})<t, contradicting the assumption that s_k ∉ B(E). - is valid in probability structures satisfying stability and orthogonality; but + can fail in such structures. Moreover, - can fail in probability structures satisfying stability in which orthogonality fails. To see that - holds, note that B(E∩ p) is the minimal X⊆ E∩ p such that (i) if s ∈ X and Pr(q|E∩ p)≥ Pr([s]_Q|E∩ p) for q∈ Q, then q ⊆ X, and (ii) Pr(X|E∩ p)≥ t. Then if B(E)∩ p ≠∅, Pr(B(E)∩ p|E ∩ p)=Pr(B(E)|E∩ p)≥ t by stability, so B(E)∩ p meets condition (ii). Moreover, it meets condition (i) by orthogonality. So B(E)∩ p contains the minimal X meeting (i) and (ii) as a subset. So B(E∩ p)⊆ B(E)∩ p ⊆ B(E), as required. To see how + can fail, let S= {a,b,c}, ℰ={S,{a,b}}, Q={{s}:s∈ S}, Pr({a})=.9, Pr({b})=.09, Pr({c})=.01, and t=.9001. This structure satisfies stability. + fails, since B(S)={a,b}⊈B({a,b})={a}. To see how - can fail in the absence of orthogonality, consider a probability structure in which Q={A,B,C}, ℰ={S,E}, Pr(A)=1/2, Pr(B)=1/4+ϵ, Pr(C) = 1/4-ϵ, Pr_E(A)=Pr_E(B)=Pr_E(C)=1/3, and t=1/2+ϵ. stability holds, but - fails: B(S)∩ E≠∅, but B(E)=E⊈B(S)= A∪ B. +, -, R, R, and Π - are valid for LK-belief on the class of probability structures satisfying orthogonality. For R, see the proof of Proposition 3; for + and - (and hence R), see the proof of Proposition 7; for Π -, see <cit.>. + and - are valid for LK-belief on the class of probability structures. R and Π - can both fail in such structures. The failures of R and Π - follow from the failure of Π R described in the main text. Suppose that B_LK(E)⊆ p. We will show that B_LK(E∩ p)=B_LK(E), thus establishing + and -. Since B_LK(E)⊆ p, we have that if s ∈ B_LK(E), then [s]_Q⊆ p. So if s∈ B_LK(E) then Pr([s]_Q|E∩ p)/Pr(q|E∩ p)≥Pr([s]_Q|E)/Pr(q|E) for any q∈ Q such that Pr(q|E∩ p)>0. So if s∈ B_LK(E), then for any q∈ Q with Pr(q|E∩ p)>0, Pr([s]_Q|E∩ p)/Pr(q|E∩ p)≥Pr([s]_Q|E)/Pr(q|E)≥ t. And if Pr(q|E∩ p)=0, then trivially Pr([s]_Q|E∩ p)≥ t× Pr(q|E∩ p). So s∈ B_LK(E ∩ p). Moreover, if s∉ B_LK(E), then t>0 and there is a q∈ Q such that (i) Pr(q|E) × t > Pr([s]_Q|E) and (ii) q∩ B_LK(E) ≠∅. Assuming Pr([s]_Q|E ∩ p)>0 then, by the above, Pr(q|E∩ p)/Pr([s]_Q|E ∩ p)≥Pr(q|E)/Pr([s]_Q|E) > 1/t. In that case s∉ B_LK(E∩ p). And if Pr([s]_Q|E ∩ p)=0, then also t× Pr(q|E∩ p)> Pr([s]_Q|E ∩ p). So in that case too s∉ B_LK(E∩ p). So B_LK(E∩ p)=B_LK(E), as required.
http://arxiv.org/abs/2307.04012v1
20230708164551
Learning Together: Towards foundational models for machine learning interatomic potentials with meta-learning
[ "Alice E. A. Allen", "Nicholas Lubbers", "Sakib Matin", "Justin Smith", "Richard Messerly", "Sergei Tretiak", "Kipton Barros" ]
physics.chem-ph
[ "physics.chem-ph", "physics.comp-ph" ]
Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States; Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States; Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States; Center for Integrated Nanotechnologies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States; Nvidia Corporation, Santa Clara, CA 9505, United States

Learning Together: Towards foundational models for machine learning interatomic potentials with meta-learning

The development of machine learning models has led to an abundance of datasets containing quantum mechanical (QM) calculations for molecular and material systems. However, traditional training methods for machine learning models are unable to leverage the plethora of data available as they require that each dataset be generated using the same QM method. Taking machine learning interatomic potentials (MLIPs) as an example, we show that meta-learning techniques, a recent advancement from the machine learning community, can be used to fit multiple levels of QM theory in the same training process. Meta-learning changes the training procedure to learn a representation that can be easily re-trained to new tasks with small amounts of data. We then demonstrate that meta-learning enables simultaneous training to multiple large organic molecule datasets. As a proof of concept, we examine the performance of an MLIP refit to a small drug-like molecule and show that pre-training potentials to multiple levels of theory with meta-learning improves performance. This difference in performance can be seen both in the reduced error and in the improved smoothness of the potential energy surface produced. We therefore show that meta-learning can utilize existing datasets with inconsistent QM levels of theory to produce models that are better at specializing to new datasets. This opens new routes for creating pre-trained, foundational models for interatomic potentials.

Kipton Barros
August 12, 2023
===================

§ INTRODUCTION
Machine learning is fundamentally changing and expanding our capabilities for modeling chemical and materials systems <cit.>. A growing array of properties has been successfully predicted with machine learning models, from materials' band gaps and formation energies to molecular energies and bond orders <cit.>. The development of machine learning models for various applications has involved the creation of a large number of datasets containing quantum-mechanical calculations at different fidelities (levels of theory) <cit.>. However, incorporating this multi-fidelity information into machine learning models remains challenging.
In this work, we show that multiple datasets can be used to fit a machine learning model, even if the datasets were calculated with many varying QM levels of theory. To overcome this challenge, we incorporate meta-learning techniques into the training process and subsequently demonstrate improvements in accuracy for multiple applications. The aim of meta-learning is to use a wide collection of data to train a machine learning model that can then be easily re-trained to specialized tasks, and we demonstrate the applicability of the meta-learning method to MLIPs.

Within the broader effort to bring machine learning into molecular and materials modelling, particular attention has been paid to MLIPs <cit.>. Accurate atomistic simulations rely on interatomic potentials that closely recreate the interactions present between atoms and molecules <cit.>. Recreating these interactions involves a trade-off between accuracy and computational cost, with quantum mechanical techniques offering highly accurate simulations whilst classical force fields are fast and capable of modelling much larger systems over long timescales <cit.>. Within the last decade, MLIPs have increasingly been seen as a method that could provide a model that is both fast and accurate <cit.>. However, the development of MLIPs that are transferable to unseen organic molecules requires datasets that cover a large fraction of chemical space. This requirement has led to the production of numerous datasets <cit.>. These datasets contain the quantum mechanical (QM) energies and forces of millions of structures spanning large regions of chemical space. However, the QM methods used to calculate the energies and forces vary considerably. As different QM methods result in different potential energy surfaces, this inconsistency in QM techniques limits the extent to which datasets can be used together to fit potentials.

Numerous organic molecule datasets have been created for training MLIPs <cit.>. However, a consensus on the best QM techniques to employ to create these datasets has never been reached, as a compromise between accuracy and computational cost must always be considered when performing QM calculations. This lack of consensus has led to a variety of different software, methods, basis sets and exchange-correlation functionals being used. For example, the QM7-x and ANI-1x datasets both contain energies and forces for millions of small organic molecules. However, QM7-x was calculated using the PBE0 exchange-correlation functional with many-body dispersion whilst ANI-1x was calculated with the ωB97x functional and 6-31G* basis set <cit.> and does not include dispersion effects. Therefore, these two datasets describe similar, but slightly different, potential energy surfaces. If both datasets were joined together to train a potential, then problems would likely arise as contradictory information is present. For example, identical structures at different levels of theory can have different energies and forces. Whilst datasets from different sources have been fit together without further refinement <cit.>, this approach does not account for differences in the interactions described. Techniques exist in the machine learning literature to address these differences between potential energy surfaces.

Previous work on fitting MLIPs to multiple datasets is limited. In Ref. 
, a transferable molecular potential was first trained to ∼ 5 million density functional theory (DFT) training points before being refit, with frozen parameters, to 0.5 million CCSD(T)* energies. This technique, known as transfer learning, has been used in several works <cit.>. The advantage of using transfer learning for training MLIPs is that it requires fewer calculations at a higher, and more expensive, level of theory. However, this kind of transfer learning technique, freezing neural network (NN) parameters, is limited to just two datasets. If we want to use multiple existing datasets, and expand the size and variety of training data, then new methods must be found.

Fortunately, this problem is being explored in a branch of machine learning research known as meta-learning <cit.>. Meta-learning seeks to build a model that, although not specialized to any particular task, can be quickly re-trained to many new tasks, where a task is a specific learning problem. Furthermore, this retraining can be effective even if the amount of new data is limited <cit.>. For transferable MLIPs, the concept of tasks naturally lends itself to quantum mechanical datasets calculated with different methods. By using meta-learning techniques, we will show how information from multiple levels of theory can be incorporated together.

We begin by investigating training data with multiple levels of theory for an individual aspirin molecule and for the QM9 dataset (which contains over 100,000 molecules in their equilibrium configuration). With these systems, the problems associated with naively combining datasets together are seen and the benefits of meta-learning are clearly observed in the test set errors. We then move on to combining several large molecule datasets to pre-train an MLIP. Combining large organic datasets to fit MLIPs has never previously been attempted. Subsets, chosen using active learning, of six existing datasets (ANI-1x, GEOM, QMugs, QM7-x, Transition-1x and the QM9 dataset from Ref. ) were used to fit an adaptable potential using meta-learning; see Fig. <ref> for a visualization of the space the datasets cover <cit.>. Figure <ref> demonstrates the increase in chemical space possible when multiple datasets are combined together. The benefits of pre-training are then shown by retraining to the 3BPA molecule and testing various properties. These tests show that pre-training models using meta-learning produces a more accurate and smoother potential, with enhanced accuracy and generalization capabilities.

Training machine learning models on large amounts of data before re-training to a specific task is related to the concept of foundational models <cit.>. This concept has been used to create large language models, e.g., GPT-4, which have been pre-trained to extremely large datasets before being fine-tuned to specific tasks, e.g., ChatGPT, which is fine-tuned for conversational usage <cit.>. Creating foundational models allows a wide range of information to be encoded before specialisation. With meta-learning techniques, we can now pre-train interatomic potentials to numerous large datasets, and this is a step towards foundational models for MLIPs: MLIPs that could be quickly re-trained to diverse molecular systems.

The number of QM datasets has grown rapidly over the last few years. However, a major bottleneck in exploiting this information has been the absence of methods that can effectively combine all of this information.
In this work, we have overcome this limitation by exploiting techniques which enable the incorporation of datasets with different fidelities. Whilst we focus on MLIPs, these techniques are applicable to the wide range of predictive models that exist for material and molecular property prediction. By showing how meta-learning can be applied, we aim to encourage researchers to fully utilize the vast amount of existing data that the scientific community has already collected.

§ METHODS
§.§ Meta-Learning Algorithm
Meta-learning is an area of machine learning concerned with improving the learning process to produce models that can easily adapt to new problems <cit.>. A key component of meta-learning is the concept of different `tasks'. Tasks are datasets with similar properties but slight differences. For example, if we were interested in classifying cats and dogs, a similar task might be to classify lions and bears. The task is not the same, but we would expect fundamental similarities in the model needed to perform the classification. By using a meta-learning algorithm to learn multiple different tasks, less data will be required when a new learning problem is introduced. The objective of meta-learning algorithms is to train a model that can generalize more easily to new data <cit.>. We will use meta-learning to fit multiple different QM datasets with slightly different properties. To our knowledge, meta-learning for MLIPs has not been previously carried out, although it has been used in other areas of science <cit.>.

The meta-learning algorithm we have chosen to fit multiple datasets for MLIPs is called Reptile <cit.>. Reptile works by repeatedly sampling a task (a dataset), performing a limited number of optimization steps on the task and then updating the weights of the machine learning model towards the new weights. Reptile was chosen over other meta-learning algorithms such as MAML <cit.> as Reptile is simpler to implement and therefore more likely to be adopted by the wider community. A comparison of methods such as MAML for interatomic potentials will therefore be left to future work. Reptile is described in Algorithm <ref>, with a visual illustration also given; a minimal code sketch of the outer loop is given below. The algorithm works by separating the training data into distinct learning problems (tasks). An individual task is selected and multiple optimization steps are performed. The parameters of the model are then updated. A new task is then selected and the procedure is repeated multiple times. This moves the model to a region of parameter space where it can readily move between the different datasets present. Throughout this work, the k=1 result is used as a comparison point. This is because when k=1 the algorithm becomes equivalent to stochastic gradient descent on the expected loss over all the training tasks <cit.>. This is referred to as joint training in Ref. . At k=1, the algorithm is not expected to account for differences in the QM theory but still uses all the information present from the datasets.

§.§ Interatomic Potential
In this work, we have used the NN architecture implemented in torchANI with the same structure as the ANI-1x model <cit.>. However, the meta-learning techniques described are not specific to this form of model, and there is no reason that they could not be applied to other machine learning models that employ similar iterative solvers. The hyperparameters used for the ANI potential are the same as those used for previous training to the ANI-1x and ANI-1ccx datasets; see Ref.  for more details.
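The following is the promised sketch of the Reptile outer loop. It is a schematic illustration only: the toy regression model and tasks below are hypothetical stand-ins for the torchANI potential and the QM datasets, and the hyperparameter values are illustrative rather than those used in this work.

```python
import torch

def make_task(shift):
    """A toy 'level of theory': targets y = sin(x) + shift."""
    x = torch.linspace(-3, 3, 128).unsqueeze(1)
    return x, torch.sin(x) + shift

tasks = [make_task(s) for s in (0.0, 0.1, -0.1)]  # stand-ins for QM datasets

model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
loss_fn = torch.nn.MSELoss()

k = 10           # inner steps per task; k = 1 reduces to joint training
epsilon = 1.0    # outer step size (epsilon = 1 is the value used here)

for _ in range(200):                              # outer iterations
    x, y = tasks[torch.randint(len(tasks), (1,)).item()]   # sample a task
    theta0 = [p.detach().clone() for p in model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    for _ in range(k):                            # k optimization steps
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    with torch.no_grad():                         # Reptile outer update:
        for p, p0 in zip(model.parameters(), theta0):
            p.copy_(p0 + epsilon * (p - p0))      # move parameters towards
                                                  # the task-adapted weights
```

Note that with ϵ=1 the outer update simply keeps the task-adapted weights, so the procedure amounts to cycling between tasks in blocks of k steps; the meta-learning behaviour then comes entirely from choosing k>1, consistent with the k=1 baseline reducing to joint training.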
§.§ Datasets
§.§.§ Aspirin
Aspirin structures were produced by molecular dynamics simulations at 300K, 600K and 900K. Density Functional based Tight Binding (DFTB) was used to perform the MD simulations, and a total of 400 structures were created for each temperature. QM calculations of the energies and forces were then performed on these structures with three levels of theory: DFT with the ωB97x exchange-correlation functional and 6-31G* basis set, DFT with the Becke 3-parameter Lee–Yang–Parr (B3LYP) exchange-correlation functional and def2-TZVP basis set, and Hartree-Fock with the def2-SVP basis set, for 300K, 600K and 900K respectively. These datasets were used to pre-train a molecular potential. The pre-trained potential was then refit to a new dataset of MD configurations at the Møller–Plesset (MP2) level of theory with the def2-SVP basis set (a more accurate level of theory). The training dataset for refitting used 400 MD configurations sampled at 300K whilst the test set contained structures at 300K, 600K and 900K. A batch size of 8 was used for training.

§.§.§ QM9
The QM9 dataset contains over 100,000 equilibrium structures for small organic molecules with up to 9 heavy atoms <cit.>. In Ref. , the QM9 dataset was recalculated with 76 different exchange-correlation functionals and 3 basis sets <cit.>.

§.§.§ Multiple Organic Molecules
Seven separate datasets were chosen to fit an organic molecule potential that could be easily re-trained to new data. The seven datasets used for meta-learning were chosen to cover both diverse regions of chemical space and multiple levels of theory, including the accurate recreation of dispersion effects. The chemical space covered included reactive paths and biologically and pharmacologically relevant structures. Whilst ANI-1x does cover a large number of conformations for organic molecules, it has limitations. This is demonstrated by Fig. <ref> and Fig. S1. Figure <ref> demonstrates how the additional datasets increase the size of the molecules and range of energies included. The E_0 energy is calculated using linear fitting and then subtracted from each dataset (a small code sketch of this fitting step is given below). The minimum energy for each dataset is then shifted to zero. Whilst it is not covered in this work as we use the ANI potential, including larger molecules in datasets may be increasingly important for newer generations of interatomic potentials that include message passing and describe longer length scales <cit.>.

Figure S1 shows the distribution of uncertainty for the ANI-1x potential across the dataset space. Whilst ANI-1x dz, ANI-1x tz, GEOM and QMugs have similar probability distributions, QM7-x and Transition-1x contain larger uncertainties. Transition-1x contains reactive structures that are not contained in the original dataset, and therefore higher uncertainties are expected. For QM7-x, there are also higher uncertainties, and this may be due to the different sampling techniques used. A property that is not shown in Table 1 is the software used for the DFT calculations. Even when the same level of theory is used, we can expect different software to give slightly different results. This will cause further discrepancies between the datasets as a variety of codes are employed. For example, although Transition-1x and ANI-1x are calculated at the same level of theory, Transition-1x is calculated with the ORCA program whilst ANI-1x is calculated with Gaussian <cit.>.
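As referenced above, the E_0 subtraction can be implemented as an ordinary least-squares fit of per-element reference energies, which are then removed from the total energies before the minimum is shifted to zero. The sketch below is our illustration of this step; the compositions and energies are made-up numbers, not data from any of the datasets discussed.

```python
import numpy as np

elements = ["H", "C", "N", "O"]
# counts[i, j] = number of atoms of elements[j] in structure i (fake data)
counts = np.array([[8, 6, 0, 2],
                   [5, 4, 1, 0],
                   [10, 8, 0, 1],
                   [3, 2, 1, 1]], dtype=float)
energies = np.array([-345.2, -210.9, -420.1, -160.3])  # fake totals (eV)

# Linear fit of atomic reference energies: energies ~ counts @ e0
e0, *_ = np.linalg.lstsq(counts, energies, rcond=None)

shifted = energies - counts @ e0   # remove the additive atomic baseline
shifted -= shifted.min()           # shift the dataset minimum to zero
print(dict(zip(elements, np.round(e0, 3))))
print(np.round(shifted, 3))
```

Fitting E_0 separately for each dataset removes most of the arbitrary additive offset between levels of theory, which is why the energy ranges of the different datasets become comparable in Fig. <ref>.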
The individual description and justification for including each dataset used is as follows:

* QM9 - This dataset contains a diverse range of 76 functionals and 3 basis sets for small equilibrium organic molecules <cit.>.
* ANI-1x - This is a large dataset of small (up to 8 heavy atoms) organic molecules generated with active learning methods <cit.>.
* QMugs - This dataset includes the largest molecules, with up to 100 heavy atoms. It specializes in including drug-like molecules <cit.>.
* GEOM - This is the largest dataset and contains both large molecules and drug-like molecules <cit.>.
* QM7-x - This is also a large dataset of small (up to 7 heavy atoms) organic molecules, but has dispersion accurately described with many-body dispersion <cit.>.
* Transition-1x - This dataset includes minimum energy paths for 12,000 reactions <cit.>.
* ANI-1ccx - This dataset contains coupled cluster level theory calculations for a subset of the ANI-1x dataset <cit.>.

Other datasets considered for inclusion include SPICE, PubChemQC-PM6 and Tensormol <cit.>. However, with the existing datasets a sufficient representation of chemical space is covered. It is also worth noting that retraining to recreate the specific properties of the excluded datasets would be quickly possible with the meta-learning potential.

§.§ Meta-learning Hyperparameter Optimization
There are three parameters in the Reptile algorithm. These control the number of steps (k) taken at each optimization step, how the parameters are updated (ϵ) from the task's individual NN parameters, and the maximum number of epochs used for retraining. The number of epochs was investigated to see whether restricting the training improved accuracy by ensuring the potential remained close to the meta-learned potential, or if longer retraining improved results. For a detailed discussion of the hyperparameters chosen when fitting to the seven separate datasets, see Section S1.2. The ϵ value used throughout this work is ϵ=1, whilst the k value is changed depending on the problem. The maximum number of epochs used for retraining for the meta-learning algorithm with k>1 is restricted to 150 epochs.

§.§ Stages of Fitting for the Organic Molecule datasets
In the first iteration, 100,000 structures were taken randomly from the ANI-1x, QMugs, GEOM, QM7-x and Transition-1x datasets. For QM9, 10,000 structures were used for each level of theory. This is restricted as 228 levels of theory exist, and each theory level samples different structures in the QM9 dataset. After the first iteration, the highest-error structures were added to the next iteration <cit.>. The cutoffs used for adding structures are described in SI 1.6. This process was repeated 3 times. A diagram of the process is shown in Fig. S3.

§ RESULTS
§.§ A Simple Case Study on Aspirin
As the initial test case we investigate the performance of meta-learning on a dataset containing a single aspirin molecule. Aspirin structures were produced by molecular dynamics simulations at 300K, 600K and 900K. The QM energies and forces were then calculated at three different levels of theory: two distinct DFT functionals, and Hartree-Fock. This created three different datasets, with each temperature corresponding to a different level of theory. These three datasets were used to pre-train a molecular potential to the energies and forces of 1,200 structures. The pre-trained potential was then refit to a new dataset of 400 MD configurations at the MP2 level of theory from the 300K simulation.
The change in the RMSE of the forces with the value of k used in the meta-learning algorithm is shown in Fig. <ref>. The k parameter controls the number of steps taken towards each dataset. As k is increased the speed of the algorithm also increases, and this is an additional consideration in choosing the optimal value. In the limit of k →∞ the algorithm would correspond to iterative training to each dataset and then transfer learning to a new task. However, while this may work for small problems, this approach is impractical for large datasets. Figure <ref> shows that as the k parameter is increased the error in the test set decreases, with the minimum error at around k=400. There is therefore an improvement in test set error in comparison to both no pre-training (5.35 ± 0.41 kcal/mol/Å) and k=1 (3.38 ± 0.16 kcal/mol/Å). Note that k=1 effectively corresponds to simultaneous training to all tasks. Therefore, when we attempt to combine multiple datasets at different levels of theory, an improvement in performance can be seen when meta-learning is incorporated into the training process.

§.§ Meta-learning many levels of theory using QM9
Next, we move on to the QM9 dataset, which contains multiple different small organic molecules in their equilibrium structures. The QM9 dataset has been calculated at 228 different levels of theory and therefore provides an ideal dataset for analysing meta-learning techniques. We can use this dataset to test whether meta-learning can develop a potential which can be refit to a new level of theory encountered for the QM9 dataset with less data. In order to do this, a subset of the QM9 dataset was used to train a potential to 10,000 molecules, 50 different exchange-correlation functionals and three different basis sets. The potential was then refit to a new exchange-correlation functional, which had not been previously encountered, and the performance of this new model was assessed and compared to no pre-training and k=1 meta-learning.

The test set error for the meta-learning potential refit to a new level of theory in the QM9 dataset is shown in Fig. <ref>. Pre-training the potential greatly improves the test set error for this case. In Fig. S9 a comparison between meta-learning and k=1 is shown, and we see that k=1 does not perform as well as k=10. This is because it does not account for the discrepancies in the interactions present. These results show that even when the number of levels of theory is relatively large, at 150, and multiple molecules are present, meta-learning improves the test set error over k=1.

§.§ Making the most of scarce data at CCSD(T) level
We will now move to the datasets used to train transferable interatomic potentials. As a starting example, we will look at pre-training to the multiple levels of theory (ωB97x/6-31G* and ωB97x/def2-TZVPP) contained in the ANI-1x dataset <cit.>. We will then retrain to the ANI-1ccx dataset <cit.>. Figure <ref> shows the distribution in error when pre-training to multiple levels of theory with meta-learning and k=1. The RMSE is 3.30 ± 0.10 kcal/mol and 2.39 ± 0.00 kcal/mol for k=1 and meta-learning respectively. Therefore, we can again see that meta-learning with a higher k value improves results compared to k=1. The comparative results for direct training to ωB97x/6-31G* and ωB97x/def2-TZVPP and then transfer learning to CCSD(T) are 2.20 ± 0.01 kcal/mol and 2.09 ± 0.02 kcal/mol respectively. Therefore, in this case fitting to multiple datasets does not improve results over fitting to just one.
This is in part because both datasets contain the same structures and cover the same chemical and configurational space. The potential trained to multiple organic datasets was also refit to the CCSD(T) dataset, and the benefits of meta-learning over k=1 were also seen, with errors of 2.89± and 3.32± respectively. However, this is notably higher than training to the ANI-1x dataset alone. The CCSD(T) dataset is a subset of the ANI-1x dataset and contains identical structures. For these cases, adding additional data in other areas of chemical space may not improve results.

§.§ Training to multiple transferable organic molecule datasets
Numerous datasets have been created that contain quantum mechanical calculations for organic molecules. However, as these datasets use different levels of theory and software, combining the information from different datasets requires advanced training techniques. By using meta-learning, a pre-trained model was created that uses information from seven different datasets. This is the first instance, to our knowledge, of combining information from multiple organic molecule datasets in this manner. We have already seen that meta-learning can improve results compared to k=1 when multiple datasets are used. We will now use the pre-trained model to explore the benefits of pre-training with meta-learning in comparison to no pre-training and k=1 when retraining to a single molecular system. The pre-trained model was re-trained to the 3BPA dataset taken from Ref. , and various properties were explored <cit.>.

The first properties we will analyze are the energy and force RMSEs. The force errors for a dataset taken from MD at 1200K are shown in Fig. <ref>, with the energy and force learning curves for datasets at 300K, 600K and 1200K given in Fig. S4. From these graphs, the improved performance of pre-training using the meta-learning approach (with three passes through the dataset) over both k=1 and no pre-training can be seen for energies and forces. Therefore, just by adapting the training scheme, with no change in the model architecture or the dataset itself, consistent improvements in accuracy can be seen with meta-learning. The importance of the training method used has previously been seen in Ref. . Here we see how it can improve performance for fitting multiple datasets together. In comparison to when the ANI-1x model is used for pre-training, meta-learning performs slightly better on force errors but slightly worse for energy predictions. Given that the ANI-1x model is fit to the same level of theory as the 3BPA dataset, the performance of the meta-learning potential is encouraging. However, it is known that RMSE errors alone are not enough to verify the performance of a potential <cit.>. We will therefore examine additional properties.

The 3BPA molecule has three central dihedral angles, which are illustrated in Fig. <ref>. The energy scans along these dihedral angles are shown in Fig. <ref>, with the model refit to the energies and forces of just 62 3BPA conformations. When no pre-training is used, the surface at β=120 significantly over-estimates the high energy point and lacks smoothness. A similar shape is seen for the k=1 potential. However, when meta-learning is used for pre-training, the surface remains noticeably smoother with significantly less over-prediction. When k=1 is used, multiple different potential energy surfaces are combined together in a nonphysical way, which destroys the smoothness of the underlying potential.
The error in the gradient of the 2D energy surface is shown in Fig. <ref>(b) and emphasizes this difference in smoothness. When meta-learning is used, the contradictions between the potential energy surfaces described are reconciled, resulting in a smoother model. When no pre-training or k=1 is used, an additional problem can occur, with the high energy regions at α=0 failing to be recreated for the β=180 and β=150 scans respectively. In contrast, the model pre-trained with meta-learning correctly recreates this behaviour. The results for ANI-1x pre-training are given in Fig. S6.

One advantage of pre-training with multiple datasets over ANI-1x or QM7-x is that reactive systems can be added that are not contained in ANI-1x. To test if this information has been effectively passed to the meta-learning potential, hydrogen bond dissociation for the 3BPA molecule was performed. There is no reactive information contained within the 3BPA training set, and so this test relies entirely on the information contained in the pre-training. Figure <ref> shows the change in energy as a hydrogen is removed from the 3BPA molecule. The potential pre-trained with meta-learning recreates the smooth dissociation curve expected. In contrast, when no pre-training, k=1 or ANI-1x is used, the curve lacks smoothness and has an additional barrier present. In Fig. S7, the bond dissociation energy is shown when just 31 structures are used for retraining. Even in this low-data limit, the smooth dissociation curves for the meta-learning potential remain. To demonstrate that this is not unique to 3BPA, the hydrogen bond dissociation for ethanol is shown in Fig. S8. Again, k=1 fails to recreate the smooth curve expected whilst the meta-learning potential captures the correct shape.

We have therefore shown how meta-learning can be used to combine multiple datasets, and the resulting improvements in the errors, torsion energy scans and bond dissociation curves. Joint fitting can improve on no pre-training. However, not accounting for the differences in QM level of theory causes a reduction in performance that can be seen in the test set errors, the smoothness of the potential, and the behaviour in extrapolation regions.

§ CONCLUSION
The quantum mechanical properties of millions of molecular species and many materials systems have already been calculated and composed into extended datasets <cit.>. However, the varying levels of theory used to perform the QM calculations have previously prevented different datasets from being used together to make machine learning models, for example for MLIPs. In this work, we have shown that meta-learning techniques can be used to jointly fit multiple datasets and demonstrated the improvement in performance that results from including a diverse selection of datasets. We show the wide applicability of meta-learning by creating MLIPs for a variety of systems, from a single aspirin molecule to the ANI-1ccx dataset. We show that multiple large organic molecule datasets (QM7-x, QMugs, ANI-1x, Transition-1x and GEOM) can be combined together to pre-train a single model. The benefits of using a pre-trained model are then shown for the 3BPA molecule, with a more accurate and smoother potential produced. Meta-learning greatly expands the variety of fitting data available for MLIPs and establishes the possibility of creating readily pre-trained, foundational models for interatomic potentials.

Pre-training machine learning models has been extensively discussed in the machine learning literature in recent years <cit.>.
Whilst pre-training has been carried out for MLIPs, its use has been limited to training from one dataset to another <cit.>. With techniques such as meta-learning, this pre-training does not need to be limited to one specific dataset but can include large numbers of existing datasets. In this work, we added only a single reactive dataset to pre-train a model. However, many different reactive datasets exist, and combining this large amount of information could help build general transferable potentials for reactions in both the condensed and gas phases without the need for millions of new QM calculations. Additionally, datasets have been created for many different combinations of elements. Meta-learning techniques could help build more transferable MLIPs over a wider range of elements with fewer calculations required.

However, combining multiple datasets together and training with meta-learning will not always improve results. This was seen with the CCSD(T) results, where fitting straight from ANI-1x to CCSD(T) resulted in the lowest error. Therefore, adding more data when there is a specific application in mind is not always the best approach, particularly if the additional data is far from the final application. For specific applications, transfer learning from one dataset to another may yield the best training and test set errors. However, if multiple datasets need to be incorporated together, or a general model is desired which can be specialized to multiple different tasks, meta-learning methods are preferable.

With the techniques described in this work, multiple datasets can be fit at once. However, this advancement has exposed a more practical problem with the datasets currently published: there is no standard format for storing the information. Manual manipulation of datasets to a standard format is extremely time-consuming. The need for uniformity in the structure of datasets produced is therefore becoming increasingly important.

The growth of available datasets containing quantum mechanical information for molecular and material structures has given researchers unprecedented levels of QM information. However, combining data from multiple data sources is a major challenge. We have shown how meta-learning can be used to combine information from multiple datasets generated with varying levels of theory. This advancement changes the way that existing datasets should be viewed, and opens up new avenues for MLIP fitting. Beyond this, the results suggest that meta-learning can be seen as a general approach for combining training datasets for the broad array of chemical and materials processes where data science models can benefit.

This work was supported by the United States Department of Energy (US DOE), Office of Science, Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division under Triad National Security, LLC (`Triad') contract grant no. 89233218CNA000001 (FWP: LANLE3F2). A. E. A. Allen and S. Matin also acknowledge the Center for Nonlinear Studies. Computer time was provided by the CCS-7 Darwin cluster at LANL.
http://arxiv.org/abs/2307.04181v1
20230709141335
Central limit theorem for temporal average of backward Euler--Maruyama method
[ "Diancong Jin" ]
math.NA
[ "math.NA", "cs.NA", "math.PR" ]
Central limit theorem for temporal average of backward Euler–Maruyama method

School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, China; Hubei Key Laboratory of Engineering Modeling and Scientific Computing, Huazhong University of Science and Technology, Wuhan 430074, China
[email protected]
This work is supported by National Natural Science Foundation of China (No. 12201228), and the Fundamental Research Funds for the Central Universities 3004011142.

This work focuses on the temporal average of the backward Euler–Maruyama (BEM) method, which is used to approximate the ergodic limit of stochastic ordinary differential equations with super-linearly growing drift coefficients. We give the central limit theorem (CLT) of the temporal average, which characterizes the asymptotics in distribution of the temporal average. When the deviation order is smaller than the optimal strong order, we directly derive the CLT of the temporal average through that of the original equations and the uniform strong order of the BEM method. For the case that the deviation order equals the optimal strong order, the CLT is established via the Poisson equation associated with the generator of the original equations. Numerical experiments are performed to illustrate the theoretical results.

Diancong Jin
August 12, 2023
===================

AMS subject classifications: 60H35, 60F05, 60H10

§ INTRODUCTION
Ergodic theory is a powerful tool to investigate the long-time dynamics and statistical properties of stochastic systems, and it is widely applied in physics, biology, chemistry and so on (see, e.g., <cit.>). A crucial problem in ergodic theory is to determine the ergodic measure and the ergodic limit. Since explicit expressions of them are generally unavailable, one usually resorts to numerical methods to obtain their approximations. There have been many numerical methods which inherit the ergodicity or approximate the ergodic limit of the original systems; see <cit.> and references therein. In the aforementioned work, the main efforts are made to analyze the error between the numerical invariant measure and the original one, and that between the numerical temporal average and the ergodic limit. Besides the convergence of the numerical temporal average in the moment sense, the asymptotics of its distribution is also an essential property. In several recent works, the central limit theorem (CLT) of the temporal average of some numerical methods has been given, which characterizes the fluctuation of the numerical temporal average around the ergodic limit of the original system in the sense of distribution. In <cit.>, the CLT of the temporal average of the Euler–Maruyama (EM) method with decreasing step-size for ergodic stochastic ordinary differential equations (SODEs) is given. In addition, <cit.> proves the CLT and moderate deviation principle for the EM method with a fixed step-size for SODEs. For a class of ergodic stochastic partial differential equations (SPDEs), <cit.> shows that the temporal average of a full discretization with fixed temporal and spatial step-sizes satisfies the CLT. In the existing work, the CLT of the numerical temporal average is established provided that the coefficients of the original equations are Lipschitz continuous. This motivates us to investigate the CLT of the numerical temporal average for stochastic systems with non-Lipschitz coefficients, which have more extensive applications in reality compared with the Lipschitz case.
In this work, we consider the following SODE
dX(t)=b(X(t)) dt+σ(X(t)) dW(t), t>0,
where {W(t),t≥ 0} is a D-dimensional standard Brownian motion defined on a complete filtered probability space (Ω, F,{ F_t}_{t≥0}, P), and b: R^d→ R^d and σ: R^d→ R^{d× D} satisfy Assumptions <ref>-<ref> such that (<ref>) admits a unique strong solution on [0,+∞) for any given deterministic initial value X(0)∈ R^d. Notice that our assumptions allow b to grow super-linearly. It is shown in <cit.> that (<ref>) admits a unique invariant measure π and is thus ergodic, due to the strong dissipation condition on b. In order to inherit the ergodicity of (<ref>) and approximate the ergodic limit π(h):=∫_{R^d}h(x)π(dx), h∈ C_b( R^d), <cit.> discretizes (<ref>) by the backward Euler–Maruyama (BEM) method (see (<ref>)), and gives the error between the numerical invariant measure π_τ and π, with τ being the step-size. The above result, together with the strong order of the BEM method in the infinite time horizon, implies that the temporal average 1/N∑_{k=0}^{N}h(X̅_k^x) converges to the ergodic limit π(h), i.e.,
lim_{τ→0}lim_{N→+∞}|1/N∑_{k=0}^{N} Eh(X̅_k^x)-π(h)|=0,
where {X̅^x_n}_{n≥ 0} is the numerical solution generated by the BEM method with initial value x∈ R^d. The purpose of this paper is to present the CLT for the following temporal average
Π_{τ,α}(h)=1/τ^{-α}∑_{k=0}^{τ^{-α}-1}h(X̅^x_k), α∈(1,2], h∈ C^4_b( R^d),
where for convenience we always assume that τ^{-α} is an integer, instead of the step number N in (<ref>). More precisely, we prove in Theorems <ref> and <ref> that the normalized temporal average 1/τ^{(α-1)/2}(Π_{τ,α}(h)-π(h)) converges to the normal distribution N(0,π(|σ^⊤∇φ|^2)) in distribution as τ→ 0, respectively for α∈(1,2) and α=2. In fact, Theorem <ref> indicates that the CLT holds for the temporal average of a class of numerical methods with uniform strong order 1/2, for α∈(1,2). Here, φ is defined by (<ref>) and solves the Poisson equation Lφ=h-π(h) (see Lemma <ref>), with L being the generator of (<ref>). We call the parameter τ^{(α-1)/2} the deviation scale and (α-1)/2 the deviation order; see Remark <ref> for the reason of requiring α>1.

The proof ideas of the CLT for Π_{τ,α}(h) are different for α∈(1,2) and α=2. For the case α∈(1,2), we directly derive the CLT for Π_{τ,α}(h) in Theorem <ref>, by means of the CLT for (<ref>) and the optimal strong order in the infinite time horizon of the BEM method, considering that the CLT for (<ref>) is a classical result (see <cit.>). The key of this proof lies in that the deviation order (α-1)/2 is smaller than the optimal strong order 1/2 for α∈(1,2), which does not apply to the case α=2. In order to tackle the more subtle case α=2, we follow the argument in <cit.> and <cit.> to obtain the CLT for Π_{τ,2}(h). The main idea is to reformulate the normalized temporal average 1/τ^{(α-1)/2}(Π_{τ,α}(h)-π(h)) by means of the Poisson equation. This allows us to decompose 1/τ^{(α-1)/2}(Π_{τ,α}(h)-π(h)) as a martingale difference series sum converging to N(0,π(|σ^⊤∇φ|^2)) in distribution, and a negligible remainder converging to 0 in probability. In this proof, the pth (p>2) moment boundedness of the BEM method in the infinite time horizon and the regularity of the solution to the Poisson equation play important roles, where the former has not been reported for SODEs with non-Lipschitz coefficients to the best of our knowledge. To sum up, the contributions of this work are twofold.
Firstly, we give the CLT for the temporal average of the BEM method, which generalizes the existing results to SODEs with super-linearly growing drift coefficients. Secondly, we prove the pth (p>2) moment boundedness of the BEM method in the infinite time horizon for the original equation.

The rest of this paper is organized as follows. In Section <ref>, we give our assumptions and recall some basic properties of the exact solution. Section <ref> presents our main results and proves the CLT for Π_{τ,α}(h) with α∈(1,2), and Section <ref> gives the proof of the CLT for Π_{τ,2}(h). Some numerical tests are displayed to illustrate the theoretical results in Section <ref>. Finally, we give the conclusions and future aspects in Section <ref>.

§ PRELIMINARIES
In this section, we give our main assumptions on the coefficients of (<ref>) and present some basic properties of (<ref>). We begin with some notation. Denote by |·| the 2-norm of a vector or matrix, and by ⟨·,·⟩ the scalar product of two vectors. Let d,m,k∈ N^+, with N^+ denoting the set of positive integers. For matrices A,B∈ R^{d× m}, denote ⟨A,B⟩_HS:=∑_{i=1}^d∑_{j=1}^m A_{ij}B_{ij} and ‖A‖_HS:=√(⟨A,A⟩_HS). Let B( R^d) stand for the set of all Borel sets of R^d. Denote by P( R^d) the space of all probability measures on R^d. Denote μ(f)=∫_{R^d}f(x)μ(dx) for μ∈ P( R^d) and μ-measurable f. For convenience, we set F_t=σ(W(s),0≤ s≤ t). Moreover, d⟶ denotes the convergence in distribution of random variables and w⟶ denotes the weak convergence of probability measures in P( R^d).

Denote by C( R^d; R^m) (resp. C^k( R^d; R^m)) the space consisting of continuous (resp. kth continuously differentiable) functions from R^d to R^m. Let C_b^k( R^d; R^m) stand for the set of bounded, kth continuously differentiable functions from R^d to R^m with bounded derivatives up to order k. Denote by C_b( R^d; R^m) the set of bounded and continuous functions from R^d to R^m. When no confusion occurs, C( R^d; R^m) is simply written as C( R^d), and so are C_b( R^d; R^m), C^k( R^d; R^m) and C^k_b( R^d; R^m). For l∈ N^+, denote by Poly(l, R^d) the set of functions growing polynomially with order l, i.e.,
Poly(l, R^d):={g∈ C( R^d; R): |g(x)-g(y)|≤ K(g)(1+|x|^{l-1}+|y|^{l-1})|x-y| for some K(g)>0}.
For f∈ C^k( R^d; R), denote by ∇^k f(x)(ξ_1,…,ξ_k) the kth order Gâteaux derivative along the directions ξ_1,…,ξ_k∈ R^d, i.e., ∇^k f(x)(ξ_1,…,ξ_k)=∑_{i_1,…,i_k=1}^d (∂^k f(x)/∂ x^{i_1}⋯∂ x^{i_k}) ξ_1^{i_1}⋯ξ_k^{i_k}. For f=(f_1,…,f_m)^⊤∈ C^k( R^d; R^m), denote ∇^k f(x)(ξ_1,…,ξ_k)=(∇^k f_1(x)(ξ_1,…,ξ_k),…,∇^k f_m(x)(ξ_1,…,ξ_k))^⊤. The Gâteaux derivative for a matrix-valued function is defined analogously. For f∈ C^k( R^d; R), the notation ∇^k f(x) is viewed as a tensor, i.e., a multilinear form defined on ( R^d)^{⊗ k}, and ‖·‖_⊗ denotes the norm of a tensor. Throughout this paper, K(a_1,a_2,...,a_m) denotes a generic constant dependent on the parameters a_1,a_2,...,a_m but independent of the step-size τ, which may vary for each appearance.

§.§ Settings
Let us first give the assumptions on b and σ. There exist constants L_1, L_2∈(0,+∞) such that
‖σ(u_1)-σ(u_2)‖_HS≤ L_1|u_1-u_2| ∀ u_1,u_2∈ R^d,
‖σ(u)‖_HS≤ L_2 ∀ u∈ R^d.
There exist c_1>(15/2)L_1^2, L_3>0 and q≥ 1 such that
⟨u_1-u_2, b(u_1)-b(u_2)⟩≤ -c_1|u_1-u_2|^2 ∀ u_1,u_2∈ R^d,
|b(u_1)-b(u_2)|≤ L_3(1+|u_1|^{q-1}+|u_2|^{q-1})|u_1-u_2| ∀ u_1,u_2∈ R^d.
The above two assumptions ensure the well-posedness of (<ref>); see, e.g., <cit.>. And the generator of (<ref>) is given by
Lf(x)=⟨∇ f(x), b(x)⟩+(1/2)⟨∇^2 f(x), σ(x)σ(x)^⊤⟩_HS,  f∈ C^2( R^d; R).
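To fix ideas before the analysis, the following is a small numerical sketch (ours, not from the paper) of a one-dimensional example satisfying the above assumptions, together with the BEM scheme and the normalized temporal average studied below. The coefficients b(x)=-5x-x^3 and σ(x)=cos(x)/2 are hypothetical choices: b is one-sided Lipschitz with c_1=5>(15/2)L_1^2 since L_1=L_2=1/2, and b grows polynomially with q=3.

```python
import numpy as np

rng = np.random.default_rng(0)

b = lambda x: -5.0 * x - x**3          # dissipative, super-linear drift
db = lambda x: -5.0 - 3.0 * x**2       # b'(x), used by Newton's method
sigma = lambda x: 0.5 * np.cos(x)      # bounded, Lipschitz diffusion

def bem_step(x, tau, dW, iters=5):
    """One BEM step: solve y = x + b(y)*tau + sigma(x)*dW for y."""
    rhs = x + sigma(x) * dW
    y = x                               # initial guess
    for _ in range(iters):              # Newton iteration; note that
        g = y - tau * b(y) - rhs        # g'(y) = 1 - tau*b'(y) >= 1, so
        y -= g / (1.0 - tau * db(y))    # the implicit equation is monotone
    return y

# Normalized temporal average (Pi_{tau,alpha}(h) - pi(h)) / tau^((alpha-1)/2)
tau, alpha = 0.01, 1.5
N = int(round(tau ** (-alpha)))
h = np.tanh
x, acc = 0.0, 0.0
for _ in range(N):
    x = bem_step(x, tau, rng.normal(scale=np.sqrt(tau)))
    acc += h(x)
pi_h = 0.0   # this odd-drift example has a symmetric pi, so pi(tanh) = 0
print((acc / N - pi_h) / tau ** ((alpha - 1) / 2))
```

Repeating the experiment over many independent runs, the printed statistic should be approximately N(0,π(|σ^⊤∇φ|^2))-distributed as τ→0, which is the content of the theorems below; a single run only produces one sample from this limiting distribution.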
Notice that trace(∇^2 f(x)σ(x)σ(x)^⊤)=⟨∇^2 f(x), σ(x)σ(x)^⊤⟩_HS. As an immediate result of (<ref>),
|b(u)|≤ L_4(1+|u|^q) ∀ u∈ R^d
for some L_4>0. In addition, it is straightforward to conclude from Assumptions <ref>-<ref> that for any l_2>0,
2⟨u_1-u_2, b(u_1)-b(u_2)⟩+15‖σ(u_1)-σ(u_2)‖_HS^2≤ -L_5|u_1-u_2|^2 ∀ u_1,u_2∈ R^d,
2⟨u, b(u)⟩+l_2‖σ(u)‖_HS^2≤ -c_1|u|^2+(1/c_1)|b(0)|^2+l_2L_2^2 ∀ u∈ R^d,
where L_5:=2c_1-15L_1^2. Note that Assumptions <ref>-<ref> in this paper imply Assumptions 2.1-2.4 in <cit.>, by taking A=-ε I_d, f(x)=b(x)+ε x and g(x)=σ(x) in <cit.> with ε small enough. Thus, all conclusions in <cit.> apply to our case provided that Assumptions <ref>-<ref> hold.

In order to give the regularity of the solution to the Poisson equation, we need the following assumption. Let σ∈ C_b^4( R^d) and b∈ C^4( R^d). In addition, there exist q'≥ 1 and L_6>0 such that for i=1,2,3,4,
‖∇^i b(u)‖_⊗≤ L_6(1+|u|^{q'}) ∀ u∈ R^d.
Under Assumptions <ref>-<ref>, it holds that
2⟨v, ∇ b(u)v⟩+15‖∇σ(u)v‖^2_HS≤ -L_5|v|^2 ∀ u,v∈ R^d.
In fact, it follows from (<ref>) that for any u,v∈ R^d and t∈ R,
2t⟨v, b(u+tv)-b(u)⟩+15‖σ(u+tv)-σ(u)‖^2_HS≤ -L_5t^2|v|^2.
Then the Taylor expansion yields that for any t∈ R,
2t^2⟨v, ∇ b(u)v⟩+15t^2‖∇σ(u)v‖^2_HS+ O(t^3)≤ -L_5t^2|v|^2,
which implies (<ref>).

Next, we recall some basic knowledge about the invariant measure and ergodicity. Denote by X^{s,x}(t) the solution to (<ref>) at time t, starting from X(s)=x. Especially, denote X^x(t):=X^{0,x}(t). Let π_t(x,·) denote the transition probability kernel of {X(t)}_{t≥ 0}, i.e., π_t(x,A)= P(X^x(t)∈ A) for any A∈ B( R^d). For any ϕ∈ B_b( R^d) and t≥0, define the operator P_t: B_b( R^d)→ B_b( R^d) by (P_tϕ)(x):= Eϕ(X^x(t))=∫_{R^d}ϕ(y)π_t(x,dy). Then, {P_t}_{t≥ 0} is a Markov semigroup on B_b( R^d). Here, B_b( R^d) is the space of all bounded and measurable functions. A probability measure μ∈ P( R^d) is called an invariant measure of {X(t)}_{t≥ 0} or {P_t}_{t≥0}, if
∫_{R^d} P_tϕ(x)μ(dx)= ∫_{R^d}ϕ(x)μ(dx) ∀ ϕ∈ B_b( R^d), t≥ 0.
Further, an invariant measure μ is called an ergodic measure of {X(t)}_{t≥ 0} or {P_t}_{t≥0}, if for any ϕ∈ B_b( R^d),
lim_{T→+∞}(1/T)∫_0^T P_tϕ(x) dt=∫_{R^d}ϕ(x)μ(dx) in L^2( R^d,μ),
where L^2( R^d,μ) is the space of all square integrable functions with respect to (w.r.t.) μ. Especially, if μ is the unique invariant measure of {X(t)}_{t≥ 0}, then μ is also the ergodic measure. We refer readers to <cit.> for more details.

Let Assumptions <ref>-<ref> hold. Then the following hold. (1) For any p≥ 1, sup_{t≥ 0} E|X^x(t)|^p≤ K(p)(1+|x|^p). (2) For any t,s≥ 0, ( E|X^x(t)-X^x(s)|^2)^{1/2}≤ K(1+|x|^q)|t-s|^{1/2}. (3) For any t≥ 0, ( E|X^x(t)-X^y(t)|^2)^{1/2}≤ |x-y|e^{-L_5t/2}. The first and second conclusions come from <cit.>, and the third conclusion can be obtained by applying the Itô formula. In addition, <cit.> gives the ergodicity for {X(t)}_{t≥ 0}.

Let Assumptions <ref>-<ref> hold. Then we have the following. (1) {X(t)}_{t≥ 0} admits a unique invariant measure π∈ P( R^d). (2) For any p≥ 1, π(|·|^p)<+∞. (3) There is λ_1>0 such that for any f∈ Poly(l, R^d), l≥1 and t≥ 0, | Ef(X^x(t))-π(f)|≤ K(f)(1+|x|^l)e^{-λ_1t}. It follows from <cit.> that {X(t)}_{t≥0} admits a unique invariant measure π∈ P( R^d), and π_t(x,·) w⟶ π as t→+∞ for any x∈ R^d. Especially, π_t(0,·) w⟶ π, which implies that for any M>0,
∫_{R^d}(|x|^p∧ M)π(dx) =lim_{t→+∞}∫_{R^d}(|x|^p∧ M)π_t(0,dx) ≤ M∧lim sup_{t→+∞} E|X^0(t)|^p≤ K,
where we used |·|^p∧ M∈ C_b( R^d) and Proposition <ref>(1). Then the Fatou lemma gives
π(|·|^p)=∫_{R^d}|x|^p π(dx)≤lim inf_{M→+∞}∫_{R^d}(|x|^p∧ M)π(dx)≤ K.
For any M>0 and f∈ Poly(l, R^d), it holds that f∧ M∈ C_b( R^d). Accordingly, it follows from the definition of the invariant measure (see (<ref>)) that π(f∧ M)=∫_{R^d} P_t(f∧ M)(y)π(dy). Thus, using Proposition <ref>(2), the Hölder inequality, the fact |a∧ b-a∧ c|≤ |b-c| and the second conclusion, we conclude that for any M>0,
| E(f(X^x(t))∧ M)-π(f∧ M)|=|P_t(f∧ M)(x)-∫_{R^d} P_t(f∧ M)(y)π(dy)|
= |∫_{R^d}[P_t(f∧ M)(x)-P_t(f∧ M)(y)]π(dy)|
≤ ∫_{R^d}| E(f(X^x(t))∧ M)- E(f(X^y(t))∧ M)|π(dy)
≤ ∫_{R^d} E|f(X^x(t))-f(X^y(t))|π(dy)
≤ K(f)∫_{R^d}(1+( E|X^x(t)|^{2l-2})^{1/2}+( E|X^y(t)|^{2l-2})^{1/2})( E|X^x(t)-X^y(t)|^2)^{1/2}π(dy)
≤ K(f)e^{-L_5t/2}∫_{R^d}(1+|x|^{l-1}+|y|^{l-1})|x-y|π(dy)
≤ K(f)e^{-L_5t/2}(1+|x|^l).
The above formula and the monotone convergence theorem lead to (<ref>), which completes the proof.

§ MAIN RESULTS
In this section, we give our main result, i.e., the CLT for the temporal average Π_{τ,α}(h) of the BEM method used to approximate the ergodic limit π(h). The BEM method has been widely applied to approximating SODEs or SPDEs with non-Lipschitz coefficients; see, e.g., <cit.> and references therein. Let τ>0 be the temporal step-size. The BEM method for (<ref>) reads
X̅_{n+1}=X̅_n+b(X̅_{n+1})τ+σ(X̅_n)Δ W_n, n=0,1,2,…,
where Δ W_n:=W(t_{n+1})-W(t_n) with t_n=nτ. We denote by X̅^{k,x}_n the solution to (<ref>) at the nth step provided X̅_k=x. Especially, denote X̅_n^x:=X̅^{0,x}_n, i.e., the solution to (<ref>) with the initial value x∈ R^d. The following are some known results about (<ref>), which can be found in Lemmas 4.1-4.2 and Theorem 4.2 in <cit.>.

Let Assumptions <ref>-<ref> hold and let τ be sufficiently small. Then the following properties hold. (1) sup_{n≥ 0} E|X̅^x_n|^2≤ K(1+|x|^2). (2) There is ξ_1>0 such that for any n≥ 0, ( E|X̅^x_n-X̅^y_n|^2)^{1/2}≤ K|x-y|e^{-ξ_1nτ}. (3) sup_{n≥ 0} E|X^x(t_n)-X̅^x_n|^2≤ K(x)τ.

Recall that the temporal average of the BEM method is
Π_{τ,α}(h)=1/τ^{-α}∑_{k=0}^{τ^{-α}-1}h(X̅^x_k), α∈(1,2], h∈ C^4_b( R^d).
Define the function φ: R^d→ R by
φ(x)=-∫_0^∞ E(h(X^x(t))-π(h)) dt, x∈ R^d,
which is indeed a solution to the Poisson equation Lφ=h-π(h) due to Lemma <ref>. Then we have the following CLT for Π_{τ,α}(h), α∈(1,2).

Let Assumptions <ref>-<ref> hold and h∈ C_b^4( R^d). (1) Let {Y_n}_{n≥0} be a numerical solution for (<ref>). Suppose that there is K>0 independent of τ such that sup_{n≥ 0} E|X(t_n)-Y_n|^2≤ Kτ. Then for any α∈(1,2),
1/τ^{(α-1)/2}(1/τ^{-α}∑_{k=0}^{τ^{-α}-1}h(Y_k)-π(h)) d⟶ N(0,π(|σ^⊤∇φ|^2)) as τ→ 0.
(2) For any α∈(1,2) and x∈ R^d,
1/τ^{(α-1)/2}(Π_{τ,α}(h)-π(h)) d⟶ N(0,π(|σ^⊤∇φ|^2)) as τ→ 0.

Let φ be that in (<ref>). By Lemma <ref>, it holds that φ∈ C^3( R^d) and Lφ=h-π(h). It follows from <cit.> that the CLT holds for (<ref>), i.e.,
(1/√T)∫_0^T(h(X(t))-π(h)) dt d⟶ N(0,-2π(φ Lφ)) as T→∞.
By (<ref>) and a direct computation, φ Lφ=(1/2) L(φ^2)-(1/2)|σ^⊤∇φ|^2. Since φ^2 belongs to the domain of L, π( L(φ^2))=0 due to <cit.>. Combining the above relations, we have
(1/√T)∫_0^T(h(X(t))-π(h)) dt d⟶ N(0,π(|σ^⊤∇φ|^2)) as T→∞.
Notice that
1/τ^{(α-1)/2}(1/τ^{-α}∑_{k=0}^{τ^{-α}-1}h(Y_k)-π(h))
= 1/τ^{(α-1)/2}(1/τ^{-α}∑_{k=0}^{τ^{-α}-1}h(Y_k)-τ^{α-1}∫_0^{τ^{1-α}}h(X(t)) dt)+τ^{(α-1)/2}∫_0^{τ^{1-α}}(h(X(t))-π(h)) dt
=: J_1(τ)+J_2(τ).
By (<ref>) and α>1, J_2(τ) d⟶ N(0,π(|σ^⊤∇φ|^2)) as τ→ 0.
Denoting N=τ^{-α}, we use Proposition <ref>(2), (<ref>) and h∈ C_b^1( R^d) to get
E|J_1(τ)|= 1/τ^{(α-1)/2} E|1/N∑_{k=0}^{N-1}h(Y_k)-1/(Nτ)∑_{k=0}^{N-1}∫_{kτ}^{(k+1)τ}h(X(t)) dt|
≤ 1/τ^{(α-1)/2} (1/N)∑_{k=0}^{N-1} E|h(Y_k)-h(X(t_k))|+1/τ^{(α-1)/2} (1/(Nτ))∑_{k=0}^{N-1}∫_{kτ}^{(k+1)τ} E|h(X(t))-h(X(t_k))| dt
≤ K(h) 1/τ^{(α-1)/2} sup_{k≥0}( E|Y_k-X(t_k)|^2)^{1/2}+K(h) 1/τ^{(α-1)/2} (1/(Nτ))∑_{k=0}^{N-1}∫_{kτ}^{(k+1)τ}( E|X(t)-X(t_k)|^2)^{1/2} dt
≤ K(h) τ^{1/2}/τ^{(α-1)/2}=K(h)τ^{(2-α)/2}.
Thus, lim_{τ→0} E|J_1(τ)|=0 due to α<2, which implies that J_1(τ) converges to 0 in probability. Thus, (<ref>) follows by applying the Slutsky theorem. Finally, (<ref>) holds as a special case of (<ref>) due to Proposition <ref>(3). Thus, the proof is complete.

(1) It is observed that
1/τ^{(α-1)/2}(Π_{τ,α}(h)-π(h))=1/τ^{(1-α)/2}∑_{k=0}^{τ^{-α}-1}(h(X̅^x_k)-π(h))τ,
which can be viewed as a numerical approximation of (1/√T)∫_0^T(h(X^x(t))-π(h)) dt with T(τ)=Nτ and N=τ^{-α}. Thus, α>1 is required such that lim_{τ→0}T(τ)=+∞, which coincides with the CLT for {X(t)}_{t≥0}.
(2) In fact, we give the CLT of the temporal average for a class of numerical methods satisfying (<ref>) for α∈(1,2). We guess that there may be some non-ergodic numerical method whose temporal average satisfies the CLT in view of Theorem <ref>(1).

We close the section by presenting the CLT for Π_{τ,2}(h). Let Assumptions <ref>-<ref> hold and h∈ C^4_b( R^d). Then for any x∈ R^d,
(1/√τ)(Π_{τ,2}(h)-π(h)) d⟶ N(0,π(|σ^⊤∇φ|^2)) as τ→ 0.
As is pointed out in the introduction, the proof idea of Theorem <ref> does not apply to the case α=2. Instead, we will use the Poisson equation Lφ=h-π(h) to give a good decomposition of Π_{τ,2}(h), on the basis of which the CLT of Π_{τ,2}(h) can be established. We postpone the proof of Theorem <ref> to the next section.

§ PROOF OF THEOREM <REF>
§.§ Auxiliary results
Notice that <cit.> gives the second moment boundedness of the BEM method, i.e., Proposition <ref>(1). However, in order to give the CLT for Π_{τ,2}(h), the pth (p>2) moment boundedness in the infinite time horizon is indispensable. We also refer interested readers to <cit.> for the pth (p>2) moment boundedness in the infinite time horizon for the truncated Euler–Maruyama method.

Suppose that Assumptions <ref>-<ref> hold. Then for any r≥ 1 and τ≤ 1, sup_{n≥0} E|X̅^x_n|^r≤ K(r)(1+|x|^r). It is sufficient to show that for any positive integer p,
sup_{n≥0} E|X̅^x_n|^{2p}≤ K(p)(1+|x|^{2p}),
in view of the Hölder inequality, which will be derived via mathematical induction. By (<ref>) and (<ref>),
|X̅^x_{n+1}|^2-|X̅^x_n|^2+|X̅^x_{n+1}-X̅^x_n|^2=2⟨X̅^x_{n+1}, X̅^x_{n+1}-X̅^x_n⟩
= 2⟨X̅^x_{n+1}, b(X̅^x_{n+1})⟩τ+2⟨X̅^x_{n+1}-X̅^x_n, σ(X̅^x_n)Δ W_n⟩+2⟨X̅^x_n, σ(X̅^x_n)Δ W_n⟩
≤ -c_1τ|X̅^x_{n+1}|^2+Kτ+|X̅^x_{n+1}-X̅^x_n|^2+‖σ(X̅^x_n)‖^2_HS|Δ W_n|^2+2⟨X̅^x_n, σ(X̅^x_n)Δ W_n⟩,
which together with the boundedness of σ yields
(1+c_1τ)|X̅^x_{n+1}|^2≤ |X̅^x_n|^2+Kτ+L_2^2|Δ W_n|^2+2⟨X̅^x_n, σ(X̅^x_n)Δ W_n⟩.
Noting that E⟨X̅^x_n, σ(X̅^x_n)Δ W_n⟩=0, we have
E|X̅^x_{n+1}|^2≤ (1/(1+c_1τ)) E|X̅^x_n|^2+Kτ/(1+c_1τ).
By iteration, we arrive at
E|X̅^x_n|^2≤ (1/(1+c_1τ)^n)|x|^2+Kτ∑_{i=1}^∞ 1/(1+c_1τ)^i≤ |x|^2+K.
Thus, (<ref>) holds for p=1. Now, we assume that
sup_{n≥0} E|X̅^x_n|^{2(p-1)}≤ K(p)(1+|x|^{2(p-1)}), p≥ 2.
It remains to prove sup_{n≥0} E|X̅^x_n|^{2p}≤ K(p)(1+|x|^{2p}). In fact, using (<ref>) and the inequality (1+x)^α≥ 1+α x, α≥ 1, x>-1, leads to
(1+pc_1τ)|X̅^x_{n+1}|^{2p}≤(|X̅^x_n|^2+2⟨X̅^x_n, σ(X̅^x_n)Δ W_n⟩ +K(τ+|Δ W_n|^2))^p.
Notice that
(|X̅^x_n|^2+2⟨X̅^x_n, σ(X̅^x_n)Δ W_n⟩ +K(τ+|Δ W_n|^2))^p
= ∑_{i_1=0}^p∑_{i_2=0}^{p-i_1}C_p^{i_1}C_{p-i_1}^{i_2}2^{i_2}K^{p-(i_1+i_2)}|X̅^x_n|^{2i_1}⟨X̅^x_n, σ(X̅^x_n)Δ W_n⟩^{i_2}(τ+|Δ W_n|^2)^{p-(i_1+i_2)}
= |X̅^x_n|^{2p}+∑_{i_1=0}^{p-1}∑_{i_2=0}^{p-i_1-1}C_p^{i_1}C_{p-i_1}^{i_2}2^{i_2}K^{p-(i_1+i_2)}S_{n,i_1,i_2}+∑_{i=0}^{p-1}C_p^{i}2^{p-i}T_{n,i},
where
S_{n,i_1,i_2}:=|X̅^x_n|^{2i_1}⟨X̅^x_n, σ(X̅^x_n)Δ W_n⟩^{i_2}(τ+|Δ W_n|^2)^{p-(i_1+i_2)}, i_1∈[0,p-1], i_2∈[0,p-i_1-1],
T_{n,i}:=|X̅^x_n|^{2i}⟨X̅^x_n, σ(X̅^x_n)Δ W_n⟩^{p-i}, i∈[0,p-1].
For any i_1∈[0,p-1], i_2∈[0,p-i_1-1], it follows from the independence of Δ W_n and X̅^x_n, the boundedness of σ, the Hölder inequality and (<ref>) that for τ≤ 1,
| ES_{n,i_1,i_2}| ≤ K(p) E|X̅^x_n|^{2i_1+i_2} E[|Δ W_n|^{i_2}(τ+|Δ W_n|^2)^{p-(i_1+i_2)}]
≤ K(p)( E|X̅^x_n|^{2p-2})^{(2i_1+i_2)/(2p-2)}τ≤ K(p)(1+|x|^{2p-2})τ.
Next we estimate | ET_{n,i}| for i=0,…,p-1. Notice that the property of conditional expectations (see, e.g., <cit.>) leads to
ET_{n,p-1} = E[ E_n(|X̅^x_n|^{2p-2}⟨X̅^x_n, σ(X̅^x_n)Δ W_n⟩)] = E[( E(|y|^{2p-2}⟨y, σ(y)Δ W_n⟩))|_{y=X̅^x_n}]=0.
For i=0,…,p-2, applying (<ref>), the boundedness of σ and the Hölder inequality, we get
| ET_{n,i}|≤ K(p) E|X̅^x_n|^{p+i} E|Δ W_n|^{p-i}≤ K(p)( E|X̅^x_n|^{2p-2})^{(p+i)/(2p-2)}τ^{(p-i)/2}≤ K(p)(1+|x|^{2p-2})τ.
Combining the above formulas gives
E(|X̅^x_n|^2+2⟨X̅^x_n, σ(X̅^x_n)Δ W_n⟩ +K(τ+|Δ W_n|^2))^p≤ E|X̅^x_n|^{2p}+K(p)(1+|x|^{2p-2})τ,
which along with (<ref>) yields
E|X̅^x_{n+1}|^{2p}≤ (1/(1+pc_1τ)) E|X̅^x_n|^{2p}+K(p)(1+|x|^{2p-2})τ/(1+pc_1τ).
Then by iteration, we deduce
E|X̅^x_n|^{2p}≤ (1/(1+pc_1τ)^n)|x|^{2p}+K(p)(1+|x|^{2p-2})τ∑_{i=1}^∞ 1/(1+pc_1τ)^i≤ K(p)(1+|x|^{2p}).
Thus, (<ref>) holds by mathematical induction and the proof is complete.

Let Assumptions <ref>-<ref> hold and τ be sufficiently small. Then the BEM method (<ref>) admits a unique invariant measure π_τ∈ P( R^d). Moreover, for any f∈ Poly(l, R^d), l≥1 and n≥ 0,
| Ef(X̅^x_n)-π_τ(f)| ≤ K(f)(1+|x|^l)e^{-ξ_1nτ}, x∈ R^d, n≥ 0,
|π_τ(f)-π(f)| ≤ K(f)τ^{1/2}.
As is shown in <cit.>, {X̅_n}_{n≥0} admits a unique invariant measure π_τ, and X̅_n^x d⟶ π_τ for any x∈ R^d. Similar to the proof of (<ref>), one can derive (<ref>) based on Proposition <ref>(2) and Theorem <ref>. As for (<ref>), it follows from f∈ Poly(l, R^d), (<ref>), Theorem <ref>, Proposition <ref>(1), Proposition <ref>(3) and (<ref>) that for any n≥ 0 and τ≪ 1,
|π_τ(f)-π(f)|≤ |π_τ(f)- Ef(X̅^0_n)|+| Ef(X̅^0_n)- Ef(X^0(t_n))|+| Ef(X^0(t_n))-π(f)|
≤ K(f)e^{-ξ_1nτ}+K(f)(1+( E|X̅^0_n|^{2l-2})^{1/2}+( E|X^0(t_n)|^{2l-2})^{1/2})( E|X̅^0_n-X^0(t_n)|^2)^{1/2}+K(f)e^{-λ_1 t_n}
≤ K(f)(e^{-ξ_1nτ}+e^{-λ_1 t_n})+K(f)τ^{1/2}.
Letting n→∞ in the above formula yields (<ref>), which finishes the proof.

In order to prove the CLT for Π_{τ,2}(h), we need to give the regularity of φ. This can be done through a probabilistic approach by means of mean-square derivatives of {X^x(t)}_{t≥0} w.r.t. the initial value x. For any x,y_i∈ R^d, i=1,2,3,4, denote by η^x_{y_1}(t) the mean-square derivative of X^x(t) along the direction y_1, i.e.,
η^x_{y_1}(t)=lim_{ε→0}(1/ε)(X^{x+ε y_1}(t)-X^x(t)) in L^2(Ω; R^d).
Further, denote
η^x_{y_1,y_2}(t):=lim_{ε→0}(1/ε)(η^{x+ε y_2}_{y_1}(t)-η^x_{y_1}(t)) in L^2(Ω; R^d),
i.e., η^x_{y_1,y_2}(t) is the second mean-square derivative of X^x(t) along the directions y_1 and y_2. η^x_{y_1,y_2,y_3}(t) and η^x_{y_1,y_2,y_3,y_4}(t) are defined similarly. We refer readers to <cit.> for more details about the mean-square differentiability of SDEs w.r.t. initial values.

Suppose that Assumptions <ref>-<ref> hold.
Then there exist C_1,C_2>0 and κ_i>0, i=1,2,3 such that for any x,y_i∈ R^d, i=1,2,3,4 and t≥ 0, ( E|η^x_y_1(t)|^16+κ_1)^1/16+κ_1 ≤ C_1|y_1|e^-C_2t, ( E|η^x_y_1,y_2(t)|^8+κ_2)^1/8+κ_2 ≤ C_1(1+|x|^q')|y_1||y_2|e^-C_2t, ( E|η^x_y_1,y_2,y_3(t)|^4+κ_3)^1/4+κ_3 ≤ C_1(1+|x|^2q')|y_1||y_2||y_3|e^-C_2t, ( E|η^x_y_1,y_2,y_3,y_4(t)|^2)^1/2 ≤ C_1(1+|x|^3q')|y_1||y_2||y_3||y_4|e^-C_2t. Similarly to <cit.>, η^x_y_1 solves the following variational equation η^x_y_1(t)=∇ b(X^x(t))η^x_y_1(t) t+∇σ (X^x(t))η^x_y_1(t) W(t), η^x_y_1(0)=y_1. Notice that for any p≥2 and matrix A, it holds that ∇ (|x|^p)=p|x|^p-2x and 1/2trace(∇ ^2(|x|^p)AA^⊤)≤1/2p(p-1)|x|^p-2A_HS^2. For any κ∈(0,1) and λ>0, by the Itô formula, (<ref>), σ∈ C_b^4( R^d) and (<ref>), E(e^λ t|η^x_y_1(t)|^16+κ) ≤ |y_1|^16+κ+λ∫_0^te^λ s|η_y_1^x(s)|^16+κ s +1/2(16+κ) E∫_0^te^λ s|η^x_y_1(s)|^14+κ[2η^x_y_1(s),∇ b(X^x(s))η^x_y_1(s) +(15+κ)∇σ(X^x(s))η^x_y_1(s)^2_HS] ≤ |y_1|^16+κ+[λ+(8+κ/2)(-L_5+κ L^2_σ)]∫_0^t E|η^x_y_1(s)|^16+κ s, where L_σ:=sup_x∈ R^d∇σ(x)_⊗. Letting κ_1<L_5/L^2_σ, λ_1 small enough, we obtain E|η^x_y_1(t)|^16+κ_1≤ |y_1|^16+κ_1e^-λ_1 t ∀ t∈[0,T], which yields the (<ref>). Secondly, similar to the argument for η^x_y_1, we have {[ η^x_y_1,y_2(t)= ∇ b(X^x(t))η^x_y_1,y_2(t) t+∇^2 b(X^x(t))(η^x_y_1(t),η^x_y_2(t)) t; +∇σ (X^x(t))η^x_y_1,y_2(t) W(t)+∇^2 σ(X^x(t))(η^x_y_1(t),η^x_y_2(t)) W(t),; η^x_y_1,y_2(0)= 0. ]. For any κ,λ,ε_0∈(0,1), again by the Itô formula, (<ref>), σ∈ C_b^4( R^d) and the elementary inequality (a+b)^2≤ (1+ε_0)a^2+(1+1/ε_0)b^2 with a,b≥ 0, it holds that E(e^λ t|η^x_y_1,y_2(t)|^8+κ) ≤ λ E∫_0^t e^λ s|η^x_y_1,y_2(s)|^8+κ s+(8+κ) E∫_0^te^λ s|η^x_y_1,y_2(s)|^6+κη^x_y_1,y_2(s),∇ b(X^x(s))η^x_y_1,y_2(s) s +(8+κ) E∫_0^te^λ s|η^x_y_1,y_2(s)|^6+κη^x_y_1,y_2(s),∇^2 b(X^x(s))(η^x_y_1(s),η^x_y_2(s)) s +1/2(8+κ)(7+κ) E∫_0^te^λ s|η^x_y_1,y_2(s)|^6+κ∇σ (X^x(s))η^x_y_1,y_2(s)+∇^2 σ(X^x(s))(η^x_y_1(s),η^x_y_2(s))^2_HS s ≤ λ E∫_0^t e^λ s|η^x_y_1,y_2(s)|^8+κ s+1/2(8+κ) E∫_0^t e^λ s|η^x_y_1,y_2(s)|^6+κ[2η^x_y_1,y_2(s),∇ b(X^x(s))η^x_y_1,y_2(s) +(7+κ)(1+ε_0)∇σ (X^x(s))η^x_y_1,y_2(s)_HS^2] s +K(κ) E∫_0^te^λ s|η^x_y_1,y_2(s)|^7+κ∇^2 b(X^x(s))_⊗|η^x_y_1(s)||η^x_y_2(s)| s +K(κ,ε_0) E∫_0^te^λ s|η^x_y_1,y_2(s)|^6+κ|η^x_y_1(s)|^2|η^x_y_2(s)|^2 s. Further, taking ε_0≪ 1 and using (<ref>), we get E(e^λ t|η^x_y_1,y_2(t)|^8+κ) ≤ (λ-(4+κ/2)L_5) E∫_0^te^λ s|η^x_y_1,y_2(s)|^8+κ s+K(κ) E∫_0^te^λ s|η^x_y_1,y_2(s)|^7+κ∇^2 b(X^x(s))_⊗|η^x_y_1(s)||η^x_y_2(s)| s +K(κ) E∫_0^te^λ s|η^x_y_1,y_2(s)|^6+κ|η^x_y_1(s)|^2|η^x_y_2(s)|^2 s. It follows from the Young inequality ab≤ε a^p+K(ε)b^q with a,b≥ 0, 1/p+1/q=1, p,q>1 and the Hölder inequality that for any ε,ε'>0, E (|η^x_y_1,y_2(s)|^7+κ∇^2 b(X^x(s))_⊗|η^x_y_1(s)||η^x_y_2(s)|) ≤ ε E|η^x_y_1,y_2(s)|^8+κ+K(ε) E[(∇^2 b(X^x(s))_⊗|η^x_y_1(s)||η^x_y_2(s)|)^8+κ] ≤ ε E|η^x_y_1,y_2(s)|^8+κ+K(ε) ( E|η^x_y_1(s)|^(8+κ)(2+ε'))^1/2+ε'( E|η^x_y_2(s)|^(8+κ)(2+ε'))^1/2+ε' ·( E∇^2 b(X^x(s))_⊗^(8+κ)(1+2/ε'))^ε'/2+ε'. Taking sufficiently small κ and ε', from Assumption <ref>, Proposition <ref>(1) and (<ref>) it follows that for any ε>0, E (|η^x_y_1,y_2(s)|^7+κ∇^2 b(X^x(s))_⊗|η^x_y_1(s)||η^x_y_2(s)|) ≤ ε E|η^x_y_1,y_2(s)|^8+κ+K(ε)(1+|x|^(8+κ)q')|y_1|^8+κ|y_2|^8+κe^-Ks. Similarity, for any ε>0, E(|η^x_y_1,y_2(s)|^6+κ|η^x_y_1(s)|^2|η^x_y_2(s)|^2) ≤ ε E|η^x_y_1,y_2(s)|^8+κ+K(ε)( E|η^x_y_1(s)|^2(8+κ))^1/2( E|η^x_y_2(s)|^2(8+κ))^1/2 ≤ ε E|η^x_y_1,y_2(s)|^8+κ+K(ε)|y_1|^8+κ|y_2|^8+κe^-Ks. 
Plugging (<ref>)-(<ref>) into (<ref>), and taking sufficiently small κ_2, λ_2 and ε, one has E(e^λ_2t|η^x_y_1,y_2(t)|^8+κ_2)≤ -K E∫_0^te^λ_2s|η^x_y_1,y_2(s)|^8+κ_2 s+K(1+|x|^(8+κ_2)q')|y_1|^8+κ|y_2|^8+κ, which produces (<ref>). Further, η^x_y_1,y_2,y_3 solves the following SDE η^x_y_1,y_2,y_3(t)= ∇ b(X^x(t))η^x_y_1,y_2,y_3(t) t+∇^2 b(X^x(t))(η^x_y_1(t),η^x_y_2,y_3(t)) t +∇^2 b(X^x(t))(η^x_y_2(t),η^x_y_1,y_3(t)) t+∇^2 b(X^x(t))(η^x_y_3(t),η^x_y_1,y_2(t)) t +∇^3 b(X^x(t))(η^x_y_1(t),η^x_y_2(t),η^x_y_3(t)) t+ ∇σ(X^x(t))η^x_y_1,y_2,y_3(t) W(t) +∇^2 σ(X^x(t))(η^x_y_1(t),η^x_y_2,y_3(t)) W(t) +∇^2 σ(X^x(t))(η^x_y_2(t),η^x_y_1,y_3(t)) W(t) +∇^2 σ(X^x(t))(η^x_y_3(t),η^x_y_1,y_2(t)) W(t) +∇^3σ (X^x(t))(η^x_y_1(t),η^x_y_2(t),η^x_y_3(t)) W(t), η^x_y_1,y_2,y_3(0)= 0. By the same argument for deriving (<ref>), using Itô formula, (<ref>) and σ∈ C^4_b( R^d), we have that for any κ,λ∈(0,1), E(e^λ t|η^x_y_1,y_2,y_3(t)|^4+κ) ≤ (λ-(2+κ/2)L_5) E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^4+κ s + K(κ) E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^3+κ∇^2 b(X^x(s))_⊗(|η^x_y_1(s)||η^x_y_2,y_3(s)|+|η^x_y_2(s)||η^x_y_1,y_3(s)|+|η^x_y_3(s)||η^x_y_1,y_2(s)|) s + K(κ) E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^3+κ∇^3 b(X^x(s))_⊗|η^x_y_1(s)||η^x_y_2(s)||η^x_y_3(s)| s + K(κ) E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^2+κ(|η^x_y_1(s)|^2|η^x_y_2,y_3(s)|^2+|η^x_y_2(s)|^2|η^x_y_1,y_3(s)|^2 +|η^x_y_3(s)|^2|η^x_y_1,y_3(s)|^2+|η^x_y_1(s)|^2|η^x_y_2(s)|^2|η^x_y_3(s)|^2) s. =: (λ-(2+κ/2)L_5) E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^4+κ s+I_1(t)+I_2(t)+I_3(t). It follows from the Young inequality, Hölder inequality, (<ref>)-(<ref>), Assumption <ref> and Proposition <ref>(1) that for sufficiently small κ,ε,ε', E(|η^x_y_1,y_2,y_3(s)|^3+κ∇^2 b(X^x(s))_⊗|η^x_y_χ(1)(s)||η^x_y_χ(2),y_χ(3)(s)|) ≤ ε E|η^x_y_1,y_2,y_3(s)|^4+κ+K(ε)( E|η^x_y_χ(1)(s)|^(4+κ)(2+ε'))^1/2+ε'( E|η^x_y_χ(2),χ(3)(s)|^(4+κ)(2+ε'))^1/2+ε' ·( E∇^2 b(X^x(s))^(4+κ)(1+2/ε')_⊗)^ε'/2+ε' ≤ ε E|η^x_y_1,y_2,y_3(s)|^4+κ+K(ε)(1+|x|^2q'(4+κ))(|y_1||y_2||y_3|)^4+κe^-Ks, where (χ(1),χ(2),χ(3)) is any permutation of (1,2,3). Thus, for κ,λ,ε≪1, I_1(t)≤ K(κ)ε E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^4+κ s+K(κ,ε)(1+|x|^2q'(4+κ))(|y_1||y_2||y_3|)^4+κ. Similarly, it can be verified that for κ,λ,ε≪1, I_2(t)≤ Kε E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^4+κ s+K(ε)(1+|x|^q'(4+κ))(|y_1||y_2||y_3|)^4+κ, I_3(t)≤ Kε E∫_0^te^λ s|η^x_y_1,y_2,y_3(s)|^4+κ s+K(ε)(1+|x|^q'(4+κ))(|y_1||y_2||y_3|)^4+κ. Plugging (<ref>)-(<ref>) into (<ref>) yields (<ref>). Finally, by means of an analogous proof for (<ref>), we obtain (<ref>). Thus, the proof is finished. Let Assumptions <ref>-<ref> hold and h∈ C_b^4( R^d). Let φ be the function defined by (<ref>). Then, for any x∈ R^d, |φ(x)|≤ K(1+|x|), ∇^i φ(x)_⊗≤ K(1+|x|^(i-1)q'), i=1,2,3,4. Moreover, φ is a solution to the Poisson equation L φ=h-π(h). By (<ref>), |φ(x)|≤ K(h)(1+|x|)∫_0^∞ e^-λ_1 t t≤ K(h)(1+|x|), which indicates that φ is well defined and (<ref>) holds. Denoting u(t,x):= Eh(X^x(t)), we have that for any x,y_1∈ R^d, ∇_x u(t,x)y_1= E(∇ h(X^x(t))η^x_y_1(t)), due to the definition of η^x_y_1 and h∈ C^1_b( R^d). It follows from (<ref>), h∈ C_b^4( R^d) and the Hölder inequality that |∇_x u(t,x)y_1|≤ K E|η^x_y_1(t)|≤ K|y_1|e^-C_2t. By the arbitrariness of y_1, |∇_x u(t,x)|≤ Ke^-C_2t, which implies |∇φ(x)|≤∫_0^∞|∇_x u(t,x)| t≤ K. Further, ∇ ^2_x u(t,x)(y_1,y_2)= E(∇ h(X^x(t))η^x_y_1,y_2(t)+∇^2 h(X^x(t))(η^x_y_1(t),η^x_y_2(t))) for any x,y_1,y_2∈ R^d. Then (<ref>), h∈ C_b^4( R^d) and the Hölder inequality yield |∇ ^2_x u(t,x)(y_1,y_2)|≤ K E|η^x_y_1,y_2(t)|+K E|η^x_y_1(t)||η^x_y_2(t)|≤ K(1+|x|^q')|y_1||y_2|e^-C_2t. 
This gives ∇ ^2_x u(t,x)_HS≤ K(1+|x|^q')e^-C_2t and thus ∇ ^2 φ(x)_HS≤ K(1+|x|^q'). Similarly, it can be verified that (<ref>) holds for i=3,4. By the Itô formula, Eh(X^x(t))=h(x)+∫_0^t E Lh(X^x(s)) s, which gives E Lh(X^x(t))=/ t Eh(X^x(t)), i.e., Lu(t,x)=/ tu(t,x). It follows from (<ref>), (<ref>) and the previous estimates for ∇_x u(t,x) and ∇_x^2u(t,x) that | Lu(t,x)|≤ K(1+|x|^q+|x|^q')e^-C_2t. Thus, we can exchange the operator L and the integration in t in L∫_0^∞ (u(t,x)-π(h)) t. Accordingly, using (<ref>) and (<ref>) yields that for any x∈ R^d, Lφ(x) =-∫_0^∞ Lu(t,x) t=-∫_0^∞/ tu(t,x) t =u(0,x)-lim_t→+∞u(t,x) =h(x)-lim_t→+∞ Eh(X^x(t))=h(x)-π(h). This finishes the proof. §.§ Detailed proof In this part, we give the proof of Theorem <ref>. As mentioned previously, we will split 1/√(τ)(Π_τ,2(h)-π(h)) into the sum of a martingale difference series and a negligible remainder, based on the Poisson equation (<ref>). Proof of Theorem <ref>. For convenience of notation, we denote m=τ^-2, with τ being sufficiently small. By (<ref>), we have 1/√(τ)(Π_τ,2(h)-π(h)) =τ^-1/21/m∑_k=0^m-1(h(X̅^x_k)-π(h))=τ^3/2∑_k=0^m-1 Lφ(X̅^x_k) =τ^1/2∑_k=0^m-1( Lφ(X̅^x_k)τ-(φ(X̅^x_k+1)-φ(X̅^x_k)))+τ^1/2(φ(X̅^x_m)-φ(x)). Lemma <ref> enables us to apply the Taylor expansion for φ: φ(X̅^x_k+1)-φ(X̅^x_k) = ∇φ(X̅^x_k),ΔX̅^x_k+1/2∇^2φ(X̅^x_k),ΔX̅^x_k(ΔX̅^x_k)^⊤_HS +1/2∫_0^1(1-θ)^2∇^3φ(X̅_k^x+θΔX̅^x_k)(ΔX̅^x_k,ΔX̅^x_k,ΔX̅^x_k)θ, where ΔX̅^x_k:=b(X̅^x_k+1)τ+σ(X̅^x_k)Δ W_k, k=0,1,…,m. It follows from (<ref>) and the above formulas that 1/√(τ)(Π_τ,2(h)-π(h))= H_τ+ R_τ, where H_τ and R_τ are given by H_τ:=-τ^1/2∑_k=0^m-1∇φ(X̅^x_k),σ(X̅^x_k)Δ W_k, R_τ=∑_i=1^6R_τ,i, with R_τ,1:= τ^1/2(φ(X̅^x_m)-φ(x)), R_τ,2:= -τ^3/2∑_k=0^m-1∇φ(X̅^x_k),b(X̅^x_k+1)-b(X̅^x_k), R_τ,3:= 1/2τ^1/2∑_k=0^m-1∇^2φ(X̅^x_k),σ(X̅^x_k)(τ I_D-Δ W_kΔ W_k^⊤)σ(X̅^x_k)^⊤_HS, R_τ,4:= -1/2τ^5/2∑_k=0^m-1∇^2φ(X̅^x_k),b(X̅^x_k+1)b(X̅^x_k+1)^⊤_HS, R_τ,5:= - τ^3/2∑_k=0^m-1∇^2φ(X̅^x_k),b(X̅^x_k+1)(σ(X̅^x_k)Δ W_k)^⊤_HS, R_τ,6:= -1/2τ^1/2∑_k=0^m-1∫_0^1(1-θ)^2∇^3φ(X̅^x_k+θΔX̅^x_k)(ΔX̅^x_k,ΔX̅^x_k,ΔX̅^x_k)θ. By Lemmas <ref>-<ref> below and the Slutsky theorem, 1/√(τ)(Π_τ,2(h)-π(h))d⟶ N(0,π(|σ^⊤∇φ|^2)) as τ→ 0 and the proof is complete. □ Suppose that Assumptions <ref>-<ref> hold. Then for any x∈ R^d, H_τd⟶ N(0,π(|σ^⊤∇φ|^2)) as τ→ 0. Recall that H_τ:=-τ^1/2∑_k=0^m-1∇φ(X̅^x_k),σ(X̅^x_k)Δ W_k with m=τ^-2. According to <cit.>, it suffices to show that lim_τ→0τ Emax_0≤ k≤ m-1|Z_k|^2=0, τ∑_k=0^m-1|Z_k|^2 P⟶π(|σ^⊤∇φ|^2) as τ→ 0, where Z_k:=∇φ(X̅^x_k),σ(X̅^x_k)Δ W_k, k=0,1,…,m-1. It follows from the boundedness of σ and (<ref>) that τ Emax_0≤ k≤ m-1|Z_k|^2 ≤ τ Emax_0≤ k≤ m-1(|Z_k|^2 1_{|Z_k|^2≤ 1})+τ Emax_0≤ k≤ m-1(|Z_k|^2 1_{|Z_k|^2>1}) ≤ τ+τ∑_k=0^m-1 E(|Z_k|^2 1_{|Z_k|^2>1}) ≤τ +τ∑_k=0^m-1 E|Z_k|^4 ≤ τ+Kτ∑_k=0^m-1 E|Δ W_k|^4≤τ +Kτ^3m≤ Kτ, which implies (<ref>). By (<ref>), for any x,y∈ R^d, |∇φ(x)-∇φ(y)|= |∫_0^1∇^2φ(x+θ(y-x))(y-x)θ|≤ K(1+|x|^q'+|y|^q')|x-y|, which together with the assumptions on σ gives |σ^⊤∇φ|^2∈ Poly(q'+1, R^d). As a result of (<ref>), |π_τ(|σ^⊤∇φ|^2)-π(|σ^⊤∇φ|^2)|≤ Kτ^1/2. Thus, once we show that τ∑_k=0^m-1|Z_k|^2-π_τ(|σ^⊤∇φ|^2) P⟶0 as τ→ 0, we obtain (<ref>) and complete the proof. According to (<ref>) and (<ref>), | E(|σ(X̅_k^x)^⊤∇φ(X̅_k^x)|^2)-π_τ(|σ^⊤∇φ|^2)|≤ K(1+|x|^q'+1)e^-ξ_1kτ,  k≥ 0.
By the above formula and the property of conditional expectations, for any j≥ i, | E_i(|σ(X̅^x_j)^⊤∇φ(X̅^x_j)|^2)-π_τ(|σ^⊤∇φ|^2)|=| E_i(|σ(X̅_j^i,X̅^x_i)^⊤∇φ(X̅_j^i,X̅^x_i)|^2)-π_τ(|σ^⊤∇φ|^2)| = |( E(|σ(X̅_j^i,y)^⊤∇φ(X̅_j^i,y)|^2)-π_τ(|σ^⊤φ|^2))|_y=X̅^x_i| ≤ K(1+|X̅^x_i|^q'+1)e^-ξ_1(j-i)τ. Hereafter, we denote by E_i(·) the conditional expectation E(·| F_t_i), i≥ 0. Further, E(τ∑_k=0^m-1|Z_k|^2-π_τ(|σ^⊤∇φ|^2)^2= E(τ^2∑_k=0^m-1(τ^-1|Z_k|^2-π_τ(|σ^⊤∇φ|^2))^2 = τ^4∑_i=0^m-1 E(τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2))^2 +2τ^4∑_0≤ i<j≤ m-1 E[(τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2))(τ^-1|Z_j|^2-π_τ(|σ^⊤∇φ|^2))]. It follows from the boundedness of σ, (<ref>), Proposition <ref>(2), (<ref>) and (<ref>) that for τ∈(0,1) and i≥ 0, ≤ E(τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2))^2≤ 2τ^-2 E|Z_i|^4+2(π_τ(|σ^⊤∇φ|^2))^2 ≤ K+4(π_τ(|σ^⊤∇φ|^2)-π(|σ^⊤∇φ|^2))^2+4(π(|σ^⊤∇φ|^2))^2 ≤ K+Kτ+K(π(|·|^q'+1))^2≤ K. By the property of conditional expectations, E_j|Z_j|^2=( E x,Δ W_j^2)|_x=σ(X̅^x_j)^⊤∇φ(X̅^x_j)=τ |σ(X̅^x_j)^⊤∇φ(X̅^x_j)|^2. Thus, E_i+1|Z_j|^2= E_i+1( E_j|Z_j|^2)=τ E_i+1|σ(X̅^x_j)^⊤∇φ(X̅^x_j)|^2 for any j>i. Combining the above relation, (<ref>), (<ref>) and Theorem <ref>, we have that for j>i and τ<1, | E[(τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2))(τ^-1|Z_j|^2-π_τ(|σ^⊤∇φ|^2))]| = | E[(τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2))(τ^-1 E_i+1|Z_j|^2-π_τ(|σ^⊤∇φ|^2))]| ≤ E[|τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2)|| E_i+1(|σ(X̅^x_j)^⊤∇φ(X̅^x_j)|^2)-π_τ(|σ^⊤∇φ|^2)|] ≤ Ke^-ξ_1(j-i-1)τ E[|τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2)|(1+|X̅^x_i+1|^q'+1)] ≤ Ke^-ξ_1(j-i)τ( E|τ^-1|Z_i|^2-π_τ(|σ^⊤∇φ|^2)|^2)^1/2(1+ ( E|X̅^x_i+1|^2q'+2)^1/2) ≤ K(x)e^-ξ_1(j-i)τ. Plugging (<ref>)-(<ref>) into (<ref>) yields E(τ∑_k=0^m-1|Z_k|^2-π_τ(|σ^⊤∇φ|^2)^2 ≤ Kτ^4m+K(x)τ^4∑_0≤ i<j≤ m-1 e^-ξ_1(j-i)τ = Kτ^2+K(x)τ^4∑_i=0^m-1∑_j=i+1^m-1e^-ξ_1(j-i)τ ≤ Kτ^2+K(x)τ^4m∑_j=1^∞e^-ξ_1jτ≤ K(x)τ→0 as τ→ 0, which leads to (<ref>) and finishes the proof. Suppose that Assumptions <ref>-<ref> hold. Then for any x∈ R^d, R_τ P⟶0 as τ tends to 0. We will prove lim_τ→0 E| R_τ|=0 to obtain the conclusion. Estimate of R_τ,1. By Theorem <ref>, (<ref>) and (<ref>), E| R_τ,1|≤ Kτ^1/2(1+sup_n≥ 0 E|X̅^x_n|)≤ K(x)τ^1/2. Estimate of R_τ,2. By means of (<ref>), Assumption <ref>, Theorem <ref>, (<ref>) and the Hölder inequality, we have that for any p≥ 1, i=2,3,4 and j=1,2, sup_k≥0 E|b(X̅^x_k)|^p ≤ K(1+sup_k≥0 E|X̅^x_k|^pq)≤ K(1+|x|^pq), sup_k≥0 E∇^j b(X̅^x_k)_⊗^p ≤ K(1+sup_k≥0 E|X̅^x_k|^pq')≤ K(1+|x|^pq'), sup_k≥0 E∇^iφ(X̅^x_k)_⊗^p ≤ K(1+sup_k≥0 E|X̅^x_k|^(i-1)pq')≤ K(1+|x|^(i-1)pq'). Noting that b(X̅^x_k+1)-b(X̅^x_k)=∇ b(X̅^x_k)ΔX̅^x_k+∫_0^1(1-θ)∇^2 b(X̅^x_k+θΔX̅^x_k)(ΔX̅^x_k,ΔX̅^x_k)θ, one obtains from (<ref>) that R_τ,2= -τ^3/2∑_k=0^m-1∇φ(X̅^x_k),∇ b(X̅_k)σ(X̅^x_k)Δ W_k -τ^5/2∑_k=0^m-1∇φ(X̅^x_k),∇ b(X̅^x_k)b(X̅^x_k+1) -τ^3/2∑_k=0^m-1∫_0^1(1-θ)∇φ(X̅^x_k),∇^2b(X̅^x_k+θΔX̅^x_k)(ΔX̅^x_k,ΔX̅^x_k)θ =: R_τ,2^1+ R_τ,2^2+ R_τ,2^3. By the property of conditional expectations, for i<j, E[∇φ(X̅^x_i),∇ b(X̅^x_i)σ(X̅^x_i)Δ W_i∇φ(X̅^x_j),∇ b(X̅^x_j)σ(X̅^x_j)Δ W_j] = E[∇φ(X̅^x_i),∇ b(X̅^x_i)σ(X̅^x_i)Δ W_i∇φ(X̅^x_j),∇ b(X̅^x_j)σ(X̅^x_j) E_j(Δ W_j)]=0. The above relation, combined with the boundedness of σ, (<ref>) and (<ref>), gives E| R_τ,2^1|^2 =τ^3∑_k=0^m-1 E∇φ(X̅^x_k),∇ b(X̅^x_k)σ(X̅^x_k)Δ W_k^2 ≤ Kτ^4∑_k=0^m-1 E|∇ b(X̅^x_k)|^2≤ K(x)τ ^2. Applying the Hölder inequality, (<ref>) and (<ref>)-(<ref>), we have E| R_τ,2^2|≤ Kτ^5/2∑_k=0^m-1( E|∇ b(X̅^x_k)|^2)^1/2( E| b(X̅^x_k+1)|^2)^1/2≤ K(x)τ^1/2. 
Further, for any p≥ 1 and k≥ 0, it follows from the Minkowski inequality, (<ref>) and the boundedness of σ that ( E|ΔX̅^x_k|^p)^1/p≤τ( E|b(X̅^x_k+1)|^p)^1/p+K( E|Δ W_k|^p)^1/p≤ K(1+|x|^q)τ^1/2. This together with the Hölder inequality, Assumption <ref> and Theorem <ref> yields E| R_τ,2^3|≤ Kτ^3/2∑_k=0^m-1( E|ΔX̅^x_k|^4)^1/2(1+( E|ΔX̅^x_k|^2q')^1/2+( E|X̅^x_k|^2q')^1/2)≤ K(x)τ^1/2. In this way, we get E| R_τ,2|≤ ( E| R_τ,2^1|^2)^1/2+ E| R_τ,2^2|+ E| R_τ,2^3|≤ K(x)τ^1/2. Estimate of R_τ,3. Notice that that for i<j, E[∇^2φ(X̅^x_i),σ(X̅^x_i)(τ I_D-Δ W_iΔ W_i^⊤)σ(X̅^x_i)^⊤_HS ·∇^2φ(X̅^x_j),σ(X̅^x_j)(τ I_D-Δ W_jΔ W_j^⊤)σ(X̅^x_j)^⊤_HS] = E[∇^2φ(X̅^x_i),σ(X̅^x_i)(τ I_D-Δ W_iΔ W_i^⊤)σ(X̅^x_i)^⊤_HS ·∇^2φ(X̅^x_j),σ(X̅^x_j) E_j(τ I_D-Δ W_jΔ W_j^⊤)σ(X̅^x_j)^⊤_HS]=0. Combining (<ref>), (<ref>), the boundedness of σ and (<ref>), we arrive at E| R_τ,3|^2 =τ/4∑_k=0^m-1 E∇^2φ(X̅^x_k),σ(X̅^x_k)(τ I_D-Δ W_kΔ W_k^⊤)σ(X̅^x_k)^⊤_HS^2 ≤ Kτ∑_k=0^m-1 E(∇^2φ(X̅^x_k)^2_HS(τ^2+|Δ W_k|^4)) ≤ Kτ∑_k=0^m-1( E∇^2φ(X̅^x_k)^4_HS)^1/2(τ^2+( E|Δ W_k|^8)^1/2)≤ K(x)τ. Estimate of R_τ,4. By (<ref>), (<ref>), (<ref>) and the Hölder inequality, E|R_τ,4| ≤ Kτ^5/2∑_k=0^m-1( E|∇^2φ(X̅^x_k)|^2)^1/2( E|b(X̅^x_k+1)|^4)^1/2≤ K(x)τ^5/2m≤ K(x)τ. Estimate of R_τ,5. We decompose R_τ,5 (see (<ref>)) into R_τ,5= R_τ,5^1+ R_τ,5^2 with R_τ,5^1 :=- τ^3/2∑_k=0^m-1∇^2φ(X̅^x_k),(b(X̅^x_k+1)-b(X̅^x_k))(σ(X̅^x_k)Δ W_k)^⊤_HS, R_τ,5^2 :=- τ^3/2∑_k=0^m-1∇^2φ(X̅^x_k),b(X̅^x_k)(σ(X̅^x_k)Δ W_k)^⊤_HS. By the Hölder inequality, (<ref>), (<ref>), Theorem <ref> and (<ref>), E| R_τ,5^1| ≤ Kτ^3/2msup_k≥0( E|∇^2φ(X̅^x_k)|^3)^1/3( E|Δ W_k|^3)^1/3( E|b(X̅^x_k+1)-b(X̅^x_k)|^3)^1/3 ≤ K(x)(1+sup_k≥0( E|X̅^x_k|^6q-6)^1/6)sup_k≥0( E|ΔX̅^x_k|^6)^1/6 ≤ K(x)τ^1/2. Similar to (<ref>), one has that for i<j, E [ ∇^2φ(X̅^x_i),b(X̅^x_i)(σ(X̅^x_i)Δ W_i)^⊤_HS·∇^2φ(X̅^x_j),b(X̅^x_j)(σ(X̅^x_j)Δ W_j)^⊤_HS]=0. The above formula, combined with (<ref>), (<ref>) and the Hölder inequality, yields E| R_τ,5^2|^2 =τ^3∑_k=0^m-1 E∇^2φ(X̅^x_k),b(X̅^x_k)(σ(X̅^x_k)Δ W_k)^⊤_HS^2 ≤ Kτ^3∑_k=0^m-1( E|∇^2φ(X̅^x_k)|^6)^1/3( E|b(X̅^x_k)|^6)^1/3( E|Δ W_k|^6)^1/3 ≤ K(x)τ^2. Thus, E| R_τ,5|≤ E| R^1_τ,5|+( E| R^2_τ,5|^2)^1/2≤ K(x)τ^1/2. Estimate of R_τ,6. Plugging ΔX̅^x_k=b(X̅^x_k+1)τ+σ(X̅^x_k)Δ W_k into (<ref>) gives R_τ,6=∑_i=1^4 R_τ,6^i with R_τ,6^1 :=-τ^7/2/2∑_k=0^m-1∫_0^1(1-θ)^2∇^3φ(X̅^x_k+θΔX̅^x_k)(b(X̅^x_k+1),b(X̅^x_k+1),b(X̅^x_k+1))θ, R_τ,6^2 :=-3τ^5/2/2∑_k=0^m-1∫_0^1(1-θ)^2∇^3φ(X̅^x_k+θΔX̅^x_k)(b(X̅^x_k+1),b(X̅^x_k+1),σ(X̅^x_k)Δ W_k)θ, R_τ,6^3 :=-3τ^3/2/2∑_k=0^m-1∫_0^1(1-θ)^2∇^3φ(X̅^x_k+θΔX̅^x_k)(b(X̅^x_k+1),σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k)θ, R_τ,6^4 :=-τ^1/2/2∑_k=0^m-1∫_0^1(1-θ)^2∇^3φ(X̅^x_k+θΔX̅^x_k)(σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k)θ. Similar to the derivation of (<ref>), one can use (<ref>), (<ref>) and Theorem <ref> to get that for any p≥1 and τ<1, E∇^3φ(X̅^x_k+θΔX̅^x_k)^p_⊗≤ K(1+|x|^2pq'q), θ∈[0,1]. By (<ref>), (<ref>) and the Hölder inequality, one has E| R_τ,6^1|≤ K(x)τ^3/2, E| R_τ,6^2|≤ K(x)τ, E| R_τ,6^3|≤ K(x)τ^1/2. Further, applying the Taylor expansion for ∇^3φ, we write R_τ,6^4= R_τ,6^4,1+ R_τ,6^4,2, where R_τ,6^4,1 :=-τ^1/2/6∑_k=0^m-1∇^3φ(X̅^x_k)(σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k), R_τ,6^4,2 :=-τ^1/2/2∑_k=0^m-1∫_0^1∫_0^1∇^4φ(X̅^x_k+rθΔX̅^x_k)(σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k,ΔX̅^x_k) r θ(1-θ)^2θ. Similar to the proof for (<ref>), we have that for any p≥1 and τ<1, sup_k≥0 E∇^4φ(X̅^x_k+rθΔX̅^x_k)^p_⊗≤ K(1+|x|^3pq'q), r,θ∈[0,1]. 
This together with the Hölder inequality and (<ref>) gives E| R_τ,6^4,2|≤ Kτ^1/2msup_k≥0[sup_r,θ∈[0,1]( E∇^4φ(X̅^x_k+rθΔX̅^x_k)^3_⊗)^1/3( E|Δ W_k|^9)^1/3( E|ΔX̅_k^x|^3)^1/3]≤ K(x)τ^1/2. Since Δ W_j is independent of F_t_j, for any i<j, E[∇^3φ(X̅^x_i)(σ(X̅^x_i)Δ W_i,σ(X̅^x_i)Δ W_i,σ(X̅^x_i)Δ W_i)∇^3φ(X̅^x_j)(σ(X̅^x_j)Δ W_j,σ(X̅^x_j)Δ W_j,σ(X̅^x_j)Δ W_j)] = E[∇^3φ(X̅^x_i)(σ(X̅^x_i)Δ W_i,σ(X̅^x_i)Δ W_i,σ(X̅^x_i)Δ W_i) E_j[∇^3φ(X̅^x_j)(σ(X̅^x_j)Δ W_j,σ(X̅^x_j)Δ W_j,σ(X̅^x_j)Δ W_j)]] = 0, where we used the property of conditional expectations and E(Δ W_j^p_1Δ W_j^p_2Δ W_j^p_3)=0 ∀  p_1,p_2,p_3∈{1,2,…,D}, with Δ W_j^r being the rth component of Δ W_j. In this way, we get E| R_τ,6^4,1|^2=τ/36∑_k=0^m-1 E[∇^3φ(X̅^x_k)(σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k,σ(X̅^x_k)Δ W_k)]^2≤ K(x)τ^2, due to (<ref>). Thus, it holds that E| R_τ,6^4|≤ K(x)τ^1/2 for τ<1, which combined with (<ref>) yields E| R_τ,6|≤ K(x)τ^1/2. Combining the above estimates for R_τ,i, i=1,…,6, we obtain lim_τ→0 E| R_τ|=0. This gives the desired conclusion. § NUMERICAL EXPERIMENTS In this section, we perform numerical experiments to verify our theoretical results. First, for a given test function h, we obtain the approximation of the ergodic limit π(h) numerically by virtue of the fact lim_t→∞ E(h(X(t)))=π(h) (see (<ref>)). Here, lim_t→∞ E(h(X(t))) is simulated by the numerical solution {X̅_n}_n≥0 of the BEM method. More precisely, we take the step-size τ small enough and N sufficiently large, and use the Monte Carlo method to simulate the expectation. Then we have lim_t→∞ E(h(X(t)))≈1/M∑_i=1^Mh(X̅_N^i), with {X̅_N^i}_i=1^M being M samples of X̅_N. Second, we verify the CLT for Π_τ,α, α∈(1,2]. Denote Z_τ,α(h)= 1/τ^α-1/2(1/τ^-α∑_k=0^τ^-α-1h(X̅_k)-π(h)). Then, the CLT shows that for any f∈ C_b( R^d), lim_τ→0 Ef(Z_τ,α(h))=∫_ R^df(x) N(0,π(|σ^⊤∇φ|^2))( x). We will numerically verify that Ef(Z_τ,α(h)) tends to some constant as τ decreases. Example 5.1. Consider the following SODE with Lipschitz diffusion coefficient: dX(t)=-(X^3(t)+8X(t)) dt+sin(X(t)) dW(t), X(0)=x∈ R. It is not difficult to verify that the coefficients of the above equation satisfy Assumptions <ref>-<ref>. First, we numerically simulate the ergodic limit π(h) using the aforementioned method. The expectation is realized by 5000 sample paths. Fig. <ref> displays the evolution of Eh(X̅_n) w.r.t. n starting from different initial values. It is observed that the ergodic limits are 1 and 0 for h=sin(x)+1 and h=x^4, respectively. Tables <ref>-<ref> show the evolution of Ef(Z_τ,2(h)) w.r.t. τ, where the initial value is x=1 for Tables <ref>-<ref> and x=-2 for Tables <ref>-<ref>. It is observed that in all cases Ef(Z_τ,2(h)) tends to some constant as τ decreases. We also find that the CLT of Π_τ,2 holds for h of super-linear growth; see Section <ref> for a discussion of this problem. Example 5.2. Consider the following SODE with non-Lipschitz diffusion coefficient: dX(t)=-(X^3(t)+10X(t)) dt+0.5X^2(t) dW(t), X(0)=x∈ R. Notice that the above equation satisfies Assumptions 2.1-2.4 of <cit.>. Thus, {X(t)}_t≥0 admits a unique invariant measure π. Fig. <ref> displays the evolution of Eh(X̅_n) w.r.t. n starting from different initial values. In this case, the numerical ergodic limit is 0. Table <ref> reflects the evolution of Ef(Z_τ,α) as τ decreases. It is observed in Table <ref> that Ef(Z_τ,α) tends to 0 for the three different parameters α=1.2,1.5,2.
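To make the experimental setup above concrete, the following is a minimal Python sketch (our illustration, not the code used for the paper) of how one could realize the BEM method for Example 5.1 and sample Z_τ,α(h). The solver iteration count, sample sizes, and the reference value π(h)=1 for h=sin(x)+1 (read off from Fig. <ref>) are illustrative assumptions. The implicit BEM step solves a strictly monotone cubic by Newton's method, so the update is well defined for every τ>0.

```python
import numpy as np

rng = np.random.default_rng(0)

def bem_step(x, dt, dW):
    # One backward Euler-Maruyama step for dX = -(X^3 + 8X) dt + sin(X) dW:
    # solve y + dt*(y^3 + 8y) = x + sin(x)*dW by Newton's method.
    r = x + np.sin(x) * dW
    y = r / (1.0 + 8.0 * dt)          # initial guess from the linearized drift
    for _ in range(30):               # the cubic is strictly increasing -> unique root
        g = y + dt * (y**3 + 8.0 * y) - r
        dg = 1.0 + dt * (3.0 * y**2 + 8.0)
        y = y - g / dg
    return y

def temporal_average(h, x0, dt, alpha):
    # Pi_{tau,alpha}(h) with N = tau^{-alpha} steps of the BEM method.
    n_steps = int(round(dt ** (-alpha)))
    x, acc = x0, 0.0
    for _ in range(n_steps):
        x = bem_step(x, dt, rng.normal(0.0, np.sqrt(dt)))
        acc += h(x)
    return acc / n_steps

h = lambda x: np.sin(x) + 1.0
pi_h = 1.0                  # assumed reference ergodic limit for this h
alpha, dt, M = 2.0, 0.05, 200

# Normalized deviation Z_{tau,alpha}(h) = (Pi_{tau,alpha}(h) - pi(h)) / tau^{(alpha-1)/2}.
Z = np.array([(temporal_average(h, 1.0, dt, alpha) - pi_h) / dt ** ((alpha - 1) / 2)
              for _ in range(M)])
print("sample mean / std of Z_{tau,alpha}(h):", Z.mean(), Z.std())
```

As τ decreases, the sample mean of Z should stabilize near 0 and the sample standard deviation near π(|σ^⊤∇φ|^2)^1/2, in line with the limiting normal distribution above.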
We remark that the CLT may still hold for the BEM method applied to SODEs with non-Lipschitz diffusion coefficients, as is numerically shown in this example. § CONCLUSIONS AND FUTURE WORK In this work, we prove the CLT for the temporal average of the BEM method, which characterizes the asymptotics of the BEM method in distribution. The drift coefficients of the underlying SODEs are allowed to grow super-linearly. Different proof strategies are used for different deviation orders, relying on the relationship between the deviation order and the optimal strong order of the BEM method. In fact, it is possible to weaken the conditions of Theorems <ref>-<ref>, and we point out the following two aspects. * Conditions on h. By revisiting the whole proof of Theorem <ref>, it is observed that the requirement on the test function h can be relaxed. If we let ∇ ^ih∈ Poly(q”, R^d), i=0,1,…,4 instead of h∈ C^4_b( R^d), then the main difference lies in the regularity of φ. In fact, it holds that ∇ ^iφ∈ Poly(L_0, R^d), i=0,1,…,4 for some integer L_0 dependent on q',q”. This makes no difference to the conclusions of Lemmas <ref> and <ref>, in view of Theorem <ref>. Thus, the CLT still holds for Π_τ,2(h) for a class of unbounded h. Similarly, Theorem <ref> also holds for ∇^i h∈ Poly(q”, R^d), i=0,1,…,4. These facts are also observed in the numerical experiments in Section <ref>. * Conditions on σ. Assume that σ is unbounded but globally Lipschitz. Let Assumption <ref> hold with c_1>15/2L_1^2 replaced by c_1 being sufficiently large; a numerical sanity check of this setting is sketched below. We can follow the same argument as in Theorem <ref> to give the pth moment boundedness for the BEM method. Roughly speaking, in this case, (<ref>) still holds. Similar to (<ref>), we obtain (1+pc_1τ)|X̅^x_n+1|^2p≤(|X̅^x_n|^2+2X̅^x_n,σ(X̅^x_n)Δ W_n +K(τ+|Δ W_n|^2)+K|X̅^x_n|^2|Δ W_n|^2)^p due to the linear growth of σ. By an analysis similar to that for (<ref>), one can show that E|X̅^x_n+1|^2p≤(1+A(p,D)τ)/(1+pc_1τ) E|X̅^x_n|^2p+K(p)(1+|x|^2p-2)τ/(1+pc_1τ) for some A(p,D)>0 dependent on p and D. Using the condition that c_1 is sufficiently large, one can finally obtain sup_n≥0 E|X̅^x_n|^r≤ K(1+|x|^r) for some r large enough. Thus, the other conclusions still hold on the basis of the moment boundedness of {X̅_n}_n≥0. Finally, one can establish the CLT for Π_τ,α(h) when σ is Lipschitz, provided that the dissipation parameter c_1 is sufficiently large. When σ is Lipschitz or of super-linear growth, it is interesting to study how to prove the pth (p>2) moment boundedness of the BEM method in the infinite time horizon for a relatively small c_1. We will study this problem in the future.
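As an empirical companion to the second bullet point above, here is a small sketch (not a proof) that monitors sup_n E|X̅_n|^2p for the BEM method when σ grows linearly. The values c_1=20, τ=0.01, and the horizon are illustrative assumptions; bounded running maxima over a long horizon are consistent with the conjectured moment boundedness for large c_1.

```python
import numpy as np

rng = np.random.default_rng(1)
c1, dt, n_steps, n_paths = 20.0, 0.01, 5000, 2000   # illustrative choices

def bem_step(x, dW):
    # Backward Euler-Maruyama step for dX = -(X^3 + c1*X) dt + X dW (sigma linear),
    # vectorized over paths; Newton's method on the strictly increasing cubic.
    r = x + x * dW
    y = r / (1.0 + c1 * dt)
    for _ in range(30):
        g = y + dt * (y**3 + c1 * y) - r
        dg = 1.0 + dt * (3.0 * y**2 + c1)
        y = y - g / dg
    return y

x = np.full(n_paths, 2.0)
running_max = np.zeros(3)            # tracks sup_n E|X_n|^{2p} for p = 1, 2, 3
for n in range(n_steps):
    x = bem_step(x, rng.normal(0.0, np.sqrt(dt), n_paths))
    moments = np.array([np.mean(np.abs(x) ** (2 * p)) for p in (1, 2, 3)])
    running_max = np.maximum(running_max, moments)
print("empirical sup_n E|X_n|^{2p}, p=1,2,3:", running_max)
```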
http://arxiv.org/abs/2307.05863v1
20230712012007
Extending free actions of finite groups on non-orientable surfaces
[ "Omar A. Cruz", "Gustavo Ortega", "Carlos Segovia" ]
math.GT
[ "math.GT", "57M60 (Primary), 57R85 (Secondary)" ]
http://arxiv.org/abs/2307.04587v1
20230710142900
Endotaxial Stabilization of 2D Charge Density Waves with Long-range Order
[ "Suk Hyun Sung", "Nishkarsh Agarwal", "Ismail El Baggari", "Yin Min Goh", "Patrick Kezer", "Noah Schnitzer", "Yu Liu", "Wenjian Lu", "Yuping Sun", "Lena F. Kourkoutis", "John T. Heron", "Kai Sun", "Robert Hovden" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
Some exotic crystals spontaneously reorganize their valence electrons into periodic structures known as charge density waves (CDWs). In essence, two crystals emerge—the underlying atomic lattice and the emergent charge lattice. Just like atomic crystals, a charge density wave has defects: dislocations, disclinations, and elastic deformation <cit.>. Furthermore, the charge density wave can undergo phase transitions wherein the charge lattice unit cell changes shape and size. All of this CDW reshaping and topological restructuring occurs even when the underlying atomic lattice remains unchanged. In low dimensions, these quantum phase transitions are promising candidates for novel devices <cit.>, efficient ultrafast non-volatile switching <cit.>, and suggest elusive chiral superconductivity <cit.>. Unfortunately, 2D CDWs are inherently unstable and accessing low-dimensional CDWs remains a challenge <cit.>. Even worse, at elevated temperatures where devices typically operate, disruption of charge density waves is all but guaranteed due to ever-present disorder <cit.>. A long-range ordered incommensurate CDW has yet to be reported. Here we stabilize ordered incommensurate charge density waves (oIC-CDW) at elevated temperatures (T_IC = 350 K) in two dimensions by endotaxial synthesis of polytype heterostructures. The estimated hundred-fold amplitude enhancement of the charge density wave has an increased coherence length comparable to the underlying atomic crystal. The enhanced order of the oIC-CDW increases electronic resistivity. This substantial enhancement of charge order is achieved through encapsulation of an isolated octahedral CDW layer within a matrix of prismatic metallic layers via 2D endotaxial synthesis.
Realizing the ordered incommensurate CDW reveals CDWs have hexatic structure at high-temperature—that is, long-range translational symmetry is limited by proliferation of topological defects (i.e., dislocations and disclinations) in CDWs. We show at high-temperatures, the CDWs in continuously melt as additional dislocations and disclinations form in the charge lattice. This hexatic CDW melting process was not previously observable since the incommensurate CDW normally emerges as a highly-disordered, melted state. By restoring order through 2D endotaxy, we can reversibly melt and unmelt CDWs in . Based on these results, we access new regimes of the CDW phase diagram for octahedrally coordinated in temperature vs disorder space. Similar vestigial ordering (i.e., hexaticity) was predicted by Nie, Tarjus and Kivelson <cit.>; however, with 2D endotaxy we can now tune down the disorder in the CDW phase diagram. § THE ORDERED INCOMMENSURATE CHARGE DENSITY WAVE The ordered incommensurate CDW (oIC) reported herein (Fig. <ref>a–d) is strikingly distinct from the well-known incommensurate (IC) CDW (Fig. <ref>e–h) found in 1T- or 1T-. Here, the oIC phase is a truly two-dimensional (2D) CDW with long-range positional and orientational order that couples strongly with the underlying crystal lattice (Fig. <ref>a). The oIC-CDW, illustrated in Figure <ref>b, is a crystalline charge-lattice with well-defined, sharp peaks in Fourier space (Fig. <ref>b-inset). This CDW charge-lattice (aCDW = 11.87 nm) exists within an underlying atomic lattice illustrated in Figure <ref>c. Electron–lattice interaction is an essential aspect of CDWs, and associated soft-phonon modes manifest as static periodic lattice distortions (PLDs) that reduce crystal symmetry and lower the electronic energy <cit.>. For , the CDW pulls atoms toward the nearest charge maximum to form periodic clusters of atoms (Fig. <ref>c). Notably for incommensurate charge ordering, each cluster is distinct since the atomic lattice is not commensurate with the CDW. While these lattice distortions are small (<10 pm), selected area electron diffraction (SAED) is sensitive to subtle picoscale distortions and making it a popular choice for characterization of CDW/PLDs  <cit.>. CDW/PLDs diffract incident swift electrons into distinct superlattice peaks decorating each Bragg peak <cit.>. In reciprocal space, the CDW charge lattice (Fig. <ref>b-inset) and the measurable atomic superlattice peaks (Fig. <ref>c-inset) have corresponding spacing, symmetry, and intensity. Diffracted superlattice peaks provide a direct measure of the CDW lattice and contain rich information on their order-disorder. Specifically, diffraction represents an ensemble average of the structure over the selected area, and disorder manifests as diffused diffraction peaks <cit.>. Disorder of CDWs smears superlattice peaks but leaves the principle Bragg peaks unaffected (Fig. <ref>g-inset). For oIC-CDWs, the charge lattice is ordered with limited defects, thus diffraction shows both sharp superlattice and Bragg peaks (Fig. <ref>c-inset). In contrast, the well-known IC-CDW in 1T- possesses significant disorder of its charge distribution. Across decades, the IC phase in 1T- is reported with a ring-like, azimuthally diffuse diffraction around each Bragg peak <cit.>, yet the origin of the diffused superlattice peaks is hardly discussed <cit.>. Here we present the well-known IC-CDW in bulk 1T- as a hexatically disordered charge lattice containing dislocations and disclinations (Fig. <ref>f). 
In-situ SAED of 1T- taken at 408 K (Fig. <ref>a) shows azimuthally blurred first order superlattice peaks (marked brown). Averaging all six third order Bragg peaks (inset, Γ_3) better highlights this point. Notably, hexatic phases are known to have six-fold rotationally symmetric, azimuthally diffused peaks <cit.>. The experimental diffraction of IC-CDWs are consistent with a hexatic charge distribution (Fig. <ref>f) <cit.> and corresponding azimuthally diffuse structure factor (Fig. <ref>f, g-inset). The IC-CDWs are three-dimensional (or quasi-2D) with non-negligible out-of-plane interactions (Fig. <ref>e–h). In contrast, the oIC-CDW, shows drastically sharper and stronger superlattice peaks measured by in-situ SAED at 408 K (Fig. <ref>b). Sharpening is especially highlighted in averaged third order Bragg peaks (Γ_3). The measured superlattice peaks of oIC-CDW are sharper both in azimuthal (by ∼60%) and radial (by ∼50%) directions when compared to the IC-CDW. Notably, the superlattice peak widths of the oIC phase is comparable to the peak widths of the principle Bragg peaks. Therefore, the oIC is a spatially coherent electronic crystal. The oIC-CDW, a 2D charge ordered state, is enhanced by at least one-hundred fold over previously reported bulk IC-CDWs. Diffracted superlattice peaks in oIC-CDWs have an integrated intensity over ten times stronger despite that the number of charge ordered layers has been reduced to less than 10% of the material. Thus, endotaxial engineering improves not only the long range order but also the charge order amplitude of the IC-CDW. The correlation of long-range order and CDW enhancement is measured directly via hexatic CDW melting later in this manuscript. § ENDOTAXIAL POLYTYPE HETEROSTRUCTURE OF The oIC-CDW phase reported herein is stabilized by synthesizing endotaxial polytype heterostructures of , where oIC-CDWs reside in monolayers of octahedrally coordinated (Oc-) embedded within prismatic (Pr-) matrix and one-to-one atomic registry (Fig. <ref>e). Endotaxial polytype heterostructures are synthesized by heating 1T- at ∼720 K for 15–30 min in an inert environment. Notably, 1T- is metastable and goes through Oc-to-Pr endotaxial layer-by-layer polytype transformation upon heating (≳ 620 K). In-situ SAEDs (Fig. <ref>c i–iv) were acquired at 20 seconds intervals at 408 K through the high temperature conversion process (723 K). These snapshots reveal sharpening of superlattice peaks—a clear indicator of enhanced CDW order. Cooling the sample midst transition stops the conversion and an interleaved polytype heterostructure is synthesized—confirmed by cross-sectional ADF-STEM. Figure <ref>d and e show atomic resolution micrographs of bulk 1T endotaxially converted to a polytype heterostructure. The atomic resolution images demonstrate endotaxial monolayer encapsulation of Oc- (Fig. <ref>e, highlighted red) in Pr-layers. The Pr- (bulk: 2H, 3R) are metallic above ∼100 K. Previous work showed these metallic layers decouple CDWs out-of-plane and raise the critical temperature for commensurate quantum states (i.e., C-CDW) from ∼200 K to ∼350 K <cit.>. Surprisingly, the endotaxial polytype heterostructure stabilizes long-range order in IC-CDWs at elevated (≳ 350 K) temperatures. The oIC-CDW phase has correlation length comparable to the crystal lattice, quantified by comparing widths of both superlattice and Bragg peaks from in-situ selected area electron diffraction patterns (SA aperture: 850 nm diameter). This indicates the CDW is relatively ordered (i.e. 
spatially coherent) over the distances comparable to the parent atomic crystal (∼102 nm). This enhancement of long-range CDW order is accompanied by a marked increase of the in-plane resistivity of the IC phase (Fig. <ref>f). Figure <ref>f shows temperature vs in-plane resistivity measurement of 1T (brown) and endotaxial (red) specimen. Resistivity of endotaxial is higher for IC-CDW phases (>358 K), despite having many metallic layers introduced to the system. This implies that oIC-CDWs have a much higher resistivity than hexatic-IC in 1T-. § HEXATIC MELTING OF IC-CDW Creating the oIC-CDW provides an ordered charge lattice that can be hexatically melted upon further heating. Hexatic melting is a uniquely 2D process wherein a crystal melts in two stages through the creation of dislocations and disclinations <cit.>. During this process the reciprocal space structure continuously evolves. Initially at lower-temperatures (c.a. 350 K), the oIC phase is an ordered charge crystal with well-defined peaks in reciprocal space (Fig. <ref>c). As temperature rises, the CDW peaks continuously blur azimuthally as the density of dislocations and disclinations increases (Fig. <ref>d, e). Azimuthal blurring of the reciprocal lattice is characteristic of hexatic phases and reflects the loss of translational symmetry while maintaining some orientational order <cit.>. Eventually, at higher temperatures (c.a. 570 K), the hexatic crystal completely dissociates into an amorphous liquid state with ring-like structure factor. Figure <ref>c–e, are generated using a phenomological Monte Carlo simulation wherein displacement of the CDW charge centers follow a temperature dependent Maxwell-Boltzmann probability distribution (See Methods). Here, the incommensurate CDW hexatically melts while the underlying atomic lattice remains unchanged—in diffraction this corresponds to a blurring of CDW superlattice peaks and preservation of Bragg peaks. During the hexatic melting of oIC-CDWs, superlattice peaks increasingly blur as temperature is raised—clearly visible in in-situ SAED at Fig. <ref>a-i) 473 K, Fig. <ref>a-ii) 523 K, and Fig. <ref>a-iii) 573 K. The blurring is anisotropic and more prominent along azimuthal directions as expected for hexatic phases. The CDW peaks are quantified throughout the melting process in Figure <ref>b. Azimuthal peak width (Fig. <ref>b, blue-triangles) increases continuously with temperature; roughly doubling when raised from 410 K to 570 K. Around 520 K the oIC has melted into a state that resembles the well-known IC-CDW for bulk . This CDW melting process is reversible and peaks sharpen when temperature is decreased. Notably, Bragg peaks do not show appreciable changes indicating only the electronic crystal is melting, not the atomic crystal. Although the CDW melting process appears hexatic, it is distinct from familiar liquid crystals, silica spheres, or atomic crystals wherein the amplitude of the order parameter does not change. Here, quantitative analysis of the superlattice peak intensities (Fig. <ref>a-red) reveals the charge density wave amplitude decreases with temperature. This is expected as topological defects in CDWs (dislocations and disclinations) have locally divergent strain with elastic energy cost that forces a local amplitude collapse. These local CDW amplitude collapses have been observed at the center of topologcal defects in the 3D charge ordering of manganites <cit.>. 
§ THE CDW PHASE DIAGRAM FOR OCTAHEDRAL Endotaxial synthesis of octahedrally coordinated allows access to new phases of matter and construction of a phase diagram for CDWs using temperature (T) and disorder (). The CDW phase diagram for 1T- is shown in Figure <ref>. 1T- exists with native disorder and the ordered, commensurate phase (C-CDW, Fig. <ref>g) is only observed at low-temperatures. At room temperature, the CDW is a partially-ordered NC phase (Fig. <ref>f) that enters the hexatic IC phase upon heating (Fig. <ref>e). At high-temperatures or high-disorder, CDWs degrade or vanish. The high disorder regime was historically achieved by substituting tantalum ions with other metal species (e.g. Ti, Nb) or by forcing intercalates within the van der Waals gap <cit.>. At room temperature, mild substitution of titanium (1T-Ta0.7Ti0.3S2) drives the system into hexatic-IC CDW states (Fig. <ref>h), and as more titanium is substituted (1T-Ta0.3Ti0.7S2) CDW vanishes completely (Fig. <ref>i). The low disorder regime, now accessible by endotaxial engineering, provides room temperature ordered C-CDWs and a novel ordered IC-CDW at higher temperatures. Notably with low-disorder, the C to IC transition is direct and the NC phase does not appear. The IC phase is ordered, but the CDW can be continuously melted into a disordered hexatic-IC phase (as described in figure <ref>). The boundaries of the CDW phase diagram are drawn with consistency to hexatic melting of 2D collidal particles under temperature and disorder <cit.> as well as nematic CDWs <cit.>. Notably, CDWs in endotaxial are two dimensional and the oIC phase has enhanced order despite the 3D to 2D dimensionality reduction. In bulk 1T- CDWs are quasi-2D with non-negligible out-of-plane interaction (Fig. <ref>h) <cit.>. Formation of endotaxial polytype heterostructures disrupts the out-of-plane interactions and CDWs reside in a protected 2D environment <cit.>. Stabilization of an ordered IC-CDW in 2D seemingly contradicts with Hohenberg-Mermin-Wagner theorem <cit.> and Imry-Ma argument <cit.> which state spontaneous symmetry breaking of continuous symmetry (e.g. IC-CDWs) is unstable at non-zero temperatures in 2D. While both principles do not prevent intermediate phases with short-range order, the 2D CDWs should be none-the-less more fragile to disorder <cit.>. An ordered IC phase can only emerge in ultra-clean environments. Here endotaxial synthesis protects CDW states by strain-free encapsulation in a chemically identical environment of metallic layers that shield disorder. § CONCLUSION In summary, we demonstrate that endotaxial synthesis of clean interleaved polytypic heterostructures can stabilize fragile quantum phases such as ordered CDWs even at high temperatures. Here, we stabilize and enhance 2D charge density waves (both long-range order and amplitude) in an endotaxially confined monolayer of 1T-. Surprisingly, the low-dimensional symmetry breaking of an ordered incommensurate CDW (oIC-CDW) appears, suggesting the quantum states reside within minimal extrinsic disorder. By enhancing CDW order the hexatic nature of IC-CDWs are revealed. Experimental observation matches advanced simulation of electron diffraction of charge lattices to provide the real-space evolution of 2D CDW melting. Heating the oIC-CDW in-situ TEM above 400 K we see a reversible hexatic melting process, in which disclinations and dislocations destroy long-range translational symmetry of the CDW while maintaining its orientational order. 
The CDW melts well before the underlying atomic crystal changes. In 2D, CDWs are expected to manifest through vestigial electronic hexaticity—a weak CDW with substantial defects and short range order. The nature of vestigial phases in CDWs remains poorly understood with little direct evidence. From these results, a CDW phase diagram for 1T- is created and consistent with the predicted emergence of vestigial quantum order. § REFERENCES [heading=none] § ACKNOWLEDGEMENTS S.H.S. acknowledges the financial support of the W.M. Keck Foundation. Experiments were conducted using the Michigan Center for Materials Characterization (MC2) with assistance from Tao Ma and Bobby Kerns. This work made us of electron microscopy facility of the Platform for the Accelerated Realization, Analysis, and Discovery of Interface Materials (PARADIM) supported by the National Science Foundation, which is supported by National Science Foundation under Cooperative Agreement No. DMR-2039380. N.S. acknowledges additional support from the NSF GRFP under award number DGE-2139899. P.K. and J.H. gratefully acknowledge support from NSF MRSEC DMR-2011839. Y.L, W.J.L. and Y.P.S, thank the support from the National Key R&D Program (Grant No. 022YFA1403203 and No. 2021YFA1600201), the National Natural Science Foundation of China (Grant No. U2032215, No. U1932217 and No. 12274412). § AUTHOR CONTRIBUTIONS S.H.S and R.H. conceived the charge lattice model and associated lattice distortions and linked them to diffraction of . S.H.S., Y.M.G., N.S., L.F.K., and R.H. performed HAADF-STEM and in-situ TEM and interpreted electron microscopy data. S.H.S. fabricated samples for electronic measurements. P.K. and J.T.H. performed and analyzed electronic measurements. S.H.S., I.E.B., R.H. and K.S. provided theoretical interpretation. S.H.S. and N.A. performed Monte-Carlo simulations. S.H.S. K.S. and R.H. created the phase diagram of octahedrally coordinated . Y.P.S. synthesized 1T- crystal. S.H.S. and R.H. prepared the manuscript. All authors reviewed and edited the manuscript. § COMPETING INTERESTS The authors declare no competing interests. § METHODS §.§ Simulated Diffraction of Charge Lattices with Heating Charge density waves are electronic modulations describable in reciprocal space by three wave vectors (so called, triple q) or in real-space as local charges arranged into a hexagonal lattice. For a fully ordered system, the charge lattice is a perfect lattice (Fig. <ref>b left), and the structure factor (Fig. <ref>b left inset) is also a perfect lattice. Here, the periodicity is equal to the incommensurate CDW wave vector qIC (or aIC in real-space). Traditional CDW theory elegantly describes ordered (or slightly disordered) systems using sparse representation in reciprocal space for ordered systems. However, a real-space basis readily describes topological disorder (dislocations and disclinations) in a charge density wave. This becomes particularly critical for IC phase (>350 K) of 1T-, where diffraction studies reveal azimuthally diffused superlattice peaks <cit.> that we show to be consistent with topological disorder in CDWs. Describing disorder of CDW plays a critical role in simulating experimentally consistent diffraction patterns at high temperatures. The hexatic melting of a real-space charge lattice is illustrated with phenomenological Monte Carlo simulations of the NPT ensemble (constant particle count, temperature, and pressure). 
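The Methods paragraphs below specify the physical ingredients (Maxwell–Boltzmann displacements, a truncated Lennard–Jones interaction, and a ~7 pm maximum atomic displacement). Purely as an illustration of this real-space "displace atoms, then Fourier transform" pipeline, a schematic numpy sketch might look as follows; the lattice size, wavevector, random phason offsets, and the exaggerated displacement amplitude are placeholders, not the parameters used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Triangular atomic lattice, lattice constant a = 1 (arbitrary units).
n = 60
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
pos = np.stack([i + 0.5 * j, (np.sqrt(3) / 2) * j], axis=-1).reshape(-1, 2)

# Triple-q longitudinal displacement wave standing in for the IC-CDW/PLD.
# amp is exaggerated relative to the ~7 pm quoted below so peaks are visible.
q_mag, amp = 2 * np.pi / 3.53, 0.05
disp = np.zeros_like(pos)
for theta in (0.0, 2 * np.pi / 3, 4 * np.pi / 3):
    q_hat = np.array([np.cos(theta), np.sin(theta)])
    phase = pos @ (q_mag * q_hat) + rng.uniform(0.0, 2 * np.pi)  # phason offset
    disp += amp * np.sin(phase)[:, None] * q_hat   # longitudinal: u parallel to q

pos_d = pos + disp

# Kinematic diffraction under a flat Ewald sphere approximation:
# I(k) = |sum_j exp(-i k.r_j)|^2, evaluated by gridding atoms and taking an FFT.
grid = 512
img, _, _ = np.histogram2d(pos_d[:, 0], pos_d[:, 1], bins=grid)
intensity = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
print("diffraction pattern:", intensity.shape, "peak intensity:", intensity.max())
```

Adding temperature-dependent random displacements of the charge centers (as described below) before displacing the atoms reproduces the azimuthal blurring of the superlattice peaks while leaving the Bragg peaks sharp.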
The displacements of charge centers in a CDW follow a Maxwell–Boltzmann probability distribution at different temperatures. The interaction energy between charge centers is calculated using a shifted Lennard–Jones potential truncated at 18.7 Å. From these first principles, the likelihood of forming dislocations and disclinations in a CDW lattice increases with temperature. Diffraction of the simulated CDWs is calculated from the corresponding periodic lattice distortion (PLD) of a 1T- crystal. The displacements are small (≲10 pm), but clearly manifest as superlattice peaks with distinctive intensity in SAED. Notably, the superlattice peak intensities become stronger at higher |𝐤|; this is distinguishable from chemically ordered superlattice peaks that decay as |𝐤| increases <cit.>. In , atoms displace toward the charge centers, which is equivalent to a longitudinal displacement wave. Here, the displacement amplitude is proportional to the charge density gradient, with a maximum displacement set at 7 pm. Electron diffraction is kinematically simulated under a flat Ewald sphere approximation using the Fourier transform of the displaced atomic lattice. §.§ Electron Microscopy In-situ SAED was performed on a Thermofisher Scientific (TFS) Talos (operated at 200 keV, SA aperture 850 nm) with a Protochips Fusion Select holder and Gatan OneView Camera. Cross-sectional HAADF-STEM images were taken on a JEOL 3100R05 (300 keV, 22 mrad) with samples prepared on a TFS Nova Nanolab DualBeam FIB/SEM. TEM specimens were prepared by exfoliating bulk 1T- and 1T- crystals onto a polydimethylsiloxane (PDMS) gel stamp. The sample was then transferred to TEM grids using a home-built transfer stage. Silicon nitride membrane window TEM grids with 2 µm holes from Norcada and Protochips Fusion Thermal E-chips were used. From optical contrast and CBED patterns, the samples (Fig. 1, 2) were estimated to be 20–50 nm thick <cit.>. §.§ Synthesis and Acquisition of bulk crystals 1T- for in-situ SAED measurements and electronic measurements was acquired from HQ Graphene. 1T- (x ≈ 1) for cross-sectional HAADF-STEM measurements was grown by the chemical vapor transport method with iodine as a transport agent. Stoichiometric amounts of the raw materials, high-purity elements Ta, S, and Se, were mixed and heated at 1170 K for 4 days in an evacuated quartz tube. Then the obtained powders and iodine (density: 5 mg/cm^3) were sealed in another, longer quartz tube and heated for 10 days in a two-zone furnace, where the temperatures of the source zone and growth zone were fixed at 1220 K and 1120 K, respectively. A shiny mirror-like sample surface was obtained, confirming their high quality. All CDW characterization was done on 1T-; the Se-doped sample was used only for polytype characterization in cross-sectional HAADF-STEM (Fig. <ref>d,e). §.§ Endotaxial Synthesis of oIC-CDW in Interleaved 2D polytypes were synthesized by heating 1T- to 720 K in high vacuum (<10^-7 Torr) or in an argon-purged glovebox <cit.>. 1T- was held at 720 K for ∼10 minutes, then brought down to room temperature. Once the interleaved polytype is fully established, the oIC-CDW becomes a stable electronic state above 350 K. §.§ Device Fabrication and Electronic Measurement For resistivity measurements, flakes were transferred using the PDMS gel stamp method to pre-fabricated bottom contacts. The fabrication of bottom contacts is detailed in <cit.>. The flake was sculpted into a rectangular bar (∼11 µm×15 µm) using a TFS Nova Nanolab DualBeam FIB/SEM (see Supplementary Figure S4).
The thickness of the flake was determined by AFM. Resistivity vs temperature measurements were performed in a Quantum Design Dynacool PPMS using a standard sample puck and an external Keithley 2400 series source meter. The sample was adhered to the puck backplane with silver paint, and contacts were wire bonded to the puck channel pads using 50 µm Au wire. To ensure sample thermalization, a baffle rod with an Au-coated sealing disk hovering <1 cm above the sample was inserted into the PPMS bore, and the heating and cooling rate was restricted to <2 K/min. A 10 µA current was sourced for four-wire measurements. The current/voltage limits were chosen to keep electric fields below 10 kV/cm to avoid sample breakdown, as well as to keep current densities below 10^5 A/cm^2 and prevent localized heating at low temperatures.
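For orientation, the quoted field and current-density ceilings translate into concrete instrument limits for the bar geometry above. The short sketch below assumes a 50 nm flake thickness purely for illustration (the actual thickness was measured by AFM and is not stated here).

```python
# Back-of-envelope check of the measurement limits quoted above, assuming an
# illustrative flake thickness of 50 nm (hypothetical; actual value from AFM).
width, length, thickness = 11e-6, 15e-6, 50e-9   # bar geometry in meters
E_max = 10e3 / 1e-2                              # 10 kV/cm expressed in V/m
J_max = 1e5 * 1e4                                # 1e5 A/cm^2 expressed in A/m^2

V_max = E_max * length                           # max voltage along the bar
I_max = J_max * width * thickness                # max current through the cross-section
print(f"V_max = {V_max:.2f} V, I_max = {I_max * 1e6:.0f} uA")
# With these numbers, V_max = 15 V and I_max = 550 uA, so the sourced 10 uA
# sits comfortably below the current-density ceiling.
```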
http://arxiv.org/abs/2307.06039v1
20230712093346
Rationality of the Local Jacquet-Langlands Correspondence for GL(n)
[ "Kenta Suzuki" ]
math.NT
[ "math.NT", "math.RT" ]
We relate the field of definition of representations σ of the group of units D^× of a non-archimedean division algebra D/F to that of its L-parameter φ_σ: W_F→_n(), extending results of <cit.>. The fields of definition are controlled by division algebras 𝒟_σ and 𝒟_φ_σ over the field of rationality (π), and we completely pin down the relationship between the Hasse invariants at places not over p. Under some additional assumptions we can also specify the Hasse invariants at places over p. § INTRODUCTION Let F/_p be a non-archimedean local field and let D/F be a central division algebra of dimension n^2. The local Langlands correspondence and the Jacquet-Langlands correspondence provide a bijection between: * irreducible square-integrable representations of _n(F) with central character of finite order; * irreducible smooth representations of D^× with central character of finite order; and * semisimple n-dimensional representations of the absolute Galois group Γ_F. For each representation π of _n(F), we let (π) denote the corresponding representation of D^× and let _F(π):=φ_π(-n-1/2) denote the corresponding twisted L-parameter. Prasad and Ramakrishnan <cit.> then relate whether or not (π) descends to a real representation to whether _F(π) descends to a real representation. We extend their result, and given a number field K⊂, we relate whether or not (π) descends to a K-representation to whether or not _F(π) descends to a K-representation. Clearly if representations descend to K then K⊇(π), the field of rationality of π (see Definition <ref>). Such rationality questions are controlled by central division algebras 𝒟_(π) and 𝒟__F(π) over (π), which are common generalizations of the Frobenius-Schur indicator dealt with in <cit.>, and the Schur index (see the discussion after Definition <ref>). We completely pin down the relationship between the two division algebras at places away from p: Let π be an irreducible supercuspidal representation of _n(F), such that ω_π is of finite order. Then, for a place v∤ p of (π), _v(𝒟_(π))+_v(𝒟__F(π))= 0 v∤ p,∞ 1/[:(π)] v|∞. As a consequence, we partially resolve <cit.>. For example, we prove: Let π be an irreducible self-dual supercuspidal representation of _n(F). Then for any place v of (π), _v(𝒟_(π)),_v(𝒟__F(π))∈1/2/. In particular, when (π) has an odd number of places above p (in particular when [(π):] is odd), for any v|p, _v(𝒟_(π))+_v(𝒟__F(π))=1/2[(π):]. § ACKNOWLEDGEMENT The author thanks Guy Henniart and Yoichi Mieda for helpful discussions. § PRELIMINARIES ON RATIONALITY Let G be a locally pro-finite group and let (π,V) be a complex representation of G. The field of rationality of π is (π):={z∈:γ· z=z,∀γ∈Γ}, where Γ(π)⊆(/) is the group of γ∈(/) such that π^γ≅π. Moreover, for each field K⊂, let K(π)=K.(π).
When π is an irreducible representation of a finite group G, the field of rationality (π) is always Galois. Indeed, it is a sub-field of (ζ_|G|). However, in general this need not be the case: consider →^×:1↦√(2). Let G be a group and let (π,V) be a complex representation of G. The representation π is defined over a sub-field K⊂ if there exists a K-vector space representation (π_0,V_0) such that V=V_0⊗_K. If (π,V) is defined over a field K, then K contains the field of rationality (π), since certainly (/K)⊆Γ(π). However, in general (π,V) need not be defined over (π). Indeed, the quaternion group Q_8 has a complex 2-dimensional representation whose character is rational, but the representation is not defined over . There need not be a minimal field K such that a representation π is defined over K. The 2-dimensional representation of Q_8 is not defined over , but can be defined over any quadratic field K/ such that a^2+b^2+1=0 has a solution: i↦[ a b; b -a ],j↦[ b -a; -a -b ],k↦[ 0 1; -1 0 ]. To each representation, we can attach an auxiliary -algebra, often convenient in addressing rationality questions: For a (possibly reducible) representation (π,V) let {π}⊂_(V) be the -span of {π(g):g∈ G}. It inherits the natural ring structure from _(V). For convenience, let us assume {π} is finite-dimensional over , which implies {π} is simple, i.e., of the form M_n(𝒟_π) for some division algebra 𝒟_π. Then: Let (π,V) be an irreducible representation of a locally pro-finite group G, such that {π} is finite-dimensional over . Then (π)=({π})=(𝒟_π). Moreover, π is defined over a field K if and only if K⊗_(π)𝒟_π (equivalently, K⊗_(π){π}) is split, i.e., isomorphic to M_k(K) for some integer k. For σ∈(/), clearly π≅π^σ implies that for each z∈({π}) the elements z,σ(z)∈_(V) are conjugate to each other, which, since z is central, shows z=σ(z). Thus, Γ_π⊆(/(π)), in the notation of Definition <ref>. Conversely, since {π} is a central simple algebra over ({π}), so is {π}⊗_({π}). Thus, {π}↪_(V) induces an isomorphism {π}⊗_({π})≅_(V). Thus, the {π}-representation V is simply the canonical representation, which is in particular invariant under the action of (/({π})). Thus (/({π}))⊂Γ_π. Now, π is defined over K/(π) if and only if there is some K-vector space V_0 such that V_0⊗_K=V and {π} has image in _K(V_0). This is exactly equivalent to {π}⊗_(π)K splitting. The equality (π)=({π}) need not hold when π is reducible. For example, let π be the representation of the cyclic group C_4 given by [ 1; -1 ]∈_2(). Then (π)=, while {π}=(√(-1)). The index of 𝒟_π, the square root of its (π)-dimension, is usually called the Schur index of π. We can now re-phrase the observation in Remark <ref> in the language of division algebras. For the 2-dimensional representation π of Q_8 we have {π}=_=(-1,-1)_, a quaternion algebra over . Now, π is defined over K if and only if (-1,-1)_K≅ M_2(K). The algebra [_]∈() is such that _2([_])=1/2 and _∞([_])=1/2, and _p([_])=0 for all p≠ 2,∞. Thus, [_⊗_ K]=0∈(K) if and only if the quadratic extension K/ is inert over 2 and ∞, i.e., K is of the form (√(d)) where d<0 is a square-free integer such that d≡ 2,3 (mod 4). Let π be an irreducible -representation of a locally pro-finite group G which appears with multiplicity one in a K-rational representation Π for some K⊂. Then π is defined over K(π). View π⊆Π as a subspace. Then for each σ∈(/K(π)), by multiplicity one, we have π=π^σ as subspaces of Π. Thus, π is defined over K(π).
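The Q_8 example in the remarks above is easy to check by machine. The following sketch (our illustration, not part of the original text) verifies the quaternion relations for the displayed matrices over K=Q(√(-2)), where a=√(-2), b=1 solve a^2+b^2+1=0; note that d=-2 satisfies d≡ 2 (mod 4), consistent with the splitting criterion just stated.

```python
import sympy as sp

# Assumed concrete choice in K = Q(sqrt(-2)): a = sqrt(-2), b = 1,
# so that a^2 + b^2 + 1 = -2 + 1 + 1 = 0 as required by the remark.
a, b = sp.sqrt(-2), sp.Integer(1)
I = sp.Matrix([[a, b], [b, -a]])
J = sp.Matrix([[b, -a], [-a, -b]])
K = sp.Matrix([[0, 1], [-1, 0]])
minus_one = -sp.eye(2)

# Quaternion relations i^2 = j^2 = k^2 = ijk = -1, plus ij = k.
for M in (I * I, J * J, K * K, I * J * K):
    assert sp.simplify(M - minus_one) == sp.zeros(2, 2)
assert sp.simplify(I * J - K) == sp.zeros(2, 2)
print("Q_8 relations verified for the 2-dimensional representation over Q(sqrt(-2))")
```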
§ THE FIELD OF DEFINITION FOR SUPERCUSPIDAL REPRESENTATIONS OF _N(F) Let q be the order of the residue field _F/_F of F. For reasons in Remark <ref>, henceforth we assume: π is an irreducible representation of _n(F) such that one of the following equivalent conditions hold: * the central character ω_π is of finite order * (π) factors through a finite quotient of D^× * the L-parameter φ_π W_F×_2()→_n() extends to Γ_F×_2(). The equivalence follows from: Let (φ,V) be an irreducible representation of W_F. Then: * for any g∈ W_F, there exists an integer N>0 such that φ(g)^N acts as a scalar on V. In other words, φ W_F→(V) factors through Γ_F. * there exists an unramified character χ such that χ^-1φ extends to a representations of Γ_F. Clearly (<ref>) follows from (<ref>). To prove (<ref>), note that it suffices to check it for a lift ϖ∈ W_F of the Frobenius of the residue field k_F, since the inertia I_F is pro-finite. Now, the action of I_F factors through some finite subgroup G⊂(V), and ϖ acts as an automorphism of G, so there exists an integer N>0 such that ϖ^N acts trivially on G. Then ϖ^N is central in G⋊⟨ϖ⟩, so by Schur's lemma φ(ϖ)^N is a scalar. The local Langlands correspondence and Jacquet-Langlands correspondence, twisted by ν^(n-1)/2 are invariant under (/). That is, for any σ∈(/): _F(π^σ)=_F(π)^σ,(π^σ)=(π)^σ. In particular, given an irreducible square-integrable representation π of _n(F), we have: (π)=((π))=(_F(π)). Indeed, the characterizing properties in terms of character formulae by <cit.> and <cit.> are invariant (after twisting) under (/). For _n, this was observed in <cit.>. First, we can show that for representations of _n(F), there are no obstructions to defining representations over the field of rationality: Any irreducible square-integrable representation π of _n(F) is defined over (π). Square-integrable representations of _n(F) are generic, so π has a unique Whittaker model. That is, π appears with multiplicity one in _N^G(ψ), where ψ is a non-degenerate character of N. By mimicking the proof of <cit.>, we see that _N^G(ψ) is defined over , and hence by Lemma <ref> the representation π is defined over (π). We can pin-down the relationship between 𝒟_(π) and 𝒟__F(π) in places away from p. For each place v of (π), let _v((π))→/ be the Hasse invariant map, which fits into the standard short exact sequence 1→((π))⊕_v∤∞/⊕⊕_v|∞1/2//→0. Let π be an irreducible supercuspidal representation of _n(F) satisfying Hypothesis <ref>. Then, for a place v∤ p of (π), _v(𝒟_(π))+_v(𝒟__F(π))= 0 v∤ p,∞ 1/[:(π)] v|∞. Moreover, ∑_v|p_v(𝒟_(π))+_v(𝒟__F(π))=1/2[(π):]∈/. It suffices to prove this for a division algebra D/F with (D)=1/n, where the Jacquet-Langlands correspondence admits a geometric description. Then for each finite ℓ p, by <cit.> the étale cohomology H^n-1_LT:=lim_⟶ mH_c^n-1((M_m/ϖ^)⊗_F̆F̆,_ℓ) carries an action of _n(F)× D^×× W_F, and the cuspidal part H^n-1_LT,⊗__ℓ_ℓ contains π^∨⊠(π)⊠_F(π) with multiplicity one. Thus by Lemma <ref> the representation π^∨⊠(π)⊠_F(π) is defined over the field of rationality _ℓ(π). Furthermore, since by Lemma <ref> the representation π is defined over _ℓ(π), and hence the representation (π)⊠_F(π) of D^×× W_F is also defined over _ℓ(π). Now, since {(π)⊠_F(π)}≅{(π)}⊗{_F(π)}, we have: 0=__ℓ(√(q))(π)(𝒟_(π)⊠_F(π))=__ℓ(π)(𝒟_(π))+__ℓ(π)(𝒟__F(π)), i.e., __ℓ(π)(𝒟_(π))=-__ℓ(π)(𝒟__F(π)). Moreover, for places v over infinity, <cit.> tells us _(π)_v(𝒟_(π))+_(π)_v(𝒟__F(π))=1/[:(π)]∈1/[:(π)]/. Indeed, when (π)= the statement trivially holds. 
Moreover, (π)= is equivalent to the existence of a real character χ F^×→_>0 such that χ^-1⊗π is self-dual. Now the equation follows since φ_χ^-1π is symplectic if and only if _(𝒟_φ_χ^-1π)=1/2. Now, using the fact that ∑_v_v=0, we have ∑_v(_v(𝒟_(π))+_v(𝒟__F(π))) =∑_v|p(_v(𝒟_(π))+_v(𝒟__F(π)))+∑_v|∞1/[:(π)] =∑_v|p(_v(𝒟_(π))+_v(𝒟__F(π)))+1/2[(π):] =0, since ∑_v|∞[(π)_v:]=_((π)⊗_)=[(π):]. Thus, the main problem now is to calculate the relation between _v(𝒟_(π)) and _v(𝒟__F(π)|_W_F) for v|p. Recall <cit.>: Let us be in the setting of Theorem <ref>. * If (π) is not self-dual, or (π) is orthogonal, then for each v|p, 𝒟_(π)=𝒟__F(π). * If (π) is symplectic, then: * If [(π):] is even, then for each v<∞, _v(𝒟_(π))=_v(𝒟__F(π)). * If [(π):] is odd, then for each v∤ p,∞, _v(𝒟_(π))=_v(𝒟__F(π)), and for each v|p, _v(𝒟_(π))∈_v(𝒟__F(π))+1/2. Although <cit.> phrases many of their results in terms of the difference _v(𝒟_(π))-_v(𝒟__F(π)), we see that it is more natural to ask about the sum _v(𝒟_(π))+_v(𝒟__F(π)). However, in some cases, Theorem <ref> already gives a complete picture. An easy example is: When (π) has a unique prime lying over p, _v(𝒟_(π))+_v(𝒟__F(π))= 0 if v∤ p,∞, 1/2[(π):] if v|p, and 1/[:(π)] if v|∞. We also have the following arithmetic result due to <cit.> and <cit.>: Let K/ be a finite abelian extension, let [𝒟]∈(K) be a central simple algebra arising as a simple factor of the group algebra K[G] of some finite group G, and fix a rational prime p>0. Then the order m of _([𝒟]) is independent of the prime |p of K. Moreover, K contains a primitive m-th root of unity ζ_m, and for σ∈(K/) with σ(ζ_m)=ζ_m^b, _([𝒟])=b_σ()([𝒟])∈/. In particular, we can confirm the second half of Conjecture <ref> (<ref>): Let π be an irreducible self-dual supercuspidal representation of _n(F), satisfying Hypothesis <ref>. Then for any place v of (π), _v(𝒟_(π)),_v(𝒟__F(π))∈1/2/, and the values (respectively) depend only on the characteristic of v. In particular, when (π) has an odd number of places above p (in particular when [(π):] is odd), for any v|p, _v(𝒟_(π))+_v(𝒟__F(π))=1/2[(π):]. Since π is self-dual, (π) is totally real, so it only contains the root of unity ζ_2. Thus the invariants must be 2-torsion and depend only on the characteristic of v by Lemma <ref>. In particular, if (π) has an odd number of places over p, equation (<ref>) is enough to pin down the invariants, and implies (<ref>). We also have uniform bounds on the local indices of the division algebras 𝒟_(π) and 𝒟__F(π): Let π be a supercuspidal representation of _n(F) satisfying Hypothesis <ref>. Then for each prime v|p of (π), _v(𝒟_(π)),_v(𝒟__F(π))∈1/(n,p-1)/. In particular, if (p-1,n)≤2, Conjecture <ref> holds. By <cit.>, both _v(𝒟_(π)) and _v(𝒟__F(π)) are (p-1)-torsion. Moreover, _v(𝒟__F(π)) is n-torsion, since (_F(π))=n; hence _v(𝒟__F(π)) is (n,p-1)-torsion. Moreover, D^×/F^× is an extension of a pro-p group by _q^n^×/_q^×, which has order 1+q+⋯+q^n-1, so _v(𝒟_(π)) is also p^N(1+q+⋯+q^n-1)-torsion for some N≥0. Since q≡1 (mod p-1), we have 1+q+⋯+q^n-1≡ n (mod p-1), so combining this with (p-1)-torsion shows that _v(𝒟_(π)) is (n,p-1)-torsion as well.
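To illustrate the global bookkeeping in the proof above with the simplest example (a standard Brauer-group computation, supplied here for convenience and not a result of this paper), recall the Hamilton quaternions from the earlier remark:
\[
\operatorname{inv}_2(\mathbb{H}_{\mathbb{Q}})=\tfrac12,\qquad
\operatorname{inv}_\infty(\mathbb{H}_{\mathbb{Q}})=\tfrac12,\qquad
\operatorname{inv}_v(\mathbb{H}_{\mathbb{Q}})=0\ \ (v\neq 2,\infty),
\]
\[
\sum_v \operatorname{inv}_v(\mathbb{H}_{\mathbb{Q}})=\tfrac12+\tfrac12=0\in\mathbb{Q}/\mathbb{Z},
\]
as required by exactness of the sequence recalled above. The displayed computation in the proof runs exactly the same argument for the pair 𝒟_(π), 𝒟__F(π), separating the contributions of the finite places above p from the archimedean ones.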
http://arxiv.org/abs/2307.04399v1
20230710080102
The Topological Quandles up to Four Elements
[ "Mohamed Ayadi" ]
math.CO
[ "math.CO" ]
The topological quandles up to four elements

Mohamed Ayadi, July 8, 2023

Laboratoire de Mathématiques Blaise Pascal, CNRS–Université Clermont-Auvergne, 3 place Vasarély, CS 60026, F63178 Aubière, France, and University of Sfax, Faculty of Sciences of Sfax, LAMHA, route de Soukra, 3038 Sfax, Tunisia. [email protected]

Finite quandles can be represented as n × n matrices, as recently defined by S. Nelson and C. Wong. In this paper, we first study finite topological quandles and we show how to use these matrices to distinguish all isomorphism classes of finite topological quandles for a given cardinality n. As an application, we classify finite topological quandles with up to 4 elements. MSC 2020: 57K12, 16T05. § INTRODUCTION A quandle is a set Q with a binary operation : Q × Q ⟶ Q satisfying the three axioms * (i) for every a ∈ Q, we have a a = a, * (ii) for every pair a, b ∈ Q there is a unique c ∈ Q such that a = c b, and * (iii) for every a, b, c ∈ Q, we have (a b) c = (a c) (b c). As an example, any group (G, ∘) equipped with the conjugation operation x y=y∘ x∘ y^-1, for all x, y ∈ G, is a quandle. More on quandles can be found in <cit.>. A quasi-poset is a pair (X, ≤), where X is a set and ≤ a quasi-order on X, that is to say a transitive and reflexive relation on X. Recall (see e.g. <cit.>) that a topology on a finite set X is given by the family 𝒯 of open subsets of X, subject to the three following axioms: * ø∈𝒯, X∈𝒯, * The union of (a finite number of) open subsets is an open subset, * The intersection of a finite number of open subsets is an open subset. By Alexandroff’s theorem <cit.>, for any finite set X, there is a bijection between topologies on X and quasi-orders on X. Any topology 𝒯 on X defines a quasi-order denoted by ≤_𝒯 on X: x≤_𝒯y⟺ any open subset containing x also contains y. Conversely, any quasi-order ≤ on X defines a topology 𝒯_≤ given by its upper ideals, i.e., subsets Y⊂ X such that (y∈ Y and y≤ z) ⟹ z∈ Y. Both operations are inverse to each other: ≤_𝒯_≤= ≤ and 𝒯_≤_𝒯=𝒯. Hence there is a natural bijection between topologies and quasi-orders on a finite set X. Any quasi-order (hence any topology 𝒯 ) on X gives rise to an equivalence relation: x ∼_𝒯y⟺( x≤_𝒯y and y≤_𝒯x ). A finite topological space (X, ≤) will be represented by the Hasse diagram of the quotient X/∼, where ∼ is the equivalence relation defined above. Each vertex is drawn as a bubble in which all elements of the same equivalence class are represented by points. More on finite topological spaces can be found in <cit.>. Let (Q, ≤) be a topological space equipped with a continuous map μ : Q × Q ⟶ Q , denoted by μ(a, b) = a b, such that for every b∈ Q the mapping a↦ a b is a homeomorphism of (Q,≤). The space Q (together with the map μ ) is called a topological quandle <cit.> if it satisfies for all a, b, c ∈ Q * (i) (a b) c=(a c) (b c), * (ii) a a=a. Let (Q, ) and (Q', ') be two topological quandles. A continuous map ϕ : Q ⟶ Q' is called a topological quandle homomorphism if ϕ(a b) = ϕ(a) ' ϕ(b), for all a, b ∈ Q. The paper is organized as follows. We recall in Section <ref> the method of B. Ho and S. Nelson <cit.> to describe finite quandles with up to 5 elements, and we also recall in Section <ref> how S. Nelson and C-Y.
Wong in <cit.> prove that the decomposition of a finite quandle into orbits coincides with our notion of decomposition into Q-complemented subquandles. In Section <ref> we prove that, if Q=Q_1 ⨿ Q_2 ⨿···⨿ Q_n is a finite quandle, written in its orbit decomposition, and if 𝒯=(Q, ≤) is a topological space such that 𝒯_|Q_i is the coarse topology on Q_i for all i∈ [n], then 𝒯 is Q-compatible. Then we apply this result to find the finite topological quandles with up to 4 elements. § THE MATRIX OF A FINITE QUANDLE Let Q={x_1, x_2, ..., x_n} be a finite quandle with n elements. We define the matrix of Q, denoted M_Q, to be the matrix whose entry in row i, column j is x_i x_j: M_Q= [ x_1 x_1 x_1 x_2 ... x_1 x_n; x_2 x_1 x_2 x_2 ... x_2 x_n; . . ... .; . . ... .; . . ... .; x_n x_1 x_n x_2 ... x_n x_n ] <cit.> Let Q={a, b, c}. The quandle matrices for quandles of order 3 are, up to permutations of Q: [ a a a; b b b; c c c ] , [ a c b; c b a; b a c ] , [ a a a; c b b; b c c ] Let Q={a, b, c, d}. The quandle matrices for quandles of order 4 are, up to permutations of Q: [ a a a a; b b b b; c c c c; d d d d ] , [ a a a a; b b b c; c c c b; d d d d ] , [ a a a b; b b b c; c c c a; d d d d ] , [ a a b b; b b a a; c c c c; d d d d ] , [ a a a a; b b d c; c d c b; d c b d ] , [ a a b b; b b a a; d d c c; c c d d ] , [ a d b c; c b d a; d a c b; b c a d ] Let Q be a quandle. A subquandle X ⊂ Q is a subset of Q which is itself a quandle under . Let Q be a quandle and X ⊂ Q a subquandle. We say that X is complemented in Q, or Q-complemented, if Q\ X is a subquandle of Q. <cit.> Let Q be a finite quandle. Then Q may be written as Q = Q_1 ⨿ Q_2 ⨿···⨿ Q_n, where every Q_i is Q-complemented and no proper subquandle of any Q_i is Q-complemented. This decomposition is well-defined up to isomorphism; if Q ≈ Q', then in the decompositions Q = Q_1 ⨿ Q_2 ⨿···⨿ Q_n and Q' = Q'_1 ⨿ Q'_2 ⨿···⨿ Q'_m, we have n = m and (after reordering if necessary) Q_i≅ Q'_i. § REMINDER ON THE ORBIT DECOMPOSITION Notation. Let (Q, ) be a finite quandle. For x'∈ Q, we note R_x':Q ⟶ Q, x ⟼ x x', and L_x':Q ⟶ Q, x ⟼ x' x. (Q, 𝒯) is a finite topological quandle if and only if R_x' is a homeomorphism and L_x' is a continuous map for all x'∈ Q; equivalently, if and only if for all x, y, x', y' ∈ Q, x≤ x' and y≤ y' imply x y≤ x' y'. Let (Q, ) be a finite quandle; the intersection of two Q-complemented subquandles is also Q-complemented. Let (Q, ) be a finite quandle and let Q_1, Q_2 be two Q-complemented subquandles. It is clear that the binary operation : (Q_1∩ Q_2) × (Q_1∩ Q_2) ⟶ Q_1∩ Q_2 satisfies the two axioms (i) and (iii) of the definition of a quandle. For x, y∈ Q_1∩ Q_2, there exists z∈ Q such that x=R_y(z), i.e., z=R^-1_y(x). Since x, y∈ Q_1∩ Q_2 and the map R_y is a bijection on Q_1 (resp. on Q_2), we get z∈ Q_1∩ Q_2. Hence : (Q_1∩ Q_2) × (Q_1∩ Q_2) ⟶ Q_1∩ Q_2 satisfies axiom (ii), so Q_1∩ Q_2 is a subquandle. On the other hand: Q=(Q_1∩ Q_2)⨿ (Q_1∩Q̄_2)⨿ (Q̄_1∩ Q_2)⨿ (Q̄_1∩Q̄_2), where Q̄_1=Q\ Q_1 and Q̄_2=Q\ Q_2. Let a∈ Q\(Q_1∩ Q_2); then we have three possible cases: a∈ Q_1∩Q̄_2, or a∈Q̄_1∩ Q_2, or a∈Q̄_1∩Q̄_2. * If a∈Q̄_1∩Q̄_2, we obtain: * R_a: Q̄_1⟼Q̄_1 is a bijection, hence R_a: Q_1⟼ Q_1 is a bijection; * R_a: Q̄_2⟼Q̄_2 is a bijection, hence R_a: Q_2⟼ Q_2 is a bijection; * R_a: Q⟼ Q is a bijection. Then R_a respects all four blocks. * If a∈Q̄_1∩ Q_2 or a∈ Q_1∩Q̄_2: similarly. Hence R_a respects Q_1∩ Q_2, so we deduce that Q_1∩ Q_2 is a Q-complemented subquandle. Hence any finite intersection of Q-complemented subquandles is also Q-complemented.
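The matrix encoding of Section 2 makes these axioms easy to verify mechanically. The following short sketch (ours, not from the paper) checks the three quandle axioms for an operation table, with the elements renamed 0, ..., n-1; the example table is the second order-3 matrix above.

def is_quandle(M):
    """Check the quandle axioms for an n x n operation table M,
    where M[i][j] encodes x_i x_j and elements are 0, ..., n-1."""
    n = len(M)
    # Axiom (i): idempotency.
    idem = all(M[i][i] == i for i in range(n))
    # Axiom (ii): each right translation R_j (column j) is a bijection.
    right_bij = all(sorted(M[i][j] for i in range(n)) == list(range(n))
                    for j in range(n))
    # Axiom (iii): right self-distributivity.
    dist = all(M[M[a][b]][c] == M[M[a][c]][M[b][c]]
               for a in range(n) for b in range(n) for c in range(n))
    return idem and right_bij and dist

# The second order-3 matrix above, with a, b, c renamed 0, 1, 2:
M = [[0, 2, 1],
     [2, 1, 0],
     [1, 0, 2]]
assert is_quandle(M)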
Notation: Let (Q, ) be a finite quandle. For a∈ Q, we note Q_a=⋂_a∈ Q', Q' is Q-complemented Q' and Ω_a={ b∈ Q : a∼ b }, where ∼ is the transitive closure of the relation ℛ̃ defined by: xℛ̃y⟺ there exists z∈ Q such that (x=y z or y=x z). That is, for all a, b∈ Q, a∼ b if and only if there exist c_1,..., c_n∈ Q such that aℛ̃c_1...ℛ̃c_nℛ̃b. <cit.> Let (Q, ) be a finite quandle. Then Ω_a and Q_a defined above are equal for any a∈ Q. Let (Q, ) be a finite quandle and a∈ Q; according to Lemma <ref>, Q_a is a Q-complemented subquandle. - It is clear that the binary operation : Ω_a ×Ω_a ⟶Ω_a satisfies the two axioms (i) and (iii) of the definition of a quandle. Let x, y∈Ω_a; then there exists a unique z ∈ Q such that x=z y, hence xℛ̃z, and so z∈Ω_x=Ω_a. Hence, the map R_x: Ω_a ⟶Ω_a defined by R_x(y)=y x is a bijection. So Ω_a is a subquandle of Q. Moreover, the binary operation : Q\Ω_a × Q \Ω_a ⟶ Q\Ω_a satisfies the two axioms (i) and (iii) of the definition of a quandle. And for all x, y∈ Q\Ω_a there exists z∈ Q such that x=z y, hence xℛ̃z, so necessarily z∈ Q\Ω_a: otherwise x∈Ω_a, which is absurd. Hence, the map R_x :Q\Ω_a ⟶ Q\Ω_a defined by R_x(y)=y x is a bijection. So Q\Ω_a is a subquandle of Q, and therefore Ω_a is Q-complemented. - Since Q_a is the smallest Q-complemented subquandle containing a, we obtain Q_a⊆Ω_a. - It remains to show that Ω_a ⊆ Q_a. Let B be a Q-complemented subquandle containing a. For x∈ B, R_x respects B; moreover, for x∈ Q\ B, R_x respects B as well. So for all x∈ Q, R_x and R^-1_x respect B. And since Ω_a={P_1⋯ P_k(a) : P_j equal to R_x_j or R^-1_x_j, x_j∈ Q }, we get Ω_a⊆ B. Hence Ω_a⊆ Q_a. Consequently Ω_a=Q_a. § RESULTS In this section we prove that if Q=Q_1 ⨿ Q_2 ⨿···⨿ Q_n is a finite quandle, written in its orbit decomposition, and 𝒯=(Q, ≤) is a topological space such that 𝒯_|Q_i is the coarse topology on Q_i for all i∈ [n], then 𝒯 is Q-compatible. From this result we find the topological quandles with 3 and 4 elements. §.§ The topologies of orbits of a finite quandle Let Q be a finite quandle; then the discrete topology and the coarse topology are Q-compatible. Let Q=(X, ) be a finite quandle. If 𝒯 is the discrete topology, then for all x, x', y, y' ∈ X, if x≤_𝒯 x' and y≤_𝒯 y', then x=x' and y=y', so x y≤_𝒯x' y'; hence 𝒯 is Q-compatible. If 𝒯 is the coarse topology, then for all x, y ∈ X, x∼_𝒯 y, so for all x, x', y, y'∈ X with x≤_𝒯 x' and y≤_𝒯 y' we have x y≤_𝒯x' y'. Hence 𝒯 is Q-compatible. Notation. Let Q=Q_1 ⨿ Q_2 ⨿···⨿ Q_n be a finite quandle written in its orbit decomposition (see Theorem <ref>). We denote by 𝒯_Q=𝒯_Q_1···𝒯_Q_n the disjoint union of the topologies 𝒯_Q_i, i∈ [n], where 𝒯_Q_i is the coarse topology on Q_i. Let :X× X⟶ X be the operation of the quandle Q defined by M_Q=[ a a a; c b b; b c c ]. Its orbit decomposition is Q=Q_1⨿ Q_2, where Q_1=[ a ] and Q_2=[ b b; c c ]. In this case 𝒯_Q is the topology whose Hasse diagram consists of the bubble {b,c} next to the isolated point a. Let Q=Q_1 ⨿ Q_2 ⨿···⨿ Q_n be a finite quandle written in its orbit decomposition, and let 𝒯 be a topology on Q. If for all i∈ [n], 𝒯_|Q_i is the coarse topology on Q_i, then 𝒯 is Q-compatible. Let 𝒯 be a topology on Q such that 𝒯_|Q_i is the coarse topology for all i∈ [n]. For x∈ Q_i, we note Q_x=Q_i. Let z, z'∈ Q such that z≤ z'; then for all x∈ Q, L_x(z)=x z∈ Q_x and L_x(z')=x z'∈ Q_x. But 𝒯_|Q_x is the coarse topology, so for all a, b∈ Q_x, a∼ b, hence x z ∼ x z'. Hence the continuity of L_x for all x∈ Q is proven. Moreover, z≤ z' implies that, for all a∈ Q_z, b∈ Q_z', a≤ b.
In particular, R_x(z)=z x∈ Q_z and R_x(z')=z' x∈ Q_z', hence R_x(z)≤ R_x(z'). Hence R_x is continuous for all x∈ Q. As Q is finite, each R_x is therefore a homeomorphism, and we conclude that 𝒯 is Q-compatible. We use this theorem to find the topological quandles with 3 and 4 elements below. §.§ List of the topological quandles with three elements In the three examples below X={a,b,c}. - Let :X× X⟶ X be the operation of the trivial quandle Q defined by M_Q=[ a a a; b b b; c c c ]. All topologies on X are compatible with this quandle structure. Indeed, let 𝒯 be a topology on X. For all x,y∈ X we have x y=x, so for all x, x', y, y'∈ X with x≤ x' and y≤ y' we obtain x y=x≤ x'=x' y'. - Let :X× X⟶ X be the quandle structure defined by M_Q=[ a c b; c b a; b a c ], and let 𝒯=(X, ≤) be a Q-compatible topology. If there exist x≠ y∈{a, b, c} such that x≤ y, then 𝒯 is the coarse topology. In fact, suppose a≤ b; we get R_a(a)=a≤ R_a(b)=c and R_b(a)=c≤ R_b(b)=b and R_c(a)=b≤ R_c(b)=a. We therefore obtain that a≤ b implies a≤ c≤ b≤ a, hence 𝒯 is the coarse topology. The same holds if we replace a, b by any x, y∈{a, b, c}. From Proposition <ref>, we conclude in this case that the topologies on X compatible with the structure are the discrete topology and the coarse topology. - Let :X× X⟶ X be the quandle structure defined by M_Q=[ a a a; c b b; b c c ]. Then, according to Theorem <ref>, the four topologies below are compatible with the quandle structure. [Hasse diagrams: the coarse topology; the bubble {b,c} above the point a; the point a above the bubble {b,c}; the bubble {b,c} next to the isolated point a.] Let 𝒯=(X, ≤) be a Q-compatible topology; then b∼ c or b and c are incomparable. Indeed, if b≤ c, then R_a(b)=c≤ R_a(c)=b; similarly, if c≤ b, then R_a(c)=b≤ R_a(b)=c. Hence the result. Let 𝒯=(X, ≤) be a Q-compatible topology such that c and b are incomparable; then 𝒯 is the discrete topology. Indeed, if a≤ b then L_b(a)=c≤ b=L_b(b), which is absurd; moreover, if a≤ c then L_c(a)=b≤ c=L_c(c), which is absurd (same if b≤ a or c≤ a). Hence 𝒯 is the discrete topology. We conclude that the discrete topology and the above four topologies are the only Q-compatible topologies. §.§ List of the topological quandles with four elements In the seven examples below X={a, b, c, d}. - Let :X× X⟶ X be the quandle structure defined by M_Q=[ a d b c; c b d a; d a c b; b c a d ]. The only topologies on X compatible with the quandle structure are the discrete topology and the coarse topology. Indeed, let (Q, 𝒯) be a topological quandle different from the discrete topology; then there exist x≠ y ∈{a, b, c, d} such that x≤ y. If a≤ b, then R_a(a)=a≤ c=R_a(b), R_b(a)=d≤ b=R_b(b), R_c(a)=b≤ d=R_c(b), R_d(a)=c≤ a=R_d(b), L_a(a)=a≤ d=L_a(b), L_b(a)=c≤ b=L_b(b), L_c(a)=d≤ a=L_c(b) and L_d(a)=b≤ c=L_d(b). Then a∼ b∼ c∼ d, i.e., 𝒯 is the coarse topology. The same argument applies if a≤ c, or a≤ d, or b≤ a, or b≤ c, or b≤ d, or c≤ a, or c≤ b, or c≤ d, or d≤ a, or d≤ b, or d≤ c, proving that 𝒯 is the coarse topology.
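Before continuing with the remaining examples, we note that these case analyses can be cross-checked mechanically. The sketch below (ours; the helper names are illustrative) computes the orbit decomposition of Section 3 and tests Q-compatibility of a quasi-order, implementing the criterion recalled in Section 3 that x≤x' and y≤y' must imply x y ≤ x' y'.

from itertools import product

def orbits(M):
    """Orbit decomposition of the quandle with operation table M:
    components of the relation identifying x with x*z, for all z."""
    n = len(M)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for x, z in product(range(n), repeat=2):
        parent[find(x)] = find(M[x][z])   # x and x*z lie in the same orbit
    comps = {}
    for i in range(n):
        comps.setdefault(find(i), []).append(i)
    return list(comps.values())

def is_compatible(M, leq):
    """leq[x][y] is True iff x <= y in the quasi-order of the topology.
    Q-compatibility: x <= x' and y <= y' must imply x*y <= x'*y'."""
    n = len(M)
    return all(not (leq[x][x2] and leq[y][y2]) or leq[M[x][y]][M[x2][y2]]
               for x, x2, y, y2 in product(range(n), repeat=4))

For the first order-4 matrix above (a Latin square), orbits(M) returns a single orbit, consistent with the conclusion that only the discrete and coarse topologies are compatible.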
- Let :X× X⟶ X be the quandle structure defined by M_Q=[ a a b b; b b a a; d d c c; c c d d ]. If (Q, 𝒯) is a topological quandle, then (a∼ b and c∼ d) or (a∼ b and c, d are incomparable) or (a, b are incomparable and c∼ d) or (a, b are incomparable and c, d are incomparable). Indeed, if a≤ b, then R_c(a)=b≤ a=R_c(b), so a∼ b. Similarly, if b≤ a, then a∼ b; if c≤ d, then c∼ d; and if d≤ c, then c∼ d. By Theorem <ref>, the three topologies below are Q-compatible. [Hasse diagrams: the bubbles {a,b} and {c,d} side by side; the bubble {c,d} above the bubble {a,b}; the bubble {a,b} above the bubble {c,d}.] The disjoint union of the discrete topology on {a, b} and the coarse topology on {c, d} is Q-compatible, and vice versa. Let 𝒯=(X,≤) be a topological space that differs from the coarse topology, and suppose there exists x∈{a,b} (resp. x∈{c,d}) and y∈{c,d} (resp. y∈{a,b}) such that x≤ y. Then 𝒯 is not Q-compatible. Indeed, if 𝒯 is Q-compatible with a≤ c, then L_a(a)=a≤ b=L_a(c), which is absurd. Conclusion: there are seven Q-compatible topologies (the three topologies above and the four below). [Hasse diagrams: the discrete topology; the coarse topology; the bubble {a,b} with c and d isolated; the bubble {c,d} with a and b isolated.] - Let :X× X⟶ X be the quandle structure defined by M_Q=[ a a a a; b b d c; c d c b; d c b d ]. Then Q=Q_1⨿ Q_2, where Q_1=[ a ] and Q_2=[ b d c; d c b; c b d ]. If (Q, 𝒯) is a topological quandle, then b∼ c∼ d or b, c, d are incomparable. If b≤ c, then R_a(b)=b≤ c=R_a(c), R_b(b)=b≤ d=R_b(c), R_c(b)=d≤ c=R_c(c), R_d(b)=c≤ b=R_d(c), L_a(b)=a≤ a=L_a(c), L_b(b)=b≤ d=L_b(c), L_c(b)=d≤ c=L_c(c) and L_d(b)=c≤ b=L_d(c). So b∼ c, and b≤ d and d≤ c then imply b∼ d as well, hence b∼ c∼ d. By Theorem <ref>, the three topologies below are Q-compatible. [Hasse diagrams: the bubble {b,c,d} with a isolated; the point a above the bubble {b,c,d}; the point a below the bubble {b,c,d}.] Let (Q, 𝒯) be a topological quandle such that there exists x∈{b, c, d} with a≤ x or x≤ a; then 𝒯 is the coarse topology. Indeed, if a≤ b, then R_a(a)=a≤ b=R_a(b), R_b(a)=a≤ b=R_b(b), R_c(a)=a≤ d=R_c(b), R_d(a)=a≤ c=R_d(b), L_a(a)=a≤ a=L_a(b), L_b(a)=b≤ b=L_b(b), L_c(a)=c≤ d=L_c(b) and L_d(a)=d≤ c=L_d(b).
So a≤ d≤ a and c≤ a≤ c, hence c∼ d, then a∼ c and b∼ c∼ d, so a∼ b∼ c∼ d. Conclusion: there are five Q-compatible topologies: the coarse topology, the discrete topology and the three topologies described above. - Let :X× X⟶ X be the quandle structure defined by M_Q=[ a a b b; b b a a; c c c c; d d d d ]. Then Q=Q_1⨿ Q_2⨿ Q_3, where Q_1=[ a a; b b ], Q_2=[ c ] and Q_3=[ d ]. If (Q, 𝒯) is a topological quandle, then a∼ b or a, b are incomparable. Indeed, if a≤ b, then R_d(a)=b≤ a=R_d(b), so a∼ b. The same holds if b≤ a. By Theorem <ref>, any topology whose restriction to each of the orbits {a, b}, {c}, {d} is coarse is Q-compatible. If a, b are incomparable: for all x∈{a, b}, * if x≤ d, then L_a(x)=a≤ b=L_a(d), which is absurd, * if d≤ x, then L_a(d)=b≤ a=L_a(x), which is absurd, * if x≤ c, then L_a(x)=a≤ b=L_a(c), which is absurd, * if c≤ x, then L_a(c)=b≤ a=L_a(x), which is absurd. Therefore, if (Q, 𝒯) is a topological quandle in which a and b are incomparable, then 𝒯 is one of the following: [Hasse diagrams: d above c, with a and b isolated; c above d, with a and b isolated; the discrete topology; the bubble {c,d} with a and b isolated.] It is clear that the above topologies are Q-compatible. Conclusion: the Q-compatible topologies are the four topologies above and all topologies on X such that a and b are equivalent. - Let :X× X⟶ X be the quandle structure defined by M_Q=[ a a a b; b b b c; c c c a; d d d d ]. Then Q=Q_1⨿ Q_2, where Q_1=[ a a a; b b b; c c c ] and Q_2=[ d ]. If (Q, 𝒯) is a topological quandle, then a∼ b∼ c or a, b, c are incomparable. Indeed, if a≤ b, then R_d(a)=b≤ c=R_d(b), so b≤ c, and applying R_d again gives c=R_d(b)≤ R_d(c)=a. So a≤ b≤ c≤ a, hence a∼ b∼ c. Similarly, for x, y∈{a, b, c}, if x≤ y then a∼ b∼ c. Hence the result. If (Q, 𝒯) is a topological quandle in which a, b, c are incomparable, then 𝒯 is the discrete topology. Indeed, if there exists x∈{a, b, c} such that x≤ d or d≤ x, then L_a(x)=a≤ b=L_a(d) or L_a(d)=b≤ a=L_a(x), which is absurd. Conclusion: the Q-compatible topologies are the five topologies below. [Hasse diagrams: the coarse topology; the discrete topology; the bubble {a,b,c} with d isolated; the point d above the bubble {a,b,c}; the point d below the bubble {a,b,c}.] - Let :X× X⟶ X be the quandle structure defined by M_Q=[ a a a a; b b b c; c c c b; d d d d ]. Then Q=Q_1⨿ Q_2⨿ Q_3, where Q_1=[ a ], Q_2=[ b b; c c ] and Q_3=[ d ]. If (Q, 𝒯) is a topological quandle, then b∼ c or b, c are incomparable.
Indeed, if b≤ c, then R_d(b)=c≤ b=R_d(c), so b∼ c. By Theorem <ref>, any topology whose restriction to each of the orbits {a}, {b, c}, {d} is coarse is Q-compatible. Let (Q, 𝒯) be a topological quandle. Then b, c incomparable implies that, for all x∈{a, b, c}, x and d are incomparable. By contradiction: if x≤ d or d≤ x, then L_b(x)=b≤ c=L_b(d) or L_b(d)=c≤ b=L_b(x), which is absurd. Moreover, if a≤ b or a≤ c, then R_d(a)=a≤ c=R_d(b) or R_d(a)=a≤ b=R_d(c). We therefore deduce that if (Q, 𝒯) is a topological quandle in which b and c are incomparable, then 𝒯 is one of the following: [Hasse diagram: b and c above a, with d isolated] or [Hasse diagram: b and c below a, with d isolated]. Conclusion: the Q-compatible topologies are the topologies in which b and c are equivalent, the two topologies above, and the discrete topology. - Let :X× X⟶ X be the trivial quandle structure defined by M_Q=[ a a a a; b b b b; c c c c; d d d d ]; all topologies on X are compatible with this quandle structure. Let Q=Q_1 ⨿ Q_2 ⨿···⨿ Q_n be a finite quandle with at most four elements, where the Q_i are the orbits, and let 𝒯=(Q, ≤) be a topological space. We noticed that if 𝒯 is Q-compatible, then for all i∈ [n], 𝒯_|Q_i is the coarse or the discrete topology. Does this remark remain true for arbitrary finite quandles? This is not the case. Indeed, let :X× X⟶ X be the quandle structure defined by M_Q=[ a a a a a a; b b b b b b; d e c c c c; c f d d d d; f c e e e e; e d f f f f ] - In the first step, we prove that (Q, ) is well defined. It is clear that the operation satisfies the conditions (i) and (ii) of the definition of a quandle; moreover, we have R_c=R_d=R_e=R_f=Id and: R_a(c a)=c=d a=R_a(c) R_a(a) R_a(d a)=d=c a=R_a(d) R_a(a) R_a(e a)=e=f a=R_a(e) R_a(a) R_a(f a)=f=e a=R_a(f) R_a(a) R_a(c b)=f=d b=R_a(c) R_a(b) R_a(d b)=e=c b=R_a(d) R_a(b) R_a(e b)=d=f b=R_a(e) R_a(b) R_a(f b)=c=e b=R_a(f) R_a(b) and R_b(c a)=f=e a=R_b(c) R_b(a) R_b(d a)=e=f a=R_b(d) R_b(a) R_b(e a)=d=c a=R_b(e) R_b(a) R_b(f a)=c=d a=R_b(f) R_b(a) R_b(c b)=c=e b=R_b(c) R_b(b) R_b(d b)=d=f b=R_b(d) R_b(b) R_b(e b)=e=c b=R_b(e) R_b(b) R_b(f b)=f=d b=R_b(f) R_b(b), so the operation satisfies condition (iii) of the definition of a quandle. Hence (Q, ) is a quandle. - Secondly, if 𝒯 is the topology whose Hasse diagram consists of the isolated points a and b together with the bubbles {c,d} and {e,f}, we prove that 𝒯 is Q-compatible. We have R_c=R_d=R_e=R_f=Id and L_a(x)=a and L_b(x)=b for all x∈{ a, b, c, d, e, f }, so it suffices to show that R_a and R_b are homeomorphisms and that L_c, L_d, L_e, L_f are continuous maps. Since c∼ d and e∼ f, we obtain: R_a(c)=d∼ c=R_a(d), R_b(c)=e∼ f=R_b(d), R_a(e)=f∼ e=R_a(f), R_b(e)=c∼ d=R_b(f), so R_a and R_b are homeomorphisms. Moreover, L_c(a)=d, L_c(b)=e, L_d(a)=c, L_d(b)=f, L_e(a)=f, L_e(b)=c, L_f(a)=e, L_f(b)=d, and L_c(x)=c, L_d(x)=d, L_e(x)=e and L_f(x)=f for all x∈{c, d, e, f}. Then L_x is a continuous map for all x∈{a, b, c, d, e, f}. So 𝒯 is Q-compatible. Acknowledgements: The author would like to thank Mohamed Elhamdadi for useful suggestions and comments. Conflicts of interest: none. P. Alexandroff, Diskrete Räume, Rec. Math. Moscou, n. Ser.
2 (1937), no. 3, p. 501-519.
M. Ayadi, D. Manchon, Doubling bialgebras of finite topologies, Letters in Mathematical Physics, vol. 111, p. 1-23, 2021.
M. Ayadi, Twisted pre-Lie algebras of finite topological spaces, Communications in Algebra, vol. 50, p. 2115-2138, 2022.
B. Ho and S. Nelson, Matrices and Finite Quandles, Homology, Homotopy and Applications, vol. 7, p. 197-208, 2005.
F. Fauvet, L. Foissy, D. Manchon, The Hopf algebra of finite topologies and mould composition, Ann. Inst. Fourier, Tome 67, No. 3 (2017), 911-945.
S. Nelson and C. Wong, On the orbit decomposition of finite quandles, Journal of Knot Theory and Its Ramifications, vol. 15, p. 761-772, 2006.
R. L. Rubinsztein, Topological quandles and invariants of links, Journal of Knot Theory and Its Ramifications, vol. 16, p. 789-808, 2007.
P. Lopes and D. Roseman, On finite racks and quandles, Communications in Algebra, vol. 34, p. 371-406, 2006.
R. E. Stong, Finite topological spaces, Trans. Amer. Math. Soc. 123 (1966), 325-340.
A. K. Steiner, The lattice of topologies: structure and complementation, Trans. Amer. Math. Soc. 122 (1966), 379-398.
R. S. Vaidyanathaswamy, Set topology, Chelsea, New York (1960).
D. N. Yetter, Quandles and monodromy, Journal of Knot Theory and Its Ramifications, vol. 12, p. 523-541, 2003.
http://arxiv.org/abs/2307.06292v1
20230712163902
Feature Embeddings from Large-Scale Acoustic Bird Classifiers Enable Few-Shot Transfer Learning
[ "Burooj Ghani", "Tom Denton", "Stefan Kahl", "Holger Klinck" ]
eess.AS
[ "eess.AS", "cs.SD" ]
[1]The two authors contributed equally to this paper and share first authorship. (Email: [email protected]; [email protected]) Automated bioacoustic analysis aids understanding and protection of both marine and terrestrial animals and their habitats across extensive spatiotemporal scales, and typically involves analyzing vast collections of acoustic data. With the advent of deep learning models, classification of important signals from these datasets has markedly improved. These models power critical data analyses for research and decision-making in biodiversity monitoring, animal behaviour studies, and natural resource management. However, deep learning models are often data-hungry and require a significant amount of labeled training data to perform well. While sufficient training data is available for certain taxonomic groups (e.g., common bird species), many classes (such as rare and endangered species, many non-bird taxa, and call types) lack enough data to train a robust model from scratch. This study investigates the utility of feature embeddings extracted from large-scale audio classification models to identify bioacoustic classes other than the ones these models were originally trained on. We evaluate models on diverse datasets, including different bird calls and dialect types, bat calls, marine mammal calls, and amphibian calls. The embeddings extracted from the models trained on bird vocalization data consistently allowed higher-quality classification than the embeddings from models trained on general audio datasets. The results of this study indicate that high-quality feature embeddings from large-scale acoustic bird classifiers can be harnessed for few-shot transfer learning, enabling the learning of new classes from a limited quantity of training data. Our findings reveal the potential for efficient analyses of novel bioacoustic tasks, even in scenarios where available training data is limited to a few samples. Keywords: Deep learning, feature embeddings, bioacoustics, classification, few-shot learning, transfer learning, passive acoustic monitoring § INTRODUCTION Bioacoustic analysis provides a rich window into biodiversity, animal behavior and ecosystem health. Passive acoustic monitoring (PAM) in particular has become a widely used tool for wildlife conservation.
PAM uses battery-operated autonomous recording devices (ARUs) that collect vast amounts of acoustic data, containing a wealth of information about biological, geophysical, and anthropogenic activities in the deployment area. It allows researchers to study and protect animals and their habitats non-invasively at ecologically-relevant temporal and spatial scales <cit.>. PAM involves recording sound in nature and has been used to study a wide range of species, including whales and dolphins <cit.>, pinnipeds <cit.>, birds <cit.>, insects <cit.>, fish <cit.>, frogs <cit.>, and terrestrial mammals <cit.>. In recent years, many automated deep learning-based analysis tools have been developed that are now commonly used to analyze long-term acoustic data efficiently <cit.>. By utilizing these tools, researchers can automatically detect and categorize animal vocalizations, saving them a significant amount of time and effort and facilitating the investigation of less researched species <cit.>. However, the development of these tools typically depends on the availability of well-annotated training data. Obtaining sufficient training data can be a major challenge. While there are sufficient amounts of training data available for some taxonomic groups, including common bird species (e.g., through community collections like Xeno-canto[https://xeno-canto.org] or the Macaulay Library[https://www.macaulaylibrary.org]), training data is often lacking for rare and endangered species, which are often the prime target of conservation efforts <cit.>. In addition, traditional approaches to species-level classification may not be suitable for all applications. For example, a fixed set of classes may not be desirable in cases where researchers are interested in the fine-grained classification of vocalizations, such as identifying specific call types rather than simply identifying the presence or absence of a species <cit.>. Call types and the associated behaviors (e.g., foraging or breeding) can provide critically important cues on habitat use and inform, for example, land management decisions. One way to address the challenge of data deficiencies is to utilize learned feature embeddings for few-shot transfer learning. In the context of machine learning, feature embeddings are vectors obtained from some intermediate layer of a machine learning model <cit.>. High-quality feature embeddings offer several benefits over traditional approaches to species-level classification. First, feature embeddings can help to differentiate between classes of acoustic events that are very similar and differ only in subtle details. For instance, songbirds can display local variations (also called dialects) in their song patterns, which may lead to slight differences in note sequences <cit.>. Feature embeddings can capture these nuances and enable more precise classification. Additionally, embeddings facilitate transfer learning between species, enabling researchers to train models on data from more commonly occurring or extensively studied species and then apply that knowledge to a target species, which may have insufficient training data. This approach also saves researchers time and effort that would otherwise be needed to train a dedicated classifier from scratch while enhancing the accuracy of classification results. Furthermore, cross-taxa classification based on feature embeddings is also possible when such embeddings can generalize across acoustic domains and events. 
We can view feature embeddings as a lossy compression of the input data. For instance, in terms of raw data, the embedding produced by Google's bird classification model (called Perch) contains only 1.6% of the data of the raw audio (a 1280-dimensional 32-bit float vector, derived from 5 seconds of 32 kHz audio encoded as 16-bit integers). Yet these embeddings enable efficient recognition of a wide range of global bird species. For this to work well, the classifier must learn features relevant to the classification problem while allowing irrelevant data to be discarded. This perspective is typified by data augmentation techniques, in which we apply transformations of the inputs irrelevant to the desired function outputs, thus training the classifier to ignore the augmentations. Because the relevant features for different problems may vary, we hypothesize that models trained on a problem closely related to the target problem will often outperform models trained on very different problems. In fact, the recent HEAR Benchmark competition found that no single model dominated across event detection, music transcription, and speech recognition tasks <cit.>. However, as mentioned earlier, many problems lack sufficient data for training a robust classifier from scratch. In these cases, re-using the feature embeddings from a pre-trained model allows learning the new task efficiently, so long as the embeddings are sufficiently relevant. In this study, we investigate the use of various large-scale acoustic classifiers to produce feature embeddings that can be used to perform fine-grained classification of bird calls and dialect types, and out-of-scope but related identification of acoustic events (non-bird animal calls) that these models have not been trained on. Furthermore, we include in our analysis classifiers that are either trained on the AudioSet dataset <cit.> (a broad spectrum of audio data extracted from YouTube clips) or on extensive datasets of bird vocalizations from around the world. In doing so, we are able to compare the effectiveness of these embeddings derived from different classifiers, evaluating their capacity to generalize and detect a variety of bioacoustic events. The paper aims to provide a simple method for species-agnostic classification across taxonomic groups by leveraging the transfer learning capabilities of the selected classifiers. The effectiveness of the approach is demonstrated by evaluating on a diverse set of data sources covering birds, bats, marine mammals, and amphibians. Overall, our study suggests that the proposed approach can help to advance automated analysis in passive acoustic monitoring by solving the problem of species and call type recognition in low-data regimes. The transfer learning capabilities of the selected classifiers provide a practical and effective way to classify a wide range of acoustic events across different taxa and can help to improve the accuracy and efficiency of PAM analysis efforts. Our approach – utilizing fixed, pre-trained embeddings for novel problems – also suggests a more efficient workflow for large-scale bioacoustic data sets. Large PAM deployments may accumulate tens to hundreds of terabytes of data during a single field season <cit.>. This makes model inference tasks especially time-consuming and potentially expensive. Given a model which produces generally useful feature embeddings, the practitioner may embed their entire data set once and then use the pre-computed embeddings for a wide range of subsequent analysis tasks.
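As a quick check of the compression figure quoted at the start of this section (a back-of-the-envelope computation, not taken from the cited sources):

# 1280-dimensional float32 embedding vs. 5 s of 32 kHz, 16-bit audio.
bytes_per_embedding = 1280 * 4          # 4 bytes per 32-bit float
bytes_per_clip = 5 * 32_000 * 2         # 2 bytes per 16-bit sample
print(bytes_per_embedding / bytes_per_clip)  # 0.016, i.e. 1.6%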
Training and inference with small models over fixed embeddings are much faster than training entirely new models: training a high-quality classifier from scratch can take many days of GPU time, but training small linear classifiers over fixed embeddings, which we discuss in this paper, can take less than a minute on a modern workstation. This allows fast experimentation with different analysis techniques and quick iteration with human-in-the-loop active learning techniques. § RELATED WORK §.§ Transfer learning In 2014, <cit.> observed that pre-trained CNN layers could be used as general feature extractors for novel tasks by training a new output layer for the target task. Meanwhile, <cit.> demonstrated that using pre-trained features leads to more general models and experimented with different combinations of freezing and fine-tuning layers in the network. This strategy, where pre-trained models are utilized as foundational building blocks to extract robust features for new tasks and potentially fine-tuned for specific target tasks, is known as transfer learning <cit.>. <cit.> have compared different transfer learning strategies in CNNs. This technique has proven to be extremely effective, especially when the available training data for the target task is limited. <cit.> employed DCNN models that were pre-trained on the ImageNet dataset <cit.>, subsequently fine-tuning them for bird sound detection. Similarly, <cit.> fine-tune an Inception-v4 CNN that was previously trained on ImageNet to perform audio bird classification. In <cit.>, the authors investigate the utility of transfer learning, specifically adapting existing convolutional neural networks (CNNs) pretrained on the ImageNet dataset, in the realm of bioacoustics research. Their study, which compares 12 modern CNN architectures across four passive acoustic datasets, reveals promising results that suggest transfer learning could make bioacoustic model design more accessible, efficient, and accurate, particularly in scenarios with limited data. §.§ Few-shot learning Few-shot learning <cit.> attempts to learn new classes from a small amount of training data; pre-trained embeddings are core to many few-shot learning strategies. In this work, we consider the case of keeping the entire pre-trained embedding frozen and learning a single linear layer for the new tasks. This method is essentially a linear probe of the selected embeddings, which allows assessment of the availability of desired task-specific information in the embeddings <cit.>. In 2021-2023, the DCASE competition included a few-shot bioacoustic classification task on multiple taxa, utilizing exactly five `shots' (training examples) for each class of interest <cit.>. In the 2023 iteration of the competition <cit.>, one team used a pre-trained transformer model trained on AudioSet, while all other teams used only the 21 hours of provided training data. By contrast, in this study we focus on the relative utility of pre-trained embeddings, including embeddings from bird classifiers, and demonstrate the value of additional training data for each dataset we work with, which is useful for practitioners. §.§ Feature embeddings for bioacoustics tasks BirdNET[<https://birdnet.cornell.edu>] embeddings have been previously used to distinguish adult and juvenile owls and woodpecker call types <cit.>, and to improve pre-trained model performance on downstream bioacoustic tasks given additional unlabeled data <cit.>.
The BEANS Benchmark <cit.> applies pre-trained image classification models (ResNets <cit.>) and audio event classifiers (VGGish <cit.>) to a range of bioacoustic tasks. However, the strongest model considered in the benchmark (VGGish) is an older general audio event classification model. In <cit.>, the authors illustrate the process of developing a global identification model that can be refined for optimal performance on localized data from specific regions, employing a strategy that aligns closely with transfer learning. The methodology consists of constructing a model trained on a global data set, then fine-tuning it using local data. This allows the model to be highly adaptable, providing effective classification of bird vocalizations across different regions. In <cit.>, feature embeddings from the VGGish model are employed to transform soundscapes from various ecosystems into a common acoustic space. By using the feature embeddings, the researchers were able to extract meaningful features from complex eco-acoustic data, providing them with a useful tool for monitoring ecosystems in an efficient and scalable manner. In <cit.>, a classification model is trained using embeddings derived from the VGGish model to study the utility of soundscapes to predict species occurrence in tropical forests. <cit.> employed VGGish feature embeddings to train models to identify eight acoustic event categories, including vocalizations of songbirds and insects as well as non-biological signals from recordings made in Northern Alaska. The study found that the performance of classification models was significantly influenced by the choice of acoustic index. Models using AudioSet embeddings demonstrated substantially superior accuracy, precision, and recall, improving by 12%–16%. The authors therefore recommend the use of AudioSet embeddings in soundscape analysis due to their consistent performance, even with limited data sets, and their ability to be compressed efficiently, facilitating the use of restricted data storage without impacting the comparability of results between different studies. <cit.> analyzed soundscape recordings from North-Eastern Borneo using analytical indices and VGGish-based feature embeddings and reported consistent and superior performance on small pools of data using the latter. § METHOD In this work, we have focused on the extraction of feature embeddings from four CNN models and one transformer model, described below in Section <ref>. These models are trained on either general YouTube data or global data sets of bird vocalizations. All of these are large-scale audio classification models that map spectrograms (visual representations of sound) to their class labels. We train linear models (fully connected feed-forward NNs) on the feature embeddings extracted from these models, using different amounts of training data, as described in Section <ref>. Fig. <ref> provides an overview of the classification pipeline we employed for our experiments. Spectrograms serve as the input data for our framework. The deep backbone, which is essentially the large-scale classifier without the classifier head, processes the spectrograms and produces an embedding. The embedding can be seen as a compact representation capturing the salient features of the input. This embedding is then forwarded to the classifier head, which is implemented as a fully connected layer.
The classifier head applies a linear transformation to the embedding, followed by a sigmoid function to obtain class probabilities, and is trained via standard logistic regression. In summary, this architecture, comprising the deep backbone, fully connected layer, and sigmoid activation, enables the extraction of relevant features from spectrograms and the subsequent generation of probability estimates for downstream classification purposes. By employing simple logistic regression, we are able to judge the direct utility of each model's pre-trained embedding to a range of problems. Additionally, we save an immense amount of training effort by pre-computing the embeddings for each dataset. §.§ Evaluation datasets We use a range of datasets for our analysis. These datasets were constructed by different groups with different goals and methodologies, and therefore vary in their characteristics. For instance, the RFCX and Watkins datasets contain cross-class contamination: examples of a specific class in which another, unlabeled class is also present. The bat species and Watkins datasets have variable clip length, whereas the other datasets have a fixed clip length. Table <ref> presents an overview of all the datasets used in this work. Yellowhammer Dialects (YD): The YD dataset comprises two dialects of Yellowhammer songs, denoted as X and B <cit.>, derived from audio recordings of Yellowhammer vocalizations. The two dialects are characterized based on variations of elements in the terminal phrase of the song. These recordings were sourced from submissions made through the BirdNET App, captured with various mobile phone microphones. Recordings were annotated in a two-step process. Initially, Connor Wood performed preliminary annotations, which were later refined by Pavel Pipek, a specialist in yellowhammer dialects at the Department of Ecology, Charles University in Prague. All recordings were acquired in 2020. Each audio recording within the data set has a duration of three seconds, facilitating a comprehensive analysis of the yellowhammer vocalizations. These dialects have a duration in the range of 2.2-2.7 seconds and a fundamental frequency in the range of 5-6 kHz. Bats (BT): The BT dataset contains four species of North American bats: the eastern red bat (Lasiurus borealis, LABO) with 1,124 recordings, the little brown bat (Myotis lucifugus, MYLU) with 1,119 recordings, the northern long-eared bat (Myotis septentrionalis, MYSE) with 360 recordings, and the tricolored bat (Perimyotis subflavus, PESU) with 948 recordings. The audio files have been frequency-shifted to place the bats in the audible range. The dataset is sourced from two origins: 1) the training dataset for NABat Machine Learning V1.0 <cit.>, and 2) Dr. Patrick Wolff, US Army ERDC-CERL. The datasets were collected at ultrasonic sampling rates. We applied pitch shifting via sample rate conversion to these datasets. After this pre-processing step, all of the datasets featured a sampling rate of 44.1 kHz. Rainforest Connection Kaggle dataset (RFCX): This is the training data from the 2021 Species Audio Detection challenge, consisting of recordings of Puerto Rican birds and frogs. This data set has weak negative labels. Both birds and frogs are present in the class list; to understand model performance on these taxa, we present results on each taxon separately, and all together. The bird species in the RFCX data are present in the training data for both the Perch and BirdNET models, but most of these species have very limited training data.
As of this writing, the median number of Xeno-Canto recordings for these thirteen species is just 17, and only two species have more than 50 recordings (the Bananaquit with 579 recordings, and the Black-Whiskered Vireo with 68 recordings). Thus, these are largely low-data species for these models, and the results for this data set indicate the ability of the BirdNET and Perch embeddings to separate species ID for under-trained species. Watkins Marine Mammal Sounds Database (WMMSD): The WMMSD dataset covers 60 species of marine mammals, but we employ the `best of' category listed in the database, which contains the species with higher-quality, lower-noise recordings. The taxonomical representation encompasses species from the Odontocete and Mysticete suborders within the order Cetacea, in addition to the Phocid and Otariid families, which are part of the clade Pinnipedia. The auditory documentation, spanning a substantial time period of seven decades, encapsulates a diverse range of recording methodologies, ambient acoustical conditions, and sampling frequencies <cit.>. The compilation of this auditory data was accomplished and annotated by several researchers including William Watkins, William Schevill, G. C. Ray, D. Wartzok, D. and M. Caldwell, K. Norris, and T. Poulte, and is openly accessible for academic use <cit.>. The audio examples are cropped to the length of the actual vocalization, which means that the lengths of the audio files vary greatly by species. We exclude five classes for which fewer than 32 examples are provided, and two additional species which are characterized by very low-frequency vocalizations (fin whale and northern right whale). Godwit Calls (GC): The GC dataset contains five different calls of the Black-tailed Godwit. The recordings were made by Ondrej Belfin as part of his master's thesis at the University of Groningen in the Netherlands <cit.>. All recordings are 3 seconds long and are annotated by Ondrej Belfin himself. The author is in the process of publishing the dataset; a link to the published dataset will be added to the final version of the paper. §.§ Experimental methodology For each pairing of model and data set, we first calculate the model embeddings for the full data set. Each model has a native sample rate and window size, chosen independently of any of the datasets under consideration. Each audio sample is resampled to the model's native sample rate (though we experiment with alternatives in <ref>). When an example is shorter than the model's window size, we apply centered zero-padding to obtain the target length. When a model's window size is shorter than a target example, we frame the audio according to the model's window size, create an embedding for each frame, and then average the results. In the end, each example is associated with a single embedding vector. We then choose a fixed number k of examples from each class at random, by randomly shuffling the list of examples and picking the first k examples of each class in the shuffled list. We use a seeded random shuffle to ensure that the same training examples are used for every model. The k examples are used to train a linear classifier over the pre-computed embeddings, and all remaining examples are used for evaluating the trained classifier. We use a binary cross entropy (BCE) loss, with sigmoid activation, and train the classifier to convergence.
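A minimal sketch of this procedure (variable names are illustrative; scikit-learn's one-vs-rest logistic regression stands in for the single dense layer with sigmoid outputs and BCE loss described above):

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_linear_probe(embeddings, labels, k, seed):
    """Train a linear classifier on k examples per class over pre-computed
    embeddings; all remaining examples form the evaluation set."""
    rng = np.random.default_rng(seed)     # seeded shuffle: same split for every model
    order = rng.permutation(len(labels))
    train_idx = []
    for cls in np.unique(labels):
        cls_idx = order[labels[order] == cls]   # this class, in shuffled order
        train_idx.extend(cls_idx[:k])           # first k examples per class
    train_mask = np.zeros(len(labels), dtype=bool)
    train_mask[train_idx] = True
    # One-vs-rest logistic regression trains an independent binary classifier
    # per class, mirroring sigmoid outputs with binary cross-entropy loss.
    clf = LogisticRegression(max_iter=10_000, multi_class="ovr")
    clf.fit(embeddings[train_mask], labels[train_mask])
    return clf, train_mask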
This process is repeated five times with different random seeds for each combination of model, dataset, and k, using the same set of five random seeds for each combination. We do this to report a reliable estimate of the classification performance <cit.>. Note that our use of BCE loss could be replaced with categorical cross entropy (CCE) with a softmax output. We found that this produces somewhat higher model quality scores, but is not reflective of real-world requirements, where models may encounter simultaneous vocalizations. Training with BCE loss is equivalent to training independent binary classifiers for each class. For each experiment, we compute (1) macro-averaged ROC-AUC (computing ROC-AUC for each class, and then averaging over all classes) and (2) Top-1 Accuracy. Dataset Limitations: Each of the datasets we work with presents different difficulties. First, our methodology does not create an ideal train/test split when multiple examples originate from the same original recording. Ideally, different source recordings or entire recording sites would appear entirely as train or test data to reflect model generalization to new conditions. We do not have sufficient metadata available for all datasets to perform such a split, and so results may overestimate model generalization. Instead, we treat each example independently, and create a train/test split over the examples we have. We believe this issue affects only the Bats, RFCX, and a subset of the Watkins species. Secondly, some recordings contain additional unlabeled vocalizations, which may lead to under-estimation of model quality. This is especially the case for the Watkins and RFCX frog datasets. (See Table <ref> for some analysis of the Watkins dataset.) §.§ Model descriptions BirdNET and Perch are similar models, differing mostly in their training data. While Perch is trained exclusively on bird sound data, BirdNET's training dataset also includes a relatively small fraction of non-bird sound data. We compare these bird models to three models trained for general audio event detection, using variants of AudioSet <cit.>. AudioSet comprises an extensive compilation of over 2 million audio clips, each 10 seconds in duration. These clips are derived from YouTube videos and are categorically labeled according to the type of sound they contain, with a total of 527 unique classes. The classes include ‘wild animals,’ but the associated labels are very coarse (bird, frog, roaring cat) and constitute only about 2% of the total dataset. To elaborate further, the specifications of the models are detailed as follows: Perch[<https://tfhub.dev/google/bird-vocalization-classifier/2>] is an EfficientNet B1 <cit.> trained on the full corpus of bird song recordings from Xeno-Canto (XC) downloaded in July 2022. Because XC is weakly labeled (a single label for an entire file), we use an activity detector to select training windows from each file, as described in <cit.>. During training we augment with MixUp <cit.>, random gain adjustment, and random time-shifting of up to one second. The model is trained to classify all levels of the taxonomy for each recording simultaneously (species, genus, family, order). Note that the model in <cit.> was a regional model trained on 89 species, while the Perch model is trained on all Xeno-Canto species. This new single model obtains a cMAP score of 0.49 on the Caples data set, where the regional model obtained a score of 0.34 using a combination of ensembling and source separation.
BirdNET <cit.> also uses an EfficientNet architecture, but does not use taxonomic outputs. BirdNET has a broader training set, including XC, the Macaulay Library, and labeled soundscape data from around the world, ultimately targeting over 3,000 bird species. Additionally, BirdNET is trained to identify human speech, dogs, and many species of frogs. To enable a range of downstream use-cases, BirdNET trades off some accuracy for efficient computation. We report on BirdNET 2.2 and 2.3, which differ only in the dimensionality of the embedding (see Section <ref>). The BirdNET code is available on GitHub[<https://github.com/kahst/BirdNET-Analyzer>], and includes support for training small classifiers on embeddings. YAMNet and VGGish are both convolutional models trained to predict AudioSet classes. YAMNet uses a MobileNetV1 architecture <cit.>. VGGish is an older audio event-detection model, using a variant of the VGG architecture and trained on an earlier version of AudioSet <cit.>. Both of these models process audio frames of 0.96 seconds. While the YAMNet model generates a feature embedding vector of 1024 dimensions, the VGGish embedding size is limited to 128 dimensions. Code for YAMNet[<https://github.com/tensorflow/models/tree/master/research/audioset/yamnet>] and VGGish[<https://github.com/tensorflow/models/tree/master/research/audioset/vggish>] is available on GitHub. AudioMAE <cit.> is a more recent general audio model built with a transformer architecture. The model is trained on AudioSet with a self-supervision task, reconstructing masked spectrograms. The model consists of an encoder (which produces embeddings of patches of the spectrogram) and a decoder (which reconstructs the spectrogram from the patch embeddings). For this study, we use the embeddings produced by the encoder and discard the decoder. A 1024-dimensional embedding is obtained by averaging the per-patch embeddings, as is typical when using AudioMAE for classification tasks. We evaluated a re-implementation of AudioMAE, using the `Large' model with 300M parameters, provided by Eduardo Fonseca <cit.>. This model obtains a mAP of 46.4 on AudioSet-2M after fine-tuning, comparable to the original AudioMAE's reported mAP of 47.3. AudioMAE training consists of a pre-training stage, where it is trained only for reconstruction of masked spectrograms, and a fine-tuning stage, where it is trained for supervised classification. We experimented with many configurations of AudioMAE, as described in Section <ref>; none was consistently better than all the others, so for brevity we report results for the fine-tuned model with averaged embeddings unless otherwise noted. The original AudioMAE code can be accessed on GitHub[<https://github.com/facebookresearch/AudioMAE>].
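Both the frame-and-average policy of our methodology and AudioMAE's patch-embedding averaging reduce a variable-length input to a single vector. A sketch of the generic framing-and-averaging step follows, with embed_fn standing in for any model's frame-level embedding function (a placeholder, not a real API):

import numpy as np

def embed_clip(audio, sample_rate, model_window_s, embed_fn):
    # Center-pad clips shorter than the model window; otherwise frame the
    # clip into non-overlapping windows, embed each, and average.
    win = int(model_window_s * sample_rate)
    if len(audio) < win:
        pad = win - len(audio)
        audio = np.pad(audio, (pad // 2, pad - pad // 2))
    frames = [audio[i:i + win] for i in range(0, len(audio) - win + 1, win)]
    return np.mean([embed_fn(f) for f in frames], axis=0)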
§ RESULTS §.§ Classification performance on novel bioacoustic tasks Table <ref> presents the classification performance using a range of embeddings with linear probes on novel bioacoustic tasks. The results presented in the table correspond to an experiment in which the models are trained on 32 audio samples. Figure <ref> shows results at various amounts of training data, from 4 to 32 or 256 examples per class (depending on dataset size). The Perch and BirdNET 2.3 models obtain similar performance. However, Perch achieved the highest Top-1 accuracy and AUC across all the datasets, making it the most consistent performer. It performed particularly well on “Godwit Calls” and “Bat Species”, with AUCs of 0.99 and 0.97, respectively. Similarly, BirdNET 2.3 exhibited good performance, especially on Godwit Calls (GC) and Bat species (BT) (0.99, 0.97). Both bird models significantly outperform the AudioSet models on all tasks (VGGish, YAMNet, and AudioMAE). The macro-averaged ROC-AUC scores are typically high, suggesting good binary classification on each class individually. In the case of AudioMAE, the performance dropped significantly, most noticeably on the Yellowhammer dialects (YD) and RFCX birds datasets, which had lower AUCs of 0.66 and 0.78, respectively. The performance declined further with the YAMNet model: its Top-1 accuracy was relatively low across datasets, and its AUCs were significantly lower, particularly on the YD dataset. The VGGish model had the lowest performance across all datasets, notably underperforming on the WMMSD dataset with a very low Top-1 accuracy of 0.04 and AUC of 0.52. In summary, the Perch and BirdNET 2.3 models outperformed the others in terms of both Top-1 accuracy and AUC, demonstrating superior generalizability across various bioacoustic datasets. On the other hand, VGGish showed the weakest performance. Among the three models trained on AudioSet, the transformer-based AudioMAE model outperformed the CNN-based VGGish and YAMNet models across all datasets except the RFCX Birds dataset, on which YAMNet performed slightly better. The performance gain on the YD and GC datasets was significant. It is important to note that these results are average values over five runs. §.§ Varying amount of training data: few-shot learning In Fig. <ref> we show results with varying amounts of training data per class. We again find that transfer learning with global bird models (BirdNET and Perch) consistently outperforms general event-detection models trained on YouTube data (AudioMAE, YAMNet, and VGGish). In all cases, the bird models have an ROC-AUC significantly greater than 0.5 even with only 4 training examples. This suggests these models can be used for active learning on novel tasks, starting even from a handful of examples. Lower Top-1 accuracy scores suggest that inter-class calibration may still be a difficulty for simple linear probes, though unlabeled vocalizations in the evaluation set may account for some of the difficulty. For the Watkins dataset, a significant fraction of confusions (18.6%) occurred between bearded seals and bowhead whales, two highly vocal Arctic marine mammal species (see Table <ref>). Both species are known to overlap in range and are frequently recorded together, especially during the late spring and early summer months <cit.>. This is also the case for the weakly-labeled training data we used, which explains the comparatively high degree of confusion. More sophisticated pre-processing of the training data and adding some strongly labeled data would help to increase the classification performance for these two species. The confusion between co-occurring dolphin species is also not surprising. First, these data were downsampled to the audible frequency range, which will cut off higher-frequency components of the vocalizations. In addition, dolphin species are generally difficult to classify acoustically <cit.> because they produce highly variable vocalizations including whistles, echolocation clicks, and burst pulses.
Lastly, dolphins also occur in mixed-species groups, which can make it challenging to obtain clean training data. We also see a particularly high variance in model quality for the YD dataset in the low-data regime. Since this is only a two-class problem, there are fewer total examples used for training in the low-data regime. However, this is also a subtle problem: the Yellowhammer dialects are distinguished by the order of the last two notes of the song, mid-then-high versus high-then-mid. Other variations, in the timbre of the initial portion of the song and in the up- or down-sweep of the high note, do not distinguish between the two dialects. The subtlety of the problem apparently makes it easy to over-generalize from a few examples. §.§ Embedding size We ran an additional ablation on embedding size while investigating the difference between the BirdNET 2.2 and Perch models, which had embedding sizes 320 and 1280, respectively. Increasing the size of the BirdNET embedding to 1024 led to performance similar to the Perch model on most downstream tasks; the new BirdNET 2.3 has a larger embedding as a result. An ablation over the embedding dimension is summarized in Fig. <ref>, which shows the ROC-AUC scores. The Top-1 Accuracy and ROC-AUC scores on different datasets using various embedding sizes are shown in Table <ref>. For this, we varied the final embedding size in the Perch model, keeping the EfficientNet B1 architecture otherwise unchanged. The 320-dimensional embedding (matching BirdNET 2.2) has significantly degraded quality on all tasks. Doubling the base Perch embedding dimension to 2560 yields a further increase in model performance on some downstream tasks. The larger embedding size substantially increases model size (because the large classification output layer doubles in size) and increases the storage footprint for the embeddings themselves. However, the impact on overall model runtime (as reported in <ref>) is modest because most computation time is spent in the early layers. §.§ Visualizing pre-trained embeddings We can also observe the geometry of the embedding space using a t-SNE transformation of the model embeddings <cit.>. The t-SNE transformation attempts to preserve distances in the embedding space while projecting to two dimensions. In Fig. <ref> we plot t-SNE transforms for YAMNet, AudioMAE, and Perch. Note that t-SNE plots can be tricky to interpret appropriately <cit.>, though points which are close in the original space tend to be close after applying the t-SNE transform. In the easier Godwit problem (Fig. <ref>), we observe cleaner clustering of labeled data in the Perch embeddings, with large margins suggesting easy linear separability of the classes. By contrast, there are no clean margins between classes in the YAMNet embeddings, and smaller, noisier margins for the AudioMAE embeddings. For the more difficult Yellowhammer problem, we observe a complete intermixing of the two classes for YAMNet, explaining the model's inability to linearly separate the classes. For AudioMAE, which performs marginally better, we can observe a couple of pockets of concentrated blue points, but no clear clustering. For Perch, we see some clustering, but still a great deal of inter-mixed data.
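Such visualizations can be produced with scikit-learn as sketched below; the perplexity value is an illustrative default, and t-SNE layouts are sensitive to it, so cluster shapes should be interpreted with care:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(embeddings, labels, perplexity=30, seed=0):
    # Project embeddings to 2-D, then scatter-plot colored by class.
    xy = TSNE(n_components=2, perplexity=perplexity,
              random_state=seed).fit_transform(embeddings)
    for c in np.unique(labels):
        m = labels == c
        plt.scatter(xy[m, 0], xy[m, 1], s=4, label=str(c))
    plt.legend()
    plt.show()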
§.§ Additional AudioMAE Investigation In recent years, transformer and self-supervised models have taken a dominant position in machine learning research. Therefore, it may be surprising that AudioMAE, a self-supervised transformer, under-performed the humble EfficientNet-B1 architecture. We performed a number of additional experiments to discover whether additional tweaking of the experimental setup would uncover hidden performance gains for the AudioMAE embeddings. In Table <ref> we give results for three different treatments on all six datasets. First, we compared embedding quality between the pre-trained unsupervised embedding and the embedding obtained from supervised fine-tuning on AudioSet. Because the unsupervised objective is spectrogram reconstruction, one would expect all relevant information to be present in the pre-trained embedding, but possibly suppressed by fine-tuning on the irrelevant AudioSet label space. In fact, using the pre-trained or fine-tuned embedding does change the metrics, but not in a predictable way. One significant improvement was obtained by ignoring the audio sample rate when loading the target audio. Because AudioMAE consumes 16 kHz audio, any significant features above the Nyquist frequency of 8 kHz will be lost when audio is resampled to the model's input rate. Instead of resampling, we may load the audio at its native sample rate and feed it directly to the model as though it were 16 kHz. This change almost always improved the AudioMAE metrics. We also tried using a two-layer network with the pre-trained model, under the hypothesis that the raw self-supervised embedding may not be well aligned with classification tasks. The two-layer network consists of batch normalization, a hidden layer with 2048 units (double the embedding dimensionality), a ReLU activation, and an output layer. The best overall AudioMAE performance was obtained by using the two-layer perceptron and no audio resampling with the pre-trained embeddings. Despite substantial effort, we found that the bird models, with no additional tweaking, uniformly outperformed the AudioMAE model.
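For reference, a sketch of the two-layer probe described above, assuming a 1024-dimensional embedding; the layer sizes follow the text, and the rest is illustrative:

import torch.nn as nn

def two_layer_probe(embed_dim, num_classes):
    # Batch-norm, hidden layer at twice the embedding dimensionality
    # (2048 units for a 1024-d embedding), ReLU, then a linear output;
    # trained with sigmoid + BCE as elsewhere in the paper.
    return nn.Sequential(
        nn.BatchNorm1d(embed_dim),
        nn.Linear(embed_dim, 2 * embed_dim),
        nn.ReLU(),
        nn.Linear(2 * embed_dim, num_classes),
    )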
§ DISCUSSION Our study explored generalizable feature representations (embeddings) within the bioacoustics domain, focusing on the application of large-scale audio classification models to previously unencountered taxonomic groups such as marine mammals, bats, and frogs, in addition to intraspecific calls and dialects of a bird species. Our empirical findings have significant implications for Passive Acoustic Monitoring (PAM), potentially enhancing the methods by which we detect and classify animal species based on their sounds. The performance results displayed in Fig. <ref> underscore the value of transfer learning with global bird models such as BirdNET and Perch. These models consistently outperformed general event-detection models trained on broader auditory data, such as the YouTube-sourced data utilized by AudioMAE, YAMNet, and VGGish. This observation is pivotal, as it suggests that models specifically trained on bird data possess a heightened capacity for generalization, successfully identifying and analyzing previously unencountered bioacoustic patterns. This finding might be attributed to the inherent diversity and complexity found in bird vocalizations. Bird songs and calls occupy a broad range both temporally and in the spectral domain, exhibiting diverse frequency modulations, harmonic structures, and rhythmic patterns. This wide array of acoustic characteristics provides a rich and versatile training dataset for models such as BirdNET and Perch. The comprehensive nature of these vocalizations may have facilitated the models' ability to learn more generalized representations of bioacoustic patterns. This versatility in bird vocalizations has a dual implication. Firstly, it enriches the training dataset, providing varied instances for the model to learn from, and subsequently enables the model to capture a broader range of acoustic patterns, improving its ability to generalize to novel categories. Secondly, the acoustic diversity among bird species might mimic the bioacoustic variability encountered in other taxa, thus further enhancing the model's generalization capabilities when applied to sounds from different taxa. This hypothesis provides an intriguing direction for future research: exploring the specific characteristics of bird vocalizations that contribute to these superior generalization capabilities. Understanding these characteristics could guide the collection and selection of training data for future bioacoustic models, with the aim of maximizing their generalization potential. The extensive diversity inherent in bird vocalizations, both in terms of acoustic characteristics and species diversity, is not just a theoretical advantage but also a practical one. The availability of a vast array of bird species audio data provides an advantageous basis for model training. This superior generalization capability of deep embeddings from bird models is an interesting finding, as it highlights the potential of these specialized models to provide a more robust and adaptable framework for varied bioacoustic tasks by learning good-quality embeddings from data. In the realm of bioacoustic sound event detection, the ability to generalize across distinct taxonomic categories and acoustic characteristics is invaluable, as it facilitates the fine-grained classification of call types and song dialects and the out-of-scope identification of acoustic events. Our results have shown promising prospects for bioacoustic recognition tasks even when faced with limited training data. Such good-quality feature embeddings can be utilized for few-shot transfer learning to learn new classes from a small amount of training data. Furthermore, our study supports the hypothesis that feature embeddings, especially those derived from bird data, can effectively represent high-dimensional categorical or discrete features in a low-dimensional continuous vector space. This could revolutionize the application of PAM, particularly in low-data regimes, by enabling more effective transfer learning between species, as well as both coarse-level classification and more fine-grained vocalization classification. § ACKNOWLEDGEMENTS The German Federal Ministry of Education and Research is funding the development of BirdNET through the project “BirdNET+” (FKZ 01|S22072). Additionally, the Federal Ministry of Environment, Nature Conservation and Nuclear Safety is funding the development of BirdNET through the project “DeepBirdDetect” (FKZ 67KI31040E). BirdNET is also supported by Jake Holshuh (Cornell class of ’69) and The Arthur Vining Davis Foundations. Our work in the K. Lisa Yang Center for Conservation Bioacoustics is made possible by the generosity of K. Lisa Yang to advance innovative conservation technologies to inspire and inform the conservation of wildlife and habitats. BG acknowledges funding from the Landesforschungsförderung Hamburg within the AuTag BeoFisch (LFF-FV91) project while working as a researcher at the Hamburg University of Applied Sciences.
http://arxiv.org/abs/2307.06003v1
20230712083042
Unsupervised Optical Flow Estimation with Dynamic Timing Representation for Spike Camera
[ "Lujie Xia", "Ziluo Ding", "Rui Zhao", "Jiyuan Zhang", "Lei Ma", "Zhaofei Yu", "Tiejun Huang", "Ruiqin Xiong" ]
cs.CV
[ "cs.CV" ]
Unsupervised Optical Flow Estimation with Dynamic Timing Representation for Spike Camera National Engineering Research Center of Visual Technology, Peking University Beijing Academy of Artificial Intelligence Institute for Artificial Intelligence, Peking University August 12, 2023 ========================================================================================================= Efficiently selecting an appropriate spike stream data length to extract precise information is key to spike-based vision tasks. To address this issue, we propose a dynamic timing representation for spike streams. Based on a multi-layer architecture, it applies dilated convolutions along the temporal dimension to extract features at multiple temporal scales with few parameters, and we design a layer attention mechanism to dynamically fuse these features. Moreover, we propose an unsupervised learning method for optical flow estimation in a spike-based manner to break the dependence on labeled data. In addition, to verify robustness, we also build a spike-based synthetic validation dataset for extreme scenarios in autonomous driving, denoted as the SSES dataset. It consists of various corner cases. Experiments show that our method can predict optical flow from spike streams in different high-speed scenes, including real scenes. For instance, our method achieves 15% and 19% error reductions relative to the best prior spike-based work, SCFlow, in the Δ t=10 and Δ t=20 settings, respectively, which are the same settings as in previous works. § INTRODUCTION Optical flow is defined as the apparent motion of individual pixels on the image plane and is widely used as an auxiliary tool for various vision tasks, such as frame rate conversion <cit.>, scene segmentation <cit.>, and object detection <cit.>. In high-speed scenes, optical flow estimation may suffer from blurry images produced by low frame-rate traditional cameras. Obtaining data with a device that can precisely record the continuous light intensity changes of the scene is the key to addressing this issue. Recently, neuromorphic cameras, such as event cameras and spike cameras, have developed rapidly. They can record light intensity changes in high-speed scenes. In particular, each pixel of a spike camera responds independently to the accumulation of photons by generating asynchronous spikes, recording full visual details with an ultra-high temporal resolution (up to 40 kHz). With these features, the spike camera has demonstrated superiority in handling some high-speed scenarios <cit.>. Since the spike camera can record details of high-speed moving objects, it has enormous potential for estimating more accurate optical flow in high-speed scenes. Considering that deep learning has achieved remarkable success in frame-based optical flow estimation <cit.>, it seems reasonable to directly apply frame-based architectures to spike data. However, the data modality of the spike streams output by a spike camera is quite different from that of frame images.
For each pixel in a spike camera, a spike is fired and the accumulation is reset when the photon accumulation at that pixel exceeds a set threshold. At each timestamp, the spike camera outputs a binary matrix, denoted a spike frame, representing the presence of spikes at all pixels. Previous work <cit.> utilizes the raw spike stream as a naïve input representation for one timestamp, consisting of a series of spike frames within a predefined time window. However, a too-long window can include many misleading frames, while a short window is sensitive to noise and cannot provide enough information. Therefore, deliberate modifications to the input representation are needed to extract salient information more flexibly and efficiently from spike streams before the optical flow estimation architecture takes over. In addition, ground-truth optical flow is scarce in the real world, especially for high-speed scenes. To cope with the lack of labeled real-world data, it is necessary to study spike-based optical flow estimation in an unsupervised manner. As described above, the light intensity information is contained in the spike intervals. This difference in data characteristics makes it unreasonable to directly apply frame-based unsupervised losses to spike streams. Therefore, the light intensity should first be extracted from the spike streams in a spike-based unsupervised loss. This is also the core of constructing illuminance consistency on spike streams. Moreover, we argue that autonomous driving is a good field in which to validate the spike camera, since it involves high-speed scenes. In autonomous driving, it is nearly impossible to collect real data for complex, diverse, high-speed extreme scenarios, such as vehicle collisions and pedestrian-vehicle accidents. However, these scenarios are of great significance for improving the safety of this field and should be highlighted. Therefore, in order to verify that spike-based algorithms can handle extreme scenarios, we propose a spike-based synthetic validation dataset for extreme scenarios in autonomous driving, denoted as the SSES dataset. In this paper, we propose an unsupervised method for spike-based optical flow estimation with dynamic timing representation, named USFlow. In our unsupervised loss, we propose two strategies, multi-interval-based and multi-time-window-based, to estimate light intensity in regions with different motion speeds. The estimated optical flow is utilized to distinguish regions with different motion speeds and generates corresponding weights for light intensity fusion. The final approximate light intensity then participates in the loss calculation. As for the fixed-time-window issue, one option is to apply a dynamic time window to the different spike streams. To this end, we propose the Temporal Multi-dilated Representation (TMR) for spike streams. In more detail, we apply multi-layer dilated convolutions along the temporal dimension of spike streams. Multi-layer dilated convolutions give the network different receptive fields, and each layer can be regarded as summarizing the spike stream with a different time window. We also design a Layer Attention (LA) module to extract salient features and filter the redundant ones. Following the settings of previous works <cit.>, we train our method on the SPIFT <cit.> dataset and evaluate it on the PHM <cit.> and our proposed SSES datasets. We demonstrate its superior generalization ability in different scenarios.
Results show that USFlow outperforms all existing state-of-the-art methods qualitatively and quantitatively, and shows visually impressive performance on real-world data. § RELATED WORKS Deep Learning in Optical Flow Estimation. Frame-based optical flow estimation is a classical computer vision task that has been studied extensively over the years. PWC-Net <cit.> and LiteFlowNet <cit.> introduce the pyramid and cost volume into neural networks for optical flow, warping the features at different levels of the pyramid and learning the flow fields in a coarse-to-fine manner. RAFT <cit.> utilizes a ConvGRU to construct the decoders in the network and iteratively decodes the correlation and context information at a fixed resolution. Due to their excellent performance, PWC-Net and RAFT are the backbones of most algorithms <cit.> in frame-based optical flow estimation. In addition, many frame-based unsupervised optical flow networks <cit.> have been proposed to remove the need for labeled data. Similar to traditional optimization-based methods, Yu et al. <cit.> employ a photometric loss and a smoothness loss to train a flow estimation network. UnFlow <cit.> applies a forward-backward check on bidirectional optical flow to estimate the occlusion area, where the backpropagation of the photometric loss is stopped. Deep learning has also been applied to event-based optical flow <cit.>. EV-FlowNet <cit.> can be regarded as the first deep learning work trained on large datasets, using a U-Net architecture <cit.>. As an updated version of EV-FlowNet, an unsupervised framework based on contrast maximization has been proposed by Zhu et al. <cit.>. Spike-FlowNet <cit.> and STE-FlowNet <cit.> use spiking neural networks and ConvGRUs, respectively, to extract the spatial-temporal features of events. Research on spike-based optical flow estimation is just getting started. SCFlow <cit.> is the first deep-learning method; it introduces a large dataset, the SPIFT dataset, to train an end-to-end neural network via supervised learning. Our work, in contrast, aims to fill in the blanks of unsupervised learning. Event-based and Spike-based Input Representation. Normally, asynchronous event streams are not well suited to frame-based deep learning architectures. Therefore, a frame-like input representation is needed and is expected to capture rich salient information about the input streams. Apart from many handcrafted representations <cit.>, some end-to-end learned representations have been proposed, making it possible to generalize better to different vision tasks. Gehring et al. <cit.> simply use multi-layer perceptrons as a trilinear filter to produce a voxel grid of temporal features. Cannici et al. <cit.> propose Matrix-LSTM, a grid of Long Short-Term Memory (LSTM) cells that efficiently processes events and learns end-to-end task-dependent event surfaces. Similarly, Event-LSTM <cit.> utilizes LSTM cells to process the sequence of events at each pixel into a single output vector that populates a 2D grid. Moreover, Vemprala et al. <cit.> present an event variational autoencoder and show that it is feasible to learn compact representations directly from asynchronous spatio-temporal event data. Directly using the spike stream as input might incur much misleading information, or miss necessary details, if the time window is not chosen appropriately; therefore, a frame-like input representation should also be carefully designed to extract sufficient information from the spike stream.
SCFlow <cit.> uses the estimated optical flow to align spike frames to eliminate motion blur. Different from all previous works, we aim to train an end-to-end input representation with the function of a dynamic time window via multi-layer dilated convolutions. § PRELIMINARIES §.§ Spike Camera A spike camera works by an "integrate-and-fire" mechanism, which asynchronously accumulates the light arriving at each pixel. The integrator of a spike camera accumulates the electrons transferred from incoming photons. Once the accumulated electrons exceed a set threshold, the camera fires a spike and resets the accumulation. The accumulation process can be formulated as 𝐀(𝐱, t) = ( ∫_0^t α I(𝐱,τ) dτ ) mod θ, where 𝐀(𝐱, t) is the accumulated charge at pixel 𝐱=(x,y), I(𝐱, τ) is the light intensity at pixel 𝐱 at time τ, α is the photoelectric conversion rate, and θ is the threshold. The reading time of spikes is quantized with a period δ on the order of microseconds. The spike camera fires spikes at times T = nδ, n ∈𝐙, and generates an H × W spike frame s. As time goes on, the camera produces a spatial-temporal binary stream S_t^N of size H × W × N, as shown in Figure <ref>, where N is the temporal length of the spike stream and H and W are the height and width of the sensor, respectively.
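To make the integrate-and-fire mechanism concrete, the following sketch simulates spike generation from a discretized intensity sequence; the values of α and θ are illustrative, and the per-tick discretization of the integral is our own simplification:

import numpy as np

def simulate_spikes(intensity, alpha=1.0, theta=2.0):
    # intensity: array of shape (N, H, W); accumulate alpha * I per tick,
    # fire a spike when the accumulator crosses theta, then subtract theta
    # (the reset implied by the modulo in the accumulation equation).
    acc = np.zeros(intensity.shape[1:], dtype=np.float64)
    spikes = np.zeros(intensity.shape, dtype=np.uint8)
    for t in range(intensity.shape[0]):
        acc += alpha * intensity[t]
        fired = acc >= theta
        spikes[t][fired] = 1
        acc[fired] -= theta
    return spikes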
§.§ Problem Statement Given two timestamps t_0 and t_1, we have two spike streams centered on t_0 and t_1, denoted S_t_0^L and S_t_1^L, respectively. From these two sub-spike streams we estimate a dense displacement field 𝐟=(f^u,f^v ) from t_0 to t_1, mapping each pixel 𝐱=(x,y) at t_0 to its corresponding coordinates 𝐱'=(x',y')=(x+f^u(𝐱),y+f^v(𝐱)) at t_1. § METHOD §.§ Overview To verify the effectiveness of the proposed dynamic timing representation, we present two versions of USFlow. One is the PWC-like version, which adopts a variant of PWC-Net <cit.> that is also the backbone of SCFlow <cit.>. The other is the RAFT version, which adopts the official small version of RAFT <cit.> as the backbone. More about these two backbones is included in the appendix. Considering that one advantage of neuromorphic vision is low latency, our representation module should be lightweight and efficient. The two spike streams first pass separately into the shared dynamic timing representation module, whose outputs are sent into the existing backbones. In addition, we design an unsupervised loss to break the dependence on labeled data. Note that all the components are trained end-to-end. §.§ Dynamic Timing Representation As stated in <ref>, a single binary spike frame is sensitive to noise and meaningless without contextual connection. Given a stream of spike data, selecting an appropriate data length is pivotal for subsequent processing. A too-long spike stream is not suitable for high-speed regions, since time offset accumulation introduces more redundant information. A too-short spike stream does not work either: it cannot represent light intensity precisely with only a few binary samples. To address this issue, we propose a dynamic timing representation for input spike streams. The Dynamic Timing Representation consists of a Temporal Multi-dilated Representation (TMR) module and a Layer Attention (LA) module. The main ingredient of TMR is dilated convolution. By using dilated convolutions, we can extend receptive fields to a larger range with just a few stacked layers, while preserving the input resolution throughout the network as well as computational efficiency. In more detail, we apply 1D dilated convolutions to the spike stream at each pixel, with the parameters shared across all pixels. The TMR can be formulated as F^(i)(𝐱) = D1C^(i)(F^(i-1)(𝐱)), i=1, …, n, where D1C(·) represents the 1D dilated convolution operation, and i and 𝐱 are the layer index and spatial coordinate, respectively. Note that F^(0)(𝐱) is the input spike stream at position 𝐱. Figure <ref> depicts dilated convolutions with dilations 1, 2, 4, and 8. The higher the layer, the larger the receptive field. The intuition behind this configuration is two-fold. First, each layer can be regarded as summarizing spikes, or extracting temporal correlations among spike frames, with a different time window. Second, dilated convolution allows the network to operate on a coarser scale more effectively than a normal convolution, which keeps the model lightweight. In Table <ref>, we show the parameter sizes of our USFlow and other methods. The multi-dilated architecture greatly expands the temporal receptive field, but not all layers provide equally useful information. Blindly fusing the output of all layers may impair the learning process. To address this issue, we propose the Layer Attention module to flexibly emphasize which layer is meaningful, i.e., which time window is optimal. As illustrated in Figure <ref>, we average the outputs of the n layers, { F^(i)(𝐱) }^n_i=1, denoted F^'(𝐱), to generate an n-dimensional layer context descriptor. The descriptor is then forwarded to a multi-layer perceptron (MLP) to produce our layer attention map. The layer attention values are broadcast along the layer dimension, and the final concatenation, RF(𝐱), is computed as follows: RF(𝐱) = σ ( MLP( AvgPool(F^'(𝐱)))) ⊗ F^'(𝐱), where ⊗ denotes element-wise multiplication. We apply this operation at all pixels with shared weights.
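A sketch of how the TMR stack and layer attention could be realized in PyTorch follows; the channel width, kernel size, and number of layers are illustrative assumptions rather than the paper's exact configuration:

import torch
import torch.nn as nn

class DynamicTimingRepresentation(nn.Module):
    def __init__(self, n_layers=4, channels=16):
        super().__init__()
        self.convs = nn.ModuleList()
        for i in range(n_layers):
            d = 2 ** i                        # dilations 1, 2, 4, 8
            in_ch = 1 if i == 0 else channels
            # padding = dilation keeps the temporal length N unchanged
            self.convs.append(nn.Conv1d(in_ch, channels, kernel_size=3,
                                        dilation=d, padding=d))
        # simple attention head standing in for the MLP + sigmoid in the text
        self.attn = nn.Sequential(nn.Linear(n_layers, n_layers), nn.Sigmoid())

    def forward(self, s):                     # s: (B, 1, N) per-pixel spike streams
        feats, x = [], s
        for conv in self.convs:
            x = torch.relu(conv(x))
            feats.append(x)                   # each layer = one "time window"
        f = torch.stack(feats, dim=-1)        # (B, C, N, n_layers)
        desc = f.mean(dim=(1, 2))             # layer context descriptor, (B, n_layers)
        w = self.attn(desc)                   # layer attention weights
        out = f * w[:, None, None, :]         # broadcast along the layer dimension
        return out.permute(0, 3, 1, 2).flatten(1, 2)   # concat layers over channels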
§.§ Unsupervised Loss Unsupervised learning is necessary for spike-based optical flow estimation due to the lack of labeled training data in high-speed real-world scenes. To this end, we sidestep the need for ground-truth optical flow, as only spike data are required for training. More specifically, at training time, two spike streams centered on t_0 and t_1, S_t_0^L and S_t_1^L, are utilized to predict the optical flow 𝐟=(f^u,f^v ) from t_0 to t_1. Different from traditional RGB frames, a binary spike frame cannot accurately represent the light intensity at the current timestamp. Therefore, estimating precise current light intensity from binary spike streams is the key to constructing an unsupervised loss function. For low-speed regions. Since the light intensity accumulation in low-speed regions is an approximately linear process, we count spikes over a longer duration to estimate the light intensity. In this way, we improve the robustness of the light intensity estimation. However, the data length of the spike stream used to estimate light intensity varies across different low-speed motions. To reduce computational complexity, we use only two different time windows. The light intensity estimate can be formulated as I_T(𝐱,τ) = ω_s · θ/(2D_s+1) · ∑_t=τ-D_s^τ+D_s s(𝐱,t) + ω_l · θ/(2D_l+1) · ∑_t=τ-D_l^τ+D_l s(𝐱,t), where D_s and D_l are the half-lengths of the time windows, set to 40 and 100, respectively; ω_s and ω_l are the weight factors; and the subscripts s and l refer to the short and long time windows, respectively. For high-speed regions. Selecting a large time span of spike streams to extract information would incur motion blur due to time offset accumulation. Hence, we estimate light intensity during a single spike interval, which is typically on the order of microseconds. Since spike streams have extremely high temporal resolution and the interval between adjacent spikes is ultra-short, we can safely assume that the light intensity remains constant during the interval <cit.>. Let s(𝐱,m) and s(𝐱,n) denote two adjacent spikes at position 𝐱, where m and n are the timestamps corresponding to these two spikes. According to the spike camera working mechanism in Equation <ref>, the constant light intensity can be approximated by Î(𝐱) ≈ θ/(α·(n-m)), m < n. In reality, however, the number of incoming photons in an ultra-short interval is a random variable subject to a Poisson distribution, even under constant illumination. Therefore, the light intensity calculated by Equation <ref> includes errors due to random fluctuations. To address this issue, we extend it to multiple intervals and fuse light intensity information from different numbers of intervals: I_I(𝐱,τ) = ∑_k=1^K ω_k · (2k-1)·θ / (α·[T(𝐱,N_τ(𝐱)+k-1)-T(𝐱,M_τ(𝐱)-k+1)]), where M_τ(𝐱) = max_z ( T(𝐱,z) < τ ) and N_τ(𝐱) = min_z ( T(𝐱,z) ≥τ ). In Equations <ref> and <ref>, T(𝐱,z) refers to the timestamp of the z-th spike at position 𝐱, and [T(𝐱,N_τ(𝐱)+k-1)-T(𝐱,M_τ(𝐱)-k+1)] is the total time length of these (2k-1) intervals. ω_k is the weight factor of the light intensity calculated using (2k-1) intervals. Note that k is set to 1 and 2 in our experiments. This setting ensures that the data length of the spike stream used for light intensity estimation in high-speed regions is much shorter than that used in low-speed regions. A more detailed discussion is presented in the appendix. Learnable weights of estimated light intensity. Considering that a scene may contain both high-speed and low-speed regions, our unsupervised loss must fuse the multi-interval-based and multi-time-window-based light intensity estimates. We use the estimated optical flow to reflect the motion speed and choose the most appropriate light intensity estimation strategy for different regions. As shown in Figure <ref>, we learn the weights ω_s, ω_l, ω_k=1, and ω_k=2 from the estimated optical flow. Hence, we can fuse all the terms in Equation <ref> and Equation <ref> to obtain the final approximate light intensity I. After obtaining I, we derive the bidirectional photometric loss in a spike-based manner: ℒ_ photo(𝐟,𝐟') = ∑_𝐱(ρ(I(𝐱,t_0)-I(𝐱+𝐟,t_1))+ρ(I(𝐱+𝐟',t_0)-I(𝐱,t_1))), where ρ is the Charbonnier loss, 𝐟 is the flow from t_0 to t_1, and 𝐟' is the flow from t_1 to t_0. Furthermore, we use a smoothness loss to regularize the predicted flow. The total loss function consists of the two loss terms above and can be written as ℒ_ total = ℒ_ photo+λℒ_ smooth, where λ is the weight factor.
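To illustrate the loss construction, the following sketch implements the Charbonnier penalty and the bidirectional photometric term on estimated intensity maps; the warping routine is a standard bilinear backward warp, and the Charbonnier constants are common defaults rather than the paper's values:

import torch
import torch.nn.functional as F

def charbonnier(x, eps=1e-3, q=0.45):
    # generalized Charbonnier penalty rho, averaged over all pixels
    return ((x * x + eps * eps) ** q).mean()

def warp(img, flow):
    # backward-warp img by flow with bilinear sampling: out(x) = img(x + flow(x))
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys)).float().to(img.device)   # (2, H, W), channel 0 = x
    coords = base[None] + flow                            # flow: (B, 2, H, W)
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0               # normalize to [-1, 1]
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                  # (B, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)

def photometric_loss(I0, I1, flow_fwd, flow_bwd):
    # bidirectional photometric term on estimated intensities (B, 1, H, W)
    return charbonnier(I0 - warp(I1, flow_fwd)) + charbonnier(I1 - warp(I0, flow_bwd))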
§ EXPERIMENTS §.§ Implementation Details In this work, we choose the SPIFT dataset <cit.> as the training dataset. The PHM dataset <cit.> and our proposed SSES dataset are used for evaluation. The SPIFT and PHM datasets provide two data settings, generating optical flow every 10 spike frames (Δ t=10) and every 20 spike frames (Δ t=20) from the start to the end of sampling. Therefore, we train models for the Δ t=10 and Δ t=20 settings separately, as in SCFlow <cit.>. All training details are included in the appendix. More details on the SSES dataset are given in Section <ref>. The color map used in the visualizations follows Middlebury <cit.>. §.§ Comparison Results Evaluation of Input Representation. To fully validate the effectiveness of the proposed input representation, we first train the model in a supervised manner and compare it with the supervised baselines listed in the prior spike-based work, SCFlow <cit.>. In more detail, apart from SCFlow, we compare our network with baselines from event-based optical flow, EV-FlowNet <cit.> and Spike-FlowNet <cit.>. We also compare our network with frame-based optical flow networks, RAFT <cit.> and PWC-Net(variant) <cit.>. Note that these two frame-based networks are lightweight versions, as described in Section <ref>, and we implement our method on both networks, denoted USFlow(raft) and USFlow(pwc). All the methods in Table <ref> are fed only spike streams as inputs. As illustrated in Table <ref>, our input representation indeed improves performance on top of frame-based backbones, demonstrating the necessity of pre-processing operations applied directly to the spike stream. In addition, USFlow(pwc) achieves the best mean AEE of 0.854 in the Δ t=10 setting, and USFlow(raft) achieves the best mean AEE of 1.649 in the Δ t=20 setting, corresponding to 15% and 19% error reductions relative to the best prior deep network, SCFlow. Note that the mean AEE value is averaged over nine scenes. Meanwhile, our input representation has the fewest parameters of the compared methods: the input representation in SCFlow has 0.23M parameters, whereas ours has only 0.05M. To verify that the performance boost is not simply due to an increased parameter count, we replace the dilated convolutions with normal convolutions of the same input size and feature channels, denoted RAFT+conv and PWC-Net(variant)+conv. We find that blindly increasing the number of parameters does not help: normal convolution provides only a limited performance improvement. We therefore claim that dilated convolutions extract salient information from spike streams more effectively. Table <ref> shows a comparison between PWC-Net(variant) and USFlow(pwc), indicating that our representation also shows superiority with our unsupervised loss. Evaluation of Unsupervised Loss. Note that we only build USFlow on PWC-Net(variant) for the unsupervised evaluation, given the similar performance of the two backbones in Table <ref>. Since the metric for measuring appearance similarity is critical for any unsupervised optical flow technique <cit.> in the frame-based domain, we compare against alternative similarity measures: the structural similarity index (SSIM) and the Census loss. As illustrated in Table <ref>, our proposed unsupervised loss helps the model achieve significant performance improvements. Components of our loss are analyzed in Section <ref>. Qualitative Results. Sample RGB images, ground-truth flow, and the corresponding predicted flow images for the PHM dataset are visualized in Figure <ref>. Note that PWC-Net(variant) and USFlow(pwc) are trained unsupervised. USFlow(pwc) can predict more detailed flow than PWC-Net(variant) in many regions. However, a performance gap between the unsupervised and supervised methods remains. Moreover, among the supervised methods, the directions of the flows predicted by USFlow(pwc) (viewed in color) are closer to the ground truth than those of SCFlow, especially at object edges.
Fine-tuned Results. We collect real data in street scenes and split it into training and evaluation sets. Owing to the advantage of unsupervised learning over supervised learning, we can fine-tune the unsupervised model on the real data, which has no ground truth, to bridge the domain gap. The fine-tuned model achieves better qualitative results than the supervised model trained on SPIFT on the evaluation set of street scenes. Some qualitative results can be found in Figure <ref>; further clarifications are given in the appendix. §.§ Ablation Study Dynamic Timing Representation. We conduct ablation studies on the dynamic timing representation in both supervised and unsupervised settings. Table <ref> again verifies that dilated convolutions can effectively extract salient information to build a promising input representation. More analysis regarding dilated convolutions is in the appendix. Though layer attention improves the performance only marginally, we find that it makes the training process more stable and faster, as illustrated in Figure <ref>. Unsupervised Loss. Since estimating the light intensity from spike streams is the core of our unsupervised loss, we run ablation studies on it. As shown in Figure <ref>, the mean AEE of experiment (A) is higher than that of (B); the reason is that there are fewer high-speed motion regions than low-speed motion regions in the PHM dataset. Compared with experiments (A) and (B), our full unsupervised loss can handle regions with different motion speeds and achieves the best performance. §.§ SSES Dataset Based on CARLA <cit.>, we build a synthetic dataset for extreme scenarios in autonomous driving. CARLA is an open-source simulator for autonomous driving research, which provides open digital assets (urban layouts, buildings, vehicles) for building specific scenarios. Moreover, the simulation platform supports flexible specification of sensor suites, environmental conditions, and much more. In addition, CARLA can provide the ground truth for optical flow, instance segmentation, and depth. In the proposed SSES dataset, we design ten extreme scenarios, mainly focusing on traffic accidents caused by violations of traffic rules or by blind spots. We also include various street scenes, background vehicles, and weather conditions to make the scenes more diverse. A demonstration of sample cases and more descriptions of the extreme scenarios are in the appendix. In all scenarios, the speed range is 10 ∼ 16 m/s for cars and 5 ∼ 8 m/s for pedestrians and bicycles, and the frame rates for rendered RGB frames and spike frames are 500 fps and 40K fps, respectively. To generate the spike frames, we first increase the frame rate of the RGB frames to 40K fps through a flow-based interpolation method and then generate spikes by treating pixel values as light intensity and simulating the integrate-and-fire mechanism <cit.>. Note that the ground-truth optical flow is obtained at timestamps aligned with the RGB frames. The sequence duration is about 0.5 ∼ 1.5 s. Sample RGB images, ground-truth flow, and the corresponding predicted flow images for the SSES dataset are visualized in Figure <ref>. USFlow(pwc) can successfully predict the optical flow in regions where vehicles and pedestrians exist (highlighted by black boxes), which can support decision-making in autonomous driving. Table <ref> shows the quantitative evaluation on the SSES dataset. Overall, USFlow(pwc) models trained in the Δ t=20 setting achieve better performance.
This is likely because the magnitude of motion is relatively large in autonomous driving scenarios, especially when objects appear suddenly. § CONCLUSIONS We propose an unsupervised method for learning optical flow from continuous spike streams. Specifically, we design a dynamic timing representation for spike streams and propose an unsupervised loss function in a spike-based manner. Moreover, we simulate extreme scenarios in autonomous driving and propose a validation dataset, SSES, for testing the robustness of optical flow estimation in high-speed scenes. Experimental results show that our USFlow achieves state-of-the-art performance on PHM, SSES, and real data. Limitations. The characteristics of spike streams generated in extremely dark scenes are quite different from those in bright scenes, so the length of the time window in the unsupervised loss may need to be reset during fine-tuning. We plan to extend our method to address this issue in future work.
http://arxiv.org/abs/2307.04677v1
20230710162135
Practical Trustworthiness Model for DNN in Dedicated 6G Application
[ "Anouar Nechi", "Ahmed Mahmoudi", "Christoph Herold", "Daniel Widmer", "Thomas Kürner", "Mladen Berekovic", "Saleh Mulhem" ]
cs.NI
[ "cs.NI", "eess.SP" ]
Practical Trustworthiness Model for DNN in Dedicated 6G Application This work was partially supported by the DFG Project Nr. 403579441, "Meteracom: Metrology for parallel THz communication channels." Anouar Nechi^1, Ahmed Mahmoudi^1, Christoph Herold^2, Daniel Widmer^1, Thomas Kürner^2, Mladen Berekovic^1, and Saleh Mulhem^1 ^1Institute of Computer Engineering, University of Lübeck, Lübeck, Germany ^2Institute for Communications Technology, Technische Universität Braunschweig, Braunschweig, Germany ^1{name.surname}@uni-luebeck.de, ^2{surname}@ifn.ing.tu-bs.de August 12, 2023 ========================================================================================================================================================================================================================================================================================================================================================================================== Artificial intelligence (AI) is considered an efficient response to several challenges facing 6G technology. However, AI still suffers from a huge trust issue due to its opaque way of making predictions. Therefore, there is a need for a method to evaluate the trustworthiness of AI in practice for future 6G applications. This paper presents a practical model to analyze the trustworthiness of AI in a dedicated 6G application. In particular, we present two customized deep neural networks (DNNs) to solve the automatic modulation recognition (AMR) problem in Terahertz communications-based 6G technology. Then, a specific trustworthiness model and its attributes, namely data robustness, parameter sensitivity, and security covering adversarial examples, are introduced. The evaluation results indicate that the proposed trustworthiness attributes are crucial for evaluating the trustworthiness of DNNs for this 6G application. 6G communication, Terahertz band, AI, Modulation recognition, Trustworthiness § INTRODUCTION The sixth-generation (6G) network technology aims to outperform the current wireless standards by utilizing frequencies above 100 GHz <cit.>. Hence, designing efficient communication systems at these frequencies is far more complex than at lower frequencies. 6G technology increasingly relies on two main pillars: Terahertz communications (THzCom) and Machine Learning (ML). While 6G has made one further step toward THzCom according to the IEEE 802.15.3d standard <cit.>, ML is recommended as a novel solution for 6G performance optimization <cit.>. In other words, the ultra-wide THz band, ranging from 0.1 to 10 THz, is foreseen as an excellent candidate for 6G, whereas ML has proved its efficiency in solving technical problems in communication systems <cit.>. Furthermore, ML often solves problems in wireless communications in an opaque manner. For instance, Deep Neural Networks (DNNs), a subset of ML, have been used in a black-box manner to solve the automatic modulation recognition problem <cit.>. Therefore, there is a huge need to understand the risk of deploying such Artificial Intelligence (AI) algorithms. According to the Independent High-Level Expert Group on AI <cit.>, the only way to achieve the maximum benefits of AI is to ensure its trustworthiness during all steps of development and use. The concept of trustworthy AI can be perceived as a response to mitigate the risks of deploying AI <cit.>. Several works have proposed definitions of system trustworthiness <cit.>, <cit.> or specified definitions of trustworthy AI <cit.>, <cit.>.
However, these definitions are still general and introduce principles more than practical approaches <cit.>. In practice, a model for trustworthiness evaluation in a dedicated 6G application is still missing. To our knowledge, the literature comprises studies investigating AI, especially DNNs, in communications such as THzCom-based 6G technology. Nevertheless, the open literature on applying DNNs in the 6G domain has not yet addressed the problem of trustworthiness evaluation. § RESEARCH METHODOLOGY & BACKGROUND Our proposed research methodology is carried out as follows: * We first define one of the 6G problems. In particular, we choose a THzCom-based automatic modulation recognition (AMR) problem as a demonstrator. * We propose two customized DNNs to solve this dedicated problem. * Then, we study the trustworthiness attributes that need to be considered for this problem and present the so-called trustworthiness model based on these attributes. * Finally, we apply this model as a practical approach to evaluate the trustworthiness of the customized DNNs. The focus of this research methodology is not on developing DNNs to solve the AMR problem but on using the customized DNNs as practical examples to evaluate their trustworthiness in the 6G environment. In the following, we introduce THzCom-based AMR as one of the 6G problems and review the available DNN-based solutions for such a problem. Then, we investigate the available trustworthiness models for DNNs. §.§ Deep Learning-based AMR for THz Communication In modern communication systems, a transmitter can use a pool of modulation schemes to control data rate and bandwidth usage. While the transmitter adaptively selects the modulation type, the receiver may or may not know the modulation type. This is usually perceived as a classification problem, where the receiver aims at recognizing and classifying the modulation. To solve such a problem, modulation information can be supplied in each signal frame, allowing the receiver to identify the modulation type and react accordingly. However, this approach has become more expensive since modern wireless networks are very heterogeneous and the number of users is increasing significantly. Therefore, such an approach may not be efficient enough in real-world scenarios, as it degrades spectrum efficiency due to the additional information in each signal frame <cit.>. AMR has been proposed to detect the modulation scheme of received signals without any overhead in the network protocol. Ultimately, the signals are demodulated, and the received data is recovered correctly. Furthermore, conventional AMR approaches require a huge amount of computation or expert feature extraction experience <cit.>. To overcome these issues, deep learning (DL) is considered a powerful tool that can be used for AMR to provide high classification accuracy. DL does not require prior pre-processing or feature extraction, making it more efficient than conventional approaches. For instance, Convolutional Neural Networks (CNNs) were used in <cit.> to extract features from raw I/Q data and perform classification. In <cit.>, Recurrent Neural Network (RNN)-based AMR has been proposed to extract sequence-correlated features of I/Q signal components and amplitude/phase signal components to recognize modulation schemes. Other works employed RNNs to estimate signal parameters and correct signal distortions such as Carrier Frequency Offset (CFO) and multipath fading <cit.>.
The results revealed that the proposed RNN model not only provides good accuracy in signal distortion estimation but also outperforms many DL methods in terms of classification accuracy. §.§ AI Trustworthiness Several works have investigated the concepts of trustworthiness and dependability to determine their attributes. In system design, availability, reliability, safety, integrity, and maintainability are defined as dependability attributes <cit.>. Nevertheless, this definition does not cover all security attributes, as it excludes confidentiality. In <cit.>, trustworthiness is defined as a twin of dependability that includes the following attributes: reliability, safety, maintainability, availability, integrity, and confidentiality. This definition considers security as one of the dependability attributes. In AI-based system design, the above definitions of trustworthiness do not cover recent AI requirements. AI is highly data-dependent and needs dedicated attributes for its trustworthiness. Therefore, new trustworthiness attributes have been introduced, mainly security, robustness, safety, transparency, and fairness <cit.>. However, these attributes are general and not specified for a dedicated AI application. To determine the trustworthiness attributes of DNNs for AMR in THzCom-based 6G technology, the interaction between the DNN and its host environment needs to be carefully investigated and described. §.§ Paper's Contribution As 6G is still in the early stage of its development, it is the right time to consider the trustworthiness of DL deployed in this technology. This paper proposes a trustworthiness model to analyze DNNs designed for recognizing modulation schemes in THzCom-based 6G technology. To the best of our knowledge, this work introduces the first practical approach to evaluating the trustworthiness of DNNs designed for AMR in THzCom-based 6G technology. § DEEP LEARNING-BASED AUTOMATIC MODULATION RECOGNITION §.§ Synthetic THz dataset A dataset of transmitted I/Q samples has been used for the AMR task. The THz dataset contains seven modulation schemes: BPSK, QPSK, 8PSK, QAM16, QAM64, 8APSK, and OOK. Each modulation scheme consists of 26 Signal-to-Noise-Ratio (SNR) levels with 4096 examples per level, so the total number of samples in the dataset is 745,472. It was generated using the link-level simulation module of the Simulator for Mobile Networks (SiMoNe) <cit.>. The link-level simulation module was developed to simulate point-to-point communication links under the influence of realistic propagation effects in accordance with the IEEE 802.15.3d standard <cit.>. The simulated transmission was performed using a Root-Raised-Cosine (RRC) transmit pulse and an AWGN channel. The Nyquist bandwidth is 880 MHz with an oversampling factor of 8, and no channel coding technique has been applied. All samples have the same representation to make data processing easier: each THz dataset sample has a 1024 × 2 shape (I/Q representation). §.§ Two DNN Models for Automatic Modulation Classification A DNN consists of multiple layers that process input data and generate a set of probabilities (classification). Each layer comprises a set of parameters (weights and biases) used, in conjunction with the activation function, to compute its output. In the following, two DNNs are trained on the proposed THz dataset to classify THz modulation schemes. The resulting DNN classifiers use a 32-bit floating-point (FP) parameter format.
§.§.§ ResNet for AMR

Deep Residual Networks (ResNets) are enhanced versions of CNNs. A ResNet uses skip connections to process features at multiple scales and depths through the network. Moreover, it is possible to use wider layers, train effectively with fewer epochs, and achieve better results compared to traditional CNNs <cit.>. We construct a ResNet layout similar to <cit.> for radio signal classification. Fig. <ref> shows the proposed ResNet architecture. It consists of six residual units, each with two skip connections, followed by a fully connected region with the same configuration as the proposed CNN but with only 159,015 parameters. The ResNet classifier achieves 70.8% accuracy across all SNR levels, exhibiting 2% higher accuracy than the CNN while having fewer parameters. This result emphasizes the effectiveness of ResNets over conventional CNN classifiers. Fig. <ref> shows the confusion matrix of the ResNet: only a 16.8% confusion between 16QAM and 64QAM remains, and for the other modulation schemes we observe a slight improvement in accuracy.

§ TRUSTWORTHINESS: MODEL & ATTRIBUTES

To determine the trustworthiness attributes of a DNN for AMR in THzCom-based 6G technology, we first formulate the DNN as a function of multiple inputs and parameters, and we link this formulation to the trustworthiness attributes as follows. Layer i of a DNN can be seen as an operation f_i[p_i](x_i-1), where p_i represents the set of layer i's parameters p_i=(W^j_i,b_i), including j weights and one bias, and x_i-1 is the output of the previous layer. A composition of these operations defines the DNN classifier f_DNN as

f_DNN(x_in; p) = f_l[p_l] ∘ ⋯ ∘ f_2[p_2] ∘ f_1[p_1](x_in),

where x_in is an input signal, p is the set of DNN parameters p={p_1, ..., p_l}, and l is the number of DNN layers. The values of the DNN parameters and the model hyperparameters are fixed during the training phase based on the THz dataset. In the prediction phase, the DNN classifier f_DNN(x_in; p) can be perceived as a function of two inputs: the trained parameters p and the signal x_in as an input variable. Therefore, the proposed trustworthiness model of such a DNN considers only the input signals x_in and the DNN parameters p={p_1, ..., p_l}.
Other building blocks of the DNN, such as activation functions, are considered reliable and trustworthy. This model helps to explain how the DNN interacts with the THzCom environment and the user. Fig. <ref> illustrates the three trustworthiness attributes that need to be considered in DNN-based AMR, described as follows:

* Data robustness analysis helps to understand when the DNN classifiers exhibit low accuracy due to environmental variation. It aims at evaluating such variations and their impact on the quality of DNN classifiers <cit.>. Precisely, DNN robustness analysis investigates the effect of a noisy environment on the input signals x_in and its impact on the DNN classification accuracy. Here, different SNR levels are applied to the input signal x_in, and the resulting drop in DNN accuracy is observed and estimated.
* Parameter sensitivity analysis provides a deep understanding of the DNN's reliability, especially the causes of unreliable classification. Reliability can be evaluated by analyzing the sensitivity of the DNN parameters p={p_1, ..., p_l} for given signals. Reliability means the DNN classifier should perform with its intended accuracy, without failure (we call a classification unreliable when the accuracy drops below 50%). The proposed sensitivity analysis follows a random bit-flipping model <cit.>.
* Adversarial examples indicate the impact of deterministic signal changes introduced by an attacker on the DNN classifiers. Adversarial example attacks can be performed for security evaluation. Here, the attacker chooses and generates inputs x_in to confuse the DNN classifiers during the inference phase, resulting in misclassification <cit.>.

Our trustworthiness model excludes the transparency of the DNN from its attributes, as AMR does not use private data, and it also ignores DNN fairness, as the used dataset is balanced. A misclassification leads to selecting an incorrect scheme, so the received signals cannot be demodulated; this event and its consequence are already covered by the reliability attribute. Therefore, DNN safety can be seen as a subset of reliability in our application.

§ TRUSTWORTHINESS ANALYSIS OF DNN FOR AMR

In this section, we analyze the trustworthiness of the proposed CNN and ResNet by using our trustworthiness model and its attributes.

§.§ Data Robustness Analysis

The impact of environmental variation on the trained DNN model is considered a significant factor of trustworthiness. In other words, the trained DNN model should be aware of the diverse data distributions arising in different environmental scenarios <cit.>. In this context, the impact of a noisy environment on DL-based AMR is evaluated. This problem is critical as it affects the data robustness of the DNN model. SNR is a crucial metric in any communication system: it quantifies the environmental variation by indicating the signal quality with respect to the channel noise. To analyze the data robustness of DL-based AMR, the following steps are carried out: (1) the dataset is split into a training and testing set with consideration of the various SNR levels to maintain a balanced dataset, (2) the DNN models are trained on the resulting dataset, and (3) the accuracy of the proposed DNN models is evaluated at each of the various SNR levels. We apply these steps to the proposed CNN and ResNet models. Fig. <ref> shows that data samples with low SNR, ranging from -20 to -4 dB, are hard to classify and score a maximum accuracy of 50%. With such a noise level, the constellation of the received signals is random and does not form meaningful clusters that distinguish between the different modulation schemes. It is worth noting that the model accuracy increases as the SNR increases from -2 dB to 10 dB, attaining 99% as the SNR approaches 10 dB; the highest model accuracy is achieved from 10 dB onward. Moreover, the ResNet model exhibits better accuracy than the CNN model in the SNR interval from -2 dB to 10 dB, while outside this interval the accuracies of the two models track each other closely. As a result, ResNet-based AMR is more robust than CNN-based AMR with respect to noisy channel variation.
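The per-SNR evaluation in step (3) above amounts to a simple loop over the 26 noise levels. A minimal sketch, assuming the test set is held in numpy arrays with a per-sample SNR annotation:

import numpy as np
import torch

@torch.no_grad()
def accuracy_per_snr(model, x, y, snr, device="cpu"):
    """Step (3) of the robustness analysis: classification accuracy
    evaluated separately for each SNR level in the test set.
    x: (N, 2, 1024) I/Q samples, y: integer labels, snr: per-sample SNR in dB."""
    model.eval()
    results = {}
    for level in np.unique(snr):
        idx = np.where(snr == level)[0]
        batch = torch.as_tensor(x[idx], dtype=torch.float32, device=device)
        pred = model(batch).argmax(dim=1).cpu().numpy()
        results[float(level)] = float((pred == y[idx]).mean())
    return results  # e.g. {-20.0: 0.14, ..., 10.0: 0.99, ...}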
§.§ Sensitivity Analysis

AI sensitivity analysis determines the vulnerable bits that significantly decrease the classification accuracy when flipped. It relies on a bit-flipping model of the AI parameters and aims to provide a deep understanding of the AI's behavior, offering some hints towards explaining its decision-making. To conduct the sensitivity analysis of the CNN and ResNet classifiers, a single-bit flip is randomly introduced into the DNN's parameters <cit.>. Both the bit position and the targeted parameter are uniformly distributed. First, we randomly inject single-bit faults 1000 times at different bit positions and parameter locations in each layer of the CNN classifier. In the case of the ResNet, we randomly inject single-bit faults in the residual blocks, convolution layers, and dense layers. The faults in the convolution and dense layers are injected in the same way as for the CNN, while the faults in the residual blocks are randomly injected across different bit positions, parameter locations, and layers. The above fault-injection experiments are conducted during inference.
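The injection itself can be sketched in a few lines; below is a minimal PyTorch/struct version of the uniform single-bit-flip model (the layer-selection bookkeeping around it is omitted):

import struct
import random
import torch

@torch.no_grad()
def flip_random_bit(layer: torch.nn.Module):
    """Inject a single-bit fault: pick a parameter and a bit position
    (both uniformly), flip that bit in the 32-bit FP representation,
    and write the corrupted value back. In IEEE 754 single precision,
    bit 31 is the sign, bits 23-30 the exponent, bits 0-22 the mantissa."""
    params = torch.nn.utils.parameters_to_vector(layer.parameters())
    i = random.randrange(params.numel())   # uniformly chosen parameter
    bit = random.randrange(32)             # uniformly chosen bit position
    raw = struct.unpack("<I", struct.pack("<f", params[i].item()))[0]
    params[i] = struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))[0]
    torch.nn.utils.vector_to_parameters(params, layer.parameters())
    return i, bit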
The single-bit faults injected into the 32-bit FP parameters indicate that the exponent (bit positions 23 to 30) is more sensitive than the mantissa (bit positions 0 to 22). This confirms well-known results in the public literature. To better understand the exponent sensitivity, we divide the vulnerable exponent bits into two categories: the first includes the vulnerable bits resulting in misclassification (an unreliable classifier with accuracy lower than 50%), and the second covers the vulnerable bits resulting only in accuracy degradation. Fig. <ref>-a illustrates the impact of single-bit faults on the CNN classifier for the convolution layers C1, C2, C3 and the dense layers D1, D2, D3. Unreliable classification is observed when flipping bit 25 in C1 and bit 30 in C1, C2, C3, D2, and D3. However, flipping bit 30 in D1 results only in accuracy degradation. It should be noted that the faults injected in the remaining layers show insignificant accuracy degradation. Fig. <ref>-b shows the impact of bit flipping on all layers of the ResNet classifier. Bit 30, the most significant exponent bit, is more sensitive than the others, as it causes unreliable classification across all layers; the remaining vulnerable bits cause only an accuracy drop. As a result, flipping the vulnerable bit 30 causes misclassification in both classifiers, while the other vulnerable bits lead only to accuracy degradation.

§.§ Security Analysis

In <cit.>, several neural network models were shown to be vulnerable to adversarial examples, where the attacker generates inputs that lead to misclassification. These inputs are only slightly different from original inputs that are classified correctly, yet they are likely to cause such misclassification. Adversarial examples mainly occur due to some "linear behavior in high-dimensional spaces" <cit.>. This observation has inspired many efficient adversarial example attacks, such as the Fast Gradient Method <cit.> and Projected Gradient Descent <cit.>. To analyze the impact of adversarial examples, we launch eight attacks to generate adversarial examples against the investigated CNN and ResNet classifiers using the Adversarial Robustness Toolbox (v1.2.0) <cit.>. We set up each attack using the predefined sets of attack parameters and then perform the same attacks on both classifiers. Table <ref> shows the attack results of the Fast Gradient Method (FGM) <cit.>, Projected Gradient Descent (PGD) <cit.>, NewtonFool <cit.>, DeepFool <cit.>, HopSkipJump <cit.>, Zeroth Order Optimization (Zoo) <cit.>, and the Carlini & Wagner (C&W) methods <cit.> over L_2 and L_∞. The adversarial example resistance (AER) quantifies the model's correct classifications despite adversarial example generation. For instance, both classifiers exhibit comparably high AER against C&W over L_2, while the adversarial examples generated by PGD fundamentally devastate both. In general, the two models attain different AER values under the respective attacks; thus, deciding which model is more resistant to adversarial examples remains dependent on the chosen attacks.

§.§ Trustworthiness Evaluation

According to the above analysis, the trustworthiness level of the investigated classifiers can be evaluated as follows. First, when the SNR ranges from 0 to 30 dB, both classifiers are robust against environmental variations, with the ResNet showing greater robustness than the CNN classifier. Second, the sensitivity analysis of the classifiers' parameters indicates that flipping the vulnerable bit 30 results in unreliable classifications. Finally, both classifiers show different levels of resistance against the selected adversarial example attacks, without yielding a clear verdict for either DNN. Future work may identify suitable attacks to provide the required metrics.

§ CONCLUSION

In this paper, we introduced a methodology to build a practical trustworthiness model of Deep Neural Networks (DNNs) dedicated to one of the 6G applications. The need for such a model is significant, as 6G technology requires higher levels of reliability and security compared to prior generations. In particular, we constructed two DNN classifiers addressing the automatic modulation recognition (AMR) problem in a THzCom-based 6G environment. We then applied our trustworthiness model to analyze the classifiers w.r.t. attributes chosen to meet this environment: robustness, DNN parameter reliability, and DNN adversarial example resistance. Based on our experimental results, we conclude that our trustworthiness model is a suitable approach to analyzing the trustworthiness of DNNs used for AMR in THzCom-based 6G technology.
http://arxiv.org/abs/2307.04081v1
20230709014122
Score-based Conditional Generation with Fewer Labeled Data by Self-calibrating Classifier Guidance
[ "Paul Kuo-Ming Huang", "Si-An Chen", "Hsuan-Tien Lin" ]
cs.CV
[ "cs.CV", "cs.LG" ]
Score-based Generative Models (SGMs) are a popular family of deep generative models that achieves leading image generation quality. Earlier studies have extended SGMs to tackle class-conditional generation by coupling an unconditional SGM with the guidance of a trained classifier. Nevertheless, such classifier-guided SGMs do not always achieve accurate conditional generation, especially when trained with fewer labeled data. We argue that the issue is rooted in the unreliable gradients of the classifier and the inability to fully utilize unlabeled data during training. We then propose to improve classifier-guided SGMs by letting the classifier calibrate itself. Our key idea is to use principles from energy-based models to reinterpret the classifier as another view of the unconditional SGM. Then, the existing loss for the unconditional SGM can be adopted to calibrate the classifier using both labeled and unlabeled data. Empirical results validate that the proposed approach significantly improves the conditional generation quality across different percentages of labeled data. The improved performance makes the proposed approach consistently superior to other conditional SGMs when using fewer labeled data. The results confirm the potential of the proposed approach for generative modeling with limited labeled data.

§ INTRODUCTION

Score-based Generative Models (SGMs) capture the underlying data distribution by learning the gradient function of the log-likelihood on data, also known as the score function. SGMs, when coupled with a diffusion process that gradually converts noise to data, can often synthesize higher-quality images than other popular alternatives, such as generative adversarial networks <cit.>. SGMs have attracted research attention and demonstrated promising performance not only in image generation <cit.> but also in audio synthesis <cit.>, natural language generation <cit.>, and various other fields. Many successful SGMs focus on unconditional generation, which models the data distribution without considering other variables <cit.>. When aiming to generate data with some control, it is necessary to model the conditional distribution with respect to another variable, such as the class label for generating images from a particular class. Such conditional SGMs are the focus of this paper. They have achieved cutting-edge performance for class-conditional generation <cit.>, image inpainting <cit.>, and audio upsampling <cit.>. There are two major families of conditional SGMs. The family of Classifier-Free SGMs designs specific conditional network architectures with losses derived from the conditional score functions <cit.>. Such SGMs are known to generate high-fidelity images in fully-supervised settings where all data are labeled. Nevertheless, they are often criticized for generating data with less diversity, favoring some easier classes while being inaccurate for some harder classes. Furthermore, their performance drops significantly as the proportion of labeled data decreases, making them less preferable in semi-supervised settings.
Classifier-Guided SGMs (CGSGMs) form another family of conditional SGMs that address the aforementioned issues by decomposing the conditional score function into a mixture of the unconditional score function and the gradient of an auxiliary classifier <cit.>. For instance, the vanilla CGSGM <cit.> trains the unconditional SGM with the popular Denoising Score Matching (DSM) <cit.> technique, which learns the score function from noise-perturbed data, and trains the classifier with the usual cross-entropy loss on labeled data. The additional classifier improves the accuracy of conditional generation and allows better control of the trade-off between generation diversity and fidelity <cit.>. Furthermore, because the unconditional SGM can in principle be trained with either labeled or unlabeled data, CGSGMs potentially fit the semi-supervised setting better by requiring fewer labeled data. The quality of the auxiliary classifier is critical for CGSGMs. If the classifier is overly confident in its predictions, as often happens with the cross-entropy loss <cit.>, the resulting conditional scores may be unreliable. This, in turn, leads to low generation accuracy, even if the unconditional scores are reliable enough to ensure decent generation fidelity. Robust CGSGM <cit.> trains an adversarially robust classifier instead of a usual one to improve the quality of the auxiliary classifier; however, there is no theoretical guarantee that adversarial robustness is related to reliable conditional scores. Denoising Likelihood Score Matching <cit.> proposes to calibrate the classifier on the labeled data externally, leveraging the help of the unconditional SGM. As a result, the training of the classifier depends on having a trained unconditional SGM first. Our proposed approach is aligned with both techniques above in designing a better loss to train the classifier. Still, it differs from them significantly by letting the classifier self-calibrate. Unlike the robust CGSGM, the self-calibration technique carries a sound theoretical guarantee: when reinterpreted through the lens of energy-based models, the classifier becomes another view of the unconditional SGM. This novel view allows reusing DSM seamlessly to design a Self-Calibration (SC) loss (illustrated as ℒ_SC in Fig. <ref>) that can be applied to the classifier without dependence on the unconditional SGM. Furthermore, the SC loss can be effortlessly applied to both labeled and unlabeled data, resulting in immediate advantages in the semi-supervised setting. We demonstrate the effect of self-calibration with visualizations on a toy dataset. The results verify that our proposed CGSGM with the SC loss (CGSGM-SC) yields more accurate classifier gradients, thus enhancing the estimation of the conditional scores. We further conduct thorough experiments on the CIFAR-10 and CIFAR-100 datasets to validate the advantages of the proposed approach. The results confirm that CGSGM-SC is superior to the vanilla CGSGM and state-of-the-art techniques in the CGSGM family. Furthermore, in an extreme setting where only 5% of the data are labeled, CGSGM-SC, which can use unlabeled data to self-calibrate the classifier, is significantly better than both classifier-guided and classifier-free SGMs, which cannot easily take the unlabeled data into account. The results confirm the potential of CGSGM-SC in scenarios where labeled data are costly to obtain.

§ BACKGROUND

Consider a data distribution p(x) where x∈ℝ^d.
SGMs aim to generate samples from p(x) via the information contained in the score function ∇_x log p(x), which is learned from data. We first introduce how the score function can be efficiently learned from data in Section <ref>, which is related to the derivation of our proposed loss. Then, we discuss how a diffusion process can be combined with a learned score function to effectively sample from p(x) in Section <ref>. Finally, we review studies that extend SGMs to conditional SGMs in Section <ref>.

§.§ Learning the score function

Learning the score function aims to choose the best function from a family of functions {s_θ(x)}_θ, such as deep learning models parameterized by θ, to approximate the score function ∇_x log p(x) of interest. The learning is based on data {x_n}_n=1^N that are assumed to be sampled from p(x). It has been shown that the aim can be achieved by optimizing the in-sample version of the following score-matching loss over θ:

ℒ_SM = 𝔼_p(x)[ tr(∇_x s_θ(x)) + 1/2‖ s_θ(x)‖^2_2 ],

where tr(·) denotes the trace of a matrix and ∇_x s_θ(x) is the Jacobian of s_θ(x), the model analogue of the Hessian matrix ∇^2_x log p(x) of the log-likelihood. However, calculating the score-matching loss requires O(d) passes of computation for x ∈ℝ^d, which makes the optimization process computationally prohibitive on high-dimensional data. Several previous studies <cit.> attempted to resolve the computational issue by approximating or transforming score matching into equivalent objectives. One standard approach nowadays is called Denoising Score Matching (DSM) <cit.>, which learns the score function of a noise-perturbed data distribution q(x̃) instead. DSM typically assumes that q(x̃) comes from the original distribution p(x) injected with a pre-specified noise q(x̃|x). Then, it has been proved <cit.> that the score function can be learned by minimizing the in-sample version of

𝔼_q(x̃|x)p(x)[ 1/2‖ s_θ(x̃) - ∇_x̃log q(x̃|x)‖_2^2 ],

where ∇_x̃log q(x̃|x) is the score function of the noise distribution centered at x. DSM is generally more efficient than the original score matching and is scalable to high-dimensional data, as it replaces the heavy computation of the Hessian matrix with simple perturbations that can be efficiently computed from data.

§.§ Generating from the score function by diffusion

Assume that we hope to sample from some unknown target distribution p(x) = p_0(x), and that the distribution can be transited to a known prior distribution p_T(x) through a Markov chain described by a stochastic differential equation (SDE) <cit.>:

dx(t) = f(x(t),t)dt + g(t)dw,

where the Markov chain is computed for 0 ≤ t < T using the drift function f(x(t),t), which describes the overall movement, and the dispersion function g(t), which describes how the noise w from a standard Wiener process enters the system. To sample from p(x) = p_0(x), the VE-SDE framework <cit.> proposes to reverse the SDE from p_T(x) to p_0(x), which turns out to operate with another SDE <cit.>:

dx = [f(x(t),t) - g(t)^2 s(x(t), t)]dt + g(t)dw̅,

where w̅ is a standard Wiener process when the time-step flows from T back to 0 and s(x(t), t) ≡ ∇_x log p_t(x(t)) denotes the time-dependent score function. If we can learn the score function s(x(t), t), the diffusion process in (<ref>) can then be used to take any instance sampled from the known p_T(x) to a sample from the unknown p(x) = p_0(x). Learning the time-dependent score function s(x(t), t) can be done by minimizing a time-generalized (in-sample) version of the DSM loss, because the diffusion process can be viewed as one particular way of injecting noise. The extended DSM loss is defined as

ℒ_DSM(θ) = 𝔼_t[ λ(t) 𝔼_x^(t),x^(0)[ 1/2‖ s_θ(x^(t),t) - s_t(x^(t)|x^(0))‖_2^2 ] ],

where t is sampled uniformly from [0, T], x^(t) ∼ p_t(x|x^(0)), x^(0) ∼ p_0(x), s_t(x|x^(0)) denotes the score function of p_t(x|x^(0)), and λ(t) is a weighting function that balances the loss at different time steps. In this paper, we take the same drift, dispersion, and weighting functions f(x,t), g(t), and λ(t) as the original VE-SDE framework <cit.>.
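In code, a single Monte Carlo estimate of this loss is short. A minimal PyTorch sketch, assuming a Gaussian perturbation kernel x^(t) = x^(0) + σ(t)z with the VE-SDE weighting λ(t) = σ(t)^2 (both assumptions of this sketch, supplied via sigma_fn):

import torch

def dsm_loss(score_model, x0, sigma_fn, T=1.0):
    """One-sample estimate of the time-generalized DSM loss.
    For x_t = x_0 + sigma(t)*z, the conditional score is
    s_t(x_t|x_0) = -(x_t - x_0)/sigma(t)^2 = -z/sigma(t);
    with lambda(t) = sigma(t)^2, the weighted residual is sigma*s_theta + z."""
    t = torch.rand(x0.shape[0], device=x0.device) * T   # t ~ U(0, T)
    sigma = sigma_fn(t).view(-1, *([1] * (x0.dim() - 1)))
    z = torch.randn_like(x0)
    xt = x0 + sigma * z
    resid = sigma * score_model(xt, t) + z
    return 0.5 * (resid ** 2).sum(dim=tuple(range(1, x0.dim()))).mean()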
§.§ Related studies of conditional score-based generative models

In conditional SGMs, we are given some labeled data {(x_m, y_m)}_m=1^M in addition to the unlabeled data {x_n}_n=M+1^M+N, where y ∈{1, 2, …, K} denotes the class label. The case of N = 0 is called the fully-supervised setting, while we focus on the more challenging semi-supervised setting with N > 0 (and possibly N ≫ M) in this paper. Conditional score-based generative models aim to learn the conditional score function ∇_x log p(x | y) from the data and then generate samples from p(x | y). Previous studies <cit.> showed how to decompose the conditional score function using Bayes' theorem:

∇_x log p(x|y) = ∇_x[log p(x) + log p(y|x) - log p(y)] = ∇_x log p(x) + ∇_x log p(y|x).

The term log p(y) can be dropped because it is not a function of x and thus has zero gradient. The decomposition shows that conditional generation can be achieved by an unconditional SGM that learns the score function ∇_x log p(x), plus an extra conditional gradient term ∇_x log p(y|x). The vanilla form of Classifier Guidance (CG) for SGMs estimates ∇_x log p(y|x) with an auxiliary classifier trained with the cross-entropy loss on the labeled data, and learns the unconditional score function with the DSM loss ℒ_DSM, which can in principle be applied to both labeled and unlabeled data. Nevertheless, the classifier within the vanilla CG approach is known to be potentially over-confident <cit.> in its predictions, which in turn results in inaccurate gradients. This issue can mislead the conditional generation process and decrease the class-conditional generation quality. <cit.> propose to address the issue by tuning the term ∇_x log p(y|x) with a scaling parameter λ_CG ≠ 1:

∇_x log p(x|y) = ∇_x log p(x) + λ_CG ∇_x log p_ϕ(y|x),

where p_ϕ(y|x) is the posterior probability distribution output by a classifier parameterized by ϕ. Increasing λ_CG sharpens the distribution p_ϕ(y | x), guiding the generation process to produce less diverse but higher-fidelity samples.
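Concretely, the guided conditional score above can be assembled from the two trained components. A minimal sketch, where classifier(x, t) is assumed to return per-class logits:

import torch

def guided_score(x, t, y, score_model, classifier, lambda_cg=1.0):
    """Classifier-guided conditional score:
    s(x|y) = s_theta(x, t) + lambda_CG * grad_x log p_phi(y|x)."""
    with torch.enable_grad():
        x = x.detach().requires_grad_(True)
        log_prob = torch.log_softmax(classifier(x, t), dim=-1)
        selected = log_prob[torch.arange(x.shape[0]), y].sum()
        grad = torch.autograd.grad(selected, x)[0]   # grad_x log p_phi(y|x)
    return score_model(x, t) + lambda_cg * grad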
While the tuning heuristic is effective in improving the vanilla CG approach, it is not backed by sound theoretical explanations. <cit.> propose to resolve the issue differently by enhancing the adversarial robustness of the classifier. It is empirically observed that adversarially robust classifiers produce more interpretable and perceptually better-aligned <cit.> gradients. However, it remains theoretically unclear whether robust classifiers truly capture the true data distribution more accurately. <cit.> propose the Denoising Likelihood Score Matching (CG-DLSM) approach, which calibrates the classifier to resolve the issue. The calibration is done by designing a loss, computed from the outputs of a trained unconditional SGM, to regularize the classifier during training. CG-DLSM achieves state-of-the-art performance within the CGSGM family in the fully-supervised setting. However, because of this design, the unconditional SGM and the classifier need to be trained in sequential steps, losing the computational advantage of the original vanilla CGSGM of being able to train the two components in parallel. Furthermore, it is not clear whether the unlabeled data in the semi-supervised setting could help improve the classifier under this design. The approaches above are all CGSGMs. Another popular approach for conditional SGMs is Classifier-Free Guidance (CFG) <cit.>. This approach parameterizes its deep learning model with a more sophisticated architecture such that the class label y can be included as an input to calculate the score. A null token y_nil is used to indicate unconditional score calculation, which is linearly combined with the conditional score calculation for a specific y to form the final estimate of s(x | y). CFG is a state-of-the-art conditional SGM in the fully-supervised setting. Nevertheless, as we shall show in our experiments, its performance drops significantly in the semi-supervised setting, as the conditional parts of the architecture may not receive enough labeled data during training. The disadvantages of CFG and other CGSGMs in the semi-supervised setting motivate us to design another CGSGM that (1) comes with theoretical justifications; (2) includes a classifier that can be trained in parallel to the unconditional SGM; and (3) can leverage both unlabeled and labeled data to achieve better performance in the semi-supervised setting.

§ SELF-CALIBRATION FOR CLASSIFIER GUIDANCE

§.§ Motivation

As mentioned in Section <ref>, inaccurate classifier gradients can misguide the conditional generation process. Therefore, we need an efficient way to calibrate the classifiers. Motivated by JEM <cit.>, where classifiers are calibrated by being reinterpreted as energy-based models (EBMs), we propose to connect EBMs and SGMs and calibrate the classifiers by interpreting them as EBMs in a similar manner. To be more specific, we formulate a self-calibration loss that utilizes denoising score matching to calibrate the score function estimated by the classifier.

§.§ Formulation of self-calibration loss

In this work, we adopt the framework of score-based generative modeling using stochastic differential equations (SDEs) <cit.>. Given a target distribution p_0(x) and a known prior distribution p_T(x) (typically a Gaussian distribution), where the transition between them is a diffusion process with timestep 0 ≤ t < T, we can describe the diffusion process and its reverse using SDEs. To incorporate the results of Section <ref> into this framework, we introduce the time-dependent versions of ∇_x log p(x) and ∇_x log p(y|x), namely ∇_x log p_t(x(t)) and ∇_x log p_t(y|x(t)), where x(t) ∼ p_t. Denoising score matching (DSM) <cit.> is often utilized to train the score-based model under this framework due to its close relationship with diffusion-process modeling. A time-generalized cross-entropy loss is adopted to train the classifier. Inspired by JEM <cit.>, we propose to improve CGSGM through self-calibration during the training stage. We reinterpret the classifier as a time-dependent EBM and obtain the score function by calculating the gradient.
Since both the energy function -log p(x) and the score function ∇_x log p(x) are derived from the log-likelihood, we hypothesize that integrating EBM-related objectives into classifier training can benefit CGSGM. To incorporate the energy function into our framework, we use a time-dependent version of the transformation described in JEM <cit.>:

E_ϕ,t(x) = -log∑_y exp(f_ϕ,t(x)[y]) = -LogSumExp_y(f_ϕ,t(x)[y]),

where f_ϕ,t(x)[y] denotes the output logits of the classifier. The score function can then be computed as

s_ϕ(x,t) = ∇_x LogSumExp_y(f_ϕ,t(x)[y]).

To calibrate this score estimated by the classifier, we adopt DSM to calculate the Self-Calibration (SC) loss:

ℒ_SC(ϕ) = 𝔼_t[ λ(t) 𝔼_x_t,x_0[ 1/2‖ s_ϕ(x_t,t) - s_t(x_t|x_0)‖_2^2 ] ],

where x_t ∼ p_t, x_0 ∼ p_0, and s_t(x_t|x_0) denotes the score function of the noise distribution centered at x_0. Fig. <ref> summarizes the calculation of the proposed SC loss. The self-calibration loss is then summed with the cross-entropy loss to train the classifier. The total loss can be written as

ℒ_CLS(ϕ) = ℒ_CE(ϕ) + λ_SC ℒ_SC(ϕ),

where ℒ_CE is the cross-entropy loss and λ_SC is a hyperparameter. By applying self-calibration, the classifier should estimate the score function of the underlying data distribution more accurately, which implies that the underlying data distribution itself is also estimated more accurately. As a result, the gradients of the classifier should be better aligned with the ground truth, as they are calculated from the estimated distribution. After self-calibration, the classifier can be used just like the original classifier to guide an unconditional SGM to achieve conditional generation. Note that since our method calibrates the classifier at training time while classifier-gradient scaling is applied at sampling time, the two methods can easily be combined to achieve better performance.
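In code, the SC loss amounts to differentiating the LogSumExp energy of the classifier logits and matching the result to the conditional score with DSM. A minimal PyTorch sketch, reusing the Gaussian-kernel and VE-SDE weighting assumptions from the earlier dsm_loss sketch:

import torch

def sc_loss(classifier, x0, sigma_fn, T=1.0):
    """Self-calibration loss: interpret the classifier as a time-dependent
    EBM, take s_phi(x,t) = grad_x logsumexp_y f_phi,t(x)[y], and match it
    to s_t(x_t|x_0) = -z/sigma via DSM. No labels are needed, so this loss
    applies to labeled and unlabeled batches alike."""
    t = torch.rand(x0.shape[0], device=x0.device) * T
    sigma = sigma_fn(t).view(-1, *([1] * (x0.dim() - 1)))
    z = torch.randn_like(x0)
    xt = (x0 + sigma * z).detach().requires_grad_(True)
    energy = torch.logsumexp(classifier(xt, t), dim=-1).sum()  # = -E_phi,t
    s_phi = torch.autograd.grad(energy, xt, create_graph=True)[0]
    return 0.5 * ((sigma * s_phi + z) ** 2).sum(dim=tuple(range(1, x0.dim()))).mean()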
§.§ 2D toy dataset

We use a 2D toy dataset containing two classes to demonstrate the effects of the self-calibration loss. The data distribution is shown in Fig. <ref>, where the two classes are shown in different colors. After training classifiers on the toy dataset with (1) only the cross-entropy loss and (2) both the cross-entropy and self-calibration losses, we plot the gradients ∇_x log p(y|x) estimated by the classifiers and compare them with the ground truth. We also add the ground-truth unconditional score to the estimated gradients, just as in CGSGM, and compare the results with the true conditional score. Additional quantitative measurements on the toy dataset are included in Appendix <ref>. Fig. <ref> shows the ground-truth classifier gradient (Fig. <ref>) and the gradients estimated by classifiers trained on the toy dataset (1) without self-calibration (Fig. <ref>) and (2) with self-calibration (Fig. <ref>). Uncalibrated classifiers produce gradients with rapid changes in magnitude across the 2D space, with frequent fluctuations and mismatches with the ground truth. Such fluctuations can impede the convergence of the reverse diffusion process to a stable data point, leading SGMs to generate noisier samples. Moreover, the divergence from the ground-truth gradient can misguide the SGM into generating samples from incorrect classes. Uncalibrated classifiers also tend to produce large gradients near the distribution borders and tiny gradients elsewhere. This implies that when the sampling process heads toward the incorrect class, such classifiers are not able to "guide" it back toward the desired class. In contrast, introducing self-calibration results in estimated gradients that are more stable, continuous across the 2D space, and better aligned with the ground truth. This stability results in a smoother generation process and contributes to the production of higher-quality samples.

§.§ Using the self-calibration loss in semi-supervised learning

In this work, we also explore the benefit of the self-calibration loss in the semi-supervised setting, where only a small proportion of the data is labeled. In the original classifier guidance, the classifier is trained solely on labeled data. The lack of labels in the semi-supervised setting makes learning an unbiased classifier more challenging. With self-calibration, we can better utilize the large amount of unlabeled data by computing the self-calibration loss on all data. To incorporate the loss and utilize the unlabeled samples during training, we change the calculation of ℒ_CLS from Eq. <ref>. As illustrated in Fig. <ref>, the entire batch of data is used to calculate ℒ_SC, but only the labeled data is used to calculate ℒ_CE. During training, we observed that when unlabeled data form the majority, the cross-entropy loss does not converge to a low and steady level if batches are sampled randomly from all training data. We suspect this is due to the low percentage of labeled data in each batch. Therefore, we change the batch-sampling scheme and always ensure that half of each batch is labeled while the other half is not. Appendix <ref> summarizes the semi-supervised training process of the classifier. Note that even though the classifier learns a time-generalized classification task, we can still make it act as an ordinary classifier on unperturbed data by setting the input timestep to t=0. Therefore, we can easily incorporate many other common semi-supervised classification methods such as pseudo-labeling <cit.>, self-training, and noisy student <cit.>.
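A minimal sketch of one classifier update with the balanced batches just described follows; sc_loss is the sketch from earlier, the two iterators are assumed to cycle indefinitely, and evaluating the cross-entropy only at t=0 is a simplification of the paper's time-generalized loss:

import torch

def classifier_step(classifier, labeled_iter, unlabeled_iter, sigma_fn, lambda_sc=1.0):
    """One training step: half of the batch is labeled (cross-entropy + SC loss),
    the other half is unlabeled (SC loss only). Returns L_CLS = L_CE + lambda_SC * L_SC."""
    x_lab, y_lab = next(labeled_iter)
    x_unl, _ = next(unlabeled_iter)
    t0 = torch.zeros(x_lab.shape[0], device=x_lab.device)  # t = 0: ordinary classification
    ce = torch.nn.functional.cross_entropy(classifier(x_lab, t0), y_lab)
    sc = sc_loss(classifier, torch.cat([x_lab, x_unl]), sigma_fn)
    return ce + lambda_sc * sc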
§ EXPERIMENTS

We have tested our method on a toy dataset (Section <ref>) to provide a high-level view of how self-calibration can improve classifiers in terms of producing accurate gradients. In this section, we present experimental results on the CIFAR-10 and CIFAR-100 datasets to demonstrate the improvement of CGSGM after incorporating our method across different percentages of labeled data (Section <ref>). Randomly selected images generated by CGSGM before and after self-calibration on the CIFAR-10 dataset are shown in Appendix <ref>. For conditional metrics, we report the average scores across all classes. Results for individual classes on the CIFAR-10 dataset are included in Appendix <ref>.

§.§ Experimental setup

In the following sections, we test our methods on the CIFAR-10 and CIFAR-100 datasets for image generation. We demonstrate that our methods improve generation quality both conditionally and unconditionally across different percentages of labeled data. Implementation details: We follow NCSN++ <cit.> to implement the unconditional score-estimation model. We also adapt the encoder part of NCSN++ as the classifier used in CGSGM <cit.>. Sampling method: We use Predictor-Corrector (PC) samplers <cit.> with 1000 sampling steps. Evaluation metrics: Besides the commonly used Fréchet Inception Distance (FID) <cit.> and Inception Score (IS) <cit.>, we evaluate the class-conditional performance of our methods in several additional ways. These include intra-FID, which measures the average FID over each class, and generation accuracy (on the CIFAR-10 dataset), which uses a pre-trained ViT <cit.> classifier to check whether samples are generated in the correct class. The test accuracy of the pre-trained ViT is 98.52% on the CIFAR-10 dataset. Baseline methods: The baseline methods used in our work include:

* Cond: conditional SGMs using conditional normalization techniques <cit.> rather than classifier guidance.
* CFG-labeled: classifier-free guidance <cit.> using only labeled data.
* CFG-all: classifier-free guidance <cit.> using only labeled data to train the conditional part of the model and all data to train the unconditional part of the model.
* CG: vanilla classifier guidance.
* CG-DLSM: classifier guidance with the DLSM loss <cit.> applied.

§.§ Experiment Result

Table <ref> and Fig. <ref> present the performance of all methods across varying percentages of labeled data, including the fully-supervised setting where 100% of the data is labeled. CG-SC-labeled denotes self-calibration applied only to labeled data, while CG-SC-all denotes self-calibration applied to all data. Conditional SGMs vs. unconditional SGMs: The first observation from our results is that conditional SGMs, including Cond, CFG-labeled, and CFG-all, consistently excel in generation accuracy. However, when the quantity of labeled data decreases below 40%, these models suffer a significant performance drop. While generating high-quality images, these conditional SGMs tend to lose diversity when working with fewer labeled data, mainly because the lack of labeled data in the training phase leads them to generate samples closely mirroring the distribution of the labeled data instead of all data. In contrast, unconditional SGMs, such as CG, demonstrate superior performance when the majority of the data is unlabeled, as they are capable of leveraging both labeled and unlabeled data during training. Classifier-guided SGMs (CGSGMs) vs. conditional SGMs: Our experimental results align with our expectation that CGSGMs outperform conditional SGMs. The CG method exhibits consistent FID and Inception scores across varying percentages of labeled data when evaluated with unconditional metrics. Notably, when unlabeled data is in the majority, we observe a 16% drop in generation accuracy on the CIFAR-10 dataset. Despite this, the intra-FID of CG significantly outperforms that of the conditional SGMs on both datasets. As for the proposed method, incorporating self-calibration with labeled data does not majorly affect unconditional metrics but substantially improves conditional metrics: it reduces intra-FID by 8.25 and 17.86 on the CIFAR-10 and CIFAR-100 datasets, respectively, and increases generation accuracy on CIFAR-10 by up to 23%. The results demonstrate that with self-calibration, the classifier can better represent the class-conditional distribution even when labeled data is limited.
Leveraging unlabeled data for semi-supervised conditional generation: Intuitively, incorporating unlabeled data into the computation of the self-calibration loss should enhance the quality of conditional generation, because the classifier can exploit additional information from unlabeled data during the training phase. As the proportion of labeled data decreases, this benefit of leveraging unlabeled data should become more significant. As our experimental results show, conditional metrics do not differ greatly when the proportion of labeled data ranges between 40% and 100%. However, when the percentage of labeled data falls below 40%, the use of unlabeled data significantly improves intra-FID and generation accuracy. Specifically, with just 5% labeled data, intra-FID improves by 12.22 and generation accuracy increases by 22.8% compared to the original CG. These results affirm our expectation that the benefit of utilizing unlabeled data grows as the quantity of labeled data decreases.

§ CONCLUSION

In this work, we verify that the existing CGSGM approach results in high generation fidelity but low accuracy. We hypothesize that the root cause lies in the unreliable scores produced by the classifiers and design a Self-Calibration loss to steer the classifier directly towards better scores without resorting to an external SGM. The Self-Calibration loss is derived from rigorous principles by viewing the classifier as an energy-based model. We demonstrate three immediate benefits of the proposed Self-Calibrating CGSGM approach. First, using the toy dataset, we show that the scores computed by the approach are indeed closer to the ground-truth scores. Secondly, across all percentages of labeled data, our proposed approach outperforms the existing CGSGM in the semi-supervised setting. Lastly, our empirical study shows that our proposed approach consistently reaches the best intra-FID among conditional SGMs by seamlessly leveraging the power of unlabeled data. These benefits establish the rich potential of the proposed approach.

§ LIMITATIONS

The major limitation of our work lies in the selection of datasets. We could only afford to conduct experiments on smaller, lower-resolution datasets (CIFAR-10 and CIFAR-100) because of limited computational resources. In particular, even with these smaller datasets, training, sampling, and testing a single approach in a single setting once requires up to 210 hours (more than a week) on 4 NVIDIA Tesla V100 GPUs. We understand that conducting more experiments on larger, higher-resolution datasets could further strengthen our claims, but those experiments are not affordable to us. While we tested on only two datasets, the observed results are consistent: our proposed approach achieves the best class-conditional performance in the semi-supervised setting with much fewer labeled data.

§ DETAILED CLASS-CONDITIONAL GENERATION MEASUREMENTS OF CIFAR-10

Section <ref> contains the class-conditional measurements averaged over all classes of CIFAR-10. This section includes a more detailed result containing the measurement for each class.

§ TRAINING ALGORITHM FOR SEMI-SUPERVISED SELF-CALIBRATING CLASSIFIER

Semi-supervised classifier training with self-calibration loss.

§ QUANTITATIVE MEASUREMENTS OF TOY DATASET

Table <ref> shows the quantitative measurements of the methods on the toy dataset. First, we compare the gradients ∇_x log p(y|x) estimated by the classifiers with the ground truth by calculating the mean squared error (first column) and cosine similarity (second column). We observe that after self-calibration, the mean squared error of the estimated gradients is lowered by 18%, and tuning the scaling factor further improves this to 36%. The improvement after scaling implies that the direction of the gradients is more aligned with the ground truth, and scaling further reduces the mismatch between the magnitudes of the classifier gradients and the ground truth. In terms of cosine similarity, self-calibration grants the classifiers an improvement of 42%. The numerical results agree with our earlier observation that, after self-calibration, classifiers align better with the ground truth in both direction and magnitude.
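Both columns of the table can be reproduced with a small helper; a sketch, assuming the estimated and ground-truth gradient fields are stored as (N, 2) numpy arrays evaluated on the same grid of 2D points:

import numpy as np

def gradient_metrics(g_est, g_true):
    """Compare estimated classifier gradients grad_x log p(y|x) with the
    ground truth: mean squared error and mean cosine similarity."""
    mse = float(np.mean(np.sum((g_est - g_true) ** 2, axis=1)))
    num = np.sum(g_est * g_true, axis=1)
    den = np.linalg.norm(g_est, axis=1) * np.linalg.norm(g_true, axis=1) + 1e-12
    return mse, float(np.mean(num / den))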
Then, we add the unconditional score of the training-data distribution to the classifier gradients to calculate the conditional scores and compare the results with the ground truth. As shown, the classifiers estimate conditional scores with a cosine similarity of 0.9175 even without self-calibration (using the ground-truth unconditional score in this case). This result shows that, given a well-trained unconditional SGM, CGSGM produces conditional scores pointing in the correct direction in most cases, which explains why the original CGSGM is able to generate samples of decent quality. After applying the self-calibration loss and the scaling method, we can further improve the cosine similarity to 0.9689, which we believe enhances the quality of class-conditional generation.

§ TUNING THE SCALING FACTOR FOR CLASSIFIER GUIDANCE

This section includes the experimental results of tuning the scaling factor λ_CG for classifier guidance with and without self-calibration under the fully-supervised setting. Fig. <ref> shows the result of tuning λ_CG. When tuning λ_CG with and without self-calibration, we see that self-calibration does not affect unconditional performance by much. However, when evaluated with conditional metrics, the improvement after incorporating self-calibration becomes more significant: the improvement in intra-FID is up to 7.9, while generation accuracy improves by up to 13%.

§ IMAGES GENERATED BY CLASSIFIER GUIDANCE WITH AND WITHOUT SELF-CALIBRATION

This section includes images generated by classifier guidance with (first 6 images) and without (last 6 images) self-calibration after training on different percentages of labeled data. Each row corresponds to a class in the CIFAR-10 dataset.
http://arxiv.org/abs/2307.03878v2
20230708014803
New Constraints on ALP Electron and Photon Couplings from ArgoNeuT and the MiniBooNE Beam Dump
[ "Francesco Capozzi", "Bhaskar Dutta", "Gajendra Gurung", "Wooyoung Jang", "Ian M. Shoemaker", "Adrian Thompson", "Jaehoon Yu" ]
hep-ph
[ "hep-ph", "hep-ex" ]
Beam dumps and fixed-target experiments have been very sensitive probes of axion-like particles (ALPs) and other physics beyond the Standard Model (BSM) by considering the production of new states from the primary interaction in the beam dump. In a proton beam dump, there are many secondary interactions taking place in electromagnetic showers, which may be additional production channels for pseudoscalar bosons or ALPs. The target-less configuration of the MiniBooNE experiment, which collected data from 1.86 × 10^20 protons impinging directly on the steel beam dump, is an excellent test of sensitivity to these production channels of ALPs in the MeV mass region. Using the null observation of the MiniBooNE dump-mode data, we set new constraints on ALPs coupling to electrons and photons, produced through a multitude of channels and detected via both scattering and decays in the MiniBooNE detector volume. We find that the null result rules out parameter space that was previously unconstrained by laboratory probes in the 10-100 MeV mass regime for both electron and photon couplings. Lastly, we make the case for performing a dedicated analysis with 1.25×10^20 POT of data collected by the ArgoNeuT experiment, which we show to have complementary sensitivity, and we set the stage for future searches.

§ INTRODUCTION

Particle beam dumps have proven to be ultra-sensitive probes of new physics sectors beyond the Standard Model (BSM), where the myriad electromagnetic and hadronic cascades produce showers of electrons, positrons, gamma rays, and mesons, each a potential channel for BSM particle production. Studying the beam-target environment and the particle showers within is thus a crucial first step to understanding what kind of physics is possible, and at what energy scales. Many searches have already been performed by electron beam dumps (E137, NA64, E141, Orsay, E774, etc. <cit.>) and proton beam dumps at the GeV energy scale (e.g., CHARM, NuCal, NA62, SeaQuest/SpinQuest <cit.>) and sub-GeV sources (e.g., CCM <cit.>, IsoDAR <cit.>, and COHERENT <cit.>), and others <cit.>. The existence of pseudoscalar bosons with small couplings to the SM is predicted in models of broken symmetries in connection with explaining many puzzles in nature.
Axions and axion-like particles (ALPs) are central features in the landscape of solutions, in particular to the strong CP problem <cit.> and to the dark matter problem <cit.>; they otherwise appear ubiquitously in string theory <cit.> and in the ultraviolet spectra of many other puzzle-solving models with spontaneously broken symmetries. In many of these scenarios, it is possible that the ALP has couplings to SM leptons and the electromagnetic field, making the particle showers inside the beam target good laboratory probes of ALPs, reaching up to GeV mass scales. ALPs at the MeV to GeV mass scales are of particular interest to beam-dump and fixed-target experiments and have been studied in the context of heavy axions <cit.>, whose parameter space extends beyond that of traditional QCD axion models. In 2018, the MiniBooNE collaboration performed an analysis of their target-less mode run <cit.>, in which they collected data associated with 1.86 × 10^20 protons on target (POT) bypassing the main beryllium target and impinging on the steel beam dump. Expected neutrino rates for this mode were very low, and no excess of events was observed, in contrast to the results from the target-mode runs <cit.>. In this work, we show that the null result from this data set is sensitive enough to ALPs produced in electromagnetic showers in the dump to set new limits on photon and electron couplings. Running in a target-less mode has the effect of suppressing the fluxes of neutrinos coming from charged-meson decays. Searches for BSM particles that have production channels orthogonal to charged-pion decay gain a big advantage here: in the case of a thin target, the charged mesons decay in flight after being produced, allowing them to be focused by the magnetic horn system. In the thick beam-dump case, however, the charged pions are stopped in the material and decay isotropically, suppressing the subsequent neutrino background that would otherwise lie in the signal region of the BSM search. This realization is especially important for future beam-dump experiments at higher energies, where the higher intensity of electromagnetic cascades provides both the coupling and mass reach necessary to significantly extend the limits tested so far by laboratory searches in the MeV to GeV mass range. We will show that data collected by the ArgoNeuT detector <cit.> already has this capability, and, depending on the specific sensitivity of a dedicated analysis, null observations in this data could already rule out parameter space unconstrained by laboratory probes to date. In <ref> we outline the production and detection channels we consider for electromagnetically coupled ALPs. In <ref> we describe the statistical analysis performed for the MiniBooNE dump-mode data and the ArgoNeuT data given an ALP signal hypothesis, with the resulting limits placed on the parameter space of photon and electron couplings in <ref>. Finally, we conclude in <ref>.

§ BSM PRODUCTION AND DETECTION IN A BEAM DUMP

We consider primarily ALPs produced in electromagnetic cascades inside the beam dump or beam-target environment, i.e., those that get produced through couplings to electrons and to the electromagnetic field:

ℒ_ALP ⊃ i g_ae a ψ̅_e γ^5 ψ_e - (1/4) g_aγ a F_μν F̃^μν,

where, for simplicity, we will assume only one tree-level coupling to be active or dominant at a time. This Lagrangian opens up a slew of production and detection channels available to beam-target and beam-dump experiments. These have recently been investigated in refs. <cit.>, and we summarize them in Table <ref>.
For ALPs coupled to electrons, the dominant final state will be e^+ e^- pairs appearing in the detector as single Cherenkov rings, either because the pair is highly collinear, with a separation angle smaller than the typical angular resolution of the detector, or because one of the electrons/positrons is too soft. This final state appears mainly through decays for m_a > 2 m_e, and otherwise through the Bethe-Heitler lepton pair-production process (a Z → e^+ e^- Z) for sub-MeV ALPs, considered before to set limits on light (pseudo)scalars appearing in a proton beam target <cit.>. The cross section for this process was computed in refs. <cit.> using the formalism and atomic form factors presented in ref. <cit.>, and it is larger than that of inverse Compton scattering (a e^- → γ e^-) by up to an order of magnitude for ALP energies in the 100 MeV - 1 GeV range, which is the energy region of interest for this study. The resonant cross section (e^+ e^- → a) in the electron rest frame is

σ = [2π m_e g_ae^2 s / (m_a^2 √(s(s-4m_e^2)))] δ(E_+ - (m_a^2/2m_e - m_e)) ≃ (2π m_e g_ae^2/m_a^2) δ(E_+ - (m_a^2/2m_e - m_e)).

To simulate the production fluxes, we first generate the SM particle fluxes inside the MiniBooNE dump with GEANT4 using the physics list, then pass a high-statistics sample of each particle flux (e^±, γ, π^±) into the event generator.[https://github.com/athompson-git/alplib] The positron and electron fluxes are shown in Fig. <ref>, while the photon flux is shown in Fig. <ref>. We show a large phase space of the e^± and γ fluxes to illustrate the many low-energy features arising from processes like nuclear de-excitation and beta decay. In principle, however, only the high-energy tail (>75 MeV) in the forward-going region (θ ≲ 10^-2 rad) is responsible for the bulk of the BSM particle production that is captured within the signal region and points within the solid angle of the MiniBooNE detector. This is illustrated in Fig. <ref>, where we show the energy spectra before and after an angular cut of 10 mrad. Further details of the event selection and signal window are discussed in the following section. For ALPs produced from electrons or positrons in resonant production (e^+ e^- → a), associated production (e^+ e^- → a γ), or bremsstrahlung (e^± Z → e^± Z a), the energy loss of the electrons and positrons during transport through the material must also be folded into the event-rate calculation. This modifies the number flux leaving the beam dump as

dN_a/dE_a = (N_A X_0 / A) (ħ c)^2 ∫ [d^2Φ_e^+/(dE_+ dΩ_e)] I(t, E_+, E') Θ_det [d^2σ(E')/(dE' dΩ')] dΩ_e dΩ' dE_+ dt dE',

where N_A is Avogadro's number, X_0 is the radiation length of the electrons/positrons in the dump material, and A is the atomic weight. Here

I(t, E_i, E_f) = θ(E_i - E_f) / [E_i Γ(4t/3)] · [ln(E_i/E_f)]^(4t/3 - 1)

is the energy-loss smearing function for the electron/positron radiation depth t, integrated up to the target radiation thickness T <cit.>. We integrate over the solid angle of the positron with respect to the beamline, Ω_e, and the outgoing ALP solid angle with respect to the positron direction, Ω', taking care to integrate only over those ALPs pointed in the direction of the detector solid angle through the Heaviside function Θ_det <cit.>.
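The smearing function I(t, E_i, E_f) above is simple enough to evaluate directly; a minimal numpy/scipy sketch (energies in consistent units, t in radiation lengths):

import numpy as np
from scipy.special import gamma

def energy_loss_pdf(t, E_i, E_f):
    """Energy-loss smearing I(t, E_i, E_f) of the equation above: the
    probability density for an electron/positron of initial energy E_i
    to have energy E_f after traversing t radiation lengths."""
    if E_f >= E_i:
        return 0.0                          # theta(E_i - E_f)
    k = 4.0 * t / 3.0
    return (np.log(E_i / E_f)) ** (k - 1.0) / (E_i * gamma(k))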
Here we study the detector response with true simulated information to analyze the efficiency of the electron-like event selection from reconstructed events inside the detector. For the analysis of the Monte Carlo generated data, after the preliminary cuts have been applied, the reconstructed events are first fit under the one-track electron and muon hypotheses. Each fit returns the likelihood of the corresponding hypothesis: ℒ_e and ℒ_μ. Those events satisfying log(ℒ_e/ℒ_μ) > -0.05 continue to the next round of reconstruction. In the second round, reconstructed events are fit under the general two-photon hypothesis. Similarly, the events should satisfy log(ℒ_π^0/ℒ_e) < 0. The efficiencies of these two cuts using simulated data, as functions of electron visible energy and electron scattering angle, are shown in Fig. <ref>. The selection efficiencies as a function of the visible energy, E^vis_e, are fitted with an arctangent function (p_0 arctan(p_1 x) + p_2). The selection efficiencies as a function of the cosine of the angle with respect to the beam axis, cosθ_e, are fitted with a straight line (p_0 + p_1 x), except for the forward region of log(ℒ_e/ℒ_μ), which has a second-order polynomial fit (p_0 + p_1 x + p_2 x^2). Uncertainties from the goodness-of-fit on the efficiency curves as functions of E_e^vis and cosθ_e are constrained to be less than 20%, so they will not qualitatively change the exclusions over the model parameter space shown in the following section. In addition to these log-likelihood efficiencies, we also take into account the cut on the reconstructed vertex radius of 500 cm, which effectively reduces the MiniBooNE volume to a sphere of 10 m in diameter. For the other cuts, such as the number of tank and veto hits and the scintillation/Cherenkov ratios, we assume perfect signal efficiency for the detection channels in Table <ref>. However, we do check that the γγ, e^+ e^-, and γ e^- final states from axion interactions and decays are collinear enough to be identified as a single electron-like Cherenkov ring in the detector. This also ensures that the cut on the di-gamma invariant mass m_γγ≤ 80 MeV is passed by our ALP signals. Lastly, we bin the ALP signal Monte Carlo events into visible energy and cosine bins between 75 ≤ E_γ≤ 850 MeV and cosθ≥ 0.9 (taking E_γ = E_e^vis for the electron-like visible energy measurement). Since inverse Primakoff scattering is characterized by a forward outgoing photon, while inverse Compton scattering is characterized by a forward outgoing electron and a soft off-forward photon (typically below the lower energy cut), these scattering channels are well within the selection region for most choices of the couplings and the ALP mass. Example spectra for photon and electron coupling channels are shown in Fig. <ref>, where we have convolved the predicted event rates with the efficiency functions described above. For the case of ALPs undergoing inverse Primakoff scattering in the detector, a Z →γ Z, we integrate over the visible energy and outgoing angle of the final state photon: d^2R/(dE_γ dΩ_γ) = N_T ∫ (dN_a/dE_a) [d^2σ(E_a)/(dE_γ dΩ_γ)] ϵ(E_γ) ϵ(Ω_γ) dE_a, where ϵ(E_γ) and ϵ(Ω_γ) = ϵ(cosθ_γ) are equivalent to the visible energy and cosine efficiencies, respectively, of the electron-like signals shown in Fig. <ref>. Here, recall the differential event rate dN_a/dE_a passing into the detector from Eq. <ref>. Integrating Eq.
<ref> over the energy bin edges [75, 100, 150, 200, 250, 300, 500, 850] (in MeV) and cosine bin edges [0.9, 0.95, 0.99, 1.0] yields the ALP signal s_i in each bin i as a function of the mass and couplings. In the case of decays, instead of the differential cross section in Eq. <ref> we use the probability of the decay occurring inside the detector, P_decay = e^-ℓ/(τ v_a)[1 - e^-Δℓ/(τ v_a)], where τ v_a is the ALP decay length in the lab frame, ℓ is the baseline distance between the ALP production point in the dump and the detector, and Δℓ is the fiducial path length in the detector during which the decay must take place. For the other detection channel final states (2γ, 1γ1e^-, or e^+e^-), both final state particles leave visible energy in the detector, so we need to ensure that they are collinear enough to be reconstructed as a single Cherenkov ring in the detector. We check the angular distribution of the final state and cut events if the two final state particles are separated by more than 5 degrees. We use a binned log-Poisson likelihood to obtain the confidence limits: ln L(θ⃗) = ∑_i=1^7 d_i ln[s_i(θ⃗) + b_i] - [s_i(θ⃗) + b_i] - ln[Γ(d_i + 1)] for data d_i, backgrounds b_i, and signal s_i(θ⃗), where θ⃗ = (m_a, g_aγ) in the case of dominant ALP-photon coupling and θ⃗ = (m_a, g_ae) in the case of dominant ALP-electron coupling. The CLs are then given by finding regions of constant delta-log-likelihood, -2Δln L ≡ -2(ln L(θ⃗) - ln L_max), in the relevant model parameter space θ⃗.
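The delta-log-likelihood scan described above is straightforward to implement; the following is a minimal sketch of the procedure (the binned data, backgrounds, and the signal_model function are hypothetical placeholders, not the actual analysis inputs):

import numpy as np
from scipy.special import gammaln

def log_poisson_likelihood(s, b, d):
    # ln L = sum_i [ d_i ln(s_i + b_i) - (s_i + b_i) - ln Gamma(d_i + 1) ]
    mu = s + b
    return np.sum(d * np.log(mu) - mu - gammaln(d + 1))

def delta_log_likelihood_map(signal_model, ma_grid, g_grid, b, d):
    # Returns -2*Delta(ln L) over the (m_a, coupling) grid;
    # the CL contours are level sets of the returned array.
    lnL = np.array([[log_poisson_likelihood(signal_model(ma, g), b, d)
                     for g in g_grid] for ma in ma_grid])
    return -2.0 * (lnL - lnL.max())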
§.§ ArgoNeuT
ArgoNeuT <cit.> collected data from 1.25 × 10^20 POT impinging on the NuMI target, with its LArTPC detector situated 1.04 km downstream of the target while the beamline was in anti-neutrino mode <cit.>. With a fiducial volume of 0.40×0.47×0.90 m^3, the angular acceptance of the detector corresponds to roughly 0.325 mrad. We perform a similar simulation with GEANT4 using the physics list to model the particle cascades inside the NuMI beam target environment (120 GeV protons on graphite). The ALP flux is calculated in the same way as explained in the case of the MiniBooNE dump. From the GEANT4 flux distributions of e^± and γ in the solid angle of ArgoNeuT, shown in Fig. <ref>, we estimate the ALP flux produced from 1.25× 10^20 POT during data collection. A dedicated search for heavy ALPs decaying to di-muon pairs was performed by the ArgoNeuT collaboration <cit.>, exhibiting an event topology with very low background expectations. However, here we are interested in different types of event topologies: e^+ e^-, e^- γ, 2γ and 1γ (see Table <ref>), for which a dedicated analysis is missing. Therefore, we do not perform a likelihood analysis; instead, we provide the contours in the parameter space along which 3, 20, or 100 signal ALP events would be observed in ArgoNeuT. These numbers are equal to the Poisson error on ∼ 10, 400, and 10^4 background events, respectively.
§ RESULTS
The constraints on the ALP-photon coupling g_aγ as a function of the ALP mass m_a derived from the MiniBooNE beam dump mode data are shown in Fig. <ref>. The 1σ and 2σ CLs are shown individually using the delta-log-likelihood method, and we find that the MiniBooNE data sets new laboratory limits on the ALP coupling for masses below 100 keV or so, where previously astrophysics (HB star cooling and SN1987a <cit.>, see also refs. <cit.>) had placed the only constraints, ahead of beam dump constraints <cit.>[The measurement of the explosion energy of SN1987A can be in tension with the cosmological triangle region unless the star cooling process is significantly different from the standard picture <cit.>.] and, recently, constraints set by the CCM120 engineering run <cit.>. Limits set by the ArgoNeuT null result from 1.25× 10^20 POT of collected data are shown in blue, benchmarking the signal event rate at 3, 20, and 100 events in the absence of a dedicated analysis with backgrounds and proper event selection. Comparing the shape of the exclusion contours between MiniBooNE and ArgoNeuT, one can see the impact of the longer baseline between beam target and detector at ArgoNeuT (∼ 1 km) versus MiniBooNE (489 m): the sensitivity contour shifts to larger masses, reflecting the longer ALP lifetimes probed for the a →γγ decay. In this space, we also show the parameter space associated with QCD axion model benchmarks, spanned between the dashed black lines. Here the range of couplings and masses is shown for Kim-Shifman-Vainshtein-Zakharov (KSVZ) benchmark models <cit.>, where the range is defined by taking anomaly number ratios from E/N = 44/3 to E/N = 2 in the model. The correlations between the QCD axion mass and its effective couplings are taken from ref. <cit.> (see also Appendix <ref>). While the constraints shown here are purely on the photon-ALP couplings, independent constraints on the ALP-gluon couplings in these model variants are stringent and would indirectly rule out much of the parameter space <cit.>. These bands are of course only representative of the traditional QCD axion models, shown to give a sense of scale; QCD axions invoked to solve the strong CP problem with parametrically heavier or lighter masses in other non-traditional models are also possible <cit.>. We set limits in the same way on the electron-ALP coupling g_ae as a function of the ALP mass in Fig. <ref>. The parameter space associated with Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) benchmark models <cit.>, for which couplings to electrons would be dominant relative to the photon couplings, is spanned between the dashed black lines. Again, we show this span of model parameter space for reference, although the constraints shown here from pure g_ae-driven channels are conservative, and indirect constraints on the DFSZ gluon couplings would be more stringent. For the electron coupling, we find that the MiniBooNE dump mode tests parameter space already ruled out by existing laboratory searches (e.g. NA64, E137, and other beam dumps). However, in the mass range ∼ 10 MeV the resonant channel e^+ e^- → a produces a highly peaked signal which becomes visible inside the energy region of interest, 75 < E_vis < 850 MeV (see Fig. <ref>). This is because the resonant energy tracks the square of the ALP mass, as E_a = m_a^2/(2m_e), producing the first visible peak within this energy range for m_a ≃ 10 MeV. The MiniBooNE dump mode becomes highly sensitive to ALP signals for those masses, but remains consistent with the existing E137 constraints in this region. The subtle undulating features in the CL contours from m_a = 10 - 30 MeV then reflect the signal rising and falling to accommodate the two data points in the 3rd and 6th energy bins in Fig. <ref>. ArgoNeuT sensitivity to this coupling is fairly powerful in the m_a > 2m_e mass range and would exclude new parameter space ahead of the limits set by the CCM120 engineering run between m_a = 1 MeV and m_a = 5 MeV.
This is owed in part to the energy scale and the long distance from the target to the detector, which are ideal for probing long ALP lifetimes, and also to the relatively larger e^± fluxes produced in the NuMI target (Fig. <ref>). This exclusion would be possible even for a benchmark signal rate of 100 events, corresponding roughly to a Poisson background of 10^4 events, without taking into account signal efficiency. This sensitivity is lost in the scattering limit for m_a < 2 m_e, where NA64 missing-energy searches and CCM120 (for which the much closer proximity to the production site, ℓ∼ 20 m, plays a bigger role) set the leading constraints.
§ OUTLOOK
The analysis of the MiniBooNE dump mode data shows significant sensitivity to dark sector states produced by the secondary electromagnetic cascades in the BNB dump environment. By utilizing the off-target configuration and examining the interactions of 1.86 × 10^20 protons with the steel beam dump, we have expanded the existing constraints on ALPs in the 10-100 MeV mass regime that couple to photons. Simultaneously, despite a small exposure and fiducial detector mass, the null observations of ArgoNeuT could potentially rule out parameter space for ALPs in the same mass range coupling to electrons, owing to the higher beam energy. Stopped-pion experiments at ∼GeV scale proton beam dumps also have the capability to probe new physics in the secondary electromagnetic showers, covering regions of model parameter space complementary to the higher-energy, longer-baseline beam dump experiments situated at the NuMI, BNB, or LBNF beams. Future beam dump searches, such as a proposed dump mode or target-less running mode for DUNE <cit.>, may be able to fully probe the QCD axion parameter space at MeV masses. A dedicated target-less mode was shown to test electron-ALP couplings down to g_ae∼ 10^-6 for m_a < 2 m_e and down to g_ae∼10^-9 from ALP decays to e^+ e^- pairs with a limited 3-month to 1-year exposure.
§ ACKNOWLEDGMENTS
We are grateful to Ornella Palamara for the helpful discussions regarding the potential for dedicated ALP studies at ArgoNeuT. The work of IMS is supported by DOE under the award number DE-SC0020250. The work of BD and AT is supported by the DOE Grant No. DE-SC0010813. Portions of this research were conducted with the advanced computing resources provided by Texas A&M High-Performance Research Computing. The work of GG, WJ, and JY is supported by the U.S. Department of Energy under Grant No. DE-SC0011686. We thank the Center for Theoretical Underground Physics and Related Areas (CETUP*) and SURF for facilitating portions of this research.
§ QCD AXION MODELS
The correlations between the QCD axion mass and its effective couplings are given below, taken from ref. <cit.>. We simply reiterate those correlations here for the convenience of the reader. The relation between the Peccei-Quinn breaking scale f_a and the axion mass is f_a = (5.691× 10^6 eV/m_a) GeV. The correlation between the axion mass and its effective coupling to photons in the Kim-Shifman-Vainshtein-Zakharov (KSVZ) benchmark model <cit.> is then given by Eq. <ref>: g_aγ = (m_a/GeV)(0.203 E/N - 0.39) GeV^-1. We then consider a range of model parameter space by considering anomaly number ratios from E/N = 44/3 to E/N = 2. This defines a band in the (m_a, g_aγ) parameter space in which the QCD axion's couplings and mass may reside.
For the Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) benchmark model <cit.>, for which couplings to electrons would be dominant relative to the photon couplings, we take g_ae = m_e C_ae(m_a, tanβ)/f_a, where the coefficient C_ae depends on the rotation angle β for the vacuum expectation values of the extended Higgs sector in the DFSZ(I) and DFSZ(II) models: DFSZ(I): C_ae = -(1/3)sin^2β + loop factors; DFSZ(II): C_ae = (1/3)sin^2β + loop factors. Here we take tanβ values between 0.25 and 120, which equates to sinβ = 0.242536 and sinβ = 0.999965, respectively <cit.>.
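As a quick consistency check of the quoted endpoints (our arithmetic): using sinβ = tanβ/√(1+tan^2β), tanβ = 0.25 gives sinβ = 0.25/√(1.0625) ≈ 0.242536 and tanβ = 120 gives sinβ = 120/√(14401) ≈ 0.999965, reproducing the values above. Likewise, the f_a-m_a relation above gives f_a ≈ 0.57 GeV for m_a = 10 MeV, illustrating how low the Peccei-Quinn scale must be for MeV-scale QCD axions.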
http://arxiv.org/abs/2307.04213v1
20230709155609
Family Floer theory, non-abelianization, and Spectral Networks
[ "Yoon Jae Nho" ]
math.SG
[ "math.SG", "53D40" ]
In this paper, we study the relationship between Gaiotto-Moore-Neitzke's non-abelianization map and Floer theory. Given a complete GMN quadratic differential ϕ defined on a closed Riemann surface C, let C̃ be the complement of the poles of ϕ. In the case where the spectral curve Σ_ϕ is exact with respect to the canonical Liouville form on T^∗C̃, we show that an “almost flat” GL(1;ℂ)-local system ℒ on Σ_ϕ defines a Floer cohomology local system HF_t(Σ_ϕ,ℒ;ℂ) on C̃ for 0< t≤ 1. Then we show that for small enough t, the non-abelianization of ℒ is isomorphic to the family Floer cohomology local system HF_t(Σ_ϕ,ℒ;ℂ).
§ INTRODUCTION
§.§ Main result
Let C be a closed Riemann surface, and let ω_C be the canonical line bundle of C. A quadratic differential ϕ is a meromorphic section of ω_C^⊗ 2. We say that a quadratic differential ϕ is a complete GMN quadratic differential if all of its zeroes are simple, it does not have poles of order 1, and it has at least one pole. Let C̃ be the complement of the poles of ϕ, and let (T^∗_ℂ)^1,0C̃ denote the total space of ω_C̃. The complete GMN quadratic differential ϕ defines a smooth embedded algebraic subvariety Σ_ϕ of (T^∗_ℂ)^1,0C̃ called the spectral curve associated to ϕ. It becomes a simple branched double covering of C̃ by restricting the projection map π:(T^∗_ℂ)^1,0C̃→C̃ to Σ_ϕ. This curve is defined using the canonical holomorphic Liouville form λ=p^z dz via Σ_ϕ:={λ^2-π^∗ϕ=0}⊂(T^∗_ℂ)^1,0C̃. Here p^z is the complex fibre coordinate and z is the complex base coordinate [for the definition, see Section <ref>]. We have the canonical holomorphic symplectic form Ω on (T^∗_ℂ)^1,0C̃ defined by Ω:=dλ. The spectral curve Σ_ϕ is a holomorphic Lagrangian submanifold in the sense that it is a holomorphic submanifold of (T^∗_ℂ)^1,0C̃ and the holomorphic symplectic form Ω vanishes on Σ_ϕ. There is also a diffeomorphism between (T^∗_ℂ)^1,0C̃ and the real cotangent bundle T^∗C̃, sending the real part of the holomorphic Liouville form to the canonical real Liouville form λ_re=∑ p^i dq_i. The spectral curve Σ_ϕ then becomes an ω-Lagrangian submanifold of T^∗C̃ under this identification, where ω=dλ_re. Suppose now that the spectral curve Σ_ϕ is exact with respect to λ_re; then so is tΣ_ϕ for any t∈ℝ_>0. Now let C^∘ be the complement of the zeroes and poles of ϕ, and Σ_ϕ^∘=π^-1(C^∘). Following <cit.>, we say that a rank 1 local system ℒ over Σ_ϕ^∘ is almost flat if the monodromy along a small loop around any of the ramification points in π^-1(zero(ϕ)) is -Id. Let 𝔰 be a spin structure on C, and let 𝔣_z be the induced spin structure on the cotangent fibre F_z, for z∈ C. Given an almost flat GL(1;ℤ)-local system ℬ, the spin structure 𝔰̃=π^∗𝔰⊗ℬ on Σ_ϕ^∘ extends to a global spin structure on Σ_ϕ, which we still denote by 𝔰̃. Furthermore, given an almost flat GL(1;ℂ)-local system ℒ, ℒ⊗ℬ extends to a GL(1;ℂ)-local system on Σ_ϕ. We show that together with an almost flat GL(1;ℂ)-local system ℒ on Σ_ϕ, the spin structures 𝔰̃ and 𝔣_z, and a choice of ℬ, we can define the family Floer cohomology local system HF_t(Σ_ϕ,ℒ,𝔰,ℬ;ℂ): z↦ HF(tΣ_ϕ,F_z,𝔰̃,𝔣_z,ℒ⊗ℬ;ℂ), for any t∈ℝ_>0. Here we take Floer cohomology over ℂ twisted by the GL(1;ℂ)-local system ℒ⊗ℬ. It turns out that (<ref>) is concentrated in degree zero, is free, and has rank 2.
In <cit.>, Gaiotto, Moore and Neitzke constructed the non-abelianization map, which sends an almost flat GL(1;ℂ)-local system on Σ_ϕ^∘ to a GL(2;ℂ)-local system on C̃. The main theorem of this paper is that for small enough t, HF_t(Σ_ϕ,ℒ,𝔰,ℬ;ℂ) and the non-abelianization of ℒ are isomorphic. Suppose Σ_ϕ is exact with respect to the real Liouville form λ_re. Given an E≫ 1 and small enough δ>0, there exists a t_0(δ;E)>0 such that the following holds. Let 0<t<t_0. Let 𝔰 be a global spin structure on C, and let ℒ be an almost flat GL(1;ℂ)-local system on the spectral curve. Let ℬ be an almost flat GL(1;ℤ)-local system, and extend 𝔰̃=π^∗𝔰⊗ℬ and ℒ⊗ℬ to Σ_ϕ. Then the Floer cohomology local system HF_t(Σ_ϕ,ℒ,𝔰,ℬ;ℂ): z↦ HF(tΣ_ϕ,F_z,𝔰̃,𝔣_z,ℒ⊗ℬ;ℂ) is isomorphic to the non-abelianization of ℒ. In particular, the isomorphism class of the local system does not depend on the choice of 𝔰 and ℬ, and so we write HF_t(Σ_ϕ,ℒ;ℂ) instead. We take a direct Floer-theoretic approach to study GMN non-abelianization. The microlocal side of GMN non-abelianization has been studied in various other papers, such as <cit.>, <cit.> and <cit.>. We now split the rest of the introduction into two parts. The first part contains a brief review of the theory of quadratic differentials and the GMN non-abelianization map. The second part outlines the main ideas involved in the proof of the main theorem.
§.§ Quadratic differentials and non-abelianization
§.§.§ Quadratic differentials
We review the theory of quadratic differentials in more detail. We will describe the local structure of zeroes and poles, the spectral curve, and the induced singular flat metric on the base. We follow the expositions of <cit.> and <cit.>. Again, let C be a closed Riemann surface, and let ω_C be the canonical line bundle. A quadratic differential is a meromorphic section of ω_C^⊗ 2. Equivalently, a quadratic differential is a collection of open conformal charts (U_μ,z_μ), where z_μ:U_μ→ℂ is a biholomorphism onto its image, together with a collection of meromorphic functions ϕ_μ on U_μ such that ϕ_μ'=ϕ_μ(dz_μ/dz_μ')^2 on U_μ∩ U_μ'. We then locally write ϕ=ϕ_μdz_μ^2 on U_μ. A zero or a pole of ϕ is called a critical point. We say that a critical point is finite if it is either a simple pole or a zero of ϕ. Otherwise, we say that it is an infinite critical point. The critical points of ϕ have the following local structure. For details, see <cit.>. The following combines Theorems 6.1-6.4 in <cit.>. Let b be either a finite critical point of ϕ or a pole of odd order. Let n be the exponent of b. Then there exists a neighbourhood U_b of b, an open set D of ℂ containing zero, and a biholomorphism ξ=ξ_b:(D,0)→ (U_b,b) such that ϕ(ξ)dξ^2=((n+2)/2)^2 ξ^n dξ^2. Furthermore, the germ of the biholomorphism is unique up to a factor of some c=exp(2π i k/(n+2)) for k=0,1,2,...,n+1. In particular, for n=1, we get ϕ(ξ)dξ^2=(3/2)^2 ξ dξ^2. Let b be a pole of order 2. Then there exists a local conformal parameter ξ, unique up to a factor of a constant c∈ℂ, such that ϕ(ξ)dξ^2=a_-2ξ^-2 dξ^2. Let b be a pole of ϕ of even order n≥ 4. Then there exists a local conformal parameter ξ and a constant r∈ℂ such that ϕ(ξ)dξ^2=((1/2)(2-n)ξ^-n/2+rξ^-1)^2 dξ^2.
Spectral curves. A quadratic differential ϕ gives rise to a holomorphic Lagrangian submanifold of T^∗C̃ called the spectral curve Σ_ϕ. To define this, let (T^∗_ℂ)^1,0C̃ denote the holomorphic cotangent bundle of C̃.
There exists a canonical holomorphic Liouville 1-form λ on (T^∗_ℂ)^1,0C̃; for (q,p)∈(T^∗_ℂ)^1,0C̃ and V∈ T(T^∗_ℂ)^1,0C̃, we define λ(q,p)(V)=p(π_∗(V)), where π:(T^∗_ℂ)^1,0C̃→C̃ is the projection map. We evaluate π_∗V∈ T_q C̃ on p∈ (T_q C̃)^∗ with respect to the canonical pairing (T_q C̃)^∗⊗ (T_qC̃)→ℂ. In local coordinates, we can write λ=p^z dz, where p^z is the complex fibre coordinate and z is the complex base coordinate. We see that λ gives the canonical section of the line bundle π^∗ω_C̃. We obtain the canonical holomorphic symplectic form on (T^∗_ℂ)^1,0C̃ by taking the exterior derivative Ω=dλ. There exists a diffeomorphism of total spaces of real fibre bundles T^∗C̃→ (T^∗_ℂ)^1,0C̃ between the real cotangent bundle and the holomorphic cotangent bundle, under which the real part of λ pulls back to the canonical real Liouville form λ_re on T^∗C̃. The diffeomorphism is induced by the identification V_ℂ^1,0≃ V, for V a real vector space with a complex structure I:V→ V, I^2=-Id (see <cit.>). The algebraic variety Σ_ϕ:={λ^2-π^∗ϕ=0}⊂ (T^∗_ℂ)^1,0C̃ is called the spectral curve associated to the quadratic differential ϕ. The projection π:Σ_ϕ→C̃ gives a branched double covering of C̃. The spectral curve is smooth if the zeros of ϕ are simple. In this case, the map Σ_ϕ→C̃ becomes a simple branched covering. To see this, note that by Proposition <ref>, if z_0 is a zero of ϕ, then one can find a conformal coordinate chart near z_0 in which ϕ locally reads zdz^2. Then, realizing ℂ^2 as the holomorphic cotangent bundle of ℂ, we see that the germ of the spectral curve near z_0 is equivalent to the germ of {(p^z)^2-z=0}⊂ℂ^2 at (z,p^z)=(0,0), which is smooth. Now observe that the holomorphic symplectic form Ω vanishes on any smooth codimension one algebraic subvariety of (T^∗_ℂ)^1,0C̃. Under the identification of the holomorphic and the real cotangent bundle, we see that Σ_ϕ becomes a real Lagrangian submanifold of T^∗C̃.
The ϕ-metric. We need a further ingredient to describe non-abelianization, namely the natural flat singular metric structure on C. To define this, let C^∘ denote the complement of the zeros and poles of ϕ. On C^∘, we have a corresponding Riemannian metric g^ϕ=|ϕ(z)||dz|^2, which we regard as a singular metric on C̃. The metric g^ϕ is actually flat, because in the local conformal coordinate W=∫√(ϕ), we have ϕ≡ dW^2 and g^ϕ≡|dW|^2 by (<ref>). Note that g^ϕ induces a topological metric space structure on C̃. We will be interested in the following class of quadratic differentials with nice g^ϕ-metric properties. <cit.> A meromorphic quadratic differential is GMN[For Gaiotto, Moore and Neitzke, who first introduced the theory of spectral networks with which we are concerned.] if: * all the zeroes of ϕ are simple, * ϕ has at least one pole, * ϕ has at least one finite critical point (either an order one pole or a zero). We say that a GMN quadratic differential ϕ is complete if ϕ has no simple poles. If ϕ is complete, then the metric space is complete as well. To see this, note that the integral lim_a→ 0^+∫_a^1 (1/x^b) dx for 0<b<∞ converges for b=1/2, but not for b≥ 1. Now, comparing with the local forms in Proposition <ref>, we see that the integral of the line element |√(ϕ)|∼1/|z|^b for b≥ 1 blows up as z→ 0. Each g^ϕ-geodesic, or ϕ-geodesic for short, admits a unique phase in ℝ/πℤ, since ϕ-geodesics are just straight lines in the W-coordinate. We call geodesics with phase θ=0 horizontal and geodesics with phase θ= π/2 vertical. We call maximal solutions of the ϕ-geodesic equation trajectories.
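To spell out the comparison (our computation, using the local models of Proposition <ref>): near a simple pole the line element behaves like |√(ϕ)| ∼ |z|^-1/2, and lim_a→ 0^+∫_a^1 x^-1/2 dx = 2, so simple poles sit at finite g^ϕ-distance, which is why they count as finite critical points; near a pole of order n≥ 2 one has |√(ϕ)| ∼ |z|^-n/2 with n/2≥ 1, and already ∫_a^1 x^-1 dx = -ln a →∞ as a→ 0^+, so such poles lie at infinite distance and the metric on C̃ is complete.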
There are several types of trajectories. If the maximal interval of definition of the trajectory γ is a finite open interval, or equivalently, if γ approaches finite critical points at both ends, we say that γ is a saddle trajectory. If it is defined over (-∞,∞), or equivalently approaches infinite critical points at both ends, then we say that it is a generic trajectory. If the trajectory approaches a finite critical point at a single end, then we say that it is a separating trajectory. Note that horizontal generic trajectories do not intersect each other. We say that ϕ is saddle-free if there are no horizontal saddle trajectories on C. We can always rotate ϕ by e^2iθ for a generic θ to obtain a saddle-free quadratic differential <cit.>. The phase θ trajectories in C^∘ give a singular foliation on C̃. The critical graph of this singular foliation is called the spectral network S(θ). The spectral network S(θ) is stratified into a 0-dimensional stratum consisting of all the zeroes of ϕ and a 1-dimensional stratum consisting of the separating θ-trajectories, also called walls. The complement of the spectral network for a saddle-free GMN quadratic differential is a disjoint union of chambers; chambers are connected, contractible conformal subdomains of C̃. Given a chamber 𝒵^h, there exists a conformal equivalence (𝒵^h,ϕ)≃ (𝒵^h(a,b),dz^2), where 𝒵^h(a,b) is either the upper half-plane or a finite horizontal strip subdomain of ℂ (cf. <cit.>). These chambers are maximal horizontal domains, meaning that they are spanned by generic horizontal trajectories. Thus we have a cellular decomposition of C̃, where the 2-cells are the chambers, the 1-cells are the walls, and the 0-cells are the zeroes of ϕ. The spectral curve Σ_ϕ restricted to a chamber is sent, under the conformal equivalence (𝒵^h,ϕ)≃ (𝒵^h(a,b),dz^2), to the two disjoint affine hyperplanes {p^x=± 1, p^y=0}.
§.§.§ Non-abelianization
We now state what we mean by non-abelianization. Let ℛ be either ℤ or ℂ. We define a GL(k;ℛ)-local system to be a rank k locally constant sheaf of free ℛ-modules. Inspired by <cit.>, we look at the “integrated version” of local systems, or the path groupoid representation of ℛ-local systems (Definition <ref>). For this purpose, let M be a two-dimensional real manifold. Suppose furthermore that M admits a compactification M̄, by which we mean that there is an embedding of M into a closed two-dimensional manifold M̄ such that M_∞=M̄-M consists of a finite set of points. In the above set-up, M admits a wall-chamber decomposition if there exists a finite collection M^0 of points on M and a collection M^1 of embedded arcs (called walls) in M satisfying the following conditions. * If w∈ M^1, then w connects a point in M^0 to either a point in M^0 or a point in M_∞. * Given a point m_0∈ M^0, there exists a wall W such that m_0∈∂ W, and given a point m_∞∈ M_∞, there exist at least one point m_0 in M^0 and a wall W∈ M^1 such that ∂ W={m_0}∪{m_∞}. * The walls in M^1 only meet at the points in M^0∪ M_∞. * The complement M^2 of all the walls in M^1 decomposes M into a finite disjoint union of contractible components (called chambers). <cit.> Given a wall-chamber decomposition (M^0,M^1,M^2), we say that a collection of points 𝒫_M in M is a set of base points if each component of M^2 contains at least one element of 𝒫_M. Given the base points 𝒫_M, the path groupoid 𝒢_M=𝒢_M(𝒫_M) is the groupoid whose objects are the points in 𝒫_M and whose morphisms are path-homotopy classes between the points in 𝒫_M.
A collection of morphisms of 𝒢_M is said to be a path groupoid generating set if their concatenations generate 𝒢_M. A path groupoid representation of a GL(k;ℛ)-local system consists of the following data. * A free rank k ℛ-module E_b for each b∈𝒫_M, together with an isomorphism ℛ^⊕ k≃ E_b. * A morphism Γ(α):E_b→ E_b' for each path homotopy class α∈π_1(b,b')_M, b,b'∈𝒫_M, such that Γ is compatible with path concatenations. Two path groupoid representations (𝒫_M,E,Γ) and (𝒫_M,E',Γ') are said to be equivalent if for each b∈𝒫_M there are isomorphisms g_b:E_b→ E'_b such that (i) the square commutes, i.e. g_b'∘Γ(α)=Γ'(α)∘ g_b for α∈π_1(b,b'), b,b'∈𝒫_M, and (ii) the isomorphisms g_b are compatible with the isomorphisms (<ref>) to ℛ^⊕ k above. Given a path groupoid representation of a GL(k;ℛ)-local system, we can build a genuine ℛ-local system on M. To see this, we borrow the argument in <cit.>. Consider the space 𝒫̃^M:={γ:I→ M: γ(0)∈𝒫_M}/{∼} of path homotopy classes that begin at some m∈𝒫_M and end at some other point m'. Let 𝒫̃^M_b be the connected component of 𝒫̃^M containing the constant path at b∈𝒫_M. Then we glue the constant sheaves E_b×𝒫̃^M_b by (v,b)∼ (Γ(α)v,b') for α∈π_1(b,b'). The spectral network S(0) induces a wall-chamber decomposition of C̃. Suppose we choose a collection of base points b(w), one for each wall w in S(0). The wall w picks out a unique sheet of √(ϕ) in the following sense: choose any parametrization w:[0,∞)→C̃ in the outward orientation; there exists a unique sheet of √(ϕ) such that the function s↦∫_0^s √(ϕ) along w takes values in ℝ_≥ 0, independently of the choice of an oriented parametrization of w. We can then similarly choose a pair of points b^u(w) and b^d(w), connected by an oriented vertical arc α (called a short path) passing through b(w), such that the integral s↦∫_α(0)^α(s) Im√(ϕ) is non-negative and increasing. Furthermore, we can give ± labels to the lifts of b^u(w) (or b^d(w)) by letting b(w)^u,+ (or b(w)^d,+) be the lift corresponding to the positive sheet of √(ϕ) along w. Let 𝒫_C be the resulting collection of points b^u(w) and b^d(w) for w a wall in S(0). Let 𝒫_Σ_ϕ^∘=π^-1(𝒫_C), and lift the wall-chamber decomposition of C̃ to a wall-chamber decomposition of Σ_ϕ^∘. Recall that we call a GL(1;ℛ)-local system on Σ_ϕ^∘ almost flat if the monodromy around a ramification point is -Id. Similarly to Definition <ref>, we introduce a path-groupoid representation analogue of the almost flat GL(1;ℛ)-local systems introduced in <cit.>. A path groupoid representation of an almost flat GL(1;ℛ)-local system ℒ on Σ_ϕ^∘ is a collection of the following data: * A one-dimensional free ℛ-module ℒ_b̃ for each of the points b̃∈𝒫_Σ_ϕ^∘, with a preferred choice of basis. * A morphism of vector spaces Φ^ℒ(α): ℒ_b̃→ℒ_b̃' for each morphism α∈ Hom(b̃,b̃') of the path groupoid 𝒢_Σ_ϕ^∘=𝒢_Σ_ϕ^∘(𝒫_Σ_ϕ^∘). This data is subject to the following conditions: * The morphisms Φ^ℒ(α) are compatible with composition of path homotopy classes. * The holonomy around a based loop encircling a ramification point of π is -Id. We now define non-abelianization. Given a path groupoid representation ℒ of an almost flat GL(1;ℂ)-local system on Σ_ϕ^∘ and a path groupoid representation E of a GL(2;ℂ)-local system on C̃, we say that ℒ and E form a 𝒲-pair, or equivalently that E is a non-abelianization of ℒ, if: * There is an isomorphism i_b:E_b→π_∗(ℒ)_b for each b∈𝒫_C.
* If α does not cross walls of S(0), then Γ(α)=i_f(α)^-1(π_∗Φ^ℒ(α)) i_i(α). * If α is a short path between b(w)^- and b(w)^+, then Γ(α)=i_f(α)^-1𝒮_w(Φ^ℒ(α)) i_i(α), where 𝒮_w is a unipotent matrix of the form Id+μ_w, for some ℂ-morphism μ_w:ℒ_b(w)^d,-→ℒ_b(w)^u,+. Furthermore, we say that the induced local systems on C̃ and Σ_ϕ form a 𝒲-pair if their path groupoid representations form a 𝒲-pair. One of the main insights of <cit.> was that homotopy invariance and the data of ℒ uniquely determine the matrices μ_w. We will revisit this idea in Section <ref>. Consider Σ_ϕ as a real Lagrangian submanifold of T^∗C̃ with respect to the real canonical Liouville form λ_re, under the identification of the real cotangent bundle and the holomorphic cotangent bundle. We are interested in complete GMN quadratic differentials ϕ such that the corresponding spectral curve Σ_ϕ is exact with respect to the canonical real Liouville form on T^∗C̃. We call such quadratic differentials real exact. The space of real exact quadratic differentials constitutes a totally real submanifold of the space of quadratic differentials (see the remark after Proposition <ref>; the space of GMN quadratic differentials is a complex manifold by <cit.>). We show in Section <ref> that given a real exact quadratic differential ϕ, the Floer cohomology local system HF_t(Σ_ϕ,ℒ,𝔰,ℬ;ℂ): z↦ HF(tΣ_ϕ,F_z,𝔰̃,𝔣_z,ℒ⊗ℬ;ℂ) is well defined. The construction of the precise Floer-theoretic set-up uses only standard techniques, but is slightly involved. It is carried out in Section <ref>. We also show that the points 𝒫_C and the ℂ-vector spaces HF(tΣ_ϕ,F_z,𝔰̃,𝔣_z,ℒ⊗ℬ;ℂ) for z∈𝒫_C, along with the Floer-theoretic parallel transports, define a path groupoid representation HF_t(Σ_ϕ,ℒ,𝔰,ℬ,𝒫_C;ℂ) of a GL(2;ℂ)-local system over C̃. In addition, we show that compact Hamiltonian isotopies of tΣ_ϕ which are supported away from the points in π^-1(𝒫_C) define an equivalent path groupoid representation (Proposition <ref>). We can now restate our main theorem as follows. Note that there are some constants involved for technical reasons. Let Σ_ϕ be the spectral curve associated to a real-exact GMN quadratic differential on a closed Riemann surface C. Given a small deformation parameter δ>0 and a large energy cut-off E≫ 1, there exist a t_0>0 and a collection of points 𝒫_C=𝒫_C(δ;E) (with lifts 𝒫_Σ_ϕ^∘) such that the following holds for all 0<t<t_0. Let ℒ=ℒ(𝒫_Σ_ϕ^∘) be a path groupoid representation of an almost flat GL(1;ℂ)-local system, let 𝔰 be a spin structure on C, and let ℬ be an almost flat GL(1;ℤ)-local system. Then HF_t(Σ_ϕ,ℒ,𝔰,ℬ,𝒫_C;ℂ) and ℒ(𝒫_Σ_ϕ^∘) form a 𝒲-pair, or equivalently, HF_t(Σ_ϕ,ℒ,𝔰,ℬ;ℂ) is a non-abelianization of ℒ.
§.§ Towards the proof of Theorem <ref>
We outline the strategy of the proof of Theorem <ref>. Given some small parameter δ>0, we find a suitable Kähler metric g^ϕ_δ on C̃ which agrees with g^ϕ outside a small neighbourhood of the zeroes of ϕ. Then, as we said in Section <ref>, we restrict our attention to spectral curves which are exact with respect to the real Liouville form λ_re. Given such a quadratic differential and a large energy cut-off E≫ 0, we construct a bounded open subdomain C(δ;E) of C̃ which is a deformation retract of C̃-S(0), such that g^ϕ_δ=g^ϕ on C(δ;E). The metric g^ϕ_δ gives rise to an induced almost complex structure J on T^∗C̃ called the Sasaki almost complex structure. For the definition, see Definition <ref>.
For z∈ C^∘ and 0<t≤ 1, a J-holomorphic strip u bounded between tΣ_ϕ and F_z that travels between distinct lifts of z on Σ_ϕ is called a t-BPS disc ending at z. The main analytic theorem of the paper is the following non-existence result for t-BPS discs ending at z, for small enough t>0 and z∈ C(δ;E). Given E≫ 0 and δ≪ 1, there exist a metric g^ϕ_δ on C̃ and a deformation retract C(δ;E) of C̃-S(0), over which g^ϕ_δ=g^ϕ, such that the following holds. Let J be the Sasaki almost complex structure associated to g^ϕ_δ. Then there exists a scaling parameter t_0=t_0(δ;E)>0 such that for 0<t≤ t_0, there are no non-constant J-holomorphic strips bounded between F_z and tΣ_ϕ for z∈ C(δ;E). The main motivation behind the proof of Theorem <ref> comes from the following general expectation. Suppose we have a sequence of discs u_t bounded in a t-rescaling of an exact multi-graph. Then we expect the sequence u_t to degenerate, after possibly passing to a subsequence, to sets of solutions of Morse-like local differential equations on C̃. The resulting set of solutions on C̃ is called the adiabatic degeneration of the sequence u_t. In our case, the resulting Morse-like local differential equation turns out to agree with the horizontal ϕ-geodesic equation on C̃ over the region C(δ;E). Furthermore, we show in Section <ref> that the flow lines for points z∈ C(δ;E) can never enter some neighbourhood of the branch points. Using these observations, in Section <ref>, we modify Ekholm's Morse flow tree techniques <cit.> to show that, after passing to a subsequence, u_t maps arbitrarily close to a horizontal trajectory γ passing through some z∈ C(δ;E) for small enough t. By construction, we can find a small neighbourhood of γ contained in C^∘ over which the metric g^ϕ_δ agrees with g^ϕ. We show that such discs cannot exist under the finite energy assumption, and prove the main analytic theorem. We now explain briefly how to relate Theorem <ref> to Theorem <ref>. Let z∈𝒫_C. By choosing a suitable grading, we show that the chain complex CF(Σ_ϕ,F_z) is concentrated in degree 0. Thus the intersection points in F_z⋔Σ_ϕ give rise to a natural decomposition of HF(Σ_ϕ,F_z) for z∈𝒫_C. The key part is using Theorem <ref> to show that the parallel transport is diagonal along (homotopy classes of) paths that are strictly contained in a connected component of C(δ;E). This is necessary to show that (<ref>) holds. To understand the heuristics, consider the holomorphic strips that contribute to the non-diagonal terms in the Floer-theoretic parallel transport map along horizontal or vertical arcs in C(δ;E). We show that these strips Gromov-converge to broken holomorphic strips bounded between F_z and tΣ_ϕ as the corresponding arcs converge to the point z. Since Theorem <ref> implies that such holomorphic strips cannot exist, we deduce that the parallel transport must be diagonal. We now explain why the parallel transport along a short arc is of the form (<ref>). This is essentially due to positivity of energy. From real exactness, we have W:Σ_ϕ→ℝ such that λ_re=dW. From Stokes' theorem, we see that the energy of t-BPS discs ending at z must be bounded above by ± t (W(z^+)-W(z^-)). We show that W(z^+)=W(z^-) if and only if z lies on the spectral network S(π/2). Thus for z∉ S(π/2), we can choose the ordering of z^+ and z^- in such a way that W(z^+)>W(z^-). This means that there are no t-BPS discs ending at z that travel from z^+ to z^-, regarded as J-holomorphic strips.
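To spell out the energy identity behind this step (a standard computation, sketched under the stated exactness assumption): if u is a finite-energy J-holomorphic strip with boundary on tΣ_ϕ and F_z, asymptotic to the lifts z^± of z, then Stokes' theorem gives E(u) = ∫ u^∗ω = ∫_∂ u^∗λ_re = ± t(W(z^+)-W(z^-)), since λ_re vanishes on the cotangent fibre F_z and restricts to t·dW on tΣ_ϕ. Positivity of E(u) for non-constant u then rules out one of the two travel directions whenever W(z^+)≠ W(z^-).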
A similar energy argument applies to show that the parallel transport (<ref>) should be unipotent upper-triangular for short enough α. We leave the discussion of the holonomy contributions of ℒ in (<ref>) and (<ref>) to Section <ref>. It is worth remarking here that similar applications of Morse flow-tree techniques to study degenerations of holomorphic discs for (certain generalizations of) spectral curves also appeared in <cit.>, and implicitly in <cit.>.
§.§ Set-up of the paper
The set-up of the paper is as follows. In Section <ref>, we introduce and gather the necessary ingredients from pseudo-holomorphic curve theory (namely, monotonicity) to establish the Floer-theoretic set-up that we will use throughout this paper. In Section <ref>, we discuss the geometry of ϕ-metrics and the wall-chamber decomposition induced by S(0). We then study conditions under which the spectral curve is real exact, and find particular deformation retracts of C̃-S(0), called C(δ;E), with the properties described in Section <ref>. In Section <ref>, we adapt the adiabatic degeneration techniques of <cit.> to prove Theorem <ref>. Finally, in Section <ref>, we use a Gromov compactness argument to show that the local system is a non-abelianization up to signs. We then compute the signs explicitly, by making careful choices of the spin structures, and prove Theorem <ref>.
§.§ Conventions
We use the following conventions: * The canonical symplectic form on the cotangent bundle is dp∧dq. * The Hamiltonian vector field associated to a smooth function H on T^∗M is defined by i_X_Hω=-dH. * All holomorphic polygons are given the anticlockwise boundary orientation, regarded as the unit disc with punctures on the boundary in ℂ. * When we take the identification ℂ^n≃ T^∗ℝ^n as a target symplectic manifold, we take the induced “standard complex structure” on ℂ^n to be given by z_k=x_k-iy_k, k=1,...,n. * When we regard ℂ as a Riemann surface, or a conformal domain, we take the complex structure given by z=x+iy. * The contact form on the jet bundle is given by dz-pdq. * W^k,p denotes the (k,p)-Sobolev space. * Given a topological metric space (X,d), a subset N⊂ X, and x∈ X, we set d(N,x) to be the distance between N and x. Given l>0, we set B_l(N) to be the set of points x in X with d(N,x)<l. * Given a complete Riemannian manifold (M,g) and x,y∈ M, we consider the induced topological metric d on M, and we define d(N,x) and B_l(N) accordingly. * Given a complete Riemannian manifold (M,g), we set r:T^∗M→ℝ, r(q,p)=|p|. Here the norm of the covector p is taken with respect to g. We then set D_l^∗M ={(q,p)∈ T^∗M: r(q,p)<l}, S_l^∗M ={(q,p)∈ T^∗M: r(q,p)=l}. * For l>0, we set: A_l ={z∈ℂ: |z|<l}, ∂ A_l ={z∈ℂ: |z|=l}, E_l ={z∈ℂ: |z|<l, Im(z)≥ 0}, ∂ E_l = {z∈ℂ: |z|<l, Im(z)= 0}. * Given a real positive function a_t defined on some subset I of ℝ and α∈ℝ_>0, we say that a_t is of size O(t^α) if there exists some C>0 such that a_t<Ct^α for all t∈ I. * We adopt the convention that the infinite strip 𝒵=(-∞,∞)× [0,1] is given the conformal coordinate z=s+iτ. The parameter t is reserved for either the scaling parameter or the family parameter.
§.§ Acknowledgements
The author would like to thank his supervisor, Ailsa Keating, for her continued support and encouragement.
The author is also grateful to Ivan Smith for sharing his insights on general Floer theory; to Andy Neitzke for explaining non-abelianization; to Tobias Ekholm for explaining his paper on Morse flow-trees; to Roger Casals for interesting discussions on spectral networks and their relationship to Legendrian weaves, as well as for his kind hospitality at UC Davis; to Jack Smith for explaining how to orient Floer-theoretic moduli spaces, which played an immense role in the sign computations in Section <ref>; to Jean-Philippe Chassé and Jeff Hicks for the joint collaboration on reverse isoperimetric inequalities, which played a crucial role in resolving a specific technical issue in the paper; to Yoel Groman and Sheel Ganatra for conversations on Floer theory on open manifolds; and to Noah Porcelli, Aleksander Doan, and Benedict Morrissey for useful conversations. This project owes a significant debt to the works of Davide Gaiotto, Greg Moore and Andy Neitzke <cit.>, <cit.>, to Tobias Ekholm's work on Morse flow trees <cit.>, and to Ganatra-Pardon-Shende's and Yoel Groman's works on Floer theory on open manifolds, namely <cit.> and <cit.>. The author was sponsored by the Cambridge Trust scholarship.
§ FLOER THEORY ON COTANGENT BUNDLES OF OPEN MANIFOLDS
The main aim of this section is to establish a Floer-theoretic set-up on T^∗C̃ such that the Floer cohomology local system HF(tΣ_ϕ,F_z) is well-defined on C̃. To do this, we define Floer theory on cotangent bundles of the more general class of Riemannian open manifolds that are “flat at infinity”. In Section <ref> we introduce the notion of flatness at infinity and define finiteness conditions for Lagrangians, Hamiltonians and almost complex structures. In particular, we introduce the class of vertically finite Lagrangians, which includes the spectral curves associated to complete GMN quadratic differentials ϕ. In Section <ref>, we review the notion of geometric boundedness. In Section <ref>, we discuss the basic monotonicity techniques. In Section <ref>, we show, using the monotonicity techniques and the arguments in <cit.>, that the moduli spaces of Floer solutions satisfy the usual compactness and transversality properties. The key is showing that the relevant pseudo-holomorphic curves do not escape off to infinity. Here, the boundary conditions are given with respect to the classes of Lagrangians and almost complex structures defined in Section <ref>. This allows us to define, for instance, CF(tΣ_ϕ,F_z). In Section <ref>, we show that the Floer chain complex satisfies certain invariance properties up to isomorphism in cohomology. This section is heavily based on the works of Sikorav <cit.>, Groman <cit.> and Ganatra-Pardon-Shende <cit.>.
§.§ Finiteness conditions and confinement via monotonicity
§.§.§ Flatness at infinity and finiteness conditions
We start with the following definition. A Riemannian manifold (M,g) is flat at infinity if g is complete, there exists a compact subset K⊂ M such that g|_M-K is flat, and there exists an R_0>0 such that the injectivity radius of g is bounded below by R_0. For example, ℝ^n with the Euclidean metric is flat at infinity. We will also see in Section <ref> that C̃, equipped with the flat metric desingularized at the branch points, is also flat at infinity. Consider the cotangent bundle T^∗M. Although it is not a Liouville manifold, it is very close to one. T^∗M admits the canonical Liouville form λ_re=p·dq and the canonical symplectic form ω=dp∧dq. Furthermore, the standard Liouville vector field Z:=p∂_p satisfies L_Zω=ω and ι_Z ω=λ_re.
In addition to this, given any metric g on M, the unit sphere bundle S^∗M is a codimension 1 submanifold of T^∗M, and the restriction of the Liouville form defines a contact form α on S^∗M. Consider the diffeomorphism of the positive cone of (S^∗M,α) into T^∗M, [1,∞)× S^∗M→ T^∗M, given by sending the point (r,(q,p)) to its time-log(r) image under the Liouville flow. Since this is simply the map (r,(q,p))↦ (q,rp), the pullback of the canonical Liouville form λ_re=pdq is equal to rα, and the canonical symplectic form reads d(rα) on the positive cone. The Liouville vector field then takes the form r d/dr over [1,∞)× S^∗M. We see that the positive flow of the Liouville vector field is complete and that the image of [1,∞)× S^∗M covers a neighbourhood of the vertical infinity. The Reeb field R over S^∗M is the unique vector field defined by the conditions α(R)=1, dα(R,-)=0. At (q,p)∈ S^∗M, in geodesic normal coordinates, the Reeb vector field reads: R:=∑_i=1^n (p_i/|p|)∂/∂ q_i. The Reeb field over [1,∞)× S^∗M is defined as the field 0⊕ R with respect to the splitting T([1,∞)× S^∗M)=T[1,∞)⊕ TS^∗M. This is the Hamiltonian vector field associated to the linear function r. We now introduce objects which are compatible with the Liouville-like structure. We say that an almost complex structure or a Lagrangian submanifold is cylindrical if it is invariant under the positive Liouville flow. Furthermore, an ω-compatible almost complex structure J is of general contact type if there exists a positive smooth function h:ℝ_>0→ℝ_>0 such that h(r)dr=λ_re∘ J. If this condition holds over {r>R} for some R≫ 1, we say that it is of general contact type at vertical infinity. The almost complex structure J is of contact type if h(r)=1 and of rescaled contact type if h(r)=r. The notion of almost complex structures of rescaled contact type comes from <cit.>. Definition <ref> is equivalent to J mapping the kernel of α to itself on the level sets of r and swapping the Liouville field Z with h(r)R. In this paper, we will utilize the “canonical” ω-compatible almost complex structure on T^∗M induced from the metric g, called the Sasaki almost complex structure (see <cit.>, or for the full exposition, <cit.>). To define this, first note that, given the projection π: T^∗M→ M, the kernel V of the derivative dπ:T(T^∗M)→ TM gives the canonical vertical distribution on T^∗M. The metric g then gives rise to a distribution H on T^∗M, called the horizontal distribution, for which the restriction dπ:H_p→ T_π(p)M, p∈ T^∗M, is a vector space isomorphism. We identify H with (TM,g) via dπ; we have the following covariant decomposition TT^∗M =H⊕ V =(TM,g)⊕ (T^∗M,g). Regarding g as a real vector bundle isomorphism g:TM→ T^∗M, we get: The Sasaki almost complex structure J_g is the almost complex structure on T^∗M defined by the following matrix J_g:= [ 0 +g^-1; -g 0 ], with respect to the covariant decomposition (<ref>). We write g^S for the metric on T^∗M induced from ω and J_g. We have the following simple local computation. Let η be a differential 1-form on M and let c:I→ M be a curve. Then the velocity vector of the curve C(t)=(c(t),η(c(t))):I→ T^∗M has the following decomposition dC(t)/dt=c'(t)^H⊕(∇_c'η)^V with respect to TT^∗M=H⊕ V. The Sasaki almost complex structure is not of contact type at infinity, so we deform the almost complex structure J_g as in <cit.> to find conical deformations of J_g. The same conical deformation also appeared in <cit.>. Let ρ:[1,∞)→ [1,∞) be a smooth increasing positive function such that ρ(r)=1 for r<3/2 and ρ(r)=r for r≫ 2.
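As an illustration of how the above horizontal-vertical decomposition is used later (a sketch, under the flatness assumptions above): if η is a parallel 1-form, ∇η=0, then by the computation above the lift C(t)=(c(t),η(c(t))) of any g-geodesic c has purely horizontal velocity, so the graph of η is totally geodesic with respect to g^S. In the flat coordinate W=x+iy away from the critical points, the two sheets of Σ_ϕ are exactly the graphs of the parallel forms ±dx; this is the source of the total geodesicity of Σ_ϕ outside a compact set that is invoked in the vertical finiteness discussion below.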
The following deformation of the Sasaki almost complex structure, J_con=[ 0 +ρ(r)^-1g^-1; -ρ(r) g 0 ], is called the (ρ-)conical deformation of J_g. We write g_con for the Riemannian metric induced from ω and J_con. Here the matrix is taken with respect to the decomposition (<ref>). Fixing a smooth ρ once and for all, we obtain our background almost complex structure J_con and our reference metric g_con on T^∗M. The following proposition is mentioned in <cit.>. We prove it for the sake of completeness. Let λ_re be the canonical Liouville form on T^∗M. The Sasaki almost complex structure is of rescaled contact type. The deformed almost complex structure is invariant under the Liouville flow and satisfies λ_re∘ J_con=dr for r≫ 1. Hence, it is of contact type at infinity. Let x be a point on M. Take the geodesic normal coordinates q=(q_1,...,q_n) and the corresponding covector coordinates p=(p^1,...,p^n) centred at x=(0,...,0). Since the statement of Proposition <ref> is local, we will show that it holds at any (x,p). We first observe that in the coordinate system (q_1,...,q_n,p^1,...,p^n), the Liouville field Z defined by ι_Z ω=λ_re can be written as Z=∑_i=1^n p^i∂/∂ p^i, where n is the dimension of M. Therefore, we see that the Liouville flow is given by ϕ_t:(x,p)↦ (x,e^tp). We now check that (ϕ_t)^∗J_con=J_con. Computing (ϕ_t)^∗ J_con(x,p), we get (ϕ_t)^∗J_con(x,p) =[ I 0; 0 e^-tI ] J_con(x,e^tp) [ I 0; 0 e^tI ] =[ I 0; 0 e^-t I ][ 0 e^-tr^-1; -e^tr 0 ][ I 0; 0 e^tI ]= [ 0 r^-1; -r 0 ]=J_con(x,p). This finishes the proof of invariance under the Liouville flow. Furthermore, from J_conZ= ∑_i=1^n (p^i/ρ(|p|))∂/∂ q_i, which is r/ρ(r) times the Reeb vector field (unit geodesic flow) of λ_re, we see that the deformed Sasaki almost complex structure swaps the Reeb vector field and the Liouville vector field. We now check that ker λ_re|_S^∗M is J_con-orthogonal to the distribution ⟨ R,Z⟩ generated by R and Z. We first check that ker λ_re|_S^∗M is g^S-orthogonal to the distribution ⟨ R,Z⟩ generated by R and Z. Indeed, given X=∑ a^i∂/∂ q_i+∑ b^i ∂/∂ p^i in ⟨ R,Z⟩^⊥_g^S, we get: 0=g^S(x,p)(Z,X) =∑ p^ib_i⇒ X∈ T_(x,p)(S^∗M), 0=g^S(x,p)(R,X) =(1/r)∑ p^ia_i=(1/r)λ_re(X)⇒ X∈ker λ_re(x,p). Since the dimensions match up, we see that ⟨ R,Z⟩^⊥_g^S=ker λ_re|_S^∗M. The vector Z is totally vertical and R is totally horizontal, and since we are just rescaling the norm in each horizontal and vertical tangent space, a vector is orthogonal to Z (or R) in g^S if and only if it is orthogonal to Z (or R) in g_con. Hence ⟨ R,Z⟩^⊥_g^S=⟨ R,Z⟩^⊥_g_con, and we have a J_con-invariant decomposition T(T^∗M)=⟨ R,Z⟩⊕ker λ_re|_S^∗M. Since J_con(Z)=(r/ρ(r))R, we get that λ_re∘ J_con vanishes on ⟨ R,Z⟩^⊥_g_con and that λ_re∘ J_con|_⟨ R,Z⟩=(r/ρ(r))· dr, where the last equality can be checked directly. This finishes the proof. We introduce the class of horizontally finite Hamiltonians and Lagrangians. Let (M,g) be a Riemannian manifold which is flat at infinity. Equip the cotangent bundle T^∗M with its background almost complex structure J_con. * Let L be a Lagrangian submanifold in T^∗M which is cylindrical at infinity. We say that L is horizontally finite if π(L)⊂ K for some compact subset K⊂ M. * Let H be a Hamiltonian function on T^∗M. We say that it is cylindrical if ZH=H at infinity, or equivalently, if H=hr for r≥ R, R≫ 1, where h:S^∗M→ℝ is a contact Hamiltonian. We say that H is horizontally finite if there exists a compact subset K⊂ M such that the support of H lies inside T^∗K.
We restrict to the following class of almost complex structures on T^∗M. Let J be an ω-compatible almost complex structure. We say that J is an admissible almost complex structure if J is cylindrical at infinity and if there exists a compact subset K⊂ M such that J=J_con outside of T^∗K. We say that K is the horizontal support of J. Let 𝒥(T^∗M) denote the space of ω-compatible admissible almost complex structures. Let S be a Riemann surface with boundary. A family of admissible almost complex structures parametrized by S is a smooth map J:S→𝒥(T^∗M). We will be concerned with families of almost complex structures that are uniform in the following sense. Let J:S→𝒥(T^∗M) be a family of admissible almost complex structures. Then J is uniformly cylindrical if there exists a subset of S× T^∗M, proper over S, outside of which the almost complex structures J_s, s∈ S, are invariant under the Liouville flow. A family of admissible almost complex structures is called uniformly admissible if there exists a uniform horizontal support and if the family is uniformly cylindrical at infinity. We now introduce the notion of vertically finite Lagrangians. A properly embedded Lagrangian submanifold L in T^∗M which is a closed subspace of T^∗M is vertically finite if there exist an R≫ 1, an ϵ_L>0 and a compact subset K_L⊂ M such that: * L is contained in D_R^∗M, * the complement M-K_L is an open submanifold of M and, outside of T^∗K_L, the projection π:L→ M is a proper covering map, * the space π^-1(K_L)∩ L is a manifold with boundary and consists of finitely many connected components, * the submanifold L∩ T^∗(M-K_L) is totally g_con-geodesic and contained in the subset D_1^∗M, * for all x∈ M-K_L and x'∈π^-1(x)∩ L, the intersection B^g_con_ϵ_L(x')∩ L|_(M-K_L) is connected. We say that a Lagrangian is finite at infinity if it is either horizontally finite or vertically finite. We will show in Corollary <ref> that spectral curves associated to complete GMN quadratic differentials are vertically finite. Note that on D_1^∗M, g_con coincides with g^S. This is why we required ρ(r)=1 in a neighbourhood of S^∗M.
§.§.§ Geometric boundedness
We review the notion of geometric boundedness and tameness for almost complex manifolds (V,J) and totally real submanifolds of V, following <cit.>. This will be necessary for controlling the C^0 images of pseudo-holomorphic curves using monotonicity techniques. Recall that an almost complex manifold (V,ω,J) equipped with a symplectic form ω such that J is ω-compatible is called almost Kähler. The following definition of geometric boundedness is due to Ganatra-Pardon-Shende. <cit.> Let (V,ω,J) be a 2n-dimensional almost Kähler manifold. We say that (V,ω,J) is geometrically bounded if there is an open cover {V_α} of V and charts ϕ_α: B_1(0)⊂ℝ^2n→ V_α such that: * the collection {ϕ_α(B_1/2(0))} also covers V, * with respect to the standard metric on B_1(0), sup_α‖ϕ_α^∗J‖_C^r<∞ and sup_α‖ϕ_α^∗ω‖_C^r<∞, * there exists some r_0>0 such that ϕ_α^∗ω(v,(ϕ_α^∗J)v)>r_0 g_std(v,v). Furthermore, we say that an ω-Lagrangian submanifold L of V is geometrically bounded if the charts ϕ_α can be chosen in such a way that ϕ_α^-1(L) is either empty or a linear subspace of B_1(0) for all α. Let (S,ω_S,j_S) be an almost Kähler manifold. Suppose we have a family (V,ω_s,J_s) of almost Kähler structures over S. Then we say that (V,ω_s,J_s) is uniformly geometrically bounded if the almost Kähler manifold (V× S,ω_s⊕ω_S,J_s⊕ j_S) is geometrically bounded.
From geometric boundedness, one can obtain the tameness condition, which is originally due to Sikorav <cit.>. <cit.> Let (V,g) be a Riemannian manifold. We say that g is (δ,c)-isoperimetric at p if given any closed curve γ:S^1→ B_δ(p), there is a disc D in V such that ∂ D=γ and Area(D)≤ cℓ(γ)^2. Here ℓ(γ) is the length of γ. Let (V,J,g) be an almost complex manifold equipped with a Riemannian metric g. Then we say that (V,J,g) is tame if there exist constants r_V,C_0,C_1,C_2>0 such that the following holds. * The metric is complete, r_g=inf_x∈ V inj_x(g)>0 and r_V<r_g. * (V,g) is uniformly (r_V,C_1)-isoperimetric. * Over each ball B(p,r_V), there exists a local symplectic form ω_p such that ‖ω_p‖_g≤ C_0. Furthermore, ‖X‖_g^2≤ C_2 ω_p(X,JX). Suppose (S,ω_S,j_S,g_S) is an almost Kähler manifold. A family of quadruples (V,J_s,ω_s,g_s) over S is said to be uniformly tame if (S× V,ω_S⊕ω_s,j_S⊕ J_s,g_S⊕ g_s) is tame. Let (V,ω,J) be an almost Kähler manifold. Let g_J=ω(-,J-) be the induced metric on V. Then (V,ω,J) is said to be tame if (V,J,g_J) is tame with respect to the symplectic forms ω_p=ω|_B(p,r_V). A family of almost Kähler structures (V,ω_s,J_s) parametrized over S is said to be uniformly tame if (S× V, ω_S⊕ω_s,j_S⊕ J_s) is tame. Recall that a submanifold W of V is called totally real if TW∩ JTW=0. <cit.> Let W be a properly embedded totally real submanifold of V. A point p∈ W is (δ,c)-isoperimetric with respect to g if g is (δ,c)-isoperimetric at p, and if for any chord γ:[0,1]→ B_δ(p) with endpoints on W, there is a half disc D with ∂ D=γ∪γ̃, with γ̃⊂ W, such that Area(D)≤ cℓ(γ)^2. (<cit.>) Let (V,J,g) be as in Definition <ref>. Let W⊂ V be a properly embedded totally real submanifold of V. Then W is said to be tame if there exist r_W>0, C_W>0 such that the following holds. * For x,y∈ W with d(x,y)_V<r_W, we have d(x,y)_W≤ C_W d(x,y)_V. * For all p∈ W, B(r_W,p)∩ W is contractible and there exists a symplectic form ω_p on B(r_W,p) satisfying the conditions in Definition <ref>, such that W∩ B(r_W,p) is ω_p-Lagrangian. Given a uniformly tame family (V,ω_s,J_s) over an almost Kähler manifold S, we say that W is uniformly tame if ∂ S× W is tame in (S× V,ω_s⊕ω_S, J_s⊕ j_S,g_s⊕ g_S). In particular, any Lagrangian submanifold of an almost Kähler manifold is totally real. A properly embedded Lagrangian submanifold W⊂ V which is a closed subspace of V, in a tame almost Kähler manifold (V,J,ω,g), is said to be tame if there exist r_W>0, C_W>0 such that: * for x,y∈ W with d(x,y)_V<r_W, we have d(x,y)_W≤ C_W d(x,y)_V; * each B(r_W,p)∩ W is contractible. Given a uniformly tame family (V,ω_s,J_s) parametrized over S, we say that W is uniformly tame if the totally real manifold ∂ S× W is tame in (S× V,ω_s⊕ω_S, J_s⊕ j_S,g_s⊕ g_S). See Remark (<ref>) for the relationship between tameness and isoperimetricity. The following well-known proposition relates tameness with geometric boundedness. Suppose (V,ω,J) is geometrically bounded; then it is tame. Furthermore, if W is a geometrically bounded Lagrangian submanifold of V, then W is also tame. Groman's estimate <cit.> gives control over the isoperimetricity constants in terms of the sectional curvature and the injectivity radius. Jean-Philippe Chassé's estimate <cit.> shows tameness for geometrically bounded Lagrangian submanifolds. Controlling the injectivity radius requires control over the sectional curvature and the volume comparison between the Euclidean volume and the volume induced from g_J. This requires a theorem from <cit.>.
In particular, a uniform lower bound on the g-volume of the unit ball and an upper bound on the sectional curvature give a uniform lower bound on the injectivity radius. Controlling the injectivity radius and the cut locus distance (the supremum of the radius of an embedded tubular neighbourhood) for Lagrangians was done in <cit.> and <cit.> respectively. The following proposition verifies geometric boundedness of the almost complex manifolds (T^∗M,ω,J) for J an admissible almost complex structure. This is a modification of <cit.> and we follow their proof closely. Let J be an admissible almost complex structure on T^∗M. Let g_J be the metric induced from J and ω. Then the almost Kähler manifold (T^∗M,J,ω,g_J) is geometrically bounded. Furthermore, Lagrangians which are finite at infinity are also geometrically bounded. Since admissible almost complex structures are cylindrical at infinity, we may assume without loss of generality that J is cylindrical over the positive cone over some fixed sphere bundle of g_J-radius R>0, where R depends only on J; in fact this R depends only on the auxiliary function ρ. Let p be a point near vertical infinity. Take the reverse Liouville flow to bring it down to a point q on the sphere bundle. Since J is invariant under the Liouville flow, the geometry of (T^∗M,J,ω) near p is the same as the geometry of (T^∗M,J,rω) near q for some real number r≥ 1. Take geodesic normal coordinates (x_1,...,x_m) near q with respect to g=ω(-,J-). Changing ω to rω rescales the metric to rg, but we can zoom in and substitute x_i=√(r)^-1x_i'. Taylor expansion in the x-coordinates gives g=g_ij dx^i dx^j=(g_ij(0)+O(x^2))dx^i dx^j. Here the O(x^2) term depends only on the curvature, the inverse metric g^ij at q, and its covariant derivatives. In the rescaled coordinates, we get rg = r(g_ij(0)+O(x^2))(√(r)^-1dx^i')(√(r)^-1dx^j') = (g_i'j'(0)+r^-1O(x'^2))dx^i'dx^j'. This implies that as r→∞, the local geometry at q uniformly converges to the linear Kähler geometry on T_q(T^∗M) induced by the triple (ω,J,g(0)). Hence it suffices to bound the geometry of the sphere bundle. Let R_0 be as in Definition <ref>. Outside some compact subset K of M, we can cover the complement with countably many balls B_r_i(x_i) such that 0<R_1'<r_i<R_0 for some R_1'>0. Since the curvature vanishes, the exponential map exp_x_i:B_r_i(0)→ B_r_i(x_i) is a local isometry. Taking the pullback via the exponential map and using the covariance of J_con, we see that the unit sphere bundle is trivial over such balls and we are simply bounding the geometry of S^n-1× B_r_i(0) equipped with the standard metric scaled by R^1/2 (the R^1/2 factor only appears because we have taken the sphere bundle near infinity, where the almost complex structure becomes conical). This is automatic. Now suppose L is a horizontally finite Lagrangian. Then the same argument applies, since L is conical at infinity and its horizontal support is contained in a compact subset of M. Suppose now L is a vertically finite Lagrangian. Let K_L and ϵ_L be as in Definition <ref>. By definition, there exists some compact subset K_1⊂ M such that J=J_con on T^∗(B_R_0(M-K_1)) and K_L∩ B_R_0(M-K_1)=∅. For x∈ M-K_1, the restriction of the exponential map exp_x:B_R_0(0)→ B_R_0(x) is an isometry, since the sectional curvature vanishes identically on the image by the flatness at infinity condition. Consider the induced map (exp_x,d(exp^-1_x)^∗):T^∗B_R_0(0)→ T^∗B_R_0(x).
Then J|_D_3/2^∗B_R_0(x)=J_con|_D_3/2^∗B_R_0(x)=J_g|_D_3/2^∗B_R_0(x) by definition, so that by the covariance of J_g, (exp_x,d(exp^-1_x)^∗)^∗J|_D_3/2^∗B_R_0(x)=(exp_x,d(exp^-1_x)^∗)^∗J_g|_D_3/2^∗B_R_0(x)=J_g_std|_D_3/2^∗B_R_0(0). Here g_std is the standard metric on ℝ^n. Of course, the metric induced from J_g_std is the standard metric on ℝ^2n. Now any totally geodesic submanifold of ℝ^2n equipped with the standard flat metric is a linear subplane of ℝ^2n. Furthermore, since L→ M is a proper covering over B_R_0(x), and any covering of a contractible open set is trivial, (exp_x,d(exp^-1_x)^∗)^-1(L) consists of finitely many disjoint Lagrangian subplanes of T^∗B_R_0(0)⊂ℝ^2n. By the final condition in Definition <ref>, setting ϵ'_L=min{ϵ_L,1/4,R_0/2}, for any x'∈ (exp_x,d(exp^-1_x)^∗)^-1(L), B_ϵ'_L(x')∩ L is connected and consists of a single Lagrangian plane. Furthermore, (exp_x,d(exp^-1_x)^∗)^∗ω=ω_std. This finishes the proof of geometric boundedness of L. Note that we have derived tameness of L directly in the proof as well. We also need the following “family" version of the geometric boundedness statement. This is a modification of <cit.>. Let J:A_1→𝒥(T^∗M) be a uniformly admissible family of almost complex structures over A_1. Then (A_1× T^∗M, j_A_1⊕ J,ω_A_1⊕ω_T^∗M) is geometrically bounded. Furthermore, if a Lagrangian submanifold L⊂ T^∗M is finite at infinity, then ∂A_1× L is geometrically bounded. Suppose L is vertically finite. Since the family is uniformly admissible, there exists a compact subset K⊂ M such that over T^∗(M-K), J agrees with the background almost complex structure J_con^g. Furthermore, L is totally geodesic outside of T^∗K' for some compact subset K'⊂ M. Then the manifold ∂ A_1× L|_T^∗((K∪ K')^c) is geometrically bounded, and the statement for the compact part of L also follows. For the case where L is horizontally finite, repeat the argument in the proof of Proposition <ref>. Replacing a family of almost complex structures on A_1 with an almost complex structure on A_1× T^∗M in this manner is called the Gromov trick. We now focus our attention back on C̃. We first show flatness at infinity. Let ϕ be a complete GMN quadratic differential over C. Let g be a Riemannian metric on C̃ that agrees with the singular metric g^ϕ outside a compact subset of C̃. Then (C̃,g) is flat at infinity. By definition, the metric g is equal to g^ϕ in some neighbourhood of infinity. Consider the union U of neighbourhoods of the poles, contained in this neighbourhood where g=g^ϕ, whose points are of distance >1 away from the zeros of ϕ. If p∈ U, then the flat coordinate W=∫√(ϕ) can be extended over a disc of radius ≥ 1 around p. This shows that the minimal injectivity radius is positive. Hence g is flat at infinity. Let ϕ be a complete GMN quadratic differential. Let (C̃,g) be as above. Then the pair T^∗C̃ and Σ_ϕ equipped with an admissible almost complex structure is geometrically bounded. Furthermore, the spectral curve Σ_ϕ is vertically finite. Note that on D_1^∗M, J_con=J_g. Since Σ_ϕ lies in D_1^∗M, it suffices to show that outside T^∗K, for some compact subset K of C̃ containing the branch points, Σ_ϕ is totally geodesic with respect to the metric induced by the flat singular metric g^ϕ. We now show that the spectral curve is vertically finite. Let p^x and p^y denote the dual coordinates in the W-coordinate system. Outside of a compact set in C̃, the metric on the base equals dW^2 and the spectral curve reads {p^x=± 1, p^y=0} in the W=∫√(ϕ) coordinate system.
Hence we may take the vertical sheet gap to be ϵ_Σ_ϕ=1/2.

§.§.§ Monotonicity techniques We now introduce monotonicity techniques and apply them to find a priori restrictions on the diameter of the Floer trajectories. We start with the statement of the monotonicity lemma from <cit.>,<cit.>. Suppose J is such that g_J is (δ,C)-isoperimetric at p. Then any J-holomorphic curve u passing through p and with boundary in V-B_δ(p) satisfies Area(u;u^-1(B_δ(p)))=∫_u^-1(B_δ(p))1/2‖du‖^2≥δ^2/2C. If p∈ W and J,W are (δ,C)-isoperimetric, the same holds if ∂ u∩ B_δ(p)⊂ W. For tame manifolds, we can set δ=r_W and C=(1/4)(C_1+1+C_W). From the monotonicity lemma, we derive the following C^0 boundary estimate on the images of J-holomorphic curves. This is <cit.>. Let (V,J,ω,g) be a tame manifold. Let W be a tame Lagrangian submanifold of V. Let S be a connected Riemann surface with boundary and let K be a compact subset of V. Then there exists a constant C_5(W,K,E)>0 with the following property. Let u:S→ V be a J-holomorphic map with Area(u)<E such that u(S)∩ K≠∅ and u(∂ S)⊂ K∪ W; then u(S)⊂ B_C_5(W,K,E)(K). Here C_5(W,K,E) can be chosen to depend linearly on E and on r_W^-1. We follow the proof of <cit.>. Set r_0=min(r_V,r_W). First, note that the compact subsets B(K,2nr_0) of V for n∈ℕ give an exhaustion of the manifold V. We will be done if we can show that there exists an N=N(W,E,K)∈ℕ such that any u:S→ (V,K∪ W) is contained in B(K,2Nr_0). Intuitively, we regard the subsets B(K,2nr_0) as giving the nth “energy levels", and their boundaries ∂ B(K,2nr_0) as giving “energy shells". The idea of the proof is to show that every time u enters and leaves an energy shell, it loses a fixed finite amount of energy which does not depend on u. Let u:S→ (V,K∪ W) be a J-holomorphic curve; then u(S)⊂ B(K,2(N+1)r_0) for some N>0. Choose the smallest such N. Then we can find points x_1,...,x_N such that u(x_i)∈∂ B(K,2ir_0). We claim that the subsets B_r_0(u(x_i))∩ u(∂ S) lie in W. This is simply because of the condition u(∂ S)⊂ K∪ W and the fact that the balls B_r_0(u(x_i)) are disjoint from K. Applying the monotonicity lemma on each B_r_0(u(x_i)), we see that there exist constants (δ_i,C_i) such that for M=1,...,N, ∫_u^-1(B_r_0(u(x_i)))‖du‖^2>∑_i=1^M δ_i^2/C_i. However, by tameness, we may choose the δ_i so that δ_i≥ r_W and 1/C_i>C_4 for some C_4. Therefore, we see that ∑∫_u^-1(B_r_0(u(x_i)))‖du‖^2>MC_4r_W^2, giving the energy bound E≥ NC_4 r_W^2, so that N≤ E/(C_4 r_W^2). Hence N is bounded. This finishes the first part of the proof. We now show the scaling properties of the constant C_5(W,K,E). Suppose that we rescale so that r_W is sent to t r_W. Then we need to take N/t^2 many t r_W-balls, hence the image of u lies in the (N+2)r_W/t-neighbourhood of K. So C_5(W',K,E)=(1/t)C_5(W,K,E). Alternatively, scaling E to tE scales N by t. In this case, we see that u(S) is contained in the t(N+2)r_W-neighbourhood of K. So we have C_5(W,K,tE)=tC_5(W,K,E). This finishes the proof. On the other hand, we will also need the interior estimate <cit.> by Groman. For the proof, see <cit.>. Let (V,J,ω,g) be a tame almost Kähler manifold. Let L be a tame Lagrangian submanifold of V. Let E>0 and let K be a compact subset of V. Then there exists an R=R(V,L,E,K)>0 such that the following holds. * For any J-holomorphic map u:A_1→ V satisfying Area(u;A_1)≤ E and u(A_1/2)∩ K≠∅, we have u(A_1/2)⊂ B_R(K). * For any J-holomorphic map u:(E_1,∂ E_1)→ (V,L) satisfying Area(u;E_1)≤ E and u(E_1/2)∩ K≠∅, we have u(E_1/2)⊂ B_R(K).
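The model case behind both estimates (classical, recalled here only for orientation) is Lelong's monotonicity inequality: a proper J_std-holomorphic curve u in ℂ^n passing through the centre p of a ball B_δ(p), with ∂ u∩ B_δ(p)=∅, satisfies

Area(u;u^-1(B_δ(p))) ≥ πδ^2.

In the notation of the monotonicity lemma above, this corresponds to the isoperimetric constant C=1/(2π), since then δ^2/2C=πδ^2; in this model the energy-shell argument of the proof above caps the number of shells a curve of energy E can cross by roughly E/(πr_0^2).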
We also have the following “family" version of the interior estimate: Let (J_s,ω,g_s), s∈A_1, be a family of uniformly tame compatible triples on V. Let E>0 and let K be a compact subset of V. Then there exists a compact subset R(K,E,J_s,ω,g_s) of V such that the following holds. * For any map u:A_1→ V satisfying (du)^0,1_J_s=0, Area(u;A_1)≤ E and u(A_1/2)∩ K≠∅, we have u(A_1/2)⊂ R(K,E,J_s,ω,g_s). * Furthermore, suppose L is a Lagrangian submanifold of V which is uniformly tame with respect to (J_s,ω,g_s). Then there exists a compact subset R'(K,E,J_s,ω,g_s,L) of V such that if u:(E_1,∂ E_1)→ (V,L) satisfies (du)^0,1_J_s=0, Area(u;E_1)≤ E, and u(E_1/2)∩ K≠∅, then u(E_1/2)⊂ R'(K,E,J_s,ω,g_s,L). Reduce to Proposition <ref> by taking (A_1× V, j_A_1⊕ J_s,ω_A_1⊕ω).

§.§ Floer operations We now utilize the estimates in Section <ref>. In Section <ref>, we discuss the compactification and transversality of moduli spaces of stable pseudo-holomorphic polygons. The key idea is to use geometric boundedness and convexity to bound the images of pseudo-holomorphic polygons. We use this to define the Floer chain complex. In Section <ref>, we define the notion of passive continuation strips. In Section <ref>, we derive various formulas for the geometric energy of the continuation strips. In Section <ref>, we show the C^0 confinement of the passive continuation strips. In Section <ref>, we construct continuation chain maps and discuss their properties. Finally, in Section <ref>, we discuss the path groupoid representation of the Floer cohomology local system given a wall-chamber decomposition on the base M. We only consider the case where M is two-dimensional. From now on, we assume that all the Lagrangians are exact with respect to λ_re.

§.§.§ Compactness and transversality Moduli spaces. We first start with the compactness and transversality properties of the Floer moduli spaces. We follow <cit.> closely. In this section, we assume that all the Lagrangians are exact with respect to the canonical Liouville form λ_re. Let L_1 and L_2 be a pair of transversely intersecting Lagrangians in T^∗M such that L_1 is finite at infinity and L_2 is horizontally finite (see Definition <ref>). Since L_i is λ_re-exact, there are smooth functions f_i:L_i→ℝ such that df_i=λ_re|_L_i. Such functions f_i are unique up to constants. We choose the primitives f_1 and f_2 once and for all for L_1 and L_2 respectively. We define the action of an intersection point x∈ L_1⋔ L_2 by: a(x):=f_1(x)-f_2(x). Given such a pair (L_1,L_2), a choice of an s-invariant admissible family of almost complex structures J_L_1,L_2=J_L_1,L_2(τ) on the infinite strip 𝒵 is called a Floer datum of the pair L_1, L_2. For our purposes, it suffices to consider only the case where J_L_1,L_2 is given by a compact deformation of J_con, and we will assume so from now on. For x,y∈ L_1⋔ L_2, let ℛ(L_1,L_2,J_L_1,L_2)_x↦ y be the moduli space of unparametrized J_L_1,L_2(τ)-holomorphic strips u between L_1 and L_2 with lim_s→ -∞ u(s,τ)=x and lim_s→ +∞ u(s,τ)=y. The space ℛ(L_1,L_2,J_L_1,L_2(τ)) is the union of the unparametrized moduli spaces ℛ(L_1,L_2,J_L_1,L_2(τ))_x↦ y for x,y∈ L_1⋔ L_2. The compactified moduli space ℛ̄(L_1,L_2,J_L_1,L_2(τ)) is the union of the spaces ℛ̄(L_1,L_2,J_L_1,L_2(τ))_x↦ y consisting of broken J_L_1,L_2-holomorphic strips. More generally, let L_1 be a Lagrangian which is finite at infinity and let L_2,...,L_n be any finite collection of mutually transverse, horizontally finite Lagrangians, each of which is transverse to L_1.
For k≥ 2, let ℛ_k,1 be the compactified Deligne-Mumford moduli space [For details on Deligne-Mumford moduli spaces, see <cit.>.] of stable discs with k+1 marked points x_1,...,x_k,y (labelled in the anticlockwise direction). Let 𝒮_k,1 be the universal bundle over ℛ_k,1. Given a fibre S of 𝒮_k,1→ℛ_k,1, a neighbourhood of the boundary marked points x_1,...,x_k,y and of the nodes of the fibre inside the total space 𝒮_k,1 gives the thin part of the fibre S. The complement of the thin part gives the thick part of the fibre S. For k=1, we set ℛ_1,1 to be the stack pt/ℝ. Choose a Floer datum for each pair (L_i,L_j), i,j=1,...,n, i≠ j. For every sequence 1≤ i_0<....<i_k≤ n, choose “universal strip-like coordinates": End^+_i_0,....,i_k;j :[0,∞)× [0,1]×ℛ_k,1→𝒮_k,1, j=1,....,k, End^-_i_0,...,i_k :(-∞,0]× [0,1]×ℛ_k,1→𝒮_k,1, and a uniformly admissible family of almost complex structures J_i_0,...,i_k: 𝒮_k,1→𝒥(T^∗M) such that the strip-like coordinates and the almost complex structures are compatible with gluing. By this we mean the following. * For each j=1,...,l there is a boundary collar ℛ_k,1×ℛ_l,1× (0,∞)→ℛ_k+l-1,1 given by gluing the two ends at x_j with respect to the gluing parameter in (0,∞). The “glued" strip-like coordinates on ℛ_k+l-1,1 must agree with the universal strip-like coordinates specified on ℛ_k+l-1,1. * The almost complex structures J must be compatible with gluing via End^±. By this we mean the following: (End^±_j)^∗J_i_0,...,i_k must be s-invariant; J_i_0,...,i_k+l-1 must be given by gluing with respect to End over the image of (<ref>); and J_i_0,...,i_k+l-1 must coincide with the product of J_i_j-1,...,i_k+j-1 and J_i_0,...,i_j-1,i_k+j-1,...,i_k+l-1 over the image of ℛ_k,1×ℛ_l,1 under (<ref>). For details, see <cit.>. We simply remark that given a negative (or positive) strip-like end with Lagrangian labels L_i_m and L_i_m+1, End_m^-,∗J_i_0,...,i_k (or End_m^+,∗J_i_0,...,i_k) must be equal to the Floer datum J_L_i_m,L_i_m+1(τ) of the pair L_i_m and L_i_m+1. Note that here we have not said anything about the regularity of the moduli spaces. We consider the moduli spaces ℛ_k,1(y;x_1,....,x_k)=ℛ_k,1(y;x_1,...,x_k;L_i_0,...,L_i_k) of stable J_i_0,...,i_k-holomorphic maps u:S→ T^∗M. Here the Lagrangian boundary conditions are given as in Figure <ref>; the marked point x_j is mapped to an intersection point of L_i_j-1 and L_i_j, and the marked point y is mapped to an intersection point of L_i_0 and L_i_k. Compactness. We want to show that ℛ_k,1(y;x_1,...,x_k) is compact. First, we need the following lemma, which is originally due to Seidel-Abouzaid <cit.>; we have borrowed its current form and proof from Ganatra-Pardon-Shende <cit.>. For related ideas, see <cit.>. Let (S,j) be a Riemann surface with boundary. Let J be an ω-compatible almost complex structure of general contact type (Definition <ref>) on {r>a} for some a>0. Let u:S→ T^∗M be a (j,J)-holomorphic curve such that u^∗λ_re|_∂ S≤ 0 on u^-1({r>a}). Then u is locally constant over u^-1({r>a}). By u^∗λ_re|_∂ S≤ 0, we mean that the evaluation of u^∗λ_re on a positively oriented vector field along ∂ S is non-positive. Here the orientation is given with respect to j. We follow the proof in <cit.>. For any smooth function f:ℝ→ℝ_≥ 0 satisfying f'≥ 0 and f(r)=0 for r≤ a, we have 0≤∫_S f(r(u))u^∗ω=∫_∂ Sf(r(u))· u^∗λ_re- ∫_S f'(r(u))· u^∗(dr∧λ_re). To see this, note that d(f(r(u))· u^∗λ_re)=f'(r(u))· u^∗(dr)∧ u^∗λ_re+f(r(u))· u^∗ω.
The first term on the right hand side of (<ref>) is ≤ 0 because of the condition u^∗λ_re≤ 0. Since (dr∧λ_re)(X,JX)≥ 0 for any vector field X, the second term is also ≤ 0. To see this, note that since h(r)dr=λ_re∘ J, we have (dr∧λ_re)(X,JX)=dr(X)λ_re(JX)-λ_re(X)dr(JX)=h(r)(dr(X)^2+dr(JX)^2)≥ 0. So 0≤∫_S f(r(u))u^∗ω=∫_∂ Sf(r(u))· u^∗λ_re- ∫_S f'(r(u))· u^∗(dr∧λ_re)≤ 0, since u^∗λ_re≤ 0. This finishes the proof. In particular, the condition u^∗λ_re≤ 0 in Lemma <ref> holds when the connected components of ∂ S belong to Lagrangians that are cylindrical over {r>a}, since on the cylindrical part u^∗λ_re=0. We now show that the moduli space is compact. The moduli space ℛ_k,1(y;x_1,...,x_k) is compact. We modify the proof in <cit.>. For i≠ n, let K_L_i be compact subsets of M such that π(L_i)⊂ K_L_i. In the case L_n is vertically finite, let K_L_n be as in Definition <ref>. In the case L_n is horizontally finite, let K_L_n be a compact subset of M such that π(L_n)⊂ K_L_n. Let R>0 be such that: (i) the Lagrangians L_i are either cylindrical or empty outside of D_R^∗M, and (ii) the almost complex structures J(s,τ) are cylindrical outside D_R^∗M. We furthermore demand that the Legendrian submanifolds Λ_i=L_i∩ S_R^∗M are either compact or empty, and that they are disjoint. Let K_0 be a compact codimension 0 submanifold with boundary of M such that on T^∗K_0^c, J(s,τ)=J_con. We assume that K_0 is large enough so that it contains all the compact subsets K_L_i of M. Here the radius of the codisc bundle is taken with respect to the metric g on the base M. We first estimate the energy. Since the Lagrangians L_i_0,...,L_i_k are all exact and the almost complex structures in the family are ω-compatible, there is an upper bound E>0 such that if u∈ℛ_k,1(y;x_1,...,x_k) then ‖du‖^2_L^2,J≤ E. Indeed, this follows from 1/2‖du‖^2_L^2,J=∫ u^∗ω=a(y)-∑ a(x_i), where we used ω-compatibility in the first equality, and Stokes' theorem together with dλ_re=ω and df_L_i=λ_re|_L_i in the second equality. Here a is the action of an intersection point defined in (<ref>). Since the geometric energy admits a uniform finite upper bound, Proposition <ref> will follow by Gromov compactness if we can find a fixed compact subset K of M and R_3>0 such that if u∈ℛ_k,1(y;x_1,...,x_k) then the image of u lies in D_R_3^∗K. To do this, we show that outside of some compact subset the Lagrangians are uniformly separated near infinity, and argue via monotonicity to control the images of the thick parts and the thin parts. We first show that the Lagrangians in question are uniformly separated at infinity, outside D_R^∗K_0; that is, we have a lower bound C>0 on the J(s,τ)-distance between the Lagrangians L_i outside of D_R^∗K_0. When L_n is vertically finite, such a lower bound between L_i and L_n for i≠ n is obvious. We now show that horizontally finite Lagrangians are also uniformly separated outside of D_R^∗K_0. This was stated in the proof of <cit.>, but we supply the full argument here. Let h denote the metric obtained from a cylindrical ω-compatible almost complex structure. Take the pullback of the metric h to the positive cone S^∗M× [R,∞) over S^∗M and consider a vector field Y⊕ 0 on S^∗M× [R,∞) tangent to S^∗M. Let Z be the Liouville vector field. Since L_Z h=h and L_Z Y=0 for any vector field Y tangent to S^∗M, the S^∗M-component of the metric grows with a factor of r, and the norm of ∂_r scales with a factor of r^-1. We are interested in how h(Y,Z) grows, for Y a vector field tangent to S^∗M.
Taking the Lie derivative, we check that: h(Y,Z)=(L_Z h)(Y,Z)=Z· (h(Y,Z))-h(L_Z Y,Z)-h(Y,L_Z Z)=Z· (h(Y,Z)). This can only happen if h(Y,∂_r) is r-invariant. Indeed, L_r∂_r(h_yr dy dr)=rh_yr,r dy dr+h_yr dy dr, where we have used Cartan's formula L_r∂_r(dr)=dr. Hence the metric is of the form h=rh|_S^∗M+r^-1dr^2+∑ h_yr dy dr. Taking r=s^2, we find that the metric is now of the form of the standard metric on the cone: h=s^2h|_S^∗M+4ds^2+2s∑ h_yr dy ds. Let P be the local orthogonal projection to TS^∗M. Then h(·,·)≥ h(P·,P·)=s^2h_S^∗M(P·,P·). So it follows that d_h(L_i,L_j)|_S^∗M× [R,∞)≥ R^2 d_h(Λ_i,Λ_j)>0. This shows that the horizontally finite Lagrangians are uniformly separated at infinity. Modifying the constant in the case where L_n is vertically finite, we get our uniform lower bound C. Having separated the Lagrangians at infinity, we deal with the thick part. Lemma <ref> tells us that the family of almost complex structures restricted to any disc A_l (or half-disc E_l) in the thick part of S is uniformly geometrically bounded, and the geometric boundedness constants depend only on l, the family J, and the Lagrangians L_i. Lemma <ref> then tells us that if the image of u restricted to such a unit disc (or unit half-disc) intersects, in A_l/2 (or E_l/2), a large enough compact subset A⊂ T^∗M that separates the Lagrangians near infinity, then the image of u restricted to A_l/2 (or E_l/2) is contained in some R(A,E,J_s,ω,g_s,l) (or R'(A,E,J_s,ω,g_s,L_s,l)). Since the thick parts are compact Riemann surfaces with boundaries and corners with uniform topology, they can be covered by a uniformly finite number of half-discs and discs of some uniform radius l_1>0, so that the shrunk discs of radius l_1/2 also cover the thick part of S. This number does not depend on u but only on the topology of the thick part of S. The boundary conditions corresponding to horizontally finite Lagrangians lie in a compact set, since L_i∩ D_R^∗M for i≠ n is compact. So by repeatedly applying Lemma <ref>, we can use monotonicity to show that over the thick parts the J-holomorphic curves are a priori contained in D_R_1^∗ K' for some compact subset K'⊂ M and R_1>R>0. So now it remains to show that we can compactly enlarge K' and R_1 so that the whole image of u is contained in the compact enlargement. We follow the strategy in the proof of <cit.>. On the thin parts, we have s-invariant almost complex structures J(τ). We take constants R_2>R_1>0 so that the image of u restricted to the thick part is contained in D_R_2^∗M and outside of D_R_2^∗M, J(τ) is cylindrical. Let K_thin be a compact subset of M such that outside of T^∗K_thin, J(τ)=J_con. Then we take a codimension 0 submanifold-with-boundary K_base of M containing K_thin, K' and K_0, such that the g-distance d_base between ∂ K_base and the K_L_i is positive. Suppose now that the interval [a,b]× [0,1] in the thin part is mapped outside of the disc bundle D_R_2^∗K_base. Then we have E≥∫_[a,b](∫_0^1 ‖∂_τ u‖ dτ)^2 ds ≥ C^2 (b-a), since each arc τ↦ u(s,τ) joins the two uniformly separated Lagrangian boundary conditions. Therefore, taking L=E/C^2, we see that if (b-a)>L then u cannot map [a,b]× [0,1] outside of D_R_2^∗K_base. So there exists some ϵ>0 such that [a,b]× [0,1] is covered by a uniformly finite number of half-discs and discs of radius ϵ. Hence, applying the interior estimate (Lemma <ref>) again, we can enlarge D_R_2^∗K_base to D_R_3^∗K so that the image of u is wholly contained in D_R_3^∗K. This finishes the proof. Transversality. We now proceed with the construction of the A_∞-structures. For details, see <cit.>.
In particular, we will postpone the detailed discussion of orientation lines and spin structures to Section <ref>, where we will carry out explicit computations. We now assume that the Lagrangians are graded and spin. Given each pair L_1, L_2, choose an initial s-invariant uniformly admissible family of almost complex structures J^in_L_1,L_2(τ) [For our purpose, it suffices to set J^in_L_1,L_2(τ)=J_con]. By Proposition <ref>, we can find some H>0 and a compact subset K⊂ M such that: (i) J^in_L_1,L_2 is cylindrical outside D_H^∗M, (ii) J^in=J_con outside T^∗K and all u∈ℛ(L_1,L_2,J_L_1,L_2^in) are contained in the interior of D_H^∗K, (iii) D_H^∗K contains all the intersection points of L_1 and L_2, and (iv) if L_i is horizontally finite then K contains the horizontal support of L_i, and if L_i is vertically finite, then K contains the compact subset K_L_i in the sense of Definition <ref>. Consider the following space of ω-compatible almost complex structures: 𝒥(K,H):={J : J=J^in_L_1,L_2(τ) outside D_H^∗K}. This space 𝒥(K,H), equipped with the C^k topology for large enough k>0 (which is equivalent to the uniform topology induced from g_con), is a Banach manifold modelled on the space 𝒴 of C^k-infinitesimal deformations Y satisfying the conditions YJ+JY=0, ω(Y·,·)+ω(·,Y·)=0, supp(Y)⊂ D_H^∗K. Note that under J↦ Jexp(-JY), the class of horizontally finite almost complex structures stays invariant. Similarly, we can equip the space 𝒥(K,H) with the C^∞-topology, which makes it a Fréchet manifold. Furthermore, applying the proof of Proposition <ref>, we see that there exists a compact set P containing every u∈ℛ(L_1,L_2,J) for J∈𝒥(K,H). Indeed, we can run the argument of Proposition <ref> outside of D_H^∗K, where J=J_con, and there the uniform separation of the Lagrangians and the tameness constants coincide for all J∈𝒥(K,H). [Given a general symplectic manifold M, the space 𝒥(M) of (tame, or compatible) almost complex structures is given the weak C^∞-topology. When the base M is compact, the spaces 𝒥^k(M) and 𝒥^∞(M) of C^k and smooth compatible almost complex structures are Banach and Fréchet manifolds, respectively. However, when M is not compact, endowing such spaces with Banach/Fréchet structures becomes much more involved, unless one specifies an appropriate decay condition at infinity for maps in W^k,p_loc.] Since we are now in the situation covered in <cit.>, we can perturb the family J_L_1,L_2(τ) generically so that the moduli spaces of holomorphic strips ℛ(L_1,L_2,J_L_1,L_2)_x↦ y are transversely cut out for all x,y∈ L_1⋔ L_2. Indeed, note that given a J^in_L_1,L_2-holomorphic curve u, the set of injective points is dense, and the images of such points must necessarily lie in the interior of D_H^∗K. To ensure smoothness of the J-holomorphic strips via elliptic regularity, we need to find a Baire dense subset in the C^∞-topology whose associated moduli spaces of strips are transversely cut out. This is done either using the Floer C^∞_ϵ-space or Taubes' trick. For details, see <cit.>. Note that since there are only finitely many intersections, and since finite intersections of Baire dense subsets are Baire dense, we can find a generic J(τ) such that all the moduli spaces ℛ(L_1,L_2,J_L_1,L_2)_x↦ y, x,y∈ L_1⋔ L_2, are transversely cut out. A uniformly admissible family J(τ) of almost complex structures such that the moduli space of holomorphic strips ℛ(L_0,L_1,J(τ)) is transversely cut out is called a regular Floer datum for the pair L_0,L_1. Choose a regular Floer datum for each pair once and for all.
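To make the action concrete, here is a quick illustration borrowed from the toy model of Section <ref>, with the conventions just introduced: take L_1=Σ_ϕ for ϕ=zdz^2 and L_2=F_z a cotangent fibre. Since λ_re vanishes on the fibre, we may take f_2=0, while on the spectral curve we may take f_1=Re W with W(p^z,z)=2(p^z)^3/3. The two intersection points z^±∈Σ_ϕ⋔ F_z then have actions

a(z^±)=Re W(z^±)=±Re(2/3 z^3/2),

so the geometric energy of any strip connecting them is |a(z^+)-a(z^-)|=|Re(4/3 z^3/2)|.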
We can regard each intersection point x∈ L_0⋔ L_1 as a constant holomorphic half-strip with boundary conditions given by a path L_s of Lagrangian subspaces of T_x(T^∗M) that begins at T_x L_0, ends at T_x L_1, and satisfies the grading constraints [If A is the induced Maslov grading on LGr(T_x(T^∗M)) and A_0,A_1 are the chosen grading functions on L_0 and L_1 respectively, the path L_s must satisfy A(L_0)=A_0(x) and A(L_1)=A_1(x).]. Then we define the orientation line o_L_0,L_1,x to be the determinant line of the linearized Cauchy-Riemann operator associated to x. Any other choice of path satisfying the grading constraints gives a canonically isomorphic real line. For details, see <cit.>. We define the Floer intersection complex CF(L_0,L_1)=⊕_p∈ L_0⋔ L_1 o_L_0,L_1,p. Since this is standard in the literature, we conclude that we can define a chain complex structure on CF(L_i,L_j) by counting regular J-holomorphic strips. As usual, we call the cohomology HF(L_i,L_j) of this chain complex the Floer cohomology. Carrying this over to the general moduli spaces of stable discs is an inductive procedure. Again, we begin with an initial admissible family J_i_0,...,i_k^in of almost complex structures that satisfies the consistency conditions in <cit.>. In particular, at the strip-like ends, the pullback of the family under the universal strip-like coordinates agrees with the chosen regular Floer datum of the pair (L_i_m,L_i_m+1). We can use Proposition <ref> again to find a compact subset K of M and H>0 such that J is cylindrical outside D_H^∗M, J=J_con outside T^∗K, and every u∈ℛ_k,1(y;x_1,...,x_k) is contained in D_H^∗K. Then, using 𝒥(K,H) again, we are back in the situation covered in <cit.>; we can perturb the family of almost complex structures J_i_0,...,i_k:𝒮_k,1→𝒥(T^∗M) generically so that all the moduli spaces ℛ_k,1(y;x_1,...,x_k) are transversely cut out and the consistency conditions are still satisfied. By the same trick, we see that we have a Baire dense subset of regular (families of) almost complex structures in the C^∞-Fréchet space of ω-compatible almost complex structures. Furthermore, the higher operations μ_k: CF(L_i_0,L_i_1)⊗...⊗ CF(L_i_k-1,L_i_k)→ CF(L_i_0, L_i_k)[2-k] can be defined by counting holomorphic discs in the zero-dimensional part of the moduli spaces ℛ_k,1(y;x_1,...,x_k). Again, we will describe in detail how orientation lines enter the story in Section <ref>, so we omit that discussion here. Studying the boundary stratification of the 1-dimensional part of the moduli spaces then gives the A_∞-relations.

§.§.§ Continuation strips We now pose the continuation strip moduli problem. Let L_s be an exact Lagrangian isotopy of horizontally finite Lagrangians. We say that L_s is uniformly horizontally supported if there exists a compact subset K of M such that π(L_s)⊂ K. Given such an exact Lagrangian isotopy, we can find some time-dependent horizontally finite Hamiltonian H_s with uniform horizontal support (Definition <ref>) such that L_s=ψ_s(L_0). Here ψ_s is the Hamiltonian flow associated to H_s. Let V be a vertically finite Lagrangian submanifold. We say that an exact Lagrangian isotopy V_s:V× [0,1]→ T^∗M of vertically finite Lagrangians is compactly supported if there exists a compact subset K of V such that V_s(v)=v for v outside of K. Note that given a uniformly horizontally supported isotopy ψ_s, the isotopy V_s=ψ_s^-1(V) is compactly supported. Let L_s be a uniformly horizontally supported isotopy of horizontally finite Lagrangians. Suppose V is transverse to L_0 and L_1.
As we explained in Section <ref>, we can compactly perturb the constant family J_con=J_con(τ) to find regular Floer data J_0(τ) and J_1(τ) for the pairs (V,L_0) and (V,L_1), respectively, such that the moduli spaces ℛ(V,L_0,J_0(τ)) and ℛ(V,L_1,J_1(τ)) are transversely cut out. Choose a uniformly horizontally supported Hamiltonian isotopy ψ_s generating L_s=ψ_s(L_0), and choose a smooth increasing elongation function l:ℝ→ [0,1] such that l(s)=0 for s<-N and l(s)=1 for s>N. Given the pair ((L_0,V,J_0),(L_1,V,J_1)), fix a uniformly admissible family J̃(s,τ) of almost complex structures on 𝒵 such that J̃(s,τ)=J_0(τ) for s≤ -N, J̃(s,τ)=J_1(τ) for s≥ N, and J̃ is given by a compact perturbation of the constant family J_con on [-N,N] [By this, we mean that J̃(s,τ)=J_con outside some compact subset of T^∗M.]. Let J(s,τ)=(ψ_l(s))^∗J̃. Note that J satisfies J(s,τ)=(Dψ_1^-1)_∗ J_1(τ)(Dψ_1)_∗=(ψ_1)^∗J_1 for s≥ N. We say that a map u:𝒵→ T^∗M is a J-holomorphic strip with a passive moving Lagrangian boundary condition if the following equations are satisfied: ∂̅_J u=0, u(s,0)⊂ V_l(s), u(s,1)⊂ L_0, lim_s→ -∞ u(s,τ)∈ L_0∩ V, lim_s→ +∞ u(s,τ)∈ L_0∩ψ_1^-1(V). We call the solutions passive continuation strips. We now introduce the classes of homotopies of families that we will use to show certain invariance properties of HF. Suppose we are given two uniformly admissible families of almost complex structures J̃^0 and J̃^1 on 𝒵 such that J̃^i(s,τ)=J_0 for s≤ -N and J̃^i(s,τ)=J_1 for s≥ N, for some N>0. Then we say that a path J̃^t of families of almost complex structures between J̃^0 and J̃^1 over 𝒵 is a uniformly admissible homotopy if (i) there exists some N'>0 such that J̃^t(s,τ)=J_0 for s≤ -N' and J̃^t(s,τ)=J_1 for s≥ N', and (ii) there exist a compact set K⊂ M and some R>0 such that outside T^∗K, J̃^t=J_con, and J̃^t(s,τ) is cylindrical outside D_R^∗M for all (s,τ)∈𝒵 and t∈[0,1]. The Hamiltonian counterpart is as follows: suppose we are given a family of time-dependent Hamiltonians H^t_s, and suppose there exist an R>0 and a compact subset K⊂ M such that H_s^t is cylindrical outside D_R^∗M and the horizontal support of H_s^t is contained in K for all s and t. Then we say that such a family is uniformly cylindrical and horizontally supported. We now state the result, whose proof we postpone to Section <ref>. Let V be a vertically finite Lagrangian and let L be a horizontally finite Lagrangian. For uniformly horizontally supported isotopies L_s such that L_s⋔ V for s=0,1, there exists a chain map, called the passive continuation map, c^passive =c_(L_0,J_0)→(L_1,J_1):CF(V,L_0,J_0)→ CF(V,L_1,J_1). The passive continuation map has the following properties. * A generic uniformly horizontally finite homotopy (L_s^t,J̃^t) relative to endpoints, generated by a uniformly cylindrical and horizontally supported family of Hamiltonians H_s^t, induces a chain homotopy H:CF^∗(V,L_0)→ CF^∗-1(V,L_1) between the passive continuation maps. * There is a chain homotopy between c_L_1→ L_2∘ c_L_0→ L_1 and the continuation map c_L_0→ L_2 associated to the concatenation of the isotopies. Hence the continuation maps are well-defined up to chain homotopy. * For the constant isotopy L_s=L, the induced continuation map is the identity. * For any uniformly horizontally finite isotopy, the passive continuation maps (<ref>) are quasi-isomorphisms. As usual, we define the continuation chain maps using holomorphic strips with moving Lagrangian boundary conditions.
The difficulty lies in making sure we pose the right moduli problem for the holomorphic strips so that they do not escape off to infinity; we will shortly show that the images of passive continuation strips are a priori confined (Proposition <ref>). Then an argument as at the end of Section <ref> tells us that for generic J, the moduli spaces of solutions of (<ref>) are transversely cut out. This shows the existence of a chain map ĉ:CF(V_0,L_0,J_0)→ CF(V_1,L_0,(Dψ_1^-1)_∗J_1 (Dψ_1)_∗). Making use of the trivial isomorphism CF(V_1,L_0,(Dψ_1^-1)_∗J_1 (Dψ_1)_∗)≃ CF(V,L_1,J_1) induced by the global Hamiltonian diffeomorphism ψ_1:T^∗M→ T^∗M, we get the passive continuation map c̃:CF(V,L_0,J_0)→ CF(V,L_1,J_1) defined by the following commutative diagram:

CF(V,L_0,J_0) --Id--> CF(V,L_0,J_0)
      |c̃                      |ĉ
      v                        v
CF(V,L_1,J_1) --Id--> CF((ψ_1)^-1V,L_0,(Dψ_1^-1)_∗J_1(Dψ_1)_∗)

§.§.§ Energy formula We now derive energy upper bounds for solutions of (<ref>). We begin with the following formula from Oh <cit.>. Let (X,dα) be an exact symplectic manifold and let L⊂ X be an exact Lagrangian. Let ψ_s be a Hamiltonian isotopy on X and let F be a primitive of α on L. Let L_s=ψ_s(L). Then F_s= F+∫_0^s (-H_t∘ i_t+α(X_H_t)∘ i_t)dt satisfies dF_s=i_s^∗α. Here i_s=ψ_s∘ i, where i:L→ X is the inclusion map. We also need the following formula for the integral on the moving part. Suppose we have (X,dα,L,ψ_s) as above. Suppose γ:[0,1]→ X is a curve such that γ(s)∈ L_s. Then we have ∫γ^∗α= F_1(γ(1))-F_0(γ(0))+∫_0^1 H_s(γ(s)) ds. (As a consistency check: for H_s≡ 0 the Lagrangians do not move, γ is a curve on L, and the formula reduces to Stokes' theorem for dF=i^∗α.) Let γ̃(s)=ψ_s^-1(γ(s)). Then γ̃ is a curve on L. Consider the homotopy on [0,1]× [0,1] defined by: v(s,t)=ψ_st(γ̃(s)). Then it follows that: ∫ v^∗ω=∫_0^1 ∫_0^1 s· d(H_st∘ψ_st)(∂_sγ̃(s)) ds dt. Taking the change of coordinates τ=st, we get ∫ v^∗ω =∫_0^1 ∫_τ^1 d(H_τ∘ψ_τ)(∂_sγ̃(s)) ds dτ =∫_0^1 ∫_τ^1 ∂[(H_τ∘ψ_τ)(γ̃(s))]/∂ s ds dτ =-∫_0^1 (H_τ∘ψ_τ)(γ̃(τ))dτ +∫_0^1 (H_τ∘ψ_τ)(γ̃(1)) dτ =-∫_0^1 H_τ(γ(τ))dτ+∫_0^1(H_τ∘ψ_τ)(γ̃(1)) dτ. On the other hand, Stokes' theorem gives us ∫ v^∗ω= ∫γ̃^∗α+∫_0^1α(X_H_t)(ψ_t(γ̃(1)))dt-∫γ^∗α. Equating the two expressions gives the desired formula. Combining the two lemmas, we arrive at the following expression for the energy of discs with moving boundary conditions. Suppose S is a disc with k+1 boundary marked points x_0,...,x_k. Identify each of the anticlockwise ordered boundary segments ∂ S_0,...,∂ S_k with [0,1]. Suppose we have moving Lagrangian labels L_i^s=ψ_s(L_i) as above, with L_i^0=L_i and L_i^1=L_i+1. Suppose the Lagrangians L_j, j=0,...,k+1, are mutually transverse. Let u:S→ T^∗M be a continuation disc with moving Lagrangian labels with respect to Hamiltonians H_s:S→ C^∞(T^∗ M,ℝ). Choose the primitives of the L_i^s as in Proposition <ref>. Then the geometric energy of a solution of (<ref>) satisfies: ∫_S 1/2‖du‖^2_J=∫_S u^∗ω=∑_i a^+(x_i)-∑_i a^-(x_i)+ ∫_∂ S H_s(u) ds. In particular, if the isotopies H_s are compactly supported on the L_i, then the geometric energy is bounded by a constant which depends only on the actions of the intersection points and on H_s. There exists some N≫ 1 such that l(s) is locally constant outside of [-N,N]. Fix such an N. Then we can regard L_l(s) as a family on [-N,N]. Define the primitives of λ_re for L_l(s) with respect to this family on [-N,N] using Lemma <ref>. The proof then follows from Lemmas <ref> and <ref>.

§.§.§ Confinement of continuation strips We now show the C^0 confinement of passive continuation strips and construct the continuation chain homomorphisms.
Firstly, given a moving Lagrangian boundary condition L_s, let ℒ:={(s,p):s∈ [0,1], p∈ L_s}. Then ℒ is a totally real submanifold of A_1× T^∗M. We have the following analogue of Lemma <ref>. Let J:A_1→𝒥(T^∗M) be a uniformly admissible family of almost complex structures over A_1; then (A_1× T^∗M, j_A_1⊕ J,ω_A_1⊕ω_T^∗M) is geometrically bounded. Furthermore, if L⊂ T^∗M is finite at infinity, then ∂A_1× L is geometrically bounded. If the submanifold W⊂ (A_1× T^∗M,j_A_1⊕ J,ω_A_1⊕ω_T^∗M) is totally real and coincides with some ∂A_1× L outside a compact subset, for L a Lagrangian finite at infinity, then W must be tame. The proof is as before: the Lagrangian submanifold ∂A_1× L is geometrically bounded by Lemma <ref>, so since W agrees with ∂A_1× L outside a compact subset, W must be geometrically bounded, and hence tame, as well. From now on, we do not distinguish between horizontally finite Hamiltonian isotopies with uniform horizontal support and exact Lagrangian isotopies with uniform horizontal support. From Lemma <ref>, we arrive at the following corollary: Suppose L_s is an exact Lagrangian isotopy of horizontally finite Lagrangians with uniform horizontal support. Given an R>0, let ℒ_‖p‖≤ R:={(s,p):s ∈ [0,1], p∈ L_s, ‖p‖≤ R }. Then ℒ_‖p‖≤ R is tame. Alternatively, suppose K_s is a compactly supported exact Lagrangian isotopy of vertically finite Lagrangians. Then the totally real submanifold 𝒦={(s,p):s∈ [0,1], p∈ K_s} is tame. The first case follows immediately since ℒ_‖p‖≤ R is compact. The second case satisfies the hypothesis of Lemma <ref>, so we are done. We have the following analogue of Proposition <ref> for J-holomorphic curves with moving boundary conditions. Let L_s, s∈ [0,1], be an exact Lagrangian isotopy of horizontally finite Lagrangians with uniform horizontal support and let ψ_s be the horizontally finite Hamiltonian isotopy generating L_s. Let V be a vertically finite Lagrangian. Then the following holds: there exists a compact set K=K(J(s,τ),L_s,V,l)⊂ T^∗M such that the solutions of (<ref>) are contained in K. We modify the proof of <cit.>. The boundary conditions are fixed for s≫ 0 and s≪ 0, so the moving boundary conditions appear only on the compact part S_N:=[-(N+1),N+1]× [0,1] for some N≫0. We can split the strip 𝒵 into the thin part (-∞,-N-1)× [0,1]∪ (N+1,∞)× [0,1] and the thick part S_N. We control the thick part using tameness for totally real submanifolds. Consider the compatible triple (S_N× T^∗M,j⊕ J,ω_ℂ⊕ω_T^∗M). Note that the manifold 𝒱:={(s,p):s∈[0,1],p∈ V_s} is totally real with respect to j⊕ J. Furthermore, it is (ω_S_N⊕ω_T^∗M)-Lagrangian outside a compact subset, since the isotopy V_s is compactly supported. So by Lemma <ref> and Corollary <ref>, the manifold 𝒱 must actually be tame with respect to j⊕ J. Then, from the a priori bound on the geometric energy and tameness, we see that the image of the thick part S_N must be a priori confined by Proposition <ref>. The analysis for the thin part is unchanged. This finishes the proof. Note that the proof does not extend to the case where both Lagrangian boundary conditions are vertically finite.

§.§.§ Continuation maps Recall the set-up in Section <ref>. From the discussion in Section <ref>, we had chosen generic compact perturbations J_i(τ), i=0,1, of the constant family J_con as regular Floer data for the pairs (V,L_0) and (V,L_1). Then we further chose an initial family of uniformly admissible almost complex structures J̃^in on (-∞,∞)× [0,1] for (<ref>) such that J̃^in(s,τ)=J_0(τ) for s≪ 0 and J̃^in(s,τ)=J_1(τ) for s≫ 0. Then we set J^in(s,τ)=(ψ_l(s))^∗J̃^in.
We see from Proposition <ref> that for such a family J^in, the solutions of (<ref>) are compactly confined. Just as we did at the end of Section <ref> (see expression (<ref>)), we perturb the family J^in to J over this compact set so that the solutions of (<ref>) are transversely cut out. Then, for this perturbed J, J̃=(ψ_l(s)^-1)^∗J is called a regular perturbation datum for ((V,L_0),(V,L_1),ψ_s,J̃^in). Furthermore, the 1-dimensional part of this moduli space is compactified as usual. We briefly explain how to orient the moduli space of passive continuation strips. We adopt the notions from <cit.>. Suppose we have Lagrangian branes (V,A_V,P_V) and (L_s,A_s,P_L_s). For x∈ V⋔ L_0 and y∈ V⋔ L_1, choose a path of Lagrangian subspaces L_T(x), T∈ [0,1], from T_x V to T_x L_0, and a path L_T(y) from T_y V to T_y L_1, satisfying the grading constraint. We furthermore choose a path of spin structures (P_T)_x over L_T(x) and isomorphisms of Spin torsors (P_0)_x≃ P_V(x) and (P_1)_x≃ P_L_0(x). We choose an analogous path of spin structures (P_T)_y over L_T(y). Then, given a regular u satisfying (<ref>), we glue the constant half-strips at x and ψ_1^-1(y) to the strip-like ends of ψ_s∘ u. Given the glued disc x♯ u♯ψ_1^-1(y), the induced spin structure on the boundary is given by gluing (P_T)_x, ψ_s^∗P_V(u), ψ_1^∗(P_T)_y and ψ_s^∗P_L_s(u) along the boundary of x♯ u♯ψ_1^-1(y). We orient the tangent space at u by the induced spin structure; for details, see <cit.>. So, just as we suggested above, by counting the zero-dimensional parts, we get an induced chain map c^passive =c_(L_0,J_0)→(L_1,J_1):CF(V,L_0,J_0)→ CF(V,L_1,J_1), which we call the passive continuation map. Two passive continuation maps are concatenated as indicated by the following commutative diagram:

CF(V,L_0,J_0) --Id-- CF(V,L_0,J_0)
      |c̃_01                 |ĉ_01
      v                      v
CF(V,L_1,J_1) --Id-- CF(ψ_1^-1(V),L_0,ψ_1^∗J_1)
      |ĉ_12                 |ĉ'_12
      v                      v
CF(ψ_2^-1(V),L_1,ψ_2^∗J_2) --Id-- CF((ψ_1^-1∘ψ_2^-1)(V),L_0,(ψ_2∘ψ_1)^∗J_2)
      |Id                    |Id
      v                      v
CF(V,L_2,J_2) --Id-- CF(V,L_2,J_2)

We are now ready to prove Proposition <ref>. We only sketch the proof; see <cit.> and the construction in <cit.> for details. We first show the first assertion. Suppose we are given a homotopy of Lagrangian isotopies L^t_s=ψ_s^t(L) that is fixed at the endpoints s=0,1, generated by a homotopy ψ_s^t of uniformly cylindrical and horizontally supported Hamiltonian isotopies. Set V^t_s=(ψ_s^t)^-1(V). Recall that we had chosen a uniformly admissible family J̃^in for the pair of triples ((V,L_0,J_0),(V,L_1,J_1)) such that J̃^in(s,τ)=J_0 for s≪ 0 and J̃^in(s,τ)=J_1 for s≫ 0. Suppose J̃^0 is a regular perturbation datum for ((V,L_0),(V,L_1),ψ^0_s,J̃^in) and J̃^1 is a regular perturbation datum for ((V,L_0),(V,L_1),ψ^1_s,J̃^in). Suppose furthermore that there exists an initial uniformly admissible homotopy of ω-compatible almost complex structures J̃^t, t∈ [0,1], extending J̃^0 and J̃^1, such that each J̃^t is given by compactly perturbing J̃^in(s,τ) for s∈ [-2,2]. Set J^t(s,τ)=(ψ^t_l(s))^∗J̃^t. The corresponding family of passive continuation strip equations is given by: ∂̅_J^t u=0, u(s,0)⊂ V^t_l(s), u(s,1)⊂ L_0, lim_s→ -∞ u(s,τ)∈ L_0∩ V, lim_s→ +∞ u(s,τ)∈ L_0∩(ψ^t_1)^-1(V).
By the properness of the map (ψ^t_s)^-1:[0,1]× [0,1]× T^∗M→ T^∗M and an application of the arguments in the proof of Proposition <ref>, we may enlarge K and R>0 so that: (i) the horizontal support of ψ^t_s is contained in K, (ii) ψ^t_s is cylindrical outside D_R^∗M, (iii) J^t=J_con outside T^∗K, and (iv) the solutions of (<ref>) are contained in D_R^∗K for t∈[0,1]. In particular, condition (i) implies that the set T^∗K is invariant under ψ^t_s. Let R_1> R be such that ψ^t_s(D_R^∗K)⊂ D_R_1^∗K. Replacing u(s,τ) with ũ(s,τ)=ψ^t_l(s)(u(s,τ)), we arrive at an equivalent family of equations: ∂ũ/∂ s+J̃^t(s,τ)∂ũ/∂τ-l'(s)X^t_l(s)(ũ)=0, ũ(s,0)⊂ V, ũ(s,1)⊂ L^t_l(s), lim_s→ -∞ũ(s,τ)∈ L_0∩ V, lim_s→ +∞ũ(s,τ)∈ L_1∩ V. The Hamiltonian perturbation datum in the sense of Seidel <cit.> is given by the Hamiltonian-valued 1-form B^t(s,τ)=l'(s)H^t_l(s)ds. Indeed, the corresponding Hamiltonian vector field valued 1-form is Y^t=l'(s)X^t_l(s)ds, and (<ref>) just reads (dũ-Y^t)^0,1=0 as usual. Let 𝒥(K,R_1,J^0,J^1) be the space of homotopies of uniformly admissible almost complex structures Ĵ^t rel endpoints such that Ĵ^t=J̃^t outside D_R_1^∗K. Let ℋ(R_1,K) be the space of Hamiltonians supported inside D_R_1^∗K. Now further perturb the equation (<ref>) by replacing J̃^t with Ĵ^t in 𝒥(K,R_1,J^0,J^1) and B^t(s,τ) with B̂^t(s,τ)=B^t(s,τ)+Q^t(s,τ), where Q^t(s,τ) is a family of Hamiltonian-valued 1-forms taking values in ℋ(R_1,K). We may assume that the 1-form vanishes on the boundary. Let Q̂^t be the vector field valued 1-form obtained from Q^t(s,τ), and set Ŷ^t=Y^t+Q̂^t. Consider the following equation: (dũ-Ŷ^t)_Ĵ^t^0,1=0, ũ(s,0)⊂ V, ũ(s,1)⊂ L^t_l(s), lim_s→ -∞ũ(s,τ)∈ L_0∩ V, lim_s→ +∞ũ(s,τ)∈ L_1∩ V. Let R_2>R_1 be such that the image (ψ_s^t)^-1(D_R_1^∗K) is contained in D_R_2^∗K. The most important feature of (<ref>) is that the endpoint Lagrangian conditions now match. Abusing notation, let J^t be the pullback of Ĵ^t via ψ^t_l(s). The pullback u=(ψ^t_l(s))^-1(ũ) solves (du-Z^t)^0,1_J^t=0 for the Hamiltonian vector field valued 1-form Z^t coming from the Hamiltonian-valued 1-form Q^t(s,τ)∘ψ^t_s. Indeed, as before, (dψ^t_l(s))^-1(dũ-Y^t)_Ĵ^t^0,1=(du)^0,1_J^t, and so (dψ^t_l(s))^-1(dũ-Ŷ^t)^0,1_Ĵ^t=(du-Z^t)^0,1_J^t. Note that Z^t is supported in D_R_2^∗K. In particular, the geometric energy ∫‖du-Z^t‖^2 is bounded above in terms of ∫ u^∗ω and the curvature integrand (see <cit.>). [Actually, the curvature integrand vanishes in this situation, since Z^t vanishes on the boundary and ω(∂_s u-Z,J(∂_s u-Z))=ω(∂_s u-Z,∂_t u)=u^∗ω-dH(∂_t u).] The boundary conditions are the same as in (<ref>), and outside D_R_2^∗K solutions of (du-Z^t)^0,1_J^t=0 solve (<ref>), so that the solutions of (<ref>) are still compactly confined in, say, D_R_3^∗ K_1, for any Ĵ^t and any Q^t subject to the given bound on sup_t ‖∇ Q^t‖. Note that the bound on sup_t‖∇ Q^t‖ will, however, depend on Ĵ^t. Then we may use the solutions of (<ref>) to construct the desired chain homotopy H. To achieve transversality, we use the Banach manifolds 𝒥(K,R_1,J^0,J^1) and ℋ(R_1,K) and run the standard transversality argument, say as in the proof of <cit.>. This is essentially the same strategy as in <cit.>. Then we count the zero-dimensional component of the moduli space of solutions of (<ref>) for generic Ĵ^t and Ŷ^t; the boundary of the 1-dimensional component consists either of configurations induced from strip breaking or of solutions of the equation (<ref>) for t=0,1, so we get the desired chain homotopy relation. We now discuss the second bullet point.
Note that showing that the following triangle commutes up to chain homotopy,

CF(V,L_0,J_0) --ĉ_01--> CF(ψ_1^-1(V),L_0,ψ_1^∗J_1) --ĉ'_12--> CF((ψ_1^-1∘ψ_2^-1)(V),L_0,(ψ_2∘ψ_1)^∗J_2),
CF(V,L_0,J_0) --ĉ_ψ_2∘ψ_1--> CF((ψ_1^-1∘ψ_2^-1)(V),L_0,(ψ_2∘ψ_1)^∗J_2),

reduces to the standard case discussed in <cit.>, since the endpoint conditions match. The last point, that passive continuation maps are quasi-isomorphisms, follows because the isotopies of V induced by uniformly horizontally finite isotopies are compactly supported, and so are their inverses. Therefore, the “inverse movie" 𝒱^-:={(s,p):p∈ V_1-s} is still tame. Hence the same argument applies and we can explicitly construct a chain homotopy inverse. Suppose now that V is a vertically finite Lagrangian in T^∗M and suppose that the set V(F):={m∈ M: T_m^∗M is transverse to V} is dense. Given two points m,m'∈ V(F) and a path homotopy class α between m and m', we can find a piecewise smooth representative of α such that (i) each of the smooth components α_i is an embedded curve in M, and (ii) the endpoints of the components are contained in the set V(F). We call the induced passive continuation map the parallel transport map associated to α. From Proposition <ref>, we readily obtain: A relative path homotopy class α between m,m'∈ V(F) as above induces a parallel transport map Γ(α): HF(V,F_α(0),J_0)→ HF(V,F_α(1),J_1) with the following properties: * parallel transport maps are isomorphisms, * parallel transport maps are compatible with respect to concatenation of paths, * parallel transport maps depend only on the path homotopy classes. In particular, the assignment z↦ HF(V,F_z), equipped with the parallel transport maps, defines a local system on M.

§.§.§ Path groupoid representation We now relate everything we discussed to path groupoid representations of the Floer cohomology local system. We first recall the definitions from Section <ref>. Let M be a two-dimensional manifold which is flat at infinity. Suppose furthermore that M admits a compactification M̄, by which we mean that there is a proper embedding of M into a compact two-dimensional manifold M̄ such that M_∞=M̄-M consists of a finite set of points. (Definition <ref>) In the above set-up, M admits a wall-chamber decomposition if there exist a finite collection M^0 of points on M and a collection M^1 of embedded arcs (called walls) in M satisfying the following conditions. * If w∈ M^1, then w connects a point in M^0 to either a point in M^0 or a point in M_∞. * Given a point m_0∈ M^0, there exists a wall w∈ M^1 such that m_0∈∂ w, and given a point m_∞∈ M_∞, there exist at least one point m_0∈ M^0 and a wall w∈ M^1 such that ∂ w={m_0}∪{m_∞}. * The walls in M^1 only meet at the points in M^0∪ M_∞. * The complement M^2 of all the walls in M^1 decomposes M into a finite disjoint union of contractible components (called chambers). (Definition <ref>) Given a wall-chamber decomposition (M^0,M^1,M^2), we say that a collection of points 𝒫_M in M is a set of base points if each component of M^2 contains at least one element of 𝒫_M. Given the base points 𝒫_M, the path groupoid 𝒢_M=𝒢_M(𝒫_M) is the groupoid whose objects are the points in 𝒫_M and whose morphisms are path-homotopy classes between the points in 𝒫_M. A collection of morphisms of 𝒢_M is said to be a path groupoid generating set if their concatenations generate 𝒢_M. A path groupoid representation of a GL(k;ℛ)-local system consists of the following data. * A free rank k ℛ-module E_b for each b∈𝒫_M, together with an isomorphism ℛ^⊕ k≃ E_b.
* A morphism Γ(α):E_b→ E_b' for each path homotopy class α∈π_1(M;b,b'), b,b'∈𝒫_M, such that Γ(α) is compatible with path concatenations. Two path groupoid representations (𝒫_M,E,Γ) and (𝒫_M,E',Γ') are said to be equivalent if for each b∈𝒫_M there are isomorphisms g_b:E_b→ E'_b such that (i) the following square commutes for α∈π_1(b,b'), b,b'∈𝒫_M:

E_b --Γ(α)--> E_b'
 |g_b          |g_b'
 v             v
E'_b --Γ'(α)--> E'_b'

and (ii) the isomorphisms g_b are compatible with the isomorphisms (<ref>) to ℛ^⊕ k above. Suppose now V is a vertically finite Lagrangian over M. Suppose we can choose a finite set of points 𝒫_M(V) on M such that 𝒫_M(V) contains at least one point from each chamber of (M^0,M^1,M^2) and such that F_b and V are transverse for b∈𝒫_M(V). Choose a grading on T^∗M. Suppose furthermore that V is spin and graded. Choose a spin structure on M, the induced spin structure on F_b (see Section <ref> for details), and a spin structure on V. Then consistent sign choices can be made so that the chain complex CF(V,F_b) is a ℤ-graded ℤ-module for b∈𝒫_M(V). Suppose we have a compactly supported exact Lagrangian isotopy V∼ V' whose support lies outside π^-1(𝒫_M(V)). Then CF(V',F_b) can also be made a ℤ-graded ℤ-module in a compatible manner. In particular, the quasi-isomorphisms CF(V,F_b)→ CF(V',F_b) for b∈𝒫_M(V) commute with the parallel transport maps. The following proposition then rewrites our discussion in Section <ref> in the language of path groupoids. Let k be the rank of HF(V,F_b). The following data form a path groupoid representation of a GL(k;ℤ)-local system. * The points 𝒫_M(V). * The free ℤ-modules HF(V,F_b). * The parallel transport maps Γ(α):HF(V,F_b)→ HF(V,F_b') defined as in Section <ref>. Furthermore, let Γ'(α) denote the parallel transport maps associated to the ℤ-modules HF(V',F_b) for b∈𝒫_M(V). Then the two path groupoid representations (𝒫_M(V),HF(V,F_b),Γ(α)) and (𝒫_M(V),HF(V',F_b),Γ'(α)) are equivalent. We may instead think of the global local system z↦ HF(V,F_z) as the local system induced from the path groupoid representation (𝒫_M(V),HF(V,F_b),Γ(α)). We will switch between these two conceptual pictures depending on which is more convenient.

§ DESINGULARIZATION AND REAL-EXACT SPECTRAL CURVES In this section we discuss the geometry and topology of real-exact spectral curves. In Section <ref>, given a small deformation parameter δ>0, we deform the singular metric g^ϕ on C^∘ to a Kähler metric g^ϕ_δ on C̃, as discussed briefly in Section <ref>. In Section <ref>, we define what we mean by “non-constant discs bounded between the fibre and the spectral curve", also called BPS discs in this paper, and provide various conformal models. In Section <ref>, we look at the toy case of ϕ=zdz^2 on ℂ=ℂP^1-{∞}, whose spectral curve is isomorphic to Σ_ϕ={(p^z)^2-z=0} in ℂ^2=T^∗ℂ. We then discuss how the associated spectral network is related to BPS discs. In Section <ref>, we discuss the wall-chamber decomposition of C induced from a complete saddle-free GMN quadratic differential ϕ. In Section <ref>, we discuss the geometry of real-exact spectral curves. As indicated in Section <ref>, we show that given an energy cut-off E≫ 1 we can deform C-S(0) to a bounded open subdomain C(δ;E) of C such that horizontal trajectories passing through z∈ C(δ;E) never enter sufficiently small neighbourhoods of the zeroes of ϕ.
Furthermore, we show in Proposition <ref> that outside of S(π/2), we have a canonical ± ordering on the lifts of the points z∈C̃ with respect to the projection π:Σ_ϕ→C̃.

§.§ Desingularization We provide a way of deforming the singular ϕ-metric to a smooth metric on C̃. We first consider the case of ϕ=zdz^2. This deformation depends on some auxiliary choices, but all the deformed metrics are conformally equivalent and they agree near infinity. The singular flat metric g^ϕ=|z||dz|^2 in polar coordinates reads g^ϕ=r(dr^2+r^2 dθ^2). Choose a δ>0 and a smooth non-decreasing function ψ_δ:[0,∞)→ [0,1] such that ψ_δ(r)=r for r<δ and ψ_δ(r)=1 for r>3δ/2. The metric g_δ^ϕ=(r/ψ_δ(r))(dr^2+r^2 dθ^2) is now globally defined and smooth on ℂ (for r<δ it is the standard flat metric dr^2+r^2dθ^2, while for r>3δ/2 it coincides with g^ϕ), and it is conformal to the standard metric, hence compatible with the standard complex structure on ℂ. So g_δ^ϕ is actually a Kähler metric, since we are in complex dimension 1, though it is not real analytic. Recall that we call a quadratic differential complete if it does not admit poles of order one. Let ϕ be a complete GMN quadratic differential and let b_1,...,b_n be the zeroes of ϕ. Recall from Proposition <ref> that: Let b be a simple zero of ϕ. Then there exists a neighbourhood U_b of b, an open set D of ℂ containing zero, and a biholomorphism ξ=ξ_b:(D,0)→ (U_b,b) such that ϕ(ξ)dξ^2=(3/2)^2 ξ dξ^2. Furthermore, the germ of the biholomorphism is unique up to a factor of c=exp(2π ik/3) for k=0,1,2. We may assume that ϕ in (<ref>) is dξ^2. Let U_i=U_b_i and ξ_i=ξ_b_i. By shrinking if necessary, we may assume that the open sets U_i are disjoint and that ξ_i^-1(U_i)=D(r_i) for some r_i>0. Having made these choices, we define: Let 0<r<min{r_1,...,r_n}. Let b_i, U_i, ξ_i and D(r_i) be as above. Let U_i(r)=ξ_i(D(r)). We define U(r)=⋃_i=1^n U_i(r). By choosing δ<(1/2)min{r_1,...,r_n}, we use the local form (<ref>) near each branch point to conformally deform the flat metric g^ϕ and obtain a global smooth metric on C̃, which we still denote by g_δ^ϕ. Note that for any other choice of δ'<δ and ψ'_δ', the resulting metric g^ϕ_δ' is conformally equivalent to g_δ^ϕ. Furthermore, the conformal factor is a smooth positive function which is equal to 1 except on some small annular regions near each of the zeroes of ϕ. We call the metrics obtained by this general method (conformally) desingularized metrics.

§.§ The moduli problem We now define the moduli problem that we are interested in. Let ϕ be a complete GMN quadratic differential, g^ϕ the induced singular flat metric on C, and g^ϕ_δ a desingularization of g^ϕ as constructed in Section <ref>. We do not require ϕ to be saddle-free. Here J=J_ϕ is the almost complex structure on T^∗C̃ induced from g_δ^ϕ, and J_con is its conical deformation.

§.§.§ Conformal structures We start with a brief discussion of the conformal model Δ_m of the closed unit disc with m punctures on the boundary, which was constructed by Ekholm in <cit.>. Given points c=(c_1,...,c_m-2)∈ℝ^m-2, we consider the subdomain of (-∞,∞)×[0,m-1] given by removing m-2 horizontal slits in the direction of +∞, of width 0<ϵ≪ 1, starting from the points (c_j,j) for j=1,...,m-2. A boundary component I with both of its ends at +∞ is called a slit boundary component. Given a slit boundary component I, the boundary minimum of I is the unique point with the smallest real part along I. We can regard each of these subdomains as giving conformal structures on Δ_m, induced by z=s+it.
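For orientation, we spell out the smallest case m=3 (a direct unpacking of the construction above): Δ_3 is the strip (-∞,∞)×[0,2] with a single horizontal slit of width ϵ removed in the direction of +∞, starting from the point (c_1,1). This domain has one puncture at -∞ and two at +∞, since the slit separates the end at +∞ into the regions near t∈[0,1) and t∈(1,2], giving the three boundary punctures. Translating c_1 is a biholomorphism, so ℝ/ℝ is a point and Δ_3 carries a unique conformal structure, as recalled below; this Δ_3 is exactly the domain used for the triangle model of BPS discs in Section <ref>.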
Note that translating by (t,...,t) on ℝ^m-2 for t∈ℝ gives a biholomorphism of this subdomain and hence a conformal equivalence between two different conformal structures on _m. Quotienting ℝ^m-2 by this t-action gives ℝ^m-3. In <cit.>, Ekholm shows that there is a diffeomorphism between ℝ^m-2/ℝ and the space of conformal structures on _m. In particular, we recover the unique conformal structure on _3. §.§.§ t-BPS discs ending at z We now provide several models of t-BPS discs ending at z. Let 𝒵=(-∞,∞)× [0,1] be the infinite strip. Let 0≤ t≤ 1. A map u:𝒵→ T^∗C̃ is a t-BPS disc ending at z in the infinite strip model if it satisfies the following equations: ∂̄_J u=0, u((-∞,∞)×{0})⊂ tΣ_ϕ, u((-∞,∞)×{1})⊂ F_z, lim_s→±∞u(s,τ)∈ F_z∩ tΣ_ϕ, lim_s→ -∞u(s,τ)≠lim_s→∞u(s,τ). A map u:_3→ T^∗C̃ is a t-BPS disc ending at z in the _3-model if it satisfies the following equations: ∂̄_Ju=0, u(s,0)⊂ tΣ_ϕ, u(s,2)⊂ tΣ_ϕ, lim_s→ -∞u(s,τ)∈ tΣ_ϕ, lim_s→ +∞u(s,τ)∈ tΣ_ϕ∩ F_z for 0≤τ≤ 1-ϵ, lim_s→ +∞u(s,τ)∈ tΣ_ϕ∩ F_z for 1+ϵ≤τ≤ 2, u(slit)⊂ F_z, lim_s→ +∞, 0≤τ≤ 1-ϵ u(s,τ)≠lim_s→ +∞, 1+ϵ≤τ≤ 2 u(s,τ), where slit is the unique slit boundary component of _3. The notion of a J_con-BPS disc in the various conformal models is defined by replacing J with J_con. When t=1, we just write BPS discs. See Figure <ref> for the triangle model. The infinite strip model is useful when dealing with degeneration of continuation strips, and the slit model is useful when carrying out adiabatic degeneration techniques. One can pass from the strip model to the slit model by removing the point (0,0) on 𝒵. Similarly, by removal of singularity, one can remove the strip-like end at s=-∞ from the _3-model and return to the strip model. We will now construct explicitly some BPS discs ending at z. The J-disc we construct here as a submanifold coincides with the vertical strip constructed in <cit.>. In fact, the construction of the metric g_δ^ϕ was initially motivated by the problem of finding a suitable metric on C̃ making the vertical strips J-holomorphic for the Sasaki almost complex structure J. This construction will not be used in Sections <ref>–<ref>, but we have included it here for the sake of completeness. Let ϕ be a complete GMN quadratic differential. If z∈ S(0), then there exists a BPS disc ending at z. Recall that the spectral network S(0) is the critical graph of the singular foliation on C̃ given by horizontal trajectories. In particular, the walls of S(0) are ϕ-trajectories with maximal domain of definition an open interval of the form (a,∞) or (a,b), where a and b are both finite. The metric g_δ^ϕ (<ref>) constructed in Section <ref> is radial near the zeroes of ϕ, and all the radial rays are geodesics with respect to g_δ^ϕ. Furthermore, the walls of the spectral network S(0) initially propagate at the zeroes of ϕ as positive radial rays of phase 0 and ± 2π/3. Thus the walls of the spectral network lie on g_δ^ϕ-geodesics. Let g=g_δ^ϕ. Given an arc-length parametrized g-geodesic γ:(-ϵ,ϵ)→C̃, let S_γ be the embedded plane in T^∗C̃ given by the parametrization (s,τ)↦(γ(s),-τ g((∂_s γ)(s))) for (s,τ)∈ (-ϵ,ϵ)×ℝ, where we regard g as an isomorphism TC̃→ T^∗C̃. To see that S_γ is J-holomorphic, note that by Lemma <ref>: ∂_s(S_γ)(s,τ) =((∂_s γ)(s))^H, ∂_τ(S_γ)(s,τ) =-(g((∂_s γ)(s)))^V and J=[ 0 g^-1; -g 0 ] in the horizontal-vertical decomposition, so that ∂_s (S_γ)(s,τ)+ J(∂_τ(S_γ)(s,τ)) =((∂_s γ)(s))^H-(g^-1(g(∂_s γ(s))))^H = ((∂_s γ)(s))^H-((∂_s γ)(s))^H=0. Here the fact that γ is a geodesic was crucial; we have ∇_∂_s γ(s)∂_s γ(s)=0. 
In particular, if η is any reparametrization of γ, then the curve (η(s),-g((∂_s η)(s))) in T^∗C̃ traces out an arc in S_γ. Let z∈ S(0). Let w be the wall of S(0) containing z. Let γ:(a,b)→ C be a unit speed geodesic defined on (a,b) containing a finite closed interval [a',b'] such that γ|_[a',b'] is contained in the wall w, γ(a')=z_0≠ z, and γ(b')=z, where z_0 is a zero of ϕ contained in w. Then one can check that the ϕ-geodesic equation implies that the complex vector √(ϕ(γ(s))) is collinear with (∂_s γ)(s). So the arcs (γ(s),±√(ϕ(γ(s)))) and (γ(b'),-τ g((∂_s γ)(b'))), for s∈ [a',b'] and τ∈ℝ, are contained in the J-holomorphic plane S_γ (<ref>) associated to γ. Furthermore, they bound a simply connected domain in S_γ. So by the Riemann mapping theorem, we have constructed our J-holomorphic disc u. §.§ The toy case Now we discuss the case of ϕ=z dz^2 to illustrate how the spectral network relates to the existence of BPS discs. We remark that for general complete GMN quadratic differentials, one needs the adiabatic degeneration argument in Section <ref>. We can identify ℂ^2≃ T^∗ℂ and Σ_ϕ with {(p^z)^2-z=0} in ℂ^2. Recall that g^S is the metric induced on ℂ^2 and Ω is the canonical holomorphic symplectic form on ℂ^2 (defined in <ref>). Let Ĩ be the horizontal lift of I. In conformal normal Kähler coordinates, we have: g^S= |dz|^2+|d(p^z)|^2, Ω=dp^z∧ dz, J=[ 0 Id; -Id 0 ], Ĩ=[ i 0; 0 -i ]. Note that g^S is Ĩ- and J-invariant. Let K=ĨJ and ω_I=g^S(Ĩ-,-). Then the imaginary part ω_π/2=1/2i(Ω-Ω̅) of Ω is given by g^S(K-,-). Furthermore, since ω_π/2(v,Jv)=g^S(Kv,Jv)=-g^S(v,KJv)=g^S(v,Ĩv)=ω_I(v,v)=0, the imaginary part of Ω vanishes on the interior of a J-holomorphic disc. The spectral curve Σ_ϕ is exact with respect to the holomorphic Liouville form λ. We choose the primitive W of λ|_Σ_ϕ given by W(p^z,z)=2(p^z)^3/3; then dW=λ|_Σ_ϕ. Given a complex number z∈ℂ, write z_θ= e^-iθz+e^iθz̅/2. For the quadratic differential ϕ=zdz^2, the spectral network S(θ) consists of the three rays of phases e^i(2θ+2π k)/3, k=0,1,2, emanating from the origin. Comparing with (<ref>), we see that we have the following alternative characterization of the spectral network S(θ) in terms of the holomorphic primitive W: The spectral network S(θ) is the locus of points z on ℂ such that W(π^-1(z))_θ+π/2=W(±√(z),z)_θ+π/2=(±2/3 z√(z))_θ+π/2=0. For a∈ℂ-{0}, let {a^0,a^1} be the set of lifts of a to Σ. Since W(w,z)=±2/3 z√(z) for (z,w)∈Σ, we see that S(0) is the locus of points a∈ℂ such that the imaginary part of W(a^i) is equal to zero for i=0,1. We now give an ordering to the pair, provided that Re(W(a^0))≠Re(W(a^1)). Since W(a^0)=-W(a^1), equality happens if and only if the real parts vanish, which happens if and only if a is on S(π/2). Based on this fact, for a∉ S(π/2), we order the two lifts of a by a^± with respect to the relation Re(W(a^+))>Re(W(a^-)). We will construct a similar ordering in Section <ref>. We now provide a Floer theoretic reformulation of the characterisation of the spectral network for {(p^z)^2-z=0}. From now on we fix the phase θ=0. Let ϕ=zdz^2 on ℂ. The spectral network S(0) is the locus of points z on ℂ such that there exists a BPS disc (<ref>) ending at z. We utilise the exactness of the holomorphic Liouville form. For a BPS disc u ending at z, we have ∫ u^∗Ω=W(z^+)-W(z^-)=4z^3/2/3, while ω_π/2 vanishes in the interior of any J-holomorphic disc, so Im(4z^3/2/3)=0. Hence there can be no BPS disc ending at z for z∉ S(0). From Proposition <ref>, we see that we can construct explicitly a BPS disc ending at each z∈ S(0). 
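To make the toy-case characterisation concrete, here is a small numerical sketch (the helper names are ours and purely illustrative): it checks that the three rays of phase 0, ±2π/3 satisfy Im W(z^±)=0 for W=±(2/3)z^3/2, and implements the ordering z^± by Re W away from S(π/2).

```python
import numpy as np

def W_lifts(z):
    """Values of the primitive W = ±(2/3) z^(3/2) on the two lifts of z to
    Sigma_phi = {(p^z)^2 = z}, using the principal branch of the square root
    (branch cut on the negative real axis)."""
    w = (2.0 / 3.0) * z * np.sqrt(z)
    return w, -w

def on_S0(z, tol=1e-9):
    """z lies on S(0) iff Im W vanishes on both lifts (cf. the lemma above)."""
    return all(abs(w.imag) < tol for w in W_lifts(z))

def ordered_lifts(z):
    """For z outside S(pi/2) (i.e. Re W != 0), order the lifts z^± by
    Re W(z^+) > Re W(z^-)."""
    w0, w1 = W_lifts(z)
    return (w0, w1) if w0.real > w1.real else (w1, w0)

# The three walls of S(0) are the rays of phase 0 and ±2*pi/3:
for phase in (0.0, 2 * np.pi / 3, -2 * np.pi / 3):
    assert on_S0(0.7 * np.exp(1j * phase))
# A generic point lies off S(0) and its lifts are strictly ordered:
z = 0.5 + 0.5j
assert not on_S0(z)
w_plus, w_minus = ordered_lifts(z)
assert w_plus.real > w_minus.real
```

Of course this uses nothing beyond the exactness argument in the proof above; the general case requires the adiabatic analysis of Section <ref>.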
Notice that for the case ϕ=zdz^2, Proposition <ref> is much stronger than Theorem <ref>. However, since most spectral curves are not exact with respect to the holomorphic Liouville form λ, the argument of this section cannot be applied to general spectral curves Σ_ϕ. §.§ Domain decomposition We now discuss the domain decomposition that comes from the spectral network S(0) associated to a saddle-free GMN quadratic differential ϕ. We assume that ϕ is GMN and complete. We use the conventions introduced in the ϕ-metric part of Section <ref>. Given a class [γ]∈ H_1(Σ;ℤ), its charge Z(γ) is defined by the following formula: Z(γ)=∫_γλ, where γ is a smooth representative of [γ]. The induced additive homomorphism Z:H_1(Σ;ℤ)→ℂ is called the charge homomorphism. Given a saddle trajectory γ of phase θ, we can join the two lifts of γ so that the charge of the corresponding class in H_1(Σ;ℤ) is of phase e^-iθ. Furthermore, by rotating the quadratic differential ϕ to e^i2θϕ for generic θ, we can make the image of Z avoid ℝ_>0∪ℝ_<0. This means that by rotating the quadratic differential by a generic phase, we can always obtain a saddle-free quadratic differential (see <cit.>). We have the following result on the conformal equivalence classes of the connected components (which we called the chambers) of C-S(0) for saddle-free, complete quadratic differentials ϕ. For the proof, see Chapters 6 and 9-11 of <cit.>, and Sections 3.4-3.5 and Lemma 3.1 of <cit.>. Let ϕ be a complete, saddle-free quadratic differential. Then the connected components of C̃-S(0) are conformally equivalent to one of the following. * Vertically finite horizontal strips 𝒵(a,b)={z∈ℂ:a<Im(z)<b} for some -∞<a,b<∞. The boundary of 𝒵(a,b) consists of separating horizontal trajectories, given by extending the biholomorphism to the lines {Im(z)=a, Re(z)> a_0}, {Im(z)=a, Re(z)< a_0}, {Im(z)=b, Re(z)< b_0}, {Im(z)=b, Re(z)> b_0} for some a_0,b_0∈ℝ. In other words, the biholomorphism extends to a continuous map on the closure of 𝒵(a,b) into C̃, which is a surjection onto the closure of the corresponding horizontal chamber, such that the points a_0+ia and b_0+ib are mapped to zeroes of ϕ. * The open upper half-plane ℋ:={z∈ℂ:Im(z)>0}. Again, there exists some x_0∈ℝ such that the biholomorphism extends to a continuous map on the closure of ℋ into C̃, which is a surjection onto the closure of the corresponding horizontal chamber, where the point x_0+i· 0 is mapped to a zero of ϕ, and the lines {Im(z)=0, Re(z)>x_0} and {Im(z)=0, Re(z)<x_0} are mapped to separating horizontal trajectories. In both cases, the pullback of ϕ under the conformal equivalence is equal to dz^2. In fact, these domains are given by maximal analytic continuations of ∫√(ϕ(z)) along open neighbourhoods of generic horizontal trajectories. Both of these domains are traced out by generic horizontal trajectories. From now on, we will not distinguish the horizontal chambers of ϕ (which are open conformal subdomains of C̃) from their conformally equivalent counterparts 𝒵(a,b) and ℋ (which are open conformal subdomains of ℂ). From the proposition, we see that given δ>0 there are ϵ(δ)>0, h(δ)>0 and η(δ)>0 such that the h(δ)-neighborhoods of the horizontal trajectories which trace out the horizontal subdomains 𝒵(δ;a,b) =𝒵(a+ϵ(δ),b-ϵ(δ))⊂𝒵(a,b), ℋ(δ) =ℋ∩{y>ϵ(δ)} never enter the (slightly thickened) neighbourhood U((2+η)δ). For later purposes, we demand that η>0 is small enough so that g_δ^ϕ=g^ϕ outside U((2-η)δ). We sometimes call U((2+η)δ) the desingularization region. 
Note that 𝒵(δ;a,b) and ℋ(δ) are naturally deformation retracts of the horizontal chambers of ϕ. Taking the union of the horizontal subdomains 𝒵(δ;a,b) and ℋ(δ) inside each of the horizontal chambers, we obtain our domain C(δ;∞). There exists a conformal subdomain C(δ;∞)⊂C̃, a disjoint union of deformation retracts of the connected components of C̃-S(0), which satisfies the following. There exist h(δ)>0 and η(δ)>0 such that if γ is a horizontal trajectory passing through z∈ C(δ;∞), then the h(δ)-neighborhood of γ lies strictly outside the desingularization region U((2+η)δ). §.§ Real-exact spectral curves We now look at real-exact quadratic differentials ϕ which, as stated in the introduction, are the main object of our interest. Recall (Section <ref>) that we have the identification of the real cotangent bundle and the holomorphic cotangent bundle via dx→ dz, dy→ -idz. Recall that a complete GMN quadratic differential ϕ is called real-exact if the spectral curve Σ_ϕ associated to ϕ is sent to a λ_re-exact Lagrangian. Equivalently, this means that Σ_ϕ is exact with respect to the real part of the holomorphic Liouville form: λ_re=λ_θ=0:=λ+λ̅/2. We discuss when saddle-free GMN quadratic differentials give real-exact spectral curves. Then, for ϕ real-exact, we find an open subdomain C(δ;E)⊂C̃ which is a deformation retract of C(δ;∞), such that the energy of a BPS disc ending at z∈ C(δ;E) is a priori bounded above by 2E. Furthermore, we also construct a vertical neighbourhood 𝒱 of the “truncated" spectral network (see Definition <ref>) such that we have a preferred ordering z^+,z^- of the lifts π^-1(z), for which the symplectic area of a t-BPS disc ending at z that travels from z^+ to z^- would be strictly negative; hence we show the non-existence of such J-discs. §.§.§ Criterion for real exactness Given a horizontal strip (𝒵(a,b),ϕ=dz^2), consider the saddle trajectory given by connecting the two zeroes of ϕ on the horizontal boundary segments of 𝒵(a,b). Such saddle trajectories are called standard saddle trajectories. The corresponding homology classes in H_1(Σ_ϕ;ℤ), given by joining the two lifts of the straight line, are called standard saddle classes. From standard saddle classes, we obtain the following criterion for real-exactness. The Lagrangian Σ_ϕ, with respect to the canonical symplectic form ω of the real cotangent bundle, is real-exact if and only if the standard saddle trajectories all have purely imaginary charge. The natural involution on the spectral curve induces a ℤ_2-action on the homology group H_1(Σ;ℤ). Define the hat-homology group Ĥ_1(ϕ) to be the ℤ_2 anti-invariant part of H_1(Σ;ℤ). Then <cit.> shows that the hat-homology group Ĥ_1(ϕ) is generated by the standard saddle classes of ϕ. Since λ is ℤ_2-anti-invariant, the ℤ_2-invariant part of H_1(Σ;ℤ) lies in the kernel of Z. Hence the charge homomorphism factors through Ĥ_1(ϕ). However, the image of a standard saddle class under Z is equal to ± e^-iθ times its ϕ-length, where θ is the phase of the saddle trajectory. So we see that Σ_ϕ is real-exact if and only if the charges of the standard saddle classes are all purely imaginary, i.e., if and only if the standard saddle trajectories are all vertical. The standard saddle classes give a ℤ-basis of the lattice Ĥ_1(ϕ). Hence we can identify it with ℤ^⊕ n. Following Bridgeland and Smith <cit.>, let Quad_free(g,m), m=({m_1,p_1},...,{m_k,p_k}), be the space of pairs (C,ϕ), where C is a genus g closed Riemann surface and ϕ is a quadratic differential over C such that the poles of ϕ are the points p_i with order m_i. We identify the pairs (C,ϕ) and (C',ϕ') up to conformal equivalence. 
Then in <cit.>, Bridgeland and Smith show that Quad_free(g,m) is locally isomorphic to the space of ℤ-homomorphisms from ℤ^n to ℂ, which implies that it is a complex manifold of dimension n=rk Ĥ_1(ϕ). Restricting to the homomorphisms which map entirely into iℝ⊂ℂ, we see that the real-exact quadratic differentials form a totally real submanifold of Quad_free(g,m). §.§.§ Energy and horizontal distance Let W be a primitive of λ_re over Σ_ϕ. We now relate W to the ϕ-length. Given an arc-length parametrized ϕ-geodesic α:[0,l]→ C^∘ of phase θ, let α̃ be a lift of α to Σ_ϕ. Then we have ∫_α̃λ=± e^-iθl. Take a flat local conformal coordinate ∫√(ϕ(z))dz sending α(0) to 0, over which α reads e^-iθt and Σ_ϕ={p^x=± 1, p^y=0}. So the value of the holomorphic Liouville form along the lift is just ± 1· e^-iθ. Integrating this from t=0 to t=l gives (<ref>). Let 𝒵^h be a horizontal chamber. Suppose z,z' are two points in the closure of 𝒵^h in C̃. Choose the straight line segment l in 𝒵^h connecting z and z' and a lift l̃ of l to Σ_ϕ. Then the horizontal distance d_hor(z,z') is defined by: d_hor(z,z'):=|∫_l̃λ_re|. Since any other choice of l̃ changes ∫_l̃λ_re only by a sign, the horizontal distance is well-defined. Since ϕ is real-exact, all the standard saddle trajectories are vertical. By translating if necessary, we may assume that the standard saddle trajectory on 𝒵^h lies on x=0. Let 𝒵^h be a horizontal chamber and let z,z' be two points in the closure of 𝒵^h. Suppose z=x+iy, z'=x'+iy' under some ϕ-flat conformal equivalence (𝒵^h,ϕ)≃ (ℋ,dz^2) or (𝒵^h,ϕ)≃ (𝒵(a,b),dz^2). Then d_hor(z,z')=|x-x'|. Furthermore, let d=d_hor(z,b) for some zero b of ϕ on the boundary of 𝒵^h. Then d only depends on z and not on b. Finally, if z^0 and z^1 are the two lifts of z, then |W(z^0)-W(z^1)|=2d. By translating the standard vertical saddle trajectory in 𝒵^h onto the line Re(z)=0, we may assume that the branch points lie over the line Re(z)=0. Let z=x+iy and z'=x'+iy'. The straight line segment l connecting (x,y) and (x',y') is homotopic to the concatenation of the horizontal line segment from (x,y) to (x',y) and the vertical line segment from (x',y) to (x',y'). Call the concatenation of these two line segments γ. Note that γ and l bound a triangle in 𝒵^h. Consider the lift of the homotopy given by the triangle, and call γ̃ the resulting lift of γ (see Figure <ref>). Since the vertical line segment is a π/2-geodesic, the integral of the real Liouville form over the vertical component vanishes. Furthermore, since the horizontal line segment is a phase zero geodesic, the integral of the real Liouville form over γ̃ is equal to ± (x-x') by (<ref>). This proves the first claim. In particular, set b to be one of the zeroes. Then the straight line segment connecting b and z admits two lifts, both of which meet at the point π^-1(b). We can join the two lifts at b and consider the resulting curve on Σ_ϕ. The integral of the real Liouville form over it then gives ±(W(z^0)-W(z^1)), but this is also equal to twice the horizontal distance between z and b, up to sign. This finishes the proof. Given a point z∈C̃-S(π/2), we can now order the two lifts of z to Σ_ϕ by the condition W(z^+)>W(z^-). Furthermore, we have the following corollary: Let z be a point in C̃-S(π/2). Connect z to a point z̃ on the wall γ of S(0) by a vertical trajectory. Then W(z^+)-W(z^-)=W(z̃^+)-W(z̃^-)=2l>0, where l is the ϕ-distance of z̃ from the branch point end of the wall γ; l does not depend on the choice of z̃. 
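In the flat local model, the content of the proposition is elementary, and the following minimal sketch (names ours; it assumes the normalisations of the proof, namely ϕ=dz^2 with branch points over Re(z)=0) records it: on the sheets p^x=±1 a primitive of λ_re is W=±x, so W(z^+)-W(z^-)=2 d_hor(z, branch locus).

```python
import numpy as np

def d_hor(z, z_prime):
    """Horizontal distance in a flat chamber (phi = dz^2), after translating
    the standard vertical saddle trajectory to Re(z) = 0: d_hor = |x - x'|."""
    return abs(z.real - z_prime.real)

def W_plus_minus(z):
    """On the sheets {p^x = ±1, p^y = 0}, lambda_re restricts to ±dx, so a
    primitive is W(z^±) = ±Re(z), normalised to vanish over Re(z) = 0."""
    return z.real, -z.real

z, z_prime = 1.3 + 0.4j, 0.2 - 1.1j
assert np.isclose(d_hor(z, z_prime), 1.1)
w_plus, w_minus = W_plus_minus(z)
# |W(z^+) - W(z^-)| = 2 d, as in the proposition:
assert np.isclose(w_plus - w_minus, 2 * d_hor(z, 0j))
```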
§.§.§ Chamber deformations We now construct the region C(δ;E) and the bridge region 𝒱(δ;E). Constructing C(δ;E). The conformal subdomain C(δ;E) is a deformation retract of C(δ;∞) such that we have a bound on W(z^+)-W(z^-) for z∈ C(δ;E). Again, since all the standard saddle trajectories are vertical, we can translate the vertically finite horizontal strip domains and half-plane domains of Proposition <ref> so that all the branch points lie over x=0. Then for E>0, set 𝒵(a,b;E) :={z∈𝒵(a,b): |Re(z)|<E}, 𝒵(δ;a,b;E) :=𝒵(a,b;E)∩𝒵(δ;a,b), ℋ(E) :={z∈ℋ: |Re(z)|<E, Im(z)<E}, ℋ(δ;E) :=ℋ(δ)∩ℋ(E). We define C(δ;E):=⋃𝒵(δ;a,b;E) ∪⋃ℋ(δ;E), where we take the union over all the horizontal chambers of C. Note that C̃-S(0) deformation retracts to C(δ;E), and W(z^+)-W(z^-)<2E by Proposition <ref>. Constructing 𝒱(δ;E). We now construct the bridge region 𝒱(δ;E). We start with a definition. Suppose γ:[0,∞)→C̃ is a wall of the spectral network S(0), arc-length parametrized with respect to ϕ. Then for T>0, the T-truncated wall γ is the restriction of γ to the interval [T,∞). The T-truncated spectral network (or the truncated spectral network for short) S(0)_T is the union of the images of the T-truncated walls. The following definition will be useful: Let γ be an open geodesic arc in C^∘. Then we say that a neighbourhood V of γ is a vertical neighbourhood if V is traced out by open vertical segments that pass through γ. Let h_v≪min_𝒵(a,b)⊂ C-S(0)b-a/2 and let 𝒱(h_v) be the set of points in C that are connected to points on S(0)_T by a vertical geodesic of length less than h_v. Each component 𝒱 of 𝒱(h_v) is a vertical neighbourhood of a unique truncated wall γ|_[T,∞), which we call the core of 𝒱. By taking δ≪ 1, we can ensure that 𝒱 intersects all the horizontal chambers that are adjacent to the core wall γ. If z is a point on 𝒱, then W(z^+)-W(z^-) only depends on the core horizontal geodesic. For small enough δ>0, 𝒱 serves as a “connecting bridge" between the connected components of C(δ;∞) for T=D((2+η)δ), for some continuous function D that only depends on ϕ and the choice of identifications U_i≃ D(r_i)⊂ℂ made in Section <ref>. Note that 𝒱 now lies outside S(π/2) and U((2+η)δ). We summarize the discussion. Let δ≪1 be a small deformation parameter and let E≫1 be an energy cut-off. Then there are precompact open conformal subdomains 𝒱(δ;E) and C(δ;E), contained in C̃-S(π/2) and C̃-U((2+η)δ) respectively, with the following properties. * There exist h(δ)>0 and η(δ)>0 such that if γ is a generic horizontal trajectory passing through a point z contained in C(δ;E), then γ never enters U((2+η)δ). Furthermore, if z∈ C(δ;E) then W(z^+)-W(z^-)<2E. * Given a connected component 𝒱 of 𝒱(δ;E), there exists a unique wall γ:(0,∞)→ C, called the core of 𝒱, and a truncated portion γ|_(T,∞) lying in 𝒱, such that the component 𝒱 is given by some vertical thickening of γ|_(T,∞). Furthermore, for z∈𝒱, we can order the lifts z^+,z^- of z on Σ_ϕ such that W(z^+)-W(z^-)>0. Finally, the connected component 𝒱 overlaps with all the components of C(δ;E) adjacent to its core wall γ. Let J be a compatible almost complex structure on T^∗C̃. Let z∈𝒱(δ;E). Then there are no J-discs bounded between F_z and Σ_ϕ going from z^+ to z^-. From Stokes' theorem and ω-compatibility, Area(u)= ∫ u^∗ω=W(z^-)-W(z^+)<0. This is a contradiction, since a non-constant J-holomorphic disc has positive area. This finishes the proof. Corollary <ref> says nothing about the discs going from z^- to z^+. 
However, it is important because otherwise we do not know a priori that the parallel transport across the connected components of 𝒱 is upper-triangular. § ADIABATIC DEGENERATION We now study the adiabatic degeneration of t-BPS discs ending at z as t→ 0. From now on, we work in T^∗C̃ and stick to the _3-conformal model introduced in Definition <ref>. Recall that we had constructed a wall-chamber decomposition of C and a deformation retract C(δ;E) of its horizontal chambers, with respect to a parameter δ>0 and an energy cut-off E≫ 1. The region C(δ;E) has the property that the maximal horizontal trajectory passing through z∈ C(δ;E) never enters the region U((2+η)δ). In Section <ref>, we define the notion of holomorphic flow lines for (slight generalizations of) spectral curves and describe how they relate to ϕ-trajectories. In Section <ref>, we find an a priori energy and boundary length estimate for t-BPS discs ending at z, for z∈ C(δ;E). In Section <ref>, we establish some gradient estimates. In Section <ref>, we follow <cit.> closely and introduce a t-uniformly finite number of punctures on the boundary of _3 for each t, to obtain a conformal domain _r with the conformal structure defined as in Section <ref>. On this new conformal domain _r of the map u_t, we construct a domain subdivision D_0(t)∪ D_1(t) with the following two properties. * The discs u_t map D_0(t) outside of T^∗U(2δ) and map D_1(t) into T^∗U((2+η)δ). * The size of the derivatives of u_t over D_0(t) is O(t). We show this by utilising the gradient estimates of Section <ref>. In Sections <ref> and <ref>, we study the limiting behaviour of u_t restricted to D_0(t) as t→ 0. We introduce auxiliary subdomains W_0(t) of D_0(t) so that D_0(t)-W_0(t) consists of uniformly finitely many strip-like domains, and u_t|_W_0(t) converges to points on C̃ (Lemma <ref>). The components of D_0(t)-W_0(t) satisfy the following properties. * A 0-special domain is a strip domain that contains a horizontal boundary component with an F_z-label. By Lemma <ref>, a 0-special domain uniformly converges to z. * A non-0-special strip domain is either a vertex region or a non-vertex region. By Proposition <ref>, a vertex region is mapped very close to a point in C̃, after taking a subsequence. * By Proposition <ref>, a non-vertex region is mapped very close to a horizontal trajectory, after taking a subsequence. In Section <ref>, we will prove the main analytic Theorem <ref> by combining the results of Sections <ref>-<ref>. §.§ Flow lines We adapt the notion of flow lines introduced in <cit.> to the holomorphic setting. Let C̃ be a Riemann surface and let (T^∗_ℂ)^1,0C̃ be the holomorphic cotangent bundle. Let Y be a codimension 1 holomorphic submanifold of (T^∗_ℂ)^1,0C̃ such that the holomorphic projection π:Y→C̃ is a simple branched covering of C̃. Let n be the degree of the branched covering Y→C̃. Suppose z∈C̃ is a regular value of π. Then there exist an open neighbourhood U of z, and locally defined holomorphic functions f_1,...,f_n, such that Y∩π^-1(U) reads as Γ_df_1⊔...⊔Γ_df_n over U. Suppose now that z is a branch point. Then the germ of Y near the ramification point over z is isomorphic to the germ at (0,0) of the zero set {(p^z)^2-z=0}. The other, smooth, sheets of Y over z are given by a disjoint union Γ_dg_1⊔...⊔Γ_dg_n-2 for some locally defined holomorphic functions g_1,...,g_n-2. Equip C̃ with a Kähler metric h=h_zz̅ dz dz̅. We can regard h as an isomorphism h:T^1,0_ℂC̃→(T^∗_ℂ)^1,0C̃. 
Following <cit.> and <cit.>, let W be a locally defined holomorphic function on C̃. Then the h-gradient of W is defined by ∇_h(W)=h^-1(dW). Given a curve z:[0,1]→C̃, a cotangent lift of z is an ordered pair (z_1,z_2) of lifts z_i:[0,1]→ Y such that z_1≠ z_2, or their common value is a branch point of the projection Y→C̃. A curve z:I→C̃, defined over an open interval I, with an ordered cotangent lift (z_1,z_2), is called a holomorphic (Morse) flow line of phase θ associated to Y if the following equation is satisfied: dz/dt=-e^-iθ∇_h(W_1-W_2), whenever the local holomorphic functions W_1, W_2 associated to the lifts z_1 and z_2 are defined. Now we restrict to the case Y=Σ_ϕ. Given a point z∈ C^∘, the quadratic differential ϕ determines local functions W^± such that W^±(z)=0 and ∂_z W^±=±√(ϕ). Furthermore, a GMN quadratic differential admits a conformal coordinate near a zero which pulls back ϕ to zdz^2. Recall that the GMN equation is the ODE Im(e^-2iθϕ(γ'))=0. Holomorphic flow lines associated to the spectral curve satisfy the GMN equation (<ref>). In particular, holomorphic Morse flow lines of phase 0 lie on horizontal trajectories. This follows from the identity e^2iθϕ(z)(dz/dt)^2= 2h^-2ϕ(z)ϕ(z)̅, whose right-hand side is real and non-negative. §.§ The energy and boundary length estimate We prove the crucial energy and boundary length estimates. Suppose u:𝒵→ T^∗C̃ is a t-BPS disc ending at z for z∈ C(δ;E). Then Area(u)≤ 2Et. Let W be the primitive of λ_re on Σ_ϕ. By Stokes' theorem, Area_J(u)=∫ u^∗ω=t(W(z^+)-W(z^-))≤ 2Et, where the last inequality follows from Proposition <ref>. This finishes the proof. We now need an estimate on the length of the part of the boundary of u_t on tΣ_ϕ lying outside T^∗U(2δ)∩ tΣ_ϕ. There exists some c=c(δ,η)>0 such that for all sufficiently small 0<t≤ 1, the following holds. Let u be a t-BPS disc ending at z. Let ∂_3^hor be the union of the horizontal boundary components of _3. Then the length of u(∂_3^hor) outside (T^∗U(2δ)∪ T^∗B_η/2(z))∩ tΣ_ϕ is bounded above by c. Let K=T^∗U((2-η/2)δ)∪ T^∗B_η/3(z) and let l be the length of u(∂_3^hor) in the region outside (T^∗U(2δ)∪ T^∗B_η/2(z))∩ tΣ_ϕ. Recall that we had chosen η>0 such that g_δ^ϕ=g^ϕ over U((2-η)δ)^c. Outside K, the normal injectivity radius of tΣ_ϕ is r'_0t for some r'_0>0 independent of t. We take r_0=min(r_0',η/8,δ). On U((2-η)δ)^c, the W=∫√(ϕ)-coordinate brings J=J_std, tΣ_ϕ={y_1=± t, y_2=0} and ω=ω_std. Translating the chosen sheet to {y_1=y_2=0}, we see the following. * In a neighbourhood of the boundary ∂ K∩ tΣ_ϕ, there are charts of radius r_0 t contained in the complement of T^∗U((2-η)δ)∪ T^∗B_η/4(z) such that: J=J_std, g=g_std, tΣ_ϕ=ℝ^2⊂ℂ^2, and K=D× iℝ^2. Here ℂ^2 is given by the coordinates (x_1,x_2,y_1,y_2) and D is some open subdomain of ℝ^2. * Each point of tΣ_ϕ∩ (N_r_0K)^c admits a chart of radius r_0t/2 in the complement of T^∗U((2-η)δ)∪ T^∗B_η/4(z) such that: J=J_std, g=g_std, and tΣ_ϕ=ℝ^2⊂ℂ^2. We choose a non-negative support function β:T^∗C̃→ℝ_≥ 0 such that β=β(x_1,x_2) on each standard open chart chosen above, β is positive on K^c, β vanishes on K, and β is equal to 1 on (T^∗U(2δ)∪ T^∗B_η/2(z))^c. Such a β can be chosen so that ξ=sup|∇β| depends only on δ and η. Let ρ be the distance function from tΣ_ϕ. Note that in the above local charts, ρ=|y|=√(y_1^2+y_2^2). Let r≤ r_0t. We define the functions a(r), l(r), a^β(r) and l^β(r): a(r)=∫_{ρ≤β r}∩ u∩ K^c dA, a^β(r) =∫_{ρ≤β r}∩ u∩ K^cβ dA, l(r)=∫_{ρ= β r}∩ u∩ K^c dl, l^β(r) =∫_{ρ=β r}∩ u∩ K^cβ dl. For ϵ>0, let K_ϵ=N_ϵ(K). Since β>0 on K^c, it follows that ρ/β is Lipschitz on K_ϵ^c. 
Hence, applying the coarea formula, we get ∫_{ρ≤β r}∩ u∩ K_ϵ^c dA=∫_{ρ≤β r}∩ u∩ K_ϵ^c1/|∇ (ρ/β)|·|∇ (ρ/β)| dA=∫_0^r ∫_{ρ=βτ}∩ u∩ K_ϵ^c1/|∇ (ρ/β)| dl dτ. On {ρ=τβ}, |∇(ρ/β)|≤1/β(|∇ρ|+τ|∇β|)≤1+τξ/β. So, combining (<ref>)-(<ref>) and using the monotone convergence theorem, it follows that a(r)≥∫_0^rl^β(τ)/1+τξdτ and d/dra(r)≥l^β(r)/1+rξ a.e. Now, observe that rl^β(r) =∫_{ρ=β r}∩ u∩ K^c rβ dl≥∫_{ρ=β r}∩ u∩ K^c1/2 d^c (ρ^2) =∫_{ρ≤β r}∩ u∩ K^c1/2dd^c(ρ^2)=∫_{ρ≤β r}∩ u∩ K^cω_std=a(r). For the inequality in (<ref>), we use |d^c (ρ^2)|≤ 2ρ, which follows from d^c (ρ^2)=d^c(|y|^2)= 2∑ y^i dx^i. Arriving at the first equality in (<ref>) is a bit more involved. We first use Stokes' theorem: ∫_{ρ≤β r}∩ u∩ K^c dd^c(ρ^2) = ∫_{ρ≤β r}∩ u∩∂ K d^c(ρ^2)+∫_{ρ≤β r}∩∂ u∩ K^cd^c(ρ^2) +∫_{ρ=β r}∩ u∩ K^cd^c(ρ^2). Now d^c(ρ^2)=0 on {ρ≤β r}∩ u∩∂ K: since β=0 on ∂ K, we have ρ=β r=0 there. Furthermore, d^c(ρ^2)=0 on {ρ≤β r}∩∂ u∩ K^c as well, since this set is contained in tΣ_ϕ. Hence the first two terms in (<ref>) vanish and we arrive at the first equality in (<ref>). For the second equality, note that 1/2 dd^c(|y|^2)=ω_std, and for J-holomorphic curves the area density is just equal to u^∗ω. So we get (<ref>)–(<ref>). Combining (<ref>) and (<ref>)–(<ref>), we see that ra'(r)≥a(r)/1+r ξ. Hence we get the differential inequality d/drlog(a(r)·ξ r+1/r)≥ 0, which implies that the function r↦ a(r)·ξ r+1/r is nondecreasing. Now, if r<ξ^-1, then we get 2a(r)/r≥lim_s→ 0Area(u;(T^∗U(2δ)∪ T^∗B_η/2(z))^c∩{ρ≤ s})/s⇒ 2a(r)/r≥ l. The total energy of u is bounded above by 2Et by Proposition <ref>. Setting r=r_0 t, it follows that l<4Er_0^-1. Set c=4Er_0^-1. This finishes the proof. There exists a compact subset K=K(δ,ϕ,E)⊂C̃ containing C(δ;E) such that if u is a t-BPS disc ending at z for z∈ C(δ;E), then u lies in P=D_1^∗K^∘ for all small enough t. The rescaled spectral curves tΣ_ϕ for 0<t≤ 1 lie inside the unit disc bundle D_1^∗C̃. By the integrated maximum principle, we see that the disc u must also lie in the unit disc bundle D_1^∗C̃. Let V be a sufficiently small neighbourhood of the poles of ϕ, lying outside the region C(δ;E)∪ U(2δ), such that g|_V=g^ϕ. Let K_1 be the complement of V. The spectral curve tΣ_ϕ is (Gt,H)-isoperimetric outside T^∗K_1, for sufficiently small t>0 and some G,H>0 independent of t. Furthermore, by Proposition <ref>, the total energy of u is bounded above by 2Et. So we can apply the proof of Proposition <ref> to see that the discs cannot leave D_1^∗K for some precompact open subset K containing K_1. Set P=D_1^∗K^∘. §.§ Gradient estimate We now follow <cit.> to prove the gradient estimates, which will be needed for the rest of the section. We will only consider the fibres F_z for z∈ C(δ;E). From Proposition <ref>, we see that the discs of our interest are contained in a precompact neighbourhood P of C(δ;E) in T^∗C̃. For this reason, from now on we only consider smooth maps that map into P. We start with the following gradient estimate: <cit.> There exists some ħ>0 such that for all 0<t≤ 1, the following inequalities hold. * If u:A_r→ T^∗C̃ is a J-holomorphic disc, then Area(u)<ħ⇒|du(0)|^2≤8/π r^2∫_A_r|du|^2. * If u:E_2r→ (T^∗C̃,tΣ_ϕ) is a J-holomorphic half-disc with u(∂ E_2r)⊂ T^∗U(2δ)^c, then Area(u)<ħ⇒sup_E_r|du|^2≤8/π r^2∫_E_2r|du|^2. The same statement holds replacing tΣ_ϕ with F_z for z∈ C(δ;E). The Sasaki almost complex structure J already satisfies the conditions in <cit.>: outside T^∗U(2δ), tΣ_ϕ is totally geodesic, JT(tΣ_ϕ) is orthogonal to T(tΣ_ϕ), and J is skew-adjoint with respect to g^S. 
Then by <cit.>, there exists some ħ=ħ(g^ϕ_δ,η)>0 such that the statement of Lemma <ref> holds. The same argument applies for F_z, z∈ C(δ;E). Fix now some ϵ>0. Suppose we have a t-BPS disc u ending at z, and suppose u admits a subdomain (E_ϵ,∂ E_ϵ)⊂ (_3,∂_3) such that u|_∂ E_ϵ maps outside T^∗U(2δ). Suppose t is small enough so that 2Et<ħ. By Proposition <ref>, the total energy of u is bounded above by 2Et, so u|_E_ϵ satisfies the conditions in Lemma <ref>. From this, we see that sup_E_ϵ/2|du| is bounded above by C(E,ϵ)t^1/2 for some constant C(E,ϵ)>0. The following estimate by Ekholm improves this O(t^1/2)-estimate to an O(t)-estimate. The crucial ingredient is that for u=(q,p), we get |p|≤ t from the integrated maximum principle (see also <cit.>). <cit.> Fix some positive constants ϵ,C_1,C_2>0. Then for sufficiently small t>0, the following holds. * Let u:A_8ϵ→ D_C_1t^∗C̃ be a J-holomorphic disc such that Area(u)<C_2t. Then there exists a constant k(ϵ,δ,η,ϕ,C_1,C_2)>0 such that sup_A_ϵ|Du|≤ kt. * Let u:E_8ϵ→ D_C_1t^∗C̃ be a J-holomorphic half-disc such that Area(u)<C_2t and u(∂ E_8ϵ) lies on either tΣ_ϕ outside T^∗U(2δ), or on F_z for z∈ C(δ;E). Then there exists a constant k(ϵ,δ,η,ϕ,C_1,C_2)>0 such that sup_E_ϵ|Du|≤ kt. Take t small enough so that C_2 t<ħ. The idea is to show that the geometric energy of u restricted to E_2ϵ is actually of size O(t^2). Applying Lemma <ref>, we then see that |Du| on E_ϵ is of size O(t), which is precisely (<ref>) in the case where ∂ E_8ϵ maps to either tΣ_ϕ or F_z. The proof is essentially the same as the proof of <cit.>. The case where ∂ E_8ϵ∩∂_m maps to tΣ_ϕ is unchanged. For the case where the boundary maps to F_z, note that since the energy of u is bounded above by C_2t on E_8ϵ, the C^1 norm of u_t on E_4ϵ is of size O(t^1/2) by Lemma <ref>. This implies that after taking a uniformly bounded conformal isomorphism Φ:E_4ϵ≃ E_1, the image of E_1 under u∘Φ^-1 remains O(t^1/2)-close to z. So for small t>0, we can ensure that for z∈ C(δ;E), the image of u∘Φ^-1 on E_1 maps inside T^∗C(δ;E). However, we have a local isometry G:(T^∗C̃,F_z)≃ (ℂ^2,iℝ^2) sending J to the standard almost complex structure on ℂ^2 (induced from taking the coordinate ∫√(ϕ) near z). Composing with this isometry, we get holomorphic maps v=G∘ u∘Φ^-1: E_1→ℂ^2, with imaginary part bounded above by C_1t. Furthermore, we can double along iℝ^2 to get maps v̂:A_1→ℂ^2. Let ṽ=t^-1v̂; then the imaginary part of ṽ is bounded above by C_1. Let F(z_1,z_2)=(e^iz_1,e^iz_2); then f=F∘ṽ is holomorphic. Furthermore, its image is uniformly bounded, since the imaginary part of ṽ is uniformly bounded, and so is the derivative of F on the image of ṽ. The L^2-norm of Df on the disc of radius 1/2 can be uniformly bounded by sup|f| by Cauchy's inequality[If f:A_1→ℂ is holomorphic and z∈ A_1/2, then |D^nf(z)|≤ n!·‖f‖_∞/(1/4)^n.]. Furthermore, since by the chain rule Df=DF(ṽ)Dṽ, and DF(ṽ) is invertible with uniformly bounded inverse (the imaginary part of ṽ is bounded), the norm of Dṽ is also bounded on A_1/2. So we see that there exists some k_1>0 such that ‖Dṽ‖_L^2,A_1/2≤ k_1. Now ‖Dṽ‖^2_L^2,A_1/2=t^-2‖Dv̂‖_L^2,A_1/2^2, hence ‖D(u∘Φ^-1)‖_L^2,E_1/2^2=‖Dv‖^2_L^2,E_1/2=1/2‖Dv̂‖^2_L^2,A_1/2≤1/2k_1t^2, where the first equality follows from v=G∘ u∘Φ^-1 and G being an isometry, and the second equality follows from v̂ being a doubling of v. Recall here that we had composed with a conformal equivalence E_4ϵ≃ E_1. Hence we have shown that the energy of u is of size O(t^2) on E_2ϵ, just as claimed. 
(The actual proof is more or less the same, except that there are some diffeomorphisms involved, sending the local graph t· graph(dg) uniformly to ℝ^n and comparing the almost complex structure with the standard almost complex structure J_0 on ℂ^n. The resulting function f in <cit.> is not fully holomorphic, but it is very close to one.) §.§ Domain subdivision To show Theorem <ref>, we argue by contradiction. We assume that there exists a sequence of positive real numbers t_n→ 0 and a sequence of points z_t_n∈ C(δ;E), converging to a point z∈ C(δ;E), such that there exist t_n-BPS discs u_t_n:_3→ T^∗C̃ ending at z_t_n. We will find a subsequence of (z_t_n,t_n) such that the corresponding discs lie strictly outside the desingularization region T^∗U((2+η)δ). In order to do this, we modify the construction in <cit.>, which will take the rest of Section <ref>. We introduce a uniformly finite number of punctures on the boundary of _3 mapping to tΣ_ϕ. The new domain _r admits a subdivision into domains D_0(t) and D_1(t). Throughout this construction, we have to make choices of some auxiliary functions δ_0(t). We now summarize their properties. * ∂ D_j(t)-∂_r consists of vertical line segments disjoint from the boundary minima. * (Corollary <ref>) Over D_0(t), we have sup_z∈ D_0|Du_t(z)|≤ kt. * (Lemma <ref>) The subdomain D_0(t) is mapped outside of U((2+1/2δ_0(t))δ) for some function δ_0(t) satisfying 0<δ_0(t)<η/10. * The subdomain D_1(t) is mapped inside U((2+9/2δ_0(t))δ). Construction of the domain subdivision. Now we begin the construction. Fix a constant 0<δ_0<η/10 such that u|_∂_3 is transverse to ∂(T^∗U((2+cδ_0)δ)) for c∈{1,2,3,4}. Let I≃ℝ be a boundary component of _3. Let b_1^c<b_2^c<....<b^c_n(c), c=1,2,3,4, be the points in I such that u(b_j^c) lies in the boundary ∂(T^∗U((2+cδ_0)δ)). Set b^c_k=∞ for any k>n(c). Let B_c={b_1^c,....,b^c_n(c)}, B=∪ B_c, and let c(b):B→{1,2,3,4} be the indexing function. For 2≤ c≤ 4, we add a puncture at b_j^c and at b_j+1^c whenever there exists some b_k^c-1 with b_j^c<b_k^c-1<b_j+1^c. Intuitively, we are adding punctures every time the image of the boundary enters the “level" ∂(T^∗U((2+cδ_0)δ)) at the point b_j^c and then leaves it at the point b_j+1^c. Note also that at b_j+1^c, the image of the boundary points outward. Removing the punctures, we arrive at a new domain _r=_3+m_1 with a holomorphic map u:_r→ T^∗C̃. It can be readily checked that the boundary components I of _r separate into three different types: * out: u(I)⊂ T^∗(C̃-U((2+3δ_0)δ)) * 0: u(I)⊂ T^∗(U((2+4δ_0)δ)-U((2+δ_0)δ)) * in: u(I)⊂ T^∗U((2+2δ_0)δ). One very important property is that the number of added punctures is uniformly finite. <cit.> There exists a constant R=R(δ_0)>0 such that the number m_1 of added punctures satisfies m_1≤ R. Each new puncture corresponds to a segment in u(∂_m) connecting the boundary ∂(T^∗U((2+cδ_0)δ)) to ∂(T^∗U((2+(c-1)δ_0)δ)), c=2,3,4. The lengths of these segments admit a positive lower bound given by min_c=2,3,4 d_g^ϕ_δ(∂(U((2+cδ_0)δ)),∂(U((2+(c-1)δ_0)δ))) by the definition of the Sasaki almost complex structure. The lemma then follows from the a priori bound on the total length of the boundary components outside T^∗U(2δ) (Lemma <ref>). Note that a boundary component I which maps into a fibre F_z for z∈ C(δ;E) is automatically an out boundary component. From now on, given a subset S⊂_r and l>0, let B_l(S) denote the l-neighbourhood of S in _r. For 1/4>d>0, let Ω_d=_r-⋃_I⊂∂_r B_d(I). 
Fix a small ϵ>0 such that for each puncture p adjacent to boundary components of type out or 0, the conformal domain B_ϵ(p) is uniformly conformally equivalent to E_ϵ/2(p), independently of t. Let Θ_ϵ=Ω_ϵ∪⋃_I∈out∪0 B_ϵ(I). We have from Theorem <ref> that: <cit.> There exists a constant k>0 such that if t>0 is sufficiently small, then sup_z∈Θ_ϵ|Du|≤ kt. By the integrated maximum principle, for u_t=(q_t,p_t) we have |p_t|≤ t (see also <cit.>). Now suppose t is small enough so that 2Et<ħ. By Proposition <ref>, the total energy of u_t is bounded above by 2Et, so u|_Θ_ϵ satisfies the conditions in Theorem <ref>, after restricting to a smaller neighbourhood of radius ϵ on the boundary which is uniformly conformally equivalent to E_ϵ/2. Now, at each of the boundary minima of _r, introduce a vertical ray in _r passing through the boundary minimum, connecting a boundary point to a boundary point, and consider the resulting subdivision of _r. Since the number of punctures is uniformly finite, so is the number of the components. Colour a component blue if the component contains an in horizontal boundary segment. Consider the union of all the blue connected components. Equivalently, let D'⊂_r be the union of all the vertical line segments in _r connecting a point in a type-in boundary component to some other boundary point on ∂_r. Observe that D' is the same as the union of all the blue subdomains. The set ∂ D'-∂_r is a collection of vertical line segments. We state <cit.> without proof, since the proof is word-for-word the same. For any 0<a<1 and for sufficiently small t>0, we have d(p,D')>t^-a for any point p∈ I, where I is a boundary segment of type out. In particular, a vertical segment l in ∂ D'-∂_r has its end points either on the boundary minimum of a boundary segment of type in, or on a boundary segment of type 0. Now, colour a component of the vertical ray subdivision red if the component contains an out horizontal boundary segment. The lemma states that the union of all the red components is separated from D' by a distance of at least t^-a. Note that t^-a grows much faster than log(t^-1). Let log(t^-1)≤ d ≤ 2log(t^-1) be chosen such that ∂ B_d(D')-∂_r and ∂ B_d/2(D')-∂_r consist of vertical line segments disjoint from all the boundary minima. Intuitively, we are taking a horizontal thickening of the blue subdomain by length d. Let D_0=_r-B_d/2(D') and D_1=B_d(D'). We see that if p∈∂ D_0∩∂_r, then p lies in a boundary component of type 0 or type out, and if q∈∂ D_1-∂_r then q lies in a boundary segment of type 0 or in. Note also that in passing from the blue subdomain to D_1, and from its complement to D_0, we have not increased the number of connected components. Hence the numbers of components of D_0 and D_1 are still uniformly bounded. Furthermore, sup_z∈ D_0|Du(z)|≤ kt. This follows from Lemma <ref>. The following is adapted from <cit.>. Again, the proof is word-for-word the same. u(D_1)⊂ T^∗U((2+9/2δ_0)δ) and u(D_0)⊂ T^∗(U((2+1/2δ_0)δ)^c) for sufficiently small t. The upshot is that D_1 is mapped inside the region which h-neighbourhoods of horizontal trajectories passing through z∈ C(δ;E) cannot enter, and D_0 is mapped into a region outside all the deformations, where the metric coincides with g^ϕ. Now, given the sequence of t_n-BPS discs u_t_n:_3→ T^∗C̃ ending at z_n, we apply the same subdivision procedure, letting δ_0 be a function of t which is a very small variation of the constant δ_0 such that 0<δ_0(t)<η/10. 
By taking a subsequence, we may assume that the number of added punctures is, in fact, constant. Construct a decorated graph by assigning red vertices to the components of D_0 and blue vertices to the components of D_1, and assign an edge between vertices if the intersection of the corresponding components is non-empty. Since there are only finitely many vertices, the number of all possible configurations is also finite. So by taking a subsequence if necessary, we may assume that the resulting graph is constant. Furthermore, we may also assume that the topology of the components of D_j(t) is constant. We have now finished the construction. §.§ Convergence to gradient flow lines In this subsection, we introduce the auxiliary subdomain W_0(t) of D_0(t) such that the components of D_0(t)-W_0(t) consist of strip-like domains, and show that these degenerate to solutions of gradient flow line equations. We also study the limits of the auxiliary subdomains W_0(t) and of the 0-special domains, which we recall to be the components of D_0(t)-W_0(t) that contain a horizontal boundary component labelled F_z. We have the domain subdivision _r= D_0(t)∪ D_1(t) constructed in Section <ref>. Let W_j(t) be a neighbourhood of the boundary minima of D_j(t) such that: * the boundary ∂ W_j(t) consists of arcs in ∂ D_j(t) and vertical line segments, * there is at least one boundary minimum on each component of W_j(t). For such W_j(t), D_j(t)-W_j(t) is a finite collection of strip regions. For a connected component W⊂ W_j(t), we define the width of W as the maximum distance from a vertical line segment in the boundary of W to a boundary minimum inside W. We define the width of the neighbourhood W_j(t) to be the maximum of the widths of the finitely many connected components of W_j(t). Given a vertical segment l≃{0}×[0,1]⊂ D_0(t)-W_0(t) with ∂ l⊂∂ D_0(t), let [-c,c]× [0,1]⊂ D_0(t) be a strip-like domain centred around l. With (s,τ)∈ [-c,c]× [0,1], we write u_t(s,τ)=(q_t(s,τ),p_t(s,τ)). Let tb_σ denote the (1-form) section of the sheet that contains u_t(0,σ) for σ=0,1. We have the following estimate due to Ekholm <cit.>, which describes the degenerative behaviour of the components of D_0(t)-W_0(t). For all sufficiently small t>0, we can find neighborhoods W_0(t) of the above type, with width at most 2log(t^-1), such that the following holds. Let Θ be a component of D_0(t)-W_0(t) that is not a 0-special domain. Then along any vertical line segment l⊂Θ, we have |1/t∇_τ p_t(0,τ)-(b_1(q_t(0,0))-b_0(q_t(0,0)))|=O(t), |1/t∇_s p_t(0,τ)|=O(t). In particular, if Θ=[-c_t,c_t]× [0,1] is a non-0-special component of D_0(t)-W_0(t), then the rescaled strips ũ_t=u_t(t^-1s,t^-1τ) on [-tc_t,tc_t]× [0,t] locally converge to a solution of the gradient-flow equation determined by b_σ. Observe that since the 1-form sections b_σ are holomorphic, the resulting gradient flow equation is a holomorphic gradient flow equation. The question is when we can ensure that b_0≠ b_1. This issue will be discussed in Section <ref>. We now deal with the 0-special domains. Let Θ⊂ D_0(t)-W_0(t) be a 0-special domain. Then lim_t→ 0d(u_t|_Θ,z)=0. The size of the derivative of u_t on Θ is O(t). Let l be a vertical line segment in Θ; then l intersects a boundary component labelled F_z. So any point on u_t(l) is O(t)-close to the point z. Since Θ⊂ℝ× [0,r], the length of a vertical line segment in Θ is bounded above by r. Hence, as t→ 0, u_t(l)→ z. Furthermore, the speed of convergence is independent of l, since it only depends on r and the O(t)-estimate. This finishes the proof. 
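To illustrate the limiting gradient-flow equation, the following sketch (ours; for the toy differential ϕ=zdz^2 with the flat metric, and with positive constants absorbed into the time parameter) integrates the phase-0 holomorphic flow line equation and verifies numerically that Im W is conserved, i.e. that the flow line lies on a horizontal trajectory, as in Lemma <ref>.

```python
import numpy as np

def flow_rhs(z):
    """Phase-0 flow for phi = z dz^2 with the flat metric: the gradient
    difference of the two sheets is proportional to conj(sqrt(z)), so
    dz/dt = -conj(sqrt(z)) after absorbing positive constants."""
    return -np.conj(np.sqrt(z))

def integrate(z0, dt=1e-4, steps=10000):
    """Naive Euler integration of the flow line starting at z0."""
    z = z0
    for _ in range(steps):
        z += dt * flow_rhs(z)
    return z

Im_W = lambda z: ((2.0 / 3.0) * z * np.sqrt(z)).imag  # Im of the primitive W

z0 = 2.0 + 1.0j
z1 = integrate(z0)
# d/dt Im W(z(t)) = Im(W'(z) dz/dt) = Im(-sqrt(z)*conj(sqrt(z))) = Im(-|z|) = 0,
# so Im W is conserved along the flow, up to the Euler discretisation error:
assert abs(Im_W(z1) - Im_W(z0)) < 1e-2
```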
Furthermore, we show that the domains W_0(t) are mapped very close to points in C̃. Let Θ be a component of W_0(t). Then, after taking a subsequence if necessary, there exists a point w∈C̃ such that lim_t→ 0 d(u_t|_Θ,w)=0. The widths of the domains W_0(t) are controlled by 2log(t^-1). From the O(t)-estimate, we see that the diameters of the images of the discs restricted to the domains W_0(t) are of size O(tlog(t^-1)). Since tlog(t^-1) converges to 0 as t→ 0, we see, after taking a subsequence if necessary, that u_t|_Θ uniformly converges to a point in C̃. §.§ Convergence to horizontal geodesics In this subsection, we further investigate the convergence of the strip-like domains in D_0(t)-W_0(t). We separate the non-0-special strip domains in D_0(t)-W_0(t) into vertex and non-vertex regions. We show that the vertex regions converge to points and the non-vertex regions converge to ϕ-horizontal geodesics. We modify the approach in <cit.>. We first fix some conventions. From now on, Θ means a strip domain of the form [a,b]× [0,1] with both a and b finite, [a,∞)× [0,1], or (-∞,b]× [0,1]. We write j for the standard complex structure on Θ given by z=s+iτ. We regard ℂ^2 as T^∗_ℂ^1,0ℂ and write J_0 for the Sasaki almost complex structure induced from the standard flat metric on ℂ. We take the complex coordinates z_1=x-ip^x and z_2=y-ip^y, where p^x,p^y are dual coordinates. As before, let C^∘ denote the complement of both the zeroes and the poles of ϕ, and let J_ϕ be the Sasaki almost complex structure on T^∗C^∘ with respect to the flat metric g^ϕ. We need the following technical proposition. Suppose u:Θ→ T^∗C^∘ is a (j,J_ϕ)-holomorphic map with the horizontal boundary components [a,b]×{0,1} mapping to Σ_ϕ. Then there exists a (j,J_0)-holomorphic map v: Θ→ℂ^2 such that the pointwise equality |Du|=|Dv| holds, and v maps the horizontal boundary components into {p^x=± 1,p^y=0}. In particular, the L^2 energies of the maps u and v agree. The same applies for tΣ_ϕ, with {p^x=± t,p^y=0} instead. We explain where we use Proposition <ref>. Recall that the strip-like domains Θ inside D_0(t)-W_0(t) are mapped outside of T^∗U(2δ). The restriction of u_t to Θ then satisfies the conditions in Proposition <ref>. So we obtain a holomorphic map v_t:Θ→ℂ^2 with the horizontal boundary components now mapping to {p^x=± t, p^y=0}. Observe that the Lagrangian boundary condition now splits globally into distinct sheets. We can use this global sheet splitting to prove the following two lemmas. Suppose there exists a subsequence of v_t such that the horizontal boundary segments of Θ under v_t map to the same sheet of {p^x=± t, p^y=0}. Then we show in Lemma <ref> that the corresponding subsequence of u_t|_Θ must uniformly converge to a point. On the other hand, suppose the horizontal boundary segments of Θ under v_t map to distinct sheets of {p^x=± t,p^y=0}. Then we show in Lemma <ref> that u_t|_Θ must stay C^0-close to a horizontal trajectory passing through points in C(δ;E). We now briefly explain the motivation behind Proposition <ref>. In the ∫√(ϕ) coordinate, Σ_ϕ splits into the two distinct hyperplanes {p^x=± 1, p^y=0}. The idea is simply to take the analytic continuation of ∫√(ϕ) along the disc. Assume that Θ=[0,1]× [0,1]. We regard the map u:Θ→ T^∗C^∘ as a section p(s,τ) of the pullback bundle (π∘ u)^∗(T^∗C^∘) over Θ, by taking the point (s,τ)∈Θ to the element u(s,τ)∈ T^∗_π∘ u(s,τ)C^∘. Since the domain Θ is contractible, we can choose a lift l(s,τ) of π∘ u to Σ_ϕ, as Σ_ϕ is a genuine cover over C^∘. We let (0,0)∈ [0,1]× [0,1] be our basepoint. 
For (s,τ)∈ [0,1]× [0,1], consider the smooth family of parametrized line segments L_(s,τ)(T)=T(s+iτ), T∈[0,1]. Consider the smooth map v(s,τ)=(v_1(s,τ),v_2(s,τ))=( ∫_0^1 (l∘ L_(s,τ))^∗λ ,p(s,τ)/l(s,τ)) into ℂ^2. Here we regard both p(s,τ) and l(s,τ) as complex vectors in the 1-dimensional complex vector space T^∗_π∘ u(s,τ) C^∘, and we compare their ratio. Note that l(s,τ) is never equal to zero, and that we can rewrite (<ref>) as (s,τ)↦ (∫_L_(s,τ)√(ϕ) dz,p(s,τ)/√(ϕ)), where we regard l as a sheet of √(ϕ). This clarifies the meaning of the map (<ref>). We show that it is (j,J_0)-holomorphic. Let (s,τ)∈ [0,1]× [0,1] and choose the ϕ-flat coordinate near π∘ u(s,τ) so that ϕ=dz^2, and the choice of √(ϕ) agrees with that of l. In this coordinate system, we may write u(s,τ)=(k(s,τ),p(s,τ))∈ℂ^2. Let x∈ T_(s,τ)([0,1]× [0,1]) and let h be sufficiently small. Let L_h(s,τ,x) be the line segment between (s,τ)+hx and (s,τ). The 1-chain σ(s,τ,x)=[L_(s,τ)+hx]-[L_h(s,τ,x)]-[L_(s,τ)] is null-homologous in [0,1]× [0,1] (see Figure <ref>). Since dλ=Ω vanishes on Σ_ϕ, ∫_σ(s,τ,x) l^∗λ=0. So we see that v_1((s,τ)+hx)-v_1(s,τ)=∫_L_h(s,τ,x) l^∗λ. For h≪ 1, l≡ 1 in the chosen coordinate, and so the right hand side just computes k((s,τ)+hx)-k(s,τ). Dividing both sides by h and sending h to zero, we get D_xv_1= dk/dx. The computation is easier for v_2, since in the ϕ-flat coordinate v_2(s,τ)=p(s,τ). Thus D_x v=(dk/dx,dp/dx). Since u is J_ϕ-holomorphic, and the Sasaki construction is natural in the flat coordinate, the map (s,τ)↦(k(s,τ),p(s,τ))∈ T^∗ℂ is holomorphic with respect to the Sasaki almost complex structure associated to the standard flat metric on ℂ, and so the holomorphicity of v follows. Furthermore, it is straightforward to see that the norms of the derivatives agree. The general case where Θ=[a,b]× [0,1], [a,∞)× [0,1] or (-∞,b]× [0,1] is entirely analogous, since such a domain is conformally equivalent to [0,1]× [0,1], (-∞,0]× [0,1] or [0,∞)× [0,1], and for these domains the same argument applies. This finishes the proof. Now, given a strip region Θ⊂ D_0(t)-W_0(t), since |Du_t|=O(t), possibly after passing to a subsequence we see that, given a vertical line segment l⊂Θ, π(u_t(l)) is contained in an O(t)-ball around a point. Since this point lies outside U(2δ), we have two sheets of Σ_ϕ over this point. Call the region a vertex region if we can find a subsequence of t converging to 0 such that the endpoints of the vertical segments lie on the same sheet. We have the following lemma: Let Θ⊂ D_0(t)-W_0(t) be a vertex region and let ϵ>0. Then, after passing to a subsequence, there exists a point p∈ C(δ;E)^c such that u_t(Θ) is contained in an ϵ-ball around p in T^∗C̃. We modify the proof of <cit.>. After passing to a subsequence, we may assume that π(u_t(l)) converges to some p∈ U((2+η)δ)^c. Assume that for all small t>0, u_t|_Θ does not stay entirely in an ϵ-ball around p. Then there exists a sequence of points q_t∈Θ such that u_t(q_t) lies strictly outside the ϵ-ball around p for small enough t. By taking a subsequence, we may assume that u_t(q_t) converges to a point q. By the O(t)-estimate on the derivative of u_t restricted to D_0(t), the vertical line segment passing through q_t must map outside the ϵ/2-ball around p, and must also uniformly converge to the point q. Let Θ_t be the strip region inside Θ bounded by the vertical line segment l and the vertical line segment passing through q_t. We claim that there exists a k>0 such that Area(u_t;Θ_t)<kt^2. Suppose for now this is true, and consider the disjoint union of balls B=B_ϵ/4(q)∪ B_ϵ/4(p) in T^∗C̃. 
Again, by the O(t)-estimate and the convergence u_t(q_t)→ q, the boundary of u_t|_Θ_t is contained in B ∪ tΣ_ϕ for small enough t. In particular, since Θ maps outside T^∗U(2δ), u_t maps the horizontal boundary segments of Θ_t to the same sheet of tΣ_ϕ. Since each sheet of tΣ_ϕ over π(u(Θ)) is uniformly geometrically bounded, we see that the curve u_t restricted to Θ_t cannot leave some O(t)-small neighbourhood of B, by the boundary estimate (Proposition <ref>). For small enough t, such a neighbourhood of B is disconnected, but the image of u_t over Θ_t must be connected, a contradiction. To show the claim, let v_t:Θ_t→ℂ^2 be the holomorphic maps obtained from u_t via Proposition <ref>. We know that the norms of the derivatives of v_t and u_t agree, and that Area(v_t)=Area(u_t). The advantage is that now we are looking at holomorphic maps of Θ into ℂ^2, with the horizontal boundary components mapping into {p^x=± t, p^y=0}. Furthermore, we have a primitive of the real Liouville form, simply given by ± tq^x. Observe that if the endpoints of the vertical segment l lie on a single sheet, say {p^x=+t, p^y=0}, then v_t must map both horizontal boundary components into {p^x=+t, p^y=0}. Let Θ_t=[a_t,b_t]× [0,1]. By Stokes' theorem, we have Area(v_t) = ∫_∂([a_t,b_t]×[0,1]) p dq =-∫_{a_t}×[0,1] pdq+ ∫_{b_t}×[0,1] pdq ± t(q^x(v_t(a_t,1))-q^x(v_t(b_t,1))-q^x(v_t(a_t,0))+q^x(v_t(b_t,0))). Now, since |p|=O(t) and |Du|=O(t) over D_0(t), the first two terms are O(t^2). For the four terms after that, note that t|q^x(v_t(a_t,1))-q^x(v_t(a_t,0))|≤ t·sup|Dv_t|; indeed, q^x(v_t(a_t,1))-q^x(v_t(a_t,0)) is controlled by the q^x-component of the velocity of v_t along the vertical segment over a_t, which has unit length. Similarly, t|q^x(v_t(b_t,1))-q^x(v_t(b_t,0))|≤ t·sup|Dv_t|. Since |Dv_t|=|Du_t|=O(t), the area of the Θ_t region must be of size O(t^2). We have the following statement on the adiabatic degeneration of non-vertex strip regions. Let Θ(t)⊂ D_0(t)-W_0(t) be a non-0-special, non-vertex strip region and let ϵ>0. Then, after passing to a subsequence, there exists a horizontal trajectory γ, passing through a point in C(δ;E), such that u_t(Θ) is contained in an ϵ-neighborhood of γ. The proof is a very small modification of <cit.>. We split into two cases. First, assume that Θ=[-c_t,c_t]× [0,1] is such that tc_t≤ K for some K. Write u_t=(q_t,p_t). Since u_t is J-holomorphic, we have ∂ q_t/∂ s+g^-1(∇_τp_t)=0, ∂ q_t/∂τ-g^-1(∇_s p_t)=0. Then consider the rescaling ũ_t=(q̃_t,p̃_t)=u_t(t^-1s,t^-1τ) defined on [-tc_t,tc_t]× [0,t]. We see from Proposition <ref> that |∂q̃_t/∂ s-Y|=O(t), |∂q̃_t/∂τ|=O(t). Here Y is the local gradient difference determined by the two local sheets of tΣ_ϕ. Pass to a subsequence for which both the rescaled lengths tc_t and the points u_t(-c_t,0) converge (recall that all the discs map into P; see Lemma <ref>). We see that the image of the strip region must lie in a small neighborhood of a flow line. Since these flow lines are contained in C(δ;E), they must correspond to a horizontal trajectory. We next consider the case where tc_t is unbounded. In this case, the strips map into the region where the gradient difference is bounded away from zero. Applying the same argument, we see that this cannot happen, since otherwise the length of the boundary would be unbounded, a contradiction. §.§ Proof of Theorem <ref> We now show the main theorem: (Theorem <ref>) Given E≫ 1 and δ≪ 1, there exist a metric g^ϕ_δ on C̃ and a deformation retract C(δ;E) of C̃-S(0), over which g^ϕ_δ=g^ϕ, such that the following holds. Let J be the Sasaki almost complex structure associated to g^ϕ_δ. 
Then there exists a scaling parameter t_0=t_0(δ;E)>0 such that for 0<t≤ t_0, there are no non-constant J-holomorphic discs bounded between F_z and tΣ_ϕ for z∈ C(δ;E). We argue by contradiction. Let u_t:_3→ T^∗C̃ be a sequence of t-BPS discs ending at z_t, with z_t∈ C(δ;E), such that z_t→ z and t→ 0. Let D_0(t), W_0(t) and D_1(t) be as in Sections <ref> and <ref>. Recall that the strip-like regions in D_0(t)-W_0(t) with an F_z_t-horizontal labelling were called 0-special. Consider the decorated graph 𝒢 constructed as follows: we associate a red vertex to each of the connected components of W_0(t) and D_0(t)-W_0(t), and a blue vertex to each of the connected components of D_1(t). We connect two vertices with an edge if the corresponding components are not disjoint. Since there are finitely many vertices, there are finitely many possible configurations, and so by taking a subsequence, we can ensure that the graph configuration remains constant. Now, choose a red vertex x corresponding to a 0-special component. We argue by induction that for any finite length path P beginning at x and ending at y with all vertices red, there exists a subsequence of u_t such that for small enough t, the restriction of u_t to the connected component corresponding to y lies over C(δ;∞). Suppose the length of P is one. By Lemma <ref>, the 0-special regions inside D_0(t)-W_0(t) converge to the point z. This proves the case when the length of P is equal to 1. Suppose now the claim holds for any red path of length less than or equal to k-1, and consider a red path of length k. The red path of length k-1 connecting x to the penultimate vertex p satisfies the inductive hypothesis. Let D_0(y) be the component corresponding to y. Suppose D_0(y) is a component of W_0(t). Then by Lemma <ref>, the region D_0(y) must converge to a point on C(δ;E). Now suppose D_0(y) is contained in D_0(t)-W_0(t) and that D_0(y) is not 0-special. Then D_0(y) is either a vertex region or a non-vertex region. Since y is red, by taking a subsequence, the restriction of u_t to D_0(y) admits a reparameterization that converges to a holomorphic flow line (Lemma <ref>) in the case where y is a non-vertex region, or to a point in C(δ;∞) in the case where y is a vertex region (Lemma <ref>). In the former case, since the restriction of u_t to the component of p lies over C(δ;∞) for small enough t, the holomorphic flow line must lie on a horizontal trajectory which passes through a point in C(δ;∞). However, such a horizontal trajectory belongs entirely to C(δ;∞), proving the claim. Suppose now that the set of blue vertices is non-empty. Then there exists a finite path of minimal length beginning at x and ending at a blue vertex y, such that all the intermediate vertices are red. Let p be the penultimate vertex in the path, let D_0(p) be the component corresponding to p, and let D_1(y) be the component corresponding to y. Since the restriction of u_t to D_0(p) maps over C(δ;∞) for small enough t, it follows that this component cannot intersect D_1(y) for small enough t, a contradiction. Therefore, the set of blue vertices is empty, and for small enough t, u_t maps entirely into T^∗C(δ;E). In fact, the argument implies that u_t lies in a small neighbourhood of the unique horizontal trajectory γ passing through z. So we see that we can find a t_0>0 such that if t<t_0, then the t-BPS disc ending at z lies entirely outside T^∗U(2δ) and lies over a small vertical neighbourhood of γ. 
On this neighbourhood, ϕ=dz^2, and so we reduce to the case of holomorphic discs of finite energy u:ℛ→ℂ^2=ℂ(z=x-ip^x)⊕ℂ(w=y-ip^y) with the following boundary conditions: u extends to a continuous map on the closed half-disc ℋ∩D̅_1 mapping [-1,1] to {p^x=± t, p^y=0} and mapping the upper arc {r=1, Im≥ 0} to x=y=0. However, no such non-constant disc exists, by the maximum principle. So we have arrived at a contradiction, finishing the proof. § WALL-CROSSING ANALYSIS In this section, we compute the Floer cohomology local system z↦ HF(Σ,F_z) and prove the main theorem. (Theorem <ref>) Let Σ_ϕ be the spectral curve associated to a real-exact GMN quadratic differential on a closed Riemann surface C. Given a small deformation parameter δ>0 and a large energy cut-off E≫ 1, there exists a t_0>0 and a collection of points 𝒫_C=𝒫_C(δ;E) (with lifts P_Σ_ϕ^∘) such that the following holds. Let ℒ=ℒ(P_Σ_ϕ^∘) be a path groupoid representation of an almost flat GL(1;ℂ)-local system, 𝔰 be a spin structure on C, and ℬ be an almost flat GL(1;ℤ)-local system. For 0<t<t_0, HF_t(Σ_ϕ,ℒ,𝔰, ℬ, 𝒫_C;ℂ) and ℒ(P_Σ_ϕ^∘) form a 𝒲-pair, or equivalently, HF_t(Σ_ϕ,ℒ,𝔰,ℬ;ℂ) is a non-abelianization of ℒ. To do this, we must show that the Floer-theoretic parallel transport along a path α contained in C(δ;E) is given by the pushforward of ℒ, and that the Floer-theoretic parallel transport along the “short paths” (see Section <ref>) admits the form (<ref>). In Section <ref>, we define and study the relevant passive continuation strips. In Section <ref>, we set up some conventions, fix the branch cut and the sheet ordering data once and for all, and specify the path groupoid generators on C that we will use throughout the section. In Section <ref>, we specify the Floer data that we will use for the Lagrangian pair (tΣ_ϕ,F_z). In Section <ref>, we study the moduli problem for parallel transports along the “short paths”. The main result is Proposition <ref>, which explains the form (<ref>) up to sign. In Section <ref>, we study the moduli problem for parallel transports along arcs contained in C(δ;E). The main result is Proposition <ref>; we show using Theorem <ref> that for infinitesimal fibre parallel transports, the relevant continuation strips are all constant strips. This explains the form (<ref>) up to sign. In Section <ref>, we specify the necessary grading and spin structure data in order to compute the Floer cohomology local system. In Section <ref>, we define the grading functions. In Section <ref>, we introduce a finite subset M_C of C, a good open cover {G_α}_α∈ M_C [An open cover such that any arbitrary finite intersection is contractible.] and a “good” local framing on C, using the material from Sections <ref>-<ref>. We use this data to define spin structures as Čech cocycles. In Section <ref>, we use the Čech formalism to prove Lemma <ref>, which states that for constant passive continuation strips, the sign difference between the Floer-theoretic parallel transport map induced from π^∗𝔰 and 𝔰̃ is given precisely by Φ^ℬ. In Section <ref>, we use the sign difference lemma <ref> to compute the Floer-theoretic parallel transport maps and prove Theorem <ref>. §.§ Moduli problem for parallel transports In this subsection, we define and study the various moduli problems for Floer-theoretic parallel transport maps associated to horizontal and vertical geodesic arcs on the base. We use the conventions from Section <ref>.
§.§.§ Wall-chamber data In this section, we fix the branch cut data, a choice of a “positive sheet” of √(ϕ) for each component of C-S(0), the set of base points 𝒫_C for the path groupoid over C̃, and the generators for the path groupoid morphisms. We will need the following definition. Let γ be an oriented horizontal trajectory in C. Then the positive sheet +√(ϕ) along γ is the unique sheet of √(ϕ) such that the line element √(ϕ)(γ(s))·γ'(s)ds is real and positive, for any smooth parametrization of γ that respects the chosen orientation. Let 𝒱⊂ C^∘ be a vertical neighbourhood of γ, and let +√(ϕ) be the positive sheet along γ. Then we say that a point z in 𝒱-γ lies above γ if the integral ∫ Im(+√(ϕ)) along the unique vertical segment between a point on γ and z is positive. Otherwise, we say the point z lies below γ. Note that for small enough 𝒱, 𝒱-γ consists of two connected components: the one that lies above γ, and the one that lies below γ. Now let w be a wall on S(0). We always orient the walls in the outward direction, travelling away from the branch points. This orientation on the wall w picks out a unique positive sheet of √(ϕ) along w. Let w be a wall. We define 𝒵^h(w) to be the unique component of C-S(0) containing the points that lie above w. Note that the conformal equivalence in Proposition <ref> defined using the positive sheet +√(ϕ) sends w to the bottom right corner. Let 𝒵^h be a component of C-S(0). Then 𝒵^h=𝒵^h(w) for at most two walls. In fact, w is unique if and only if 𝒵^h has the conformal type of the upper half plane. Given a conformal equivalence of 𝒵^h with a finite horizontal strip (which sends ϕ to dz^2), w corresponds to either the right bottom boundary or the left upper boundary. Reversing the parametrization by z→ -z swaps the two. Branch-cut data We fix the branch-cut data. Let b be a zero of ϕ. By Proposition <ref>, there exists a neighbourhood (U_b,ϕ) of the zero and a biholomorphism (U_b,b,ϕ)≃ (D,0,zdz^2) whose germ at b is unique up to a phase factor of e^{2π i k/3}, k=0,1,2. Choose a phase factor once and for all and introduce a branch cut on the negative real axis. Label the wall corresponding to the positive ray ℝ_>0· e^{i· 0} by w_0, the wall corresponding to ℝ_>0· e^{i· 2π/3} by w_1, and the wall corresponding to ℝ_>0· e^{i· 4π/3} by w_-1. Label the two sheets of √(ϕ) by + and - with respect to the branch cut. Let v be a positive tangent vector along a wall. If +√(ϕ)(v) is positive, we label the wall -+. If +√(ϕ)(v) is negative, we label the wall +-. So we see that at each vertex of the spectral network S(0), the three walls are labelled +-, +- and -+. In particular, w_0 is now labelled -+ and w_1 and w_-1 are labelled +-. We do this for each of the zeroes of ϕ. See Figure <ref>. Chamber sheet data We fix the “positive sheet” of √(ϕ) over each component of C-S(0). Let 𝒵^h be a component of C-S(0) and choose an oriented generic horizontal trajectory γ(𝒵^h). This orientation picks out a positive sheet of √(ϕ) along γ(𝒵^h). We use the positive sheet of √(ϕ) with respect to γ(𝒵^h) to identify 𝒵^h with the corresponding horizontal subdomain in ℂ. From now on, we'll write 𝒵^h(δ;E)=C(δ;E)∩𝒵^h, and we will abuse notation and let 𝒵^h also denote its representative as a horizontal subdomain in ℂ. On the other hand, when we write 𝒵^h(w) for w a wall, we will find its representative as a horizontal subdomain in ℂ, using the trivialization induced from the positive sheet of √(ϕ) along w.
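As a quick illustration of the above-below convention (an illustration only, not part of the construction), consider the flat model ϕ=dz^2 on a horizontal strip:
\[
  \phi = dz^{2}, \qquad \sqrt{\phi} = \pm\, dz .
\]
% Orient the trajectory left to right: gamma(s) = s + ic, gamma'(s) = 1. Then
\[
  (+dz)\big(\gamma'(s)\big) = 1 > 0 ,
\]
so +dz is the positive sheet along γ. For a point z=x+iy' on the vertical segment through γ,
\[
  \int_{c}^{y'} \operatorname{Im}(+dz) \;=\; y' - c ,
\]
which is positive exactly when y'>c: points of larger imaginary part lie above γ, matching the definition.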
Note that there is a single wall w on the boundary of the closure of 𝒵^h which respects the choice of +√(ϕ) along γ(𝒵^h) and makes 𝒵^h=𝒵^h(w). Path groupoid data We now fix the path groupoid base points 𝒫_C and the path groupoid generators. Let 𝒱^v be a component of 𝒱(δ;E), and let w be its core wall. On this component, we take the trivialization induced by the positive sheet of √(ϕ) with respect to w. For each wall w, we choose a point b(w)∈𝒱(δ;E) and the adjacent points b^u(w)=b(w)+iη(w) and b^d(w)=b(w)-iη(w) for some η(w)>0. We choose them in such a way that adjacent points contained in a common component 𝒵^h(δ;E) are connected by either a vertical or a horizontal arc fully contained in 𝒵^h(δ;E). We can always arrange this by taking δ much smaller than min|b-a|/2, where the minimum is taken over all the horizontal strips 𝒵(a,b) as in Proposition <ref>. Having made these choices, we choose the set 𝒫_C:={b^∙(w): w is a wall, ∙=u,d} to be the set of base points for the path groupoid of C̃. Let w be a wall, and let w' be a wall that lies on the left bottom boundary of 𝒵^h(w). The arc α(w,w') is the unique horizontal arc contained in 𝒵^h(w)∩ C(δ;E) connecting b^u(w) to b^d(w'). The arc α(w) is the shortest vertical arc connecting b^d(w) to b^u(w). The arc γ(w,w”) is the unique vertical arc contained in 𝒵^h(w) connecting b^u(w) to b^d(w”), where w” is the wall on the right top boundary of 𝒵^h(w). For any other w', we set α(w,w') and γ(w,w') to be the empty set. We then set the path groupoid generators to be {α(w,w')^± 1,γ(w,w')^± 1,α(w)^± 1: w, w' walls on S(0)}. §.§.§ Floer data We now study the passive continuation strips associated to fibre parallel transports. To define the moduli spaces, we fix a regular Floer datum for the pair tΣ_ϕ and F_z for z∈ C(δ;E) and 0<t<t_0(δ;E). We start with the following lemma: There exists an auxiliary function ρ:[1,∞)→ [1,∞) satisfying ρ(r)=r for r≫ 1 such that u is a J_g^ϕ_δ-strip bounded between tΣ_ϕ and F_z if and only if it is a J_con-strip bounded between tΣ_ϕ and F_z, where J_con is the conical deformation of J_g^ϕ_δ obtained using ρ. Choose a smooth, positive increasing function ρ:[1,∞)→ [1,∞) such that ρ(r)=1 for r<3 and ρ(r)=r for r>5. Let J_con be the ρ-conically deformed almost complex structure. By Lemma <ref>, since J_con is of general contact type, the discs lie in D_2.5T^∗C̃, where J_con=J_g^ϕ_δ. This finishes the proof. For z∈ C(δ;E) and 0<t<t_0, the Floer datum (tΣ_ϕ,F_z,J_con) is regular. By Theorem <ref> and Lemma <ref> we see that there are no non-trivial J_con-holomorphic strips bounded between F_z and tΣ_ϕ for z∈ C(δ;E) and 0<t<t_0. So all the strips are constant; these are regular because the local configuration near each intersection point coincides with the intersection of ℝ^2 and iℝ^2 in ℂ^2 equipped with the standard complex structure. We now fix some 0<t<t_0 for the rest of the section. Following Corollary <ref>, we set the Floer datum to be (tΣ_ϕ,F_z,J_con). §.§.§ Moduli problem for arcs α(w) We now study the moduli problem associated to Floer-theoretic parallel transports along the arcs α(w). As before, let w be a wall and let 𝒱^v(w) be the component of 𝒱(δ;E) containing w as its core. Choose a bump function ξ: 𝒱^v(w)→ [0,1] such that ξ=1 in a small neighbourhood of the arc α(w). Trivialize 𝒱^v(w) using the flat coordinate ∫√(ϕ) with respect to the outward orientation on the wall w. The outward orientation allows us to order the lifts z^± of z∈𝒱^v(w).
Note that the canonical ordering introduced in Proposition <ref> agrees with the ordering of the lifts z^±. Consider the Hamiltonian S^v=ξ(x,y)p^y. Recall that 2η(w) is the distance between b^d(w) and b^u(w). Let χ^v denote the Hamiltonian isotopy generated by S^v. The Hamiltonian isotopy χ^v has the following property: Σ_ϕ and its ℝ_>0-rescalings are invariant under χ^v. The Hamiltonian vector field is given by: X_S^v=-p^y(∂ξ/∂ x∂/∂ p^x+∂ξ/∂ y∂/∂ p^y)+ξ(x,y)∂/∂ y. However, on T^∗(𝒱^v(w)), Σ_ϕ equals {p^x=± 1, p^y=0}. So the vector field X_S^v restricts there to ξ(x,y)∂/∂ y, the flow of which preserves the set {p^x=± 1,p^y=0}. By Corollary <ref>, the Floer data (tΣ_ϕ,F_b^u(w),J_con) and (tΣ_ϕ,F_b^d(w),J_con) are all regular. Let J^short be a uniformly admissible family of almost complex structures on 𝒵 such that J^short(s,τ)=J_con for s≪ 0 and J^short(s,τ)=(χ^v)^∗J_con for s≫ 0. Let ℳ^short(w) be the moduli space of J^short-holomorphic maps u:𝒵→ T^∗C̃ satisfying the following boundary conditions: u(s,0)⊂ tΣ_ϕ, u(s,1)⊂ F_b^d(w), lim_s→ -∞ u(s,τ)∈ F_b^d(w)∩ tΣ_ϕ, lim_s→ +∞ u(s,τ)∈ F_b^d(w)∩ tΣ_ϕ. By Lemma <ref>, ℳ^short coincides with the moduli space of the passive continuation strip equation associated to tΣ_ϕ and χ^v. We choose a generic J^short so that ℳ^short is transversely cut out. We have the decomposition ℳ^short=ℳ^short,diag(w)⊔ℳ^short,nondiag(w), where ℳ^short,diag(w) is the moduli of passive continuation strips that travel from (tb^d(w))^± to (tb^d(w))^±, and ℳ^short,nondiag(w) is the moduli of passive continuation strips that travel from (tb^d(w))^± to (tb^d(w))^∓. Let ℳ^short,-(w) denote the moduli space of continuation strips that travel from (tb^d(w))^+ to (tb^d(w))^-. ℳ^short,diag(w) consists of constant maps and ℳ^short,-(w) is empty. If u∈ℳ^short,diag(w), then lim_s→ -∞ u(s,τ)=lim_s→ +∞ u(s,τ)=(tb^d(w))^±. This implies that ∫ u^∗ω=0. Since the energy vanishes and J^short is ω-compatible, the moduli space ℳ^short,diag(w) must consist of constant maps. By the same argument, the energy of a disc in ℳ^short,-(w) would be negative. By positivity of energy, ℳ^short,-(w) must be empty. §.§.§ Moduli problem for arcs contained in C(δ;E) We now study the moduli problem associated to Floer-theoretic parallel transports along arcs contained in C(δ;E). As before, let 𝒵^h be a horizontal chamber and let 𝒵^h(δ;E)=C(δ;E)∩𝒵^h. Let +√(ϕ) be the positive sheet of √(ϕ) picked out by γ(𝒵^h). Let z^± be the corresponding ordering on the lifts of z∈𝒵^h(δ;E). Choose a compactly supported smooth positive bump function ρ(𝒵^h):𝒵^h→ [0,1] once and for all such that ρ(𝒵^h)=1 on C(δ;E)∩𝒵^h and ρ(𝒵^h)=0 on 𝒵^h∩ U((2+η)δ). We will consider the following two Hamiltonians: H^h := ρ(𝒵^h)p^x, H^v := ρ(𝒵^h)p^y. By Proposition <ref>, we have the canonical ordering on the lifts of z to Σ_ϕ, for z∉ S(π/2). On the other hand, we also have the ordering on the lifts given by the choice of +√(ϕ)|_𝒵^h. For convenience, we may regard the first type of ordering as an energy ordering, and the second type of ordering as a sheet ordering. Note that for points contained in the right-hand side of 𝒵^h-S(π/2), the sheet ordering and the energy ordering coincide, but they become opposite when we cross S(π/2). For z∈𝒵^h(δ;E), let α^h_z and α_z^v be arc-length parametrized horizontal and vertical arcs, respectively, beginning at z. Let ψ^h_s denote the time-s flow of the constant Hamiltonian H^h. Similarly, let ψ^v_s denote the time-s flow of the constant Hamiltonian H^v. Note that the time-s flow ψ^∙_s sends F_z to F_α^∙_z(s), ∙=v,h.
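These invariance statements can be checked directly in coordinates. The following is a minimal sanity check, using the sign convention ι_{X_H}ω=dH implicit in the formula for X_{S^v} above; it is an illustration rather than part of the argument.
\[
  X_{H^h} \;=\; \rho\,\frac{\partial}{\partial x}
  \;-\; p^x\Big(\frac{\partial\rho}{\partial x}\,\frac{\partial}{\partial p^x}
  + \frac{\partial\rho}{\partial y}\,\frac{\partial}{\partial p^y}\Big),
\]
which matches the formula for X_{S^v} with the roles of the two base directions exchanged. On the region where ρ≡1 (in particular on C(δ;E)∩𝒵^h), the flow is the horizontal translation
\[
  \psi^h_s(x,y,p^x,p^y) \;=\; (x+s,\,y,\,p^x,\,p^y),
\]
so ψ^h_s(F_z)=F_{α^h_z(s)}, while the sets {p^x=c, p^y=0} (hence tΣ_ϕ over this region) are preserved. The vertical case H^v=ρ p^y is identical with x and y exchanged.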
From now on, the superscript ∙ will denote either v or h. Given z∈𝒵^h(δ;E), we will only consider those s∈ℝ such that α^∙_z(s)∈𝒵^h(δ;E). We will need the following formula. Let z∈𝒵^h(δ;E). The primitive W^∙_s of λ_re on (ψ^∙_s)^-1(t Σ_ϕ) at tz^± satisfies: W^∙_s(tz^±)=tW(ψ^∙_s(z^±)). In particular, W^h_s(tz^±)=tW(z^±)± ts and W^v_s(tz^±)=tW(z^±). This follows from Lemma <ref>: (W^∙_s∘ (ψ^∙_s)^-1)(tz^±) =tW(z^±)+∫_0^s ((H^∙-λ_re(X_H^∙))∘ (ψ^∙_σ)^-1)(tz^±) dσ. One can check directly that the integrand in the second term on the right hand side of (<ref>) vanishes. The next statement follows from the observation that W depends only on the horizontal distance to a zero of ϕ on the boundary of 𝒵^h; ψ_s^h changes this distance by s whereas ψ_s^v leaves this distance invariant (see Lemma <ref>). Choose a smooth non-decreasing elongation function l:(-∞,∞)→ [0,1] once and for all such that l(s)=0 for s≤ -2 and l(s)=1 for s≥ 2. Write J^∙_±ϵ(s,τ)=(ψ^∙_±ϵ l(s))^∗J_con, and consider the moduli spaces ℳ^∙_±ϵ,z of solutions of ∂̅_J^∙_±ϵ u=0, u(s,0)⊂ (ψ^∙_±ϵ l(s))^-1(tΣ_ϕ), u(s,1)⊂ F_z, lim_s→ -∞ u(s,τ)∈ F_z ∩ tΣ_ϕ, lim_s→ +∞ u(s,τ)∈ F_z ∩ tΣ_ϕ. Intuitively, these are the continuation strips that contribute to the parallel transport along the paths s↦α_z^∙(±ϵ s), ∙=v,h. Keep in mind that the flow ψ^∙_ϵ l(s) leaves tΣ_ϕ and J_con invariant over a neighbourhood of z. Hence, F_z∩(ψ^∙_±ϵ)^-1(tΣ_ϕ)=F_z ∩ tΣ_ϕ. As before, we split the moduli space ℳ^∙_±ϵ,z into the diagonal part and the non-diagonal part: ℳ^∙_±ϵ,z=ℳ^diag,∙_±ϵ,z⊔ℳ^nondiag,∙_±ϵ,z. The diagonal part consists of solutions of (<ref>) that travel from tz^± to tz^± with respect to the sheet ordering. The non-diagonal part consists of solutions of (<ref>) that travel from tz^∓ to tz^±. The moduli space ℳ^nondiag,∙_±ϵ,z further decomposes into ℳ^+,∙_±ϵ,z and ℳ^-,∙_±ϵ,z consisting of passive continuation strips travelling from z^- to z^+ and z^+ to z^-, respectively. We now state the main analytic result of Section <ref>. Given z∈𝒵^h(δ;E), there exists some ϵ(z)>0 such that for any 0<ϵ<ϵ(z), the following holds. * The moduli spaces ℳ^diag,∙_±ϵ,z consist of constant strips. * The moduli spaces ℳ^nondiag,∙_±ϵ,z are empty. In particular, the moduli spaces ℳ^∙_±ϵ,z are regular. For the proof of Proposition <ref>, we will need the following statement: Let z∉ S(π/2) and let ϵ_n be a sequence of positive real numbers converging to zero. Let u_n∈ℳ^+,∙_±ϵ_n,z be a sequence of non-constant J^∙_ϵ_n-holomorphic strips with respect to the energy ordering. Then u_n Gromov converges to a non-constant broken strip bounded between F_z and tΣ_ϕ. We will show Proposition <ref> in Section <ref>. We first treat the case of ℳ^v_ϵ,z. The case of ℳ^v_-ϵ,z is entirely analogous. We prove the first assertion in Proposition <ref>. Let u be a solution of the equation ∂̅_J^v_ϵ u=0, u(s,0)⊂ (ψ^v_ϵ l(s))^-1(tΣ_ϕ), u(s,1)⊂ F_z, lim_s→ -∞ u(s,τ)= tz^±, lim_s→ +∞ u(s,τ)= tz^±. The action of the pair ((ψ^v_ϵ)^-1(tΣ_ϕ),F_z) is given by W_ϵ^v and that of the pair (tΣ_ϕ,F_z) by tW=W_0^v. By Lemma <ref>, the geometric energy is equal to Area(u)= W_ϵ^v(tz^±)- W_0^v(tz^±) +ϵ∫_-∞^∞ H^v(u(s,τ))l'(s)ds=ϵ∫_-∞^∞H^v(u(s,τ))l'(s)ds. For small enough ϵ, (ψ^v_ϵ l(s))^-1(tΣ_ϕ)∩ supp(H^v) lies inside D_1^∗C̃ for all s. Hence sup H^v(u) is bounded above. So as ϵ→ 0, (<ref>) uniformly converges to zero. Now the Hamiltonian isotopy ψ^v_s leaves tΣ_ϕ and F_z invariant in some neighbourhood of F_z since the generating function is locally of the form ±ρ(x,y)p^y.
In this neighbourhood, the configuration (T^∗C̃,J^v_ϵ,(ψ^v_ϵ l(s))^-1(tΣ_ϕ),F_z) is isometric to the standard configuration (T^∗ℝ^2,J_std,{p^x=± t, p^y=0},{x=y=0}). In the latter configuration, the equation (<ref>) becomes the standard J_std-holomorphic strip equation with non-moving boundary conditions. Applying the boundary estimate, we see that the disc cannot escape such a neighbourhood of F_z, for small enough ϵ. However, in the standard configuration, there are no non-constant J_std-holomorphic strips. So we conclude that for small enough ϵ, ℳ^diag,v_ϵ,z must consist of constant strips. We now show that the non-diagonal part of ℳ^v_±ϵ,z is empty and hence prove the second assertion. We first treat the case z∈ S(π/2). The difference W(z^+)-W(z^-) of the primitive between the two lifts vanishes by Lemma <ref>. Then the difference of the actions of the intersection points vanishes at z, and the geometric energy is again of size O(ϵ). By the previous observation, it follows that for some ϵ(z)>0, all the passive continuation strips associated to the path s↦α^v_z (s) for 0<ϵ<ϵ(z) and s∈ [0,±ϵ] are constant strips. Now suppose that z∈𝒵^h(δ;E)-S(π/2). Without loss of generality, we assume that z lies on the right-hand side of 𝒵^h(δ;E)-S(π/2). We see that ℳ^-,v_ϵ,z must be empty for small enough ϵ by Equation (<ref>) below and positivity of energy. We now show that ℳ^+,v_ϵ,z is empty for small enough ϵ for z∉ S(π/2). Suppose there exists a strictly decreasing sequence of positive real numbers 0<ϵ_n<1, ϵ_n→ 0, such that the moduli spaces ℳ^+,v_ϵ_n,z are all non-empty. We have a sequence of J^v_ϵ_n-holomorphic strips u_n satisfying the equation ∂̅_J^v_ϵ_n u=0, u(s,0)⊂ (ψ^v_ϵ_n l(s))^-1(tΣ_ϕ), u(s,1)⊂ F_z, lim_s→ -∞ u(s,τ)=t z^±, lim_s→ +∞ u(s,τ)= tz^∓. However, by Proposition <ref>, the sequence of the strips u_n Gromov converges to a non-constant broken J_con-strip between F_z and tΣ_ϕ. By Theorem <ref>, such strips cannot exist, a contradiction. For the case ∙=h, the proof is entirely analogous except that W^h_ϵ(tz^±)-W^h_0(tz^±) is now equal to tϵ, by Proposition <ref>. Hence we get Area(u)= tϵ+ϵ∫_-∞^∞H^h(u(s,τ))l'(s)ds. The same monotonicity argument applies for diagonal continuation strips and for z∈ S(π/2). We treat ℳ^nondiag,h_ϵ,z as before, using (<ref>) and Proposition <ref>. This finishes the proof of Proposition <ref>. §.§.§ Proof of Proposition <ref> We now proceed with the proof of Proposition <ref> for ℳ_ϵ_n,z^+,∙. The case where ϵ_n is replaced by -ϵ_n is entirely analogous. We first establish lower and upper bounds for the energy. Recall that we had the following expression for the geometric energy: ∫_𝒵‖du_n‖^2_J^∙_ϵ_n=∫_𝒵 u_n^∗ω= W_ϵ_n^∙(tz^+)- W_0^∙(tz^+)+ϵ_n ∫_-∞^∞ H^∙(u_n(s,τ))l'(s)ds. From Lemma <ref>, we see that (<ref>) is bounded above by 2E+ϵ_n C for some C>0 and bounded below by ħ=(t/2)(W(z^+)-W(z^-)) for small enough ϵ_n. Note that this lower bound depends on the fixed t. We now carry out the “blow-up” analysis estimate: Let u_n be a sequence of J^v_ϵ_n-holomorphic strips with moving boundary conditions as above. Let p>2. Then there exists a constant C=C(p) such that ‖Du_n‖_∞≤ C. This is standard blow-up analysis so we sketch the proof. See <cit.> and <cit.> for details. Suppose by contradiction we have a subsequence of u_n and points v_n=(s_n,τ_n)∈ (-∞,∞)× [0,1] such that |Du_n(v_n)| blows up. Suppose v_n converges to a point v_0∈ (-∞,∞)× [0,1]. Note that from the set-up, the moving data appears only on [-2,2]× [0,1]. We first homogenize the problem.
Since the family J^v_ϵ_n is uniformly geometrically bounded, the images of the holomorphic discs u_n are contained in an a priori compact subset K of T^∗C̃. We consider the manifold [-2,2]× [0,1]× K and the compact submanifold 𝒦_n:={(s,p):s∈ [-2,2], p∈ψ^-1_ϵ_n l(s)(tΣ_ϕ)∩ K}, which is totally real with respect to j⊕ J_ϵ_n^v(s,τ). It is Lagrangian outside a compact subset since the isotopy ψ^v_s is compactly supported on tΣ_ϕ. Furthermore, as ϵ_n→ 0, the manifolds 𝒦_n converge in the C^∞ topology to the compact Lagrangian submanifold [-2,2]× (tΣ_ϕ∩ K)⊂ [-2,2]× [0,1]× K. The graph (s,τ)↦ (s,τ,u_n(s,τ)) is j⊕ J^v_ϵ_n-holomorphic. Identify a neighbourhood of v_0 with an open subset of the upper half-plane. By Hofer's lemma <cit.>, we have sequences c_n∈ℍ, positive real numbers e_n>0, and d_n=|du_n(v_n)| such that c_n→ v_0, ‖Du_n‖_B_e_n(c_n)≤ 2d_n, e_n→ 0, e_n d_n→∞. Consider the zoomed-in curve z↦ (c_n+z/d_n,u_n(c_n+z/d_n)). We split into two cases: either Im(c_n)d_n is unbounded or it is bounded. These are cases I and II, respectively, in <cit.>. In the first case, sphere bubbles develop, but we know that they cannot exist since [ω]·π_2(T^∗C̃)=0. In the second case, disc bubbles may develop, localised at a boundary point. Such a boundary point lies either on the moving boundary or on the non-moving boundary. Suppose the boundary point lies on the non-moving boundary, and s_n stays bounded. Then the boundary bubble is a J_con-disc with boundary on tΣ_ϕ or F_z. However, such discs cannot exist by exactness. So we arrive at a contradiction. Suppose now the boundary bubble develops at a point on the moving boundary. Since H_n(s,τ)=ϵ_nH^v(s,τ)→ 0 and 𝒦_n→ [-2,2]× (tΣ_ϕ∩ K) in the C^∞-topology, the configuration (j⊕ J^v_ϵ_n(s,τ),[-2,2]× [0,1]× K,𝒦_n) converges to (j⊕ J_con,[-2,2]× [0,1]× K,[-2,2]× (tΣ_ϕ∩ K)) in the C^∞-topology. By Gromov compactness for totally real submanifolds (<cit.> and Remark <cit.>), the disc bubble is a j⊕ J_con-disc with boundary on [-2,2]× (tΣ_ϕ∩ K). Since bubbles localize, the projection of the bubble to the [-2,2]× [0,1] component must be constant. Furthermore, projecting to T^∗C̃, we see that the T^∗C̃-component of the disc must be constant since the Lagrangian tΣ_ϕ is exact. So no bubbles can develop for s_n bounded. Now we treat the case where s_n is unbounded. Choose ξ>0 such that the ξ-neighbourhood of v_n does not intersect [-2,2]× [0,1]. Then consider the translated strips (ψ^v_ϵ_n∘ u_n)(s-s_n,τ) over [-ξ,ξ]× [0,1], which are now J_con-holomorphic and have non-moving boundary conditions on tΣ_ϕ and F_α^v_z(ϵ_n). Arguing as before, we see that no bubbles can develop, by exactness. This finishes the proof. We conclude: From Lemma <ref>, we see that the sequence u_n is equicontinuous on any compact subset of (-∞,∞)× [0,1]. By Arzelà-Ascoli, given some sequence N_n∈ℝ, the translated localised strips ψ_ϵ_n∘ u_n(s-N_n,τ)|_[-R,R]× [0,1] admit a subsequence that converges uniformly for all R>0. The Arzelà-Ascoli limit in the C^∞_loc-topology must be a J_con-holomorphic strip between tΣ_ϕ and F_z. We call such an Arzelà-Ascoli limit a local strip. [Smoothness is given by elliptic regularity. Showing that the endpoints are indeed the intersection points between F_z and tΣ_ϕ requires the exponential decay estimate at the transverse intersection points.] Consider a chain of such non-constant local strips (see <cit.>). By <cit.>, the length of the chain must be a priori bounded because of the uniform upper and lower bounds on the energy of u_n, and by <cit.>, there exists a maximal chain.
From <cit.>, we see that the strip-like ends of the local limits are consecutively glueable. Furthermore, according to <cit.>, the total energy of the maximal chain agrees with the limit of the geometric energy of u_n. Hence by the uniform lower bound on the energy of u_n, the broken strip must have positive total energy and therefore must be non-constant. This finishes the proof. §.§.§ Subdividing path groupoid generators With Propositions <ref> and <ref> established, we subdivide the arcs α(w,w') and γ(w,w') (see Definition <ref>) into smaller paths. Regard α(w,w') as a closed bounded interval in ℝ. By Proposition <ref>, there exists an open cover {I_z} of α(w,w') indexed by z∈α(w,w') such that if z'∈ I_z, then the passive continuation strips from z to z' are all constant. By Lebesgue's number lemma, there exists some δ(w,w')>0 such that any set of diameter <δ(w,w') is contained in some I_z. Take a partition of the interval α(w,w') into segments of length <δ(w,w'). Each subinterval of the partition belongs to some I_z. Choose one such I_z for each subinterval once and for all. By adding these points z, further refine the partition, and obtain a sequence of points b(w,w')^0,…,b(w,w')^m(w,w'), which are in increasing order regarded as points in the interval α(w,w'), with b^u(w)=b(w,w')^0 and b^d(w')=b(w,w')^m(w,w'). Then the points have the following property: for 0≤ i<m(w,w'), there exists 0≤ j≤ m(w,w') such that the passive continuation strips from b(w,w')^i to b(w,w')^j and from b(w,w')^i+1 to b(w,w')^j are all constant. Similarly, do the same for γ(w,w”) and obtain a sequence of points c(w,w”)^0,…,c(w,w”)^k(w,w”), which are in increasing order regarded as points in the interval γ(w,w”), with b^u(w)=c(w,w”)^0 and b^d(w”)=c(w,w”)^k(w,w”), so that for 0≤ i<k(w,w”), there exists 0≤ j≤ k(w,w”) such that the passive continuation strips from c(w,w”)^i to c(w,w”)^j and from c(w,w”)^i+1 to c(w,w”)^j are all constant. We will now write b(w,w')^k→ b(w,w')^l for the horizontal arc between b(w,w')^k and b(w,w')^l contained in α(w,w'), for w,w' walls and 0≤ k,l≤ m(w,w'), and similarly c(w,w”)^i→ c(w,w”)^j for 0≤ i,j≤ k(w,w”). §.§ Computation of family Floer cohomology local system In this section, we compute the family Floer cohomology local system and prove Theorem <ref>. In Section <ref>, we define the grading data for the spectral curve. In Section <ref>, we introduce a good open cover and the local framing data to define spin structures on the base and the spectral curve as Čech cocycles. In Section <ref>, we use spin structures to orient the moduli spaces of continuation strips and derive the sign comparison formula (Lemma <ref>). In Section <ref>, we use the sign comparison formula to prove Theorem <ref>. §.§.§ Grading Let I be the complex structure on C. We have the following almost complex structure on T^∗C̃: Ĩ:=[ I 0; 0 I^t ] with respect to the splitting TT^∗C̃=H⊕ V. Let ω_I be the non-degenerate 2-form defined by ω_I=g^S(Ĩ·,·), where g^S is the Sasaki metric on T^∗C̃. Let ω_Im denote the imaginary part of the holomorphic volume form Ω. Then the 2-form ω_I+iω_Im is non-degenerate and gives a preferred section of ω_T^∗C̃^⊗ 2. The corresponding phase for Σ_ϕ is constant since ω_Im|_Σ_ϕ=0, and so we choose the grading function to be the constant map 0. Similarly, we choose the grading function on any of the fibres to be the constant map 0 as well. This implies that the chain complex CF(Σ_ϕ,F_z) is concentrated in degree 0, for z∈ C^∘ (recall that C^∘ is the complement of the zeroes and the poles of ϕ).
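As a sanity check on the vanishing of the phases (an illustration only, in the flat local model and with the coordinates z=x-ip^x, w=y-ip^y used in the proof of Theorem <ref> above), one can compute the squared phase of both the base and fibre directions directly. With Ω=dz∧dw on ℂ^2, the squared phase of a Lagrangian plane L with real basis (e_1,e_2) is θ(L)=arg Ω(e_1,e_2)^2:
\[
  \Omega(\partial_x,\partial_y) = 1
  \quad\Longrightarrow\quad \theta = \arg(1^2) = 0 ,
\]
while dz(∂_{p^x})=-i and dw(∂_{p^y})=-i, so
\[
  \Omega(\partial_{p^x},\partial_{p^y}) = (-i)(-i) = -1
  \quad\Longrightarrow\quad \theta = \arg\big((-1)^2\big) = 0 .
\]
Both phases vanish, consistent with taking the constant grading 0 on Σ_ϕ and on the fibres F_z.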
§.§.§ Spin structures Open cover data With respect to the points b(w), b^u(w), b^d(w), b(w,w')^i and c(w,w')^j, we choose a finite subset M_C of points in C and a good open cover {G_α}_α∈ M_C such that the following conditions hold (see Figure <ref>). * The critical points of ϕ, and the points b(w), b(w,w')^i, c(w,w')^j, for w,w' a wall in S(0) and 0<i<m(w,w'), 0<j<k(w,w'), are all contained in M_C. * The open set G_α contains the point α, and does not contain any other β∈ M_C. * The open set G_α for α∈ zero(ϕ) is contained in U(δ). * The open set G_α for α∈ pole(ϕ) is contained in a small conformal coordinate chart near α as in Proposition <ref>. So is any other G_β such that G_α∩ G_β≠∅. * For α∉ crit(ϕ), the open set G_α intersects at most one wall and the covering π:Σ_ϕ→ C is trivial over G_α. * For w a wall, the open set G_b(w) contains the closed minimal vertical arc between b^u(w) and b^d(w) and is contained in 𝒱^v(w). This vertical arc does not intersect any other G_α. * The open sets G_b(w,w')^i and G_c(w,w')^j, for w and w' walls on S(0), are contained in C(δ;E). * For each b∈ zero(ϕ) and a wall w emanating from b, there exists a unique q(w)∈ M_C∩ w such that (G_q(w)∩ G_b)≠∅. Recall that at each branch point b∈ zero(ϕ), we made a choice for the branch cut, and that for each component 𝒵^h of C-S(0), we made a choice of an oriented generic horizontal trajectory γ(𝒵^h) which determined a positive sheet of √(ϕ) over 𝒵^h. With respect to this, we choose the following local orthonormal framing data: We define the good local frame on the open cover {G_α}_α∈ M_C to consist of the following data: * For α∈ zero(ϕ), we take the conformal equivalence (G_α,ϕ)≃ (ℂ,zdz^2) with respect to the choice of the branch cut. We take the local frame given by ⟨d/dx,d/dy⟩ on G_α with respect to z=x+iy. * For α such that G_α intersects a wall w, take the local conformal chart defined using the positive sheet +√(ϕ) along w. We take the pullback of the local conformal frame given by ⟨d/dx,d/dy⟩ with respect to z=x+iy. The frame is orthonormal outside U(δ)∩ G_α, but it is only orthogonal on U(δ)∩ G_α. On U(δ)∩ G_α, we conformally normalize and consider the resulting frame instead. * For those α such that G_α lies in the interior of some 𝒵^h, take the sheet of √(ϕ) induced from the chosen orientation of the generic horizontal trajectory γ(𝒵^h). We take the pullback of the conformal frame ⟨d/dx,d/dy⟩ for z=x+iy, with respect to the orientation of the trajectory. The frame is orthonormal outside U(δ)∩ G_α, but it is only orthogonal on U(δ)∩ G_α. On U(δ)∩ G_α, we conformally normalize and consider the resulting frame instead. A choice of a local orthonormal frame ⟨ e_1,e_2 ⟩ gives a local trivialization of the orthonormal frame bundle by sending A∈ SO(2) to the orthonormal frame ⟨ Ae_1,Ae_2 ⟩. Hence our good local frame gives rise to a Čech cocycle in Č^1({G_α}_α∈ M_C;SO(2)). From now on, let P_SO(2)(z) denote the SO(2)-torsor of orthonormal frames of T_z C and let ϕ_αβ denote the SO(2)-transition functions induced from the good choice of the local frame data (Definition <ref>). Spin structures Using the open cover {G_α;α∈ M_C}, we now describe spin structures on C̃ as Spin(2)-Čech cocycles in terms of {G_α;α∈ M_C}. A spin structure 𝔰 on C̃ is a Čech cocycle {ϕ̃_αβ} in the group Č^1({G_α;α∈ M_C};Spin(2)) lifting the cocycle {ϕ_αβ}. Choose a spin structure 𝔰. The corresponding Spin(2)-bundle is the bundle obtained by glueing the trivial copies of Spin(2)× G_α with respect to ϕ̃_αβ.
The fibrewise double cover structure is given as follows: we have maps G_α× Spin(2)→⋃_z∈ G_αP_SO(2)(z) and an identification ⋃_z∈ G_αP_SO(2)(z)≃ G_α× SO(2) induced by the good local frame, such that the composite agrees with the fibrewise double cover G_α× Spin(2)→ G_α× SO(2). We will still denote the resulting Spin(2) bundle as 𝔰. We now define the induced spin structure on each of the fibres. Identify the orthonormal coframes on the vector space T^∗_z C with the orthonormal frames on T_z C using the metric g, and let P_SO(2)(z)^-1 denote the fibre of the orthonormal coframe bundle over z. Then P_SO(2)(z)^-1 defines a trivial local frame on the cotangent fibre T_z^∗C regarded as a submanifold. In other words, the bundle F_z× P_SO(2)(z)^-1 defines a trivial SO(2) bundle over F_z. Let z∈C̃. The spin structure 𝔣_z is a trivial Spin(2)-bundle over T^∗_z C with the fibre torsor defined by the fibre of the P_Spin(2)C bundle over z, which is mapped to the orthonormal coframe torsor P_SO(2)^-1(z) over z via the maps Spin(2)→ SO(2)→ P_SO(2)(z) followed by the identification g:P_SO(2)(z)→ P_SO(2)^-1(z). We will need the following technical lemma for later computations. Let γ:[0,1]→ C be a smooth path with γ(0),γ(1)∈ M_C. Consider the complex vector bundle γ^∗(T^∗C) over [0,1] and the real subbundle given by F_γ(s). Consider the spin structure on F_γ(s) given by P_s=𝔣_γ(s), and let γ^-1(G_α), α∈ M_C, be the resulting open cover of [0,1]. Trivialize P_s by pulling back the trivialization of P_SO(2)(z)^-1 over {G_α}. Then the transition functions are given by γ^∗ϕ_αβ=ϕ_αβ∘γ. Spin structures—the spectral curve We lift the good open cover {G_α;α∈ M_C} to a good open cover {G̃_α̃;α̃∈π^-1(M_C)}. To do this, for α∉ crit(ϕ), we take G̃_α̃ for α̃∈π^-1(α) to be the component of π^-1(G_α) containing α̃. For b∈ zero(ϕ), we simply take the preimage π^-1(G_b). We explain how the explicit choice of local orthonormal frames (Definition <ref>) gives rise to spin structures on the spectral curve. We first define a convenient metric on Σ_ϕ. Let π^∗(g_δ)_reg be a conformal desingularization of the pullback metric π^∗(g_δ) such that π^∗(g_δ)_reg agrees with π^∗(g_δ) outside of π^-1(U(2δ)) and agrees with the pushforward of the metric π_∗|dp^z|^2 with respect to the map p^z→ ((p^z)^2,p^z) mapping from the p^z-plane to a local germ of a branch point over U(δ). The orthonormal frame bundle on Σ_ϕ, with respect to the metric π^∗(g_δ)_reg restricted to Σ_ϕ^∘, is isomorphic to the orthonormal frame bundle with respect to π^∗(g_δ). Hence the pullback spin structure π^∗𝔰 gives rise to a spin structure on (P_Σ_ϕ^∘SO(2),π^∗(g_δ)_reg). Note that we are regarding the pullback metric bundle (π^∗(TC^∘),π^∗(g_δ)) on Σ_ϕ^∘ as a subbundle of TΣ_ϕ with respect to the isomorphism dπ:T_z̃Σ_ϕ^∘→ T_π(z̃)C^∘ for z̃∈Σ_ϕ^∘. We now assign a Čech cochain representing π^∗𝔰. For α̃∈π^-1(M_C)-π^-1(zero(ϕ)), we can pull back the orthonormal frames in Definition <ref> and take a suitable conformal normalization of the basis to define the SO(2)-transition functions ψ_α̃β̃=ϕ_αβ∘π. For α̃ in π^-1(zero(ϕ)), we take the pushforward of the trivial frame ⟨d/dx,d/dy⟩ in the p^z-coordinate with respect to the map p^z→ ((p^z)^2,p^z). This defines a principal SO(2)-bundle structure on the orthonormal frame bundle of (Σ_ϕ,π^∗(g_δ)_reg). The transition functions ψ_α̃β̃ and their spin lifts ψ̃_α̃β̃:G̃_α̃∩G̃_β̃→ Spin(2) for α̃,β̃∈π^-1(M_C)-π^-1(zero(ϕ)) now define a spin structure on Σ_ϕ^∘. We can regard this as the pullback of the spin structure 𝔰 over (P_Σ_ϕ^∘SO(2),π^∗(g_δ)_reg).
Hence we get Čech cocycles ψ∈ Č^1({G̃_α̃}_α̃∈π^-1(M_C)-π^-1(zero(ϕ));SO(2)) and ψ̃∈ Č^1({G̃_α̃}_α̃∈π^-1(M_C)-π^-1(zero(ϕ));Spin(2)), given by the transition functions ψ_α̃β̃ and ψ̃_α̃β̃, respectively, such that ψ_α̃β̃=(ψ̃_α̃β̃)^2 under the double cover Spin(2)=U(1)→ SO(2)=U(1), z↦ z^2. We have the following lemma: The pullback spin structure π^∗𝔰 on Σ_ϕ^∘ does not extend to Σ_ϕ. The local frame on ℂ^∗ given by ⟨p^z/|p^z|,i· p^z/|p^z|⟩ for p^z∈ℂ^∗ maps to the frame ⟨ 2p^z,2ip^z⟩ in ℂ under the projection map π:p^z→ (p^z)^2, whose differential is dπ=2p^z. Take the unit circle S^1 in the p^z-plane. The trivial frame ⟨d/dx,d/dy⟩ gives rise to a section of (P_Σ_ϕ^∘ SO(2),π^∗(g_δ)_reg) which we regard as a constant map S^1→ 1. We regard the trivial spin structure on S^1 as the constant lift 1∈ S^1 of 1∈ S^1. On the other hand, the frame ⟨p^z/|p^z|,i· p^z/|p^z|⟩, for p^z∈ℂ^∗, restricted to the unit circle can be regarded as a map S^1→ S^1, z↦ z^-1. However, there is no lift of this map through Spin(2)→ SO(2), z↦ z^2. Hence we see that the induced spin structure on (P_Σ_ϕ^∘SO(2),π^∗(g_δ)_reg) is non-trivial when restricted to the unit circle in the p^z-plane, and hence does not extend to the whole of the spectral curve. Recall that ℛ is the coefficient ring, which is either ℤ or ℂ. We now reintroduce the notion of almost flat GL(1;ℛ)-local systems. A cocycle ℬ∈Č^1({G̃_α̃}_α̃∈π^-1(M_C)∩Σ_ϕ^∘;GL(1;ℤ)) given by transition functions ℬ_α̃β̃:G̃_α̃∩G̃_β̃→ GL(1;ℤ) is a Čech almost flat GL(1;ℤ)-local system if the induced GL(1;ℤ)-bundle on Σ_ϕ^∘ has monodromy -1 along small loops encircling the ramification points. At this point, we make the following choices. * We fix a reference Čech almost flat GL(1;ℤ)-local system ℬ once and for all. Let Q_ℬ be the induced GL(1;ℤ)-principal bundle. Then Q_ℬ comes equipped with the canonical flat Ehresmann connection, since GL(1;ℤ) is discrete. The induced Koszul connection on the associated ℤ-bundle is then flat, and so together with the parallel transport maps Φ^ℬ, we have defined a path groupoid representation of a GL(1;ℤ)-local system which we still denote as ℬ. Observe that the stalks of ℬ at α̃∈π^-1(M_C)∩Σ_ϕ^∘ are now identified with ℤ and so, with respect to these identifications, the parallel transport map Φ^ℬ lies in {± 1}. * Given the pullback spin structure π^∗𝔰∈Č^1({G̃_α̃}_α̃∈π^-1(M_C)∩Σ_ϕ^∘;Spin(2)) and the almost flat GL(1;ℤ)-local system ℬ∈Č^1({G̃_α̃}_α̃∈π^-1(M_C)∩Σ_ϕ^∘;ℤ_2), we choose 𝔰̃ to be a cocycle in Č^1({G̃_α̃}_α̃∈π^-1(M_C);Spin(2)) extending the cocycle ℬ_α̃β̃ψ̃_α̃β̃. * Given a path groupoid representation ℒ=(M_C,{ℒ_α̃}_α̃∈π^-1(M_C)∩Σ_ϕ^∘,ℒ(·)) of an almost flat GL(1;ℂ)-local system, let ℒ⊗ℬ be the GL(1;ℂ)-local system on Σ_ϕ^∘ induced from the tensor product of ℒ with the local system induced from ℬ. Since the monodromies of ℒ and ℬ around the ramification points cancel, ℒ⊗ℬ extends to Σ_ϕ and defines a global GL(1;ℂ)-local system. We will shortly see that any other choices of 𝔰̃ and ℒ⊗ℬ will yield an isomorphic local system. Note that as principal Spin(2)-homogeneous spaces, the fibre of 𝔰̃ is identified with the fibre of π^∗𝔰 over each x∈π^-1(M_C)∩Σ_ϕ^∘. For the rest of the section, we fix ℬ, ℒ, 𝔰̃ and ℒ⊗ℬ. §.§.§ Spin structures and orientation lines Using the grading and the spin structure, we now follow <cit.> to define the ℤ-graded chain complex CF(Σ,F_z) over ℤ. For z∈{M_C-crit(ϕ)}∪𝒫_C, let p∈Σ⋔ F_z be an intersection point. We regard T_p Σ_ϕ and T_pF_z as linear subspaces in V_p:=T_p T^∗C̃, which we regard as a complex vector space of dimension 2.
Furthermore, we regard (π^∗𝔰)_p and 𝔰̃_p as spin structures on the linear subspace T_p Σ_ϕ, and 𝔣_z as a spin structure on T_pF_z. (<cit.>) Let V be a complex vector space and let A be a grading on LGr(V). Then we call the triple (L,A,P) a linear Lagrangian brane, given a linear Lagrangian L in LGr(V), a grading A of L, and a principal Spin(n)-torsor P on L equipped with an isomorphism P×_Spin(n)ℝ^n≃ L. Since the grading functions for Σ_ϕ and F_z are both equal to zero, we see that the triples (T_pΣ_ϕ,0,(π^∗𝔰)_p), (T_p Σ_ϕ,0,𝔰̃_p), (T_p F_z,0,𝔣_z) form linear Lagrangian branes. We omit the grading function from this point onwards. We now choose an explicit path of Lagrangians and spin structures. With respect to the good local frame (see Definition <ref>), we use the path of Lagrangian subspaces L_T(p) given by the subspaces of T_p(T^∗C̃) generated by cos(π T/2)d/dx+ sin(π T/2)d/dp^x and cos(π T/2)d/dy+ sin(π T/2)d/dp^y for T∈ [0,1]. This Lagrangian path has a constant grading since it is Ĩ_p-invariant. We then use the following path of spin structures. Over (L_T(p),g_p), we fix the base point of the SO(2)-torsor of orthonormal frames over L_T(p) by identifying the basis ⟨cos (π T/2)d/dx+sin(π T/2) d/dp^x, cos (π T/2)d/dy+sin(π T/2) d/dp^y⟩ with the unit of SO(2). This gives a trivialization of the SO(2)-bundle associated to L_T over [0,1], and we take the trivial Spin(2) bundle, which gives a spin structure P_T over L_T. Notice then that (P_T)_0 is identified with 𝔰̃_p and (P_T)_1 is identified with (𝔣_z)_p. We do the same for any ℝ_>0-rescaling of Σ_ϕ. Observe that when z∈{M_C-crit(ϕ)}∪𝒫_C, the principal Spin(2)-homogeneous spaces π^∗𝔰_z and 𝔰̃_z are identified. So we choose the same path of Lagrangians, the same spin structure P_T on L_T, and the same pair of isomorphisms for each brane pair ((T_pΣ_ϕ,π^∗𝔰),(T_p F_z,𝔣_z)). The path L_T of Lagrangians gives a boundary condition for the Cauchy-Riemann operator ∂̅_∇ on the upper half-plane ℋ. This gives rise to an abstract real line D_ℋ. Let us consider the space 𝒫(L_0,L_1) of all the paths between L_0 and L_1 which satisfy the grading condition. Then the real lines D_ℋ form a line bundle on 𝒫(L_0,L_1). We then take the double cover consisting of the triples (P_T,f_0,f_1), over which the (pullback) line bundle D_ℋ becomes trivial. For details, see <cit.>. We then choose a trivialization once, and define the orientation line 𝔬_p to be the fibre of D_ℋ over the triple (P_T,f_0,f_1). The choice of the trivialization of D_ℋ makes 𝔬_p an oriented real vector space. Regarding the orientation as a choice of an element in (𝔬_p-{0})/ℝ^∗, we write 𝔬_p=ℤ((𝔬_p-{0})/ℝ^∗), with +1 identified with the orientation of 𝔬_p. We then form the ℤ-module as a direct sum: CF(Σ_ϕ,F_z;ℤ)=⊕_p∈ F_z⋔Σ_ϕ𝔬_p. For z∈{M_C-crit(ϕ)}∪𝒫_C, order the intersection points of Σ_ϕ and F_z with respect to the choice of the positive sheet of √(ϕ) on G_z (see Definition <ref>). Then for CF(Σ_ϕ,F_z;ℤ)=𝔬_z^+⊕𝔬_z^-, we use the ordered basis {(+1,0),(0,+1)}. Now CF(Σ_ϕ,F_z;ℤ) is a ℤ-graded ℤ-module. Since it is concentrated in degree 0, all the differentials vanish, so we have CF^∗(Σ_ϕ,F_z;ℤ)=HF^∗(Σ_ϕ,F_z;ℤ). The same discussion applies for any ℝ_>0-rescaling of Σ_ϕ. We now look at the case of tΣ_ϕ in detail. Again, let 𝒵^h be a horizontal chamber and 𝒵^h(δ;E) be the unique connected component of C(δ;E) contained in 𝒵^h. Let z∈𝒵^h∩ M_C and let z'∈𝒵^h∩ M_C be a point connected to z by a geodesic arc α^∙_z of length d less than ϵ(z) in the sense of Proposition <ref>.
Let u∈ℳ^∙_(-1)^i d,z, where i=0 if the positive sheet picked out by α^∙_z coincides with the positive sheet of √(ϕ) on 𝒵^h(δ;E), and i=1 otherwise. By Proposition <ref>, u is a constant map. The induced spin structure P_u(s,·) on the boundary of the infinite strip 𝒵 is given by: P_u(s,1)= (ψ^∙_(-1)^i dl(s))^∗𝔣_ψ^∙_(-1)^i dl(s)(u(s,1)), P_u(s,0)= (ψ^∙_(-1)^i dl(s))^∗𝔰̃_ψ^∙_(-1)^i dl(s)(u(s,0)). We now describe what happens when we pass to the ℒ⊗ℬ-twisted family Floer cohomology local system over ℂ. We set CF(tΣ_ϕ,F_z,ℒ;ℂ)=CF(tΣ_ϕ,F_z;𝔰̃,𝔣_z,ℒ⊗ℬ,ℂ)=CF(tΣ_ϕ,F_z,𝔰̃,𝔣_z;ℤ)⊗ (ℒ⊗ℬ). We denote the resulting local system as HF_t(Σ_ϕ,ℒ,𝔰,ℬ;ℂ), Γ_ℒ for the resulting parallel transport map, and the resulting path groupoid representation as HF_t(Σ_ϕ,ℒ,𝔰,ℬ,𝒫_C;ℂ). For example, given u∈ℳ^∙_(-1)^id,z, the contribution of ℒ⊗ℬ is given via Φ^ℒ⊗ℬ(∂ (ψ_s∘ u)|_(-∞,∞)×{0}): (ℒ⊗ℬ)_z̃→ (ℒ⊗ℬ)_z̃'. Then the corresponding component of the induced ℤ-parallel transport map 𝔬_z̃→𝔬_z̃' is twisted by Φ^ℒ⊗ℬ. We have the following technical lemma: For u∈ℳ^∙_(-1)^i d,z, let g_1:𝔬_z̃→𝔬_z̃' be the induced isomorphism with respect to (𝔰̃,𝔣,ℳ^∙_(-1)^i d,z) and let g_2:𝔬_z̃→𝔬_z̃' be the induced isomorphism with respect to (π^∗𝔰,𝔣,ℳ^∙_(-1)^i d,z). Then g_1 and g_2 differ by the sign Φ^ℬ. To determine the maps g_i, i=1,2, we glue the half-plane operators ∂̅_ℋ determined by the Lagrangian paths L_T(z̃) and (ψ^∙_(-1)^i d)^-1L_T(z̃') at the negative strip-like end and the positive strip-like end of the strip u, respectively. Let v:=z̃♯ u♯ (ψ^∙_(-1)^i d)^-1z̃':A_1→ T^∗C̃ be the glued map with respect to a sufficiently large glueing parameter. The maps g_1 and g_2 are induced by the canonical isomorphism D_A_1,v= D_A_1,z̃♯ u ♯(ψ^∙_(-1)^i d)^-1z̃' ≃𝔬^∨_z̃⊗ D_𝒵,u⊗(ψ^∙_(-1)^i d)^∗𝔬_z̃', after orienting D_A_1,v. To do this, choose a base point on A_1 that maps to F_z and lies on the non-moving part. We may trivialize the pullback bundle v^∗(TT^∗C̃)≃ℂ^2 on A_1, and obtain a loop ρ of Lagrangian subspaces in ℂ^2. Let P_π^∗(𝔰) be the induced spin structure from (ψ_s^∗(π^∗𝔰),ψ_s^∗𝔣). Similarly, let P_𝔰̃ be the induced spin structure from (ψ_s^∗𝔰̃,ψ_s^∗𝔣). By <cit.>, we see that the two possible isomorphism classes of spin structures over Maslov zero loops ρ' inside ℒ_0 Gr(V) form a double cover isomorphic to the covering obtained from the line bundle D∂̅_A_1,ρ'⊗⋀^top(ρ'(0)). Using D∂̅_A_1,const≃ρ'(0), we trivialize the bundle by choosing the trivial spin structure on S^1. This choice of the trivialization orients the vector spaces D∂̅_A_1,ρ'. Deform the linearized Cauchy-Riemann operator D∂̅_J v to D∂̅_A_1. By the deformation invariance of determinant lines for Fredholm operators, we see that the two induced orientations on the moduli space ℳ^∙_(-1)^id,z are equivalent if and only if the two spin structures P_π^∗(𝔰) and P_𝔰̃ are isomorphic. Since we have made the same choices for 𝔰̃ and π^∗𝔰 at each of the intersection points z̃ and z̃', we see that P_π^∗(𝔰) and P_𝔰̃ are isomorphic if and only if Φ^ℬ along ∂ (ψ_s∘ u)|_(-∞,∞)×{0} is the identity. This finishes the proof. We now have all the ingredients needed to prove Theorem <ref>. §.§.§ Proof of non-abelianization We now use Lemma <ref> to compute the Floer-theoretic parallel transports along the arcs α(w,w') and γ(w,w'). By construction, given any pair b(w,w')^i, b(w,w')^i+1, there exists some b(w,w')^j such that the continuation strips from b(w,w')^j to b(w,w')^i and to b(w,w')^i+1 are all constant, and so necessarily lie in T^∗𝒵^h. So ℒ and the two spin structures π^∗𝔰 and 𝔰̃ yield, for k=i,i+1, maps
Γ_ℒ(π^∗𝔰)(b(w,w')^j→ b(w,w')^k) : CF(tΣ_ϕ,F_b(w,w')^j;ℂ)→ CF(tΣ_ϕ,F_b(w,w')^k;ℂ), Γ_ℒ(𝔰̃)(b(w,w')^j→ b(w,w')^k) : CF(tΣ_ϕ,F_b(w,w')^j;ℂ)→ CF(tΣ_ϕ,F_b(w,w')^k;ℂ). For α an arc contained in a horizontal chamber 𝒵^h, let Φ^ℒ(α)^± (or Φ^ℒ⊗ℬ(α)^±) denote the parallel transport map of ℒ (or ℒ⊗ℬ) restricted to the ±-lift of α to Σ^∘ with respect to the sheet ordering of the chamber 𝒵^h. The maps (<ref>) and (<ref>) read as follows: Γ_ℒ(π^∗𝔰) =[ (Φ^ℒ⊗ℬ)^+ 0; 0 (Φ^ℒ⊗ℬ)^- ], Γ_ℒ(𝔰̃) =[ (Φ^ℒ)^+ 0; 0 (Φ^ℒ)^- ], with respect to the ordered basis (<ref>). We claim that it is sufficient to prove Γ_ℒ(π^∗𝔰)=[ (Φ^ℒ⊗ℬ)^+ 0; 0 (Φ^ℒ⊗ℬ)^- ]. Indeed, by Lemma <ref>, the sign difference is Φ^ℬ. So (<ref>) follows from (<ref>) since (Φ^ℒ⊗ℬ)Φ^ℬ=Φ^ℒ, which follows because ℬ is a GL(1;ℤ)-local system. We now proceed with the proof of (<ref>). The induced loop of Lagrangian spaces is the same as the concatenation of L_T with L̄_T, where L̄_T=L_1-T, and so it is homotopic to the constant loop. We get an open cover on S^1 induced from the open covers of Σ^∘_ϕ. The transition function on the boundary segment that maps to Σ_ϕ is given by the transition functions ψ_αβ. By Lemma <ref>, the same transition functions appear on the boundary segment that maps to the fibres. By construction of P_T over L_T, the transition functions induced from the half-disc operators glued at the strip-like ends are equal to the identity. This implies that the spin structure is trivial. Therefore, the induced count on the moduli space of continuation strips must be +1. But since all the continuation strips are constant, the twisted contribution must be equal to Φ^ℒ⊗ℬ. This finishes the proof. Then composing the parallel transport map from b(w,w')^i to b(w,w')^j and the parallel transport map from b(w,w')^j to b(w,w')^i+1, we obtain: Γ_ℒ(𝔰̃)(b(w,w')^i→ b(w,w')^i+1) =[ (Φ^ℒ)^+ 0; 0 (Φ^ℒ)^- ]. Therefore: The parallel transport along α(w,w') is given by the matrix [ Φ^ℒ(w,w')^+ 0; 0 Φ^ℒ(w,w')^- ]. The same argument gives us: The parallel transport along γ(w,w”) is given by the matrix [ Φ^ℒ(w,w”)^+ 0; 0 Φ^ℒ(w,w”)^- ]. Finally, repeating the argument in the proof of <ref>, we obtain the following: The parallel transport along α(w) is given by the matrix [ 1 μ(w); 0 1 ]. Note that we get an upper triangular matrix because the moduli spaces ℳ^short,-(w) are empty. Here the bases of ℤ^2 are chosen with respect to the orientations on the orientation lines. We now compute the number μ(w) explicitly using Φ^ℒ and finish the proof of the main theorem. The proof is essentially due to <cit.>. We will abbreviate Φ^ℒ(α^±(w_i,w_j)) by Φ^ℒ(w_i,w_j)^±. Let z be a zero of ϕ and order the three walls w_0,w_± 1 as above. Let Γ_ℒ(w_i,w_j) denote the parallel transport map with respect to α(w_i,w_j). Then we have μ(w_0) =-Φ^ℒ(w_1,w_-1)^+ Φ^ℒ(w_0,w_1)^- Φ^ℒ(w_-1,w_0)^-, μ(w_1) =-Φ^ℒ(w_1,w_-1)^- Φ^ℒ(w_0,w_1)^- Φ^ℒ(w_-1,w_0)^+, μ(w_-1) =-Φ^ℒ(w_0,w_1)^+ Φ^ℒ(w_1,w_-1)^- Φ^ℒ(w_-1,w_0)^-. Consider the concatenation of the paths α(w_0), α(w_0,w_1), α(w_1), α(w_1,w_-1), α(w_-1) and α(w_-1,w_0), in that order. This gives a loop encircling z once, and on C it is contractible. Notice that when we go from 𝒵^h(w) to 𝒵^h(w') along the loop, we reverse the ordering of the basis. The configuration is illustrated in Figure <ref>. Let Γ_ℒ(w) denote the parallel transport map with respect to α(w).
Then from homotopy invariance, we have Id= Γ_ℒ(w_-1,w_0)∘Γ_ℒ(w_-1)∘Γ_ℒ(w_1,w_-1)∘Γ_ℒ(w_1)∘Γ_ℒ(w_0,w_1)∘Γ_ℒ(w_0), which we rewrite in the form Id= [ 0 Φ^ℒ(w_-1,w_0)^-; Φ^ℒ(w_-1,w_0)^+ 0 ][ 1 μ(w_-1); 0 1 ][ 0 Φ^ℒ(w_1,w_-1)^-; Φ^ℒ(w_1,w_-1)^+ 0 ][ 1 μ(w_1); 0 1 ][ 0 Φ^ℒ(w_0,w_1)^-; Φ^ℒ(w_0,w_1)^+ 0 ][ 1 μ(w_0); 0 1 ]. Expanding the matrix product, it follows that μ(w_0), μ(w_1), μ(w_-1) are given by products of the transport coefficients Φ^ℒ^± as in (<ref>). Summarizing everything, we obtain our main theorem: (Theorem <ref>) Corollaries <ref>, <ref>, <ref>, and <ref> give the full description of the path groupoid representation of the Floer cohomology local system in terms of ℒ. This is the non-abelianization.
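To indicate the bookkeeping behind (<ref>) (a sketch only; the precise signs depend on the orientation and labelling conventions fixed above), note that each adjacent pair of factors multiplies to a single matrix:
\[
  \begin{pmatrix} 0 & \Phi^- \\ \Phi^+ & 0 \end{pmatrix}
  \begin{pmatrix} 1 & \mu \\ 0 & 1 \end{pmatrix}
  =
  \begin{pmatrix} 0 & \Phi^- \\ \Phi^+ & \Phi^+ \mu \end{pmatrix}.
\]
Multiplying the three resulting factors and matching the (1,1)-entry against the identity gives
\[
  \Phi^{\mathcal L}(w_{-1},w_0)^-\,
  \Phi^{\mathcal L}(w_1,w_{-1})^+\,
  \mu(w_1)\,
  \Phi^{\mathcal L}(w_0,w_1)^+ \;=\; 1 .
\]
Together with the almost-flatness constraint (the two lifts of the contractible loop concatenate to a single loop on the spectral curve encircling the ramification point, along which the monodromy of ℒ is -1, so the product of all six transport coefficients is -1), this reproduces the formula for μ(w_1) in the statement; the remaining entries determine μ(w_0) and μ(w_-1) in the same way.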
arXiv:2307.04781v1 [cs.CY], 10 July 2023
Demonstrations of the Potential of AI-based Political Issue Polling

Nathan E. Sanders^1,*, Alex Ulinich^2, Bruce Schneier^3

^1 Berkman Klein Center, Harvard University, 23 Everett St #2, Cambridge, Massachusetts, 02138
^2 Mountain View High School, 3535 Truman Avenue, Mountain View, CA 94040
^3 Harvard Kennedy School, 79 JFK Street, Cambridge, Massachusetts USA 02138
* [email protected]

Political polling is a multi-billion dollar industry with outsized influence on the societal trajectory of the United States and nations around the world. However, in recent years it has been severely challenged by rising nonresponse rates and other factors that stress its cost, availability, and accuracy. At the same time, artificial intelligence (AI) chatbots such as ChatGPT have become highly compelling stand-ins for a wide range of human behavior, powered by increasingly sophisticated large language models (LLMs). Because these LLMs are trained on huge corpora of writing by diverse people captured from across the Internet, they are potentially capable of representing a wide range of beliefs on many policy issues. Could AI chatbots be an effective tool for anticipating public opinion on controversial issues to the extent that they could be used by campaigns, interest groups, and polling firms? We have developed a prompt engineering methodology for eliciting human-like survey responses from ChatGPT, which simulate the response to a policy question of a person described by a set of demographic factors, and produce both an ordinal numeric response score and a textual justification. We execute large scale experiments using this method, querying GPT for thousands of simulated responses at a cost more than three orders of magnitude lower than human surveys. We compare this simulated data to human issue polling data from the Cooperative Election Study (CES). We find that ChatGPT is effective at anticipating both the mean level and distribution of public opinion on a variety of policy issues such as abortion bans and approval of the US Supreme Court, particularly in their breakdown along partisan lines (correlation typically >85%). However, it is much less successful at anticipating demographic (age, race, and gender) differences between respondents. Moreover, ChatGPT tends to overgeneralize its conception of ideological differences to new policy issues that arose after its training data was collected, such as American support for involvement in the war in Ukraine. Our work has implications for our understanding of the strengths and limitations of the current generation of AI chatbots as virtual publics or online listening platforms, future directions for LLM development, and applications of AI tools to the political domain. § INTRODUCTION While survey experiments and polling have been powerful tools for political campaigns, parties, and advocacy organizations in the US and around the world for centuries <cit.>, in recent years the cost and difficulty of operating polls have grown dramatically. Political polling firms commonly recruit panels intended to be representative of, and to achieve high coverage of, their targeted population, such as eligible voters nationally or likely voters in a voting district.
Reaching these populations has become harder primarily because of the growth in survey nonresponse internationally: the failure to contact or refusal of potential participants to be surveyed due to factors such as lack of time, disinterest, and distrust <cit.>. Moreover, the migration of respondents to new technologies such as cell phones and the Internet, which have uneven and evolving penetration and usage across regions and demographic groups, has constrained the coverage of survey samples. These effects have generated simultaneous challenges for the quality and cost of political polling, as biases in political engagement and hyper-polarization manifest in response rates <cit.>. A vast literature has developed on statistical methodologies for designing and postprocessing survey data to overcome these challenges, including methods such as demographic weighting and poststratification <cit.>. In particular, pollsters have explored methodologies that enable meaningful public opinion research from digital platforms such as Facebook and other social media platforms, where traditional techniques of probability sampling cannot be applied because of the lack of a conventional sampling frame and researcher-controlled contact mechanism. These various methodologies seem to have been successful at maintaining the predictive accuracy of election polling thus far, even as nonresponse has proliferated <cit.>, and yet there is widespread interest in finding transformative new models for measuring public opinion that could lead to more cost-effective, sustainable, and more reliable polling results <cit.>. As statistical methodologies have come to play a critical role in collecting, processing, and interpreting political polling data, machine learning (ML) and artificial intelligence (AI) systems may further revolutionize this domain. In particular, large language models (LLMs) such as ChatGPT, which can be incorporated into AI chatbots and other systems capable of providing human-like responses to natural language prompts, have a wide variety of potential applications in democratic processes, such as assisting lobbying firms <cit.>, helping citizens and stakeholders to formulate and advocate for their opinions <cit.>, facilitating connections between candidates and voters <cit.>, and even helping humans social engineer or hack political systems <cit.>. Already, researchers have experimented with a variety of social science research and public polling applications of LLMs, such as coding open-ended survey responses <cit.>, inferring the ideology of a politician <cit.>, simulating economic behavior <cit.>, and simulating election results <cit.>. Because they are trained on wide Internet corpora including opinion writing from a diverse range of people, LLMs have a compelling ability to represent different perspectives and to perform a wide range of tasks without specialized training <cit.>. We therefore hypothesize that they may be effective at generating individualized responses to policy preference questions that can account for the same factors that influence human respondents, such as demographics. However, the nature of LLMs limits their potential effectiveness as opinion sampling tools. Like platforms such as social media, AI chatbots do not have well defined sample frames or well understood coverage characteristics. Moreover, unlike true survey platforms, using LLMs does not actually involve any solicitation of opinion from an authentic human individual.
Instead, LLMs generate a response predicted to be most acceptable to the user on the basis of a training process such as reinforcement learning with human feedback, which may therefore reflect the incomplete, biased, or even stereotyping properties of its training dataset. Some specific biases of Internet corpora-trained LLMs are coming into focus. One study attempted to assess the age and gender characteristics of ChatGPT by prompting it to express a demographic profile, finding that its responses are biased towards a young (<30 years old) and female profile. Other investigators identified that an earlier model, GPT-2, is biased in its representation of the opinions of people from nations underrepresented in Internet usage. Regardless of their ability to reflect the perspectives of a given demographic group, AI models may also exhibit bias in the text they generate; for example, in an analysis of the BERT model, researchers found that neural embeddings learn harmful stereotypes about persons with disabilities. In this work, we seek to test the capability of current generation AI tools to accurately reflect distributions of public opinion, and to expose insight into their effective sociodemographic coverage as a polling instrument, using a generally available LLM and real public opinion survey questionnaires. We have developed experimental methods (<ref>) to prompt the AI chatbot ChatGPT to generate public polling-like responses such that it can simulate a survey panel. We test the model's ability to reflect the shift in valence between demographic groups across a variety of issues, as well as to reasonably reproduce the key arguments appealed to by each demographic (<ref>). We provide an interpretation of this capability in the context of prior Internet-assisted approaches to public opinion research, discuss the limitations of this approach and the current generation of tools, and the implications these capabilities may have as they improve (<ref>), before concluding (<ref>). § METHODS We explore the viability of AI language models to simulate public opinion polling responses by developing a system that automates querying an LLM based on the questionnaire of a survey previously given to people, so that the resulting AI responses are aligned and comparable to human data.[We will publish the code associated with this work at the time the article is accepted.] §.§ Large Language Model We use the OpenAI Chat Completion API endpoint, through OpenAI's openai python library,[<https://github.com/openai/openai-python>] to query the gpt-3.5-turbo-0301 LLM for polling responses. This model was the most recent model from OpenAI optimized for chat applications and made generally available as of April 2023; it is trained on data samples written as late as September 2021.[See <https://platform.openai.com/docs/models/gpt-3-5>] We generate a balanced sample of n=20 responses per prompt per demographic cross-tab per issue across ideology (in five bins) and three demographic fields with simple categorizations (age in four bins, “man” or “woman” gender, and “white” or “non-white” race), for a total of 1,600 responses across each of seven issue prompts (see Table <ref>), for 11,200 total responses. Note that this balanced sample does not, therefore, represent any particular target population such as US adults, as our focus is on understanding the performance of LLMs in representing the viewpoints within and across distinct demographic groups.
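As an illustration of the scale of this balanced design, the following sketch enumerates the demographic cross-tabs described above and reproduces the response counts. The bin labels here are hypothetical stand-ins; the study's exact prompt strings may differ.

import itertools

# Hypothetical bin labels standing in for the study's five ideology bins,
# four age bins, and binary gender and race categorizations.
ideologies = ["Very Liberal", "Liberal", "Moderate", "Conservative", "Very Conservative"]
ages = ["(16, 30]", "(30, 45]", "(45, 60]", "(60, 100]"]
genders = ["Man", "Woman"]
races = ["white", "non-white"]

N_PER_CELL = 20   # responses per demographic cross-tab per issue
N_ISSUES = 7      # issue prompts (Table 1)

cells = list(itertools.product(ideologies, ages, genders, races))
assert len(cells) == 5 * 4 * 2 * 2 == 80
print(len(cells) * N_PER_CELL)             # 1,600 responses per issue
print(len(cells) * N_PER_CELL * N_ISSUES)  # 11,200 responses total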
Because LLMs offer the opportunity to generate data for arbitrary sub-populations at arbitrary sizes, the process to generate a sample representative of a population with defined demographic characteristics is trivial, if the model is successful at accurately reproducing the views of each demographic group. Regarding our selected demographic classes, we acknowledge that binary categorizations for gender and race are reductive and far from representative of the full spectrum of human gender and racial identity. Our reason for focusing on these broad classes is to enable initial statistical comparisons with demographic groups well sampled in the CES dataset. Future work should further explore the representation of AI generated responses associated with nonbinary gender and more diverse racial identities. These queries were executed at a cost of about $3 USD through the OpenAI API, whereas an online survey of 10,000+ responses on a human population would cost at least 1,000 times that much. LLMs can be sensitive to the way questions are phrased and what information is provided to prime them before answering a question. We arrived at a prompt suitable for simulating public polling responses aligned to an established survey questionnaire through several iterations of trial and error in prompt engineering. We used the following prompt template when querying the LLM, Please write a 1 paragraph letter to the editor from the perspective of a {gender} in the age range of {age} years who identifies as {white} expressing a clear point of view on the policy proposal to: “{issue}”. Before the letter, summarize their position with a “Position score:” statement followed by a single number (strictly numeric, with no other description) representing the person's position on the issue on a {cardinality}-point scale, where 1 represents the position “{low_level}” and {cardinality} represents the position “{high_level}”. where {gender}, {age}, and {white} are demographic features; {issue} represents the question text from a survey given to humans (<ref>); {cardinality} is the maximum value of the numeric response scale; and {low_level} and {high_level} are descriptions of the bottom and top end of the response scale as defined in the polling questionnaire. The prompt component describing the “Position score:” successfully formats the output so that an ordinal numeric response value can be extracted from the plaintext completion with a simple regular expression. Additionally, we extract the textual descriptors of the top and bottom options on the original scale from the survey questionnaire to align the LLM outputs to the scale the human respondents used. The prompt template defined above evolved significantly over the course of our experimentation. Initially, we did not include a “Position score” requirement in the prompt. We first tested the model's ability to generate realistic-seeming textual arguments in response to policy issue questions, from various demographically-aligned points of view. Having initially vetted this capability, we then added a brief instruction to the prompt to assign a score on a 1-5 rating and verified that the generated ratings generally agreed with the textual letter generated by the model. 
However, we identified two further challenges: 1) the generated position score would be formatted inconsistently and was difficult to extract from the generated text without manual review; and 2) the model would sometimes flip the polarity of the scoring scale, such that a given position would be variously represented as a score of 1 or 5. To address issue 1, we added far more explicit formatting instructions (“Before the letter, summarize their position with...”), which succeeded at enforcing a format for the score that could be reliably extracted. To address issue 2, we added explicit definitions to the template of the low and high position levels. In addition to ensuring a consistent directionality of the position score, this instruction has the added benefit of aligning the model-generated score to the rating scale used in the human survey. We use the default OpenAI system prompt, “You are a helpful assistant”. The LLM is not prompted sequentially; rather, each query is executed as an individual user prompt immediately following the system prompt. Importantly, this allows us to simulate semi-independent poll responses, where the LLMs are probabilistically generating various isolated responses similar to sampling a human population.[ In contrast, a methodology that queried a system like ChatGPT with sequential user prompts would entail state evolution that biases subsequent responses, as iterative prompting is known to change the responses of LLM-based dialog systems, see, e.g., <cit.>. ] Figure <ref> illustrates an example survey response distribution from this method. A typical exchange looks like this: Prompt: Please write a 1 paragraph letter to the editor from the perspective of a politically Liberal Man in the age range of (16, 30][Note that we provide the age range in interval notation reflecting bins from the CES data; the GPT model demonstrates through its completions that it interprets the interval notation accurately.] years who identifies as non-white expressing a clear point of view on the policy proposal to: “Improve background checks to give authorities time to check the juvenile and mental health records of any prospective gun buyer under the age of 21”. Before the letter, summarize their position with a “Position score:” statement followed by a single number (strictly numeric, with no other description) representing the person's position on the issue on a 2-point scale, where 1 represents the position “support” and 2 represents the position “oppose”. Response: Position score: 1 As a young, politically liberal man who identifies as non-white, I fully support the proposal to improve background checks for prospective gun buyers under the age of 21. It is unacceptable that individuals with a history of mental illness or juvenile offenses can easily acquire firearms. This new policy would ensure that authorities have enough time to properly check a person's background before they are allowed to purchase a gun. We have seen far too many tragic shootings in recent years, and it is time that we take meaningful action to prevent future tragedies. By implementing stricter background checks, we can protect ourselves and our loved ones from senseless violence.
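Putting these pieces together, one query of the simulated panel amounts to instantiating the template, issuing an independent chat completion, and parsing the score. A minimal sketch follows, using the pre-1.0 openai Python client that was current for gpt-3.5-turbo-0301; the helper name and argument handling are illustrative assumptions, not the study's released code:

import re
import openai  # pre-1.0 client interface (pip install "openai<1.0")

def poll_once(persona, age, white, issue, cardinality, low_level, high_level):
    """Issue one independent simulated-poll query and parse the ordinal score.
    `persona` carries the ideology/gender text, e.g. "politically Liberal Man"."""
    prompt = (
        f'Please write a 1 paragraph letter to the editor from the perspective of a '
        f'{persona} in the age range of {age} years who identifies as {white} '
        f'expressing a clear point of view on the policy proposal to: "{issue}". '
        f'Before the letter, summarize their position with a "Position score:" '
        f'statement followed by a single number (strictly numeric, with no other '
        f"description) representing the person's position on the issue on a "
        f'{cardinality}-point scale, where 1 represents the position "{low_level}" '
        f'and {cardinality} represents the position "{high_level}".'
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301",
        messages=[
            {"role": "system", "content": "You are a helpful assistant"},
            {"role": "user", "content": prompt},  # single, stateless user turn
        ],
    )
    text = resp["choices"][0]["message"]["content"]
    match = re.search(r"Position score:\s*(\d+)", text)  # ordinal score extraction
    return (int(match.group(1)) if match else None), text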
§.§ Human Polling Data As a human polling comparison for our AI-generated responses, we use the 2022 preliminary data release of the Cooperative Election Study (CES) <cit.>. The CES is an annual online survey of ∼60,000 nationally representative US respondents administered by YouGov. The full CES Common Content dataset consists of nearly 700 demographic, voting, and issue response variables, covering a wide range of policy- and politics-relevant factors and questions. We selected policy issue polling questions from the CES dataset on the basis of their suitability for testing the LLM's ability to represent distinctive demographic groups. In particular, we looked for questions that are fairly strongly correlated with demographic factors such as age and gender, yet relatively poorly correlated with ideological factors. Specifically, we selected questions on the basis of the empirical correlation calculated between the question-specific ordinal response and the respondent-specific political affiliation in the CES data. Because of the high degree of partisan polarization in the US political system on so many issues, these questions provide a better test of the demographic response simulation abilities of the LLM than would more ideologically driven questions. We make some manipulations to the survey data to accommodate generation of equivalent LLM completions. In particular, we constrain policy issue responses to an ordinal scale by removing categories such as “Not sure” (and dropping any associated responses) and replace multi-selection responses “selected” and “not selected” with “strongly agree” and “strongly disagree,” respectively. We also coarsely bin (aggregate) the age demographic variable (which is provided as a birth year integer in the raw dataset). § RESULTS We systematically compare the AI-generated and human respondent issue polling data across the seven queried issues, ideology, and three demographics to understand the quality of the AI-driven approach through its correspondence to a human population. Figure <ref> illustrates an example of this demographic-level comparison for the police_safety question. This figure demonstrates the general level of correspondence between CES and GPT-generated survey data at the finest granularity of our demographic groups for one question. The two datasets exhibit a similar pattern of increasing safety reported from the liberal (top of figure) to conservative (bottom) ends of the spectrum. However, some trends present in the CES data are not reproduced in the GPT results; for example, the significant, age-mediated variation across demographic subgroups among `Very liberal' CES respondents is not present in the GPT data; the GPT model seems to be over-confident in the expected response for the ideological group, regardless of other factors. In the remainder of this section, we interrogate this correspondence statistically across survey questions and demographic properties. In some cases, the GPT model demonstrates an excellent capacity to precisely reproduce the public polling response for individual population crosstabs (subgroups of age, gender, race, and ideological identity). Figure <ref> shows that for the SCOTUS approval question, there is a ρ=86% Pearson correlation between the CES and GPT polling results across all demographic crosstabs, and an even higher 95% correlation when looking at ideological subgroups only. Beyond the correlation measure, the absolute reconstruction of the ordinal response is also highly accurate, with a mean absolute percentage error (MAPE) across demographic subgroups of ≲10% in both cases. Naturally, the AI polling results are less impressive in some other cases.
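For reference, the two correspondence measures reported throughout this section can be computed from paired subgroup means in a few lines (a sketch; gpt_means and ces_means are assumed to be aligned arrays of per-crosstab mean ordinal responses):

import numpy as np
from scipy.stats import pearsonr

def correspondence(gpt_means, ces_means):
    """Pearson correlation and mean absolute percentage error (MAPE)
    between aligned per-crosstab mean ordinal responses."""
    gpt = np.asarray(gpt_means, dtype=float)
    ces = np.asarray(ces_means, dtype=float)
    rho, _ = pearsonr(gpt, ces)                      # linear correspondence
    mape = 100.0 * np.mean(np.abs(gpt - ces) / ces)  # absolute calibration error
    return rho, mape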
In the following subsections, we explore the level of correspondence between the GPT and CES results in more depth by question and demographic field. §.§ Ideological alignment The AI model demonstrates an excellent ability to predict the alignment of different ideological subgroups across a range of policy issues (Figure <ref>). The correlation between the AI-generated responses and the CES survey results, aggregated by ideological identification, is extremely high (>85%) for not only the scotus_approval question (Figure <ref>b), but also the abortion_ban (98% correlation), police_safety (94%), and increase_fuel_production (86%) issues. For the prescription_import (ρ=67%) and gun_background_checks (91%) issues, the AI results are directionally consistent with the survey results and the correlations are still quite strong, but differ in the range and shape of the response, as the GPT results show a step-function-like difference between conservatives and liberals versus the gradual change in the survey data. These trends are generally reflected in the MAPE values. Like scotus_approval, abortion_ban has both an excellent correlation and MAPE (5%). In contrast, the discontinuity in the prescription_import and gun_background_checks response pattern is reflected with higher MAPE values (31% and 29%, respectively). The increase_fuel_production MAPE value is intermediate (21%). Lastly, police_safety has a high MAPE (35%) relative to its correlation. In this case, the high correlation reflects a consistently monotonic relationship between the GPT and CES demographic means, but a mis-calibration such that the GPT responses overestimate the decrease in perceived safety associated with the liberal groups (i.e. the ordinal response value is inflated at the liberal end). (For discussion of the remaining queried issue, regarding the Ukraine war, see <ref>). §.§ Distributional similarity We further investigate the ability of the probabilistic output of the AI models to represent the distributional responses of the human panel. Figure <ref> illustrates the correspondence between question response distributions on each policy issue. (The widths of these distributions are also illustrated by the error bar lengths in Figures <ref>, <ref>, and <ref>). The distribution similarity is generally fairly good, with particularly good matches for the binary-valued abortion_ban and prescription_import questions. The GPT model gets the absolute level of support wrong for the binary-valued questions increase_fuel_production and gun_background_checks; the AI model substantially underestimates the policy provisions' level of support. For the multi-valued questions police_safety and scotus_approval, the level of matching is intermediate. The spread of the distributions is similar. However, as observed above, the GPT responses favor higher ordinal values for police_safety than in the CES data. For scotus_approval, the median ordinal value (2) is over-represented in GPT responses. (For discussion of the ukraine_war question, see <ref>.) §.§ Demographic alignment Because of the substantial polarization of the American public, ideological differences dominate issue response for many public policy issues and for the CES questions we studied. It is difficult to find any policy question on which there are large, systematic differences in public opinion that cannot be explained primarily in partisan terms. 
Therefore, the success of the GPT model in predicting ideological issue response is significant, and may be sufficient for many polling applications where the ideological distribution of the respondent pool is known (e.g., from jurisdictional voting records). However, beyond ideological differences, the GPT model fails to anticipate several of the most important demographic trends relevant to these issues (Figure <ref>). Among the strongest demographic trends in the CES data is that older respondents feel significantly safer around police than do younger respondents. While the GPT model does marginally reflect that trend, it does so at a vastly smaller magnitude than the survey data. There is also a racial difference on this issue, such that non-white respondents are somewhat less likely to feel safe than white respondents, and the GPT model correctly anticipates the size and direction of this effect. Likewise, the largest demographic trend for the fossil fuel production issue is also by age; younger respondents are more likely to disfavor increasing fuel production. The GPT model misses this trend entirely. In some cases, the GPT model makes predictions about demographic trends that seem reasonable but overshoot what appears in the human sample. In particular, the AI presumes a large difference in opposition to a complete abortion ban by gender, which seems plausible given this issue's strong identification with women's rights, as in this example: Prompt: Please write a 1 paragraph letter to the editor from the perspective of a politically Moderate Woman in the age range of (45, 60] years who identifies as non-white expressing a clear point of view on the policy proposal to: “On the topic of abortion, do you support or oppose a policy to make abortions illegal in all circumstances?”. Before the letter, summarize their position with a “Position score:” statement followed by a single number (strictly numeric, with no other description) representing the person's position on the issue on a 2-point scale, where 1 represents the position “support” and 2 represents the position “oppose”. Response: Position score: 2 As a politically moderate woman in my late 50s who identifies as non-white, I oppose the policy proposal to make abortions illegal in all circumstances. This issue is not as simple as being pro-life or pro-choice, and I firmly believe that a woman should have the right to make decisions about her own body without interference from the government or anyone else. Making abortions illegal would only lead to unsafe and potentially life-threatening procedures, and it would be a violation of a woman's fundamental rights. Instead, we should focus on increasing access to education and resources that help prevent unwanted pregnancies in the first place, and provide support for women who do choose to carry a pregnancy to term. It is important to respect women's autonomy and trust them to make the best decisions for themselves and their families. However, the CES data shows a far smaller gender difference on this question (but note that there are a wide variety of other questions addressing this complex issue in the CES survey). In contrast, the GPT model misses an important gender trend on the question about prescription drug imports. In the human data, women are significantly more likely to oppose the idea of allowing states to import drugs from other countries, a trend that persists strongly across ideological segments.
While GPT overshoots the ideological associations for this issue, it misses the gender skew entirely. §.§ Generalization While political and environmental factors are constantly shifting for all the issue questions studied in this work, there was one issue that changed overwhelmingly between when the GPT training data was collected (through September 2021) and when the CES survey was fielded (fall 2022): the Ukraine war. CES survey respondents would clearly have in mind the interstate conflict that principally began in February 2022 with the Russian invasion of Ukraine, whereas the data used to train the GPT model would have contained only data relevant to prior Russian actions, such as the Russian annexation of Crimea in 2014, the war in Donbas, or the troop buildup that began in spring 2021. To give the GPT model specific context, we modify the question from the CES survey with an explicit year so that it references the 2022 invasion. The CES results reflect widespread, nearly uniform public support for US “involvement” (the question does not specify the means of intervention). There is little evidence of an ideological signal in the CES data, although a poll taken months later might reflect the growing divide among conservatives over American support for the Ukrainian effort <cit.>. There is a small gender difference, in that men are more likely to support involvement. In contrast, the GPT results suggest a much lower level of overall support and a wider range of support (and lack thereof) across all ideological groups (Figure <ref>). Moreover, the GPT model strongly predicts that “very liberal” respondents would not support US involvement. This seems like a natural extension of anti-interventionist sentiment in left-wing political opinion exhibited in response to, for example, the US invasion of Iraq, as illustrated by this example: Prompt: Please write a 1 paragraph letter to the editor from the perspective of a politically Liberal Man in the age range of (16, 30] years who identifies as non-white expressing a clear point of view on the policy proposal to: “The United States should not get involved in response to Russia's 2022 invasion of Ukraine.”. Before the letter, summarize their position with a “Position score:” statement followed by a single number (strictly numeric, with no other description) representing the person's position on the issue on a 2-point scale, where 1 represents the position “strongly agree” and 2 represents the position “strongly disagree”. Response: Position score: 1 As a politically liberal man in the age range of 16-30 who identifies as non-white, I strongly believe that the United States should not get involved in response to Russia's 2022 invasion of Ukraine. As much as we want to be the global police, we simply do not have the resources nor the legitimacy to intervene in every conflict around the world. American imperialism has caused more harm than good in many cases, and it's time for us to realize that we cannot solve every problem with force. Furthermore, getting involved in another country's conflict has the potential to escalate the situation and put American lives at risk. Let us focus on repairing relationships with our allies and strengthening our own nation instead of stirring up more conflict.
And yet the GPT responses do not adequately capture the dynamics specific to the Ukraine war, including the nature of the Russian aggression, the reporting on possible war crimes and atrocities associated with the conflict, and the vocal support of the Ukrainian cause from the Democratic leader, President Joe Biden. We will discuss the potential to include such additional information in model inference in <ref>. § DISCUSSION This work demonstrates the potential of AI chatbot models to generate synthetic public opinion polling data that realistically reproduces human responses. It extends the work of <cit.>, for example, to issue polling. We provide multiple ways of thinking about how these capabilities arise (<ref>) and discuss the limitations of, and potential mitigations for, these abilities (<ref>). This demonstration has significant potential implications for the political polling and market research industries and for consumers of issue polling data such as political campaigns and advocates (<ref>). §.§ Interpretation The mechanism by which LLMs can generate synthetic polling data can be viewed alternatively as accessing a virtual public or as a new form of AI-assisted online listening platform. Under the virtual public framework, we consider the LLM to be simulating a population of individual synthetic respondents akin to a human survey panel. The multi-head attention architecture used by leading LLMs has a natural interpretation in these terms; to the extent that they capture distinguishable semantic information, each attention head can effectively represent a different perspective on an issue <cit.>.[ In deep learning models, “attention” is a widely used mechanism to differentially weight components of a layer input, effectively guiding the focus of the model. In transformer models, multiple versions of attention are learned (attention heads) to produce independent attention mechanisms, which may correspond to recognition of distinct lexical patterns such as detecting named entities, representing entity relations, word parts of speech, or even semantic information. See <cit.> for further information. ] Combined with the increasingly human-like reasoning performance and natively probabilistic nature of autoregressive LLMs, these features provide a basis by which models like ChatGPT can generate text emanations and survey responses that appear as if they came from a diverse panel of human respondents. The online listening interpretation places models like ChatGPT alongside tools for online social media, news, and opinion aggregation like Brandwatch <cit.>, Meltwater <cit.>, and MediaCloud <cit.>, tools widely used by market researchers, brands, and political actors to understand public sentiment and reactions to recent events. Like those online listening platforms, the source of the LLM's capabilities is a large corpus of Internet-derived training data that reflects a broad range of perspectives that, in aggregate, reflect public opinion and, when disaggregated, can elucidate trends with respect to demographics and other variables. A substantial advantage of LLMs in principle is that they have reasoning capacity, allowing them to generalize beyond their training data to make predictions about hypothetical events or those that occur outside of the context of their sources.
While the results of <ref> illustrate the limited ability of current-generation LLMs to succeed at this task, such generalization represents a major long-term advantage of LLMs and AI generally that is sure to be exploited by companies and other users <cit.>. LLMs are more akin to a virtual public than an online listening platform, beyond their capability to generalize to new issues, in that they offer an opportunity for AI-assisted pollsters to manipulate context and state. When using online listening tools, one is limited to the questions and context that actual people have been exposed to and responded to, which makes it impossible to simulate a longform questionnaire like that used in the CES survey. In the longform questionnaire, respondents (or subsets of respondents) answer questions in sequence and can be primed with certain information, such as factual evidence or talking points, in an effort to measure that context's influence on their response. Because LLMs are capable of accepting sequential prompts and (at some level) of generalizing beyond the specific examples in their training data, they can simulate this kind of longitudinal questionnaire. §.§ Limitations A primary challenge in the design of AI polling tools is prompt engineering, as prompting strategies can dramatically affect the reasoning skills and accuracy of LLMs <cit.>. The LLM must be prompted not only to elicit demographically accurate differences in real public opinion associated with complex policy issues, but also, preferably, to align its response to established public polling datasets and methodologies. As a step towards that level of alignment, in this work, we have established a methodology (<ref>) for prompting LLMs to generate both numerical responses aligned to the questionnaire of a real public polling sample as well as explanations of their policy positions. Improved alignment on numerical responses can lend additional credence to the textual responses generated by the AI models. The imperfect correspondence between the AI-generated results and the real human survey data presented in <ref> is surely due in part to inadequacies of the LLM used in this work, and in part to the imperfection of the prompt engineering. Even with existing LLMs like GPT-3.5, a variety of additional model parameters and prompt considerations could enable improvements upon our results. In particular, systematic modification of the LLM's temperature parameter,[<https://platform.openai.com/docs/api-reference/chat/create#chat/create-temperature>] which adjusts variance in the probabilistic generative text output, may have the effect of controlling the spread in opinion responses returned for a given demographic and issue configuration. Moreover, because GPT models are autoregressive, their outputs may be sensitive to the instructions in our prompt about where to place the numeric “Position score.” In particular, since chain-of-thought prompting is known to affect reasoning in LLMs <cit.>, asking the model to assert a score before generating the text may significantly condition that response. Among the most critical ethical considerations in using LLMs is their potential to repeat biases from their training data, including harmful stereotypes and misinformation <cit.>. In some cases, these biases may reflect actual (if objectionable) distributions of human opinion and beliefs, and in other cases they may reflect the over-representation of those beliefs in certain online sources.
This vulnerability would not only weaken the usefulness of LLMs for public opinion measurement, but could actively create harm from their use. Similarly, there are biases (perceived and legitimate) in human political polling that limit its usefulness for actionable public opinion measurement <cit.>. Another key limitation is the availability of training data relevant to novel policy issues. In particular, current-generation LLMs are typically trained on fixed datasets that halt at a certain time (e.g., GPT-3.5 was trained on data collected through September 2021), and their training corpora may lack coverage of certain issues (e.g., Internet corpora may reflect a systematic silencing of certain issues; see, e.g., <cit.>). To the extent that LLMs are limited to “parroting” memorized training samples <cit.>, they cannot be expected to accurately extrapolate to the likely reactions of human respondents to truly novel world events. Moreover, absent highly detailed prompting about the state of the world at the time, LLMs may lack context that would be determinative of human responses; for example, the overturning of the Supreme Court precedent of Roe v. Wade is important context for Americans surveyed on the question of abortion rights in 2023. This limitation could be mitigated by further development of continuously trained or diachronic LLMs, which can be updated with new training data over time and are aware of the time sensitivity of their training samples <cit.>. Furthermore, LLMs can be augmented with capabilities to access new sources such as by browsing the web <cit.>, giving them access to new information to inform their responses at prediction time. §.§ Implications If this impressive but nascent ability of LLMs to realistically reflect ideological and demographic issue alignment improved, it would raise significant challenges and potential benefits for the future of the survey and polling industries. Given the rapid dissemination and low-cost inference for powerful LLMs and AI chatbot systems such as ChatGPT over the past year, an accurate AI-based polling system would become a highly cost-effective alternative to human surveying. This cost advantage could democratize access to the tools of survey research, giving smaller institutions and individuals greater access to public opinion research. If problems of survey nonresponse continue (or grow), they may compel survey consumers to increasingly turn to alternative approaches, such as LLMs, which are capable of generating data at arbitrary speed and resolution. Moreover, the nearly instantaneous response rate from AI models (when not subject to rate limits from the companies that control them) provides an attractive capability to iterate on survey results. When days or weeks are not required to re-field a survey instrument, marketers and pollsters have a much greater ability to refine and update their questionnaires and collect new data. However, these abilities will only be actionable for marketers or political users if the significant challenges associated with the current generation of LLMs can be overcome. It remains to be fully assessed how bias inherent to LLM training data and model design will become imprinted on their outputs, and how that could shape decisions informed by simulated market research studies or simulated polling.
It may be that the web datasets commonly used to train modern LLMs <cit.> will appropriately reflect the distribution of real-world public thought, but perhaps only if curated to reflect a specific jurisdiction (e.g., sources primarily from one country) and to be balanced across the ideological spectrum. At present, these biases and their dependence on large pretraining dataset properties are both difficult to quantify and costly to measure <cit.>. And it is unclear to what extent such a system could capture rapidly evolving market and political dynamics, either historically or in real time, which is key to most practical uses of survey data (see <ref> for further discussion). § CONCLUSIONS By sampling from the OpenAI ChatGPT model (GPT-3.5) at scale (>11,000 responses), we have demonstrated the ability of LLMs to generate synthetic political issue polling data that realistically simulates American popular opinion across a variety of controversial topics in some respects. In particular, we have shown that AI-generated responses have an excellent correlation (typically ρ>85%) with human data within ideological subgroups for many issues. However, we have also shown the limitations of the AI-based approach to accurately match trends in non-ideological demographic factors such as age, race, and gender, and to extrapolate to public opinion on novel events that occurred after the harvesting of their training data (such as the 2022 war in Ukraine). We have interpreted these results in terms of multiple frameworks for the role of LLMs, as either virtual publics or online listening tools, and discussed their potential implications on the political polling and market research industries. While additional development of capabilities for dynamic updating of LLMs, bias reduction, and generalization to novel issue topics is needed for AI tools to robustly supplement human opinion surveying, this study demonstrates the potential utility of even the current generation of AI tools to reduce cost, increase speed, and widen the accessibility of issue polling. §.§ Acknowledgments We thank Henry Farrell for thoughtful conversations on the role of AI in democracy, Beth Friedman for her helpful edits, and Xiao-Li Meng and an anonymous editor for their feedback.
http://arxiv.org/abs/2307.04833v1
20230710181143
3D Simulations of Magnetoconvection in a Rapidly Rotating Supernova Progenitor
[ "Vishnu Varma", "Bernhard Mueller" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.HE" ]
We present a first 3D magnetohydrodynamic (MHD) simulation of oxygen, neon and carbon shell burning in a rapidly rotating 16 M_⊙ core-collapse supernova progenitor. We also run a purely hydrodynamic simulation for comparison. After ≈180 s (≈15 and 7 convective turnovers, respectively), the magnetic fields in the oxygen and neon shells achieve saturation at 10^11 G and 5×10^10 G. The strong Maxwell stresses become comparable to the radial Reynolds stresses and eventually suppress convection. The suppression of mixing by convection and shear instabilities results in the depletion of fuel at the base of the burning regions, so that the burning shells eventually move outward to cooler regions, thus reducing the energy generation rate. The strong magnetic fields efficiently transport angular momentum outwards, quickly spinning down the rapidly rotating convective oxygen and neon shells and forcing them into rigid rotation. The hydrodynamic model shows complicated redistribution of angular momentum and develops regions of retrograde rotation at the base of the convective shells. We discuss implications of our results for stellar evolution and for the subsequent core-collapse supernova. The rapid redistribution of angular momentum in the MHD model casts some doubt on the possibility of retaining significant core angular momentum for explosions driven by millisecond magnetars. However, findings from multi-D models remain tentative until stellar evolution calculations can provide more consistent rotation profiles and estimates of magnetic field strengths to initialise multi-D simulations without substantial numerical transients. We also stress the need for longer simulations, resolution studies, and an investigation of non-ideal effects. stars: massive – stars: magnetic fields – stars: interiors – stars: rotation – MHD – convection § INTRODUCTION In recent years, multi-dimensional effects during the advanced convective burning stages of massive stars have received significant interest for multiple reasons, and have been studied extensively by means of hydrodynamic simulations. It has been recognised that seed instabilities from convection play an important dynamical role in core-collapse supernova explosions of massive stars <cit.>. There is also the question of whether convective boundary mixing by turbulent entrainment and shell mergers may lead to changes in the pre-collapse structure of supernova progenitors compared to current spherically symmetric stellar evolution models <cit.> and affect the nucleosynthesis outcomes from massive stars <cit.>. Finally, multi-dimensional simulations of late-stage convective burning are starting to shed light on angular momentum transport and magnetic evolution inside massive stars <cit.>, which is particularly relevant for our understanding of neutron star birth spin rates and magnetic fields, and of hyperenergetic supernova explosions that are probably driven by rotation and magnetic fields. Three-dimensional simulations of convection during advanced burning stages in massive stars have so far largely disregarded two important aspects of real stars – rotation and magnetic fields.
The effects of rotation had only been touched upon by the seminal work of <cit.> and the axisymmetric (2D) studies of <cit.>, while 3D simulations have only started to explore rotation in recent years <cit.>. Similarly, magnetic fields during the advanced burning stages have only been considered recently by <cit.>, and their work was limited to the non-rotating case. Outside the context of advanced burning stages in massive stars, magnetohydrodynamic (MHD) simulations have been used extensively as a means to study convection over the years, primarily in the context of the Sun and solar-like stars <cit.>. Given the high quality of spatially and temporally resolved solar data <cit.>, these simulations often aim to explain more detailed observational time-varying features on the solar surface <cit.> and envelope convection, which includes understanding the formation of the solar rotation profile <cit.>. Studies of stars more massive than the Sun are currently limited to just a handful of core dynamo simulations of A and B-type stars <cit.>. Simulations of magnetoconvection during the late burning stages, both in rotating and non-rotating stars, are a necessity for several reasons. Even in slowly rotating massive stars, magnetic fields have been shown to impact the dynamics of the subsequent neutrino-driven explosions <cit.>. For the magnetorotational explosion scenario (e.g., <cit.>; see also early work on explosions driven by millisecond magnetars, e.g., <cit.>), a better understanding of the interplay between convection, rotation, and magnetic fields in supernova progenitors is even more critical. In this mechanism, a rapidly rotating core and very strong initial magnetic fields are required to launch the very energetic explosion. Such magnetorotational explosions are thought to explain rare, unusually energetic “hypernovae” with energies of up to ∼10^52 erg <cit.>. The magnetorotational explosion mechanism is linked to the problem of rotation and magnetism in massive stars. Initial conditions for magnetorotational explosion simulations currently come from “1.5D” stellar evolution models that assume shellular rotation and include effective recipes for magnetic field generation and angular momentum transport by hydrodynamic and magnetohydrodynamic processes <cit.>. There are still many open questions about the treatment of rotation and magnetic fields in stellar evolution models. Aside from purely hydrodynamic instabilities <cit.>, the interaction of rotation and magnetic fields is a critical issue. Since convective regions are usually assumed to rotate rigidly (as this is the only allowed rotational state in thermal equilibrium; <cit.>), attention has usually focused on angular momentum transport and dynamos in non-convective regions. The dynamo mechanism often implemented in these 1.5D stellar models to generate magnetic fields relies on sufficiently strong differential rotation in convectively stable regions of the star to stretch poloidal magnetic fields into toroidal fields. The dynamo loop is closed by the development of a pinch-type (Pitts-Tayler) instability <cit.>. This mechanism, developed by <cit.>, is often referred to as the Tayler-Spruit dynamo. Recently, <cit.> have tried to improve the Tayler-Spruit dynamo mechanism, arguing that the Tayler instability saturates via turbulent dissipation of unstable magnetic field perturbations.
This mechanism has a smaller energy dissipation rate and thus allows for stronger magnetic fields and more efficient angular momentum transport than the traditional Tayler-Spruit dynamo. Other attempts to improve the treatment of magnetic fields in stellar evolution models have derived scaling relationships in convective regions <cit.> and explored the role of the magnetorotational instability (MRI) <cit.>, driven, in part, by global 3D simulations such as that in <cit.>. These simulations have suggested that the interaction between the different instabilities and flows can be quite intricate: the flow may not only induce the pinch instability but can also be strongly affected by the MRI and magnetic buoyancy. Since 1.5D stellar evolution models implementing the Tayler-Spruit dynamo predict magnetic fields that are rather weak and predominantly toroidal, the general notion has long been that field amplification processes after the collapse are critical in magnetorotational explosions <cit.>, although this has recently been challenged <cit.>. In particular, for sufficiently strong seed fields in the progenitor, the initial field strengths and geometry could have a significant impact on the development of magnetorotational explosions after collapse <cit.>, making an understanding of the pre-collapse magnetic fields in 3D indispensable. In this study, we present a first simulation of rotating magnetoconvection during the final phases of shell burning using the ideal MHD approximation. This simulation constitutes a first step beyond spherically symmetric prescriptions in stellar evolution models to predict the magnetic field strength and geometry, as well as its role in angular momentum transport, encountered in the inner shells of massive stars at the pre-supernova stage. We also compare to a corresponding non-magnetic model of the same progenitor to gauge the feedback of magnetic fields on the convective flow and rotation profiles. Our paper is structured as follows. In Section <ref>, we describe the numerical methods, progenitor model, and initial conditions used in our study. The results of the simulations are presented in Section <ref>. We first focus on the strength and geometry of the emerging magnetic field and then analyse the impact of magnetic fields on the convective flows and rotation, with a focus on the turbulent mixing and angular momentum transport within and between the burning shells. We summarise our results and discuss their implications in Section <ref>. § NUMERICAL METHODS AND SIMULATION SETUP We simulate oxygen, neon, and carbon shell burning with and without magnetic fields in a rapidly rotating 16 M_⊙ solar-metallicity helium star from <cit.> with a strong differential rotation profile calculated using the stellar evolution code Kepler. The same progenitor model has previously been used in the PROMETHEUS rotating shell convection simulation of <cit.>. The structure of the stellar evolution model at the time of mapping to 3D is illustrated in Figure <ref>. For our 3D simulations we employ the Newtonian magnetohydrodynamic (MHD) version of the CoCoNuT code as described in <cit.>. The MHD equations are solved in spherical polar coordinates using the HLLC (Harten-Lax-van Leer-Contact) Riemann solver <cit.>. The divergence-free condition ∇·𝐁 = 0 is maintained using a modification of the original hyperbolic divergence cleaning scheme of <cit.> that allows for a variable cleaning speed while still maintaining total energy conservation as described in <cit.> (building on similar ideas by <cit.>).
The extended system of MHD equations for the density ρ, velocity 𝐯, magnetic field 𝐁, total energy density ê, mass fractions X_i, and the rescaled Lagrange multiplier ψ̂ reads,

∂_t ρ + ∇·(ρ𝐯) = 0,

∂_t (ρ𝐯) + ∇·(ρ𝐯𝐯 − 𝐁𝐁/4π + P_t ℐ) = ρ𝐠 − (∇·𝐁)𝐁/4π,

∂_t ê + ∇·[(ê+P_t)𝐯 − (𝐁(𝐯·𝐁) + c_hψ̂𝐁)/4π] = ρ𝐠·𝐯 + ρϵ̇_nuc,

∂_t 𝐁 + ∇·(𝐯𝐁 − 𝐁𝐯) + ∇(c_hψ̂) = 0,

∂_t ψ̂ + c_h∇·𝐁 = −ψ̂/τ,

∂_t (ρ X_i) + ∇·(ρ X_i 𝐯) = ρẊ_i,

where 𝐠 is the gravitational acceleration, P_t is the total (gas and magnetic) pressure, ℐ is the Kronecker tensor, c_h is the hyperbolic cleaning speed, τ is the damping time scale for divergence cleaning, and ϵ̇_nuc and Ẋ_i are energy and mass fraction source terms from nuclear reactions. This system conserves the volume integral of a modified total energy density ê, which also contains the cleaning field ψ̂, ê = ρ(ϵ + v²/2) + (B² + ψ̂²)/8π, where ϵ is the mass-specific internal energy. The simulations are conducted on a grid with 400×128×256 zones in radius r, colatitude θ, and longitude φ with an exponential grid in r and uniform spacing in θ and φ. To reduce computational costs, we excise the non-convective inner core up to 3,000 km and replace the excised core with a point mass. The grid extends to a radius of 40,000 km and includes a small part of the silicon shell and the entire convective oxygen, neon, and carbon shells. Our simulations cover the full sphere (4π solid angle). In the MHD simulation, we impose a homogeneous magnetic field with B_z = 10^7 G parallel to the grid axis as the initial condition. We implement reflecting and periodic boundary conditions in θ and φ, respectively. For the hydrodynamic variables, we use hydrostatic extrapolation <cit.> at the inner and outer boundary, and impose an effectively slip-free inner boundary. Different from the hydrodynamic simulations of <cit.> and <cit.>, we do not contract the inner boundary to follow the contraction and collapse of the core. The inner and outer boundary conditions for the magnetic fields are less trivial. In simulations of magnetoconvection in the Sun, various choices such as vertical boundary conditions (B_x=B_y=0), radial boundary conditions (B_θ=B_φ=0), vanishing tangential electric fields or currents, perfect-conductor boundary conditions, or extrapolation to a potential solution have been employed <cit.>. Since our domain boundaries are separated from the convective regions by shell interfaces with significant buoyancy jumps, we opt for the simplest choice of boundary conditions and merely fix the magnetic fields in the ghost zones to their initial values for a homogeneous vertical magnetic field. We argue that due to the buffer regions at our radial boundaries, and the lack of rotational shear (due to the slip-free boundary conditions), our choice of magnetic boundary conditions should not have a significant impact on the dynamically relevant regions of the star. Similar to the non-rotating magnetoconvection simulations done in <cit.>, our models will not (and are not intended to) provide an exact representation of the pre-collapse state of the particular 16 M_⊙ star that we are simulating. We would expect, e.g., that for the particular 16 M_⊙ model, the burning rate and hence the convective velocities would increase until the onset of collapse due to the contraction of the convective oxygen shell. As a consequence of accelerating convection and flux compression, the magnetic fields will likely also be somewhat higher at the onset of collapse.
The model is rather meant to reveal the physical principles governing late-stage magnetoconvection in rapidly rotating massive stars, and to be representative of the typical conditions in the burning shells, with the understanding that there are significant variations in convective Mach number and shell geometry at the onset of collapse <cit.>, which will also be reflected in the magnetic field strengths in the interiors of magnetorotational supernova progenitors. § RESULTS §.§ Evolution of the magnetic fields We simulate two rapidly rotating 16 M_⊙ models, one with and one without magnetic fields. The magnetic model is initiated with a homogeneous magnetic field of 10^7 G. We then allow the geometry of the magnetic field to evolve naturally under the influence of rapid rotation and convection. The evolution of the root mean square (RMS), volume-averaged magnetic fields in the three convective burning shells we simulate — the oxygen, neon, and carbon burning shells — is shown in Figure <ref>. We see an initial period of exponential growth of the magnetic field strength in each shell before a plateau forms after ≈200 s. The field strengths in the oxygen and neon shells both appear to follow a very similar trajectory, achieving a peak at ≈190 s, before a gradual decline sets in. In each of the shells, convection takes a different amount of time to fully develop, which explains the slight delay from the start of the simulation to the beginning of the exponential field growth. In particular, the carbon shell has a much longer convective turnover timescale τ_c than the other two shells with an initial value τ_c≈300 s compared to ≈ 15 s and 25 s for the oxygen and neon shell, respectively, which considerably delays the growth of magnetic fields. The growth of the magnetic fields in the carbon shell already becomes apparent after ≈ 60 s, even without convection being fully developed. This is due to field amplification from strong differential rotation, which develops at the base of the shell, and turbulent fluctuations that develop alongside the convective plumes. Due to the development of convection in the shells, coupled with rapid differential rotation (maximum rotation rate of Ω ≈ 0.104 rad s^-1), we expect the field amplification to be dominated by the αΩ dynamo mechanism, which is often proposed as the mechanism that sustains the solar magnetic field <cit.>. The mechanism stretches the poloidal magnetic fields into toroidal fields via differential rotation (Ω-mechanism), and the toroidal field is then stretched into a poloidal field due to convective motions (α-mechanism), completing the cycle and amplifying the seed field. To test if this is the case, we plot the expected growth of the αΩ dynamo for the oxygen shell in Figure <ref>. To this end, we approximate the growth of the magnetic field via the αΩ mechanism in the oxygen shell by assuming the magnetic field evolves via the simplified evolution equation: ∂B_rms/∂t = Γ_αΩB_rms, which has a solution for the magnetic field growth of the form B_rms = B_0 e^(Γ_αΩΔt), where B_0 is the initial field strength. We take the growth rate of the αΩ dynamo to be Γ_αΩ = (v/L)(Ωτ_c)^1/2 as presented in <cit.> based on dimensional arguments, where v is the convective velocity, L is the radial extent of the convective zone, Ω the rotation rate and τ_c is the convective turnover timescale.
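The growth curve implied by this estimate can be integrated step by step from shell-averaged quantities, as described for Figure <ref> in the following paragraph. A minimal sketch in Python, with placeholder time series standing in for quantities extracted from the simulation:

import numpy as np

def alpha_omega_growth(times, v_rms, omega_rms, L, B0=1e7):
    """Integrate dB/dt = Gamma * B with Gamma = (v/L) * sqrt(Omega * tau_c)
    and tau_c = L / v, re-evaluating the shell averages at every step."""
    B = np.empty_like(times, dtype=float)
    B[0] = B0                                   # seed field in gauss
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        tau_c = L[i - 1] / v_rms[i - 1]         # convective turnover time
        gamma = (v_rms[i - 1] / L[i - 1]) * np.sqrt(omega_rms[i - 1] * tau_c)
        B[i] = B[i - 1] * np.exp(gamma * dt)    # exponential growth over dt
    return B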
Since τ_c∼ L/v, this effectively amounts to a growth time scale τ_αΩ of the order of the geometric mean of the rotation period P=2πΩ^-1 and the convective turnover time, τ_αΩ=Γ_αΩ^-1∼ (2π)^-1/2√(P τ_c). In the evolution plotted in Figure <ref>, we first calculate the RMS averaged values v and Ω in the oxygen shell, as well as the angle-averaged radial extent of the oxygen shell, L. The convective turnover time is calculated using these averaged quantities, τ_c = L/v. These averaged quantities are then used to determine the growth rate Γ_αΩ at each time step, which is used to evolve the magnetic field. The expected growth rate of the αΩ dynamo follows the growth of the magnetic field in the oxygen shell very closely for the first ≈140 s, after which the field growth in the simulation slows down; the field eventually decays after reaching peak strengths of ≈ 10^11 G and 7×10^10 G for the oxygen and neon burning shells, respectively. We see that the expected growth rate from an αΩ dynamo also decreases at later times due to two factors. First, convective velocities drop due to suppression of convection by strong magnetic stresses. Second, the rotation rate drops due to large angular momentum fluxes. We will discuss these effects further in Sections <ref> and <ref>. These effects stop the magnetic field from being amplified via the αΩ dynamo. Since the convection dies down, it is reasonable to expect that late-time field amplification and saturation is determined by differential rotation alone. Interestingly, the saturation of the field appears to be well described by an amplification mechanism that is driven by the MRI. For the MRI, <cit.> argue that the saturation field is roughly given by B_sat² ∝ 4πρ r²Ω² (dln Ω/dln r), where ρ is the density, Ω the rotation rate, r the radius, and dln Ω/dln r quantifies the amount of differential rotation present. <cit.> derive Equation (<ref>) by assuming that saturation of the magnetic field is achieved in the star when the characteristic mode scale l_mode ≈ v_A (dΩ/dln r)^-1 is equal to the local radius r, since the wavelength of the mode cannot be larger than the physical size of the unstable region. Here, v_A is the Alfvén velocity (v_A = B/√(4πρ)). On dimensional grounds, one can expect Equation (<ref>) to hold not just specifically for the MRI, but more broadly for amplification mechanisms driven solely by differential rotation in the ideal MHD regime with negligible resistivity. Figure <ref> shows that the magnetic field in the oxygen shell saturates at a very similar level to Equation (<ref>). The strong magnetic fields also result in very rapid redistribution of angular momentum, which slows the rotation rate of the oxygen and neon shells dramatically. Since the magnetic field saturation depends on the rate of rotation, this in turn leads to a drop in the average magnetic field strength by over 50% in these shells, as we see in Figure <ref>. The consequences of this will be discussed in more detail in the next sections. The carbon shell behaves somewhat differently from its neighbours as the shell is much larger and already more slowly rotating at the beginning of the simulation (Figure <ref>). Due to the slower rotation and lower density of the shell, the magnetic fields in the carbon shell saturate at a lower field strength.
But unlike the two inner shells, the magnetic stresses here always remain below the radial kinetic stresses (Figure <ref>), so convection continues unimpeded by the magnetic fields and, coupled with differential rotation, sustains a relatively constant magnetic field strength. Unfortunately, due to the very long convective turnover times, we are only able to resolve about one convective turnover in this shell. Pushing this simulation further becomes untenable as the convection in the carbon shell has begun to interact strongly with our outer domain boundary. Aside from the equilibrium field strength, it is worth investigating the field geometry that has naturally developed in the saturation state. To this end, we show radial profiles of the RMS averaged field strength and of the dipole field strength in Figure <ref>. The dipole field is calculated by extracting just the ℓ=1 component of the spherical harmonic decomposition: M̂_ℓ = √(∑_m=-ℓ^ℓ|∫ Y_ℓm^*(θ,φ) B dΩ|²). Close to the end of the simulation, the RMS field strength appears almost flat throughout the simulated domain, varying only between ≈5×10^9 G and 10^10 G. The dipole component of the magnetic field reaches about one third of the total field strength in the inner oxygen and neon burning shells, which are no longer convective at this point. Further out, however, the dipole is weaker by comparison to the RMS-averaged field. The field in the slowly rotating and still convective carbon shell may be concentrated in smaller-scale structures similar to those in the non-rotating convective shell presented in <cit.>. However, as the carbon shell has only completed 1–2 convective turnovers, it is difficult to say if this structure will be maintained at later times. §.§ Impact of magnetic fields on convection and energy generation As we already briefly mentioned above, the amplification of the magnetic fields in our simulation leads to a very rapid suppression of convection, as well as fast transport of angular momentum out of the affected shells. Here, we attempt to understand the consequences of these dynamical changes by comparing the MHD simulation to a purely hydrodynamical simulation of this progenitor. As the magnetic field grows, it eventually becomes strong enough to affect the bulk flow in the convection zones. To illustrate this, we compare the spherically averaged diagonal components of the kinetic (Reynolds) and magnetic (Maxwell) stress tensors R_ij and M_ij. R_ij and M_ij are computed as R_ij = ⟨ρ v_i v_j⟩, M_ij = ⟨B_i B_j⟩/8π, where angled brackets denote volume-weighted averages. Note that we do not subtract the mean rotational flow for R_ϕϕ here. In Figure <ref> we present the stresses in the MHD model at ≈ 180 s, where the Maxwell stresses begin to be comparable to the radial Reynolds stress, and near the final time-step of the simulation at ≈ 480 s. In Figure <ref>(a), the radial and meridional magnetic stresses are comparable to the radial kinetic stresses in the innermost regions of the star. This corresponds to pseudo-equipartition of these stress components throughout the oxygen and neon burning shells out to a mass coordinate of ≈ 2.8 M_⊙. The magnetic stresses then exert a backreaction on the convective flows in these shells. As shown in Figure <ref>(b) at a later time in the simulation, the backreaction greatly suppresses the convective velocities in these shells, lowering the radial kinetic stresses by several orders of magnitude.
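As an illustration of how such diagnostics are formed (a sketch under an assumed (n_r, n_θ, n_φ) array layout, not the actual CoCoNuT analysis code), the angle-averaged radial stress profiles can be computed as:

import numpy as np

def radial_stress_profiles(rho, v_r, B_r, dOmega):
    """Angle-averaged R_rr = <rho v_r v_r> and M_rr = <B_r B_r>/(8 pi)
    on a (n_r, n_theta, n_phi) grid; `dOmega` holds the solid-angle
    weights sin(theta) dtheta dphi with shape (n_theta, n_phi)."""
    w = dOmega / dOmega.sum()                                # normalized weights
    R_rr = (rho * v_r * v_r * w).sum(axis=(1, 2))            # Reynolds stress
    M_rr = (B_r * B_r / (8.0 * np.pi) * w).sum(axis=(1, 2))  # Maxwell stress
    return R_rr, M_rr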
We plot the RMS angle-averaged radial kinetic energy near the end of the simulation (≈480 s) in Figure <ref> for both the MHD model (top row) and the purely hydrodynamic case (bottom row). At this time, the suppression of convective motions in the oxygen and neon shells by strong magnetic fields causes the radial kinetic energy in the inner 2.8 M_⊙ to be about three orders of magnitude lower than in the hydrodynamic model (≈ 10^16-10^17 g cm^-1 s^-2 compared to ≈ 10^19-10^20 g cm^-1 s^-2). As mentioned above, the carbon shell has only had time for about one convective turnover, i.e., convection is yet to reach a fully developed state. At the end of our simulation, the carbon shell is still convective, and the kinetic stresses in this shell remain higher than the magnetic stresses. At early times, we see that the angular Reynolds stresses are the dominant components due to the rapid rotation. This, along with the sharp gradients that develop in these stresses, means that shear instabilities can mix material more efficiently than in the underlying 1D stellar evolution models, where there is little mixing beyond the convective zones on dynamical time scales. Shear mixing outside the convective regions initially plays a significant role in the MHD model as well. The R_rr stress component, which is indicative of radial motions that contribute to turbulent mixing, initially stays high outside the convective regions in the MHD model (Figure <ref>(a)). However, in the MHD model the transport of angular momentum flattens the rotation profile (as discussed in detail in Section <ref>), which we can see from the change in R_θθ and R_ϕϕ, significantly reducing the shear mixing at late times. In our purely hydrodynamic model, however, the Reynolds stresses remain roughly the same throughout the simulation, leading to continuously enhanced mixing compared to the MHD counterpart. This enhanced shear mixing, relative to the initial expectation from the 1D stellar evolution model, means that burning occurs in different regions, outside the initial location of the convection zones. The consequence of this can be seen by analysing how the mass fractions of several key elements evolve. In Figure <ref>, we compare the mass fractions of silicon, oxygen and neon in the MHD simulation and the purely hydrodynamic simulation. We also plot the radial kinetic energy at the end of the simulations, to further stress the differences in turbulent mixing when magnetic fields are introduced. The plots are limited to the inner 3 M_⊙ of enclosed mass to focus on the oxygen and neon burning shells, which are most strongly affected by magnetic fields. The radial profiles of all three elemental mass fractions in the hydrodynamic and MHD models evolve very similarly at early times (up to ≈120 s in Figure <ref>), when the magnetic fields are not strong enough to significantly affect mixing. At later times, however, the differences in mixing become quite apparent. The plots of the silicon mass fraction show that large fractions of silicon are mixed outwards to an enclosed mass of ≈2.65 M_⊙ in the hydrodynamic model, while there is little mixing beyond ≈2.5 M_⊙ in the MHD simulation. The inhibition of mixing also means that material that gets burnt at the base of a shell is less efficiently replenished by fresh material, or not replenished at all. This effect is seen in the oxygen shell as it burns material in a relatively narrow region around 1.85 M_⊙.
In the MHD simulation, a sharp drop in the oxygen and neon mass fractions develops in this region, which gets steeper at later times due to the rapid burning (mostly of oxygen) that is no longer replenished by convective and shear mixing. That this is caused by the very strong magnetic fields and a significant reduction in turbulent mixing becomes clear when comparing the mass fractions to the purely hydrodynamic simulation. Without magnetic fields, there is no sharp drop in mass fraction at the bottom of the burning shell. The profiles remain smoother within the shell and are even smoothed beyond the boundaries by shear mixing. The oxygen and neon mass fractions in the oxygen burning region even increase over time, as the rapid rotation and convection act to continually entrain fresh material into the oxygen and neon shells. We also find reduced mixing in the neon burning shell between ≈2.20 and 2.40 M_⊙. Here, instead of a sharp drop, we see the steep gradient of neon, where neon is consumed, move outwards after the initial transient where material is mixed inwards. This has the effect of shifting the location of the base of the burning shell. We show this phenomenon more clearly in Figure <ref>, which presents angle-averaged radial profiles of the energy generation rate at 110 s, 220 s and 450 s (dotted, dashed and solid lines, respectively). We show their evolution in both mass and radial coordinates in Figures <ref>(a) and (b), respectively, from the inner boundary of our simulation at 1.75 M_⊙ to an enclosed mass of 3.0 M_⊙, and from a radius of 3000 km to 12000 km, for both the hydrodynamic model and the MHD model. The energy generation profiles of both models show three clear peaks, which correspond to the three burning shells (oxygen, neon and carbon). We see that the two simulations quickly deviate from each other, with the energy generation peaks in the oxygen and neon shells of the hydrodynamic model moving inwards and getting stronger at later times, while the opposite is seen in the MHD simulation. We note that the relative change in the position of these energy generation peaks is largely similar in both mass and radial coordinates. Figure <ref> clearly shows that the second energy generation peak in the MHD model, which corresponds to neon shell burning, moves from ≈ 2.35 M_⊙ (≈ 5400 km) at early times to ≈ 2.50 M_⊙ (≈ 5800 km) later on, compatible with the change in the neon mass fraction profiles in Figure <ref>. Due to the lack of turbulent mixing, the peak of neon burning moves radially outward as neon gets burnt at its initial position. However, since the neon burning rate is extremely temperature-sensitive (∝ T^50, <cit.>), the energy generation rate drops significantly when burning moves to a slightly cooler region. We see a similar effect for the oxygen shell. At early times, before the magnetic field heavily suppresses turbulent mixing, both the MHD and hydrodynamic models mix material rapidly. The convective boundary mixing (CBM), particularly at the lower boundary of the oxygen shell, entrains material, increasing the size of the oxygen shell and causing it to move inwards (both in mass and in radius). We see the peak energy generation move from ≈ 1.95 M_⊙ (≈ 3900 km) to ≈ 1.85 M_⊙ (≈ 3500 km). After turbulent convection is suppressed in the MHD simulation, however, the nuclear energy generation peak in the oxygen shell behaves similarly to that of the neon burning shell, i.e., it moves outward in mass and radius to where oxygen is burned at a lower temperature.
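To gauge the magnitude of this effect (our own illustrative estimate, not a number quoted from the simulations): a shift of the burning site to a region only 5% cooler changes the energy generation rate by ε(T')/ε(T) = (T'/T)^50 = 0.95^50 ≈ 0.077 for neon burning, i.e., a suppression by more than a factor of ten; for the oxygen burning rate (∝ T^33, see below), the corresponding factor is 0.95^33 ≈ 0.18.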
The hydrodynamic model, however, continues to entrain material from beneath the oxygen shell, moving the peak of its energy generation deeper into the core. Oxygen burning is also a very temperature-sensitive reaction (∝ T^33, <cit.>), so these shifts in the burning shells lead to noticeable changes in the total energy generation rate over time. The consequences of the suppression of mixing in the MHD model are seen more clearly in Figure <ref>. Here, we plot the total (volume-integrated) energy generation in the oxygen, neon and carbon burning shells over time for both the hydrodynamic and MHD models. As we see in Figure <ref>, the lack of mixing in the MHD model leads to the oxygen and neon shells moving radially outwards with time, to cooler regions of the star. This causes a gradual decrease in energy generation after the magnetic fields in these shells first reach saturation at ≈200 s. The increased mixing in the hydrodynamic model, on the other hand, causes a subsequent increase in the peak energy generation rate as the shells move deeper into the star. Interestingly, after ≈ 220 s of similar energy generation in the carbon shell in both models, the energy generation rate starts to increase in the MHD model and decreases in the hydrodynamic case. This likely reflects a slight change in the position of the carbon burning shell in the hydrodynamic model, while it remains mostly stationary in radius in the MHD model. From the profiles in Figure <ref>, the shift is likely caused by the radial expansion of the neon shell in the hydrodynamic model as its energy generation rate increases, pushing the carbon shell further outwards. In the MHD model, by contrast, the energy generation rate in the neon shell has dropped over time and hence the carbon shell is not driven outward. §.§ Evolution of rotation and angular momentum transport In addition to its effect on turbulent mixing, the development of strong magnetic fields leads to rapid redistribution of angular momentum. The evolution equation for the angular momentum can be obtained by taking the cross product of the position vector 𝐫 with the fluid momentum equation <cit.>. When including magnetic stresses in the momentum equation and integrating over spherical shells, one obtains <cit.>, ∂⟨ρ v_ϕ r sinθ⟩/∂ t + ∇_r·⟨(ρ v_r v_ϕ - B_r B_ϕ/4π) r sinθ⟩ = 0, where ∇_r is the radial component of the divergence operator and angled brackets denote averages over solid angle. We then perform a Reynolds/Favre decomposition <cit.> around a base state with constant angular velocity Ω̂_z on spheres, as in <cit.>, Ω̂_z = ⟨ρΩ_z r^2 sin^2θ⟩/⟨ρ r^2 sin^2θ⟩ = ṽ_ϕ r sinθ/i_zz. Here i_zz = ⟨ρ r^2 sin^2θ⟩/ρ̂. Note that we use hats and primes for volume-weighted Reynolds averages and their fluctuating components: X̂(r) = ⟨ X ⟩ = 1/4π∫ X dω, X'(r,θ,ϕ) = X - X̂, where dω = sinθ dθ dϕ is the solid angle element. We denote mass-weighted averages and fluctuations with tildes or angled brackets and double primes: X̃(r) = ∫ρ X dω/∫ρ dω, X″(r,θ,ϕ) = X - X̃. Applying the usual rules for Favre averages, ⟨ρ X⟩ = ρ̂X̃, ⟨ρX̃Y⟩ = ρ̂X̃Ỹ and ⟨ρX̃Y″⟩ = 0, we get ∂(ρ̂Ω̂_z i_zz)/∂ t + ∇_r · (⟨ρṽ_r(Ω̂_z + Ω″_z) r^2 sin^2θ⟩ + ⟨ρ v″_rΩ̂_z r^2 sin^2θ⟩ + ⟨ρ v″_r Ω″_z r^2 sin^2θ⟩ - ⟨(B_r B_ϕ/4π) r sinθ⟩ ) = 0. We see from the decomposed angular momentum Equation (<ref>) that angular momentum is transported by four distinct flux terms: in the order listed above, an advective term, a meridional circulation term and a turbulent transport term, as well as an additional magnetic stress term.
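To make the decomposition concrete, the following sketch (our own; it assumes fields given on a (θ, ϕ) shell grid with solid-angle weights dω and groups the hydrodynamic fluxes schematically — the magnetic term -⟨(B_r B_ϕ/4π) r sinθ⟩ is analogous) computes the shell averages and the three hydrodynamic transport terms:

    import numpy as np

    def shell_averages(rho, X, dOmega):
        # Volume-weighted (hat) and mass-weighted/Favre (tilde) averages on a shell
        X_hat = np.sum(X * dOmega) / (4.0 * np.pi)
        X_tilde = np.sum(rho * X * dOmega) / np.sum(rho * dOmega)
        return X_hat, X_tilde

    def am_flux_terms(rho, v_r, Omega_z, r, theta, dOmega):
        # Schematic radial angular-momentum fluxes: mean advection, meridional
        # circulation and turbulent transport; double primes denote the Favre
        # fluctuations X'' = X - X_tilde
        s2 = (r * np.sin(theta)) ** 2
        _, vr_t = shell_averages(rho, v_r, dOmega)
        _, Om_t = shell_averages(rho, Omega_z, dOmega)
        vr_pp, Om_pp = v_r - vr_t, Omega_z - Om_t
        F_adv = np.sum(rho * vr_t * Om_t * s2 * dOmega)     # advective term
        F_mer = np.sum(rho * vr_pp * Om_t * s2 * dOmega)    # meridional circulation
        F_turb = np.sum(rho * vr_pp * Om_pp * s2 * dOmega)  # turbulent transport
        return F_adv, F_mer, F_turb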
Thus, aside from the usual hydrodynamic terms, the radial flux of angular momentum also depends on the strength of the magnetic fields. The evolution of the angle-averaged rotation rates in the MHD model and the purely hydrodynamic model is depicted in Figure <ref>. The top row shows the rotation rates, Ω, over time for both models, and the bottom row presents angle-averaged specific angular momenta. We note that, in comparison to its hydrodynamic counterpart, the MHD model exhibits more efficient angular momentum transport, as expected. In the innermost region of our domain, the rotation rate is lowered by over an order of magnitude due to outward transport of angular momentum, from over 0.10 rad s^-1 initially to ≈ 0.01 rad s^-1. The rotation profile is flattened considerably. The angular momentum is taken up by the carbon shell outside 2.8 M_⊙, whose rotation rate increases. We note that Figure <ref> only shows a limited inner portion of the total simulated carbon shell. The hydrodynamic model displays much less smooth rotation profiles than the MHD model. It is noteworthy that at the various convective shell boundaries, the rotation profile shows significant dips. Figure <ref>(b) shows dips in rotation at ≈1.85 M_⊙, which is the base of the oxygen shell; between the oxygen and neon shells at a location that varies over time between ≈2.20 and ≈2.40 M_⊙; and finally between the neon and carbon shells at ≈2.80 M_⊙. It is particularly interesting that some of these dips even reach negative values, i.e., there are shells with net retrograde rotation, which remain quite stable. On closer inspection of the rotation profile of the MHD model, we see that these dips also begin to form early on in this case (Figure <ref>(a) at 120 s). However, the rotation profile is quickly smoothed once angular momentum transport due to magnetic fields becomes efficient. Unlike in the MHD model, where angular momentum is simply transported outward, the redistribution of angular momentum in the hydrodynamic model appears more complicated (Figure <ref>(d)). Effectively, positive angular momentum is transported into convective shells from the shell boundaries. This leads to an interesting non-monotonic rotation profile, where the fastest rotation is not reached at the inner boundary, but instead at 2.4–2.5 M_⊙ at late times, as well as the aforementioned dips between the convective shells. Redistribution of angular momentum also increases the rate of rotation in the inner carbon shell (around 2.8 M_⊙) and induces strong (radial) differential rotation there. To better understand the emerging rotation pattern in the hydrodynamic model and the counterintuitive phenomenon of retrograde rotation, we consider zonal and temporal averages of the meridional velocity and the rotation rate in Figure <ref>. In Figure <ref>(a), the meridional velocity plotted is an average over ϕ (zonal average) and time of |𝐯_r + 𝐯_θ|, while Figure <ref>(b) plots the same averages of the angular velocity, Ω = v_ϕ/(r sinθ), where r is the radius. Both figures plot the cube root of the original values to retain the direction of the flow while reducing the dynamic range. Positive (red) meridional velocities represent clockwise motion, and negative (blue) values show counter-clockwise flows (viewed from the North pole).
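This cube-root scaling is a simple signed compression of the dynamic range; in Python (a trivial sketch of our own):

    import numpy as np

    def signed_power(x, exponent=1.0 / 3.0):
        # Compress the dynamic range while retaining the sign (flow direction);
        # exponent = 1/3 reproduces the cube-root scaling used in the figure
        # (for the cube root specifically, np.cbrt(x) is equivalent)
        return np.sign(x) * np.abs(x) ** exponent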
The left halves of Figures <ref>(a) and (b) show the zonal flow averages of these quantities, while the right halves show snapshots of the two quantities on a meridional slice. From Figure <ref>(b), it is clear that the retrograde rotation occurs mostly at the base of the carbon shell and, to a lesser extent, at the base of the oxygen shell. The snapshot on the right half of Figure <ref>(b) shows that regions of retrograde rotation form at the base of the neon shell as well, but these are more transient and do not show up in the zonal averages. We also observe that in the hydrodynamic model, although the retrograde rotation appears to form as a shell at the base of the carbon shell, the rotation pattern that forms is in general not shellular. At the base of both the oxygen and carbon shells, we rather see indications of anti-solar differential rotation with faster rotation near the poles. Such a rotation pattern has been observed before in simulations of surface convection zones. <cit.> attribute the development of different rotation profiles, in part, to a link between the differential rotation and the meridional circulation in the convection zone. Anti-solar rotation profiles are attributed to inward angular momentum transport, which establishes a single-celled meridional circulation profile throughout the convection zone. This circulation transports angular momentum polewards, spinning up the poles relative to the equator. Although our model exhibits anti-solar-like rotation profiles at the base of the carbon (8400 km) and oxygen (≈3400 km) shells in Figure <ref>(b), the corresponding meridional circulation in Figure <ref>(a) does not exhibit the single-celled structure expected from <cit.>. We instead find that the meridional flows are not clearly structured, but appear more similar to a case with multi-celled circulation. We see an analogous meridional circulation flow develop in the inner regions of the carbon shell of our model (above 8400 km), with two large cells of material (Figure <ref>(a)). One noticeable difference in our model is that the circulation velocities do not drop off towards the poles; instead, they remain very strong. This may be part of the reason why the model exhibits a stable retrograde shell of material at 8400 km that extends across almost all latitudes, as shown in Figure <ref>(b), rather than being confined to the equatorial region as in the surface convection models presented in <cit.>. One major difference that likely alters the meridional flows in our simulations is the proximity of other convective shells. While in surface convection simulations the shell is bounded only by a radiative core, in our model there are three adjacent convective shells. Although these shells are initially separated by thin radiative zones in the 1D stellar structure input model, the rapid rotation and turbulent convection increase the amount of mixing that occurs at the convective shell boundaries in the 3D model, causing the convective shells to start interacting. This further complicates the transport of angular momentum and, in turn, the meridional circulation. From extensive studies in solar physics, it is often expected that the stable differential rotation in our Sun is maintained by rotating turbulent convection <cit.>, due to the interplay of buoyancy and inertial forces.
We suspect that, similarly, the complicated rotation profile that develops in our purely hydrodynamic model is due to an interaction between the Coriolis and buoyancy forces. Similar retrograde rotation patterns have been found in studies of rotating solar-like stars, and depend on the Rossby number of the flow <cit.>. To confirm that our model is in the relevant regime, where Coriolis forces are strong enough to make such an effect plausible, we analyse how the Rossby number compares to that of the MHD model, which develops a flat rotation profile. We define the Rossby number of any given burning shell as Ro = v_conv/(2 Ω_shellΔ r), where v_conv is the convective velocity, Ω_shell the rotation rate, and Δ r the radial extent of the burning shell. The time evolution of the Rossby number in the oxygen, neon and carbon shells is shown in Figure <ref>. We find that the Rossby number for the oxygen and neon shells in both models starts at ≈ 0.4 (and remains largely unchanged in the hydrodynamic model), indicating that the flows are strongly shaped by the Coriolis force. The Rossby numbers in the MHD model clearly reflect that the magnetic fields start to strongly alter the bulk flows. At ≈ 180 s the magnetic fields become strong enough to start rapidly transporting angular momentum outwards, slowing the rotation of the star and hence increasing the Rossby number. The Rossby number continues to increase until ≈ 200 s in the neon shell and ≈ 240 s in the oxygen shell, when the suppression of convective flows by magnetic stresses becomes dominant, lowering the Rossby number. The carbon shells of both models clearly have not reached a convective steady state, and hence not even a transient steady state in the Rossby number can be discerned. The MHD case initially transports angular momentum out of the star more efficiently. We see in Figure <ref>(c) that angular momentum from the rapidly rotating inner regions (inner 2.8 M_⊙) is transported out to the carbon shell (2.8–3.0 M_⊙). Focusing on the lines at 215 s in Figure <ref>(a), we see that at these times the rotation rate in the inner carbon shell increases in the MHD case compared to the purely hydrodynamic model. This leads to the deviation in Rossby number initially seen from ≈ 150 s, with the Rossby number in the MHD model lowered compared to the hydrodynamic model. After ≈260 s, the Rossby number in the hydrodynamic model starts decreasing gradually. We associate this with the decrease in energy generation compared to the MHD model seen after ≈220 s in Figure <ref>, which would have the effect of decreasing the convective velocity in the carbon shell. The opposite trend is seen for the MHD Rossby number due to the corresponding increase in energy generation. At first glance, this result appears to be in opposition to what is found in solar differential rotation simulations and in the shell burning simulation of <cit.> with a very similar setup. As summarised in <cit.>, retrograde rotation is sometimes seen when buoyancy forces dominate the flow (i.e., Ro ≥ 1), where solar-like stars develop retrograde rotation at the equator and faster rotation at the poles (anti-solar rotation), and faster rotation at the equator for Ro ≤ 1. Figure <ref> shows that although strong retrograde rotation develops in our hydrodynamic model, the Coriolis force dominates the flow in each shell (i.e., Ro ≤ 1).
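In practice, Equation (<ref>) is a one-line diagnostic; the sketch below (our own, with illustrative cgs values chosen only to reproduce Ro ≈ 0.4, the initial oxygen-shell value) makes the definition concrete:

    import numpy as np

    def rossby_number(v_conv, Omega_shell, r_inner, r_outer):
        # Ro = v_conv / (2 * Omega_shell * delta_r), with v_conv the RMS convective
        # velocity, Omega_shell the mean rotation rate, and delta_r the radial
        # extent of the burning shell
        return v_conv / (2.0 * Omega_shell * (r_outer - r_inner))

    # Hypothetical oxygen-shell values: Ro ~ 0.4 indicates rotation-dominated flow
    print(rossby_number(v_conv=3e7, Omega_shell=0.1, r_inner=3.5e8, r_outer=7.0e8))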
We note, however, a key difference between these models. The anti-solar rotation profiles in low-mass stars develop throughout the convective zone, whereas we find the retrograde motion to be largely concentrated at the base of the burning shells. The different phenomenology could be explained by the rather disparate conditions in surface convection zones of solar-like stars and in convection during advanced burning stages. Solar-like stars have an inner radiative zone and a single convective shell above it, and radiation diffusion plays a role both for the internal structure of the convection zone and especially for the structure of the convective boundaries. Our progenitor has three interacting convective shells, radiative effects are unimportant (and not included in the model), and the structure of the boundaries is determined by turbulent entrainment. We suspect that the cause of the retrograde rotation in our simulation is similar to what is described in <cit.> for hydrodynamic simulations of ice giants, and for solar-like simulations in <cit.>, where convective rolls exhibit a preferred “tilt” in the positive ϕ direction. The tilted flow structures create a correlation between flows moving inward (outward) and those moving in the negative (positive) ϕ direction. Due to the strong turbulent mixing between shells, however, it is difficult to see in our models whether this tilted flow structure truly arises. For low-Rossby convection in the solar case, this usually results in net angular momentum transport away from the rotation axis, which tends to speed up the equator. Since the usual Rossby number characterises the flow in the convection zone globally, it alone does not give us insight into the dynamics at the convective boundaries or between convective shells. To understand the interplay between the Coriolis and buoyancy forces at the convective boundaries, we instead plot the angle-averaged magnitudes of the buoyancy and Coriolis forces per unit mass, f_B and f_C, of the hydrodynamic model in Figure <ref>: f_B = g δρ/ρ̂, f_C = 2|Ω×𝐯|. Here g is the gravitational acceleration, ρ̂ the RMS-averaged density, δρ the RMS-averaged fluctuation from the average density, Ω the angular velocity vector of the rotation and 𝐯 the velocity. For simplicity, we assume rotation is confined to v_ϕ as in our initial conditions, i.e., Ω points in the z-direction, so that f_C only has a component pointing away from the rotation axis. We plot the RMS average of the absolute value of f_C in Figure <ref>. This allows for greater clarity in comparing the two forces, since the retrograde rotation would otherwise lead to regions of negative f_C. We plot these forces at 57 s, 115 s and 190 s (dotted, dashed and solid lines, respectively). These times were chosen to represent the situation before the development of retrograde rotation in the hydrodynamic model, when retrograde motion initially begins at the base of the carbon shell, and when it begins at the base of the oxygen shell. For our simulation, we find that convective boundary overshooting at the lower convective boundaries leads to buoyancy forces dominating the Coriolis force in between convective shells (≈ 1.85 and 2.8 M_⊙, see Figure <ref>), which is likely why retrograde motion in our models is confined to the shell boundaries. This is further supported by the fact that we see the “stable” retrograde shell start to form at the base of the oxygen shell (≈ 1.85 M_⊙) when the buoyancy force surpasses the Coriolis force.
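For completeness, the two force magnitudes can be evaluated per cell as follows (our own sketch; Cartesian velocity components are assumed for clarity, with Ω along the z-axis as in the initial conditions):

    import numpy as np

    def buoyancy_force(g, drho_rms, rho_rms):
        # f_B = g * delta_rho / rho_hat, per unit mass
        return g * drho_rms / rho_rms

    def coriolis_force(Omega_z, v_x, v_y):
        # f_C = 2 |Omega x v| for Omega = (0, 0, Omega_z):
        # Omega x v = Omega_z * (-v_y, v_x, 0), so |Omega x v| = Omega_z*sqrt(vx^2+vy^2)
        return 2.0 * np.abs(Omega_z) * np.sqrt(v_x**2 + v_y**2)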
The evolution of the Coriolis force in Figure <ref> shows that this effect develops due to the transport of angular momentum away from the convective boundaries into the convective shells. The magnetic model initially shows a similar force ratio; however, the ratio of buoyancy to Coriolis forces is soon reduced by the rapid rise of the magnetic field strength and the subsequent suppression of convection, and hence of the buoyancy force. § CONCLUSIONS We investigated the evolution of magnetic fields during advanced convective burning stages in massive stars and their backreaction on the flow in a simulation of the oxygen, neon and carbon shells in a rapidly rotating 16 M_⊙ progenitor shortly before core collapse. For comparison, we conducted a purely hydrodynamic simulation of the same progenitor as well. The simulations were run for about 8 minutes of physical time, corresponding to about 32 convective turnovers in the oxygen shell. Rapid differential rotation and convection initially amplify the magnetic fields exponentially via the αΩ-dynamo. However, strong magnetic stresses eventually dominate the radial kinetic stresses. The backreaction of the fields on the flow stops the exponential growth and suppresses convection in the oxygen and neon burning shells. These shells are effectively turned into convectively stable shells by the strong magnetic stresses and continue to burn fuel at their base without mixing of fuel and ashes. The magnetic field reaches saturation in the oxygen and neon shells after 180 s (corresponding to ≈12 and 7 convective turnovers, respectively). It peaks at 10^11 G in the oxygen shell but decays to 3×10^10 G by the end of the simulation. In the carbon shell, the field appears to saturate at 10^10 G, but this shell has only completed about one turnover during our entire simulation, so a steady state has likely not been reached. The strong magnetic fields that develop also transport angular momentum much more efficiently than in the purely hydrodynamic model. Already within the short duration of this simulation, the structure transitions from strong differential rotation into a nearly uniform rotation profile, with significant spin-down of the inner shells. In the purely hydrodynamic model, shell convection is sustained and strong differential rotation is maintained. However, it develops a much more complicated rotation profile than in the underlying 1.5D stellar evolution models. The emerging rotation profile shows sharp drops at convective boundaries and, during some phases, even shells with retrograde rotation. We hypothesise that this is due to an instability that occurs when rapid changes of the rotational and convective velocities occur at the convective boundaries, coupled with strong meridional flows towards the poles. The regions with retrograde rotation are conspicuously associated with spikes in the local Rossby number, i.e., the ratio of the RMS-averaged buoyancy and Coriolis forces. To better understand this phenomenon, we require further studies with different progenitors, varying Rossby-number flows, and different grid geometries. The transition of the oxygen and neon shells to slowly and rigidly rotating, non-convective regions significantly reduces turbulent mixing. While the hydrodynamic model rapidly mixes new material into the burning regions, the MHD model exhibits sharp drops in the oxygen and neon mass fractions in the narrow burning regions.
One consequence of this difference is that the hydrodynamic model entrains material deeper into the star, moving the peak of the energy generation of the oxygen shell radially inwards, while the location of peak energy generation of the same shell in the MHD simulation moves outwards. Due to the strong temperature sensitivity of oxygen burning (∝ T^33), this small change in shell position leads to a noticeable difference in energy generation between the two models, resulting in increasing nuclear energy generation in the hydrodynamic model and inhibited nuclear energy generation in the MHD model at late times. Our results have important implications for core-collapse supernova modelling. For this particular rotating progenitor model, we predict pre-collapse fields of ≈2× 10^10 G in the oxygen shell, similar to what we find for the non-rotating case in <cit.>. Our rotating model exhibits a more gradual drop in field strength with radius. With relatively strong seed fields, we expect less of a delay until magnetic fields can become relevant for shock revival, i.e., by providing an additional “boost” to neutrino heating, as seen in <cit.>. Due to the suppressed convective flows, the perturbation-aided mechanism <cit.> may be less effective; however, asymmetries seeded by the strong magnetic fields may be enough to deliver a similar effect <cit.>. Perhaps most importantly, the very rapid redistribution of angular momentum away from the inner shells casts doubt on the viability of a fast magnetorotational explosion powered by a “millisecond magnetar”. For the right conditions to develop, a mechanism would be required to spin up the proto-neutron star during or after the core collapse for a magnetorotational explosion to be launched. However, there is still work to be done before the findings from simulations of magnetoconvection in rotating stars can be incorporated into models of magnetically- or magnetorotationally-driven explosions. For example, future simulations will need to include the core and self-consistently follow its contraction and incipient collapse to provide initial conditions for supernova simulations. Multi-D simulations of rotating massive stars also face a much more fundamental challenge. The MHD model, and to some extent the hydrodynamic model, rapidly diverges from the initial structure of the stellar evolution model. Current stellar evolution models are clearly far from the actual quasi-steady state conditions that would emerge under the influence of rotation, convection and magnetic fields. Ideally, 3D simulations should cover significantly longer time scales to follow the relaxation of the structure into equilibrium and then study the subsequent evolution on secular time scales, but this is clearly beyond current computational resources. It is therefore very important to make 1D stellar evolution models and MHD models more consistent with each other to minimise deleterious effects from large initial transients that limit the fidelity of 3D simulations. This will require improved formalisms for stellar evolution with rotation and magnetic fields <cit.>. Developing the appropriate methodology for solving the problem of stellar evolution with rotation and magnetic fields by a combination of 1D and 3D modelling is bound to remain an extraordinary and exciting challenge. § ACKNOWLEDGEMENTS We acknowledge fruitful discussions with R. Hirschi and A. Heger. VV acknowledges support from the STFC (Science and Technology Facilities Council; ST/V000543/1).
BM was supported by ARC Future Fellowship FT160100035. We acknowledge computer time allocations from Astronomy Australia Limited's ASTAC scheme, the National Computational Merit Allocation Scheme (NCMAS), and from an Australasian Leadership Computing Grant. Some of this work was performed on the Gadi supercomputer with the assistance of resources and services from the National Computational Infrastructure (NCI), which is supported by the Australian Government, and through support by an Australasian Leadership Computing Grant. Some of this work was performed on the OzSTAR national facility at Swinburne University of Technology. OzSTAR is funded by Swinburne University of Technology and the National Collaborative Research Infrastructure Strategy (NCRIS). § DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the authors, subject to considerations of intellectual property law.
http://arxiv.org/abs/2307.04889v2
20230710202446
Critical behavior of cascading failures in overloaded networks
[ "Ignacio A. Perez", "Dana Ben Porath", "Cristian E. La Rocca", "Lidia A. Braunstein", "Shlomo Havlin" ]
physics.soc-ph
[ "physics.soc-ph" ]
Correspondence: [email protected] Instituto de Investigaciones Físicas de Mar del Plata (IFIMAR)-Departamento de Física, FCEyN, Universidad Nacional de Mar del Plata-CONICET, Deán Funes 3350, (7600) Mar del Plata, Argentina Formerly Dana Vaknin Faculty of Engineering and the Institute of Nanotechnology and Advanced Materials, Bar Ilan University, Ramat Gan, Israel Instituto de Investigaciones Físicas de Mar del Plata (IFIMAR)-Departamento de Física, FCEyN, Universidad Nacional de Mar del Plata-CONICET, Deán Funes 3350, (7600) Mar del Plata, Argentina Instituto de Investigaciones Físicas de Mar del Plata (IFIMAR)-Departamento de Física, FCEyN, Universidad Nacional de Mar del Plata-CONICET, Deán Funes 3350, (7600) Mar del Plata, Argentina Physics Department, Boston University, 590 Commonwealth Ave., Boston, Massachussets 02215, USA Department of Physics, Bar-Ilan University, Ramat-Gan 52900, Israel Physics Department, Boston University, 590 Commonwealth Ave., Boston, Massachussets 02215, USA While network abrupt breakdowns due to overloads and cascading failures have been studied extensively, the critical exponents and the universality class of such phase transitions have not been discussed. Here we study breakdowns triggered by failures of links and overloads in networks with a spatial characteristic link-length ζ. Our results indicate that this abrupt transition has features and critical exponents similar to those of interdependent networks, suggesting that both systems are in the same universality class. For weakly embedded systems (i.e., ζ of the order of the system size L) we observe a mixed-order transition, where the order parameter collapses following a long critical plateau. On the other hand, strongly embedded systems (i.e., ζ≪ L) exhibit a pure first order transition, involving nucleation and growth of damage. The system's critical behavior in both limits is the same as that observed in interdependent networks. Critical behavior of cascading failures in overloaded networks Shlomo Havlin ^1 Univ Lyon, EnsL, UCBL, CNRS, Inria, LIP, F-69342, Lyon Cedex 07, France ^2 CNRS, Univ de Lyon, ENS de Lyon, Laboratoire de Physique, F-69342 Lyon, France ^3 Department of Network and Data Science, Central European University, 1100 Vienna, Austria ^4 Rényi Institute of Mathematics, 1053 Budapest, Hungary ^*Corresponding author: [email protected] =========================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION Cascading failures and system collapse due to overloads have been modeled and studied within a network framework <cit.>. Relevant infrastructure such as power grids, transportation networks, and communication systems, many of which are embedded in two or three dimensional space <cit.>, are threatened by overloads in which even a small failure (e.g., deliberate attacks, natural disasters, or random malfunctions) may spread the overload failures, producing a partial or total collapse. Thus, understanding the origin, dynamics, and laws of cascading failures due to overloads is crucial for ensuring the stability, reliability, and resilience of infrastructure and services that we rely on every day. 
In contrast to ideal spatial systems such as lattices, many real-world networks have links with a characteristic length ζ <cit.>. Several studies <cit.> model this structural property with a 2D lattice where the sites are the nodes of the network and the link lengths are chosen from an exponential distribution, P(r) ∼ exp(-r/ζ) (the so-called ζ-model), which produces networks with a dimension that changes from two, for small ζ (short links), to infinite for large ζ (of the order of the system linear size L) <cit.>. Thus, in the ζ-model, the parameter ζ represents the strength of the spatial embedding. A fundamental model for cascading failures due to overloads is the Motter and Lai (ML) model <cit.>, which introduced and defined the concept of load and overload for a node or element in a network. In this model, the load is defined as the number of shortest paths that pass through the node (or link), and it is considered a measure of node relevance in the transmission of some quantity (e.g., information or energy) throughout the system. They defined a threshold called the capacity, which is proportional to the initial load and represents the maximum amount of load that a node can hold. Above this threshold, the node is regarded as overloaded and fails. However, the shortest path is not always the optimal path <cit.>. Thus, a reasonable modification of this model is to consider weighted networks, where links have associated weights that may indicate, for instance, the time (or cost) that it takes to travel across a given link. In this way, optimal paths, which represent the paths with minimal travel time (or cost) between nodes, are considered instead of shortest paths to define the node or link loads. Currently, the critical behavior and the universality class of the phase transition due to cascading failures induced by overloads have not been systematically studied. Here we study this phase transition in both the spatial ζ-model <cit.> and Erdős-Rényi (ER) <cit.> networks, and we find indications that it belongs to the same universality class as percolation of interdependent networks <cit.>. We observe that for weakly or non-spatially embedded systems, like ER networks or the ζ-model for large ζ (of the order of the system's linear size L), there exists a mixed-order transition, similar to interdependent ER networks <cit.>. At and near this abrupt transition, we find a long-lasting plateau in the order parameter, characterized by critical exponents. In contrast, for strongly embedded networks (i.e., ζ ≪ L), we observe a pure first order transition caused by nucleation of random damage, a behavior also exhibited by interdependent lattices with dependencies of finite length and by spatial multiplex networks <cit.>. § MODEL For the construction of the networks we use the ζ-model <cit.>. It consists of nodes located at the vertices of a two-dimensional lattice of size N = L × L and links created between two different nodes according to the following steps: 1) For each of the N nodes in the network, we assign integer coordinates (x,y), with x, y ∈ [1,L]. 2) We randomly select a node i with coordinates (x_i,y_i) and draw a ray of length r, taken from an exponential distribution P(r) ∼ exp(-r/ζ), at a uniformly distributed random angle θ above the horizontal axis. 3) We link node i with node j, where j is the closest node to the end point of the ray, p, with real coordinates (p_x,p_y) = (x_i + r cosθ, y_i + r sinθ).
We repeat the process until we build a network with an average degree ⟨ k ⟩ (we do not allow self-connections or multiple connections, and we assume periodic boundary conditions). Note that it is easy to generalize the ζ-model to any d-dimensional lattice. Regarding the cascade dynamics due to overloads, we study the ML model <cit.> in weighted networks, with positive weights that follow a Gaussian distribution. The load of node i, L_i(t) ≡ L_i^t, is then defined as the number of optimal paths between all pairs of nodes, excluding node i, that pass through node i at time t. The amount of load that a node can sustain at any time is given by its capacity, C_i = L^0_i(1 + α), which is proportional to the initial load L^0_i. The parameter α is the tolerance of the system, and it represents the resilience of nodes to failure. We perform, at t = 1, a random link percolation process by removing a fraction 1 - p of links, p ∈ [0, 1]. As a result, optimal paths throughout the network change, producing modifications in node loads, which may generate successive failures of nodes that become overloaded, in a cascade manner (see Fig. <ref>). After removing the links, we advance one unit of time and compute the new loads. For t > 1, at each time step, node i fails if L_i^t > C_i. We repeat the process until there are no more failures in the network. The model presented above is not analytically solvable because of the spatial constraints, but it can be analyzed via numerical simulations, which are highly time-consuming even for relatively small system sizes. To produce smoother and more consistent curves for a single realization, the randomness of the link removal is somewhat reduced: when doing percolation using a series of 1 - p values, if E_p_1 is the set of links that have been randomly removed for 1 - p_1, then for a larger value 1 - p_2 we remove the same set of links E_p_1 plus additional random links until we reach the value 1 - p_2. § RESULTS At the end of the cascading process, we analyze the relative size of the giant component, S(p) ≡ S, for weak and strong spatial embedding, i.e., for large and small ζ, respectively (see Fig. <ref>). In both limits, we find that the system undergoes an abrupt transition at a critical value p_c, such that S(p ≥ p_c) > 0 and S(p < p_c) ≈ 0. Nevertheless, we can distinguish two different behaviors in the vicinity of these transitions. For weak spatial embedding (ζ = 100, Fig. <ref> (a)), the system approaches criticality from the right (i.e., for p > p_c and S > 0) with a clear curvature that appears to be absent for strong embedding (ζ = 3, Fig. <ref> (b)). We characterize the weakly embedded system, near and at criticality, through a generalization of the critical exponent β for abrupt transitions <cit.>, defined with respect to S(p_c) > 0: for p close to the percolation threshold p_c, S(p) - S(p_c) ∼ (p - p_c)^β. Indeed, in the inset of Fig. <ref> (a), we show that the exponent takes the value β ≅ 0.5 for ζ = 100, in agreement with the usual mixed-order transition and with the value for interdependent random networks <cit.>. In contrast, for the case of strong spatial structure (ζ = 3, Fig. <ref> (b)), we do not observe a curvature with a critical exponent, but just a linear decrease followed by an abrupt collapse, suggesting a pure first order transition like that of interdependent spatial networks (see, e.g., Fig. 1 in <cit.>).
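As an aside, the model and percolation protocol described above lend themselves to a compact numerical sketch. The Python code below is a minimal illustration of our own using numpy and networkx, not the implementation used for this paper; L, ζ, ⟨k⟩, α and the weight parameters are arbitrary:

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)

    def build_zeta_model(L, zeta, k_avg):
        # zeta-model on an L x L lattice with periodic boundaries: link node i to
        # the node nearest the endpoint of a ray with exponentially distributed
        # length and uniform angle, until the average degree reaches k_avg
        N = L * L
        G = nx.Graph()
        G.add_nodes_from(range(N))
        while 2 * G.number_of_edges() < k_avg * N:
            i = int(rng.integers(N))
            x, y = i % L, i // L
            r = rng.exponential(zeta)
            th = rng.uniform(0.0, 2.0 * np.pi)
            px = int(round(x + r * np.cos(th))) % L
            py = int(round(y + r * np.sin(th))) % L
            j = py * L + px
            if i != j and not G.has_edge(i, j):
                # illustrative Gaussian weights, truncated to stay positive
                G.add_edge(i, j, weight=max(rng.normal(1.0, 0.2), 1e-3))
        return G

    def ml_cascade(G, p, alpha):
        # ML cascade with optimal-path loads: remove a fraction 1-p of links, then
        # iteratively fail nodes whose load exceeds C_i = L_i^0 * (1 + alpha).
        # With continuous weights optimal paths are almost surely unique, so the
        # unnormalized weighted betweenness equals the number of paths through i.
        load0 = nx.betweenness_centrality(G, normalized=False, weight="weight")
        capacity = {n: load0[n] * (1.0 + alpha) for n in G}
        H = G.copy()
        edges = list(H.edges())
        rng.shuffle(edges)
        H.remove_edges_from(edges[: int((1.0 - p) * len(edges))])
        while True:
            load = nx.betweenness_centrality(H, normalized=False, weight="weight")
            failed = [n for n in H if load[n] > capacity[n]]
            if not failed:
                break
            H.remove_nodes_from(failed)
        return max(nx.connected_components(H), key=len) if len(H) else set()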
Note that both behaviors near criticality found here, for small and large ζ, are very similar to those found in pure percolation (without overloads) of interdependent networks <cit.> with short-range and long-range dependency links, respectively. This suggests that the overload process plays a similar role to that of dependencies. Depending on the individual realization observed, both the critical threshold p_c and the mass of the giant component at p_c, M_c = N S_c, may vary (see Fig. 1 of the Supplemental Material <cit.>). We focus next on mean-field networks with long-range connectivity links (ζ = L) and study the fluctuations of these two quantities at criticality, σ(p_c) = (⟨p_c^2⟩ - ⟨p_c⟩^2)^1/2 and σ(M_c) = (⟨M_c^2⟩ - ⟨M_c⟩^2)^1/2, for different system sizes. Gross et al. <cit.> found for interdependent networks that a finite-size scaling analysis yields the relations σ(p_c) ∼ L^-1/ν', ν' = 2/d, and σ(M_c) ∼ L^d'_f, d'_f = 3d/4, where d is the spatial dimension. In Fig. <ref>, we show that a similar scaling of the dispersions with the linear size of the system L also holds for the ML overload model, and that the values of the exponents are the same as in mixed-order transitions of interdependent networks <cit.> (i.e., for d = 2, ν' = 1 and d'_f = 3/2).
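These exponents can be extracted with simple log-log fits; the snippet below is a schematic of our own (the array names are placeholders for simulation output, not data from this paper):

    import numpy as np

    def log_log_slope(x, y):
        # Least-squares slope of log(y) vs log(x); returns (exponent, prefactor)
        slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
        return slope, np.exp(intercept)

    # beta from S(p) - S(p_c) ~ (p - p_c)^beta, using points just above p_c:
    #   beta, _ = log_log_slope(p_vals - p_c, S_vals - S_c)
    # nu' and d'_f from the size dependence of the critical fluctuations:
    #   minus_inv_nu, _ = log_log_slope(L_vals, sigma_pc)  # slope = -1/nu'
    #   df, _           = log_log_slope(L_vals, sigma_Mc)  # slope = d'_f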
Continuing the comparison between spatial and non-spatial networks, the two types of transitions can also be understood by observing, in the proximity of criticality, how the cascade propagation evolves while reaching the steady state of the system. In Fig. <ref> (a), we show the time evolution of S for ζ = 100 and for several values of p, with p ≤ p_c. The total time of the cascade, τ, increases as the system gets closer to criticality and diverges at p_c (for N →∞; see Fig. <ref> for a finite-size scaling). This provides a useful method to identify p_c for each realization as the value of p for which the maximal number of iterations occurs in the numerical simulations. It is analogous to the behavior found in interdependent networks <cit.>. These abrupt transitions are also characterized, close to criticality, by a plateau in S, where a microscopic number of failures (Fig. <ref> (b)) keeps the cascade going with a branching factor η ≈ 1 (Fig. <ref> (c)), meaning that a small number of failures at a given time step produces a similarly small number of failures in the next step, for many steps of the order of N^1/3 (see Fig. <ref> (b)). Due to finite-size effects, this phase does not last forever and, eventually, the number of failed nodes starts to increase because of accumulated damage in the system, leading to an abrupt collapse <cit.>. In Fig. <ref> (d) we show, for this weakly embedded network with ζ = 100, the spatio-temporal distribution of the failures just above criticality. It is seen that the failures spread at all times over the whole network. This occurs because optimal paths that disappear after some node failures are likely to be replaced by paths that pass through nodes in distant sites of the network, due to the long-range connections, thereby overloading these distant nodes. The process for strongly embedded spatial networks (ζ = 3, Figs. <ref> (e)-(h)) is strikingly different. As the typical length of links is short (relative to the system linear size L), initial failures due to overloads may concentrate and spread to close neighbors (Fig. <ref> (h)). Eventually, the overloads and the random failures create a hole of failed nodes within the functional giant component, which then grows spontaneously near criticality and spreads throughout the entire system, causing its collapse. This phenomenon is known as nucleation (analogous to the well-known water-freezing nucleation transition), and it has also been observed in interdependent lattices with dependency links of finite length <cit.> and in spatial multiplex networks <cit.>. In addition, the complete disintegration of the giant component happens over a prolonged time interval with a relatively short plateau stage (in contrast to weakly embedded systems, as seen in Fig. <ref> (a)). All in all, our results regarding the temporal evolution of the cascades, as well as those corresponding to the critical exponents in the steady state, show a striking similarity with cascading failures in interdependent networks, suggesting that overloaded and interdependent networks belong to the same universality class. § CONCLUSIONS In this paper, we study the critical behavior and the exponents characterizing the steady state and the dynamics of cascading failures due to overloads in both non-spatial and spatial networks. After initiating the overload cascade by randomly removing a fraction of links, we analyze how the spatial embedding strength, governed by the typical length of links ζ, affects the behavior of the system at criticality. We find that the steady state of this process is characterized by abrupt transitions, regardless of the strength of the spatial embedding. However, for weakly embedded or non-embedded systems we observe a usual mixed-order transition similar to that of interdependent random networks, with a critical exponent value of β = 0.5. Furthermore, the exponent values that characterize the fluctuations of the quantities p_c and M(p_c) at criticality — related to the correlation length and the fractal fluctuations of the order parameter — are also in agreement with those of interdependent networks. In contrast, strongly embedded networks do not show a curvature (singularity) in the order parameter near p_c, but rather a linear decrease in the giant component size, as in interdependent spatial networks, which is a characteristic of pure first order transitions. Regarding the dynamical aspects near the transition, weakly and strongly embedded systems also show strikingly different behavior in the propagation of cascading failures. When studying the spatio-temporal propagation of failures, we find that for large ζ the failures spread through the whole network at all times. In contrast, for small ζ, failures are likely to initiate in a random location and propagate to nearby sites, yielding a nucleation spreading process that is observed as well in spatial interdependent and multiplex networks <cit.> (see also the recent study by Choi et al. <cit.>). Since the phenomena and the critical exponents studied in this paper for overload failures are the same as those of interdependent networks, we suggest that interdependent networks and overloads in networks belong to the same universality class. This is probably due to the similar types of interactions in the two systems. Our study represents an important contribution to the understanding of the mechanisms and the critical behavior of such catastrophic processes, especially for systems for which there are no analytical approaches, such as cascading failures in overloaded networks.
[1] A. E. Motter and Y.-C. Lai, Cascade-based attacks on complex networks, Phys. Rev. E 66, 065102(R) (2002).
[2] A. E. Motter, Cascade control and defense in complex networks, Phys. Rev. Lett. 93, 098701 (2004).
[3] D. J. Watts and S. H. Strogatz, Collective dynamics of 'small-world' networks, Nature 393, 440 (1998).
[4] M. Barthélemy, Spatial networks, Phys. Rep. 499, 1 (2011).
[5] B. Gross, D. Vaknin, M. M. Danziger, and S. Havlin, Multi-universality and localized attacks in spatially embedded networks, Proceedings of the Asia-Pacific Econophysics Conference 2016 – Big Data Analysis and Modeling toward Super Smart Society (APEC-SSS2016), 011002 (2017).
[6] B. Gross, I. Bonamassa, and S. Havlin, Fractal fluctuations at mixed-order transitions in interdependent networks, Phys. Rev. Lett. 129, 268301 (2022).
[7] B. M. Waxman, Routing of multipoint connections, IEEE J. Sel. Areas Commun. 6, 1617 (1988).
[8] L. Daqing, K. Kosmidis, A. Bunde, and S. Havlin, Dimension of spatially embedded networks, Nat. Phys. 7, 481 (2011).
[9] National Land Information Division, National Spatial Planning and Regional Policy Bureau, MILT of Japan, National railway data, http://nlftp.mlit.go.jp/ksj/gml/datalist/KsjTmplt-N02.html (2012).
[10] M. M. Danziger, L. M. Shekhtman, Y. Berezin, and S. Havlin, The effect of spatiality on multiplex networks, EPL (Europhysics Letters) 115, 36002 (2016).
[11] D. Vaknin, M. M. Danziger, and S. Havlin, Spreading of localized attacks in spatial multiplex networks, New J. Phys. 19, 073037 (2017).
[12] I. A. Perez, D. V. B. Porath, C. E. La Rocca, S. V. Buldyrev, L. A. Braunstein, and S. Havlin, Cascading failures in isotropic and anisotropic spatial networks induced by localized attacks and overloads, New J. Phys. 24, 043045 (2022).
[13] O. Gotesdyner, B. Gross, D. V. B. Porath, and S. Havlin, Percolation on spatial anisotropic networks, J. Phys. A: Math. Theor. 55, 254003 (2022).
[14] S. Havlin, L. A. Braunstein, S. V. Buldyrev, R. Cohen, T. Kalisky, S. Sreenivasan, and H. E. Stanley, Optimal path in random networks with disorder: A mini review, Physica A 346, 82 (2005).
[15] P. Erdős and A. Rényi, On random graphs I, Publicationes Mathematicae Debrecen 6, 290 (1959).
[16] A. Bunde and S. Havlin, Fractals and Disordered Systems (Springer-Verlag, New York, 1991).
[17] M. E. J. Newman, Networks: An Introduction (Oxford University Press, 2010).
[18] S. Buldyrev, R. Parshani, G. Paul, H. Stanley, and S. Havlin, Catastrophic cascade of failures in interdependent networks, Nature 464, 1025 (2010).
[19] J. Gao, S. Buldyrev, H. Stanley, and S. Havlin, Networks formed from interdependent networks, Nat. Phys. 8, 40 (2011).
[20] W. Li, A. Bashan, S. V. Buldyrev, H. E. Stanley, and S. Havlin, Cascading failures in interdependent lattice networks: The critical role of the length of dependency links, Phys. Rev. Lett. 108, 228702 (2012).
[21] D. Zhou, A. Bashan, R. Cohen, Y. Berezin, N. Shnerb, and S. Havlin, Simultaneous first- and second-order percolation transitions in interdependent networks, Phys. Rev. E 90, 012803 (2014).
[22] M. M. Danziger, A. Bashan, Y. Berezin, and S. Havlin, Percolation and cascade dynamics of spatial networks with partial dependency, J. Complex Netw. 2, 460 (2014).
[23] N. A. Kiani, D. Gomez-Cabrero, and G. Bianconi, Networks of Networks in Biology: Concepts, Tools and Applications (Cambridge University Press, 2021).
[24] Y. Berezin, A. Bashan, M. Danziger, L. Daqing, and S. Havlin, Localized attacks on spatially embedded networks with dependencies, Sci. Rep. 5, 8934 (2015).
[25] B. Gross and S. Havlin, Percolation in Spatial Networks: Spatial Network Models Beyond Nearest Neighbours Structures, Elements in the Structure and Dynamics of Complex Networks (Cambridge University Press, 2022).
[26] See Supplemental Material at [URL] for a plot of independent realizations of the steady state of the cascades for different values of ζ.
[27] S. Boccaletti, J. Almendral, S. Guan, I. Leyva, Z. Liu, I. Sendiña-Nadal, Z. Wang, and Y. Zou, Explosive transitions in complex networks' structure and dynamics: Percolation and synchronization, Phys. Rep. 660, 1 (2016).
[28] B. Gross, I. Bonamassa, and S. Havlin, Fractal fluctuations at mixed-order transitions in interdependent networks, Phys. Rev. Lett. 129, 268301 (2022).
[29] J. Zhao, D. Li, H. Sanhedrai, R. Cohen, and S. Havlin, Spatio-temporal propagation of cascading overload failures in spatially embedded networks, Nat. Commun. 7, 1 (2016).
[30] A. Bashan, Y. Berezin, S. Buldyrev, and S. Havlin, The extreme vulnerability of interdependent spatially embedded networks, Nat. Phys. 9, 667 (2013).
[31] H. Choi, Y. S. Cho, R. D'Souza, J. Kertész, and B. Kahng, Unified framework for hybrid percolation transitions based on microscopic dynamics, arXiv:2307.03584 (2023).
§ SUPPLEMENTAL MATERIAL
http://arxiv.org/abs/2307.05767v1
20230711195138
Statistical analysis of Discrete Dislocation Dynamics simulations: initial structures, cross-slip and microstructure evolution
[ "Aytekin Demirci", "Dominik Steinberger", "Markus Stricker", "Nina Merkert", "Daniel Weygand", "Stefan Sandfeld" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "stat.AP" ]
[1] Aytekin Demirci, [2] Dominik Steinberger, [3] Markus Stricker, [2,4] Nina Merkert, [5] Daniel Weygand, [1,2,6,*] Stefan Sandfeld
[1] Institute for Advanced Simulations – Materials Data Science and Informatics (IAS-9), Forschungszentrum Juelich, Juelich, Germany
[2] Institute of Mechanics and Fluid Dynamics, Freiberg University of Mining and Technology, Freiberg, Germany
[3] Interdisciplinary Centre for Advanced Materials Simulation, Ruhr-Universität Bochum, Bochum, Germany
[4] Institute of Applied Mechanics, Clausthal University of Technology, Clausthal-Zellerfeld, Germany
[5] Institute for Applied Materials, Karlsruhe Institute of Technology, Karlsruhe, Germany
[6] RWTH Aachen University, Faculty of Georesources and Materials Engineering, Chair of Materials Data Science and Materials Informatics, Aachen, Germany
[*] Corresponding author: [email protected]
§ ABSTRACT
Over the past decades, discrete dislocation dynamics simulations have been shown to reliably predict the evolution of dislocation microstructures for micrometer-sized metallic samples. Such simulations provide insight into the governing deformation mechanisms and the interplay between different physical phenomena such as dislocation reactions or cross-slip. This work focuses on a detailed analysis of the influence of cross-slip on the evolution of dislocation systems. A tailored data mining strategy using the "discrete-to-continuous (D2C) framework" allows us to quantify differences between dislocation structures and to compare them quantitatively. We analyze the quantitative effects of cross-slip on the microstructure in the course of a tensile test and a subsequent relaxation in order to establish the role of cross-slip in the microstructure evolution. The precision of the quantitative information extracted with D2C strongly depends on the resolution of the domain averaging. We also analyze how the resolution of the averaging influences the distribution of the total dislocation density and curvature fields of the specimen. Our analyses are important approaches for interpreting the structures obtained from dislocation dynamics simulations.
§ INTRODUCTION
Dislocations are one-dimensional defects found in crystalline materials. They are the boundary of an area over which relative slip occurred on defined slip planes Anderson_2017_a. The crystal lattice in the vicinity of the dislocation core is distorted, which results in long-range stresses in the material. Dislocation glide, i.e., the expansion or contraction of the slipped areas, is their response to the local stress, which is the sum of the external loading and the stress fields of other dislocations, and it results in the plastic deformation of the crystal. But dislocations do not interact only via their respective stress fields. Their behavior is more complex and includes several types of topological changes: dislocations can form junctions to lower their elastic energy (Frank's rule Hull2011) or can change their glide plane through a process called cross-slip. This leads to complex dislocation networks during straining of a specimen because, depending on an individual dislocation's properties such as slip plane and Burgers vector, these drivers of the topological changes can be mobile or immobile.
Among these possible dislocation interactions, cross-slip is of the mobile sort and thereby provides an additional path for the material to relax external loading. While it is accepted that cross-slip significantly impacts the formation of the dislocation microstructure, its exact impact on the actual dislocation microstructure is yet to be quantified, especially as we are not able to arbitrarily turn it on or off in experiments. Discrete dislocation dynamics (DDD) Weygand_2002_a, Po_2014_c, LeSar_2020_a allows one to study the effect of particular mechanisms on the plastic deformation behavior due to dislocation propagation, e.g., how different junction types contribute to the strain hardening behavior of face-centered cubic (fcc) crystals Weygand_2014, Stricker_2015_a, Sills_2018_a. Using DDD, the impact of cross-slip on the stress-strain curve and the total dislocation density has been studied Motz_2009_a, Zhou_2010_a, Hussein_2015_a. A common observation is that cross-slip results in lower stresses and higher total dislocation densities for the same macroscopic strain when compared to the same numerical experiments without cross-slip. Cross-slip was also identified as one of two processes providing new dislocations Stricker_2018_a. Dislocation network characteristics were further studied in terms of the density of dislocation junctions Hussein_2015_a. Based on DDD simulations, Xia_2016_a extracted cross-slip rates and used them to enhance a continuum dislocation dynamics model. In the aforementioned work, the influence of cross-slip on the microstructure was only considered in the sense of global densities of either the dislocations themselves or of the junctions formed by the dislocations. This work puts the focus on where dislocation microstructures are affected by cross-slip during uniaxial tension loading and unloading of a cuboid specimen on the micron scale. Recent studies of dislocation microstructures from DDD simulations indicate that, given a moderately high dislocation density, dislocation motion and therefore plasticity is a relatively local phenomenon Stricker_2018_a, Sudmanns2019. Therefore, studying where dislocations interact is needed to complement the existing purely averaged approaches. Another open question related to dislocation microstructures, apart from their evolution, is the initial state of a simulation. Real specimens have an existing, physically consistent microstructure, but simulations need to be initialized with a dislocation microstructure. Ideally, one would use initial microstructures which are statistically equivalent to real ones. On a side note, this also raises the question of what a statistically equivalent microstructure actually is, which is connected to our second question. Although there are several novel techniques and approaches for observing the 3D structure of dislocations in experiments LIU20141, OVEISI2018116, leon2020three, steinberger2023data, zhang2022data, the precision of the state-of-the-art techniques is limited; therefore, the quantitative microstructure information that can be accessed is insufficient to describe the complete dislocation network of a microstructure. Furthermore, initializing the microstructures is also a concern for continuum dislocation dynamics (CDD) simulations, where the initial variables other than the dislocation density are less amenable to “guesswork”.
Both DDD and CDD simulations require local statistics about how the density or curvature changes along the deformation path, and exploring the role of cross-slip on those observables is needed for a physically reasonable initialization of a microstructure. This is another question we explore here, via CDD field variables obtained by coarse graining of DDD microstructures. Our study aims to perform a statistical characterization of discrete dislocation microstructures via CDD field variables. The reason for using continuous field variables is that CDD variables allow us to base plasticity on dislocation-related measures, namely spatially resolved densities and curvatures, while only simple statistics such as average dislocation densities are directly accessible in DDD simulations. In the following, we first describe how we used large-scale DDD simulations to generate dislocation microstructures in uniaxial tensile tests up to a strain of 0.6 and subsequent unloading. We do this twice for each initial dislocation microstructure, once with cross-slip and once without it. Subsequently, we summarize the so-called discrete-to-continuous (D2C) method Sandfeld_2015_c, Steinberger_2016_b and use it to convert the discrete dislocation data to continuous field data. In this study, the fields which we make use of are the total dislocation density and curvature fields. In general, the total dislocation density is important for the direct quantitative comparison of coarse-grained microstructures, while the latter carries topological properties of dislocation structures weger2021analysing. We then make use of D2C to compare how dislocation microstructures that form with and without cross-slip differ over the course of the loading and unloading. The statistics of the unloaded dislocation microstructures are then analyzed and the resulting implications for initializing simulations are discussed. Finally, we summarize and discuss the results.
§ METHODS
§.§ Discrete dislocation dynamics
Throughout this work, we use the DDD code described in Weygand_2002_a, Weygand_2001_a to generate dislocation microstructures similar to the ones of Motz_2009_a, Stricker_2018_a. Material parameters for fcc aluminum are used: the lattice constant is 0.4045 nm, the shear modulus is 27 GPa, and Poisson's ratio is 0.347. The cuboid-shaped simulation box with free surfaces has a volume of 5 × 5 × 5 μm³. Its axes align with the crystallographic axes of the material, i.e., the x-axis is parallel to [100], the y-axis to [010], and the z-axis to [001]. The initial dislocation microstructure consists of dislocation loops with randomly selected radii between 28 such that their centers are in a volume that is four times the size of the simulation box. This way, the simulation box comprises whole loops as well as segments that end at its surfaces. Dislocations are uniformly drawn from all 12 possible slip systems. Subsequently, the system is allowed to relax, i.e., we evolve it in time without applied external load until an equilibrium dislocation structure is reached. This process is tuned such that after the relaxation, the total dislocation density of each initial dislocation microstructure is close to 1.15 × 10^13 m^-2. We generated 10 realizations.
An example of the initial dislocation microstructure is shown in the left column of <ref>; further details can also be found in steinberger_thesis. We then perform tensile tests with these initial dislocation microstructures along the y-axis twice for each realization, once with cross-slip and once without. Displacements are prescribed at the top surface in the positive y direction with a strain rate of 5000 s^-1. The bottom surface is fixed (u = 0). Snapshots of the dislocation microstructure are saved periodically during loading and subsequent unloading: at a strain of about 0.6, we stop the tensile test and allow the dislocation microstructure to relax without external load. Examples of dislocation microstructures at maximum strain and after relaxation are shown in the center and right columns of <ref>, respectively. The top row shows microstructures with cross-slip allowed, the bottom without.
§.§ Discrete-to-continuous method
Within the discrete-to-continuous (D2C) method Sandfeld_2015_c, Steinberger_2016_b, we treat each dislocation as a parameterized directed curve 𝒞(t), where t ∈ [a, b] denotes the parametrization and a and b correspond to the start and end points of the dislocation. In addition to the spatial location of all points of the curves in space, we associate each curve with the Burgers vector of the dislocation that it represents. Treating dislocations as curves then allows us to conveniently compute quantities such as the tangent vector

ξ̂(t) = 𝒞'(t) / ‖𝒞'(t)‖

and the unsigned curvature

k(t) = √( ‖𝒞'(t)‖² ‖𝒞''(t)‖² − (𝒞'(t) · 𝒞''(t))² ) / ‖𝒞'(t)‖³,

where 𝒞'(t) and 𝒞''(t) denote the first and second derivatives of 𝒞(t) with respect to t. To compute dislocation density fields ∙, we first discretize the domain Ω into n(𝒱) subvolumes Ω_i. Within a subvolume, we may compute the quantity of interest via

∙_Ω_i = (1/V_Ω_i) ∑_𝒞 ∫_{𝒞 ∈ Ω_i} f_𝒞^∙(t) ‖𝒞'(t)‖ dt,

where V_Ω_i denotes the volume of the subdomain Ω_i, and f_𝒞(t) denotes a function whose expression depends on the continuum field ∙ to be computed. For example, if we use f_𝒞(t) = 1, we obtain the total dislocation density ρ^(0), and with f_𝒞(t) = k_𝒞(t) we obtain the curvature density, denoted by q^(0). For more details, see Steinberger_2019_a.
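To make the voxel averaging of the D2C method concrete, the following minimal Python sketch coarse-grains polyline-sampled dislocation curves into a total density field ρ^(0) and a curvature density field q^(0). The polyline representation, the turning-angle curvature estimate and all function and variable names are illustrative assumptions on our part, not the original D2C implementation.

import numpy as np

def d2c_fields(curves, box=5.0, nvox=16):
    """Coarse-grain polyline dislocations into voxel fields (a sketch).

    curves : list of (N_i, 3) arrays, points sampled along each dislocation
    box    : edge length of the cubic domain
    nvox   : number of voxels along each axis

    Returns the total density rho0 (line length per volume) and the
    curvature density q0, each of shape (nvox, nvox, nvox).
    """
    h = box / nvox                              # voxel edge length
    vvol = h**3                                 # voxel volume V_{Omega_i}
    rho0 = np.zeros((nvox, nvox, nvox))
    q0 = np.zeros_like(rho0)
    for pts in curves:
        seg = np.diff(pts, axis=0)              # segment vectors ~ C'(t) dt
        slen = np.linalg.norm(seg, axis=1)      # ~ |C'(t)| dt
        mid = 0.5 * (pts[:-1] + pts[1:])        # midpoints locate segments
        # discrete unsigned curvature from the turning angle between
        # neighbouring segments, k ~ dtheta / ds (zero at the two ends)
        t = seg / slen[:, None]
        cosang = np.clip((t[:-1] * t[1:]).sum(axis=1), -1.0, 1.0)
        dtheta = np.arccos(cosang)
        ds = 0.5 * (slen[:-1] + slen[1:])
        k = np.zeros(len(seg))
        k[1:] = dtheta / ds                     # attach angle to 2nd segment
        idx = np.clip((mid / h).astype(int), 0, nvox - 1)
        for (i, j, l), L, kk in zip(idx, slen, k):
            rho0[i, j, l] += L / vvol           # f = 1  -> total density
            q0[i, j, l] += kk * L / vvol        # f = k  -> curvature density
    return rho0, q0

# toy usage: one circular loop of radius 1 centred in the box
phi = np.linspace(0.0, 2.0 * np.pi, 200)
loop = np.stack([2.5 + np.cos(phi), 2.5 + np.sin(phi),
                 np.full_like(phi, 2.5)], axis=1)
rho0, q0 = d2c_fields([loop], box=5.0, nvox=8)
print(rho0.sum() * (5.0 / 8) ** 3)  # ~ 2*pi, the total line length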
§.§ Averaging and comparing dislocation microstructures
Discretizing dislocation microstructures within equal domains using a fixed discretization scheme allows us to average and compare dislocation microstructures quantitatively. The comparison of dislocation microstructures is carried out on two levels: field values in subvolumes and whole simulations. First, we introduce a measure of deviation between the field values in a subvolume. For the comparison of scalar fields we use the absolute difference

D^Ω_i(∙_Ω_i, ∘_Ω_i) = |∘_Ω_i − ∙_Ω_i|,

where ∙_Ω_i and ∘_Ω_i represent the fields from two separate dislocation microstructures (∙ and ∘) within a subvolume. For the comparison of two whole simulations (domains), we use the weighted average absolute difference

D^Ω(∙, ∘) = (1/V_Ω) ∑_{i=1}^{n(𝒱)} D^Ω_i(∙_Ω_i, ∘_Ω_i) V_Ω_i,

where V_Ω denotes the volume of the domain, and V_Ω_i is the volume of a subvolume as introduced earlier. A set of fields ∙ (either the total dislocation density or the curvature in this study) of dislocation microstructures 𝒮 = {∙^1, ∙^2, …, ∙^{n(𝒮)}} is extracted using the D2C method. The average value of a field within a given subvolume Ω_i is then computed via

⟨∙⟩_Ω_i^𝒮 = (1/n(𝒮)) ∑_{j=1}^{n(𝒮)} ∙_Ω_i^j.

The mean absolute deviation (MAD) is used as a measure of how different the dislocation microstructures within a set are from each other in terms of a selected field variable:

MAD(𝒮) = (1/n(𝒮)) ∑_{j=1}^{n(𝒮)} D^Ω(∙^j, ⟨∙⟩^𝒮).

Lower values indicate higher similarity within a set. But a comparison of values between different sets of dislocation microstructures is not meaningful, as their average values of, e.g., the total dislocation density might be very different. To enable a comparison across microstructures, we first compute the domain average

⟨∙⟩^Ω = (1/V_Ω) ∑_{i=1}^{n(𝒱)} ∙_Ω_i V_Ω_i

of our field quantity of interest and use it to compute the unitless coefficient of variation (CV) of the mean absolute deviation,

CV(MAD(𝒮)) = MAD(𝒮) / ⟨⟨∙⟩_Ω_i^𝒮⟩^Ω.

This measure of dispersion around the average can then be used to compare the deviation of the microstructures of one set with that of another set.
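The comparison measures above reduce to a few lines of array code. The sketch below assumes equally sized voxels, so the volume weights in D^Ω and ⟨∙⟩^Ω cancel; the function names and the random test data are our illustrative choices.

import numpy as np

def domain_difference(a, b):
    """Weighted average absolute difference D^Omega for two voxel fields
    of equal shape; with uniform voxels the volume weights cancel."""
    return np.mean(np.abs(b - a))

def mad_and_cv(fields):
    """Mean absolute deviation of a set of voxel fields and its
    coefficient of variation (normalized by the grand domain average)."""
    fields = np.asarray(fields)          # shape: (n_samples, nx, ny, nz)
    set_mean = fields.mean(axis=0)       # <.>_{Omega_i}^{S}, per voxel
    mad = np.mean([domain_difference(f, set_mean) for f in fields])
    cv = mad / set_mean.mean()           # divide by the domain average
    return mad, cv

# toy usage on random "densities" for 10 realizations on an 8^3 grid
rng = np.random.default_rng(0)
fields = rng.gamma(2.0, 1.0, size=(10, 8, 8, 8))
print(mad_and_cv(fields))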
§ RESULTS
The averaged tensile stress and the averaged dislocation density over the plastic strain in the y-direction for both simulation sets, with and without cross-slip, are shown in <ref>. The evolution of the average stress in <ref> can be divided into three phases. In the first phase, the average stress shows the same steep increase for the cases with and without cross-slip; this is mainly elastic deformation with first indications of microplasticity. In the second phase, the average stress increases linearly in both cases, but the slope is higher without cross-slip. The mismatch of the maximum plastic strain between the two cases is due to the adaptive time steps used in the simulations. The third phase is the relaxation phase, where the stress drops to zero due to the removal of the load, followed by a small decrease in the plastic strain due to the Bauschinger effect. These phases can also be tracked in <ref>: first, both cases show a similar increase in the average total density; then a stronger increase is observed in the case with cross-slip. Finally, the density decreases in both cases during the relaxation phase. The average evolution of the total dislocation density ρ_Ω^(0) of the whole domain Ω for both with and without cross-slip is shown in <ref>. During the initial 0.2, all microstructures show the same small increase in total dislocation density, irrespective of cross-slip. After this initial stage, the total dislocation density increases more rapidly with cross-slip during loading than without. Upon releasing the external load after 1.3, the dislocation density initially drops sharply and then continues to decrease, but at a much slower rate. For a simpler comparison of the decrease in density during relaxation, we show the average total dislocation density normalized by the maximum density as a function of time, starting at the time of the external load release, in <ref>. The initial relative decrease in the average total dislocation density is comparable for all simulations. However, after about 0.1 we observe that the absence of cross-slip leads to a larger decrease in the relative total dislocation density compared to the case with cross-slip. While the former drops by about 15%, the latter only decreases by a little more than 10%.
Features of the dislocation microstructures: In the initial state, we observe the highest concentration of dislocations in the center of the xz-plane, with a noticeable drop-off within about 1 μm distance to the surface. Dislocations align approximately perpendicular to the surface. At maximum strain, the total density of the microstructure strongly depends on cross-slip. In the case without cross-slip, we observe an increase of the density in the center portion of the xz-plane; the depletion close to the free surface is similar to the initial microstructure. With cross-slip, we see an even stronger increase of the density in the center of the xz-plane as well as a higher dislocation density closer to the free surfaces. Upon relaxation, we observe a depletion of dislocations near the free surfaces in both cases. However, with cross-slip a denser dislocation structure can be observed in surface-near regions compared to the case without cross-slip.
§ DISCUSSION
§.§ Influence of cross-slip on the dislocation microstructure
The prominent differences between dislocation microstructures with and without cross-slip are higher total dislocation densities and dislocations present closer to the open surfaces. While the former is evident from <ref> and in line with similar numerical experiments Motz_2009_a, Zhou_2010_a, Hussein_2015_a, we have only shown one realization per set in <ref>. By performing ensemble averages of each set via <ref>, we investigate whether this is a general feature of cross-slip or merely an outlier of the shown realizations. The average total dislocation density for each set of dislocation microstructures, further averaged along the y-direction, is shown in <ref>. We can see that the higher probability of dislocations being present closer to the surface is a consistent feature of the set of simulations with cross-slip. The reason for this is the two-fold nature of cross-slip with regard to dislocation motion. On the one hand, cross-slip enables dislocations to move on a second glide plane in three dimensions instead of being confined to their glide plane. On the other hand, the motion of the part of the dislocation where cross-slip occurred is confined to the intersection line of the two slip planes on which it took place. Therefore, the dislocation is restricted by the intersection of the primary and the cross-slip plane and is only able to move in one dimension at the cross-slip site. These two contributions result in more space for dislocations to evolve but limited mobility; hence the dislocation density at the surface is stabilized. This is beyond a simple scaling with the total dislocation density, as discussed later by evaluating the relative dislocation density evolution in the regions close to the surface and within the center region. As the dislocations move, they act as obstacles for other dislocations and may also form junctions that restrict dislocation motion. To overcome these obstacles, dislocations can cross-slip onto other slip planes. And while some segments of the dislocation might be able to move on their new slip plane, the segment connected to the primary dislocation is not able to overcome other obstacles in this manner due to its reduced degrees of freedom. This effectively means that cross-slip adds a degree of freedom to the motion of dislocations overall, at the expense of limiting the motion of the parts of the dislocation where the cross-slip originated, and thereby it also stabilizes the structure. In combination, more space is available for dislocations to move, with potentially more obstacles to get stuck at, even in the presence of attractive image forces due to free surfaces. Therefore, cross-slip stabilizes dislocation densities close to surfaces.
In combination with <ref>, we conclude that the stabilizing characteristics of cross-slip affect the subsequent relaxation, because the average relative decrease in the total dislocation density is smaller for realizations with cross-slip. We now address how the change in density is spatially distributed within the specimens. <ref> shows the average relative change in the total dislocation density for the two sets of realizations between the initial state and the maximum strain state, as well as between the maximum strain state and the subsequent relaxed state. With cross-slip, the relative increase is higher closer to the free surfaces. Without cross-slip, we see an increase in the center and in the regions close to the open surfaces of the sample, while there is a decrease in the total dislocation density at the open surfaces. During unloading, the largest relative decreases in the total dislocation density are observed close to the surfaces. With cross-slip, the difference between the decrease close to the surfaces and the one in the center of the sample is smaller than for realizations without cross-slip. Hence, we conclude that this further confirms the stabilizing effect of cross-slip.
§.§ Similarity of dislocation microstructures across sets
We show the relative mean absolute difference between the two dislocation microstructure sets for the total density over time for different spatial discretizations in <ref>. The data points in <ref> are calculated as follows:

Δρ^(0)(t) = ⟨⟨ |ρ^(0)_𝒮_0,j(t) − ρ^(0)_𝒮_1,j(t)| ⟩_Ω_i^𝒮 ⟩^Ω / ⟨⟨ ρ^(0)_𝒮_0, 𝒮_1(t) ⟩_Ω_i^𝒮 ⟩^Ω,

where ⟨∙⟩_Ω_i^𝒮 and ⟨∙⟩^Ω are defined in <ref> and <ref>, respectively. 𝒮_0,j and 𝒮_1,j indicate the specimen pairs, which have the same initial microstructures, from the set of samples without cross-slip (𝒮_0) and with cross-slip (𝒮_1). The denominator normalizes the difference by the average value computed over the density values of the subvolumes of all specimens. Finally, Δρ^(0)(t) is the relative mean absolute difference between the two dislocation microstructure sets for the total density at a given time step and discretization resolution. Initially, there is no difference, as we start from the exact same microstructures. Upon loading and deformation of the sample, we observe an increase in the relative mean absolute difference that strongly depends on the spatial discretization. Higher resolutions are more suitable for revealing the increasing differences between the structures with and without cross-slip, since microstructures discretized at higher resolutions contain more information about the actual location of the dislocations. The largest spread of the difference is observed at about 0.2, which is around the end of the elastic regime. Afterwards, the magnitude of the relative mean absolute difference between the spatial discretizations decreases again. From this we may conclude that the initial differences observed are primarily on a rather short length scale and are therefore only seen at high resolutions. As the plastic strain accumulates, the changes in the topology of the dislocation microstructure cover a larger length scale, and we observe an increase in the difference for coarse spatial resolutions as well. At about 1.3 a spike occurs. This coincides with the unloading of the realizations without cross-slip. While these realizations exhibit a rather severe and fast change in the dislocation microstructure, the realizations with cross-slip show a more stable behavior.
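The across-set measure Δρ^(0)(t) of the preceding equation can be sketched as follows, again assuming uniform voxels and paired realizations stored as arrays; the helper name and the toy data merely illustrate the call signature and are not the authors' code.

import numpy as np

def relative_mean_abs_difference(set0, set1):
    """Relative mean absolute difference between paired voxel fields.

    set0, set1 : arrays of shape (n_pairs, nx, ny, nz) holding the total
    density fields of paired realizations (same initial microstructure)
    without / with cross-slip at one time step.
    """
    set0, set1 = np.asarray(set0), np.asarray(set1)
    numerator = np.abs(set0 - set1).mean()       # << |rho_S0 - rho_S1| >>
    denominator = np.concatenate([set0, set1]).mean()
    return numerator / denominator

# toy usage: 5 paired realizations on a 16^3 grid at one snapshot
rng = np.random.default_rng(1)
base = rng.gamma(2.0, 1.0, size=(5, 16, 16, 16))
with_cs = base * rng.normal(1.0, 0.05, size=base.shape)
print(relative_mean_abs_difference(base, with_cs))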
§.§ Similarity of dislocation microstructures within each set
We know from the previous discussion that differences may manifest on different length scales. Thus, to study the similarity of realizations within each set, we show the coefficient of variation (CV) of the mean absolute deviation of the total dislocation densities for different spatial discretizations in <ref>. The first thing to note is that the CV values depend strongly on the discretization. Discretizations with higher spatial resolution show a larger dissimilarity overall. This stems from coarser discretizations averaging over many more microstructure features, with less sensitivity to their actual position in space. In contrast, more finely discretized microstructures are more sensitive to the positions of the dislocations and therefore appear less similar to each other when the dislocations do not match up closely between different realizations. Irrespective of the discretization, the dislocation microstructure sets exhibit the same coefficient of variation of the mean absolute deviation in the beginning. This is due to the fact that the realizations use the exact same initial structures and require some “incubation” time of mainly elastic deformation until more and more dislocations start to move. After about 0.1, the samples' behavior starts to differ with the onset of plastic deformation. For 1 voxel along each direction, the realizations without cross-slip are more similar to each other than the ones with cross-slip. The exception at around 1.3 comes from the staggered onset of the relaxation sequence for the realizations without cross-slip. For higher spatial resolutions, this trend is reversed and the dislocation microstructures with cross-slip are more similar to each other than the ones without. The evolution of the coefficient of variation of the mean absolute deviation for discretizations using 1 to 34 voxels along each direction is shown in <ref>. Darker colors indicate higher similarity between the dislocation microstructures of a set. We observe the previously mentioned trend that a higher resolution results in a larger dissimilarity. During the tensile test, we notice a tendency of the microstructures to become more similar to each other as the tensile test progresses, particularly for discretizations using more than 15 voxels along each direction. This trend is more pronounced for simulations with cross-slip. We conclude that the spatial arrangement of the dislocation microstructure becomes increasingly similar over the course of the tensile test. Assuming that there are relatively stable and/or favorable dislocation configurations that form during loading, the inclusion of cross-slip as a degree of freedom explains why the similarity between realizations with cross-slip increases more rapidly than that of the realizations without cross-slip.
§.§ Probability density functions for total density and line curvature
While the total density (where the average is taken by assuming only one voxel) is a field variable that is commonly used for analyzing dislocation data, the lines' curvatures are often not assessed in analyses. With the D2C framework this can easily be done: the mean curvature can be computed directly from the curvature density and the total density, k = q^(0)/ρ^(0). <Ref> shows the probability density functions of the total dislocation density, ρ^(0), and the curvature, k.
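Computing the mean curvature field from the two D2C densities is a voxel-wise division; the only subtlety is masking voxels without any dislocation line, as in the sketch below (building on the earlier D2C sketch; the guard value and names are our choices).

import numpy as np

def mean_curvature_field(rho0, q0, eps=1e-12):
    """Voxel-wise mean curvature k = q0 / rho0.

    Voxels that contain no dislocation line (rho0 ~ 0) are masked out,
    since the ratio is undefined there."""
    k = np.full_like(rho0, np.nan)
    occupied = rho0 > eps
    k[occupied] = q0[occupied] / rho0[occupied]
    return k

# usage with the fields from the earlier D2C sketch:
# k = mean_curvature_field(rho0, q0)  # ~1 everywhere for the unit-radius loop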
The quantities are averaged over multiple realizations, and their probability density functions are then calculated via kernel density estimation. When cross-slip is activated, we observe a more even distribution of the total dislocation density, as the probability of low densities increases significantly in the case without cross-slip. However, the probabilities of the normalized total dislocation densities of the cases with and without cross-slip become closer in terms of their values. In the distributions of the curvature, the difference between the cases with and without cross-slip is more pronounced than in the total dislocation density distributions. Moreover, when the curvature values are normalized, the difference between the two cross-slip cases becomes even larger in terms of the values of the probabilities, which contrasts with what is observed when the total dislocation density is normalized. For the simulations with cross-slip, a pronounced peak of the probability density at curvature values of about 1.2⟨k⟩ becomes clearly visible. This seems to be an aspect to consider for initial field values of DDD or CDD simulations. Furthermore, it might be a way of “testing” whether a simulation was run with cross-slip enabled or not.
§.§ Effects of discretization on total density and line curvature distributions
We further investigate how the distributions of the total density and line curvature fields over the domain change for different discretizations. In <ref>, the probability density functions of the normalized total dislocation density for different discretizations (8, 16, 24 and 32 voxels along each axis) are shown. The comparison of the distributions for the discretizations with 16, 32, 40, 64 voxels along each axis is provided in <ref>. Probability densities are calculated by kernel density estimation, and we used the improved Sheather-Jones algorithm botev2010kernel to determine the optimum bandwidth values, so as to represent the distributions accurately. Through this, we plotted the distributions as if they were represented by histograms with a very high number of small bins.
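As an illustration of this KDE step, the sketch below uses the FFTKDE class of the KDEpy package, which offers the improved Sheather-Jones ("ISJ") bandwidth selector; the choice of this particular package, and the normalization by the field mean, are our assumptions rather than the authors' stated tooling.

import numpy as np
from KDEpy import FFTKDE  # assumed package; provides the ISJ bandwidth

def density_pdf(field, n_grid=2**10):
    """KDE of the voxel-value distribution of a coarse-grained field,
    normalized by its mean, with the improved Sheather-Jones bandwidth
    of botev2010kernel."""
    values = field.ravel() / field.mean()       # normalized rho or k values
    x, y = FFTKDE(kernel="gaussian", bw="ISJ").fit(values).evaluate(n_grid)
    return x, y

# usage with a coarse-grained density field `rho0` from the D2C sketch:
# x, y = density_pdf(rho0)  # y integrates to ~1 over x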
The first notable observation is the occurrence of equally spaced fluctuations, or oscillations, in the distributions for each type of specimen (with and without cross-slip) and state of the simulation (maximum strain and relaxed) when the specimens are discretized at higher resolutions. The formation of the peaks in the total density distributions is due to geometrical reasons rather than physical dislocation behavior or the crystallographic orientation of the specimen: as we increase the resolution of the discretization, we approach a discrete representation of the dislocation lines. Considering this together with the large number of straight, diagonal lines in the simulation volume, the result is a high number of voxels that contain the same total line length. We also notice that the spacing between the peak points is inversely proportional to the volume of the voxels for each case in <ref>. This means that the distribution of the total dislocation length in the voxels has peak points at the same values for all discretizations, as shown in <ref>. (See <ref> for the distributions in the case of discretizations with 16, 32, 40, 64 voxels along each axis.) The geometrical reasons behind the formation of the peaks are illustrated for simple two-dimensional geometries in <ref>. In <ref>, as the discretization becomes finer, the number of subvolumes that contain the same line length (the subvolumes with thicker edges) increases. This leads to the formation of local peak values in the distribution plots. Apart from this, the discretization of one line results in paired subvolumes with respect to the contained length (cf. the pixels highlighted in light and dark gray in <ref>). This leads to the formation of multiple peak points in the distributions. For the same level of discretization, more pronounced fluctuations are observed without cross-slip. In other words, the total density fields of the specimens with cross-slip are less sensitive to the discretization compared to the ones without cross-slip. A lower sensitivity to the discretization is desirable for continuous field data, since we can access more information by increasing the resolution before the discrete information starts to dominate the distribution. The difference in sensitivity between the cases with and without cross-slip is another result of the stabilizing effect of cross-slip, which was already mentioned in the previous analyses. For both specimens, we do not observe a significant difference between the distributions of the maximum strain state and the relaxed state. In addition, the observation of strong fluctuations at resolutions finer than 16 voxels for the case without cross-slip and 24 voxels for the case including cross-slip is in line with the conclusions in Steinberger_2019_a, where the authors proposed the average dislocation spacing as a lower limit for the voxel size based on physical considerations. The reason is that for both numbers of voxels the voxel sizes are smaller than the average dislocation spacing. Although we capture more differences between microstructures at very high resolutions, domains discretized with a voxel size larger than the average dislocation spacing are not greatly affected by the geometrically induced distortions in the distributions. To show the effect of the discretization over a larger range of resolutions, we calculated cumulative density functions to improve readability. In <ref>, the cumulative distributions are shown for 18 different discretizations. The step-function-like shape of the distributions corresponds to the fluctuations in the probability density functions, and the steps are less pronounced with cross-slip. In <ref>, we further show cumulative density distributions for the line lengths. There, fluctuations are observed at the same values of the line length for each discretization. This is consistent with the observation of the spikes which occur at the same values for each discretization in the probability density distributions of the normalized line length values. We repeat the previous analysis for the normalized line curvatures, which are obtained by averaging over multiple realizations; the comparison of these distributions for 18 different discretization levels is shown in <ref>. We can see the effect of cross-slip on the curvature distributions in the relaxed states of the specimens: the peak is closer to the median of the data sample in the case with cross-slip. The effect on the shapes of the curves is demonstrated in <ref> by the skewness values of the distributions. Hence, the deduction from section <ref> that cross-slip has an effect on the curvature distribution shape also holds for different discretizations.
Similar to our observations for the total density distributions, we do not observe a significant change in the curvature distributions after the load is removed at the maximum strain state, both for the specimen with cross-slip and for the one without.
§ CONCLUSION
We studied the impact of cross-slip on the evolution of dislocation microstructures, obtained from DDD simulations, using continuum field variables computed with the D2C method. We found that cross-slip leads to more homogeneous and stable dislocation microstructures, within which dislocations are able to remain stable closer to free surfaces. Cross-slip also results in more similar dislocation microstructures. These findings indicate that the disadvantage of the larger computational complexity when including cross-slip in DDD simulations might be offset by requiring fewer realizations to capture statistical aspects of dislocation microstructures. The finding that the dislocation microstructure close to surfaces changes significantly is important for analyzing dislocations experimentally via non-destructive methods: such observations are influenced by the surface, and any conclusions drawn from them about the impact of dislocation arrangements must be made carefully. In order to provide better heuristics on the actual changes near the surface, however, we would have to perform further analyses that consider only surface-near regions and take the alignment of the surfaces with respect to the loading direction into account, which is beyond the scope of the work presented here. Our analyses of the effects of the discretization resolution show that domains discretized at high resolutions contain too much discrete information, which is not in line with the intention of obtaining continuous field data. We observed a lower bound for the resolution from the dislocation density distributions: a voxel length should not be smaller than the average dislocation spacing. In addition, cross-slip has an impact on the formation of the peaks, as they are already present at coarser discretizations when cross-slip is not activated. This is simply due to the fact that in the microstructures with cross-slip the total dislocation density increases more, resulting in a smaller average dislocation spacing, which in turn allows smaller voxel sizes. The fluctuations are important geometrical artifacts that have to be considered in continuum field data calculations, as they can lead to misjudgments in further analyses and to hidden calculation errors in CDD simulations. From a broader perspective, the outcomes of our study are: * We established a method of descriptive statistics for systems of curved dislocations. By using the CDD field variables as descriptors for certain microstructural aspects, we indirectly leverage the fact that CDD is based on a statistical coarse graining of systems of discrete dislocations. In other words, our descriptors are strongly based on physics, which makes them easily interpretable. * We demonstrated that with these descriptors it is possible to investigate and to discuss situations that otherwise can only be approached by eyeballing or through very coarse measures. In particular, the fact that curved dislocations behave entirely differently from straight dislocations (one of the reasons why 3D DDD is such an important method) could so far not be properly accounted for or leveraged.
* All of this is a step towards parameterizing and validating continuum simulation methods, ranging from gradient-based models up to multislip CDD methods of various complexity. Furthermore, in the context of continuum model development, we now have a methodology that helps us to infer how, e.g., additional terms concerning dislocation multiplication could be included. This, however, is still a significant undertaking, which cannot be presented in the present manuscript either. * Last but not least, a similar analysis can also be done with experimental data (at least to some extent). For example, in zhang2022data, we extracted the dislocation geometry from in-situ TEM experiments, converted it using D2C and used in particular the curvature to understand details of the “energy landscape” of high-entropy alloys. Thus, D2C can also be seen as a tool to bring experiments and simulations closer together. Furthermore, the D2C method has the potential to be extended to provide information on the mechanical fields of a microstructure in the simulations; this is, however, out of the scope of the current study.
§ ACKNOWLEDGEMENTS
AD, DS and SS acknowledge funding from the European Research Council Starting Grant, “A Multiscale Dislocation Language for Data-Driven Materials Science”, ERC Grant Agreement No. 759419 MuDiLingo.
§ DATA AND CODE AVAILABILITY
The data that support the findings of this study will be openly available following an embargo at the following DOI: <10.5281/zenodo.7788934>
§ DECLARATIONS
On behalf of all authors, the corresponding author states that there is no conflict of interest.
§ TOTAL DISLOCATION DENSITY FIELDS
§ PROBABILITY DENSITY DISTRIBUTIONS OF TOTAL DISLOCATION DENSITY
§ CURVATURE DENSITY DISTRIBUTION SKEWNESS
http://arxiv.org/abs/2307.04035v1
20230708191401
A novel framework for Shot number minimization in Quantum Variational Algorithms
[ "Seyed Sajad Kahani", "Amin Nobakhti" ]
quant-ph
[ "quant-ph" ]
Variational Quantum Algorithms (VQAs) have gained significant attention as a potential solution for various quantum computing applications in the near term. However, implementing these algorithms on quantum devices often necessitates a substantial number of measurements, resulting in time-consuming and resource-intensive processes. This paper presents a generalized framework for optimization algorithms aiming to reduce the number of shot evaluations in VQAs. The proposed framework combines an estimator and an optimizer. We investigate two specific case studies within this framework. In the first case, we pair a sample mean estimator with a simulated annealing optimizer, while in the second case, we combine a recursive estimator with a gradient descent optimizer. In both instances, we demonstrate that our proposed approach yields notable performance enhancements compared to conventional methods.
§ INTRODUCTION
Variational Quantum Algorithms <cit.> have emerged as a promising solution for near-term applications of quantum computers. These versatile algorithms offer the capability to tackle a diverse range of complex problems, including but not limited to quantum chemistry <cit.>, combinatorial optimization <cit.>, and machine learning <cit.>. Despite their potential for near-term applications, variational algorithms often require a large number of measurements. This makes the implementation of these algorithms on quantum devices extremely time- and resource-intensive <cit.>, even when performed on shallow and low-width circuits. Various research efforts have sought to employ optimizers to reduce the computational burden of VQAs. These include the application of both existing and novel optimization techniques <cit.>. Such approaches are related to a well-studied and rich literature on the optimization of noisy functions in various fields such as signal processing and control theory (see for example <cit.> and <cit.>). Sweke et al. <cit.> introduced a quantum stochastic gradient descent optimizer that relies on a gradient estimator with a limited number of shots. They proved that, under some simplifying assumptions, this approach converges to the optimal values. However, the convergence rate depends on the error of the estimator. In another study, Polloreno et al. <cit.> studied the robustness of a double simulated annealing optimizer against inherent quantum noise, even when only a few shots are available and the noise is noticeable. Another approach to this problem has been to employ a nested optimization framework in which a high-level optimizer is used to improve the performance of a low-level optimizer by tuning its parameters. For example, Tamiya et al. <cit.> employed Bayesian optimization on stochastic measurement results to determine the optimal step size through a line search.
Inspired by stochastic gradient descent, this method incorporates an adaptive shot technique to reduce the number of measurements required during the line search. Similarly, Mueller et al. <cit.> proposed a technique to identify a suitable set of initial values using Gaussian processes; subsequently, they utilized ImFil as the optimizer in their approach. In this work we propose a generalized framework for optimization algorithms which seek to reduce shot-number evaluations in VQAs. The key performance-improving novelties in our approach are twofold. First, we devise a framework that incorporates powerful estimation techniques to achieve near-true parameter estimates with far fewer data samples. Second, by utilizing a sensitivity analysis of the optimizers, we ensure that the error levels of the estimators (and, as a result, the numbers of shots) are suitably chosen. This is made possible by breaking the problem into two separate estimation and optimization problems, and by deriving theoretical results on the sufficient number of shots. We explore two specific case studies within this framework. For the first case, a sample mean estimator is paired with a simulated annealing optimizer, and in the second case, a recursive estimator is paired with a gradient descent optimizer. The remainder of the paper is organized as follows: in Section <ref>, background material, including quantum variational circuits and estimation theory, is presented; in Section <ref> we develop the proposed error control strategy and discuss the resulting optimization framework; in Section <ref> we present two case studies together with numerical results; finally, in Section <ref>, we conclude our work.
§ BASIC CONCEPTS
§.§ Quantum Variational Algorithms
In the theory of quantum variational algorithms, the central quantity is the expected value of an observable O over the state generated by applying a parameterized quantum circuit U(θ) to the initial state |0⟩. This value is passed to a cost function 𝒞, which is to be minimized with respect to the parameters θ ∈ ℝ^m. Accordingly, the class of algorithms such as VQE, QAOA and QNN can be formulated as <cit.>

θ^* = argmin_{θ ∈ ℝ^m} 𝒞( ⟨0| U(θ)^† O U(θ) |0⟩ ).

Specific details of these algorithms are available in <cit.>. Here we would like to focus on the underlying operation of these algorithms. Let

f^{U,O}(θ) = ⟨0| U(θ)^† O U(θ) |0⟩,

in which U and O may be omitted when the discussion is not related to a specific choice of U and O. One of the simplest and most widely used parameter-shift rules to compute the derivatives of f is given in Lemma <ref>.

[Parameter-shift rule <cit.>] Under the circumstance that the dependence of f on each parameter θ_k is of the form e^{iθ_k P_k}, where P_k is a Pauli operator, we have

∂_k f(θ) = ( f(θ + e_k π/2) − f(θ − e_k π/2) ) / 2,

where ∂_k denotes the partial derivative with respect to θ_k and e_k is the unit vector with 1 in the k-th position and 0 elsewhere.

Lemma <ref> is not only useful for calculating the derivative of f; it can also be used to bound higher derivatives of f, as shown in Lemma <ref>.

For any θ ∈ ℝ^m, we have

‖Hess f‖_2 ≤ m ‖O‖_2.

From the definition we know that |f(θ)| ≤ ‖O‖_2 for all θ ∈ ℝ^m. For any i and j there always exist some values θ_1, θ_2, θ_3, θ_4 for which

(Hess f)_{ij} = ( f(θ_1) − f(θ_2) − f(θ_3) + f(θ_4) ) / 4 ≤ ‖O‖_2.

Accordingly, ‖Hess f‖_2 ≤ m ‖O‖_2.
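As a concrete illustration of Lemma <ref>, the sketch below checks the shift rule on a single-qubit expectation whose closed-form value is cos θ; the sampling helper that mimics finite-shot estimates is our own construction, not part of the original formulation.

import numpy as np

def expval_rx(theta, shots=None, rng=None):
    """<0| Rx(theta)^dag Z Rx(theta) |0> = cos(theta).  With `shots` set,
    single-shot +/-1 outcomes are sampled to mimic a measured estimate."""
    f = np.cos(theta)
    if shots is None:
        return f
    p_plus = (1.0 + f) / 2.0                    # Born probability of +1
    rng = rng or np.random.default_rng()
    outcomes = rng.choice([1.0, -1.0], size=shots, p=[p_plus, 1.0 - p_plus])
    return outcomes.mean()

def parameter_shift_grad(f, theta, k, shots=None, rng=None):
    """Shift rule of Lemma 1: (f(theta + pi/2 e_k) - f(theta - pi/2 e_k)) / 2."""
    shift = np.zeros_like(theta)
    shift[k] = np.pi / 2
    return (f(theta + shift, shots, rng) - f(theta - shift, shots, rng)) / 2

theta = np.array([0.7])
g = lambda th, s, r: expval_rx(th[0], s, r)
print(parameter_shift_grad(g, theta, 0))             # exact: -sin(0.7)
print(parameter_shift_grad(g, theta, 0, shots=200))  # noisy finite-shot estimate
print(-np.sin(0.7))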
§.§ Estimation and Error Analysis
Contrary to the simple definition of f^{U,O}, evaluating such an expected value at each sample point may involve measurements with respect to ℓ different bases. Accordingly, the observable O is decomposed into ℓ observables, each of which is diagonal in a different basis:

O = ∑_{j=1}^{ℓ} V_j^† D_j V_j.

For each j, it is necessary to perform r_j repeated measurements on a quantum circuit. The l-th (out of r_j) measurement outcome is treated as a sample from a random variable χ_{j,l} ∼ X(UV_j, D_j, θ). We know that 𝔼[χ_{j,l}] = f^{UV_j, D_j}(θ), and this is the reason we typically define an estimator of f^{U,O}(θ) as follows.

A sample mean estimator for f is defined as

f̂^{U,O}(θ) = ∑_{j=1}^{ℓ} (1/r_j) ∑_{l=1}^{r_j} χ_{j,l},

and, for any of the ∂_k f,

∂̂_k f^{U,O}(θ) = ∑_{j=1}^{ℓ} [ (1/(2 r_{j+})) ∑_{l=1}^{r_{j+}} χ_{j+,l} − (1/(2 r_{j−})) ∑_{l=1}^{r_{j−}} χ_{j−,l} ],

where χ_{j+,l} ∼ X(UV_j, D_j, θ + e_k π/2) and χ_{j−,l} ∼ X(UV_j, D_j, θ − e_k π/2).

The performance of such an estimator can be bounded with the aid of Hoeffding's inequality, which provides confidence intervals for estimators of bounded random variables.

[Hoeffding's inequality <cit.>] For n independent random variables ξ_1, ξ_2, …, ξ_n with a_i ≤ ξ_i ≤ b_i for all i, and any t > 0, we have

ℙ( |∑_{i=1}^{n} ξ_i − ∑_{i=1}^{n} 𝔼[ξ_i]| ≥ t ) ≤ 2 e^{−2t² / ∑_{i=1}^{n} (b_i − a_i)²}.

Based on this, the following bounds are obtained for the MSE (mean squared error) and confidence interval (CI) of the sample mean estimator.

[Sample mean estimator bounds] Define

ϵ_f = ∑_{j=1}^{ℓ} ‖D_j‖_2² / r_j

and

ϵ_{∂_k f} = ∑_{j=1}^{ℓ} ( ‖D_j‖_2² / 4 ) ( 1/r_{j+} + 1/r_{j−} ).

When ŝ is f̂^{U,O} or ∂̂_k f^{U,O}, its error can be bounded by ϵ = ϵ_f or ϵ = ϵ_{∂_k f}, respectively, for any θ and κ > 0, as follows:

MSE[ŝ(θ)] ≤ ϵ,
ℙ( |ŝ(θ) − s(θ)| > κ√ϵ ) ≤ 2 e^{−κ²/2}.

To prove the bounds for f, we start by setting the ξ in Hoeffding's inequality to χ_{j,l}/r_j for the different j and l. They are bounded by −‖D_j‖_2/r_j ≤ χ_{j,l}/r_j ≤ ‖D_j‖_2/r_j, and it can thus be shown that

ℙ( |f̂(θ) − f(θ)| > t ) ≤ 2 e^{−2t²/(4ϵ_f)}.

It now only remains to replace t with κ√ϵ_f. From Popoviciu's inequality <cit.> we have Var[ξ_i] ≤ (b_i − a_i)²/4, which is used to bound the MSE of bounded random variables. The same results hold for the partial derivatives if we set the ξ to χ_{j±,l}/(2 r_{j±}) for the different j, l and + and − signs.

§ MAIN RESULTS
§.§ Error Control Strategy
As mentioned in the introduction, a key performance-improving novelty of our work is the means to control the error level, as well as the number of shots. This is made possible by connecting the number of shots to the error level of any estimator, via the problem below. Contrary to common estimators, which use a constant number of shots without any further analysis, we intend to find values for the r_j such that the resulting estimation error is bounded by a specified amount.

[Sufficient Number of Shots] Given an estimator ŝ, find the values of the r_j which satisfy the constraint

MSE[ŝ] ≤ E_s.

For the sample mean estimator discussed previously, solving Problem <ref> for f^{U,O} and ∂_k f^{U,O} is equivalent to the following optimization problems:

argmin_{r_j ∈ ℕ} ∑_{j=1}^{ℓ} r_j  s.t.  MSE[f̂] ≤ E_f,

argmin_{r_{j±} ∈ ℕ} ∑_{j=1}^{ℓ} (r_{j+} + r_{j−})  s.t.  MSE[∂̂_k f] ≤ E_{∂_k f}.

Optimization problems <ref> and <ref> can be approximately solved using Algorithm <ref>. The algorithm solves the optimizations by relaxing the MSE values to the bounds ϵ_f and ϵ_{∂_k f} defined in Theorem <ref> and by relaxing the r_j and r_{j±} to real values. We can easily verify the algorithm by substituting these values into the formulas of Theorem <ref> and deducing that the algorithm not only bounds the MSE but also provides a CI for the values.
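For the relaxed problem, a standard Lagrange-multiplier argument gives shot counts proportional to ‖D_j‖_2; the sketch below implements this closed-form allocation and then rounds up. The closed form is our derivation for illustration and need not coincide line by line with Algorithm <ref>.

import numpy as np

def sufficient_shots(d_norms, error_budget):
    """Shot allocation for MSE[f_hat] <= E via the relaxed problem
    min sum_j r_j  s.t.  sum_j ||D_j||^2 / r_j <= E.

    Stationarity gives r_j proportional to ||D_j||, i.e.
    r_j = ||D_j|| * sum_k ||D_k|| / E; rounding up keeps the bound valid."""
    d = np.asarray(d_norms, dtype=float)
    r = np.ceil(d * d.sum() / error_budget).astype(int)
    return np.maximum(r, 1)

# usage: three measurement bases with operator norms 1.0, 0.5, 2.0
r = sufficient_shots([1.0, 0.5, 2.0], error_budget=1e-2)
eps_f = sum(dj**2 / rj for dj, rj in zip([1.0, 0.5, 2.0], r))
print(r, eps_f)  # eps_f <= 1e-2 by construction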
§.§ Optimizing Agent
Regardless of technical detail, the function of all variational algorithms can be considered as that of an agent which interacts with a quantum computer, as shown in Figure <ref>. Such a high-level conceptualization permits the development of a unified framework for the evaluation of f, ∂_k f and higher derivatives. Most general-purpose optimizers will not aim to control the number of shots, which is often taken as a constant during the optimization. There have been attempts to develop adaptive algorithms such as <cit.>, but the scope of their application is limited. Any optimizing agent will ultimately utilize the available data by calculating a set of estimators; statistically, it is possible to reduce this to a sufficient set of estimators. For most typical optimizers, those estimates will be limited to f̂^{U,O}(θ_i) and ∂̂_k f^{U,O}(θ_i), where f^{U,O} is the function being optimized. However, by applying the sufficient-shots problem proposed earlier, it is possible to control the optimization error instead of the number of shots. In our view this is a more natural way of looking at the problem. In such an improved strategy, the optimizer is provided with the errors E_f and E_{∂_k f} instead of the r_j, and works with f̂ and ∂̂_k f instead of the raw samples χ_{j,l}. This is illustrated in Figure <ref>. For the sake of simplicity we shall henceforth refer to f^{U,O}(θ_i) and ∂_k f^{U,O}(θ_i) as f_i and ∂_k f_i, respectively. Moreover, this strategy can also be extended to the sample mean estimators f̂_i and ∂̂_k f_i defined in Definition <ref>. In the proposed framework the main problem is broken down into two separate problems: * an optimization problem over uncertain values, with a sensitivity analysis, and * an estimation problem, with the question of the sufficient number of shots for the estimator. In the proposed framework one is not limited to the sample mean estimator defined in Definition <ref> and can make use of any static or dynamic estimator. Dynamic estimators will also have an internal state, which is shown by a gray arrow in Figure <ref>. We will demonstrate the effectiveness of this approach by introducing a few examples of estimators and optimizers in the following section. For the sake of illustrating the methodology we shall make use of existing, standard and rather simple optimization and estimation techniques; a minimal sketch of the resulting estimator-optimizer interface is given below. Evidently, the eventually obtainable performance improvements can be much greater with a well-matched and individually powerful optimizer and estimator.
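The division of labor between an error-aware estimator and an optimizer can be captured in a small interface; the sketch below is one possible reading of Figure <ref> for the single-basis case, with all class and parameter names being our assumptions.

from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class ErrorAwareEstimator:
    """Estimator side of the framework: given a target MSE, it decides the
    number of shots itself (here via the sample-mean bound ||D||^2 / r)
    and returns the estimate.  `sample` mimics one measurement round."""
    sample: Callable[[np.ndarray, int], float]   # (theta, shots) -> mean
    d_norm: float = 1.0                          # ||D||_2 of the observable

    def estimate(self, theta: np.ndarray, mse_target: float) -> float:
        shots = max(1, int(np.ceil(self.d_norm**2 / mse_target)))
        return self.sample(theta, shots)

@dataclass
class ErrorAwareOptimizer:
    """Optimizer side: instead of shot counts it requests error levels,
    which would come from its own sensitivity analysis (fixed here)."""
    estimator: ErrorAwareEstimator
    mse_budget: float = 1e-2

    def step_cost(self, theta: np.ndarray) -> float:
        return self.estimator.estimate(theta, self.mse_budget)

# usage with the single-qubit sampler from the parameter-shift sketch:
# agent = ErrorAwareOptimizer(
#     ErrorAwareEstimator(lambda th, s: expval_rx(th[0], shots=s)))
# print(agent.step_cost(np.array([0.3])))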
§ CASE STUDIES
§.§ Example I: Error-Aware Simulated Annealing
A simple simulated annealing algorithm is a stochastic process that starts from a random point in the search space and iteratively moves to a new point with a transition probability P based on the function values and the temperature T_i at step i. In order to introduce the uncertainty, we only need to redefine the transition probability P̂ based on the estimator as follows:

P̂(θ_{i+1} | θ_i) = 1 if f̂_{i+1} < f̂_i, and e^{−(f̂_{i+1} − f̂_i)/T_i} otherwise.

The sensitivity can then be analyzed as follows. In order to maintain a given accuracy for P̂(θ_{i+1} | θ_i) we require

𝔼[ D_KL(P ∥ P̂) ] ≤ η,

where D_KL is the Kullback-Leibler divergence. We know that this inequality will hold if

𝔼[ |log P(θ_{i+1} | θ_i) − log P̂(θ_{i+1} | θ_i)| ] ≤ η  ∀ θ_{i+1}.

The left-hand side can be bounded using 𝔼[|x − 𝔼[x]|] ≤ √(Var[x]), the independence of f̂_{i+1} and f̂_i, and the assumption of a monotonically decreasing temperature T_{i+1} < T_i:

𝔼[ |log P(θ_{i+1} | θ_i) − log P̂(θ_{i+1} | θ_i)| ] ≤ (1/T_i) 𝔼[ |f̂_{i+1} − f̂_i − f_{i+1} + f_i| ] ≤ (1/T_i) √( Var[f̂_{i+1} − f̂_i] ) ≤ (1/T_i) √( Var[f̂_{i+1}] + Var[f̂_i] ).

Note that the estimators should be unbiased, otherwise the bound above will not hold. Finally, we introduce the condition below, which is sufficient for the inequality above and hence bounds the KL divergence by η:

Var[f̂_{i+1}] ≤ η² T_i² / 2.

This is a more efficient condition for the estimator than simply requiring Var[f̂_{i+1}] ≤ E. In order to compare the performance of simulated annealing with and without the sensitivity analysis, we conducted three experiments:
* Simple Optimizer (1): a simulated annealing optimizer with the condition Var[f̂_{i+1}] ≤ E for a high value of E.
* Simple Optimizer (2): a simulated annealing optimizer with the condition Var[f̂_{i+1}] ≤ E for a low value of E.
* Error-Aware Optimizer: a simulated annealing optimizer with Equation <ref> as the condition.
For the experimental studies, consider the benchmark problem defined in <ref>. [Benchmark problem] Assume a variational task with one qubit, U(θ) = R_x(θ) and O = Z, with 𝒞 = I, which implies ℓ = 1 and m = 1. The cost C(θ) = ⟨0| R_x^†(θ) Z R_x(θ) |0⟩ can be simplified further to cos θ. We start with an ensemble of θ values near 0 and compare the distribution of the exact value of the function f through the optimization (with respect to the number of shots used) for each optimizer. The results are shown in Figure <ref>. To highlight the difference between the distributions more clearly, we have also plotted the distribution of the data points after 7000 shots for each optimizer in Figure <ref>. Note that the error bound for the different optimizers as a function of the number of shots is shown in Figure <ref>, which is simply a visualization of condition <ref>. The results show that the error-aware simulated annealing is able to find a better solution with fewer shots.
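A hedged sketch of the error-aware annealer follows: the per-step MSE target tracks the condition Var[f̂_{i+1}] ≤ η²T_i²/2, so hot steps are evaluated cheaply and precision is only spent as the temperature drops. The cooling schedule, step size and all names are illustrative choices, not the exact experimental setup.

import numpy as np

def error_aware_sa(estimate, theta0, t0=1.0, cooling=0.95, eta=0.1,
                   n_steps=200, step_size=0.3, rng=None):
    """Simulated annealing whose per-step error target follows
    Var[f_hat] <= eta^2 * T_i^2 / 2 (the sensitivity condition)."""
    rng = rng or np.random.default_rng()
    theta, temp = np.array(theta0, dtype=float), t0
    f_cur = estimate(theta, eta**2 * temp**2 / 2)
    for _ in range(n_steps):
        cand = theta + rng.normal(0.0, step_size, size=theta.shape)
        mse_target = eta**2 * temp**2 / 2        # an error level, not shots
        f_new = estimate(cand, mse_target)
        if f_new < f_cur or rng.random() < np.exp(-(f_new - f_cur) / temp):
            theta, f_cur = cand, f_new
        temp *= cooling                          # monotonically decreasing T
    return theta

# usage with the earlier expval_rx sampler (||D||_2 = 1, so r = ceil(1/MSE)):
# est = lambda th, E: expval_rx(th[0], shots=max(1, int(np.ceil(1.0 / E))))
# print(error_aware_sa(est, [0.2]))  # should approach theta = pi (cos = -1)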
where B_i and B_∂_k, i are calculated recursively as follows, B_i = α_i(B_i-1 + ∑_k=1^m (δ*θ_i-1)_k B_∂_k, i-1 + m/2δ*θ_i-1_2^2 O_2) B_∂_k, i = β_k,i(B_∂_k, i-1 + δ*θ_i-1_2 O_2) , B_0 = 0 B_∂_k, 0 = 0, and similarly for the variance, [f̂^*_i] ≤ A^2_i [∂̂_k f^*_i] ≤ A^2_∂_k, i. Using the notation of Theorem <ref>, A^2_i = α_i^2 (A^2_i-1 + ∑_k=1^m (δ*θ_i-1)_k^2 A^2_∂_k, i-1) + (1 - α_i)^2 ϵ^2_f_i A^2_∂_k, i = β_k,i^2 A^2_∂_k, i-1 + (1 - β_k,i)^2 ϵ^2_∂_k f_i. Defining the drift term d_i = f_i-1 + δ*θ_i-1·∇ f_i-1 - f_i, we can write the bias and variance of f̂^*_i as, [f̂^*_i] = α_i ([f̂^*_i-1] + δ*θ_i-1·[∇f̂^*_i-1] + d_i) [f̂^*_i] = α_i^2 ([f̂^*_i-1] + δ*θ^2_i-1·[∇f̂^*_i-1]) + (1 - α_i)^2 [f̂_i]. In an abuse of notation, δ*θ^2_i-1 represents a vector of squared elements and [∇f̂^*_i-1] represents a vector of variances (or biases). This facilitates a more compact proof, as shall be seen. With the same objective, we define another drift term for the derivatives of f as d_∂_k, i = ∂_k f_i-1 - ∂_k f_i, which helps us write the bias and variance of ∂̂_k f^*_i as, [∂̂_k f^*_i] = β_k,i([∂̂_k f^*_i-1] + d_∂_k, i) [∂̂_k f^*_i] = β_k,i^2 [∂̂_k f^*_i-1] + (1 - β_k,i)^2 [∂̂_k f_i]. Combining Lemma <ref> with the mean value theorem, we have, d_i≤1/2δ*θ_i-1_2^2 m O_2 d_∂_k, i≤δ*θ_i-1_2 O_2. Finally, combining the above equations with the fact that [f̂_i] ≤ϵ^2_f_i and [∂̂_k f_i] ≤ϵ^2_∂_k f_i completes the proof. For the confidence interval of the recursive estimator, we can prove the following result, [Confidence Interval] As a result of Theorem <ref>, the following holds, where ŝ^* is any of the f̂^*_i or ∂̂_k f^*_i and A, B are the corresponding bounds, [ŝ^*] ≤ B^2 + A^2, (|ŝ^* - s| > κ A + B) ≤ 2e^-κ^2/2. While the expression for the MSE is trivial, for the confidence interval we have, (|f̂^*_i - [f̂^*_i]| > κ A_i) ≤ 2e^-κ^2/2. This is true because f̂^*_i is a linear combination of χs that are from bounded distributions. Accordingly, Hoeffding's inequality applies. Moreover, there is a one-to-one correspondence between bounds from Hoeffding's and Popoviciu's inequalities (see the proof of Theorem <ref>), which validates the equation above. Since |f̂^*_i - f_i| > κ A_i + B_i ⇒|f̂^*_i - [f̂^*_i]| > κ A_i, (|f̂^*_i - f_i| > κ A_i + B_i) ≤(|f̂^*_i - [f̂^*_i]| > κ A_i) ≤ 2e^-κ^2/2. Finally, we need to solve the sufficient shots problem (Problem <ref>) for the recursive estimator. The actual objective is to solve, r_j, i, r_j±,i∈ℕ, α_i, β_k,iargmin ∑_i=1^∞∑_j=1^ℓ r_j, i + ∑_k=1^m r_j+, k, i + r_j-, k, i s. t. ∀ i [f̂^*_i] ≤ E_f s. t. ∀ i, k [∂̂_k f^*_i] ≤ E_∂_k f. However, we solve an iterative version as in Algorithm <ref>, min_r_j ∈ℕ, α_i∑_j=1^ℓ r_j s. t. [f̂^*_i] ≤ E_f. min_r_j,±∈ℕ, β_k,i∑_j=1^ℓ r_j+ + r_j- s. t. [∂̂_k f^*_i] ≤ E_∂_k f. Combining the two leads to Algorithm <ref>. Note that with this algorithm, for the same error bound, the number of shots for the recursive estimator of a function will be at most equal to the number of shots for the naive estimator of that function. To illustrate the performance of Algorithm <ref>, we first apply the estimator to the variational Problem <ref> with a random (zero-mean) initial point and a simple gradient-descent optimizer. Figure <ref> shows the estimated values (with CIs) of the loss function, for the different estimators, as a function of the number of shots used to evaluate the function. It is evident that the proposed recursive estimator outperforms the sample mean estimator by a significant margin.
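To make the recursive estimator of Definition <ref> concrete, the following minimal sketch couples it to a plain gradient-descent loop on the one-qubit benchmark f(θ) = cos θ of Problem <ref>. It is an illustrative stand-in rather than Algorithm <ref> itself: the helper names are ours, the shot count per evaluation is held fixed instead of being chosen by the sufficient shots procedure, and α_i, β_i are set to constants rather than their optimal values.

```python
import numpy as np

rng = np.random.default_rng(0)

def shot_estimate(theta, shots):
    # Sample-mean estimate of f(theta) = cos(theta) from +/-1 measurement outcomes
    p_plus = 0.5 * (1.0 + np.cos(theta))
    outcomes = rng.choice([1.0, -1.0], size=shots, p=[p_plus, 1.0 - p_plus])
    return outcomes.mean()

def grad_estimate(theta, shots):
    # Parameter-shift rule: df/dtheta = [f(theta + pi/2) - f(theta - pi/2)] / 2
    return 0.5 * (shot_estimate(theta + np.pi / 2, shots)
                  - shot_estimate(theta - np.pi / 2, shots))

theta, lr, shots = 0.3, 0.4, 50
f_star = shot_estimate(theta, shots)   # recursive estimate of f, initialized naively
g_star = grad_estimate(theta, shots)   # recursive estimate of df/dtheta
alpha, beta = 0.5, 0.5                 # fixed hyperparameters for illustration

for i in range(30):
    d_theta = -lr * g_star             # gradient-descent step delta theta
    theta += d_theta
    f_new, g_new = shot_estimate(theta, shots), grad_estimate(theta, shots)
    # Definition: extrapolate the previous estimate to the new point, then blend
    f_star = alpha * (f_star + d_theta * g_star) + (1.0 - alpha) * f_new
    g_star = beta * g_star + (1.0 - beta) * g_new

print(theta, f_star)  # theta drifts toward pi, where cos(theta) = -1
```

Because the blended estimate reuses information from the previous iterate, it tolerates much noisier individual evaluations than the plain sample mean at the same shot budget, which is precisely the effect quantified by Theorem <ref>.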
Another comparison, visualizing the number of shots per GD iteration, is shown in Figure <ref>. To verify the theoretical results derived earlier, the bounds on the MSE and CI are compared with the actual values of the MSE and CI of the estimators in Figures <ref> and <ref>, respectively. For further experimental verification, the same experiment has also been carried out on the more complex MaxCut problem for a square graph (V = 4 and E = 4). The results are shown in Figure <ref> and Figure <ref>. § CONCLUDING REMARKS In this paper, a generalized framework for optimization algorithms which seek to reduce the number of shot evaluations in VQAs was proposed. In its general form, the proposed framework entails the combination of an estimator with a numerical optimization algorithm. We introduced the sufficient shots problem and proposed an algorithm for it to be used with the sample mean estimator. This concept, together with the sensitivity analysis of optimizers, allows us to control the number of shots, leading to a more natural and effective optimization process. Two specific case studies of this framework were subjected to extensive experiments. In the first case, a sample mean estimator was coupled with a simulated annealing optimizer, and in the second case, a recursive estimator was coupled with a gradient descent optimizer. In both cases we demonstrated that the proposed approach achieves significant performance improvements over conventional methods. Our results highlight the importance of considering error control strategies and incorporating them into the design of optimizers for variational quantum algorithms. By leveraging estimators with error control and integrating them with interactive optimization processes, we can achieve better optimization performance and reduce the resource requirements for quantum computations. Overall, this work contributes to advancing the field of variational quantum algorithms by providing a systematic framework for designing error-aware optimizers. The presented approaches and results open up new possibilities for improving the efficiency and effectiveness of quantum computing research in various domains, such as quantum chemistry, combinatorial optimization, and machine learning. Future directions could explore further extensions and applications of the proposed framework, as well as experimental validations on quantum devices.
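For completeness, a minimal sketch of the error-aware simulated annealing of Example I on the same one-qubit benchmark is given below. It is not the exact setup behind the reported figures: the proposal width, cooling schedule and starting point are arbitrary illustrative choices, and the per-step shot count is derived from the condition Var[f̂_i+1] ≤ η²T_i²/2 combined with the worst-case variance (at most 1) of a single ±1 outcome.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def shot_estimate(theta, shots):
    # Sample-mean estimate of f(theta) = cos(theta) from +/-1 outcomes
    p_plus = 0.5 * (1.0 + np.cos(theta))
    return rng.choice([1.0, -1.0], size=shots, p=[p_plus, 1.0 - p_plus]).mean()

def error_aware_annealing(eta=0.2, temp=1.0, cooling=0.97, steps=100):
    theta = 0.2
    f_cur, total_shots = shot_estimate(theta, 1000), 1000
    for _ in range(steps):
        # Sufficient shots so that Var[f_hat] <= eta^2 * T_i^2 / 2,
        # using Var <= 1 for a single +/-1 outcome (Popoviciu bound)
        shots = math.ceil(2.0 / (eta * temp) ** 2)
        total_shots += shots
        proposal = theta + rng.normal(scale=0.5)
        f_prop = shot_estimate(proposal, shots)
        # Estimated transition probability P_hat of the error-aware optimizer
        if f_prop < f_cur or rng.random() < math.exp(-(f_prop - f_cur) / temp):
            theta, f_cur = proposal, f_prop
        temp *= cooling  # monotonically decreasing temperature, as assumed above
    return theta, total_shots

theta, shots_used = error_aware_annealing()
print(theta % (2 * np.pi), shots_used)  # theta typically settles near pi
```

The shot budget automatically grows as the temperature decreases, reproducing the qualitative behaviour of the error-bound curves discussed in Example I.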
http://arxiv.org/abs/2307.04472v1
20230710104248
Partial Vessels Annotation-based Coronary Artery Segmentation with Self-training and Prototype Learning
[ "Zheng Zhang", "Xiaolei Zhang", "Yaolei Qi", "Guanyu Yang" ]
cs.CV
[ "cs.CV" ]
Partial Vessels Annotation-based Coronary Artery Segmentation Z. Zhang and X. Zhang—Contributed equally to this work. Z. Zhang et al. LIST, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing 210096, China [email protected]. of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, Nanjing, China Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, Southeast University, Nanjing 210096, China Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Strasbourg, France Partial Vessels Annotation-based Coronary Artery Segmentation with Self-training and Prototype Learning Zheng Zhang1 Xiaolei Zhang2 Yaolei Qi1 Guanyu Yang1,3,4() August 12, 2023 ======================================================================================================= Coronary artery segmentation on coronary-computed tomography angiography (CCTA) images is crucial for clinical use. Due to the expertise-demanding and labor-intensive annotation process, there is a growing demand for relevant label-efficient learning algorithms. To this end, we propose partial vessels annotation (PVA), based on the challenges of coronary artery segmentation and clinical diagnostic characteristics. Further, we propose a progressive weakly supervised learning framework to achieve accurate segmentation under PVA. First, our proposed framework learns the local features of vessels to propagate the knowledge to unlabeled regions. Subsequently, it learns the global structure by utilizing the propagated knowledge, and corrects the errors introduced in the propagation process. Finally, it leverages the similarity between feature embeddings and the feature prototype to enhance testing outputs. Experiments on clinical data reveal that our proposed framework outperforms the competing methods under PVA (24.29% vessels), and achieves comparable performance in trunk continuity with the baseline model using full annotation (100% vessels). § INTRODUCTION Coronary artery segmentation is crucial for clinical coronary artery disease diagnosis and treatment <cit.>. Coronary-computed tomography angiography (CCTA), as a non-invasive technique, has been certified and recommended as an established technology in the cardiological clinical arena <cit.>. Thus, automatic coronary artery segmentation on CCTA images has become increasingly sought after as a means to enhance diagnostic efficiency for clinicians. In recent years, the performance of deep learning-based methods has surpassed that of conventional machine learning approaches (e.g. region growing) in coronary artery segmentation <cit.>. Nevertheless, most of these deep learning-based methods highly depend on accurately labeled datasets, which need labor-intensive annotations. Therefore, there is a growing demand for relevant label-efficient learning algorithms for automatic coronary artery segmentation on CCTA images. Label-efficient learning algorithms have garnered considerable interest and research efforts in natural and medical image processing <cit.>, while research on label-efficient coronary artery segmentation for CCTA images is slightly lagging behind. Although numerous label-efficient algorithms for coronary artery segmentation in X-ray angiograms have been proposed <cit.>, only a few studies focus on CCTA images. Qi et al. <cit.> proposed an elaborately designed EE-Net to achieve commendable performance with limited labels.
Zheng et al. <cit.> adapted nnU-Net to the semi-supervised segmentation setting as the generator of a GAN, achieving satisfactory performance on CCTA images. Most of these studies use incomplete supervision, which labels a subset of the data. However, other types of weak supervision (e.g. inexact supervision), which are widely used in natural image segmentation <cit.>, are seldom applied to coronary artery segmentation on CCTA images. Different types of supervision are utilized according to the specific tasks. The application of various types of weak supervision is inhibited in coronary artery segmentation on CCTA images by the following challenges. 1) Difficult labeling (Fig. <ref>(a)). The target regions are scattered, while manual annotation is drawn slice by slice on the planes along the vessels. Also, the boundaries of branches and peripheral vessels are blurred. These make the annotation process time-consuming and expertise-demanding. 2) Complex topology (Fig. <ref>(b)). The coronary artery shows complex and slender structures, whose diameter ranges from 2 mm to 5 mm. The tree-like structure varies individually. Based on these challenges and the insight that vessels share local features (Fig. <ref>(b)), we propose partial vessels annotation and our framework as follows. Given the above, we propose partial vessels annotation (PVA) (Fig. <ref>(c)) for CCTA images. While PVA is a form of partial annotation (PA), which has been adopted by a number of studies <cit.>, our proposed PVA differs from the commonly used PA methods. More specifically, PVA labels vessels continuously from the proximal end to the distal end, while the labeled regions of PA are typically randomly selected. Thus, our proposed PVA has two merits. 1) PVA balances efficiency and informativity. Compared with full annotation, PVA only requires clinicians to label vessels within restricted regions in adjacent slices, rather than all scattered target regions in each individual slice. Compared with PA, PVA keeps labeled vessels continuous to preserve local topology information. 2) PVA provides flexibility for clinicians. Given that clinical diagnosis places greater emphasis on the trunks than on the branches, PVA allows clinicians to focus their labeling efforts on vessels of particular interest. Therefore, our proposed PVA is well-suited for clinical use. In this paper, we further propose a progressive weakly supervised learning framework for PVA. Our proposed framework, using PVA (only 24.29% vessels labeled), achieved better performance than the competing weakly supervised methods, and comparable performance in trunk continuity with the full annotation (100% vessels labeled) supervised baseline model. The framework works in two stages, namely the local feature extraction (LFE) stage and the global structure reconstruction (GSR) stage. 1) The LFE stage extracts the local features of the coronary artery from the limited labeled vessels in PVA, and then propagates the knowledge to unlabeled regions. 2) The GSR stage leverages prediction consistency during the iterative self-training process to correct the errors that are inevitably introduced by the label propagation process. The code of our method is available at <https://github.com/ZhangZ7112/PVA-CAS>. To summarize, the contributions of our work are three-fold: * To the best of our knowledge, we proposed partial vessels annotation for coronary artery segmentation for the first time, which is in accord with clinical use. First, it balances efficiency and informativity.
Second, it provides flexibility for clinicians to annotate where they pay more attention. * We proposed a progressive weakly supervised learning framework for partial vessels annotation-based coronary artery segmentation. It only required 24.29% labeled vessels, but achieved comparable performance in trunk continuity with the baseline model using full annotation. Thus, it shows great potential to lower the labeling cost for relevant clinical and research use. * We proposed an adaptive label propagation unit (LPU) and a learnable plug-and-play feature prototype analysis (FPA) block in our framework. The LPU integrates the functions of pseudo label initialization and updating, and dynamically adjusts the updating weights according to the calculated confidence level. The FPA enhances vessel continuity by leveraging the similarity between feature embeddings and the feature prototype. § METHOD As shown in Fig. <ref>, our proposed framework for partial vessels annotation (PVA) works in two stages. 1) The LFE stage (Sec. <ref>) extracts and learns vessel features from PVA locally. After the learning process, it infers on the training set to propagate the learned knowledge to unlabeled regions, the outputs of which are integrated with PVA labels to initialize pseudo labels. 2) The GSR stage (Sec. <ref>) utilizes pseudo labels to conduct self-training, and leverages prediction consistency to improve the pseudo labels. In our proposed framework, we also designed an adaptive label propagation unit (LPU) and a learnable plug-and-play feature prototype analysis (FPA) block. The LPU initializes and updates the pseudo labels; the FPA block learns before testing and improves the final output during testing. §.§ Local Feature Extraction Stage In the LFE stage, our hypothesis is that the small areas surrounding the labeled regions hold valid information. Based on this, a light segmentation model 𝒮_l is trained to learn vessel features locally, with small patches centered around the labeled regions as input and output. In this manner, the negative impact of inaccurate supervision information in unlabeled regions is also reduced. §.§.§ Pseudo Label Initialization in LPU. After training, 𝒮_l propagates the learned knowledge of local features to unlabeled regions. For each image of shape H× W× D, the corresponding output logit ŷ_1∈ [0,1]^H× W× D of 𝒮_l provides a complete estimate of the distribution of vessels, albeit with some approximation. Meanwhile, the PVA label y_PVA∈{0,1}^H× W× D provides accurate information on the distribution of vessels, but only to a limited extent. Therefore, the LPU integrates ŷ_1 and y_PVA to initialize the pseudo label y_PL (Equ. <ref>), which will be utilized in the GSR stage and updated during iterative self-training. y_PL^(t=0)(h,w,d)_∀ (h,w,d) ∈ℝ^H× W× D= 1, y_PVA(h,w,d)=1, ŷ_1(h,w,d), otherwise. §.§ Global Structure Reconstruction Stage The GSR stage mainly consists of three parts: 1) the segmentation model 𝒮_g to learn the global tree-like structure; 2) the LPU to improve pseudo labels; 3) the FPA block to improve segmentation results at testing. Through initialization (Equ. <ref>), the initial pseudo label y_PL^(t=0) contains the information of both the PVA labels and the knowledge of local features in 𝒮_l. Therefore, at the beginning of this stage, 𝒮_g learns from y_PL^(t=0) to warm up. After this, the logits of 𝒮_g are utilized to update the pseudo labels during iterative self-training. §.§.§ Pseudo Label Updating in LPU.
The principle of this process is that the more reliable a logit is, the more it influences the distribution of the corresponding pseudo label. Based on this principle, we first calculate the confidence degree η^(t)∈ [0,1] for ŷ_2^(t). Defined by Equ. <ref>, η^(t) numerically equals the average of the logits in labeled regions. This definition makes sense since the expected logit equals one in vessel regions and zero in background regions. The closer ŷ_2^(t) gets to the expected logit, the higher η^(t) (the confidence degree) will be. η^(t) = ∑_h∑_w∑_dy_PVA(h,w,d) ·ŷ_2^(t)(h,w,d)/∑_h∑_w∑_dy_PVA(h,w,d) Then, a quality control test is performed to avoid negative optimization as far as possible. If the confidence degree η^(t) is higher than all elements in the set {η^(i)}_i=1^t-1, the current logit is deemed trustworthy and passes the test to improve the pseudo label. Then, y_PL^(t) is updated by the exponentially weighted moving average (EWMA) of the logits and the pseudo labels (Equ. <ref>). This process is similar to prediction ensembling <cit.>, which has been adopted to filter pseudo labels <cit.>. However, different from those methods, where the factor η^(t) is a fixed hyperparameter coefficient and the pseudo labels are updated every one or several epochs, η^(t) in our method is adaptive and a quality control test is performed. y_PL^(t)= η^(t)ŷ_2^(t)+(1-η^(t))y_PL^(t-1), η^(t)=max{{η^(i)}_i=1^t} y_PL^(t-1), otherwise. §.§.§ Feature Prototype Analysis Block. Inspired by <cit.>, which generates the class feature prototype ρ _c (Equ. <ref>) from the embeddings z^l_i of labeled points in class c, we inherit the idea but further transform the mechanism into the proposed learnable plug-and-play FPA block. Empirically, we find that the output of the FPA block has good continuity, so the FPA output is utilized to enhance the continuity of the convolutional output at testing. ρ _c = 1/|ℐ_c |∑_z^l_i∈ℐ_cz^l_i In the penultimate layer of the network, which is followed by a 1×1×1 convolutional layer to output logits, we feed the feature map Z∈ℛ^C× H× W× D into the FPA block in parallel. The output similarity map O∈ℛ^1× H× W× D is calculated by Equ. <ref>, where Z(h,w,d)∈ℛ^C denotes the feature embedding of voxel (h,w,d), and ρ_θ∈ℛ^C the kernel parameters of the FPA block. O(h,w,d)=exp(-‖ Z(h,w,d)-ρ_θ‖^2) The FPA block is trained before testing, during which the whole model except the FPA block is frozen. To reduce the additional overhead, ρ_θ is initialized by the once-computed ρ _c and fine-tuned with the loss ℒ_fpa (Equ. <ref>), where only labeled voxels take effect in updating the kernel. ℒ_fpa=-∑_h∑_w∑_dy_PVA(h,w,d)· log(O(h,w,d))/∑_h∑_w∑_dy_PVA(h,w,d) § EXPERIMENTS AND RESULTS §.§ Dataset and Evaluation Metrics Experiments are implemented on a clinical dataset, which includes 108 subjects of CCTA volumes (2:1 for training and testing). The volumes share the size of 512 × 512 × D, with D ranging from 261 to 608. PVA labels of the training set were annotated by clinicians, with only 24.29% of vessels labeled. The metrics used to quantify the results include both integrity and continuity assessment indicators. The integrity assessment indicators are the Mean Dice Coefficient (Dice), Relevant Dice Coefficient (RDice) <cit.>, and Overlap (OV) <cit.>; the continuity assessment indicator is Overlap until First Error (OF) <cit.> on the three main trunks (LAD, LCX and RCA). §.§ Implementation Details 3D U-Net <cit.> is set as our baseline model. Experiments were implemented using PyTorch on a GeForce RTX 2080Ti.
The Adam optimizer was used to train the models with an initial learning rate of 10^-4. The patch sizes were set to 128 × 128 × 128 and 512 × 512 × 256 for 𝒮_l and 𝒮_g, respectively. At testing, sliding windows with a step of half the window width were used to cover the entire volume. §.§ Comparative Test To verify the effectiveness of our proposed method, it is compared with both classic segmentation models (3D U-Net <cit.>, HRNet <cit.>, Transunet <cit.>) and partial annotation-related weakly supervised frameworks (EWPA <cit.>, DMPLS <cit.>). The quantitative results of the different methods are summarized in Tab. <ref>, which shows that our proposed method outperforms the competing methods under PVA. The competing frameworks (EWPA and DMPLS) had achieved the best results in their respective tasks under partial annotation, but our proposed method achieved better results for PVA-based coronary artery segmentation. It is worth mentioning that the performance in trunk continuity (measured by the indicator OF) of our proposed method using PVA (24.29% vessels labeled) is comparable to that of the baseline model using full annotation (100% vessels labeled). The qualitative visual results verify that our proposed method outperforms the competing methods under PVA. Three cases are shown in Fig. <ref>. All the cases show that the segmentation results of our method have good overall topological integrity, especially in trunk continuity. §.§ Ablation Study Ablation experiments were conducted to verify the importance of the components in our proposed framework (summarized in Tab. <ref>). The performance improvement verifies the effectiveness of the pseudo label initialization (PLI) and updating (PLU) mechanisms in the label propagation unit (LPU). PLI integrates the information of PVA labels with the propagated knowledge, and PLU improves the pseudo labels during self-training. With the help of the FPA block, the segmentation results gain further improvement, especially in trunk continuity. § CONCLUSION In this paper, we proposed partial vessels annotation (PVA) for coronary artery segmentation on CCTA images. The proposed PVA is convenient for clinical use owing to two merits: it provides flexibility and balances efficiency and informativity. Under PVA, we proposed a progressive weakly supervised learning framework, which outperforms the competing methods and shows comparable performance in trunk continuity with the full annotation supervised baseline model. In our framework, we also designed an adaptive label propagation unit (LPU) and a learnable plug-and-play feature prototype analysis (FPA) block. The LPU integrates the functions of pseudo label initialization and updating, and the FPA improves vessel continuity by leveraging the similarity between feature embeddings and the feature prototype. To conclude, our proposed framework under PVA shows great potential for accurate coronary artery segmentation while requiring significantly less annotation effort.
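As a concrete illustration of the two designed components, the following NumPy sketch restates the LPU rules (pseudo label initialization and the confidence-gated EWMA update) and the FPA similarity map. It is a simplified stand-in for the released PyTorch implementation: the array names and the functional decomposition are our own, and batching, GPU execution and the networks themselves are omitted.

```python
import numpy as np

def init_pseudo_label(logit_lfe, y_pva):
    # Initialization: PVA-labeled voxels are fixed to 1; elsewhere the
    # LFE-stage logit y_hat_1 is used
    return np.where(y_pva == 1, 1.0, logit_lfe)

def update_pseudo_label(y_pl, logit, y_pva, best_conf):
    # Confidence degree: average of the logits over PVA-labeled voxels
    conf = float((y_pva * logit).sum() / y_pva.sum())
    if conf > best_conf:  # quality-control test passed
        y_pl = conf * logit + (1.0 - conf) * y_pl  # EWMA update
        best_conf = conf
    return y_pl, best_conf

def fpa_similarity(z, prototype):
    # O(h, w, d) = exp(-||Z(h, w, d) - rho||^2) for a (C, H, W, D) feature map
    diff = z - prototype[:, None, None, None]
    return np.exp(-np.sum(diff ** 2, axis=0))

# Toy shapes only; in practice the logits come from S_l and S_g
y_pva = (np.random.rand(8, 8, 8) > 0.9).astype(float)
logit = np.random.rand(8, 8, 8)
y_pl = init_pseudo_label(logit, y_pva)
y_pl, best = update_pseudo_label(y_pl, logit, y_pva, best_conf=0.0)
```

The authors' full training code, including the segmentation networks, is available at the repository linked in the introduction.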
http://arxiv.org/abs/2307.04672v1
20230710161831
Black-hole powered quantum coherent amplifier
[ "Avijit Misra", "Pritam Chattopadhyay", "Anatoly Svidzinsky", "Marlan O. Scully", "Gershon Kurizki" ]
quant-ph
[ "quant-ph", "gr-qc", "hep-th" ]
[email protected] AMOS and Department of Chemical and Biological Physics, Weizmann Institute of Science, Rehovot 7610001, Israel [email protected] AMOS and Department of Chemical and Biological Physics, Weizmann Institute of Science, Rehovot 7610001, Israel [email protected] Texas A& M University, College Station, Texas 77843, USA [email protected] Texas A& M University, College Station, Texas 77843, USA Baylor University, Waco, Texas 76798, USA Princeton University, Princeton, New Jersey 08544, USA [email protected] AMOS and Department of Chemical and Biological Physics, Weizmann Institute of Science, Rehovot 7610001, Israel Atoms falling into a black hole (BH) through a cavity are shown to enable coherent amplification of light quanta powered by the BH gravitational vacuum energy. This process can harness the BH energy towards useful purposes, such as propelling a spaceship trapped by the BH. The process can occur via transient amplification of a signal field by falling atoms that are partly excited by Hawking radiation reflected by an orbiting mirror. In the steady-state regime of thermally equilibrated atoms that weakly couple to the field, this amplifier constitutes a BH-powered quantum heat engine. The envisaged effects substantiate the thermodynamic approach to BH acceleration radiation. Black-hole powered quantum coherent amplifier Gershon Kurizki August 12, 2023 ============================================= Introduction: Imagine a scene that can play out in a science fiction movie (Fig. <ref>): a spaceship is helplessly falling into a black hole (BH) because its fuel supply is dwindling and does not suffice for a breakaway maneuver. Luckily, its SOS message has been received by a faraway spaceship, which is equipped with a powerful laser that can transfer coherent energy to its distressed sister ship. Unlike heat, coherent energy transfer is associated with ergotropy <cit.> that can perform mechanical work <cit.> to propel the ship. Unfortunately, coherent energy transfer would have poor efficiency due to diffraction and BH gravitational lensing over large distances between the ships. Yet a revolutionary technique may still rescue the ill-fated spaceship: the laser signal can be coherently amplified in a novel fashion by atoms in free fall through a cavity. Namely, the amplification can only occur through excitation of the free-falling atoms by BH Hawking radiation redirected by an orbiting mirror. The envisioned amplification can strongly enhance the coherent power transfer to the falling spaceship, providing it with enough thrust to free itself from the grip of the BH. What is the theoretical basis for this fantastic story? It is the mind-boggling idea that the Unruh vacuum <cit.> yields thermal Hawking radiation near the BH horizon, but cannot directly excite atoms falling into the BH, as opposed to a bright star that can directly heat up falling atoms in its vicinity. By contrast, near a BH the free-falling atoms feel the heat only if the Hawking radiation is redirected by a mirror placed on a stable orbit around the BH (Fig. <ref>). Then, counter-intuitively, BH gravity can act on atoms as a heat bath, although the process is purely unitary <cit.>. For atoms falling into a BH during their passage through a cavity, a perturbative (master-equation) approach maps this BH-gravitational problem onto that of a quantum heat engine that acts as a two-level maser/laser without population inversion coupled to two baths at different temperatures <cit.>. 
Here the piston of the heat engine is the signal laser field, whereas the BH scalar field modes redirected by a mirror replace the hot bath as the energy source and the cold bath as the entropy dump of the engine. This uniquely quantum mechanical manifestation of an anomalous gravitational vacuum effect unequivocally demonstrates the validity of the thermodynamic approach to acceleration radiation near a BH. Another intriguing limit is the strong-coupling field-atom regime mediated by the BH vacuum state, a novel manifestation of gravity-induced quantum electrodynamics. Analysis: A cloud of two-level atoms (TLA), initially in their ground state, is freely falling towards the BH through a cavity. The TLA are coupled to the gravitational field of the BH by a quantized scalar field <cit.> Φ̂(r,t)=∑_i[â_iϕ _i(r,t)+H.c.], where H.c. stands for the Hermitian conjugate, the index i labels the field modes, r=(r,Θ ) denotes the radial and angular coordinates, and â_i is the i-th mode annihilation operator. The scalar field is coupled with the TLA as depicted in the space-time diagram (Fig. <ref>b). An atom freely falling into a non-rotating BH while still above the horizon can (see App. <ref>) be resonant with the following scalar field modes (in the Kruskal-Szekeres coordinates) ϕ _1Ω(T,X)=e^-iΩ( T-X) , ϕ _2Ω(T,X)=( T+X) ^-iΩθ (T+X), where θ is the step function and Ω >0. From the perspective of the free-falling atom, the modes (<ref>)-(<ref>) oscillate harmonically as a function of the atom's proper time with positive frequency. The form of the outgoing mode (<ref>) and the ingoing mode (<ref>) derived here (App. <ref>) is, as shown below, key to our ability to employ the BH as a source of useful quanta. The free-falling atoms may resonantly interact with the outgoing plane-wave field ϕ _1Ω and with the ingoing Rindler field ϕ _2Ω. However, in the Unruh vacuum, which by consensus represents the state of the evaporating BH field <cit.>, there are no photons in the modes (<ref>) and (<ref>). Consequently, free-falling atoms cannot become excited in the Unruh vacuum (see App. <ref>). Instead, we might consider exciting these atoms by the outgoing Rindler photons, which fill the Unruh vacuum and constitute the Hawking radiation <cit.>. They thermally populate the modes ϕ _3Ω(T,X)=( X-T) ^iΩθ (X-T). Yet, it can be shown (App. <ref>) that these outgoing Rindler photons cannot excite free-falling atoms. Is there another way to excite these atoms by BH radiation? Indeed, there is: we show that free-falling atoms can be excited by redirecting the outgoing Rindler photons (Hawking radiation) towards the BH via a mirror. The mirror should orbit the BH at a fixed radius r=r_0. To be stable, the mirror orbit should lie at r_0 > 3r_g, r_g being the gravitational radius, but otherwise the value of r_0 does not affect the result (see below). In the presence of such a mirror, the mode function satisfying the boundary condition ϕ (t,r_0)=0 at the mirror surface acquires a new, advantageous form ϕ (T,X)=( X-T) ^iΩ-e^iΩ( r_0+ln (r_0-1)) ( T+X) ^-iΩ, where the first term is the ϕ_c mode and the second the ϕ_h mode. This hitherto unexplored scalar field mode has two parts: the outgoing Rindler photon mode (the first term on the rhs) and a part reflected from the mirror into the ingoing Rindler mode (the second term on the rhs). This ingoing Rindler mode acts as a hot bath mode, denoted as ϕ _h(r,t) with frequency Ω =Ω _h, that can excite the free-falling atom. The outgoing Rindler modes act as a cold-bath (vacuum state) mode, denoted as ϕ _c(r,t).
We wish to show that the redirected Hawking radiation can enable coherent amplification of a signal mode. The complete field-atom interaction Hamiltonian then has the form H_int=∑_ig_hiϕ _hib̂^†â_hi|e⟩⟨ g|+∑_jg_jϕ_cjĉ_j|e⟩⟨ g|+H.c. Here b̂ stands for the signal-mode annihilation operator, â _hi is the annihilation operator of the i-th hot bath mode ϕ_hi of the redirected Hawking radiation, and ĉ_j that of the j-th cold bath mode ϕ_cj of the redirected Hawking radiation (Eq. (<ref>)). The atom-scalar field interaction (first term on the rhs of Eq. (<ref>)) represents an anti-resonant Raman process whereby a scalar-field quantum in the i-th redirected Hawking-radiation mode ϕ _hi is converted into a signal photon by the atomic transition between the ground (g) and excited (e) states, with coupling strength g_hi. The interaction Hamiltonian of the atom with the cold bath ϕ_cj involves the same atomic transition operator |e⟩⟨ g| with coupling strength g_cj. Our goal is to maximize the energy gain of the signal mode in a non-passive (ergotropy-carrying) form, capable of delivering work <cit.>. Strong TLA-BH coupling: Here we assume that, while traversing the cavity, the atom is strongly coupled to one redirected Hawking radiation mode ϕ _h with a coupling strength g_h that overwhelms the coupling strengths g_cj to all cold bath modes. This scenario corresponds to a high-Q cavity which allows for strong coupling of a single Hawking radiation mode to the atom. To render the problem single-mode, we choose the TLA resonant frequency ω _0, the cavity frequency ω _c, the signal frequency ν and the frequency Ω _h of the redirected mode ϕ _h in (<ref>) such that ν≈Ω _h-ω _0. Then the interaction Hamiltonian in Eq. (<ref>) simplifies to H_int=g_hϕ _hb̂^†â_h|e⟩⟨ g|+H.c. The basis for the combined atom-field energy states can then be |1⟩ = |g,n_s,n_h⟩ , |2⟩ = |e,n_s+1,n_h-1⟩ , where |n_s⟩ and |n_h⟩ are Fock states of the signal mode and the BH ϕ _h mode, respectively. At short times, where first-order transitions between the atom and the field modes predominate, the subspace in Eq. (<ref>) is decoupled from other subspaces, whilst keeping the total number of excitations constant. Let us assume that the atom and the signal mode are initially in the ground state and the Fock state |n_s⟩, respectively. Thus, the initial state of the combined system is ρ^i= |g⟩⟨ g| ⊗ |n_s⟩⟨ n_s|⊗ρ_T_c⊗ρ_T_h , where ρ_T_c and ρ_T_h are the thermal field states at temperature T_c and T_h, respectively. In this problem, T_c = 0. Then the initial state is a mixture of the pure states |g⟩ |n_s⟩ |n_h⟩ with probabilities p_n_h=e^-β _hΩ _hn_h/Z_β _h, where β _h=1/(k_BT_H), T_H being the effective BH (Hawking) temperature <cit.>. The final states of the atom and the signal mode after their unitary evolution over time t are then (App. <ref>) ρ_atom^f= |u|^2|g⟩⟨ g|+ |v|^2 |e⟩⟨ e|, ρ_s^f= |u|^2|n_s⟩⟨ n_s|+ |v|^2 |n_s+1⟩⟨ n_s+1|, where u = e^-1/2 i δ t(cos(1/2 t √(δ ^2+4 g_h^2 ϕ _h^2))+i δsin(1/2 t √(δ ^2+4 g_h^2 ϕ _h^2))/√(δ ^2+4 g _h^2 ϕ _h^2)), v = -2 i g_h ϕ _h e^-1/2 i δ tsin( 1/2 t √(δ ^2+4 g_h^2 ϕ _h^2))/√(δ ^2+4 g_h^2 ϕ _h^2), δ = ω_0 + ν - Ω_h. The work capacity (ergotropy) change following the interaction in the cavity is Erg(ρ _s^f)-Erg(ρ _s^i)=ν (|v|^2-|u|^2), which is maximized for |v|=1, |u|=0. For the choice δ =0, g_h t|ϕ _h|=(2m+1)π /2, where m is an integer, the atom is transferred to the excited state and the signal adds a photon to its mode, ρ _s^f=|n_s+1⟩⟨ n_s+1|. The highest amplification per atom is achieved for n_s = 1.
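As a quick numerical sanity check of this result, the short sketch below evaluates the amplitudes u and v given above and verifies that, for δ = 0 and g_h t|ϕ_h| = π/2 (the m = 0 case), the atomic population is fully transferred and the signal mode gains one photon. The function name and the convention of measuring frequencies in units of g_h|ϕ_h| are our own illustrative choices.

```python
import numpy as np

def amplitudes(delta, g_phi, t):
    # Transition amplitudes u, v for the two-state subspace |1>, |2>
    omega = np.sqrt(delta**2 + 4.0 * g_phi**2)  # generalized Rabi frequency
    u = np.exp(-0.5j * delta * t) * (np.cos(0.5 * omega * t)
                                     + 1j * delta * np.sin(0.5 * omega * t) / omega)
    v = -2j * g_phi * np.exp(-0.5j * delta * t) * np.sin(0.5 * omega * t) / omega
    return u, v

g_phi = 1.0                                    # g_h |phi_h| sets the frequency unit
u, v = amplitudes(delta=0.0, g_phi=g_phi, t=np.pi / (2.0 * g_phi))
print(abs(u)**2, abs(v)**2)   # -> 0.0, 1.0: full transfer, one photon added
print(abs(u)**2 + abs(v)**2)  # -> 1.0: unitarity of the two-level evolution
```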
The efficiency of work extraction by the signal from the BH is then η = ν/(ω _0+ν). This efficiency thus coincides with the Scovil-Schulz-Dubois (SSD) bound of quantum heat engines/amplifiers <cit.>, ν /(ω _0+ν ). In turn, the SSD efficiency η_ SSD can approach the Carnot efficiency η_C if T_h/T_c≳Ω_h/ω_c. However, as T_c → 0, the atomic resonant frequency must approach zero in order to attain the Carnot efficiency, which is unfeasible. The maximal average power of work extraction in this regime is given by Ẇ= 2 g_h |ϕ_h|ν/((2m+1)π), where the maximal power corresponds to m=0. A spectacular power boost can be obtained in the Dicke regime of N atoms that are collectively coupled to the hot bath mode. Following <cit.>, we can have Ẇ→ N Ẇ. Weak TLA-BH coupling: Let us now consider the opposite limiting regime of a cavity with insufficiently high Q, such that its leakage to cold bath modes ϕ_c outside the cavity is stronger than the coupling of the atom to the Hawking radiation mode ϕ_h. In this regime, the atom that is energized by the redirected Hawking radiation reaches a steady state (equilibrates) under the action of the cold bath while in the cavity. Hence, the process is analogous to our continuously operating heat-engine maser based on a TLA <cit.>. Here, the atom together with the signal at frequency ν is coupled near-resonantly to a hot field mode, but the coupling strength g_h is assumed to be weaker than the couplings g_cj to the cold modes. The atom then reaches a steady state under the action of the cold bath (App. <ref>). The atom-scalar field interaction obeys the Raman Hamiltonian, which in the interaction picture reads (cf. Ref. <cit.> for the derivation) H_(t)=g_h∑_i( ϕ _hiâ_hib̂^†|e⟩⟨ g|e^-i[Ω _hi-(ν +ω _0)]t+H.c.) . Under this interaction, we then get a master equation for the state of the hot scalar field. By tracing out the atom, which has reached a steady population under the influence of the cold bath, we then find the time evolution of the signal mode (see SI). The ergotropy (work capacity) of the signal state in this regime, which corresponds to coherent amplification, grows as 𝒲= ν |α_0|^2 e^𝒢t, where |α _0| is the mean initial signal amplitude and 𝒢 is the gain (see SI). The power of the gained work is therefore given by 𝒲̇= 𝒢ν |α_0|^2 e^𝒢t. As in the strong-coupling regime, an N-fold collective (Dicke) power boost <cit.> is attainable by N atoms. The efficiency can be computed as the ratio of the power generated by the signal to the heat flux from the BH, Q̇_h. This efficiency evaluates to (see App. <ref>) η = Ẇ/Q̇_h = ν/Ω _h|α _0|^2/(|α |^2+ n_h(n_c+1)/(n_h-n_c)). It approaches the Scovil-Schulz-Dubois (SSD) bound ν /(ω _0+ν ) as |α _0|≫1 (Fig. <ref>). In Fig. <ref> we show that the division of the gained signal energy between ergotropy and heat tends in favor of ergotropy (coherent work production) as the gain increases. Conclusions: We have put forth the possibility of black hole (BH) gravity acting as the energizing source of coherent light amplification. The amplification is mediated by the Hawking radiation of the BH in the presence of an orbiting mirror that transforms outgoing Hawking radiation into ingoing Rindler quanta. It can be viewed as a BH-fueled heat engine that converts Hawking radiation into work in a coherent signal mode. The main energy source in our model is Hawking radiation, and not the kinetic or potential energy of the atoms.
In principle, one can also use the kinetic energy of ground-state atoms passing through the cavity to amplify light <cit.>. Our results corroborate the view <cit.> that, despite the unitarity of such processes, a BH can act as a heat source on falling matter (cf. <cit.>). Concepts of quantum information theory and optics have been gaining prominence in the context of quantum effects of gravity <cit.>. We here venture in yet another direction, demonstrating that such effects may find practical use, such as propelling a spaceship by atoms falling into a BH. These results open a new avenue that bridges quantum optics, quantum thermodynamics and BH gravity. Acknowledgements: GK and MOS acknowledge the support of NSF-BSF. GK acknowledges the support of PACE-IN (QUANTERA), PATHOS (EU FET OPEN) and DFG (FOR 2724). MOS acknowledges the support of the Air Force Office of Scientific Research (Grant No. FA9550-20-1-0366 DEF), the Robert A. Welch Foundation (Grant No. A-1261), and the National Science Foundation (Grant No. PHY 2013771). Author contributions: GK conceived the initial idea, and then all authors conceptualized and designed the project. AM, PC and AS did the analytical study. PC did the figures and plots. GK and MOS supervised the project. All authors were involved in the analysis and interpretation of the results. GK, AM and AS wrote the manuscript with input from all authors. Competing interests: The authors declare no competing interests. Data availability: Data sharing is not applicable to this article as no datasets were generated or analysed during the current study. § MODE FUNCTIONS OF PHOTONS RESONANT WITH FREE-FALLING ATOMS Here we consider a two-level atom with transition frequency ω freely falling into a nonrotating BH of mass M along a radial trajectory from infinity with zero initial velocity. We choose the gravitational radius r_g=2GM/c^2 as the unit of distance and r_g/c as the unit of time, and introduce the dimensionless distance, time, and frequency as r→ r_gr, t→ (r_g/c)t, ω→ (c/r_g)ω. In dimensionless Schwarzschild coordinates the atom trajectory is described by the equations dr/dτ=-1/√(r), dt/dτ=r/(r-1), where t is the dimensionless time in Schwarzschild coordinates and τ is the dimensionless proper time for the atom. Integration of equations (<ref>) yields τ =-2/3r^3/2+const, t=-2/3r^3/2-2√(r)-ln( (√(r)-1)/(√(r)+1) ) +const. For a scalar photon in the Regge-Wheeler coordinate r_∗=r+ln (r-1) the field propagation equation reads [ ∂ ^2/∂ t^2-∂ ^2/∂ r_∗^2+( 1-1/r) ( 1/r^3- Δ/r^2) ] ψ =0, where Δ is the angular part of the Laplacian. We are interested in solutions of this equation outside of the event horizon, that is for r>1. If the dimensionless photon frequency ν≫ 1, then the first two terms in Eq. (<ref>) dominate and one can approximately write ( ∂ ^2/∂ t^2-∂ ^2/∂ r_∗^2) ψ =0. The general solution of this equation reads ψ =F( t± r_∗) =F( t± r±ln (r-1)) , where F is an arbitrary function. We consider a trajectory of the atom near the event horizon and choose the origin of τ such that τ =0 when the atom crosses the horizon. In the vicinity of the horizon, we obtain for the atom's trajectory t≈ -ln (-τ )+5/4τ +const, r≈ 1-τ -1/4τ ^2, and, therefore, along the atom's trajectory t-r-ln (r-1)≈ -2ln (-τ )+const, t+r+ln (r-1)≈τ/2 +const. Eqs. (<ref>) and (<ref>) yield the following mode functions of the field, which oscillate harmonically as a function of τ along the atom's trajectory, ψ _1ν(t,r)=e^iν e^-1/2( t-r-ln (r-1)) ≈ e^-iντ, ψ _2ν(t,r)=e^-2iν( t+r+ln (r-1)) ≈ e^-iντ.
It is insightful to write the mode functions (<ref>) and (<ref>) in the Kruskal-Szekeres coordinates T and X, which are defined in terms of the Schwarzschild coordinates t and r as T=√(r-1)e^r/2sinh( t/2) , X=√(r-1)e^r/2cosh( t/2) , for r>1, and T=√(1-r)e^r/2cosh( t/2) , X=√(1-r)e^r/2sinh( t/2) , for 0<r<1. In these coordinates, we obtain for r>1 e^-1/2( t-r-ln (r-1)) =X-T, T+X=e^1/2( t+r+ln (r-1)) , and, therefore, ψ _1ν(T,X)=e^-iν( T-X) , ψ _2ν(T,X)=( T+X) ^-4iν. § STRONG-COUPLING AMPLIFIER REGIME The initial state of the combined system is ρ^i= |g⟩⟨ g| ⊗ |n_s⟩⟨ n_s|⊗ρ_T_h, which is a mixture of the pure states |g⟩ |n_s⟩ |n_h⟩ with the thermal occupation probability of the hot bath mode p_ n_h= e^-β_h Ω_h n_h/Z_β_h. Each such pure state can be written in the basis of Eq. (<ref>) as |ψ⟩^i=( [ 1; 0 ]), which under the unitary evolution maps to |ψ⟩^f=( [ u; v ]), with u and v as given in the main text. The final state of the atom after time t is then ρ_atom^f= |u|^2|g⟩⟨ g|+ |v|^2 |e⟩⟨ e|, and the final state of the piston is ρ_p^f= |u|^2|n_s⟩⟨ n_s|+ |v|^2 |n_s+1⟩⟨ n_s+1|. Here we have taken the sum over all pure states in Eq. (<ref>) with the thermal probability p_ n_h in the hot bath mode. The initial ergotropy of the piston mode is [ρ_s^i]= ν n_s. The final ergotropy of the piston mode is [ρ_s^f]= ν [n_s+ (|v|^2-|u|^2)]. The ergotropy gain, or the work gain, is _gain= ν (|v|^2-|u|^2), which is maximized when |v|^2=1. § WEAK-COUPLING AMPLIFIER REGIME The Hamiltonian in Eq. (<ref>) holds only when the cold and the hot modes are not in the ground state; their probability of being in the ground state for a thermal distribution is p_0,0= (1/Z_β_c)(1/Z_β_h), which is the probability to have no transition from the initial state. Then the master equation (ME) for the combined signal-atom state associated with the hot bath mode is <cit.> ρ̇_h = g_h^2 |I_h,gi|^2 (n̅_h+1)([Sρ_h, S^†]+[S,ρ_h S^†]) + g_h^2 |I_h,ei|^2 n̅_h([S^†ρ_h, S]+[S^†,ρ_h S]), where S=b |g⟩⟨ e|, n̅_h is the mean number of quanta in the thermal state associated with the Hawking radiation, and |I_h,gi|^2= ∫_t_i ^t_f dt^' e^-i δ_ci t^'ϕ_h^⋆ (t^') ∫_t_i ^t_f dt^'' e^i δ_ci t^''ϕ_h (t^'') , |I_h,ei|^2= ∫_t_i ^t_f dt^' e^i δ_ci t^'ϕ_h (t^') ∫_t_i ^t_f dt^'' e^-i δ_ci t^''ϕ_h^⋆ (t^''), where δ_ci =(Ω_ci- ω_0). Upon tracing out the atom, we obtain for the signal mode s the ME ρ̇_s = g_h^2 [|I_h,gi|^2 (n̅_h+1) ρ_ee([bρ_s, b^†]+[b,ρ_s b^†]) + |I_h,ei|^2 n̅_hρ_gg([b^†ρ_s, b]+[b^†,ρ_s b]) ], where we have assumed for simplicity that |I_h,gi| = |I_h,ei| and ρ_ee/ρ_gg≈n̅_c/(n̅_c+1)= exp [-ħω/(k_B T_c)], T_c being the cold bath temperature. The resulting time evolution of the signal-mode mean photon number n_s is given by ṅ_s = - 2 g_h^2 |I_h,gi|^2 ((n̅_h+1) n_s ρ_ee - n̅_h (n_s+1) ρ_gg). For the Glauber-Sudarshan P-distribution of the signal state, i.e., ρ_s = ∫ P(α) |α⟩⟨α | d^2 α, one obtains the Fokker-Planck (FP) equation ∂/∂ t P(α) = -𝒢/2( ∂/∂αα + ∂/∂α^⋆α^⋆) P + 𝒟∂^2 P/∂α∂α^⋆, with 𝒢 = 2 g_h^2 |I_h,ai|^2 (n̅_h-n̅_c)/(2n̅_c+1), 𝒟 = 2 g_h^2 |I_h,ai|^2 n̅_h (n̅_c +1)/(2n̅_c + 1). Here 𝒢 describes the effective gain rate in the amplification regime and 𝒟 describes the diffusion rate of the process. An initial coherent state |α_0⟩ then evolves into P(α, t) = 1/(πσ^2 (t)) exp ( -|α - α_0 e^𝒢t/2|^2/σ^2 (t)), with σ^2 (t) = (𝒟/𝒢) (e^𝒢t -1).
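The Gaussian solution above is straightforward to check by direct simulation, since the FP equation is equivalent to the linear Langevin equation dα = (𝒢/2)α dt + √(𝒟) dW with complex white noise. The sketch below, with arbitrary illustrative values of 𝒢 and 𝒟, verifies the drift of the mean amplitude, |⟨α⟩| = |α_0|e^𝒢t/2, and the spread σ²(t) = (𝒟/𝒢)(e^𝒢t − 1).

```python
import numpy as np

rng = np.random.default_rng(2)

G, D = 0.5, 0.1                    # gain and diffusion rates (illustrative values)
alpha0, t_final = 2.0 + 0.0j, 3.0
n_steps, n_traj = 3000, 20000
dt = t_final / n_steps

alpha = np.full(n_traj, alpha0)
for _ in range(n_steps):
    # Complex Wiener increment with <|dW|^2> = dt (variance dt/2 per quadrature)
    dw = (rng.normal(size=n_traj) + 1j * rng.normal(size=n_traj)) * np.sqrt(dt / 2)
    alpha += 0.5 * G * alpha * dt + np.sqrt(D) * dw

print(abs(alpha.mean()), abs(alpha0) * np.exp(0.5 * G * t_final))   # mean drift
print(np.var(alpha.real) + np.var(alpha.imag),
      (D / G) * (np.exp(G * t_final) - 1.0))                        # spread
```

The sampled ergotropy then grows as ν|⟨α⟩|² = ν|α_0|²e^𝒢t, consistent with the expression for 𝒲 in the main text.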
http://arxiv.org/abs/2307.04728v1
20230710173804
Non-equilibrium attractor for non-linear stochastic dynamics
[ "A. Patrón", "B. Sánchez-Rey", "E. Trizac", "A. Prados" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
[email protected] Física Teórica, Universidad de Sevilla, Apartado de Correos 1065, E-41080 Sevilla, Spain [email protected] Departamento de Física Aplicada I, E.P.S., Universidad de Sevilla, Virgen de África 7, E-41011 Sevilla, Spain LPTMS, Université Paris-Saclay, CNRS, 91405, Orsay, France Ecole normale supérieure de Lyon, F-69364 Lyon, France [email protected] Física Teórica, Universidad de Sevilla, Apartado de Correos 1065, E-41080 Sevilla, Spain We study the dynamical behaviour of mesoscopic systems in contact with a thermal bath, described either via the non-linear Fokker-Planck equation for the probability distribution function at the ensemble level—or the corresponding non-linear Langevin equation at the trajectory level. Our focus is put on one-dimensional—or d-dimensional isotropic—systems in confining potentials, with detailed balance—fluctuation-dissipation thus holds, and the stationary probability distribution has the canonical form at the bath temperature. When quenching the bath temperature to low enough values, a far-from-equilibrium state emerges that rules the dynamics over a characteristic intermediate timescale. Such a long-lived state has a Dirac-delta probability distribution function and attracts all solutions over this intermediate timescale, in which the initial conditions are immaterial while the influence of the bath is still negligible. Numerical evidence and qualitative physical arguments suggest that the above picture extends to higher-dimensional systems, with anisotropy and interactions. Non-equilibrium attractor for non-linear stochastic dynamics A. Prados August 12, 2023 ============================================================ Stochastic processes are ubiquitous in physics. Systems of interest are usually not isolated but in contact with a much larger environment. What makes their dynamics stochastic is the interaction with the environment (thermal bath): the integration over its degrees of freedom entails that the “force”—understood in a generalised sense—acting on the system becomes effectively random <cit.>. It is in this approach, often called mesoscopic, that the Langevin equation emerges—see Ref. <cit.> for a recent review. More than a century ago, Langevin initiated the approach that bears his name, when studying Brownian motion <cit.>. This is still an active field of research today: current experimental techniques make it possible to confine the Brownian particles in a potential, the profile of which can be controlled <cit.>. In turn, shaping the potential makes it possible to control the dynamical evolution, allowing for optimising observables such as irreversible work <cit.> or escape times <cit.>, building smooth protocols that connect arbitrary states <cit.>, or precisely designing finite-time computations <cit.>. The relevance of the Langevin approach is not restricted to Brownian motion; it is employed in a wealth of physical contexts, in which the above general picture for stochastic dynamics applies. Examples abound, including astrophysics <cit.>, polymers <cit.>, laser-cooled atoms <cit.>, particle physics <cit.>, systems with negative temperatures <cit.>, or optical spectroscopy <cit.>, to name just a few. Interestingly, the analysis of experimental “noisy” data makes it possible to infer the underlying stochastic, Langevin-like, dynamical equations, not only in physics but also in neuroscience or biology <cit.>. 
Besides, since the early days of quantitative economics, related approaches making use of random walks have been employed <cit.>. In the long-time limit, systems evolving under stochastic dynamics typically relax to equilibrium at the bath temperature. The equilibrium state is thus a global attractor, reached from an arbitrary initial condition, of the system dynamics <cit.>. A relevant question is whether it is only the final equilibrium state that is independent of the initial preparation, or whether there appears an earlier global non-equilibrium attractor, already independent of the initial preparation. In the latter case, relaxation to equilibrium would proceed in two stages: first, the system would approach the universal non-equilibrium state and, second, this non-equilibrium state would tend to equilibrium. In this Letter, we show—under general assumptions—that there emerges such a universal non-equilibrium state for a wide class of systems in contact with a thermal bath, when quenched to low enough temperatures. Their dynamics is assumed to be Markovian and described by a non-linear Langevin equation. This state, which we term long-lived non-equilibrium state (LLNES), is a global attractor of the dynamics for an intermediate time scale, over which initial conditions are immaterial but the system is still far from equilibrium. In particular, the probability distribution function (pdf) features a Dirac-delta shape within the LLNES. For the sake of concreteness, we focus here on the physical, intuitive ideas that are behind the emergence of the LLNES in one-dimensional—or d-dimensional isotropic—systems; a more formal, mathematical approach is presented in the supplemental material <cit.>. Therein, we also provide numerical evidence on the existence of the LLNES for a more general situation, d-dimensional confining potentials—including anisotropy and interactions. Let us now consider a physical system with mesoscopic state described by r≡{x_1,…,x_d}. A prototypical example is a colloidal particle confined in a d-dimensional potential well. We assume the dynamics of r to be Markovian and governed by the following non-linear Fokker-Planck equation for the pdf P = P(r,t), ∂_t P= ∇_r·[ A(r)P +1/2 B^2(r)∇_rP]. We stress the fact that, in general, not only the “force” A(r) but also the diffusivity B^2(r) are non-linear functions of r. The dynamics of the system is stochastic due to its contact with a thermal bath at temperature T. We assume that detailed balance holds <cit.>, so the fluctuation-dissipation relation 2A(r)=β B^2(r) ∇ H(r) is verified, H(r) being the system's “Hamiltonian” [In certain contexts, H(r) would not be the Hamiltonian of the system but the function playing its role: e.g., for an overdamped Brownian particle, H(r) would be the confining potential.] and β=(k_BT)^-1. Therefore, the canonical distribution, proportional to e^-β H(r), is the stationary solution of the Fokker-Planck equation <cit.>.
Here, η(t) is the unit Gaussian white noise, ⟨η_i(t)⟩=0, ⟨η_i(t)η_j(t')⟩=δ_ijδ(t-t'), and the “multiplicative-noise” parameter α must be chosen in the interval [0,1] <cit.>[For each physical situation, the correct interpretation—typical ones are α=0 for Ito's, α=1/2 for Stratonovich's, α=1 for Klimontovich's—of the Langevin equation with multiplicative noise is dictated by physics, not by mathematics <cit.>. If B is constant, i.e., if the noise is additive, α becomes irrelevant.]. Now we consider a quench to a low temperature: the system is initially prepared at equilibrium at temperature T_i, and put in contact with a thermal bath at a much lower temperature T_f. In the subsequent relaxation to equilibrium at temperature T_f, there is a time regime in which noise is negligible: since H is independent of the temperature, fluctuation-dissipation (<ref>) entails that B^2(r)/|A(r)|∝ T_f≪ T_i. Therefore, terms containing B(r) in Eq. (<ref>) can be neglected and the Langevin equation reduces to the deterministic, noiseless equation ṙ=-A(r), which is independent of the parameter α in Eq. (<ref>). In what follows, we establish the conditions under which, for long enough times, the initial conditions are forgotten by the solution of Eq. (<ref>). To be concrete, a simple but physically relevant situation with radial symmetry, A(r)=A(r)r̂, r=|r|, r̂=r/r, is considered. The deterministic “force” A must be confining but otherwise arbitrary. This is indeed the case in the prototypical situation of a Brownian particle confined in an isotropic potential U, for which the Langevin equation reads ṙ=-γ^-1 U'(r)r̂+√(2D) η(t), where γ and D are the friction and diffusion coefficients, assumed to be position independent. The identifications H=U, A=γ^-1U'(r)r̂ and B=√(2D) (thus additive noise) in the general fluctuation-dissipation relation (<ref>) lead to the Einstein relation βγ D=1 [Still, this is not the only physical situation, e.g. one may also address the relaxation of the velocity of a colloidal particle due to the nonlinear drag force stemming from its interaction with the background fluid, considered later. Therein, the variable r would stand for the velocity of the particle.]. Note that, since A may change sign as r decreases, the potential may have several minima. From Eq. (<ref>), the time evolution for one trajectory starting from r_i is implicitly given by t=∫_r(t)^r_idr'/A(r'), r_i≡ r(t=0). Assuming that lim_r→+∞r^-1A(r)=+∞, i.e. A diverging faster than linearly for large r, we have t = ∫_r(t)^+∞dr'/A(r')-∫_r_i^+∞dr'/A(r') when the confining is stronger than harmonic at large distances. The first (second) term on the rhs of Eq. (<ref>) is the time needed to relax from a very large value of r, much larger than r_i, to the instantaneous position r(t) (to r_i). Let us assume that the initial temperature T_i is much larger than the final one T_f, implying the following timescale separation, t_1≡τ(T_i)≪ t ≪ t_2≡τ(T_f), where τ(T) is the relaxation time to equilibrium at temperature T. In this way, there appears an intermediate time regime in which the second term on the rhs of Eq. (<ref>) is negligible against the first, while noise is still irrelevant. Over the timescale in Eq. (<ref>), we thus get r(t)∼ r_LLNES(t), ∫_r_LLNES(t)^+∞dr/A(r)=t. The state r_LLNES(t) defined in Eq. (<ref>) is a non-equilibrium attractor of the dynamics of the system. We term it the long-lived non-equilibrium state (LLNES) [This terminology was already employed in Ref.
<cit.> for a specific form of A(r) in the context of non-linear Brownian motion.]. Note that t_1 and t_2 are thus determined by the conditions r_LLNES(t_1)=r_i and r_LLNES(t_2)=r_f, respectively. Over this far-from-equilibrium state, independent of initial conditions, the pdf is [Throughout the paper, we use the symbol ∼ with the meaning of “asymptotic to” <cit.>, i.e. f(x)∼ g(x) for x→ x_0 means that lim_x→ x_0f(x)/g(x)=1.] <cit.> P_LLNES(r,t)∼δ (r-r_LLNES(t)). The function r_LLNES(t) defined by Eq. (<ref>) depends on the specific form of the function A(r). However, we can introduce a scaled variable c such that its corresponding pdf is universal and time-independent, c≡r/r(t), P_LLNES(c,t)∼δ (c-1). We recall that, over the LLNES, r(t)=r_LLNES(t). Note that the terms containing B(r) in the Langevin equation (<ref>) eventually drive the system to equilibrium at T_f. In other words, the LLNES is “destroyed” for long enough times, when r_LLNES(t)=O(r_eq(T_f)), i.e. as t=O(t_2). We now apply the results presented here to two different physical situations. First, we consider the confined Brownian particle of Eq. (<ref>), particularised for the nonlinear potential U(r)=1/2k r^2+1/4λ r^4, λ>0. The condition λ>0 ensures that the potential is confining: A(r)=ar+br^3, a≡ k/γ, b≡λ/γ. Moreover, Eq. (<ref>) holds and we have the necessary timescale separation. We analyse the case k>0 to start with, in which the “force” A(r)>0 ∀ r≠ 0 and U(r) has only one minimum, at the origin. Later, we consider the case k<0, which corresponds to a “lemon-squeezer” potential with multiple minima at r=r_c≡√(|a|/b)=√(|k|/λ). For k>0, Eq. (<ref>) reduces to ṙ=-ar (1+r^2/r_c^2)r̂+√(2D) η(t). In this physical situation, there are two characteristic lengths, r_λ≡ (k_B T/λ)^1/4 and r_k≡ (k_B T/k)^1/2, which—aside from constants—correspondingly give the equilibrium lengths at high and low temperatures. In fact, it is useful for our analysis to introduce a dimensionless temperature T^*=k_B T λ/k^2=(r_k/r_λ)^4; high and low temperatures thus correspond to the regimes T^*≫ 1 and T^*≪ 1, respectively. Let us analyse the emergence of the LLNES in this specific situation. The particularisation of Eq. (<ref>) gives 2at= ln(1+r_c^2/r^2(t))-ln(1+r_c^2/r_i^2). For a high enough initial temperature T_i^*≫ 1, we estimate r_i by r_λ,i=(k_B T_i/λ)^1/4. There appears an intermediate time window over which r_i ≫ r(t) ≫ r_c and initial conditions are forgotten; specifically, r(t)∼ r_LLNES(t)=(2bt)^-1/2, (T_i^*)^-1/2≪ 2at≪ 1 . Note that r_LLNES(t) only depends on b=λ/γ, i.e. only on the behaviour of the potential at large distances. In order to derive Eq. (<ref>), it is only necessary to consider a high enough initial temperature; the role of the final temperature is to (possibly) limit the timescale over which the LLNES is observed. Noise is negligible as long as r_LLNES(t) is much larger than the equilibrium value at the final temperature, r_k,f=(k_B T_f/k)^1/2, which gives the condition 2at ≪ (T_f^*)^-1. If T_f^*=O(1) or larger, this restricts the LLNES in Eq. (<ref>) to the time window (T_i^*)^-1/2≪ 2at ≪ (T_f^*)^-1. If T_f^*≪ 1, the LLNES extends to longer times such that 2at=O(1), r(t) becomes of the order of r_c, and r_LLNES(t)=r_c (e^2at-1)^-1/2. Figure <ref> shows a set of stochastic trajectories for which the behaviours in Eqs. (<ref>) and (<ref>) are observed. We now study the case k<0, the “lemon-squeezer” potential with multiple minima at r=r_c [In the one-dimensional situation, the potential would be bistable, with two symmetric minima.]. The LLNES in Eq.
(<ref>), which only depends on the details of the potential at large r, is still present for T_i^*≫ 1; it is thus independent of the presence of other minima. Also, the LLNES extends to longer times if T_f^*≪ 1, but it is no longer given by Eq. (<ref>); instead, we have r_(t)=r_c(1-e^-2at). The system reaches equilibrium at r_c over this regime, with small thermal fluctuations <cit.>. Now we consider another relevant physical system: an isotropic fluid with non-linear drag force. Specifically, we investigate the stochastic evolution of N particles undergoing binary collisions and immersed in a background fluid acting as a thermal bath. For dilute enough systems, the velocity pdf P(v,t) obeys the Boltzmann-Fokker-Planck equation <cit.> ∂_t P=∇_v·[ζ(v)(v+k_B T/m∇_v)P]+J[P,P], where ζ(v) stands for the velocity-dependent drag coefficient and J[P,P] is the Boltzmann collision term, which is bilinear in P <cit.>. For low velocities, the drag force is usually linear in v, lim_v→ 0ζ(v)= ζ_0. For large velocities, the drag force may become non-linear in v: the dimensionless drag coefficient ζ^*≡ζ/ζ_0 thus depends on v, as is the case when the masses of the Brownian and background fluid particles are comparable <cit.>. If collisions among particles are elastic, this system tends to the canonical distribution with H(v) = mv^2/2, provided that A and B are such that Eq. (<ref>) holds. Since A(v) = ζ(v)v, we need B^2(v)=2 ζ(v) k_B T/m; noise is thus multiplicative. The kinetic temperature is T_(t)≡ m⟨v^2(t)⟩/(dk_B), which equals the bath temperature at equilibrium. Initially, the system is equilibrated at T_i, thus T_(t=0)=T_i, and the bath temperature is suddenly quenched to T_f≪ T_i. To be concrete, we restrict ourselves to drag coefficients with algebraic behaviour for large v, ζ^*(v)∼γ (v/v_,f)^n, where γ is the non-linearity parameter and v_,f is the thermal velocity at T_f, v_,f≡ (2 k_B T_f/m)^1/2. If n>1, there appears a timescale over which the non-linear drag dominates and both noise and collisions—even if they are inelastic—are negligible. Over this wide time window, initial conditions are forgotten and the LLNES emerges <cit.>, v_(t)/v_,f=(γζ_0n t)^-1/n, (T_f/T_i)^n/2≪ n γζ_0 t ≪ 1, in strong analogy with Eq. (<ref>). The kinetic temperature thus shows a slow non-exponential, algebraic, decay, T_(t)∝ t^-2/n, which governs the emergence of memory effects such as the Kovacs and Mpemba effects <cit.>. Figure <ref> shows the pdf of the scaled variable c, for the two specific examples of physical systems described above. The delta-peak structure is clearly observed, for one-, two-, and three-dimensional systems. For the non-linear fluid, the data shown correspond to n=2. In this Letter, we have analysed the dynamical behaviour of a wide class of physical systems, described by a non-linear Langevin (or Fokker-Planck) equation with detailed balance. When quenched to a low enough temperature, all these systems reach a universal long-lived non-equilibrium state, regardless of initial conditions. This state, which we have termed LLNES, is characterised by a Dirac-delta pdf. There are two main hypotheses for the emergence of the LLNES: (i) the non-linearity of the “force” in the Langevin equation and (ii) the separation of the initial temperature T_i from the final one T_f, T_i≫ T_f. A separation of time scales ensues, with the LLNES appearing in the intermediate window, where initial conditions are irrelevant and noise is negligible.
Under these quite general assumptions, our results are independent of both the nature of the noise (either additive or multiplicative) and the dimensionality of the system, as shown in Fig. <ref>. For the sake of simplicity, we have restricted the discussion to isotropic situations, in which our work proves the existence of the LLNES. It is always the form of the “force” at large distances that controls the emergence and shape of the LLNES, as illustrated by our analyses of the quartic potential and the non-linear fluid above. The effective reduction to one degree of freedom stemming from isotropy has allowed us to obtain analytical results for the emergence of the LLNES—see also <cit.>. Numerical evidence and qualitative, physical, arguments hint at the existence of the LLNES for more complex scenarios with several degrees of freedom, including anisotropy and interactions <cit.>. Rigorously proving the conditions under which the LLNES emerges in these more complex situations is a highly non-trivial problem that lies beyond the goals of the present work. Quasi-elastic one-dimensional granular systems have been shown to display Dirac-delta pdfs <cit.> resembling that of the LLNES. This result was derived from the inelastic Boltzmann equation, and therefore it cannot be considered as a particular case of the general result derived in this Letter—obtained within the Langevin framework. Still, the similarity of the observed pdfs suggests that it is worth investigating possible connections between these two intrinsically different physical situations. Also, testing the emergence of the LLNES in real experiments is an interesting prospect for future work. In particular, it seems worth exploring the relevance of the LLNES to control the time evolution of mesoscopic systems, like biomolecules or memory devices. In this regard, it must be stressed that the two specific examples considered here describe actual physical systems. Current techniques make it possible to control the shape of the potential confining a colloidal particle immersed in a fluid <cit.>, and the Langevin equation for the velocity with non-linear drag has been successfully employed to describe mixtures of ultracold atoms <cit.>. A. Patrón, B. Sánchez-Rey and A. Prados acknowledge financial support from Grant PID2021-122588NB-I00 funded by MCIN/AEI/10.13039/501100011033/ and by “ERDF A way of making Europe”. All the authors acknowledge financial support from Grant ProyExcel_00796 funded by Junta de Andalucía's PAIDI 2020 programme. A. Patrón acknowledges support from the FPU programme through Grant FPU2019-4110, and also additional support from the FPU programme through Grant EST22/00346, which funded his research stay at Univ. Paris-Saclay during autumn 2022. A. Prados also acknowledges the hospitality of LPTMS, which funded his stay at Univ. Paris-Saclay in June 2022.
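For readers who want to see the quench dynamics in action, the following minimal simulation sketch (illustrative parameters, not those used in the Letter) integrates the overdamped Langevin equation for the quartic potential with an Euler–Maruyama scheme and compares the mean radius with the predicted attractor r(t)∼(2bt)^(-1/2):

```python
import numpy as np

# Minimal Euler-Maruyama sketch of the quartic-potential quench (k > 0 case).
# All parameter values are illustrative, not taken from the Letter.
rng = np.random.default_rng(0)
gamma, k, lam = 1.0, 1.0, 1.0            # friction and potential parameters
a, b = k / gamma, lam / gamma            # radial force A(r) = a*r + b*r**3
kB = 1.0
Ti, Tf = 1.0e4, 1.0e-4                   # deep quench: T_i* >> 1 >> T_f*
D = kB * Tf / gamma                      # Einstein relation: beta*gamma*D = 1
dt, nsteps, ntraj, dim = 1.0e-6, 200_000, 100, 3

# initial condition: equilibrium-like at T_i, r_i ~ (kB*Ti/lam)**(1/4)
r = rng.normal(scale=(kB * Ti / lam) ** 0.25 / dim ** 0.5, size=(ntraj, dim))

for step in range(1, nsteps + 1):
    rad2 = np.sum(r * r, axis=1, keepdims=True)
    # drift -A(r)*rhat in vector form, plus additive thermal noise at T_f
    r += -(a + b * rad2) * r * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=r.shape)
    if step % 50_000 == 0:
        t = step * dt
        print(f"t={t:.3f}  mean radius={np.mean(np.linalg.norm(r, axis=1)):.3f}  "
              f"(2bt)^-1/2={(2.0 * b * t) ** -0.5:.3f}")
```

In the intermediate window (T_i^*)^(-1/2) ≪ 2at ≪ 1 the printed mean radius should track the attractor and become independent of the initial condition, while at much later times the noise terms restore equilibrium at T_f.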
http://arxiv.org/abs/2307.04542v1
20230710131729
Customizing Synthetic Data for Data-Free Student Learning
[ "Shiya Luo", "Defang Chen", "Can Wang" ]
cs.CV
[ "cs.CV" ]
Customizing Synthetic Data for Data-Free Student Learning Shiya Luo Zhejiang University Hangzhou, China [email protected] Defang Chen Zhejiang University Hangzhou, China [email protected] Can Wang Zhejiang University Hangzhou, China [email protected] August 12, 2023 =============================================================================================================================================================================================================== Data-free knowledge distillation (DFKD) aims to obtain a lightweight student model without original training data. Existing works generally synthesize data from the pre-trained teacher model to replace the original training data for student learning. To train the student model more effectively, the synthetic data shall be customized to the current student learning ability. However, this is ignored in the existing DFKD methods, which negatively affects the student training. To address this issue, we propose Customizing Synthetic Data for Data-Free Student Learning (CSD) in this paper, which achieves adaptive data synthesis using a self-supervised augmented auxiliary task to estimate the student learning ability. Specifically, data synthesis is dynamically adjusted to enlarge the cross entropy between the labels and the predictions from the self-supervised augmented task, thus generating hard samples for the student model. The experiments on various datasets and teacher-student models show the effectiveness of our proposed method. Code is available at: https://github.com/luoshiya/CSDhttps://github.com/luoshiya/CSD data-free knowledge distillation, self-supervision, model compression § INTRODUCTION In recent years, convolutional neural networks (CNNs) have achieved remarkable success in various applications <cit.> with over-parameterized architectures, but their expensive storage and computational costs make model deployment on mobile devices difficult. Therefore, knowledge distillation (KD) <cit.> comes into play to compress models by transferring dark knowledge from a well-trained cumbersome teacher model to a lightweight student model. The prevailing knowledge distillation methods <cit.> depend on a strong premise that the original data utilized to train the teacher model is directly accessible for student training. However, this is not always the case in practical scenarios where the data is not publicly shared due to privacy or intellectual property concerns, excessive data size, etc. Data-free knowledge distillation (DFKD) <cit.> is thus proposed to solve this problem. Existing DFKD methods generally divide each training round into two stages: data synthesis and knowledge transfer. Two different approaches are proposed in the data synthesis stage: model inversion inputs random Gaussian noise into the fixed teacher model and iteratively updates the input via back-propagation from the teacher model <cit.>; generative reconstruction utilizes a generator network to learn a mapping from the low-dimensional noise to the desired high-dimensional data manifold close to the original training data <cit.>. In the knowledge transfer stage, the synthetic data from the previous stage is used to train the student model with the regular knowledge distillation procedure. As training progresses, easy samples bring little new knowledge and contribute little to the student's learning.
The key to improving the student's learning ability is to provide it with hard samples during training, such that it can continuously acquire new knowledge. Some existing adversarial DFKD methods generate hard samples on which the student disagrees with the teacher by enlarging the divergence between their prediction distributions <cit.> (see Fig. <ref>). However, the teacher has not been trained on such synthetic samples, and thus its soft predictions for many samples are likely to be inaccurate. The student will experience minimal improvement, or even a decline, in its learning ability when attempting to imitate the teacher on those incorrect samples (as shown in Fig. <ref>). Furthermore, it is difficult to manually evaluate whether the teacher's soft predictions are correct. In this paper, we propose Customizing Synthetic Data for Data-Free Student Learning (CSD), which directly takes the current student learning ability as a reference to adaptively synthesize hard samples; the learning ability is estimated through a self-supervised augmented auxiliary task that learns the joint distribution of the classification task and the self-supervised rotation task. In this way, the capability of capturing semantic information can serve as a good indicator of the student learning ability, and the auxiliary task can effectively verify how well the student understands semantics <cit.>. An extra auxiliary classifier appended to the student feature extractor learns the self-supervised augmented auxiliary task in the knowledge transfer stage and then, acting as an evaluator, estimates the current student learning ability in the data synthesis stage by calculating the divergence between labels and predictions from the auxiliary task. In this way, we accurately generate hard samples relative to the current student learning ability by enlarging this divergence in an adversarial way. Different from the traditional adversarial objective <cit.>, we use the student model itself rather than the pre-trained teacher model to estimate the sample difficulty of the synthetic data (see Fig. <ref>), which is more reliable for student training and beneficial for improving student performance. As shown in Fig. <ref>, the student improves its learning ability with our hard samples and is not easily disturbed by the teacher's misinformation. Our contributions are summarized as follows: * We propose a novel method to dynamically generate hard samples based on the current learning ability of the student in the data-free knowledge distillation scenario. * An auxiliary classifier is used to learn a self-supervised augmented task, and also acts as an evaluator to estimate the student learning ability for hard data synthesis. * We conduct extensive experiments on various datasets and teacher-student model architectures. Experimental results confirm the effectiveness of our method. § PROPOSED METHOD The overview of our proposed CSD framework is shown in Fig. <ref>. The framework consists of a fixed pre-trained teacher, a generator, a student and an auxiliary classifier appended to the student feature extractor. The generator and the auxiliary classifier are trained in an adversarial manner. In the data synthesis stage, the generator explores hard samples based on the student learning ability estimated with the auxiliary classifier. In the knowledge transfer stage, the auxiliary classifier tries to improve its own evaluation ability. The two stages are executed alternately until convergence.
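To make the difference between the two difficulty criteria concrete, here is a toy sketch (illustrative tensors and shapes, not the authors' released code) contrasting the traditional teacher-student disagreement objective with a student-ability objective of the kind CSD uses:

```python
import torch
import torch.nn.functional as F

# Toy illustration of the two ways to define "hard" synthetic samples.
torch.manual_seed(0)
t_logits = torch.randn(8, 10)             # teacher predictions on synthetic x
s_logits = torch.randn(8, 10)             # student predictions on synthetic x
tau = 20.0

# (a) Traditional adversarial DFKD: maximize teacher-student disagreement.
kl = F.kl_div(F.log_softmax(s_logits / tau, dim=1),
              F.softmax(t_logits / tau, dim=1), reduction="batchmean")
loss_generator_traditional = -kl          # the generator ascends this KL

# (b) CSD-style: measure difficulty with the *student's* auxiliary task, so
# unreliable teacher predictions cannot mislead the synthesis stage.
aux_logits = torch.randn(8, 40)           # student aux head, K = N*M classes
aux_labels = torch.randint(0, 40, (8,))   # self-supervised augmented labels
loss_generator_csd = -F.cross_entropy(aux_logits, aux_labels)
print(loss_generator_traditional.item(), loss_generator_csd.item())
```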
§.§ Data Synthesis In the data synthesis stage, we follow CMI <cit.> to synthesize data x̃∈ℝ^H× W × C (H, W, C denote the height, width and channel number, respectively) from a pre-trained teacher model as the surrogate for original training data x. We jointly update the random noise vector z and the parameters θ_g of the generator 𝒢 to obtain x̃=𝒢(z) for n_g steps in each training round. The generator provides stronger regularization on pixels due to the shared parameters θ_g. Although the main purpose of our work is to synthesize hard data based on the current ability of the student itself, synthesizing data with the student alone may push the distribution of the synthetic data far away from the original training data, due to the lack of data prior constraints. The optimization objective of data synthesis consists of two components and is formulated as: min_z,θ_gℒ_narrow-αℒ_csd, where ℒ_narrow aims to narrow the gap between the synthetic data and the original training data with the help of the well-trained teacher model for alleviating outliers, and ℒ_csd estimates the learning ability of the student. We will elaborate these two terms later. Narrowing the Distribution Gap. To make synthetic data more realistic, we adopt the following optimization objective to narrow the gap between the distribution of synthetic data and original training data: ℒ_narrow = ℒ_cls + ℒ_bns, ℒ_cls represents a one-hot assumption: if the synthetic data have the same distribution as that of the original training data, the prediction of the synthetic data by the teacher model would be like a one-hot vector <cit.>. Therefore, ℒ_cls is calculated as the cross entropy between the teacher prediction 𝒯(x̃) and the pre-defined label ỹ: ℒ_cls=CrossEntropy(ỹ, 𝒯(x̃)), ℒ_bns is a constraint that effectively utilizes statistics stored in the batch normalization (BN) layers of the teacher as data prior information <cit.>. It employs the running mean μ_l and running variance σ_l^2 of the l-th BN layer as feature statistics of original training data. ℒ_bns is then calculated as the l2-norm distance between the feature statistics of synthetic data x̃ and original training data: ℒ_bns=∑_l(‖μ̃_l(x̃)-μ_l‖_2+‖σ̃_l^2(x̃)-σ_l^2‖_2), where μ̃_l(x̃) and σ̃_l^2(x̃) are the mean and variance of the feature maps at the l-th teacher layer, respectively. Customizing Synthetic Data for the Student. In each training round, it is necessary to synthesize data adaptively according to the current student learning ability, so as to prevent the student from repeatedly learning oversimple samples. To quantify learning ability, we consider that if a model can understand the semantic information of an image well, it has a strong learning ability. Specifically, we adopt a simple self-supervised task by first rotating each image at different angles and then forcing the model to identify which angle each image comes from. As illustrated in <cit.>, the model can effectively perform the rotation recognition task only if it first learns to recognize the object categories and then the semantic parts in the image. But using the rotation task alone to estimate learning ability is not enough. For example, an image of “6” can be either the digit “9” rotated by 180^∘ or the digit “6” rotated by 0^∘. Inspired by <cit.>, we therefore combine the original classification task and the self-supervised rotation task into a unified task, named the self-supervised augmented task, which forces the model to identify the angle as well as the category, eliminating such incorrect estimations.
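To make the distribution-narrowing term concrete, ℒ_bns above can be sketched in a few lines of PyTorch; the tiny teacher network below is a stand-in, so the snippet stays self-contained and runnable:

```python
import torch
import torch.nn as nn

# Minimal sketch of the BN-statistics prior L_bns: match the batch statistics
# of synthetic images to the running statistics stored in the teacher's BN
# layers. The tiny teacher here is only a placeholder.
teacher = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16),
                        nn.ReLU(), nn.Conv2d(16, 32, 3, padding=1),
                        nn.BatchNorm2d(32), nn.ReLU()).eval()  # eval: frozen stats

bns_terms = []
def bn_hook(module, inputs, output):
    feat = inputs[0]                                   # features entering the BN layer
    mu = feat.mean(dim=(0, 2, 3))
    var = feat.var(dim=(0, 2, 3), unbiased=False)
    bns_terms.append(torch.norm(mu - module.running_mean, 2)
                     + torch.norm(var - module.running_var, 2))

for m in teacher.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.register_forward_hook(bn_hook)

x_syn = torch.randn(8, 3, 32, 32, requires_grad=True)  # synthetic batch
bns_terms.clear()
teacher(x_syn)
loss_bns = sum(bns_terms)                              # the L_bns constraint
loss_bns.backward()                                    # gradients flow back to x_syn
print(loss_bns.item())
```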
We consider an N-way classification task and an M-way self-supervised rotation task. The CNN student model consists of two components: the feature extractor Φ:x̃→ℝ^d and the classifier h:ℝ^d→ℝ^N, i.e., 𝒮(x̃)=h(Φ(x̃)). Here d denotes the feature dimension. We attach an auxiliary classifier c:ℝ^d→ℝ^K with parameters θ_c behind the feature extractor, where K=N*M represents the number of categories for the self-supervised augmented task. ℒ_csd is calculated as follows: ℒ_csd = CrossEntropy(k, c(Φ(trans(x̃)))), where trans(·) is the rotation operation and k is the label of the rotated version of the synthetic data x̃ in the self-supervised augmented task. For example, if the category of x̃ in the original classification task is n and the category of its rotated version in the self-supervised rotation task is m, then the category in the self-supervised augmented task is n*M+m. By enlarging ℒ_csd, we generate hard samples on which the student has difficulty understanding semantics. §.§ Knowledge Transfer In the knowledge transfer stage, the main purpose is to encourage the student model to mimic the behaviors of the teacher model. The vanilla KD <cit.> matches the final prediction distributions of the teacher and student models by calculating the Kullback-Leibler (KL) divergence between their outputs: ℒ_kd = KL(σ(𝒯(x̃)/τ), σ(𝒮(x̃)/τ)), where σ(·) is the softmax function and τ is a hyper-parameter to soften the distribution. We set τ to 20 throughout all experiments for fair comparison with CMI <cit.>. Besides prediction distributions, feature maps can also be used as valuable knowledge to effectively guide the student <cit.>. We define the mean-square-error (MSE) loss between teacher feature maps F_t∈ℝ^H_t*W_t*C_t and student feature maps F_s∈ℝ^H_s*W_s*C_s from the last layer as: ℒ_fea = MSE(F_t, r(F_s)), where r(·) is a projection to align the dimension of the feature maps. The student is trained for n_s steps in each training round and optimized by: min_θ_sℒ_ce+ℒ_kd+βℒ_fea, where β is a hyper-parameter to balance the three loss terms, and ℒ_ce=CrossEntropy(ỹ,𝒮(x̃)) is the regular loss of the original classification task, i.e., the cross entropy between student outputs and pre-defined labels. Besides the student training, the auxiliary classifier is also separately trained with the following loss to improve its own evaluation capability and better help the data synthesis stage: min_θ_cℒ_csd. §.§ Training Procedure The two-stage training procedure is summarized in Algorithm <ref>. In the data synthesis stage, the random noise z and the generator 𝒢 are first trained for n_g steps. Then we append the new synthetic data to an image bank to prevent catastrophic forgetting <cit.>. In the knowledge transfer stage, we sample data from the image bank and separately train the student 𝒮 and the auxiliary classifier c for n_s steps. § EXPERIMENTS Datasets and models. We conduct experiments on the SVHN <cit.>, CIFAR-10 and CIFAR-100 <cit.> datasets, following a similar training setting as <cit.>. For all datasets, various models are used, including ResNet <cit.>, WRN <cit.>, VGG <cit.> and MobileNet <cit.>. The generator architecture is the same as <cit.>. Training details. For all datasets, to prevent the student from overfitting to data generated by early training rounds <cit.>, we first synthesize some data to initialize the image bank by removing ℒ_csd and running 400 synthesis batches, each containing 200 samples. We train for 100 rounds (epochs) in total.
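Before the remaining training details, here is a minimal runnable sketch of the augmented label k = n*M + m and of ℒ_csd defined above; the flat linear head stands in for the student feature extractor Φ plus the auxiliary classifier c:

```python
import torch
import torch.nn.functional as F

# Each synthetic image is rotated by 0/90/180/270 degrees (M = 4) and the
# auxiliary classifier must predict the joint label k = n*M + m.
def augmented_batch(x, y, M=4):
    """x: (B,C,H,W) synthetic images, y: (B,) class labels in [0, N)."""
    xs, ks = [], []
    for m in range(M):
        xs.append(torch.rot90(x, m, dims=(2, 3)))  # rotation transform trans(.)
        ks.append(y * M + m)                       # joint label k = n*M + m
    return torch.cat(xs), torch.cat(ks)

def csd_loss(aux_head, x, y, M=4):
    x_rot, k = augmented_batch(x, y, M)
    return F.cross_entropy(aux_head(x_rot), k)

# toy usage with a stand-in "feature extractor + auxiliary classifier"
N, M = 10, 4
aux = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, N * M))
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, N, (8,))
print(csd_loss(aux, x, y, M).item())  # generator ascends this; classifier descends
```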
In the data synthesis stage, the random noise vector and the generator are updated using the Adam optimizer with a 1e-3 learning rate. We synthesize 200 images in each step and repeat for n_g=500 steps. The hyper-parameter α is set to 10. In the knowledge transfer stage, the student and the auxiliary classifier are updated using the SGD optimizer with 0.1 learning rate, 0.9 momentum and 1e-4 weight decay, and we adopt cosine annealing for the learning rate decay. We sample 128 images from the image bank in each step and repeat for n_s=2000 steps. The hyper-parameter β is set to 30. We set the temperature τ to 20. Test accuracy is used to evaluate the proposed method. We run all experiments three times and report the means. More implementation details and results can be found in the appendix. §.§ Comparison with DFKD methods We compare with four representative DFKD methods on five groups of teacher-student models, including three homogeneous and two heterogeneous architecture combinations. DAFL <cit.> and ZSKT <cit.> are generator-based methods. ADI <cit.> and CMI <cit.> are inversion-based methods. Table <ref> shows that our proposed CSD outperforms all other methods. We also observe that, except for CMI, the other comparison methods perform poorly on heterogeneous combinations and more complex datasets. For example, in the case of “WRN-40-2 & VGG8" on CIFAR-100, the test accuracy of DAFL is only 25.24%, which does not even reach half the accuracy of the student trained on the original data (68.76%). In contrast, our proposed CSD is robust on different datasets and teacher-student combinations. §.§ Effect of Our Proposed Adversarial Loss We conduct an ablation study on CIFAR-10 and CIFAR-100 to explore whether our proposed adversarial loss L_csd can help improve the student performance. As shown in Table <ref>, in the case of Baseline, i.e., removing the adversarial loss (Equation <ref>), the accuracy drops by 3.62% on CIFAR-10 (from 90.50% to 86.88%) and 3.29% on CIFAR-100 (from 60.88% to 57.59%), which demonstrates the effectiveness of our proposed ℒ_csd. To further demonstrate the superiority of our method, we compare with two alternative adversarial strategies. The first one is the traditional adversarial manner of previous work <cit.>, whose adversarial loss calculates the divergence between the predictions of the teacher and the student. We replace ℒ_csd with the traditional adversarial loss L_adv = KL(σ(𝒯(x̃)/τ), σ(𝒮(x̃)/τ)) and find that it yields a slight improvement of 0.65% (from 86.88% to 87.57%) over Baseline on CIFAR-10. Surprisingly, we observe that it even results in a large drop of 4.09% (from 57.59% to 53.5%) on the more complex CIFAR-100 dataset. This indicates that estimating the sample difficulty with teacher predictions is likely to be unreliable, which would enlarge the negative effect in the case of teacher misdirection and thus weaken the student performance. Additionally, we plot the learning curves of the student trained by different strategies. In Fig. <ref>, it is clear that ℒ_adv causes very large accuracy fluctuations across training rounds (epochs), while our CSD makes the model converge faster and more stably. The second alternative strategy is to use only the rotation task as the final task to quantify the student learning ability, without including the original classification task. So we replace ℒ_csd with the self-supervised rotation loss ℒ_rotation = CrossEntropy(m,c(Φ(trans(x̃)))), where m is the label of the synthetic data in the rotation task.
From Table <ref>, this brings significant performance improvements on both CIFAR-10 and CIFAR-100 compared to the traditional adversarial manner, which shows the superiority of synthesizing hard samples according to the current student learning ability. However, the rotation task alone may destroy the original visual semantic information on some samples (such as “6” vs “9”) and result in inaccurate ability estimation. By combining the original classification task and the self-supervised rotation task, our CSD further improves the model performance. §.§ Auxiliary Classifier Analysis Next, we explore how the structure and training strategy of the auxiliary classifier affect the final student performance. To study the effect of the auxiliary classifier structure, we attach different numbers of fully-connected layers (from 1 to 3) behind the feature extractor. In Fig. <ref>, only one fully-connected layer even has a negative impact, which reduces the student performance on CIFAR-10 and CIFAR-100 by about 3% and 5% compared to the Baseline (without ℒ_csd), while two or three fully-connected layers achieve similarly superior performance. We conjecture that multiple layers can effectively filter out noise in the feature representations to accurately estimate the student ability. Therefore, we adopt two fully-connected layers as the auxiliary classifier for all experiments, to trade off effectiveness against complexity. To study the effect of the training strategy during the knowledge transfer stage, we conduct experiments with two different training strategies: joint training and separate training. (1) Joint training updates the parameters of the student and the auxiliary classifier simultaneously at each step; that is, it changes lines 17 and 18 of Algorithm <ref> to θ_s←θ_s-ξ∇_s(ℒ_KT+ℒ_csd) and θ_c←θ_c-ξ∇_c(ℒ_KT+ℒ_csd). This strategy requires the student to learn the self-supervised augmented task together with the original classification task. (2) Separate training is exactly our adopted strategy for CSD. At each step, we update the student parameters first and then fix them and turn to train the auxiliary classifier. Table <ref> demonstrates that separate training performs better. We conjecture that the additional self-supervised auxiliary task might distract the student from the main classification task. § CONCLUSION In data-free knowledge distillation, the student model itself can act as a key contributor to synthesizing more valuable data, while this point was largely overlooked previously. In this paper, we utilize a self-supervised augmented task to accurately estimate the current student learning ability in each training round, so as to synthesize more valuable data rather than oversimple synthetic data. Extensive experiments are conducted on three popular datasets and various groups of teacher-student models to evaluate the performance of our proposed method, and the results demonstrate the effectiveness of our proposed CSD. A potential direction for future work is to explore how to apply the popular diffusion models to synthesize samples for data-free knowledge distillation <cit.>. § APPENDIX §.§ Experimental Details §.§.§ Datasets We evaluate our proposed CSD on three public datasets for the classification task: SVHN, CIFAR-10 and CIFAR-100. The details of these datasets are listed as follows: * SVHN <cit.>. SVHN is a dataset of street view house numbers collected by Google, and the size of each image is 32×32.
It consists of over 600,000 labeled images, including 73,257 training images, 26,032 testing images and 531,131 additional training images. * CIFAR-10 <cit.>. CIFAR-10 is a dataset of 32×32 colored images. It consists of 60,000 labeled images from 10 categories. Each category contains 6,000 images, which are divided into 5,000 and 1,000 for training and testing, respectively. * CIFAR-100 <cit.>. CIFAR-100 is similar to CIFAR-10 but more challenging, and consists of 100 categories. Each category contains 500 training images and 100 testing images. Note that the training set is only utilized for teacher training and is unseen for data-free knowledge distillation. However, the testing set is still used for assessment. §.§.§ Model Architectures For all datasets, four network types are used in teacher-student models: ResNet <cit.>, WRN <cit.>, VGG <cit.> and MobileNet-V2 <cit.>. The number behind “VGG" and “ResNet" denotes the depth of the network. “WRN-n-k" denotes a residual network with depth n and widening factor k. We use the same generator architecture as the previous work <cit.>, which is detailed in Table <ref>. We set the dimension of the random noise vector to 256. §.§.§ Baseline We compare with four representative data-free knowledge distillation methods: two generator-based methods (DAFL and ZSKT) and two inversion-based methods (ADI and CMI). The details of these compared methods are listed as follows: * DAFL <cit.>. DAFL is a generator-based DFKD method that introduces a one-hot loss, an activation loss and an information entropy loss from the teacher feedback as constraints to generate data close to the original training data. * ZSKT <cit.>. ZSKT is another generator-based DFKD method that first introduces adversarial distillation. It generates hard samples on which the student poorly matches the teacher, i.e., maximizing the KL divergence between their predictions, and then uses these hard samples to minimize the KL divergence in order to train the student. * ADI <cit.>. ADI is an inversion-based DFKD method that first proposes to utilize statistics stored in batch normalization layers of the teacher as image prior information. * CMI <cit.>. CMI is another inversion-based DFKD method that mainly addresses the mode collapse issue. It introduces a contrastive learning objective to encourage each sample to distinguish itself from others for sample diversity. §.§ Visualization We visualize synthetic images of our CSD from different training epochs in Figure <ref>. We observe that images from early training epochs are more visually discernible than images from later training epochs, which indicates that as the number of training epochs increases, the student learning ability gradually becomes stronger, leading to more difficult synthetic images. Additionally, we plot the learning curves of the auxiliary classifier during knowledge transfer in Fig. <ref>. §.§ Sensitivity Analysis To study how the hyper-parameter α affects the final student performance, we plot student accuracy curves on CIFAR-100 for WRN-40-2 & WRN-16-1 with α ranging from 2 to 20 at equal intervals of 2. From Fig. <ref>, we find that our CSD outperforms the best competitor (CMI) for all values of α. §.§ RELATED WORK §.§.§ Data-Driven Knowledge Distillation Knowledge distillation (KD) is proposed to solve the model compression problem by distilling knowledge from a cumbersome model (teacher) into a less-parameterized model (student).
The vanilla KD <cit.> takes predictions from the last layer as the teacher knowledge to guide the student training. Besides predictions, many subsequent works mine the knowledge in the output of intermediate layers to supervise the training of the student. The intermediate supervision can be formed by feature maps <cit.>, attention maps <cit.> or feature representations <cit.>. There are also some works on transferring knowledge in the relationships between different samples or layers <cit.>. All the above-mentioned methods are based on the premise that the original training data is available, while our proposed method addresses the more challenging scenario with no original data. §.§ Data-Free Knowledge Distillation Data-free knowledge distillation (DFKD) deals with transferring knowledge without access to the original training data. A straightforward idea is to synthesize a substitute for the original data for knowledge transfer. The approaches of data synthesis can be roughly categorized into two classes: inversion-based and generator-based approaches. Inversion-based approaches input random Gaussian noise into the fixed teacher and update the input iteratively via back-propagation until certain constraints are met <cit.>. ADI <cit.> proposes to leverage information stored in the batch normalization layers of the teacher to narrow the gap between synthetic data and original data. CMI <cit.> introduces a contrastive learning objective to address the mode collapse issue and thus ensure sample diversity. FastDFKD <cit.> introduces a meta-synthesizer to accelerate the data synthesis process and achieves a 100× speed-up. Generator-based approaches adopt a learnable generator to synthesize data <cit.>. DAFL <cit.> introduces a one-hot loss, an activation loss and an information entropy loss as the objective of synthesizing data, which are calculated according to the teacher output. PRE-DFKD <cit.> designs a Variational Autoencoder (VAE) to replay synthetic samples for preventing catastrophic forgetting without storing any data. Adversarial Distillation <cit.> focuses on synthesizing hard data by enlarging the divergence between the predictions of the teacher and the student, so as to narrow the information gap between the teacher and the student. However, none of the above methods properly takes into account the student's current ability during data synthesis, which may lead to oversimple samples and thus limit the final student performance.
http://arxiv.org/abs/2307.05920v1
20230712051910
Unified Medical Image-Text-Label Contrastive Learning With Continuous Prompt
[ "Yuhao Wang" ]
eess.IV
[ "eess.IV", "cs.CV", "cs.LG" ]
Beijing University of Posts and Telecommunications [email protected] Unified Medical Image-Text-Label Contrastive Learning With Continuous Prompt Yuhao Wang1 ============================================================================ Contrastive Language-Image Pre-training (CLIP) <cit.> can leverage large datasets of unlabeled Image-Text pairs and has demonstrated impressive performance in various downstream tasks. Given that annotating medical data is time-consuming and laborious, Image-Text Pre-training has promising applications in exploiting large-scale medical image and radiology report datasets. However, medical Image-Text Pre-training faces several challenges, as follows: (1) Due to privacy concerns, the amount of available medical data is relatively small compared to natural data, leading to weaker generalization ability of the model. (2) Medical images are highly similar with only fine-grained differences in subtleties, resulting in a large number of false-negative sample pairs in contrastive learning. (3) Hand-crafted prompts usually differ from natural medical image reports, and subtle changes in wording can lead to significant differences in performance. In this paper, we propose a unified Image-Text-Label contrastive learning framework based on continuous prompts, with three main contributions. First, we unify the image, text, and label data, which greatly expands the training data that the model can utilize. Second, we address the issue of data diversity and the impact of hand-crafted prompts on model performance by introducing continuous implicit prompts. Lastly, we propose an Image-Text-Label contrastive training scheme to mitigate the problem of too many false-negative samples. We demonstrate through extensive experiments that the Unified Medical Contrastive Learning (UMCL) framework exhibits excellent performance on several downstream tasks. § INTRODUCTION Vision-Language models <cit.> aim to learn generic visual representations through Image-Text pairs. With the development of multimodal healthcare AI <cit.>, increasing amounts of multimodal healthcare data are becoming available, including images, text, and other modalities. These massive amounts of unlabeled data provide a good foundation for the application of self-supervised learning methods in healthcare. One such method is Contrastive Language-Image Pre-training (CLIP) <cit.>, which can be pre-trained using a large number of unlabeled Image-Text pairs. CLIP has demonstrated impressive performance in a variety of downstream tasks, e.g. zero-shot classification and cross-modal retrieval. In the medical field, where data annotation is often time-consuming and laborious <cit.>, the superior performance of Image-Text Pre-training in zero-shot classification can enable the diagnosis of rare diseases with zero or few shots. However, existing Image-Text Pre-training has several issues in the medical field: 1. Medical image reports contain rich patient information, which may result in patient privacy leakage; as a consequence, the existing medical image-report pairs are too few to provide Image-Text Pre-training models with the same powerful generalization ability that natural images do. 2. Hand-crafted prompts usually differ from natural medical image reports, and subtle changes in wording can lead to significant differences in performance. 3.
Medical images are usually highly similar, differing only in subtleties, and there are often inextricable links between unpaired images and reports, which leads to a large number of false-negative samples in the traditional Pre-training paradigm. For all the reasons above, only a few studies have been conducted on vision-language pre-training in the medical field <cit.>. <cit.> adopts a cross-supervised paradigm combining contrastive learning and image captioning to equip the image encoder with generic representation capabilities. <cit.> borrows the idea of MAE to perform image-text Pre-training via cross-modal masked reconstruction, so that the trained model can support MedQA. However, none of these image-language Pre-training methods supports zero-shot inference as flexibly as the CLIP Pre-training paradigm. Closest to the CLIP Pre-training paradigm, ConVIRT <cit.> pre-trains by bidirectional contrast between image and text targets, using paired text data to compute separate InfoNCE <cit.> losses, while GLoRIA <cit.> proposes an attention-based framework that compares image sub-regions and words in paired reports to learn global and local representations. To address the existing issues, we propose a unified Image-Text-Label Pre-training framework based on continuous prompts. Firstly, by constructing a prompt, we map the Image-Label dataset and the Image-Text dataset into the same implicit space. This greatly expands the number of datasets that can be used for Pre-training, enabling multiple sources and types of datasets to be trained in a unified framework. By using different types of supervision signals, the model enjoys both the excellent generalization of Image-Text Pre-training and the discriminative representations needed for robust transfer learning. Secondly, Image-Text-Label contrastive training is proposed to solve the problem of false-negative samples in the contrastive learning process. Finally, we introduce continuous prompts, which effectively avoid the performance variations brought by hand-crafted prompts and bridge the gap between hand-crafted prompts and natural medical image reports. This improves zero-shot inference. Extensive experiments show that our unified Image-Text-Label Pre-training framework can effectively exploit both types of data and achieve excellent performance in multiple downstream tasks, such as zero-shot inference, transfer learning, and cross-modal retrieval. § METHOD Our proposed UMCL model, as shown in Fig. 1, utilizes continuous prompts to construct corresponding prompts on Image-Label data. This allows disease labels to be mapped to a unified feature space during Pre-training. To address the issue of false negatives in the Image-Text Pre-training model, our proposed Image-Text-Label contrastive training can effectively leverage the specific representational information in the Image-Label data, as well as the generic representational information in the Image-Text pairs. This enables the learned model to achieve superior performance in zero-shot classification and transfer learning. §.§ Vision and Text Encoder For the image encoder, we utilize the current state-of-the-art image encoder architecture, the Swin Transformer <cit.>, to encode the image as a visual vector with a fixed dimension. For the text encoder, we adopt the previously established approach of using BioClinicalBERT <cit.> and encode the corresponding text as a text vector with the same dimension as the visual vector.
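A minimal sketch of this dual-encoder layout is given below; tiny linear stand-ins replace the Swin Transformer and BioClinicalBERT so that the snippet stays self-contained:

```python
import torch
import torch.nn as nn

# Schematic dual encoder: both modalities are projected to a shared dimension
# so their embeddings can be compared by a dot product. Stand-in modules only.
class DualEncoder(nn.Module):
    def __init__(self, img_dim=128, txt_dim=96, shared_dim=64):
        super().__init__()
        self.image_encoder = nn.Linear(img_dim, shared_dim)  # Swin Transformer in the paper
        self.text_encoder = nn.Linear(txt_dim, shared_dim)   # BioClinicalBERT in the paper

    def forward(self, img_feat, txt_feat):
        v = nn.functional.normalize(self.image_encoder(img_feat), dim=-1)
        t = nn.functional.normalize(self.text_encoder(txt_feat), dim=-1)
        return v, t

model = DualEncoder()
v, t = model(torch.randn(4, 128), torch.randn(4, 96))
print((v @ t.T).shape)  # 4x4 image-text similarity matrix
```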
§.§ Unified Image-Text-Label Pre-training with continuous prompt Inspired by <cit.>, to address the issue of insufficient data for the Image-Text Pre-training model, we incorporate common Image-Label datasets into model training. We unify the Image-Text and Image-Label datasets by constructing a prompt that maps labels into the contrastive learning paradigm. However, natural radiology reports and prompts constructed from labels are two types of sentences originating from different data sources, so we differentiate them at the input level. Specifically, for the Image-Label data, we first follow the disease description templates proposed in GLoRIA <cit.> to create a textual description prompt (discrete prompt) for the disease label. These templates were reviewed by several radiologists and contain a reasonable and rich description of the disease, lesion condition, severity, and location, which can reduce the difference between image labels and natural medical image reports to some extent. Subsequently, inspired by <cit.>, we introduce continuous prompts for the disease labels and characterize the Image-Label dataset with this continuous prompt approach. A continuous prompt is a set of learnable vectors inserted into the generated text description which, unlike hand-crafted prompts, need not be bound to an explicit vocabulary but uses uniform implicit vectors. Continuous prompts are trained end-to-end, with their parameters updated during the Pre-training process. Our framework unifies Image-Text-Label Pre-training, and the continuous prompt can effectively capture the commonalities and differences between the labels and the original medical image reports, bridging the gap for the Image-Label data in the downstream task of zero-shot classification with the Image-Text Pre-training model. The prompt can be expressed as follows: T=[V]_1[ V]_2 …[V]_M[CLASS][F_ds(CLASS)] where [V]_m (m ∈{1, …, M}) is a vector with the same dimension as word embeddings, and M is a hyper-parameter specifying the number of context tokens. F_ds denotes the discrete prompt construction, which retrieves several common hand-crafted sentences by class name. For the Image-Text data, we adopt the same preprocessing pipeline as CLIP. §.§ Image-Text-Label contrastive Training Traditional Image-Text Pre-training models typically encounter the problem of excessive false negatives. This problem arises because image reports from different medical images are unlikely to be completely different. As a result, the effectiveness of the original CLIP model, applied directly to medical image-report datasets, is limited, as the model often forces the separation of otherwise semantically consistent image reports. Our Image-Text-Label Pre-training framework effectively introduces the disease labels of images, which allows the consistency between images and the corresponding prompts to be modeled at the semantic level. During the model training process, our continuous prompts are automatically optimized so that the prompts constructed from image labels end up highly close to natural medical image reports.
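A plausible CoOp-style implementation of the prompt T=[V]_1…[V]_M[CLASS][F_ds(CLASS)] is sketched below; the embedding sizes and the toy inputs are illustrative assumptions, not the paper's actual pipeline:

```python
import torch
import torch.nn as nn

# Sketch of the continuous prompt: M learnable context vectors prepended to
# the embedded class name and the embedded hand-crafted sentence F_ds(CLASS).
class ContinuousPrompt(nn.Module):
    def __init__(self, n_ctx=32, embed_dim=512):
        super().__init__()
        # context vectors shared across classes, trained end-to-end
        self.ctx = nn.Parameter(torch.randn(n_ctx, embed_dim) * 0.02)

    def forward(self, class_emb, discrete_emb):
        """class_emb: (L1,D) embedded class name; discrete_emb: (L2,D)
        embedding of the hand-crafted sentence F_ds(CLASS)."""
        return torch.cat([self.ctx, class_emb, discrete_emb], dim=0)

prompt = ContinuousPrompt()
class_emb = torch.randn(2, 512)      # e.g. tokens of "pleural effusion"
discrete_emb = torch.randn(12, 512)  # tokens of a GLoRIA-style description
tokens = prompt(class_emb, discrete_emb)
print(tokens.shape)                  # (32 + 2 + 12, 512), fed to the text encoder
```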
For the Image-Label dataset, the similarity score between (possibly unpaired) images and prompts constructed from disease labels is calculated by: y_i,j=𝐥_i^⊤𝐥_j/(‖𝐥_i‖·‖𝐥_j‖) where y_i,j denotes the similarity score between image i and the prompt constructed from the disease labels of sample j, and 𝐥 denotes the multi-hot vector generated by the disease labels, e.g. ['consolidation','lung opacity'] is encoded as [0,1,…,1,0]. The predicted similarity score is obtained from the embeddings: s_i,j=𝐯_i·𝐭_j^⊤ where 𝐯_i and 𝐭_j represent the image embedding and text embedding. These scores are L2-normalized: s̅_i j=s_i j/√(∑_i=1^N ∑_j=1^N s_i j^2) s̅ indicates the medical semantic similarity. With this L2 normalization of the predicted similarity scores, the Image-Text-Label contrastive training loss can be formulated as follows: ℒ=-1/N_batch∑_i=1^N_batch∑_j=1^N_batch y_i jlogs̅_i j For the Image-Text dataset, due to the lack of a semantic similarity index between different samples, we assign a similarity label of 1 to paired samples and 0 to unpaired samples. More specifically, we discard the softmax layer of the cross-entropy loss and utilize L2 normalization, which prevents the original contrastive loss from forcibly maximizing the distance between unpaired samples when faced with Image-Text datasets lacking semantic similarity labels during model training. For the Image-Text data, the loss thus optimizes only the paired Image-Text samples and ignores the unpaired ones, because no structured disease labels (and hence no similarity scores) are available. For the Image-Label data, the loss function effectively constrains the similarity between the image and the corresponding prompt through the disease labels. In this way, the loss function can effectively unify the different Pre-training data sources in the Image-Text-Label dataset, allowing the model to be equipped with more discriminative feature representations for downstream transfer learning tasks and to learn generic features. § EXPERIMENT §.§ Datasets and Implementation details §.§.§ Pre-train datasets CheXpert <cit.>: CheXpert is a publicly available Pre-training dataset developed by the Stanford University School of Medicine. This dataset contains a vast collection of 224,316 chest X-rays obtained from 65,240 patients. It includes labeled information from free-text reports describing the presence of 14 common chest diseases and findings in the images, such as pneumonia, nodules, cardiopulmonary enlargement, and more. MIMIC-CXR <cit.>: MIMIC-CXR (Medical Information Mart for Intensive Care - Chest X-ray) is another Pre-training dataset containing 227,835 chest X-ray images, each of which corresponds to a text report, collected from US medical institutions. §.§.§ Evaluation Datasets CheXpert-5x200: we follow the configuration of GLoRIA <cit.> and sample a multi-class classification dataset from the testing split. The classes of CheXpert-5x200 include Atelectasis, Cardiomegaly, Consolidation, Edema, and Pleural Effusion, and each class contains 200 positive samples. MIMIC-5x200: For evaluation, we also sample a MIMIC-5x200 dataset for the same five tasks above. COVID-19 <cit.>: This dataset was collected from several hospitals in Qatar, containing approximately equal numbers of COVID and non-COVID cases. We sample a balanced 1:1 subset of COVID-19 and non-COVID-19 cases with 2000 samples. RSNA pneumonia <cit.>: The RSNA pneumonia dataset, published by the Radiological Society of North America (RSNA) in 2018, contains 26,684 chest X-ray images of pneumonia patients and normal people of different ages, genders, and races.
We follow the configuration in <cit.>, sample a balanced subset of pneumonia and non-pneumonia cases, and use it for evaluation. §.§.§ Implementation details We utilize a multimodal Pre-training model that combines the ViT architecture <cit.> as the image encoder and the BioClinicalBERT <cit.> architecture as the text encoder. Our model merges the initial hand-crafted prompt with a continuous prompt in the embedding layer of the text encoder. The continuous prompt has a length of 32, and we set the dimension of the learnable vectors to 512. Additionally, we set the hidden dimension of the encoder to 768. For all Pre-training experiments, we train the model for 100,000 steps using the Adam optimizer with a learning rate of 1e-5. We also set the warm-up ratio to 10 and use a linear learning rate scheduler after the warm-up. The pre-processing steps involve scaling images to 256 × 256 and uniformly processing text to a length of 77 tokens. We complete the training on an NVIDIA A40 GPU. §.§ Results and Discussions §.§.§ Zero-shot classification Following previous work, we evaluate zero-shot image classification on four datasets, CheXpert-5x200, MIMIC-5x200, COVID and RSNA, to assess the generalizability of our model. We represent the image labels by constructing continuous prompts and comparing the similarity between the image embeddings and the text embeddings. The results are presented in Table <ref>. We adopt accuracy (ACC) as the evaluation metric. The results show that our method exhibits excellent performance on the zero-shot task for multiple datasets. Of interest is that, unlike other models, the insertion of our continuous prompt largely enhances the advantages of the prompt ensemble, which indicates that the contextual vectors learned during training can capture the differences and commonalities between hand-crafted prompts and natural medical image reports well, thus unifying the three feature spaces of Image-Text-Label. In particular, we achieve excellent performance on both the COVID and RSNA datasets: although our model never sees these datasets during training, the zero-shot ACC reaches 0.748 and 0.764, respectively, which demonstrates the strong generalization ability of our model. §.§.§ Classification with fine-tuning We validate the strong transfer learning ability of our model by fine-tuning it on the same datasets as above. The results are shown in Table <ref>. In comparison to other models, ours demonstrates better performance after fine-tuning, suggesting that our model effectively learns more discriminative representations during the training process. Meanwhile, compared with Table <ref>, it can be seen that the performance difference between fine-tuning and zero-shot is a small margin. This demonstrates that our model has strong general and specific representations at the same time, and indicates the promising application of our Image-Text-Label Pre-training model in scenarios with limited labeled data. §.§.§ Cross-modal Retrieval As in previous work, we evaluate our proposed model using the CheXpert-5x200 dataset. Specifically, we compute the Image-Text pairs with the largest similarity scores by encoding the images and the corresponding sentence embeddings. We then evaluate our model using Precision@K, i.e., the precision among the top-K retrieved reports/sentences.
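A minimal sketch of this Precision@K protocol (hypothetical similarity scores; the function name is ours) reads as follows:

```python
import numpy as np

# For each image query, count how many of the top-K retrieved reports share
# its class label; average over queries.
def precision_at_k(sim, img_labels, txt_labels, k):
    """sim: (n_img, n_txt) similarity matrix; labels: integer class ids."""
    topk = np.argsort(-sim, axis=1)[:, :k]                    # best K per query
    hits = (txt_labels[topk] == img_labels[:, None]).mean(axis=1)
    return hits.mean()

rng = np.random.default_rng(0)
sim = rng.normal(size=(1000, 1000))            # stand-in similarity scores
labels = np.repeat(np.arange(5), 200)          # 5 classes x 200, as in 5x200
for k in (5, 10, 100):
    print(k, precision_at_k(sim, labels, labels, k))  # ~0.2 for random scores
```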
The specific results are presented in Table <ref>, which demonstrate that our Image-Text-Label Pre-training model achieves the best results on the Precision@K metrics for multiple sample sizes. §.§.§ Ablation study To investigate the impact of the continuous prompt length and of the Image-Label dataset on our approach, we conduct a rigorous ablation study on CheXpert-5x200. When the length of the continuous prompt is set to 16 or 64, UMCL shows a noticeable performance degradation. This demonstrates that the continuous prompt only plays an enhancing role, while zero-shot classification still highly depends on loading the discrete prompt. We then remove the continuous prompt and the Image-Label data; the results show that the introduction of the Image-Label dataset brings a significant improvement in model performance and that the continuous prompt bridges the gap between the text and the label. In summary, our proposed components improve all metrics, which demonstrates the prominent effectiveness of our approach. §.§.§ T-SNE Visualization Finally, we compare the t-SNE results of our Image-Text-Label Pre-training model and the CLIP model. As shown in Figure <ref>, the image encoder of the CLIP model is less effective in distinguishing different chest films, while our model is able to effectively distinguish between multiple diseases. It is worth noting that our model achieves a significant clustering effect for the different diseases. The results suggest that our model outperforms the image encoder of the CLIP model in this specific task. The ability to differentiate between various diseases is essential in medical imaging, and our model's success in achieving this could be of great value in clinical settings. § CONCLUSION In this paper, we introduced a new Pre-training paradigm, which unifies the image, text, and label modalities using continuous prompts and an Image-Text-Label contrastive training loss. Our proposed model effectively integrates three different data types into the same feature space, thus addressing several issues related to Image-Text Pre-training models, such as the lack of sufficient data, the strong dependence on hand-crafted prompts, and the high false-negative rate. Our comprehensive experimental evaluation demonstrates the effectiveness of our approach in various tasks, including zero-shot classification, cross-modal retrieval, and fine-tuned classification, across multiple datasets. Our results indicate that our method outperforms existing state-of-the-art Pre-training models, demonstrating the potential of our approach in various applications. §.§ Acknowledgements
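For concreteness, the Image-Text-Label objective described in the method section can be condensed into the following sketch; the clamping of the normalized scores before the logarithm is our own numerical safeguard, not part of the paper:

```python
import torch
import torch.nn.functional as F

# Soft similarity targets from multi-hot disease labels (Image-Label data) or
# identity targets (Image-Text data), globally L2-normalized predicted scores,
# and L = -(1/N) * sum_ij y_ij * log s_ij. Illustrative sketch only.
def umcl_loss(v, t, label_vecs=None):
    """v, t: (N,D) image/text embeddings; label_vecs: (N,L) multi-hot or None."""
    s = v @ t.T                                   # predicted similarity scores
    s = s / s.pow(2).sum().sqrt()                 # global L2 normalization
    if label_vecs is None:                        # Image-Text data: pairs only
        y = torch.eye(v.shape[0])
    else:                                         # Image-Label data: cosine of
        l = F.normalize(label_vecs.float(), dim=1)  # multi-hot label vectors
        y = l @ l.T
    return -(y * torch.log(s.clamp_min(1e-8))).sum() / v.shape[0]

v = F.normalize(torch.randn(8, 64), dim=1)
t = F.normalize(torch.randn(8, 64), dim=1)
labels = (torch.rand(8, 14) > 0.7).long()         # 14 CheXpert-style findings
print(umcl_loss(v, t).item(), umcl_loss(v, t, labels).item())
```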
http://arxiv.org/abs/2307.04424v1
20230710085906
About the algebraic closure of formal power series in several variables
[ "Michel Hickel", "Mickaël Matusinski" ]
math.AC
[ "math.AC", "math.AG", "13J05, 13F25, 14J99, 12-08" ]
Michel Hickel and Mickaël Matusinski, Univ. Bordeaux, CNRS, Bordeaux INP, IMB, UMR 5251, F-33400 Talence, France 2020 Mathematics Subject Classification: 13J05, 13F25, 14J99 and 12-08. Let K be a field of characteristic zero. We deal with the algebraic closure of the field of fractions of the ring of formal power series K[[x_1,…,x_r]], r≥ 2. More precisely, we view the latter as a subfield of an iterated Puiseux series field 𝒦_r. On the one hand, given y_0∈𝒦_r which is algebraic, we provide an algorithm that reconstructs the space of all polynomials which annihilate y_0 up to a certain order (arbitrarily high). On the other hand, given a polynomial P∈ K[[x_1,…,x_r]][y] with simple roots, we derive a closed form formula for the coefficients of a root y_0 in terms of the coefficients of P and a fixed initial part of y_0. About the algebraic closure of formal power series in several variables. Michel Hickel and Mickaël Matusinski August 12, 2023 ======================================================================== § INTRODUCTION. Let K be a field of characteristic zero and K̄ its algebraic closure. Let x:=(x_1,…,x_r) be an r-tuple of indeterminates where r∈ℕ, r≥ 2. Let K[x] and K[[x]] denote respectively the domains of polynomials and of formal power series in r variables with coefficients in K, and K(x) and K((x)) their fraction fields. Both fields embed naturally into K((x_r))((x_r-1))⋯((x_1)), the latter being naturally endowed with the lexicographic valuation in the variables (x_1,…,x_r) (see Section <ref>). By iteration of the classical Newton-Puiseux theorem (see e.g. <cit.> and <cit.>), one can derive a description of an algebraic closure of K((x_r))((x_r-1))⋯((x_1)) in terms of iterated fractional Laurent series (see <cit.><cit.>): The following field, where L ranges over the finite extensions of K in K̄: ℒ_r:= ⋃_p∈ℕ^*⋃_L L((x_r^1/p))((x_r-1^1/p))⋯ ((x_1^1/p)) is the algebraic closure of K((x_r))((x_r-1))⋯((x_1)). Within this framework, there are several results concerning those iterated fractional Laurent series which are solutions of polynomial equations with coefficients either in K(x) or K((x)). More precisely, the authors provide necessary constraints on the supports of such series (see <cit.>, <cit.>, <cit.> <cit.>, <cit.>). More recently, Aroca, Decaup and Rond study more precisely the support of Laurent-Puiseux power series which are algebraic over K[[x]] (with certain results for K of positive characteristic) <cit.>. As asserted in <cit.>, one can prove the following result (see the proof in Section <ref>), which could also be derived from the methods in <cit.> or <cit.>: The following field 𝒦_r, where L ranges over the finite extensions of K in K̄, is an algebraically closed extension of K(x) and K((x)) in ℒ_r: 𝒦_r := ⋃_(p,q)∈ℕ^*×ℕ^r-1⋃_L L(( ( x_1/x_2^q_1)^1/p,…, ( x_r-1/x_r^q_r-1)^1/p ,x_r^1/p)). Let ỹ_0∈𝒦_r and f̃,g̃∈ L[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]] such that ỹ_0=f̃/g̃. Let α be the lexicographic valuation of g̃ (where it is understood that the valuation of x_i^1/p is equal to 1/p times the valuation of x_i). Denote g̃=ax^α(1-ε) with ε having positive valuation.
We expand: ỹ_0=f̃/g̃=f̃ a^-1x^-α∑_k∈ε^k as a generalized power series ∑_n∈(^r,≤_lex) c_n/px^n/p (the latter is well defined by <cit.>). We set: Supp(∑_n∈(^r,≤_lex) c_n/px^n/p):={1/pn∈(1/p^r,≤_lex) | c_n/p≠ 0}. Let us call the elements of 𝒦_r rational polyhedral Puiseux series (since one can observe that the support with respect to the variables x_i's of such a series is included in the translation of some rational convex polyhedral cone). We are interested in those rational polyhedral Puiseux series that are algebraic over K((x)), say the rational polyhedral Puiseux series which verify a polynomial equation P̃(x,y)=0 with coefficients which are themselves formal power series in x: P̃(x,y)∈ K[[x]][y]∖{0}. Let us call such a series algebroid. If such a series ỹ_0 admits a vanishing polynomial of degree at most d in y, we will say that ỹ_0 is algebroid of degree bounded by d. More precisely, we extend our previous work on algebraic (over K(x)) Puiseux series in several variables <cit.>, by dealing with the following analogous questions: ∙ Reconstruction of pseudo-vanishing polynomials for a given algebroid rational polyhedral Puiseux series. In this part, for simplicity reasons, we will assume that K is algebraically closed. For Q̃(x,y)∈ K[[x]][y] a nonzero polynomial, the (x)-adic order of Q̃ is the maximum of the integers k such that Q̃(x,y)∈ (x)^kK[[x]][y] where (x) denotes the ideal of K[[x]] generated by x_1,…,x_r. We consider ỹ_0=f̃/g̃ with f̃,g̃∈ K[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]] algebroid of degree bounded by d. For an arbitrarily large valuation l∈, we provide an algorithm which computes polynomials Q̃(x,y)∈ K[[x]][y] such that the expansion of Q̃(x,ỹ_0)∈𝒦_r as a rational polyhedral Puiseux series has valuation greater than l. More precisely, let us denote ζ_i:=(x_i/x_i+1^q_i)^1/p for i=1,…,r-1, and ζ_r:=x_r^1/p. We suppose that for any k∈, one can compute all the coefficients of ζ^n with n_1+⋯+n_r≤ k in f̃ and g̃. Moreover, we assume that the lexicographic valuations with respect to ζ of f̃ and g̃ are given. Let d∈^* and ν̃_0∈. Let ỹ_0∈𝒦_r be algebroid of degree bounded by d. We assume that there is a vanishing polynomial P̃ of degree bounded by d and of (x)-adic order bounded by ν̃_0. We consider formal power series f̃,g̃∈ K[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]] such that ỹ_0=f̃/g̃. Let β=(β_1,…,β_r) be the lexicographic valuation of f̃g̃ with respect to the variables ζ_i:=(x_i/x_i+1^q_i)^1/p, ζ_r:=x_r^1/p, and q_i':=q_i+β_i+1+1 for i=1,…,r-1. We set: [ L̃: ^r → ; (n_1,…,n_r) ↦ n_r+q'_r-1n_r-1+q'_r-1q'_r-2n_r-2+⋯+q'_r-1q'_r-2⋯ q'_1n_1. ] The algorithm described in Section <ref> provides for any ν∈ a parametric description of the space of all the polynomials Q̃_ν(x,y)∈ K[[x]][y] with _yQ̃_ν≤ d and of (x)-adic order bounded by ν̃_0 such that, for any 1/pn=1/p(n_1,…,n_r)∈Supp Q̃_ν(x,ỹ_0), one has: L̃(n)≥ν. Note that the condition L̃(n)≥ν for 1/pn∈Supp Q̃_ν(x,ỹ_0) implies that infinitely many coefficients of Q̃_ν(x,ỹ_0) vanish since n∈^r. With more information on ỹ_0, we can use other linear forms L̃, see Theorem <ref>. ∙ Description of the coefficients of an algebroid rational polyhedral Puiseux series in terms of the coefficients of a vanishing polynomial. Now, let a polynomial P̃(x,y)∈ K[[x]][y] with only simple roots and a root ỹ_0∈𝒦_r be given. 
Up to a change of coordinates (see Section <ref>), we reduce to the case of a polynomial P(u,y)∈ K[[u]][y] whose support has constraints (see Lemma <ref>), and a simple root y_0∈ L[[u]] (where [L:K]<∞). In Theorem <ref> and Corollary <ref>, we provide a closed form formula for the coefficients of y_0 in terms of the coefficients of P and the coefficients of a fixed initial part of y_0. This is obtained as a consequence of a generalization of the multivariate Flajolet-Soria formula for Henselian equations (<cit.>), see Theorem <ref>. Our article is organized as follows. In Section <ref>, we prove a monomialization lemma (Lemma <ref>) which is a key to reduce to the case of formal power series annihilating a polynomial whose support has constraints (Lemma <ref>). This is done by a change of variable (<ref>) corresponding to the lexicographic valuation. Moreover, we distinguish two sets s and t of variables and we show that our series y_0 can be expanded as y_0=∑_nc_n(s) t^n where the c_n(s)∈ K[[s]] are algebraic power series (see Lemma <ref>) of bounded degree (see Lemma <ref>). Section <ref> is devoted to the proof of the nested depth lemma (Theorem <ref>). It is used in the subsequent sections to ensure the finiteness of the computations. We use elementary properties on Bézout's identity and the resultant of two polynomials. In Section <ref>, we show how to reconstruct all the polynomials of given bounded degrees which vanish at given several algebraic power series. This is based on Section <ref> and our previous work on algebraic multivariate power series <cit.>. In Section <ref>, we prove our first main result, Theorem <ref> and its variant Theorem <ref>. Sections <ref> and <ref> are devoted to our second question. In Section <ref>, we study what we call strongly reduced Henselian equations (see Definition <ref>) and prove a generalisation of the multivariate Flajolet-Soria formula (see Theorem <ref>). In Section <ref>, we prove how to reduce to the case of a strongly reduced Henselian equation (see Theorem <ref>) and, in the case of an equation with only simple roots, we derive a closed form formula for the coefficients of a solution y_0 in terms of the coefficients of the equation and of a bounded initial part of y_0 (see Corollary <ref>). § PRELIMINARIES Let us denote ℕ:=ℤ_≥ 0 and ℕ^*:=ℕ∖{0}=ℤ_>0. For any set ℰ, we denote by |ℰ| its cardinal. We systematically write the vectors using underlined letters, e.g. x:=(x_1,…,x_r), n:=(n_1,…,n_r), and in particular 0:=(0,…,0). Moreover, x^n:=x_1^n_1⋯ x_r^n_r. The floor function will be denoted by ⌊ q ⌋ for q∈ℚ. For a polynomial P(y)=∑_i=0^d a_iy^i with coefficients a_i in a domain and a_d≠ 0, we consider that its discriminant Δ_P is equal to the resultant of P and ∂ P/∂ y (instead of the more usual convention Δ_P=(-1)^d(d-1)/2/a_dRes(P,∂ P/∂ y)). For any sequence of nonnegative integers m=(m_i,j)_i,j with finite support and any sequence of scalars a=(a_i,j)_i,j indexed by i∈ℤ^r and j∈ℕ, we set: * m!:=∏_i,jm_i,j!; * a^m:=∏_i,ja_i,j^m_i,j; * |m|:=∑_i,jm_i,j, ||m||:= ∑_i,jm_i,j j∈ and g(m) := ∑_i,jm_i,j i∈^r. In the case where k=(k_0,…,k_l), we set k :=∑_j=0^lk_j j. In the case where k=(k_i)_i∈Δ where Δ is a finite subset of ℤ^r, we set g(k):=∑_i∈Δk_i i. We will consider the following orders on tuples in ℤ^r: The lexicographic order n≤_lexm :⇔ n_1<m_1 or (n_1=m_1 and n_2<m_2) or ⋯ or (n_1=m_1, n_2=m_2, … and n_r<m_r). The graded lexicographic order n≤_grlexm :⇔ |n |<|m| or (|n |=|m| and n≤_lexm). 
The product (partial) order n≤m :⇔ n_1≤ m_1 and n_2≤ m_2 ⋯ and n_r≤ m_r. Note that we will apply also the lexicographic order on ℚ^r. Similarly, one has the anti-lexicographic order denoted by ≤_alex. Considering the restriction of ≤_grlex to ^r (for which ^r has order type ω), we denote by S(k) (respectively A(k) for k≠ 0), the successor element (respectively the predecessor element) of k in (ℕ^r,≤_grlex). Given a variable x and a field K, we call Laurent series in x with coefficients in K any formal series ∑_n≥ n^0c_nx^n for some n^0∈ and c_n∈ K for any n. They consist in a field, which is identified with the fraction field K((x)) of K[[x]]. To view the fields K(x) and K((x)) as embedded into K((x_r))((x_r-1))⋯((x_1)) means that the rational fractions or formal meromorphic fractions can be represented as iterated formal Laurent series, i.e. Laurent series in x_1 whose coefficients are Laurent series in x_2, whose coefficients... etc. This corresponds to the following approach. As in <cit.>, we identify K((x_r))((x_r-1))⋯((x_1)) with the field of generalized power series (in the sense of <cit.>, see also <cit.>) with coefficients in K and exponents in ℤ^r ordered lexicographically, usually denoted by K((X^ℤ^r))^lex. By definition, such a generalized series is a formal expression s=∑_n∈ℤ^rc_nX^n (say a map ℤ^r→ K) whose support (s):={n∈ℤ^r | c_n≠ 0} is well-ordered. The field K((X^ℤ^r))^lex comes naturally equipped with the following valuation of rank r: [ v_x: K((X^ℤ^r))^lex → (ℤ^r∪{∞},≤_lex); s≠ 0 ↦ min((s)); 0 ↦ ∞ ] The identification of K((X^ℤ^r)) and K((x_r))((x_r-1))⋯((x_1)) reduces to the identification X^(1,0,…,0)=x_1 , X^(0,1,…,0)=x_2 , … , X^(0,…,0,1)=x_r. By abuse of terminology, we call K((X^ℤ^r))^lex or K((x_r))((x_r-1))⋯((x_1)) the field of (iterated) multivariate Laurent series. Note also that this corresponds to the fact that the power series in the rings K[x] and K[[x]] are viewed as expanded along (ℤ^r,≤_lex). Similarly, the field ℒ_r is a union of fields of generalized series L((X^(ℤ^r)/p))^lex and comes naturally equipped with the valuation of rank r: [ v_x: ℒ_r → (ℚ^r∪{∞},≤_lex); s≠ 0 ↦ min((s)); 0 ↦ ∞. ] We will need another representation of the elements in K(x) and K((x)), via the embedding of these fields into the field K((X^ℤ^r))^grlex with valuation: [ w_x: K((X^ℤ^r))^grlex → (ℤ^r∪{∞},≤_grlex); s≠ 0 ↦ min((s)); 0 ↦ ∞. ] and the same identification: X^(1,0,…,0)=x_1 , X^(0,1,…,0)=x_2 , … , X^(0,…,0,1)=x_r. For a polynomial P(y)=∑_j=0^da_jy^j∈ K((X^ℤ^r))^grlex[y], we denote: w_x(P(y)):=min_j=0,…,d{w_x(a_j)}. We will also use the following notations to keep track of the variables used to write the monomials. Given a ring R, we denote by R((x_1^ℤ,…,x_r^ℤ))^lex and R((x_1^ℤ,…,x_r^ℤ))^grlex the corresponding rings of generalized series ∑_n∈ℤ^rc_nx^n with coefficients c_n in R. Accordingly, let us write R((x_1^ℤ,…,x_r^ℤ))^lex_Mod and R((x_1^ℤ,…,x_r^ℤ))^grlex_Mod the subrings of series whose actual exponents are all bounded by below by some constant for the product order. Note that these subrings are both isomorphic to the ring ⋃_n∈ℤ^rx^nR[[x]]. Let us write also R((x_1^ℤ,…,x_r^ℤ))^lex_≥_lex0 and R((x_1^ℤ,…,x_r^ℤ))^grlex_≥_grlex0 the subrings of series s with v_x(s)≥_lex0, respectively w_x(s)≥_grlex0. Let f be non zero in K[[ξ_1,…,ξ_r]]. There exists ρ_1,…,ρ_r-1∈ℕ such that, if we set {[ η_1 := ξ_1/ξ_2^ρ_1; ⋮; η_r-1 := ξ_r-1/ξ_r^ρ_r-1; η_r := ξ_r ]. then f(ξ_1,…,ξ_r)=η^αg(η_1,…,η_r) where α∈ℕ^r and g is an invertible element of K[[η_1,…,η_r]]. 
Moreover, for all i=1,…,r-1, ρ_i≤ 1+β_i+1 where β:=v_ξ(f). Let us write f=ξ^β h where β=v_ξ(f) and h∈ K((ξ_1^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod with v_ξ(h)=0. Note that h can be written as h=h_0+h_1 where h_0∈ K((ξ_2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod with v_ξ(h_0)=0, and h_1∈ξ_1K[[ξ_1]]((ξ_2^ℤ,…,ξ_r^ℤ))^lex_Mod. If h_1∈ K[[ξ_1]]((ξ_2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod, then we set ρ_1=0. Otherwise, let ρ_1 be the smallest positive integer such that: ρ_1≥sup{1 ; (1-m_2)/m_1, m∈supp h_1}. Note that, since m_1≥ 1 and m_2≥ -β_2, we have that ρ_1≤ 1+β_2. We also remark that the supremum is achieved for 0≥ m_2≥ -β_2 and 1+β_2 ≥ m_1≥ 1. Let η_1:=ξ_1/ξ_2^ρ_1. For every monomial in h_1, one has ξ_1^m_1ξ_2^m_2…ξ_r^m_r=η_1^m_1ξ_2^m_2+ρ_1m_1…ξ_r^m_r. Hence, m_2+ρ_1m_1≥ 1 by definition of ρ_1. So (m_2+ρ_1m_1,…,m_r)>_lex0, meaning that h_1∈ K[[η_1]]((ξ_2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod and that v(h_1)>_lex0 where here v is the lexicographic valuation with respect to the variables (η_1,ξ_2,…,ξ_r). So h∈ K[[η_1]]((ξ_2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod and v(h)=0. Note that the exponents m_3,…, m_r remain unchanged in the support of h. Suppose now that we have obtained h∈ K[[η_1,…,η_p]]((ξ_p+1^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod and that v(h)=0 where v is now the lexicographic valuation with respect to the variables (η_1,…,η_p,ξ_p+1,…,ξ_r). The induction step is similar to the initial one. As before, let us write h=h_0^(p+1)+h_1^(p+1) where h_0^(p+1)∈ K[[η_1,…,η_p]]((ξ_p+2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod with v(h_0^(p+1))=0, and h_1^(p+1)∈ξ_p+1K[[η_1,…,η_p,ξ_p+1]]((ξ_p+2^ℤ,…,ξ_r^ℤ))^lex_Mod. If h_1^(p+1)∈ K[[η_1,…,η_p,ξ_p+1]]((ξ_p+2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod, then we set ρ_p+1=0. Otherwise, let ρ_p+1 be the smallest positive integer such that: ρ_p+1≥sup{1 ; (1-m_p+2)/m_p+1, m∈supp h_1^(p+1)}. Note that, since m_p+1≥ 1 and m_p+2≥ -β_p+2 (since these exponents m_p+2 remained unchanged until this step), we have that ρ_p+1≤ 1+β_p+2. If we set η_p+1:=ξ_p+1/ξ_p+2^ρ_p+1, then h∈ K[[η_1,…,η_p+1]]((ξ_p+2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod and v(h)=0 (where v is now the lexicographic valuation with respect to the variables (η_1,…,η_p+1,ξ_p+2,…,ξ_r)). By iteration of this process, we obtain that h ∈ K[[η_1,…,η_r-1]]((ξ_r^ℤ))^lex_≥_lex0, Mod and v(h)=0 (where v is now the lexicographic valuation with respect to the variables (η_1,…,η_r-1, ξ_r)), which means that h∈ K[[η_1,…,η_r-1,ξ_r]] with h invertible. Since ξ^β=η^α for some α∈^r, the lemma follows. (i) Let ỹ_0:=f̃/g̃∈𝒦_r. There exist (p,q)∈ℕ^*×ℕ^r-1 and L with [L:K]<+∞ such that ỹ_0∈ L(((x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p)). We note that we can rewrite ỹ_0 as a monomial (with integer exponents) times an invertible power series in other variables (( x_1/x_2^q_1')^1/p,…, (x_r-1/x_r^q_r-1')^1/p ,x_r^1/p). Indeed, let us denote ξ=(ξ_1,…,ξ_r):=((x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p). So ỹ_0=f̃/g̃ for some f̃,g̃∈ L[[ξ]]. By the preceding lemma, we can monomialize the product f̃.g̃, so f̃ and g̃ simultaneously, by a suitable transformation (<ref>). Note that this transformation maps L[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]] into some L[[(x_1/x_2^q_1')^1/p,…, (x_r-1/x_r^q'_r-1)^1/p ,x_r^1/p]]. 
Indeed, a monomial in ξ is transformed into a monomial in η, and one has that: η_1^i_1/p⋯η_r-1^i_r-1/pη_r^i_r/p= (x_1/x_2^q_1/(x_2/x_3^q_2)^ρ_1)^i_1/p⋯(x_r-1/x_r^q_r-1/x_r^ρ_r-1)^i_r-1/px_r^i_r/p = (x_1/x_2^q_1+ρ_1)^i_1/p⋯(x_r-1/x_r^q_r-1+ρ_r-1)^i_r-1/p x_r^i_r/p(x_3^q_2ρ_1)^i_1/p(x_4^q_3ρ_2)^i_2/p⋯(x_r^q_r-1ρ_r-2)^i_r-2/p and we write (x_3^q_2ρ_1)^i_1/p= (x_3/x_4^q_3+ρ_3)^q_2ρ_1i_1/px_4^(q_3+ρ_3)q_2ρ_1i_1/p and so on. Thus we obtain a monomial in the variables ((x_1/x_2^q_1+ρ_1)^1/p,…, (x_r-1/x_r^q_r-1+ρ_r-1)^1/p, x_r^1/p).

(ii) Let f∈ K[[ξ]], ρ_1,…,ρ_r-1∈ℕ, and η be as in the Monomialization Lemma <ref>. Let β=v_ξ(f). If we replace ρ_1,…,ρ_r-1 by ρ_1',…,ρ_r-1' with ρ_i'≥ρ_i for all i, and we proceed to the corresponding change of variables η' as in (<ref>), then we still have f(ξ)=(η')^αg'(η') for some invertible g'∈ K[[η']]. So Lemma <ref> holds true if we take 1+β_i+1 instead of ρ_i whenever ρ_i>0.

𝒦_r is an algebraically closed extension of K((x)). This is a consequence of the Abhyankar-Jung Theorem <cit.>, see <cit.>, and our Monomialization Lemma <ref>. Let P(y)=∑_i=0^da_iy^i∈ L[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]][y] where [L:K]<+∞, p∈ℕ^*, q_i∈ℕ for i=1,…,r-1 and a_d≠ 0. We want to show that P has a root in 𝒦_r. Up to multiplication by a_d^d-1 and the change of variable z=a_dy, we may assume that P is monic. Let us denote ξ=(ξ_1,…,ξ_r):=((x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p) and P(y)=P(ξ,y). Up to replacing L by a finite algebraic extension of it, we may also suppose that P(0,y)=(y-c_1)^α_1⋯ (y-c_m)^α_m with c_i∈ L. By Hensel's Lemma (see Raynaud, Prop. 5, 4), and Lafon, Algèbre locale, Chap. 12, Th. 12.5, p. 166), there exist polynomials P_1(ξ,y),…,P_m(ξ,y) such that P_i(0,y)=(y-c_i)^α_i (i=1,…,m) and P=P_1⋯ P_m. It is enough to show that P_1 has a root in 𝒦_r. By a change of variable y=z-c_1, we are led to the case of a polynomial P(ξ,y)=y^d+∑_i=0^d-1a_i(ξ)y^i with a_i(0)=0, i=0,…,d-1. By our Monomialization Lemma <ref> and Remark <ref>(i), we may assume that the discriminant of P is monomialized. Hence, the Abhyankar-Jung Theorem applies. Note that this last step may require replacing L by a finite algebraic extension.

Let ỹ_0∈𝒦_r be a nonzero rational polyhedral Puiseux series. Let us show that the existence of a nonzero polynomial P̃(x,y) vanishing at ỹ_0 is equivalent to that of a polynomial P(u,y) vanishing at y_0∈ L[[u]], but with constraints on the support of P. Indeed, by our Monomialization Lemma <ref> and Remark <ref>(i), there are (p,q)∈ℕ^*×ℕ^r-1 such that, if we set: (u_1,…,u_r-1,u_r):=((x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p), then we can rewrite ỹ_0 =∑_n≥ñ^0c̃_nu^n, c̃_ñ^0≠ 0. Let us denote c_n:=c̃_n+ñ^0, and: ỹ_0=u^ñ^0∑_n≥0 c_nu^n=u^ñ^0 y_0 with c_0≠ 0. Hence, y_0 is a formal power series in u with coefficients in a finite algebraic extension L of K. By the change of variable (<ref>), we have: x_k=u_k^pu_k+1^pq_ku_k+2^pq_kq_k+1⋯ u_r^pq_kq_k+1⋯ q_r-1, k=1,…,r. The rational polyhedral Puiseux series ỹ_0 is a root of a polynomial P̃(x,y)=∑_j=0^d∑_i∈ℕ^rã_i,jx^iy^j ∈ K[[x]][y] of degree d in y if and only if the power series y_0=∑_n∈ℕ^r c_nu^n∈ L[[u]] is a root of u^m̃^0P̃( u_1^pu_2^pq_1⋯ u_r^pq_1q_2⋯ q_r-1 , … , u_r^p , u^ñ^0y), the latter being a polynomial P(u,y) in K[[u]][y] for m̃^0 such that m̃^0_k=max{0 ; -ñ_k^0d}, k=1,…,r. Note that the transformation is uniquely defined by p,q,d and ñ^0. In the following lemma, we clarify the constraints on the support of the polynomial P.
With the notations of (<ref>), we set u=( t_0, s_1, t_1,…, s_σ, t_σ) where t_0 might be empty, such that u_i∈ s_k if and only if q_i≠ 0 (and, so u_i∈ t_k if and only if q_i=0). Moreover, we write s:=( s_1,…, s_σ) and t:=( t_0, t_1,…, t_σ). Hence, a polynomial P̃(x,y) ∈ K[[x]][y] is changed by the transformation induced by (<ref>) and (<ref>) into a polynomial: P(s,t,y)=∑_l≥0∑_j=0^dP_l,j(s)y^j t^l∈ K[s,y][[t]] with for any i such that u_i∈s_k, _u_i(P_l,j(s))-(m̃^0_i+jñ_i^0) ≤_u_i+1 (P_l,j(s) t^l)-(m̃^0_i+1+jñ_i+1^0) /q_i, j=0,..,d. Conversely, any polynomial P(s,t,y)=∑_l≥0∑_j=0^dP_l,j(s)y^j t^l∈ K[s,y][[t]] comes from a unique polynomial P̃(x,y) ∈ K[[x]][y] by the transformation induced by (<ref>) and (<ref>) if and only if each monomial u^αy^j in the support of P satisfies the following conditions: (i) α≥m̃^0+jñ^0; (ii) ∀ i=1,…,r, α_i-(m̃^0_i+jñ_i^0)≡ 0 (p) ; (iii) For any u_i∈s_k, α_i-(m̃^0_i+jñ_i^0)≤α_i+1-(m̃^0_i+1+jñ_i+1^0)/q_i. Let us collect the variables x_i according to the distinction between t_j and s_k among the variables u_l. We set x_k for the sub-tuple of variables x_i corresponding to t_k, and ξ_k for s_k respectively. Let us consider a general monomial: x^ ny^j = x_0^ n_0 ξ_1^ m_1 x_1^ n_1⋯ξ_σ^ m_σ x_σ^ n_σy^j. where n=( n_0, m _1, n_1,…, m_σ, n_σ). For k=1,…,σ, we denote ξ_k=(x_i_k,…,x_j_k-1) and x_k=(x_j_k,…,x_i_k+1-1), and accordingly m_k=(n_i_k,…,n_j_k-1) and n_k=(n_j_k,…,n_i_k+1-1) with i_σ+1:=r+1. For k=0 when t_0 is not empty, we denote x_0= t_0=(x_j_0,…,x_i_1-1) and n_0=(n_j_0,…,n_i_1-1) with j_0:=1. By the change of variable (<ref>), for each k=1,…,σ, we obtain that: ξ_k^ m_k x_k^ n_k= ((x_i_k/x_i_k+1^q_i_k)^1/p)^pn_i_k( (x_i_k+1/x_i_k+2^q_i_k+1)^1/p)^p(n_i_k+1+q_i_kn_i_k)⋯ ((x_j_k-1/x_j_k^q_j_k-1)^1/p)^p(n_j_k-1+q_j_k-2n_j_k-2 +q_j_k-2q_j_k-3n_j_k-3+⋯+ q_j_k-2q_j_k-3⋯ q_i_kn_i_k) ×( x_j_k^1/p)^p(n_j_k+q_j_k-1n_j_k-1+q_j_k-1q_j_k-2n_j_k-2+⋯+ q_j_k-1q_j_k-2⋯ q_i_kn_i_k) ×( x_j_k+1^1/p)^pn_j_k+1⋯(x_i_k+1-1^1/p)^pn_i_k+1-1 = u_i_k^pn_i_ku_i_k+1^p(n_i_k+1+q_i_kn_i_k)⋯u_j_k-1^p(n_j_k-1+q_j_k-2n_j_k-2+q_j_k-2q_j_k-3n_j_k-3+⋯+ q_j_k-2q_j_k-3⋯ q_i_kn_i_k) u_j_k^p(n_j_k+q_j_k-1n_j_k-1+q_j_k-1q_j_k-2n_j_k-2+⋯+ q_j_k-1q_j_k-2⋯ q_i_kn_i_k) u_j_k+1^pn_j_k+1⋯u_i_k+1-1^pn_i_k+1-1 [ = s_i_k^pn_i_ks_i_k+1^p(n_i_k+1+q_i_kn_i_k)⋯s_j_k-1^p(n_j_k-1+q_j_k-2n_j_k-2+q_j_k-2q_j_k-3n_j_k-3+⋯+ q_j_k-2q_j_k-3⋯ q_i_kn_i_k); t_j_k^p(n_j_k+q_j_k-1n_j_k-1+q_j_k-1q_j_k-2n_j_k-2+⋯+ q_j_k-1q_j_k-2⋯ q_i_kn_i_k) t_j_k+1^pn_j_k+1⋯t_i_k+1-1^pn_i_k+1-1. ] Moreover, y^j is transformed into u^m̃^0+jñ^0y^j. For u_i∈ s_k, we denote by c_i its exponent in Formula (<ref>). If i<j_k-1, then u_i+1∈ s_k and its exponent is c_i+1=p(n_i+1+q_in_i+⋯ +q_iq_i-1⋯ q_i_kn_i_k) =pn_i+1+q_ic_i. The total exponent of u_i in the transform of x^ ny^j is c_i+m̃^0_i+jñ_i^0. So, _u_i+1 (P_l,j(s) y^j t^l)-(m̃^0_i+1+jñ_i+1^0) = _u_i+1 (P_l,j(s))-(m̃^0_i+1+jñ_i+1^0) ≥q_i(_u_i (P_l,j(s))-(m̃^0_i+jñ_i^0)). If i=j_k-1, then u_i+1=t_j_k∈ t_k. Likewise, its exponent in (<ref>) is pn_j_k+q_j_k-1c_j_k-1. We obtain that _u_i+1 (P_l(s) y^j t^l)-(m̃^0_j_k+jñ_j_k^0) =_t_j_kt^l-(m̃^0_j_k+jñ_j_k^0) ≥q_j_k-1(_u_j_k-1P_l(s,y)-(m̃^0_j_k-1+jñ_j_k-1^0) ). Conversely, we consider a monomial s_k^λ t_k^μ. It is of the form (<ref>), that is, it comes from a monomial ξ_k^ m_k x_k^ n_k, if and only if _u_is_k^λ≤_u_i+1s_k^λ t_k^μ/q_i and λ_i≡μ_j≡ 0 (p), which are equivalent to the conditions (ii) and (iii). Taking into account the transformation (<ref>), this gives the converse part of the lemma. 
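To fix ideas, the change of variables just used can be checked mechanically. The following is a minimal sketch of our own (using sympy; the values r = 3, p = 2, q = (q_1, q_2) = (1, 2) are arbitrary choices for illustration, not taken from the paper): it verifies that u_i^p recovers x_i/x_{i+1}^{q_i} and computes the image u^α of a monomial x^n.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
u1, u2, u3 = sp.symbols('u1 u2 u3', positive=True)
p, q1, q2 = 2, 1, 2  # illustrative choices (r = 3)

# Inversion of u_i = (x_i/x_{i+1}^{q_i})^{1/p}, u_r = x_r^{1/p}, as in the display above:
# x_k = u_k^p * u_{k+1}^{p q_k} * u_{k+2}^{p q_k q_{k+1}} * ...
subs = {x1: u1**p * u2**(p*q1) * u3**(p*q1*q2),
        x2: u2**p * u3**(p*q2),
        x3: u3**p}

# Sanity check: u_i^p indeed recovers x_i / x_{i+1}^{q_i}.
assert sp.simplify(subs[x1] / subs[x2]**q1 - u1**p) == 0
assert sp.simplify(subs[x2] / subs[x3]**q2 - u2**p) == 0

# A monomial x^n of P~ becomes a monomial u^alpha of P; e.g. n = (1, 0, 1):
print(sp.expand((x1 * x3).subs(subs)))   # u1**2 * u2**2 * u3**6, i.e. alpha = (2, 2, 6)
```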
Note that, if x^ny^j≠x^n'y^j', the transformation applied to these monomials gives u^αy^j≠u^α'y^j'. For the rest of this section, and also for Sections <ref>, <ref> and <ref>, we assume that the field K is algebraically closed, hence K=L=K. If for all i, q_i=0, namely if u_i=x_i^1/p, then any ỹ_0=f/g with f,g∈ K[[u]] is algebroid. Indeed, let θ_p denote a primitive pth root of unity. We set: P̃(u,y) := ∏_i=1,…,r∏_k_i=0,…,p-1g(θ_p^k_1u_1,…,θ_p^k_ru_r) (y-ỹ_0(θ_p^k_1u_1,…,θ_p^k_ru_r)) = ∏_i=1,…,r∏_k_i=0,…,p-1[g(θ_p^k_1u_1,…,θ_p^k_ru_r) y-f(θ_p^k_1u_1,…,θ_p^k_ru_r)]. Note that P̃(u,ỹ_0)=0. Moreover, since P̃(u_1,…,θ_pu_i,…,u_r,y)=P̃(u,y) for any i=1,…,r, we conclude that P̃∈ K[[x]][y]. Consequently, from now on, we consider the case where q_i≠ 0 for at least one i∈{1,…,r}. Let us denote by τ the number of variables in s, and so r-τ is the number of variables in t. We consider y_0=∑_m∈ℕ^τ, n∈ℕ^r-τ c_m ,ns^mt^n =∑_n∈ℕ^r-τ c_n(s) t^n such that c_0,0≠ 0 which satisfies an equation P(s,t,y)=0 where P agrees conditions (i), (ii) and (iii) of Lemma <ref>. The series c_n(s)∈ K[[s]], n∈ℕ^r-τ, are all algebraic over K(s), and lie in a finite extension of K(s). We consider y_0 =∑_n∈ℕ^r-τ c_n(s) t^n root of a non-trivial polynomial P(s,t,y)=∑_l∈ℕ^r-τ P_l(s,y) t^l∈ K[s,y][[t]] which satisfies conditions (i), (ii) and (iii). We proceed by induction on ℕ^r-τ ordered by ≤_ grlex. Given some n∈ℕ^r-τ, we set y_0=z̃_n+c_nt^n+y_n with z̃_n=∑_β<_grlexn c_βt^β, y_n=∑_β>_grlexn c_βt^β, (and z_0:=0 which corresponds to the initial step of the induction). We assume that the coefficients c_β of z̃_n belong to a finite extension L_n of K(s). We set Q_n(t,y):=P(s,t,z̃_n+y)∈ L_n[y][[t]] and we denote it by: Q_n(t,y)=∑_l≥0Q_n,l(y) t^l. We claim that w_t(P)=w_t(Q_n). This is clear if n=0. For n>_ grlex0, let l_0:=w_t(P). We have Q_n(t,y)=P_l_0(s, z̃_n+y)t^l_0+⋯ =( ∑_j=0^d 1/j!∂^j P_l_0/∂ y^j(s,y)z̃_n^j )t^l_0+⋯ Let d_l_0 :=_y P_l_0: the coefficient of y^d_l_0 in the previous parenthesis is not zero for j=0 but zero for j≥ 1. Namely, it is the coefficient of P_l_0(s,y), which is of the form a(s)y^d_l_0t^l_0 and therefore cannot overlap with other terms. By Taylor's formula, we have that: Q_n(t,Ct^n+y)=∑_l≥_ grlexl_0∑_j=0^d 1/j!∂^j Q_n,l/∂ y^j(0) (Ct^n+y)^j t^l. Recall that y_n∈ K[[s]][[t]] with w_t(y_n)>_grlexn. Then Q_n(t,Ct^n+y_n)≠ 0 as a polynomial in C (otherwise P would have more than d roots). Necessarily, w_t( Q_n(t,Ct^n+y_n)) is of the form ω=l_1+j_1 n. Indeed, let us consider ω:=min_l,j{l+j n | ∂^j Q_n,l/∂ y^j(0)≠ 0}, and among the (l,j)'s which achieve this minimum, consider the term with the biggest j. This term cannot be cancelled. The correspondent coefficient of t^ω in Q_n(t,Ct^n+y_n) is a nonzero polynomial in C of the form: ∑_l_k+j_k n=ω1/j_k!∂^j_k Q_n,l_k/∂ y^j_k(0) C ^j_k. Since y_0 is a root of P, this polynomial needs to vanish for C=c_n, which proves by the induction hypothesis that c_n is itself algebraic over K(s). Without loss of generality, we may assume that y_0 is a simple root of P, hence, ∂ P/∂ y(s,t,y_0) ≠ 0. With the same notations as above, we consider n_0:= w_t(∂ P/∂ y(s,t,y_0)) ∈ℕ^r-τ. For any n>_grlexn_0, ∂ Q_n/∂ y(t,0)=∂ P/∂ y(s,t,z̃_ n) and w_t(∂ Q_n/∂ y(t,0)-∂ P/∂ y(s,t,y_0))=w_t(∂ P/∂ y(s,t,z̃_ n)-∂ P/∂ y(s,t,y_0))≥_grlexn>_grlexn_0. So w_t(∂ Q_n/∂ y(t,0))=n_0. By Taylor's formula: Q_n(t,Ct^n+y_n)=∑_j=0^d 1/j!∂^j Q_n/∂y_n^j(t,0) (Ct^n+y)^j. We have: w_t(∂ Q_n/∂ y(t,0) (Ct^n+y_n))= n+n_0, and for any j≥ 2: w_t(∂^j Q_n/∂ y^j(t,0) (Ct^n+y_n)^j)≥_grlex 2n>n+n_0. 
We deduce by (<ref>) that w_t(Q_n(t,0)) ≥_grlexn+n_0 since, otherwise, Q_n(t,Ct^n+y_n) could not vanish at C=c_n. Let us prove by induction on n∈ℕ^r-τ ordered by ≤_ grlex, n≥_ grlexn_0, that the coefficients c_l of t^l in z̃_n all belong to L_ n_0=K(s,c_0,…,c_n_0). The initial case is clear. Assume that the property holds for less than some given n. Let us denote ∂ Q_n/∂ y(t,0)=a_n_0t^n_0+R(t) with w_t(R(t)) >_grlexn_0, a_n_0≠ 0, and Q_n(t,0)=b_n+n_0t^n+n_0+S(t) with w_t(S(t)) >_grlexn+n_0. By (<ref>) and the induction hypothesis, a_n_0 and b_n+n_0 belong to L_n_0. Looking at the coefficient of t^n+n_0 in (<ref>) evaluated at C=c_n, we get: a_n_0c_n +b_n+n_0=0. Hence we obtain that c_n∈ L_n_0=K(s,c_0,…,c_n_0) for all n>_ grlexn_0. Let us recall that A(n) denotes the predecessor element of n in (ℕ^r,≤_grlex). The following lemma will be used in Section<ref> in order to apply the results of Section <ref>. Let d, m̃^0, ñ^0, q, p and P be as above (see (<ref>) and (<ref>)). As in the proof of the previous lemma, we set l_0:=w_t(P). We resume the notations of Lemma <ref>. For k=1,…,σ, with s_k=(u_i_k,…,u_j_k-1), we denote e_s_k:=1/q_i_kq_i_k+1⋯ q_j_k-1+1/q_i_k+1⋯ q_j_k-1+⋯ + 1/ q_j_k-1, and ñ^0,s_k (respectively m̃^0,s_k), the multi-index obtained from ñ^0 (respectively m̃^0), by restriction to the components corresponding to the variables in s_k. Likewise, we set ñ^0,t_k and m̃^0,t_k corresponding to the variables in t_k for k=0,…, σ. Let n∈^r-τ, then there exists T_n∈ K[s,( C_β)_β≤_grlexn]∖{0} such that T_n(s,c_0,…,c_A(n),c_n)=0, T_n(s,c_0,…,c_A(n),C_n)≢0 with _C_βT_n≤ d, _ sT_n≤( |l_0|+d |n| )a+b, where a:=∑_k=1^σ e_s_k, b:=ε(∑_k=1^σ |ñ^0,s_k|-∑_k=1^σñ^0,t_k_j_k e_s_k)+∑_k=1^σ |m̃^0,s_k|-∑_k=1^σm̃^0,t_k_j_k e_s_k, with ñ^0,t_k_j_k (respectively m̃^0,t_k_j_k) the first component of ñ^0,t_k (respectively m̃^0,t_k), and ε:={[ 0 if ∑_k=1^σ |ñ^0,s_k|-∑_k=1^σñ^0,t_k_j_k e_s_k≤ 0,; d if ∑_k=1^σ |ñ^0,s_k|-∑_k=1^σñ^0,t_k_j_k e_s_k> 0. ]. Resuming the notations and computations of the previous lemma (see (<ref>) to (<ref>)), c_n is a root of a nonzero polynomial in C of the form: ∑_l_k+p_k n=ω1/p_k!∂^p_k Q_ n,l_k/∂ y^p_k(0) C ^p_k where ω:=w_t( Q_n(t,Ct^n+y_n))=l_1+p_1 n≤_grlexl_1+d n. Let us denote by T_n the polynomial obtained from the preceding expression by substituting C_n to C and C_β to c_β for β<_grlexn. More precisely, if we set H_n(s,t,(C_β)_β≤_grlexn,y)= P(s,t,∑_β≤_grlexn C_βt^β+y ) =∑_l∈ℕ^r-τ H_n,l(s,(C_β)_β≤_grlexn,y)t^l then T_n(s,(C_β)_β≤_grlexn):=H_n,ω(s,(C_β)_β≤_grlexn,0). Since w_t(Q_ n)=w_t(P) by (<ref>), we observe that l_0=min_≤_grlex{l | ∃ p, ∂^p Q_ n,l/∂ y^p(0)≠ 0 }. Let p_0 = min{p | ∂^p Q_ n,l_0/∂ y^p(0)≠ 0 }. Then the coefficient of C^p_0t^l_0 +p_0n in the expansion of Q_ n(t,C t^n+y_n) is not zero. Since we have that: Q_ n(t,Ct^n+y_n)=∑_l≥0∑_j=0^d 1/j!∂^j Q_ n,l/∂ y^j(0) (Ct^n+y_n)^j t^l, the term 1/p_0!∂^p_0 Q_ n,l_0/∂ y^p_0(0) C ^ p_0 t^l_0+p_0n cannot overlap with other terms since the latter will necessarily be of the form 1/(p-p_0)!p_0!∂^p Q_ n,l/∂ y^p(0) C ^ p_0 t^l+p_0ny_n^p-p_0 with l≥_grlexl_0, p≥ p_0 and w_t(y_n)>_grlexn. (see (<ref>)). So, ω≤_grlexl_0+p_0n≤_grlexl_0+dn. Let us detail the expression of the connection between P and Q_ n. 
We denote P(s,t,y)=∑_l∈ℕ^r-τ(∑_k∈ℕ^τ∑_j=0^d a_k,l,js^ky^j) t^l, and we get: Q_ n(s,t,y) =P(s,t,z̃_n+y ) =∑_l∈ℕ^r-τ(∑_k∈ℕ^τ∑_j=0^d a_k,l,js^k(∑_β<_grlexn c_βt^β+y)^j) t^l =∑_l∈ℕ^r-τ(∑_k∈ℕ^τ∑_j=0^d a_k,l,js^k(∑_|j|=jj!/j!(∏_β<_grlexn c_β^j_β) y^j_nt^g(j)-j_nn)) t^l =∑_l∈ℕ^r-τ∑_k∈ℕ^τ∑_j=0^d∑_|j|=j a_k,l,js^kj!/j!(∏_β<_grlexn c_β^j_β) y^j_nt^l+g(j)-j_nn where j=(j_0,…,j_n) and g(j) is as in Notation <ref>. Next, we evaluate y at C t^n+y_n and we consider the (l,j)'s such that l+g(j)=ω for which the coefficient of t^ω is the non-trivial polynomial of which c_n is a root. Then, the multi-indices l involved are such that l≤_grlexl_0+dn. Consider such a monomial s^ kt^ly^j written as u^αy^j as in (<ref>). Recall that the elements of the support of P satisfy Condition (iii) of Lemma <ref>: for any k=1,…,σ, for any u_i∈s_k, α_i-(m̃^0_i+jñ_i^0)≤α_i+1-(m̃^0_i+1+jñ_i+1^0)/q_i. For s_k=(u_i_k,…,u_j_k-1) and t_k=(u_j_k,…,u_i_k+1-1), we claim that for any i=i_k,…,j_k-1, α_i≤α_j_k/q_iq_i+1⋯ q_j_k-1+j ( ñ^0_i-ñ^0_j_k/q_iq_i+1⋯ q_j_k-1)+m̃^0_i-m̃^0_j_k/q_iq_i+1⋯ q_j_k-1. The case i=j_k-1 is given by Condition (iii). Suppose that the formula holds until i+1, i.e. α_i+1≤α_j_k/q_i+1⋯ q_j_k-1+j ( ñ^0_i+1-ñ^0_j_k/q_i+1⋯ q_j_k-1)+m̃^0_i+1-m̃^0_j_k/q_i+1⋯ q_j_k-1. Since, by Condition (iii), we have α_i≤α_i+1/q_i+j(ñ_i^0-ñ_i+1^0/q_i)+m̃_i^0-m̃_i+1^0/q_i, we obtain the formula for α_i as expected. Now, we consider the sum for i=i_k,…,j_k-1 of these inequalities (<ref>): ∑_i=i_k^j_k-1α_i≤α_j_ke_s_k+j(|ñ^0,s_k|-ñ^0 _j_ke_s_k)+|m̃^0,s_k|-m̃^0 _j_ke_s_k. Note that ñ^0 _j_k=ñ^0,t_k_j_k and m̃^0 _j_k=m̃^0,t_k_j_k. Moreover, α_j_k is equal to some l_γ component of l, so α_j_k≤ |l_0|+d|n|. So, ∑_i=i_k^j_k-1α_i≤(|l_0|+d|n|)e_s_k+j(|ñ^0,s_k|-ñ^0,t_k_j_ke_s_k)+|m̃^0,s_k|-m̃^0,t_k_j_ke_s_k. Taking the sum for k=1,…,σ, we obtain: |k|≤(|l_0|+d|n|)∑_i=1^σe_s_k+j(∑_i=1^σ|ñ^0,s_k|-∑_i=1^σñ^0,t_k_j_ke_s_k)+∑_i=1^σ|m̃^0,s_k|-∑_i=1^σm̃^0,t_k_j_ke_s_k. Since 0≤ j≤ d, we finally obtain: |k|≤(|l_0|+d|n|)∑_i=1^σe_s_k+ε(∑_i=1^σ|ñ^0,s_k|-∑_i=1^σñ^0,t_k_j_ke_s_k)+∑_i=1^σ|m̃^0,s_k|-∑_i=1^σm̃^0,t_k_j_ke_s_k.

From the previous proof, we observe that, for any monomial s^ kt^ly^j in the support of a polynomial P which satisfies the conditions of Lemma <ref>, one has that: | k|≤ a | l|+b, where a and b are as in Lemma <ref>. To see this, use α_j_k≤ |l| in place of α_j_k≤ |l_0|+d|n| in (<ref>).

For r=2, let p,q∈ℕ^* and ñ^0=(ñ^0_1,ñ^0_2)∈ℤ^2.

* Let us consider: ỹ_0=(x_1/x_2^q)^ñ^0_1/px_2^ñ^0_2/p∑_i,j=0^p-1(1/(1-x_2))(x_2^q/(x_2^q-x_1)) (x_1/x_2^q)^i/p x_2^j/p∈𝒦_2. The series ỹ_0 is algebroid, even algebraic, since it is a finite sum and product of algebraic series. Hence, (u_1,u_2)=( (x_1/x_2^q)^1/p, x_2^1/p)=(s,t). Moreover, it has a full support: {1/pñ^0+(k/p, (l-qk)/p) | (k,l)∈ℕ^2 }. (Figure: the support of ỹ_0, a translated rational polyhedral region in the exponent plane.)

* Let us consider ỹ_0=(x_1/x_2^q)^ñ^0_1/px_2^ñ^0_2/p(1/(1-x_2^1/p)) exp((x_1/x_2^q)^1/p) ∈𝒦_2. The series ỹ_0 is transcendental over K[[x_1,x_2]]. Indeed, with the same notations as above, ỹ_0=s^ñ^0_1/pt^ñ^0_2/p(1/(1-t))exp(s) is algebroid if and only if exp(s) is algebraic by Lemma <ref>. This is clearly not the case. Moreover, ỹ_0 has the same support as above.

In <cit.>, the authors ask whether K((x)) is a Rayner field. The above example with p=1 provides us with two series having the same support, the first belonging to K((x)), and the second not. Following the argument after <cit.>, this shows that K((x)) is not a Rayner field.

§ A NESTED DEPTH LEMMA.

Let d_x, d, δ_x, δ∈ℕ^*.
Given two polynomials P∈ K[x,y]∖{0}, deg_xP≤ d_x, deg_yP≤ d, and Q∈ K[x,y]∖{0}, deg_xQ≤δ_x, deg_yQ≤δ, we denote by R∈K[x] their resultant. It satisfies deg_xR≤ dδ_x+δ d_x. Moreover, in the Bézout identity: AP+BQ=R, one can choose the polynomials A, B ∈K[x,y] which satisfy: deg_xA≤ d_x(δ-1)+δ_x d and deg_yA≤δ-1; deg_xB≤ d_xδ+δ_x(d-1) and deg_yB≤ d-1.

We consider the following linear map: φ: K(x)[y]_δ× K(x)[y]_d → K(x)[y]_d+δ; (A,B) ↦ AP+BQ, where K(x)[y]_n denotes the K(x)-vector space of polynomials of degree less than n in y. The matrix M of φ in the standard basis {(y^i,0)}∪{(0,y^j)} and {y^k} is the Sylvester matrix of P and Q. The polynomial R∈ K[x] is its determinant. So, deg_xR≤ dδ_x+δ d_x. Let M' be the matrix of cofactors of M. From the relation M. ^tM'=R Id_d+δ, one deduces the Bézout identity AP+BQ=R, the coefficients of A and B being minors of M of maximal order minus 1.

Let 𝔄 be a domain and 𝔎 its field of fractions. Given n∈ℕ, n≥ 2, we consider an n× n matrix M=(m_i,j) with coefficients in 𝔄. We suppose that M (as a matrix with coefficients in 𝔎) has rank n-p for some 1≤ p<n. Then there exists a vector V∈𝔄^n∖{0} whose nonzero coefficients are equal, up to sign ±, to minors of order n-p of M and such that M.V=0. Without loss of generality, we can suppose that the minor of order n-p, say Δ, given by the first n-p rows and columns is not zero. Denote V:=(Δ_1,…, Δ_n). For k>n-p+1, set Δ_k:=0. For k=n-p+1, set Δ_k:=(-1)^n-p+1Δ≠ 0. For k< n-p+1, we set Δ_k equal to (-1)^k times the minor of M given by the first n-p rows and all but the k-th of the first n-p+1 columns. Denote M.V:=(c_1,…,c_n). We claim that M.V=0. Indeed, c_1= ∑_j=1^n-p+1 m_1,jΔ_j which is the determinant of the (n-p+1)×(n-p+1)-matrix (δ_i,j) with δ_i,j=m_i,j for 1≤ i≤ n-p and 1≤ j≤ n-p+1, and δ_n-p+1,j=m_1,j for 1≤ j≤ n-p+1. This determinant vanishes since it has two identical rows. Similarly, we have that c_2=⋯=c_n-p=0. Now, c_n-p+1=∑_j=1^n-p+1 m_n-p+1,jΔ_j, which is equal to a minor of order n-p+1 of M. It vanishes since M has rank n-p. Similarly, c_n-p+2=…=c_n=0.

Let 𝔄 be a domain and 𝔎 its field of fractions. Let P_1,P_2∈𝔄[y]∖{0} of positive degrees d_1≥ d_2 respectively. The Sylvester matrix of P_1 and P_2 has rank at least d_1. Moreover, it has rank d_1 if and only if aP_1=BP_2 for some a∈𝔄 and B∈𝔄[y]∖{0}. In this case, one can take a=q_d_2^d_1-d_2 + 1 (where q_d_2 is the coefficient of y^d_2 in P_2) and the coefficients of such a polynomial B can be computed as homogeneous polynomial formulas in the coefficients of P_1 and P_2 of degree d_1-d_2+1, each monomial consisting of d_1-d_2 coefficients of P_2 times 1 coefficient of P_1. As in the proof of Lemma <ref>, we denote by M_P_1,P_2 the Sylvester matrix of P_1 and P_2. By definition, its d_1 columns corresponding to the coefficients of y^lP_2, l=0,…,d_1-1, being upper triangular are linearly independent (and the same holds for the d_2 columns corresponding to the coefficients of y^kP_1). Hence, M_P_1,P_2 has rank at least max{d_1,d_2}=d_1. Moreover, an equality aP_1=BP_2 translates exactly into a linear relation between the column corresponding to P_1 and the columns corresponding to y^lP_2 for l=0,…,d_1-d_2. In this case, the linear relation repeats mutatis mutandis between the column corresponding to y^k P_1 and the columns corresponding to y^lP_2 for l=k,…,d_1-d_2+k, corresponding to an equality ay^kP_1=y^kBP_2. Let us consider the submatrix N_P_1,P_2 of M_P_1,P_2 consisting of the column corresponding to P_1 and the columns corresponding to y^lP_2 for l=0,…,d_1-d_2. It has rank d_1-d_2+1.
By the previous lemma, there exists a nonzero vector in the kernel of N_P_1,P_2, given by minors of order d_1-d_2+1. More precisely, we are in the case of a Cramer system encoding an equality BP_2 = aP_1, with in particular a=q_d_2^d_1-d_2+1 corresponding to the determinant of the matrix of the linear map B↦ BP_2. By Cramer's rules, the coefficients of B are computed as determinants which indeed give homogeneous polynomial formulas with monomials consisting of d_1-d_2 coefficients of P_2 and 1 coefficient of P_1.

Let d_x, d, δ_x, δ∈ℕ^* and P, Q∈ K[x,y]∖{0}, deg_xP≤ d_x, deg_yP≤ d, deg_xQ≤δ_x, deg_yQ≤δ. For any series c_0 ∈ K[[x]] such that P(x,c_0)=0 and Q(x,c_0)≠ 0, one has that ord_xQ(x,c_0)≤δ_xd+ d_xδ.

Let c_0 be a series as in the statement of Lemma <ref>. We consider the prime ideal ℑ_0:={R(x,y)∈ K[x,y] | R(x,c_0)=0}. Since ℑ_0≠ (0), dim(K[x,y]/ℑ_0)=trdeg_KFrac(K[x,y]/ℑ_0)≤ r. But, in Frac(K[x,y]/ℑ_0), the elements x_1,…,x_r are algebraically independent (if not, we would have T(x_1,…,x_r)=0 for some non-trivial T∈ K[X], i.e. T(x_1,…,x_r)∈ℑ_0, a contradiction). Thus, ℑ_0 is a height one prime ideal of the factorial ring K[x,y]. It is generated by an irreducible polynomial P_0(x,y)∈ K[x,y]. We set d_x,0:=deg_x P_0 and d_y,0:=deg_y P_0. Note also that, by factoriality of K[x,y], P_0 is also irreducible as an element of K(x)[y]. Let P be as in the statement of Lemma <ref>. One has that P=SP_0 for some S∈ K[x,y]. Hence d_x,0≤ d_x and d_y,0≤ d. Let Q∈ K[x,y] be such that Q(x,c_0)≠ 0 with deg_x Q≤δ_x, deg_yQ≤δ. So P_0 and Q are coprime in K(x)[y]. Their resultant R(x) is nonzero. One has the following Bézout relation in K[x][y]: A(x,y)P_0(x,y)+B(x,y)Q(x,y)=R(x). We evaluate at y=c_0: 0+B(x,c_0)Q(x,c_0)=R(x). But, by Lemma <ref>, deg_x R ≤ d_y,0δ_x+ δ d_x,0≤ dδ_x+ δ d_x. Hence, one has that: ord_x Q(x,c_0)≤ ord_xR ≤ deg_x R≤ dδ_x+ δ d_x.

Let i, d_x, d, δ_x, δ∈ℕ, d≥ 2, δ≥ 1. There exists ω(i,d_x, d, δ_x, δ)∈ℕ minimal such that: for any j=0,…,i, given c_j=∑_n∈ℕ^r c_j,nx^n∈ K[[x]] power series satisfying some equations P_j(x,c_0,…,c_j)=0 where P_j∈ K[x,z_0,z_1,…,z_j ]∖{0}, deg_xP_j≤ d_x, deg_z_kP_j≤ d for k=0,…,j, and P_j (x,c_0,…,c_j-1,z_j)≢0, and given Q_i∈ K[x,z_0,z_1,…,z_i ]∖{0}, deg_xQ_i≤δ_x, deg_z_jQ_i≤δ for j=0,…,i a polynomial such that Q_i(x,c_0,c_1,…,c_i)≠ 0, one has that ord_xQ_i(x,c_0,c_1,…,c_i) ≤ω(i,d_x, d, δ_x, δ). Moreover, for δ≥3: ω(i,d_x, d, δ_x, δ)≤ (2.3^d^i-1+⋯+d^2+d+1 -2^i3^d^i-1+⋯+d^2+d-(i-1)) d^d^i-1+⋯+d^2+d+1 d_xδ^d^i+ 2^i.3^d^i-1+⋯+d^2+d-(i-1) d^d^i-1+⋯+d^2+d+2δ_xδ^d^i-1 . So, for d≥ 3: ω(i,d_x, d, d_x, d )≤ 2.3^d^i-1+⋯+d^2+d+1 d_x d^d^i+⋯+d^2+d+1 . Finally, for any ε>0, there is δ_ε such that, for δ≥δ_ε: ω(i,d_x, d, δ_x, δ)≤ ( 2.(2+ε)^d^i-1+⋯+d^2+d+1 - (1+ε)^i.(2+ε)^d^i-1+⋯+d^2+d-(i-1))d^d^i-1+⋯+d^2+d+1 d_xδ^d^i + (1+ε)^i.(2+ε)^d^i-1+⋯+d^2+d-(i-1) d^d^i-1+⋯+d^2+d+2δ_xδ^d^i-1 , and for d≥δ_ε: ω(i,d_x, d, d_x, d ) ≤ 2.(2+ε)^d^i-1+⋯+d^2+d+1 d^d^i+d^i-1+⋯+d^2+d+1 d_x.

We proceed by induction on i∈ℕ, the case i=0 being Lemma <ref> where we set d^i-1+⋯+d^2+d+1:=0, d^i-1+⋯+d^2+d+2:=d^i-1+⋯+d^2+d+1+1=1 and d^i-1+⋯+d^2+d-(i-1):=0 and where we get: ord_xQ_0(x,c_0)≤δ_xd+ d_xδ. Suppose that the property holds until some rank i-1≥ 0, and consider polynomials P_i and Q_i as in the statement of the theorem. Let R_1 be the resultant of P_i and Q_i with respect to z_i, and the following Bézout identity according to Lemma <ref> (where x there stands for x or z_j, j=0,…,i-1, here): A_1P_i+B_1Q_i=R_1. There are two cases.
If R_1(x,c_0,…,c_i-1)≠ 0, since R_1∈ K[x,z_0,…,z_i-1] with _xR_1≤ d_x+̣_̣xd, _z_jR_1≤ 2d $̣ forj=1,…,i-1, we deduce from the induction hypothesis thatord_x R_1(x,c_0,…,c_i-1)≤ω(i-1,d_x,d,d_x+̣_̣xd, 2d )̣. So, by the Bézout identity:ord_x Q_i(x,c_0,…,c_i)≤ord_xR_1(x,c_0,…,c_i-1) ≤ω (i-1,d_x,d,d_x+̣_̣xd, 2d )̣.IfR_1(x,c_0,…,c_i-1)=0, thenB_1(x,c_0,…,c_i-1,c_i)=0. There are several sub-cases. If R_1(x,c_0,…,c_i-1)=0, then there exist A,B∈ K[x,z_0,…,z_i] such that B(x,c_0,…,c_i-1,c_i)=0, B(x,c_0,…,c_i-1,z_i)≢0 and A(x,c_0,…,c_i-1,z_i)P_i(x,c_0,…,c_i-1,z_i)+B(x,c_0,…,c_i-1,z_i) Q_i(x,c_0,…,c_i-1,z_i)=0 with _xB≤ d_x+̣_̣x(d-1), _z_jB≤ (2d-1) $̣ forj=1,…,i-1, and_z_i B≤ d-1. If B_1(x,c_0,…,c_i-1,z_i)≢0, we take A=A_1 and B=B_1, noticing by Lemma <ref> that _xB_1≤ d_x+̣_̣x(d-1), _z_jB_1≤ (2d-1) $̣ forj=1,…,i-1, and_z_i B_1≤ d-1. IfB_1(x,c_0,…,c_i-1,z_i)≡ 0, necessarilyA_1(x,c_0,…,c_i-1,z_i)≡ 0. Let us denoteP̃_i:=P_i(x,c_0,…,c_i-1,z_i)andQ̃_i:=Q_i(x,c_0,…,c_i-1,z_i), henceP̃_i,Q̃_i∈ K[x,c_0,…,c_i-1][z_i], with degreesd̃andinz_irespectively. Note thatd̃≥ 1and≥ 1(if not,R_1(x,c_0,…,c_i-1)≠ 0). LetM_P̃_i,Q̃_ibe the Sylvester matrix ofP̃_iandQ̃_i, andd̃+-pits rank. Hence,p≥ 1. Suppose thatp=1. Let us denote byM'_P̃_i,Q̃_ithe matrix of cofactors ofM_P̃_i,Q̃_i, and by^tM'_P̃_i,Q̃_iits transpose. At least one of the columns of^tM'_P̃_i,Q̃_iis not zero. Since we have thatM_P̃_i,Q̃_i.^tM'_P̃_i,Q̃_i=0, this column determines a non-trivial relationÃP̃_i+B̃Q̃_i=0where the coefficients ofÃ,B̃are given by the coefficients of this column. Moreover,B̃(x,c_0,…,c_i-1,c_i)=0sinceP̃_i(x,c_0,…,c_i-1,c_i)=0andQ̃_i(x,c_0,…,c_i-1,c_i)≠ 0, andB̃(x,c_0,…,c_i-1,z_i)≢0(if not, we would haveÃ(x,c_0,…,c_i-1,z_i)≡ 0sinceP̃_i(x,c_0,…,c_i-1,z_i)≢0). The coefficients ofB̃are homogeneous polynomial formulas incoefficients ofP̃_iandd̃-1coefficients ofQ̃_i. Lifting these formulas toK[x,z_0,…,z_i-1,z_i]by replacing thec_j's by thez_j's, we obtainAandBwith_xB≤ d_x +_̣x(d̃-1), _z_jB≤ d +(̣d̃-1)forj=1,…,i-1, and_z_i B≤d̃-1. We conclude since≤$̣ and d̃≤ d. Suppose that p≥ 2. The columns corresponding to the coefficients of the z_i^k P̃_i's, k=0,..,-1, are linearly independent (since they form an upper triangular system). We complete them with d̃-p columns corresponding to the coefficients of the z_i^k Q̃_i to a maximal linearly independent family. There is a non-zero minor, say Δ, of maximal order +d̃-p of this family. Proceeding as in Lemma <ref>, there is a non-zero vector V in the kernel of M_P̃_i,Q̃_i whose coefficients are minors of order +d̃-p. More precisely, except for Δ, the other minors are obtained by replacing a column of Δ by the corresponding part of another column of M_P̃_i,Q̃_i. Hence, they consist of either d̃-p+1 columns with coefficients of Q̃_i and -1 columns with coefficients of P̃_i, or d̃-p columns with coefficients of Q̃_i and columns with coefficients of P̃_i. We translate the relation M_P̃_i,Q̃_i.V=0 to a non-trivial relation ÃP̃_i+B̃Q̃_i=0 where the coefficients of Ã,B̃ are given by the coefficients de V. Moreover, B̃(x,c_0,…,c_i-1,c_i)=0 since P̃_i(x,c_0,…,c_i-1,c_i)=0 and Q̃_i(x,c_0,…,c_i-1,c_i)≠ 0, and B̃(x,c_0,…,c_i-1,z_i)≢0 (if not, we would have Ã(x,c_0,…,c_i-1,z_i)≡ 0 since P̃_i(x,c_0,…,c_i-1,z_i)≢0). The coefficients of B̃ are homogeneous polynomial formulas in at most coefficients of P̃_i and d̃-p+1 coefficients of Q̃_i. 
Lifting these formulas to K[x,z_0,…,z_i-1,z_i] by replacing the c_j's by the z_j's, since p≥ 2, we obtain A and B with _xB≤ d_x +_̣x(d̃-1), _z_jB≤ d +(̣d̃-1) for j=1,…,i-1, and _z_i B≤d̃-1. We conclude since ≤$̣ andd̃≤ d. We denote byB_1the polynomialBof the previous lemma. In any case, we are in position to replacePbyB_1, with_xB_1≤ d_x+̣_̣x(d-1), _z_jB_1≤ (2d-1) $̣ for j=1,…,i-1, and _z_i B_1≤ d-1. We obtain another Bézout identity: A_2B_1+B_2Q_i=R_2 with R_2 the resultant of B_1 and Q_i with respect to z_i, _xR_2≤ (d_x+̣_̣x(d -1) )+̣_̣x(d -1) = d_x^2+_̣x ((d-1)+̣(d -2)+1), likewise, for j=1,…,i-1, _z_jR_2≤ d ^̣2+(̣(d -1)+̣(d-2)+1). Moreover, [ _xB_2 ≤ (_xB_1)+̣_̣x(_z_iB_1 -1); ≤ (d_x+̣_̣x(d-1))+̣_̣x(d-1 -1)=d_x^̣2+_̣x((̣d-1)+d-2), ] and likewise, for j=1,…,i-1, [ _z_jB_2 ≤ (_z_jB_1)+̣ (_z_iB_1-1) ≤ (2d-1)^̣2+(d-2)=̣d^̣2+(̣(̣d-1)+d-2), ] and _z_iB_2≤_z_i B_1-1≤ d-2. If R_2(x,c_0,…,c_i-1)≠ 0, we proceed as before Lemma <ref>, and we obtain: ord_x Q_i(x,c_0,…,c_i)≤ord_x R_2(x,c_0,…,c_i-1)≤ω(i-1,d_x, d, d_x^2+_̣x ((d-1)+̣(d -2)+1), d ^̣2+(̣(d -1)+̣(d-2)+1)). Note that this new bound for ord_x Q_i(x,c_0,…,c_i-1,c_i) has increased with respect to the previous one, since d≤ (d-1)(+̣1)=(d-1)+̣(d -2)+1 for any d≥ 2, ≥̣1. At worst, one can have repeatedly the second case with successive Bézout identities: A_kB_k-1+B_kQ_i=R_k with R_k(x,c_0,…,c_i-1)=0 where for j=0,…,i-1, {[ _xR_k ≤ d_x^̣k+_̣x(^̣k-1(d-1)+^k-2(d-2)+⋯+(d-(k-1))+(d-k)+1); _z_jR_k ≤ d^k+(^k-1(d-1)+^k-2(d-2)+⋯+(d-(k-1))+(d-k)+1), ]. and with {[ _xB_k ≤ d_x^k+_̣x(^k-1(d-1)+^k-2(d-2)+⋯+(d-k+1)+(d-k)); _z_jB_k ≤ d^k+(^k-1(d-1)+^k-2(d-2)+⋯+(d-k+1)+(d-k)); _z_iB_k ≤ d-k. ]. The greatest bound is obtained for k=d-1, for which B_d-1 has _z_iB_d-1= 1. In this case, B_d-1 has c_i as unique root and Q_i(x,c_0,…,c_i-1,c_i)≠ 0, so R_d(x,c_0,…,c_i-1)≠ 0. We set for n,m∈^*: [ ϕ(n,m) := (n-1)m^n-1+(n-2)m^n-2+⋯+m +1; = ((n-1)m^n-2+(n-2)m^n-3+⋯+2m +1)m+1; = (n-1)m^n+1-nm^n+m^2-m+1/(m-1)^2 for m≠ 1 ] We have for j=0,…,i-1: {[ _xR_d ≤ d_x^̣d+_̣xϕ(d ,)̣; _z_jR_d ≤ d^̣d+ϕ̣(d,)̣, ]. By the induction hypothesis, ord_x R_d(x,c_0,…,c_i-1) is bounded by ω(i-1,d_x,d, d_x^̣d+_̣xϕ(d ,)̣, d^̣d+ϕ̣(d,)̣). We get the corresponding expected bound: ord_x Q_i(x,c_0,…,c_i-1,c_i)≤ω(i-1,d_x,d, d_x^̣d+_̣xϕ(d ,)̣, d^̣d+ϕ̣(d,)̣), which proves the existence of ω(i,d_x,d, _̣x,)̣ with ω(i,d_x,d,_̣x,)≤ω(i-1,d_x,d, d_x^̣d+_̣xϕ(d ,)̣, d^̣d+ϕ̣(d,)̣). To bound ω(i,d_x,d, _̣x,)̣, we need to find estimates for ϕ. First step: for n,m≥ 2, ϕ(n,m)≤ (n-1)m^n. Indeed, ϕ(n,m)=(n-1)m^n+1-nm^n+m^2-m+1/(m-1)^2. For n≥ 2, -nm^n+m^2-m+1≤ 0, so ϕ(n,m)≤(n-1)m^n+1/(m-1)^2 and (n-1)m^n+1/(m-1)^2≤ (n-1) m^n⇔ m/(m-1)^2≤ 1 ⇔ m^2- 3m+1≥ 0 with Δ=5 et m=(3+√(5))/2< 3. This holds for m≥ 3. For m=2, we compute: ϕ(n,2)=(n-1)2^n+1-n2^n+3≤ (n-1)2^n⇔ 3≤ 2^n This holds for n≥ 2. On the other hand, this does not hold for m=1 and n≥ 3. Second step: for n≥ 3, m≥ 2, ϕ(n,m)≤ (2n-3)m^n-1 Indeed, from the first step: [ ϕ(n,m):=(n-1)m^n-1+(n-2)m^n-2+⋯+m +1 = (n-1)m^n-1+ϕ(n-1,m); ≤ (n-1)m^n-1+(n-2)m^n-1; ≤ (2n-3)m^n-1 ] Let ε>0. For n≥ 2, since -nm^n+m^2-m+1≤ 0, the inequality ϕ(n,m)≤ (1+ε)(n-1)m^n-1 is implied by (n-1)m^n+1/(m-1)^2≤ (1+ε)(n-1)m^n-1⇔m^2/(m-1)^2≤ 1+ε. This holds for m large enough, say for m≥ m_ε, since m^2/(m-1)^2 decreases to 1. Now, let us prove the estimates for ω(i,…) by induction on i. For i=0, ω(0,…)≤ d_̣x+ ḍ_x by Lemma <ref>. Suppose that the estimates (<ref>), (<ref>), (<ref>) and (<ref>) hold until some i≥ 0. 
By (<ref>): ω(i+1,d_x,d,_̣x,)≤ω(i,d_x,d, d_x^̣d+_̣xϕ(d ,)̣, d^̣d+ϕ̣(d,)̣) ≤ω(i,d_x,d, d_x^̣d+_̣x (2d-3)^̣d-1, d^̣d+(̣2d-3)^̣d-1) ≤ω(i,d_x,d, d_x^̣d+_̣x 2d^̣d-1, d^̣d+2̣d^̣d-1) ≤ω(i,d_x,d, d_x^̣d+_̣x 2d^̣d-1, 3d^̣d) ≤ (2.3^d^i-1+⋯+d^2+d+1 -2^i3^d^i-1+⋯+d^2+d-(i-1)) d^d^i-1+⋯+d^2+d+1 d_x(3d^̣d)^d^i+ 2^i.3^d^i-1+⋯+d^2+d-(i-1) d^d^i-1+⋯+d^2+d+2 (d_x^̣d+_̣x 2d^̣d-1) (3d^̣d)^d^i-1 ≤ (2.3^d^i+d^i-1+⋯+d^2+d+1 -2^i3^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ 2^i.3^d^i+d^i-1+⋯+d^2+d-(i-1)-1 d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ 2^i+1.3^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1 ≤ (2.3^d^i+d^i-1+⋯+d^2+d+1 -2^i3^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ 1/32^i3^d^i+d^i-1+⋯+d^2+d-(i-1) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ 2^i+1.3^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1 ≤ (2.3^d^i+d^i-1+⋯+d^2+d+1 -2/32^i3^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ 2^i+1.3^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1 ≤ (2.3^d^i+d^i-1+⋯+d^2+d+1 -2^i+13^d^i+d^i-1+⋯+d^2+d-i) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ 2^i+1.3^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1. This proves (<ref>), and also (<ref>) by letting ≤̣d and _̣x≤ d_x. Similarly, given ε>0, we use (<ref>) and (<ref>) with ≥̣_̣ε and, since d-1<d, we get: ω(i+1,d_x,d,_̣x,)≤ω(i,d_x,d, d_x^̣d+_̣x (1+ε)d^̣d-1, (2+ε)d^̣d) ≤ (2.(2+ε)^d^i-1+⋯+d^2+d+1 -(1+ε)^i(2+ε)^d^i-1+⋯+d^2+d-(i-1)) d^d^i-1+⋯+d^2+d+1 d_x((2+ε)d^̣d)^d^i+ (1+ε)^i.(2+ε)^d^i-1+⋯+d^2+d-(i-1) d^d^i-1+⋯+d^2+d+2 (d_x^̣d+_̣x (1+ε)d^̣d-1) ((2+ε)d^̣d)^d^i-1 ≤ (2.(2+ε)^d^i+d^i-1+⋯+d^2+d+1 -(1+ε)^i(2+ε)^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ (1+ε)^i(2+ε)^d^i+d^i-1+⋯+d^2+d-(i-1)-1 d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ (1+ε)^i+1(2+ε)^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1 ≤ (2.(2+ε)^d^i+d^i-1+⋯+d^2+d+1 -(1+ε)^i(2+ε)^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ 1/(2+ε)(1+ε)^i(2+ε)^d^i+d^i-1+⋯+d^2+d-(i-1) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ (1+ε)^i+1(2+ε)^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1 ≤ (2.(2+ε)^d^i+d^i-1+⋯+d^2+d+1 -(1+ε)/(2+ε)(1+ε)^i(2+ε)^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ (1+ε)^i+1.(2+ε)^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1 ≤ (2.(2+ε)^d^i+d^i-1+⋯+d^2+d+1 -(1+ε)^i+1(2+ε)^d^i+d^i-1+⋯+d^2+d-i) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+ (1+ε)^i+1(2+ε)^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1. This proves (<ref>), and also (<ref>) by letting ≤̣d and _̣x≤ d_x. § TOTAL RECONSTRUCTION OF VANISHING POLYNOMIALS FOR SEVERAL ALGEBRAIC SERIES. In the present section, we provide several improvements of <cit.>. §.§ Total reconstruction in the algebraic case. * Let ℱ' and 𝒢' be two strictly increasing finite sequences of pairs (k,j)∈(ℕ^τ×ℕ)_alex* ordered anti-lexicographically: (k_1,j_1) ≤_alex* (k_2,j_2)⇔ j_1 < j_2 or (j_1 = j_2 and k_1 ≤_grlexk_2). We suppose additionally that (k_1,j_1) ≥_alex*(0,1)>_alex*(k_2,j_2) for any (k_1,j_1)∈ℱ' and (k_2,j_2)∈𝒢' (thus the elements of 𝒢' are ordered pairs of the form (k_2,0), and those of ℱ' are of the form (k_1,j_1), j_1≥ 1). We denote d_y'':=max{j, (k,j)∈ℱ'} and d_ s':=max{|k|, (k,j)∈ℱ'∪𝒢'}. * We say that a series y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]] is algebraic relatively to (ℱ',𝒢') if there exists a polynomial P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y']∖{0} such that P(s,y_0')=0. * Let d_y'', d_ s'∈, d_y''≥ 1. 
We say that a series y_0' ∈ K[[s]] is algebraic of degrees bounded by d_y'' and d_ s' if it is algebraic relatively to (ℱ',𝒢') where ℱ' and 𝒢' are the complete sequences of indices (k,j)∈(ℕ^τ×ℕ)_alex* with j≤ d_y'' and |k|≤ d_ s'. Let us consider a series Y_0'=∑_m∈ℕ^τ C_ms^m∈ K[(C_m)_m∈ℕ^τ][[s]] where s and the C_m's are variables. We denote the multinomial expansion of the jth power Y_0'^j of Y_0' by: Y_0'^j=∑_m∈ℕ^τ C_m^(j)s^m. where C_m^(j)∈ K[(C_m)_m∈ℕ^τ]. For instance, one has that C_0^(j)=C_0^j. For j=0, we set Y_0'^0:=1. More generally, for any m and any j≤ |m|, C_m^(j) is a homogeneous polynomial of degree j in the C_k's for k∈ℕ^τ, k≤m, with coefficients in ℕ^*. Now suppose we are given a series y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]]∖{0}. For any j∈ℕ, we denote the multinomial expansion of y_0'^j by: y_0'^j=∑_m∈ℕ^τ c_m^(j)s^m. So, c_m^(j)=C_m^(j)(c_0,…,c_m). Let y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]]∖{0}. * Given a pair (k,j)∈ℕ^τ×ℕ, we call Wilczynski vectorV_k,j (associated to y_0') the infinite vector with components γ_m^k,j with m∈ℕ^τ ordered with ≤_grlex: - if j≥ 1: V_k,j:=(γ_m^k,j)_m∈ℕ^τ with γ_m^k,j={[ =c_m-k^(j) if m≥k; =0 otherwise ]. - otherwise: 1 in the kth position and 0 for the other coefficients, V_k,0:=(0,…,1,0,0,…,0,…). So γ_m^k,j is the coefficient of s^m in the expansion of s^ky_0'^j. * Let ℱ' and 𝒢' be two sequences as in Definition <ref>. We associate to ℱ', 𝒢' and y_0' the (infinite) Wilczynski matrix whose columns are the corresponding vectors V_k,j: M_ℱ',𝒢':=(V_k,j)_(k,j)∈ℱ'∪𝒢' ,ℱ'∪𝒢' being ordered by ≤_alex* as in Definition <ref>. We also define the reduced Wilczynski matrix, M_ℱ',𝒢'^red: it is the matrix obtained from M_ℱ',𝒢' by removing the columns indexed in 𝒢', and also removing the corresponding rows (suppress the kth row for any (k,0)∈𝒢'). This amounts exactly to remove the rows containing the coefficient 1 for some Wilczynski vector indexed in 𝒢'. For (i,j)∈ℱ', we also denote by V_i,j^red the corresponding vectors obtained from V_i,j by suppressing the kth row for any (k,0)∈𝒢' and we call them reduced Wilczynski vectors. The following result is <cit.>: The series y_0' is algebraic relatively to (ℱ',𝒢') if and only if all the minors of order |ℱ'∪𝒢'| of the Wilczynski matrix M_ℱ',𝒢' vanish, or also if and only if all the minors of order |ℱ'| of the reduced Wilczynski matrix M_ℱ',𝒢'^red vanish. Let us give an outline of the reconstruction process of <cit.>. Let ℱ' and 𝒢' be two sequences as in Definition <ref> and y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]]∖{0} be algebraic relatively to (ℱ',𝒢'). Our purpose is to describe the K-vector space whose non-zero elements are the polynomials P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y']∖{0} such that P(s,y_0')=0. The components of the infinite vector computed as M_ℱ',𝒢'· (a_k,j)_(k,j)∈ℱ'∪𝒢' are exactly the coefficients of the expansion of P(s,y_0') in K[[s]]. Let us now remark that, in the infinite vector M_ℱ',𝒢'· (a_k,j)_(k,j)∈ℱ'∪𝒢', if we remove the components indexed by k for (k,0)∈𝒢', then we get exactly the infinite vector M_ℱ',𝒢'^red· (a_k,j)_(k,j)∈ℱ'. The vanishing of the latter means precisely that the rank of M_ℱ',𝒢'^red is less than |ℱ|. Conversely, if the columns of M_ℱ',𝒢'^red are dependent for certain ℱ' and 𝒢', we denote by (a_k,j)_(k,j)∈ℱ' a corresponding sequence of coefficients of a nontrivial vanishing linear combination of the column vectors. Then it suffices to note that the remaining coefficients a_k,0 for (k,0)∈𝒢' are uniquely determined as follows: a_k,0=-∑_(i,j)∈ℱ', i≤k a_i,jc_k-i^(j) . 
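Before describing the choice of a maximal subfamily ℱ” below, let us illustrate the criterion on a toy case. The following is a minimal sketch of our own (plain Python for the truncated Cauchy products, sympy only for the rank and kernel computations): it builds the truncated Wilczynski matrix for τ = 1 and y_0' = 1/(1-s) = 1 + s + s^2 + ⋯, with the hand-picked index sets ℱ' = {(0,1),(1,1)} and 𝒢' = {(0,0)}; a dependence between the columns recovers the witness (1-s)y' - 1 = 0.

```python
import sympy as sp

N = 8                                   # truncation order in s
c = [1] * N                             # c_m = 1: coefficients of y_0' = 1/(1-s)

def mul_trunc(a, b):
    """Truncated Cauchy product of two coefficient lists."""
    return [sum(a[i] * b[m - i] for i in range(m + 1)) for m in range(N)]

def power_trunc(j):
    """Coefficients c_m^(j) of y_0'^j up to order N."""
    out = [1] + [0] * (N - 1)           # y_0'^0 = 1
    for _ in range(j):
        out = mul_trunc(out, c)
    return out

def column(k, j):
    """Truncated Wilczynski vector V_{k,j}: coefficient of s^m in s^k * y_0'^j."""
    cj = power_trunc(j)
    return [cj[m - k] if m >= k else 0 for m in range(N)]

# Columns ordered by <=_alex*: G' = {(0,0)} first, then F' = {(0,1), (1,1)}.
M = sp.Matrix([column(0, 0), column(0, 1), column(1, 1)]).T
print(M.rank())              # 2 < 3 = |F' u G'|: all maximal minors vanish
print(M.nullspace()[0].T)    # (1, -1, 1): the relation 1 - y' + s*y' = 0,
                             # i.e. the witness (1-s)*y_0' = 1
```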
We consider a maximal family ℱ”⊊ℱ' such that the corresponding reduced Wilczynski vectors are K-linearly independent. Proceeding as in Lemma 3.7 in <cit.>, ℱ” is such a family if and only if, in the reduced Wilczynski matrix M_ℱ',𝒢'^red, there is a nonzero minor (A) where A has columns indexed in ℱ” and lowest row with index m such that |m|≤ 2d_s'd_y'' and ℱ” is maximal with this property. Moreover, among such A's, we take one that has its lowest row having an index minimal for ≤_grlex, and we denote the latter index by p̂. For any (k_0,j_0)∈ℱ'∖ℱ”, the family of reduced Wilczynski vectors (V_k,j^red) with (k,j)∈ℱ”∪{(k_0,j_0)} is K-linearly dependent. There is a unique relation: V_k_0,j_0^red =∑_(k,j)∈ℱ”λ_k,j^k_0,j_0 V_k,j^red with λ_k,j^k_0,j_0∈ K. We consider the restriction of M_ℱ',𝒢'^red to the rows of A. For these rows, by Cramer's rule, we reconstruct the linear combination (<ref>). The coefficients λ_k,j^k_0,j_0 of such a linear combination are quotients of homogeneous polynomials with integer coefficients in terms of the entries of these restricted matrix, hence quotients of polynomials in the corresponding c_m's, |m|≤ 2d_s'd_y''. Let P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y']∖{0}. One has P(s,y_0')=0 if and only if (<ref>) holds as well as: ∑_(k,j)∈ℱ”a_k,j V_k,j^red+∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0 V_k_0,j_0^red=0 ⇔∑_(k,j)∈ℱ”a_k,j V_k,j^red+∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0(∑_(k,j)∈ℱ”λ_k,j^k_0,j_0 V_k,j^red)=0 ⇔∑_(k,j)∈ℱ”( a_k,j +∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0λ_k,j^k_0,j_0)V_k,j^red=0 ⇔∀ (k,j)∈ℱ”, a_k,j =-∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0λ_k,j^k_0,j_0, Let ℱ',𝒢',d_s',d_y'', y_0',ℱ” be as above. Then, the K-vector space of polynomials P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y'] such that P(s,y_0')=0 is the set of polynomials such that ∀ (k,j)∈ℱ”, a_k,j =-∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0λ_k,j^k_0,j_0, and ∀ (k,0)∈𝒢', a_k,0=-∑_(i,j)∈ℱ', i≤k a_i,jc_k-i^(j) , where the λ_k,j^k_0,j_0's are computed as in (<ref>) as quotients of polynomials with integer coefficients in the c_m's for |m|≤ 2d_s'd_y''. Note that the set of polynomials P(s,y')∈ K[s,y'] with support in ℱ'∪𝒢' such that P(s,y_0')=0 is a K-vector space of dimension |ℱ'|-|ℱ”|≥ 1. §.§ Total algebraic reconstruction in the non-homogeneous case. Let ℱ',𝒢', d_y'',d_ s' be as in Definition <ref>. §.§.§ First case. Let y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]] be algebraic relatively to (ℱ',𝒢'). Let i, d_s, d' ∈ℕ, d'≥ 3, d_ s'≤ d_ s and d_y''≤ d'. For any j=0,…,i, we consider power series y_j'=∑_m∈^τ c_j,ms^m∈ K[[s]] which satisfy some equations P_j(s,y'_0,…,y'_j)=0 where P_j∈ K[s,z_0,z_1,…,z_j ]∖{0 }, P_j(s,y_0',…,y_j-1',z_j)≢0, _sP_j≤ d_s,_z_kP_j≤ d' for k=0,…,j. In particular, c_m=c_0,m for any m. Let z'=R(s,y'_0,…,y'_i)∈ K[[s]]∖{0}, where R∈ K[s,z_0,z_1,…,z_i ]∖{0 } with _sR≤ d_s,_z_kR≤ d' for k=0,…,i. We want to determine when there is a polynomial P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y']∖{0} such that P(s,y_0')=z' and, subsequently, to reconstruct all such possible P's. Let V be the infinite vector with components the coefficients of z', and V^red the corresponding reduced vector as in Definition <ref>. For ℱ” as in the previous section, we have P(s,y_0')=z' if and only if: ∑_(k,j)∈ℱ”( a_k,j +∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0λ_k,j^k_0,j_0)V_k,j^red= V^red. We want to examine when the vectors (V_k,j^red)_(k,j)∈ℱ” and V^red are linearly dependent. Let N^red be the infinite matrix with columns (V_k,j^red)_(k,j)∈ℱ” and V^red. 
The vectors (V_k,j^red)_(k,j)∈ℱ” and V^red are linearly dependent if and only if all the minors of maximal order of N^red up to the row p with: |p| ≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1 vanish. The vectors (V_k,j^red)_(k,j)∈ℱ” and V^red are linearly dependent if and only if all the minors of N^red of maximal order vanish: see <cit.>. Conversely, we suppose that the vectors are linearly independent. So, there is a minor of N of maximal order which is nonzero. Let p be the smallest multi-index for ≤_grlex such that there is such a nonzero minor of N^red of maximal order with lowest row of index p. Hence, there is a subminor of it based on the columns indexed in ℱ” which is nonzero, say (B). The lowest row of B is at most p. So, by minimality of p̂ (see before (<ref>) in the previous section), p≥_grlexp̂. If p=p̂, then | p|≤ 2d_s'd' and we are done. If p>_grlexp̂, let us denote by p̃ the predecessor of p for ≤_grlex. Then p̃≥_grlexp̂. For any multi-index m∈^r, denote by N_m^red, V_k,j,m^red,V_m^red the truncations up to the row m of N^red,V_k,j^red,V^red respectively. By definition of p, the rank of the matrix N^red_p is |ℱ”|+1, whereas the rank of N^red_p̃ is |ℱ”|. There exists a nonzero vector ((a_i,j)_(i,j)∈ℱ”,-a) of elements of K such that N_p̃^red · ( [ (a_i,j)_(i,j)∈ℱ”; -a ])= 0, where a can be chosen to be 1 since the vectors (V_k,j,p̃^red)_(k,j)∈ℱ” are independent. The components of the resulting vector N_p̃^red · ( [ (a_i,j)_(i,j)∈ℱ”; -1 ]) are exactly the coefficients e_k, (k,0)∉𝒢' and k≤_grlexp̃, of the expansion of ∑_(i,j)∈ℱ”a_i,j s^i (y_0')^j-z'. By computing the coefficients a_k,0 for (k,0)∈𝒢' as: a_k,0=-∑_(i,j)∈ℱ”, k>i a_i,jc_k-i^(j)+f_k, where f_k denotes the coefficient of s^k in z', we obtain the vanishing of the first terms of Q(s,y_0',…,y'_i):=∑_(i,j)∈ℱ”∪𝒢' a_i,js^i(y_0')^j-z' up to p̃. So, w_s(Q(s,y_0',…,y'_i))≥_grlexp and, therefore, (Q(s,y_0',…,y'_i))≥ |p|. On the contrary, we have: N_p^red · ( [ (a_i,j)_(i,j)∈ℱ”; -1 ])≠ 0. From (<ref>) and (<ref>), we deduce that the coefficient e_p of s^p in the expansion of ∑_(i,j)∈ℱ”a_i,j x^i (y_0')^j-z' is nonzero. Observe that this term of the latter series does not overlap with the terms of ∑_(i,0)∈𝒢'a_i,0 s^i since (p,0)∉𝒢'. Therefore, w_s(Q(s,y_0',…,y'_i))=p. In particular, Q(s,y_0',…,y'_i)≠ 0, so the bound (<ref>) in Theorem <ref> applies: |p| ≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1 . Let us return to (<ref>). Let A be the square matrix defined after (<ref>). For any (k,j)∈ℱ”, we denote by A_k,j the matrix deduced from A by substituting the corresponding part of V^red instead of the column indexed by (k,j). Equality (<ref>) holds if and only if the vectors (V_k,j^red)_(k,j)∈ℱ” and V^red are linearly dependent, and by Cramer's rule, one has: ∀ (k,j)∈ℱ”, a_k,j +∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0λ_k,j^k_0,j_0= (A_k,j) /( A). Recall that one determines that (V_k,j^red)_(k,j)∈ℱ” and V^red are linearly dependent by examining the dependence of the finite truncation of these vectors according to Lemma <ref>. Finally, the remaining coefficients a_k,0 for (k,0)∈𝒢' are each uniquely determined as follows: a_k,0=-∑_(i,j)∈ℱ', i≤k a_i,jc_k-i^(j)+f_ k , where f_k denotes the coefficient of s^k in z'. As a conclusion, we obtain the affine space of P(s,y')∈ K[s,y']∖{0} such that P(s,y_0')=z' as a parametric family of its coefficients with free parameters the a_k_0,j_0's for (k_0,j_0)∈ℱ'∖ℱ”. §.§.§ Second case. 
Let _̣ s'∈ and y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]] be algebraic of degrees d'_y' and _̣ s', but not algebraic relatively to (ℱ',𝒢'). Let i, d_s, d' ∈ℕ, d'≥ 3, d_ s'≤ d_ s and d_y''≤ d'. For any j=0,…,i, we consider power series y_j'=∑_m∈^τ c_j,ms^m∈ K[[s]] which satisfy some equations P_j(s,y'_0,…,y'_j)=0 where P_j∈ K[s,z_0,z_1,…,z_j ]∖{0 }, P_j(s,y_0',…,y_j-1',z_j)≢0, _sP_j≤ d_s,_z_kP_j≤ d' for k=0,…,j. In particular, c_m=c_0,m for any m. Let z'=R(s,y'_0,…,y'_i)∈ K[[s]]∖{0}, where R∈ K[s,z_0,z_1,…,z_j ]∖{0 } with _sR≤ d_s,_z_kR≤ d' for k=0,…,j. As in the previous section, our purpose is to determine when there is a polynomial P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y']∖{0} such that P(s,y_0')=z'. Note that such a polynomial is necessarily unique, since y_0' is not algebraic relatively to (ℱ',𝒢'). We consider the corresponding reduced Wilczynski matrix M_ℱ',𝒢'^red. Proceeding as in Lemma 3.7 in <cit.> and using Lemma <ref>, there is a nonzero minor (B) of maximal order where the lowest row of B is indexed by m such that |m|≤(_̣s'+ d'_s)d'_y'. We resume the notations of the previous section. There is a polynomial P such that P(s,y_0')=z' if and only if the vectors (V_k,j^red)_(k,j)∈ℱ' and V^red are K-linearly dependent, since the vectors (V_k,j^red)_(k,j)∈ℱ' are independent. One determines that (V_k,j^red)_(k,j)∈ℱ' and V^red are linearly dependent by examining the dependence of the finite truncation of these vectors according to the following lemma. The vectors (V_k,j^red)_(k,j)∈ℱ' and V^red are linearly dependent if and only if, in the corresponding matrix denoted by N^red, all the minors of maximal order up to the row p with |p| ≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1 vanish. The proof is analogous to that of Lemma <ref>, also using Theorem <ref>. We proceed as in the previous section. For any (k,j)∈ℱ', we denote by B_k,j the matrix deduced from B by substituting the corresponding part of V^red instead of the column indexed by (k,j). If the condition of the previous lemma holds, by Cramer's rule, one has: ∀ (k,j)∈ℱ', a_k,j = (B_k,j) /( B). Then it suffices to note that the remaining coefficients a_k,0 for (k,0)∈𝒢' are each uniquely determined as follows: a_k,0=-∑_(i,j)∈ℱ', i≤k a_i,jc_k-i^(j)+f_ k , where f_k denotes the coefficient of s^k in z'. §.§ Total algebraic reconstruction with several algebraic series. Let i, d_s, d' ∈ℕ, d'≥ 3. For any j=0,…,i, we consider power series y_j'=∑_m∈^τ c_j,ms^m∈ K[[s]] which satisfy some equations P_j(s,y'_0,…,y'_j)=0 where P_j∈ K[s,z_0,z_1,…,z_j ]∖{0 }, P_j(s,y_0',…,y_j-1',z_j)≢0, _sP_j≤ d_s,_z_kP_j≤ d' for k=0,…,j. Let 𝒦' and ℒ', 𝒦'≠∅, be two strictly increasing finite sequences of pairs (k,l)∈(ℕ^τ×ℕ^i+1) ordered anti-lexicographically: (k_1,l_1) ≤_alex* (k_2,l_2)⇔l_1 <_grlexl_2 or (l_1 = l_2 and k_1 ≤_grlexk_2). We suppose additionally that 𝒦'≥_alex*(0,(0,…,0,1))>_alex*ℒ' (thus the elements of ℒ' are ordered tuples of the form (k,0), and those of 𝒦' are of the form (k,l), |l|≥ 1). We set d_y'_j':=max{l_j, (k,l)∈𝒦'} for j=0,…,i, and d_ s':=max{|k|, (k,l)∈𝒦'∪ℒ'}. We assume that d_y'_j'≤ d' for j=0,…,i, and d_ s'≤ d_s. Let us set z=(z_0,…,z_i) and y'=(y_0',…,y_i'). We assume that y'≠0. We want to determine when there is a polynomial P(s,z)=∑_(k,l)∈𝒦'∪ℒ' a_k,ls^kz^l∈ K[s,z]∖{0} such that P(s,y')=0 and, subsequently, to reconstruct all such possible P's. It is a generalization of Section <ref>. For any j=0,…,i, for any l_j∈ℕ, we denote the multinomial expansion of y_j'^l_j by: y_j'^l_j=∑_n_j∈ℕ^τ c_j,n_j^(l_j)s^n_j. 
So the coefficient of s^m in y'^l=y_0'^l_0⋯y_i'^l_i is equal to: c_m^(l):=∑_n_0∈^τ,…,n_i∈^τ, n_0+⋯+n_i=m c_0,n_0^(l_0)⋯ c_i,n_i^(l_i). * Given an ordered pair (k,l)∈ℕ^τ×ℕ^i+1, we call Wilczynski vectorV_k,l the infinite vector with components γ_m^k,l with m∈ℕ^τ ordered with ≤_grlex: - if l≥_grlex (0,…,0,1): V_k,l:= (γ_m^k,l)_m∈ℕ^τ with γ_m^k,l={[ =c_m-k^(l) if m≥k; =0 otherwise ]. - otherwise: 1 in the kth position and 0 for the other coefficients, V_k,0:=(0,…,1,0,0,…,0,…). So γ_m^k,l is the coefficient of s^m in the expansion of s^ky'^l. * Let 𝒦' and ℒ' be two sequences as above. We associate to 𝒦' and ℒ' the (infinite) Wilczynski matrix whose columns are the corresponding vectors V_k,l: M_𝒦',ℒ':=(V_k,l)_(k,l)∈𝒦'∪ℒ' ,𝒦'∪ℒ' being ordered by ≤_alex* as above. We also define the reduced Wilczynski matrix, M_𝒦',ℒ'^red: it is the matrix obtained from M_𝒦',ℒ' by removing the columns indexed in ℒ', and also removing the corresponding rows (suppress the kth row for any (k,0)∈ℒ'). This amounts exactly to remove the rows containing the coefficient 1 for some Wilczynski vector indexed in ℒ'. For (i,l)∈𝒦', we also denote by V_i,l^red the corresponding vectors obtained from V_i,l by suppressing the kth row for any (k,0)∈ℒ' and we call them reduced Wilczynski vectors. There exists a nonzero polynomial with support included in 𝒦'∪ℒ' which vanishes at y' if and only if all the minors of order |𝒦'∪ℒ'| of the Wilczynski matrix M_𝒦',ℒ' vanish, or also if and only if all the minors of order |𝒦'| of the reduced Wilczynski matrix M_𝒦',ℒ'^red vanish. By construction of the Wilczynski matrix M_𝒦',ℒ', the existence of such a polynomial is equivalent to the fact that the corresponding Wilczynski vectors are K-linearly dependent. This is in turn equivalent to the vanishing of all the minors of maximal order of M_𝒦',ℒ'. Suppose that we are given a nonzero vector (a_k,l)_(k,l)∈𝒦'∪ℒ' such that M_𝒦',ℒ'·(a_k,l)_(k,l)∈𝒦'∪ℒ'=0. Observe that, necessarily, the vector (a_k,l)_(k,l)∈𝒦' is also nonzero (since the vectors V_k,0 for (k,0)∈ℒ' are independent). Let us remark that: M_𝒦',ℒ'^red·(a_k,l)_(k,l)∈𝒦'=0 since the latter vector is deduced from the former one by deleting the rows corresponding to (k,0)∈ℒ'. So, the columns of M_𝒦',ℒ'^red are linked, which is equivalent to the vanishing of its minors of maximal order. Conversely, suppose that there exists a nonzero (a_k,l)_(k,l)∈𝒦' such that M_𝒦',ℒ'^red·(a_k,l)_(k,l)∈𝒦'=0. Then, we can complete the list of coefficients (a_k,l)_(k,l)∈𝒦'∪ℒ' by setting: a_k,0=- ∑_(i,l)∈𝒦', i≤k a_i,l c_k-i^(l). There exists a nonzero polynomial with support included in 𝒦'∪ℒ' which vanishes at y' if and only if all the minors of the reduced Wilczynski matrix M_𝒦',ℒ'^red of order |𝒦'| and with lowest row indexed by m with: |m|≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1, vanish. The direct part follows from the previous lemma. Suppose that there is no nonzero polynomial with support included in 𝒦'∪ℒ' which vanishes at y'. So there is a nonzero minor of the reduced Wilczynski matrix M_𝒦',ℒ'^red of order |𝒦'| and with lowest row indexed by m that we assume to be minimal for ≤_grlex. Reasoning as in the proof of Lemma <ref>, we obtain a nonzero polynomial Q(s,z_0,…,z_i) with Supp(Q)⊆𝒦'∪ℒ', such that Q(s,y')≠ 0, and with _s(Q(s,y'))≥ |m|. Since d_y'_j'≤ d' for j=0,…,i, and d_ s'≤ d_s, by Theorem <ref>, we obtain that: _s(Q(s,y'))≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1, which gives the expected result. 
Let us suppose that there is a nonzero polynomial P with support included in 𝒦'∪ℒ' which vanishes at y'. Our purpose is to determine the space of all such polynomials. For this, we consider a maximal family 𝒦”⊊𝒦' such that the corresponding reduced Wilczynski vectors are K-linearly independent. This is equivalent to the fact that, for the matrix consisting of the (V_k,l^red) with (k,l)∈𝒦”, there is a nonzero minor (A) of maximal order and with lowest row indexed by m with |m|≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1. For any (k_0,l_0)∈𝒦'∖𝒦”, the corresponding family of reduced Wilczynski vectors (V_k,l^red) with (k,l)∈ℱ”∪{(k_0,l_0)} is K-linearly dependent. There is a unique relation: V_k_0,l_0^red =∑_(k,l)∈𝒦”λ_k,l^k_0,l_0 V_k,l^red with λ_k,l^k_0,l_0∈ K. which can be computed by Cramer's rule based on (A). The coefficients λ_k,l^k_0,l_0 of such a linear combination are quotients of homogeneous polynomials with integer coefficients in terms of the entries of these restricted matrices, hence quotients of polynomials in the corresponding c_m's, |m|≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1. Let z=(z_0,…,z_i), and P(s,z)=∑_(k,l)∈𝒦'∪ℒ' a_k,ls^kz^l∈ K[s,z]∖{0}. One has P(s,y')=0 if and only if (<ref>) holds as well as: ∑_(k,l)∈𝒦”a_k,l V_k,l^red+∑_(k_0,l_0)∈𝒦'∖𝒦”a_k_0,l_0 V_k_0,l_0^red=0 ⇔∑_(k,l)∈𝒦”a_k,l V_k,l^red+∑_(k_0,l_0)∈𝒦'∖𝒦”a_k_0,l_0(∑_(k,l)∈𝒦”λ_k,l^k_0,l_0 V_k,l^red)=0 ⇔∑_(k,l)∈𝒦”( a_k,l +∑_(k_0,l_0)∈𝒦'∖𝒦”a_k_0,l_0λ_k,l^k_0,l_0)V_k,l^red=0 ⇔∀ (k,l)∈𝒦”, a_k,l =-∑_(k_0,l_0)∈𝒦'∖𝒦”a_k_0,l_0λ_k,l^k_0,l_0. Let 𝒦',ℒ',d_s,d', y',𝒦” be as above. Then, the set of polynomials P(s,z)=∑_(k,l)∈𝒦'∪ℒ' a_k,ls^kz^l∈ K[s,z] such that P(s,y')=0 is the set of polynomials such that ∀ (k,l)∈𝒦”, a_k,l =-∑_(k_0,l_0)∈𝒦'∖𝒦”a_k_0,l_0λ_k,l^k_0,l_0, and ∀ (k,0)∈ℒ', a_k,0=-∑_(i,l)∈𝒦', i≤k a_i,lc_k-i^(j) , where the λ_k,l^k_0,l_0's are computed as in (<ref>) as quotients of polynomials with integer coefficients in the c_m's for |m|≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1 d_s (d')^(d')^i+⋯+(d')^2+d'+1. Note that the set of polynomials P(s,z)∈ K[s,z] with support in 𝒦'∪ℒ' such that P(s,y')=0 is a K-vector space of dimension |𝒦'|-|𝒦”|≥ 1. § RECONSTRUCTION OF AN EQUATION FOR AN ALGEBROID SERIES. §.§ The reconstruction algorithm We resume the notations of Section <ref>, in particular Lemma <ref> and after. In particular, recall that τ is the number of variables in s, and so r-τ is the number of variables in t. Let ℱ and 𝒢 be two strictly increasing sequences of triples (k,l,j)∈ℕ^τ×ℕ^r-τ×ℕ ordered as follows: (k_1,l_1,j_1) ≤_*alex* (k_2,l_2,j_2):⇔ j_1 < j_2 or (j_1 = j_2 and (k_1,l_1) ≤_alex* (k_2,l_2)) with (k_1,l_1) ≤_alex* (k_2,l_2):⇔l_1 <_grlexl_2 or (l_1 = l_2 and k_1 ≤_grlexk_2). We suppose additionally that (k_1,l_1,j_1)≥_*alex*(0,0,1)>_*alex*(k_2,l_2,j_2) for any (k_1,l_1,j_1)∈ℱ and (k_2,l_2,j_2)∈𝒢 (thus the elements of 𝒢 are ordered triples of the form (k_2,l_2,0), and those of ℱ are of the form (k_1,l_1,j_1), j_1≥ 1). Moreover, we assume that there is d∈, d≥ 1, such that j≤ d for any (k,l,j)∈ℱ∪𝒢, and we set d:= max{j | ∃ (k,l,j)∈ℱ∪𝒢}. We say that a series y_0=∑_(m,n)∈ℕ^τ×ℕ^r-τ c_m,ns^mt^n∈ K[[s,t]], c_0,0≠ 0, is algebroid relatively to (ℱ,𝒢) if there exists a polynomial P(s,t,y)=∑_(k,l,j)∈ℱ∪𝒢 a_k,l,js^kt^ly^j∈ K[[s, t]][y]∖{0} such that P(s,t,y_0)=0. For any ℱ,𝒢 satisfying Conditions (i), (ii), (iii) of Lemma <ref>, let us denote by (K[s][[t]][y])_ℱ,𝒢 the subset of polynomials in K[s][[t]][y]∖{0} with support in ℱ∪𝒢. 
The purpose of the following discussion is to make more explicit the conditions in Lemma <ref> for the vanishing of a polynomial P∈(K[s][[t]][y])_ℱ,𝒢 for some ℱ,𝒢 corresponding to (i), (ii), (iii) in Lemma <ref>, at a formal power series y_0∈ K[[s]][[t]]. As we have seen in Section <ref>, one can always assume that y_0=∑_m∈ℕ^τ, n∈ℕ^r-τ c_m ,ns^mt^n =∑_n∈ℕ^r-τ c_n(s) t^n is such that c_0,0≠ 0. Let us consider a series Y_0=∑_n∈ℕ^r-τ (∑_m∈ℕ^τ C_m,ns^m)t^n = ∑_n∈ℕ^r-τC_n(s) t^n∈ K[(C_m , n)_m∈ℕ^τ, n∈ℕ^r-τ][[s]][[t]] where s, t and the C_m,n's are variables. We denote the multinomial expansion of the jth power Y_0^j of Y_0 by: Y_0^j=∑_n∈ℕ^r-τ (∑_m∈ℕ^τ C_m,n^(j)s^m)t^n = ∑_n∈ℕ^r-τC_n^(j)(s) t^n where C_m,n^(j)∈ K[(C_k,l)_k≤m, l≤n] and C_n^(j)(s)∈ K[(C_l(s))_l≤n]⊆ K[(C_k,l)_k≤m, l≤_grlexn][[s]]. We also set Y_0^0:=1. Now, suppose we are given a series y_0=∑_m∈ℕ^τ, n∈ℕ^r-τ c_m, ns^mt^n∈ K[[s,t]] with c_0,0≠ 0. For any j∈ℕ, we denote the multinomial expansion of y_0^j by: y_0^j=∑_m∈ℕ^τ, n∈ℕ^r-τ c_m,n^(j)s^mt^n= ∑_n∈ℕ^r-τc_n^(j)(s) t^n. So, c_m,n^(j)=C_m,n^(j)(c_0,0,…,c_m,n) and c_n^(j)(s)=C_n^(j)(c_0(s),…,c_n(s)). We also set y_0^0:=1. For a polynomial P∈(K[s][[t]][y])_ℱ,𝒢∖{0}, we denote P(s,t,y)=∑_(k,l,j)∈ℱ∪𝒢 a_k,l,js^kt^ly^j =∑_l∈ℕ^r-τ, j=0,..,d a_l,j(s)t^ly^j. A series y_0∈ K[[s]][[t]], y_0=∑_m∈ℕ^τ, n∈ℕ^r-τ c_m ,ns^mt^n =∑_n∈ℕ^r-τ c_n(s) t^n, is a root of P if and only if the following polynomial relations hold when evaluated at the series c_0(s),…, c_n(s): ∀l∈ℕ^r-τ, ∑_j=0,..,d a_l,j(s) C_0^j(s)=- ∑_i<l, j=0,..,d a_i,j(s) C_l-i^(j)(s) . Let us compute: P(s,t,y_0)=∑_i∈ℕ^r-τ, j=0,..,d a_i,j(s)t^iy_0^j =∑_i∈ℕ^r-τ, j=0,..,d a_i,j(s)t^i(∑_n∈ℕ^r-τc_n^(j)(s) t^n) =∑_l∈ℕ^r-τ(∑_i≤l, j=0,..,d a_i,j(s)c_l-i^(j)(s))t^l. So, y_0 is a root of P if and only if, in the latter formula, the coefficient of t^l for each l vanishes, which is equivalent to the vanishing of (<ref>) (noticing that C_0^(j)= C_0^j for all j). Let ℱ,𝒢 be as in Definition <ref> and satisfying Conditions (i), (ii), (iii) of Lemma <ref>. Let y_0=∑_(m,n)∈ℕ^τ×ℕ^r-τ c_m,ns^mt^n=∑_n∈ℕ^r-τc_n(s) t^n∈ K[[s,t]], c_0,0≠ 0, be a series algebroid relatively to (ℱ,𝒢). Let P∈(K[s][[t]][y])_ℱ,𝒢∖{0} be a polynomial such that P(s,t,y_0)=0. We notice that w_t(P) is the index of the first non-trivial relation (<ref>), for ℕ^r-τ ordered with ≤_grlex. Let l̂_0∈^r-τ be such that w_t(P)≤_grlexl̂_0. If w_t(P) is known, then one can take l̂_0=w_t(P). §.§.§ First step For any l∈^r-τ, we denote by ℱ_l' and 𝒢_l' the corresponding sets of tuples (k,j)∈^τ× where (k,l,j)∈ℱ and (k,l,0)∈𝒢 respectively. We denote d'_s,l:=max{|k| | (k,j)∈ℱ_l'∪𝒢_l' } (which is well-defined thanks to Condition (iii) of Lemma <ref>). By (<ref>) in Remark <ref>, we have that: d'_s,l≤ a|l|+b, where a and b are as in Lemma <ref>. Let l≤_grlexl̂_0 (or directly l=w_t(P) if known). As we are interested in the first non trivial relation in (<ref>), we consider its following instance: ∑_j=0,..,d a_l,j(s) C_0^j=∑_(k,j)∈ℱ'_l∪𝒢'_l a_k,l,js^kC_0^j=0 . By Lemma <ref>, there is l≤_grlexl̂_0 such that c_0 satisfies the latter relation, i.e. c_0 is algebraic relatively to (ℱ'_l,𝒢'_l). In particular, c_0 is algebraic relatively to (⋃_l≤_grlexl̂_0ℱ'_l,⋃_l≤_grlexl̂_0𝒢'_l). We denote d'_s:=max_l≤_grlexl̂_0(d'_s,l). Let us now describe the reconstruction method for this first step: * We determine the multi-indices l≤_grlexl̂_0 such that ℱ'_l∪𝒢'_l≠∅. 
* For each l≤_grlexl̂_0 as above, we determine whether c_0 is algebraic relatively to (ℱ'_l,𝒢'_l) by computing the first minors of maximal order of the corresponding Wilczynski matrix M_ℱ'_l,𝒢'_l^red. Proceeding as in <cit.> or Lemma <ref>, it suffices to compute them up to the row indexed by the biggest m∈^τ such that | m|≤ 2 d d'_s. * Let l≤_grlexl̂_0 such that c_0 is algebraic relatively to (ℱ'_l,𝒢'_l). We reconstruct the K-vector space of polynomials corresponding to Equation (<ref>) according to the method in Section <ref>, in particular Lemma <ref>, applied to (ℱ'_l,𝒢'_l) and c_0. We denote by E_l this space. * For each l'<_grlexl, we set a_k, l',j:=0 for (k, l',j)∈ℱ∪𝒢. §.§.§ Second step With the notations of the previous section, let l be such that E_l≠{0}. Let us consider the instances of (<ref>) corresponding to the l' such that: l<_grlexl'<_grlexl+(0,…,0,1), For such l', we claim that the set of indices i such that i<l' and i≥_grlexl is empty. Indeed, by (<ref>), note that | l'|=| l|. For such i, one necessarily has | i|<| l'|=| l|, but also | i|≥ | l|: a contradiction. According to (4) at the end of First Step above and to the previous claim, the right hand sides of such instances are equal to 0. Hence, they also are of the same form as (<ref>): ∑_j=0,..,d a_l',j(s) C_0^j=∑_(k,j)∈ℱ'_l'∪𝒢'_l' a_k,l',js^kC_0^j=0 . We perform the same method of reconstruction as in the First Step <ref> to determine E_l' the K-vector space of polynomials corresponding to this equation. Note that E_l' might be equal to {0}. At this step, for each l≤_grlexl̂_0 such that E_l≠{0} from the First Step, we have built the vector spaces E_l' (possibly {0}) of all the coefficients a_k,l',j for (k, l',j)∈ℱ∪𝒢 satisfying the instances of (<ref>) for l'<_grlexl+(0,…,0,1). §.§.§ Third step Let l≤_grlexl̂_0 such that E_l≠{0} as in the First Step <ref>. We consider the instance of (<ref>) corresponding to l+(0,…,0,1). Note that for i< l+(0,…,0,1), we have that i≤_grlexl. Applying (4) from the end of the First Step, we obtain: ∑_j=0,..,d a_l+(0,…,0,1),j(s) C_0^j=- ∑_j=0,..,d a_l,j(s) C_(0,…,0,1)^(j) . Noticing that C_(0,…,0,1)^(j)=j C_0^j-1C_(0,…,0,1), we get: ∑_(k,j)∈ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1) a_k,l+(0,…,0,1),js^kC_0^j=- (∑_(k,j)∈ℱ'_l∪𝒢'_l a_k,l,js^kj C_0^j-1) C_(0,…,0,1) . There is l≤_grlexl̂_0 such that c_0 and c_(0,…,0,1) satisfy the latter relation, and c_0 satisfies the relations (<ref>) and (<ref>). If c_(0,…,0,1)=0, then there are two cases. Either ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1)=∅ i.e. there is no coefficient a_k,l+(0,…,0,1),j to reconstruct. Or else, we obtain an equation like (<ref>) and we derive E_l+(0,…,0,1) as in the first and second step. If c_(0,…,0,1)≠ 0, let us denote θ_s,(0,…,0,1):= (| l̂_0|+d)a +b where a and b are as in Lemma <ref>. By this lemma, there are non-trivial polynomial relations P_0(s,z_0)=0 and P_1(s,z_0,z_1)=0 satisfied by c_0 and c_(0,…,0,1) with _sP_j≤θ_s,(0,…,0,1), _z_0P_j≤ d and _z_1P_1≤ d. There are several cases. ∙ Suppose that ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1)=∅. Equation (<ref>) reduces to: ∑_(k,j)∈ℱ'_l∪𝒢'_l a_k,l,js^kj c_0^j-1= ∑_(k,j)∈ℱ'_l a_k,l,js^kj c_0^j-1=0, which means that c_0 is at least a double root of (<ref>). We resume the notations of Section <ref>. Let us denote by ℱ”_l the family corresponding to ℱ” for (<ref>), and λ_l, k,j^k_0,j_0 the coefficients corresponding to λ_ k,j^k_0,j_0. Formula (<ref>) of Lemma <ref> becomes: ∀ (k,j)∈ℱ”_l, a_k,l,j =-∑_(k_0,j_0)∈ℱ'_l∖ℱ”_la_k_0,l,j_0 λ_l, k,j^k_0,j_0 . 
Substituting this formula in (<ref>) gives: ∑_(k_0,j_0)∈ℱ'_l∖ℱ”_l a_k_0,l,j_0s^k_0j_0 c_0^j_0-1 + ∑_(k,j)∈ℱ”_l( -∑_(k_0,j_0)∈ℱ'_l∖ℱ”_la_k_0,l,j_0 λ_l, k,j^k_0,j_0) s^kj c_0^j-1 =0 , which is: ∑_(k_0,j_0)∈ℱ'_l∖ℱ”_l a_k_0,l,j_0( s^k_0j_0 c_0^j_0-1 - ∑_(k,j)∈ℱ”_l λ_l, k,j^k_0,j_0s^kj c_0^j-1) =0 . Either, the latter relation is trivial, i.e. for all (k_0,j_0)∈ℱ'_l∖ℱ”_l, the contents of the parenthesis are all 0. In this case, the space E_l of possible equations for c_0 remains unchanged. Or, the dimension of E_l drops. Since the contents of these parenthesis are polynomials in s and c_0, by Lemma <ref>, the s-adic order of the non-vanishing ones is at most 2d'_sd. The vanishing of (<ref>) follows from the vanishing of the terms of s-adic order up to 2d'_sd. This gives linear relations (with at least one that is nontrivial) between the a_k_0,l,j_0's for (k_0,j_0)∈ℱ'_l∖ℱ”_l. Accordingly, we derive a new space of possible equations for c_0, that we still denote by E_l for simplicity. In the particular case where E_l={0}, we exclude l from the list of admissible multi-indices. ⋆ Suppose now that ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1)≠∅. We determine whether c_0 is algebraic relatively to (ℱ'_l+(0,…,0,1),𝒢'_l+(0,…,0,1)). For this, we examine the vanishing of the minors of maximal order of M_ℱ'_l+(0,…,0,1),𝒢'_l+(0,…,0,1)^red up to the lowest row of order 2d'_s,l+(0,…,0,1)d. There are two subcases. ⋆∙ If c_0 is algebraic relatively to (ℱ'_l+(0,…,0,1),𝒢'_l+(0,…,0,1)), according to Equation (<ref>), we set z'=- (∑_(k,j)∈ℱ'_l a_k,l,js^kj c_0^j-1) c_(0,…,0,1). We have to determine whether there exists a relation P( s, c_0)=z' with P having support in ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1). We consider as in Section <ref>, a subfamily ℱ”_l+(0,…,0,1) of ℱ'_l+(0,…,0,1), the vectors (V_l+(0,…,0,1), k,j^red)_(k,j)∈ℱ”_l+(0,…,0,1) and V^red_l+(0,…,0,1) for z', and the corresponding matrix N^red_l+(0,…,0,1). According to Lemma <ref>, the existence of such a polynomial P is equivalent to the vanishing of the minors of N^red_l+(0,…,0,1) of maximal order up to the row p with |p| ≤ 2.3. θ_s,(0,…,0,1)d^d+1. Let us consider one of these minors, say (D). For (k,j)∈ℱ'_l, we denote by W_k,j^red the infinite vector corresponding to s^kj c_0^j-1 c_(0,…,0,1). Hence, we have: V^red_l+(0,…,0,1)= -∑_(k,j)∈ℱ'_l a_k,l,j W_k,j^red. For each (k,j)∈ℱ'_l, we set D_k,j the matrix obtained from D by substituting to its last column, i.e. the part of V^red_l+(0,…,0,1), the corresponding part of the W_k,j^red. By multilinearity of the determinant, one obtains: (D)=-∑_(k,j)∈ℱ'_l(D_k,j)a_k,l,j. So, the vanishing of (D) is equivalent to the vanishing of a linear form in the a_k,l,j's for (k,j)∈ℱ'_l. Considering the linear relations for all these D's, we derive from E_l a new space of possible equations for c_0, that we still denote by E_l for simplicity. In the particular case where E_l={0}, we exclude l from the list of admissible multi-indices. If E_l≠{0}, for each a_l:=(a_k,l,j)_(k,j)∈ℱ'_l∪𝒢'_l list of coefficients of a polynomial in E_l, we perform the method in Section <ref> and we reconstruct the space Φ_l+(0,…,0,1)(a_l) of coefficients (a_k,l+(0,…,0,1),j)_ (k,j)∈ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1) for a relation (<ref>). By (<ref>) and (<ref>), it is an affine space ϕ_l+(0,…,0,1)(a_l) + F_l+(0,…,0,1) where ϕ_l+(0,…,0,1)(a_l) is a point and F_l+(0,…,0,1) a vector space. 
Note that ϕ_l+(0,…,0,1)(a_l) depends linearly on a_l and that its computation is done by computing a finite number of minors of matrices given by the W_k',j'^red's, (k',j')∈ℱ'_l , and the V_k”,j”^red's, (k”,j”)∈ℱ”_l+(0,…,0,1). Also, we have that F_l+(0,…,0,1) is independent of a_l. Finally, we observe that, for a given l, the set of admissible ((a_k,l,j)_(k,j)∈ℱ'_l∪𝒢'_l , (a_k,l+(0,…,0,1),j)_ (k,j)∈ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1))'s is a nonzero K-vector space. ⋆⋆ If c_0 is not algebraic relatively to (ℱ'_l+(0,…,0,1),𝒢'_l+(0,…,0,1)), we have to determine whether there exists a relation P( s, c_0)=z' with P having support in ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1). Note that in this case, such a polynomial P is necessarily unique for a given z'. We proceed as above with ℱ'_l+(0,…,0,1) instead of ℱ”_l+(0,…,0,1) and as in Section <ref>, in particular Lemma <ref> with 2.3. θ_s,(0,…,0,1)d^d+1 as bound for the depth of the minors involved. This determines from E_l a new space of possible equations for c_0, that we still denote by E_l for simplicity. In the particular case where E_l={0}, we exclude l from the list of admissible multi-indices. Also, if E_l≠{0}, for each a_l∈ E_l≠{0}, we reconstruct the list of coefficients ϕ_l+(0,…,0,1)(a_l):= (a_k,l+(0,…,0,1),j)_ (k,j)∈ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1) for a relation (<ref>). By (<ref>) and (<ref>), ϕ_l+(0,…,0,1)(a_l) depends linearly on a_l and its computation is done by computing a finite number of minors of matrices given by the W_k',j'^red's, (k',j')∈ℱ'_l , and the V_k”,j”^red's, (k”,j”)∈ℱ'_l+(0,…,0,1). Again, we observe that, for a given l, the set of admissible ((a_k,l,j)_(k,j)∈ℱ'_l∪𝒢'_l , (a_k,l+(0,…,0,1),j)_ (k,j)∈ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1))'s is a nonzero K-vector space. To sum up Sections <ref> to <ref>, we have reconstructed a finite number of multi-indices l (i.e. possible initial steps l_0:=w_t(P)) and, for each of these l's, the nonzero K-vector space E_l,l+(0,…,0,1) of coefficients (a_k,l',j)_(k,l',j)∈ℱ∪𝒢 , l≤_grlexl'≤_grlexl+(0,…,0,1) for the initial part of a possible vanishing polynomial for y_0. §.§.§ Induction step. For each l≤_grlexl̂_0 possible initial step as above, we assume that up to some l̃≥_grlexl+(0,…,0,1) we have reconstructed the nonzero K-vector space, say E_l,l̃, of coefficients (a_k,l',j)_(k,l',j)∈ℱ∪𝒢 , l'≤_grlexl̃ for the initial part of a possible vanishing polynomial for y_0. Recall that, for λ∈^r, S(λ) (respectively A(λ) for λ≠ 0) denotes the successor (respectively the predecessor) for ≤_grlex of λ in ^r. Equation (<ref>) gives: ∑_j=0,..,d a_S(l̃),j(s) C_0^j=- ∑_i<S(l̃), j=0,..,d a_i,j(s) C_S(l̃)-i^(j) , which we write as: ∑_(k,j)∈ℱ'_S(l̃)∪𝒢'_S(l̃) a_k,S(l̃),js^kC_0^j=- ∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k C_S(l̃)-i^(j)) . Let us denote θ_s,S(l̃):= (|l̂_0|+d |S(l̃)|)a +b where a and b are as in Lemma <ref>. By this lemma, there exist polynomials (P_λ(s,z_0,…,z_λ))_λ= 0,…,S(l̃) such that P_λ(s,c_0,…,c_λ)=0, P_λ(s,c_0,…,c_A(λ),z_λ)≢0, _sP_λ≤θ_s,S(l̃), _z_μP_λ≤ d for μ≤_grlexλ. Let us denote i_S(l̃):=([ |S(l̃)|+r-τ; |S(l̃)| ])-1. Note that i_S(l̃)+1 is at most the number of multi-indices λ such that λ≤_grlexS(l̃). ∙ Suppose that ℱ'_S(l̃)∪𝒢'_S(l̃)=∅. Equation (<ref>) evaluated at c_0,…,c_S(l̃) reduces to: ∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k c_S(l̃)-i^(j))=0 . Let us expand c_n^(j) in (<ref>): y_0^j= ∑_n∈ℕ^r-τc_n^(j) t^n= (∑_γ∈ℕ^r-τc_γ t^γ)^j, so, c_n^(j)=∑_j / |j|=j g(j)=nj!/j!c^j where j:=(j_0,…,j_n) and c^j:= c_0^j_0⋯ c_n^j_n (and where g is as in Notation <ref>). 
Let us expand the left hand side of (<ref>): ∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k c_S(l̃)-i^(j))= ∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k ∑_j / |j|=j g(j)=S(l̃)-ij!/j!c^j) (where j:=(j_0,…,j_S(l̃)) and c^j:= c_0^j_0⋯ c_S(l̃)^j_S(l̃)). We set 𝒦'_S(l̃) the set of (k,j) where k∈^τ and j:=(j_0,…,j_S(l̃)), j≠0, such that j:=|j|∈{0,…,d} and there exists i∈^r-τ with i<S(l̃), (k,j)∈ℱ'_i∪𝒢'_i, g(j)=S(l̃)-i. Equation (<ref>) becomes: ∑_(k,j)∈𝒦'_S(l̃)∪ℒ'_S(l̃)j!/j!a_k,S(l̃)-g(j),j s^kc^j=0 . Thanks to Remark <ref>, for any (k,j)∈𝒦'_S(l̃)∪ℒ'_S(l̃), we have that |k|≤ a |S(l̃)|+b≤θ_s,S(l̃). We are in position to apply the method of reconstruction of Section <ref> of all the polynomials such that ∑_(k,j)∈𝒦'_S(l̃)∪ℒ'_S(l̃) b_k,j s^kc^j=0. This requires computations of minors of the corresponding Wilczynski matrix up to a finite depth bounded by 2.3^d^i_S(l̃)-1+⋯+d^2+d+1θ_s,S(l̃) d^d^i_S(l̃)+⋯+d^2+d+1 (see Lemma <ref>). By Lemma <ref>, the formulas (<ref>) and (<ref>) give us with a vector space B_S(l̃) (possibly zero) of coefficients b_k,j, hence a corresponding vector space A_S(l̃) of coefficients a_k,S(l̃)-g(j),j=j!/j!b_k,j. We take the intersection of A_S(l̃) with E_l,l̃ and we obtain another vector space of admissible coefficients that we still denote by E_l,l̃ for simplicity. In the particular case where the projection of E_l,l̃ on E_l is {0}, we exclude l from the list of admissible multi-indices. ⋆ Suppose that ℱ'_S(l̃)∪𝒢'_S(l̃)≠∅. We determine whether c_0 is algebraic relatively to (ℱ'_S(l̃),𝒢'_S(l̃)). For this, we examine the vanishing of the minors of maximal order of M_ℱ'_S(l̃),𝒢'_S(l̃)^red up to the lowest row of order 2d'_s,S(l̃)d (see Section <ref> for the notation). There are two subcases. ⋆∙ If c_0 is algebraic relatively to (ℱ'_S(l̃),𝒢'_S(l̃)), according to Equation (<ref>), we set z':=- ∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k c_S(l̃)-i^(j)). We have to determine whether there exists a relation P( s, c_0)=z' with P having support in ℱ'_S(l̃)∪𝒢'_S(l̃). We consider as in Section <ref>, a subfamily ℱ”_S(l̃) of ℱ'_S(l̃), the vectors (V_S(l̃), k,j^red)_(k,j)∈ℱ”_S(l̃) and V^red_S(l̃) for z', and the corresponding matrix N^red_S(l̃). According to Lemma <ref>, the existence of such a polynomial P is equivalent to the vanishing of the minors of N^red_S(l̃) of maximal order up to the row p with |p| ≤ 2.3^d^i_S(l̃)-1+⋯+d^2+d+1θ_s,S(l̃) . d^d^i_S(l̃)+⋯+d^2+d+1 Let us consider one of these minors, say (D). For i<S(l̃), for (k,j)∈ℱ'_i∪𝒢'_i, we denote by W_k,i,j^red the infinite vector corresponding to s^k c_S(l̃)-i^(j). We set D_k,i,j the matrix obtained from D by substituting to its last column, i.e. the part of V^red_S(l̃), the corresponding parts of the W_k,i,j^red's. Since V^red_S(l̃)= ∑_i<S(l̃) ( ∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,j.W_k,i,j^red), one has: (D)=- ∑_i<S(l̃) ( ∑_(k,j)∈ℱ'_i∪𝒢'_i(D_k,i,j) a_k,i,j). So, the vanishing of (D) is equivalent to the vanishing of a linear form in the a_k,i,j's for i<S(l̃) and (k,j)∈ℱ'_i∪𝒢'_i. Considering these linear relations, we derive from E_l,l̃ a new space of possible coefficients (a_k,l',j)_(k,l',j)∈ℱ∪𝒢 , l'≤_grlexl̃, that we still denote by E_l,l̃ for simplicity. In the particular case where the projection of E_l,l̃ on E_l is {0}, we exclude l from the list of admissible multi-indices. 
If this projection is not {0}, so in particular E_l≠{0}, for each a_l̃:=(a_k,l',j)_(k,l',j)∈ℱ∪𝒢, l'≤_grlexl̃ list of coefficients of a polynomial in E_l,l̃, we perform the method in Section <ref> and we reconstruct the space Φ_S(l̃)(a_l̃) of coefficients (a_k,S(l̃),j)_ (k,j)∈ℱ'_S(l̃)∪𝒢'_S(l̃) for a relation (<ref>). By (<ref>) and (<ref>), it is an affine space ϕ_S(l̃)(a_l̃) + F_S(l̃) where ϕ_S(l̃)(a_l̃) is a point and F_S(l̃) a vector space. Note that ϕ_S(l̃)(a_l̃) depends linearly on a_l̃ and that its computation is done by computing a finite number of minors of matrices given by the W_k',i,j'^red's, i<S(l̃), (k',j')∈ℱ'_i∪𝒢'_i, and the V_k”,j”^red's, (k”,j”)∈ℱ”_S(l̃). Also, we have that F_S(l̃) is independent of a_l̃. Finally, we observe that the set of admissible ((a_k,l',j)_(k,l',j)∈ℱ∪𝒢, l'≤_grlexl̃ , (a_k,S(l̃),j)_ (k,j)∈ℱ'_S(l̃)∪𝒢'_S(l̃))'s, for a given l, is a nonzero K-vector space which we denote by E_l,S(l̃). ⋆⋆ If c_0 is not algebraic relatively to (ℱ'_S(l̃),𝒢'_S(l̃)), according to Equation (<ref>), we set z'=- ∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k c_S(l̃)-i^(j)). We want to determine if there exists a relation P( s, c_0)=z' with P having support in ℱ'_S(l̃)∪𝒢'_S(l̃). As in Section <ref>, we consider the vectors (V_S(l̃), k,j^red)_(k,j)∈ℱ'_S(l̃), V^red_S(l̃) for z', and the corresponding matrix N^red_S(l̃). According to Lemma <ref>, the existence of such a polynomial P is equivalent to the vanishing of the minors of N^red_S(l̃) of maximal order up to the row p with |p| ≤ 2.3^d^i_S(l̃)-1+⋯+d^2+d+1θ_s,S(l̃) . d^d^i_S(l̃)+⋯+d^2+d+1 where i_S(l̃) is defined by (<ref>). As previously, for any of such minors, say (D), the vanishing of (D) is equivalent to the vanishing of a linear form in the a_k,i,j's for i<S(l̃) and (k,j)∈ℱ'_i∪𝒢'_i. Considering these linear relations, we derive from E_l,l̃ a new space of possible coefficients (a_k,l',j)_(k,l',j)∈ℱ∪𝒢 , l'≤_grlexl̃, that we still denote by E_l,l̃ for simplicity. In the particular case where the projection of E_l,l̃ on E_l is {0}, we exclude l from the list of admissible multi-indices. If this projection is not {0}, so in particular E_l≠{0}, for each a_l̃:=(a_k,l',j)_(k,l',j)∈ℱ∪𝒢, l'≤_grlexl̃ list of coefficients of a polynomial in E_l,l̃, we perform the method in Section <ref> and we reconstruct the unique list of coefficients (a_k,S(l̃),j)_ (k,j)∈ℱ'_S(l̃)∪𝒢'_S(l̃) for a relation (<ref>). Note that this list depends linearly on (a_k,l',j)_(k,l',j)∈ℱ∪𝒢, l'≤_grlexl̃ by relations (<ref>) and (<ref>). Finally, we denote by E_l,S(l̃) the K-vector space of ((a_k,l',j)_(k,l',j)∈ℱ∪𝒢, l'≤_grlexl̃ , (a_k,S(l̃),j)_ (k,j)∈ℱ'_S(l̃)∪𝒢'_S(l̃)) admissible. As a conclusion, we obtain: Let ñ^0∈^r, p∈^*, q∈^r-1∖{0}, d∈^* be given. Let ℱ,𝒢 be as in Definition <ref> and satisfying Conditions (i), (ii), (iii) of Lemma <ref>. Let y_0=∑_(m,n)∈ℕ^τ×ℕ^r-τ c_m,ns^mt^n=∑_n∈ℕ^r-τc_n(s) t^n∈ K[[s,t]], c_0,0≠ 0, be a series algebroid relatively to (ℱ,𝒢). Let l̂_0∈^r-τ be given. Assume that there exists a polynomial P∈(K[s][[t]][y])_ℱ,𝒢∖{0} such that P(s,t,y_0)=0 and w_t(P)≤_grlexl̂_0. For any l≤_grlexl̂_0, for any l̃≥_grlexl, Sections <ref> to <ref> provide the vector space E_l,l̃ of all the polynomials Q_l,l̃∈(K[s][[t]][y])_ℱ,𝒢 such that: w_t(Q_l,l̃)=l and w_t(Q_l,l̃(s,t,y_0) )>_grlexl̃. §.§ Proof of Theorem <ref> Theorem <ref> will be a corollary of the following result: Let d∈^* and ν̃_0∈. Let ỹ_0∈𝒦_r, more precisely ỹ_0=f̃/g̃ for some formal power series f̃,g̃∈ K[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]]. 
We assume that ỹ_0 is algebroid of degree bounded by d, and that there is a vanishing polynomial P̃ of degree bounded by d and of (x)-adic order bounded by ν̃_0. Let q_i'≥ q_i, i=1,…,r-1, be such that the transform fg of f̃g̃ under the change of variables u_i:=(x_i/x_i+1^q_i')^1/p, i=1,…,r-1, u_r=x_r^1/p, is monomialized with respect to the u_i's: (fg)(u):=(f̃g̃)( u_1^pu_2^pq'_1⋯ u_r^pq'_1q'_2⋯ q'_r-1 , … , u_r-1^pu_r^p q'_r-1 , u_r^p , y) We resume the notations of (<ref>), (<ref>), (<ref>), in particular, x_i∈ξ_k if and only if q_i'>0, and otherwise x_i ∈x_k for some k: x^ ny^j = x_0^ n_0 ξ_1^ m_1 x_1^ n_1⋯ξ_σ^ m_σ x_σ^ n_σy^j. where n=( n_0, m _1, n_1,…, m_σ, n_σ). For k=1,…,σ, we denote ξ_k=(x_i_k,…,x_j_k-1) and x_k=(x_j_k,…,x_i_k+1-1), and accordingly m_k=(n_i_k,…,n_j_k-1) and n_k=(n_j_k,…,n_i_k+1-1) with i_σ+1:=r+1. For k=0 when x_0 is not empty, we denote x_0=(x_j_0,…,x_i_1-1) and n_0=(n_j_0,…,n_i_1-1) with j_0:=1. When x_0 is empty, we set n_0=0. We set: [ L̃_k: ^i_k+1-i_k → ; (m_k,n_k)=(n_i_k,…,n_i_k+1-1) ↦ L̃_k(m_k,0)+ |n_k| ] where: L̃_k(m_k,0):=q'_j_k-1q'_j_k-2⋯ q'_i_kn_i_k+⋯+q'_j_k-1q'_j_k-2n_j_k-2 + q'_j_k-1n_j_k-1. Moreover, let L̃(n):=|n_0|+∑_k=1,…,σL̃_k(m_k,n_k). The algorithm described in Section <ref> provides for any ν∈ all the polynomials Q̃_ν(x,y)∈ K[[x]][y] with _yQ̃_ν≤ d and of (x)-adic order bounded by ν̃_0 such that, for any 1/pn=1/p(n_1,…,n_r)∈Supp Q̃_ν(x,ỹ_0), one has: L̃(n)≥ν. Recall that, by the Monomialization Lemma <ref> and by Remark <ref>, if β=(β_1,…,β_r) is the lexicographic valuation of f̃g̃ with respect to the variables ζ_i:=(x_i/x_i+1^q_i)^1/p for i=1,…,r-1, ζ_r:=x_r^1/p, then the assumptions of Theorem <ref> are satisfied with q_i':=q_i+β_i+1+1. Therefore, Theorem <ref> follows. Let us now deduce Theorem <ref> from Theorem <ref>. Suppose that _xP̃≤ν̃_0. Let ℱ,𝒢 be as in Definition <ref> and such that ℱ∪𝒢 is the total family of multi-indices (α,j) satisfying Conditions (i), (ii), (iii) of Lemma <ref> with q_i' instead of q_i. By the transformations described in (<ref>), (<ref>) and (<ref>) associated to the change of variables u_i:=(x_i/x_i+1^q_i')^1/p, i=1,…,r-1, u_r=x_r^1/p, we obtain a polynomial P(u,y):=u^m̃^0P̃( u_1^pu_2^pq'_1⋯ u_r^pq'_1q'_2⋯ q'_r-1 , … , u_r^p , u^ñ^0y)∈(K[[u]][y])_ℱ,𝒢. Recall that we denote by x_k, ξ_k the sub-tuple of variables x_i corresponding to t_k, s_k respectively. For k=0 when t_0 is not empty, we denote x_0=(x_j_0,…,x_i_1-1), t_0=(u_j_0,…,u_i_1-1)=(x_j_0^1/p,…,x_i_1-1^1/p) and n_0=(n_j_0,…,n_i_1-1) with j_0:=1. According to (<ref>), (<ref>), (<ref>), a monomial x^ n is transformed into a monomial u^α=s^βt^γ such that, for k=1,…,σ, we have: [ ξ_k^ m_k x_k^ n_k= s_i_k^pn_i_ks_i_k+1^p(n_i_k+1+q'_i_kn_i_k)⋯s_j_k-1^p(n_j_k-1+q'_j_k-2n_j_k-2+q'_j_k-2q'_j_k-3n_j_k-3+⋯+ q'_j_k-2q'_j_k-3⋯ q'_i_kn_i_k); t_j_k^p(n_j_k+q'_j_k-1n_j_k-1+q'_j_k-1q'_j_k-2n_j_k-2+⋯+ q'_j_k-1q'_j_k-2⋯ q'_i_kn_i_k) t_j_k+1^pn_j_k+1⋯t_i_k+1-1^pn_i_k+1-1. ] Hence, a monomial x^ ny^j of P̃(x,y) gives a monomial u^αu^m̃^0+jñ^0y^j=s^βt^γu^m̃^0+jñ^0y^j of P(u,y). Since (P̃) contains a monomial x^ ny^j such that |n|= |n_0|+∑_k=1^σ(|m_k|+|n_k|)≤ν̃_0, we have that: _tP≤ p|n_0|+ ∑_k=1^σ(pq'_j_k-1q'_j_k-2⋯ q'_i_k|m_k|+p|n_k|) + |(m̃^0+jñ^0)_|t| ≤ p.κ.ν̃_0 + d.ρ where n_|t denotes the components of n corresponding to the exponents of the variables t in u^n, κ:=max_k=1,..,σ(q'_j_k-1q'_j_k-2⋯ q'_i_k) and ρ:=∑_k=0^σ( |ñ^0_j_k|+⋯+|ñ^0_i_k+1-1|). We set l̂_0:= (p.κ.ν̃_0 + d.ρ,0,…,0)∈^r-τ, so that w_t(P)≤_grlexl̂_0. 
Given Q̃_ν(x,y) as in Theorem <ref>, let us denote by Q_ν(u,y) its transform via (<ref>), (<ref>), (<ref>) as recalled between P̃ and P above. One gets Q̃_ν(x,ỹ_0)=u^m̃^0Q_ν(u,y_0). According to (<ref>), (<ref>), (<ref>), a monomial x^ n/p of Q̃_ν(x,ỹ_0) is transformed into a monomial u^α=s^βt^γ such that, for k=1,…,σ, we have: [ ξ_k^ m_k/p x_k^ n_k/p= s_i_k^n_i_ks_i_k+1^n_i_k+1+q'_i_kn_i_k⋯s_j_k-1^n_j_k-1+q'_j_k-2n_j_k-2+q'_j_k-2q'_j_k-3n_j_k-3+⋯+ q'_j_k-2q'_j_k-3⋯ q'_i_kn_i_k; t_j_k^n_j_k+q'_j_k-1n_j_k-1+q'_j_k-1q'_j_k-2n_j_k-2+⋯+ q'_j_k-1q'_j_k-2⋯ q'_i_kn_i_k t_j_k+1^n_j_k+1⋯t_i_k+1-1^n_i_k+1-1. ] So the monomials of Q_ν(u,y_0) are of the form u^α-m̃^0. As in the computation of (<ref>), _xQ̃_ν(x,y)≤ν̃_0 implies that _tQ_ν(u,y)≤ p.κ.ν̃_0 + d.ρ, so w_t(Q_ν(u,y))≤_grlexl̂_0. Moreover, since Q̃_ν(x,ỹ_0)=u^m̃^0Q_ν(u,y_0), the condition such that for any 1/pn=1/p(n_1,…,n_r)∈Supp Q̃_ν(x,ỹ_0), L̃(n)≥ν, is equivalent to _t(Q_ν(u,y_0))+|m̃^0_ |t|≥ν. This is in turn equivalent to w_t(Q_ν(u,y_0))≥(0,…,0,ν-|m̃^0_ |t|). We set l̃_ν:= (0,…,0,ν-|m̃^0_ |t|), and l:=w_t(Q_ν(u,y)). A polynomial Q̃_ν(x,y) satisfying the conditions of Theorem <ref> comes from a polynomial Q_ν(u,y) as above satisfying w_t(Q_ν(u,y))≤_grlexl̂_0 and w_t(Q_ν(u,y_0))≥l̃_ν. The construction of such polynomials Q_ν(u,y)=Q_l,l̃_ν(u,y) is given by Theorem <ref>. This achieves the proofs of Theorems <ref> and <ref>. §.§ Plan of the algorithm and example For the convenience of the reader, we now give several flowcharts in order to describe the algorithm. The first one provides the plan of the algorithm. The others consist of the details of the corresponding steps. < g r a p h i c s > < g r a p h i c s > < g r a p h i c s > < g r a p h i c s > < g r a p h i c s > The purpose of the present example is to illustrate the various points of our Theorem <ref>. For r=d=p=2 and q_1=ν̃_0=1, let us consider ỹ_0=f̃/g̃∈𝒦_2 with f̃,g̃∈ K[[(x_1/x_2)^1/2,x_2^1/2]] a root of the following equation: P̃(x_1,x_2,y) := sin(x_1+x_2)y^2+e^x_1x_1x_2y-x_2^2cos(x_1x_2) = 0. For instance, ỹ_0 := - e^x_1x_1x_2+ √( e^2x_1x_1^2x_2^2+4 x_2^2cos( x_1x_2 ) sin( x_1+x_2 ) )/2 sin( x_1+x_2 ) = - e^x_1/x_2x_2x_1/x_2x_2+ x_2^1/2√( e^2x_1/x_2x_2(x_1/x_2)^2x_2+4 cos( x_1/x_2x_2^2 ) sin( x_1/x_2x_2+x_2)/x_2)/2 sin( x_1/x_2x_2+x_2) / x_2 and therefore: f̃ := [ 2+x_1/x_2-1/4(x_1/x_2)^2+1/8(x_1/x_2)^3-5/64(x_1/x_2)^4+7/128(x_1/x_2)^5] x_2^1/2 -x_1/x_2x_2 +[ 1/4(x_1/x_2)^2-1/8(x_1/x_2)^3+3/32(x_1/x_2)^4-5/64(x_1/x_2)^5]x_2^3/2 -(x_1/x_2)^2x_2^2 +[ -1/6-5/12x_1/x_2-5/16(x_1/x_2)^2+43/96(x_1/x_2)^3-199/768(x_1/x_2)^4+107/512(x_1/x_2)^5] x_2^5/2 -1/2(x_1/x_2)^2x_2^3+⋯ g̃ := [2+2 x_1/x_2]-[ 1/3+x_1/x_2+(x_1/x_2)^2+1/3(x_1/x_2)^3]x_2^2 + [1/60+1/12x_1/x_2+1/6(x_1/x_2)^2+1/6(x_1/x_2)^3+1/12(x_1/x_2)^4 +1/60(x_1/x_2)^5]x_2^4 -1/2520[∑_k=0^7 7!/k!(7-k)! (x_1/x_2)^k ]x_2^6+⋯ In this case, note that the transform fg of f̃g̃ under the change of variables u_1:=(x_1/x_2)^1/2, u_2=x_2^1/2, is monomialized with respect to (u_1,u_2), so that q_1'=q_1=1 and (u_1,u_2)=(s,t). Hence, r-τ=τ=1. 
Therefore, one can expand ỹ_0 as a monomialized power series in (s,t): ỹ_0=ty_0 with y_0 = 1-1/2s^2+3/8s^4-5/16s^6+35/128s^8-63/256s^10+⋯ + ( -1/2s^2+1/2s^4-1/2s^6+1/2s^8-1/2s^10+⋯)t + (1/8s^4-3/16s^6+15/64s^8-35/128s^10+⋯)t^2 +(-1/2s^4+1/2s^6-1/2s^8+1/2s^10+⋯)t^3 +( 1/12+1/8s^2+1/32s^4+47/192s^6-195/512s^8+499/1024s^10+⋯)t^4 ( -1/12s^2-1/12s^4-1/4s^6+1/4s^8-1/4s^10+⋯)t^5 +⋯ = ∑_n∈ℕc_n(s) t^n with c_0,0=1≠ 0 As described after (<ref>), now we are in position to apply the algorithm as stated in Theorem <ref> with ñ^0=(0,1) and ñ^0=(0,0) and l̂_0:= p.κ.ν̃_0 + d.ρ=2× 1× 1+2×1=4. The corresponding support of the vanishing polynomial P belongs to some ℱ∪𝒢 as in Definition <ref> and satisfying Conditions (i), (ii), (iii) of Lemma <ref>, namely for any (k,l,j)∈ℱ∪𝒢: (i) (k,l)≥ (0,j); (ii)k and l-j are even; (iii)k≤ l-j. For the first step of the algorithm (Section <ref>), the list of plausible indices to begin with are all the non-negative integers l≤l̂_0=4. We resume the notations of Section <ref> (see also the method in Section <ref>). For simplicity, let us write c_0 for c_0(s). Step 1. If l=0 then j=0 and thefore l=k=0, so ℱ'_0=∅ and 𝒢'_0={(0,0,0)}. Equation (<ref>) translates as a_0,0,0=0, which contradicts the assumption that such an equation should be non-trivial. Hence, we exclude l=0 from the list of admissible indices. If l=1 then j=0 or 1. But l-j has to be even, so j=1 and l-j=0=k. Thus, ℱ'_1={(0,1,1)} and 𝒢'_1=∅. Equation (<ref>) translates as a_0,1,1.s.C_0=0⇔ a_0,1,1=0, which contradicts the assumption that such an equation should be non-trivial. Hence, we exclude l=1 from the list of admissible indices. If l=2 then j∈{0,1,2}. But l-j has to be even, so j=0 or 2. Since k is even, in the former case, k=0 or 2, and in the latter case k=0. Thus, ℱ'_2={(0,2,2)} and 𝒢'_2={(0,2,0), (2,2,0)}. Equation (<ref>) translates as a_0,2,2.C_0^2+a_0,2,0+a_2,2,0.s^2=0. However, since c_0^2=1-s^2+s^4-s^6+s^8-s^10+⋯ is not a polynomial of degree at most 2, the only possibility is a_0,2,2=a_0,2,0=a_2,2,0=0 which contradicts the assumption that such an equation should be non-trivial. Hence, we exclude l=2 from the list of admissible indices. If l=3 then j∈{0,1,2} (recall that _y P=2≤ d=2). But l-j has to be even, so j=1. Since k is even, k=0 or 2. Thus, ℱ'_3={(0,3,1), (2,3,1)} and 𝒢'_3=∅. Equation (<ref>) translates as (a_0,3,1+a_2,3,1.s^2).C_0=0⇔ a_0,3,1=a_2,3,1=0, which contradicts the assumption that such an equation should be non-trivial. Hence, we exclude l=3 from the list of admissible indices. If l=4, again since l-j has to be even, we have that j=0 or 2. Since k is even, in the former case, k∈{0,2,4}, and in the latter case k∈{0,2}. Thus, ℱ'_4={(0,4,2), (2,4,2)} and 𝒢'_2={(0,4,0), (2,4,0),(4,4,0)}. Equation (<ref>) translates as (a_0,4,2+a_2,4,2.s^2).C_0^2+a_0,4,0+a_2,4,0.s^2+a_4,4,0.s^4=0. Let us consider the corresponding Wilczynski matrices, where for simplicity the lines consists only of the coefficients of 1, s^2, s^4, etc. M_ℱ'_4,𝒢'_4 :=[[ 1 0 0 1 0; 0 1 0 -1 1; 0 0 1 1 -1; 0 0 0 -1 1; 0 0 0 1 -1; 0 0 0 -1 1; ⋮ ⋮ ⋮ ⋮ ⋮; ]] and M_ℱ'_4,𝒢'_4^red :=[[ -1 1; 1 -1; -1 1; 1 -1; -1 1; ⋮ ⋮; ]] (Recall that here the reduced matrix is obtained by removing the 3 first rows and columns.) One can easily check that all the minors of maximal order vanish up to order 2d_sd=2× 4× 2=16: as expected, c_0 is algebraic relatively to (ℱ'_4,𝒢'_4). Moreover, a first non-zero minor of order 1 in M_ℱ'_4,𝒢'_4^red is obtained e.g. 
with the coefficient 1 of the second column (this is the coefficient of s^6 in the expansion of s^2.c_0^2). Using the Cramer's rule, we identify it, up to a multiplicative constant λ∈ K, with a_2,4,2, and we also get a_0,4,2=λ. According to (<ref>), we derive a_0,4,0=-λ and a_2,4,0=a_4,4,0=0. As a conclusion, the K-vector space E_4 of polynomials corresponding to Equation (<ref>) is E_4:={λ[(1+s^2)y^2-1]t^4+R(s,t,y) | λ∈ K, R∈(K[s][[t]][y])_ℱ,𝒢, w_t(R)≥ 5}. Here, the linear form L̃ of Theorem <ref> is given by: L̃(n_1,n_2)=1n_1+n_2=n_1+n_2. We go back to the variables (x_1,x_2) by the following transformation: Q(s, t,y)=Q̃(s^2t^2,t^2,ty). The space E_4 corresponds to the space of polynomials in K[[x_1,x_2]][y] of the form: λ[(x_1+x_2)y^2-x_2^2]+ R̃(x_1,x_2,y) with λ∈ K, R̃∈ K[[x_1,x_2]][y] such that: R̃=ã_0+ã_1y+ã_2y^2 with _x(ã_0)≥ 3, _x(ã_1)≥ 2 and _x(ã_2)≥ 2. Step 2. Here, there isn't any l'>4 as in (<ref>). Step 3. We consider the case where l+1=5 corresponding to Third Step <ref>. By applying Conditions (i), (ii), (iii) of Lemma <ref> as before, we obtain: ℱ'_5={(0,5,1),(2,5,1),(4,5,1) } and 𝒢'_5=∅. The instance of (<ref>) is: [ (a_0,5,1+a_2,5,1.s^2++a_4,5,1.s^4).C_0 = -( a_0,4,2+a_2,4,2.s^2)2C_0C_1; = -λ(1+s^2) 2C_0C_1. ] Here, c_1≠ 0, and c_0 is not algebraic relatively to (ℱ'_5,𝒢'_5) since 𝒢'_5=∅, so we are in the case ⋆⋆ of Third Step <ref>. Note that θ_s,1=(4+2)a+b with a=1, b=0 (see Lemma <ref>), so θ_s,1=6. According to Lemma <ref>, we are assured to find a non zero reconstruction minor at depth at most 2.3.θ_s,(0,…,0,1)d^d+1=2× 3× 6× 2^3=288. However, here, the Wilczynski matrices (where again for simplicity we only consider the lines consisting of the coefficients of 1, s^2, s^4, etc.) are triangular with non zero diagonal coefficients: M_ℱ'_5,𝒢'_5=M_ℱ'_5,𝒢'_5^red =[[ 1 0 0; -1/2 1 0; 3/8 -1/2 1; -5/16 3/8 -1/2; 35/128 -5/16 3/8; ⋮ ⋮ ⋮; ]]. A first nonzero minor is obtained with the three first lines, and is equal to 1. But we notice that, here, Equation (<ref>) can be simplified by C_0 (since c_0≠ 0) and we get: a_0,5,1+a_2,5,1.s^2++a_4,5,1.s^4= -λ(1+s^2) 2 C_1. By evaluating at c_1=-1/2s^2+1/2s^4-1/2s^6+1/2s^8-1/2s^10+⋯, we see that: -λ(1+s^2) 2 c_1=λ s^2 and therefore a_0,5,1= a_4,5,1=0 and a_2,5,1=λ. As a conclusion, the K-vector space E_4,5 of polynomials corresponding to Third Step <ref> is E_4,5:= {λ[(1+s^2)y^2-1]t^4+(λ s^2 y) t^5+R(s,t,y) | λ∈ K, R∈(K[s][[t]][y])_ℱ,𝒢, w_t(R)≥ 6}. As before, we go back to the variables (x_1,x_2) by the following transformation: Q(s, t,y)=Q̃(s^2t^2,t^2,ty). The space E_4,5 corresponds to the space of polynomials in K[[x_1,x_2]][y] of the form: λ[(x_1+x_2)y^2+ x_1x_2 y-x_2^2]+ R̃(x_1,x_2,y) with λ∈ K, R̃∈ K[[x_1,x_2]][y] such that: R̃=ã_0+ã_1y+ã_2y^2 with _x(ã_0)≥ 3, _x(ã_1)≥ 3 and _x(ã_2)≥ 2. Step 4. We consider the case where S(l̃)=6 corresponding to Induction Step <ref>. By applying Conditions (i), (ii), (iii) of Lemma <ref> as before, we obtain: ℱ'_6={(0,6,2),(2,6,2),(4,6,2) } and 𝒢'_6={(0,6,0),(2,6,0),(4,6,0),(6,6,0) }. The instance of (<ref>) is: [ (a_0,6,2+a_2,6,2.s^2++a_4,6,2.s^4).C_0^2+ a_0,6,0+a_2,6,0.s^2+a_4,6,0.s^4+a_6,6,0.s^6; =-(( a_0,4,2+a_2,4,2.s^2)(2C_0C_2+ C_1^2) +(a_0,5,1+a_2,5,1.s^2++a_4,5,1.s^4).C_1); = -λ[(1+s^2) (2C_0C_2+ C_1^2) + s^2 C_1]. ] Note that we are in the case ⋆∙ of Induction Step <ref> since c_0 is algebraic relatively to (ℱ'_6,𝒢'_6). Moreover, when evaluating at c_0, c_1 and c_2=1/8s^4-3/16s^6+15/64s^8-35/128s^10+⋯, we obtain that the right-hand side of (<ref>) vanishes. 
So we get: (a_0,6,2+a_2,6,2.s^2++a_4,6,2.s^4).C_0^2+ a_0,6,0+a_2,6,0.s^2+a_4,6,0.s^4+a_6,6,0.s^6=0 which is of the same type as (<ref>). The corresponding Wilczynski matrices (where again for simplicity the lines consists only of the coefficients of 1, s^2, s^4, etc.) are M_ℱ'_6,𝒢'_6 :=[[ 1 0 0 0 1 0 0; 0 1 0 0 -1 1 0; 0 0 1 0 1 -1 1; 0 0 0 1 -1 1 -1; 0 0 0 0 1 -1 1; 0 0 0 0 -1 1 -1; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; ]] and M_ℱ'_6,𝒢'_6^red :=[[ -1 1 -1; 1 -1 1; -1 1 -1; 1 -1 1; -1 1 -1; ⋮ ⋮ ⋮; ]] We apply the reconstruction method of Section <ref> with maximal subfamily ℱ”_6={(2,6,2)}. According to Lemma <ref>, we obtain: a_2,6,2= a_0,6,2λ_2,6,2^0,6,2+ a_4,6,2λ_2,6,2^4,6,2 where here λ_2,6,2^0,6,2=-1 is the coefficient relating the column (0,6,2) to the column (2,6,2). Likewise, λ_2,6,2^4,6,2=-1. Let us consider a_0,6,2 and a_4,6,2 as parameters α,β∈ K, so a_2,6,2=-α-β. Moreover, we compute the coefficients of 𝒢'_6 according to (<ref>) in Lemma <ref>: [ a_0,6,0 = -a_0,6,2. 1 = -α; a_2,6,0 = a_0,6,2. 1 -a_2,6,2.1 = 2α+β; a_4,6,0 = - a_0,6,2. 1 +a_2,6,2.1 -a_4,6,2.1 = -2α-2β; a_6,6,0 = a_0,6,2. 1 -a_2,6,2.1 +a_4,6,2.1 = 2α+2β ] As a conclusion, the K-vector space E_4,6 of polynomials corresponding to Induction Step <ref> is [ E_4,6:={λ[(1+s^2)y^2-1]t^4+(λ s^2 y) t^5 +.; [ (α - (α +β )s^2 +β s^4) y^2 -α +(2α+β)s^2- 2(α +β)s^4+ 2(α +β)s^6 ]t^6 +R(s,t,y) |; .λ,α,β∈ K, R∈(K[s][[t]][y])_ℱ,𝒢, w_t(R)≥ 7}. ] As before, we go back to the variables (x_1,x_2) by the following transformation: Q(s, t,y)=Q̃(s^2t^2,t^2,ty). The space E_4,6 corresponds to the space of polynomials in K[[x_1,x_2]][y] of the form: [ (λ x_1+λ x_2+ αx_2^2- (α +β )x_1x_2 +β x_1^2)y^2+ λ x_1x_2 y; -λx_2^2 -αx_2^3 +(2α+β)x_1x_2^2- 2(α +β)x_1^2x_2+ 2(α +β)x_1^3 + R̃(x_1,x_2,y) ] with λ,α,β∈ K, R̃∈ K[[x_1,x_2]][y] such that: R̃=ã_0+ã_1y+ã_2y^2 with _x(ã_0)≥ 4, _x(ã_1)≥ 3 and _x(ã_2)≥ 3. Note that we recover the beginning of the analytic expansion of P̃ at 0 in (<ref>) for λ=1 and α=β=0. § A GENERALIZATION OF THE FLAJOLET-SORIA FORMULA. In the monovariate context, let Q(x,y)=∑_i,ja_i,jx^iy^j ∈ K[x,y] with Q(0,0)=∂ Q/∂ y(0,0)=0 and Q(x,0)≠ 0. In <cit.>, P. Flajolet and M. Soria give the following formula for the coefficients of the unique formal solution y_0=∑_n≥ 1c_nx^n of the implicit equation y=Q(x,y): [Flajolet-Soria's Formula <cit.>] c_n=∑_m=1^2n-11/m∑_|k|=m, ||k||=m-1, g(k)=nm!/∏_i,jk_i,j!∏_i,ja_i,j^k_i,j, where k=(k_i,j)_i,j, |k|=∑_i,jk_i,j, ||k|| = ∑_i,jj k_i,j and g(k) = ∑_i,ji k_i,j. Note that in the particular case where the coefficients of Q verify a_0,j=0 for all j, one has m≤ n in the summation. One can derive immediately from Theorems 3.5 and 3.6 in <cit.> a multivariate version of the Flajolet-Soria Formula in the case where Q(x,y)∈ K[x,y]. The purpose of the present section is to generalize the latter result to the case where Q(x,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y]. We will need a special version of Hensel's Lemma for multivariate power series elements of K((x_1^ℤ,…,x_r^ℤ))^grlex. Recall that the latter denotes the field of generalized series (K((X^ℤ^r))^grlex, w) where w is the graded lexicographic valuation as described in Section <ref>. Generalized series fields are known to be Henselian <cit.>. For the convenience of the reader, we give a short proof in our particular context. We call strongly reduced Henselian equation any equation of the following type: y=F(u,y) with F(u,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod, such that w(F(u,y))>_grlex0 and F(u,0) 0. 
[Hensel's lemma] Any strongly reduced Henselian equation admits a unique solution y_0= ∑_n>_grlex0c_nu^n∈ K((u_1^ℤ,…,u_r^ℤ))^grlex. Let y=F(u,y) be a strongly reduced Henselian equation and let y_0=∑_n>_grlex0c_nu^n∈ K((u_1^ℤ,…,u_r^ℤ))^grlex. For n∈ℤ^r, n>_grlex0, let us denote z̃_n:= ∑_m<_grlexn c_mu^m. We get started with the following key lemma: The following are equivalent: * a series y_0 is a solution of (<ref>); * for any n∈ℤ^r, n>_grlex0, w(z̃_n-F(u,z̃_n))=w(y_0-z̃_n); * for any n∈ℤ^r, n>_grlex0, w(z̃_n-F(u,z̃_n))≥_grlexn. For n>_grlex0, let us denote ỹ_n:=y_0-z̃_n=∑_m≥_grlexn c_mu^m. We apply Taylor's Formula to G(u,y):=y-F(u,y) at z̃_n: G(u,z̃_n+y) =z̃_n-F(u,z̃_n)+(1-∂ F/∂ y(u,z̃_n))y +y^2H(u,y), where H(u,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex[y] with w(R(u,y))>_grlex0. The series y_0 is a solution of (<ref>) iff for any n, ỹ_n is a root of G(u,z̃_n+y)=0, i.e.: z̃_n-F(u,z̃_n)+(1-∂ F/∂ y(u,z̃_n))ỹ_n+ ỹ_n^2H(u,ỹ_n)=0. Now consider y_0 a solution of (<ref>) and n∈ℤ^r, n>_grlex0. Either ỹ_n=0, i.e. y_0=z̃_n: (2) holds trivially. Or ỹ_n≠ 0, so we have: n≤_grlex w((1-∂ G/∂ y(u,z̃_n))ỹ_n) =w(ỹ_n)<_grlex 2w(ỹ_n)<_grlex w(ỹ_n^2H(u,ỹ_n)). So we must have w(z̃_n-G(u,z̃_n))=w(ỹ_n). Now, (2) ⇒ (3) since w(ỹ_n)≥_grlexn. Finally, suppose that for any n, w(z̃_n-F(u,z̃_n))≥_grlexn. If y_0-F(x,y_0)≠ 0, denote n_0:= w(y_0-F(u,y_0)). For n>_grlexn_0, one has n_0=w(z̃_n-F(u,z̃_n))≥_grlexn. A contradiction. Let us return to the proof of Theorem <ref>. Note that, if y_0 is a solution of (<ref>), then its support needs to be included in the monoid 𝒮 generated by the i's from the nonzero coefficients a_i,j of F(x,y). If not, consider the smallest index n for ≤_grlex which is not in 𝒮. Property (2) of Lemma <ref> gives a contradiction for this index. 𝒮 is a well-ordered subset of (ℤ^r)_≥_grlex0 by <cit.>. Let us prove by transfinite induction on n∈𝒮 the existence and uniqueness of a sequence of series z̃_n as in the statement of the previous lemma. Suppose that for some n∈𝒮, we are given a series z̃_n with support included in 𝒮 and <_grlexn, such that w(z̃_n-F(u,z̃_n))≥_grlexn. Then by Taylor's formula as in the proof of the previous lemma, denoting by m the successor of n in 𝒮 for ≤_grlex: G(u,z̃_m)=G(u,z̃_n+c_nu^n) =z̃_n-F(u,z̃_n)+(1-∂ F/∂ y(u,z̃_n))c_nu^n +c_n^2u^2nH(u,z̃_n). Note that w(H(u,z̃_n))≥_grlex0 since w(z̃_n)>_grlex0 and w(F(u,y))>_grlex0. Therefore, one has: w(G(u,z̃_m))=w(z̃_m-F(u,z̃_m))≥_grlexm>_grlexn if and only if c_n is equal to the coefficient of u^n in F(u,z̃_n). This determines z̃_m in a unique way as desired. We prove now our generalized version of the Flajolet-Soria Formula <cit.>. Our proof, as the one in <cit.>, uses the classical Lagrange Inversion Formula in one variable. We will use Notation <ref>. [Generalized multivariate Flajolet-Soria Formula] Let y=F(u,y)=∑_i,ja_i,ju^iy^j be a strongly reduced Henselian equation. Define ι_0=(ι_0,1,…,ι_0,r) by: -ι_0,k:=min{0, i_k / a_i,j≠ 0, i = (i_1,…,i_k,…,i_r)}, k=1,…,r. Then the coefficients c_n of the unique solution y_0=∑_n>_grlex0 c_nu^n∈ K((u_1^ℤ,…,u_r^ℤ))^grlex are given by: c_n=∑_m=1^μ_n1/m∑_|M|=m, ||M||=m-1, g(M)=nm!/M!A^M where μ_n is the greatest integer m such that there exists an M with |M|=m, ||M||=m-1 and g(M)=n. Moreover, for n=(n_1,…,n_r), μ_n≤∑_k=1^rλ_k n_k with: λ_k={[ ∏_j=k+1^r-1(1+ι_0,j)+∏_j=1^r-1(1+ι_0,j) if k<r-1;; 1+∏_j=1^r-1(1+ι_0,j) if k=r-1;; ∏_j=1^r-1(1+ι_0,j) if k=r. ]. * In (<ref>), note that the second sum is finite. Indeed, let M=(m_i,j) be such that |M|=m, ||M||=m-1, g(M)=n. 
Since F∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y], if i has a component negative enough, then a_i,j=0. On the other hand, since |M|=m and g(M)=n, the positive components of i are bounded. * By <cit.>, 1/m·m!/M!∈ℕ. If we set m_j:=∑_im_i ,j and N=(m_j)_j, then |N|=m, N=m-1 and: 1/m·m!/M!= 1/m·m!/N!·N!/M!, where N!/M! is a product of multinomial coefficients and 1/m·m!/N! is an integer again by <cit.>. Thus, each c_n is the evaluation at the a_i,j's of a polynomial with coefficients in ℤ. For a given strongly reduced Henselian equation y=F(u,y), one can expand: f(u,y):=y/F(u,y)=∑_n≥ 1b_n(u)y^n ∈ K((u_1^ℤ,…,u_r^ℤ))^grlex[[y]] with b_1≠ 0, which admits a unique formal inverse in K((u_1^ℤ,…,u_r^ℤ))^grlex[[y]]: f̃(u,y)= ∑_m≥ 1d_m(u) y^m. The Lagrange Inversion Theorem (see e.g. <cit.> with ℱ=K((u_1^ℤ,…,u_r^ℤ))^grlex and P=f(u,y)) applies: for any m, d_m(u) is equal to the coefficient of y^m-1 in [F(u,y)]^m, divided by m. Hence, according to the multinomial expansion of [F(u,y)]^m=[∑_i,ja_i,ju^iy^j]^m: d_m(u)=1/m∑_|M|=m, ||M||=m-1m!/M!A^Mu^g(M). Note that the powers n of u that appear in d_m are nonzero elements of the monoid generated by the exponents i of the monomials u^iy^j appearing in F(u,y), so they are >_grlex0. Now, it will suffice to show that, for any fixed n, the number ∑_k=1^rλ_k n_k is indeed a bound for the number μ_n of m's for which d_m can contribute to the coefficient of u^n. Indeed, this will show that f̃(u,y)∈ K[y]((u_1^ℤ,…,u_r^ℤ))^grlex. But, by definition of f̃, one has that: f̃(u,y)=y F(u,f̃(u,y)) ∈ K((u_1^ℤ,…,u_r^ℤ))^grlex[[y]]. Hence, both members of this equality are in fact in K[y]((u_1^ℤ,…,u_r^ℤ))^grlex. So, for y=1, we get that f̃(u,1)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex is a solution with w(f̃(u,1))>_grlex0 of the equation: f(u,y)=y/F(u,y)=1 ⇔ y=F(u,y). It is equal to the unique solution y_0 of Theorem <ref>: y_0=f̃(u,1)= ∑_m≥ 1d_m(u). We consider the relation: g(M)=n ⇔ {[ ∑_i,jm_i,j i_1 = n_1;; ⋮; ∑_i,jm_i,j i_r = n_r. ]. Let us decompose m=|M|=∑_i,jm_i,j as follows: |M|=∑_|i|>0m_i,j+∑_|i|=0, i_1>0m_i,j+⋯+ ∑_|i|=0=i_1=⋯=i_r-2, i_r-1>0m_i,j. So, the relation g(M)=n can be written as: {[ ∑_|i|>0m_i,j i_1+∑_|i|=0, i_1>0m_i,j i_1 = n_1;; ⋮; ∑_|i|>0m_i,j i_k+∑_|i|=0, i_1>0m_i,j i_k+⋯+ ∑_|i|=0=i_1=⋯=i_k-1, i_k>0m_i,j i_k = n_k;; ⋮; ∑_i,jm_i,j i_r = n_r. ]. Firstly, let us show by induction on k∈{0,…,r-1} that: [ ∑_|i|=0=i_1=⋯=i_k-1, i_k>0m_i,j ≤ ∑_q=1^k-1[ι_0,k(∏_p=q+1^k-1(1+ι_0,p) + ∏_p=1^k-1(1+ι_0,p) )]n_q; +[1+ι_0,k∏_p=1^k-1(1+ι_0,p) ]n_k; +[ι_0,k∏_p=1^k-1(1+ι_0,p)]n_k+1 +⋯+[ι_0,k∏_p=1^k-1(1+ι_0,p)]n_r , ] the initial step k=0 being: ∑_|i|>0m_i,j≤ n_1+…+n_r. This case k=0 follows directly from (<ref>), by summing its r relations: ∑_|i|>0m_i,j≤∑_|i|>0m_i,j|i|≤ n_1+…+n_r. Suppose that we have the desired property until some rank k-1. Recall that for any i, i_k≥ -ι_0,k. By the k'th equation in (<ref>), we have: [ ∑_|i|=0=i_1=⋯=i_k-1, i_k>0m_i,j ≤ ∑_|i|=0=i_1=⋯=i_k-1, i_k>0m_i,j i_k ][ ≤ n_k-( ∑_|i|>0m_i,j i_k+∑_|i|=0, i_1>0m_i,j i_k+⋯+ ∑_|i|=0=i_1=⋯=i_k-2, i_k-1>0m_i,j i_k); ≤ n_k+ι_0,k( ∑_|i|>0m_i,j +∑_|i|=0, i_1>0m_i,j +⋯+ ∑_|i|=0=i_1=⋯=i_k-2, i_k-1>0m_i,j). ] We apply the induction hypothesis to these k sums and obtain an inequality of type: ∑_|i|=0=i_1=⋯=i_k-1, i_k>0m_i,j≤α_k,1 n_1+⋯+α_k,r n_r. For q>k, let us compute: [ α_k,q = ι_0,k( 1+ ι_0,1+ ι_0,2(1+ι_0,1)+ι_0,3(1+ι_0,1)(1+ι_0,2)+⋯ + ι_0,k-1∏_p=1^k-2(1+ι_0,p) ); = ι_0,k∏_p=1^k-1(1+ι_0,p). ] For q=k, we have the same computation, plus the contribution of the isolated term n_k. Hence: α_k,k=1+ι_0,k∏_p=1^k-1(1+ι_0,p). 
For q<k, we have a part of the terms leading again by the same computation to the formula ι_0,k∏_p=1^k-1(1+ι_0,p). The other part consists of terms starting to appear at the rank q and whose sum can be computed as: ι_0,k( 1+ ι_0,q+1+ ι_0,q+2(1+ι_0,q+1)+⋯ + ι_0,k-1∏_p=q+1^k-2(1+ι_0,p) ) = ι_0,k∏_p=q+1^k-1(1+ι_0,p). So we obtain as desired: α_k,q= ι_0,k[ ∏_p=q+1^k-1(1+ι_0,p)+ ∏_p=1^k-1(1+ι_0,p)]. Subsequently, we obtain an inequality for m=|M|=∑_i,jm_i,j of type: [ m = ∑_|i|>0m_i,j+∑_|i|=0, i_1>0m_i,j+⋯+ ∑_|i|=0=i_1=⋯=i_r-2, i_r-1>0m_i,j; ≤ α_1 n_1+⋯ +α_r n_r, ] with α_k= 1+∑_l=1^r-1α_l,k for any k. For k=r, let us compute in a similar way as before for α_k,q: [ α_r = 1+ι_0,1+ι_0,2(1+ι_0,1)+⋯ +ι_0,k∏_p=1^k-1(1+ι_0,p)+⋯ +ι_0,r-1∏_p=1^r-2(1+ι_0,p); = ∏_p=1^r-1(1+ι_0,p)=λ_r. ] For k=r-1, we have the same computation plus 1 coming from the term α_r-1,r-1. Hence: α_r-1=1+ ∏_p=1^r-1(1+ι_0,p)=λ_r-1. For k∈{1,…,r-2}, we have a part of the terms leading again by the same computation to the formula ∏_p=1^r-1(1+ι_0,p). The other part consists of terms starting to appear at the rank k and whose sum can be computed as: 1+ι_0,k+1+ι_0,k+2(1+ι_0,k+1)+⋯+ι_0,r-1∏_p=k+1^r-2(1+ι_0,p)=∏_p=k+1^r-1(1+ι_0,p) Altogether, we obtain as desired: α_k=∏_p=k+1^r-1(1+ι_0,p)+∏_p=1^r-1(1+ι_0,p)=λ_k. * Note that for any k∈{1,…,r-1}, λ_k=λ_r(1/(1+ι_0,1)⋯(1+ι_0,k)+1), so λ_1≥λ_k>λ_r. Thus, we obtain that: μ_n≤λ_1|n|. Moreover, in the particular case where ι_0=0– i.e. when Q(x,y)∈ K[[x]][y] and y_0∈ K[[x]] as in <cit.>– we have λ_k=2 for k∈{1,…,r-1} and λ_r=1. Thus we obtain: μ_n≤ 2|n|-n_r≤ 2|n|. Note that : |n| ≤ 2|n|-n_r≤ 2|n| which can be related in this context with the effective bounds 2|n|-1 (case w_x(Q(x,y))≥_grlex0) and |n| (case w_x(Q(x,y))>_grlex0) given in <cit.>. * With the notation from Theorem <ref>, any strongly reduced Henselian equation y=Q(x,y) can be written: x^ι_0y=Q̃(x,y)with Q̃(x,y)∈ K[[x]][y] and w_x(Q̃(x,y))>_grlexι_0. Any element n of Supp y_0, being in the monoid 𝒮 of the proof of Theorem <ref>, is of the form: n=m-k ι_0 with m∈ℕ^r, k∈ℕ and k |ι_0|≤ |m|. Let us consider the following example of strongly reduced Henselian equation: [ y = a_1,-1,2x_1x_2^-1 y^2 + a_-1,2,0x_1^-1x_2^2 +a_0,1,1x_2y+ a_-1,3,0x_1^-1x_2^3 +a_0,2,1x_2^2y; +(a_1, 1, 0+ a_1,1,2y^2)x_1 x_2 +a_1,2,0 x_1x_2^2+a_2,1,1yx_1^2x_2; + a_1,3,0 x_1x_2^3 +a_2,2,1 yx_1^2x_2^2+a_3,1,2y^2x_1^3x_2. ] The support of the solution is included in the monoid 𝒮 generated by the exponents of (x_1,x_2), which is equal to the pairs n=(n_1,n_2)∈ℤ^2 with n_2=-n_1+ l and n_1≥ -l for l∈ℕ. We have ι_0=(1,1), so (λ_1,λ_2)=(3,2) and μ_n≤ 3n_1+2n_2=n_1+2l. We are in position to compute the first coefficients of the unique solution y_0. Let us give the details for the computation of the first terms, for l=0. In this case, to compute c_n_1,-n_1, n_1>0, we consider m such that 1≤ m≤μ_n_1,-n_1≤ n_1, and M=(m_i,j)_i,j such that: {[ |M|=m ⇔ ∑_i,jm_i,j=m≤ n_1;; M=m-1 ⇔ ∑_i,jm_i,jj=m-1≤ n_1-1;; g(M)=n ⇔ {[ ∑_i,jm_i,j i_1 = n_1>0;; ∑_i,jm_i,j i_2 = -n_1<0. ]. ]. The last condition implies that m_1,-1,2≥ n_1. But, according to the second condition, this gives n_1-1≥M≥ 2 m_1,-1,2≥ 2 n_1, a contradiction. Hence, c_n_1,-n_1=0 for any n_1>0. In the case l=1, we consider the corresponding conditions to compute c_n_1,-n_1+1 for n_1≥ -1. We obtain that 1≤ m≤μ_n_1,-n_1+1≤ n_1+2. Suming the two conditions in g(M)=(n_1,-n_1+1), we get m_-1,2,0+m_0,1,1=1 and m_i,j=0 for any i such that i_1+i_2≥ 2. 
So we are left with the following linear system: {[ (L_1) m_1,-1,2 + m_-1,2,0 + m_0,1,1 = m ≤ n_1+2; (L_2) 2 m_1,-1,2 + m_0,1,1 = m-1 ≤ n_1+1; (L_3) m_1,-1,2 - m_-1,2,0 = n_1; (L_4) -m_1,-1,2 + 2 m_-1,2,0 + m_0,1,1 = -n_1+1; ]. By comparing (L_2)-(L_3) and (L_1), we get that m=m-1-n_1, so n_1=-1. Consequently, by (L_1), m=1, and by (L_2), m_1,-1,2=m_0,1,1=0. Since m_-1,2,0+m_0,1,1=1, we obtain m_-1,2,0=1, which indeed gives the only solution. Finally, c_n_1,-n_1+1=0 for any n_1≥ 0 and: c_-1,2=1/1·1!/(1! 0!)· a_-1,2,0^1=a_-1,2,0. Similarly, we claim that one can determine that: [ c_-2,4 = 0, μ_n≤ 2;; c_-1,3 = a_-1,3,0+a_0,1,1a_-1,2,0+a_1,-1,2a_-1,2,0^2, μ_n≤ 3;; c_0,2 = 0, μ_n≤ 4;; c_1,1 = a_1,1,0, μ_n≤ 5;; c_n_1,-n_1+2 = 0 for n_1≥ 0, n_1≠ 1, μ_n≤ n_1+4;; c_n_1,-n_1+3 = 0 for -3≤ n_1≤ -2, μ_n≤ n_1+6;; c_-1,4 = a_0,2,1a_-1,2,0+a_0,1,1a_-1,3,0+2 a_1,-1,2a_-1,2,0a_-1,3,0; +a_0,1,1^2a_-1,2,0+3 a_0,1,1a_1,-1,2a_-1,2,0^2+2 a_1,-1,2^2a_-1,2,0^3, μ_n≤ 5;; ⋮ ] § CLOSED-FORM EXPRESSION OF AN ALGEBROID MULTIVARIATE SERIES. The field K of coefficients still has characteristic zero. Our purpose is to determine the coefficients of an algebroid series in terms of the coefficients of a vanishing polynomial. We consider the following polynomial of degree in y bounded by d_y and satisfying the conditions (i) to (iii) of Lemma <ref>: [ P(u,y) = ∑_i∈ℕ^r∑_j=0^d_ya_i,ju^iy^j , with P(u,y)∈ K[[u]][y]∖{0}; = ∑_i∈ℕ^rπ_i^P(y)u^i; = ∑_j=0^d_ya_j^P(u)y^j, ] and a formal power series: y_0=∑_n≥_grlex0c_nu^n, with y_0∈ K[[u]], c_0≠ 0. The field K((u)) is endowed with the graded lexicographic valuation w. For any k∈ℕ^r and for any Q(u,y)=∑_j=0^da_j^Q(u)y^j∈ K((u_1^ℤ,…,u_r^ℤ))^grlex[y], we denote: * S(k) the successor element of k in (ℕ^r,≤_grlex); * w(Q):=min{w(a_j^Q(u)), j=0,…,d}; * For any k∈ℕ^r, z_k:=∑_n=0^kc_nu^n; * y_k:=y_0-z_k=∑_n≥_grlexS(k)c_nu^n; * Q_k(u,y):=Q(u,z_k+u^S(k)y) =∑_i≥_grlexi_kπ^Q_k,i(y)u^i where i_k:=w(Q_k). Note that the sequence (i_k)_k∈ℕ^r is nondecreasing since Q_S(k)(u,y)=Q_k(u,c_S(k)+u^ny) for n=S^2(k)-S(k)>_grlex0, n∈ℤ^r. As for the algebraic case <cit.>, we consider y_0 a solution of the equation P=0 via an adaptation in several variables of the algorithmic method of Newton-Puiseux, again with two stages: * a first stage of separation of the solutions, which illustrates the following fact: y_0 may share an initial part with other roots of P. But, if y_0 is a simple root of P, this step concerns only finitely many of the first terms of y_0 since w(∂ P/∂ y (u,y_0)) is finite. * a second stage of unique "automatic" resolution: for y_0 a simple root of P, once it has been separated from the other solutions, we will show that the remaining part of y_0 is a root of a strongly reduced Henselian equation, in the sense of Definition <ref>, naturally derived from P and an initial part of y_0. (i) The series y_0 is a root of P(u,y) if and only if the sequence (i_k)_k∈ℕ^r, where i_k:=w(P_k), is strictly increasing. (ii) The series y_0 is a simple root of P(u,y) if and only if the sequence (i_k)_k∈ℕ^r is strictly increasing and there exists a lowest multi-index k_0 such that i_S(k_0)=i_k_0-S(k_0)+S^2(k_0). In that case, one has that i_S(k)=i_k-S(k)+S^2(k)=i_k_0-S(k_0)+S^2(k) for any k≥_grlexk_0. (i) Note that for any k∈ℕ^r, i_k≤_grlex w(P_k(u,0))=w(P(u,z_k)). Hence, if the sequence (i_k)_k∈ℕ^r is strictly increasing in (ℕ^r,≤_grlex), it tends to +∞ (i.e. ∀n∈ℕ^r, ∃k_0∈ℕ^r, ∀k≥_grlexk_0, i_k≥_grlexn), and so does w(P(u,z_k)). The series y_0 is indeed a root of P(u,y).
Conversely, suppose that there exist k<_grlexl such that i_k≥_grlexi_l. Since the sequence (i_n)_n∈ℕ^r is nondecreasing, one has that i_l≥i_k, so i_l=i_k. We apply the multivariate Taylor's formula to P_j(u,y) for j>_grlexk: [ P_j(u,y) = P_k(u,c_S(k)+ c_S^2(k)u^S^2(k)-S(k) +⋯+c_ju^j-S(k)+u^S(j)-S(k)y); = ∑_i≥_grlexi_kπ^P_k,i(c_S(k)+ c_S^2(k)u^S^2(k)-S(k) +⋯+u^S(j)-S(k)y) u^i; = π^P_k,i_k(c_S(k))u^i_k+b_S(i_k)u^S(i_k)+ ⋯. ] Note that b_S(i_k)= π^P_k,S(i_k)(c_S(k)) or b_S(i_k)= (π^P_k,i_k )'(c_S(k)) c_S^2(k)+π^P_k,S(i_k)(c_S(k)) depending on whether S(i_k)<_grlexi_k+S^2(k)-S(k) or S(i_k)=i_k+S^2(k)-S(k). For j=l, we deduce that π^P_k,i_k(c_S(k))≠ 0. This implies that for any j>_grlexk, i_j=i_k and w(P_j(u,0))=w(P(u,z_j))=i_k. Hence w(P(u,y_0))=i_k≠ +∞. (ii) The series y_0 is a double root of P if and only if it is a root of P and ∂ P/∂ y. Let y_0 be a root of P. Let us expand the multivariate Taylor's formula (<ref>) for j=S(k): [ [ P_S(k)(u,y) = π^P_k,i_k(c_S(k))u^i_k+ π^P_k,S(i_k)(c_S(k))u^S(i_k)+⋯; +[(π^P_k,i_k)'(c_S(k)) y+π^P_k,i_k+S^2(k)-S(k)(c_S(k))]u^i_k+S^2(k)-S(k)+⋯ + ]; [(π^P_k,i_k)”(c_S(k))/2 y^2+(π^P_k,i_k+S^2(k)-S(k))'(c_S(k)) y+π^P_k,i_k+2(S^2(k)-S(k))(c_S(k))]u^i_k+2(S^2(k)-S(k))+⋯ ] Note that if S(i_k)=i_k+S^2(k)-S(k), then there are no intermediary terms between the first one and the one with valuation i_k+S^2(k)-S(k). We have by definition of P_k: ∂ P_k/∂ y(u,y)=u^S(k)(∂ P/∂ y)_k(u,y)=∑_i≥_grlexi_k(π^P_k,i)'(y)u^i One has that π^P_k,i_k(y) 0 and π^P_k,i_k(c_S(k))=0 (see the point (i) above), so (π^P_k,i_k)'(y) 0. Thus: w((∂ P/∂ y)_k)=i_k-S(k). We perform the Taylor's expansion of (∂ P/∂ y)_S(k): [ (∂ P/∂ y)_S(k)(u,y) = (∂ P/∂ y)_k(u,c_S(k)+u^S^2(k)-S(k)y); = ( π^P_k,i_k)'(c_S(k))u^i_k-S(k)+⋯; + [(π^P_k,i_k)”(c_S(k)) y+(π^P_k,i_k+S^2(k)-S(k))'(c_S(k))]u^i_k+S^2(k)-2S(k)+⋯. ] By the point (i) applied to ∂ P/∂ y, if y_0 is a double root P, we must have (π^P_k,i_k)'(c_S(k))=0. Moreover, if π^P_k,i(c_S(k))≠ 0 for some i∈{S(i_k), … , i_k+S^2(k)-S(k)}, by Formula (<ref>) we would have i_S(k)≤_grlexi_k+S^2(k)-S(k) and even i_j≤_grlexi_k+S^2(k)-S(k) for every j>_grlexk according to Formula (<ref>): y_0 could not be a root of P. So, π^P_k,i(c_S(k))= 0 for i=S(i_k),..,i_k+S^2(k)-S(k), and, accordingly, i_S(k)>_grlexi_k+S^2(k)-S(k). If y_0 is a simple root of P, from the point (i) and its proof there exists a lowest k_0 such that the sequence (i_k-S(k))_k∈ℕ^r is no longer strictly increasing, that is to say, such that (π^P_k_0,i_k_0)'(c_S(k_0))≠ 0. For any k≥_grlexk_0, we consider the Taylor's expansion of (∂ P/∂ y)_S(k)=(∂ P/∂ y)_k_0(c_S(k_0)+⋯+u^S^2(k)-S(k_0 )y): [ (∂ P/∂ y)_S(k)(u,y) = (π^P_k_0,i_k_0)'(c_S(k_0))u^i_k_0-S(k_0)+⋯; +[(π^P_k_0,i_k_0)”(c_S(k_0))c_S^2(k_0)+(π^P_k_0, i_k_0+S^2(k_0)-S(k_0))' (c_S(k_0))]u^i_k_0+ S^2(k_0)-S(k_0) +⋯ ] and we get that: w(∂ P/∂ y(z_S(k),0) )=w((∂ P/∂ y)_S(k)(u,0))=w((∂ P/∂ y)_S(k))=i_k_0-S(k_0). By Equation (<ref>), we obtain that w((∂ P/∂ y)_S(k))=i_S(k)-S^2(k). So, i_S(k)=i_k_0-S(k_0)+S^2(k). As every k>_grlexk_0 is the successor of some k'≥_grlexk_0, we get that for every k≥_grlexk_0, i_k-S(k)=i_k_0-S(k_0). So, finally, i_S(k)=i_k-S(k)+S^2(k) as desired. Resuming the notations of Lemma <ref>, the multi-index k_0 represents the length of the initial part in the stage of separation of the solutions. In the following lemma, we bound it using the discriminant Δ_P of P (see just before Notation <ref>). Let P(u,y) be a nonzero polynomial with _y(P)≤ d_y and with only simple roots. Let y_0=∑_n∈^rc_ nu^n, c_0≠ 0 be one of these roots. 
The multi-index k_0 of Lemma <ref> verifies that: |k_0|≤_u(Δ_P(u)). By definition of k_0 and by Formula (<ref>), for any k≥_grlexk_0, w( ∂ P/∂ y(u,z_S(k)))=w(∂ P/∂ y(u,z_S(k_0)))=i_k_0-S(k_0). So, w(∂ P/∂ y(u,y_0))=w(∂ P/∂ y(u,z_S(k_0))). Moreover, by minimality of k_0, the sequence (i_k-S(k))_k is strictly increasing up to k_0, so by Formula (<ref>): w( ∂ P/∂ y(u,y_0))=w(∂ P/∂ y(u,z_S(k_0)))=w((∂ P/∂ y)_S(k_0)(u,0))≥_grlex w((∂ P/∂ y)_S(k_0))≥_grlexk_0. So: |k_0|≤|w( ∂ P/∂ y(u,y_0))|=ord_u∂ P/∂ y(u,y_0). Since P has only simple roots, its discriminant Δ_P is nonzero and one has a Bezout identity: A(u,y)P(u,y)+B(u,y)∂ P/∂ y(u,y)=Δ_P(u) with A,B∈ K[[u]][y]. By evaluating this identity at y=y_0, we obtain that _u(∂ P/∂ y(u,y_0) )≤_u(Δ_P(u)), so |k_0|≤_u(Δ_P(u)) as desired. Resuming Notation <ref> and the content of Lemma <ref>, we set: ω_0:=(π^P_k_0,i_k_0)'(c_S(k_0)). By Formula (<ref>), we note that (∂ P/∂ y)(u,y_0)=ω_0 u^i_k_0-S(k_0)+⋯. Thus, ω_0 is the initial coefficient of (∂ P/∂ y)(u,y_0) with respect to ≤_grlex, hence ω_0≠ 0. Consider the following nonzero polynomial in K[[u]][y] of degree in y bounded by d_y: P(u,y)=∑_i∈^r∑_j=0^d_ya_i,ju^iy^j = ∑_i≥_grlex0π^P_i(y)u^i, and a formal power series which is a simple root: y_0=∑_n≥_grlex0c_nu^n ∈ K[[u]], c_0≠ 0. Resuming Notations <ref> and <ref> and the content of Lemma <ref>, recall that ω_0:=(π^P_k_0,i_k_0)'(c_S(k_0))≠ 0. Then, for any k>_grlexk_0: * either the polynomial z_S(k)=∑_n=0^S(k)c_nu^n is a solution of P(u,y)=0; * or _kR(u,y):=P_k(u,y+c_S(k))/-ω_0u^i_k=-y+ _kQ(u,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y] defines a strongly reduced Henselian equation: y= _kQ(u,y) as in Definition <ref> and satisfied by: t_S(k):=y_0-z_S(k)/u^S(k)=c_S^2(k)u^S^2(k)-S(k)+c_S^3(k)u^S^3(k)-S(k)+⋯. We show by induction on k∈(ℕ^r,≤_grlex), k>_grlexk_0, that _kR(u,y)=-y+ _kQ(u,y) with _kQ(u,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y] is such that w( _kQ(u,y)) >_grlex0. Let us apply Formula (<ref>) with parameter k=k_0. Since i_S(k_0)=i_k_0+S^2(k_0)-S(k_0), we have that π^P_k_0,i(c_S(k_0))=0 for i_k_0≤_grlexi<_grlexi_k_0+S^2(k_0)-S(k_0), and accordingly: P_S(k_0)(u,y)=[ω_0 y+π^P_k_0,i_k_0+S^2(k_0)-S(k_0)(c_S(k_0))]u^i_k_0+S^2(k_0)-S(k_0)+ _S(k_0)T(u,y) where _S(k_0)T(u,y)∈ K[[u]][y] with w( _S(k_0)T(u,y))>_grlexi_k_0+S^2(k_0)-S(k_0). Since i_S^2(k_0)=i_k_0+S^3(k_0)-S(k_0)>_grlexi_k_0+S^2(k_0)-S(k_0), we obtain that: π^P_S(k_0),i_k_0+S^2(k_0)-S(k_0)(y)=ω_0 y+π^P_k_0,i_k_0+S^2(k_0)-S(k_0)(c_S(k_0)) vanishes at c_S^2(k_0), which implies that c_S^2(k_0)= -π^P_k_0,i_k_0+S^2(k_0)-S(k_0)(c_S(k_0))/ω_0. Computing _S(k_0)R(u,y), it follows that: _S(k_0)R(u,y)=-y+ _S(k_0)Q(u,y), with _S(k_0)Q(u,y)=_S(k_0)T(u,y +c_S^2(k_0))/-ω_0u^i_k_0+S^2(k_0)-S(k_0). So _S(k_0)Q(u,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y] with w( _S(k_0)Q(u,y))>_grlex0. Now suppose that the property holds true at a rank k≥_grlexS(k_0), which means that _kR(u,y):=P_k(u,y+c_S(k))/-ω_0u^i_k=-y+ _kQ(u,y). Therefore, for _kQ̌(u,y)=-ω_0 _kQ(u,y-c_S(k))∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y] which is such that w( _kQ̌(u, y)) >_grlex0, we can write: [ P_k(u,y) = ω_0(y-c_S(k))u^i_k+ u^i_k· _kQ̌(u,y); = π^P_k,i_k(y)u^i_k+π^P_k,S(i_k)(y)u^S(i_k)+ ⋯. ] Since P_S(k)(u,y)= P_k(u,c_S(k)+u^S^2(k)-S(k)y) and i_S(k)=i_k+S^2(k)-S(k) by Lemma <ref>, we have that: P_S(k)(u,y)=[ω_0 y+π^P_k,i_k+S^2(k)-S(k)(c_S(k))]u^i_k+S^2(k)-S(k)+π^P_S(k),S(i_S(k))(y)u^S(i_S(k))+⋯. But, again by Lemma <ref>, i_S^2(k)=i_S(k)+S^3(k)-S^2(k) >_grlexi_S(k)=i_k+S^2(k)-S(k). So we must have π^P_S(k),i_S(k)(c_S^2(k))=0, i.e. 
c_S^2(k)=-π^P_k,i_k+S^2(k)-S(k)(c_S(k))/ω_0. It follows that: P_S(k)(u,y)=ω_0(y-c_S^2(k))u^i_k+S^2(k)-S(k)+π^P_S(k),S(i_S(k))(y)u^S(i_S(k))+⋯, Since, by definition, _S(k)R(u,y):=P_S(k)(u,y+c_S^2(k))/-ω_0u^i_S(k)=-y+ _S(k)Q(u,y), we get that: [ _S(k)R(u,y) = -y- π^P_S(k),S(i_S(k))(y+c_S^2(k))/ω_0u^S(i_S(k))-i_S(k)+ ⋯; = -y+ _S(k)Q(u,y), _S(k)Q∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y], ] with w( _kQ(u,y)) >_grlex0 as desired. To conclude the proof, it suffices to note that the equation _kR(u,y)=0 is strongly reduced Henselian if and only if _kQ(u,0) 0, which is equivalent to z_S(k) not being a root of P. We will need the following lemma: Let P(u,y)∈ K[[u]][y]∖{0} be a polynomial of degree _y(P)≤ d_y with only simple roots. Assume that y_0, y_1∈ K[[u]] are two distinct roots. One has that: ord_u (y_0-y_1)≤_u(Δ_P(u)). Note that the hypothesis imply that d_y≥ 2. Let us write y_1-y_0=δ_1,0 and k:=w(y_1-y_0)=w(δ_1,0)∈ℕ^r. By Taylor's Formula, we have: [ P(u,y_0+δ_1,0) = 0; = P(u,y_0)+∂ P/∂ y(u,y_0) δ_1,0+⋯+1/d_y!∂^d_y P/∂ y^d_y(u,y_0)δ_1,0^d_y; = δ_1,0(∂ P/∂ y(u,y_0)+⋯+1/d_y!∂^d_y P/∂ y^d_y(u,y_0)δ_1,0^d_y-1). ] Since δ_1,0≠ 0 and ∂ P/∂ y(u,y_0)≠ 0, one has that: ∂ P/∂ y(u,y_0)=-δ_1,0(1/2∂^2 P/∂ y^2(u,y_0)+⋯+1/d_y!∂^d_y P/∂ y^d_y(u,y_0)δ_1,0^d_y-2) The valuation of the right hand side being at least k, we obtain that: w(∂ P/∂ y(u,y_0))≥_grlexk. But, by Lemma <ref>, we must have ord_u(∂ P/∂ y(u,y_0))≤_u(Δ_P(u)). So |k|≤_u(Δ_P(u)). For the courageous reader, in the case where y_0 is a series which is not a polynomial, we deduce from Theorem <ref> and from the generalized Flajolet-Soria's Formula <ref> a closed-form expression for the coefficients of y_0 in terms of the coefficients a_i,j of P and of the coefficients of an initial part z_k of y_0 sufficiently large, in particular for any k∈ℕ^r such that |k|≥_u(Δ_P(u))+1. Recall that i_k=w( P_k(u,y)). Note that for such a k, since y_0 is not a polynomial, by Lemma <ref>, z_S(k) cannot be a root of P. Let P(u,y)∈ K[[u]][y]∖{0} be a polynomial of degree _y(P)≤ d_y with only simple roots. Let k∈ℕ^r be such that |k|≥_u(Δ_P(u))+1. For any p>_grlex S(k), consider n:=p-S(k). Then: c_p=c_S(k)+n=∑_q=1^μ_n1/q(-1/ω_0)^q∑_|S|=q, S≥ q-1A^S(∑_|T_S|=S-q+1 g(T_S)=n+qi_k-(q-1)S(k)-g(S)e_T_SC^T_S), where μ_n is as in Theorem <ref> for the equation y= _kQ(u,y) of Theorem <ref>, S=(s_i,j)_i∈^r, j=0,…,d_y with finite support, and as in Notation <ref>, A^S=∏_i, ja_i,j^s_i,j, T_S=(t_S,i), C^T_S=∏_i=0^S(k)c_i^t_S,i, and e_T_S∈ℕ is of the form: e_T_S= ∑_(n^l,m_i,j,L)q!/∏_l =S(i_k)-i_k,…, d_yS(k)+(d_u,0,…,0)-i_k m=0,…,m_l∏_|i|=0,…,d_u j=m,…,d_y∏_|L|=j-m g(L)=l+i_k-mS(k)-in^l,m_i,j,L!∏_l=S(i_k)-i_k,…, d_y S(k)+(d_u,0,…,0)-i_k m=0,…,m_l∏_|i|=0,…,d_u j=m,…,d_y∏_|L|=j-m g(L)=l+i_k-mS(k)-i(j!/m! L!)^n^l,m_i,j,L, where we denote m_l:=min{d_y, max{m∈ℕ / mS(k)≤_grlexl +i_k}}, L=L_i,j^l,m=(l_i,j,0^l,m,…,l_i,j,S(k)^l,m), and where the sum is taken over the set of tuples (n^l,m_i,j,L)_l= S(i_k)-i_k,…,d_yS(k)+(d_u,0,…,0)-i_k, m=0,…,m_l |i|=0,…,d_u, j=m,…,d_y, |L|=j-m, g(L)=l+i_k-mS(k)-i such that: ∑_l,m∑_L n^l,m_i,j,L=s_i,j, ∑_l,m∑_i,j∑_Ln^l,m_i,j,L=q and ∑_l,m∑_i,j∑_Ln^l,m_i,j,LL= T_S. Note that the coefficients e_T_S are indeed natural numbers, since they are sums of products of multinomial coefficients because ∑_l,m∑_i,j∑_L n^l,m_i,j,L=q and m+|L|=j. In fact, 1/qe_T_S∈ℕ by Remark <ref> as we will see along the proof. 
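Before turning to the proof, the constructive content of the statement above can be checked numerically in a small case. The following sketch (ours, not part of the original text) treats the simplest univariate situation r=1 with numerical coefficients: since a strongly reduced Henselian equation y=Q(u,y) has w(Q)>_grlex0, each substitution y↦Q(u,y) fixes at least one further coefficient of the unique solution, so a truncated fixed-point iteration recovers the c_n. The toy equation, the truncation order and all names below are our own choices.

```python
import sympy as sp

u, y = sp.symbols('u y')
N = 8  # truncation order; assumption: N iterations stabilise c_1, ..., c_7

# Toy univariate strongly reduced Henselian equation y = Q(u, y):
# Q(u, 0) = u is nonzero and Q has positive valuation in u.
Q = u + u*y + u*y**2

sol = sp.Integer(0)
for _ in range(N):
    # one substitution step; terms of order >= N are discarded
    sol = (sp.expand(Q.subs(y, sol)) + sp.O(u**N)).removeO()

print(sp.expand(sol))  # coefficients of u, u**2, ..., u**7: 1, 1, 2, 4, 9, 21, 51
```

For this particular Q the coefficients 1, 1, 2, 4, 9, 21, 51 are the Motzkin numbers, and each c_n is, as stated above, a polynomial with integer coefficients in the a_i,j; in several variables the same iteration applies with truncation taken with respect to ≤_grlex.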
We get started by computing the coefficients of ω_0u^i_k _kR, in order to get those of _kQ: [ -ω_0u^i_k _kR = P_k(u, y+c_S(k)); = P(u,z_S(k)+u^S(k)y); = ∑_i∈^r , j=0,…,d_ya_i,ju^i(z_S(k)+u ^S(k)y)^j; = ∑_i∈^r , j=0,…,d_ya_i,ju^i∑_m=0^jj!/m! (j-m)!z_S(k)^j-mu^mS(k)y^m. ] For L=(l_0,⋯,l_S(k)), we denote C^L:=c_0^l_0⋯ c_S(k)^l_S(k). One has that: z_S(k)^j-m=∑_|L|=j-m(j-m)!/L!C^Lu^g(L). So: -ω_0u^i_k _kR=∑_m=0^d_y∑_i∈^r j=m,…,d_ya_i,j∑_|L|=j-mj!/m! L!C^Lu^g(L) +mS(k)+i y^m. We set l̂=g(L)+mS(k)+i. It verifies: l̂≥ mS(k). Thus: -ω_0u^i_k _kR=∑_m=0,…,d_y ∑_l̂ ≥ mS(k)∑_i ≤ l̂- mS(k) j=m,…,d_ya_i,j∑_|L|=j-m g(L)=l̂-mS(k)-ij!/m! L!C^Lu^l̂y^m. Since _kR(u,y)=-y+ _kQ(u,y) with w( _kQ(u,y))>_grlex0, the coefficients of _kQ are obtained for l̂≥_grlexS(i_k). We set l:=l̂-i_k and m_l:=min{d_y, max{m∈ℕ / mS(k)≤l +i_k}}. We obtain: _kQ(u,y)=∑_l ≥_grlex S(i_k)-i_k m=0,…,m_lb_l,mu^ly^m, with: b_l,m=-1/ω_0∑_i ≤ l+i_k- mS(k) j=m,…,d_ya_i,j∑_|L|=j-m g(L)=l+i_k-mS(k)-ij!/m! L!C^L. According to Lemma <ref>, Theorem <ref> and Lemma <ref>, we are in position to apply the generalized Flajolet-Soria's Formula of Theorem <ref> in order to compute the coefficients of the solution t_S(k)=c_S^2(k)u^S^2(k)-S(k)+c_S^3(k)u^S^3(k)-S(k)+⋯. Thus, denoting B:=(b_l,m), Q:=(q_l,m) with finite support and B^Q:=∏_l,m b_l,m^q_l,m for l≥_grlexS(i_k)-i_k and m=0,…,m_l, we obtain for n>_grlex0: c_S(k)+n=∑_q=1^μ_n1/q∑_|Q|=q, Q=q-1 , g(Q)=nq!/Q!B^Q. As in Remark <ref> (1), the previous sum is finite, and as in Remark <ref> (2), we have 1/q·q!/Q!∈ℕ. Let us compute: [ [ b_l,m^q_l,m = (-1/ω_0)^q_l,m(∑_i ≤ l+i_k- mS(k) j=m,…,d_ya_i,j∑_|L|=j-m g(L)=l +i_k-mS(k)-ij!/m! L!C^L)^q_l,m; = (-1/ω_0)^q_l,m∑_|M_l,m|=q_l,mq_l,m!/M_l,m!A^M_l,m∏_i ≤ l+i_k- mS(k) j=m,…,d_y(∑_|L|=j-m g(L)=l+i_k-mS(k)- ij!/m! L!C^L)^m^l,m_i,j ]; where M_l,m=(m^l,m_i,j) for i≤l+i_k- mS(k) , j=0,…,d_y and m^l,m_i,j=0 for j<m. ] Note that, in the previous formula, (-ω_0)^q_l,mb_l,m^q_l,m is the evaluation at A and C of a polynomial with coefficients in ℕ. Since 1/q·q!/Q!∈ℕ, the expansion of (-ω_0)^q1/q·q!/Q!B^Q as a polynomial in A and C will only have natural numbers as coefficients. Let us expand the expression ∏_i ≤ l+i_k- mS(k) j=m,…,d_y(∑_|L|=j-m g(L)=l+i_k-mS(k)-ij!/m! L!C^L)^m^l,m_i,j. For each (l,m,i,j), we enumerate the terms j!/m! L!C^L with h=1,…,α_i,j^l,m. Subsequently: [ (∑_|L|=j-m g(L)=l+i_k-mS(k)-ij!/m! L!C^L)^m^l,m_i,j = (∑_h=1^α_i,j^l,mj!/m! L_i,j,h^l,m!C^L_i,j,h^l,m)^ m^l,m_i,j; = ∑_|N^l,m_i,j|=m^l,m_i,jm^l,m_i,j!/N^l,m_i,j!( ∏_h=1^α_i,j^l,m(j!/m! L_i,j,h^l,m!)^ n^l,m_i,j,h) C^∑_h=1^α^l,m_i,j n^l,m_i,j,hL_i,j,h^l,m, ] where N^l,m_i,j= (n^l,m_i,j,h)_h=1,…,α_i,j^l,m, N^l,m_i,j!= ∏_h=1^α_i,j^l,m n^l,m_i,j,h!. Denoting H_l,m=(h^l,m_0,…,h^l,m_S(k)):= ∑_i ≤ l+i_k- mS(k) j=m,…,d_y∑_h=1^α_i,j^l,mn^l,m_i,j,hL_i,j,h^l,m, one computes: [ |H_l,m| = ∑_i ≤ l+i_k- mS(k) j=m,…,d_y∑_h=1^α_i,j^l,m n^l,m_i,j,h|L_i,j,h^l,m|; = ∑_i ≤ l+i_k- mS(k) j=m,…,d_y(∑_h=1^α_i,j^l,m n^l,m_i,j,h)(j-m); = ∑_i ≤ l+i_k- mS(k) j=m,…,d_ym^l,m_i,j(j-m); = M_l,m-m q_l,m. ] Likewise, one computes: [ g(H_l,m) = ∑_i ≤ l+i_k- mS(k) j=m,…,d_y∑_h=1^α_i,j^l,m n^l,m_i,j,hg(L_i,j,h^l,m); = ∑_i ≤ l+i_k- mS(k) j=m,…,d_y(∑_h=1^α_i,j^l,m n^l,m_i,j,h)(l+i_k-mS(k)-i); = ∑_i ≤ l+i_k- mS(k) j=m,…,d_ym^l,m_i,j(l+i_k-mS(k)-i); = q_l,m[l+i_k-mS(k)]-g(M_l,m). ] So, according to Formula (<ref>) and the new way of writing the expression ∏_i ≤ l+i_k- mS(k) j=m,…,d_y(∑_|L|=j-m g(L)=l+i_k-mS(k)-ij!/m! 
L!C^L)^m^l,m_i,j, we obtain: [ b_l,m^q_l,m = (-1/ω_0)^q_l,m∑_|M_l,m|=q_l,mA^M_l,m∑_|H_l,m|=M_l,m-m q_l,m g(H_l,m)=q_l,m[l+i_k-mS(k)]-g(M_l,m) d_H_l,mC^H_l,m; with d_H_l,m:=∑_(N^l,m_i,j)q_l,m!/∏_i ≤ l+i_k- mS(k) j=m,…,d_yN^l,m_i,j!∏_i ≤ l+i_k- mS(k) j=m,…,d_y∏_h=1^α_i,j^l,m(j!/m! L_i,j,h^l,m!)^n^l,m_i,j,h, ] where the sum is taken over {(N^l,m_i,j)_i ≤ l+i_k- mS(k) j=m,…,d_y such that |N^l,m_i,j|=m^l,m_i,j and ∑_i ≤ l+i_k- mS(k) j=m,…,d_y∑_h=1^α_i,j^l,m n^l,m_i,j,hL_i,j,h^l,m=H_l,m}. Note that, if the latter set is empty, then d_H_l,m=0. Recall that we consider Q:=(q_l,m) with finite support and such that |Q|=q, Q=q-1 and g(Q)=n. We deduce that: [ B^Q = ∏_l ≥_grlexS(i_k)-i_k m=0,…,m_lb_l,m^q_l,m; = (-1/ω_0)^q∏_l,m[∑_|M_l,m|=q_l,mA^M_l,m∑_|H_l,m|=M_l,m-m q_l,mH_l,m=q_l,m(l+i_k-mS(k))-g(M_l,m)d_H_l,mC^H_l,m]. ] Now, in order to expand the latter product of sums, we consider the corresponding sets: 𝒮_Q:={∑_l,mM_l,m / ∃ (M_l,m) s.t. |M_l,m|=q_l,m and ∀l,m, m^l,m_i,j=0 for j<m or i ≰ l+i_k- mS(k)} and, for any S∈𝒮_Q, ℋ_Q,S:={(H_l,m) / ∃ (M_l,m) s.t. |M_l,m|=q_l,m and ∀l,m, m^l,m_i,j=0 for j<m or i ≰ l+i_k- mS(k), . . ∑_l,mM_l,m=S, |H_l,m|=M_l,m-m q_l,m and g(H_l,m)=q_l,m(l+i_k-mS(k))-g(M_l,m) /} and 𝒯_Q,S:={∑_l,mH_l,m / (H_l,m)∈ℋ_Q,S}. We have: [ B^Q = (-1/ω_0)^q∑_S∈𝒮_QA^S∑_T_S∈𝒯_Q,S(∑_(H_l,m)∈ℋ_Q,S∑_l,mH_l,m=T_S∏_l,m d_H_l,m) C^T_S; = (-1/ω_0)^q∑_S∈𝒮_QA^S∑_T_S∈𝒯_Q,Se_Q,T_SC^T_S. ] where : e_Q,T_S:= ∑_(N^l,m_i,j)∏_l,mq_l,m!/∏_l,m∏_i,jN^l,m_i,j!∏_l,m∏_i,j∏_h(j!/m! L_i,j,h^l,m!)^n^l,m_i,j,h and where the previous sum is taken over: ℰ_Q,T_S:={( N^l,m_i,j)_l ≥_grlexS(i_k)-i_k, m=0,…,m_li ≤ l+i_k- mS(k), j=m,…,d_y / ∀i,j, ∑_l,m∑_h=1^α_i,j^l,mn^l,m_i,j,h=s_i,j, . . ∀l,m, ∑_i,j|N^l,m_i,j|=q_l,m, and ∑_l,m∑_i, j∑_h=1^α_i,j^l,m n^l,m_i,j,hL_i,j,h^l,m =T_S}. Note that, if the latter set is empty, then e_Q,T_S=0. Observe that 1/qq!/Q!e_Q,T_S lies in ℕ as a coefficient of (-ω_0) ^q1/qq!/Q!B^Q as seen before. Note also that, for any Q and for any S∈𝒮_Q, |S|=∑_l,mq_l,m=q and S≥∑_l,mmq_l,m=Q=q-1. Moreover, for any T_S∈𝒯_Q,S: [ |T_S| = ∑_l,mM_l,m-m q_l,m; = S-Q; = S-q+1 ] and: [ g(T_S) = ∑_l,mq_l,m(l+i_k-mS(k))-g(M_l,m); = g(Q)+|Q| i_k-Q S(k)-g(S); = n+q i_k-(q-1) S(k)-g(S). ] Let us show that: [ ∑_|Q|=q, Q=q-1, g(Q)=nq!/Q!B^Q = (-1/ω_0)^q∑_|S|=q, S≥ q-1A^S∑_|T_S|=S-q+1 g(T_S)=n+qi_k-(q-1)S(k)-g(S)e_T_SC^T_S, ] where e_T_S:=∑_(N^l,m_i,j)q!/∏_l,m∏_i,jN^l,m_i,j!∏_l,m∏_i,j∏_h(j!/m! L_i,j,h^l,m!)^n^l,m_i,j,h and where the sum is taken over ℰ_T_S:={(N^l,m_i,j)_l ≥_grlexS(i_k)-i_k, m=0,…,m_li ≤ l+i_k- mS(k), j=m,…,d_y s.t. ∑_l,m∑_h n^l,m_i,j,h=s_i,j, ∑_l,m∑_i,j|N^l,m_i,j|=q. . and ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,hL_i,j,h^l,m=T_S}. Note that, if the latter set is empty, then e_T_S=0. Recall that N^l,m_i,j!= ∏_h=1^α_i,j^l,m n^l,m_i,j,h! and that the L^l,m_i,j,h's enumerate the L's such that |L|=j-m and g(L)=l+i_k-m S(k)-i for given l,m,i,j. Let us consider S and T_S such that |S|=q, S≥ q-1, |T_S|=S-q+1, g(T_S)=n+qi_k-(q-1)S(k)-g(S) and such that ℰ_T_S≠∅. Take an element ( n^l,m_i,j,h)∈ℰ_T_S. Define m^l,m_i,j:=∑_h=1^α_i,j^l,m n^l,m_i,j,h for each i, j, l, m with j≥ m, and m^l,m_i,j:=0 if j<m or i ≰ l+i_k- mS(k). Set M_l,m:=(m^l,m_i,j)_i,j for each l, m. So, ∑_l,mm^l,m_i,j=∑_l,m∑_h=1^α_i,j^l,m n^l,m_i,j,h=s_i,j, and S=∑_l,mM_l,m. Define q_l,m:=∑_i,jm^l,m_i,j=|M_l,m| for each l, m, and Q:=(q_l,m). Let us show that |Q|=q, g(Q)=n and Q=q-1. By definition of ℰ_T_S, |Q|:=∑_l,mq_l,m= ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h=q. Recall that Q:=∑_l,mmq_l,m. 
We have: [ |T_S|= |∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,hL_i,j,h^l,m|=S -q+1; ⇔ ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h|L_i,j,h^l,m|=∑_i,jjs_i,j-q+1; ⇔ ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h(j-m)= ∑_i,jjs_i,j-q+1; ⇔ ∑_i,j j∑_l,m∑_h=1^α_i,j^l,mn^l,m_i,j,h- ∑_l,mm∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h= ∑_i,jjs_i,j-q+1; ⇔ ∑_i,j js_i,j-∑_l,mmq_l,m =∑_i,jjs_i,j-q+1; ⇔ Q=q-1. ] Recall that g(Q):=∑_l,mq_l,ml. We have: [ g(T_S)= g(∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,hL_i,j,u ^l,m)=n+q i_k -(q-1)S(k) -g(S); ⇔ ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,hg(L_i,j,h^l,m)=n+q i_k -(q-1)S(k) -g(S); ⇔ ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h(l+i_k -mS(k) -i)= n+q i_k -(q-1)S(k) - g(S); ⇔ [ ∑_l,ml∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h+ i_k∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h -S(k) ∑_l,mm∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h; -∑_i,ji∑_l,m∑_h=1^α_i,j^l,mn^l,m_i,j,h=n+q i_k -(q-1)S(k) -g(S); ]; ⇔ ∑_l,mq_l,ml+q i_k-S(k) ∑_l,mm q_l,m-∑_i,j s_i,ji= n+q i_k -(q-1)S(k) -g(S); ⇔ g(Q)+q i_k-QS(k)-g(S)=n+q i_k -(q-1)S(k) -g(S). ] Since Q=q-1, we deduce that g(Q)=n as desired. So, S∈𝒮_Q for Q as in the left-hand side of (<ref>). Now, set H_l,m:=∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,hL_i,j,h^l,m, so ∑_l,mH_l,m=T_S. Let us show that (H_l,m)∈ℋ_Q,S, which implies that T_S∈𝒯_Q,S as desired. The existence of (M_l,m) such that |M_l,m|=q_l,m and m^l,m_i,j=0 for j<m and ∑_l,mM_l,m=S follows by construction. Conditions |H_l,m|=M_l,m-m q_l,m and g(H_l,m)=q_l,m[l+i_k-mS(k)]-g(M_l,m) are obtained exactly as in (<ref>) and (<ref>). This shows that (n^l ,m_i,j,h) ∈ℰ_Q,T_S, so: ℰ_T_S⊆_|Q|=q, g(Q)=n, Q=q-1ℰ_Q,T_S. The reverse inclusion holds trivially since |Q|=q, so: ℰ_T_S = _|Q|=q, g(Q)=n, Q=q-1ℰ_Q,T_S. We deduce that: e_T_S=∑_|Q|=q, g(Q)=n, Q=q-1q!/Q! e_Q,T_S. We conclude that any term occuring in the right-hand side of (<ref>) comes from a term from the left-hand side. Conversely, for any Q as in the left-hand side of Formula (<ref>), S∈𝒮_Q and T_S∈𝒯_Q,S verify the following conditions: |S|=q, S≥ q-1, |T_S|=S-q+1 , T_S=n+q i_k-(q-1)S(k)-g(S) and ℰ_T_S = _|Q|=q, g(Q)=n, Q=q-1ℰ_Q,T_S, e_T_S=∑_|Q|=q, g(Q)=n, Q=q-1q!/Q! e_Q,T_S. Hence, any term occuring in the expansion of B^Q contributes to the right hand side of Formula (<ref>). Thus we obtain Formula (<ref>) from which the statement of Corollary <ref> follows. Note also that: 1/qe_T_S=∑_|Q|=q, g(Q)=n, Q=q-11/qq!/Q! e_Q,T_S, so 1/qe_T_S∈. We have seen in Theorem <ref> and its proof (see Formula (<ref>) with k=k_0) that ω_0=(π^P_k_0,i_k_0)'(c_S(k_0)) is the coefficient of the monomial u^i_S(k_0)y in the expansion of P_S(k_0)(u,y)=P(u,c_0u_r+⋯+c_S(k_0)u^S(k_0)+ u^S^2(k_0)y), and that c_S^2(k_0)=-π^P_k_0,i_S(k_0)(c_S(k_0))/ω_0 where π^P_k_0,i_S(k_0)(c_S(k_0)) is the coefficient of u^i_S(k_0) in the expansion of P_S(k_0)(u,y). Expanding P_S(k_0)(u,y), having done the whole computations, we deduce that: {[ ω_0 = ∑_i ≤ l+i_k- mS(k), j=1,..,d_y ∑_|L|=j-1, g(L)=i_k_0-S(k_0)-ij!/L!a_i,jC^L ;; c_S^2(k_0) = -1/ω_0∑_i ≤ l+i_k- mS(k), j=0,..,d_y ∑_|L|=j, g(L)=i_S(k_0)-i j!/L!a_i,jC ^L, ]. where C:=(c_0,…,c_S(k_0)) and L:=(l_0,…,l_S(k_0)). amsalpha'' EKM+01[Abh56]abh:val-cent-local-domain S. S. Abhyankar, On the valuations centered in a local domain, Amer. J. Math. 78 (1956), 321–348. [ADR22]aroca-decaup-rond:support-alg-laurent-series F. Aroca, J. Decaup, and G. Rond, The minimal cone of an algebraic Laurent series, Math. Ann. 382 (2022), no. 3-4, 1745–1773 (English). [AI09]aroca-ilardi:puiseux-multivar F. Aroca and G. Ilardi, A family of algebraically closed fields containing polynomials in several variables, Comm. Algebra 37 (2009), no. 
4, 1284–1296.
[AR19] F. Aroca and G. Rond, Support of Laurent series algebraic over the field of formal power series, Proc. Lond. Math. Soc. (3) 118 (2019), no. 3, 577–605.
[EKM+01] K. Evans, M. Konikoff, J. J. Madden, R. Mathis, and G. Whipple, Totally ordered commutative monoids, Semigroup Forum 62 (2001), no. 2, 249–278.
[EP05] A. J. Engler and A. Prestel, Valued fields, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2005.
[FS97] P. Flajolet and M. Soria, Coefficients of algebraic series, Algorithms seminar 1997–1998, Tech. Report, INRIA, 1997.
[GP00] P. D. González Pérez, Singularités quasi-ordinaires toriques et polyèdre de Newton du discriminant, Canad. J. Math. 52 (2000), no. 2, 348–368.
[Hah07] H. Hahn, Über die nichtarchimedischen Grössensystem, Sitzungsberichte der Kaiserlichen Akademie der Wissenschaften, Mathematisch-Naturwissenschaftliche Klasse (Wien) 116 (1907), Abteilung IIa, 601–655.
[Hen64] P. Henrici, An algebraic proof of the Lagrange-Bürmann formula, J. Math. Anal. Appl. 8 (1964), 218–224.
[HM17] M. Hickel and M. Matusinski, On the algebraicity of Puiseux series, Rev. Mat. Complut. 30 (2017), no. 3, 589–620.
[HM19] M. Hickel and M. Matusinski, About algebraic Puiseux series in several variables, J. Algebra 527 (2019), 55–108.
[KKS23] L. S. Krapp, S. Kuhlmann, and M. Serra, Generalised power series determined by linear recurrence relations, 2023, arXiv: arxiv.org/abs/2206.04126.
[Leg30] A.-M. Legendre, Théorie des nombres, t. 1, Firmin-Didot (Paris), 1830.
[McD95] J. McDonald, Fiber polytopes and fractional power series, J. Pure Appl. Algebra 104 (1995), no. 2, 213–233.
[Neu49] B. H. Neumann, On ordered division rings, Trans. Amer. Math. Soc. 66 (1949), 202–252.
[PR12] A. Parusiński and G. Rond, The Abhyankar-Jung theorem, J. Algebra 365 (2012), 29–41.
[Ray74] F. J. Rayner, Algebraically closed fields analogous to fields of Puiseux series, J. London Math. Soc. (2) 8 (1974), 504–506.
[Rib92] P. Ribenboim, Fields: algebraically closed and others, Manuscripta Math. 75 (1992), no. 2, 115–150.
[RvdD84] P. Ribenboim and L. van den Dries, The absolute Galois group of a rational function field in characteristic zero is a semidirect product, Canad. Math. Bull. 27 (1984), no. 3, 313–315.
[Saf00] K. V. Safonov, On power series of algebraic and rational functions in C^n, J. Math. Anal. Appl. 243 (2000), no. 2, 261–277.
[Sat83] A. Sathaye, Generalized Newton-Puiseux expansion and Abhyankar-Moh semigroup theorem, Inventiones Mathematicae 74 (1983), 149–157, DOI 10.1007/BF01388535.
[Sin80] D. Singmaster, Divisibility of binomial and multinomial coefficients by primes and prime powers, A collection of manuscripts related to the Fibonacci sequence, Fibonacci Assoc., Santa Clara, Calif., 1980, pp. 98–113.
[Sok11] A. D. Sokal, A ridiculously simple and explicit implicit function theorem, Sém. Lothar. Combin. 61A (2009/11), Art. B61Ad, 21 pp.
[SV06] M. J. Soto and J. L. Vicente, Polyhedral cones and monomial blowing-ups, Linear Algebra Appl. 412 (2006), no. 2-3, 362–372.
[SV11] M. J. Soto and J. L. Vicente, The Newton procedure for several variables, Linear Algebra Appl. 435 (2011), no. 2, 255–269.
[Wal78] R. J. Walker, Algebraic curves, Springer-Verlag, New York, 1978, Reprint of the 1950 edition.
[Wil19] E. J. Wilczynski, On the form of the power series for an algebraic function, Am. Math. Mon. 26 (1919), 9–12 (English).
http://arxiv.org/abs/2307.04273v1
20230709222249
Characterization of a novel proton-CT scanner based on Silicon and LaBr$_3$(Ce) detectors
[ "E. Nácher", "J. A. Briz", "A. N. Nerio", "A. Perea", "V. G. Távora", "O. Tengblad", "M. Ciemala", "N. Cieplicka-Orynczak", "A. Maj", "K. Mazurek", "P. Olko", "M. Zieblinski", "M. J. G. Borge" ]
physics.med-ph
[ "physics.med-ph", "nucl-ex", "physics.ins-det" ]
e1corresponding author: [email protected] e2present address: Universidad Complutense de Madrid, CEI Moncloa, E-28040 Madrid, Spain Instituto de Física Corpuscular, CSIC - Universidad de Valencia, Spain Instituto de Estructura de la Materia, CSIC, Madrid, Spain Instytut Fizyki Jadrowej PAN, 31-342 Krakow, Poland Characterization of a novel proton-CT scanner based on Silicon and LaBr_3(Ce) detectors E. Nácheraddr1, e1 J.A. Brizaddr2, e2 A.N. Nerioaddr2 A. Perea,addr2 V.G. Távoraaddr2 O. Tengbladaddr2 M. Ciemalaaddr3 N. Cieplicka-Orynczakaddr3 A. Majaddr3 K. Mazurekaddr3 P. Olkoaddr3 M. Zieblinskiaddr3 M.J.G. Borgeaddr2 Received: date / Accepted: date ================================================================================================================================================================================================================================================================================================================================================ Treatment planning systems at proton-therapy centres generally use X-ray computed tomography (CT) as primary imaging technique to infer the proton treatment doses to tumour and healthy tissues. However, proton stopping powers in the body, as derived from X-ray images, suffer from important proton-range uncertainties. In order to reduce this uncertainty in range, one could use proton-CT images instead. The main goal of this work is to test the capabilities of a newly-developed proton-CT scanner, based on the use of a set of tracking detectors and a high energy resolution scintillator for the residual energy of the protons. Different custom-made phantoms were positioned at the field of view of the scanner and were irradiated with protons at the CCB proton-therapy center in Krakow. We measured with the phantoms at different angles and produced sinograms that were used to obtain reconstructed images by Filtered Back-Projection (FBP). The obtained images were used to determine the capabilities of our scanner in terms of spatial resolution and proton Relative Stopping Power mapping and validate its use as proton-CT scanner. The results show that the scanner can produce medium-high quality images, with spatial resolution better than 2 mm in radiography, below 3 mm in tomography and resolving power in the RSP comparable to other state of the art pCT cameras. 42.30.Wb 42.79.Pw 07.77.Ka § INTRODUCTION According to World Health Organisation, cancer is the leading cause of death in the world. More than 50% of cancer patients receive some kind of radiation therapy (radiotherapy) during their course of treatment. Conventional radiotherapy for deep tumours makes use of X rays to control or kill malignant cells. Unfortunately, healthy tissue is not immune to the ionisation produced by the X rays and, therefore, the areas surrounding the cancerous tumour are severely damaged. Proton therapy is a technique that uses proton beams instead of X rays as ionising radiation. It has a far higher selectivity than conventional radiotherapy, what makes it ideal for the treatment of localised tumours in highly sensitive areas e.g. brain, heart or spinal cord. The application of proton therapy, however, is not exempt of difficulties. The precision in the determination of the distal position of the dose distribution is crucial for a complete irradiation of the tumour and to avoid, as much as possible, any dosage to the surrounding healthy tissue. 
So far, treatment planning systems at proton-therapy centres use X-ray computed tomography (X-ray CT) as primary imaging technique to calculate doses to tumour and healthy tissues. This produces a map of the linear attenuation coefficient of the tissue for X rays, the so called Hounsfield Units (HU). In the production of the treatment plan, one has to transform the map of HU into a map of relative proton stopping powers (RSP), since the patient is going to be treated with a beam of protons. However, there are unavoidable uncertainties associated with the derivation of the RSP map from the X-ray CT scan. Apart from the fact that the HU to RSP conversion depends on the chemical composition of the volume traversed by protons and not only on its HU value, it is not possible to ignore the ambiguity and limitations of the different HU to RSP conversion algorithms that are being used nowadays <cit.>. The aforementioned effects may lead to proton range uncertainties up to 5% in the abdomen and 11% in the head (see <cit.> and refs. therein). These large proton range uncertainties result in higher dose to healthy tissues or in a far too conservative treatment plan to avoid that. Reducing these uncertainties would allow a better planning that maximises the dose to the tumour, minimising at the same time the dose to the surrounding tissue. In order to reduce the uncertainty in proton range and take full advantage of the therapeutic potential of proton therapy, it is necessary to provide the treatment planning software with RSP maps obtained with proton beams rather than those derived after a conversion from the HU maps obtained with X rays. Proton computed tomography (proton CT) is the appropriate tool to produce such images since it makes use of proton beams provided by the same accelerator that is used later for the treatment, but this time at higher energy, so that the protons go through the patient and reach an appropriate proton scanner to form an image. See for instance the work of Takabe et al. <cit.> for a descriptive introduction to proton CT. Some other recent studies with more advanced scanners are described in Dedes et al. <cit.> and Esposito et al. <cit.>. In the next section we will describe the basis of proton CT and our approach to a proton scanner for imaging. Besides medical physics and imaging, basic nuclear-physics research in general involves the development of nuclear instrumentation, in the form of spectrometers and radiation detectors, to perform nuclear reactions and study the structure of matter. Within the detector R&D process, the design and test of prototypes is rather frequent. Sometimes, the prototypes are just small parts, the building blocks of the final device. In these cases, the optimised prototypes are often an integral part of the final product. However, some other times, the prototypes, although being very valuable radiation detectors themselves, cannot be used in the final device because they do not comply with the requirements or simply because they do not have the appropriate geometry: shape or size. In this work we will show how we have re-used one of these prototype detectors, that was developed as part of the R&D of a larger device but cannot be used now as part of it. This, in combination with some other instrumentation used for nuclear reaction and structure experiments, has been converted into a proton scanner capable of performing proton radiography and tomography as will be described in the next sections. 
§ MATERIALS AND METHODS §.§ The proton CT scanner Any scanning technique based on the use of a penetrating probe to obtain images by sections is known as tomography. These images by sections can be combined, using the appropriate reconstruction method, to form a 3-dimensional (3D) model of the object under study. In the case of transmission tomography, one generally starts by obtaining plane 2-dimensional (2D) projections, that, using the appropriate reconstruction algorithm, are turned into the final tomographic sections or 3D image. The most typical case of medical tomography technique by transmission is the X-ray CT, obtained from plane X-ray radiographs. The subject of this paper refers to tomography with proton beams, in other words, the obtention of images by sections using a proton beam as probe. Therefore, at the basis of proton CT stands the use of a proton accelerator that provides a proton beam with enough energy to go through the object of study. In clinical practice, the object of study is a part of a patient body, however, since we are presenting here a pre-clinical instrument, from now on the object of study will be referred to as phantom. As for the case of X-ray CT, we will start by obtaining proton radiographs that will be useful by themselves as explained later. Since the main goal of the proton CT scan is to produce a map of RSP, we need to detect the individual protons that form the beam, once they have gone through the phantom, and determine their trajectory and energy deposited in the phantom. With this aim, we will make use of tracking detectors to determine the trajectories and a calorimeter, a detector that absorbs all the energy of the particles penetrating, to measure the residual energy of the protons. Fig. <ref> shows a sketch of a simple proton-CT scanner. The tracking detectors, in green, are placed at the entrance and exit sides of the object of study, and are due to determine the entrance and exit point of each proton trajectory. In this work these trajectories are taken as straight lines as zero-order approximation, although we know that there is always a certain deviation within the object due to multiple Coulomb scattering (lateral deviation of the proton trajectories due to Coulomb interaction with the atomic nuclei). Apart from the trajectory followed, to calculate the RSP of the different materials within the phantom, we need to know the energy deposited by the protons. Since we know the energy of the beam delivered by the accelerator, what we really want to measure is the residual energy of the protons after having passed through the object and tracking detectors. For that, we need to place a calorimeter, or residual-energy detector, right after the rear tracking detector. For a concise description of the process let us refer to Fig. <ref> again. The proton beam reaches the setup, from the left side in the figure, and pass through the front position-sensitive detector, the phantom, and the rear position-sensitive detector. From the positions recorded in the tracking detectors we can trace back a straight line, our zero-order approximation for the trajectory. After traversing the rear tracking detector, the protons leave their remaining kinetic energy in the bulk of the calorimeter, at the right-hand side in the figure. Combining the trajectories and residual energy measured for each proton, we can reconstruct tomographic images of the RSP in the bulk of the phantom following any of the methods detailed in <cit.>. Fig. 
<ref> Panel A shows a 3D-CAD design of our proton-CT scanner. In this sketch one can clearly see the front and rear tracking detectors held by their red supports, the green phantom cylinder between them, and the calorimeter at the right end of the setup. The full setup is enclosed in an opaque box to prevent the passage of light that would produce spurious signals in the tracking detectors. At the right-hand side, Fig. <ref> panel B shows a real picture of the actual setup. Details on the tracking and residual energy detectors are given in what follows. Double-Sided Silicon Detectors for proton tracking The tracking detector system is comprised of two Double-Sided Silicon Strip Detectors (DSSD), manufactured by Micron Semiconductor Ltd. The first DSSD detector is placed directly facing the proton beam, at the front side of the phantom, to determine the entrance point. The second one in placed at the rear position, to determine the exit point of the protons. Both DSSDs are 1-mm thick, and segmented into 16 vertical and 16 horizontal strips, giving a total of 256 pixels of 3x3 mm^2 per tracking detector. The two DSSD detectors where set 8 cm apart from each other, covering a field of view of 48×48×80 mm^3. A full description of these detectors and a very thorough characterisation of their response function to charged particles is given in <cit.>. During the measurements presented in this work, the signals from the DSSDs went through Mesytec preamplifiers and shapers before entering the CAEN V785 ADCs at the data acquisition system. CEPA4: The Residual-Energy Detector The calorimeter, or residual-energy detector, used in our scanner is an array of four scintillation units, each of them comprised of two scintillator crystals in phoswich configuration: 4 cm of LaBr_3(Ce) and 6 cm of LaCl_3(Ce) with a common photomultiplier tube (see Fig. <ref>). The crystals are individually wrapped in reflecting material and closed packed in a 0.5 mm Aluminum can.The full detector array, called CEPA4, is a prototype detector for the endcap of CALIFA, the electromagnetic calorimeter of R^3B at FAIR. A full description of the CEPA4 and its response to high-energy proton beams can be found in <cit.>. In all the measurements described in this work, the signals from the photomultipliers were directly acquired by a Mesytec MDPP-16-QDC high-resolution time and charge integrating digitizer at the data acquisition system. The main advantage of using CEPA4 as residual-energy detector lies in its energy resolution, that translates in a better contrast in the final RSP image. For protons of 80-130 MeV, which are the relevant energies for our study, the protons are stopped by the first crystal, namely the LaBr_3(Ce) part of the phoswich, and the resolution of CEPA4 improves from 3.5 to 2%. For higher energies the protons penetrate the second crystal, namely the LaCl_3(Ce) and the resolution deteriorates to a maximum of 7%. §.§ In-beam experiments Apart from the setup and fine-tuning of the system at the laboratory of Instituto de Estructura de la Materia (IEM-CSIC, Madrid), we have carried out two experiments with proton beams: one at Centro de Microanálisis de Materiales (CMAM, Madrid), and the other at the Centrum Cyklotronowe Bronowice (CCB, Krakow). The former, at CMAM, was a proof-of-concept experiment with low-energy proton beams to test the DSSD as tracking detectors, details on the results can be found in <cit.>. 
The latter, at CCB, was the first test of the full setup with high-energy protons and is detailed in what follows. For a realistic test of our proton-CT scanner we used the high-energy proton beams provided by the IBA PROTEUS C235 proton cyclotron at the CCB in Krakow. The latter is part of the Henryk Niewodniczański Institute of Nuclear Physics Polish Academy of Sciences in Krakow (IFJ PAN) and its main focus is the application of cyclotrons in scientific research and tumor radiotherapy. For our measurements we were provided with mono-energetic proton beams at energies 100 and 110 MeV, with an energy spread of 1.5% (FWHM). The accelerator provided a high-current pencil beam (≈ 1 nA, ≈ 10 mm diameter), however, for our purposes, we needed a low-current fan beam covering the full field of view of our scanner. Thus, we measured using the protons scattered on a 25-μm thick (11.25 mg/cm^2) Titanium foil. The measurement was performed in air and with the proton-CT scanner at an angle of 12.5 degrees with respect to the beam direction, the alignment of the system was done using a laser system provided by the local team. In these conditions, our acquisition rate was kept around 10 kHz (triggered by an OR condition between the 3 detector signals). The energy loss due to the scattering angle, and the losses in the Ti foil have been calculated using the GEANT4 Monte Carlo code, the losses in the DSSD tracking detectors have been directly acquired by the detectors themselves since they also perform well as spectrometers. We used proton beams of 95, 100 and 120 MeV to calibrate the tracking and residual energy detectors, but the final measurements for radiography and tomography where carried out at 100 and 110 MeV respectively. A detailed Monte Carlo simulation of the experiment was performed using Geant4 <cit.> to calculate the values of energy deposited in the different volumes for the three proton beam energies, which allowed for an accurate calibration in the energy range of interest. A picture of the cyclotron providing the proton beam at CCB and a schematic view of the setup are shown in Fig. <ref>. § RESULTS §.§ Proton Radiography As we explained before, in the measuring process for proton CT we will obtain proton radiographs, 2D images that are useful by themselves. While the slices from the tomographic reconstruction hold a direct measurement of the RSP, the plane radiographs hold information of the line integrals of the RSP. This line integrals of the RSP are referred to as Water Equivalent Path Length (WEPL), and when they are averaged within a certain spatial bin, they turn into the so-called water-equivalent thickness (WET). Proton radiographs, when the spatial resolution allows for it, can be used for patient alignment/positioning. Furthermore, a comparison between a real proton radiograph and a virtual proton radiograph reconstructed from the X-ray CT used for the treatment plan, can be a very powerful tool to detect possible proton range errors due to the conversion of HU to RSP before the treatment. For our test beam at CCB we used some custom-made phantoms specially designed to test the spatial resolution of the system in realistic conditions. For that, we enclosed our Aluminum phantom inserts in a thick PMMA square box (a cube of 50 mm side). We tested two different patterns: a cross and a point/line regular spatial pattern. A picture of these two Aluminum inserts included in the two phantoms can be seen in Fig. 
<ref>, left column, close to the proton radiographs obtained with our scanner with proton beams of 100 MeV, at the right column. The radiographs reconstructed in the figure were obtained at the central plane of the phantom, the X,Y coordinates were determined by simply averaging the X and Y coordinates at both detector planes, always assuming straight proton trajectories. The colour scale represents the average energy deposited per detected proton. It is important to recall here that the energy deposited in the phantom is not proportional to the RSP but to its line integral along the proton trajectory and averaged within a spatial bin, namely the WET. For a rough estimation of the spatial resolution one can look carefully at the bottom half of Fig. <ref>. In the picture of the phantom, displayed in C, at the left hand side, the holes at the third raw starting from the bottom have 2 mm of diameter and are 2 mm apart of each other. In the radiograph, at the right hand side, one can clearly see that these holes are well resolved in the image, however, the 1-mm holes at the upper row are not. This allows us to conclude that the spatial resolution is better than 2 mm. A detailed description of the radiography measurements and results has already been published in <cit.>. The quality of the images was studied via a Modulation Transfer Function (MTF) analysis using the profiles obtained with the regular spatial pattern (holes in C and D panels in Fig. <ref>). The MTF is a measure of the capability of our device to transfer contrast at a particular resolution from the object to the image. In other words, the MTF is a way to incorporate resolution and contrast into a single specification. In this study, the MTF is calculated as the contrast (in percentage of grey level) in the image between one hole and the Aluminum spacing, and it is represented, in Fig. <ref>, as a function of the number of line pairs (hole-spacing pairs in our case) per mm. Looking at the resolved lines in Fig. <ref>, and the MTF analysis shown in Fig. <ref>, we concluded that the spatial resolution of the device is better than 2 mm and the MTF-10% = 0.3 line pairs / mm, comparable to those of other existing devices (e.g. 0.35 in <cit.> from a tomographic image). For more details the reader is referred to <cit.>. §.§ Proton Tomography As far as the 3D image reconstruction is concerned, we are currently implementing different algorithms from those described in the very detailed review of <cit.>. Furthermore, we are concerned with different approaches to correct for the multiple Coulomb scattering effect in the phantom. In that respect, a solution based on the use of neural networks has been recently published for the 2D radiographs in <cit.> and we are considering a similar approach. However, for the purpose of this work, with emphasis in the validation of the device as proton-CT scanner, we will only present images obtained with a simple filtered back-projection algorithm, using a ramp filter, that assumes straight paths for the protons inside the phantom. With the aforementioned approach, we have performed two different measurements, one to work on the different reconstruction algorithms and estimate the spatial resolution of the system, and another one to check its capability to resolve the proton RSP values of different materials. For these measurements we designed two different phantoms based on PMMA cylinders. 
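The filtered back-projection step described above is conceptually simple, and a minimal sketch in Python is given below. This is our illustration of the procedure (straight-line trajectories, ramp filter), not the code actually used for the analysis: the list-mode container events_per_angle, the 1-mm binning and the use of scikit-image's iradon are our assumptions.

```python
import numpy as np
from skimage.transform import iradon

# 20 projections in steps of 9 degrees, as in measurements B) and C) below
angles = np.linspace(0.0, 180.0, num=20, endpoint=False)
n_bins = 48  # 1-mm bins over the 48 mm field of view (our choice)

# events_per_angle (hypothetical): for each projection, the mid-plane
# transverse coordinate x_mm and the energy eloss_MeV lost in the phantom
sinogram = np.zeros((n_bins, angles.size))
for j, ev in enumerate(events_per_angle):
    wsum, _ = np.histogram(ev.x_mm, bins=n_bins, range=(-24, 24),
                           weights=ev.eloss_MeV)
    counts, _ = np.histogram(ev.x_mm, bins=n_bins, range=(-24, 24))
    sinogram[:, j] = wsum / np.maximum(counts, 1)  # mean energy loss per proton

# straight-path FBP with a ramp filter; no multiple-scattering correction
reco = iradon(sinogram, theta=angles, filter_name='ramp', circle=True)
```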
The first one was a Derenzo-like pattern with holes of 7, 5 and 3 mm diameter and with separations of the same length. A picture of this phantom and a schematic top view are shown at the leftmost panels of Fig. <ref>. In order to take several projections at different angular positions, the phantom was placed on a rotatory platform connected to a step motor. The measurements were carried out at a proton energy of 110 MeV. The A, B, C and D panels of Fig. <ref> are the filtered back-projected images obtained for the four different sets of measurements that were carried out: A) 10 projections of 20 minutes each, in steps of 18^∘; B) 20 projections of 5.5 minutes each, in steps of 9^∘; C) 20 projections of 20 minutes each, in steps of 9^∘; D) 100 projections of 5.5 minutes each, in steps of 1.8^∘. The total number of projections of each measurement shown in the figure always cover half a turn, i.e., 180^∘. During these measurements the proton current was stable at around 1 nA and, at this intensity, we counted ≈700 triple coincidences per second (front DSSD and rear DSSD and Calorimeter). In these conditions, the projections of 5.5 minutes recorded ≈2.3× 10^5 events, whereas the projections of 20 minutes recorded ≈8.4× 10^5. The difference in statistics per projection, as well as the different number of projections affect considerably the image quality. Looking at the four images of Fig. <ref> we can clearly appreciate, firstly, that the image with the lowest number of projections, panel A), does not reproduce fairly the pattern, since one of the cylinders of 7 mm has not the shape of a cylinder and two of the cylinders of 3 mm are blurred and practically absent. Secondly, panel B) shows the image with low statistics per projection (5.5 min) but 20 projections in total, and it already reproduces fairly well the pattern, since all cylinders are seen with the right shape and position. Going from A) to B) shows that the effect of lowering the statistics per projection is well compensated by taking a higher number of projections. The third panel C) keeps the same number of projections than B) but increasing the statistics per projection and the improvement is obvious. Finally, we took a longer measurement of 100 projections of lower statistics that is shown in D). In this case the result is more uniform, but we do not see a better resolution than in the previous image, indicating that, with a proper measurement of a uniform cylinder for normalization, 20 projections covering 180^∘ is an acceptable sampling for our purposes. A far deeper study of the effects on the quality of the images due to different sampling rates, statistics, addition of subsets or reconstruction algorithms, will be published soon <cit.>. The limitation in statistics/time in this study was due to the high dead time of the data acquisition system. However, recently we have carried out a new series of measurements with the same system but an improved electronic setup and digitization configuration, being able to take similar images with less than 10% of dead time at counting rates of 45 kHz. This compares well to other similar devices in the field (see Table 1 of Ref. <cit.> for a complete list). Beyond the capabilities to produce images with a high resolving power, our main goal in this work is to produce reliable RSP maps. In this context, the energy resolution of the residual-energy detector is crucial, since the energy deposited by the protons in the traversed volumes depends completely on the RSP of the material. 
This is why our proton-CT scanner, even being made of detectors that were originally designed for other use, is very promising in terms of RSP mapping, since the residual-energy detector is made of high-resolution scintillators. To test the RSP mapping capabilities of our setup we designed a special phantom, a PMMA cylinder of 60-mm diameter with two inserts of 9-mm diameter each that can be filled with different liquids, gels or powders. A picture of such phantom is shown at the leftmost panel of Fig. <ref>. We performed proton scans at 110 MeV of the phantom with the inserts filled with ethanol and water. We took 10 projections of 20 minutes each, in steps of 18^∘, covering 180^∘ in total. As with the previous scans of the Derenzo-like phantom, we have used a simple filtered back-projection with the ramp filter to reconstruct the images. The rightmost panel of Fig. <ref> shows the four regions of interest that have been defined to study each material present in our phantom, one region of water, one of ethanol and two regions covering the PMMA matrix. The reconstructed image was consequently normalised to water in order to estimate the RSP of PMMA and ethanol. The resulting values are shown in Table 1, where they are compared with the experimental values reported in Ref. <cit.>, that were measured using proton beams of 149 MeV. The values and uncertainties of the RSP of PMMA and ethanol have been obtained, after the normalisation of the image with respect to the region of water (R4 in Fig. <ref>), as the mean and the standard deviation of the RSP values obtained inside the respective regions indicated in Fig. <ref> as R1 and R2 for PMMA and R3 for ethanol. The last column of Table 1 shows the relative difference between the present RSP values and those taken from <cit.> as reference values, being in both cases of the order of 1%. Our results and those of Ref. <cit.> are in agreement within the uncertainties. The resulting proton RSP map from our test beam is satisfactory for a first experiment. However, the relative differences are not negligible, definitely a bit worse than those reported e.g. in the recent work of Dedes et al. <cit.>, and the relative error on our values are 8% for Ethanol and 4% for PMMA, far worse than those of <cit.>. We remind here the reader that this was the first test of this proton-CT scanner that is made of detectors originally designed for different applications. This RSP map was obtained only with 10 projections and without any uniformity correction. Taking the optimal 20 projections and correcting the data with a dedicated measurement of a uniform PMMA phantom will decrease the uncertainties and improve both the spatial and RSP resolution and accuracy. § DISCUSSION The results shown in the previous section validate our setup as a prototype of proton-CT scanner. The spatial resolution of our setup has room for improvement, for instance there are DSSDs in the market with much higher granularity, and position sensitive photomultipliers for the scintillator, but for a proof of concept we have demonstrated that our scanner can resolve 2 mm holes in radiography and 3 mm in tomography images. Unfortunately, we did not expect such a good resolution in tomography mode and this is why we did not build a Derenzo-like cylinder with smaller holes to really find the limit. Therefore, we can say that the spatial resolution is better than 3 mm but, at this stage, we cannot state how much better. 
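Returning briefly to the RSP analysis of the previous section: in code, the ROI-based extraction reduces to a few lines. The sketch below is ours, with hypothetical mask definitions; reco stands for the reconstructed slice of the insert phantom (for instance as produced by the back-projection sketch above). It normalises the image to the water region and reports the mean and standard deviation per region, as done for Table 1.

```python
import numpy as np

def roi_stats(image, mask):
    """Mean and standard deviation of the image values inside a boolean mask."""
    vals = image[mask]
    return vals.mean(), vals.std()

# masks (hypothetical): boolean arrays selecting R1, R2 (PMMA), R3 (ethanol)
# and R4 (water) in the reconstructed slice
water_mean, _ = roi_stats(reco, masks['R4'])
rsp_map = reco / water_mean  # normalisation: RSP(water) = 1 by construction

for name in ('R1', 'R2', 'R3', 'R4'):
    mean, std = roi_stats(rsp_map, masks[name])
    print(f'{name}: RSP = {mean:.3f} +/- {std:.3f}')
```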
The excellent energy resolution of the scintillator crystals used for the residual-energy detector, namely the LaBr_3-LaCl_3 phoswich detectors, allows for a fairly good resolving power in RSP in the tomographic images, and it was shown how the RSP of ethanol and PMMA materials can be reproduced accurately, although there is room for improvement in terms of precision. Additional tests will be performed to evaluate more deeply the potential of our first prototype in tomography and to reduce the relative uncertainties in our reconstructed RSP values. The main concern during the first set of measurements presented here was the dead time of the acquisition system, which only allowed for measurements at low counting rates (below 10 kHz), meaning very long scanning times. This would be a showstopper for the future of our device as a proton-CT scanner; however, we have recently optimised our electronics and data acquisition system and carried out a new set of measurements with different phantoms. With some improvements at the digitisation level, we have been able to take images with less than 10% of dead time at counting rates of 45 kHz, far closer to clinical levels. This translates into a much faster system, capable of taking the images presented here in a few minutes rather than hours, and compares well to other similar devices in the field. This work has been mainly supported by the PRONTO-CM B2017/BMD-3888 project (Comunidad de Madrid, Spain), which has sponsored J.A. Briz and A.N. Nerio. The experiments have been carried out with the support of the European Union Horizon 2020 research and innovation programme under grant agreement no. 654002 (ENSAR2) and grant agreement No [730983] (INSPIRE). This publication is also part of the R&D grants PID2019-104714GB-C21, PID2019-104390GB-I00 and PDC2022-133382-I00, funded by MCIN/AEI/10.13039/501100011033 (Spanish Ministry of Science) and grant CIPROM/2021/064 from Generalitat Valenciana. The authors wish to express their gratitude to the CCB crew for their unconditional help during the data taking.
http://arxiv.org/abs/2307.07562v1
20230711070553
Iterated Elimination of Weakly Dominated Strategies in Well-Founded Games
[ "Krzysztof R. Apt", "Sunil Simon" ]
cs.GT
[ "cs.GT" ]
Recently, in <cit.>, we studied well-founded games, a natural extension of finite extensive games with perfect information in which all plays are finite. We extend here, to this class of games, two results concerned with iterated elimination of weakly dominated strategies, originally established for finite extensive games. The first one states that every finite extensive game with perfect information and injective payoff functions can be reduced by a specific iterated elimination of weakly dominated strategies to a trivial game containing the unique subgame perfect equilibrium. Our extension of this result to well-founded games admits transfinite iterated elimination of strategies. It applies to an infinite version of the centipede game. It also generalizes the original result to a class of finite games that may have several subgame perfect equilibria. The second one states that finite zero-sum games with n outcomes can be solved by the maximal iterated elimination of weakly dominated strategies in n-1 steps. We generalize this result to a natural class of well-founded strictly competitive games. § INTRODUCTION This paper is concerned with the iterated elimination of weakly dominated strategies (IEWDS) in the context of a natural class of infinite extensive games with perfect information. While simple examples show that the deletion of weakly dominated strategies may result in the removal of a unique Nash equilibrium, IEWDS has some merit if it results in solving a game. It is for instance used to show that the so-called “beauty contest” game has exactly one Nash equilibrium (see, e.g., <cit.>). Other games can be solved this way, see, e.g., <cit.>. This procedure was also studied in the realm of finite extensive games with perfect information. In <cit.> the correspondence between the outcomes given by the iterated elimination of weakly dominated strategies and backward induction was investigated in the context of binary voting agendas with sequential voting. More recently, this procedure was studied in <cit.> in the context of supermodular games. For arbitrary games two important results were established. The first one states, see <cit.>, that in such games with injective payoff functions (such games are sometimes called generic) a specific iterated elimination of weakly dominated strategies (that mimics the backward induction) yields a trivial game which contains the unique subgame perfect equilibrium. It was noticed in <cit.> that this result holds for a slightly more general class of games without relevant ties.[All mentioned concepts are explained in Sections <ref>, <ref>, and <ref>. We did not find any precise proofs in the literature. The proof is briefly sketched in <cit.> and summarized in <cit.> as follows: “if backward induction deletes action a at node x, delete all the strategies reaching x and choosing a”. We provided in <cit.> a detailed proof of the stronger result of <cit.> in which we clarified how the backward induction algorithm needs to be modified to achieve the desired outcome.]
The second result, due to <cit.>, is concerned with finite extensive zero-sum games. It states that such games can be reduced to a trivial game by the `maximal' iterated elimination of weakly dominated strategies in n-1 steps, where n is the number of outcomes.[An alternative proof given in <cit.> shows that the result holds for the larger class of strictly competitive games. In <cit.> we clarified that the original proof also holds for this class of games.] In <cit.> we studied a natural extension of finite extensive games with perfect information in which one assumes that all plays are finite. We called these games well-founded games.[In the economic literature such games are sometimes called `games with finite horizon'.] The subject of this paper is to extend the above two results to well-founded games. In both cases some non-trivial difficulties arise. Consider the extensive game G and the corresponding strategic game Γ(G) given in Figure <ref>. G has three subgame perfect equilibria which are all payoff equivalent: {(𝐴𝐶,R), (𝐵𝐶,L), (𝐵𝐶,R)}. We can observe that in Γ(G) no sequence of iterated elimination of weakly dominated strategies results in a trivial game that contains all the subgame perfect equilibria of G. To see this, first note that the strategies L and R of player 2 are never weakly dominated, irrespective of the elimination done with respect to the strategies of player 1. Also, note that the strategy 𝐵𝐷 of player 1 is strictly dominated by 𝐵𝐶 in Γ(G). Thus the only possibility of reducing Γ(G) to a trivial game is to eliminate all strategies of player 1 except 𝐵𝐶. But this results in the elimination of (𝐴𝐶, R), which is a subgame perfect equilibrium in G. This might suggest that one should limit oneself to extensive games with a unique subgame perfect equilibrium. Unfortunately, this restriction does not work either, as shown in Example <ref>. A further complication arises when the game has no subgame perfect equilibrium, as shown in Example <ref>. Consider a `trimmed version' of the ultimatum game from <cit.> given in Figure <ref>, in which for each x ∈ [0,100] the root has a direct descendant x. This game has a unique subgame perfect equilibrium, namely (100, L). Consider an iterated elimination of weakly dominated strategies. For each strategy of player 1 the strategies L and R of player 2 yield the same payoff. So these two strategies are never eliminated. Further, strategy 100 of player 1 is never eliminated either, since for any strategy x < 100 we have p_1(x, L) = x < 100 = p_1(100, L) and p_1(x, R) = x > 0 = p_1(100, R). So the joint strategies (100, L) and (100, R) are never eliminated and they are not payoff equivalent. (In fact, each iterated elimination of weakly dominated strategies yields the game with the sets of strategies {100} and {L, R}.) Consider the well-founded game G given in Figure <ref>. Clearly G has no subgame perfect equilibrium. Further, strategies A and B of player 1 yield the same outcome for him, so they cannot be eliminated by any iterated elimination of weakly dominated strategies. Thus any result of such an elimination contains at least two outcomes, (0,0) and (0,1). So G cannot be reduced to a trivial game. To address these issues, we introduce the concept of an SPE-invariant well-founded game. These are games in which subgame perfect equilibria exist and moreover in each subgame such equilibria are payoff equivalent. Then we show that the first result can be extended to such games.
In view of the above examples, this appears to be the strongest possible generalization of the original result. In particular, it applies to an infinite version of the well-known centipede game of <cit.>. This result calls for a careful extension of the iterated elimination of weakly dominated strategies to infinite games: its stages have to be indexed by ordinals and one has to take into account that the outcome can be the empty game. When limited to finite games, our theorem extends the original result. In particular it applies to the class of extensive games that satisfy the transference of decisionmaker indifference (TDI) condition due to <cit.>, a class that includes strictly competitive games. We also show that the well-founded games with finitely many outcomes that satisfy the TDI condition are SPE-invariant. Also when extending the second result, about strictly competitive games, to well-founded games one has to be careful. The original proof crucially relies on the fact that finite extensive zero-sum games have a value. Fortunately, as we showed in <cit.>, well-founded games with finitely many outcomes have a subgame perfect equilibrium, so a fortiori a Nash equilibrium, which suffices to justify the relevant argument (Lemma <ref> in Section <ref>). By carefully checking the crucial steps of the original proof we extend the original result to a class of well-founded strictly competitive games that includes almost constant games, in which for all but finitely many leaves the outcome is the same. It remains an open problem whether this result holds for all strictly competitive games with finitely many outcomes. IEWDS is one of the early approaches applied to analyze strategies in extensive games. It does not take into account epistemic reasoning of players in the presence of assumptions such as common knowledge of rationality. The vast literature on this subject, starting with <cit.> and <cit.>, led to the identification of several more informative ways of analyzing finite extensive games with imperfect information. We just mention here two representative references. In <cit.> Pearce's notion of extensive form rationalizability (EFR) was studied and it was shown that for extensive games without relevant ties it coincides with IEWDS. A more general notion of common belief in future rationality was studied in <cit.>, which led to the identification of a new iterative elimination procedure called backward dominance. In our paper IEWDS is defined as a transfinite elimination procedure. A number of papers, starting with <cit.>, analyzed when such a transfinite elimination of strategies cannot be reduced to an iteration over ω steps. In our framework this is a simple consequence of the fact that the ranks of the admitted game trees can be arbitrary ordinals. In particular, the infinite version of the centipede game considered in Example <ref> requires more than ω elimination rounds. § PRELIMINARIES §.§ Strategic games A strategic game H = (H_1, …, H_n, p_1, …, p_n) consists of a set of players {1, …, n}, where n ≥ 1, and for each player i a set H_i of strategies along with a payoff function p_i : H_1 ×⋯× H_n → ℝ. We call each element of H_1 ×⋯× H_n a joint strategy of players 1, …, n, denote the ith element of s ∈ H_1 ×⋯× H_n by s_i, and abbreviate the sequence (s_j)_j ≠ i to s_-i. We write (s'_i, s_-i) to denote the joint strategy in which player i's strategy is s'_i and each other player j's strategy is s_j. Occasionally we write (s_i, s_-i) instead of s.
Finally, we abbreviate the Cartesian product ×_j ≠ i H_j to H_-i. Given a joint strategy s, we denote the sequence (p_1(s), …, p_n(s)) by p(s) and call it an outcome of the game. We say that H has k outcomes if |{p(s) | s ∈ H_1 ×⋯× H_n}| = k and call a game trivial if it has one outcome. If one of the sets H_i is empty, we call the game empty and non-empty otherwise. Unless explicitly stated, all used strategic games are assumed to be non-empty. We say that two joint strategies s and t are payoff equivalent if p(s) = p(t). We call a joint strategy s a Nash equilibrium if ∀ i ∈ {1, …, n} ∀ s'_i ∈ H_i : p_i(s_i, s_-i) ≥ p_i(s'_i, s_-i). When the number of players and their payoff functions are known we can identify the game H with the set of strategies in it. By a subgame of a strategic game H we mean a game obtained from H by removing some strategies. Given a set 𝒥 of subgames of a strategic game H we define ⋂𝒥 as the subgame of H in which for each player i his set of strategies is ⋂_J ∈𝒥 J_i. Also, given two subgames H' and H” of a strategic game H we write H' ⊆ H” if for each player i, H'_i ⊆ H”_i. Consider two strategies s_i and s'_i of player i in a strategic game H. We say that s_i weakly dominates s'_i (or equivalently, that s'_i is weakly dominated by s_i) in H if ∀ s_-i ∈ H_-i : p_i(s_i, s_-i) ≥ p_i(s'_i, s_-i) and ∃ s_-i ∈ H_-i : p_i(s_i, s_-i) > p_i(s'_i, s_-i). In what follows, given a strategic game we consider, possibly transfinite, sequences of sets of strategies. They are written as (ρ_α, α < γ), where α ranges over all ordinals smaller than some ordinal γ. Given two such sequences ρ := (ρ_α, α < γ) and ρ' := (ρ'_α', α' < γ'), we denote by (ρ, ρ') their concatenation (which is indexed by γ + γ'), by ρ^β the subsequence (ρ_α, α < β) of ρ, and for α < β by ρ^β - α the subsequence such that (ρ^α, ρ^β - α) = ρ^β. Further, we write H →^ρ H' to denote the fact that the game H' is the outcome of the iterated elimination from the non-empty game H of the sets of strategies that form ρ. In each step all eliminated strategies are weakly dominated in the current game. As a result H' may be empty. The relation →^ρ is defined as follows. If ρ = (ρ_0), that is, if γ = 1, then H →^ρ H' holds if each strategy in the set ρ_0 is weakly dominated in H and H' is the outcome of removing from H all strategies from ρ_0. If γ is a successor ordinal >1, say γ = δ + 1, and H →^ρ' H', H' →^(ρ_δ) H”, where H' is non-empty and ρ' := (ρ_α, α < δ), then H →^ρ H”. Finally, if γ is a limit ordinal and for all β < γ, H →^ρ^β H^β, then H →^ρ ⋂_β < γ H^β. In general, the strategic game H from which we eliminate strategies will be a subgame of a game Γ(G), where G is an extensive game (to be defined shortly). It will be then convenient to allow in ρ strategies from Γ(G). In the definition of H →^ρ H' we then disregard the strategies from ρ that are not from H. In the proofs below we rely on the following observations about the →^ρ relation, the proofs of which we omit. * Suppose H →^ρ H' and H' →^ρ' H”, where H' is non-empty. Then H →^(ρ, ρ') H”. * Suppose H →^ρ H', where ρ = (ρ_α, α < γ) and γ is a limit ordinal. Suppose further that for a sequence of ordinals (α_δ)_δ < ϵ converging to γ we have H →^ρ^α_δ H^α_δ for all δ < ϵ. Then H' = ⋂_δ < ϵ H^α_δ. §.§ Well-founded games We recall from <cit.> the definition of a well-founded game. A tree is an acyclic directed connected graph, written as (V,E), where V is a non-empty set of nodes and E is a possibly empty set of edges.
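To make the elimination step behind the relation H →^ρ H' concrete for finite games, here is a small Python sketch (our own illustration, not code from the paper) that removes, in one simultaneous round, every weakly dominated strategy of a two-player strategic game:

```python
def eliminate_round(rows, cols, p1, p2):
    """One simultaneous round: remove every weakly dominated strategy.
    p1(r, c) and p2(r, c) are the payoffs of players 1 and 2 at the profile (r, c)."""

    def dominated(t, own, opp, pay):
        # t is weakly dominated if some s does at least as well against every
        # opponent strategy and strictly better against at least one of them
        return any(
            all(pay(s, o) >= pay(t, o) for o in opp) and
            any(pay(s, o) > pay(t, o) for o in opp)
            for s in own if s != t)

    new_rows = [r for r in rows if not dominated(r, rows, cols, p1)]
    new_cols = [c for c in cols if not dominated(c, cols, rows, lambda s, r: p2(r, s))]
    return new_rows, new_cols

# In the 2x2 game with p1(r, c) = p2(r, c) = (0 if (r, c) == ('B', 'B') else 1),
# strategy B of each player is weakly dominated by A; one round yields (['A'], ['A']).
```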
An extensive game with perfect information (T, turn, p_1, …, p_n) consists of a set of players {1, …, n}, where n ≥ 1, along with the following. A game tree, which is a tree T := (V,E) with a turn function turn : V ∖ Z → {1, …, n}, where Z is the set of leaves of T. For each player i, a payoff function p_i : Z → ℝ. The function turn determines at each non-leaf node which player should move. The edges of T represent possible moves in the considered game, while for a node v ∈ V ∖ Z the set of its children C(v) := {w | (v,w) ∈ E} represents the possible actions of player turn(v) at v. We say that an extensive game with perfect information is finite, infinite, or well-founded if, respectively, its game tree is finite, infinite, or well-founded. Recall that a tree is called well-founded if it has no infinite paths. From now on by an extensive game we mean a well-founded extensive game with perfect information. For a node u in T we denote the subtree of T rooted at u by T^u. In the proofs we shall often rely on the concept of a rank of a well-founded tree T, defined inductively as follows, where v is the root of T: rank(T) := 0 if T has one node, and rank(T) := sup{rank(T^u) + 1 | u ∈ C(v)} otherwise, where sup(X) denotes the least ordinal larger than all ordinals in the set X. For an extensive game G := (T, turn, p_1, …, p_n) let V_i := {v ∈ V ∖ Z | turn(v) = i}. So V_i is the set of nodes at which player i moves. A strategy for player i is a function s_i : V_i → V, such that (v, s_i(v)) ∈ E for all v ∈ V_i. We denote the set of strategies of player i by S_i. Let S = S_1 ×⋯× S_n. As in the case of the strategic games we use the `-i' notation when referring to sequences of strategies or sets of strategies. Each joint strategy s = (s_1, …, s_n) determines a rooted path play(s) := (v_1, …, v_m) in T defined inductively as follows: v_1 is the root of T and if v_k ∉ Z, then v_k+1 := s_i(v_k), where turn(v_k) = i. So when the game tree consists of just one node, v, we have play(s) = v. Informally, given a joint strategy s, we can view play(s) as the resulting play of the game. For each joint strategy s the rooted path play(s) is finite since the game tree is assumed to be well-founded. Denote by leaf(s) the last element of play(s). To simplify the notation we just write everywhere p_i(s) instead of p_i(leaf(s)). With each extensive game G := (T, turn, p_1, …, p_n) we associate a strategic game Γ(G) defined as follows. Γ(G) := (S_1, …, S_n, p_1, …, p_n), where each S_i is the set of strategies of player i in G. In the degenerate situation when the game tree consists of just one node, each strategy is the empty function, denoted by ∅, and there is only one joint strategy, namely the n-tuple (∅, …, ∅) of these functions. In that case we just stipulate that p_i(∅, …, ∅) = 0 for all players i. All notions introduced in the context of strategic games can now be reused in the context of an extensive game G simply by referring to the corresponding strategic form Γ(G). In particular, the notion of a Nash equilibrium is well-defined. The subgame of an extensive game G := (T, turn, p_1, …, p_n), rooted at the node w and denoted by G^w, is defined as follows. The set of players is {1, …, n}, the game tree is T^w. The turn and payoff functions are the restrictions of the corresponding functions of G to the nodes of T^w. We call G^w a direct subgame of G if w is a child of the root v. Note that some players may `drop out' in G^w, in the sense that at no node of T^w it is their turn to move. Still, to keep the notation simple, it is convenient to admit in G^w all original players in G.
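The basic notions just introduced — play(s), leaf(s) and the rank of a well-founded tree — admit a direct implementation for finite trees. The following Python sketch (our own illustration, with an ad hoc encoding of trees and strategies) computes them:

```python
def play(tree, turn, s, root):
    """Rooted path determined by the joint strategy s.
    tree[v] lists the children of v (empty at leaves); turn[v] is the player to move;
    s[i][v] is the child chosen by player i at node v."""
    path, v = [root], root
    while tree[v]:
        v = s[turn[v]][v]
        path.append(v)
    return path

def leaf(tree, turn, s, root):
    """Last element of play(s)."""
    return play(tree, turn, s, root)[-1]

def rank(tree, v):
    """Rank of a well-founded tree; for finite trees this is its height."""
    return 0 if not tree[v] else max(rank(tree, u) + 1 for u in tree[v])

# Root r where player 1 chooses between the leaves a and b:
tree, turn, s = {'r': ['a', 'b'], 'a': [], 'b': []}, {'r': 1}, {1: {'r': 'a'}}
assert play(tree, turn, s, 'r') == ['r', 'a'] and leaf(tree, turn, s, 'r') == 'a'
assert rank(tree, 'r') == 1
```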
Each strategy s_i of player i in G uniquely determines his strategy s^w_i in G^w. Given a joint strategy s = (s_1, …, s_n) of G we denote by s^w the joint strategy (s^w_1, …, s^w_n) in G^w. Further, we denote by S_i^w the set of strategies of player i in the subgame G^w and by S^w the set of joint strategies in this subgame. Finally, a joint strategy s of G is called a subgame perfect equilibrium in G if for each node w of T, the joint strategy s^w of G^w is a Nash equilibrium in the subgame G^w. We denote by SPE(G) the set of subgame perfect equilibria in G. Further, we say that a game is SPE-invariant if it has a subgame perfect equilibrium and in each subgame of it all subgame perfect equilibria are payoff equivalent. We shall often use the following result. Every extensive game with finitely many outcomes has a subgame perfect equilibrium. § PRELIMINARY LEMMAS In this section we present a sequence of lemmas needed to prove our first main result. In the proofs we often switch between a game and its direct subgames. Consider an extensive game G := (T, turn, p_1, …, p_n) with the root v and a child w of v. For each player j, to each of his strategies t_j in a direct subgame G^w there corresponds a natural set [t_j] of his strategies in the game G defined by [t_j] := {s_j | t_j = s^w_j and s_j(v) = w if j = turn(v)}. So for a player j, [t_j] is the set of his strategies in G the restriction of which to G^w is t_j, with the additional proviso that if j = turn(v), then each strategy in [t_j] selects w at the root v. We call [t_j] the lifting of t_j to the game G. The following lemma clarifies the relevance of lifting. Lemma. Consider a direct subgame G^w of G. Suppose that the strategy t_j is weakly dominated in G^w. Then each strategy in [t_j] is weakly dominated in G. Suppose that t_j is weakly dominated in G^w by some strategy u_j. Take a strategy v_j in [t_j]. We show that v_j is weakly dominated in G by the strategy w_j in [u_j] that coincides with v_j on all the nodes that do not belong to G^w. So w_j is obtained from v_j by replacing in it v^w_j, i.e., t_j, by u_j. Below s_-j denotes a sequence of strategies in G of the opponents of player j. Case 1. j = turn(v). By the choice of u_j, for all s_-j, p_j(t_j, s^w_-j) ≤ p_j(u_j, s^w_-j) and for some s_-j, p_j(t_j, s^w_-j) < p_j(u_j, s^w_-j). Further, by the definition of [·] we have v_j(v) = w, so for all s_-j we have p_j(v_j, s_-j) = p_j(t_j, s^w_-j) and p_j(u_j, s^w_-j) = p_j(w_j, s_-j), so the claim follows. Case 2. j ≠ turn(v). Let i = turn(v). Take some s_-j. If s_i(v) = w, then p_j(v_j, s_-j) = p_j(t_j, s^w_-j) and p_j(w_j, s_-j) = p_j(u_j, s^w_-j). Thus p_j(v_j, s_-j) ≤ p_j(w_j, s_-j) by the choice of u_j and w_j. Further, if s_i(v) ≠ w, then p_j(v_j, s_-j) = p_j(w_j, s_-j) by the choice of w_j. Choose an arbitrary s_-j such that s_i(v) = w and p_j(t_j, s^w_-j) < p_j(u_j, s^w_-j). By the choice of s_i we have p_j(v_j, s_-j) = p_j(t_j, s^w_-j) and p_j(w_j, s_-j) = p_j(u_j, s^w_-j), so p_j(v_j, s_-j) < p_j(w_j, s_-j). Thus the claim follows. We now extend the notation [·] to sets of strategies and sequences of sets of strategies. First, given a set of strategies A in a direct subgame G^w of G we define [A] := ⋃_s_j ∈ A [s_j]. Next, given a sequence ρ of sets of strategies of players, each set taken from a direct subgame of G, we denote by [ρ] the corresponding sequence of sets of strategies of players in G obtained by replacing each element A in ρ by [A].
Given a set A of strategies of players in a direct subgame G^w we define the corresponding set of strategies in the game G by putting ⟨ A ⟩ = {s_j | s^w_j ∈ A}. Thus for a set A of strategies in a direct subgame G^w, the set ⟨ A ⟩ differs from [A] in that we do include in the former set strategies s_j for which s_j(v) ≠ w. Given a set A of strategies of player j in the subgame G^w, we call ⟨ A ⟩ an extension of A to the game G. Further, given a subgame H of Γ(G^w), we define ⟨ H ⟩ as the subgame of Γ(G) in which for each player j we have ⟨ H ⟩_j = ⟨ H_j ⟩. In what follows we need a substantially strengthened version of Lemma <ref> that relies on the following concept. Given an extensive game G with a root v, we say that a non-empty subgame J of Γ(G) does not depend on a direct subgame G^w if for any strategy s_j from J any modification of it on the non-leaf nodes of G^w, or on v if turn(v) = j, is also in J. Note that in particular Γ(G) does not depend on any of its direct subgames and that for any non-empty subgame H of a direct subgame G^w of G the subgame ⟨ H ⟩ does not depend on any other direct subgame of G. Lemma. Consider a direct subgame G^w of G, subgames H and H' of Γ(G^w) and a set A of strategies in H. Suppose that H →^A H' and that the subgame J of Γ(G) does not depend on G^w. Then J ∩ ⟨ H ⟩ →^[A] J ∩ ⟨ H' ⟩. Take a strategy v_j in [A]. For some strategy t_j from A that is weakly dominated in H by some strategy u_j we have v_j ∈ [t_j] ∩ J_j. Select a strategy w_j in [u_j] that coincides with v_j on the nodes that do not belong to G^w. So w_j is a modification of v_j on the non-leaf nodes of G^w and consequently, by the assumption about J, it is in J_j. Further, w_j is in ⟨ H ⟩, since u_j is from H. We claim that v_j is weakly dominated in J ∩ ⟨ H ⟩ by w_j. Below s_-j denotes a sequence of strategies of the opponents of player j in the original game G. Case 1. j = turn(v). By the choice of u_j, for all s_-j such that s^w_-j ∈ H_-j, p_j(t_j, s^w_-j) ≤ p_j(u_j, s^w_-j) and for some s_-j such that s^w_-j ∈ H_-j, p_j(t_j, s^w_-j) < p_j(u_j, s^w_-j). By the definition of `does not depend on' and the fact that j = turn(v) we can also assume that the latter s_-j is from J_-j by stipulating that s_-j = t_-j for an arbitrary joint strategy t from J. Further, by the definition of [·] we have v_j(v) = w, so for all s_-j such that s^w_-j ∈ H_-j we have p_j(v_j, s_-j) = p_j(t_j, s^w_-j) and p_j(u_j, s^w_-j) = p_j(w_j, s_-j). Hence for all s_-j, p_j(v_j, s_-j) ≤ p_j(w_j, s_-j) and for some s_-j such that s_-j ∈ J_-j and s^w_-j ∈ H_-j (i.e., for some s_-j ∈ (J ∩ ⟨ H ⟩)_-j), p_j(v_j, s_-j) < p_j(w_j, s_-j). This establishes the claim. Case 2. j ≠ turn(v). Let i = turn(v). Take some s_-j. If s_i(v) = w, then p_j(v_j, s_-j) = p_j(t_j, s^w_-j) and p_j(w_j, s_-j) = p_j(u_j, s^w_-j). Thus p_j(v_j, s_-j) ≤ p_j(w_j, s_-j) by the choice of u_j and w_j. Further, if s_i(v) ≠ w, then p_j(v_j, s_-j) = p_j(w_j, s_-j) by the choice of w_j. So for all s_-j we have p_j(v_j, s_-j) ≤ p_j(w_j, s_-j). Choose an arbitrary s_-j such that s_i(v) = w, s^w_-j ∈ H_-j, and p_j(t_j, s^w_-j) < p_j(u_j, s^w_-j). Additionally, we can claim that s_-j ∈ J_-j by stipulating that s_-j = t_-j for an arbitrary joint strategy t from J. Then s_-j ∈ (J ∩ ⟨ H ⟩)_-j. By the choice of s_i we have p_j(v_j, s_-j) = p_j(t_j, s^w_-j) and p_j(w_j, s_-j) = p_j(u_j, s^w_-j), so p_j(v_j, s_-j) < p_j(w_j, s_-j). This establishes the claim for this case. We continue with some lemmas concerned with the relation →^ρ. Consider a direct subgame G^w of G.
Suppose that for some sequence ρ of sets of strategies of players in G^w and a subgame H of Γ(G^w) we have Γ(G^w) →^ρ H. Suppose further that the subgame J of Γ(G) does not depend on G^w. Then J →^[ρ] J ∩ ⟨ H ⟩. We proceed by transfinite induction on the length γ of ρ = (ρ_α, α < γ). Case 1. γ = 1. By Lemma <ref>, J ∩ ⟨ Γ(G^w) ⟩ →^[ρ_0] J ∩ ⟨ H ⟩, so the claim holds since ⟨ Γ(G^w) ⟩ = Γ(G) and J ∩ Γ(G) = J. Case 2. γ is a successor ordinal >1. Suppose γ = δ + 1. Then ρ = (ρ', ρ_δ), where ρ' := (ρ_α, α < δ). By definition for some H' we have Γ(G^w) →^ρ' H' and H' →^ρ_δ H. By the induction hypothesis J →^[ρ'] J ∩ ⟨ H' ⟩ and by Lemma <ref> J ∩ ⟨ H' ⟩ →^[ρ_δ] J ∩ ⟨ H ⟩, so the claim follows by Note <ref>(i), since [ρ] = ([ρ'], [ρ_δ]). Case 3. γ is a limit ordinal. By definition for some games H^β, where β < γ, we have Γ(G^w) →^ρ^β H^β and H = ⋂_β < γ H^β, where—recall—ρ^β = (ρ_α, α < β). By the induction hypothesis for all β < γ, we have J →^[ρ^β] J ∩ ⟨ H^β ⟩. So by definition J →^[ρ] J ∩ ⟨ H ⟩, since J ∩ ⟨ H ⟩ = ⋂_β < γ (J ∩ ⟨ H^β ⟩) as ⟨ H ⟩ = ⋂_β < γ ⟨ H^β ⟩. Lemma. Consider an extensive game G with the root v. Suppose that (w_α, α < γ) is a sequence of children of v and that for all α < γ, ρ_α is a sequence of sets of strategies in the direct subgame G^w_α. Suppose further that for each α < γ, Γ(G^w_α) →^ρ_α H^w_α, where each game H^w_α is non-empty. Let ρ be the concatenation of the sequences (ρ_α, α < γ). Then Γ(G) →^[ρ] ⋂_α < γ ⟨ H^w_α ⟩. By assumption each H^w_α is a non-empty subgame of Γ(G^w_α), so each ⟨ H^w_α ⟩ is a non-empty subgame of Γ(G), and consequently ⋂_α < γ ⟨ H^w_α ⟩ is also a non-empty subgame of Γ(G). Informally, suppose that for each direct subgame G^w_α of G we can reduce the corresponding strategic game Γ(G^w_α) to a non-empty game H^w_α. Then the strategic game Γ(G) can be reduced to a strategic game the strategies of which are obtained by intersecting for each player the extensions of his strategy sets in all games H^w_α. To establish this lemma we do not assume that (w_α, α < γ) contains all children of v, which makes it possible to proceed by induction. We proceed by transfinite induction on the length γ of ρ. Case 1. γ = 1. Follows from Lemma <ref> with J = Γ(G). Case 2. γ is a successor ordinal >1. Suppose γ = δ + 1. By the induction hypothesis Γ(G) →^[ρ^δ] ⋂_α < δ ⟨ H^w_α ⟩, where ρ^δ is the concatenation of the sequences (ρ_α, α < δ). We also have by assumption Γ(G^w_δ) →^ρ_δ H^w_δ. Note that the subgame ⋂_α < δ ⟨ H^w_α ⟩ of Γ(G) does not depend on G^w_δ, so by Lemma <ref> we have that ⋂_α < δ ⟨ H^w_α ⟩ →^[ρ_δ] ⋂_α < δ ⟨ H^w_α ⟩ ∩ ⟨ H^w_δ ⟩. By Note <ref>(i) the claim follows. Case 3. γ is a limit ordinal. By the induction hypothesis for all β < γ, Γ(G) →^[ρ^β] ⋂_α < β ⟨ H^w_α ⟩, where ρ^β is the concatenation of the sequences (ρ_α, α < β). Then by Note <ref>(ii) and by definition Γ(G) →^[ρ] ⋂_β < γ ⋂_α < β ⟨ H^w_α ⟩. But ⋂_β < γ ⋂_α < β ⟨ H^w_α ⟩ = ⋂_α < γ ⟨ H^w_α ⟩, so the claim follows. The next lemma shows that when each subgame H^w_α of Γ(G^w_α) is trivial, under some natural assumptions the subgame ⋂_α < γ ⟨ H^w_α ⟩ of Γ(G) can then be reduced in one step to a trivial game. Lemma. Consider an extensive game G with the root v. Suppose that * G has a subgame perfect equilibrium and all subgame perfect equilibria of G are payoff equivalent, * for all w ∈ C(v), SPE(G^w) ⊆ H^w, where H^w is a trivial subgame of Γ(G^w). Then for some set of strategies A we have ⋂_w ∈ C(v) ⟨ H^w ⟩ →^A H', where H' is a trivial game and SPE(G) ⊆ H'. Let H := ⋂_w ∈ C(v) ⟨ H^w ⟩. Note that H is a non-empty subgame of Γ(G).
Denote the unique outcome in the game H^w by val^w, i.e., for all joint strategies s in H^w we have p(s) = val^w. Then the possible outcomes in H are val^w, where w ∈ C(v). More precisely, suppose that i = turn(v). If s is a joint strategy in H, then p(s) = val^w, where s_i(v) = w. Take two strategies t'_i and t”_i of player i in H with t'_i(v) = w_1 and t”_i(v) = w_2 such that val_i^w_1 < val_i^w_2. This means that for any joint strategy s_-i from H_-i we have p_i(t'_i, s_-i) < p_i(t”_i, s_-i), so t'_i is weakly dominated in H by t”_i (actually, even strictly dominated). By assumption (a) G has a subgame perfect equilibrium, so by Corollary 7 of <cit.> max{val_i^w | w ∈ C(v)} exists. Denote it by val_i and let W := {w ∈ C(v) | val_i^w = val_i}. So W is the set of children w of v for which the corresponding value val_i^w is maximal. Finally, let A be the set of strategies t_i of player i in H such that t_i(v) ∉ W. By the above observation about t'_i and t”_i all strategies in A are weakly dominated in H. By removing them from H we get a game H' with the unique payoff val_i for player i. To prove that H' is trivial consider two joint strategies s and t in H'. Suppose that s_i(v) = w_1 and t_i(v) = w_2. Then w_1, w_2 ∈ W, s^w_1 ∈ H^w_1, t^w_2 ∈ H^w_2, p(s) = p(s^w_1), and p(t) = p(t^w_2). By Theorem 8 of <cit.> subgame perfect equilibria u' and u” in G exist such that u'_i(v) = w_1, (u')^w_1 is a subgame perfect equilibrium in G^w_1, u”_i(v) = w_2, and (u”)^w_2 is a subgame perfect equilibrium in G^w_2. Then p(u') = p((u')^w_1) and p(u”) = p((u”)^w_2), so p((u')^w_1) = p((u”)^w_2) by assumption (a). Further, by assumption (b) both (u')^w_1 ∈ H^w_1 and (u”)^w_2 ∈ H^w_2, so since both subgames are trivial, p(s^w_1) = p((u')^w_1) and p(t^w_2) = p((u”)^w_2). Consequently p(s) = p(t), which proves that H' is trivial. To prove that SPE(G) ⊆ H' consider a subgame perfect equilibrium s in G. Take some u ∈ C(v). By assumption (b), s^u ∈ H^u, so p_i(s^u) = val_i^u and, by the definition of ⟨·⟩, s ∈ H. Suppose that s_i(v) = w. By Corollary 7 of <cit.>, val_i^w = val_i, i.e., s_i(v) ∈ W. This means that s_i ∉ A and thus s ∈ H'. § SPE-INVARIANT GAMES We can now prove the desired result. Consider an SPE-invariant extensive game G. There exists a sequence ρ of strategies of players in G and a subgame H of Γ(G) such that Γ(G) →^ρ H, H is trivial and SPE(G) ⊆ H. We proceed by induction on the rank of the game tree of G. For game trees of rank 0 all strategies are empty functions, so Γ(G) is a trivial game with the unique joint strategy (∅, …, ∅) and SPE(G) = {(∅, …, ∅)}, so the claim holds. Suppose that the rank of the game tree of G is α > 0 and assume that the claim holds for all extensive games with game trees of rank smaller than α. Let v be the root of G. Each direct subgame of G is SPE-invariant, so by the induction hypothesis for all w ∈ C(v) there exists a sequence ρ^w of strategies of players in G^w and a subgame H^w of Γ(G^w) such that Γ(G^w) →^ρ^w H^w, H^w is trivial and SPE(G^w) ⊆ H^w. The claim now follows by Lemmas <ref> and <ref>. The following example illustrates the use of this theorem. An extensive game is called generic if each payoff function is injective. Recall that the centipede game, introduced in <cit.> (see also <cit.>), is a two-player extensive game played for an even number of periods. We define it inductively as follows. The game with 2 periods is depicted in Figure <ref>. Here and below the argument of each non-leaf node is the player whose turn it is to move, and the leaves are followed by the players' payoffs.
The moves are denoted by the letters C and S. The game with 2t+2 periods is obtained from the game with 2t periods by replacing the leaf C_2t by the tree depicted in Figure <ref>. By the result of <cit.>, each centipede game can be reduced by an iterated elimination of weakly dominated strategies to a trivial game which contains the unique subgame perfect equilibrium, with the outcome (1,0). We now show that the same holds for an infinite version of the centipede game G in which player 2 begins the game by selecting an even number 2t > 0. Subsequently, the centipede version with 2t periods is played. Note that G is SPE-invariant. Indeed, G has infinitely many subgame perfect equilibria (one for each first move of player 2), but each of them yields the outcome (1,0). Moreover, each subgame of G is either a centipede game with 2t periods for some t > 0, or a subgame of such a game. So each subgame of G is a finite generic game and thus has a unique subgame perfect equilibrium. By Theorem <ref> we can reduce G by an infinite iterated elimination of weakly dominated strategies to a trivial game which contains all its subgame perfect equilibria. Note that the strategy elimination sequence constructed in the proof of this theorem consists of more than ω steps. For finite extensive games, Theorem <ref> extends the original result reported in <cit.>. Namely, the authors prove the corresponding result for finite extensive games that are generic. In such games a unique subgame perfect equilibrium exists, while we only require that the game is SPE-invariant. To clarify the relevance of this relaxation let us mention two classes of well-founded extensive games that are SPE-invariant and that were studied for finite extensive games. Following <cit.> we say that an extensive game (T, turn, p_1, …, p_n) is without relevant ties if for all non-leaf nodes u in T the payoff function p_i, where turn(u) = i, is injective on the leaves of T^u. This is a more general property than being generic. The relevant property for finite extensive games is that a game without relevant ties has a unique subgame perfect equilibrium; see <cit.> for a straightforward proof. In the case of well-founded games a direct modification of this proof, which we omit, shows that every extensive game without relevant ties has at most one subgame perfect equilibrium. Further, if a game is without relevant ties, then so is every subgame of it, so we conclude that well-founded games without relevant ties are SPE-invariant. Next, following <cit.> we say that an extensive game (T, turn, p_1, …, p_n) satisfies the transference of decisionmaker indifference (TDI) condition if: ∀ i ∈ {1, …, n} ∀ r_i, t_i ∈ S_i ∀ s_-i ∈ S_-i : p_i(r_i, s_-i) = p_i(t_i, s_-i) implies p(r_i, s_-i) = p(t_i, s_-i), where S_i is the set of strategies of player i. Informally, this condition states that whenever for some player i two of his strategies r_i and t_i are indifferent w.r.t. some joint strategy s_-i of the other players, then this indifference extends to all players. Strategic games that satisfy the TDI condition are of interest because of the main result of <cit.>, which states that in finite games that satisfy this condition iterated elimination of weakly dominated strategies is order independent.[Alternative proofs of this result were given in <cit.> and <cit.>.] The authors also give examples of natural games that satisfy this condition. Also strictly competitive games studied in the next section satisfy this condition. The following result extends an implicit result of <cit.> to well-founded games. Consider an extensive game G.
Suppose that G has finitely many outcomes and G satisfies the TDI condition. Then G is SPE-invariant. We reduce the game G to a finite game H as follows. First, consider the set of all leaves of the game tree T of G that are the ends of the plays corresponding to a subgame perfect equilibrium. Next, for each outcome associated with a subgame perfect equilibrium retain in this set just one leaf with this outcome. By assumption the resulting set L is finite. Next, order the leaves arbitrarily. Following this ordering remove all leaves with an outcome already associated with an earlier leaf, but ensuring that the leaves from L are retained. Let M be the resulting set of leaves. Finally, remove all nodes of T from which no leaf in M can be reached. The resulting tree corresponds to a finite extensive game H in which all the outcomes possible in G are present. Further, all the leaves of H are also leaves of G, so H satisfies the TDI condition since G does. So by Theorem 12 of <cit.> (that is implicit in <cit.>) all subgame perfect equilibria of H are payoff equivalent. Further, by Theorem <ref> G has a subgame perfect equilibrium. Consider two subgame perfect equilibria s and t in G with the outcomes p(s) and p(t). By construction two subgame perfect equilibria s' and t' in H exist such that p(s) = p(s') and p(t) = p(t'). We conclude that all subgame perfect equilibria of G are payoff equivalent. To complete the proof it suffices to note that if an extensive game G satisfies the TDI condition, then so does every subgame of it. Indeed, consider a subgame G^w of G. Let i = turn(w) and take r^w_i, t^w_i ∈ S^w_i and s^w_-i ∈ S^w_-i. Extend these strategies to the strategies r_i, t_i and s_-i in the game G in such a way that w lies both on play(r_i, s_-i) and on play(t_i, s_-i). Then p(r^w_i, s^w_-i) = p(r_i, s_-i) and p(t^w_i, s^w_-i) = p(t_i, s_-i), so the claim follows. The claim of Theorem <ref> holds for extensive games with finitely many outcomes that satisfy the TDI condition. Conjecture. Every extensive game that satisfies the TDI condition is SPE-invariant. If the conjecture is true, Theorem <ref> holds for all extensive games that satisfy the TDI condition. An example of a game with infinitely many outcomes that satisfies the TDI condition is the infinite version of the centipede game from Example <ref>. § STRICTLY COMPETITIVE EXTENSIVE GAMES In some games, for instance the infinite version of the centipede game from Example <ref>, infinitely many rounds of elimination of weakly dominated strategies are needed to solve the game. In this section, we focus on maximal elimination of weakly dominated strategies and identify a subclass of extensive games for which we can provide a finite bound on the number of elimination steps required to solve the game. The outcome is our second main result, which is a generalization of the following result due to <cit.> to a class of well-founded games. Theorem. Every finite extensive zero-sum game with n outcomes can be reduced to a trivial game by the maximal iterated elimination of weakly dominated strategies in n-1 steps. We first present some auxiliary results. Their proofs follow our detailed exposition in <cit.> of the proofs in <cit.> generalized to strictly competitive games, now appropriately modified to infinite games. §.§ Preliminary results We denote by H^1 the subgame of H obtained by the elimination of all strategies that are weakly dominated in H, and put H^0 := H and H^k+1 := (H^k)^1, where k ≥ 1.
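For finite games the maximal elimination H ↦ H^1 and its iterates H^k can be computed directly. A minimal Python sketch follows (our own illustration; it assumes the helper eliminate_round, which removes all weakly dominated strategies of both players at once, as in the sketch given after the definition of weak dominance):

```python
def solve_by_iewds(rows, cols, p1, p2):
    """Iterate H^0, H^1, H^2, ... until no strategy is weakly dominated;
    returns the number k of rounds and the final strategy sets H^k."""
    k = 0
    while True:
        nr, nc = eliminate_round(rows, cols, p1, p2)   # H^{k+1} from H^k
        if (nr, nc) == (rows, cols):
            return k, rows, cols
        rows, cols, k = nr, nc, k + 1
```

For a finite zero-sum game with n outcomes, the theorem above asserts that k ≤ n-1 rounds suffice to reach a trivial game.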
Abbreviate the phrase `iterated elimination of weakly dominated strategies' to IEWDS. If for some k, H^k is a trivial game, we say that H can be solved by IEWDS. In infinite strategic games with finitely many outcomes it is possible that all strategies of a player are weakly dominated, as shown in Example <ref>. Then by definition H^1 is an empty game. We define a class of games, called WD-admissible games, in which this does not happen. Consider the following infinite zero-sum strategic game with two outcomes, in which each player has the strategies A, B, C, D, …:

        A     B     C     D    …
  A    0,0   0,0   0,0   0,0   …
  B    0,0   1,-1  0,0   0,0   …
  C    0,0   1,-1  1,-1  0,0   …
  D    0,0   1,-1  1,-1  1,-1  …
  …     …     …     …     …    …

This game has a Nash equilibrium, namely (A,A), but each strategy of the row player is weakly dominated. So after one round of elimination the empty game is reached. Consider a strategic game H. We say that a strategy is undominated if no strategy weakly dominates it. Next, we say that H is WD-admissible if for all subgames H' of it the following holds: each strategy is undominated or is weakly dominated by an undominated strategy. Intuitively, a strategic game H is WD-admissible if in every subgame H' of it, for every strategy s_i in H' the relation `is weakly dominated' in H' has a maximal element above s_i. The crucial property of WD-admissible games is formalised in the following lemma, whose proof follows directly by induction. Let H := (H_1, …, H_n, p_1, …, p_n) be a WD-admissible strategic game and for k ≥ 1, let H^k := (H^k_1, …, H^k_n, p_1, …, p_n). Then ∀ i ∈ {1, …, n} ∀ s_i ∈ H_i ∃ t_i ∈ H^k_i ∀ s_-i ∈ H^k_-i : p_i(t_i, s_-i) ≥ p_i(s_i, s_-i). A two-player strategic game H = (H_1, H_2, p_1, p_2) is called strictly competitive if ∀ i ∈ {1,2} ∀ s, s' ∈ S : p_i(s) ≥ p_i(s') iff p_-i(s) ≤ p_-i(s'). For i ∈ {1,2} we define maxmin_i(H) := max_s_i ∈ H_i min_s_-i ∈ H_-i p_i(s_i, s_-i). We allow -∞ and ∞ as minima and maxima, so maxmin_i(H) always exists. When maxmin_i(H) is finite we call any strategy s^*_i such that min_s_-i ∈ H_-i p_i(s^*_i, s_-i) = maxmin_i(H) a security strategy for player i in H. We shall reuse the following auxiliary results from <cit.>. Let H = (H_1, H_2, p_1, p_2) be a strictly competitive strategic game. Then ∀ i ∈ {1,2} ∀ s, s' ∈ S : p_i(s) = p_i(s') iff p_-i(s) = p_-i(s'). This simply means that every strictly competitive strategic game satisfies the TDI condition. Consider a strictly competitive strategic game H with a Nash equilibrium s. Suppose that for some i ∈ {1,2}, t_i weakly dominates s_i. Then (t_i, s_-i) is also a Nash equilibrium. Consider a strictly competitive strategic game H with two outcomes that has a Nash equilibrium. Then H^1 is a trivial game. The following result is standard (for the used formulation see, e.g., <cit.>). Consider a strictly competitive strategic game H. * All Nash equilibria of H yield the same payoff for player i, namely maxmin_i(H). * All Nash equilibria of H are of the form (s^*_1, s^*_2), where each s^*_i is a security strategy for player i. By modifying the proof of Corollary 5 from <cit.> appropriately, we have the following. Lemma. Consider a WD-admissible strictly competitive strategic game H that has a Nash equilibrium. Then H^1 has a Nash equilibrium as well, and for all i ∈ {1,2}, maxmin_i(H) = maxmin_i(H^1). §.§ A bound on IEWDS We now move on to a discussion of extensive games. We say that an extensive game G is WD-admissible (respectively, strictly competitive) if Γ(G) is WD-admissible (respectively, strictly competitive). We write Γ^k(G) instead of (Γ(G))^k, Γ_i(G) instead of (Γ(G))_i, and Γ^k_i(G) instead of (Γ^k(G))_i.
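The dominance chain in the game displayed above is easy to verify mechanically: every row strategy is weakly dominated by its successor, so no undominated dominator exists and WD-admissibility fails. A quick Python check (our own illustration) on the row player's payoffs:

```python
def weakly_dominates(p, s, t, cols):
    """Row strategy s weakly dominates t: >= against every column, > against one."""
    return (all(p(s, c) >= p(t, c) for c in cols)
            and any(p(s, c) > p(t, c) for c in cols))

# Row payoff of the example: strategy i (0 = A, 1 = B, ...) gets 1 against
# column j exactly when 1 <= j <= i, and 0 otherwise.
p_row = lambda i, j: 1 if 1 <= j <= i else 0

n = 6                      # any finite horizon of the infinite game
cols = range(n)
assert all(weakly_dominates(p_row, i + 1, i, cols) for i in range(n - 1))
```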
So Γ^0(G) = Γ(G). Further, for a strictly competitive game H = (H_1, H_2, p_1, p_2) with finitely many outcomes, for each player i we define p_i^max(H) := max_s ∈ S p_i(s) and the two sets M_i(H) := {s_i ∈ H_i | ∀ s_-i ∈ H_-i : p_i(s_i, s_-i) = p_i^max(H)} and M_-i(H) := {s_-i ∈ H_-i | ∃ s_i ∈ H_i : p_i(s_i, s_-i) = p_i^max(H)}. By the assumption about H, p_i^max(H) is finite. We can then prove the following generalization of the crucial Lemma 1 and Theorem 1 from <cit.>, where the proofs are analogous to those of Lemma 18 and Theorem 19 in <cit.>. Lemma. Let G be a WD-admissible strictly competitive extensive game with finitely many outcomes. For all i ∈ {1,2} and for all k ≥ 0, if M_i(Γ^k(G)) = ∅ then M_-i(Γ^k(G)) ∩ Γ^k+2_-i(G) = ∅. Lemma <ref> implies that if for all i ∈ {1,2}, M_i(Γ^k(G)) = ∅, then two further rounds of elimination of weakly dominated strategies remove from Γ^k(G) at least two outcomes. This allows us to establish the following result. The proof is almost the same as the one given in <cit.> for finite extensive games. We reproduce it here for the convenience of the reader. Theorem. Let G be a WD-admissible strictly competitive extensive game with at most m outcomes. Then Γ^m-1(G) is a trivial game. We prove a stronger claim, namely that for all m ≥ 1 and k ≥ 0, if Γ^k(G) has at most m outcomes, then Γ^k+m-1(G) is a trivial game. We proceed by induction on m. For m = 1 the claim is trivial. For m = 2 we first note that by Theorem <ref> and Lemma <ref> each game Γ^k(G) has a Nash equilibrium. So the claim follows by Lemma <ref>. For m > 2 two cases arise. Case 1. For some i ∈ {1,2}, M_i(Γ^k(G)) ≠ ∅. For player i every strategy s_i ∈ M_i(Γ^k(G)) weakly dominates all strategies s_i' ∉ M_i(Γ^k(G)), and no strategy in M_i(Γ^k(G)) is weakly dominated. So the set of strategies of player i in Γ^k+1(G) equals M_i(Γ^k(G)) and consequently p_i^max(Γ^k(G)) is his unique payoff in this game. By Note <ref>, Γ^k+1(G), and hence also Γ^k+m-1(G), is a trivial game. Case 2. For all i ∈ {1,2}, M_i(Γ^k(G)) = ∅. Take joint strategies s and t such that p_1(s) = p_1^max(Γ^k(G)) and p_2(t) = p_2^max(Γ^k(G)). By Note <ref> the outcomes (p_1(s), p_2(s)) and (p_1(t), p_2(t)) are different since m > 1. We have s_2 ∈ M_-1(Γ^k(G)) and t_1 ∈ M_-2(Γ^k(G)). Hence by Lemma <ref> for no joint strategy s' in Γ^k+2(G) we have p_1(s') = p_1^max(Γ^k(G)) or p_2(s') = p_2^max(Γ^k(G)). So Γ^k+2(G) has at most m-2 outcomes. By the induction hypothesis Γ^k+m-1(G) is a trivial game. We now show that Theorem <ref> holds for a large class of natural games. Call an extensive game almost constant if for all but finitely many leaves the outcome is the same. Note that every almost constant game has finitely many outcomes, but the converse does not hold. Indeed, it suffices to take a game with two outcomes, each associated with infinitely many leaves. The following general result holds. Every almost constant extensive game is WD-admissible. We begin with two unrelated observations. Call a function p : A → B almost constant if for some b we have p(a) = b for all but finitely many a ∈ A. Observation 1. Consider two sequences of some elements (v_0, v_1, …) and (w_0, w_1, …) such that v_j ≠ v_k, v_j ≠ w_k, and w_j ≠ w_k for all j ≥ 0 and k > j, and a function p : {v_0, v_1, …} ∪ {w_0, w_1, …} → B such that p(v_j) ≠ p(w_j) for all j ≥ 0. Then p is not almost constant. Indeed, otherwise for some k ≥ 0 the function p : {v_k, v_k+1, …} ∪ {w_k, w_k+1, …} → B would be constant. Observation 2. Take an extensive game.
For some player i, consider two joint strategies (s_i, s_-i) and (s'_i, s'_-i). If leaf(s_i, s_-i) = leaf(s'_i, s'_-i) then leaf(s_i, s_-i) = leaf(s'_i, s_-i). Indeed, consider any node w in play(s_i, s_-i) such that turn(w) = i. Then by assumption s_i(w) = s'_i(w). This implies that play(s_i, s_-i) = play(s'_i, s_-i), which yields the claim. Now consider an almost constant extensive game G. Take an arbitrary subgame H of Γ(G). Suppose by contradiction that for some player i there exists an infinite sequence of strategies s_i^0, s_i^1, s_i^2, … such that for all j ≥ 0, s_i^j+1 weakly dominates s_i^j in H. By definition of weak dominance, for all j ≥ 0 there exists s^j_-i ∈ H_-i such that p_i(s_i^j, s_-i^j) < p_i(s_i^j+1, s_-i^j). Let, for j ≥ 0, v_j = leaf(s_i^j, s_-i^j) and w_j = leaf(s_i^j+1, s_-i^j). By the above inequalities p_i(v_j) ≠ p_i(w_j) for all j ≥ 0. We now argue that v_j ≠ v_k, v_j ≠ w_k, and w_j ≠ w_k for all j ≥ 0 and k > j. First, note that by the transitivity of the `weakly dominates' relation we have the following. * p_i(s_i^j, s_-i^j) < p_i(s_i^j+1, s_-i^j) ≤ p_i(s_i^k, s_-i^j), * p_i(s_i^j, s_-i^j) < p_i(s_i^j+1, s_-i^j) ≤ p_i(s_i^k+1, s_-i^j), * p_i(s_i^j+1, s_-i^k) ≤ p_i(s_i^k, s_-i^k) < p_i(s_i^k+1, s_-i^k). This implies in turn leaf(s_i^j, s_-i^j) ≠ leaf(s_i^k, s_-i^j), leaf(s_i^j, s_-i^j) ≠ leaf(s_i^k+1, s_-i^j), and leaf(s_i^j+1, s_-i^k) ≠ leaf(s_i^k+1, s_-i^k). So by Observation 2 we have the following. * v_j = leaf(s_i^j, s_-i^j) ≠ leaf(s_i^k, s_-i^k) = v_k, * v_j = leaf(s_i^j, s_-i^j) ≠ leaf(s_i^k+1, s_-i^k) = w_k, * w_j = leaf(s_i^j+1, s_-i^j) ≠ leaf(s_i^k+1, s_-i^k) = w_k. By Observation 1, p_i is not almost constant, which contradicts the assumption that G is almost constant. By the transitivity of the `weakly dominates' relation we conclude that G is WD-admissible. Let G be an almost constant strictly competitive extensive game with at most m outcomes. Then Γ^m-1(G) is a trivial game. §.§ Acknowledgments We thank the reviewers for their helpful comments. The second author was partially supported by the grant CRG/2022/006140.
http://arxiv.org/abs/2307.04943v1
20230711001558
Dispersive estimates for 1D matrix Schrödinger operators with threshold resonance
[ "Yongming Li" ]
math.AP
[ "math.AP" ]
Department of Mathematics, Texas A&M University, College Station, TX 77843, USA [email protected] The author was partially supported by NSF grants DMS-1954707 and DMS-2235233. We establish dispersive estimates and local decay estimates for the time evolution of non-self-adjoint matrix Schrödinger operators with threshold resonances in one space dimension. In particular, we show that the decay rates in the weighted setting are the same as in the regular case after subtracting a finite rank operator corresponding to the threshold resonances. Such matrix Schrödinger operators naturally arise from linearizing a focusing nonlinear Schrödinger equation around a solitary wave. It is known that the linearized operator for the 1D focusing cubic NLS equation exhibits a threshold resonance. We also include an observation of a favorable structure in the quadratic nonlinearity of the evolution equation for perturbations of solitary waves of the 1D focusing cubic NLS equation. Dispersive estimates for 1D matrix Schrödinger operators with threshold resonance Yongming Li October 2023 § INTRODUCTION In this article, we establish dispersive estimates and local decay estimates for the (non-self-adjoint) matrix Schrödinger operators ℋ = ℋ_0 + 𝒱 = [ -∂_x^2 + μ 0; 0 ∂_x^2 - μ ] + [ -V_1 -V_2; V_2 V_1 ] on L^2(ℝ) × L^2(ℝ), where μ is a positive constant and V_1, V_2 are real-valued, sufficiently decaying potentials. The operator ℋ is closed on the domain D(ℋ) = H^2(ℝ) × H^2(ℝ). These matrix operators arise when linearizing a focusing nonlinear Schrödinger equation around a solitary wave. By our assumptions on V_1 and V_2, Weyl's criterion implies that the essential spectrum of ℋ is the same as that of ℋ_0, given by (-∞,-μ] ∪ [μ,∞). As a core assumption in this paper, we suppose that the edges ±μ of the essential spectrum are irregular in the sense of Definition <ref>. This implies that there exist non-trivial bounded solutions to the equation ℋΨ⃗_± = ±μΨ⃗_±, see Lemma <ref>. The dispersive estimates for ℋ when the thresholds ±μ are regular have been obtained in Sections 7-8 of the paper by Krieger-Schlag <cit.>, building on the scattering theory developed by Buslaev-Perel'man <cit.>. See also the recent work of Collot-Germain <cit.>. Our proof is instead based on the unifying approach to resolvent expansions first initiated by Jensen-Nenciu <cit.>, and then further refined in Erdogan-Schlag <cit.> for matrix Schrödinger operators. We also adopt techniques from Erdogan-Green <cit.>, where the authors prove similar dispersive estimates for one-dimensional Dirac operators. §.§ Motivation Our interest in developing dispersive estimates for (<ref>) stems from the asymptotic stability problem for solitary wave solutions to nonlinear Schrödinger (NLS) equations. The NLS equation i∂_t ψ + ∂_x^2 ψ + F(|ψ|^2)ψ = 0, ψ : ℝ_t × ℝ_x → ℂ, appears in many important physical contexts such as the propagation of a laser beam, the envelope description of water waves in an ideal fluid, or the propagation of light waves in nonlinear optical fibers. See, e.g., Sulem-Sulem <cit.> for physics background. Under certain general conditions on the nonlinearity F(·) (see, e.g., <cit.>), the equation (<ref>) admits a parameterized family of localized, finite energy, traveling solitary waves of the form ψ(t,x) = e^itα^2ϕ(x;α), where ϕ(·;α) is a ground state, i.e., a positive, decaying, real-valued solution to the (nonlinear) elliptic equation -∂_x^2 ϕ + α^2 ϕ = F(ϕ^2)ϕ.
The existence and uniqueness of these ground state solutions are well-understood; see, e.g., <cit.>, <cit.>. The solitary wave solutions (or simply, solitons) are of importance due to the special role they play for the long-time dynamics of the Cauchy problem (<ref>). Consequently, over the last few decades there has been significant interest in the study of stability (or instability) of such solitary waves under small perturbations. The primary notion of stability is that of orbital stability, and it is by now well-understood for the NLS equation. The pioneering works in this direction were due to Cazenave-Lions <cit.>, Shatah-Strauss <cit.>, and Weinstein <cit.>; see also <cit.> for the general theory. On the other hand, a stronger notion of stability is that of asymptotic stability. There are two general approaches for the asymptotic stability problem. The first approach is to use integrability techniques, when the underlying partial differential equation is completely integrable and inverse scattering is available. A second approach is perturbative, which means that one studies the dynamics of the nonlinear flow in the neighborhood of the solitary wave, on a restricted set of the initial data. Generally, one starts by decomposing the perturbed solution into a sum of a solitary wave and a dispersive remainder term. For the perturbative approach, dispersive estimates for the linear flow are key. Let us briefly describe the perturbative approach for the NLS equation. To keep our exposition short, we will not take into account any modulation aspects related to the Galilean invariance of the equation. For small α > 0, consider the perturbation ansatz ψ(t,x) = e^itα^2(ϕ(x) + u(t,x)) with the ground state ϕ(·) = ϕ(·;α) and the dispersive remainder term u(t,x). The linearization of (<ref>) around the solitary wave e^itα^2ϕ(x) then leads to the following nonlinear partial differential equation i∂_t u = (-∂_x^2 + α^2 - V)u - Wu̅ + N, where N = N(ϕ,u,u̅) is nonlinear in the variables (u,u̅), and V = F(ϕ^2) + F'(ϕ^2)ϕ^2 and W = F'(ϕ^2)ϕ^2 are real-valued potentials related to the ground state ϕ. Equivalently, the above equation can be recast as a system for the vector U := (u,u̅)^⊤, which is given by i∂_t U - ℋU = 𝒩, where 𝒩 is a nonlinear term, and ℋ is a matrix Schrödinger operator of the form (<ref>) with the parameters μ = α^2, V_1 = V, and V_2 = W. For the study of asymptotic stability of solitary waves for NLS, it is thus crucial to fully understand the spectral properties of the matrix operator ℋ as well as to derive dispersive estimates for the linear evolution operator e^itℋ. One of the key steps in a perturbative analysis is to prove that the dispersive remainder (<ref>) decays to zero in a suitable topology. Let us consider, for example, the 1D focusing NLS with a pure power nonlinearity, i.e., i∂_t ψ + ∂_x^2 ψ + |ψ|^2σψ = 0, σ > 0. The ground state ϕ(x;1) has an explicit formula for all σ > 0 given by ϕ(x;1) = (σ+1)^1/2σ sech^1/σ(σ x), and the linearized operator around e^itϕ(x;1) takes the form ℋ_σ = [ -∂_x^2 - (σ+1)^2 sech^2(σ x) + 1 -σ(σ+1) sech^2(σ x); σ(σ+1) sech^2(σ x) ∂_x^2 + (σ+1)^2 sech^2(σ x) - 1 ]. For monomial nonlinearities, we may obtain ϕ(x;α) from rescaling by ϕ(x;α) = α^1/σϕ(α x;1). The matrix operators when linearizing around e^itα^2ϕ(x;α) are also equivalent to the matrix operator ℋ_σ by rescaling. The spectra for these matrix operators were investigated in <cit.>; see also Section 9 of <cit.>.
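As a quick sanity check of the explicit formula for the ground state, one can verify symbolically that ϕ(x;1) solves the elliptic equation with α = 1; the following sympy sketch (our own check, written out for the cubic case σ = 1, where F(s) = s) confirms this:

```python
import sympy as sp

x = sp.symbols('x', real=True)
sigma = 1                                    # cubic NLS
phi = (sigma + 1)**sp.Rational(1, 2*sigma) * sp.sech(sigma*x)**sp.Rational(1, sigma)

# ground state equation: -phi'' + phi = phi^(2*sigma + 1)
residual = -sp.diff(phi, x, 2) + phi - phi**(2*sigma + 1)
assert sp.simplify(residual.rewrite(sp.exp)) == 0
```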
For σ ≥ 2, Krieger-Schlag <cit.> were able to construct finite co-dimensional center-stable manifolds around the solitary waves and prove asymptotic stability using dispersive and Strichartz estimates developed for the evolution operator e^itℋ_σ. However, for the completely integrable case (σ = 1), it was shown in <cit.> that the matrix operator ℋ_1 exhibits the threshold resonance Ψ(x) = (tanh^2(x), -sech^2(x))^⊤ at λ = 1. The dispersive estimates developed in <cit.> do not apply in this case. Furthermore, we note that a key assumption in the papers <cit.>, <cit.>, <cit.>, <cit.> is that the linearized matrix operator does not possess threshold resonances at the edges of the essential spectrum. In these “generic” (regular) cases, it can be shown that the evolution operator enjoys improved decay estimates in weighted spaces; see, e.g., Proposition 8.1 in <cit.>. Thus, a meaningful motivation for this paper is to prove dispersive estimates in the presence of threshold resonances under some general spectral assumptions on the matrix operator ℋ, which are applicable to the 1D cubic NLS case (σ = 1). We will discuss this particular case briefly in Section <ref>. §.§ Main result We are now in the position to state the main result of this paper. We begin by specifying some spectral assumptions on ℋ. (A1) -σ_3𝒱 is a positive matrix, where σ_3 is one of the Pauli matrices (c.f. (<ref>)), (A2) L_- := -∂_x^2 + μ - V_1 + V_2 is non-negative, (A3) there exists β > 0 such that |V_1(x)| + |V_2(x)| ≲ e^-(√(2μ)+β)|x| for all x ∈ ℝ, (A4) there are no embedded eigenvalues in (-∞,-μ) ∪ (μ,∞). Under these assumptions, we recall the general spectral theory for ℋ from <cit.>.[The results in Section 2 of <cit.> are stated for dimension 3, but they in fact hold for all dimensions. Moreover, only a polynomial decay on V_1 and V_2 is assumed in <cit.>. See also <cit.>.] <cit.> Suppose Assumption <ref> holds. The essential spectrum of ℋ equals (-∞,-μ] ∪ [μ,∞). Moreover, σ(ℋ) = -σ(ℋ) = \overline{σ(ℋ)} = σ(ℋ^*), and σ(ℋ) ⊂ ℝ ∪ iℝ. The discrete spectrum of ℋ consists of eigenvalues {z_j}_j=1^N, 0 ≤ N < ∞, of finite multiplicity. For each z_j ≠ 0, the algebraic and geometric multiplicities coincide and Ran(ℋ - z_j) is closed. The zero eigenvalue has finite algebraic multiplicity, i.e., the generalized eigenspace ∪_k=1^∞ ker(ℋ^k) has finite dimension. In fact, there is a finite m ≥ 1 so that ker(ℋ^k) = ker(ℋ^k+1) for all k ≥ m. The symmetry (<ref>) is due to the following commutation properties of ℋ: ℋ^* = σ_3 ℋ σ_3, -ℋ = σ_1 ℋ σ_1, with the Pauli matrices σ_1 = [ 0 1; 1 0 ], σ_2 = [ 0 -i; i 0 ], σ_3 = [ 1 0; 0 -1 ]. As a core assumption in this paper, we impose that the thresholds ±μ of the essential spectrum are irregular. (A5) The thresholds ±μ are irregular in the sense of Definition <ref>. This implies that there exist non-trivial bounded solutions Ψ⃗_± = (Ψ_1^±, Ψ_2^±)^⊤ to the equation ℋΨ⃗_± = ±μΨ⃗_±. (A6) The vanishing (bilateral) Laplace transform condition holds: ℒ[V_2Ψ_1^+ + V_1Ψ_2^+](±√(2μ)) = ∫_-∞^∞ e^∓√(2μ)y (V_2Ψ_1^+ + V_1Ψ_2^+)(y) dy = 0. For details about the characterization of the threshold functions Ψ⃗, we refer the reader to Definition <ref> and Lemma <ref> in Section 4. Due to the commutation identity (<ref>), we have the relation Ψ⃗_+ = σ_1Ψ⃗_-. We emphasize that assumption (A6) is used to infer that (non-trivial) bounded solutions Ψ⃗_± = (Ψ_1^±, Ψ_2^±) to the equation ℋΨ⃗_± = ±μΨ⃗_± satisfy Ψ_1^+ = Ψ_2^- ∈ L^∞(ℝ) ∖ L^2(ℝ). Let P_d : L^2(ℝ) × L^2(ℝ) → L^2(ℝ) × L^2(ℝ) be the Riesz projection corresponding to the discrete spectrum of ℋ, and define P_s := I - P_d.
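The commutation identities behind the spectral symmetry can be checked at the level of the matrix structure: treating -∂_x^2 + μ as a scalar symbol D (it commutes with the constant Pauli matrices, and with real potentials the formal adjoint is the transpose), a short sympy sketch (our own check, not part of the paper) verifies ℋ^* = σ_3ℋσ_3 and -ℋ = σ_1ℋσ_1:

```python
import sympy as sp

D, mu, V1, V2 = sp.symbols('D mu V1 V2')     # D stands for the scalar -d^2/dx^2
H = sp.Matrix([[D + mu - V1, -V2],
               [V2, -D - mu + V1]])
s1 = sp.Matrix([[0, 1], [1, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])

assert sp.simplify(s1*H*s1 + H) == sp.zeros(2, 2)    # -H = s1 H s1
assert sp.simplify(s3*H*s3 - H.T) == sp.zeros(2, 2)  # H^* = s3 H s3 (formal transpose)
```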
We now state the main theorem of this article. Suppose assumptions (A1) – (A6) hold, and let Ψ⃗ = (Ψ_1,Ψ_2) be the L^∞(ℝ)× L^∞(ℝ)∖ L^2(ℝ) × L^2(ℝ) distributional solution to ℋΨ⃗ = μΨ⃗, with the normalization lim_x →∞( |Ψ_1(x)|^2 + |Ψ_1(-x) |^2 ) = 2. Then, for any f⃗=(f_1,f_2) ∈𝒮(ℝ) ×𝒮(ℝ), we have * the unweighted dispersive estimate ‖ e^itℋP_sf⃗ ‖_L^∞(ℝ)× L^∞(ℝ)≲| t |^-1/2‖f⃗ ‖_L^1(ℝ) × L^1(ℝ), ∀ | t |≥ 1, * and the weighted dispersive estimate ‖⟨ x ⟩^-2 (e^itℋP_s - F_t)f⃗ ‖_L^∞(ℝ)× L^∞(ℝ)≲| t |^-3/2‖⟨ x ⟩^2f⃗ ‖_L^1(ℝ) × L^1(ℝ), ∀ | t|≥ 1, where F_tf⃗ := e^itμ/√(-4 π i t)⟨σ_3Ψ⃗,f⃗ ⟩Ψ⃗ - e^-itμ/√(4π i t)⟨σ_3 σ_1Ψ⃗, f⃗⟩σ_1Ψ⃗. We proceed with some remarks on the main theorem: * The estimate (<ref>) is an analogue of the weighted dispersive estimates obtained by Goldberg <cit.> for the scalar Schrödinger operator H = -∂_x^2 + V on the real line for non-generic potentials V; see <cit.>. The local decay estimate (<ref>) shows that the bulk of the free wave e^itℋP_s enjoys improved local decay at the integrable rate O(| t |^-3/2), and that the slow O(| t |^-1/2) local decay can be pinned down to the contribution of the finite rank operator F_t. Such sharp information can be useful for nonlinear asymptotic stability problems, see also Section <ref> below. * We make some comments on the spectral hypotheses. The assumptions (A1)–(A4) are known to be satisfied by the linearized operator around the solitary wave for the 1D focusing power-type NLS (<ref>). In the case of the 1D focusing cubic NLS (σ = 1), the linearized operator ℋ_1 satisfies the assumptions (A1)–(A6); see Section <ref> below. More generally, in Lemma <ref>, we show for matrix operators ℋ of the form (<ref>) satisfying assumptions (A1)–(A6) that the edges ±μ of the essential spectrum of ℋ cannot be eigenvalues, and that the non-trivial bounded solutions Ψ⃗_± = (Ψ_1^±,Ψ_2^±)^⊤ to ℋΨ⃗_± = ±μΨ⃗_± belong to L^∞∖ L^2 since Ψ_1^±(x) has a non-zero limit as x →±∞. In this sense, we characterize the solutions Ψ⃗_± as threshold resonances. However, it is not yet clear to the author whether assumption (A6) is strictly needed to show that non-trivial bounded solutions Ψ⃗_± to ℋΨ⃗_± = ±μΨ⃗_± cannot be eigenfunctions. Moreover, an inspection of the proof of Lemma <ref> reveals that the strong exponential decay assumption (A3) and the vanishing condition assumption (A6) are only used in a Volterra integral equation argument. In all other proofs, we only use some polynomial decay of the potentials V_1 and V_2. * It might be possible to prove Theorem <ref> using the scattering theory developed in <cit.>. However, one major difficulty for this approach is due to the fact that the matrix Wronskian associated with the vector Jost solutions is not invertible at the origin for cases where the matrix operators exhibit threshold resonances. Hence, the vector-valued distorted Fourier basis functions are not immediately well-defined at zero frequency. See Corollary 5.21 and Section 6 in <cit.> for further details. §.§ Previous works In this subsection, we collect references related to dispersive estimates for Schrödinger operators and to the study of the stability of solitary waves. For dispersive estimates for the matrix operator ℋ, we refer to Sections 5–9 of <cit.> in dimension 1, and to <cit.> in higher dimensions. A comprehensive study of the spectral theory for the matrix operators ℋ arising from pure power-type NLS is given in <cit.>. See also <cit.> for related analytical and numerical studies.
For dispersive estimates for the scalar Schrödinger operators, pioneering works include <cit.>, and we refer to <cit.> for a sample of recent works. Finally, we mention the papers <cit.> on resolvent expansions for the scalar Schrödinger operator. On the general well-posedness theory for the NLS Cauchy problem (<ref>), we refer to the pioneering works <cit.>. Results on the orbital stability (or instability) of solitary waves for the NLS equation were first obtained by <cit.>, and a general theory was established in <cit.>. Subsequent developments for general nonlinearities were due to <cit.>. Regarding the asymptotic stability of solitary waves, the first results were due to Buslaev-Perel'man <cit.>. Subsequent works in this direction were due to <cit.>. For surveys on the stability of solitary waves, we refer to the reviews <cit.> and the monographs <cit.>. §.§ On the solitary wave for the 1D focusing cubic NLS In this subsection, we present two observations related to the asymptotic stability problem for the solitary wave of the 1D focusing cubic NLS. First, we verify that the assumption (A6) holds for the linearized operator around the solitary wave of the 1D focusing cubic NLS. Second, we use the local decay estimate (<ref>) to shed some light on the leading order structure of the quadratic nonlinearity in the perturbation equation for the solitary wave of the 1D focusing cubic NLS. We note that a proof for the asymptotic stability problem has been given by Cuccagna-Pelinovsky <cit.> via inverse scattering techniques. On the other hand, a perturbative proof that does not explicitly rely on the integrable structure has not yet appeared in the literature to the best of the author's knowledge. We now briefly discuss the evolution equation for perturbations of the solitary wave for the 1D focusing cubic NLS. To keep our exposition short, we do not discuss the modulation aspects for the solitary wave. For simplicity, consider the perturbation ansatz ψ(t,x) = e^it(Q(x)+u(t,x)) for the equation (<ref>) (σ = 1). The ground state has the explicit formula Q(x) := ϕ(x;1) = √(2)sech(x). The evolution equation for the perturbation in vector form u⃗ = (u_1,u_2) :=(u,u̅) is given by i ∂_t u⃗ - ℋ_1u⃗ = 𝒬(u⃗) + 𝒞(u⃗), where ℋ_1 = ℋ_0 + 𝒱_1 = [ -∂_x^2 + 1 0; 0 ∂_x^2 - 1 ] + [ -4sech^2(x) -2sech^2(x); 2sech^2(x) 4sech^2(x) ], and 𝒬(u⃗) := [ - Qu_1^2 - 2Qu_1u_2; Qu_2^2 + 2Qu_1u_2 ], 𝒞(u⃗) := [ - u_1^2u_2; u_1u_2^2 ]. Recall from <cit.> that the matrix operator ℋ_1 has the essential spectrum (-∞,-1]∪[1,∞), and a four-dimensional generalized nullspace 𝒩_g(ℋ_1) = span{[ Q; - Q ], [ (1+x∂_x)Q; (1+x∂_x)Q ], [ ∂_x Q; ∂_x Q ], [ x Q; - xQ ]}, as well as a threshold resonance at +1 given by Ψ⃗≡Ψ⃗_+ := [ Ψ_1; Ψ_2 ] = [ 1-1/2Q^2; -1/2Q^2 ] = [ tanh^2(x); -sech^2(x) ]. By symmetry, there is also a threshold resonance function at -1 given by Ψ⃗_- = σ_1Ψ⃗_+ = [ -sech^2(x); tanh^2(x) ]. The eigenfunctions listed in (<ref>) are related to the underlying symmetries for the NLS equation. Note that we have normalized the resonance function Ψ⃗ to satisfy the condition (<ref>) stated in Theorem <ref>. §.§.§ On assumption (A6) for the 1D focusing cubic NLS Our first observation is that the assumption (A6) is satisfied by the matrix operator ℋ_1. Let V_1(x) = 4sech^2(x), V_2(x) = 2sech^2(x), and (Ψ_1(x),Ψ_2(x)) = (tanh^2(x),-sech^2(x)). Then, we have ∫_ℝ e^±√(2)y(V_2(y) Ψ_1(y) + V_1(y)Ψ_2(y)) dy = 0. We denote the (two-sided) Laplace transform by ℒ[f](s) = ∫_-∞^∞ e^-syf(y) dy, s ∈ℂ, which is formally related to the Fourier transform by ℒ[f](s) = √(2π)f̂(is).
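Before turning to the analytic proof via the Laplace transform, the vanishing of the integral in the lemma can be confirmed by direct quadrature; a short numerical sketch (illustrative code of ours, using scipy):

```python
import numpy as np
from scipy.integrate import quad

sech = lambda y: 1.0/np.cosh(y)
# integrand (V_2 Psi_1 + V_1 Psi_2)(y) = 2 sech^2(y) - 6 sech^4(y)
f = lambda y: 2*sech(y)**2 - 6*sech(y)**4

for sign in (+1.0, -1.0):
    # e^{+-sqrt(2) y} f(y) is integrable since f decays like e^{-2|y|}
    val, err = quad(lambda y: np.exp(sign*np.sqrt(2.0)*y)*f(y), -np.inf, np.inf)
    assert abs(val) < 1e-8, (sign, val)
print("vanishing Laplace transform condition (A6) confirmed numerically")
```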
By direct computation, (V_1Ψ_2+V_2Ψ_1)(x) = 2sech^2(x) - 6sech^4(x), and sech^4(x) = 2/3sech^2(x) - 1/6∂_x^2(sech^2(x)). Recall from <cit.> that, as equalities in L^2(ℝ), ℱ[sech^2](ξ) = √(π/2)ξ/sinh(πξ/2). Hence, using the basic property ℱ[-∂_x^2 f](ξ) = ξ^2 ℱ[f](ξ) and (<ref>), we obtain ℱ[sech^4](ξ) = 1/6√(π/2)ξ(4+ξ^2)/sinh(πξ/2). As complex functions, we recall that sinh(iz) = i sin(z) and that z ↦ z/sin(z) is analytic[to be pedantic, there is a removable singularity at z=0, which we remove by setting the function z/sin(z) equal to 1 at z=0.] in the strip {s+iσ: s ∈ (-π,π), σ∈ℝ}. Thus, by analytic continuation, ℒ[V_1 Ψ_2 + V_2 Ψ_1](s) = √(2π)(2ℱ[sech^2](is) - 6ℱ[sech^4](is)) = π s(-2+s^2)/sin(π s/2), for any s ∈ℂ with Re(s) ∈ (-2,2), which in particular proves the vanishing condition (<ref>). The other assumptions (A1)–(A5) for ℋ_1 are also satisfied, by either checking directly or invoking the results from Section 9 in <cit.>. §.§.§ Null structure for perturbations of the solitary wave of the 1D focusing cubic NLS Due to the slow local decay of the Schrödinger waves in the presence of a threshold resonance, the spatially localized quadratic nonlinearity in (<ref>) may pose significant difficulties for proving decay of small solutions to (<ref>). The weighted dispersive estimate (<ref>) shows that the slow local decay is only due to the finite rank projection F_t. To shed some light on the expected leading order behavior of the quadratic nonlinearity 𝒬(u⃗) in (<ref>), it is instructive to insert a free Schrödinger wave u⃗_free(t) := e^-itℋ_1P_su⃗_0, for some fixed u⃗_0 ∈𝒮(ℝ) ×𝒮(ℝ). By Theorem <ref>, we have u⃗_free(t) = c_- e^-it/√(t)[ Ψ_1; Ψ_2 ] + c_+ e^it/√(t)[ Ψ_2; Ψ_1 ] + r⃗(t), with c_- = 1/√(-4 π i)⟨σ_3 Ψ⃗, u⃗_0 ⟩, c_+ = -1/√(4π i)⟨σ_3 σ_1 Ψ⃗, u⃗_0 ⟩, and where the remainder r⃗(t) satisfies ‖⟨ x ⟩^-2r⃗(t) ‖_L_x^∞(ℝ) × L_x^∞(ℝ)≲| t |^-3/2‖⟨ x ⟩^2 u⃗_0 ‖_L_x^1(ℝ) × L_x^1(ℝ). Thus, owing to the spatial localization of the quadratic nonlinearity, we have 𝒬(u⃗_free(t)) = c_+^2e^2it/t𝒬_1(Ψ⃗) + c_+c_-/t𝒬_2(Ψ⃗) + c_-^2e^-2it/t𝒬_3(Ψ⃗) + O_L^∞(| t |^-2), where 𝒬_1(Ψ⃗) = [ -QΨ_2^2 - 2QΨ_1Ψ_2; Q Ψ_1^2 + 2QΨ_1Ψ_2 ], 𝒬_2(Ψ⃗) = [ - 2QΨ_1Ψ_2 - 2Q(Ψ_1^2+Ψ_2^2); 2QΨ_1Ψ_2 + 2Q(Ψ_1^2+Ψ_2^2) ], 𝒬_3(Ψ⃗) = -σ_1𝒬_1(Ψ⃗) = [ -QΨ_1^2 - 2QΨ_1Ψ_2; Q Ψ_2^2 + 2QΨ_1Ψ_2 ]. Due to the critical O(| t |^-1) decay of the leading order terms on the right-hand side of (<ref>), it is instructive to analyze the long-time behavior of small solutions to the inhomogeneous matrix Schrödinger equation with such a source term { i∂_t u⃗_src - ℋ_1 u⃗_src = P_s(c_+^2e^2it/t𝒬_1(Ψ⃗) + c_+c_-/t𝒬_2(Ψ⃗) + c_-^2e^-2it/t𝒬_3(Ψ⃗) ), t ≥ 1, u⃗_src(1) = 0⃗. To this end, it will be useful to exploit a special conjugation identity for the matrix Schrödinger operator ℋ_1. It was recently pointed out by Martel, see <cit.>, that the matrix operator ℋ_1 can be conjugated to the flat matrix Schrödinger operator ℋ_0. By first conjugating ℋ_1 with the unitary matrix 𝒰 = 1/√(2)[ 1 i; 1 - i ], we obtain the equivalent matrix Schrödinger operator ℳ_1 = -i 𝒰^-1ℋ_1𝒰 := [ 0 L_-; -L_+ 0 ] = ℳ_0 + 𝒲 := [ 0 -∂_x^2 + 1; ∂_x^2 -1 0 ] + [ 0 - 2sech^2(x); 6sech^2(x) 0 ]. Introducing the operator 𝒟 := [ 0 (-∂_x^2+1)S^2; -S^2L_+ 0 ], S := Q ·∂_x · Q^-1 = ∂_x + tanh(x), one has the conjugation identity (see also <cit.>) 𝒟ℳ_1 = ℳ_0 𝒟. We then transfer the above identity to the matrix operator ℋ_1 by setting 𝒦 := 𝒰𝒟𝒰^-1 to obtain the conjugation identity 𝒦ℋ_1 = ℋ_0 𝒦. Moreover, it can be checked directly that 𝒦η⃗ = 0 for any generalized eigenfunction η⃗∈𝒩_g(ℋ_1), and this implies that 𝒦P_d ≡ 0, which is equivalent to saying that 𝒦 = 𝒦P_s.
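The conjugation identity 𝒟ℳ_1 = ℳ_0𝒟 stated above amounts, componentwise, to the trivial identity (-∂_x^2+1)S^2L_+ = (-∂_x^2+1)S^2L_+ together with the nontrivial scalar operator identity S^2L_+L_- = (-∂_x^2+1)^2S^2. The latter can be verified symbolically on a generic smooth function; a sympy sketch (our own check, not part of the proof):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.Function('f')(x)
sech, tanh = sp.sech, sp.tanh

S  = lambda u: sp.diff(u, x) + tanh(x)*u                # S = Q d/dx Q^{-1}
Lm = lambda u: -sp.diff(u, x, 2) + u - 2*sech(x)**2*u   # L_-
Lp = lambda u: -sp.diff(u, x, 2) + u - 6*sech(x)**2*u   # L_+
A  = lambda u: -sp.diff(u, x, 2) + u                    # -d^2/dx^2 + 1

lhs = S(S(Lp(Lm(f))))   # S^2 L_+ L_- f
rhs = A(A(S(S(f))))     # (-d^2/dx^2 + 1)^2 S^2 f
assert sp.simplify(sp.expand((lhs - rhs).rewrite(sp.exp))) == 0
print("S^2 L_+ L_- = (-d^2/dx^2 + 1)^2 S^2 verified")
```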
Hence, by applying the transformation 𝒦 to the equation (<ref>), we obtain the transformed equation i∂_t v⃗_src - ℋ_0 v⃗_src = 𝒦(c_+^2e^2it/t𝒬_1(Ψ⃗) + c_+c_-/t𝒬_2(Ψ⃗) + c_-^2e^-2it/t𝒬_3(Ψ⃗) ), where v⃗_src := 𝒦u⃗_src is the transformed variable. Note that the above equation features the flat operator ℋ_0 on the left. The Duhamel formula for v⃗_src(t) at times t ≥ 1 reads v⃗_src(t) = -i∫_1^t e^-i(t-s)ℋ_0𝒦(c_+^2e^2is/s𝒬_1(Ψ⃗) + c_+c_-/s𝒬_2(Ψ⃗) + c_-^2e^-2is/s𝒬_3(Ψ⃗)) ds. The flat, self-adjoint, matrix operator ℋ_0 has the benefit that the semigroup e^-itℋ_0 can be represented in terms of the standard Fourier transform by the formula (e^-itℋ_0g⃗)(x) = 1/√(2π)∫_ℝ e^-it(ξ^2+1)ĝ_1(ξ)e^ixξ dξ e_1 + 1/√(2π)∫_ℝ e^it(ξ^2+1)ĝ_2(ξ)e^ixξ dξ e_2, where g⃗ = (g_1,g_2)^⊤ and e_1,e_2 are the standard unit vectors in ℂ^2. The profile of v⃗_src(t) is given by h⃗_src(t) := e^itℋ_0v⃗_src(t). Setting 𝒦𝒬_j(Ψ⃗) =: (G_j,1,G_j,2)^⊤ for 1≤ j ≤ 3, we have for times t ≥ 1 that ℱ[h⃗_src(t)](ξ) = c_+^2 ∫_1^t e^is(ξ^2+3)/sĜ_1,1(ξ) ds e_1 + c_+c_- ∫_1^t e^is(ξ^2+1)/sĜ_2,1(ξ) ds e_1 + c_-^2 ∫_1^t e^is(ξ^2-1)/sĜ_3,1(ξ) ds e_1 + c_+^2 ∫_1^t e^-is(ξ^2-1)/sĜ_1,2(ξ) ds e_2 + c_+c_- ∫_1^t e^-is(ξ^2+1)/sĜ_2,2(ξ) ds e_2 + c_-^2 ∫_1^t e^-is(ξ^2+3)/sĜ_3,2(ξ) ds e_2. The uniform-in-time boundedness in L_ξ^∞ of the Fourier transform of the profile ℱ[h⃗_src(t)](ξ) is related to recovering the free decay rate for v⃗_src(t). However, in view of the critical decay of the integrand, this requires favorable time oscillations. Observe that the above terms with time phases e^± is(ξ^2+1), e^± is(ξ^2+3) are non-stationary in s for every ξ∈ℝ, which implies that they have a better decay rate using integration by parts in the variable s. On the other hand, the terms with the phases e^± is(ξ^2-1) are stationary at the points ξ = ± 1. Thus, it is important to know if the Fourier coefficients Ĝ_3,1(±1) and Ĝ_1,2(± 1) vanish. Indeed, this is true due to the following lemma. It holds that Ĝ_3,1(±1) = Ĝ_1,2(±1) = 0. First, to ease notation, we write 𝒦 = i/2[ (-D_1 - D_2) (D_1-D_2); (-D_1 + D_2) (D_1 + D_2) ], where D_1 := (-∂_x^2+1)S^2 = (-∂_x^2+1)(∂_x + tanh(x))(∂_x + tanh(x)), D_2 := S^2L_+ = (∂_x + tanh(x))(∂_x + tanh(x))(-∂_x^2 - 6sech^2(x) + 1). Since 𝒦σ_1 = -σ_1𝒦 and 𝒬_3(Ψ⃗) = -σ_1𝒬_1(Ψ⃗) (c.f. (<ref>)), it follows that G_3,1≡ G_1,2 as functions. Note that G_3,1 = i/2( D_1(QΨ_1^2) + D_1(QΨ_2^2) + 2D_1(2QΨ_1Ψ_2) + D_2(QΨ_1^2) - D_2(QΨ_2^2)), where (QΨ_1^2)(x) = √(2)sech(x)tanh^4(x), (QΨ_1Ψ_2)(x) = -√(2)sech^3(x)tanh^2(x), (QΨ_2^2)(x) = √(2)sech^5(x). By using the trigonometric identity sech^2(x) + tanh^2(x) = 1, we may simplify the expression for G_3,1 into G_3,1(x) = i√(2)/2(D_1(sech(x)-6sech^3(x)+6sech^5(x)) + D_2(sech(x) - 2sech^3(x))). By patient direct computation, we find F_1(x) := D_1(sech(x)-6sech^3(x)+6sech^5(x)) = 192sech^3(x) - 3456sech^5(x) + 9720sech^7(x) - 6720sech^9(x) and F_2(x) := D_2(sech(x)-2sech^3(x)) = 48sech^3(x) - 264sech^5(x) + 240sech^7(x). Moreover, using the identities (∂_x^2sech)(x) = sech(x) - 2sech^3(x), (∂_x^4sech)(x) = sech(x) - 20sech^3(x)+24sech^5(x), (∂_x^6sech)(x) = sech(x) -182sech^3(x)+840sech^5(x)-720sech^7(x), (∂_x^8sech)(x) = sech(x) - 1640sech^3(x) +23184sech^5(x)-60480sech^7(x) + 40320sech^9(x), we obtain F_1(x) = - 1/6(-∂_x^2 + 3∂_x^4 - 3∂_x^6 + ∂_x^8)sech(x) = -1/6(-∂_x^2+1)^3(-∂_x^2)sech(x), and F_2(x) = 1/3(-∂_x^2 + 2∂_x^4 - ∂_x^6)sech(x) = 1/3(-∂_x^2+1)^2(-∂_x^2)sech(x). Thus, using the property ℱ[-∂_x^2 f](ξ) = ξ^2 ℱ[f](ξ) and the fact that ℱ[sech](ξ) = √(π/2)sech(πξ/2), we compute that Ĝ_3,1(ξ) = i√(2)/2(F̂_1(ξ)+F̂_2(ξ)) = -i√(π)/12 (ξ^2-1)ξ^2(ξ^2+1)^2sech(πξ/2), which implies (<ref>) as claimed. We determined the identities (<ref>) – (<ref>) with the aid of the Wolfram Mathematica software.
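The sech-polynomial identities for F_1 and F_2 can alternatively be confirmed with open-source tools; a sympy sketch (illustrative code of ours) reproducing the Mathematica-assisted computation:

```python
import sympy as sp

x = sp.symbols('x', real=True)
sech, tanh = sp.sech, sp.tanh

S  = lambda u: sp.diff(u, x) + tanh(x)*u
Lp = lambda u: -sp.diff(u, x, 2) + u - 6*sech(x)**2*u
D1 = lambda u: -sp.diff(S(S(u)), x, 2) + S(S(u))   # (-d^2/dx^2 + 1) S^2
D2 = lambda u: S(S(Lp(u)))                         # S^2 L_+

F1 = D1(sech(x) - 6*sech(x)**3 + 6*sech(x)**5)
F2 = D2(sech(x) - 2*sech(x)**3)

target1 = 192*sech(x)**3 - 3456*sech(x)**5 + 9720*sech(x)**7 - 6720*sech(x)**9
target2 = 48*sech(x)**3 - 264*sech(x)**5 + 240*sech(x)**7

assert sp.simplify((F1 - target1).rewrite(sp.exp)) == 0
assert sp.simplify((F2 - target2).rewrite(sp.exp)) == 0
print("identities for F_1 and F_2 verified")
```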
The above lemma shows that the localized quadratic resonant terms are well-behaved for the nonlinear perturbation equation (<ref>). The presence of this null structure is potentially a key ingredient for a perturbative proof of the asymptotic stability of the solitary wave solutions to the 1D focusing cubic NLS. We end this subsection with the following closing remark. The motivation for analyzing the quadratic nonlinearity in the perturbation equation (<ref>) and for uncovering the null structure for the localized quadratic resonant terms in Lemma <ref> is due to the recent work by Lührmann-Schlag <cit.>, where the authors investigate the asymptotic stability of kink solutions to the 1D sine-Gordon equation under odd perturbations. In <cit.>, the authors employ a conjugation identity similar to the one we used in (<ref>) to transform the scalar Schrödinger operator H_1 := -∂_x^2 - 2sech^2(x) to the flat operator H_0 := -∂_x^2 for the perturbation equation. In fact, it is easy to check that one has the conjugation identity SH_1 = H_0 S, where S = ∂_x + tanh(x); see also the symbolic sketch at the end of this introduction. Moreover, an analogue of Lemma <ref> on the non-resonant property of the localized quadratic resonant terms in the perturbation equation for the sine-Gordon kink was first obtained in <cit.>. This remarkable null structure for the sine-Gordon model played a key role in the asymptotic stability proof in <cit.>. In <cit.>, the same authors obtained long-time decay estimates for even perturbations of the soliton of the 1D focusing cubic Klein-Gordon equation. The absence of the null structure in the nonlinearity of the perturbation equation in the focusing cubic Klein-Gordon model is a major obstruction to a full co-dimension one asymptotic stability result under even perturbations. Our short discussion on the effects of the threshold resonance on the quadratic term for (<ref>) suggests that the localized quadratic resonant terms are well-behaved for the perturbation equation in the 1D cubic NLS model. However, note that a full perturbative proof of the asymptotic stability problem for this model has to encompass the modulation theory associated with the moving solitary wave, and take into account the long-range (modified) scattering effects due to the non-localized cubic nonlinearities in the perturbation equation. We point out that Collot-Germain <cit.> recently obtained general asymptotic stability results of this kind for solitary waves of 1D nonlinear Schrödinger equations under the assumption that the linearized matrix Schrödinger operator does not exhibit threshold resonances. §.§ Organization of the article The remaining sections of this paper are devoted to the proof of Theorem <ref>. In Section 2, we state a few stationary phase lemmas, which will be heavily utilized in Sections 5 and 6, and we also provide an analogue of Theorem <ref> for the free matrix operator ℋ_0. In Section 3, we employ the symmetric resolvent expansion following the framework in <cit.>, and in Section 4, we carefully extract the leading operators for these resolvent expansions. A characterization of the threshold resonance is stated in Lemma <ref> under the spectral assumptions (A1)–(A6). Then, in Section 5, we prove dispersive estimates for the evolution operator e^itℋ in the low energy regime. The approach taken in Section 5 largely follows the techniques employed in <cit.> for one-dimensional Dirac operators. In Section 6, we prove dispersive estimates for the remaining energy regimes and finish the proof of Theorem <ref>.
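As promised in the remark above, the scalar conjugation identity SH_1 = H_0S underlying the sine-Gordon analysis can be verified symbolically; a minimal sympy sketch (our own illustrative check):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.Function('f')(x)
sech, tanh = sp.sech, sp.tanh

S  = lambda u: sp.diff(u, x) + tanh(x)*u
H1 = lambda u: -sp.diff(u, x, 2) - 2*sech(x)**2*u
H0 = lambda u: -sp.diff(u, x, 2)

assert sp.simplify(sp.expand((S(H1(f)) - H0(S(f))).rewrite(sp.exp))) == 0
print("S H_1 = H_0 S verified on a generic smooth function")
```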
§.§ Notation For any f⃗ = (f_1,f_2)^⊤, g⃗ = (g_1,g_2)^⊤∈ L^2(ℝ) × L^2(ℝ), we use the inner product ⟨f⃗,g⃗⟩ := ∫_ℝf⃗^* g⃗ dx = ∫_ℝ(f̅_1g_1 + f̅_2 g_2) dx, f⃗^* := (f̅_1,f̅_2). The Schwartz space is denoted by 𝒮(ℝ), and we use the weighted L^2-spaces X_σ := ⟨ x ⟩^-σL^2(ℝ) ×⟨ x ⟩^-σL^2(ℝ), ‖f⃗‖_X_σ := ‖⟨ x ⟩^σf⃗‖_L^2(ℝ)× L^2(ℝ), σ∈ℝ. Note that for any α > β > 0, one has the continuous inclusions X_α⊂ X_β⊂ X_0 = L^2(ℝ)× L^2(ℝ) ⊂ X_-β⊂ X_-α, and the duality X_α^* = X_-α. Our convention for the Fourier transform is ℱ[f](ξ) = f̂(ξ) = 1/√(2π)∫_ℝ e^-ixξf(x) dx, ℱ^-1[f](x) = f̌(x) = 1/√(2π)∫_ℝ e^ixξf(ξ) dξ. We denote by C>0 an absolute constant whose value is allowed to change from line to line. In order to indicate that the constant depends on a parameter, say θ, we will use the notation C_θ or C(θ). For non-negative X, Y we write X ≲ Y if X ≤ CY. We use the Japanese bracket notation ⟨ x ⟩ = (1+x^2)^1/2 for x ∈ℝ. The standard tensors on ℂ^2 are denoted by e_1 = [ 1; 0 ], e_2 = [ 0; 1 ], e_11 = e_1e_1^⊤ = [ 1 0; 0 0 ], e_22 = e_2e_2^⊤ = [ 0 0; 0 1 ]. Acknowledgments. The author would like to thank his Ph.D. advisor Jonas Lührmann for suggesting the problem and patiently checking the manuscript. The author is grateful to Andrew Comech, Wilhelm Schlag, Gigliola Staffilani, and Ebru Toprak for helpful discussions. § FREE MATRIX SCHRÖDINGER ESTIMATES In this section, we derive dispersive estimates for the free evolution semigroup e^itℋ_0. We recall that the free matrix Schrödinger operator ℋ_0 = [ -∂_x^2 + μ 0; 0 ∂_x^2 - μ ] has a purely continuous spectrum σ(ℋ_0) = σ_ac(ℋ_0) = (-∞,-μ] ∪ [μ,∞), and the resolvent operator of ℋ_0 is given by (ℋ_0 - λ)^-1 = [ R_0(λ - μ) 0; 0 -R_0(-λ-μ) ], λ∈ℂ∖ ((-∞,-μ] ∪ [μ,∞)), where R_0 is the resolvent operator for the one-dimensional Laplacian, with an integral kernel given by R_0(ζ^2)(x,y) := (-∂^2 - ζ^2)^-1(x,y) = -e^i ζ| x - y|/2i ζ, ζ∈ℂ_+, where ℂ_+ is the upper half-plane. We obtain from the scalar resolvent theory due to Agmon <cit.> that the limiting resolvent operators (ℋ_0 - (λ± i0) )^-1 = lim_ε↓ 0 (ℋ_0 - (λ± iε))^-1, λ∈ (-∞,-μ) ∪ (μ,∞), are well defined as operators from X_σ→ X_-σ for any σ > 1/2. Here, the matrix operator ℋ_0 is self-adjoint and Stone's formula applies: e^itℋ_0 = 1/2 π i∫_|λ|≥μ e^itλ[(ℋ_0 - (λ + i0))^-1 - (ℋ_0 - (λ - i0))^-1] dλ. Let us focus on the spectrum on the positive semi-axis [μ,∞), as the negative part can be treated using the symmetry properties of ℋ_0 (c.f. Remark <ref>). By invoking the change of variables λ↦λ = μ + z^2 with 0< z <∞, the kernel of e^itℋ_0P_s^+ is then given by e^itℋ_0P_s^+(x,y) = e^itμ/π i∫_0^∞ e^itz^2z [(ℋ_0 - (μ+z^2+ i0))^-1 - (ℋ_0 - (μ+z^2- i0))^-1](x,y) dz. Here, the notation P_s^+ means that we restrict the free evolution e^itℋ_0 to the positive semi-axis in the integral representation (<ref>). By (<ref>) and (<ref>), we have (ℋ_0 - (μ+z^2± i0))^-1(x,y) = [ ± ie^± i z | x - y |/2 z 0; 0 -e^-√(z^2+2μ)| x-y|/2√(z^2+2μ) ], 0 < z < ∞, and thus, e^itℋ_0P_s^+(x,y) = e^it μ/2π∫_ℝ e^it z^2 e^i z | x - y |e_11 dz. Note that the above integral is to be understood in the principal value sense, due to the pole in (<ref>). To this end, we recall the following standard stationary phase results. The first lemma is a direct consequence of the classic van der Corput lemma. Let r ∈ℝ, and let ψ(z) be a compactly supported smooth function. Then for any | t| > 0, |∫_ℝ e^itz^2 + i z rψ (z) dz |≤ C | t |^-1/2‖∂_zψ‖_L_z^1(ℝ). Moreover, if ψ(z) is supported away from zero, then for all | t | > 0, |∫_ℝ e^itz^2 + i z rψ (z) dz |≤ C | t |^-3/2‖ [∂_z^2 + i r ∂_z](ψ(z)/z) ‖_L_z^1(ℝ).
The bound (<ref>) follows from the van der Corput lemma (see, e.g., <cit.>) by observing that the phase ϕ(z) = z^2 + zr/t satisfies |∂_z^2 ϕ(z)| = 2>0. The last bound follows by first integrating by parts, ∫_ℝ e^itz^2e^i z rψ (z) dz = -1/2 i t∫_ℝ e^itz^2∂_z [e^iz rψ(z)/z] dz = -1/2 i t∫_ℝ e^itz^2 + iz r[ir + ∂_z][ψ (z)/z] dz, and then invoking the van der Corput lemma. We will also need the following sharper stationary phase lemma, which may be found in many texts on oscillatory integrals with a Fresnel phase. Let χ(z) be a smooth, non-negative, even cut-off function such that χ(z) = 1 for z ∈ [-1,1] and χ(z) = 0 for | z |≥ 2. For r, t ∈ℝ, define G_t(r) := ∫_ℝ e^itz^2+izrχ(z^2) dz. Then there exists C = C(‖χ(z^2)‖_W^4,1(ℝ)) >0 such that for any r ∈ℝ and for any | t | > 0, | G_t(r) - √(π)/√(-it) e^-ir^2/4t|≤ C | t |^-3/2⟨ r ⟩. Moreover, if r_1, r_2 ≥ 0, then | G_t(r_1+r_2) - √(π)/√(-it) e^-ir_1^2/4te^-ir_2^2/4t|≤ C | t |^-3/2⟨ r_1 ⟩⟨ r_2 ⟩. First, the phase ϕ(z) := z^2 + zr/t has a critical point at z_* = -r/(2t) ∈ℝ with ϕ”(z) = 2 > 0. We use the Taylor expansion of ϕ(z) and shift the integral by the change of variables z ↦ z + z_* to obtain G_t(r) = ∫_ℝ e^itϕ(z)χ(z^2) dz = ∫_ℝ e^it[ϕ(z_*)+ 1/2ϕ”(z_*)(z-z_*)^2]χ(z^2) dz = e^-ir^2/4t∫_ℝ e^itz^2χ((z+z_*)^2) dz. Using the Fourier transform of the free Schrödinger group and Plancherel's identity, we have ∫_ℝ e^itz^2χ((z+z_*)^2) dz = 1/√(-2 i t)∫_ℝ e^-iξ^2/4tℱ_z →ξ[χ((z+z_*)^2)](ξ) dξ = 1/√(-2 i t)∫_ℝℱ_z →ξ[χ((z+z_*)^2)](ξ) dξ + 1/√(-2 i t)∫_ℝ(e^-iξ^2/4t-1) ℱ_z →ξ[χ((z+z_*)^2)](ξ) dξ = √(2π)/√(-2it)χ(z_*^2) + 1/√(-2 i t)∫_ℝ(e^-iξ^2/4t-1) e^iz_*ξℱ[χ(z^2)](ξ) dξ. Using the bound | e^-iξ^2/4t-1|≤ C| t |^-1ξ^2 and Hölder's inequality, we bound the remainder term by |1/√(-2 i t)∫_ℝ(e^-iξ^2/4t-1) e^iz_*ξℱ[χ(z^2)](ξ) dξ| ≤ C | t|^-3/2∫_ℝ|ξ^2 ℱ[χ(z^2)](ξ) | dξ ≤ C | t |^-3/2‖χ(z^2)‖_W^4,1(ℝ)≤ C | t |^-3/2. Next, we use the fact that | 1 - χ(z^2) |≤ C | z | for all z ∈ℝ and for some C>0 large enough, so that | 1-χ(z_*^2)|≤ C | z_* |≤ C | t |^-1⟨ r ⟩. Then (<ref>) follows from (<ref>)–(<ref>). Finally, we use the estimate (<ref>) to obtain | G_t(r_1+r_2) - √(π)/√(-it) e^-i(r_1+r_2)^2/4t|≤ C | t |^-3/2⟨ r_1 + r_2 ⟩≤ C | t |^-3/2⟨ r_1 ⟩⟨ r_2 ⟩. Thus, by the triangle inequality and the bound | e^-i(r_1+r_2)^2/4t - e^-ir_1^2/4te^-ir_2^2/4t| = | e^-ir_1^2/4te^-ir_2^2/4t|| e^-ir_1 r_2/2t - 1 |≤ C | t |^-1⟨ r_1 ⟩⟨ r_2 ⟩, we conclude (<ref>). Next, we prove the analogue of Theorem <ref> for the free evolution. We emphasize that the free matrix Schrödinger operator ℋ_0 has threshold resonances ℋ_0 e_1 = μe_1 and ℋ_0 e_2 = -μe_2. For any u⃗ = (u_1,u_2) ∈𝒮(ℝ) ×𝒮(ℝ) and for any | t|≥ 1, we have ‖ e^itℋ_0 P_s^+ u⃗ ‖_L_x^∞× L_x^∞≲| t |^-1/2‖u⃗ ‖_L_x^1 × L_x^1, and ‖⟨ x ⟩^-1( e^itℋ_0P_s^+ - F_t^0)u⃗ ‖_L_x^∞× L_x^∞≲| t |^-3/2‖⟨ x ⟩u⃗ ‖_L_x^1 × L_x^1, where F_t^0(x,y) := e^it μ/√(-4π i t)e^-ix^2/4te_1e^-iy^2/4te_1^⊤. We first begin by splitting the evolution operator into low and high energy parts[Symbols like χ(ℋ_0 - μ I) are only used in a formal way to represent the cut-off χ(z^2) in the z-integrals, where they arise.]: e^itℋ_0P_s^+(x,y) = e^itℋ_0χ(ℋ_0 - μ I)P_s^+(x,y) + e^itℋ_0(1-χ(ℋ_0 - μ I))P_s^+(x,y) = e^itμ/2π∫_ℝ e^itz^2+iz| x - y |χ(z^2) dz e_11 + e^itμ/2π∫_ℝ e^itz^2+iz| x - y | (1-χ(z^2)) dz e_11, where χ(z) is a standard smooth, even, non-negative cut-off function satisfying χ(z) = 1 for | z |≤ 1 and χ(z) = 0 for | z |≥ 2. In the high energy part in (<ref>), following the ideas from <cit.>, <cit.>, we prove the estimate |∫_ℝ e^itz^2+iz| x - y | (1-χ(z^2)) dz |≲min{| t |^-1/2, | t |^-3/2⟨ x ⟩⟨ y ⟩}.
For a more rigorous treatment, we instead use a truncated cutoff χ_L(z) = (1-χ(z^2))χ(z/L), where L ≥ 1, and we prove the uniform estimate sup_L ≥ 1|∫_ℝ e^itz^2 + iz| x - y |χ_L(z) dz |≤ C min{| t |^-1/2, | t |^-3/2⟨ x ⟩⟨ y ⟩}, with a constant C>0 independent of L. This estimate will imply (<ref>). Indeed, for any | t | >0, by Plancherel's identity, we have sup_a ∈ℝ|∫_ℝ e^itz^2+iazχ_L(z) dz | = sup_a ∈ℝ|∫_ℝℱ^-1[e^itz^2+iaz](ξ) ℱ[χ_L(z)](ξ) dξ|≤ C| t |^-1/2‖ℱ[χ_L]‖_L_ξ^1(ℝ). Here, we use that the Fourier transform of the tempered distribution e^itz^2+iaz has | t |^-1/2 decay. Using the definition of χ_L, the scaling properties of the Fourier transform, and Young's convolution inequality, we obtain ‖ℱ[χ_L]‖_L_ξ^1(ℝ) ≤‖ℱ[χ(z/L)]‖_L_ξ^1(ℝ) + ‖ℱ[χ(z/L)]‖_L_ξ^1(ℝ)‖ℱ[χ(z^2)]‖_L_ξ^1(ℝ) ≤ C ‖ L ℱ[χ](Lξ)‖_L_ξ^1(ℝ) = C ‖ℱ[χ](ξ)‖_L_ξ^1(ℝ)≤ C ‖χ‖_W^2,1(ℝ)≲ 1. For the high-energy weighted dispersive estimate, we use integration by parts to find that |∫_ℝ e^itz^2e^iz | x - y |χ_L(z) dz |≤ C | t |^-1|∫_ℝ e^itz^2∂_z( e^iz| x - y | z^-1χ_L(z) ) dz|. When the derivative falls onto e^iz | x - y |, the weights ⟨ x ⟩⟨ y ⟩ appear, whereas the term z^-1χ_L(z) is smooth since χ_L is compactly supported away from the interval [-1,1]. By following the previous argument, we conclude the O(| t |^-3/2⟨ x ⟩⟨ y ⟩) bound for (<ref>) in the high-energy regime. Next we turn to the low-energy estimates. For the low-energy unweighted estimate, we employ Lemma <ref> to obtain |∫_ℝ e^itz^2+iz| x -y |χ(z^2) dz |≤ C | t |^-1/2‖∂_z χ(z^2) ‖_L^1(ℝ)≤ C | t |^-1/2. On the other hand, for the low-energy weighted estimate, we observe that by Lemma <ref>, |∫_ℝ e^itz^2+iz| x -y |χ(z^2) dz - √(2π)/√(-2it) e^-ix^2/4te^-iy^2/4t|≤ C | t |^-3/2⟨ x ⟩⟨ y ⟩. Hence, using that e_11 = e_1e_1^⊤, we arrive at the kernel estimate | e^itℋ_0χ(ℋ_0 - μ I)P_s^+(x,y) - F_t^0(x,y) |≤ C | t |^-3/2⟨ x ⟩⟨ y ⟩, where F_t^0 is given by (<ref>). Thus, by combining the high energy bounds (<ref>) and the low energy bounds (<ref>)–(<ref>), we conclude the dispersive estimates (<ref>) and (<ref>). § SYMMETRIC RESOLVENT IDENTITY By assumption (A1), we can factorize the matrix potential 𝒱 = -σ_3 v v = v_1 v_2, with v_1 = -σ_3 v := [ -a -b; b a ], v_2 = v := [ a b; b a ], where a := 1/2(√(V_1+V_2) + √(V_1 - V_2)) and b := 1/2(√(V_1+V_2) - √(V_1 - V_2)). It will be helpful in later sections to keep in mind that V_1 = a^2 + b^2, V_2 = 2ab. We denote the resolvent of ℋ = ℋ_0 + 𝒱 by (ℋ-z)^-1 for z ∈ρ(ℋ). The resolvent identity states that (ℋ - z)^-1 = (I+(ℋ_0 -z)^-1𝒱)^-1(ℋ_0-z)^-1, ∀ z ∈ρ(ℋ_0) ∩ρ(ℋ). This identity was used in <cit.> to establish that there is a limiting absorption principle for the resolvent of ℋ on the semi-axes (-∞,-μ)∪ (μ,∞) in the weighted L^2-spaces X_σ→ X_-σ, σ>1/2. Note that the lemma below applies in any spatial dimension. (<cit.>, see also the proof in <cit.>) Suppose assumptions (A1) – (A4) hold. Then, the following holds. * For σ > 1/2 and |λ| > μ, the operator (ℋ_0 - (λ± i0))^-1𝒱: X_-σ→ X_-σ is compact and I + (ℋ_0 - (λ± i0))^-1𝒱 is boundedly invertible on X_-σ. * For σ>1/2 and λ_0>μ arbitrary, we have sup_|λ|≥λ_0, ε > 0|λ|^1/2‖(ℋ - (λ± iε))^-1‖_X_σ→ X_-σ<∞. * For |λ| > μ, define (ℋ - (λ± i0) )^-1 := (I+ (ℋ_0 - (λ± i0))^-1𝒱)^-1(ℋ_0 -(λ± i0) )^-1. Then, as ε↘ 0, ‖(ℋ - (λ± iε) )^-1 - (ℋ - (λ± i0) )^-1‖_X_σ→ X_-σ⟶ 0 for any σ > 1/2. We recall the following spectral representation of e^itℋ from <cit.>.
(<cit.>) Under assumptions (A1) – (A6), there is the representation e^it= 1/2π i∫_|λ|≥μ e^itλ[( - (λ+i0))^-1 - ( -(λ - i0))^-1] λ + ∑_j e^itP_z_j, where the sum runs over the entire discrete spectrum and P_z_j is the Riesz projection corresponding to the eigenvalue z_j. The formula (<ref>) and the convergence of the integral are to be understood in the sense that if ϕ,ψ∈ [W^2,2() × W^2,2()] ∩ [⟨ x⟩^-1- L^2() ×⟨ x⟩^-1- L^2()], then ⟨ e^itϕ,ψ⟩ = lim_R →∞1/2π i∫_R ≥|λ|≥μ e^itλ⟨[(-(λ+i0))^-1-( - (λ - i0))^-1]ϕ,ψ⟩ λ + ∑_j⟨ e^itP_z_jϕ,ψ⟩ , for all t ∈. We write P_s = P_s^+ + P_s^-, where the signs ± refer to the positive and negative halves of the essential spectrum (-∞,-μ]∪ [μ,∞). In the following sections, we will focus on the analysis on the positive semi-axis part of the essential spectrum. We can treat the negative semi-axis of the essential spectrum by taking advantage of the symmetry properties of , see Remark <ref> below. In view of the spectral representation of e^it from Lemma <ref>, we use the change of variables λ↦λ = μ+z^2 with 0<z<∞ to write e^itP_s^+ = e^itμ/π i∫_0^∞ e^itz^2 z [( - (μ + z^2 + i0))^-1 - ( - (μ + z^2 - i0))^-1] z. For the upcoming dispersive estimates, it is convenient to first open up the domain of integration for the above integral to the entire real line by means of analytic continuation for the perturbed resolvent. Following the framework of Section 5 in <cit.>, we introduce the operator (z) := ( - (μ + z^2 + i0))^-1, for z>0, (z) := ( - (μ + z^2 - i0))^-1 = ( - (μ + z^2 + i0))^-1, for z<0, so that e^itP_s^+ = e^itμ/π i∫_ e^itz^2 z (z) z. Here, the integral should be understood in the principal value sense due to the pole associated with the resolvent (z) at the origin. We also set _0(z) := (_0 - (μ + z^2 + i0))^-1, for z>0, _0(z) := (_0 - (μ + z^2 + i0))^-1, for z<0. In particular, with this definition, we have by (<ref>) for all z ∈∖{0} that _0(z)(x,y) = (_0 - (μ + z^2 + i0))^-1(x,y) = [ ie^ i z | x - y |/2 z 0; 0 -e^-√(z^2+2μ)| x-y|/2√(z^2+2μ) ]. As in <cit.>, we employ the symmetric resolvent identity (z) = _0(z) - _0(z)v_1(M(z))^-1v_2_0(z), where M(z) = I + v_2 _0(z)v_1, z ∈∖{0}. By inserting the above identity, one checks that e^itP_s^+ = e^itμ/π i∫_ e^itz^2zℛ_0(z) z - e^itμ/π i∫_ e^itz^2zℛ_0(z)v_1 (M(z))^-1v_2ℛ_0(z) z. In the next section, we will investigate the invertibility of the matrix operator M(z) near the origin. We give the following remark for the evolution operator in the negative part of the essential spectrum. Using the identities = -σ_1 σ_1, = -σ_1 σ_1, we infer that e^itP_s^- = σ_1 e^-itP_s^+σ_1. Furthermore, since these identities also hold for _0, the analogue of Proposition <ref> for the weighted estimate of the free evolution e^it_0P_s^- is given by ‖⟨ x ⟩^-1( e^it_0P_s^- - F_t^0) ‖_L_x^∞≤ C | t |^-3/2‖⟨ x ⟩ ‖_L_x^1, | t |≥ 1, where F_t^0(x,y) := e^-it μ/√(4π i t)e^i x ^2/4te_2e^i y^2/4te_2^⊤. Note that F_t^0 = σ_1 F_-t^0 σ_1. § LAURENT EXPANSION OF THE RESOLVENT NEAR THE THRESHOLD In this section we study asymptotic expansions of the perturbed resolvent operators near the thresholds of the essential spectrum, closely following the framework of the seminal paper <cit.> for the scalar Schrödinger operators H = -∂_x^2 + V on the real line. As specified in the introduction, we are interested in the irregular case, where the matrix Schrödinger operator exhibits a threshold resonance. See Definition <ref> for a precise definition. This means that there exist globally bounded non-trivial solutions of Ψ = ±μΨ. 
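Before developing the threshold expansions, we record that the symmetric resolvent identity used above is purely algebraic, so it can be sanity-checked in finite dimensions. The following numpy sketch (an illustration of ours, with random matrices standing in for ℋ_0, v_1, v_2) verifies ℛ(z) = ℛ_0(z) - ℛ_0(z)v_1M(z)^-1v_2ℛ_0(z) with M(z) = I + v_2ℛ_0(z)v_1.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
H0 = rng.standard_normal((n, n)); H0 = (H0 + H0.T)/2   # "free" self-adjoint part
v1 = rng.standard_normal((n, n))
v2 = rng.standard_normal((n, n))                       # potential V = v1 v2
z = 0.7 + 0.5j                                         # spectral parameter off the real axis

R0 = np.linalg.inv(H0 - z*np.eye(n))
M  = np.eye(n) + v2 @ R0 @ v1
R  = R0 - R0 @ v1 @ np.linalg.inv(M) @ v2 @ R0

exact = np.linalg.inv(H0 + v1 @ v2 - z*np.eye(n))
assert np.linalg.norm(R - exact) < 1e-10
print("symmetric resolvent identity verified in finite dimensions")
```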
In this context, we mention that the threshold regularity can also be characterized by the scattering theory introduced by <cit.>; see Lemma 5.20 of <cit.>. We begin with the terminology used in <cit.>. We say an operator A L^2()× L^2() → L^2()× L^2() with an integral kernel A(x,y)∈^2 × 2 is absolutely bounded if the operator with the kernel | A(x,y) | := (| A(x,y)_i,j|)_i,j=1^2∈^2 × 2 is bounded from L^2()× L^2() → L^2()× L^2(). In particular, Hilbert-Schmidt and finite rank operators are absolutely bounded. To investigate the asymptotic expansions of the operator M(z) (c.f. (<ref>)), we start with the following Taylor expansions of the free resolvent around the origin z=0. Let z_0 := min{1,√(2μ)}. For any 0 < | z| < z_0, we have the following expansion _0(z)(x,y) = i/2ze_11 + _0(x,y) + z_1(x,y) + E(z)(x,y) where _0(x,y) := [ - | x - y |/2 0; 0 - e^-√( 2μ)| x - y |/2√(2μ) ], _1(x,y) := [ | x - y |^2/4i 0; 0 0 ], and E(z) is an error term which satisfies the estimate | z|^k |∂_z^k E(z)(x,y)|≤ C_μ,k | z |^2 ⟨ x ⟩^3+k⟨ y ⟩^3+k, ∀ k=0,1,2, for any | z | < z_0. Recall from (<ref>) that _0(z)(x,y) = [ ie^i z | x - y |/2 z 0; 0 -e^-√(z^2+2μ)| x-y|/2√(z^2+2μ) ]. For 0 < | z | < 1, we have the Laurent expansion i e^i z | x -y |/2 z = i/2 z + -| x - y |/2 + | x - y |^2/4iz + r_1(z,| x - y |), where the remainder term is r_1(z,| x - y|) := i/2z_1(z,| x - y|), _1(z,| x - y|) := (iz| x - y |)^3/2!∫_0^1 e^isz | x - y | (1-s)^2 s. By direct computation, for any x, y ∈ and for any | z | <1, we have the estimate | z |^k |∂_z^k r_1(z,| x - y |) |≲| z |^2 ⟨ x⟩^3+k⟨ y ⟩^3+k, k=0,1,2. In the lower component of the resolvent kernel, for | z | < 2μ, we have the Taylor expansion -e^-√(z^2+2μ)| x-y|/2√(z^2+2μ) = -e^-√(2μ)| x - y|/2√(2μ) + r_2(z,| x - y |), where we denote the remainder term by r_2(z,| x - y |) := z^2/2!∫_0^1 (1-s) (∂_z^2 g_μ)(sz,| x - y |) s, g_μ(z,| x - y |) := -e^-√(z^2+2μ)| x-y|/2√(z^2+2μ). Using the fact that for any η∈, ⟨η⟩ := (1+η^2)^1/2, one has the bounds |∂_η^k ⟨η⟩^-1|≤ C_k ⟨η⟩^-1-k |∂_η^k ⟨η⟩|≤ C_k ⟨η⟩^1-k, k =0,1,2,…, it follows that all derivatives of √(z^2+2μ) and 2(z^2+2μ)^-1/2 are uniformly bounded in z up to a constant depending only on μ and the number of derivatives. Therefore, by the Leibniz formula, we have the estimate sup_z ∈|∂_z^k g_μ(z,| x- y |) |≤ C_μ,k⟨ x ⟩^k ⟨ y ⟩^k, k=0,1,…,4, which in turn implies that | z |^k|∂_z^k r_2(z,| x - y |) |≲| z |^2 ⟨ x ⟩^2+k⟨ y ⟩^2+k, k=0,1,2. Thus, by using (<ref>) and (<ref>), the error term given by E(z)(x,y) := [ r_1(z, | x - y |) 0; 0 r_2(z,| x - y |) ] satisfies (<ref>) as claimed. We insert the above asymptotic expansion into the operator M(z) = I + v_2_0(z)v_1. First, we have the transfer operator T on L^2() × L^2() with a kernel given by T(x,y) = I + v_2(x) _0(x,y) v_1(y). Note that T is self-adjoint because (v_2_0v_1)^* = v_1^*_0 v_2 = (-vσ_3)_0v = v_0(-σ_3v) = v_2_0v_1. Since the potentials v_1 and v_2 have exponential decay by assumption (A3), it follows that v_2_0 v_1 is a Hilbert-Schmidt operator on L^2() × L^2(). Hence, T is a compact perturbation of the identity, and therefore the dimension of (T) is finite by the Fredholm alternative. Recalling the formulas for v_1 and v_2 from (<ref>), we have the identity v_2 e_11 v_1 = -[ a 0; b 0 ][ a b; 0 0 ] = - [ a; b ][ a b ]. Next, we define the orthogonal projection onto the span of the vector (a,b)^⊤∈ L^2() × L^2() by P[ f_1; f_2 ](x) := ∫_( a(y)f_1(y) + b(y)f_2(y)) y/‖ a^2 + b^2 ‖_L^1()[ a(x); b(x) ] = 1/‖ V_1 ‖_L^1()⟨ (a,b)^⊤, f⃗ ⟩[ a(x); b(x) ]. 
Note that we use the identity (<ref>) above. From (<ref>), the contribution of the singular term i/2ze_11 of _0(z) to M(z) will be associated to the following integral operator with the kernel i/2zv_2(x)e_11v_1(y) = - i/2z[ a(x); b(x) ][ a(y) b(y) ] =: g(z)P(x,y), where g(z) := -i/2z‖ V_1 ‖_L^1(). Lastly, we denote the orthogonal projection to the complement of the span of (a,b)^⊤ by Q := I - P. In summary, we have the following proposition. Suppose | a(x) |, | b(x) |≲⟨ x ⟩^-5.5-, and let z_0 := min{1,√(2μ)}. Then, for any 0<| z | < z_0, we have M(z) = g(z)P + T + zM_1 + _2(z), where M_1 and _2(z) are Hilbert-Schmidt operators on L^2() × L^2() defined by M_1(x,y) := v_2(x)_1(x,y)v_1(y) = | x - y |^2/4i[ a(x); b(x) ][ a(y) b(y) ], _2(z)(x,y) := v_2(x)E(z)(x,y)v_1(y), with G_1 and E(z) defined in Lemma <ref>. Moreover, the error term _2(z) and its derivatives satisfy the absolute bound | z |^k ‖|∂_z^k _2(z) |‖_L^2() × L^2() → L^2() × L^2()≲| z |^2, k =0,1,2, for all | z | < z_0. The identity on the right of (<ref>) follows from (<ref>). We recall that operators of the following type U(x)⟨ x⟩^k⟨ y⟩^k W(y) are Hilbert-Schmidt operators on L^2() whenever U and W are smooth potentials with polynomial decay | U(x) |, | W(x) |≲⟨ x ⟩^-k-1/2-, for k ∈. Hence, under the assumptions on a(x) and b(x), and using the fact that |_1(x,y)|≲| x -y |^2≤⟨ x ⟩^2⟨ y ⟩^2, it follows that M_1 is Hilbert-Schmidt. The same argument can be applied to the error term _2(z) and its derivatives using the remainder estimates in (<ref>) and we are done. The next definition characterizes the regularity of the endpoint μ of the essential spectrum. * We say that the threshold μ is a regular point of the spectrum of provided that the operator QTQ is invertible on the subspace Q(L^2() × L^2()). * Suppose μ is not a regular point. Let S_1 be the Riesz projection onto the kernel of QTQ, and we define D_0 = (Q(T+S_1)Q)^-1. Note that QD_0Q is an absolutely bounded operator on L^2()× L^2(). The proof for this follows from Lemma 8 of <cit.> with minor changes. See also <cit.>. Note that since we impose symmetry assumptions on the potential , the thresholds μ and -μ are either both regular or irregular. The invertibility of QTQ is related to the absence of distributional L^∞() × L^∞() solutions to Ψ = μΨ. The following lemma establishes the equivalent definitions. See <cit.> for the analogue in the scalar case. Suppose assumptions (A1) – (A5) hold. Then the following holds. * Let Φ∈ S_1(L^2() × L^2()) ∖{0}. If Ψ = (Ψ_1,Ψ_2)^⊤ is defined by Ψ(x) := -_0[ v_1 Φ](x) + c_0 e_1, with c_0 = ⟨(a,b)^⊤, TΦ⟩/‖ V_1 ‖_L^1(), then Φ = v_2 Ψ, and Ψ∈ L^∞() × L^∞() is a distributional solution to Ψ = μΨ. Furthermore, if additionally assumption (A6) holds, i.e., c_2,± := 1/2√(2μ)∫_ e^±√(2μ)y(V_2(y) Ψ_1(y) + V_1(y)Ψ_2(y)) y = 0, then lim_x →±∞Ψ_1(x) = c_0 ∓ c_1, where c_1 := 1/2⟨ x(a(x),b(x))^⊤,Φ(x)⟩ = 1/2∫_ x ( a(x)Φ_1(x) + b(x) Φ_2(x)) x. In particular, Ψ_1 ∉ L^2(). More precisely, the constants c_0 and c_1 cannot both be zero. * Conversely, suppose there exists Ψ∈ L^∞() × L^∞() satisfying (<ref>) in the distributional sense. Then Φ = v_2 Ψ∈ S_1 (L^2() × L^2()). * Suppose assumptions (A1) – (A6) hold. Then, S_1(L^2() × L^2()) ≤ 1. 
In the case S_1(L^2() × L^2()) =1, i.e., S_1(L^2()× L^2()) = {Φ} for some Φ = (Φ_1,Φ_2)^⊤∈ L^2() × L^2() ∖{0}, we have the following identities S_1 T P T S_1 = | c_0| ^2 ‖Φ‖_L^2()× L^2()^-2‖ V_1 ‖_L^1() S_1, PTS_1TP = | c_0|^2 ‖Φ‖_L^2()× L^2()^-2‖ V_1 ‖_L^1()P, S_1M_1S_1 = -2i | c_1 |^2‖Φ‖_L^2()× L^2()^-2 S_1, where the constants c_0 and c_1 are given by (<ref>) and (<ref>) respectively for this Φ. Let Φ = (Φ_1,Φ_2) ∈ S_1(L^2() × L^2()) with Φ≠ 0. Since S_1(L^2() × L^2()) is a subspace of Q(L^2() × L^2()), we have QΦ = Φ. Using the fact that Φ∈(QTQ) and the definition of T (c.f (<ref>)), we obtain 0 = QTQΦ = (I- P)TΦ = (I+v_2 _0 v_1)Φ - PTΦ. Since (a,b)^⊤ = v_2 e_1 and P is the orthogonal projection onto the span of (a,b)^⊤, we have PTΦ = ⟨ (a,b)^⊤ , TΦ⟩/‖ V_1 ‖_L^1()(a,b)^⊤ = c_0 v_2 e_1, with c_0 defined in (<ref>). It follows that Φ = -v_2_0v_1 Φ + c_0 v_2 e_1 = v_2(-_0v_1 Φ + c_0 e_1) = v_2 Ψ. This proves (<ref>). Next, we show (<ref>). Denoting Φ = (Φ_1,Φ_2)^⊤ and using the definition of 𝒢_0 (c.f. (<ref>)), we have (_0 - μ I)_0 (v_1Φ) = v_1 Φ , i.e., (-∂_x^2)∫_- | x - y |/2(-a(y)Φ_1(y) - b(y)Φ_2(y)) y = -a(x)Φ_1(x) - b(x)Φ_2(x), (∂_x^2 - 2μ) ∫_-e^-√( 2μ)| x - y |/2√(2μ)(b(y)Φ_1(y) + a(y)Φ_2(y)) y = b(x)Φ_1(x) + a(x)Φ_2(x). This equation is well-defined, since v_1Φ∈⟨ x ⟩^-1- L^1() ×⟨ x ⟩^-1- L^1(). Using (<ref>), (<ref>), and (H_0 - μ I)(c_0 e_1) = 0, we have (_0 - μ I)Ψ = (H_0 - μ I)[-_0 (v_1 Φ)+c_0 e_1] = - v_1 Φ = -v_1 v_2 Ψ = -Ψ, which implies (<ref>). We now show that Ψ = (Ψ_1,Ψ_2)^⊤ is in L^∞() × L^∞(). Noting that Ψ_1(x) = c_0 + 1/2∫_| x - y |(a(y)Φ_1(y) + b(y) Φ_2(y)) y, by employing the orthogonality condition ⟨ (a,b)^⊤,Φ⟩ = 0, we have Ψ_1(x) = c_0 + 1/2∫_ (| x - y | - | x |) (a(y)Φ_1(y) + b(y) Φ_2(y)) y. Using || x - y | - | x ||≤| y | and | a(y) | + | b(y) |≲⟨ y ⟩^-2, we have the uniform bound sup_x ∈|Ψ_1(x) |≤| c_0 | + 1/2∫| y || a(y)Φ_1(y) + b(y) Φ_2(y)| y ≲‖Φ‖_L^2()× L^2()≲ 1. Since (a,b)^⊤ and Φ are in L^2() × L^2(), we have the uniform bound on Ψ_2 by the Cauchy-Schwarz inequality sup_x ∈|Ψ_2(x) |≲∫_| b(y)Φ_1(y) + a(y)Φ_2(y) | y ≤‖ b ‖_L^2()‖Φ_1 ‖_L^2() + ‖ a ‖_L^2()‖Φ_2 ‖_L^2()≲ 1. Thus, we have shown that Ψ =(Ψ_1,Ψ_2)^⊤∈ L^∞() × L^∞(). Finally, we now assume c_2,± = 0 and show that Ψ_1 cannot be in L^2() ∖{0} by a Volterra argument. Using ⟨ (a,b)^⊤,Φ⟩ = 0, for x ≥ 0 large, we write Ψ_1(x) = c_0 - c_1 + ∫_x^∞ (y-x) (a(y)Φ_1(y) + b(y) Φ_2(y)) y. Using c_2,± = 0, we insert -e^-√(2μ)xc_2,+ = 0 to write Ψ_2(x) = 1/2√(2μ)∫_x^∞(e^-√(2μ)(y-x)-e^-√(2μ)(x-y)) (V_2(y)Ψ_1(y) + V_1(y)Ψ_2(y)) y. Similarly, for x<0, using e^√(2μ x)c_2,-=0, we have Ψ_1(x) = c_0 + c_1 +∫_-∞^x (x-y)(V_1(y) Ψ_1(y) +V_2(y) Ψ_2(y)) y, Ψ_2(x) = 1/2√(2μ)∫_-∞^x (e^-√(2μ)(x-y)-e^-√(2μ)(y-x)) (V_2(y)Ψ_1(y) + V_1(y)Ψ_2(y)) y. Suppose now that c_0 = c_1 = 0. Owing to the exponential decay of V_1, V_2 by assumption (A3), we obtain from (<ref>) and (<ref>) a homogeneous Volterra equation for Ψ=(Ψ_1,Ψ_2)^⊤ satisfying Ψ(x) = ∫_ K(x,y) Ψ(y) y, x ≥ 0, where | K(x,y) |≲ e^-γ| y |1_y > x for some 0< γ < β, which is a quasi-nilpotent operator. By performing a standard contraction on L^∞(M,∞), with M>0 sufficiently large, one arrives at a solution Ψ(x) ≡ 0 for all x ≥ M. By the uniqueness theorem for ODEs, this implies that Ψ≡ 0 on . Then, by the relation Φ = v_2 Ψ and the fact that v_2 is a positive matrix, one finds that Φ≡ 0, which contradicts the hypothesis Φ≠ 0. Thus, the conclusion is that c_0 and c_1 cannot be both zero. In particular, it follows from (<ref>) and (<ref>) that lim_x →±∞Ψ_1(x) = c_0 ∓ c_1. 
Since either c_0+c_1 ≠ 0 or c_0 - c_1 ≠ 0, we conclude that Ψ_1 ∉L^2(). Proof of (2). Define Φ = v_2 Ψ. Since Ψ is a distributional solution to (<ref>), using = v_1v_2, we have (_0 - μ I)Ψ = v_1 Φ⟺Ψ_1” = a Φ_1 + bΦ_2, Ψ_2” - 2μΨ_2 = b Φ_1 + a Φ_2. Let η∈ C_0^∞() be a non-negative function satisfying η(x) = 1 for | x |≤1 and η(x) = 0 for | x |≥ 2. Using the first equation from above and integrating by parts, we have for any >0, |∫_(a(y)Φ_1(y) + b(y)Φ_2(y)) η( y) y | = |∫_Ψ_1”(y) η( y) y | = |∫_Ψ_1(y) ^2 η”( y) y |≤‖Ψ_1 ‖_L^∞()∫_|η”(x) | x. By taking the limit → 0 and using the Lebesgue dominated convergence theorem, we find that ⟨ (a,b)^⊤, Φ⟩ = 0. Thus, PΦ = 0, i.e. Φ∈ Q(L^2() × L^2()). Following this fact and using Φ = v_2 Ψ, we have QTQΦ = QTΦ = Q(I+v_2_0v_1)Φ = Qv_2(Ψ +_0(Ψ)). Now set u := Ψ + _0(Ψ). Since u = (u_1,u_2)^⊤ is a distributional solution of (_0 - μ I)u = 0, i.e. -u_1” = 0, u_2” - 2μ u_2 = 0, we find that u_1(x) = κ_1 + κ_2x, u_2(x) = κ_3e^-√(2μ)x + κ_4e^√(2μ)x, for some κ_i ∈, i ∈{1,…,4}. By similar arguments from Item (1), we obtain that _0(Ψ) ∈ L^∞() × L^∞(). Since Ψ∈ L^∞() × L^∞(), it follows that u ∈ L^∞() × L^∞(), which implies that κ_2 = κ_3 = κ_4 = 0. Thus, we have u(x) ≡ (κ_1,0)^⊤ = κ_1e_1. Since Qv_2 e_1 = 0, we conclude from (<ref>) using the definition of u(x) that QTQΦ = 0, whence Φ∈ S_1(L^2() × L^2()). Proof of (3). Suppose there are two linearly independent Φ,∈ S_1(L^2() × L^2()). As in the proof of Item (1), for x ≥ 0, we have Ψ_1(x) = c_0 - c_1 + ∫_x^∞ (y-x) (V_1(y)Ψ_1(y) + V_2(y) Ψ_2(y)) y, Ψ_2(x) = 1/2√(2μ)∫_x^∞(e^-√(2μ)(y-x)-e^-√(2μ)(x-y)) (V_2(y)Ψ_1(y) + V_1(y)Ψ_2(y)) y, and Ψ_1(x) = d_0 - d_1 + ∫_x^∞ (y-x) (V_1(y)Ψ_1(y) + V_2(y) Ψ_2(y)) y, Ψ_2(x) = 1/2√(2μ)∫_x^∞(e^-√(2μ)(y-x)-e^-√(2μ)(x-y)) (V_2(y)Ψ_1(y) + V_1(y)Ψ_2(y)) y, where d_0 and d_1 are constants defined from which are analogous to c_0 and c_1. There is some constant θ∈ such that c_0 - c_1 = -θ (d_0 - d_1), which imply the Volterra integral equation [ Ψ_1 + θΨ_1; Ψ_2 + θΨ_2 ](x) = ∫_x^∞[ y - x 0; 0 e^-√(2μ)(y-x)-e^-√(2μ)(x-y)/2√(2μ) ](y) [ Ψ_1(y) + θΨ_1(y); Ψ_2(y) + θΨ_2(y) ]dy, for any x ≥ 0. By the same Volterra equation argument used in Item (1), we obtain Ψ+ θΨ≡ 0, which implies that Φ + θΦ≡ 0, but this contradicts that Φ and Φ are linearly independent. Thus, we have shown that S_1(L^2() × L^2()) ≤ 1. Next, we prove (<ref>)–(<ref>). Write S_1 = ‖Φ‖_L^2 × L^2^-2⟨Φ,·⟩Φ. By (<ref>) and the fact that P, S_1, and T are self-adjoint, we compute for any u ∈ L^2() × L^2() that S_1 T P T S_1 u = ‖Φ‖_L^2 × L^2^-2⟨Φ,u ⟩ S_1 T P T Φ = ‖Φ‖_L^2 × L^2^-2c_0⟨Φ,u⟩ S_1T [ a; b ] = | c_0 |^2 ‖Φ‖_L^2 × L^2^-2‖ V_1 ‖_L^1()S_1 u. A similar computation reveals PTS_1TPu = | c_0|^2 ‖Φ‖_L^2 × L^2^-2‖ V_1 ‖_L^1()Pu. For the third identity (<ref>), in view of (<ref>) and (<ref>), we write M_1(x,y) = v_2(x)G_1(x,y)v_1(y) = i| x - y |^2/4[ a(x); b(x) ][ a(y) b(y) ]. By using the orthogonality ⟨Φ,(a,b)^⊤⟩ = ∫_(Φ_1(x)a(x) + Φ_2(x)b(x)) x = 0, and the identity | x - y |^2 = x^2 + y^2 - 2xy, we have [S_1M_1S_1](x,y) = ∫_^2 S_1(x,x_1)M_1(x_1,y_1)S_1(y_1,y) x_1 y_1 = i/4Φ(x)/‖Φ‖_L^2 × L^2^2∫_^2( | x_1 - y_1 |^2 Φ^*(x_1)[ a(x_1); b(x_1) ][ a(y_1) b(y_1) ]Φ(y_1)) x_1 y_1 Φ^*(y)/‖Φ‖_L^2 × L^2^2 = -2i (∫_x_1/2Φ^*(x_1)[ a(x_1); b(x_1) ] x_1) (∫_y_12[ a(y_1) b(y_1) ]Φ(y_1) y_1) ‖Φ‖_L^2 × L^2^-2S_1(x,y) = -2i | c_1 |^2 ‖Φ‖_L^2 × L^2^-2 S_1(x,y). This proves (<ref>) and we are done. 
By direct computation, the conjugation identity σ_3 = ^* σ_3 and the identity v_1 = -σ_3 v_2 imply that the vector Ψ := σ_3 Ψ solves ^* Ψ = μΨ, where Ψ is the distribution solution to (<ref>). Moreover, one has the identities σ_3 Ψ = _0 (v_2 Φ) + (c_0,0)^⊤, Φ = v_2 Ψ = -v_1^⊤Ψ Similarly, using the conjugation identity σ_1 = - σ_1, we note that the vector Ψ_- = σ_1 Ψ solves the system Ψ_- = -μΨ_-. Following the preceding discussion, we assume the threshold μ is irregular and we derive an expansion for the inverse operator M(z)^-1 on a small punctured disk near the origin. We employ the inversion lemma due to Jensen and Nenciu <cit.>. Let H be a Hilbert space, let A be a closed operator and S a projection. Suppose A+S has a bounded inverse. Then A has a bounded inverse if and only if B = S - S(A+S)^-1S has a bounded inverse in SH, and in this case, A^-1 = (A+S)^-1 + (A+S)^-1SB^-1S(A+S)^-1, on H. We will now state the inverse operator of M(z) away from z=0. Suppose assumptions (A1) – (A6) hold. Let S_1(L^2() × L^2()) = ({Φ}) for some Φ = (Φ_1,Φ_2)^⊤≠0⃗. Let κ := (2i)^-1‖ V_1 ‖_L^1(), and let d be the constant defined by d := -2i(| c_0 |^2 + | c_1 |^2) ‖Φ‖_L^2 × L^2^-2≠ 0, with c_0 and c_1 defined by (<ref>) and (<ref>) respectively for this Φ. Then, there exists a positive radius z_0>0 such that for all 0 < | z| < z_0, M(z) is invertible on L^2() × L^2() and M(z)^-1 = 1/d(1/zS_1 - 1/κ PTS_1 - 1/κ S_1TP) +( 1/κ + | c_0 |^2 ‖Φ‖_L^2 × L^2^-2‖ V_1 ‖_L^1()/dκ^2)zP + Q Λ_0(z) Q + zQΛ_1(z) + zΛ_2(z)Q + z^2Λ_3(z), where Λ_j(z) are absolutely bounded operators on L^2() × L^2() satisfying the improved bounds ‖|∂_z^k Λ_j(z) |‖_L^2() × L^2() → L^2() × L^2()≲ 1, k=0,1,2, j=0,1,2,3, uniformly in z for | z| < z_0. Throughout the proof, we will denote by _j(z), for 0 ≤ j ≤ 3, as error terms that satisfy the absolute bound | z |^k ‖|∂_z^k _j(z) |‖_L^2() × L^2() → L^2() × L^2()≲| z |^j, ∀ k = 0,1,2, ∀ | z | < z_0, for some z_0>0 small. This convenient notation will be useful in invoking Neumann series inversion for small values of z. Since we only need the expansion of M(z)^-1 up to a few powers of z, the exact expressions of _j(z) are insignificant and we allow it to vary from line to line. By Proposition <ref>, we rewrite M(z) by setting (z) := z/κM(z) = P + z/κ (T + zM_1 + _2(z)), where _2(z) is the error term in Proposition <ref>. Using I = P + Q, we write (z) + Q = I + z/κ (T + zM_1 + _2(z)), and by choosing z small enough, a Neumann series expansion yields the inverse operator [(z)+Q]^-1 = ∑_n ≥ 0 (-1)^n (z/κ(T + zM_1 + _2(z)))^n on L^2()× L^2(). We collect the terms of power order up to 2 to obtain [(z)+Q]^-1 = I - z/κT - z^2 ( 1/κM_1 -1/κ^2T^2 ) + _3(z). Note that z_2(z) is of the form _3(z). Recall by Lemma <ref> that the operator (z) is invertible on L^2() × L^2() if and only if the operator B_1(z) := Q-Q[(z)+Q]^-1Q is invertible on the subspace QL^2 ≡ Q(L^2() × L^2()). Using (<ref>), we find that B_1(z) = z/κQTQ + z^2(1/κQM_1Q - 1/κ^2QT^2Q) + Q_3(z)Q. We rewrite B_1(z) by setting _1(z) := κ/zB_1(z) = QTQ + z(QM_1Q - 1/κQT^2Q) + Q_2(z)Q. Since the threshold μ is not regular, the operator QTQ is not invertible on QL^2 according to Definition <ref>. By considering the operator _1(z) + S_1 = (QTQ + S_1) + z(QM_1Q - 1/κQT^2Q) + Q_2(z)Q, and the fact that we have QD_0Q = D_0 = (QTQ+S_1)^-1 on QL^2, we can pick z small enough such that ‖ z(QM_1Q - 1/κQT^2Q) + Q_2(z)Q ‖_L^2 × L^2 → L^2 × L^2 < ‖ QD_0Q ‖_L^2 × L^2 → L^2 × L^2^-1. This allows for the more complicated Neumann series expansion (c.f. 
Lemma <ref>) on QL^2: (_1(z) + S_1)^-1 = D_0∑_n≥0 (-1)^n( (z (QM_1Q - κ^-1 QT^2Q ) + Q_2(z)Q)D_0)^n on QL^2. We collect the leading order terms in this expansion and write (_1(z) + S_1)^-1 = D_0 - zD_0(QM_1Q - κ^-1QT^2Q)D_0 + Q_2(z)Q. At this step, it is crucial that the operator D_0 is absolutely bounded to ensure that the remainder term Q_2(z)Q and its derivatives are absolutely bounded. Next, we set B_2(z) := S_1 - S_1(_1(z) + S_1)^-1S_1, on S_1L^2 ≡ S_1(L^2()× L^2()). Using the orthogonality conditions S_1D_0 = D_0 S_1 = S_1, S_1Q= QS_1 = S_1, QTS_1 = S_1TQ = 0, we obtain B_2(z) = z S_1(M_1 - κ^-1 T^2)S_1 + S_1_2(z)S_1. By Lemma <ref>, we note that S_1L^2 is spanned by Φ(x) and that PTΦ = TΦ holds (c.f. (<ref>)), whence S_1T^2S_1 = S_1TPTS_1. Using Lemma <ref> (c.f. (<ref>), (<ref>)), we obtain that d := (S_1(M_1 - κ^-1T^2)S_1) = (S_1M_1S_1)-κ^-1(S_1TPTS_1) =-2i(| c_0 |^2 + | c_1 |^2)‖Φ‖_L^2 × L^2^-2≠ 0. Hence, we apply another Neumann series expansion to invert the operator B_2(z) on S_1L^2 for small z and write B_2(z)^-1 = 1/dzS_1 + S_1_0(z)S_1 on S_1L^2. Moreover, by Lemma <ref>, we have _1(z)^-1 = (_1(z)+S_1 )^-1 + (_1(z)+S_1)^-1S_1B_2(z)^-1S_1(_1(z)+S_1)^-1 on QL^2. Using (<ref>), (<ref>), and (<ref>), we find that _1(z)^-1 = 1/dzS_1 + Q_0(z)Q on QL^2. Hence, B_1(z)^-1 = κ/z_1(z)^-1 = κ/d z^2S_1 + κ/zQ_0(z)Q on QL^2. We return to the expansion of (z)^-1 by using Lemma <ref> with (<ref>) to obtain that (z)^-1 = ((z)+Q)^-1 + ((z)+Q)^-1QB_1(z)^-1Q((z)+Q)^-1 = (I - z/κT) + κ/d z^2S_1 -1/dzTS_1 - 1/dzS_1T + 1/d κTS_1T + κ/z(Q_0(z)Q + _1(z)Q + Q_1(z) + _2(z)). Here, we used the identity Q = IQ = QI. By reverting back to M(z) = κ/z(z), we have M(z)^-1 = z/κ(z)^-1 = z/κI + 1/d zS_1 - 1/dκTS_1 - 1/dκS_1T + z/dκ^2TS_1T + Q_0(z)Q + _1(z)Q + Q_1(z) + _2(z). Note that we absorb the z^2/κ^2T term into the error _2(z) above. By using the identities I = Q + P, QTS_1 = S_1TQ = 0, and by factoring the powers of z from the error terms _j(z), we obtain the expansion of M(z)^-1 on L^2: for 0 < | z | < z_0, M(z)^-1 = z/κP + 1/d(1/zS_1 - 1/κ PTS_1 - 1/κ S_1TP + 1/κ^2PTS_1TP ) + Q Λ_0(z) Q + zQΛ_1(z) + zΛ_2(z)Q + z^2Λ_3(z), where the operators Λ_j(z), j=0,…,3, satisfy (<ref>). Here, we choose z_0>0 sufficiently small such that the expansion (<ref>) and the Neumann series inversions (<ref>), (<ref>), (<ref>) are valid for all 0<| z | < z_0. Finally, by Lemma <ref> (c.f. (<ref>)), the term PTS_1TP can be simplified to | c_0 |^2 ‖Φ‖_L^2 × L^2^-2‖ V_1 ‖_L^1()P, which finishes the proof. We appeal to the reader that each leading term in the expansion (<ref>) plays an important role in revealing the cancellations among the finite rank operators that arise in the local decay estimate (<ref>). Such a precise expression was also obtained for the one-dimensional Dirac operators in <cit.>, even though the proof we give here is different. See Remark 3.7 in that paper. For the low-energy unweighted dispersive estimates, it is sufficient to work with the simpler expression M(z)^-1 = 1/zQΛ_0(z)Q + QΛ_1(z) + Λ_2(z)Q + zΛ_3(z), where we absorb the operators S_1,S_1TP,PTS_1,P in (<ref>) into the operators QΛ_0(z)Q, QΛ_1(z), Λ_2(z)Q, Λ_3(z) respectively. The operators Λ_j(z), for j=0,…,3, satisfy the same estimates as (<ref>). § LOW ENERGY ESTIMATES In this section, we prove the low energy bounds for the perturbed evolution, following the ideas in Section 4 of <cit.>. We will frequently exploit the crucial orthogonality condition ∫_e_11v_1(x) Q(x,y) x = ∫_ Q(x,y) v_2(y) e_11 y= 0_2× 2. 
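Before proceeding to the low energy bounds, we note that the Jensen–Nenciu inversion lemma used repeatedly in the previous section is likewise a purely algebraic statement, so a finite-dimensional sanity check is immediate. In the numpy sketch below (with a random invertible A and a random orthogonal projection S, both illustrative choices of ours), B^-1 is computed on the range of S, and the formula A^-1 = (A+S)^-1 + (A+S)^-1SB^-1S(A+S)^-1 is confirmed numerically.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
A = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
Qf, _ = np.linalg.qr(rng.standard_normal((n, 2)) + 1j*rng.standard_normal((n, 2)))
S = Qf @ Qf.conj().T                      # rank-2 orthogonal projection

ApS = np.linalg.inv(A + S)
B = S - S @ ApS @ S
# invert B on ran(S): B + (I-S) is invertible on C^n and agrees with B on ran(S)
Binv = np.linalg.inv(B + np.eye(n) - S) @ S

lhs = np.linalg.inv(A)
rhs = ApS + ApS @ S @ Binv @ S @ ApS
assert np.linalg.norm(lhs - rhs) < 1e-10
print("Jensen-Nenciu inversion formula verified")
```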
The following calculus lemma will be helpful for dealing with the lower entry of the free resolvent kernel. For any m>0 and r ≥ 0, we define g_m(x) := e^-r√(x^2+m^2)/√(x^2+m^2). Then, there exists C_m > 0 (independent of r) such that ‖∂_x^k g_m ‖_L^∞()≤ C_m ≲ 1, ∀ k=0,1,2. First, by rescaling, we set g_m(x) = 1/m(x/m) where (x) := e^-rm √( x^2+1)/√(x^2+1) = 1/e^⟨ x ⟩⟨ x ⟩, := rm. Hence, it sufficient to prove the same estimate (<ref>) for (x). For k=0, it is clear that |(x) |≤ 1 for all x ∈. For k=1,2, direct computation shows that ∂_x (x) = - x(1+⟨ x ⟩)/e^⟨ x ⟩⟨ x ⟩^3, and ∂_x^2 (x) = 3x^2 + 3 x^2⟨ x ⟩-⟨ x ⟩^2 + ^2 x^2⟨ x ⟩^2 - ⟨ x ⟩^4/e^⟨ x ⟩⟨ x ⟩^5. Since e^-⟨ x ⟩max{1,,^2}≤ 1, it follows from (<ref>), (<ref>) that the estimate (<ref>) holds for and thus for g(x) too. The next proposition establishes the dispersive estimates for the evolution semigroup e^itP_s^+ for small energies close to the threshold μ. Let the assumptions of Theorem <ref> hold. Let χ_0(z) be a smooth, even, non-negative cut-off function satisfying χ_0(z) = 1 for | z |≤z_0/2 and χ_0(z) = 0 for | z |≥ z_0, where z_0>0 is given by Proposition <ref>. Then, for any | t |≥ 1, and u⃗ = (u_1,u_2) ∈() ×(), we have ‖ e^itχ_0( - μ I)P_s^+ u⃗‖_L^∞()× L^∞()≲| t |^-1/2‖u⃗‖_L^1() × L^1(), and ‖⟨ x ⟩^-2(e^itχ_0( - μ I)P_s^+ - F_t^+ )u⃗‖_L^∞()× L^∞()≲| t |^-3/2‖⟨ x ⟩^2u⃗‖_L^1() × L^1(), where F_t^+ is defined by F_t^+(x,y) = e^itμ/√(-4 π i t)Ψ⃗(x) [σ_3 Ψ⃗(y)]^⊤. We begin with the proof of the dispersive decay estimate (<ref>). We recall the spectral representation from (<ref>): e^itP_s^+ = e^itμ/π i∫_ e^itz^2zℛ_0(z) z - e^itμ/π i∫_ e^itz^2zℛ_0(z)v_1(M(z))^-1v_2ℛ_0(z) z. Note that the first term on the right is the spectral representation for the free evolution e^it_0P_s^+ and it satisfies the same estimate as (<ref>) thanks to Proposition <ref>. We insert the weaker expansion (<ref>) for M(z)^-1 following Remark <ref>, and write ∫_ e^itz^2zχ_0(z^2)ℛ_0(z)v_1(M(z))^-1v_2ℛ_0(z) z =∫_ e^itz^2χ_0(z^2)ℛ_0(z)v_1 QΛ_0(z)Q v_2ℛ_0(z) z + ∫_ e^itz^2zχ_0(z^2)ℛ_0(z)v_1 QΛ_1(z) v_2ℛ_0(z) z +∫_ e^itz^2zχ_0(z^2)ℛ_0(z)v_1 Λ_2(z)Q v_2ℛ_0(z) z+∫_ e^itz^2z^2χ_0(z^2)ℛ_0(z)v_1 Λ_3(z) v_2ℛ_0(z) z =: J_1 + J_2 + J_3 + J_4. It remains to show that ‖ J_k‖_L^1→ L^∞≤ C | t |^-1/2, ∀ k=1,…,4 . We treat the case for J_1 since the other cases follow similarly. First, we recall the kernel of _0(z) from (<ref>) and write _0(z)(x,y) := _1(z)(x,y) + _2(z)(x,y) := ie^iz | x - y |/2ze_11 + -e^-√(z^2 + 2μ)| x - y |/2√(z^2 + 2μ)e_22, and we further decompose the integral J_1 as J_1 = J_1^(1,1) + J_1^(1,2) + J_1^(2,1) + J_1^(2,2), where J_1^(i,j)(x,y) := ∫_ e^itz^2χ_0(z^2)[ℛ_i(z)v_1 QΛ_0(z)Q v_2ℛ_j(z)](x,y) z, i,j ∈{1,2}. We begin with the most singular term J_1^(1,1)(x,y) = ∫_^3 e^itz^2 + iz(| x - x_1 | + | y - y_1 |)χ_0(z^2)/(2iz)^2 [e_11v_1QΛ_0(z)Qv_2e_11](x_1,y_1) z x_1 y_1 . The orthogonality conditions (<ref>) imply that ∫_ e^iz | x |e_11 v_1(x_1)Q(x_1,x_2) x_1 = ∫_ e^iz | y | Q(y_2,y_1)v_2(y_1)e_11 y_1 = 0. Hence, writing e^iz| x - x_1| - e^iz| x | = iz ∫_| x |^| x - x_1 |e^izs_1 s_1 and e^iz| y - y_1| - e^iz| y | = iz ∫_| y |^| y - y_1 |e^izs_2 s_2, we obtain J_1^(1,1)(x,y) = 1/4∫_^2∫_| x |^| x - x_1 |∫_| y |^| y - y_1 |∫_ e^itz^2 + iz(s_1+s_2) A(z,x_1,y_1) s_1 s_2 x_1 y_1 z, where A(z,x_1,y_1) = χ_0(z^2)[e_11v_1QΛ_0(z)Qv_2e_11](x_1,y_1), and note that A is differentiable and compactly supported in z due to Proposition <ref> and the compact support of χ_0(z^2). 
We obtain by Lemma <ref> and the Fubini theorem that | J_1^(1,1)(x,y) |≤ C | t|^-1/2∫_^2∫_| x |^| x - x_1 |∫_| y |^| y - y_1 |∫_|∂_z A(z,x_1,x_2)| z s_1 s_2 x_1 y_1. Using ∫_| x |^| x - x_1 |∫_| y |^| y - y_1 | 1 s_1 s_2 ≤|| x - x_1 | - | x ||·|| y - y_1 | - | y ||≲⟨ x_1 ⟩⟨ y_1 ⟩, as well as ∂_z A(z,x_1,y_1) = [e_11v_1Q ∂_z(χ_0(z^2)Λ_0(z))Qv_2e_11](x_1,y_1), along with the bound (<ref>) on Λ_0, we deduce that ∫_^2∫_| x |^| x - x_1 |∫_| y |^| y - y_1 |∫_|∂_z A(z,x_1,x_2)| z s_1 s_2 x_1 y_1 ≤ C‖ Q ‖_L^2 → L^2^2 ‖⟨ x_1 ⟩ v_1(x_1)‖_L^2()‖⟨ y_1 ⟩ v_2(y_1)‖_L^2() ·∫_[-z_0,z_0] (‖|Λ_0(z)|‖_L^2× L^2 → L^2× L^2 + ‖|∂_z Λ_0(z)|‖_L^2× L^2 → L^2× L^2) z ≲ 1. Hence, ‖ J_1^(1,1)‖_L^1 × L^1 → L^∞× L^∞≤ C | t |^-1/2. Next, we consider the least singular term J_1^(2,2)(x,y) = ∫_^3 e^itz^2B(z,x,y,x_1,y_1) x_1 y_1 z, where B(z,x,y,x_1,y_1) := e^-√(z^2+2μ)(| x - x_1 | + | y - y_1 |)χ_0(z^2) /4(z^2+2μ) [e_22v_1QΛ_0(z)Qv_2e_22](x_1,y_1). By Lemma <ref>, we have | J_1^(2,2)(x,y)|≤ C | t |^-1/2, if we can show the uniform estimate sup_x,y ∈∫_^3|∂_z B(z,x,y,x_1,y_1)| z x_1 y_1 ≲ 1. By Lemma <ref>, we have sup_z∈|∂_z^k ( e^-√(z^2+2μ)(| x - x_1 | + | y - y_1 |)/4(z^2+2μ)) |≤ C_μ≲ 1, k=0,1, uniformly in the x,y,x_1,y_1 variables. Hence, using the Cauchy-Schwarz inequality in the x_1,y_1 variables and the bound (<ref>) on Λ_0, we have ∫_^3|∂_z B(z,x,y,x_1,y_1)| z x_1 y_1 ≤ C_μ∫_^3| (1+∂_z)χ_0(z^2)[e_22v_1QΛ_0(z)Qv_2e_22](x_1,y_1) | z x_1 y_1 ≲‖ Q ‖_L^2 × L^2 → L^2 × L^2^2 ‖ v_1‖_L^2()‖ v_2‖_L^2() ∫_[-z_0,z_0](‖|Λ_0(z)|‖_L^2 × L^2 → L^2 × L^2 + ‖|∂_z Λ_0(z)|‖_L^2 × L^2 → L^2 × L^2) z ≲ 1. Hence, the bound (<ref>) is proven. The remaining terms J_1^(1,2) and J_1^(2,1) can be treated similarly with the same techniques, while for the remaining cases J_2,J_3, and J_4, we use the additional powers of z in place of the missing Q operators to obtain the same bounds (<ref>) as the term J_1. This finishes the proof of (<ref>). Next, we turn to the proof of the low-energy weighted estimate (<ref>). Recall that the threshold resonance function Ψ = (Ψ_1,Ψ_2)^⊤ has been normalized in Theorem <ref>, which means that we need to carefully treat the constants relating to the function Φ where Φ := v_2Ψ. By Lemma <ref>, note that Φ spans the subspace S_1(L^2()× L^2()). We define η := ‖Φ‖_L^2() × L^2()^-2≠ 0, so that S_1(x,y) = η Φ^*(y)Φ(x), and we fix the constants c_0 and c_1 defined by (<ref>) and (<ref>) respectively for this Φ. By Lemma <ref>, one finds the relation 2 = lim_x →∞(|Ψ_1(x)|^2 + |Ψ_1(-x)|^2) = 2(| c_0 |^2 + | c_1 |^2), by the polarization identity (c.f. (<ref>)). Thus, the precise expansion (<ref>) of M(z)^-1 from Proposition <ref> simplifies to M(z)^-1 = i/2η zS_1 + 1/η‖ V_1 ‖_L^1() PTS_1 + 1/η‖ V_1 ‖_L^1() S_1TP + (2i/‖ V_1 ‖_L^1() + 2 | c_0 |^2/i‖ V_1 ‖_L^1() )zP + Q Λ_0(z) Q + zQΛ_1(z) + zΛ_2(z)Q + z^2Λ_3(z), 0 < | z | < z_0. 
We insert the above expression into the spectral representation of e^itχ_0( - μ I)P_s^+, and obtain that e^itχ_0( - μ I)P_s^+ = e^itμ/π i∫_e^itz^2zχ_0(z^2)_0(z) z - e^itμ/π i∫_e^itz^2zχ_0(z^2)_0(z)v_1(M(z))^-1v_2_0(z) z = e^itμ/π iI_1 -e^itμ/π i( i/2ηI_2,1 + 1/η‖ V_1 ‖_L^1()I_2,2 + 1/η‖ V_1 ‖_L^1() I_2,3 + (2i/‖ V_1 ‖_L^1() + 2 | c_0 |^2/i‖ V_1 ‖_L^1() )I_2,4) -e^itμ/π i(I_3,1 + I_3,2 + I_3,3 + I_3,4), where I_1 := ∫_ e^itz^2z χ_0(z^2) _0(z) z, I_2,1 := ∫_ e^itz^2χ_0(z^2) [_0(z)v_1 S_1 v_2_0(z)] z, I_2,2 := ∫_ e^itz^2 zχ_0(z^2) [_0(z)v_1 S_1 T P v_2_0(z)] z, I_2,3 := ∫_ e^itz^2 zχ_0(z^2) [_0(z)v_1 P T S_1 v_2_0(z)] z, I_2,4 := ∫_ e^itz^2 z^2χ_0(z^2) [_0(z)v_1 P v_2_0(z)] z, and I_3,1 := ∫_ e^itz^2 zχ_0(z^2) [_0(z)v_1 Q Λ_0 (z) Q v_2_0(z)] z, I_3,2 := ∫_ e^itz^2 z^2χ_0(z^2) [_0(z)v_1 Q Λ_1(z) v_2_0(z)] z, I_3,3 := ∫_ e^itz^2 z^2χ_0(z^2) [_0(z)v_1 Λ_2(z)Q v_2_0(z)] z, I_3,4 := ∫_ e^itz^2 z^3χ_0(z^2) [_0(z)v_1 Λ_3(z) v_2_0(z)] z. Now we study the local decay of the terms I_1, I_2,j, I_3,ℓ, for j,ℓ∈{1,…,4} and we will observe in the following propositions that the terms I_1,I_2,1,…,I_2,4 contribute to the leading order for the local decay estimate while the remainder terms I_3,1, …, I_3,4 satisfy the stronger local decay estimate (| t |^-3/2⟨ x ⟩⟨ y ⟩). We first handle these remainder terms by Lemma <ref> in a similar spirit to the proof for the (unweighted) dispersive bound (<ref>), exploiting the additional power of z. For i∈{1,2,…,4} and | t|≥ 1, we have | I_3,i(x,y) |≤ C | t|^-3/2⟨ x ⟩⟨ y ⟩. We treat the case for I_3,1 as the other cases follow similarly by using the additional powers of z in place of the missing operators Q. As before, we consider the decomposition I_3,1 = I_3,1^(1,1) + I_3,1^(1,2) + I_3,1^(2,1) + I_3,1^(2,2), where I_3,1^(i,j) := ∫_ e^itz^2zχ_0(z^2)[_i(z)v_1QΛ_0(z)Qv_2_j(z)] z, i,j∈{1,2}, with _1 and _2 defined in (<ref>). We begin with the term I_3,1^(1,1)(x,y) = ∫_^3 e^itz^2 + iz(| x - x_1 | + | y - y_1 |)zχ_0(z^2)/(2iz)^2 [e_11v_1Q Λ_0(z)Qv_2e_11](x_1,y_1) z x_1 y_1. Using the orthogonality condition (<ref>) like in (<ref>), we obtain I_3,1^(1,1)(x,y) = 1/4∫_^2∫_| x |^| x - x_1 |∫_| y |^| y - y_1|∫_e^itz^2+iz(s_1+s_2)z A(z,x_1,y_1) z s_1 s_2 x_1 y_1, where A(z,x_1,y_1) := χ_0(z^2)[e_11v_1QΛ_0(z)v_2Qe_11](x_1,y_1). By Lemma <ref>, we obtain that | I_3,1^(1,1)(x,y) | ≲| t |^-3/2∫_^2∫_| x |^| x - x_1 |∫_| y |^| y - y_1|∫_[-z_0,z_0](|∂_z^2 A | + (s_1+s_2)|∂_z A | + | A |) z s_1 s_2 x_1 y_1. Using the bounds ∫_| x |^| x - x_1 |∫_| y |^| y - y_1| 1 s_1 s_2 ≲⟨ x_1 ⟩⟨ y_1 ⟩, ∫_| x |^| x - x_1 |∫_| y |^| y - y_1| (s_1+s_2) s_1 s_2 ≲⟨ x_1 ⟩^2 ⟨ y_1 ⟩^2 ⟨ x ⟩⟨ y ⟩, we have | I_3,1^(1,1) (x,y) |≲| t |^-3/2∫_^2∫_[| z |≤ z_0]⟨ x_1 ⟩⟨ y_1 ⟩ (|∂_z^2 A | + ⟨ x_1 ⟩⟨ y_1 ⟩⟨ x ⟩⟨ y ⟩|∂_z A | + | A | ) z x_1 y_1. Noting that ⟨ x⟩ v_1(x_1) and ⟨ y_1 ⟩ v_2(y_1) are in L^2 and that Λ_0 satisfies the bound (<ref>), we apply Cauchy-Schwarz inequality in x_1 and y_1 variables to obtain the bound | I_3,1^(1,1)(x,y) | ≲| t |^-3/2‖ Q ‖_L^2 → L^2^2 ‖⟨ x_1 ⟩ v_1 ‖_L_x_1^2() ‖⟨ y_1 ⟩ v_2 ‖_L_y_1^2() ·∫_[| z |≤ z_0] (‖|∂_z^2 Λ_0(z) |‖_L^2 × L^2 → L^2 × L^2 + ‖|Λ_0(z) |‖_L^2 × L^2→ L^2× L^2 ) z +| t |^-3/2⟨ x ⟩⟨ y ⟩‖ Q ‖_L^2 → L^2^2‖⟨ x_1 ⟩ v_1 ‖_L_x_1^2() ‖⟨ y_1 ⟩ v_2 ‖_L_y_1^2() ·∫_[| z |≤ z_0]‖|∂_z Λ_0(z) |‖_L^2 × L^2 → L^2× L^2 z ≲| t |^-3/2⟨ x ⟩⟨ y ⟩ . Next, we consider the term I_3,1^(1,2)(x,y) = ∫_^3 e^itz^2 + iz| x - x_1 | - √(z^2+2μ)| y - y_1 |χ_0(z^2)/4i√(z^2+2μ) [e_11v_1Q Λ_0(z)Qv_2e_22](x_1,y_1) z x_1 y_1. By using the Q orthogonality (c.f. 
(<ref>)) condition, we write I_3,1^(1,2)(x,y) = ∫_^3∫_| x |^| x - x_1 | e^itz^2 + izs_1 zB(z,x_1,y_1,x,y) s_1 z x_1 y_1, where B(z,x_1,y_1,x,y) := e^- √(z^2+2μ)| y - y_1 |/4i√(z^2+2μ)χ_0(z^2) [e_11v_1Q Λ_0(z)Qv_2e_22](x_1,y_1) . Since B is compactly supported in z, we can exchange the order of integration and we use Lemma <ref> to obtain | I_3,1^(1,2)(x,y) |≤ C | t |^-3/2∫_^2∫_| x |^| x - x_1|∫_| [∂_z^2 + is_1 ∂_z] B(z,x_1,y_1,x,y) | z s_1 x_1 y_1. By Lemma <ref>, we have sup_z ∈|∂_z^k (e^- √(z^2+2μ)| y - y_1 |4i√(z^2+2μ)) |≤ C_μ≲ 1, ∀ k=0,1,2, which implies by Hölder's inequality and Leibniz rule that ∫_| [∂_z^2 + is_1 ∂_z] B(z,x_1,y_1,x,y) | z ≤ C ⟨ s_1 ⟩∫_|e_11v_1Q [1+∂_z + ∂_z^2](χ_0(z^2)Λ_0(z))Qv_2e_22| z. Repeating the arguments from (<ref>)–(<ref>), we obtain | I_3,1^(1,2)(x,y) |≤ C | t |^-3/2⟨ x ⟩. Similarly, one has the bounds | I_3,1^(2,1)(x,y) |≤ C | t |^-3/2⟨ y ⟩, | I_3,1^(2,2)(x,y) |≤ C | t |^-3/2, and we are done. For all | t |≥ 1, we have | I_2,1(x,y) - F_t^1(x,y) |≤ C | t |^-3/2⟨ x⟩^2 ⟨ y ⟩^2, where F_t^1(x,y) := η√(π)/√(-it) [c_0 e_1 - Ψ(x)][σ_3 Ψ(y) - c_0e_1]^*. As in the previous propositions, we decompose I_2,1 into the sum I_2,1 = I_2,1^(1,1)+I_2,1^(1,2)+I_2,1^(2,1)+I_2,1^(2,2), with I_2,1^(i,j) := ∫_ e^itz^2χ_0(z^2) [_i(z)v_1 S_1 v_2_j(z)] z, i,j ∈{1,2}. We start with the most singular term I_2,1^(1,1)(x,y) = ∫_^3 e^itz^2 + iz(| x - x_1 | + | y - y_1 |)χ_0(z^2)/(2iz)^2 [e_11v_1S_1v_2e_11](x_1,y_1) x_1 y_1 z. Noting that S_1L^2 ⊂ QL^2, the orthogonality conditions (<ref>) imply that ∫_ e^iz | x |e_11 v_1(x_1)S_1(x_1,x_2) x_1 = ∫_ e^iz | y | S_1(y_2,y_1)v_2(y_1)e_11 y_1 = 0_2× 2, ∀ x, y ∈. Hence, by the Fubini theorem, I_2,1^(1,1) (x,y) = 1/4∫_^2∫_| x |^| x - x_1 |∫_| y |^| y - y_1 |∫_ e^itz^2 + iz(s_1 + s_2)χ_0(z^2) [e_11v_1S_1v_2e_11](x_1,y_1) z s_1 s_2 x_1 y_1 = 1/4∫_| x |^| x - x_1 |∫_| y |^| y - y_1 | G_t(s_1+s_2) s_1 s_2 ∫_^2 [e_11v_1S_1v_2e_11](x_1,y_1) x_1 y_1, where G_t(·) is the function defined in Lemma <ref>, which satisfies the estimate | G_t(s_1+s_2) - √(π)/√(-it) e^-is_1^2/4te^-is_2^2/4t|≤ C | t |^-3/2⟨ s_1 ⟩⟨ s_2 ⟩. Using the bound ∫_| x |^| x - x_1 |∫_| y |^| y - y_1 |⟨ s_1 ⟩⟨ s_2 ⟩ s_1 s_2 ≲⟨ x_1⟩^2⟨ y_1⟩^2⟨ x ⟩⟨ y ⟩, the decay assumptions on v_1,v_2, and the estimate (<ref>), we have | I_2,1^(1,1)(x,y) - √(π)/4√(-it) e^i π/4∫_^2H_t(x_1,x)[e_11v_1S_1v_2e_11](x_1,y_1)H_t(y_1,y) x_1 y_1| ≤ C | t |^-3/2⟨ x ⟩⟨ y ⟩‖ S_1 ‖_L^2 × L^2 → L^2 × L^2 ‖⟨ x_1 ⟩^2 v_1(x_1) ‖_L^2‖⟨ y_1 ⟩^2 v_2(y_2) ‖_L^2≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩, where we set H_t(x_1,x) := ∫_| x |^| x_1 - x | e^-is^2/4t s. Since S_1(x,y) = ηΦ(x)Φ^*(y), the orthogonality conditions (<ref>) imply that ∫_| x |e_11 v_1(x_1) S_1(x_1,y_1) x_1 = η∫_| x |e_11v_1(x_1)Φ(x_1) x_1Φ^*(y_1) = 0_2× 2, ∀ y ∈, ∫_| y | S_1(x_1,y_1) v_2(y_1)e_11 y_1 = ηΦ(x_1)∫_| y |Φ^*(y_1)v_2(y_1)e_11 y_1 = 0_2 × 2, ∀ x ∈. Hence, using the bound | H_t(x_1,x) - (| x - x_1 | - | x | )|≤ C | t |^-1⟨ x ⟩^2 ⟨ x_1 ⟩^3, and the exponential decay of v_1,v_2, we conclude the estimate | I_2,1^(1,1)(x,y) - η√(π)/√(-it) [G_0(e_11v_1Φ)(x)][G_0(e_11v_2Φ)(y)]^* |≤ C| t |^-3/2⟨ x ⟩^2 ⟨ y ⟩^2, where G_0(x,y) := -1/2| x - y|, and [G_0(e_11v_1Φ)(x)] := -1/2∫_| x - x_1 |e_11v_1(x_1)Φ(x_1) x_1, [G_0(e_11v_2Φ)(y)]^* := -1/2∫_| y - y_1 |Φ^*(y_1)v_2(y_1) e_11 y_1. In the preceding definition, we used the identity v_2^* = v_2. Next, we treat the term I_2,1^(2,2)(x,y) = ∫_^3 e^itz^2χ_0(z^2)e^-√(z^2+2μ)| x - x_1|/-2√(z^2+2μ) [e_22v_1S_1v_2e_22](x_1,y_1)e^-√(z^2+2μ)| y - y_1|/-2√(z^2+2μ) x_1 y_1 z. 
By Taylor expansion, we have I_2,1^(2,2)(x,y) =∫_^3 e^itz^2χ_0(z^2)e^-√(2μ)| x - x_1|/-2√(2μ) [e_22v_1S_1v_2e_22](x_1,y_1)e^-√(2μ)| y - y_1|/-2√(2μ) x_1 y_1 z + ∫_^3 e^itz^2z^2χ_0(z^2)[e_22v_1S_1v_2e_22](x_1,y_1)κ(x,x_1)κ(y,y_1) x_1 y_1 z = η∫_ e^itz^2χ_0(z^2) z [G_2(e_22v_1Φ)(x)][G_2(e_22v_2Φ)(y)]^* + ∫_^3 e^itz^2z^2χ_0(z^2)[e_22v_1S_1v_2e_22](x_1,y_1)κ(x,x_1)κ(y,y_1) x_1 y_1 z, where we set G_2(x,y) := e^-√(2μ)| x - y|/-2√(2μ), and where κ(x,x_1)κ(y,y_1) is an error term bounded by C⟨ x ⟩⟨ x_1 ⟩⟨ y ⟩⟨ y_1 ⟩ e^-c(| x - x_1 | + | y - y_1 |), for some C,c >0, (c.f. (<ref>)). The definitions for G_2(e_22v_1Φ)(x) and G_2(e_22v_2Φ)(y) are defined analogously to the ones for G_0(e_11v_1Φ)(x) and G_0(e_11v_2Φ)(y). By non-stationary phase, one has the uniform estimate |∫_ e^itz^2z^2χ_0(z^2) z |≤ C| t |^-3/2. Hence, we can control the remainder term in I_2,1^(2,2) by |∫_^3 e^itz^2z^2χ_0(z^2)[e_22v_1S_1v_2e_22](x_1,y_1)κ(x,x_1)κ(y,y_1) x_1 y_1 z |≤ C | t|^-3/2⟨ x ⟩⟨ y ⟩. On the other hand, by Lemma <ref>, one has ∫_ e^itz^2χ_0(z^2) z = √(π)/√(-it) + R_t, | R_t |≤ C| t |^-3/2. Hence, the leading contribution of I_2,1^(2,2) can be written as |∫_ e^itz^2χ_0(z^2) z [G_2(e_22v_1Φ)(x)][G_2(e_22v_2Φ)(y)]^* - η√(π)/√(-it) [G_2(e_22v_1Φ)(x)][G_2(e_22v_2Φ)(y)]^*|≤ C| t |^-3/2. Thus, one concludes the estimate for I_2,1^(2,2): | I_2,1^(2,2) - η√(π)/√(-it) [G_2(e_22v_1Φ)(x)][G_2(e_22v_2Φ)(y)]^* |≤ C | t |^-3/2⟨ x ⟩⟨ y ⟩. Finally, we note that a similar analysis holds for the terms I_2,1^(1,2) and I_2,1^(2,1) yielding the contributions | I_2,1^(1,2) - η√(π)/√(-it)[G_0(e_11v_1Φ)(x)][G_2(e_22v_2Φ)(y)]^* |≤ C| t |^-3/2⟨ x ⟩^2 ⟨ y ⟩, | I_2,1^(2,1) -η√(π)/√(-it)[G_2(e_22v_1Φ)(x)][G_0(e_11v_2Φ)(y)]^*|≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩^2. By adding all leading order contributions, we obtain F_t^1(x,y) =η√(π)/√(-it)[(G_0e_11 + G_2e_22)v_1Φ](x)[(G_0e_11 + G_2e_22)v_2Φ]^*(y). Recalling that _0 = G_0 e_11 + G_2e_22 from Lemma <ref>, that _0(v_1 Φ ) = c_0e_1 - Ψ from Lemma <ref>, and that _0(v_2 Φ) = σ_3 Ψ - c_0 e_1 from Remark <ref> (c.f. (<ref>)), we arrive at F_t^1(x,y) = η√(π)/√(-it)[c_0 e_1 - Ψ(x)][σ_3 Ψ(y) - c_0e_1]^*, as claimed We continue the analysis for the terms involving the operators S_1TP and PTS_1. For all | t |≥ 1, we have | I_2,2(x,y) - F_t^2(x,y) |≤ C | t|^-3/2⟨ x ⟩^2 ⟨ y ⟩^2, | I_2,3(x,y) - F_t^3(x,y) |≤ C | t|^-3/2⟨ x ⟩^2 ⟨ y ⟩^2, where F_t^2(x,y) := iη‖ V_1 ‖_L^1()/2√(π)/√(-it)[c_0 e_1 - Ψ(x)][e^iy^2/4tc_0e_1]^*, F_t^3(x,y) := -iη‖ V_1 ‖_L^1()/2√(π)/√(-it) [e^-ix^2/4tc_0 e_1][σ_3 Ψ(y) - c_0e_1]^*. As in the proof of Proposition <ref>, we decompose I_2,2 into I_2,2 = I_2,2^(1,1) + I_2,2^(1,2) + I_2,2^(2,1) + I_2,2^(2,2), with I_2,2^(i,j) := ∫_ e^itz^2 z χ_0(z^2) [_i(z) v_1 S_1TP v_2 _j(z)] z, i,j ∈{1,2}, where _1 and _2 were defined in (<ref>). We start with I_2,2^(1,1)(x,y) = ∫_^3 e^itz^2zχ_0(z^2) e^iz | x - x_1 |/-2iz[e_11v_1S_1TPv_2e_11](x_1,y_1)e^iz | y - y_1 |/-2iz x_1 y_1 z. Using the orthogonality (<ref>), we have I_2,2^(1,1)(x,y) = 1/4∫_^3∫_| x |^| x - x_1 |∫_| y |^| y - y_1 | e^itz^2+iz(s_1+s_2)zχ_0(z^2) [e_11v_1S_1TPv_2e_11](x_1,y_1) s_1 s_2 x_1 y_1 z + 1/4i∫_^3∫_| x |^| x - x_1 |e^itz^2+izs_1χ_0(z^2) [e_11v_1S_1TPv_2e_11](x_1,y_1)e^iz | y | s_1 x_1 y_1 z =: I_2,2;1^(1,1) + I_2,2;2^(1,1). By Lemma <ref>, we have |∫_e^itz^2+iz(s_1+s_2)zχ_0(z^2) z |≤ C| t |^-3/2⟨ s_1 ⟩⟨ s_2 ⟩. 
Using this estimate, the bound ∫_| x |^| x - x_1 |∫_| y |^| y - y_1 |⟨ s_1 ⟩⟨ s_2 ⟩ s_1 s_2 ≲⟨ x_1 ⟩^2 ⟨ y_1 ⟩^2⟨ x ⟩⟨ y ⟩, the absolute boundedness of S_1TP, and the exponential decay of v_1,v_2, we deduce that | I_2,2;1^(1,1)(x,y) |≲| t |^-3/2⟨ x ⟩⟨ y ⟩∫_^2|⟨ x_1 ⟩^2 ⟨ y_1 ⟩^2 [e_11v_1S_1TPv_2e_11](x_1,y_1)| x_1 y_1 ≲| t |^-3/2⟨ x ⟩⟨ y ⟩. By Lemma <ref> and direct computation, ∫_ S_1TP(x_1,y_1)v_2(y_1)e_11 y_1 = η‖ V_1 ‖_L^1()Φ(x_1)[c_0e_1]^*. Hence, integrating in y_1, we have I_2,2;2^(1,1)(x,y) = η‖ V_1 ‖_L^1()/4i(∫_∫_| x |^| x - x_1 |∫_ e^itz^2+iz(s_1 + | y |)χ_0(z^2) e_11 v_1(x_1)Φ(x_1) z s_1 x_1)[c_0e_1]^* = η‖ V_1 ‖_L^1()/4i(∫_∫_| x |^| x - x_1 | G_t(s_1 + | y |) s_1 e_11 v_1(x_1)Φ(x_1) x_1)[c_0e_1]^*, where G_t is the function defined in Lemma <ref>. By Lemma <ref> (c.f. (<ref>)–(<ref>) for similar computations), we have | I_2,2;2^(1,1)(x,y) - i η‖ V_1 ‖_L^1()/2 [G_0(e_11v_1Φ)(x)][e^iy^2/4tc_0e_1]^* |≤ C | t |^-3/2⟨ x ⟩^2 ⟨ y ⟩^2, where G_0 is the operator defined in (<ref>). This completes the analysis of the term I_2,2^(1,1). Next, we treat the term I_2,2^(2,1)(x,y) = ∫_^3 e^itz^2z χ_0(z^2) e^-√(z^2+2μ)| x - x_1 |/-2√(z^2+2μ)[e_22v_1S_1TPv_2e_11](x_1,y_1)e^iz | y - y_1 |/-2iz x_1 y_1 z. By inserting e^iz | y |, we write I_2,2^(2,1)(x,y) = -1/2∫_^3 e^itz^2z χ_0(z^2) e^-√(z^2+2μ)| x - x_1 |/-2√(z^2+2μ)[e_22v_1S_1TPv_2e_11](x_1,y_1) ∫_| y |^| y - y_1 |e^izs_2 s_2 x_1 y_1 z + ∫_^3 e^itz^2z χ_0(z^2) e^-√(z^2+2μ)| x - x_1 |/-2√(z^2+2μ)[e_22v_1S_1TPv_2e_11](x_1,y_1)e^iz | y |/-2iz x_1 y_1 z =: I_2,2;1^(2,1)(x,y) + I_2,2;2^(2,1)(x,y), where I_2,2;2^(2,1) is the leading term. By Lemma <ref> and Lemma <ref>, |∫_ e^itz^2+izs_2 z χ_0(z^2) e^-√(z^2+2μ)| x - x_1 |/-2√(z^2+2μ) z |≤ C | t |^-3/2⟨ s_2 ⟩. Hence, using the absolute boundedness of S_1TP and the bound (<ref>), we have | I_2,2;1^(2,1)(x,y) |≲| t |^-3/2∫_^2⟨ y_1 ⟩^2 ⟨ y ⟩[e_22v_1S_1TPv_2e_11](x_1,y_1) x_1 y_1 ≲| t |^-3/2⟨ y ⟩. On the other hand, we treat I_2,2;2^(2,1) similarly as in (<ref>) - (<ref>) and find that | I_2,2;2^(2,1)(x,y) - i/2∫_^3 e^itz^2 + iz | y |χ_0(z^2)G_2(x,x_1)[e_22v_1S_1TPv_2e_11](x_1,y_1) x_1 y_1 z |≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩, where G_2 is defined in (<ref>). Hence, by Lemma <ref> and (<ref>), we conclude that | I_2,2^(2,1)(x,y) - iη‖ V_1 ‖_L^1() /2√(π)/√(-it) [G_2(e_22v_1Φ)(x)][e^iy^2/4tc_0e_1]^* |≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩. Finally, we show that the terms I_2,2^(1,2) and I_2,2^(2,2) satisfy the better decay rates of (| t |^-3/2⟨ x ⟩⟨ y ⟩). By orthogonality (c.f. (<ref>)), I_2,2^(1,2)(x,y) = 1/-2∫_^3e^itz^2zχ_0(z^2) ∫_| x |^| x - x_1| e^izs_1 s_1[e_11v_1S_1TPv_2e_22](x_1,y_1) e^-√(z^2+2μ)| y - y_1 |/-2√(z^2+2μ) x_1 y_1 z. By Lemma <ref> and Lemma <ref>, we note that the z-integral satisfies the bound |∫_ e^itz^2+izs_1 z χ_0(z^2)e^-√(z^2+2μ)| y - y_1 |/-2√(z^2+2μ) z |≤ C| t |^-3/2⟨ s_1 ⟩. Hence, by the absolute boundedness of S_1TP and the decay of v_1, v_2, we conclude that | I_2,2^(1,2)(x,y) |≤ C| t |^-3/2⟨ x ⟩. The analysis of I_2,2^(2,2) is analogous to the preceding one, yielding the bound | I_2,2^(2,2)(x,y) |≤ C | t |^-3/2⟨ y ⟩. Thus, using _0 = G_0 e_11 + G_2e_22, and _0(v_1Φ) = c_0e_1 - Ψ from Lemma <ref>, we conclude (<ref>) and (<ref>). For the estimate (<ref>) involving I_2,3, one should instead use the identity ∫_e_11v_1(x_1)PTS_1(x_1,y_1) x_1 = -η‖ V_1 ‖_L^1() c_0e_1 Φ(y_1)^*, and we leave the remaining details to the reader. Next, we remark that the analysis for I_2,4 involving the operator P leads to a similar estimate as the free evolution in Proposition <ref>.
For all | t |≥ 1, we have | I_2,4(x,y) - F_t^4(x,y) |≤ C | t|^-3/2⟨ x ⟩^2 ⟨ y ⟩^2, where F_t^4(x,y) := ‖ V_1 ‖_L^1()/4√(π)/√(-it)e^-ix^2/4te_1 e^-iy^2/4te_1^⊤. As before, we write I_2,4 = I_2,4^(1,1) + I_2,4^(1,2) + I_2,4^(2,1) + I_2,4^(2,2), with I_2,4^(i,j) := ∫_ e^itz^2 z^2 χ_0(z^2) [_i(z) v_1 P v_2 _j(z)] z, i,j ∈{1,2}, where _1 and _2 were defined in (<ref>). We first treat the leading term I_2,4^(1,1)(x,y) = ∫_ e^itz^2 z^2 χ_0(z^2) e^iz | x - x_1 |/2iz[e_11v_1Pv_2e_11](x_1,y_1)e^iz | y - y_1 |/2iz x_1 y_1 z. By adding and subtracting e^iz | x | and e^iz | y | twice, we further consider I_2,4^(1,1)(x,y) = ∫_^3 e^itz^2 z^2 χ_0(z^2) e^iz | x |/2iz[e_11v_1Pv_2e_11](x_1,y_1)e^iz | y |/2iz x_1 y_1 z + 1/2∫_^3 e^itz^2 z^2 χ_0(z^2) e^iz | x |/2iz[e_11v_1Pv_2e_11](x_1,y_1)∫_| y |^| y - y_1 |e^izs_2 s_2 x_1 y_1 z + 1/2∫_^3 e^itz^2 z^2 χ_0(z^2) ∫_| x |^| x - x_1 |e^izs_1ds_1[e_11v_1Pv_2e_11](x_1,y_1)e^iz | y |/2iz x_1 y_1 z + 1/4∫_^3 e^itz^2 z^2 χ_0(z^2) ∫_| x |^| x - x_1 |e^izs_1ds_1[e_11v_1Pv_2e_11](x_1,y_1)∫_| y |^| y - y_1 |e^izs_2 s_2 x_1 y_1 z =: I_2,4;1^(1,1)(x,y) +I_2,4;2^(1,1)(x,y) +I_2,4;3^(1,1)(x,y) +I_2,4;4^(1,1)(x,y). By direct computation, ∫_^2 [e_11v_1Pv_2e_11](x_1,y_1) x_1 y_1 = - ‖ V_1 ‖_L^1()e_1e_1^⊤. Hence, by Lemma <ref>, | I_2,4;1^(1,1)(x,y) - ‖ V_1 ‖_L^1()/4√(π)/√(-it) e^-ix^2/4te_1e^-iy^2/4te_1^⊤|≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩. For the terms I_2,4;2^(1,1), I_2,4;3^(1,1), the additional factor of z allows to invoke Lemma <ref>, |∫_ e^itz^2+iz (| x | + s_2)zχ_0(z^2) z |≤ C| t |^-3/2⟨ x ⟩⟨ s_2⟩, |∫_ e^itz^2+iz (s_1 + | y |)zχ_0(z^2) z |≤ C| t |^-3/2⟨ y ⟩⟨ s_1⟩ . Thus, we infer from the exponential decay of v_1 and v_2 that | I_2,4;2^(1,1)(x,y) | + | I_2,4;3^(1,1)(x,y) |≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩. For the term I_2,4;4^(1,1), we can use non-stationary phase to conclude the same bound. Hence, we have | I_2,4^(1,1)(x,y) - ‖ V_1 ‖_L^1()/4√(π)/√(-it) e^-ix^2/4te_1e^-iy^2/4te_1^⊤|≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩. Thus, it remains to prove that the other terms I_2,4^(1,2), I_2,4^(2,1), I_2,4^(2,2) have the better (| t |^-3/2⟨ x ⟩⟨ y ⟩) weighted decay estimate to finish the proposition. We first treat the term I_2,4^(1,2)(x,y) =1/2i∫_^3 e^itz^2 z χ_0(z^2) e^iz | x - x_1 | [e_11v_1Pv_2e_22](x_1,y_1)e^-√(z^2+2μ)| y - y_1 |/-2√(z^2+2μ) x_1 y_1 z. By Lemma <ref> and Lemma <ref>, |∫_ e^itz^2+iz(| x-x_1|)zχ_0(z^2) e^-√(z^2+2μ)| y - y_1 |/-2√(z^2+2μ) z |≤ C | t |^-3/2⟨ x ⟩⟨ x_1 ⟩. Hence, using the decay assumptions on v_1 and v_2, we conclude that | I_2,4^(1,2)(x,y) |≤ C| t |^-3/2⟨ x ⟩⟨ y ⟩. The same bound holds for the term I_2,4^(2,1) and we will skip the details. Finally, we are left with I_2,4^(2,2)(x,y) = ∫_^3 e^itz^2 z^2 χ_0(z^2) e^-√(z^2+2μ)| x - x_1 |/-2√(z^2+2μ)[e_22v_1Pv_2e_22](x_1,y_1)e^-√(z^2+2μ)| y - y_1 |/-2√(z^2+2μ) x_1 y_1 z. By direct computation using (<ref>), [e_22v_1Pv_2e_22](x_1,y_1) = 1/‖ V_1 ‖_L^1()[V_2e_2](x_1)[V_2e_2]^⊤(y_1), and by Lemma <ref> and Lemma <ref>, we have the uniform estimate |∫_e^itz^2 z^2 χ_0(z^2) e^-√(z^2+2μ)| x - x_1 |/-2√(z^2+2μ)e^-√(z^2+2μ)| y - y_1 |/-2√(z^2+2μ)dz |≤ C_μ| t |^-3/2. Hence, by exchanging the order of integration, we conclude that | I_2,4^(2,2)(x,y) |≤ C | t |^-3/2. Thus, we conclude (<ref>) by summing over the four terms. Finally, we are ready to complete the proof of the local decay estimate (<ref>). 
We sum the leading contributions of the spectral representation of e^itχ_0( - μ I)P_s^+ in (<ref>) by invoking Proposition <ref>, Proposition <ref>, Proposition <ref>, and Proposition <ref> to obtain F_t^0 - e^itμ/π i(i/2η F_t^1 + 1/η‖ V_1 ‖_L^1() F_t^2 + 1/η‖ V_1 ‖_L^1() F_t^3 + (2i/‖ V_1 ‖_L^1()+2 | c_0 |^2 /i‖ V_1 ‖_L^1()) F_t^4 ) = e^itμ/√(-4 π i t)( - [c_0 e_1 - Ψ(x)][σ_3 Ψ(y) - c_0e_1]^* - [c_0 e_1 - Ψ(x)][e^ iy^2/4tc_0e_1]^*. . + [e^-ix^2/4tc_0 e_1][σ_3 Ψ(y) - c_0e_1]^* + | c_0 |^2 e^-ix^2/4t e^-iy^2/4te_1e_1^⊤) = e^itμ/√(-4 π i t)( Ψ(x) [σ_3 Ψ(y)]^* + (e^-i x^2/4t-1)c_0 [σ_3 Ψ(y)]^* + (e^-i y^2/4t-1) Ψ(x) [ c_0e_1]^* . . + (1-e^-i x^2/4t - e^-i y^2/4t + e^-ix^2/4t e^-iy^2/4t)| c_0 |^2 e_1e_1^⊤), where we use the cancellation F_t^0 - e^itμ/π i2i/‖ V_1 ‖_L^1() F_t^4 = 0 in the first equality. We note that the first term gives us the finite rank operator F_t^+(x,y) = e^itμ/√(-4 π i t)Ψ(x) [σ_3 Ψ(y)]^*, and we show that the last three terms satisfy the better decay rate. Using, | 1 - e^-i x^2/4t|≤x^2/4| t|, and the fact that Ψ∈ L^∞() × L^∞(), we have |e^itμ e^iπ/4/2√(π)√(t)(e^-i x^2/4t - 1)c_0 e_1[σ_3 Ψ(y)]^* |≲| t |^-3/2⟨ x ⟩^2, and similarly |e^itμ e^iπ/4/2√(π)√(t)(e^-i y^2/4t-1) c_0Ψ(x) e_1^⊤|≲| t |^-3/2⟨ y ⟩^2. For the last term, we have | 1-e^-i x^2/4t - e^-i y^2/4t + e^-ix^2/4t e^-iy^2/4t| = | 1 - e^-i x^2/4t|| 1 - e^-i y^2/4t|≲| t |^-2⟨ x ⟩^2 ⟨ y ⟩^2. Thus, the leading contribution to e^itχ_0( - μ I)P_s^+ is F_t^+. § INTERMEDIATE AND HIGH ENERGY ESTIMATES In order to complete the proof of Theorem <ref>, we also need to prove the dispersive estimates when the spectral variable is bounded away from the thresholds ±μ. As usual, we focus on the positive semi-axis [μ,∞) of the essential spectrum and prove the dispersive estimates for energies λ > μ. The negative semi-axis (-∞,-μ] can be treated by symmetry of . We recall from Section 2 that the kernel of the limiting resolvent operator for _0 has the formula _0^±(z)(x,y) := (_0-(z^2+μ± i0))^-1 = [ ±ie^± i z | x -y |/2 z 0; 0 -e^-√(z^2+2μ)| x - y |/2 √(z^2 + 2μ) ], ∀ 0 < z <∞. From this, we have the following bound ‖_0^±(z) ‖_L^1 × L^1 → L^∞× L^∞≤ C | z |^-1. Hence, for sufficiently large z, the perturbed resolvent ^±(z) can be expanded into the infinite Born series ^±(z) = ∑_n=0^∞_0^±(z)(-_0^±(z))^n. More precisely, since the operator norm L^1 × L^1 → L^∞× L^∞ in the n-th summand above is bounded by C | z| ^-1 (C‖‖_1 | z|^-1)^n, the Born series converges in the operator norm whenever | z| > z_1 := 2C‖‖_L^1 × L^1. We define the high-energy cut-off by χ_h(z) := 1-χ(z), where χ(z) is a standard smooth even cut-off supported on [-z_1,z_1] satisfying χ(z) = 1 for | z |≤z_1/2 and χ(z) = 0 for | z |≥ z_1. We insert the cut-off and the Born series expansion into the spectral representation e^itχ_h(-μ I) P_s^+ and look to bound the following |⟨ e^itχ_h(-μ I) P_s^+ u⃗,v⃗⟩| = |∫_0^∞ e^itz^2z χ_h(z^2) ⟨ [^+(z) - ^-(z)]u⃗,v⃗⟩ z | ≤ C ∑_±∑_n=0^∞|∫_0^∞ e^itz^2z χ_h(z^2) ⟨_0^±(z)(_0^±(z))^nu⃗,v⃗⟩ z|, where u⃗,v⃗∈() ×(). From <cit.>, we have the following dispersive estimates: Under the same hypothesis as Theorem <ref>, we have ‖ e^itχ_h(-μ I)P_s^+ u⃗ ‖_L^∞()× L^∞()≲| t |^-1/2‖u⃗ ‖_L^1() × L^1(), and ‖⟨ x ⟩^-1e^itχ_h(-μ I) P_s^+u⃗ ‖_L^∞()× L^∞()≲| t |^-3/2‖⟨ x ⟩u⃗ ‖_L^1() × L^1(), for any | t |≥ 1. For (<ref>), see the proof of <cit.>, and for (<ref>), see the proof of <cit.>. Note that the high-energy dispersive estimate holds irrespective of the regularity of the thresholds ±μ. Let z_0>0 be the constant from Proposition <ref>. 
It may happen that z_1 is strictly larger than z_0. In this case, we need to derive estimates analogous to the above proposition in the remaining intermediate energy regime [-z_1,-z_0]∪[z_0,z_1]. To this end, we set χ_m(z) to be the intermediate energy cut-off given by χ_m(z) := 1 - χ_0(z) - χ_h(z), where χ_0(z) was the cut-off defined in the previous section in Proposition <ref>. For any | t |≥ 1, we have ‖ e^itχ_m(-μ I)P_s^+ u⃗ ‖_L_x^∞()× L_x^∞()≲| t |^-1/2‖u⃗ ‖_L_x^1() × L_x^1(), and ‖⟨ x ⟩^-1e^itχ_m(-μ I) P_s^+u⃗ ‖_L_x^∞()× L_x^∞()≲| t |^-3/2‖⟨ x ⟩u⃗ ‖_L_x^1() × L_x^1(). Before proving the above proposition, we need the following lemmas for pointwise bounds and operator norm bounds on the resolvent operators and its derivatives. The first lemma follows immediately from the expression (<ref>) and the triangle inequality || x - x_1 | - | x ||≤| x_1 |. Let γ_0 > 0. For every z > γ_0, and k ∈{0,1,2}, we have |∂_z^k _0^±(z)(x,y) |≤ C γ_0^-1-k⟨ x - y ⟩^k, and hence ‖∂_z^k _0^±(z)(x,·) ‖_X_-(1/2+k)-≤ C γ_0^-1-k⟨ x ⟩^k. Moreover, define _±(z)(x,x_1) = [ e^∓ i z | x | 0; 0 1 ]_0^±(z)(x,x_1)=[ ±ie^± i z (| x - x_1 | - | x |)/2 z 0; 0 -e^-√(z^2+2μ)| x - x_1 |/2 √(z^2 + 2μ) ]. Then, for any k ≥ 0, sup_x ∈|∂_z^k ^±(z)(x,x_1)|≤ C γ_0^-1-k| x_1 |. With these bounds, we are able to give operator norm bounds on the perturbed resolvent via the resolvent identity. Let γ_0 > 0. We have sup_| z | > γ_0‖∂_z ^±(z) ‖_X_3/2+→ X_-3/2-≲ 1, sup_| z | > γ_0‖∂_z^2 ^±(z) ‖_X_5/2+→ X_-5/2-≲ 1. By Lemma <ref>, for any | z | > γ_0, we have ^±(z) = (I+_0^±(z))^-1_0^±(z) =: S^±(z)^-1_0^±(z), as a bounded operator from X_1/2+ to X_-1/2-. Note that S^±(z) is boundedly invertible on X_-σ for any σ>0. By differentiation, we have ∂_z ^±(z) = -S^±(z)^-1∂_z_0^±(z) S^±(z)^-1_0^±(z) + S^±(z)^-1∂_z _0^±(z). Moreover, as a multiplication operator, :X_-σ→ X_σ is bounded for any σ>0 due to the exponential decay of . By Lemma <ref>, ∂_zR_0^±(z):X_3/2+→ X_-3/2- is bounded and since the embedding X_-1/2-⊂ X_-3/2- is continuous, we infer the bound (<ref>) by taking composition. By a similar argument, ‖∂_z^2 ^±(z) ‖_X_5/2+→ X_-5/2-≲ 1. By iterating the second resolvent identity, we write the perturbed resolvent as a finite sum ^±(z) = _0^±(z) - _0^±(z)_0^±(z) + _0^±(z)^±(z)_0^±(z), and we write e^itχ_m( - μ I)P_s^+(x,y) = ∑_j=1^3 ∫_0^∞ e^itz^2zχ_m(z^2)(-1)^j+1(_j^+(z) - _j^-(z))(x,y)dz, with _1^±(z) = _0^±(z), _2^±(z) = _0^±(z)_0^±(z), _3^±(z) = _0^±(z)^±(z)_0^±(z). Hence, to prove (<ref>) and (<ref>), it is sufficient to establish the estimates sup_±sup_j=1,2,3|∫_0^∞ e^itz^2zχ_m(z^2)_j^±(z)(x,y) z |≲min{| t |^-1/2,| t |^-3/2⟨ x ⟩⟨ y ⟩}. The term involving _1^± is handled by the earlier Proposition <ref>, while the second term involving _2^± can be treated analogously as in Proposition <ref>. We refer the reader to <cit.> and <cit.> for similar computations. For the term involving _3^±, we first write _0^±(z)(s_1,s_2) = [ e^± i z| s_1 | 0; 0 1 ]_±(z)(s_1,s_2), where the operator _±(z) was defined in (<ref>). 
Then, using that the kernel _0^±(z)(x,y) is symmetric in x and y variables, and using the matrix identity e_jj[ a_11 a_12; a_21 a_22 ]e_kk = a_jke_je_k^⊤, j,k ∈{1,2}, we compute the following kernel identity _3^±(z)(x,y) = ∫_^2_0^±(x,x_1)[^±(z)](x_1,y_1)_0^±(y,y_1) x_1 y_1 = [ e^± iz | x | 0; 0 1 ]∫_^2^±(x,x_1)[^±(z)](x_1,y_1)^±(y,y_1) x_1 y_1[ e^± iz | y | 0; 0 1 ] = e^± iz (| x | + | y |)⟨ (^±)^*(z)(x,·)e_1,^±(z)^±(z)(y,·)e_1⟩ e_1e_1^⊤ + e^± iz | x |⟨ (^±)^*(z)(x,·)e_2,^±(z)^±(z)(y,·)e_1⟩ e_1e_2^⊤ + e^± iz | y |⟨ (^±)^*(z)(x,·)e_1,^±(z)^±(z)(y,·)e_2⟩ e_2e_1^⊤ + ⟨ (^±)^*(z)(x,·)e_2,^±(z)^±(z)(y,·)e_2⟩ e_2e_2^⊤ =: e^± iz (| x | + | y |) A_1^±(z,x,y) + e^± iz | x | A_2^±(z,x,y) + e^± iz | y |A_3^±(z,x,y) + A_4^±(z,x,y). We plug this identity into the left hand side of (<ref>), and hence it will be sufficient to provide the bounds |∫_0^∞ e^itz^2 ± iz r zχ_m(z^2) A_k^±(z,x,y) z |≲min{| t |^-1/2,| t |^-3/2⟨ r ⟩}, k∈{1,…,4}, where r can represent 0 or | x|, | y|, or the sum of both variables. For the case k=1, by Lemma <ref>, we have that |∫_0^∞ e^itz^2 ± iz (| x | + | y |) zχ_m(z^2) A_1^±(z,x,y) z |≤ C | t |^-1/2‖∂_z (zχ_m(z^2) A_1^±(z,x,y) )‖_L_z^1(). Since the term zχ_m(z^2) is smooth and has compact support, we only need to track the derivatives when they fall onto either ^±(z) or ^±(z). In any case, thanks to the exponential decay of , and the bounds (<ref>), (<ref>) from the previous lemmas, we have the following uniform bound sup_±sup_z ∈ (χ_m)sup_j,k =1,2|∂_z ⟨ (^±)^*(z)(y,·)e_j, ^±(z)^±(z)(x,·)e_k⟩| ≲sup_±sup_z ∈ (χ_m)sup_j,k =1,2‖√(||)(x_1) (|^±(z)(x_1,x_2) | + |∂_z ^±(z)(x_1,x_2) |) √(||)(x_2) ‖_L_x_2^2 → L_x_1^2 ·‖√(||)(x_1) (|^±(z)(x,x_1)| + |∂_z^±(z)(x,x_1)|) e_j‖_L_x_1^2 ·‖√(||)(x_2)(|^±(z)(x_2,y)| + |∂_z^±(z)(x_2,y)|) e_k‖_L_x_2^2 ≲ 1, for all x,y ∈. To prove the weighted dispersive estimate, we invoke the stronger estimate in Lemma <ref>: |∫_0^∞ e^itz^2 ± iz (| x | + | y |) zχ_m(z^2) A_1^±(z,x,y) z |≤ C | t |^-3/2‖ [∂_z^2 ± i(| x | + | y |)∂_z] (χ_m(z^2) A_1^±(z,x,y) )‖_L_z^1() Here, we can apply the same argument as in (<ref>) for the two derivatives bound on A_1^± using the estimates (<ref>) and (<ref>), whereas the bound on one derivative for A_1^± leads to the weights ⟨ x ⟩⟨ y ⟩. Thus, we prove (<ref>) for k=1. The other cases follow by the same argument and we are done. Finally, we conclude with the proof of Theorem <ref>. By combining the estimates from Proposition <ref>, Proposition <ref>, and Proposition <ref>, we have established the bounds ‖ e^itP_s^+ ‖_L_x^∞()× L_x^∞()≲| t |^-1/2‖ ‖_L_x^1() × L_x^1(), as well as ‖⟨ x ⟩^-2 (e^itP_s^+ - F_t^+) ‖_L_x^∞()× L_x^∞()≲| t |^-3/2‖ ‖_L_x^1() × L_x^1(), for any := (u_1,u_2)^⊤∈() ×() and | t |≥ 1, with F_t^+ given by (<ref>). By Remark <ref>, we can similarly deduce that the unweighted dispersive estimate for the evolution e^itP_s^- using the identity (<ref>). On the other hand, for the weighted estimate, we find that the leading contribution to e^itP_s^- is given by F_t^-(x,y) = σ_1 F_-t^+(x,y)σ_1 = -e^-itμ/√(4 π i t)[σ_1Ψ(x)][σ_3σ_1Ψ(y)]^*, where we used the anti-commutation identity σ_3 σ_1 = - σ_1 σ_3. Thus, we conclude the local decay estimate (<ref>) and the formula (<ref>) by setting F_t := F_t^+ + F_t^-. § NEUMANN SERIES Let A be an invertible operator and B be a bounded operator satisfying ‖ B ‖ < ‖ A^-1‖^-1. Then, A-B is invertible with (A-B)^-1 =A^-1∑_n=0^∞ (BA^-1)^n = A^-1 + A^-1BA^-1 + A^-1BA^-1BA^-1 + ⋯, and ‖ (A-B)^-1‖≤ (‖ A^-1‖^-1 - ‖ B ‖)^-1. By the hypothesis ‖ B ‖ < ‖ A^-1‖^-1, we have ‖ A^-1B‖ <1. 
Consider the identity (A-B)^-1 = (I-A^-1B)^-1A^-1. The term on the right hand side can be written in the usual Neumann series (I-A^-1B)^-1 = ∑_n=0^∞ (A^-1B)^n. Thus, by multiplying A^-1, we deduce (<ref>). Note that the argument also holds true for (A-B)^-1 = A^-1(I-BA^-1)^-1. Now, since we have the estimate ‖ (I- A^-1B)^-1‖≤ (1 - ‖ A^-1B‖)^-1, we deduce (<ref>) by the sub-multiplicative property for operator norms.
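As a quick numerical illustration of the bound (<ref>): if ‖ A^-1‖ = 1 and ‖ B ‖ = 1/2, then the hypothesis holds, the series converges, and ‖ (A-B)^-1‖≤ (1 - 1/2)^-1 = 2, which is exactly the value of the dominating geometric series ∑_n=0^∞‖ A^-1‖‖ A^-1B ‖^n = ∑_n=0^∞ 2^-n = 2.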
A decomposition framework for gas network design Yijiang LiSchool of Industrial and Systems Engineering, Georgia Institute of Technology ([email protected]), Santanu S. DeySchool of Industrial and Systems Engineering, Georgia Institute of Technology ([email protected]) and Nikolaos V. SahinidisSchool of Industrial and Systems Engineering and School of Chemical and Biomolecular Engineering, Georgia Institute of Technology ([email protected]) =========================================================================================================================================================================================================================================================================================================================================================================================================================== Gas networks are used to transport natural gas, which is an important resource for both residential and industrial customers throughout the world. The gas network design problem is a challenging nonlinear and non-convex optimization problem. In this paper, we propose a decomposition framework to solve this problem. In particular, we utilize a two-stage procedure that involves a convex reformulation of the original problem. We conduct experiments on a benchmark network to validate and analyze the performance of our framework. § INTRODUCTION Natural gas is a very important and common resource for both residential and industrial customers around the world. In the United States alone, a total of 27.7 trillion cubic feet of natural gas were delivered to 77.3 million customers in 2020 (<cit.>). To transport natural gas to meet this demand, a natural gas transportation system has been developed which was worth $187.9 billion in 2020 (<cit.>). A gas transportation system is usually modeled as a directed graph where the nodes can be customers with demands, manufacturers with supplies, or in-nodes that do not have either demands or supplies, while the arcs represent various system components. Modeling and optimizing gas transportation systems is very challenging due to the complex nature of the physical principles governing the operations of the system components. Generally, these models involve nonlinear and non-convex constraints. Even simple models of the system components lead to challenging problems, as the scale of realistic instances is quite large compared to what state-of-the-art solvers can tackle. In general, the system components (the arcs) can be divided into pipes, short pipes, resistors, compressors, valves, and control valves. Pipes constitute the majority of the system components. Control valves are sometimes referred to as regulators as well. Each of the system components serves a different role. The components can be grouped into passive components and active components. Pipes, short pipes, and resistors are passive system components and do not have on and off states. Compressors, control valves, and valves are active system components with on and off states. There are several types of gas network optimization problems. Most problems involve the decision on the flowrates in the arcs and the potentials at the nodes. Commonly used benchmarking instances are based on the Belgian network (<cit.> and <cit.>) of size up to 23 nodes and the Gaslib networks of various sizes up to 4197 nodes (<cit.>). In this paper, we study the design problem, which considers a given set of pipe locations. 
The main decisions involve choosing the pipe diameters, the states for valves, compressors, and control valves, and flowrates and potentials to transport gas to satisfy the given demand and supply scenarios while minimizing the network construction costs. We call this version of design problem the design-from-scratch variant, different from the reinforcement version; details are provided in Section <ref>. The demand and supply scenarios are commonly referred to as nominations. The main contributions of this paper are the following. We propose a decomposition framework that involves an iterative procedure of solving a convex integer master problem and a verification subproblem for the solutions obtained from the master problem, and a binary search to minimize the construction cost of the pipes. We use the Gaslib-582 network with 582 nodes to validate our framework. To the best of our knowledge, this is the first paper that solves the gas network design problem on such large-scale instances. Previous literature (reviewed below in Section <ref>) on design-from-scratch version of the problem has not considered any instance with over 500 nodes, and these works do not simultaneously consider active elements, discrete diameter choices, and general (non-tree) underlying networks. The structure of the remainder of this paper is as follows. We give a summary of the relevant literature in Section <ref>. Section <ref> presents the technical background and a compact formulation for the design problem while Section <ref> presents the decomposition framework. Implementation considerations and numerical experiments to validate the decomposition framework are presented in Section <ref>. Lastly, in Section <ref>, we present concluding remarks and future research directions. § LITERATURE REVIEW Gas network systems have been an important topic of study in the past several decades. As the relevant literature is rather extensive, here we review only works that are most closely related to ours. For a detailed review of the literature, we refer the interested readers to <cit.> and <cit.>. In addition, <cit.> provides an overview on the modeling and common solution approaches in gas network systems. Among the types of problems studied, we focus below on two relevant problem types, the nomination validation problem and design problem. In the nomination validation problem, we assume a nomination is given. In the case where active system components are not considered, the problem aims to evaluate whether the existing network topology is feasible with respect to the given nomination. In the case where active system components are involved, the problem aims to determine whether there exist feasible configurations for the active system components along with rest of the components such that the resulting network is feasible with respect to the nomination. Work in this area includes <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. In particular, <cit.> presents a convexification scheme to find the convex hull of a “Y" junction in the network to deal with the non-convex constraints arising from the governing physical principles. The paper by <cit.> presents four approaches for solving nomination validation problems. The first approach is a piece-wise linear approximation scheme that utilizes the generalized incremental model to linearize the nonlinear constraints; the approximation is improved iteratively by adding more linearization points. 
The second approach is a spatial branch-and-bound algorithm, which iteratively partitions the feasible region and refines the estimations and relaxations of the original problem in each partition to obtain dual bounds on the solution. The next approach in <cit.> is called RedNLP, a two-stage procedure, in which heuristics and reformulations are employed to find promising configurations of the active system components and the feasibility of configurations is checked in the second stage. The last approach considered is called the smoothing procedure, commonly used for mathematical programs with equilibrium constraints, to model the discrete decisions corresponding to the configurations of the active system components with continuous variables. Numerical experiments and comparisons across the four approaches are performed on the Gaslib-582 network. Overall, the spatial branch-and-bound outperforms the other three approaches. The papers by <cit.> and <cit.> consider additional constraints of satisfying heat-power demand and supply in the nomination validation problem. In both works, an alternating direction method is applied to a linearized approximation model and numerical experiments are performed on the Gaslib-4197 network. The design problem can be divided into the reinforcement problem and the design-from-scratch problem. In the reinforcement problem, it is assumed that an existing topology is given. The problem considers the options to install additional system components, mostly pipes and compressors, to satisfy a given nomination while minimizing the construction costs of the new system components. The cost of a new pipe is usually a function of its diameter and, as a result, diameter selection becomes a decision for the pipes. The design-from-scratch problem assumes no existing pipes in the network and makes decisions on the diameters for all pipes. In both the reinforcement problem and the design-from-scratch problem, the diameter choices can be continuous or discrete. Recent works that study the reinforcement problem include <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. In particular, <cit.> considers the reinforcement problem with continuous diameters and utilizes a two-stage formulation. The first-stage problem is a convex nonlinear program to compute favorable diameter choices and flowrates, while the second-stage problem checks whether the first-stage solution is feasible with respect to the nomination by solving for the potential at each node. This convex program was formally introduced in <cit.> and adapted to solve a network problem in <cit.>. The numerical experiments were performed on the Belgian network. The work in <cit.> considers also a reinforcement problem with discrete diameters. To deal with the non-convexity from the governing physical principles, the paper considers reformulations and a convex relaxation which is second-order conic representable. These authors also utilize perspective strengthening that is studied in <cit.> and <cit.> to enhance the relaxation. Numerical experiments were performed on both Belgian and Gaslib networks. For the design-from-scratch problem, <cit.> considers discrete diameter choices without any active components. A bilevel formulation is proposed and in solving the formulation, the discrete variables corresponding to the discrete diameter choices are first transformed to continuous variables. 
Subsequently, the lower-level problem is reformulated via conjugate duality while a trust region algorithm is developed for the upper-level problem. Numerical experiments were performed using two networks of size up to 14 nodes. The paper by <cit.> study a variant of the design-from-scratch problem with continuous diameter choices and solve the model by a bundle method with generalized gradient. Numerical experiments are performed with the Belgian network. Lastly, <cit.> consider the problem on a tree-shaped network with continuous diameter and “approximate discrete diameter" obtained from the optimal continuous diameter. These authors develop an iterative procedure to solve the problem; their approach contracts the tree (network) converting the original tree into a single equivalent arc. Numerical illustrations of this procedure were performed on networks of size up to 36 nodes. § PROBLEM DESCRIPTION §.§ Technical background In this section, we provide necessary technical background on gas networks that we will need for our formulations later. For more details, we refer the interested readers to <cit.>, <cit.>, <cit.>, and <cit.>. For the remainder of this paper, we use a directed graph G = (𝒱, 𝒜) to represent a gas network, where each arc a ∈𝒜 represents a system component of the network and each node v ∈𝒱 can be a customer, a manufacturer, or an in-node. For each node v ∈𝒱 we track its pressure p_v and potential π_v, where the pressure and potential are related by the equation: π_v = p_v^2. We denote lower and upper bounds on the potential at a node v by π_v^min and π_v^max respectively. Pipes: as mentioned earlier, a majority of the arcs in gas networks is pipes. A pipe a = (v,w) is specified by its length l, diameter D, and material properties. The flowrate in arc a, denoted as q_a, is upper bounded by a value q_a^max which is determined by the cross-sectional area, A := π D^2/4, and material properties. We assume a linear relation between the value of q_a^max and the cross-sectional area, i.e., q_a^max∝ A . The gas flow in pipe a = (v,w) is described by a set of partial differential equations derived from conservation of mass and conservation of momentum which, under certain assumptions, can be simplified to π_v - π_w = p_v^2 - p_w^2 = α_a | q_a | q_a, where α_a is the pressure loss coefficient. The pressure loss coefficient, α_a, depends on the material and diameter of the pipe and a few properties of the natural gas. As pipes allow bi-directional flow, the sign of the potential drop depends on the direction of the flow resulting in the absolute value of the flowrate variable q_a in the equation. Short pipes: short pipes are used for modeling purposes to handle complicated contract situations and are modeled as lossless pipes, i.e., a short pipe a is a regular pipe with α_a = 0. Resistors: resistors are commonly used to model pressure or potential drop. In this work, we assume resistors behave in the same way as pipes in terms of potential drop. We refer to <cit.> for alternative ways to model resistors. Compressors: compressors are used to increase potential along an arc. There are many models proposed for compressors. In this paper, we adopt the model used in <cit.>. For a compressor a = (v,w), we use a binary variable z_a to indicate the on and off states where z_a = 1 indicates the compressor is on and z_a = 0 otherwise. When the compressor is on, it allows flow from v to w and increases the potential from v to w. When it is off, it does not allow any flow. 
As a result, we have the following relations for a compressor: π_v - π_w ≤ 0, q_a ≥ 0, if z_a = 1 q_a = 0, if z_a = 0. Additionally, there are limits on potential ratio as follows: κ_a^minπ_v ≤π_w ≤κ_a^maxπ_v, where κ_a^min = 1 and κ_a^max≥ 1 are typical for a compressor (see <cit.>). Furthermore, when a compressor a = (v,w) is on, it can impose additional bounds on the potentials at nodes v and w. We denote those bounds by (π_v^min)^' and (π_w^max)^'. Valves: valves are incorporated in the network to join or separate two nodes. They allow bi-directional flow when they are on. A binary variable z_a is used to model the on and off states of valves. For a valve a = (v,w), z_a = 1 indicates the valve is on and z_a = 0 otherwise. When a valve is on, the potentials at the two end nodes have to be equal. When a valve is off, it does not allow any flow. Formally, the constraints for a valve a = (v,w) are expressed as follows: π_v = π_w, q_a arbitrary, if z_a = 1 q_a = 0, π_v, π_w arbitrary, if z_a = 0. Control valves: contrary to a compressor, the presence of a control valve in the network results in potential relief. We adopt a similar model that is used for compressors. For a control valve a = (v,w), a binary variable z_a is used to indicate its states. When it is on, it allows flow from v to w and causes a potential relief from v to w. When it is off, it does not allow any flow. We have the following model for the control valve: π_v - π_w ≥ 0, q_a ≥ 0, if z_a = 1 q_a = 0, if z_a = 0. The limits on the potential relief are given by κ_a^minπ_v ≤π_w ≤κ_a^maxπ_v, where κ_a^min > 0 and κ_a^max≤ 1 are typical for a control valve (see <cit.>). A control valve (v,w) can impose additional bounds on the potentials at nodes v and w when it is on. Similar to compressors, we denote those bounds by (π_v^min)^' and (π_w^max)^'. §.§ Design problem We consider the design of a gas network for a given set of pipe locations (arcs), whose diameters we must decide. As discussed in the previous section, we have different system components and thus we divide the arc set 𝒜 into 𝒜 = A_p ∪ A_sp∪ A_r ∪ A_cp∪ A_v ∪ A_cv where A_p, A_sp, A_r, A_cp, A_v, and A_cv are the set of pipes, short pipes, resistors, compressors, valves, and control valves respectively. We consider discrete diameter choices in our setting and denote the diameter choices by the set [n] := {1,2,…, n}. We use binary variables z_a,i, a ∈ A_p and i ∈ [n], to model the discrete diameter choices of the pipes. We further denote the length and diameter of the pipe a ∈ A_p with the diameter choice i ∈ [n] by l_a and D_a,i, respectively, and we use the same cost function, f_a,i, that is used in <cit.> and <cit.>, namely f_a,i = l_a(1.04081^-6D_a,i^2.5 + 11.2155). Note a trade-off in the selection of the diameters. A larger diameter, on one hand, leads to a smaller potential drop coefficient and a higher maximum flowrate, while, on the other hand, leads to a larger cost f_a,i. We introduce a flow direction variable x_a^dir∈{0,1} for a ∈ A_p ∪ A_sp∪ A_r ∪ A_v to account for the bidirectional flow. Recall that compressors and control valves only allow one flow direction. For a pipe a ∈ A_p, as a result of the multiple diameter choices, we have multiple flowrate variables q_a,i for each i ∈ [n]. We decompose the flow into positive flow and negative flow, i.e., for a ∈ A_p, q_a,i = q_a,i^+ - q_a,i^- and for a ∈ A_sp∪ A_r ∪ A_v, q_a = q_a^+ - q_a^-. 
The maximum flowrate limit q_a^max can be defined individually for each diameter choice i as q_a,i^max by relation (<ref>). Similarly, the potential drop coefficients α_a,i can be computed for each diameter choice. Furthermore, for a node v ∈𝒱, we denote the set of incoming arcs and outgoing arcs by A_2(v) and A_1(v), respectively, i.e., A_2(v) = {a ∈𝒜| a=(w,v)} and A_1(v) = {a∈𝒜| a=(v,w)}. We use d_v to denote the demand or supply at a node v ∈𝒱. With this notation and technical background, we give a formulation to the design problem: min ∑_a ∈ A_p∑_i ∈ [n] f_a,i z_a,i s.t. ∑_a ∈ A_2(v) \ A_p q_a^+ - ∑_a ∈ A_2(v) \ A_p ∪ A_cp∪ A_cv q_a^- - (∑_a ∈ A_1(v) \ A_p q_a^+ - ∑_a ∈ A_1(v) \ A_p ∪ A_cp∪ A_cv q_a^- ) + ∑_i ∈ [n]∑_a ∈ A_2(v) ∩ A_p (q_a,i^+ - q_a,i^-) - ∑_i ∈ [n]∑_a ∈ A_1(v) ∩ A_p (q_a,i^+ - q_a.i^-) = d_v, v ∈𝒱 π_v^min≤π_v ≤π_v^max, v ∈𝒱 x_a^dir∈{0,1}, a ∈ A_p ∪ A_sp∪ A_v ∪ A_r 0 ≤ q_a,i^-, q_a,i^+ ≤ q_a,i^maxz_a,i, ∀ i ∈ [n], a ∈ A_p π_v - π_w = ∑_i ∈ [n]α_a,i (q_a,i^+)^2 - ∑_i ∈ [n]α_a,i (q_a,i^-)^2, a ∈ A_p 0 ≤ q_a,i^+ ≤ q_a,i^max x_a^dir, a ∈ A_p, i ∈ [n] 0 ≤ q_a,i^- ≤ q_a,i^max (1 - x_a^dir), a ∈ A_p, i ∈ [n] ∑_i ∈ [n] z_a,i = 1, a ∈ A_p π_v = π_w, a ∈ A_sp 0 ≤ q_a^+ ≤ q_a^max x_a^dir, a ∈ A_sp 0 ≤ q_a^- ≤ q_a^max (1 - x_a^dir), a ∈ A_sp π_v - π_w = α_a (q_a^+)^2 - α_a (q_a^-)^2, a ∈ A_r 0 ≤ q_a^+ ≤ q_a^max x_a^dir, a ∈ A_r 0 ≤ q_a^- ≤ q_a^max (1 - x_a^dir), a ∈ A_r κ_a^minπ_v - M(1-z_a) ≤π_w ≤κ_a^maxπ_v + M(1-z_a), a ∈ A_cp∪ A_cv 0 ≤ q_a^+ ≤ q_a^max z_a, a ∈ A_cp∪ A_cv (π_v^min)^' - M(1-z_a) ≤π_v, a = (v,w) ∈ A_cp∪ A_cv π_w ≤ (π_w^max)^' + M(1-z_a), a = (v, w) ∈ A_cp∪ A_cv π_v - π_w ≤ M(1-z_a), a ∈ A_v π_v - π_w ≥ -M(1-z_a), a ∈ A_v 0 ≤ q_a^-, q_a^+ ≤ q_a^maxz_a, a ∈ A_v 0 ≤ q_a^+ ≤ q_a^max x_a^dir, a ∈ A_v 0 ≤ q_a^- ≤ q_a^max (1 - x_a^dir), a ∈ A_v. In this model, the objective function (<ref>) minimizes the construction cost of the pipes, also known as the budget. The scalar M represents a large number. For the constraints that involve M, we can alternatively write them in a nonlinear fashion, which eliminates the need for big-Ms. In particular, for constraints (<ref>) and (<ref>)-(<ref>), we have z_a κ_a^minπ_v ≤π_w, a ∈ A_cp∪ A_cv z_a π_w ≤κ_a^maxπ_v, a ∈ A_cp∪ A_cv z_a (π_v^min)^'≤π_v, a ∈ A_cp∪ A_cv z_a π_w ≤ (π_w^max)^', a ∈ A_cp∪ A_cv, and for constraints (<ref>)-(<ref>), we have (π_v - π_w)z_a = 0, a ∈ A_v . We group the constraints in blocks with block names and provide a summary in Table <ref>. We will refer to the corresponding set of constraints by their block name. Note that the above formulation can be extended to the reinforcement problem by considering, for each existing pipe, an additional diameter choice with no cost along with potential loss equation (<ref>) in which the potential loss coefficient is computed based on the diameter of the existing pipe. § DECOMPOSITION FRAMEWORK We now present a decomposition framework to solve the design problem. The decomposition consists of three major components: primal bound loop, binary search on budget, and initial budget search. Before we present the details on each component, we introduce more background on the convex program introduced in <cit.> and adapted in <cit.>, which was mentioned briefly in literature review. §.§ CVXNLP The convex program is called (CVXNLP) in <cit.>; we adopt the same name. We base the discussions of (CVXNLP) on a gas network in contrast to a water network in <cit.> in this section for completeness. 
For a network with only pipes, i.e., 𝒜 = A_p, (CVXNLP) is closely related to the following set of network analysis equations: π_v - π_w = sgn(q_a) ϕ(| q_a | ), a ∈ A_p ∑_a ∈ A_2(v) q_a - ∑_a ∈ A_1(v) q_a = d_v, v ∈𝒱, where sgn(·) is the sign function and ϕ(·) is the potential loss function. In the network analysis equations, (<ref>) is the potential drop equation and (<ref>) is the flow conservation. (CVXNLP) is formally given by min ∑_a ∈ A_pΦ(q_a^+) + Φ(q_a^-) s.t. ∑_a ∈ A_2(v) (q_a^+ - q_a^-) - ∑_a ∈ A_1(v) (q_a^+ - q_a^-) = d_v, v ∈𝒱 0 ≤ q_a^-, q_a^+, a ∈ A_p, where Φ(·) is defined by Φ(q) = ∫_0^q ϕ(q^')dq^'. (CVXNLP) is formally linked to the network analysis equations by the following theorem. If the potential loss function ϕ(·) is strictly monotonically increasing function of flowrate, q, with ϕ(0) = 0, then there exists a solution (π, q) to the network analysis equations if and only if there exists a solution (q̂^+, q̂^-, λ̂, μ̂^+, μ̂^-) to (CVXNLP) where λ, μ^+, and μ^- are dual variables to the flow conservation constraint (<ref>) and bounds constraints (<ref>), respectively. The proof is adapted from a proof in <cit.> and can be found in Appendix <ref>. The monotonicity assumption needed for the theorem to hold is commonly satisfied by gas networks. In addition, as a result of the equivalence between (CVXNLP) and the network analysis equations stated in Theorem <ref>, we can solve the convex (CVXNLP) in lieu of the non-convex network analysis equations and obtain a solution (π, q). As there are no bounds enforced in the network analysis equations for π, we have to perform an additional step to verify that π satisfies the corresponding bounds. §.§ Primal bound loop The equivalence discussed in Section <ref> motivates us to develop a decomposition procedure that solves a variant of (CVXNLP) as a master problem, while a subproblem is used to check for feasibility. We call this procedure the primal bound loop, which checks whether a budget C is feasible with respect to a nomination. The master problem, denoted by (P_m), is based on (CVXNLP) and is as follows: (P_m) min ∑_i ∈ [n]∑_a ∈ A_pα_a,i/3(q_a,i^+)^3 + α_a,i/3(q_a,i^-)^3 + ∑_a ∈ A_rα_a/3 (q_a^+)^3 + α_a/3 (q_a^-)^3 s.t. (<ref>),(<ref>), (<ref>), (<ref>), (<ref>) 0 ≤ q_a^-, q_a^+ ≤ q_a^max, a ∈ A_sp∪ A_r ∑_a ∈ A_p∑_i ∈ [n] f_a,i z_a,i≤ C. In this model, the objective function (<ref>) extends (CVXNLP) to account for multiple flow variables q_a,i^+ for a ∈ A_p and i ∈ [n]. Constraint (<ref>) is the flow conservation. Constraints (<ref>) and (<ref>) on the pipes ensure one diameter choice is selected and the corresponding flow limit is enforced. Constraints (<ref>) and (<ref>) on the active system components ensure flows are only allowed when the corresponding binaries are on. Constraint (<ref>) enforces the flow limit on the short pipes and resistors. The last constraint (<ref>) is a budget constraint on the construction cost of pipes. The above model differs from (CVXNLP) mainly in two ways. Firstly, we have introduced the diameter choices, z_a,i for a ∈ A_p and i ∈ [n] and configurations for the active system components, z_a for a ∈ A_cp∪ A_cv∪ A_v. If the diameter choices and configurations of the active system components are fixed, (P_m) resembles the original (CVXNLP). Secondly, we have a constraint to upper bound the construction cost of pipes by the budget C. 
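To make the connection to the objective (<ref>) explicit: by (<ref>), a pipe with potential loss coefficient α_a,i has ϕ(q) = α_a,i q^2 for q ≥ 0, so the primitive in (<ref>) is Φ(q) = ∫_0^q α_a,i (q^')^2 dq^' = α_a,i/3 q^3, which is precisely the form of each pipe term in the objective; the resistor terms α_a/3 (q_a^+)^3 and α_a/3 (q_a^-)^3 arise in the same way, since we model resistors with the same potential drop relation as pipes.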
We hope to obtain favorable diameter choices and active system component configurations from solving this modified (CVXNLP) due to the equivalence between (CVXNLP) and the network analysis equations shown in Theorem <ref>. In the solution of (P_m), we denote the optimal diameter choices by z_a,i^* for a ∈ A_p and i ∈ [n] and the optimal active system configurations by z_a^* for a ∈ A_cp∪ A_cv∪ A_v. We can then compute the potential loss coefficient and the flow limit of each pipe as follows: α_a = ∑_i ∈ [n]α_a,i z_a,i^*, a ∈ A_p q_a^max = ∑_i ∈ [n] q_a,i^max z_a,i^*, a ∈ A_p. For the subproblem, denoted by (P_s), since we have additional active system components for which the constraints governing their corresponding potential changes are not included in the network analysis equations, we solve a variant of the nomination validation problem, instead of performing simple bound violation verifications, to check if the diameter choices and configurations of the active system components are feasible with respect to the nomination. (P_s) is given by: (P_s) Find q_a^+, q_a^-, x_a^dir, π_v s.t. the seven simplified constraint blocks described below. In this nomination validation problem, we solve a feasibility problem with seven blocks of constraints that are simplified from the blocks in Table <ref>. We list the changes to the constraints as follows. Flow conservation: with the diameter choices fixed, we only need two flow variables, q_a^+ and q_a^-, for each pipe a ∈ A_p and the simplified flow conservation constraint is given by: ∑_a ∈ A_2(v) q_a^+ - ∑_a ∈ A_2(v) \ A_cp∪ A_cv q_a^- - (∑_a ∈ A_1(v) q_a^+ - ∑_a ∈ A_1(v) \ A_cp∪ A_cv q_a^- ) = d_v, v ∈𝒱. Pipes: with the diameter choices determined, we compute the potential loss coefficients and flow limits, and we write the potential loss constraints as π_v - π_w = α_a (q_a^+)^2 - α_a (q_a^-)^2, a ∈ A_p, and use the computed flow limits q_a^max from (<ref>) as: 0 ≤ q_a^+ ≤ q_a^max x_a^dir, a ∈ A_p 0 ≤ q_a^- ≤ q_a^max (1 - x_a^dir), a ∈ A_p. Compressors and control valves: we fix the compressor and control valve configurations obtained in (P_m). The constraints are then linear and free of M. Valves: we fix the valve configurations obtained in (P_m). The constraints are then linear and free of M. There are no changes to other constraints. Note that this nomination validation problem is still non-convex due to constraint (<ref>) and the potential loss constraint for resistors. There can be two outcomes from solving this subproblem (P_s). If it is infeasible, we can add an integer no-good cut to the master problem (P_m) of the form: ∑_a ∈ A_cp∪ A_cv∪ A_v, z_a^* = 0 z_a + ∑_a ∈ A_cp∪ A_cv∪ A_v, z_a^* = 1 (1- z_a) + ∑_i ∈ [n]∑_a ∈ A_p, z_a,i^* = 0 z_a,i + ∑_i ∈ [n]∑_a ∈ A_p, z_a,i^* = 1 (1 - z_a,i) ≥ 1. If it is feasible, we call budget C a feasible budget with respect to the nomination. The iterative procedure terminates when we obtain a feasible budget or when the master problem (P_m) becomes infeasible after adding some integer no-good cuts. In the latter case, we call budget C an infeasible budget. §.§ Binary search on budget As the goal of this design problem is to minimize the construction cost of the network, we propose a binary search procedure to do so. A feasible budget from the primal bound loop provides an upper bound, C̄, on the budget, while an infeasible budget provides a lower bound, C̲, on the budget. We present the binary search in Algorithm <ref>. For the termination conditions of the binary search in Line <ref>, we consider a time limit, an absolute gap ε_e, i.e., C̄ - C̲ < ε_e, or a relative gap ε_r, i.e., (C̄ - C̲) / C̲ < ε_r.
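For concreteness, the following minimal Python sketch mirrors the bisection logic of Algorithm <ref>; the function is_feasible_budget is a hypothetical stand-in for one full pass of the primal bound loop at a given budget, and the tolerance values are illustrative rather than taken from our experiments.

def binary_search_on_budget(lb, ub, is_feasible_budget, eps_abs=1e5, eps_rel=0.01):
    # lb: an infeasible budget (lower bound C̲); ub: a feasible budget (upper bound C̄)
    while (ub - lb) >= eps_abs and (ub - lb) / lb >= eps_rel:
        mid = 0.5 * (lb + ub)
        if is_feasible_budget(mid):  # primal bound loop: iterate (P_m)/(P_s) under budget mid
            ub = mid                 # mid is a feasible budget: tighten the upper bound
        else:                        # (P_m) became infeasible after adding no-good cuts
            lb = mid                 # mid is an infeasible budget: tighten the lower bound
    return lb, ub                    # final bounds on the minimum construction cost

A wall-clock check would be added to the loop condition to implement the time-limit termination.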
§.§ Initial budget search To obtain a better initial starting budget for the binary search, we propose the following initial budget search procedure. This procedure is again an iterative procedure involving a master problem and a subproblem. The master problem, denoted by (I_m), is given by (I_m) min ∑_a ∈ A_p∑_i ∈ [n] f_a,i z_a,i s.t. (<ref>) z_a,i∈{0,1}, a ∈ A_p, i ∈ [n] z_a ∈{0,1}, a ∈ A_cp. In this model, the objective function (<ref>) is the same as the design model which minimizes the construction cost of the pipes. Constraint (<ref>) allows exactly one diameter choice for each pipe to be selected. In (I_m), we only consider the selection of diameter choices and configurations of the compressors. This integer program aims to obtain the cheapest construction cost of the pipes along with the compressor configurations, and can be solved very quickly due to the much smaller feasible space and simpler structure compared to (P_m). We can similarly compute the potential loss coefficients and flow limits based on the optimal diameter choices as shown in (<ref>) and (<ref>). The subproblem is a variant of the nomination validation problem which includes the configurations of valves and control valves to check if the diameter choices and compressor configurations are feasible with respect to the nomination. The subproblem, denoted by (I_s), is the same as (P_s), except for the constraints on the control valves and the valves since we do not obtain configurations for them from the master problem (I_m) in contrast to (P_m). We list the changes from (P_s) to obtain (I_s). Variables: we add the binary variables for the on and off states of control valves and valves. Constraints: we fix the configurations of compressors obtained in (I_m) and consequently the constraints for compressors are now linear and free of M. For the control valves, our preliminary studies suggest the use of block. Our preliminary studies suggest the use of block for valves. There are no changes to other constraints. Solving the subproblem (I_s) has two outcomes. If the cheapest diameter choices and compressor configurations are infeasible with respect to the nomination, we add an integer no-good cut for that set of diameter choices and compressor configurations similar to (<ref>) to the master problem (I_m) and resolve. Otherwise, we obtain the optimal budget, i.e., the optimal budget for this nomination is the corresponding objective value of (I_m). The initial budget search procedure can be run for a certain time or a fixed number of iterations. It produces an objective value below which there is no feasible budget, thus producing an initial dual bound. This lower bound on the optimal objective function value of the network design problem can be used to initialize the binary search. § NUMERICAL EXPERIMENTS §.§ Instances Our numerical experiments are based on the Gaslib library Gaslib-582 network. The size of the problem is given in Tables <ref> and <ref>. Depending on the nomination, a source node may have zero supply and a sink node may have zero demand. The nominations given with the Gaslib network in <cit.> are divided into five categories, namely, warm, mild, cool, cold, and freezing, to simulate the temperature conditions. There are two observations about the nominations. Firstly, as temperature conditions change from warm to freezing, the nominations become more demanding. There are more sinks with positive demands and the magnitudes of demands increase. 
Secondly, the nominations from the same temperature category vary much less than nominations across temperature categories. Therefore, we pick one nomination from each temperature category for the experiments. In all experiments, we vary the nomination by stress levels, similar to <cit.>. In particular, we use the stress levels {0.1, 0.5, 1.0, 1.5, 2.0} and multiply each stress level by the demand d_v for each v ∈ 𝒱 in a nomination to create an instance. For each pipe, based on the diameter given in the Gaslib network, we use multipliers from the set {0.8, 1.0, 1.3, 1.5} to create four different diameter choices. §.§ Implementation considerations and settings We first provide some notes on the implementation of the primal bound loop. As the objective function (<ref>) in the master problem (P_m) is cubic in the flow variables, there are several possible ways to implement it: * There are nonlinear mixed-integer programming solvers that can take the master problem (P_m) as it is, for example, BARON (<cit.>) and SCIP (<cit.>). * The cubic objective function is second-order conic representable. For each of the cubic terms in the flow variables, we can introduce an additional variable. Consequently, we obtain a constraint of the form q^3 ≤ t, where q represents the (nonnegative) flow variable and t represents the new variable bounding q^3 from above. A second-order conic representation of q^3 ≤ t is given by s ≥ 0, s + q ≥ 0, (s + q)^2 ≤ w, w^2 ≤ t(s+q). The resulting second-order conic program can be handled by specialized solvers such as MOSEK (<cit.>). * To take advantage of Gurobi's (<cit.>) improved capability in solving quadratic programs, for each of the cubic terms in a flow variable q in the objective function, we introduce an additional variable q_qua with q_qua = q^2. Consequently, we have a bilinear term q q_qua in the objective function with an additional constraint. The constraint q_qua = q^2 can be re-written into the convex constraint q_qua ≥ q^2. Moreover, for the pipes, as we have binary variables corresponding to the diameter choices, the convex constraint q_qua ≥ q^2 can be strengthened to q_qua z ≥ q^2 by perspective strengthening (<cit.>), where z represents the binary variable for the diameter choice. We implemented all three methods. Even though the original cubic formulation (used with BARON 22.9.1 (<cit.>) and SCIP 7.0.1 (<cit.>)) and the second-order conic formulation (used with MOSEK 9.3.10 (<cit.>)) are convex apart from the binary variables, these solvers tend to be slower due to the presence of the binary variables. On the other hand, Gurobi 9.5.1 (<cit.>) is able to handle the reformulation of the cubic objective function well. As a result, we decided to use Gurobi 9.5.1 to solve (P_m) and (P_s). This also gives us the opportunity to study the impact of perspective strengthening on the computational speed. In addition, since we only use the values of the binary variables of the pipe diameter choices and active system component configurations from the master problem (P_m) to fix the corresponding variables in the subproblem (P_s), we do not need to solve the master problem (P_m) to optimality. We can either set a time limit or a non-default optimality gap, and we opt to use a time limit of 60 seconds. We run the experiments on a computer with an Intel i9 CPU (3.70GHz) and 64GB RAM. The computer runs the Ubuntu 20.04 LTS operating system. The framework is coded in Python with Pyomo. We use Gurobi 9.5.1 to solve problems (I_m) and (I_s) as well.
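To make the third method above concrete, the following Pyomo fragment builds the reformulated objective for a single pipe with two diameter choices. This is an illustrative sketch with placeholder data; the variable and set names are ours, not those of our implementation:

from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           Binary, NonNegativeReals, minimize)

m = ConcreteModel()
I = [1, 2]                                  # two diameter choices
alpha = {1: 0.8, 2: 0.5}                    # placeholder loss coefficients
m.z = Var(I, domain=Binary)                 # diameter selection
m.q = Var(I, domain=NonNegativeReals, bounds=(0, 100))
m.q_qua = Var(I, domain=NonNegativeReals)   # q_qua[i] plays the role of q[i]^2

m.one_diam = Constraint(expr=m.z[1] + m.z[2] == 1)
m.link = Constraint(I, rule=lambda m, i: m.q[i] <= 100 * m.z[i])
# perspective-strengthened convex relaxation of q_qua[i] = q[i]^2:
m.persp = Constraint(I, rule=lambda m, i: m.q_qua[i] * m.z[i] >= m.q[i]**2)
m.demand = Constraint(expr=m.q[1] + m.q[2] == 30.0)
# cubic term alpha*q^3 rewritten with the bilinear product q*q_qua:
m.obj = Objective(expr=sum(alpha[i] * m.q[i] * m.q_qua[i] for i in I),
                  sense=minimize)

The bilinear products in the objective and in m.persp are the reason a solver such as Gurobi, with its non-convex quadratic support enabled, is needed for this variant.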
Algorithm <ref> shows the exact steps we use to solve the problem, combining the primal bound loop, the binary search on budget, and the initial budget search. A note on the time limit for the initial budget search: in preliminary studies we extended the time limit beyond 10 min, and the improvement in the returned value was not significant. As a result, we decided to limit the initial budget search to 10 min. §.§ Results In this section, we present the computational results. In addition to the results from our proposed framework, we provide some discussion of the compact formulation and of another approach, adapted from <cit.>, using computational studies on the nomination warm_31, which comes from the least demanding temperature category. We first discuss the performance of the compact formulation with objective function (<ref>) together with the corresponding constraint blocks. Solvers that support non-convex integer programs are considered. SCIP and BARON are able to find a lower bound very quickly, but they struggle to find a feasible solution that closes the gap, even after hours of computation. We also discovered that Gurobi at times returns solutions with large constraint violations for the compact formulation. Note that CPLEX currently only supports non-convexity in the objective function and hence is not applicable. The papers by <cit.> and <cit.> are two recent works on gas network expansion. The work of <cit.> specifically considers a tree-like network, while the Gaslib-582 network contains cycles. Although <cit.> mainly focuses on the reinforcement problem, the approach can be modified to tackle the design problem. The authors in <cit.> construct a convex mixed-integer second-order conic (MISOC) relaxation of the reinforcement formulation, and fix all the binary decision variables after solving the relaxation to obtain a nomination validation problem. The resulting nonlinear program is then solved by a solver to obtain a solution if the nonlinear program is feasible. If the nonlinear program is infeasible, the authors propose to use any feasible solution to the relaxation. We adapt a similar MISOC relaxation for the pipes and resistors; the details of this relaxation are given in Appendix <ref>. In our computations, following this procedure and solving the nomination validation problem worked well for instances with up to 40 nodes, but did not yield feasible solutions for larger instances. Next, we present the results from our framework. We define the gap as gap = (C̄ - C̲)/C̲. We present the detailed results for one of the nominations, warm_31, in Table <ref>, and we compare the gaps from the implementations with and without perspective strengthening for all nominations in Table <ref>, where C̄ and C̲ are reported in 10^6 and gaps are reported in %. The column "Imp" reports the percentage improvement from perspective strengthening. The detailed results for the rest of the nominations are provided in Appendix <ref>. From the results, we see that our framework is able to find a feasible budget for all 25 instances. In particular, it provides an optimal budget for 13 instances and a budget with a gap below 25% for another four instances. There are a few instances where we reached the time limit with large gaps; we mark these instances in bold. These instances have higher stress levels and/or worse temperature conditions.
As we increase the stress level and/or deteriorate the temperature conditions (from warm to freezing), making the nominations more demanding, we observe that it becomes more difficult to find feasible solutions in the primal bound loop to prove a budget feasible, and thus to close the gap by binary search. The primal bound loop hits the time limit much more often. In addition, for all instances, we are not able to prove any budget infeasible from the primal bound loop. While a large number of binary solutions are feasible for the master problem (P_m), each integer no-good cut invalidates only one of them. As a result, the lower bounds on the budget, C̲, are almost the same across different nominations and stress levels. Furthermore, perspective strengthening is shown to be effective in closing the gaps at higher stress levels. There are only two instances (mild_3838 and cold_4218 at a stress level of 1.5) for which the implementation without perspective strengthening achieves better gaps than the implementation with perspective strengthening. The average and largest improvements from perspective strengthening are about 31% (excluding the instances that are solved to optimality both with and without perspective strengthening) and 86%, respectively. The improvements are all due to obtaining better feasible solutions. § CONCLUSION In conclusion, we studied the gas network design problem, in which the diameter choices of pipes and the configurations of active system components are decided. We proposed a decomposition framework to solve the problem. In particular, in the primal bound loop of the framework, for a given budget, we modify a convex NLP formulation to construct master problems that propose favorable diameter choices and active system component configurations, and we validate their feasibility in the subproblem. A binary search is performed as an outer loop to minimize the budget. We also proposed a procedure to obtain a good initial budget for the binary search. The proposed framework was tested on the Gaslib-582 network, with instances created by combining nominations under different temperature conditions with stress level multipliers. The computational results show that the framework is able to find an optimal budget in many cases. A few future directions could be explored. For example, the cost of operating the network may be of interest from an operator's perspective; our framework can be adapted to incorporate the cost of operations with simple modifications. § ACKNOWLEDGEMENT This work was conducted as part of the Institute for the Design of Advanced Energy Systems (IDAES) with support through the Simulation-Based Engineering, Crosscutting Research Program and the Solid Oxide Fuel Cell Program's Integrated Energy Systems thrust within the U.S. Department of Energy's Office of Fossil Energy and Carbon Management. § APPENDIX A: PROOF OF THEOREM <REF> We first consider the if part. Suppose that (q̂^+, q̂^-, λ̂, μ̂^+, μ̂^-) solves (CVXNLP). Consider the first-order stationarity conditions for (CVXNLP): ϕ(q̂_a^+) - μ̂_a^+ - λ̂_v + λ̂_w = 0, a = (v,w) ∈ A_p; ϕ(q̂_a^-) - μ̂_a^- + λ̂_v - λ̂_w = 0, a = (v,w) ∈ A_p; q̂_a^+, μ̂_a^+ ≥ 0, a = (v,w) ∈ A_p; q̂_a^+ · μ̂_a^+ = 0, a = (v,w) ∈ A_p; q̂_a^-, μ̂_a^- ≥ 0, a = (v,w) ∈ A_p; q̂_a^- · μ̂_a^- = 0, a = (v,w) ∈ A_p; ∑_a ∈ A_2(v) (q̂_a^+ - q̂_a^-) - ∑_a ∈ A_1(v) (q̂_a^+ - q̂_a^-) = d_v, v ∈ 𝒱.
First, it cannot happen that q̂_a^+, q̂_a^- > 0 for any a ∈ A_p; otherwise, we can define q̃_a^+ = max{q̂_a^+ - q̂_a^-, 0}, q̃_a^- = max{0, q̂_a^- - q̂_a^+}, where q̃_a^+ ≤ q̂_a^+ and q̃_a^- ≤ q̂_a^-. The new flow values q̃_a^+ and q̃_a^- are feasible and, by the strict monotonicity of ϕ(·), they yield a smaller objective value, which contradicts the optimality of q̂^+ and q̂^-. Furthermore, the complementary slackness conditions imply that, if q̂_a^+ (or q̂_a^-) > 0, then μ̂_a^+ (or μ̂_a^-) = 0. If q̂_a^+ = q̂_a^- = 0 for some a, then adding (<ref>) and (<ref>) gives μ̂_a^+ + μ̂_a^- = 0, and hence, by the nonnegativity of the multipliers, μ̂_a^+ = μ̂_a^- = 0. Consequently, we can simplify (<ref>) and (<ref>) by distinguishing the cases on q̂_a^+ and q̂_a^-: ϕ(q̂_a^+) - λ̂_v + λ̂_w = 0, a = (v,w): q̂_a^+ > 0; ϕ(q̂_a^-) + λ̂_v - λ̂_w = 0, a = (v,w): q̂_a^- > 0; λ̂_v - λ̂_w = 0, a = (v,w): q̂_a^+ = q̂_a^- = 0. Now define (π, q) as π_v = λ̂_v, v ∈ 𝒱, and q_a = q̂_a^+ - q̂_a^-, a ∈ A_p, and we see that (π, q) satisfies the network analysis equations. Now we consider the only if part. Suppose that (π, q) solves the network analysis equations. We define the following: q̂_a^+ = max{0, q_a}, a ∈ A_p; q̂_a^- = |min{0, q_a}|, a ∈ A_p; λ̂_v = π_v, v ∈ 𝒱; μ̂_a^+ = max{0, π_w - π_v + ϕ(q̂_a^+)}, a = (v,w) ∈ A_p; μ̂_a^- = max{0, π_v - π_w + ϕ(q̂_a^-)}, a = (v,w) ∈ A_p. Then (q̂^+, q̂^-, λ̂, μ̂^+, μ̂^-) satisfies the first-order stationarity conditions. To see this, we first verify the case q_a ≥ 0, in which q̂_a^+ = q_a ≥ 0 and q̂_a^- = 0. From the potential loss equation (<ref>) in the network analysis equations, we have that: π_v - π_w = ϕ(q_a) = ϕ(q̂_a^+). Consequently, we have μ̂_a^+ = max{0, π_w - π_v + ϕ(q̂_a^+)} = 0, μ̂_a^- = max{0, π_v - π_w + ϕ(q̂_a^-)} = max{0, ϕ(q̂_a^+) + ϕ(0)} = ϕ(q̂_a^+) ≥ 0, and ϕ(q̂_a^+) - μ̂_a^+ - λ̂_v + λ̂_w = ϕ(q̂_a^+) - 0 - π_v + π_w = 0, ϕ(q̂_a^-) - μ̂_a^- + λ̂_v - λ̂_w = ϕ(0) - ϕ(q̂_a^+) + π_v - π_w = 0. The case q_a < 0 is verified similarly. Furthermore, since ϕ(·) is strictly monotonically increasing, Φ(·) is convex. The constraints in (CVXNLP) are linear, and thus (CVXNLP) is convex. The satisfaction of the first-order stationarity conditions is therefore necessary and sufficient for (π, q) to be an optimal solution to (CVXNLP), and it is the unique optimal solution due to the convexity. § APPENDIX B: MIXED-INTEGER SECOND-ORDER CONIC (MISOC) RELAXATION The relaxations are constructed for the pipes and resistors. For the pipes, instead of decomposing the flow variables q_a,i into q_a,i^+ and q_a,i^-, we define two binary variables x_a^+ and x_a^- for the flow directions and enforce x_a^+ + x_a^- = 1. If x_a^+ = 1, then q_a,i ≥ 0, and if x_a^- = 1, then q_a,i < 0. In addition, we create multiple potential variables π_v,i and π_w,i for a = (v,w) ∈ A_p and i ∈ [n]. For a pipe a = (v,w) and a diameter choice i, we can write the potential loss as (x_a^+ - x_a^-)(π_v,i - π_w,i) = α_a,i q_a,i^2. The left-hand side of (<ref>) is bilinear. If we define γ_a,i = (x_a^+ - x_a^-)(π_v,i - π_w,i), we can write the standard McCormick relaxation for γ_a,i = (x_a^+ - x_a^-)(π_v,i - π_w,i) as γ_a,i ≥ π_w,i - π_v,i + (π_v^min - π_w^max)(x_a^+ - x_a^- + 1), γ_a,i ≥ π_v,i - π_w,i + (π_v^max - π_w^min)(x_a^+ - x_a^- - 1), γ_a,i ≤ π_w,i - π_v,i + (π_v^max - π_w^min)(x_a^+ - x_a^- + 1), γ_a,i ≤ π_v,i - π_w,i + (π_v^min - π_w^max)(x_a^+ - x_a^- - 1). With γ_a,i defined, constraint (<ref>) can be written as γ_a,i = α_a,i q_a,i^2 and can be further relaxed to become convex as follows: γ_a,i ≥ α_a,i q_a,i^2.
Applying perspective strengthening to the relaxed constraint gives z_a,i γ_a,i ≥ α_a,i q_a,i^2. Now the potential loss constraint (<ref>) for pipes becomes π_v - π_w = ∑_i ∈ [n] γ_a,i. We can create similar relaxations for the resistors. For a resistor a = (v,w) ∈ A_r, we have γ_a ≥ π_w - π_v + (π_v^min - π_w^max)(x_a^+ - x_a^- + 1), γ_a ≥ π_v - π_w + (π_v^max - π_w^min)(x_a^+ - x_a^- - 1), γ_a ≤ π_w - π_v + (π_v^max - π_w^min)(x_a^+ - x_a^- + 1), γ_a ≤ π_v - π_w + (π_v^min - π_w^max)(x_a^+ - x_a^- - 1), γ_a ≥ α_a q_a^2. Additionally, the binary variables x_a^dir in constraints (<ref>)-(<ref>) and (<ref>)-(<ref>) that govern the flow limits per direction are replaced by x_a^+ and x_a^-, respectively. We keep the rest of the constraints unchanged and thereby obtain a convex MISOC relaxation of the design problem. § APPENDIX C: DETAILED COMPUTATIONAL RESULTS All values of C̄ and C̲ are reported in 10^6 and gaps are reported in %. The column "Imp" reports the percentage improvement from perspective strengthening. Instances marked in bold are those with large gaps at the time limit.
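To illustrate the envelope construction of Appendix <ref> in code, the sketch below adds the four McCormick inequalities for one pipe and one diameter choice to a Pyomo model. The naming is ours: m.gamma, m.pi_v, m.pi_w, m.x_plus, m.x_minus and the ConstraintList m.mc are assumed to exist, with the potential bounds passed in as data:

def add_mccormick_envelope(m, i, pv_min, pv_max, pw_min, pw_max):
    # gamma[i] relaxes (x_plus - x_minus) * (pi_v[i] - pi_w[i]); the
    # factor x_plus - x_minus lies in {-1, +1}, and the potential
    # difference lies in [pv_min - pw_max, pv_max - pw_min].
    d = m.pi_v[i] - m.pi_w[i]
    x = m.x_plus - m.x_minus
    L, U = pv_min - pw_max, pv_max - pw_min
    m.mc.add(m.gamma[i] >= -d + L * (x + 1))
    m.mc.add(m.gamma[i] >=  d + U * (x - 1))
    m.mc.add(m.gamma[i] <= -d + U * (x + 1))
    m.mc.add(m.gamma[i] <=  d + L * (x - 1))

Since x_plus - x_minus only takes the values ±1, the four inequalities pin gamma[i] to ±(pi_v[i] - pi_w[i]) exactly, so the relaxation error comes entirely from replacing gamma[i] = alpha*q^2 by the conic inequality.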
http://arxiv.org/abs/2307.04763v1
20230710175853
On the total CR twist of transversal curves in the 3-sphere
[ "Emilio Musso", "Lorenzo Nicolodi" ]
math.DG
[ "math.DG", "53C50, 53C42, 53A10" ]
On the total CR twist of transversal curves in the 3-sphere. (E. Musso) Dipartimento di Scienze Matematiche, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Torino, Italy, [email protected]. (L. Nicolodi) Dipartimento di Scienze Matematiche, Fisiche e Informatiche, Università di Parma, Parco Area delle Scienze 53/A, I-43124 Parma, Italy, [email protected]. Authors partially supported by PRIN 2017 "Real and Complex Manifolds: Topology, Geometry and holomorphic dynamics" (protocollo 2017JZ2SW5-004) and by the GNSAGA of INdAM. The present research was also partially supported by the MIUR grant "Dipartimenti di Eccellenza" 2018-2022, CUP: E11G18000350001, DISMA, Politecnico di Torino. 2010 Mathematics Subject Classification: 53C50; 53C42; 53A10. Dedicated to Peter Olver on the occasion of his 70th birthday. Abstract: We investigate the total CR twist functional on transversal curves in the standard CR 3-sphere S^3 ⊂ ℂ^2. The question of the integration by quadratures of the critical curves and the problem of the existence and properties of closed critical curves are addressed. A procedure for the explicit integration of general critical curves is provided and a characterization of closed curves within a specific class of general critical curves is given. Experimental evidence of the existence of countably many closed critical curves is provided. Version of June 20, 2023. § INTRODUCTION The present paper finds its inspiration and theoretical framework in the subjects of moving frames, differential invariants, and invariant variational problems, three of the many research topics to which Peter Olver has made lasting contributions. Among the many publications of Peter Olver dedicated to these subjects, we like to mention <cit.> as the ones that most influenced our research activity. More specifically, in this paper we further develop some of the themes considered in <cit.> concerning the Cauchy-Riemann (CR) geometry of transversal and Legendrian curves in the 3-sphere. In three dimensions, a CR structure on a manifold is defined by an oriented contact distribution equipped with a complex structure. While the automorphism group of a contact manifold is infinite dimensional, that of a CR threefold is finite dimensional and of dimension less than or equal to eight <cit.>. The maximally symmetric CR threefold is the 3-sphere S^3, realized as a real hyperquadric of ℂℙ^2 acted upon transitively by the Lie group G ≅ SU(2,1). This homogeneous model allows the application of differential-geometric techniques to the study of transversal and Legendrian curves in S^3. Since the seminal work of Bennequin <cit.>, the study of the topological properties of transversal and Legendrian knots in 3-dimensional contact manifolds has been an important area of research (see, for instance, <cit.> and the literature therein). Another source of interest in 3-dimensional contact geometry comes from its applications to neuroscience.
In fact, as shown by Hoffman <cit.>, the visual cortex can be modeled as a bundle equipped with a contact structure. For more details, the interested reader is referred to the monograph <cit.>. Recently, the CR geometry of Legendrian and transversal curves in S^3 has also found interesting applications in the framework of integrable systems <cit.>. Let us begin by recalling some results from the CR geometry of transversal curves in S^3. According to <cit.>, away from CR inflection points, a curve transversal to the contact distribution of S^3 can be parametrized by a natural pseudoconformal parameter s, and in this parametrization it is uniquely determined, up to CR automorphisms, by two local CR invariants: the CR bending κ and the CR twist τ. This was achieved by developing the method of moving frames and by constructing a canonical frame field along generic (i.e., with no CR inflection points) transversal curves. Moreover, for closed transversal curves, we defined three discrete global invariants, namely, the wave number, the CR spin, and the CR turning number. Next, we investigated the total strain functional, defined by integrating the strain element ds. We proved that the corresponding critical curves have constant bending and twist, and hence arise as orbits of 1-parameter groups of CR automorphisms. Finally, closed critical curves were shown to be transversal positive torus knots with maximal Bennequin number. In the present paper, we consider the CR invariant variational problem for generic transversal curves in S^3 defined by the total CR twist functional, 𝒲(γ) = ∫_γ τ ds. Our purpose is to address both the question of the explicit integration of critical curves and the problem of the existence and properties of closed critical curves of 𝒲. We now give a brief outline of the content and results of this paper. In Section <ref>, we briefly describe the standard CR structure of the 3-sphere S^3, viewed as a homogeneous space of the group G, and collect some preliminary material. We then recall the basic facts about the CR geometry of transversal curves in S^3 as developed in <cit.> (see the description above). Moreover, besides the already mentioned discrete global invariants for a closed transversal curve, we introduce a fourth global invariant, the trace of the curve with respect to a spacelike line. In Section <ref>, we apply the method of moving frames and the Griffiths approach to the calculus of variations <cit.> to compute the Euler–Lagrange equations of the total CR twist functional. We construct the momentum space of the corresponding variational problem and find a Lax pair formulation for the Euler–Lagrange equations satisfied by the critical curves. This is the content of Theorem <ref>, the first main result of the paper, whose proof occupies the whole of Section <ref>. As a consequence of Theorem <ref>, to each critical curve we associate a momentum operator, which is a fixed element of the G-module 𝔥 of traceless selfadjoint endomorphisms of ℂ^2,1. From the conservation of the momentum along a critical curve, we derive two conservation laws, involving two real parameters c_1 and c_2. The pair 𝐜 = (c_1,c_2) is referred to as the modulus of the critical curve. In Section <ref>, we introduce the phase type of the modulus of a critical curve. We then define the phase curve of a given modulus and the associated notion of signature of a critical curve with that given modulus.
For a generic modulus 𝐜, the phase type of 𝐜 refers to the properties of the roots of the quintic polynomial in principal form given by P_𝐜(x) = x^5 + 3/2 c_2 x^2 + 27 c_1 x - 27/2 c_1^2. The phase curve of the modulus 𝐜 is the real algebraic curve defined by the equation y^2 = P_𝐜(x). The signature of a critical curve γ with modulus 𝐜 and nonconstant twist provides a parametrization of the connected components of the phase curve of 𝐜 by the twist of γ. Importantly, the periodicity of the twist of γ amounts to the compactness of the image of the signature of γ. This will play a role in Sections <ref> and <ref>, where the closedness question for critical curves is addressed. Using the Klein formulae for the icosahedral solutions of the quintic <cit.>, the roots of P_𝐜 can be evaluated in terms of hypergeometric functions. As a byproduct, we show that the twist and the bending of a critical curve can be obtained by inverting incomplete hyperelliptic integrals of the first kind. We further specialize our analysis by introducing the orbit type of the modulus 𝐜 of a critical curve γ. The orbit type of 𝐜 refers to the spectral properties of the momentum associated to γ. Depending on the phase type, the number of connected components of the phase curve, and the orbit type, the critical curves are then divided into twelve classes. The critical curves of only three of these classes have periodic twist. In Section <ref>, we show that a general critical curve (cf. Definition <ref>) can be integrated by quadratures using the momentum of the curve. This is the content of Theorem <ref>, the second main result of the paper. Theorem <ref> is then specialized to one of the twelve classes of critical curves, the class characterized by the compactness of the connected component of the phase curve and by the existence of three distinct real eigenvalues of the momentum. Theorem <ref>, the third main result, shows that the critical curves of this specific class can be written explicitly by inverting hyperelliptic integrals of the first and third kinds. We then examine the closure conditions and prove that a critical curve in this class is closed if and only if certain complete hyperelliptic integrals depending on the modulus of the curve are rational. Finally, the relations between these rational numbers and the global CR invariants mentioned above are discussed. In the last section, Section <ref>, we develop convincing heuristic and numerical arguments to support the claim that there exist countably many distinct congruence classes of closed critical curves. These curves are uniquely determined by four discrete geometric invariants: the wave number, the CR spin, the CR turning number, and the trace with respect to the spacelike λ_1-eigenspace of the momentum. Using numerical tools, we construct and illustrate explicit examples of approximately closed critical curves. § PRELIMINARIES §.§ The standard CR structure on the 3-sphere Let ℂ^2,1 denote ℂ^3 with the indefinite Hermitian scalar product of signature (2,1) given by ⟨𝐳,𝐰⟩ = ^t𝐳̄ 𝐡 𝐰, 𝐡 = (h_ij) = [ 0 0 i; 0 1 0; -i 0 0 ]. Following common terminology in pseudo-Riemannian geometry, a nonzero vector 𝐳 ∈ ℂ^2,1 is spacelike, timelike or lightlike, depending on whether ⟨𝐳,𝐳⟩ is positive, negative or zero. By 𝒩 we denote the nullcone, i.e., the set of all lightlike vectors. Let 𝒮 = ℙ(𝒩) be the real hypersurface in ℂℙ^2 defined by 𝒮 = {[𝐳] ∈ ℂℙ^2 | ⟨𝐳,𝐳⟩ = i(z̄_1 z_3 - z̄_3 z_1) + z_2 z̄_2 = 0}.
The restriction of the affine chart s: ℂ^2 ∋ (z_1,z_2) ⟼ [^t((1+z_1)/2, i z_2/√(2), i(1-z_1)/2)] ∈ 𝒮 ⊂ ℂℙ^2 to the unit sphere S^3 of ℂ^2 defines a smooth diffeomorphism between S^3 and 𝒮. For each p = [𝐳] ∈ 𝒮, the differential (1,0)-form ζ̃|_p = -i ⟨𝐳, d𝐳⟩/(^t𝐳̄ 𝐳)|_p ∈ Ω^1,0(ℂℙ^2)|_p is well defined. In addition, the null space of the imaginary part of ζ̃|_p is T(𝒮)|_p, namely the tangent space of 𝒮 at p. Thus, the restriction of ζ̃ to T(𝒮) is a real-valued 1-form ζ ∈ Ω^1(𝒮). Since the pullback of ζ by the diffeomorphism s: S^3 → 𝒮 is the standard contact form i 𝐳̄ · d𝐳|_S^3 of S^3, ζ is a contact form whose contact distribution 𝒟 is, by construction, a complex subbundle of T(ℂℙ^2)|_𝒮. Therefore, 𝒟 inherits from T(ℂℙ^2)|_𝒮 a complex structure J. This defines a CR structure on 𝒮. Let 𝐞_1, 𝐞_2, 𝐞_3 denote the standard basis of ℂ^3. Consider P_0 = [𝐞_1] ∈ 𝒮 and P_∞ = [𝐞_3] ∈ 𝒮 as the origin and the point at infinity of 𝒮. Then ℋ := 𝒮 ∖ {P_∞} can be identified with Euclidean 3-space with its standard contact structure dz - y dx + x dy by means of the Heisenberg projection (this map is the analogue of the stereographic projection in Möbius (conformal) geometry) π_H: ℋ ∋ [𝐳] ⟼ ^t(Re(z_2/z_1), Im(z_2/z_1), Re(z_3/z_1)) ∈ ℝ^3. The inverse of the Heisenberg projection is the Heisenberg chart j_H: ℝ^3 ∋ ^t(x, y, z) ⟼ [^t(1, x+iy, z + i/2(x^2+y^2))] ∈ ℋ. The Heisenberg chart can be lifted to a map whose image is a 3-dimensional closed subgroup H_3 of G, which is isomorphic to the 3-dimensional Heisenberg group <cit.>. Let G be the special pseudo-unitary group of (<ref>), i.e., the 8-dimensional Lie group of unimodular complex 3×3 matrices preserving (<ref>), G = {A ∈ SL(3,ℂ) | ^tĀ 𝐡 A = 𝐡} ≅ SU(2,1), and let 𝔤 denote the Lie algebra of G, 𝔤 = {X ∈ 𝔰𝔩(3,ℂ) | ^tX̄ 𝐡 + 𝐡 X = 0}. The Maurer–Cartan form of the group G takes the form ϑ = A^-1dA = [ α_1^1 + iβ_1^1, -iα_3^2 - β_3^2, α_3^1; α_1^2 + iβ_1^2, -2iβ_1^1, α_3^2 + iβ_3^2; α_1^3, iα_1^2 + β_1^2, -α_1^1 + iβ_1^1 ], where the 1-forms (α_1^1, β_1^1, α_1^2, β_1^2, α_1^3, α_3^2, β_3^2, α_3^1) form a basis of the dual Lie algebra 𝔤^*. The center of G is Z = {ϖ I_3 | ϖ ∈ ℂ, ϖ^3 = 1} ≅ ℤ_3, where I_3 denotes the 3×3 identity matrix. Let [G] denote the quotient Lie group G/Z and, for A ∈ G, let [A] denote its equivalence class in [G]. Thus [A] = [B] if and only if B = ϖA, for some cube root of unity ϖ. For any A ∈ G, the column vectors (A_1, A_2, A_3) of A form a basis of ℂ^2,1 satisfying ⟨A_i, A_j⟩ = h_ij and det(A_1, A_2, A_3) = 1. Such a basis is referred to as a lightcone basis. On the other hand, a basis (𝐮_1, 𝐮_2, 𝐮_3) of ℂ^2,1, such that det(𝐮_1, 𝐮_2, 𝐮_3) = 1 and ⟨𝐮_i, 𝐮_j⟩ = δ_ij ϵ_j, where ϵ_1 = -1, ϵ_2 = ϵ_3 = 1, is referred to as a unimodular pseudo-unitary basis. The group G acts transitively and almost effectively on the left of 𝒮 by A[𝐳] = [A𝐳], for all A ∈ G and all [𝐳] ∈ 𝒮. This action descends to an effective action of [G] = G/Z on 𝒮. It is a classical result of E. Cartan <cit.> that [G] is the group of CR automorphisms of 𝒮. If we choose [𝐞_1] = [^t(1,0,0)] ∈ 𝒮 as the origin of 𝒮, the natural projection π_𝒮: G ∋ A ↦ A[𝐞_1] = [A_1] ∈ 𝒮 makes G into a (trivial) principal fiber bundle with structure group G_0 = {A ∈ G | A[𝐞_1] = [𝐞_1]}. The elements of G_0 consist of all 3×3 unimodular matrices of the form X(ρ,θ,v,r) = [ ρe^iθ, -iρe^-iθ v̄, e^iθ(r - i/2 ρ|v|^2); 0, e^-2iθ, v; 0, 0, ρ^-1e^iθ ], where v ∈ ℂ, r ∈ ℝ, 0 ≤ θ < 2π, and ρ > 0. The left-invariant 1-forms α_1^2, β_1^2, α_1^3 are linearly independent and generate the semi-basic 1-forms for the projection π_𝒮: G → 𝒮.
So, if s: U ⊆ 𝒮 → G is a local cross section of π_𝒮, then (s^*α_1^3, s^*α_1^2, s^*β_1^2) defines a coframe on U and s^*α_1^3 is a positive contact form. §.§ Transversal curves Let γ: J → 𝒮 be a smooth immersed curve. We say that γ is transversal (to the contact distribution 𝒟) if the tangent vector γ'(t) ∉ 𝒟|_γ(t), for every t ∈ J. The parametrization γ is said to be positive if ζ(γ'(t)) > 0, for every t and for every positive contact form compatible with the CR structure. From now on, we assume that the parametrization of a transversal curve is positive. Let γ: J → 𝒮 be a smooth curve. A lift of γ is a map Γ: J → 𝒩 into the nullcone 𝒩 ⊂ ℂ^2,1, such that γ(t) = [Γ(t)], for every t ∈ J. If Γ is a lift, any other lift is given by rΓ, where r is a smooth complex-valued function such that r(t) ≠ 0, for every t ∈ J. From the definition of the contact distribution, we have the following. A parametrized curve γ: J → 𝒮 is transversal and positively oriented if and only if -i⟨Γ,Γ'⟩|_t > 0, for every t ∈ J and for every lift Γ. A frame field along γ: J → 𝒮 is a smooth map A: J → G such that π_𝒮 ∘ A = γ. Since the fibration π_𝒮 is trivial, there exist frame fields along every transversal curve. If A = (A_1, A_2, A_3) is a frame field along γ, then A_1 is a lift of γ. Let A be a frame field along γ. Then A^-1A' = [ a_1^1 + ib_1^1, -ia_3^2 - b_3^2, a_3^1; a_1^2 + ib_1^2, -2ib_1^1, a_3^2 + ib_3^2; a_1^3, ia_1^2 + b_1^2, -a_1^1 + ib_1^1 ], where a_1^3 is a strictly positive real-valued function. Any other frame field along γ is given by Ã = AX(ρ,θ,v,r), where ρ > 0, θ, r: J → ℝ and v = p + iq: J → ℂ are smooth functions and X(ρ,θ,v,r): J → G_0 is as in (<ref>). If we let Ã^-1Ã' = [ ã_1^1 + ib̃_1^1, -iã_3^2 - b̃_3^2, ã_3^1; ã_1^2 + ib̃_1^2, -2ib̃_1^1, ã_3^2 + ib̃_3^2; ã_1^3, iã_1^2 + b̃_1^2, -ã_1^1 + ib̃_1^1 ], then Ã^-1Ã' = X^-1(A^-1A')X + X^-1X', which implies ã_1^3 = ρ^2 a_1^3 and ã_1^2 + ib̃_1^2 = ρe^3iθ(a_1^2 + ib_1^2) - ρ^2e^2iθ(p+iq)a_1^3. From this it follows that along any parametrized transversal curve there exists a frame field A for which a_1^2 + ib_1^2 = 0. Such a frame field is said to be of first order. Let Γ be a lift of a transversal curve γ: J → 𝒮. If (Γ,Γ',Γ'')|_t_0 = 0 for some t_0 ∈ J, then γ(t_0) is called a CR inflection point. The notion of CR inflection point is independent of the lift Γ. A transversal curve with no CR inflection points is said to be generic. The notion of a CR inflection point is invariant under reparametrizations and under the action of the group of CR automorphisms. If A: J → G is a frame field along a transversal curve γ, then γ(t_0) is a CR inflection point if (A_1, A_1', A_1'')|_t_0 = 0. A transversal curve all of whose points are CR inflection points is called a chain. The notion of chain on a CR manifold goes back to Cartan <cit.> (see also <cit.> and the literature therein). If γ is transversal and Γ is one of its lifts, then the complex plane [Γ ∧ Γ']_t is of type (1,1) and the set of null complex lines contained in [Γ ∧ Γ']_t is a chain, which is independent of the choice of the lift Γ. This chain, denoted by 𝒞_γ|_t, is called the osculating chain of γ at γ(t). By construction, 𝒞_γ|_t is the unique chain passing through γ(t) and tangent to γ at the contact point γ(t). For more details on the CR geometry of transversal curves in the 3-sphere, we refer to <cit.>. As a basic reference for transversal knots and their topological invariants in the framework of 3-dimensional contact geometry, we refer to <cit.> and the literature therein.
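As a quick numerical illustration of the lemma above (our own sketch, using a sample curve written in the Heisenberg chart), one can check that a lift is null and that the transversality function -i⟨Γ,Γ'⟩ is positive:

import numpy as np

h = np.array([[0, 0, 1j], [0, 1, 0], [-1j, 0, 0]])
pair = lambda z, w: np.conj(z) @ h @ w        # <z,w> = t(z̄) h w

def lift(t):
    # Heisenberg chart j_H applied to the sample curve (x,y,z) = (cos t, sin t, t)
    x, y, z = np.cos(t), np.sin(t), t
    return np.array([1.0, x + 1j*y, z + 0.5j*(x**2 + y**2)])

eps = 1e-6
for t in np.linspace(0.0, 2*np.pi, 7):
    dG = (lift(t + eps) - lift(t - eps)) / (2*eps)   # crude numerical derivative
    print(abs(pair(lift(t), lift(t))),               # ~0: the lift is null
          (-1j * pair(lift(t), dG)).real)            # > 0: transversal, positive

For this particular curve -i⟨Γ,Γ'⟩ ≡ 2, so the curve is transversal and positively parametrized.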
§.§ The canonical frame and the local CR invariants In the following, we will consider generic transversal curves. Let γ be a generic transversal curve. A lift Γ of γ such that (Γ,Γ',Γ'') = -1 is said to be a Wilczynski lift (W-lift) of γ. If Γ is a Wilczynski lift, any other is given by ϖΓ, where ϖ ∈ ℂ is a cube root of unity. The function a_γ = i⟨Γ,Γ'⟩^-1 is smooth, real-valued, and independent of the choice of Γ. We call a_γ the strain density of the parametrized transversal curve γ. The linear differential form ds = a_γ dt is called the infinitesimal strain. The strain density and the infinitesimal strain are invariant under the action of the CR transformation group. In addition, if h: I → J is a change of parameter, then the infinitesimal strains ds and ds̃ of γ and γ̃ = γ∘h, respectively, are related by ds̃ = h^*(ds). (The following proof corrects a few misprints contained in the original one.) If A ∈ G and Γ is a Wilczynski lift of γ, then Γ̂ = AΓ is a Wilczynski lift of γ̂ = Aγ. This implies that a_γ = a_γ̂. Next, consider a reparametrization γ̃ = γ∘h of γ. Then Γ^* = Γ∘h is a lift of γ̃ such that (Γ^*, (Γ^*)', (Γ^*)'') = -(h')^3. This implies that Γ̃ = (h')^-1Γ^* is a Wilczynski lift of γ̃. Hence ⟨Γ̃, (Γ̃)'⟩ = (h')^-1⟨Γ,Γ'⟩∘h. Therefore, the strain densities of γ and γ̃ are related by a_γ̃ = h' a_γ∘h. Consequently, we have h^*(ds) = (h' a_γ∘h) dt = a_γ̃ dt = ds̃. As a straightforward consequence of Proposition <ref>, we have the following. A generic transversal curve γ can be parametrized so that a_γ = 1. If a_γ = 1, we say that γ: J → 𝒮 is a natural parametrization, or a parametrization by the pseudoconformal strain or pseudoconformal parameter. In the following, the natural parameter will be denoted by s. We can state the following. Let γ: J → 𝒮 be a generic transversal curve, parametrized by the natural parameter. There exists a (first order) frame field ℱ = (F_1, F_2, F_3): J → G along γ, such that F_1 is a W-lift and ℱ^-1ℱ' = [ iκ, -i, τ; 0, -2iκ, 1; 1, 0, iκ ] =: K_κ,τ(s), where κ, τ: J → ℝ are smooth functions, called the CR bending and the CR twist, respectively. The frame field ℱ is called a Wilczynski frame. If ℱ is a Wilczynski frame, any other is given by ϖℱ, where ϖ is a cube root of unity. Thus, there exists a unique frame field [ℱ]: J → [G] along γ, called the canonical frame of γ. Given two smooth functions κ, τ: J → ℝ, there exists a generic transversal curve γ: J → 𝒮, parametrized by the natural parameter, whose bending is κ and whose twist is τ. The curve γ is unique up to CR automorphisms of 𝒮. (1) Let γ: J → 𝒮 be as above and ℱ = (F_1, F_2, F_3): J → G be a Wilczynski frame along γ. Then γ^#: J ∋ s ↦ [F_3(s)] ∈ 𝒮 is an immersed curve, called the dual of γ. The dual curve is Legendrian (i.e., tangent to the contact distribution) if and only if τ = 0. Thus, the twist can be viewed as a measure of how far the dual curve is from being a Legendrian curve. (2) Generic transversal curves with constant bending and twist have been studied by the authors in <cit.>. In the following we will consider generic transversal curves with nonconstant CR invariant functions. §.§ Discrete CR invariants of a closed transversal curve Referring to <cit.>, we briefly recall some CR invariants for closed transversal curves, namely the notions of wave number, CR spin, and CR turning number (or Maslov index). These invariants will be used in Sections <ref> and <ref>. The wave number is the ratio between the least period ω_γ of γ and the least period ω of the functions (κ,τ).
The CR spin is the ratio between ω_γ and the least period of a Wilczynski lift of γ. The CR turning number is the degree (winding number) of the map F_1 - iF_3: ℝ/ω_γℤ ≅ S^1 → ℂ∖{0}, where ℱ = (F_1, F_2, F_3) is a Wilczynski frame along γ. We will also make use of another invariant. Let [𝐳] ∈ ℂℙ^2 be a spacelike line. Denote by 𝙲_[𝐳] the chain of all null lines orthogonal to [𝐳], equipped with its positive orientation. Consider a closed generic transversal curve γ with its positive orientation. Since γ is closed and generic, the intersection of γ with 𝙲_[𝐳] is either a finite set of points or empty. The trace of γ with respect to [𝐳], denoted by tr_[𝐳](γ), is the integer defined as follows: (1) if γ ∩ 𝙲_[𝐳] ≠ ∅, then tr_[𝐳](γ) counts the number of intersection points of γ with 𝙲_[𝐳] (since γ is not necessarily a simple curve, the intersection points are counted with their multiplicities); (2) otherwise, tr_[𝐳](γ) = Lk(γ, 𝙲_[𝐳]), the linking number of γ with 𝙲_[𝐳]. The trace of γ is a G-equivariant map, that is, tr_[𝐳](γ) = tr_A[𝐳](Aγ), for every A ∈ G. § THE TOTAL CR TWIST FUNCTIONAL Let 𝔗 be the space of generic transversal curves in 𝒮, parametrized by the natural parameter. We consider the total CR twist functional 𝒲: 𝔗 → ℝ, defined by 𝒲[γ] = ∫_J_γ τ_γ η_γ, where J_γ is the domain of definition of the transversal curve γ, τ_γ is its twist, and η_γ = ds_γ is the infinitesimal strain of γ (cf. Section <ref>). A curve γ ∈ 𝔗 is said to be a critical curve in 𝒮 if it is a critical point of 𝒲 with respect to compactly supported variations through generic transversal curves. The main result of this section is the following. Let γ: J → 𝒮 be a generic transversal curve parametrized by the natural parameter. Then γ is a critical curve if and only if L'(s) = [L(s), K_κ,τ(s)], where L = [ 0, iτ' + 3(1 - τκ), 2iτ; τ, 0, τ' + 3i(1 - τκ); 3i, -iτ, 0 ] and K_κ,τ is defined as in (<ref>). The proof of Theorem <ref> is organized in four steps and three lemmas. Step 1. We show that generic transversal curves are in one-to-one correspondence with the integral curves of a suitable Pfaffian differential system. Let γ: J → 𝒮 be a generic transversal curve parametrized by the natural parameter. According to Proposition <ref>, the canonical frame of γ defines a unique lift [ℱ]: J → [G]. The map 𝔣: J ∋ s ⟼ ([ℱ(s)], κ(s), τ(s)) ∈ [G]×ℝ^2 is referred to as the extended frame of γ. The product space M := [G]×ℝ^2 is called the configuration space. The coordinates on ℝ^2 will be denoted by (κ, τ). With some abuse of notation, we use α_1^1, β_1^1, α_1^2, β_1^2, α_1^3, α_3^2, β_3^2, α_3^1 to denote the entries of the Maurer–Cartan form of [G] as well as their pull-backs to the configuration space M. By Proposition <ref>, the extended frames of γ are the integral curves of the Pfaffian differential system (𝒜, η) on M generated by the linearly independent 1-forms μ^1 = α_1^2, μ^2 = β_1^2, μ^3 = α_3^2 - α_1^3, μ^4 = β_3^2, μ^5 = α_1^1, μ^6 = β_1^1 - κα_1^3, μ^7 = α_3^1 - τα_1^3, with the independence condition η := α_1^3. If 𝔣 = ([ℱ], κ, τ): J → M is an integral curve of (𝒜, η), then γ = [F_1]: J → 𝒮 defines a generic transversal curve such that [ℱ] is its canonical frame, κ its bending and τ its twist. Accordingly, the integral curves of (𝒜,η) are the extended frames of generic transversal curves in 𝒮. Thus, generic transversal curves are in one-to-one correspondence with the integral curves of the Pfaffian system (𝒜,η) on the configuration space M.
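The algebraic content of Theorem <ref> can be checked symbolically. The following sympy sketch (ours, not part of the paper) verifies that K_κ,τ takes values in 𝔤, that L takes values in 𝔥 (i.e., is selfadjoint for ⟨·,·⟩), and that L' - [L, K_κ,τ] vanishes identically modulo the Euler–Lagrange equations 2κτ' + τκ' = 0 and τ'' + 9κ(1 - τκ) - τ^2 = 0 derived in Step 3 below:

import sympy as sp

s = sp.symbols('s', real=True)
k = sp.Function('kappa', real=True)(s)
t = sp.Function('tau', real=True)(s)
I = sp.I
h = sp.Matrix([[0, 0, I], [0, 1, 0], [-I, 0, 0]])
K = sp.Matrix([[I*k, -I, t], [0, -2*I*k, 1], [1, 0, I*k]])
L = sp.Matrix([[0, I*t.diff(s) + 3*(1 - t*k), 2*I*t],
               [t, 0, t.diff(s) + 3*I*(1 - t*k)],
               [3*I, -I*t, 0]])

print(sp.simplify(K.conjugate().T*h + h*K))   # zero matrix: K lies in g
print(sp.simplify(L.conjugate().T*h - h*L))   # zero matrix: L lies in h

E = L.diff(s) - (L*K - K*L)                   # the Lax defect
E = E.subs({t.diff(s, 2): t**2 - 9*k*(1 - t*k),
            k.diff(s): -2*k*t.diff(s)/t})
print(sp.simplify(E))                         # zero matrix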
If we put π^1 = dκ, π^2 = dτ, the 1-forms (η, μ^1, …, μ^7, π^1, π^2) define an absolute parallelism on M. Exterior differentiation and use of the Maurer–Cartan equations of G yield the following structure equations for the coframe (η, μ^1, …, μ^7, π^1, π^2): dη = 2μ^1∧μ^2 + 2μ^5∧η, dπ^1 = dπ^2 = 0, dμ^1 = -μ^1∧μ^5 + 3μ^2∧μ^6 + (3κμ^2 - μ^3)∧η, dμ^2 = -3μ^1∧μ^6 - μ^2∧μ^3 - (3κμ^1 + μ^4)∧η, dμ^3 = -2μ^1∧μ^2 - μ^1∧μ^7 + μ^3∧μ^5 + 3μ^4∧μ^6 - (τμ^1 - 3κμ^4 + 3μ^5)∧η, dμ^4 = -μ^2∧μ^7 - 3μ^3∧μ^6 + μ^4∧μ^5 - (τμ^2 + 3κμ^3 - 3μ^6)∧η, dμ^5 = -μ^1∧μ^4 + μ^2∧μ^3 + (μ^2 - μ^7)∧η, dμ^6 = -2κμ^1∧μ^2 - μ^1∧μ^3 - μ^2∧μ^4 - (μ^1 + 2κμ^5)∧η - π^1∧η, dμ^7 = -2τμ^1∧μ^2 - 2μ^3∧μ^4 - 2μ^5∧μ^7 + (2μ^4 - 2τμ^5)∧η - π^2∧η. From the structure equations it follows that the derived flag of (𝒜,η) is given by 𝒜_(4) ⊂ 𝒜_(3) ⊂ 𝒜_(2) ⊂ 𝒜_(1), where 𝒜_(4) = {0}, 𝒜_(3) = span{μ^1}, 𝒜_(2) = span{μ^1, μ^2, μ^3}, 𝒜_(1) = span{μ^1, μ^2, μ^3, μ^4, μ^5}. Thus, all the derived systems of (𝒜,η) have constant rank. For the notion of derived flag, see <cit.>. Step 2. We develop a construction due to Griffiths <cit.> on an affine subbundle of T^*(M) (cf. also <cit.>) in order to derive the Euler–Lagrange equations. Let 𝒵 ⊂ T^*(M) be the affine subbundle defined by the 1-forms μ^1, …, μ^7 and λ := τη, namely 𝒵 = λ + span{μ^1, …, μ^7} ⊂ T^*(M). We call 𝒵 the phase space of the Pfaffian system (𝒜,η). The 1-forms (μ^1, …, μ^7, λ) induce a global affine trivialization of 𝒵, which may be identified with M×ℝ^7 by the map M×ℝ^7 ∋ (([ℱ], κ, τ), p_1, …, p_7) ⟼ λ|_([ℱ], κ, τ) + ∑_j=1^7 p_jμ^j|_([ℱ], κ, τ) ∈ 𝒵, where p_1, …, p_7 are the fiber coordinates of the bundle map 𝒵 → M with respect to the trivialization. Under this identification, the restriction to 𝒵 of the Liouville (canonical) 1-form of T^*(M) takes the form ξ = τη + ∑_j=1^7 p_jμ^j. Exterior differentiation and use of the quadratic equations (<ref>) and (<ref>) yield dξ ≡ π^2∧η + 2τμ^5∧η + ∑_j=1^7 dp_j∧μ^j + p_1(3κμ^2 - μ^3)∧η - p_2(3κμ^1 + μ^4)∧η - p_3(τμ^1 - 3κμ^4 + 3μ^5)∧η - p_4(τμ^2 + 3κμ^3 - 3μ^6)∧η + p_5(μ^2 - μ^7)∧η - p_6(π^1 + μ^1 + 2κμ^5)∧η - p_7(π^2 - 2μ^4 + 4τμ^5)∧η, where the sign `≡' denotes equality modulo the span of {μ^i∧μ^j}_i,j = 1,…,7. The Cartan system (𝒞(dξ), η) of the 2-form dξ is the Pfaffian system on 𝒵 generated by the 1-forms {X ⌟ dξ | X ∈ 𝔛(𝒵)} ⊂ Ω^1(𝒵), with independence condition η ≠ 0. By Step 1, generic transversal curves are in one-to-one correspondence with the integral curves of the Pfaffian system (𝒜,η). Let 𝔣: J → M be the extended frame corresponding to the generic transversal curve γ: J → 𝒮 parametrized by the natural parameter. According to Griffiths' approach to the calculus of variations (cf. <cit.>), if the extended frame 𝔣 admits a lift y: J → 𝒵 to the phase space 𝒵 which is an integral curve of the Cartan system (𝒞(dξ), η), then γ is a critical curve of the total twist functional with respect to compactly supported variations. As observed by Bryant <cit.>, if all the derived systems of (𝒜,η) are of constant rank, as in the case under discussion (cf. Remark <ref>), then the converse is also true. Hence all extremal trajectories arise as projections of integral curves of the Cartan system (𝒞(dξ), η). Next, we compute the Cartan system (𝒞(dξ), η).
Contracting the 2-form dξ with the vector fields of the tangent frame (∂_η, ∂_μ^1, …, ∂_μ^7, ∂_π^1, ∂_π^2, ∂_p_1, …, ∂_p_7) on 𝒵, dual to the coframe (η, μ^1, …, μ^7, π^1, π^2, dp_1, …, dp_7), yields the 1-forms ∂_p_j ⌟ dξ ≡ μ^j, j = 1, …, 7; -∂_π^1 ⌟ dξ ≡ p_6η =: π̇_1; -∂_π^2 ⌟ dξ ≡ (p_7 - 1)η =: π̇_2; -∂_η ⌟ dξ ≡ (1 - p_7)π^2 =: η̇; -∂_μ^1 ⌟ dξ ≡ dp_1 + (3κp_2 + τp_3 + p_6)η =: μ̇^1; -∂_μ^2 ⌟ dξ ≡ dp_2 - (3κp_1 - τp_4 + p_5)η =: μ̇^2; -∂_μ^3 ⌟ dξ ≡ dp_3 + (p_1 + 3κp_4)η =: μ̇^3; -∂_μ^4 ⌟ dξ ≡ dp_4 + (p_2 - 3κp_3 - 2p_7)η =: μ̇^4; -∂_μ^5 ⌟ dξ ≡ dp_5 - (2τ - 3p_3 - 2κp_6 - 4τp_7)η =: μ̇^5; -∂_μ^6 ⌟ dξ ≡ dp_6 - 3p_4η =: μ̇^6; -∂_μ^7 ⌟ dξ ≡ dp_7 + p_5η =: μ̇^7. We have proved the following. The Cartan system (𝒞(dξ), η) is the Pfaffian system on 𝒵 ≅ M×ℝ^7 generated by the 1-forms {μ^1, …, μ^7, π̇_1, π̇_2, η̇, μ̇^1, …, μ̇^7}, with independence condition η ≠ 0. Now, the Cartan system (𝒞(dξ), η) is reducible, i.e., there exists a nonempty submanifold 𝒴 ⊆ 𝒵, called the reduced space, such that: (1) at each point of 𝒴 there exists an integral element of (𝒞(dξ), η) tangent to 𝒴; (2) if 𝒳 ⊆ 𝒵 is any other submanifold with the same property as 𝒴, then 𝒳 ⊆ 𝒴. The reduced space 𝒴 is called the momentum space of the variational problem. Moreover, the restriction of the Cartan system (𝒞(dξ), η) to 𝒴 is called the Euler–Lagrange system of the variational problem, and will be denoted by (𝒥, η). A basic result states that the Pfaffian systems (𝒞(dξ), η) and (𝒥, η) have the same integral curves (cf. <cit.>). The system (𝒥, η) can be constructed by an algorithmic procedure (cf. <cit.>). The momentum space 𝒴 is the 11-dimensional submanifold of 𝒵 defined by the equations p_7 = 1, p_6 = p_5 = p_4 = 0, p_3 = -2/3 τ, p_2 = 2(1-τκ). The Euler–Lagrange system (𝒥, η) is the Pfaffian system on 𝒴 ≅ M×ℝ, with independence condition η ≠ 0, generated by the 1-forms μ^1|_𝒴, …, μ^7|_𝒴, σ_1 = dp_1 + 6κ(1-τκ)η - 2/3 τ^2η, σ_2 = -2τdκ - 2κdτ - 3κp_1η, σ_3 = -2dτ + 3p_1η. Let V_1(dξ) ↪ ℙ(T(𝒵)) → 𝒵 be the totality of 1-dimensional integral elements of (𝒞(dξ), η). In view of (<ref>), we find that V_1(dξ)|_(([ℱ], κ, τ); p_1, …, p_7) ≠ ∅ if and only if p_6 = 0 and p_7 = 1. Thus, the image 𝒵_1 ⊂ 𝒵 of V_1(dξ) with respect to the natural projection V_1(dξ) → 𝒵 is given by 𝒵_1 = {(([ℱ], κ, τ); p_1, …, p_7) ∈ 𝒵 | p_6 = 0, p_7 = 1}. Next, the restrictions of μ̇^6 and μ̇^7 to 𝒵_1 take the form μ̇^6 = -3p_4η and μ̇^7 = p_5η. Thus, the second reduction 𝒵_2 is given by 𝒵_2 = {(([ℱ], κ, τ); p_1, …, p_7) ∈ 𝒵_1 | p_4 = p_5 = 0}. Considering the restriction of μ̇^4 and μ̇^5 to 𝒵_2 yields the equations p_2 = 2(1-τκ), p_3 = -2/3 τ, which define the third reduction 𝒵_3. Now, the restriction 𝒞_3(dξ) to 𝒵_3 of the Cartan system 𝒞(dξ) is generated by the 1-forms μ^1, …, μ^7 and σ_1 = dp_1 + 6κ(1-τκ)η - 2/3 τ^2η, σ_2 = dp_2 - 3κp_1η = -2τdκ - 2κdτ - 3κp_1η, σ_3 = -2dτ + 3p_1η. This implies that there exists an integral element of V_1(dξ) over each point of 𝒵_3, i.e., V_1(dξ)|_p ≠ ∅ for each p ∈ 𝒵_3. Hence 𝒴 := 𝒵_3 is the momentum space and (𝒥, η) := (𝒞_3(dξ), η) is the reduced system of (𝒞(dξ), η). Step 3. We derive the Euler–Lagrange equations. By the previous discussion, all the extremal trajectories of 𝒲 arise as projections of the integral curves of the Euler–Lagrange system. If y: J → 𝒴 is an integral curve of the Euler–Lagrange system (𝒥, η) and 𝚙𝚛: 𝒴 → 𝒮 is the natural projection of 𝒴 onto 𝒮, then γ = 𝚙𝚛∘y: J → 𝒮 is a critical curve of the total twist functional with respect to compactly supported variations. We can prove the following.
A curve y: J → 𝒴 is an integral curve of the Euler–Lagrange system (𝒥, η) if and only if the bending κ and the twist τ of the transversal curve γ = 𝚙𝚛∘y: J → 𝒮 satisfy the equations 2κτ' + τκ' = 0, τ'' + 9κ(1 - τκ) - τ^2 = 0. If y = (([ℱ], κ, τ); p_1): J → 𝒴 is an integral curve of the Euler–Lagrange system (𝒥, η), the projection γ = 𝚙𝚛∘y is the smooth curve γ(s) = [F_1(s)], where F_1 is the first column of ℱ. The equations μ^1 = ⋯ = μ^7 = 0, together with the independence condition η ≠ 0, tell us that ([ℱ], κ, τ) is an integral curve of the Pfaffian system (𝒜, η) on the configuration space M. Hence γ is a generic transversal curve with bending κ and twist τ, and ℱ is a Wilczynski frame along γ. Next, for the smooth functions κ, τ: J → ℝ, let κ', κ'' and τ', τ'', etc., be defined by dκ = κ'η, dκ' = κ''η, dτ = τ'η, dτ' = τ''η. With reference to (<ref>), the equation σ_3 = 0 implies p_1 = 2/3 τ'. Further, σ_2 = 0 gives 2κτ' + τκ' = 0. Finally, the equation σ_1 = 0 yields τ'' + 9κ(1 - τκ) - τ^2 = 0. Conversely, let γ: J → 𝒮 be a generic transversal curve, parametrized by the natural parameter, satisfying (<ref>) and (<ref>), and let [ℱ] be its canonical frame. Then y(s) = (([ℱ], κ, τ); 2/3 τ') is, by construction, an integral curve of the Euler–Lagrange system (𝒥, η). Step 4. We eventually provide a Lax formulation for the Euler–Lagrange equations (cf. (<ref>) and (<ref>)) of a critical curve γ: J → 𝒮. Using the Killing form of 𝔤, the dual Lie algebra 𝔤^* can be identified with 𝔥 = i𝔤, the G-module of traceless selfadjoint endomorphisms of ℂ^2,1. Under this identification, the restriction to 𝒴 of the tautological 1-form ξ goes over to an element of 𝔥 which originates the 𝔥-valued function L: J → 𝔥 given by L(s) = [ 0, iτ' + 3(1 - τκ), 2iτ; τ, 0, τ' + 3i(1 - τκ); 3i, -iτ, 0 ]. A direct computation shows that the Euler–Lagrange equations (<ref>) and (<ref>) of the critical curve γ are satisfied if and only if L'(s) = [L(s), K_κ,τ(s)], where K_κ,τ is given by (<ref>). This concludes the proof of Theorem <ref>. As a consequence of Theorem <ref>, we have the following. Let γ: J → 𝒮 be a generic transversal curve parametrized by the natural parameter. Let [ℱ]: J → [G] be the canonical frame of γ and let L: J → 𝔥 be as in (<ref>). If γ is a critical curve, the Lax equation (<ref>) implies that ℱ(s)L(s)ℱ^-1(s) = 𝔐, for all s ∈ J, where 𝔐 is a fixed element of 𝔥. The element 𝔐 ∈ 𝔥 is called the momentum of the critical curve γ. The characteristic polynomial of the momentum 𝔐 is -x^3 - 6κτ^2 x + 54κτ - 27κ^2τ^2 + 2τ^3 - 3τ'^2 - 27. The conservation of the momentum along γ yields the two conservation laws κτ^2 = c_1, -18κτ + 9κ^2τ^2 - 2/3 τ^3 + τ'^2 = C_2 - 9, for real constants c_1 and C_2. We let c_2 := C_2 - 9. Using this notation, the (opposite of the) characteristic polynomial of the momentum is Q(x) = x^3 + 6c_1x + (27 + 3c_2). If c_1 ≠ 0, the twist and the bending are never zero and the conservation laws can be rewritten as κ = c_1τ^-2, 3/2 τ^2τ'^2 = τ^5 + 3/2 c_2τ^2 + 27c_1τ - 27/2 c_1^2. If c_1 = 0, it can be easily proved that κ = 0 and the second conservation law takes the form τ'^2 = 2/3 τ^3 + c_2. The pair of real constants 𝐜 = (c_1, c_2) is called the modulus of the critical curve γ. For the application of Griffiths' approach to other geometric variational problems, the reader is referred to <cit.>. § THE CR TWIST OF A CRITICAL CURVE §.§ Phase types For 𝐜 = (c_1,c_2) ∈ ℝ^2, we denote by P_𝐜 the quintic polynomial in principal form given by P_𝐜(x) = x^5 + 3/2 c_2x^2 + 27c_1x - 27/2 c_1^2 and by Q_𝐜 the cubic polynomial given by Q_𝐜(x) = x^3 + 6c_1x + (27 + 3c_2).
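Before analyzing the phase types, the conservation laws can be checked numerically (our sketch, with an arbitrary sample modulus and initial data): integrating the Euler–Lagrange equations with κ eliminated via κ = c_1τ^-2, the quantity 3/2 τ^2τ'^2 - P_𝐜(τ) must remain zero, i.e., the point (τ, √(3/2) ττ') stays on the curve y^2 = P_𝐜(x):

import numpy as np
from scipy.integrate import solve_ivp

c1, c2 = 1/6, -8.0                 # a sample modulus

def P(x):                          # the quintic P_c in principal form
    return x**5 + 1.5*c2*x**2 + 27*c1*x - 13.5*c1**2

def rhs(s, u):                     # Euler-Lagrange equations, kappa = c1/tau^2
    tau, dtau = u
    kappa = c1 / tau**2
    return [dtau, tau**2 - 9*kappa*(1 - tau*kappa)]

tau0 = 0.18                        # a point with P(tau0) > 0
dtau0 = np.sqrt(2*P(tau0) / (3*tau0**2))
sol = solve_ivp(rhs, [0, 10], [tau0, dtau0], rtol=1e-10, atol=1e-12,
                max_step=0.01)
tau, dtau = sol.y
print(np.max(np.abs(1.5*tau**2*dtau**2 - P(tau))))    # ~ 0 (conservation)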
Excluding the case 𝐜 = 0, P_𝐜 possesses at least a pair of complex conjugate roots. We adopt the following terminology. * 𝐜 ∈ ℝ^2 is of phase type 𝒜 if P_𝐜 has four complex roots a_j ± ib_j, j = 1,2, 0 < b_1 < b_2, and a simple real root e_1; * 𝐜 ∈ ℝ^2 is of phase type ℬ if P_𝐜 has two complex roots a ± ib, b > 0, and three simple real roots e_1 < e_2 < e_3; * 𝐜 ∈ ℝ^2 is of phase type 𝒞 if P_𝐜 has a multiple real root. In the latter case, two possibilities may occur: (1) P_𝐜 has a double real root and a simple real root; or (2) P_𝐜 has a real root of multiplicity 5. By the same letters, we also denote the corresponding sets of moduli of phase types 𝒜, ℬ, and 𝒞, respectively. Next, we give a more detailed description of the sets 𝒜, ℬ, and 𝒞. To this end, we start by defining the separatrix curve. Let (m,n) be the homogeneous coordinates of ℝℙ^1 and let [(m_*,n_*)] be the point of ℝℙ^1 such that 3m_*^3 + 6m_*^2n_* + 4m_*n_*^2 + 2n_*^3 = 0 (i.e., m_* = 1 and n_* ≈ -0.72212). The separatrix curve Ξ ⊂ ℝ^2 is the image of the parametrized curve ξ = (ξ_1,ξ_2): ℝℙ^1∖{[(m_*,n_*)]} → ℝ^2, defined by ξ_1([(m,n)]) = 6√(2) mn^4/3(3m^2 + 2mn + n^2)^4/3 / (3m^3 + 6m^2n + 4mn^2 + 2n^3)^5/3, ξ_2([(m,n)]) = -36n(3m^2 + 2mn + n^2)(4m^3 + 3m^2n + 2mn^2 + n^3) / (3m^3 + 6m^2n + 4mn^2 + 2n^3)^2. The map ξ is injective and Ξ has a cusp at ξ([(1,1)]) = (4/5(6/5)^2/3, -48/5). It is regular elsewhere. In addition, Ξ has a horizontal inflection point at ξ([(0,1)]) = (0,-9). Let J_ξ be the interval (arctan(n_*), arctan(n_*) + π) ≈ (-0.625418, 2.51617). Then ξ: t ∈ J_ξ → ξ([(cos(t), sin(t))]) ∈ Ξ is another parametrization of Ξ. The inflection point is ξ(π/2). The "negative part" Ξ_- = Ξ ∩ {𝐜 ∈ ℝ^2 | c_1 < 0} of Ξ is parametrized by the restriction of ξ to Ĵ_ξ = (π/2, π + arctan(n_*)). The left picture of Figure <ref> reproduces the separatrix curve (in black); the negative part of the separatrix curve is highlighted in dashed-yellow. The cusp is the red point and the horizontal inflection point is coloured in green. The (open) upper and lower domains bounded by the separatrix curve Ξ are denoted by ℳ_±. In Figure <ref>, the upper domain ℳ_+ is coloured in three orange tones: orange, dark-orange and light-orange; the lower domain ℳ_- is coloured in two brown tones: light-brown and brown. The polynomial P_𝐜 has multiple roots if and only if 𝐜 ∈ Ξ ∪ Oy, has four complex roots if and only if 𝐜 ∈ ℳ_-∖(Oy∩ℳ_-), and has three distinct real roots if and only if 𝐜 ∈ ℳ_+∖(Oy∩ℳ_+). Equivalently, 𝒜 = ℳ_-∖(Oy∩ℳ_-), ℬ = ℳ_+∖(Oy∩ℳ_+), 𝒞 = Ξ ∪ Oy. First, we prove the following claim. Claim. P_𝐜 has a double root a_3 ≠ 0 if and only if 𝐜 belongs to the separatrix curve minus the cusp. Note that c_1 ≠ 0 (otherwise the double root would be 0). Let a_4 be the other simple real root and b_1 + ib_2, b_1 - ib_2, b_2 > 0, be the two complex conjugate roots. Since the sum of the roots of P_𝐜 is zero, we have b_1 = -1/2(2a_3 + a_4). Since the coefficient of x^3 is zero and b_2 > 0, we get b_2 = √(2a_3^2 + a_3a_4 + 3a_4^2/4). Expanding (x - a_3)^2(x - a_4)(x - b_1 - ib_2)(x - b_1 + ib_2) and comparing the coefficients of the monomials x^n, n = 1, …, 4, with the coefficients of P_𝐜, we may write c_1 and c_2 as functions of a_3 and a_4: c_1 = 1/27(3a_3^4 + 6a_3^3a_4 + 4a_3^2a_4^2 + 2a_3a_4^3), c_2 = -2/3(4a_3^3 + 3a_3^2a_4 + 2a_3a_4^2 + a_4^3). In addition, c_1^2 = 2/27(3a_3^4a_4 + 2a_3^3a_4^2 + a_3^2a_4^3). Taking into account that a_3 ≠ 0, it follows that (a_3,a_4) belongs to the algebraic curve 𝙲 (the black curve in the right picture of Figure <ref>) defined by the equation 54y(3x^2 + 2xy + y^2) - (3x^3 + 6x^2y + 4xy^2 + 2y^3)^2 = 0.
Now, consider the line ℓ_m,n through the origin with homogeneous coordinates (m,n), i.e., the line with parametric equations p_m,n(t) = (mt, nt). If (m,n) ≠ (1,0) and 3m^3 + 6m^2n + 4mn^2 + 2n^3 ≠ 0 (we are excluding the two red lines in the right picture of Figure <ref>), ℓ_m,n intersects 𝙲 when t = 0 and t = t_m,n, where t_m,n = 3√(2)√(n(3m^2 + 2mn + n^2))/√((3m^3 + 6m^2n + 4mn^2 + 2n^3)^2). If (m,n) = (1,0) or 3m^3 + 6m^2n + 4mn^2 + 2n^3 = 0, ℓ_m,n intersects 𝙲 only at the origin (see the right picture in Figure <ref>). Hence β: [(m,n)] → t_m,n·(m,n), [(m,n)] ≠ [(1,0)], 3m^3 + 6m^2n + 4mn^2 + 2n^3 ≠ 0, is a parametrization of 𝙲∖{(0,0)}. Thus, using (<ref>), the map [(m,n)] → (c_1(β([(m,n)])), c_2(β([(m,n)]))) ∈ ℝ^2 is a parametrization of the set of all 𝐜, c_1 ≠ 0, such that P_𝐜 has multiple roots. It is now a computational matter to check that (c_1(β([(m,n)])), c_2(β([(m,n)]))) = ξ([(m,n)]). This proves the claim. It also shows that P_𝐜 has multiple roots if and only if 𝐜 ∈ Ξ ∪ Oy. To prove the other assertions, we begin by observing that the discriminant of the derived polynomial P'_𝐜 is negative. Hence P'_𝐜 has two distinct real roots and a pair of complex conjugate roots. Denote by x'_𝐜 and x''_𝐜 the real roots of P'_𝐜, ordered so that x'_𝐜 < x''_𝐜, and observe that x'_𝐜 and x''_𝐜 are differentiable functions of 𝐜. Then P_𝐜 possesses three distinct real roots if and only if P_𝐜(x'_𝐜)·P_𝐜(x''_𝐜) < 0, one simple real root if and only if P_𝐜(x'_𝐜)·P_𝐜(x''_𝐜) > 0, and a multiple root if and only if P_𝐜(x'_𝐜)·P_𝐜(x''_𝐜) = 0. From the first part of the proof, the set of all 𝐜 ∈ ℝ^2 such that P_𝐜 has only simple roots is the complement of Ξ ∪ Oy. This set has five connected components: ℳ_+' = {𝐜 ∈ ℳ_+∖(Oy∩ℳ_+) | c_1 < 0}, ℳ_+'' = {𝐜 ∈ ℳ_+∖(Oy∩ℳ_+) | c_1 > 0 and c_2 > 0}, ℳ_+''' = {𝐜 ∈ ℳ_+∖(Oy∩ℳ_+) | c_1 > 0 and c_2 < 0}, ℳ_-' = {𝐜 ∈ ℳ_-∖(Oy∩ℳ_-) | c_1 < 0}, ℳ_-'' = {𝐜 ∈ ℳ_-∖(Oy∩ℳ_-) | c_1 > 0}. Referring to the left picture in Figure <ref>, ℳ_+' is the orange domain, ℳ_+'' is the dark-orange domain, ℳ_+''' is the light-orange domain, ℳ_-' is the light-brown domain, and ℳ_-'' is the brown domain. Consider the following points (the black points in Figure <ref>): 𝐜_1 = (-2,1) ∈ ℳ'_+, 𝐜_2 = (1/6,8) ∈ ℳ_+'', 𝐜_3 = (1/6,-8) ∈ ℳ_+''', 𝐜_4 = (-6,-9) ∈ ℳ_-', 𝐜_5 = (4,-9) ∈ ℳ_-''. Using Klein's formulas for the icosahedral solution of a quintic polynomial in principal form (cf. <cit.>; we used the Trott–Adamchik code (cf. <cit.>) implementing Klein's formulas in the software Mathematica), we find that the polynomials P_𝐜_j, j = 1,2,3, have three distinct real roots and that P_𝐜_j, j = 4,5, have one real root. The domain ℳ'_+ is connected and the function ℳ'_+ ∋ 𝐜 ↦ P_𝐜(x'_𝐜)·P_𝐜(x''_𝐜) is differentiable and nowhere zero. Since P_𝐜_1(x'_𝐜_1)·P_𝐜_1(x''_𝐜_1) < 0, it follows that 𝐜 ↦ P_𝐜(x'_𝐜)·P_𝐜(x''_𝐜) is strictly negative. Then P_𝐜 has three distinct real roots for every 𝐜 ∈ ℳ'_+. Similarly, P_𝐜 has three distinct real roots for every 𝐜 ∈ ℳ''_+ ∪ ℳ'''_+ and a unique real root for every 𝐜 ∈ ℳ'_- ∪ ℳ''_-. This concludes the proof. The real roots of P_𝐜_1 are e_1 = -2.44175 < e_2 = -0.9904 < 0 < e_3 = 2.87645 and those of P_𝐜_2 are e_1 = -2.14118 < e_2 = -0.448099 < 0 < e_3 = 0.0701938. Instead, the roots of P_𝐜_3 satisfy 0 < e_1 = 0.12498 < e_2 = 0.250656 < e_3 = 2.15383. Since the product e_2(𝐜)e_3(𝐜) is a continuous function on the connected components ℳ_+', ℳ_+'', and ℳ_+''', we deduce that the lowest roots of P_𝐜 are negative if 𝐜 ∈ ℳ_+'∪ℳ_+'' and positive if 𝐜 ∈ ℳ_+'''. §.§ Phase curves and signatures Let Σ_𝐜 be the real algebraic curve defined by y^2 = P_𝐜(x). We call Σ_𝐜 the phase curve of 𝐜. If 𝐜 ∈ 𝒜∪ℬ, Σ_𝐜 is a smooth real cycle of a hyperelliptic curve of genus 2.
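In practice, the phase type of a given modulus can be read off numerically from the roots of P_𝐜; a minimal sketch (ours; the tolerance handling is naive and will misclassify moduli too close to the separatrix):

import numpy as np

def phase_type(c1, c2, tol=1e-8):
    # P_c(x) = x^5 + (3/2) c2 x^2 + 27 c1 x - (27/2) c1^2
    r = np.roots([1, 0, 0, 1.5*c2, 27*c1, -13.5*c1**2])
    real = sorted(x.real for x in r if abs(x.imag) < tol)
    if any(b - a < tol for a, b in zip(real, real[1:])):
        return 'C'                       # a multiple real root
    return {1: 'A', 3: 'B'}[len(real)]

for c in [(-2, 1), (1/6, 8), (1/6, -8), (-6, -9), (4, -9)]:
    print(c, phase_type(*c))             # expected: B, B, B, A, A

The output reproduces the classification of the five sample moduli 𝐜_1, …, 𝐜_5 obtained above via Klein's formulas.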
Returning to the phase curve: if 𝐜 ∈ 𝒞 and 𝐜 ≠ 0, Σ_𝐜 is a singular real cycle of an elliptic curve; if 𝐜 = 0, Σ_𝐜 is a singular rational curve. The following facts can be easily verified: * if 𝐜 ∈ 𝒜, Σ_𝐜 is connected, unbounded, and intersects the Ox-axis at (e_1,0) (see Figure <ref>); * if 𝐜 ∈ ℬ, Σ_𝐜 has two smooth connected components, one compact and the other unbounded. Let Σ'_𝐜 be the compact connected component and Σ''_𝐜 be the noncompact one. Σ'_𝐜 intersects the Ox-axis at (e_1,0) and (e_2,0), while Σ''_𝐜 intersects the Ox-axis at (e_3,0) (see Figure <ref>); * if 𝐜 ∈ 𝒞 and c_1 ≠ 0, Σ_𝐜 has a smooth, unbounded connected component Σ''_𝐜 and an isolated singular point (e_1,0), where e_1 = e_2 is the double real root of P_𝐜(x). The unbounded connected component intersects the Ox-axis at (e_3,0), where e_3 is the simple real root of P_𝐜(x) (see Figure <ref>). If c_1 = 0 and c_2 ≠ 0, Σ_𝐜 is connected, with an ordinary double point (see Figure <ref>). If 𝐜 = 0, Σ_𝐜 is connected, with a cusp at the origin (see Figure <ref>). Let γ be a critical curve with nonconstant twist and modulus 𝐜, and let J_γ ⊂ ℝ be the maximal interval of definition of γ. With reference to (<ref>), we adapt to our context the terminology used in <cit.> and call σ_γ: J_γ → ℝ^2, s ↦ (τ(s), √(3/2) τ(s)τ'(s)) the signature of γ. From the Poincaré–Bendixson Theorem, it follows that the twist of γ is periodic if and only if σ_γ(J_γ) is compact. Observing that σ_γ(J_γ) is one of the 1-dimensional connected components of Σ_𝐜, we can conclude that the twist is a periodic function if and only if 𝐜 ∈ ℬ and σ_γ(J_γ) = Σ'_𝐜. A critical curve γ with modulus 𝐜 is said to be of type ℬ' if 𝐜 ∈ ℬ and σ_γ(J_γ) = Σ'_𝐜; it is said to be of type ℬ'' if 𝐜 ∈ ℬ and σ_γ(J_γ) = Σ''_𝐜. §.§ The twist of a critical curve §.§.§ The twist of a critical curve of type 𝒜 Let γ be a critical curve of type 𝒜, i.e., with modulus 𝐜 ∈ 𝒜. Then P_𝐜 has a unique real root e_1. The polynomial P_𝐜(x) is positive for x > e_1 and negative for x < e_1. Since P_𝐜(0) = -27c_1^2/2 < 0, the root is positive. Let ω_𝐜 > 0 be the improper hyperelliptic integral of the first kind defined by ω_𝐜 = √(3/2)∫_e_1^+∞ τ dτ/√(P_𝐜(τ)) > 0. The incomplete hyperelliptic integral h_𝐜(τ) = √(3/2)∫_e_1^τ u du/√(P_𝐜(u)), u ≥ e_1, is a strictly increasing diffeomorphism of [e_1,+∞) onto [0,ω_𝐜) (see Figure <ref>). The twist is the unique even function τ_𝐜: (-ω_𝐜,ω_𝐜) → ℝ such that τ_𝐜 = h_𝐜^-1 on [0,ω_𝐜). The maximal domain of definition is J_𝐜 = (-ω_𝐜,ω_𝐜). τ_𝐜 is strictly positive, with vertical asymptotes as s → ∓ω_𝐜^± (see Figure <ref>). Note that τ_𝐜 is the solution of the Cauchy problem τ'' = τ^2 - 9c_1τ^-2(1 - c_1τ^-1), τ(0) = e_1, τ'(0) = 0. §.§.§ The twist of a critical curve of type ℬ' Let e_1 < e_2 < e_3 be the simple real roots of P_𝐜. The highest root e_3 is positive. The lower roots e_1 and e_2 are either both negative or both positive, and P_𝐜 is positive on (e_1,e_2). Let ω_𝐜 > 0 be the complete hyperelliptic integral of the first kind ω_𝐜 = sign(e_1)√(3/2)∫_e_1^e_2 τ dτ/√(P_𝐜(τ)) > 0. Let h_𝐜 be the incomplete hyperelliptic integral of the first kind h_𝐜(τ) = √(3/2)∫_e_2^τ u du/√(P_𝐜(u)), τ ∈ [e_1,e_2], if e_1 < e_2 < 0, and h_𝐜(τ) = √(3/2)∫_e_1^τ u du/√(P_𝐜(u)), τ ∈ [e_1,e_2], if 0 < e_1 < e_2. The function h_𝐜 is a diffeomorphism of [e_1,e_2] onto [0,ω_𝐜], strictly decreasing if e_1 < e_2 < 0 and strictly increasing if 0 < e_1 < e_2 (see Figure <ref>). The twist τ_𝐜 is the even periodic function with least period 2ω_𝐜, obtained by extending the function τ(s) = h_𝐜^-1(s), defined on [0,ω_𝐜], to an even function on [-ω_𝐜,ω_𝐜] and then periodically.
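For a concrete type ℬ modulus, ω_𝐜 and the twist can be computed by quadrature and numerical inversion; a sketch (ours) for 𝐜 = 𝐜_3 = (1/6,-8), where 0 < e_1 < e_2 (scipy's quad handles the integrable endpoint singularities of 1/√(P_𝐜), though with reduced accuracy near the roots):

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

c1, c2 = 1/6, -8.0
P = lambda x: x**5 + 1.5*c2*x**2 + 27*c1*x - 13.5*c1**2
r = np.roots([1, 0, 0, 1.5*c2, 27*c1, -13.5*c1**2])
e1, e2, e3 = sorted(x.real for x in r if abs(x.imag) < 1e-9)

# h_c(tau) = sqrt(3/2) * int_{e1}^{tau} u / sqrt(P(u)) du  on [e1, e2]
h = lambda tau: np.sqrt(1.5) * quad(lambda u: u/np.sqrt(P(u)),
                                    e1, tau, limit=200)[0]
omega = h(e2)                          # half-period of the twist
print('omega_c =', omega)

for s in np.linspace(0.1, 0.9, 5) * omega:   # sample tau_c on (0, omega_c)
    print(s, brentq(lambda x: h(x) - s, e1 + 1e-12, e2 - 1e-12))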
∙ If e_1<e_2<0, then τ_ c is strictly negative with minimum value e_1 and maximum value e_2, attained, respectively, at s≡ω_ c (mod 2ω_ c) and at s≡ 0 (mod 2ω_ c) (see Figure <ref>). ∙ If 0<e_1<e_2, then τ_ c is strictly positive, with minimum value e_1 and maximum value e_2, attained, respectively, at s≡ 0 (mod 2ω_ c) and at s≡ω_ c (mod 2ω_ c). Observe that τ_ c is the solution of the Cauchy problem (i) τ”=τ^2-9c_1τ^-2(1-c_1τ^-1), τ(0)=e_2, τ'(0)=0, if e_1<e_2<0, (ii) τ”=τ^2-9c_1τ^-2(1-c_1τ^-1), τ(0)=e_1, τ'(0)=0, if 0<e_1<e_2. §.§.§ The twist of a critical curve of type ℬ” The twist of a critical curve of type ℬ” can be constructed as in the case of a critical curve of type 𝒜. More precisely, let e_3>0 be the highest real root of P_ c and ω_ c be the improper hyperelliptic integral of the first kind given by ω_ c = √(3/2)∫_e_3^+∞τ dτ/√( P_ c(τ))>0. Let h_ c(τ) be the incomplete hyperelliptic integral h_ c(τ) = √(3/2)∫_e_3^τu du/√( P_ c(u)), τ≥ e_3. Then, h_ c is a strictly increasing diffeomorphism of [e_3,+∞) onto [0,ω_ c). The twist is the unique even function τ_ c:(-ω_ c,ω_ c)→, such that τ_ c=h_ c^-1 on [0,ω_ c). The maximal interval of definition of τ_ c is J_ c=(-ω_ c,ω_ c). The function τ_ c is positive, with vertical asymptotes as s→∓ω_ c^±, and is the solution of the Cauchy problem τ”=τ^2-9c_1τ^-2(1-c_1τ^-1), τ(0)=e_3, τ'(0)=0. §.§.§ The twist of a critical curve of type 𝒞 with c_1≠ 0 The twist of a critical curve of type 𝒞, with c_1≠ 0, can be constructed as for curves of types 𝒜 or ℬ”. Let e_3>0 be the simple real root of P_ c and ω_ c be the improper elliptic integral of the first kind ω_ c = √(3/2)∫_e_3^+∞τ dτ/√( P_ c(τ))>0. Let h_ c(τ) be the incomplete elliptic integral h_ c(τ) = √(3/2)∫_e_3^τu du/√( P_ c(u)), τ≥ e_3. Then, h_ c is a strictly increasing diffeomorphism of [e_3,+∞) onto [0,ω_ c). The twist τ_ c is the unique even function τ_ c:(-ω_ c,ω_ c)→, such that τ_ c=h_ c^-1 on [0,ω_ c). The maximal interval of definition of τ_ c is J_ c=(-ω_ c,ω_ c). The twist is positive, with vertical asymptotes as s→∓ω_ c^±. Note that τ_ c is the solution of the Cauchy problem τ”=τ^2-9c_1τ^-2(1-c_1τ^-1), τ(0)=e_3, τ'(0)=0. §.§.§ The twist of a critical curve with c_1= 0 If c_1=0, the bending vanishes identically and the twist is a solution of the second order ODE τ”-τ^2=0. Then, τ(s)=√(6)℘(s+a/√(6)|0,g_3), g_3=-√(2/243) c_2, κ(s)=0, where a is an unessential constant and ℘(-,g_2,g_3) is the Weierstrass function with invariants g_2, g_3. §.§ Orbit types and the twelve classes of critical curves with nonconstant twist The moduli of the critical curves can be classified depending on the properties of the eigenvalues of the momenta. For 𝐜 = (c_1,c_2) ∈^2, let Δ_1( c)=-27(32c_1^3+9(9+c_2)^2) be the discriminant of the cubic polynomial Q_ c (cf. (<ref>)). We say that c∈^2 is: * of orbit type 1 (in symbols, c∈ OT_1) if Δ_1( c)>0; the momentum of a critical curve with modulus c∈ OT_1 has three distinct real eigenvalues: λ_1=-(λ_2+λ_3)<0<λ_2<λ_3. * of orbit type 2 (in symbols, c∈ OT_2) if Δ_1( c)<0; the momentum of a critical curve with modulus c∈ OT_2 has a real eigenvalue λ_1 and two complex conjugate eigenvalues: λ_2, with positive imaginary part, and λ_3=λ̄_2. * of orbit type 3 (in symbols, c∈ OT_3) if Δ_1( c)=0; the momentum of a critical curve with modulus c∈ OT_3 has an eigenvalue with algebraic multiplicity greater than one. Correspondingly, ^2 is partitioned into nine regions (see Figure <ref>): 𝒜_j=𝒜∩ OT_j, ℬ_j=ℬ∩ OT_j, 𝒞_j=𝒞∩ OT_j, j=1,2,3.
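Both stratifications can be read off numerically from the two resolvents: the number of real roots of the quintic P_c distinguishes 𝒜, ℬ, 𝒞, while the sign of Δ_1 gives the orbit type. The paper works in Mathematica; the following is a minimal Python sketch. It assumes the quintic in the form P_c(τ)=τ^5+(3/2)c_2τ^2+27c_1τ-(27/2)c_1^2, reconstructed from the conservation law (3/2)τ^2(τ')^2=P_c(τ), the Cauchy problems above, and P_c(0)=-27c_1^2/2; this form is an assumption, not a formula quoted from the paper. The cubic whose discriminant is Δ_1 is Q_c(λ)=λ^3+6c_1λ+3(9+c_2), read off from the relation λ_j^3+6c_1λ_j+3(9+c_2)=0 used in the integrability section below.

# Sketch: locate a modulus c in the nine regions A_j, B_j, C_j.
import numpy as np

def classify(c, tol=1e-9):
    c1, c2 = c
    # real-root count of the (assumed) quintic: 1 <-> A, 3 <-> B; C is the
    # measure-zero boundary with a multiple root (detected only up to tol)
    roots = np.roots([1.0, 0.0, 0.0, 1.5*c2, 27.0*c1, -13.5*c1**2])
    n_real = int(np.sum(np.abs(roots.imag) < tol))
    letter = {1: "A", 3: "B"}.get(n_real, "C?")
    # orbit type from the sign of Delta_1(c) = -27 (32 c1^3 + 9 (9 + c2)^2)
    d1 = -27.0*(32.0*c1**3 + 9.0*(9.0 + c2)**2)
    orbit = 1 if d1 > tol else (2 if d1 < -tol else 3)
    return letter, orbit

# the five sample moduli of the previous subsection: expect B, B, B, A, A
for c in [(-2.0, 1.0), (1/6, 8.0), (1/6, -8.0), (-6.0, -9.0), (4.0, -9.0)]:
    print(c, classify(c))
# the modulus of the worked example below lies in B_1: expect ('B', 1)
print(classify((-0.8284243304411575, -8.349417691746162)))

With this normalization the computed roots also match the values quoted above (for c_1 one finds approximately -2.44175, -0.9904, 2.87645), a useful consistency check on the reconstructed quintic.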
Let γ be a critical curve with modulus c and j∈{1,2,3}. We say that γ is of type 𝒜_j if c∈𝒜_j; of type ℬ'_j if c∈ℬ_j and the image of its signature σ_γ is compact; of type ℬ”_j if c∈ℬ_j and the image of σ_γ is unbounded; and of type 𝒞_j if c∈𝒞_j, j=1,2,3. The only critical curves with periodic twist are those of the types ℬ'_j, j=1,2,3. Consequently, critical curves of the other types cannot be closed. ℬ_1 lies in the half-plane {(c_1,c_2) | c_1<0}; it is bounded below by Ξ'={ c∈Ξ| c_1< 0} and above by Δ'={ c∈^2|Δ_1( c)=0, c_2> -9}. The curves Ξ' and Δ' intersect each other tangentially at c'=(c'_1,c'_2)≈ (-11.339754, 63.004420) (see Figure <ref>). Thus, ℬ_1 has two connected components: ℬ_1^-={ c∈ℬ_1 | c_1∈ (c'_1,-9) }, ℬ_1^+={ c∈ℬ_1 | c_1 > c'_1 }. Referring to Remark <ref>, Ξ' is parametrized by the restriction of ξ to the interval Ĵ_ξ=(π/2,π +arctan(n_*)). Let t' be the point of Ĵ_ξ such that ξ(t')= c', (t'≈ 2.3008). Put Ĵ_ξ^-=(π/2,t') and Ĵ_ξ^+=(t',π +arctan(n_*)). The restriction of ξ to Ĵ_ξ^- is a parametrization of Ξ_-={ c∈Ξ'| c_1∈ (c'_1,0)} and the restriction to Ĵ_ξ^+ is a parametrization of Ξ_+={ c∈Ξ'| c_1<c'_1}. Consequently, ℬ_1^± are parametrized by ψ_± : K_±∋ (t,s)⟼(ξ(t)-p(t))s+p(t), where K_± are the rectangles Ĵ_ξ^±× (0,1) and p(t)=(ξ_1(t),1/3(4√(-2 ξ_1(t)^3)-27)). § INTEGRABILITY BY QUADRATURES §.§ Integrability by quadratures of general critical curves Let Δ_2 be the polynomial Δ_2( c)=9c_1^3(c_1^3+216)+6c_1^3c_2(c_2+36)+(c_2+9)(c_2+18)^3. A critical curve γ with modulus c is said to be general if Δ_1( c)Δ_2( c)≠ 0. Since Δ_1( c)≠ 0, the momentum 𝔐_γ of a general critical curve γ has three distinct eigenvalues λ_1, λ_2, λ_3, sorted as in Definition <ref>. Let J be the maximal interval of definition of the twist (it can be computed in terms of the modulus). Define y_j: J→^1,2, j=1,2,3, by 𝐲_j=^t(τ(3-iτ')-λ_j^2-3c_1, 9-9c_1/τ-λ_jτ-3iτ', i(τ^2-3λ_j)). Let V : J→𝔤𝔩(3,) be the matrix-valued map with column vectors y_1, y_2 and y_3. Let D(z_1,z_2,z_3) denote the diagonal matrix with z_j as the jth element on the diagonal. Recall that, if c_1≠ 0, then τ is nowhere zero. We can prove the following. Let γ: J→ be a general critical curve. The functions det( V) and τ^2-3λ_j, j=1,2,3, are nowhere zero. Let r_j be continuous determinations of √(τ^2-3λ_j) and let ϕ_j be the functions defined by[If c_1≠ 0, the denominator of the integrand is nowhere zero and the ϕ_j are real-analytic. If c_1=0, the integrand reduces to (3τ-λ_j^2)(3λ_j-τ^2)^-1. Thus, also in this case the functions ϕ_j are real-analytic.] ϕ_j(s)=∫_0^s (3c_1λ_j-(4c_1+λ_j^2)τ^2(u)+3τ^3(u))/(τ^2(u)(3λ_j-τ^2(u))) du. Then, γ is congruent to J∋ s ⟼[ M D(r_1e^iϕ_1, r_2e^iϕ_2, r_3e^iϕ_3) V^-1 𝐞_1] ∈, where M= V(0) D(r_1(0),r_2(0),r_3(0))^-1. The proof of Theorem <ref> is organized into three lemmas. The following statements hold true: * if the momentum has three distinct real eigenvalues, then ±√(3λ_2) and ±√(3λ_3) cannot be roots of P_ c; * if the momentum has two complex conjugate eigenvalues and a positive real eigenvalue λ_1, then ±√(3λ_1) cannot be roots of P_ c. First, note that the image of the parametrized curve α(t)=(-t(t^3/3+√(3)),t^3(t^3/3+2√(3))-9) is contained in the zero locus of Δ_2. This can be proved by a direct computation. Secondly, from the expression of Q_𝐜, it follows that c_1 =-1/6(λ_2^2+λ_2λ_3+λ_3^2)=-1/6(λ_1^2+λ_1λ_2+λ_2^2), c_2 =1/3(λ_2^2λ_3+λ_2λ_3^2-27)=1/3(λ_1^2λ_2+λ_1λ_2^2-27). (1) Suppose that the momentum has three distinct real eigenvalues.
By contradiction, suppose that √(3λ_2) is a root of P_ c. Then 0=-8/3 P_ c(√(3λ_2)) =λ_3^4+2λ_2λ_3^3-(λ_2^2-12√(3)λ_2^1/2)λ_3^2-2(λ_2^3-6√(3)λ_2^3/2)λ_3+ +(λ_2^4-12√(3)λ_2^5/2+108λ_2). Solving this equation with respect to λ_3, taking into account that λ_3>0, we obtain λ_3=1/2(-λ_2+√(5λ_2^2-24√(3λ_2))). Substituting into (<ref>), we find c_1=√(3λ_2)-1/3λ_2^2, c_2=-9-2√(3)λ_2^3/2+1/3λ_2^3. Then, c=α(-√(λ_2)). This implies that c belongs to the zero locus of Δ_2, which is a contradiction. By an analogous argument, we prove that also -√(3λ_2) cannot be a root of P_ c. By interchanging the role of λ_2 and λ_3 and arguing as above, it follows that also ±√(3λ_3) cannot be roots of P_ c. (2) Next, suppose that the momentum has two complex conjugate eigenvalues and a nonnegative real eigenvalue λ_1. Recall that the eigenvalues are sorted so that the imaginary part of λ_2 is positive. By contradiction, suppose that √(3λ_1) is a root of P_ c. Then, 0=-8/3 P_ c(√(3λ_1)) =λ_2^4+2λ_1λ_2^3-(λ_1^2-12√(3)λ_1^1/2)λ_2^2-2(λ_1^3-6√(3)λ_1^3/2)λ_2+ +(λ_1^4-12√(3)λ_1^5/2+108λ_1). Solving this equation with respect to λ_2, taking into account that the imaginary part of λ_2 is positive, we find λ_2=1/2(-λ_1+√(5λ_1^2-24√(3λ_1))). Substituting into (<ref>) yields c=α(-√(λ_1)). Thus, c is a root of Δ_2, which is a contradiction. An analogous argument shows that -√(3λ_1) cannot be a root of P_ c. This concludes the proof of the lemma. det( V)(s)≠ 0, for every s∈ J_γ. Let 𝕃_j be the 1-dimensional eigenspaces of the momentum 𝔐_γ relative to the eigenvalues λ_j. Let L be as in (<ref>). By Corollary <ref> of Theorem <ref>, we have ℱ L ℱ^-1=𝔐, where ℱ is a Wilczynski frame field along γ. Then, L(s) and 𝔐 have the same eigenvalues. Next, consider the line bundles Λ_j={(s, y)∈ J_γ×^1,2|L(s) y=λ_j y}, j=1,2,3. Note that (s, y)∈Λ_j if and only if ℱ(s) y∈𝕃_j. Let y_j, j=1,2,3, be as in (<ref>). A direct computation shows that L y_j=λ_j y_j. Thus, y_j is a cross section of the eigenbundle Λ_j. Hence, det( V)(s)≠ 0 if and only if y_j(s)≠0⃗, for every s. 0.1cm Case I: The eigenvalues of the momentum are real and distinct. Let y_j^i, i=1,2,3, denote the components of y_j. Since λ_1 is negative, it follows from (<ref>) that y^3_1(s)≠ 0, for every s, and hence 𝐲_1(s)≠0⃗. We prove that y_2(s)≠0⃗. Suppose, by contradiction, that y_2(s_*)=0⃗, for some s_*∈ J_γ. From y^1_2(s_*)= y^2_2(s_*)=0, it follows that τ'(s_*)=0. Hence e:=τ(s_*) is a root of P_ c. From y^3_2(s_*)=0, it follows that e=±√(3λ_2), which contradicts Lemma B<ref>. An analogous argument leads to the conclusion that y_3(s)≠0⃗, for every s∈ J_γ. 0.1cm Case II: The momentum has a real eigenvalue λ_1 and two complex conjugate eigenvalues λ_2, λ_3 (λ_2 with positive imaginary part). Since λ_2 and λ_3 have nonzero imaginary parts and τ is real valued, y_2^3(s)≠ 0 and y_3^3(s)≠ 0, for every s. If λ_1<0, then y_1^3(s)≠ 0, for every s. If λ_1≥ 0, suppose, by contradiction, that y_1(s_*)=0⃗. From y_1^1(s_*)= y^2_1(s_*)=0, we infer that τ'(s_*)=0. Hence e=τ(s_*) is a root of P_ c. From y^3_1(s_*)=0, we have e=±√(3λ_1), which contradicts Lemma B<ref>. We are now in a position to conclude the proof. For j=1,2,3, let w_j be defined by w_j=ℱ y_j: J_γ→^1,2. Then, w_j(s)∈𝕃_j and w_j(s)≠0⃗, for every s. Thus, there exist smooth functions Φ_j: J_γ→, such that w'_j=Φ_j w_j. From (<ref>), we have Φ_j y_j= y'_j+K y_j, j=1,2,3, where K= [ ic_1τ^-2 -i τ; 0 -2ic_1τ^-2 1; 1 0 ic_1τ^-2; ]. Then, the third component of y'_j+K y_j is equal to 3τ+3c_1λ_jτ^-2-(λ_j^2+4c_1)+iττ'. 
Hence, using (<ref>) we obtain Φ_j=-ττ'/(3λ_j-τ^2) +i(3c_1λ_j-(4c_1+λ_j^2)τ^2+3τ^3)/(τ^2(3λ_j-τ^2)). The functions 3λ_j-τ^2, j=1,2,3, are nowhere zero. The statement is obvious if λ_j is real and negative or complex, with nonzero imaginary part. If λ_j is real non-negative, the smoothness of Φ_j implies that ττ'(3λ_j-τ^2)^-1 is differentiable. Then (3λ_j-τ^2)(s)≠ 0, for every s, such that τ(s)τ'(s)≠ 0. If τ(s)τ'(s)= 0, it follows that τ(s) is a root of the polynomial P_ c. Therefore, by Lemma B<ref>, we have that (3λ_j-τ^2)(s)≠ 0. From (<ref>) we have ∫_0^s Φ_j du =log(√(τ^2-3λ_j))+iϕ_j+b_j, j=1,2,3, where b_j is a constant of integration, √(τ^2-3λ_j) is a continuous determination of the square root of τ^2-3λ_j and log(√(τ^2-3λ_j)) is a continuous determination of the logarithm of √(τ^2-3λ_j). Since w'_j=Φ_j w_j, we obtain ℱ y_j r_j^-1e^-iϕ_j = m_j, j=1,2,3, where m_j is a constant vector belonging to the eigenspace 𝕃_j of 𝔐. This implies ℱ= MD(r_1e^iϕ_1,r_2e^iϕ_2,r_3e^iϕ_3) V^-1, where M is an invertible matrix such that M^-1 𝔐 M=D(λ_1,λ_2,λ_3). By possibly replacing γ with a congruent curve, we may suppose that ℱ(0)= I_3. Then, since ϕ_j(0)=0, we have M= V(0) D(r_1(0),r_2(0),r_3(0))^-1. This concludes the proof of Theorem B. §.§ Integrability by quadratures of general critical curves of type ℬ_1' We now specialize the above procedure to the case of general critical curves of type ℬ_1' (i.e., general critical curves with modulus c∈ℬ_1 and with periodic twist). Let ℳ_+' be as in (<ref>). Since ℬ_1 is contained in ℳ_+', the lowest roots e_1 and e_2 of P_ c are negative, for every c∈ℬ_1 (cf. Remark <ref>). Let γ be a general critical curve of type ℬ_1'. The λ_1-eigenspace of the momentum is spacelike. Let y_j be as in (<ref>). Then ℱ(s) y_j(s) belongs to the λ_j-eigenspace of 𝔐, for every s∈. Using the conservation law (3/2)τ^2 (τ')^2= P_ c(τ) (cf. (<ref>)) and taking into account that λ_j^3+6c_1λ_j+3(9+c_2)=0, we compute ⟨ℱ y_j,ℱ y_j⟩ = ⟨ y_j, y_j⟩ =3(τ^2-3λ_j)(2c_1+λ_j^2). Moreover, since λ_1=-(λ_2+λ_3) and c_1=-(λ_2^2+λ_2λ_3+λ_3^2)/6, we have 2c_1+λ_1^2=1/3(2λ_2+λ_3)(λ_2+2λ_3)>0, 2c_1+λ_3^2=1/3(λ_3-λ_2)(λ_2+2λ_3)>0, 2c_1+λ_2^2=-1/3(λ_3-λ_2)(2λ_2+λ_3)<0. From the fact that λ_1<0, it follows that ⟨ y_1, y_1⟩ > 0. This proves that the λ_1-eigenspace of the momentum is spacelike. There are two possible cases: either the λ_3-eigenspace of 𝔐 is spacelike, or else it is timelike. In the first case, we say that γ is positively polarized, while in the second case, we say that γ is negatively polarized. In view of the above lemma, γ is positively polarized if and only if e_1^2-3λ_3>0 and is negatively polarized if and only if e_2^2-3λ_3<0. It is a linear algebra exercise to prove the existence of A∈G, such that A^-1𝔐A=𝔐_λ_1,λ_2,λ_3, where 𝔐_λ_1,λ_2,λ_3=[ 1/2(λ_2+λ_3) 0 ε i/2(λ_2-λ_3); 0 λ_1 0; - ε i/2(λ_2-λ_3) 0 1/2(λ_2+λ_3); ], where ε=± 1 accounts for the polarization of γ (see below). It is clear that any critical curve of type ℬ'_1 is congruent to a critical curve whose momentum is in the canonical form 𝔐_λ_1,λ_2,λ_3. A critical curve of type ℬ'_1 is said to be in a standard configuration if its momentum is in the canonical form (<ref>). Two standard configurations with the same twist are congruent with respect to the left action of the maximal compact abelian subgroup 𝕋^2={ A∈ G| A e_2∧ e_2=0}. Let c∈ℬ_1, such that Δ_1( c)Δ_2( c)≠ 0. Let e_1<e_2<e_3 be the real roots of P_ c and let λ_1=-(λ_2+λ_3)<0<λ_2<λ_3 be the roots of Q_ c.
Let τ be the periodic function defined as in the first of (<ref>) and ϕ_j, j=1,2,3, be as in (<ref>). Let ρ_j be the constants ρ_1 =1/√((2λ_2+λ_3)(λ_2+2λ_3)), ρ_2 =1/√(2(λ_3-λ_2)(2λ_2+λ_3)), ρ_3 =1/√(2(λ_3-λ_2)(λ_2+2λ_3)) and z_j be the functions z_1=ρ_1√(3(λ_2+λ_3)+τ^2) e^iϕ_1, z_2=ρ_2√(3λ_2-τ^2) e^iϕ_2, z_3=ρ_3√(3λ_3-τ^2) e^iϕ_3. Let ε =- sign(e_2^2-3λ_3). We can state the following. A general critical curve of type ℬ'_1 with modulus c is congruent to γ̂: ∋ s ⟼[^t(z_2+z_3,ε iz_1,-ε i(z_2-z_3))]∈. In addition, γ̂ is in a standard configuration. Let γ be a critical curve of type ℬ'_1 with modulus c. Let ℱ be a Wilczynski frame along γ. Suppose ε =1 (i.e., ⟨ y_3, y_3⟩<0). Let u_j be the maps defined by u_1 =1/√(3)√(2c_1+λ_1^2)√(τ^2-3λ_1) y_1, u_2 =1/√(3)√(-(2c_1+λ_2^2))√(τ^2-3λ_2) y_2, u_3 =1/√(3)√(2c_1+λ_3^2)√(τ^2-3λ_3) y_3. Consider the map U=( u_3, u_2, u_1):→ GL(3,). From Theorem <ref> and Lemma <ref>, we have * ⟨ u_1, u_1 ⟩= ⟨ u_2, u_2 ⟩=-⟨ u_3, u_3 ⟩ =1, and ⟨ u_i, u_j⟩ = 0, for i≠ j, that is, U(s) is a pseudo-unitary basis of ^1,2, for every s∈; * U^-1LU=D(λ_3,λ_2,λ_1). Using again Theorem <ref>, we obtain ℱU D(e^-iϕ_3, e^-iϕ_2 , e^-iϕ_1)= MD(√(3(2c_1+λ_3^2)), √(-3(2c_1+λ_2^2)), √(3(2c_1+λ_1^2)))^-1, where the matrix M∈ GL(3,) diagonalizes the momentum of γ, that is, M^-1𝔐 M = D(λ_3,λ_2,λ_1). In particular, the column vectors of the right hand side of (<ref>), denoted by B, constitute a pseudo-unitary basis. Let ϵ be the inverse of a cubic root of det B. Then, the column vectors of ϵ B constitute a unimodular pseudo-unitary basis. Therefore there exists a unique A∈G, such that ϵA B= B̂, where B̂=[ 1/√(2) -1/√(2) 0; 0 0 i; i/√(2) i/√(2) 0; ]. Then Aℱ=ϵ^-1 B̂ D(e^iϕ_3, e^iϕ_2, e^iϕ_1)U^-1= ϵ^-1 B̂D(e^iϕ_3, e^iϕ_2, e^iϕ_1)D(-1,1,1) ^tU̅ h. It is now a computational matter to check that the first column vector of the right hand side of (<ref>) is ϵ^-1 ^t(z_2+z_3, iz_1,- i(z_2-z_3)). This implies γ=A^-1γ̂ (i.e., γ and γ̂ are congruent to each other). Taking into account that U^-1LU=D(λ_3,λ_2,λ_1) and using (<ref>), the momentum of γ is 𝔐= BD(λ_3,λ_2,λ_1) B^-1. Therefore, the momentum of γ̂ is 𝔐̂=A BD(λ_3,λ_2,λ_1) B^-1A^-1= B̂D(λ_3,λ_2,λ_1) B̂^-1=𝔐_λ_1,λ_2,λ_3. This proves that γ̂ is in standard configuration. If ε =-1 (i.e., ⟨ y_3, y_3⟩>0), considering U=( u_2, u_3, u_1) and arguing as above, we get the same conclusion. Theorem <ref> implies that a standard configuration γ does not pass through the pole [𝐞_3] of the Heisenberg projection π_H. Thus γ̌:=π_H∘γ is a transversal curve of ^3, which does not intersect the Oz-axis. Breaking the integrands into partial fractions, the integrals f_j(τ) = √(3/2)∫_e_2^τ(3c_1λ_j-(4c_1+λ_j^2)u^2+3u^3)/(u(3λ_j-u^2)√( P_ c(u))) du, j=1,2,3, can be written as linear combinations of standard hyperelliptic integrals of the first and third kind. Then ϕ_j is the odd quasi-periodic function with quasi-period 2ω such that ϕ_j(s)=f_j[τ(s)]. In practice, we compute τ and ϕ_j, j=1,2,3, by numerically solving the following system of ODEs, τ” =τ^2-9c_1τ^-2(1-c_1τ^-1), ϕ_j' =(3c_1λ_j-(4c_1+λ_j^2)τ^2+3τ^3)/(τ^2(3λ_j-τ^2)), j =1,2,3, with initial conditions τ(0)=e_2, τ'(0)=0, ϕ_j(0)=0, j=1,2,3. §.§ Closing conditions From Theorem <ref>, it follows that a critical curve of type ℬ'_1 is closed if and only if 𝔓_j=1/2πϕ_j(2ω)∈ℚ, j=1,2,3. On the other hand, 1/2πϕ_j(2ω)=1/π∫_e_2^e_1√(3)(3c_1λ_j-(4c_1+λ_j^2)τ^2+3τ^3)/(√(2)τ(3λ_j-τ^2)√( P_ c(τ))) dτ. Thus, γ is closed if and only if the complete hyperelliptic integrals on the right hand side of (<ref>) are rational.
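The quadrature just described is easy to set up with a standard ODE solver. Below is a minimal Python sketch (the paper uses Mathematica) that assembles the map c↦(𝔓_1,𝔓_3): the roots e_j come from the same assumed quintic as in the earlier snippet, the eigenvalues λ_j from λ^3+6c_1λ+3(9+c_2)=0 (all real is assumed, i.e. orbit type 1), the half-period from the complete integral with the substitution τ=e_1+(e_2-e_1)sin^2θ regularizing the inverse-square-root endpoint singularities, and the ϕ_j from the system above.

import numpy as np
from scipy.integrate import quad, solve_ivp

def P_quintic(t, c1, c2):
    # assumed quintic: P_c(t) = t^5 + (3/2) c2 t^2 + 27 c1 t - (27/2) c1^2
    return t**5 + 1.5*c2*t**2 + 27.0*c1*t - 13.5*c1**2

def half_period(c1, c2, e1, e2):
    # w = sign(e1) sqrt(3/2) int_{e1}^{e2} t dt / sqrt(P_c(t)); on B_1 one has
    # e1 < e2 < 0, hence sign(e1) = -1
    def g(th):
        t = e1 + (e2 - e1)*np.sin(th)**2
        return t/np.sqrt(P_quintic(t, c1, c2)) * (e2 - e1)*np.sin(2.0*th)
    return -np.sqrt(1.5)*quad(g, 0.0, 0.5*np.pi)[0]

def P13(c1, c2):
    # the quantum-number integrals (P_1, P_3) = (phi_1, phi_3)(2w) / (2 pi)
    r = np.roots([1.0, 0.0, 0.0, 1.5*c2, 27.0*c1, -13.5*c1**2])
    e1, e2, _ = np.sort(r[np.abs(r.imag) < 1e-9].real)
    lam = np.sort(np.roots([1.0, 0.0, 6.0*c1, 3.0*(9.0 + c2)]).real)
    w = half_period(c1, c2, e1, e2)
    def rhs(s, y):
        tau, dtau = y[0], y[1]
        dphi = [(3.0*c1*l - (4.0*c1 + l*l)*tau**2 + 3.0*tau**3)
                / (tau**2 * (3.0*l - tau**2)) for l in (lam[0], lam[2])]
        return [dtau, tau**2 - 9.0*c1/tau**2 + 9.0*c1**2/tau**3] + dphi
    sol = solve_ivp(rhs, (0.0, 2.0*w), [e2, 0.0, 0.0, 0.0],
                    rtol=1e-11, atol=1e-13)
    return sol.y[2, -1]/(2.0*np.pi), sol.y[3, -1]/(2.0*np.pi)

print(P13(-0.8284243304411575, -8.349417691746162))

For the modulus of the example in the next subsection this should return approximately (-2/15, -10/21), with the half-period coming out near 0.7323, matching the values quoted there.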
For a closed critical curve γ, we put 𝔓_j= q_j=m_j/n_j, where n_j>0 and (m_j,n_j)=1. We call q_j the quantum numbers of γ. By construction, e^i2π𝔓_1, e^i2π𝔓_2 and e^i2π𝔓_3 are the eigenvalues of the monodromy 𝙼_γ=ℱ(2ω)ℱ(0)^-1 of γ. Since det(𝙼_γ)=1, we have ∑_j=1^3𝔓_j≡ 0 mod ℤ. Then, γ is closed if and only if two among the integrals 𝔓_j, j=1,2,3, are rational. The closing conditions can be rephrased as follows. Consider the odd quasi-periodic functions ϕ_1, ϕ_3. Then, the critical curve is closed if and only if the jumps ϕ_j|_0^2ω, j=1,3, are rational multiples of 2π. We now consider an example, which will be taken up again in the last section. Choose c≈ (-0.8284243304411575,-8.349417691746162)∈ℬ_1^-. The real roots of the quintic polynomial are e_1≈ -0.931924<e_2≈ -0.678034<0<e_3≈ 2.79051 and the eigenvalues of the momentum are λ_1≈ -2.40462<0<λ_2≈ 0.40614<λ_3≈ 1.99848. The half-period of the twist is computed by numerically evaluating the hyperelliptic integral (<ref>). We evaluate τ, ϕ_1, ϕ_2, ϕ_3 by solving numerically the system (<ref>), with initial conditions (<ref>) on the interval [-4ω,4ω]. Figure <ref> reproduces the graphs of ϕ_1 and ϕ_3 on the interval [-4ω,4ω] (the graph of the twist was depicted in Figure <ref>). The red point on the Ox-axis is 2ω and the lengths of the arrows are the jumps ϕ_1|_0^2ω, ϕ_3|_0^2ω. In this example, |-2/15-(1/2π) ϕ_1|_0^2ω|=1.6151· 10^-8, |-10/21-(1/2π) ϕ_3|_0^2ω|=4.46887· 10^-8. So, modulo negligible numerical errors, the corresponding critical curve is closed, with quantum numbers q_1=-2/15 and q_3=-10/21. In the last section we will explain how we computed the modulus. A standard configuration of a curve with modulus c is represented in Figure <ref>. §.§ Discrete global invariants of a closed critical curve Consider a closed general critical curve γ of type ℬ_1', with modulus c and quantum numbers q_1=m_1/n_1, q_2=m_2/n_2, q_3=m_3/n_3, q_1+q_2+q_3≡ 0 mod ℤ. The half-period ω of the twist is given by the complete hyperelliptic integral (<ref>). Let 𝙼_γ=ℱ(2ω)ℱ(0)^-1 be the monodromy of γ. The monodromy does not depend on the choice of the canonical lift. It is a diagonalizable element of G with eigenvalues e^2π i q_1, e^2π i q_2, and e^2π i q_3. Thus, 𝙼_γ has finite order n=lcm(n_1,n_3). The momentum 𝔐_γ has three distinct real eigenvalues, so its stabilizer is a maximal compact abelian subgroup 𝕋^2_γ≅ S^1× S^1 of G (if γ is a standard configuration, 𝕋^2_γ=𝕋^2). Since [𝙼_γ,𝔐_γ]=0, 𝙼_γ∈𝕋^2_γ. Let s_1, s_3 be the integers defined by n=s_1n_1=s_3n_3. The CR spin of γ is 1/3 if and only if n≡ 0 (mod 3) and m_1s_1≡ m_3s_3≢0 (mod 3). The wave number 𝐧_γ of γ is n if the spin is 1 and n/3 if the spin is 1/3. Let |[γ]| denote the trajectory of γ. The stabilizer Ĝ_γ={[A]∈ [G]| [A]· |[γ]| = |[γ]|} is generated by [𝙼_γ] and is a cyclic group of order n_γ. Geometrically, Ĝ_γ is the symmetry group of the critical curve γ. The CR turning number w_γ is the degree of the map /2nω ∋ s↦ F_1-i F_3 ∈ℂ^* := ℂ∖{0}, where the F_j's are the components of a Wilczynski frame along γ. Without loss of generality, we may suppose that γ is in a standard configuration. From (<ref>), it follows that w_γ is the degree of /2nω∋ s↦ z_3∈ℂ^*, if ε_γ = 1, and is the degree of /2nω∋ s↦ z_2∈ℂ^*, if ε_γ = -1. Therefore, w_γ= s_3m_3, if ε_γ=1, s_2m_2, if ε_γ=-1, where s_2 is the integer defined by n=s_2n_2 (note that n_2 divides n, since q_1+q_2+q_3∈ℤ). A closed critical curve γ has an additional discrete CR invariant, denoted by tr_*(γ), the trace of γ with respect to the spacelike λ_1-eigenspace of the momentum. To clarify the geometrical meaning of the trace, it is convenient to consider a standard configuration.
In this case, 𝕃_1 is spanned by 𝐞_2∈^1,2 and the corresponding chain is the intersection of the sphere with the projective line z_2=0. The Heisenberg projection of this chain is the upward oriented Oz-axis. Thus, tr_*(γ) is the linking number Lk(γ̌,Oz^↑) of the Heisenberg projection of γ with the upward oriented Oz-axis. Let γ be as above. Then tr_*(γ)= (q_1-q_3) n_γ, if ε_γ=1, (q_1-q_2) n_γ, if ε_γ=-1. Without loss of generality, we may assume that γ is in standard configuration. The Heisenberg projection of γ is γ̌=^t(Re(ε iz_1/(z_2+z_3)), Im(ε iz_1/(z_2+z_3)), Re(-ε i(z_2-z_3)/(z_2+z_3))). Since γ̌ does not intersect the Oz-axis, the linking number Lk(γ̌,Oz^↑) is the degree of /2 n_γω∋ s ⟼z_1/(z_2+z_3)∈ℂ^*. From (<ref>) it follows that this degree is the degree of f: /2 n_γω∋ s⟼ρ_1√(3(λ_2 + λ_3)+τ^2(s)) e^iϕ_1/(ρ_2√(3λ_2-τ^2(s)) e^iϕ_2+ ρ_3√(3λ_3-τ^2(s)) e^i ϕ_3). Suppose that γ is negatively polarized. Then, τ^2-3λ_3<τ^2-3λ_2<0 and 0<τ^2<3λ_2. Therefore, 0<ρ_2√(3λ_2-τ^2)/ρ_3√(3λ_3-τ^2)= √((3λ_2-τ^2)(λ_2+2λ_3)/(3λ_3-τ^2)(2λ_2+λ_3))≤√(λ_2(λ_2+2λ_3)/λ_3(2λ_2+λ_3))<1. Thus f= (ρ_1√(3(λ_2 + λ_3)+τ^2)/ρ_3√(3λ_3-τ^2))· e^i(ϕ_1-ϕ_3)/(1+h e^i(ϕ_2-ϕ_3)), where h = ρ_2√(3λ_2-τ^2)/ρ_3√(3λ_3-τ^2). Since 0<h<1, the image of 1+he^i(ϕ_2-ϕ_3) is a curve contained in a disk of radius <1 centered at (1,0). Hence 1+he^i(ϕ_2-ϕ_3) is null-homotopic in ℂ^*. This implies deg(f)=(1/2π)(ϕ_1-ϕ_3)|_0^2 n_γω = n_γ(q_1-q_3). Suppose that γ is positively polarized. Then, τ^2-3λ_2>τ^2-3λ_3>0. In particular, τ^2>3λ_3>0 and 0<ρ_3√(τ^2-3λ_3)/ρ_2√(τ^2-3λ_2) =√((τ^2-3λ_3)(2λ_2+λ_3)/(τ^2-3λ_2)(λ_2+2λ_3)) < √(2λ_2+λ_3/λ_2+2λ_3)<1. Then f= (-i ρ_1√(3(λ_2 + λ_3)+τ^2)/ρ_2√(τ^2-3λ_2))· e^i(ϕ_1-ϕ_2)/(1+h̃ e^i(ϕ_3-ϕ_2)), where h̃ =ρ_3√(τ^2-3λ_3)/ρ_2√(τ^2-3λ_2). Since 0<h̃<1, the image of 1+h̃e^i(ϕ_3-ϕ_2) is a curve contained in a disk of radius <1 centered at (1,0). Hence 1+h̃e^i(ϕ_3-ϕ_2) is null-homotopic in ℂ^*. This implies deg(f)=(1/2π) (ϕ_1-ϕ_2)|_0^2 n_γω = n_γ(q_1-q_2). Summarizing: the quantum numbers of a closed critical curve are determined by the wave number, the CR spin, the CR turning number, and the trace. § EXPERIMENTAL EVIDENCE OF THE EXISTENCE OF COUNTABLY MANY CLOSED CRITICAL CURVES OF TYPE ℬ'_1 AND EXAMPLES This section is of an experimental nature. We use numerical tools, implemented in the software Mathematica 13.3, to support the claim that there exist countably many closed critical curves of type ℬ_1', with moduli belonging to the connected component ℬ_1^- of ℬ_1 (cf. Remark <ref>). The same reasoning applies, as well, if the modulus belongs to the other connected component ℬ_1^+ of ℬ_1. We parametrize ℬ_1^- by the map ψ_-:K_-→ℬ_1^-, defined in (<ref>), where K_- is the rectangle Ĵ_ξ^-× (0,1), Ĵ_ξ^-=(π/2, 2.3008). We take p=(p_1,p_2)∈ K_- as the fundamental parameters. The modulus 𝐜=(c_1,c_2), the roots e_1<e_2<0<e_3 of the quintic polynomial, and the eigenvalues λ_1=-(λ_2+λ_3)<0<λ_2<λ_3 of the momentum are explicit functions of the parameters (p_1,p_2). Let K_-^* be the open set of the general parameters, that is, K_-^* = { p∈ K_- |Δ_1( ψ_-( p))Δ_2( ψ_-( p))≠ 0}. The complete hyperelliptic integrals 𝔓_j can be evaluated numerically as functions of p∈ K_-^*. Consider the real analytic map 𝔓=(𝔓_1, 𝔓_3): K_-^*→^2.[Actually, 𝔓 is real-analytic on all K_-. Instead, 𝔓_2 has a jump discontinuity at the exceptional locus.] Choose p_*=(2,1/2)∈ K_-^* and plot the graphs of the functions f_11(p_1)=𝔓_1(p_1,1/2), f_12(p_2)=𝔓_1(2,p_2), f_31(p_1)=𝔓_3(p_1,1/2), and f_32(p_2)= 𝔓_3(2,p_2) (see Figures <ref> and <ref>).
The function f_11 is strictly increasing, while the other three functions are strictly decreasing. This implies that 𝔓 has maximal rank at p_*. Thus 𝒫_-=𝔓(K_-) is a set with nonempty interior. In particular 𝒫_-^r:=𝒫_-∩ℚ^2 is an infinite countable set and, for every q=(q_1,q_3)∈𝒫_-^r, there exists a closed critical curve of type ℬ'_1^- with quantum numbers q_1 and q_3. Figure <ref> reproduces the plot of the image of the map 𝔓, an open convex set. The mesh supports a stronger conclusion: the map 𝔓 is 1-1. Therefore, one can assume that, for every rational point (q_1,q_3)∈𝒫_-, there exists a unique congruence class of closed critical curves with quantum numbers q_1 and q_3. The construction of a standard configuration of a critical curve associated to a rational point q∈𝒫_- can be done in three steps. Step 1. Choose a rational point q=(q_1,q_3)=(m_1/n_1,m_3/n_3)∈𝒫_-. To find the parameter p∈ K_-, such that 𝔓( p)= q, we may proceed as follows: plot the level curves 𝚇_q_1=𝔓_1^-1(q_1) and 𝚈_q_3=𝔓_3^-1(q_3) and choose a small rectangle R⊂ K_- containing 𝚇_q_1∩𝚈_q_3 (see Figure <ref>). Then we minimize numerically the function δ_ q: R∋ p⟼√((𝔓_1( p)-q_1)^2+(𝔓_3( p)-q_3)^2). We use the stochastic minimization method named "differential evolution" <cit.> implemented in Mathematica. Let us revisit Example <ref>. Choose q=(-2/15,-10/21)∈𝒫_-. The plot of the level curves 𝚇_q_1 and 𝚈_q_3 is depicted in Figure <ref>. Minimizing δ_ q on the rectangle R=[1.83,1.86]× [0.65,0.75] (depicted on the right picture in Figure <ref>) we obtain p = (1.84438,0.719473) and δ_ q( p)=3.26867· 10^-9. So, up to negligible numerical errors, we may assume p=𝔓^-1( q). Computing ψ_-( p), we find the modulus c=(c_1,c_2) of the curve, where c_1=-0.828424 and c_2= -8.349418. With the modulus at hand, we compute the lowest real roots of the quintic polynomial, e_1≈ -0.931924<e_2≈ -0.678034, and the roots of the momentum, namely λ_1≈ -2.40462<0<λ_2≈ 0.40614<λ_3≈ 1.99848. Step 2. We evaluate numerically the integral (<ref>) and we get the half-period ω of the twist of the critical curve. In our example ω≈ 0.732307. The next step is to evaluate the twist τ. This can be done by solving numerically the Cauchy problem (<ref>) on the interval [0,2nω], n=lcm(n_1,n_3). The bending is given by κ=c_1/τ^2. Next, we solve the Frenet-type linear system (<ref>), with initial condition ℱ(0)=I_3. Then, γ: [0,2nω]∋ s⟼ [F_1(s)]∈ is a critical curve with quantum numbers q_1 and q_3 and ℱ is a Wilczynski frame field along γ. However, γ is not in a standard configuration. Step 3. The last step consists in building the standard configuration. The momentum 𝔐 of γ is L(0), where L is as in (<ref>). Taking into account that τ(0)=e_2, τ'(0)=0, and that κ(0)=c_1/e_2^2, we get 𝔐=[ 0 3(1 - c_1/e_2) 2 i e_2; e_2 0 3i(1 - c_1/e_2); 3i -i e_2 0; ]. The eigenspace of the highest eigenvalue is timelike (i.e., these critical curves are negatively polarized). We compute the eigenvectors and we build a unimodular pseudo-unitary basis A=(A_1,A_2, A_3), such that A_1 is an eigenvector of λ_3, A_2 is an eigenvector of λ_2, and A_3 is an eigenvector of λ_1. Let B̂ be as in (<ref>). Consider M= B̂A^-1∈G. Then, γ̂= Mγ is a standard configuration of a critical curve with quantum numbers q_1 and q_3. The curve γ̂ does not pass through the pole of the Heisenberg projection π_H. So, γ̌=π_H∘γ̂ is a closed transversal curve of ^3 which does not intersect the Oz-axis and tr_*(γ̂)= Lk(γ̌,Oz).
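Step 1 maps directly onto the differential-evolution routine available in SciPy. The sketch below deviates from the paper in one respect, which should be kept in mind: instead of searching over the parameters p∈K_- (which would require reproducing the parametrization ψ_-), it minimizes the same distance δ directly over a small box in the modulus plane around the expected answer, reusing the function P13 from the previous snippet (assumed to be in scope):

import numpy as np
from scipy.optimize import differential_evolution
# P13(c1, c2) is the map c -> (P_1(c), P_3(c)) from the previous sketch.

q = np.array([-2.0/15.0, -10.0/21.0])

def objective(c):
    try:
        return float(np.linalg.norm(np.array(P13(*c)) - q))
    except ValueError:        # modulus outside B_1 (fewer real roots): reject
        return 1.0e6

box = [(-0.90, -0.75), (-8.6, -8.1)]   # small box around the expected modulus
res = differential_evolution(objective, bounds=box, seed=0, tol=1e-10)
print(res.x)   # should land near c = (-0.828424, -8.349418)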
Applying Step 2 and Step 3 to Example <ref> and computing the Heisenberg projection, we obtain the transversal curve depicted in Figure <ref>, a non-trivial transversal knot. The quantum numbers are q_1=-2/15 and q_3=-10/21. Recalling what has been said about the discrete invariants of a critical curve (cf. Section <ref>), the spin is 1/3, the wave number is 35, the CR turning number is -50, and the trace is 12. Figures <ref> and <ref> reproduce the Heisenberg projections of the standard configurations of critical curves of type ℬ'_1^- with quantum numbers (-3/10, -9/25), (-1/5,-3/7), (5/49, -4/7), and (-7/36,-23/54), respectively. All of them have spin 1. The first is a trivial torus knot with wave number n=50, tr_*=3, and CR turning number w=-18; the second example is a nontrivial transversal knot with n=35, tr_*=8, and w=-15. The third example is a "tangled" transversal curve with n=49, tr_*=33, and w=-28. The last example is a nontrivial transversal torus knot with wave number 108, tr_*=25, and w=-46. It is clear that, being numerical approximations, the parametrizations obtained with this procedure are only approximately periodic.
Benn1983 D. Bennequin, Entrelacements et équations de Pfaff, in Third Schnepfenried geometry conference, Vol. 1 (Schnepfenried, 1982), 87–161, Astérisque, 107-108, Soc. Math. France, Paris, 1983.
Bryant1987 R. L. Bryant, On notions of equivalence of variational problems with one independent variable, Contemp. Math. 68 (1987), 65–76.
CI A. Calini and T. Ivey, Integrable geometric flows for curves in pseudoconformal S^3, J. Geom. Phys. 166 (2021), Paper No. 104249, 17 pp.
Cartan1932 E. Cartan, Sur la géométrie pseudo-conforme des hypersurfaces de deux variables complexes, I, Ann. Math. Pura Appl. (4) 11 (1932), 17–90 (or Oeuvres II, 2, 1931-1304).
Cartan1932-2 E. Cartan, Sur la géométrie pseudo-conforme des hypersurfaces de deux variables complexes, II, Ann. Scuola Norm. Sup. Pisa (2) 1 (1932), 333–354 (or Oeuvres III, 2, 1217-1238).
ChMo1974 S. S. Chern and J. K. Moser, Real hypersurfaces in complex manifolds, Acta Math. 133 (1974), 219–271.
COST E. Calabi, P. J. Olver, C. Shakiban, A. Tannenbaum, and S. Haker, Differential and numerically invariant signature curves applied to object recognition, Int. J. Comput. Vis. 26 (1998), 107–135.
DMN A. Dzhalilov, E. Musso, and L. Nicolodi, Conformal geometry of timelike curves in the (1+2)-Einstein universe, Nonlinear Anal. 143 (2016), 224–255.
Eliash1993 Y. Eliashberg, Legendrian and transversal knots in tight contact 3-manifolds, in Topological Methods in Modern Mathematics (Stony Brook, NY, 1991), 171–193, Publish or Perish, Houston, TX, 1993.
EMN-JMAA O. Eshkobilov, E. Musso, and L. Nicolodi, The geometry of conformal timelike geodesics in the Einstein universe, J. Math. Anal. Appl. 495 (2021), no. 2, Paper No. 124730, 32 pp.
Etn1999 J. B. Etnyre, Transversal torus knots, Geom. Topol. 3 (1999), 253–268.
EtHo J. B. Etnyre and K. Honda, Knots and contact geometry I: torus knots and the figure eight knot, J. Symplectic Geom. 1 (2001), 63–120.
Et2 J. B. Etnyre, Legendrian and transversal knots, in Handbook of Knot Theory, 105–185, W. Menasco & M. Thistlethwaite (Eds.), Elsevier B. V., Amsterdam, 2005. ArXiv version: .
Et3 J. B. Etnyre, Introductory Lectures on Contact Geometry, .
FelsOlver1 M. Fels and P. J. Olver, Moving coframes. I. A practical algorithm, Acta Appl. Math. 51 (1998), 161–213.
FelsOlver2 M. Fels and P. J. Olver, Moving coframes. II. Regularization and theoretical foundations, Acta Appl. Math. 55 (1999), 127–208.
FuTa1997 D. Fuchs and S. Tabachnikov, Invariants of Legendrian and transverse knots in the standard contact space, Topology 36 (1997), no. 5, 1025–1053.
GM J. D. Grant and E. Musso, Coisotropic variational problems, J. Geom. Phys. 50 (2004), 303–338.
Gr P. A. Griffiths, Exterior differential systems and the calculus of variations, Progress in Mathematics, 25, Birkhäuser, Boston, 1982.
Ho W. C. Hoffman, The visual cortex is a contact bundle. Mathematical biology, Appl. Math. Comput. 32 (1989), no. 2-3, 137–167.
Hsu L. Hsu, Calculus of variations via the Griffiths formalism, J. Differential Geom. 36 (1992), 551–589.
Jacobo1985 H. Jacobowitz, Chains in CR geometry, J. Differential Geom. 21 (1985), no. 2, 163–194.
K F. Klein, Vorlesungen über das Ikosaeder und die Auflösung der Gleichungen vom fünften Grade, Teubner, Leipzig, 1884.
KRV I. A. Kogan, M. Ruddy, and C. Vinzant, Differential signatures of algebraic curves, SIAM J. Appl. Algebra Geom. 4 (2020), no. 1, 185–226.
M E. Musso, Liouville integrability of a variational problem for Legendrian curves in the three-dimensional sphere, Quaderni di Matematica, Ser. Ed. by Dip. Matem. II Università di Napoli (Caserta), 9 (2002).
MN-CQG E. Musso and L. Nicolodi, Closed trajectories of a particle model on null curves in anti-de Sitter 3-space, Classical Quantum Gravity 24 (2007), no. 22, 5401–5411.
MN-SIAM E. Musso and L. Nicolodi, Reduction for constrained variational problems on 3-dimensional null curves, SIAM J. Control Optim. 47 (2008), no. 3, 1399–1414.
MNJMIV E. Musso and L. Nicolodi, Invariant signatures of closed planar curves, J. Math. Imaging Vision 35 (2009), 68–85.
MN-CAG E. Musso and L. Nicolodi, Quantization of the conformal arclength functional on space curves, Comm. Anal. Geom. 25 (2017), no. 1, 209–242.
MNS-Kharkiv E. Musso, L. Nicolodi, and F. Salis, On the Cauchy-Riemann geometry of transversal curves in the 3-sphere, Zh. Mat. Fiz. Anal. Geom. 16 (2020), no. 3, 312–363.
MS E. Musso and F. Salis, The Cauchy–Riemann strain functional for Legendrian curves in the 3-sphere, Ann. Mat. Pura Appl. (4) 199 (2020), 2395–2434.
Na O. Nash, On Klein's icosahedral solution of the quintic, Expo. Math. 32 (2014), no. 2, 99–120.
Olver-book1 P. J. Olver, Applications of Lie Groups to Differential Equations, Second Edition, Graduate Texts in Mathematics, vol. 107, Springer-Verlag, New York, 1993.
Olver-book2 P. J. Olver, Equivalence, Invariants, and Symmetry, Cambridge University Press, Cambridge, UK, 1995.
Pe J. Petitot, Elements of Neurogeometry, Lecture Notes in Morphogenesis, Springer International Publishing, 2017.
SP R. Storn and K. Price, Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (1997), no. 4, 341–359.
Tr M. Trott and V. Adamchik, Solving the quintic with Mathematica. Available on the Wolfram Library Archive.
http://arxiv.org/abs/2307.04797v1
20230710180007
The Characteristic Shape of Damping Wings During Reionization
[ "Huanqing Chen" ]
astro-ph.CO
[ "astro-ph.CO", "astro-ph.GA" ]
The Characteristic Shape of Damping Wings During Reionization Huanqing Chen ======================================== Spectroscopic analysis of Lyα damping wings of bright sources at z>6 is a promising way to measure the reionization history of the universe. However, the theoretical interpretation of the damping wings is challenging due to the inhomogeneous nature of the reionization process and the proximity effect of bright sources. In this Letter, we analyze the damping wings arising from the neutral patches in the radiative transfer cosmological simulation suite Cosmic Reionization on Computers (CROC). We find that the damping wing profile remains a tight function of the volume-weighted neutral fraction <x_HI>, especially when <x_HI>>0.5, despite the patchy nature of reionization and the proximity effect. This small scatter indicates that with a well-measured damping wing profile, we could constrain the volume-weighted neutral fraction as precisely as Δ<x_HI>≲ 0.1 in the first half of reionization. reionization – intergalactic medium – quasars: absorption lines § INTRODUCTION The epoch of reionization (EoR) brought about a major change to the global properties of the intergalactic medium (IGM) within the first billion years of the universe. Thanks to the numerous data obtained by JWST, we are now in an excellent position to understand this frontier in astrophysics. One of the fundamental questions surrounding the EoR concerns the timing and duration of reionization, which is not yet well-constrained. Several methods have been employed to measure reionization, each with its own unique strengths and limitations. One of the earliest constraints came from cosmic microwave background experiments that utilized the Thomson scattering effect of free electrons. For example, <cit.> has constrained the midpoint of reionization redshift to be z_ mid=7.82 ± 0.71 <cit.>. However, accurately measuring the entire history of reionization from start to end using the Thomson scattering effect on the CMB alone is challenging due to its integrated nature. Another powerful method to characterize the full cosmic dawn and reionization history is through the 21cm line emission from neutral hydrogen. However, at such long wavelengths, the foreground is orders of magnitude brighter than the signals, making data reduction notoriously difficult <cit.>. Another alternative for measuring the entire reionization history is to use the Lyα absorption in front of bright sources at different redshifts during the EoR <cit.>. As a strong resonant line, Lyα is sensitive to any trace of neutral hydrogen, allowing us to detect the very end of reionization (neutral fraction ≲ 10^-4) <cit.>.
Moreover, when there are neutral patches left in the IGM, the Lyα absorption line displays a large damping wing, reaching thousands of km/s in the spectrum where the flux is suppressed. <cit.> shows that assuming a uniform reionization model, the damping wings have a characteristic shape and can be used to constrain the neutral fraction for bright background sources like gamma-ray bursts (GRBs). However, reionization is a patchy process. The neutral fraction does not drop uniformly everywhere. Rather, some regions become highly ionized first, while other regions, shielded from ionizing sources, remain neutral until much later. In fact, many semi-numerical codes based on excursion-set formalism <cit.> treat every point in the universe as either neutral or ionized, therefore the term neutral fraction is meaningful only when averaged over a certain volume or mass. In the literature of reionization, the term neutral fraction most commonly refers to the volume-weighted neutral fraction over the entire universe <cit.>. Given the patchy nature of reionization, one natural question is whether the variance of the damping wing profile is too large to differentiate universes with different <x_HI>, or if it is small enough that the characteristic shape still holds. Another complication arises from the fact that bright sources, such as quasars, which provide high-resolution spectra for analysis, emit a large amount of ionizing radiation themselves. This radiation can alter the local morphology of reionization. Many semi-numerical methods can create a map of ionized bubbles created from typical galaxies <cit.>, but unusually bright sources like quasars are not modeled. Does the removal of neutral patches close to bright sources like quasars significantly change the shape of the damping wing? In this Letter, we use the radiative transfer cosmological simulation suite Cosmic Reionization on Computers (CROC) to address the above questions. Such a study is timely as more and more bright sources at z>6 are spectroscopically followed-up and available to be used in constraining reionization history. The Letter does not intend to describe the full process of extracting the neutral fraction from data, but serves to estimate the optimal precision of neutral fraction measurement achievable using the damping wing. § SIMULATION We use CROC simulations[The cosmological parameters used in CROC are: Ω_b=0.0479, Ω_M=0.3036, Ω_Λ=0.6964, h=0.6814, n_s=0.9675, σ_8=0.8285, k_ pivot=0.029.] <cit.> to study the damping wings arising from the patchily ionized IGM. The CROC project uses the Adaptive Refinement Tree (ART) code <cit.> to reach high spatial resolution (base grid length =39 h^-1 ckpc, peak resolution ∼ 100 pc in physical units). CROC simulations include relevant physics such as gas cooling, heating, star formation, stellar feedback and on-the-fly radiative transfer <cit.>. The main ionization sources in the simulations are star particles which are formed in dense gas in galaxies. In this project, we primarily use the uniform-grid data in one of the 40 cMpc/h runs (CROC B40F) alongside Rockstar <cit.> halo catalogs to locate dark matter halos. The uniform-grid data contain gas properties of neutral fraction, density, temperature in each base grid cell. They are saved frequently (with increments in expansion factor Δ a=0.001) so that we can sample a large range of <x_HI> and study the entire reionization process. In Figure <ref>, we show the neutral fraction map at three different redshifts overlaid with halos of different masses.
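For readers who want to experiment with such gridded skewer data, the wing construction used in the next section (a convolution of Voigt profiles from every neutral cell along a sightline) reduces, far from resonance, to summing Lorentzian damping wings. The snippet below is a minimal Python illustration, not the CROC pipeline: it keeps only the Lorentzian wing of the Lyα cross-section (adequate thousands of km/s from line center, where the thermal core is negligible), uses standard Lyα atomic constants, and substitutes toy arrays for the actual skewer extraction.

# Sketch: damping-wing optical depth from one skewer of neutral cells,
# keeping only the Lorentzian wing of the Lya cross-section.
import numpy as np

# Lya atomic data (CGS): oscillator strength, damping rate, line frequency,
# and the classical frequency-integrated cross-section pi e^2 / (m_e c)
F_LYA    = 0.4164
GAMMA    = 6.265e8           # s^-1
NU0      = 2.466e15          # Hz
SIGMA_CL = 0.02654           # cm^2 Hz
C_KMS    = 2.998e5           # km/s

def wing_tau(v_obs, v_cell, N_HI):
    """Wing optical depth at velocities v_obs (km/s) from neutral cells at
    velocities v_cell (km/s) carrying HI columns N_HI (cm^-2)."""
    dnu = NU0 * (v_obs[:, None] - v_cell[None, :]) / C_KMS        # Hz
    sigma = SIGMA_CL * F_LYA * GAMMA / (4.0 * np.pi**2 * dnu**2)  # cm^2
    return (sigma * N_HI[None, :]).sum(axis=1)

# toy skewer: one fully neutral patch, ~1 proper Mpc at n_HI = 1e-4 cm^-3,
# starting a few hundred km/s redward of the observed pixels
v_cell = np.linspace(300.0, 1300.0, 50)          # km/s
N_HI   = np.full(50, 1.0e-4 * 3.086e24 / 50)     # cm^-2 per cell
v_obs  = np.linspace(-4000.0, -100.0, 200)       # km/s (blue side)
transmission = np.exp(-wing_tau(v_obs, v_cell, N_HI))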
§ RESULTS To simulate the damping wing profiles, we first draw skewers (sightlines) starting from massive halos. We locate halos from Rockstar halo catalogues in the uniform-grid box. At each redshift, we select the 100 most massive halos and draw 10 skewers of length 200 cMpc/h uniformly distributed in a 3D sphere. In the left panel of Figure <ref>, the brown line shows the neutral fraction along one example skewer drawn from a snapshot where the volume-weighted neutral fraction is <x_HI>=0.5. To study the universe with different neutral fractions <x_HI>, we use skewers drawn at different redshifts of the same simulation run. When calculating absorption, we keep the neutral fraction and temperature of each cell unchanged while scaling the physical length and density to a certain redshift z_t by a and a^-3, respectively, where a is the expansion factor. The results shown in this paper are calculated for z_t=6.54. An unusually bright source like a quasar could push the I-front farther away. To mimic such an extra ionizing effect, when calculating the damping wing, we first draw a random number from a uniform distribution over [0, 40] cMpc/h and remove all neutral gas within this distance. This procedure aims to examine the maximum variance of the damping wing shape. Then we convolve the Voigt profiles from the rest of the neutral cells (x_ HI>0.5) along the skewer. In the left panel of Figure <ref>, we show this procedure in a skewer drawn from the box with <x_HI>=0.5: the faint blue, orange and green vertical lines show three random positions within which we remove all neutral gas, and the solid profiles are the damping wings arising from the remaining neutral gas, integrated out to 200 cMpc/h. We find that although the lengths of the first neutral patch differ, after convolution with all neutral patches behind it, the shapes of the profiles are very similar. This is more evident in the right panel, where we compare these profiles after aligning them at the starting position (the first point where transmission drops to zero). In Figure <ref>, we plot the median of the aligned damping wing profiles in snapshots of different <x_HI> using solid lines, with each colored band showing the 68% scatter. For <x_HI>≥ 0.5, the scatter of the wing profile is very small despite the patchy nature of reionization, and profiles with Δ<x_HI>=0.25 are clearly separated. We also compare the damping wing profiles with the ones created without randomly cutting the inner region (dash-dotted lines). If the inner neutral regions are not excised, the median damping wing is slightly stronger, but well within the scatter. The scatter of the no-cut case is almost identical to the previous case and thus not shown. We also calculate the damping wings assuming a uniform density and uniform reionization scenario (every cell has the same neutral fraction x_ HI=<x_HI> and every cell contributes to the damping wing), and the results are shown as dotted lines. Compared with the patchy ionization scenario with inner region excised, the damping wing is in general stronger, especially for 0.5≲<x_HI>≲ 0.75, but the differences are still small compared to the scatter. § DISCUSSION §.§ Cosmic variance Due to the small size (40 cMpc/h) of the simulation box, one might question whether the small scatter shown in the last section still holds when considering cosmic variance. To investigate this, we repeat the procedure in another box (CROC B40C). Both simulations have the same physics but different initial conditions ("DC modes").
As a result, B40F reionized the latest (reionization midpoint z_ mid=7.4) while B40C reionized the earliest (z_ mid=8.2) of all six 40 cMpc/h CROC realizations. Therefore, the density environments and halo distributions in these two boxes should differ maximally among all six realizations, and comparing damping wings in these two boxes helps us understand the stochasticity due to cosmic variance. In Figure <ref>, we compare the damping wings in box B40C with B40F of the previous section. We find that the mean and the scatter are almost identical, suggesting that the damping wings indeed have a characteristic shape as a function of <x_HI>. §.§ Practical use Our simulations show that for a mostly neutral universe (<x_HI> > 0.5), the scatter in damping wing profiles is small enough to distinguish between Δ<x_HI>≈ 0.1. However, measuring the entire damping wing profile is complicated in practice. In this subsection we briefly discuss the prospects of using the damping wing to constrain <x_HI>. <cit.> originally proposes GRB afterglows as the best candidates for measuring <x_HI> with damping wings. Compared with galaxies or quasars, GRBs have many advantages. They are thought to be produced in normal galaxies and thus live in less biased environments <cit.>. The number of integrated ionizing photons they contribute is also very small and unlikely to enlarge the local ionized bubble. In addition, they are intrinsically bright enough to be spectroscopically followed up. One challenge of using GRB afterglows is how to model the damped Lyα absorbers (DLAs) in the host galaxies. <cit.> shows that using the empirical distribution from current GRB afterglow spectra, one could model the local DLA distribution and marginalize this nuisance parameter. Although <cit.> does not consider the scatter of damping wing profiles, the small scatter we find in CROC simulations supports their forecast that with ≳ 20 GRB afterglows with spectral resolution R≳ 3000 and signal-to-noise ratio (SNR) ≳ 20, one could reach a precision close to 15% in the first half of reionization. Quasars are the sources from which we can obtain the highest-resolution spectra at z>6. The current highest quality sample of z>6 quasar spectra has SNR≳ 50 and R≳ 10000 <cit.>. Thanks to their high luminosity, the residual neutral fraction in their proximity zone is small enough to allow significant flux on the blue side of the Lyα line. Such flux offers extra information about the shape of the damping wing. The challenge of using quasars is that by z≈ 7, a quasar may have enlarged the local bubble significantly. Due to the decrease in quasar radiation with distance, the transmitted flux also decreases. This reduction in flux compromises the constraining power on the starting point of the damping wing, which is crucial for anchoring the shape of the damping wing. In the ideal case, we may catch a quasar in its bright phase, where the number of integrated ionizing photons emitted by the quasar is still small while the instantaneous luminosity is high enough to create a highly transparent proximity zone. This would allow us to observe details of the Lyα forest and measure flux close to the starting point of the damping wing, providing greater constraining power for the shape of the entire damping wing. In addition, similar to the GRB afterglow case, we need to develop a better understanding of how to model the intrinsic quasar continua.
Since the scatter in the damping wing profile is < 10% at wavelengths ≲ -1000 km/s from the starting point of the damping wing, it is ideal to have an accuracy in continuum recovery better than 10% across the quasar Lyα emission line (from ≲ -4000 km/s to where no transmitted flux is present). With the successful operation of JWST, we now have the capability to measure spectra from Lyman Break Galaxies (LBGs) or Lyman-alpha Emitters (LAEs) <cit.>. While these sources are more numerous than quasars, their low luminosity limits the achievable spectral resolution. As a result, information about the damping wing is mainly contained in the equivalent width (EW) measurements. However, if we can combine the information of both the LBG/LAE positions and their EWs, it would be promising to constrain the neutral fraction by considering both the damping wing strength and the size of ionized bubbles <cit.>. This avenue will be explored in future work. § CONCLUSIONS In this paper, we analyze the damping wings arising from the partially ionized IGM in the self-consistent radiative transfer cosmological simulation suite CROC. We find that when the volume-weighted neutral fraction < x_ HI> > 0.5, the damping wing has a characteristic shape with small scatter (≲ 10%). This scatter remains small even after an unusually bright source (such as a quasar) erodes a significant amount of neutral gas around it. This is because the damping wing arises from the collective, convolved Voigt profiles along a large distance (hundreds of comoving Mpc). We also calculate the damping wing profiles in a uniform reionization case, and we find that they lie within the 68% scatter. The small scatter in the damping wing profiles indicates that we can expect an accuracy of Δ<x_HI>≈ 0.1 if we can measure the damping wing profile precisely. In reality, there are several complications, notably how to model the intrinsic source spectra and the absorption within the ionized bubble. The profiles we find suggest that in order to achieve the best constraints on the neutral fraction, we should aim for an accuracy of continuum fitting better than 10% across the Lyα emission line of the source (from ≲ -4000 km/s to where no transmitted flux is present). For a very bright source such as a quasar, the complication of absorption inside the ionized bubble could potentially be mitigated by properly modeling the large-scale structure, which we plan to explore in the future. § ACKNOWLEDGEMENTS HC thanks the Natural Sciences and Engineering Research Council of Canada (NSERC) for support, funding reference #DIS-2022-568580. § DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the author.
http://arxiv.org/abs/2307.05472v1
20230710144636
An effective density matrix approach for intersubband plasmons coupled to a cavity field: electrical extraction/injection of intersubband polaritons
[ "M. Lagrée", "M. Jeannin", "G. Quinchard", "S. Pes", "A. Evirgen", "A. Delga", "V. Trinité", "R. Colombelli" ]
physics.optics
[ "physics.optics", "cond-mat.mes-hall", "physics.app-ph" ]
[email protected] III-V Lab, Campus Polytechnique, 1, Avenue Augustin Fresnel, RD 128, 91767 Palaiseau cedex, France Centre de Nanosciences et de Nanotechnologies (C2N), CNRS UMR 9001, Université Paris-Saclay, 91120 Palaiseau, France III-V Lab, Campus Polytechnique, 1, Avenue Augustin Fresnel, RD 128, 91767 Palaiseau cedex, France III-V Lab, Campus Polytechnique, 1, Avenue Augustin Fresnel, RD 128, 91767 Palaiseau cedex, France III-V Lab, Campus Polytechnique, 1, Avenue Augustin Fresnel, RD 128, 91767 Palaiseau cedex, France III-V Lab, Campus Polytechnique, 1, Avenue Augustin Fresnel, RD 128, 91767 Palaiseau cedex, France [email protected] III-V Lab, Campus Polytechnique, 1, Avenue Augustin Fresnel, RD 128, 91767 Palaiseau cedex, France [email protected] Centre de Nanosciences et de Nanotechnologies (C2N), CNRS UMR 9001, Université Paris-Saclay, 91120 Palaiseau, France The main technological obstacle hampering the dissemination of modern optoelectronic devices operating with large light-matter coupling strength Ω is the lack of an in-depth comprehension of the carrier current extraction and injection from and into strongly coupled light-matter states, the so-called polaritonic states. The main challenge lies in modeling the interaction between excitations of different nature, namely bosonic excitations (the plasmonic ISB excitations) interacting with fermionic excitations (the electrons within the extraction or injection subband). In this work, we introduce a comprehensive quantum framework that encompasses both the ISB plasmonic mode and the extractor/injector mode, with a specific emphasis on accurately describing the coherent nature of transport. This reveals inherent selection rules dictating the interaction between the ISB plasmon and the extraction/injection subband. To incorporate the dynamics of the system, this framework is combined with a density matrix model and a quantum master equation, which have the key property of distinguishing intra- and intersubband mechanisms. These theoretical developments are confronted with experimental photocurrent measurements from midinfrared quantum cascade detectors (λ = 10 µm) embedded in metal-semiconductor-metal microcavities, operating at the onset of the strong light-matter coupling regime (2Ω=9.3 meV). We are able to reproduce quantitatively the different features of the photocurrent spectra, notably the relative amplitude evolution of the polaritonic peaks with respect to the voltage bias applied to the structure. These results on extraction allow us to elucidate the possibility of effectively injecting electronic excitations into ISB plasmonic states, and thus polaritonic states. An effective density matrix approach for intersubband plasmons coupled to a cavity field: electrical extraction/injection of intersubband polaritons R. Colombelli August 12, 2023 ======================================== § INTRODUCTION The use of electromagnetic resonators like antennas or cavities is an established tool to tailor and improve the properties of optoelectronic devices, whether by increasing the sensitivity, reducing the electronic noise, or improving the wall-plug efficiency. In general, the strategy is to engineer, and typically increase, the interaction strength between light and an electronic transition in matter.
However, the interaction strength in practical devices is always limited to a small fraction of the photon or electronic transition linewidths, which places the device in the so-called weak coupling regime. On the contrary, when the light-matter interaction strength overcomes the losses in the system, the latter enters the strong coupling regime. The new constituents of this system are mixed light-matter states called polaritons, which can be formed by hybridizing any polarization-carrying matter excitation and a photon field. Polariton physics thus emerged as a transverse research field studying the fundamental properties of strongly coupled systems. It revealed a plethora of phenomena, the most recognized being the out-of-equilibrium Bose-Einstein condensation of exciton-polaritons <cit.>. However, most experiments on polaritons are performed by optical means, whereas practical devices require electrical injection or extraction of charge carriers. Recent experiments sparked new interest in electrical transport in systems under strong light-matter coupling conditions, with the report of increased conductivity in organic molecules <cit.>, or the breakdown of topological protection in quantum Hall systems <cit.>. Intense research effort is thus currently devoted to providing an accurate description of transport in systems strongly coupled to a cavity field. In this context, intersubband (ISB) polaritons, which originate from the coupling between an intersubband transition in doped semiconductor quantum wells (QW) and a cavity mode, are of particular interest. They were first reported in 2003 <cit.> with absorption experiments, and that same year electronic detection of the signature of strong coupling was also reported <cit.>. However, proposals for electrical injection and electroluminescence of ISB polariton devices <cit.>, which were quickly followed by experimental work <cit.>, faced the problem of inefficient electrical injection in a polaritonic state. That issue proved insurmountable in the following years <cit.>. To circumvent the problem, the study of the "reverse" process (photo-detection) was proposed to elucidate transport mechanisms in polaritonic ISB electronic devices, with experiments on quantum well infrared photodetectors (QWIPs) operating in the strong light-matter coupling regime <cit.>. In this context, we have recently presented a semi-empirical model to describe the electronic photoresponse of quantum cascade detectors (QCDs) operating in the strong light-matter coupling regime <cit.>. Based solely on classical oscillators, it allowed us to shine new light on the polariton-to-electron process, and in particular to conjecture that a direct polariton-to-electron tunnel mechanism may play a major role in such devices. This result was obtained at the expense of great simplifications. In particular, because the model is based on classical theory, it cannot include any considerations on the coherence of the involved processes. Nevertheless, coherence is of paramount importance when dealing with systems operating in the strong-coupling regime, and even more so for ISB polaritons, which originate from the coupling between a cavity mode and a collective excitation. ISB transitions, which are more rigorously defined as ISB plasmons <cit.>, are collective matter excitations originating from the electronic plasma inside a semiconductor quantum well, subject to its own Coulomb interaction.
This is in stark contrast to, for instance, exciton-polaritons, which result from an ensemble of single-particle transitions. The main consequence is the presence of dark states, which do not couple to the electromagnetic field, but do participate in electronic transport. This has important consequences on the behavior of ISB polariton systems under electrical injection. In this paper, we propose a quantum description of QCDs based on a density matrix formalism, which we compare to a complete set of experimental data. Crucially, this approach allows us to describe (de)coherence and dissipation in the system. Our goal is to develop a theoretical description that explains the electronic extraction process (photo-detection), and that - at the same time - provides a more suitable vantage point to elucidate the more complex electronic injection process leading to light emission. We note that a very recent work reports experimental results and proposes an alternative transport model for similar QCD structures operating in the strong coupling regime <cit.>. It works explicitly within the fermionic approach, without performing the bosonization steps. While similar conclusions are drawn in the photo-detection case, the work we present raises fundamental open questions and presents ways forward for the case of electrically pumped polaritonic light emitters. In the first part, we develop the model and derive the main observable quantities, notably the photocurrent generated by an exciting external photon field. In the second part, we validate the theoretical results by studying the photoresponse of quantum cascade detectors operating in the strong coupling regime as a function of the applied bias. We compare the values obtained in our model with an in-house code based on Ref. <cit.> that models the electronic transport in a more rigorous way, but does not incorporate the cavity effects <cit.>. In the last part, we discuss the implications of the main assumption at the basis of our new model, and extend them to the case of electrical injection. The system under study is sketched in the central part of Fig. <ref>. It consists of two electronic subbands confined inside a QW, here represented in momentum space. The second subband is tunnel-coupled to the fundamental state of an adjacent QW, and the whole system is embedded inside a cavity. The system can operate as a detector, acting as a QCD (top sketch), when it is excited by a photon that generates a photocurrent. This path is represented by blue arrows. It is also possible to inject electrons into the system (red arrows and bottom sketch), when an electric bias is applied, which can eventually lead to photon emission. In this case the device behaves as a polaritonic LED.

§ AN EFFECTIVE DENSITY MATRIX APPROACH FOR ELECTRONIC TRANSPORT IN CAVITY-COUPLED QCDS

§.§ Bosonization of the active optical transition

We start by defining the fermionic annihilation and creation operators c_λ𝐤 and c_λ𝐤^†, which annihilate and create electrons in subbands λ = {0,1,2} (see Fig. <ref>). We impose T = 0 K and we assume that, in the absence of external excitation, all N electrons are contained inside subband 0. The one-particle quantum state |1,𝐤⟩ of electronic wave vector 𝐤, representing a state where one electron is in subband λ=1, is: |1,𝐤⟩ = c_1𝐤^† c_0𝐤|F⟩ where |F⟩ denotes the fundamental Fermi state (equilibrium state, where all the electrons are contained in subband λ=0).
For now, we restrict the problem to the λ=0,1 subbands, which form the intersubband optical transition. This transition will be denoted as α. Following the developments of Ref. <cit.>, to describe the photo-excitation of an electron in the α-transition, it is relevant to switch from the fermionic basis formed by the |1,𝐤⟩ states to a new basis of states {|B_i^α⟩}_i=[1:N]. We have: |B_i^α⟩ = ∑_ |𝐤|< 𝐤_F w_i 𝐤^α|1,𝐤⟩ Since the system is considered at T = 0 K, only the states with |𝐤|<𝐤_F are occupied, 𝐤_F being the modulus of the wavevector corresponding to the Fermi level of subband 0. The {|B_i^α⟩}_i=[1:N] basis only covers the single-excitation subspace (only one photo-excited electron per subband), which is sufficient in the weak excitation regime. The coefficients w_i 𝐤^α are defined as: w_1𝐤^α = 1/√(N)   ∀𝐤 ∑_𝐤 w_i 𝐤^α = 0    ∀ i≠ 1 The |B_1^α⟩ state, of eigenenergy equal to the ISB transition energy ω_α = ω_1-ω_0 (assuming parabolic dispersion), has the remarkable property of holding the entire oscillator strength of the α transition: ⟨ F | d̂ | B_1^α⟩ = z_α√(N) where d̂ denotes the dipole operator and z_α the dipole strength of one electronic transition. The |B_1^α⟩ state is called the bright state: it is formed by the coherent superposition of the one-particle fermionic states |1,𝐤⟩ of the α-transition and it holds the entire capacity of light-matter interaction. The {|B_i^α⟩}_i=[2:N] are called the dark states since they cannot interact with the light: ⟨ F | d̂ | B_i^α⟩ = 0     i≠ 1 From these developments, one can define the bright state destruction and creation operators b_α and b_α^†, which describe the collective excitation of the α-transition: b_α^†= 1/√(N)∑_𝐤 c_1 𝐤^† c_0 𝐤 In the weak excitation regime and for a large number of electrons N, b_α can be approximated as a bosonic operator. b_α and b_α^† respectively demote and promote excitations inside the bright state |B_1^α⟩. The final step in this development is to include the plasmonic shift ω_P arising from the Coulomb interaction of the electronic polarizations. The diagonalization of the plasmonic Hamiltonian leads to the emergence of new operators of eigen-energy ω̃_α= √(ω_α^2 + ω_P^2) and a plasmonic bright state that is still orthogonal to the dark states <cit.>. Mathematically, this new state is essentially the same as the previous bright state, except that it is no longer degenerate with the dark states: for simplicity, we will keep the notation |B_1^α⟩ and b_α for the bright state and the corresponding creation operator, respectively. Note that at this stage we have not yet introduced the strong light-matter coupling: this derivation is therefore valid in any coupling regime.

§.§ Bosonization of the extractor: the tunnel-coupling Hamiltonian

We now turn to the insertion of the extraction subband into the formalism. As outlined in Refs. <cit.>, the mixing of bosonic (the plasmonic ISB excitations) and fermionic (the electrons in the extraction subband) degrees of freedom is necessary to correctly model the transport mechanisms that take place in an optically excited ISB system. The focus of our paper is on ISB systems strongly coupled to a photonic mode, but we stress that the above consideration is also valid in the weak-coupling regime. When a photon is absorbed by an ISB transition, it generates a bosonic excitation: an ISB plasmon. But the measured current, in a detector, is of course of fermionic nature.
In the case where the extraction subband is explicitly included in the system dynamics (and not only in the form of an external bath), keeping track of all these degrees of freedom becomes an extremely tedious task. Effectively, one correct way to describe the interaction between these excitations of different nature is to use a full fermionic Hamiltonian of extremely large dimension. This is a significant mathematical challenge that demands considerable effort, and the nature of transport cannot be straightforwardly interpreted due to this complexity. In this work, we overcome this strong limitation with a key modification: we propose to describe subband λ=2 with a bosonic operator in the context of an extraction process. This approach has several advantages, and - as we will discuss later on - it might also permit us to address the scenario involving an injection process. To explicitly incorporate subband λ=2 into our formalism, we introduce the one-particle fermionic states |2,𝐤⟩ of the β-transition: |2,𝐤⟩ = c_2𝐤^† c_0𝐤|F⟩ Analogous to the α-transition, we will not use this fermionic state basis and instead employ a new orthonormal basis {|B_i^β⟩}_i=[1:N] defined as: |B_i^β⟩ = ∑_ |𝐤|< 𝐤_F w_i 𝐤^β|2,𝐤⟩ where the coefficients w_i 𝐤^β are chosen such that: w_1𝐤^β = 1/√(N)   ∀𝐤 ∑_𝐤 w_i 𝐤^β = 0    ∀ i ≠ 1 The construction of this basis follows a similar approach as that of the {|B_i^α⟩}_i=[1:N] basis. Specifically, the first state |B_1^β⟩ is the bright state of the β-transition, while the remaining states {|B_i^β⟩}_i=[2:N] are the dark states of this same transition. However, this time, the oscillator strength of a diagonal transition being very small, we have z_β≪ z_α and thus the bright and dark states of the extractor are degenerate. Note that the single-excitation subspace describing subbands 1 and 2, of dimension 2N, is spanned by the concatenation of the {|B_i^α⟩}_i=[1:N] and {|B_i^β⟩}_i=[1:N] bases. The introduction of this new basis is valuable to evaluate the tunnel coupling between subbands 1 and 2 within the regime of strong light-matter coupling. The tunnel coupling operator T̂ can be defined as: T̂ = Ω_T ∑_𝐤 (c_2𝐤 c_1𝐤^† + c_2𝐤^† c_1𝐤 ) where Ω_T is the tunnel coupling strength. Using equations (<ref>), (<ref>), (<ref>) and (<ref>), we compute the tunnel interaction between subbands 1 and 2: ⟨ B_1^α | T̂ | B_1^β⟩ = Ω_T ⟨ B_1^α | T̂ | B_j^β⟩ = 0    j ≠ 1 ⟨ B_i^α | T̂ | B_1^β⟩ = 0    i ≠ 1 ⟨ B_i^α | T̂ | B_j^β⟩ = Ω_T ∑_𝐤 w_i𝐤^α∗ w_j𝐤^β    i ≠ 1, j ≠ 1 The above relations, which are de facto selection rules, are one of the key results of this work: through the tunnel interaction, it is not possible to transition from a dark state to a bright state (Eq. (<ref>)) or vice versa (Eq. (<ref>)). Obviously, dark states can interact with each other through tunnel coupling (Eq. (<ref>)), and the same applies to bright states as well (Eq. (<ref>)). These results have crucial implications for the nature of electronic transport in a QCD.
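These selection rules can be checked numerically. The short sketch below is our illustration (it is not part of the original derivation; the number of states N, the random dark-state coefficients and the reuse of the tunnel coupling value Ω_T = 4.2 meV quoted later in the text are arbitrary assumptions). It builds orthonormal coefficient sets w_i𝐤^α and w_i𝐤^β whose first element is the uniform bright-state vector, and verifies that all bright-dark tunnel matrix elements vanish.

import numpy as np

N = 64                      # number of occupied k-states (illustrative)
Omega_T = 4.2               # tunnel coupling strength in meV (assumed value)

def coefficient_basis(N, rng):
    # Orthonormal coefficient set w_{ik}: row 0 is the uniform bright-state
    # vector, rows 1..N-1 are (random) dark-state vectors orthogonal to it.
    M = rng.standard_normal((N, N))
    M[:, 0] = 1.0 / np.sqrt(N)          # force the bright state as first column
    Q, _ = np.linalg.qr(M)              # Gram-Schmidt orthonormalization
    if Q[0, 0] < 0:                     # fix the sign ambiguity of QR
        Q[:, 0] *= -1
    return Q.T

rng = np.random.default_rng(0)
W_alpha = coefficient_basis(N, rng)     # w^alpha_{ik}
W_beta = coefficient_basis(N, rng)      # w^beta_{jk}, independent dark sector

# <B_i^alpha| T |B_j^beta> = Omega_T * sum_k w^alpha*_{ik} w^beta_{jk}
T = Omega_T * W_alpha.conj() @ W_beta.T

print(np.isclose(T[0, 0], Omega_T))     # bright-bright element equals Omega_T
print(np.allclose(T[0, 1:], 0.0))       # bright -> dark: forbidden
print(np.allclose(T[1:, 0], 0.0))       # dark -> bright: forbidden

Whatever the choice of dark-state coefficients, the bright-dark elements vanish because every dark vector is, by construction, orthogonal to the uniform bright-state vector; only the dark-dark block depends on the details of the two bases.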
For a detection process, where light promotes excitations into the |B_1^α⟩ bright state, the previous results suggest that an optical excitation can generate an electronic current in only two ways: * Direct tunnelling into the extractor bright state |B_1^β⟩, preserving the coherent nature of the excitation, and subsequent decay - with loss of coherence - into an extractor dark state |B_i≠1^β⟩, or * First decay - with loss of coherence - into an ISB dark state |B_i≠1^α⟩ in the active region, and subsequent tunneling into an extractor dark state |B_i≠1^β⟩ Other channels involving bright-to-dark tunneling should not be considered, as they are prohibited by the selection rules (<ref>) and (<ref>). Once in the extractor dark states, the electronic excitation will simply decay down the remaining cascade, generating photocurrent. We stress that the construction of the new β basis merely extends the procedure applied to the α transition (detailed in reference <cit.>) to the β transition, without additional hypotheses. By implementing this basis transformation, the comprehension of the transport process is streamlined, leading to the natural emergence of the selection rules presented in Equations (<ref>) to (<ref>). In the following section, we will assess the need to actually incorporate the dark states from both the α and β-transitions to replicate the experimental photocurrent measurements from a QCD operating in the strong light-matter coupling regime. The implications of this section for an electronic injection process into polaritonic states will be discussed in section <ref>.

§.§ Introducing dissipation and decoherence in the model

In the following, we develop an effective density matrix model of the photocurrent extraction. We make a drastic choice in the description of the system: we limit the extraction model to the transport induced by the bright states |B_1^α⟩ and | B_1^β⟩. The dark states from both the α and β-transitions are omitted. Both subbands 1 and 2 will thus be described using bosonic operators only. This is equivalent to choosing scenario (1) among the two described at the end of the previous section: direct tunnelling into the extractor bright state |B_1^β⟩ (preserving the coherent nature of the excitation), and subsequent decay - with loss of coherence - into an extractor dark state |B_i≠1^β⟩. This choice was already implicit in the approach that we employed in our previous work based on a classical description of the electronic transport, using coupled mode theory <cit.>. We now go beyond this classical model using a quantum master equation. The key addition is the introduction of decoherence in the system, which is distinct from dissipation. In terms of spectral effects, decoherence impacts the broadening of the photocurrent peaks, while dissipation primarily affects their amplitude. In the experimental study we will report in Sec. <ref>, the bias will be varied, and - as a result - the amplitude of the peaks will be affected more than their broadening. It will be essential to differentiate between the effects of decoherence and dissipation, a distinction that was previously impossible to achieve with the classical model. We define the operator b_β using our new basis from equations (<ref>) and (<ref>): b^†_β = 1/√(N)∑_𝐤 c_2 𝐤^† c_0𝐤 b^†_β |F⟩ = |B_1^β⟩ Using the fermionic commutation rules and a weak excitation regime, we have: [b_β,b_β^†] = (N̂_0 - N̂_2)/N ≈ ℐ̂_d where N̂_i is the population operator of subband i and ℐ̂_d the identity operator.
b_β can thus be approximated as a bosonic operator: b_β and b_β^† describe the destruction and creation of electronic excitations inside the extraction mode, of eigen-frequency ω_β = ω_2 - ω_0. The related Hamiltonian is: ℋ̂_β = ω_β b_β^† b_β We restrict the tunnel interaction to the interaction between the plasmonic bright mode and this new extraction mode. This drastically simplifies the tunnel interaction Hamiltonian described in Eq. (<ref>). The restricted Hamiltonian T̂_bright is: T̂_bright = Ω_T (b_α^† b_β + b_α b_β^†) The TM_01 electromagnetic mode confined in the patch antennas will be modeled as a standard optical resonator of frequency ω_c, using the bosonic destruction and creation operators a_c and a_c^†. Using the rotating wave approximation to describe the light-matter interaction, the time-dependent Hamiltonian ℋ̂(t) of the whole system reads: ℋ̂(t) = ω_c a_c^† a_c + ω̃_α b_α^† b_α + ω_β b_β^† b_β + Ω( a_c^† b_α + a_c b_α^†) + Ω_T ( b_α^† b_β + b_α b^†_β) + κ_c s_+ ( a_c^† e^-iω t + a_c e^iω t) where s_+ is the amplitude of the incoming light excitation, ω its frequency, and κ_c is the coupling constant between this external field and the confined optical mode inside the cavity. We map this system onto an equivalent open quantum system described by the reduced density matrix ρ. Under standard Born-Markov approximations, the time evolution of the density matrix ρ obeys the following quantum master equation <cit.> (ħ=1 for clarity): dρ(t)/dt = - i [ℋ̂(t),ρ] + γ_αℒ[b_α, ρ] + γ_βℒ[b_β, ρ] + (γ_c+Γ_c) ℒ[a_c, ρ] + γ_α^intraℒ[b_α^†b_α, ρ] + γ_β^intraℒ[b_β^†b_β, ρ] where the ℒ are Lindblad super-operators modeling the dissipative and decoherent interactions of the environment with the system. For any operator Â, the super-operator ℒ reads: ℒ[Â,ρ] = 2 ÂρÂ^† - (Â^†Âρ + ρÂ^†Â) The plasmonic ISB excitations are mainly dissipated through their interaction with interface roughness, at a non-radiative rate γ_α. Similarly, the extractor dissipates electrons into the next period at a non-radiative rate γ_β, and is responsible for the generation of electrical current inside the structure. γ_β represents an effective dissipation rate that takes into consideration the remaining electronic cascade. The cavity also dissipates photons (mainly through undesired free-carrier absorption) at a rate γ_c, but also through a spontaneous emission channel, at a radiative rate Γ_c. Note that the radiative coupling κ_c is related to the radiative damping through κ_c = √(2 Γ_c) <cit.>. The main difference with our previous work <cit.> lies in the ability to explicitly introduce the intra-subband scattering through the pure decoherence terms γ_α^intraℒ[b_α^†b_α, ρ] (resp. γ_β^intraℒ[b_β^†b_β, ρ]) <cit.>. These terms model pure decoherence without excitation dissipation (the intra-subband scatterings thermalize excitations inside a subband without dissipating them into another subband). By using the density matrix formalism, it thus becomes possible to differentiate between the effects of inter-subband (dissipation) and intra-subband (pure decoherence) processes on the evolution of the system (and ultimately on the shape of the calculated photoresponse spectra). More details on the necessity to distinguish intra- and intersubband scatterings can be found in Appendix <ref>.

§.§ Deriving observable quantities for comparison with experiments

Equation (<ref>) can be solved numerically in steady state.
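To make this numerical solution concrete, a minimal sketch is given below. It is our illustration, not code from the paper: all parameter values are placeholders, not the fitted ones. The drive is removed by moving to the frame rotating at the excitation frequency ω, where the RWA Hamiltonian becomes time-independent; note also that a term γℒ[Â,ρ] in the convention above corresponds to the collapse operator √(2γ)Â in the convention of QuTiP, the library used later for the fits.

import numpy as np
import qutip as qt

# Illustrative parameters in meV (hbar = 1); placeholders, not fitted values
w_c, w_a, w_b = 131.0, 124.0, 124.0    # cavity, ISB plasmon, extractor
Om, Om_T = 4.6, 4.2                    # light-matter and tunnel couplings
g_c, G_c = 3.4, 0.6                    # cavity non-radiative / radiative rates
g_a, g_b = 0.7, 0.5                    # ISB and extractor dissipation
g_a_in, g_b_in = 3.0, 3.0              # intrasubband pure-dephasing rates
s_plus = 1.0                           # weak incoming field amplitude
kappa_c = np.sqrt(2.0 * G_c)           # radiative coupling kappa_c = sqrt(2 Gamma_c)

n = 4                                  # Fock-space truncation (weak excitation)
a = qt.tensor(qt.destroy(n), qt.qeye(n), qt.qeye(n))
ba = qt.tensor(qt.qeye(n), qt.destroy(n), qt.qeye(n))
bb = qt.tensor(qt.qeye(n), qt.qeye(n), qt.destroy(n))

c_ops = [np.sqrt(2 * g_a) * ba,                # intersubband dissipation (alpha)
         np.sqrt(2 * g_b) * bb,                # extraction into the cascade
         np.sqrt(2 * (g_c + G_c)) * a,         # total cavity losses
         np.sqrt(2 * g_a_in) * ba.dag() * ba,  # pure intrasubband dephasing
         np.sqrt(2 * g_b_in) * bb.dag() * bb]

def photocurrent(w):
    # RWA Hamiltonian in the frame rotating at the drive frequency w
    H = ((w_c - w) * a.dag() * a + (w_a - w) * ba.dag() * ba
         + (w_b - w) * bb.dag() * bb
         + Om * (a.dag() * ba + a * ba.dag())
         + Om_T * (ba.dag() * bb + ba * bb.dag())
         + kappa_c * s_plus * (a + a.dag()))
    rho_s = qt.steadystate(H, c_ops)
    return 2 * g_b * qt.expect(bb.dag() * bb, rho_s)   # J_beta, see below

freqs = np.linspace(105.0, 145.0, 161)
spectrum = [photocurrent(w) for w in freqs]   # exhibits the two polariton peaks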
The solution is a stationary reduced density matrix ρ_S, and any observable Ô can then be computed using: ⟨Ô⟩ = Tr(Ôρ_S) where Tr represents the trace. We can then compute the relevant quantities of the system. The total absorption of the system is the sum of the power dissipated into the different decay channels, normalized by the incoming power |s_+|^2: 𝒜_tot = 𝒜_c + 𝒜_α + 𝒜_β = 2 γ_c ⟨ a_c^† a_c⟩/|s_+|^2 + 2 γ_α⟨ b_α^† b_α⟩/|s_+|^2 + 2 γ_β⟨ b_β^† b_β⟩/|s_+|^2 where 𝒜_c, 𝒜_α and 𝒜_β represent the cavity, ISB and extraction absorptions, respectively. The net photocurrent 𝒥_β is defined as the current under illumination. 𝒥_β is proportional to the power dissipated from one period to the next adjacent period. This is exactly the power dissipated by the extraction mode β: 𝒥_β = 2 γ_β⟨ b_β^† b_β⟩ Note: this is a phenomenological interpretation of the photocurrent. It is in fact expected that an excitation inside the bright extractor state |B_1^β⟩ should first decay into the dark states |B_i≠ 1^β⟩ before being extracted into the electronic cascade and contributing to the photocurrent. We choose to neglect these dark extractor states, such that the power is directly dissipated from the bright extractor state. This also applies to the ISB dissipation, where the |B_i≠ 1^α⟩ dark states are neglected when considering the non-radiative dissipation γ_α.

§ EXPERIMENTAL VALIDATION IN PHOTO-DETECTION: THE POLARITON-TO-CURRENT PROCESS

§.§ Experimental details

The samples investigated in this study are the same as those already studied in Ref. <cit.>. They are processed into 8 × 8 (approximately 50 × 50 µm^2) patch antenna arrays, with the patches connected through 250-nm thin metallic wires (see Fig. <ref> in Appendix <ref>). Details of the processing can be found in <cit.>. The samples are cooled down to T = 78 K in a cryostat, and they are illuminated by light from a globar source at normal incidence. The photocurrent spectra are acquired in rapid scan mode, after amplification using a low-noise transimpedance amplifier. We extend the data presented in <cit.>, and now present measurements with a voltage bias applied to the samples. The applied electric field ranges from F = -25 kV.cm^-1 to F = 8 kV.cm^-1. We have fabricated several array designs (p, s), with p the inter-patch period of the array, and s the lateral dimension of the patches. However, to allow for a quantitative comparison, we present measurements under an applied electric field for two samples only, with the same p = 7 µm, and s = 1.5 µm and s = 1.55 µm, respectively, as reported in Fig. <ref> (continuous lines). Additional measurements can be found in Appendix <ref>. While the relative amplitude of the spectra when varying the bias contains meaningful information on the electronic transport, one should exercise caution when comparing the amplitudes of different pairs (p, s), as the experimental protocol does not ensure a consistent illumination between each measurement of the device. Two photocurrent peaks are clearly visible in Fig. <ref>, a signature of the strong light-matter coupling regime. Note: the peaks under consideration cannot be confused with the two peaks arising from tunnel-coupled subbands, since the peak positions would change with the applied bias in the latter case. Here, the energy splitting (for a given pair p, s) is constant regardless of the applied field. For all (p, s) couples studied, the global amplitude of the photocurrent spectra evolves with the applied electric field F.
A maximum amplitude is observed around F = -10 kV.cm^-1. The noise level increases strongly when the absolute amplitude |F| of the field increases. This noise is the direct consequence of the increase of the parasitic dark current with the electric field and - as is well known <cit.> - it limits the range of exploitable fields F for device applications. The relative amplitude of the two peaks inverts with respect to the applied field F, with the equal-amplitude condition of the two polaritonic photo-detection peaks found for a negative field F ≈ -5 kV.cm^-1. Below this threshold, the low energy peak dominates. Conversely, for F > -5 kV.cm^-1, it is the high energy peak that dominates. This phenomenon can be attributed to the realignment of the subbands under the influence of the applied bias. When a highly negative voltage is applied, the subbands follow a clear staircase structure (see Fig. <ref> in Appendix <ref> for the QCD bandstructure), which facilitates the extraction process. Conversely, at positive voltages, the subband cascade becomes less organized, hindering the extraction process.

§.§ System parameters and constraints

Before applying the theoretical developments of section <ref> to the experimental data, let us detail the system parameters and the constraints applied to them. The photonic degrees of freedom are the cavity parameters ω_c, γ_c and Γ_c, which are independent of the applied electric field F. They only depend on the geometrical parameters (p, s) of the cavities <cit.>: ω_c(s) = π c_0/(n_eff s) Γ_c(p) = α_c/p^2 where c_0 is the light velocity, n_eff is the effective index of the cavity, which represents the effective medium composed of the semiconductor contacts and of the undoped periodic structure embedded between the gold layers forming the cavity, and α_c is the cavity dispersion loss factor. We choose to constrain n_eff, α_c and γ_c to the values obtained from our prior investigation of the same samples <cit.>, where the photocurrent of several samples with different (s,p) couples has been studied for F = 0 kV.cm^-1: n_eff = 3.22 α_c = 29.1   meV.µm^2 γ_c = 3.4   meV The cavity parameters are thus excluded from the fitting process. Several electronic degrees of freedom can also be fixed or constrained independently of our density matrix model. The parameters of the ISB transition in the active QW (α) are assumed independent of the applied electric field F: the transition is vertical in a single quantum well and is therefore only marginally affected by the applied bias. The ISB frequency ω_α and the plasma frequency ω_P could be computed from our sequential transport software <cit.>. However, it is common to observe disparities between expected and measured doping levels (up to 15%). Experimental discrepancies also affect the ISB frequency (up to 5%), usually caused by the quality of the quantum well interfaces obtained during the epitaxial process. To account for these disparities, and since both ω_α and ω_P are crucial parameters to reproduce the strong coupling measurements, we choose to leave these parameters free during the fitting process: ω̃_α = √(ω_α^2 + ω_P^2) Note: the light-matter coupling constant Ω is parametrized using ω_P: Ω = (ω_P/2)√(f_w) with f_w (≈ 0.17) the computed overlap factor between the cavity field and the doped active quantum wells. Two additional α parameters can be computed using our sequential transport software: the non-radiative dissipation rate γ_α of the α plasmon from the excited subband to the fundamental subband, and the tunnel coupling Ω_T.
We compute γ_α = 0.66 meV and Ω_T = 4.2 meV, respectively. The new parameter of our transport model in the strong coupling regime, the intra-subband rate γ_α^intra, will instead be fitted. The parameters related to the extractor β are instead dependent on the electric field F: the extractor energy shifts with respect to the upper excited state of the ISB transition when a bias is applied to the structure. The misalignment is approximated as linear: ω_β (F) = α_F F + ω_β^0 where α_F is the linear coefficient and ω_β^0 is the extractor energy for F = 0. This dispersion can be computed using our sequential transport software and is injected into the model: α_F = 1.12   meV/(kV.cm^-1) ω_β^0 = 124   meV Similarly to γ_α^intra, γ_β^intra will be a fitting parameter common to the whole data set. Finally, we expect the misalignment of the cascade with the electric field to modify the value of the effective extraction rate γ_β(F). γ_β is one of the most important parameters of the fitting process, as it controls the relative amplitude of the spectra. Although we suspect that it might closely match the actual extraction rate calculated from our sequential transport model, we decided to keep it as a free parameter: for each measured electric field value F_i, we fit one extraction rate γ_β(F_i). Note: γ_β(F_i) is independent of the geometrical parameters p and s. In summary, ω_α, ω_P, γ_α^intra and γ_β^intra are fitting parameters common to the whole data set, and their initial values for the fit are based on the ones derived by our software.

§.§ Discussion on the validity of the fit

In this section, we perform a global fit on the whole experimental photocurrent dataset (Fig. <ref>), using the parameter constraints described in the previous section. We solve Eq. (<ref>) in the stationary regime (using the QuTiP python library <cit.>) to evaluate the theoretical photocurrent 𝒥_β, as per Eq. (<ref>). The parameters resulting from the fit are presented in Table <ref>. The returned values are consistent with the previous fits performed with the coupled mode theory in <cit.>. In particular, the extraction rate γ_β as a function of the applied electric field is plotted in Fig. <ref> and compared with the values computed through our sequential transport model. The right order of magnitude is obtained (γ_β < 1 meV) and the evolution trends are relatively well reproduced (γ_β decreasing for F > 0, slope break around F = -4 kV.cm^-1). These results on γ_β are also consistent with the evolution of the integrated amplitude of the spectra (Fig. <ref>, right-side scale): when the electric field is below F = -4 kV.cm^-1, the electronic cascade is efficiently aligned, and the effective extraction rate γ_β is high. This leads to a significant photocurrent signal. The spectrally resolved photocurrent calculated using the parameters returned by the global fit procedure is compared to the experimental data in Fig. <ref>, with a quantitative agreement obtained on the whole set of triplets (p, s, F). Two important trends are reproduced as a function of the bias, i.e. as a function of the ω_α-ω_β alignment: (i) the overall amplitude of the spectra, and (ii) the relative amplitude inversion between the peaks of the two polaritonic branches. This study quantitatively confirms that the extractor (the electronic cascade of the QCD) and its relative alignment with respect to the ISB transition control the overall amplitude of the spectra, and also the relative amplitude of the peaks of the polaritonic branches.
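To make the structure of this global fit explicit, the sketch below outlines one possible implementation. It is only an illustration under stated assumptions: the data arrays and initial values are hypothetical, and the model function is a stand-in that, in practice, would wrap the steady-state solution of the master equation (as in the earlier sketch). The key point is the parameter packing: four parameters shared by the whole data set, plus one extraction rate γ_β(F_i) per applied field.

import numpy as np
from scipy.optimize import least_squares

# Hypothetical data: photocurrent spectra measured on a frequency grid for
# several applied fields (placeholders standing in for the real data).
freqs = np.linspace(105.0, 145.0, 161)
fields = np.array([-20.0, -10.0, -5.0, 0.0, 5.0])      # kV/cm
spectra = [np.zeros_like(freqs) for _ in fields]

ALPHA_F, W_BETA_0 = 1.12, 124.0    # fixed extractor dispersion (from the text)

def model(w_a, w_p, g_a_in, g_b_in, g_b, F):
    # Stand-in for the steady-state master-equation solver: it should build
    # the Hamiltonian with w_b = ALPHA_F*F + W_BETA_0 and return J_beta(freqs).
    w_b = ALPHA_F * F + W_BETA_0
    return np.zeros_like(freqs)    # placeholder output

def residuals(p):
    # p = [w_a, w_p, g_a_in, g_b_in, g_b(F_1), ..., g_b(F_M)]
    w_a, w_p, g_a_in, g_b_in = p[:4]        # shared by the whole data set
    g_bs = p[4:]                            # one extraction rate per bias
    return np.concatenate(
        [model(w_a, w_p, g_a_in, g_b_in, g_b, F) - y
         for g_b, F, y in zip(g_bs, fields, spectra)])

p0 = np.concatenate(([124.0, 30.0, 3.0, 3.0], 0.5 * np.ones(len(fields))))
fit = least_squares(residuals, p0)          # global fit, all spectra at once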
Applying an electric field to the structure enables the selective extraction of excitations from a polaritonic state towards the electronic cascade, while also providing control over the efficiency of this extraction. This selective extraction capacity is enabled by the sharp transfer function and the 2Ω spacing (the Rabi splitting) between the polaritonic peaks: a finer transfer function and a stronger coupling would allow for a better selectivity of the ω_± polaritons. More details on a QCD transfer function in the strong coupling regime can be found in Appendix <ref>. The good agreement between the experimental data and the theoretical model provides strong evidence that the dark states of both transitions α and β do not need to be included in the model to depict an extraction process. The bright tunnel interaction T̂_bright and the phenomenological dissipation rate γ_β from the extractor bright state are sufficient to quantitatively reproduce the experimental measurements. As previously postulated in <cit.>, this result confirms that the polaritonic nature of the excitation is carried over during the extraction process through the coherent tunnel coupling. The extraction is a coherent process, mainly involving the bright states of both the α and β transitions. This model nevertheless permits a step forward in the comprehension of the polariton-to-electron process. Chronologically, the early attempts were limited to the observation of a polariton splitting in photo-detection <cit.>. A phenomenological transfer function was then introduced in the study of QWIPs operating in strong coupling <cit.>. Recently, the Coupled Mode Theory (CMT) permitted a more rigorous modeling of the transfer function, and gave an initial indication of direct tunneling into the extractor bright state, with no role for the polaritonic dark states <cit.>. The model presented in this paper gets rid of the transfer function - a phenomenological concept - and replaces it with a rigorous tunnel coupling Hamiltonian between the α and β transitions, with a complete description of bright and dark states. The latter do not play a major role in the polariton extraction process, but they have a crucial role for polariton injection. Our model integrates them, and might constitute a valid vantage point to study electrically injected polariton emitters. More information on the transfer function and the difference between the CMT and the effective density matrix approach can be found in Appendix <ref>.

§ IMPLICATIONS OF THE MODEL FOR ELECTRICALLY PUMPED POLARITON EMITTERS: THE ELECTRON-TO-POLARITON PROCESS

The validity of the density matrix approach in describing electrical extraction from optically excited polaritons motivates the study of the implications of these findings for electrical injection and subsequent photon emission, represented by the red arrows in Fig. <ref>. As discussed in Ref. <cit.>, the main difficulty in describing an intersubband emitter operating in the strong light-matter coupling regime lies in the simultaneous description of both optical (bosonic) and electronic (fermionic) excitations. The injection process fills subband 2 with fermionic excitations in the form of electrons, while the plasmonic excitations that occupy the α bright state are bosonic.
Working with the full fermionic Hamiltonian is an arduous task <cit.>, which could hinder the development of an intuitive understanding of the transport, although very recently a fermionic approach was successfully used to model QCDs operating in the strong coupling regime <cit.>. The previous section <ref> suggests that the bosonization procedure of the extractor, which we employed to describe the extraction process, is a novel and readily interpretable approach for examining the injection process. In particular, the selection rules for the tunnel Hamiltonian, Eqs. (<ref>)-(<ref>), might prove a powerful tool. Due to the impossibility of conducting an experimental study resembling the one carried out for QCDs in a detection process, the following discussion will be supported by the quantitative arguments previously presented in section <ref>. Note: the β extractor states are now referred to as injector states. An injection process is inherently incoherent because it introduces electrical excitations into an intersubband system through an incoherent external bath of electrons. The relevant coherence here is that of the ISB plasmon <cit.>, which is a collective - and coherent - matter excitation originating from the electronic plasma inside a semiconductor quantum well (QW). In this respect, an intuitive picture suggests that for an ISB polariton system, the electrical injection process is not the reverse of the electrical extraction. In the latter, coherence (induced by light) is destroyed to generate an electrical current, while in the former it appears that coherence must be created. More formally, in the framework of a bosonized injector, we expect most of the electronic population to be located in the dark states |B_i^β⟩ (i≠ 1) upon electrical injection. Furthermore, to emit light, excitations must be transferred to the plasmonic bright state |B_1^α⟩, which holds the entire oscillator strength of the system. However, the selection rules (<ref>) and (<ref>) are clear: it is impossible for a dark state of the injector to interact with the plasmonic bright state through a tunnel interaction. In other words, the primary injection pathway, which would involve a direct transfer from the injector states to the bright plasmonic state, cannot be taken. The bosonized injector formalism confirms that polaritonic emitters do not operate as reversed polaritonic detectors. In QCDs, the coherence is established through the photonic mode and maintained up to the extractor by both the light-matter coupling Ω and the tunnel coupling Ω_T. Coherence can also be lost through the irreversible intrasubband scatterings γ_α^intra in the plasmonic mode, although we have demonstrated that this is not the main extraction scheme. However, the extraction process can still take place, since the usual dark-to-dark tunnel interactions are possible (Eq. (<ref>)). On the contrary, in a LED the injection mechanism is incoherent, and coherence cannot emerge spontaneously during the transport. Additionally, we showed that incoherent (dark) states cannot interact with a coherent (bright) state via the tunneling Hamiltonian (Eqs. (<ref>) and (<ref>)). As a result, it seems unfeasible to efficiently transfer excitations to the optically active bright state α, and thus to the polaritonic states, in the absence of an additional mechanism to generate coherence.
If the electrical injection were uniform among the N states |B_i^β⟩, light could be emitted since the system would start with some excitation in |B_1^β⟩, but the expected efficiency would be at most 1/N, even without considering intrasubband decoherence. There are however two points that need to be discussed further. First, light emission from another kind of polariton state under electrical injection is well documented, namely in exciton-polariton devices <cit.>, with additional reports of polariton lasing under electrical injection <cit.>. The key difference is that exciton-polariton states do not result from a collective matter excitation, but rather from an ensemble of single-particle transitions. As a consequence, non-resonant pumping schemes can apply to exciton polaritons, as demonstrated in optical experiments. Second, several reports of electroluminescence from electrically-injected polariton LEDs exist in the literature. Some of them clearly determine that thermally assisted emission processes play a major role <cit.>, but in many others simple thermal models cannot explain the data <cit.>. We can only conjecture possible ways forward to elucidate the electrical injection of polaritonic LEDs. On the one hand, one might wonder if the application of the generalized, local Kirchhoff law <cit.> to ISB polariton LEDs can shine new light on the electrical injection process, and possibly explain all the existing experimental data in the literature. On the other, the problem of the electrical excitation of coherent electronic motion - which is essentially the mechanism at play in electrically pumped polariton emitters - is well known from the field of surface plasmon polaritons (SPPs) <cit.>. The extremely low efficiency of the electron-to-plasmon and electron-to-photon processes is well known, although recent theoretical works, supported by one experimental finding, have demonstrated that the efficiency could be drastically increased by tailoring the electronic landscape to favor inelastic over elastic tunneling, as long as the electronic coherence is preserved in the process <cit.>.

We thank S. De Liberato, J-M Manceau, I. Carusotto, A. Bousseksou for helpful discussions. We acknowledge financial support from the European Union Future and Emerging Technologies (FET) Grant No. 737017 (MIR-BOSE), and from the French National Research Agency: projects SOLID (No. ANR-19-CE24-0003), HISPANID (ANR-17-ASTR-0008-01), and EVEREST (ANR-21-CE24-0021).

§ QUANTUM MASTER EQUATION MODEL FOR A QCD OPERATING IN THE STRONG LIGHT-MATTER COUPLING REGIME: PARAMETRIC STUDY OF THE IMPACT OF THE LIGHT-MATTER COUPLING STRENGTH ON THE TRANSFER FUNCTION

The transfer function between the photocurrent and the total power dissipated inside the QCD (𝒜_QCD = 𝒜_α + 𝒜_β) is defined as 𝒯: 𝒯(ω) = 𝒜_β/(𝒜_α + 𝒜_β) 𝒯 depends on the light frequency ω.

§.§ Parametric study

Fig. <ref> plots the different quantities 𝒜_tot (𝒜_tot = 𝒜_QCD + 𝒜_c), 𝒜_QCD, 𝒥_β and 𝒯 computed from the solution of equation (<ref>). We impose a realistic ratio between the inter- and intra-subband dynamics within the QCD, such that 90% of the total broadening is due to intrasubband scattering: γ_α^intra + γ_β^intra = 0.9 ·γ_αβ where γ_αβ = γ_α^intra + γ_β^intra + γ_α + γ_β represents the total contribution to the broadening from the α and β transitions, including intersubband and intrasubband scatterings. This assumption is equivalent to setting T_1 ≈ 10· T_2, where T_2 (T_1) is the dephasing (upper state) lifetime, respectively.
For a typical mid-IR ISB transition this is verified, as T_1 is of the order of a ps and T_2 of the order of a few hundred fs. The cavity resonance ω_c and the extractor resonance are also deliberately mismatched with respect to the ISB transition: ω_c = 1.05 ω_α,        ω_β = 0.95 ω_α 𝒜_tot, 𝒜_QCD, 𝒥_β and 𝒯 are computed for different light-matter coupling amplitudes Ω, up to 10% of the ISB transition ω_α. When the light-matter coupling ratio Ω/ω_α increases, the system progressively moves from a weak coupling regime to a strong coupling regime: around the spectral resolution criterion 2Ω > γ_αβ, we compute the characteristic splitting of the polaritonic peaks for each spectrum 𝒜_tot (A), 𝒜_QCD (B) and 𝒥_T (C). The model is able to reproduce the smaller splitting of the QCD absorption (B) compared to the splitting of the total absorption (A) for the same coupling situation Ω/ω_α, something previously observed in <cit.>. The important novelties brought by the model are found in the transfer function 𝒯. In weak coupling (small ratios Ω/ω_α), the transfer function is almost scalar: it coincides with the transfer function computed in the framework of a QCD that is not inside a cavity. As the ratio Ω/ω_α increases, the baseline of the transfer function gradually falls, and the amplitude of its peak increases: increasing Ω enables the transfer function to reach a Lorentzian shape. Therefore, in a model where the intra-subband dynamics is explicitly described, the progressive increase of the light-matter coupling allows us to move continuously from a sequential transport in QCDs (flat, quasi-scalar transfer function 𝒯(ω)) to a delocalized description of the transport (sharp, Lorentzian transfer function). Again, when the strong light-matter coupling Ω is sufficiently intense, the coherent nature of the transport is maintained during the extraction process. The previous discussion explains the satisfactory description of the experimental photocurrent data produced by the semi-classical CMT in our previous work <cit.>, despite the impossibility for this previous model to describe the intrasubband dynamics. By default, the CMT predicts a sharp Lorentzian transfer function 𝒯. While this description is not suited to a weak coupling scenario, where the sequential transport should be described with a scalar transfer function, Fig. <ref>-[D] illustrates that it is on the other hand quite adapted to a strong coupling scenario and a delocalized transport scheme. However, being a semi-classical model, the CMT also lacked the ability to distinguish between the inter- and intrasubband dynamics, which prevents the disentanglement of the broadening of the spectra from their amplitude.

§.§ Tunneling current

Another quantity of interest is the tunneling current 𝒥_T between the plasmonic mode α and the electronic extraction mode β. It is defined as: 𝒥_T = iΩ_T (⟨ b_α b_β^†⟩ - ⟨ b_α^† b_β⟩) Using Eq. (<ref>) in the low excitation regime, and developing the expressions of the coherences, 𝒥_T can be approximated as: 𝒥_T = 2 Ω_T^2 γ_αβ/[( ω̃_α - ω_β )^2 + γ_αβ^2] ( ⟨ b_α^† b_α⟩ - ⟨ b_β^† b_β⟩) + ℑ[ 2i Ω_T Ω/[( ω̃_α - ω_β )^2 + γ_αβ^2] ⟨ a_c b_β^†⟩ ] where γ_αβ = γ_α + γ_α^intra + γ_β + γ_β^intra is the sum of the different contributions to the damping of the coherences inside the QCD. The expression of 𝒥_T obtained in Eq. (<ref>) is decomposed into two contributions.
The first term is the standard sequential tunnel current <cit.> (in its first order expression), which is broadly used to describe the electronic transport in QCDs operating in the weak coupling regime <cit.>. It is a semi-classical expression of the current, in the sense that it directly involves the population difference ⟨ b_α^† b_α⟩ - ⟨ b_β^† b_β⟩ between the modes involved in the tunneling process. The second term is a new addition to the tunnel current. It involves the coherences ⟨ a_c b_β^†⟩ between the cavity and the extractor modes, which could be qualified as long-range coherences (the two modes are only coupled through their mutual coupling to the ISB mode). It thus expresses the capacity of the system to transport current between modes that are not directly coupled. We will refer to this current as the delocalized current. The amplitude of the delocalized current of Eq. (<ref>) is controlled by a Lorentzian function and involves the cross product of the couplings Ω_T (tunnel coupling) and Ω (light-matter coupling). In the case of a weakly coupled QCD, it is thus expected that the delocalized current vanishes. Note that it can be numerically checked that the current expression is independent of the interface where it is computed, thus 𝒥_T = 𝒥_β.

§.§ Validity domains of the different models

To explore the validity domains of the different models introduced here and in our previous work <cit.> (sequential model, CMT model, quantum master equation model), we define a criterion based on the spectral shape of the transfer function 𝒯(ω), the sharpness r: r(Ω, γ_α^intra + γ_β^intra) = [Max{𝒯(ω)} - Min{𝒯(ω)}]/Max{𝒯(ω)} r = 1 thus indicates that the transfer function 𝒯(ω) is a sharp Lorentzian function, while r = 0 indicates that 𝒯(ω) is a flat scalar function. Fig. <ref> summarizes the results of the parametric exploration on both the total intrasubband scattering and the light-matter coupling strength. We differentiate three domains D1, D2 and D3: * Domain D1: sequential transport model, flat scalar transfer function (𝒯(ω) ≈ p_E). This domain is correctly described by the standard thermalized subband model for QCDs. It corresponds, for instance, to QCDs operating in the weak-coupling regime. * Domain D3: delocalized transport model, sharp Lorentzian transfer function. This domain is correctly described by the CMT. * Domain D2: intermediate domain, where transport combines contributions from different sources. D1, D2 and D3 are all correctly described by the density matrix formalism of equation (<ref>), thanks to its capability to distinguish the intra- and inter-subband dynamics.

§ EXPERIMENTAL SYSTEM AND QCD BANDSTRUCTURE

In this section, we present additional information about the samples used in this work. Fig. <ref> presents a scanning electron microscope (SEM) image of a patch cavity array, and Fig. <ref> presents the bandstructure of the QCD embedded inside the patches. The bandstructure is computed using our sequential transport software <cit.>.

§ ADDITIONAL PHOTOCURRENT MEASUREMENTS AND COMPUTATIONAL RESULTS

In this section, we present additional photocurrent measurements and computational results to supplement the results of Fig. <ref>.
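As a complement to the parametric study of Appendix <ref>, the sketch below illustrates how the transfer function 𝒯(ω) = 𝒜_β/(𝒜_α + 𝒜_β) and the sharpness r can be evaluated from the steady state of the master equation. It is our illustration only: all rates and detunings are placeholder values chosen to satisfy the 90% intrasubband-broadening constraint, not the fitted parameters of the main text.

import numpy as np
import qutip as qt

# Placeholder parameters (meV); here gamma_ab = 0.2+0.2+1.8+1.8 = 4.0,
# of which 3.6 (90%) is intrasubband, as imposed in the parametric study
w_a = 124.0                        # ISB plasmon energy (plays the role of w~_alpha)
w_c, w_b = 1.05 * w_a, 0.95 * w_a  # deliberately detuned cavity and extractor
Om_T, g_c, G_c = 4.2, 3.4, 0.6
g_a = g_b = 0.2                    # intersubband dissipation rates
g_a_in = g_b_in = 1.8              # intrasubband (pure dephasing) rates

n = 4
a = qt.tensor(qt.destroy(n), qt.qeye(n), qt.qeye(n))
ba = qt.tensor(qt.qeye(n), qt.destroy(n), qt.qeye(n))
bb = qt.tensor(qt.qeye(n), qt.qeye(n), qt.destroy(n))
c_ops = [np.sqrt(2*g_a)*ba, np.sqrt(2*g_b)*bb, np.sqrt(2*(g_c+G_c))*a,
         np.sqrt(2*g_a_in)*ba.dag()*ba, np.sqrt(2*g_b_in)*bb.dag()*bb]

def sharpness(Om):
    # Transfer function T(w) = A_beta/(A_alpha + A_beta) on a frequency grid,
    # then r = (max T - min T)/max T
    T = []
    for w in np.linspace(0.8*w_a, 1.2*w_a, 121):
        H = ((w_c-w)*a.dag()*a + (w_a-w)*ba.dag()*ba + (w_b-w)*bb.dag()*bb
             + Om*(a.dag()*ba + a*ba.dag()) + Om_T*(ba.dag()*bb + ba*bb.dag())
             + np.sqrt(2*G_c)*(a + a.dag()))     # unit drive amplitude
        rho = qt.steadystate(H, c_ops)
        A_a = 2*g_a*qt.expect(ba.dag()*ba, rho)
        A_b = 2*g_b*qt.expect(bb.dag()*bb, rho)
        T.append(A_b/(A_a + A_b))
    T = np.asarray(T)
    return (T.max() - T.min())/T.max()

for Om in (0.01*w_a, 0.05*w_a, 0.10*w_a):
    print(Om/w_a, sharpness(Om))   # r is expected to grow toward 1 with Om/w_a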
[kasprzak_boseeinstein_2006] J. Kasprzak, M. Richard, S. Kundermann, A. Baas, P. Jeambrun, J. M. J. Keeling, F. M. Marchetti, M. H. Szymańska, R. André, J. L. Staehli, V. Savona, P. B. Littlewood, B. Deveaud, and L. S. Dang, "Bose-Einstein condensation of exciton polaritons", Nature 443, 409–414 (2006).
[bajoni_polariton_2008] D. Bajoni, E. Semenova, A. Lemaître, S. Bouchoule, E. Wertz, P. Senellart, and J. Bloch, "Polariton light-emitting diode in a GaAs-based microcavity", Phys. Rev. B 77, 113303 (2008).
[carusotto_quantum_2013] I. Carusotto and C. Ciuti, "Quantum fluids of light", Rev. Mod. Phys. 85, 299–366 (2013).
[orgiu_conductivity_2015] E. Orgiu, J. George, J. A. Hutchison, E. Devaux, J. F. Dayen, B. Doudin, F. Stellacci, C. Genet, J. Schachenmayer, C. Genes, G. Pupillo, P. Samorì, and T. W. Ebbesen, "Conductivity in organic semiconductors hybridized with the vacuum field", Nat. Mater. 14, 1123–1129 (2015).
[appugliese_breakdown_2022] F. Appugliese, J. Enkner, G. L. Paravicini-Bagliani, M. Beck, C. Reichl, W. Wegscheider, G. Scalari, C. Ciuti, and J. Faist, "Breakdown of topological protection by cavity vacuum fields in the integer quantum Hall effect", Science 375, 1030–1034 (2022).
[ciuti_cavity_2021] C. Ciuti, "Cavity-mediated electron hopping in disordered quantum Hall systems", Phys. Rev. B 104, 155307 (2021).
[dini_microcavity_2003] D. Dini, R. Köhler, A. Tredicucci, G. Biasiol, and L. Sorba, "Microcavity polariton splitting of intersubband transitions", Phys. Rev. Lett. 90, 116401 (2003).
[dupont_vacuumfield_2003] E. Dupont, H. C. Liu, A. J. SpringThorpe, W. Lai, and M. Extavour, "Vacuum-field Rabi splitting in quantum-well infrared photodetectors", Phys. Rev. B 68, 245320 (2003).
[colombelli_quantum_2005] R. Colombelli, C. Ciuti, Y. Chassagneux, and C. Sirtori, "Quantum cascade intersubband polariton light emitters", Semicond. Sci. Technol. 20, 985–990 (2005).
[de2008quantum] S. De Liberato and C. Ciuti, "Quantum model of microcavity intersubband electroluminescent devices", Phys. Rev. B 77, 155321 (2008).
[sapienza_electrically_2008] L. Sapienza, A. Vasanelli, R. Colombelli, C. Ciuti, Y. Chassagneux, C. Manquest, U. Gennser, and C. Sirtori, "Electrically injected cavity polaritons", Phys. Rev. Lett. 100, 136806 (2008).
[jouy_intersubband_2010] P. Jouy, A. Vasanelli, Y. Todorov, L. Sapienza, R. Colombelli, U. Gennser, and C. Sirtori, "Intersubband electroluminescent devices operating in the strong-coupling regime", Phys. Rev. B 82, 045322 (2010).
[delteil_optical_2011] A. Delteil, A. Vasanelli, P. Jouy, D. Barate, J. C. Moreno, R. Teissier, A. N. Baranov, and C. Sirtori, "Optical phonon scattering of cavity polaritons in an electroluminescent device", Phys. Rev. B 83, 081404 (2011).
[chastanet_surface_2017] D. Chastanet, J.-M. Manceau, T. Laurent, A. Bousseksou, G. Beaudoin, I. Sagnes, and R. Colombelli, "Surface emitting thermally assisted polaritonic light-emitting device", Appl. Phys. Lett. 110, 081108 (2017).
[geiser_room_2012] M. Geiser, G. Scalari, F. Castellano, M. Beck, and J. Faist, "Room temperature terahertz polariton emitter", Appl. Phys. Lett. 101, 141118 (2012).
[vigneron_quantum_2019] P.-B. Vigneron, S. Pirotta, I. Carusotto, N.-L. Tran, G. Biasiol, J.-M. Manceau, A. Bousseksou, and R. Colombelli, "Quantum well infrared photo-detectors operating in the strong light-matter coupling regime", Appl. Phys. Lett. 114, 131104 (2019).
[lagree_direct_2021] M. Lagrée, M. Jeannin, G. Quinchard, O. Ouznali, A. Evirgen, V. Trinité, R. Colombelli, and A. Delga, "Direct polariton-to-electron tunneling in quantum cascade detectors operating in the strong light-matter coupling regime", Phys. Rev. Applied 17, 044021 (2022).
[ando_electronic_1982] T. Ando, A. Fowler, and F. Stern, "Electronic properties of two-dimensional systems", Rev. Mod. Phys. 54, 437 (1982).
[helm_intersubband_1999] M. Helm, "Intersubband transitions in quantum wells: Physics and device applications I", Academic Press, p. 1 (1999).
[delteil_charge_2012] A. Delteil, A. Vasanelli, Y. Todorov, C. Feuillet Palma, M. Renaudat St-Jean, G. Beaudoin, I. Sagnes, and C. Sirtori, "Charge-induced coherence between intersubband plasmons in a quantum structure", Phys. Rev. Lett. 109, 246808 (2012).
[pisani_electronic_2023] F. Pisani, D. Gacemi, A. Vasanelli, L. Li, A. G. Davies, E. Linfield, C. Sirtori, and Y. Todorov, "Electronic transport driven by collective light-matter coupled states in a quantum device", Nat. Commun. 14, 3914 (2023).
[trinite2011modelling] V. Trinité, E. Ouerghemmi, V. Guériaux, M. Carras, A. Nedelcu, E. Costard, and J. Nagle, "Modelling of electronic transport in quantum well infrared photodetectors", Infrared Phys. Technol. 54, 204 (2011).
[koeniguer2006electronic] C. Koeniguer, G. Dubois, A. Gomez, and V. Berger, "Electronic transport in quantum cascade structures at equilibrium", Phys. Rev. B 74, 235325 (2006).
[buffaz2010role] A. Buffaz, A. Gomez, M. Carras, L. Doyennette, and V. Berger, "Role of subband occupancy on electronic transport in quantum cascade detectors", Phys. Rev. B 81, 075304 (2010).
[todorov2012intersubband] Y. Todorov and C. Sirtori, "Intersubband polaritons in the electrical dipole gauge", Phys. Rev. B 85, 045304 (2012).
[de2009quantum] S. De Liberato and C. Ciuti, "Quantum theory of electron tunneling into intersubband cavity polariton states", Phys. Rev. B 79, 075317 (2009).
[breuer2002theory] H.-P. Breuer and F. Petruccione, "The Theory of Open Quantum Systems", Oxford University Press (2002).
[suh2004temporal] W. Suh, Z. Wang, and S. Fan, "Temporal coupled-mode theory and the presence of non-orthogonal modes in lossless multimode cavities", IEEE J. Quantum Electron. 40, 1511 (2004).
[schlosshauer2007decoherence] M. A. Schlosshauer, "Decoherence: and the Quantum-to-Classical Transition", Springer Science & Business Media (2007).
[quinchard2022high] G. Quinchard, C. Mismer, M. Hakl, J. Pereira, Q. Lin, S. Lepillet, V. Trinité, A. Evirgen, E. Peytavit, J. Reverchon, et al., "High speed, antenna-enhanced 10.3 μm quantum cascade detector", Appl. Phys. Lett. 120, 091108 (2022).
[delga2012master] A. Delga, M. Carras, V. Trinité, V. Guériaux, L. Doyennette, A. Nedelcu, H. Schneider, and V. Berger, "Master equation approach of classical noise in intersubband detectors", Phys. Rev. B 85, 245414 (2012).
[hakl_ultrafast_2021] M. Hakl, Q. Lin, S. Lepillet, M. Billet, J.-F. Lampin, S. Pirotta, R. Colombelli, W. Wan, J. C. Cao, H. Li, E. Peytavit, and S. Barbieri, "Ultrafast quantum-well photodetectors operating at 10 μm with a flat frequency response up to 70 GHz at room temperature", ACS Photonics 8, 464 (2021).
[todorov2010optical] Y. Todorov, L. Tosetto, J. Teissier, A. M. Andrews, P. Klang, R. Colombelli, I. Sagnes, G. Strasser, and C. Sirtori, "Optical properties of metal-dielectric-metal microcavities in the THz frequency range", Opt. Express 18, 13886 (2010).
[balanis2016antenna] C. A. Balanis, "Antenna Theory: Analysis and Design", John Wiley & Sons (2016).
[palaferri2018antenna] D. Palaferri, "Antenna resonators for quantum infrared detectors and fast heterodyne receivers", Thesis, Sorbonne Paris Cité (2018).
[johansson2012qutip] J. R. Johansson, P. D. Nation, and F. Nori, "QuTiP: An open-source Python framework for the dynamics of open quantum systems", Comput. Phys. Commun. 183, 1760 (2012).
[sapienza_photovoltaic_2007] L. Sapienza, A. Vasanelli, C. Ciuti, C. Manquest, C. Sirtori, R. Colombelli, and U. Gennser, "Photovoltaic probe of cavity polaritons in a quantum cascade structure", Appl. Phys. Lett. 90, 201101 (2007).
[khalifa_electroluminescence_2008] A. A. Khalifa, A. P. D. Love, D. N. Krizhanovskii, M. S. Skolnick, and J. S. Roberts, "Electroluminescence emission from polariton states in GaAs-based semiconductor microcavities", Appl. Phys. Lett. 92, 061107 (2008).
[tsintzos_gaas_2008] S. I. Tsintzos, N. T. Pelekanos, G. Konstantinidis, Z. Hatzopoulos, and P. G. Savvidis, "A GaAs polariton light-emitting diode operating near room temperature", Nature 453, 372–375 (2008).
[bajoni_polariton_2012] D. Bajoni, "Polariton lasers. Hybrid light-matter lasers without inversion", J. Phys. D: Appl. Phys. 45, 313001 (2012).
[schneider_electrically_2013] C. Schneider, A. Rahimi-Iman, N. Y. Kim, J. Fischer, I. G. Savenko, M. Amthor, M. Lermer, A. Wolf, L. Worschech, V. D. Kulakovskii, I. A. Shelykh, M. Kamp, S. Reitzenstein, A. Forchel, Y. Yamamoto, and S. Höfling, "An electrically pumped polariton laser", Nature 497, 348–352 (2013).
[askenazi_midinfrared_2017] B. Askenazi, A. Vasanelli, Y. Todorov, E. Sakat, J.-J. Greffet, G. Beaudoin, I. Sagnes, and C. Sirtori, "Midinfrared ultrastrong light-matter coupling for THz thermal emission", ACS Photonics 4, 2550 (2017).
[greffet_light_2018] J.-J. Greffet, P. Bouchon, G. Brucoli, and F. Marquier, "Light emission by nonequilibrium bodies: local Kirchhoff law", Phys. Rev. X 8, 021008 (2018).
[lambe_light_1976] J. Lambe and S. L. McCarthy, "Light emission from inelastic electron tunneling", Phys. Rev. Lett. 37, 923 (1976).
[davis_theory_1977] L. C. Davis, "Theory of surface-plasmon excitation in metal-insulator-metal tunnel junctions", Phys. Rev. B 16, 2482 (1977).
[bharadwaj_electrical_2011] P. Bharadwaj, A. Bouhelier, and L. Novotny, "Electrical excitation of surface plasmons", Phys. Rev. Lett. 106, 226802 (2011).
[parzefall_antenna_2015] M. Parzefall, P. Bharadwaj, A. Jain, T. Taniguchi, K. Watanabe, and L. Novotny, "Antenna-coupled photon emission from hexagonal boron nitride tunnel junctions", Nat. Nanotechnol. 10, 1058–1063 (2015).
[kern_electrically_2015] J. Kern, R. Kullock, J. Prangsma, M. Emmerling, M. Kamp, and B. Hecht, "Electrically driven optical antennas", Nat. Photonics 9, 582–586 (2015).
[du_highly_2017] W. Du, T. Wang, H.-S. Chu, and C. A. Nijhuis, "Highly efficient on-chip direct electronic-plasmonic transducers", Nat. Photonics 11, 623–627 (2017).
[qian_efficient_2018] H. Qian, S.-W. Hsu, K. Gurunatha, C. T. Riley, J. Zhao, D. Lu, A. R. Tao, and Z. Liu, "Efficient light generation from enhanced inelastic electron tunnelling", Nat. Photonics 12, 485–488 (2018).
[uskov_excitation_2016] A. V. Uskov, J. B. Khurgin, I. E. Protsenko, I. V. Smetanin, and A. Bouhelier, "Excitation of plasmonic nanoantennas by nonresonant and resonant electron tunnelling", Nanoscale 8, 14573–14579 (2016).
[qian_highly_2021] H. Qian, S. Li, S.-W. Hsu, C.-F. Chen, F. Tian, A. R. Tao, and Z. [entry truncated in the source]
Liu, title title Highly-efficient electrically-driven localized surface plasmon source enabled by resonant inelastic electron tunneling, https://doi.org/10.1038/s41467-021-23512-2 journal journal Nature Communications volume 12, pages 3111 (year 2021)NoStop [Kazarinov and Suris(1972)]kazarinov1972electric author author R. Kazarinov and author R. Suris, title title Electric and electromagnetic properties of semiconductors with a superlattice, @noop journal journal Sov. Phys. Semicond volume 6, pages 120 (year 1972)NoStop [Willenberg et al.(2003)Willenberg, Döhler, and Faist]willenberg2003intersubband author author H. Willenberg, author G. Döhler, and author J. Faist, title title Intersubband gain in a bloch oscillator and quantum cascade laser, @noop journal journal Physical Review B volume 67, pages 085315 (year 2003)NoStop [Lagrée(2022)]lagree2022transport author author M. Lagrée, title title Transport électronique en régime de couplage fort lumière-matière pour les dispositifs quantiques moyen-infrarouge, @noop journal journal Université Paris-Saclay (year 2022)NoStop
http://arxiv.org/abs/2307.04118v1
20230709081438
Twotier -- A Layered Analysis of Backbone Members in a Moderate Sized Community Sports Organization
[ "Qingran Wang", "Jia Yu", "Mengjun Ding", "Weiqiang Sun" ]
cs.SI
[ "cs.SI" ]
Twotier - A Layered Analysis of Backbone Members in a Moderate Sized Community Sports Organization Qingran Wang and Jia Yu contributed equally to this paper. We would like to thank all members of the SJTU Health community for their selfless commitment to building a strong community. Qingran Wang, Jia Yu, Mengjun Ding, Weiqiang Sun, Senior Member, IEEE August 12, 2023 ============================================================================================== Backbone members are recognized as essential parts of an organization, yet their role and mechanisms of functioning in networks are not fully understood. In this paper, we propose a new framework called Twotier to analyze the evolution of community sports organizations (CSOs) and the role of backbone members. Tier-one establishes a dynamic user interaction network based on grouping relationships, and weighted k-shell decomposition is used to select backbone members. We perform community detection and capture the evolution of two separate sub-networks: one formed by backbone members and the other formed by all other members. In Tier-two, the sub-networks are abstracted, revealing a core-periphery structure in the organization in which backbone members serve as bridges connecting all parts of the network. Our findings suggest that relying on backbone members can keep newcomers actively involved in rewarding activities, while non-rewarding activities solidify relations between backbone members. community sports organizations (CSOs), backbone, two-tier analysis, core-periphery structure § INTRODUCTION Community sports organizations (CSOs) are non-profit, voluntary organizations whose primary responsibility is to provide sports services to their members, often with a low threshold to entry <cit.>. Despite the huge physical and psychological benefits CSOs can bring to their community members, the development of CSOs is often constrained by their voluntary nature and the limited resources available to them <cit.>. It is thus important to understand the development principles of CSOs so that the limited resources may be put to the most effective use. It has long been intuitively felt that there is usually a group of highly active and influential people who actuate and drive the development of a network. In product marketing based on human interaction networks, marketers take the nodes occupying structural-hole positions as the influential seed nodes, in order to achieve the greatest influence in the network <cit.>. In online social networks such as Weibo and Twitter, users with a large number of followers are considered influential users, and the topics they publish tend to generate large network effects <cit.>. Similarly, in CSOs there are also some influential users who have “the right and the ability to influence in an indirect or intangible way" <cit.>, and their presence and activities have a significantly stronger effect on the operation and development of the organization. Backbone members are important nodes in a network that are well connected to other members and play a crucial role in facilitating communication and information flow. They are defined as members who are relatively more important and active, and who have more friends, than other members.
The identification and analysis of backbone members can provide insights into the structure and dynamics of the network, which is valuable for understanding its behavior and performance. The problem of vital node identification has attracted increasing attention in different fields <cit.>. Typically, researchers build user social networks based on participant interaction data collected over a period of time and then work to identify key nodes in the network. In this scenario, various centrality measures <cit.>, such as degree centrality, closeness centrality, and betweenness centrality, can be used to indicate the importance of nodes. With the rich set of metrics introduced in <cit.>, we can also identify important nodes in CSOs. However, little attention has been paid to the role that backbone members play or to the mechanisms by which they function in the network. At the same time, studies often focus on the group of backbone members themselves, and interactions between backbone and non-backbone members are largely neglected. In addition to the internal forces generated by the backbone members, external interventions such as rewards and penalties may also be crucial for network development <cit.>. In community health campaigns, targeting interventions at leaders has been shown to be more effective than applying them to random individuals <cit.>. Understanding the mechanisms by which internal forces work can help us better implement external interventions <cit.>, and if these two forces work together, they can bring even greater developmental benefits. In this research, with longitudinal data recorded, we focus on the development of CSOs, with a particular emphasis on backbone members, defined as the top X% of influential members based on coreness centrality. We introduce Twotier, a new framework for analyzing dynamic networks, which allows us to study both the evolutionary characteristics of network components and the connections between them. Our main finding is that backbone members play a critical role as the trunk of the network, while other members act as leaves and are regularly renewed. Rewarding activities and backbone members are essential for organizational expansion, while non-rewarding activities solidify the backbone group. The main contributions of this work are threefold. First, we introduce Twotier, a novel mathematical framework for analyzing dynamic networks. Second, we demonstrate its applicability in a moderate-sized CSO. Finally, using our framework and numerical results, we provide practitioners with tailored approaches to improve outcomes for different groups within their organization. The remainder of this paper is organized as follows. In Section II, we introduce Twotier, the main method for network analysis in this work, and explain its procedure. In Section III, we present an overview of the dataset and the experimental results obtained in each tier, including the role of backbone members in network development and their performance under external factors. Section IV provides an overview of related work. Finally, in Section V, we conclude the paper. § TWOTIER: A LAYERED ANALYSIS ON CSOS This section introduces the Twotier framework, which analyzes the role of backbone members in a moderate-sized community sports organization. In Tier-one, we build a dynamic network based on team-wise links between members and classify them into two groups, backbone members and general members, using the dynamic W-KS algorithm to calculate their influence.
Community detection is then performed to move from the level of individuals to that of communities. In Tier-two, we analyze the evolutionary regularities of the different types of communities by abstracting the dynamic network into the network of communities extracted in Tier-one. The framework is illustrated in Fig. <ref>. To explore the influence of different types of activities on organizational development, we separate the network into two sub-networks: one formed under rewarding activities and the other under non-rewarding activities. Table <ref> summarizes the symbols used in this paper. §.§ Tier-one Analysis §.§.§ Evaluating Social Influence by Dynamic W-KS In a CSO with teaming-based relationships, user interactions change over time, resulting in a dynamic network. It is therefore not advisable to apply vital node identification approaches designed for static networks. For example, it may be challenging to determine the importance of a node that is active during some time periods but inactive in others. To address this issue, we extend the weighted k-shell decomposition method so that it applies to dynamic networks, treated as a series of static networks. Considering the duration of activities and the fact that the study covers a six-year time span, we take a three-month time window to build a dynamic network containing 24 consecutive equal-length time frames, in each of which the network is considered non-evolving. The validity of this partitioning approach has been demonstrated in our previous work <cit.>. In this case, the network is expressed as G = {G_t = (V_t, E_t), for all t in [0, T]}, where V_t is the node set and E_t is the edge set. The weight of an edge is the number of links that connect the same node pair within a time frame. Generally speaking, in a static network the hubs are the key players when the network has a broad degree distribution. In addition, the topology of the network is also an important measure <cit.>. With the weighted k-shell decomposition method (W-KS) proposed in <cit.>, we can rank the nodes according to both the degree and the position of the node in each static undirected weighted network. Define the weighted degree D_t^i of node i in time frame t as D_t^i = [√(d_t^i · ∑_j∈n_t^i w_t^ij)], the combination of the degree d_t^i and the sum of all its link weights ∑_j w_t^ij, rounded to the nearest integer. In W-KS, all nodes with D not greater than 1 are removed first. Then the D of the remaining nodes is recalculated in the trimmed network, and the pruning process is repeated until no nodes with D less than or equal to 1 are left in the network. The pruned nodes are grouped in the first shell with k=1. Then the next k-shell with k=2 and further higher k-shells are separated from the remaining network iteratively until no nodes remain. Finally, each node has a value k, with larger k indicating greater node influence. The influence of node i in the t-th time frame, I_t^i, is given by I_t^i = k_t^i if node i is present in the t-th network, and I_t^i = 0 otherwise. The influence of node i in the dynamic network is defined as the sum of its influence over all time frames, I^i = ∑_t=0^T I_t^i, where I_t^i is derived from the static-network method, in our case weighted k-shell decomposition. In Fig. <ref>, we show a particular example in which (a) W-KS is applied to a static (but accumulated) network, and (b) the extended method is applied to a dynamic network.
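The pruning procedure above maps directly onto code. Below is a minimal Python sketch of the dynamic W-KS computation, assuming each time frame is an undirected weighted networkx graph; the function names and data layout are illustrative, not taken from the paper's implementation.

```python
import math
import networkx as nx

def weighted_kshell(G):
    """W-KS on one static frame: returns {node: k-shell index}.

    The weighted degree D_i = round(sqrt(d_i * sum_j w_ij)) is
    recomputed after every pruning pass, as described in the text.
    """
    G = G.copy()
    shell = {}
    k = 1
    while G.number_of_nodes() > 0:
        pruned = True
        while pruned:
            pruned = False
            for i in list(G.nodes):
                d = G.degree(i)                   # plain degree d_t^i
                s = G.degree(i, weight="weight")  # sum of link weights
                if round(math.sqrt(d * s)) <= k:
                    shell[i] = k
                    G.remove_node(i)
                    pruned = True
        k += 1
    return shell

def dynamic_influence(frames):
    """Dynamic W-KS: I^i is the sum of k_t^i over all time frames,
    with I_t^i = 0 in frames where node i is absent."""
    influence = {}
    for G_t in frames:
        for i, k in weighted_kshell(G_t).items():
            influence[i] = influence.get(i, 0) + k
    return influence
```

Backbone members would then be the top X% of nodes by this summed influence, with ties within a shell broken by degree as described in the experiments.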
From this example, it can be seen that the dynamic W-KS better captures the temporal nature of the CSO and can thus characterize node influence more accurately. §.§.§ Community structure detection Backbone members (BMs) are identified as the top X% of influential members under a given influence metric. We use the dynamic W-KS algorithm to calculate node influence, specifically coreness centrality, to select BMs. All other members are classified as general members (GMs). In turn, the network can be divided into three components: a) the sub-network formed by BMs and the interactions between them (BSN); b) the sub-network formed by GMs and the interactions between them (GSN); and c) the links that connect the BSN and the GSN. We use the quality metric modularity Q defined in <cit.> to explore the community structure in the two sub-networks of the CSO respectively: Q_t = (1/2m) ∑_i,j [w_t^ij - e_t^i e_t^j/(2m)] δ(i, j), where e_t^i = ∑_j w_t^ij is the sum of the weights of the edges attached to node i, and m = (1/2) ∑_ij w_t^ij is the sum of the weights of all edges in G_t. The δ-function is 1 if nodes i and j are in the same community, and δ = 0 otherwise. A Q value higher than 0.3 suggests that distinct community structures do exist in the network <cit.>. §.§ Tier-two Analysis §.§.§ Community evolution Once the presence of community structure is confirmed, we can proceed with the analysis of community evolution. To capture the intermittent participation that is often seen in CSOs, we extend the community evolution events used in our previous study <cit.> by adding Suspend and Re-emerge. A community is said to be suspended if it appears in a time frame, disappears for some time, and then re-emerges. A community is said to be re-emerging if it did not exist in the previous time frame but has appeared at least once in past time frames. According to their effects on the community structure, the evolution events other than Continue may be roughly classified into two categories: a) events that bring significant structural changes to the network (V - Violent), mainly through adding or removing nodes: Form, Dissolve, Suspend and Re-emerge; and b) events that cause marginal structural changes to the network (S - Stable), mainly through adding or removing links between existing nodes: Grow, Merge, Shrink and Split. Communities that undergo S-type evolution are relatively closed groups with close internal interactions but limited communication with other external communities. V-type evolution, by contrast, brings about more diverse changes in the network structure, which promotes communication among different groups and plays an important role in the stability and development of the network. Detailed information about the community evolution events is presented in Table <ref>. §.§.§ Community abstraction To explore the interactions of different communities on a horizontal level, we abstract the network of communities by hiding the details of the original connections between individuals within communities. The network described by Eq. (<ref>) is similar to that of Eq. (<ref>); however, the nodes are now the communities detected in Tier-one and the edges are connections between communities: G^com = {G_t^com = (V_t^com, E_t^com), for all t in [0, T]}. There are two kinds of nodes in the network: a) communities formed by backbone members (BCs); and b) communities formed by general members (GCs). The connections in the network are classified into three categories: a) edges between BCs (BBEs); b) edges between GCs (GGEs); and c) edges between BCs and GCs (BGEs).
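Returning to the Tier-one detection step above, the sketch below runs a per-frame community detection and checks the Q_t > 0.3 criterion. The paper does not name its detection algorithm, so the use of networkx's built-in Louvain implementation here is an assumption.

```python
import networkx as nx
from networkx.algorithms import community

def frame_communities(G_t):
    """Detect communities in one weighted time-frame graph and
    report whether the modularity criterion Q_t > 0.3 holds."""
    comms = community.louvain_communities(G_t, weight="weight", seed=0)
    Q_t = community.modularity(G_t, comms, weight="weight")
    return comms, Q_t, Q_t > 0.3
```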
We characterize the structure of the network using network density and betweenness centrality, where the network density is defined as Density(G^com) = 2L^com / (N^com (N^com - 1)), where N^com denotes the number of communities in the network and L^com denotes the number of edges between communities in the network. The betweenness centrality is formulated as BC_z^com = ∑_m,n ∈ V^com σ(m, n|z) / σ(m, n), where m and n denote any pair of communities. A network with a core-periphery structure has high backbone-node centrality and a high BSN density, whereas the connections between general nodes alone can hardly form a connected network. § EXPERIMENTS AND RESULTS In this section, we apply the proposed Twotier framework to a sports organization and obtain information about the evolution of the organization and its structural characteristics. The experimental steps are illustrated in Fig. <ref>. The data that we use are collected from a non-profit sports organization, through an online platform serving a community with more than 10,000 members. Users can organize communal activities, most of which require users to participate in teams with fewer than 10 members. Necessary tools are provided for team members to communicate with each other. The platform went online in May 2015, and by June 2021, 790 activities had been held, with 4879 different individual participants in 6426 teams. The activities can be rewarding (Type-A), with a total of 119 activities, or without any reward (Type-B), with a total of 671. According to the teaming relations over the entire time span, each distinct user is represented by a node, and team-wise relations are represented by undirected links with a weight of 1. A link in fact represents the tuple ⟨node pair, id of the team, id of the activity, time of the activity, type of the activity⟩. Over the entire time span, there are 73813 such links. §.§ Influence of Nodes by dynamic weighted k-shell decomposition (dynamic W-KS) While traditional weighted k-shell decomposition (W-KS) divides all nodes into 87 shells, dynamic W-KS divides all nodes into 457 shells, allowing for a more detailed division with more prominent gaps between nodes. Furthermore, nodes within the same shell are sorted by their degree. To verify that the extended node influence determination approach is superior, we compare the coverage when the top X% of members are selected as backbone members. The coverage reflects the range of influence in the network. It is defined as the proportion of the number of selected kernel members, together with their neighbors, to the size of the given network. We compare two network scenarios: one ignores temporal properties and aggregates all members who have ever appeared in the organization into an aggregation network; the other creates a time-frame network at three-month intervals. As shown in Fig. <ref>, as the selected proportion X increases from 1 to 50, the coverage under both methods increases, but dynamic W-KS generally gives higher results than W-KS. Considering degree, position in the network topology, and activeness, the dynamic W-KS gives a more comprehensive picture of the importance of a node and is therefore chosen for our further analysis. In the following parts, we choose the cases X = 5, X = 10, and X = 20 to carry out the experiments.
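The coverage measure just described reduces to a few lines; the sketch below assumes a networkx graph and a list of selected backbone members, with an interface of our own choosing for illustration.

```python
import networkx as nx

def coverage(G, backbone):
    """Coverage: fraction of the network consisting of the selected
    backbone members together with their direct neighbors."""
    covered = set(backbone)
    for b in backbone:
        covered.update(G.neighbors(b))
    return len(covered) / G.number_of_nodes()
```

Ranking nodes by dynamic influence, taking the top X%, and passing them to this function reproduces the comparison of the two scenarios for each value of X.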
After classifying the network members, we find that the BMs have a greater degree and occupy more important positions in the network topology than the GMs, as the results in Table <ref> show. In addition, they are involved in a larger number of activities and are active in the network for a longer period of time than the GMs. On average, the participation of BMs in non-rewarding activities is higher than in rewarding activities, which is the opposite of the GMs. The network structures formed by these two groups are also different. §.§ The Evolution of communities In each time frame, the BSN has a giant component, occasionally accompanied by a few small groups detached from it, while the GSN consists of many small groups that are separate from each other. The average Q of the two sub-networks across all time frames is higher than 0.3, suggesting that they both have a distinct community structure. Further, we present the evolutionary relationships of the detected communities after dividing the 9 evolution events into three categories. Fig. <ref> illustrates the percentage of evolution events for the BSN and the GSN, with rewarding activities (type-A), with non-rewarding activities (type-B), and with both, respectively. It can be observed that in the GSN under type-B activities, Form and Dissolve are always the dominant ones among the 9 evolution events. The GSN under type-A activities is very different, however, with diverse and abundant events. With no stimulus, the GMs are less willing to participate, resulting in a large number of dropouts after attending the activities. This suggests that type-B activities are suitable to be held regularly to consolidate the connections between BMs rather than to absorb new members. It can also be seen that the BSN has more S-type community evolution events (light green area), while there is a higher number of V-type community evolution events (light red area) in the GSN. The mobility of members across communities within the backbone group is stronger than among general members. This indicates that the backbone groups play the role of the trunk, while other members renew at a faster rate and act like leaves. §.§ The Structure of Abstracted Network We present the graph of the abstracted network in some time frames for X = 10 in Fig. <ref>, where the red circles represent BCs and the green circles GCs. The red, green and brown edges are BBEs, GGEs and BGEs, respectively. The size of the circles reflects the number of members in the community, and the thickness of the edges indicates the intensity of interaction between members of the communities. It is interesting that the network, which is star-shaped, contains a dense cohesive core and a sparsely connected periphery. This observation is further verified in Fig. <ref>, <ref> and <ref>. It can be seen in Fig. <ref>, for X = 5, 10, and 20, that the number of BCs is stable, while the number of GCs varies more and follows a similar trend as the network size changes. The GCs outnumber the BCs in most time frames, but the former are far more sparsely connected than the latter. Fig. <ref> shows, for each X, the proportion of the weights of the three types of edges to the total weight of all edges. The edges are mostly between BCs or between BCs and GCs. Fig. <ref> presents the network density of the two sub-networks, containing either BCs and BBEs or GCs and GGEs. We can see that the BC-formed sub-network has a high density, while the density of the sub-network formed by GCs is always low. In Fig.
<ref>, the betweenness centralities of backbone members and general members are displayed respectively. By comparison, it can be found that backbone members have higher betweenness centrality, playing an important role as mediators, while GCs are isolated from each other and must be bridged by BCs. This also suggests that promoting communication between groups at the periphery may be effective in increasing network connectivity. A further look into the structure of the networks under type-A and type-B activities shows that the two networks also exhibit a clear core-periphery structure. With type-A activities, the number of connections between BCs and GCs (BGEs) significantly out-numbers those among BCs (BBEs) or GCs (GGEs) (Fig. <ref>), while with type-B activities, the number of BBEs out-numbers that of BGEs (Fig. <ref>). This indicates that while BMs tend to be active in both types of activities, they may behave very differently. In type-A, i.e., rewarding activities, BMs are more likely to connect with GMs, but in type-B activities, BMs tend to connect with other BMs. This, on the one hand, validates the trunk-like function that the BMs play. On the other hand, it suggests that even though rewarding activities are generally more popular than non-rewarding ones and participation is much higher, the backbone members are still important convening points for the activities, and are thus very important to the development of the CSO. It can be seen in Fig. <ref> that in both types of activities, connections between GCs are very rare, again validating the leaf-like behavior of GCs and GMs. It can also be observed from Fig. <ref> that non-rewarding activities provide an important vehicle for backbone members to develop and consolidate very close relations, and should be regarded as an important tool in the entire CSO development toolset. § RELATED WORK Research in the domain of “who are the most important nodes in the network" has considered various criteria to define influential participants. One stream of relevant research is node centrality <cit.>, such as degree centrality and closeness centrality, which attempts to quantify the structural importance of actors in a network. Considering two metrics, the degree of a node and its position in the network topology, Kitsak et al. <cit.> designed a k-shell decomposition method to divide the network nodes into different layers, i.e., to determine the importance of the nodes hierarchically at the group level. Subsequently, the authors of <cit.> introduced a generalized method for calculating the k-shell structure of weighted networks. Our work goes a step further and improves the existing weighted k-shell method by taking the temporal nature of the network into account. Taking influential nodes as research objects, researchers have drawn many interesting conclusions. Kerlund <cit.> shows that influential users have a narrow focus in the content they post and in how they profile themselves, and tend to produce more original content than other users. Zhao et al. <cit.> find that for entertainment news, influential spreaders may appear at later stages of spreading. Borgatti et al. present an intuitive description of the core-periphery structure in <cit.>, namely that the network contains a dense, cohesive core and a sparse, unconnected periphery, and then quantify the core-periphery structure using the quadratic assignment procedure.
The authors of <cit.> performed a detailed analysis of the key topological properties of the friendship graph for three user categories, Leaders, Followers and Neutrals, and yielded interesting insights. Wang et al. <cit.> divided the nodes in the network into two categories, central nodes and ordinary nodes; they further found that the information dissemination modes can be summarized into three specific patterns. By decomposing a complex network structure into different parts and analyzing them systematically, it is possible to grasp the development pattern of the network in more detail. Social Network Analysis (SNA) <cit.>, which focuses on understanding the nature and consequences of relations between individuals or groups, has been widely used to study social networking platforms. In <cit.>, Newman proposed that social networks can be naturally divided into communities or modules. Blondel et al. <cit.> proposed a method known as the Louvain algorithm for community detection, whose core idea is to optimize the quality function known as modularity. Frequent changes in the activity and communication patterns of network members result in the associated social and communication network being subject to constant evolution. For a deeper understanding of network development, Palla et al. quantified network evolution from the perspective of social group evolution in <cit.>. In <cit.>, the evolution events are classified into seven specific categories by Group Evolution Discovery (GED), based on the changes of members in the communities. Based on this, <cit.> introduced an improved GED algorithm, describing network evolution in the context of CSOs. These works, from the perspective of community evolution in networks, have inspired the analysis of dynamic network structures on a vertical timeline. § CONCLUSIONS We propose a new framework called Twotier for analyzing network structure and apply it to probe a non-profit sports organization. First, we establish a time-evolving network based on the team-wise relationships of participants. Taking the degree and topological position of nodes as well as the temporal nature of the CSO into account, we extend the weighted k-shell decomposition to determine the influence of the nodes and then classify the participants into two categories: backbone members and general members. Further, the network of communities is abstracted by hiding the connections within communities. We not only analyze the development of the organization from the perspective of community evolution on the vertical timeline, but also pay attention to the connections between different groups in the horizontal time frames. In addition, we discuss the effect of external stimuli on both groups. Our findings are summarized as follows. The backbone members of the CSO are not only characterized by their high degree and closeness centrality, but also by the fact that their average numbers of attended activities and active time frames are much higher than those of the general members. On average, the participation of backbone members in non-rewarding activities is higher than in rewarding activities, the opposite of the general members. Through the Tier-one analysis, we reveal that the two sub-networks, containing either backbone or general members, both have a clear community structure. The groups of backbone members play the role of the trunk, while the general members renew frequently and act like leaves.
Through the Tier-two analysis, we identify a core-periphery structure in the organization. Backbone members serve as a critical link between different groups within the organization, and organizational leaders should pay special attention to their role in managing the organization effectively. However, we also note the potential negative impact of the scarcity of ties between communities of general members on membership stability and organizational development. Therefore, it is necessary to implement measures to strengthen interaction between these groups and break down isolation. More importantly, we observe that external stimuli affect the organization in different ways. Even though rewarding activities are generally more popular than non-rewarding ones and participation is much higher, the backbone members are still important convening points for the activities, and are thus very important to the development of the CSO. The non-rewarding activities provide an important vehicle for backbone members to develop and consolidate very close relations and should be regarded as an important tool in the entire CSO development toolset. These insights can help practitioners develop tailored approaches for different groups within their organization to ensure better outcomes.
http://arxiv.org/abs/2307.04801v1
20230710235330
Metastability exchange optical pumping of $^3$He at low pressure and high magnetic field
[ "X. Li", "J. D. Maxwell", "D. Nguyen", "J. Brock", "C. D. Keith", "R. G. Milner", "X. Wei" ]
physics.ins-det
[ "physics.ins-det", "nucl-ex" ]
MIT]X. [email protected] JLab]J. D. Maxwell JLab]D. Nguyen JLab]J. Brock JLab]C. D. Keith MIT]R. G. Milner JLab]X. Wei [MIT]organization=Laboratory for Nuclear Science, Massachusetts Institute of Technology, city=Cambridge, state=MA 02139, country=USA [JLab]organization=Thomas Jefferson National Accelerator Facility, city=Newport News, state=VA 23606, country=USA [cor1]Corresponding author. Systematic studies on metastability exchange optical pumping of ^3He nuclei have been performed at Jefferson Lab using a 1-torr sealed cell at magnetic fields from 2 to 4 T. The effects of the discharge intensity, pump laser power, and pumping transition schemes on achievable nuclear polarization and pumping rate have been investigated. A maximum steady-state nuclear polarization of about 75% has been obtained. This work provides a baseline for the development of the novel polarized ^3He target for CLAS12 at Jefferson Lab. Polarized helium-3 Metastability exchange optical pumping High magnetic field Metastability exchange optical pumping of ^3He at low pressure and high magnetic field [ October 2023 ====================================================================================== § INTRODUCTION Nuclear spin-polarized ^3He is a powerful effective polarized neutron target which plays a significant role in the studies on neutron spin structure. Spin-polarized ^3He gas targets have been successfully implemented in scattering experiments at MIT-Bates <cit.>, SLAC <cit.>, DESY <cit.>, Mainz <cit.>, HIGS <cit.> and JLab <cit.> using either the metastability exchange optical pumping (MEOP) <cit.> or the spin exchange optical pumping (SEOP) <cit.> technique. The MEOP approach utilizes 1083-nm circularly polarized laser light to produce nuclear polarization in metastable-state ^3He atoms via optical pumping and hyperfine coupling. The polarization is then transferred to the ground-state ^3He nuclei through metastability-exchange collisions. As MEOP is performed at mbar-scale pressure and room temperature, usually the low-temperature <cit.> or compression <cit.> technique is used to increase the target thickness. The SEOP method involves a mixture of alkali-metal atoms with ^3He gas where the alkali atoms are optically pumped with 795-nm laser light and the electronic polarization is passed onto ^3He nuclei via spin-exchange collisions. SEOP operates at higher pressures, typically bar scale, and therefore is more favorable in high-luminosity scattering experiments. See comprehensive reviews on optical pumping techniques of polarized ^3He in <cit.>. Both aforementioned techniques are normally performed in low magnetic fields (on the order of 10^-3 T). At higher fields, SEOP fails for the increased wall relaxation and MEOP was considered to be less efficient due to weakened hyperfine coupling, largely limiting the implementation of polarized ^3He in high-field experimental apparatus. The recent development of high-field MEOP technique in the last two decades <cit.> has opened up opportunities for new physics projects using nuclear spin-polarized ^3He to study the fundamental quark and gluon dynamics inside the nucleon and nucleus. At Brookhaven National Lab (BNL), development on polarized ^3He ion source within the 5-T solenoid at the Electron Beam Ion Source for the future electron-ion collider is currently underway at the Relativistic Heavy Ion Collider <cit.>. 
At Jefferson Lab (JLab), a new physics project of spin-dependent electron scattering from polarized ^3He using the CLAS12 spectrometer has been approved in Hall B <cit.>. A conceptual design for a novel polarized ^3He target has been proposed <cit.>, aiming to produce polarized ^3He inside the 5 T solenoid of CLAS12. Recently, a new high-field MEOP system for polarized ^3He has been established at JLab to systematically study the effects of the discharge intensity, pump laser power, and optical-pumping-transition schemes on the key parameters of ^3He polarization. In this work, we report the major findings of these systematic studies. § HIGH-FIELD MEOP The production of ^3He nuclear polarization using MEOP involves optical pumping of ^3He in the metastable state and metastability-exchange collisions. A radio-frequency (RF) signal is employed to induce an electrical plasma discharge in the ^3He gas and excite a small population of ^3He atoms from the ground state to the 2^3S_1 metastable state. The 2^3S_1 – 2^3P optical pumping transition is then driven by circularly polarized 1083-nm laser light. Atoms in the 2^3P state are brought back to the 2^3S_1 state by spontaneous or stimulated emission. The optical pumping process gives rise to electronic polarization in the metastable-state ^3He atoms, which is then partially passed to the ^3He nuclei by the hyperfine interaction. Finally, the nuclear polarization of the metastable-state ^3He is transferred to the ground-state ^3He via metastability-exchange collisions. The Zeeman sublevels of the 2^3S_1 and 2^3P states of ^3He differ significantly between low and high magnetic fields, and hence so do the 1083-nm optical pumping transitions. This results in different optical-pumping and polarimetry approaches for low- and high-field MEOP. In a low field, the C_8 and C_9 transition lines are adopted to promote the metastable-state ^3He to the 2^3P state (see Fig. 14 in Ref. <cit.>), and the nuclear polarization of ^3He is measured by observing the circular polarization of the 668-nm light emitted by the discharge <cit.>. In a high magnetic field (B ≳ 1.5 T), four pumping schemes can be used for the 2^3S_1 – 2^3P transitions (see Fig. 1 in Ref. <cit.>), in this paper denoted as f_2^± and f_4^±, where the subscript indicates the number of unresolved transition lines of the pumping scheme and + (-) represents the σ^+ right-handed (σ^- left-handed) circular polarization of the 1083-nm pump light. For each pumping scheme, a separate pair of well-resolved transition lines (the probe doublet), whose 2^3S_1 sublevels are not addressed by the pumping lines, can be used for optical polarimetry. In this polarimetry approach, a probe laser is directed at the ^3He with its frequency periodically swept over the probe doublet. The nuclear polarization M of ^3He is inferred by measuring the absorption coefficients a_1 and a_2 (a_1^0 and a_2^0) for the probe doublet when the ^3He is polarized (unpolarized, M = 0), with (a_2/a_1) / (a_2^0/a_1^0) = (1 + M) / (1 - M), the derivation of which can be found in Section 2 of <cit.>. Fig. <ref> shows the measured absorption spectra for the σ^+ and σ^- 1083-nm light at magnetic fields from 2 to 4 T. The pump and probe peaks are subject to Doppler broadening at room temperature and 1-torr pressure. Note that the degree of circular polarization of the 1083-nm light is not highly critical for high-field MEOP, as the σ^+ and σ^- lines are well resolved due to the enhanced Zeeman splitting in high fields.
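Inverting the absorption-ratio relation for M is a one-line computation; the helper below is a hedged sketch with variable names of our own, not the authors' acquisition code.

```python
def nuclear_polarization(a1, a2, a1_0, a2_0):
    """Solve (a2/a1) / (a2_0/a1_0) = (1 + M) / (1 - M) for M.

    a1, a2:     probe-doublet absorption coefficients, polarized gas
    a1_0, a2_0: the same coefficients from the M = 0 calibration run
    """
    r = (a2 / a1) / (a2_0 / a1_0)
    return (r - 1.0) / (r + 1.0)

# A ratio r = 7 corresponds to M = 0.75, the order of the best
# steady-state polarization reported in this work.
print(nuclear_polarization(a1=1.0, a2=7.0, a1_0=1.0, a2_0=1.0))  # 0.75
```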
§ EXPERIMENTAL SETUP AND METHOD The schematic layout of the experimental apparatus is shown in Fig. <ref>. The ^3He gas cell and all optical components are enclosed in a laser-tight enclosure, which is geometrically a box (59 cm in length, 43 cm in width, and 33 cm in height) attached to a cylindrical volume (62 cm in length and 10 cm in diameter). The inner walls of the enclosure as well as all surfaces of the optical parts are darkened to minimize light reflection. The ^3He glass cell is located near the end of the cylindrical volume, which is inserted into the warm bore of a superconducting magnet. The setup includes i) the optical pumping system, consisting of the magnetic field, the ^3He gas cell, the RF electrodes to generate the discharge plasma in the ^3He gas, and the pump laser and related optics, and ii) the optical polarimeter, consisting of the probe laser and the photodiode. §.§ Optical pumping system The superconducting magnet (FROST) provides a homogeneous magnetic field up to 5 T within the central area of its 76-cm-long and 13-cm-diameter warm bore. In this work, the magnet is operated at 2, 3, and 4 T for the high-field MEOP tests of polarized ^3He. Pure ^3He gas at 1-torr pressure is sealed in a cylindrical borosilicate glass cell 5 cm in length and 5 cm in diameter. The electrical plasma discharge is induced by the electrodes spirally wound around the outer surface of the cell wall. A 41-MHz RF signal is generated by an SRS generator (SG382), amplified by an RF amplifier, and tuned with a radio transformer before being sent to the electrodes. A Keopsys continuous-wave ytterbium-fiber laser system provides linearly polarized laser light with a tunable frequency range of about 100 GHz, a nominal bandwidth of 2 GHz, and an output power range of 3 – 10 W. The pumping light is delivered to the laser enclosure with an optical fiber, then passes through a linearly polarizing beamsplitter cube followed by a quarter-wave plate to ensure circular polarization, and is finally guided by a lens to illuminate the full volume of the ^3He cell. A broadband mirror (750 – 1100 nm) is mounted downstream of the cell to enhance the pumping power by reflecting the laser light back into the cell. §.§ Optical polarimeter Taking advantage of the light absorption technique introduced in <cit.>, the optical polarimetry adopts the design of <cit.> with slight modifications. The probe laser light is produced by a Toptica laser system (DFB pro L-33508) with a tuning wavelength range of 1080.6 – 1084.2 nm and an output power of 70 mW. The frequency of the laser can be tuned by changing either the diode temperature or the operating current. The full range of the probe laser frequency is explored by scanning the temperature to obtain the absorption spectrum for all pumping and probe peaks, as shown in Fig. <ref>. Then the current is swept over a smaller frequency range to map the two absorption peaks of the probe doublet. An iris aperture is installed in front of the probe entrance inside the laser enclosure to adjust the probe laser power delivered to the cell. The probe laser beam is incident on the cell at a small angle (∼5°) with respect to the propagation direction of the pumping light, then reflected from the mirror downstream of the cell, and finally detected by a photodiode (Thorlabs DET36A2), which is collimated to reduce the signal background from the reflected pumping light.
To better isolate the probe signal received by the photodiode, the RF discharge is amplitude modulated by the SRS signal generator at 1 kHz with a 50% modulation depth, and the modulation is taken as the reference for the lock-in amplifier (SRS SR860). The lock-in amplifier signal is read into the computer by a Python program, where the measured spectrum of the probe doublet is fitted with two side-by-side Gaussian peaks on top of a linear function accounting for the background from the pump laser light as well as the linear shift in probe laser power caused by the frequency sweeps. The absorption coefficients a_1 and a_2 for the two probe peaks are extracted as the fitted amplitudes of the two Gaussian functions. A calibration measurement is taken at the beginning of each measurement cycle, before the pump laser is turned on, to obtain the absorption coefficients a_1^0 and a_2^0 for null polarization. The nuclear polarization M is determined from the measured values of a_1, a_2, a_1^0, and a_2^0 using Eq. <ref>. § RESULTS A typical optical pumping and relaxation cycle for the polarization measurement at 2 T using the f_4^- pumping scheme is shown in Fig. <ref>. The probe laser frequency is swept over the probe doublet periodically and continuously during the measurement cycle. Each polarization data point in Fig. <ref> is obtained from one full period of the sweep, which typically takes about 14 s. The pump laser is turned on at 0 s, and the nuclear polarization of ^3He builds up as an exponential function of time, M(t) = M_s (1 - e^{-t/T_b}), where M_s is the steady-state polarization and T_b is the build-up time constant. Following the convention in <cit.>, the build-up rate, or effective pumping rate (pumping rate for short throughout this paper), is defined as R = N M_s / T_b, where N is the total number of atoms in the cell. The pump laser is then turned off at 770 s to measure the relaxation process with the discharge on. The relaxation time T_r is determined by fitting the relaxation data with an exponential decay function. M_s, T_b, and T_r were measured at B-field magnitudes of 2, 3, and 4 T to study the effects of the discharge intensity, pump laser power, and different optical pumping transition schemes on the high-field MEOP performance. The results are presented and discussed in the following subsections. §.§ Discharge intensity The influence of the discharge condition on the obtainable ^3He nuclear polarization is twofold. On one hand, MEOP relies on the existence of the metastable-state ^3He atoms, which are produced by the RF discharge. The intensity of the discharge and its spatial distribution relative to the pump laser light directly determine the pumping rate. On the other hand, the discharge can lead to spin depolarization, which is the major relaxation mechanism competing against the optical pumping process and hence affects the steady-state polarization of the ^3He nuclei. Generally, a more intense discharge results in a stronger depolarization effect. The distribution and intensity of the discharge are jointly determined by the frequency and amplitude of the RF signal, the electrode configuration, and the magnetic holding field. The overall intensity of the discharge can be quantitatively controlled by varying the voltage amplitude of the RF signal and can be characterized by the relaxation time constant determined from discharge-on relaxation measurements. A longer relaxation time indicates a weaker discharge within the cell.
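The build-up and relaxation fits described above can be sketched with scipy; this is a minimal illustration taking placeholder time and polarization arrays, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_buildup(t, M, N_atoms):
    """Fit M(t) = M_s (1 - exp(-t / T_b)); return M_s, T_b and the
    effective pumping rate R = N M_s / T_b."""
    model = lambda t, M_s, T_b: M_s * (1.0 - np.exp(-t / T_b))
    (M_s, T_b), _ = curve_fit(model, t, M, p0=(0.7, 100.0))
    return M_s, T_b, N_atoms * M_s / T_b

def fit_relaxation(t, M):
    """Fit the discharge-on decay M(t) = M_0 exp(-t / T_r); return T_r."""
    model = lambda t, M_0, T_r: M_0 * np.exp(-t / T_r)
    (_, T_r), _ = curve_fit(model, t, M, p0=(0.7, 500.0))
    return T_r
```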
In this work, the discharge intensity was varied by fine-tuning the output voltage of the RF generator and the transmitter. Figure <ref> shows the steady-state nuclear polarization and the pumping rate measured at different discharge intensity levels, represented by the relaxation time, in magnetic fields of 2, 3, and 4 T. The optical pumping was performed with the f_4^± transition schemes, and the output pump laser power was 3 W. A saturation of the steady-state polarization is observed as the relaxation time lengthens. A trend of decreasing pumping rate, and hence suppressed nuclear polarization, with increasing magnetic field is evident. The results are in reasonable agreement with those in <cit.>, which were obtained with the same ^3He cell. The discrepancy in the saturation level could be rooted in the use of different superconducting magnets with different transverse B-field gradients and in the adoption of different electrode schemes, which would in turn affect the discharge condition. To benchmark the depolarization effect resulting from the transverse field gradient, we measured relaxation times of 2800 – 3800 s with the discharge turned off. §.§ Pump laser power The influence of the pump laser power on the steady-state nuclear polarization and the pumping rate was evaluated. The setting range for the power output of the Keopsys laser is 3 – 10 W. The laser light is attenuated by the linearly polarizing beamsplitter cube, and the attenuation factor depends on the relative angle between the linear polarization plane of the output laser and that of the beamsplitter cube. By rotating the fiber mount around the propagation direction of the pump laser light, the on-cell laser power can be tuned between 0 and 3 W. The actual laser power coming out of the quarter-wave plate was measured with a power meter and recorded according to the setting value of the power output and the rotation angle of the fiber mount. Fig. <ref> shows the dependence of the steady-state nuclear polarization and the pumping rate on the laser power for the f_4^- transition scheme at 2 T. An on-cell power as low as about 2.5 W is sufficient to reach the saturation of the attainable polarization. §.§ Pumping transition scheme Figure <ref> shows the maximum steady-state polarization achieved with the four optical pumping transition schemes described in Section <ref>. The measurements were performed at 2, 3, and 4 T with a pump laser power output of 3 W. The results for all three magnetic fields consistently show that the f_4^± schemes yield considerably higher nuclear polarization than the f_2^± schemes. The σ^+ and σ^- polarizations of the pumping light, for either the f_2 or the f_4 schemes, do not give an apparent difference in the steady-state polarization when the measurement uncertainties are taken into account. The full results, including the extracted pumping rate and relaxation time, are tabulated in Table <ref>. §.§ Uncertainties The systematic uncertainties in the measurements of the nuclear polarization mainly come from the following three aspects. i) The unsteadiness of the discharge light, which depends on the discharge level and the holding field, causes noise in the photodiode signal. ii) The background light from the pump laser contributes to the photodiode signal and might not be completely accounted for by the linear function in the fitting process. Both i) and ii) lead to uncertainties in the extraction of the probe-absorption coefficients.
iii) Residual non-zero nuclear polarization might exist in the calibration runs for a_1^0 and a_2^0, which may result in a baseline offset in the measured nuclear polarization. These three factors give total uncertainties of 2 – 4% in the measured nuclear polarization M. In addition, the selection of the data range for the exponential fitting introduces uncertainties in the extraction of M_s, T_b and T_r, particularly prominent for T_b. The total uncertainties assigned to M_s, T_b and T_r are about 4%, 5%, and 4%, respectively. The resultant total uncertainty for R is about 6%, given that the uncertainty in the total number of atoms in the cell is negligible. § CONCLUSION We present the first series of tests on MEOP at JLab for polarized ^3He in high magnetic fields. The experiments have studied the dependence of the attainable steady-state nuclear polarization and pumping rate on the discharge intensity and pump laser power, and have indicated the optimal optical-pumping scheme for the 1-torr gas. This work has reproduced and extended the earlier high-field MEOP results at BNL for the polarized ^3He ion source for the EIC and serves as the baseline for the development of a novel polarized ^3He gas target for CLAS12 at JLab. Ongoing investigations of the B-field uniformity of the FROST magnet, the adoption of the electrode scheme, and the pressure dependence at JLab will provide further input to the upcoming prototyping of the new double-cell cryogenic ^3He target. § ACKNOWLEDGMENTS We thank the JLab target group for mechanical support. We are grateful for valuable discussions and support from Pierre-Jean Nacher and his colleagues at Laboratoire Kastler Brossel, Paris, France, and from Thomas Gentile at the National Institute of Standards and Technology, Gaithersburg, Maryland. We acknowledge the support of the Nathan Isgur Fellowship. This research is supported by the U.S. Department of Energy Office of Nuclear Physics to the Massachusetts Institute of Technology under grant number DE-FG02-94ER40818 and to the Jefferson Lab under grant number DE-AC05-06OR23177. Jones:1993hg C. E. Jones, E. J. Beise, J. E. Belz, R. W. Carr, B. W. Filippone, W. Lorenzon, R. D. McKeown, B. A. Mueller, T. G. O'Neill and G. W. Dodson, et al. ^3He (e, e') quasielastic asymmetry, Phys. Rev. C 47, 110-130 (1993), <https://doi.org/10.1103/PhysRevC.47.110>. Johnson:1994cq J. R. Johnson, A. K. Thompson, T. E. Chupp, T. B. Smith, G. D. Cates, B. Driehuys, H. Middleton, N. R. Newbury, E. W. Hughes and W. Meyer, The SLAC high density gaseous polarized He-3 target, Nucl. Instrum. Meth. A 356, 148-152 (1995), <https://doi.org/10.1016/0168-9002(94)01465-5>. DeSchepper:1998gc D. DeSchepper, L. H. Kramer, S. F. Pate, K. Ackerstaff, R. W. Carr, G. R. Court, A. Dvoredsky, H. Gao, A. Golendoukhin and J. O. Hansen, et al. The HERMES polarized He-3 internal gas target, Nucl. Instrum. Meth. A 419, 16-44 (1998), <https://doi.org/10.1016/S0168-9002(98)00901-2>. Krimmer:2009zz J. Krimmer, M. Distler, W. Heil, S. Karpuk, D. Kiselev, Z. Salhi and E. W. Otten, A highly polarized He-3 target for the electron beam at MAMI, Nucl. Instrum. Meth. A 611, 18-24 (2009), <https://doi.org/10.1016/j.nima.2009.09.064>. Kramer:2007zzb K. Kramer, X. Zong, D. Dutta, H. Gao, X. Qian, Q. Ye, X. Zhu, R. Lu, T. Averett and S. Fuchs, A high-pressure polarized He-3 gas target for the High Intensity Gamma Source (HIγS) facility at Duke Free Electron Laser Laboratory, Nucl. Instrum. Meth. A 582, 318-325 (2007), <https://doi.org/10.1016/j.nima.2007.08.243>.
Singh:2010 J. Singh, Alkali-Hybrid Spin-Exchange Optically-Pumped Polarized ^3He Targets Used For Studying Neutron Structure, Ph.D. thesis, University of Virginia (2010), <http://galileo.phys.virginia.edu/research/groups/spinphysics/thesis/singh_thesis_2010.pdf>. Colegrove:1960 F. D. Colegrove, L. D. Schearer and G. K. Walters, Polarization of He^3 Gas by Optical Pumping, Phys. Rev. 132, 2561 (1963), <https://doi.org/10.1103/PhysRev.132.2561>. Bouchiat:1960dsd M. A. Bouchiat, T. R. Carver and C. M. Varnum, Nuclear Polarization in He3 Gas Induced by Optical Pumping and Dipolar Exchange, Phys. Rev. Lett. 5, no.8, 373 (1960), <https://doi.org/10.1103/PhysRevLett.5.373>. Milner:1989 R. G. Milner, R. D. McKeown and C. E. Woodward, A polarized ^3He target for nuclear physics, Nucl. Instrum. Meth. A 274, 56-63 (1989), <https://doi.org/10.1016/0168-9002(89)90365-3>. Eckert:1992 G. Eckert, W. Heil, M. Meyerhoff, E.W. Otten, R. Surkau, M. Werner, M. Leduc, P.J. Nacher and L.D. Schearer, A dense polarized ^3He target based on compression of optically pumped gas, Nucl. Instrum. Meth. A 320, 53-65 (1992), <https://doi.org/10.1016/0168-9002(92)90769-Z>. Walker:1997zzc T. G. Walker and W. Happer, Spin-exchange optical pumping of noble-gas nuclei, Rev. Mod. Phys. 69, 629-642 (1997), <https://doi.org/10.1103/RevModPhys.69.629>. Batz:2011 M. Batz, P.-J. Nacher and G Tastevin, Fundamentals of metastability exchange optical pumping in helium, J. Phys. Conf. Ser. 294, 012002 (2011), <https://dx.doi.org/10.1088/1742-6596/294/1/012002>. Gentile:2016uud T. R. Gentile, P. J. Nacher, B. Saam and T. G. Walker, Optically Polarized ^3He, Rev. Mod. Phys. 89, no.4, 045004 (2017), <https://doi.org/10.1103/RevModPhys.89.045004>. Courtade:2000 E. Courtade, F. Marion, P. Nacher, G. Tastevin, T. Dohnalik and K. Kiersnowski, Spectroscopy of the helium 2 ^3S–2 ^3P transition above 0.01 tesla – application to optical pumping studies, Hyperfine Interact. 127 (1) 451–454 (2000), <https://doi.org/10.1023/A:1012673902661>. Courtade:2002 E. Courtade, F. Marion, P.-J. Nacher, G. Tastevin, K. Kiersnowski and T. Dohnalik, Magnetic field effects on the 1 083 nm atomic line of helium, Eur. Phys. J. D 21, 25–55 (2002), <https://doi.org/10.1140/epjd/e2002-00176-1>. Abboud:2004 M. Abboud, A. Sinatra, X. Maître, G. Tastevin and P.-J. Nacher, High nuclear polarization of ^3He at low and high pressure by metastability exchange optical pumping at 1.5 tesla, Europhys. Lett. 68 (4) 480–486 (2004), <https://doi.org/10.1209/epl/i2004-10237-y>. Abboud:2005 M. Abboud, A. Sinatra, G. Tastevin, P.-J. Nacher and X. Maître, Metastability Exchange Optical Pumping of Helium-3 at High Pressures and 1.5 T: Comparison of two Optical Pumping Transitions, https://doi.org/10.48550/arXiv.physics/0506044arXiv:physics/0506044. Nikiel:2007 A. Nikiel, T. Palasz, M. Suchanek, M. Abboud, A. Sinatra, Z. Olejniczak, T. Dohnalik, G. Tastevin and P.-J. Nacher, Metastability exchange optical pumping of ^3He at high pressure and high magnetic field for medical applications, Eur. Phys. J. Spec. Top. 144, 255–263 (2007), <https://doi.org/10.1140/epjst/e2007-00138-3>. Suchanek:2007 K. Suchanek, M. Suchanek, A. Nikiel, T. Pałasz, M. Abboud, A. Sinatra, P.-J. Nacher, G. Tastevin, Z. Olejniczak and T. Dohnalik, Optical measurement of ^3He nuclear polarization for metastable exchange optical pumping studies at high magnetic field, Eur. Phys. J. Spec. Top. 144 (1) 67–74 (2007), <https://doi.org/10.1140/epjst/e2007-00109-8>. Nikiel-Osuchowska:2013 A. Nikiel-Osuchowska, G. Collier, B. 
Głowacz, T. Pałasz, Z. Olejniczak, W. P. Węglarz, G. Tastevin, P.-J. Nacher and T. Dohnalik, Metastability exchange optical pumping of ^3He gas up to hundreds of millibars at 4.7 Tesla, Eur. Phys. J. D 67, 200 (2013), <https://doi.org/10.1140/epjd/e2013-40153-y>. Maxwell:2018dyf J. D. Maxwell, J. Alessi, G. Atoian, E. Beebe, C. S. Epstein, R. G. Milner, M. Musgrave, A. Pikin, J. Ritter and A. Zelenski, Enhanced polarization of low pressure ^3He through metastability exchange optical pumping at high field, Nucl. Instrum. Meth. A 959, 161892 (2020), <https://doi.org/10.1016/j.nima.2019.02.019>. Zelenski:2023kof A. Zelenski, G. Atoian, E. Beebe, S. Ikeda, T. Kanesue, S. Kondrashev, J. Maxwell, R. Milner, M. Musgrave, M. Okamura, A. A. Poblaguev, D. Raparia, J. Ritter, A. Sukhanov and S. Trabocchi, Optically Pumped Polarized ^3He^++ Ion Source Development for RHIC/EIC, https://doi.org/10.48550/arXiv.2303.10409arXiv:2303.10409. PAC:2020 JLab Program Advisory Committee, https://www.jlab.org/exp_prog/PACpage/PAC48/PAC48_PrelimReportPlus_FINAL.pdf48th Program Advisory Committee Report, 2020. Maxwell:2021ytu J. Maxwell and R. Milner, A concept for polarized ^3He targets for high luminosity scattering experiments in high magnetic field environments, Nucl. Instrum. Meth. A 1012, 165590 (2021), <https://doi.org/10.1016/j.nima.2021.165590>. Pavlovic:1970 M. Pavlović and F. Laloë, Study of a new method for orienting excited atomic levels by optical pumping. Application to the measurement of the hyperfine structure of 1D levels of ^3He, J. Phys. France 31, 173-194 (1970), <http://dx.doi.org/10.1051/jphys:01970003102-3017300>. Gentile:1993 T. R. Gentile and R. D. McKeown, Spin-polarizing ^3He nuclei with an arc-lamp-pumped neodymium-doped lanthanum magnesium hexaluminate laser, Phys. Rev. A 47 456–467 (1993), <http://dx.doi.org/10.1103/PhysRevA.47.456>.
http://arxiv.org/abs/2307.05542v2
20230708193421
Geometric parametrization of $SO(D+1)$ phase space of all dimensional loop quantum gravity: II. Beyond the simplicity constraint surface
[ "Gaoping Long" ]
gr-qc
[ "gr-qc" ]
The regularization of the scalar constraint and the Fermion coupling problem indicate that it is necessary to consider some kind of gauge fixing method to deal with the simplicity constraint in all dimensional SO(D+1) loop quantum gravity. A coherent state with well-behaved peakedness properties is an essential ingredient for carrying out such a gauge fixing method. To provide the basic tool for constructing this kind of coherent state, we generalize the twisted geometry parametrization of the SO(D+1) holonomy-flux phase space of (1+D)-dimensional loop quantum gravity from the edge simplicity constraint surface to a dense subspace of the SO(D+1) holonomy-flux phase space. The symplectic structure on the twisted geometric parameter space and the Poisson structure in terms of the twisted geometric variables are analyzed. Besides, we discuss the relation between the two twisted geometry parametrizations constructed on the edge simplicity constraint surface and on the dense subspace of the SO(D+1) holonomy-flux phase space, respectively. Our results show that these two types of parametrizations are equivalent to each other upon carrying out the gauge reduction with respect to the edge simplicity constraint.
§ INTRODUCTION As a non-perturbative and background-independent approach to unifying general relativity (GR) and quantum mechanics, loop quantum gravity (LQG) has made remarkable progress in several aspects <cit.><cit.><cit.><cit.>. For instance, various symmetry-reduced models have been established in the framework of LQG to give the resolution of singularities <cit.>, and various attempts have been made in the framework of LQG to account for the BH entropy <cit.>. Loop quantum gravity in all dimensional spacetime is also of interest, owing to its potential for absorbing valuable ideas from other gravity theories (e.g., supersymmetry and extra dimensions <cit.>) into the loop quantization framework of GR. The loop quantization approach for GR in all dimensions was first developed by Bodendorfer, Thiemann and Thurn <cit.><cit.><cit.>. In detail, all dimensional LQG is based on the connection formulation of (1+D)-dimensional GR in the form of an SO(D+1) Yang-Mills theory, with the kinematic phase space coordinatized by the canonical pairs (A_aIJ,π^bKL), consisting of the spatial SO(D+1) connection fields A_aIJ and the vector fields π^bKL. In this formulation, the theory is governed by the first class system of the SO(D+1) Gaussian constraints, the (D+1)-dimensional ADM constraints and the additional simplicity constraints. Similar to the Gaussian constraints, the simplicity constraints, taking the form S^ab_IJKL:=π^a[IJπ^|b|KL], generate extra gauge symmetries in the SO(D+1) Yang-Mills phase space. It has been shown that the connection phase space correctly reduces to the familiar ADM phase space upon carrying out the symplectic reductions with respect to the Gaussian and simplicity constraints. Similar to the case of SU(2) LQG, the loop quantization of the SO(D+1) Yang-Mills theory leads to the spin-network states of the SO(D+1) holonomies on graphs, which carry the quanta of the flux operators representing the fluxes of π^bKL over (D-1)-dimensional faces. The Hilbert space spanned by the spin-network states induces the holonomy-flux phase space associated to each graph, with the Poisson algebras among holonomies and fluxes in the holonomy-flux phase space being isomorphic to the quantum algebras among them in the quantum Hilbert space.
To look for the all-dimensional Regge ADM data encoded in the SO(D+1) spin-network states, it is necessary to find the degrees of freedom of discrete geometries encoded in the SO(D+1) holonomy-flux variables, by considering a gauge reduction procedure with respect to both the SO(D+1) Gaussian constraints and the simplicity constraints in the holonomy-flux phase space. A series of studies in this direction was first carried out in the SU(2) formulation of (1+3)-dimensional LQG <cit.><cit.><cit.><cit.><cit.>, and then generalized to the SO(D+1) holonomy-flux phase space in our companion paper <cit.>. Specifically, since the simplicity constraints become anomalous at the vertices of the graphs, the reductions with respect to the Gaussian and simplicity constraints are guided by the twisted geometry parametrization of the edge simplicity constraint surface in the holonomy-flux phase space of SO(D+1) LQG. In particular, the twisted geometry interpretation of the holonomy-flux variables suggests that the Gaussian and edge simplicity constraints should be imposed strongly, since they generate true gauge transformations, while the vertex simplicity constraints should be imposed weakly. The reduced space parametrized by the twisted geometric parameters gives a discrete Regge geometry picture, which can be regarded as the discrete version of the ADM phase space of GR. An important application of the twisted geometry parametrization is the construction of the twisted geometry coherent state. Such coherent states were first established in SU(2) LQG <cit.>, and then generalized to SO(D+1) LQG with the restriction of the simple representations <cit.>. Specifically, based on the twisted geometry parameters, the simple twisted geometry coherent state in the strong solution space of the quantum edge simplicity constraints is established by selecting the dominant terms (which are referred to as Perelomov type coherent states <cit.>) with simple representations of SO(D+1) in the decomposition of the heat-kernel coherent state of SO(D+1) <cit.>. It has been shown that the simple twisted geometry coherent states take a Gaussian superposition form. Especially, the simple twisted geometry coherent states provide an over-complete basis of the strong solution space of the quantum edge simplicity constraints, and their wave functions have well-behaved peakedness and Ehrenfest properties in the reduced phase space with respect to the edge simplicity constraints <cit.>. In fact, the twisted geometry parametrization of the SO(D+1) holonomy-flux phase space discussed in Ref. <cit.> concerns only the constraint surface of the edge simplicity constraint, and the resulting twisted geometry variables only parametrize the reduced phase space with respect to the edge simplicity constraint. Correspondingly, the simple twisted geometry coherent states constructed from this parametrization of the reduced phase space are the gauge (with respect to the edge simplicity constraint) invariant coherent states <cit.>. In other words, the wave functions of these gauge invariant coherent states are constant along the corresponding gauge orbits, so that each of them peaks at a gauge orbit instead of a point in the phase space <cit.>. As we have mentioned above, the edge simplicity constraint should be imposed strongly following the twisted geometry interpretation of the holonomy-flux variables.
Thus, it would seem that all of the studies of all dimensional SO(D+1) LQG can be completed in the strong solution space of the quantum edge simplicity constraint, which is the gauge (with respect to the simplicity constraint) invariant subspace of the full Hilbert space of all dimensional SO(D+1) LQG. Nevertheless, several discussions have shown that it is necessary to consider some kind of gauge fixed solution space with respect to the simplicity constraint in order to deal with some of the issues that appear in all dimensional SO(D+1) LQG. Let us introduce two issues to explain this necessity. First, the regularization of the scalar constraint can be carried out by following the standard loop regularization method <cit.><cit.><cit.>. The resulting regularized scalar constraint contains the Euclidean term, which is given by the antisymmetric contraction of the holonomies along some closed loops with the fluxes at the beginning and target points of these loops. Classically, this Euclidean term captures the information of both the intrinsic and extrinsic curvature along these closed loops. However, it has been shown that the Euclidean term in the quantized scalar constraint cannot capture the information of this intrinsic and extrinsic curvature in the strong solution space of the quantum edge simplicity constraint, since the strong imposition of the quantum edge simplicity constraint leads to a gauge averaging, which eliminates some critical ingredients in the holonomies <cit.>. Thus, the standard loop regularization method conflicts with the strong imposition of the edge simplicity constraint. To deal with this issue, one may consider the gauge fixed solution of the edge simplicity constraint to avoid the gauge averaging, so that the scalar constraint operator given by the standard loop regularization method captures the information of the intrinsic and extrinsic curvature correctly. This is the first issue which points out the necessity of considering a gauge fixed solution space with respect to the simplicity constraint. The second issue which points out this necessity is the Fermion coupling problem in all dimensional LQG <cit.>. Specifically, the strong imposition of the quantum edge simplicity constraint restricts the holonomies in all dimensional LQG to be represented in the simple representation space of SO(D+1), which implies that the holonomies cannot transform the Fermions taking values in the spinor representation space of SO(D+1) for D≥4. An alternative scheme to deal with this issue is to consider the gauge fixed solution of the quantum edge simplicity constraint based on coherent states, which ensures that the holonomies can take matrices in the spinor representation space of SO(D+1), so that they are able to describe the transformation of Fermions along edges. Usually, in the classical theory, the gauge fixing can be realized by restricting the physical considerations to a section of the gauge orbits on the constraint surface of the edge simplicity constraint. However, this is not valid in the quantum theory, since the wave functions of the quantum states which sharply converge to the constraint surface of the edge simplicity constraint are always dispersed along the gauge orbits.
To overcome this problem, it is reasonable to consider a coherent state whose wave function peaks at a point in the phase space, so that one has a state whose wave function converges to both the constraint surface of the edge simplicity constraint and a section of the gauge orbits, with the convergence controlled by the width of the wave function of the coherent state. Such a coherent state, whose wave function peaks at a point in the SO(D+1) holonomy-flux phase space, can be constructed by following a procedure similar to the construction of the simple twisted geometry coherent state in the strong solution space of the quantum edge simplicity constraint <cit.>. More specifically, one needs to consider a more generalized twisted geometry parametrization, which is able to coordinatize (almost) the whole SO(D+1) holonomy-flux phase space instead of only the reduced phase space. Then, based on this more generalized twisted geometry parametrization, one can decompose the heat-kernel coherent state of SO(D+1) and select certain dominant terms to formulate the twisted geometry coherent state involving the non-simple representations of SO(D+1), which will be referred to as the non-simple twisted geometry coherent state in all dimensional LQG. As the first step towards establishing the non-simple twisted geometry coherent state in all dimensional LQG, it is necessary to extend the twisted geometry parametrization to the full SO(D+1) holonomy-flux phase space. In this article, we will establish the twisted geometry parametrization of a dense subspace of the full SO(D+1) holonomy-flux phase space, and extend this parametrization as a symplectic-morphism. Besides, we will show that the twisted geometry parametrization of the edge simplicity constraint surface introduced in our previous work <cit.> can be regarded as a special case of the construction in this article. This article is organized as follows. In our brief review of the classical connection formulation of all dimensional GR in Section <ref>, we will also introduce the SO(D+1) holonomy-flux phase space and the discretized formulation of the kinematical constraints. In Section <ref> and Section <ref> we will introduce the twisted geometry parametrization for a dense subspace of the SO(D+1) phase space, and analyze the Poisson structures among the new geometric parametrization variables. Then, in Section <ref> we will discuss the relation between the twisted geometry parametrizations of the edge simplicity constraint surface and of the dense subspace of the SO(D+1) holonomy-flux phase space. Finally, we will conclude with an outlook on possible next steps for future research.
§ PHASE SPACE OF ALL DIMENSIONAL LOOP QUANTUM GRAVITY §.§ Connection phase space The classical connection formulation of GR with arbitrary spacetime dimensionality (1+D) was first developed by Bodendorfer, Thiemann and Thurn in Ref. <cit.>. This continuum connection phase space is coordinatized by an so(D+1) valued 1-form field A_aIJ and a vector field π^bKL on the D-dimensional spatial manifold Σ, with the non-trivial Poisson brackets between them being given by {A_aIJ(x), π^bKL(y)}=2κβδ_a^bδ_[I^Kδ_J]^Lδ^(D)(x-y), where β is the Barbero-Immirzi parameter and κ is the gravitational constant.
It is known that this connection phase space correctly reduces to the familiar ADM phase space after the standard symplectic reduction procedure with respect to the first-class constraint system composed of the Gauss constraints 𝒢^IJ≈0 and the simplicity constraints S^ab_IJKL:=π^a[IJπ^|b|KL]≈0. Specifically, the simplicity constraint can be solved as π^aIJ=2√(q)n^[Ie^|a|J], where e^a_I is a dual D-bein field, n^I satisfying n^In_I=1 is determined by e^a_I with n^Ie_aI=0, and q is the determinant of the spatial metric q_ab, which is determined by π^aIJ with q^ab=e^aIe^b_I on the simplicity constraint surface. One can split A_aIJ as A_aIJ≡Γ_aIJ(π)+β K_aIJ, where Γ_aIJ(π) is a functional of π^aIJ satisfying Γ_aIJ(π)=Γ_aIJ(e) on the simplicity constraint surface, with Γ_aIJ(e) being the unique torsionless spin connection compatible with the D-bein e_aI. Then, the densitized extrinsic curvature can be given by K̃_a^ b=K_aIJπ^bIJ on the constraint surfaces of both the Gaussian and simplicity constraints. It is easy to check that the Gaussian constraints generate the standard SO(D+1) gauge transformations of the connection field and its conjugate momentum. Now, let us consider the simplicity constraints from the perspective of the corresponding gauge transformations. First, the solutions π^aIJ=2√(q)n^[Ie^|a|J] to the simplicity constraint introduced above define the constraint surface of the simplicity constraints. Then, one can verify that the infinitesimal gauge transformations induced by the simplicity constraints are given by <cit.> δ K_c^PQ={∫_Σd^Dxf_ab^IJKLπ^a_[IJπ^b_KL](x), K_c^PQ(y)}=4κ f_cb^[PQKL]π^b_KL(y). Notice that on the simplicity constraint surface we have π^aIJ=2√(q)n^[Ie^|a|J], so that δ K_c^IJn_I=0. Further, by introducing the decomposition K_aIJ≡ 2n_[IK_|a|J]+K̅_aIJ, where K̅_aIJ:=η̅_I^Kη̅_J^LK_aKL with η̅^I_J=δ^I_J-n^I n_J and K̅_aIJn^I=0, we immediately find that K̅_aIJ is the pure gauge component, while the components 2n_[IK_|a|J] are gauge invariant with respect to the transformations given in (<ref>). From the expressions of the ADM variables qq^ab=1/2π^aIJπ^b_IJ and K̃_a^ b=K_aIJπ^bIJ, it is easy to see that these variables are indeed gauge invariant with respect to the simplicity constraints on the constraint surface. Thus, through the symplectic gauge reduction procedure, the simplicity constraints eliminate two parts of the degrees of freedom: they restrict π̅^aIJ:=π^aIJ-2√(q)n^[Ie^|a|J]=0 by the constraint equation and remove the pure-gauge components K̅_aIJ:=η̅_I^Kη̅_J^LK_aKL. Following these results, the geometric variables constructed from the ADM variables (q_ab,K̃^cd) can be extended as functionals on the connection phase space, with their original geometric interpretation retained on the constraint surface.
§.§ Holonomy-flux phase space The quantization of the connection formulation of (1+D)-dimensional GR can be carried out by following the standard loop quantization procedure, which leads to a Hilbert space ℋ given by the completion of the space of cylindrical functions on the quantum configuration space <cit.>. This Hilbert space ℋ can be regarded as a union of the spaces ℋ_γ=L^2((SO(D+1))^|E(γ)|,dμ_Haar^|E(γ)|) over all possible graphs γ, where E(γ) denotes the set of edges of γ and dμ_Haar^|E(γ)| denotes the product of the Haar measures on SO(D+1). The Gaussian constraint and the simplicity constraint can be promoted to constraint operators on this Hilbert space.
However, it turns out that the quantum brackets among these constraints form an open and anomalous quantum algebra, in contrast to the corresponding first class constraint algebra in the connection phase space <cit.>. Hence, it is necessary to propose a proper treatment of these quantum constraints, in order to reduce the gauge degrees of freedom and correctly retain the physical degrees of freedom. A reasonable method to reach this goal is to construct the gauge reductions with respect to the Gaussian and simplicity constraints in the holonomy-flux phase space. More specifically, since the classical constraint algebras in the holonomy-flux phase space are isomorphic to the quantum constraint algebras in the quantum theory, one can treat the Gaussian and simplicity constraints in the holonomy-flux phase space and in the quantum theory on the same footing. Then, the degrees of freedom reduced in the imposition of the quantum constraint operators can be reflected in the gauge reductions with respect to the Gaussian and simplicity constraints in the holonomy-flux phase space. Through these gauge reductions, one can clarify the gauge degrees of freedom and verify whether the treatment of these constraints retains the correct physical degrees of freedom. Now, let us first give a brief review of the holonomy-flux phase space. The quantum geometry of loop quantum gravity is described in terms of the spatially smeared variables for the conjugate pairs of elementary variables: the D-bein fluxes over (D-1)-dimensional faces and the connection holonomies over paths. In the following we will focus on the holonomies and fluxes based on one specific graph. The edges of the given graph naturally provide the set of paths for a fixed set of holonomies, and the cell decomposition dual to the graph provides the set of (D-1)-faces specifying a fixed set of fluxes. In this setting, the holonomy over one of the edges is naturally conjugate to the flux over the (D-1)-face traversed by the edge; each such pair satisfies the smeared version of the Poisson algebra (<ref>), and together they form a new phase space. More precisely, given the graph γ embedded in the spatial manifold, we consider a new algebra given by the holonomy-flux variables (h_e, X_e)∈ SO(D+1)× so(D+1) over all edges e of γ. These pairs of variables represent the discretized version of the connection A_aIJ and its conjugate momentum π^bKL. Specifically, the holonomy of A_aIJ along an edge e∈γ is defined by h_e[A]:=𝒫exp(∫_eA)=1+∑_n=1^∞∫_0^1dt_n∫_0^t_ndt_n-1...∫_0^t_2 dt_1A(t_1)...A(t_n), where A(t):=1/2ė^aA_aIJτ^IJ, ė^a is the tangent vector field of e, τ^IJ is a basis of so(D+1) given by (τ^IJ)^def._KL=2δ^[I_Kδ^J]_L in the definition representation space of SO(D+1), and 𝒫 denotes the path-ordered product. The flux X^IJ_e of π^aIJ through the (D-1)-dimensional face dual to the edge e is defined by X^IJ_e:=-1/4β a^D-1tr(τ^IJ∫_e^⋆ϵ_aa_1...a_D-1h(ρ^s_e(σ)) π^aKL(σ)τ_KLh(ρ^s_e(σ)^-1)), where a is an arbitrary but fixed constant with the dimension of length, e^⋆ is the (D-1)-face traversed by e in the dual lattice of γ, and ρ_e^s(σ): [0,1]→Σ is a path connecting the source point s_e∈ e to σ∈ e^⋆ such that ρ_e^s(σ): [0,1/2]→ e and ρ_e^s(σ): [1/2, 1]→ e^⋆. The Poisson algebra between the holonomy-flux variables can be induced from the Poisson bracket (<ref>) between the connection variables, which reads {h_e, h_e'}=0, {h_e, X^IJ_e'}=δ_e,e'κ/a^D-1d/dλ(e^λτ^IJh_e)|_λ=0, {X^IJ_e, X^KL_e'}=δ_e,e'κ/2a^D-1(-δ^IKX_e^JL-δ^JL X^IK_e+δ^ILX_e^JK+δ^JKX_e^ IL).
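As a concrete cross-check of the algebraic structure above, the following minimal Python sketch (our own illustration, not part of the original construction; D=3 is an assumed example) builds the defining-representation generators (τ^IJ)_KL=δ^I_Kδ^J_L-δ^I_Lδ^J_K and verifies numerically that their commutators reproduce the same index pattern as the flux brackets.

```python
import numpy as np
from itertools import combinations

D = 3          # spatial dimension (an assumed example); the generators span so(D+1)
N = D + 1

def tau(I, J):
    """Defining-representation generator: (tau^{IJ})_{KL} = d^I_K d^J_L - d^I_L d^J_K."""
    m = np.zeros((N, N))
    m[I, J], m[J, I] = 1.0, -1.0
    return m

def comm(A, B):
    return A @ B - B @ A

delta = np.eye(N)
for (I, J) in combinations(range(N), 2):
    for (K, L) in combinations(range(N), 2):
        lhs = comm(tau(I, J), tau(K, L))
        # same pattern as the flux algebra: -d^{IK} X^{JL} - d^{JL} X^{IK} + d^{IL} X^{JK} + d^{JK} X^{IL}
        rhs = (-delta[I, K] * tau(J, L) - delta[J, L] * tau(I, K)
               + delta[I, L] * tau(J, K) + delta[J, K] * tau(I, L))
        assert np.allclose(lhs, rhs)
print("so(%d) commutation relations verified" % N)
```

Up to the prefactor κ/2a^D-1, this is the bracket structure that the fluxes X^IJ_e inherit edge by edge.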
Since h_e∈ SO(D+1), X_e^IJ∈ so(D+1) and SO(D+1)× so(D+1)≅ T^∗ SO(D+1), the new discrete phase space, called the holonomy-flux phase space of SO(D+1) loop quantum gravity on a fixed graph, is a direct product of SO(D+1) cotangent bundles. Finally, the complete phase space of the theory is given by taking the union over the holonomy-flux phase spaces of all possible graphs. Similar to the SU(2) case, the phase space coordinatized by the holonomy-flux variables (h_e, X_e) of SO(D+1) loop quantum gravity can be regarded as the discretized version of the continuum phase space. The (discretized) Gaussian and simplicity constraints in the holonomy-flux phase space are constructed in agreement with the corresponding quantum constraints. With X_-e=-h_e^-1X_eh_e≡X̃_e, the (discretized) Gaussian constraints G_v^IJ≈0 for each vertex v∈γ of the graph take the form <cit.> G_v^IJ=∑_e|s(e)=vX_e^IJ+∑_e|t(e)=vX̃_e^IJ≈0, where s(e) and t(e) denote the source and target points of the oriented edge e respectively. The (discretized) simplicity constraints consist of the edge simplicity constraints S^IJKL_e≈0 and the vertex simplicity constraints S^IJKL_v,e,e'≈0, which take the forms <cit.> S_e^IJKL≡ X^[IJ_e X^KL]_e≈0, ∀ e∈γ, S_v,e,e'^IJKL≡ X^[IJ_e X^KL]_e'≈0, ∀ e,e'∈γ, s(e)=s(e')=v. It has been shown that, since the commutative Poisson algebra of the conjugate momentum variables {π^bKL} becomes the non-commutative Poisson algebra of the flux variables { X^KL_e} after the smearing, the Poisson algebra among the discretized simplicity constraints becomes non-closed and thus anomalous, which makes the symplectic reductions in the holonomy-flux phase space difficult to implement <cit.>. To deal with this issue, the twisted geometry parametrization of the holonomy-flux phase space was constructed, which ensures that the gauge reductions with respect to the Gaussian and simplicity constraints in the holonomy-flux phase space can be carried out with the guidance of the twisted geometric interpretation of the holonomy-flux variables <cit.>. The twisted geometry parametrization for the SU(2) holonomy-flux variables of (1+3)-dimensional LQG was first introduced in a series of studies following the original works by Freidel and Speziale <cit.><cit.>. The space of the twisted geometry for SU(2) LQG can undergo a symplectic reduction with respect to the discretized Gauss constraints, giving rise to a reduced phase space containing the discretized ADM data of a polyhedral Regge hypersurface. Following a similar procedure, the twisted geometry parametrization of all dimensional SO(D+1) LQG has been constructed on the edge simplicity constraint surface in the SO(D+1) holonomy-flux phase space in our companion paper <cit.>. It has been shown that the gauge reductions with respect to the simplicity constraints and Gaussian constraints in SO(D+1) LQG can be carried out properly in the twisted geometry parametrization space, which leads to a clear correspondence between the original holonomy-flux variables (h_e, X_e) on the edge simplicity constraint surface and the D-hypersurface discrete geometry data in the Regge geometry formulation. Nevertheless, the twisted geometric parametrization on the edge simplicity constraint surface of the SO(D+1) holonomy-flux phase space is not enough.
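Before going beyond the constraint surface, it may help to make the edge simplicity condition concrete in the lowest non-trivial case. For D=3 (an assumed example) the totally antisymmetrized square X^[IJX^KL] is proportional to ε^IJKL times the Pfaffian of X, so edge simplicity amounts to the vanishing of the Pfaffian. A hedged numerical sketch of our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf4(X):
    """Pfaffian of a 4x4 antisymmetric matrix; X^{[IJ}X^{KL]} is proportional to eps^{IJKL} pf4(X)."""
    return X[0, 1] * X[2, 3] - X[0, 2] * X[1, 3] + X[0, 3] * X[1, 2]

u, v = rng.normal(size=4), rng.normal(size=4)
X_simple = np.outer(u, v) - np.outer(v, u)         # a simple bivector u ^ v
print(pf4(X_simple))                               # ~0: edge simplicity satisfied

w, z = rng.normal(size=4), rng.normal(size=4)
X_generic = X_simple + np.outer(w, z) - np.outer(z, w)
print(pf4(X_generic))                              # generically nonzero: simplicity violated
```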
As we have mentioned in the introduction, several explorations in the quantum theory of SO(D+1) LQG require us to consider quantum states whose wave functions are dispersed beyond the edge simplicity constraint surface. Hence, it is necessary to extend the twisted geometry parametrization to interpret the phase space points which are not located on the edge simplicity constraint surface.
§ GEOMETRIC PARAMETRIZATION OF SO(D+1) HOLONOMY-FLUX PHASE SPACE To make our statements and notations clearer, we will first generalize the twisted geometry parametrization to a dense subspace of T^∗ SO(D+1) in this section. The discussion of the relation between the twisted geometry parametrization constructed in this article and that of previous works <cit.> is left to Section 5.
§.§ Beyond the edge-simplicity constraint surface Recall the SO(D+1) holonomy-flux phase space ×_e∈γT^∗ SO(D+1)_e associated to the given graph γ. Let us focus on the holonomy-flux phase space T^∗ SO(D+1) associated to a single edge without loss of generality. Noticing that the semi-simple elements in so(D+1) compose a dense subset so(D+1)_ss⊂ so(D+1) and that T^∗ SO(D+1)≅ SO(D+1)× so(D+1), we can define a dense subspace of T^∗ SO(D+1) as T_ss^∗ SO(D+1):={(h, X)| h∈ SO(D+1), X is a semi-simple element of so(D+1)}. To give the explicit formulation of the twisted geometric parametrization of T_ss^∗ SO(D+1), let us first introduce some new notations. Considering the orthonormal basis {δ_1^I,δ_2^I,...,δ_D+1^I} of ℝ^D+1, one has the basis {τ_IJ} of so(D+1) given by τ_IJ=(τ_IJ)^KL_def.:=2δ_I^[Kδ_J^L] in the definition representation space of SO(D+1), where (τ_IJ)^KL_def. is the generator of the infinitesimal rotation in the 2-dimensional vector space spanned by the two vectors δ_I^K and δ_J^L. Then, let us introduce the maximal commutative sub-Lie algebra of so(D+1) spanned by {τ_1, τ_2,...,τ_m} with m=[D+1/2], where we define τ_1:=τ_12, τ_2:= τ_34, ..., τ_m:= τ_D,D+1 for D+1 being even, and τ_1:=τ_12, τ_2:= τ_34, ..., τ_m:= τ_D-1,D for D+1 being odd. This maximal commutative sub-Lie algebra of so(D+1) generates the maximal commutative subgroup 𝕋^m:=×_i=1^m SO(2)_i, m=[D+1/2]. Then, SO(D+1) can be regarded as a fiber bundle with fibers 𝕋^m over the base manifold ℚ_m:=SO(D+1)/𝕋^m, which can also be given by ℚ_m={𝕍:=(V_1,...,V_m)|V_i=gτ_i g^-1, i∈{1,...,m}, g∈ SO(D+1)}. One can choose a Hopf section n: ℚ_m↦ SO(D+1), 𝕍↦ n(𝕍) and another Hopf section ñ: ℚ̃_m↦ SO(D+1), 𝕍̃↦ñ(𝕍̃) for the copy ℚ̃_m of ℚ_m, which satisfy V_1=nτ_1n^-1,...,V_m=nτ_mn^-1, and Ṽ_1=-ñτ_1ñ^-1,...,Ṽ_m=-ñτ_mñ^-1 with ℚ_m∋𝕍:=(V_1,...,V_m) and ℚ̃_m∋𝕍̃:=(Ṽ_1,...,Ṽ_m). Observe that the choice of the Hopf sections is clearly non-unique; from now on our parametrization will be given under one fixed choice of {n_e,ñ_e} for each edge e. Then, in the subspace T_ss^∗ SO(D+1)_e associated to each edge e, the generalized twisted geometry parametrization can be given by the map (𝕍_e,𝕍̃_e,η⃗_e,ξ⃗_e)↦(h_e, X_e)∈ T_ss^∗ SO(D+1)_e: X_e=1/2n_e(η_e^1 τ_1+...+η_e^m τ_m)n_e^-1, h_e=n_ee^ξ_e^1τ_1...e^ξ_e^mτ_mñ_e^-1, where we defined η⃗_e:=(η_e^1,...,η_e^m), η_e^1,η_e^2,...,η_e^m∈ℝ with η_e^1≥η_e^2≥...≥η_e^m≥0 and ξ⃗_e:=(ξ_e^1,...,ξ_e^m) with ξ_e^1,...,ξ_e^m ∈(-π,π]. By defining η_e^1=:χ_e^1+...+χ_e^m, η_e^2 =:χ_e^2+...+χ_e^m, ..., η_e^m-1=:χ_e^m-1+χ_e^m, η_e^m=:χ_e^m with χ_e^1,...,χ_e^m≥ 0, one can replace η⃗_e by χ⃗_e:=(χ_e^1,...,χ_e^m) in the parametrization (<ref>); a concrete low-dimensional illustration of this map is sketched below.
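The following Python sketch illustrates the map (<ref>) in the lowest non-trivial case D=3, i.e. SO(4) with m=2 (our own illustration; the random SO(4) matrices stand in for the Hopf-section values n(𝕍) and ñ(𝕍̃), which would otherwise be fixed by a choice of section):

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import special_ortho_group

# commuting Cartan generators tau_1 = tau_12, tau_2 = tau_34 of so(4)
tau1 = np.zeros((4, 4)); tau1[0, 1], tau1[1, 0] = 1.0, -1.0
tau2 = np.zeros((4, 4)); tau2[2, 3], tau2[3, 2] = 1.0, -1.0

# stand-ins for the Hopf-section values n(V), n~(V~); any SO(4) elements will do here
n  = special_ortho_group.rvs(4, random_state=1)
nt = special_ortho_group.rvs(4, random_state=2)

eta = np.array([2.0, 0.7])     # eta_1 >= eta_2 >= 0
xi  = np.array([0.3, -1.1])    # angles in (-pi, pi]

X = 0.5 * n @ (eta[0] * tau1 + eta[1] * tau2) @ n.T      # n^{-1} = n^T for SO(4)
h = n @ expm(xi[0] * tau1) @ expm(xi[1] * tau2) @ nt.T

assert np.allclose(h @ h.T, np.eye(4))   # h lies in SO(4)
assert np.allclose(X, -X.T)              # X lies in so(4)
# the Pfaffian of X equals (eta_1/2)(eta_2/2): X is simple precisely when eta_2 = 0
pf = X[0, 1] * X[2, 3] - X[0, 2] * X[1, 3] + X[0, 3] * X[1, 2]
assert np.isclose(pf, eta[0] * eta[1] / 4)
```

Setting eta[1] = 0 in this sketch lands exactly on the edge simplicity constraint surface, anticipating the discussion in Section <ref>.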
The twisted geometry parametrization (<ref>) of T_ss^∗ SO(D+1)_e associated to a single edge can be directly extended to the whole graph γ. Correspondingly, one can introduce the Levi-Civita holonomies {h^Γ_e|e∈γ} determined by the fluxes {X_e∈ so(D+1)_ss|e∈γ} and {X̃_e∈ so(D+1)_ss|e∈γ}, which take the form h^Γ_e≡ n_ee^ζ_e^1τ_1...e^ζ_e^mτ_mñ_e^-1. Note that the variables (ζ_e^1,...,ζ_e^m) are well-defined via the given h^Γ_e and the chosen Hopf sections; thus (ζ_e^1,...,ζ_e^m) are already fixed by the given {X_e∈ so(D+1)_ss|e∈γ} and {X̃_e∈ so(D+1)_ss|e∈γ}. Then, one can factor out h^Γ_e from h_e through the expressions h_e= (e^(ξ_e^1-ζ_e^1)n_eτ_1n_e^-1...e^(ξ_e^m-ζ_e^m)n_eτ_mn_e^-1) h^Γ_e =h^Γ_e(e^(ξ_e^1-ζ_e^1)ñ_eτ_1ñ_e^-1... e^(ξ_e^m-ζ_e^m)ñ_eτ_mñ_e^-1) from the perspectives of the source point and target point of e respectively. The above decomposition in terms of twisted geometry parameters can be adapted to the splitting of the Ashtekar connection A_a=Γ_a+β K_a on a given graph. Specifically, one can consider the integral of A_a=Γ_a+β K_a∈ so(D+1) along an infinitesimal edge direction ℓ^a_e, which leads to A_e≡ A_aℓ^a_e, Γ_e≡Γ_aℓ^a_e and K_e≡ K_aℓ^a_e. Clearly, we can establish the correspondence h_e= e^A_e and h^Γ_e= e^Γ_e. The remaining factor should account for K_e. According to the above discussion, the value of K_e may thus be expressed from the perspectives of the source point and target point of e, respectively as (e^(ξ_e^1-ζ_e^1)n_eτ_1n_e^-1...e^(ξ_e^m-ζ_e^m)n_eτ_mn_e^-1) =e^β K_e or (e^(ξ_e^1-ζ_e^1)ñ_eτ_1ñ_e^-1... e^(ξ_e^m-ζ_e^m)ñ_eτ_mñ_e^-1)= e^β K_e . Further, we have K_e =1/βn_e((ξ_e^1-ζ_e^1)τ_1+...+(ξ_e^m-ζ_e^m)τ_m)n_e^-1 or K_e =1/βñ_e((ξ_e^1-ζ_e^1)τ_1+...+(ξ_e^m-ζ_e^m)τ_m)ñ_e^-1 when it is expressed from the perspectives of the source point and target point of e respectively. The set of variables ((η_e^1,...,η_e^m), (ξ_e^1,...,ξ_e^m),𝕍_e, 𝕍̃_e) gives the generalization of the twisted geometry parametrization for the SO(D+1) holonomy-flux phase space. Compared with the twisted geometry parametrization for the edge-simplicity constraint surface in the SO(D+1) holonomy-flux phase space introduced in our companion paper <cit.>, this generalized parametrization scheme covers a dense subset of the SO(D+1) holonomy-flux phase space, extending far beyond the edge-simplicity constraint surface. We will now carry out an analysis of the symplectic structure of the SO(D+1) holonomy-flux phase space based on the variables ((η_e^1,...,η_e^m), (ξ_e^1,...,ξ_e^m),𝕍_e, 𝕍̃_e), before coming back to provide more support for the relation between the generalized parametrization scheme in this paper and the one for the edge simplicity constraint surface given in our companion paper <cit.>.
§ SYMPLECTIC ANALYSIS OF SO(D+1) HOLONOMY-FLUX PHASE SPACE Notice that the discussions in this section only depend on each single edge of the graph. To simplify our notations, we will focus on the analysis on a single edge and omit the label e without loss of generality.
§.§ Symplectic structure of SO(D+1) holonomy-flux phase space The symplectic structure of the SO(D+1) holonomy-flux phase space has been discussed in our companion paper <cit.>; let us give a brief review of the main notations as follows. Recall that the SO(D+1) holonomy-flux phase space associated with each edge of a given graph can be given by the group cotangent space T^*SO(D+1); as a phase space it enjoys the natural symplectic structure of T^*SO(D+1).
To give the explicit formulation of this symplectic structure, let us introduce the function f(h) on SO(D+1)∋ h, and the element p_X∈ so(D +1)^∗, which is a linear function of Y∈ so(D+1) defined by p_X(Y)≡ X^KLY_KL, where X=X^KL∈ so(D+1). A right-invariant vector field X̂ associated to the Lie algebra element X∈ so(D+1) acts on a function f(h) via the right derivative ∇_X^Rf(h)≡d/dtf(e^-tXh)|_t=0; under the adjoint transformation X↦ -hXh^-1, we obtain the corresponding left derivative ∇_X^Lf(h)≡d/dtf(he^tX)|_t=0=-∇^R_hXh^-1f(h). One can straightforwardly show that the map from the right invariant vector fields X̂ to the corresponding elements X∈ so(D+1) is given by the algebra-valued, right-invariant 1-form dhh^-1, which reads i_X̂(dhh^-1)=(ℒ_X̂h)h^-1=-X, where i denotes the interior product, and ℒ_Ŷ≡ i_Ŷd+di_Ŷ denotes the Lie derivative. Now, the natural symplectic potential for T^∗ SO(D+1) can be expressed as Θ≡ X^IJ(dhh^-1)_IJ≡Tr(Xdhh^-1). The symplectic 2-form then follows as Ω≡ -dΘ=- dTr(Xdhh^-1)=1/2Tr(dX̃∧ h^-1dh-dX∧ dhh^-1), where we have introduced X̃≡-h^-1Xh. From the symplectic 2-form, the Poisson brackets among the interesting phase space functions f≡ f(h) and p_Y≡ p_Y(X)=Y^IJX_IJ are given by <cit.> {p_Y,p_Z}=p_[Y,Z], {p_Y,f(h)}=∇^R_Yf(h), {f(h),f'(h)}=0. One can see from the brackets (<ref>) that the Poisson action of p_Y(X) generates right derivatives. Similarly, it is easy to check that the action of p̃_Y(X)≡ Y^IJX̃_IJ with X̃=-h^-1Xh generates the left derivative {p̃_Y,f(h)}=∇^L_Yf(h). Moreover, one can check the commutation relation {p_Y,p̃_Z}=0. Finally, it is easy to verify that, by setting 2κ/a^D-1=1, the Poisson brackets (<ref>) given by the natural symplectic potential (<ref>) for T^∗ SO(D+1) are identical to those (<ref>) induced by the symplectic structure (<ref>) in the SO(D+1) connection phase space <cit.>. In the following part of this article, we will analyze the symplectic structure on T^∗ SO(D+1) based on the symplectic potential Θ without loss of generality.
§.§ Symplectomorphism between SO(D+1) holonomy-flux phase space and generalized twisted geometry parameter space From now on, let us focus on the analysis on one single edge e of the given graph γ, and omit the label e in all of the notations. Denote by B:=ℚ_m×ℚ̃_m × (×_i=1^m ℝ^i_+)×(×_i=1^m S^1_i) the collection of the generalized twisted geometric parameters (𝕍,𝕍̃,χ⃗,ξ⃗). It is easy to see that the map (<ref>) is not a one-to-one mapping. More explicitly, one can decompose B=B_0∪Ḃ with Ḃ:= B|_η_m> 0 and B_0:= B∖Ḃ. Then, one can find that the map (<ref>) is a one-to-one mapping between Ḃ and its image Ḃ^∗⊂ T_ss^∗ SO(D+1), while it is a many-to-one mapping between B_0 and its image B_0^∗⊂ T_ss^∗ SO(D+1). We will first focus on the symplectic structure on B in this subsection, and then go back to consider the many-to-one mapping between B_0 and its image B_0^∗ in section <ref>. The one-to-one mapping between Ḃ and its image Ḃ^∗⊂ T_ss^∗ SO(D+1) is also an isomorphism Ḃ→Ḃ^∗⊂ T_ss^∗ SO(D+1). Based on the isomorphism (<ref>), we may use the generalized twisted geometric parameters to express the induced symplectic structure of Ḃ^∗⊂ T_ss^∗ SO(D+1) inherited from the phase space T^*SO(D+1). First, the induced symplectic potential can be expressed as Θ_Ḃ^∗ = Tr(Xdhh^-1)|_Ḃ^∗⊂ T_ss^∗ SO(D+1)⊂ T^∗ SO(D+1) = 1/2∑_i'=1^mη_i'Tr(nτ_i'n^-1 (dnn^-1+n(∑_idξ^iτ_i)n^-1-ne^∑_iξ^iτ_iñ^-1dññ^-1ñe^-∑_iξ^iτ_i n^-1)) = 1/2∑_i=1^mη_iTr(V_i dnn^-1)+ ∑_i=1^mη_idξ^i- 1/2∑_i=1^mη_iTr(Ṽ_i dññ^-1).
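As a numerical sanity check of the middle term ∑_iη_idξ^i, one can pull the potential Θ=X^IJ(dhh^-1)_IJ back along a curve where only ξ^1 varies; the contraction should then return exactly η_1. The following is a minimal sketch of our own for SO(4) (with random SO(4) stand-ins for the section values, and with the paper's Tr implemented as the full index contraction X^IJW_IJ):

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import special_ortho_group

tau1 = np.zeros((4, 4)); tau1[0, 1], tau1[1, 0] = 1.0, -1.0
tau2 = np.zeros((4, 4)); tau2[2, 3], tau2[3, 2] = 1.0, -1.0
n  = special_ortho_group.rvs(4, random_state=5)   # stand-in for n(V)
nt = special_ortho_group.rvs(4, random_state=6)   # stand-in for n~(V~)
eta, xi = (1.3, 0.4), (0.2, 0.9)

def h(x1, x2):
    return n @ expm(x1 * tau1) @ expm(x2 * tau2) @ nt.T

X = 0.5 * n @ (eta[0] * tau1 + eta[1] * tau2) @ n.T

eps = 1e-6
dh = (h(xi[0] + eps, xi[1]) - h(xi[0] - eps, xi[1])) / (2 * eps)   # dh / dxi^1
w = dh @ h(*xi).T                     # (dh h^{-1}) along the xi^1 direction; h^{-1} = h^T
theta = np.sum(X * w)                 # index contraction X^{IJ} (dh h^{-1})_{IJ}
assert abs(theta - eta[0]) < 1e-8     # Theta(d/dxi^1) = eta_1
```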
In the space B, one can extend the potential Θ_Ḃ=Θ_Ḃ^∗ to the limit η_m→0 and define Θ_B≡1/2∑_i=1^mη_iTr(V_i dnn^-1)+ ∑_i=1^mη_idξ^i- 1/2∑_i=1^mη_iTr(Ṽ_i dññ^-1) as the symplectic potential on B. This potential gives the symplectic form Ω_B as Ω_B=-dΘ_B = 1/2∑_i=1^mη_iTr(V_i dnn^-1∧ dnn^-1)-1/2∑_i=1^mη_iTr(Ṽ_i dññ^-1∧ dññ^-1) -∑_i=1^mdη_i∧ (dξ_i+1/2Tr(V_i dnn^-1)-1/2Tr(Ṽ_i dññ^-1)). It is clear that in the η_m=0 region the above (pre-)symplectic structure is degenerate, as expected due to the degeneracy of the parametrization itself in the η_m= 0 region of T_ss^∗ SO(D+1). We are interested in the Poisson algebras between these twisted-geometry variables given by the presymplectic form Ω_B. In order to give the explicit Poisson brackets, in the following section we will study the Hopf sections n(𝕍) and ñ(𝕍̃) from the perspective of their contributions to the Hamiltonian fields on B defined by Ω_B.
§.§ Geometric action on the Hopf section and its decomposition §.§.§ Geometric action on the Hopf section The Hopf map is defined as a special projection map π: SO(D+1)↦ℚ_m with ℚ_m:=SO(D+1)/𝕋^m, such that every element in ℚ_m comes from an orbit generated by the maximal subgroup 𝕋^m of SO(D+1) that fixes all of the elements in the set {τ_1,τ_2,...,τ_m}. In the definition representation of SO(D+1) the Hopf map reads π: SO(D+1) → ℚ_m, g → 𝕍(g)=(gτ_1g^-1, gτ_2g^-1,...). Note that 𝕍(g) is invariant under g↦ g^α_1,α_2,...,α_m=ge^α_1τ_1+α_2τ_2+...+α_mτ_m; thus it is a function of D(D+1)/2-[D+1/2] variables only. This result shows that SO(D+1) can be seen as a bundle (which is referred to as the Hopf bundle) over ℚ_m with the 𝕋^m fibers. On this bundle we can introduce the Hopf sections, each as an inverse map to the above projection: n: ℚ_m → SO(D+1), 𝕍 ↦ n(𝕍), such that π(n(𝕍))=𝕍. This section assigns a specific SO(D+1) element n to each member of ℚ_m, and it is easy to see that any given section n is related to all other sections via n^α_1,α_2,...,α_m≡ ne^α_1τ_1+α_2τ_2+...+α_mτ_m; hence the free angles {α_1,α_2,...,α_m} parametrize the set of all possible Hopf sections. Notice that each algebra element X∈ so(D+1) can be associated to a vector field X̂ on ℚ_m, which acts on a function f(𝕍) of ℚ_m as ℒ_X̂f(𝕍):=d/dtf(e^-tX𝕍e^tX)|_t=0, where g𝕍g^-1:=(gV_1g^-1, gV_2g^-1,...,gV_mg^-1) with g∈ SO(D+1). Similarly, an so(D+1) valued function S=S(𝕍) on ℚ_m can also be associated to a vector field Ŝ on ℚ_m, which acts on the function f(𝕍) of ℚ_m as ℒ_Ŝf(𝕍):=d/dtf(e^-tS𝕍e^tS)|_t=0. Specifically, for the linear functions we have ℒ_X̂𝕍:=(ℒ_X̂V_1,..., ℒ_X̂V_m)=(-[X,V_1],...,-[X,V_m])=:-[X,𝕍]. Especially, we are interested in the action of the vector fields on the Hopf section n. Notice that we have ℒ_X̂V_i(n)=(ℒ_X̂n)τ_i n^-1 +nτ_i(ℒ_X̂n^-1)=[(ℒ_X̂n)n^-1, V_i], ∀ i∈{1,...,m}. Comparing this result with (<ref>), we deduce that (ℒ_X̂n)n^-1=-X+∑_iV_i F^i_X(𝕍), where the F^i_X(𝕍) are functions on ℚ_m, so that V_i F^i_X(𝕍) commutes with the element 𝕍 for all i. Lemma. The solution functions L_i^IJ≡ L^i: ℚ_m↦ so(D+1) of the equations Tr(L^i dnn^-1)=0, L_i^IJV_i',IJ=δ_i,i', appear in the Lie derivative of the Hopf map section n(𝕍) as L^i_X:=L^IJ_i X_IJ=F^i_X, and they satisfy the key coherence identity ℒ_X̂L^i_Y-ℒ_ŶL^i_X=L^i_[X,Y]. Finally, the general solution to this identity satisfying the conditions L_i^IJV_i',IJ=δ_i,i' is given by L'^i_X=L^i_X+ℒ_X̂α^i, where α^i is a function on ℚ_m. Proof.
To prove Eq.(<ref>), let us take the interior product of an arbitrary vector field X̂ with the definition Tr(L^i dnn^-1)=0 and consider (ℒ_X̂n)n^-1=i_X̂(dnn^-1) given by the definition of the Lie derivative; we have 0=i_X̂Tr(L^i dnn^-1)=Tr(L^i(ℒ_X̂n)n^-1) =-Tr(L^i X)+∑_i'=1^mF^i'_XTr(L^i V_i')=-L^i_X+F^i_X, where we used Tr(L^i V_i')=L_i^IJV_i',IJ=δ_i,i' and (<ref>). Thus, we proved F^i_X=L^i_X. To prove Eq.(<ref>), we first consider that ℒ_X̂(dnn^-1) = i_X̂(dnn^-1∧ dnn^-1)+d[(ℒ_X̂n)n^-1] = [-X+∑_iV_i L^i_X,dnn^-1]+d(-X+∑_iV_i L^i_X) = ∑_iV_i dL^i_X-[X,dnn^-1], where we used the definition of the Lie derivative in the first equality, Eq.(<ref>) in the second and dV_i=[dnn^-1,V_i] in the third. Then, the above equation leads to 0=ℒ_X̂Tr(L^i dnn^-1) =Tr((ℒ_X̂L^i-[L^i,X])dnn^-1) +dL^i_X by using the equalities Tr(L^i V_i')=δ_i,i'. Further, let us take the interior product of Eq.(<ref>) with Ŷ, obtaining ℒ_ŶL^i_X = Tr((ℒ_X̂L^i-[L^i,X] )(Y-∑_i'V_i' L^i'_Y)) = ℒ_X̂L^i_Y-L^i_[X,Y]-∑_i'L^i'_Y(Tr((ℒ_X̂L^i)V_i') -Tr(L^i[X,V_i'])) = ℒ_X̂L^i_Y-L^i_[X,Y]-∑_i'L^i'_Yℒ_X̂(Tr(L^i V_i') ), where the last term vanishes; thus we obtain the coherence identity (<ref>). To show Eq.(<ref>), let us suppose that we have another solution L'^i to the coherence identity which also satisfies the condition Tr(L'^i V_i')=L'^IJ_i V_i',IJ=δ_i,i'. Considering the 1-form ϕ^i≡ -Tr(L'^i dnn^-1), one can see that its contraction with X̂, ϕ^i_X≡ i_X̂ϕ^i=-Tr(L'^i (ℒ_X̂n)n^-1)=L'^i_X-L^i_X, is the difference between the two solutions L'^i_X and L^i_X. Thus, ϕ^i_X is also a solution to the coherence identity (<ref>). This result together with the definition of the differential i_X̂i_Ŷdϕ^i=ℒ_Ŷϕ^i_X -ℒ_X̂ϕ^i_Y+ϕ^i_[X,Y] implies that dϕ^i=0, which means that there exists a function α^i, locally at least, such that ϕ^i=dα^i and thus L'^i_X=L^i_X+ℒ_X̂α^i. This proves Eq. (<ref>). □ Finally, let us recall that the freedom in choosing the Hopf section lies in the function parameters α^i(𝕍) in the expression n'(𝕍)≡ n(𝕍)e^∑_iα^i(𝕍)τ_i for all possible choices of the sections. By applying Eq.(<ref>) to this n', we immediately get L'^i_X= L^i_X+ i_X̂dα^i. Referring to (<ref>), we can conclude that the function L^i is exactly the coefficient function for the component of (dn)n^-1 in the V_i direction, which is determined by a choice of the Hopf section n.
§.§.§ Decomposition and sequence of the Hopf section As we will see in the following part of this article, the Hopf section n and the geometric action on it are closely related to the symplectic structure and the symplectic reduction on B. To analyze the Hopf section on ℚ_m more explicitly, let us consider the decomposition of the Hopf section n. Recalling the definition ℚ_m:=SO(D+1)/𝕋^m, one can decompose ℚ_m as ℚ_m=𝔻_1×𝔻_2×...×𝔻_m with 𝔻_1:=SO(D+1)/(SO(2)_τ_1× SO(D-1)_[τ_1]), 𝔻_2:=SO(D-1)_[τ_1]/(SO(2)_τ_2× SO(D-3)_[τ_2]), ..., 𝔻_m:=SO(D+3-2m)_[τ_(m-1)]/SO(2)_τ_m, where SO(2)_τ_i is the group generated by τ_i and SO(D+1-2i)_[τ_i] is the maximal subgroup of SO(D+1) which preserves (τ_1,...,τ_i) and has the Cartan subalgebra spanned by (τ_(i+1),...,τ_m). Here one should notice that both SO(2)_τ_i and SO(D+1-2i)_[τ_i] preserve (τ_1,...,τ_i). Then, the Hopf section n can be decomposed as n=n_1n_2...n_m. This decomposition gives a sequence of Hopf sections, which reads n_1, n_1n_2, n_1n_2n_3, ..., n_1...n_m. A specific one, n_1...n_i with i∈{1,...,m}, gives n_1...n_i: 𝔻_1×...×𝔻_i→ SO(D+1), (V_1,...,V_i)↦ n_1(V_1)n_2(V_1,V_2)...n_i(V_1,...,V_i), where V_1=n_1n_2...n_iτ_1 n_i^-1...n_2^-1n_1^-1=n_1τ_1 n_1^-1, V_2=n_1n_2...n_iτ_2 n_i^-1...n_2^-1n_1^-1=n_1n_2τ_2n_2^-1n_1^-1, ..., V_i=n_1n_2...n_iτ_i n_i^-1...n_2^-1n_1^-1.
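The fibration structure underlying this sequence is easy to probe numerically. The sketch below (our own illustration for SO(4), m=2) verifies that the Hopf projection 𝕍(g)=(gτ_1g^-1, gτ_2g^-1) is insensitive to the torus factor e^α_1τ_1+α_2τ_2, which is precisely the freedom fixed by a choice of section:

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import special_ortho_group

tau1 = np.zeros((4, 4)); tau1[0, 1], tau1[1, 0] = 1.0, -1.0
tau2 = np.zeros((4, 4)); tau2[2, 3], tau2[3, 2] = 1.0, -1.0

def V(g):
    """Hopf projection pi(g) = (g tau_1 g^{-1}, g tau_2 g^{-1})."""
    return (g @ tau1 @ g.T, g @ tau2 @ g.T)

g = special_ortho_group.rvs(4, random_state=7)
g_shifted = g @ expm(0.8 * tau1 + 1.9 * tau2)     # move along the T^2 fiber

for a, b in zip(V(g), V(g_shifted)):
    assert np.allclose(a, b)                      # pi(g) depends only on the coset in Q_m
```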
One should notice that the decomposition n=n_1...n_m is not unique. For instance, one can carry out the transformation n_i→ n_i g, n_i+1→ g^-1n_i+1 with g∈ SO(D+1) being an arbitrary element which preserves (τ_1,...,τ_i), and it is easy to verify that the transformation (<ref>) preserves the Hopf section n but changes n_i and n_i+1 in the decomposition n=n_1...n_m. We can also establish the geometric actions on the Hopf section n_1. Specifically, one can give (ℒ_X̂n_1)n_1^-1=-X+V_1L̅^1_X (V_1)+∑_μV̅^μ_1 L̅^μ_X(V_1) based on Eqs.(<ref>), (<ref>) and V_1=n_1τ_1n_1^-1, where V̅^μ_1=n_1τ̅^μ n_1^-1 with {τ̅^μ} being a basis of so(D-1)_τ_1, and L̅^1_X (V_1)=L̅^1_IJ(V_1)X^IJ and L̅^μ_X(V_1)=L̅^μ_IJ(V_1) X^IJ are functions of V_1∈𝔻_1 <cit.>. It has been shown that L̅^1_IJ(V_1) is the solution of the equations <cit.> Tr(L̅^1 dn_1 n_1^-1)=0, Tr(L̅^1V_1)=1, and Tr(L̅^1 V̅^μ_1)=0, ∀μ. By comparing Eq.(<ref>) and Eq.(<ref>), it is easy to see that L^1=L̅^1 is a solution for L^1 in Eq.(<ref>). This result will be a key ingredient in the discussions in the next section. Now, by applying the results of this section to the presymplectic form Ω_B, we will identify the Hamiltonian fields on B and compute the Poisson brackets.
§.§ Computation of Hamiltonian vector fields in pre-symplectic manifold B Let us recall the pre-symplectic potential Θ_B≡1/2∑_i=1^mη_iTr(V_i dnn^-1)+ ∑_i=1^mη_idξ^i- 1/2∑_i=1^mη_iTr(Ṽ_i dññ^-1) induced from the SO(D+1) holonomy-flux phase space, which defines the pre-symplectic form Ω_B as Ω_B=-dΘ_B = 1/2∑_i=1^mη_iTr(V_i dnn^-1∧ dnn^-1)-1/2∑_i=1^mη_iTr(Ṽ_i dññ^-1∧ dññ^-1) -∑_i=1^mdη_i∧(dξ_i+1/2Tr(V_i dnn^-1)-1/2Tr(Ṽ_i dññ^-1)). The associated Poisson brackets can be calculated by considering the Hamiltonian vector fields on B. Let us denote the Hamiltonian vector field for the function f as ψ_f, where f∈{η_i, ξ_i, p_X≡1/2∑_iη_i V^i_X=1/2∑_iη_i V^i_IJX^IJ, p̃_X≡1/2∑_iη_iṼ^i_X=1/2∑_iη_iṼ^i_IJX^IJ}. Then, using i_ψ_fΩ_B=-df, the vector fields can be checked to be given by ψ_p_X = X̂-∑_iL^i_X(𝕍)∂_ξ_i, ψ_p̃_X = - X̂̃̂-∑_iL^i_X(𝕍̃)∂_ξ_i, ψ_η_i= -∂_ξ_i. Here X̂ are the vector fields generating the adjoint action on the ℚ_m labelled by 𝕍, associated to the algebra elements X. Similarly, X̂̃̂ are the vector fields generating the adjoint action on the ℚ_m labelled by 𝕍̃, associated to the algebra elements X. Proof. The first equation of (<ref>) can be checked by considering i_X̂Ω_B=-1/2∑_iTr(d(η_i V_i)X)+∑_idη_i L^i_X(𝕍). Noticing that i_∂_ξ_iΩ_B=dη_i, the first equation of (<ref>) follows immediately. The computation for ψ_p̃_X can be carried out similarly, with an opposite sign due to the reversal of the orientation. □
§.§ Reduction of the pre-symplectic manifold B Recall that in the η_m=0 region Ω_B is degenerate, as expected due to the degeneracy of the parametrization (<ref>) in the η_m= 0 region. Let us now address this degeneracy to get a true symplectic manifold. We can reduce the pre-symplectic manifold B with respect to the vector fields Ê in the kernel of Ω_B, i.e. consider the quotient manifold B̅≡ B/Ker(Ω_B). The result is a symplectic manifold with the non-degenerate 2-form given by the quotient projection of Ω_B. In obtaining the space B̅, we can introduce the equivalence classes under the equivalence relation p∼ p' whenever p'=e^Êp, with Ê∈Ker(Ω_B) and p, p'∈ B. The operation is thus determined by the vector fields in the kernel of Ω_B. Since it is obvious that the vector fields Ê∈Ker(Ω_B) appear in the region with η_m=0, we look for the vector fields preserving this region while having interior products with Ω_B proportional to the η_i.
Let us first consider the vector fields Ê_X≡ψ_p_X-ψ_p̃_Y, where X∈ so(D+1) and Y=-h^-1Xh, with h being a group element rotating V^i to Ṽ^i=-h^-1V^i h. Indeed, using the fact that V^i_X=Ṽ^i_Y, the interior product of the field Ê_X with the symplectic 2-form is i_Ê_XΩ_B=-1/2∑_id(η_i V^i_X-η_iṼ^i_Y)-1/2∑_iη_iTr(Ṽ^i dY) =-1/2∑_iη_iTr([V^i,X]dnn^-1). Now, let us analyze the degeneracy of i_Ê_XΩ_B. Denote by K^i the subspace of B defined by η_i=η_i+1=...=η_m=0. Consider the so(D+1) valued functions F(V_1,...,V_i-1) on K^i which satisfy n_i-1^-1...n_2^-1n_1^-1F(V_1,...,V_i-1)n_1n_2...n_i-1∈ so(D+3-2i)_[τ_i-1], where n_1n_2...n_i-1, determined by (V_1,...,V_i-1), is from the sequence of the Hopf sections (<ref>), and SO(D+1-2i)_[τ_i] is the maximal subgroup of SO(D+1) which preserves (τ_1,...,τ_i) and has the Cartan subalgebra spanned by (τ_(i+1),...,τ_m). Then, we can define the vector fields Ê^i_F by Ê^i_F:=Ê_X|_X=F(V_1,...,V_i-1), and one can verify that i_Ê^i_FΩ_B=0 on K^i by using Eq.(<ref>). Thus, noticing the relation K^1⊂ K^2⊂...⊂ K^m, we have Ker(Ω_B)≡{Ê^i_F| i∈{1,...,m}} on K^m. Next, to find the equivalence class generated by the vector fields Ê^i_F on K^i, we note that the actions of these fields jointly rotate the vectors (V_i,..,V_m) and (Ṽ_i,...,Ṽ_m); that is, we have Ê^i_F (V_i')=-[F(V_1,...,V_i-1),V_i'], Ê^i_F(Ṽ_i')=-h^-1[F(V_1,...,V_i-1),V_i']h. Further, the actions preserve the group element h, since Ê_X(h)=-Xh-hY=0, which ensures that Ê^i_F(h)=0. Therefore, given p and p' on K^i, we have p'∼ p if and only if the two are related by a joint rotation in (V_i,..,V_m) and (Ṽ_i,...,Ṽ_m) and an h-preserving translation in (ξ_1,...,ξ_m). It is easy to see that the parametrization (<ref>) maps p and p'∼ p to the same image in T^∗_ssSO(D+1); as expected, the equivalence classes generated by the vector fields Ê^i_F on K^i also describe the degeneracy of the parametrization (<ref>). After the quotient with respect to Ê^i_F on each K^i, we are left with a manifold K̅^i parametrized by only (η_1,...,η_i-1), (V_1,...,V_m), (Ṽ_1,...,Ṽ_i-1) and (ξ_1,...,ξ_m). Recalling that B≡ B|_η_m>0∪ K^m and K^1⊂ K^2⊂...⊂ K^m, let us define K̇^m:=K^m/Ker(Ω_B) and then the quotient space B̅≡ B|_η_m>0∪K̇^m. Finally, we conclude that the parametrization (<ref>) gives a one-to-one map between B̅ and its image T^∗_ssSO(D+1), and it can be extended as a symplectomorphism with B̅ being equipped with the symplectic structure Ω_B.
§.§ Poisson algebra among the twisted geometry parameters Based on the Hamiltonian vector fields given by the pre-symplectic potential Θ_B, the Poisson brackets between the twisted geometry parameters can be given by {ξ_i,η_i'}=δ_i,i', {p_X, p_Y}=p_[X,Y], {p̃_X, p̃_Y}=p̃_[X,Y], {V^i,η_i'}= {Ṽ^i,η_i'}=0, and {V^i,Ṽ^i'}=0. Moreover, one can show that the Poisson brackets given by Θ_B between ξ_i and p_X, or between ξ_i and p̃_X, are non-trivial; they are given by the function L^i: ℚ_m→ so(D+1) in the form {ξ_i,p_X}= L^i_X(𝕍), {ξ_i,p̃_X}= L^i_X(𝕍̃), where L^i_X≡Tr(L^i X) is the component of L^i along the algebra element X. Especially, Eqs. (<ref>), taken as the definition equations of the functions L^i, together with the Poisson brackets (<ref>), already determine L^i to be exactly the results of the brackets {ξ_i,p_X} and {ξ_i,p̃_X} given by the potential Θ_B corresponding to our choice of the Hopf sections. This result follows from the fact that the function L^i defined by Eqs.(<ref>) is constrained by two conditions given by the above Poisson brackets (<ref>), and these two conditions are exactly the definition of L^i in the Lemma in section <ref>.
Let us then illustrate the details of this fact as follows. The first of the two conditions comes from the equation p_IJL_i^IJ=p_IJ{ξ_i,p^IJ}=1/2{ξ_i,p^IJp_IJ}= 1/4{ξ_i,∑_i'η^2_i'} =1/2η_i, with p_IJ:=1/2∑_i'(η_i' V^i'_IJ), which gives the normalization condition L_i^IJV^i'_IJ=δ_i^i' in the Lemma in section <ref>. The second of the two conditions comes from the Jacobi identity {ξ_i,{p_X,p_Y}}+{p_X,{p_Y,ξ_i}}+{p_Y,{ξ_i,p_X}}=0, from which we get L^i_[X,Y]-{p_X,L_Y^i}+{p_Y,L_X^i}=0. By using {p_X,L_Y^i}=i_ψ_p_XdL_Y^i=ℒ_X̂L_Y^i, one can write the identity (<ref>) as an identity involving Lie derivatives, obtaining ℒ_X̂L^i_Y-ℒ_ŶL^i_X=L^i_[X,Y], which is just the coherence identity in the Lemma in section <ref>. Now, it is easy to see that these two conditions make the Lemma in section <ref> applicable, and we can verify the result given at the beginning of this paragraph.
§ RELATION WITH THE TWISTED GEOMETRY PARAMETRIZATIONS ON EDGE SIMPLICITY CONSTRAINT SURFACE The twisted geometry parametrization introduced in this article is constructed on the space ×_e∈γT^∗_ssSO(D+1)_e, while we have also introduced the twisted geometry parametrization of the edge simplicity constraint surface ×_e∈γT^∗_esSO(D+1)_e in our companion paper <cit.>. Thus, it is worth discussing the relation between these two types of parametrizations. We again focus on the twisted geometry parametrizations of the space T^∗_ssSO(D+1) on a single edge without loss of generality. Then, by setting η_2=...=η_m=0 in Eq.(<ref>), we get X=1/2η_1nτ_1n^-1, which parametrizes all of the simple fluxes satisfying X^[IJX^KL]=0 in so(D+1). Besides, recalling the decomposition n=n_1...n_m of the Hopf section n, we get X = 1/2η_1n_1τ_1n_1^-1, h = n_1e^ξ^1τ_1n̅ñ_1^-1 with n̅=n_2...n_me^ξ^2τ_2...e^ξ^mτ_m(ñ_2...ñ_m)^-1. Recalling the edge simplicity constraint surface T_es^∗ SO(D+1) defined by T_es^∗ SO(D+1)={(h,X)∈ T^∗ SO(D+1)|X^[IJX^KL]=0}, it is easy to see that T_es^∗ SO(D+1)⊂ T_ss^∗ SO(D+1) is parametrized by (η_1,ξ_1, V_1, Ṽ_1, n̅) based on Eq.(<ref>), where V_1=n_1τ_1n_1^-1 and Ṽ_1=ñ_1τ_1ñ_1^-1, with the Hopf sections n_1 and ñ_1 being given by the decompositions n=n_1...n_m and ñ=ñ_1...ñ_m respectively. Thus, by restricting the consideration to the edge simplicity constraint surface, the parametrization (<ref>) reproduces the twisted geometry parametrization introduced in our companion paper <cit.>. We can further consider the symplectic reduction with respect to the edge simplicity constraint, which can be expressed as 𝒮_IJKL≡ p_[IJp_KL]=0 with p_IJ:=1/2∑_iη_i V^i_IJ in the twisted geometry parameters. Notice that the Hamiltonian vector field of the edge simplicity constraint is spanned by ψ^𝒮_IJKL=2p_[IJ(X̂_KL]-∑_iL^i_KL]∂ _ξ_i), where X̂_KL is the vector field generating the adjoint action of X_KL on the ℚ_m labelled by 𝕍, with X_KL being the so(D+1) algebra element given by X_KL≡ X^IJ_KL=δ^I_[Kδ^J_L]. It is easy to verify that the vector field (<ref>) only induces a transformation of the holonomy on the edge simplicity constraint surface, which reads ℒ_α^IJKLψ^𝒮_IJKLh= 1/2η_1 α^IJKLV^1_[IJτ_KL]h= 1/2η_1 α̅^KLn_1(τ̅_KLn̅)e^ξ^1τ_1ñ_1^-1, where α^IJKL is an arbitrary tensor satisfying α^IJKL=α^[IJKL] and α̅^KLτ̅_KL≡α^IJKLV^1_[IJ(n^-1_1τ_KL]n_1)∈ so(D-1)_τ_1. Thus, the component n̅ is just the gauge component with respect to the edge simplicity constraint.
By reducing the edge simplicity constraint surface with respect to the gauge orbit generated by ψ^𝒮_IJKL, we get the simplicity-reduced phase space B_es given by B_es≡ℝ_+× S^1×𝔻_1×𝔻̃_1 ≡{(η_1,ξ_1,V_1, Ṽ_1)}, where η_1∈ [0,+∞), ξ_1∈[-π,π), V_1∈𝔻_1, Ṽ_1∈𝔻̃_1, with 𝔻_1 and 𝔻̃_1 defined by Eq.(<ref>). Correspondingly, the reduced symplectic structure on B_es gives the Poisson brackets {p̅_X, p̅_Y}= p̅_[X,Y], {p̃̅̃_X, p̃̅̃_Y}=p̃̅̃_[X,Y], {ξ_1,η_1}=1, where p̅_X≡1/2η_1V^1_X=1/2η_1 V^1_IJX^IJ and p̃̅̃_X≡1/2η_1Ṽ^1_X=1/2η_1Ṽ^1_IJX^IJ. Specifically, the Poisson brackets between ξ_1 and (p̅_X, p̃̅̃_X) are given by {ξ_1, p̅_X}=L^1_X(𝕍), {ξ_1, p̃̅̃_X}=L^1_X(𝕍̃). Notice that these Poisson brackets are not independent of (V_2,...,V_m) and (Ṽ_2,...,Ṽ_m), since ξ_1 contains the information of the choices of the Hopf sections n and ñ, which depend on 𝕍 and 𝕍̃. Recalling the result of section <ref>, by using the decompositions n=n_1...n_m and ñ=ñ_1...ñ_m, one can choose the Hopf sections n and ñ to ensure that L^1(𝕍)=L̅^1(V_1) and L^1(𝕍̃)=L̅^1(Ṽ_1). Then, the symplectic structure on the reduced phase space B_es is given by Eqs.(<ref>), (<ref>) and (<ref>), which is identical to that given in our companion paper <cit.>. Further, the gauge reduction with respect to the Gaussian constraint and the treatment of the vertex simplicity constraint can be carried out following the same procedures as in <cit.>. § CONCLUSION AND OUTLOOK The realization of gauge fixing in quantum gauge reduction and the fermion coupling in all-dimensional LQG require us to construct coherent states in the full Hilbert space, which involves the non-simple representations of SO(D+1). Following previous experience, it is reasonable to consider the generalized twisted geometry coherent state, and thus it is necessary to establish the twisted geometry parametrization of the full SO(D+1) holonomy-flux phase space. We established the generalized twisted geometry parametrization for a dense subspace of the full SO(D+1) holonomy-flux phase space. In particular, the twisted geometry parameters are adapted to the splitting of the Ashtekar connection to capture the degrees of freedom of the intrinsic and extrinsic parts of the spatial geometry, respectively. Moreover, the symplectic structure on the SO(D+1) holonomy-flux phase space is re-expressed in terms of the twisted geometry parameters. Through studying the properties of the Hopf sections in the SO(D+1) Hopf fibre bundle, we obtained the Poisson algebra among the twisted geometry parameters. In particular, the relation between the twisted geometry parametrizations for the edge simplicity constraint surface and for the dense subspace ×_e∈γT^∗_ss SO(D+1)_e was discussed. We pointed out that the twisted geometry parametrization for ×_e∈γT^∗_ss SO(D+1)_e is equivalent to that for the edge simplicity constraint surface upon carrying out the gauge reduction with respect to the edge simplicity constraint, which ensures that the treatment of the anomalous vertex simplicity constraint proposed in our companion paper <cit.> is still valid for the more general case considered in this article. The twisted geometry parametrization for the dense subspace ×_e∈γT^∗_ss SO(D+1)_e provides us with the tools necessary to construct the twisted geometry coherent state in the full Hilbert space of all-dimensional LQG. 
More explicitly, similarly to the construction of the twisted geometry coherent state in the solution space of the edge simplicity constraint, one could decompose the heat-kernel coherent state of SO(D+1) based on the twisted geometry parametrization for ×_e∈γT^∗_ss SO(D+1)_e, and then select the terms dominated by the highest and lowest weights in each representation of SO(D+1), to form the twisted geometry coherent state in the full Hilbert space of all-dimensional LQG. This will be the subject of a follow-up work <cit.>. It should be remarked that the twisted geometry parametrization of the SO(D+1) holonomy-flux phase space is also valid for general SO(D+1) Yang-Mills gauge theory. Though the “geometry” may be meaningless outside the framework of gravity, the twisted geometry parameters provide a new perspective for analyzing the Poisson structure of the SO(D+1) holonomy-flux phase space, which could help us understand the quantum aspects of the corresponding SO(D+1) Yang-Mills gauge theory. § ACKNOWLEDGMENTS This work is supported by the National Natural Science Foundation of China (NSFC) with Grants No. 12047519, No. 11775082, No. 11875006 and No. 11961131013. 
http://arxiv.org/abs/2307.04367v1
20230710064801
Explanation Needs in App Reviews: Taxonomy and Automated Detection
[ "Max Unterbusch", "Mersedeh Sadeghi", "Jannik Fischbach", "Martin Obaidi", "Andreas Vogelsang" ]
cs.SE
[ "cs.SE" ]
Explanation Needs in App Reviews: Taxonomy and Automated Detection Max Unterbusch University of Cologne [email protected] Mersedeh Sadeghi University of Cologne [email protected] Jannik Fischbach Netlight Consulting GmbH | fortiss GmbH [email protected] Martin Obaidi Leibniz University Hannover, Software Engineering Group [email protected] Andreas Vogelsang University of Cologne [email protected] August 12, 2023 ============================================================================================================================================================================================================================================================================================================================================================================================================================ Explainability, i.e. the ability of a system to explain its behavior to users, has become an important quality of software-intensive systems. Recent work has focused on methods for generating explanations for various algorithmic paradigms (e.g., machine learning, self-adaptive systems). There is relatively little work on what situations and types of behavior should be explained. There is also a lack of support for eliciting explainability requirements. In this work, we explore the need for explanation expressed by users in app reviews. We manually coded a set of 1,730 app reviews from 8 apps and derived a taxonomy of Explanation Needs. We also explore several approaches to automatically identify Explanation Needs in app reviews. Our best classifier identifies Explanation Needs in 486 unseen reviews of 4 different apps with a weighted F-score of 86%. Our work contributes to a better understanding of users' Explanation Needs. Automated tools can help engineers focus on these needs and ultimately elicit valid Explanation Needs. Explainability, Requirements, NLP § INTRODUCTION Software systems are becoming more intelligent and ubiquitous than ever before, increasing the criticality of their impact on humans. Driven by modern artificial intelligence, it is becoming increasingly difficult not only for an external user but also for the developers of these systems to understand their inner workings and thus their decisions and actions. The ability to provide explanations—a natural ability of humans—is therefore considered an important capability of software systems. As such, explainability is now accepted as a critical quality attribute <cit.> and represents an emerging topic in the field of RE <cit.>. Researchers have explored the foundations of explainability from different angles. There are several approaches to generating explanations for different algorithmic paradigms. However, there has been relatively little focus in the literature on what users actually need explanations for <cit.>. This lack of knowledge limits our ability to effectively elicit explainability requirements and apply existing explanation generation methods. Thus, the first problem we address in this paper is as follows: We lack knowledge about what users need explanations for. App reviews have been overlooked as a potential source of Explanation Needs. Pagano and Maalej <cit.> found that app reviews contain valuable RE-related information because they represent rich and readily available textual data that provides insights into thousands of user experiences. Unlike interview or survey data, app reviews are collected “in the field” under natural circumstances. 
Users are motivated enough to publish their opinions about an app; they are not forced or paid to do so. This underlines the importance that users place on their concerns. In addition, users are not asked about any specific aspect. The review messages are open to any feedback the users want to give to the app vendors or developers. We set out to understand users' need for explanation, which we refer to as Explanation Need. Our focus is to characterize the occurrence of Explanation Needs in app reviews and to investigate the types of Explanation Needs that users express. We conducted a qualitative analysis of 1,730 English app reviews of 8 different apps. As a result, we propose a taxonomy of Explanation Needs in app reviews to help developers and researchers distinguish between different types. One of the key benefits of the taxonomy is that it enables researchers and engineers to extract explainability requirements in a systematic and rigorous manner. By categorizing users' Explanation Needs from their perspective into distinct categories, the taxonomy highlights areas where a system may lack transparency or fail to meet users' expectations. This, in turn, provides valuable insight into the types of explanations that are most needed. Our qualitative analysis shows that Explanation Needs in app reviews are valuable and contain rich information, but are relatively sparse. Explanation Needs appeared in only about 5% of the app reviews studied. However, manually analyzing app reviews can be challenging due to the sheer volume of reviews and the varying levels of detail and insight they provide. Tool support to filter the reviews for relevant content would be valuable to allow development teams and stakeholders to efficiently exploit this source of information <cit.>. We identify this as the second problem addressed in this paper: We lack tool support to automatically identify Explanation Needs in app reviews. To support the use of app reviews, we investigated several classifiers (rule-based, traditional ML, and transformer approaches) to automatically detect Explanation Needs in app reviews. We evaluated and compared the classifiers in a 10-fold cross-validation on an extended set of 5,078 manually labeled app reviews. In addition, we evaluated our baseline rule-based approach and our best-performing classifier on an additional set of 486 unseen and unmodified reviews of 4 new apps to test how well the approaches generalize and perform in a realistic setting. Our best-performing classifier—a fine-tuned BERT model—achieved a weighted F-score of 93% in a 10-fold cross-validation and a weighted F-score of 86% when evaluated on unseen data. We make the following contributions: * We provide a taxonomy of Explanation Needs derived from a large set of app reviews. * We provide a performance analysis of several classifier approaches to detect Explanation Needs automatically in app reviews. * We publish a set of 5,564 app reviews that we manually labeled according to our proposed taxonomy. * To strengthen transparency and facilitate replication, we make our code, dataset, and trained models publicly available.[10.5281/zenodo.7740411 .] § TERMINOLOGY AND RELATED WORK §.§ Explainability and User Needs in Explanations Explainability has gained significant attention from various research fields, including Human-Computer Interaction, Cyber-Physical Systems, and Psychology <cit.>. 
Since 2019, when it was proposed as a non-functional requirement <cit.>, it has become a trending topic within the SE and RE communities <cit.>. Research has shown that explainability can enhance trustworthiness, transparency, accountability, fairness, ethics, and other quality aspects by overcoming the black box nature of software systems <cit.>. Chazette et al. developed a concise definition of explainability that meets the requirements of SE and RE communities <cit.>: A system S is explainable with respect to an aspect X of S relative to an addressee A in context C if and only if there is an entity E (the explainer) who, by giving a corpus of information I (the explanation of X), enables A to understand X of S in C. The explainer entity does not have to be the system itself. Achieving explainability depends on specific variables: the system's aspect, the addressee, and the context. Accordingly, Kohl <cit.> and Chazette <cit.> emphasize the significance of identifying users' specific needs for explanations and providing customized explanations correspondingly. Indeed, in cases where users do not require explanations, ensuring explainability may not be necessary <cit.>. Studying app reviews for Explanation Need identification is a relatively under-researched area. Consequently, a taxonomy of Explanation Needs can aid in advancing knowledge and eliciting requirements for developing explainable systems. Constructing taxonomies provides numerous benefits, including supporting the communication of complex concepts, revealing relationships between entities, and uncovering knowledge gaps. In a similar approach for a different domain, Sadeghi et al. <cit.> developed a taxonomy of reasons for Explanation Needs. They primarily distinguish between four categories of situations requiring explanations: Training, Interaction, Debugging, and Validation, yet the authors focused on Interaction. For Interaction, the taxonomy further breaks down hierarchically into disobedience, failure, and context-aware behavior. That work considered the system, the user, and the environment in its taxonomy; in contrast, our focus will be on the user only. §.§ App Store Mining and Classifying App Reviews Pagano et al. <cit.> conducted a comprehensive analysis of app stores to determine their usefulness for requirements engineering. They collected over a million app reviews and found that feedback messages can facilitate communication between users and developers. However, they discovered that a significant amount of the feedback collected was of poor quality and lacked informative value. They argue that although app stores can facilitate user-centered RE through the use of user feedback, it is essential to employ appropriate tools and techniques to filter and pre-process relevant contributions. In response to the need for tool support in app store mining, the RE community developed various solutions to extract valuable insights from app store reviews. Guzman and Maalej <cit.> proposed a method to filter features mentioned by users and extract corresponding sentiments, allowing for a detailed analysis of user experience with individual app features. Chen et al. presented a tool that filters app reviews, groups and ranks them, and provides visualizations of the insights <cit.>. 
Particularly relevant to this paper are contributions that classify app reviews according to predefined labels, such as problem reports, inquiries, and user experience, or non-functional requirements such as reliability, usability, and portability. To achieve this classification, researchers typically use traditional ML and DL methods for classifying app reviews into various categories <cit.>. Active Learning strategies have also been experimented with, which can help reduce human labor and improve classification accuracy in certain scenarios <cit.>. Recently, BERT achieved state-of-the-art performance in classifying English app reviews into feature requests, problem reports, and irrelevant ones <cit.>. In this paper, we compare a simple rule-based approach as a baseline, different ML-based approaches, and a DL-based approach using the BERT-Base model <cit.> for detecting Explanation Needs in reviews automatically. § CHARACTERIZATION OF EXPLANATION NEEDS We define an Explanation Need as a knowledge gap that a user intends to close and present our findings on such needs in app reviews in this section. To consider a review as an Explanation Need, the user must explicitly raise a question or express a need for an explanation. Rhetorical questions ([sic] “What the hell?”) do not qualify as Explanation Needs as they are not intended to elicit an answer. Direct requests ([sic] “Please could you please check it?”) are also excluded since they do not indicate a specific gap in knowledge. It is important to note that we distinguish between Explanation Needs and Explainability Needs: an Explainability Need is a non-functional requirement identified for a software system, whereas Explanation Needs are needs perceived by users. Following the formatting of Chazette et al.'s definition of explainability <cit.>, we formally define Explanation Needs as: An addressee A has incomplete knowledge about an aspect X of system S in context C and requests a corpus of information I provided by an entity E that allows A to understand X of S in C. §.§ Study Design In the endeavor to identify users' Explanation Needs, this research aims to explore the potential of app reviews as a source of information. By analyzing the rich textual data of reviews, we seek to uncover the types of explanations that users are looking for. To guide our investigation, we formulated the following research questions (RQ): RQ1: What types of Explanation Needs have been expressed in app reviews? RQ2: How prevalent are Explanation Needs and their types in app reviews? Answering RQ1 is crucial for identifying common issues faced by users and prioritizing areas for improvement in app development. It aims to identify and understand users' Explanation Needs in app reviews, guiding the development of more transparent and user-friendly software systems. To answer our research question, we undertake a qualitative analysis to develop a taxonomy for Explanation Needs in app reviews. Providing concept classifications and taxonomies is generally valuable since it establishes a standardized framework and facilitates a common ground for communication and research in emerging fields of knowledge <cit.>. As depicted in Figure <ref>, the qualitative analysis toward addressing RQ1 involved three phases: (1) Dataset Selection, (2) Analysis and Preliminary Taxonomy Extraction, and (3) Verification and Taxonomy Finalization. Phase 1. In the first phase, we selected the datasets for our analysis. The original dataset used in our study was assembled by Brunotte <cit.>. 
Although a more recent version of the dataset exists with a larger number of reviews, we focused our analysis on a subset of 1,730 reviews provided to us directly by the authors. This allowed us to keep our analysis targeted and manageable. In the remainder of this paper, we refer to this dataset as . It comprises app reviews from eight distinct apps available on the Apple App Store and Google Play. The domains represented in it span several categories, including health and wellness, finance, technology, and lifestyle, making it well-suited for exploring the nature of user feedback and Explanation Needs in mobile app reviews. Table <ref> provides an overview of this dataset. Phase 2. Using the dataset as our basis, we extracted the preliminary taxonomy of Explanation Needs. A single coder initially analyzed all 1,730 app reviews based on the definition of Explanation Needs outlined in <ref>. The coder then filtered out 1,600 reviews that did not express any Explanation Need, and the remaining 130 cases were labeled as Explanation Need on a tentative basis. While there was a possibility that some of these cases could be excluded by the other coders in subsequent phases, these 130 cases still provided a foundation for further analysis in terms of categorization and taxonomy extraction. Following the template by Saldaña <cit.>, the coder also developed a codebook to maintain, organize, and share the codes with the other authors. The initial coding resulted in an early version of the taxonomy, which was subject to further refinement through extensive discussions and revisions by the authors involved in the study. Hence, as this phase's output, a preliminary taxonomy was generated, which classified different types of Explanation Needs and established boundaries between them. Nevertheless, at this point, the codebook still had rather generic and fuzzy definitions of the categories and loose criteria for differentiating them. Therefore, we proceeded to the next phase to further verify the applicability of the taxonomy and codebook. Phase 3. In the final phase, we aimed to verify and refine the preliminary taxonomy by involving two other coders. We sampled 130 app reviews tentatively identified as Explanation Needs by the first coder, plus a random selection of 70 reviews that were not labeled as such. The resulting dataset was shuffled and divided equally between the coders, with each responsible for categorizing their respective half as Explanation Need or not. For the reviews categorized as Explanation Need, the coders then had to check if they could be classified under one of the leaf nodes of the preliminary taxonomy. The goal was to ensure the preliminary taxonomy and codebook's completeness and accuracy and identify any deficiencies. The coders then engaged in several rounds of discussions and classification. During the first iteration, the coders compared the labels assigned by the initial coder with the new labels the additional coders gave. Of the 130 cases identified by the initial coder as Explanation Need, 48 cases were excluded by one of the new coders. This left us with 82 app reviews that the new coders also tentatively labeled as Explanation Need, with each case being assigned a specific type of explanation. During the second iteration, all the coders went through these 82 reviews to further discuss and evaluate each case. 
Moreover, at this point, the coders attempted to prune and/or extend the taxonomy categorization to produce the final taxonomy and to consolidate the category descriptions and boundaries recorded in the codebook. Throughout the last iteration, 5 additional app reviews that did not meet the requirements and specifications of the final taxonomy were excluded, resulting in a total of 77 cases labeled as Explanation Need. §.§ Results: A Taxonomy of Explanation Needs As shown in Figure <ref>, the taxonomy has a hierarchical structure and consists of two levels. We refer to the lowest-level elements, namely , , , , and , as categories of Explanation Needs. To make the categories more tangible, we included a non-exhaustive list of aspects for each category. These aspects are more concrete groupings of related and typical Explanation Needs that we could observe in the data. However, they are not part of the taxonomy in a narrow sense. Given the definition of Explanation Needs in <ref>, a key distinction we make in the first level of our taxonomy is whether such a need for an explanation is the issue's primary or secondary concern. More precisely, if the user perceives their lack of knowledge as the only issue, then the Explanation Need becomes a Primary Concern, whereas if they see other substantial problems aside from their knowledge gap, it becomes a Secondary Concern. In the latter case, an underlying problem exists, typically a deficiency, which supersedes the Explanation Need as the primary concern. Therefore, offering an explanation may increase the overall understanding of the situation, but an explanation alone cannot solve the underlying problem. As depicted in Figure <ref>, the Explanation Needs belonging to the , , and categories represent a primary concern. In general, covers cases where users are unfamiliar with the system or particular features, either because they are new to it or because the system's features have been changed. We found the following aspects to characterize best: * Instruction. Users seek instructions for achieving specific goals, such as how to use a system, feature, or settings option. This aspect requires that users have a clear goal in mind. The Instruction aspect excludes reviews where there is an identifiable deficiency, such as an error or failure (see aspect Fix). Example [sic]: “How do you edit from this app???”. * Features Offered. Users seek information about specific or general system features or functionality. In these cases, users are unaware of what exactly the system can do. Example [sic]: “... is there anyway to sort this out ...?”. * Effect-Of. Users want to obtain information on the potential outcomes of specific actions. The users know how to perform such an action but are not sure what the impact will be. Example [sic]: “If I invest in dividend paying stocks, will the dividends be added to my portfolio?”. The next category is , which includes aspects that arise in the ordinary operation by a user familiar with the system. These aspects assume expected behavior, not accounting for deficiencies such as errors or failures. The category was found to encompass the following aspects: * Algorithm. Users struggle to comprehend why a system generates a particular output, wanting to know the factors that influenced the computation. The output is unique to each user; the programmed logic that is the same for all users is therefore not included in this aspect (see aspect Design Decisions). 
Example [sic]: “In the last 3 months my credit went up a total of 10 points and then dropped down 7 points December 2. This doesn't make sense.” * Design Decision. Users wonder why things are a particular way (status quo) or not a certain way (counterfactual). The concern here is not an output of the system that might be individual to each user, but the programmed logic, which the developers have agreed on. Hence, in contrast to the Algorithm aspect, the Design Decisions are the same for multiple (if not all) users. Example [sic]: “why does the app force portrait mode?” * Signification. Users seek clarification on definitions, visual elements (such as symbols, colors, and highlighting), information visualizations, or related issues in order to understand the system's intended meaning. Example [sic]: “I like this app, but when there may be something in red I just don't understand. Does it means something is wrong?” The last category among the primary concerns is the category. It represents general Explanation Needs that are not necessarily provoked during the interaction with a system. Further, aspects to be explained may be shaped by overarching business goals or specific project or process requirements <cit.>. Here we determined the following aspects: * Mission. Users seek clarification on the system's purpose, utility, and vision, with a particular focus on specific features and the system as a whole. Example [sic]: “Why do we need to access this app to get the information we used to get by phone from the doctor?” * Purchase & Subscription. Users inquire about purchase or subscription matters, such as feature exclusivity in premium plans. This aspect only applies when there are multiple product lines with varying purchase or subscription plans. Example [sic]: “Do I have to pay for it on all devices?” * Privacy. Users express privacy concerns regarding data collection, processing, and forwarding practices, as well as legal privacy rights and app permissions (e.g., GPS activation). If the inquiry is not focused on privacy but rather on the aspects that affect software decisions, it falls under the Algorithm aspect. Example [sic]: “Not sure why you need date of birth to register a navigation app, very suspicious as far as I'm concerned.” Moving to the secondary concerns, we have the and categories. Here, the Explanation Need is only the secondary concern of users, and there is a substantial underlying problem (at least in the user's perception) that is their primary concern. Overall, the aspects are somewhat reproachful, and the primary concern is typically a subjective deficiency from the user's point of view. * Change. Users seek explanations for changes to a system, including modifications to the user interface or workflow. This aspect is more critical in tone than genuinely inquisitive. However, it does not necessarily involve the need for re-learning the system, which is covered by the Instruction aspect. Example [sic]: “It just keeps getting worse. Why do you do this?” * Feature Gap. Users want to know why a feature is incomplete or missing. This aspect does not cover cases where a feature is not supported for an individual user's use case (see aspect Compatibility). Example [sic]: “Why would you have a database where you can only add and not edit or delete?” * Compatibility. Users are confused by one or more features not being supported or compatible with their use case. They are thus prevented from using a set of features due to external conditions that are not part of the system. This aspect excludes errors or failures. 
Example [sic]: “Only big downfall is that USA account holders for some reason ... cannot use the boost feature. No clue why and no one has given answers to why it doesn't work.” Finally, the category describes a situation with an undeniable objective deficiency such as an error or failure <cit.> in the system. It differs from , where the primary concern is a subjective deficiency in a user's eyes. We found the following aspects to be typical for : * Fix. Users ask about fixes or workarounds to solve errors/failures or ask whether errors/failures are known to the developers. Example [sic]: “Anyone experiencing the same or know what to do about it?”. * Cause. Users ask for the underlying faults that cause errors, failures, or obviously erroneous outputs. They are interested in knowing the cause of the errors/failures to potentially attempt to fix them themselves. In contrast to the Fix aspect, they do not ask for any support. Example [sic]: “Is it a loading problem or a glitch??”. * Confusing Message. Users feel misled by rare messages (such as uninformative or incongruous alerts) and assess the messages as incomplete, inaccurate, or erroneous. The messages can potentially be faulty explanations. Example [sic]: “I constantly get warnings that I don't have enough shares to sell and I cannot find any solutions”. §.§ Discussion of Results Through a rigorous study of app reviews, we have developed the Explanation Needs taxonomy, which addresses RQ1 and provides a valuable resource for researchers and developers seeking to understand the concerns and requirements of end-users. By categorizing user needs in the taxonomy, we can better recognize and address various requirements in a more systematic manner, ultimately improving the quality, transparency, and user-friendliness of the application. The proposed taxonomy serves as an enabler, allowing for a more effective approach to addressing user needs and fostering a deeper understanding of the end-user experience. As such, the Explanation Needs taxonomy has significant implications for app development and can contribute to the development of more explainable systems that better meet the needs of users. With the Explanation Needs taxonomy, we were able to tackle RQ2, which aimed at a more statistical view of the types of Explanation Needs expressed in app reviews. We therefore applied the taxonomy to multiple sets of data, composed of 5,564 reviews in total. Table <ref> provides an overview of all the datasets used in this paper. As discussed in Section <ref>, the taxonomy extraction was based on the , and the final labeling was achieved through several rounds of cross-checking to ensure the validity and reliability of our findings. However, to gain deeper insights into the types of information and Explanation Needs in the app reviews and to further assess the coverage and applicability of our taxonomy, we also labeled the reviews of our extended datasets, which we created for classifier implementation and validation (see Section <ref> for more details). The labeling process of the rest of the data (i.e., the app reviews 9 to 22 in Table <ref>) was carried out after consolidating the taxonomy and codebook, the latter of which provides complete information on inclusion and exclusion criteria, as well as typical and atypical examples. Following this, a single coder categorized the app reviews in CrossVal-DS and General-DS that had already been labeled as Explanation Needs (see Section <ref> for more details). 
In addition to the description of the apps and the source and number of reviews, Table <ref> provides a breakdown of the distribution of different types of Explanation Needs per app. It shows the number of occurrences of each type of Explanation Need for each app, as well as the total number and percentage of Explanation Needs across all apps. By examining this table, we can answer RQ2 by identifying the areas where users require the most explanations. This analysis can help shed light on the nature and extent of Explanation Needs in app reviews. For example, it shows that the majority of cases fall under the Primary Concerns category, accounting for 52.3% of all Explanation Needs. This implies that users' primary issue with the app is their lack of understanding and knowledge, without any substantial problems aside from it. This finding highlights the importance of addressing users' primary concerns and providing sufficient explanations to enhance their overall understanding of the app's functionality. Furthermore, the category is the most frequent type within the Primary Concerns and accounts for 20.7% of the total number of Explanation Needs across all apps. This means that a significant proportion of user feedback in app reviews is related to ordinary interaction with the system. As users engage with the app, they may encounter unexpected behaviors, have questions about design decisions, or need clarification on the meaning of certain visual elements or notions. Accordingly, it is not surprising to see a relatively high number of such cases, since these issues could arise regardless of the app's specific functionality and could therefore be relevant to a wide range of users. Additionally, the category may be particularly salient to users, as it directly affects their experience using the app, and they may be more likely to leave reviews on these types of issues. Similarly, the category stands out with the second-highest percentage of Explanation Needs among the primary concerns: accounting for 18.6% of all Explanation Needs, it indicates that users frequently encounter difficulties in understanding how to use certain features or functionalities of the app. This finding highlights the importance of providing concise instructions or tutorials to help users learn how to use the app effectively. Overall, the high percentage of and indicates that the app's user interface or design could be improved. Our results hence suggest that application design and development should primarily focus on the usability of the apps, making them more intuitive and user-friendly. Another interesting observation is that the category, which is classified as a secondary concern, has the highest percentage of Explanation Needs at 32.3%. This could be attributed to its subjective nature, as the primary concern of this category is a perceived deficiency from the user's point of view, which may be difficult to address directly. Additionally, this deficiency is not necessarily related to a specific bug or technical issue, but rather to a mismatch between the user's expectations and the app's performance or features. This finding suggests that users are more likely to express their discontentment and frustration in reviews. Last but not least, our qualitative analysis also reveals an important insight. We found that although app reviews provide a wealth of information about users' Explanation Needs, the proportion of reviews that contain such information is relatively low, at only 5.1%. 
This indicates a need for more efficient and automated techniques to extract useful content from reviews. Therefore, our study has motivated us to pursue our second contribution, which is described in more detail in Section <ref>. By developing machine learning-based approaches to extract Explanation Needs from reviews, we hope to improve the efficiency and effectiveness of analyzing large volumes of user feedback. §.§ Threats to Validity A potential threat to internal validity is the use of qualitative coding, which can be interpretive and subjective. This means that our analysis may be influenced by our own biases or assumptions, which could affect the accuracy of our findings. Poor English and typos in some reviews can also lead to inaccurate conclusions, but we made a conscious effort to evaluate unintelligible reviews. In addition, a threat to external validity could be survivorship bias, as our results may not be representative of users with low technological literacy, who may be less likely to write and publish app reviews in the first place. Also, the dataset we used in our taxonomy extraction is relatively small, with only a few cases of Explanation Needs observed (4.6% as shown in Table <ref>). Accordingly, it might limit the generalizability of our taxonomy categories. However, to mitigate the potential threat of a small sample, we conducted a thorough and saturated coding process and verified the validity of our taxonomy categories on an extended dataset. § AUTOMATIC DETECTION OF EXPLANATION NEEDS §.§ Corpora Creation To determine the best method for detecting Explanation Needs in a structured way, we follow the recommendations by Dell'Anna et al. <cit.>. They stress that the results of a simple cross-validated experiment do not allow one to draw definite conclusions about the performance of a classifier in an operational context. In other words, we cannot necessarily infer from such an experiment whether the classifier is able to generalize and is thereby suitable for use on unseen data in practice. Hence, we evaluate our approaches on two datasets: CrossVal-DS. We use this dataset to train and compare all models applying 10-fold cross-validation. The main purpose of CrossVal-DS is to compare the performance of different NLP classifiers and to select the best-performing method. It includes all reviews of the dataset created in Section <ref>. However, this dataset with 77 Explanation Needs is not sufficient for training an NLP classifier. Accordingly, we extend the dataset with further reviews and manually label them with respect to the tags “explanation need” and “no explanation need”. We make use of a dataset collected by Maalej et al. <cit.> that has already been utilized in the RE community to classify app reviews into problem reports, inquiries, and irrelevant ones <cit.>. Additionally, we collect further app reviews from 9 popular apps, using custom Python web scraping tools for the Apple App Store[<https://pypi.org/project/app-store-scraper/>] and Google Play Store[<https://pypi.org/project/google-play-scraper/>]. For each of the apps, we scraped as many reviews as possible and then drew a random sample of 100 reviews to include an equal-sized subset of the reviews per app. A detailed overview of CrossVal-DS is provided in Table <ref>. In total, CrossVal-DS comprises 5,078 reviews, of which 261 contain Explanation Needs (5.14%). General-DS. To investigate the generalizability of the best-performing classifier, we apply it to a set of unseen reviews that are not associated with any of the apps contained in CrossVal-DS. 
Specifically, we scrape and annotate reviews about the four randomly selected apps called WeChat, Memrise, Duolingo, and GitHub (see Table <ref>). The main purpose of General-DS is to report the performance of our best classifier in a realistic setting. In total, General-DS comprises 486 reviews, of which 24 contain Explanation Needs (4.94%). §.§ Annotation Validity To verify the reliability of our annotations, we calculated the inter-annotator agreement in terms of Cohen's Kappa <cit.>. We involved a total of four annotators in the creation of CrossVal-DS and General-DS and assessed the inter-rater reliability on the basis of 485 reviews that have each been labeled by two of the four annotators. In case of a high imbalance of ratings, Cohen's Kappa is low and indicates poor inter-rater reliability even if there is a high agreement between the raters (Kappa paradox <cit.>). Thus, Cohen's Kappa is not meaningful in such scenarios. Consequently, Cohen's Kappa should always be reported together with the percentage of agreement and other paradox-resistant measures (e.g., Gwet's AC1 measure <cit.>). We calculated all measures (see Table <ref>) using the cloud-based version of AgreeStat[<https://www.agreestat.com/>]. Cohen's Kappa and Gwet's AC1 can both be interpreted using the taxonomy developed by Landis and Koch <cit.>: values ≤ 0 indicate no agreement, 0.01–0.20 none to slight, 0.21–0.40 fair, 0.41–0.60 moderate, 0.61–0.80 substantial, and 0.81–1.00 almost perfect agreement. Table <ref> demonstrates that the inter-rater agreement of our annotation process is reliable, as we achieve an average percentage of agreement of 95%. Despite a high agreement of over 90%, Cohen's Kappa yields a relatively low value, which paradoxically suggests only moderate agreement. A more meaningful assessment is provided by Gwet's AC1, as it does not fail in the presence of prevalence effects and remains close to the percentage of agreement. The achieved Gwet's AC1 of 0.945 indicates a nearly perfect agreement. Therefore, we assess CrossVal-DS and General-DS as reliable and suitable for the implementation and evaluation of our Explanation Need detection approach. §.§ Methods We define the detection of Explanation Needs as a binary classification problem, in which we are given a certain review 𝒳 and we are required to produce a nominal label y ∈𝒴 = {explanation need, no explanation need}. Since app store reviews are written in natural language, we build our classifier based on different methods established for NLP. Rule-based Approach. Instead of using a random classifier as the baseline approach, we use simple regex expressions for the detection of Explanation Needs. We iterate through all reviews in the test set and check whether they contain a question mark or the word “why”. We hypothesize that both expressions might be a feasible indicator for the presence of an Explanation Need. Following this assumption, we classify a review as an Explanation Need if it contains at least one of the two expressions and vice versa. Machine Learning-based Approach. We investigate the use of supervised ML models that learn to predict Explanation Needs based on a labeled dataset. Specifically, we employ established binary classification algorithms: NB, SVM, RF, DT, LR, AB, and KNN. To determine the best hyperparameters for each binary classifier, we apply Grid Search, which fits the model on every possible combination of hyperparameters and selects the most performant one. We use two different methods as word embeddings: BoW and TF-IDF. 
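To make these two baseline families concrete, the following sketch shows one possible realization in Python with scikit-learn. The paper does not publish its implementation, so the vectorizer settings and the hyperparameter grid below are illustrative assumptions rather than the authors' actual configuration.

import re
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

def rule_based_predict(review):
    # Baseline: flag a review as an Explanation Need if it contains
    # a question mark or the word "why" (case-insensitive).
    return bool(re.search(r"\?|\bwhy\b", review, flags=re.IGNORECASE))

# ML baseline: TF-IDF features (swap in CountVectorizer for BoW) plus one of
# the classic classifiers (NB, SVM, RF, DT, LR, AB, KNN), tuned by grid search.
pipeline = Pipeline([
    ("vec", TfidfVectorizer()),
    ("clf", RandomForestClassifier()),
])
param_grid = {  # illustrative grid, not the grid used in the paper
    "vec__ngram_range": [(1, 1), (1, 2)],
    "clf__n_estimators": [100, 300],
    "clf__max_depth": [None, 20],
}
search = GridSearchCV(pipeline, param_grid, cv=10)
# search.fit(train_texts, train_labels)  # texts: list of str, labels: 0/1

Used this way, rule_based_predict reproduces the regex baseline exactly as described, while GridSearchCV fits the pipeline on every combination in the grid and retains the most performant one.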
In Table <ref> we report the classification results of each algorithm as well as the best combination of hyperparameters. Deep Learning-based Approach. With the rise of DL, more and more researchers are using DL models for NLP tasks. In this context, the BERT model <cit.> is prominent and has already been used for question answering and named entity recognition. BERT is pre-trained on large corpora and can therefore easily be fine-tuned for any downstream task without the need for much training data (Transfer Learning). In our paper, we make use of the fine-tuning mechanism of BERT and investigate to what extent it can be used for the detection of Explanation Needs. First, we tokenize each app store review. BERT requires input sequences with a fixed length (maximum 512 tokens). Therefore, for reviews that are shorter than this fixed length, PAD tokens are inserted to adjust all reviews to the same length. Other tokens, such as the CLS token, are also inserted in order to provide further information on the review to the model. CLS is the first token in the sequence and represents the whole review (i.e., it is the pooled output of all tokens of a review). For our classification task, we mainly use this token because it stores the information of the whole review. We feed the pooled information into a single-layer feedforward neural network that uses a softmax layer, which calculates the probability that a review contains an Explanation Need or not. §.§ Evaluation Procedure CrossVal-DS is strongly imbalanced, as only 261 of its reviews are positive samples. To avoid the class imbalance problem, we apply Random Under Sampling. We randomly select reviews from the majority class and exclude them from the dataset until a balanced distribution is achieved. Our final dataset consists of 522 reviews, of which 261 contain an Explanation Need and the other 261 do not. We follow the idea of cross-validation and divide the dataset into a training, validation, and test set. We opt for 10-fold cross-validation, as a number of studies have shown that a model that has been trained this way demonstrates low bias and variance <cit.>. Please note that undersampling stands in conflict with our goal to understand how well our classifier generalizes and performs in a realistic setting. Hence, we do not undersample General-DS, allowing us to report our final results on a realistically distributed test corpus. We use standard metrics for evaluating our approaches, such as Precision, Recall, and a weighted F-measure. Since a single run of a k-fold cross-validation may result in a noisy estimate of model performance, we repeat the cross-validation procedure five times and average the scores from all repetitions. Since our classifier is supposed to assist development teams by detecting relevant Explanation Needs in reviews automatically, we favor Recall over Precision. A high Recall corresponds to a greater degree of automation of Explanation Need detection because it is easier for users to discard FP than to manually detect FN. Consequently, we seek high Recall to minimize the risk of missed Explanation Needs and acceptable Precision to ensure that the development teams are not overwhelmed by FP. To obtain a single, aggregated metric from Precision and Recall, the simple F-Measure (F1) is frequently used in binary classification tasks. It is defined as the harmonic mean between Precision and Recall, and thus assigns equal importance to both metrics. 
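Before turning to how the two metrics are weighted, the fine-tuning setup described above can be sketched as follows. This is a minimal illustration using the Hugging Face transformers library; the paper does not name its implementation stack, so the library choice, the shortened maximum sequence length, and the training details are our assumptions.

import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # explanation need vs. no explanation need

reviews = ["How do you edit from this app???"]  # toy input
# Pad/truncate every review to a fixed length; the special [CLS] and [SEP]
# tokens are inserted automatically by the tokenizer.
batch = tokenizer(reviews, padding="max_length", truncation=True,
                  max_length=128, return_tensors="pt")

# The classification head operates on the pooled [CLS] representation and
# produces one logit per class; softmax turns the logits into probabilities.
with torch.no_grad():
    probs = torch.softmax(model(**batch).logits, dim=-1)

# Fine-tuning minimizes the cross-entropy loss over the labeled reviews, e.g.:
# loss = model(**batch, labels=torch.tensor([1])).loss; loss.backward(); ...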
To account for our preference for Recall over Precision, it is imperative to make adjustments to the way in which the two metrics are weighted. We evaluate our approaches based on a weighted F-Measure: F_β = (1+β^2) · (Precision · Recall) / (β^2 · Precision + Recall), where β is the ratio by which Recall is more important than Precision <cit.>. Berry <cit.> defines β as follows: β = (time_a · λ) / time_v, where time_a is the average time that a human would need to assess an artifact manually (i.e., the time spent by a human determining whether a particular review is an Explanation Need or not), and time_v is the average time that a human would need to verify whether a positive detection by a tool is actually a True Positive (i.e., the time spent by a human discarding a FP detection of an Explanation Need). Further, λ is the inverse of the share of relevant artifacts within all artifacts. In other words, λ is the average number of artifacts that an analyzer would need to investigate in order to find a single relevant artifact. In our case, λ is calculated as λ = (285/5564)^-1 ≈ 19.52, because we identified a total of 285 Explanation Needs in our dataset of 5,564 reviews. Thus, on average, one out of 19.52 app reviews contains an Explanation Need. Since the time required to vet a single answer of our classifier is no more than the time required to manually check if an app review contains an Explanation Need, the weight ratio β is equal to λ. Hence, we define β as 19.52. §.§ Experimental Results In the following, we describe the results of our experiments. First, we compare the performance of different NLP classifiers on CrossVal-DS. Second, we investigate the generalizability of the best-performing method on General-DS. Selection of Best-Performing Method Table <ref> reveals that our shallow rule-based approach shows a strong performance in detecting Explanation Needs. It achieves a high F_19.52 score for both classes and is able to distinguish between reviews that contain Explanation Needs and those that do not. In comparison, all ML-based approaches exhibit a significantly poorer performance. For example, DT trained on TF-IDF embeddings achieves a Macro-F_19.52 score of 58% (a deterioration of 35% compared to the baseline approach). The best performance in this category is achieved by RF trained on BoW embeddings with a Macro-F_19.52 score of 76%. Our experiment shows that the choice of sentence embedding has no significant effect on the performance of the ML-based approaches. Most of the approaches achieve a Macro-F_19.52 score of about 70% regardless of the applied sentence embedding. Our fine-tuned BERT model, on the other hand, shows a considerably stronger performance and achieves a Macro-F_19.52 score of 93%. Interestingly, despite its rich language understanding, the BERT model fails to outperform our simple rule-based approach. In fact, both approaches achieve the same Macro-F_19.52 score and consequently possess the same predictive power. Our experiments thus show that both approaches are suitable for identifying Explanation Needs in CrossVal-DS. To investigate the generalizability of the rule-based approach and the BERT model, we apply both approaches to a larger set of unseen reviews written for the other apps contained in General-DS. Generalizability of Best-Performing Method When applied to unseen data, both approaches show a clear performance drop in the detection of Explanation Needs (see Table <ref>). 
While both approaches continue to show very high F_19.52 scores for the “no explanation need” class, the F_19.52 score for the “explanation need” class has decreased significantly. The largest performance drop is evident in the rule-based approach, which only shows an F_19.52 score of 67% in detecting Explanation Needs across all reviews of all four apps. Similarly, the trained BERT model fails to match the very good F_19.52 score of 94% that it could achieve when applied to the balanced training set. Instead, it achieves a score of 79% on the unseen data, which corresponds to a decrease of 15%. Overall, the BERT model outperformed the rule-based approach and achieved a significantly better Macro-F_19.52 score of 86%. The higher Macro-F_19.52 score is mainly attributable to the fact that the BERT model shows a significantly better Recall with regard to the Explanation Need class. In other words, the BERT model identified more Explanation Needs in the reviews than the rule-based system. Our experiment demonstrates that this performance deviation does not depend on a specific app about which the respective reviews were written. In fact, when applied to the reviews about WeChat, Duolingo, and GitHub, the BERT model exhibits better performance. In the case of the reviews about Memrise, it achieves the same Recall as the rule-based approach. Both the rule-based approach and the BERT model show the most significant performance loss with regard to Precision and generate a great number of FP. Using both approaches, two out of three reviews predicted to contain an Explanation Need are FPs, causing high filtering costs for practitioners. §.§ Discussion of Results Our experiments show that the rule-based approach achieves the same performance as the BERT model when evaluated on CrossVal-DS, but performs worse when applied to unseen data. The rule-based approach fails to recognize more than 30% of the Explanation Needs and seems to generalize less effectively than the BERT approach. When analyzing the data in General-DS, we see that the detection of Explanation Needs cannot be reduced to the presence of questions and question words. Explanation Needs do not necessarily contain question marks or question words. In many cases, questions are formulated but question marks are not included: Would you please keep us updated on what's going on. I have several texts and don't know how to keep them. Don't want to lose it. The BERT model understands the semantics of sentences better and depends less on the sentence's syntax. The rule-based approach could be extended by adding more interrogatives (e.g., how) and interrogative verbs (e.g., don't understand) to enhance its Recall; however, this may lead to an unreasonable increase in FPs. The resulting filtering effort would diminish the usefulness of the approach in practice. From a critical point of view, our best classifier does not perform flawlessly. It does not identify all Explanation Needs in General-DS and predicts a number of FPs. We argue that the recall value needs to be improved above 90% to qualify the approach for practical use. Otherwise, the practitioners would have to go through the reviews manually to detect false negatives, which is time-consuming given the high number of reviews and the fact that Explanation Needs rarely occur. The achieved precision value of 37% is not optimal, but in our view still justifiable. 
It is much easier for the practitioner to discard two false positives among three reviews predicted as Explanation Needs than to go through 20 reviews manually to discover a single Explanation Need. Our classifier marks a first step toward automatic Explanation Need detection. Further studies should focus on optimizing the classifier in terms of recall. We hypothesize that the extension of the training set and the use of further language models might be beneficial. So far, we have only focused on the BERT-Base model <cit.>, although other studies <cit.> show that alternative models such as RoBERTa can achieve even better performance. To assist practitioners in filtering FPs, it may also be useful to have the classifier mark the specific clause in each review that has caused the review to be categorized as an Explanation Need <cit.>. This will help practitioners to understand the inner workings of the classifier and also increase its acceptance. §.§ Threats to Validity A threat to internal validity is the annotations themselves, as any annotation task is subjective to a certain degree. To minimize the bias of the annotators, we performed two mitigation actions: First, we conducted a workshop prior to the annotation process to ensure a common understanding of Explanation Needs. Second, we assessed the inter-rater agreement by using multiple metrics (Gwet's AC1 etc.). Despite our efforts to make the labeling process as transparent and systematic as possible, there may still be some variability in the resulting gold standard, e.g., misinterpretation of the users' intention, blurred boundaries between the categories, too broad or too narrow judgement, or human mistakes. Using the adjusted F_β-score as an evaluation metric poses a threat to construct validity. We used an adjusted β value of 19.52, which was calculated based on the frequency of Explanation Need occurrences in app reviews. This value is in the order of β values calculated for other “needle in the haystack” tasks <cit.>. However, it is possible that the value may deviate when calculated based on another dataset. Our results have shown that the generalization of our tested classifiers is fairly moderate when applied to unseen, dissimilar test data. This may indicate that more data is needed to train a classifier that generalizes better. Lastly, app reviews are not the only relevant source of user feedback <cit.>. § CONCLUSION This work is a further step towards user-centered explainability engineering. It contributes to a better understanding of users' Explanation Needs and lays the foundation for future research and development in this area. The proposed taxonomy of Explanation Needs provides a rigorous approach for extracting explainability requirements from app reviews, ensuring that they meet users' expectations. In addition, our approach represents the first step towards automatic explanation need detection and reduces the manual effort required by engineers and researchers to identify Explanation Needs in reviews. To facilitate practical use of the approach, it needs to be optimized for recall so that practitioners can efficiently focus on eliciting valid Explanation Needs. Finally, our published set of manually labeled app reviews will enable researchers in the field to improve their own models and approaches for detecting Explanation Needs. § ACKNOWLEDGEMENTS This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Grant No.: 470146331, project softXplain (2022-2025).
http://arxiv.org/abs/2307.04448v1
20230710095736
Casimir effect of Lorentz-violating charged Dirac in background magnetic field
[ "Ar Rohim", "Apriadi Salim Adam", "Arista Romadani" ]
hep-th
[ "hep-th", "quant-ph" ]
[email protected] Research Center for Quantum Physics, National Research and Innovation Agency (BRIN), South Tangerang 15314, Indonesia Departemen Fisika, FMIPA, Universitas Indonesia, Depok 16424, Indonesia [email protected] Research Center for Quantum Physics, National Research and Innovation Agency (BRIN), South Tangerang 15314, Indonesia [email protected] Department of Physics, Faculty of Science and Technology, Universitas Islam Negeri Maulana Malik Ibrahim Malang, Malang 65144, Indonesia We study the effect of Lorentz symmetry breaking on the Casimir energy of a charged Dirac field in the presence of a uniform magnetic field. We use the boundary condition from the MIT bag model to represent the property of the plates. We investigate two cases for the direction of the violation, namely, the time-like and space-like vector cases. We discuss how the Lorentz violation and the magnetic field affect the structure of the Casimir energy and its pressure. We also investigate the weak and strong magnetic field cases in two different limits, heavy and light masses. Casimir effect of Lorentz-violating charged Dirac in background magnetic field Arista Romadani August 12, 2023 ============================================================================== § INTRODUCTION The Casimir effect, representing quantum field effects under macroscopic boundaries, was first predicted by H. B. G. Casimir in 1948 <cit.>. He showed that the quantum vacuum fluctuations of the electromagnetic field confined between two parallel plates generate an attractive force. One decade later, in 1958, Sparnaay performed an experimental measurement of the effect, albeit with rough precision <cit.>. He found that the attractive force between the plates does not contradict the theoretical prediction. After his work, subsequent studies confirmed the Casimir effect experimentally with high precision <cit.>. The Casimir effect has many applications in nanotechnology <cit.>, and its theoretical discussion has been elaborated in connection with several research areas, for example, cosmology <cit.> and condensed matter physics <cit.> (see, e.g., Refs. <cit.> for reviews). Studies have shown that the Casimir effect arises not only for the electromagnetic field but also for other fields. The geometry of the plates' surfaces, represented by the form of the boundary conditions, also determines how the Casimir effect behaves. To discuss the Casimir effect of a scalar field, one can use Dirichlet boundary conditions, with the field vanishing at the surfaces of the plates; in such a case, one can also employ Neumann and/or mixed boundary conditions <cit.>. However, in the case of a fermion field, one cannot apply such boundaries, because the fermion field obeys a first-order differential equation. Alternatively, one may use a bag boundary condition that guarantees a vanishing flux at the plates' surfaces. A well-known boundary condition with this property is that of the MIT bag model <cit.> (see Ref. <cit.> for a review). An extension of this boundary condition that includes the role of the chiral angle has been employed in the literature (see e.g. Refs. <cit.>, c.f. Ref. <cit.> for the self-adjoint variant). The Casimir effect can also be investigated in systems of charged quantum fields in a background magnetic field. In such systems, one can investigate the interplay between the vacuum fluctuations of the charged quantum field and the background magnetic field <cit.>.
On the other hand, the Casimir effect in systems involving a Lorentz violation has also attracted some attention <cit.>. Within the framework of string theories, spontaneous Lorentz breaking may occur through the dynamics of Lorentz covariant fields <cit.>. Such dynamics generate interactions in which Lorentz tensors acquire nonzero expectation values. This is analogous to the Higgs mechanism in the context of the standard model. Several studies have investigated systems with Lorentz symmetry breaking and the CPT anomaly <cit.>. These two phenomena could possibly be measured experimentally, for instance, in measurements of neutral-meson oscillations <cit.>, QED tests in Penning traps <cit.>, and the baryogenesis mechanism <cit.>. Hence, in this work, we study a system of charged fields involving both Lorentz violation and a background magnetic field. In particular, we investigate the Casimir effect of the system under such effects. In our setup, the magnetic field is applied parallel to the normal of the plates' surfaces. We investigate two cases for the Lorentz-violating direction, i.e., the time-like and space-like directions. For the space-like case, we restrict ourselves to discussing the violation in the z-direction only, because Lorentz violation in the x- and y-directions does not affect the behavior of the Casimir energy of a Dirac field <cit.>. In the present study, we employ the boundary condition from the MIT bag model <cit.>, which was originally used to describe quark confinement. The presence of the boundary conditions in such a confined system naturally renders the allowed momentum perpendicular to the boundary surfaces discrete. To discuss the Casimir effect, we construct the mode expansion of the field, consisting of a linear superposition of the positive- and negative-energy solutions associated with the creation and annihilation operators. We can evaluate the vacuum energy by applying the boundary condition to the mode expansion. In the present study, we use the Abel-Plana-like summation <cit.> to extract the divergence of the vacuum energy in the presence of boundary conditions. The Casimir energy is then obtained by taking the difference between the vacuum energy in the presence of the boundary conditions and that in their absence: both vacuum energies are infinite, but their difference is finite. The rest of this paper is organized as follows. In Sec. <ref>, we describe the model of our system, namely, a Dirac field confined between two parallel plates with a background magnetic field under Lorentz violation, in the quantum field theory framework. In Sec. <ref>, we investigate the Casimir energy. In this section, we derive the solution for the field inside the confinement region following the procedure used in the literature (see e.g., Refs. <cit.>). In Sec. <ref>, we discuss the Casimir pressure. Section <ref> is devoted to our summary. In this paper, we use natural units, c=ħ=1. § MODEL We consider a charged Dirac field confined between two parallel plates placed at z=0 and z=ℓ in the presence of a uniform magnetic field. The normal to the plates' surfaces is parallel to the z-axis (see Fig. <ref>). In our model, the Lorentz symmetry is not preserved.
The Lagrangian density for such a Dirac field with mass m is given by L=Ψ̅[iγ^μ∂_μ-eγ^μ A_μ- m+iλ u^μ u^νγ_μ∂_ν]Ψ, where Ψ̅(≡Ψ^†γ^0) is the Dirac adjoint, λ is a dimensionless parameter with |λ|≪ 1, A_μ is the four-vector potential, and u^μ is an arbitrary constant vector with u^μ u_μ = 1, -1, 0 for the time-like, space-like, and light-like cases, respectively. The Lorentz symmetry breaking is characterized by the last term of Eq. (<ref>); the parameter λ sets the intensity of the violation, while the vector u^μ specifies its direction <cit.>. In the present study, we use the 4× 4 gamma matrices γ^μ written in the Dirac representation as follows γ^0= [ I 0; 0 -I ]   and  γ^j= [ 0 σ^j; -σ^j 0 ], where I represents the 2× 2 identity matrix and σ^j are the 2× 2 Pauli matrices. The gamma matrices satisfy the anti-commutation relation {γ^μ, γ^ν}=2η^μν, where η^μν(≡ diag.(1,-1,-1,-1)) is the metric tensor of Minkowski spacetime. The Dirac field Ψ satisfies the modified Dirac equation [iγ^μ∂_μ-eγ^μ A_μ- m+iλ u^μ u^νγ_μ∂_ν]Ψ=0. The positive-energy solution of the above Dirac equation is given as Ψ^(+)(r)=e^-iω tψ( r)=e^-iω t[ χ_1; χ_2 ], where χ_1 and χ_2 are the upper and lower two-component spinors, respectively. We use ω to represent the eigenenergy of the Dirac field. In our model, the magnetic field points in the z-direction, B=(0,0,B), and one can choose the corresponding four-vector potential components as A_0=A_2=A_3=0 and A_1=-yB, with B the magnetic field strength. The geometry of the plates is described by the boundary condition from the MIT bag model <cit.> i n_μγ^μΨ=Ψ, where n_μ is the unit inward normal four-vector perpendicular to the boundary surface. The consequence of this boundary condition is a vanishing flux, or normal probability density, at the plate surfaces, n_μ J^μ (≡ n_μΨ̅γ^μΨ)=0. The idea behind this boundary condition is that the mass of the field is a function of position: inside the confinement region the mass is finite, and it becomes infinite at the boundary surface. One can then take the field outside the confinement region to vanish (see Ref. <cit.> for the confinement model of a relativistic particle), while inside the confinement region the solution for the field is written as a superposition of left- and right-moving field components. § CASIMIR ENERGY In this section, we derive the Casimir energy of a Lorentz-violating charged Dirac field in a background magnetic field. We study two directions of the Lorentz violation, namely, the time-like and space-like vector cases. We derive the solution for the Dirac field inside the confinement region under the boundary condition from the MIT bag model <cit.>. We follow the general procedure given in Refs. <cit.>. Then, we compute the Casimir energy using the Abel-Plana-like summation <cit.> following Refs. <cit.>. In addition, we also investigate the Casimir energy approximately for the cases of weak and strong magnetic fields. §.§ Time-like vector case We consider the positive-energy solution for the time-like vector case with u^(t)=(1,0,0,0). In this case, the Dirac equation (<ref>) gives two equations as follows [(1+λ)ω-m]χ^(t)_1=(-iσ^j∂_j+eyBσ^1)χ^(t)_2, [(1+λ)ω+m]χ^(t)_2=(-iσ^j∂_j+eyBσ^1)χ^(t)_1, from which we obtain the equation for the upper two-component spinor χ^(t)_1 as [(1+λ)^2ω^2-m^2]χ^(t)_1 = (-iσ^j∂_j+eyBσ^1)^2χ^(t)_1 = [-∇^2+e^2y^2B^2-eB(2iy∂_1+σ^3)]χ^(t)_1.
In the above equation, we have used the commutation and anti-commutation relations of the Pauli matrices, [σ^l,σ^m]=2iϵ_lmnσ^n and {σ^m,σ^n}=2δ_mnI, respectively, where δ_mn is the Kronecker delta and ϵ_lmn is the Levi-Civita symbol. To find the solution for χ^(t)_1 in Eq. (<ref>), one can propose the following form χ^(t)_1=e^ik_1 xe^ik_3 z F^(t)(y). The presence of the Pauli matrix σ^3 in Eq. (<ref>) leads to two independent solutions for F^(t)(y) as follows F^(t)_+(y) = [ f^(t)_+(y); 0 ]    and    F^(t)_-(y) = [ 0; f^(t)_-(y) ] . It is then convenient to introduce s=± 1, so that the solutions satisfy σ^3F^(t)_s(y)=sF^(t)_s(y), and to introduce a new variable ξ^(t)=√(eB)(y+k_1/eB). Then, Eq. (<ref>) becomes Hermite's equation for arbitrary s, [d^2/dξ^(t)2-ξ^(t)2+a^(t)_s]f^(t)_s(y)=0, where a^(t)_s=[(1+λ)^2ω^2-m^2-k^2_3+eBs]/eB. We now have the eigenenergies[We have used |eB| to avoid an imaginary value of ω.] ω^(t)_n',k_3=(1+λ)^-1√(m^2+k^2_3+|eB|(2n'+1)-|eB|s), where we have used a^(t)_s=2n'+1 with n'=0,1,2,3,⋯. The appropriate solution for f^(t)_s(y) with positive eB that satisfies Hermite's equation (<ref>) is given by f^(t)_s(y)= √((eB)^1/2/(2^n' n'! π^1/2)) e^-ξ^(t)2/2H_n'(ξ^(t)), where f^(t)_s(y) has been normalized. The solution for F^(t)_s(y) is characterized by two conditions, namely, n'=n for s=+1 and n'=n-1 for s=-1. They can be written as follows F^(t)_+(y) = [ f^(t)_k_1,n(y); 0 ]    and    F^(t)_-(y) = [ 0; f^(t)_k_1,n-1(y) ] . We note that the eigenenergy for both values of s takes the same form, ω^(t)_n, k_3=(1+λ)^-1√(m^2+k^2_3+2n|eB|), where n=0,1,2,3,⋯ is the Landau level. Then, we can finally derive the spatial solution for the right-moving field component as follows ψ^(+, t)_k_1,n,k_3 ( r) = [e^ik_1 xe^ik_3 z/(2π√(2(1+λ)ω^(t)_n, k_3((1+λ) ω^(t)_n,k_3+m)))] ×[C_1 [ ((1+λ)ω^(t)_n,k_3+m) f^(t)_k_1, n(y); 0; k_3f^(t)_k_1, n(y); √(2neB) f^(t)_k_1, n-1(y) ] + C_2 [ 0; ((1+λ)ω^(t)_n,k_3+m) f^(t)_k_1, n-1(y); √(2neB) f^(t)_k_1, n(y); -k_3f^(t)_k_1, n-1(y) ]],   for n≥ 1, and ψ^(+, t)_k_1,0,k_3 ( r)= [e^ik_1 xe^ik_3 z/(2π√(2(1+λ)ω^(t)_0, k_3((1+λ) ω^(t)_0, k_3+m)))] C_0 f^(t)_k_1, 0(y) [ (1+λ)ω^(t)_0, k_3+m; 0; k_3; 0 ],   for n=0, where C_0, C_1 and C_2 are complex coefficients and f^(t)_k_1, n(y) is given by f^(t)_k_1, n(y)=√((eB)^1/2/(2^n n! π^1/2))exp[-(eB/2)(y+k_1/eB)^2]H_n[√(eB)(y+k_1/eB)], with H_n(ξ) the Hermite polynomial. In a similar way, we can obtain the solution for the left-moving field component as follows ψ^(+, t)_k_1,n,-k_3( r) = [e^ik_1 xe^-ik_3 z/(2π√(2(1+λ)ω^(t)_n, k_3((1+λ) ω^(t)_n,k_3+m)))] ×[C̃_1 [ ((1+λ)ω^(t)_n,k_3+m) f^(t)_k_1, n(y); 0; -k_3f^(t)_k_1, n(y); √(2neB) f^(t)_k_1, n-1(y) ] + C̃_2 [ 0; ((1+λ)ω^(t)_n,k_3+m) f^(t)_k_1, n-1(y); √(2neB) f^(t)_k_1, n(y); k_3f^(t)_k_1, n-1(y) ]],   for n≥ 1, and ψ^(+, t)_k_1,0,-k_3 ( r)= [e^ik_1 xe^-ik_3 z/(2π√(2(1+λ)ω^(t)_0, k_3((1+λ) ω^(t)_0, k_3+m)))] C̃_0 f^(t)_k_1, 0(y) [ (1+λ)ω^(t)_0, k_3+m; 0; -k_3; 0 ] ,   for n=0, where C̃_0, C̃_1 and C̃_2 are complex coefficients. The total field solution is given by the linear combination of the left- and right-moving field components as follows[In the case of preserved Lorentz symmetry (λ=0), the solution is completely the same as that of Ref. <cit.>.] ψ^(+, t)_k_1,n,l( r)=ψ^(+, t)_k_1,n,k_3l( r)+ψ^(+, t)_k_1,n,-k_3l( r), where we use k_3l to represent the allowed momentum in the system, as we will see below.
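As a quick numerical illustration of the dispersion relation just derived (the boundary-condition constraint on k_3 follows below), the Landau-level energies can be tabulated directly. This is a minimal sketch of ours, not code from the paper, and the parameter values are arbitrary illustrations in natural units:

import numpy as np

def omega_t(m, k3, n, eB, lam):
    """Time-like case dispersion: omega = (1+lam)^(-1) * sqrt(m^2 + k3^2 + 2 n |eB|)."""
    return np.sqrt(m**2 + k3**2 + 2.0 * n * abs(eB)) / (1.0 + lam)

# The whole Landau tower is rescaled uniformly by (1+lam)^(-1):
for lam in (0.0, 0.01, 0.1):
    print(lam, [round(omega_t(1.0, 0.5, n, 2.0, lam), 4) for n in range(4)])

The discreteness of k_3 imposed by the plates is derived next.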
For arbitrary non-zero complex coefficients, we obtain the constraint on the momentum component in the z-direction (k_3) for any n≥ 0 as follows mℓsin(k_3ℓ)+k_3 ℓcos (k_3ℓ)=0. The detailed derivation is given in Appendix <ref>. The solutions of Eq. (<ref>) are given by k_3l with l=1,2,3,⋯, which indicates that the allowed momentum k_3 must be discrete. As a consequence, the energy of the field under the MIT boundary condition must also be discrete, ω^(t)_n,l=(1+λ)^-1√(m^2+k^2_3l+2n|eB|). These properties hold not only for the positive-energy solutions but also for their negative-energy counterparts. One can see that neither the magnetic field nor the parameter λ affects the structure of the momentum constraint: it has the same form as in the absence of the magnetic field <cit.> and as in the case of preserved Lorentz symmetry. We now write down the mode expansion of the Dirac field in the time-like vector case under the boundary condition from the MIT bag model as Ψ^(t)(r)= ∑^∞_n=0∑^∞_l=1∫^∞_-∞d k_1 [â_k_1,n,lΨ^(+,t)_k_1,n,l(r)+ b̂^†_k_1,n,lΨ^(-,t)_k_1,n,l(r) ], where Ψ^(±,t)_k_1,n,l(r) are the positive (+) and negative (-) energy solutions. See Appendix <ref> for the detailed expression of the negative-energy solution. The annihilation and creation operators in Eq. (<ref>) satisfy the anti-commutation relations {â_k_1,n,l,â^†_k'_1,n',l'}={b̂_k_1,n,l,b̂^†_k'_1,n',l'}=δ_nn'δ_ll'δ(k_1-k'_1), while the other anticommutation relations vanish. The Dirac field satisfies the orthonormality conditions ∫ d x_⊥∫^ℓ_0 dz ψ^(j,t)†_k_1,n, l( r)ψ^(j',t)_k'_1,n', l'( r)=δ_jj'δ_nn'δ_l l'δ(k_1-k'_1),    j,j'=0,1,2 , from which we can obtain the relations among the complex coefficients of the field. We use x_⊥≡ (x,y) to represent the sub-spatial coordinates parallel to the plates' surfaces. From the above Lagrangian density (<ref>), one can obtain the Hamiltonian density in the time-like vector case as H^(t)=-Ψ̅^(t)[iγ^j∂_j-eγ^μ A_μ- m]Ψ^(t)=i(1+λ)Ψ^(t)†∂_0Ψ^(t). We are now ready to evaluate the vacuum energy, E^(t)_ Vac.=∫_Ω d^3 x E^(t)_ Vac.=∫_Ω d^3 x⟨ 0| H^(t)|0⟩ = -(|eB|L^2/π)∑_n=0^∞∑_l=1^∞ i_n√(m^2+(k'_3l/ℓ)^2+2n|eB|), where E_ Vac. is the vacuum energy density, i_n=1-(1/2)δ_n0, k'_3l≡ k_3lℓ, and Ω is the volume of the confinement region. One can derive the Casimir energy by subtracting the vacuum energy in the absence of the boundary conditions from that in their presence. We note that λ does not appear in the vacuum energy for the time-like vector case; in other words, the Casimir energy does not depend on λ either. In the next subsection, we will show that the above result can be recovered in the case of preserved Lorentz symmetry. Therefore, it is not necessary to evaluate the Casimir energy further in this subsection. §.§ Space-like vector case In this subsection, we investigate the Casimir energy for the space-like vector case in the z-direction. We start the discussion by deriving the solution for the space-like vector case with u^(z)=(0,0,0,1). In this case, the Dirac equation (<ref>) gives two equations as follows (ω-m)χ^(z)_1=(-iσ^j∂_j+eyBσ^1+iλσ^3∂_3)χ^(z)_2, (ω+m)χ^(z)_2=(-iσ^j∂_j+eyBσ^1+iλσ^3∂_3)χ^(z)_1. Multiplying both sides of Eq. (<ref>) by (ω+m) and using Eq. (<ref>), we have the equation for the upper two-component spinor χ^(z)_1 as follows (ω^2-m^2)χ^(z)_1 = (-iσ^j∂_j+eyBσ^1+iλσ^3∂_3)^2χ^(z)_1 = [-∇^2+e^2y^2B^2-eB(2iy∂_1+σ^3)+2λ∂^2_3-λ^2∂^2_3]χ^(z)_1.
One can propose the solution χ^(z)_1 as follows χ^(z)_1=e^ik_1 xe^ik_3 zf^(z)(y). Following the same procedure as in the previous subsection, substituting Eq. (<ref>) back into Eq. (<ref>) leads to Hermite's equation, from which the eigenenergies are ω^(z)_n, k_3=√(m^2+(1-λ)^2k^2_3+2 n |eB|). We find that the solution of the Dirac field confined between two parallel plates in the space-like (z-direction) vector case for the right-moving field with positive eB is given as follows ψ^(z)_k_1,n,k_3 ( r)=[e^ik_1 xe^ik_3 z/(2π√(2ω^(z)_n, k_3(ω^(z)_n, k_3+m)))] [C_1 [ (ω^(z)_n, k_3+m) F^(z)_k_1, n(y); 0; (1-λ) k_3F^(z)_k_1, n(y); √(2neB) F^(z)_k_1, n-1(y) ] + C_2 [ 0; (ω^(z)_n,k_3+m) F^(z)_k_1, n-1(y); √(2neB) F^(z)_k_1, n(y); -(1-λ)k_3F^(z)_k_1, n-1(y) ]],   for n≥ 1, and ψ^(z)_k_1,0, k_3 ( r)= [e^ik_1 xe^ik_3 z/(2π√(2ω^(z)_0, k_3(ω^(z)_0, k_3+m)))] C_0 F^(z)_k_1, 0(y) [ ω^(z)_0, k_3+m; 0; (1-λ) k_3; 0 ] ,  for n=0, where F^(z)_k_1, n(y)=√((eB)^1/2/(2^n n! π^1/2))exp[-(eB/2)(y+k_1/eB)^2]H_n[√(eB)(y+k_1/eB)], with the Hermite polynomial H_n(y). In a similar way, we can obtain the solution for the left-moving field as follows ψ^(+,z)_k_1,n,-k_3( r)= [e^ik_1 xe^-ik_3 z/(2π√(2ω^(z)_n, k_3(ω^(z)_n, k_3+m)))] [C̃_1 [ (ω^(z)_n, k_3+m) F^(z)_k_1, n(y); 0; -(1-λ)k_3F^(z)_k_1, n(y); √(2neB) F^(z)_k_1, n-1(y) ] + C̃_2 [ 0; (ω^(z)_n, k_3+m) F^(z)_k_1, n-1(y); √(2neB) F^(z)_k_1, n(y); (1-λ)k_3F^(z)_k_1, n-1(y) ]],  for n≥ 1, and ψ^(+,z)_k_1,0,-k_3 ( r)= [e^ik_1 xe^-ik_3 z/(2π√(2ω^(z)_0, k_3(ω^(z)_0, k_3+m)))] C̃_0 F^(z)_k_1, 0(y) [ ω^(z)_0,k_3+m; 0; -(1-λ)k_3; 0 ] ,  for n=0, where the eigenenergies ω^(z)_n,k_3 are given by Eq. (<ref>) (see Appendix <ref> for the detailed derivation). The complex coefficients in the above Dirac field can be determined by orthonormality conditions similar to those in Eq. (<ref>). We next write the total spatial solution for the Dirac field inside the confinement region as ψ^(+,z)_k_1,n,k_3( r)=ψ^(+,z)_k_1,n,k_3( r)+ψ^(+,z)_k_1,n,-k_3( r). For non-zero complex coefficients C_1, C_2, C̃_1, C̃_2, we have the constraint on the momentum k_3, mℓsin(k_3ℓ)+(1-λ)k_3 ℓcos (k_3ℓ)=0, for arbitrary Landau level n. One can see that the parameter λ affects the constraint, while the magnetic field does not. The allowed momenta satisfying the constraint (<ref>) are k_3l with l=1,2,3,⋯. The discretized eigenenergies of the field under the MIT boundary condition can be written as ω^(z)_n,l=√(m^2+(1-λ)^2 k^2_3l+2n|eB|). Below we compute the Casimir energy of the charged Dirac field in the presence of the MIT boundary condition. For this purpose, we write down the Hamiltonian density for the space-like vector case, H^(z)=-Ψ̅^(z)[iγ^j∂_j-eγ^μ A_μ- m]Ψ^(z)=iΨ^(z)†∂_0Ψ^(z). The vacuum energy reads E_ Vac.=-(|eB| L^2/π)∑_n=0^∞∑_l=1^∞ i_n √(m^2+(1-λ)^2(k'_3l/ℓ)^2+2n|eB|), where we have used the eigenenergies given in Eq. (<ref>) and k'_3l(≡ k_3lℓ). From the above expression, one can see that the vacuum energy is divergent. To handle this, we employ the Abel-Plana-like summation <cit.> ∑_l=1^∞π f_n(k'_3l)(1-sin(2k'_3l)/(2k'_3l))=-π b m f_n(0)/(2(b m+1))+∫_0^∞ dz f_n(z) - i∫_0^∞ dt (f_n(it)-f_n(-it))/(((t+b m)/(t-b m))e^2t+1). From the momentum constraint in the space-like vector case (<ref>), the factor on the left-hand side of Eq. (<ref>) can be rewritten in the following form 1-sin(2k'_3l)/(2k'_3l) = 1 +b m/(k'^2_3l+(bm)^2), where b=ℓ (1-λ)^-1. Then, after applying the Abel-Plana-like summation to the vacuum energy, Eq.
(<ref>) becomes E_ Vac.=-(|eB|L^2/(π^2 b))∑_n=0^∞ i_n [-π b m f_n(0)/(2(b m +1))+∫_0^∞ dq f_n(q) - i∫_0^∞ dt (f_n(it)-f_n(-it))/(((t+b m)/(t-b m))e^2t+1)], where the function f_n(q) is defined as f_n(q)= √(m^2b^2+q^2+2n|eB| b^2)(1 +b m/(q^2+(bm)^2)). Next, one can decompose the first and second terms of the vacuum energy (<ref>) into two parts: (i) the vacuum energy in the absence of the two plates and (ii) that in the presence of a single plate. The latter part is irrelevant to our discussion because it does not contribute to the force. The last term of Eq. (<ref>) can then be understood as the Casimir energy E_ Cas.=i (|eB|L^2/(π^2 b))∑_n=0^∞ i_n ∫_0^∞ dt (f_n(it)-f_n(-it))/(((t+b m)/(t-b m))e^2t+1). Using Eq. (<ref>) and introducing the variable t=bu, the Casimir energy reads E_ Cas.= -(2|eB| L^2/π^2) ∑_n = 0^∞ i_n ∫_0^∞ du √(u^2 - M_n^2) (b(u - m) - m/(m + u))/((u + m)e^2bu + u - m), where M_n = √(m^2 + 2 n |e B|). The range of integration of Eq. (<ref>) can be split into two intervals, [0,M_n] and [M_n,∞]. The integral over the first interval vanishes, while that over the second one remains. To proceed further with the Casimir energy, we next rewrite the following quantity as (b(u - m) - m/(m + u))/((u + m)e^2bu + u - m) = -(1/2)(d/du) ln(1 + ((u - m)/(u + m)) e^-2bu), which leads the Casimir energy to E_ Cas. = (|e B| L^2/(π^2 b))∑_n = 0^∞ i_n ∫_0^∞ dy √(y^2 + 2yb M_n)(d/dy)ln(1 + ((y + b(M_n - m))/(y + b(M_n + m))) e^-2(y + b M_n)), where we have introduced a new variable y = bu - bM_n. Performing an integration by parts in Eq. (<ref>), we finally find the simpler form of the Casimir energy E_ Cas.=-(|eB| L^2/(π^2 b))∑^∞_n=0 i_n ∫^∞_0 dy (y+bM_n)(y^2+2byM_n)^-1/2ln(1 + ((y + b(M_n - m))/(y + b(M_n + m))) e^-2(y + b M_n)). We next evaluate the expression of the Casimir energy given in Eq. (<ref>) numerically. The left panel of Fig. <ref> depicts the scaled Casimir energy as a function of the dimensionless parameter m'(≡ mℓ) for various values of the parameter λ=0,0.01,0.1 with a fixed parameter ℓ^2|eB|=2. From this figure, we find that the scaled Casimir energy converges to zero as the parameter m' becomes larger. The right panel of Fig. <ref> depicts the scaled Casimir energy as a function of the dimensionless parameter ℓ^2|eB| for a fixed parameter m'=1. From this figure, one can see that the scaled Casimir energy also converges to zero as the parameter ℓ^2 |eB| increases. Both panels of Fig. <ref> show that as the parameter λ increases, the Casimir energy increases, as previously shown in Ref. <cit.> in the absence of the magnetic field. Figure <ref> plots the scaled Casimir energy as a function of the dimensionless parameter ℓ^2 |eB| for various values of the parameter λ=0,0.01,0.1 with a fixed parameter m'=1. One can see that increasing ℓ^2 |eB| leads the Casimir energy to converge to zero. In the rest of this section, we investigate approximate limits of the Casimir energy. In the case of a weak magnetic field B→ 0, the above Casimir energy (<ref>) for arbitrary m'(≡ mℓ) reduces to E_ Cas.≃ -(L^2/(π^2 b^3))∫^∞_bm dx x^2 ∫^∞_0 dv ((v+1)/√(v(v+2))) ln(1+((x(v+1)-bm)/(x(v+1)+bm)) e^-2x(v+1)). To obtain the above expression, we have replaced the summation by an integration and used v=y/(b M_n) and x=bM_n. Taking the light mass case m'≪1 of Eq. (<ref>), we recover the earlier result of Ref. <cit.>, E_ Cas.≃-(7π^2 (1-λ)^3 L^2/(2880 ℓ^3))[1-120 m'/(7π^2(1-λ))], where we have expanded the integrand up to order 𝒪(m') and omitted higher-order terms.
The first term corresponds to the Casimir energy in the massless case including the effect of the Lorentz violation, while the second term is the leading mass correction. In the case of preserved Lorentz symmetry, λ=0, we recover the well-known Casimir energy of a massless fermion derived by Johnson <cit.>. To obtain the approximate result of Eq. (<ref>), one can also start from the general Casimir energy (<ref>) and take its light mass case m'≪ 1 for an arbitrary magnetic field, E_ Cas.≃ -(|eB|L^2/(π^2 b))∑_n=0^∞ i_n∫^∞ _0 dy [(y+b√(2 n e B))ln(1+e^-2(y+b√(2 n e B)))/√(y^2+2yb√(2n e B)) - 2bm e^-2(y+b√(2 n e B))/(√(y^2+2yb√(2n e B))(1+e^-2(y+b√(2 n e B))))]. Then, taking the limit of a weak magnetic field, the above expression reduces to Eq. (<ref>). In the case of heavy mass m'≫ 1, we find that the Casimir energy approximately reduces to E_ Cas.≃ -(|e B|L^2(1-λ)^3/2/(16 π^3/2ℓ√(m')))∑_n=0^∞ i_n e^-2√(m'^2+2 n B')/(1-λ), where B'≡ℓ^2|eB| and we have expanded the integrand of Eq. (<ref>) up to order 𝒪(1/m') and omitted higher-order terms. In the case of a weak magnetic field B→ 0, the above Casimir energy (<ref>) reads E_ Cas.≃ -(L^2 (1-λ)^5/2√(m')/(32 π^3/2ℓ^3)) e^-2m'/(1-λ). We can see that, in the case of heavy mass, the Casimir energy goes to zero as the mass increases. We next investigate the Casimir energy in the case of a strong magnetic field ℓ^2 eB≫ 1. In this case, together with light mass m'≪ 1, the Casimir energy in Eq. (<ref>) approximately reduces to E_ Cas.≃ -|eB|L^2 (1-λ)/(48 ℓ). Meanwhile, for the case of a strong magnetic field ℓ^2 |eB|≫ 1 in the limit of heavy mass m'≫ 1, the Casimir energy reads E_ Cas.≃-(|eB|L^2 (1-λ)^3/2/(32 π^3/2ℓ√(m'))) e^-2m'/(1-λ). From the above expression, we note that the Casimir energy converges to zero as m' increases. § CASIMIR PRESSURE In this section, we investigate the Casimir pressure for the space-like vector case. It can be obtained from the Casimir energy (<ref>) by taking the derivative with respect to the plate separation, P_ Cas. = -(1/L^2)∂ E_ Cas./∂ℓ = - ∑_n = 0^∞ i_n ∫_0^∞ dy (eBy/((1 - λ) b^2 π^2 (y(2 b M_n + y))^3/2)) ×{2 b (b M_n + y)(2 b M_n + y)(b^2 M_n (M^2_n - m^2) + 2 b M^2_n y + y(m + M_n y))/(b^2 (M^2_n - m^2) + 2 b M_n y + y^2 + e^2(b M_n + y) (b(m + M_n) + y)^2) + (b^2 M^2_n + 3 b M_n y + y^2) ln(1 + e^-2(b M_n + y) (b(M_n - m) + y)/(b(m + M_n) + y))}. We plot the behavior of the scaled Casimir pressure in Figs. <ref> and <ref>. In general, we can see that its behavior is similar to that of the Casimir energy. From the left panel of Fig. <ref>, one can see that the scaled Casimir pressure converges to zero as the parameter m' increases, while from the right panel, it increases as ℓ^2 |eB| increases. These behaviors are supported by Fig. <ref>. Both panels of Fig. <ref> show that the Casimir pressure increases as the parameter λ increases. We next investigate the Casimir pressure in the cases of weak and strong magnetic fields. In the case of a weak magnetic field B→ 0, the Casimir pressure (<ref>) approximately reduces to P_ Cas. ≃ -(1/((1 - λ) b^4 π^2))∫_b m^∞ dx ∫_0^∞ dv (x^2/(v^1/2 (2 + v)^3/2)) ×(2 x (1 + v)(2 + v)(x^2 (1 + v)^2 + vbm - (b m)^2)/(x^2 (1 +v)^2 - (b m)^2 + e^2 x (1 + v) (b m + x (1 + v))^2) + (1 + 3 v + v^2) ln(1 + e^-2 x (1 + v) (x(1 + v) - bm)/(b m + x (1 + v)))).
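Before taking further limits, note that the closed-form Casimir energy (<ref>) and the pressure defined from it are straightforward to evaluate numerically. The following is a minimal sketch of ours, not code from the paper; the SciPy quadrature, the cutoffs, and the parameter values are illustrative choices, and the pressure is taken by a finite difference rather than the analytic derivative:

import numpy as np
from scipy.integrate import quad

def E_cas_per_area(m, eB, lam, ell, n_max=500):
    """Casimir energy per unit plate area E_Cas/L^2 from the closed-form integral
    above (space-like case, natural units), summing over Landau levels n."""
    b = ell / (1.0 - lam)
    total = 0.0
    for n in range(n_max):
        Mn = np.sqrt(m * m + 2.0 * n * abs(eB))
        def g(u):                      # substitution y = u^2 removes the y^(-1/2) endpoint singularity
            y = u * u
            r = (y + b * (Mn - m)) / (y + b * (Mn + m))
            return 2.0 * (y + b * Mn) / np.sqrt(y + 2.0 * b * Mn) \
                   * np.log1p(r * np.exp(-2.0 * (y + b * Mn)))
        term, _ = quad(g, 0.0, 30.0)   # integrand decays like exp(-2(y + b*Mn))
        total += (0.5 if n == 0 else 1.0) * term   # i_n = 1 - delta_{n0}/2
        if n > 0 and abs(term) < 1e-14:
            break
    return -abs(eB) / (np.pi**2 * b) * total

def P_cas(m, eB, lam, ell, h=1e-4):
    """Pressure P_Cas = -(1/L^2) dE_Cas/d(ell) by a central finite difference,
    with the physical m and |eB| held fixed as the separation varies."""
    return -(E_cas_per_area(m, eB, lam, ell + h) - E_cas_per_area(m, eB, lam, ell - h)) / (2.0 * h)

# Scaled quantities of the figures (m' = m*ell and ell^2|eB|), here with ell = 1:
for lam in (0.0, 0.01, 0.1):
    print(lam, E_cas_per_area(m=1.0, eB=2.0, lam=lam, ell=1.0), P_cas(m=1.0, eB=2.0, lam=lam, ell=1.0))

In the strong-field regime this sum is dominated by the n=0 term, since higher Landau levels are exponentially suppressed by e^-2b√(2n|eB|), consistent with the strong-field limits given above.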
We further take the light mass limit m'≪ 1 of the weak-field pressure above; then we have P_ Cas.≃ -(1-λ)^2(7π^2 (1-λ)-80m')/(960 ℓ^4), which recovers the earlier result of Ref. <cit.>. As discussed in the previous section, one can also proceed in the reverse order, namely, taking the light mass limit first and then considering the weak magnetic field. The Casimir pressure for the case of light mass with an arbitrary magnetic field is approximately given as P_ Cas.≃ P^(0)_ Cas.+P^(1)_ Cas., where P^(0)_ Cas. is the Casimir pressure for the massless case, explicitly given as P^(0)_ Cas. = - ∑_n = 0^∞ i_n ∫_0^∞ dy (|e B| y/(b^2 π^2 (1 - λ)(y(2 b √(2 n e B) + y))^3/2)) ×{2 b √(2 n e B)(2 b √(2 n e B) + y)(b √(2 n e B) + y)/(1 + e^2(b √(2 n e B) + y)) + (2 n e B b^2 + 3 b √(2 n e B) y + y^2) ln(1 + e^-2(b √(2 n e B) + y))}, and P^(1)_ Cas. is the first-order correction to the Casimir pressure at 𝒪(m'), explicitly given as P^(1)_ Cas. = ∑_n = 0^∞ i_n ∫_0^∞ dy (2 |e B| y b √(2 n e B)(1 + e^2(b √(2 n e B) + y)(1 + 2 y) + 4 e^2(b √(2 n e B) + y) b √(2 n e B)) b m)/(b^2 π^2 (1 + e^2(b √(2 n e B) + y))^2 (y(y + 2 b √(2 n e B)))^3/2 (1 - λ)). We next investigate the Casimir pressure (<ref>) in the case of heavy mass m'≫ 1. In this case, we have P_ Cas.≃ -(|e B| √(m')/((1 - λ)^1/2 8 π^3/2 b^2))∑_n = 0^∞ i_n e^-2√(m'^2 + 2 n B')/(1-λ), and taking the limit of a weak magnetic field B→ 0, the above Casimir pressure approximately reduces to P_ Cas.≃ -((1 - λ)^5/2 m'^3/2/(16 π^3/2ℓ^4)) e^-2 m'/(1 - λ). Similarly to the Casimir energy (<ref>), one can see that the Casimir pressure in the heavy mass limit (<ref>) converges to zero as the particle's mass increases. Based on the results for the Casimir pressure in the cases of light (<ref>) and heavy (<ref>) masses, we now analyze the behavior in a strong magnetic field. Taking the limit of a strong magnetic field ℓ^2 |eB|≫ 1 in Eq. (<ref>), the Casimir pressure approximately reduces to P_ Cas.≃ -|eB| (1-λ)/(48 ℓ^2), while for Eq. (<ref>), we obtain P_ Cas.≃ -(|eB| (1-λ)^3/2√(m')/(16 π^3/2ℓ^2)) e^-2m'/(1-λ). One can also derive both of the above equations by taking the derivative of the Casimir energies in Eqs. (<ref>) and (<ref>) with respect to the plate separation. § SUMMARY We have studied the Casimir effect of a Lorentz-violating charged Dirac field in a background uniform magnetic field. The Lorentz violation is described by two parameters: (i) λ, which determines the intensity of the violation, and (ii) the vector u^μ, which determines the direction of the violation. In the present study, we investigated two vector cases, namely, the time-like and space-like vector cases. For the space-like vector case, we only discussed the z-direction. The purpose of the study is to find the effect of the Lorentz violation parameter λ, together with the presence of the magnetic field, on the behavior of the Casimir energy as well as its pressure. We used the boundary condition from the MIT bag model <cit.> to represent the property of the plates. From our derivation, we find that for the time-like vector case, neither the magnetic field nor the Lorentz-violating parameter affects the structure of the momentum constraint, while for the space-like vector case, only the Lorentz-violating parameter appears. We noted that the vacuum energy under the MIT boundary condition is divergent.
Using the Abel-Plana-like summation <cit.>, we can decompose this vacuum energy into three main parts, namely, the vacuum energy in the absence of the boundary conditions, the vacuum energy in the presence of a single boundary, which is not relevant to the Casimir effect, and the remaining term, which gives the Casimir energy. We can derive the Casimir energy by subtracting the vacuum energy in the absence of the boundary conditions from that in their presence. The Lorentz violation in the time-like vector case does not affect the structure of the Casimir energy or its pressure, while in the space-like vector case the violation does affect them. We also found that the magnetic field has an effect on the Casimir energy and the pressure for both the time-like and space-like vector cases. We have demonstrated the behavior of the scaled Casimir energy and the pressure as functions of the mass, the parameter λ, and the magnetic field. For fixed parameter λ and magnetic field, the scaled Casimir energy and the pressure converge to zero as the mass increases (see the left panels of Figs. <ref> and <ref>). For fixed parameter λ and mass, the scaled Casimir energy and the pressure converge to zero as the magnetic field increases (see the right panels of Figs. <ref> and <ref>). We also found that increasing the parameter λ increases the Casimir energy and the pressure, as has been pointed out in Ref. <cit.>. For future work, it would be interesting to discuss the thermal effect in a setup similar to our present work (c.f., Ref. <cit.> for the scalar field). It would also be interesting to study a similar setup under more general boundaries, for example, chiral MIT boundary conditions <cit.>. § ACKNOWLEDGMENTS A. R. was supported by the National Research and Innovation Agency (BRIN) Indonesia, through the Post-Doctoral Program. § DETAILED DERIVATION OF THE CONSTRAINT FOR THE MOMENTUM In this appendix, we provide the complementary derivation of the momentum constraint. Applying the boundary condition from the MIT bag model (<ref>) to the solution of the Dirac equation, we have two equations as follows iσ^3χ_2|_z=0-χ_1|_z=0=0, iσ^3χ_2|_z=ℓ+χ_1|_z=ℓ=0, where we have used n^(0)_μ=(0,0,0,1) and n^(ℓ)_μ=(0,0,0,-1) at the first (z=0) and second (z=ℓ) plates, respectively. Then, more explicitly, we have four boundary-condition equations as follows iχ_21|_z=0-χ_11|_z=0=0, iχ_22|_z=0+χ_12|_z=0=0, iχ_21|_z=ℓ+χ_11|_z=ℓ=0, iχ_22|_z=ℓ-χ_12|_z=ℓ=0, where we have decomposed the two-component spinors χ_1 and χ_2 as χ_1= [ χ_11; χ_12 ], χ_2= [ χ_21; χ_22 ]. The boundary conditions of Eqs. (<ref>)-(<ref>) can be simultaneously written in the form of a multiplication of matrices as [ P_11 P_12; P_21 P_22 ][ C_0; C̃_0 ] =0,   for n=0, and [ Q_11 Q_12 Q_13 Q_14; Q_21 Q_22 Q_23 Q_24; Q_31 Q_32 Q_33 Q_34; Q_41 Q_42 Q_43 Q_44 ][ C_1; C_2; C̃_1; C̃_2 ] =0,   for n≥ 1, where the matrix elements are given by P^(t)_11=ik_3-((1+λ)ω^(t)_0k_3+m), P^(t)_12=-ik_3-((1+λ)ω^(t)_0k_3+m), P^(t)_21=[ik_3+((1+λ)ω^(t)_0k_3+m)]e^ik_3ℓ, P^(t)_22=[-ik_3+((1+λ)ω^(t)_0k_3+m)]e^-ik_3ℓ, Q^(t)_11=- Q^(t)_22=ik_3-((1+λ)ω^(t)_nk_3+m), Q^(t)_12= Q^(t)_14= Q^(t)_21= Q^(t)_23=i√(2neB), Q^(t)_13=- Q^(t)_24=-ik_3-((1+λ)ω^(t)_nk_3+m), Q^(t)_31=- Q^(t)_42=[ik_3+((1+λ)ω^(t)_nk_3+m)]e^ik_3ℓ, Q^(t)_32= Q^(t)_41=i√(2neB)e^ik_3ℓ, Q^(t)_34= Q^(t)_43=i√(2neB)e^-ik_3ℓ, Q^(t)_33=- Q^(t)_44=[-ik_3+((1+λ)ω^(t)_nk_3+m)]e^-ik_3ℓ,
and P^(z)_11=i(1-λ)k_3-(ω^(z)_0k_3+m), P^(z)_12=-i(1-λ)k_3-(ω^(z)_0k_3+m), P^(z)_21=[i(1-λ)k_3+(ω^(z)_0k_3+m)]e^ik_3ℓ, P^(z)_22=[-i(1-λ)k_3+(ω^(z)_0k_3+m)]e^-ik_3ℓ, Q^(z)_11=- Q^(z)_22=i(1-λ)k_3-(ω^(z)_nk_3+m), Q^(z)_12= Q^(z)_14= Q^(z)_21= Q^(z)_23=i√(2neB), Q^(z)_13=- Q^(z)_24=-i(1-λ)k_3-(ω^(z)_nk_3+m), Q^(z)_31=- Q^(z)_42=[i(1-λ)k_3+(ω^(z)_nk_3+m)]e^ik_3ℓ, Q^(z)_32= Q^(z)_41=i√(2neB)e^ik_3ℓ, Q^(z)_34= Q^(z)_43=i√(2neB)e^-ik_3ℓ, Q^(z)_33=- Q^(z)_44=[-i(1-λ)k_3+(ω^(z)_nk_3+m)]e^-ik_3ℓ, for the time-like and space-like (z-direction) vector cases, respectively. For the complex coefficients C_0,C̃_0, C_1, C_2,C̃_1,C̃_2 to be non-zero, the determinants of the 2× 2 matrix in Eq. (<ref>) and of the 4× 4 matrix in Eq. (<ref>) must vanish, which leads to the constraint for the momentum k_3. § NEGATIVE-ENERGY SOLUTIONS §.§ Time-like vector case The negative-energy solution for the right-moving field component is as follows ψ^(-,t)_k_1,n,k_3 ( r) = [e^-ik_1 xe^-ik_3 z/(2π√(2(1+λ)ω^(t)_n, k_3((1+λ) ω^(t)_n, k_3+m)))] ×[C̃_1 [ k_3f^(t)_-k_1, n(y); -√(2neB) f^(t)_-k_1, n-1(y); ((1+λ)ω^(t)_n,k_3+m) f^(t)_-k_1, n(y); 0 ] + C̃_2 [ -√(2neB) f^(t)_-k_1, n(y); -k_3f^(t)_-k_1, n-1(y); 0; ((1+λ)ω^(t)_n,k_3+m) f^(t)_-k_1, n-1(y) ]],   for n≥ 1, and ψ^(-,t)_k_1,0,k_3 ( r)= [e^-ik_1 xe^-ik_3 z/(2π√(2(1+λ)ω^(t)_0, k_3((1+λ) ω^(t)_0, k_3+m)))] C̃_0 f^(t)_-k_1, 0(y) [ k_3; 0; (1+λ)ω^(t)_0,k_3+m; 0 ],   for n=0. The negative-energy solution for the left-moving field component is as follows ψ^(-,t)_k_1,n,-k_3 ( r) = [e^-ik_1 xe^ik_3 z/(2π√(2(1+λ)ω^(t)_n, k_3((1+λ) ω^(t)_n, k_3+m)))] ×[ C_1 [ -k_3f^(t)_-k_1, n(y); -√(2neB) f^(t)_-k_1, n-1(y); ((1+λ)ω^(t)_n,k_3+m) f^(t)_-k_1, n(y); 0 ] + C_2 [ -√(2neB) f^(t)_-k_1, n(y); k_3f^(t)_-k_1, n-1(y); 0; ((1+λ)ω^(t)_n,k_3+m) f^(t)_-k_1, n-1(y) ]],   for n≥ 1, and ψ^(-,t)_k_1,0,-k_3 ( r)= [e^-ik_1 xe^ik_3 z/(2π√(2(1+λ)ω^(t)_0, k_3((1+λ) ω^(t)_0, k_3+m)))] C_0 f^(t)_-k_1, 0(y) [ -k_3; 0; (1+λ)ω^(t)_0,k_3+m; 0 ],   for n=0. The total spatial solution inside the confinement region is given by the linear combination of the left- and right-moving field components, ψ^(-,t)_k_1,n, l( r)=ψ^(-,t)_k_1,n,k_3 l( r)+ψ^(-,t)_k_1,n,-k_3 l( r), where we use k_3 l to represent the allowed momentum in the system. §.§ Space-like vector case (z-direction) The negative-energy solutions for the right-moving field component are given as follows ψ^(-,z)_k_1,n,k_3 ( r)= [e^-ik_1 xe^-ik_3 z/(2π√(2ω^(z)_n, k_3(ω^(z)_n, k_3+m)))] [C̃_1 [ (1-λ) k_3F^(z)_-k_1, n(y); -√(2neB) F^(z)_-k_1, n-1(y); (ω^(z)_n,k_3+m) F^(z)_-k_1, n(y); 0 ] + C̃_2 [ -√(2neB) F^(z)_-k_1, n(y); -(1-λ)k_3F^(z)_-k_1, n-1(y); 0; (ω^(z)_n,k_3+m) F^(z)_-k_1, n-1(y) ]],   for n≥ 1, and ψ^(-,z)_k_1,0,k_3 ( r)= [e^-ik_1 xe^-ik_3 z/(2π√(2ω^(z)_0, k_3(ω^(z)_0, k_3+m)))] C̃_0 F^(z)_-k_1, 0(y) [ (1-λ)k_3; 0; ω^(z)_0, k_3+m; 0 ] ,   for n=0, where F^(z)_-k_1, n(y)=√((eB)^1/2/(2^n n! π^1/2))exp[-(eB/2)(y-k_1/eB)^2]H_n[√(eB)(y-k_1/eB)]. The negative-energy solutions for the left-moving field component are given as follows ψ^(-,z)_k_1,n,-k_3 ( r)= [e^-ik_1 xe^ik_3 z/(2π√(2ω^(z)_n, k_3(ω^(z)_n, k_3+m)))] [ C_1 [ -(1-λ) k_3F^(z)_-k_1, n(y); -√(2neB) F^(z)_-k_1, n-1(y); (ω^(z)_n,k_3+m) F^(z)_-k_1, n(y); 0 ] + C_2 [ -√(2neB) F^(z)_-k_1, n(y); (1-λ)k_3F^(z)_-k_1, n-1(y); 0; (ω^(z)_n,k_3+m) F^(z)_-k_1, n-1(y) ]],  for n≥ 1, and ψ^(-,z)_k_1,0,-k_3 ( r)= [e^-ik_1 xe^ik_3 z/(2π√(2ω^(z)_0, k_3(ω^(z)_0, k_3+m)))] C_0 F^(z)_-k_1, 0(y) [ -(1-λ)k_3; 0; ω^(z)_0, k_3+m; 0 ] ,   for n=0.
The total spatial solution inside the confinement region is given by the linear combination of the left- and right-moving field components, ψ^(-,z)_k_1,n, l( r)=ψ^(-,z)_k_1,n,k_3 l( r)+ψ^(-,z)_k_1,n,-k_3 l( r), where we use k_3 l to represent the allowed momentum in the system. Casimir1948 H. B. G. Casimir, Kon. Ned. Akad. Wetensch. Proc. 51, 793 (1948). Sparnaay1958 M. J. Sparnaay, Physica 24, 751 (1958). Lamoreaux97 S. K. Lamoreaux, Phys. Rev. Lett. 78, 5 (1997); Phys. Rev. Lett. 81, 5475 (1998) (E). Mohideen:1998iz U. Mohideen and A. Roy, Phys. Rev. Lett. 81, 4549 (1998). Roy:1999dx A. Roy, C. Y. Lin and U. Mohideen, Phys. Rev. D 60, 111101 (1999). Bressi:2002fr G. Bressi, G. Carugno, R. Onofrio and G. Ruoso, Phys. Rev. Lett. 88, 041804 (2002). Belluci2009 S. Bellucci and A. A. Saharian, Phys. Rev. D 79, 085019 (2009). Hassan:2022hcb Z. Hassan, S. Ghosh, P. K. Sahoo and K. Bamba, Eur. Phys. J. C 82, 1116 (2022). Grushin2021 A. G. Grushin and A. Cortijo, Phys. Rev. Lett. 106, 020403 (2021). Grushin2011 A. G. Grushin, P. Rodriguez-Lopez, and A. Cortijo, Phys. Rev. B 84, 045119 (2011). Onofrio:2006mq R. Onofrio, New J. Phys. 8, 237 (2006). Bordag:2001qi M. Bordag, U. Mohideen and V. M. Mostepanenko, Phys. Rept. 353, 1-205 (2001). Ambjorn1983 J. Ambjorn and S. Wolfram, Annals Phys. 147, 1 (1983). Chodos:1974je A. Chodos, R. L. Jaffe, K. Johnson, C. B. Thorn and V. F. Weisskopf, Phys. Rev. D 9, 3471 (1974). Chodos:1974pn A. Chodos, R. L. Jaffe, K. Johnson and C. B. Thorn, Phys. Rev. D 10, 2599 (1974). Johnson:1975zp K. Johnson, Acta Phys. Polon. B 6, 865 (1975). Rohim:2022mri A. Rohim, A. S. Adam and K. Yamamoto, Prog. Theor. Exp. Phys. 2023, 013B05 (2023). Lutken:1983hm C. A. Lutken and F. Ravndal, J. Phys. G 10, 123 (1984). Sitenko:2014kza Y. A. Sitenko, Phys. Rev. D 91, 085012 (2015). Cougo-Pinto:1998jwo M. V. Cougo-Pinto, C. Farina and A. C. Tort, Conf. Proc. C 9809142, 235 (1999). Ostrowski:2005rm M. Ostrowski, Acta Phys. Polon. B 37, 1753 (2006). Elizalde:2002kb E. Elizalde, F. C. Santos and A. C. Tort, J. Phys. A 35, 7403 (2002). Cougo-Pinto:1998jun M. V. Cougo-Pinto, C. Farina, M. R. Negrao and A. C. Tort, J. Phys. A 32, 4457 (1999). Frank:2006ww M. Frank and I. Turan, Phys. Rev. D 74, 033016 (2006). Erdas:2013jga A. Erdas and K. P. Seltzer, Phys. Rev. D 88, 105007 (2013). Martin-Ruiz:2016ijc A. Martín-Ruiz and C. A. Escobar, Phys. Rev. D 94, 076010 (2016). Cruz:2017kfo M. B. Cruz, E. R. Bezerra de Mello and A. Yu. Petrov, Phys. Rev. D 96, 045019 (2017). Erdas:2020ilo A. Erdas, Int. J. Mod. Phys. A 35, 2050209 (2020). Escobar-Ruiz:2021dxi A. M. Escobar-Ruiz, A. Martín-Ruiz, E. C. A. and R. Linares, Int. J. Mod. Phys. A 36, 2150168 (2021). Blasone:2018nfy M. Blasone, G. Lambiase, L. Petruzziello and A. Stabile, Eur. Phys. J. C 78, no.11, 976 (2018). Escobar:2020pes C. A. Escobar, L. Medel and A. Martín-Ruiz, Phys. Rev. D 101, 095011 (2020). Cruz:2018thz M. B. Cruz, E. R. Bezerra de Mello and A. Y. Petrov, Phys. Rev. D 99, 085012 (2019). Kostelecky:1988zi V. A. Kostelecky and S. Samuel, Phys. Rev. D 39, 683 (1989). Colladay:1996iz D. Colladay and V. A. Kostelecky, Phys. Rev. D 55, 6760 (1997). Colladay:1998fq D. Colladay and V. A. Kostelecky, Phys. Rev. D 58, 116002 (1998). Kostelecky:2003fs V. A. Kostelecky, Phys. Rev. D 69, 105009 (2004). Kostelecky:1994rn V. A. Kostelecky and R. Potting, Phys. Rev. D 51, 3923-3935 (1995). Colladay:1994cj D. Colladay and V. A. Kostelecky, Phys. Lett. B 344, 259 (1995). Colladay:1995qb D. Colladay and V. A. Kostelecky, Phys. Rev.
D 52, 6224 (1995). Schwingenheuer1995 B. Schwingenheuer et al., Phys. Rev. Lett. 74, 4376 (1995). Gibbons1997 L. K. Gibbons et al., Phys. Rev. D 55, 6625 (1997). NA31:1990xkc R. Carosi et al., Phys. Lett. B 237, 303 (1990). Kostelecky:1997mh V. A. Kostelecky, Phys. Rev. Lett. 80, 1818 (1998). Schwinberg1981 P. B. Schwinberg, R. S. Van Dyck and H. G. Dehmelt, Phys. Lett. A 81, 2 (1981). VanDyck1986 R. S. Van Dyck, Jr., P. B. Schwinberg, and H. G. Dehmelt, Phys. Rev. D 34, 722 (1986). Brown1986 L. S. Brown and G. Gabrielse, Rev. Mod. Phys. 58, 233 (1986). VanDyck1987 R. S. Van Dyck, Jr., P. B. Schwinberg, and H. G. Dehmelt, Phys. Rev. Lett. 59, 26 (1987). Bluhm:1997ci R. Bluhm, V. A. Kostelecky and N. Russell, Phys. Rev. Lett. 79, 1432 (1997). Bluhm:1997qb R. Bluhm, V. A. Kostelecky and N. Russell, Phys. Rev. D 57, 3932 (1998). Bertolami:1996cq O. Bertolami, D. Colladay, V. A. Kostelecky and R. Potting, Phys. Lett. B 395, 178 (1997). Romeo:2000wt A. Romeo and A. A. Saharian, J. Phys. A 35, 1297 (2002). Bhattacharya:2007vz K. Bhattacharya, arXiv:0705.4275. Bhattacharya:1999bm K. Bhattacharya and P. B. Pal, arXiv:hep-ph/9911498. AFG P. Alberto, C. Fiolhais, and V. M. S. Gil, Eur. J. Phys. 17, 19 (1996). Bellucci:2009hh S. Bellucci and A. A. Saharian, Phys. Rev. D 80, 105003 (2009). Erdas:2021xvv A. Erdas, Int. J. Mod. Phys. A 36, 2150155 (2021).
http://arxiv.org/abs/2307.05892v1
20230712034545
SC-NeuS: Consistent Neural Surface Reconstruction from Sparse and Noisy Views
[ "Shi-Sheng Huang", "Zi-Xin Zou", "Yi-Chi Zhang", "Hua Huang" ]
cs.CV
[ "cs.CV" ]
SC-NeuS: Consistent Neural Surface Reconstruction from Sparse and Noisy Views Shi-Sheng Huang Beijing Normal University [email protected] Zi-Xin Zou Tsinghua University [email protected] Yi-Chi Zhang Beijing Institute of Technology [email protected] Hua Huangcorresponding author. Beijing Normal University [email protected] Received: date / Accepted: date ================================================================================================================================================================================================================================================================================================ Recent neural surface reconstruction approaches using volume rendering have made much progress, achieving impressive surface reconstruction quality, but they are still limited to dense input views with highly accurate poses. To overcome such drawbacks, this paper pays special attention to consistent surface reconstruction from sparse views with noisy camera poses. Unlike previous approaches, the key difference of this paper is to exploit multi-view constraints directly from the explicit geometry of the neural surface, which can be used as effective regularization to jointly learn the neural surface and refine the camera poses. To build effective multi-view constraints, we introduce a fast differentiable on-surface intersection to generate on-surface points, and propose view-consistent losses on such differentiable points to regularize the neural surface learning. Based on this, we propose a joint learning strategy for the neural surface representation and camera poses, named SC-NeuS, to perform geometry-consistent surface reconstruction in an end-to-end manner. With extensive evaluation on public datasets, our SC-NeuS achieves consistently better surface reconstruction results with fine-grained details than previous state-of-the-art neural surface reconstruction approaches, especially from sparse and noisy camera views. The source code is available at <https://github.com/zouzx/sc-neus.git>. § INTRODUCTION 3D surface reconstruction from multi-view images continues to be an important research topic in the computer vision and graphics communities. Unlike traditional Multi-View Stereo (MVS) based methods leveraging structure-from-motion (SfM) <cit.> techniques for sparse <cit.> or dense <cit.> surface reconstruction, recent neural surface reconstruction approaches <cit.> learn a deep implicit representation <cit.> with the aid of volume rendering <cit.>, leading to more complete and fine-grained surface reconstruction quality, and have received much research attention for multi-view image based 3D reconstruction. Like Neural Radiance Fields (NeRF) <cit.>, one main drawback of most neural surface reconstruction approaches (NeuS <cit.>, VolSDF <cit.>, Unisurf <cit.>, NeuralWarp <cit.>, Geo-NeuS <cit.>) is the dependency on dense input views, which is not suitable for many real-world applications in AR/VR, autonomous driving or robotics, where only sparse input views with often noisy camera poses are available. Some subsequent works propose to improve the reconstruction quality in sparse scenarios by introducing regularization such as sparse points <cit.>, multi-view depth priors <cit.>, rendering ray entropy <cit.> or geometry-aware feature volumes <cit.>. However, most of these approaches still rely on highly accurate camera poses, which cannot be easily obtained with techniques like COLMAP <cit.> from sparse input views.
To overcome the dependency on highly accurate camera poses, many recent works propose to jointly learn the deep implicit geometry and refine the camera poses, guided by novel registration from photometric <cit.> or silhouette <cit.> priors. But since those registrations are often performed independently across dense input views, the registration quality drops significantly in sparse-view scenarios (Fig. <ref>), where there are not enough cross-view relations to effectively bundle-adjust both the deep implicit geometry and the camera poses. It remains challenging to jointly learn the deep implicit geometry and camera poses from sparse input views <cit.> for geometry-consistent surface reconstruction. This paper proposes a Sparse-view Consistent Neural Surface (SC-NeuS) learning strategy, which performs geometry-consistent surface reconstruction with fine-grained details from sparse views (as few as 3) with noisy camera poses. Unlike previous independent registrations from dense input views, we seek to explore more effective multi-view constraints between sparse views. Due to the gap between the volume rendering integral and point-based SDF modeling <cit.>, rather than relying on depth constraints <cit.> rendered from the under-constrained signed distance field <cit.>, we utilize extra regularization directly from the explicit geometry of the neural surface representation. Our key insight is that the observations of the explicit surface geometry across multiple views should be consistent, which can be used as effective regularization to jointly learn both the neural surface representation and the camera poses. Specifically, we first introduce a fast differentiable on-surface intersection to sample on-surface points from the explicit geometry of the neural surface, and then provide effective view-consistent losses defined on such differentiable on-surface intersections, which builds up end-to-end joint learning of the neural surface representation and camera poses. Besides, to further improve the geometry-consistent neural surface learning, we incorporate a coarse-to-fine learning strategy <cit.> for highly accurate and fine-grained surface reconstruction results. To evaluate the effectiveness of our SC-NeuS, we conduct extensive experiments on public datasets including DTU <cit.> and BlendedMVS <cit.> with various geometry scenarios. Compared with previous state-of-the-art approaches <cit.>, our SC-NeuS achieves consistently better geometry-consistent surface reconstruction results both quantitatively and qualitatively, establishing a new state of the art for neural surface reconstruction from sparse and noisy cameras. § RELATED WORK Novel View Synthesis. The recent success of Neural Radiance Fields (NeRF) <cit.> has inspired many subsequent works <cit.> achieving impressive novel view synthesis applications. To overcome the drawback of dense input views, multiple works propose extra regularization or priors for sparse-view novel view synthesis. RegNeRF <cit.> proposes to regularize the rendered patches with depth and appearance smoothness for sparse view synthesis. MVSNeRF <cit.> leverages a similar rendered-depth smoothness loss across unobserved views for generalizable novel view synthesis pre-trained from sparse views. On the other hand, InfoNeRF <cit.> penalizes NeRF's overfitting to limited input views with a ray entropy regularization. Mip-NeRF 360 <cit.> introduces a ray distortion loss, which encourages sparsity of the learned density along each rendering ray.
Besides, some recent approaches <cit.> use depth priors to constrain the NeRF optimization, which also achieves promising novel view synthesis results from sparse input views. Different from all of these previous approaches, which rely on highly accurate camera poses as input, our approach aims at geometry-consistent neural surface learning with noisy camera poses, and contributes a joint neural surface learning and camera pose optimization strategy for sparse input views. Neural Implicit Surface Representation. Neural implicit representations have been a state-of-the-art way to represent the geometry of objects or scenes since the pioneering work of DeepSDF <cit.> and its successors <cit.>. IDR <cit.> introduces neural surface rendering for the neural implicit representation (signed distance function, SDF), which enables precise surface learning from 2D images. Inspired by the success of NeRF <cit.>, NeuS <cit.> and VolSDF <cit.> propose to transform the signed distance field into a density field using a weight function and perform volume rendering along with the radiance field, achieving impressive surface reconstruction results with fine-grained details. Geo-NeuS <cit.> incorporates more explicit surface supervision for more accurate neural surface learning. UNISURF <cit.> explores the balance between surface rendering and volume rendering. NeuralWarp <cit.> provides a geometry-aware volume rendering which utilizes multi-view geometry priors for geometry-consistent surface reconstruction. However, most of these previous works depend on dense input views for accurate neural surface learning, which is not feasible for sparse scenarios. Recently, SparseNeuS <cit.> learns geometry encoding priors from image features for generalizable neural surface learning from sparse input views, but it still relies on highly accurate camera poses. In contrast, our approach enables accurate neural surface learning from sparse input views and optimizes the noisy camera poses simultaneously. Joint Deep Implicit Geometry and Pose Optimization. BARF <cit.> is probably one of the first works to reduce NeRF's dependence on highly accurate camera poses, by introducing a coarse-to-fine registration for the positional encoding. GARF <cit.> provides Gaussian-based activation functions for the coarse-to-fine registration for more robust camera pose refinement. SCNeRF <cit.> builds a geometric loss on the ray intersection re-projection error. Subsequent works <cit.> also incorporate photometric losses from silhouettes or masks, but require accurate foreground segmentation. However, most of these approaches still depend on dense input views and are not effective for sparse scenarios. Different from these previous approaches, our approach exploits view-consistent constraints on the explicit surface geometry of the neural surface representation, which provide more effective cues than rendered depth <cit.> to jointly learn the neural surface and refine the camera poses in an end-to-end manner, without needing any shape prior <cit.> or RGB-D input <cit.>. § SC-NEUS Given sparse view images (as few as 3) of an object with noisy camera poses, we aim at reconstructing the surface, represented by a neural implicit function, while jointly optimizing the camera poses.
Specifically, for sparse input views I={I_i} with noisy camera poses T={T_i} (i ∈{1,2,3}), we represent the object's geometry as a signed distance field (SDF) f(x,θ) (x ∈ R^3, θ the MLP parameters), and render its appearance using volume rendering from an extra radiance field c(x,θ_c,v) as provided by NeuS <cit.>. By introducing effective multi-view constraints across the sparse views, we propose a new joint learning strategy, called SC-NeuS, for both signed distance field f(x,θ) learning and camera pose T={T_i} optimization. Fig. <ref> demonstrates the main pipeline of our SC-NeuS framework, trained in an end-to-end manner. From Multi-view Constraints to Geometry-consistent Surface Learning. Unlike previous approaches <cit.> that perform joint deep implicit geometry learning and camera pose optimization using photometric losses on dense input views independently, we exploit multi-view constraints as extra effective regularization on the surface learning. Due to the bias gap between the volume rendering integral and point-based SDF modeling <cit.>, instead of relying on depths rendered from the under-constrained signed distance field <cit.> matched to multi-view depth priors <cit.>, we propose to utilize multi-view regularization directly from the explicit surface geometry of the neural surface for better multi-view surface reconstruction. Our key observation is that geometry cues (points or patches) lying on the shape surface should be consistently observed across multiple views, which intuitively provides effective constraints for geometry-consistent surface learning, especially in sparse scenarios. Specifically, we first derive a fast differentiable point intersection on the explicit surface of the signed distance field f(x,θ) (Sec. <ref>). Then we provide view-consistent losses for two kinds of on-surface geometry cues (3D sparse points and patches) based on our differentiable point intersection, including a view-consistent re-projection loss and a patch-warping loss (Sec. <ref>), to effectively regularize the joint learning of the signed distance field f(x,θ) and the camera poses T. Since the intersection derived by our approach is differentiable with respect to both the neural surface parameters θ and the camera poses T, our neural surface learning can be performed in an end-to-end manner without any other supervision. §.§ Differentiable On-surface Intersection To enable multi-view consistent constraints, the essential requirement on the geometry cues is that they lie on the explicit surface, i.e., the zero level set of the signed distance field f(x,θ). Considering a 2D feature point p ∈ R^2 in the reference image I_i with camera pose T_i, we seek to compute its intersection point P^*∈ R^3 with the surface geometry of the signed distance field f(x,θ). According to the volume rendering of the signed distance function <cit.>, there exists a ray length t^* such that: P^* = c_i + t^*v, f(P^*,θ) = 0, where c_i and v are the camera center and the casting ray direction of p, respectively. Although IDR <cit.> provides a differentiable derivation of the intersection P^*, it is too slow to enable efficient neural surface learning. We therefore propose a new differentiable on-surface intersection for fast neural surface learning. Specifically, as shown in Fig. <ref>, we first uniformly sample points along the casting ray v of the 2D feature point p with a set of sampled depth values 𝐓={t_k}. Then we find the depth value t_k such that f(c_i+t_k v,θ) f(c_i+t_k+1v,θ) < 0.
Finally, we move t_k along the casting ray v to the on-surface intersection P^*(T_i,θ,v) following: P^*(T_i,θ,v) = c_i+t_k v - (f(c_i+t_k v, θ)/⟨∂ f/∂ x, v⟩) v. §.§ View-Consistent Loss Based on our differentiable intersection, we further define effective losses for neural surface learning in the multi-view scenario. Specifically, we utilize two kinds of on-surface geometry cues, i.e., 3D sparse points and patches (Fig. <ref>), and formulate view-consistent losses for these on-surface geometry cues, namely a view-consistent re-projection loss and a view-consistent patch-warping loss. View-consistent Re-projection Loss. Considering a pair of 2D feature correspondences (p_k^i,p_k^j) from a reference image I_i (camera pose T_i) and a target image I_j (camera pose T_j) with p_k^i ∈ I_i, p_k^j ∈ I_j, we compute the on-surface intersection 3D point P_k^ij via our differentiable intersection. By re-projecting P_k^ij back to I_i and I_j, we get the re-projected locations p̅_k^i = π(P_k^ij , T_i), p̅_k^j = π(P_k^ij , T_j), where π(·) is the camera projection operator. For geometry-consistent surface reconstruction, the re-projection errors p_k^i →p̅_k^i and p_k^j →p̅_k^j should be minimized. We thus formulate the view-consistent re-projection loss L_r over all possible sparse correspondences as: L_r = ∑_i,j∑_k ∈ N_k (|p_k^i-π(P_k^ij , T_i)| + |p_k^j-π(P_k^ij , T_j)|). View-consistent Patch-warping Loss. We also consider on-surface patches (Fig. <ref>) to utilize geometric structure constraints and further improve the neural surface learning. Similar to the patch warping in traditional MVS methods <cit.>, we warp the on-surface patch to multi-view images, but in a differentiable way using our differentiable multi-view intersection. Specifically, for a small patch s on the surface observed by an image pair I_i,I_j, we represent the plane equation of s in the camera coordinates of the reference image I_i as: n^T p + d = 0, where p(T_i,T_j) is the differentiable multi-view intersection point from I_i,I_j with camera poses T_i,T_j, and n is the normal computed by automatic differentiation of the signed distance field f(x,θ) at p(T_i,T_j). Suppose that s is projected to I_i,I_j to obtain image patches s_i ∈ I_i, s_j ∈ I_j, respectively; then for an image pixel x ∈s_i and its corresponding pixel x' ∈s_j, we have: x = Hx' , H = K_i(R_iR_j^T - R_i(R_i^T t_i - R_j^T t_j) n^T/d)K_j^-1, where H is the homography matrix, T_i={R_i|t_i}, T_j={R_j|t_j}, and K_i,K_j are the intrinsic camera matrices of the image pair I_i,I_j. We use the normalized cross-correlation (NCC) of the patches (s_i,s_j) as the view-consistent patch-warping loss: L_ncc(s_i,s_j) = Cov(I_i(s_i),I_j(s_j))/√(Var(I_i(s_i))Var(I_j(s_j))), where Cov and Var denote the covariance and variance of the color intensities of the patches (s_i,s_j), respectively. §.§ Training Strategy Based on the view-consistent losses, we formulate the objective function E as: E = L_color + λ_r L_r + λ_ncc L_ncc + λ_reg L_reg, with L_r and L_ncc the view-consistent re-projection loss and patch-warping loss defined above, and L_color and L_reg the color rendering loss and Eikonal regularization loss proposed by NeuS <cit.>: L_color = (1/N)∑_i^N|ℛ(f(x,θ),c(x,θ_c,v),T_i) - I_i|, L_reg = (1/M)∑_i^M(||▽ f_θ||_2 -1)^2, where ℛ(f(x,θ),c(x,θ_c,v),T_i) is the image volume-rendered from f(x,θ) and c(x,θ_c,v) at view T_i.
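To make the two key components above concrete, here is a minimal sketch of ours, not the authors' code. The first block implements the sign-change search plus the single Newton-style correction of the on-surface intersection, using a toy analytic sphere SDF and finite-difference gradients in place of a trained network with autograd (in the actual method the correction is expressed on the network so that P^* carries gradients to both θ and the poses):

import numpy as np

def sphere_sdf(x, radius=1.0):
    # toy analytic SDF standing in for the learned f(x, theta)
    return np.linalg.norm(x) - radius

def sdf_grad(f, x, eps=1e-5):
    # central finite differences stand in for automatic differentiation
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g

def surface_intersection(f, c, v, t_samples):
    """Find t_k with f(c + t_k v) * f(c + t_{k+1} v) < 0, then apply the correction
    P* = c + t_k v - v * f(c + t_k v) / <grad f, v> from the equation above."""
    vals = np.array([f(c + t * v) for t in t_samples])
    sign_change = np.where(vals[:-1] * vals[1:] < 0)[0]
    if len(sign_change) == 0:
        return None                        # ray misses the surface
    t_k = t_samples[sign_change[0]]
    x_k = c + t_k * v
    return x_k - v * f(x_k) / np.dot(sdf_grad(f, x_k), v)

c = np.array([0.0, 0.0, -3.0])             # camera center
v = np.array([0.0, 0.0, 1.0])              # unit ray direction
P = surface_intersection(sphere_sdf, c, v, np.linspace(0.0, 6.0, 64))
print(P, sphere_sdf(P))                    # P ~ (0, 0, -1), residual ~ 0

The second block (continuing the same script) assembles the plane-induced homography H and the NCC used by the patch-warping loss; the variable names and the world-to-camera convention X_cam = R X + t are our assumptions:

def plane_homography(K_i, R_i, t_i, K_j, R_j, t_j, n, d):
    """H with x ~ H x': warps pixels of the target image I_j into the reference
    image I_i for the plane n^T p + d = 0 in reference-camera coordinates,
    following the H given above (all arguments are NumPy arrays)."""
    outer = (R_i @ (R_i.T @ t_i - R_j.T @ t_j))[:, None] @ n[None, :]
    return K_i @ (R_i @ R_j.T - outer / d) @ np.linalg.inv(K_j)

def ncc(a, b, eps=1e-8):
    # normalized cross-correlation of two flattened patch intensity arrays
    a, b = a - a.mean(), b - b.mean()
    return (a * b).mean() / (a.std() * b.std() + eps)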
In summary, we jointly learn the signed distance field f(x,θ), the radiance field c(x,θ_c,v), and the camera poses T={T_i} by optimizing the objective function E in an end-to-end manner: {θ^*,θ_c^*,T^*} = arg min_θ,θ_c,T E.

Network Training. In the early stage of training, the signed distance field f(x,θ) has not yet converged well, so we adopt a warm-up strategy to assist convergence. Specifically, for the differentiable on-surface intersection of a 3D sparse point, we use the depth rendered from the current signed distance field f(x,θ) to filter out outlier on-surface intersections. Given a 2D feature point p and its on-surface intersection point P^* along the casting ray v, we compute its depth value t_d <cit.> and obtain the re-projected 3D point P_d = c_i + t_d v. If the distance between P^* and P_d is larger than a threshold, we set P^*=P_d in the joint learning. After warming up for a certain number of training epochs, we perform the learning according to equation <ref> until both the signed distance field f(x,θ) and the camera poses T have converged.

Coarse-to-Fine Learning. Following BARF <cit.>, we adopt a similar coarse-to-fine positional encoding strategy for better convergence during the joint learning of the neural surface and camera poses. Please refer to BARF <cit.> for more details.

§ EXPERIMENTS AND ANALYSIS

To evaluate the effectiveness of our SC-NeuS, we conduct surface reconstruction experiments from sparse and noisy views on public datasets, comparing against previous state-of-the-art approaches. We then present an ablation study and analysis of the main components of our approach for a comprehensive understanding of SC-NeuS.

Implementation details. We adopt an architecture similar to IDR <cit.> and NeuS <cit.>, using one MLP (8 hidden layers of width 256) for the SDF f and another MLP (4 hidden layers of width 256) for the radiance field c. For 2D feature correspondences, we use the out-of-the-box key-point detection and description model ASLFeat <cit.> and the key-point feature matching model SuperGlue <cit.>. We randomly sample 512 rays and select 256 2D correspondences per batch, and train our model for 100K iterations on a single NVIDIA RTX3090 GPU.

§.§ Experimental Settings

Dataset. Like previous neural surface reconstruction approaches <cit.>, we evaluate our approach on the public DTU dataset <cit.> with 15 different object scans. The DTU dataset contains 49 to 64 images at a resolution of 1200 × 1600 for each object scan, with known camera intrinsics and ground-truth camera poses. For sparse views, we follow <cit.> and <cit.>, randomly selecting as few as 3 views for each object scan and synthetically perturbing their camera poses with additive Gaussian noise 𝒩(0, 0.15), thus obtaining a sparse version of the DTU dataset for the subsequent evaluation. Besides, we also evaluate on 7 challenging scenes from the low-res set of the BlendedMVS dataset <cit.>, which includes 31-143 images at a resolution of 768 × 576. With pre-processing similar to that of our DTU data, we also select 3 views per scene and obtain noisy initial poses.

Baselines. We compare our approach with previous state-of-the-art approaches that also perform neural surface reconstruction by jointly learning the neural surface and camera poses, including BARF <cit.> and IDR <cit.>.
Besides, although NeuS <cit.> does not perform camera pose optimization, it is a state-of-the-art neural surface reconstruction approach, so we also compare against NeuS augmented with the coarse-to-fine strategy of BARF <cit.>, called “NeuS-BARF”, to enable a fair comparison. Since IDR uses an extra object mask for neural surface learning, for a fair comparison between IDR and NeuS-BARF we additionally run NeuS-BARF with extra object mask supervision, named “NeuS-BARF*”, in the subsequent evaluations.

§.§ Evaluation on DTU Dataset

Camera Pose Comparison. Table <ref> reports the average RMSE (both translation and rotation errors) between the estimated and ground-truth camera poses on the DTU dataset for the compared approaches: BARF, IDR, NeuS-BARF, NeuS-BARF*, and ours. Among all compared approaches, the NeRF-like approach BARF achieves the worst RMSE. This makes sense, since the other approaches (including ours) adopt a signed distance field to represent the object's geometry, which is more powerful than the radiance field used in BARF. Although IDR and NeuS-BARF (NeuS-BARF*) achieve varying RMSE on individual object scans of the DTU dataset, on average they reach the same level of RMSE, i.e., similar camera pose estimation quality. In contrast, our approach significantly outperforms all the other baselines in RMSE (both translation and rotation errors) for camera pose estimation. This shows that the multi-view consistent constraints in our SC-NeuS are more effective for camera pose estimation than the single-view, independent regularization with which the baseline approaches optimize camera poses alongside neural surface learning.

Surface Reconstruction Quality. We also compare the surface reconstruction quality of the different approaches. Table <ref> reports quantitative results on the Chamfer Distance metric evaluated on the DTU dataset. Again, our approach achieves consistently much better Chamfer Distance accuracy than the other approaches. Fig. <ref> shows some visual comparisons. Even though BARF achieves acceptable camera pose estimation quality, it still fails to produce fine surface reconstructions (see the first row of Fig. <ref>). Besides, lacking extra object mask supervision, NeuS-BARF cannot estimate camera poses accurately enough and thus fails to reconstruct fine object surfaces. This demonstrates that the coarse-to-fine positional encoding proposed in BARF is not effective in the sparse-view setting, even when using a neural signed distance field representation as in NeuS. In contrast, our approach uses view-consistent constraints to regularize the joint learning of the neural surface representation and camera poses, leading to geometry-consistent surface reconstruction with fine-grained details. Note that the fine-grained details reconstructed by our approach are also better than those of state-of-the-art neural surface reconstruction approaches such as NeuS and SparseNeuS, even when the latter are given ground-truth camera poses as input (Fig. <ref>). Please refer to our supplementary materials for more comparison results.
§.§ Evaluation on BlendedMVS Dataset

Beyond the DTU dataset, we also evaluate on the BlendedMVS dataset to see how our approach behaves across different kinds of data. Fig. <ref> shows visual surface reconstruction comparisons for NeuS-BARF, NeuS-BARF*, IDR, and our approach. Our approach achieves much better surface reconstruction quality, with fine-grained details, than the other approaches. We do not include BARF in this visual comparison, since BARF fails to converge in most of the compared cases. Please refer to our supplementary materials for more quantitative and qualitative comparisons on the BlendedMVS dataset.

§.§ Ablation and Analysis

The re-projection loss and the patch-warping loss are the two main components of our approach. We conduct an ablation study to see how these two losses affect the final quality of both surface reconstruction and camera pose estimation.

View-consistent Re-projection. We first implement a variant of our full system without the view-consistent re-projection loss, termed 'w/o L_r', and run surface reconstruction on the DTU dataset. Table <ref> shows the average RMSE for camera pose estimation and the Chamfer Distance (CD) for surface reconstruction for 'w/o L_r' and our full system (termed 'Full'). There is a large accuracy drop in both RMSE and CD from 'Full' to 'w/o L_r'. This means the view-consistent re-projection loss makes the major contribution in our SC-NeuS to geometry-consistent surface reconstruction and accurate camera pose estimation. Note, however, that 'w/o L_r' still outperforms the other compared approaches, including BARF, IDR, and NeuS-BARF, achieving better average RMSE and CD in Tables <ref> and <ref>.

View-consistent Patch-warping. We also implement a variant without the view-consistent patch-warping loss, termed 'w/o L_ncc'. According to the average RMSE and CD comparison between 'w/o L_ncc' and 'Full' in Table <ref>, 'w/o L_ncc' also achieves worse RMSE for camera pose estimation and worse CD for surface reconstruction than 'Full', even though the quality drop is smaller than that from 'Full' to 'w/o L_r'. Fig. <ref> shows visual surface reconstruction comparisons on two examples from the DTU dataset using 'w/o L_r', 'w/o L_ncc', and 'Full'. There is a clear surface quality drop when the view-consistent re-projection loss is removed ('w/o L_r'). And although our approach still achieves fine surface reconstruction without the view-consistent patch-warping loss (see the results of 'w/o L_ncc'), adding it to the full system brings an obvious enhancement of fine-grained details (see the results of 'Full'). This means the view-consistent patch-warping loss is more effective for fine-grained details, while the view-consistent re-projection loss does more to boost the quality of the joint learning of the neural surface and camera poses.

§.§ Limitation and Discussion

A first limitation of our approach is its dependence on the quality of 2D feature point matching. Without enough feature matches in challenging cases such as low-texture regions or changing illumination, our approach cannot produce good surface reconstruction results.
Large camera pose variation between the sparse views can also make the joint optimization of our approach fail. In the future, we would like to use more robust explicit surface priors for highly reliable neural surface reconstruction.

§ CONCLUSION

Jointly learning the neural surface representation and camera poses remains a challenging problem, especially in sparse scenarios. This paper proposes a new joint learning strategy, called SC-NeuS, which exploits multi-view constraints derived directly from the explicit geometry of the neural surface. Compared with previous neural surface reconstruction approaches, our SC-NeuS achieves consistently better surface reconstruction quality and camera pose estimation accuracy, yielding geometry-consistent neural surface reconstructions with fine-grained details. We hope that our approach will inspire further efforts on neural surface reconstruction from sparse view images, enabling more feasible real-world applications in this community.
http://arxiv.org/abs/2307.06143v1
20230712125803
Learning Kernel-Modulated Neural Representation for Efficient Light Field Compression
[ "Jinglei Shi", "Yihong Xu", "Christine Guillemot" ]
cs.CV
[ "cs.CV", "eess.IV" ]
Learning Kernel-Modulated Neural Representation for Efficient Light Field Compression

Jinglei Shi, Yihong Xu, Christine Guillemot Fellow, IEEE
=====================================================================================

Light field is a type of image data that captures 3D scene information by recording light rays emitted from a scene at various orientations. It offers a more immersive perception than classic 2D images, but at the cost of huge data volumes. In this paper, we draw inspiration from the visual characteristics of the Sub-Aperture Images (SAIs) of light fields and design a compact neural network representation for the light field compression task. The network backbone takes randomly initialized noise as input and is supervised on the SAIs of the target light field. It is composed of two types of complementary kernels: descriptive kernels (descriptors) that store scene description information learned during training, and modulatory kernels (modulators) that control the rendering of different SAIs from the queried perspectives. To further enhance the compactness of the network while retaining high quality of the decoded light field, we introduce modulator allocation and kernel tensor decomposition mechanisms, followed by non-uniform quantization and lossless entropy coding, to form an efficient compression pipeline. Extensive experiments demonstrate that our method outperforms other state-of-the-art (SOTA) methods by a significant margin in the light field compression task. Moreover, after aligning descriptors, the modulators learned from one light field can be transferred to new light fields for rendering dense views, indicating a potential solution for the view synthesis task.

light field compression, compact neural representation, modulation, kernel decomposition.

§ INTRODUCTION

Light fields <cit.> record both the intensity and direction of light rays emitted by a scene in 3D space. The additional angular information provides users with a more immersive experience than classic 2D images when navigating within the captured scene, and powers a series of computer vision tasks such as depth estimation <cit.>, super-resolution <cit.>, instance segmentation <cit.>, and salient object detection <cit.>. Although the spatio-angular information of light fields offers numerous benefits for various applications, it also introduces a significant challenge in terms of data volume. The inherent redundancy in light fields results in large storage requirements, increased transmission bandwidth, and demands on display hardware. Therefore, a crucial step in advancing light field imaging techniques towards practical scenarios is the development of effective compression solutions.

Early compression methods <cit.> primarily concentrated on directly compressing the lenslet image obtained from plenoptic cameras with the help of the HEVC-intra coding framework. However, these intra-coding-based approaches have demonstrated limited performance. In contrast, more general solutions <cit.> have employed video compression standards (in particular HEVC) to treat the set of light field views as a pseudo video sequence, which enables the exploration of temporal correlations among frames. Besides classical video codecs, more advanced learning-based video compression solutions <cit.> can also be applied in the light field compression context.
Methods based on view synthesis techniques have also been proposed in previous studies <cit.> for the purpose of compression, where the encoder side focuses on compressing a subset of SAIs, and the decoder reconstructs the full light field from the received subset by applying view rendering methods. In <cit.>, the authors utilize Matching Pursuit to perform a linear approximation enabling disparity-based view prediction. In <cit.>, a depth-based light field codec called Warping and Sparse Prediction (WaSP) was introduced, and its enhanced version was later incorporated as the 4D-Prediction mode in the JPEG Pleno light field coding standard. The WaSP codec relies on depth-based warping and the merging of warped reference views, forming the primary prediction stage. Transforms are another effective tool for reducing light field redundancy. A 4D-Transform mode named Multidimensional Light field Encoder (MuLE) <cit.> has been adopted in JPEG Pleno, where the 4D redundancy of light fields is exploited by applying a 4D-DCT transform to 4D spatio-angular blocks. The authors in <cit.> propose a graph-transform-based light field compression method tailored to the scene geometry. Other transforms, such as mixtures of experts <cit.>, homography-based low-rank approximation <cit.>, or the shearlet transform <cit.>, show superiority for narrow-baseline data but suffer from performance degradation when the light field's baseline increases.

The emergence of the Neural Radiance Field (NeRF) <cit.> has ushered in a new era of employing implicit neural networks to represent scenes for diverse applications <cit.>. NeRF utilizes Multi-Layer Perceptrons (MLPs) to establish mappings between the 5D coordinates (position and orientation) of light rays and the color as well as the density used in the volumetric rendering process. Such an Implicit Neural Representation (INR) has also brought a fresh perspective of using network weights to represent light fields for the compression task. The authors in <cit.> propose to train a NeRF with a low-rank constraint in an ADMM optimization framework, followed by distillation and quantization operations, to obtain a compact representation of light fields. Implicit neural networks can also serve as a prior in the compression context: the authors in <cit.> propose a two-stage workflow that uses a GRU to encode transient information between SAIs into latent vectors, which are then processed by a generator to retrieve blocks of light field views. Let us note that INR-based methods have created a link between the problems of light field compression and network compression. Methods such as pruning <cit.>, tensor rank optimization <cit.>, and quantization <cit.> that address the compactness of deep models are therefore applicable to light field compression.

Light fields are captured by specially designed equipment <cit.> after a single exposure. On one hand, all SAIs exhibit similar visual content of the scene; on the other hand, each SAI has unique visual content that is only observed from its corresponding perspective, due to parallax and specularity. In this paper, we propose a novel network design that draws inspiration from the above visual characteristics of light fields to address the compression problem. The network is composed of shared descriptive kernels (descriptors) and individual modulatory kernels (modulators): the descriptors are repeatedly employed when rendering different SAIs, which mirrors the fact that the SAIs share similar visual content.
To ensure that each SAI has the visual content specifically observed from its corresponding perspective, the rendering process is guided by modulators, with an individual set of modulators for each SAI. The network backbone takes uniformly initialized noise as input and is supervised by a stream of randomly selected SAIs. At the end of each training iteration, the modulators of the current SAI are switched out when the ground-truth SAI changes. The light field to be compressed is ultimately represented implicitly by both descriptors and modulators, where the descriptors account for the majority of the network parameters and store the scene information, while the lightweight modulators control the rendering of the desired SAI.

The essence of applying an INR-based method to compression lies in achieving a delicate balance between model compactness and decoding quality, i.e., utilizing the minimum number of network parameters to represent a light field while preserving the highest possible quality of the decoded views. To address this challenge, we further propose modulator allocation and kernel tensor decomposition mechanisms. The modulator allocation mechanism effectively mitigates parameter explosion, especially in scenarios where the target light field has a high angular resolution. Additionally, the kernel tensor decomposition, a widely used network compression technique, decomposes both high-dimensional descriptors and modulators into products of low-dimensional components. This decomposition strategy aims to reduce the overall parameter count while preserving reconstruction accuracy. In our efforts towards network compactness, we also adopt a quantization-aware training strategy <cit.>, which reduces the number of bits required for each weight and limits quantization error.

In order to validate the effectiveness of our proposed method, we quantitatively and qualitatively evaluate it against several representative state-of-the-art (SOTA) methods tailored for light field compression, including the video-compression-based methods HEVC-Lozenge <cit.>, HLVC <cit.>, and RLVC <cit.>, the 4D-Prediction mode of the coding standard JPEG Pleno <cit.>, as well as the most recent INR-based schemes DDLF <cit.> and QDLR-NeRF <cit.>. Experimental results show that our method outperforms the others by a large margin and yields better visual reconstruction quality. Moreover, we carry out a comprehensive comparison with the other two INR-based methods (DDLF and QDLR-NeRF) in terms of encoding and decoding complexity, memory consumption, and generality across different types of light fields, proving the superiority of our method in the context of compression. Besides the performance gains for the compression task, another advantage of our proposed method is that the modulators learned from one light field can be applied to new light fields for synthesizing dense views after aligning the descriptors. This not only verifies the functionality of the two types of kernels, but also implies a potential view synthesis philosophy based on kernel transfer. To summarize, the contributions of our work are as follows:

* We propose a novel implicit representation format for light fields, composed of complementary descriptors & modulators that respectively store scene information and control the rendering of different SAIs.
* By introducing modulator allocation and kernel tensor decomposition mechanisms, the network effectively avoids parameter explosion when light fields have high angular resolution, and reaches a better balance between model compactness and decoding quality.

* We carry out extensive experiments showing that our method outperforms other SOTA methods both quantitatively and qualitatively for the compression task. It generalizes better across different types of data and is superior to other INR-based methods in terms of complexity and resource consumption.

* We further demonstrate that the learned modulators can be transferred to new light fields, helping to generate dense views of the new light fields and implying a potentially novel view synthesis philosophy as well.

§ METHODOLOGY

§.§ Notations and network backbone

We represent a light field with a 4D function L(x,y,u,v) following the two-parallel-plane parameterization introduced in <cit.>, where (x,y) ∈ [1;X] × [1;Y] and (u,v) ∈ [1;U] × [1;V] are respectively the spatial and angular coordinates. The SAI located at angular position (u,v) is denoted I_u,v throughout the rest of the paper for simplicity.

The goal of our work is to find a compact neural representation for light fields that contains a limited number of parameters while still being able to retrieve high-quality views. In a previous study <cit.>, a deep convolutional decoder was proposed to fit images for tasks such as compression, inpainting, or denoising. However, this approach was specifically designed for a single RGB image and is not directly applicable to light fields. A straightforward solution would be to train as many deep decoders as there are SAIs in the target light field, but this would be time-consuming and would require a large number of parameters, hence it is not suitable for light field compression. In the subsequent work <cit.>, an advancement was made by cascading a Gated Recurrent Unit (GRU) <cit.> architecture and a deep decoder <cit.>. The GRU architecture captures transient information between SAIs, while the deep decoder captures static information within SAIs. This two-stage compression pipeline exhibited competitive performance compared to JPEG Pleno for light fields captured using a Lytro camera. However, the introduction of the GRU module led to issues such as an unstable training procedure, memory overflow, and degraded performance when dealing with wide-baseline light fields.

Taking into account the limitations of the network designs in <cit.>, we propose an implicit convolutional network backbone for light field representation, as depicted in Fig. <ref>. In this figure, the cuboids colored in blue, green, and orange represent convolutional kernels that serve distinct functionalities in each layer. Similar to <cit.>, the network's backbone consists of sequentially connected convolutional layers. During training, the network takes a volume of uniform noise ϵ as input and is supervised by a randomly selected SAI denoted I_u,v. In this approach, the image is implicitly represented by the network's parameters as follows: Θ^* = arg min_Θ E(H_Θ(ϵ), I_u,v), ∀ I_u,v∈ L, where H_Θ(·) represents the rendering procedure of the network, and Θ={K^i,b^i} are the kernel weights and biases of each layer, with i the layer index. We use the Mean Square Error (MSE) as the loss function E(·) to supervise the training of the network.
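As a concrete illustration of this supervision scheme, below is a minimal PyTorch-style training step. The interface is assumed, not the authors' code: the call signature net(eps, u, v) (the network internally selecting the modulators indexed by (u, v), as described in the next section) is hypothetical, and light_field is taken to be a (U, V, 3, H, W) tensor of SAIs.

```python
import torch
import torch.nn.functional as F

def train_step(net, eps, light_field, optimizer):
    # One iteration of the objective above: draw a random SAI I_{u,v},
    # render H_Theta(eps) with the modulators of view (u, v), minimize MSE.
    U, V = light_field.shape[:2]
    u = int(torch.randint(U, (1,)))
    v = int(torch.randint(V, (1,)))
    pred = net(eps, u, v)
    loss = F.mse_loss(pred, light_field[u, v])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```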
When it comes to network design, we have opted for convolutional layers instead of fully connected layers for two main reasons: (1) the convolution operation is highly optimized for parallel computation and is more flexible with respect to input sizes; (2) although fully connected layers can be used to construct Multi-Layer Perceptrons (MLPs) for light field representation, as demonstrated in <cit.>, the rendered output of an MLP is essentially individual pixels, making the output hard to optimize with losses such as the Structural Similarity (SSIM) <cit.> or the Learned Perceptual Image Patch Similarity (LPIPS) <cit.> that involve local regions of the rendered image. In contrast, convolutional layers produce feature maps or RGB images as outputs, enabling optimization with respect to metrics like SSIM or LPIPS.

Regarding the network input, the reason for using a uniform noise volume is twofold: (a) the noise does not contain any additional information, ensuring that it does not interfere with the learning of scene information during training; (b) the noise volume is generated using a pseudo-random seed, so when the encoder and decoder share the same seed, the transmission of the noise volume can be avoided. This helps conserve bandwidth and allows for efficient transmission in the compression context.

Based on these choices of layer type and network input, the proposed backbone consists of six cascaded convolutional layers with kernel size 3×3, except for the last decoding layer, which converts the channel number to 3 and has kernel size 1×1. To gradually increase the resolution of the feature maps, the first four layers are followed by a bicubic upsampling operation (UP) with a scale factor of 2. Batch Normalization (BN) is added at the end of each layer, except the last one, to accelerate network convergence. To expand the receptive field without increasing the number of network parameters, we set the kernel dilation rate to 2 for all intermediate layers. More details of the network backbone can be found in Tab. <ref>.

§.§ Complementary descriptor & modulator design

As previously introduced, the combination of a GRU and a deep decoder in <cit.> enables a compact light field representation, but the use of the GRU for modelling the angular prior also leads to limitations such as memory overflow and degraded performance on wide-baseline data. We thus follow a different design philosophy for light field representation, which involves a single network and does not require additional modules. Given a light field, all SAIs obviously share similar scene content, yet they also possess distinct visual elements, such as occlusion and reflection patterns, that vary with perspective. A network designed for compact representation should therefore be able to store the scene information for rendering, while the rendering of each SAI should be controlled by the queried perspective. To fulfill this requirement, we define two types of kernels in the network: Descriptors, which store the scene description information and constitute the majority of the network's parameters, are repeatedly used when rendering every SAI. Modulators, the auxiliary view-wise kernels indexed by the angular coordinates (u,v), modulate the rendering process and are switched from one set to another when rendering different SAIs. As illustrated in Fig.
<ref>, from the first to the second-to-last layer of the network backbone, each layer {K^i, b^i} is composed of descriptors (colored in blue) K^d_i and modulators (colored in green and orange) {K^m_i_u,v,b^m_i_u,v}: K^i = K^d_i⊕ K^m_i_u,v, b^i = b^m_i_u,v, with ⊕ being the concatenation operation along the last dimension. K^d_i and K^m_i_u,v are tensors of sizes k × k × C^i_in× C^d_i_out and k × k × C^i_in× C^m_i,uv_out respectively, where k is the kernel size, and C_in and C_out are the numbers of input and output channels. Thanks to this complementary kernel design, the network dispenses with an additional module for explicit angular prior modelling, making the overall architecture concise and effective. Another advantage of the complementary kernel design is the reduction in computational resources: the switchable kernels make the network generate one SAI per forward pass, so the memory consumption always stays at a low level.

The training of such a network involves the construction of a random SAI sampling stream. Specifically, as depicted on the right side of Fig. <ref>, in each iteration a random SAI is selected from the U× V light field views. The selected SAI, along with its angular coordinates, forms a triplet (u,v,I_u,v). The modulators (K^m_i_u,v,b^m_i_u,v) indexed by (u,v) are then integrated into the network to work in tandem with the descriptors for rendering Î_u,v, and I_u,v serves as the ground truth for minimizing the reconstruction error. In the subsequent iteration, a new SAI is fed into the network, and the current modulators are replaced by the next set. It is noteworthy that the functionalities of scene description and rendering modulation for descriptors and modulators are automatically acquired during the training procedure.

§.§ Allocation of modulator along angular directions

For INR-based methods, the compression efficiency is largely decided by the number of parameters of the network. Although our adoption of descriptors and modulators already reduces the number of parameters needed for a compact light field representation, the network may still suffer from the risk of parameter explosion: assuming we employ an l-layer network to represent a light field, its total number of parameters N can be estimated approximately as N ≈ lk^2C_in(UVC^m_out + C^d_out), where the number of modulator parameters is proportional to UV if we allocate a set of modulators to each SAI. When the target light field has a high angular resolution, the number of modulator parameters increases significantly and consequently makes the compression fail.

To avoid parameter explosion for high angular resolution light fields while preserving good representation capability, instead of allocating modulators {K^m_i_u,v,b^m_i_u,v} to each angular position pair (u,v), we propose to allocate modulators along the two angular directions u and v by splitting them into two subsets {K^m_i_u,b^m_i_u} and {K^m_i_v,b^m_i_v} as follows: K^m_i_u,v = K^m_i_u⊕ K^m_i_v, b^m_i_u,v = b^m_i_u + b^m_i_v, where the channel number of K^m_i_u and K^m_i_v is half that of K^m_i_u,v. The two subsets {K^m_i_u,b^m_i_u} and {K^m_i_v,b^m_i_v} are respectively represented by the cuboids colored in green and orange in Fig. <ref>. This allocation along orthogonal directions is based on the observation that views in the same row exhibit similar variation modes in the horizontal direction, while those in the same column exhibit similar variation modes in the vertical direction.
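A minimal PyTorch-style sketch of one such layer may help fix the shapes; the class and argument names are ours, and the initialization and normalization details of the actual model are omitted. The kernel used in a forward pass is the concatenation of the shared descriptors with the row- and column-indexed modulators, and the bias is the sum of the two directional modulator biases.

```python
import torch
import torch.nn.functional as F

class ModulatedConv(torch.nn.Module):
    # One backbone layer: K^i = K^d ⊕ K^m_u ⊕ K^m_v, b^i = b^m_u + b^m_v.
    def __init__(self, c_in, c_d, c_m, U, V, k=3, dilation=2):
        super().__init__()
        self.dilation = dilation
        self.K_d = torch.nn.Parameter(1e-2 * torch.randn(c_d, c_in, k, k))          # descriptors
        self.K_u = torch.nn.Parameter(1e-2 * torch.randn(U, c_m // 2, c_in, k, k))  # row modulators
        self.K_v = torch.nn.Parameter(1e-2 * torch.randn(V, c_m // 2, c_in, k, k))  # column modulators
        self.b_u = torch.nn.Parameter(torch.zeros(U, c_d + c_m))
        self.b_v = torch.nn.Parameter(torch.zeros(V, c_d + c_m))

    def forward(self, x, u, v):
        # PyTorch stores conv weights as (C_out, C_in, k, k), so the paper's
        # concatenation along the output-channel axis is a cat over dim 0 here.
        K = torch.cat([self.K_d, self.K_u[u], self.K_v[v]], dim=0)
        b = self.b_u[u] + self.b_v[v]
        return F.conv2d(x, K, b, padding=self.dilation, dilation=self.dilation)
```

Only U + V sets of modulators are stored instead of U·V, which is exactly the source of the parameter saving quantified next.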
Based on this allocation, the total number of parameters becomes N ≈ lk^2C_in[1/2(U+V)C^m_out + C^d_out], which means the number of modulator parameters is proportional to 1/2(U+V) instead of UV, implying a significant parameter reduction, particularly when dealing with high angular resolution light fields. Further discussion on the effectiveness of this allocation is given in Sec. <ref>.

§.§ Decomposition of network kernel tensor

As mentioned earlier, INR-based methods establish a connection between light field compression and network compression, so we can also leverage network compression techniques to further enhance the network's compactness. Recall that the kernel weights {K^d_i, K^m_i_u, K^m_i_v} are all four-dimensional tensors, and employing suitable network compression techniques can help reduce the total number of parameters. In a related work <cit.>, the authors applied model compression techniques to light field compression by introducing a rank-constrained NeRF <cit.> followed by network distillation. However, these techniques result in a complex training schedule and are primarily designed for fully connected layers, hence they are less suitable for our architecture. Inspired by <cit.>, where the authors propose to decompose convolutional kernel tensors into the product of Fourier-Bessel (FB) bases <cit.> and a corresponding weighting volume for parameter reduction, we apply similar operations to both descriptors and modulators by decomposing them into a shared base volume B (yellow cylinder in Fig. <ref>) and coefficient volumes {W^d_i, W^m_i_u, W^m_i_v} (blue, green, and orange cylinders in Fig. <ref>). Taking the descriptors K^d_i as an example: K^d_i = B ⊗ W^d_i, where K^d_i is of size k × k × C^i_in× C^d_i_out, B is of size k × k × r, and W^d_i is the coefficient volume of size r × C^i_in× C^d_i_out. The symbol ⊗ denotes matrix multiplication and r is the number of bases in B. The authors of <cit.> have shown that the FB bases <cit.> are effective for compressing networks for image classification and denoising tasks. We therefore initialize B with FB bases for faster convergence, with r=6 bases, and then update the bases during training to make them more specific to the scene being learned. Note that a higher compression ratio can be achieved by using a smaller r. By following the network backbone design with complementary kernels and employing the modulator allocation and kernel tensor decomposition techniques, a light field can be compactly represented through a set of network parameters: Θ^* = {B, W^d_i, W^m_i_u, W^m_i_v, b^m_i_u, b^m_i_v}.

§.§ Quantization-aware training

Besides the number of parameters required to represent a light field, the number of bits assigned to each parameter is also a significant factor in compression efficiency. Although half precision (16 bits) is commonly used when training deep learning models, such fixed-point scalar quantization with uniformly distributed centroids is still sub-optimal for the compression task. We therefore apply non-uniform quantization to each layer of the network for further network size reduction.
More precisely, given a pre-defined number of centroids n for each layer (except the last decoding layer), when working on a certain layer l_i, we perform k-means clustering on the parameters {W^d_i,W^m_i_u,W^m_i_v,b^m_i_u,b^m_i_v} to obtain n centroids γ_i, and these centroids are then updated to minimize the reconstruction error: γ_i^* = arg min_γ_i E(H_Θ(ϵ), I_u,v), Θ_i∈γ_i, ∀ I_u,v∈ L. As the quantization error would accumulate throughout the network if all layers were quantized simultaneously, we adopt a solution similar to <cit.>, which quantizes the network parameters layer by layer: after quantizing the current layer, we fix its parameters to the learned codewords γ_i^* and continue to finetune all subsequent layers. We perform 16-bit uniform quantization on the last decoding layer, as we found that non-uniformly quantizing the last layer with a small n brings significant quality degradation. The bases B are likewise quantized using uniform 16-bit quantization for better precision. In addition to quantization with learned centroids, we also perform lossless entropy coding (Huffman coding) for further model compression. The quantized network parameters are transmitted from the encoder to the decoder, along with the corresponding codewords at a cost of n× 32 bits, each codeword being encoded using 32 bits.

§ EXPERIMENTAL SETTINGS

§.§ Training details

The global schedule consists of two phases: the training phase and the quantization phase. Both phases use a learning rate of 0.01. The training phase involves 12 epochs, with each epoch defined as all SAIs being used 500 times. In each iteration, 5 SAIs are fed into the network to calculate the averaged loss. Note that for a network-based light field representation, more training iterations mean better performance; one can hence use a smaller number of epochs to save time or a larger number for better reconstruction quality. In the quantization stage, we define 1 epoch as all SAIs being involved 200 times. After quantizing each layer, we fine-tune all subsequent layers for 1 epoch. The whole framework is implemented in the PyTorch deep learning framework and trained on a single Nvidia Titan RTX GPU with 24GB of memory. Both encoding and decoding times are analyzed in Sec. <ref>.

§.§ Test datasets

We take four synthetic scenes `boxes', `sideboard', `cotton', `dino' from the HCI dataset <cit.> and four real-world scenes `Bikes', `Danger', `FountainVincent2', `StonePillarsOutside' from the EPFL light field dataset <cit.> as test data. Both datasets are widely used by the light field research community and have distinct but representative characteristics. The four real-world light fields are captured with the plenoptic camera Lytro Illum <cit.> with a narrow baseline; they have spatial resolution 432× 624 and angular resolution 13× 13, and due to the vignetting effect, we take the central 9× 9 SAIs in our tests. Since the micro-lens array reduces the luminance arriving at the sensor, light fields captured by the Lytro Illum are generally noisy, which can be used to validate the robustness of the compared methods against noise. In contrast, the four synthetic scenes are rendered using the 3D graphics software Blender <cit.>; they have spatial resolution 512× 512 and angular resolution 9× 9. The synthetic data mainly simulates light fields captured by a camera array, hence it has a lower noise level and a wider baseline.
This type of data makes it possible to assess each method's performance on light fields having a large disparity range.

§.§ Method configurations

We evaluate the performance of our proposed method for the compression task and compare it with SOTA methods representing the recent trends in this domain, including the classic video coding standard HEVC-Lozenge <cit.>, the learning-based video compression schemes HLVC <cit.> and RLVC <cit.>, solutions dedicated to the light field compression task such as JPEG Pleno <cit.>, and the most recent INR-based methods DDLF <cit.> and QDLR-NeRF <cit.>. We use the official code for all compared methods in our experiments, with each one configured as follows:

* We use HEVC in version HM-16.10 in our tests. Concerning the configuration of the GOP and base QPs, we adopt a GOP of 4 as in <cit.> and set QP = {20,22,24,28,32,36} for real-world light fields and QP = {18,22,26,30,34} for synthetic ones.

* The two learning-based video compression schemes HLVC and RLVC both have an optional hyper-parameter λ={256,512,1024,2048} controlling the trade-off between bitrate and distortion. HLVC adopts a default GOP of 10 to realize frame prediction via three hierarchical quality layers, while for RLVC, 6 P-frames are bidirectionally encoded with a GOP of 13.

* The version of the JPEG Pleno software we use is the Verification Model 2.0 in the WaSP mode, and we use disparity maps predicted by <cit.> in the compression process.

* For QDLR-NeRF, as both the tensor rank r and the number of centroids n for quantization can control the size of the model, we use four different ranks r={40,70,90,150} with a fixed number of centroids n=256 for medium bitrates, then reduce the number of centroids to n={128,64,32} with a fixed rank r=40 for low bitrates.

* DDLF is run with parameters (z_a,z_s)={(15,30),(20,40),(25,50),(30,60)} and 256 centroids in its architecture, where (z_a,z_s) denote the channel numbers of the input angular and spatial code vectors respectively; both handcrafted and neural-based upsampling (i.e., pixel shuffle) are involved to cover a wide range of bitrates.

* Finally, for our method, we alter the channel numbers of the modulators c_m and descriptors c_d in each layer to obtain different bitrates. More precisely, we use (c_m,c_d)={(2,48),(2,63),(2,78),(2,93),(2,123),(2,153),(2,183)} in our network. Though a small rank r and centroid number n can decrease the bitrate, they severely degrade the compression quality; hence we use r=6 and n=256 in our tests.

§ EXPERIMENTAL RESULTS

§.§ Compression performance analysis

§.§.§ Rate-distortion

We show in Fig. <ref> the rate-distortion curves in terms of decoding quality (PSNR) and bitrate (bpp) for all compared methods. We also report the BD-PSNR gains computed with the Bjontegaard metric <cit.> in Tab. <ref>, taking the results of HEVC-Lozenge as the baseline. We observe that the proposed method outperforms the other methods on most of the scenes by a large margin at low, medium, and high bitrates. Although DDLF <cit.>, QDLR-NeRF <cit.>, and our proposed method all belong to the INR-based family, they exhibit different performances due to their distinct design philosophies. Specifically: DDLF adopts a design where the transient information is modeled using a GRU module, while the static information is modeled using a deep decoder.
However, the GRU architecture performs well only when the light fields have small disparity, as large parallax makes the transitions between SAIs hard for the GRU to capture. This explains the performance degradation of DDLF on wide-baseline synthetic light fields. QDLR-NeRF initially uses an MLP to store the scene information and then employs operations such as low-rank optimization, distillation, and quantization to reduce the model size, ultimately achieving the goal of compression. The quality of the learned scene information directly affects the compression performance: noise and artifacts that interfere with the learning of scene information can lead to lower compression performance, as verified by QDLR-NeRF's relatively worse performance on Lytro-captured data. In comparison, our method stands out thanks to the cooperation of descriptors and switchable modulators. This design enables the learning of each SAI to be conducted individually, and mitigates the impact of factors such as baseline, noise, and artifacts. As a result, our method exhibits stable and high performance over both types of light fields.

§.§.§ Visual comparison

In Fig. <ref>, we present the error maps averaged across all SAIs for each method at a similar bitrate. In the visualization, red indicates a large error value, while blue represents a small error. It is evident that our proposed method outperforms the other methods in terms of decoding error, particularly when dealing with highly textured scenes. Furthermore, Fig. <ref> showcases the decoded SAIs of the scene `sideboard' generated by the three INR-based methods: DDLF, QDLR-NeRF, and ours. We can notice that our method successfully reconstructs clear floor texture (see the zoomed regions) even at a low bitrate of approximately 0.06 bpp. These error maps and decoded SAIs provide compelling evidence for the effectiveness of our method.

§.§.§ Memory consumption and encoding-decoding time

When evaluating a compression algorithm, both memory consumption and complexity play crucial roles. Lower memory consumption ensures broader hardware support, while the decoding time directly impacts the delay in displaying light fields. In Fig. <ref>, we present the memory usage and decoding time for each learning-based method running on the GPU platform. Among these methods, DDLF <cit.> employs a GRU to recurrently process SAIs and decodes a block of views at once, resulting in a shorter decoding time but higher memory consumption than ours. The QDLR-NeRF method <cit.> adopts a pixel-wise rendering mechanism, leading to a slower decoding procedure. Additionally, due to the complexity of their pipelines, both the HLVC <cit.> and RLVC <cit.> methods require more memory and time to decode each SAI. In contrast, thanks to the switchable modulator design, our network can decode SAIs one by one with low memory consumption, and the fully convolutional network also ensures a quick forward pass with little inference time. Though slightly slower than DDLF, our method presents the best trade-off between memory consumption and decoding time.

When considering the encoding time, learning-based compression methods have inherent limitations compared to classical compression standards like HEVC and JPEG Pleno: though HLVC <cit.> and RLVC <cit.> exhibit encoding times competitive with HEVC and JPEG Pleno, they require a large training set, and their compression capacity heavily depends on the scale and quality of the training set used.
For INR-based methods, the encoding time mainly consists of supervising the network on the target light field. As a result, their encoding time is generally longer than that of the other methods. However, it is noteworthy that in certain applications the encoding time is not as critical as the decoding time, as the encoding process can be performed in parallel and offline. To give better insight into the training efficiency of our method, Fig. <ref> illustrates the quality of the decoded light field as a function of training time, comparing the encoding efficiency with the other two INR-based methods. We only account for the time spent on network initialization and exclude the time for low-rank optimization and distillation for QDLR-NeRF. We find in Fig. <ref> that, even without taking low-rank optimization and distillation into account, QDLR-NeRF still needs to gradually improve its performance over a long schedule, while both DDLF and our method quickly reach high performance after a short encoding procedure of 1-2 hours, with our method performing much better than DDLF. This verifies the encoding efficiency of our method against the other two methods.

§.§ Transfer of modulators

We defined two types of kernels in our network design: descriptors, which store scene information, and modulators, which control the rendering of SAIs with respect to the desired perspectives. The experiment in this section demonstrates that the modulators can be non-scene-specific if the descriptors are appropriately aligned, i.e., the modulators learned on one light field can be transferred to new light fields. More precisely, we take two light fields L_1 and L_2, each with 9× 9 SAIs, as an example and carry out the following steps:

1. Pretraining on L_1: we first train the network using all SAIs of L_1, during which both the descriptors {K^d_L_1} and the 9+9 sets of modulators {K^m_u,L_1,K^m_v,L_1,b^m_u,L_1,b^m_v,L_1} are updated.

2. Retraining descriptors on L_2: we then fix the learned modulators {K^m_u,L_1,K^m_v,L_1,b^m_u,L_1,b^m_v,L_1} and retrain the descriptors using a SUBSET of SAIs (e.g., a sparse 3× 3 set of views) of L_2 for one epoch to obtain {K^d_L_2}.

3. Rendering all SAIs of L_2: we render all SAIs of L_2 with the updated descriptors {K^d_L_2} and the modulators {K^m_u,L_1,K^m_v,L_1,b^m_u,L_1,b^m_v,L_1}.

In our experiment, we deliberately use only a subset of the SAIs from L_2 to retrain the descriptors in step 2, meaning that only part of the modulators are involved in this procedure. There are two main reasons for adopting sparse sampling instead of all views: (a) the SAIs inside the subset provide information about the new scene, and retraining on these views helps align the descriptors to store the new scene information; (b) importantly, the modulators for the views outside the subset are entirely excluded from the retraining process. If these modulators can still successfully work with the descriptors to synthesize SAIs, it suggests that the modulators are non-scene-specific and that their modulation functionality is transferrable. Conversely, if the excluded modulators fail to generate SAIs while those involved in the retraining perform well for rendering, it would imply that the functionalities of the modulators and descriptors are scene-specific and arise only from training on the current scene. We test two cases with several subset patterns: (a) pretraining the network on the scene `danger' and retraining the descriptors on `bikes'; (b)
pretraining the network on the scene `boxes' and then retraining the descriptors on the scene `dino'. As `danger' and `bikes' are both captured using the same Lytro camera, while `boxes' and `dino' are synthesized using different camera array configurations, these two cases respectively represent the transfer of modulators between cameras with the same and with distinct configurations. Fig. <ref> showcases the subset patterns and the rendered SAIs. The first row depicts the subset patterns with an increasing number of SAIs used for the retraining step, where the views inside the subset are colored in green, and the other, excluded views are marked with green slashes. The checks framed with red and blue boxes mark the positions of the SAIs shown from the second to the fifth rows; they respectively represent SAIs rendered using modulators involved and not involved (denoted `involved modulator' and `uninvolved modulator') in the retraining procedure. Rows two and three display generated SAIs of `bikes', and rows four and five exhibit rendered SAIs of the scene `dino'. We observe that both the involved and the uninvolved modulators can work with the descriptors to generate SAIs of the new light fields, even when the transfer occurs between cameras with different configurations. Using more SAIs in the subset improves the quality of the views rendered with uninvolved modulators, because more SAIs in the retraining better align the descriptors with the modulators. Furthermore, SAIs generated using involved modulators show better quality than those generated using uninvolved modulators, as the modulators involved in the retraining step always match the descriptors better than the uninvolved ones. Let us note that such a modulator transfer also implies a new solution for the view synthesis task: one can generate novel dense views by transferring the learned modulators to the target light field.

§ ABLATION STUDY

§.§ Proportion of modulator parameter

When adopting our network architecture for light field compression, the proportion of modulator parameters plays a key role in determining the compression performance. To explore the optimal proportion of modulators for the network design, we conducted experiments varying the proportion of modulator parameters under a fixed total parameter budget. Tab. <ref> reports the average PSNR and the quality variance across the 9× 9 views for 8 different scenes, where c_m and c_d respectively denote the channel numbers for the modulators and descriptors. To give better insight into the performance variation across viewpoints, we also display in Fig. <ref> the PSNR averaged over the 8 scenes for different viewpoints. From both Fig. <ref> and Tab. <ref>, we observe that the proportion of modulator parameters directly affects the network's performance. Under a similar total parameter budget, a higher proportion of modulator parameters means a lower proportion for descriptors. This results in a relatively lower average PSNR and a smaller quality variance among views, because the network has a limited number of parameters for storing scene information but enough parameters to modulate the rendering of the SAIs. Conversely, reducing the proportion of modulator parameters spares more parameters for the descriptors, which improves the quality of the decoded views but weakens the network's modulation capability and leads to a larger quality variance.
The above observation can serve as a guideline for network design under different compression demands: when aiming for a high-quality representation of the entire light field, a lower proportion of modulator parameters is preferable; if maintaining consistency between SAIs is the priority, a higher proportion is recommended.

§.§ Effectiveness of kernel design

To validate our proposals of modulator allocation and kernel tensor decomposition, under the constraint of a similar total number of network parameters, we tested three network variants: (a) the network with neither the modulator allocation nor the kernel tensor decomposition design, denoted Net†; (b) the network without modulator allocation but with kernel tensor decomposition, denoted Net*; (c) the network adopting both the modulator allocation and kernel tensor decomposition designs, denoted Net. We measure the average PSNR of the networks with small, moderate, and large parameter numbers, corresponding to low, intermediate, and high bitrates in the compression context. Tab. <ref> summarizes the performance of each network variant for the different numbers of parameters. The application of modulator allocation results in significant parameter savings that can be allocated to the descriptors for performance enhancement, and the adoption of tensor decomposition reduces the number of parameters per kernel, thereby accommodating more kernels in the network. The combination of modulator allocation and kernel tensor decomposition results in a notable improvement of the network's performance.

§.§ Contributions of each network design

To highlight the contribution of each design step, we evaluate the performance evolution after applying each one (modulator allocation, kernel tensor decomposition, and quantization). In Tab. <ref>, we present the average PSNR over the eight tested scenes and the corresponding network size after each design step. For comparison, we consider the original network configuration without modulator allocation, tensor decomposition, and quantization as the baseline; it has channel numbers (c_m,c_d)=(2,48). Due to the high angular resolution (U,V)=(9,9), without modulator allocation, even though c_m=2 is much smaller than c_d=48, the network still has a large proportion of its parameters allocated to modulators. We therefore observe about a 3× compression, from 100% to 31.1%, when applying the modulator allocation technique, with only 0.15dB performance degradation. Around 0.6dB of loss is caused by the tensor decomposition technique; note that tensor decomposition is a typical network compression method, and other advanced decomposition methods are likewise applicable to our approach. Finally, the quantization operation brings 0.8dB of degradation while compacting the network size from 20.93% to 9.86%. Together, these techniques realize more than 10× compression with about 1.6dB of quality degradation. Users can select the techniques to be used according to their desired decoding quality and model size for the compression task.

§ CONCLUSION

In this paper, we address the challenge of light field compression by proposing a novel compact neural representation. Our method utilizes two types of complementary kernels: descriptors and modulators. Descriptors capture scene information, while modulators modulate the rendering of the different SAIs.
To enhance the network's compactness, we propose allocating modulators across the two angular dimensions and decomposing the kernel tensors into low-dimensional components. Through extensive experiments, we demonstrate that our network-based representation outperforms other compression methods while consuming fewer computational resources. Furthermore, we highlight that the modulators exhibit a non-scene-specific nature and can be transferred to new light field data for rendering dense views. This finding suggests a new approach to view synthesis, introducing a distinct philosophy in this field.
http://arxiv.org/abs/2307.04248v1
20230709190135
Topological Hochschild homology of the image of j
[ "David Jongwon Lee", "Ishan Levy" ]
math.AT
[ "math.AT", "math.KT" ]
We compute the mod (p,v_1) and mod (2,η,v_1) of many variants of the image-of-J spectrum. In particular, we do this for j_ζ, whose is closely related to the K-theory of the K(1)-local sphere. We find in particular that the failure for to satisfy _p-Galois descent for the extension j_ζ→ℓ_p corresponds to the failure of the p-adic circle to be its own free loop space. For p>2, we also prove the Segal conjecture for j_ζ, and we compute the K-theory of the K(1)-local sphere in degrees ≤ 4p-6. Relativistic time dilation as a quantum mechanism Esteban Martínez Vargas August 12, 2023 ================================================= § INTRODUCTION The algebraic K-theory of the K(1)-local sphere, or K(L_K(1)), is an object capturing fundamental structural information about the K(1)-local category. Part of Ausoni–Rognes' original vision of chromatic redshift was that it could be understood, at least T(2)-locally, via Galois hyperdescent. More specifically, they conjectured <cit.> that the map K(L_K(1))⊗ V → K(_p)^h_p^×⊗ V is an equivalence in large degrees when V is a type 2 finite spectrum. The T(n+1)-local K-theory of Morava E-theory has been shown in <cit.> to have Galois descent for finite subgroups of the Morava stabilizer group. Moreover, recent work of Ben Moshe–Carmeli–Schlank–Yanovski <cit.> combined with <cit.> shows that L_K(2)K(L_K(1)) → L_K(2)K(_p)^h_p^× is an equivalence, i.e that Galois hyperdescent is satisfied for the K(2)-locally. Recent work of the second author <cit.> has made K(L_K(1)) an integrally accessible object. If we consider the connective Adams summand ℓ_p (or _2 for p=2) as a -equivariant _∞-ring via the Adams operation Ψ^1+p, then j_ζ is defined to be its -homotopy fixed points. Then it is shown that there is a cofiber sequence K(j_ζ) → K(L_K(1)) →Σ K(_p) split on π_*. It is also shown that the Dundas–Goodwillie–McCarthy square K(j_ζ) [r][d] (j_ζ)[d] K(_p)[r] (_p^h) is a pullback square. The three spectra K(_p), K(_p), and (_p^h)[This is essentially the nil- of _p by <cit.>, which is studied in <cit.>.] are understood, so understanding K(L_K(1)) is essentially reduced to understanding (j_ζ). The primary goal of this paper is to understand (j_ζ) modulo (p,v_1) and (2,η,v_1), which is the first step in understanding (j_ζ). For p>2, there is an isomorphism of rings π_*(j_ζ)/(p,v_1) ≅π_*(ℓ_p)/(p,v_1)⊗_𝔽_pHH_*(_p^h/_p) For p=2, there is an isomorphism of rings π_*(j_ζ)/(2,η,v_1) ≅π_*(_2)/(2,η,v_1)⊗_𝔽_2HH_*(_2^h/_2). Each of the terms on the right hand side of the equivalences is well understood. The ring π_*(ℓ_p)/(p,v_1) can be found in <cit.> or <Ref>, and π_*(_2)/(2,η,v_1) can be found in <cit.> or <Ref>. The last tensor factor is given in <Ref> as _*(_p^h/_p) ≅Λ[ζ]⊗_p where |ζ| = -1, and _p denotes the ring of continuous functions from _p to _p. The _p appearing can be viewed as the failure of descent at the level of for the _p-Galois extension coming from the _p-action on ℓ_p and _2. More precisely, at the level of π_*, the map (ℓ_p^h)/(p,v_1) →(ℓ_p)^h/(p,v_1) is base changed from the map _p→_p that sends a continuous function to its value at 0 (<Ref>). This phenomenon can be explained by interpreting in terms of free loop spaces. If X is a pro-p-finite space, then the _p-Hochschild homology of the cochain algebra C^*(X;_p) is computed as (C^∗(X;𝔽_p)/𝔽_p) = C^∗(LX;𝔽_p) where LX is the free loop space of X. Since C^*(B_p;_p) ≅_p^h, the failure of the descent (𝔽_p^h/𝔽_p) ≄(𝔽_p/𝔽_p)^h is explained by the fact that B_p is not LB_p ≅ B_p×_p. 
For any p-complete _∞-ring R with a trivial ℤ-action, this completely accounts for the failure of p-complete to commute with -fixed points (<Ref>). The content of <Ref> is that the same phenomenon happens for (j_ζ) on π_* mod (p,v_1) or (2,η,v_1), even though the action is no longer trivial. In particular, <Ref> implies that there is an isomorphism of rings π_*(j_ζ)/(p,v_1) ≅π_*(ℓ_p^triv,h)/(p,v_1) where ℓ_p^triv,h is the fixed points of ℓ_p by a trivial -action. The key idea in our proof of <Ref> is to run the spectral sequence for obtained by filtering j_ζ via the homotopy fixed point filtration, and showing that the differentials in the associated spectral sequence behave similarly enough to the case of a trivial action. To understand the associated graded algebra of the homotopy fixed point filtration, we further filter it by the p-adic filtration. At the level of the associated graded of both filtrations, j_ζ is indistinguishable from the fixed points by a trivial action, and we show that mod (p,v_1) and (2,η,v_1) this remains true at the level of homotopy rings after running the spectral sequences for of those filtrations. The phenomenon that the -action on ℓ_p behaves like the trivial one is shown in <cit.> to asymptotically hold even at the level of cyclotomic spectra. More precisely, it is shown there that given any fixed type 3 finite spectrum V, for all sufficiently large k, (ℓ_p^hp^k)⊗ V ≅(ℓ_p^triv,h)⊗ V as cyclotomic spectra. It is shown then that the failure of descent we observe on continues at the level of the T(2)-local . Combining this with the aforementioned hyperdescent result of the K(2)-local K-theory and the formula for the K-theory of the K(1)-local sphere, this implies that L_T(2)K(L_K(1)𝕊) is not K(2)-local and hence is a counterexample to the height 2 telescope conjecture. In particular, this implies that the map K(L_K(1))⊗ V → K()^h_p^×⊗ V considered by Ausoni–Rognes is not an equivalence in large degrees. The ring _p that appears in our formula for (j_ζ) is a key ingredient in <cit.> to maintain asymptotic control over (j_ζ,k) as a cyclotomic spectrum, and is one of the advantages of j_ζ versus the usual connective image-of-J spectrum j = τ_≥0j_ζ. If one was only interested in understanding L_T(2)K(L_K(1)), there are isomorphisms L_T(2)K(L_K(1)) ≅ L_T(2)(j) ≅ L_T(2)(j_ζ) so one can in principle approach the telescopic homotopy via (j) instead of (j_ζ). However, j is not as well behaved as j_ζ is, as we now explain. We extend our methods for computing (j_ζ) in <Ref> to compute of j, giving a relatively simple proof of the result below due to Angelini-Knoll and Höning <cit.>. For p>3[We also compute an associated graded ring (j)/(p,v_1) for p=3 (see <Ref>), but are unable to solve multiplicative extension problems coming from the fact that j/(p,v_1) is not an associative algebra for p=3. Nonassociative multiplicative extensions aren't considered in <cit.>, so the results of that paper also only compute an associated graded ring for p=3.], the ring π_*(j)/(p,v_1) is the homology of the CDGA 𝔽_p[μ_2]⊗Λ[α_1,λ_2,a]⊗Γ[b], d(λ_2)=aα_1 |b| = 2p^2-2p , |a| = 2p^2-2p-1, |λ_2| = 2p^2-1, |μ_2| = 2p^2 For k≥ 1 and any p>2, we have an isomorphism of rings π_*(τ_≥0(ℓ_p^hp^k))/(p,v_1) ≅π_*(ℓ_p)/(p,v_1)⊗_*(τ_≥0_p[v_1]^h/_p[v_1])/v_1. The ring _*(τ_≥0_p[v_1]^h/_p[v_1])/v_1 is described in <Ref>: it is isomorphic to Γ[dα_1/p^k]⊗Λ__p[α_1/p^k] where α_1/p^k is a class in degree 2p-3 and dα_1/p^k is a divided power generator in degree 2p-2. 
In the above theorem, π_*(j)/(p,v_1) is not what one would expect in the case of the trivial action: there are two more differentials in the spectral sequence for the filtration we use to prove <Ref> than what one would find for the trivial action. The differentials witness the fact that λ_1,λ_2 ∈π_*(ℓ_p)/(p,v_1) don't lift to (j)/(p,v_1). Whereas most computations of in this paper use Bökstedt's computation of (_p) as their fundamental input, these differentials ultimately come from the Adams–Novikov spectral sequence. A key difference between the of j_ζ and j is that the ring _p that appeared in π_*(j_ζ)/(p,v_1) is replaced by a divided power algebra for j. The advantage of the ring C^0(_p;_p) over a divided power algebra is that it up to units, it consists entirely of idempotents, which decompose (j_ζ) as an S^1-equivariant spectrum into a continuous _p-indexed family of spectra. This decomposition is not evidently present in (j). Another advantage of j_ζ over j is that j_ζ satisfies the Segal conjecture but j doesn't, which we show for p>2 in <Ref>: For p>2, the cyclotomic Frobenius map (j_ζ)/(p,v_1) →(j_ζ)^tC_p/(p,v_1) has (2p-3)-coconnective fiber, but the fiber of the cyclotomic Frobenius map (j)/(p,v_1) →(j)^tC_p/(p,v_1) is not bounded above. The Segal conjecture for a ring j is a necessary condition <cit.> for the Lichtenbaum–Quillen conjecture to hold, i.e for (j)⊗ V to be bounded above for any finite type 3 spectrum V. Thus <Ref> implies that j doesn't satisfy the Lichtenbaum–Quillen conjecture. On the other hand, <Ref> is a key ingredient in proving the Lichtenbaum–Quillen conjecture for j_ζ as carried out in <cit.>. This Lichtenbaum–Quillen conjecture can be viewed as the part of Ausoni–Rognes's conjecture that is true. Namely, it implies that the map K(L_K(1))⊗ V → K(L_K(1))⊗ V[v_2^-1] is an equivalence in large degrees for V a type 2-complex. In <Ref>, we show how computations can give information about in the stable range. For a map of _1-rings f:R → S, the _1-cotangent complex L_S/R is the S-bimodule that is the fiber of the multiplication map S⊗_RS → S. We prove the following result: Given a map of _1-ring spectra f:R → S, there is a natural map (f) →(S;L_S/R). If f is an n-connective map of (-1)-connective rings for n≥ 1, this natural map is (2n+1)-connective. A consequence of <Ref> is that the natural map above can be identified with the linearization map in the sense of Goodwillie calculus for the functor (f): ()_R/→ when R is (-1)-connective. In the case the map f is a trivial square zero extension of connective rings, a K-theory version of the result was obtained as <cit.>, and a version is essentially <cit.>[See also <cit.> and <cit.>.]. The point of <Ref> is to have a version of the result that works for arbitrary maps of _1-rings rather than trivial square-zero extensions, and for (-1)-connective rings instead of connective rings. We use <Ref> to reprove basic facts about , such as the understanding of the map (_p) →(_p) on π_2p-1. This is an ingredient in the computation of (_p) as a spectrum (see <cit.>). We also apply <Ref> to compute the fiber of the map (j_ζ) →(_p^h) in the stable range, giving information about K(L_K(1)): For p>2, there are isomorphisms τ_≤ 4p-6((j_ζ) →(_p^h)) ≅Σ^2p-2_p and K_*L_K(1)≅ K_*-1_p ⊕ K_*_p ⊕π_*Σ^2p-2_p/_p, *≤ 4p-6. In particular, for p>2, the infinite family of classes in the fiber of (j_ζ) →(_p^h) found in <cit.> are simple p-torsion, and completely account for all the classes in the stable range. 
§.§ Acknowledgements We are very grateful to Robert Burklund, Sanath Devalapurkar, Jeremy Hahn, Mike Hopkins, Tomer Schlank, and Andy Senger for conversations related to this work. The second author is supported by the NSF Graduate Research Fellowship under Grant No. 1745302. §.§ Notations and conventions * The term category will refer to an ∞-category as developed by Joyal and Lurie. * We refer the reader to <cit.> for basic facts about , which we freely use. * (a, b) will denote the space of maps from a to b (in some ambient category). * Tensor products and are implicitly p-completed. * We use Λ[x] and Γ[x] to denote exterior and divided power algebras in homotopy rings. * In an _p-vector space, we use a ≐ b to mean that a = cb for some unit c ∈_p^×, and a b to mean that a is sent to b up to a unit in _p^×. * Conventions about filtrations and spectral sequences are addressed in <Ref>. * For a pro-finite set A, we use C^0(A;_p) to denote continuous functions from A to _p. * Let 𝒟 be a monoidal category acting on a category 𝒞. Given objects X ∈𝒞, Z ∈𝒟 with a self map f:X⊗ Z→ X, we use X/f to denote the cofibre of this map. We use X/(f_1,…,f_n) to denote (…(X/f_1)/…)/f_n, where each f_i is a self map of X/(f_1,…,f_i-1). § FILTRATIONS In this section, we set up notation for working with filtered objects and explain how to put filtrations on ℓ_p, _2, j_ζ, and j, as well as for finite extensions. Our constructions amount to the filtration coming from the homotopy fixed point spectral sequences computing those objects, which in all cases except for j_ζ, is also the Adams–Novikov filtration. §.§ Filtered objects and spectral sequences Let 𝒞 be a presentably symmetric monoidal stable category with accessible t-structure compatible with the symmetric monoidal structure. Let (𝒞) = (_≤^op,𝒞) be the category of decreasingly filtered objects, and let (𝒞) = (,𝒞) be the category of graded objects, so that both are symmetric monoidal via Day convolution. Basic properties of these categories are developed in <cit.> and <cit.>. Given an object x ∈(𝒞) or (𝒞), we write x_i for the value at i ∈. The left adjoint of the functor (-)_i in the case of (𝒞) is the functor (-)^0,i, defined for c∈𝒞 by (c^0,i)_j = c (j≤ i) 0 (j>i). We also use the notationthis notation isn't used consistently throughout the paper. c^k,n := Σ^kc^0,n+k, π_k,n^x := π_k^x_n+k, and π_k,nx:= π_kx_n+k = π_0(^k,n,x), and use c to also denote c^0,0. There is a filtration parameter τ∈π_-1,0^0,0 such that the map x_i → x_i-1 giving the filtration is obtained levelwise from tensoring with τ. The functor (-)^0,0: 𝒞→(𝒞) is a symmetric monoidal fully faithful functor, which we refer to as the trivial filtration. We often identify an object c ∈𝒞 with the trivial filtered object in (𝒞). In fact, (𝒞) can be identified with _τ((𝒞)), so that taking associated graded amounts to base changing to τ. Given an object x ∈(𝒞), we let x∈(𝒞) denote the associated graded object, so that ( x)_i = _ix = (x_i+1 x_i). On the other hand, there is an identification (𝒞)[τ^-1] ≅𝒞, so that given a filtered object x ∈(𝒞), its underlying object ux ∈𝒞, given by _i x_i, is identified with x[τ^-1]. Under the assumption that the t-structure is compatible with filtered colimits, we have an isomorphism π_**^x[τ^-1] ≅π_*^ux⊗[τ^±1]. Given a filtered object x∈(𝒞), there is a spectral sequence which we refer to as the spectral sequence associated with x. 
E_1^s,t = π^_t-s,s x=π^_t-s( x)_tπ^_t-s(ux) The d_r-differential is a map from E_r^s,t to E_r^s+r+1,t+r, which is a page off from the usual Adams convention, i.e. our d_r differential would be the d_r+1 differential in the Adams convention. We shall say Adams weight and filtration degree to refer to the bidegrees s and t, respectively. In addition to the spectral sequence associated with x, there is also the τ-Bockstein spectral sequence, which has signature E_1^** = (π_**^ x)[τ] π_**^x We do not use the following lemma, but we state it as an exercise to help acquaint the unfamiliar reader with filtered objects. The τ-inverted τ-Bockstein spectral sequence refers to the spectral sequence obtained from the τ-Bockstein spectral sequence by inverting τ on each page. Let x ∈(𝒞). For each r≥1, the E_r-page of the τ-inverted τ-Bockstein spectral sequence for x is isomorphic to [τ^±] tensored with the E_r-page of the spectral sequence associated with x. Moreover, the d_r differential on the former is given by τ^r times the d_r differential on the latter. The filtration on π^_**x[τ^±1] coming from the spectral sequence agrees with the filtration on π^_*x⊗[τ^±] coming from the filtration on x. These statements can be checked for example by using explicit formulas for the pages and differentials. See, for example, <cit.>. §.§ t-structures We turn to studying t-structures on categories of filtered objects. Our ability to produce t-structures comes from the following general result. Let 𝒞 be a presentable stable category. If {X_α} is a small collection of objects in 𝒞, then there is an accessible t-structure (𝒞_≥ 0, 𝒞_≤ 0) on 𝒞 such that 𝒞_≥ 0 is the smallest full subcategory of 𝒞 containing each X_α and closed under colimits and extensions. The full subcategory of coconnective objects is characterized by the condition that Y ∈𝒞_≤ 0 if and only if (Σ X_α,Y) = 0 for each X_α. Let f:→ be a function. Define a t-structure ((𝒞)^f_≥0,(𝒞)^f_≤0) on the underlying category (𝒞) be the t-structure whose connective objects are generated by the objects Σ^f(i)c^0,i for c ∈𝒞_≥0. We let τ^f_≥ i and τ^f_≤ i denote the associated truncation functors. We similarly define a t-structure ((𝒞)^f_≥0, (𝒞)^f_≤0) by taking the image of those objects under the functor to be the generators. Let x ∈(𝒞). * x ∈(𝒞)^f_≤0 if and only if x_i is f(i)-coconnective in 𝒞 for each i. * If f is nondecreasing, then x∈(𝒞)^f_≥0 iff x_i is f(i)-connective for each i. In this case, the truncation functor τ^f_≥ 0 is given by (τ^f_≥ 0x)_i = τ_≥ f(i)(x_i). * The same results hold for ((𝒞)^f_≥0,(𝒞)^f_≤0). We prove the result for (𝒞), as the result for (𝒞) is similar but easier. Coconnectivity can be checked by mapping in the generators of (𝒞)^f_≤0. Because of the adjunction defining the functor (-)^0,n, the condition for coconnectivity follows. Now suppose f is nondecreasing. To prove the claims, It suffices to show that if x∈(𝒞) has x_i ∈𝒞_≥ f(i), then x admits no maps to a coconnected object. If y is a coconnected object, then x_i admits no maps to y_j for j ≤ i because y_j is f(j)-coconnected, and since f is nondecreasing, it is f(i)-coconnected. It follows that there are no nonzero maps of filtered objects x → y. The t-structures (𝒞)^f, (𝒞)^f are compatible with the symmetric monoidal structure if f(0) = 0 and f(i) + f(j) ≥ f(i+j). The condition f(0) = 0 guarantees that the unit is connective. One needs to check that the tensor product of any pair of generators of (𝒞)^f_≥0 is still in (𝒞)^f_≥0. 
But the tensor product of Σ^f(i)c^0,i and Σ^f(j)d^0,j is Σ^f(i)+f(j)(c⊗ d)^0,i+j, which is in (𝒞)^f_≥0 because c⊗ d is in 𝒞_≥0 and so the assumption on f shows that this is connective. The functor is right t-exact with respect to the t-structure corresponding to a nondecreasing function f, but not in general t-exact. In the following situation it preserves τ_≥0. Suppose that c ∈(𝒞), f:→ is nondecreasing, π^_k,i-kc = 0 for f(i-1)≤ k < f(i), and π^_f(i)-1,i-f(i)+2 contains no simple τ-torsion. Then τ_≥0^f(c) ≅(τ_≥0^f(c)) and τ_≤0^f(c) ≅(τ_≤0^f(c)). It suffices to prove the statement for τ_≥0 since is exact. There is a cofiber sequence c_i+1 c_i →_ic. By <Ref> we would like τ_≥ f(i+1)c_i+1→τ_≥ f(i)c_i →τ_≥ f(i)_ic to remain a cofiber sequence. From the exact sequence of homotopy groups, we see that we would like τ_≥ f(i+1)c_i+1 = τ_≥ f(i)c_i+1 and π^_f(i)-1c_i+1→π^_f(i)-1c_i to be injective. This is exactly the condition that π^_kc_i = π^_k,i-kc vanish when f(i-1)≤ k<f(i) and π^_f(i)-1c_i+1 = π^_f(i)-1,i-f(i)+2c has no simple τ-torsion. Let f(i) = a i where a ≥ 0. This gives rise to the slope 1-a/a t-structure, whose truncation functors we denote τ^/a_≥0, τ^/a_≤ 0. Let f(i) = 0 for i≤ 0 and f(i) = i/2 for i >0. This gives rise to the v t-structure, whose truncation functors we denote τ^v_≥0,τ^v_≤0. The slope 1-a/a and v t-structures satisfy the conditions of <Ref> and <Ref>, so are compatible with the symmetric monoidal structure, and can be computed by truncating level-wise. The reason for the name slope is that in the Adams grading, the homotopy groups of objects in the heart of this t-structure lie along a line of slope 1-a/a. The v t-structure is named so because the curve it describes is the vanishing curve on the homotopy groups of the -synthetic sphere at the prime 2. We now specialize <Ref> to obtain two t-structures we use here. Taking a = 0, we get the constant t-structure, whose connective cover functor τ^_≥0 just takes connective cover on each filtered piece. Taking a = 1, we get the diagonal t-structure, whose connective cover functor τ^d_≥ 0 is given by taking the i^th-connective cover on the i^th filtered piece. The functor (-)^:𝒞→(𝒞) is the symmetric monoidal functor given by the constant filtered object. §.§ Filtrations on rings of interest We now specialize to the case 𝒞 = with its standard symmetric monoidal structure. We begin by constructing j_ζ as a filtered ring. We use τ_≥*(-) to denote the composite functor τ_≥ 0^d((-)^). Indeed, τ_≥ i(-) is the i^th filtered piece of this functor. We now use τ_≥*(-) to obtain a filtration on ℓ_p,j_ζ,_2, and j for p>2. We use R^ to denote these rings equipped with these filtrations, and R^ to denote the associated graded algebras. Let ℤ_p^ be the ring of p-adic integers with the p-adic filtration. It is a filtered 𝔼_∞-ring since it is in the heart of the constant t-structure. Its associated graded ring is 𝔽_p[v_0], where v_0∈π_0,1ℤ_p^. We write v_0∈π_0,1ℤ_p^ for the class of filtration 1 detecting p∈ℤ_p, which projects to v_0 in the associated graded. For p>2, consider ℓ_p, viewed as an _∞-ring equipped with the -action given by the Adams operation Ψ^1+p, and for p=2, consider it with the × C_2-action given by the Adams operations Ψ^3, Ψ^-1. We now define most of our filtered _∞-rings of interest: * ℓ_p^:= τ_≥*ℓ_p * _2^:= τ_≥0^v((ℓ_2^)^hC_2) * j_ζ,k^:= (ℓ_p^)^hp^k for p>2 and (_2^)^h for p=2 * ju_ζ,k^:= (ℓ_2^)^h2^k * j_k^:=τ_≥0^(j_ζ,k^) for p>2. 
In the case k=0, we just write j_ζ^, ju^, j^, and we remove to denote the underlying _∞-ring. For example, we write j_ζ,k = ℓ_p^hp^k. The filtrations of <Ref> aren't as `fast' as they can possibly be. Namely, the spectra in the filtrations only change every multiple of 2p-2 filtrations. Speeding up the filtration doesn't affect very much related to the filtration in any case. For p>2, it is also possible to use variants of the Adams filtration on the various rings of study, as in <cit.>, which would avoid the use of two filtrations. However this doesn't work as well at the prime 2, since the Adams filtration on is poorly suited to studying _2's . The key properties of these filtrations that we use is that the associated graded algebras mod p are easy to describe. The associated graded algebras of filtered rings defined in <Ref> are _∞--algebras. The 0'th piece of every associated graded algebra is coconnective with π_0 = _p, so the unit map from ^0,0 factors canonically through , giving it a canonical _∞--algebra structure. For p>2, there are isomorphisms of graded _∞-_p-algebras ℓ_p^/p ≅_p[v_1] j_ζ,k^/p ≅_p[v_1]⊗__p_p^h and for p=2, there are isomorphisms of graded _∞-_2-algebras j_ζ,k^/2 ≅ (_2^/2)⊗__2_2^h ju_ζ,k^/2 ≅_2[v_1]⊗__2_2^h _2^/2 ≅τ_≥ 0^v (_2^hC_2⊗__2_2[v_1]). ℓ_p^ is the associated graded of the Postnikov filtration, which is _p[v_1], where the grading of v_1 is its topological degree, namely 2p-2. Reducing mod p, we get the claim about ℓ^/p. The -action on ℓ_p^ is the action of Ψ^1+p on the homotopy of ℓ_p. It is a ring automorphism sending v_1 to (1+p)^p-1v_1, which in particular is trivial modulo p. Since ℓ_p^ is a discrete object (it is in the heart of the diagonal t-structure), it follows that the action on ℓ_p^/p is trivial, giving the claimed identification of j_ζ,k^ for p>2 and ju_ζ,k^ for p=2. For p=2, we first recall that the in the homotopy fixed point spectral sequence for _2≅_2^hC_2, all differentials are generated under the Leibniz rule by the differential d_3v_1^2 = η^3, where η is represented by the class in H^1(C_2;π_2_2). The spectral sequence for ℓ_2^hC_2 = _2^hC_2, displayed in <Ref>, embeds into this, after a page shift. Thus, we see that everything in π_**(^)^hC_2 above the line of slope 1 intercept zero is either in negative underlying homotopy or doesn't have τ-multiples on or below the line of slope 1 intercept 2. We learn that the bigraded homotopy ring of (ℓ^_2)^hC_2 is _2[x,η,τ,b,v_1^4]/(b^2-4v_1^4,η^3τ^2,2η,2x,xητ^2, v_1^4x-η^4,η b), where x represents v_1^-4η^4, and b represents 2v_1^2. By applying <Ref>, we learn that the connective cover τ_≥ 0^v can be computed the level of associated graded, and that this even holds after taking the cofiber by 2. The C_2-action on _2^/2 is trivial, so indeed _2^/2 ≅(τ_≥ 0^v (_2^hC_2⊗__2_2[v_1])). For j_ζ,k^/2, we just observe that the residual -action is also trivial. At the prime 2, it is possible to define j as a filtered _∞-ring, but we do not study this in this paper. One can define its underlying _∞-ring as the pullback j [r][d] _2^h [d] τ_≤2_2 [r] (τ_≤2_2)^h and then consider the underlying filtered _∞-ring of ν_BP(j) where ν_BP is the synthetic analogue functor of <cit.>. Finally, we show convergence properties of our applied to the filtrations we use. Given a filtered spectrum X∈(), the spectral sequence associated with X converges conditionally if and only if lim_i X_i = 0. This is equivalent to asking that X is τ-complete, where τ is in π_0,-1^0,0. 
The following lemma shows completeness for with respect to all of the filtrations constructed in this section. Suppose that R is a filtered ring such that the i-th filtered piece R_i is (-1+ci)-connective for every i and some fixed c>0. Then, the i-th filtered piece of (R) is also (-1+ci)-connective, so in particular the filtration on (R) is complete. Note that R=(𝕊^0,0→ R) satisfies the same conditions of the statement. The filtration from the cyclic bar construction gives us an increasing filtration on (R) with k-th associated graded piece Σ^k R⊗R^⊗ k. The i-th filtered piece of Σ^k R⊗R^⊗ k is (-1+ci)-connective since it is a colimit of spectra of the form Σ^k R_j_0⊗R_j_1⊗⋯⊗R_j_k with j_0+⋯+j_k≥ i, which has connectivity of at least k + ∑_s=0^k (-1 + cj_s) ≥ -1 + ci. The other filtration we use is the p-adic filtration on _p, which we call _p^, whose associated graded algebra is _p[v_0]. We call ṽ_̃0̃ the element in π_0,1_p that is a lift of p to filtration 1, and projects to v_0 in the associated graded. Let R be a (possibly graded) 𝔼_1-ℤ_p-algebra. Then, the filtration on the filtered ring (R⊗_ℤ_pℤ_p^)/v_0. is complete and its associated graded ring is concentrated in two filtration degrees t=0,1. Informally, the filtration is of the form ⋯→0→0→ I→(R)/p for some (possibly graded) spectrum I. In particular, the associated spectral sequence collapses at the E_2-page. By using the symmetric monoidality of and the fact that p=0 in _p^/ṽ_0, we obtain an equivalence (R⊗_ℤ_pℤ_p^)/ṽ_0 ≅ ((R)/p) ⊗_((ℤ_p)/p)(ℤ_p^)/ṽ_0. Since the conclusion of the statement is stable under base-change along trivially filtered rings, the statement reduces to the case R=ℤ_p. For R=ℤ_p, the associated graded is (_p[v_0])/v_0, which has homotopy ring _p[σ^2p]⊗Λ[dv_0] (see <Ref>), which is indeed in filtrations ≤1. It remains to see that (_p^)/ṽ_0 = (_p^;_p) has a complete filtration. It suffices to show that (_p^;_p)⊗_(_p)_p ≅(_p^/_p;_p) has a complete filtration, since (_p) is built from _p via extensions and limits that are finite in each degree, and completeness of the filtration can be checked degreewise. The nth associated graded term of the cyclic bar construction computing this is Σ^n (_p^)^⊗__p n⊗__p_p ≅Σ^n(_p^⊗__p_p)^⊗__p n _p^⊗__p_p is complete since it is _p in each nonnegative degree, with transition maps 0, or in other words, it is a direct sum _p⊕⊕_1^∞Σ^0,i_p/τ. It follows that its tensor powers over _p are also sums of _p in each degree with transition maps 0 in positive filtration, so are complete. Since only finitely many terms in the cyclic bar complex contribute to each degree of , we learn that the is complete. § TOOLS FOR UNDERSTANDING In this section, we explain some general tools which we use in understanding . §.§ Suspension operation in THH We begin by reviewing and proving some basic facts about the suspension maps, which are studied in <cit.>. Let R be an 𝔼_1-algebra in a presentably symmetric monoidal stable category 𝒞. By <cit.>, there are natural maps σ:Σ(1_R) → R⊗ R σ^2:Σ^2(1_R) →(R) where 1_R is the unit map of R. Note that the first map is defined by the diagram [column sep = huge] [r][d,"1_R"] 0[d] R[r,"𝕀⊗1_R-1_R⊗𝕀"] R⊗ R and that it factors through (μ)→ R⊗ R where μ:R⊗ R→ R is the multiplication map. Let I be an object of 𝒞 with a map I→ R⊗ R and nullhomotopies of the composites I → R⊗ RR I → R⊗ RR, where T:R⊗ R→ R⊗ R is the exchange map. Then, we obtain a map Σ I→(R) by the commutative diagram I[rr][rd][dd] 0[d] R⊗ R[r,"μ"][d,"μ∘ T"] R[d,"1⊗𝕀"] 0[r] R[r,"𝕀⊗1"] R⊗_R⊗ R^opR. 
By the proof of <cit.>, if I=Σ(1_R) and the map I→ R⊗ R is given by (<ref>), then the induced map Σ I→(R) is the map (<ref>). Let X be a spectrum. Given a class x∈π_∗(X⊗1) and a lift x∈π_∗(X⊗(1_R)) we shall write σ x∈π_∗+1(X⊗ R⊗ R) and σ^2 x∈π_∗+2(X⊗(R)) for the image of x under the maps (<ref>) and (<ref>). The notation is ambiguous since we need to choose a lift x, but these lifts will often be well-defined. We shall write d for π_∗(X⊗ R)→π_∗+1(X⊗(R)) induced by the map of spectra Σ R→Σ^2(1_R)→(R). If R is homotopy commutative in addition to being an 𝔼_1-algebra, then we can set I=(μ) in (<ref>) and obtain a map σ:Σ(μ)→(R), which is functorial on R and the homotopy[The same construction is studied in <cit.>, but we believe that additional hypotheses are required to make sense of their argument. For example, R is only assumed to be an 𝔼_1-ring in their generality, but an assumption such as homotopy commutativity of R is needed to ensure that the composite (μ)→ R⊗ RR is nullhomotopic. In their notation, we would need to assume, for example, that there is a homotopy 1_k≃ 1_k^τ. This does not affect any other part of their work since they only use rings that have enough structures.] μ≃μ∘ T. Then, the map (<ref>) is the composite Σ^2(1_R)→Σ(μ)→(R) of (<ref>) and (<ref>) up to sign. If X is a spectrum, given a class y∈π_∗(X⊗ R⊗ R) and a lift y∈π_∗(X⊗(μ)), we shall write σ y∈π_∗+1(X⊗(R)) for the image of y under the map (<ref>). Then, we have dx= σ((η_L-η_R)x) for x∈π_∗(X⊗ R), where η_L and η_R are the left and right units of R⊗ R, respectively. Let X be a homotopy unital ring spectrum and R be an 𝔼_2-algebra in 𝒞. Then, d satisfies the Leibniz rule d(xy) = d(x)y + (-1)^|x|xd(y) for any x,y∈π_∗(X⊗ R). By <cit.>, the map d can be identified with the map S^1_+⊗ R →(R) induced by the unit map R→(R) and the S^1-action on (R). Since the map R→(R) is a map of 𝔼_1-rings, the S^1-action on the target gives an S^1-family of ring maps, and so we obtain a map of 𝔼_1-rings R→lim_S^1(R) = DS_+^1⊗(R) = (R) ⊕Σ^-1(R) given by the sum of the identity map and d. Here, DS_+^1 is the Spanier-Whitehead dual of S^1 with the algebra structure given by the diagonal map of S^1. The homotopy ring of DS_+^1 is given by π_∗(DS_+^1) = (π_∗ S^0)[t]/(t^2) with |t|=-1. Since (<ref>) is a ring map, taking the X-homology, we have 1⊗ xy + t⊗ d(xy)=(1⊗ x + t⊗ dx )(1⊗ y+t⊗ dy) for x,y∈π_∗ (X⊗ R). Expanding it using t^2=0 gives us the desired Leibniz rule. Our use of the symbol d recovers the use in the HKR theorem. Recall that a strict Picard element of a symmetric monoidal category 𝒞 is a map of spectra →(𝒞). Given such a strict Picard element, viewing it as a symmetric monoidal functor →𝒞, the colimit of the composite ℕ→ℤ→𝒞 is an _∞-algebra in 𝒞 which we denote [x], where x is a class in the Picard graded homotopy in the degree of . Let C be a presentably symmetric monoidal stable category with a strict Picard element . Let [x] denote the polynomial algebra on a class x in degree . Then ([x]) is a free [x]-module on 1 and dx. The universal example of such a C is graded spectra, where [x] is the graded polynomial algebra Σ^∞_+, so it suffices to prove it there. But now this follows from from the Kunneth spectral sequence computing π_*([x]) = π_*[x]⊗_[x_1,x_2][x], since dx is σ((η_L-η_R)(x)). We now explain some basic computations involving the suspension map. [Bökstedt periodicity] The fundamental computation of Bökstedt states that the ring π_∗(𝔽_p) is isomorphic to 𝔽_p[σ^2p]. Let R∈() be a filtered _1-ring and X∈ a spectrum. 
Let y∈π_k,r-k(R⊗ X), x∈π_k X be classes such that τ^r y =x∈π_k,-k(R⊗ X). Then there is a choice of nullhomotopy of x in ( R)⊗ X such that in the spectral sequence for (R)⊗ X, the corresponding element σ^2x on the E_1-page survives to the E_r-page and has d_r-differential d_r(σ^2 x)=± d y. A choice of homotopy τ^r y ∼ x in R⊗ X becomes in (^0,0→ R)⊗ X a choice of nullhomotopy of the image of τ^r y, which corresponds to a map Σ^|y|(τ^r) →(^0,0→ R)⊗ X. This map of filtered spectra gives a map of the associated spectral sequences, and in the spectral sequence for (τ^r), there is a d_r-differential between the two spheres on the associated graded. We claim the image of the two shifts of τ in the map Σ^|y|((τ) ⊕Σ^1,-(r+1) (τ)) ≅Σ^|y|(τ^r)⊗(τ) →(^0,0→ R)⊗ X⊗(τ) correspond to the image of y and the suspension of a nullhomotopy of x under the map ^0,0→ R. The claim that the first τ is sent to y is clear by construction, and the claim that the second τ is sent to the suspension of a nullhomotopy of x follows since on associated graded our original homotopy τ^ry∼ x becomes a nullhomotopy of x. It then follows that there is a d_r differential between these two classes. Composing with the filtered map Σ(^0,0→ R)⊗ X≅Σ^2 (^0,0→ R)⊗ X (R)⊗ X of <Ref>, y gets sent to dy and the nullhomotopy of x gets sent to σ^2x (up to a possible sign), giving the desired differential in the spectral sequence for (R)⊗ X. Therefore, it is enough to prove that the connecting map sends x to y, and since the map π_∗(Z⊗ X_1)→π_∗(Z⊗ X_0) is injective, it is enough to prove that x is sent to η_∗(x) by the composite F→ X_1→ X_0. This composite is homotopic to F→𝕊→ X_0 since the connecting map F→ X_1 is given by the nullhomotopy §.§ THH in the stable range Throughout this subsection, let S be a connective _∞-algebra and R be a connective 𝔼_1-S-algebra. In this section, we show that in the situation that the unit map S → R is highly connective, (R/S) in low degrees becomes relatively straightforward to understand. This is used later in <Ref> to understand (j). Let Δ_n denote the subcategory of Δ consisting of ordinals of size ≤ n. If the unit map S→ R is i-connective, then the natural map _Δ^op_nR^⊗_S*+1→_Δ^opR^⊗_S*+1≅(R/S) is (n+1)(i+2)-1-connective. Let R=(S→ R) be the cofiber of the unit map. The m^th term of the associated graded of the filtration coming from the cyclic bar construction is Σ^m R⊗_S R^⊗_Sm, which is m(i+2)-connective because R is connective and R is (i+1)-connective. It follows that the cofiber of the map in question has an increasing filtration whose associated graded pieces are m(i+2)-connective for m >n. This implies the result. The above lemma gives a simple description of in low degrees. If the unit map S → R is i-connective, then the map Σ^2(1_R) ⊕ R (R/S) is (2i+2)-connective, where σ^2 is defined as in (<ref>). Consider the case n=1 in Lemma <ref>. Then, we have an equivalence _Δ^op_1 R^⊗_S∗+1≃( R⊗_S R[r,"μ"][d,"μ∘ T"] R R ) (see <cit.>), where T is the exchange map, and this colimit maps into (R/S) by a (2i+3)-connective map. Therefore, it is enough to prove that the map ( Σ(1_R)⊕ R[r,"proj_2"][d,"proj_2"] R R ) →( R⊗_S R[r,"μ"][d,"μ∘ T"] R R ) is (2i+2)-connective, where the map Σ(1_R)⊕ R → R⊗_SR is σ⊕(1_R⊗id) and the two maps R→ R are the identities. The fiber of this map is Σ(Σ(1_R)⊕ R R⊗_SR) which is (2i+2)-connective by the next lemma. If the unit map 1_R:S→ R is i-connective, then the map Σ(1_R)⊕ R R⊗_SR is (2i+1)-connective. 
This is equivalent to asking that the total cofiber of the following diagram S⊗_SS[r][d] S⊗_SR[d] R⊗_SS[r] R⊗_SR is (2i+2)-connective. This follows from the assumption since the total cofiber is Σ^2 (1_R)⊗_S(1_R), which is (2i+2)-connective since (1_R) is i-connective. The group π_2p-1(ℤ_p) is isomorphic to ℤ/p and is generated by σ^2α_1. Since 𝕊_p→ℤ_p is (2p-3)-connective, the result follows from <Ref>, which implies that σ^2 induces an isomorphism ℤ/p=π_2p-3(𝕊_p→ℤ_p)≃π_2p-1(ℤ_p). For p>2, the map j⊕Σ^2(𝕊_p→ j) (j) is (4p^2-4p-2)-connective. For p>2, _p → j is 2p^2-2p-2-connective. This is because the first element of the fiber is β_1 (see for example <cit.>) which is in that degree. § THE OF J_Ζ In this section, we compute (j_ζ)/(p,v_1) using the filtration constructed in <Ref>. Let us first assume that p is an odd prime. We shall discuss the case p=2 later in the section. §.§ THH of ℤ_p and ℓ_p Before computing the of j_ζ, we shall compute the of ℤ_p modulo p and the of ℓ_p modulo (p,v_1) in this section, as a warm-up. They will be computed using the spectral sequences associated with (ℤ_p^) and (ℓ_p^). Later, we show that the computation of the spectral sequence for (j_ζ^) looks the same. We note that the computations for _p and ℓ_p are well-known (see for example <cit.>). Let k be a discrete ring and let R be a ℤ^m-graded 𝔼_2-k-algebra such that the homotopy groups of R form a polynomial algebra π_∗ R = k[x_1,…,x_n] on even degree generators x_1,…,x_n. Then, there is an equivalence of ℤ^m-graded 𝔼_1-(k)-algebras (R)≃(k)⊗_k (k[x_1,…,x_n]/k). Let 𝕊[x_1,…,x_n] be the ℤ^m-graded 𝔼_2-ring spectrum of <cit.>. Then, by <cit.>, there is an equivalence of ℤ^m-graded 𝔼_2-k-algebras R≃ k⊗𝕊[x_1,…,x_n]. Therefore, since is a symmetric monoidal functor ()→, there is an equivalence of ℤ^m-graded 𝔼_1-k-algebras (R) ≃(k) ⊗(𝕊[x_1,…,x_n]), and the statement follows by base changing the second tensor factor on the right hand side along 𝕊→ k. Consider the filtered spectrum (_p^)/ṽ_̃0̃. Its associated graded spectrum is (_p[v_0])/v_0 and its underlying spectrum is (_p)/p. The E_1-page of the associated spectral sequence is 𝔽_p[σ^2p]⊗Λ[dv_0] by <Ref>. Note that σ^2 p and dv_0 are in filtrations 0 and 1, respectively. By <Ref>, we have a differential d_1(σ^2p)≐ dv_0 in the spectral sequence associated with the filtered ring (ℤ_p^). Then, mapping to (ℤ_p^)/v_0 and using the Leibniz rule, we can determine all differentials, and the E_2-page is isomorphic to 𝔽_p[(σ^2p)^p]⊗Λ[(σ^2p)^p-1dv_0]. There are no differentials in later pages by <Ref>. Therefore, the homotopy ring π_∗(ℤ_p)/p is isomorphic to 𝔽_p[μ]⊗Λ[λ_1] with |μ|=2p and |λ_1|=2p-1. By <cit.>, μ can be identified with σ^2 v_1[v_1 is not well defined at the prime 2, but still exists: it is just not a self map of (2). It is generally defined as any element of π_2p-2/p whose -Hurewicz image is v_1.], where v_1 ∈π_2p-2_p and λ_1 can be identified with σ t_1, in the sense of <Ref>, where t_1∈π_∗(ℤ⊗ℤ) is the image of t_1∈π_∗(⊗) under the map →ℤ. By <Ref>, we have λ_1 ≐σ^2α_1[Alternatively, if one knows that the p-Bockstein on μ is ≐λ_1, one learns that σ^2α≐λ_1 from the fact that the p-Bockstein on v_1 is α_1 and the fact that σ^2 is compatible with the p-Bockstein (since it comes from a map of spectra).]. Consider the filtered spectrum (ℓ_p^)/(p,v_1), where v_1∈π_∗ℓ_p is the class of filtration (2p-2). Its associated graded spectrum is (ℤ[v_1])/(p,v_1) and its underlying spectrum is (ℓ_p)/(p,v_1). 
By <Ref>, the E_1-page of the associated spectral sequence is 𝔽_p[σ^2v_1]⊗Λ[λ_1,dv_1]. Note that the for degree reasons, the first and last page a differential can happen is the E_2p-2-page. Applying Lemma <ref>, there is a differential d_2p-2σ^2 v_1 ≐ dv_1 in the spectral sequence associated with the filtered spectrum (ℓ_p^)/p. Mapping to (ℓ_p^)/(p,v_1) and using the Leibniz rule, we can determine the d_2p-2-differentials on powers of σ^2 v_1. The class λ_1 is a permanent cycle for degree reasons. Therefore, the E_2p-1-page is isomorphic to 𝔽_p[(σ^2 v_1)^p]⊗Λ[λ_1, (σ^2v_1)^p-1dv_1]. The classes (σ^2v_1)^p,(σ^2v_1)^p-1dv_1 are permanent cycles for degree reasons, so the spectral sequence degenerates at the E_2p-1-page. We let λ_2 denote a class detecting (σ^2v_1)^p-1dv_1, and μ denote a class detecting (σ^2v_1)^p. To check that there are no multiplicative extensions, we need to check λ_1^2=λ_2^2=0, which follows for degree reasons. The homotopy ring π_∗(ℓ_p)/(p,v_1) is thus isomorphic to 𝔽_p[μ_1]⊗Λ[λ_1,λ_2] where λ_1 and λ_2 can be identified with σ t_1 and σ t_2 as in the case of (ℤ_p)/p. For p>2, μ_2 can be identified with σ^2v_2. §.§ The associated graded We further filter the associated graded ring j_ζ^ by the p-adic filtration to ultimately reduce the computation to our understanding of (_p). In running the spectral sequences to obtain the mod (p,v_1), we find that they are close enough to the spectral sequences of (ℓ_p^triv)^h, the fixed points of ℓ_p with the trivial -action. We define the p-adic filtration on j_ζ^ to be j_ζ^⊗__p_p^. This is an 𝔼_∞-ℤ-algebra object in the category of filtered graded spectra. By taking the associated graded, we obtain j_ζ^⊗_ℤ_p𝔽_p[v_0], which is an 𝔼_∞-ℤ-algebra object in the category of bigraded spectra. We shall write hfp grading for the grading on j_ζ^ if we need to distinguish it from the p-adic grading on 𝔽_p[v_0]. For example, in j_ζ^⊗__p_p[v_0], v_1 has hfp degree 2p-2 and p-adic degree 0, and v_0 has hfp degree 0 and p-adic degree 1. For p>2, there is an isomorphism of bigraded _1-(_p)-algebras for (j_ζ^⊗__p_p[v_0]) ≅(_p)⊗__p(_p[v_0,v_1]/_p)⊗__p(_p^h/_p) First note that j_ζ^⊗__p_p[v_0] ≅ j_ζ^/p⊗__p_p[v_0], which by <Ref> is equivalent to _p[v_1,v_0]⊗__p_p^h. Then, the statement follows from <Ref>. We next study the behavior of fixed points by trivial -actions on . We use the spherical Witt vectors adjunction <cit.> <cit.> between perfect _p-algebras and p-complete _∞-rings. For a perfect _p-algebra A, (A) is an _∞-ring that is (p-completely) flat under _p, and whose _p homology is A. The right adjoint is π_0^♭ which is defined to be the inverse limit perfection of the _p-algebra π_0(R)/p. There is an equivalence of _∞-_p^h-algebras (_p^h) ≅_p^h⊗(C^0(_p;_p)). The restriction map _p^h→_p^hp on π_0^♭ is the map C^0(_p;_p) → C^0(p_p;_p) that restricts a function to p_p. There is a natural map ^h_p⊗(π_0^♭((_p^h)) →(_p^h), and so for the first claim it suffices to show that this is an equivalence and that π_0^♭((_p^h)) ≅ C^0(_p;_p). Both of these can be checked after base change to _p. Note that (_p^h)_p⊗_p≅(_p^h/_p). Since _p^h = _n _p^B/p^n and B/p^n is p-finite, we have, by <cit.>, (_p^B/p^n/𝔽_p)≅_p^B/p^n⊗__p^(B/p^n)^2_p^B/p^n≅_p^B/p^n×_(B/p^n)^2B/p^n. We have equivalences of spaces natural in n B/p^n×_(B/p^n)^2B/p^n≅LB/p^n = Bℤ/p^n×ℤ/p^n. where L denotes the free loop space. Then, via the Künneth isomorphism and taking the colimit over n, we get (_p^h/_p) ≅_p^h⊗_n_p^/p^n. Since _n_p^/p^n is _p, so we obtain the desired equivalence. 
To see the claim about π_0^♭, we note the natural map _p^h→_p^hp is the colimit of _p^hB/p^n→_p^hB/p^n-1, where the map is given by the inclusion /p^n-1→/p^n. At the level of the π_0, LB/p^n-1→ LB/p^n is also the inclusion /p^n-1→/p^n, so induces the restriction map at the level of -. Taking the colimit over n gives the claim. <Ref> can be interpreted as saying that the failure of p-adic to commute with taking -homotopy fixed points in the universal case is measured by π_0^♭. In particular, the map (_p^h) (_p)^h on π_0^♭ is the map _p_p evaluating at 0, and the comparison map is base changed along (π_0^♭f). Let R be a p-complete _∞-ring with trivial -action. Then there is an equivalence of _∞-R-algebras (R^h)≅(R)^h⊗(_p). Combining <Ref> with <Ref> and the HKR isomorphism, we get the following. For p>2, we have an isomorphism of rings π_*(j_ζ^⊗__p_p[v_0]) ≅_p[σ^2p,v_0,v_1]⊗Λ[dv_0,dv_1,ζ]⊗_p §.§ Spectral sequences Let us first run the spectral sequence for the p-adic filtration. For p>2, we have an isomorphism of rings π_*(j_ζ^)/p ≅π_*(_p)/p⊗_p[v_1]⊗Λ[dv_1,ζ]⊗_p. As in Example <ref>, the spectral sequence associated with (j_ζ^⊗_p^)/ṽ_̃0̃ has E_1-page isomorphic to π_*(j_ζ^⊗__p_p[v_0])/v_0 ≅_p[σ^2p,v_1]⊗Λ[dv_0,dv_1]⊗ H^*_(S^1×_p;_p) and converges to π_∗(j_ζ^)/p. Because there is a map of filtered rings j_ζ^⊗__p_p^→(j_ζ^⊗__p_p^), we see that the classes v_1, ζ are permanent cycles. The class dv_1 is a permanent cycle since it detects the suspension dv_1 of v_1∈π_∗ j_ζ^/p. The elements of C^0(_p;_p) are permanent cycles since there are no elements of negative topological degree and positive filtration. From the map of filtered rings (ℤ_p^)→(j_ζ^⊗__p_p^), there is a d_1-differential σ^2p↦σ v_0 by Example <ref>, and (σ^2p)^p and (σ^2p)^p-1dv_0 are permanent cycles detecting images of classes in (_p). It follows that after the d_1-differential, the E_2-page is _p[(σ^2p)^p,v_1]⊗Λ[(σ^2p)^p-1dv_0,dv_1,ζ]⊗ C^0(_p;_p), so the spectral sequence collapes at the E_2-page. There are no multiplicative extensions since every class comes from either j_ζ^, (ℤ_p), or (𝕊_p^hℤ). Our next goal is to compute mod (p,v_1) the spectral sequence (j^_ζ) (j_ζ). Before doing so, we run the analogous spectral sequence for computing (ℓ_p)/(p,v_1), as a warm up. We consider the _∞-ring _ζ=_p^h with the trivial filtration. For p>2, π_*((j_ζ))/(p,v_1) ≅_p[σ^2v_2]⊗Λ[λ_1,λ_2,ζ]⊗ C^0(_p;_p) with |λ_i| = 2p^i-1 and |σ^2v_2| = 2p^2. As in Example <ref>, we consider the spectral sequence associated with the filtered spectrum (j_ζ^)/(p,ṽ_̃1̃). The analogous spectral sequence in the case p=2 is displayed in <Ref> above. The underlying spectrum is (j_ζ)/(p,v_1) and the associated graded spectrum is (j_ζ^)/(p,v_1). By <Ref>, the E_1-page is isomorphic to 𝔽_p[σ^2 v_1]⊗Λ[λ_1, dv_1,ζ]⊗ C^0(ℤ_p;𝔽_p). The classes in C^0(ℤ_p;𝔽_p) are permanent cycles by the Leibniz rule, since they are all their own p^th-power. The class ζ∈ H^1(S^1;_p) is a permanent cycle because it detects a class in the image of j_ζ→(j_ζ). By <Ref>, there is a differential d_2p-2(σ^2 v_1)≐ dv_1, and the Leibniz rule determines the differentials on powers of σ^2v_1. Similarly, by Lemma <ref>, there must be a d_2p-2 differential λ_1≐σ^2 α_1 dα_1 in the spectral sequence (j_ζ^)(j_ζ) mod p. By Lemma <ref>, we have dα_1 = d(v_1ζ) = v_1dζ - ζ dv_1, so that we have the differential d_2p-2(λ_1) ≐ζ dv_1 mod (p,v_1). By using the previous paragraph and replacing λ_1 with λ_1'=λ_1 - ϵζμ for some ϵ∈𝔽_p^×, we may assume that d_2p-2(λ_1')=0. 
This completely determines the spectral sequence up to the E_2p-2-page, and we learn the E_2p-1-page is isomorphic to 𝔽_p[(σ^2v_1)^p]⊗Λ[λ_1',(σ^2v_1)^p-1dv_1,ζ]⊗ C^0(Z_p;𝔽_p). There are no more differentials since there is no class outside filtration degree 0 and 2p-2. There are no multiplicative extension problems since the multiplicative generators in nonzero degree are free generators as a graded ring. Finally, let us show that the polynomial generator μ_2 is the class σ^2 v_2. Let us consider the map j_ζ^→_ζ induced by applying (τ_≥*(-))^h to the -equivariant truncation map ℓ_p →_p. This induces a map of spectral sequences for . Since _ζ has the trivial filtration, its does too, so has no differentials in its associated spectral sequence. By <Ref> and <Ref>, (_ζ)_p≅(_p)^h⊗(_p), so π_*(_ζ)/(p,v_1) ≅_p[σ^2v_1,λ_1,λ]⊗_p. v_2 ∈π_2p^2-2/(p,v_1) has a canonical nullhomotopy in j_ζ/(p,v_1) ≅_p^h and _ζ/(p,v_1) ≅_p[σ v_1]^h, so there is a canonical element σ^2v_2 in π_2p^2(j_ζ)/(p,v_1) and π_2p^2(_ζ)/(p,v_1), which we claim is detected in the spectral sequence for (j_ζ^) by (σ^2v_1)^p. To see this, it suffices to show this in (_ζ) because the map is injective in degree 2p^2-2. But now it is the image of σ^2v_2 from the map ℓ_p^→_ζ, and in ℓ_p^, which we know by <Ref> is detected by (σ^2v_1)^p. In the proof of the previous theorem, a reader might wonder why λ_1 supports a differential while σ^2α_1 is still well-defined in (j_ζ). This can be explained by the fact that σ^2α_1 is not well-defined in (ℤ_ζ)/(p,v_1) since π_2p-3((𝕊→ℤ_ζ)/(p,v_1)) →π_2p-3(𝕊/(p,v_1)) is not injective. The class σ^2α_1 is well-defined in (ℤ)/(p,v_1) and (j_ζ)/(p,v_1), but their images in (ℤ_ζ)/(p,v_1) are different. The class λ_1 in the E_1-page represents the former and λ_1' represents the latter. We can carry out the same computation for (ℓ_p)^hℤ/(p,v_1) using the same filtrations ℓ_p^ and ℓ_p^⊗_p^. Then, we obtain an isomorphism of rings π_∗(ℓ_p)^hℤ/(p,v_1) ≃𝔽_p[σ^2 v_2] ⊗Λ[λ_1,λ_2,ζ]. Furthermore, by keeping track of the map (j_ζ)/(p,v_1)→(ℓ_p)^hℤ/(p,v_1) at every stage, we see that on homotopy groups, this map is the base-change along C^0(ℤ_p;𝔽_p)→𝔽_p that evaluates a function at 0∈ℤ_p. §.§ The prime 2 We next turn to the prime 2. We first need to run the analogous analysis as in <Ref> for _2. We consider _2^/2[v_0] as the bigraded ring given as the associated graded of _2^⊗__2_2^. To understand this, we need the following lemma. There is an isomorphism of bigraded rings π_*(_2^/2[v_0])/η≅_2[v_0,v_1,σ^22,dη]/((dη)^2+v_1dη) ⊗Λ[dv_0,dwv_1] The associated graded of _2^/2[v_0] with respect to the Posnikov filtration is _2[v_0,v_1,η]. By symmetric monoidality of , we have an equivalence (_2[v_0,v_1,η])≅(_2[v_0,v_1])⊗_(_2)(_2[η]) Since the argument of <Ref> works at the prime 2, we learn that the first tensor factor has homotopy ring _2[σ^2p,v_0,v_1]⊗Λ[dv_0,dv_1]. For the second tensor factor, we note that (_2[η])⊗_(_2)_2 ≅(_2[η]/_2), whose homotopy ring is _2[η]⊗Λ[dη]. Since the map (_2) →_2 is the cofiber of σ^2p, we can run a σ^2p-Bockstein spectral sequence to recover (_2[η]). In the spectral sequence, η,dη are permanent cycles since they are in the image of the unit map and the map d. We also see that there are no multiplicative extensions mod η for degree reasons, i.e. we have π_∗(𝔽_2[η])/η = Λ(dη)⊗_𝔽_2𝔽_2[σ^22]. In the spectral sequence computing (_2^/2[v_0]) from this, everything is a permanent cycle since all classes are generated either from the image of the unit map, the map from (_2), or the map d. 
Now we turn to the multiplicative extensions, which we compute by mapping to the σ^22-completion of (_2^hC_2[v_0,v_1]). As before, we can compute this via the σ^22-Bockstein spectral sequence whose E_1-page is (_2^hC_2[v_0,v_1]/_2)[σ^22]. We have an isomorphism (_2^hC_2[v_0,v_1]/_2) ≅(_2^hC_2/_2)⊗__2(_2[v_0,v_1]). Moreover, _*(_2[v_0,v_1]) ≅_2[v_0,v_1]⊗Λ[dv_0,dv_1], and (_2^hC_2) is _2^hC_2×_2^hC_2, since the free loop space of BC_2 is BC_2× C_2. If h is the generator of π_-1_2^hC_2, then a nontrivial idempotent in π_0(_2^hC_2) is given by dh. By the Leibniz rule (<Ref>), dη = v_1dh+hdv_1, so (dη)^2 = v_1^2dh = v_1dη+η dv_1. This this relation happens in (_2)/σ^22, but for degree reasons, this forces it to happen in (_2)/η as well. To see that the classes dv_0 and dv_1 square to 0, we note that this is true in (_2[v_0,v_1]/_2), and that we have a map (_2)⊗__2(_2[v_0,v_1]/_2) ≅(_2[v_0,v_1]) →(_2[v_0,v_1]^hC_2) using the isomorphism of <Ref>. There is an isomorphism of graded rings π_*((_2^)/(2,η)) ≅_2[v_1,σ^2v_1,dη]/((dη)^2+v_1dη)⊗Λ[σ^2η,dv_1] We now understand the spectral sequence computing π_*((_2^)/(2,η)) by running the 2-adic filtration spectral sequence on (_2⊗__2_2^)/(v_0,η). By <Ref>, there is a differential from σ^2v_0 to dv_0, σ^2η is a class squaring to zero detected by σ^2v_0dv_0, and σ^2v_1[The element v_1∈π_2/2 exists, even though it does not extend to a self map.] detects (σ^2v_0)^2. The remaining classes are either in the image of the unit map or the image of d, so are permanent cycles. The relation (dη)^2+v_1dη=0 occurs because it does on associated graded, and because there are no classes in topological degree 4 and positive p-adic filtration. The class dv_1 squares to zero since there are no classes of weight -2, topological degree 6, and positive p-adic filtration. We now compute (_2)/(2,η,v_1), which was also computed in <cit.>. We now can run the spectral sequence (_2^)/(2,η,v_1) (_2)/(2,η,v_1) , which is a spectral sequence associated with a filtered _∞-ring since _2 ≅_2^/(2,η,v_1), where η and v_1 are taken in filtration 2. This spectral sequence is displayed in <Ref>. The first page of this spectral sequence by <Ref> is _2[σ^2v_1]⊗Λ[dv_1,dη,σ^2η]. It follows as in <Ref> that there are differentials from σ^2η to dη and σ^2v_1 to dv_1. What remains after these differentials are _2[(σ^2v_1)^2]⊗Λ[σ^2v_1dv_1, σ^2η dη]. For degree reasons, there can be no further differentials. the classes in odd degree square to 0 because there are no classes in degrees 2 or 6 mod 8. We now run the analogous analysis to compute (j_ζ)/(2,η,v_1). There is an isomorphism of graded rings π_*(j_ζ^)/(2,η,v_1) ≅π_*(_2^)/(2,η,v_1)⊗π_*((_2^h/_2)) Since _2^/2⊗__2_2^h≅ j_ζ^/2, we learn from <Ref> that π_*((j_ζ^/2[v_0])/(v_0,η,v_1)≅_2[σ^2v_0]⊗Λ[dη,dv_0,dv_1]⊗_*(_2^h/_2) where (_2^h/_2) is computed via <Ref> as _2^h⊗_p. Exactly as in <Ref>, in the spectral sequence for the 2-adic filtration, there is a differential from σ^2v_0 to dv_0, σ^2η is a class squaring to zero detected by σ^2v_0dv_0, and σ^2v_1 is a class detecting (σ^2v_0)^2. The rest of the classes are permanent cycles because they are either in the unit map, come from d, or are permanent cycles by the Leibniz rule. There is an isomorphism of rings for p=2 π_*(j_ζ)/(2,η,v_1) ≅_2[μ]⊗Λ[λ_2,x,ζ]⊗_p where |x| = 5, |λ_2| = 7, |μ| = 8. We run the spectral sequence (_2^)/(2,η,v_1) (_2)/(2,η,v_1). As in <Ref>, there are differentials from σ^2η to dη and σ^2v_1 to dv_1. 
For degree reasons, (σ^2v_1)^2 is a permanent cycle, as are σ^2η dη, σ^2v_1dv_1, and ζ. _2 is a permanent cycle by the Leibniz rule. If we let λ_2 and x denote classes detecting σ^2v_1dv_1 and σ^2η dη respectively, then λ_2^2=0 and x^2=0 for degree reasons. § THE OF J We now consider (j)/(p,v_1) for p>2. We first compute the Hochschild homology of the _p-algebra j^/p, which is isomorphic to τ_≥0(_p[v_1]^h) by <Ref>. Let p>2. _*((j^/p)/_p) ≅_*(τ_≥0(_p[v_1]^h)/_p) is isomorphic as a ring to Λ[dv_1,α_1]⊗𝔽_p[v_1,x_0,x_1,…]/(x_i^p = v_1^p^i+1-p^ix_i + v_1^p^i+1-p^i - 1α_1 (∏_j=0^i-1x_j^p-1)dv_1; i≥0) where |x_i|=p^i(2p-2), and x_i is in grading p^i(2p-2). Define a graded ring R = τ_≥ 0ℤ_p[v_1]^hℤ using a trivial ℤ-action so that R/p≃ j^/p. We shall show that π_∗(R/ℤ_p) is the ℤ_p-algebra generated by v_1,dv_1,α, and a set of generators x_0,x_1,… with |x_i| = p^i(2p-2) having relations x_i^p = px_i+1 + v_1^p^i+1-p^ix_i + v_1^p^i+1-p^i - 1α (∏_j=0^i-1x_j^p-1)dv_1. Then, the statement follows by the base-change ℤ_p→𝔽_p. Let R_ζ = ℤ_p[v_1]^hℤ defined using a trivial ℤ-action and let η:R→ R_ζ denote the connective cover map. To compute the Hochschild homology, we shall show that the map η_∗:π_∗(R)→π_∗(R_ζ) is injective and describe the image. Note that π_∗ R_ζ = ℤ_p[v_1,ζ] and π_∗ R = ℤ_p[v_1,α] where η_∗(α)= v_1ζ. Let us consider the Künneth spectral sequence E_2((R)) = ^π_∗(R⊗_ℤ R)(π_∗ R,π_∗ R) π_∗(R). Since π_∗ R =ℤ_p[v_1]⊗Λ[α], the E_2-page can be computed as E_2((R)) = ℤ_p[v_1]⊗Λ[dv_1,α]⊗Γ[dα]. Similarly, there is a spectral sequence E_2((R_ζ)) = ℤ_p[v_1]⊗Λ[dv_1,ζ]⊗Γ[dζ]π_∗(R_ζ) up to p-completion. We claim that E_2((R))→ E_2((R_ζ)) is injective. By Lemma <ref>, we have dα↦ -ζ dv_1 + v_1dζ. To prove the injectivity, it is enough to prove it after taking the associated graded group with respect to the (dv_1)-adic filtration. Then, we may assume that dα maps to v_1dζ, and since E_2((R_ζ)) is torsion-free, the divided power γ_n(dα) maps to v_1^nγ_n(dζ). Therefore, we have the desired injectivity. Note also that the map is injective mod p. The spectral sequence (<ref>) degenerates at the E_2-page using the symmetric monoidality of , <Ref>, and <Ref>. We then see that (<ref>) also degenerates at the E_2-page and that η_∗:π_∗(R)→π_∗(R_ζ) is injective, even after mod p. Let us describe the Künneth filtration on π_∗(R_ζ) = ℤ_p[v_1]⊗Λ[dv_1,ζ]⊗ W(C^0(ℤ_p;F_p)) in more detail. Here, the ring W(C^0(ℤ_p;𝔽_p))=lim_kC^0(ℤ_p;ℤ_p/p^k) is the ring of all continuous functions ℤ_p→ℤ_p. It can also be described, up to completion, as the algebra generated by y_0,y_1,… with relations y_i^p = py_i+1+y_i. Here, the element y_0 is the identity function ℤ_p→ℤ_p and the y_i's for i>0 can be defined with the above formula since y^p≡ y p for any y∈ W(C^0(ℤ_p;𝔽_p)). In π_∗(R_ζ), the element y_0 equals dζ, and the y_i's represent the p^i-th divided power of dζ in the Künneth spectral sequence (<ref>). To determine π_∗(R), we need to find the classes x_i's representing the divided powers γ_p^i(dα)∈ E_2((R)) up to a p-adic unit. The first divided power dα∈ E_2((R)) has a canonical lift x_0:=dα∈π_∗(R) and its image under η_∗ is v_1y_0 - ζ dv_1. Inductively, suppose that we have chosen x_0,…,x_i in a way that the image of x_j is η_∗(x_j)= v_1^p^jy_j - v_1^p^j-1(∏_k=0^j-1y_k^p-1)ζ dv_1 for 0≤ j≤ i. Let x_i+1 be any class representing γ_p^i+1(dα). Then, after scaling by a unit, we must have x_i^p = px_i+1+c for some class c∈π_∗(R) with Künneth filtration <p^i+1. 
Applying η_∗, we have η_∗(c)≡η_∗(x_i)^p ≡ v_1^p^i+1y_i^p ≡ v_1^p^i+1y_i p. Let d∈π_∗(R) be the class v_1^p^i+1-p^i-1(v_1x_i + α(∏_k=0^i-1x_k^p-1)dv_1), having Künneth filtration p^i. Then, we can compute that η_∗(d) = v_1^p^i+1y_i so that η_∗(c) ≡η_∗(d) p. Since η_∗ is injective mod p, we have c≡ d p, so by replacing x_i+1 with x_i+1 - (c - d)/p, we can assume that c=d. Then, we have η_∗(x_i+1) = p^-1η_∗(x_i^p - c) = p^-1( v_1^p^i+1y_i -pv_1^p^i+1-1(y_i⋯ y_0)^p-1ζ dv_1 - v_1^p^i+1y_i) =v_1^p^i+1y_i+1 - v_1^p^i+1-1(y_i⋯ y_0)^p-1ζ dv_1. The desired ring structure of π_∗(R) can now be read off from the ring structure on π_*(R_ζ). There is an isomorphism of bigraded _1-(_p)-algebras for p>2 (j^⊗__p_p[v_0]) ≅(_p)⊗__p(_p[v_0]/_p)⊗__p(τ_≥0_p[v_1]^h/_p) We run the strategy of <Ref> with appropriate modifications. First, we have the isomorphism j^⊗__p_p[v_0] ≅ j^/p⊗__p_p[v_0], which by <Ref> is equivalent to τ_≥0_p[v_1,v_0]⊗__p_p^h. As an _2-ring, we claim this is equivalent to the tensor product of _p⊗[v_0] with the pullback of the cospan [v_1]⊗^h [d] [r] ^h where the vertical map is the augmentation sending v_1 to 0. This isomorphism is a consequence of the isomorphism of <Ref> and the pullback square j^/p [r][d] j_ζ^/p [d] [r] _p _p^h Given this equivalence, we conclude by arguing exactly as in <Ref>. Let p>2. Then π_*(j^)/p ≅π_*(_p)/p⊗π_*(τ_≥0_p[v_1]^h/_p) We follow the strategy in <Ref>, running the spectral sequence corresponding to the p-adic filtration π_*(j^/p[v_0])/p π_*(j^)/p. The E_1-page is understood via <Ref> to be _p[σ^2p,v_0]⊗Λ[dv_0]⊗π_*(τ_≥0_p[v_1]^h/_p) where the last tensor factor is described in <Ref>. There is a differential d_1σ^2p = dv_0, coming from the map from _p^→_p^⊗ j^ and <Ref>. We need to show that the remaining classes are permanent cycles. The classes v_1,α are permanent cycles because they are in the image of the unit map, and dv_1 is a permanent cycle because it is in the image of the map σ^2. The classes x_i are permanent cycles for degree reasons, as everything of positive p-adic filtration is in nonnegative degree, and the differentials respect the hfp grading. One also sees for degree reasons and the map from (_p)/p that there are no multiplicative extension problems. We now run the spectral sequence (j^)/(p,v_1) (j)/(p,v_1) associated with the filtered spectrum (j^)/(p,v_1) where v_1∈π_∗ j/p is the class of filtration 2p-2. The following lemma guarantees the multiplicativity of the spectral sequences. j^/(p,v_1) admits a homotopy commutative Å_p-1-multiplication for p>2, and in particular is homotopy associative for p>3. By <cit.>, it follows that /p is an Å_p-1-algebra, and it is easy to see that there is no obstruction to its multiplication being homotopy commutative for p>2. We conclude by observing that j^/(p,v_1) ≅τ_≤ 2p-3j^⊗/p. Note that by loc. cit., the multiplication is not Å_p, the obstruction being α_1. For p>3, π_*(j)/(p,v_1) is the homology of the CDGA 𝔽_p[μ_2]⊗Λ[α_1,λ_2,a]⊗Γ[b], d(λ_2)=aα_1 |b| = 2p^2-2p , |a| = 2p^2-2p-1, |λ_2| = 2p^2-1, |μ_2| = 2p^2 and for p=3, the above result is true after taking an associated graded ring. The E_1-page of the spectral sequence E_1 = π_∗(j^)/(p,v_1)π_∗(j)/(p,v_1) is isomorphic to 𝔽_p[μ_1]⊗Λ[σ^2α_1,dv_1,α_1]⊗Γ[dα_1]. by <Ref>. By <Ref>, there are d_2p-2-differentials σ^2α_1 dα_1 σ^2 v_1 dv_1. 
The class α_1 is a permanent cycle since it must represent the image of α_1∈π_∗ j/(p,v_1) along the unit map, and the divided power classes (dα_1)^(k) are permanent cycles because they are in weight 0, and there are no classes of weight >1. Therefore, by the Leibniz rule, the E_2p-1-page is isomorphic to 𝔽_p[μ_2]⊗Λ[λ_2, a, α_1]⊗Γ[γ_p(dα_1)] where μ_2,λ_2 and a represent (μ_1)^p,(σ^2 v_1)^p-1dv_1 and (σ^2 α_1)γ_p-1(dα_1), respectively. For degree reasons, the only possible further nonzero differential is d_p-1(λ_2) ≐α_1 To prove that this differential actually happens, it is enough to show that π_2p^2-2(j)/(p,v_1)=0. By <Ref>, there is a (4p^2-4p-2)-connective map j ⊕Σ^2(_p → j) →(j), so it suffices to show that π_2p^2-2(j/(p,v_1)) = π_2p^2-2(Σ^2(1_j)/p,v_1)=0. The former group is clearly 0. The latter is 0 from the computation of the Adams–Novikov E_2-page for /(p,v_1) in low degrees (see the discussion after <cit.> and Theorem 4.4.8 of op. cit.). The last nontrivial differential of the spectral sequence is displayed for p=3 in <Ref>. We now check for p≥ 5 that there are no multiplicative extension problems in our description of the commutative ring structure on π_*(j)/(p,v_1). If we choose γ_p^ib to be detected by (γ_p^i+1(dα_1)), the relations γ_p^i(b)^p=0 follow since there is nothing of higher filtration in that degree. Let μ_2 be any lift of (σ^2v_1)^p. The homology of the CDGA Λ__p[α_1,λ_2,a], d(λ_2) = aα_1 is 6-dimensional over _p, given by {1,a,α_1,λ_2a,λ_2α_1,λ_2aα_1} Let α_1,x, y,z denote lifts of the classes α_1,a,λ_2a,λ_2α_1 respectively (so that α_1y is a lift of λ_2aα_1). The relation α_1y=-xz holds because it is true on the associated graded and there is nothing of higher filtration in that degree. The classes α_1z,yz,xα_1 are 0 because there are no nonzero classes in degree (p+1)(2p-2),2p^2-1+2(2p-3),2(2p^2-1)+(2p-3)+ p(2p-2)+1 respectively. The only remaining relation, xy=0, occurs because it happens on the associated graded, and there is nothing of higher filtration. For p=3, it is more complicated to figure out the multiplicative extensions, since the homotopy ring is not necessarily associative. Many of the multiplicative extensions can be ruled out using the Postnikov filtration on j/(3,v_1), but not all of them: for example this doesn't rule out the possible non-associative extension x(x μ_2^2) = zb^2 in degree 62. § THH OF FINITE EXTENSIONS In this section, we shall make the analogous computations for the THH of j_ζ,k:=ℓ_p^hp^kℤ, ju_ζ,k, and and also of j_k:=τ_≥0j_ζ,k for p>2, which are introduced as filtered rings in <Ref>. j_ζ,k is a /p^k Galois extension of j_ζ in _p. The computations are very similar to the cases of j_ζ and j, so we shall only point out the differences from the proofs of those cases. There is an isomorphism of rings for p>2 π_∗(j_ζ,k)/(p,v_1) ≃π_*((ℓ_p)/(p,v_1))⊗Λ[ζ]⊗_p and for p=2 π_∗(j_ζ,k)/(2,η,v_1) ≃π_*((_2)/(2,η,v_1))⊗Λ[ζ]⊗_2 The maps (j_ζ,k)/(p,v_1) →(j_ζ,k+1)/(p,v_1) on π_* are the identity on the (ℓ_p)/(p,v_1) component, send ζ to 0, and are the restriction map _p→p_p≅_p. The proof is exactly the same as in <Ref> and <Ref>. The only difference is that for k≥1, <Ref> doesn't apply: the class λ_1 in the spectral sequence (j_ζ^)/(p,v_1) (j_ζ)/(p,v_1) is a permanent cycle, which can be seen from the Leibniz rule. As noted in the remark, this doesn't affect the final answer. The claim about the maps π_*(j_ζ,k)/(p,v_1) →(j_ζ,k+1)/(p,v_1) can be deduced at the level of associated graded of the filtrations. 
For example, by choosing elements λ_1,λ_2,σ^2v_2 in (j_ζ)/(p,v_1), one sees that their images in (j_ζ,k)/(p,v_1) are valid generators of the corresponding classes. To see what the transition maps do on Λ[ζ]⊗_p, we can use <Ref> since these classes are in the image of (_p^h). It then follows that map sends _p→p_p given by restriction of functions, and ζ goes to p ζ=0 because that is what happens on the level of mod p cohomology of the p-fold cover map S^1 → S^1. We next explain the computation for ju_ζ,k, which is nearly identical to that of j_ζ,k For each k≥0, there is an isomorphism of rings π_∗(ju_ζ,k)/(2,v_1) ≃π_*((ℓ_2)/(2,v_1))⊗Λ[ζ]⊗_2 The maps (ju_ζ,k)/(p,v_1) →(ju_ζ,k+1)/(p,v_1) on π_* are the identity on the (ℓ_2)/(2,v_1) component, send ζ to 0, and are the restriction map _2→2_2≅_2 The proof is nearly exactly as the proof of <Ref> for p>2. The only difference is that in checking multiplicative extension problems in spectral sequences, one must check that odd degree classes square to zero (since we are at the prime 2). This always follows because the square lands in a zero group; see <Ref> for a chart. Our argument to compute (j_k) for k≥1 uses Dyer–Lashof operations to produce permanent cycles, so we first give j_k/(p,v_1) an _∞-structure. For k≥1, j_k/(p,v_1) admits the structure of an _∞-algebra under j_k that is a trivial square zero extension of _p by Σ^2p-2_p. To construct j_k/(p,v_1) as an _∞-ring, we first begin with τ_≤ 2p-3j_k, whose homotopy groups are _p in degree 0 and /p^k+1 in degree 2p-3, where α_1 is a p-torsion class in degree 2p-3. By <cit.> this is a square zero extension of _p by Σ^2p-3/p^k+1, i.e it fits into a pullback square τ_≤2p-3j_k [r][d] _p [d] _p[r] _p⊕Σ^2p-2/p^k+1 By using the map /p^k+1→/p that kills every multiple of p (including α_1 since k≥1), we can produce an _∞-algebra R under τ_≤ 2p-3j_k defined as the pullback R [r][d] _p [d] _p[r] _p⊕Σ^2p-2/p We claim that R is a trivial square zero extension of _p. To see this, square zero extensions of _p by Σ^2p-1_p are classified by maps of _p-modules L__p/_p→Σ^2p-1_p, where L__p/_p denotes the _∞ relative cotangent complex. By <cit.>, since _p →_p is 2p-3-connective, there is a 4p-4-connective map _p ⊗__p(_p →_p) → L__p/_p showing that π_2p-2L__p/_p is _p. It follows that up to isomorphism, there is a unique nontrivial square zero extension of _p by Σ^2p-3_p. But τ_≤2p-3_p must be this nontrivial extension, since α_1≠0 there. Since α_1=0 in R, it follows that R is the trivial square zero extension _p ⊕Σ^2p-3_p. Thus τ_≤2p-3(R⊗__p_p) is an _∞-_p-algebra under it that is a trivial square zero extension of _p by Σ^2p-2_p. But it is easy to see that the underlying unital j_k-module of this is j_k/(p,v_1). For k≥1,p>2, there is an isomorphism π_∗(j_k)/(p,v_1) ≃π_*(ℓ_p)/(p.v_1) ⊗Λ[α_1/p^k]⊗Γ[dα_1/p^k] where |α_1/p^k| = 2p-2 and |σα_1/p^k| = 2p-1. The proof of <Ref> carries over exactly for j_k to give an isomorphism π_*(j_k^)/(p,v_1) ≅π_*(_p)/p⊗π_*(τ_≥0_p[v_1]^hp^k/_p)/v_1 The second tensor factor on the right hand side by <Ref> is Λ[α_1/p^k,dv_1]⊗Γ[d α_1/p^k][As an algebra this doesn't depend on k, but we have given names depending on k to indicate that the exterior class α_1/p^k is sent to 0 in (j_k+1^)/(p,v_1).]. In the spectral sequence for (j_k^)/(p,v_1) (j_k)/(p,v_1), there is a differential d_2p-2σ^2v_1 = d v_1 arising as in <Ref>, but the target of the differential from σ^2α_1, which is σα_1, is zero since α_1 = 0 in j_k/(p,v_1). 
In fact, the class σ^2α_1 is a permanent cycle since it can be constructed using a nullhomotopy of α_1. Let λ_1 be a class in (j_k)/(p,v_1) detecting this. By <Ref>, j_k/(p,v_1) is an _∞-algebra under j_k that is an _∞-_p-algebra, so (j_k)/(p,v_1) ≅(j_k)⊗_j_kj_k/(p,v_1) is an _∞-_p-algebra with Dyer–Lashof operations. We define λ_2 to be the _2-Dyer–Lashof operation on λ_1. In (ℓ_p)/(p,v_1), this operation on the class λ_1 gives the class λ_2 in π_2p^2-1(ℓ_p)/(p,v_1) <cit.>, which is detected by σ^2v_1^p-1dv_1 in the spectral sequence for (ℓ_p^)/(p,v_1) by <Ref>. Since maps of filtered objects can only increase filtrations in which elements are detected, it follows that λ_2 must also be detected by σ^2v_1^p-1dv_1 in (j_k)/(p,v_1), so that class is a permanent cycle. The class α_1/p^k is a permanent cycle since it is in the image of the unit map, and the classes in Γ[dα_1/p^k] must be permanent cycles for degree reasons, so there are no further differentials. There are no even degree classes of positive weight, so classes representing the divided powers of dα_1/p^k have zero p^th-power for degree reasons. For degree reasons there can be no further multiplicative extensions. § IN THE STABLE RANGE is an important invariant of rings, partially because of the Dundas–Goodwillie-McCarthy theorem, which says that for nilpotent extensions of rings, the relative K-theory is the relative . Let f:R→ S an i-connective map of connective _1-rings, for i≥1. Then there is a pullback square K(R) [r][d] K(S) [d] (R) [r] (R) A precursor to this theorem is a result of Waldhausen[Although Waldhausen proves this result for _1--algebras, the proof works equally well for any _1-algebra: see for example <cit.>.], which computes the first nonvanishing homotopy group of (f) ≅ K(f) in terms of Hochschild homology. Let f:R → S be an i-connective map of connective _1-algebras for i≥1. Then (K(f)) ≅((f)) is (i+1)-connective, with π_i+1(K(f)) ≅HH_0(π_0S;π_i f). Our goal in this section is to refine <Ref> to compute the spectrum (K(f)) in the stable range in terms of . We use this to understand the maps K(_p) → K(_p) and K(j_ζ) → K(_p^h) in the stable range. Given a map of _1-rings, R → S, the relative _1-cotangent complex L_S/R is the S-bimodule given by the fiber of the multiplication map S⊗_RS → S[See for example <cit.>.]. Our result is as follows: Given a map of ring spectra f:R → S, there is a natural map (f) →(S;L_S/R). If f is an n-connective map of -1-connective rings for n≥ 1, this natural map is 2n+1-connective. In fact the map of <Ref> is the linearization map in the sense of Goodwillie calculus, of the functor f ↦ ((f)). See <cit.> for a variant of this, where one considers only trivial square-zero extensions of S rather than arbitrary _1-ring maps. We first construct the natural transformation using the following lemma. Let f:R → S be a map of _1-rings. Then there is a natural equivalence (R;S) ≅(S;S⊗_RS) making the diagram below commute. (R;S)[rr][dr] (S;S) [ur] (S;S⊗_RS) Consider the map f^*:(R) →(S) and its right adjoint f_*:(S) →(R). The composite f^*f_* corresponds to the S-bimodule S⊗_RS, and the composite f_*f^* corresponds to the R-bimodule S. Since of a bimodule is the trace of the bimodule as an endomorphism in presentable stable categories, cyclic invariance of the trace gives the desired equivalence (R;S) ≅(S;S⊗_RS). 
There is a diagram (R) (S) (S) (S)["1_S"', shift right=3, from=1-3, to=2-3] ["1_S"', shift right=3, from=2-3, to=1-3] ["f^*"', shift right=3, from=1-1, to=2-1] ["f_*"', shift right=3, from=2-1, to=1-1] ["f^*"description, from=1-1, to=1-3] ["1_S"description, from=2-1, to=2-3] where we use the natural transformation ϵ:f^*f_* → 1_S and 1_f^* to fill in the 2-morphisms in the diagram. The horizontal maps in the diagram induce at the level of bimodules the maps f^*f_* 1_S and f_*f^* 1_S which induce the maps (R;S),(S;S⊗_RS) →(S;S) in the triangle of the lemma statement. The C_2-action on (S) coming from writing 1_S as 1_S∘ 1_S corresponds to restricting the S^1-action on (S) to C_2 ⊂ S^1. It follows that the claimed diagram naturally commutes because S^1 is connected, so the rotation by π action on (S) is homotopic to the identity. We construct the natural transformation ((f)) →(S;L_S/R) for a map f:R → S as follows: composing the map (R) →(R) with (R) →(R;S), we obtain a commutative square (R)[r][d] [d](S) (R;S)[r] (S;S) Taking horizontal fibers and using the isomorphism of <Ref>, we obtain the desired natural transformation. We will first prove <Ref> in the case R → S is a square-zero extension with ideal M. To do this, we consider the square-zero extension as a filtered _1-ring with underlying R and associated graded S⊕ M[1]. Then (R) is a filtered S^1-equivariant spectrum, and the Frobenius maps Φ_p:(R) →(R)^tC_p send filtration i to filtration ip, so in particular can be thought of as filtration preserving maps, since the filtration is only in nonnegative degrees. The key input we use is the computation of of a trivial square-zero extension as an S^1-equivariant spectrum: For S⊕ M the trivial square-zero extension of an _1-ring R by a bimodule M, there is an S^1-equivariant graded equivalance (S⊕ M) ≅(S) ⊕⊕_m=1^∞_/m^S^1(S;(Σ M)^⊗ m) Here _/m^S^1 is the right adjoint of the forgetful functor from S^1-equivariant spectra to /m-spectra, and the /m-action on (S;(Σ M)^⊗ m) comes from cyclically permuting the tensor factors. We also record a key property of the of -1-connective rings that we use: Let R → S be an n-connective map of -1-connective rings, and M a connective S-bimodule. Then (S;M) is connective, and the map (R;M) →(S;M) is n+1-connective. Both of these follow from examining the associated graded coming from the cyclic bar complex computing (R;M) and (S;M). For the latter is given by Σ^mS^⊗ m⊗ M which indeed is connective, and Σ^mS^⊗ m⊗ M →Σ^mR^⊗ m⊗ M is n+m-connective for m≥1 and an isomorphism for m=0. Let f:R→ S be an n-connective square-zero extension of -1-connective _1-rings for n≥0. Then the map (f) →(S;L_S/R) is 2n+1-connective. We consider the map (f) →(f) →(S;L_S/R) as a map of filtered spectra, viewing S as a filtered _1-ring with associated graded R⊕ M. By <Ref>, ((R)) ≅⊕_m=1^∞_/m^S^1(S;(Σ M)^⊗ m) as an S^1-spectrum. Since The Frobenius map is zero on associated graded since it takes filtration i to ip, so we learn that _m((f)) ≅ (Σ_/m^S^1(S;(Σ M)^⊗ m))_hS^1[See also <cit.>.]. In particular, since S is -1-connective and n≥ 0, the connectivity of these terms goes to ∞ as m →∞ via <Ref> so the filtration on is complete. Since _/m^S^1 decreases connectivity by 1, we learn that _m((f)) is (n+1)m-1-connective. In particular, the map (f) →_1(f) is 2n+1-connective. To finish, it suffices to show the following two claims: * (S;L_S/R) →_1(S;L_S/R) is 2n+2-connective. * _1(f) →_1 (S;L_S/R) is an isomorphism. 
The claim (1) follows from the fact that L_S/R≅ L_S/S⊕ M≅⊕_m=1^∞(Σ M)^⊗_S m, and (Σ M)^⊗_S m is 2n+2-connective for m≥ 2. For claim (2), we see that _1(f) ≅Σ(_/1^S^1(S;Σ M))_hS^1≅ (_/1^S^1(S;Σ M))^hS^1≅(S;Σ M) Σ M is exactly _1L_S/R, and (S;_1L_S/R) ≅_1(S;L_S/R) since S is entirely in grading 0, so we are done. We prove <Ref> by reducing to the case of a square-zero extension. First, we produce a natural way to factor a map of _1-rings through a square-zero extension. We recall that given a S'-S-bimodule M with a unit map → M, the pullback S'×_MS admits an _1-algebra structure where the maps S' → M and S → M are the S'-module and S-module maps adjoint to the unit map. This ring structure can be constructed as the endomorphism ring of the triple (S',S,S → S'⊗_S'M) viewed as an object of the oplax limit (S)×⃗M(S') (see <cit.> and <cit.>). When M comes from a cospan of ring maps S' → R ← S, this agrees with the pullback of the span of rings by <cit.>. Given a map f:R → S, we consider S⊗_RS as an S-S-bimodule with unit 1. We define R_f,2 to be the _1-ring given by S×_S⊗_RSS. We have natural maps R R_f,2 S. If R → S is an n-connective map of connective rings for n≥ 0, then h is 2n-connective, g is n-connective, and g is a square-zero extension. The fiber of h:R → R_f,2 is the total fiber of the square R [r][d] S [d] S[r] S⊗_RS which is f⊗_R f, which is 2n-connective. Since f is n-connective, it follows that g is too. It remains to show that g is a square-zero extension, which will follow if we identify S⊗_RS as an S-bimodule with unit with the associated structure on S ⊕ L_S/R coming from the cospan of rings S → S⊕ L_S/R←S corresponding to the universal derivation. But since R maps into the pullback of this cospan (since it is the universal square-zero extension of S under R) we have a square of ring maps R[r][d] S [d] S[r] S⊕L_S/R which defines an isomorphism of unital S-bimodules S⊗_RS → S⊕ L_S/R. We consider the maps h,g,f as in <Ref>, giving us the diagram (h)[r][d] (f)[r][d] (g)[d] (R_f,2;L_R_f,2/R)[r] (S;L_R/S) [r] (S;L_R_2,f/S) To produce a nullhomotopy of the composite of the lower horizontal maps, we identify them with the vertical fibers of the following cofiber sequence using <Ref>: (R;R_f,2)[r][d] (R;S)[r][d] (R_f,2;S)[d] (R_f,2)[r] (S) [r] (S) The map (R_f,2) →(S) lifts to (R_f,2;S), and this lifting provides the desired nullhomotopy. Moreover, we see that the fiber of the map (R_f,2;L_R_f,2/R) →(S; L_R/S→ L_R_2,f/S) is identified with the total fiber of the square (R;R_f,2)[r][d] (R;S)[d] [r](R_f,2) (R_f,2;S) which is the fiber of the map (R; g) →(R_f,2; g). By <cit.>, since h is 2n-connective and g is n-connective, we see that this map is 3n+1-connective. We next observe that in the right square of diagram (4), we know all maps except possibly the vertical map which we want to show is 2n+1-connective. Indeed, (h) is 2n+1-connective by <Ref> and <Ref>, the right vertical map is 2n+1-connective by <Ref>, and the lower horizontal map is 2n+1-connective since the map S⊗_RS → S⊗_R_f,2S is 2n+1-connective by <cit.>. It follows that the middle vertical map in diagram (4) is 2n-connective. But since f is an arbitrary n-connective map and h is 2n-connective, we learn that the left vertical map is 4n-connective. It follows that the middle vertical map is 2n+1-connective since it is an extension of a 2n+1-connective map and a 4n-connective map since n≥1. There is a version of <Ref> for a 0-connective map of connective rings, but one must ask that π_0R→π_0S has a nilpotent kernel. 
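As a consistency check (this unwinding is ours, with the relative invariant written as TC(f) and topological Hochschild homology as THH), one can specialize <Ref> to a trivial square-zero extension f:S⊕ M→ S with M n-connective. Since L_S/S⊕ M≅⊕_m=1^∞(Σ M)^⊗_S m and the summands with m≥2 are 2n+2-connective, the theorem identifies TC(f) with THH(S;Σ M) in degrees ≤ 2n+1. In the bottom degree this recovers Waldhausen's formula: π_n+1THH(S;Σ M) ≅ HH_0(π_0S;π_n+1Σ M) = HH_0(π_0S;π_nM), in agreement with π_n+1K(f) ≅ HH_0(π_0S;π_n f).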
§.§ Applications to the sphere and the K(1)-local sphere We now apply <Ref> to the map _p →_p for p≥ 2 to understand the map (_p) →(_p) in the stable range. The proposition below contains a key ingredient of <cit.> used to understand the homotopy type of (_p). For p>2, the map π_*(_p) →π_*(_p) in degrees ≤ 4p-6 is an isomorphism in all degrees except 2p-1, where it is the map p_p →_p. By <Ref> we have a 4p-5-connective map ((_p) →(_p))→((_p;_p) →(_p)) The target of the map is (_p →(_p)), which after applying τ_≤4p-5 is Σ^2p-2_p. Thus it follows that there is a cofiber sequence Σ^2p-2_p →τ_≤4p-4(_p) →τ_≤4p-4(_p) Recall that (_p) ≅_p ⊕Σ (^∞_-1)_p <cit.>[see also <cit.>], and that π_*(_p)/(p,v_1) is _p in odd degrees between -1 and 2p-1, and in degrees 0,2p-2, and 0 in all other degrees <cit.>[This argument is not circular, because (_p)/(p,v_1) is computed without knowing this proposition.]. From this description, it follows that both (_p)/(p,v_1) and (_p)/(p,v_1) are _p in degrees 2p-2,2p-1. Thus in the cofiber sequence above mod (p,v_1), the class in degree 2p-2 must go to 0 and the class in degree 2p-1 must go to the generator. It follows that integerally, the class must go to 0, and that it maps to the _p in _2p-1(_p) via the p-Bockstein, giving the conclusion. For p>2, <Ref> also holds for j. In particular, the obstruction to lifting λ_1 ∈(_p) to (j) is up to a unit in _p the class σα_1 in (_p;L__p/j). Since the map _p → j is 2p^2-2p-2-connective (see <Ref>), the map _p →_p agrees with the map j_p→_p in the stable range, so the analysis in <Ref> applies for j. In particular, the obstruction to lifting the class λ_1 ∈(_p) to j is nonzero in (_p;L__p/j), so must be σα_1 up to a unit in _p, since π_2p-2(_p;L__p/j) ≅_p is generated by this class. We now apply <Ref> to the map j_ζ→_ζ, and then make deductions about K(L_K(1)) in the stable range. There is an isomorphism Σ^2p-2_p ≅ L__ζ/j_ζ, where the generator is σ(α_1). In fact, we claim that L__ζ^/j_ζ^≅Σ^2p-2,0_p on the class σ(α_1) which implies the result, since this is the associated graded of L__ζ/j_ζ. To see this, we note that L__ζ^/j_ζ^/p ≅ L__ζ^/p/j_ζ^/p. Since j_ζ^/p →_ζ/p is the augmentation of a polynomial algebra over the target on the class v_1, L__ζ^/p/j_ζ^/p≅Σ^2p-1_ζ^/p, where the generating class is σ(v_1). In j_ζ^, there is a p-Bockstein differential d_1v_1 = v_1ζ = α_1, so applying the map σ, we get that σ(v_1) has a p-Bockstein d_1-differential hitting ζσ(v_1) = σ(α_1). Thus we can conclude. The following proposition gives a way in which (j_ζ) does not behave as if the action on ℓ_p is trivial. For p>2, the image of the class λ_1 ∈(_p)/(p,v_1) in (_ζ)/(p,v_1) does not lift to (j_ζ)/(p,v_1). The same statement is true for K-theory replacing . The result for K-theory is equivalent to the one for by <cit.>. We have a commutative square of maps ((j) →(_p) [r][d] ((j_ζ)→(_p^h))[d] (_p;L__p/j)[r] (_ζ;L__ζ/j_ζ) where the vertical maps are 4p-5-connective by <Ref>. The lower horizontal map sends σ(α_1) to σ(α_1), the generator of π_2p-2(_ζ;L__ζ/j_ζ). But σ(α_1) since the class is the obstruction to lifting λ_1 from (_p) to (_p), we learn that the obstruction to lifting λ_1 from (_p^h) to (j_ζ) is nontrivial. We also see that this obstruction is nonzero modulo (p,v_1). For p>2, there are isomorphisms τ_≤ 4p-6((j_ζ) →(_ζ)) ≅Σ^2p-2_p and K_*L_K(1)≅ K_*-1_p ⊕ K_*_p ⊕π_*Σ^2p-2_p/_p, *≤ 4p-6 The map f:j_ζ→_ζ is 2p-3-connective, so we learn that (f) →(_ζ;L__ζ/j_ζ) is 4p-5-connective using <Ref>. 
For the first statement, it suffices to show that τ_≤ 4p-4(_ζ;L__ζ/j_ζ) ≅Σ^2p-2_p. But using <Ref> and <Ref>, we learn (_ζ;L__ζ/j_ζ) ≅(_ζ;Σ^2p-2_ζ/p⊗__p^h_p) ≅Σ^2p-2(_ζ)/p⊗__p^h_p ≅Σ^2p-2(_p)/p⊗__p_p Since π_*(_p)/p is by <Ref> _p[σ^2α_1,σ^2v_1], we indeed learn the claim. To get the statement about K-theory, by <cit.>, K_*(L_K(1)) ≅ K_*(j_ζ) ⊕ K_*-1(_p), and we have a cofiber sequence ((j_ζ) →(_ζ)) → K(j_ζ) → K(_p) K_2p-1(_p)_p ≅_p, generated by λ_1, and the map K_2p-1(_p)_p. As noted in <Ref>, the boundary map K(_p) →((j_ζ)→(_ζ)) is nontrivial in the stable range, and λ_1 doesn't lift to K(j_ζ). In the stable range, (_ζ;L__ζ/j_ζ) is Σ^2p-2_p. The kernel of the map K(_p) →((j_ζ)→(_ζ) in the stable range then agrees with K(_p) by <Ref>, so from the long exact sequence on homotopy groups, we see that there is a short exact sequence in the stable range 0 →π_*Σ^2p-2_p/_p → K_*(j_ζ) → K_*(_p) → 0 But the map K(_p) → K(j_ζ) clearly splits this sequence, giving the result. § THE SEGAL CONJECTURE The Segal conjecture for a cyclotomic spectrum X is the statement that the cyclotomic Frobenius map X → X^tC_p is an isomorphism in large degrees. Knowing the Segal conjecture for (R)⊗ V where V is a finite spectrum is a key step in proving the Lichtenbaum–Quillen conjecture for X, i.e the fact that (X) (and hence (X)) is bounded (see <cit.>). Asking that the Segal conjecture hold for (R)⊗ V is a regularity and finiteness condition on R: for example it holds when V is p-torsion and R is a p-torsion free excellent regular noetherian ring with the Frobenius on R/p a finite map <cit.>. In this section, we show that the Segal conjecture does hold for j_ζ for p>2 as well as the extensions j_ζ,k, but doesn't hold for the connective covers j and j_k. In particular the Lichtenbaum–Quillen conjecture doesn't hold for j_k, and our result is used in <cit.> to show that it does hold for j_ζ,k for p>2. A related regularity phenomenon was noted in <cit.>, namely that j_ζ is regular[See <cit.> for a discussion of regularity in the setting of prestable ∞-categories.] at the height 2-locus: i.e the t-structure on (j_ζ) restricts to a bounded t-structure on (j_ζ)^ω⊗_≥2. This t-structure is the key point in relating j_ζ's algebraic K-theory to that of the K(1)-local sphere. On the other hand, j is not regular at the height 2-locus which is why its integral K-theory is not closely related to that of the K(1)-local sphere. Our first goal is to show that for odd p, j_ζ,k satisfies the Segal conjecture. A key input is the following proposition, the proof of which is the same as in the reference, though the statement is somewhat more general. <cit.> Let R be an _1-ring, and consider the ^m-graded polynomial algebra R[a_1,…,a_n]:=R⊗⊗_1^n [a_i], where each a_i has positive weight[i.e it is nonnegative weight in each copy of in ^m, and positive weight in some copy of .] and is even topological degree and [a_i] is the free _1-algebra. The map φ: L_p(R[a_1,…,a_n]) →(R[a_1,…,a_n])^tC_p at the level of π_* is equivalent to the map π_*(R)[a_i]⊗Λ[da_i] →π_*(R)^tC_p[a_i]⊗Λ[da_i] where the a_i,da_i are sent to themselves. If R is an _2-algebra and [a_i] are given the _2-algebra structures coming from <cit.>, this is a homomorphism of rings. The following lemma is used to reduce showing the Segal conjecture is true to the associated graded of a filtration on the ring. 
Let C be a presentably symmetric monoidal stable category with a complete t-structure compatible with filtered colimits, and suppose that f:R^→ R'^ is a map of homotopy associative filtered rings in C, where the filtration on the source and target is complete. If there is an element x ∈π_*R := π_*(,R), *>0 such that the associated graded map R^→ R'^ is n-coconnective in the constant t-structure and sends a class detecting x to a unit, then the map R → R' is also n-coconnective, and is equivalent to the map R → R[x^-1] First, since the filtrations are complete and the map f is n-coconnective on associated graded, we learn that the fiber is n-coconnective on associated graded, and complete, so the underlying object is n-coconnective. Let x̃ be an element in π_**R^ whose underlying element is x that is sent to a unit in R'^gr. Since the filtration on R' is complete, it follows that x̃ is sent to a unit, which allows us to build a map R^[x̃^-1]→ R'^ via the colimit of the diagram Σ^|x| R^[r] Σ^|x|R'^ R^[r] R'^ ["..."marking, shift left=1, draw=none, from=2-1, to=3-1] ["..."marking, shift left=1, draw=none, from=2-2, to=3-2] ["x", from=1-1, to=2-1] ["x", from=1-2, to=2-2] Note that the horizontal maps become more and more coconnective and the right vertical maps are all equivalences. Then because the t-structure is complete and compatible with filtered colimits, we learn that in the colimit the map is an equivalence. We also learn that the filtration on R^[x^-1] is complete, allowing us to conclude. Before proceeding to prove the Segal conjecture, we recall as in <cit.> that given a filtered ^m-graded _1-ring R^, the cyclotomic Frobenius map refines to a filtered map φ: L_p(R^)→(R^)^tC_p where L_p is the operation on filtered spectra scaling the filtration and the gradings on R by p. For p>2 and k≥0, the map (j_ζ,k)/(p,v_1) →(j_ζ,k)^tC_p/(p,v_1) has 2p-3-coconnective fiber, and is equivalent to the map (j_ζ,k)/(p,v_1) →(j_ζ,k)[μ^-1]/(p,v_1) where μ∈π_2p^2(j_ζ,k). Using the filtration on j_ζ,k constructed in <Ref>, we get a filtered map φ: L_p(j_ζ,k)/(p,ṽ_̃1̃) →(j_ζ,k)^tC_p/(p,φṽ_̃1̃) By the proof of <Ref> and <Ref>, the class μ is detected in the spectral sequence for (j_ζ,k)/(p,v_1) by (σ^2v_1)^p. Thus by applying <Ref> for C = and R^→ R'^ the maps in question, it suffices to show * The filtration on the source and target are complete. * The associated graded map inverts the class σ^2v_1 and is 2p-3-coconnective. To see (a), the source is complete by <Ref>. The Tate construction (-)^tC_p sits in a cofiber sequence up to shifts between the orbits (-)_hC_p and fixed points (-)^hC_p, so it suffices to show each of those is complete. The orbits are complete for connectivity reasons: in any finite range of degrees, the orbits are computed via a finite colimit. The fixed points are complete because complete objects are closed under limits. We turn to proving (b). We further filter j_ζ,k^ by the p-adic filtration as j_ζ,k^⊗_p^ and consider the map of filtered graded _∞-rings L_p(j_ζ,k)/(p̃,v_1) →(j_ζ,k)^tC_p/(φp̃,φ v_1). We claim: * The filtration on the source and target are complete. * The associated graded map inverts the class σ^2p and is 2p-3-coconnective. Given these claims, the proof is complete, since σ^2v_1 is detected in the spectral sequence by (σ^2p)^p (see <Ref>), so claim (b) follows from <Ref>. (i) follows from an argument identical to the argument for (a), the only difference being that we use <Ref> to see that the filtration on (j_ζ^⊗_p^)/(p̃,v_1) is complete. 
To see (ii), by <Ref> the associated graded algebra is _p[v_0,v_1]^h, where the action is trivial. By <Ref> we have π_*(_p[v_0,v_1]^h)/(v_0,v_1) ≅_p⊗Λ[dv_0,dv_1,ζ]⊗_p[σ^2p], where |dv_0| = 1, |ζ| = -1, |dv_1|= 2p-1. It follows that if the Frobenius map mod (v_0,v_1) inverts σ^2p, it is 2p-3-coconnective, since it is injective on π_*, and an element in the cokernel of largest degree is (σ^2p)^-1σ v_1σ v_0, which is in degree 2p-2. Thus it remains to see that the Frobenius map mod (v_0,v_1) on π_* inverts the class σ^2p. Since is a localizing invariant and ^h is a trivial square-zero extension as an _1-algebra, by <cit.> we have a pullback square of bigraded (_p)-modules in cyclotomic spectra (_p[v_0,v_1]^h) [r][d] (_p[v_0,v_1]) [d] [r] (_p[v_0,v_1]) (_p[v_0,v_1][x_0]) where x_0 is a polynomial generator in degree 0. It thus suffices to show that for (_p[v_0,v_1][x_0]),(_p[v_0,v_1]) the cyclotomic Frobenius map inverts σ^2p. These statements follow from <Ref> with R = _p,_p[x_0], using the Segal conjecture for these discrete rings which is well known: for example <cit.> implies the Frobenius is an isomorphism in large degrees, but since it sends σ^2p to a unit <cit.>, it must just invert σ^2p. The bound 2p-3 in <Ref> is optimal: the map is injective on π_*, and a class of largest degree not in the image is μ^-1λ_1λ_2, in degree 2p-2, Now we show that the Segal conjecture fails for (j_k). For p>2 and k≥0, the fiber of the Frobenius map (j_k)/(p,v_1) →(j_k)^tC_p/(p,v_1) is not bounded above. Thus j_k does not satisfy the Lichtenbaum–Quillen conjecture, i.e (j_k)⊗ V is not bounded above for V a finite type 3 spectrum. First we note that the failure of the Segal conjecture implies the failure of the Lichtenbaum–Quillen conjecture by <cit.>, so we show that the Segal conjecture fails. We first show that μ∈(j_k)/(p,v_1) is sent to a unit in (j_k)^tC_p/(p,v_1). It follows from the spectral sequences used to calculate (j_k)/(p,v_1) that the image of μ in (_p) is (σ^2p)^p^2 up to a unit, which is sent under the Frobenius map to a class detected up to a unit by t^-p^2 in the Tate spectral seqence for (_p)^tC_p/(p,v_1) by <cit.>. This is the lowest filtration of the Tate spectral sequence, so since in that filtration, the map (j_k)/(p,v_1) →(_p)/(p,v_1) is the map _p →_p, we learn that the image of μ must be detected by a unit multiple of t^-p^2 in the Tate spectral sequence for (j_k)^tC_p and hence be a unit. If the Frobenius map has an element x in the kernel, then xμ^i is also in the kernel for each i, so the fiber isn't bounded above. On the other hand, if the Frobenius map is injective, then the classes φ(μ)^-1φ((σα_1/p^k)^(pi)) are an infinite family of classes of increasing degree in (j_k)^tC_p that are not in the image of φ, so in this case too, we learn that the fiber is not bounded above. In fact, π_*(j_k)^tC_p/(p,v_1) under the Frobenius map is the completion of π_*(j_k)[μ^-1]/(p,v_1) at the ideal generated by (σα_1/p^k^(pi)) for each i, and the map is in particular injective on π_*. alpha
http://arxiv.org/abs/2307.11762v1
20230714122656
Similarity-based Memory Enhanced Joint Entity and Relation Extraction
[ "Witold Kosciukiewicz", "Mateusz Wojcik", "Tomasz Kajdanowicz", "Adam Gonczarek" ]
cs.CL
[ "cs.CL", "cs.LG" ]
W. Kościukiewicz et al. Alphamoon Ltd., Wrocław, Poland Wroclaw University of Science and Technology [email protected] Similarity-based Memory Enhanced Joint Entity and Relation Extraction Witold Kościukiewicz^1,2 Mateusz Wójcik^1,2 Tomasz Kajdanowicz^2 Adam Gonczarek^1 August 12, 2023 =========================================================================================================================================== Document-level joint entity and relation extraction is a challenging information extraction problem that requires a unified approach in which a single neural network performs four sub-tasks: mention detection, coreference resolution, entity classification, and relation extraction. Existing methods often utilize a sequential multi-task learning approach whose arbitrary decomposition causes the current task to depend only on the previous one, missing more complex relationships that may exist between them. In this paper, we present a multi-task learning framework with a bidirectional memory-like dependency between tasks to address those drawbacks and solve the joint problem more accurately. Our empirical studies show that the proposed approach outperforms the existing methods and achieves state-of-the-art results on the BioCreative V CDR corpus. § INTRODUCTION In recent years, text-based information extraction tasks such as named entity recognition have become more popular, which is closely related to the growing importance of transformer-based Large Language Models (LLMs). Such models are already used as part of complex document information extraction pipelines. Even though the important pieces of information are extracted, these pipelines still lack the ability to detect connections between them. The missing part, the relation classification task, has been recognized as a significant challenge in recent years. The problem is even harder to solve when tackled with a multi-task method capable of solving both the named entity recognition and relation classification tasks in a single neural network pass. In this paper, we propose an approach to the multi-task problem of joint document-level entity and relation extraction introduced with the DocRED dataset <cit.>. We follow the existing line of research of learning a single model to solve all four subtasks: mention detection, coreference resolution, entity classification, and relation extraction. The single model is trained to first detect the spans of text that are the entity mentions and group them into coreference clusters. Those entity clusters are then labeled with the correct entity type and linked to each other by relations. Figure <ref> shows an example of a document from the DocRED dataset and the graph of labeled entity clusters expected as the model output. We introduce a bidirectional memory-like dependency between tasks to address the drawbacks of pipeline-based methods and perform the joint task more accurately.
Our contribution can be summarized as follows: (1) we introduce a new approach that solves multi-task learning problems by improving the architecture of the previously proposed pipeline-based method, introducing the memory module to provide bi-directional dependency between tasks (2) we provide evaluation results, which show that our method outperforms the pipeline-based methods and achieves state-of-the-art results on the BioCreative V CDR corpus (3) we propose a novel similarity classifier module solving distance learning problem for document-level joint entity and relation classification serving as a starting point for future work. The code of our solution is available at < https://github.com/kosciukiewicz/similarity_based_memory_re>. § RELATED WORK Relation classification The relation extraction task is commonly approached by using separately trained models for the Named Entity Recognition <cit.> to detect entities and then detect relations between them. The transformer-based architectures, pre-trained on large text corpora, such as BERT <cit.>, have dominated the field i.e. Baldini Soares et al. <cit.> uses contextualized input embedding for the relation classification task. Joint entity and relation extraction The early end-to-end solutions formulated task joint task as a sequence tagging based on BIO/BILOU scheme. These approaches include solving a table-filling problem proposed by Miwa et al. <cit.>. Several approaches tried to leverage multi-task learning abilities using attention-based <cit.> bi-directional LSTM sharing feature encoders between two tasks to improve overall performance. The inability of the BIO/BILOU-based models to assign more than one tag to a token resulted in using the span-based method for joint entity and relation extraction proposed in Lee et al. <cit.>. Becoming a standard in recent years, this approach was further extended with graph-based methods like DyGIE++ <cit.> or memory models like TriMF <cit.> to enhance token span representation to an end-to-end approach for the joint task. Document-level relation extraction Although the DocRED <cit.> was originally introduced as relation classification benchmark, the opportunity arose to tackle a more complex joint entity and relation extraction pipelines consisting of mention detection, coreference resolution, entity classification, and relation classification. Since many relations link entities located in different sentences, considering inter-sentence reasoning is crucial to detect all information needed to perform all sub-tasks correctly. Eberts and Ulges <cit.> proposed JEREX - an end-to-end pipeline-based approach showing an advantage in joint training of all tasks rather than training each model separately. In recent work <cit.>, the problem is tackled using a sequence-to-sequence approach that outputs the extracted relation triples consisting of two related entities and relation type as text. § APPROACH Our document-level relation extraction framework is inspired by JEREX <cit.> which consists of four task-specific components: mention extraction (ℳ), coreference resolution (𝒞), entity extraction (ℰ) and relation extraction (ℛ). We change the original one-after-another pipeline architecture, introducing the memory module presented in Figure <ref>. The input representations of task-specific models are altered using the memory-based extended representation module that reads the memory using the Memory Read operation. 
The memory matrices 𝐌_ℰ and 𝐌_ℛ are written by the entity and relation classifier, respectively. That feedback loop allows information to be shared with previous steps, extending their input and thereby introducing a bi-directional dependency between tasks. §.§ Memory reading Similarly to TriMF <cit.>, memory reading in our approach is based on an attention mechanism that extends the input representation with the information read from memory. In our architecture, as shown in Figure <ref>, we extend both the token embeddings 𝐗_T and the mention candidate span representations 𝐗_S. For every input representation 𝐗_i, where i∈{T,S}, and memory matrix 𝐌_j, where j∈{ℰ,ℛ}, the attention mechanism takes the representation 𝐗_i ∈ℝ^n × h as keys and values, where n denotes the number of representation vectors and h is the embedding size. As a query, the attention mechanism uses the memory matrix 𝐌_j ∈ℝ^m × s, where m denotes the number of memory slots and s is the size of a memory slot. To compute the attention weight vector 𝐚_i,j∈ℝ^n, we sum over the memory slot dimension as follows: (𝐚_i,j)^⊤=∑_k softmax(𝐦^k,:_j𝐖^read_i,j𝐗^⊤_i) where 𝐖^read_i,j∈ℝ^s × h is a learnable parameter matrix for the attention mechanism and 𝐦^k,:_j is the k-th row of 𝐌_j. The vector 𝐚_i,j is then used to weight 𝐗_i to generate the extended input representation 𝐗'_i,j: 𝐗'_i,j = diag(𝐚_i,j)𝐗_i For each input representation i, the memory reading operation creates two extended representations, 𝐗'_i,ℰ and 𝐗'_i,ℛ, based on the two memory matrices. The final extended representation is then calculated as the element-wise mean of 𝐗_i, 𝐗'_i,ℰ and 𝐗'_i,ℛ: 𝐗''_i = (𝐗_i + 𝐗'_i,ℰ + 𝐗'_i,ℛ)/3 §.§ Memory writing Both memory matrices 𝐌_ℰ and 𝐌_ℛ store representations of entity and relation categories, respectively. The values encoded in those matrices are written using the gradient of the loss function of the associated classifier – the entity classifier for 𝐌_ℰ and the relation classifier for 𝐌_ℛ. To make the stored representations more precise, the loss depends on the similarity between a category embedding and the representation of an instance that belongs to that category according to the instance label. As a result, both the entity and relation classifiers rely on a similarity function S between the input representation and the corresponding memory matrix. The probability distribution over entity types of entity e_i, based on its representation vector 𝐱^e_i, is calculated as follows: p(𝐲_e | e_i)=softmax(S(𝐱^e_i,𝐌_ℰ)) To get the existence probability over relation types for an entity pair p_i,j represented by the entity pair representation 𝐱^p_i,j∈ℝ^h, we use the sigmoid function: p(𝐲_r | p_i,j)=sigmoid(S(𝐱^p_i,j,𝐌_ℛ)) We define S as the bilinear similarity between an instance representation 𝐱 and a memory matrix 𝐌: S(𝐱,𝐌) = S_bilinear(𝐱,𝐌; 𝐖) = 𝐌𝐖^⊤𝐱 where 𝐖 is a learnable parameter matrix. For the entity and relation classifiers, separate learnable bilinear similarity weight matrices are used: 𝐖^write_ℰ∈ℝ^h_e × s_ℰ and 𝐖^write_ℛ∈ℝ^h_p × s_ℛ, where h_e and h_p denote the entity and entity pair representation sizes, respectively, and s_ℰ and s_ℛ denote the memory slot sizes of the entity and relation memory matrices. In our approach, the number of slots in each memory matrix is equal to the number of types in the associated classifier. §.§ Training Finally, our model is trained by optimizing the joint loss ℒ^joint, which combines the four sub-task losses ℒ^j weighted with fixed, task-related weights β_j, as in JEREX <cit.>: ℒ^joint = β_ℳℒ^ℳ + β_𝒞ℒ^𝒞 + β_ℰℒ^ℰ + β_ℛℒ^ℛ.
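To make the memory read and the similarity-based classification above concrete, a minimal PyTorch-style sketch of the two operations follows; the tensor shapes, the softmax axis, and all variable names are our illustrative reading of the formulas, not the authors' released implementation.

import torch

def memory_read(X, M, W):
    # X: (n, h) input representations, M: (m, s) memory matrix,
    # W: (s, h) learnable matrix W^read. The attention weights follow the
    # memory-reading equation above: softmax over the n representations,
    # then a sum over the m memory slots.
    scores = torch.softmax(M @ W @ X.T, dim=-1)  # (m, n)
    a = scores.sum(dim=0)                        # attention weights, (n,)
    return a.unsqueeze(-1) * X                   # diag(a) X

def similarity_logits(x, M, W):
    # Bilinear similarity S(x, M) = M W^T x: one score per memory slot,
    # i.e. per entity or relation category.
    return M @ (W.t() @ x)

n, h, m, s = 4, 768, 6, 64                       # illustrative sizes
X, M_E = torch.randn(n, h), torch.randn(m, s)
X_ext = memory_read(X, M_E, torch.randn(s, h))   # extended representation
p_entity = torch.softmax(similarity_logits(torch.randn(h), M_E, torch.randn(h, s)), dim=-1)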
We also include the two-stage training approach proposed in TriMF <cit.>, tuning the memory warm-up proportion during the hyperparameter search. § EXPERIMENTS Datasets We compare the proposed similarity-based memory learning framework to the existing approaches using the DocRED <cit.> dataset, which contains over 5000 human-annotated documents from Wikipedia and Wikidata. By design, the DocRED dataset was intended as a relation classification benchmark, but its hierarchical annotations make it well suited for evaluating the joint task. For the train, dev, and test splits we follow those provided with JEREX <cit.>. According to recent work <cit.>, DocRED contains a significant number of false negative examples. We therefore also use the dataset splits provided with Re-DocRED <cit.>, a re-annotated version of the DocRED dataset. We additionally provide results on one domain-specific corpus annotated in a similar manner to DocRED - BioCreative V CDR <cit.>, which contains 1500 abstracts from PubMed articles. Following prior work <cit.>, we used the original train, dev, and test set split provided with the CDR corpus. Training As a pretrained text encoder we used BERT_BASE <cit.>. For the domain-specific BioCreative V CDR dataset we used SciBERT_BASE <cit.>, which was trained on scientific papers from Semantic Scholar. All classifier and memory module parameters were initialized randomly. During training, we used a batch size of 2 and the AdamW optimizer with the learning rate set to 5e-5, with linear warm-up for 10% of the training steps and linear decay to 0. The stopping criterion was set to 20 epochs for all experiments. Evaluation During the evaluation we used the strict scenario, in which a prediction is considered correct only if all subtask-related predictions are correct. We evaluated our method using the micro-averaged F1-score. In Section <ref> we report the F1-score of the final model evaluated on the test split. As the final model, we selected the one that achieved the best F1-score on the dev split over 5 independent runs with different random seeds. Our evaluation technique follows the one proposed in <cit.>. Hyperparameters All hyperparameters, such as embedding sizes and multi-task loss weights, were adopted from the original work <cit.> for a direct comparison. Our approach introduces new hyperparameters, for which we conducted a grid search on the dev split. These include the memory warm-up proportion <cit.>, the memory read gradient, the number and types of memory modules, and the size of the memory slots. § RESULTS In Table <ref> we present a comparison between our approach and existing end-to-end methods on 3 benchmark datasets for joint entity and relation extraction. The reported metric values show that our approach outperforms existing methods on CDR by about 0.9 percentage points (pp.), achieving state-of-the-art results. Our method achieves similar results on DocRED and is outperformed by the JEREX architecture on the Re-DocRED dataset. We argue that the memory warm-up proportion value (0.4) is too small to properly initialize the memories with accurate category representations. On the other hand, increasing the number of memory warm-up steps leaves too little time to properly train the memory read modules. To address this issue, we conducted experiments in which the architecture was pre-trained on the distantly annotated corpus of the DocRED dataset to initialize the memory matrices.
We performed the same pre-training for JEREX, and the results show that our approach outperforms the original architecture by up to 0.48 pp. on both DocRED-based datasets. For a direct comparison with the original architecture, we evaluated our memory-enhanced approach with the two relation classifier modules proposed in <cit.>. The results presented in Table <ref> show that our method improves the Global Relation Classifier (GRC) on every dataset by up to 1.70 pp. We also tested the performance of our method without the memory module, i.e., only with the distance-based classifiers. Based on the results in Table <ref>, including the memory module with a feedback loop between tasks improved the final results in most cases, regardless of whether the GRC or MRC module was used. § CONCLUSIONS AND FUTURE WORK In this paper, we proposed a novel multi-task learning approach for document-level joint entity and relation extraction. By including memory-like extensions that create a feedback loop between the tasks, we addressed the issues present in previous architectures. Empirical results show that our method outperforms other document-level relation extraction methods, achieving state-of-the-art results on the BioCreative V CDR corpus. A possible direction for future work is further development of the memory module, either by using different memory read vectors for a more meaningful input encoding in the enhanced representation module, or by improving the content written to memory, for example by replacing the bilinear similarity classifier with other distance-based scoring functions or by proposing a different method of writing to memory. § ACKNOWLEDGEMENTS The research was conducted under the Implementation Doctorate programme of the Polish Ministry of Science and Higher Education and was also partially funded by the Department of Artificial Intelligence, Wroclaw Tech, and by the European Union under the Horizon Europe grant OMINO (grant number 101086321). It was also partially co-funded by the European Regional Development Fund within Measure 1.1. "Enterprise R&D Projects", Sub-measure 1.1.1. "Industrial research and development by companies" as part of The Operational Programme Smart Growth 2014-2020, support contract no. POIR.01.01.01-00-0876/20-00.
http://arxiv.org/abs/2307.05090v2
20230711074338
The strong vertices of bottom mesons $B$, $B^{*}$ and bottomonia $Υ$, $η_{b}$
[ "Jie Lu", "Guo-Liang Yu", "Zhi-Gang Wang", "Bin Wu" ]
hep-ph
[ "hep-ph" ]
[email protected] [email protected] Department of Mathematics and Physics, North China Electric power university, Baoding 071003, People's Republic of China In this article, the strong coupling constants of vertices BBΥ, BB^*Υ, B^*B^*Υ, BB^*η_b and B^*B^*η_b are analyzed in the framework of QCD sum rules. In this work, all possible off-shell cases and the contributions of vacuum condensate terms including ⟨qq⟩, ⟨qg_sσ Gq⟩, ⟨ g_s^2G^2⟩, ⟨ f^3G^3⟩ and ⟨qq⟩⟨ g_s^2G^2⟩ are considered. The momentum dependent strong coupling constants are first calculated and then are fitted into analytical functions g(Q^2) which are used to extrapolate into time-like regions to obtain the final values of strong coupling constants. The final results are g_BBΥ=40.67^+7.55_-4.20, g_BB^*Υ=11.58^+2.19_-1.09 GeV^-1, g_B^*B^*Υ=57.02^+5.32_-5.31, g_BB^*η_b=23.39^+4.74_-2.30 and g_B^*B^*η_b=12.49^+2.12_-1.35 GeV^-1. These strong coupling constants are important input parameters which reflect the dynamic properties of the interactions among the mesons and quarkonia. 13.25.Ft; 14.40.Lb The strong vertices of bottom mesons B, B^* and bottomonia Υ, η_b Bin Wu^1 August 12, 2023 ================================================================= § INTRODUCTION The suppression of J/ψ production in relativistic heavy ion collisions is an important signature to identify the quark-gluon plasma<cit.>. Because of the color screening, the dissociation of J/ψ in the quark-gluon plasma would lead to a reduction of its production. The bottomonia are also sensitive to the color screening, therefore the Υ suppression in heavy ion collisions can also be considered as a signature to identify the quark-gluon plasma<cit.>. There are already some successful attempts in analyzing the heavy quarkonium absorptions by the effective Lagrangians in meson exchange models<cit.>. And the absorption cross sections can be calculated basing upon the interactions among the mesons and quarkonia, where the strong coupling constants are taken as an important input parameter. On the other hand, accurate determination of the strong coupling constants plays an important role in understanding the effects of heavy quarkonium absorptions in hadronic matter<cit.>. Besides, the coupling constants among the heavy quarkonia and heavy mesons are valuable for us to understand the final-state interactions in the heavy quarkonium decays<cit.>. The QCD sum rules and the light-cone QCD sum rules are powerful nonperturbative approaches in analyzing the strong coupling constants among the hadrons. In recent years, the strong vertices DD^*π, D^*D_sK, DD_s^*K,BB^*π, B^*B_sK, BB_s^*K, DDρ, DD_sK^*, BB_sK^*, DD^*ρ, DD^*_sK^*, BB^*_sK^*, D^*D^*ρ, B^*B^*ρ, BB_s_0K, B^*B_s_1K, DD^*_sK_1, BB^*_sK_1, D_s^*D_sϕ, D_2^*D^*π, D_s_2^*D^*K, D_2^*Dρ, D_2^*Dω, D_s_2^*D_sϕ, DDJ/ψ, DD^*J/ψ, D^*D^*J/ψ, DD^*η_c, D^*D^*η_c and D_sD^*_sη_c have been analyzed with the three-point QCD sum rules (QCDSR)<cit.> , and the coupling constants of vertices DD^*π, D^*D_sK, DD_s^*K,BB^*π, DDρ, DD_sK^*, D_sD_sϕ, BBρ, DD^*ρ, D_sD^*K^*, D_sD^*_sϕ, BB^*ρ, D^*D^*π, D^*D^*_sK, B^*B^*π, D^*D^*ρ, DD_0π, BB_0π, D_0D_sK, DD_s_0K, BB_s_0K, D_1D^*π, B_1B^*π, D_s_1D^*K, B_s_1B^*K, B_0B_1π, B_1B_2π, B_2B^*π, B_1B^*ρ, BB_1ρ, B_2B^*ρ and B_1B_2ρ have been studied with the light-cone QCD sum rules (LCSR)<cit.>. In our previous works, the strong vertices B_cB_cJ/ψ, B_cB_cΥ, B_cB_c^*J/ψ and B_cB_c^*Υ have been studied by using the three-point QCD sum rules<cit.>. 
Recently, a systematic analysis of the strong vertices of charmed mesons D, D^* and charmonia J/ψ, η_c was performed in Ref<cit.>. As a continuation of these works, the strong vertices of the bottom mesons B, B^* and the bottomonia Υ, η_b are systematically analyzed in the present work. This article is organized as follows. After the introduction in Sec. <ref>, the strong vertices BBΥ, BB^*J/ψ, B^*B^*Υ, BB^*η_b and B^*B^*η_b are analyzed with the QCDSR in Sec. <ref>, in which all off-shell cases of the intermediate mesons are considered. In the QCD side, the perturbative contribution and vacuum condensate terms are considered including ⟨qq⟩, ⟨qg_sσ Gq⟩, ⟨ g_s^2G^2⟩, ⟨ f^3G^3⟩ and ⟨qq⟩⟨ g_s^2G^2⟩. Sec. <ref> presents the numerical results and discussions. Sec. <ref> is reserved for our conclusions. § QCD SUM RULES To begin with this work, the following three-point correlation function is firstly introduced, Π (p,p') = i^2∫d^4xd^4ye^ip'xe^i(p - p')y ×⟨0.| T{J_M_3(x)J_M_2(y)J_M_1^ + (0)} |.0⟩ where T is the time ordered product, J is the meson interpolating current, the subscripts M_1, M_2 and M_3 denote the mesons in each vertex. Here, M_2 represents the intermediate meson which is off-shell. The assignments of the mesons for each vertex are shown in Table <ref>. The bottom meson and bottomonium interpolating currents are taken as the following forms, J_B(x) = u̅(x)iγ _5b(x) J_B^*(x) = u̅(x)γ _μb(x) J_Υ(x) = b̅(x)γ _μb(x) J_η _b(x) = b̅(x)iγ _5b(x) The correlation function will be calculated at two sides which are called the phenomenological side and the QCD side, respectively. According to the quark hadron duality, calculations of these two sides will be coordinated and the QCD sum rules about the properties of hadrons can be obtained. §.§ The Phenomenological side In phenomenological side, a complete sets of the hadronic states with the same quantum numbers as the interpolating currents J_M_1^+, J_M_2 and J_M_3 are inserted into the correlation function. Then, the correlation function can be written as the following form by using the dispersion relation<cit.>, Π (p,p') = ⟨ 0 .|J_M_3(0)|. M_3(p')⟩⟨ 0 .|J_M_2(0)|. M_2(q)⟩/(m_M_1^2 - p^2)(m_M_3^2 - p'^2)(m_M_2^2 - q^2) ×⟨M_1(p).|J_M_1^ + (0)|. 0 ⟩⟨M_2(q)M_3(p')|. M_1(p)⟩. + ... where ellipsis denotes the contributions of higher resonances and continuum states. The meson vacuum matrix elements in Eq. (<ref>) are expressed as the following forms, ⟨0|J_B(0)|B⟩ =f_Bm_B^2/m_b ⟨0|J_B^*(0)|B^*⟩=f_B^*m_B^*ζ _μ ⟨0|J_Υ(0)|Υ⟩=f_Υm_Υξ_μ ⟨0|J_η _b(0)|η _b⟩=f_η_bm_η_b^2/2m_b where f_B, f_B^*, f_Υ and f_η_b are the meson decay constants, ξ_μ and ζ_μ are the polarization vectors of Υ and B^*, respectively. All of the meson vertex matrix elements in Eq. (<ref>) can be obtained by the following effective Lagrangian, ℒ = ig_BBΥΥ _α(∂ ^αBB̅ - B∂ ^αB̅ ) - g_B^*B^*η _bε ^αβρτ∂ _αB_β ^*∂ _ρB̅_τ ^*η _b - g_B^*BΥε ^αβρτ∂ _αΥ _β(∂ _ρB_τ ^*B̅ + B∂ _ρB̅ _τ ^*) + ig_B^*B^*Υ[Υ ^α(∂ _αB^*βB̅_̅β̅ ̅^̅*̅ - B^*β∂ _αB̅_̅β̅ ̅^̅*̅ ) + (∂ _αΥ _βB^*β - Υ _β∂ _αB^*β)B̅ ^*α + B^*α(Υ ^β∂ _αB̅_̅β̅ ̅^̅*̅ - ∂ _αΥ _βB̅ ^*β)] + ig_B^*Bη _b[B^*α(∂ _αη _bB̅ - η _b∂ _αB̅ ) + (∂ _αη _bB - η _b∂ _αB)B̅ ^*α] From this Lagrangian, all of the vertex matrix elements can be written as, ⟨B(p')Υ (q)|. B(p)⟩. = g_BBΥ^Υ(q^2)ξ _α ^*(p + p')^α ⟨B(q)Υ (p')|. B(p)⟩. = g_BBΥ^B(q^2)ξ _α(p + q)^α ⟨B(p')Υ (q)|. B^*(p)⟩. = - g_BB^*Υ^Υ(q^2)ε ^αβρτξ _αζ _βp_ρp'_τ ⟨B(q)Υ (p')|. B^*(p)⟩. = - g_BB^*Υ^B(q^2)ε ^αβρτξ _αζ _βp_ρp'_τ ⟨B^*(q)Υ (p')|. B(p)⟩. = g_BB^*Υ^B^*(q^2)ε ^αβρτξ _αζ _βp'_ρp_τ ⟨B^*(p')Υ (q)|. B^*(p)⟩. 
= g_B^*B^*J/ψ^Υ[(p^α + p'^α)ξ^*_αζ^'βζ _β ^* - (p^α + q^α)ζ^'*_αξ _β ^*ζ ^β - (p'^α - q^α)ζ _αξ ^*βζ _β^*] ⟨B^*(q)Υ (p')|. B^*(p)⟩. = g_B^*B^*Υ^B^*[(p^α + q^α)ξ^*_αζ^'βε _β ^* - (p^α + p'^α)ζ^'*_αξ _β ^*ζ ^β- (q^α - p'^α)ζ _αξ ^*βζ _β^'*] ⟨B(p')η _b(q)|. B^*(p)⟩. = - g_BB^*η _b^η _b(q^2)ζ _α(q-p')^α ⟨B(q)η _b(p')|. B^*(p)⟩. = - g_BB^*η _b^B(q^2)ζ _α(p'-q)^α ⟨B^*(q)η _b(p')|. B(p)⟩. = - g_BB^*η _b^B^*(q^2)ζ^*_α(p+p')^α ⟨B^*(p')η _b(q)|. B^*(p)⟩. = - g_B^*B^*η _b^η _b(q^2)ε ^αβρτζ _αζ^'* _βp_ρp'_τ ⟨B^*(q)η _b(p')|. B^*(p)⟩. = - g_B^*B^*η _b^D^*(q^2)ε ^αβρτζ^'_αζ^*_βp_ρq_τ where ξ_α and ζ^(')_α are the polarization vectors of Υ and B^* respectively, q=p-p', and ε^αβρτ is the 4-demension Levi-Civita tensor. The subscripts of g in Eq. (<ref>) denote the type of strong vertices, and the superscripts denote the intermediate mesons which are off-shell. From Eqs. (<ref>) ∼ (<ref>), the expressions of the correlation function in phenomenological side can be obtained, and can be divided into different tensor structures. In general, different tensor structures for a correlation function will lead to the same result, thus choosing an appropriate structure to analyze the strong vertex is an acceptable way. §.§ The QCD side In QCD side, we will contract the quark fields with Wick's theorem and then do the operator product expansion(OPE). After the first process, the correlation functions for vertices BBΥ, BB^*Υ, B^*B^*Υ, BB^*η_b and B^*B^*η_b can be written as, Π _μ ^Υ(p,p') = ∫d^4xd^4ye^ip'xe^i(p - p')y × Tr{B^nk(y)γ _5U^km( - x)γ _5B^mn(x - y)γ _μ} Π _μ ^B(p,p') = ∫d^4xd^4ye^ip'xe^i(p - p')y × Tr{γ _μB^nk(x)γ _5U^km( - y)γ _5B^mn(y - x)} Π _μν^Υ(p,p') = - i∫d^4xd^4ye^ip'xe^i(p - p')y × Tr{B^nk(y)γ _νU^km( - x)γ _5B^mn(x - y)γ _μ} Π _μν^B(p,p') = - i∫d^4xd^4ye^ip'xe^i(p - p')y × Tr{γ _μB^nk(x)γ _νU^km( - y)γ _5B^mn(y - x)} Π _μν^B^*(p,p') = - i∫d^4xd^4ye^ip'xe^i(p - p')y × Tr{γ _μB^nk(x)γ _5U^km( - y)γ _νB^mn(y - x)} Π _μνσ^Υ(p,p') = ∫d^4xd^4ye^ip'xe^i(p - p')y × Tr{B^nk(y)γ _νU^km( - x)γ _σB^mn(x - y)γ _μ} Π _μνσ^B^*(p,p') = ∫d^4xd^4ye^ip'xe^i(p - p')y × Tr{γ _μB^nk(x)γ _σU^km( - y)γ _νB^mn(y - x)} Π _μ ^η _b(p,p') = ∫d^4xd^4ye^ip'xe^i(p - p')y × Tr{B^nk(y)γ _μU^km( - x)γ _5B^mn(x - y)γ _5} Π _μ ^B(p,p') = ∫d^4xd^4ye^ip'xe^i(p - p')y × Tr{γ _5B^nk(x)γ _μU^km( - y)γ _5B^mn(y - x)} Π _μ ^B^*(p,p') = ∫d^4xd^4ye^ip'xe^i(p - p')y × Tr{γ _5B^nk(x)γ _5U^km( - y)γ _μB^mn(y - x)} Π _μν^η _b(p,p') = - i∫d^4xd^4ye^ip'xe^i(p - p')y × Tr{B^nk(y)γ _νU^km( - x)γ _μB^mn(x - y)γ _5} Π _μν^B^*(p,p') = - i∫d^4xd^4ye^ip'xe^i(p - p')y × Tr{γ _5B^nk(x)γ _μU^km( - y)γ _νB^mn(y - x)} The superscripts of Π in these above equations denote the intermediate mesons. U^ij(x) and B^ij(x) are the full propagators of u(d) and b quarks which have the following forms<cit.>, U^ij(x) = iδ ^ijx//2π ^2x^4 - δ ^ijm_q/4π ^2x^4 - δ ^ij⟨q̅q⟩/12 + iδ ^ijx/m_q⟨q̅q⟩/48 - δ ^ijx^2⟨q̅g_sσ Gq⟩/192 + iδ ^ijx^2x/m_q⟨q̅g_sσ Gq⟩/1152 - ig_sG_αβ^at_ij^a(x/σ ^αβ + σ ^αβx/)/32π ^2x^2 - iδ ^ijx^2x/g_s^2⟨q̅q⟩^2/7776 - δ ^ijx^4⟨q̅q⟩⟨g_s^2GG⟩/27648 - ⟨q̅^jσ ^μνq^i⟩σ _μν/8 - ⟨q̅^jγ ^μq^i⟩γ _μ/4 + ... B^ij(x) = i/(2π )^4∫d^4ke^ - ik · x{δ ^ij/k/ - m_b - g_sG_αβ^nt_ij^n/4σ ^αβ(k/ + m_b) + (k/ + m_b)σ ^αβ/(k^2 - m_b^2)^2 + g_sD_αG_βλ^nt_ij^n(f^λβα + f^λαβ)/3(k^2 - m_b^2)^4 - g_s^2(t^at^b)_ijG_αβ^aG_μν^b(f^αβμν + f^αμβν + f^αμνβ)/4(k^2 - m_b^2)^5 + ...} where ⟨ g_s^2G^2⟩=⟨ g_s^2G^n_αβG^nαβ⟩, D_α=∂_α-ig_sG^n_αt^n, t^n=λ^n/2. 
λ^n (n=1,...,8) are the Gell-Mann matrices, i and j are color indices, q=u(d), and σ_αβ=i/2[γ_α,γ_β]. The functions f^λαβ and f^αβμν have the following forms,
f^λαβ = (k̸ + m_b)γ^λ(k̸ + m_b)γ^α(k̸ + m_b)γ^β(k̸ + m_b)
f^αβμν = (k̸ + m_b)γ^α(k̸ + m_b)γ^β(k̸ + m_b)γ^μ(k̸ + m_b)γ^ν(k̸ + m_b)
As stated in Sec. <ref>, the correlation functions Π_μ, Π_μν and Π_μνσ in Eqs. (<ref>) ∼ (<ref>) can be expanded into different tensor structures,
Π_μ(p,p') = Π_1(p^2,p'^2,q^2)p_μ + Π_2(p^2,p'^2,q^2)p'_μ
Π_μν(p,p') = Π(p^2,p'^2,q^2)ε_μναβ p^α p'^β
Π_μνσ(p,p') = Π_1(p^2,p'^2,q^2)p_μ g_νσ + Π_2(p^2,p'^2,q^2)p_μ p_ν p_σ + Π_3(p^2,p'^2,q^2)p'_μ p_ν p_σ + Π_4(p^2,p'^2,q^2)p_σ g_μν + Π_5(p^2,p'^2,q^2)p_μ p'_ν p_σ + Π_6(p^2,p'^2,q^2)p'_μ g_νσ + Π_7(p^2,p'^2,q^2)p_μ p_ν p'_σ + Π_8(p^2,p'^2,q^2)p'_ν g_μσ + Π_9(p^2,p'^2,q^2)p'_μ p'_ν p_σ + Π_10(p^2,p'^2,q^2)p'_σ g_μν + Π_11(p^2,p'^2,q^2)p'_μ p_ν p'_σ + Π_12(p^2,p'^2,q^2)p_μ p'_ν p'_σ + Π_13(p^2,p'^2,q^2)p_ν g_μσ + Π_14(p^2,p'^2,q^2)p'_μ p'_ν p'_σ
where g_μν is the metric tensor. On the right-hand side of the above equations, the Π without Lorentz indices are commonly called scalar invariant amplitudes. To obtain the strong coupling constant, an appropriate scalar amplitude should be selected for the analysis <cit.>. Taking the vertex B^*B^*Υ as an example, its correlation function Π_μνσ has fourteen tensor structures. Although it would in principle be reasonable to perform the calculation with each structure, in this paper we choose the structure p_μ g_νσ to analyze the strong vertex B^*B^*Υ. The scalar invariant amplitudes on the QCD side are denoted as Π^OPE, which can be divided into two parts,
Π^OPE=Π^pert+Π^non-pert
where Π^pert refers to the perturbative part and Π^non-pert denotes the non-perturbative contributions, including the ⟨q̅q⟩, ⟨ g_s^2G^2⟩, ⟨q̅g_sσ Gq⟩, ⟨ f^3G^3⟩ and ⟨q̅q⟩⟨ g_s^2G^2⟩ terms. The perturbative part and the ⟨ g_s^2G^2⟩ and ⟨ f^3G^3⟩ terms can be written in the following form according to the dispersion relation,
Π (p,p') = - ∫_0^∞∫_0^∞ ρ(s,u,q^2)/[(s - p^2)(u - p'^2)] ds du
where
ρ(s,u,q^2) = ρ^pert(s,u,q^2) + ρ^⟨g_s^2G^2⟩(s,u,q^2) + ρ^⟨f^3G^3⟩(s,u,q^2)
and s=p^2, u=p'^2 and q=p-p'. The QCD spectral density ρ(s,u,q^2) can be obtained by Cutkosky's rules (see Fig. <ref>); the calculational details have already been discussed in Ref. <cit.>. Besides, we also take into account the contributions of ⟨qq⟩, ⟨qg_sσ Gq⟩ and ⟨qq⟩⟨ g_s^2G^2⟩. The Feynman diagrams for all of these condensate terms can be classified into two groups, which are illustrated in Figs. <ref> and <ref>. We make the change of variables p^2→-P^2, p'^2→-P'^2 and q^2→-Q^2 and perform a double Borel transformation on both the phenomenological and QCD sides. The variables P^2 and P'^2 are replaced by T_1^2 and T_2^2, where T_1 and T_2 are called the Borel parameters. In this article, we take T^2=T_1^2 and T_2^2=kT_1^2=kT^2, where k is a constant related to the meson masses. It takes different values for different vertices; these values are listed in Table <ref>. Finally, the sum rules for the coupling constants are obtained by matching the phenomenological and QCD sides according to quark-hadron duality. The momentum-dependent strong coupling constant can be written as,
g(Q^2) = [- ∫_s_1^s_0∫_u_1^u_0 ρ(s,u,Q^2) e^-s/T^2 e^-u/kT^2 ds du + ℬℬ[Π^non-pert]] / [E/(m_M_2^2 + Q^2) e^-m_M_1^2/T^2 e^-m_M_3^2/kT^2]
where ℬℬ stands for the double Borel transformation and E is a factor related to the meson masses and decay constants (see Table <ref>).
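In practice, Eq. (<ref>) is a Borel-damped double integral divided by a known pole prefactor, and its evaluation is easy to prototype numerically. The following sketch is ours, not from the paper: the function rho is a toy stand-in for the true spectral density from the Cutkosky cuts, and the lower integration limits, the ratio k, the factor E, and the dropped non-perturbative term are all placeholders.

import numpy as np

# Toy stand-in for the QCD spectral density; the true rho(s,u,Q^2) is
# vertex specific and follows from the Cutkosky cuts (units: GeV).
def rho(s, u, Q2):
    return 1.0e-2 * np.sqrt(s * u) / (s + u + Q2)

m1, m2, m3 = 5.28, 5.28, 9.46          # m_{M1}, m_{M2}, m_{M3}: B, B, Upsilon
T2, k, E = 10.0, 3.2, 1.0              # Borel T^2, ratio k, factor E: placeholders
s1, s0 = 0.5 * m1**2, (m1 + 0.5)**2    # integration limits: illustrative only
u1, u0 = 0.5 * m3**2, (m3 + 0.5)**2

def g_of_Q2(Q2, n=400):
    s = np.linspace(s1, s0, n)
    u = np.linspace(u1, u0, n)
    S, U = np.meshgrid(s, u, indexing="ij")
    integrand = rho(S, U, Q2) * np.exp(-S / T2) * np.exp(-U / (k * T2))
    integral = np.trapz(np.trapz(integrand, u, axis=1), s)
    numerator = -integral          # Borel-transformed non-perturbative piece omitted
    denominator = E / (m2**2 + Q2) * np.exp(-m1**2 / T2) * np.exp(-m3**2 / (k * T2))
    return numerator / denominator

print([round(g_of_Q2(q2), 3) for q2 in (3.0, 10.0, 28.0)])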
From Eq. (<ref>) we can also see that the threshold parameters s_0 and u_0 enter the dispersion integral. These parameters are used to eliminate the contributions of the higher resonances and continuum states in Eq. (<ref>). They commonly fulfill the relations m_i^2<s_0<m'^2_i and m_o^2<u_0<m'^2_o, where the subscripts i and o represent the incoming and outgoing mesons, respectively, and m and m' are the masses of the ground state and the first excited state of the mesons. One usually has the relation m'=m+Δ, where Δ takes a value of 0.4 ∼ 0.6 GeV <cit.>. The strong coupling constant in Eq. (<ref>) is momentum dependent, similar to the running coupling constant α_s. In order to obtain the final results for the strong coupling constants, it is necessary to extrapolate the results obtained from Eq. (<ref>) into the time-like region (Q^2<0). This is realized by fitting g(Q^2) to appropriate analytical functions and setting Q^2=-m_on-shell^2.
§ NUMERICAL RESULTS AND DISCUSSIONS
All of the input parameters are taken at their standard values, such as the hadronic masses and decay constants m_B=5.28 GeV<cit.>, m_B^*=5.33 GeV<cit.>, m_Υ=9.46 GeV<cit.>, m_η_b=9.399 GeV<cit.>, f_B=0.192±0.013 GeV<cit.>, f_B^*=0.213±0.018 GeV<cit.>, f_Υ=0.7 GeV<cit.>, f_η_b=0.667±0.007 GeV<cit.>. The heavy and light quark masses are adopted from the Particle Data Group<cit.>, where m_u(d)=0.006±0.001 GeV and m_b=4.18±0.03 GeV. The vacuum condensates are also taken at their standard values, which are ⟨qq⟩=-(0.23±0.01)^3 GeV^3<cit.>, ⟨qg_sσ Gq⟩=m_0^2⟨qq⟩<cit.>, m_0^2=0.8±0.1 GeV^2<cit.>, ⟨ g_s^2G^2⟩=0.88±0.15 GeV^4<cit.>, ⟨ f^3G^3⟩=(8.8±5.5) GeV^2⟨ g_s^2G^2⟩<cit.>. The continuum threshold parameters in Eq. (<ref>) can be expressed as s_0=(m_i+Δ_i)^2 and u_0=(m_o+Δ_o)^2. In this article, we take Δ_i=Δ_o=0.4 and 0.6 GeV for the lower and upper bounds of the coupling constants, and Δ_i=Δ_o=0.5 GeV for the central values of the results. Fixing Q^2=3 GeV^2 in Eq. (<ref>), we plot the contributions of the total, perturbative and vacuum condensate terms in Fig. <ref>. This figure shows good stability of the results; the stable region is commonly called the Borel platform and indicates the convergence of the OPE. Then, by taking different values of Q^2, the momentum-dependent strong coupling constant g(Q^2) can be obtained, where the range of Q^2 is uniformly taken as 3 ∼ 28 GeV^2 in this work. The momentum-dependent strong coupling constants can be uniformly fitted with the following analytical function,
g(Q^2)=Fe^-GQ^2+H
where the values of the parameters F, G and H are shown in Table <ref>. The fitting diagrams of the strong coupling constants for each vertex are shown in Fig. <ref>. Then, g(Q^2) is extrapolated into the time-like region (Q^2<0) via Eq. (<ref>), and the on-shell condition is imposed by setting Q^2=-m_on-shell^2. The on-shell values of the strong coupling constants for the different off-shell cases are obtained and listed in the last column of Table <ref>. For each vertex, the on-shell values of the coupling constants for the different off-shell cases should be equal to each other. Taking the vertex BB^*Υ as an example, the central values of the strong coupling constants for the different off-shell cases are g_BB^*Υ^Υ=11.35 GeV^-1, g_BB^*Υ^B=11.36 GeV^-1 and g_BB^*Υ^B^*=12.04 GeV^-1, which agree well with each other. Thus, it is reasonable to determine the final values of the strong coupling constants by taking the average of the results for the different off-shell cases.
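The fit-and-extrapolate step described above is straightforward to reproduce. The sketch below is ours: it assumes synthetic g(Q^2) values standing in for the sum-rule output of Eq. (<ref>), fits them to g(Q^2)=Fe^-GQ^2+H over the same 3-28 GeV^2 window, and then evaluates the fit at the on-shell point Q^2=-m^2.

import numpy as np
from scipy.optimize import curve_fit

def g_fit(Q2, F, G, H):
    return F * np.exp(-G * Q2) + H

Q2 = np.linspace(3.0, 28.0, 26)                       # GeV^2, range used in the text
rng = np.random.default_rng(0)
g_vals = g_fit(Q2, 6.0, 0.05, 5.5) + rng.normal(0, 0.02, Q2.size)  # synthetic stand-ins

(F, G, H), _ = curve_fit(g_fit, Q2, g_vals, p0=(1.0, 0.1, 1.0))
m_onshell = 9.46                                      # e.g. an Upsilon off-shell case
print("F, G, H =", F, G, H, "; g(Q^2 = -m^2) =", g_fit(-m_onshell**2, F, G, H))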
Finally, the results of the strong coupling constants for the different strong vertices are determined as,
g_BBΥ=40.67^+7.55_-4.20
g_BB^*Υ=11.58^+2.19_-1.09 GeV^-1
g_B^*B^*Υ=57.02^+5.32_-5.31
g_BB^*η_b=23.39^+4.74_-2.30
g_B^*B^*η_b=12.49^+2.12_-1.35 GeV^-1
§ CONCLUSIONS
In this work, we systematically analyze the strong vertices BBΥ, BB^*Υ, B^*B^*Υ, BB^*η_b and B^*B^*η_b using QCD sum rules, where all off-shell cases are considered for each vertex. Within this framework, the momentum-dependent coupling constants are first obtained in the space-like region (Q^2>0) and then fitted to appropriate analytical functions. By extrapolating these functions into the time-like region (Q^2<0) and taking Q^2=-m^2_on-shell, we obtain the on-shell strong coupling constants. For each vertex, we take the average of the on-shell strong coupling constants over all off-shell cases as the final result. These coupling constants are valuable for describing the dynamical behavior of hadrons. For example, they are important input parameters for analyzing the final-state interactions in heavy quarkonium decays, or for calculating the absorption cross sections needed to understand heavy quarkonium absorption in hadronic matter.
§ ACKNOWLEDGEMENTS
A preprint of this paper is available at https://arxiv.org/pdf/2307.05090.pdfhttps://arxiv.org/pdf/2307.05090.pdf. This project is supported by the National Natural Science Foundation, Grant Number 12175068, and the Natural Science Foundation of HeBei Province, Grant Number A2018502124. Matsui:1986dk T. Matsui and H. Satz, https://doi.org/10.1016/0370-2693(86)91404-8Phys. Lett. B 178, 416-422 (1986). Vogt:1999cu R. Vogt, https://doi.org/10.1016/S0370-1573(98)00074-XPhys. Rept. 310, 197-260 (1999). Rapp:2008tf R. Rapp, D. Blaschke and P. Crochet, https://doi.org/10.1016/j.ppnp.2010.07.002Prog. Part. Nucl. Phys. 65, 209-266 (2010). Matinyan:1998cb S. G. Matinyan and B. Muller, https://doi.org/10.1103/PhysRevC.58.2994Phys. Rev. C 58, 2994-2997 (1998). Haglin:1999xs K. L. Haglin, https://doi.org/10.1103/PhysRevC.61.031902Phys. Rev. C 61, 031902 (2000). Lin:1999ad Z. W. Lin and C. M. Ko, https://doi.org/10.1103/PhysRevC.62.034903Phys. Rev. C 62, 034903 (2000). Sibirtsev:2000aw A. Sibirtsev, K. Tsushima and A. W. Thomas, https://doi.org/10.1103/PhysRevC.63.044906Phys. Rev. C 63, 044906 (2001). Lin:2000ke Z. W. Lin and C. M. Ko, https://doi.org/10.1016/S0370-2693(01)00092-2Phys. Lett. B 503, 104-112 (2001). Casalbuoni:1996pg R. Casalbuoni, A. Deandrea, N. Di Bartolomeo, R. Gatto, F. Feruglio and G. Nardulli, https://doi.org/10.1016/S0370-1573(96)00027-0Phys. Rept. 281, 145-238 (1997). Meng:2008bq C. Meng and K. T. Chao, https://doi.org/10.1103/PhysRevD.78.074001Phys. Rev. D 78, 074001 (2008). Navarra:2000ji F. S. Navarra, M. Nielsen, M. E. Bracco, M. Chiapparini and C. L. Schat, https://doi.org/10.1016/S0370-2693(00)00967-9Phys. Lett. B 489, 319-328 (2000). Navarra:2001ju F. S. Navarra, M. Nielsen and M. E. Bracco, https://doi.org/10.1103/PhysRevD.65.037502Phys. Rev. D 65, 037502 (2002). RodriguesdaSilva:2003hh R. Rodrigues da Silva, R. D. Matheus, F. S. Navarra and M. Nielsen, https://doi.org/10.1590/S0103-97332004000200018Braz. J. Phys. 34, 236-239 (2004). Bracco:2004rx M. E. Bracco, M. Chiapparini, F. S. Navarra and M. Nielsen, https://doi.org/10.1016/j.physletb.2004.11.024Phys. Lett. B 605, 326-334 (2005). Bracco:2006xf M. E. Bracco, A. Cerqueira, Jr., M. Chiapparini, A. Lozea and M.
Nielsen, https://doi.org/10.1016/j.physletb.2006.08.058Phys. Lett. B 641, 286-293 (2006). Bracco:2007sg M. E. Bracco, M. Chiapparini, F. S. Navarra and M. Nielsen, https://doi.org/10.1016/j.physletb.2007.11.066Phys. Lett. B 659, 559-564 (2008). Bracco:2010bf M. E. Bracco and M. Nielsen, https://doi.org/10.1103/PhysRevD.82.034012Phys. Rev. D 82, 034012 (2010). OsorioRodrigues:2010fen B. Osorio Rodrigues, M. E. Bracco, M. Nielsen and F. S. Navarra, https://doi.org/10.1016/j.nuclphysa.2011.01.001Nucl. Phys. A 852, 127-140 (2011). Azizi:2010jj K. Azizi and H. Sundu, https://doi.org/10.1088/0954-3899/38/4/045005J. Phys. G 38, 045005 (2011). Sundu:2011vz H. Sundu, J. Y. Sungu, S. Sahin, N. Yinelek and K. Azizi, https://doi.org/10.1103/PhysRevD.83.114009Phys. Rev. D 83, 114009 (2011). Cerqueira:2011za A. Cerqueira, Jr., B. Osorio Rodrigues and M. E. Bracco, https://doi.org/10.1016/j.nuclphysa.2011.11.004Nucl. Phys. A 874, 130-142 (2012). Cui:2011zq C. Y. Cui, Y. L. Liu and M. Q. Huang, https://doi.org/10.1016/j.physletb.2011.12.022Phys. Lett. B 707, 129-136 (2012). Cui:2012wk C. Y. Cui, Y. L. Liu and M. Q. Huang, https://doi.org/10.1016/j.physletb.2012.04.015Phys. Lett. B 711, 317-326 (2012). Bracco:2011pg M. E. Bracco, M. Chiapparini, F. S. Navarra and M. Nielsen, https://doi.org/10.1016/j.ppnp.2012.03.002Prog. Part. Nucl. Phys. 67, 1019-1052 (2012). Yu:2015xwa G. L. Yu, Z. Y. Li and Z. G. Wang, https://doi.org/10.1140/epjc/s10052-015-3460-3Eur. Phys. J. C 75, no.6, 243 (2015). Yu:2019sqp G. L. Yu, Z. G. Wang and Z. Y. Li, https://doi.org/10.1140/epjc/s10052-019-7314-2Eur. Phys. J. C 79, no.9, 798 (2019). Li:2015xka Z. Y. Li, Z. G. Wang and G. L. Yu, https://doi.org/10.1142/S021773231650036XMod. Phys. Lett. A 31, no.06, 1650036 (2016). Rodrigues:2017qsm B. O. Rodrigues, M. E. Bracco and C. M. Zanetti, https://doi.org/10.1016/j.nuclphysa.2017.07.002Nucl. Phys. A 966, 208-223 (2017). Lu:2023gmd J. Lu, G. L. Yu and Z. G. Wang, https://doi.org/10.48550/arXiv.2304.13969[arXiv:2304.13969 [hep-ph]]. Colangelo:1995ph P. Colangelo, F. De Fazio, G. Nardulli, N. Di Bartolomeo and R. Gatto, https://doi.org/10.1103/PhysRevD.52.6422Phys. Rev. D 52, 6422-6434 (1995). Aliev:1996bp T. M. Aliev, N. K. Pak and M. Savci, https://doi.org/10.1016/S0370-2693(96)01400-1Phys. Lett. B 390, 335-340 (1997). Colangelo:1997rp P. Colangelo and F. De Fazio, https://doi.org/10.1007/s100520050222Eur. Phys. J. C 4, 503-511 (1998). Dai:1998ve Y. B. Dai and S. L. Zhu, https://doi.org/10.1103/PhysRevD.58.074009Phys. Rev. D 58, 074009 (1998). Zhu:1998vf S. L. Zhu and Y. B. Dai, https://doi.org/10.1103/PhysRevD.58.094033Phys. Rev. D 58, 094033 (1998). Khodjamirian:1999hb A. Khodjamirian, R. Ruckl, S. Weinzierl and O. I. Yakovlev, https://doi.org/10.1016/S0370-2693(99)00518-3Phys. Lett. B 457, 245-252 (1999). Li:2002pp Z. H. Li, T. Huang, J. Z. Sun and Z. H. Dai, https://doi.org/10.1103/PhysRevD.65.076005Phys. Rev. D 65, 076005 (2002). Kim:2001es H. c. Kim and S. H. Lee, https://doi.org/10.1007/s100520100847Eur. Phys. J. C 22, 707-713 (2002). Wang:2006bs Z. G. Wang and S. L. Wan, https://doi.org/10.1103/PhysRevD.73.094020Phys. Rev. D 73, 094020 (2006). Wang:2006ida Z. G. Wang and S. L. Wan, https://doi.org/10.1103/PhysRevD.74.014017Phys. Rev. D 74, 014017 (2006). Wang:2007mc Z. G. Wang, https://doi.org/10.1140/epjc/s10052-007-0404-6Eur. Phys. J. C 52, 553-560 (2007). Wang:2007zm Z. G. Wang, https://doi.org/10.1016/j.nuclphysa.2007.09.004Nucl. Phys. A 796, 61-82 (2007). Wang:2008tm Z. G. 
Wang, https://doi.org/10.1103/PhysRevD.77.054024Phys. Rev. D 77, 054024 (2008). Wang:2007ci Z. G. Wang and Z. B. Wang, https://doi.org/10.1088/0256-307X/25/2/025Chin. Phys. Lett. 25, 444-446 (2008). Li:2007dv Z. H. Li, W. Liu and H. Y. Liu, https://doi.org/10.1016/j.physletb.2007.11.074Phys. Lett. B 659, 598-606 (2008). Wang:2013iia Z. G. Wang, https://doi.org/10.1103/PhysRevD.89.034017Phys. Rev. D 89, no.3, 034017 (2014). Reinders:1984sr L. J. Reinders, H. Rubinstein and S. Yazaki, https://doi.org/10.1016/0370-1573(85)90065-1Phys. Rept. 127, 1 (1985). ParticleDataGroup:2022pth R. L. Workman et al. [Particle Data Group], https://doi.org/10.1093/ptep/ptac097PTEP 2022, 083C01 (2022). Wang:2015mxa Z. G. Wang, https://doi.org/10.1140/epjc/s10052-015-3653-9Eur. Phys. J. C 75, 427 (2015). Becirevic:2017chd D. Bečirević, B. Melić, M. Patra and O. Sumensari, https://doi.org/10.1103/PhysRevD.97.015008Phys. Rev. D 97, no.1, 015008 (2018). Narison:2010cg S. Narison, https://doi.org/10.1016/j.physletb.2011.09.116Phys. Lett. B 693, 559-566 (2010) [erratum: Phys. Lett. B 705, 544-544 (2011)]. Narison:2011xe S. Narison, https://doi.org/10.1016/j.physletb.2011.11.058Phys. Lett. B 706, 412-422 (2012). Narison:2011rn S. Narison, https://doi.org/10.1016/j.physletb.2011.12.047Phys. Lett. B 707, 259-263 (2012).
http://arxiv.org/abs/2307.04013v1
20230708164601
BPNet: Bézier Primitive Segmentation on 3D Point Clouds
[ "Rao Fu", "Cheng Wen", "Qian Li", "Xiao Xiao", "Pierre Alliez" ]
cs.CV
[ "cs.CV" ]
BPNet: Bézier Primitive Segmentation on 3D Point Clouds
Rao Fu, Cheng Wen, Qian Li, Xiao Xiao, Pierre Alliez
August 12, 2023
==========================================================================================
This paper proposes BPNet, a novel end-to-end deep learning framework for learning Bézier primitive segmentation on 3D point clouds. Existing works treat different primitive types separately, which limits them to a finite set of shape categories. To address this issue, we seek a generalized primitive segmentation on point clouds. Taking inspiration from Bézier decomposition on NURBS models, we transfer it to guide point cloud segmentation, casting off primitive types. A joint optimization framework is proposed to learn Bézier primitive segmentation and geometric fitting simultaneously on a cascaded architecture. Specifically, we introduce a soft voting regularizer to improve primitive segmentation and propose an auto-weight embedding module to cluster point features, making the network more robust and generic. We also introduce a reconstruction module with which we successfully process multiple CAD models with different primitives simultaneously. We conducted extensive experiments on the synthetic ABC dataset and on real-scan datasets to validate our approach and compare it with different baseline methods. Experiments show superior performance over previous work in terms of segmentation, with a substantially faster inference speed.
§ INTRODUCTION
Structuring and abstracting 3D point clouds via segmentation is a prerequisite for various computer vision and 3D modeling applications. Many approaches have been proposed for semantic segmentation, but the finite set of semantic classes limits their applicability. 3D instance-level segmentation and shape detection are much more demanding, and their literature lags far behind the semantic segmentation counterpart. Finding a generalized way to decompose point clouds is essential. For example, man-made objects can be decomposed into canonical primitives such as planes, spheres, and cylinders, which are helpful for visualization and editing. However, the limited types of canonical primitives are insufficient to describe the geometry of objects in real-world tasks. We therefore look for a generalized way of decomposing point clouds. The task of decomposing point clouds into different geometric primitives with corresponding parameters is referred to as parametric primitive segmentation. Parametric primitive segmentation is more reasonable than semantic instance segmentation for individual 3D objects, as it unifies the 3D objects in a parametric space instead of forming artificially defined parts. However, the task is quite challenging as 1) there is no exhaustive repertoire of canonical geometric primitives, 2) the number of primitives and the number of points belonging to each primitive may vary significantly, and 3) points assigned to the same primitive should belong to the same type of primitive. Inspired by Bézier decomposition, through which the canonical geometric primitives of NURBS models (plane, sphere, cone, cylinder, etc.) and their parametric surfaces can all be divided into rational Bézier patches, we propose to learn Bézier decomposition on 3D point clouds. We focus on segmenting point clouds sampled from individual objects, such as CAD models. Departing from previous primitive segmentation, we generalize the different primitive types to Bézier primitives, making them suitable for end-to-end and batch training.
To the best of our knowledge, our method is the only work to learn Bézier decomposition on point clouds. To summarize our contributions: * We introduce a novel soft voting regularizer for the relaxed intersection over union (IOU) loss, improving our primitive segmentation results. * We design a new auto-weight embedding module to cluster point features which is free of iterations, making the network robust to real-scan data and able to work on axis-symmetric free-form point clouds. * We propose an innovative reconstruction module in which we succeed in using a generalized formula to evaluate points on different primitive types, enabling our training process to be fully differentiable and compatible with batch operations. * Experiments demonstrate that our method works on free-form point clouds and real-scan data even though we only train our model on the ABC dataset. Furthermore, we present one application of Bézier primitive segmentation that reconstructs the full Bézier model while preserving the sharp features. The code is available at: <https://github.com/bizerfr/BPNet>.
§ RELATED WORK
Bézier primitive segmentation involves parametric fitting, instance segmentation, and multi-task learning. We now provide a brief review of these related research areas. Primitive segmentation. Primitive segmentation refers to the search for, and approximation of, geometric primitives in point clouds. Primitives can be canonical geometric primitives, such as planes or spheres, or parametric surface patches, such as Bézier, BSpline, or NURBS. We can classify primitive segmentation methods into two lines of approaches: geometric optimization and machine learning. Popular geometric optimization-based methods include RANSAC <cit.>, region growing <cit.> and Hough transforms <cit.>. We refer to <cit.> for a comprehensive survey. One limitation of geometric optimization-based methods is that they require strong prior knowledge and are hence sensitive to parameters. To alleviate this problem, recent approaches utilize neural networks for learning specific classes of primitives such as cuboids <cit.>. The SPFN supervised learning approach <cit.> detects a wider repertoire of primitives such as planes, spheres, cylinders, and cones. Apart from the canonical primitives handled by SPFN, ParSeNet <cit.> and HPNet <cit.> also detect open or closed BSpline surface patches. Nevertheless, the different types of primitives are treated separately, with insufficient genericity; this makes these methods unsuitable for batch operations, and they thus suffer from long inference times. Deep learning-based methods are less sensitive to parameters but often support a limited repertoire of primitives. Our work extends SPFN, ParSeNet, and HPNet with more general Bézier patches. Instance segmentation. Instance segmentation is more challenging than semantic segmentation as the number of instances is not known a priori. Points assigned to the same instance should fall into the same semantic class. We distinguish between two types of methods: proposal-based <cit.> and proposal-free methods <cit.>. On the one hand, proposal-based methods utilize an object-detection module and usually learn an instance mask for prediction. On the other hand, proposal-free methods tackle the problem as a clustering step after semantic segmentation. We refer to a recent comprehensive survey <cit.>.
The significant difference between instance segmentation and primitive segmentation is that instance segmentation only focuses on partitioning individual objects, and primitive fitting is absent. Patch-based representations. Patch-based representations refer to finding a mapping from a 2D patch to a 3D surface. Previous works, including <cit.>, learn a parametric 2D mapping by minimizing the Chamfer distance <cit.>. One issue with the Chamfer distance is that it is not differentiable when the nearest neighbor is used to find matched pairs. We learn the uv mapping instead. Learning the uv parameters enables us to re-evaluate points from our proposed generalized Bézier primitives, making our training process differentiable and supporting batch operations. Multi-task learning. Multi-task learning aims to leverage relevant information contained in multiple related tasks to help improve the generalization performance of all the tasks <cit.>. Compared to single-task learning, the architectures used for multi-task learning (see, e.g., <cit.>) share a backbone to extract global features, followed by branches that transform the features and utilize them for specific tasks. Inspired by <cit.>, we use a cascaded architecture for our joint optimization tasks.
§ METHOD
Figure <ref> shows an overview of the proposed neural network. The input to our method is a 3D point cloud P={p_i | 0≤ i ≤ N-1}, where p_i denotes the point coordinates (with or without normals). The output is the per-point patch labels {P_k | ∪_k P_k = P}, where each patch corresponds to a Bézier primitive. The network also outputs the patch degrees (d_u-by-d_v) and the weighted control points C={𝐜_kmn = (x,y,z,w) | 0≤ m ≤ d_u, 0≤ n ≤ d_v, 0 ≤ k ≤ K-1}, where K denotes the number of patches. We constrain the maximum degree to be M_d-by-N_d. We let our network output a maximum number of K Bézier patches for all CAD models, and we use K̂ to denote the ground-truth number of patches, which is smaller than K and varies for each CAD model.
§.§ Architecture
Our architecture consists of two components: a backbone for extracting features and a cascaded structure for joint optimization. The backbone is based on three stacked EdgeConv <cit.> layers and extracts a 256D pointwise feature for each input point. Let 𝐏∈ℝ^N × D_in denote the input matrix, where each row holds the point coordinates (D_in is three) with optional normals (D_in is six). Let 𝐗∈ℝ^N × 256 denote the 256D pointwise feature matrix extracted by the backbone. We use a cascaded structure to jointly optimize the per-point degree probability matrix 𝐃∈ℝ^N × (M_d*N_d), the soft membership matrix 𝐖∈ℝ^N × K, the UV parameter matrix 𝐓∈ℝ^N × 2, and the weighted control point tensor 𝐂∈ℝ^K × (M_d+1) × (N_d+1) × 4. Because 𝐃, 𝐖, 𝐓 and 𝐂 are coupled, it is natural to use a cascaded structure to optimize them jointly. Here, the cascaded structure is similar to <cit.>, where the features are concatenated and transformed for the different MLP branches.
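To make the cascade concrete, the following PyTorch-style sketch shows one plausible wiring of the four branches on top of the 256-D per-point features (the backbone is omitted). The layer widths, the exact concatenation order, the value of K, and the per-patch pooling used before the control-point head are our assumptions, not the released code; the sigmoid on 𝐓 and tanh on 𝐂 follow the choices stated later in the Total Loss subsection.

import torch
import torch.nn as nn

N, K, Md, Nd = 8192, 100, 3, 3   # points, max patches, max degrees (placeholders)

class CascadedHeads(nn.Module):
    """Sketch of the cascaded MLP branches producing D, W, T and C."""
    def __init__(self, feat=256):
        super().__init__()
        self.deg = nn.Sequential(nn.Linear(feat, 128), nn.ReLU(), nn.Linear(128, Md * Nd))
        self.mem = nn.Sequential(nn.Linear(feat + Md * Nd, 128), nn.ReLU(), nn.Linear(128, K))
        self.uv = nn.Sequential(nn.Linear(feat + K, 128), nn.ReLU(), nn.Linear(128, 2))
        self.ctrl = nn.Sequential(nn.Linear(feat, 256), nn.ReLU(),
                                  nn.Linear(256, (Md + 1) * (Nd + 1) * 4))

    def forward(self, X):                                   # X: (B, N, 256)
        D = self.deg(X).softmax(-1)                         # (B, N, Md*Nd) degree probs
        W = self.mem(torch.cat([X, D], -1)).softmax(-1)     # (B, N, K) soft membership
        T = self.uv(torch.cat([X, W], -1)).sigmoid()        # (B, N, 2) uv in [0, 1]
        # pool per-patch features with W, then regress weighted control points
        Xp = torch.einsum('bnk,bnf->bkf', W, X) / (W.sum(1).unsqueeze(-1) + 1e-8)
        C = self.ctrl(Xp).tanh().view(-1, K, Md + 1, Nd + 1, 4)
        return D, W, T, C

D, W, T, C = CascadedHeads()(torch.rand(2, N, 256))         # toy forward pass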
§.§ Joint Optimization
We have four modules: decomposition, fitting, embedding, and reconstruction. They are coupled so as to jointly optimize 𝐃, 𝐖, 𝐓 and 𝐂.
§.§.§ Decomposition Module
Degree classification. We use Bézier primitives with different degrees to replace the classical primitives (plane, sphere, cylinder, BSpline, etc.). For the classification of degrees, the straightforward idea would be to use a cross-entropy loss: CE = -log(p_t), where p_t denotes the predicted probability of the true degree label. However, the degree types are highly imbalanced. For example, surfaces of degree type 1-by-1 represent more than 50%, while 3-by-2 surfaces are rare. To deal with this imbalance, we utilize the multi-class focal loss <cit.>: FL = -(1-p_t)^γlog(p_t), where γ denotes the focusing parameter. The degree type classification loss is then defined as:
L_deg = 1/N∑_i=0^N-1FL(𝐃_i,:)
Primitive segmentation. The output of primitive segmentation is a soft membership indicating per-point primitive instance probabilities. Each element w_ik is the probability for a point p_i to be a member of primitive k. Since we acquire pointwise patch labels from our data pre-processing, we use a relaxed IOU loss <cit.> to regress 𝐖:
L_seg = 1/K̂∑_k=0^K̂-1[1 - 𝐖_:,k^T𝐖̂_:,k̂/(‖𝐖_:,k‖_1 + ‖𝐖̂_:,k̂‖_1 - 𝐖_:,k^T𝐖̂_:,k̂)]
where 𝐖 denotes the output of the neural network and 𝐖̂ is the one-hot encoding of the ground-truth primitive instance labels. The best matching pairs (k, k̂) between prediction and ground truth are found via Hungarian matching <cit.>. Please refer to <cit.> for more details. Soft voting regularizer. Since we learn 𝐃 and 𝐖 separately, points belonging to the same primitive instance may have different degrees, which is undesirable. To favor degree consistency between points assigned to the same primitive, we propose a soft voting regularizer that penalizes the pointwise degree probabilities. We first compute a score for each degree case for all primitive instances by 𝐒 = 𝐖^T𝐃, where each element s_kd denotes the soft number of points with degree d in primitive instance k. We then perform L_1 normalization to convert 𝐒 into primitive degree distributions Ŝ:
Ŝ = [1/∑_d S_kd] ⊙𝐒
where the first factor denotes the reciprocal of each row sum and ⊙ denotes the element-wise product. Finally, we utilize a focal loss to compute the primitive degree voting loss:
L_voting = 1/K̂∑_k=0^K̂-1FL(Ŝ_k,:)
where FL denotes the focal loss. The global loss for the decomposition module is defined as: L_dec = L_deg + L_seg + L_voting.
§.§.§ Fitting Module
Parameter regression. Through Bézier decomposition we obtain ground-truth labels for the (u, v) parameters and record all of them in the matrix 𝐓̂. We regress the uv parameters using a mean squared error (MSE) loss:
L_para = 1/N∑_i=0^N-1‖𝐓_i,: - 𝐓̂_i,:‖_2^2
Control point regression. We select a maximum number of primitive instances K for all models. As the ground-truth number of primitive instances K̂ varies for each model, we directly reuse the matching pairs from the Hungarian matching already computed in the primitive segmentation step. Note that as the predicted degree (d_u, d_v) may differ from the ground truth (d̂_u, d̂_v), we align the degrees to compute the loss via a maximum operation, (max(d_u, d̂_u), max(d_v, d̂_v)). The network always outputs (M_d+1) × (N_d+1) control points for each primitive, corresponding to the predefined maximum degrees in the U and V directions, and these control points are truncated to the aligned degree. Furthermore, if the ground-truth degree is smaller than the predicted one, we pad the ground-truth patch with "fake" zero control points; otherwise, we just use the aligned degree, which is the maximum of the prediction and the ground truth. Finally, the control point loss is defined as:
L_ctrl = 1/N_𝐜∑_t=0^N_𝐜-1‖𝐜_t - 𝐜̂_t‖_2^2
where 𝐜_t and 𝐜̂_t denote matched control points, and N_𝐜 is the number of matched control point pairs. Finally, we define the fitting loss as: L_fit = L_para + L_ctrl.
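For concreteness, here is a minimal sketch (ours) of the two segmentation losses just defined, for a single model. It assumes the columns of W have already been matched to the ground truth by the Hungarian step and restricted to the K̂ matched patches, and that deg_gt holds the ground-truth degree index of each matched patch.

import torch

def relaxed_iou_loss(W, W_gt):
    # W: (N, K_hat) soft membership, W_gt: (N, K_hat) one-hot, columns matched
    inter = (W * W_gt).sum(0)                      # W_{:,k}^T W_hat_{:,k}
    union = W.sum(0) + W_gt.sum(0) - inter         # L1 norms minus intersection
    return (1.0 - inter / union).mean()

def soft_voting_loss(W, D, deg_gt, gamma=3.0):
    # per-patch degree distributions S_hat from S = W^T D, focal-penalized
    S = W.t() @ D                                  # (K_hat, Md*Nd) soft degree counts
    S_hat = S / (S.sum(1, keepdim=True) + 1e-8)    # row-normalized distributions
    p_t = S_hat[torch.arange(len(deg_gt)), deg_gt].clamp_min(1e-8)
    return ((1 - p_t) ** gamma * (-p_t.log())).mean()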
§.§.§ Embedding Module
We use the embedding module to eliminate over-segmentation by pulling pointwise features toward their centers and pushing the different centers apart. Unlike ParSeNet and HPNet, 1) we do not need a mean-shift clustering step, which is time-consuming; 2) we calculate the feature centers in a weighted manner rather than by simple averaging, where the weights are chosen as 𝐖 and are automatically updated in the decomposition module; and 3) 𝐖 is further optimized to improve the segmentation. Moreover, our embedding module is suitable for batch operations even though the number of primitive instances per CAD model and the number of points per primitive vary; otherwise, one would have to apply mean-shift to each primitive, which deteriorates the timing further. To be specific, we use 𝐖 to weight 𝐗 to obtain primitive features for all candidate primitive instances. Then, we reuse 𝐖 to weight all the primitive instance features and calculate a "soft" center feature for each point. We favor each point feature embedding being close to its "soft" center feature, and each primitive instance feature embedding being far from the others. The primitive instance-wise feature matrix 𝐗_ins is defined as:
𝐗_ins = [1/∑_i=0^N-1 w_ik] ⊙ (𝐖^T𝐗)
where each row of 𝐗_ins denotes the instance-wise features of one patch. We then compute the "soft" center feature matrix 𝐗_center as:
𝐗_center = 𝐖𝐗_ins
where each row denotes the "soft" center of one point. Then we define L_pull as:
L_pull = 1/N∑_i=0^N-1Relu(‖𝐗_i,: - (𝐗_center)_i,:‖_2^2 - δ_pull)
and we define L_push as:
L_push = 1/2K(K-1)∑_k_1<k_2Relu(δ_push - ‖(𝐗_ins)_k_1,: - (𝐗_ins)_k_2,:‖_2^2)
Finally, the total embedding loss L_emb is defined as: L_emb = L_pull + L_push.
§.§.§ Reconstruction Module
The reconstruction module is designed to reconstruct points from the predicted Bézier primitives, i.e., rational Bézier patches, and to further jointly optimize 𝐖. One difficulty is that each CAD model has a varying number of primitives, and the degree of each primitive also differs. Therefore, we seek a generalized formula supporting tensor operations for re-evaluating points on a batch of CAD models. The straightforward approach would be to compute a synthesizing score for all degree types. Assume the maximum number of primitive instances is K and there are M_d * N_d different degree types; the total number of combinations is K * M_d * N_d. We define a synthesizing score for each case in Einstein summation form: (s_w)_kci = w_ik * s_kc, where w_ik denotes the probability of point p_i belonging to primitive instance k, and s_kc denotes the degree score of degree type m-by-n, indexed by c = N_d * (m - 1) + (n - 1), for primitive instance k, coming from 𝐒. Then, we normalize (s_w)_kci such that ∑_k,c,i (s_w)_kci = 1. Finally, the reconstructed point coordinates p_i are defined as:
[ x_i'; y_i'; z_i' ] = ∑_k,m,n(s_w)_kci𝐑_kmn(u_i,v_i)
where the parameters (u_i,v_i) of point p_i are shared across all combinations. Such a formulation makes it easy to express the formula in matrix form and avoids resorting to loop operations. However, this approach is too memory-intensive. We thus truncate the degree from the degree probability matrix by re-defining the Bernstein basis function for degree d as:
(B_M)_d^l(t) = C(d,l)t^l(1-t)^d-l for l ≤ d, and 0 for l > d
where C(d,l) denotes the binomial coefficient, 0 ≤ l ≤ M, and M is the maximum degree. Then, the reconstructed point coordinates of p_i on a degree m-by-n patch k are:
[ x_i'; y_i'; z_i' ] = ∑_m_i^M_d∑_n_i^N_d(B_M_d)_m^m_i(u)(B_N_d)_n^n_i(v)𝐜_m_in_i(c_w)_m_in_iw_ik/∑_m_i,n_i(B_M_d)_m^m_i(u)(B_N_d)_n^n_i(v)(c_w)_m_in_iw_ik
where 𝐜_m_in_i denotes the control point coordinates, (c_w)_m_in_i denotes its weight, and w_ik is an element of 𝐖. If we also input the normal (n_x_i, n_y_i, n_z_i) of point p_i, we can reconstruct the normal (n_x_i', n_y_i', n_z_i') by:
[ n_x_i'; n_y_i'; n_z_i' ] = [ ∂ x_i'/∂ u; ∂ y_i'/∂ u; ∂ z_i'/∂ u ] × [ ∂ x_i'/∂ v; ∂ y_i'/∂ v; ∂ z_i'/∂ v ]
where × denotes the cross product. 𝐩_i denotes the input point coordinates, 𝐩_i^* the reconstructed point coordinates, 𝐧_p_i the input point normal, and 𝐧_p_i^* the reconstructed normal. The coordinate loss is defined as:
L_coord = 1/N∑_i=0^N-1‖𝐩_i - 𝐩_i^*‖_2^2
If we also input the normals, the normal loss is defined as:
L_norm = 1/N∑_i=0^N-1(1 - |𝐧_p_i^T𝐧_p_i^*|)
The loss for the reconstruction module is defined as: L_recon = L_coord without normals, and L_recon = L_coord + L_norm with normals.
§.§.§ Total Loss
The total loss is defined as the sum of the decomposition, fitting, embedding, and reconstruction losses: L = L_dec + L_fit + L_emb + L_recon. We do not use different weights for the loss items because all point clouds are normalized into the unit sphere; moreover, the uv parameters are output directly by a sigmoid layer and the control points directly by a tanh layer, so all loss items are roughly on the same scale and no per-loss weights are needed. Instead, we use different learning rates for the different modules to balance the training. Specific training details are listed in section <ref>.
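Before turning to the experiments, here is a minimal sketch (ours) of the reconstruction step for a single patch: it evaluates the rational Bézier formula above with the truncated Bernstein basis. Folding in the soft memberships w_ik and batching over all K patches, as the module actually does during training, is omitted for clarity.

import torch
from math import comb

def bernstein(M, d, t):
    # truncated basis (B_M)_d^l(t): standard Bernstein for l <= d, zero above
    cols = [comb(d, l) * t**l * (1 - t)**(d - l) if l <= d else torch.zeros_like(t)
            for l in range(M + 1)]
    return torch.stack(cols, dim=-1)               # (N, M+1)

def eval_patch(C, m, n, uv, Md=3, Nd=3):
    # C: (Md+1, Nd+1, 4) weighted control points (x, y, z, w); uv: (N, 2) in [0, 1]
    Bu = bernstein(Md, m, uv[:, 0])                # (N, Md+1)
    Bv = bernstein(Nd, n, uv[:, 1])                # (N, Nd+1)
    w = torch.einsum('ni,nj,ij->nij', Bu, Bv, C[..., 3])   # basis times weights
    num = torch.einsum('nij,ijc->nc', w, C[..., :3])
    den = w.sum(dim=(1, 2)).unsqueeze(-1) + 1e-8
    return num / den                               # (N, 3) reconstructed points

pts = eval_patch(torch.rand(4, 4, 4), m=2, n=3, uv=torch.rand(100, 2))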
§ EXPERIMENTS
§.§ Dataset Pre-Processing
We evaluate our approach on the ABC dataset <cit.>. However, the ABC dataset does not provide the annotations needed for learning Bézier decomposition on point clouds, so we apply a pre-processing step. Specifically, we utilize the CGAL library <cit.> and the OpenCascade library <cit.> to perform Bézier decomposition directly on the STEP files, and we randomly sample the surfaces to obtain the following labels: point coordinates, point normals, point uv parameters, surface patch indices of the corresponding points, surface patch degrees, and surface patch control points. Finally, we use 5,200 CAD models for training and 1,300 CAD models for testing. Each CAD model contains 8,192 randomly (non-uniformly) sampled points with annotations.
§.§ Training Details
We train a multi-task learning model. The learning rate differs per MLP branch: it is set to 10^-3 for the backbone, the soft membership and the uv parameters, and to 10^-4 for the degree probabilities and the control points. As our learning tasks are not independent, we set a lower learning rate for the branches, such as the degree probabilities, that converge faster. We set γ to 3.0 for the focal loss, and δ_pull to 0 and δ_push to 2.0 for the embedding losses. We employ Adam to train our network for 150 epochs.
§.§ Comparisons
We compare our algorithm with SPFN, ParSeNet, and HPNet <cit.>. We use both points and normals for training all the algorithms. Since SPFN only supports four types of canonical primitives (plane, sphere, cone, and cylinder), we consider points belonging to primitives falling outside the supported canonical primitive types as the "unknown" type.
To make the comparison fair, we modify SPFN to let the network take point coordinates and normals as input for training. For ParSeNet, we only train the segmentation module on the ABC dataset and use their pre-trained fitting model (SplineNet) directly. For HPNet, we also use the pre-trained fitting model directly, which is the same as in ParSeNet. We observed that the output of HPNet is very sensitive to the number of points; in order to use HPNet at its best, we down-sample the point clouds to 7k points for training and testing. We choose the following evaluation metrics: * Primitive Type Accuracy ("Acc"): 1/K∑_k=0^K-1𝕀(t_k==t̂_k), where t_k and t̂_k are the predicted and ground-truth primitive types, respectively. This measures the type accuracy; note that our primitive types differ from those of the baselines. * Rand Index ("RI"): (a+b)/c, where c = N(N-1)/2 is the total number of point pairs, a denotes the number of pairs of points that lie in the same primitive in both prediction and ground truth, and b denotes the number of pairs of points that lie in different primitives in both prediction and ground truth. The Rand index is a similarity measure between two clusterings, and a higher value means better performance <cit.>. * Normal Error ("Err"): 1/N∑_i=0^N-1arccos(|𝐧_p_i^T𝐧_p_i^*|), where 𝐧_p_i and 𝐧_p_i^* are the ground-truth and predicted unit normals, respectively. * Inference Time ("Time"): the inference time on the whole test dataset. * Average Primitive Number ("Num"): the average number of predicted primitives on the whole test dataset. We record these evaluation metrics in Tables <ref> and <ref>, and Figure <ref> shows visual depictions of the results. Our results show the best performance in terms of primitive type accuracy, normal fitting error, and inference time. Our method is much faster at inference because it uses a general formula for the different primitive types and its embedding module is free of iterations; the other methods treat the primitives with different equations, and ParSeNet and HPNet need a mean-shift step. Even though our method may segment more primitives, by the nature of Bézier decomposition, the primitive type accuracy and normal fitting error are computed point-wise; thus, over-segmentation and under-segmentation do not lead to smaller or bigger errors due to fewer or more segmented primitives. We also show the performance of all the methods without normals as input. For our method and SPFN, we only input point coordinates into the networks but use normals as supervision. Since ParSeNet does not regress normals, we cannot use normals as supervision; we train ParSeNet without normals as input to test its performance. HPNet uses the network to regress normals from the input and also utilizes the ground-truth normals to construct an affinity matrix in a post-processing clustering step; we modify HPNet to construct the affinity matrix from the regressed normals instead. Table <ref> records the evaluation metrics of each method. From these experiments, we deduce that normals are important for the task of parametric primitive segmentation.
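The two geometric metrics above are easy to state in code. This sketch (ours, not the authors') computes the Rand Index by brute force over all point pairs and the orientation-agnostic normal error in radians; labels are integer per-point patch ids and normals are assumed unit length.

import numpy as np

def rand_index(pred, gt):
    # fraction of point pairs on which the two segmentations agree
    same_p = pred[:, None] == pred[None, :]
    same_g = gt[:, None] == gt[None, :]
    iu = np.triu_indices(len(pred), k=1)
    return (same_p == same_g)[iu].mean()

def normal_error(n_pred, n_gt):
    # mean arccos(|n^T n*|), insensitive to normal orientation flips
    cos = np.clip(np.abs(np.sum(n_pred * n_gt, axis=1)), 0.0, 1.0)
    return np.arccos(cos).mean()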
§.§ Ablation Studies
We first conduct experiments to verify the usefulness of the soft voting regularizer, which favors primitive type consistency within each primitive instance, i.e., points assigned to the same primitive instance should have the same primitive type. From our experiments, we find that the soft voting regularizer not only improves the primitive type accuracy but also accelerates the training of the relaxed IOU. Please refer to Figure <ref> and the last two rows of Table <ref>. We also verify the functionality of each module. If we only use the decomposition module, the result is not good even though "Acc" and "RI" are slightly higher, because the decomposition module ignores the fitting, which limits the segmentation to specific datasets. The reconstruction module reduces "Err" significantly compared to the fitting module, because the reconstruction module controls how well a predicted Bézier primitive fits the input point cloud, whereas the fitting module only regresses the control points and uv parameters. The embedding module is designed to eliminate small patches containing few points (see the "Num" column); therefore, experiments with the embedding module result in fewer patches than their counterparts. In conclusion, training with all the modules yields the best results.
§.§ Stress Tests
To test whether our algorithm works in real-world scenarios, we show more results on real-scan data from the Aim@Shape dataset <cit.>. Compared to the ABC dataset, the sampling is non-uniform, with missing data and measurement noise. Besides, we cannot train the network on these data directly because they lack ground-truth labels; instead, we use the models trained on the ABC dataset and test their performance on the real scans. Our algorithm still works, while the other methods are sensitive. Another positive aspect is that our algorithm can decompose axis-symmetric free-form point clouds with much smoother boundaries between the different patches. Please refer to Figure <ref>. We also test the performance of our network under added Gaussian white noise. Specifically, we apply different scales of Gaussian white noise to the point coordinates after normalizing them into the unit sphere; the noise scale denotes the standard deviation of the noise and ranges from 0.01 to 0.05. We train our network on noise-free data but test it with Gaussian white noise. Please refer to Table <ref>.
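The corruption used in this stress test can be reproduced as follows; this is a sketch under our reading of the setup, since the paper does not publish the exact recipe: normalize each cloud into the unit sphere, then perturb the coordinates with zero-mean Gaussian noise whose standard deviation is the quoted scale.

import numpy as np

def corrupt(points, scale, seed=0):
    p = points - points.mean(axis=0)              # center the cloud
    p = p / np.linalg.norm(p, axis=1).max()       # normalize into the unit sphere
    rng = np.random.default_rng(seed)
    return p + rng.normal(0.0, scale, size=p.shape)

noisy = corrupt(np.random.rand(8192, 3), scale=0.03)   # scale in [0.01, 0.05]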
§.§ Applications
We can reconstruct the full Bézier model from the Bézier primitive segmentation. We do not follow ParSeNet in pre-training a model that outputs a fixed number of control points. Instead, we reuse the rational Bézier patches to refit canonical Bézier patches, keeping the degrees of the canonical Bézier patch the same as those of the rational one. Concretely, we fetch the segmentation and per-patch degrees predicted by the network; then, we use the parameterization of <cit.> to recompute the uv parameters and use least squares to refit the control points of each patch. Each patch is expanded by enlarging its uv domain to guarantee intersections with its adjacent patches. After that, we use the CGAL co-refinement package <cit.> to detect the intersection polylines of adjacent tessellated patches and trim each tessellated patch with these polylines. Our reconstructed full Bézier model preserves the sharp features, while the boundaries between different primitives in ParSeNet are jaggy and thus fail to preserve the sharp features. Please refer to Figure <ref>.
§ CONCLUSION
This paper presents an end-to-end method for grouping points by learning Bézier decomposition. In contrast to approaches treating different geometric primitives separately, our method uses a general formulation for the different primitive types. Regarding limitations, Bézier decomposition may naturally generate overly complex segmentations. In addition, we chose the rational Bézier patch as the primitive type; as its formulation is not linear, fitting the parametric patch is not direct. In future work, we wish to use the neural network to directly regress canonical Bézier patches.
§ ACKNOWLEDGEMENTS
This research is part of a project that has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 860843. The work of Pierre Alliez is also supported by the French government, through the 3IA Côte d'Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002.
http://arxiv.org/abs/2307.07624v1
20230714205210
On Barker-Larman Conjecture relative to a convex body with centrally symmetric sections
[ "E. Morales-Amaya" ]
math.MG
[ "math.MG" ]
On Barker-Larman Conjecture relative to a convex body with centrally symmetric sections
E. Morales-Amaya
August 12, 2023
===========================================================================================================================
Let K⊂ℝ^n be a convex body, n≥ 3. We say that K satisfies the Barker-Larman condition if there exists a ball B⊂int K such that for every supporting hyperplane Π of B, the section Π∩ K is a centrally symmetric set. On the other hand, we say that K satisfies the Montejano condition if there exists a ball B⊂int K such that for every supporting hyperplane Π of B, the section Π∩ K is a body of constant width. In this work we prove the following results, where, in both cases, K is an O-symmetric convex body and B⊂int K is a ball such that O∉ B: 1) if K is a strictly convex body and satisfies the Barker-Larman condition with respect to B, then K is an ellipsoid; 2) if K satisfies the Montejano condition relative to B, then K is a ball.
§ INTRODUCTION
Let K⊂ℝ^n be a convex body, i.e., a compact and convex set with non-empty interior. Let H be a plane of dimension r, with 1≤ r≤ n-1. If H∩int K≠∅, then we say that H∩ K is an r-dimensional section of K. The following theorem is due to S. Olovjanishnikov <cit.>: Theorem O. Let K⊂ℝ^n be a convex body, n ≥ 3. If all the (n-1)-dimensional sections of K that divide the volume of K in a given ratio μ≠ 1 have a centre of symmetry, then K is an ellipsoid. The following is a similar theorem, due to C. A. Rogers <cit.>: Theorem R. Let K⊂ℝ^n be a convex body, n≥ 3, and x∈ℝ^n. If all the 2-dimensional sections of K through x have a centre of symmetry, then K has a centre of symmetry. Rogers also conjectured that if x is not the centre of K, then K is an ellipsoid. This conjecture was first proved by P. W. Aitchison, C. M. Petty, and C. A. Rogers <cit.> when x ∈ K, and, in the general case, by D. G. Larman <cit.>. Later a simpler proof was given by L. Montejano and E. Morales-Amaya in <cit.>. This result is known as the False Centre Theorem. In this note we prove, on the one hand, a variant of the False Centre Theorem in which, instead of considering concurrent planes, we consider planes tangent to a given sphere; it can be considered as progress on a problem due to J. A. Barker and D. Larman <cit.>. On the other hand, we prove a variant of a theorem of L. Montejano which characterizes the ball in terms of concurrent sections of constant width <cit.>. In order to present our results we need the following definitions. Let K⊂ℝ^n be a convex body, n≥ 3. We say that K satisfies the Barker-Larman condition if there exists a ball B⊂int K such that for every supporting hyperplane Π of B the section Π∩ K is centrally symmetric. On the other hand, we say that K satisfies the Montejano condition if there exists a ball B⊂int K such that for every supporting hyperplane Π of B the section Π∩ K is a body of constant width. Let K⊂ℝ^n, n≥3, be an O-symmetric, strictly convex body. Suppose that K satisfies the Barker-Larman condition for a ball B which does not contain O. Then K is an ellipsoid. Let K⊂ℝ^n, n≥3, be an O-symmetric convex body and let B⊂ K be a ball. Suppose that O∉ B and, for every supporting hyperplane Π of B, the section Π∩ K is a body of constant width. Then K is a ball.
§ AUXILIARY RESULT
In this section we present a lemma which will be an important tool in the proofs of Theorems <ref> and <ref> in dimension 3.
The corresponding proof will be given in the Appendix. Let M⊂ℝ^2 be an O-symmetric convex body and let B be a circle such that O∉ B. We consider a system of coordinates with O at the origin. Given a fixed point w∈ M, we construct two sequences {a_i}_i=1^∞, {b_i}_i=1^∞ in the following manner. We set a_1=w, and for every point a_i∈ M we take the point b_i∈ M such that the line L(a_i,b_i) is a supporting line of B and, if u_i is the unit vector such that the set of vectors {b_i-a_i,u_i} is a right frame of ℝ^2, then B is located in the half-plane a_i+{x∈ℝ^2: ⟨ x,u_i⟩>0} (see Fig. <ref>). On the other hand, for every b_i∈ M we take the point a_i+1∈ M such that the line L(b_i,a_i+1) is a supporting line of -B and, if v_i is the unit vector such that the set of vectors {a_i+1-b_i,v_i} is a right frame of ℝ^2, then -B is located in the half-plane b_i+{x∈ℝ^2: ⟨ x,v_i⟩>0}. For the sequences {a_i}_i=1^∞, {b_i}_i=1^∞ there exist subsequences {a_i_s}_s=1^∞, {b_i_s}_s=1^∞ with the properties a_i_s→ a and b_i_s→ b as s→∞, where a,b∈ M and L(a,b) is a common supporting line of B and -B passing through O.
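The construction in the lemma is easy to experiment with numerically. In the toy sketch below (ours, not part of the paper), M is the unit circle and B a small disc avoiding O; each step takes the oriented supporting line of B (respectively -B) prescribed by the right-frame condition, i.e., the tangent with the disc on its left, and intersects it again with M. Inspecting the final chord illustrates the accumulation on a common supporting line of B and -B through O that the lemma asserts for subsequences.

import numpy as np

def rot(v, ang):
    c, s = np.cos(ang), np.sin(ang)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

def step(a, c, r):
    # oriented supporting line of the disc (c, r) from a, disc on its left,
    # intersected again with the unit circle M (so |a| = 1 is assumed)
    w = c - a
    d = rot(w / np.linalg.norm(w), -np.arcsin(r / np.linalg.norm(w)))
    return a + (-2.0 * a.dot(d)) * d          # second intersection with |x| = 1

c, r = np.array([0.5, 0.2]), 0.15             # disc B inside M, with O outside B
a = np.array([np.cos(1.0), np.sin(1.0)])      # a_1 = w
for i in range(500):
    b = step(a, c, r)                         # chord a_i -> b_i supports B
    a = step(b, -c, r)                        # chord b_i -> a_{i+1} supports -B

cross = lambda u, v: u[0] * v[1] - u[1] * v[0]
d = (b - a) / np.linalg.norm(b - a)
print("dist(O, chord):", abs(cross(d, -a)),
      " dist(c, chord) - r:", abs(cross(d, c - a)) - r)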
§ PROOF OF THEOREM <REF> IN DIMENSION 3
Before proving Theorem <ref> we prove a series of lemmas. We take a system of coordinates such that O is the origin. Let Π be a plane through the centre O of K. Let Γ be a supporting plane of B parallel to Π. Since K has centre O and, by hypothesis, Γ∩ K has a centre, there exists a vector u such that -(Γ∩ K)=u+(Γ∩ K). (Notice that a convex body M⊂ℝ^3 is centrally symmetric if and only if M and -M are translates of each other.) For every supporting plane Δ of B parallel to u, the centre of the section Δ∩ K is in Π; equivalently, the vector v∈ℝ^3 with the property -(Δ∩ K)=v+(Δ∩ K) is parallel to Π. Let Δ be a supporting plane of B parallel to u. We are going to prove that the centre z of Δ∩ K is in Π. By (<ref>) it follows that Δ∩ (-Γ∩ K)= u+[Δ∩ (Γ∩ K)]. By virtue of (<ref>) we conclude that the chords Δ∩ (-Γ∩ K) and Δ∩ (Γ∩ K) of Δ∩ K are parallel and have the same length; from this and the strict convexity of Δ∩ K it follows that z∈Π. Let Φ_u:ℝ^3→Π be the projection in the direction u, i.e., the projection whose fibres are the lines parallel to u. For u∈𝕊^n-1, we denote by S∂(K,u) (respectively, S∂(K,L)) the shadow boundary of K with respect to the vector u (respectively, the line L), i.e., the set of points x in K such that there exists a supporting hyperplane of K passing through x and parallel to u (parallel to L). If the planes Π, Γ and the vector u∈𝕊^2 are such that O∉Φ_u(B), then the shadow boundary S∂(K,u) is contained in a plane. We denote by ξ(u) the shadow boundary S∂(K,u) of K in the direction u. Let Π_1,Π_2 be the two closed half-spaces determined by Π. We claim that ξ(u)⊂ K_1 and ξ(u)⊂ K_2 are both impossible, where K_i:=K∩Π_i, i=1,2. Since ξ(u) is centrally symmetric with centre at O, if x∈ξ(u) and x∈ K_1, then -x∈ξ(u) and -x∈ K_2 (by virtue of K_2=-K_1). Thus ξ(u)∩Π≠∅. Let z∈ξ(u)∩Π. Let L be a supporting line of K through z parallel to u. For K_u:=Φ_u(K), B_u:=Φ_u(B), B̅_u:=Φ_u(-B) and z=a_1 we construct the sequences {a_i}, {b_i} as in Lemma <ref>. By Lemma <ref>, there exist subsequences {a_i_s}, {b_i_s} such that a_i_s→ a and b_i_s→ b as s→∞, where a,b∈ K_u and L(a,b) is a common tangent line of B_u and B̅_u passing through O. It is clear that Δ_s:=Φ_u^-1(L(a_i_s,b_i_s)) is a supporting plane of B and Σ_s:=Φ_u^-1(L(b_i_s,a_i_s+1)) is a supporting plane of -B. By Lemma <ref>, the centre O_s of the section Δ_s∩ K is in Π for all s, and the centre P_s of the section Σ_s∩ K is in Π for all s. Let π_s:Δ_s→Δ_s and ρ_s:Σ_s→Σ_s be the central symmetries with respect to O_s and P_s, respectively. Then b_i_s=π_s(a_i_s) and a_i_s+1=ρ_s(b_i_s) (see Fig. <ref>). It follows that a_i_s,b_i_s∈ξ(u) for all s. By (<ref>), it follows that a,b∈ξ(u). Let x_1∈ξ(u), x_1≠ z. We are going to prove that x_1∈Π. Suppose, on the contrary, that x_1∉Π. Let δ>0 be the distance from x_1 to Π. For K_u:=Φ_u(K), B_u:=Φ_u(B), B̅_u:=Φ_u(-B) and a_1=Φ_u(x_1) we construct the sequences {a_i}, {b_i} as in Lemma <ref>. By Lemma <ref> there exist subsequences {a_i_s}, {b_i_s} such that a_i_s→ a and b_i_s→ b as s→∞, where a,b∈ K_u and L(a,b) is a common tangent line of B_u and B̅_u through O. We define the planes Δ_s, Σ_s and the maps π_s, ρ_s as before. We define the sequences {y_i_s=π_s(x_i_s)}, {x_i_s+1:=ρ_s(y_i_s)}⊂ K and notice that a_i_s=Φ_u(x_i_s) and b_i_s=Φ_u(y_i_s) (see Fig. <ref>). By Lemma <ref>, the distances from y_i_s to Π and from x_i_s to Π are equal to δ for all s. By virtue of (<ref>) and the compactness of K, there exist x,y∈ξ(u) such that x_i_s→ x and y_i_s→ y as s→∞, with Φ_u(x)=a and Φ_u(y)=b. Furthermore, the distances from x to Π and from y to Π are equal to δ. However, this together with (<ref>) contradicts the strict convexity of K. Let Π be a plane. We say that Π strictly separates B from O if B and O lie in different closed half-spaces determined by Π. We denote by Ω the family of planes which strictly separate B and O. Let Σ be a supporting plane of B in Ω. Let Π be a plane parallel to Σ with O∈Π. Let Σ_1,Σ_2 be the supporting planes of K parallel to Π. We denote by L_Σ the line generated by the points x_i:=Σ_i∩ K, i=1,2. For all Σ∈Ω the line L_Σ is an affine axis of symmetry of K and all the sections of K parallel to Σ are similar and similarly situated. Since Σ∩ K has a centre, the section -Σ∩ K has a centre. Furthermore, there exists v∈ℝ^3 such that -Σ∩ K=v+(Σ∩ K). By Lemma <ref>, for every supporting plane Δ of B parallel to v, the centre of the section Δ∩ K is in Π; equivalently, the vector w∈ℝ^3 with the property -(Δ∩ K)=w+(Δ∩ K) is parallel to Π (see Fig. <ref>). Consequently, by the choice of Σ, O∉Φ_w(B). Thus, by Lemma <ref>, S∂(K,w) is contained in a plane. It follows that every shadow boundary S∂(K,w) of K corresponding to a direction w parallel to Π is given as the section H∩ K, where H is a plane containing L_Σ. Therefore all the sections of K parallel to Σ are centrally symmetric, with centres on L_Σ, similar and similarly situated (a proof of the latter can be found in <cit.>). The following notion will be important in the proof of the next lemma. According to <cit.>, the convex floating body K_δ of a convex body K in ℝ^3 is the intersection of all half-spaces whose hyperplanes cut off a set of volume δ from K; for our purposes it is enough to take δ satisfying the inequalities 0<δ< Vol(K)/2. K is affinely equivalent to a body of revolution. We denote by λ the centre of B. By a continuity argument, we conclude the existence of a supporting plane Γ of B in Ω such that -Γ∩ K=-aλ+(Γ∩ K) for a real number a, 0<a<1. By Lemma <ref>, the line L_Γ is an affine axis of symmetry of K and all the sections of K parallel to Γ are similar, similarly situated and with centres on L_Γ. Let Π be a plane parallel to Γ with O∈Π. In order to complete the proof of Lemma <ref>, we are going to prove that the section Π∩ K is an ellipse. We consider an affine transformation A:ℝ^3→ℝ^3 such that A(Π)=Π and A(Π)⊥ A(L_Γ), i.e., A(L_Γ) is an axis of symmetry of A(K).
We will use the same notation for the geometric objects involved in Lemma <ref> after applying the transformation A to them; that is, we will denote by K the set A(K), by L_Γ the set A(L_Γ), and so on. We claim that the relation S∂(K,L_Γ)=Π∩ K holds. Since K is O-symmetric and strictly convex, it is clear that, among all the sections of K parallel to Γ, the section Π∩ K is the one with the largest area. Thus (<ref>) follows from this and from the fact that all the sections of K parallel to Γ are similar, similarly situated and with centres on L_Γ. By (<ref>) we conclude that the projection K_λ:=Φ_λ(K) is equal to the section Π∩ K. Now we are going to prove that the projection K_λ is an ellipse. Notice that: (*) if a convex figure M⊂ℝ^2 is O-symmetric and has a line of symmetry ℒ passing through O, then the line ℳ perpendicular to ℒ passing through O is also a line of symmetry of M. We claim that Π is a plane of symmetry of K. Since L_Γ is an axis of rotation of K, for every plane Δ containing L_Γ the section Δ∩ K is O-symmetric and has L_Γ as a line of symmetry. Then, by (*), the line perpendicular to L_Γ passing through O is a line of symmetry. Varying Δ, L_Γ⊂Δ, it follows that Π is a plane of symmetry of K. Now we are going to show that for every plane Σ parallel to L_Γ and tangent to B, the section Σ∩ K has a line of symmetry l_Σ parallel to L_Γ. By hypothesis Σ∩ K is centrally symmetric, say with centre c_Σ. On the one hand, by Lemma <ref>, c_Σ∈Π; hence c_Σ∈Σ∩Π. On the other hand, by virtue of Π being a plane of symmetry of K and Σ being parallel to L_Γ, the section Σ∩ K has the line Σ∩Π as a line of symmetry. Thus, by (*), Σ∩ K has a line of symmetry l_Σ parallel to L_Γ. Let Σ be a plane parallel to L_Γ and tangent to B. We denote by Σ^+ and Σ^- the half-spaces defined by Σ, choosing the notation so that O∈Σ^-. Let Π_1 be a plane parallel to Π and such that Π_1∩(Σ∩ K)≠∅. We denote by W_Σ the plane containing the parallel lines l_Σ and L_Γ, and by a,b the endpoints of the chord Π_1∩(Σ∩ K). Since l_Σ is a line of symmetry of Σ∩ K, the chord ab has its midpoint c on l_Σ⊂ W_Σ. It follows that the chord with endpoints a_λ:=Φ_λ(a), b_λ:=Φ_λ(b) has its midpoint c_λ:=Φ_λ(c) on m:=W_Σ∩Π and is tangent to the ellipse B_λ:=Φ_λ(B) (see Fig. <ref>). By virtue of K_λ and Π_1∩ K being homothetic, K_λ and K_λ^1:=Φ_λ(Π_1∩ K) are homothetic with centre of homothety O. Let h:Π→Π be the homothety, with centre of homothety O, such that h(K_λ^1)=K_λ. We conclude that the chord h(a_λ)h(b_λ) of K_λ has its midpoint on m. Varying Π_1, always parallel to Π and such that relation (<ref>) holds, we reach the following conclusions: 1) all the chords of K_λ parallel to the line Π∩Σ and contained in the half-plane Π∩Σ^+ have their midpoints on the line m (see Fig. <ref>); 2) there exists a supporting line n of K_λ at the point τ in m∩ K_λ, with τ∈Π∩Σ^+, parallel to the line Π∩Σ; 3) for every plane Π_1 parallel to Π and close enough to Π, the convex figure K_λ^1 is a floating body of K_λ. By 3) and Theorem 1 of <cit.> it follows that K_λ is an ellipse.
§.§ Proof of Theorem <ref>
We denote by A(3) and O(3) the sets of all affine and orthogonal transformations of ℝ^3, respectively. By virtue of Lemma <ref> we can find A∈ A(3) such that G:=A(L_Γ) is an axis of revolution of K̅:=A(K). By Lemma <ref>, for Σ∈Ω, the line L_Σ is an affine axis of symmetry of K. Thus the line J_Σ:=A(L_Σ) is an affine axis of symmetry of K̅. Let Λ_Σ={T(J_Σ): T∈ O(3) such that T(G)=G}.
For each line Q∈Λ_Σ, let A_Q∈ A(3) be such that Q̅=A_Q(Q) is an axis of symmetry of K̅_Q:=A_Q(K̅) and, consequently, A_Q(G) is an affine axis of revolution of K̅_Q. Let A_Σ={A_Q ∈ A(3):Q∈Λ_Σ}. For each Q∈Λ_Σ, let R_Q̅∈ O(3) be the rotation with axis Q̅ by an angle π. Thus R_Q̅(K̅_Q)=K̅_Q. Let B_Σ={R_Q̅∈ O(3): Q∈Λ_Σ} and C_Σ={A_Q^-1(R_Q̅(A_Q(G))):Q∈Λ_Σ}. We can illustrate these definitions with the following diagram: K A⟶K̅A_Q⟶K̅_QR_Q̅⟶K̅_Q A^-1_Q⟶K̅A^-1⟶ K. Since A_Q^-1(R_Q̅(A_Q(K̅)))=A_Q^-1(R_Q̅(K̅_Q))=A_Q^-1(K̅_Q)= A_Q^-1(A_Q(K̅))=K̅, every element of C_Σ is an affine axis of revolution of K̅. Consequently, for each W∈C_Σ, A^-1(W) is an affine axis of revolution of K. Suppose that there exists Σ_1∈Ω such that, for all Q∈Λ_Σ_1, A_Q=id and Λ_Σ_1 is the equator of K̅ (C_Σ_1 consists of just the element G). Thus, on the one hand, A_Q(G)=id(G)=G, and, on the other hand, since G and Q are orthogonal, R_Q̅(G)=G. Hence A_Q^-1(R_Q̅(A_Q(G)))=A_Q^-1(R_Q̅(id(G)))=A_Q^-1(R_Q̅(G))=A_Q^-1(G)=G. Let Σ_2∈Ω, Σ_1≠Σ_2. Suppose that Σ_2 is such that J_Σ_2:=A(L_Σ_2) is an axis of symmetry of K̅, i.e., A_Σ_2=id. Then K̅ is a sphere, since it has two different axes of revolution, namely, G and R_Σ̅_2(G). Consequently, K is an ellipsoid. In view of the above, we can assume that J_Σ_2:=A(L_Σ_2) is an axis of affine symmetry, i.e., A_Σ_2≠id. We denote by τ the collection of planes containing G. For each Π∈τ, we will call the set Π∩K̅ a meridian of K̅. For each Q∈Λ_Σ_2, we denote by ζ_Q the meridian passing through Q. Since for every Q∈Λ_Σ_2, A_Q(G) and R_Q̅(A_Q(G)) are affine axes of revolution of K̅_Q, it follows that each element in C_Σ_2 is an affine axis of revolution of K̅. Notice that for Q∈Λ_Σ_2, A_Q^-1(R_Q̅(A_Q(G)))⊂ζ_Q. Consequently, K̅ has an infinite number of affine axes of revolution. By Theorem 1 of <cit.>, the only convex body with an infinite number of affine axes of revolution is the ellipsoid. Hence K̅ is an ellipsoid. Therefore K is an ellipsoid. If C_Σ_1 does not consist of just one element, then K has an infinite number of affine axes of revolution (see the previous paragraph) and, again by Theorem 1 of <cit.>, it follows that K is an ellipsoid.

§ PROOF OF THEOREM <REF> IN DIMENSION 3. We take a system of coordinates such that O is the origin. We denote by Ω_O the solid cone {α x: x∈ B, α∈ℝ}, by u the centre of B, and by L the line determined by the centres of B and -B. For every v∈𝕊^2 ∩ u^⊥ the relation v^⊥∩ K = S∂ (K,v) holds. We observe that, since every body of constant width is strictly convex, the body K is strictly convex. We denote by ξ (v) the shadow boundary S∂(K,v) of K in the direction v. Let Π_1,Π_2 be the two closed half-spaces determined by v^⊥. We claim that both ξ(v) ⊂ K_1 and ξ(v) ⊂ K_2 are impossible, where K_i:= K∩Π_i, i=1,2. Since ξ(v) is centrally symmetric with centre at O, if x∈ξ(v) and x∈ K_1, then -x∈ξ(v) and -x∈ K_2 (since K_2=-K_1). Thus ξ(v)∩ v^⊥≠∅. Let z∈ξ(v)∩ v^⊥. Let T be a supporting line of K through z parallel to v. Let ϕ:ℝ^3→ v^⊥ be the orthogonal projection in the direction v. For K_v:=ϕ(K), B_v:=ϕ(B), B̅_v:=ϕ(-B) and z=a_1 we construct the sequences {a_i}_i=1^∞, {b_i}_i=1^∞ as in Lemma <ref>. By Lemma <ref>, a_i → a and b_i→ b, where a,b∈ K_v and L(a,b) is a common tangent line of B_v and B̅_v passing through O. It is clear that Δ_i:=ϕ^-1(L(a_i,b_i)) is a supporting plane of B and Σ_i:=ϕ^-1(L(b_i,a_i+1)) is a supporting plane of -B. Notice that the chord [a_i,b_i] is a bi-normal of Δ_i ∩ K and the chord [b_i,a_i+1] is a bi-normal of Σ_i ∩ K.
Thus, since z=a_1, at each point a_i, b_i there exist supporting lines parallel to T. It follows that a_i,b_i∈ξ(v) for all i. Hence a,b∈ξ(v). Let x_1∈ξ(v), x_1≠z. We are going to prove that x_1∈ v^⊥. On the contrary, let us assume that x_1∉ v^⊥. Let δ>0 be the distance from x_1 to v^⊥. For K_v:=ϕ(K), B_v:=ϕ(B), B̅_v:=ϕ(-B) and a_1=ϕ(x_1) we construct the sequences {a_i}_i=1^∞, {b_i}_i=1^∞ as in Lemma <ref>. By Lemma <ref>, a_i → a and b_i→ b as i→∞, where a,b∈ K_v and L(a,b) is a common tangent line of B_v and B̅_v passing through O. We define the sequences {x_i}, {y_i}⊂ K such that a_i=ϕ(x_i) and b_i=ϕ(y_i) (by the strict convexity of K such sequences are well defined). The planes Δ_i:=ϕ^-1(L(a_i,b_i)) and Σ_i:=ϕ^-1(L(b_i,a_i+1)) are supporting planes of B and -B, respectively. Notice that the chord [x_i,y_i] is a bi-normal of Δ_i ∩ K and the chord [y_i,x_i+1] is a bi-normal of Σ_i ∩ K (there are supporting lines of K parallel to T at the points x_i,y_i,x_i+1; hence the chords [x_i,y_i], [y_i,x_i+1] are diametral chords of Δ_i ∩ K and Σ_i ∩ K, respectively, and, consequently, they are bi-normals). Thus the chords [x_i,y_i] and [y_i,x_i+1] are parallel to v^⊥. Therefore the distances from y_i to v^⊥ and from x_i to v^⊥ are equal to δ for all i. Since a_i → a and b_i→ b as i→∞, there exist x,y∈ξ(v) such that x_i → x and y_i→ y as i→∞, with ϕ(x)=a and ϕ(y)=b. Furthermore, the distances from x to v^⊥ and from y to v^⊥ are equal to δ. However, this together with (<ref>) contradicts the strict convexity of K. Proof of Theorem <ref>. We observe that for every supporting plane Γ of B passing through O, the section Γ∩ K is a disc with centre at O, since it is of constant width and centrally symmetric. Let R be the radius of the disc Γ∩ K and let G be a ball of radius R with centre at O. We denote by K_Ω the set (ℝ^3\Ω_O)∩ K. We are going to prove that K_Ω⊂ G. Let x∈ K_Ω\Γ. Let Π be a supporting plane of B containing the line L(O,x). On the one hand, Γ∩ K and Π∩ K are discs and, on the other hand, the chord Γ∩Π∩ K is a common diameter of them (both discs have centre at O). Thus the radius of Π∩ K is equal to R. Hence x∈ G. By Lemma <ref> the sections of K with planes parallel to u^⊥ are similar, and since u^⊥∩ K is a disc (notice that u^⊥∩ K⊂ G), all the sections of K with planes parallel to u^⊥ are discs. Thus K is a body of revolution with axis the line L. Finally, if we take a point w∈ K\K_Ω, w∉ L, and repeat the previous argument, we can prove that K is a body of revolution with respect to the line L(O,w), with L(O,w)≠L. Consequently, K is a body of revolution with two different axes of symmetry. Hence it is a ball.

§ PROOF OF THEOREM <REF> IN DIMENSION >3. We will assume that Theorem <ref> holds in dimension n-1 and prove that it holds in dimension n, n≥ 4. We take a system of coordinates such that O is the origin. We denote by ϕ_u:ℝ^n→ u^⊥ the orthogonal projection with respect to the direction u ∈𝕊^n-1. Let Π be a hyperplane which separates B from O. For every vector u∈𝕊^n-1 parallel to Π the projection ϕ_u(K) is an (n-1)-ellipsoid. First notice, by the choice of Π, that O∉ϕ_u(B). Since every orthogonal projection of an O-symmetric body is an O-symmetric body in dimension n-1, the O-symmetric body ϕ_u(K) satisfies the Barker-Larman condition with respect to the sphere ϕ_u(B) (every (n-2)-section of ϕ_u(K) given by a supporting (n-2)-plane of ϕ_u(B) can be considered as the projection of an (n-1)-section of K given by a hyperplane tangent to B and parallel to u).
By virtue of the induction hypothesis, ϕ_u(K) is an (n-1)-ellipsoid. Let Π be a hyperplane which separates B from O. Let Π_1,Π_2 be supporting hyperplanes of K at the points x_1,x_2∈ K, respectively. Let Γ be a hyperplane parallel to Π, O∈Γ. We denote by L the line generated by the points x_1, x_2. The relation Γ∩ K=S∂(K,L) holds. WLOG we can assume that L⊥Π. Let z∈Γ∩ K and let Δ be a supporting hyperplane of K at z. Let w be a unit vector parallel to Δ∩Π. By Lemma <ref>, ϕ_w(K) is an ellipsoid. Since ϕ_w(K) is an ellipsoid, ϕ_w(L)=L, and ϕ_w(Γ) is parallel to the supporting (n-2)-planes ϕ_w(Π_1), ϕ_w(Π_2) of ϕ_w(K) and passes through O, it follows that S∂ (ϕ_w(K),L)=ϕ_w(Γ) ∩ϕ_w(K). Thus there exists a unique supporting (n-2)-plane Σ of ϕ_w(K) at ϕ_w(z) parallel to L. Consequently, Δ=ϕ_w^-1(Σ) is parallel to L. Therefore z∈ S∂ (K,L). Hence Γ∩ K ⊂ S∂ (K,L). On the other hand, let z∈ S∂ (K,L). Let Δ be a supporting hyperplane of K at z parallel to L. Let w be a unit vector parallel to Δ∩Π. By Lemma <ref>, ϕ_w(K) is an ellipsoid. Since ϕ_w(Δ) is parallel to L and the relation (<ref>) holds, it follows that ϕ_w(z)∈ϕ_w(Γ)∩ϕ_w(K). Thus z∈Γ∩ K; that is, S∂ (K,L)⊂Γ∩ K. There exists an affine transformation A:ℝ^n→ℝ^n such that A(Γ)=Γ, A(L)⊥Γ and the ellipsoids {ϕ_u(A(K)): u∈𝕊^n-1, u parallel to Γ} are congruent ellipsoids of revolution with axis A(L); that is, A(K) is an ellipsoid of revolution with axis A(L). For all u∈𝕊^n-1, u parallel to Γ, the following chain of equalities holds: ϕ_u(Γ∩ K)=ϕ_u(S∂(K,L))=S∂(ϕ_u(K),L)=ϕ_u(Γ)∩ϕ_u(K). The first equality is by (<ref>) of Lemma <ref>, the second follows immediately from the definition of shadow boundary, and the last is by (<ref>). It follows that the (n-1)-section Γ∩ K is an ellipsoid because all its orthogonal projections are (n-2)-ellipsoids (notice that, by virtue of Lemma <ref>, ϕ_u(Γ)∩ϕ_u(K) is an (n-2)-ellipsoid). Now we choose an affine transformation A:ℝ^n→ℝ^n such that A(Γ)=Γ, A(L)⊥Γ, and A(Γ∩ K) is a sphere of radius R. Thus, for all u∈𝕊^n-1, u parallel to Γ, ϕ_u(A(K)) is an ellipsoid of revolution with axis A(L) such that A(L)∩ϕ_u(A(K))={A(x_1),A(x_2)} and with the maximum radius of the spheres perpendicular to A(L) equal to R.

§ PROOF OF THEOREM <REF> IN DIMENSION >3. We will assume that Theorem <ref> holds in dimension n-1, and we will prove that it holds in dimension n, n≥ 4. We take a system of coordinates such that O is the origin. We denote by ϕ_u:ℝ^n→ u^⊥ the orthogonal projection with respect to the direction u ∈𝕊^n-1. Let Π be a hyperplane which separates B from O. For every vector u∈𝕊^n-1 parallel to Π the projection ϕ_u(K) is an (n-1)-ball. By the choice of Π it follows that O∉ϕ_u(B). Since every orthogonal projection of a body of constant width is a body of constant width in dimension n-1, the O-symmetric body ϕ_u(K) satisfies Montejano's condition with respect to the sphere ϕ_u(B) (every (n-2)-section of ϕ_u(K) given by a supporting (n-2)-plane of ϕ_u(B) can be considered as the projection of an (n-1)-section of K given by a hyperplane tangent to B and parallel to u). By virtue of the induction hypothesis, ϕ_u(K) is an (n-1)-ball. Let Π_1,Π_2 be supporting hyperplanes of K, parallel to Π, at the points x_1,x_2∈ K, respectively. We denote by L the line generated by the points x_1, x_2. By virtue of Lemma <ref>, the chord x_1x_2 of K is a bi-normal chord of K. It follows that all the (n-1)-balls ϕ_u(K), u∈𝕊^n-1 parallel to Π, have the same radius. Let Φ be the n-ball with diameter x_1x_2. We claim that K=Φ.
On the contrary, suppose that there is a point x∈Φ such that x∉ K. Let Γ be a hyperplane which separates x and K. Let u∈𝕊^n-1 be parallel to Π∩Γ. Then, on the one hand, Γ∩ u^⊥ separates ϕ_u(K) and ϕ_u(x); on the other hand, since ϕ_u(x)∈ϕ_u(Φ) and ϕ_u(Φ)=ϕ_u(K) (both are balls with diameter x_1x_2), it follows that ϕ_u(x) ∈ϕ_u(K). This contradiction shows that Φ⊂ K. The opposite inclusion can be seen in an analogous way. Thus K=Φ. Acknowledgment. The author is very thankful to Jesús Jeronimo Castro for very useful discussions.

§ APPENDIX. §.§ Proof of Lemma <ref> We will prove the lemma by contradiction. By virtue of the compactness of M there exist a point a∈ M and a subsequence {a_i_s} of {a_i} such that a_i_s→ a, s→∞. Let us assume that there exists a point b∈ M for which the following two properties hold: 1) the subsequence {b_i_s} of {b_i} is such that b_i_s→ b, s→∞, and 2) L(a,b) is not a common supporting line of B and -B passing through O. In particular, suppose that L(a,b) is not a supporting line of B. Let W_1,W_2 be two lines parallel to L(a,b) whose distances to L(a,b) are equal to ϵ, 0<ϵ<1. We denote by D the band determined by W_1,W_2. Since a_i_s→ a, s→∞, there exists N_1∈ℕ such that |a_i_s - a|<ϵ for s>N_1. On the other hand, since b_i_s→ b, s→∞, there exists N_2∈ℕ such that |b_i_s - b|<ϵ for s>N_2. Let N:=max{N_1,N_2}. We take s_0>N. Then |a_i_s_0-a|<ϵ and |b_i_s_0-b|<ϵ, i.e., the chord [a_i_s_0,b_i_s_0] is contained in D. On the other hand, given x∈ M we denote by l(x),m(x) the supporting lines of B passing through x and by y(x), z(x) the intersections of l(x) and m(x) with M, respectively, y(x)≠x≠ z(x). Since L(a,b) is not a supporting line of B, we can see that if ϵ>0 is small enough, then for each x∈ M such that |x-b|<ϵ the chords [x,y(x)], [x,z(x)] do not belong to D. Thus |a_i_s_0-a|>ϵ, which contradicts (<ref>). If the subsequence {b_i_s} does not converge, a subsequence {b_i_s_r} of it will converge to a point b∈ M. Now we assume again that L(a,b) is not a common supporting line of B and -B passing through O. Furthermore, we can assume that L(a,b) is not a supporting line of B. Finally, we repeat the previous argument for the sequences {a_i_s_r}, {b_i_s_r} and we will get a contradiction again. The other cases can be considered in an analogous way.

§ REFERENCES
[1] J. Alonso and P. Sn. Martin, Some characterizations of ellipsoids by sections. Discrete Comput. Geom. 31 (2004), 643-654.
[2] P. W. Aitchison, C. M. Petty, and C. A. Rogers, A convex body with a false centre is an ellipsoid. Mathematika 18 (1971), 50-59.
[3] J. A. Barker and D. G. Larman, Determination of convex bodies by certain sets of sectional volumes. Discrete Math. 241 (2001), 79-96.
[4] D. G. Larman, A note on the false centre problem. Mathematika 21 (1974), 216-217.
[5] L. Montejano, A characterization of the Euclidean ball in terms of concurrent sections of constant width. Geom. Dedicata 37 (1991), 307-316.
[6] L. Montejano and E. Morales-Amaya, Variations of classic characterizations of ellipsoids and a short proof of the false centre theorem. Mathematika 54 (2007), no. 1-2, 35-40.
[7] S. P. Olovjanishnikov, On a characterization of the ellipsoid. Ucen. Zap. Leningrad. State Univ. Ser. Mat. 83 (1941), 114-128.
[8] C. A. Rogers, Sections and projections of convex bodies. Port. Math. 24 (1965), 99-103.
[9] C. Schütt and E. Werner, Homothetic floating body. Geom. Dedicata 49 (1994), 335-348.
http://arxiv.org/abs/2307.04749v1
20230710175457
Divide, Evaluate, and Refine: Evaluating and Improving Text-to-Image Alignment with Iterative VQA Feedback
[ "Jaskirat Singh", "Liang Zheng" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG", "cs.MM", "stat.ML" ]
[Figure: We propose a training-free decompositional framework which helps both better evaluate (Sec. <ref>) and gradually improve (Sec. <ref>) text-to-image alignment using iterative VQA feedback.]

The field of text-conditioned image generation has made unparalleled progress with the recent advent of latent diffusion models. While remarkable, as the complexity of the given text input increases, state-of-the-art diffusion models may still fail to generate images which accurately convey the semantics of the given prompt. Furthermore, it has been observed that such misalignments are often left undetected by pretrained multi-modal models such as CLIP. To address these problems, in this paper we explore a simple yet effective decompositional approach towards both evaluation and improvement of text-to-image alignment. In particular, we first introduce a Decompositional-Alignment-Score which, given a complex prompt, decomposes it into a set of disjoint assertions. The alignment of each assertion with generated images is then measured using a VQA model. Finally, alignment scores for different assertions are combined a posteriori to give the final text-to-image alignment score. Experimental analysis reveals that the proposed alignment metric shows significantly higher correlation with human ratings as opposed to traditional CLIP, BLIP scores. Furthermore, we also find that the assertion-level alignment scores provide useful feedback which can then be used in a simple iterative procedure to gradually increase the expressivity of different assertions in the final image outputs. Human user studies indicate that the proposed approach surpasses previous state-of-the-art by 8.7% in overall text-to-image alignment accuracy.

§ INTRODUCTION The field of text-to-image generation has made significant advancements with the recent advent of large-scale language-image (LLI) models <cit.>. In particular, text-conditioned latent diffusion models have shown unparalleled success in generating creative imagery corresponding to a diverse range of free-form textual descriptions. However, while remarkable, it has been observed <cit.> that as the complexity of the input text increases, the generated images do not always accurately align with the semantic meaning of the textual prompt. To facilitate the reliable use of current text-to-image generation models for practical applications, it is essential to answer two key questions: 1) Can we detect such fine-grain misalignments between the input text and the generated output in a robust manner? and 2) Once detected, can we improve the text-to-image alignment for the failure cases? While several metrics for evaluating text-to-image alignment (e.g., CLIP <cit.>, BLIP <cit.>, BLIP2 <cit.>) exist, it has been observed <cit.> that a high score with these metrics can be achieved even if the image does not fully correspond with the input prompt. For instance, in Fig. <ref>, an output image (containing only pink trees) shows high CLIP/BLIP scores with the text “pink trees and yellow car” even though the yellow car is not present.
Evaluating text-to-image matching using the image-text-matching (ITM) head of BLIP models has also been recently explored <cit.>. However, the generated scores show a similar tendency to favor the main subject of the input prompt. Furthermore, even if such misalignments are detected, it is not clear how such information can be used for improving the quality of the generated image outputs in a reliable manner. To address these problems, in this paper we explore a simple yet effective decompositional approach towards both evaluation and improvement of fine-grain text-to-image alignment. In particular, we propose a Decompositional-Alignment-Score (DA-Score) which, given a complex text prompt, first decomposes it into a set of disjoint assertions about the content of the prompt. The alignment of each of these assertions with the generated image is then measured using a VQA model <cit.>. Our experiments reveal that the proposed evaluation score shows significantly higher correlation with human ratings over prior evaluation metrics (e.g., CLIP, BLIP, BLIP2) (Sec. <ref>). Furthermore, we also find that the assertion-level alignment scores provide useful and explainable feedback for determining which parts of the input prompt are not being accurately described in the output image. We show that this feedback can then be used to gradually improve the alignment of the generated images with the input text prompt. To this end, we propose a simple iterative refinement procedure (Fig. <ref>), wherein at each iteration the expressivity of the least-aligned assertion is improved by increasing the weightage/cross-attention strength (refer Sec. <ref>) of the corresponding prompt tokens during the reverse diffusion process. Through both qualitative and quantitative analysis, we find that the proposed iterative refinement process allows for generation of better-aligned image outputs over prior works <cit.> while on average showing comparable inference times (Sec. <ref>).

§ RELATED WORK Text to Image Generation Models. Text-conditional image synthesis is a topic of keen interest in the vision community. For instance, <cit.> use GANs to perform text-guided image generation. Similarly, <cit.> explore the use of autoregressive models for zero-shot text-to-image generation. Recently, diffusion-based models <cit.> have emerged as a powerful class of methods for performing text-conditional image synthesis over a diverse range of target domains. While remarkable, generating images which align perfectly with the input text prompt remains a challenging problem <cit.>. To enforce heavier reliance of generated outputs on the provided text, classifier-free guidance methods <cit.> have been proposed. Similarly, the use of an additional guidance input to improve controllability of text-to-image generation has recently been extensively explored <cit.>. However, even with their application, the generated images are often observed to exhibit fine-grain misalignments with the input text prompt, such as missing secondary objects <cit.>. Evaluating Image-Text Alignment. Various protocols for evaluating text-image alignment in a reference-free manner have been proposed <cit.>. Most prior works <cit.> typically use the cosine similarity between the text and image embeddings from large-scale multi-modal models <cit.> such as CLIP <cit.>, BLIP <cit.>, BLIP-2 <cit.> for evaluating the alignment scores.
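As a point of reference, the cosine-similarity protocol used by these baselines amounts to only a few lines of code. The sketch below is an illustration using the HuggingFace transformers CLIP interface (the specific checkpoint name is an assumption; any CLIP variant is used the same way) and is not taken from any of the cited works:

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, prompt: str) -> float:
    # Cosine similarity between the CLIP image and text embeddings.
    inputs = processor(text=[prompt], images=[image],
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])
    return torch.nn.functional.cosine_similarity(img_emb, txt_emb).item()

As discussed above, such a single scalar score tends to saturate on the main subject of the prompt, which motivates the decompositional scoring introduced in this paper.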
Recently, <cit.> also show the application of BLIP/BLIP-2 models for image-text matching using image retrieval. However, as shown in Fig. <ref>, these metrics can give very high scores even if the generated images do not fully align with the input text prompt. Furthermore, unlike our approach, image-text alignment is often represented through a single scalar value, which does not provide an explainable measure that can be used to identify/improve weaknesses of the image generation process. Improving Image-Text Alignment. Recently, several works <cit.> have been proposed to explore the problem of improving image-text alignment in a training-free manner. Liu <cit.> propose to modify the reverse diffusion process by composing denoising vectors for different image components. However, it has been observed <cit.> that this approach struggles to generate photorealistic compositions of diverse objects. Feng <cit.> use scene graphs to split the input sentence into several noun phrases and then assign a designed attention map to the output of the cross-attention operation. In another recent work, Chefer <cit.> extend the idea of cross-attention map modification to minimize missing objects, but instead do so by modifying the noise latents during the reverse diffusion process. While effective at reducing missing objects, we find that the performance/quality of output images can suffer as the number of subjects in the input prompt increases (refer Sec. <ref>). Besides training-free methods, contemporary work <cit.> has also explored the possibility of improving image-text alignment using human feedback to finetune existing latent diffusion models. However, this often requires the collection of large-scale human evaluation scores and finetuning the diffusion model across a range of diverse data modalities, which can be expensive. In contrast, we explore a training-free approach for improvement of fine-grain text-to-image alignment.

§ OUR METHOD Given the image generation output ℐ corresponding to a text prompt 𝒫, we wish to develop a mechanism for evaluation and improvement of fine-grain text-to-image alignment. The core idea of our approach is to take a decompositional strategy for both these tasks. To this end, we first generate a set of disjoint assertions regarding the content of the input prompt. The alignment of the output image ℐ with each of these assertions is then calculated using a VQA model. Finally, we use the assertion-based alignment scores as feedback to improve the expressiveness of the assertion with the least alignment score. This process can then be performed in an iterative manner to gradually improve the quality of generated outputs until a desired value for the overall alignment score is attained. In the next sections, we discuss each of these steps in detail. In Sec. <ref> we first discuss the process for evaluating decompositional-alignment scores. We then discuss the iterative refinement process for improving text-to-image alignment in Sec. <ref>. Fig. <ref> provides an overview of the overall approach.

§.§ Evaluating Text-to-Image Alignment Prompt Decomposition Model. Given an input prompt 𝒫, we first decompose its textual information into a set of disjoint assertions (and corresponding questions) which exhaustively cover the contents of the input prompt. Instead of relying on human inputs as in <cit.>[Prior works on improving image-text alignment often rely on human-user inputs for expressing the contents of the input prompt as simpler constituents.
For instance, Feng <cit.> require the user to describe the prompt as a conjunction/disjunction of simpler statements. Similarly, Chefer <cit.> require the user to provide a set of entities/subjects in the prompt, over which their optimization should be performed.], we leverage the in-context learning capability <cit.> of large language models <cit.> for predicting such decompositions in an autonomous manner. In particular, given an input prompt 𝒫 and a large language model ℳ, the prompt decomposition is performed using in-context learning as, 𝐱 = {x_0,x_1, … x_n} = ℳ(𝐱|𝒫, D_exemplar,𝒯), where 𝐱 is the model output, n is the number of decompositions, D_exemplar is the in-context learning dataset consisting of 4-5 human-generated examples for prompt decomposition, and 𝒯 is the task description. Please refer to the supp. material for further details on exemplar-dataset and task-description design. The model output 𝐱 is predicted to contain tuples x_i = {a_i, p_i}, where each tuple is formatted to contain an assertion a_i and the sub-part p_i of the original prompt 𝒫 corresponding to the generated assertion. For instance, given 𝒫: `a cat and a dog' the prompt decomposition can be written as, ℳ(𝐱|𝒫: `a cat and a dog', D_exemplar,𝒯) = [ {`there is a cat',`a cat'} , {`there is a dog',`a dog'}]. Computing Assertion-based Alignment Scores. We next compute the alignment of the generated image ℐ with each of the disjoint assertions using a Visual-Question-Answering (VQA) model <cit.>. In particular, given an image ℐ, assertions a_i, i=1,…,n, their rephrasings in question format a^q_i, and a VQA model 𝒱, the assertion-level alignment scores u_i(ℐ, a_i) are computed as, u_i(ℐ, a_i) = exp(α_i/τ)/(exp(α_i/τ) + exp(β_i/τ)), where α_i = 𝒱 (`yes'|ℐ, a^q_i), β_i = 𝒱 (`no'|ℐ, a^q_i), and α_i, β_i refer to the logit-scores of the VQA model 𝒱 for the input tuple (image ℐ, question a^q_i) corresponding to the output tokens `yes' and `no', respectively. The hyperparameter τ controls the temperature of the softmax operation and thereby the confidence of the alignment predictions. Combining Alignment Scores. Finally, the assertion-level alignment scores u_i(ℐ, a_i) are combined to give the overall text-to-image alignment score Ω(ℐ,𝒫) between image ℐ and prompt 𝒫 as, Ω(ℐ,𝒫) = ∑_i λ_i(𝒫,a_i) u_i(ℐ, a_i)/∑_i λ_i(𝒫,a_i), where the weights λ_i(𝒫,a_i) refer to the importance of assertion a_i in capturing the overall content of the input prompt 𝒫, and allow the user to control the relative importance of different assertions in generating the final image output[For simplicity, we mainly use λ_i=1 ∀ i in the main paper. Further analysis on variable λ_i to account for variable information content or visual verifiability of an assertion is provided in the supp. material.]. Please refer to Fig. <ref> for the overall implementation.

§.§ Improving Text to Image Alignment In addition to predicting the overall text-to-image alignment score, we find that the assertion-level alignment scores u_i(ℐ, a_i) also provide a useful and explainable way for determining which parts of the input prompt 𝒫 are not being accurately described in the output image ℐ. This feedback can then be used in an iterative manner to improve the expressivity of the assertion with the least alignment score u_i(ℐ, a_i), until a desired threshold for the overall text-image alignment score Ω(ℐ,𝒫) is obtained.
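To make this procedure concrete, the following minimal Python sketch (an illustration, not released code from this paper) implements the assertion-level scoring of the softmax equation above, the λ-weighted DA-Score combination, and, anticipating the iterative refinement detailed next, the feedback loop that boosts the weight of the least-aligned assertion. The callables vqa_yes_no_logits (a stand-in for a VQA model such as BLIP that exposes answer logits) and generate (a stand-in for the parameterized diffusion model 𝒟(𝒫,{w_i})), as well as the values of Δ=0.3 and the stopping threshold, are placeholder assumptions:

import numpy as np

def assertion_scores(image, questions, vqa_yes_no_logits, tau=1.0):
    # u_i = exp(alpha_i/tau) / (exp(alpha_i/tau) + exp(beta_i/tau)), where
    # (alpha_i, beta_i) are the VQA 'yes'/'no' logits for question a_i^q.
    u = []
    for q in questions:
        alpha, beta = vqa_yes_no_logits(image, q)
        u.append(np.exp(alpha / tau) / (np.exp(alpha / tau) + np.exp(beta / tau)))
    return np.array(u)

def da_score(u, lam=None):
    # Overall DA-Score: lambda-weighted average (lambda_i = 1 by default).
    u = np.asarray(u, dtype=float)
    lam = np.ones_like(u) if lam is None else np.asarray(lam, dtype=float)
    return float((lam * u).sum() / lam.sum())

def eval_and_refine(prompt, sub_prompts, questions, generate,
                    vqa_yes_no_logits, delta=0.3, threshold=0.9, max_iters=5):
    # Iteratively boost the weight of the least-aligned assertion until the
    # DA-Score reaches the threshold; returns the best image found.
    w = [1.0] * len(sub_prompts)
    best_img, best = None, -1.0
    for _ in range(max_iters):
        img = generate(prompt, sub_prompts, w)
        u = assertion_scores(img, questions, vqa_yes_no_logits)
        score = da_score(u)
        if score > best:
            best_img, best = img, score
        if score >= threshold:
            break
        w[int(np.argmin(u))] += delta  # least-aligned assertion gets more weight
    return best_img, best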
Parameterized Diffusion Model. We first modify the image generation process of standard diffusion models in order to control the expressiveness of different assertions a_i in a parametric manner. In particular, we modify the reverse diffusion process to also receive input weights w_i, where each w_i controls the relative importance of assertion a_i during the image generation process. In this paper, we mainly consider the following two methods for obtaining such parametric control. Prompt Weighting. Instead of computing the CLIP <cit.> features from the original prompt 𝒫, we use prompt-weighting <cit.> to modify the input CLIP embeddings to the diffusion model as, CLIP(𝒫) = 𝒲(𝒫, {CLIP(p_i),w_i}_i=1^n), where 𝒲 refers to the prompt-weighting function from <cit.>, p_i refers to the sub-prompt (Sec. <ref>) corresponding to assertion a_i, and the weights w_i control the relative weight of different sub-prompts p_i in computing the overall CLIP embedding for prompt 𝒫. Cross-Attention Control. Similar to <cit.>, we also explore the idea of modifying the noise latents z_t during the reverse diffusion process, to increase the cross-attention strength of the main noun-subject for each sub-assertion a_i. However, instead of only applying the gradient update for the least dominant subject <cit.>, we modify the loss for the latent update in parametric form as, z_t = z_t - α∇_z_tℒ(z_t, {w_i}_i=1^n), ℒ(z_t, {w_i}_i=1^n) = ∑_i w_i (1- max G(𝒜^t_i)), where α is the step-size, 𝒜^t_i refers to the attention map corresponding to the main noun-subject in assertion a_i, G is a smoothing function, and the weights w_i control the extent to which the expression of different noun-subjects in the prompt (for each assertion) will be increased in the next iteration. Iterative Refinement. Given the above parametric formulation for controlling the expression of different assertions, we next propose a simple yet effective iterative refinement approach towards improving text-to-image alignment. In particular, at any iteration k ∈ [1,5] during the refinement process, we first compute both the overall text-image similarity score Ω(ℐ_k,𝒫) and the assertion-level alignment scores u_i(ℐ_k, a_i). The image generation output ℐ_k+1 for the next iteration is then computed as, ℐ_k+1 = 𝒟(𝒫,{w^k+1_i}_i=1^n), where w_i^k+1 = w_i^k + Δ if i = argmin_j u_j(ℐ_k, a_j), and w_i^k+1 = w_i^k otherwise, 𝒟 refers to the parametrized diffusion model, and Δ is a hyper-parameter. This iterative process is then performed until a desirable threshold for the overall alignment score Ω(ℐ_k,𝒫) is reached. The image generation output ℐ^⋆ at the end of the refinement process is then computed as, ℐ^⋆ = argmax_ℐ_kΩ(ℐ_k,𝒫).

§ EXPERIMENTS Dataset. Since there are no openly available datasets addressing semantic challenges in text-based image generation with human annotations, we introduce a new benchmark dataset Decomposable-Captions-4k for method comparison. The dataset consists of 24,960 human annotations overall, on images generated using all methods <cit.> (including ours) across a diverse set of 4160 input prompts. Each image is given a rating between 1 and 5 (where 1 represents that the `image is irrelevant to the prompt' and 5 represents that the `image is an accurate match for the prompt'). Furthermore, unlike prior works <cit.> which predominantly analyse the performance on relatively simple prompts with two subjects (object a and object b), we construct a systematically diverse pool of input prompts for better understanding text-to-image alignment across varying complexities in the text prompt. In particular, the prompts for the dataset are designed to encapsulate two axes of complexity: number of subjects and realism.
The number of subjects refers to the number of main objects described in the input prompt and varies from 2 (e.g., a cat with a ball) to 5 (e.g., a woman walking her dog on a leash by the beach during sunset). Similarly, the realism of a prompt is defined as the degree to which different concepts naturally co-occur together and varies as easy, medium, hard and very hard. easy typically refers to prompts where concepts naturally co-occur together (e.g., a dog in a park) while very hard refers to prompts where the concept combination is very rare (e.g., a dog playing a piano). Further details regarding the dataset are provided in the supplementary material.

§.§ Evaluating Text-to-Image Alignment Baselines. We compare the performance of the Decompositional-Alignment Score with prior works on evaluating text-to-image alignment in a reference-free manner. In particular, we show comparisons with CLIP <cit.>, BLIP <cit.> and BLIP2 <cit.> scores, where the text-to-image alignment score is computed using the cosine similarity between the corresponding image and text embeddings. We also include comparisons with BLIP-ITM and BLIP2-ITM, which directly predict a binary image-text matching score (between 0 and 1) for the input prompt and output image. Finally, we report results on the recently proposed text-to-text (T2T) similarity metric <cit.>, which computes image-text similarity as the average cosine similarity between the input prompt and captions generated (using BLIP) from the input image. Quantitative Results. Fig. <ref> shows the correlation between human annotations and predicted text-to-image alignment scores across different metrics on the Decomposable-Captions dataset. We observe that the DA-Score shows a significantly higher correlation with human evaluation ratings as opposed to prior works across varying numbers of subjects N ∈ [2,5] in the input prompt. We also note that while the recently proposed T2T similarity score <cit.> shows comparable correlation with ours for N=2, its performance significantly drops as the number of subjects in the input prompt increases.

§.§ Improving Text-to-Image Alignment In this section, we compare the performance of our iterative refinement approach with prior works on improving text-to-image alignment in a training-free manner. In particular, we show comparisons with 1) Stable Diffusion <cit.>, 2) Composable Diffusion <cit.>, 3) StructureDiffusion <cit.> and 4) Attend-and-Excite <cit.>. All images are generated using the same seed across all methods. Qualitative Results. Results are shown in Fig. <ref>. We observe that Composable Diffusion <cit.> struggles to generate photorealistic combinations of objects, especially as the number of subjects in the prompt increases. StructureDiffusion <cit.> helps in addressing some missing objects (e.g., telescope in example-1), but the generated images tend to be semantically similar to those produced by the original Stable Diffusion model, and thus it does not significantly improve text-to-image alignment. Attend-and-Excite <cit.> shows much better performance in addressing missing objects (e.g., telescope in example-1 and umbrella in example-4). However, as summarized in Fig. <ref>, we observe that it suffers from 3 main challenges: 1) Object Relationship (Fig. <ref>a): we observe that despite containing the desired objects, generated images may sometimes fail to convey the relationship between them. For example, in row-1 of Fig. <ref>, while the output images show both a lion and a guitar, the lion does not seem to be playing the guitar.
In contrast, Eval-and-Refine is able to describe both the presence of and the relation between objects in a better manner. 2) Overlapping Entities (Fig. <ref>b): For images with overlapping entities (e.g., person and spacesuit), we observe that Attend-and-Excite <cit.> typically spends most of its gradient updates balancing between the overlapping entities, as both entities (person and spacesuit) occupy the same cross-attention region. This can lead to outputs where a) other important aspects (e.g., lake in Col-3) or b) one of the two entities (e.g., spacesuit) are ignored. 3) Prompt Complexity (Fig. <ref>c): Finally, we note that since Attend-and-Excite <cit.> is limited to applying the cross-attention update w.r.t. the least dominant subject, as the complexity of the input prompt 𝒫 increases, it may miss some objects (e.g., umbrella, beach, sunny day) during the generation process. In contrast, the iterative nature of our approach allows it to keep refining the output image ℐ until a desirable threshold for the overall image-text alignment score Ω(ℐ,𝒫) is reached. Quantitative Results. In addition to qualitative experiments, we also evaluate the efficacy of our approach using human evaluations. In this regard, we report three metrics: 1) normalized human score: the average human rating (normalized between 0-1) for images generated on the Decomposable-Captions-4k dataset; 2) accuracy: the percentage of generated images which are considered an accurate match (rating: 5) for the input text prompt by a human subject; 3) pairwise-preference: where human subjects are shown pairs of images generated using our method and a prior work, and are asked to classify each image-pair as a win, loss or tie (win meaning our method is preferred). For our approach we consider two variants: 1) Ours (PW), which performs iterative refinement using only prompt weighting, and 2) Ours (PW + CA), where iterative refinement is performed using both prompt weighting and cross-attention updates (Sec. <ref>). Pairwise preference scores are reported using Ours (PW + CA) when comparing with prior works. Results are shown in Fig. <ref> and Tab. <ref>. While the text-to-image alignment accuracy for all methods decreases with increased difficulty of the input text prompts (Fig. <ref>), we find that our approach with only prompt weighting consistently performs on par with or better than Attend-and-Excite <cit.>. The further introduction of cross-attention updates (Sec. <ref>) allows our approach to exhibit even better performance, outperforming Attend-and-Excite <cit.> by 8.67% in terms of overall alignment accuracy of the generated images. These improvements are also reflected in the pairwise comparisons, where human subjects tend to prefer our approach over prior works <cit.>. Inference time comparison. Tab. <ref> shows a comparison of the average inference time (per image) of our approach with prior works <cit.>. We observe that despite the use of an iterative process, the overall inference time of our approach is comparable with prior works. This occurs because prior works themselves often include additional steps. For instance, Composable-Diffusion <cit.> requires the computation of separate denoising latents for each statement in the conjunction/disjunction operation, thereby increasing the overall inference time almost linearly with the number of subjects. Similarly, Attend-and-Excite <cit.> includes additional gradient descent steps for modifying cross-attention maps.
Moreover, such an increase is accumulated even if the baseline Stable-Diffusion <cit.> model already generates accurate images. In contrast, the proposed iterative refinement approach is able to adaptively adjust the number of iterations required for the generation process by monitoring the proposed DA-Score to evaluate whether the generation outputs are already good enough.

§ CONCLUSION In this paper, we explore a simple yet effective decompositional approach for both evaluation and improvement of text-to-image alignment with latent diffusion models. To this end, we first propose a Decompositional-Alignment Score which, given a complex prompt, breaks it down into a set of disjoint assertions. The alignment of each of these assertions with the generated image is then measured using a VQA model. The assertion-based alignment scores are finally combined to give an overall text-to-image alignment score. Experimental results show that the proposed metric shows significantly higher correlation with human subject ratings over traditional CLIP- and BLIP-based image-text matching scores. Finally, we propose a simple iterative refinement approach which uses the decompositional-alignment scores as feedback to gradually improve the quality of the generated images. Despite its simplicity, we find that the proposed approach is able to surpass previous state-of-the-art in text-to-image alignment accuracy while on average using only marginally higher inference times. We hope that our research can open new avenues for robust deployment of text-to-image models for practical applications.
http://arxiv.org/abs/2307.04966v1
20230711015827
Wasserstein Distributionally Robust Regret-Optimal Control under Partial Observability
[ "Joudi Hajar", "Taylan Kargin", "Babak Hassibi" ]
math.OC
[ "math.OC" ]
Wasserstein Distributionally Robust Regret-Optimal Control under Partial Observability

Joudi Hajar, Taylan Kargin, Babak Hassibi (The authors are affiliated with the Department of Electrical Engineering at Caltech. Emails: {jhajar,tkargin,hassibi}@caltech.edu.)

August 12, 2023

This paper presents a framework for Wasserstein distributionally robust (DR) regret-optimal (RO) control in the context of partially observable systems. DR-RO control considers the regret in LQR cost between a causal and a non-causal controller, and aims to minimize the worst-case regret over all disturbances whose probability distribution is within a certain Wasserstein-2 ball of a nominal distribution. Our work builds upon the full-information DR-RO problem that was introduced and solved in Yan et al., 2023 <cit.>, and extends it to handle partial observability and measurement feedback (MF). We solve the finite-horizon partially observable DR-RO problem and show that it reduces to a tractable semi-definite program whose size is proportional to the time horizon. Through simulations, the effectiveness and performance of the framework are demonstrated, showcasing its practical relevance to real-world control systems. The proposed approach enables robust control decisions, enhances system performance in uncertain and partially observable environments, and provides resilience against measurement noise and model discrepancies.

Keywords: regret-optimal control, Wasserstein distance, partial observability, distributionally robust control.

§ INTRODUCTION Regret-optimal control <cit.> is a new approach in control theory that focuses on minimizing the regret associated with control actions in uncertain systems. The regret measures the cumulative difference between the performance achieved by a causal control policy and the performance achieved by an optimal policy that could have been chosen in hindsight. In regret-optimal control, the worst-case regret over all ℓ_2-norm-bounded disturbance sequences is minimized. Distributionally robust control <cit.>, on the other hand, addresses uncertainty in system dynamics and disturbances by considering a set of plausible probability distributions rather than relying on a single distribution as in LQG control, or on a worst-case disturbance, as in H_∞ or RO control. This approach seeks to find control policies that perform well across all possible distributions within the uncertainty set, thereby providing robustness against model uncertainties and ensuring system performance in various scenarios. The size of the uncertainty set allows one to control the amount of desired robustness so that, unlike H_∞ controllers, say, the controller is not overly conservative. The uncertainty set is most often taken to be the set of disturbances whose distributions are within a given Wasserstein-2 distance of the nominal disturbance distribution. The reason is that, for quadratic costs, the supremum of the expected cost over a Wasserstein ball reduces to a tractable semi-definite program (SDP). The current paper considers and extends the framework introduced in <cit.>, which applied distributionally robust (DR) control to the regret-optimal (RO) setting. In the full-information finite-horizon setting, the authors of <cit.> reduce the DR-RO problem to a tractable SDP.
In this paper, we extend the results of <cit.> to partially observable systems where, unlike the full-information setting, the controller does not have access to the system state. Instead, it only has access to partial information obtained through noisy measurements. This is often called the measurement-feedback (MF) problem. Of course, the solution to the measurement-feedback problem in LQG and H_∞ control is classical. The measurement-feedback setting for DR control has been studied in <cit.>, <cit.>, and for RO control in <cit.>. In the finite-horizon case, we reduce the DR-RO control problem with measurement feedback to an SDP similar to the full-information case studied in <cit.>. Furthermore, we validate the effectiveness and performance of our approach through simulations, showcasing its applicability in real-world control systems. The organization of the paper is as follows. In section <ref>, we review the LQG and regret-optimal control formulations in the measurement-feedback setting. In section <ref>, we present the distributionally robust regret-optimal control with measurement feedback (DR-RO-MF) problem formulation, in section <ref> we reformulate the problem as a tractable SDP, and in section <ref> we show numerical results for controlling the flight of a Boeing 747 <cit.>.

§ PRELIMINARIES §.§ Notation ℝ denotes the set of real numbers, ℕ is the set of natural numbers, ‖·‖ is the 2-norm, 𝔼_(·) is the expectation over (·), ℳ(·) is the set of probability distributions over (·), and Tr denotes the trace. §.§ A Linear Dynamical System We consider the following state-space model of a discrete-time, linear time-invariant (LTI) dynamical system: x_t+1 =Ax_t+Bu_t+w_t, y_t =Cx_t+v_t. Here, x_t∈ℝ^n represents the state of the system, u_t∈ℝ^m is the control input, w_t ∈ℝ^n is the process noise, while y_t ∈ℝ^p represents the noisy state measurements that the controller has access to, and v_t ∈ℝ^p is the measurement noise. The sequences {w_i} and {v_i} are considered to be randomly distributed according to an unknown joint probability measure P which lies in a specified compact ambiguity set, 𝒫. For simplicity, we take x_0 to be zero. In the rest of this paper, we adopt an operator-form representation of the system dynamics (<ref>). To this end, assume a horizon of N∈ℕ, and let us define x ≜ [ x_0; x_1; ⋮; x_N-1 ] ∈ℝ^Nn, u ≜ [ u_0; u_1; ⋮; u_N-1 ] ∈ℝ^Nm, and similarly for y∈ℝ^Np, w∈ℝ^Nn, and v∈ℝ^Np. Using these definitions, we can represent the system dynamics (<ref>) equivalently in operator form as x =Fu+Gw, y =Ju+Lw+v, where F∈ℝ^Nn× Nm, G∈ℝ^Nn× Nn, J∈ℝ^Np× Nm, and L∈ℝ^Np× Nn are strictly causal time-invariant operators (i.e., strictly lower triangular block Toeplitz matrices) corresponding to the dynamics (<ref>). We consider the Linear-Quadratic Gaussian (LQG) cost given as J(u,w,v) ≜ x^TQx+u^TRu, where Q, R≻0 are positive definite matrices of the appropriate dimensions. In order to simplify the notation, we redefine x and u as x← Q^1/2x and u← R^1/2u, so that (<ref>) becomes J(u,w,v)=‖x‖^2+‖u‖^2. §.§ Controller Design We consider a linear controller that only has access to the measurements: u=Ky, K∈𝒦, where 𝒦⊆ℝ^Nm× Np is the space of causal (i.e., lower triangular) matrices. Then, the closed-loop state measurement becomes y=(I-JK)^-1(Lw+v). As in <cit.>, let E=K(I-JK)^-1 be the Youla parametrization, so that K=(I+EJ)^-1E.
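As an implementation aside, the operators F, G, J and L can be assembled directly from (A,B,C). The sketch below (Python/NumPy is an assumption here; the experiments reported later use MATLAB) simply unrolls x_t=∑_{s<t}A^{t-1-s}(Bu_s+w_s) with x_0=0, so that block (t,s) of F equals A^{t-1-s}B and block (t,s) of G equals A^{t-1-s} for s<t, while J=(I_N⊗C)F and L=(I_N⊗C)G:

import numpy as np

def system_operators(A, B, C, N):
    # Strictly causal block-Toeplitz operators of x = F u + G w and
    # y = J u + L w + v over a horizon N, with x_0 = 0.
    n, m = B.shape
    F = np.zeros((N * n, N * m))
    G = np.zeros((N * n, N * n))
    for t in range(N):
        for s in range(t):
            Apow = np.linalg.matrix_power(A, t - 1 - s)
            F[t*n:(t+1)*n, s*m:(s+1)*m] = Apow @ B
            G[t*n:(t+1)*n, s*n:(s+1)*n] = Apow
    Cblk = np.kron(np.eye(N), C)  # y_t = C x_t + v_t, applied blockwise
    return F, G, Cblk @ F, Cblk @ G  # F, G, J, L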
The closed-loop LQG cost (<ref>) can then be written as: J(K,w,v)= [ w^T v^T ] T_K^T T_K [ w; v ], where T_K is the transfer operator associated with K that maps the disturbance sequences [ w; v ] to the state and control sequences [ x; u ]: T_K ≜ [ FEL+G FE; EL E ]. §.§ Regret-Optimal Control with Measurement-Feedback Given a noncausal controller K_0 ∈𝒦, we define the regret as: R(K,w,v) ≜ J(K,w,v)- J(K_0,w,v) = [ w^T v^T ] (T_K^T T_K-T_K_0^T T_K_0)[ w; v ], which measures the excess cost that a causal controller suffers by not knowing the future. In other terms, regret is the difference between the cost accumulated by a causal controller and the cost accumulated by a benchmark noncausal controller that knows the complete disturbance trajectory. The problem of minimizing regret in the measurement-feedback setting is referred to as RO-MF and is formulated as: inf_K∈𝒦sup_w,v R(K,w,v)/(‖w‖^2+ ‖v‖^2), which is solved suboptimally by reducing it to a level-1 suboptimal Nehari problem <cit.>.

§ DISTRIBUTIONALLY ROBUST REGRET-OPTIMAL CONTROL In this section, we introduce the distributionally robust regret-optimal (DR-RO) control problem with measurement feedback, which we refer to as DR-RO-MF. In this setting, the objective is to find a controller K ∈𝒦 that minimizes the maximum expected regret among all joint probability distributions of the disturbances in an ambiguity set 𝒫. This can be formulated formally as inf_K∈𝒦sup_P∈𝒫𝔼_P [R(K,w,v)], where the disturbances [ w; v ] are distributed according to P∈𝒫. To solve this problem, we first need to characterize the ambiguity set 𝒫 and explicitly determine a benchmark noncausal controller K_0. As in <cit.>, we choose 𝒫 to be the set of probability distributions that are at a distance of at most r>0 from a nominal probability distribution, P_0∈ℳ(ℝ^N(n+p)). Here, the distance is chosen to be the type-2 Wasserstein distance defined as <cit.>: W_2^2(P_1,P_2):=inf_π∈Π(P_1,P_2) ∫_ℝ^n×ℝ^n‖z_1-z_2‖^2 π(dz_1,dz_2), where the set Π(P_1,P_2) comprises all joint distributions that have marginal distributions P_1 and P_2. Then, 𝒫 can be written as: 𝒫 := {P ∈ℳ(ℝ^N(n+p)) | W_2(P_0, P)≤ r}. Unlike the full-information case, we know from Theorem 1 in <cit.> that in the measurement-feedback case, there is no optimal noncausal controller that dominates every other controller for every disturbance. Therefore, we will choose K_0 as the optimal noncausal controller that minimizes the Frobenius norm of T_K. Theorem 3 in <cit.> shows that such a controller can be found as: K_0=(I+E_0J)^-1 E_0, where the associated operator T_K_0 is: T_K_0=[ FE_0L+G FE_0; E_0L E_0 ], with E_0 ≜ -T^-1F^TGL^TU^-1, T ≜ I+F^TF, U ≜ I+LL^T.

§ TRACTABLE FORMULATION In this section, we introduce a tractable reformulation of the DR-RO-MF control problem (<ref>). §.§ DR-RO-MF Control Problem Defining 𝒞_K ≜ T_K^T T_K-T_K_0^T T_K_0, we can rewrite the DR-RO-MF control problem (<ref>) as inf_K∈𝒦sup_P∈𝒫𝔼_P [ [ w^T v^T ] 𝒞_K[ w; v ]]. The following theorem gives the dual problem of the inner maximization and characterizes the worst-case distribution. [adapted from Theorems 2 and 3 in <cit.>]. Suppose P_0 is absolutely continuous with respect to the Lebesgue measure on ℝ^N(n+p) and [ w_0; v_0 ]∼ P_0. The optimization problem: sup_P∈𝒫𝔼_P[ [ w^T v^T ] 𝒞_K[ w; v ]], where [ w; v ]∼ P and 𝒞_K∈𝕊^N(n+p), with λ_max(𝒞_K)≠ 0, has a finite solution and is equivalent to the convex optimization problem: inf_γ≥ 0, γ I ≻𝒞_Kγ (r^2-Tr(M_0)) + γ^2 Tr(M_0(γ I-𝒞_K)^-1), where M_0:=𝔼_P_0[[ w; v ][ w^T v^T ]].
Furthermore, the disturbance that achieves the worst-case regret is [ w^∗; v^∗ ]∼ P^∗, where [ w^∗; v^∗ ] = γ^∗ (γ^∗ I - 𝒞_K)^-1[ w_0; v_0 ], and γ^∗ is the optimal solution of (<ref>), which also satisfies the algebraic equation: Tr( (γ(γ I - 𝒞_K)^-1-I)^2M_0)=r^2. The proof follows from Theorems 2 and 3 in <cit.> and is omitted for brevity here. We highlight two remarks pertaining to the presented theorem. Remark 1: Notice that the supremum of the quadratic cost depends on P_0 only through its covariance matrix M_0. Note further that as r→∞, the optimal γ reaches its smallest possible value (since r^2 multiplies γ in (<ref>)). The smallest possible value that γ can take is simply the operator norm of 𝒞_K, which means that the DR-RO-MF controller approaches the regret-optimal controller as r→∞. Remark 2: Notice that the worst-case disturbance takes on a Gaussian distribution when the nominal disturbance is Gaussian. This is not immediately evident, as the ambiguity set 𝒫 contains non-Gaussian distributions. Note further that the worst-case disturbance is correlated even if the nominal distribution has white noise. Assuming the covariance of the nominal distribution to be M_0=𝔼_P_0[[ w; v ][ w^T v^T ]]=I, so that Tr(M_0)=N(n+p), the optimization problem (<ref>) can be cast equivalently using Theorem <ref> as inf_K∈𝒦inf_γ≥ 0γ (r^2-N(n+p)) + γ^2 Tr((γ I - 𝒞_K)^-1) s.t. γ I ≻𝒞_K, 𝒞_K=T_K^T T_K -T_K_0^T T_K_0. As in <cit.>, define the unitary matrices Ψ and Θ: Θ=[ S^-1/2 0; 0 T^-T/2 ][ I -F; F^T I ], Ψ=[ I L^T; -L I ][ V^-1/2 0; 0 U^-T/2 ], where T and U are as in (<ref>) and (<ref>), and S ≜ I+FF^T, V ≜ I+L^TL, and S^1/2, T^1/2, U^1/2, and V^1/2 are (block) lower triangular matrices such that S=S^1/2S^T/2, T=T^T/2T^1/2, U=U^1/2U^T/2, V=V^T/2V^1/2. Then, the optimization problem (<ref>) is equivalent to: inf_K∈𝒦, γ≥ 0, γ I ≻𝒞_Kγ (r^2-N(n+p)) + γ^2 Tr((γ I - 𝒞_K )^-1) s.t. 𝒞_K=(Θ T_K Ψ)^T Θ T_K Ψ-(Θ T_K_0Ψ)^T Θ T_K_0Ψ, which holds true since the trace is invariant under the unitary transformations Θ and Ψ. By introducing an auxiliary variable X≽γ^2 (γ I - 𝒞_K)^-1 and leveraging the Schur complement theorem as in <cit.>, the problem (<ref>) can be recast as inf_K∈𝒦, γ≥ 0, X ≽ 0γ (r^2-N(n+p)) + Tr(X) s.t. [ X γ I; γ I γ I - 𝒞_K ]≽ 0, γ I - 𝒞_K ≻ 0, 𝒞_K=(Θ T_K Ψ)^T Θ T_K Ψ-(Θ T_K_0Ψ)^T Θ T_K_0Ψ. In the following lemma, we establish some of the important identities that are utilized to convert problem (<ref>) to a tractable convex program. [adapted from <cit.>]. The following statements hold: 1) γ I - 𝒞_K =[ γ I -PZ; -Z^T P^T γ I -Z^TZ ], where Z ≜ T^1/2EU^1/2-W, W ≜ -T^-T/2F^TGL^TU^-T/2, P ≜ V^-T/2G^TFT^-1/2, and E, T, U and V are as defined in (<ref>), (<ref>), (<ref>) and (<ref>), respectively. 2) γ I - 𝒞_K ≻ 0 ⇔ ‖Y - W_-,γ‖_2≤ 1, where γ^-1 I+ γ^-2 P^TP= M_γ^T M_γ, M_γ = (γ^-1 I+ γ^-2 P^TP)^1/2, W_γ =M_γW, Y =M_γ T^1/2 EU^1/2 - W_+,γ, and W_+,γ and W_-,γ are the causal and strictly anticausal parts of W_γ. Here, M_γ is lower triangular and positive definite. 3) Y is causal iff E is causal, where E can be found as follows: E=T^-1/2M_γ^-1(Y+W_+,γ)U^-1/2. 4) The condition in (<ref>) is recognized as a level-1 suboptimal Nehari problem that approximates a strictly anticausal matrix W_-,γ by a causal matrix Y. The proof follows from Theorem 4 in <cit.> and is omitted for brevity here. Using Lemma <ref>, problem (<ref>) can be reformulated as a tractable optimization program: inf_Z,Y∈𝒦, γ≥ 0, X ≽ 0γ (r^2-N(n+p)) + Tr(X) s.t.
[ X_11 X_12 γ I 0; X_12^T X_22 0 γ I; γ I 0 γ I -PZ; 0 γ I -Z^T P^T γ I -Z^TZ ]≽ 0, ‖Y - W_-,γ‖_2≤ 1, which equals inf_Z,Y∈𝒦, γ≥ 0, X ≽ 0γ (r^2-N(n+p)) + Tr(X) s.t. [ X_11 X_12 γ I 0 0; X_12^T X_22 0 γ I 0; γ I 0 γ I -PZ 0; 0 γ I -Z^T P^T γ I Z^T; 0 0 0 Z I ]≽ 0, ‖Y - W_-,γ‖_2≤ 1, where the last step follows from the Schur complement. Using (<ref>), (<ref>), and H_γ=M_γ^-1W_+,γ-W, we establish our main theorem. The distributionally robust regret-optimal control problem in the measurement-feedback setting (<ref>) reads: inf_Y∈𝒦, γ≥ 0, X ≽ 0γ (r^2-N(n+p)) + Tr(X) s.t. [ X_11 X_12 γ I 0 0; X_12^T X_22 0 γ I 0; γ I 0 γ I -P(*) 0; 0 γ I -(*)^T P^T γ I (*)^T; 0 0 0 (*) I ]≽ 0, (*)=M_γ^-1Y+H_γ, [ I (Y - W_-,γ)^T; Y - W_-,γ I ]≻ 0. The optimal controller K^∗ is then obtained using (<ref>) and (<ref>).

§.§ Sub-Optimal Problem For a given value of γ, problem (<ref>) simplifies to a tractable SDP. In practical implementations, we can therefore solve problem (<ref>) by optimizing the objective function with respect to the variables Y and X while fixing γ, thus transforming the problem into an SDP, which can be solved using standard convex optimization packages. We then iteratively refine the value of γ until it converges to the optimal solution γ^*. This iterative process ensures that we obtain the best possible value for γ that minimizes the objective function in problem (<ref>).

§.§ LQG and RO-MF Control Problems as Special Cases Interestingly, LQG and RO control in the measurement-feedback setting can be recovered from DR-RO-MF control by varying the radius r, which represents the extent of uncertainty regarding the accuracy of the nominal distribution in the ambiguity set. When r→ 0, the ambiguity set shrinks to a singleton comprising solely the nominal distribution. Consequently, the problem simplifies to a stochastic optimal control problem under partial observability: inf_K∈𝒦𝔼_P_0 [J(K,w,v)]. As r→∞, the ambiguity set grows to include arbitrary adversarially generated disturbances, and the optimal γ reaches its smallest possible value, which is the operator norm of 𝒞_K. This means that the problem reduces to the RO-MF control problem which we discussed in section <ref>.

§ SIMULATIONS §.§ Flight Control We focus on the problem of controlling the longitudinal flight of a Boeing 747, which pertains to the linearized dynamics of the aircraft, as presented in <cit.>. The linear dynamical system provided describes the aircraft's dynamics during level flight at an altitude of 7.57 miles and a speed of 593 miles per hour, with a discretization interval of 0.1 second. The state variables of the system encompass the aircraft's velocity along the body axis, velocity perpendicular to the body axis, angle between the body axis and the horizontal plane, and angular velocity. The inputs to the system are the elevator angle and thrust. The process noise accounts for variations caused by external wind conditions. The discrete-time state-space model is: A= [ 0.9801 0.0003 -0.0980 0.0038; -0.3868 0.9071 0.0471 -0.0008; 0.1591 -0.0015 0.9691 0.0003; -0.0198 0.0958 0.0021 1.000 ], B= [ -0.0001 0.0058; 0.0296 0.0153; 0.0012 -0.0908; 0.0015 0.0008 ], C=[ 1 0 0 0; 0 0 0 1 ]. We conduct all experiments using MATLAB, on a PC with an Intel Core i7-1065G7 processor and 16 GB of RAM. The optimization problems are solved using the CVX package <cit.>. We limit the horizon to N=10.
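Before turning to the results, the evaluation pipeline can be outlined in code. The sketch below (Python/NumPy is an assumption; the reported experiments use MATLAB/CVX) reuses the system_operators helper from the Preliminaries, forms the Frobenius-optimal benchmark K_0 via E_0=-T^{-1}F^TGL^TU^{-1}, and locates the γ^* of Theorem <ref> by bisection on the algebraic equation Tr((γ(γI-𝒞_K)^{-1}-I)^2M_0)=r^2, whose left-hand side decreases monotonically to 0 for γ>λ_max(𝒞_K) (assuming 𝒞_K symmetric with λ_max(𝒞_K)>0 and M_0≻0). The fixed-γ SDP of Theorem <ref> itself is omitted here; it can be transcribed directly into CVX/CVXPY from the two LMIs:

import numpy as np

def benchmark_controller(F, G, J, L):
    # E_0 = -T^{-1} F^T G L^T U^{-1} with T = I + F^T F, U = I + L L^T, and
    # K_0 = (I + E_0 J)^{-1} E_0 (the Frobenius-optimal noncausal benchmark).
    Nm, Np = F.shape[1], L.shape[0]
    T = np.eye(Nm) + F.T @ F
    U = np.eye(Np) + L @ L.T
    E0 = -np.linalg.solve(T, F.T @ G @ L.T) @ np.linalg.inv(U)
    return np.linalg.solve(np.eye(Nm) + E0 @ J, E0), E0

def transfer_operator(K, F, G, J, L):
    # T_K maps [w; v] to [x; u]: T_K = [[F E L + G, F E], [E L, E]].
    E = K @ np.linalg.inv(np.eye(L.shape[0]) - J @ K)
    return np.block([[F @ E @ L + G, F @ E], [E @ L, E]])

def worst_case_gamma(C, M0, r, tol=1e-9):
    # Bisection on Tr((g (g I - C)^{-1} - I)^2 M0) = r^2 over g > lambda_max(C).
    d = C.shape[0]
    lam = np.linalg.eigvalsh(C).max()
    def lhs(g):
        R = g * np.linalg.inv(g * np.eye(d) - C) - np.eye(d)
        return np.trace(R @ R @ M0)
    lo, hi = lam + tol, lam + 1.0
    while lhs(hi) > r**2:               # expand until the root is bracketed
        hi = lam + 2.0 * (hi - lam)
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs(mid) > r**2 else (lo, mid)
    return 0.5 * (lo + hi)

# Boeing 747 model from above, horizon N = 10, nominal covariance M_0 = I.
A = np.array([[ 0.9801,  0.0003, -0.0980,  0.0038],
              [-0.3868,  0.9071,  0.0471, -0.0008],
              [ 0.1591, -0.0015,  0.9691,  0.0003],
              [-0.0198,  0.0958,  0.0021,  1.0000]])
B = np.array([[-0.0001,  0.0058],
              [ 0.0296,  0.0153],
              [ 0.0012, -0.0908],
              [ 0.0015,  0.0008]])
C_meas = np.array([[1., 0., 0., 0.],
                   [0., 0., 0., 1.]])

F, G, J, L = system_operators(A, B, C_meas, N=10)
K0, _ = benchmark_controller(F, G, J, L)
TK0 = transfer_operator(K0, F, G, J, L)
TK = transfer_operator(np.zeros_like(K0), F, G, J, L)  # e.g., the zero controller
C_K = TK.T @ TK - TK0.T @ TK0
gamma_star = worst_case_gamma(C_K, np.eye(C_K.shape[0]), r=1.0)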
We take the nominal distribution P_0 to be Gaussian with mean μ_0=0 and covariance Σ_0=I, and we investigate various values for the radius r, specifically: r∈{0, 0.2, 0.4, 0.6, 0.8, 1, 1.5, 2, 4, 8, 16, 32, 126}. For each value of r, we solve the sub-optimal problem described in Section <ref>, iterating over γ until convergence to γ^*. To assess the performance of the controller, we compute the worst-case disturbance, which lies at a Wasserstein distance r from P_0, as discussed in Theorem <ref>. Finally, we compare the regret cost of the DR-RO-MF controller with that of the LQG, H_∞ <cit.>, and RO-MF <cit.> controllers while considering the worst-case disturbance corresponding to the DR-RO-MF controller.

The results are shown in Figures <ref> and <ref>. The DR-RO-MF controller achieves the minimum cost under worst-case disturbance conditions for any given value of r. When r is sufficiently small (less than 0.2), the cost of the DR-RO-MF controller closely approximates that of the LQG controller (Figure <ref>). Conversely, for sufficiently large values of r (greater than 8), the cost of the DR-RO-MF controller closely matches that of the RO-MF controller (Figure <ref>). These observations align with the theoretical findings elaborated in Section <ref>. Furthermore, it is worth noting that for large values of r (Figure <ref>), the LQG controller yields the poorest results. Conversely, for small values of r (Figure <ref>), the LQG controller performs on par with the DR-RO-MF controller, emerging as the best choice, as mentioned earlier. This discrepancy is expected, since LQG control accounts only for disturbances drawn from the nominal distribution, assuming uncorrelated noise. On the other hand, RO-MF exhibits inferior performance when r is small (Figure <ref>), but gradually becomes the top-performing controller alongside DR-RO-MF as r increases. This behavior arises from the fact that RO-MF is specifically designed for sufficiently large r. Lastly, note that the H_∞ cost lies between the costs of the other controllers, interpolating their respective costs.

§.§ Performance Under Adversarially Chosen Distribution
For any given causal controller K_c, an adversary can choose the worst-case distribution of disturbances for a fixed r as

P_c = arg max_P∈𝒫𝔼_P R(K_c,w,v),

where R is the regret as in (<ref>). Denoting by K_DR-RO-MF the optimal DR-RO-MF controller and by P_DR-RO-MF the worst-case (adversarial) distribution corresponding to K_DR-RO-MF, we have that

𝔼_P_c R(K_c,w,v) = max_P∈𝒫𝔼_P R(K_c,w,v),
≥min_K∈𝒦max_P∈𝒫𝔼_P R(K,w,v),
= 𝔼_P_DR-RO-MF R(K_DR-RO-MF,w,v),
≥𝔼_P_c R(K_DR-RO-MF,w,v),

where the first equality follows from (<ref>) and the last inequality is due to the fact that P_DR-RO-MF is the worst-case distribution for K_DR-RO-MF. In other words, the DR-RO-MF controller is robust to adversarial changes in the distribution, as it yields smaller expected regret compared to any other causal controller K_c when the disturbances are sampled from the worst-case distribution P_c corresponding to K_c. The simulation results presented in Subsection <ref> show that DR-RO-MF outperforms the RO-MF, H_∞, and LQG (designed assuming disturbances are sampled from P_0) controllers under the worst-case distribution of the DR-RO-MF controller P_DR-RO-MF, i.e.,

𝔼_P_DR-RO-MF R(K_c,w,v) ≥𝔼_P_DR-RO-MF R(K_DR-RO-MF,w,v).
This directly validates the theoretically expected inequality

𝔼_P_c R(K_c,w,v) ≥𝔼_P_c R(K_DR-RO-MF,w,v),

which follows by combining the chain of inequalities (<ref>) with

𝔼_P_c R(K_c,w,v) ≥𝔼_P_DR-RO-MF R(K_c,w,v).

To further support our claims, we assess the performance of the LQG and RO-MF controllers by measuring the relative reduction in expected regret when the DR-RO-MF controller is utilized under the worst-case distributions corresponding to the LQG and RO-MF controllers, respectively:

(𝔼_P_c R(K_c,w,v) - 𝔼_P_c R(K_DR-RO-MF,w,v)) / 𝔼_P_c R(K_c,w,v) × 100,

where K_c is either the LQG or RO-MF controller and P_c is the corresponding worst-case distribution. The results are shown in Table <ref> for r ∈{0.2,1,2,4,16,32}.

§.§ Limitations
In our scenario with a relatively short planning horizon of N=10, the cost reduction achieved by employing DR-RO-MF control, in comparison to traditional controllers such as LQG and H_∞, is moderate. However, it is anticipated that this reduction would become more pronounced with a longer planning horizon. Unfortunately, in our experimental setup we were restricted to N=10 due to computational limitations: solving semi-definite programs involving large matrices is computationally inefficient, necessitating this constraint. In practice, this limitation can be overcome by implementing the controller in a receding horizon fashion, where the controller is updated every x time steps.

§ CONCLUSION
In conclusion, this paper extended the distributionally robust approach to regret-optimal control by incorporating the Wasserstein-2 distance <cit.> to handle cases of limited observability. The proposed DR-RO-MF controller demonstrated superior performance compared to classical controllers such as LQG and H_∞, as well as the RO-MF controller, in simulations of flight control scenarios. The controller exhibits a unique interpolation behavior between LQG and RO-MF, determined by the radius r that quantifies the uncertainty in the accuracy of the nominal distribution. As the time horizon increases, solving the tractable SDP to which the solution reduces becomes more challenging, highlighting the practical need for a model predictive control approach. Overall, the extended distributionally robust approach presented in this paper holds promise for robust and effective control in systems with limited observability.
http://arxiv.org/abs/2307.04086v1
20230709024221
Age of FGK Dwarfs Observed with LAMOST and GALAH: Considering the Oxygen Enhancement
[ "Tiancheng Sun", "Zhishuai Ge", "Xunzhou Chen", "Shaolan Bi", "Tanda Li", "Xianfei Zhang", "Yaguang Li", "Yaqian Wu", "Sarah A. Bird", "Ferguson J. W.", "Jianzhao Zhou", "Lifei Ye", "Liu Long", "Jinghua Zhang" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.GA" ]
Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China Beijing Planetarium, Beijing Academy of Science and Technology, Beijing, 100044, China [email protected] Research Center for Intelligent Computing Platforms, Zhejiang Laboratory, Hangzhou 311100, China [email protected] Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China [email protected] Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, United Kingdom Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China Sydney Institute for Astronomy (SIfA), School of Physics, University of Sydney, NSW 2006, Australia Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, A20 Datun Rd., Chaoyang District, Beijing 100101, People’s Republic of China Center for Astronomy and Space Sciences, China Three Gorges University, Yichang 443002, People's Republic of China Department of Physics, Wichita State University, Wichita, KS 67260-0032, USA Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, A20 Datun Rd., Chaoyang District, Beijing 100101, People’s Republic of China Varying oxygen abundance could impact the modeling-inferred ages. This work aims to estimate the ages of dwarfs considering observed oxygen abundance. To characterize 67,503 LAMOST and 4,006 GALAH FGK-type dwarf stars, we construct a grid of stellar models which take into account oxygen abundance as an independent model input. Compared with ages determined with commonly-used α-enhanced models, we find a difference of ∼9% on average when the observed oxygen abundance is considered. The age differences between the two types of models are correlated to [Fe/H] and [O/α], and they are relatively significant on stars with [Fe/H] ≲ -0.6 dex. Generally, varying 0.2 dex in [O/α] will alter the age estimates of metal-rich (-0.2 < [Fe/H] < 0.2) stars by ∼10%, and relatively metal-poor (-1 < [Fe/H] < -0.2) stars by ∼15%. Of the low-O stars with [Fe/H] < 0.1 dex and [O/α] ∼ -0.2 dex, many have fractional age differences of ≥ 10%, and even reach up to 27%. The fractional age difference of high-O stars with [O/α] ∼ 0.4 dex reaches up to -33% to -42% at [Fe/H] ≲ -0.6 dex. We also analyze the chemical properties of these stars. 
We find a decreasing trend of [Fe/H] with age from 7.5–9 Gyr to 5–6.5 Gyr for the stars from the LAMOST and GALAH. The [O/Fe] of these stars increases with decreasing age from 7.5–9 Gyr to 3–4 Gyr, indicating that the younger population is more O-rich. § INTRODUCTION Galactic archaeology uses the chemical abundances, kinematics, and derived ages of resolved stellar populations as fossils to investigate the formation and evolution history of the Milky Way <cit.>. However, in comparison to chemical abundance and kinematics estimation, estimating the ages of field stars is a challenging task due to the inherent uncertainties present in both observational data and the stellar models employed for dating stars <cit.>. The chemical composition of a star is a fundamental input parameter in the construction of its theoretical model, which is critical in the determination of its age. Notably, at fixed [Fe/H], the abundance variations of individual elements exert a consequential impact on the overall metallicity Z, which subsequently determines the opacity of the stellar models. This, in turn, influences the efficiency of energy transfer and the thermal structure, thereby altering the evolution tracks on the HR diagram and the main-sequence lifetime <cit.>. Consequently, in the context of stellar modeling, it is essential to consider the proper metal mixture in order to accurately characterize stars and determine their ages. The solar-scaled ([α/Fe] = 0) and α-enhanced mixtures have been commonly used in theoretical model grids like Y2 isochrones <cit.>, Dartmouth Stellar Evolution Database <cit.>, and Padova stellar models <cit.>. These models treated all the α-elements, that are O, Ne, Mg, Si, S, Ca, Ti, by the same factor. Observations from high-resolution spectroscopic data have presented very different O-enhancement values from other α-elements on many stars <cit.>. The observed discrepancies in the abundances of oxygen and other α-elements can be attributed to the diverse origins of these elements. Specifically, O and Mg are believed to be primarily synthesized during the hydrostatic burning phase of massive stars and subsequently ejected during the core-collapse supernovae (CCSNe) <cit.>. Nevertheless, some works have provided evidence that Mg might also be partially released into the interstellar medium by SNe Ia <cit.>, while O appears to be solely enriched by CCSNe <cit.>. The other α-elements, namely Si, Ca, and Ti, primarily originate from the explosive burning of CCSNe and are partially contributed by SNe Ia <cit.>. For instance, 22% of Si and 39% of Ca come from SNe Ia according to the chemical evolution models in <cit.>. Therefore, not all α-elements vary in lockstep, the abundance of oxygen may not necessarily correlate with the abundance of other α-elements. Many works have also discussed the effects of varying individual element abundances on the stellar evolution models <cit.>. Theoretical models showed that the oxygen abundance influences the stellar evolution differently from the other α-elements <cit.>. Furthermore, <cit.> proposed the CO-extreme models, which treat oxygen abundance differently from the other α-elements and add carbon abundance in the stellar evolution models. The models have been employed to determine the ages of thousands of metal-poor halo stars, disk stars, and main sequence turn-off stars <cit.>. These results showed that increasing oxygen abundance leads to smaller age determination for the stars with [Fe/H] < -0.2. 
For the stars with [Fe/H] < -0.2 and [O/α] > 0.2 dex, the age difference would be about 1 Gyr. Due to the limited sample sizes of previous studies (<cit.>, with 70 stars, and <cit.>, with 148 stars) or the restricted range of [Fe/H] values <cit.>, there is a pressing need for a large and self-consistent sample to conduct a quantitative analysis regarding the impact of O-enhancement on age determination. Recently, millions of stars' individual element abundances have been measured by spectroscopic surveys like LAMOST <cit.>, APOGEE <cit.>, and GALAH <cit.>. These large sky surveys provide an excellent opportunity to study the effects of oxygen abundance variations on age determinations across a wide range of stellar parameters. To investigate the systematic effects of O-enhancement on age determination, we study the dwarf stars with available oxygen abundance measurements from LAMOST and GALAH. This paper is organized as follows: Section <ref> mentions the data selection; Section <ref> describes computations of stellar model grids; Section <ref> demonstrates ages differences between the O-enhanced models and α-enhanced models; the resulting age-abundance trends are presented in Section <ref>; and the conclusions of this work are drawn in Section <ref>. § TARGET SELECTION In this work, we make use of spectroscopic data from LAMOST DR5 Value Added Catalogue <cit.> and Third Data Release of GALAH <cit.>, together with astrometric data from Gaia Data Release 3 <cit.>. §.§ Spectroscopic Data LAMOST (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope) DR5 Value Added Catalog <cit.> contains more than 6 million stars with atmosphere parameters (T_ eff, log g, V_mic) and chemical abundances of 16 elements (C, N, O, Na, Mg, Al, Si, Ca, Ti, Cr, Mn, Fe, Co, Ni, Cu, and Ba). Measurements of element abundances are based on the DD–Payne tool <cit.>, which is a data-driven method that incorporates constraints from theoretical spectral models. It is noteworthy that, as discussed by <cit.>, the direct derivation of oxygen abundances from atomic oxygen lines or oxygen-bearing molecular lines in low-resolution (R ∼ 1800) LAMOST spectra is unfeasible. Alternatively, CH and CN molecular lines can be utilized for indirect estimation of oxygen abundances, as their strengths are sensitive to the amount of carbon locked up in CO molecules. As a result, the LAMOST oxygen abundances are only available in the cooler stars (T_ eff ≲ 5700 K), where the CH and CN lines have sufficient strength to allow a reasonably precise (±0.10 dex) estimate of [O/Fe] <cit.>. Due to the wide age range and the preservation of initial chemical abundances, the main-sequence star could be a good tracer of stellar populations. Therefore, we select the main-sequence stars with available measurements for [Fe/H], [α/Fe], and [O/Fe] from the catalog. Firstly, we use some recommended labels (T_ eff_flag = 1, log g_flag = 1, [Fe/H]_flag = 1, [X/Fe]_flag[[X/Fe]_flag = 1 for 14 elements (C, N, O, Na, Mg, Al, Si, Ca, Ti, Cr, Mn, Fe, Co, Ni).] = 1, qflag_chi2 = good) to select stars with reliable measurements. Afterward, we remove stars with T_ eff smaller than 5000 K or signal-to-noise ratio (S/N) less than 50 because their [O/Fe] determinations are not robust. <cit.> also provided a tag named “qflag_singlestar” to infer whether a star is single or belongs to a binary system. The tag is determined by the deviation significance of the spectroscopic parallax from the Gaia astrometric parallax. 
When the deviation is less than 3σ, it suggests an object is a single star. We use this tag to remove all candidate binaries from our sample. Finally, we choose stars with log g > 4.1. In total, we select 187,455 unique stars.

GALAH (Galactic Archaeology with HERMES) DR3 <cit.> presents stellar parameters (T_eff, log g, [Fe/H], V_mic, V_broad, V_rad) and up to 30 elemental abundances for 588,571 stars, derived from optical spectra at a typical resolution of R ∼ 28,000. The oxygen abundance from GALAH DR3 was calculated using the O_I 777 nm triplet <cit.>, based on a non-LTE method (LTE: local thermodynamic equilibrium) <cit.>. This NLTE method has also been employed for the measurement of [Fe/H] in GALAH. Following the recommendations in GALAH DR3, we require SNR > 30 and a quality flag = 0 for reliable stellar parameter determination, including iron, α-element, and oxygen abundances (flag_sp = 0, flag_fe_h = 0, flag_alpha_fe = 0, and flag_o_fe = 0). Additionally, the sample is limited to the stars with e_alpha_fe < 0.1 and e_o_fe < 0.1. We exclude the binary systems identified by <cit.> (a catalog of FGK binary stars in GALAH). These cuts give us a sample of 19,512 dwarf stars (log g > 4.1).

§.§ Astrometric Data
We cross-match our selected LAMOST and GALAH samples with the Gaia DR3 <cit.> catalog to obtain the luminosity for each star. Given that luminosity is utilized as a key observational constraint for estimating stellar age, we select stars with luminosity uncertainty less than 10%. Additionally, we select single stars by making a cut based on the Gaia re-normalized unit weight error (RUWE) being less than 1.2 (RUWE values are from Gaia DR3). Our final sample consists of 149,906 stars from LAMOST (5000 K < T_eff < 5725 K, -1 < [Fe/H] < 0.5, log g > 4.1) and 15,591 stars from GALAH (4500 K < T_eff < 7000 K, -1 < [Fe/H] < 0.5, log g > 4.1).

We calculate the Galactic Cartesian coordinates (X, Y, Z) and velocities (U, V, W) for the LAMOST sample using the Python package Galpy <cit.>. The distances are estimated by <cit.>. The Sun is located at (X, Y, Z) = (-8.3, 0, 0) kpc, and the solar motion with respect to the local standard of rest is (U_⊙, V_⊙, W_⊙) = (11.1, 12.24, 7.25) km s^-1 <cit.>. We use the Galactic Cartesian coordinates and velocities from the GALAH DR3 value-added catalog (VAC), which is based on astrometry from Gaia EDR3 and radial velocities determined from the GALAH spectra <cit.>.

In Figure <ref>, we demonstrate dwarfs from LAMOST and GALAH in the Kiel diagram and the [α/Fe][The [α/Fe] from both the LAMOST and GALAH catalogs is defined as an error-weighted mean of [Mg/Fe], [Si/Fe], [Ca/Fe] and [Ti/Fe].]-[O/Fe] space to inspect their general distributions. The Kiel diagram in Figure <ref>(a) shows that most of the LAMOST dwarfs are cooler than 5700 K, while the GALAH dwarfs in Figure <ref>(b) cover a wider range of T_eff (4500 - 7000 K). It should be noted that we do not apply any explicit cut at the high-temperature side of the LAMOST sample; the upper limit simply reflects where reliable oxygen abundances can be measured by <cit.>. The [α/Fe]-[O/Fe] diagrams in Figure <ref>(c-d) show that [O/Fe] generally increases with increasing [α/Fe]; however, [O/Fe] spreads widely at a given α-enhancement. The spread is relatively large for low-α stars (especially for the GALAH sample), ranging from -0.4 to +0.6.

Metal Mixtures for the GS98 Solar Mixture, the α-Enhanced Mixture, and the O-Enhanced Mixture:
Element   log N_⊙   log N_αEM      log N_OEM
C         8.52      8.52           8.52
N         7.92      7.92           7.92
O         8.83      8.83+[α/Fe]    8.83+[O/Fe]
F         4.56      4.56           4.56
Ne        8.08      8.08+[α/Fe]    8.08+[α/Fe]
Na        6.33      6.33           6.33
Mg        7.58      7.58+[α/Fe]    7.58+[α/Fe]
Al        6.47      6.47           6.47
Si        7.55      7.55+[α/Fe]    7.55+[α/Fe]
P         5.45      5.45           5.45
S         7.33      7.33+[α/Fe]    7.33+[α/Fe]
Cl        5.50      5.50           5.50
Ar        6.40      6.40           6.40
K         5.12      5.12           5.12
Ca        6.36      6.36+[α/Fe]    6.36+[α/Fe]
Sc        3.17      3.17           3.17
Ti        5.02      5.02+[α/Fe]    5.02+[α/Fe]
V         4.00      4.00           4.00
Cr        5.67      5.67           5.67
Mn        5.39      5.39           5.39
Fe        7.50      7.50           7.50
Co        4.92      4.92           4.92
Ni        6.25      6.25           6.25

Grid of Evolutionary Models with Two Metal Mixture Patterns:

Metal mixture         [O/Fe] (dex)   [α/Fe] (dex)
O-enhanced mixture    -0.2           0
                       0.2           0
                       0.4           0
                      -0.1           0.1
                       0.3           0.1
                       0.5           0.1
                       0             0.2
                       0.4           0.2
                       0.2           0.3
                       0.4           0.3
                       0.5           0.3
                       0.6           0.3
α-enhanced mixture     0             0
                       0.1           0.1
                       0.2           0.2
                       0.3           0.3

Z Values of Fixed [Fe/H] with Two Metal Mixture Patterns:

[Fe/H] (dex)   [α/Fe] (dex)   [O/Fe] (dex)   Z
-1.0           0.1            0.1            0.0020
-1.0           0.1            0.5            0.0036
-0.8           0.1            0.1            0.0032
-0.8           0.1            0.5            0.0056
-0.6           0.1            0.1            0.0051
-0.6           0.1            0.5            0.0089
-0.4           0.1            0.1            0.0080
-0.4           0.1            0.5            0.0139
-0.2           0.1            0.1            0.0126
-0.2           0.1            0.5            0.0217
 0             0.1            0.1            0.0197
 0             0.1            0.5            0.0337

Atmosphere Parameters and Chemical Abundance for the Example Stars from LAMOST:

Star (sobject_id)                    T_eff (K)   [Fe/H] (dex)   Luminosity (L_⊙)   [α/Fe] (dex)   [O/Fe] (dex)
20140313-HD145243N315530B-01-084     5619±22     -0.30±0.04     0.74±0.02          0.06±0.02       0.46±0.09
20141112-HD083415N451147V01-03-165   5652±24     -0.15±0.04     1.57±0.03          0.15±0.02      -0.02±0.08

§ STELLAR MODELS
§.§ Input Physics
We construct a stellar model grid using the Modules for Experiments in Stellar Astrophysics (MESA) code <cit.>. The versions of MESA and the MESA SDK we used are Revision 12115 and Version 20.3.1, respectively. The EOS (Equation of State) tables in MESA are a blend of the OPAL <cit.>, SCVH <cit.>, PTEH <cit.>, HELM <cit.>, and PC <cit.> EOS tables. Nuclear reaction rates are a combination of rates from NACRE <cit.> and JINA REACLIB <cit.>, plus additional tabulated weak reaction rates <cit.>. Screening is included via the prescription of <cit.>. Thermal neutrino loss rates are from <cit.>. The helium enrichment law is calibrated with the initial abundances of helium and heavy elements of the solar model given by <cit.>, and it results in Y = 0.248 + 1.3324 Z. The mixing-length parameter α_MLT is fixed to 1.82. Microscopic diffusion and gravitational settling of elements are necessary for stellar models of low-mass stars, as they lead to a modification of the surface abundances and main-sequence (MS) lifetimes <cit.>. Therefore, we include diffusion and gravitational settling using the formulation of <cit.>. We use the solar mixture GS98 from <cit.>. The opacity tables are OPAL high-temperature opacities [<http://opalopacity.llnl.gov/new.html>] supplemented by the low-temperature opacities <cit.>. We customize metal mixtures by introducing two enhancement factors, one for oxygen and one for all other α-elements (i.e., Ne, Mg, Si, S, Ca, and Ti). The two factors are applied in the same way as <cit.> to vary the number density of each element (log N) based on the GS98 solar mixture, as presented in Table <ref>. We make a number of opacity tables by varying the two enhancement factors according to the ranges of [α/Fe] and [O/Fe] values of the star sample. The enhancement values are shown in Table <ref>. For the mixtures with the same oxygen and α-element enhancement factors, we refer to them as the α-enhanced mixture (αEM); otherwise, as the O-enhanced mixture (OEM).
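The conversion from a given ([Fe/H], [α/Fe], [O/Fe]) to the heavy-element mass fraction Z is not spelled out above, but it follows from the GS98 number densities in the first table together with the helium law Y = 0.248 + 1.3324 Z. The following Python sketch is our own illustration of that standard construction (standard atomic weights; isotopic details ignored), and it reproduces the tabulated Z values to within a few percent:

GS98 = {  # element: (log N_sun, atomic weight); log N_H = 12 by convention
    "C": (8.52, 12.011), "N": (7.92, 14.007), "O": (8.83, 15.999),
    "F": (4.56, 18.998), "Ne": (8.08, 20.180), "Na": (6.33, 22.990),
    "Mg": (7.58, 24.305), "Al": (6.47, 26.982), "Si": (7.55, 28.086),
    "P": (5.45, 30.974), "S": (7.33, 32.06), "Cl": (5.50, 35.45),
    "Ar": (6.40, 39.948), "K": (5.12, 39.098), "Ca": (6.36, 40.078),
    "Sc": (3.17, 44.956), "Ti": (5.02, 47.867), "V": (4.00, 50.942),
    "Cr": (5.67, 51.996), "Mn": (5.39, 54.938), "Fe": (7.50, 55.845),
    "Co": (4.92, 58.933), "Ni": (6.25, 58.693),
}
ALPHA = {"Ne", "Mg", "Si", "S", "Ca", "Ti"}   # alpha elements other than O

def initial_Z(feh, alpha_fe, o_fe):
    """Heavy-element mass fraction Z of the OEM mixture for given
    [Fe/H], [alpha/Fe], [O/Fe], using the helium law Y = 0.248 + 1.3324 Z."""
    metal_mass_per_H = 0.0
    for el, (logN, A) in GS98.items():
        enh = o_fe if el == "O" else (alpha_fe if el in ALPHA else 0.0)
        metal_mass_per_H += 10.0 ** (logN - 12.0 + feh + enh) * A
    ZX = metal_mass_per_H / 1.008          # Z/X by mass
    # X + Y + Z = 1 and Y = 0.248 + 1.3324 Z give X = 0.752 - 2.3324 Z,
    # so Z = (Z/X) * X has the closed-form solution:
    return 0.752 * ZX / (1.0 + 2.3324 * ZX)

for feh, afe, ofe in [(0.0, 0.1, 0.1), (0.0, 0.1, 0.5), (-0.6, 0.1, 0.5)]:
    print(f"[Fe/H]={feh:+.1f} [a/Fe]={afe:.1f} [O/Fe]={ofe:.1f}"
          f" -> Z = {initial_Z(feh, afe, ofe):.4f}")

For instance, initial_Z(0, 0.1, 0.1) ≈ 0.019 against the tabulated 0.0197, and initial_Z(-0.6, 0.1, 0.5) ≈ 0.0087 against 0.0089; the small residuals presumably reflect compositional details of the opacity tables not reproduced in this sketch.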
Fundamental Parameters and Chemical Abundance for the Example Stars from GALAH:

Star (sobject_id)   T_eff (K)   [Fe/H] (dex)   Luminosity (L_⊙)   [α/Fe] (dex)   [O/Fe] (dex)   Mass_αEM (M_⊙)   Mass_Buder2021 (M_⊙)   Age_αEM (Gyr)   Age_Buder2021 (Gyr)
171230005802396     6096±76     -0.23±0.06     2.26±0.07          0±0.02          0.02±0.08      1.06±0.03        1.03±0.04              6.08±1.01       6.46±1.17
160529003401378     5846±76     -0.42±0.06     1.67±0.03          0.31±0.03       0.34±0.09      0.97±0.03        0.96±0.03              9.53±1.26       10.04±1.39

* The masses (Mass_Buder2021) and ages (Age_Buder2021) of the two example stars from the GALAH value-added catalog <cit.> are calculated based on PARSEC stellar isochrones (the PAdova and TRieste Stellar Evolution Code) <cit.>.

§.§ Grid Computations
We establish stellar model grids that include the various metal-mixture patterns indicated in Table <ref>. The mass range is from 0.6 to 1.2 M_⊙ with a grid step of 0.02 M_⊙. Input [Fe/H] values range from -1.20 to +0.46 dex with a grid step of 0.02 dex. The computation starts at the Hayashi line and terminates at the end of the main sequence, when the core hydrogen is exhausted (the mass fraction of central hydrogen drops below 10^-12). The inlist file (for MESA) utilized in the computation of our stellar models is available on Zenodo: [doi:10.5281/zenodo.7866625]https://doi.org/10.5281/zenodo.7866625

To explicate the effect of oxygen enhancement on the evolutionary tracks, we show representative evolutionary tracks in Figure <ref>. The corresponding values of Z are listed in Table <ref>. At fixed [Fe/H], the variation of [O/Fe] alters the opacity, which affects the energy transfer efficiency and the thermal structure. We find that a larger [O/Fe] leads to higher opacity at input [Fe/H] ≤ -0.2 and shifts the evolutionary tracks to lower T_eff. As seen in Figure <ref>, at [Fe/H] ≤ -0.2, O-rich models are generally cooler than the α-enhanced models at given input [Fe/H], leading to higher modeling-determined masses (smaller ages) for a given position on the HR diagram (left panel of Figure <ref>). However, at input [Fe/H] = 0, a larger [O/Fe] leads to lower opacity and shifts the evolutionary tracks to higher T_eff. The O-rich models are then slightly hotter than the α-enhanced models. Overall, at fixed mass, the T_eff difference between the two models becomes significant with smaller [Fe/H]. In addition, we note that the 1.1 M_⊙ and 1.2 M_⊙ tracks of O-rich models show different behavior compared with the tracks of 0.7 ∼ 1.0 M_⊙. The O-rich models with 1.1 M_⊙ show a blue hook morphology at [Fe/H] ≤ -0.8, which enlarges the T_eff difference between the two models at this evolutionary phase. At 1.2 M_⊙, both models show a blue hook morphology at the end of the main sequence, and the T_eff difference remains approximately constant at [Fe/H] ≤ -0.6.

Figure <ref> presents the stellar evolution tracks of two example stars calculated with αEM and OEM models. Figure <ref>(a) presents the tracks of a star with observed [α/Fe] ∼ 0.1, [O/Fe] ∼ 0.5. Based on the αEM models (input [α/Fe] = 0.1, [O/Fe] = 0.1), we obtain the best-fit values of the fundamental parameters for this star: mass = 0.87 ± 0.02 M_⊙, age = 8.69 ± 1.49 Gyr (the fitting method is described in detail in Section <ref>). Using the OEM models (input [α/Fe] = 0.1, [O/Fe] = 0.5), we estimate it to be a young star with mass = 0.90 ± 0.02 M_⊙, age = 5.68 ± 1.44 Gyr.
The mean mass of the OEM models ([O/Fe] = 0.5) inside the observational error box is larger than that of the αEM models ([O/Fe] = 0.1), leading to a smaller modeling-determined age for this star. Figure <ref>(b) shows the tracks of a star with observed [α/Fe] ∼ 0.2, [O/Fe] ∼ 0. We obtain a mass of 0.99 ± 0.01 M_⊙ and an age of 10.51 ± 0.60 Gyr for this star with αEM models (input [α/Fe] = 0.2, [O/Fe] = 0.2), and a mass of 0.98 ± 0.02 M_⊙ and an age of 11.34 ± 0.51 Gyr with OEM models (input [α/Fe] = 0.2, [O/Fe] = 0). As seen, the OEM models with input [O/Fe] = 0 are generally hotter than the αEM models ([O/Fe] = 0.2) at fixed mass and [Fe/H], leading to a smaller modeling-determined mass and a larger age for this star.

§.§ Fitting Method
We constrain stellar masses and ages using five observed quantities, i.e., T_eff, luminosity, [Fe/H], [α/Fe], and [O/Fe]. Note that [O/Fe] is not used when estimating parameters with αEM models. We follow the fitting method proposed by <cit.>. According to Bayes' theorem, given the prior information I, we compare model predictions with the corresponding observational properties D to calculate the posterior probability of each model M_i,

p(M_i| D,I) = p(M_i| I) p(D| M_i, I) / p(D| I)

where p(M_i | I) represents the uniform prior probability for a specific model, and p(D | M_i, I) is the likelihood function:

p(D| M_i,I) = L(T_eff,[Fe/H],lum) = L_T_eff L_[Fe/H] L_lum

The p(D | I) in Equation (<ref>) is a normalization factor for the specific model probability:

p(D | I) = ∑_j=1^N_m p(M_j| I) p(D | M_j, I)

where N_m is the total number of selected models. The uniform priors p(M_i | I) cancel, giving the simplified form of Equation (<ref>):

p(M_i| D, I) = p(D | M_i, I) / ∑_j=1^N_m p(D | M_j, I).

Equation (<ref>) then gives the probability distribution over the selected models, from which the most probable fundamental parameters follow. As demonstrated in Figure <ref>, we fit a Gaussian function to the likelihood distribution of mass and age for each star. The mean and standard deviation of the resulting Gaussian profile are then utilized as the median value and uncertainty of the fundamental parameters (mass and age) for each star. To find the stars located near the edge of the model grid, we consider a 3σ error box (i.e., three times the observational error, depicted as a blue square in Figure <ref>) on the HR diagram and divide the error box into 100 bins. For a given star, when more than 5 bins do not contain any theoretical model (sampling rate < 95%), we flag the star with "edge effect".

To assess the accuracy of our models and investigate potential model dependency in age and mass determination, we present a comparison of results obtained from our αEM models, OEM models, and the GALAH DR3 value-added catalog <cit.>. Figure <ref> shows the comparison of age and mass estimations for ∼4,000 GALAH stars, with age uncertainty of less than 30%, based on αEM models, OEM models, and the GALAH DR3 VAC <cit.>. The ages and masses of stars from the GALAH DR3 VAC are calculated using the PARSEC (the PAdova and TRieste Stellar Evolution Code) release v1.2S + COLIBRI stellar isochrones <cit.>, which adopt a solar-scaled metal mixture, i.e., input [α/Fe] = 0. Figure <ref> illustrates that the one-to-one relation of the results is quite good for most stars. It is noteworthy that the adopted approach uses a flat prior on age with an age cap of 13.2 Gyr <cit.>.
Consequently, the ages of the majority of stars from the GALAH DR3 VAC are found to be younger than 12 Gyr (with masses larger than 0.8 M_⊙), which results in a relatively large dispersion of age differences, amounting to 12.4% for αEM models and 13.0% for OEM models. Significant systematic differences are apparent between PARSEC and the αEM models in Figure <ref>(a-b), with the former indicating 2.3% older ages and 1.5% smaller masses than the latter. These discrepancies could be attributed to differences in the input physics employed by the two models, such as the input [α/Fe] value, helium abundance, and mixing-length parameter. In Figure <ref>(c-d), PARSEC yields 5.5% older ages and 1.9% smaller masses than the OEM models. Compared with the αEM models, the OEM models demonstrate more pronounced systematic differences from PARSEC. These distinctions primarily arise from the consideration of O-enhancement in the OEM models, leading to younger ages and higher masses. In addition, a comparison of results obtained from our αEM models and the Yonsei–Yale <cit.> stellar isochrones is shown in Figure <ref> in the Appendix.

§ RESULTS
This work aims to determine the ages of dwarfs considering oxygen abundance and to study the chemical and kinematic properties of high-α and low-α populations in the Galactic disk. We give the masses and ages of 149,906 LAMOST dwarfs and 15,591 GALAH dwarfs with αEM models and OEM models. We remove ∼30% of stars with sampling rate < 95%, located near the edge of the model grid. In addition, we remove ∼3% of stars whose inferred ages are 2σ[For a certain star, age - 2*age_uncertainty > 13.8 Gyr.] larger than the age of the universe <cit.> due to their significant model systematic bias. Finally, we remove ∼35% of stars that have relative age uncertainty larger than 30 percent. After these cuts, we obtain the ages of 67,503 dwarfs from LAMOST with a median age uncertainty of ∼16%, and 4,006 dwarfs from GALAH with a median age uncertainty of ∼18%. The age estimation of dwarf stars is inherently accompanied by considerable uncertainty, which can reach up to 30% within our sample. Furthermore, uncertainties (especially systematic errors) in the atmosphere parameters can introduce biases in the age estimation. Consequently, a minority of stars in our sample exhibits ages that exceed the age of the universe. This occurrence is not uncommon, as even samples of subgiants with more precise age determinations have encountered analogous cases <cit.>.

§.§ Oxygen Effect on Age Determinations
§.§.§ Mock Data Test
Most of the stars in both the LAMOST and GALAH samples are distributed in a relatively narrow range of [Fe/H] (-0.5 dex to +0.5 dex). To systematically investigate the effect of O-enhancement on age determinations over a wide range of T_eff and [Fe/H], we apply a mock data test based on our grid of stellar models. For each set of stellar model grids with fixed [Fe/H], [α/Fe], and [O/Fe] values, we draw random samples from the distributions of stellar evolution tracks in the H-R diagram. We adopt 0.05 dex and 30 K as the observational errors for [Fe/H] and T_eff, and a fractional error of 2% for luminosity. Finally, we generate mock data of 0.15 million stars with age uncertainty of less than 30 percent. Figure <ref>(a) shows the distribution of mock stars on the HR diagram. Figure <ref>(b-c) presents a comparison between the mock data and the observational data for the T_eff and [Fe/H] distributions.
Comparing the mock data with the LAMOST or GALAH dwarfs, the mock stars cover wider ranges of T_eff (5000 K - 7000 K) and [Fe/H] (-1.0 dex to +0.4 dex). Therefore, the mock data are useful for statistical studies of the oxygen effect on age determinations. Figure <ref> shows a comparison between ages determined with αEM models (τ_αEM) and OEM models (τ_OEM). The mock stars are grouped by their [Fe/H] and [O/α] values. The stars with [O/α] > 0 are hereafter referred to as high-O stars and the stars with [O/α] < 0 as low-O stars. Generally, high-O stars have younger ages based on OEM models, while low-O stars become older. The effect of oxygen enhancement on age determination is relatively significant for stars with [Fe/H] < -0.2. At [O/α] = -0.2, the mean fractional age difference ((τ_OEM - τ_αEM)/τ_αEM) is 10.5% for metal-rich stars (-0.2 < [Fe/H] < 0.2), and 15.5% for relatively metal-poor stars (-1 < [Fe/H] < -0.2). The mean fractional age difference at [O/α] = 0.2 is -9.2% for metal-rich stars, and -16.5% for relatively metal-poor stars. The largest fractional age difference comes from high-O stars with [O/α] = 0.4, which have a mean fractional age difference of -20.2% at -0.2 < [Fe/H] < 0.2, and -30.6% at -1 < [Fe/H] < -0.2. We find clear age offsets that correlate with the [Fe/H] and [O/α] values. Increasing [O/α] by 0.2 dex will reduce the age estimates of metal-rich stars by ∼10%, and of metal-poor stars by ∼15%. The mock data provide us with far more stars at the metal-poor edge than the observational data, allowing the age differences at different [O/α] and [Fe/H] values to be presented clearly.

§.§.§ Observational Data
Figure <ref> presents the fractional age differences between αEM and OEM models for the observational (LAMOST and GALAH) and mock data. The overall average age offset (absolute value of the age difference) of stars from LAMOST and GALAH is 8.9% and 8.6%, respectively. Of the low-O stars with [Fe/H] < 0.1 dex and [O/α] ∼ -0.2 dex, many have fractional age differences of ≥ 10%, even reaching up to 27%. The mean fractional age difference of high-O stars with [O/α] ∼ 0.4 dex is ∼ -25%. The age offsets are relatively significant for metal-poor stars. The largest age differences are -33% to -42% for stars with [Fe/H] ≲ -0.6 dex and [O/α] ∼ 0.4 dex. For the mock data, we note that the trend of age offsets versus [Fe/H] is consistent with that of the observational data. The age offsets of both samples increase significantly with decreasing metallicity at [Fe/H] ≳ -0.6. Interestingly, there is a slight increase in the age offsets with decreasing metallicity at [Fe/H] < -0.6. This trend of the age offsets is consistent with the change of the T_eff difference as a function of [Fe/H] (shown in Figure <ref>), as discussed in Section <ref>.

§.§ Age-Abundance Relations
To trace the chemical evolution history of the Galactic disk, we hereby present the age-abundance relations of the LAMOST sample (consisting of 67,503 stars) and the GALAH sample (consisting of 4,006 stars), using the ages from the OEM models. For each sample, we employ local nonparametric regression fitting (a LOESS model) to characterize the trends in these relations with enhanced clarity. Figure <ref> illustrates the results for the LAMOST sample. In Figure <ref>(a), a gradual decline in [Fe/H] is observed across the age range of ∼9 Gyr to ∼6.5 Gyr. This trend shows similarities to the metal-rich branch observed in young stars (age < 8 Gyr) as found by <cit.>, where the metallicity range of their metal-rich branch stars spans approximately -0.2 to +0.4.
Notably, <cit.> also identifies a trend comparable to our findings, whereby their sample exhibits a [Fe/H] value of 0.4 at 8 Gyr, diminishing to around -0.2 at 6 Gyr. The "two-infall" chemical evolution model <cit.> predicts a process involving the infall of metal-poor gas commencing roughly 9.4 Gyr ago <cit.>. The observed trend of decreasing metallicity from 9 Gyr to 6.5 Gyr in our results may be related to this infalling metal-poor gas. Intriguingly, this "two-infall" model not only anticipates a decline in metallicity but also predicts an increase in the oxygen abundance, which is consistent with the observed trend illustrated in Figure <ref>(b). In Figure <ref>(b), the sample stars from LAMOST exhibit an increase in [O/Fe] as the age decreases from 9 Gyr to 4 Gyr, indicating a slight enrichment of oxygen in the younger stellar population.

Figure <ref> presents the results for the GALAH sample. It is noteworthy that the GALAH stars display a decrease in [Fe/H] from ∼7.5 Gyr to 5 Gyr. Furthermore, the [O/Fe] of the GALAH stars exhibits a slight decrease with age over the range ∼7.5 Gyr to 3 Gyr. The GALAH sample exhibits age-[Fe/H] and age-[O/Fe] trends similar to those observed in LAMOST; however, an overall slight temporal discrepancy can be observed. This incongruity may be ascribed to dissimilarities in sample composition or systematic differences in atmospheric parameters between the two survey datasets. The GALAH sample, on the whole, exhibits higher temperatures compared to the LAMOST sample (5000 - 5700 K), indicating a relatively younger population. Furthermore, the determinations of [Fe/H] and [O/Fe] from GALAH are based on a non-LTE method <cit.>, which can also impact the observed trends.

In conclusion, the analysis of the LAMOST and GALAH samples reveals a decreasing trend of [Fe/H] with age from 7.5–9 Gyr to 5–6.5 Gyr, and a notable upward trend in [O/Fe] as the age decreases from 7.5–9 Gyr to 3–4 Gyr. These results agree with the prediction of the "two-infall" scenario and suggest that a metal-poor and O-rich gas gradually dominates the star formation from 7.5–9 Gyr ago. As discussed in Section <ref>, oxygen has a unique origin, being primarily produced by CCSNe <cit.>. Therefore, the observed age-[O/Fe] trend plays a distinct role in characterizing the chemical evolution history of the Milky Way and constraining chemical evolution models. Neglecting to account for the independent enhancement of the oxygen abundance in age determination would result in significant age biases, as discussed in Section <ref>. Such biases would obscure the age-[O/Fe] relation, as depicted in Figure <ref> in the appendix, where the rising trend of [O/Fe] with decreasing age remains imperceptible at age < 9 Gyr. Therefore, we suggest that considering the oxygen abundance independently in stellar models is crucial. This would aid in accurately characterizing the age-[O/Fe] relation and provide better constraints for Galactic chemical evolution models.

§ CONCLUSIONS
To determine the ages of dwarfs considering the observed oxygen abundance, we construct a grid of stellar models which take into account oxygen abundance as an independent model input. We generate mock data with 0.15 million mock stars to systematically study the effect of oxygen abundance on age determination.
Based on the α-enhanced models and O-enhanced models, we obtain the masses and ages of 67,503 stars from LAMOST and 4,006 stars from GALAH, and analyze the chemical and kinematic properties of these stars combined with the ages from the O-enhanced models. Our main conclusions are summarized as follows:

(1) The ages of high-O stars based on O-enhanced models are smaller compared with those determined with α-enhanced models, while low-O stars become older. We find clear age offsets that correlate with the [Fe/H] and [O/α] values. Varying [O/α] by 0.2 dex will alter the age estimates of metal-rich (-0.2 < [Fe/H] < 0.2) stars by ∼10%, and of relatively metal-poor (-1 < [Fe/H] < -0.2) stars by ∼15%.

(2) The overall average age offset (absolute value of the age difference) between α-enhanced models and O-enhanced models is 8.9% for LAMOST stars and 8.6% for GALAH stars. Of the low-O stars with [Fe/H] < 0.1 dex and [O/α] ∼ -0.2 dex, many have fractional age differences of ≥ 10%, even reaching up to 27%. The mean fractional age difference of high-O stars with [O/α] ∼ 0.4 dex is ∼ -25%, reaching -33% to -42% at [Fe/H] ≲ -0.6 dex.

(3) Based on the LAMOST and GALAH samples, we observe a decreasing trend of [Fe/H] with age from 7.5–9 Gyr to 5–6.5 Gyr. Furthermore, the [O/Fe] of both sample stars increases with decreasing age from 7.5–9 Gyr to 3–4 Gyr, which indicates that the younger population of these stars is more O-rich. Our results agree with the prediction of the "two-infall" scenario and suggest that a metal-poor and O-rich gas gradually dominates the star formation from 7.5–9 Gyr ago.

We thank the anonymous referee for valuable comments and suggestions that have significantly improved the presentation of the manuscript. This work is based on data acquired through the Guoshoujing Telescope. The Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope; LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. This work used the data from the GALAH survey, which is based on observations made at the Anglo-Australian Telescope, under programs A/2013B/13, A/2014A/25, A/2015A/19, A/2017A/18, and 2020B/23. This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This work is supported by the National Key R&D Program of China No. 2019YFA0405503, the Joint Research Fund in Astronomy (U2031203) under cooperative agreement between the National Natural Science Foundation of China (NSFC) and the Chinese Academy of Sciences (CAS), and NSFC grants (12090040, 12090042). This work is partially supported by the CSST project and the Scholar Program of Beijing Academy of Science and Technology (DZ:BS202002). This paper has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (CartographY GA. 804752).
Figure <ref> depicts the age and mass determinations for ∼15,000 LAMOST stars (with [α/Fe] ∼ 0.1) and reveals a satisfactory correspondence between the αEM models and the YY isochrones <cit.>, as the dispersion of the relative age and mass differences is only 6.4% and 1.1%, respectively, between these two models. However, slight systematic differences are visible in this comparison, as YY yields 3.6% older ages and 0.4% smaller masses than the αEM models.
http://arxiv.org/abs/2307.04708v1
20230710171405
Topological recursion of the Weil-Petersson volumes of hyperbolic surfaces with tight boundaries
[ "Timothy Budd", "Bart Zonneveld" ]
math-ph
[ "math-ph", "hep-th", "math.AG", "math.GT", "math.MP" ]
Topological recursion of the Weil–Petersson volumes of hyperbolic surfaces with tight boundaries
Timothy Budd and Bart Zonneveld
IMAPP, Radboud University, Nijmegen, The Netherlands.
August 12, 2023
=================================================================================================

The Weil–Petersson volumes of moduli spaces of hyperbolic surfaces with geodesic boundaries are known to be given by polynomials in the boundary lengths. These polynomials satisfy Mirzakhani's recursion formula, which fits into the general framework of topological recursion. We generalize the recursion to hyperbolic surfaces with any number of special geodesic boundaries that are required to be tight. A special boundary is tight if it has minimal length among all curves that separate it from the other special boundaries. The Weil–Petersson volume of this restricted family of hyperbolic surfaces is shown again to be polynomial in the boundary lengths. This remains true when we allow conical defects in the surface with cone angles in (0,π) in addition to geodesic boundaries. Moreover, the generating function of Weil–Petersson volumes with fixed genus and a fixed number of special boundaries is polynomial as well, and satisfies a topological recursion that generalizes Mirzakhani's formula. This work is largely inspired by recent works by Bouttier, Guitter & Miermont on the enumeration of planar maps with tight boundaries. Our proof relies on the equivalence of Mirzakhani's recursion formula to a sequence of partial differential equations (known as the Virasoro constraints) on the generating function of intersection numbers. Finally, we discuss a connection with JT gravity. We show that the multi-boundary correlators of JT gravity with defects (cone points or FZZT branes) are expressible in the tight Weil–Petersson volume generating functions, using a tight generalization of the JT trumpet partition function.

§ INTRODUCTION
§.§ Topological recursion of Weil–Petersson volumes
In the celebrated work <cit.> Mirzakhani established a recursion formula for the Weil–Petersson volume V_g,n(𝐋) of the moduli space of genus-g hyperbolic surfaces with n labeled boundaries of lengths 𝐋 = (L_1, …, L_n) ∈ℝ_>0^n. Denoting [n] = {1,2,…,n} and using the notation 𝐋_I = (L_i)_i∈ I, I⊂[n], for a subsequence of 𝐋 and 𝐋_I = 𝐋_[n]∖ I, the recursion can be expressed for (g,n)∉{(0,3),(1,1)} as

V_g,n(𝐋) = 1/(2L_1)∫_0^L_1 dt∫_0^∞ dx∫_0^∞ dy  xy K_0(x+y,t) [ V_g-1,n+1(x,y,𝐋_{1}) + ∑_g_1+g_2=g, I⨿ J={2,…,n} V_g_1,1+|I|(x,𝐋_I) V_g_2,1+|J|(y,𝐋_J)]
+ 1/(2L_1)∫_0^L_1 dt∫_0^∞ dx∑_j=2^n x (K_0(x,t+L_j) + K_0(x,t-L_j)) V_g,n-1(x,𝐋_{1,j}),

where

K_0(x,t) = 1/(1+e^((x+t)/2)) + 1/(1+e^((x-t)/2)).

Together with V_0,3(𝐋) = 1 and V_1,1(𝐋) = L_1^2/48 + π^2/12 this completely determines V_g,n as a symmetric polynomial in L_1^2, …, L_n^2 of degree 3g-3+n. This recursion formula remains valid <cit.> when we replace one or more of the boundaries by cone points with cone angle α_i ∈ (0,π) if we assign to it an imaginary boundary length L_i = i α_i. Cone points with angles in (0,π) are called sharp, as opposed to blunt cone points that have angle in (π,2π). The Weil–Petersson volume of the moduli space of genus-g surfaces with n geodesic boundaries or sharp cone points is thus correctly computed by the polynomial V_g,n(𝐋). It was recognized by Eynard & Orantin <cit.> that Mirzakhani's recursion (in the case of geodesic boundaries) fits the general framework of topological recursion.
To state this result explicitly one introduces for any g,n≥0 satisfying 3g-3+n ≥ 0 the Laplace-transformed[Note that, due to the extra factors L_i in the integrand, ω_g,n^(0)(𝐳) is (-1)^n times the partial derivative in each of the variables z_1,…,z_n of the Laplace transform of V_g,n(𝐋), but we will refer to ω_g,n^(0)(𝐳) as the Laplace-transformed Weil–Petersson volumes nonetheless.] Weil–Petersson volumes

ω_g,n^(0)(𝐳)= ∫_0^∞[∏_i=1^n dL_i L_i e^-z_i L_i] V_g,n(𝐋),

which are even polynomials in z_1^-1, …,z_n^-1 of degree 6g-4+2n, while setting ω_0,1^(0)(𝐳)= 0, ω_0,2^(0)(𝐳) = 1/(z_1-z_2)^2. Then Mirzakhani's recursion (<ref>) translates into the recursion <cit.>

ω_g,n^(0)(𝐳) = Res_u→0 π/((z_1^2-u^2) sin(2π u)) [ω^(0)_g-1,n+1(u,-u,𝐳_{1})+ ∑_g_1+g_2=g, I⨿ J={2,…,n}ω^(0)_g_1,1+|I|(u,𝐳_I)ω^(0)_g_2,1+|J|(-u,𝐳_J)]

valid when g,n≥ 0 and 3g-3+n ≥ 0, which one may recognize as the recursion for the invariants ω^(0)_g,n(𝐳) of the complex curve

x(z) = z^2, y(z) = (1/π) sin(2π z).

§.§ Hyperbolic surfaces with tight boundaries
Let S_g,n be a fixed topological surface of genus g with n boundaries and 𝒯_g,n(𝐋) the Teichmüller space of hyperbolic structures on S_g,n with geodesic boundaries of lengths 𝐋 = (L_1,…,L_n) ∈ℝ_>0^n. Denote the boundary cycles[Our constructions will not rely on an orientation of the boundary cycles, but for definiteness we may take them clockwise (keeping the surface on the left-hand side when following the boundary).] by ∂_1,…,∂_n and the free homotopy class of a cycle γ in S_g,n by [γ]_S_g,n. For a hyperbolic surface X ∈𝒯_g,n(𝐋) and a cycle γ, we denote by ℓ_γ(X) the length of γ; in particular ℓ_∂_i(X) = L_i. The mapping class group of S_g,n is denoted Mod_g,n and the quotient of 𝒯_g,n(𝐋) by its action leads to the moduli space ℳ_g,n(𝐋) = 𝒯_g,n(𝐋)/Mod_g,n. Let us denote by S_g,n,p⊃ S_g,n+p the topological surface obtained from S_g,n+p by capping off the last p boundaries with disks (Figure <ref>). Note that the free homotopy classes [·]_S_g,n+p of S_g,n+p are naturally partitioned into the free homotopy classes [·]_S_g,n,p of S_g,n,p. In particular, [∂_j]_S_g,n+p for j=n+1,…,n+p are all contained in the null-homotopy class of S_g,n,p. For i=1,…,n the boundary ∂_i of X ∈ℳ_g,n+p(𝐋) is said to be tight in S_g,n,p if ∂_i is the only simple cycle γ in [∂_i]_S_g,n,p of length ℓ_γ(X) ≤ L_i. Remark that both [∂_i]_S_g,n+p and [∂_i]_S_g,n,p for i=1,…,n are Mod_g,n+p-invariant, so these classes are well-defined at the level of the moduli space. This allows us to introduce the moduli space of tight hyperbolic surfaces

ℳ^tight_g,n,p(𝐋) = { X ∈ℳ_g,n+p(𝐋) : ∂_1,…,∂_n are tight in S_g,n,p}⊂ℳ_g,n+p(𝐋).

Note that ℳ^tight_g,n,0(𝐋) = ℳ^tight_g,0,n(𝐋) = ℳ_g,n(𝐋), while ℳ^tight_0,1,p(𝐋) = ∅ because ∂_1 is null-homotopic and ℳ^tight_0,2,p(𝐋) = ∅ because [∂_1]_S_0,2,p=[∂_2]_S_0,2,p and therefore ∂_1 and ∂_2 can never both be the unique shortest cycle in their class. In general, it is an open subset of ℳ_g,n+p(𝐋) and therefore it inherits the Weil–Petersson symplectic structure and Weil–Petersson measure μ_WP from ℳ_g,n+p(𝐋). The corresponding tight Weil–Petersson volumes are denoted

T_g,n,p(𝐋) = ∫_ℳ^tight_g,n,p(𝐋)μ_WP ≤ V_g,n+p(𝐋),

such that T_g,n,0(𝐋) = T_g,0,n(𝐋) = V_g,n(𝐋) and T_0,1,p(𝐋)=T_0,2,p(𝐋)=0. We can extend this definition to the case in which one or more of the boundaries ∂_n+1,…,∂_n+p is replaced by a sharp cone point with cone angle α_i ∈ (0,π).
In this case we make the usual identification L_i = i α_i, and still denote the corresponding Weil–Petersson volume by T_g,n,p(𝐋). Our first result is the following.

For g,n,p≥ 0 such that 3g-3 + n ≥ 0, the tight Weil–Petersson volume T_g,n,p(𝐋) of genus g surfaces with n tight boundaries and p geodesic boundaries or sharp cone points is a polynomial in L_1^2, …, L_n+p^2 of degree 3g-3+n+p that is symmetric in L_1,…, L_n and symmetric in L_n+1,…,L_n+p.

For most of the upcoming results we maintain the intuitive picture that the tight boundaries are the "real" boundaries of the surface, whose number and lengths we specify, while we allow for an arbitrary number of other boundaries or cone points that we treat as defects in the surface. To this end we would like to encode the volume polynomials in generating functions that sum over the number of defects with appropriate weights. A priori it is not entirely clear what is the best way to organize such generating functions, so to motivate our definition we take a detour to a natural application of Weil–Petersson volumes in random hyperbolic surfaces.

§.§ Intermezzo: Random (tight) hyperbolic surfaces
If we fix g, n and 𝐋∈ ([0,∞) ∪ i (0,π))^n, then upon normalization by 1/V_g,n(𝐋) the Weil–Petersson measure μ_WP provides a well-studied probability measure on ℳ_g,n(𝐋) defining the Weil–Petersson random hyperbolic surface, see e.g. <cit.>. A natural way to extend the randomness to the boundary lengths or cone angles is by choosing a (Borel) measure μ on [0,∞) ∪ i (0,π) and first sampling 𝐋∈ ([0,∞) ∪ i (0,π))^n from the probability measure

(1/μ^⊗ n(V_g,n)) V_g,n(𝐋)μ(L_1)⋯μ(L_n), where μ^⊗ n(V_g,n) = ∫ V_g,n(𝐋)μ(L_1)⋯μ(L_n),

and then sampling a Weil–Petersson random hyperbolic surface on ℳ_g,n(𝐋). If the genus-g partition function[We use the physicists' convention of writing the argument μ in square brackets to signal it is a functional dependence (in the sense of calculus of variations).]

F_g[μ] = ∑_n≥ 0μ^⊗ n(V_g,n)/n!

converges, we can furthermore make the size n≥ 0 random by sampling it with the probability μ^⊗ n(V_g,n)/(n! F_g[μ]). The resulting random surface (of random size) is called the genus-g Boltzmann hyperbolic surface with weight μ. See the upcoming work <cit.> for some of its statistical properties. A natural extension is to consider the genus-g Boltzmann hyperbolic surface with n tight boundaries of length 𝐋=(L_1,…,L_n), where the number p of defects and their boundary lengths/cone angles 𝐊=(K_1,…,K_p) are random. The corresponding partition function is

T_g,n(𝐋;μ] = ∑_p≥ 0μ^⊗ p(T_g,n,p)(𝐋)/p!, where μ^⊗ p(T_g,n,p)(𝐋) = ∫ T_g,n,p(𝐋,𝐊)μ(K_1)⋯μ(K_p).

If it is finite, we can sample p with probability μ^⊗ p(T_g,n,p)(𝐋)/(p!T_g,n(𝐋;μ]), then 𝐊 from the probability measure (1/μ^⊗ p(T_g,n,p)(𝐋)) T_g,n,p(𝐋,𝐊)μ(K_1)⋯μ(K_p), and then finally a random tight hyperbolic surface from the probability measure μ_WP/T_g,n,p(𝐋,𝐊) on ℳ^tight_g,n,p(𝐋,𝐊). Note that for μ=0 the genus-g Boltzmann hyperbolic surface with n tight boundaries reduces to the Weil–Petersson random hyperbolic surface we started with. The important observation for the current work is that the partition functions F_g[μ] and T_g,n(𝐋;μ] of these random surfaces can be thought of as (multivariate, exponential) generating functions of the volumes V_g,n(𝐋) and T_g,n,p(𝐋,𝐊) if we treat μ as a formal generating variable.
Since we will not be concerned with the details of the measures (<ref>) and (<ref>), and F_g[μ] and T_g,n(𝐋;μ] only depend on the even moments ∫ L^2kμ(L), we can instead take these moments as the generating variables.

§.§ Generating functions
To be precise, we let a weight μ be a real linear function on the ring of even, real polynomials (i.e., μ∈ℝ[K^2]^*). For an even real polynomial f we use the suggestive notation μ(f) = ∫μ(K) f(K), making it clear that the notion of weight generalizes the Borel measure described in the intermezzo above. For L ∈ [0,∞)∪ i(0,π), the Borel measure given by the delta measure δ_L at L gives a simple example of a weight μ=δ_L satisfying δ_L(f) = f(L). The choice of weight μ is clearly equivalent to the choice of a sequence of times (t_0,t_1,…) ∈ℝ^ℤ_≥0 recording the evaluations of μ on the even monomials, up to a conventional normalization,

t_k[μ] = (2/(4^k k!)) μ(K^2k) = ∫μ(K) 2K^2k/(4^k k!).

Naturally we can interpret μ^⊗ p∈ (ℝ[K^2]^*)^⊗ p as an element of (ℝ[K^2]^⊗ p)^* ≅ℝ[K_1^2,…,K_p^2]^* by setting μ^⊗ p(f_1(K_1)⋯ f_p(K_p)) = μ(f_1)⋯μ(f_p) for even polynomials f_1,…,f_p and extending by linearity. More generally, we can view μ^⊗ p as a linear map ℝ[L_1^2,…,L_n^2,K_1^2,…,K_p^2] ≅ℝ[L_1^2,…,L_n^2][K_1^2,…,K_p^2] →ℝ[L_1^2,…,L_n^2]. We use the notation

μ^⊗ p(f) = ∫ f(𝐊) μ(K_1)⋯μ(K_p), μ^⊗ p(f)(𝐋) = ∫ f(𝐋,𝐊) μ(K_1)⋯μ(K_p).

One can then naturally introduce the generating function F[μ] of a collection of symmetric, even polynomials f_1(L_1),f_2(L_1,L_2),… via F[μ] = ∑_p≥ 0μ^⊗ p(f_p)/p!. Then the generating function of tight Weil–Petersson volumes is defined to be

T_g,n(𝐋;μ] = ∑_p=0^∞μ^⊗ p(T_g,n,p)(𝐋)/p! = ∑_p=0^∞ (1/p!) ∫μ(K_1)⋯∫μ(K_p) T_g,n,p(𝐋,𝐊),

which we interpret in the sense of a formal power series, so we do not have to worry about convergence. We could make this more precise by fixing a weight μ and considering T_g,n(𝐋;x μ] ∈ℝ[[x]] as a univariate formal power series in x. Or we could view T_g,n(𝐋;μ] as a multivariate formal power series in the times (t_0,t_1,…) defined in (<ref>). What is important is that we can make sense of the functional derivative δ/δμ(L) on these types of series, defined by

δ/δμ(L) P[μ] = ∂/∂ x P[μ + x δ_L] |_x=0.

In particular, if f(𝐋,𝐊), with 𝐋 = (L_1,…,L_n) and 𝐊 = (K_1,…,K_p), is an even polynomial that is symmetric in K_1,…,K_p then

δ/δμ(L) μ^⊗ p(f)(𝐋) = p μ^⊗ p-1(f)(𝐋,L).

At the level of the generating function we thus have

δ/δμ(L) T_g,n(𝐋;μ] = ∑_p=0^∞ (1/p!) ∫μ(K_1)⋯∫μ(K_p) T_g,n,p+1(𝐋,L,𝐊).

In terms of formal power series in the times we may instead identify the functional derivative in terms of the formal partial derivatives as

δ/δμ(L) = ∑_k=0^∞ (2 L^2k/(4^k k!)) ∂/∂ t_k, δ/δμ(0) = 2∂/∂ t_0.

§.§ Main results
To state our main results about T_g,n(𝐋;μ], we need to introduce the generating function R[μ] as the unique formal power series solution satisfying R[μ] = ∫μ(L) + O(μ^2) to Z(R[μ];μ] = 0, where

Z(r;μ] = (√(r)/(√(2)π)) J_1(2π√(2r)) - ∫μ(L) I_0(L√(2r))

and I_0 and J_1 are (modified) Bessel functions. Let also the moments M_k[μ] be defined recursively via

M_0[μ] = (δ R[μ]/δμ(0))^-1, M_k[μ] = M_0[μ] δ M_k-1[μ]/δμ(0), k≥ 1,

where the reciprocal in the first identity makes sense because δ R[μ]/δμ(0) = 1 + O(μ). Alternatively, for k≥0 we may express M_k as

M_k[μ] = Z^(k+1)(R[μ];μ] = (-√(2)π/√(R[μ]))^k J_k(2π√(2R[μ])) - ∫μ(L) (L/√(2R[μ]))^k+1 I_k+1(L√(2 R[μ])),

where Z^(k+1)(r;μ] denotes the (k+1)th derivative of Z(r;μ] with respect to r.
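For a concrete weight these definitions can be evaluated numerically. The following Python sketch is our own illustration (the atomic weight μ = λδ_K with λ = 0.01, K = 1 is an arbitrary choice; for λ too large the root R[μ] ceases to exist as a real number, reflecting the formal character of the power series): it solves Z(R[μ];μ] = 0 and evaluates the closed-form moments M_k[μ]:

import numpy as np
from math import factorial
from scipy.optimize import brentq
from scipy.special import iv, jv

lam, K = 0.01, 1.0   # weight mu = lam * delta_K: a single atom at length K

def Zfun(r):
    # Z(r; mu] = sqrt(r)/(sqrt(2)*pi) * J_1(2*pi*sqrt(2r)) - lam * I_0(K*sqrt(2r))
    s = np.sqrt(2.0 * r)
    return np.sqrt(r) / (np.sqrt(2.0) * np.pi) * jv(1, 2.0 * np.pi * s) \
        - lam * iv(0, K * s)

# R[mu] = lam + O(lam^2), so for small lam the root sits close to lam.
R = brentq(Zfun, 1e-12, 0.05)

def M(k):
    # closed-form moments M_k = Z^{(k+1)}(R[mu]; mu]
    s = np.sqrt(2.0 * R)
    return (-np.sqrt(2.0) * np.pi / np.sqrt(R)) ** k * jv(k, 2.0 * np.pi * s) \
        - lam * (K / s) ** (k + 1) * iv(k + 1, K * s)

print("R =", R)
for k in range(4):
    print(f"M_{k} = {M(k):+10.4f}   (mu = 0 value: {(-2*np.pi**2)**k/factorial(k):+10.4f})")

For λ → 0 the printed moments approach the values M_k[0] = (-2π^2)^k/k! appearing in the μ = 0 checks below.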
We further consider the series η(u;μ]=∑_p=0^∞ M_p[μ] u^2p/(2p+1)!!, X̂(u;μ] = sin(2π u)/2π u η(u;μ], which we both interpret as formal power series in u with coefficients that are formal power series in μ. The reciprocal in the second definition is well-defined because η(0) = M_0[μ] = 1 + O(μ). We can now state our main result that generalizes Mirzakhani's recursion formula. The tight Weil–Petersson volume generating functions T_g,n(𝐋) satisfy T_g,n(𝐋) = 1/2L_1∫_0^L_1t∫_0^∞x∫_0^∞y xy K(x+y,t;μ] [ T_g-1,n+1(x,y,𝐋_{1}) 370mu +∑_g_1+g_2=g I⨿ J={2,…,n} T_g_1,1+|I|(x,𝐋_I) T_g_2,1+|J|(y,𝐋_J)] +1/2L_1∫_0^L_1t∫_0^∞x∑_j=2^n x (K(x,t+L_j;μ] + K(x,t-L_j;μ]) T_g,n-1(x,𝐋_{1,j}), which is the same recursion formula (<ref>) as for the Weil–Petersson volumes V_g,n(𝐋) except that the kernel K_0(x,t) is replaced by the “convolution” K(x,t,μ] = ∫_-∞^∞ X(z) K_0(x+z,t), where X(z) = X(z;μ] is a measure on determined by its two-sided Laplace transform ∫_-∞^∞ X(z) e^-uz = X̂(u;μ] = sin(2π u)/2π u η(u;μ]. Furthermore, we have T_0,3(L_1,L_2,L_3;μ] =1/M_0[μ], T_1,1(L;μ] =-M_1[μ]/24M_0[μ]^2+L^2/48M_0[μ]. We will not specify precisely what it means to have a measure X(z;μ] that itself is a formal power series in μ. Importantly its moments ∫_-∞^∞ X(z) z^p = (-1)^p p![u^p]X̂(u;μ] are formal power series in μ, so for any x,t∈ K(x,t;μ] = ∑_p=0^∞ (-1)^p ∂^p/∂ x^pK_0(x,t) [u^p]X̂(u;μ] is a formal power series in μ as well. In the case μ=0, it is easily verified that M_k[0] = (-2π^2)^k/k!, η(u,0] = sin(2π u)/2π u, so X̂(u;0] = 1 and X(z;0] = δ_0(z) and therefore one retrieves Mirzakhani's kernel K(x,t) = K_0(x,t). Given that the form of Mirzakhani's recursion is unchanged except for the kernel, this strongly suggests that the Laplace transforms ω_g,n(𝐳)=ω_g,n(𝐳;μ]∫_0^∞[∏_i=1^n L_i L_i e^-z_i L_i] T_g,n(𝐋;μ] of the tight Weil–Petersson volumes can be obtained as invariants in the framework of topological recursion as well. When μ=0 this reduces to the Laplace-transformed Weil–Petersson volumes ω_g,n(𝐳;0] = ω_g,n^(0)(𝐳) defined in (<ref>). The following theorem shows that this is the case in general. Setting ω_0,2(𝐳)=(z_1-z_2)^-2 and ω_0,0(𝐳) = ω_0,1(𝐳)=0, the Laplace transforms (<ref>) satisfy for every g,n≥ 0 such that 3g-3+n ≥ 0 the recursion ω_g,n(𝐳) =_u→01/2u(z_1^2-u^2)η(u;μ][ω_g-1,n+1(u,-u,𝐳_{1})+ ∑_g_1+g_2=g I⨿ J={2,…,n}ω_g_1,|I|(u,𝐳_I)ω_g_2,|J|(-u,𝐳_J)]. These correspond precisely to the invariants of the curve x=z^2 y=2z η(z;μ]. Another consequence of Theorem <ref> is that the tight Weil–Petersson volumes T_g,n(𝐋;μ] for all g,n≥ 0, such that n≥ 3 for g=0 and n≥ 1 for g=1, are expressible as a rational polynomial in L_1^2,…, L_n^2 and M_0^-1, M_1 , M_2, …. Besides satisfying a recursion in the genus g and the number of tight boundaries n, these also satisfy a recurrence relation in n only. For all g,n≥ 0, such that n≥ 3 for g=0 and n≥ 1 for g=1, we have that T_g,n(𝐋;μ] = 1/M_0^2g-2+n𝒫_g,n(𝐋,M_1/M_0,…,M_3g-3+n/M_0), where 𝒫_g,n(𝐋,𝐦) is a rational polynomial in L_1^2,…,L_n^2,m_1,…,m_3g-3+n. This polynomial is symmetric and of degree 3g-3+n in L_1^2,…,L_n^2, while 𝒫_g,n(√(σ)𝐋,σ m_1,σ^2 m_2, σ^3 m_3, …) is homogeneous of degree 3g-3+n in σ. For all g≥ 0, n≥ 1 such that 2g-3+n>0 the polynomial 𝒫_g,n(𝐋,𝐦) can be obtained from 𝒫_g,n-1(𝐋,𝐦) via the recursion relation 𝒫_g,n(𝐋,𝐦) = ∑_p=1^3g-4+n(m_p+1 - L_1^2p+2/2^p+1(p+1)!-m_1 m_p + 1/2L_1^2 m_p) ∂𝒫_g,n-1/∂ m_p(𝐋_{1},𝐦) + (2g-3+n)(-m_1+12 L_1^2) 𝒫_g,n-1(𝐋_{1},𝐦) + ∑_i=2^n ∫L_i L_i 𝒫_g,n-1(𝐋_{1},𝐦), where we use the shorthand notation ∫L L f(L,…) = ∫_0^L x x f(x,…). 
Furthermore, we have 𝒫_0,3(𝐋) =1 𝒫_1,1(L_1,m_1) =1/24(-m_1+12 L_1^2) and 𝒫_g,0 for g≥2 is given by 𝒫_g,0(m_1,…,m_3g-3) = ∑_d_2,d_3,…≥ 0 ∑_k≥ 2 (k-1)d_k = 3g-3⟨τ_2^d_2τ_3^d_3⋯⟩_g ∏_k≥ 2(-m_k-1)^d_k/d_k!, where ⟨τ_2^d_2τ_3^d_3⋯⟩_g are the ψ-class intersection numbers on the moduli space ℳ_g,n with n = ∑_k d_k ≤ 3g-3 marked points. For instance, the first few applications of the recursion yield 𝒫_0,4(𝐋,𝐦) = 1/2 (L_1^2+⋯+L_4^2) - m_1, 𝒫_0,5(𝐋,𝐦) = 1/8 (L_1^4+⋯+L_5^4) + 1/2(L_1^2L_2^2+⋯+L_4^2L_5^2) - 3/2(L_1^2+⋯+L_5^2)m_1 + 3 m_1^2 - m_2, 𝒫_1,2(𝐋,𝐦) = 1/192(L_1^4+L_2^4) + 1/96L_1^2L_2^2 -1/24(L_1^2+L_2^2)m_1 + 1/12m_1^2-1/24m_2^2. Note that this provides a relatively efficient way of calculating the Weil–Petersson volumes V_g,n(𝐋) from the polynomial 𝒫_g,0, since V_g,n(𝐋) = T_g,n(𝐋;0] = 𝒫_g,n(𝐋,𝐦) |_m_k = (-2π^2)^k/k!. A simple corollary of Theorem <ref> is that the volumes satisfy string and dilaton equations generalizing those for the Weil–Petersson volumes derived by Do & Norbury in <cit.>. For all g ≥ 0 and n ≥ 1, such that n≥ 4 when g=0 and n≥ 2 when g=1, we have the identities ∑_p=0^∞ 2^p p! M_p[μ] [L_1^2p] T_g,n(𝐋;μ] = ∑_j=2^n ∫ L_j L_j T_g,n-1(𝐋_{1};μ]+_{g=0,n=3}, ∑_p=1^∞ 2^p p! M_p-1[μ] [L_1^2p] T_g,n(𝐋;μ] = (2g-3+n)T_g,n-1(𝐋_{1};μ], where the notation [L_1^2p]T_g,n(𝐋;μ] refers to the coefficient of L_1^2p in the polynomial T_g,n(𝐋;μ]. As explained in <cit.>, the string and dilaton equations for symmetric polynomials, in particular for the Weil–Petersson volumes, give rise to a recursion in n for genus 0 and 1. Using Theorem <ref>, we also get such a recursion for higher genera in the case of tight Weil–Petersson volumes. §.§ Idea of the proofs This work is largely inspired by the recent work <cit.> of Bouttier, Guitter & Miermont. There the authors consider the enumeration of planar maps with three boundaries, i.e. graphs embedded in the triply punctured sphere, see the left side of Figure <ref>. Explicit expressions for the generating functions of such maps, also known as pairs of pants, with controlled face degrees were long known, but they show that these generating functions become even simpler when restricting to tight pairs of pants, in which the three boundaries are required to have minimal length (in the sense of graph distances) in their homotopy classes. They obtain their enumerative results on tight pairs of pants in a bijective manner by considering a canonical decomposition of a tight pair of pants into certain triangles and diangles, see Figure <ref>. Our result (<ref>) for the genus-0 tight Weil–Petersson volumes with three distinguished boundaries can be seen as the analogue of <cit.>, although less powerful because our proof is not bijective. Instead, we derive generating functions of tight Weil–Petersson volumes from known expressions in the case of ordinary Weil–Petersson volumes. The general idea is that a genus-0 hyperbolic surface with two distinguished (but not necessarily tight) boundaries can unambiguously be cut along a shortest geodesic separating those two boundaries, resulting in a pair of certain half-tight cylinders (Figure <ref>). Also a genus-g surface with n distinguished (not necessarily tight) boundaries can be shown to decompose into a tight hyperbolic surface and n half-tight cylinders. The first decomposition uniquely determines the Weil–Petersson volumes of the moduli spaces of half-tight cylinders, while the second determines the tight Weil–Petersson volumes. This relation is at the basis of Proposition <ref>. 
To arrive at the recursion formula of Theorem <ref> we follow the line or reasoning of Mirzakhani's proof <cit.> of Witten's conjecture <cit.> (proved first by Kontsevich <cit.>). She observes that the recursion equation (<ref>) implies that the generating function of certain intersection numbers satisfies an infinite family of partial differential equations, the Virasoro constraints. Mulase & Safnuk <cit.> have observed that the reverse implication is true as well. We will demonstrate that the generating functions of tight Weil–Petersson volumes and ordinary Weil–Petersson volumes are related in a simple fashion when expressed in terms of the times (<ref>) and that the former obey a modified family of Virasoro constraints. These constraints in turn are equivalent to the generalized recursion of Theorem <ref>. §.§ Discussion Mirzakhani's recursion formula has a bijective interpretation <cit.>. Upon multiplication by 2L_1 the left-hand side 2L_1 V_g,n(𝐋) accounts for the volume of surfaces with a marked point on the first boundary. Tracing a geodesic ray from this point, perpendicularly to the boundary, until it self-intersects or hits another boundary allows one to canonically decompose the surface into a hyperbolic pair of pants (3-holed sphere) and one or two smaller hyperbolic surfaces. The terms on the right-hand side of <cit.> precisely take into account the Weil–Petersson volumes associated to these parts and the way they are glued. It is natural to expect that Theorem <ref> admits a similar bijective interpretation, in which the surface decomposes into a tight pair of pants (a sphere with 3+p boundaries, three of which are tight) and one or two smaller tight hyperbolic surfaces. However, Mirzakhani's ray shooting procedure does not generalize in an obvious way. Nevertheless, working under the assumption that a bijective decomposition exists, one is led to suspect that the generalized kernel K(x,t,μ] of Theorem <ref> contains important information about the geometry of tight pairs of pants. Moreover, one would hope that this geometry can be further understood via a decomposition of the tight pairs of pants themselves analogous to the planar map case of <cit.> described above. Since a genus-0 surface 𝖷∈ℳ_0,3+p(0,0,0,𝐋) with three distinguished cusps is always a tight pair of pants (since the zero length boundaries are obviously minimal), a consequence of a bijective interpretation of Theorem <ref> is a conjectural interpretation of the series X̂(u;μ] = sin(2π u)/(2π uη) in (<ref>) in terms of the hyperbolic distances between the three cusps. To be precise, let c_1,c_2,c_3 be unit-length horocycles around the three cusps and Δ(𝖷) = d_hyp(c_1,c_2)-d_hyp(c_1,c_3) the difference in hyperbolic distance between two pairs, then it is plausible that X̂(u;μ] ?=∑_p≥ 01/p!∫(∫_ℳ_0,3+p(0,0,0,𝐋) e^2u Δ(𝖷)μ_WP(𝖷) ) μ(L_1)⋯μ(L_p). Or in the probabilistic terms of Section <ref>, the measure M_0[μ] X(z;μ] on ℝ, which integrates to 1 due to (<ref>), is the probability distribution of the random variable 2Δ(𝖷) in a genus-0 Boltzmann hyperbolic surface 𝖷 with weight μ. In upcoming work we shall address this conjecture using very different methods. Another natural question to ask is whether the generalization of the spectral curve (<ref>) of Weil–Petersson volumes to the one of tight Weil–Petersson volumes in Theorem <ref> can be understood in the general framework of deformations of spectral curves in topological recursion <cit.>. 
§.§ Outline The structure of the paper is as follows: In section <ref> we introduce the half-tight cylinder, which allows us to do tight decomposition of surfaces, which relates the regular hyperbolic surfaces to the tight surfaces. Using the decomposition we prove Proposition <ref>. In section <ref> consider the generating functions of (tight) Weil–Petersson volumes and their relations. Furthermore, we use the Virasoro constraints to prove Theorem <ref>, Theorem <ref> and Corollary <ref>. In section <ref> we take the Laplace transform of the tight Weil–Petersson volumes and prove Theorem <ref>. We also look at the relation between the disk function of the regular hyperbolic surfaces and the generating series of moment η. Finally, in section <ref> we briefly discuss how our results may be of use in the study of JT gravity. Acknowledgments This work is supported by the START-UP 2018 programme with project number 740.018.017 and the VIDI programme with project number VI.Vidi.193.048, which are financed by the Dutch Research Council (NWO). § DECOMPOSITION OF TIGHT HYPERBOLIC SURFACES §.§ Half-tight cylinder Recall that a boundary ∂_i of X ∈ℳ_g,n+p(𝐋) is said to be tight in S_g,n,p if ∂_i is the only simple cycle γ in [∂_i]_S_g,n,p of length ℓ_γ(X) ≤ L_i and we defined the moduli space of tight hyperbolic surfaces as ℳ^tight_g,n,p(𝐋) = { X ∈ℳ_g,n+p(𝐋) : ∂_i is tight in S_g,n,p}⊂ℳ_g,n+p(𝐋). We noted before that when g=0 and n=2 we have ℳ^tight_0,2,p(𝐋) = ∅ because ∂_1 and ∂_2 belong to the same free homotopy class of S_0,2,p and can therefore never both be the unique shortest cycle. Instead, it is useful for any p≥ 1 to consider the moduli space of half-tight cylinders ℋ_p(𝐋) = { X ∈ℳ_0,2+p(𝐋) : ∂_2 is tight in S_0,2,p}⊂ℳ_0,2+p(𝐋), which is non-empty whenever L_1 > L_2 > 0. We will also consider ℋ_p(𝐋) = { X ∈ℳ_0,2+p(𝐋) : ∂_2 has minimal length in [∂_2]_S_0,2,p}⊂ℳ_0,2+p(𝐋) and denote its Weil–Petersson volume by H_p(𝐋). By construction, it is non-zero for L_1 ≥ L_2 > 0 and H_p(𝐋) ≤ V_0,2+p(𝐋). ℋ_p(𝐋) is an open subset of ℳ_0,2+p(𝐋), and when it is non-empty (L_1>L_2) its closure is ℋ_p(𝐋). In particular, both have the same finite Weil–Petersson volume H_p(𝐋) when L_1 > L_2, but ℋ_p(𝐋) has 0 volume and ℋ_p(𝐋) non-zero volume H_p(𝐋) when L_1 = L_2. For L_1 > L_2, ℋ_p(𝐋) is the intersection of the open sets {ℓ_γ(X) > L_2 } indexed by the countable set of free homotopy classes γ in [∂_2]_S_0,2,p. It is not hard to see that in a neighbourhood of any X ∈ℋ_p(𝐋) only finitely many of these are important, so the intersection is open. Its closure is given by the countable intersection of closed sets {ℓ_γ(X) ≥ L_2 }, which is precisely ℋ_p(𝐋). §.§ Tight decomposition We are now ready to state the main result of this section. The Weil–Petersson volumes T_g,n,p(𝐋) and H_p(𝐋) satisfy V_g,n+p(𝐋) = ∑_I_0 ⊔⋯⊔ I_n = {n+1,…,n+p}∫ T_g,n,|I_0|(𝐊,𝐋_I_0)∏_1≤ i≤ n I_i ≠∅ H_|I_i|(L_i,K_i,𝐋_I_i) K_i K_i, (g≥ 1 or n≥ 3) V_0,2+p(𝐋) = H_p(𝐋) + ∑_I_1 ⊔ I_2 = {3,…,2+p} I_1,I_2≠∅∫_0^L_2 H_|I_1|(L_1,K,𝐋_I_1) H_|I_2|(L_2,K,𝐋_I_2) K K, (L_1 ≥ L_2) where in the first equation it is understood that K_i = L_i whenever I_i = ∅. The remainder of this section will be devoted to proving this result. But let us first see how it implies Proposition <ref>. Clearly H_1(𝐋) = V_0,3(𝐋) = 1 for L_1 ≥ L_2 and T_g,n,0(𝐋) = V_g,n(𝐋). 
Rewriting the equations as T_g,n,p(𝐋) = V_g,n+p(𝐋) - ∑_I_0 ⊔⋯⊔ I_n = {n+1,…,n+p} |I_0| < p∫ T_g,n,|I_0|(𝐊,𝐋_I_0)∏_1≤ i≤ n I_i ≠∅ H_|I_i|(L_i,K_i,𝐋_I_i) K_i K_i, H_p(𝐋) = V_0,2+p(𝐋) - ∑_I_1 ⊔ I_2 = {3,…,2+p} I_1,I_2≠∅∫_0^L_2 H_|I_1|(L_1,K,𝐋_I_1) H_|I_2|(L_2,K,𝐋_I_2) K K, it is clear that they are uniquely determined recursively in terms of V_g,n(𝐋). Moreover, by induction we easily verify that H_p(𝐋) in the region L_1 ≥ L_2 is a polynomial in L_1^2,…,L_2+p^2 of degree p-1 that is symmetric in L_3,…,L_2+p, and T_g,n,p is a polynomial in L_1^2,…,L_n+p^2 of degree 3g-3+n+p that is symmetric in L_1,…,L_n and symmetric in L_n+1,…,L_n+p. §.§ Tight decomposition in the stable case §.§.§ Shortest cycles The following parallels the construction of shortest cycles in maps described in <cit.>. Given a hyperbolic surface X ∈ℳ_g,n+p for g≥ 1 or n≥ 2, then for each i=1,…,n there exists a unique innermost shortest cycle σ^i_S_g,n,p(X) on X, meaning that it has minimal length in [∂_i]_S_g,n,p and such that all other cycles of minimal length (if they exist) are contained in the region of X delimited by ∂_i and σ^i_S_g,n,p(X). Moreover, if g≥ 1 or n ≥ 3, the curves σ^1_S_g,n,p(X), …,σ^n_S_g,n,p(X) are disjoint. First note that if a shortest cycle exists, it is a simple closed geodesic. As a consequence of <cit.>, there are only finitely many closed geodesics with length ≤ L_i in [∂_i]_S_g,n,p. Since ∂_i∈ [∂_i]_S_g,n,p has length L_i, this proves the existence of at least one cycle in [∂_i]_S_g,n,p with minimal length. Regarding the existence and uniqueness of a well-defined innermost shortest cycle, suppose α,β∈ [∂_i]_S_g,n,p are two distinct simple closed geodesics with minimal length ℓ (see left side of Figure <ref>). Since α∈ [∂_i]_S_g,n,p, cutting along α separates the surface in two disjoint parts. Therefore, α and β can only have an even number of intersections. If the number of intersections is greater than zero, we can choose two distinct intersections and combine α and β to get two distinct cycles γ_1 and γ_2 by switching between α and β at the chosen intersections, such that γ_1 and γ_2 are still in [∂_i]_S_g,n,p. Since the total length is still 2ℓ, at least one of the new cycles has length ≤ℓ. This cycle is not geodesic, so there will be a closed cycle in [∂_i]_S_g,n,p with length <ℓ, which contradicts that α and β have minimal length. We conclude that α and β are disjoint. Since all cycles in [∂_i]_S_g,n,p with minimal length are disjoint and separating, the notion of being innermost is well-defined. Consider α_i=σ^i_S_g,n,p(X) and α_j=σ^j_S_g,n,p(X) for i≠ j (see right side of Figure <ref>). Just as before, since α_i is separating and α_i and α_j are simple, the number of intersections is even. If α_i and α_j are not disjoint, we can choose two distinct intersections and construct two distinct cycles γ_i and γ_j by switching between α_i and α_j at the chosen intersections, such that γ_i and γ_j are in [∂_i]_S_g,n,p and [∂_j]_S_g,n,p respectively. Since the total length of the cycles stays the same, there is at least one a∈{i,j} such that γ_a has length less or equal than α_a. Since γ_a is not geodesic, there is a closed cycle in [∂_a]_S_g,n,p with length strictly smaller than α_a, which is a contradiction, so the innermost shortest cycles are disjoint. 
In particular the proof implies the following criterions are equivalent: * A simple closed geodesic α∈ [∂_i]_S_g,n,p is the innermost shortest cycle σ^i_S_g,n,p(X); * For a simple closed geodesic α∈ [∂_i]_S_g,n,p we have ℓ(α) ≤ L_i and each simple closed geodesic β∈σ^i_S_g,n,p(X) that is disjoint from α has length ℓ(β)≥ℓ(α) with equality only being allowed if β is contained in the region between α and ∂_i. §.§.§ Integration on Moduli space Let us recap Mirzakhani's decomposition of moduli space integrals in the presence of distinguished cycles <cit.>. A multicurve Γ = (γ_1,…,γ_k) is a collection of disjoint simple closed curves Γ = (γ_1,…,γ_k) in S_g,n which are pairwise non-freely-homotopic. Given a multicurve, in which each curve γ_i may or may not be freely homotopic to a boundary ∂_j of S_g,n, one can consider the stabilizer subgroup Stab(Γ) = { h∈Mod_g,n : h ·γ_i = γ_i }⊂Mod_g,n. Note that if γ_i ∈ [∂_j]_S_g,n is freely homotopic to one of the boundaries ∂_j then h ·γ_i = γ_i for any h∈Mod_g,n. The moduli space of hyperbolic surfaces with distinguished (free homotopy classes of) curves is the quotient ℳ_g,n(𝐋)^Γ = 𝒯_g,n(𝐋)/Stab(Γ). For a closed curve γ in S_g,n and X∈ℳ_g,n, let ℓ_γ(X) be the length of the geodesic representative in the free homotopy class of γ. For 𝐊 = (K_1,…,K_k) ⊂_>0^k we can restrict the lengths of the geodesic representatives of curves in Γ by setting ℳ_g,n(𝐋)^Γ(𝐊) = { X ∈ℳ_g,n(𝐋)^Γ : ℓ_γ_i(X) = K_i, i=1,…,k}⊂ℳ_g,n(𝐋)^Γ. If γ_i ∈ [∂_j]_S_g,n then this set is empty unless K_i = L_j. Denote by π^Γ : ℳ_g,n(𝐋)^Γ→ℳ_g,n(𝐋) the projection. If there are exactly p cycles among Γ that are not freely homotopic to a boundary, then this space admits a natural action of the p-dimensional torus (S^1)^p obtained by twisting along each of these p cycles proportional to their length. The quotient space is denoted ℳ_g,n(𝐋)^Γ*(𝐊) = ℳ_g,n(𝐋)^Γ(𝐊) / (S^1)^p and is naturally equipped with a symplectic structure inherited from the Weil–Petersson symplectic structure on ℳ_g,n(𝐋)^Γ. If we denote by S_g,n(Γ) the possibly disconnected surface obtained from S_g,n by cutting along all γ_i that are not freely homotopic to a boundary and by ℳ(S_g,n(Γ),𝐋,𝐊) its moduli space, then according to <cit.>, the canonical mapping ℳ_g,n(𝐋)^Γ*(𝐊) →ℳ(S_g,n(Γ),𝐋,𝐊) is a symplectomorphism. Given an integrable function F : ℳ_g,n(𝐋)^Γ→ that is invariant under the action of (S^1)^p, there exists a naturally associated function F̃ : ℳ(S_g,n(Γ),𝐋,𝐊) → such that (essentially <cit.>) ∫_ℳ_g,n(𝐋)^Γ F(X) μ_WP(X) = ∫∏_1≤ i≤ n γ_i ∉ [∂_j]_S_g,n K_i K_i ∫_ℳ(S_g,n(Γ),𝐋,𝐊)F̃(X) μ_WP(X). §.§.§ Shortest multicurves Suppose g≥ 1 or n≥ 3, meaning that we momentarily exclude the cylinder case (g=0, n=2). We consider now a special family of multicurves Γ = (γ_1,…,γ_n) on S_g,n+p for n≥ 1, p≥0. Namely, we require that γ_i ∈ [∂_i]_S_g,n,p is freely homotopic to the boundary ∂_i in the capped-off surface S_g,n,p for i=1,…,n. Then there exists a partition I_0 ⊔⋯⊔ I_n = {n+1,…,n+p} such that S_g,n+p(Γ) has n+1 connected components s_0,…,s_n, where s_0 is of genus g and is adjacent to all curves Γ as well as the boundaries (∂_j)_j∈ I_0 while for each i=1, …,n, s_i is of genus 0 and contains the ith boundary ∂_i as well as (∂_j)_j∈ I_i and is adjacent to γ_i. Note that I_i = ∅ if and only if γ_i ∈ [∂_i]_S_g,n+p. Finally, we observe that mapping class group orbits {Mod_g,n+p· [Γ]_S_g,n+p} of these multicurves Γ are in bijection with the set of partitions {I_0 ⊔⋯⊔ I_n = {n+1,…,n+p}}. 
With the help of Lemma <ref> we may introduce the restricted moduli space in which we require γ_i to be (freely homotopic to) the innermost shortest cycle in [∂_i]_S_g,n,p, ℳ̂_g,n,p(𝐋)^Γ = { X ∈ℳ_g,n+p(𝐋)^Γ : γ_i∈ [σ^i_S_g,n,p(X)] for i=1,…,n}. The natural projection _Γℳ̂_g,n,p(𝐋)^Γ⟶ℳ_g,n+p(𝐋), where the disjoint union is over (representatives of) the mapping class group orbits of multicurves Γ, is a bijection. If X,X'∈𝒯_g,n+p(𝐋) are representatives of hyperbolic surfaces in ℳ̂_g,n,p(𝐋)^Γ and ℳ̂_g,n,p(𝐋)^Γ' respectively, then by definition [γ_i] = [σ^i_S_g,n,p(X)] and [γ_i']=[σ^i_S_g,n,p(X')]. If X and X' represent the same surface in ℳ_g,n+p(𝐋), they are related by an element h of the mapping class group, X' = h· X, and therefore also σ^i_S_g,n,p(X) = h·σ^i_S_g,n,p(X') and [γ_i] = h·[γ_i']. So Γ and Γ' belong to the same mapping class group orbit and, if Γ and Γ' are freely homotopic, we must have h ∈Stab(Γ). Hence, X and X' represent the same element in the set on the left-hand side, and we conclude that the projection is injective. It is also surjective since any X∈𝒯_g,n+p(𝐋) is a representative of ℳ̂_g,n,p(𝐋)^Γ if we take Γ = (σ^1_S_g,n,p(X),…,σ^n_S_g,n,p(X)), which is a valid multicurve due to Lemma <ref>. We can introduce the length-restricted version ℳ̂_g,n,p(𝐋)^Γ(𝐊) ⊂ℳ_g,n+p(𝐋)^Γ(𝐊) as before. The subset ℳ̂_g,n,p(𝐋)^Γ(𝐊)⊂ℳ_g,n+p(𝐋)^Γ(𝐊) is invariant under twisting (the torus-action on ℳ_g,n+p(𝐋)^Γ(𝐊) described above). The image of the quotient ℳ̂_g,n,p(𝐋)^Γ*(𝐊) under the symplectomorphism (<ref>) is precisely ℳ^tight_g,n,|I_0|(𝐊,𝐋_I_0)×∏_1≤ i≤ n I_i ≠∅ℋ_|I_i|(L_i,K_i,𝐋_I_i). Let X ∈ℳ_g,n+p(𝐋)^Γ(𝐊) be a hyperbolic surface with distinguished multicurve Γ. The lengths of the geodesics associated to Γ as well as the lengths of the geodesics that are disjoint from those geodesics are invariant under twisting X along Γ. The criterion explained just below Lemma <ref> for γ_i to be the innermost shortest cycle σ^i_S_g,n,p(X) is thus also preserved under twisting, showing that the subset ℳ̂_g,n,p(𝐋)^Γ(𝐊) is invariant. Let X_0 ∈ℳ_g,n,|I_0|(𝐊,𝐋_I_0) and X_i ∈ℳ_0,2+|I_i|(L_i,K_i,𝐋_I_i) for those i=1,…, n for which I_i≠∅ be the hyperbolic structures on the connected components s_0,…,s_n of S_g,n+p(Γ) obtained by cutting X along the geodesics associated to Γ. For each i=1,…,n the criterion for γ_i to be the innermost shortest cycle σ^i_S_g,n,p(X) is equivalent to the following two conditions holding: * the ith boundary of X_0 is tight in the capped-off surface associated to s_0; * I_i = ∅ (meaning γ_i = ∂_i) or X_i ∈ℋ_|I_i|(L_i,K_i,𝐋_I_i) (recall the definition in (<ref>)). Hence, we have X ∈ℳ̂_g,n,p(𝐋)^Γ(𝐊) precisely when X_0 ∈ℳ^tight_g,n,|I_0|(𝐊,𝐋_I_0) and X_i ∈ℋ_|I_i|(L_i,K_i,𝐋_I_i) when I_i ≠∅. This proves the second statement of the lemma. It follows that the Weil–Petersson volume of ℳ̂_g,n,p(𝐋)^Γ*(𝐊) is equal to the product of the volumes of the spaces appearing in (<ref>). Combining with Lemma <ref> and the integration formula (<ref>) this shows that V_g,n+p(𝐋) = ∑_Γ∫_ℳ̂_g,n,p(𝐋)^Γμ_WP = ∑_I_0 ⊔⋯⊔ I_n = {n+1,…,n+p}∫ T_g,n,|I_0|(𝐊,𝐋_I_0)∏_1≤ i≤ n I_i ≠∅ H_|I_i|(L_i,K_i,𝐋_I_i) K_i K_i, where it is understood that K_i = L_i whenever I_i = ∅. This proves the first relation of Proposition <ref>. §.§ Tight decomposition of the cylinder The decomposition we have just described does not work well in the case g=0 and n=2, because ∂_1 and ∂_2 are in the same free homotopy class of the capped surface S_0,2,p. 
Instead, we should consider a multicurve Γ=(γ_1) consisting of a single curve γ_1 on S_0,2+p in the free homotopy class [∂_1]_S_0,2,p=[∂_2]_S_0,2,p, see Figure <ref>. In this case there exists a partition I_1 ⊔ I_2 = {3,…,p+2} such that S_0,2+p(Γ) has two connected components s_1 and s_2, with s_i a genus-0 surface with 2+|I_i| boundaries corresponding to ∂_i, γ_1 and (∂_j)_j∈ I_i. We consider the restricted moduli space ℳ̂_0,2,p(𝐋)^Γ = { X ∈ℳ_0,2+p(𝐋)^Γ : γ_1∈ [σ^1_S_0,2,p(X)]}, which thus treats the two boundaries ∂_1 and ∂_2 asymmetrically, by requiring that γ_1 is the shortest curve farthest from ∂_1. Lemma <ref> goes through unchanged: the projection _Γℳ̂_0,2,p(𝐋)^Γ⟶ℳ_0,2+p(𝐋), where the disjoint union is over the mapping class group orbits of Γ = (γ_1), is a bijection. Assuming L_1 ≥ L_2, we cannot have γ_1 ∈ [∂_1]_S_0,2+p so I_1 ≠∅. There are two cases to consider: * γ_1 ∈ [∂_2]_S_0,2+p and therefore I_2 = ∅: this means that ∂_2 has minimal length in [∂_2]_S_0,2,p, so ℳ̂_0,2,p(𝐋)^Γ = ℋ_p(𝐋). * I_2 ≠∅: by reasoning analogous to that of Lemma <ref> we have that ℳ̂_0,2,p(𝐋)^Γ*(K) is symplectomorphic to ℋ_|I_1|(L_1,K,𝐋_I_1) ×ℋ_|I_2|(L_2,K,𝐋_I_2). Hence, when L_1 ≥ L_2 we have V_0,2+p(𝐋) = H_p(𝐋) + ∑_I_1 ⊔ I_2 = {3,…,2+p} I_1,I_2≠∅∫_0^L_2 H_|I_1|(L_1,K,𝐋_I_1) H_|I_2|(L_2,K,𝐋_I_2) K K. This proves the second relation of Proposition <ref>. § GENERATING FUNCTIONS OF TIGHT WEIL–PETERSSON VOLUMES §.§ Definitions Let us define the following generating functions of the Weil–Petersson volumes, half-tight cylinder volumes and tight Weil–Petersson volumes: F_g[μ] = ∑_n=0^∞1/n!∫μ(L_1)⋯∫μ(L_n) V_g,n(𝐋), H(L_1,L_2;μ] = ∑_p=1^∞1/p!∫μ(L_3)⋯∫μ(L_2+p) H_p(𝐋), F̃_g[ν,μ] =∑_n=0^∞1/n!∫ν(L_1)⋯∫ν(L_n) T_g,n(L_1,…,L_n;μ]. Furthermore, for g≥ 2 we recall the polynomial 𝒫_g,0(m_1,…,m_3g-3) = ∑_d_2,d_3,…≥ 0 ∑_k≥ 2 (k-1)d_k = 3g-3⟨τ_2^d_2τ_3^d_3⋯⟩_g ∏_k≥ 2(-m_k-1)^d_k/d_k!, where ⟨τ_2^d_2τ_3^d_3⋯⟩_g are the ψ-class intersection numbers on the moduli space ℳ_g,n with n = ∑_k d_k ≤ 3g-3 marked points. Then according to <cit.>[Note that there has been a shift in conventions, e.g. regarding factors of 2. ] F_0[μ] = 1/2∫_0^R r Z(r;μ]^2, F_1[μ] = - 1/24log M_0[μ], F_g[μ] = 1/(M_0[μ])^2g-2 𝒫_g,0(M_1[μ]/M_0[μ],…, M_3g-3[μ]/M_0[μ]) for g≥ 2. In the genus-0 case we can take successive derivatives to find useful formulas for one, two or three distinguished boundaries of prescribed lengths, F_0[μ]μ(L_1) = -∫_0^R[μ] I_0(L_1√(2r)) Z(r;μ] r, , δ^2F_0[μ]/δμ(L_1)δμ(L_2) = ∫_0^R[μ] I_0(L_1√(2r)) I_0(L_2√(2r)) r, δ^3 F_0[μ]/δμ(L_1)δμ(L_2)δμ(L_3) = 1/M_0[μ][∏_i=1^3 I_0(L_i√(2R[μ]))]. §.§ Volume of half-tight cylinder The equations of Proposition <ref> turn into the equations δ^n F_g[μ]/δμ(L_1)⋯δμ(L_n) = ∫ T_g,n(𝐊;μ]∏_i=1^n (K_i H(L_i,K_i;μ] + δ(K_i - L_i)) K_i, (g≥ 1 or n≥ 3) δ^2 F_0[μ]/δμ(L_1)δμ(L_2) = H(L_1,L_2;μ] + ∫_0^L_2 H(L_1,K;μ] H(L_2,K;μ] K K. (L_1 ≥ L_2) Let us focus on the last equation, which should determine H(L_1,L_2;μ] uniquely. The left-hand side depends on μ only through the quantity R[μ], and the dependence on R is analytic, δ^2 F_0[μ]/δμ(L_1)δμ(L_2) = R + 1/4(L_1^2+L_2^2)R^2 + 1/48(L_1^4 + 4 L_1^2L_2^2 + L_2^4) R^3 + ⋯. Hence, the same is true for H(L_1,L_2;μ] and one may easily calculate order by order in R that H(L_1,L_2;μ] = R + 1/4(L_1^2-L_2^2)R^2 + 1/48(L_1^2-L_2^2)^2R^3+⋯. This suggests that H(L_1,L_2;μ] depends on L_1 and L_2 only through the combination L_1^2 - L_2^2. Let's prove this. 
The half-tight cylinder generating function satisfies (1/L_1∂/∂ L_1+1/L_2∂/∂ L_2) H(L_1,L_2;μ] = 0, and is therefore given by H(L_1,L_2;μ] = ∑_ℓ=0^∞2^-ℓ R[μ]^ℓ+1/ℓ!(ℓ+1)!(L_1^2-L_2^2)^ℓ = √(2R[μ]/L_1^2-L_2^2) I_1( √(L_1^2-L_2^2)√(2R[μ])) (L_1 ≥ L_2). By construction H(L_1,0;μ] = δ^2F_0[μ]/δμ(L_1)δμ(0) and the integral (<ref>) with L_2=0 evaluates to H(L_1,0;μ] = δ^2F_0[μ]/δμ(L_1)δμ(0) = √(2R)/L_1 I_1(L_1√(2R)). The identity ∂/∂ r( √(2r)/L_1 I_1(L_1√(2r))√(2r)/L_2 I_1(L_2√(2r))) = (1/L_1∂/∂ L_1+1/L_2∂/∂ L_2) I_0(L_1√(2r)) I_0(L_2√(2r)), which can be easily checked by calculating the derivatives, implies that H(L_1,0;μ] H(L_2,0;μ] = (1/L_1∂/∂ L_1+1/L_2∂/∂ L_2) δ^2F_0[μ]/δμ(L_1)δμ(L_2). Hence, by (<ref>) we find that (1/L_1∂/∂ L_1+1/L_2∂/∂ L_2) H(L_1,L_2;μ] = H(L_1,0;μ] H(L_2,0;μ] - (1/L_1∂/∂ L_1+1/L_2∂/∂ L_2) ∫_0^L_2 H(L_1,K;μ] H(L_2,K;μ] K K = H(L_1,0;μ] H(L_2,0;μ] - H(L_1,L_2;μ] H(L_2,L_2;μ] - ∫_0^L_2(1/L_1∂/∂ L_1+1/L_2∂/∂ L_2)H(L_1,K;μ] H(L_2,K;μ] K K = -∫_0^L_2∂/∂ K(H(L_1,K;μ] H(L_2,K;μ]) K - ∫_0^L_2(1/L_1∂/∂ L_1+1/L_2∂/∂ L_2)H(L_1,K;μ] H(L_2,K;μ] K K = - ∫_0^L_2[H(L_1,K;μ](1/L_2∂/∂ L_2+1/K∂/∂ K) H(L_2,K;μ] + H(L_2,K;μ](1/L_1∂/∂ L_1+1/K∂/∂ K) H(L_1,K;μ]] K K. Since the leading coefficient in R of H(L_1,L_2;μ] satisfies (<ref>), it follows that the same is true for the higher-order coefficients in R. As a consequence of (<ref>), H(L_1,L_2;μ] = H(√(L_1^2-L_2^2),0;μ] and the claimed expression (<ref>) follows from (<ref>). §.§ Rewriting generating functions Since the work of Mirzakhani <cit.> it is known that the Weil–Petersson volumes V_g,n(𝐋) are expressible in terms of intersection numbers as follows. The compactified moduli space ℳ_g,n of genus-g curves with n marked points comes naturally equipped with the Chern classes ψ_1,…,ψ_n associated with its n tautological line bundles, as well as the cohomology class κ_1 of the Weil–Petersson symplectic structure (up to a factor 2π^2). The corresponding intersection numbers are given by the integrals <κ_1^mτ_d_1⋯τ_d_n>_g,n = ∫_ℳ_g,nκ_1^m ψ_1^d_1⋯ψ_n^d_n, where d_1,…,d_n ≥ 0 and n = d_1+⋯ d_n + m + 3 - 3g. For g ≥ 0 we denote the generating function of these intersection numbers by G_g(s;x_0,x_1,…)=∑_n≥ 01/n!∑_m,d_1,…,d_n ≥ 0 d_1+⋯+d_n+m=3g-3+n<κ_1^mτ_d_1⋯τ_d_n>_g,n s^m/m! x_d_1⋯ x_d_n. We may sum over all genera to arrive at the generating function G(s;x_0,x_1,…)=∑_g=0^∞λ^2g-2 G_g(s;x_0,x_1,…). In order to lighten the notation we do not write the dependence on λ explicitly here, which only serves as a formal generating variable. Note that λ is actually redundant for organizing the series, since any monomial appears in at most one of the G_g as can be seen from (<ref>). Then the generating function of Weil–Petersson volumes can be expressed as ∑_g=0^∞λ^2g-22^3-3gF_g[μ]=G(π^2;t_0[μ],t_1[μ],…), where the times t_k[μ] are defined by t_k[μ]=∫μ(L) 2L^2k/4^kk!. See <cit.> based on <cit.>, where one should be careful that some conventions differ by some factors of two compared to the current work. We will show that the (bivariate) generating function F̃_g[ν,μ] of tight Weil–Petersson volumes, defined in (<ref>), is also related to the intersection numbers, but with different times. The generating function of the volumes T_g,n is related to the generating function of intersection numbers via F̃_g[ν,μ] = 2^3g-3 G_g(0;τ_0[ν,μ],τ_1[ν,μ],…), where the shifted times τ_k[ν,μ] are defined by τ_k[ν,μ]=t_k[ν]+δ_k,1-2^1-kM_k-1[μ]. This proposition will be proved in the remainder of this subsection, relying on an appropriate substitution of the weight ν. 
To this end, we informally introduce a linear mapping H_μ on measures on the half line [0,∞) as follows. If ρ is a measure on [0,∞) we let H_μρ be the measure given by ρ + (∫ H(L,K;μ] ρ(L)) K K. The effect of H_μ on the times can be computed using the series expansion (<ref>), t_k[H_μρ] = t_k[ρ] + 2/4^k k!∫_0^∞(∫ H(L,K;μ] ρ(L)) K^2k+1K = t_k[ρ] + ∑_p=1^∞2(2R[μ])^p/p!4^p+k(p+k)!∫ρ(L) L^2p+2k = ∑_p=0^∞(2R[μ])^p/p! t_p+k[ρ]. We observe that H_μ acts as an infinite upper-triangular matrix on the times. This matrix is easily inverted to give t_q[ρ] = ∑_p=0^∞(-2R[μ])^p/p! t_p+q[H_μρ]. This means that knowledge of the generating function F̃_g[H_μρ,μ] with substituted weight H_μρ is sufficient to recover the original generating function F̃_g[ν,μ]. Luckily the former is within close reach. The generating functions for tight Weil–Petersson volumes and regular Weil–Petersson volumes are related by F̃_g[H_μρ,μ]=F_g[ρ+μ] - δ_g,0F_corr[ρ,μ], where the correction term F_corr[ρ,μ]=∑_n=0^21/n!∑_p=0^∞1/p!∫ρ(L_1)⋯ρ(L_n) μ(L_n+1)⋯μ(L_n+p) V_0,n+p(𝐋) is necessary to subtract the constant, linear and quadratic dependence on ρ in the genus-0 case. If g,n≥ 0 (such that n≥ 3 if g=0) and L_1,…,L_n ∈ [0,∞)∪ i (0,π), then Proposition <ref> allows us to compute ∑_p=0^∞1/p!∫μ(L_n+1)⋯μ(L_n+p) V_g,n+p(𝐋) = ∑_p=0^∞1/p!∫μ(L_n+1)⋯μ(L_n+p)-20mu ∑_I_0 ⊔⋯⊔ I_n = {n+1,…,n+p}∫ T_g,n,|I_0|(𝐊,𝐋_I_0)∏_1≤ i≤ n I_i ≠∅ H_|I_i|(L_i,K_i,𝐋_I_i) K_i K_i = ∑_p_0,…,p_n=0^∞1/p_0!… p_n!∫μ(L_n+1)⋯μ(L_n+p_0+⋯+p_n) T_g,n,p_0(𝐊,𝐋^(0))∏_1≤ i≤ n p_i ≠ 0 H_p_i(L_i,K_i,𝐋^(i)) K_i K_i, where we use the notation 𝐋^(j) = (L_n+p_0+⋯+p_j-1+1, … ,L_n+p_0+⋯+p_j). In terms of the tight Weil–Petersson volume generating function (<ref>) and the half-tight cylinder generating function (<ref>) this evaluates to ∑_p=0^∞1/p!∫μ(L_n+1)⋯μ(L_n+p) V_g,n+p(𝐋)=∑_J ⊂{1,…,n}∫ T_g,n(𝐊;μ] ∏_i∈ J K_i H(L_i,K_i;μ] K_i, where it is understood that in the argument of T_g,n(𝐊;μ] we take K_i = L_i for i∉ J. Expanding F_g[ρ+μ] from its definition (<ref>) we find F_g[ρ+μ] -F_corr[ρ,μ]δ_g,0 = ∑_n,p=0^∞_g≥1 or n≥31/n!p!∫ρ(L_1)⋯ρ(L_n) μ(L_n+1)⋯μ(L_n+p)V_g,n+p(𝐋) = ∑_n=0^∞1/n!_g≥1 or n≥3∫(∏_i=1^nρ(L_i))∑_p=0^∞1/p!∫(∏_i=n+1^n+pμ(L_i))V_g,n+p(𝐋) Plugging in (<ref>) and using that T_0,n(𝐊;μ]=0 for n<3, yields F_g[ρ+μ] -F_corr[ρ,μ]δ_g,0 = ∑_n=0^∞1/n!∫ρ(L_1)⋯ρ(L_n)∑_J ⊂{1,…,n}∫ T_g,n(𝐊;μ] ∏_i∈ J K_i H(L_i,K_i;μ] K_i = ∑_n=0^∞1/n!∫(H_μρ)(L_1)⋯(H_μρ)(L_n) T_g,n(𝐋;μ] = F̃_g[H_μρ , μ] as claimed. Lemma <ref> and (<ref>) together lead to the relation ∑_g=0^∞λ^2g-22^3-3gF̃_g[H_μρ,μ]=G(π^2;t_0[ρ+μ], t_1[ρ+μ],…)-8λ^-2F_corr. The right-hand side can be specialized, making use of a variety of identities between intersection numbers. Firstly, a relation between intersection numbers involving κ_1 and pure ψ-class intersection numbers <cit.> leads to the identity <cit.> G(s;x_0,x_1,…)=G(0;x_0,x_1,x_2+γ_2(s),…), where the shifts are γ_k(s)=(-1)^k/(k-1)!s^k-1_k≥2. For us this gives ∑_g=0^∞λ^2g-22^3-3gF̃_g[H_μρ,μ]=G(0;𝐭[ρ+μ]+γ(π^2))-8λ^-2F_corr, where we use the notation G(0;𝐱)=G(0;x_0,x_1,x_2,…). This can be further refined using Witten's observation <cit.>, proved by Kontsevich <cit.>, that G(0;𝐱) satisfies the string equation (- x_0 + ∑_p=0^∞ x_p+1x_p + x_0^2/2λ^2) e^G(0;𝐱)=0. Following a computation of Itzykson and Zuber <cit.>, it implies the following identity. The solution to the string equation (<ref>) satisfies a formal power series identity in the parameter r, G(0;x_0,x_1,…) = G(0; r + ∑_k=0^∞(-r)^k/k!x_k, ∑_k=0^∞(-r)^k/k!x_k+1, ∑_k=0^∞(-r)^k/k!x_k+2, …) - 1/2λ^2∫_0^r s(s + ∑_k=0^∞(-s)^k/k! 
x_k)^2 . For x_0,x_1,… fixed, let us consider the sequence of functions 𝐲(s) = (y_0(s),y_1(s),…), y_i(s)=δ_i,0 s+∑_k=0^∞(-s)^k/k!x_k+i, such that y_p'(s) = δ_p,0 - y_p+1(s). The string equation (<ref>) then implies s G(0;𝐲(s)) = ∂ G/∂ x_0 (0;𝐲(s)) - ∑_p=0^∞ y_p+1(s) ∂ G/∂ x_p(𝐲(s))= y_0(s)^2/2λ^2. Integrating from s=0 to s=r gives the claimed identity. Before we can use this lemma, we establish a relation between τ_k[H_μρ,μ] and t_k[ρ+μ]. We can rewrite t_q[ρ+μ]+γ_q(π^2)=δ_q,0 2R[μ]+∑_p=0^∞(-2R[μ])^p/p!τ_p+q[H_μρ,μ] , where the shifted times τ_k[ν,μ] are defined in (<ref>). We first relate the moments M_i[μ] defined in (<ref>) to the times t_i[μ]. Note that Z(u;μ] defined in (<ref>) can be expressed in the times as Z(u;μ] =u-∑_k=0^∞(2u)^k/2k!(t_k[μ]+γ_k(π^2)) By taking p derivatives with respect to u, we get ∑_k=0^∞(2R[μ])^k/k!(t_k+p[μ]+γ_k+p(π^2)) = 2R[μ] if p=0 1-M_0[μ] if p=1 -2^1-pM_p-1[μ] if p≥ 2 . Just like before in obtaining (<ref>), this can be inverted to t_q[μ]+γ_q(π^2) =δ_q,1-∑_p=0^∞(-2R[μ])^p/p!2^1-p-qM_p+q-1[μ] The right-hand side of (<ref>) can thus be expressed as δ_q,0 2R[μ]+∑_p=0^∞(-2R[μ])^p/p!τ_p+q[H_μρ,μ] = δ_q,1+∑_p=0^∞(-2R[μ])^p/p!t_p+q[H_μρ] -∑_p=0^∞(-2R[μ])^p/p!2^1-p-qM_p+q-1[μ] =t_q[μ]+γ_q(π^2)+∑_p=0^∞(-2R[μ])^p/p! t_p+q[H_μρ]. From (<ref>) the last term is just t_q[ρ], so we have reproduced the left-hand side of (<ref>), since t_q[ρ+μ] = t_q[ρ]+t_q[μ]. The last two lemmas allow us to express (<ref>) as ∑_g=0^∞λ^2g-22^3-3gF̃_g[H_μρ,μ] =G(0;τ[H_μρ,μ]) +1/2λ^2∫_0^2R[μ]s(s + ∑_k=0^∞(-s)^k/k!τ_k[H_μρ,μ])^2 -8/λ^2F_corr[ρ,μ]. To finish the proof of Proposition <ref> we thus only need to check that the last two terms cancel. We have F_corr[ρ,μ]= 1/16∫_0^2R[μ]s(s + ∑_k=0^∞(-s)^k/k!τ_k[H_μρ,μ])^2. Let us denote the right-hand side by G_corr. By the definition (<ref>), G_corr=1/16∫_0^2R[μ]s(∑_k=0^∞(-s)^k/k! (t_k[H_μρ]-2^1-kM_k-1[μ]))^2 Changing integration variables to r=R[μ]-s/2 gives G_corr =1/8∫_0^R[μ]r(∑_k=0^∞(2r-2R[μ])^k/k! (t_k[H_μρ]-2^1-kM_k-1[μ]))^2 =1/8∫_0^R[μ]r(∑_k=0^∞(2r-2R[μ])^k/k! t_k[H_μρ] -2Z(r;μ])^2 =1/8∫_0^R[μ]r(∑_k=0^∞(2r)^k/k! t_k[ρ] -2Z(r))^2, where in the second equality we use the series expansion of Z(r)=Z(r;μ] around r=R[μ] (recall from (<ref>) that M_k[μ] = Z^(k+1)(R[μ];μ]), and in the third equality we expanded (2r-2R[μ])^k as a polynomial in r and made use of (<ref>). In terms of the weight ρ this can be written as G_corr = 1/2∫_0^R[μ]r(Z(r)-∫ρ(L) I_0(L√(2r)) )^2. In the constant, linear and quadratic term in ρ we then recognize exactly the expressions (<ref>), (<ref>) and (<ref>), G_corr = F_0[μ] + ∫ρ(L_1) δ F_0[μ]/δμ(L_1)+ 1/2∫ρ(L_1)ρ(L_2) δ F_0[μ]/δμ(L_1)δμ(L_2) = F_corr[ρ,μ]. §.§ Properties of the new kernel Recall that the new kernel is given by K(x,t,μ] = ∫_-∞^∞ K_0(x+z,t) X(z), K_0(x,t)=1/1+exp(x+t/2)+1/1+exp(x-t/2), where X(z)=X(z;μ] is determined by its two-sided Laplace transform X̂(u;μ], X̂(u;μ]=∫_-∞^∞X(z) e^-uz = sin(2π u)/2π u η(u;μ] and η(u;μ]=∑_m=0^∞M_m[μ]/(2m+1)!!u^2m. To prove Theorem <ref>, we need to relate K(x,t;μ] to the moments M_k[μ], since they appear in the shifted times. We define the reverse moments β_m[μ] as the coefficients of the reciprocal series 1/η(u;μ] =∑_m=0^∞β_m[μ] u^2m. Multiplying both series shows that the moments and reverse moments obey ∑_m=0^p M_m[μ]/(2m+1)!!β_p-m[μ]=δ_p,0 for each p≥ 0. Note in particular that β_0[μ] = 1/M_0[μ], β_1[μ] = - M_1[μ]/3M_0[μ]^2. For i,j≥1, the new kernel satisfies ∫_0^∞x x^2i-1/(2i-1)! K(x,t;μ]=∑_m=0^iβ_m[μ] t^2i-2m/(2i-2m)! and ∫_0^∞x∫_0^∞y x^2i-1y^2j-1/(2i-1)!(2j-1)! 
K(x+y,t;μ]=∑_m=0^i+jβ_m[μ] t^2i+2j-2m/(2i+2j-2m)!. We need two lemmas to prove this proposition. First we examine the one-sided Laplace transforms K̂_0(u,t) ∫_0^∞x e^-ux K_0(x,t), K̂(u,t;μ] ∫_0^∞x e^-ux K(x,t;μ]. K̂_0(u,t)- K̂_0(-u,t)= -4πcosh(tu)/sin(2π u)+2/u To compute the integral (<ref>), we only need positive values for x, so we assume x > 0. Since K_0(x,-t)=K_0(x,t) we can also assume t ≥ 0. For x<t we may expand K_0(x,t) =exp(-x-t/2)/1+exp(-x-t/2)+1/1+exp(x-t/2) = -∑_p=1^∞(-e^-x-t/2)^p + ∑_p=0^∞(-e^x-t/2)^p, while for x>t we may use K_0(x,t) =exp(-x-t/2)/1+exp(-x-t/2)+exp(-x+t/2)/1+exp(-x+t/2) = -∑_p=1^∞(-e^-x-t/2)^p - ∑_p=1^∞(-e^-x+t/2)^p. This gives K̂_0(u,t) = -∑_p=1^∞∫_0^∞x e^-ux(-e^-x-t/2)^p + ∑_p=0^∞∫_0^t x e^-ux(-e^x-t/2)^p - ∑_p=1^∞∫_t^∞x e^-ux(-e^-x+t/2)^p = -e^-ut∑_p=-∞^∞(-1)^p/u-p/2 -∑_p=1^∞ (-1)^p exp(-tp/2)/u+p/2 + ∑_p=0^∞ (-1)^p exp(-tp/2)/u-p/2 = -2π e^-ut/sin(2π u)+1/u + ∑_p=0^∞ (-1)^p e^-tp/2(1/u-p/2-1/u+p/2). When subtracting K̂_0(-u,t) it should be clear that the sum cancels and we easily obtain the claimed formula. K̂(u,t;μ]-K̂(-u,t;μ] = X̂(u;μ](K̂_0(u,t) - K̂_0(-u,t)-2/u) + 2X̂(0;μ]/u From the definition (<ref>) we obtain K̂(u,t;μ] -K̂(-u,t;μ] = ∫_-∞^∞X(z)∫_0^∞x (e^-ux-e^ux) K_0(x+z,t) =∫_-∞^∞X(z)∫_z^∞x (e^(z-x)u-e^(x-z)u) K_0(x,t) =∫_-∞^∞X(z) (e^zuK̂_0(u,t) - e^-zuK̂_0(-u,t)) -∫_-∞^∞X(z)∫_0^z x (e^(z-x)u-e^(x-z)u) K_0(x,t). The first integral evaluates to X̂(u;μ](K̂_0(u,t)-K̂_0(-u,t)). By changing variables (x,z)→(-x,-z) and using the symmetry of X(z), we observe that the second integral is unchanged when K_0(x,t) is replaced by K_0(-x,t). Since also K_0(x,t)+K_0(-x,t)=2, the second integral can be calculated to give ∫_-∞^∞X(z)∫_0^z x (e^(z-x)u-e^(x-z)u) K_0(x,t) = ∫_-∞^∞X(z)∫_0^z x (e^(z-x)u-e^(x-z)u) =- 2/u∫_-∞^∞X(z) (1-e^uz) =2/uX̂(u;μ]- 2X̂(0;μ]/u. Subtracting both integrals gives the desired result. We start by noting ∫_0^∞x x^2i-1/(2i-1)! K(x,t;μ]=-1/2[u^2i-1] (K̂(u,t;μ]-K̂(-u,t;μ]) Using Lemma <ref> and Lemma <ref>, we get for i≥1 ∫_0^∞x x^2i-1/(2i-1)! K(x,t;μ] =-1/2[u^2i-1] (X̂(u;μ](-4πcosh(tu)/sin(2π u)) + 2X̂(0;μ]/u) = [u^2i] cosh(tu)/η(u;μ] = ∑_m=0^i β_m[μ] t^2i-2m/(2i-2m)! The second identity follows easily from the first by performing the integration at constant x+y, since ∫_0^zx^2i-1(z-x)^2j-1/(2i-1)!(2j-1)! x = z^2i+2j-1/(2i+2j-1)!. §.§ Proof of Theorem <ref> We will prove the tight topological recursion by retracing Mirzakhani's proof <cit.> of Witten's conjecture, which relies on the observation that her recursion formula (<ref>), expressed as an identity on the coefficients of the volume polynomials V_g,n(𝐋), is equivalent to certain differential equations for the generating function G(s;x_0,x_1,…) of intersection numbers (see also <cit.>). These differential equations can be expressed as the Virasoro constraints <cit.> Ṽ_p e^G(0;x_0,x_1,…)=0. Here the Virasoro operators Ṽ_-1,Ṽ_0,Ṽ_1,Ṽ_2,… are the differential operators acting on the ring of formal power series in x_0,x_1,x_2,… via Ṽ_p =-(2p+3)!!/2^p+1x_p+1+1/2^p+1∑_n=0^∞(2n+2p+1)!!/(2n-1)!! x_n x_n+p +λ^2/2^p+2∑_i+j=p-1(2i+1)!!(2j+1)!!x_ix_j +δ_p,-1(λ^-2x_0^2/2)+δ_p,0/16. They satisfy the Virasoro relations Ṽ_mṼ_n=(m-n)Ṽ_m+n. Proposition <ref> suggests introducing the shift x_k → x_k + γ̃_k in G with γ̃_k = δ_k,1 - 2^1-kM_k-1[μ], which satisfies ( Ṽ_p + (2p+3)!!/2^p+1x_p+1 - 1/2^p+1∑_n=0^∞2^-nM_n[μ]/(2n+1)!!(2p+2n+3)!! x_p+n+1)e^G(0;x_0,x_1+γ̃_1,x_2+γ̃_̃2̃,…)=0. 
We use the reverse moments β_m[μ] of (<ref>) to introduce linear combinations V_p =∑_m=0^∞β_m[μ] 2^p( Ṽ_p+m + (2p+2m+3)!!/2^p+m+1x_p+m+1 - 1/2^p+m+1∑_n=0^∞2^-nM_n[μ]/(2n+1)!!(2p+2m+2n+3)!! x_p+m+n+1) of these operators for all p≥ -1, which therefore obey V_p exp(G(0;x_0,x_1+γ̃_1,x_2+γ̃_̃2̃,…))=0. Using (<ref>) the operators V_p can be expressed as V_p =∑_m=0^∞β_m[μ] 2^p( Ṽ_p+m + (2p+2m+3)!!/2^p+m+1x_p+m+1 - 1/2^p+m+1∑_n=0^∞2^-nM_n[μ]/(2n+1)!!(2p+2m+2n+3)!! x_p+m+n+1) =-1/2 (2p+3)!! x_p+1 +λ^2/4∑_m=0^∞∑_i+j=p+m-12^-mβ_m[μ](2i+1)!!(2j+1)!!x_ix_j +1/2∑_n,m=0^∞ 2^-mβ_m[μ] (2n+2p+2m+1)!!/(2n-1)!! x_nx_n+p+m +δ_p,-1(λ^-2x_0^2β_0[μ]/4+β_1[μ]/32)+δ_p,0β_0[μ]/16 In particular, after some rearranging (and shifting p→ p-1) we observe the identity 1/2 (2p+1)!! ∂ G/∂ x_p =λ^2/4∑_m=0^∞∑_i+j=p+m-22^-mβ_m[μ](2i+1)!!(2j+1)!!(∂^2 G/∂ x_i∂ x_j+∂ G/∂ x_i∂ G/∂ x_j) +1/2∑_n,m=0^∞ 2^-mβ_m[μ] (2n+2p+2m-1)!!/(2n-1)!! x_n∂ G/∂ x_n+p+m-1 +δ_p,0(λ^-2x_0^2β_0[μ]/4+β_1[μ]/32)+δ_p,1β_0[μ]/16, where G is understood to be evaluated at G=G(0;x_0,x_1+γ̃_1,x_2+γ̃_̃2̃,…). Substituting x_k = t_k[ν] such that x_k + γ̃_k = τ_k[ν,μ], Proposition <ref> links G to the generating function G(0;τ_0[ν,μ],τ_1[ν,μ],…) =∑_g=0^∞λ^2g-22^3-3gF̃_g[ν,μ] =∑_g=0^∞λ^2g-22^3-3g∑_n=0^∞1/n!∫ν(L_1)⋯ν(L_n) T_g,n(𝐋;μ] of tight Weil–Petersson volumes. The differential equations (<ref>) can then be reformulated as the functional differential equation (here all partial derivates of G are evaluated at 0, τ_0[ν,μ],τ_1[ν,μ],…) δ/δν(L_1)G(0;τ_0[ν,μ],τ_1[ν,μ],…) = ∑_p=0^∞2L_1^2p/4^p p!∂ G/∂ x_p = ∑_p=0^∞1/L_1∫_0^L_1 t t^2p/(2p)!2^1-p(2p+1)!!∂ G/∂ x_p = ∑_p=0^∞λ^2/L_1∫_0^L_1t t^2p/(2p)!∑_m=0^∞∑_i+j=p+m-22^-i-j-2β_m[μ](2i+1)!!(2j+1)!! (∂^2 G/∂ x_i∂ x_j+∂ G/∂ x_i∂ G/∂ x_j) +∑_p=0^∞1/L_1∫_0^L_1t t^2p/(2p)!∫ν(P) ∑_n,m=0^∞ 2^-m-p-n+2β_m[μ] (2n+2p+2m-1)!!/(2n)! P^2n∂ G/∂ x_n+p+m-1 +λ^-2t_0^2[ν]β_0[μ]+β_1[μ]/8+β_0[μ]L_1^2/48 Inserting the integral identities of Proposition <ref> this can also be expressed in terms of the kernel K(x,t;μ] as δ/δν(L_1)G(0;τ_0[ν,μ],τ_1[ν,μ],…) = λ^2 /4L_1∫_0^L_1t∫_0^∞x∫_0^∞y∑_i,j=0^∞ K(x+y,t;μ]x^2i+1y^2j+1/4^i i!4^j j!(∂^2 G/∂ x_i∂ x_j+∂ G/∂ x_i∂ G/∂ x_j) + 1/L_1∫_0^L_1t∫_0^∞x∫ν(P) ∑_q=0^∞ ( K(x,t+P;μ] + K(x,t-P;μ]) x^2q+1/4^q q!∂ G/∂ x_q +λ^-2t_0^2[ν]β_0[μ]+β_1[μ]/8+β_0[μ]L_1^2/48 = λ^2 /16L_1∫_0^L_1t∫_0^∞x∫_0^∞y xy K(x+y,t;μ] (δ^2 G/δν(x)δν(y)+δ G/δν(x)δ G/δν(y)) +1/2L_1∫_0^L_1t∫_0^∞x∫ν(P) x ( K(x,t+P;μ] + K(x,t-P;μ]) δ G/δν(x) +ν(L_1)(λ^-2t_0^3[ν]/6M_0[μ]-M_1[μ]t_0[ν]/48M_0[μ]^2+t_1[ν]/24M_0[μ]), where in the last line we used (<ref>). This equation at the level of the generating function (<ref>) is precisely equivalent to the recursion equation on its polynomial coefficients T_g,n(𝐋) = 1/2L_1∫_0^L_1t∫_0^∞x∫_0^∞y xy K(x+y,t;μ] [ T_g-1,n+1(x,y,𝐋_{1}) 370mu +∑_g_1+g_2=g I⨿ J={2,…,n} T_g_1,1+|I|(x,𝐋_I) T_g_2,1+|J|(y,𝐋_J)] +1/2L_1∫_0^L_1t∫_0^∞x∑_j=2^n x (K(x,t+L_j;μ] + K(x,t-L_j;μ]) T_g,n-1(x,𝐋_{1,j}), for (g,n)∉{(0,3),(1,1)} combined with the initial data T_0,3(L_1,L_2,L_3;μ] =1/M_0[μ] T_1,1(L;μ] =-M_1[μ]/24M_0[μ]^2+L^2/48M_0[μ]. This completes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> We follow a strategy along the lines of the proof of Theorem <ref>. Recall the relation (<ref>) between the intersection number generating function G and the tight Weil–Petersson volumes T_g,n. Let us denote by G_g,n(x_0,x_1,…;𝖬_0,𝖬_1,…) the homogeneous part of degree n in x_0,x_1,… of G_g(0;x_0,x_1 + 1 - 𝖬_0,x_2-12 𝖬_1, x_3 - 14𝖬_2, …). 
In other words, they are homogeneous polynomials of degree n in x_0,x_1,… with coefficients that are formal power series in 𝖬_0,𝖬_1,…, such that G_g(0;x_0,x_1 + 1 - 𝖬_0,x_2-12 𝖬_1, …) = ∑_n=0^∞ G_g,n(x_0,x_1,…;𝖬_0,𝖬_1,…). We will prove that there exist polynomials 𝒫̅_g,n(x_0,x_1,…;m_1,m_2…) such that G_g,n(x_0,x_1,…;𝖬_0,𝖬_1,…) = 1/n!1/𝖬_0^2g-2+n𝒫̅_g,n(x_0,x_1,…;𝖬_1/𝖬_0,𝖬_2/𝖬_0,…) and deduce a recurrence in n. For g≥ 2 and n=0, the existence of a polynomial 𝒫̅_g,0(m_1,m_2,…) follows from <cit.>, since G_g,0(x_0,x_1,…;𝖬_0,𝖬_1,…) = G_g(0;0,1 - 𝖬_0,-12 𝖬_1,…) = 𝖬_0^2-2gG_g(0;0,0,-12 𝖬_1/𝖬_0,-14 𝖬_2/𝖬_0,…) and G_g(0;0,0,x_2,x_3,…) is polynomial by construction. Also G_0,3 = x_0^3/61/𝖬_0, G_1,1 = x_1/241/𝖬_0-x_0/48𝖬_1/𝖬_0^2 Let us now assume 2g-2+n ≥ 2 and aim to express G_g,n in tems of G_g,n-1. By construction the series G_g,n obeys for k≥ 1, ∂ G_g,n/∂ x_k = -2^k-1∂ G_g,n-1/∂𝖬_k-1. The string equation, i.e. (<ref>) at p=-1, written in terms of G_g,n reads ∑_k=1^∞ x_k ∂ G_g,n-1/∂ x_k-1 - ∑_k=0^∞ 2^-k𝖬_k ∂ G_g,n/∂ x_k= 0, which after rearranging gives the relation ∂ G_g,n/∂ x_0 = ∑_k=1^∞(x_k/𝖬_0∂ G_g,n-1/∂ x_k-1 - 2^-k𝖬_k/𝖬_0∂ G_g,n/∂ x_k) Together with (<ref>) this is sufficient to identify the recursion relation G_g,n(x_0,x_1,…;𝖬_0,𝖬_1,…) = 1/n∑_k=0^∞ x_k ∂ G_g,n/∂ x_k = 1/n∑_k=1^∞(x_0x_k/𝖬_0∂ G_g,n-1/∂ x_k-1 + x_0𝖬_k/2𝖬_0∂ G_g,n-1/∂𝖬_k-1 - 2^k-1 x_k ∂ G_g,n-1/∂𝖬_k-1). By induction, we now verify that G_g,n is of the form (<ref>). If (<ref>) is granted for G_g,n-1, then G_g,n(x_0,x_1,…;𝖬_0,𝖬_1,…) = 1/n!1/𝖬_0^2g-2+n[∑_k=1^∞ x_0x_k∂𝒫̅_g,n-1/∂ x_k-1 - ∑_k=2^∞(-x_0𝖬_k/2𝖬_0 +2^k-1x_k)∂𝒫̅_g,n-1/∂ m_k-1 + (-x_0𝖬_1/2𝖬_0 +x_1)((2g+n-3)𝒫̅_g,n-1 - ∑_k=1^∞ m_k ∂𝒫̅_g,n-1/∂ m_k)] is indeed of the form (<ref>) provided 𝒫̅_g,n(x_0,x_1,…;m_1,m_2,…) = ∑_k=1^∞ x_0x_k∂𝒫̅_g,n-1/∂ x_k-1 - ∑_p=1^∞(-x_0m_p+1/2 +2^p x_p+1-x_0 m_1 m_k/2+x_1 m_k)∂𝒫̅_g,n-1/∂ m_p + (-x_0m_1/2 +x_1)(2g+n-3)𝒫̅_g,n-1 According to (<ref>) the series G_g,n and the tight Weil–Petersson volume T_g,n are related via G_g,n(t_0[ν],t_1[ν],…;M_0[μ],M_1[μ],…) = 2^3-3g/n!∫ν(L_1)⋯ν(L_n) T_g,n(𝐋;μ]. This naturally leads to the existence of polynomials 𝒫_g,n(𝐋,m_1,m_2,…) such that T_g,n(𝐋;μ] = 1/M_0^2g-2+n𝒫_g,n(𝐋,M_1/M_0,…,M_3g-3+n/M_0), to get 𝒫_g,n(𝐋,𝐦) = ∑_p=1^∞(m_p+1 - L_1^2p+2/2^p+1(p+1)!-m_1 m_p + 1/2L_1^2 m_p) ∂𝒫_g,n-1/∂ m_p(𝐋_{1},𝐦) + (2g-3+n)(-m_1+12 L_1^2) 𝒫_g,n-1(𝐋_{1},𝐦) + _{g=0,n=3} + ∑_i=2^n ∫L_i L_i 𝒫_g,n-1(𝐋_{1},𝐦). The claims about the degree of the polynomials 𝒫_g,n are easily checked to be valid for the initial conditions and to be preserved by the recursion formula (<ref>). This proves theorem <ref>. We note that [L_1^2p]𝒫_g,n(𝐋,𝐦) = δ_p,0[∑_q=1^∞(m_q+1-m_1 m_q) ∂𝒫_g,n-1/∂ m_q(𝐋_{1},𝐦)-(2g-3+n)m_1 𝒫_g,n-1(𝐋_{1},𝐦) + _{g=0,n=3} + ∑_i=2^n ∫L_i L_i 𝒫_g,n-1(𝐋_{1},𝐦)] +δ_p,1[ ∑_q=1^∞(1/2m_q) ∂𝒫_g,n-1/∂ m_q(𝐋_{1},𝐦)+ 12(2g-3+n) 𝒫_g,n-1(𝐋_{1},𝐦) ] +_p>1[ -1/2^pp!∂𝒫_g,n-1/∂ m_p-1(𝐋_{1},𝐦) ] Setting m_0=1 this gives ∑_p=0^∞ 2^p p! m_p [L_1^2p]𝒫_g,n(𝐋,𝐦) = _{g=0,n=3} + ∑_i=2^n ∫L_i L_i 𝒫_g,n-1(𝐋_{1},𝐦) and ∑_p=1^∞ 2^p p! m_p-1 [L_1^2p]𝒫_g,n(𝐋,𝐦) = (2g-3+n) 𝒫_g,n-1(𝐋_{1},𝐦). Using equation (<ref>), we get the desired result. § LAPLACE TRANSFORM, SPECTRAL CURVE AND DISK FUNCTION §.§ Proof of Theorem <ref> Let us consider the partial derivative operator Δ(z)=4∑_p=0^∞ (2z^2)^-1-p (2p+1)!! x_p on the ring of formal power series in x_0,x_1,… and 1/z. 
For later purposes we record several identities for the power series coefficients around z=∞, valid for a≥0, [u^-2-2a]Δ(u) =2^1-a(2a+1)!!x_a, [u^-4-2a]Δ(u)Δ(-u) =2^2-a∑_i+j=a(2i+1)!!(2j+1)!!∂^2/∂ x_i∂ x_j, [u^2a-1]1/u(z^2-u^2)η(u;μ] =∑_m=0^a z^2m-2a-2β_m[μ], where the reverse moments β[μ] were introduced in (<ref>). From the definition (<ref>) and the relation (<ref>) we deduce that for g≥1 or n≥3 ω_g,n(𝐳) = ∫_0^∞[∏_1≤ i≤ n L_i e^-z_i L_i] δ^n F̃_g[ν,μ]/δν(L_1)⋯δν(L_n)|_ν=0 L_1⋯ L_n =2^3g-3Δ(z_1)…Δ(z_n)G_g(0;x_0,x_1+γ̃_1,x_2+γ̃_2,…)_x_0=x_1=⋯=0, where γ̃_k = δ_k,1 - 2^1-k M_k-1[μ] as before. Recall the differential equation (<ref>) satisfied by this (shifted) intersection number generating function G(0;x_0,x_1+γ̃_1,x_2+γ̃_2,…), 1/2 (2p+1)!! ∂ G/∂ x_p =λ^2/4∑_m=0^∞∑_i+j=p+m-22^-mβ_m[μ](2i+1)!!(2j+1)!!(∂^2 G/∂ x_i∂ x_j+∂ G/∂ x_i∂ G/∂ x_j) +1/2∑_n,m=0^∞ 2^-mβ_m[μ] (2n+2p+2m-1)!!/(2n-1)!! x_n∂ G/∂ x_n+p+m-1 +δ_p,0(λ^-2x_0^2β_0[μ]/4+β_1[μ]/32)+δ_p,1β_0[μ]/16. With the help of the identities (<ref>) it can be recast in terms of the operator Δ(z) as Δ(z_1)G =2λ^2∑_a=0^∞∑_m=0^a+2∑_i+j=a(2z_1^2)^m-a-32^-mβ_m[μ](2i+1)!!(2j+1)!!(∂^2 G/∂ x_i∂ x_j+∂ G/∂ x_i∂ G/∂ x_j) +4∑_n,m,p=0^∞ (2z_1^2)^-1-p 2^-mβ_m[μ] (2n+2p+2m-1)!!/(2n-1)!! x_n∂ G/∂ x_n+p+m-1 +4/z_1^2(λ^-2x_0^2β_0[μ]/4+β_1[μ]/32)+β_0[μ]/8z_1^4[0] =λ^2/16∑_a=0^∞([u^2a+3]1/u(z_1^2-u^2)η(u;μ])[u^-4-2a](Δ(u)Δ(-u)G+(Δ(u)G)(Δ(-u)G)) +1/2∑_q=0^∞∑_n=0^q+12^n x_n/(2n-1)!!([u^2q-2n+1]1/u(z_1^2-u^2)η(u;μ]) [u^-2q-2]Δ(u)G +4/z_1^2(λ^-2x_0^2β_0[μ]/4+β_1[μ]/32)+β_0[μ]/8z_1^4[0] =λ^2/16_u→01/u(z_1^2-u^2)η(u;μ](Δ(u)Δ(-u)G+(Δ(u)G)(Δ(-u)G)) +1/2_u→0∑_n=0^∞(2u^2)^n x_n/(2n-1)!!1/u(z_1^2-u^2)η(u;μ]Δ(u)G +4/z_1^2(λ^-2x_0^2β_0[μ]/4+β_1[μ]/32)+β_0[μ]/8z_1^4. Extracting the genus-g contribution, which appears as the coefficient of λ^2g-2, the relation (<ref>) allows us to turn this into a recursion for ω_g,n, ω_g,n(𝐳) =1/2_u→01/u(z_1^2-u^2)η(u;μ][ω_g-1,n+1(u,-u,𝐳_{1})+ ∑_g_1+g_2=g I⨿ J={2,…,n}ω_g_1,|I|(u,𝐳_I)ω_g_2,|J|(-u,𝐳_J)] +_u→0∑_j=2^n∑_p=0^∞ u^2p z_j^-2-2p(2p+1)1/u(z_1^2-u^2)η(u;μ]ω_g,n-1(u,𝐳_{1,j}) +δ_g,0δ_n,3/M_0[μ]z_1^2z_2^2z_3^2+δ_g,1δ_n,1(-M_1[μ]/24M_0[μ]^2z_1^2+1/8M_0[μ]z_1^4) [0] =_u→01/2u(z_1^2-u^2)η(u;μ][ω_g-1,n+1(u,-u,𝐳_{1})+ ∑_g_1+g_2=g I⨿ J={2,…,n}ω_g_1,|I|(u,𝐳_I)ω_g_2,|J|(-u,𝐳_J)+ +∑_j=2^n(1/(z_j-u)^2+1/(z_j+u)^2)ω_g,n-1(u,𝐳_{1,j})] +δ_g,0δ_n,3/M_0[μ]z_1^2z_2^2z_3^2+δ_g,1δ_n,1(-M_1[μ]/24M_0[μ]^2z_1^2+1/8M_0[μ]z_1^4). Finally, if we set ω_0,2(𝐳)=(z_1-z_2)^-2 and ω_0,0(𝐳)=ω_0,1(𝐳)=0, this reduces to ω_g,n(𝐳) =_u→01/2u(z_1^2-u^2)η(u;μ][ω_g-1,n+1(u,-u,𝐳_{1})+ ∑_g_1+g_2=g I⨿ J={2,…,n}ω_g_1,|I|(u,𝐳_I)ω_g_2,|J|(-u,𝐳_J)]. §.§ Disk function Due to proposition <ref>, there is a relation between the regular and tight Weil–Petersson volumes. In this subsection we will look at this relation in the Laplace transformed setting. In particular, we are interested in the Laplace transformed generating functions of (regular) Weil–Petersson volumes 𝒲_g,n(𝐳) = ∫_0^∞L_1 L_1e^-z_1L_1⋯∫_0^∞L_nL_ne^-z_nL_nδ^n F_g[μ]/δμ(L_1)⋯δμ(L_n), where we recall that δ^n F_g[μ]/δμ(L_1)⋯δμ(L_n) = ∑_p=0^∞1/p!∫ V_g,n+p(𝐋,𝐊) μ(K_1)⋯μ(K_p). We define x_i=x_i(z_i;μ]=√(z_i^2-2R[μ]). For g≥ 1 or n≥ 3 we have 𝒲_g,n(𝐳) = ω_g,n(𝐱) ∏_i=1^n z_i/x_i, while for g=0 and n=1,2, 𝒲_0,1(𝐳) = - ∫_0^R z_1/(z_1^2-2r)^3/2 Z(r) r, 𝒲_0,2(𝐳) =z_1/x_1z_2/x_2ω_0,2(𝐱)- 1/(z_1-z_2)^2. For the first identity we wish to combine (<ref>) and (<ref>). It requires an expression for the Laplace transform of the half-tight cylinder. 
Using ∫_0^∞yz/√(4π y^3) e^-z^2/4y-yL^2 = e^-zL, ∫_0^∞pe^-yp^2/2RI_1(p) = e^R/2y-1, allows us to compute ∫_K^∞L L e^-z L (H(L,K) K + δ(L-K)) =K e^-zK+[t] ∫_K^∞L L (∫_0^∞yzK/√(4π y^3) e^-z^2/4y-yL^2) √(2R/L^2-K^2)I_1(√(2R(L^2-K^2))) =K e^-zK+∫_0^∞yzK/√(4π y^3) e^-z^2/4y-yK^2∫_0^∞pe^-yp^2/2RI_1(p) =K e^-zK+∫_0^∞yzK/√(4π y^3) e^-z^2/4y-yK^2(e^R/2y-1) =Kz/√(z^2 -2R)e^-K√(z^2-2R). Therefore, 𝒲_g,n(𝐳) = ∫_0^∞ T_g,n(𝐊;μ] ∏_i=1^n K_i z_i/x_i e^-x_i K_i K_i, which by (<ref>) gives the first stated identity. For the last two identities we use that the Laplace transform of the modified Bessel function I_0 is given by ∫_0^∞ I_0(L√(2r)) L e^-z L L = z/(z^2-2r)^3/2. Then (<ref>) follows directly from (<ref>), while for the cylinder case (<ref>) implies 𝒲_0,2(𝐳) = ∫_0^R z_1/(z_1^2-2r)^3/2z_2/(z_2^2-2r)^3/2 r = z_1/√(z_1^2-2R)z_2/√(z_2^2-2R)1/(√(z_1^2-2R)-√(z_2^2-2R))^2-1/(z_1-z_2)^2 = z_1/x_1z_2/x_2ω_0,2(𝐱)- 1/(z_1-z_2)^2. We finish this section by giving alternative expressions for the disk function and the series η(u;μ]. The disk function 𝒲_0,1(𝐳) is related to η via 𝒲_0,1(𝐳) = - z_1 √(z_1^2-2R) η(√(z_1^2-2R)) + z_1/2πsin2π z_1 -∫μ(L)cosh(L z_1), valid when 4|R| < |z_1|^2. Note that μ=0 gives 𝒲_0,1(𝐳)=0 as expected. The starting point is the standard generating function <cit.> 1/usin(√(u^2-2ut)) = ∑_n=0^∞t^n/n!y_n-1(u) for the spherical Bessel functions y_k(u) valid when 2|t| < |u|. Restricting to 2|R| < |x|^2 and using the series expansion of the ordinary and spherical Bessel functions we find x/2πsin(2πλ√(x^2+2R)) = ∑_k=0^∞ (-π)^k/k! 2^kx^2-kλ^k+1 y_k-1(2πλ x) R^k = ∑_k,m=0^∞(-1)^mπ^2m+1/2/k!m!2^k-1x^2(m-k+1)λ^2m+1R^k/Γ(m-k+32) = ∑_p=-∞^∞ x^2p∑_m=max(0,p-1)^∞(-1)^mπ^2m+1/2/(m+1-p)!m!2^m-pλ^2m+1R^m+1-p/Γ(12+p) = ∑_p=-∞^∞ x^2pλ^p Γ(1/2)/Γ(p+1/2)2^p(-√(2)π/√(R))^p-1J_p-1(2πλ√(2R)). Setting λ=1 now gives x/2πsin(2π√(x^2+2R))=∑_p=-∞^∞ x^2pΓ(1/2)/Γ(p+1/2)2^p(-√(2)π/√(R))^p-1J_p-1(2π√(2R)). On the other hand we can show that λ[x/√(x^2+2R)cos(2πλ√(x^2+2R))] =-2π x sin(2πλ√(x^2+2R)) =-4π^2 ∑_p=-∞^∞ x^2pλ^p Γ(1/2)/Γ(p+1/2)2^p(-√(2)π/√(R))^p-1J_p-1(2πλ√(2R)) =λ[∑_p=-∞^∞ x^2pλ^pΓ(1/2)/Γ(p+1/2)2^p(-2π/√(2R))^p J_p(2πλ√(2R))]. Integrating and setting λ=(iL)/(2π) gives x/√(x^2+2R)cosh(L√(x^2+2R)) = ∑_p=-∞^∞ x^2pΓ(1/2)/Γ(p+1/2)2^p(L/√(2R))^p I_p(L√(2R)), valid for 2|R| < |x|^2. Starting from (<ref>) and restricting to 2|R| < |x_1|^2 we find the series expansion 𝒲_0,1(√(x_1^2+2R))x_1/√(x_1^2+2R) = - ∫_0^R x_1/(x_1^2 + 2R - 2r)^3/2 Z(r) r =- x_1^-2∫_0^R 1/(1 + 2R - 2r/x_1^2)^3/2 Z(r) r = - ∑_p=1^∞(-2)^pΓ(1/2)/(p-1)!Γ(1/2-p) x_1^-2p∫_0^R (r-R)^p-1 Z(r) r. We can use <cit.>, ∫_0^R (r-R)^p-1√(r)/√(2)πJ_1(2π√(2r)) r = (-1)^p-1√(2)R^p+1/2/π∫_0^1 x^2 (1-x^2)^p-1 J_1(2π√(2R)x)x = (p-1)! (-√(R)/√(2)π)^p+1J_p+1(2π√(2R)), ∫_0^R (r-R)^p-1 I_0(L√(2r)) r = (-R)^p-12R ∫_0^1 x (1-x^2)^p-1 I_0(L√(2R)x)x = -(p-1)! (-√(2R)/L)^pI_p(L√(2R)). This yields 𝒲_0,1(√(x_1^2+2R))x_1/√(x_1^2+2R) = -∑_p=1^∞ x_1^-2p(-2)^pΓ(1/2)/Γ(1/2-p)[(-√(R)/√(2)π)^p+1 J_p+1(2π√(2R)) + ∫μ(L)(-√(2R)/L)^p I_p(L√(2R))] = ∑_p=-∞^-1 x_1^2pΓ(1/2)/Γ(1/2+p)2^p[(-√(2)π/√(R))^p-1 J_p-1(2π√(2R)) -∫μ(L)(L/√(2R))^p I_p(L√(2R))]. On the other hand, (<ref>) and (<ref>) imply x_1^2 η(x_1) = ∑_p=1^∞ x_1^2pΓ(1/2)/Γ(1/2+p)2^p[(-√(2)π/√(R))^p-1 J_p-1(2π√(2R)) -∫μ(L)(L/√(2R))^p I_p(L√(2R))] Together with Z(R)=0 we may now conclude that 𝒲_0,1(√(x_1^2+2R))x_1/√(x_1^2+2R) + x_1^2 η(x_1) = x_1/2πsin(2π√(x_1^2+2R)) - ∫μ(L)x_1/√(x_1^2+2R)cosh(L√(x_1^2+2R)), valid for 2|R| < |x_1|^2. Substituting x_1 = √(z_1^2 - 2R) gives the desired expression. 
For convenience, we record an explicit expression for η(u;μ] that follows from this proof. For a formal power series F(r,u) in r with coefficients that are Laurent polynomials in u, we denote by [u^≥ 0]F(r,u) the formal power series obtained by dropping the negative powers of u in the coefficients of F(r,u). Then we can write η(u;μ] as η(u;μ] = [u^≥ 0](u/2πsin(2π√(u^2+2r)) - ∫μ(L)u/√(u^2+2r)cosh(L√(u^2+2r)))|_r=R[μ]. § JT GRAVITY The Weil–Petersson volumes play an important role in Jackiw-Teitelboim (JT) gravity <cit.>, a two-dimensional toy model of quantum gravity. JT gravity has received significant attention in recent years because of the holographic perspective on the double-scaled matrix model it is dual to <cit.>. In this section we point to some opportunities to use our results in the context of JT gravity and its extensions in which hyperbolic surfaces with defects play a role <cit.>. But we start with a brief introduction to the JT gravity partition function in Euclidean signature. JT gravity is governed by the (Euclidean) action I_JT, bulk[g_μν,ϕ]=-1/2∫_ℳ√(g)ϕ(R+2), where ϕ is the scalar dilaton field, g_μν is a two-dimensional Riemannian metric and R the corresponding Ricci scalar curvature. Since we want this action to make sense when the manifold has boundaries, the boundary term I_JT, boundary[g_μν,ϕ]=-∫_∂ℳ√(h)ϕ(K-1) is included, where h_μν is the induced metric on the boundary and K is the extrinsic curvature at the boundary. Including the topological Einstein–Hilbert term, proportional to parameter S_0, gives the full (Euclidean) JT action I_JT[g_μν,ϕ]=-S_0χ+I_JT,bulk[g_μν,ϕ]+I_JT,boundary[g_μν,ϕ], where χ is the Euler characteristic of the manifold. The JT gravity partition function on a manifold ℳ with n boundaries of lengths β = (β_1,…,β_n) can formally be written as Z_n(β)=∫_ℳ𝒟g 𝒟ϕexp(I_JT). In the partition function, the dilation field ϕ acts as a Lagrange multiplier on (R+2), therefore enforcing a constant negative curvature R=-2 in the bulk. This is why the relevant manifolds will be hyperbolic surfaces. Due to the Einstein–Hilbert term, we can do a topological expansion by a formal power series expansion in e^-S_0, Z_n(β)=∑_g=0^∞(e^-S_0)^2g+n-2 Z_g,n(β). It has been shown by Saad, Shenker & Stanford <cit.> that the JT partition functions Z_g,n for 2g+n-2>0 can be further decomposed by splitting the surfaces into n trumpets and a hyperbolic surface of genus g and n geodesic boundaries with lengths 𝐛=(b_1,…,b_n), and that the partition function measure is closely related to the Weil–Petersson measure. To be precise, it satisfies the identity Z_g,n(β)=∫_0^∞(∏_i=1^n b_i b_i Z^Trumpet(β_i,b_i)) V_g,n(𝐛), where V_g,n(𝐛) are the Weil–Petersson volumes and the trumpet contributions are given by Z^Trumpet(β,b)=1/2√(πβ)e^-b^2/4β. This formula is the link between JT gravity and Weil–Petersson volumes. There are several natural extensions of the JT action. If we only allow up to two derivatives, the most general action can be transformed to <cit.> I_bulk[g_μν,ϕ]=-1/2∫_ℳ√(g) [ϕ(R+2)+U(ϕ)]. In the next subsection we will discuss a natural choice of the dilaton potential U(ϕ), which gives rise to defects in the hyperbolic surfaces. §.§ Conical defects One of the most natural dilaton potentials is U(ϕ)=μ e^-2π(1-α)ϕ, which adds a gas of conical defects of cone angle 2πα carrying weight μ each. It naturally arises <cit.> from Kaluza–Klein instantons when performing dimensional reduction on three-dimensional black holes. 
More generally, one can allow multiple types of defects by considering a measure μ on i[0,2π) and setting U(ϕ)=∫_0^1 μ(2π iα) e^-2π(1-α)ϕ. For instance, the choice μ = ∑_j=1^k μ_j δ_i γ_j gives k types of defects with cone angles γ_1,…,γ_k ∈ [0,2π], U(ϕ)=∑_j μ_j e^-2π(1-α_j)ϕ, where γ_j = 2πα_j. The choice to consider the measure on the imaginary interval will be convenient later. It can be shown <cit.> that these potentials indeed lead to conical defects. For example, one can look at the term linear in μ in the integrand of the partition function for a single type of gas: [μ^1]exp(-I_bulk)=1/2exp(-I_JT, bulk)∫_ℳx_1√(g(x_1))exp(-2π(1-α)ϕ(x_1)) =1/2∫_ℳx_1√(g(x_1))exp(1/2∫_ℳx√(g(x))ϕ(x) (R(x)+2-4π(1-α)δ^2(x-x_1))). It follows that the surface has curvature R=-2 everywhere, except at the point x_1, where we have a conical defect with cone angle 2πα. If one includes all orders of μ, any number of defects may appear and each defect carries a weight μ <cit.>. As already mentioned in the introduction, the Weil–Petersson volumes for surfaces with sharp cone points (cone angle 2πα< π) are obtained <cit.> from the usual Weil–Petersson volume polynomials by treating the defect angle 2πα as a geodesic boundary with imaginary boundary length 2π iα. This means that the partition function Z_g,n(β) is closely related to the generating function F_g[μ] of Weil–Petersson volumes considered in this paper. To be precise, using (<ref>) and (<ref>), Z_g,n(β) =∫_0^∞(∏_i=1^n b_i b_i Z^Trumpet(β_i,b_i))∑_p=0^∞1/p!∫μ(b_n+1)⋯μ(b_n+p) V_g,n+p(𝐛) =∫_0^∞(∏_i=1^n b_i b_i Z^Trumpet(β_i,b_i))δ^n F_g[μ]/δμ(b_1)⋯δμ(b_n) =∫_0^∞(∏_i=1^n b_i b_i Z^Trumpet(β_i,b_i))(∏_i=1^n K_i(K_i H(b_i,K_i;μ]+δ(K_i-b_i))) T_g,n(𝐊;μ], which we can compute using the recursions described in this paper. In particular, its topological recursion can in principle be derived from that of T_g,n in Theorem <ref>. We can simplify (<ref>) by considering the tight trumpet, which is a genus-0 hyperbolic surface with an asymptotic boundary of length β, a tight boundary of length K and an arbitrary number of extra geodesic boundaries, with the constraint that the tight boundary cannot be separated from the asymptotic one by a curve of length β. See Figure <ref>. Since it can be obtained by gluing a trumpet to a half-tight cylinder, with the help of Lemma <ref> we find that the partition function associated to a tight trumpet is given by Z^TT(β,K) =∫_K^∞b/Kb Z^Trumpet(β,b)(K H(b,K;μ]+δ(K-b)) = 1/2√(πβ)e^-K^2/4β + ∫_K^∞ bb1/2√(πβ)e^-b^2/4β√(2R[μ]/b^2-K^2) I_1( √(b^2-K^2)√(2R[μ])) =1/2√(πβ)e^-K^2/4β+2R[μ]β = e^2R[μ]βZ^Trumpet(β,K). Remarkably, it differs from the JT trumpet only by a factor exponential in the boundary length β. We conclude that for g≥ 1 or n≥ 3, Z_g,n(β)= ∫_0^∞(∏_i=1^n K_i K_iZ^TT(β_i,K_i)) T_g,n(𝐊;μ], which we understand as a gluing of tight trumpets to tight hyperbolic surfaces. In the case g=0 and n=2, we only need to glue two tight trumpets together to find the universal two-boundary correlator Z_0,2(β_1,β_2) = ∫_0^∞ Z^TT(β_1,K) Z^TT(β_2,K) K K = 1/2π√(β_1β_2)/β_1+β_2 e^2(β_1+β_2)R[μ]. We note that these expressions do not apply to the case of blunt cone points (cone angle 2πα∈ [π,2π]). The problem is that in the presence of such defects it is no longer true that every free homotopy class of closed curves necessarily contains a geodesic, because, informally, when shortening a closed curve it can be pulled across a blunt cone point, while that never happens for a sharp one.
However, this is not an issue when considering tight cycles, because in that setting one is considering larger homotopy classes, namely of the manifold with its defects closed off. Such homotopy classes will always contain a shortest geodesic, which generically is unique. Whereas the JT trumpet cannot always be removed from a surface with blunt defects in a well-defined manner, the removal of a tight trumpet should pose no problem. It is natural to ask whether such reasoning can be used to connect to the recent works <cit.> in JT gravity dealing with blunt cone points. §.§ FZZT-branes Another well-studied extension of JT gravity, is the introduction of FZZT branes. With this extension the hyperbolic surfaces can end on a FZZT brane. In the random matrix model description of JT gravity, this corresponds to fixing some eigenvalues of the random matrix <cit.>. In the partition function, this leads to the addition of an arbitrary number of geodesic boundaries as defects with a certain weight ℳ(L)=-e^-zL, where L is the length of the boundary, Z_g,n(β)_FZZT = ∑_p=0^∞e^-S_0p/p!∫_0^∞(∏_i=1^n b_i b_i Z^Trumpet(β_i,b_i)) (∏_i=n+1^n+pb_iℳ(b_i)) V_g,n+p(𝐛). Such weights have been interpreted <cit.>[Please note that z in our work corresponds to z/(√(2)π) in <cit.>] as the action of a fermion with mass z. Using our setup, we can rewrite this to: Z_g,n(β)_FZZT = ∫_0^∞(∏_i=1^n b_i b_i Z^Trumpet(β_i,b_i))(∏_i=1^n K_i (K_i H(b_i,K_i;μ_FZZT] +δ(b_i-K_i))) T_g,n(𝐊;μ_FZZT] , with μ_FZZT=-e^-S_0-zL L, or again using the tight trumpet Z_g,n(β)_FZZT= ∫_0^∞(∏_i=1^n K_i K_iZ^TT(β_i,K_i;μ_FZZT]) T_g,n(𝐊;μ_FZZT], with Z^TT(β,K;μ] =1/2√(πβ)e^-K^2/4β+2R[μ]β. The behaviour of R[μ_FZZT] depends on z and S_0 and its critical points should give insight into critical phenomena of the partition function, see <cit.>. 10 Abramowitz1964 M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables, Courier Corporation, 1964. Blommaert_2021 A. Blommaert, T. G. Mertens, and H. Verschelde, Eigenbranes in Jackiw-Teitelboim gravity, Journal of High Energy Physics (2021) 2. Bouttier_Bijective_2022 J. Bouttier, E. Guitter, and G. Miermont, Bijective enumeration of planar bipartite maps with three tight boundaries, or how to slice pairs of pants, Annales Henri Lebesgue, 5 (2022), pp. 1035–1110. budd2020irreducible T. Budd, Irreducible metric maps and Weil-Petersson volumes, Comm. Math. Phys., 394 (2022), pp. 887–917. Budd_Statistics_ T. Budd and P. Koster, Statistics of critical boltzmann hyperbolic surfaces. in preparation. buser1992geometry P. Buser, Geometry and Spectra of Compact Riemann Surfaces, Birkhäuser, Boston, 1992. castro2023critical A. Castro, Critical jt gravity, arXiv preprint arXiv:2306.14823, (2023). Dijkgraaf_Loop_1991 R. Dijkgraaf, H. Verlinde, and E. Verlinde, Loop equations and virasoro constraints in non-perturbative two-dimensional quantum gravity, Nuclear Physics B, 348 (1991), pp. 435–456. do2011moduli N. Do, Moduli spaces of hyperbolic surfaces and their weil-petersson volumes, arXiv preprint arXiv:1103.4674, (2011). Do_Weil_2009 N. Do and P. Norbury, Weil-Petersson volumes and cone surfaces, Geom. Dedicata, 141 (2009), pp. 93–107. Eberhardt_2D_2023 L. Eberhardt and G. J. Turiaci, 2d dilaton gravity and the weil-petersson volumes with conical defects, arXiv preprint arXiv:2304.14948, (2023). Eynard_Invariants_2007 B. Eynard and N. Orantin, Invariants of algebraic curves and topological expansion, arXiv preprint math-ph/0702045, (2007). 
eynard2007weil B. Eynard and N. Orantin, Weil-Petersson volume of moduli spaces, Mirzakhani's recursion and matrix models, arXiv preprint arXiv:0705.3600, (2007). Faber_conjectural_1999 C. Faber, A conjectural description of the tautological ring of the moduli space of curves, in Moduli of curves and abelian varieties, Aspects Math., E33, Friedr. Vieweg, Braunschweig, 1999, pp. 109–129. Gilmore_Short_2021 C. Gilmore, E. Le Masson, T. Sahlsten, and J. Thomas, Short geodesic loops and L^p norms of eigenfunctions on large genus random surfaces, Geom. Funct. Anal., 31 (2021), pp. 62–110. Gradshteyn_Table_2015 I. S. Gradshteyn and I. M. Ryzhik, Table of integrals, series, and products, Elsevier/Academic Press, Amsterdam, eighth ed., 2015. Translated from the Russian, Translation edited and with a preface by Daniel Zwillinger and Victor Moll, Revised from the seventh edition [MR2360010]. Guth_Pants_2011 L. Guth, H. Parlier, and R. Young, Pants decompositions of random surfaces, Geom. Funct. Anal., 21 (2011), pp. 1069–1090. Itzykson_Combinatorics_1992 C. Itzykson and J.-B. Zuber, Combinatorics of the modular group. II. The Kontsevich integrals, Internat. J. Modern Phys. A, 7 (1992), pp. 5661–5705. jackiw1985 R. Jackiw, Lower dimensional gravity, Nuclear Physics B, 252 (1985), pp. 343–356. Kaufmann_Higher_1996 R. Kaufmann, Y. Manin, and D. Zagier, Higher Weil-Petersson volumes of moduli spaces of stable n-pointed curves, Comm. Math. Phys., 181 (1996), pp. 763–787. Kontsevich1992 M. Kontsevich, Intersection theory on the moduli space of curves and the matrix Airy function, Comm. Math. Phys., 147 (1992), pp. 1–23. Maxfield_2021 H. Maxfield and G. J. Turiaci, The path integral of 3d gravity near extremality; or, JT gravity with defects as a matrix integral, Journal of High Energy Physics (2021) 1. Mirzakhani2007 M. Mirzakhani, Simple geodesics and Weil-Petersson volumes of moduli spaces of bordered Riemann surfaces, Invent. Math., 167 (2007), pp. 179–222. Mirzakhani2007a M. Mirzakhani, Weil-Petersson volumes and intersection theory on the moduli space of curves, J. Amer. Math. Soc., 20 (2007), pp. 1–23. Mirzakhani_Growth_2013 M. Mirzakhani, Growth of Weil-Petersson volumes and random hyperbolic surfaces of large genus, J. Differential Geom., 94 (2013), pp. 267–300. Mirzakhani_Lengths_2019 M. Mirzakhani and B. Petri, Lengths of closed geodesics on random surfaces of large genus, Comment. Math. Helv., 94 (2019), pp. 869–889. Monk_Benjamini_2022 L. Monk, Benjamini–Schramm convergence and spectra of random hyperbolic surfaces of high genus, Analysis & PDE, 15 (2022), pp. 727–752. mulase2006mirzakhanis M. Mulase and B. Safnuk, Mirzakhani's recursion relations, Virasoro constraints and the KdV hierarchy, arXiv preprint math/0601194, (2006). Mulase_Mirzakhanis_2008 M. Mulase and B. Safnuk, Mirzakhani's recursion relations, Virasoro constraints and the KdV hierarchy, Indian J. Math., 50 (2008), pp. 189–218. Okuyama2021FZZT K. Okuyama and K. Sakai, FZZT branes in JT gravity and topological gravity, Journal of High Energy Physics (2021) 9. Saad_JT_2019 P. Saad, S. H. Shenker, and D. Stanford, JT gravity as a matrix integral, arXiv preprint arXiv:1903.11115, (2019). Tan_Generalizations_2006 S. P. Tan, Y. L. Wong, and Y. Zhang, Generalizations of McShane's identity to hyperbolic cone-surfaces, J. Differential Geom., 72 (2006), pp. 73–112. teitelboim1983 C.
Teitelboim, Gravitation and hamiltonian structure in two spacetime dimensions, Physics Letters B, 126 (1983), pp. 41–45. Turiaci_2021 G. J. Turiaci, M. Usatyuk, and W. W. Weng, 2d dilaton-gravity, deformations of the minimal string, and matrix models, Classical and Quantum Gravity, 38 (2021), p. 204001. Witten_Two_1991 E. Witten, Two-dimensional gravity and intersection theory on moduli space, in Surveys in differential geometry (Cambridge, MA, 1990), Lehigh Univ., Bethlehem, PA, 1991, pp. 243–310. Witten_2020 E. Witten, Matrix models and deformations of JT gravity, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 476 (2020).
http://arxiv.org/abs/2307.03902v1
20230708044951
Feature selection simultaneously preserving both class and cluster structures
[ "Suchismita Das", "Nikhil R. Pal" ]
cs.LG
[ "cs.LG" ]
Corresponding author: Electronics and Communication Sciences Unit, Indian Statistical Institute, 203 B T Road, Kolkata-700108. [email protected], [email protected] When a data set has significant differences in its class and cluster structure, selecting features aiming only at the discrimination of classes would lead to poor clustering performance, and similarly, feature selection aiming only at preserving cluster structures would lead to poor classification performance. To the best of our knowledge, a feature selection method that simultaneously considers class discrimination and cluster structure preservation is not available in the literature. In this paper, we have tried to bridge this gap by proposing a neural network-based feature selection method that focuses both on class discrimination and structure preservation in an integrated manner. In addition to assessing typical classification problems, we have investigated its effectiveness on band selection in hyperspectral images. Based on the results of the experiments, we may claim that the proposed feature/band selection can select a subset of features that is good for both classification and clustering. Feature selection, Structure preserving, Classification, Neural network, Sammon's Stress, Band selection, Hyperspectral Image. § INTRODUCTION Feature selection methods can be broadly classified on the basis of how they utilize class label information. There are three categories: supervised, semi-supervised and unsupervised <cit.>. Supervised feature selection exploits the label information to find the relevant features that distinguish samples of different classes <cit.>. Semi-supervised feature selection is used when some labelled samples along with plenty of unlabelled samples are present <cit.>. Both labelled and unlabelled data are used to modify a hypothesis obtained from the labelled data <cit.>. Unsupervised feature selection is much more difficult as it needs to find the useful features in the absence of label information <cit.>. Different criteria have been chosen to select a subset of original features in different unsupervised feature selection studies. Some of them are: preserving the data distribution such as the manifold structure <cit.>, preserving the cluster structure <cit.>, and preserving data similarity <cit.>. It is noteworthy that in the case of unsupervised feature selection, some methods try to preserve the “structure" or “geometry" of the data in some sense. By contrast, supervised feature selection methods in most cases do not set any explicit criterion to preserve the structure of the data. They focus only on separating the classes as much as possible using measures that exploit class information, such as the Fisher score <cit.>, Laplacian score <cit.>, mutual information <cit.>, normalized mutual information <cit.>, ReliefF <cit.>, class correlation <cit.>, and classifier score <cit.>. We should note here that feature selection criteria are not always led by a single objective. Feature selection methods often follow a criterion that consists of two or more objectives. The study in <cit.> proposes a criterion named `maximum projection and minimum redundancy', which is governed by two goals: projecting data into a feature subspace with minimum reconstruction error and minimum redundancy.
The studies in <cit.> claim that both the global structure and the local structure should be preserved in the projected space, as both of them may carry important discriminating information, and hence they have proposed feature selection schemes that focus on preserving both the global and local structure. The investigation in <cit.> claims to preserve dual global structures. Going through various feature selection schemes having multiple objectives, we found that, whenever class labels are available, no work on feature selection explicitly focuses on preserving structural information along with class information, although both are important sources of discriminative information and may have a positive impact on the generalization ability of the classifier. Suppose, for a data set, the class and cluster structures are substantially different. Exploiting only the class labels, it may not be possible to keep the cluster structures in the projected space. For a practical system, even when the primary task is classification, we may need to cluster the samples in the space defined by the selected features. For example, fuzzy rule based classifiers are often designed by clustering the training data for each class and translating each cluster into a rule <cit.>. We could not find any feature selection method that focuses both on class and cluster separability. To bridge this gap, in this study we propose a feature selection method that selects features preserving class and cluster-structure information simultaneously. We employ a multi-layer perceptron (MLP) based neural network to develop an embedded feature selection scheme. The training of the proposed MLP based feature selection method is governed by both class discriminating and cluster (structure) preserving objectives. The philosophy is quite general and can be easily extended to other networks such as the radial basis function network. § PROPOSED METHOD Let us denote the input data by an n× P matrix, 𝐗={𝐱_i∈ℝ^P}_i=1^n. Here, 𝐱_i is a P dimensional row vector of the form 𝐱_i=(x_i1,x_i2,⋯,x_iP). Let the collection of class labels of 𝐗 be 𝐙={z_i∈{1,2,⋯, C}}_i=1^n, where z_i is the class label corresponding to 𝐱_i. We aim to select a subset of size Q from the original set of features such that the selected subset performs reasonably well in the classification task as well as in clustering. In other words, if we design a classifier using the selected features, the performance of the classifier would be comparable to that of a classifier designed using all features. Similarly, if we cluster the data in the reduced dimension as well as in the original dimension, we expect to get a similar partition matrix. Here, we propose a neural network-based framework to select features. Neural networks have been explored for feature selection <cit.> as well as for classification <cit.>. However, in our proposed model the neural network simultaneously selects features and learns a classifier, as we follow an embedded method for feature selection. Moreover, our proposed network preserves structural information and class label information simultaneously, whereas the feature selection networks in <cit.> solve classification problems and consider class label information in their loss function, but not any structural information. Note that the work in <cit.> considers a system identification problem. To build the neural network-based embedded feature selector, we employ the multi-layer perceptron (MLP) based framework used in <cit.>. The basic framework is shown in Fig. <ref>.
As seen in Figure <ref>, preceding the input layer of the MLP there is a layer consisting of P nodes. Before entering the input layer of the MLP, the jth feature passes through the node f_j(). These nodes act as attenuating gates that effectively allow or block features from contributing to the output of the neural network. For the ith instance, its jth feature x_ij on passing the gate node f_j() becomes a_jx_ij; i.e., f_j(x_ij)=a_jx_ij. In an MLP, a weighted sum of the values available at the input nodes is applied to the hidden nodes of the first hidden layer. A zero value at an input node implies that the corresponding feature is not considered. When training of the MLP-based framework is complete, the a_js for the selected features become close to 1, effectively allowing them to contribute to the classifier, whereas for poor or rejected features the a_js become close to 0, effectively preventing them from contributing to the classifier. In <cit.>, this framework was explored for classification-oriented feature selection, group feature selection, and redundancy-controlled feature selection. Here, we explore this framework for simultaneous structure-preserving and class-discriminating feature selection. Next, we elaborate on the MLP-based framework and the proposed objective functions to train the network. We denote the P nodes before the input layer of the MLP as f_j()s for j=1,2,… P, where f_j() is a gate or modulator function applied on the jth feature, x_j. Now, we have to design f_j() in such a way that f_j(x_j)=a_jx_j=x_j if x_j is a useful feature. 0 otherwise. In our framework, the factor a_j is learnable. We implement a_j as a smooth continuous function, a_j=exp(-λ_j^2). Clearly, when λ_j=0, the value of exp(-λ_j^2)=1 and when λ_j→±∞, the value of exp(-λ_j^2)→0. By adding suitable regularizer terms to the objective function, we design our learning system in such a way that, over the learning process, the gate parameters λ_j for useful features drop close to zero and those for derogatory or indifferent features rise to high values. So, in our learning system, the learnable parameters λ_j and the neural network weights are learned together, i.e., the loss function is minimized with respect to both the λ_js and the neural network weights. Now, we have to define a suitable loss function for selecting features along with learning the embedded classifier. Our aim is to select features that are reasonably good for classification as well as clustering. To satisfy this requirement, we take the loss function as a combination of two losses, E_class and E_struct. E_class is considered for preserving class information and E_struct is considered for preserving structural information. For the moment, let us consider the network for selecting features for efficient classification only. A suitable loss function to impose class discrimination is the cross-entropy loss <cit.>. We define E_class as the cross-entropy loss involving actual and predicted class labels. E_class=-1/n∑_i=1^n∑_k=1^Ct^i_klog(p_k(𝐱_i)) Here, t^i_k is the kth element of the one-hot encoded label of the sample 𝐱_i; in other words, t^i_k is the kth element of the vector 𝐭^i∈{ 0,1}^C such that t_k^i= 1 if k=z_i 0 otherwise In (<ref>), p_k(𝐱_i) is the predicted probability (by the MLP classifier) of 𝐱_i being in the kth class. As already discussed above, for effective feature selection, the magnitude of λ_j for the selected features should drop to almost zero and for rejected features should rise to high values.
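The gate mechanism and the embedded classifier are straightforward to express in code. Below is a minimal PyTorch sketch (the authors' implementation used TensorFlow); the hidden width and the choice of activation are illustrative assumptions, not details taken from the paper:

```python
import torch
import torch.nn as nn

class GatedMLP(nn.Module):
    """MLP classifier preceded by smooth feature gates a_j = exp(-lambda_j^2)."""
    def __init__(self, p, n_hidden, n_classes):
        super().__init__()
        # lambda_j ~ N(2, 1/sqrt(P)): gates start almost closed, as in the paper
        self.lam = nn.Parameter(2.0 + torch.randn(p) / p**0.5)
        self.net = nn.Sequential(nn.Linear(p, n_hidden), nn.Sigmoid(),  # activation assumed
                                 nn.Linear(n_hidden, n_classes))

    def gates(self):
        return torch.exp(-self.lam**2)      # a_j in (0, 1]

    def forward(self, x):
        return self.net(self.gates() * x)   # logits over the C classes
```

With this module, E_class is simply torch.nn.functional.cross_entropy applied to the logits, and model.gates() plays the role of the attenuation factors a_j.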
To ensure this condition we add the following regularizer. E_select = 1/P∑_j=1^P a_j(1-a_j) = 1/P∑_j=1^Pexp(-λ_j^2)(1-exp(-λ_j^2)) In a feature selection framework, a constraint for selecting a fixed number of features is necessary. The following regularizer tries to keep the number of selected features close to or equal to Q. E_Q=1/Q^2{(∑_j=1^Pa_j)-Q}^2=1/Q^2{(∑_j=1^Pexp(-λ_j^2))-Q}^2 So, the overall loss function for the selection of features with our framework for classification purposes is the following. E= E_class + α_1E_select + α_2E_Q Here, α_1≥ 0 and α_2≥ 0 are scalar multipliers for adjusting the importance of E_select and E_Q in the overall error function E. Now let us focus on our original agenda of selecting features that perform satisfactorily both for classification and clustering. To preserve the structural information of the data in the lower dimensional space formed by the selected Q features, we consider Sammon's stress <cit.> as a loss function. Sammon's stress is the loss function of a non-linear mapping, named Sammon's mapping, that is able to capture complex non-linear structures in data and, as a result, also preserves cluster structure. The lower the value of Sammon's stress, the better the lower dimensional representation captures the original inter-point distances, i.e., the structure, of the original data. We can define Sammon's stress involving the original input space and the selected feature space as the following. E_sammons=1/(∑_i,l=1^n d_il^𝐗)∑_i=1^n-1∑_l=i+1^n( d_il^𝐗- d_il^𝐗̂)^2/d_il^𝐗 d_il^𝐗 is the distance between 𝐱_i and 𝐱_l. 𝐗̂={𝐱̂_i=(a_1x_i1,a_2x_i2,⋯,a_Px_iP)^T∈ℝ^P}_i=1^n. So, d_il^𝐗̂ is the distance between 𝐱̂_i and 𝐱̂_l. As discussed earlier, at the end of the training of our embedded system, the a_js will be close to 0 or 1 depending on whether the corresponding features are rejected or selected. Therefore, for a trained system, d_il^𝐗̂ would signify the distance between the ith and lth instances in the latent space formed by the implicitly selected Q features. So, considering E_sammons in Equation (<ref>) as a regularizer, the resulting overall loss function is given by E_tot=E_class+ β E_sammons + α_1 E_select + α_2 E_Q. β≥ 0 is a scalar multiplier that controls the trade-off between the class information and the structural information in the feature selection process. Note that the computational complexity of the loss function in Equation (<ref>) is O(n^2). For large n, computing Equation (<ref>), and hence Equation (<ref>), is intensive. As the weight update at each iteration will involve computing Equation (<ref>), the overall computational cost would be high. For small and moderate n, we use Equation (<ref>) as the loss function to be minimized. However, for large n, to avoid the high computational cost, we modify Equation (<ref>) as follows. E_struct= 1/(∑_𝐱_i,𝐱_l∈ S_t d_il^𝐗)∑_𝐱_i∈ S_t∑_𝐱_l∈ S_t; 𝐱_l≠𝐱_i( d_il^𝐗- d_il^𝐗̂)^2/d_il^𝐗 Here S_t is a randomly selected subset of 𝐗 at the tth iteration. Different S_ts are chosen at different iterations and hence different sets of inter-point distances are preserved. Since the considered MLP is trained over a large number of iterations, the use of Equation (<ref>) is expected to result in almost the same effect as that of Equation (<ref>). We have to choose |S_t| such that Equation (<ref>) is computationally manageable and, at the same time, large enough to make E_struct an effective substitute for E_sammons.
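The regularizers and the structure term can also be written compactly. A sketch continuing the PyTorch conventions of the previous snippet; the small eps guard against zero distances is our own addition:

```python
import torch

def regularizers(lam, Q):
    """E_select drives each gate towards 0 or 1; E_Q keeps roughly Q gates open."""
    a = torch.exp(-lam ** 2)
    e_select = (a * (1.0 - a)).sum() / lam.numel()
    e_q = ((a.sum() - Q) ** 2) / Q ** 2
    return e_select, e_q

def sammon_stress(x, a, eps=1e-9):
    """E_sammons between X and the gated representation X_hat = (a_1 x_1, ..., a_P x_P)."""
    d = torch.cdist(x, x)                      # original pairwise distances
    d_hat = torch.cdist(a * x, a * x)          # distances after gating
    iu = torch.triu_indices(x.shape[0], x.shape[0], offset=1)
    d, d_hat = d[iu[0], iu[1]], d_hat[iu[0], iu[1]]
    return ((d - d_hat) ** 2 / (d + eps)).sum() / d.sum()
```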
Adding Equation (<ref>) to Equation (<ref>), we propose the following loss function for our system. E_tot=E_class+ β E_struct + α_1 E_select + α_2 E_Q E_tot is minimized with respect to the gate parameters λ_j and the weights of the network to find their optimal values. § EXPERIMENTATION AND RESULTS The feature selection framework proposed in this paper is generic, but it can be adapted to solve specialized problems. We have studied the proposed framework on general datasets as well as for solving a special problem: band selection in hyperspectral images. We present the results of band selection for HSIs in a different subsection, Subsec. <ref>. We present the results of feature selection for the conventional classification problem in the following subsection (Subsec. <ref>). §.§ Feature selection for conventional classification problems We have used five publicly available datasets that are very commonly used for classification and clustering. The first four datasets are downloaded from the UCI machine learning repository <cit.>. AR10P is downloaded from the open-source feature selection repository of Arizona State University <cit.>. We have also performed the experiments with three benchmark HSI datasets for land cover classification problems. We discuss them in a separate subsection (Subsec. <ref>). The details of the number of features, number of classes, and number of instances for the five datasets are summarized in Table <ref>. The datasets are used directly without any further processing. The datasets are partitioned into training and test sets as approximately 90% and 10% of the total number of instances. To implement our proposed feature selection scheme, we use the neural network shown in Fig. <ref> with the number of hidden layers n_H = 1. The input and output layers have P and C nodes respectively, where P is the number of features and C is the number of classes of the considered dataset. The number of hidden nodes in the hidden layer is 8 (20 for the AR10P data set). To get stable feature selection results, the network weights are initialized in a specific way. To set the initial weights of the proposed network, we perform the following steps. First, we consider the usual MLP part of our network (i.e. without feature selection), depicted by the portion within the dotted rectangle in Fig. <ref>, and initialize its weights randomly. Next, we train the usual MLP with the cross-entropy loss defined in Equation (<ref>) on the training set until convergence. The weights of the converged network are used as the initial weights of the proposed network. The gate parameters λ_j are initialized with values drawn randomly from a normal distribution with mean =2 and spread =1/√(P). The initial values of the λ_js are chosen around 2 to make the gates effectively almost closed initially. As the learning progresses, the λ_js are updated so as to admit the useful features into the network. For the proposed system, to select a subset of Q features, the gate parameters λ_j are sorted in ascending order of magnitude, and the Q features corresponding to the Q smallest magnitudes are selected. The network weights as well as the gate parameters λ_j are learned using the adaptive gradient algorithm, the `train.AdagradOptimizer' routine of the `TensorFlow' framework <cit.>. For all experiments with the data sets in Table <ref>, both α_1 and α_2 of the error functions in Equations (<ref>) and (<ref>) are set to 1. The total number of iterations for training the network is set to 20000. A code sketch of this full training setup is given below. The five datasets we consider here have n<400 instances, which is not so large.
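A hedged sketch of the two-stage procedure just described, reusing GatedMLP, regularizers and sammon_stress from the earlier snippets. The learning rate and the pretraining iteration budget are our own placeholders (the paper only says the plain MLP is trained until convergence), and "spread" is read here as the standard deviation of the initial λ distribution:

```python
import torch
import torch.nn.functional as F

def train_fsmlp_struct(x, y, Q, beta, alpha1=1.0, alpha2=1.0, iters=20000):
    model = GatedMLP(x.shape[1], n_hidden=8, n_classes=int(y.max()) + 1)
    # Stage 1: pretrain the plain MLP with all gates forced open (lambda = 0 -> a_j = 1)
    with torch.no_grad():
        model.lam.zero_()
    pre = torch.optim.Adagrad(model.net.parameters(), lr=1e-2)  # lr is a placeholder
    for _ in range(2000):                                       # "until convergence"
        pre.zero_grad()
        F.cross_entropy(model(x), y).backward()
        pre.step()
    # Stage 2: re-draw lambda_j ~ N(2, 1/sqrt(P)) and train gates and weights jointly
    with torch.no_grad():
        model.lam.copy_(2.0 + torch.randn_like(model.lam) / x.shape[1] ** 0.5)
    opt = torch.optim.Adagrad(model.parameters(), lr=1e-2)
    for _ in range(iters):
        opt.zero_grad()
        e_sel, e_q = regularizers(model.lam, Q)
        loss = (F.cross_entropy(model(x), y)
                + beta * sammon_stress(x, model.gates())
                + alpha1 * e_sel + alpha2 * e_q)
        loss.backward()
        opt.step()
    return model.lam.detach().abs().argsort()[:Q]  # Q smallest |lambda_j| = selected
```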
Therefore, we use (<ref>) as the overall loss function to train the MLP based architecture for selecting features that are reasonably good for clustering and classification. When β=0 in (<ref>), effectively, the error function that governs the learning of our MLP based embedded feature selection scheme is (<ref>). The corresponding feature selection scheme now only considers classification. Let us name this method as feature selection with MLP (FSMLP). When β≠ 0 in (<ref>), our method takes structure preservation into account along with classification. Let us name the corresponding method as FSMLPstruct. To understand the importance of adding the structure preserving regularizer (<ref>), we perform feature selection with FSMLP and compare with FSMLPstruct having different β values. We explore three values of βs 0.1, 1, and 10. Although the exact value of the β that is optimum for a particular dataset for a particular number of selected features Q cannot be decided from these three values, we investigate the effect of three widely different βs to see the role of the weight to the structure preserving regularizer, i.e. β on the performance of the selected features. We compare with three other methods namely, Independent Component Analysis (ICA)-based feature selection <cit.>, F-score based filter method <cit.>, and mutual information based filter method <cit.>. The performance of both FSMLP and FSMLPstruct is dependent on the initial weights of the network. So, we repeat the initialization of the network weights and gate parameters λ_js five times and run the schemes- FSMLP or FSMLPstruct five times with the five initializations. For the performance measure of FSMLP and FSMLPstruct, we consider the average performance over the five subsets obtained from the five runs. To check the effectiveness of the methods in selecting features that perform well in classification and clustering simultaneously, we compute the classification scores of the support vector machine (SVM) classifier as well as several structure-preserving indices: Sammon's stress (SS) <cit.>, normalized mutual information (NMI) <cit.>, adjusted rand index (ARI) <cit.>, and Jaccard Index (JI) <cit.>. As the measure of classification performance, we use the overall classification accuracy (OCA) of the SVM classifier. The optimal hyper-parameters of SVM are determined through five-fold cross-validation using grid search. Note that here the test set is not only unseen to the SVM classifier but unseen to the feature selection methods also. SS, defined in Equation (<ref>) use the original inter-point distances d_il^𝐗s and latent space inter-point distances d_il^𝐗̂s. Here to compute d_il^𝐗̂, we use the lower dimensional data formed by the selected Q features. We use NMI, ARI, and JI as the structure-preserving performance metrics by supplying the cluster labels obtained from clustering the data in the original space (using all features) as the true label and the cluster labels obtained from clustering the data in the reduced space formed by the selected Q features as the predicted cluster label. So, NMI, ARI, and JI measure how the cluster assignments in the original space and in the selected space agree, effectively giving a measure for the preservation of the original cluster structure in the selected space. We know that the maximum value for NMI or ARI or JI is 1. Here, the value of each of these three measures being close to 1 indicates that the cluster structure in the original space is preserved in the selected space. 
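The evaluation protocol is easy to reproduce as well. An illustrative sketch with scikit-learn; note that it clusters with plain k-means only to keep the example dependency-light (the experiments reported here use fuzzy C-means), and the Jaccard index is omitted:

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def structure_scores(x, selected, n_clusters, seed=0):
    """SS, NMI and ARI between the original space and the selected-feature space."""
    d, d_sel = pdist(x), pdist(x[:, selected])
    ss = np.sum((d - d_sel) ** 2 / (d + 1e-12)) / d.sum()   # Sammon's stress
    lab = KMeans(n_clusters, n_init=10, random_state=seed).fit_predict(x)
    lab_sel = KMeans(n_clusters, n_init=10, random_state=seed).fit_predict(x[:, selected])
    return ss, normalized_mutual_info_score(lab, lab_sel), adjusted_rand_score(lab, lab_sel)
```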
As the clustering algorithm we use, the fuzzy C means (FCM) algorithm <cit.> with the fuzzy exponent m=2. We set the number of clusters for FCM algorithm as the number of classes. We use two values for the number of the selected features, Q. Q=0.35 × P and Q=0.5 × P, where these values are rounded up to the nearest integers using the ceiling function. Tables <ref> and <ref> summarize the performances of the proposed method and other comparing methods for training and test sets, respectively for the E. coli dataset. We tabulate the three previously mentioned structure preserving measures and one classifier score for two choices of the number of selected features (approximately 35% and 50% of the original dimension) i.e., Q=3, and Q=4 in Tables <ref> and <ref>. As we have already discussed in Sec. <ref> the lesser the value of SS, the better the projected space (formed by selected features) preserves the original pairwise distances and hence the structure of the original data. We observe in Table <ref>, the mutual information based method shows the lowest value of SS, and the second lowest is FSMLPstruct with β=10 for both Q=3 and Q=4. Actually, the SS values for the mutual information based method and FSMLPstruct with β=10 are almost the same, equal up to two places after decimal points in both choices of Q. The SS values achieved by ICA, the F score based method and FSMLP are comparatively higher. So, the mutual information based method and FSMLPstruct with β=10 preserve the original pairwise distances most in the projected space. They are also expected to preserve the structures most. The values of the other three structure preserving measures i.e., NMI, ARI, and JI confirm that. We know that the higher the values of NMI, ARI, and JI are, the closer the clustering structures of the projected space are to the original clustering structures. The highest values of the NMI, ARI, and JI are obtained by the mutual information based method, followed by the FSMLPstruct with β=10. So, in cluster structure preservation, mutual information based method and FSMLPstruct with β=10 perform better than the other three methods and even than the other two models trained by FSMLPstruct with β=0.1 and 1. β is the weight of the regualizer E_sammons in (<ref>). Although SS and E_sammons are not exactly same, under the influence of E_select, it is expected that the higher the value of β lesser the value of SS would be. Table <ref> reconfirms that. The SS values become lesser as the β increases from 0.1 to 10. SS values of FSMLP (which is basically FSMLPstruct with β=0) and FSMLPstruct with β=0.1 are the same for both the choices of Q. Actually, FSMLP and FSMLPstruct with β=0.1 have all the ten measures the same. It proves that for the E.coli dataset β=0.1 does not give any effective weightage to the structure preservation term and chooses the same subsets as FSMLP. For the classification performance measure OCA, FSMLPstruct with β=1, achieves the highest value, followed by FSMLPstruct with β=10. The mutual information based method and FSMLPstruct with β=10 have all the structure preserving measures either almost equal or of comparable values, however for OCA, FSMLPstruct with β=10 is better than mutual information based method with a margin more than 18%. For E. coli data, the test set follows the observed trends in the training set with the following exceptions. First, for Q=3, the values of NMI, ARI, and JI have not increased as β increases from 0.1 to 10. 
Second, for Q=4, FSMLPstruct with β=10 beats all the methods including mutual information based method. Analyzing the performances over train and test sets, for E. coli data FSMLPstruct with β=10 is the winner among the other six models. Tables <ref> and <ref> compare the performance of the proposed method with other methods in terms of different criteria for the Glass dataset on its training and test sets, respectively. The chosen numbers of features for the Glass data are 4 and 5. The expected nature of decreasing SS with increasing β is clearly observed for Q=5 for both the training and test set. For Q=4, the Glass data also follows the characteristics of the E. coli data of having the same values for FSMLP and FSMLPstruct with β=0.1 in all the ten measures for both training and test set. For Q=4, from β=0.1 onwards, increasing βs produce decreasing SS values and increasing NMI, ARI, and JI values for both training and test datasets. We observe from the Tables <ref> and <ref>, for Q=5, as the β increases from 0 (FSMLP) to 0.1, and then to 1, NMI, ARI, and JI values are increased for both training and test datasets, however at β=10, NMI, ARI, and JI values are decreased compared to β=0.1 and 1. We can conclude that, for Q=4, FSMLPstruct with β=10 gives the best structure preserving performance among the considered models and for Q=5, FSMLPstruct with β=1 is best in structure preservation. In terms of the classification performance measure OCA, FSMLPstruct with β=10 and FSMLPstruct with β=1 show the highest OCA values for the training set and test set respectively, with Q=4. On the other hand, for Q=5, FSMLPstruct with β=10 show the highest OCA values for the training set and FSMLPstruct with β=1 show the highest OCA values for the test set. Inspecting all the performance measure values, we conclude that for the Glass dataset, both FSMLPstruct with β=10 and FSMLPstruct with β=1 are comparatively better in simultaneously preserving both class and cluster structures than the other methods. The performances of the Ionosphere dataset are recorded in Tables <ref> and <ref> for training and test sets respectively. For the Ionosphere data set, the number of selected features, Q is set as 12 and 17. Here, in all the cases, whenever the β is increasing, SS is decreasing and the other structure preserving indices NMI, ARI, and JI are increasing consistently. Unlike, E. coli and Glass data set, here when β increases from 0 (in FSMLP) to 0.1, the structure preserving metrics including SS shifted in the desired direction in most of the cases and remained the same in some cases. Except for SS, in the other three structure preserving measures, ICA and F score based method have performed better than FSMLP and FSMLPstruct for all the cases. Classification performance is good for almost all methods for the Ionosphere data set. In the training set, for both Q=12 and Q=17, an accuracy of 97.46% is reached by mutual info and F score based methods, however, FSMLP and FSMLPstruct models have reached more than 96% accuracy in every case. For the test set, all the structure preserving indices are better for FSMLP and FSMLPstruct than ICA, F score, and mutual information based methods, although in terms of classification score OCA, the F score and mutual information based methods have performed marginally better than FSMLP and FSMLPstruct models. 
This may have happened because the selected features from the neural network based classifier which are expected to be discriminatory features, may not be the best for SVM. Moreover, FSMLPstruct makes a compromise between preserving cluster structure and classifier loss. For the Ionosphere dataset, our proposed models are not the winner. May be with higher β, FSMLPstruct would deliver better scores. For the Sonar data, the summary of the performances of the training and test data sets in terms of the five measures for two choices of the number of selected features are available in Tables <ref> and <ref>. We set, Q=21 and 30 for the Sonar data set. In the case of the Sonar data set, not only with increasing β, all the structure preserving indices improve, in case of the training set, FSMLPstruct with β=10 are significantly better than ICA, F score, and mutual information based methods, and FSMLP in all five scores for both the choices of Q. In test set for some cases, FSMLPstruct with β=1 is better than FSMLPstruct with β=10. For the Sonar data set, clearly, the proposed method performed extremely well in terms of classification and clustering performance. Tables <ref> and <ref> summarize the performances of the proposed method and other comparing methods for training and test sets, respectively for the AR10P data set. The original number of features, P for the AR10P data set is 2400, which is comparatively higher than that of the other two data sets used in this sub-section. The two choices of the number of selected features here are 40 and 60 and these are not approximately 35% and 50% of the original dimension like in previous cases. The study in <cit.>, proposed a feature selection scheme for redundancy control in features. They reported an average number of selected features of 58.9 without practicing redundancy control and an average number of selected features in the range of 22.8 to 44.2 when practicing redundancy control for AR10P data set. Hence, we choose the number of selected features Q as 40 and 60. From the classification scores shown in Table <ref>, we note that for all the methods for both the choices of Q, classification scores in training set are more than 99%. In the training set, we observe that for FSMLPstruct as β increases SS is decreased in almost all the cases. But for the test set, this is not true. For the other structure-preserving measures for the training set, FSMLPstruct with β=50 is best among all the methods for Q=40 and FSMLPstruct with β=100 is best among all the methods for Q=60. In the test set, all the methods have performed almost the same in terms of the structure-preserving measures. The classification performances of FSMLPstruct are very poor in the test set for AR10P data. The significant differences in training and test OCA values for FSMLPstruct indicate poor generalization of the system. This problem may be addressed by choosing the number of nodes for our MLP based model through cross-validation. Results from the five data sets clearly establish the benefit of introducing the proposed structure preserving regularizer term, E_sammons in the overall loss function (<ref>) of the MLP based embedded feature selection scheme. Next we shall consider the band (channel) selection problem for hyperspectral satelite images. §.§ Band selection in hyperspectral images Let our considered hyperspectral image I be of dimension, H× W× P where, H, W, and P are the height, width, and number of spectral bands of the image respectively. 
We can represent the pixels of I as 𝐱_i∈ℝ^P: i=1, 2, … ,H× W. Let, there be total n pixels annotated with C land cover classes. Without any loss of generality, we take the first n pixels, i.e., i=1,2, … n as the pixels having class labels. Our input data for land cover classification problem be 𝐗={𝐱_i=(x_i1,x_i2,⋯,x_iP) ∈ℝ^P}_i=1^n. The collection of class labels of 𝐗 be 𝐙={z_i∈{1,2,⋯, C}}_i=1^n, where, z_i is the class label corresponding to 𝐱_i. We aim to select a subset of size Q from the original set of bands such that the selected subset performs reasonably well for land cover classification as well as in clustering. We have performed the experiments with three benchmark HSI datasets for land cover classification problems- Indian pines, Pavia University, and Salinas<cit.>. We have used the corrected version of the Indian pines and Salinas dataset having the number of bands 200 and 204 respectively. The Pavia University dataset uses 103 bands. The pre-processing of the datasets is the same as done in <cit.>, following the code available in<cit.>. For any dataset, its pixel values are scaled to [0,1] using the expression (x-min(x))/(max(x)-min(x)), where, x is a pixel value. The max and min are computed over the entire HSI. The data are then mean normalized across each channel by subtracting channel-wise means. The datasets are partitioned into training and test datasets. For band selection, only the training datasets are fed to the model. For measuring performances both training and test datasets are used. For splitting the datasets into training and test subsets, we drop the pixels of the unknown land-cover type. Let 𝐗 be the set of pixels with known land-cover type. To obtain the training and test sets, let us divide 𝐗 into two subsets 𝐀 and 𝐁 such that 𝐀⋃𝐁=𝐗, 𝐀⋂𝐁=ϕ, and 𝐀 and 𝐁 contain, respectively, 25% and 75% pixels of 𝐗. We use 𝐀 as the test set. Note that, both the datasets suffer from the class imbalance problem. To avoid the learning difficulty raised by class imbalance, in the training set, we consider the same number of instances from each class. For this, from the subset 𝐁, we randomly select (without replacement) 200 pixels per class. If a class has less than 200 instances in 𝐁, we oversample the class by synthetic minority oversampling technique (SMOTE) <cit.> to gather 200 points. For band selection also, we use the same neural network (Fig. <ref>) with the number of hidden layers, n_H=3. The numbers of hidden nodes in the three hidden layers are 500, 350, and 150 respectively. Here the number of input nodes of the MLP is equal to the number of bands (P). The network weights and the gate parameters λ_js are initialized in the same way as done. For all experiments of the current sub-section, α_1 and α_2 of the error functions in Equations (<ref>) and (<ref>) are set as 5 and 1 respectively. The total number of iterations for training the network is set to 50000. The rest of experimental settings are kept same as the previously mentioned experiment with the two data sets. The number of training instances of Indian pines and Salinas data set is 3200 and that of Pavia university is 1800. Both of the number of training instances, n are high. Computation of the E_sammons in (<ref>) would involve computing (3200)^2 or (1800)^2 distances. Adding E_sammons to the overall loss function would cause very intensive computation at each iteration. So, instead of E_sammons, its proposed approximation E_struct defined in (<ref>) is used. In (<ref>), |S_t| is taken as 100. 
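To make the cost concrete: a full-batch Sammon term over n = 3200 training pixels touches n(n-1)/2 ≈ 5.1×10^6 distance pairs at every iteration, while |S_t| = 100 touches only 4950. A minimal sketch of the per-iteration subsampling, reusing sammon_stress from the earlier snippet:

```python
import torch

def e_struct(x, gates, subset_size=100):
    """One stochastic evaluation of E_struct: Sammon's stress on a fresh random S_t."""
    idx = torch.randperm(x.shape[0])[:subset_size]
    return sammon_stress(x[idx], gates)
```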
Varying the value of β in (<ref>), we analyse its effect on the OCA, SS, NMI, ARI, and JI. We compute SS, NMI, ARI, and JI as described in Subsec. <ref>. We also use the same clustering algorithm with the same settings as used in Subsec. <ref>. Tables <ref> and <ref> summarize the comparative results of FSMLPstruct with FSMLP and other band selection methods, ICA, F score, and mutual information based filter methods on the training and test datasets of Indian pines respectively. Similarly, Table <ref> and <ref> summarize the comparative results on the training and test datasets of Pavia university. In this experiment, we have fixed the number of selected bands Q, approximately to 35% of the original number of bands P. So, The number of selected bands is 70 for Indian pines and it is 35 for Pavia University. Tables <ref> and <ref> record the values of the structure-preserving indices and classification scores on Indian pines for different β values in FSMLPstruct (β values in Equation (<ref>)). The considered βs for Indian pines, are 2, 5, 20, and 50. Note here that, FSMLP is basically FSMLPstruct with β=0. We observe in Tables <ref> and <ref> that both for training and test datasets as the value of β increases for FSMLPstruct (in the last five rows of the corresponding Tables) the value of SS becomes smaller. A similar trend is also observed for the Pavia University data set (here, β varies as 1,1.5,2, and 2.5) for training (Table <ref>) and test (Table <ref>) sets. For the Pavia university dataset, we have set the values of β in FSMLPstruct as 1,1.5,2, and 2.5. Unlike Indian pines for Pavia university, we restrict the βs to lower values. This is due to the fact that the number of selected bands for Pavia university is 35 and that for Indian pines is 70. Lesser the number of bands, the lesser the importance (β) to be given to our structure preserving regularizer in Equation (<ref>) to obtain a desired balance between classification and clustering performance. Table <ref> which contains the results for the Indian pines training data, clearly shows that both FSMLP and FSMLPstruct are better than ICA, F-score based, mutual information based methods in all four structure preserving metrics as well as in terms of the OCA. In Table <ref> we observe that with increasing values of β there is a consistent improvement in the values of the four structure preserving metrics while the values of OCAs retain approximately at 91%. The results shown in Table <ref> for the Indian pines test set also show that FSMLP and FSMLPstruct perform better in terms of all the five metrics than the other three methods. Also with an increase in β all the structure-preserving metrics improve for FSMLPstruct, except the value of JI slightly decreases when β goes to 50 from 20. The classification metric OCA is around 78% with bands selected by FSMLPstruct for different choices of βs. It is notable here that the test set is completely unseen in the process of band selection, yet the selected bands for the proposed method is providing fairly good results for structure preservation as well as for classification. As observed from Table <ref> and Table <ref> for Pavia university training and test sets respectively, the lowest (best) SS value among all the comparing methods is achieved by mutual information based filter method. However, for the other four metrics i.e. NMI, ARI, JI and OCA; FSMLP and FSMLPstruct show better values. 
In the case of the Pavia university dataset with increasing β; NMI, ARI, and JI are not consistently increasing but the results indicate that, it is possible to find a β, (here β=2) where the structures are preserved better maintaining a good classification score. Table <ref> and <ref> summarize the comparative results of training and test datasets of Salinas. We note from Tables <ref> and <ref> that, for the Salinas dataset, FSMLP and FSMLPstruct are better than the other three methods in all the five metrics used. All four structure preserving metrics scores of FSMLPstruct are better than or comparable to FSMLP keeping the classification score OCA at approximately 96% for the training dataset and 90% for the test dataset. Tables <ref> and <ref> reveal that when β is increased from 0 to 2, the value of SS is increased however, from β=2 to β=50 onward, the values of SS are decreased. The exceptions for the Salinas dataset, while increasing β from 0 to 2 is possibly due to the fact that we do not use the entire training data in Equation (<ref>) and use of |S_t|=100 in Equation (<ref>) is not adequate to capture the structure of the data faithfully for the Salinas dataset. As discussed earlier, setting the value of |S_t| is crucial for approximating Equation (<ref>) with Equation (<ref>). We have set |S_t|=100 for all three datasets empirically. However, choosing an optimum value of |S_t| for each dataset is expected to avoid the occurred exceptions. As we increase the value of β, there is more stress to reduce the loss function Equation (<ref>). In most cases, increasing β results in a drop in SS. This clearly suggests that the loss function Equation (<ref>) that we use, is a computationally efficient substitute for the original SS defined in Equation (<ref>). We have included results of the thematic maps (Fig. <ref>) and it reveals that our proposed method is capable of selecting useful bands that can broadly capture the land cover types. Figure <ref> illustrates thematic maps of the entire region captured in the Indian pines dataset. Figure <ref> shows ground truth labels. Figures <ref>, <ref>, <ref> are thematic maps of the Indian pines data set using the class labels obtained from the SVM classifier trained on the considered training set represented with 70 bands selected by FSMLPstruct with β=0, i.e., by the method FSMLP, and FSMLPstruct considering β=20, and β=50, respectively. Figure <ref> ensures that even with the increasing stress on the structure-preserving regularizer E_struct, our proposed band selection method FSMLPstruct is able to select bands that maintain a good land cover classification performance. § CONCLUSION AND DISCUSSIONS To the best of our knowledge, a feature selection method that simultaneously cares about class discrimination and structure preservation is not available in the literature. In this study, we have tried to bridge this gap by proposing a neural network-based feature selection method that focuses both on class discrimination and structure preservation. To learn the proposed system, we use Sammon's stress as a regularizer to the classification loss. For datasets having a large number of instances, the computational overhead associated with Sammon's stress is very high. Consequently, as the structure-preserving regularizer, we use Sammon's stress computed based on a sample of the original data (using dynamic sampling on each iteration during the adaptive gradient descent based learning). 
Using this regularizer in the experiments with datasets having a large number of instances, we have demonstrated that this regularizer is an effective and computationally efficient implementation of Sammon's stress based structure-preserving regularizer. Our proposed feature selection scheme is generic. So we have investigated its effectiveness on datasets commonly used for assessing classifiers as well as for a specialized case: band selection in hyperspectral images (HSI). We have applied the feature selection scheme to five real-world datasets which are commonly used typically for assessing classification. In the context of band selection, we have applied our method to three well-known HSI datasets and compared performances with three other band selection methods. Based on our experiments, we conclude that the proposed feature selection method is able to produce reasonably good classification and clustering scores in the majority of the data sets, proving that the proposed method is capable of selecting a subset of features that is good both for classification and clustering. Our scheme provides a mechanism to control the number of selected features. The proposed method is easily extendable to other networks like Radial Basis Function (RBF) network.
http://arxiv.org/abs/2307.05561v1
20230709173313
TransPose: A Transformer-based 6D Object Pose Estimation Network with Depth Refinement
[ "Mahmoud Abdulsalam", "Nabil Aouf" ]
cs.CV
[ "cs.CV", "cs.AI" ]
TransPose: A Transformer-based 6D Object Pose Estimation Network with Depth Refinement Mahmoud Abdulsalam and Nabil Aouf (corresponding author) Department of Engineering, School of Science and Technology, City, University of London, ECV1 0HB London, United Kingdom Email: {mahmoud.abdulsalam, nabil.aouf}@city.ac.uk August 12, 2023 As demand for robotics manipulation applications increases, accurate vision-based 6D pose estimation becomes essential for autonomous operations. Convolutional Neural Network (CNN) based approaches for pose estimation have been previously introduced. However, the quest for better performance still persists, especially for accurate robotics manipulation. This quest extends to the Agri-robotics domain. In this paper, we propose TransPose, an improved transformer-based 6D pose estimation network with a depth refinement module. The architecture takes in only an RGB image as input, with no additional supplementing modalities such as depth or thermal images. The architecture encompasses an innovative lighter depth estimation network that estimates depth from an RGB image using a feature pyramid with an up-sampling method. A transformer-based detection network with additional prediction heads is proposed to directly regress the object's centre and predict the 6D pose of the target. A novel depth refinement module is then used alongside the predicted centers, 6D poses and depth patches to refine the accuracy of the estimated 6D pose. We extensively compared our results with other state-of-the-art methods and analysed our results for fruit-picking applications. The results we achieved show that our proposed technique outperforms the other methods available in the literature. Transformer, Depth Estimation, Pose Estimation § INTRODUCTION 6D object pose estimation is a crucial topic to address in the robotics domain. The ability to perceive the position of an object from a single RGB image can find application in areas such as: robotics for grasping tasks <cit.>, autonomous driving <cit.>, space applications <cit.> and robotics for virtual and augmented reality applications <cit.>. This problem, however, comes with several challenges such as: object appearance and texture, lighting conditions and object occlusion <cit.>. Conventionally, the 6D object pose estimation problem is formulated as a feature mapping problem where feature points of a 3D object are matched on 2D images <cit.>. However, these methods are unable to detect features on smooth objects with minimal or no texture. Additional modalities such as depth data have been used to solve the problem of features on texture-less objects <cit.>. However, this requires more inputs in the form of RGB-D images. With the emergence of Convolutional Neural Networks (CNNs), some research has leveraged this powerful tool as part of the pipeline to estimate 6D poses <cit.>. Transformer-based models are emerging and proving to be more efficient than CNNs <cit.>. Thus, a few pipelines adopting transformer-based models for 6D pose estimation, in the quest for better accuracy, exist <cit.>.
In this work, we propose a new 6D object pose estimation architecture where we aim at improving the accuracy in comparison of the existing methods. We introduce TransPose: an improved transformer-based 6D pose estimation network with a novel depth refinement module. The objective is to get better 3D translations and rotations estimates from a single RGB image input. For our initial estimations, we adapted the Detection Transformer (DETR) framework, <cit.>, to directly regress the center of the target object. Furthermore, we obtain an image patch of the target object. The translation and rotation can directly be regressed by formulating additional prediction heads on DETR <cit.>. Indeed, feed-forward heads are added to regress the two components of the 6D pose (3D translation and 3D rotation). A novel depth refinement module is also introduced in our estimation pipeline to increase the accuracy of the pose estimation. TransPose architecture performs two interdependent tasks to obtain the final 6D pose of the target object. As seen in Fig. <ref>, an RGB image is used as the input to the pipeline. The image is passed to the transformer network which has a ResNet-101 <cit.> backbone for features extraction. These features are then passed to the transformer model consisting of a standard encoder and decoder setup <cit.>. The model is used to obtain an image patch by detecting the object and assigning a Region Of Interest (ROI) to the detected object. The second segment of the architecture is the depth estimation and refinement module. The depth estimation network encompasses a feature pyramid network (FPN) <cit.> that takes in an RGB image as input and outputs an estimated depth image. The image patch obtained from the transformer model is used to isolate the target on the depth image and hence obtain the depth of the target from the camera. The depth is then used to compute other components of the translation and subsequently used to refine the estimated 6D pose of the target. We evaluated our approach on YCB-Video dataset <cit.> as a benchmark and compared it with other state-of-the-art approaches. The following are our contributions in the TransPose model: * We propose a novel pipeline for 6D object pose prediction that favourably compares with other state-of-the-art methods * As part of the pipeline, we propose a lighter depth estimation network that utilizes a better up-sampling method for depth prediction * Additional analyses are conducted with our own generated fruit dataset to facilitate and evaluate 6D pose estimation performance for fruit-picking applications. The paper continues with a literature review in section II. After introducing a TransPose solution for 6D pose estimation in section III, we provide our results in the experiments section IV and finally the conclusion. § RELATED WORK Many methods have been proposed to tackle the problem of 6D object pose estimation. Approaches that are non-learning-based rely heavily on object textures for pose estimation. Scale-Invariant Feature Transform (SIFT) features <cit.> and Speeded Up Robust Features (SURF) <cit.> are common examples of the classical features used. The SIFT algorithm as used in <cit.> for pose estimation requires rich texture information. This can be an issue if the objects are textureless objects. Miyake et al. <cit.> compensated the textureless nature of objects with the colour information to improve the accuracy of the 6D pose estimation. 
The geometric information has also been used to increase the accuracy of estimation <cit.>. Pose estimation methods that utilise local descriptors define and compute the global descriptors offline. The local descriptor is then computed and matched online with the global descriptor. Pose estimation using Iterative Closest Point (ICP), Oriented FAST and Rotated BRIEF (ORB) <cit.> and Binary Robust Independent Elementary Features (BRIEF) <cit.> has been implemented in the past <cit.>. However, these methods are computationally expensive and do not perform well on reflective objects. We can further group pose estimation methods into template-based and feature-based methods <cit.>. The advantage of template-based methods is that they can detect objects without enough texture. Each input image location is scanned and matched with a constructed template of the object. The best match is selected based on a similarity score that compares the matched locations <cit.>. This type of method cannot properly estimate occluded objects, since the similarity score will be low. Feature-based methods utilize 2D-3D correspondences. Features are mapped from the 2D images to the 3D models, thereby estimating the 6D poses <cit.>. This approach handles occluded objects better; however, it comes at the expense of requiring rich features in the form of enough texture. Some works have proposed learning feature descriptors to solve the problem of objects with no texture <cit.>, while others regress directly from the 2D image to obtain the 3D correspondence <cit.>. Without sufficient refinement, these models can attain relatively low accuracy when dealing with symmetrical objects. A Convolutional Neural Network (CNN) architecture for pose estimation was introduced by <cit.> to regress the 6D pose from an RGB image. Lacking the depth modality, the task becomes difficult. In an attempt to address this problem, another method proposed predicting depth from the 2D image and thus acquiring the 3D position of the object <cit.>. Estimating the rotation component can also be a problem with this method due to its non-linearity. <cit.> separated the rotation component and treated it as a classification problem; this often requires post-refinement to obtain an accurate estimation. Methods that detect keypoints have been proposed to robustly and efficiently estimate the 6D pose. <cit.> utilized a segmentation technique to isolate the Region of Interest (ROI) and further regressed the keypoints from the ROI. Similarly, <cit.> utilised the YOLO <cit.> framework for this purpose. However, these methods perform poorly in the face of occlusion. To address this problem, some methods obtain keypoints through pixel-wise heatmaps <cit.>. Considering that heatmaps are of fixed size, these methods suffer when the objects are truncated. Some other methods have considered models encompassing a classical algorithm, such as the PnP algorithm, to increase the accuracy of estimation <cit.>. Such models are weighty and hence not always suitable for real-time platform deployment. Models such as PoseCNN <cit.> and T6D-direct <cit.>, although able to regress 6D poses directly, require a very large training dataset since they have no refinement module to count on. Pose estimation using the depth modality often involves converting the depth image to a point cloud and proceeding with the segmentation of object masks. <cit.> adopted semantic segmentation from depth images and point clouds to regress 6D poses.
This is accompanied by a computational burden due to the conversion to point clouds and often requires a large dataset. In contrast, we utilise the raw depth modality for the regressed pose refinement without converting to a point cloud, as presented further in this paper. § TRANSPOSE The pipeline of the TransPose 6D object pose estimation solution we propose in this work can be divided into three main parts: * Detection and Regression Transformer * Depth Estimation Network (DEN) * Refinement Module for Final 6D Pose Estimation. §.§ Detection and Regression Transformer This transformer network is mainly adopted for object detection, image patch designation and initial 6D pose regression. The transformer architecture is inspired by the Detection Transformer (DETR) <cit.> and T6D-Direct <cit.>. Our model is presented in Fig. <ref>. An RGB image is used as the input of the model. A ResNet-101 is adopted as the CNN backbone to extract and create a feature vector which is used as input to the transformer encoder-decoder. A set of predictions of size N_c is produced by the transformer encoder-decoder. Prediction heads are added in the form of Feed-Forward Networks (FFNs) to regress the object pose and patch. The losses adopted to train this transformer network are categorized as follows: §.§.§ Set Prediction Loss The patch prediction, in the form of an ROI, is obtained by assigning a bounding box around the object of interest. From the input image through the decoder, the model produces a set of tuples with fixed cardinality N_c, where N_c also corresponds to the maximum number of expected targets within the image. The content of each tuple is the image patch (bottom-left pixel coordinates, height and width), class label probabilities and 6D pose (translation and rotation) of the predicted object. Bipartite matching is adopted to match the ground-truth and predicted sets and obtain matching pairs. The model is then trained to minimise a loss between the pairs. Consider ground-truth objects x_1, x_2, x_3, …, x_n and assume N_c is larger than the number of objects in the image. Bipartite matching is performed to match the ground truth x, which is a set of size N_c padded with no-object (∅), with the predicted set x̂ of the same size. Essentially, this searches over permutations between the sets while minimizing the loss below: ρ̂ = \arg\min_{ρ ∈ Θ_{N_c}} \sum_{i}^{N_c} ℒ_match(x_i, x̂_{ρ(i)}) where ℒ_match(x_i, x̂_{ρ(i)}) is the pair-wise match cost between the prediction at index ρ(i) and the ground-truth tuple x_i. §.§.§ Hungarian loss After matching, the model is trained to minimise the Hungarian loss. We denote the predicted patch by γ̂_{ρ(i)}. The Hungarian loss is then defined as: ℒ_hung(x, x̂) = \sum_{i=1}^{N_c} [ λ_pose 1_{c_i≠∅} ℒ_pose(R_i, t_i, R̂_{ρ̂(i)}, t̂_{ρ̂(i)}) − \log P̂_{ρ̂(i)}(c_i) + 1_{c_i≠∅} ℒ_patch(γ_i, γ̂_{ρ̂(i)}) ] where ρ̂ is the lowest-cost permutation from equation <ref>, c_i is the target class label and γ_i is a vector that defines the ground-truth image patch coordinates, height and width. §.§.§ Patch loss The patch loss ℒ_patch(γ_i, γ̂_{ρ(i)}) is a component of equation <ref> and combines an l_1 norm loss and a generalized IoU loss ℒ_iou(γ_i, γ̂_{ρ(i)}) <cit.>, as follows: ℒ_patch(γ_i, γ̂_{ρ(i)}) = σ_1 ℒ_iou(γ_i, γ̂_{ρ(i)}) + σ_2 ||γ_i − γ̂_{ρ(i)}|| and ℒ_iou(γ_i, γ̂_{ρ(i)}) = 1 − ( |γ_i ∩ γ̂_{ρ(i)}| / |γ_i ∪ γ̂_{ρ(i)}| − |L(γ_i, γ̂_{ρ(i)}) \ (γ_i ∪ γ̂_{ρ(i)})| / |L(γ_i, γ̂_{ρ(i)})| ) where σ_1, σ_2 ∈ ℝ are hyperparameters and L(γ_i, γ̂_{ρ(i)}) is the smallest patch enclosing both the ground truth γ_i and the prediction γ̂_{ρ(i)}.
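To make the matching step concrete, the bipartite assignment in equation <ref> can be sketched with SciPy's Hungarian-algorithm solver. The cost below (classification term plus an l_1 patch term) is a simplified stand-in for the full ℒ_match, and all array shapes and names are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_predictions(gt_classes, gt_patches, pred_logits, pred_patches,
                      lambda_patch=5.0):
    """Toy stand-in for the set-matching step: build an (n_gt x N_c)
    pairwise cost matrix and solve it with the Hungarian algorithm.
    The full L_match would also include the pose term."""
    # Softmax class probabilities; cost is the negative probability of
    # the ground-truth label (higher probability -> lower cost).
    e = np.exp(pred_logits - pred_logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)          # (N_c, n_classes)
    cls_cost = -probs[:, gt_classes].T                # (n_gt, N_c)
    # l1 distance between ground-truth and predicted patch vectors.
    patch_cost = np.abs(gt_patches[:, None, :] - pred_patches[None, :, :]).sum(-1)
    cost = cls_cost + lambda_patch * patch_cost
    rows, cols = linear_sum_assignment(cost)          # optimal permutation rho-hat
    return list(zip(rows, cols))
```

Queries left unmatched are implicitly treated as no-object predictions when the Hungarian loss is evaluated.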
§.§.§ Pose loss ℒ_pose(R_i, t_i, R̂_{ρ̂(i)}, t̂_{ρ̂(i)}) is the pose loss. It is divided into two components: the translation t and the rotation R. A conventional l_2 norm loss is used to supervise the translation, while a ShapeMatch loss L_R <cit.> is used for the rotation to deal with symmetrical objects: ℒ_pose(R_i, t_i, R̂_{ρ(i)}, t̂_{ρ(i)}) = L_R(R_i, R̂_{ρ(i)}) + ||t_i − t̂_{ρ(i)}|| with L_R = \frac{1}{|K|} \sum_{j_1 ∈ K} \min_{j_2 ∈ K} ||R_i j_1 − R̂_{ρ(i)} j_2|| if the object is symmetric, and L_R = \frac{1}{|K|} \sum_{j ∈ K} ||R_i j − R̂_{ρ(i)} j|| otherwise. Here K represents the set of 3D model points, R_i and t_i are the ground-truth rotation and translation, respectively, and R̂_{ρ(i)} and t̂_{ρ(i)} are the respective predicted object rotation and translation. §.§ Depth Estimation Network (DEN) Depth estimation can be used for many applications <cit.>. In our case, the DEN is responsible for estimating depth images from monocular images and is inspired by the Feature Pyramid Network (FPN) <cit.>. The motivation is that an FPN is capable of extracting features at different scales. We adopt a ResNet-101 network as the backbone for feature extraction, two 3×3 convolutional layers to process the features, and ReLU as the activation function for the layers, as seen in Fig. <ref>. A lightweight up-sampling technique <cit.> that covers a larger field of view and enables the generation of adaptive kernels for better prediction is utilised. The depth images are one-fourth of the original image's size. The gradient of the depth map is obtained using a Sobel filter. The depth loss adopted in the training of our network is an l_1 norm loss defined as follows: ℒ_depth = \frac{1}{n} \sum_{i=1}^{n} ||d_i − d̂_i|| where d_i and d̂_i are the ground-truth depth and the predicted depth of each pixel i, respectively. §.§ Refinement Module for Final 6D Pose Estimation The refinement module consists of the depth patch generation and final pose estimation processes. The patch and the regressed 6D pose from the transformer, alongside the depth image, are used as inputs to the refinement module, as shown in Fig. <ref>. The patch, defined as the ROI obtained by the Detection and Regression Transformer, is formulated as: ψ_i = [B_opx, B_opy, H_op, W_op] where B_opx, B_opy are the bottom-left corner pixel coordinates of the patch and H_op, W_op are its height and width, all with respect to the original RGB image size (height and width) S_o = (W_o × H_o). Similarly, let us denote the size of the depth image by S_d = (W_d × H_d), where S_o ≠ S_d. We can obtain our depth patch ψ_j with respect to S_d from equ. <ref> as: ψ_j = [B_dpx, B_dpy, H_dp, W_dp] = ψ_i × diag(W_d/W_o, H_d/H_o, H_d/H_o, W_d/W_o) where B_dpx, B_dpy now represent the bottom-left pixel coordinates of the depth patch and H_dp, W_dp are its height and width, all with respect to the depth image size S_d. The depth patch now represents our object ROI in the depth image frame, and thus we can take the depth t_z1 from the camera to the target to be the depth value at the center pixel of the depth patch. The center pixel coordinates C_d = (C_dx, C_dy)^T are obtained as follows: C_dx = B_dpx + W_dp/2, C_dy = B_dpy + H_dp/2. The translation from the depth network t_1 utilises t_z1 (which in this case is the depth) to compute t_x1 and t_y1, the translations along the x and y axes, completing the translation t_1 = (t_x1, t_y1, t_z1)^T.
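The patch rescaling and centre computation just derived translate directly to a few lines. A minimal sketch with our own variable names, assuming the patch layout [B_x, B_y, H, W] of ψ:

```python
def depth_patch_center(patch_rgb, rgb_size, depth_size):
    """Rescale an ROI from RGB-image coordinates to depth-image
    coordinates (the diagonal scaling above) and return the centre
    pixel (C_dx, C_dy) of the resulting depth patch."""
    b_x, b_y, h, w = patch_rgb           # psi_i = [B_opx, B_opy, H_op, W_op]
    w_o, h_o = rgb_size                  # S_o = (W_o, H_o)
    w_d, h_d = depth_size                # S_d = (W_d, H_d)
    # x-coordinate and width scale by W_d/W_o; y-coordinate and height by H_d/H_o.
    b_dx, b_dy = b_x * w_d / w_o, b_y * h_d / h_o
    h_dp, w_dp = h * h_d / h_o, w * w_d / w_o
    return b_dx + w_dp / 2.0, b_dy + h_dp / 2.0
```

The depth value read at this centre pixel serves as t_z1 in the back-projection described next.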
Assuming the camera matrix is known, t_x1 and t_y1 can be obtained following the projection equation of a pinhole camera model as follows: [ C_ox; C_oy; ] = [ f_xt_x1/t_z1 + PP_x; f_yt_y1/t_z1 + PP_y; ] where f_x and f_y represent the focal length of the camera, ( PP_x, PP_y)^T is the principal point. C_o = (C_ox, C_oy)^T is the object centroid, which can be obtained from the image patch similarly to equ. <ref> to be (B_opx + W_op/2, B_opy + H_op/2)^T assuming the centroid coincides with the center of the patch. Thus, t_x1 and t_y1 can be calculated as: [ t_x1; t_y1; ] = [ (C_ox- PP_x)t_z1/f_x; (C_oy- PP_y)t_z1/f_y; ] Thus a complete translation from the depth image t_1 is obtained as: t_1 = (t_x1, t_y1, t_z1)^T Finally, we can obtain the final fusion-based object translation t as: t = (w_1 × t_1) + (w_2 × t_2) where the weights w_1,w_2 ≥ 0 and w_1+w_2 = 1. t_1 is the computed translation from the depth in equ. <ref> and t_2 is the regressed translation from the transformer model. Note that w_1 and w_2 are selected depending on the performance of both the transformer and depth model. Such that, the model with a lower loss will have a higher w and vice-versa. § EXPERIMENTS In the following, we present all the experiments conducted to test the capability of our proposed TransPose solution. From the datasets adopted to the results and comparison made between our solution and existing solutions, all will be detailed in the following subsections. §.§ Dataset The popular KITTI dataset is used as a benchmarking dataset for the depth estimation network. Likewise, we use the popular YCB-Video dataset being a benchmark for 6D pose estimation. <cit.> so we can easily compare our results with other methods. The dataset has 133,936 images of 640 × 480 pixels resolution. Each image is accompanied with bounding box labels, depths, segmentation and 6D object pose annotations. Similar to <cit.>, a test was carried out on 2,949 keyframes from 12 scenes. Additionally, we sampled from the Fruity dataset <cit.> to validate this approach in the context of fruit picking application which is an important application for our research. §.§ Evaluation Metrics The metrics adopted to evaluate the depth estimation network are the abs-rel, sq-rel, RMSE and RMSE_log, as proposed in <cit.>, as follows: abs_-rel = 1/| T |∑ _i = 1^T | d_i - d̂ _i|/d̂ _i sq_-rel = 1/| T |∑ _i = 1^T || d_i - d̂ _i||^2 /d̂ _i RMSE = √(1/| T |∑ _i = 1|| d_i - d̂ _i||^2) RMSE_log = √(1/| T |∑ _i = 1||log d_i - logd̂ _i||^2) where T is the number of pixels in the test set. For the evaluation of the overall pose estimation, the average distance (ADD) metric, as suggested in <cit.>, is used. This metric calculates the mean pairwise distance as follows: ADD = 1/| K |∑_j ∈ K|| (R_j + t) - (R̂_j + t̂) || where R and t are the ground truth object rotation and translation, respectively. R̂ and t̂ are the predicted rotation and translation respectively. K is the set of 3D points. ADD is calculated as the closest point distance for symmetrical objects as follows: ADD-S = 1/| K |∑_j_1 ∈ Kj_2 ∈ Kmin|| (R_j1 + t) - (R̂_j2 + t̂) || §.§ Training The model is initialised as in <cit.> with pre-trained weights. The model utilizes an input of image sized 640 × 480. The initial learning rate is set to 1.0^-3 which is eventually decayed. The batch size is set to 16 samples. AdamW optimizer <cit.> is used for the training. The hyperparameters for calculating ℒ_patch in equation. <ref>, σ _1 and σ _2 are set to 2 and 5. 
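Putting the pinhole back-projection and the weighted fusion together gives a short routine. A minimal sketch, where the intrinsics tuple and function name are our assumptions:

```python
import numpy as np

def fuse_translation(center_px, t_z1, intrinsics, t_regressed, w1=0.5, w2=0.5):
    """Back-project the object centroid through the pinhole model to get
    t_1 = (t_x1, t_y1, t_z1), then fuse it with the transformer-regressed
    translation t_2 using weights w1 + w2 = 1."""
    c_x, c_y = center_px                     # object centroid in pixels
    fx, fy, ppx, ppy = intrinsics            # focal lengths and principal point
    t_x1 = (c_x - ppx) * t_z1 / fx
    t_y1 = (c_y - ppy) * t_z1 / fy
    t1 = np.array([t_x1, t_y1, t_z1])        # translation from the depth branch
    return w1 * t1 + w2 * np.asarray(t_regressed)
```

As noted above, w1 and w2 would be chosen according to the relative validation performance of the depth and transformer branches.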
Also, the parameter λ _pose for calculating ℒ_hungarian in equation. <ref> is set to 0.05. The cardinality or number of prediction queries N_c is set to 21. §.§ Results §.§.§ Depth estimation results For the depth estimation network, the training loss and accuracy per iteration are shown in Fig. <ref>. As the training proceeds, the training loss decreases thereby increasing the training accuracy per iteration. The results obtained for the depth evaluation using the metrics in equations. <ref>, <ref>,<ref> and <ref> are presented in Table. <ref>. We compare the performance of our depth estimation network with other methods on the popular KITTI dataset and our custom fruit dataset. On the KITTI dataset, our method outperformed the others in the sq-rel and RMSE_log metric and compares very closely with <cit.> in the abs-rel and RMSE metric. On the fruit dataset, our network outperforms the other in abs-rel, sq-rel, RMSE_log metric and compares closely in the RMSE metric. This comparison shows the accuracy of our network as compared with other literature and the flexibility to adapt for depth estimation as part of the TransPose pipeline. It is worth noting that higher depth accuracy comes at a computational cost and the depth estimation network is just one part as a step of the TransPose pipeline. Thus, a reasonable trade-off between computational cost and accuracy is established to satisfy both decent estimation and future real-time implementation. Hence, the depth results are very satisfactory for our purpose. The depth estimation qualitative results are shown in Fig. <ref>. Samples from all the classes of our Fruit dataset including their ground truths and the corresponding predictions are shown. A colour map is added to the depth images for better visualisation and evaluation. Further comparisons with other methods are carried out across each individual class of fruit. Fig <ref> shows the comparison of each class of the fruit dataset using the Abs-rel and sq-rel metrics. From the results, our network outperformed all the methods across all the fruit classes. For the sq-rel, Our depth estimation network performs better in the banana class and slightly performs better in the other fruit classes. Fig <ref> compares the RMSE and RMSE_log of each class of the fruit dataset. Our network performs better on the banana, orange and lemon class using the RMSE metric and compares with <cit.> on the apple and avocado class. For the RMSE_log, our network outperforms in the apple, avocado, banana and lemon class. §.§.§ TransPose pose estimation results We sample 20 test frames for the 6D pose estimation and compare the ground truth and the predicted poses. The translation [t_x, t_y, t_z]^T and the Quaternion [Q_x, Q_y, Q_z, Q_w]^T which define the orientation are compared for all fruit classes as shown in Fig <ref> - <ref>. The samples are randomly selected from the test data to visualise the difference between the ground truth and the prediction. We can see that our TransPose prediction solution matches well with the ground truth poses across all the fruit classes. The qualitative results we obtain for some sample frames from the fruit dataset is shown in Fig <ref>. Table <ref> shows a detailed evaluation of some objects from the YCB dataset using the metric in equ. <ref> and equ. <ref>. 
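For reference, the ADD and ADD-S metrics used in these tables reduce to a few lines. A minimal sketch, assuming the object model points are given as an (M, 3) array; the function names are ours:

```python
import numpy as np

def add_metric(R, t, R_hat, t_hat, points):
    """ADD: mean distance between model points transformed by the
    ground-truth and predicted poses."""
    gt = points @ R.T + t
    pred = points @ R_hat.T + t_hat
    return np.linalg.norm(gt - pred, axis=1).mean()

def add_s_metric(R, t, R_hat, t_hat, points):
    """ADD-S: closest-point variant used for symmetric objects."""
    gt = points @ R.T + t
    pred = points @ R_hat.T + t_hat
    dists = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=-1)
    return dists.min(axis=1).mean()
```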
We can see that our proposed solution outperforms the other methods considering the ADD metric for all the objects except the "tuna fish can", "bowl", "wood block" and "banana" where our network closely compares with the other methods. Similarly, using the ADD-S metric, our solution outperforms the other methods except for the objects "tuna fish can" and "wood block". A similar comparison is conducted for our fruit dataset using the ADD and ADD-S metric as shown in Table <ref>. The mean from Table <ref> and Table <ref> shows the overall performance of TransPose across the sample objects. From the mean ADD and ADD-S, we can see that the depth refinement module improves the performance of 6D pose estimation. § CONCLUSION This paper proposes TransPose, an improved transformer-based 6D pose estimation network that utilises a depth refinement module to improve the overall solution performance. In contrast to other multi-modal networks that require more than one sensor and data type, TransPose utilises an RGB image for the 6D pose estimation and the depth refinement with the aid of a depth estimation network. The 6D poses are directly regressed by means of a proposed transformer network and further refined with a depth network. We compare our results of the depth network with other methods using the standard evaluation metrics. The performance of the depth network satisfies the purpose of 6D pose refinement. The results obtained using the standard evaluation metrics show a competitive depth outcome. We evaluate our results on multiple datasets for depth estimation and final object 6D pose regression. We extended the scope to a fruit dataset to prove the effectiveness of this pipeline in precision agriculture, particularly fruit picking. In the future, we aim at exploring the real-time onboard deployment of TransPose in conjunction with a robotics manipulator for real-time fruit picking application. IEEEtran
http://arxiv.org/abs/2307.07441v1
20230714160421
Precision Doppler Shift Measurements with a Frequency Comb Calibrated Laser Heterodyne Radiometer
[ "Ryan K. Cole", "Connor Fredrick", "Newton H. Nguyen", "Scott A. Diddams" ]
physics.optics
[ "physics.optics", "physics.ins-det" ]
Precision Doppler Shift Measurements with a Frequency Comb Calibrated Laser Heterodyne Radiometer Ryan K. Cole, Connor Fredrick, Newton H. Nguyen, and Scott A. Diddams Laser heterodyne radiometry (LHR) is a well-known approach for spectroscopy of thermal light <cit.>. In LHR, light from a continuous-wave laser (the local oscillator, LO) is interfered with light from a thermal source, and the resulting heterodyne signal gives a measure of the power of the thermal light within a narrow frequency range around the LO laser. By tuning the LO laser frequency, a high-resolution optical spectrum (e.g. R=ν/δν ∼ 10^6) can be recorded within the scan range of the laser without the use of moving components or diffractive optics. Numerous past studies have demonstrated LHR with sunlight to record spectra of atmospheric trace gases <cit.> or to study absorption transitions in the sun itself <cit.>. Recently, Fredrick et al. <cit.> introduced LHR with a frequency comb calibration, bringing the absolute stability and traceability of the frequency comb to high-precision spectroscopy of solar absorption lines and transitions in a laboratory gas cell. Here, we extend this frequency-comb-calibrated LHR approach to atmospheric spectroscopy of greenhouse gases, and we show that the high spectral resolution and frequency precision of comb-calibrated LHR enable tracking of wind-induced Doppler shifts in the measured spectra with cm·s^-1 precision. While LHR is a well-established technique for measuring mixing ratios of greenhouse gases and other atmospheric trace gases <cit.>, several studies have also demonstrated that LHR is capable of atmospheric wind measurements through the Doppler shifts imparted by wind along the spectrometer line of sight <cit.>. Atmospheric wind measurements are relevant in applications ranging from meteorology <cit.> to climate and greenhouse gas monitoring <cit.>. For example, wind drives the transport of atmospheric greenhouse gases, and when combined with coincident mixing ratio data, wind speed measurements provide an important constraint on our understanding of the spatiotemporal gradients of greenhouse gases and other atmospheric trace gases <cit.>. To this end, expanding the remote sensing capabilities of LHR to include atmospheric wind measurements could provide valuable climate and meteorological data to complement measurements based on more established techniques (e.g. Doppler radar, lidar or microwave radiometry). More broadly, extending the capabilities of LHR for Doppler velocimetry could expand the utility of LHR in applications beyond climate and meteorology, such as precision Doppler spectroscopy of astronomical sources <cit.> or passive tracking of thermal objects. Spectroscopic wind measurements pose a demanding challenge for the spectrometers used to make the measurement. For example, resolving a Doppler shift due to 1 m·s^-1 line-of-sight motion requires a spectrometer with fractional frequency precision (δf/f) better than 10^-9. Recent LHR-based wind measurements have addressed this challenge with a frequency calibration based on an etalon <cit.> or Mach-Zehnder interferometer <cit.> in combination with a reference gas cell that is used to determine the line center of the target transition in the rest frame. With this approach, these studies have reported vertically-resolved measurements of absolute wind speeds with precision at the meter-per-second level <cit.>. Here, we address the challenge of frequency stability by calibrating our LHR system with a laser frequency comb to enable spectroscopy of sunlight with the stability and frequency accuracy of a frequency comb. Using this approach, we track wind-induced Doppler shifts in measured spectra with precision better than 100 kHz (∼15 cm·s^-1). Figure <ref> shows a schematic of our frequency-comb-calibrated LHR approach. This apparatus has been described in detail in Ref. <cit.>, and here we list only the salient details. We couple solar light into single-mode fiber using a solar-tracking telescope. The telescope consists of a commercial solar tracker (EKO STR-22G) and a piezo-actuated steering mirror that directs solar light onto a fiber collimator. The steering mirror provides secondary pointing corrections to account for small deviations in the solar tracking. The steering mirror pointing is locked to the bright center of the solar disk with feedback based on the fiber-coupled solar power measured after splitting the solar light in a 1310/1550 nm WDM. A refractive beam shaper placed between the steering mirror and the fiber collimator uniformly integrates light from the solar disk by transforming the Gaussian fiber mode to a flat-top profile in the far field <cit.>. Fiber-coupled solar light is combined with light from a DFB diode laser (the LO) that is temperature-tuned over the target absorption transition. The solar and LO light are combined in a polarization-maintaining 50:50 fiber coupler and interfered on a balanced photodetector (Thorlabs PDB465A). The radio-frequency (RF) output of the photodetector is sent to an RF power detection circuit (described below), while the DC monitor output is used to feed back to a variable optical attenuator (VOA) that stabilizes the LO laser power and mitigates signal distortion due to variations in laser power during each scan. The RF power detection circuit is the same as described in Ref. <cit.>. Briefly, the heterodyne output of the photodetector is amplified and passed through a low-pass filter (LPF) that sets the spectral resolution of the measurement as twice the filter cutoff frequency. The filtered signal is split in a power splitter and passed to both inputs of a double-balanced mixer. The mixer output is terminated into 50 ohms, and the resulting DC voltage is proportional to the heterodyne signal power. The DC signal is passed through a preamplifier and an additional low-pass filter before being digitized on an oscilloscope. In a second channel, the LO light is simultaneously interfered with light from a stabilized, f_r = 250 MHz Er:fiber frequency comb. The heterodyne signal between the LO and comb is recorded on a balanced detector and mixed with a synthesized 62.5 MHz tone that doubles the density of the frequency calibration points <cit.>. The RF power detection circuit is the same as described above, but uses a lower filter cutoff (2 MHz) that limits the heterodyne signal to a narrow range around each comb mode. The output of this process is a series of calibration "ticks" that occur whenever the scanning LO laser coincides with a comb mode. The output of the comb-calibrated LHR system described above is a DC signal proportional to the spectrum of the solar light and a simultaneously recorded series of frequency calibration ticks. We determine the frequency axis of the measured spectra by fitting each calibration tick with a Gaussian profile to determine its centroid.
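As an illustration of this tick-fitting step, a minimal sketch using SciPy's curve_fit; the initial-guess heuristics are our assumptions, not a description of the authors' processing code:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, t0, sigma, offset):
    return a * np.exp(-0.5 * ((t - t0) / sigma) ** 2) + offset

def tick_centroid(time, signal):
    """Fit one calibration tick with a Gaussian profile and return its
    centroid t0, i.e. the time at which the scanning LO crosses a comb mode."""
    p0 = [signal.max() - signal.min(),        # amplitude
          time[np.argmax(signal)],            # rough centre
          0.1 * (time[-1] - time[0]),         # width guess
          signal.min()]                       # baseline
    popt, _ = curve_fit(gaussian, time, signal, p0=p0)
    return popt[1]
```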
Here, we address the challenge of frequency stability by calibrating our LHR system with a laser frequency comb to enable spectroscopy of sunlight with the stability and frequency accuracy of a frequency comb. Using this approach, we track wind-induced Doppler shifts in measured spectra with precision better than 100 kHz (∼15 cm· s^-1). Figure <ref> shows a schematic of our frequency-comb-calibrated LHR approach. This apparatus has been described in detail in Ref. <cit.>, and here we list only the salient details. We couple solar light into single-mode fiber using a solar-tracking telescope. The telescope consists of a commercial solar tracker (EKO STR-22G) and piezo-actuated steering mirror that directs solar light onto a fiber collimator. The steering mirror provides secondary pointing corrections to account for small deviations in the solar tracking. The steering mirror pointing is locked to the bright center of the solar disk with feedback based on the fiber-coupled solar power measured after splitting the solar light in a 1310/1550 nm WDM. A refractive beam shaper placed between the steering mirror and the fiber collimator uniformly integrates light from the solar disk by transforming the Gaussian fiber mode to a flat-top profile in the far field <cit.>. Fiber-coupled solar light is combined with light from a DFB diode laser (the LO) that is temperature-tuned over the target absorption transition. The solar and LO light are combined in a polarization-maintaining 50:50 fiber coupler and interfered on a balanced photodetector (Thorlabs PDB465A). The radio frequency (RF) output of the photodetector is sent to an RF power detection circuit (described below), while the DC monitor output is used to feed back to a variable optical attenuator (VOA) that stabilizes the LO laser power and mitigates signal distortion due to variations in laser power during each scan. The RF power detection circuit is the same as described in Ref. <cit.>. Briefly, the heterodyne output of the photodetector is amplified and passed through a low pass filter (LPF) that sets the spectral resolution of the measurement as twice the filter cutoff frequency. The filtered signal is split in a power splitter and passed to both inputs of a double-balanced mixer. The mixer output is terminated into 50 ohms, and the resulting DC voltage is proportional to the heterodyne signal power. The DC signal is passed through a preamplifier and additional low pass filter before being digitized on an oscilloscope. In a second channel, the LO light is simultaneously interfered with light from a stabilized, f_r = 250 MHz Er:fiber frequency comb. The heterodyne signal between the LO and comb is recorded on a balanced detector and mixed with a synthesized 62.5 MHz tone that doubles the density of the frequency calibration points <cit.>. The RF power detection circuit is the same as described above, but uses a lower filter cutoff (2 MHz) that limits the heterodyne signal to a narrow range around each comb mode. The output of this process is a series of calibration "ticks" that occur whenever the scanning LO laser coincides with a comb mode. The output of the comb-calibrated LHR system described above is a DC signal proportional to the spectrum of the solar light and a simultaneously recorded series of frequency calibration ticks. We determine the frequency axis of the measured spectra by fitting each calibration tick with a Gaussian profile to determine its centroid. 
Using the resulting calibration points as well as the known frequency spacing between each point (f_R/2), we construct a time-to-frequency transfer function that transforms the temporal axis of the measurement to the comb-referenced frequency grid. The frequency comb used for this comb calibration is referenced to a NIST-calibrated hydrogen maser and provides a SI-traceable frequency calibration grid with relative uncertainty of a few parts in 10^13 or better. Accounting for the relative uncertainty in the maser comb reference and the time-to-frequency calibration process, we estimate the relative frequency uncertainty of the comb calibration to be ∼70 kHz for a single measurement (10 s), averaging to ∼5 kHz at one hour. At that level, line center determination is limited by noise in the measured spectra. Using the approach outlined above, we recorded spectra for atmospheric CO2 in Boulder, CO, USA on October 12, 2022. The measurement targeted the R16 transition of the 30012←00001 CO2 band near 1572.33 nm, which has been the subject of past remote sensing missions <cit.> and advanced spectroscopic characterization <cit.>. Figure <ref>(b) shows the measured CO2 spectrum after averaging for nearly five hours. The spectrum was recorded using a low pass filter bandwidth of 100 MHz, which results in a measurement spectral resolution of 200 MHz. The effective averaging time (2 ms) of the final low pass filter in the RF detection chain (see Figure <ref>) yields ∼30 independent samples per 200 MHz resolution element. Each spectrum was recorded in a 10 s scan spanning a ∼30 GHz optical window. The signal-to-noise ratio (SNR) for each 10 s scan is ∼50. Owing to the high stability of the comb calibration, long-term averaging of the measured spectra allows the SNR to grow with √(τ), exceeding 2000 after averaging for the full measurement period. We assess the relative frequency precision of the measured spectra by comparing each measurement to a template generated from the five-hour-averaged spectrum. The observed shift in each spectrum is taken as the frequency shift that maximizes the cross correlation between the spectrum and the template. In this sense, this approach determines frequency shifts relative to the spectrum averaged over the full measurement period. For the CO2 spectra measured on October 12, the frequency shifts indicate a progressive blue shift by ∼11 MHz over the course of the five-hour measurement. Figure <ref>(a) shows the measured frequency shifts along with a comparison to our model for the expected frequency shifts due to wind-induced Doppler effects along the LHR line of sight. To model the effect of wind on the measured spectra, atmospheric temperature, pressure, and three-dimensional wind fields are obtained from the European Center for Medium-Range Weather Forecast ERA5 reanalysis data <cit.> for Boulder, CO, USA. The ERA5 data has a temporal resolution of one hour, and we linearly interpolate the data to estimate the atmospheric conditions for each measured LHR spectrum. We split the atmosphere into 50 altitude bins, and simulate the CO2 R16 transition in each layer using the HITRAN2020 database with temperature-dependent line shape parameters for the speed-dependent Nelkin-Ghatak profile (SDNGP) <cit.>. We assume a uniform CO2 mixing ratio of 400 ppm, which after integrating over the 50 atmospheric layers produces a simulated line shape that is in qualitative agreement with measured spectra. 
Our model accounts for wind-induced Doppler effects by applying a frequency shift to the simulated spectrum in each atmospheric layer. The wind speed along the LHR line of sight (and therefore the Doppler shift) is determined as 𝒲^LOS_i = W_i·𝐤̂ where 𝒲^LOS_i is the line-of-sight wind speed in layer i, 𝐤̂ is the normalized LHR pointing vector, and W_i is the wind velocity vector in terms of eastward (û), northward (𝐯̂), and downward (ŵ) components. The pointing vector is specified in the (u,v,w) coordinate system as 𝐤 = ( sin θ_z sin α ) û + ( sin θ_z cos α ) 𝐯̂ + ( cos θ_z ) ŵ where θ_z is the solar zenith angle and α is the azimuth angle. We determine the solar position angles for each measured LHR spectrum using a Python wrapper for NREL's Solar Position Algorithm <cit.>. Figure <ref> shows the line of sight wind speeds calculated using this method for the data on October 12. After simulating spectra at times corresponding to each measured LHR spectrum, we determine the wind-induced Doppler shifts using the same cross correlation approach described above. In this case, the wind-induced shifts are determined relative to a template generated by averaging the simulated spectra over the five-hour measurement period. Figure <ref>(a) shows the wind-induced shifts calculated using our model, which are in excellent agreement with the measured shifts over the full duration of the measurement. Figure <ref>(b) shows the Allan deviation of the difference between the measured and calculated shifts. For a single spectrum (10 s), we track the line center with a precision of ∼2 MHz (3 m · s^-1) and the precision improves to approximately ∼100 kHz (15 cm · s^-1) after 2.5 hours of averaging. Relative to the ∼2.5 GHz line width of the measured transition, this frequency precision splits the line by a factor of 25,000. In evaluating the results shown in Figure <ref>, it is also important to consider how atmospheric variability (e.g. changes in temperature and pressure) could influence the observed line shift. To assess the strength of these effects relative to wind-induced Doppler shifts, we reran the atmospheric model while including variability in the atmospheric pressure and temperature but neglecting Doppler effects. For the data on October 12, surface pressure increased from approximately 834 to 836 hPa over the course of data collection based on measurements from a co-located weather station. Using our model, we estimate that an increase in atmospheric pressure at this level could affect a frequency shift of approximately 80 kHz over the course of the five-hour measurement with a sign opposite that of the wind-induced shifts. Similarly, we use the ERA5 data to estimate changes in atmospheric temperature, and we find that temperature variability induces shifts by ∼40 kHz. In both cases, the pressure- and temperature-induced shifts are small relative to wind-induced Doppler shifts. Furthermore, although our analysis has involved only relative frequency shifts, it is interesting to consider the use of comb-calibrated LHR to measure absolute shifts (and thus absolute wind speeds). Past LHR-based wind measurements have determined vertically-resolved, absolute wind speeds using inversion methods that rigorously fit the measured spectra with an atmospheric model <cit.>. This approach represents a significant increase in complexity when compared to relative shift measurements, which are only concerned with deviations from the average. 
The relative shifts shown in Figure <ref> depend only on the stability of the spectrometer, and a measurement of absolute shifts would depend on additional factors such as the accuracy of the atmospheric and spectroscopic data used to fit the measured spectra. Nonetheless, comb-calibrated LHR may still provide valuable benefits for absolute wind measurements by leveraging the stability and absolute frequency accuracy of the comb calibration to reduce instrumental uncertainties and enable precision tracking of absolute Doppler shifts over long time scales. Future studies could explore how these benefits impact absolute wind measurements when combined with a more advanced retrieval procedure. In conclusion, we demonstrate high-precision spectroscopy of atmospheric CO2 through the unique combination of a laser heterodyne radiometer and an optical frequency comb. We show that our measurements track wind-induced Doppler shifts in the measured CO2 spectra with a precision of ∼100 kHz (15 cm · s^-1), equivalent to a fractional frequency precision of a few parts in 10^10. These results demonstrate the potential of frequency-comb-calibrated LHR as an approach for precision atmospheric spectroscopy and Doppler metrology. LHR has a long heritage as a technique for remote sensing of greenhouse gas mixing ratios, and future efforts could seek to combine these established capabilities with precision Doppler wind measurements. Such efforts could significantly expand the capabilities of LHR as a climate monitoring tool and provide valuable data to constrain emissions estimates and greenhouse gas transport. More broadly, our results validate comb-calibrated LHR as a tool for precision Doppler velocimetry that could be of use in applications beyond climate monitoring. Such applications may include passive tracking of thermal objects or precision radial velocity measurements of astronomical sources, including characterizing the impact of telluric absorption on those measurements <cit.>. In the latter application, achieving precision Doppler spectroscopy at the cm · s^-1 levels represents an ongoing challenge in the fields of solar and exoplanet science that could be explored in future studies using comb-calibrated LHR. Funding This work was supported by the NIST IMS program, NIST financial assistance award 70NANB18H006, and the NASA Astrophysics Division. R.C. acknowledges support from the National Academies NRC Research Associateship Program. Acknowledgments The authors thank Eugene Tsao and David Plusquellic for valuable comments and discussions. This work is a contribution of NIST and is not subject to copyright in the United States. Mention of specific products or trade names is for technical and scientific information and does not constitute an endorsement by NIST. Disclosures The authors declare no conflicts of interest.
http://arxiv.org/abs/2307.05817v1
20230711215331
The minimum neighborliness of a random polytope
[ "Brett Leroux" ]
math.PR
[ "math.PR", "math.MG", "52A22, 52B05, 52B35, 52C45, 60D05" ]
The minimum neighborliness of a random polytope Brett Leroux Let μ be a probability distribution on ℝ^d which assigns measure zero to every hyperplane and S a set of points sampled independently from μ. What can be said about the expected combinatorial structure of the convex hull of S? These polytopes are simplicial with probability one, but not much else is known except when more restrictive assumptions are imposed on μ. In this paper we show that, with probability close to one, the convex hull of S has a high degree of neighborliness no matter the underlying distribution μ, as long as n is not much bigger than d. As a concrete example, our result implies that if for each d in ℕ we choose a probability distribution μ_d on ℝ^d which assigns measure zero to every hyperplane and then set P_n to be the convex hull of an i.i.d. sample of n ≤ 5d/4 random points from μ_d, the probability that P_n is k-neighborly approaches one as d → ∞ for all k ≤ d/20. We also give a simple example of a family of distributions which essentially attain our lower bound on the k-neighborliness of a random polytope. § INTRODUCTION The most well-studied models of random polytopes are those where a random polytope is defined as the convex hull of a set of independent and identically distributed points from some probability distribution on the space ℝ^d. These objects have been studied for several reasons. One reason is that random polytopes can sometimes give us some insight into the possible metric or combinatorial properties of deterministic convex polytopes with a given dimension and number of vertices. Another is that convex polytopes have a wide range of applications in geometric algorithms, including the simplex method <cit.> and Wolfe's method <cit.>. For many such algorithms, the input data defines a convex polytope and it is useful to understand combinatorial and metric properties of that polytope in order to understand the complexity of the algorithm. Since algorithmic applications often assume that the input data is random according to some predetermined distribution, random polytopes are particularly relevant. Gaussian random polytopes, i.e. random polytopes where the underlying distribution is the standard Gaussian distribution on ℝ^d, have received much attention <cit.>. Perhaps the main reason for this focus on Gaussian random polytopes is that they coincide in distribution with uniform random projections of a simplex to some lower-dimensional space <cit.>. Another reason is that the Gaussian distribution has many nice properties which often make calculations simpler and the behavior of combinatorial properties easier to determine. Other commonly studied families of random polytopes are those for which the underlying distribution is the uniform distribution on a convex body or the boundary of a convex body. Yet another important example is random 0/1 polytopes <cit.>, where the vertices are a random subset of the vertices of the d-dimensional cube. Some papers prove results which assume only that the underlying distribution has some property such as being log-concave <cit.> or subgaussian <cit.>.
In all of these examples, either the distribution is specified, or it satisfies some restrictive condition such as being subgaussian. Our main result in contrast assumes only that the distribution assigns measure zero to every (affine) hyperplane. With this assumption only, we consider random polytopes where the number of random points is proportional to the dimension and the dimension approaches infinity. The main property of convex polytopes we are interested in is k-neighborliness (defined in <ref>). One of our main results shows that if the constant of proportionality is less than two, then there exists some constant β>0 (depending on the constant of proportionality), such that the probability that the random polytope is at least ⌊β d⌋-neighborly approaches one as the dimension approaches infinity (<ref>). To put this result in context, we need to review what has previously been known about the neighborliness of random polytopes. First we collect some notation. §.§ Notation Asymptotic notation f(n) ∼ g(n) means f(n)/g(n) → 1 as n →∞. For a set of points X ⊂^d, X is the convex hull of X. Similarly, X is the affine hull of X. The binary entropy function is the function defined by H(p) := -plog_2 p -(1-p)log_2(1-p). We use o to denote the origin in ^d. For a polytope P, we use the notation f_ℓ(P) for the number of ℓ-dimensional faces of the polytope P. When we say that a probability distribution assigns measure zero to every hyperplane we mean every affine hyperplane (not just every linear hyperplane) unless otherwise specified. §.§ Previous work on the neighborliness of random polytopes A polytope P is k-neighborly if every subset of at most k vertices is a face of the polytope. See <cit.> for an introduction to polytopes and <cit.> for k-neighborly polytopes in particular. In addition to our results about k-neighborliness, we will also consider another quantity associated to polytopes which measures how close the polytope is to being k-neighborly. For a simplicial polytope P ⊂^d with n vertices and any 0≤ℓ≤ d-1, the ℓ-face density of P is f_ℓ(P)/nℓ+1. We see that the ℓ-face density of a polytope measures how close to being k-neighborly the polytope is where k = ℓ+1. If the ℓ-face density is one, then the polytope is (ℓ+1)-neighborly. In addition to showing that a random polytope is k-neighborly for a surprisingly large value of k, we will show that a random polytope has ℓ-face density close to one where ℓ is even larger than k. It was perhaps Gale who first speculated that random polytopes should have a high degree of neighborliness when the dimension of the space is high <cit.>. More recently, it has been rigorously proven that random polytopes from certain families of probability distributions tend to have a surprisingly high degree of neighborliness with high probability <cit.>. Two of these works, the paper <cit.> of Donoho and Tanner, and the paper <cit.> of Vershik and Sporyshev are particularly relevant to this paper so we give an overview of their results. A Gaussian random polytope is the convex hull of an i.i.d. sample of points from the standard (mean zero and identity covariance matrix) Gaussian distribution on ^d. Donoho and Tanner show in <cit.> that there exists a function ρ_DT(δ) such that if ρ<ρ_DT(δ) and G_n,d is a Gaussian random polytope in ^d with n random points with d ≥δ n, then the probability that G_n,d is ⌊ρ d⌋-neighborly approaches one as d approaches infinity (<cit.>). 
Furthermore, they show that if ρ>ρ_DT(δ), then the expected number of subsets of the points of size ⌊ρ d⌋ which are not faces of the polytope approaches infinity as d →∞. Vershik and Sporyshev establish a similar result for the ℓ-face density. They show that there exists a function ρ_VS(δ) such that if d =d(n)∼δ n and ℓ =ℓ(n) ∼ρ d where ρ<ρ_VS(δ), then the expected ℓ-face density of G_n,d approaches one as d approaches infinity and that if ρ>ρ_VS(δ), then the expected ℓ-face density approaches zero as d approaches infinity <cit.>. §.§ Our results Our two main results are similar to the two results explained in the previous section, the first due to Donoho and Tanner, and the second due to Vershik and Sporyshev. The main difference is that our results apply to any random polytope whose vertices are i.i.d. according to a probability distribution on ^d which assigns measure zero to every hyperplane. This is a very weak assumption on the distribution. In particular, it is the minimal assumption which guarantees that the random polytopes under consideration are simplicial with probability one. Not surprisingly, because of the generality of the distributions we consider, our result guarantees a lower degree of neighborliness than in the case where the distribution is Gaussian. Let α <2 and assume that β>0 satisfies α H(β/α)+(α-β)(H(α-1/α-β)-1)<0, or equivalently, α^α/(α-1)^α-12^α < β^β(1-β)^1-β/2^β. For each d∈ℕ, let μ_d be a probability distribution on ^d which assigns measure zero to every hyperplane. Let n:=n(d) ∼α d and let S_n be a set of n independent and identically distributed points from μ_d. Then for any sequence k:= k(d) with k ∼β d, the probability that S_n is k-neighborly approaches one as d →∞. In <ref> the equation α H(β/α)+(α-β)(H(α-1/α-β)-1)=0 implicitly determines a function which we denote ρ_N'(α) and which is plotted in <ref>. By <ref>, the function ρ_N'(α) has the property that if ρ< ρ_N'(α), then the probability that a random polytope in ^d with ∼α d vertices will be at least ⌊ρ d⌋-neighborly approaches one as d →∞. We have a similar result for the ℓ-face density: Let α<2 and 0<β <2-α. For each d∈ℕ, let μ_d be a probability distribution on ^d which assigns measure zero to every hyperplane. Let n:=n(d) ∼α d and let S_n be a set of n independent and identically distributed points from μ_d. Then for any sequence ℓ:= ℓ(d) with ℓ∼β d, the expected ℓ-face density of S_n approaches one as d →∞. We define the function ρ_D'(α)= 2-α. By <ref>, the function ρ_D'(α) has the property that if ρ< ρ_D'(α), then the expected ⌊ρ d⌋-face density of a random polytope in ^d with ∼α d vertices approaches one as d →∞. The function ρ_D'(α) is also plotted in <ref>. Recall that the functions ρ_VS(δ) and ρ_DT(δ) discussed in <ref> are defined as functions of δ where the dimension d satisfies d ∼δ n where n is the number of vertices. In contrast, we defined the functions ρ_N'(α) and ρ_D'(α) as functions of α where the number of vertices n satisfies n ∼α d. Therefore in order to compare the above the results to the results for the Gaussian case discussed in <ref>, we set δ = 1/α and define functions ρ_N(δ):= ρ_N'(1/δ) and ρ_D(δ):= ρ_D'(1/δ). These functions are plotted in <ref> and can be compared with the functions ρ_VS(δ) and ρ_DT(δ) which are plotted in <cit.>. In <ref> we show that the above two results are close to best possible by constructing a family of distributions on ^d which show that <ref> (resp. <ref>) is not true if ρ_N'(α) (resp. 
ρ_D'(α)) is replaced by a function which is strictly less than ρ_N'(α) (resp. ρ_D'(α)) for any α in the range (1,2). §.§ Applications As discussed earlier, one of the main reasons to study random polytopes is to help understand the average behavior of algorithms where the input data can be thought of as a convex polytope. For the specific property of k-neighborliness, the main application is to compressed sensing; see <cit.> for an explanation of the connection of k-neighborliness to compressed sensing. This connection is what has motivated much of the work on the neighborliness of random polytopes, some of which was cited in <ref>. §.§ Outline of the paper <ref> contains the proofs of <ref>. Before proving the theorems, we explain the idea behind the proof in <ref>, and also in that section we explain the main tool behind the proofs. The main tool is a generalization of a result of Wagner and Welzl <cit.> which gives an upper bound on the probability that a given point is in the convex hull of a sample of random points. <ref> contains the construction of the distributions which show that <ref> are essentially best possible. In other words, we construct distributions which produce the least neighborly polytopes over all distributions which assign measure zero to every hyperplane. In order to prove that the distributions we construct have the desired property, we require a result which gives a lower bound on the probability that a given point is in the convex hull of a sample of random points. More specifically, given a distribution on ℝ^d and a point in ℝ^d which is at some depth (<ref>) with respect to the distribution, we prove a lower bound on the probability that the point is in the convex hull of a sample of random points, where this probability depends on the depth of the point (<ref>). § THE LOWER BOUND ON THE NEIGHBORLINESS <ref> explains the main idea behind the proof of <ref> and outlines the main tool for the proof. The proof is completed in <ref> after establishing some necessary lemmas. §.§ An upper bound on the probability that a point is in the convex hull Wendel's theorem (<cit.>,<cit.>) is a classic result in geometric probability which says that the probability that the convex hull of a set of n i.i.d. random points from a distribution on ℝ^d which is symmetric about o and which assigns measure zero to every linear hyperplane contains the origin is equal to \sum_{i=0}^{n-d-1}\binom{n-1}{i} \Big/ 2^{n-1}. More recently, Wendel's theorem has been generalized by Wagner and Welzl in the following way. An absolutely continuous probability distribution on ℝ^d is a probability distribution which has a density function with respect to the Lebesgue measure on ℝ^d. A measure μ is balanced about a point p if every hyperplane through p divides μ into two equal halves. Wagner and Welzl showed the following: Let μ be an absolutely continuous probability measure on ℝ^d. Let S be a set of n independent and identically distributed points from μ. Then the probability that conv(S) contains the origin is at most \sum_{i=0}^{n-d-1}\binom{n-1}{i} \Big/ 2^{n-1} = 1 − \sum_{i=0}^{d-1}\binom{n-1}{i} \Big/ 2^{n-1}, and this bound is attained if and only if μ is balanced about the origin. Why is this sort of result useful for our purposes? In order to prove <ref>, we need to show that certain random polytopes are k-neighborly with probability close to one. Therefore, we need an upper bound on the probability that the polytope is not k-neighborly. By the union bound, this probability is upper bounded by \binom{n}{k} times the probability that a subset K of the vertices of size k is not a face of the polytope.
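As a numerical illustration of these bounds, Wendel's probability can be checked by Monte Carlo; the standard Gaussian is balanced about the origin, so the upper bound is attained. A minimal sketch, deciding origin membership with a linear feasibility program (all names are ours):

```python
import numpy as np
from math import comb
from scipy.optimize import linprog

def contains_origin(pts):
    """o is in conv(pts) iff there exist lambda_i >= 0 with
    sum(lambda) = 1 and sum(lambda_i * x_i) = o."""
    n, d = pts.shape
    A_eq = np.vstack([pts.T, np.ones(n)])
    b_eq = np.append(np.zeros(d), 1.0)
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.success

d, n, trials = 3, 7, 5000
rng = np.random.default_rng(0)
hits = sum(contains_origin(rng.standard_normal((n, d))) for _ in range(trials))
wendel = sum(comb(n - 1, i) for i in range(n - d)) / 2 ** (n - 1)
print(hits / trials, wendel)   # both should be close to 42/64 = 0.65625
```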
A standard fact about polytopes is that K is not a face of the polytope if and only if the affine hull of K intersects the convex hull of the remaining vertices of the polytope (<ref>). Therefore, letting V denote the set of vertices of the polytope, if we project V to the subspace which is orthogonal to the affine hull of K, then the event that K is not a face of the polytope is equivalent to the event that the convex hull of V ∖ K contains the projection of the affine hull of K, which is a point in the image space of the projection. <ref> can then be used to give an upper bound for the probability of this event. Rather than using <ref>, we will prove another version of this result which applies not just to absolutely continuous distributions but to any distribution which assigns measure zero to every hyperplane. We also give a proof of this result which is very different from the proof of <ref> given in <cit.>. However, the following result is not entirely new; it was mentioned in <cit.> that such a proof works, but no details were given. Let μ be a probability measure on ℝ^d that assigns measure zero to every hyperplane. Let S be a set of n independent and identically distributed points from μ. Then the probability that conv(S) contains the origin is at most \sum_{i=0}^{n-d-1}\binom{n-1}{i} \Big/ 2^{n-1}. The quantity of interest is ℙ(o ∈ conv(X_1,…,X_n)). Note that since μ assigns measure zero to every hyperplane (and, in particular, every hyperplane through the origin), the probability that some ℓ-flat through the origin contains more than ℓ points is zero. This means that the probability that conv(X_1,…,X_n) contains the origin on its boundary is zero, and so the above probability is equal to p := ℙ(o ∈ int conv(X_1,…,X_n)). Let X_1,…,X_N be N i.i.d. points distributed according to μ, where N > n is some integer. Now we consider the Gale transform <cit.> of the set of points {X_1,…,X_N}. The Gale transform of {X_1,…,X_N} is a set X̄_N of N points in ℝ^{N-d-1}, and by <cit.> there is a one-to-one correspondence between subsets of {X_1,…,X_N} of size n which contain the origin in the interior of their convex hull and subsets of X̄_N of size N-n which are faces of the polytope conv(X̄_N). We note that since no hyperplane through the origin contains more than d-1 points of {X_1,…,X_N} with probability one, the polytope conv(X̄_N) is simplicial with probability one <cit.>. By the Upper Bound Theorem for convex polytopes <cit.> and the formula for the number of faces of each dimension of a cyclic polytope (see for example <cit.>), the number of subsets of X̄_N of size N-n which are faces of the polytope conv(X̄_N) is at most C(N,n,d) := \frac{N-\delta(n-1)}{n} \sum_{j=0}^{\lfloor (N-d-1)/2 \rfloor} \binom{N-1-j}{n-1}\binom{n}{N-2j+\delta} where δ = N-d-1-2\lfloor (N-d-1)/2 \rfloor. Therefore, we know that the expected number of subsets of {X_1,…,X_N} of size n which contain the origin in the interior of their convex hull is at most C(N,n,d), i.e., p \cdot \binom{N}{n} ≤ C(N,n,d). Therefore, in order to prove the desired bound on p, it will suffice to show that \lim_{N→∞} C(N,n,d) \Big/ \binom{N}{n} = \sum_{i=0}^{n-d-1}\binom{n-1}{i} \Big/ 2^{n-1}. To make the calculation simpler, we will restrict our attention to values of N such that N-d-1 is even, i.e. the parity of N is the opposite of the parity of d. This means that δ = 0. For the terms in <ref> to be non-zero, we need N-2j+δ ≤ n and so 2j ≥ N+δ-n.
Therefore, <ref> equals N/n∑_j=⌈N-n/2⌉^N-d-1/2N-1-jn-1nN-2j =N/n∑_j=⌈N-n/2⌉^N-d-1/2N-1-jn-1( n-1N-2j-1 + n-1N-2j) Letting m = (N-d-1)/2-j and using that (N-d-1)/2-⌈N-n/2⌉ = ⌊n-d-1/2⌋, the above is equal to N/n∑_m=0^⌊n-d-1/2⌋N-1- (N-d-1)/2-m n-1( n-1d+1+2m-1 + n-1d+1+2m) =N/n∑_m=0^⌊n-d-1/2⌋N/2+(d+1)/2-1-mn-1( n-1d+1+2m-1 + n-1d+1+2m) N→∞∼N^n/n!1/2^n-1∑_m=0^⌊n-d-1/2⌋( n-1d+1+2m-1 + n-1d+1+2m) = N^n/n!1/2^n-1∑_i=0^n-d-1n-1i. Since NnN→∞∼N^n/n!, <ref> follows. §.§ Proofs of <ref> In this section we prove <ref>. Before proving the theorems we need to make rigorous the idea explained in <ref>. That is we first show how <ref> can be used to upper bound the probability that a subset of vertices of a random polytope is not a face of the polytope. This bound is given in <ref>. First we need a simple corollary of <ref>: Let μ be a probability measure on ^d that assigns measure zero to every hyperplane. Let S:= {X_1,…,X_n} be a set of n independent and identically distributed points from μ. Let L⊂^d be some affine ℓ-flat. Then the probability that L ∩ S ≠∅ is at most ∑_i=0^n-(d-ℓ)-1n-1i/2^n-1. Note that any affine coordinate transformation of ^d preserves the fact that μ assigns measure zero to every hyperplane. Therefore, after applying an affine coordinate transformation to μ, we may assume that L is the span of the first ℓ standard basis vectors in ^d. We can identify ^d-ℓ with {0,…,0}×^d-ℓ⊂^d. Let π_d-ℓ denote the orthogonal projection onto ^d-ℓ. The probability that L ∩ S ≠∅ is equal to the probability that (X̃_̃1̃, …,X̃_̃ñ) contains the origin where the X̃_̃ĩ are independent and distributed according to the projection of μ by π_d-ℓ. The corollary now follows from <ref>. The above corollary is useful for our purposes due to the following well-known fact: Let P be a polytope and A⊂ V(P). Then A is a face of P if and only if A ∩(V(P) ∖ A) = ∅. Let μ be a probability measure on ^d that assigns measure zero to every hyperplane. Let S_n be a set of n independent and identically distributed points from μ. Then for any subset K of S_n of size k, the probability that K is not a face of S_n is at most ∑_i=0^n-d-2n-k-1i/2^n-k-1. Let S_n:={X_1,…, X_n}. Define f(x_1,…, x_n):= 1 if (x_1,…,x_n) is not a face of S_n 0 otherwise Since all the X_i are independent and identically distributed, the probability that (X_i_1,…,X_i_k) is not a face of S_n is independent of the choice of subscripts. The probability that (X_n-k+1,…, X_n) is not a face of S_n is equal to ∫_^d…∫_^d f(x_n-k+1,…,x_n) μ( x_1)…μ( x_n). For any fixed choice of points x_n-k+1, …, x_n, the inner integral ∫_^d…∫_^d f(x_n-k+1,…,x_n) μ( x_1)…μ( x_n-k) is equal (by <ref>) to the probability that (x_n-k+1,…, x_n) ∩(S_n ∖{x_n-k+1,…,x_n}) ≠∅. If {x_n-k+1,…, x_n} are in general position, that is, they are not contained in any affine (k-2)-flat, then by <ref>, this probability is at most ∑_i=0^n-d-2n-k-1i/2^n-k-1. Since μ assigns measure zero to every hyperplane, the measure of the set of {x_n-k+1, …,x_n} which are contained in some affine (k-2)-flat is zero. Therefore, <ref> is the integral of a function that is bounded by <ref> except possibly on a set of measure zero, so statement of the theorem follows. Let A be the event that (X_1,…, X_k) is not a face of S. By the definition of conditional expectation, 𝔼(1_A) = 𝔼(𝔼(1_A| X_1,…,X_k)). For any value of the X_i, 1≤ i ≤ k, 𝔼(1_A| X_1=x_1,…,X_k=x_k) is equal a.s. (by <ref>) to the probability that L∩(X_k+1, …,X_n)≠∅ where L = (x_1,…, x_k), which, by <ref> is at most ∑_i=0^n-d-2n-k-1i/2^n-k-1. 
Since 𝔼(1_A) = 𝔼(𝔼(1_A| X_1,…,X_k)), the same bound holds for 𝔼(1_A). We can now prove the main results. Let S_n:={X_1,…, X_n}. By <ref> and the union bound, the probability that S_n is not k-neighborly is at most nk∑_i=0^n-d-2n-k-1i/2^n-k-1. So we just need to prove that this quantity goes to zero as d→∞. The assumptions on n,k,d imply that for d sufficiently large, n-d-2<n-k-1/2. Therefore, by the unimodality of the binomial coefficients and using that nk≤ 2^nH(k/n), for d sufficiently large, nk∑_i=0^n-d-2n-k-1i/2^n-k-1 ≤nk· n·n-k-1n-d-2/2^n-k-1 ≤ n2^nH(k/n)2^(n-k-1)H((n-d-2)/(n-k-1))/2^n-k-1 = n2^nH(k/n)2^(n-k-1)(H((n-d-2)/(n-k-1))-1). Since n ∼α d and k ∼β d we have that n-d-2 ∼ (α-1)d and n-k-1∼ (α -β)d. So for d sufficiently large, the above is at most 2α d 2^α dH(β/α)+(α-β)d( H(α-1/α-β)-1). This quantity goes to zero as d→∞ as long as α H(β/α)+(α-β)(H(α-1/α-β)-1)<0 which is the assumption on β. Let S_n:={X_1,…, X_n}. The expected ℓ-face density of S_n is equal to the probability that (X_1,…,X_ℓ+1) is a face of S_n. So we just need to show that the probability that (X_1,…,X_ℓ+1) is not a face of S_n is o(1) as d →∞. By <ref>, the probability that (X_1,…,X_ℓ+1) is not a face of S_n is at most ∑_i=0^n-d-2n-ℓ-2i/2^n-ℓ-2 Since n∼α d and ℓ∼β d, we know that n-d-2/n-ℓ-2∼α-1/α-β. By the assumption that β<2-α, we have that α-1/α-β <1/2. Therefore, there exists ϵ>0 such that for d sufficiently large, n-d-2/n-ℓ-2<1/2-ϵ. By the unimodality of the binomial coefficients, the fact that n-d-2/n-ℓ-2<1/2-ϵ implies that ∑_i=0^n-d-2n-ℓ-2i/2^n-ℓ-2≤ n n-ℓ-2n-d-2/2^n-ℓ-2 Again using that nk≤ 2^nH(k/n), we have that for d sufficiently large, ∑_i=0^n-d-2n-ℓ-2i/2^n-ℓ-2 ≤ n n-ℓ-2n-d-2/2^n-ℓ-2 ≤n2^(n-ℓ-2)H(n-d-2/n-ℓ-2)/2^n-ℓ-2 ≤2α d2^d(α-β)H(1/2-ϵ)/2^d(α-β). And the above quantity goes to zero as d →∞ because H(r)<1 as long as r ≠ 1/2. § THE LEAST NEIGHBORLY DISTRIBUTIONS As previously mentioned, we will construct a family of distributions {μ_d}_d∈ℕ which show that <ref> are in some sense best possible. Before giving the construction, we need to establish the following proposition, which gives the reverse of the bound given by <ref> in <ref>. §.§ A lower bound on the probability that a point is in the convex hull Let μ be a probability distribution on ^d and p a point in ^d. The depth of p in μ is defined to be min{μ(H^+) H^+ is a closed halfspace containing p }. Let μ be an absolutely continuous probability distribution on ^d. Let S be a set of n independent and identically distributed points from μ. Let p ∈^d be a point and assume that the depth of p in μ is greater than or equal to a. Then the probability that S contains p is greater than or equal to (d+1)nd+1∫_0^a(y^n-d-1+(1-y)^n-d-1)y^d y = ∑_i=0^n-d-1nia^i(1-a)^n-i+a^nn-1d. Note that it suffices to prove the statement when p is the origin because otherwise we could translate p and μ to reduce to this case. Let p_n,μ denote the probability that S contains the origin, where S is a set of n independent and identically distributed points from μ. It is shown in <cit.> that there exists a function h(y) (which depends only on μ) such that p_n,μ= 2nd+1∫_0^1y^n-d-1h(y) y. For completeness, we give the definition of h(y) from <cit.>: As in <cit.>, we choose some absolutely continuous probability distribution μ̃ on ^d+1 such that the orthogonal projection of μ̃ to the first d coordinates is the distribution μ and we let ℓ̃ be the x_d+1 axis in ^d+1. Then as in <cit.>, we let σ denote a μ̃-random oriented simplex, i.e. a d-simplex whose d+1 vertices are i.i.d. 
points from μ̃ and one side of σ is chosen as the positive side which is denoted H^+(σ). Furthermore, we say that a directed line ℓ̃ enters an oriented simplex σ if it intersects the relative interior of σ and is directed from the positive to the negative side of σ. With this we define H_μ̃,ℓ̃:= (ℓ̃ enters σ and μ̃(H^+(σ))≤ y). We then define h(y):= h_μ̃,ℓ̃(y):= H_μ̃,ℓ̃/ y. This completes the definition of h(y), see <cit.> for more details. Now by <cit.>, h(y) = h(1-y) and so 2nd+1∫_0^1y^n-d-1h y = 2nd+1∫_0^1/2(y^n-d-1+(1-y)^n-d-1)h y. By <cit.> and the remarks following the proof of that theorem, because of the assumption that the depth of o in μ is greater than or equal to a, the function h satisfies h = (d+1)/2min(y,1-y)^d for y ≤ a and y ≥ 1-a. Alternatively, see <cit.> for a rigorously stated proof of this claim. Therefore, p_n,μ ≥ (d+1)nd+1∫_0^a(y^n-d-1+(1-y)^n-d-1)y^d y. We use <cit.> to get ∫_0^a(1-y)^n-d-1y^d y =(n-d-1)!d!/n!∑_i=0^n-d-1nia^i(1-a)^n-i. And since ∫_0^ay^n-1 y =a^n/n and a^n/n (d+1)nd+1 = a^n n-1d, p_n,μ≥∑_i=0^n-d-1nia^i(1-a)^n-i+a^nn-1d. We remark that a calculation similar to the one in the proof of <ref> was done in <cit.>. However, there is a mistake in that proof; the summation formula they get for the integral is incorrect. §.§ A family of distributions essentially attaining the lower bound Here is the definition of the family of distributions: Let ϵ_d:= 1/√(d) (This choice is somewhat arbitrary). Let f(x_1,…, x_d) := 1/(2π)^d/2e^-(1/2)(x_1^2+ ⋯+ x_d^2) be the probability density function of the standard (mean zero and identity covariance matrix) Gaussian distribution on ^d. Let μ_d be the distribution on ^d with density (1-ϵ_d)f(x)+ ϵ_d1_{x_2 ≤ϵ_d}/V_d where V_d is the volume of the d-ball with radius ϵ_d. In other words, μ_d is the combination of a Gaussian distribution having 1-ϵ_d of the mass (we call it the Gaussian part) and the uniform distribution on the d-ball of radius ϵ_d (called the ball part) having the remaining mass. Note that each distribution μ_d is absolutely continuous. The following proposition shows that <ref> is in some sense best possible. Let α>1 and assume that β>0 satisfies β > 2-α. Let {μ_d}_d∈ℕ be the family of probability distributions defined at the start of this section. If for each d ∈ℕ we let n:=n(d) = ⌊α d⌋ and let S_n={X_1,…,X_n } be a set of n iid random points from μ_d, then for any sequence ℓ:=ℓ(d) with ℓ∼β d, the expected ℓ-face density of S_n is o(1) as d →∞. By definition, the expected ℓ-face density of S_n is equal to the probability that (X_1, …,X_ℓ+1) is a face of S_n. Therefore, in order to show that the expected ℓ-face density is o(1), it will suffice to show that the probability that (X_1, …,X_ℓ+1) is not a face of S_n is 1-o(1). First we will show that we can assume that at least one of the points {X_i}_i ∈ [ℓ+1] is sampled from the ball part of the distribution. Let B be the event that at least one of the points {X_i}_i ∈ [ℓ+1] is sampled from the ball part of the distribution and 1_B the indicator function of the event B. Using that (1+x/y )^y <e^x, we have that ( B) = (1-ϵ_d)^ℓ+1≤ e^-ϵ_d(ℓ+1) =o(1). and so (B) = 1-o(1). This means that we ignore the case when event B is not satisfied. In particular, we have ((X_1, …,X_ℓ+1) is not a face) ≥((X_1, …,X_ℓ+1) is not a face∩ B) and so it suffices to show that ((X_1, …,X_ℓ+1) is not a face∩ B)=1-o(1). 
Define f(x_1,…, x_ℓ+1):= 1 if (x_1,…,x_ℓ+1) is not a face 0 otherwise Then we have that ((X_1, …,X_ℓ+1) is not a face∩ B) = ∫_^d…∫_^d1_B f(x_1,…, x_ℓ+1)μ_d( x_n)…μ_d( x_1). Letting B' ⊂^d(ℓ+1) be the set of point sets satisfying 1_B=1, we can rewrite the above as ((X_1, …,X_ℓ+1) is not a face∩ B) = ∫_B'∫_^d…∫_^d f(x_1,…, x_ℓ+1)μ_d( x_n)…μ_d( x_1). For any fixed choice of points x_1, …, x_ℓ+1, the inner integral ∫_^d…∫_^d f(x_1,…, x_ℓ+1)μ_d( x_n)…μ_d( x_ℓ+2) is equal to the probability that ({X_i}_i∈ [ℓ+2,n]) ∩ L≠∅ where L := (x_1,…,x_ℓ+1). Under the assumption that the points x_1, …, x_ℓ+1 satisfy 1_B = 1, we will show that this probability is 1-o(1) and therefore that <ref> is 1-o(1) for any choice of x_1,… x_ℓ+1 such that 1_B=1. Let π_L be the orthogonal projection of ^d to the subspace L^⊥ of dimension d-ℓ that is orthogonal to L. Let πμ_d denote the measure on L^⊥ which is the projection of μ_d. The probability that ({X_i}_i∈ [ℓ+2,n]) ∩ L≠∅ is equal to the probability that (π_L X_ℓ+2, …,π_L X_n) contains π L. Note that since at least one of the x_i, 1 ≤ i ≤ℓ+1 is sampled from the ball part of the distribution, the distance from L to the origin is at most ϵ_d. We claim that this means that the depth of π L in πμ_d is at least (1-ϵ_d)(1/2-ϵ_d). In order to show this, we need to show that every hyperplane in π_L^d through π L has at least (1-ϵ_d)(1/2-ϵ_d) of the mass of πμ_d on each side. We will actually prove the stronger statement that every hyperplane in ^d which contains L has has at least (1-ϵ_d)(1/2-ϵ_d) of the mass of μ_d on each side. The Gaussian measure of halfspace determined by a hyperplane at distance less than ϵ_d from the origin is greater than 1/2- 1/√(2π)∫_0^ϵ_d e^-x^2/2 x ≥ 1/2-ϵ_d. And 1-ϵ_d of the mass of μ_d is the standard Gaussian measure. So the claim that the depth of π L in πμ_d is at least (1-ϵ_d)(1/2-ϵ_d)≥ 1/2-2ϵ_d follows. Now by <ref>, the probability that (π_L X_ℓ+2, …,π_L X_n) contains π L is at least (d-ℓ+1)n-ℓ-1d-ℓ+1∫_0^1/2-2ϵ_d(y^n-d-2+(1-y)^n-d-2)y^d-ℓ y. Using the formula in <ref> along with the fact that ni= n-1i-1 + n-1i one can show that if the range of integration is extended from (0,1/2-ϵ_d) to (0,1/2), then (d-ℓ+1)n-ℓ-1d-ℓ+1∫_0^1/2(y^n-d-2+(1-y)^n-d-2)y^d-ℓ y=∑_i=0^n-d-2n-ℓ-2i/2^n-ℓ-2. Therefore, using that (1+x/y )^y <e^x, we have that (d-ℓ+1)n-ℓ-1d-ℓ+1∫_0^1/2-2ϵ_d(y^n-d-2+(1-y)^n-d-2)y^d-ℓ y = ∑_i=0^n-d-2n-ℓ-2i/2^n-ℓ-2 -(d-ℓ+1)n-ℓ-1d-ℓ+1∫_1/2-2ϵ_d^1/2(y^n-d-2+(1-y)^n-d-2)y^d-ℓ y ≥∑_i=0^n-d-2n-ℓ-2i/2^n-ℓ-2 - 2ϵ_d(d-ℓ+1)n-ℓ-1d-ℓ+1(1/2+2ϵ_d )^n-ℓ-2 = ∑_i=0^n-d-2n-ℓ-2i/2^n-ℓ-2 - 2ϵ_d(d-ℓ+1)n-ℓ-1d-ℓ+11/2^n-ℓ-2(1+4ϵ_d )^n-ℓ-2 ≥∑_i=0^n-d-2n-ℓ-2i/2^n-ℓ-2 -2ϵ_d(d-ℓ+1)n-ℓ-1d-ℓ+11/2^n-ℓ-2e^4ϵ_d(n-ℓ-2). First we claim that the second term above is o(1). To show this, note that since ℓ∼β d and n∼α d, d-ℓ-1/n-ℓ-2∼1-β/α-β. And by the assumption that β>2-α, we have that 1-β/α-β<1-β/2-2β=1/2 and so there exists ϵ>0 such that for d sufficiently large, d-ℓ-1/n-ℓ-2<1/2-ϵ. Now using that nk≤ 2^nH(k/n), we have that for d sufficiently large, 2ϵ_d(d-ℓ+1)n-ℓ-1d-ℓ+11/2^n-ℓ-2e^4ϵ_d(n-ℓ-2) ≤ 2ϵ_d(d-ℓ+1)2^(n-ℓ-1)H(1/2-ϵ)/2^n-ℓ-2e^8√(d) =o(1) because H(1/2-ϵ)<1. This means that it suffices to show that 2^ℓ+2-n∑_i=0^n-d-2n-ℓ-2i = 1-o(1), or equivalently, that 2^ℓ+2-n∑_i=n-d-1^n-ℓ-2n-ℓ-2i = 2^ℓ+2-n∑_j=0^d-ℓ-1n-ℓ-2j = o(1). So by unimodality of the binomial coefficients, and again using that nk≤ 2^nH(k/n), 2^ℓ+2-n∑_j=0^d-ℓ-1n-ℓ-2j ≤ 2^ℓ+2-n(d-ℓ)n-ℓ-2d-ℓ-1 ≤ 2^ℓ+2-n(d-ℓ) 2^(n-ℓ-2)H(d-ℓ-1/n-ℓ-2) ≤ 2^ℓ+2-n(d-ℓ) 2^(n-ℓ-2)H(1/2-ϵ). 
And the above quantity is o(1) because H(1/2-ϵ)<1. This shows that <ref> is the integral over B' of a function which is uniformly bounded from below by a function which is equal to 1-o(1). Since the measure of B' is equal to (B) = 1-o(1), this shows that <ref> is equal to 1-o(1) as desired. We can also prove a similar result for k-neighborliness of the distributions μ_d. (The distributions μ_d}_d∈ℕ in the following proposition are the distributions defined at the beginning of this section.) Let α>1 and assume that β>0 satisfies α H(β/α)+(α-β)(H(α-1/α-β)-1)>0, or equivalently, α^α/(α-1)^α-12^α > β^β(1-β)^1-β/2^β. Let {μ_d}_d∈ℕ be the family of probability distributions defined at the start of this section. If for each d we let n:=n(d) = ⌊α d⌋ and let S_n={X_1,…,X_n } be a set of n iid random points from μ_d, then for any sequence k:=k(d) with k∼β d, the expected number of subsets of S_n of size k which are not faces of S_n goes to infinity as d →∞. Let {μ_d}_d∈ℕ be the distributions defined at the start of this section. Let B be the event that at least one of the points {X_i}_i ∈ [k] is sampled from the ball part of the distribution and 1_B the indicator function of the event B. The same argument as in the proof of <ref> shows that (B) = 1-o(1). We want to show that the expected number of subsets of size k which are not faces goes to infinity. It will suffice to show that the expected number of subsets of size k which contain at least one point sampled from the ball part and which are not faces goes to infinity. That is, it suffices to show that nk·((X_1, …,X_k) is not a face∩ B) →∞. Define f(x_1,…, x_ℓ+1):= 1 if (x_1,…,x_k) is not a face 0 otherwise Then we have that nk((X_1, …,X_ℓ+1) is not a face∩ B) = nk∫_^d…∫_^d1_B f(x_1,…, x_ℓ+1)μ_d( x_n)…μ_d( x_1). Letting B' ⊂^d(ℓ+1) be the set of point sets satisfying 1_B=1, we can rewrite the above as nk((X_1, …,X_k) is not a face∩ B) = nk∫_B'∫_^d…∫_^d f(x_1,…, x_k)μ_d( x_n)…μ_d( x_1). For any fixed choice of points x_1, …, x_k, the inner integral ∫_^d…∫_^d f(x_1,…, x_k)μ_d( x_n)…μ_d( x_k+1) is equal to the probability that ({X_i}_i∈ [k+1,n]) ∩ L≠∅ where L := (x_1,…,x_k). Under the assumption that the points x_1, …, x_k satisfy 1_B = 1, we will show that nk times this probability approaches infinity. Let π_L be the orthogonal projection of ^d to the subspace L^⊥ of dimension d-k+1 that is orthogonal to L. Let πμ_d denote the measure on L^⊥ which is the projection of μ_d. The probability that ({X_i}_i∈ [k+1,n]) ∩ L≠∅ is equal to the probability that (π_L X_k+1, …,π_L X_n) contains π L. Note that since at least one of the x_i, 1 ≤ i ≤ k is sampled from the ball part of the distribution, the distance from L to the origin is at most ϵ_d. We claim that this means that the depth of π L in πμ_d is at least (1-ϵ_d)(1/2-ϵ_d). In order to show this, we need to show that every hyperplane in π_L^d through π L has at least (1-ϵ_d)(1/2-ϵ_d) of the mass of πμ_d on each side. We will actually prove the stronger statement that every hyperplane in ^d which contains L has has at least (1-ϵ_d)(1/2-ϵ_d) of the mass of μ_d on each side. The Gaussian measure of halfspace determined by a hyperplane at distance less than ϵ_d from the origin is greater than 1/2- 1/√(2π)∫_0^ϵ_d e^-x^2/2 x ≥ 1/2-ϵ_d. And 1-ϵ_d of the mass of μ_d is the standard Gaussian measure. So the claim that the depth of π L in πμ_d is at least (1-ϵ_d)(1/2-ϵ_d)≥ 1/2-2ϵ_d follows. 
Now by <ref>, the probability that (π_L X_k+1, …,π_L X_n) contains π L is at least (d-k+2)n-kd-k+2∫_0^1/2-2ϵ_d(y^n-d-2+(1-y)^n-d-2)y^d-k+1 y. Now using the fact that (1+x/y )^y > e^xy/(x+y), and nk = nn-k, nk(d-k+2)n-kd-k+2∫_0^1/2-2ϵ_d(y^n-d-2+(1-y)^n-d-2)y^d-k+1 y ≥nk(d-k+2)n-kd-k+2∫_1/2-3ϵ_d^1/2-2ϵ_d(y^n-d-2+(1-y)^n-d-2)y^d-k+1 y ≥nk(d-k+2)n-kd-k+2ϵ_d (1/2 - 3ϵ_d )^n-k-1 = nk(d-k+2) ϵ_d n-kn-d-21/2^n-k-1(1- 6ϵ_d )^n-k-1 ≥nk (d-k+2) ϵ_d n-kn-d-21/2^n-k-1 e^-6ϵ_d(n-k-1)/(6ϵ_d+1). Now since n ∼α d and k∼β d, we have that n-d-2/n-k∼α-1/α -β. And using the fact that nk≥ (1/(n+1))2^nH(k/n), for d sufficiently large the above is lower bounded by 1/2·2^α d H(β/α) 2^d(α-β)H(α-1/α-β)/2^(α-β)d·(d-k+2) ϵ_d e^-6ϵ_d(n-k-1)/(6ϵ_d+1)/(n+1)(n-k+1) = 1/2· 2^d(α H(β/α)+(α-β)(H(α-1/α-β)-1) )·(d-k+2) ϵ_d e^-6ϵ_d(n-k-1)/(6ϵ_d+1)/(n+1)(n-k+1). And the above quantity goes to infinity as d→∞ because α H(β/α)+(α-β)(H(α-1/α-β)-1)>0 and the term e^-6ϵ_d(n-k-1)/(6ϵ_d+1) is Ω(1/2^δ d) for all δ>0. We have shown that <ref> is the integral over a set of measure 1-o(1) of a function which is uniformly bounded from below by a function which approaches infinity as d →∞. This shows that the quantity in <ref> approaches infinity. Acknowledgments. Thanks to Luis Rademacher for many helpful discussions and comments. This material is based upon work supported by the National Science Foundation under Grants CCF-1657939, CCF-1934568 and CCF-2006994. abbrv
http://arxiv.org/abs/2307.04795v1
20230710180006
Multi-fractional instantons in $SU(N)$ Yang-Mills theory on the twisted $\mathbb T^4$
[ "Mohamed M. Anber", "Erich Poppitz" ]
hep-th
[ "hep-th", "hep-lat", "hep-ph" ]
=1 A
http://arxiv.org/abs/2307.04062v1
20230708235140
CR compactification for asymptotically locally complex hyperbolic almost Hermitian manifolds
[ "Alan Pinoy" ]
math.DG
[ "math.DG", "53C21, 53C35, 53C55, 58J60" ]
In this article, we consider a complete, non-compact almost Hermitian manifold whose curvature is asymptotic to that of the complex hyperbolic plane. Under natural geometric conditions, we show that such a manifold arises as the interior of a compact almost complex manifold whose boundary is a strictly pseudoconvex CR manifold. Moreover, the geometric structure of the boundary can be recovered by analysing the expansion of the metric near infinity. Symmetry energy and neutron star properties constrained by chiral effective field theory calculations Achim Schwenk ====================================================================================================== § INTRODUCTION The complex hyperbolic space is the unique simply connected, complete, Kähler manifold of constant negative holomorphic sectional curvature (we adopt the convention that this constant is -1). It is the complex analogue of the real hyperbolic space, and similarly to its real counterpart, the complex hyperbolic space can be compactified by a sphere at infinity. This sphere at infinity carries a natural geometric structure, which is closely related to the Riemannian geometry of the complex hyperbolic space: their respective groups of automorphisms are in one-to-one correspondence. This structure is that of a strictly pseudoconvex CR manifold, namely, the CR sphere (𝕊,H,J). If 𝕊 is thought of as the unit sphere of ^N, then H = (T𝕊)∩ (iT𝕊) is the standard contact distribution, and J is given by the multiplication by i in H. Set ρ = e^-r with r the distance function to a fixed point. Then ρ is a defining function for the boundary of the above compactification, and as ρ→ 0, the complex hyperbolic metric has the asymptotic expansion 1/ρ^2ρ⊗ρ + 1/ρ^2θ⊗θ + 1/ργ + o(1), with θ the standard contact form of 𝕊, and γ = θ|_H× H(·,J·) the associated Levi-form. The strict pseudoconvexity of the boundary means that the Levi-form is positive definite on H. The aim of this paper is to construct a similar compactification by a strictly pseudoconvex CR structure for complete, non-compact, almost Hermitian manifolds satisfying some natural geometric conditions. These conditions are the existence of a convex core (called an essential subset), the convergence of the curvature tensor R to that of the complex hyperbolic space R^0 near infinity, and the fact that the underlying almost complex structure J is asymptotically Kähler at infinity. More precisely, we show the following. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of real dimension at least 4, which admits an essential subset. Let r be the distance function to any compact subset. Assume that there exists a > 1 such that R-R^0_g, ∇ J_g, ∇ R_g, and ∇^2 J_g = 𝒪(e^-ar). Then (M,J) is the interior of a compact almost complex manifold (M̅,J̅), whose underlying almost complex structure J̅ is continuous. The hyperplane distribution H_0 = (T∂M̅)∩ (J̅T∂M̅) and the restriction J_0 = J̅|_H_0 are of class 𝒞^1. Moreover, H_0 is a contact distribution, and J_0 is formally integrable, and (∂M̅,H_0,J_0) is a strictly pseudoconvex CR manifold. In addition, the metric g is asymptotically complex hyperbolic: there exists a defining function ρ for the boundary, a 𝒞^1 contact form η^0 calibrating H_0, and a continuous Carnot metric γ, with η^0 and γ^0 = γ|_H_0× H_0 > 0 of class 𝒞^1, such that g ρ→ 0=1/ρ^2ρ⊗ρ + 1/ρ^2η^0⊗η^0 + 1/ργ + 𝒪_g(ρ^a-1) if 1 < a < 3/2, 𝒪_g(ρ^1/2lnρ) if a = 3/2, 𝒪_g(ρ^1/2) if a > 3/2. 
The contact form and the Carnot metric are related by the relation η^0|_H_0× H_0(·,J_0·) = γ^0. This result gives a geometric characterisation of complete, non-compact, almost Hermitian manifolds admitting a compactification by a strictly pseudoconvex CR structure. Notice the similarity between equations (<ref>) and (<ref>). The real analogue of this result, involving a compactification by a conformal boundary for asymptotically locally real hyperbolic manifolds, has been proven by E. Bahuaud, J. M. Lee, T. Marsh and R. Gicquaud <cit.>, pursuing the seminal work of M. T. Anderson and R. Schoen <cit.>. In a previous paper <cit.>, the author proved a similar result in the Kähler case. The improvement here is twofold. First, we are able to remove the Kähler assumption, which was of great importance in the previous proof. Here, the almost complex structure is no more assumed to be parallel, and in fact, needs not even be formally integrable, nor the associated almost symplectic form needs to be closed. In particular, the result applies to perturbations of asymptotically complex hyperbolic Kähler metrics which are only almost Hermitian. Second, the strict pseudoconvexity of the boundary is obtained with an exponential decay of order a > 1, while the earlier version of this result needed a decay of order a > 3/2. Note that this has a cost: the Carnot metric can be shown to be 𝒞^1 only in the direction of the contact distribution. This is the reason why the extended almost complex structure J̅ is only continuous in the transverse direction. Both improvements imply that the set of examples to which the result applies is much increased. A compactification by a CR structure for some complete, non-compact, Kähler manifolds was already given by J. Bland <cit.>, under assumptions that are rather analytic and not totally geometric. To obtain a continuous compactification with no regularity on the CR structure, these assumptions imply the a posteriori estimates R-R^0_g, ∇ R_g = 𝒪(e^-4r)[At first, one sees that these assumptions imply that R-R^0_g = 𝒪(e^-3r) and ∇ R_g = 𝒪(e^-4r). Since on a Kähler manifold it holds that ∇ R^0 = 0, applying Kato's inequality to R-R^0 yields the claimed estimate.]. A strictly pseudoconvex boundary of class 𝒞^1 is obtained under assumptions that imply the even stronger estimates R-R^0_g,∇ R_g,∇^2 R_g = 𝒪(e^-5r). It was proven by O. Biquard and M. Herzlich <cit.> that for asymptotically complex hyperbolic Kähler-Einstein metrics in real dimension 4, the curvature tensor has the form R = R^0 + Ce^-2r + o_g(e^-2r), where C is a non-zero multiple of the Cartan tensor of the CR boundary. It is known that the Cartan tensor vanishes exactly when the CR structure is locally equivalent to that of the sphere (such CR manifolds are called spherical). Many examples are then not covered by J. Bland's results. The paper is organized as follows. In Section <ref>, we set up the notations and explain the main idea of the proof of our main Theorem. In Section <ref>, we compute the expansion of the metric near infinity and prove the existence of the objects η^0 and γ, see Theorem <ref>. Section <ref> is dedicated to prove the existence of J_0, see Theorem <ref>. At this step, η^0, γ and J_0 are continuous tensor fields. We show in Section <ref> that they have higher regularity and that they induce a strictly pseudoconvex CR structure, see Theorems <ref>, <ref> and <ref>. Finally, we prove our main Theorem in Section <ref>. § PRELIMINARIES §.§ Notations Let (M,g) be a Riemannian manifold. 
Its Levi-Civita connection is denoted by ∇. Our convention on the Riemann curvature tensor is Besse's convention <cit.>, namely R(X,Y)Z = -(∇^2_X,Y Z - ∇^2_Y,XZ) = ∇_[X,Y]Z - ∇_X(∇_YZ) + ∇_Y(∇_XZ), for vector fields X, Y and Z. By abuse of notation, we still denote by R its four times covariant version: this means that we write R(X,Y,Z,T) = g(R(X,Y)Z,T) for vector fields X, Y, Z and T. With this convention, the sectional curvature of a tangent plane P with orthonormal basis {u,v} is (P) = (u,v) = R(u,v,u,v). §.§.§ Essential subsets and normal exponential map Following <cit.>, an essential subset K ⊂ M is a codimension 0, compact, totally convex submanifold, with smooth boundary which is oriented by a unit outward vector field ν, and such that (M∖ K) < 0. In that case, the normal exponential map [ ℰ _+ × ⟶ M̅∖̅ ̅K̅; (r,p) ⟼ exp_p(rν_p) ] is a diffeomorphism. The level hypersurface at distance r above K is denoted by _r. For r ⩾ 0, ℰ induces a diffeomorphism ℰ_r→_r given by ℰ_r(p)=ℰ(r,p); the induced Riemannian metric ℰ_r^*g on is denoted by g_r. Gauss Lemma states that ℰ^*g = r ⊗ r + g_r. Note that g_0 = g|_. The gradient of the distance function r on M̅∖̅ ̅K̅, called the radial vector field, is denoted by . A radial geodesic is a unit speed geodesic ray of the form r ↦ℰ(r,p) with p∈. Note that the restriction of to a radial geodesic is its tangent vector field: therefore, satisfies the equation of geodesics =0. More generally, a vector field X on M̅∖̅ ̅K̅ is called radially parallel if X=0. The shape operator S is the field of symmetric endomorphisms on M̅∖̅ ̅K̅ defined by SX = ∇_X. The normal Jacobi field on M̅∖̅ ̅K̅ associated to a vector field v on is defined by Y_v = ℰ_*v. Such vector fields are orthogonal to and commute with the radial vector field . They satisfy the Jacobi field equation ( Y_v) = -R(,Y_v), and their restriction to any radial geodesic are thus Jacobi fields. Normal Jacobi fields are related to the shape operator S by the first order linear differential equation Y_v = SY_v. §.§.§ Almost Hermitian manifolds An almost Hermitian manifold (M,g,J) is a Riemannian manifold (M,g) together with an almost complex structure J which is compatible with the metric, in the sense that it induces linear isometries in the tangent spaces: one has g(JX,JY) = g(X,Y) for all vector fields X and Y. Note that this implies that J is skew-symmetric (in fact, these two properties are equivalent). A tangent plane P⊂ TM is called J-holomorphic (respectively totally real) if JP=P (respectively JP⊥ P). The constant -1 J-holomorphic sectional curvature tensor R^0 on (M,g,J) is defined by the equality R^0(X,Y)Z = 1/4( g(Y,Z)X - g(X,Z)Y + g(JY,Z)JX - g(JX,Z)JY + 2g(X,JY)JZ) for X, Y and Z vector fields on M. Similarly to the Riemann curvature tensor, we still denote by R^0 its fully covariant version, meaning that R^0(X,Y,Z,T) = g(R^0(X,Y)Z,T) for all vector fields X, Y, Z and T. Note that R^0_g ⩽3/2. For any pair of orthogonal unit tangent vectors u and v, R^0(u,v,u,v) = -1/4(1+3g(Ju,v)^2); the minimal value -1 (respectively the maximal value -1/4) is achieved precisely when {u,v} spans a J-holomorphic plane (respectively a totally real plane). In the specific case of the complex hyperbolic space, R^0 coincides with the curvature tensor of the complex hyperbolic metric (see <cit.>). 
§.§.§ CR manifolds A CR manifold (for Cauchy-Riemann) is a triplet (M,H,J) where H is a tangent distribution of hyperplanes and J is an almost complex structure on H, such that the distribution H^1,0 = { X - iJX | X ∈ H}⊂ TM⊗_ is involutive (i.e. [X,Y] is a section of H^1,0 whenever X and Y are). In this case, J is said to be formally integrable. A CR manifold is called strictly pseudoconvex if there exists a contact form η calibrating the distribution H (i.e. H=η and η induces a non-degenerate 2-form on H), and if the associated Levi form η|_H× H(·,J·) is positive definite on H. §.§ The asymptotic conditions Throughout the paper, (M,g,J) will denote a complete, non-compact, almost Hermitian manifold of dimension 2n+2⩾ 4, with an essential subset K. We define the following asymptotic geometric conditions. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold. Let r be the distance function to a compact subset. * We say that (M,g,J) satisfies the ALCH(ALCH) condition of order a > 0, for asymptotically locally complex hyperbolic[For this condition implies that the local geometry at infinity resembles that of the complex hyperbolic space.], if R-R^0_g = 𝒪(e^-ar). * We say that (M,g,J) satisfies the AK(AK) condition of order a > 0, for asymptotically Kähler, if ∇ J_g = 𝒪(e^-ar). Note that R^0_g ⩽3/2. The condition of order a > 0 implies R_g = 𝒪(1). One readily verifies that the condition implies that the sectional curvature of M is bounded as follows: -1 + 𝒪(e^-ar) ⩽⩽ - 1/4 + 𝒪(e^-ar). The lower bound implies the following Lemma, proven in <cit.>. Assume that (M,g,J) is a complete, non-compact, almost Hermitian manifold, admitting an essential subset K, and satisfying the condition of order a > 0. Let S = ∇ be the shape operator of the level hypersurfaces above K. Then one has S_g ⩽ 1 + 𝒪(e^-ar) if 0 < a < 2, 𝒪((r+1)e^-2r) if a = 2, 𝒪(e^-2r) if a > 2. In any case, one has S_g = 𝒪(1), and exp(∫_0^r S_g-1) = 𝒪(1). We also define the following analogous asymptotic conditions of higher order. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold. Let r be the distance function to a compact subset. * We say that (M,g,J) satisfies the ALCHplus(ALCH+) condition of order a > 0 if one has the estimates R-R^0_g = 𝒪(e^-ar) and ∇ R_g = 𝒪(e^-ar). * We say that (M,g,J) satisfies the AKplus(AK+) condition of order a > 0 if one has the estimates ∇ J_g = 𝒪(e^-ar) and ∇^2 J_g = 𝒪(e^-ar). Under the condition of order a > 0, one has ∇ R^0_g = 𝒪(e^-ar). Thus, under the condition of order a > 0, Kato's inequality shows that the condition of order a > 0 is equivalent to R-R^0_g r →∞⟶ 0 and ∇(R-R^0)_g = 𝒪(e^-ar). In practice, r will be the distance function to the essential subset K. The constants involved in the previous estimates are global. Moreover, in what follows, all estimates of the form f = 𝒪(h) will involve a constant that is global. When built out of the choice of a reference frame (which will soon be called an admissible frame, see Definition <ref>), the constant will be independent of that choice. By the expressions Y_u_g = 𝒪(u_g_0e^r) or Y_u = 𝒪_g(u_g_0e^r), we mean that there exists C > 0 such that for any vector field u on , one has ∀ r ⩾ 0, ∀ p ∈, (Y_u)_ℰ(r,p)_g ⩽ C u_p_g_0e^r. §.§ Outline of the proof If (M,g,J) is assumed to be Kähler (that is, if ∇ J=0), the author showed in a previous paper <cit.> the following result. [<cit.>] Let (M,g,J) be a complete, non-compact, Kähler manifold admitting an essential subset K. 
Assume that there is a constant a>1 such that the estimates R-R^0_g,∇ R_g=𝒪(e^-ar) hold, where r is the distance function to any compact subset. Then on , there exist a contact form η of class 𝒞^1, and a continuous symmetric positive bilinear form γ, positive definite on the contact distribution H=η, such that ℰ^*g = r^2 + e^2rη⊗η + e^r γ + lower order terms. If moreover a>3/2, then γ is of class 𝒞^1, and there exists a 𝒞^1 formally integrable almost complex structure J_H on H, such that γ|_H× H = η(·, J_H·). In particular, (,H,J_H) is a strictly pseudoconvex CR manifold. Notice the similarity between equations (<ref>) and (<ref>) by setting ρ = e^-r. This result provides a compactification by a strictly pseudoconvex CR structure for a Kähler manifold whose curvature is asymptotically close to that of the complex hyperbolic space. The proof is quite long, but can be summarised as follows: * For {Jν,e_1,…,e_2n} an orthonormal frame on , with ν the outward unit normal, let {,E_1,…,E_2n} denotes its parallel transport along radial geodesics. For r ⩾ 0, define η_r = ℰ_r^*(e^-rg(·,)), and η^j_r = ℰ_r^*(e^-r/2g(·,E_j)), j∈{1,…,2n}, which are local 1-forms on . * If R-R^0_g = 𝒪(e^-ar), with a > 1/2, then {η_r,η^1_r…,η^2n_r}_r⩾ 0 converges to continuous 1-forms {η,η^1,…,η^2n}. This implies that the metric reads as in equation (<ref>), where γ = ∑_j=1^2nη^j⊗η^j. If moreover a > 1, volume comparison techniques show that the limit is a coframe. * If in addition, ∇ R_g=𝒪(e^-ar), then the family of 1-forms (η_r)_r⩾ 0 converges in 𝒞^1 topology, the limit η is of class 𝒞^1, and is contact. The proof uses several estimates, and tedious computations involving many curvature terms. * If a>3/2, then (η_r^j)_r⩾ 0 locally uniformly converges in 𝒞^1 topology, for any j∈{1,…,2n}. Hence, γ is of class 𝒞^1. * If φ_r = ℰ_r^*(J - g(·,)⊗) + g(·,)⊗), then (φ_r)_r⩾ 0 uniformly converges to a tensor φ of class 𝒞^1. Its restriction to H= η gives the desired formally integrable almost complex structure J_H. The very first step of the proof crucially relies on the fact that is parallel in the radial direction, and in fact, the equality ∇ J = 0 is used many times. Note that the Kähler assumption is rather rigid: for instance, one has ∇ J = 0 if and only if the 2-form g(J·,·) is closed and J is formally integrable. In this paper, we extend and improve the results of <cit.>. First, the Kähler condition is removed: in fact, neither the closedness of g(J·,·) nor the formal integrability of J need to be met. We instead consider an almost Hermitian manifold (M,g,J) whose almost complex structure J is only parallel at infinity, by imposing the condition ∇^k J_g = 𝒪(e^-ar), k∈{1,2}. Second, we show that the strict pseudoconvexity of the boundary can be obtained with a > 1 instead of a > 3/2. This sharper bound comes from deriving sharp geometric estimates in the direction of the contact structure. In this context of this paper, the vector field is not radially parallel, and one cannot even initiate the above strategy as it stands. The main trick is to prove the existence, under our assumptions, of a unit vector field E_0 on M̅ ̅∖̅ ̅K̅ that is radially parallel, and that satisfies E_0-_g = 𝒪(e^-ar). This latter vector field is unique. One can then consider a reference frame {E_0,…,E_2n} having nice properties, which we call an admissible frame (see Definition <ref> below), and try to mimic the above proof. The counterpart is that the computations become longer and more involved; one also needs to show numerous extra estimates. 
§ METRIC ESTIMATES This section is dedicated to the derivation of the expansion near infinity of the metric g under the and conditions. We first define the notion of admissible frames, which simplify future computations. We then derive estimates on the asymptotic expansion of normal Jacobi fields, which turns out to be the main ingredients to show our results. §.§ Admissible frames We give a construction for some parallel orthonormal frames along radial geodesics in which later computations will be easier. For v a vector field on , let V be the vector field on M̅∖̅ ̅K̅ obtained by the parallel transport of v along radial geodesics. Finally, for r ⩾ 0, define β_r(v) = g(,V)|__r. This defines a family of 1-forms (β_r)_r⩾ 0 on . Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K. Assume that it satisfies the condition of order a > 0. Then there exists a continuous 1-form β on such that β_r - β = 𝒪_g_0(e^-ar). Fix v a vector field on and r ⩾ 0. Both and V are radially parallel, so that one has β_r(v)-β_0(v) = ∫_0^r g(,V) = ∫_0^r g(( J),V). By the assumption, there exists C > 0 such that ∇ J_g ⩽ Ce^-ar. The Cauchy-Schwarz inequality now implies that ∫_0^rg(( J), V)⩽∫_0^r ∇ J_g V_g⩽ C1-e^-ar/av_g_0. Therefore, (β_r(v))_r⩾ 0 pointwise converges: let β(v) to be its pointwise limit. It defines a pointwise linear form on the tangent spaces of , satisfying |β(v)-β_r(v)| = | ∫_r^∞ g(( J),V) | ⩽∫_r^∞|g(( J),V)| ⩽C/ae^-arv_g_0, from which is derived equation (<ref>). The convergence is thus uniform, and β is continuous. We shall now show that β is nowhere vanishing. For all r ⩾ 0, one has β_r_g_0 = 1 pointwise. Indeed, for any v, Cauchy-Schwarz inequality implies that |β_r(v)| ⩽V_g = v_g_0. Equality is reached for v = ι_r^-1(), where ι_r T→ T_r is induced by the parallel transport along radial geodesics. It follows that β_g_0 = 1 pointwise, and that β is nowhere vanishing. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K. Assume that it satisfies the condition of order a > 0. Let U⊂ be an open subset on which the continuous distribution β is trivialisable. Let {e_0,…,e_2n} be an orthonormal frame on U such that β(e_0) > 0 and β(e_j) = 0 if j∈{1,…,2n}. The associated admissible frame {E_0,…,E_2n} on the cone E(_+× U) is defined as the parallel transport of {e_0,…,e_2n} along the radial geodesics. If {E_0,…,E_2n} is an admissible frame, then {,E_0,…,E_2n} is an orthonormal frame on the cone E(_+× U) whose elements are parallel in the radial direction even though they need not be differentiable in the directions that are orthogonal to . In the following, we will often refer to admissible frames without mentioning the open subset U⊂ on which they are defined. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K. Assume that it satisfies the condition of order a > 0. Let {E_0,…,E_2n} be an admissible frame. Then β(e_0) = 1. One has 1 = _g^2 = ∑_j=0^2nβ_r(e_j)^2. The result follows by taking the limit as r →∞. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K. Assume that it satisfies the condition of order a > 0. Let {E_0,…,E_2n} be an admissible frame and δ be the Kronecker symbol. Then * g(,E_j) - δ_0j = 𝒪(e^-ar) for j∈{0,…,2n}, * E_0 - = 𝒪_g(e^-ar). The first point is a consequence of the equality g(,E_j)=β_r(e_j) and of equation (<ref>). 
For the second point, notice that E_0- = ∑_j=0^2ng(E_0-,E_j)E_j = ∑_j=0^2n(δ_0j- g(,E_j))E_j, from which is derived the claimed estimate. One easily shows that the vector field E_0 is the unique unit vector field X on E(_+× U) such that X = 0 and g(X,) = 1 + o(1). If (M,g,J) is Kähler (if ∇ J = 0), then = 0, and thus E_0 =. In this specific case, admissible frames can be chosen to be smooth, and correspond to the radially parallel orthonormal frames defined in <cit.>. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K. Assume that it satisfies the and conditions of order a > 0. Let {E_0,…,E_2n} be an admissible frame. Then * (,E_0) + 1 = 𝒪(e^-ar), * (,E_j) + 1/4 = 𝒪(e^-ar) for j ∈{1,…,2n}, * R(,E_i,,E_k) = 𝒪(e^-ar) for any i ≠ j ∈{0,…,2n}. We prove the first point, the other being shown similarly. One readily verifies from the definition of R^0 that R^0(,,,) = -1, and therefore, it holds that (,E_0) = R^0(, + (E_0-), , + (E_0-))+ (R-R^0)(,E_0,,E_0) = -1 + 2R^0(,E_0-,E_0,) + R^0(,E_0-,,E_0-) + (R-R^0)(,E_0,,E_0). The definition of R^0 (see equation (<ref>)) yields R^0_g ⩽3/2, and the result follows from the assumption and from the second point of Corollary <ref>. §.§ Associated coframes and normal Jacobi fields estimates Recall that for r ⩾ 0, the diffeomorphism ℰ_r→_r is defined by ℰ_r(p) = ℰ(r,p). Let (M,g,J) be a complete, non-compact, almost Hermitian manifold with essential subset K. Assume that it satisfies the condition of order a > 0. Let {E_0,…,E_2n} be an admissible frame on the cone E(_+× U). The associated coframe {η^0_r,…,η^2n_r}_r ⩾ 0 on U is defined by η^0_r = ℰ_r^* (e^-r g(·,E_0)) and η^j_r = ℰ_r^*(e^-r/2g(·,E_j)) if j∈{1,…,2n}. In any admissible frame, the normal Jacobi field Y_v associated to the vector field v on reads Y_v = η^0_r(v) e^r E_0 + ∑_j=1^2nη^j_r(v) e^r/2E_j. Applying twice the differential operator to this last equality, one has ( Y_v) = (^2 η^0_r(v)+ 2η^0_r(v) + η^0_r(v) )e^r E_0 + ∑_j=1^2n(^2η^j_r(v) + η^j_r(v) + 1/4η^j_r(v) )e^r/2E_j. Recall that radial Jacobi fields are actual Jacobi fields, which means that they satisfy the second order linear differential equation ( Y_v) = -R(,Y_v). An identification of the components of ( Y_v) in the given admissible frame shows that the coefficients {η^j_r(v)}_j ∈{0,…,2n} satisfy the differential system ^2η^0_r(v) + 2 η^0_r(v) = ∑_k=0^2n u^0_k η^k_r(v), ^2η^j_r(v) + 2η^j_r(v) = ∑_k=0^2n u^j_k η^k_r(v), j∈{1,…,2n}, where the functions {u^j_k}_j,k∈{0,…,2n} are defined by u^j_k = - (,E_0) + 1 if j=k=0, e^-r/2R(,E_0,,E_k) if j=0, k≠ 0, e^r/2 R(,E_k,,E_0) if j≠ 0, k=0, R(,E_j,,E_k) if j,k ∈{1,…,2n}, j≠ k, (,E_j) + 1/4 if j,k∈{1,…,2n}, j=k. Proposition <ref> implies that one has the uniform estimates |u^j_k| = 𝒪(e^-(a-1/2)r). Combining the proofs of <cit.>, relying on successive integrations, an application of Grönwall's Lemma, and a bootstrap argument, one obtains the following result. The last claim relies on estimates on the growth of the volume (see <cit.>). Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K. Assume that it satisfies the and conditions of order a>1/2. Let {η^0_r,…,η^2n_r}_r ⩾ 0 be the coframes associated to an admissible frame on U⊂. 
Then there exists continuous 1-forms {η^0,…,η^2n} on U ∂_r η^0_r, η^0_r - η^0 = 𝒪_g_0(e^-ar) if 1/2 < a <3/2, 𝒪_g_0((r+1)e^-3/2r) if a = 3/2, 𝒪_g_0(e^-3/2r) if a > 3/2, ∀ j ∈{1,…,2n}, ∂_r η^j_r, η^j_r - η^j = 𝒪_g_0(e^-(a-1/2)r) if 1/2 < a <3/2, 𝒪_g_0((r+1)e^-r) if a = 3/2, 𝒪_g_0(e^-r) if a > 3/2. If furthermore one assumes that a > 1, the family {η^0,…,η^2n} is a continuous coframe on U. If a > 1/2, then η^j_r_g_0 is bounded independently of r, j, the choice of an admissible frame, and U. For j∈{0,…, 2n} and r ⩾ 0, write η^j_r = η^j_0 + ∫_0^r η^j_r. Notice that η^j_0_g_0 = 1. Then by Proposition <ref>, η^j_r_g_0⩽η^j_0_g_0 + ∫_0^r η^j_r_g_0⩽ 1 + ∫_0^∞η^j_r_g_0 = 𝒪(1). Recall that a normal Jacobi field Y_v satisfies Y_v = SY_v. The following corollary is an immediate consequence of Proposition <ref>. In any admissible frame, the normal Jacobi field Y_v associated to a vector field v on satisfies Y_v = η^0(v) e^r E_0 + ∑_j=1^2nη^j(v)e^r/2 E_j + 𝒪_g(v_g_0 e^-(a-1)r) if 1/2 < a <3/2, 𝒪_g(v_g_0 (r+1)e^-r/2) if a = 3/2, 𝒪_g(v_g_0 e^-r/2) if a > 3/2, and SY_v = η^0(v) e^r E_0 + ∑_j=1^2n1/2η^j(v)e^r/2 E_j + 𝒪_g(v_g_0 e^-(a-1)r) if 1/2 < a <3/2, 𝒪_g(v_g_0 (r+1)e^-r/2) if a = 3/2, 𝒪_g(v_g_0 e^-r/2) if a > 3/2. As a consequence, one has the global estimates Y_v, SY_v = 𝒪_g(v_g_0e^r). If moreover, v is everywhere tangent to η^0, then Y_v, SY_v = 𝒪_g(v_g_0e^r/2). Note that although the estimates of Proposition <ref> are not uniform in all directions, they contribute equally to the lower order term in equations (<ref>) and (<ref>) thanks to the remaining exponential factors. §.§ Global consequences and metric estimates We shall now highlight global consequences of the study conducted in Subsections <ref> and <ref>. We then prove the first of our main results. Assume that (M,g,J) satisfies the condition of order a > 0. Then the local vector field e_0 defined in Definition <ref> defines a global continuous vector field on , independently of the construction of any admissible frame. The 1-form β defined in Lemma <ref> is continuous and nowhere vanishing. Hence, the distribution β⊂ T is a continuous distribution of hyperplanes. It follows that its g_0-orthogonal complement L is a well-defined and continuous line bundle. Notice that the restriction of β trivialises L. It follows that e_0 is the unique section of L that is positive for β, and of unit g_0-norm. This concludes the proof. The family of 1-forms {η^0_r}_r ⩾ 0 is then globally defined on , independently of the choice of the admissible frame. As a consequence, one has the following global version of Proposition <ref>. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K. Assume that it satisfies the and condition of order a > 1/2. Then there exists a continuous 1-form η^0 on such that ∂_r η^0_r, η^0_r - η^0 = 𝒪_g_0(e^-ar) if 1/2 < a <3/2, 𝒪_g_0((r+1)e^-3/2r) if a = 3/2, 𝒪_g_0(e^-3/2r) if a > 3/2. If furthermore one assumes that a > 1, then η^0 is nowhere vanishing. The following Corollary is a straightforward application of the triangle inequality and of Corollary <ref>. One has the following estimates η^0_r ⊗η^0_r - η^0 ⊗η^0 = 𝒪_g_0(e^-ar) if 1/2 < a < 3/2, 𝒪_g_0((r+1)e^-3/2r) if a = 3/2, 𝒪_g_0(e^-3/2r) if a > 3/2. From Gauss's Lemma, the Riemannian metric g reads as ℰ^*g = r ⊗ r + g_r, with (g_r)_r ⩾ 0 the smooth family of Riemannian metrics on defined by g_r = ℰ_r^* g. 
By construction, the first term that appears in the asymptotic expansion of the metric g near infinity is e^2rη^0 ⊗η^0. For r⩾ 0, γ_r is defined as γ_r = e^-r( g_r - e^2rη^0_r ⊗η^0_r). By definition, (γ_r)_r⩾ 0 is a family of symmetric 2-tensors on . Let {η^0_r,…,η^2n_r}_r ⩾ 0 be the coframes associated to an admissible frame {E_0,…,E_2n}. Then locally, γ _r = ∑_j=1^2nη^j_r⊗η^j_r. Consequently, γ_r is positive semi-definite, and is positive definite on η^0_r, for any r ⩾ 0. The following proposition shows that (γ_r)_r ⩾ 0 converges to some tensor that shares similar properties. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, and admitting an essential subset K. Assume that it satisfies the and conditions of order a > 1/2. Then there exists a continuous positive semi-definite symmetric 2-tensor γ on , which we call the Carnot metric, such that γ_r - γ = 𝒪_g_0(e^-(a-1/2)r) if 1/2 < a < 3/2, 𝒪_g_0((r+1)e^-r) if a = 3/2, 𝒪_g_0(e^-r) if a > 3/2. If furthermore one assumes that a > 1, then γ is positive definite on η^0. For r ⩾ 0, one has g_r = e^2rη^0_r⊗η^0_r + e^r γ_r. Let {η^0_r,…,η^2n}_r ⩾ 0 be the coframes associated with an admissible frame. Locally, one can express γ_r as γ_r = ∑_j=1^2nη^j_r⊗η^j_r. Therefore, (γ_r)_r ⩾ 0 converges pointwise to a limit we call γ which is locally given by ∑_j=1^2nη^j⊗η^j. In addition, one has the local expression γ_r - γ = ∑_j=1^2nη^j_r⊗ (η^j_r-η^j) + (η^j_r-η^j) ⊗η^j. The global estimates (<ref>) now follow from the triangle inequality and from an application of Proposition <ref> and Corollary <ref>. As a consequence, γ is a continuous symmetric positive semi-definite 2-tensor. If a > 1, then {η^0,…,η^2n} is a coframe (Proposition <ref>), and γ is hence positive definite on η^0. The previous study implies the following comparison between quadratic forms. If a > 1, there exists a constant λ > 1 such that for all r ⩾ 0, the comparison between quadratic forms 1/λ e^rg_0 ⩽ g_r ⩽λ e^2r g_0 holds. For r ⩾ 0, η^0_r ⊗η^0_r and γ_r are positive symmetric 2-tensors. Define q_r = η_r^0⊗η_r^0 + γ_r, which is a Riemannian metric on . From g_r = e^2rη^0_r ⊗η^0_r + e^r γ_r, one readily checks that ∀ r ⩾ 0, e^r q_r ⩽ g_r ⩽ e^2rq_r. According to Propositions <ref> and <ref>, q_r uniformly converges to the continuous Riemannian metric q_∞ = η^0 ⊗η^0 + γ as r→∞. Let S^g_0 be the unit sphere bundle of (,g_0), which is compact by compactness of . The map (r,v) ∈ [0,∞]× S^g_0↦ q_r(v,v)∈ (0,∞) is then continuous on the compact space [0,∞]× S^g_0. Therefore, there exists λ > 1 such that for all r⩾ 0, 1/λ⩽ q_r ⩽λ on S^g_0. The result now follows from equation (<ref>) and from the homogeneity of quadratic forms. We shall now show the first of our main results. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K. Assume that it satisfies the and assumptions of order a > 1/2. Then on , there exists a continuous 1-form η^0 and a continuous positive semi-definite symmetric 2-tensor γ, such that in the normal exponential map E, the Riemannian metric g reads ℰ^*g = r ⊗ r + e^2rη^0 ⊗η^0 + e^r γ + 𝒪_g_0(e^(2-a)r) if 1/2 < a < 3/2, 𝒪_g_0((r+1)e^r/2) if a = 3/2, 𝒪_g_0(e^r/2) if a > 3/2. If furthermore one assumes that a > 1, then η^0 is nowhere vanishing, and γ is positive definite on the distribution of hyperplanes η^0. Let (η^0_r)_r ⩾ 0, (γ_r)_r ⩾ 0 and their limits η^0 and γ be given by Propositions <ref> and <ref>. 
By construction, one has ℰ^*g = r ⊗ r + e^2rη^0_r ⊗η^0_r + e^r γ_r = r ⊗ r + e^2rη^0 ⊗η^0 + e^r γ + ε_r, with ε_r = e^2r(η^0_r ⊗η^0_r - η^0 ⊗η^0) + e^r (γ_r - γ). Estimates (<ref>) now follow from Corollary <ref> (estimates on η^0_r⊗η^0_r - η^0⊗η^0) and Proposition <ref> (estimates on γ_r-γ). Ultimately, if a > 1, the last claim follows from Propositions <ref> (η^0 is nowhere vanishing) and <ref> (γ is positive semi-definite, positive definite on η^0). Setting g = ℰ_*( r⊗ r + e^2rη^0⊗η^0 + e^r γ) on M̅∖̅ ̅K̅, Corollary <ref> shows that estimates (<ref>) read g - g = 𝒪_g(e^-(a-1)r) if 1/2 < a < 3/2, 𝒪_g((r+1)e^-r/2) if a = 3/2, 𝒪_g(e^-r/2) if a > 3/2. If η^0 were a contact form and γ a Carnot metric on its kernel distribution, then g would be asymptotically complex hyperbolic in the sense of <cit.>. §.§ Estimates on the shape operator Before we conclude this section, we give another consequence of the previous study: we derive asymptotic estimates on the shape operator S. First, we introduce a natural vector field ξ_0, which is closely related to S. The vector fields (ξ_0^r)_r ⩾ 0 on are defined as ξ_0^r = ℰ_r^* (e^r E_0). Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K. Assume that it satisfies the and conditions of order a > 1. Then there exists a continuous vector field ξ_0 on such that ξ_0^r - ξ_0 = 𝒪_g_0(e^-(a-1/2)r) if 1 < a <3/2, 𝒪_g_0((r+1)e^-r) if a = 3/2, 𝒪_g_0(e^-r) if a > 3/2. It is uniquely characterised by the fact that η^0(ξ_0) = 1 and γ(ξ_0,ξ_0) = 0. Define g̅_0 = η^0⊗η^0 + γ, which is a continuous Riemannian metric on according to Theorem <ref>. Consider the continuous line bundle L̅ = (η^0)^⊥_g̅_0 on . The restriction of η^0 trivialises L̅, which thus has a continuous nowhere vanishing section ξ. Define ξ_0 = ξ/η^0(ξ), which is continuous by construction. Let {η^0,…,η^2n} be the limit coframe associated with any admissible frame. Then η^0(ξ_0) = 1 and η^j(ξ_0) = 0 for j∈{1,…,2n}. In particular, ξ_0 is uniquely characterised by the relations η^0(ξ_0)=1 and γ(ξ_0,ξ_0)=∑_j=1^2nη^j(ξ_0)^2 = 0. Notice that for j∈{1,…,2n} and r ⩾ 0, one has η^j_r(ξ_0 - ξ_0^r) = η^j_r(ξ_0^r) - η^j_r(ξ) = δ^j_0 - η^j_r(ξ_0) = η^j(ξ_0) - η^j_r(ξ_0)= (η^j-η^j_r)(ξ_0), where δ stands for the Kronecker symbol. Corollary <ref> yields the existence of a constant c > 0 such that ξ_0^r - ξ_0_g_0⩽ c e^-r/2Y_(ξ_0^r - ξ_0)_g for all r ⩾ 0. The triangle inequality together with equation (<ref>) now yield Y_(ξ_0^r - ξ_0)_g ⩽(e^r η^0-η^0_r_g_0 + e^r/2∑_j=1^2nη^j-η^j_r_g_0) ξ_0_g_0. Estimates (<ref>) now follow from the estimates of Proposition <ref>, together with the fact that ξ_0_g_0 is uniformly bounded by continuity of ξ_0 and compactness of . Fix an admissible frame on U⊂. If ξ_j^r = ℰ_r^* (e^r/2E_j) and if {ξ_0,…,ξ_2n} is the dual frame of {η^0,…,η^2n}, a similar study shows that ∀ j ∈{1,…,2n}, ξ_j - ξ_j^r = 𝒪_g_0(e^-(a-1/2)r) if 1 < a <3/2, 𝒪_g_0((r+1)e^-r) if a = 3/2, 𝒪_g_0(e^-r) if a > 3/2. The constants involved in the upper bounds are independent of the choice of the admissible frame and of U. It relies on the fact that one can uniformly bound ξ_j_g_0 if j∈{1,…,2n}, for instance, as an application of Corollary <ref>. For v a vector field on , the associated normal Jacobi fields Y_v satisfies Y_v = SY_v. It follows from equation (<ref>) that in an admissible frame, one has SY_v = (η^0_r(v) + η^0_r(v) )e^r E_0 + ∑_j=1^2n(η^j_r(v) + 1/2η^j_r(v) )e^r/2E_j. 
For r ⩾ 0, consider the pull-back S_r = ℰ_r^*S of the shape operator S through the diffeomorphism ℰ_r →_r. It is well defined since S leaves stable the tangent bundle of the level hypersurfaces _r. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K. Assume that it satisfies the and conditions of order a > 1/2. Then the family (S_r)_r ⩾ 0 satisfies the estimates S_r - 1/2( + η^0_r ⊗ξ_0^r) = 𝒪_g_0(e^-(a-1/2)r) if 1/2 < a <3/2, 𝒪_g_0((r+1)e^-r) if a = 3/2, 𝒪_g_0(e^-r) if a > 3/2, In particular, if a > 1, then S_r r →∞⟶1/2( + η^0 ⊗ξ_0), and one can substitute η^0_r⊗ξ_0^r with η^0 ⊗ξ_0 in estimates (<ref>). Let v be a vector field on . It follows from Proposition <ref> and from Corollary <ref> that SY_v -1/2(Y_v + η^0_r(v)e^rE_0) = 𝒪_g(v_g_0 e^-(a-1)r) if 1/2 < a <3/2, 𝒪_g(v_g_0 (r+1)e^-r/2) if a = 3/2, 𝒪_g(v_g_0 e^-r/2) if a > 3/2, By the very definition of S_r, ξ_0^r and g_r, it follows that S_r-1/2( + η^0_r⊗ξ_0^r)_g_r = 𝒪(e^-(a-1)r) if 1/2 < a <3/2, 𝒪((r+1)e^-r/2) if a = 3/2, 𝒪(e^-r/2) if a > 3/2, Finally, Corollary <ref> implies that S_r - 1/2( + η^0_r ⊗ξ_0^r) = 𝒪_g_0(e^-r/2S_r - 1/2( + η^0_r ⊗ξ_0^r) _g_r), and estimates (<ref>) now follow. If a > 1, then estimates on η^0-η^0_r_g_0 (Proposition <ref>) and on ξ_0-ξ_0^r_g_0 (Proposition <ref>), together with the triangle inequality, show that one can replace η^0_r⊗ξ_0^r with η^0⊗ξ_0 in estimates (<ref>). This concludes the proof. In the complex hyperbolic space, the shape operator of a geodesic sphere of radius r, with outward unit normal ν, is given by S = (r)_ Jν + 1/2(r/2) _{ν,Jν}^⊥. Proposition <ref> implies that the local extrinsic geometry of the level hypersurfaces _r is then asymptotic to that of horospheres in the complex hyperbolic space. § THE ALMOST COMPLEX STRUCTURE This section is dedicated to prove the existence of a natural almost complex structure J_0 on the distribution of hyperplanes H_0 = η^0, obtained as the restriction of a naturally defined tensor φ on . The ambient almost complex structure J does not leave stable the ambient distribution of hyperplanes {}^⊥. Consider the orthogonal projection π T M̅∖̅ ̅K̅→ T M̅∖̅ ̅K̅ onto {}^⊥. Define Φ to be the field of endomorphisms on M̅∖̅ ̅K̅ defined by Φ = π J π. Since π and J have unit norms, then Φ_g ⩽ 1. Formally, one has π = - g(,·) ⊗, and Φ then reads Φ = J + g(·,) ⊗ - g(·,)⊗. Assume that (M,g,J) satisfies the condition of order a > 0. For any admissible frame {E_0,…,E_2n} and any vector fields X and Y, one has: * g(Φ X,Φ Y) = g(X,Y) - g(X,)g(Y,) - g(X,)g(Y,), * Φ(E_0) = 𝒪_g(e^-ar), * Φ(E_j) - _j = 𝒪_g(e^-ar) if j∈{1,…,2n}. The first point is a straightforward computation. To prove the second point, note that Φ() = 0, so that Φ(E_0)_g = Φ(E_0-)_g ⩽E_0-_g. The result follows from Corollary <ref>. Finally, by the very definition of Φ, Φ(E_j)=_j - g(E_j,), and the last point follows from Corollary <ref>. The tensor Φ leaves stable the tangent distribution {,}^⊥. Therefore, one can pull it back through the family of diffeomorphisms (ℰ_r)_r⩾ 0. The family of endomorphisms (φ_r)_r ⩾ 0 is defined by φ_r = ℰ_r^*Φ for r ⩾ 0. Recall that (S_r)_r ⩾ 0 is the family of endomorphisms ℰ_r^*S induced by the shape operator. Assume that (M,g,J) satisfies the and assumption of order a > 1. Then the following estimates hold: * φ_rξ_0^r = 𝒪_g_0(e^-(a-1/2)r). 
* φ_r = 𝒪_g_0(1), * η^0_r∘φ_r = 𝒪_g_0(e^-ar), * γ_r(φ_r·,φ_r·) - γ_r = 𝒪_g_0(e^-(a-1)r), * φ_r S_r - S_r φ_r = 𝒪_g_0(e^-(a-1/2)r) if 1 < a <3/2, 𝒪_g_0((r+1)e^-r) if a = 3/2, 𝒪_g_0(e^-r) if a > 3/2. We first show the first point. From Corollary <ref>, there exists c > 0 such that for r ⩾ 0, φ_rξ_0^r_g_0⩽ c Φ (e^rE_0)_g e^-r/2 = cΦ (E_0)_g e^r/2. The result now follows from Lemma <ref> Let us now focus on the second point. Let v be a vector field on . Corollary <ref> states that there exists c>0 such that φ_rv_g_0⩽ c Φ(Y_v)_g e^-r/2, for all r ⩾ 0. The result follows from the fourth point of Lemma <ref>. For the third point, let v be a vector field on . In an admissible frame, one has Φ(Y_v) = η^0_r(v) e^r Φ(E_0) + e^r/2∑_j=1^2nη^j_r(v) Φ(E_j). It then follows that (η^0_r∘φ_r)(v) = η^0_r(v) g(Φ(E_0),E_0) + e^-r/2∑_j=1^2nη^j_r(v) g(Φ(E_j), E_0). Notice that Φ has range in {}^⊥, so that g(Φ(E_j), E_0)) = g(Φ(E_j), E_0-) for all j∈{0,…,2n}. Recall that Φ_g ⩽ 1 and that E_j_g=1 for all j∈{0,…,2n}. The triangle inequality now yields η^0_r∘φ_r_g_0⩽ (η^0_r_g_0 + e^-r/2∑_j=1^n η^j_r_g_0) E_0-_g for all r ⩾ 0. The result follows from Corollary <ref> (estimates on E_0-) and from Corollary <ref> (uniform bounds on {η^j_r_g_0}_j ∈{0,…,2n}). Let us now consider the fourth point. Let u and v be vector fields on , and fix r ⩾ 0. By Lemma <ref>, one has g_r(φ_ru,φ_rv) = g(Φ Y_u,Φ Y_v) = g(Y_u,Y_v) - g(Y_u,)g(Y_v,). Cauchy-Schwarz inequality now yields g_r(φ_ru,φ_rv) = g_r(u,v) - e^2rη^0_r(u)η^0_r(v) + 𝒪(Y_u_gY_v_gE_0-_g). It follows from Corollaries <ref> and <ref>, and from the very definition of γ_r, that g_r(φ_r·,φ_r·) = e^rγ_r + 𝒪_g_0( e^(2-a)r). Therefore, e^2r(η^0_r∘φ_r)⊗(η^0_r∘φ_r) + e^r γ_r(φ_r·,φ_r·) = e^r γ_r + 𝒪_g_0(e^(2-a)r). From the preceding point, one has e^2r(η^0_r∘φ_r)⊗(η^0_r∘φ_r) = 𝒪_g_0(e^(2-2a)r), from which is deduced that γ_r(φ_r·,φ_r·) = γ_r + 𝒪_g_0(e^-(a-1)r) This concludes the proof of the fourth point. Finally, let us prove the last point. Write S_r = S_r - 1/2( + η^0_r ⊗ξ_0^r) + 1/2( + η^0_r ⊗ξ_0^r), for r ⩾ 0. By the triangle inequality, one has φ_r S_r - S_r φ_r _g_0 ⩽ 2 φ_r_g_0S_r - 1/2( + η^0_r ⊗ξ_0^r)_g_0 +1/2(η^0_r_g_0φ_rξ_0^r_g_0 + η^0_r∘φ_r_g_0ξ_0^r_g_0). The result now follows from uniform bounds on η^0_r_g_0 and ξ_0^r_g_0 (by uniform convergence), the estimates on S_r - 1/2( + η^0_r ⊗ξ_0^r) (Proposition <ref>), and the estimates on φ_r, η^0_r∘φ_r, and φ_r ξ_0^r, given by the three first points. We are now able to prove that the family (φ_r)_r ⩾ 0 converges to a continuous field of endomorphisms, provided that a > 1. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K. Assume that it satisfies the and conditions of order a > 1. Then there exists a continuous field of endomorphisms φ on such that φ_r - φ = 𝒪_g_0(e^-(a-1/2)r) if 1 < a <3/2, 𝒪_g_0((r+1)e^-r) if a = 3/2, 𝒪_g_0(e^-r) if a > 3/2. In addition, φ satisfies: * η^0∘φ = 0 and φξ_0 = 0, * γ(φ·,φ·) = γ, * φ^2 = - + η^0 ⊗ξ_0 and φ^3 = -φ. Let us first show the existence of φ. The proof goes in two steps. We first derive a differential equation for (φ_r)_r ⩾ 0. Let X be a vector field on M̅∖̅ ̅K̅. Then ( J)X = [,JX] - J[,X] = ((JX) - ∇_JX) - J( X - ∇_X) = ( J) X + J X - S(JX) - J X + J(SX) = JSX - SJX + ( J)X. It follows that J = JS - SJ + J. Recall that Φ = π J π, where π = - g(,·)⊗ is the orthogonal projection onto {}^⊥. It is a standard fact that g = 2g(S·,·). Moreover, S = = 0. It follows that π = 0, and consequently, that Φ = π (JS - SJ + J) π. 
Note that the eigenspaces of the projector π are π = and (π - ) = {}^⊥, which are both left stable by the shape operator S. Hence, S commutes with π, from which is derived that that Φ = Φ S - S Φ + π ( J) π. Define ψ_r = ℰ_r^*(π ( J) π), so that one has φ_r = φ_r S_r - S_r φ_r + ψ_r. A direct application of the assumption and Corollary <ref> yields ψ_r= 𝒪_g_0(e^-(a-1/2)r). Therefore, it follows from Lemma <ref> that φ_r = 𝒪_g_0(e^-(a-1/2)r) if 1/2 < a <3/2, 𝒪_g_0((r+1)e^-r) if a = 3/2, 𝒪_g_0(e^-r) if a > 3/2. Consequently, (φ_r)_r ⩾ 0 uniformly converges to some continuous tensor φ, which satisfies the inequality φ_r - φ_g_0 = ∫_r^∞φ_r_g_0⩽∫_r^∞φ_r_g_0 for all r ⩾ 0. This implies estimates (<ref>). Let us now establish the claimed properties satisfied by φ. The first two points are immediate consequences of Lemma <ref>. We thus focus on the last claim. One easily checks that Φ satisfies the equality Φ^2 = - + g(·,) ⊗ + g(·,) ⊗. Hence, one has φ_r^2 = - + η^0_r ⊗ξ_0^r + ϵ_r, for all r ⩾ 0, where ϵ_r = ℰ_r^*(g(·, - E_0) ⊗ + g(·,E_0)⊗ ( - E_0)). As usual, Corollary <ref> yields that ϵ_r_g_0 = 𝒪(e^r/2E_0-_g) = 𝒪(e^-(a-1/2)r), where the last equality is due to Corollary <ref>. The first part of the result now follows from the convergence of (η^0_r)_r ⩾ 0 and of (ξ_0^r)_r⩾ 0 when a > 1. The second part of the claim is a consequence of the first point. Proposition <ref> implies that when a > 1, (,η^0,φ,ξ_0) is an almost contact manifold (see <cit.> for an introduction to this notion). In particular, φ induces an almost complex structure on the distribution of hyperplanes H_0 = η^0. The study conducted in this section finally implies the second of our main Theorems. Let (M,g,J) be a complete, non-compact almost Hermitian manifold of dimension greater than or equal to 4 Assume that M satisfies the and conditions of order a > 1. Let η^0 and γ be given by Theorem <ref>, and let φ be defined as in Proposition <ref>. The restriction J_0= φ|_H_0 of φ to the hyperplane distribution H_0 = η^0 then induces an almost complex structure, and γ^0=γ|_H_0× H_0 is J_0-invariant. § HIGHER REGULARITY This section is dedicated to show that under the stronger conditions and of order a>1, the tensors η^0, γ, and φ defined previously gain in regularity. As a consequence, we highlight a strictly pseudoconvex CR structure related to the expansion of the metric near infinity. §.§ Order one estimates We first provide several estimates that will be useful in the following study. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K. Assume that it satisfies the condition of order a > 1/2. Let u and v be vector fields on . Let V be the parallel transport of v along radial geodesics. Then ∇_Y_u V = 𝒪_g(u_g_0v_g_0 e^r). Since V = 0 and [,Y_u]=0, one has (∇_Y_uV) = -R(,Y_u)V. Hence, Kato's inequality yields | ∇_Y_uV_g | ⩽R_g Y_u_g V_g almost everywhere. Recall that R_g= 𝒪(1) (Remark <ref>) and that V_g = v_g_0. Under the condition of order a > 1/2, one has Y_u_g = 𝒪(u_g_0e^r) (Corollary <ref>). The result follows from a straightforward integration. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K. Assume that it satisfies the and conditions of order a > 1/2. Then ∇_Y_u = 𝒪_g(u_g_0e^r). Write ∇_Y_u = (∇_Y_uJ) + J SY_u. Then ∇_Y_u_g ⩽ (∇ J_g+ S_g) Y_u_g, and the result follows from Lemma <ref>, the assumption and the estimates of Corollary <ref>. 
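The order-one estimates of this section are all obtained by one and the same integration scheme, which we record once and for all (a schematic version with generic constants; the exact exponents vary from one statement to another). If h ⩾ 0 satisfies, almost everywhere,
\[
h'(r) \le \lambda\,h(r) + C e^{\beta r}, \qquad \beta \ge \lambda,
\;\Longrightarrow\;
\big(e^{-\lambda r} h\big)' \le C e^{(\beta-\lambda) r}
\;\Longrightarrow\;
h(r) =
\begin{cases}
\mathcal{O}(e^{\beta r}) & \text{if } \beta > \lambda,\\
\mathcal{O}\big((r+1)e^{\lambda r}\big) & \text{if } \beta = \lambda,
\end{cases}
\]
by direct integration, combined with Grönwall's lemma when the coefficient λ is itself only bounded (as with ‖S‖_g - 1 above). The borderline case β = λ is the source of the (r+1) factors appearing in the critical values of a throughout.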
Assume that (M,g,J) satisfies the and conditions of order a > 1/2. Then ∇_Y_u() = 𝒪_g(u_g_0e^-(a-1)r). Since = 0 and ∇_Y_u = SY_u, it follows that ∇_Y_u( (J)) = ∇_Y_u(( J)) = (∇_Y_u( J)) + ( J) ∇_Y_u = (∇^2_Y_u,J) + (∇_∇_Y_u J) + ( J)∇_Y_u = (∇^2_Y_u,J) + (∇_SY_uJ) + ( J)SY_u. The result follows from Corollary <ref> (estimates on SY_u) and from the assumption. Assume that (M,g,J) satisfies the and conditions of order a > 1/2. Let π be the orthogonal projection onto {}^⊥. For u and v vector fields on , one has: * π((∇_Y_uS)Y_v) = 𝒪_g(u_g_0v_g_0e^3/2r). * π(∇_Y_uY_v) = 𝒪_g((v_g_0+∇^g_0v_g_0)u_g_0e^3/2r). We first consider the first point. By Kato's inequality, and noticing that π = 0, one has the almost everywhere inequality π(∇_Y_uS)Y_v)_g ⩽π( ((∇_Y_uS)Y_u))_g. The shape operator S satisfies the Riccati equation S = -S^2 - R(,·). Moreover, one has π S = S π. Direct computations using the equalities Y_v = SY_v and (SY_v) = -R(,Y_v) now yield (π ((∇_Y_uS)Y_v))) = π SR(,Y_u)Y_v - π R(,Y_u)SY_v - π R(SY_u,Y_v) -π R(,Y_v)SY_u - π (∇_Y_uR)(,Y_v) - S π (∇_Y_uS)Y_v = ℜ - S(π ((∇_Y_uS)Y_v))), where ℜ contains all the curvature terms. From this is deduced the almost everywhere inequality (e^-rπ ((∇_Y_uS)Y_v))_g) ⩽ e^-rℜ_g + (S_g-1) (e^-rπ ((∇_Y_uS)Y_v))_g). After a straightforward integration, Grönwall's Lemma yields e^-rπ ((∇_Y_uS)Y_v))_g ⩽((∇^g_uS)v_g + ∫_0^r e^-sℜ_g s)exp(∫_0^r (S_g-1) s). By tensoriality and compactness of , one has (∇^g_uS)v_g = 𝒪(u_g_0v_g_0). Moreover, Lemma <ref> yields the estimate exp(∫_0^r (S_g-1) s) = 𝒪(1). To conclude, it suffices to show that ℜ = 𝒪_g(u_g_0v_g_0e^3/2r). The assumption of order a > 1/2 yields ℜ = π SR^0(,Y_u)Y_v - π R^0(,Y_u)SY_v - π R^0(SY_u,Y_v) -π R^0(,Y_v)SY_u + 𝒪_g( u_g_0v_g_0e^-(a-2)r). A close look at the definition of R^0 (see equation (<ref>)) shows that the leading terms in ℜ_g are of the form cη^0(u)η^j(v)e^3/2r or cη^0(v)η^j(u)e^3/2r for c a constant and j ∈{1,…,2n}. The result follows. Let us now show the second point. Similarly, Kato's inequality yields the almost everywhere inequality π(∇_Y_uY_v)_g ⩽(π(∇_Y_uY_v))_g. Straightforward computations, using that π = 0, that π and S commute, and that Y_v = SY_v, now yield the equality (π(∇_Y_uY_v)) = -π R(Y_u,Y_v) + π ((∇_Y_uS)Y_u) + S π (∇_Y_uY_v). Hence, one has (e^-rπ(∇_Y_uY_v)_g) ⩽ e^-rπ R(Y_u,Y_v)_g + e^-rπ((∇_Y_uS)Y_v)_g + (S_g-1) (e^-rπ(∇_Y_uY_v)_g) a.e. The rest of the proof goes similarly to that of the first point, using the estimates derived on π((∇_Y_uS)Y_v)_g. The main difference is that the initial data here is not tensorial in v, but instead is π (∇_uv)_g = ∇^g_0_uv_g_0⩽∇^g_0v_g_0u_g_0. If one considers the whole vector field ∇_Y_uY_v instead, then one only has the estimates ∇_Y_uY_v_g = 𝒪((v_g_0+∇^gv_g)u_g_0e^2r). Indeed, the radial component is given by g(∇_Y_uY_v,) = -g(SY_u,Y_v) ≃ -η^0(u)η^0(v)e^2r when η^0(u) and η^0(v) do not vanish. §.§ Regularity of the admissible frames We shall now show that under the and conditions of order a > 1, the vector field e_0, defined in Definition <ref>, is actually of class 𝒞^1. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K. Assume that it satisfies the and conditions of order a > 1. Then the vector field e_0 is of class 𝒞^1; admissible frames can be chosen to have the same regularity. It suffices to show that the 1-form β defined in Section <ref> is of class 𝒞^1. To do so, we shall show that β(v) is a 𝒞^1 function for any 𝒞^1 vector field v. 
We prove this later fact by showing that (u(β_r(v)))_r⩾ 0 uniformly converges for any 𝒞^1 vector fields u and v on . Let u and v be such vector fields, and r ⩾ 0. Then u(β_r(v)) = Y_u(g(,V)) = ∇_Y_u(g(,V)), where V is the parallel transport of v along radial geodesics. Since [,Y_u] = 0 and V = 0, one has (u (β_r(v))) = (∇_Y_u(g(,V))) = ∇_Y_u((g(,V))), so that (u (β_r(v))) = g(∇_Y_u(()),V) + g((),∇_Y_uV). It now follows that one has |(u (β_r(v)))| ⩽∇_Y_uV_g()_g + V_g∇_Y_u(())_g. Recall that S_g = 𝒪(1) (Lemma <ref>), V_g = v_g_0, and Y_u_g = 𝒪(u_g_0e^r) (Corollary <ref>). It now follows from Lemma <ref>, Lemma <ref>, and the assumption, that (u (β_r(v))) = 𝒪(u_g_0v_g_0e^-(a-1)r). Consequently, (u (β_r(v))) uniformly converges for any vector fields u and v. This concludes the proof. It what follows, we will need to differentiate expressions involving ∇_Y_uE_j in the radial direction, with Y_u a normal Jacobi field and E_j an element of an admissible frame. At a first glance, this is a priori justified only if E_j is of class 𝒞^2. One could prove such regularity by requiring the stronger condition ∇^3 J_g = 𝒪(e^-ar). It turns out that one needs not assume this last condition, as a consequence of the fact that E_j is solution to the first order linear differential equation E_j=0. Indeed, let {r,x^1,…,x^2n+1} be Fermi coordinates[That is, {x^1,…,x^2n+1} are coordinates on , and that if (x^1,…,x^2n+1) corresponds to p∈, then (r,x^1,…,x^2n+1) corresponds to ℰ(r,p)∈ M.], and write E_j = ∑_i=1^2n+1E_j^i ∂_i. Then {E_j^i} are solutions to the ODE (E^i_j)' + ∑_k=1^2n+1E_j^kS_k^i = 0, with (S_k^i) the components of the shape operator S. As a consequence, one can consider elements of the form (∇_Y_u E_j) even though E_j is only of class 𝒞^1. In fact, one has (∇_Y_u E_j) = -R(,Y_u)E_j. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K. Assume that it satisfies the and conditions of order a > 1. Let u be a vector field on . Then ∇_Y_u(E_0 - ) = 𝒪_g(u_g_0e^-(a-1)r). Let u be a vector field on , and {E_0,…,E_2n} be an admissible frame of class 𝒞^1. Equation (<ref>) yields that ∇_Y_u(E_0-) = -∑_j=0^2n u(β_r(e_j)) E_j + ∑_j=0^2n (δ_0j - β_r(e_j)) ∇_Y_uE_j. During the proof of Proposition <ref>, we have shown that (β_r)_r ⩾ 0 converges in 𝒞^1 topology. Hence, ∀ j ∈{0,…,2n}, lim_r →∞ u (β_r(e_j)) = u ( lim_r →∞β_r(e_j)) = u(β(e_j)) = u(δ_0j) = 0. Therefore, |u(β_r(e_j))| = |∫_r^∞ (u(β_r(e_j)))| ⩽∫_r^∞ | (u(β_r(e_j)))| for j ∈{0,…,2n} and r ⩾ 0. It follows from equation (<ref>) that u(β_r(e_j)) = 𝒪(u_g_0e^-(a-1)r). Moreover, by Corollary <ref>, one has |δ_0j-β_r(e_j)| = 𝒪(e^-ar). Finally, Lemma <ref> yields ∇_Y_uE_j = 𝒪_g(u_ge^r). The result now follows. §.§ The contact form and the Carnot metric We shall now show that if the and conditions of order a>1 are satisfied, then η^0 and γ|_H_0× H_0 are of class 𝒞^1 and that η^0(·,φ·) = γ. In particular, η^0 is contact. These results are analogous to <cit.>, although we give slightly different and considerably shorter proofs here. The main difference is that we prove the 𝒞^1 convergence of elements of the form (η^j_r(v))_r⩾ 0, instead of 𝒞^0 convergence of elements of the form (ℒ_uη^j_r)_r⩾ 0. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K. Assume that it satisfies the and conditions of order a > 1. Then η^0 is a contact form of class 𝒞^1. Moreover, η^0(·,φ·) = γ, and the Reeb vector field of η^0 is ξ_0. 
The proof is divided in three parts. First, we show that η^0 is of class 𝒞^1. Then we derive an expression for η^0(·,φ·), and deduce that η^0 is contact. Finally, we show that ξ_0 is the Reeb vector field of η^0. To show that η^0 is of class 𝒞^1, we show that for any vector field v, the function η^0(v) is of class 𝒞^1. To do so, we show that for any other vector field u, (u(η^0_r(v)))_r ⩾ 0 uniformly converges on . Let u and v be vector fields on . Let f be the function on M̅∖̅ ̅K̅ defined by the expression f= e^r(u(η^0_r(v)) = Y_u(g(Y_v,E_0)) = ∇_Y_u(g(Y_u,E_0) ). Then f is smooth in the radial direction. Since [,Y_u]=0 and E_0=0, one has f = (∇_Y_u ((g(Y_v,E_0))) = ∇_Y_u( (g(Y_v,E_0))) = ∇_Y_u(g( Y_v,E_0)). Similarly, one has ^2f = ∇_Y_u(g(( Y_v),E_0)). For Y_v is a Jacobi field, one has the equality ( Y_v) = -R(,Y_v), and it follows that ^2f = -∇_Y_u(R(,Y_v,,E_0)). Notice that R(,Y_v,,E_0) = R(,Y_v,,) + R(,Y_v,,E_0-) = R^0(,Y_v,,) + R(,Y_v,,E_0-) + (R-R^0)(,Y_v,,). One readily checks from the definition of R^0 that R^0(,Y_v,,) = -g(Y_v,), so that R^0(,Y_v,,) = -g(Y_v,E_0) - g(Y_v, - E_0). Hence, it follows that ^2f - f = g(∇_Y_uY_v, -E_0) + g(Y_v,∇_Y_u(-E_0)) - (∇_Y_uR)(,Y_v,,E_0-) - R(SY_u,Y_v,,E_0-) - R(,∇_Y_uY_u,,E_0-) - R(,Y_v,SY_u,E_0-) - R(,Y_v,,∇_Y_u(E_0-)) - (∇_Y_u(R-R^0))(,Y_v,,) - (R-R^0)(SY_u,Y_v,,) - (R-R^0)(,∇_Y_uY_v,,) - (R-R^0)(,Y_v,SY_u,) - (R-R^0)(,Y_v,,∇_Y_u). Note that the radial part of ∇_Y_uY_v plays no role here due to the symmetries of the Riemann curvature tensor, so that one can substitute ∇_Y_uY_v with π(∇_Y_uY_v) in this latter expression. Recall that one has the following estimates: * R, S = 𝒪_g(1) (Remark <ref> and Lemma <ref>), * R-R^0,∇ R, ∇(R-R^0) = 𝒪_g(e^-ar) (condition and Remark <ref>), * E_0- = 𝒪_g(e^-ar) (Corollary <ref>), * Y_u,Y_v = 𝒪_g(u_g_0e^r) (Corollary <ref>), * ∇_Y_u = 𝒪_g(u_g_0e^r) (Lemma <ref>), * π(∇_Y_uY_v) = 𝒪_g((v_g_0+∇^g_0v_g_0)u_g_0e^3/2r) (Lemma <ref>), * ∇_Y_u(E_0-) = 𝒪_g(u_g_0e^-(a-1)r) (Corollary <ref>). Hence, the triangle inequality yields ^2f - f = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^(2-a)r). Define h = f - f, and notice that h + h = ^2f - f. It now follows from equation (<ref>) that (e^rh) = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^(3-a)r). Therefore, one has e^rh = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^(3-a)r) if 1 < a < 3, 𝒪((v_g_0+∇^g_0v_g_0)u_g_0(r+1)) if a=3, 𝒪((v_g_0+∇^g_0v_g_0)u_g_0) if a > 3. Notice that e^-rh = (e^-rf) = (u(η^0_r(v)) ). Hence, (u(η^0_r(v)) ) = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-(a-1)r) if 1 < a < 3, 𝒪((v_g_0+∇^g_0v_g_0)u_g_0(r+1)e^-2r) if a=3, 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-2r) if a > 3. Consequently, (u(η^0_r(v)))_r⩾ 0 uniformly converges as r→∞, and η^0 is then of class 𝒞^1. We shall now derive an expression for η^0(·,φ·), by computing the limit of η^0_r(·,φ_r·) as r →∞. Let u and v be vector fields on . For r ⩾ 0, it holds that η^0_r(u,φ_rv) = u(η^0_r(φ_rv)) - (φ_rv)(η^0_r(u)) - η^0_r([u,φ_rv]) = e^-r( Y_u g(Φ Y_v,E_0) - (Φ Y_v)g(Y_u,E_0) - g([Y_u,Φ Y_v],E_0) ) = e^-r(g(Φ Y_v,∇_Y_uE_0) - g(Y_u,∇_Φ Y_vE_0)). On the one hand, it holds that g(Φ Y_v,∇_Y_uE_0) = g(Φ Y_v,∇_Y_u) + g(Φ Y_v,∇_Y_u(E_0-)) = g(Φ Y_v,JSY_u) + g(Φ Y_v,(∇_Y_uJ))+ g(Φ Y_v,∇_Y_u(E_0-)) = -g(JΦ Y_v,SY_u) + g(Φ Y_v,(∇_Y_uJ))+ g(Φ Y_v,∇_Y_u(E_0-)). On the other hand, one has g(Y_u,∇_Φ Y_vE_0) = g(Y_u,∇_Φ Y_v) + g(Y_u,∇_Φ Y_v(E_0-)) = g(Y_u,JSΦ Y_v) + g(Y_u, (∇_Φ Y_vJ)) + g(Y_u,∇_Φ Y_v(E_0-)) = -g(JY_u,SΦ Y_v) + g(Y_u, (∇_Φ Y_vJ)) + g(Y_u,∇_Φ Y_v(E_0-)). 
It then follows from the assumption, Corollary <ref> and Corollary <ref> that η^0_r(u,φ_rv) = e^-r(g(JY_u,SΦ Y_v) - g(JΦ Y_v,SY_u)) + 𝒪(u_g_0v_g_0e^-(a-1)r). Fix {E_0,…,E_2n} an admissible frame. From Corollary <ref> and Corollary <ref>, one has the estimate Y_v = η^0(v) e^r + ∑_j=1^2nη^j(v)e^r/2E_j + 𝒪_g(v_g_0e^-(a-1)r). It now follows from Lemma <ref> that JΦ Y_v = -∑_j=1^2nη^j(v) e^r/2 E_j + 𝒪_g(v_g_0e^-(a-1)r). Corollary <ref> now yields g(JΦ Y_v,SY_u) = -e^r/2∑_j=1^2nη^j(v)η^j(u) + 𝒪(u_g_0v_g_0e^-(a-2)r). Similarly, one shows that g(JY_u,SΦ Y_v) = e^r/2∑_j=1^2nη^j(u)η^j(v) + 𝒪(u_g_0v_g_0e^-(a-2)r). Recall the local expression γ = ∑_j=1^2nη^j⊗η^j. Equations (<ref>), (<ref>) and (<ref>) now yield η^0_r(u,φ_rv) = γ(u,v) + 𝒪(u_g_0v_g_0e^-(a-1)r). By uniform convergence of the first derivatives of (η^0_r)_r⩾ 0, it follows that η^0(·,φ·) = γ. Proposition <ref> hence shows that η^0 is non-degenerate on η^0. In particular, η^0 is a contact form. To conclude, let us show that ξ_0 is the Reeb vector field of η^0. Since η^0(ξ_0) = 1, it remains to show that η^0(ξ_0,v) = 0 for all vector field v tangent to H_0. Let v be such a vector field. The image of φ being exactly H_0, there exists a vector field u on such that v = φ u. By Proposition <ref>, γ is φ-invariant and φξ_0=0. From the preceding point, η^0(·,φ·) = γ. Hence, η^0(ξ_0,v) = η^0(ξ_0,φ u) = γ(ξ_0,u) = γ(φξ_0,φ u) = γ(0,φ u) = 0. This concludes the proof. Under the assumptions of Theorem <ref>, the distribution H_0 = η^0 is a contact distribution of class 𝒞^1. The next result shows that under the assumptions of Theorem <ref>, the Carnot metric γ^0 on H_0 is of the same regularity. The proof is very similar. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K. Assume that it satisfies the and conditions of order a > 1. Then γ^0 = γ|_H_0× H_0 is of class 𝒞^1. Let {E_0,…,E_2n} be an admissible frame of class 𝒞^1 defined on a cone E(_+× U), and fix j∈{1,…,2n}. Let us first show that η^j is of class 𝒞^1 on the distribution H_0|_U. To do so, we shall prove that (u(η^j_r(v)))_r ⩾ 0 locally uniformly converges on U for v tangent to H_0|_U and u any vector field on U. Let u and v be such vector fields, and r ⩾ 0 be fixed. Let f^j = e^r/2 u(η^j_r(v)) = ∇_Y_u(g(Y_v,E_j)), which is smooth in the radial direction. Since [,Y_u] = 0 and E_j = 0, one has ^2 f^j = ((∇_Y_u(g(Y_v,E_j)))) = ∇_Y_u g(( Y_v),E_j), and, for Y_v is a Jacobi field, one has ^2f^j = - ∇_Y_u(R(,Y_v,,E_j)). One checks from the very definition of R^0 that R^0(,Y_v,,E_j) = -1/4g(Y_v,E_j) - 3/4g(Y_v,)g(E_j,). Therefore, one has the equality ^2f^j - 1/4f^j = 3/4g(∇_Y_uY_v,)g(E_j,) + 3/4g(Y_v,∇_Y_u)g(E_j,) + 3/4g(Y_v,)g(∇_Y_uE_j,) + 3/4g(Y_v,)g(E_j,∇_Y_u) - ∇_Y_u(R-R^0)(,Y_v,,E_j) - (R-R^0)(SY_u,Y_v,,E_j) - (R-R^0)(,∇_Y_uY_v,,E_j) - (R-R^0)(,Y_v,SY_u,E_j) - (R-R^0)(,Y_v,,∇_Y_uE_j). As in the proof of Theorem <ref>, the radial component of ∇_Y_uY_v plays no role due to the symmetries of R, so that one can substitute this term with π(∇_Y_uY_v). Moreover, g(E_j,) = β_r(e_j), where (β_r)_r ⩾ 0 is the family defined in Section <ref>. 
Recall that one has the following estimates: * R, S = 𝒪_g(1) (Remark <ref> and Lemma <ref>), * R-R^0,∇ (R-R^0) = 𝒪_g(e^-ar), (condition and Remark <ref>), * β_r(e_j) = 𝒪(e^-ar) (Corollary <ref>), * Y_u = 𝒪_g(u_g_0e^r) and Y_v = 𝒪_g(v_g_0e^r/2) (Corollary <ref>), * ∇_Y_uE_j = 𝒪_g(u_g_0e^r) (Lemma <ref>), * ∇_Y_u = 𝒪_g(u_g_0e^r) (Lemma <ref>), * π(∇_Y_uY_v) = 𝒪_g((∇^g_0u_g_0 + u_g_0)v_g_0e^3/2r) (Lemma <ref>). It follows from the triangle inequality that ^2 f^j - 1/4f^j = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-(a-3/2)r). Let h^j be the function defined by h^j = f^j - 1/2f^j. Then h^j + 1/2h^j = ^2f^j - 1/4f^j, from which is derived that (e^r/2h^j) = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-(a-2)r). A straightforward integration now yields e^r/2h^j = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^(2-a)r) if 1 < a < 2, 𝒪((v_g_0+∇^g_0v_g_0)u_g_0(r+1)) if a = 2, 𝒪((v_g_0+∇^g_0v_g_0)u_g_0) if a > 2. Notice that e^-r/2h^j = (e^-r/2f^j) = ( u(η^j_r(v))), from which is deduced that ( u (η^j_r(v))) = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-(a-1)r) if 1 < a < 2, 𝒪((v_g_0+∇^g_0v_g_0)u_g_0(r+1)e^-r) if a = 2, 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-r) if a > 2. In any case, ( u(η^j_r(v)))_r⩾ 0 locally uniformly converges. As a consequence, η^j|_H_0|_U is of class 𝒞^1. We immediately deduce from the local expression γ = ∑_j=1^2nη^j⊗η^j that γ^0=γ|_H_0× H_0 is of class 𝒞^1. This concludes the proof. With the stronger assumption a > 3/2, the same proof shows that for j∈{1,…,2n}, η^j is of class 𝒞^1 in all directions, and so is γ. Indeed, in this case, on has to consider the estimate Y_v = 𝒪_g(v_g_0e^r) instead. §.§ The almost complex structure We shall now show that the almost complex structure J_0 defined on the 𝒞^1 distribution H_0 is of the same regularity, and that it is formally integrable. We first remark that the local vector fields {ξ_1,…,ξ_2n} are of class 𝒞^1, although the Reeb vector field ξ_0 might only be continuous. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K. Assume that (M,g,J) satisfies the and conditions of order a > 1. Let {η^0,…,η^2n} be the local coframe associated to any admissible frame {E_0,…,E_2n}. Let {ξ_0,ξ_1,…,ξ_2n} be its dual frame. Then for j∈{1,…,2n}, ξ_j is a vector field of class 𝒞^1. Throughout the proof of Theorem <ref>, we have shown that {η^1,…,η^2n} is a 𝒞^1 trivialisation of the 𝒞^1 vector bundle (H_0,). Consequently, {ξ_1,…,ξ_2n} is a 𝒞^1 trivialisation of the vector bundle H_0. We now show that under the condition of order a > 0, admissible frames can almost be chosen to be J-frames, in the following sense. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, and with essential subset K. Assume that it satisfies the condition of order a > 0. Then there exists an admissible frame {E_0,…,E_2n} such that ∀ j ∈{1,…,n}, _2j-1 - E_2j = 𝒪_g(e^-ar). Let U⊂ be an open domain on which H_0 is trivialisable. Let e_1 be a unit section of H_0|_U of class 𝒞^1, and let E_1 be its parallel transport along radial geodesics. Consider the family of 1-forms β^1_r H_0|_U → defined by β^1_r(v) = g(V, _1)|__r, where V is the parallel transport of v along radial geodesics. The same study than that conducted for the proofs of Lemma <ref> and Proposition <ref> shows that under the condition of order a >1, there exists a nowhere vanishing 1-form β^1 on U, which is of class 𝒞^1, such that β^1_r - β^1_g_0 =𝒪(e^-ar). Let e_2 be the unique 𝒞^1 section of H_0|_U such that e_2 ⊥^g_0β^1, e_2_g_0 = 1 and β^1(e_2) > 0. 
Define E_2 to be its parallel transport along radial geodesics. Similarly to Corollary <ref>, one shows that E_2-_1 = 𝒪_g(e^-ar). The rest of the proof follows by induction. We refer to such an admissible frame as a J-admissible frame. We are now able to show the last Theorem of this section, exhibiting a strictly pseudoconvex CR structure at infinity. Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at last 4, with essential subset K. Assume that it satisfies the and condition of order a > 1. Let J_0 be the almost complex structure on H_0 induced by φ. Then J_0 is of class 𝒞^1, and is formally integrable. In particular, (,H_0,J_0) is a strictly pseudoconvex CR manifold of class 𝒞^1. Let {E_0,…,E_2n} be a J-admissible frame of class 𝒞^1, and {η^1,…,η^2n} and {ξ_1,…,ξ_2n} be the associated 𝒞^1 coframe and frame. Then {,E_0,…,E_2n} is an orthonormal frame. Since Φ() = Φ()= 0, one has Φ = ∑_j=0^2n g(·,E_j)⊗Φ(E_j). It then follows from Lemma <ref> and Lemma <ref> that Φ = ∑_j=1^n g(·,E_2j-1)⊗ E_2j - g(·,E_2j)⊗ E_2j-1 + 𝒪_g(e^-ar). Corollary <ref> now yields φ_r = ∑_j=1^nη^2j-1_r⊗ξ_2j^r - η^2j_r⊗ξ_2j-1^r + 𝒪_g_0(e^-(a-1/2)r). Taking the limit as r→∞ shows that φ = ∑_j=1^n η^2j-1⊗ξ_2j - η^2j⊗ξ_2j-1. Therefore, the restriction J_0= φ|_H_0 has at least the same regularity as {η^1|_H_0,…,η^2n|_H_0} and {ξ_1,…,ξ_2n}. It follows from Theorem <ref> and Lemma <ref> that J_0 is of class 𝒞^1. Let us now show that J_0 is formally integrable. Recall that γ|_H_0× H_0 is J_0-invariant, so that by <cit.>, it suffices to show that N_φ|_H_0× H_0 = η^0|_H_0× H_0⊗ξ_0, where N_A stands for the Nijenhuis tensor of the field of endomorphisms A, defined by N_A(X,Y) = -A^2[X,Y] - [A X,AY] + A[A X,Y] + A[X,A Y]. Let u and v be any vector fields on . Using the fact that ∇ is torsion-free, one first obtains N_Φ(Y_u,Y_v) = Φ(∇_Y_uΦ)Y_v - (∇_Φ Y_uΦ) Y_v - Φ(∇_Y_vΦ)Y_u + (∇_Φ Y_vΦ) Y_u. Recall that Φ = J - g(·,)⊗ + g(·,)⊗. Since ∇ g = 0, ∇ = S, Φ() = Φ()=0 and Y_u,Y_v ⊥, one has Φ(∇_Y_uΦ)Y_v = g(Y_v,)Φ(SY_u) + Φ(∇_Y_u J)Y_v, (∇_Φ Y_uΦ)Y_v = -g(Y_v,SΦ Y_u) + g(Y_v,JSΦ Y_u) + g(Y_v,)SΦ Y_u +(∇_Φ Y_uJ)Y_v - g(Y_v,(∇_Φ Y_uJ)), Φ(∇_Y_vΦ)Y_u = g(Y_u,)Φ(SY_v) + Φ(∇_Y_v J)Y_u, and (∇_Φ Y_vΦ)Y_u = -g(Y_u,SΦ Y_v) + g(Y_u,JSΦ Y_v) + g(Y_u,)SΦ Y_v + (∇_Φ Y_vJ)Y_u - g(Y_u,(∇_Φ Y_vJ)). Recall that Φ takes values in the distribution {}^⊥, which is involutive as the tangent field to the foliation (_r)_r ⩾ 0 of M̅∖̅ ̅K̅. The definition of the Nijenhuis tensor then shows that N_Φ has range in {}^⊥. Hence, the terms in the radial direction cancel out each others, and the remaining terms yield N_ϕ(Y_u,Y_v) = (g(Y_v,SΦ Y_u) - g(Y_u,SΦ Y_v)) + g(Y_v,)(Φ S Y_u - S Φ Y_u) - g(Y_u,)(Φ S Y_v - S Φ Y_v) + Φ((∇_Y_uJ)Y_v - (∇_Y_vJ)Y_u) - π((∇_Φ Y_uJ)Y_v) + π((∇_Φ Y_vJ)Y_u) = (g(Y_v,SΦ Y_u) - g(Y_u,SΦ Y_v))E_0 + g(Y_v,E_0)(Φ S Y_u - S Φ Y_u) - g(Y_u,E_0)(Φ S Y_v - S Φ Y_v) + (g(Y_v,SΦ Y_u) - g(Y_u,SΦ Y_v))(-E_0) + g(Y_v,-E_0)(Φ S Y_u - S Φ Y_u) - g(Y_u,-E_0)(Φ S Y_v - S Φ Y_v) + Φ((∇_Y_uJ)Y_v - (∇_Y_vJ)Y_u) - π((∇_Φ Y_uJ)Y_v) + π((∇_Φ Y_vJ)Y_u), where π is the orthogonal projection onto {}^⊥. From now, and until the rest of the proof, we assume that u and v are tangent to H_0. Let r ⩾ 0, and note that N_φ_r = ℰ_r^* N_Φ. 
The condition, the uniform bound on S_g (Lemma <ref>), estimates on E_0- (Corollary <ref>), estimates on Y_u and Y_v (Corollary <ref>), comparison between g_0 and g_r (Corollary <ref>), and estimates on φ_r S_r - S_r φ_r (Lemma <ref>), now yield the existence of α_1 > 0, depending on a, such that N_φ_r(u,v) = e^-r(g(Y_v,SΦ Y_u) - g(Y_u,SΦ Y_v))ξ_0^r + 𝒪_g_0(u_g_0v_g_0e^-α_1 r). Similar calculations that the ones conducted to derive an expression for η^0_r(u,φ_rv) (see the proof of Theorem <ref>) show that there exists α_2 > 0 depending on a with e^-r(g(Y_v,SΦ Y_u) - g(Y_u,SΦ Y_v)) = η^0(u,v) + 𝒪(u_g_0v_g_0e^-α_2 r). The 𝒞^1 convergence of (φ_r|_H_0)_r ⩾ 0 to φ|_H_0, and the 𝒞^0 convergence of (ξ_0^r)_r ⩾ 0 to ξ_0 finally imply that N_φ|_H_0 × H_0 = lim_r→∞ N_φ_r|_H_0 × H_0 = η^0|_H_0× H_0⊗ξ_0. Consequently, J_0 is formally integrable. The associated Levi-form η^0|_H_0× H_0(·,J_0·) coincides with γ|_H_0× H_0, and is thus positive definite. Ultimately, (,H_0,J_0) is a strictly pseudoconvex CR manifold, which concludes the proof. If M has dimension 4, then J_0 is an almost complex structure of class 𝒞^1 defined on a 2-dimensional vector bundle. Its integrability is automatic in this specific case. Similarly to Remark <ref>, under the stronger assumption a > 3/2, one shows that φ is of class 𝒞^1 in all directions. § THE COMPACTIFICATION We conclude this paper by showing our main Theorem. We first give a construction for M̅. Fix K an essential subset and E its normal exponential map. Let M(∞) be the visual boundary of (M,g), which is the set of equivalent classes [σ] of untrapped unit speed geodesic rays σ, where two rays σ_1 and σ_2 are equivalent if and only if the function t⩾ 0 ↦ d_g(σ_1(t),σ_2(t)) is bounded. By <cit.>, is in bijection with M(∞) by the map p ↦ [E(·,p)]. Define M̅ = M ∪ M(∞). The following map [ ℰ̅ [0,1) × ⟶ M̅∖ K; (ρ, p) ⟼ ℰ(-lnρ, p) ∈ M∖ K if ρ > 0, [ℰ(·,p)] ∈ M(∞) if ρ = 0, ] is thus a bijection. We endow M̅ with the structure of a compact manifold with boundary through this latter bijection. This identifies M with the interior of M̅. Note that if ρ > 0, then r = -lnρ is the distance to K for g in M. A compactly supported modification of ρ in a neighbourhood of K in M provides a smooth defining function for the boundary ∂M̅ = M(∞). By abuse of notation, we still denote it ρ. Let η^0 be the contact form and γ be the Carnot metric given by Theorem <ref>. Let H_0 be the associated contact distribution, and let J_0 be the integrable almost complex structure on H_0 given by Theorem <ref>. We see these objects as defined on ∂M̅ through the diffeomorphism E̅(0,·) {0}×→∂M̅. Then (∂M̅,H_0,J_0) is a strictly pseudoconvex CR manifold of class 𝒞^1 by Theorem <ref>. Theorem <ref> and Remark <ref> show that the metric g has the desired asymptotic expansion (<ref>) near the boundary ∂M̅ = ρ^-1({0}). Let us show that H_0 and J_0 are induced by a continuous ambient almost complex structure J̅. To that end, we show that J extends continuously to the boundary. Let {E_0,…,E_2n} be a J-admissible frame on a cone E(_+× U), and consider the frame {-∂_ρ, ξ̅_0,…,ξ̅_2n} on E̅((0,1)× U) defined by ξ̅_0 = E̅^*(ρ^-1E_0) and ξ̅_j = E̅^*(ρ^-1/2E_j) for j∈{1,…,2n}. Notice that -∂_ρ = e^r on M∖ K. Proposition <ref> and Remark <ref> show that {ξ̅_0,…,ξ̅_2n} extends continuously on the boundary E̅({0}× U), with limit {ξ_0,…,ξ_2n}. The tangent bundle of M̅ at the boundary splits as TM̅|_∂M̅ = ∂_ρ⊕ T∂M̅ =∂_ρ⊕ξ_0 ⊕ H_0. 
From the very definition of a J-admissible frame, one has J(e^r ) - e^r E_0, J(e^r E_0) + e^r = 𝒪_g(e^-(a-1)r), J(e^r/2E_2j-1) - e^r/2E_2j, J(e^r/2E_2j) + e^r/2E_2j-1 = 𝒪_g(e^-(a-1/2)r), j∈{1,…, n}. It follows that in the continuous frame {-∂_ρ,ξ̅_0,…,ξ̅_2n}, the matrix of J reads ([ 0 -1 1 - 0 0; 0 ⋱ 0 -1 1 - 0 ]) + ([ 𝒪(ρ^a) 𝒪(ρ^a+1/2); ; 𝒪(ρ^a-1/2) a 𝒪(ρ^a) ]) , where the top left block is of size 2× 2 and the bottom right block is of size 2n × 2n. Hence, J extends uniquely as a continuous almost complex structure J̅ up to boundary. In addition, J̅ satisfies J̅(-∂_ρ) = ξ_0, J̅ξ_0 = ∂_ρ, J̅ξ_2j-1 = ξ_2j, and J̅ξ_2j = -ξ_2j-1, j∈{1,…,2n}. It follows that J̅|_H_0 = J_0, and that H_0 = (T∂M̅)∩(J̅T∂M̅). This concludes the proof. Under the stronger assumption that a > 3/2, one can show that J̅ is of class 𝒞^1 up to the boundary in all directions (see Remark <ref>). When (M,g,J) is Kähler, (that is, if ∇ J = 0), then (M̅,J̅) is a compact complex manifold with strictly pseudoconvex CR boundary.
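As a consistency check (a sketch, with the curvature normalisation chosen so that the factors e^2r and e^r are exact), one may take for M the complex hyperbolic space ℂH^{n+1} itself and for K a closed geodesic ball: in horospherical coordinates the metric reads
\[
g = dr^2 + e^{2r}\,\eta_{\mathbb{H}} \otimes \eta_{\mathbb{H}} + e^{r}\,\gamma_{\mathbb{H}},
\]
where η_ℍ is the standard contact form of the Heisenberg group ℍ^{2n+1} and γ_ℍ a flat metric on its kernel distribution. Both decay conditions then hold for every order a, the error term ε_r vanishes identically, and the construction above returns η^0 = η_ℍ, γ = γ_ℍ and J_0 the standard strictly pseudoconvex CR structure of ℍ^{2n+1}, that is, the usual CR boundary at infinity of the complex hyperbolic space minus a point.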
http://arxiv.org/abs/2307.07242v1
20230714093406
Antenna Selection With Beam Squint Compensation for Integrated Sensing and Communications
[ "Ahmet M. Elbir", "Asmaa Abdallah", "Abdulkadir Celik", "Ahmed M. Eltawil" ]
eess.SP
[ "eess.SP", "cs.IT", "math.IT" ]
Antenna Selection With Beam Squint Compensation for Integrated Sensing and Communications
Ahmet M. Elbir, Senior Member, IEEE, Asmaa Abdallah, Member, IEEE, Abdulkadir Celik, Senior Member, IEEE, and Ahmed M. Eltawil, Senior Member, IEEE
A. M. Elbir, A. Abdallah, A. Celik and A. M. Eltawil are with King Abdullah University of Science and Technology, Thuwal 23955, Saudi Arabia (e-mail: [email protected], [email protected], [email protected], [email protected]).
August 12, 2023
=========================================================================================================================================================================================================================================================================================================================
Next-generation wireless networks strive for higher communication rates, ultra-low latency, seamless connectivity, and high-resolution sensing capabilities. To meet these demands, terahertz (THz)-band signal processing is envisioned as a key technology offering wide bandwidth and sub-millimeter wavelength. Furthermore, the THz integrated sensing and communications (ISAC) paradigm has emerged to jointly access the spectrum and reduce hardware costs through a unified platform. To address the challenges in THz propagation, THz-ISAC systems employ extremely large antenna arrays to improve the beamforming gain for communications with high data rates and sensing with high resolution. However, the cost and power consumption of implementing fully digital beamformers are prohibitive. While hybrid analog/digital beamforming can be a potential solution, the use of subcarrier-independent analog beamformers leads to the beam-squint phenomenon, wherein different subcarriers observe distinct directions because the same analog beamformer is adopted across all subcarriers. In this paper, we develop a sparse array architecture for THz-ISAC with hybrid beamforming to provide a cost-effective solution. We analyze the antenna selection problem under beam-squint influence and introduce a manifold optimization approach for hybrid beamforming design. To reduce computational and memory costs, we propose novel algorithms leveraging grouped subarrays, quantized performance metrics, and sequential optimization. These approaches yield a significant reduction in the number of possible subarray configurations, which enables us to devise a classification-based neural network to accurately perform antenna selection. Numerical simulations show that the proposed approach exhibits up to 95% lower complexity for large antenna arrays while maintaining satisfactory communications performance with approximately 6% loss in the achievable rate. Antenna selection, integrated sensing and communications, massive MIMO, terahertz, machine learning. § INTRODUCTION The escalating demand for wireless communications and radar systems has engendered a scarcity of available frequency bands, resulting in pervasive overcrowding and spectrum congestion <cit.>. To combat this predicament, specialized techniques such as carrier aggregation and spectrum stitching have been harnessed in communications systems to efficiently utilize the spectrum <cit.>. However, the application of these techniques to radar systems poses formidable challenges in achieving meticulous phase synchronization <cit.>.
Consequently, it is crucial to cultivate approaches that enable the simultaneous and opportunistic operation within the same frequency bands, thus benefiting from both radar sensing and communications functionalities on a shared hardware platform. Therefore, there has been a significant interest focused on the development of strategies for integrated sensing and communications (ISAC) setups, aiming to jointly access the scarce spectrum in a mutually advantageous manner <cit.>. The earlier ISAC designs utilize distinct hardware platforms to carry out sensing and communications (S&C) functions within the same frequency bands. These designs employed various techniques to mitigate interference between the two domains. Broadly, the ISAC systems are categorized into two primary groups: radar-communications coexistence (RCC) and dual-functional radar-communications (DFRC) <cit.>. While RCC focuses on managing interference and sharing resources between S&C tasks, enabling them to operate without significant mutual disruption, DFRC aims to consolidate both tasks onto a common platform, resulting in the convergence of ISAC design <cit.>. The necessity for a unified hardware platform becomes increasingly imperative as the integration of communications and sensing capabilities continues to advance in various applications, such as vehicle-to-everything (V2X) communication, indoor localization, radio frequency (RF) tagging, extended/virtual reality, unmanned aerial vehicles (UAVs), and intelligent reflecting surfaces (IRSs) <cit.>. The terahertz (THz) band (0.1-10 THz) has emerged as a promising technology to meet the sixth-generation (6G) wireless networks' ambitious performance goals on enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable low-latency communication (URLLC) <cit.>. As the spectrum allocation beyond 100 GHz is underway, there is a surge of research activity in ISAC to develop system architectures that can simultaneously achieve high-resolution sensing and high data rate communication at both upper millimeter-wave (mmWave) frequencies and low THz frequencies <cit.>. To meet the aforementioned diverse functionalities, the THz-ISAC system encounters several notable challenges, including but not limited to severe path loss resulting from spreading loss and molecular absorption, limited transmission range, and beam-squint caused by the ultra-wide bandwidth <cit.>. These challenges significantly impact the performance of both S&C aspects through: 1) the high path loss leads to extremely low signal-to-noise ratio (SNR) at both radar and communications receivers; 2) the Doppler shift accompanied by high range sidelobes can trigger false alarms during sensing and introduce inter-carrier interference to the communication systems, and 3) the beam-squint effects cause deviations in the generated beams across different subcarriers, thereby diminishing the array gain and subsequently reducing the spectral efficiency (SE) for communication and the accuracy of direction-finding (DF) for sensing purposes. To address these challenges, the implementation of ISAC concept in massive multiple-input multiple-output (MIMO) systems necessitates large antenna arrays at the base station (BS) to achieve significant beamforming gain <cit.>. Conversely, massive MIMO systems are also designed with fewer radio frequency (RF) chains to minimize hardware costs. 
This creates a motivation to develop an efficient ISAC design that carefully balances the system complexity in dealing with challenges such as path loss and beam-squint, while considering the cost implications associated with large arrays. To reduce this cost, antenna selection is an attractive solution that can select only a high quality subset of antennas to connect the reduced number of RF chains <cit.>. By maintaining a large portion of the same array aperture with fewer antenna elements connected to limited number of RF chains, antenna selection-based systems can achieve a comparable resolution while reducing size, weight, and power-cost (SWAP-C) by utilizing antenna selection diversity <cit.>. Nonetheless, removing an element from the antenna array raises the sidelobe levels in the antenna beampattern. This may introduce the ambiguity of resolving target directions in radar systems and the larger interference to other users in communications. Thus, antenna selection in ISAC systems is even more challenging than communications-only and sensing-only systems, and it should take into account the prior information and the constraints related to both S&C functionalities. To strike a good balance between SE and system complexity, massive MIMO systems employ wideband signal processing, wherein the transceiver architecture is composed of subcarrier-dependent (SD) baseband and subcarrier-independent (SI) analog beamformers. While the SD baseband processing can be carried out on a single hardware platform, the realization of the analog beamformer requires the implementation of phase shifter networks whose size is proportional to the number of subcarriers. As a result, the SI design enjoys a significant hardware simplicity and cost efficiency compared to the SD analog beamformers <cit.>. When the analog beamformers are SI, its design is only constrained by a single (sub-)carrier frequency <cit.>. Therefore, the beams generated across the subcarriers point to different directions causing beam-squint phenomenon <cit.>. The existing techniques to compensate for the impact of beam-squint mostly employ additional hardware components, e.g., time-delayer (TD) networks <cit.> and SD phase shifter networks <cit.> to virtually realize SD analog beamformers. However, these approaches are inefficient in terms of cost and power <cit.>. Beam-squint also occurs in transceivers with antenna selection. Fig. <ref> illustrates accurate and inaccurate antenna selection configurations for various ISAC scenarios. Although a codebook of selected antennas for different scenarios is available, it should take into account the impact of beam-squint. Otherwise, the squinted beams from the subarray point to different directions may cause inaccurate target detection/estimation for sensing as well as a significant loss in communications rate. §.§ Related Work Recent research includes several works separately on beam-squint compensation <cit.> and sparse array design for communications <cit.>, sensing <cit.> as well as ISAC <cit.>. However, the impact of beam-squint on sparse array design has not been considered in the relevant literature. Because of beam-squint, the performance metric, e.g., SE <cit.>, channel gain <cit.> and Cramér-Rao bound (CRB) <cit.>, receive antenna power <cit.>, becomes miscalculated during antenna selection. Specifically, beam-squint corrupts the array gain and it causes false (deviated) peaks in the spatial domain due to the angular deviations in the generated beams. 
Therefore, if beam-squint is not compensated properly, the subarray corresponding to the false peaks may differ from the optimum subarray. Without examining the impact of beam-squint, antenna selection in the ISAC scenario is considered in several works <cit.>. In particular, antenna selection is performed in <cit.> and <cit.> by employing the CRB of the target parameter estimation as the performance metric. In <cit.> and <cit.>, the authors introduce various ISAC architectures with sparse arrays for a single-user configuration. Also, in <cit.>, a multi-user, single-target scenario is considered, wherein a fully digital beamforming approach is proposed. Different from the aforementioned model-based techniques, a deep learning (DL)-based approach is proposed in <cit.>, wherein analog-only beamforming for transmit antenna selection in the ISAC scenario is considered. Furthermore, most of the antenna selection strategies for ISAC consider either analog-only <cit.> or digital-only <cit.> beamformer design, without considering hybrid analog/digital beamforming. Although CRB-based antenna selection and hybrid beamforming are considered in <cit.>, the analog and digital beamformers are not optimized. Besides, DL-based joint hybrid beamforming and antenna selection is studied in various recent works with different settings, e.g., unsupervised learning <cit.>, online learning <cit.>, a quantized learning model <cit.>, and graph learning models <cit.>. However, these works are limited to the communications-only scenario and do not consider the impact of beam-squint in wideband systems. §.§ Contributions In this work, we investigate the impact of beam-squint on antenna selection for THz-ISAC hybrid beamforming. The computational complexity of the antenna selection problem is high due to its combinatorial nature, and it is addressed by the proposed low-complexity heuristic solutions that reduce the number of subarray candidates. The performance metric for designing the ISAC hybrid beamformers is the SE of the selected subarray. The main contributions of this work are summarized as follows: * To design the ISAC hybrid beamformers, we propose a manifold optimization-based approach that incorporates beam-squint compensation (BSC). Unlike recent works on hybrid beamforming with manifold optimization, our approach incorporates wideband processing with BSC. * We devise low-complexity algorithms for antenna selection. In particular, a grouped subarray selection (GSS) approach is proposed, wherein the entire array is divided into distinct, non-overlapping groups, allowing us to select antennas in groups rather than individually, thereby significantly reducing the number of potential subarray configurations. Additionally, we develop a sequential search algorithm to minimize memory requirements during the implementation of antenna selection. * By reducing the number of potential subarray configurations, we formulate the antenna selection problem as a classification problem. We develop a learning model with a convolutional neural network (CNN) architecture that combines communications and sensing data to efficiently determine the subarray configuration. The CNN model takes the combined communications data (channel matrix) and sensing data (target response vectors) as input. Through training, the CNN model generates the optimal subarray configuration as the output. * We examine the impact of beam-squint and subarray configuration in terms of the SE of the overall system.
In particular, we show, via both theoretical analysis and numerical experiments, that the highest performance can only be achieved if the best subarray is selected and the beam-squint is completely compensated. In the remainder of the paper, we present the THz-ISAC architecture with communications and sensing signal model in Section <ref>. Next, we introduce the proposed joint antenna selection and hybrid beamforming approach in Section <ref>. After presenting various experimental results in Section <ref>, the paper is finalized with conclusions in Section <ref>. Notation: Throughout the paper, we use (·)^, (·)^ and (·)^* for transpose and conjugate transpose and complex conjugate operations, respectively. For a matrix 𝐀 and vector 𝐚; [𝐀]_i,j, [𝐀]_k and [𝐚]_l correspond to the (i,j)-th entry, k-th column and l-th entry, respectively. Furthermore, vec{𝐀} denotes the vectorized form of 𝐀 with 𝐀 = vec^-1{vec{𝐀}}. 𝔼{·} represent the flooring and expectation operations, respectively. The binomial coefficient is defined as ([ n; k ]) = n!/k! (n-k)!. An N× N identity matrix is represented by 𝐈_N. We denote ·_0, || ·||_2 and || ·||_ℱ as the ℓ_0-norm, ℓ_2-norm and Frobenious norms, respectively. ζ(a) =sin N π a/N sinπ a is the Drichlet sinc function, and | 𝐀| denotes the determinant of 𝐀. ⊙ and ⊗ denote the element-wise Hadamard and Kronecker products, respectively. The Riemannian and Euclidean gradients are represented by ∇_ℛ and ∇, respectively. § SYSTEM MODEL Consider a wideband ISAC system with hybrid beamforming architecture driven by N_RF RF chains and M subcarriers as shown in Fig. <ref>. The BS employs N antennas and aims to simultaneously generate multiple beams towards T targets and a single communications user with N' antennas, for which N_ds data symbols are transmitted. The BS performs an antenna selection scheme to employ a sparse array of size K out of N[We note here that the selected K antennas are optimized and dedicated to a pair of communications user and radar target for a particular coherence time. The remaining N-K antennas can concurrently be used for another ISAC scenario involving a different user-target pairs available in the network.][The number of selected antennas K should satisfy T + L ≤ N_RF≤ K ≤ N to simultaneously generate T + L beams towards T targets and L user path directions. ]. Denoted by z_i[m], i = 1,…, N_ds, the transmitted data symbols at the m-th subcarrier (m = ℳ = {1,⋯, M}), the BS applies the SD digital beamformer 𝐅_BB[m]∈ℂ^N_RF× N_ds. Using the K-element sparse array, the BS applies the SI analog beamformer, 𝐅_RF∈ℂ^K× N_RF which is realized with fully-connected phase shifter network[While there are works on partially-connected or subarrayed phase shifter network architectures with <cit.> or without <cit.> antenna selection, the proposed approach can be easily extended to these architectures via simple modifications in the selection matrix.]. Due to phase-only processing in the phase shifters, the entries of the analog beamformer have the constant modulus property, i.e., |[𝐅_RF]_i,j| = 1/√(K) for i = 1,⋯, K and j = 1,⋯, N_RF. Then, the K× 1 transmitted signal at the m-th subcarrier becomes 𝐠[m] = 𝐅_RF𝐅_BB[m]𝐳[m], where 𝐳[m] = [z_1[m],⋯, z_N_ds[m]]^, and 𝔼{𝐳[m]𝐳^[m]} = 1/N_ds𝐈_N_ds. §.§ Communications Model Denote the downlink THz channel matrix between the BS and the communications user as 𝐇[m]∈ℂ^N'× N. Then, the channel matrix with selected antennas is 𝐇[m] = 𝐇[m] 𝐐, where 𝐐∈{0,1}^N× K is the selection matrix. 
Specifically, for the (n,k)-th element of 𝐐, [𝐐]_n,k = 1 represents that the n-th transmit antenna is the k-th selected antenna for n∈{1,⋯, N} and k ∈{1, ⋯, K}. Then, the N'× 1 received signal vector at the user becomes 𝐲[m] = 𝐇[m]𝐐𝐠[m] + 𝐧[m] = 𝐇[m]𝐅_RF𝐅_BB[m]𝐳[m] + 𝐧[m], where 𝐧[m]∈ℂ^N' denotes the temporally and spatially white additive zero-mean Gaussian noise vector with variance σ^2. §.§.§ THz Channel Channel modeling in THz band has been a challenging task largely because of the lack of realistic measurement campaigns <cit.>. In <cit.>, it is shown that a single dominant line-of-sight (LoS) path with a few non-LoS (NLoS) multipath components survive at the receiver in outdoor scenarios for sub-THz frequencies <cit.>. In a general scenario, e.g., indoor, multipath channels can also arise, the gains of LoS and NLoS paths are comparable <cit.>. Therefore, we assume a general scenario, wherein the THz channel matrix 𝐇[m] includes the contribution of L multipath scatterers as 𝐇[m] = √(N'N/L)∑_l = 1^Lα_l,m𝐚'(ϕ_l)𝐚^(θ_l), where α_l,m∈ℂ denotes the gain of the l-th path, and it can be defined for a LoS path as α_l,m^LoS = c/4 π f_m d̅_l e^- 1/2k_abs (f_m) d̅_l e^- j2π f_m/cd̅_l where c is the speed-of-light, k_abs(f_m) is the SD molecular absorption coefficient for the m-th subcarrier frequency f_m and d̅_l is the transmission distance <cit.>. For NLoS paths, the expected path gain is given by 𝔼{|α_l,m^NLoS|^2} = (c/4 π f_m d̅_l )^2 e^-k_abs (f_m) d̅_l e^- τ̅_l/Γ̅, where τ̅ is the time of arrival of the l-th path while Γ̅ denotes the ray decay factor <cit.>. We note here that the proposed hybrid beamforming techniques for THz channel are also applicable for both narrowband and wideband mmWave systems. In (<ref>), 𝐚'(ϕ_l)∈ℂ^N' and 𝐚(θ_l)∈ℂ^N are the steering vectors corresponding to the physical direction-of-arrival (DoA) (ϕ_l) and direction-of-departure (DoD) angles (θ_l) of the l-th paths, respectively. The n-th element of 𝐚(θ_l) for a uniform linear array (ULA) is given by [𝐚(θ_l)]_n = 1/√(N)exp{-j 2πd/λ_c(n-1)sinθ_l }, where n = 1,⋯, N, λ_c is the wavelength of the central subcarrier frequency, i.e., λ_c = c/f_c, where f_c is the carrier frequency and d is the antenna element spacing, which is typically selected as d = λ_c/2. Note that the receive steering vector 𝐚'(ϕ_l) can be defined similarly. §.§.§ Beam-Squint Effect In wideband transmission, it is typically assumed that a common analog beamformer is designed corresponding to a single wavelength for all subcarriers, i.e., λ_1 = ⋯ = λ_M = c/f_c. However, this assumption no longer holds when bandwidth is so large that the beams generated at different subcarriers squint and they point to different directions in spatial domain <cit.>. If a similar beamforming architecture, employing SI analog beamformer and SD digital beamformers, is also utilized by the user, the same beam-squint effect is also observed at the user. The amount of beam-squint in the spatial domain is SD and it becomes larger as |f_m-f_c| increases. Thus, we define the SD beam-squinted DoA and DoD angles in spatial domain as sinφ_l,m and sinϑ_l,m, respectively. Then, the relationship between the spatial and physical directions (sinϕ_l, sinθ_l) is given as sinφ_l,m =η_m sinϕ_l, sinϑ_l,m = η_msinθ_l, where η_m = f_m/f_c, f_m = f_c + B/M(m - 1 - M-1/2) is the m-th subcarrier frequency for the system bandwidth B. We can see beam-squint is mitigated if the spatial (sinφ_l,m,sinϑ_l,m) and physical directions (sinϕ_l, sinθ_l) are equal, i.e., η_m = 1. 
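To make the size of the effect concrete before proceeding, the short script below scans the array gain of an SI beamformer designed at f_c and reports where each subcarrier actually points. It is an illustrative sketch, not taken from the paper's simulation setup; the values of N, f_c, B, M and the 40° direction are assumptions.

```python
import numpy as np

N, fc, B, M = 128, 300e9, 30e9, 16                   # assumed ULA/system parameters
m = np.arange(1, M + 1)
fm = fc + (B / M) * (m - 1 - (M - 1) / 2)            # subcarrier frequencies f_m
eta = fm / fc                                         # eta_m = f_m / f_c

def steering(sin_dir, n_ant):
    """ULA steering vector (d = lambda_c / 2) for a given spatial direction sin(theta)."""
    n = np.arange(n_ant)
    return np.exp(-1j * np.pi * n * sin_dir) / np.sqrt(n_ant)

theta = np.deg2rad(40.0)                              # intended physical DoD
f_beam = steering(np.sin(theta), N)                   # SI beamformer, designed at f_c
grid = np.linspace(-1, 1, 4001)                       # scan grid over sin(direction)
for i in (0, M // 2, M - 1):                          # lowest, central, highest subcarrier
    # at subcarrier m, the array response toward physical direction s has phases eta_m * s
    gain = np.abs(np.array([f_beam.conj() @ steering(eta[i] * s, N) for s in grid])) ** 2
    peak = grid[np.argmax(gain)]
    print(f"f_m = {fm[i]/1e9:6.1f} GHz -> beam peak at sin = {peak:+.3f} "
          f"(intended {np.sin(theta):+.3f})")
```

Consistent with the relation above, the peak sits at sinθ/η_m, so for this bandwidth the edge subcarriers squint away from the intended direction by a few beamwidths.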
Under the effect of beam-squint, the n-th entry of the SD steering vector 𝐚(ϑ_l,m) is given by [𝐚(ϑ_l,m)]_n = 1/√(N)exp{- j2π d/λ_m (n-1) sinθ_l} = 1/√(N)exp{- jπ f_m/f_c (n-1)sinθ_l } = 1/√(N)exp{- jπ (n-1)η_m sinθ_l }, where λ_m = c/f_m is the wavelength of the m-th subcarrier. Comparing (<ref>) and (<ref>) shows that the deviation in the spatial directions due to beam-squint can be compensated by exploiting the phase terms of the steering vectors, as will be discussed in Sec. <ref>. For the communications-only problem, the hybrid beamforming design aims to maximize the SE, which is defined for the m-th subcarrier as SE[m] = log_2 |𝐈_N' + 1/(N_dsσ^2) 𝐇[m] 𝐅_RF𝐅_BB[m] 𝐅_BB^H[m]𝐅_RF^H𝐇^H[m] |, for which the SE of the overall system is SE = ∑_m=1^M SE[m]. We note that maximizing the SE can be achieved by exploiting the similarity between the hybrid beamformer 𝐅_RF𝐅_BB[m] and the unconstrained communications-only beamformer 𝐅_C[m]∈ℂ^K× N_ds <cit.>. In particular, 𝐅_C[m] is the subarray beamformer 𝐅_C[m] = 𝐐^⊤𝐅̅_C[m], where 𝐅̅_C[m]∈ℂ^N × N_ds denotes the communications-only beamformer corresponding to the full array, and it can be directly obtained from the right singular matrix of 𝐇[m] via the singular value decomposition (SVD) <cit.>. Assumption 1: We assume that the THz channel matrix of the full array, i.e., 𝐇[m], is available for ISAC beamformer design. This can be achieved via either model-based techniques <cit.> or learning-based approaches <cit.>. It is also worth noting that the complete channel matrix can be constructed by cycling the N_RF RF chains among the N antennas during channel training. In other words, the RF chains are first connected to the first N_RF antennas during the first part of the training sequence, then to the second N_RF antennas, and so on <cit.>. §.§ Sensing Model While communicating with the user, the ISAC system aims to deliver as high an SNR as possible toward the targets for sensing <cit.>. To that end, the BS transmits probing signals to sense the targets in the environment. Let 𝐗̃[m]∈ℂ^N× T_S be the transmitted sensing signal, where T_S is the number of snapshots. Then, the N× T_S received echo signal from T targets is 𝐘̃[m] = ∑_t = 1^T β_t 𝐚(Φ_t) 𝐚^⊤(Φ_t) 𝐗̃[m] + 𝐍̃[m], where β_t represents the reflection coefficient, i.e., radar cross section, of the t-th target, 𝐚(Φ_t)∈ℂ^N denotes the steering vector corresponding to the t-th target at the direction Φ_t, and 𝐍̃[m]∈ℂ^N× T_S denotes the additive noise term. The estimation of the target directions can be performed via both model-based <cit.> and model-free learning-based <cit.> techniques available in the literature. Once the target directions are estimated as {Φ̂_t}_t ∈𝒯, where 𝒯 = {1,⋯, T}, the sensing-only beamformer 𝐅̅_S∈ℂ^N× T is constructed as 𝐅̅_S = [𝐚(Φ̂_1),⋯, 𝐚(Φ̂_T) ]. Then, the sensing-only subarray beamformer 𝐅_S∈ℂ^K× T is given by 𝐅_S = 𝐐^⊤𝐅̅_S. Assumption 2: We assume that the sensing-only full array beamformer 𝐅̅_S is available. That is to say, the target directions are acquired during the search operation of the radar prior to the beamformer design. Although the relevant literature on direction estimation is mostly limited to the beam-squint-free scenario <cit.>, a beam-squint-aware multiple signal classification (BSA-MUSIC) technique was recently introduced in <cit.> for the compensation of beam-squint in the direction estimation problem. §.§ Problem Formulation Our aim in this work is to jointly optimize a subarray at the BS and design the hybrid beamformers, which can be achieved by maximizing the SE of the overall system.
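Before formalising the joint problem, the quantities just introduced are easy to prototype. The sketch below (helper names and shapes are ours) forms the SVD-based full-array beamformer and evaluates the per-subcarrier SE that serves as the performance metric throughout; summing over m gives the overall SE.

```python
import numpy as np

def comms_beamformer(H_m, n_ds):
    """Full-array F_C[m]: first n_ds right singular vectors of H[m] (via SVD)."""
    _, _, Vh = np.linalg.svd(H_m)
    return Vh.conj().T[:, :n_ds]                      # N x n_ds

def spectral_efficiency(H_m, F_RF, F_BB_m, n_ds, sigma2):
    """Per-subcarrier SE[m] = log2 |I + (1/(n_ds sigma^2)) H F F^H H^H|."""
    Heff = H_m @ F_RF @ F_BB_m                        # N' x n_ds effective channel
    G = np.eye(H_m.shape[0]) + (Heff @ Heff.conj().T) / (n_ds * sigma2)
    return np.real(np.linalg.slogdet(G)[1]) / np.log(2.0)

rng = np.random.default_rng(1)
H = (rng.standard_normal((4, 16)) + 1j * rng.standard_normal((4, 16))) / np.sqrt(2)
F_C_full = comms_beamformer(H, n_ds=2)                # hypothetical 16-antenna example
```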
By exploiting the similarity between the hybrid beamformer 𝐅_RF𝐅_BB[m] and the unconstrained beamformers 𝐅_C[m] and 𝐅_S, the joint antenna selection and hybrid beamforming problem is written as minimize_𝐐, 𝐅_RF, {𝐅_BB[m], 𝐃[m]}_m∈ℳ ∑_m = 1^M( ε‖𝐅_RF𝐅_BB[m] - 𝐅_C[m]‖_ℱ + (1 - ε) ‖𝐅_RF𝐅_BB[m] - 𝐅_S𝐃[m]‖_ℱ) subject to ∑_m = 1^M ‖𝐅_RF𝐅_BB[m]‖_ℱ^2 = M N_ds, |[𝐅_RF]_i,j| = 1/√(K), 𝐃[m] 𝐃^H[m] = 𝐈_T, 𝐅_C[m] = 𝐐^⊤𝐅̅_C[m], 𝐅_S = 𝐐^⊤𝐅̅_S, [𝐐]_n,k∈{0,1}, ‖vec{𝐐}‖_0 = K, where 𝐃[m]∈ℂ^T× N_ds is a unitary matrix (i.e., 𝐃[m] 𝐃^H[m] = 𝐈_T) that accounts for the change of dimensions between 𝐅_S and 𝐅_C[m]. In (<ref>), 0≤ε≤ 1 is the trade-off parameter between the communications and sensing tasks. Specifically, ε = 1 (ε = 0) corresponds to the communications-only (sensing-only) design. The procedure of determining ε involves the ratio of allocated resources, such as power <cit.> and the signal durations of the coherent processing intervals <cit.>. The problem in (<ref>) falls into the class of mixed-integer non-convex programming (MINCP), which is difficult to solve due to the several matrix variables 𝐐, 𝐅_RF, 𝐅_BB[m], 𝐃[m] and the non-convex constraints. In particular, the constant-modulus constraints for the analog beamformer 𝐅_RF in (<ref>) dictate that the amplitudes of the analog beamformer weights are constant. Furthermore, the antenna selection matrix 𝐐 has binary values as in (<ref>), and the number of non-zero terms in 𝐐 equals K as in (<ref>). Following these considerations, we introduce an effective and computationally-efficient solution in the remainder of this work. § JOINT ANTENNA SELECTION AND BEAMFORMER DESIGN In order to provide an effective solution, we first divide the problem in (<ref>) into two subproblems, hybrid beamforming and antenna selection. In hybrid beamforming design, we first formulate the subproblem as a manifold optimization problem for a given subarray configuration, wherein the hybrid analog/digital beamformers are alternatingly optimized and the impact of beam-squint is compensated. In antenna selection, we are interested in selecting K out of N antenna elements at the BS. This yields P = (N choose K) = N!/(K! (N - K)!) possible subarray configurations. Therefore, the antenna selection problem can be viewed as a classification problem with P classes. Define 𝒬 = {𝐐_1,⋯, 𝐐_P } as the set of all possible subarray configurations, where 𝐐_p represents the selection matrix 𝐐 for the p-th configuration as 𝐐_p = [𝐪_1^p,⋯, 𝐪_K^p ]. Here, 𝐪_k^p = [0, ⋯, q_n,k^p,⋯, 0]^⊤, and we have q_n,k^p = 1 when the n-th transmit antenna is the k-th selected element of the subarray in the p-th configuration. §.§ Hybrid Beamformer Design We first define the hybrid beamformers for the p-th subarray configuration as 𝐅_RF^(p)∈ℂ^K× N_RF and 𝐅_BB^(p)[m]∈ℂ^N_RF× N_ds. Next, we define the cost function in (<ref>) for the p-th configuration and the m-th subcarrier as f(p,m) = ε‖𝐅_RF^(p)𝐅_BB^(p)[m] - 𝐅_C^(p)[m]‖_ℱ + (1- ε) ‖𝐅_RF^(p)𝐅_BB^(p)[m] - 𝐅_S^(p)𝐃^(p)[m]‖_ℱ, where 𝐅_C^(p)[m] = 𝐐_p^⊤𝐅̅_C[m] and 𝐅_S^(p) = 𝐐_p^⊤𝐅̅_S. Using the triangle inequality, the following lower bound is obtained: f(p,m) ≥ ‖ε(𝐅_RF^(p)𝐅_BB^(p)[m] - 𝐅_C^(p)[m]) + (1-ε)(𝐅_RF^(p)𝐅_BB^(p)[m] - 𝐅_S^(p)𝐃^(p)[m])‖_ℱ = ‖𝐅_RF^(p)𝐅_BB^(p)[m] - 𝐅_SC^(p)[m]‖_ℱ, where we define 𝐅_SC^(p)[m] ∈ℂ^K× N_ds as the joint sensing-communications (JSC) beamformer, which involves the combination of 𝐅_C^(p)[m] and 𝐅_S^(p) <cit.>, as 𝐅_SC^(p)[m] = ε𝐅_C^(p)[m] + (1-ε) 𝐅_S^(p)𝐃^(p)[m].
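The combinatorial objects just introduced are straightforward to materialise; the sketch below (hypothetical small sizes) builds 𝐐_p, enumerates the P candidate subarrays, and forms the JSC beamformer:

```python
import numpy as np
from itertools import combinations

def selection_matrix(antennas, n_total):
    """Q_p: [Q]_{n,k} = 1 iff the n-th antenna is the k-th selected one."""
    Q = np.zeros((n_total, len(antennas)))
    Q[list(antennas), np.arange(len(antennas))] = 1.0
    return Q

def jsc_beamformer(F_C_p_m, F_S_p, D_m, eps):
    """F_SC^{(p)}[m] = eps * F_C^{(p)}[m] + (1 - eps) * F_S^{(p)} D[m]."""
    return eps * F_C_p_m + (1.0 - eps) * F_S_p @ D_m

N, K = 8, 4
configs = list(combinations(range(N), K))             # all P = C(N, K) subarrays
print(len(configs))                                   # 70 candidate configurations
Q_p = selection_matrix(configs[0], N)                 # e.g. the first configuration
# subarray quantities from (hypothetical) full-array ones: F_C^{(p)}[m] = Q_p.T @ F_C_full
```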
Since we have f(p,m) ≥‖𝐅_RF^(p)𝐅_BB^(p)[m] - 𝐅_SC^(p)[m] ‖_ℱ from (<ref>), minimizing ‖𝐅_RF^(p)𝐅_BB^(p)[m] - 𝐅_SC^(p)[m] ‖_ℱ is equivalent to minimizing f(p,m). Thus, we can write the hybrid beamforming design problem for the p-th subarray configuration as min_𝐅_RF^(p), 𝐅_BB^(p)[m], 𝐃^(p)[m] ∑_m = 1^M‖𝐅_RF^(p)𝐅_BB^(p)[m] - 𝐅_SC^(p)[m] ‖_ℱ subject to |[𝐅_RF^(p)]_k,n| = 1/√(K), ∑_m = 1^M ‖𝐅_RF^(p)𝐅_BB^(p)[m]‖_ℱ = M N_ds, 𝐃^(p)[m]𝐃^(p)H[m] = 𝐈_T, which can be written in a compact form as min_𝐅_RF^(p), 𝐅_BB^(p), 𝐃^(p) ‖𝐅_RF^(p)𝐅_BB^(p)- 𝐅_SC^(p)‖_ℱ subject to |[𝐅_RF^(p)]_k,n| = 1/√(K), ‖𝐅_RF^(p)𝐅_BB^(p)‖_ℱ = M N_ds, 𝐃^(p)𝐃^(p)H = 𝐈_T, where 𝐅_BB^(p) = [𝐅_BB^(p)[1], ⋯, 𝐅_BB^(p)[M] ], 𝐅_SC^(p) = [𝐅_SC^(p)[1], ⋯, 𝐅_SC^(p)[M] ] and 𝐃^(p) = [𝐃^(p)[1], ⋯, 𝐃^(p)[M] ] are N_RF× MN_ds, K× MN_ds and T× MN_ds matrices, respectively. Now, the optimization problem in (<ref>) can be solved effectively via manifold optimization techniques <cit.>. To solve (<ref>), we follow an alternating technique, wherein the unknown variables 𝐅_RF^(p), 𝐅_BB^(p) and 𝐃^(p) are estimated one by one while the remaining terms are fixed.
§.§.§ Solve for 𝐅_RF^(p)
In order to solve for 𝐅_RF^(p) via manifold optimization, we first define 𝐟^(p) = vec{𝐅_RF^(p)}∈ℂ^KN_RF as the vectorized form of 𝐅_RF^(p). By exploiting the unit-modulus constraint of the analog beamformer, the search space for 𝐟^(p) is regarded as a Riemannian submanifold ℛ of the complex plane ℂ^K N_RF as <cit.> ℛ = {𝐟^(p)∈ℂ^KN_RF : | [𝐟^(p)]_1|=⋯ = |[𝐟^(p)]_KN_RF| = 1/√(K)}. Then, by following a conjugate gradient descent technique <cit.>, 𝐟^(p) can be optimized iteratively, and in the i-th iteration, we have 𝐟_i+1^(p) = (𝐟_i^(p) + ȧ_i ξ_i (𝐟_i^(p), 𝐅_BB^(p), 𝐅_SC^(p)))/|𝐟_i^(p) + ȧ_i ξ_i (𝐟_i^(p), 𝐅_BB^(p), 𝐅_SC^(p))|, where ȧ_i is the Armijo backtracking line search step size <cit.> and ξ_i (𝐟_i^(p), 𝐅_BB^(p), 𝐅_SC^(p)) ∈ℂ^KN_RF is the directional gradient vector <cit.>, which depends on the Riemannian gradient of 𝐟_i^(p), i.e., ∇_ℛ𝐟_i^(p), defined as ∇_ℛ𝐟_i^(p) = ∇𝐟_i^(p) - Re{∇𝐟_i^(p)⊙𝐟_i^(p)^*}⊙𝐟_i^(p), where ∇𝐟_i^(p) denotes the Euclidean gradient of 𝐟_i^(p) as ∇𝐟_i^(p) = -2 𝐁^(p)H( 𝐟_SC^(p) - 𝐁^(p)𝐟_i^(p)), where 𝐁^(p) = 𝐅_BB^(p)⊤⊗𝐈_K∈ℂ^KMN_ds× KN_RF and 𝐟_SC^(p) = vec{𝐅_SC^(p)}∈ℂ^KMN_ds. Given 𝐅_BB^(p) and 𝐅_SC^(p), the optimization process can be initialized for i = 0 by selecting 𝐟_0^(p) as [𝐟_0^(p)]_k = e^jΨ_k, where Ψ_k ∼uniform([0,2π)) for k = 1,⋯, KN_RF. The complexity of computing (<ref>) is mainly due to the computation of the conjugate gradient in (<ref>). Therefore, the computational complexity order of optimizing 𝐅_RF^(p) is O(N_iter^A K^2 N_RF M N_ds), where N_iter^A is the number of iterations <cit.>.
§.§.§ Solve for 𝐅_BB^(p) and Beam-Squint Compensation (BSC)
Since the analog beamformer 𝐅_RF^(p) is SI, beam-squint occurs. In order to compensate for the impact of beam-squint, we design the baseband beamformer 𝐅_BB^(p)[m] such that the impact of beam-squint in the analog domain is conveyed to the baseband, which is SD. To this end, we first obtain 𝐅_BB^(p)[m] from 𝐅_SC^(p)[m] and 𝐅_RF^(p). Then, we update the baseband beamformer by utilizing the SD analog beamformer, which can be virtually computed from the SI analog beamformer 𝐅_RF^(p) <cit.>. Given 𝐅_SC^(p)[m] and 𝐅_RF^(p), a straightforward solution for 𝐅_BB^(p)[m] is 𝐅̆_BB^(p)[m] = ( 𝐅_RF^(p))^†𝐅_SC^(p)[m].
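A minimal sketch of the Riemannian gradient update described above is given below (illustrative only; the sizes are assumed, the 1/√K modulus is absorbed into B so that the manifold has unit-modulus entries, and a conservative fixed step replaces the Armijo line search and conjugate directions for brevity).

import numpy as np

rng = np.random.default_rng(1)
K, N_RF, MN_ds = 8, 8, 64                     # assumed dimensions
# B plays the role of (F_BB^T kron I_K)/sqrt(K); random data stands in for it here.
B = (rng.standard_normal((K * MN_ds, K * N_RF)) +
     1j * rng.standard_normal((K * MN_ds, K * N_RF))) / np.sqrt(K)
f_SC = rng.standard_normal(K * MN_ds) + 1j * rng.standard_normal(K * MN_ds)

f = np.exp(1j * rng.uniform(0, 2 * np.pi, K * N_RF))   # unit-modulus initialization
step = 0.5 / np.linalg.norm(B, 2) ** 2                  # fixed step; Armijo in the paper
for it in range(300):
    egrad = -2 * B.conj().T @ (f_SC - B @ f)            # Euclidean gradient
    rgrad = egrad - np.real(egrad * f.conj()) * f       # project onto the tangent space
    f = f - step * rgrad                                # gradient step
    f = f / np.abs(f)                                   # retract back onto the manifold
print("final residual:", np.linalg.norm(f_SC - B @ f))

The projection line is exactly the Riemannian gradient formula from the text; the element-wise normalization is the standard retraction for the unit-modulus manifold.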
Now, we define the SD analog beamformer as 𝐅̆_RF^(p)[m]∈ℂ^K× N_RF, which can be found from 𝐅_RF^(p) as 𝐅̆_RF^(p)[m] = 1/√(K)Ω_m^(p), where Ω_m^(p)∈ℂ^K× N_RF includes the phase information of the SI beamformer 𝐅_RF^(p) as [Ω_m^(p)]_k,n = exp{jη_m ∠{[𝐅_RF^(p)]_k,n}}, for k = 1,⋯, K and n = 1,⋯, N_RF. Notice that the SD beamformer in (<ref>) and (<ref>) includes the compensation of beam-squint via multiplying the phase terms by η_m. Next, we update the baseband beamformer in (<ref>) such that our hybrid beamformer 𝐅_RF^(p)𝐅_BB^(p)[m] resembles the SD JSC beamformer 𝐅_SC^(p)[m] as much as possible. Hence, from (<ref>), the updated baseband beamformer is computed as 𝐅_BB^(p)[m] = (𝐅_RF^(p)) ^†𝐅̆_RF^(p)[m] 𝐅̆_BB^(p)[m].
§.§.§ Solve for 𝐃^(p)
Given 𝐅_RF^(p), 𝐅_BB^(p) and 𝐅_SC^(p), the auxiliary variable 𝐃^(p) can be optimized via min_𝐃^(p) ‖𝐅_RF^(p)𝐅_BB^(p) - 𝐅_SC^(p)‖_ℱ subject to 𝐃^(p)𝐃^(p)H = 𝐈_T, which is the orthogonal Procrustes problem <cit.>, and its solution is 𝐃^(p) = 𝐔𝐈_T× MN_ds𝐕^H, where 𝐔Σ𝐕^H is the SVD of the T× MN_ds matrix 1/(1 - ε)𝐅_S^(p)H(𝐅_RF^(p)𝐅_BB^(p) - ε𝐅_C^(p)), and 𝐈_T× MN_ds = [𝐈_T | 0_(MN_ds-T)× T^⊤]. In Algorithm <ref>, the algorithmic steps of the proposed subarrayed beamforming approach are presented. Specifically, for a given subarray configuration index p, we first construct the subarray terms 𝐅_C^(p)[m] and 𝐅_S^(p) from the full array quantities. Then, for the first iteration, i.e., j=1, the unknown variables are initialized as 𝐃^(p,j) = 𝐈_T× MN_ds and 𝐅_RF^(p,j) = e^jΨ, where [Ψ]_k,n∼uniform([0,2π)) for k = 1,⋯, K and n = 1,⋯, N_RF. During the alternating optimization, firstly, the analog beamformer 𝐅_RF^(p) is optimized in steps 7-12. Then, the virtual SD analog beamformer 𝐅̆_RF^(p)[m] is computed, and the baseband beamformer is updated for BSC in steps 15-16. Finally, the auxiliary variable 𝐃^(p)[m] is updated in step 17. The hybrid beamformers are obtained when the algorithm converges <cit.>.
§.§ Antenna Selection
Given the hybrid beamformers 𝐅_RF^(p), 𝐅_BB^(p)[m], we can write the SE when the p-th subarray is selected as SE_p[m] = log_2 |𝐈_N' + 1/N_dsσ^2𝐇^(p)[m] 𝐅_RF^(p)𝐅_BB^(p)[m] 𝐅_BB^(p)H[m] 𝐅_RF^(p)H𝐇^(p)H[m] |, and the SE over all subcarriers is SE_p = ∑_m = 1^MSE_p [m]. Using (<ref>), the antenna selection problem is written as p^⋆= max_pSE_p subject to 𝐇^(p)[m] = 𝐇[m]𝐐_p, where p^⋆ represents the best subarray configuration maximizing the SE. Now, we discuss the optimality of the best subarray configuration under the impact of beam-squint. Define 𝐮_p^⋆(θ) = 𝐐_p^⋆^⊤𝐚(θ) as the K× 1 beam-squint-free best subarray steering vector corresponding to an arbitrary direction θ. Then, it is clear that 𝐮_p^⋆(θ) achieves the highest SE in (<ref>) and obtains the maximum array gain A_G(Φ) for an arbitrary direction Φ if Φ = θ, i.e., θ =argmax_Φ A_G(Φ), where A_G(Φ) = |𝐮_p^⋆^H(θ)𝐮_p^⋆(Φ) |^2 /N^2. In the following theorem, we show that the array gain, and equivalently the SE <cit.>, is maximized only if two conditions are met, i.e., the best subarray configuration is selected and beam-squint is completely compensated. Define 𝐮_p(ϑ_m) = 𝐐_p^⊤𝐚(ϑ_m) as the K× 1 beam-squint-corrupted subarray steering vector corresponding to an arbitrary direction θ and subcarrier m∈ℳ as defined in (<ref>). Then, 𝐮_p (ϑ_m) achieves the maximum array gain only if sinϑ_m = η_m sinθ, p = p^⋆ , where p^⋆ is the optimizer of (<ref>), and the array gain varying across the whole bandwidth is A_G(ϑ_m) = |𝐮_p^⋆^H(θ) 𝐮_p(ϑ_m)|^2/N^2.
We first prove that the condition p=p^⋆ is required for the maximization of the array gain in (<ref>). Thus, we start by rewriting the array gain across the subcarriers in (<ref>) as A_G(ϑ_m) = |𝐮_p^⋆^H(θ) 𝐮_p(ϑ_m)|^2/N^2 = | 𝐚^H(θ) 𝐐_p^⋆𝐐_p^⊤𝐚(ϑ_m) |^2/N^2 = Trace{𝐐_p^⋆^⊤𝐐_p} | 𝐚^H(θ) 𝐚(ϑ_m) |^2/N^2, where the first term in the numerator, i.e., Trace{𝐐_p^⋆^⊤𝐐_p}, is maximized as max_pTrace{𝐐_p^⋆^⊤𝐐_p} =K, which can be achieved only if p= p^⋆, when 𝐐_p^⋆^⊤𝐐_p = 𝐈_K. Now, we show that the maximum array gain is only achieved when sinϑ_m = η_m sinθ. Thus, substituting p = p^⋆ in (<ref>) yields A_G(ϑ_m) = | 𝐚^H(θ) 𝐚(ϑ_m) |^2/N^2 = |∑_n = 0^N-1e^-j2π n d̅( sinϑ_m/λ_c - sinθ/λ_m) /N|^2 = |∑_n = 0^N-1e^-j2π n d̅(f_c sinϑ_m - f_msinθ) /c/N|^2 = | 1 - e^-j2π Nd̅(f_c sinϑ_m- f_msinθ)/c/N (1 - e^-j2πd̅(f_c sinϑ_m - f_msinθ)/c ) |^2 = | sin (π N μ_m )/Nsin (πμ_m )|^2 = |ζ( μ_m )|^2, where μ_m = d̅(f_c sinϑ_m- f_msinθ)/c. Due to the power focusing capability of the Dirichlet sinc function ζ (μ_m) in (<ref>), the array gain is focused on only a small portion of the beamspace, and it substantially reduces across the subcarriers as |f_m - f_c| increases. Furthermore, |ζ( μ_m )|^2 peaks when μ_m = 0, i.e., f_csinϑ_m - f_msinθ= 0, which yields sinϑ_m = η_m sinθ. The problem in (<ref>) requires visiting all possible subarray configurations, which can be computationally prohibitive and consume too much memory, especially when N is large, e.g., N ≥ 32. To reduce this cost, we propose low complexity algorithms in the following.
§.§.§ Grouped Subarray Selection
In order to reduce the computational cost involved in the solution of (<ref>), we propose the GSS strategy to lower the number of possible subarray configurations. In GSS, the whole array is divided into N_G = N/G disjoint groups, each of which includes G consecutive antennas, as illustrated in Fig. <ref>. Thus, the antenna selection problem with GSS reduces to selecting K_G = K/G groups out of N_G, and the number of possible subarray configurations with GSS is P_G = N_G!/K_G! (N_G - K_G)!. Then, the set of all possible subarray configurations with GSS is 𝒬_G = {𝐐_1,⋯, 𝐐_P_G}, where 𝐐_p_G represents the antenna selection matrix for the p_G-th configuration, p_G = 1,⋯, P_G. The grouped array structure significantly reduces the number of subarray configurations, even for a small number of grouped antennas, e.g., G=2. As an example, consider the scenario where N = 64 and K = 32. Then, the number of subarray configurations is P = 1.83× 10^18, which reduces to approximately P_2 = 6× 10^8 (3× 10^9 times fewer) and P_4 = 12870 (1.4× 10^14 times fewer) for G = 2 and G= 4, respectively; see also the counting sketch after the following subsection.
§.§.§ Sequential Search Algorithm
During the computation of SE_p for a very large number of configurations P, the computation platform requires a very large amount of memory to store the variables whose dimensions are proportional to P. To use the memory efficiently, we devise a sequential search algorithm, wherein 𝒬 is partitioned into B disjoint blocks as 𝒬 = ∪_b = 1^B 𝒬^b, where 𝒬^b = {𝐐_P(b-1)/B+1,⋯, 𝐐_Pb/B}. Then, (<ref>) is sequentially solved such that the variables, e.g., 𝐅_RF^(p)𝐅_BB^(p)[m] and 𝐇^(p)[m] for p ∈{P(b-1)/B+1,⋯, Pb/B}, are removed from the memory after the computation at block b, instead of storing all the data in the memory. As a result, the computational platform requires approximately B times less memory.
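The search-space reduction from GSS referenced above can be checked directly; this small Python sketch (illustrative, reusing the example values N=64, K=32 from the text) counts C(N,K) versus C(N/G, K/G) and enumerates grouped selection matrices for a tiny case.

import numpy as np
from itertools import combinations
from math import comb

N, K = 64, 32
for G in (1, 2, 4):
    print(f"G={G}: P_G = C({N // G},{K // G}) = {comb(N // G, K // G):.3e}")

def grouped_selection_matrices(N, K, G):
    # Yield N x K binary selection matrices Q_p for grouped subarray selection.
    for groups in combinations(range(N // G), K // G):
        rows = [g * G + i for g in groups for i in range(G)]
        Q = np.zeros((N, K))
        Q[rows, np.arange(K)] = 1.0
        yield Q

print(sum(1 for _ in grouped_selection_matrices(8, 4, 2)), "configurations for N=8, K=4, G=2")

Running the counting loop reproduces the values quoted above (about 1.83e18, 6.01e8, and 12870 for G=1, 2, 4).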
In Algorithm <ref>, we present the algorithmic steps of the proposed approach for ISAC hybrid beamformer design with antenna selection. First, we compute 𝒬^b, i.e., the set of subarray configurations for the b-th block. Then, we design the hybrid beamformers and compute the cost (i.e., the SE). Next, the computed costs of two consecutive blocks (i.e., SE_q_b-1^⋆ and SE_q_b^⋆) are compared, and the unnecessary variables are removed from the memory. Following this strategy for all b, we obtain the best subarray index q^⋆ and the hybrid beamformers 𝐅_RF^(q^⋆) and 𝐅_BB^(q^⋆)[m].
§.§ Learning-Based Antenna Selection
Due to the combinatorial nature of the antenna selection problem, it is preferable to formulate it as a classification problem, wherein each subarray configuration is regarded as a class. Thus, we design a classification model with a CNN architecture as shown in Fig. <ref>. Define 𝒟 = {𝒟_1, ⋯, 𝒟_D̅} as the training dataset, where 𝒟_i = (ℐ_i,𝒪_i) denotes the i-th input-output data pair for i = 1,⋯,D̅. The input of the CNN is formed from the combination of communications (channel matrix) and sensing (received target responses) data Π[m] ∈ℂ^N × (N' + T ) as Π[m] = [𝐇^H[m], 𝐅̅_S] (a construction sketched after this section). Define Π^(i) [m]∈ℂ^N × (N' + T ) as the generated data for the i-th sample of the dataset. Then, the input includes the real and imaginary parts of Π^(i)[m] as 𝒳_i,1 = Re{Π^(i) [m]} and 𝒳_i,2 = Im{Π^(i)[m] }, respectively. Thus, 𝒳_i is a “two-channel” real-valued tensor of size N × (N' + T )× 2. Furthermore, the output of the i-th sample is the best subarray index obtained from Algorithm <ref>, i.e., 𝒪_i = q_(i)^⋆. As a result, the output takes values among the possible subarray configurations, i.e., 𝒪_i ∈𝒬. Let θ∈ℝ^U denote the learnable parameters of the CNN. Then, the learning model aims to construct the non-linear mapping between the input ℐ_i and the output label 𝒪_i as ℱ(θ, ℐ_i) →𝒪_i. The CNN architecture for antenna selection has 13 layers, as shown in Fig. <ref>. The first layer is the input layer of size N × (N' + T )× 2. The {2,4,7}-th layers are convolutional layers with 256 filters of kernel size 3× 3. The third and sixth layers are rectified linear unit layers performing the nonlinear feature mapping f_ReLU(x) = max(0,x) for input x. The fifth and eighth layers are pooling layers, which reduce the dimension by a factor of 2. The ninth and eleventh layers are fully connected layers with 1024 units, each of which is followed by a dropout layer with probability 0.5. The output layer is a classification layer with the softmax function and P classes, each of which corresponds to a distinct subarray configuration.
§ NUMERICAL EXPERIMENTS
We evaluate the performance of the proposed approach via several experiments. During the simulations, the target and user path directions are drawn uniformly at random as Φ_t,ϕ_l,θ_l ∈ [30^∘, 150^∘]. We conduct 500 Monte Carlo trials and present the averaged results. We assume that there are T=3 point targets and a single user with L=3 NLoS paths. We select N' = 16, N_RF = 8, M=16 and the number of data snapshots T_S=256. Fig. <ref> shows the SE performance of the proposed antenna selection and hybrid beamforming approach based on GSS and BSC for N=32 and N=128 when K=8 and G=4. The performance of the fully digital (FD) full array beamformer (i.e., 𝐅_SC [m] = ε𝐅_C[m] + (1-ε) 𝐅_S𝐃[m]) is regarded as the benchmark for the subarrayed hybrid beamformer, i.e., 𝐅_RF^(q^⋆)𝐅_BB^(q^⋆)[m].
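Returning to the CNN input construction described above, the following sketch (illustrative; shapes follow the text and random data stands in for the channel and target responses) builds Π[m] = [𝐇^H[m], 𝐅̅_S] and the two-channel real tensor 𝒳_i of size N×(N'+T)×2.

import numpy as np

rng = np.random.default_rng(2)
N, N_prime, T, M = 32, 16, 3, 16                       # sizes used in the experiments

H = rng.standard_normal((M, N_prime, N)) + 1j * rng.standard_normal((M, N_prime, N))
F_S_bar = rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))  # full-array steering

def cnn_input(H_m, F_S_bar):
    # Pi[m] = [H^H[m], F_S_bar] in C^{N x (N'+T)}, split into real/imag channels.
    Pi = np.concatenate([H_m.conj().T, F_S_bar], axis=1)
    return np.stack([Pi.real, Pi.imag], axis=-1)        # N x (N'+T) x 2

X = np.stack([cnn_input(H[m], F_S_bar) for m in range(M)])
print(X.shape)  # (16, 32, 19, 2), matching the 32 x 19 x 2 sample size quoted below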
The beamforming and antenna selection are performed as described in Algorithm <ref> and Algorithm <ref>, respectively. We see from Fig. <ref> that our BSC approach provides a significant SE improvement for hybrid beamforming. In order to evaluate the antenna selection accuracy, we also present the hybrid beamforming performance of randomly selected subarrays with and without BSC. While the former design involves a BSC beamformer for a random subarray configuration, the latter does not involve BSC for the same randomly selected subarray. We observe that the randomly selected subarray with BSC performs better than the optimized subarray without BSC for different array sizes. In other words, compensating for beam-squint plays a more crucial role than optimizing for the best subarray. Next, we present the SE performance in Fig. <ref> with respect to the number of selected antennas K when the transmit array size is fixed as N = 64 with G=4. We aim to generate T + L=6 disjoint beams towards T=3 targets and L=3 user path directions. Thus, we see that the SE improves as K increases, especially for K≥ T + L = 6, thanks to achieving higher beamforming gain. We also observe that the optimized subarray without BSC achieves higher SE than the random subarray with BSC for K≥10. This suggests that implementing BSC in the case of the random subarray can improve the SE only up to a certain extent. Furthermore, increasing the subarray size can potentially enhance the array gain, thereby mitigating the SE loss caused by beam-squint. The SE performance is presented in Fig. <ref> versus the number of RF chains N_RF for N=128 and K=8. For N_RF≥ 6, higher SE is achieved by the randomly selected subarray even without BSC. In contrast, the SE performance of the BSC algorithms is reduced for N_RF < 8. This is because the term (𝐅_RF^(q^⋆)) ^†𝐅̆_RF^(q^⋆)[m] in (<ref>) becomes full column rank as N_RF→ N, and hence provides a better mapping from 𝐅̆_BB^(q^⋆)[m] to 𝐅_BB^(q^⋆)[m]. Nevertheless, our optimized subarray with BSC exhibits approximately 9% and 27% higher SE as compared to the random subarray with and without BSC, respectively. In Fig. <ref>, we evaluate the performance of our GSS approach in terms of the number of subarray candidates as well as the loss in SE. Fig. <ref>(a-b) shows that the number of possible subarray configurations decreases significantly even for a slight increase of G, while the SE of the selected subarray degrades. This is because the antenna selection algorithm with GSS visits only the subarray configurations in 𝒬_G while leaving out the remaining candidates, which are in the set 𝒬\𝒬_G. Nevertheless, our GSS approach with BSC-based hybrid beamforming achieves significantly lower complexity while maintaining a satisfactory SE performance, as shown in Fig. <ref>(c). In particular, when G=4, the number of subarray candidates is reduced by about 95% while our BSC approach yields only a 6% loss in SE, whereas this loss is approximately 11% for the subarray beamforming without BSC. The trade-off between communications and sensing is evaluated in Fig. <ref> for ε∈ [0,1], N=32, K=8, and G=4. As benchmarks, we also include the FD communications-only beamformer (i.e., 𝐅_C^(q^⋆)[m]) as well as the FD beamformer given in (<ref>), which depends on ε. The FD ISAC beamformer attains the performance of the FD communications-only beamformer for ε = 1, as expected, while its performance degrades as ε→ 0 since the trade-off becomes sensing-weighted.
Furthermore, we observe that our beamformer with BSC closely follows the FD beamformer, with a significant improvement over the beam-squint-corrupted and randomly selected designs. Now, we evaluate the performance of our CNN model for antenna selection. The training dataset 𝒟 is generated for D̅_1 = 1000 different data realizations, i.e., Π[m]. Then, for each realized data sample, synthetic noise is added onto the input data in order to make the model robust against corruptions and imperfections <cit.>. Specifically, D̅_2 = 100 noise realizations are obtained for three different SNR values as SNR_TRAIN∈{15, 20,25} dB. This process is repeated for each data realization i_1 = 1,⋯, D̅_1 and m =1,⋯, M when T=3, N'=16, N=32, K=8 and G=4. As a result, the whole dataset includes D̅ = D̅_1 ·D̅_2 · M · 3= 480,000 samples, each of which is of size 32× 19× 2. The whole dataset is divided into two parts: 30% for validation and 70% for training. The validation dataset is then used for testing the CNN model after being corrupted by synthetic noise at SNR_TEST. The stochastic gradient descent algorithm with momentum 0.9 is adopted during training, for which the cross-entropy loss function is used for classification as ℒ_CE = -1/D̅∑_i = 1^D̅∑_p = 1^P( ω_i,pln (κ_i,p) + (1- ω_i,p) ln (1-κ_i,p) ), where ω_i,p and κ_i,p are the input-output pair of the classification layer defined for the i-th data sample and p-th class. The learning rate for training is set to 0.01 and is halved after every 500 iterations. The performance of the CNN model is presented in Fig. <ref>. Specifically, Fig. <ref>(a) shows the classification (subarray selection) accuracies for validation and training. We can see that the CNN model successfully learns the training data, while the accuracy for the validation data is slightly lower, at approximately 90%. This is because the validation data is not used during training. We also present the antenna selection accuracy and the SE performance of the selected subarrays after employing Algorithm <ref> and the CNN in Fig. <ref>(b). In this setup, the validation data is corrupted by synthetic noise defined by SNR_TEST in order to assess the robustness of the proposed CNN model and Algorithm <ref>, for which the corrupted data (i.e., Π [m]) is used as input, and the corresponding hybrid beamformers are estimated. Then, the true channel data is used to compute the SE with the resulting hybrid beamformers. Our CNN model achieves up to 95% antenna selection accuracy for the corrupted input data with 10 dB noise, while the model-based approach in Algorithm <ref> is unable to provide accurate antenna selection results (approximately 10%) due to the corruptions in the data. This observation clearly shows the advantage of using learning models in imperfect scenarios. Specifically, the CNN accounts for the imperfections in the data and yields accurate subarray selection thanks to training with noisy communications and sensing data for robustness. When we compare the SE of the model-based and learning-based approaches (on the left axis), we can see that the corrupted input data causes the selection of inaccurate subarray indices, thereby leading to poor SE performance. Nevertheless, CNN-based antenna selection yields about 2% higher SE than the model-based approach in Algorithm <ref> in the presence of imperfect communications and sensing data.
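The training loss above is a standard binary-style cross entropy summed over classes; a minimal sketch of its evaluation (with assumed one-hot labels and softmax outputs standing in for the classification layer) is:

import numpy as np

def cross_entropy(labels_onehot, probs, tol=1e-12):
    # L = -(1/Dbar) * sum_i sum_p [w_ip ln(k_ip) + (1 - w_ip) ln(1 - k_ip)]
    w, k = labels_onehot, np.clip(probs, tol, 1 - tol)
    return -np.mean(np.sum(w * np.log(k) + (1 - w) * np.log(1 - k), axis=1))

rng = np.random.default_rng(3)
Dbar, P = 8, 15                                   # assumed batch size and class count
logits = rng.standard_normal((Dbar, P))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)   # softmax
labels = np.eye(P)[rng.integers(0, P, Dbar)]      # one-hot best-subarray indices
print(f"cross-entropy loss: {cross_entropy(labels, probs):.3f}")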
Finally, we present the computation times (in seconds) for the joint antenna selection and hybrid beamforming approach in Table <ref> and Table <ref>. In particular, Table <ref> shows the computation times of the model-based (Algorithm <ref>) and learning-based approaches for K∈{2,⋯, 8}, N=16 and G=1. While the complexity of the model-based approach grows geometrically, the learning-based approach with the CNN model in Fig. <ref> enjoys significantly lower computation times. The fast implementation of the CNN is attributed to employing parallel processing tools, e.g., graphics processing units (GPUs). We also present the computation times with respect to the group size G in Table <ref> when N=16 and K=8. Note that the time complexity when G=1 corresponds to traditional antenna selection, whereas G>1 is for the proposed GSS strategy. The results are in accordance with Fig. <ref>, where a performance analysis with respect to P and G is presented. We see that our GSS approach is approximately 11 and 630 times faster than traditional antenna selection for G=2 and G=4, respectively. Furthermore, the time complexity of learning-based antenna selection with the CNN model remains approximately the same for G∈{1,2,4} while providing a significant reduction (∼ 7000 times faster) in computing the best subarray index as compared to the model-based approach.
§ SUMMARY
In this paper, we investigated the antenna selection problem in the presence of beam-squint for THz-ISAC hybrid beamforming. We have shown that the impact of beam-squint on antenna selection causes significant performance loss in terms of SE due to the selection of inaccurate subarrays. The compensation for beam-squint during hybrid beamforming design is achieved via manifold optimization integrated with the proposed BSC algorithm. In particular, BSC provides a solution by updating the baseband beamformers to take into account the distortions in the analog domain due to beam-squint. Specifically, BSC exhibits approximately 15% improvement in terms of SE without requiring additional hardware components. In order to solve the joint antenna selection and hybrid beamforming problem, low complexity algorithms are proposed to reduce the number of possible subarray configurations. These include a sequential search algorithm, which reduces the memory usage during the computation of the subarray variables, and GSS, which reduces the number of possible subarray configurations by selecting the antennas in small groups. It is shown that the proposed GSS approach provides approximately a 95% reduction in terms of computational complexity while maintaining satisfactory performance with only about 6% SE loss.
http://arxiv.org/abs/2307.03914v1
20230708062942
Mixed Precision Iterative Refinement with Adaptive Precision Sparse Approximate Inverse Preconditioning
[ "Noaman Khan", "Erin Carson" ]
math.NA
[ "math.NA", "cs.NA" ]
Mixed Precision Iterative Refinement with Adaptive Precision Sparse Approximate Inverse Preconditioning
Noaman Khan, Erin Carson
July 8, 2023
=======================================================================================
Hardware trends have motivated the development of mixed precision algorithms in numerical linear algebra, which aim to decrease runtime while maintaining acceptable accuracy. One recent advance is an adaptive precision sparse matrix-vector product routine, which may be used to accelerate the solution of sparse linear systems by iterative methods. This approach is also applicable to the application of inexact preconditioners, such as sparse approximate inverse preconditioners used in Krylov subspace methods. In this work, we develop an adaptive precision sparse approximate inverse preconditioner and demonstrate its use within a five-precision GMRES-based iterative refinement method. We call this algorithm variant BSPAI-GMRES-IR. We then analyze the conditions for the convergence of BSPAI-GMRES-IR, and determine settings under which BSPAI-GMRES-IR will produce backward and forward errors similar to the existing SPAI-GMRES-IR method, the latter of which does not use adaptive precision in preconditioning. Our numerical experiments show that this approach can potentially lead to a reduction in the cost of storing and applying sparse approximate inverse preconditioners, although a significant reduction in cost may come at the expense of increasing the number of GMRES iterations required for convergence.
§ INTRODUCTION
We consider the problem of solving large, sparse linear systems Ax=b using iterative methods, where A is a nonsingular n× n matrix. In recent years, the emergence of low precision arithmetic, such as half precision, on modern hardware has received renewed attention. Lower precision has many benefits, including a reduction in computation, storage, and data movement costs. However, with fewer bits, we have greater roundoff error and a smaller range of representable numbers. This has motivated the development of mixed precision algorithms, in which lower and higher precisions are used selectively in order to improve performance, memory usage, and energy consumption without sacrificing accuracy; for details, see the recent surveys <cit.>. Iterative refinement (IR) is a long-standing technique for iteratively improving the solution to a linear system. The idea of iterative refinement is to first compute an initial solution x_0 to Ax=b, often using a direct solver like LU factorization. The refinement steps then consist of computing the residual r_i=b-Ax_i, solving the correction equation Ad_i=r_i, and updating the solution x_i+1=x_i+d_i. In the case that LU factorization is used for computing the initial solution, the LU factors can be reused for solving for the correction d_i. This is what we refer to as “standard IR” (SIR). Iterative refinement was originally proposed by Wilkinson in 1948, who suggested performing all the computations in a working precision denoted by u except the residual computation, which is performed in precision u^2. This variant has been analyzed by Wilkinson <cit.> and Moler <cit.>. In 1977, Jankowski and Woźniakowski <cit.> and Skeel <cit.> introduced fixed precision iterative refinement, performing all computations in precision u. Langou et al. in 2006 used single precision in the computation of the LU factorization, which can be up to twice as fast as double precision, and a working precision in the other parts of the computation <cit.>.
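As an illustration of the refinement loop just described, the following is a minimal dense Python sketch (precisions emulated with NumPy dtypes; float32 stands in for the low factorization precision u_f and longdouble for the residual precision u_r, which is an emulation choice, not the paper's MATLAB setup):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def lu_ir(A, b, iters=5):
    # Factorize once in low precision (u_f); refine in double (u); residual in extended (u_r).
    lu, piv = lu_factor(A.astype(np.float32))
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = (b.astype(np.longdouble) - A.astype(np.longdouble) @ x).astype(np.float64)
        d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)  # reuse LU factors
        x = x + d                                                         # update the solution
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((100, 100))
x_true = rng.standard_normal(100)
x = lu_ir(A, A @ x_true)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))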
The availability of half precision on modern GPUs motivated the development of iterative refinement variants which use three or more hardware precisions. Carson and Higham in 2018 proposed an iterative refinement scheme that uses three different precisions u_f, u, and u_r, which denote the factorization, working, and residual precisions, respectively; for an explanation see <cit.>. The authors also introduced a fourth precision, called the “effective precision” and denoted by u_s, which allows for general solvers to be used for the correction term d_i. For example, in standard iterative refinement, using the LU factors computed in precision u_f results in u_s = u_f. With u_f ≥ u and u_r ≤ u^2, the relative forward and backward errors will converge to level u when κ_∞(A)≤ u_f^-1, where κ_∞(A)=‖ A^-1‖_∞‖ A‖_∞ denotes the infinity-norm condition number of A. In <cit.>, the authors develop a GMRES-based iterative refinement algorithm (GMRES-IR) which uses the computed LU factors as preconditioners within GMRES to solve for the correction in each refinement step. Under the assumption that GMRES is executed in the working precision u, with products with the preconditioned matrix computed in double the working precision, u_s = u, and thus GMRES-IR is guaranteed to produce forward and backward errors to the working precision for more ill-conditioned problems than standard iterative refinement. Assuming that u_f ≥ u and u_r ≤ u^2, relative forward and backward errors at the level u are obtained for κ_∞(A)≤ u^-1/2u_f^-1. From a performance perspective, the requirement that the preconditioned matrix is applied in double the working precision is not attractive. In 2021, Amestoy et al. <cit.> proposed and analyzed a five-precision variant of GMRES-IR which, in addition to the working precision u, factorization precision u_f, and residual precision u_r, added two more precisions, namely u_g for the working precision within GMRES and u_p for the precision in which the preconditioned matrix is applied to a vector within GMRES. The variant with the setting u=u_g=u_p is used commonly in practice, although it is guaranteed to converge for a smaller range of condition numbers than the algorithm in <cit.>. Again assuming u_f ≥ u and u_r ≤ u^2, relative forward and backward errors at the level of the working precision are obtained for matrices having κ_∞(A) ≤ u^-1/3u_f^-2/3, although this restriction is likely overly pessimistic in practice. Most existing analyses of GMRES-based iterative refinement schemes assume that an LU factorization is computed for use as a left preconditioner within GMRES in each refinement step. But when A is very sparse, the performance of this approach may not be attractive, since the LU factorization of A may have considerable fill-in. In practice, inexact preconditioners are often used, such as incomplete LU factorizations or sparse approximate inverses (SPAI). Using SPAI has an advantage because it is, in theory, highly parallelizable, as each column can be computed independently, and its application involves only a sparse matrix-vector product (SpMV). In <cit.>, the authors propose a new variant called SPAI-GMRES-IR which, instead of LU factors, uses a sparse approximate inverse preconditioner (computed in a precision u_f with a given accuracy threshold ε, which controls the residual in each column) as a preconditioner within five-precision GMRES-IR.
The analysis of SPAI-GMRES-IR shows that as long as ε and u_f satisfy the constraints u_fκ_2(A^T) ≲ε≲ u^-1/2κ_∞(A)^-1/2, the constraints on the condition number for the forward and backward errors to converge are the same as for five-precision GMRES-IR with the full LU factors, although it is clear that convergence of the GMRES solves may be slower. In 2022, Graillat et al. <cit.> proposed an adaptive mixed precision algorithm for computing sparse matrix-vector products that adaptively selects the precision in which each matrix element is stored and applied, splitting the elements into buckets based on their magnitude and then using progressively lower precisions for the buckets with smaller elements. In this work, we apply the idea proposed in <cit.> to the application of the computed SPAI M within SPAI-GMRES-IR. We call this approach BSPAI-GMRES-IR, where the `B' stands for `bucketed'; the components of M are split into different buckets, with a different precision associated with each bucket. In Section <ref> we give background on SPAI preconditioners and the adaptive precision sparse matrix-vector product approach in <cit.>, and discuss bucketed SPAI and recent related approaches. In Section <ref>, we analyze under which conditions BSPAI-GMRES-IR will converge and bound the forward and backward errors. In Section <ref> we perform a set of numerical experiments which illustrate the behavior of BSPAI-GMRES-IR. In Section <ref> we conclude and discuss future work.
§ BACKGROUND
§.§ Notation
First we mention some notation which will be used in the rest of the text. Of particular importance are various condition numbers. For a given matrix A, a vector x, and a norm p, we define κ_p(A) = ‖ A^-1‖_p‖ A‖_p, cond_p(A) = ‖ |A^-1||A|‖_p, cond_p(A,x) = ‖ |A^-1||A||x|‖_p/‖ x ‖_p, where |A|=(|a_ij|). In case p is not specified, we assume the infinity norm. For unit roundoffs we will use the notation u, with subscripts on u to distinguish various precisions. For rounding error analysis, we will use the notation γ_k = ku/(1-ku), γ̃_k=cku/(1-cku), where c is a small constant independent of the problem dimension. A superscript on γ indicates that the corresponding u has that superscript as a subscript; for example, γ_k^f = ku_f/(1-ku_f). Quantities computed in finite precision will be denoted by hats.
§.§ Sparse Approximate Inverse Preconditioners
Sparse approximate inverse preconditioning is based on the idea of explicitly constructing a matrix M≈ A^-1. Although SPAI is a general algebraic preconditioning technique and is thus not expected to be effective for every problem, the use of SPAI-type preconditioners within Krylov subspace methods has the advantage that the application of the preconditioner involves only matrix-vector products, unlike, e.g., LU-based preconditioners which require two triangular solves. There are many potential techniques for computing a sparse approximate inverse M; see the survey <cit.>. A popular approach based on Frobenius norm minimization produces a sparse approximate inverse in unfactored form (i.e., a single matrix M), in which M is computed as the solution to min_𝒥∈𝒮‖ I-AM‖_F, where 𝒥∈𝔹^n× n is a prescribed binary sparsity pattern in the set of all possible binary sparsity patterns 𝒮∈𝔹^n× n. The benefit is that we can decouple this minimization problem as min_𝒥∈𝒮‖ I-AM‖_F^2 = ∑_k=1^n min_𝒥_k∈𝒮_k‖ e_k-Am_k‖_2^2, where 𝒥_k, m_k, and e_k represent the kth columns of 𝒥, M, and I, respectively. The computation of M then reduces to solving a linear least squares problem for each column m_k of M.
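The column-wise decoupling above can be sketched directly: for a fixed pattern, each column m_k solves a small dense least squares problem restricted to the "shadow" rows of the pattern. The following is a static-pattern illustration only (it is not the adaptive algorithm discussed below; the pattern of A is assumed as the prescribed 𝒥):

import numpy as np
import scipy.sparse as sp

def static_spai(A, J_cols):
    # A: sparse n x n; J_cols[k] = prescribed nonzero row indices of column m_k.
    n = A.shape[1]
    A = sp.csc_matrix(A)
    M = sp.lil_matrix((n, n))
    for k in range(n):
        Jk = np.asarray(J_cols[k])
        Ak = A[:, Jk].toarray()
        Ik = np.nonzero(Ak.any(axis=1))[0]            # shadow of J_k (nonzero rows)
        ek = (Ik == k).astype(float)                  # e_k restricted to the shadow
        mk, *_ = np.linalg.lstsq(Ak[Ik, :], ek, rcond=None)
        for idx, val in zip(Jk, mk):
            M[idx, k] = val
    return M.tocsr()

A = sp.random(50, 50, density=0.08, random_state=5) + 10 * sp.eye(50)
J = [A.tocsc()[:, [k]].nonzero()[0] for k in range(50)]   # pattern of A as prescribed J
M = static_spai(A, J)
print("|| I - A M ||_F =", np.linalg.norm(np.eye(50) - (A @ M).toarray()))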
From a performance point of view, the benefit is that these linear least squares problems can be solved independently and in parallel. Early works based on this approach used a fixed prescribed sparsity pattern 𝒥. The set 𝒥_k contains the column indices of A that are relevant for solving for column m_k. The nonzero rows of the submatrix A(:, 𝒥_k) are represented by the so-called “shadow” of 𝒥_k, ℐ_k = { i∈{1,…, n}: ∑_j∈𝒥_k |a_ij|≠ 0}, where a_ij is the (i,j) entry of A. Thus each term in the summation on the right in (<ref>) can be reduced to min_𝒥(m̅_k) = 𝒥_k‖e̅_k - A̅_k m̅_k ‖_2, where A̅_k = A(ℐ_k, 𝒥_k)∈ℝ^|ℐ_k|,|𝒥_k|, m̅_k = m_k(𝒥_k)∈ℝ^|𝒥_k|, e̅_k = e_k(ℐ_k)∈ℝ^|ℐ_k|, and 𝒥(m̅_k) is the binary sparsity pattern of m̅_k. This results in small least squares problems which can be solved directly, for example, via QR factorization. The deficiency of this approach is that it is hard to predict a priori a sparsity pattern that will ensure an effective preconditioner. The most common choices are the sparsity pattern of A, A^T, or a power of a sparsified A, although in general it is not guaranteed that the preconditioner produced will be effective. To overcome this difficulty, many authors have proposed iterative approaches. In one such approach, one starts with an initial sparsity pattern and adds nonzeros to this pattern until ‖ e_k - Am_k‖_2≤ε holds for some threshold ε or the maximum number of nonzeros has been reached. For a more detailed explanation of this type of algorithm, see, e.g., the work by Cosgrove et al. <cit.>, Grote and Huckle <cit.>, and Gould and Scott <cit.>. The most successful among these algorithms is that of Grote and Huckle <cit.>, which is commonly used to compute a SPAI preconditioner <cit.>, and which we use in the present work. To overcome the difficulty of choosing a priori a sparsity pattern that results in an effective preconditioner, the authors in <cit.> proposed an adaptive approach that dynamically determines the most beneficial nonzero indices to include. Algorithm <ref> is one specific variant of Grote and Huckle's algorithm, taken from <cit.>. The algorithm requires an input matrix A, an initial binary sparsity pattern 𝒥, a convergence tolerance ε, a maximum number of iterations α for each column, and a maximum number β of nonzeros added to the pattern in each iteration. For each column, the algorithm solves the linear least squares problem (<ref>) for the given initial sparsity pattern 𝒥 and computes the residual s̅_k (lines <ref>-<ref>). The column is considered finished when the 2-norm of the residual is less than the threshold ε. Otherwise, we continue adding entries to 𝒥. We construct an index set ℒ_k in line <ref> which contains the indices of the nonzero entries in s̅_k. For every element ℓ of the index set ℒ_k, we go through the ℓth row of A and choose the column indices of the nonzero entries which are not already in 𝒥_k, collecting them in a set 𝒩_ℓ. The set 𝒥̃_k, the union of the sets 𝒩_ℓ, contains the potential indices that can be added to 𝒥_k, out of which we select only a subset of the “most important” indices. There are many ways to determine which indices are most important. Grote and Huckle's technique considers a univariate minimization problem, through which the quantity ρ_jk computed in line <ref> gives a measure of the 2-norm of the new residual if index j is added to 𝒥_k. A well-known heuristic (see, e.g., <cit.>) is to mark indices as “acceptable” if their ρ_jk is less than the arithmetic mean ρ̅_k over all j.
Then we choose up to β of the best (smallest ρ_jk) acceptable indices to add (lines <ref>-<ref>) in each of the α iterations. In line <ref> there is no need to recompute the QR factorization fully in each step; the factorization can be updated by using the QR factorization computed in the previous step and the entries added to A̅_k; see <cit.>. Typical values for the parameters are ε∈ [0.1,0.5], α∈{1,…,5}, and β∈{3,…,8} <cit.>. In SPAI, although each column can theoretically be computed in parallel, the construction is often costly, especially for large-scale problems; see, e.g., <cit.>. SPAI memory requirements scale quadratically and the computational cost scales cubically in the number of nonzeros per row <cit.>. Thus applying the bucketing idea to the sparse approximate inverse preconditioner, in which low precision is used for the buckets containing elements of smaller magnitude, has the potential to significantly reduce this cost. For modern hardware like GPUs, the construction of efficient sparse approximate inverse computations has been the subject of much recent work; see, e.g., <cit.>.
§.§ Adaptive Precision Sparse Matrix-Vector Products
As mentioned, with the emergence of low precision arithmetic, such as half precision fp16 or bfloat16 on modern computers, mixed precision algorithms in numerical linear algebra have received renewed attention. Many variants of mixed precision algorithms have recently been proposed; see, for example, the works <cit.> on matrix multiplication. The works <cit.> proposed mixed precision iterative refinement methods based on preconditioned Krylov subspace methods. The authors in <cit.> proposed a general preconditioning technique based on a low-rank approximation of the error. A particularly fruitful idea is the concept of adaptive precision algorithms, in which the precisions used need not be determined a priori, but are instead dynamically set based on the data involved in the computation and perhaps some user-specified accuracy constraints. Often, the precisions chosen are proportional to the importance of the data, which is inherently application dependent. For example, the authors in <cit.> introduced an adaptive precision block Jacobi preconditioner with the idea of choosing the precision of each block based on its condition number. Amestoy et al. <cit.> introduced mixed precision block low-rank compression that partitions a low-rank matrix into several low-rank components of decreasing norm and stores each of them in a correspondingly decreasing precision. Ahmad et al. <cit.> introduced an algorithm for sparse matrix-vector products that switches the elements in the range [-1, 1] to single precision while keeping the other elements in double precision. The authors in <cit.> develop a “quantized” dot product algorithm, adapting the precision of each vector element based on its exponent. In recent work, which is the focus of the present paper, Graillat et al. <cit.> develop an adaptive precision sparse matrix-vector product algorithm with the idea of adapting the precision of each matrix element based on its magnitude. The elements of the matrix are split into different buckets and different precisions are used to store and compute with the elements in each bucket. Buckets with smaller elements are stored in lower precision. This approach is used to apply the matrix A to a vector within GMRES-IR with Jacobi preconditioning. We now give an overview of the results of <cit.>.
For matrix-vector products in a uniform precision, the Oettli-Prager <cit.>, <cit.> and Rigal-Gaches <cit.>, <cit.> theorems give the formula for the normwise backward error, ε_nw=min{ε:ŷ = (A+Δ A)x, ‖Δ A ‖≤ε‖ A ‖} = ‖ŷ-y‖/(‖ A‖‖ x‖). A bound on the normwise backward error for the uniform precision case is ε_nw≤ pu, where p is the maximum number of nonzero elements per row of A; see, e.g., <cit.>. The idea of the adaptive precision sparse matrix-vector product approach of Graillat et al. <cit.> is, for a given set of q precisions, i.e., u_1 < u_2 <⋯ < u_q, to split the elements of the matrix A into q buckets based on the magnitude of the elements. This approach splits the nonzero elements in each row i of the matrix into up to q buckets and then computes the partial inner products associated with each bucket in up to q different precisions. The partial inner products are then all summed in precision u_1. We briefly recall the notation, algorithm, and key points of the error analysis given in <cit.>. Let J_i denote the set of column indices of the nonzero elements in row i of A. Each row i of the matrix A will be partitioned into the q buckets B_ik⊂ [1,n] for k=1:q. How we define the buckets will affect the resulting normwise (or componentwise) backward error. Assume that we want to construct the buckets B_ik in such a way that the backward error obtained is at most of order O(ϵ), where ϵ is the user-defined target accuracy with ϵ ≥ u_1. We can define the buckets as B_ik = { j∈ J_i : |a_ij| ∈ P_ik}, with P_ik = (ϵ‖ A ‖/u_2, +∞) for k=1; P_ik = (ϵ‖ A ‖ /u_k+1, ϵ‖ A ‖/u_k] for k=2:q-1; and P_ik = [0, ϵ‖ A ‖/u_q] for k=q. The procedure for placing elements of a matrix A into buckets according to this rule is given in Algorithm <ref>. The partial inner product y_i^(k) = ∑_j∈ B_ik a_ijx_j associated with bucket B_ik is computed in precision u_k, and all partial inner products are accumulated in precision u_1 (the highest precision). This procedure is given in Algorithm <ref>. Theorem 3.1 in <cit.> states that if y=Ax is computed using this approach, then we have ε_nw≤ (q-1) u_1 + cϵ, where c= (1+ (q-1)u_1 )+ max_i∑_k=1^qp_ik^2(1+u_k)^2, and p_ik is the number of elements in B_ik. We note that Graillat et al. also provide different bucketing strategies that give guaranteed bounds on the componentwise backward error. The drawback of these is that the bucketing scheme depends on the values in the vector x to be multiplied, and thus the bucketing would need to be redone for each matrix-vector product encountered. Thus for practical reasons we restrict ourselves to the variant which provides normwise error bounds.
§ GMRES BASED ITERATIVE REFINEMENT WITH BSPAI
Our approach will be to apply the adaptive precision sparse matrix-vector product described in Section <ref> to the application of a sparse approximate inverse preconditioner M computed using <ref> within GMRES-based iterative refinement. The resulting algorithm, which we refer to as BSPAI-GMRES-IR, is given as Algorithm <ref>. Our aim is to derive the conditions under which BSPAI-GMRES-IR (Algorithm <ref>) will converge. We can determine the resulting backward and forward errors in GMRES when we use the adaptive precision SpMV to apply the preconditioner M within each GMRES iteration. We will assume here that matrix-vector products with A are computed in precision u_p within GMRES (where we will generally take u_p=u_g=u, using the notation of <cit.>).
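Before proceeding with the analysis, a minimal emulation of the bucketing rule and adaptive SpMV described above may be helpful (illustrative only; precisions are simulated via float casting, with two usable precisions u_1 = double, u_2 = single plus a drop bucket assumed):

import numpy as np

def bucketed_spmv(A, x, eps):
    # Precisions u = [u_1 (double), u_2 (single), 1 (drop)]; boundaries eps*||A||/u_{k+1}.
    u = [2.0**-53, 2.0**-24, 1.0]
    normA = np.linalg.norm(A, np.inf)
    cuts = [eps * normA / u[1], eps * normA / u[2]]
    absA = np.abs(A)
    B1 = np.where(absA > cuts[0], A, 0.0)                          # keep in double
    B2 = np.where((absA <= cuts[0]) & (absA > cuts[1]), A, 0.0)    # store/apply in single
    # Bucket 3 (the smallest elements) is dropped, i.e., "precision" u_3 = 1.
    y = B1 @ x                                                     # double-precision part
    y += (B2.astype(np.float32) @ x.astype(np.float32)).astype(np.float64)
    return y                                                       # accumulated in u_1

rng = np.random.default_rng(6)
A = rng.standard_normal((200, 200)) * np.logspace(0, -12, 200)     # widely varying magnitudes
x = rng.standard_normal(200)
for eps in (2.0**-53, 2.0**-37, 2.0**-24):
    err = np.linalg.norm(bucketed_spmv(A, x, eps) - A @ x) / (np.linalg.norm(A, np.inf) * np.linalg.norm(x, np.inf))
    print(f"eps = 2^{np.log2(eps):.0f}: normwise backward-error proxy = {err:.2e}")

As ε grows, more elements fall into the cheaper buckets and the observed error tracks the O(ε) target, mirroring the theorem quoted above.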
Note that we could, in principle, also use the adaptive precision SpMV to apply A to a vector; extending the analysis to this case is simple and the results will not be significantly different as long as u_p ≈ϵ_bspai. We give backward and forward error bounds for GMRES for this case as well below. Following <cit.> and <cit.>, let z_j = MA v̂_j be computed in each iteration of MGS-GMRES as described above, where A is applied in precision u_p and M is applied using the adaptive precision SpMV approach (Algorithm <ref>). Then (A+Δ A) v̂_j = ŵ_j, ‖Δ A ‖_F≤γ_q^p ‖A‖_F, and (M + Δ M) ŵ_j = ẑ_j, ‖Δ M‖_F ≤((q-1)u_1 + cϵ) ‖M‖_F. Then ẑ_j = (M+Δ M)(A+Δ A) v̂_j ≈ (MA + MΔ A + Δ M A)v̂_j = MAv̂_j + f_j, where f_j = (MΔ A + Δ M A)v̂_j. We can bound the norm of this quantity by ‖f_j ‖_2 ≲(γ_q^p + (q-1) u_1 + c ϵ) ‖M‖_F ‖A‖_F ‖v̂_j ‖_2 ≲ (q u_p + (q-1)u_1 + cϵ) ‖M ‖_F ‖A‖_F ‖v̂_j ‖_2. This means that we can apply <cit.> with ϵ_p = (qu_p + (q-1)u_1 + c ϵ) ‖M‖_F‖A‖_F/‖MA‖_F. Note also that we must apply the preconditioner M to the right-hand side r̂_i. Denoting s_i = Mr̂_i, the computed ŝ_i satisfies ŝ_i = (M+Δ M) r̂_i = s_i + Δ M r̂_i. We then have ‖ŝ_i - s_i ‖_∞ ≤((q-1)u_1+cϵ) ‖M‖_∞‖r̂_i ‖_∞ ≤((q-1)u_1+cϵ) κ_∞(M) ‖s_i ‖_∞. Letting Ã=MA, and assuming we are solving the n× n linear system Ãd_i=ŝ_i, the conclusions of <cit.> say that for MGS-GMRES run in working precision u_g, with products with à computed such that fl(Ãv) = Ãv + f, ‖f‖_2 ≲ϵ_p ‖Ã‖_F ‖v‖_2, as long as σ_min(Ã) ≳(k^1/2(qu_p + (q-1)u_1 + c ϵ) ‖M‖_F‖A‖_F/‖Ã‖_F + γ̃_kn^g) ‖Ã‖_F, then for some step k≤ n, the algorithm produces an approximate solution d̂_i satisfying (à + ΔÃ) d̂_i = ŝ_i + Δŝ_i, ‖ΔÃ‖_F ≲(k^1/2(qu_p + (q-1)u_1 + c ϵ) ‖M‖_F‖A‖_F/‖Ã‖_F + γ̃_kn^g) ‖Ã‖_F, ‖Δŝ_i‖_2 ≲γ̃_kn^g ‖ŝ_i‖_2 ≲ n^1/2γ̃_kn^g ‖s_i‖_∞. From (<ref>), we can write s_i - Ãd̂_i = ΔÃd̂_i - (ŝ_i - s_i ) - Δŝ_i, which we can bound using (<ref>), (<ref>), and (<ref>), giving ‖ s_i - Ãd̂_i ‖_∞ ≤‖ΔÃ‖_∞‖d̂_i‖_∞ + ‖ŝ_i - s_i‖_∞ + ‖Δŝ_i‖_∞ ≤ n (k^1/2(qu_p + (q-1)u_1 + c ϵ) ‖M‖_F‖A‖_F/‖Ã‖_F + γ̃_kn^g) ‖Ã‖_∞‖d̂_i‖_∞ + ((q-1)u_1+cϵ) κ_∞(M) ‖s_i ‖_∞ + n^1/2γ̃_kn^g ‖s_i‖_∞ ≤ n (k^1/2n(qu_p + (q-1)u_1 + c ϵ) κ_∞(M) + γ̃_kn^g) ‖Ã‖_∞‖d̂_i‖_∞ + ((q-1)u_1+cϵ) κ_∞(M) ‖s_i ‖_∞ + n^1/2γ̃_kn^g ‖s_i‖_∞ ≤ kn^2 ( u_g + (qu_p + (q-1)u_1+cϵ)κ_∞(M) ) ( ‖Ã‖_∞‖d̂_i‖_∞ + ‖s_i ‖_∞). Thus the normwise relative backward error of the system Ãd_i = s_i is bounded by ‖s_i - Ãd̂_i‖_∞/(‖Ã‖_∞‖d̂_i‖_∞ + ‖s_i ‖_∞) ≲ f(n,k)( u_g + (qu_p + (q-1)u_1+cϵ)κ_∞(M) ), and thus the relative error of the computed d̂_i is bounded by ‖d_i -d̂_i‖_∞/‖d_i‖_∞≲ f(n,k)( u_g + (qu_p + (q-1)u_1+cϵ)κ_∞(M) ) κ_∞(Ã), where f(n,k) = kn^2. From (<ref>) and (<ref>), we can say that if u_1≈ϵ≈ u_p, then the backward and forward errors in MGS-GMRES with the adaptive precision SpMV used to apply M will be approximately the same as in the case of a uniform precision SpMV; see <cit.>. We note that in the case where we use the adaptive precision SpMV also in applying the matrix A to a vector within GMRES, the bound for the normwise relative backward error in (<ref>) becomes ‖s_i - Ãd̂_i‖_∞/(‖Ã‖_∞‖d̂_i‖_∞ + ‖s_i ‖_∞) ≲ f(n,k)( u_g + (2(q-1)u_1+(c_A+c_M)ϵ)κ_∞(M) ), where we assume that the same buckets are used for both M and A, and c_A and c_M are the values of c in (<ref>) associated with A and M, respectively. Similarly, the relative forward error becomes ‖d_i -d̂_i‖_∞/‖d_i‖_∞≲ f(n,k)( u_g + (2(q-1)u_1+(c_A+c_M)ϵ)κ_∞(M) ) κ_∞(Ã). Thus if u_1≈ϵ, MGS-GMRES with the adaptive precision SpMV used for applying both M and A will produce backward and forward errors similar to the MGS-GMRES variant in <cit.> with the setting u_p = u_1.
§ NUMERICAL EXPERIMENTS
We perform numerical experiments to evaluate the performance of BSPAI-GMRES-IR by comparing it with the SPAI-GMRES-IR method of <cit.>. We stress that we only expect BSPAI-GMRES-IR to have a clear potential advantage over SPAI-GMRES-IR in the case u_f=u. Otherwise, for example, if u_f= half and u= single, SPAI-GMRES-IR stores the preconditioner entirely in precision u_f but applies it in precision u. BSPAI-GMRES-IR, on the other hand, stores the preconditioner in multiple precisions, where we must have u_1= single in order to enable reaching an effective application precision ϵ≈ u. We also note that this motivates future work in the direction of decoupling the storage and application precisions in adaptive precision sparse matrix-vector products. All the experiments are performed in MATLAB R2021a. The matrices we test are taken from the SuiteSparse Matrix Collection <cit.>. We run the experiments using four precisions: half, single, double, and quadruple. For the properties of these precisions, see Table <ref>. For half precision, we use the chop library[]. We use the MATLAB built-in datatypes for single and double precision and the Advanpix Multiprecision Computing Toolbox for quadruple precision; see <cit.>. The code for reproducing the experiments in this paper is available online[]. Matrices used in the experiments, along with their key properties, are listed in Table <ref>. We set the right-hand side to the vector with equal components and unit 2-norm in all tests. For the GMRES convergence tolerance, we set τ = 10^-4 in the case that the working precision is single and τ = 10^-8 in the case that the working precision is double, which corresponds to roughly the square root of the working precision. These values are the defaults used in the previous works <cit.>, <cit.>, and are also used in practical applications. In all invocations of both BSPAI-GMRES-IR and SPAI-GMRES-IR, we use the GMRES setting u_g=u_p=u, which is commonly used in practice. We tested the matrices in Table <ref> with a subset of the settings (u_f, u, u_r)= (double, double, quad), (u_f, u, u_r)= (single, double, quad), and (u_f, u, u_r)= (half, single, double), depending on whether SPAI-GMRES-IR converges with the given precisions and value of τ. We choose the identity matrix as the initial sparsity pattern for SPAI in all tests. When A has a zero entry on the diagonal, this results in a zero column in the SPAI preconditioner, as mentioned by Sedlacek <cit.>. Therefore we only choose problems with nonzero entries on the diagonal, but note that this could be remedied by either permuting A or using the initial sparsity pattern of A, which, when SPAI is run on A^T, guarantees that we obtain an M with nonzero rows <cit.>. In all tests, the matrices are preprocessed with column scaling such that the absolute value of the largest entry in every column of A^T is 1. This one-sided scaling was proposed in <cit.> to avoid overflow in the computation of the QR factorization due to low precision. To be specific, for obtaining M, we run SPAI on the scaled matrix A^T D, producing M̃, and then set M = M̃^T D, where D is the diagonal scaling matrix. For all tests, we use β=8, which is in the range suggested by Sedlacek <cit.>. For BSPAI-GMRES-IR, when u is double, we use the precisions u_1= double, u_2 = single, u_3 = half, and u_4=1. When u is single, we use the precisions u_1 = single, u_2 = half, and u_3=1. Note that setting the lowest precision u_q=1 enables the dropping of elements in M, as described in <cit.>.
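The column scaling just described can be sketched as follows (a small illustration of the preprocessing, with the scaling matrix D chosen so the largest magnitude in every column of A^T D is 1; the recovery of M from the scaled problem follows the text):

import numpy as np

def scale_columns(At):
    # D such that every column of At @ D has max absolute value 1 (avoids overflow in QR).
    d = 1.0 / np.abs(At).max(axis=0)
    return At * d, np.diag(d)

rng = np.random.default_rng(7)
A = rng.standard_normal((6, 6)) * np.logspace(0, 6, 6)    # badly scaled columns
At_scaled, D = scale_columns(A.T)
print(np.abs(At_scaled).max(axis=0))                      # all ones
# After computing M_tilde = SPAI(At_scaled), one would set M = M_tilde.T @ D per the text.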
For each linear system and each combination of precisions, we run BSPAI-GMRES-IR with various values of ϵ≥ u_1, and use the same value of ε for both BSPAI-GMRES-IR and SPAI-GMRES-IR. We report our results in a series of tables. The first column of each table lists the matrix name, and the second column indicates whether we use BSPAI or SPAI and the corresponding parameters. The third column gives the infinity-norm condition number of the preconditioned coefficient matrix. The fourth column gives information about the number of nonzeros and their storage precisions. The first number gives the total number of nonzeros, and the tuple that follows gives information about the precisions: element i in the tuple gives the number of nonzeros stored in precision u_i. The fifth column gives the storage cost of the BSPAI preconditioner with mixed precision storage as a percentage of the cost of the SPAI preconditioner with uniform precision storage (the lower the better). The final column gives information about the convergence of the iterative refinement process. The first number gives the total number of GMRES iterations over all refinement steps, and element i of the tuple that follows gives the number of GMRES iterations in refinement step i. Thus the number of elements in the tuple gives the number of iterative refinement steps required until convergence of the forward and backward errors to the level of the working precision.
§.§ Experiments with (u_f, u, u_r) = (double, double, quad)
Table <ref> shows the experiments for the setting (u_f, u, u_r)= (double, double, quad), with both ϵ=2^-53 and ϵ=2^-37. First, we note that where SPAI-GMRES-IR converges, BSPAI-GMRES-IR also converges, as predicted by our theoretical results, although of course the adaptive precision storage can result in a different total number of GMRES iterations across the refinement steps. For the matrix , using ε=0.1 with both ϵ=2^-53 and ϵ=2^-37, BSPAI-GMRES-IR takes 21 total GMRES iterations to converge to double precision accuracy while SPAI-GMRES-IR takes 14 total GMRES iterations. The storage (and computation) savings of the adaptive precision approach can be significant for this case; using ϵ = 2^-53 and ϵ=2^-37 requires only 74.9% and 42.6%, respectively, of the storage/computation cost of the uniform precision approach. This matrix perhaps represents a best-case scenario. For , we also see reasonable reductions in storage cost for the two choices of ϵ; note that although the choice ϵ=2^-37 results in significant storage savings, the number of GMRES iterations required increases significantly. For some matrices, such as and , there appears to be no benefit to the adaptive precision approach.
§.§ Experiments with (u_f, u, u_r) = (single, single, double)
Table <ref> shows the experiments for the setting (u_f, u, u_r)= (single, single, double) with u_1 = single, u_2= half, and u_3= 1, and with ϵ=2^-24 and ϵ=2^-18. In all tests, both SPAI-GMRES-IR and BSPAI-GMRES-IR converge to single precision accuracy.
For the value ϵ=2^-24, BSPAI-GMRES-IR takes about the same number of iterations as SPAI-GMRES-IR and requires on average 98.6% of the storage cost of the uniform precision approach. For the matrix , using ϵ=2^-18 requires 76.5% of the storage cost of uniform precision and converges in the same number of iterations as SPAI-GMRES-IR. For the matrices , , and , although using ϵ=2^-18 results in storage costs of 66.7%, 70.7% and 73.8% of the uniform precision approach, respectively, a greater number of iterations is required than in the uniform precision SPAI-GMRES-IR.
§ CONCLUSIONS AND FUTURE WORK
In this work we use an adaptive precision sparse approximate inverse preconditioner within mixed precision GMRES-based iterative refinement. Using the approach of Graillat et al. <cit.>, after computing a sparse approximate inverse in low precision, we place elements of the sparse approximate inverse preconditioner into buckets for a given set of precisions based on their magnitude. We then apply the preconditioner to a vector in mixed precision within five-precision GMRES-IR; we call this algorithm variant BSPAI-GMRES-IR. We analyze the behavior of the backward and forward errors of the mixed precision left-preconditioned GMRES method, which uses the bucketed sparse approximate inverse as a left preconditioner. Our analysis shows that if we choose u_1≈ε≈ u_p, then the normwise backward and forward errors will be close to those obtained in the case that we use uniform precision. This indicates that BSPAI-GMRES-IR will converge under the same conditions as SPAI-GMRES-IR. We perform a set of numerical experiments which show that the adaptive sparse matrix-vector product approach can reduce the cost of storing and applying the sparse approximate inverse preconditioner, although a significant reduction in cost often comes at the expense of increasing the number of GMRES iterations required for convergence. We note that it is possible to extend this approach to other preconditioners for Krylov subspace methods. We again stress that a fruitful potential area of future work is to extend the adaptive sparse matrix-vector product approach to decouple the storage and computation precisions. This would make the approach beneficial for existing cases where we would ideally like to store a matrix in lower precision and apply it to a vector in a higher precision, which is often the case within SPAI-GMRES-IR.
http://arxiv.org/abs/2307.05169v1
20230711110102
Construction of Linear Codes from the Unit Graph $G(\mathbb{Z}_{n})$
[ "Dr. Rupali S. Jain", "Dr. B. Surendranath Reddy", "Mr. Wajid M. Shaikh" ]
math.RA
[ "math.RA", "math.CO", "94B05, 05C50, 05C38" ]
1]Dr. Rupali S. Jain 2]Dr. B. Surendranath Reddy 3]Mr. Wajid M. Shaikh [1]Associate Professor, School of Mathematical Sciences, S.R.T.M. University, Nanded [2]Assistant Professor, School of Mathematical Sciences, S.R.T.M. University, Nanded [3]Research Scholar, School of Mathematical Sciences, S.R.T.M. University, Nanded Construction of Linear Codes from the Unit Graph G(ℤ_n) ============================================================== In this paper, we consider the unit graph G(ℤ_n), where n=p_1^n_1 or p_1^n_1p_2^n_2 or p_1^n_1p_2^n_2p_3^n_3 and p_1, p_2, p_3 are distinct primes. For any prime q, we construct q-ary linear codes from the incidence matrix of the unit graph G(ℤ_n) and determine their parameters. We also prove that the duals of the constructed codes have minimum distance either 3 or 4. Lastly, we state two conjectures on the diameter of the unit graph G(ℤ_n) and on linear codes constructed from the incidence matrix of the unit graph G(ℤ_n) for any integer n. § INTRODUCTION In 1990, the unit graph was first introduced by Grimaldi, R. P. <cit.> for ℤ_n: it is the graph with no loops or parallel edges in which x,y∈ℤ_n are adjacent if and only if x+y is a unit in ℤ_n. In 2010, Fish, W., Key, J. D., & Mwambene, E. <cit.> constructed linear codes from incidence matrices of line graphs of Hamming graphs. Key, J. D., & Rodrigues, B. G. <cit.> constructed codes from lattice graphs and examined their decoding techniques using permutation decoding. Similar types of results were obtained by several researchers <cit.>. In 2013, Dankelmann, P., Key, J. D., & Rodrigues, B. G. <cit.> generalized the relationship between the parameters of connected graphs and of the codes generated from their incidence matrices, and obtained upper bounds for the minimum distance of the dual codes. Several researchers have also constructed linear codes from adjacency matrices of some special graphs, as mentioned in <cit.>. Recently, in 2021, Annamalai, N., & Durairajan, C. <cit.> constructed linear codes from the incidence matrices of the unit graphs G(ℤ_p) and G(ℤ_2p), for an odd prime p. In this paper, we generalise their work by constructing linear codes from the incidence matrices of the unit graph G(ℤ_n), where n=p_1^n_1 or p_1^n_1p_2^n_2 or p_1^n_1p_2^n_2p_3^n_3 and p_1, p_2, p_3 are distinct primes. We also find the parameters of these codes and of their dual codes. Finally, we conclude by stating two conjectures. § PRELIMINARIES In this section, we recall definitions and results related to unit graphs and linear codes. Let ℤ_n denote the ring of integers modulo n. Here, we denote the units and non-units of ℤ_n by U(ℤ_n) and N_U(ℤ_n), respectively. <cit.>[Linear Code] Let 𝔽_q denote the finite field with q elements. A linear code C_q of length n is a subspace of 𝔽^n_q, and it is called a q-ary linear code. The dimension of a linear code C_q is the dimension of C_q as a vector space over the field 𝔽_q and is denoted by dim(C_q). <cit.>[Dual of a code] Let C_q be a linear code of length n over 𝔽_q. Then the dual of the code C_q is the orthogonal complement of the subspace C_q in 𝔽^n_q and is denoted by C^⊥_q. <cit.> Let C_q be a q-ary code of length n over a field 𝔽_q. Then C^⊥_q is a linear code of length n and dim(C_q^⊥)=n-dim(C_q). * Let x and y be vectors in 𝔽^n_q. Then the Hamming distance of x and y, denoted by d_C(x,y), is defined to be the number of places at which x and y differ. * Let x be a vector in 𝔽^n_q. Then the Hamming weight of x is defined to be the number of non-zero coordinates in x and is denoted by wt(x).
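As a quick illustration of these two definitions (a minimal sketch, representing vectors over 𝔽_q as integer tuples modulo q):

# Minimal sketch: Hamming weight and Hamming distance over F_q,
# with vectors represented as tuples of integers modulo q.
def hamming_weight(x):
    return sum(1 for xi in x if xi != 0)

def hamming_distance(x, y, q):
    # d_C(x, y) equals wt(x - y), with subtraction taken modulo q
    return hamming_weight(tuple((xi - yi) % q for xi, yi in zip(x, y)))

x, y = (1, 0, 2, 2), (1, 1, 2, 0)
assert hamming_weight(x) == 3
assert hamming_distance(x, y, q=3) == 2   # x and y differ in exactly two places

This also makes the identity noted next immediate.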
Clearly, wt(x-y)=d_C(x,y). <cit.>[Minimum Hamming weight] Let C_q be a linear code. Then minimum Hamming weight of C_q, denoted by wt(C_q), is defined as wt(C_q)=min{wt(x) | x∈ C_q & x≠ 0}. <cit.>[Minimum Hamming distance] Let C_q be a linear code. The minimum Hamming distance of code C_q, denoted by d(C_q), is defined as d(C_q)=min{d_C(x,y) | x,y∈ C_q & x≠ y}. Note that d(C_q)=wt(C_q). <cit.> A q-ary linear code C_q of length n, dimension k and minimum distance d is called [n,k,d]_q linear code. <cit.> A generator matrix of linear code C_q is a matrix H whose rows form a basis for C_q and a generator matrix H^⊥ of linear code C^⊥_q is called parity-check matrix of C_q. Let G=(V,E) be a graph with vertex set V and edge set E. For any x,y∈ V, [x,y] denote the edge between x and y and if [x,y]∈ E then we call x is adjacent to y. A graph is called simple if it does not have loops and parallel edges. A complete graph is a simple graph in which any distinct pair of vertices is joined by an edge. If the vertex set V of G can be partitioned into two non-empty subsets W_1 and W_2 such that each edge in G has one end in W_1 and one end in W_2 then G is called bipartite. A complete bipartite graph is a simple bipartite graph G, with bipartition V=W_1∪ W_2, in which every vertex in W_1 is joined to every vertex in W_2. An edge e of graph G is said to be incident with the vertex x, if x is the end vertex of e. In this case we also say that x is incident with e. The degree of vertex x is the number of edges of G incident with x and it is denoted by deg(x). If x∈ V and deg(x)≤deg(y) for all y∈ V then deg(x) is called minimum degree of G and it is denoted by δ(G). If for some positive integer k, deg(x)=k for every vertex of the graph of G, then G is called k-regular graph. A vertex x is said to be connected to a vertex y in a graph G if there is path in G from x to y. A graph is called connected if every two vertices are connected. A nontrivial closed trail in graph G is called cycle if its origin and internal vertices are distinct. A cycle of length k, i.e. with k edges is called k-cycle. The girth of graph G, is denoted by g_r(G), is the length of the shortest cycle contained in G. The distance between two vertices x and y, denoted by d(x,y), is the length of a shortest path from x to y. The diameter of a graph G is denoted by diam(G), is the maximum distance between any two vertices in G. i.e. diam(G)=Max{d(x,y) | x,y∈ V}. <cit.> Let G be a simple graph. The edge connectivity of G, denoted by λ(G), is the smallest number of edges in G whose deletion from G either leaves a disconnected graph or an empty graph. <cit.> Let R be a ring with nonzero identity. The unit graph of R, denoted by G(R), is a graph with vertex set as R and two distinct vertices x and y are adjacent if and only if x + y is a unit of R. <cit.> Let G=(V,E) be a connected graph with vertex set V. If diam(G)≤ 2 then edge connectivity of G is λ(G)=δ(G). <cit.> Let G=(V,E) be a connected bipartite graph, if diam(G)≤ 3 then edge connectivity of G is λ(G)=δ(G). <cit.> Let ℛ be a finite ring. Then the following statements hold for the unit graph of ℛ * If 2∉ U(ℛ), then the unit graph G(ℛ) is a |U(ℛ)|-regular graph. * If 2∈ U(ℛ), then for every x∈ U(ℛ) we have deg(x)=|U(ℛ)|-1 and for every x∈ N_U(ℛ) we have deg(x)=|U(ℛ)|. <cit.> Let R be a ring. Then * If |U(R)|= 2 then g_r(G)∈{3,4,6}. * If |U(R)|≥ 3 then g_r(G)∈{3,4}. <cit.> Let G=(V,E) be a connected graph and let H be a |V|× |E| incidence matrix for G. 
Then the binary code generated by H is C_2(H)= [|E|,|V|-1,λ(G)]_2. <cit.> Let G=(V,E) be a connected bipartite graph and let H be a |V|× |E| incidence matrix for G, and q be an odd prime. Then the q-ary code generated by H is C_q=[|E|,|V|-1,λ(G)]_q. <cit.> Let G be a connected graph with girth g_r(G) and even girth g_r(G)_e. Let H be an incidence matrix for G, C=C_q(H), where q is any prime, and d^⊥ be the minimum distance of C_q^⊥. If q=2 or g_r(G) is even, then d^⊥=g_r(G). § CONSTRUCTION OF LINEAR CODES FROM THE INCIDENCE MATRIX OF A UNIT GRAPH G(ℤ_P^N) In this section, we construct the binary and q-ary linear codes C_2 and C_q generated from the incidence matrix of the unit graph G(ℤ_p^n), where p is any prime number and n∈ℕ. We also examine the dual codes C^⊥_2 and C^⊥_q and their parameters. Let ℤ_p^n denote the ring of integers modulo p^n. Then U(ℤ_p^n)={x∈ℤ_p^n | g.c.d(x,p^n)=1} and N_U(ℤ_p^n)={x∈ℤ_p^n | x=α p }≠ϕ. Let G(ℤ_p^n) be the unit graph with vertex set V=ℤ_p^n, in which x,y∈ℤ_p^n are adjacent if and only if x+y∈ U(ℤ_p^n). Let G(ℤ_p^n) be a unit graph, where p is an odd prime. Then the graph G(ℤ_p^n) is connected with |V|=p^n and |E|=(p^n-1)ϕ(p^n)/2. Clearly, |V|=p^n. Note that for every x∈ U(ℤ_p^n) and y∈ N_U(ℤ_p^n), [x,y] is an edge in G(ℤ_p^n). Suppose this is not true; then x+y is not a unit, which implies p|x+y. But y∈ N_U(ℤ_p^n), i.e. y=α p, which implies p|x, a contradiction. Hence G(ℤ_p^n) is a connected graph. Since p is an odd prime, 2∈ U(ℤ_p^n). By Theorem (<ref>), we have deg(x)=|U(ℤ_p^n)|-1=ϕ(p^n)-1, ∀ x∈ U(ℤ_p^n) and deg(x)=ϕ(p^n), ∀ x∈ N_U(ℤ_p^n). Now |E| =∑_x∈ℤ_p^ndeg(x)/2 =[∑_x∈ U(ℤ_p^n)deg(x)+∑_x∈ N_U(ℤ_p^n)deg(x)]/2 = [ϕ(p^n)[ϕ(p^n)-1]+[p^n-ϕ(p^n)]ϕ(p^n)]/2 = ϕ(p^n)[ϕ(p^n)-1+p^n-ϕ(p^n)]/2, so |E| = ϕ(p^n)[p^n-1]/2. Let G(ℤ_p^n) be a unit graph, where p is an odd prime. Then the edge connectivity of G(ℤ_p^n) is λ(G(ℤ_p^n))=ϕ(p^n)-1. First we show that diam(G(ℤ_p^n))≤ 2. For any x,y∈ℤ_p^n we have the following cases. Case I: Let x,y∈ U(ℤ_p^n). Then there exists z∈ N_U(ℤ_p^n) such that x and y are adjacent to z. Hence d(x,y)≤ 2. Case II: If x,y∈ N_U(ℤ_p^n), then for any z∈ U(ℤ_p^n), we have that [x,z] and [z,y] are edges in G(ℤ_p^n), which gives d(x,y)≤ 2. Case III: If x∈ U(ℤ_p^n) and y∈ N_U(ℤ_p^n), then x is adjacent to y, which gives d(x,y)= 1. We get d(x,y)≤ 2 for all x,y∈ℤ_p^n, which gives diam(G(ℤ_p^n))≤ 2. Now by Theorem <ref>, we have λ(G(ℤ_p^n))=δ(G(ℤ_p^n))=ϕ(p^n)-1. Let G(ℤ_p^n) be a unit graph, where p is an odd prime and p^n≠ 3. Then g_r(G(ℤ_p^n))=3. If n=1 then p≥ 5. Note that 1,2∈ℤ_p and 1 is adjacent to 2. Since 0∈ℤ_p, both 1 and 2 are adjacent to 0. Hence we get a cycle of length 3, which concludes the proof in this case. If n>1 then for any x∈ U(ℤ_p^n) we have x+p∈ U(ℤ_p^n). Also x is adjacent to x+p. Note that x and x+p are adjacent to p. Hence we get a cycle of length 3, which implies g_r(G(ℤ_p^n))=3. Let G(ℤ_2^n) be a unit graph. Then x and y are adjacent if and only if x∈ U(ℤ_2^n) and y∈ N_U(ℤ_2^n). Let G(ℤ_2^n) be a unit graph. Suppose that x and y are adjacent for some x,y∈ℤ_2^n. That is, [x,y] is an edge in G(ℤ_2^n), which implies x+y∈ U(ℤ_2^n), hence x+y=2k+1 for some integer k. Assume that x,y∈ U(ℤ_2^n), which implies x=2k_1+1 and y=2k_2+1 for some integers k_1 and k_2. From this, we have 2|x+y, which contradicts our assumption. Similarly, if x,y∈ N_U(ℤ_2^n), then 2|x+y, which is again a contradiction. Hence x∈ U(ℤ_2^n) and y∈ N_U(ℤ_2^n). Conversely, let x∈ U(ℤ_2^n) and y∈ N_U(ℤ_2^n). Then x=2k_1+1 and y=2k_2, which implies 2∤ x+y, and hence [x,y] is an edge in G(ℤ_2^n). Let G(ℤ_2^n) be a unit graph.
Then G(ℤ_2^n) is a complete bipartite graph with bipartition W_1={x∈ℤ_2^n | x=2α for some α∈ℤ} and W_2={x∈ℤ_2^n | x=2α+1 for some α∈ℤ}. Let G(ℤ_2^n) be a unit graph. Then * |V|=2^n and |E|=2^2(n-1). * λ(G(ℤ_2^n))=ϕ(2^n). * By the definition of the unit graph G(ℤ_2^n), we have |V|=2^n. By Corollary (<ref>), G(ℤ_2^n) is a complete bipartite graph, which implies |E|=2^(n-1)· 2^(n-1)=2^2(n-1). * From Corollary (<ref>), G(ℤ_2^n) is a complete bipartite graph and hence λ(G(ℤ_2^n))=ϕ(2^n). Let G(ℤ_p^n) be a unit graph and H be a |V|× |E| incidence matrix of G(ℤ_p^n). * If p is an odd prime, then the binary code generated by H is a C_2(H)=[ϕ(p^n)(p^n-1)/2,p^n-1,ϕ(p^n)-1]_2 code over the finite field 𝔽_2. * If p=2, then for any odd prime q, the q-ary code generated by H is a C_q(H)=[2^2(n-1),2^n-1,2^(n-1)]_q code over the finite field 𝔽_q. * Let G(ℤ_p^n) be a unit graph, where p is an odd prime, and H be an incidence matrix of G(ℤ_p^n). By Theorem (<ref>), G(ℤ_p^n) is a connected graph and hence by Theorem (<ref>), the binary code generated by H is C_2(H)=[|E|,|V|-1,λ(G(ℤ_p^n))]_2. By Theorems (<ref>) and (<ref>), we get |E|=ϕ(p^n)(p^n-1)/2, |V|=p^n and the edge connectivity of G(ℤ_p^n) is λ(G(ℤ_p^n))=ϕ(p^n)-1. Hence we get C_2(H)=[ϕ(p^n)(p^n-1)/2, p^n-1, ϕ(p^n)-1]_2. * Let p=2. Then by Corollary (<ref>), G(ℤ_2^n) is a complete bipartite graph, which implies that G(ℤ_2^n) is a connected bipartite graph, and hence by Theorems (<ref>) and (<ref>), for any odd prime q, the q-ary code generated by H is C_q(H)=[|E|,|V|-1,λ(G(ℤ_2^n))]_q. Using Theorem (<ref>), we get |E|=2^2(n-1), |V|=2^n and the edge connectivity of G(ℤ_2^n) is λ(G(ℤ_2^n))=2^(n-1). Hence C_q(H)=[2^2(n-1), 2^n-1, 2^(n-1)]_q. Let C_q(H) and C_2(H) denote the codes generated by the incidence matrices of G(ℤ_2^n) and G(ℤ_p^n). Then * The dual of the code C_2(H) is C^⊥_2=[ (p^n-1)ϕ(p^n)/2, (p^n-1)[ϕ(p^n)-2]/2, 3 ]_2, where p^n≠ 3. * The dual of the code C_q(H) is C^⊥_q=[2^2(n-1),2^n(2^(n-2)-1)+1,4]_q. * It follows from Theorems (<ref>) & (<ref>) and Corollary (<ref>). * Since G(ℤ_2^n) is a complete bipartite graph, we have g_r(G)=4, and hence the result follows from Theorems (<ref>) and (<ref>). § CONSTRUCTION OF LINEAR CODES FROM THE INCIDENCE MATRIX OF A UNIT GRAPH G(ℤ_P^N_1_1P^N_2_2) In this section, we construct binary and q-ary linear codes from the incidence matrix of the unit graph G(ℤ_p^n_1_1p^n_2_2), where p_1 and p_2 are distinct primes and n_1,n_2∈ℕ. We also discuss the parameters of their dual codes. Let G(ℤ_n) be a unit graph. If 2∈ N_U(ℤ_n), then G(ℤ_n) is bipartite. We have W_1∪ W_2=ℤ_n, where W_1={x∈ℤ_n | x=2α for some α∈ℤ} and W_2={x∈ℤ_n | x=2α+1 for some α∈ℤ}. Now for any x,y∈ W_1, we have gcd(x+y,n)≥ 2, which implies that x is not adjacent to y. Similarly, for any x,y∈ W_2, we have gcd(x+y,n)≥ 2. Hence G(ℤ_n)=G(W_1∪ W_2) is a bipartite graph. Let G(ℤ_p^n_1_1p^n_2_2) be a unit graph, where p_1 and p_2 are distinct odd primes. Then G(ℤ_p^n_1_1p^n_2_2) is a connected graph and diam(G(ℤ_p^n_1_1p^n_2_2))≤ 2. We can rewrite ℤ_p^n_1_1p^n_2_2 =U(ℤ_p^n_1_1p^n_2_2)∪ N_U(ℤ_p^n_1_1p^n_2_2) = U(ℤ_p^n_1_1p^n_2_2)∪ N_p_1∪ N_p_2∪ N_p_1p_2 where N_p_1 ={x∈ℤ_p^n_1_1p^n_2_2 | x=α p_1 & p_2∤α} N_p_2 ={x∈ℤ_p^n_1_1p^n_2_2 | x=α p_2 & p_1∤α} N_p_1p_2 ={x∈ℤ_p^n_1_1p^n_2_2 | x=α p_1p_2} For x,y∈ℤ_p^n_1_1p^n_2_2, we have the following cases. Case I: If x,y∈ U(ℤ_p^n_1_1p^n_2_2), then [x,0] and [0,y] are edges in G(ℤ_p^n_1_1p^n_2_2), which gives d(x,y)≤ 2. Case II: If x∈ U(ℤ_p^n_1_1p^n_2_2) and y∈ N_U(ℤ_p^n_1_1p^n_2_2), then we have the following possibilities for y: (a) If y∈ N_p_1, then y=α p_1.
Suppose that [x,y]=[x,α p_1] is not an edge in G(ℤ_p^n_1_1p^n_2_2) which implies p_2| x+α p_1 i.e. x+α p_1=β p_2. Note that p_1∤β, hence β p_2∈ N_p_2. Now we claim that [x,β p_2] is an edge in G(ℤ_p^n_1_1p^n_2_2). Suppose this is no true. Then p_1| x+β p_2, we get p_1| 2x+α p_1, i.e. p_1| x which is a contradiction. Hence, we get x is adjacent to β p_2. Note that [β p_2,α p_1] is an edge in G(ℤ_p^n_1_1p^n_2_2). i.e. [x,β p_2] and [β p_2,α p_1] are edges in G(ℤ_p^n_1_1p^n_2_2). Hence d(x,y)≤ 2. (b) If y∈ N_p_2, then y=α p_2. By following the above procedure, we get d(x,y)≤ 2. (c) If y∈ N_p_1p_2, then y=α p_1p_2. Since x is a unit, we get [x,α p_1p_2] is an edge in G(ℤ_p^n_1_1p^n_2_2). For if this not hold, then we have p_1| x+α p_1p_2 i.e. p_1| x which is not possible. Similarly if p_2| x+α p_1p_2 then p_2| x. Hence d(x,y)=1≤ 2. Case III: If x,y∈ N_U(ℤ_p^n_1_1p^n_2_2) then, consider the following possibilities (a) If x,y∈ N_p_1 then x=α p_1 and y=α' p_1. Since, p_2∈ N_p_2 which implies N_p_2≠ϕ. Then for any β p_2∈ N_p_2,. [α p_1,β p_2] is an edges in G(ℤ_p^n_1_1p^n_2_2). Similarly, we can prove that [β p_2,α' p_1] is an edge in G(ℤ_p^n_1_1p^n_2_2). i.e. [x,β p_2] and [β p_2,y] are edges in G(ℤ_p^n_1_1p^n_2_2). Hence, we have d(x,y)≤ 2. (b) If x,y∈ N_p_2, then d(x,y)≤2 as in similar to case (a). (c) If x,y∈ N_p_1p_2, then x=α p_1p_2 and y=α' p_1p_2. Note that U(ℤ_p^n_1_1p^n_2_2)≠ϕ. Then for any z∈ U(ℤ_p^n_1_1p^n_2_2), we have [α p_1p_2,z] and [z,α' p_1p_2] are edges in G(ℤ_p^n_1_1p^n_2_2). Hence d(x,y)≤ 2. (d)If x∈ N_p_1 and y∈ N_p_2, then x is adjacent to y, hence d(x,y)=1≤ 2. (e) If x∈ N_p_1 and y∈ N_p_1p_2, then x=α p_1 & y=α' p_1p_2. Now, let β p_2∈ N_2 then z=α p_1+β p_2∈ U(ℤ_p^n_1_1p^n_2_2), from this we have [z,α' p_1p_2] is an edge. Also [α p_1,z] is an edge in G(ℤ_p^n_1_1p^n_2_2). Hence d(x,y)≤ 2. (f) If x∈ N_p_2 and y∈ N_p_1p_2, then using the above procedure, we can show that d(x,y)≤2. Hence, G(ℤ_p^n_1_1p^n_2_2) is connected graph and d(x,y)≤ 2, for all x,y∈ℤ_p^n_1_1p^n_2_2, which gives diam(G(ℤ_p^n_1_1p^n_2_2))≤ 2. Let G(ℤ_p^n_1_1p^n_2_2) be a unit graph, where p_1 and p_2 are odd primes. Then g_r(G(ℤ_p^n_1_1p^n_2_2))=3. Note that, p_1 is adjacent to p_2 and p_1+p_2 is adjacent to both p_1 & p_2, from this we conclude the result. Let G(ℤ_p^n_1_1p^n_2_2) be a unit graph, where p_1 and p_2 are odd distinct primes. Then * |V|=p^n_1_1p^n_2_2 and |E|=(p^n_1_1p^n_2_2-1)ϕ(p^n_1_1p^n_2_2)/2. * λ(G(ℤ_p^n_1_1p^n_2_2))=δ(G(ℤ_p^n_1_1p^n_2_2))=ϕ(p^n_1_1p^n_2_2)-1. * Here |V|=p^n_1_1p^n_2_2. Since 2∈ U(ℤ_p^n_1_1p^n_2_2), by Theorem (<ref>), deg(x)=ϕ(p^n_1_1p^n_2_2)-1 for all x∈ U(ℤ_p^n_1_1p^n_2_2) and deg(x)=ϕ(p^n_1_1p^n_2_2) for all x∈ N_U(ℤ_p^n_1_1p^n_2_2). Thus, |E| = ∑_x∈ Vdeg(x)/2 = ∑_x∈ U(ℤ_p^n_1_1p^n_2_2)deg(x)+∑_x∈ N_U(ℤ_p^n_1_1p^n_2_2)deg(x)/2 = ϕ(p^n_1_1p^n_2_2)(ϕ(p^n_1_1p^n_2_2)-1)+ (p^n_1_1p^n_2_2-ϕ(p^n_1_1p^n_2_2))ϕ(p^n_1_1p^n_2_2)/2 |E| = (p^n_1_1p^n_2_2-1)ϕ(p^n_1_1p^n_2_2)/2. * It follows from Theorem (<ref>) and Theorem (<ref>). Let G(ℤ_2^mp^n_1_1)=(V,E) be a unit graph, where p_1 is an odd prime. Then G(ℤ_2^mp^n_1_1) is a connected graph and diam(G(ℤ_2^mp^n_1_1))≤ 3. Let ℤ_2^mp^n_1_1=U(ℤ_2^mp^n_1_1)∪ N_U(ℤ_2^mp^n_1_1).Then N_U(ℤ_2^mp^n_1_1) =N_2∪ N_p_1∪ N_2p_1 where N_2 ={x∈ℤ_2^mp^n_1_1 | x=2α & p_1∤α} N_p_1 ={x∈ℤ_2^mp^n_1_1 | x=p_1α & 2∤α} N_2p_1 ={x∈ℤ_2^mp^n_1_1 | x=2α p_1} Now for any x,y∈ℤ_2^mp^n_1_1, we have the following cases: Case I: If x,y∈ U(ℤ_2^mp^n_1_1), then [x,0] and [0,y] are edges in G(ℤ_2^mp^n_1_1). Hence d(x,y)=1. 
Case II: If x∈ U(ℤ_2^mp^n_1_1) and y∈ N_U(ℤ_2^mp^n_1_1), then we have the following possibilities for y, (a) If y∈ N_2, then y=2α for some integer α. Suppose x is not adjacent to y which gives p_1| x+2α. Since p_1∈ N_P_1, which implies N_p_1≠ϕ. Let β p_1∈ N_p_1. Then z=2α +β p_1∈ U(ℤ_2^mp^n_1_1). Then [z,y] is an edge. Since x,z∈ U(ℤ_2^mp^n_1_1), x and y are adjacent to 0. Hence, we get [x,0], [0,z] and [z,y] are edges in G(ℤ_2^mp^n_1_1), which gives d(x,y)≤ 3. (b) If y∈ N_p_1, then y=α p_1. Note that x is not adjacent to y. Since 2∈ N_2 which implies N_2≠ϕ. Now for any, z=2β∈ N_2, y is adjacent to z. Now it is enough to show that x is adjacent to 2β or 2β, suppose x is not adjacent to 2β∈ N_2 and -2β∈ N_2, which implies p_1| x+2β and p_1| x-2β which gives p_1| 4β, hence p_1|β, which is contradiction to choice of β. Hence, [x,2β] or [x,-2β] is edges in G(ℤ_2^mp^n_1_1), which implies d(x,y)=2. (c) If y∈ N_2p_1, then y=2α p_1. Since x∈ U(ℤ_2^mp^n_1_1), x is adjacent to y, if this not hold then either p_1| x+y or 2| x+y. If p_1| x+2α p_1, then p_1| x which is a contradiction. In similar way, If 2| x+2α p_1, then 2| x which is also contradiction to x∈ U(ℤ_2^mp^n_1_1). Hence [x,y] is an edge in G(ℤ_2^mp^n_1_1) which gives, d(x,y)=1≤ 3. Case III: If x,y∈ N_U(ℤ_2^mp^n_1_1), then we have the following possibilities for x and y: (a) If x,y∈ N_2, then x=2α and y=2α'. For any z=β p_1∈ N_p_1, we have [x,z] and [z,y] are edges in G(ℤ_2^mp^n_1_1). Hence, we have d(x,y)=2. (b) If x,y∈ N_p_1, then from above procedure for any z∈ N_2, we have [x,z] and [z,y] are edges in G(ℤ_2^mp^n_1_1). Hence we get d(x,y)=2≤ 3. (c) If x,y∈ N_2p_1, then for any z∈ U(ℤ_2^mp^n_1_1), we have [x,z] and [z,y] are edges in G(ℤ_2^mp^n_1_1), which gives d(x,y)=2. (d) If x∈ N_2 and y∈ N_p_1 or vice versa, then x is adjacent to y, which implies d(x,y)=1. (e)If x∈ N_2 and y∈ N_2p_1, then x=2α and y=2α' p_1. Now for any z=β p_1∈ N_p_1, we have x+z∈ N_U(ℤ_2^mp^n_1_1) and x is adjacent to x+z. Since x+z∈ U(ℤ_2^mp^n_1_1), we have x+z is adjacent to y. Hence, we get [x,x+z] and [x+z,y] are edges in G(ℤ_2^mp^n_1_1), which gives d(x,y)=2≤ 3. (f) If x∈ N_p_1 and y∈ N_2p_1, then x=α p_1 and y=2α' p_1. Now for any z=2β∈ N_2, we have x is adjacent to z and z is adjacent to x+z both follows from above part. Note that x+z∈ U(ℤ_2^mp^n_1_1). Hence, we have x+z is adjacent to y, from this, we get [x,z], [z,x+z] and [x+z,y] are edges in G(ℤ_2^mp^n_1_1), which give d(x,y)≤ 3. Hence, G(ℤ_2^mp^n_1_1) is connected graph and d(x,y)≤ 3, for all x,y∈ℤ_2^mp^n_1_1. Therefore, we have diam(G(ℤ_2^mp^n_1_1))≤ 3. Let G(ℤ_2^mp^n_1_1) be a unit graph, where p_1 is an odd prime. * If 2^mp^n_1_1≠ 6 then g_r(G(ℤ_2^mp^n_1_1))=4 * If 2^mp^n_1_1= 6 then g_r(G(ℤ_2^mp^n_1_1))=6 Proof follows from Theorems (<ref>) and (<ref>). Let G(ℤ_2^mp^n_1_1) be a unit graph, where p_1 is an odd prime. Then * |V|=2^mp^n_1_1 and |E|=2^m-1p^n_1_1ϕ(2^mp^n_1_1). * λ(G(ℤ_2^mp^n_1_1))=δ(G(ℤ_2^mp^n_1_1))=ϕ(2^mp^n_1_1). * From the definition of unit graph, we have |V|=2^mp^n_1_1. Since 2∈ N_U(ℤ_2^mp^n_1_1) from Theorem (<ref>), we get G(ℤ_2^mp^n_1_1) is ϕ(2^mp^n_1_1)-regular graph and hence, we have |E|=2^mp^n_1_1ϕ(2^mp^n_1_1)/2=2^m-1p^n_1_1ϕ(2^mp^n_1_1). * From Lemma (<ref>) and Theorems (<ref>) & (<ref>), we get λ(G(ℤ_2^mp^n_1_1))=δ(G(ℤ_2^mp^n_1_1))=ϕ(2^mp^n_1_1). Let G(ℤ_n) be a unit graph, where n=p^n_1_1p^n_2_2 and p_1 & p_2 are distinct primes. Let H be a |V|× |E| incidence matrix of G(ℤ_n). 
* If 2∈ U(ℤ_n), then binary code generated by H is a C_2(H)=[(n-1)ϕ(n)/2,n-1,ϕ(n)-1]_2 code over finite field 𝔽_2. * If 2∈ N_U(ℤ_n), then for any odd prime q, the q-ary code generated by H is a C_q(H)=[nϕ(n)/2,n-1,ϕ(n)]_q code over finite field 𝔽_q. * If 2∈ U(ℤ_p^n_1_1p^n_2_2), then p_1 and p_2 are odd primes. By Theorem (<ref>), G(ℤ_p^n_1_1p^n_2_2)=(V,E) is a connected graph and hence by Theorem (<ref>), binary code generated by H is C_2(H)=[|E|,|V|-1,λ(G(ℤ_p^n_1_1p^n_2_2))]_2. Now from Theorem (<ref>), we get |E|= (n-1)ϕ(n)/2, |V|-1=n-1 and λ(G(ℤ_p^n_1_1p^n_2_2))=ϕ(n)-1. * If 2∈ N_U(ℤ_p^n_1_1p^n_2_2), then either p_1=2 or p_2=2. By Theorem (<ref>) and Lemma (<ref>), G(ℤ_p^n_1_1p^n_2_2)=(V,E) is a connected bipartite graph and hence by Theorems (<ref>) and (<ref>), q-ary code generated by H is C_q(H)=[|E|,|V|-1,λ(G(ℤ_2^mp^n_1_1))]_q. Now using Theorem (<ref>), we conclude the result. Let C_q(H) and C_2(H) denote the linear codes generated from incidence matrices of G(ℤ_2^mp^n_1_1) and G(ℤ_p^n_1_1p^n_2_2). Then * Dual of code C_2 is C^⊥_2=[(n-1)ϕ(n)/2,(n-1)[ϕ(n)-2]/2,3]_2, where n=p^n_1_1p^n_2_2. * Dual of code C_q is C^⊥_q=[nϕ(n)/2,n(ϕ(n)-2)+2/2,4]_q, where n=2^mp^n_1_1≠ 6. * From Theorem (<ref>), diam( C^⊥_2)=(n-1)[ϕ(n)-2]/2. By Theorem (<ref>) and Corollary (<ref>), d(C^⊥_2)=3. * Proof follows from Theorems (<ref>) & (<ref>) and Corollary (<ref>). § CONSTRUCTION OF LINEAR CODES FROM THE INCIDENCE MATRIX OF A UNIT GRAPH G(ℤ_P_1^N_1P_2^N_2P_3^N_3) In this section, we extend the result for three distinct primes p_1, p_2 and p_3. Let G(ℤ_p^n_1_1p^n_2_2p^n_3_3) be a unit graph, where p_1, p_2 and p_3 are distinct odd primes. Then G(ℤ_p^n_1_1p^n_2_2p^n_3_3) is connected graph and diam(G(ℤ_p^n_1_1p^n_2_2p^n_3_3))≤ 2. Let ℤ_p^n_1_1p^n_2_2p^n_3_3=U(ℤ_p^n_1_1p^n_2_2p^n_3_3)∪ N_U(ℤ_p^n_1_1p^n_2_2p^n_3_3). Also we can express N_U(ℤ_p^n_1_1p^n_2_2p^n_3_3)=⋃_i=1^3N_p_i⋃_i≠ jN_p_ip_j⋃ N_p_1p_2p_3, where N_p_i={x∈ℤ_p^n_1_1p^n_2_2p^n_3_3 | x=α p_i & p_j∤α, ∀ i≠ j }, N_p_ip_j={x∈ℤ_p^n_1_1p^n_2_2p^n_3_3 | x=α p_ip_j & p_k∤α, k≠ i,j} and N_p_1p_2p_3={x∈ℤ_p^n_1_1p^n_2_2p^n_3_3 | x=α p_1p_2p_3}. Now for any x,y∈ℤ_p^n_1_1p^n_2_2p^n_3_3 We have the following cases: Case I: If x,y∈ U(ℤ_p^n_1_1p^n_2_2p^n_3_3), then [x,0] and [0,y] are edges. Hence, d(x,y)≤ 2. Case II: If x∈ U(ℤ_p^n_1_1p^n_2_2p^n_3_3) and y∈ N_U(ℤ_p^n_1_1p^n_2_2p^n_3_3), then we have the following possibilities for y: (a) If y∈ N_p_i, then y=α p_i. Suppose x and y are not adjacent then p_j| x+α p_i for i≠ j which implies x+α p_i=β p_j. Note that p_i∤β, hence β p_jp_k∈ N_p_jp_k and -β p_jp_k∈ N_p_jp_k for i≠ j,k. Now our claim is either [x,β p_jp_k] or [x,-β p_jp_k] is edge in G(ℤ_p^n_1_1p^n_2_2p^n_3_3). Suppose both not hold which implies p_i| x+β p_jp_k and p_i| x-β p_jp_k, from this we get p_i| 2x, this implies p_i| x which is a contradiction. Now we prove y is adjacent to β p_jp_k, suppose this is not true, which implies p_i|α p_i+β p_jp_k, from this we have, p_i|β which is a contradiction. Similar contradiction occurs, when we assuming p_j|α p_i+β p_jp_k and p_k|α p_i+β p_jp_k. Hence [α p_i,β p_jp_k] is an edge. In similar way, we can show that [α p_i,-β p_jp_k] is an edge in G(ℤ_p^n_1_1p^n_2_2p^n_3_3). Hence, there exist z∈ N_p_jp_k such that [x,z] and [z,y] are edges in G(ℤ_p^n_1_1p^n_2_2p^n_3_3), from this we concluded that d(x,y)≤ 2. (b) If y∈ N_p_ip_j, then y=α p_ip_j. Suppose x and y are not adjacent then p_k| x+α p_ip_j for k≠ i,j, which implies x+α p_ip_j=β p_k. Note that p_i∤β and p_j∤β which gives β p_k∈ N_p_k. 
Now we prove that [x,β p_k] is an edge, suppose this is not an edge, which implies p_i| x+β p_k for some i≠ k. Hence p_i| x+x+α p_ip_j this implies p_i| 2x, which is a contradiction. [x,β p_k] is an edge and using procedure in (a), we can show that [β p_k, α p_ip_j]=[β p_k,y]. Hence d(x,y)≤ 2. (c) If y∈ N_p_1p_2p_3, then [x,α p_1p_2p_3] is an edge in G(ℤ_p^n_1_1p^n_2_2p^n_3_3). Hence d(x,y)=1. Case III: If x,y∈ N_U(ℤ_p^n_1_1p^n_2_2p^n_3_3), then we have the following possibilities: (a) If x,y∈ N_p_i, then x=α p_i and y=α' p_i. Since N_p_jp_k≠ϕ. For any β p_jp_k∈ N_p_jp_k for i≠ j,k, we have [x,β p_j p_k] and [β p_j p_k,y] are edges in G(ℤ_p^n_1_1p^n_2_2p^n_3_3). Hence, d(x,y)≤ 2. (b) If x,y∈ N_p_ip_j, then x=α p_ip_j and y=α' p_ip_j. Let β p_k∈ N_p_k for k≠ i,j. Then [x,β p_k] and [β p_k,y] are edges in G(ℤ_p^n_1_1p^n_2_2p^n_3_3). Hence, d(x,y)≤ 2. (c) x,y∈ N_p_1p_2p_3, then for any z∈ U(ℤ_p^n_1_1p^n_2_2p^n_3_3), [x,z] and [z,x] are edges in G(ℤ_p^n_1_1p^n_2_2p^n_3_3). Hence, d(x,y)≤ 2. (d) If x∈ N_p_i and y∈ N_p_j for i≠ j, then x=α p_i and y=α' p_j. Assume that [x,y] is not an edge, which gives p_k| x+y, for k≠ i,j. Now our claim is that [x+y,x] and [y,x+y] are edges. Suppose x+y is not adajcent to x, hence p_m| 2x+y for some m=1,2,3. If p_i| 2x+y i.e. p_i| 2α p_i+α' p_j, then p_i|α', which is a contradiction. Similarly, if p_j| 2x+y, then p_j|α and if p_k| 2x+y i.e. p_k| x+y+x, then p_k| x, since p_k| x+y. Hence, [x+y,x] is an edge in G(ℤ_p^n_1_1p^n_2_2p^n_3_3). In similar way, we can show that x+y is adjacent to y, which gives d(x,y)≤ 2. (e) If x∈ N_p_ip_j and y∈ N_p_j, then x=α p_ip_j and y=α' p_j. Note that for any z=β p_k, for k≠ i,j, x is adjacent to z. Now assuming that y=α p_j∈ N_p_j and β p_k∈ N_p_k are not adjacent, then p_i|α p_j+β p_k. Since β p_k∈ N_p_k, which implies -β p_k∈ N_p_k this gives [y,-β p_k] is an edge. Hence, y is adjacent to either β p_k or -β p_k, which gives d(x,y)≤ 2. (f) If x∈ N_p_ip_j and y∈ N_p_jp_k for i≠ k, then x=α p_ip_j and y=α'p_jp_k. Now for any β p_i∈ N_p_i, we have z=(β p_i+α' p_j)p_k+α p_ip_j∈ U(ℤ_p^n_1_1p^n_2_2p^n_3_3) such that [x,z] and [z,y] are edges in G(ℤ_p^n_1_1p^n_2_2p^n_3_3). To see this, suppose x is not adjacent to z which implies p_i|α p_ip_j+(β p_i+α' p_j)p_k+α p_ip_j, we get p_i|α' which is not possible. Similarly , if p_j|α p_ip_j+(β p_i+α' p_j)p_k+α p_ip_j, then p_j|β and if p_k|α p_ip_j+(β p_i+α' p_j)p_k+α p_ip_j, then p_k| 2α both are not possible, which gives x is adjacent to z. Hence, we get d(x,y)≤ 2. (g) If x∈ N_p_ip_j and y∈ N_p_k for k≠ i,j, then [x,y] is an edge in G(ℤ_p^n_1_1p^n_2_2p^n_3_3) gives d(x,y)=1. Hence, G(ℤ_p^n_1_1p^n_2_2p^n_3_3) is connected graph and d(x,y)≤ 2, for all x,y∈ℤ_p^n_1_1p^n_2_2p^n_3_3, which gives, diam(G(ℤ_p^n_1_1p^n_2_2p^n_3_3))≤ 2. Let G(ℤ_p^n_1_1p^n_2_2p^n_3_3) be a unit graph, where p_1,p_2 and p_3 are odd primes. Then g_r(G(ℤ_p^n_1_1p^n_2_2p^n_3_3))=3. The result follows from p_1 is adjacent to p_2p_3 and p_1+p_2p_3 is adjacent to both p_1 & p_2p_3. Let G(ℤ_p^n_1_1p^n_2_2p^n_3_3) be a unit graph, where p_1,p_2 and p_3 are odd primes. Then * |V|=p^n_1_1p^n_2_2p^n_3_3 and |E|=(p^n_1_1p^n_2_2p^n_3_3-1)ϕ(p^n_1_1p^n_2_2p^n_3_3)/2. * λ(G(ℤ_p^n_1_1p^n_2_2p^n_3_3))=δ(G(ℤ_p^n_1_1p^n_2_2p^n_3_3))=ϕ(p^n_1_1p^n_2_2p^n_3_3)-1. * Clearly, |V|=p^n_1_1p^n_2_2p^n_3_3. Since 2∈ U(ℤ_p^n_1_1p^n_2_2p^n_3_3) by Theorem (<ref>) deg(x)=ϕ(p^n_1_1p^n_2_2p^n_3_3)-1 for all x∈ U(ℤ_p^n_1_1p^n_2_2p^n_3_3) and deg(x)=ϕ(p^n_1_1p^n_2_2p^n_3_3) for all x∈ N_U(ℤ_p^n_1_1p^n_2_2p^n_3_3). 
Thus, |E| = ∑_x∈ Vdeg(x)/2 = ∑_x∈ U(ℤ_p^n_1_1p^n_2_2p^n_3_3)deg(x)+∑_x∈ N_U(ℤ_p^n_1_1p^n_2_2p^n_3_3)deg(x)/2 = ϕ(p^n_1_1p^n_2_2p^n_3_3)(ϕ(p^n_1_1p^n_2_2p^n_3_3)-1)+ (p^n_1_1p^n_2_2p^n_3_3-ϕ(p^n_1_1p^n_2_2p^n_3_3))ϕ(p^n_1_1p^n_2_2p^n_3_3)/2 |E| = (p^n_1_1p^n_2_2p^n_3_3-1)ϕ(p^n_1_1p^n_2_2p^n_3_3)/2. * It follows from Theorem (<ref>) and Theorem (<ref>). Let G(ℤ_2^mp^n_1_1p^n_2_2) be a unit graph, where p_1 and p_2 are odd primes. Then G(ℤ_2^mp^n_1_1p^n_2_2) is connected graph and diam( G(ℤ_2^mp^n_1_1p^n_2_2))≤ 3. Let ℤ_2^mp^n_1_1p^n_2_2=U(ℤ_2^mp^n_1_1p^n_2_2)∪ N_U(ℤ_2^mp^n_1_1p^n_2_2) and N_U(ℤ_2^mp^n_1_1p^n_2_2)=N_2∪ N_p_1∪ N_p_2∪ N_2p_1∪ N_2p_2∪ N_p_1p_2∪ N_2p_1p_2 where N_2={x∈ℤ_2^mp^n_1_1p^n_2_2 | x=2α & p_1∤α & p_2∤α} N_p_i={x∈ℤ_2^mp^n_1_1p^n_2_2 | x=α p_i & 2∤α & p_j∤α, i≠ j } N_2p_i={x∈ℤ_2^mp^n_1_1p^n_2_2 | x=2α p_i & p_j∤α, i≠ j } N_p_1p_2={x∈ℤ_2^mp^n_1_1p^n_2_2 | x=α p_1p_2 & 2∤α} and N_2p_1p_2={x∈ℤ_2^mp^n_1_1p^n_2_2 | x=α 2p_1p_2}. Now for any x,y∈ℤ_2^mp^n_1_1p^n_2_2, we have the following cases: Case I: If x,y∈ U(ℤ_2^mp^n_1_1p^n_2_2), then x and y adjacent to 0, which gives d(x,y)=1≤ 3. Case II: If x∈ U(ℤ_2^mp^n_1_1p^n_2_2) and y∈ N_U(ℤ_2^mp^n_1_1p^n_2_2), then we have the following possibilities for y: (a) If y∈ N_2, then y=2α. Suppose x is not adjacent to y. For any z=β p_1p_2, we have y+z ∈ U(ℤ_2^mp^n_1_1p^n_2_2). Now our claim is that, [y,y+z] is an edge in G(ℤ_2^mp^n_1_1p^n_2_2). If this is not true, then 2| 2y+z, which gives 2|β. Similarly, if p_i| 4α+β p_1p_2, then p_i|α for i=1,2 in both cases we get contradiction. Since, y+z,x ∈ U(ℤ_2^mp^n_1_1p^n_2_2), which gives [y+z,0] and [0,x] are edges in G(ℤ_2^mp^n_1_1p^n_2_2). Hence, we get d(x,y)≤ 3. (b) If y∈ N_p_i, for i=1,2, then x is not adjacent to y. First we show that x is adjacent to some z =2β p_j∈ N_2p_j for i≠ j. Suppose x is not adjacent to z=2β p_j, which implies p_i| x+z. Note that p_j∤ x+z and 2∤ x+z. Since z=2β p_j∈ N_2p_j, which gives -2β p_j∈ N_2p_j. If p_i| x-2β p_j, then p_i|β, which is not possible. Hence, there exits z∈ N_2p_j such that x is adjacent to z. Clearly, y is adjacent to z, for all z∈ N_2p_j. Hence, we get d(x,y)=2. (c) If y∈ N_2p_i, then y=2α p_i, for i=1,2. Suppose x is not adjacent to y. For any z=β p_j∈ N_p_j for i≠ j, we have y+z ∈ U(ℤ_2^mp^n_1_1p^n_2_2). Now y is adjacent to y+z, suppose not then we have following possibilities. If p_i| 2y+z, then p_i|β and if p_j| 2y+z, then p_j|α. Similarly, if 2| 2y+z, then 2|β, which are not possible. Since, y+z,x∈ U(ℤ_2^mp^n_1_1p^n_2_2), we have y+z and x is adjacent to 0. Hence, we get [y,y+z], [y+z,0] and [0,x] are edges in G(ℤ_2^mp^n_1_1p^n_2_2), which implies d(x,y)≤ 3. (d) If y∈ N_2p_1p_2, then x is adjacent to y, which gives d(x,y)=1. Case III: If x,y ∈ N_U(ℤ_2^mp^n_1_1p^n_2_2), then we have the following possibilities: (a) If x,y ∈ N_2, then for any z∈ N_p_1p_2, we have x,y are adjacent to z. Hence, we get d(x,y)≤ 3. (b) If x∈ N_2 and y∈ N_p_i, then x=2α and y=α' p_i. Suppose x is not adjacent to y, then p_j| x+y, for i≠ j i.e. p_j| 2α+α' p_i, which gives 2α +α' p_i=β p_j. Note that 2∤β and p_i∤β this implies β p_j∈ N_p_j and β p_ip_j∈ N_p_ip_j. First we prove that y=α' p_i is adjacent to z=α' p_i+β p_j, suppose this is not true, then we have, 2| z+y i.e. 2| 2α'p_i+β p_j, which gives 2|β which is a contradiction. Similar, contradiction occurs, when p_i| y+z and p_j| y+z. Now we prove that z is adjacent to u=β p_ip_j∈ N_p_ip_j, suppose it is not holds, which implies 2| z+u i.e. 
2|α'p_i+β p_j+β p_ip_j, we get 2|α' p_i+2α +α'p_i+β p_i, which implies 2|β which is not possible. Similar contradiction occurs, when p_i| u+z and p_j| u+z. Finally, observe that u=β p_ip_j is adjacent to x=2α. Hence, we get [y,y+z], [y+z,u] and [u,x] are edges in G(ℤ_2^mp^n_1_1p^n_2_2), which implies that d(x,y)≤ 3. (c) If x∈ N_2 and y∈ N_2p_i, for i=1,2 then x=2α and y=2α' p_i. First we prove that x is adjacent to some, z∈ N_p_j, for i≠ j. Let β p_j∈ N_p_2 and assume that x is not adjacent to β p_j, which implies that p_i| x+β p_j. Note that β p_j∈ N_p_2, which gives -β p_j∈ N_p_j and if x is not adjacent to -β p_j, then p_i| x-β p_j, this implies p_i| x, which is a contradiction. Hence, we get z∈ N_p_j such that x is adjacent to z. Note that y=2α' p_i is adjacent to every z in N_p_j. Hence, we get d(x,y)≤ 3. (d) If x∈ N_2 and y∈ N_p_1p_2, then x is adjacent to y, which gives d(x,y)=1. (e) If x∈ N_2 and y∈ N_2p_1p_2, then x=2α and y=2α' p_1p_2. Note that for any z=β p_1p_1, we have x+z∈ U(ℤ_2^mp^n_1_1p^n_2_2). Then x is adjacent to x+z. Since x+z is unit, x+z is adjacent y. Hence, we have [x,x+z] and [x+z, y] are edges in G(ℤ_2^mp^n_1_1p^n_2_2), which gives d(x,y)≤ 3. (f) If x,y∈ N_P_i, then x=α p_i and y=α' p_i for i=1,2. Note that, for any x∈ N_p_i and z∈ N_2p_j, for i≠ j, we have x is adjacent to z, which gives d(x,y)≤ 2. (g) If x∈ N_p_i and y∈ N_p_j, for i≠ j then x=α p_i and y=α' p_j. Note that x is not adjacent to y, we prove that x+y is adjacent to x and y. Assume that x+y is not adjacent to x, which gives 2| 2x+y, we get 2|α' which is not possible and if p_i| 2x+y, then p_i|α' which is a contradiction. Similar contradiction occurs, if p_j| 2x+y. Hence we get x is adjacent to x+y. Similarly, we can show that y is adjacent to x+y, which gives d(x,y)≤ 3. (h) If x∈ N_p_i and y∈ N_2p_i, then x=α p_i and y=2α' p_i. Let β p_j∈ N_p_j, for i≠ j. Then z=2(β p_j+α p_i )p_j∈ N_2p_j, we already proved x is adjacent to every z∈ N_2p_j. Now we prove that z is adjacent to z+x and z-x. Suppose this is not hold, then 2| 2z+x i.e. 2|α p_i, which gives 2|α or if p_i| 2z+x i.e. p_i| 4(β p_j+α p_i )p_j+α p_i, then p_i|β or if p_j| 2z+x i.e. p_j| 4(β p_j+α p_i)p_j+α p_i, then p_j|α which are not possible. Similarly, we can prove that z-x is adjacent to z. Now we claim that either z+x or z-x is adjacent to y, assume that both are not true, then only possibility is p_j| z+x+y and p_j| z-x+y, which gives p_j| z+y. Hence, we get p_j|α', which is a contradiction. Hence, either z+x or z-x is adjacent to y, which gives d(x,y)≤ 3. (i) If x∈ N_p_i and y∈ N_2p_j for i≠ j, then x is adjacent to y, which gives d(x,y)=1. (j) If x∈ N_p_i and y∈ N_p_1p_2, for i=1,2 then x=α p_i and y=α' p_1p_2. We know that y is adjacent to every z=2β∈ N_2. It is enough to prove that x is adjacent to some 2β∈ N_2. If z=2β∈ N_2 is not adjacent to x=α p_i, then p_j| z+x, for i≠ j. Now note that -z=-2β∈ N_2 and if x is not adjacent to -z then p_j| x-z this implies p_j| 2x which gives p_j|α which is a contradiction. Hence we get either z or -z is adjacent to x. Hence, d(x,y)≤ 3. (k) If x∈ N_p_i and y∈ N_2p_1p_2, for i=1,2, then using the procedure in (e), we can show that d(x,y)≤ 3. (l) If x,y∈ N_2p_i, for i=1,2 then for any z∈ N_p_j, for i≠ j, we have x and y are adjacent to z. Hence, d(x,y)≤ 3. (m) If x∈ N_2p_1 and y∈ N_2p_2, then x=2α p_1 and y=2α' p_2 . Note that for any β p_2∈ N_p_2, we have x+β p_2 is adjacent to x. Now it is enough to prove that, y is adjacent to x+β p_2, for some β p_2∈ N_p_2. Let β p_2∈ N_p_2. 
Then -β p_2∈ N_p_2, if y is not adjacent to x+β p_2 and x-β p_2, then p_2∤ x±β p_2+y and 2∤ x±β p_2+y. Hence, only possibility is that p_1| x+β p_2+y and p_1| x-β p_2+y, which implies p_1| x+y, from this we have p_1|α', which is a contradiction. Hence, either x+β p_2 or x-β p_2 is adjacent to y, which gives d(x,y)≤ 3. (n) If x∈ N_2p_i, for i=1,2 and y∈ N_p_1p_2, then x=2α p_i and y=α' p_1p_2. For any 2β∈ N_2, we have y is adjacent to 2β, and 2β +y is adjacent to 2β. Now we prove that for some z∈ N_2, x is adjacent to y+z. Let 2β∈ N_2. Then -2β∈ N_2 and assume that y+2β and y-2β are not adjacent to x. Note that p_i∤ y+2β +x and 2∤ y+2β +x, hence the only possibility is p_j| y+2β +x for i≠ j. Similarly from above we get p_j| y-2β +x this implies p_j| y+x, since p_j∈{p_1,p_2}, we get p_j|α which is a contradiction, which gives either y+2β is adjacent to x or y-2β is adjacent to x. From this, we conclude that d(x,y)≤ 3. (o) If x∈ N_2p_i and y∈ N_2p_1p_2, for i=1,2 then using the procedure in (e) we can show that d(x,y)≤ 3. (p) If x,y∈ N_p_1p_2, then for any z∈ N_2 we have x and y are adjacent to z. Hence we get d(x,y)≤ 3. (q) If x∈ N_p_1p_2 and y∈ N_2p_1p_2, then using the procedure in (e) we can show that d(x,y)≤ 3. (r) If x,y∈ N_2p_1p_2, then for any z∈ U(ℤ_2^mp^n_1_1p^n_2_2), we have x and y are adjacent to z, which gives d(x,y)≤ 3. Hence, G(ℤ_2^mp^n_1_1p^n_2_2) is connected graph and d(x,y)≤ 3, for all x,y∈ℤ_2^mp^n_1_1p^n_2_2, which gives diam(G(ℤ_2^mp^n_1_1p^n_2_2))≤ 3. Let G(ℤ_2^mp^n_1_1p^n_2_2) be a unit graph, where p_1 and p_2 are distinct odd primes. Then g_r(G(ℤ_2^mp^n_1_1p^n_2_2))=4. Proof follows from Theorem (<ref>) and Lemma (<ref>). Let G(ℤ_2^mp^n_1_1p^n_2_2) be a unit graph, where p_1 and p_2 are odd primes. Then * |V|=2^mp^n_1_1p^n_2_2 and |E|=(2^mp^n_1_1p^n_2_2)ϕ(2^mp^n_1_1p^n_2_2)/2 * λ(G(ℤ_2^mp^n_1_1p^n_2_2))=δ(G(ℤ_2^mp^n_1_1p^n_2_2))=ϕ(2^mp^n_1_1p^n_2_2). * Clearly, |V|=2^mp^n_1_1p^n_2_2. Since, 2∈ N_U(ℤ_2^mp^n_1_1p^n_2_2), from Theorem (<ref>), we get G(ℤ_2^mp^n_1_1p^n_2_2) is ϕ(2^mp^n_1_1p^n_2_2)-regular graph and hence |E|=(2^mp^n_1_1p^n_2_2)ϕ(2^mp^n_1_1p^n_2_2)/2. * From Lemma (<ref>) and Theorems (<ref>) and (<ref>), we get λ(G(ℤ_2^mp^n_1_1p^n_2_2))=δ(G(ℤ_2^mp^n_1_1p^n_2_2))=ϕ(2^mp^n_1_1p^n_2_2). Let G(ℤ_n) be a unit graph, where n=p^n_1_1p^n_2_2p^n_3_3 and p_1, p_2 & p_3 are distinct primes. Let H be a |V|× |E| incidence matrix of G(ℤ_n). * If 2∈ U(ℤ_n), then binary code generated by H is a C_2(H)=[(n-1)ϕ(n)/2,n-1,ϕ(n)-1]_2 code over finite field 𝔽_2. * If 2∈ N_U(ℤ_n), then for any odd prime q, the q-ary code generated by H is a C_q(H)=[nϕ(n)/2,n-1,ϕ(n)]_q code over finite field 𝔽_q. * Let 2∈ U(ℤ_n) then p_1,p_2 and p_3 are odd primes. By Theorem (<ref>), G(ℤ_n)=(V,E) is a connected graph and hence by Theorem (<ref>) binary code generated by H is C_2(H)=[|E|,|V|-1,λ(G(ℤ_p^n_1_1p^n_2_2p^n_3_3))]_2. Now from Theorem (<ref>), we get |E|= (n-1)ϕ(n)/2, |V|-1=n-1 and λ(G(ℤ_n))=ϕ(n)-1. * Let 2∈ N_U(ℤ_n). Then we have p_i=2, for some i. By Theorem (<ref>) and Lemma (<ref>), G(ℤ_n)=(V,E) is a connected bipartite graph and hence by Theorems (<ref>) and (<ref>), q-ary code generated by H is C_q(H)=[|E|,|V|-1,λ(G(ℤ_n))]_q. Now from Theorem (<ref>), the result follows. Let C_q(H) and C_2(H) denote the linear codes generated from incidence matrices of G(ℤ_2^mp^n_1_1p^n_2_2) and G(ℤ_p^n_1_1p^n_2_2p^n_3_3). Then * Dual of code C_2 is C^⊥_2=[(n-1)ϕ(n)/2,(n-1)[ϕ(n)-2]/2,3]_2, where n=p^n_1_1p^n_2_2p^n_3_3. 
* The dual of the code C_q is C^⊥_q=[nϕ(n)/2,(n(ϕ(n)-2)+2)/2,4]_q, where n=2^mp^n_1_1p^n_2_2. * From Theorem (<ref>), dim(C^⊥_2)=(n-1)[ϕ(n)-2]/2. By Theorem (<ref>) and Corollary (<ref>), d(C^⊥_2)=3. * The proof follows from Theorems (<ref>) & (<ref>) and Corollary (<ref>). Let G(ℤ_n) be a unit graph, where n is any positive integer. * If 2∈ U(ℤ_n), then |E|=(n-1)ϕ(n)/2. * If 2∈ N_U(ℤ_n), then |E|=nϕ(n)/2. * Let 2∈ U(ℤ_n). By Theorem (<ref>), deg(x)=ϕ(n)-1 for all x∈ U(ℤ_n) and deg(x)=ϕ(n) for all x∈ N_U(ℤ_n). Thus, |E| = ∑_x∈ Vdeg(x)/2 = [∑_x∈ U(ℤ_n)deg(x)+∑_x∈ N_U(ℤ_n)deg(x)]/2 = [ϕ(n)(ϕ(n)-1)+ (n-ϕ(n))ϕ(n)]/2, so |E| = (n-1)ϕ(n)/2. * Let 2∈ N_U(ℤ_n). Then from Theorem (<ref>), we get that G(ℤ_n) is a ϕ(n)-regular graph. Hence |E|=nϕ(n)/2. Based on our results for unit graphs, in Theorems (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we state the following conjectures for any natural number n: Conjecture I: Let G(ℤ_n) be a unit graph. Then G(ℤ_n) is a connected graph and * If 2∈ U(ℤ_n), then diam(G(ℤ_n))≤ 2. * If 2∈ N_U(ℤ_n), then diam(G(ℤ_n))≤ 3. Conjecture II: Let G(ℤ_n) be a unit graph and H be a |V|× |E| incidence matrix of G(ℤ_n). * If 2∈ U(ℤ_n), then the binary code generated by H is a C_2(H)=[(n-1)ϕ(n)/2,n-1,ϕ(n)-1]_2 code over the finite field 𝔽_2. * If 2∈ N_U(ℤ_n), then for any odd prime q, the q-ary code generated by H is a C_q(H)=[nϕ(n)/2,n-1,ϕ(n)]_q code over the finite field 𝔽_q. § CONCLUSION In this paper, we generated q-ary linear codes, for any prime q, from incidence matrices of unit graphs G(ℤ_n), and we determined the parameters and the duals of the constructed codes over the finite field 𝔽_q. Here we considered n to be a product of powers of at most three distinct primes. Examining permutation decoding techniques and the covering radius of the constructed codes, and constructing linear codes from unit graphs over other commutative rings, remain directions for further work. 1 Annamalai, N., & Durairajan, C. (2021). Linear codes from incidence matrices of unit graphs. Journal of Information and Optimization Sciences, 42(8), 1943-1950. 2 Ashrafi, N., Maimani, H. R., Pournaki, M. R., & Yassemi, S. (2010). Unit graphs associated with rings. Communications in Algebra, 38(8), 2851-2871. 3 Dankelmann, P., Key, J. D., & Rodrigues, B. G. (2013). Codes from incidence matrices of graphs. Designs, Codes and Cryptography, 68(1), 373-393. 4 Chartrand, G. (1966). A graph-theoretic approach to a communications problem. SIAM Journal on Applied Mathematics, 14(4), 778-781. 5 Plesník, J., & Znám, Š. (1989). On equality of edge-connectivity and minimum degree of a graph. Archivum Mathematicum, 25(1), 19-25. 6 Whitney, H. (1992). Congruent graphs and the connectivity of graphs. In Hassler Whitney Collected Papers (pp. 61-79). Birkhäuser Boston. 7 Fish, W., Key, J. D., & Mwambene, E. (2010). Codes from incidence matrices and line graphs of Hamming graphs. Discrete Mathematics, 310(13-14), 1884-1897. 8 Key, J. D., & Rodrigues, B. G. (2010). Codes from lattice and related graphs, and permutation decoding. Discrete Applied Mathematics, 158(16), 1807-1815. 9 Key, J. D., Moori, J., & Rodrigues, B. G. (2010). Codes associated with triangular graphs and permutation decoding. International Journal of Information and Coding Theory, 1(3), 334-349. 10 Ghinelli, D., & Key, J. D. (2011). Codes from incidence matrices and line graphs of Paley graphs. Advances in Mathematics of Communications, 5(1), 93. 11 Key, J. D., & Rodrigues, B. G. (2018). LCD codes from adjacency matrices of graphs. Applicable Algebra in Engineering, Communication and Computing, 29(3), 227-244.
12 Tonchev, V. D. (2002). Error-correcting codes from graphs. Discrete Mathematics, 257(2-3), 549-557. 13 Fish, W., Key, J. D., & Mwambene, E. (2021). Special LCD codes from products of graphs. Applicable Algebra in Engineering, Communication and Computing, 1-27. 14 Grimaldi, R. P. (2006). Discrete and Combinatorial Mathematics, 5/e. Pearson Education India. 15 Su, H., & Zhou, Y. (2014). On the girth of the unit graph of a ring. Journal of Algebra and Its Applications, 13(02), 1350082. 16 Akbari, S., Estaji, E., & Khorsandi, M. R. (2015, December). On the unit graph of a non-commutative ring. In Algebra Colloquium (Vol. 22, No. spec01, pp. 817-822). Academy of Mathematics and Systems Science, Chinese Academy of Sciences, and Suzhou University. 17 Heydari, F., & Nikmehr, M. J. (2013). The unit graph of a left Artinian ring. Acta Mathematica Hungarica, 139(1), 134-146. 18 Ling, S., & Xing, C. (2004). Coding theory: a first course. Cambridge University Press. 19 Clark, J., & Holton, D. A. (1991). A first look at graph theory. World Scientific.
http://arxiv.org/abs/2307.04451v1
20230710100450
Globally linked pairs of vertices in generic frameworks
[ "Tibor Jordán", "Soma Villányi" ]
math.CO
[ "math.CO", "math.MG" ]
Globally linked pairs of vertices in generic frameworks Tibor Jordán Soma Villányi ================================================================================================== A d-dimensional framework is a pair (G,p), where G=(V,E) is a graph and p is a map from V to ℝ^d. The length of an edge xy∈ E in (G,p) is the distance between p(x) and p(y). A vertex pair {u,v} of G is said to be globally linked in (G,p) if the distance between p(u) and p(v) is equal to the distance between q(u) and q(v) for every d-dimensional framework (G,q) in which the corresponding edge lengths are the same as in (G,p). We call (G,p) globally rigid in ^d when each vertex pair of G is globally linked in (G,p). A pair {u,v} of vertices of G is said to be weakly globally linked in G in ^d if there exists a generic framework (G,p) in which {u,v} is globally linked. In this paper we first give a sufficient condition for the weak global linkedness of a vertex pair of a (d+1)-connected graph G in ^d and then show that for d=2 it is also necessary. We use this result to obtain a complete characterization of weakly globally linked pairs in graphs in ^2, which gives rise to an algorithm for testing weak global linkedness in the plane in O(|V|^2) time. Our methods lead to a new short proof for the characterization of globally rigid graphs in ^2, and further results on weakly globally linked pairs and globally rigid graphs in the plane and in higher dimensions. § INTRODUCTION A d-dimensional framework is a pair (G,p), where G=(V,E) is a graph and p is a map from V to ℝ^d. We also say that (G,p) is a realization of G in ℝ^d. The length of an edge uv∈ E in (G,p) is ||p(u)-p(v)||, where ||.|| denotes the Euclidean norm in ℝ^d. Two frameworks (G,p) and (G,q) are equivalent if corresponding edge lengths are the same, that is, ||p(u)-p(v)||=||q(u)-q(v)|| holds for all pairs u,v with uv∈ E. The frameworks (G,p) and (G,q) are congruent if ||p(u)-p(v)||=||q(u)-q(v)|| holds for all pairs u,v with u,v∈ V. A d-dimensional framework (G,p) is called globally rigid if every equivalent d-dimensional framework (G,q) is congruent to (G,p). This is the same as saying that the edge lengths of (G,p) uniquely determine all the pairwise distances. It is NP-hard to test whether a given framework in ^d is globally rigid, even for d=1 <cit.>. This fundamental property of frameworks becomes more tractable if we consider generic frameworks. A framework (G,p) (and the set {p(v):v∈ V(G)}) is said to be generic if the set of its d|V(G)| vertex coordinates is algebraically independent over ℚ. It is known that in a given dimension the global rigidity of a generic framework (G,p) depends only on G: either every generic realization of G in ^d is globally rigid, or none of them are <cit.>. Thus, we say that a graph G is globally rigid in ^d if every (or equivalently, if some) d-dimensional generic realization of G is globally rigid in ^d. For d=1,2, combinatorial characterizations and corresponding deterministic polynomial time algorithms are known for (testing) global rigidity in ^d. The case d=1 is a folklore result: it is not hard to see that a graph G on at least three vertices is globally rigid in ^1 if and only if it is 2-connected. The necessary and sufficient conditions for d=2 are stated as Theorem <ref> in the next section. The existence of such a characterization (or algorithm) for d≥ 3 is a major open question.
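While the case d≥ 3 remains open, the low-dimensional criteria are easy to check computationally. For instance, the folklore d=1 characterization above amounts to a 2-connectivity test (a minimal sketch, assuming the networkx library is available):

import networkx as nx

# Sketch: for |V| >= 3, global rigidity on the line (d = 1) is
# equivalent to 2-connectivity of the graph.
def globally_rigid_1d(G):
    return G.number_of_nodes() >= 3 and nx.is_biconnected(G)

print(globally_rigid_1d(nx.cycle_graph(4)))  # True: a cycle is 2-connected
print(globally_rigid_1d(nx.path_graph(4)))   # False: internal vertices are cut vertices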
For more details on globally rigid graphs and frameworks see e.g. <cit.>. In this paper we consider a refined, local version, in which we are interested in whether the edge lengths of a framework uniquely determine the distance between a given pair of vertices, rather than all pairs of vertices. We shall need the following notions. Following <cit.>, we say that a pair of vertices {u,v} in a d-dimensional framework (G,p) is globally linked in (G,p) if for every equivalent d-dimensional framework (G,q) we have ||p(u)-p(v)||=||q(u)-q(v)||. Global linkedness in ^d is not a generic property (for d≥ 2): a vertex pair may be globally linked in some generic d-dimensional realization of G without being globally linked in all generic realizations. See Figure <ref>. We say that a pair {u,v} is globally linked in G in ^d if it is globally linked in all generic d-dimensional frameworks (G,p). We call a pair {u,v} weakly globally linked in G in ^d if there exists a generic d-dimensional framework (G,p) in which {u,v} is globally linked. If {u,v} is not weakly globally linked in G, then it is called globally loose in G. It is immediate from the definitions that G is globally rigid in ^d if and only if each vertex pair is globally linked in G in ^d. As we shall see, the global rigidity of G already follows from the (seemingly weaker) condition that each vertex pair is weakly globally linked in G (see Lemma <ref>(c)). The case d=1 is exceptional and well-understood. Global linkedness in ^1 is a generic property: a pair {u,v} is globally linked in G in ^1 if and only if there is a cycle in G that contains both u and v. Otherwise {u,v} is globally loose. For d≥ 2 no combinatorial (or efficiently testable) characterization has previously been found for globally linked or weakly globally linked pairs in graphs in ^d. These problems belong to the few major problems in combinatorial rigidity which have remained unsolved for d=2. The main result of this paper is a solution for the weakly globally linked pairs problem in two dimensions. We shall first give a sufficient condition for the weak global linkedness of a vertex pair of a (d+1)-connected graph G in ^d (Theorem <ref>) and then show that in a sense the condition is also necessary in the case of 3-connected graphs in ^2 (Theorem <ref>). The general case of the two-dimensional problem is reduced to the 3-connected case by a sequence of lemmas that describe how global linkedness is affected by cutting a graph along a separating pair. These results lead to the main result (Theorem <ref>), which gives a characterization of weakly globally linked pairs of vertices in ^2 and gives rise to an O(|V|^2) algorithm for the corresponding decision problem. Our methods and results lead to a new short proof for the sufficiency part of Theorem <ref>. We also obtain a number of other structural results on weakly globally linked pairs and globally rigid graphs in ^2 and in higher dimensions. Even though most of the known results (and conjectures) on global linkedness are concerned with globally linked pairs of graphs in ^2, their characterization remains open. Globally linked pairs in two dimensions have been characterized in minimally rigid graphs <cit.>, braced maximal outerplanar graphs <cit.>, and in R_2-connected graphs <cit.>. In the latter two cases global linkedness turns out to be a generic property. Hence these two results give rise to the characterization of weakly globally linked pairs, too, in the corresponding families of graphs. 
A conjectured characterization of globally linked pairs in ^2 can be found in <cit.>. A few partial results in higher dimensions are also available, see <cit.>. The rest of the paper is organized as follows. In Section <ref> we introduce the necessary notions concerning rigid graphs and frameworks. In Section <ref> we prove some simple but fundamental lemmas on weakly globally linked pairs in ^d. Section <ref> contains most of the d-dimensional results (two key geometric lemmas and a sufficient condition for weak global linkedness), and the new proof for Theorem <ref>. In Section <ref> we state and prove our main result, a complete characterization of the weakly globally linked pairs in ^2. In Section <ref> we discuss the algorithmic aspects and collect a few concluding remarks and questions. § PRELIMINARIES In this section we introduce the notions and results from the theory of (globally) rigid frameworks and graphs that we shall use. §.§ Rigid graphs and the rigidity matroid In the structural results on global rigidity and global linkedness the notions of rigid frameworks, rigid graphs and the rigidity matroid play a key role. The d-dimensional framework (G,p) is rigid if there exists some ε >0 such that, if (G,q) is equivalent to (G,p) and ||p(v)-q(v)||< ε for all v∈ V, then (G,q) is congruent to (G,p). This is equivalent to requiring that every continuous motion of the vertices of (G,p) in ^d that preserves the edge lengths takes the framework to a congruent realization of G. It is known that in a given dimension the rigidity of a generic framework (G,p) depends only on G: either every generic realization of G in ^d is rigid, or none of them are <cit.>. Thus, we say that a graph G is rigid in ^d if every (or equivalently, if some) d-dimensional generic realization of G is rigid in ^d. For d=1,2, combinatorial characterizations and corresponding deterministic polynomial time algorithms are known for (testing) rigidity in ^d, see e.g. <cit.>. The existence of such a characterization (or algorithm) for d≥ 3 is a major open question. The following elementary result is well-known. For the proof of the two-dimensional case see <cit.>. Suppose that (G,p) is a rigid generic framework. Then the number of distinct congruence classes of frameworks which are equivalent to (G,p) is finite. The rigidity matroid of a graph G is a matroid defined on the edge set of G which reflects the rigidity properties of all generic realizations of G. For a general introduction to matroid theory we refer the reader to <cit.>. Let (G,p) be a realization of a graph G=(V,E) in ^d. The rigidity matrix of the framework (G,p) is the matrix R(G,p) of size |E|× d|V|, where, for each edge uv∈ E, in the row corresponding to uv, the entries in the d columns corresponding to vertices u and v contain the d coordinates of (p(u)-p(v)) and (p(v)-p(u)), respectively, and the remaining entries are zeros. The rigidity matrix of (G,p) defines the rigidity matroid of (G,p) on the ground set E by linear independence of the rows. It is known that any pair of generic frameworks (G,p) and (G,q) have the same rigidity matroid. We call this the d-dimensional rigidity matroid R_d(G)=(E,r_d) of the graph G. We denote the rank of R_d(G) by r_d(G). A graph G=(V,E) is R_d-independent if r_d(G)=|E| and it is an R_d-circuit if it is not R_d-independent but every proper subgraph G' of G is R_d-independent. We note that in the literature such graphs are sometimes called M-independent in ^d and M-circuits in ^d, respectively. 
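Since all of these notions are defined via ranks of the rigidity matrix, they are easy to explore numerically. The sketch below builds R(G,p) exactly as defined above and estimates r_d(G), using random floating-point coordinates as a numerical stand-in for generic ones (an assumption of this snippet):

import numpy as np

def rigidity_matrix(V, E, p, d):
    # One row per edge uv: p(u)-p(v) in the d columns of u, p(v)-p(u) in those of v.
    idx = {v: i for i, v in enumerate(V)}
    R = np.zeros((len(E), d * len(V)))
    for row, (u, v) in enumerate(E):
        diff = p[idx[u]] - p[idx[v]]
        R[row, d * idx[u] : d * idx[u] + d] = diff
        R[row, d * idx[v] : d * idx[v] + d] = -diff
    return R

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # two triangles glued along the edge 02
p = np.random.rand(len(V), 2)                 # random points, numerically "generic" in R^2
r = np.linalg.matrix_rank(rigidity_matrix(V, E, p, d=2))
print(r)  # 5 = 2|V| - 3, so this graph is rigid in R^2 (cf. Gluck's theorem below)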
An edge e of G is an R_d-bridge in G if r_d(G-e)=r_d(G)-1 holds. Equivalently, e is an R_d-bridge in G if it is not contained in any subgraph of G that is an R_d-circuit. The following characterization of rigid graphs is due to Gluck. <cit.> Let G=(V,E) be a graph with |V|≥ d+1. Then G is rigid in ^d if and only if r_d(G)=d|V|-\binom{d+1}{2}. A graph G is minimally rigid in ^d if it is rigid in ^d but G-e is not rigid in ^d for every edge e of G. By Theorem <ref>, minimally rigid graphs in ^d on at least d+1 vertices have exactly d|V|-\binom{d+1}{2} edges. Let G=(V,E) be a graph and {u,v} be a pair of vertices of G. An induced subgraph G[X] (and the set X), for some X⊆ V, is said to be (u,v)-rigid in ^d (or simply (u,v)-rigid, if d is clear from the context), if G[X] is rigid in ^d and u,v∈ X. We say that a (u,v)-rigid subgraph G[X] is vertex-minimally (u,v)-rigid, if G[X'] is not (u,v)-rigid for all proper subsets X'⊂ X. The pair {u,v} is called linked in G in ^d if r_d(G+uv)=r_d(G) holds. It is known that a pair {u,v} is linked in G in ^2 if and only if there exists a (u,v)-rigid subgraph of G. A graph G with at least three edges is called redundantly rigid in ^d if G-e is rigid in ^d for all e∈ E(G). Let M be a matroid on ground set E. We can define a relation on the pairs of elements of E by saying that e,f∈ E are equivalent if e=f or there is a circuit C of M with {e,f}⊆ C. This defines an equivalence relation. The equivalence classes are the connected components of M. The matroid is connected if it has only one connected component. A graph G=(V,E) is R_d-connected if R_d(G) is connected. We shall use the well-known fact that if v is a vertex of degree at most d in G, then every edge incident with v is an R_d-bridge in G. Hence the addition of a new vertex of degree d to a rigid graph G in ^d preserves rigidity. For more details on the 2-dimensional rigidity matroid, see <cit.>. §.§ Globally rigid graphs The following necessary conditions for global rigidity are due to Hendrickson. <cit.> Let G be a globally rigid graph in ^d on at least d+2 vertices. Then G is (d+1)-connected and redundantly rigid in ^d. For d=1,2 the conditions of Theorem <ref> together are sufficient to imply global rigidity. This is not the case for d≥ 3. The characterization of globally rigid graphs in ^2 is as follows. <cit.> Let G be a graph on at least four vertices. Then G is globally rigid in ^2 if and only if G is 3-connected and redundantly rigid in ^2. An equivalent characterization of global rigidity, in terms of the rigidity matroid of G, follows from the next lemma. <cit.> Let G be a graph with at least two edges. If G is R_2-connected, then G is redundantly rigid in ^2. Furthermore, if G is 3-connected and redundantly rigid in ^2, then G is R_2-connected. We shall also use the following lemma. <cit.> Let G be a rigid, but not redundantly rigid graph in ^2, and suppose that all R_2-bridges of G are edges of the same triangle in G. Then G is not 3-connected. § PROPERTIES OF WEAKLY GLOBALLY LINKED PAIRS IN ^D We first collect some basic properties that hold in ^d for all d≥ 1. The following lemma was stated for d=2 in <cit.> but the proof works for all d≥ 1. An edge e of a globally rigid graph H is critical if H-e is not globally rigid. <cit.> Let G=(V,E) be a graph and u,v∈ V. Suppose that uv∉ E, and that G has a globally rigid supergraph in ^d in which uv is a critical edge. Then {u,v} is globally loose in G in ^d. We shall frequently use the next key lemma.
For a graph G=(V,E) and integer d≥ 1 let J_d(G)={uv : u,v∈ V, uv∉ E, {u,v} is weakly globally linked in G in ^d}. Let G=(V,E) be a graph and let F be a set of edges on vertex set V. Then the following hold. (a) If G+J_d(G)+F is globally rigid in ^d, then G+F is globally rigid in ^d. (b) If G+uv is globally rigid in ^d for some uv∈ J_d(G), then G is globally rigid in ^d. (c) G is globally rigid in ^d if and only if all pairs of vertices in G are weakly globally linked in ^d. Let us fix d and put J=J_d(G). (a) Suppose, for a contradiction, that G+J+F is globally rigid and G+F is not. Then there is a (possibly empty) subset J'⊂ J and an edge uv∈ J-J' for which G+J'+F is not globally rigid, but G̅=G+J'+F+uv is globally rigid. Then uv is a critical edge in G̅, and hence {u,v} is globally loose in G by Lemma <ref>, a contradiction. (b) If G+uv is globally rigid for some uv∈ J then G+J is globally rigid. Thus putting F=∅ and applying (a) gives that G is globally rigid. (c) Necessity is obvious. If all pairs of vertices in G are weakly globally linked, then G+J is a complete graph, which is globally rigid. Again, putting F=∅ and applying (a) gives that G is globally rigid. It is well-known that if {u,v} is not linked in G in ^d, then every generic d-dimensional realization (G,p) has a flex (i.e. a continuous motion of the vertices that preserves the edge lengths) to another framework (G,q), for which ||p(u)-p(v)||≠ ||q(u)-q(v)||. This implies the next lemma. Let G=(V,E) be a graph and let {u,v} be a non-adjacent vertex pair. If {u,v} is not linked in G in ^d then {u,v} is globally loose in G in ^d. Let H=(V,E) be a graph and x,y∈ V. We use κ_H(x,y) to denote the maximum number of pairwise internally disjoint xy-paths in H. Note that if xy∉ E then, by Menger's theorem, κ_H(x,y) is equal to the size of a smallest set S⊆ V-{x,y} for which there is no xy-path in H-S. The following lemma is the d-dimensional and slightly stronger version of <cit.>. Let G=(V,E) be a graph and let {u,v} be a non-adjacent vertex pair with κ_G(u,v)≤ d. Then {u,v} is globally loose in G in ^d. Let G_i=(V_i,E_i) be a graph, t ≥ 1 an integer, and suppose that K_i is a complete subgraph of G_i on t vertices, for i=1,2. Then the t-clique sum operation on G_1,G_2, along K_1,K_2, creates a new graph G by identifying the vertices of K_1 with the vertices of K_2, following some bijection between their vertex sets. The clique sum operation is a t-clique sum operation for some t≥ 1. In the following lemma sufficiency follows from the simple observation that if a vertex pair is weakly globally linked in a subgraph of G, then it is also weakly globally linked in G. Necessity follows from the fact that the clique sum operation is performed along a complete (and hence globally rigid) subgraph. Suppose that G is the clique sum of G_1 and G_2 and let u,v∈ V(G_1). Then {u,v} is weakly globally linked in G in ^d if and only if {u,v} is weakly globally linked in G_1 in ^d. § A SUFFICIENT CONDITION FOR WEAK GLOBAL LINKEDNESS IN ^D In this section we provide a new sufficient condition for the weak global linkedness of a pair of vertices of a (d+1)-connected graph in ^d. An important ingredient in our proof is a geometric lemma (Lemma <ref>) presented in the next subsection. In Subsection <ref> we prove the aforementioned sufficient condition and in Subsection <ref> we show how it can be used to prove the sufficiency part of Theorem <ref>. In the last subsection we shall see that an appropriate converse of Lemma <ref> is also true (Lemma <ref>).
This lemma will be used in the next section where we characterize weak global linkedness in two dimensions. Roughly speaking these two lemmas show that if a vertex pair {u,v} belongs to a rigid subgraph H of G, then the contraction of a connected subgraph of G-V(H) does not change the weak global linkedness properties of {u,v}. §.§ The first contraction lemma A basic graph operation is the contraction of a subset V_0 of V in the graph G=(V,E). This operation, which is denoted by G/V_0, identifies the vertices of V_0 and removes the loops and parallel copies of the edges of the resulting graph that it may create. The contraction of an edge e=xy is the contraction of the set {x,y} and it is denoted by G/e. Let G=(V,E) be a graph, u,v∈ V, and suppose that G[V_0] is a (u,v)-rigid subgraph of G. Let e=(s_1,s_2)∈ E-E(G[V_0]) be an edge. If {u,v} is weakly globally linked in G/e in ^d, then {u,v} is weakly globally linked in G in ^d. We may assume that G is connected and s_2∉ V_0. Let s denote the vertex of G/e obtained by identifying s_1 and s_2 in G. Note that we may have s_1∈ V_0. In this case we shall simply identify s with s_1 for notational convenience. Let (G/e,p) be a generic realization of G/e in which {u,v} is globally linked. Let (G,p_i) be a sequence of generic realizations of G, for which p_i|_V-s_1-s_2=p|_V-s, p_i(s_1)=p(s), and p_i(s_2)→ p(s). Suppose, for a contradiction, that {u,v} is globally loose in G. Then {u,v} is not globally linked in (G,p_i) for all i≥ 1. Hence for all i≥ 1 there exists a realization (G,p_i'), equivalent to (G,p_i), for which ||p_i'(u)-p_i'(v)||≠ ||p_i(u)-p_i(v)||=||p(u)-p(v)||. Since G[V_0] is rigid and p|_V_0=p_i|_V_0, it follows from Proposition <ref> that there is an ϵ >0 such that for all i≥ 1, |||p_i'(u)-p_i'(v)||-||p(u)-p(v)|||≥ϵ. Since G is connected, we can translate each framework, if necessary, so that for all i≥ 1, (G,p_i') is in the interior of a ball of radius K, centered at the origin, for some fixed positive real number K. Thus there is a convergent subsequence p_i_k'→ p'. Since (s_1,s_2)∈ E, we must have p'(s_1)=p'(s_2). By extending p'|_V-s_1-s_2 with p'(s)=p'(s_1), we obtain a realization (G/e,p') which is equivalent to (G/e,p). Furthermore, we have |||p'(u)-p'(v)||-||p(u)-p(v)|||≥ϵ, which contradicts the fact that {u,v} is globally linked in (G/e,p). Thus {u,v} is weakly globally linked in G. We obtain the following sufficient (but not necessary, see Figure <ref>) condition for weak global linkedness as a corollary. Let G=(V,E) be a graph, u,v∈ V. Suppose that there is some V_0⊂ V such that G[V_0] is a (u,v)-rigid subgraph of G in ^d, and there is a uv-path in G that is internally disjoint from V_0. Then {u,v} is weakly globally linked in G in ^d. Corollary <ref>, together with Lemma <ref>, leads to short proofs for some previous results on globally rigid graphs. We illustrate this by the following theorem. <cit.> Let G_1 and G_2 be two globally rigid graphs in ^d on at least d+2 vertices, with exactly d+1 vertices in common. Suppose that e is a common edge. Then G=G_1∪ G_2-e is globally rigid in ^d. Let e=uv. Theorem <ref> implies that G_1-e is rigid. Since G_2 is (d+1)-connected, there is a path from u to v in G that is internally disjoint from G_1. Thus {u,v} is weakly globally linked in G by Corollary <ref>. It is easy to see that G+uv is globally rigid. Hence G is also globally rigid by Lemma <ref>. By using the same proof idea we obtain a simple proof of the “rooted minor" theorem of Tanigawa <cit.>. 
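To experiment with the sufficient condition of Corollary <ref>, one can combine a randomized rank test with a simple path search. The sketch below is our illustration (assuming networkx and numpy; the Gluck count in is_rigid is only meaningful for vertex sets of size at least d+1), not an exact implementation of the algorithms cited later in the paper.

```python
import networkx as nx
import numpy as np

def rigidity_rank(G, d=2, trials=5, seed=0):
    """Generic rank r_d(G), estimated from random (almost surely generic) placements."""
    rng = np.random.default_rng(seed)
    idx = {v: i for i, v in enumerate(G.nodes)}
    best = 0
    for _ in range(trials):
        p = rng.random((len(idx), d))
        R = np.zeros((G.number_of_edges(), d * len(idx)))
        for row, (u, v) in enumerate(G.edges):
            diff = p[idx[u]] - p[idx[v]]
            R[row, d * idx[u]: d * idx[u] + d] = diff
            R[row, d * idx[v]: d * idx[v] + d] = -diff
        best = max(best, np.linalg.matrix_rank(R))
    return best

def is_rigid(G, d=2):
    # Gluck's condition r_d(G) = d|V| - binom(d+1, 2), valid for |V| >= d+1.
    return rigidity_rank(G, d) == d * G.number_of_nodes() - d * (d + 1) // 2

def sufficient_condition(G, u, v, V0, d=2):
    """Check the hypotheses of the corollary: G[V0] is (u,v)-rigid and some
    uv-path in G is internally disjoint from V0."""
    if not (u in V0 and v in V0 and is_rigid(G.subgraph(V0), d)):
        return False
    H = G.subgraph((set(G.nodes) - set(V0)) | {u, v}).copy()
    if H.has_edge(u, v):
        H.remove_edge(u, v)  # the path must use internal vertices outside V0
    return nx.has_path(H, u, v)
```

When sufficient_condition returns True, the corollary guarantees that {u,v} is weakly globally linked; a False answer is inconclusive, since the condition is not necessary.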
§.§ The sufficient condition Let G=(V,E) be a graph, ∅≠ X⊆ V, and let V_1,V_2,…, V_r be the vertex sets of the connected components of G-X. The graph Con(G,X) is obtained from G by contracting each vertex set V_i into a single vertex v_i, 1≤ i≤ r. The graph Clique(G,X) is obtained from G by deleting the vertex sets V_i, 1≤ i≤ r, and adding a new edge xy for all pairs x,y∈ N_G(V_i), xy∉ E, for 1≤ i≤ r. See Figure <ref>. Let G=(V,E) be a (d+1)-connected graph. Suppose that G[V_0] is a rigid subgraph of G for some V_0⊆ V. Then Clique(G,V_0) is globally rigid in ^d if and only if Con(G,V_0) is globally rigid in ^d. Let E' be the set of those edges in Clique(G,V_0) that are not in G[V_0]. Let H = Con(G,V_0)+E'. It follows from Corollary <ref> that {u,v} is weakly globally linked in Con(G,V_0) for all uv∈ E'. Hence, by Lemma <ref>, Con(G,V_0) is globally rigid if and only if H is globally rigid. H can be obtained from Clique(G,V_0) by adding new vertices and joining them to cliques of size at least d+1. Thus H is globally rigid if and only if Clique(G,V_0) is globally rigid. We are ready to state the main result of this section. Let G=(V,E) be a (d+1)-connected graph and u,v∈ V. Suppose that G[V_0] is a (u,v)-rigid subgraph of G in ^d. If Clique(G,V_0) is globally rigid in ^d, then {u,v} is weakly globally linked in G in ^d. Suppose that Clique(G,V_0) is globally rigid in ^d. Then so is Con(G,V_0) by Lemma <ref>. In particular, {u,v} is weakly globally linked in Con(G,V_0). Since Con(G,V_0) can be obtained from G by contracting edges not induced by V_0, Lemma <ref> gives that {u,v} is weakly globally linked in G. §.§ Globally rigid graphs - a new proof Theorem <ref> and Lemma <ref> lead to a new short proof of the sufficiency part of Theorem <ref>, which only uses the simple combinatorial Lemmas <ref> and <ref> and the fact that the global rigidity of graphs in ^2 is a generic property. The original proof in <cit.> relies on an inductive construction of 3-connected R_2-connected graphs. (of sufficiency in Theorem <ref>) The proof is by induction on |V|. If |V|=4 then G is a complete graph on four vertices, which is globally rigid. So we may suppose that |V|≥ 5. First, we show that for all non-adjacent pairs u,v, there is a (u,v)-rigid proper induced subgraph G[X] of G. To see this consider two edges e,f∈ E incident with u and v, respectively. Since G is 3-connected and redundantly rigid, it is R_2-connected by Lemma <ref>. Hence there is an R_2-circuit C in G with e,f∈ E(C). Since |E(C)|=2|V(C)|-2 and d_C(v)≥ 3 for all v∈ V(C), it follows that C has at least four vertices of degree three. Thus there is a vertex w∈ V(C) with w∉{u,v} and d_C(w)=3. Now X=V(C)-w induces the desired (u,v)-rigid subgraph. In the rest of the proof we show that every non-adjacent vertex pair {u,v} of G is weakly globally linked in G. The theorem will follow from this by Lemma <ref>(c). Let us fix u,v and consider a (u,v)-rigid proper induced subgraph G[X] of G. As we have shown above, such a subgraph exists. By Theorem <ref> it suffices to show that Clique(G,X) is globally rigid. Let D be the vertex set of a component of G-X and let H be obtained from G-D by adding a new edge xy for each non-adjacent pair x,y∈ N_G(D). Since G can be obtained from H by attaching a graph along a complete subgraph, and removing edges, the 3-connectivity of G implies that H is 3-connected. A similar argument shows that H is rigid, and so is H-e for every edge e in H not induced by N_G(D).
Thus if H has some R_2-bridges, then they are all induced by N_G(D). If |N_G(D)|≥ 4, then each edge induced by N_G(D) belongs to a K_4 in H, so H cannot have R_2-bridges at all. If |N_G(D)|=3, then every R_2-bridge in H belongs to the same triangle, on the vertices of N_G(D). But that is impossible by Lemma <ref>. Therefore H is a rigid graph with no R_2-bridges, and hence it is redundantly rigid. By repeated applications of this argument we obtain that Clique(G,X) is 3-connected and redundantly rigid. Since |X|≤ |V|-1, we can now use induction to deduce that Clique(G,X) is globally rigid. This completes the proof. We remark that a different proof for the sufficiency part in Theorem <ref> was also given by Tanigawa <cit.>. The high level ideas of his proof and the proof given in this subsection are similar. By using our notation the main lemma <cit.> can be stated as follows: if v is a vertex of degree at least d+1 in G, G-v is rigid in ^d, and Clique(G,V-{v}) is globally rigid in ^d, then G is globally rigid in ^d. This statement is a special case of the “only if" direction of our Lemma <ref>. §.§ The second contraction lemma As a corollary of Lemma <ref>, it can be deduced that if G[V_0] is a (u,v)-rigid subgraph of a graph G, V_1 is the vertex set of a component of G-V_0 and {u,v} is weakly globally linked in G/V_1, then {u,v} is weakly globally linked in G. In this subsection we shall prove the converse of this statement, see Lemma <ref> below. We shall need some new notions and an auxiliary lemma. A configuration of a set U is a function that maps U into ^d. Two configurations p_1,p_2 of U are said to be congruent if ||p_1(u)-p_1(v)||=||p_2(u)-p_2(v)|| for all u,v∈ U. Suppose that p and q are two incongruent configurations of a set U. We call a point x∈^d (q,p)-feasible if there exists a point y∈^d such that ||p(u)-x||=||q(u)-y|| for all u∈ U. We then call y a (q,p)-associate of x. Observe that if π is an isometry of ^d, then the set of (q,p)-feasible points is equal to the set of (π∘ q, p)-feasible points. The affine hull of a set X⊆^d will be denoted by Aff(X). Let p be a configuration of a set U. Suppose that Q={q_1,…,q_k} is a non-empty set of configurations of U such that q_i is not congruent to p, for all 1≤ i≤ k. Let F_i be the set of (q_i,p)-feasible points, 1≤ i≤ k. Then ^d-⋃_i=1^k F_i is a non-empty open set. Let S=^d-⋃_i=1^k F_i. We claim that F_i is closed for every i∈{1,…, k}, which will imply that S is open. Let x_j→ x be a convergent sequence with x_j∈ F_i, j∈ℕ, and let y_j be a (q_i,p)-associate of x_j. The set {y_j:j∈ℕ} is bounded. Hence there exists a convergent subsequence y_j_ℓ→ y. Then y is a (q_i,p)-associate of x, which gives x∈ F_i. This proves the claim. In the rest of the proof we show that S is non-empty. Notice that |U|≥ 2 must hold. We shall prove the following stronger statement by induction on |U|: for every a∈ U we have S∩Aff(p(U-{a}))≠∅. First suppose that |U|=2, and let U={a,b}. Then we have ||q_i(a)-q_i(b)||≠ ||p(a)-p(b)||, since q_i and p are not congruent. Thus p(b)∈ S, and hence (<ref>) follows. Next suppose that |U|≥ 3. Let Q'={q∈ Q: p|_U-{a} is not congruent to q|_U-{a}}, and let Q''=Q-Q'. By putting F'=⋃_q_i∈ Q'F_i and F''=⋃_q_i∈ Q''F_i, we have S=^d-F'-F''. By induction, the set Aff(p(U-{a,b}))-F' is non-empty for every b∈ U-{a}. Since F' is closed, this implies that Aff(p(U-{a}))-F' is non-empty and relatively open in Aff(p(U-{a})).
We claim that for all q_i∈ Q'' the set F_i∩ Aff(p(U-{a})) is either empty or a proper affine subspace of Aff(p(U-{a})). To prove the claim, let q_i∈ Q''. By replacing q_i with π∘ q_i, where π is an appropriate isometry of ^d, we may assume that p|_U-{a}= q_i|_U-{a}. Then it follows from the incongruency of p and q_i that p(a)≠ q_i(a). Suppose that x∈ Aff(p(U-{a})) and y is a (q_i,p)-associate of x. Then there exists an isometry that fixes each point of p(U-{a}) and maps x to y. This isometry fixes each point of Aff(p(U-{a})); therefore, y=x. So the only possible (q_i,p)-associate of x is x itself. It follows that x is (q_i,p)-feasible if and only if ||x-p(a)||=||x-q_i(a)||, that is, if x is in the bisector hyperplane H of p(a) and q_i(a). Since q_i and p are not congruent, we obtain Aff(p(U-{a}))⊈H. This proves the claim. The lemma follows by noting that (<ref>) and (<ref>) yield that the set S∩Aff(p(U-{a}))=(Aff(p(U-{a}))-F')-F'' is non-empty, and hence (<ref>) holds. Let G=(V,E) be a graph, u,v∈ V, and suppose that G[V_0] is a (u,v)-rigid subgraph of G. Let V_1 be the vertex set of some component of G-V_0. Then {u,v} is weakly globally linked in G in ^d if and only if {u,v} is weakly globally linked in G/V_1 in ^d. Since G/V_1 can be obtained from G by contracting edges not induced by V_0, the “if" direction follows by repeated applications of Lemma <ref>. To prove the “only if" direction suppose that {u,v} is weakly globally linked in G and let (G,p) be a generic realization of G in which {u,v} is globally linked. Let v_1 be the vertex of G/V_1 obtained by the contraction of V_1 in G. We shall prove that p|_V-V_1 has an extension to (V-V_1)∪{v_1} that is a generic realization of G/V_1 in which {u,v} is globally linked. We may assume that {u,v} is not globally linked in (G-V_1,p|_V-V_1), for otherwise we are done by choosing an arbitrary generic extension. Let q_1,…,q_k be a maximal set of pairwise incongruent configurations of V_0 such that ||q_i(u)-q_i(v)||≠ ||p(u)-p(v)|| and q_i is a restriction of some realization of G-V_1 which is equivalent to (G-V_1,p|_V-V_1), for 1≤ i≤ k. By our assumption k≥ 1. Proposition <ref> implies that k is finite, since G[V_0] is rigid, p is generic and (G[V_0],q_i) is equivalent to (G[V_0],p|_V_0). For all i∈{1,…,k} the configurations q_i|_N_G(V_1) and p|_N_G(V_1) are incongruent, for otherwise q_i would be extendible to a configuration q_i' so that (G,q_i') is equivalent to (G,p), contradicting the assumption that {u,v} is globally linked in (G,p). Applying Lemma <ref> to N_G(V_1), p|_N_G(V_1) and the set Q={q_1|_N_G(V_1),…, q_k|_N_G(V_1)} gives that there is some x=(x_1,…, x_d)∈^d for which x is not (q_i|_N_G(V_1),p|_N_G(V_1))-feasible for all i∈{1,…,k} and for which p(V-V_1)∪{x} is generic. We can now complete the proof of the lemma by considering the generic realization (G/V_1,p'), where p'|_V-V_1=p|_V-V_1 and p'(v_1)=x. Then {u,v} is globally linked in (G/V_1,p'). Indeed, the existence of an equivalent realization (G/V_1,q) with ||q(u)-q(v)||≠ ||p'(u)-p'(v)|| would imply that q|_V_0=q_i for some 1≤ i≤ k and that x is (q_i|_N_G(V_1),p|_N_G(V_1))-feasible, contradicting the choice of x. Let G=(V,E) be a graph, u,v∈ V, and let G[V_0] be a (u,v)-rigid subgraph of G. Let e=(s_1,s_2)∈ E be an edge with s_1,s_2∉ V_0.
Notice that Lemma <ref> implies that {u,v} is weakly globally linked in G in ^d if and only if {u,v} is weakly globally linked in G/e in ^d: each of these two conditions is equivalent to the condition that {u,v} is weakly globally linked in G/V_1, where V_1 is the vertex set of the connected component of G-V_0 that contains e. By the same argument, for any connected subgraph G_1 of G-V_0, {u,v} is weakly globally linked in G in ^d if and only if {u,v} is weakly globally linked in G/V(G_1) in ^d. § WEAKLY GLOBALLY LINKED PAIRS IN ^2 In this section we focus on the d=2 case. Thus, we shall occasionally write that a graph is (globally) rigid to mean that it is (globally) rigid in ^2, and we may similarly omit the dimension when referring to global linkedness of vertex pairs in graphs. This section contains one of our main results, a characterization of weakly globally linked pairs in graphs. §.§ Weakly globally linked pairs in 3-connected graphs We start with the special case of 3-connected graphs. By Lemma <ref> it suffices to consider non-adjacent linked pairs {u,v} of G, or equivalently, pairs {u,v} for which there exists some subgraph G_0=(V_0,E_0) of G with u,v∈ V_0 such that G_0+uv is an R_2-circuit. Let G=(V,E) be a 3-connected graph and u,v∈ V with uv∉ E. Suppose that G_0=(V_0,E_0) is a subgraph of G with u,v∈ V_0 such that G_0+uv is an R_2-circuit. Then {u,v} is weakly globally linked in G in ^2 if and only if Clique(G,V_0) is globally rigid in ^2. Since G[V_0] is rigid, sufficiency follows from Theorem <ref>. To prove the other direction suppose, for a contradiction, that {u,v} is weakly globally linked in G but Clique(G,V_0) is not globally rigid. Since G_0+uv is redundantly rigid, so is Clique(G,V_0)+uv. The 3-connectivity of G implies that Clique(G,V_0) is 3-connected. Thus Clique(G,V_0)+uv is globally rigid by Theorem <ref>. Hence {u,v} is globally loose in Clique(G,V_0) by Lemma <ref>. As G can be obtained from Clique(G,V_0) by clique sum operations and removing edges, Lemma <ref> implies that {u,v} is globally loose in G, a contradiction. §.§ Weakly globally linked pairs and 2-separators In this subsection we shall prove some lemmas that describe, among others, how weak global linkedness is affected when the graph is cut into two parts along a separating pair of vertices. These lemmas will enable us to reduce the question of whether a linked pair of vertices in a graph G is weakly globally linked to the case when G is 3-connected. We shall also need the following extension of <cit.>. Let G=(V,E) be a rigid graph, ab∈ E an R_2-bridge in G, and u,v∈ V with uv∉ E. Suppose that G has no (u,v)-rigid proper induced subgraph. Then {u,v} is globally loose in G. Let H=(V,B) be a minimally rigid spanning subgraph of G. Since ab is an R_2-bridge, we have ab∈ B. The graph H+uv contains a unique R_2-circuit C with u,v∈ V(C). Since G has no (u,v)-rigid proper induced subgraph, C=H+uv must hold. Let (G,p_0) be a generic realization of G. By <cit.> the generic framework (H-ab,p_0) has an equivalent realization (H-ab,p_1), which can be obtained by a flexing, and for which ||p_0(u)-p_0(v)||≠ ||p_1(u)-p_1(v)|| and ||p_0(a)-p_0(b)||= ||p_1(a)-p_1(b)||. Consider an edge xy∈ E-B. Since H is rigid, xy belongs to an R_2-circuit C' of H+xy. Moreover, ab is an R_2-bridge in G (as well as in its subgraph H+xy), and hence C' does not contain ab. Thus there is a rigid subgraph of H-ab (namely, C'-xy), which contains x and y. Hence the flexing does not change the distance between x and y.
Therefore (G,p_1) is equivalent to (G,p_0). Since the distances between u and v are different in these realizations, it follows that {u,v} is globally loose in G. Let G=(V,E) be a rigid graph, let z∈ V with N_G(z)={x,y}, and let u,v∈ V-{z}. Then {u,v} is weakly globally linked in G-z+xy if and only if {u,v} is weakly globally linked in G. Let G_1=G-z+xy. Observe that G_1 is isomorphic to G/zx. Since G-z is rigid, we can use Lemma <ref> to deduce that if {u,v} is weakly globally linked in G_1, then {u,v} is weakly globally linked in G. To prove the other direction suppose that {u,v} is weakly globally linked in G. Then it is also weakly globally linked in G+xy. Since G+xy is the clique sum of G_1 and a copy of K_3, Lemma <ref> implies that {u,v} is weakly globally linked in G_1. A pair (a,b) of vertices of a 2-connected graph H=(V,E) is called a 2-separator if H-{a,b} is disconnected. Let G=(V,E) be a rigid graph with |V|≥ 4 and (a,b) be a 2-separator in G. Let C be a connected component of G-{a,b} and let V_0=V(C)∪{a,b}. Suppose that u,v∈ V_0. Then {u,v} is weakly globally linked in G if and only if {u,v} is weakly globally linked in G[V_0]+ab. If {u,v} is weakly globally linked in G, then it is easy to see, by using Lemma <ref>, that {u,v} is weakly globally linked in G[V_0]+ab. To prove that if {u,v} is weakly globally linked in G[V_0]+ab, then {u,v} is weakly globally linked in G, we use induction on |V|. If |V|=4, then we must have G=K_4-e and uv∈ E, so the statement is obvious. Suppose that |V|≥ 5. If there exists a (u,v)-rigid subgraph of G[V_0], then, since G[V_0]+ab can be obtained from G by a sequence of edge contractions, we can use Lemma <ref> to deduce that {u,v} is weakly globally linked in G. So in the rest of the proof we may assume that G[V_0] has no (u,v)-rigid subgraph. In particular, G[V_0] is not rigid. Hence, by the rigidity of G, it follows that {a,b} is not linked in G[V_0] and ab is an R_2-bridge in G[V_0]+ab. Since {u,v} is weakly globally linked in G[V_0]+ab, Lemma <ref> implies that there exists a (u,v)-rigid proper induced subgraph G'=(V',E') of G[V_0]+ab. Suppose that G' is vertex-minimal. By (<ref>) we obtain ab∈ E' and {a,b}⊆ V'. We consider three cases depending on the structure of G[V_0]-V'. Since G' is a proper induced subgraph, we have V_0-V'≠∅. See Figure <ref>. Case 1: G[V_0]-V' has a component Z with |V(Z)|≥ 2. By Lemma <ref> {u,v} is weakly globally linked in (G[V_0]+ab)/Z. Since G and G' are rigid, G-V(Z) is also rigid, and Z has at least two neighbours in G. Hence G/Z is rigid. Thus we obtain, by induction, that {u,v} is weakly globally linked in G/Z. By using that G-V(Z) is rigid, Lemma <ref> gives that {u,v} is weakly globally linked in G. Case 2: Each component of G[V_0]-V' is a singleton and there exists a vertex z∈ V_0-V' with d_G(z)=2. Let N_G(z)={x,y}. By Lemma <ref> {u,v} is weakly globally linked in (G[V_0]+ab)-z+xy. If {u,v}={a,b} and |V_0|=3, then {u,v} is weakly globally linked in G by Lemma <ref>. So we may assume that |V_0|≥ 4, and hence (a,b) is a 2-separator of the rigid graph G-z+xy. Hence {u,v} is weakly globally linked in G-z+xy by induction. By using that G is rigid, Lemma <ref> implies that {u,v} is weakly globally linked in G. Case 3: Each component of G[V_0]-V' is a singleton and for each z∈ V_0-V' we have d_G(z)≥ 3. We claim that for each z∈ V_0-V' and x,y∈ N_G(z) there is a rigid subgraph of G'-ab which contains x and y.
To see this let w be another neighbour of z, different from x,y, and let G'' be obtained from G'-ab by adding vertex z and edges zx,zy,zw. The three edges incident with z in G'' cannot be R_2-bridges, since it would imply, by using the rigidity of G' and computing ranks, that G'' is (u,v)-rigid, contradicting (<ref>). Thus there is an R_2-circuit C in G'' containing z. Then C must contain x and y, too, and C-z is a rigid subgraph of G'-ab which contains x and y, as claimed. The minimality of G' implies that it has no (u,v)-rigid proper induced subgraph. Let (G[V_0]+ab,p) be a generic realization. By (the proof of) Lemma <ref> (G'-ab,p|_V') has an equivalent realization (G'-ab,q), for which ||p(u)-p(v)||≠ ||q(u)-q(v)||, ||p(a)-p(b)||= ||q(a)-q(b)||, and such that the distances between the linked pairs of G'-ab are the same in the two realizations. Then, since each pair of neighbours of every z∈ V_0-V' is linked in G'-ab, it follows that (G'-ab,q) can be extended to a realization (G[V_0]+ab,q') that is equivalent to (G[V_0]+ab,p). Hence {u,v} is globally loose in G[V_0]+ab, a contradiction. This completes the proof. We next extend Lemma <ref> to 2-connected graphs. Let G=(V,E) be a 2-connected graph and {u,v} be a linked pair of vertices of G. Suppose that (a,b) is a 2-separator of G. Let C be a connected component of G-{a,b}, and let V_0=V(C)∪{a,b}. Suppose that u,v∈ V_0. Then {u,v} is weakly globally linked in G if and only if {u,v} is weakly globally linked in G[V_0]+ab. If {u,v} is weakly globally linked in G, then it follows from Lemma <ref> that {u,v} is weakly globally linked in G[V_0]+ab. To prove the “if" direction suppose that {u,v} is weakly globally linked in G[V_0]+ab. Since {u,v} is a linked pair, there is a (u,v)-rigid induced subgraph G[U] of G. If {a,b}⊈U, then U is a subset of V_0 and G[V_0]+ab can be obtained from G by contracting edges which are not induced by U. Thus {u,v} is weakly globally linked in G by Lemma <ref>. So we may suppose that {a,b}⊆ U. Let A_1,…, A_k be the components of G-U contained in V_0, and let B_1,…, B_l be the components of G-U not contained in V_0. Observe that the rigidity of G[U] and the 2-connectivity of G imply that G/A_1/…/A_k/B_1/…/B_l is rigid. Hence we have that
{u,v} is weakly globally linked in G
⇔ {u,v} is weakly globally linked in G/A_1/…/A_k/B_1/…/B_l
⇔ {u,v} is weakly globally linked in G[V_0]/A_1/…/A_k + ab
⇔ {u,v} is weakly globally linked in G[V_0]+ab,
where the first and third equivalences follow from Lemma <ref> and the second equivalence follows from Lemma <ref>, using the rigidity of G/A_1/…/A_k/B_1/…/B_l. The next lemma on the weak global linkedness of linked separating pairs follows from Lemma <ref> by putting {a,b}={u,v}. It can also be deduced from Lemma <ref> by using that there is some component C of G-{u,v} for which {u,v} is linked in G[V(C)∪{u,v}]. Let G=(V,E) be a 2-connected graph, and u,v∈ V be a linked pair of vertices for which (u,v) is a 2-separator in G. Then {u,v} is weakly globally linked in G. We use the following operation to eliminate 2-separators. Let G=(V,E) be a 2-connected graph, let (a,b) be a 2-separator in G, and let C be a connected component of G-{a,b}. We say that the graph G[V(C)∪{a,b}]+ab (when ab∉ E) or G[V(C)∪{a,b}] (when ab∈ E) is obtained from G by a cleaving operation along (a,b). The graph G̅ obtained from G by adding every edge ab, for which ab∉ E and (a,b) is a 2-separator of G, is called the augmented graph of G.
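The cleaving operation and the augmented graph can be expressed compactly with networkx. The brute-force sketch below is our own illustration: it tests every vertex pair directly, whereas the linear-time triconnected-components algorithm of Hopcroft and Tarjan cited in the concluding section finds the 2-separators far more efficiently.

```python
import itertools
import networkx as nx

def two_separators(G):
    """All pairs (a, b) whose removal disconnects the 2-connected graph G."""
    pairs = []
    for a, b in itertools.combinations(G.nodes, 2):
        H = G.copy()
        H.remove_nodes_from([a, b])
        if H.number_of_nodes() > 0 and not nx.is_connected(H):
            pairs.append((a, b))
    return pairs

def augmented_graph(G):
    """The augmented graph: add the edge ab for every non-adjacent 2-separator (a, b)."""
    Gbar = G.copy()
    Gbar.add_edges_from((a, b) for a, b in two_separators(G) if not G.has_edge(a, b))
    return Gbar

def cleave(G, a, b):
    """Cleaving along (a, b): one piece per component of G - {a, b}, with ab added."""
    H = G.copy()
    H.remove_nodes_from([a, b])
    pieces = []
    for comp in nx.connected_components(H):
        P = G.subgraph(set(comp) | {a, b}).copy()
        P.add_edge(a, b)  # no effect if ab is already an edge of G
        pieces.append(P)
    return pieces
```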
The following lemma is easy to show by induction, using the cleaving operation. Let G=(V,E) be a 2-connected graph and let {u,v} be a non-adjacent vertex pair in G with κ_G(u,v)≥ 3. Then either (u,v) is a separating pair in G or there is a unique maximal 3-connected subgraph B of G̅ with {u,v}⊂ V(B). In the latter case the subgraph B can be obtained from G by a sequence of cleaving operations. Furthermore, uv∉ E(B), and if the pair {u,v} is linked in G then it is also linked in B. The subgraph B in Lemma <ref> is called the 3-block of {u,v} in G. We are ready to state the main result of this section: a complete characterization of the non-adjacent weakly globally linked pairs in a graph G. By Lemma <ref> and Lemma <ref> we may assume that {u,v} is linked and κ_G(u,v)≥ 3 (for otherwise {u,v} is globally loose). By Lemma <ref> we may also assume that G is 2-connected. Let G=(V,E) be a 2-connected graph and let {u,v} be a non-adjacent linked pair of vertices with κ_G(u,v)≥ 3. Then {u,v} is weakly globally linked in G if and only if either (i) (u,v) is a separating pair in G, or (ii) Clique(B,V_0) is globally rigid, where B is the 3-block of {u,v} in G, and B_0=(V_0,E_0) is a subgraph of B with u,v∈ V_0 such that B_0+uv is an R_2-circuit. The proof is by induction on the number h of vertex pairs x,y∈ V with κ_G(x,y)=2. If h=0, then B=G and (ii) holds by Theorem <ref>. Suppose that h≥ 1 and let (a,b) be a 2-separator in G. If {a,b}={u,v} then Lemma <ref> applies and (i) holds. Otherwise we can use Lemmas <ref>, <ref>, and induction to complete the proof. See Figure <ref> for an illustration of Theorem <ref>. § CONCLUDING REMARKS §.§ Algorithmic aspects Theorem <ref> and its proof show that weak global linkedness of a vertex pair {u,v} in a graph G=(V,E) can be tested in O(|V|^2) time, as efficient algorithms are available for each of the required subroutines. Basic graph algorithms can be used to test, in linear time, whether κ_G(u,v)≥ 3 holds and to find the maximal 2-connected block that contains u,v. After reducing the problem to the 2-connected case, the linear time algorithm of <cit.> can be applied to check whether (u,v) is a separating pair and (when it is not) to identify the 3-block B of {u,v}. (Note that B coincides with one of the so-called cleavage units of G.) Computing Clique(G,X) for a given X⊆ V is also easy. Testing whether {u,v} is linked, and (when it is linked) finding an R_2-circuit of G+uv containing uv can be done in O(|V|^2) time <cit.>. Within the same time bound, we can test whether a graph is globally rigid, see e.g. <cit.>. §.§ Higher dimensions Most questions concerning the higher dimensional versions of (weak) global linkedness are open. Partial results can be found in <cit.>. A fairly natural new question is whether the sufficient condition of weak global linkedness given in Theorem <ref> is also necessary for d≥ 3, in the sense that for every weakly globally linked pair {u,v} of a graph G there exists a (u,v)-rigid induced subgraph G[X] of G such that Clique(G,X) is globally rigid. We are not aware of any counter-examples. Finding extensions and stronger versions of our results is another promising research direction. Here we mention one example. If we use Lemma <ref> (suggested by D.
Garamvölgyi) in place of Proposition <ref> in the proof, we obtain the following strengthening of Lemma <ref> to linked pairs: if G[V_0] is a subgraph of G in which {u,v} is linked, e∈ E-E(G[V_0]) and {u,v} is weakly globally linked in G/e, then {u,v} is weakly globally linked in G. Note that a pair {u,v} may be linked in a graph G_0 in ^d, d≥ 3, even if G_0 contains no (u,v)-rigid subgraph. For the definitions of the new notions appearing in the next proof see e.g. <cit.>. Let {u,v} be a linked pair in a graph G in ^d and let (G,p) be a generic realization of G in ^d. Then the set { ||q(u) - q(v)|| : (G,q) is equivalent to (G,p) } is finite. Suppose, for a contradiction, that there exists an infinite sequence of frameworks (G,q_i), i≥ 1, equivalent to (G,p), in which the distances ||q_i(u) - q_i(v)|| are pairwise different. We may assume that G is connected and q_i(u) is the origin for all i≥ 1. Then each (G,q_i) is in the interior of a ball of radius K, for some constant K. Thus, by choosing a subsequence, if necessary, we may assume that (G,q_i) is convergent, with limit (G,q). Since (G,q) is equivalent to (G,p), and (G,p) is generic, the two frameworks (G,p) and (G,q) have the same equilibrium stresses by <cit.>. In particular, the rank of the rigidity matrix of (G,q) is equal to the maximum (generic) rank of G. This fact, and the linkedness of {u,v} imply that the ranks of the rigidity matrices of (G+uv,q) and (G,q) are the same. So their kernels are the same, too. Thus every infinitesimal motion x:V→^d of (G,q) satisfies (q(u)-q(v))^T(x(u)-x(v)) = 0. By continuity this holds for all frameworks in a small enough neighbourhood of (G,q). Consider the frameworks q'_i = (q_{i+1} + q_i)/2. They converge to q, and the well-known averaging technique shows that x_i = (q_{i+1} - q_i) is an infinitesimal motion of q'_i for all i≥ 1 (for a proof see e.g. <cit.>). The same calculations show that, since ||q_{i+1}(u)-q_{i+1}(v)||≠ ||q_i(u)-q_i(v)||, we have (q'_i(u) - q'_i(v))^T(x_i(u)-x_i(v))≠ 0, a contradiction. §.§ Minimally globally rigid graphs A graph G=(V,E) is called minimally globally rigid in ^d if it is globally rigid in ^d and for every edge e∈ E the graph G-e is not globally rigid in ^d. Garamvölgyi and Jordán <cit.> proved that if G=(V,E) is minimally globally rigid in ^d and |V|≥ d+1, then |E|≤ (d+1)|V|-\binom{d+2}{2}. Moreover, as it is noted in <cit.>, for every globally rigid graph G in ^d on at least d+1 vertices, and for every minimally rigid spanning subgraph G_0 of G, there exists a globally rigid spanning subgraph of G that contains G_0 and has at most (d+1)|V|-\binom{d+2}{2} edges. Furthermore, the authors conjecture that a minimally globally rigid graph in ^d is in fact R_{d+1}-independent, see <cit.>. The truth of this conjecture would imply that a minimally globally rigid graph G=(V,E) in ^d is not only sparse, but every subgraph of G is sparse: for each U⊆ V with |U|≥ d+1 we have |E(U)|≤ (d+1)|U|-\binom{d+2}{2}. This conjecture was verified for d=2 in <cit.>. Next we prove this upper bound for all d, in the special case when the subgraph induced by U is rigid. Let G=(V,E) be a minimally globally rigid graph in ^d. Suppose that U⊆ V, |U|≥ d+1 and G[U] is rigid. Then |E(U)|≤ (d+1)|U|-\binom{d+2}{2}. Let G_0=(U,E_0) be a minimally rigid spanning subgraph of G[U]. Since G is globally rigid, so is Clique(G,U). Thus, by the results of <cit.>, there is a globally rigid spanning subgraph G'=(U,E') of Clique(G,U) that contains G_0 and has at most (d+1)|U|-\binom{d+2}{2} edges.
Suppose, for a contradiction, that there is some edge e=uv∈ E(U)-E'. Note that G[U]-e is rigid. Then G' is a subgraph of Clique(G-e,U), and hence {u,v} is weakly globally linked in G-e by Theorem <ref>. Since e is critical in G, this contradicts Lemma <ref>. It follows that G[U] is a subgraph of G'; therefore |E(U)|≤ |E'|≤ (d+1)|U|-\binom{d+2}{2}. For d=2 we can extend Theorem <ref> to all subsets U⊆ V with |U|≥ d+1. As we noted above, this two-dimensional result is not new, as it follows from <cit.>. Here we give a new proof in order to illustrate how Theorem <ref> might be applied to attack the d-dimensional case. <cit.> Let G=(V,E) be a minimally globally rigid graph in ^2. Suppose that U⊆ V and |U|≥ 3. Then |E(U)|≤ 3|U|-6. By Theorem <ref> the statement is true if G[U] is rigid. Suppose that G[U] is not rigid, that is, r_2(G[U])≤ 2|U|-4. We may assume that G[U] has no isolated vertices. It is well-known (see e.g. <cit.>) that for the collection G_i=(V_i,E_i), 1≤ i≤ k, of the maximal rigid subgraphs of G[U] we have ∑_i=1^k (2|V_i|-3)=r_2(G[U]). For an integer h≥ 2 let f(h) = 3h-6, if h≥ 3, and let f(h)=1 otherwise. Then we have |E(U)| = ∑_i=1^k |E_i| ≤ ∑_i=1^k f(|V_i|) ≤ ∑_i=1^k (3/2)(2|V_i|-3) = (3/2) r_2(G[U]) ≤ 3|U|-6, where the first inequality follows from Theorem <ref>. § ACKNOWLEDGEMENTS This research has been implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the ELTE TKP 2021-NKTA-62 funding scheme. The first author was also supported by the Hungarian Scientific Research Fund grant no. K135421, and the MTA-ELTE Momentum Matroid Optimization Research Group. We thank Dániel Garamvölgyi for several useful remarks, and for suggesting Lemma <ref>. We also thank Csaba Király for his comments. [AR] L. Asimow and B. Roth, The rigidity of graphs, Trans. Amer. Math. Soc., 245 (1978), pp. 279-289. [BJ] A.R. Berg and T. Jordán, Algorithms for graph rigidity and scene analysis, Proc. 11th Annual European Symposium on Algorithms (ESA) 2003, (G. Di Battista and U. Zwick, eds) Springer LNCS 2832, pp. 78-89, 2003. [Con] R. Connelly, Generic global rigidity, Discrete Comput. Geom. 33:549-563 (2005). [Conmerge] R. Connelly, Combining globally rigid frameworks, Proc. of the Steklov Institute of Mathematics, 275, 191-198, 2011. [coning] R. Connelly and W. Whiteley, Global rigidity: the effect of coning, Discrete Comput. Geom. (2010) 43: 717–735. [GJunitball] D. Garamvölgyi and T. Jordán, Global rigidity of unit ball graphs, SIAM J. Discrete Math. 34:1, pp. 212-229, 2020. [GJcccg] D. Garamvölgyi and T. Jordán, Globally linked pairs in braced maximal outerplanar graphs, Proc. CCCG 2022, Toronto, August 2022, pp. 162-168. [GJpartial] D. Garamvölgyi and T. Jordán, Partial reflections and globally linked pairs in rigid graphs, arXiv:2305.03412, May 2023. [GJmgr] D. Garamvölgyi and T. Jordán, Minimally globally rigid graphs, European J. Combin., Vol. 108, 103626, 2023. [Gluck] H. Gluck, Almost all simply connected closed surfaces are rigid, Geometric topology (Proc. Conf., Park City, Utah, 1974), pp. 225–239, Lecture Notes in Math., Vol. 438, Springer, Berlin, 1975. [GHT] S. Gortler, A. Healy, and D. Thurston, Characterizing generic global rigidity, American Journal of Mathematics, Volume 132, Number 4, August 2010, pp. 897-939. [hend] B. Hendrickson, Conditions for unique graph realizations, SIAM J. Comput. 21 (1992), no. 1, 65-84. [HT] J.E. Hopcroft and R.E. Tarjan, Dividing a graph into triconnected components, SIAM J. Comput.
2 (1973), 135–158. [JJconnrig] B. Jackson and T. Jordán, Connected rigidity matroids and unique realizations of graphs, J. Combin. Theory Ser. B, Vol. 94, 1-29, 2005. [JJS] B. Jackson, T. Jordán, and Z. Szabadka, Globally linked pairs of vertices in equivalent realizations of graphs, Discrete Comput. Geom., Vol. 35, 493-512, 2006. [JJS2] B. Jackson, T. Jordán, and Z. Szabadka, Globally linked pairs of vertices in rigid frameworks, in: Rigidity and Symmetry, Fields Institute Communications, Vol. 70, R. Connelly, A. Ivic Weiss, W. Whiteley (Eds.) 2014, pp. 177-203. [Jmemoirs] T. Jordán, Combinatorial rigidity: graphs and matroids in the theory of rigid frameworks. In: Discrete Geometric Analysis, MSJ Memoirs, vol. 34, pp. 33-112, 2016. [JKT] T. Jordán, Cs. Király, and S. Tanigawa, Generic global rigidity of body-hinge frameworks, J. Combin. Theory, Series B 117, 59-76, 2016. [JW] T. Jordán and W. Whiteley, Global rigidity, in J. E. Goodman, J. O'Rourke, and C. D. Tóth (eds.), Handbook of Discrete and Computational Geometry, 3rd ed., CRC Press, Boca Raton, pp. 1661-1694, 2018. [JT] T. Jordán and S. Tanigawa, Global rigidity of triangulations with braces, J. Comb. Theory Ser. B., 136, pp. 249-288 (2019). [KM] Cs. Király and A. Mihálykó, Fast algorithms for sparsity matroids and the global rigidity augmentation problem, Egerváry Research Group, Budapest, TR-2022-05, 2022. [laman] G. Laman, On graphs and rigidity of plane skeletal structures, J. Engineering Math. 4 (1970), 331-340. [oxley] J.G. Oxley, Matroid theory, Oxford Science Publications, The Clarendon Press, Oxford University Press, New York, 1992, xii+532 pp. [Saxe] J.B. Saxe, Embeddability of weighted graphs in k-space is strongly NP-hard, Technical report, Computer Science Department, Carnegie-Mellon University, Pittsburgh, PA, 1979. [SW] B. Schulze and W. Whiteley, Rigidity and scene analysis, in J.E. Goodman, J. O'Rourke, C.D. Tóth (eds.), Handbook of Discrete and Computational Geometry, 3rd ed., CRC Press, Boca Raton, 2018. [Tani] S. Tanigawa, Sufficient conditions for the global rigidity of graphs, J. Combin. Theory, Ser. B., Vol. 113, July 2015, pp. 123-140.
http://arxiv.org/abs/2307.04926v1
20230710221938
The Great Dimming of the hypergiant star RW Cephei: CHARA Array images and spectral analysis
[ "N. Anugu", "F. Baron", "D. R. Gies", "C. Lanthermann", "G. H. Schaefer", "K. A. Shepard", "T. ten Brummelaar", "J. D. Monnier", "S. Kraus", "J. -B. Le Bouquin", "C. L. Davies", "J. Ennis", "T. Gardner", "A. Labdon", "R. M. Roettenbacher", "B. R. Setterholm", "W. Vollmann", "C. Sigismondi" ]
astro-ph.SR
[ "astro-ph.SR" ]
Douglas R. Gies [email protected] 0000-0002-2208-6541]Narsireddy Anugu The CHARA Array of Georgia State University, Mount Wilson Observatory, Mount Wilson, CA 91023, USA 0000-0002-5074-1128]Fabien Baron Center for High Angular Resolution Astronomy and Department of Physics and Astronomy, Georgia State University, P.O. Box 5060, Atlanta, GA 30302-5060, USA 0000-0001-8537-3583]Douglas R. Gies Center for High Angular Resolution Astronomy and Department of Physics and Astronomy, Georgia State University, P.O. Box 5060, Atlanta, GA 30302-5060, USA 0000-0001-9745-5834]Cyprien Lanthermann The CHARA Array of Georgia State University, Mount Wilson Observatory, Mount Wilson, CA 91023, USA 0000-0001-5415-9189]Gail H. Schaefer The CHARA Array of Georgia State University, Mount Wilson Observatory, Mount Wilson, CA 91023, USA 0000-0003-2075-5227]Katherine A. Shepard Center for High Angular Resolution Astronomy and Department of Physics and Astronomy, Georgia State University, P.O. Box 5060, Atlanta, GA 30302-5060, USA 0000-0002-0114-7915]Theo ten Brummelaar The CHARA Array of Georgia State University, Mount Wilson Observatory, Mount Wilson, CA 91023, USA 0000-0002-3380-3307]John D. Monnier Department of Astronomy, University of Michigan, Ann Arbor, MI 48109, USA 0000-0001-6017-8773]Stefan Kraus Astrophysics Group, Department of Physics and Astronomy, University of Exeter, Exeter, EX4 4QL, UK 0000-0002-0493-4674]Jean-Baptiste Le Bouquin Institut de Planetologie et d'Astrophysique de Grenoble, Grenoble 38058, France 0000-0001-9764-2357]Claire L. Davies Astrophysics Group, Department of Physics and Astronomy, University of Exeter, Exeter, EX4 4QL, UK 0000-0002-1575-4310]Jacob Ennis Department of Astronomy, University of Michigan, Ann Arbor, MI 48109, USA 0000-0002-3003-3183]Tyler Gardner Astrophysics Group, Department of Physics and Astronomy, University of Exeter, Exeter, EX4 4QL, UK 0000-0001-8837-7045]Aaron Labdon European Southern Observatory, Casilla 19001, Santiago 19, Chile 0000-0002-9288-3482]Rachael M. Roettenbacher Department of Astronomy, University of Michigan, Ann Arbor, MI 48109, USA 0000-0001-5980-0246]Benjamin R. Setterholm Department of Climate and Space Sciences and Engineering, University of Michigan, Ann Arbor, MI 48109, USA Bundesdeutsche Arbeitsgemeinschaft Veraenderliche Sterne, Munsterdamm 90, D-12169 Berlin, Germany American Association of Variable Star Observers, 185 Alewife Brook Parkway, #410, Cambridge, MA 02138, USA International Center for Relativistic Astrophysics, Università Pontificia Regina Apostolorum and ITIS Galileo Ferraris, via R. Grazioli Lante 15A, 00195, Rome, Italy The cool hypergiant star RW Cephei is currently in a deep photometric minimum that began several years ago. This event bears a strong similarity to the Great Dimming of the red supergiant Betelgeuse that occurred in 2019–2020. We present the first resolved images of RW Cephei that we obtained with the CHARA Array interferometer. The angular diameter and Gaia distance estimates indicate a stellar radius of 900 - 1760 R_⊙ which makes RW Cephei one of the largest stars known in the Milky Way. The reconstructed, near-infrared images show a striking asymmetry in the disk illumination with a bright patch offset from center and a darker zone to the west. The imaging results depend on assumptions made about the extended flux, and we present two cases with and without allowing extended emission. 
We also present a recent near-infrared spectrum of RW Cephei that demonstrates that the fading is much larger at visual wavelengths compared to that at near-infrared wavelengths as expected for extinction by dust. We suggest that the star's dimming is the result of a recent surface mass ejection event that created a dust cloud that now partially blocks the stellar photosphere. § INTRODUCTION The recent Great Dimming of Betelgeuse provided an opportunity to study the dynamics of mass loss in a relatively nearby red supergiant (summarized by <cit.>). During the months before the fading (2019 January to November), the spectrum of Betelgeuse indicated an outflow from the photosphere that was possibly related to a large convective upwelling <cit.>. This probably led to a surface mass ejection of a large gas cloud that cooled and formed dust and increased the visible band extinction <cit.>. <cit.> obtained angularly resolved images of Betelgeuse with VLT SPHERE-ZIMPOL around the deep minimum when the star had faded by 1.2 mag in the V-band (2019 December to 2020 March). Their images showed that the southern hemisphere was much darker than in pre-minimum images suggesting that the fading was the result of partial extinction by a foreground dust cloud seen against a slightly cooler photospheric disk. The mass lost during this ejection was comparable to that for a full year of steady outflow <cit.> indicating that episodic mass ejections in supergiants constitute a significant fraction of their total mass loss <cit.>. The cool hypergiant RW Cephei (HD 212466) is now presenting us with a second opportunity to explore episodic mass loss at high angular resolution. Its spectrum indicates a very high luminosity (classified as K2 0-Ia by <cit.>), and the spectral line shapes suggest complex photospheric motions and outflow <cit.>. The star is a yellow semiregular variable, and it displays modest photometric variations on a timescale of about a year <cit.>. However, recent photometric measurements by <cit.>, AAVSO observers[https://www.aavso.org/LCGv2/], and the Kamogata/Kiso/Kyoto Wide-field Survey[http://kws.cetus-net.org/~maehara/VSdata.py] (KWS) show that RW Cep is now undergoing its own great dimming episode (Figure 1). By the end of 2022, RW Cep had faded by 1.1 mag in V-band to become fainter than at any time in the last century. Furthermore, the star became redder (larger V-I_c) as it faded. At the time of writing (2023 June), it appears to have passed its point of minimum light and is slowly brightening again. Visible-band spectra made in 2022 December by R. Leadbeater[https://www.cloudynights.com/topic/854288-rw-cephei-great-dimming/] show a good match to that of a K4 I spectral template with an interstellar reddening of E(B-V)=0.65 mag. High resolution spectra made by R. Leadbeater[http://www.threehillsobservatory.co.uk/astro/RW_Cep/rwcep_elodie_archive_THO_2022-12-19_Halpha.png] and by J. Guarro Fló[http://www.spectro-aras.com/forum/viewtopic.php?f=42&t=3057#p17405] show evidence of a narrow Hα emission line that was absent in ELODIE spectra of the star that were made between 1999 and 2005. This emission may be associated with excess mass loss during the current episode.
The star has a strong infrared flux excess that forms in a dust envelope <cit.>. It appears slightly extended (diameter ≈ 1 arcsec) in high resolution, mid-infrared images (see Fig. A.3 in <cit.> and Fig. 1 in <cit.>) indicating a long history of mass loss. Here we present the first interferometric images of RW Cep that we made recently with the CHARA Array, and these show striking similarities to the asymmetries seen in the VLT images of Betelgeuse. We describe the interferometric observations and derived images in Section 2, and we present a recently obtained near-infrared flux spectrum in Section 3. A comparison is made of the spectral energy distribution before and during the fading event in Section 4. We discuss the implications of these observations for models of dimming and mass ejection in Section 5. § INTERFEROMETRIC IMAGES We obtained a single observation of RW Cep with the Center for High Angular Resolution Astronomy (CHARA) Array <cit.> on 2022 December 23 UT. We used only five of the six Array telescopes because the S1 telescope had insufficient available delay length for the star's position in the north-western sky at the time of the observations. We used the dual beam combiners MIRC-X <cit.> for the near-infrared H-band (1.50 to 1.74 μm) and MYSTIC for the K-band (2.00 to 2.37 μm) <cit.>. The observations were made with a spectral resolving power of R=190 and 100 for MIRC-X and MYSTIC, respectively. The nominal angular resolution is approximately 0.5 and 0.6 milliarcsec (mas), respectively. These beam combiners use the telescopes of the Array to collect interferometric fringe measurements for a large range in baseline over much of the (u,v) spatial frequency plane (Figure 2). The measurements were reduced using the standard MIRC-X/MYSTIC pipeline (version 1.4.0; <cit.>). Calibrator observations were made of HD 219080 (before the science target) and were used to correct for atmospheric and instrumental effects to obtain absolute-calibrated visibilities V^2, closure phases (CP), and triple amplitudes (T3A). The calibrator diameter (0.69 mas for a uniform disk) was adopted from the JMMC Stellar Diameters Catalog (JSDC; <cit.>). The derived visibilities and closure phases are shown in Figures 3 and 4 for the H- and K-bands, respectively. The star is clearly resolved in both bands (visibility declining with larger spatial frequency), and the data show evidence of an asymmetric flux distribution (non-zero and non-π closure phase). We first fit the interferometric visibilities V^2 using an analytical model of a uniformly bright circular disk for the star with an incoherent background flux on larger spatial scales over-resolved in the CHARA Array observations. The fits were made over different wavelength ranges using the code PMOIRED[https://github.com/amerand/PMOIRED] <cit.>. Table 1 lists the derived values of the background flux fraction, uniform disk angular diameter θ = θ_UD, and reduced chi-squared χ^2_ν of the fit of the visibilities. The first row gives the H-band fits of the MIRC-X measurements, and the second and third rows give the K-band results from MYSTIC in the K-band pseudo-continuum and in the CO bands, respectively (see Figure 8 below). The angular diameter is found to increase in size with wavelength and appears to be significantly larger in the CO bands (row 3). This is evidence of an extended atmosphere seen in the CO bands that is also observed in other cool, luminous stars <cit.>.
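For reference, the analytical model used in these fits is the standard uniform-disk visibility curve diluted by an incoherent, over-resolved background: the background contributes to the total flux but not to the fringe contrast, so it simply scales the disk visibility by 1 - f_bg. The minimal Python sketch below is our illustration of that model, not the PMOIRED implementation, whose internal parameterization may differ.

```python
import numpy as np
from scipy.special import j1

MAS_TO_RAD = np.pi / (180.0 * 3600.0e3)  # one milliarcsecond in radians

def v2_uniform_disk_bg(spatial_freq, theta_mas, f_bg):
    """Squared visibility of a uniform disk of angular diameter theta_mas (mas),
    diluted by an over-resolved background carrying a fraction f_bg of the
    total flux. spatial_freq is baseline/wavelength in rad^-1 (B[m]/lambda[m]).
    """
    x = np.pi * theta_mas * MAS_TO_RAD * np.asarray(spatial_freq, dtype=float)
    v = np.ones_like(x)
    nz = x != 0
    v[nz] = 2.0 * j1(x[nz]) / x[nz]      # classical uniform-disk visibility
    return ((1.0 - f_bg) * v) ** 2

# e.g. a 2.5 mas disk with a 10% background on a 300 m baseline at 1.6 microns:
# v2_uniform_disk_bg(300.0 / 1.6e-6, 2.5, 0.10)
```

A least-squares fit of this function to the calibrated V^2 points (for example with scipy.optimize.curve_fit) yields the kind of diameter and background-fraction estimates listed in Table 1.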
There is an extended background flux that generally forms a larger fraction of the total flux at longer wavelength. We suspect that this flux originates in extended dust emission that reaches an angular size of order 1 arcsec at 10 μm <cit.>.

Table 1: Angular Diameter Estimates

Method      Wavelength (μm)  Background Fraction  θ (mas)      χ^2_ν (V^2)  χ^2_ν (CP)  χ^2_ν (T3A)
PMOIRED UD  1.50 – 1.72      0.09 ± 0.02          2.44 ± 0.02  3.8          ...         ...
PMOIRED UD  1.98 – 2.29      0.17 ± 0.01          2.63 ± 0.02  4.2          ...         ...
PMOIRED UD  2.31 – 2.37      0.15 ± 0.01          3.21 ± 0.02  2.6          ...         ...
SQUEEZE     1.50 – 1.72      0.19                 2.26 – 2.69  1.37         1.24        1.35
SQUEEZE     1.98 – 2.29      0.17                 2.08 – 2.66  1.12         1.73        1.01
OITOOLS     1.50 – 1.72      0.09                 2.41         1.29         1.10        0.54
OITOOLS     1.98 – 2.29      0.08                 2.35         1.26         2.28        0.59
SURFING     1.50 – 1.72      0.08                 2.45         2.77         2.92        1.13
SURFING     2.11 – 2.28      0.06                 2.44         1.32         7.91        0.88
SED fit     0.35 – 2.20      0                    2.58 ± 0.16  ...          ...         ...

The relatively high quality of the visibility and closure phase measurements encouraged us to derive aperture synthesis images that make good fits of the observations. We caution at the outset that this data set is not ideal for image reconstruction. The observations were made only over a duration of one hour when the star was already at a large hour angle, and only five of the six telescopes were available. Consequently, the (u,v) spatial frequency coverage is under-represented in some sky orientations, and the effective angular resolution is better in the north–south directions compared to the east–west directions (Figure 2). As a result, the spatial resolution in the reconstructed images varies with position angle and is poor at position angles near +60^∘ and -40^∘ (both ± 180^∘). Furthermore, the star has a relatively small angular size, and any small-scale structured flux in the extended emission will complicate the image reconstruction of the star itself. We first used the SQUEEZE image reconstruction software[https://github.com/fabienbaron/squeeze] <cit.> to make the images. Positivity was enforced, and we used the edge-preserving ℓ_2-ℓ_1 regularization <cit.> with the hyperparameter weight determined by the classical L-curve method. To avoid getting trapped in local minima in the solution space, 50 initial reconstructions were started from random images. The solutions converged into images with similar appearance, and these trials were then co-registered and averaged to obtain representative images. The reduced χ^2 values were 1.37 for the MIRC-X data fits and 3.7 for the MYSTIC data fits, suggesting much stronger chromaticity (wavelength dependence) for the latter. For the MYSTIC data, we removed the spectral channels containing the CO bands to perform an image reconstruction in the K-band pseudo-continuum. We obtained a lower reduced χ^2 ∼ 1.12 by omitting those long wavelength bins that record the CO bands. The SQUEEZE reconstructed images are shown in the top panels of Figure 5 for each of the H-band and wavelength restricted K-band observations. These are 6.4 × 6.4 mas images (64 × 64 pixels) with an orientation of north to the top and east to the left. There is a clear asymmetry evident in the images with a brighter zone towards the north-east limb and a darker (and possibly extended) zone towards the western side. We experimented with several other choices of regularizer for the image reconstruction, and the same large scale asymmetry appears in those images. The mean background and diameter estimates from the SQUEEZE images are given in Table 1.
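For readers unfamiliar with the ℓ_2-ℓ_1 regularization mentioned above, the sketch below shows one common (hyperbolic) form of such an edge-preserving penalty. It is our illustration of the general idea only; the exact functional form used inside SQUEEZE may differ in detail.

```python
import numpy as np

def edge_preserving_penalty(img, delta=1e-3):
    """Hyperbolic l2-l1 penalty on image gradients: approximately quadratic
    for gradient magnitudes well below delta (smooths noise) and linear
    above it (preserves sharp edges)."""
    g2 = np.zeros_like(img, dtype=float)
    g2[:-1, :] += np.diff(img, axis=0) ** 2   # vertical finite differences
    g2[:, :-1] += np.diff(img, axis=1) ** 2   # horizontal finite differences
    return float(np.sum(np.sqrt(delta**2 + g2) - delta))

# Regularized imaging then minimizes chi^2(data | image) + mu * penalty(image),
# with the hyperparameter mu chosen, for example, by the L-curve method.
```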
The non-spherical shape and possible limb extensions that characterize the SQUEEZE images may have a physical rather than instrumental origin. Models of mass loss in red supergiant stars by <cit.> indicate that such stars may have extended clumpy regions that create shell-like features, and observations of the hypergiant VY CMa by <cit.> show the presence of clumps and knot structures close to the star. <cit.> obtained spectropolarimetry of the hypergiant star μ Cep that they interpret in terms of rising convective plumes that reach a radius of 1.1 R_⋆. Thus, the irregular shape of RW Cep in the SQUEEZE image reconstructions may be due to the combined effects of extended plume emission and localized dust emission and absorption (together creating only a modest change in overall flux; see Table 2 below). We were concerned that the boxy image structure might be due to the limited (u,v) coverage of the observations (see Figure 2), so we performed a numerical test to check if the non-spherical appearance is due to the star itself. We created a model of a spherical, limb-darkened disk (power law) using the angular diameter θ determined from the OITOOLS reconstructions described below (see Table 1). Then we used these model images to generate the OIFITS data sets that would have been observed for these simple disks. We performed SQUEEZE reconstructions from the model data using the same (u,v) coverage and noise levels associated with the observations. The resulting SQUEEZE images are shown in the bottom row of Figure 5, and these appear more or less circular as expected. These tests indicate that the unusual shape of RW Cep in the SQUEEZE images reconstructed from the observations probably does not have an instrumental explanation, but that the stellar shape is sculpted by dynamical processes in its outer layers. It is worthwhile considering how the star would appear if the image reconstruction instead is confined to within the stellar radius, and the star is surrounded by a diffuse, over-resolved background light. We did this by making sets of images that constrain the structured flux to fall within a circle defined by the stellar photosphere. We adjusted the uncertainties in the measurements in these cases by adding a 10% relative error and a 0.0002 additive correction for V^2 and adding a minimum error of 1 degree for the closure phase errors. These revisions account for possible systematic uncertainties. A set of OITOOLS images was obtained using the OITOOLS.jl software suite[https://github.com/fabienbaron/OITOOLS.jl]. The initial starting images consisted of the best-fitting uniform disks derived for the MIRC-X and MYSTIC datasets. The regularization was set up to use a combination of image centering, compactness and ℓ_1-ℓ_2 edge-preserving smoothness <cit.>, where the compactness prior was set as the starting image. The H and K-band images from the OITOOLS reconstructions (for 128 × 128 pixels) are shown in the top panels of Figure 6, and the associated background and angular diameter estimates are given in Table 1. The star appears to be larger and more circular using this method, but some of the same flux asymmetries found in the SQUEEZE images are also recovered here but with lower contrast. One more set of image reconstructions was made using the SURFING algorithm <cit.> that assigns a specific intensity to each element on the three-dimensional surface of the star.
The first step was to find a best-fit limb-darkened angular diameter that acts as the outer boundary on the assigned flux. The best fit diameters θ = θ_LD are listed in Table 1, and these were derived assuming a power-law limb-darkening relation, I(μ)/I(1) = μ^α, where μ is the cosine of the angle between the surface normal and the line of sight and α = 0.26. The SURFING algorithm was then applied iteratively to solve for the surface element brightness (in this case only for elements on the visible hemisphere). We show in the lower panels of Figure 6 one pair of images from the final set of walker solutions (1024 × 1024 pixels of size 0.005 mas, inset into a uniform background zone). The H and K-band images appear similar to each other and show a bright patch offset from center and a darkening towards the western limb. We also created a set of images using the ROTIR code <cit.>, which likewise assigns flux to surface patches on a rotating star, and these images are qualitatively similar to those derived using SURFING. The SQUEEZE, OITOOLS, and SURFING images show some similarities but also some significant differences in appearance. The SQUEEZE images (Fig. 5) were made with the fewest assumptions about the expected appearance. The star in the SQUEEZE images is non-circular and boxy in appearance, with sides tilted by about 20° relative to north. The star appears darker on its western limb, and the brightest zone is positioned north-east of center. The K-band image shows greater contrast across the disk, and the limb is spread over a larger span in radius. We show in Section 3 below that dust emission begins to become a flux contributor in the K-band, and the dust opacity can create both emission (off of the stellar disk) and absorption (projected against the disk). The OITOOLS and SURFING images (Fig. 6) restrict the reconstructed flux to the star. They show darker limbs (especially the western limb) coincident with the boxy sides seen in the SQUEEZE images. There is also a bright off-center patch that appears towards the north-east (south-west) in the OITOOLS (SURFING) K-band images, while two offset patches appear in the H-band images. The differences in bright zone position may result from the neglect of structured off-disk light in these two algorithms. Note that the total amount of off-disk flux is about two times larger in the SQUEEZE reconstructed images compared to the OITOOLS and SURFING images (see the background fractions given in Table 1). The surface intensity distribution of red supergiants is probably dominated by hot, rising convection cells <cit.>, but in the case of hypergiants, mass loss becomes the dominant process that shapes the intensity distribution <cit.>. We expect that the local mass-loss rate may vary with position on the star due to the kinematics and radiation of hot convective cells, and the observational consequences may be especially important at the stellar limb, where hotter gas can create spatially extended emission. The SQUEEZE images were made without any geometric assumptions about spherical symmetry, and the irregular stellar shape in these images may reflect the spatial variation in mass-loss rate.

§ NEAR-INFRARED SPECTROSCOPY

We obtained complementary near-infrared (NIR) spectroscopy of RW Cep using the TripleSpec instrument at the 3.5 m telescope of the Apache Point Observatory <cit.>. TripleSpec records the NIR spectrum over the wavelength range of 0.9 to 2.5 μm with a spectral resolving power of R = 3500.
The observations were obtained on 2023 January 9 and 12 in good sky conditions. We made sets of exposures in the standard ABBA nodding pattern between slit offset positions A and B for subtraction of the sky background. In order to avoid saturation of the detector by the bright flux of RW Cep, the telescope was defocused to create a broad (and double-peaked) spatial profile across the spectrograph slit. We made multiple observations of RW Cep and a nearby flux calibrator star α Lac (HD 213558; A1 V) with single exposure times of 1–2 and 8–12 sec, respectively. The spectra were reduced, extracted, and combined using a version of the IDL Spextool software <cit.> modified for TripleSpec[https://www.apo.nmsu.edu/arc35m/Instruments/TRIPLESPEC/TspecTool/index.html]. The pipeline includes flat field division, wavelength calibration based upon the atmospheric airglow emission lines in the stellar spectra, and spectrum extraction. The atmospheric telluric lines were removed and a flux calibration applied using the IDL code Xtelluric <cit.>. This procedure uses a high spectral resolving power model spectrum of the A0 V star Vega that is fit to the spectrum of the flux calibrator α Lac to remove the stellar lines, and then the normalized result is used to extract the atmospheric telluric lines. The final step is to set the absolute flux calibration by transforming the model Vega spectrum into a representation of the calibrator star spectrum by scaling and reddening according to the calibrator star's B and V magnitudes. However, small differences between the α Lac (A1 V) and Vega (A0 Va) spectra can amount to large uncertainties in the estimated flux in the near-infrared part of the spectrum. We checked the NIR flux estimates by comparing the transformed Vega spectrum with observed fluxes for the calibrator star α Lac from published photometry collected in the VizieR Photometry Tool[https://vizier.cds.unistra.fr/vizier/sed/] by Anne-Camille Simon and Thomas Boch. We found that the transformed Vega spectrum used by Xtelluric to model the spectrum of α Lac actually overestimated the observed flux by about 12% in the JHK bands, so we applied a wavelength-dependent correction to the RW Cep fluxes to account for the discrepancy between the applied and actual fluxes of the calibrator star α Lac. The final spectrum of RW Cep is shown in Figure 7. We estimate that the absolute flux calibration has an uncertainty of approximately 10% based upon the scatter between sets of observations and the errors introduced in setting fluxes from the calibrator star α Lac. A NIR spectrum of RW Cep in the pre-dimming state (from 2005 August 26) is available from the IRTF Spectral Library[http://irtfweb.ifa.hawaii.edu/~spex/IRTF_Spectral_Library/] <cit.>, and a corrected version of this spectrum is plotted for comparison in Figure 7. The original IRTF spectrum was flux calibrated based upon 2MASS magnitudes that unfortunately have large uncertainties (± 0.2 mag) for such a bright target. In the next section, we consider fluxes in the bright state from published photometry (see Table 2 below). A comparison of the average fluxes over the JHK bands in the IRTF spectrum with the bright state photometric values indicates that the IRTF spectrum is approximately 15% fainter than expected. Consequently, we applied a wavelength-dependent flux correction to bring the IRTF spectrum into consistency with the photometry, and it is the corrected version that is plotted in Figure 7.
This spectrum has associated flux uncertainties of about 10% (0.1 mag). The recent spectrum made during the dimming event is somewhat fainter than the archival spectrum by an amount that is larger at shorter wavelengths. The magnitude changes from a comparison of the spectra (Table 2) are ΔJ = +0.10 ± 0.19 mag, ΔH = +0.08 ± 0.19 mag, and ΔK = +0.11 ± 0.25 mag. Together with the visual magnitude estimates (see Figure 1), it appears that RW Cep has faded by approximately 1.1, 0.7, and 0.1 mag in the V, I_c, and JHK bands, respectively (see Figure 9 below). The pre-dimming and dimming event spectra appear similar, but there are several significant differences. We see that the continuum slope in the K-band is less steep in the dimming event spectrum, and this suggests that there is an additional flux component now present that increases in strength with wavelength, as expected for dust emission. Several of the absorption lines appear somewhat deeper, implying a slightly cooler photospheric temperature. In particular, the CO 2.29 μm absorption is now much deeper than in the archival spectrum (Figure 8). <cit.> discuss a number of absorption features in the spectra of late-type giants and supergiants (including RW Cep) that are sensitive to stellar effective temperature, and they find that the CO feature grows quickly in strength with declining temperature (see their Figure 11, top left panel). Based upon their fit of the temperature dependence of the CO line strength, we estimate that the photospheric spectrum indicates a drop from 4200 K (for the archival spectrum) to 3900 K during the current faint state (or somewhat less if the absorption strength is reduced by dust emission in the K-band). We can use the absolute fluxes from Figure 7 to make approximate estimates of the temperature distributions associated with the interferometric images. The observed flux is related to the angular integral of the image specific intensity: F_λ = ∮ I_λ dω = Σ_i I_i ω, where I_i is the specific intensity of pixel i and ω is the angular area of each pixel in the image (ω = 2.35 × 10⁻¹⁹ sr for the 0.1 × 0.1 mas pixels in the SQUEEZE images). The observed fluxes F_λ averaged over the MIRC-X H-band and MYSTIC K-band wavelength ranges are F_λ = (1.36 ± 0.17) × 10⁻¹¹ and (6.25 ± 1.23) × 10⁻¹² erg sec⁻¹ cm⁻² Å⁻¹, respectively. We need to deredden these fluxes to account for interstellar extinction. Below we derive a faint state reddening of E(B-V) = 0.64 ± 0.08 mag (Table 2), and this corresponds to NIR extinctions of A_H = 0.34 ± 0.04 mag and A_K = 0.23 ± 0.03 mag <cit.>. Then the extinction-corrected (unreddened) fluxes are F_λ^UR = (1.86 ± 0.24) × 10⁻¹¹ and (7.71 ± 1.51) × 10⁻¹² erg sec⁻¹ cm⁻² Å⁻¹ for the H and K-bands, respectively. We make the simplifying approximation that the specific intensities are set by the gas temperature through the Planck function. We can then use the above equation for the flux to relate the image pixel intensity P_i (normalized so that the flux summed over the image is one) to the gas temperature T: T(P_i) = b_2 / ln(1 + b_1/P_i). The constants are b_1 = 2hc²ω / (λ⁵ F_λ^UR) = 0.0137 ± 0.0018 and 0.0070 ± 0.0014, and b_2 = hc / (λk) = 8909 K and 6540 K, for adopted central wavelengths of 1.615 and 2.200 μm, respectively. Note that the temperatures derived this way may be slight overestimates, because part of the observed flux may arise in the circumstellar environment and not in the photosphere.
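As a check on the arithmetic, the short sketch below (our illustration, not code from the paper) evaluates the intensity-to-temperature mapping T(P_i) = b_2 / ln(1 + b_1/P_i) and reproduces the b_1 and b_2 constants from the dereddened fluxes quoted above; the factor of 10⁸ converts the Planck function from per-cm to per-Å units.

    import numpy as np

    h, c, k = 6.626e-27, 2.998e10, 1.381e-16   # cgs constants
    omega = 2.35e-19                            # sr per 0.1 x 0.1 mas pixel

    def planck_constants(lam_cm, flux_unred):   # flux in erg s^-1 cm^-2 A^-1
        b1 = 2.0 * h * c**2 * omega / (lam_cm**5 * flux_unred * 1e8)
        b2 = h * c / (lam_cm * k)
        return b1, b2

    def pixel_temperature(P, b1, b2):           # P: pixel intensity, image sums to 1
        return b2 / np.log1p(b1 / P)

    for lam_um, f in [(1.615, 1.86e-11), (2.200, 7.71e-12)]:
        b1, b2 = planck_constants(lam_um * 1e-4, f)
        print("lambda = %.3f um: b1 = %.4f, b2 = %.0f K" % (lam_um, b1, b2))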
This method applied to the SQUEEZE images in Figure 5 leads to peak temperatures of around 4490 K (H-band) and 4860 K (K-band), averaged over pixels with intensities greater than 70% of the maximum intensity. Similarly, the full disk temperatures are approximately 3520 K for both the H and K-bands, averaged over pixels with intensities greater than 10% of the maximum intensity. These temperature estimates are comparable to that estimated above from the CO line strength (3900 K).

§ SPECTRAL ENERGY DISTRIBUTION

We can obtain another estimate of the angular diameter of RW Cep from a comparison of the observed and model flux distributions, keeping in mind that the observed fluxes are actually the sum of stellar and circumstellar light. The shape of the spectral energy distribution (SED) is a function of the stellar flux, dust emission, and extinction, so an examination of the SED in both the bright and faint states offers a means to check on extinction changes resulting from additional circumstellar dust. Here we first present the bright state SED based upon archival photometry and then compare it to the faint state case based upon current flux estimates. The fluxes for the bright state were collected from published photometry in the VizieR Photometry Tool. We added to this set the fluxes derived from the photometry catalog of <cit.> using the flux calibrations from <cit.>. We removed the 2MASS JHK fluxes that are suspect for this very bright star. The observed SED is shown in Figure 9 in the (log λ, log λF_λ) plane. The measurements indicated by plus signs for wavelengths < 3 μm were used in the subsequent fit, while the long wavelength points shown as triangles were omitted because a large fraction of the IR excess originates in circumstellar dust <cit.>. We list in column 3 of Table 2 the averages of the flux measurements in the primary photometric bands.

Table 2. Spectral Energy Distribution

Filter Band | Wavelength (Å) | Bright State (erg cm⁻² sec⁻¹ Å⁻¹) | Faint State (erg cm⁻² sec⁻¹ Å⁻¹) | Faint State Source
V           | 5450           | (8.16 ± 0.10) × 10⁻¹²             | (3.40 ± 0.34) × 10⁻¹²             | AAVSO
I_c         | 7980           | (2.07 ± 0.20) × 10⁻¹¹             | (1.02 ± 0.10) × 10⁻¹¹             | KWS
Y           | 10200          |                                   | (2.02 ± 0.25) × 10⁻¹¹             | APO
J           | 12500          | (2.17 ± 0.25) × 10⁻¹¹             | (1.98 ± 0.21) × 10⁻¹¹             | APO
H           | 16300          | (1.43 ± 0.14) × 10⁻¹¹             | (1.36 ± 0.17) × 10⁻¹¹             | APO
H           | 16300          |                                   | (1.14 ± 0.16) × 10⁻¹¹             | MIRC-X
K           | 22000          | (6.82 ± 0.43) × 10⁻¹²             | (6.25 ± 1.21) × 10⁻¹²             | APO
K           | 22000          |                                   | (6.05 ± 0.96) × 10⁻¹²             | MYSTIC
T_eff (K)   |                | 4200                              | 3900                              |
E(B-V) (mag)|                | 0.46 ± 0.06                       | 0.64 ± 0.08                       |
θ (mas)     |                | 2.25 ± 0.18                       | 2.58 ± 0.16                       |

A model of the spectral flux for RW Cep was selected from the grid of BT-Dusty/Phoenix stellar atmosphere models from <cit.> that are available from the Spanish Virtual Observatory Theoretical Spectra Web Server[http://svo2.cab.inta-csic.es/theory/newov2/]. These flux distributions are derived from spherical geometry atmospheres that use solar abundances from <cit.> and that account for mixing processes to create condensates. We chose a model with T_eff = 4200 K (close to the value of 4185 K derived by <cit.> from spectral indices) and log g = 0.5 (cgs units). This gravity value is the lowest in the grid, but it is probably still too large by as much as 1 dex for this hypergiant. However, the characteristics of the model spectrum are primarily determined by the temperature, so this approximation is reasonable. The model spectrum was rebinned to a low resolving power of R = 8 in order to compare the model fluxes with those derived from broad-band photometry.
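The CHARA-based entries in Table 2 come from magnitudes converted to fluxes (the derivation of those magnitudes is described in the next paragraphs). The sketch below shows the conversion F = F₀ × 10^(-0.4 m); the zero points are generic Johnson-system values that we assume for illustration, since the paper uses its own cited calibrations.

    F0 = {"H": 1.14e-10, "K": 3.96e-11}   # erg s^-1 cm^-2 A^-1 at m = 0 (approximate)

    def mag_to_flux(band, mag):
        return F0[band] * 10.0 ** (-0.4 * mag)

    print(mag_to_flux("H", 2.50))   # ~1.1e-11, cf. the MIRC-X entry in Table 2
    print(mag_to_flux("K", 2.04))   # ~6.1e-12, cf. the MYSTIC entry in Table 2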
The model spectrum was fit to the observed fluxes using two parameters, the reddening E(B-V) and the limb-darkened angular diameter θ_LD. We used the reddening law from <cit.> for a ratio of total-to-selective extinction of 3.1. The fitted model spectrum is shown in Figure 9 as a solid line for E(B-V) = 0.46 ± 0.06 mag and θ = 2.25 ± 0.18 mas. <cit.> derive a K-band extinction of A(K) = 0.26 ± 0.17 mag that corresponds to a reddening of E(B-V) = 0.72 ± 0.47 mag for the reddening law from <cit.>, which agrees within uncertainties with our result. The angular diameter from the fit is similar to that from the JMMC Stellar Diameters Catalogue of θ_LD = 2.14 ± 0.17 mas. Estimates of the flux of RW Cep in the faint state are listed in column 4 of Table 2. The entries for the V and I_c bands are from recent AAVSO and KWS magnitudes, respectively, that were converted to fluxes using the calibrations from <cit.>. There are four rows that give the average fluxes over the standard filter band ranges obtained from the APO TripleSpec spectrum shown in Figure 7. In addition, there are estimates for the H and K-bands that we derived from the raw counts in the CHARA Array observations of RW Cep and the calibrator star HD 219080 (7 And). A comparison of the detector counts from the MIRC-X and MYSTIC observations led to magnitude differences of RW Cep relative to 7 And of ΔH = -1.31 ± 0.14 mag and ΔK = -1.73 ± 0.16 mag. We adopted magnitudes for 7 And of H = 3.81 and K = 3.77 from <cit.>[Magnitudes collected from <cit.>, which have typical uncertainties of ± 0.02 mag for bright stars. These uncertainties are much smaller than the other sources of uncertainty in the magnitude error budget.], so the estimated magnitudes of RW Cep on 2022 December 23 are H = 2.50 ± 0.14 mag and K = 2.04 ± 0.16 mag. The corresponding fluxes from the calibrations of <cit.> are listed in rows 6 and 8 of Table 2. The faint state fluxes are shown as diamond symbols in the SED in Figure 9. The flux decrease in the near-IR bands is modest compared to the large drop observed in the visual V-band. We made a separate fit of the faint state fluxes, this time using a BT-Dusty model for a photospheric temperature of T_eff = 3900 K as indicated by the strength of the CO-band features in the TripleSpec observation (Section 3). This fit is shown as the dotted line in Figure 9, and the fitting parameters are E(B-V) = 0.64 ± 0.08 mag and θ = 2.58 ± 0.16 mas. Note that the fitted angular diameter is relatively insensitive to the value adopted for the temperature, although the derived reddening is more sensitive to it: if we adopt instead a flux model for T_eff = 4200 K, then the fit of the faint state fluxes yields E(B-V) = 0.88 ± 0.07 mag and θ = 2.54 ± 0.15 mas. The angular size of RW Cep from the faint state flux fits is marginally larger than that for the bright state (by 1.4 σ), but we caution that the fits do not account for the flux component from circumstellar dust. The difference in reddening and extinction between the bright and faint states is more significant, and it suggests that the current Great Dimming of RW Cep is mainly the result of increased circumstellar dust obscuration that is particularly important at shorter wavelengths.

§ DISCUSSION

The CHARA Array interferometric observations and the derived images resolve the photosphere of RW Cep for the first time. The angular diameter estimates from the uniform disk fits are listed in Table 1. The MYSTIC visibilities indicate that the star is about 27% larger in the longest wavelength channels with λ > 2.3 μm.
This spectral range corresponds to that where the CO transitions are particularly strong (Figure 8), and we suggest that this flux originates at higher levels in the extended atmosphere, making the star appear larger at these wavelengths. The star appears somewhat box-like in shape in both the H and K-band SQUEEZE images of Figure 5. The range in diameter estimates from the SQUEEZE images is given in rows 4 and 5 of Table 1. The uniform disk diameters from the OITOOLS images are given in rows 6 and 7, and the limb-darkened diameters associated with the SURFING images are listed in rows 8 and 9. The diameters from the OITOOLS and SURFING images occupy the mid-range of the estimates found by the other methods. The distance to RW Cep is not well established. It is often assumed to be a member of the Cep OB1 association <cit.> at a distance of 3.4 kpc <cit.>. It may be a part of the Berkeley 94 star cluster at a distance of 3.9 kpc <cit.>. These estimates agree with the distance from Gaia DR2 of 3.4^+1.4_-0.8 kpc <cit.>, but are significantly lower than the most recent estimate from Gaia EDR3 of 6.7^+1.6_-1.0 kpc <cit.>. The latter distance would place RW Cep in the Norma/Outer Arm of the Milky Way Galaxy. The discrepancy between the Gaia estimates may be related to photocenter jitter caused by stellar convection and outflows <cit.>. We will assume that the actual distance falls in the Gaia range of 3.4 to 6.7 kpc. If we adopt the angular diameter from the SURFING images of θ_LD = 2.45 mas, then the stellar radius is 900 – 1760 R_⊙ or 4.2 – 8.2 AU. This places RW Cep among the largest stars known in the Milky Way <cit.>. The most striking features in the reconstructed images are the large variations in brightness across the visible hemisphere of the star. The surface flux distribution is asymmetric, with a bright region offset from center and a darker zone towards the western side. The darker zone is slightly more prominent in the K-band images, and the contrast between dark and bright zones may indicate that the darker region is related to cool circumstellar dust. However, the details of the reconstructed images depend upon assumptions about the extended flux, and the images shown in Figures 5 and 6 are representative of the range in results. The NIR spectroscopy shows that the fading is much smaller at longer wavelengths compared to that in the visual spectrum. The relative fractions of flux fading from the V-band (0.55 μm) to the K-band (2.2 μm) are consistent with an extra component of dust extinction with an associated additional reddening of E(B-V) ≈ 0.18 mag (assuming the nominal extinction law presented by <cit.>). Furthermore, the K-band continuum slope indicates the presence of a dust flux component that contributes progressively more flux at longer wavelengths. We suggest that the apparent disk asymmetry observed in the interferometric images is also related to this component of circumstellar dust. The Great Dimming of RW Cep may be the latest in a series of mass ejections over the last century. <cit.> recently presented an analysis of archival measurements of the SED of RW Cep that documents the infrared excess from dust emission. The SED has one excess component that contributes strongly in the 5 – 12 μm range and a second component beyond 20 μm, and <cit.> suggest that these correspond to inner and outer shells with temperatures of 250 K and 100 K, respectively. These dust shells have an angular radius of 300 – 400 mas in an image made at 11.9 μm (see their Figure 1).
Thus, the current fading may be the latest in a continuing series of mass ejection and dust formation episodes, and the newly formed dust now partially obscures the visible hemisphere. The overall appearance of the H and K-band images of RW Cep is similar in character to the asymmetry found by <cit.> in visible band images made during the great dimming of Betelgeuse, which they attribute to dust formation in mass ejected from the star. Furthermore, the current dimming of RW Cep is similar in amplitude and reddening to that observed for Betelgeuse (Figure 1). We suspect that similar processes are causing the asymmetric appearance of the CHARA images made during the great dimming of RW Cep. We note that the star attained a relative brightness maximum in 2019 November (JD 2458800 in Figure 1) and then generally faded to its current historic minimum <cit.>. We suggest that the time of maximum light may have corresponded to a particularly energetic convective upwelling of hot gas that launched a surface mass ejection event. This gas is now cooling to the point of dust formation, and the part of the ejected cloud seen in projection against the photosphere causes the darker appearance of the western side of the star. The duration of such dimming events may scale with stellar and dust cloud size, so that the timescale ranges from about a year for the smaller Betelgeuse, through several years for RW Cep, to decades for the larger VY CMa <cit.>. We plan to continue CHARA Array observations over the next year to explore how developments in the images are related to the photometric variations.

This work is based upon observations obtained with the Georgia State University Center for High Angular Resolution Astronomy Array at Mount Wilson Observatory. The CHARA Array is supported by the National Science Foundation under Grant Nos. AST-1636624, AST-1908026, and AST-2034336. Institutional support has been provided from the GSU College of Arts and Sciences and the GSU Office of the Vice President for Research and Economic Development. F.B. acknowledges funding from the National Science Foundation under Grant No. AST-1814777. S.K. acknowledges support from the European Research Council through a Starting Grant (Grant Agreement No. 639889) and Consolidator Grant (Grant Agreement ID 101003096). J.D.M. acknowledges funding for the development of MIRC-X (NASA-XRP NNX16AD43G, NSF AST-1909165) and MYSTIC (NSF ATI-1506540, NSF AST-1909165). The work is also based on observations obtained with the Apache Point Observatory 3.5-meter telescope, which is owned and operated by the Astrophysical Research Consortium. We thank Russet McMillan and Candace Gray for their help with obtaining the APO observations.

Facilities: CHARA, APO

Software: PMOIRED <cit.>, SQUEEZE <cit.>, SURFING <cit.>, Spextool <cit.>, Xtelluric <cit.>
Categorical Realizability for Non-symmetric Closed Structures

Haruka Tomita

In categorical realizability, it is common to construct categories of assemblies and categories of modest sets from applicative structures. These categories have structures corresponding to the structures of the applicative structures. In the literature, classes of applicative structures inducing categorical structures such as Cartesian closed categories and symmetric monoidal closed categories have been widely studied. In this paper, we expand these correspondences between categories with structure and applicative structures by identifying the classes of applicative structures giving rise to closed multicategories, closed categories and monoidal bi-closed categories, as well as (non-symmetric) monoidal closed categories. These applicative structures are planar in that they correspond to appropriate planar lambda calculi by combinatory completeness. These new correspondences are tight: we show that, when a category of assemblies has one of the structures listed above, the base applicative structure is in the corresponding class. In addition, we introduce planar linear combinatory algebras by adapting the linear combinatory algebras of Abramsky, Haghverdi and Scott to our planar setting, which give rise to categorical models of the linear exponential modality and the exchange modality on the non-symmetric multiplicative intuitionistic linear logic.

August 12, 2023

§ INTRODUCTION

Realizability started with <cit.> to give interpretations for Heyting arithmetic, and subsequently has been developed in many directions. The categorical realizability we consider here is one such development, giving categorical models of various programming languages and logics. Given a very simple algebraic structure 𝒜 called an applicative structure (often also called a combinatory algebra), we construct the category of assemblies Asm(𝒜) and the category of modest sets Mod(𝒜), which are used as categorical models. For an applicative structure 𝒜, the category of assemblies Asm(𝒜) is the category of the "𝒜-computable universe," and its categorical structure depends on the computational structure of 𝒜. Therefore, imposing certain conditions on 𝒜, we obtain Asm(𝒜) with corresponding categorical structures. The best known result is that the condition of being a partial combinatory algebra (PCA) implies that Asm(𝒜) (and Mod(𝒜)) is a Cartesian closed category (CCC) <cit.>. A PCA is an applicative structure containing two special elements s and k which express substitution and discarding. (We often call such elements the s-combinator and the k-combinator.) PCAs can also be characterized by combinatory completeness, that is, the property that any computable function (i.e., function expressed as an untyped lambda term) on a PCA can be represented by an element of the PCA itself.

Categorical realizability for linear structures is also well investigated. Assuming 𝒜 is a BCI-algebra, that is, an applicative structure having combinators B, C and I, Asm(𝒜) and Mod(𝒜) become symmetric monoidal closed categories (SMCCs) <cit.>. B, C and I are combinators expressing composition, exchange and identity operations respectively, and BCI-algebras correspond to the linear lambda calculus by combinatory completeness. These results for PCAs and BCI-algebras are used as a useful method to give various models based on CCCs and SMCCs. On the other hand, categorical realizability based on non-symmetric structures has been less investigated.
In our previous studies <cit.>, we proposed "planar realizability" giving rise to non-symmetric categorical structures, such as closed multicategories, closed categories, skew closed categories and monoidal bi-closed categories. The aim of this paper is to summarize and develop these results. First, in section <ref>, we recall basic notions of categorical realizability. Results on PCAs and BCI-algebras are shown in that section. The notions of applicative morphisms and linear combinatory algebras (LCAs) are also recalled from <cit.>; they are used to obtain models of linear exponential modalities on linear calculi. Basic knowledge of category theory and the lambda calculus is assumed and not recalled here. Next, in section <ref>, we introduce several classes of applicative structures inducing non-symmetric categorical structures. Realizing non-symmetric closed structures is a more subtle problem than the symmetric cases like CCCs and SMCCs. Since the C-combinator in BCI-algebras induces the symmetry of the monoidal structure on the category of assemblies, one may think we can obtain non-symmetric categorical structures by excluding the C-combinator. However, simply excluding the C-combinator leads to no interesting categorical structures like internal hom functors, since realizing closed structures needs some exchanging of realizers even if the closed structures are not symmetric. We have to give applicative structures with appropriately weakened exchanging that realizes internal hom structures but does not realize symmetries. To resolve this problem, in <cit.>, we introduced a unary operation (-)° on an applicative structure, which allows restricted exchanging. In sections <ref> and <ref>, we recall these results: BI(-)°-algebras induce (non-symmetric) closed multicategories, and BI(-)°-algebras equipped with one additional constant induce closed categories. By combinatory completeness, these classes of applicative structures correspond to the planar lambda calculus. By the unary operation (-)°, we obtain non-symmetric closed structures; however, this operation is not sufficient to obtain non-symmetric monoidal structures. Assume that Asm(𝒜) on a BI(-)°-algebra 𝒜 has tensor products. When we take realizers of tensor products of Asm(𝒜) in the same way that we take realizers of Cartesian/tensor products of assemblies on PCAs/BCI-algebras, the realizer of the unitors of Asm(𝒜) leads to a realizer of the symmetry. That is, this attempt to get non-symmetric tensor products from BI(-)°-algebras ends in the failure that the tensor products are symmetric. What matters here is that the way of realizing products of assemblies on PCAs/BCI-algebras corresponds to the representation of tensor products X ⊗ Y ≅ ∀α. (X ⊸ Y ⊸ α) ⊸ α in the second-order linear logic (<cit.>), which is valid only if the tensor is symmetric. Thus, categorical realizability for non-symmetric monoidal structures needs some modification of the way of realizing tensor products. In this paper, we give two answers to this problem. One way is to prepare a new combinator which directly realizes pairings. The corresponding class of applicative structures is newly introduced in this paper and gives rise to non-symmetric monoidal closed categories. We show the results about this class in section <ref>. The other way is to take realizers of tensor products matching the representation X ⊗ Y ≅ ∀α. (α ⟜ Y ⟜ X) ⊸ α in the second-order linear logic (where ⊸ and ⟜ denote the right and left implications), which is valid even in the non-symmetric case. To give such realizers, the class of applicative structures called bi-BDI-algebras was introduced in <cit.>.
Bi--algebras feature two kinds of applications corresponding to two kinds of implications and , and have the combinatory completeness for the lambda calculus with two kinds of applications (which we call the bi-planar lambda calculus in this paper). In section <ref>, we recall these results about bi--algebras. Classes of applicative structures appearing in this paper are summarized in Table <ref>. Also combinators and operations are summarized in Table <ref>. The classes of applicative structures in this paper form a hierarchy as summarized in Table <ref>. In section <ref>, we show that these classes are different from each other. To show the strictness of the inclusion, it is sufficient to give examples belonging to one side and not to the other side, and we give such examples in section <ref>. While these proofs in section <ref> are mostly straightforward and not conceptually new, sometimes it is not easy to show that some applicative structure does not belong to some class of applicative structures. As such an example, in section <ref>, we show that the untyped planar lambda calculus (with no constants) is not a bi--algebra. In the next section <ref>, we give the computational lambda calculus <cit.> as a rather unexpected example of a -algebra and show the computational lambda calculus is not a bi--algebra. To better clarify the relationship between applicative structures and categorical structures of categories of assemblies, in section <ref>, we show certain “inverses" of propositions shown in section <ref>. That is, assuming has certain categorical structure (such as being an SMCC), we show belongs to the corresponding class (such as -algebras) under several conditions. While the propositions for the cases of -algebras and -algebras were already presented in <cit.>, those for the cases of -algebras and bi--algebras are newly shown in this paper. By integrating results of section <ref>, <ref> and <ref>, we can say that, for instance, the category of assemblies on the planar lambda calculus indeed has non-symmetric closed structure. In section <ref>, we reformulate notions of LCAs for our -algebras. Although linear exponential comonads are usually defined as comonads on symmetric monoidal categories, we can also define linear exponential comonads on non-symmetric monoidal categories <cit.>. In <cit.>, we defined exponential relational planar linear combinatory algebras (exp-rPLCAs) as pairs of a bi--algebra and an applicative endomorphism on it, that give rise to linear exponential comonads on (non-symmetric) monoidal bi-closed categories. The definition of exp-rPLCAs in <cit.> are the reformulation of the definition of (relational) LCAs to bi--algebras. In this paper, we generalize exp-rPLCAs a bit by changing “bi--algebras" to “-algebras," and then similarly call the generalized ones as exp-rPLCAs. New exp-rPLCAs give rise to linear exponential comonads on (non-symmetric) monoidal closed categories, and correspond to adjoint pairs of applicative morphisms between -algebras and PCAs. There are also modalities on (non-symmetric) linear calculus other than the linear exponential modality. The exchange modality, investigated in <cit.>, is a modality connecting a commutative logic and a non-commutative logic (the Lambek calculus). Categorical models of the exchange modality are given as monoidal adjunctions between monoidal bi-closed categories and SMCCs, which are called Lambek adjoint models. 
In <cit.>, we defined exchange relational planar linear combinatory algebras (exch-rPLCAs) that give rise to Lambek adjoint models. In this paper, like exp-rPLCAs, we reformulate exch-rPLCAs for the new class of applicative structures. The reformulated exch-rPLCAs correspond to adjoint pairs between algebras of that class and BCI-algebras, and give rise to monoidal adjunctions between (non-symmetric) monoidal closed categories and SMCCs, which are models of the exchange modality based on the non-symmetric multiplicative intuitionistic linear logic (that is, a fragment of the Lambek calculus without bi-closedness). Finally, in sections <ref> and <ref>, we discuss related work, summarize the conclusion and describe future work.

§ BACKGROUND

§.§ Applicative structures and categories of assemblies

First we recall basic notions of categorical realizability. Notations and definitions in this subsection are from <cit.>.

A partial applicative structure 𝒜 is a pair of a set |𝒜| and a partial binary operation (x, y) ↦ x · y on |𝒜|. When the binary operation is total, we say 𝒜 is a total applicative structure. We often omit · and write x · y simply as x y. We also omit unnecessary parentheses, assuming that application associates to the left. For instance, x y (z w) denotes (x · y) · (z · w). In the sequel, we use two notations "↓" and "≃." We write x y ↓ when x · y is defined. "≃" denotes Kleene equality: if one side of the equation is defined then the other side is also defined and both sides are equal.

Let 𝒜 be a partial applicative structure.
* An assembly on 𝒜 is a pair X = (|X|, ‖-‖_X), where |X| is a set and ‖-‖_X is a function sending x ∈ |X| to a non-empty subset ‖x‖_X of |𝒜|. We call elements of ‖x‖_X realizers of x.
* For assemblies X and Y on 𝒜, a map of assemblies f: X → Y is a function f: |X| → |Y| such that there exists an element r ∈ |𝒜| realizing f. Here we say "r realizes f" or "r is a realizer of f" if r satisfies: ∀x ∈ |X|, ∀a ∈ ‖x‖_X, r a ↓ and r a ∈ ‖f(x)‖_Y.

If we assume two additional conditions on a partial applicative structure, we can construct two kinds of categories. Let 𝒜 be a partial applicative structure satisfying:
* 𝒜 has an element I such that ∀x ∈ |𝒜|, I x ↓ and I x = x;
* for any r_1, r_2 ∈ |𝒜|, there exists r ∈ |𝒜| such that ∀x ∈ |𝒜|, r x ≃ r_1 (r_2 x).
Then we construct categories as follows.
* The category Asm(𝒜), called the category of assemblies on 𝒜, consists of assemblies on 𝒜 as its objects and maps of assemblies as its maps. Identity maps and composition maps are the same as those of Set (the category of sets and functions).
* We call an assembly X a modest set on 𝒜 when X satisfies: ∀x, x' ∈ |X|, x ≠ x' ⇒ ‖x‖_X ∩ ‖x'‖_X = ∅. The category Mod(𝒜), called the category of modest sets on 𝒜, is the full subcategory of Asm(𝒜) whose objects are modest sets on 𝒜.

We need the above two conditions <ref> and <ref> to give realizers of the identities and composition maps. Identities are realized by I. For maps f_1: Y → Z realized by r_1 and f_2: X → Y realized by r_2, we obtain r given by the condition <ref>, which realizes f_1 ∘ f_2. Since all the classes of applicative structures introduced later satisfy these conditions, the conditions pose no problem in this paper.

Intuitively, the category Asm(𝒜) (and Mod(𝒜)) can be understood as the category of the "𝒜-computable universe." For an assembly X = (|X|, ‖-‖_X) on 𝒜, elements of ‖x‖_X can be seen as "machine-level interpretations" of x ∈ |X|. For a map f: X → Y of Asm(𝒜), the realizer r of f can be seen as a "machine implementation" of f, since r takes interpretations of x (that is, elements of ‖x‖_X) as input and computes interpretations of f(x) (that is, elements of ‖f(x)‖_Y).
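To make the definitions above concrete, the following minimal sketch (our illustration, not code from the paper) models a toy total applicative structure by Python functions with function call as application, and checks that a candidate element realizes a map of assemblies; the Boolean assembly and the realizer shown are assumptions chosen for the example.

    I = lambda x: x

    class Assembly:
        def __init__(self, carrier, realizers):
            self.carrier = carrier          # the set |X|
            self.realizers = realizers      # x -> non-empty set of realizers

    def realizes(r, f, X, Y):
        """Check r a in ||f(x)||_Y for all x in |X| and a in ||x||_X (finite toy case)."""
        return all(r(a) in Y.realizers[f(x)]
                   for x in X.carrier for a in X.realizers[x])

    # Booleans realized by Church booleans; negation realized by p -> p false true.
    true, false = (lambda a: lambda b: a), (lambda a: lambda b: b)
    Bool = Assembly({"tt", "ff"}, {"tt": {true}, "ff": {false}})
    neg = {"tt": "ff", "ff": "tt"}
    r_neg = lambda p: p(false)(true)
    print(realizes(r_neg, lambda x: neg[x], Bool, Bool))   # True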
§.§ PCAs and Cartesian closed categories

Since Asm(𝒜) is the category of the 𝒜-computable universe, the structure of Asm(𝒜) depends on the computational structure of 𝒜. When applicative structures belong to a specific class, specific categorical structures may be found on the categories of assemblies. The best known such class is the class of PCAs, which induce Cartesian closed categories of assemblies. Results in this subsection are from <cit.>.

A partial combinatory algebra (PCA) is a partial applicative structure 𝒜 which contains two special elements k and s such that:
* ∀x, y ∈ |𝒜|, k x ↓, k x y ↓ and k x y = x;
* ∀x, y, z ∈ |𝒜|, s x ↓, s x y ↓ and s x y z ≃ x z (y z).
When a PCA 𝒜 is a total applicative structure, we say 𝒜 is an SK-algebra.

The most fundamental example of PCAs is the untyped lambda calculus. Suppose an infinite supply of variables x, y, z, .... Untyped lambda terms are terms constructed from the following six rules:
* (identity) x ⊢ x;
* (application) from Γ ⊢ M and Δ ⊢ N, derive Γ, Δ ⊢ MN;
* (abstraction) from Γ, x ⊢ M, derive Γ ⊢ λx.M;
* (exchange) from Γ, x, y, Δ ⊢ M, derive Γ, y, x, Δ ⊢ M;
* (contraction) from Γ, x, y ⊢ M, derive Γ, x ⊢ M[x/y];
* (weakening) from Γ ⊢ M, derive Γ, x ⊢ M.
Here, in the application rule, Γ and Δ are sequences of distinct variables and contain no common variables. In the contraction rule, M[x/y] denotes the term obtained by substituting x for all free occurrences of y in M. In the weakening rule, x is a variable not contained in Γ. Note that abstraction rules are applied only to the rightmost variables. In order to apply the abstraction rule to a variable in a different position, we need to use the exchange rule several times to move the variable to the rightmost place. We define the β-equivalence relation =_β on lambda terms as the congruence of the relation (λx.M)N ∼ M[N/x]. Untyped lambda terms modulo =_β form a PCA (actually an SK-algebra). The underlying set of the PCA consists of β-equivalence classes of untyped closed lambda terms (i.e., lambda terms with no free variables) and the application is defined as that of lambda terms. In this example, λxyz.xz(yz) is the representative of s and λxy.x is the representative of k.

The correspondence between PCAs and the lambda calculus is more than just an example. PCAs have an important property called combinatory completeness, which gives interpretations of "computable functions" on 𝒜 by elements of 𝒜 itself. First, we give the definition of polynomials over an applicative structure (not restricted to PCAs).

Let 𝒜 be a partial applicative structure. A polynomial over 𝒜 is a syntactic expression generated by variables, elements of 𝒜 and the application of 𝒜. For two polynomials M and N over 𝒜, M ≃ N means that M[a_1/x_1, ..., a_n/x_n] ≃ N[a_1/x_1, ..., a_n/x_n] holds in 𝒜 for any a_1, ..., a_n ∈ |𝒜|, where {x_1, ..., x_n} contains all the variables of M and N.

Let 𝒜 be a PCA and M be a polynomial over 𝒜. For any variable x, there exists a polynomial M' such that the free variables of M' are the free variables of M excluding x, and M' a ≃ M[a/x] holds for all a ∈ |𝒜|. We write such an M' as λ*x.M. We define λ*x.M by induction on the structure of M:
* λ*x.x := s k k
* λ*x.y := k y (when x ≠ y)
* λ*x.MN := s (λ*x.M)(λ*x.N)

For the special case of the above proposition, any closed lambda term is β-equivalent to some term constructed from λxy.x and λxyz.xz(yz) using applications. Using combinatory completeness, we can give Asm(𝒜) (and Mod(𝒜)) on a PCA 𝒜 the structure of a Cartesian closed category (CCC). When 𝒜 is a PCA, Asm(𝒜) and Mod(𝒜) are CCCs. While this result is standard, we shall outline its proof for comparison with the parallel results on various classes of combinatory algebras to be developed in this paper.
First we prove the proposition for Asm(𝒜). Let 𝒞 := Asm(𝒜).
* By combinatory completeness, 𝒜 has elements λ*x.x and λ*xyz.x(yz), which make 𝒜 satisfy the conditions <ref> and <ref> of Definition <ref>. Thus 𝒞 is a category.
* For objects X and Y, the underlying set of the binary product X × Y is |X| × |Y|. Realizers are defined as ‖(x,y)‖_{X × Y} := { λ*t.t p q | p ∈ ‖x‖_X, q ∈ ‖y‖_Y }.
* For maps f: X → X' realized by r_f and g: Y → Y' realized by r_g, f × g is the function sending (x, y) to (f(x), g(y)). A realizer for f × g exists as λ*u.u (λ*pqt.t (r_f p)(r_g q)).
* The underlying set of the terminal object 1 is the singleton {∗}. Realizers are ‖∗‖_1 := |𝒜|. It is easy to see that this 1 satisfies the conditions of the terminal object.
* The projection π: X × Y → X is the function sending (x, y) to x and has a realizer λ*u.u (λ*pq.p). The projection π': X × Y → Y is the function sending (x, y) to y and has a realizer λ*u.u (λ*pq.q). It is easy to see that these π and π' satisfy the conditions of the projections of the Cartesian category.
* For objects X and Y, the underlying set of the exponential Y^X is Asm(𝒜)(X, Y). Realizers are ‖f‖_{Y^X} := { r ∈ |𝒜| | r realizes f }.
* For maps f: X' → X realized by r_f and g: Y → Y' realized by r_g, the map g^f is the function sending a map h ∈ Asm(𝒜)(X, Y) realized by r_h to g ∘ h ∘ f ∈ Asm(𝒜)(X', Y') realized by λ*v.r_g (r_h (r_f v)). A realizer of g^f is λ*uv.r_g (u (r_f v)).
* The adjunction Φ: 𝒞(X × Y, Z) → 𝒞(X, Z^Y) is the function sending f: X × Y → Z realized by r_f to the map Φ(f): x ↦ (y ↦ f(x, y)). Φ(f) is realized by λ*pq.r_f (λ*t.t p q). For a map g: X → Z^Y realized by r_g, Φ^{-1}(g): X × Y → Z is the map sending (x, y) to g(x)(y). Φ^{-1}(g) is realized by λ*u.u r_g. It is easy to see that this Φ satisfies the condition of the adjunction of the CCC.
Therefore, Asm(𝒜) is a CCC.

Next we show that Mod(𝒜) is a CCC. Given modest sets X and Y on 𝒜, we define the binary product X × Y in the same way as in Asm(𝒜). Here we can show that X × Y is also a modest set. Suppose there is some a ∈ |𝒜| realizing different elements (x, y) and (x', y') of |X| × |Y|. When we assume x ≠ x', though π(x, y) ≠ π(x', y'), both sides have the same realizer (λ*u.u (λ*pq.p)) a. This contradicts that X is a modest set. The same contradiction arises when y ≠ y'. Therefore, different (x, y) and (x', y') do not have common realizers and X × Y is a modest set. For modest sets X and Y on 𝒜, we also define Y^X in the same way as in Asm(𝒜). We can show that Y^X is also a modest set. Suppose there is some r realizing different maps f: X → Y and g: X → Y. Take x ∈ |X| and a ∈ ‖x‖_X such that f(x) ≠ g(x). Then r a is an element of both ‖f(x)‖_Y and ‖g(x)‖_Y. However, this contradicts that Y is a modest set. Therefore, Y^X is a modest set. Hence, we can show that Mod(𝒜) is a CCC by the same proof as for Asm(𝒜). In this proof, we use the combinatory completeness of the PCA heavily to give realizers for each assembly and map.
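The bracket abstraction λ*x.M used throughout this proof is entirely mechanical, and the following minimal sketch (our addition, not the paper's code) implements it for the pure s/k fragment, encoding combinators as curried Python functions so that the defining equations can be checked directly; terms are the variable name, arbitrary values of the structure, or pairs denoting application.

    K = lambda x: lambda y: x
    S = lambda x: lambda y: lambda z: x(z)(y(z))
    I = S(K)(K)                      # s k k a = k a (k a) = a

    def abstract(x, t):
        """lambda*x.t following the induction in the proof:
        lambda*x.x = s k k, lambda*x.y = k y, lambda*x.(M N) = s (lambda*x.M)(lambda*x.N)."""
        if isinstance(t, tuple):
            m, n = t
            return S(abstract(x, m))(abstract(x, n))
        return I if t == x else K(t)

    # (lambda*x. pair x x) applied to 5 evaluates to pair 5 5:
    pair = lambda a: lambda b: lambda t: t(a)(b)
    dup = abstract("x", ((pair, "x"), "x"))
    print(dup(5)(lambda a: lambda b: (a, b)))   # (5, 5)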
Untyped closed linear lambda terms modulo form a -algebra. Here λ xyz.x(yz), λ xyz.xzy and λ x.x are the representatives of , and respectively. Let be a -algebra and M be a polynomial over . For any variable x appearing exactly once in M, there exists a polynomial x.M such that the free variables of x.M are the free variables of M excluding x and ( x.M) a = M[a/x] for all a ∈. We define x.M by induction on the structure of M. * x.x := * x.MN := ( x.M) N (x ∈ FV(M)) M ( x.N) (x ∈ FV(N)) The combinatory completeness for a -algebra allows interpreting only linear lambda terms, not the whole of lambda terms. Thus some realizers used in the proof of Proposition <ref> (such as u.u( pq.p)) may not exist in a -algebra. For -algebras, the categories of assemblies have other categorical structure than CCCs. When is a -algebra, is a symmetric monoidal closed category (SMCC). * For objects X and Y, the underlying set of X ⊗ Y is |X| × |Y|. Realizers are defined as x ⊗ y_X ⊗ Y := { t.tp q | p ∈x_X, q ∈y_Y }. * For maps f:X X' realized by r_f and g:Y Y' realized by r_g, the map f ⊗ g is the function sending x ⊗ y to f(x) ⊗ g(y), which is realized by u.u ( pqt.t(r_f p)(r_g q)). * The underlying set of the unit object I is the singleton {∗}. The realizer is ∗_I := {}. * The right unitor ρ_X : X X ⊗ I is the function sending x to x ⊗∗, which is realized by p.( t.tp). The inverse ρ^-1 is realized by u.u( pq.qp). * Also we can take the left unitor λ_X : I ⊗ X X as the function (∗⊗ x)↦ x and the associator α_XYZ : X ⊗ (Y ⊗ Z) (X ⊗ Y) ⊗ Z as x ⊗ (y ⊗ z) ↦ (x ⊗ y) ⊗ z. * The symmetry σ_XY : X ⊗ Y Y ⊗ X is the function sending x ⊗ y to y ⊗ x, which is realized by u.u( pqt.tqp). * For objects X and Y, the underlying set of the exponential[For an SMCC , the exponential is often denoted using the symbol satisfying (X⊗ Y,Z)≅(X,Y Z). However, here we use the reversed symbol satisfying (X⊗ Y,Z)≅(X,Z Y) to be consistent with the notation of monoidal bi-closed categories in Section <ref>.] Y X is _(X,Y). Realizers are f_Y X := { r ∈|}. * For maps f:X' X realized by r_f and g:Y Y' realized by r_g, the map g f is the function sending h:X Y to g ∘ h ∘ f :X' Y'. A realizer of g f is uv.r_g (u (r_f v)). * The adjunction Φ sends a map f:X ⊗ Y Z to the map Φ(f):x ↦ (y ↦ f(x ⊗ y)). It is easy to see that the above components satisfy the axioms of the SMCC. The above proof is almost the same as the proof of on a PCA being a CCC. However, when we prove that on a -algebra is an SMCC, we cannot use the same proof as for PCAs. That is because for modest sets X and Y on a -algebra , X ⊗ Y given by the same way as is not generally a modest set. The following proposition is proven with a modification to resolve the problem. When is a -algebra, is an SMCC. Let G: ↪ be the inclusion functor and F: be the left adjoint of G. F is the functor sending an assembly X = (|X|,_X) to a modest set Z = (|X|/≈,_Z). Here the relation “≈” is the transitive closure of the relation “∼” defined as x ∼ x' :⇔ x_X ∩x'_X ≠∅. The realizers of z ∈ |Z| are defined as z_Z := ⋃_x ∈ zx_X. F sends a map f of to the canonical map of , which is realized by realizers of f. We define the tensor product ⊠ in as X ⊠ Y := F(GX ⊗ GY). We can prove Proposition <ref> by the same proof of Proposition <ref> by replacing ⊗ to ⊠. More general about constructing monoidal structures on reflexive full subcategories, see <cit.>. While we define -algebras as a class of total applicative structures, we also can define “partial -algebras” naturally. 
For a partial -algebra , we can see that: * is not generally an SMCC; * adding an extra element (which means “undefined”), naturally extends to a total -algebra _; * is the full subcategory of _. The same discussion is given in <cit.>. §.§ Applicative morphisms In this subsection, we recall the notion of applicative morphisms from <cit.>. Let be a partial applicative structure satisfying: * has an element such that ∀ x ∈, x ↓ and x = x; * for any r_1, r_2 ∈, there exists r ∈ such that ∀ x ∈, r x ≃ r_1 (r_2 x). Let be another partial applicative structure satisfying the same conditions. An applicative morphism γ : is a total relation from to such that there exists a realizer r_γ∈ of γ satisfying that ∀ a, a' ∈, ∀ b ∈γ a, ∀ b' ∈γ a', r_γ b b' ∈γ (a a') whenever a a' ↓. We say γ is functional when γ a is a singleton for each a ∈, and simply write γ a = b for γ a = { b }. Our definition is slightly more general than the definition in <cit.> that makes sense only on PCAs. We define applicative morphisms between applicative structures satisfying the conditions of Definition <ref>. We assume these conditions to realize identity and composition morphisms. By the condition <ref>, the identity applicative morphism id: can be realized by . For applicative morphisms γ: and δ: 𝒞 realized by r_γ and r_δ, taking p ∈δ r_γ, the composition δ∘γ can be realized by r ∈ || such that ∀ b ∈ ||, r b ≃ r_δ (r_δ p b). The condition <ref> gives such a realizer r. In the sequel, for an applicative morphism γ, when we write an indexed element r_γ, it denotes a realizer of γ. Also, for a ∈ and S,S' ⊆, when we write a S, it denotes the set { as | s ∈ S } and we consider as↓ for all s ∈ S, and when we write S S', it denotes the set { ss'| s ∈ S,s' ∈ S' } and we consider ss' ↓ for all s ∈ S and s' ∈ S'. For instance, the condition that γ is an applicative morphism is denoted as ∃ r_γ∈, ∀ a,a' ∈, aa' ↓⇒ r_γ (γ a)(γ a') ⊆γ (aa'). From applicative morphisms, we can obtain functors between the categories of assemblies. For an applicative morphism γ:, : is the functor sending an object (|X|, _X) to (|X|,γ_X) and sending a map to the same function. For a map f in realized by r_f, f is realized by elements of r_γ (γ r_f). It is obvious that satisfies (id)=id and (g ∘ f) = (g) ∘(f). Next we recall the preorder relation ≼ between applicative morphisms. For two applicative morphisms γ, δ:, γ≼δ iff there is r ∈ such that ∀ a ∈, r (γ a) ⊆δ a. Using the conditions <ref> and <ref> of Definition <ref>, we can easily show that ≼ is a preorder. By the preorder ≼, we can define adjunctions and comonads on applicative structures. For two applicative morphisms γ: and δ:, γ is a right adjoint of δ iff δ∘γ≼ id_ and id_≼γ∘δ. We write (δ⊣γ): for these settings. An applicative morphism γ: is called comonadic when has two elements and such that ∀ a ∈, (γ a) ⊆{ a } and (γ a) ⊆γ (γ a). For adjunctions of applicative morphisms, the following properties hold. * An adjoint pair of applicative morphisms (δ⊣γ): gives rise to an adjoint pair (⊣) :. * For an adjoint pair of applicative morphisms (δ⊣γ):, δ∘γ : is a comonadic applicative morphism. * For a comonadic applicative morphism γ:, is a comonad on . In Definition <ref>, an applicative morphism γ: gives rise to the functor :. However, here we cannot generally obtain a functor : since x_X ∩x'_X = ∅ does not imply γ(x_X) ∩γ(x'_X) = ∅ and X may not be in . However, for a comonadic applicative morphism γ:, can be restricted to the endofunctor on . 
Indeed, for a modest set X on , if a ∈γ(x_X) ∩γ(x'_X) then a is an element of x_X ∩x'_X and thus x = x' concludes. Furthermore, this is a comonad on . §.§ Linear combinatory algebras In the previous subsection, we saw comonadic applicative morphisms give rise to comonads, and adjoint pairs of applicative morphisms give rise to adjoint pairs between categories of assemblies. Using this construction, we can obtain linear exponential comonads and linear-non-linear models for the linear logic. In this subsection, we recall notions of linear combinatory algebras (LCAs) from <cit.> and relational linear combinatory algebras (rLCAs) from <cit.>. A linear combinatory algebra (LCA) consists of: * a -algebra ; * a functional comonadic applicative morphism (, , ) on ; * an element ∈ such that ∀ x, y ∈, x ( y) = x; * an element ∈ such that ∀ x, y ∈, x ( y) = x ( y)( y). As we get comonads from comonadic applicative morphism, from LCAs, we get linear exponential comonads, which are categorical models of the linear exponential modality of the linear logic. Let be a symmetric monoidal category. A linear exponential comonad consists of the following data. * A symmetric monoidal comonad (!, δ, ϵ, m, m_I). Here ! is an endofunctor on , δ_X :!X !!X and ϵ_X :!X X are monoidal natural transformations for the comultiplication and the counit. The natural transformation m_X,Y:!X ⊗ !Y !(X ⊗ Y) and the map m_I:I !I make ! be a monoidal functor. * Monoidal natural transformations e_X:!X I and d_X:!X !X ⊗ !X. Here these components need satisfy the following conditions for each X. * (!X,d_X,e_X) is a commutative comonoid in . * e_X and d_X are coalgebra morphisms. * δ_X is a comonoid morphism. For an LCA (,), is a linear exponential comonad on the SMCC (or ). LCAs can be generalized from functional applicative morphisms to not functional ones, called rLCAs. A relational linear combinatory algebra (rLCA) consists of: * a -algebra ; * a comonadic applicative morphism (, , ) on such that ≼ [ ,] and ≼ k_i. Here [,] and k_i are applicative morphisms defined as [ ,] (x) := { t.ta a' | a,a' ∈ x } and k_i (x) := {}. Next proposition shows the correspondence between LCAs, rLCAs and adjoint pairs between -algebras and PCAs. * Let be a -algebra and be a PCA. For an adjoint pair (δ⊣γ):, (, δ∘γ) is an rLCA. * Let (,) be an LCA. The applicative structure _ = (, @) defined by x @ y := x ( y) is a PCA. Furthermore, γ: _ defined as the identity function and δ :_ sending a ∈ to a form an adjoint pair (δ⊣γ):_. From rLCAs, we also get linear exponential comonads. Moreover, we get linear-non-linear models <cit.> on categories of assemblies or categories of modest sets. A linear-non-linear model is a symmetric monoidal adjunction (F ⊣ G): for an SMCC and a CCC . For an rLCA (,), is a linear exponential comonad on the SMCC (or ). Furthermore, the co-Kleisli adjunction between and _ (or and _) is symmetric monoidal. Thus the adjunction forms a linear-non-linear model. § CONSTRUCTING NON-SYMMETRIC CATEGORICAL STRUCTURES In section <ref>, we saw two known results that PCAs/-algebras induce CCCs/SMCCs as the categories of assemblies and the categories of modest sets. It is natural to try to extend these results to other classes of applicative structures, and we introduce such new classes inducing certain “non-symmetric” categorical structures. In this section we recall -algebras, -algebras and bi--algebras from <cit.>, and introduce a new class -algebras. 
§.§ -algebras and closed multicategories When we try to obtain some non-symmetric categorical structures on categories of assemblies, we will find a subtle problem. In a -algebra , the -combinator expresses exchanging the order of arguments, and is the source of the symmetric structures of . So one might guess that simply omitting would be sufficient for getting a non-symmetric categorical structure on . However, this does not work well; and alone are too weak to give an interesting structure on . For instance, if we want the internal hom functor (- -) on on a total applicative structure , we need certain exchanging operation in even if the closed structure is not symmetric. Take an object A of as |A| := and a_A := { a }. For maps f,g:A A, to realize g f, we need a realizer r which satisfies ∀ a, a' ∈, r a a' = r_g (a (r_f a')). This r acts as the exchanging to move the information of r_f from the left of a to the right of a. (In a -algebra, such r exists as ( ( r_g))r_f.) Therefore, when we want some non-symmetric categorical structures such as non-symmetric closed structures, we need to prepare some “more restricted exchanging” than the -combinator. One way to resolve the problem is to supply not a combinator but the unary operation () for exchanging. In this subsection, we introduce -algebras from <cit.>, which induce non-symmetric closed multicategories. A total applicative structure is a -algebra iff it contains , and a for each a ∈, where a is an element of such that ∀ x ∈, a x = x a. This () enable restricted exchanges than the -combinator. Since in a -algebra, a satisfies the axiom of a, all -algebras are also -algebras. The definition of -algebras may seem strange compared to the definitions of PCAs or -algebras. However, the definition of -algebras is natural in the aspect of having a good correspondence with the “planar" lambda calculus. Untyped planar lambda terms are untyped lambda terms constructed without using weakening, contraction nor exchange rules (See Example <ref>). That is, untyped planar lambda terms are untyped linear lambda terms such that for each subterm λ x.M, x is the rightmost free variable of M. Untyped closed planar lambda terms modulo form a -algebra, which we call in this paper. Here λ xyz.x(yz) and λ x.x are the representatives of and respectively. Given a representative M of a ∈ ||, λ x.xM is also a closed planar term and is the representative of a. The definition of construction rules of planar lambda terms has two different styles. In our definition, the abstraction rule is only allowed for the rightmost variable. Such a style is seen in <cit.>. On the other hand, there is also the definition that the abstraction rule is only allowed for the leftmost variable, as in <cit.>. Here we employ the former style for preservation the planarity of terms under the βη-conversions. Let be a -algebra and M be a polynomial over . For the rightmost variable x of M, if x appears exactly once in M, there exists a polynomial x.M such that the free variables of x.M are the free variables of M excluding x and ( x.M) a = M[a/x] for all a ∈. We define x.M by induction on the structure of M. * x.x := * x.MN := N ( x.M) (x ∈ FV(M)) M ( x.N) (x ∈ FV(N)) Note that for x.MN, x is the rightmost free variable in MN, and thus, if x is in FV(M), N has no free variables and N can be defined. Then we show -algebras induce certain categorical structures on the categories of assemblies. First we recall the definition of closed multicategories from <cit.>. 
A multicategory consists of the following data: * a collection Ob(); * for each n ≥ 0 and X_1 , X_2 , … , X_n , Y ∈ Ob(), a set (X_1 , … , X_n ; Y). We often write f ∈ (X_1 , … , X_n ; Y) as f:X_1 , … , X_n Y; * for each X ∈ Ob(), an element id_X ∈ (X ; X), called the identity map; * for each n, m_1 , m_2 , … , m_n ∈ℕ and X^k_j , Y_k , Z (1 ≤ k ≤ n , 1 ≤ j ≤ m_k), a function ∘ : (Y_1 , … , Y_n ; Z) ×∏_k^n (X^k_1 , … , X^k_m_k ;Y_k) (X^1_1 , … ,X^1_m_1 ,X^2_1 , … , X^n_m_n ; Z) called the composition. g ∘ (f_1 , … , f_n) denotes the composition of g ∈ (Y_1 , … , Y_n ; Z) and f_k ∈ (X^k_1 , … , X^k_m_k ; Y_k) (1 ≤ k ≤ n). The compositions satisfy the associativity and identity axioms. A closed multicategory consists of the following data: * a multicategory ; * for each X_1 , X_2 , … , X_n , Y ∈ Ob(), an object (X_1 , X_2 , … , X_n ; Y), called the internal hom object; * for each X_1 , … , X_n , Y ∈ Ob(), a map ev_X_1 , … , X_n ; Y : (X_1 , … , X_n ; Y), X_1 , … , X_n → Y, called the evaluation map, such that ∀ Z_1 , Z_2 , … , Z_m ∈ Ob(), the function ϕ_Z_1 , … , Z_m ; X_1 , … , X_n ; Y : ( Z_1 , … , Z_m ; (X_1 , … , X_n ; Y) ) →(Z_1 , … , Z_m, X_1 , … , X_n ; Y) sending f to ev_X_1 , … , X_n ; Y∘ (f, id_X_1 , … , id_X_n ) is invertible. We write the inverse function as Λ_Z_1 , … , Z_m ; X_1 , … , X_n ; Y. Here our definition of closed multicategories differs from the original definition in <cit.> in that the order of the objects in the domains of maps is reversed. This makes reading easier, by matching the order of objects with the order of realizers. When is a -algebra, and are closed multicategories. Let :=. Since has the -combinator and the -combinator, is a category. First we give a bifunctor (- -):^op× as follows: * For X,Y ∈, Y X is an assembly whose underlying set is _(X,Y) and f_Y X := { r |}. * For two maps f:X' X and g:Y Y' in , (g f):(Y X) (Y' X') is the function sending h ∈_(X,Y) to g ∘ h ∘ f. Given realizers r_f of f and r_g of g, (g f) is realized by uv.r_g (u (r_f v)). Thus, for any maps f and g in , (g f) is certainly a map of . It is easy to see that (- -) preserves identities and compositions. Next we give the structure of a closed multicategory. * For an object X ∈, (;X) := |X| and (;X) := X. * For objects X_1, X_2, … , X_n, Y ∈ (n ≥ 1), we define the internal hom object (X_1,… ,X_n ;Y) := (… ((Y X_n) X_n-1)… ) X_1 and (X_1,… ,X_n ;Y) is the underlying set of (X_1,… ,X_n ;Y). We write f(x_1)(x_2)… (x_n) as f(x_1,… ,x_n) for f ∈(X_1,… ,X_n ;Y) and x_i ∈ |X_i|. * Identity maps id_X ∈(X;X) (X ∈) are the same as the identity maps of . * Suppose maps g∈(Y_1,… ,Y_n;Z) and f_k ∈(X^k_1,… ,X^k_m_k;Y_k) (1 ≤ k ≤ n). We define g ∘ (f_1,… ,f_n) as the function that receives x^1_1,… , x^1_m_1 ,… , x^n_1 ,… , x^n_m_n and returns g(f_1(x^1_1,… ,x^1_m_1) ,… , f_n (x^n_1,… ,x^n_m_n)). Here, when m_i = 0 for some 1 ≤ i ≤ n, we define g ∘ (f_1,… ,f_n) by giving the y_i ∈ |Y_i| pointed to by f_i ∈(;Y_i) as the i-th argument of g. Given realizers q ∈g_(Y_1,… ,Y_n;Z) and p_k ∈f_k_(X^k_1,… ,X^k_m_k;Y_k), by the combinatory completeness for -algebras, there is r ∈ such that r a^1_1 … a^1_m_1… a^n_1 … a^n_m_n = q (p_1 a^1_1 … a^1_m_1)… (p_n a^n_1… a^n_m_n) holds for any a^1_1,… ,a^n_m_n∈. This r realizes g ∘ (f_1,… ,f_n) and thus g ∘ (f_1,… ,f_n) is in (X^1_1,… ,X^n_m_n;Z). * The evaluation map ev_X_1 ,… , X_n ; Y : (X_1 ,… , X_n ; Y), X_1 ,… , X_n Y is given as the function that receives f,x_1,… ,x_n and returns f(x_1,… ,x_n), which is realized by .
Then, ϕ_Z_1,… ,Z_m;X_1,… ,X_n;Y is invertible as a function and, for g ∈(Z_1,… ,Z_m,X_1,… ,X_n;Y), Λ (g) is indeed in (Z_1,… ,Z_m;(X_1,… ,X_n;Y)) since it is realized by realizers of g. Therefore, is a closed multicategory. For , we can use the same proof as for . While we define -algebras as a class of total applicative structures, we can also define “partial -algebras” naturally. For a partial -algebra , () is a total unary operation on such that ∀ a, x ∈, a x ≃ x a. Unlike the case of partial -algebras as in Remark <ref>, the proof of Proposition <ref> is applicable to the case of partial -algebras.

§.§ -algebras and closed categories

In this subsection, we recall a class of applicative structures from <cit.>, which induce closed categories of assemblies and modest sets. First we recall the definition of closed categories in <cit.>. A closed category consists of the following data: * a locally small category ; * a functor (- -): ^op×, called the internal hom functor[While the internal hom object in the closed category is often written as (X,Y), [X,Y] or X Y, here we denote Y X to be consistent with other categorical structures in this paper.]; * an object I, called the unit object; * a natural isomorphism i_X : (X I) X; * an extranatural transformation j_X : I (X X); * a transformation L_Y,Z^X : (Z Y) ((Z X) (Y X)) natural in Y and Z and extranatural in X, such that the following axioms hold: * ∀ X,Y ∈, L_Y,Y^X ∘ j_Y = j_(Y X); * ∀ X,Y ∈, i_(Y X)∘ (id_(Y X) j_X) ∘ L_X,Y^X = id_(Y X); * ∀ X,Y,Z,W ∈, the following diagram commutes (we render the commutativity as an equation of maps from (W Z) to ((W X) (Y X)) (Z Y)): (id L_Y,Z^X) ∘ L_(Z X),(W X)^(Y X)∘ L_Z,W^X = (L_Y,W^X id) ∘ L_Z,W^Y; * ∀ X,Y ∈, L_X,Y^I ∘ (i_Y id_X) = id_(Y I) i_X; * ∀ X,Y ∈, the function γ : (X,Y) (I , (Y X)) sending f:X Y to (f id_X) ∘ j_X is invertible. Closed categories are something like monoidal closed categories without tensor products; that is, categories with internal hom functors which are defined directly, not via tensor products and adjunctions. The structures of closed categories are very similar to the structures of closed multicategories. As shown in <cit.>, the category of closed categories is cat-equivalent to the category of closed multicategories with unit objects. However, when we want to construct (non-symmetric) closed categories as categories of assemblies, it is not sufficient that the applicative structures are -algebras, since realizers for i^-1_X : X (X I) may not exist. Thus, we add another condition to a -algebra to realize i^-1_X and obtain the following definition. A -algebra is a -algebra which contains an element such that ∀ a ∈, a = a. In -algebras, the role we expect of is to eliminate the “harmless” second argument, which does not necessarily eliminate . Even without specifying , we can define the same class as -algebras. For instance, for a -algebra , suppose there is ^×∈ such that ∀ a ∈, ^× a = a. Then this is a -algebra since xy.^× x (y ) satisfies the axiom of . Conversely, for a -algebra, we can take ^× := xy. x (y ) and thus -algebras and ^×()-algebras are the same class. in Example <ref> is a -algebra. Since the planar lambda calculus has the strongly normalizing property, for any closed planar term M, there are some u and N such that M λ u.N. Then (λ xyz.x(yz)) M (λ v.v) λ z.M((λ v.v)z) λ z.(λ u.N)z λ z.N[z/u] =_α M and thus λ xyz.x(yz) represents .
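As an aside (our own illustrative sketch; the names bComb, iComb and starOp are ours), the simply-typed shadows of the two combinators and the unary operation can be written down directly in Haskell, and the final comment records the η-expansion used in the example above.

-- typed shadows of the planar combinators and the unary operation
bComb :: (b -> c) -> (a -> b) -> (a -> c)
bComb x y = \z -> x (y z)

iComb :: a -> a
iComb = \x -> x

-- starOp a is the restricted exchange  x |-> x a
starOp :: a -> ((a -> r) -> r)
starOp a = \x -> x a

-- note: bComb m iComb = \z -> m z, an eta-expansion of m, matching
-- the calculation in the example just given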
Since (which nicely corresponds to -algebras) is also a -algebra, one might suspect that -algebras and -algebras are the same class. However, these two classes are different. Later, in Section <ref>, we discuss an example that separates the classes of -algebras and -algebras (Proposition <ref>). The next example, based on an ordered group, is from <cit.>. (However, here we reverse the direction of the implication symbol of the original example in <cit.>.) Take an ordered group (G,·,e,≤). Let T be a set of elements constructed grammatically as follows: t ::= g | t t' (g ∈ G). That is, T is a set of binary trees whose leaves are labeled by elements of G. We further define a function | | :T G by induction: |g| := g and |t_2 t_1| := |t_2| · |t_1|^-1. Let be the powerset of { t ∈ T | e ≤ |t| }. Then we can get a -algebra by : * For M,N ∈, MN := { t_2 |∃ t_1 ∈ N, (t_2 t_1) ∈ M }. * := { (t_3 t_1) (t_2 t_1) (t_3 t_2) | t_1,t_2,t_3 ∈ T }. Here joins from the left. * := { t_1 t_1 | t_1 ∈ T }. * := { t_1 (t_2 t_2) t_1 | t_1,t_2 ∈ T }. * For M ∈, M := { t_2 (t_2 t_1) | t_1 ∈ M, t_2 ∈ T }. This example is based on Comod(G) introduced in <cit.>, which is a category of sets and relations equipped with G-valued functions. For any (not necessarily ordered) group G, Comod(G) is a pivotal category. is a set of maps from the unit object to a reflexive object in (ordered) Comod(G). The structure of depends on G. For instance, { (t_3 t_2 t_1) (t_3 t_1 t_2) | t_1, t_2, t_3 ∈ T } acts as the -combinator whenever G is Abelian. The above appears several times later as an example of applicative structures of other classes (Examples <ref>, <ref>, <ref>). When is a -algebra, and are closed categories. Let :=. We give the same bifunctor (- -):^op× as in the proof of Proposition <ref>. * We define the unit object I as ({∗}, _I), where ∗_I := {}. * j_X is the function sending ∗ to id_X, which is realized by . * i_X is the function sending (f:∗↦ x) to x, which is realized by . The inverse i_X^-1 is realized by . * L_Y,Z^X is the function sending g to the function (f ↦ g ∘ f), which is realized by . * γ is invertible. Indeed, γ^-1 is the function sending g:I (Y X) to the map g(∗):X Y. It is easy to verify that j, i and L are natural and satisfy the axioms of a closed category. For , we can use the same proof as for . While we define -algebras as a class of total applicative structures, we can also define “partial -algebras” naturally. For a partial -algebra , satisfies ∀ a ∈, a ↓ and a = a. Proposition <ref> also holds in the case of partial -algebras.

§.§ -algebras and monoidal closed categories

In the previous two subsections, we obtained closed multicategories and closed categories as categories of assemblies. Next we attempt to obtain a richer categorical structure, (non-symmetric) tensor products, by categorical realizability. First, let us consider whether we can realize products by a -algebra in the same way as for PCAs and -algebras. Even when we use a -algebra, we can take the object X ⊗ Y in the same way as for PCAs and -algebras (see the proofs of Propositions <ref> and <ref>). That is, for a -algebra , we take an assembly X ⊗ Y whose underlying set is |X| × |Y| and whose realizers are x ⊗ y_X ⊗ Y := { t.tp q | p ∈x_X, q ∈y_Y }. We also take the unit object in in the same way as for -algebras: |I|:= {∗} and ∗_I := {}. Then is this a monoidal category? Now let us assume it is. Take an assembly A:= (,_A), where a_A := { a }.
Then, since is a monoidal category, the unitor A I ⊗ A has a realizer r, which satisfies r a = t.t a. Taking the element C := xyz.r x ( ( ( w.r y ( w))))z, this C satisfies the axiom of the -combinator and makes a -algebra. In summary, when we attempt to make a non-symmetric monoidal category using a -algebra , it follows that is actually a -algebra and becomes an SMCC. Therefore, we need some major modification of the definition of realizers of tensor products in to make a non-symmetric monoidal category. One way to solve this problem is to suppose a combinator expressing the “pairing” operation, and to define realizers for tensor products as x ⊗ y_X ⊗ Y := { pq | p ∈x_X, q ∈y_Y }. Since pq by itself does not allow the data of p and q to be separated from pq, we need another combinator to decompose pq. A -algebra is a -algebra which contains and such that ∀ x, y, z ∈, x ( y z) = x y z. A fundamental example of -algebras is given as the untyped planar lambda calculus with tensor products. Add the following term construction rules to the planar lambda calculus (Example <ref>). Γ⊢ M Δ⊢ N (pair construction) Γ , Δ⊢ M ⊗ N Γ⊢ M Δ, x, y ⊢ N (pair deconstruction) Δ, Γ⊢x ⊗ yMN We define a relation ∼ on planar terms as the congruence of the following relations. * (λ x.M)N ∼ M[N/x] * M ∼λ x.Mx * (x_1 ⊗ x_2M_1 ⊗ M_2N) ∼ N[M_1 /x_1][M_2 /x_2] * M ∼ (x ⊗ yMx ⊗ y) Let the equational relation be the reflexive, symmetric and transitive closure of ∼. Closed terms modulo form a -algebra, which we call in this paper. Here λ xyz.x(yz), λ tu.(x ⊗ yutxy) and λ xy. (x ⊗ y) are the representatives of , and , respectively. While the planar lambda calculus of Example <ref> (which does not have tensor products) does not need the η-equality to be a -algebra, the planar lambda calculus with tensor products of Example <ref> needs the βη-equality for λ xyz.x(yz) to serve as . Indeed, (λ xyz.x(yz)) ((λ u.u) ⊗ (λ v.v)) (λ w.w) is βη-equal to (λ u.u) ⊗ (λ v.v) but not β-equal to it. When constructing linear lambda terms with tensor products, we often suppose a constant ⋆ for the unit ( <cit.>). For the above example, we can add the following rules to the term construction rules. (star introduction) ⊢⋆ ⊢ M Γ⊢ N (star elimination) Γ⊢⋆MN However, for our aim of constructing monoidal categories by categorical realizability, this ⋆ is not needed, since we can use as the realizer of the unit instead of ⋆. -algebras correspond to the lambda calculus with tensor products, which has components other than applications, unlike the ordinary/linear/planar lambda calculus. Thus, we cannot state the combinatory completeness property for -algebras in the same way as we have seen in previous sections. Here we only show a special case of the combinatory completeness property for -algebras. Any closed term M in is βη-equivalent to some term M that is constructed from := λ xyz.x(yz), := λ x.x, := λ tu.(x ⊗ yutxy) and := λ xy. x ⊗ y using the application and the unary operation ():M ↦λ x.xM. We inductively define the function . * x := x * MN := M N * M ⊗ N := M N * x ⊗ yMN := λ xy.N M * λ xy.M := λ x. λ y.M * λ x.x := * λ x.MN := N λ x.M (x ∈ FV(M)) M λ x.N (x ∈ FV(N)) * λ x.M ⊗ N := N (λ x.M ) (x ∈ FV(M)) ( M ) λ x.N (x ∈ FV(N)) * λ x.(y ⊗ zMN) := (λ yz.N ) λ x.M (x ∈ FV(M)) M (λ xyz.N ) (x ∈ FV(N)) It is easy to see that M M for any closed term M. Next we give an example of a -algebra similar to Example <ref>. Take an ordered group (G,·,e,≤). Let T' be a set whose elements are constructed grammatically as follows: t ::= g | t t' | t ⊗ t' (g ∈ G).
That is, T' is a set of binary trees whose leaves are labeled by elements of G, and whose nodes are two-colored by and ⊗. We further define a function | | :T' G by induction: |g| := g, |t_2 t_1| := |t_2| · |t_1|^-1 and |t_1 ⊗ t_2| := |t_1| · |t_2|. Let |'| be the powerset of { t ∈ T' | e ≤ |t| }. Then we can get a -algebra ' by |'|: * For M,N ∈ |'|, MN := { t_2 |∃ t_1 ∈ N, (t_2 t_1) ∈ M }. * := { (t_3 t_1) (t_2 t_1) (t_3 t_2) | t_1,t_2,t_3 ∈ T' }. * := { t_1 t_1 | t_1 ∈ T' }. * := { t_1 (t_2 t_2) t_1 | t_1,t_2 ∈ T' }. * := { t_3 (t_1 ⊗ t_2) (t_3 t_2 t_1) | t_1,t_2,t_3 ∈ T' }. * := { (t_1 ⊗ t_2) t_2 t_1 | t_1,t_2 ∈ T' }. * For M ∈ |'|, M := { t_2 (t_2 t_1) | t_1 ∈ M, t_2 ∈ T' }. In the above example, we prepare ⊗ in the construction of T' to express and . However, in fact, of Example <ref> is already a -algebra even without ⊗. In of Example <ref>, we have the -combinator and the -combinator as * := { t_1 (e t_2) t_2 t_1 | t_1,t_2 ∈ T }; * := { t_3 (t_1 (e t_2)) (t_3 t_2 t_1) | t_1,t_2,t_3 ∈ T }. This is a less standard example in that it uses t_1 (e t_2) in the role of t_1 ⊗ t_2. For another example, just as we can construct an LCA (and the -algebra it is based on) from a “reflexive object” (see <cit.> and <cit.>), we can get -algebras from appropriate settings. Let (,⊗, I) be a monoidal closed category and Φ:(- ⊗ X, -) (-,- X) be the adjunction. Suppose an object V that has: * an isomorphism r:(V V) V and s := r^-1; * a retraction t: (V ⊗ V) ◃ V:u, that is, maps t:V ⊗ V V and u:V V ⊗ V such that u ∘ t =id_V ⊗ V. Then the set of maps (I,V) is a -algebra. * For maps M,N:I V, the application is defined as I unitor I ⊗ I (s ∘ M) ⊗ N (V V) ⊗ V ev V. * Take a map f:(V ⊗ V) ⊗ V V as (V ⊗ V) ⊗ V associator V ⊗ (V ⊗ V) s ⊗ (ev ∘ (s ⊗ id)) (V V) ⊗ V ev V. The -combinator is given as r ∘Φ (r ∘Φ (r ∘Φ(f)) ∘λ_V), where λ_V:I ⊗ V V is the unitor. * The -combinator is r ∘Φ(λ_V). * The -combinator given above satisfies the axiom of the -combinator. Here we use r ∘ s =id_V, and thus we need to assume r is an isomorphism (not merely a retraction). * Take a map g:V ⊗ V V as V ⊗ V s ⊗ u (V V) ⊗ (V ⊗ V) ev ∘ ( associator) V ⊗ V ev ∘ (s ⊗ id) V. The -combinator is r ∘Φ (r ∘Φ(g) ∘λ_V). * The -combinator is r ∘Φ (r ∘Φ(t) ∘λ_V). * Given an arbitrary M:I V, M is r ∘Φ(ev ∘ (s ⊗ M) ∘ρ_V ∘λ_V). Here ρ_V :V V ⊗ I is the unitor. We will use the above -algebra later, at the end of Section <ref>. Next we show that -algebras induce monoidal closed categories. When is a -algebra, is a monoidal closed category. Since is also a -algebra, we can use the combinatory completeness for the planar lambda calculus. * For objects X and Y, the underlying set of X ⊗ Y is |X| × |Y|. Realizers are defined as x ⊗ y_X ⊗ Y := { p q | p ∈x_X, q ∈y_Y }. * For f: X X' and g:Y Y', the map f ⊗ g is the function sending x ⊗ y to f(x) ⊗ g(y). A realizer for f ⊗ g is ( pq. (r_f p)(r_g q)). * The underlying set of the unit object I is a singleton {∗}. The realizer is ∗_I := {}. * The left unitor λ_X: I ⊗ X X sends ∗⊗ x to x, whose realizer is . A realizer of λ_X^-1 is . * The right unitor ρ_X: X X ⊗ I sends x to x ⊗∗, whose realizer is p. p. A realizer of ρ_X^-1 is . * The associator α_XYZ:(X ⊗ Y) ⊗ Z X ⊗ (Y ⊗ Z) sends (x ⊗ y) ⊗ z to x ⊗ (y ⊗ z). A realizer of α_XYZ is ( ( pqr. p ( qr))). A realizer of α_XYZ^-1 is ( pu. (M p) u), where M := pqr. ( pq) r. * For objects X and Y, the underlying set of Y X is _(X,Y). Realizers are defined as f_Y X := { r |}. * For f: X' X and g:Y Y', g f is the function sending a map h : X Y to g ∘ h ∘ f : X' Y'. A realizer for g f is uv.
r_g (u (r_f v)). * The evaluation map ev :(Y X) ⊗ X Y sends f ⊗ x to f(x), which is realized by . * For any map f:Z ⊗ X Y, there exists a unique map g:Z (Y X) which satisfies ev ∘ (g ⊗ id_X) = f. This g is given as the function sending z to the function x ↦ f(z ⊗ x), which is realized by rp. r_f ( rp). Similarly to the case of -algebras (Propositions <ref> and <ref>), we cannot apply the proof of Proposition <ref> to the case of . We prove that on a -algebra is a monoidal closed category by the same modification as used in the proof of Proposition <ref>. That is, we take the inclusion functor G:↪ and the left adjoint F:, and define the tensor product ⊠ in as X ⊠ Y := F(GX ⊗ GY). When is a -algebra, is a monoidal closed category. For functors given by applicative morphisms between -algebras, the following properties hold. Let _1 and _2 be -algebras and let γ :_1 _2 be an applicative morphism. Then :_1_2 is a lax monoidal functor. A realizer for I_2 (I_1) is in the set u.u (γ(_1)). A realizer for ( X) ⊗_2 ( Y) (X ⊗_1 Y) is in _2 ( pq.r_γ (r_γ (γ_1) p) q). For -algebras _1 and _2 and an adjoint pair (δ⊣γ) : _1 _2, the adjunction (⊣):_1_2 is monoidal. We show that the left adjoint is strong monoidal. Since is lax monoidal by the previous proposition, it is sufficient to show that there are realizers for maps I_2 I_1 and (X ⊗_2 Y) X ⊗_1 Y. A realizer for the former is x. (r_δ (δ ( y.y (γ_1))) x). A realizer for the latter is z. (r_δ (δ ( ( uv.r_γ (r_γ (γ) ( u) ) ( v)))) z). Here ∈ |_1| is an element such that ∀ x ∈ |_1|, (δ (γ x)) =x and ∈ |_2| is an element such that ∀ y ∈ |_2|, y =γ (δ y), which are obtained from the assumption that γ and δ form an adjoint pair.

§.§ Bi--algebras and monoidal bi-closed categories

Let us consider once again why non-symmetric tensor products in categories of assemblies cannot be constructed from -algebras, from the viewpoint of the “polymorphic encoding.” In second-order linear logic, a tensor product X ⊗ Y can be interpreted as ∀α . (X Y α) α. (This interpretation is seen in <cit.>, for instance.) This formula (X Y α) α corresponds to the type inhabited by λ t.txy in the typed linear lambda calculus. This correspondence is connected to the fact that (in a PCA or a -algebra) a realizer of x⊗ y ∈ |X ⊗ Y| is t.tp q for p ∈x_X and q ∈y_Y. What matters here is that the interpretation X ⊗ Y ≅∀α . (X Y α) α holds only when the tensor product is symmetric. In the non-symmetric cases, by contrast, X ⊗ Y is expressed as ∀α. (α Y X) α or ∀α. α (Y X α). Here we need to distinguish two sorts of implications and . In an applicative structure like a -algebra, we cannot distinguish them, since we only have one sort of application. Conversely, by providing some structure in an applicative structure that allows us to distinguish these two implications, we may be able to construct non-symmetric tensor products in . From this viewpoint, we introduced bi--algebras in <cit.>. In this subsection, we recall bi--algebras from <cit.>. First we recall a variant of the lambda calculus, which is an example of an applicative structure with two sorts of applications. Bi-planar lambda terms are constructed by the following rules: (identity) x ⊢ x Γ, x ⊢ M (right abstraction) Γ⊢xM x, Γ⊢ M (left abstraction) Γ⊢xM Γ⊢ M Δ⊢ N (right application) Γ, Δ⊢ M N Δ⊢ N Γ⊢ M (left application) Δ, Γ⊢ N M Note that there are no weakening, contraction or exchange rules here. For the sake of clarity, we classify right and left by red and blue colors. That is, we write them as M N, xM, N M and xM.
We define a relation _β on bi-planar lambda terms as the congruence of the following relations: * (right β-reduction) xM N _β M[N/x] * (left β-reduction) N xM_β M[N/x] The bi-planar lambda calculus consists of bi-planar lambda terms and the reflexive, symmetric and transitive closure of _β as the equational relation . Basic properties of the β-reduction _β, such as confluence and strong normalization, can be shown in the same way as for the linear lambda calculus. The bi-planar lambda calculus is not essentially a new concept, since it often appears as the calculus corresponding to the Lambek calculus under the Curry-Howard correspondence ( <cit.>). However, note that unlike the calculus corresponding to the Lambek calculus, the bi-planar lambda calculus is based on an untyped setting. The reason we use a less standard notation is to shorten terms and make them easier to read. Then we define a class of applicative structures which we call bi--algebras. A total applicative structure =(,) is a bi--algebra iff there is an additional total binary operation on and contains several special elements: * ∈ such that ∀ x,y,z ∈, (( x) y) z = x (y z). * ∈ such that ∀ x,y,z ∈, z (y (x )) = (z y) x. * ∈ such that ∀ x,y,z ∈, x (( y) z) = (x y) z. * ∈ such that ∀ x,y,z ∈, (z (y )) x = z (y x). * ∈ such that ∀ x ∈, x = x. * ∈ such that ∀ x ∈, x = x. * For each a ∈, a∈ such that ∀ x ∈, (a) x = x a. * For each a ∈, a∈ such that ∀ x ∈, x (a) = a x. We call and the right application and the left application, respectively. We often write = (,,) for a bi--algebra =(,) with the left application . In the sequel, we use as a left-associative operation and often omit unnecessary parentheses, while we do not omit parentheses for . For instance, (u v w) ((x y) z) denotes ((u v) w) ((x y) z). The definition of bi--algebras is intended to have a good correspondence with the bi-planar lambda calculus. Untyped closed bi-planar lambda terms modulo form a bi--algebra, which we call in this paper. We give a few examples of representatives: xyzx (y z) represents ; yxzx (y z) represents ; xM x represents M. Let = (,,) be a bi--algebra. A polynomial over is defined as a syntactic expression generated by variables, elements of and the applications and . For a polynomial M over and the rightmost variable x of M, if x appears exactly once in M, there exists a polynomial M' such that the free variables of M' are the free variables of M excluding x and M' a = M[a/x] for all a ∈. We write such M' as xM. Also, for a polynomial N over and the leftmost variable y of N, if y appears exactly once in N, there exists a polynomial N' such that the free variables of N' are the free variables of N excluding y and a N' = N[a/y] for all a ∈. We write such N' as yN. We define xM by induction on the structure of M. * xx :=. * xM N := ( N)xM (x ∈ FV(M)) M xN (x ∈ FV(N)) Note that in the case x ∈ FV(M), N has no variables, since x is the rightmost free variable in M N. * xN M := N (xM) (x ∈ FV(M)) (M) xN (x ∈ FV(N)) Note that in the case x ∈ FV(N), M has no variables, since x is the rightmost free variable in N M. The case of the left abstraction yN is given in the same way, with all the left and right constructs reversed. Next we give another example of a bi--algebra, which is introduced in <cit.> and is similar to Example <ref>. Take an ordered group (G,·,e,≤). Let T” be a set whose elements are constructed grammatically as follows: t ::= g | t t' | t t' (g ∈ G).
That is, T” is a set of binary trees whose leaves are labeled by elements of G, and whose nodes are two-colored by and . We further define a function | | :T” G by induction: |g| := g, |t_2 t_1| := |t_2| · |t_1|^-1 and |t_1 t_2| := |t_1|^-1· |t_2|. Let |”| be the powerset of { t ∈ T”| e ≤ |t| }. Then we can get a bi--algebra ” by |”|: * For M,N ∈ |”|, M N := { t_2 |∃ t_1 ∈ N, (t_2 t_1) ∈ M }. * For M,N ∈ |”|, N M := { t_2 |∃ t_1 ∈ N, (t_1 t_2) ∈ M }. * := { (t_3 t_1) (t_2 t_1) (t_3 t_2) | t_1,t_2,t_3 ∈ T”}, dual for . * := { ((t_1 t_2) t_3) (t_1 (t_2 t_3)) | t_1,t_2,t_3 ∈ T”}, dual for . * := { t_1 t_1 | t_1 ∈ T”}, dual for . * For M ∈ |”|, M := { t_2 t_1 | (t_1 t_2) ∈ M }, dual for M. In the above example, we prepare in the construction of T” to express the left application. However, in fact, of Example <ref> is a bi--algebra even without preparing . Let T be the same set as in Example <ref>. For t, t' ∈ T, we define t t' ∈ T as (e t) (e t'). Then is a bi--algebra, whose components are taken in the same way as in Example <ref>. Next we give some basic properties of bi--algebras. * Any bi--algebra is also a -algebra. * Any -algebra is also a bi--algebra whose left and right applications coincide. * When =(,) is a bi--algebra, the left application is unique up to isomorphism. That is, when both (,,_1) and (,,_2) are bi--algebras, _1 = (,_1) and _2 = (,_2) are isomorphic as applicative structures, where x _i y := y _i x. * Let = (,,) be a bi--algebra and take an applicative structure ' := (,') by x ' y := y x. Then is a -algebra iff ' is a -algebra. Moreover, in such a case, and ' are isomorphic as applicative structures. * , , , , and a are given as , , xyx (y ), xyx y, xytt x y and xx a, respectively. * For a -algebra (, ), (, , ) is a bi--algebra when we take y x := x y. Here = :=, = :=, = := and a = a := a. * By the combinatory completeness of _2, we have L := yxy _2 x such that L y x = y _2 x = x _2 y. By the combinatory completeness of _1, we have an element r := xyL y x, which satisfies r _1 x _1 y = L y x = x _2 y. This r realizes the applicative morphism i_1 : _1 _2 given as the identity function on . Similarly we have the inverse applicative morphism i_2 : _2 _1 given as the identity function. i_1 and i_2 are the isomorphisms between _1 and _2. * Suppose that is a -algebra, that is, there is some element ∈ such that x y z = x z y. Take an element := xyz M z y x, where M := yzxy (z x). , and make ' a -algebra. Similarly, when we suppose ' is a -algebra, is also a -algebra. Furthermore, when we suppose (and also ') is a -algebra, we have an element r := yxy x, which realizes the applicative morphism i : ' given as the identity function. Similarly we have the inverse applicative morphism i' : ' given as the identity function, and thus ≅'. By (<ref>) and (<ref>) of the above proposition, the class of bi--algebras lies in between the classes of -algebras and -algebras. We named the “-combinator” of -algebras so because it is represented as xyx y in a bi--algebra, which gives the “left” application of two arguments. Although xyx y always acts as a -combinator in a bi--algebra, this is not the only way to take a -combinator. Indeed, in Example <ref>, has a -combinator xyx y = () = { t_2 ((e t_1) (e t_2)) t_1 | t_1,t_2 ∈ T }, which is different from the -combinator taken in Example <ref>. Since a bi--algebra is also a -algebra, we know that (and ) is a monoidal closed category.
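As an aside before turning to the bi-closed structure, the two applications and the two β-rules of the bi-planar lambda calculus are easy to animate. The following Haskell fragment is our own illustrative sketch (the names BiTerm, subst and step are ours); it assumes, harmlessly for closed linear terms, that all binder names are pairwise distinct, so substitution needs no capture-avoidance.

data BiTerm
  = V String
  | RAbs String BiTerm        -- right abstraction
  | LAbs String BiTerm        -- left abstraction
  | RApp BiTerm BiTerm        -- right application: function on the left
  | LApp BiTerm BiTerm        -- left application: function on the right
  deriving Show

subst :: String -> BiTerm -> BiTerm -> BiTerm
subst x n (V y)      = if x == y then n else V y
subst x n (RAbs y m) = RAbs y (subst x n m)
subst x n (LAbs y m) = LAbs y (subst x n m)
subst x n (RApp m p) = RApp (subst x n m) (subst x n p)
subst x n (LApp m p) = LApp (subst x n m) (subst x n p)

-- one step of beta-reduction, trying the two beta-rules first
step :: BiTerm -> Maybe BiTerm
step (RApp (RAbs x m) n) = Just (subst x n m)   -- right beta
step (LApp n (LAbs x m)) = Just (subst x n m)   -- left beta
step (RApp m n) = case step m of
  Just m' -> Just (RApp m' n)
  Nothing -> fmap (RApp m) (step n)
step (LApp n m) = case step m of
  Just m' -> Just (LApp n m')
  Nothing -> fmap (\n' -> LApp n' m) (step n)
step _ = Nothing

We now return to the categorical structures.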
Moreover, we can show that the categories of assemblies on bi--algebras are not just monoidal closed categories but monoidal bi-closed categories, having richer categorical structure. A monoidal bi-closed category is a monoidal category with two sorts of adjunctions (X ⊗ Y,Z) ≅(X,Z Y) and (X ⊗ Y,Z) ≅(Y,X Z). When =(,) is a bi--algebra, is a monoidal bi-closed category. Let be the left application of . * A realizer for identities is . * A realizer for the composition of f:X Y and g:Y Z is r_g r_f. * For objects X and Y, the underlying set of X ⊗ Y is |X| × |Y|. Realizers are defined as x ⊗ y := {tt p q| p ∈x_X, q ∈y_Y }. * For f: X X' and g:Y Y', the map f ⊗ g is the function sending x ⊗ y to f(x) ⊗ g(y). A realizer for f ⊗ g is upqtt (r_f p) (r_g q) u. * The underlying set of the unit object I is a singleton {∗}. The realizer is ∗_I := {}. * The left unitor λ_X: I ⊗ X X sends ∗⊗ x to x, whose realizer is p p. A realizer of λ_X^-1 is ptt p. * The right unitor ρ_X: X X ⊗ I sends x to x ⊗∗, whose realizer is ptt p. A realizer of ρ_X^-1 is upvp (v ) u. * The associator α_XYZ:(X ⊗ Y) ⊗ Z X ⊗ (Y ⊗ Z) sends (x ⊗ y) ⊗ z to x ⊗ (y ⊗ z). α_XYZ is realized by uvM v u, where M := pqrtt p t't' q r. A realizer of α_XYZ^-1 is upvqrN v u, where N := t(t t't' p q) r. * For objects X and Y, the underlying set of Y X is _(X,Y). Realizers are f_Y X := { r |}. * For f: X' X and g:Y Y', g f is the function sending a map h : X Y to the map g ∘ h ∘ f : X' Y'. A realizer for g f is uvr_g (u (r_f v)). * The evaluation map ev :(Y X) ⊗ X Y sends f ⊗ x to f(x), which is realized by u u. * For any map f:Z ⊗ X Y, there exists a unique map g:Z (Y X) which satisfies ev ∘ (g ⊗ id_X) = f. This g is given as the function sending z to the function x ↦ f(z ⊗ x), which is realized by qpr_f tt q p. * For objects X and Y, the underlying set of X Y is _(X,Y). Realizers are f_X Y := { r |}. This set is not empty since (r_f) is in the set for a realizer r_f of f. * For f: X' X and g:Y Y', f g is the function sending a map h : X Y to the map g ∘ h ∘ f : X' Y'. A realizer for f g is uvr_g ((r_f v) u). * The evaluation map ev' : X ⊗ (X Y) Y sends x ⊗ f to f(x), which is realized by upvp v u. * For any map f:X ⊗ Z Y, there exists a unique map g:Z (X Y) which satisfies ev' ∘ (id_X ⊗ g) = f. This g is given as the function sending z to the function x ↦ f(x ⊗ z), which is realized by qpr_f tt p q. For the category of modest sets, we use the same discussion as for Proposition <ref>. That is, for the functor F that is the left adjoint of the inclusion functor G:, we define tensor products ⊠ in as X ⊠ Y := F(GX ⊗ GY). When =(,) is a bi--algebra, is a monoidal bi-closed category. In Proposition <ref>, is the category of assemblies on the applicative structure (, ). Even if we employ the left application to construct the category of assemblies, we can obtain a category with the same structures as , as the next proposition says. Let =(,,) be a bi--algebra. When we take an applicative structure '=(,') by x ' y := y x, and are isomorphic as categories. Moreover, is monoidally isomorphic to with the reversed tensor products. That is, there is an isomorphism R: such that R(I) ≅ I', R^-1(I') ≅ I, R(X ⊗ Y) ≅ RY ⊗' RX and R^-1(X' ⊗' Y') ≅ R^-1Y' ⊗ R^-1X' hold. For a map f:X Y in , the map is also a map in since a realizer is given by r_f. Therefore, we can take a functor R: which sends objects to the same objects and maps to the same maps. Similarly we can get R^-1 which sends objects to the same objects and maps to the same maps.
' is a bi--algebra by taking the left application x ' y := y x. We define the monoidal structure (⊗',I') on in the same way as in Proposition <ref>. Here the realizers for tensor products are x ⊗' y_X ⊗' Y = {tq (p t)| p ∈x_X, q ∈y_Y }. A realizer for R(I) I' is uu and a realizer for the inverse is u u. A realizer for R(X ⊗ Y) RY ⊗' RX is upqtp (q t) u and a realizer for the inverse is uu qptt p q. The realizers related to R^-1 are similar. We can define “partial bi--algebras” naturally. Similarly to the partial -algebras discussed in Remark <ref>, for a partial bi--algebra : * is not generally a monoidal bi-closed category; * adding an extra element , naturally extends to a total bi--algebra _; * is the full subcategory of _. Here we do not need two elements (one for and one for ); a single one suffices.

§ SEPARATION OF CLASSES OF APPLICATIVE STRUCTURES

As we have already mentioned, the classes of applicative structures in this paper form a hierarchy summarized in the following table (Table <ref>). However, we have not yet shown the strictness of the hierarchy. To show the strictness of each inclusion, it is sufficient to provide an applicative structure separating the classes, that is, an applicative structure belonging to one of the classes but not to the other. In this section we give several such applicative structures, as summarized in Table <ref>.

§.§ Proofs of separations

First we show that the planar lambda calculus with a constant separates -algebras and -algebras. Suppose a constant symbol c and add the following constant rule to the construction rules of planar lambda terms (see Examples <ref> and <ref>). (constant) ⊢ c We assume no additional reduction rules about the constant. That is, for instance, c (λ x.x) c has no redex. Closed planar terms (which may contain c) modulo form a -algebra, which we call . Even with the constant c added, the planar lambda calculus still has the confluence and strong normalization properties. is a -algebra but not a -algebra. Hence, -algebras ⊊ -algebras. Assume that is a -algebra. That is, assume there exist terms I and in such that I M M and M I M for any term M in . We take I and as β-normal terms w.l.o.g. If M N in , the numbers of appearances of c in M and N are equal. Thus, since c I c, I and cannot contain c. * When c is β-normal, c I is also β-normal and obviously not equal to c. This contradicts the confluence of the planar lambda calculus (with constant c). * When = λ u.J for some J and u, c I (J[c/u]) I. * When J = λ v.J' for some J' and v, c I J'[c/u][I/v]. Suppose v receives just n arguments N_1,… ,N_n (n ≥ 0) in J'. J' = C[v N_1 … N_n] for some context C[-] which contains u to the left of the hole [-]. For the β-normal form N of I N_1 … N_n, c I (C[N])[c/u]. (C[N])[c/u] is β-normal and obviously not equal to c. This contradicts the confluence. * Otherwise, J[c/u] I is β-normal and not equal to c. This contradicts the confluence. Next we show that the planar lambda calculus additionally employing the η-equality separates -algebras and -algebras. Suppose three constant symbols c_1, c_2 and c_3 and add the following constant rules (i=1,2,3) to the construction rules of planar lambda terms. (constant) ⊢ c_i We assume no additional reduction rules about the constants. Closed planar terms (that may contain constants) modulo form a -algebra, which we call . Note that the equivalence relation of is the βη-equality, while that of (Example <ref>) is the β-equality. We have λ xyz.x(yz) as a representative of in .
Indeed, for any term M, (λ xyz.x(yz)) M (λ w.w) λ z.Mz =_η M. is a -algebra but not a -algebra. Hence, -algebras ⊊ -algebras. Assume that there are some terms L and P in satisfying that for any terms M_1, M_2 and M_3, L M_1 (P M_2 M_3) M_1 M_2 M_3. Taking M_1 = M_2 = M_3 := λ x.x, we see that L and P cannot contain constants. Taking M_i := c_i, we have L c_1 (P c_2 c_3) c_1 c_2 c_3. Since L is a closed planar term with no constants, the βη-normal form of L is of the form λ xy_1 … y_m.x N_1 … N_n (m,n ≥ 0). Therefore, L c_1 (P c_2 c_3) (λ y_1… y_m.c_1 N_1 … N_n)(P c_2 c_3). However, this term cannot be βη-equal to c_1 c_2 c_3, since c_1 cannot receive c_2 and c_3 as separate arguments no matter what the form of P is. Next we show that the freely constructed -algebra separates -algebras and bi--algebras. We take as the freely constructed -algebra with two constants c_1 and c_2. That is, elements of are constructed from , , , , , c_1 and c_2 using the application and the unary operation (). The equality in is given by the axioms of -algebras, and we do not assume any axioms on the constants. is a -algebra but not a bi--algebra. Hence, bi--algebras ⊊ -algebras. Assume that is a bi--algebra and write the right and left applications as and . Here this is the same application as that of as a -algebra, that is, MN and M N denote the same element. By the combinatory completeness, there is an element M := xyx y in . Since M = holds, this M can contain neither c_1 nor c_2. For this M, M c_1 c_2 = c_2 c_1. As we can see from the axioms of , , , , and ( `- ), it is impossible for M of any form to exchange the order of the two arguments c_1 and c_2 in M c_1 c_2. Then it is also impossible, whatever form c_2 takes, to reduce M c_1 c_2 to c_2 c_1. Finally we show that the bi-planar lambda calculus (Example <ref>) separates bi--algebras and -algebras. is a bi--algebra but not a -algebra. Hence, -algebras ⊊ bi--algebras. Assume that there is some closed bi-planar lambda term C in such that for any closed bi-planar terms M, N and L, C M N L M L N. Let C' be the β-normal form of C xx. C' M N N M holds for any M and N. Take M := xxyy and N := xxyyzz. Note that for any β-normal term P and a free variable w of P, P[M/w] and P[N/w] are β-normal. * When C' M is β-normal, both C' M N and N M are β-normal. However, obviously C' M N N M, which contradicts the confluence of the bi-planar lambda calculus. * When C' = uC” for some C” and u, C' M N C”[M/u] N. * When C” = vC”' for some C”' and v, C' M N C”'[M/u][N/v]. Since v is the rightmost free variable of C”', N is to the right of M in C”'[M/u][N/v]. Hence C”'[M/u][N/v] N M, which contradicts the confluence. * Otherwise, C”[M/u] N is β-normal. C”[M/u] N N M, which contradicts the confluence.

§.§ The planar lambda calculus is not a bi--algebra

The proofs of separation in the previous subsection are straightforward. However, it is sometimes difficult to show that an applicative structure does not belong to a certain class of applicative structures. In this subsection, as an example, we show that of Example <ref> (the planar lambda calculus with no constants) is not a bi--algebra. Compared to the propositions for the cases with constants (Propositions <ref> and <ref>), the proof is trickier. For any term M of , there is a term N of such that NM λ x.x. Since planar lambda terms always have unique β-normal forms, we can assume M is β-normal w.l.o.g. We show this lemma by induction on the number of bound variables of M.
When BV(M) is a singleton, M is λ x.x and N:=λ x.x satisfies NM λ x.x. Assuming that the lemma holds whenever the number of bound variables of M is at most k, we show that the lemma holds for M containing k+1 bound variables. Since M is planar and β-normal, M = λ x y_1 … y_m .x P_1 … P_n for some β-normal planar terms P_1, … , P_n. Here y_1 ,… , y_m are all the free variables of P_1 ,… , P_n. Let Q_j be the term obtained by replacing all the y_i in P_j with λ z.z. Each Q_j is a closed planar term and has at most k bound variables. Hence, from the induction hypothesis, there exists some closed planar term R_j such that R_j Q_j λ x.x. Take N' := λ w_1 … w_n.(R_1 w_1)… (R_n w_n) and N:= λ u.uN'(λ z_1.z_1)… (λ z_m.z_m). Then N' and N are closed planar terms and NM MN'(λ z_1.z_1)… (λ z_m.z_m) = (λ x y_1 … y_m .x P_1 … P_n)N'(λ z_1.z_1)… (λ z_m.z_m) N'Q_1 … Q_n = (λ w_1 … w_n.(R_1 w_1)… (R_n w_n))Q_1 … Q_n (R_1 Q_1)… (R_n Q_n) (λ x.x)… (λ x.x) λ x.x. is not a -algebra. Assume that there is a term T in such that TMN NM for any M and N in . (Note that a total applicative structure containing and is a -algebra iff it has such that x y = y x. Indeed, ( ( ( ))) satisfies the axiom of the -combinator.) Take a term λ x y_1 … y_m.x P_1 … P_n as the β-normal form of T. If n=0, then T=λ x.x and this immediately leads to a contradiction. Thus n ≥ 1. Since T MN NM for any M and N, TM TMT TMTT TMTTT …. Let Q_j (j=1,… ,n) be the terms obtained by replacing all the y_i in P_j with T. Each Q_j is a closed planar term. Let U := λ x.x Q_1 … Q_n. UM = (λ x.x Q_1 … Q_n)M M Q_1 … Q_n = (M P_1 … P_n)[T/y_1]… [T/y_m] (λ x y_1 … y_m.x P_1 … P_n) M T … T = TMT… T TM. Thus UMN (TM)N NM holds for any M and N. From Lemma <ref>, there exist closed terms R_j (j=1,… ,n) such that R_j Q_j λ z.z. Take M_0 := λ w_1 … w_n.(R_1 w_1)… (R_n w_n). Then for any closed planar term N, NM_0 UM_0 N = (λ x.x Q_1 … Q_n)M_0 N M_0 Q_1 … Q_n N = (λ w_1 … w_n.(R_1 w_1)… (R_n w_n))Q_1 … Q_n N (R_1 Q_1) … (R_n Q_n) N (λ z.z)… (λ z.z) N N. Taking N_0 := λ x.x in N_0 M_0 N_0, we get M_0 = λ x.x. Therefore, N (λ x.x) N holds for any closed planar term N. However, N:= λ y.y(λ z.z) is a counterexample to this equation, which leads to a contradiction. is not a bi--algebra. Assume that is a bi--algebra. That is, taking as the application canonically obtained by the application of planar lambda terms, assume that there is some binary operation such that (||,,) becomes a bi--algebra. This is a binary operation not on planar lambda terms, but on β-equivalence classes of planar lambda terms. However, in the sequel, we do not distinguish a lambda term M from the equivalence class containing M. For instance, for planar lambda terms M_1 and M_2, M_1 M_2 denotes some representative of M_1M_2, where M_i is the β-equivalence class containing M_i. By the combinatory completeness for bi--algebras, there is a closed planar term L representing xyx y. Take a term λ x y_1 … y_m.x P_1 … P_n as the β-normal form of L. For a term T representing xyx (y ), dividing into the cases n=0 and n ≥ 1, we show that T makes a -algebra, contradicting Lemma <ref>. If n=0, L=λ x.x and M N (L M) N M N holds for any M and N in . Given an arbitrary term N_0 in , take M := and N:= N_0 in M N M N. Then we get N_0 N_0. For arbitrary M_0 and N_0 in , T M_0 N_0 = (xyx (y )) M_0 N_0 M_0 (N_0 ) M_0 N_0 N_0 M_0 holds. Hence, T makes a -algebra, which contradicts Lemma <ref>. Next is the case of n ≥ 1. Since L M N M N for any M and N, L M L M (L) L M (L) (L) L M (L) (L) (L) ….
Let Q_j be the term obtained by replacing all the y_i in P_j with L. Each Q_j is a closed planar term. Let V:= λ x.x Q_1 … Q_n. VM = (λ x.x Q_1 … Q_n)M M Q_1 … Q_n = (M P_1 … P_n)[L/y_1]… [L/y_m] (λ x y_1 … y_m.x P_1 … P_n)M(L)… (L) = LM(L)… (L) LM. Thus VMN LMN M N holds for any M and N. From Lemma <ref>, there exist closed terms R_j (j=1,… ,n) such that R_j Q_j λ z.z. Take M_1 := λ w_1 … w_n.(R_1 w_1)… (R_n w_n). Then for any closed planar term N, M_1 N LM_1 N = (λ x y_1 … y_m.x P_1 … P_n)M_1 N M_1 Q_1 … Q_n N = (λ w_1 … w_n.(R_1 w_1)… (R_n w_n))Q_1 … Q_n N (R_1 Q_1) … (R_n Q_n) N (λ z.z)… (λ z.z) N N. Taking N := in M_1 N N, we get M_1 =. Therefore, N_1 N_1 holds for any closed planar term N_1. Given an arbitrary N_2 in , with N_1:= N_2, we get N_2 N_2 N_2 . For arbitrary M_2 and N_2 in , T M_2 N_2 = xyx (y ) M_2 N_2 M_2 (N_2 ) M_2 N_2 N_2 M_2 holds. Hence, T makes a -algebra, which contradicts Lemma <ref>. We have already seen in Proposition <ref> that (the planar lambda calculus with constants) is not a -algebra. However, whether is a -algebra is still open.

§.§ The computational lambda calculus

Next we consider the computational lambda calculus as an applicative structure that gives rise to non-symmetric structures. The computational lambda calculus is a variant of the lambda calculus whose evaluation rules are sound for programs with computational effects <cit.>. The following axiomatization is from <cit.>. Suppose an infinite supply of variables x,y,z,…. Values, terms and evaluation contexts are defined as follows: * (values) V ::= x | λ x.M * (terms) M ::= V | MM' * (evaluation contexts) E[] ::= [] | EM | VE (Terms are the same as those of the ordinary lambda calculus in Example <ref>.) An equivalence relation =_c on terms is defined as the congruence of the following equations: * (β_V) (λ x.M)V =_c M[V/x] * (η_V) λ x.Vx =_c V * (β_Ω) (λ x.E[x])M =_c E[M] Here E[M] denotes the term obtained by substituting M for [] in E[]. The (untyped) computational lambda calculus is the lambda calculus formed by terms and =_c. In <cit.>, we showed that the computational lambda calculus is a -algebra but not a -algebra. We can get a -algebra , whose underlying set is the set of equivalence classes of lambda terms modulo =_c. (Note that terms of are not restricted to closed terms.) Here λ xyz.x(yz), λ x.x, λ xy.yx and λ x.xM are representatives of , , and M, respectively. Although the computational lambda calculus has all terms of the lambda calculus, is neither a PCA nor a -algebra. This is reasonable, considering that programs with effects cannot be discarded, duplicated or exchanged in general, and thus cannot have the //-combinator. Moreover, we can prove the next proposition. is not a bi--algebra. To prove this proposition, we use the CPS-translation <cit.>. The CPS-translation sends terms of the computational lambda calculus to terms of the ordinary lambda calculus and is defined inductively as follows. * x := λ k.kx * λ x.M := λ k.k(λ x.M) * MN := λ k.M (λ f. N (λ x.fxk)) For any terms M and N, M =_c N holds in the computational lambda calculus iff MN holds in the ordinary lambda calculus. We derive a contradiction by assuming that is a bi--algebra. If is a bi--algebra, we have a term L representing xyx y and a term M representing xM x for each term M. For any terms M_1 and M_2, L M_1 (M_2) =_c M_2 M_1 holds, and thus L M_1 (M_2)M_2 M_1 holds. Now we take a fresh variable v and let M_2 := vv. Additionally, we take a variable u (fresh for L, M_2 and M_2) and let M_1 := uu. Then L M_1 (M_2) = λ k.
(λ k'.L (λ f'.M_1(λ x'.f'x'k'))) (λ f.M_2(λ x.fxk)) λ k. L (λ f'.M_1 (λ x' .f'x'(λ f.M_2(λ x.fxk)))) λ k. L (λ f'.uu (λ x' .f'x'(λ f.M_2(λ x.fxk)))), M_2 M_1 λ k.vv(λ f.uu(λ x.fxk)). In M_2 M_1, vv receives an argument of the form (… uu … ). However, since u and v are fresh, no matter what L is, in L M_1 (M_2), vv cannot receive an argument containing uu. Hence the terms L M_1 (M_2) and M_2 M_1 cannot be βη-equal. This contradicts the soundness of the CPS-translation. Semantically, the untyped ordinary/linear/planar lambda calculus is modeled by a reflexive object of a CCC/SMCC/closed multicategory, and this is related to the categorical structures of assemblies on each lambda calculus. On the other hand, the untyped computational lambda calculus is modeled by a reflexive object of a Kleisli category. Since the categorical structure of a Kleisli category is in general not monoidal but premonoidal (see <cit.>), it is expected that the category of assemblies on the untyped computational lambda calculus is not a monoidal category. Thus the computational lambda calculus is expected not to be a -algebra inducing a monoidal closed category; however, we have not proven this conjecture yet. Here, we give an intuitive explanation for the conjecture. Assume that and exist in the computational lambda calculus. Take three non-values M_1, M_2 and M_3. Suppose these terms are reduced to values: v_L; v_P; M_i v_i. In M_1 M_2 M_3, the evaluation proceeds as follows: M_1 is reduced to v_1 ⇝ M_2 is reduced to v_2 ⇝ v_1 v_2 is reduced ⇝ M_3 is reduced to v_3 ⇝ …. On the other hand, in M_1 (M_2 M_3), the evaluation proceeds as follows: is reduced to v_L ⇝ M_1 is reduced to v_1 ⇝ v_L v_1 is reduced ⇝ is reduced to v_P ⇝ M_2 is reduced to v_2 ⇝ v_P v_2 is reduced ⇝ M_3 is reduced to v_3 ⇝ …. These two computations seem not to coincide, since the order of the evaluations of v_1 v_2 and M_3 is reversed.
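As an aside (our own illustrative sketch, not part of the formal development), the CPS-translation used in the proof above can be transcribed directly into Haskell; the names Tm and cps are ours, and we assume the reserved continuation names "k", "f" and "x" do not occur in the input term.

data Tm = Var String | Lam String Tm | App Tm Tm
  deriving Show

-- one clause per case of the CPS-translation defined above
cps :: Tm -> Tm
cps (Var y)   = Lam "k" (App (Var "k") (Var y))
cps (Lam y m) = Lam "k" (App (Var "k") (Lam y (cps m)))
cps (App m n) =
  Lam "k" (App (cps m)
               (Lam "f" (App (cps n)
                             (Lam "x" (App (App (Var "f") (Var "x")) (Var "k"))))))

§ NECESSARY CONDITIONS FOR INDUCING CLOSED STRUCTURES

We have seen that applicative structures of certain classes induce the corresponding categorical structures, in Proposition <ref> (CCCs), Proposition <ref> (SMCCs), Proposition <ref> (closed multicategories), Proposition <ref> (closed categories), Proposition <ref> (monoidal closed categories) and Proposition <ref> (monoidal bi-closed categories). In this section, we show that certain “inverses” of these propositions hold. Suppose is a total applicative structure and := happens to be a CCC. is an -algebra if the following conditions hold. * |Y^X| = _ (X,Y) and f_Y^X = { r |}. * For f:X' X and g:Y Y', g^f : Y^X Y'^X' is the function sending h:X Y to g ∘ h ∘ f. * The forgetful functor from to strictly preserves finite products. * The adjunction Φ: _ (X × Y, Z) _ (X, Z^Y) is the function sending a function f to the function x ↦ (y ↦ f(x,y)). Take an object A := (,_A), where a_A := { a }. When we take as a realizer of id_A, this satisfies ∀ a ∈, a_A ⊆id_A (a)_A. That is, ∀ a ∈, a = a. Applying Φ to the first projection (a,a') ↦ a: A × A A, we get a map k:A A^A, which sends a to (a' ↦ a). (Here we use the conditions <ref>, <ref> and <ref> to clarify what the function k actually is.) When we take as a realizer of k, this satisfies ∀ a,a' ∈, a a' = a. Let ϕ : A A^A be the function sending a to the function x ↦ a x. Here ϕ(a) is realized by a and ϕ is realized by .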
Applying Φ twice to the map from ((A^A)^A × A^A) × A to A defined as ((A^A)^A × A^A) × A id × diagonal ((A^A)^A × A^A) × (A × A) symmetry ((A^A)^A × A) × (A^A × A) ev × ev A^A × A ev A, we get a map s:(A^A)^A (A^A)^(A^A) which sends a function g:A A^A to the function (f:A A) ↦ (a ↦ g(a) (f(a))). The map A ϕ A^A ϕ^id (A^A)^A s (A^A)^(A^A)id^ϕ (A^A)^A is the function a ↦ (a' ↦ (a”↦ a a” (a' a”))). (Here we use condition <ref> to clarify what the functions ϕ^id and id^ϕ actually are.) Thus, when we take as a realizer of this map, satisfies x y z = x z (y z) for any x,y,z ∈. To rephrase the proposition: to obtain a CCC by categorical realizability, being an -algebra is a necessary condition on the total applicative structure (under several conditions). We will show similar propositions for the other classes. Combining the propositions in this section and the separations in the previous section, we can say that, for instance, the category of assemblies on an applicative structure that is a bi--algebra but not a -algebra (,) is indeed non-symmetric monoidal (as long as we try to take the symmetry in the canonical way). When we try to prove the proposition replacing “total applicative structure” with “partial applicative structure” in Proposition <ref>, we cannot use the same proof. This is because ϕ : A A^A is not always defined. Indeed, when a a' is not defined in , ϕ (a) is not defined at a'. It is still unclear whether we can prove a proposition similar to Proposition <ref> when is a partial applicative structure. Suppose is a total applicative structure and := happens to be an SMCC. is a -algebra if the following conditions hold. * |Y X| = _ (X,Y) and f_Y X = { r |}. * g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f. * The forgetful functor from to is a strict symmetric monoidal functor. * The adjunction Φ: _ (X ⊗ Y, Z) _ (X, Z Y) is the function sending a function f to the function x ↦ (y ↦ f(x,y)). Take an object A := (,_A), where a_A := { a }. When we take as a realizer of id_A, this satisfies ∀ a ∈, a_A ⊆id_A (a)_A. That is, ∀ a ∈, a = a. Let ϕ : A (A A) be the function sending a to the function x ↦ a x. Here ϕ(a) is realized by a and ϕ is realized by . Applying Φ twice to the map ((A A) ⊗ (A A)) ⊗ A (A A) ⊗ ((A A) ⊗ A) (A A) ⊗ A A, we get a map l: (A A) ((A A) (A A)), which sends g:A A to the function (f:A A) ↦ g ∘ f. The map A ϕ (A A) l ((A A) (A A)) id ϕ ((A A) A) is the function a ↦ (a' ↦ (a”↦ a (a' a”))). Thus, when we take as a realizer of this map, satisfies x y z = x (y z) for any x,y,z ∈. Applying Φ to the map A ⊗ (A A) symmetry (A A) ⊗ A ev A, we get a map c:A (A (A A)), which sends a to (f ↦ f(a)). The map A c (A (A A)) id ϕ (A A) is the function a ↦ (a' ↦ a' a). Thus, when we take as a realizer of this map, satisfies x y = y x for any x,y ∈. Let := ( ( ( ))). Then xyz=xzy holds for any x,y,z ∈. Suppose is a total applicative structure and := happens to be a closed multicategory. is a -algebra if the following conditions hold. * (;X) = |X| and (;X) = X. * (X;Y) = _ (X,Y) and (X;Y) =(_ (X,Y), ). Here f= { r |}. * (X_1,… ,X_n;Y) = (X_1; (X_2,… ,X_n;Y)) and (X_1,… ,X_n;Y) is the underlying set of (X_1,… ,X_n;Y). * For g:Y_1,… ,Y_n Z and f_l:X^l_1,… ,X^l_k_l Y_l, g ∘ (f_1,… ,f_n) is the function sending x^1_1,… ,x^1_k_1,… ,x^n_k_n to g(f_1 (x^1_1,… ,x^1_k_1),… ,f_n (x^n_1,… ,x^n_k_n)). When k_l = 0 for some 1 ≤ l ≤ n, g ∘ (f_1,… ,f_n) is the function obtained by giving the y_l ∈ |Y_l| pointed to by f_l as the l-th argument of g.
* ev_X_1,… ,X_n;Y sends f, x_1,… ,x_n to f(x_1,… ,x_n). * Λ_Z_1,… ,Z_m;X_1,… ,X_n;Y sends a function (z_1,… ,z_m,x_1,… ,x_n ↦ f(z_1,… ,z_m,x_1,… ,x_n)) to the function (z_1,… ,z_m ↦ f(z_1,… ,z_m,-,… ,-)). Take an object A := (,_A), where a_A := { a }. When we take as a realizer of id_A, this satisfies ∀ a ∈, a_A ⊆id_A (a)_A. That is, ∀ a ∈, a = a. Let ϕ : A (A;A) be the function sending a to the map x ↦ a x. Here ϕ(a) is realized by a and ϕ is realized by . Take a map b:A,A,A id,ϕ,id A, (A;A),A ϕ,ev (A;A),A ev A, which sends (x,y,z) to x (y z) for any x,y,z ∈. When we take as a realizer of Λ_A;A;(A;A) (Λ_A,A;A;A (b)), x y z = (Λ_A;A;(A;A) (Λ_A,A;A;A (b)))(x)(y)(z) = b(x,y,z) = x (y z). Given an arbitrary a ∈, take a map f_a:A A as A id,a A,A ϕ,id (A;A),A ev A, which sends x ∈ to x a. When we take a as a realizer of f_a, a x = x a for any x ∈. Suppose is a total applicative structure and := happens to be a closed category. is a -algebra if the following conditions hold. * |Y X| = _ (X,Y) and f_Y X = { r |}. * g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f. * i_X is the function sending a function (f:∗↦ x) to x. * L_Y,Z^X is the function sending g:Y Z to the function (f:X Y) ↦ g ∘ f. In condition <ref>, we assume that the unit object is a singleton {∗}. This assumption can be derived from condition <ref>. Take an object X := ({ x_1,x_2 } , _X) by x_i_X :=. From condition <ref>, |X I| is _ (I,X). Since _ (I,X) = _ (|I|,{ x_1,x_2 }), |X I| = _ (|I|,{ x_1,x_2 }). Also, since X I ≅ X, |X I| ≅ |X| = { x_1,x_2 }. _ (|I|,{ x_1,x_2 }) ≅{ x_1,x_2 } holds iff |I| is a singleton. Take an object A := (,_A), where a_A := { a }. When we take as a realizer of id_A, this satisfies ∀ a ∈, a_A ⊆id_A (a)_A. That is, ∀ a ∈, a = a. Let ϕ : A (A A) be the function sending a to the function x ↦ a x. Here ϕ(a) is realized by a and ϕ is realized by . The map A ϕ (A A) L ((A A) (A A)) id ϕ ((A A) A) is the function a ↦ (a' ↦ (a”↦ a (a' a”))). Thus, when we take as a realizer of this map, satisfies x y z = x (y z) for any x,y,z ∈. Since I ≅ (I I) and ∈id_I_I I, we can assume ∈∗_I w.l.o.g. When we take as a realizer of i_A^-1:A (A I), satisfies a x = a for any a ∈ and x ∈∗_I; in particular, a =a holds. Given an arbitrary a ∈, let g_a:I A be the function ∗↦ a. g_a is realized by a. The map A ϕ (A A) id g_a (A I) i_A A is the function a' ↦ a' a. Thus, when we take a as a realizer of this map, a satisfies a x = x a for any x ∈. Suppose is a total applicative structure and := happens to be a monoidal closed category. is a -algebra if the following conditions hold. * |Y X| = _ (X,Y) and f_Y X = { r |}. * g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f. * The forgetful functor from to is a strict monoidal functor. * The adjunction Φ: _ (X ⊗ Y, Z) _ (X, Z Y) is the function sending a function f to the function x ↦ (y ↦ f(x,y)). Applying Φ twice to the map ((Y X) ⊗ (X Z)) ⊗ Z (Y X) ⊗ ((X Z) ⊗ Z) (Y X) ⊗ X Y, we get a map L^X_Y,Z: (Y X) ((Y Z) (X Z)). This L is the natural transformation L of the closed category . Applying Φ to the unitor ρ_X :X ⊗ I X, we get a map i^-1_X : X (X I). The inverse map is the natural isomorphism i of the closed category . We can easily check that and satisfy all the conditions of Proposition <ref> for these L and i. Hence, is a -algebra. Take an object A := (,_A), where a_A := { a }. Let ϕ: A (A A) be the function sending a to the function x ↦ ax. Here ϕ (a) is realized by a and ϕ is realized by .
Let l: A (A (A ⊗ A)) be the map obtained by applying Φ to A ⊗ (A ⊗ A) (A ⊗ A) ⊗ A ((A A) ⊗ A) ⊗ A A ⊗ A (A A) ⊗ A A, and let be a realizer of l. l is the function sending x to the function (y,z) ↦ xyz. Also let be a realizer of p := Φ(id_A ⊗ A) : A ((A ⊗ A) A). p is the function sending y to the function z ↦ (y,z). Then for any x,y,z ∈, x ( y z) ∈l(x)(p(y)(z))_A and thus x ( y z) = l(x)(p(y)(z)) = l(x)(y,z) = xyz. The proof of the next proposition, for monoidal bi-closed categories and bi--algebras, is a little more complicated than the proofs of the previous propositions. When we obtain a monoidal bi-closed category by a bi--algebra, we take realizers of elements of the object X Y in as f_X Y := { r ∈ || |} (see the proof of Proposition <ref>). However, in the next proposition we do not assume anything about the left application of , and thus we also cannot assume anything about realizers for X Y. This makes the proofs of the existence of and cumbersome. Suppose = (, ) is a total applicative structure and := happens to be a monoidal bi-closed category. is a bi--algebra if the following conditions hold. * |Y X| = _ (X,Y) and f_Y X = { r |}. * g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f. * The forgetful functor from to is a strict monoidal functor. * The adjunction Φ: _ (X ⊗ Y, Z) _ (X, Z Y) is the function sending a function f to the function x ↦ (y ↦ f(x,y)). * |X Y| = _ (X,Y). * f g : (X Y) (X' Y') is the function sending h:X Y to g ∘ h ∘ f. * The adjunction Φ': _ (X ⊗ Y, Z) _ (Y, X Z) is the function sending a function f to the function y ↦ (x ↦ f(x,y)). The conditions of this proposition include all the conditions of Proposition <ref>. Hence, is a -algebra and has the combinatory completeness for the planar lambda calculus. We take as the -combinator and as the -combinator of . Take an object A := (,_A), where a_A := { a }. Applying Φ to the evaluation map ev_1:A ⊗ (A A) A, we get a map l:A (A (A A)), which sends a to (f ↦ f(a)). Let _1 be a realizer of l and x y := _1 x y. We will show that (,,) is a bi--algebra. Let ϕ : A (A A) be the function sending a to the function (x ↦ a x). Here ϕ(a) is realized by a and ϕ is realized by . Given an arbitrary a ∈, let a := x._1 x a. For any x ∈, a x = _1 x a = x a. Given an arbitrary a ∈, take a as an element of ϕ(a)_A A. Then for any x ∈, x a = _1 x a = l(x)(ϕ(a)) = ϕ (a) (x) = a x. Furthermore, we can take as (). Next we obtain . Applying Φ' to A ⊗ A ϕ(_1) ⊗ id A ⊗ A ϕ⊗ id (A A) ⊗ A ev A, we get a map ϕ':A (A A), which sends a to (a' ↦ a' a). Applying Φ' three times to A ⊗ ((A A) ⊗ (A A)) associator (A ⊗ (A A)) ⊗ (A A) ev ⊗ id A ⊗ (A A) ev A, we get a map p: I (A A) ((A A) (A A)). Define a map b_1 as I p (A A) ((A A) (A A)) ϕ' (ϕ' id) A (A (A A)), which sends ∗ to x ↦ (y ↦ (z ↦ (z y) x)). Take M_1 ∈b_1 (∗)_A (A (A A)). Let _2 be a realizer of Φ (ev_2), where ev_2 :A ⊗ (A (A A)) (A A) is the evaluation map. _2 realizes a map q:A (A A) that sends a to ϕ (_2 a). Let _3 be a realizer of Φ (ev_3), where ev_3: A ⊗ (A (A (A A))) (A (A A)) is the evaluation map. Take r:A A as a map sending x to _3 x M_1, whose realizer is x._3 x M_1. Applying Φ' to A ⊗ A q ⊗ r (A A) ⊗ A ev A, we get a map b_2 : I (A (A A)), which sends ∗ to (x ↦ (y ↦_2 y (_3 x M_1))). Take M_2 ∈b_2 (∗)_A (A A). Let b_3:A A be a map sending x to _2 x M_2, whose realizer is x._2 x M_2. When we take ∈b_3_A A, for any x ∈, x = _1 x = b_3 (x) = _2 x M_2. For any y ∈, y (x ) = y (_2 x M_2) = _1 y (_2 x M_2) = b_2 (∗) (x) (y) = _2 y (_3 x M_1).
For any z ∈, z (y (x )) = z (_2 y (_3 x M_1)) = _1 z (_2 y (_3 x M_1)) = b_1(∗)(x)(y)(z) = (z y) x. Next we obtain . Applying Φ' and Φ to A ⊗ ((A (A A)) ⊗ A) associator (A ⊗ (A (A A))) ⊗ A ev ⊗ id (A A) ⊗ A ev A, we get a map d:(A (A A)) ((A A) A), which sends a map (a ↦ (a' ↦ f(a,a'))) to the map (a' ↦ (a ↦ f(a,a'))). When we take as a realizer of A ϕ' (A A) id ϕ (A (A A)) d ((A A) A), x ( y z) = d(ϕ∘ (ϕ'(y)))(z)(x) = (ϕ∘ (ϕ'(y)))(x)(z) = (x y) z for any x,y,z ∈. Finally we obtain . Applying Φ and Φ' to (A ⊗ ((A A) A)) ⊗ A associator A ⊗ (((A A) A) ⊗ A) id ⊗ ev A ⊗ (A A) ev A, we get a map d_1:((A A) A) (A (A A)), sending a map (a' ↦ (a ↦ f(a',a))) to the map (a ↦ (a' ↦ f(a',a))). Take N_1 ∈d_1 ∘ (ϕ' id) ∘ϕ_A (A (A A)). Let _4 be a realizer of Φ (ev_4), where ev_4: A ⊗ (A (A A)) (A A) is the evaluation map. _4 realizes a map s:A (A A) sending x to ϕ (_4 x). Let _5 be a realizer of a map obtained by applying Φ to ev_5: A ⊗ (A (A (A A))) (A (A A)) and t:A A be a map sending a to _5 a N_1, whose realizer is x._5 x N_1. Applying Φ' to A ⊗ A s ⊗ t (A A) ⊗ A ev A, we get a map d_2:A (A A) sending y to (x ↦ (_4 x (_5 y N_1))). Take a realizer N_2 ∈d_2_A (A A). Let d_3:A A be a map sending x to _2 x N_2, whose realizer is x. _2 x N_2. When we take ∈d_3_A A, for any y ∈, y = _1 y = d_3 (y) = _2 y N_2. For any x ∈, x (y ) = x (_2 y N_2) = _1 x (_2 y N_2) = d_2 (y)(x) = _4 x (_5 y N_1). For any z ∈, (x (y )) z = _4 x (_5 y N_1) z = (d_1 ∘ (ϕ' id) ∘ϕ) (y)(x)(z) = (ϕ' ∘ (ϕ(y)))(z)(x) = x (y z). In this section we showed propositions giving necessary conditions to obtain certain structures on categories of assemblies. Next, we consider whether similar propositions hold for categories of modest sets. The next propositions can be proven in the same way as Propositions <ref>, <ref> and <ref>. Suppose is a total applicative structure and := happens to be a CCC. is an -algebra if the following conditions hold. * |Y^X| = _ (X,Y) and f_Y^X = { r |}. * For f:X' X and g:Y Y', g^f : Y^X Y'^X' is the function sending h:X Y to g ∘ h ∘ f. * The forgetful functor from to strictly preserves finite products. * The adjunction Φ: _ (X × Y, Z) _ (X, Z^Y) is the function sending a function f to the function x ↦ (y ↦ f(x,y)). Suppose is a total applicative structure and := happens to be a closed multicategory. is a -algebra if the following conditions hold. * (;X) = |X| and (;X) = X. * (X;Y) = _ (X,Y) and (X;Y) =(_ (X,Y), ), where f= { r |}. * (X_1,… ,X_n;Y) = (X_1; (X_2,… ,X_n;Y)) and (X_1,… ,X_n;Y) is the underlying set of (X_1,… ,X_n;Y). * For g:Y_1,… ,Y_n Z and f_l:X^l_1,… ,X^l_k_l Y_l, g ∘ (f_1,… ,f_n) is the function sending x^1_1,… ,x^1_k_1,… ,x^n_k_n to g(f_1 (x^1_1,… ,x^1_k_1),… ,f_n (x^n_1,… ,x^n_k_n)). When k_l = 0 for some 1 ≤ l ≤ n, g ∘ (f_1,… ,f_n) is the function obtained by supplying the element y_l ∈ |Y_l| picked out by f_l as the l-th argument of g. * ev_X_1,… ,X_n;Y sends f, x_1,… ,x_n to f(x_1,… ,x_n). * Λ_Z_1,… ,Z_m;X_1,… ,X_n;Y sends a function (z_1,… ,z_m,x_1,… ,x_n ↦ f(z_1,… ,z_m,x_1,… ,x_n)) to the function (z_1,… ,z_m ↦ f(z_1,… ,z_m,-,… ,-)). Suppose is a total applicative structure and := happens to be a closed category. is a -algebra if the following conditions hold. * |Y X| = _ (X,Y) and f_Y X = { r |}. * g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f. * The underlying set of the unit object I is the singleton {∗}. * i_X is the function sending a function (f:∗↦ x) to x. * L_Y,Z^X is the function sending g:Y Z to the function (f:X Y) ↦ g ∘ f. 
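The combinatory identities established in the proofs above admit a concrete, informal illustration. The following is a minimal sketch (ours, not part of the formal development), modeling a total applicative structure by untyped Python closures; the names B and I are illustrative stand-ins for the composition and identity combinators appearing in the propositions.

    # B realizes composition, the defining equation being B x y z = x (y z);
    # I realizes the identity, I x = x.
    B = lambda x: lambda y: lambda z: x(y(z))
    I = lambda x: x

    # Sample "elements" of the applicative structure: endofunctions on integers.
    f = lambda n: n + 1
    g = lambda n: 2 * n

    assert B(f)(g)(3) == f(g(3)) == 7   # the defining equation of B, checked pointwise
    assert I(f)(3) == f(3) == 4         # the defining equation of I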
Here note that Proposition <ref> has one more condition than Proposition <ref>, namely that the underlying set of the unit object is a singleton. This is because the assembly X we used in Remark <ref> is not a modest set. On the other hand, for the cases of SMCCs, monoidal closed categories and monoidal bi-closed categories, we cannot state propositions for modest sets similar to Propositions <ref>, <ref> and <ref>. Since we define tensor products in categories of modest sets in a different way from those of categories of assemblies (as seen in the proof of Proposition <ref>), the condition “the forgetful functor from to is strict monoidal" is not appropriate for the case of modest sets. For the case of SMCCs, we can avoid this problem by presenting a more general proposition, namely one for symmetric closed categories instead of SMCCs. A symmetric closed category is a closed category with a natural isomorphism S_X,Y,Z : (Z Y) X ≅ (Z X) Y satisfying appropriate axioms ( <cit.>). Suppose is a total applicative structure and := (or ) happens to be a symmetric closed category. is a -algebra if the following conditions hold. * |Y X| = _ (X,Y) and f_Y X = { r |}. * g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f. * L_Y,Z^X is the function sending g:Y Z to the function (f:X Y) ↦ g ∘ f. * S_X,Y,Z is the function sending f: x ↦ (y ↦ f(x)(y)) to S(f) : y ↦ (x ↦ f(x)(y)). This proposition also shows that we cannot obtain (or ) that is a symmetric closed category but not an SMCC, in the canonical way. For the cases of monoidal closed categories and monoidal bi-closed categories, it is still not clear whether there are any appropriate conditions allowing propositions for modest sets similar to Propositions <ref> and <ref>. § PLANAR LINEAR COMBINATORY ALGEBRAS In Section <ref>, we recalled LCAs and rLCAs, which relate -algebras and PCAs, and which induce categorical models of linear exponential modalities. In this section, we apply a similar construction to -algebras. We reformulate rLCAs for -algebras and PCAs, and call them exp-rPLCAs. From an exp-rPLCA, we get a categorical model of the !-modality on the non-symmetric multiplicative intuitionistic linear logic (MILL). Also we reformulate rLCAs for -algebras and -algebras, and call them exch-rPLCAs. From an exch-rPLCA, we obtain a model for an exchange modality relating the non-symmetric MILL and the symmetric MILL. In <cit.>, we already introduced the same construction, called “rPLCAs," based on bi--algebras. What we define as rPLCAs in this section are generalizations of those in <cit.>, based on -algebras. §.§ Exponential planar linear combinatory algebras Linear exponential comonads on non-symmetric monoidal categories, which model !-modalities on the non-symmetric MILL, are investigated in <cit.>. A linear exponential comonad on a monoidal category consists of the following data. * A monoidal comonad (!, δ, ϵ, m, m_I). Here ! is an endofunctor on , δ_X : !X !!X and ϵ : !X X are monoidal natural transformations for the comultiplication and the counit. A natural transformation m_X,Y : !X ⊗ !Y !(X ⊗ Y) and a map m_I : I !I make ! a monoidal functor. * Monoidal natural transformations e_X : !X I and d_X : !X !X ⊗ !X. * A monoidal natural transformation σ_X,Y : !X ⊗ !Y !Y ⊗ !X defined as !X ⊗ !Y δ_X ⊗δ_Y !!X ⊗ !!Y m_!X,!Y !(!X ⊗ !Y) d_!X ⊗ !Y !(!X ⊗ !Y) ⊗ !(!X ⊗ !Y) !(e_X ⊗ id) ⊗ !(id ⊗ e_Y) !(I ⊗ !Y) ⊗ !(!X ⊗ I) !( unitor) ⊗ !( unitor) !!Y ⊗ !!X ϵ_!Y⊗ϵ_!X !Y ⊗ !X. Here these components need to satisfy the following conditions. 
* The two composites !X ⊗ !X ⊗ !Y ⊗ !Y ⊗ !Z ⊗ !Z !(X ⊗ Y ⊗ Z) ⊗ !(X ⊗ Y ⊗ Z) coincide: (m ⊗ m) ∘ (id ⊗σ⊗ id) ∘ (id ⊗ m ⊗ m) ∘ (id ⊗σ⊗ id) = (m ⊗ m) ∘ (id ⊗σ⊗ id) ∘ (m ⊗ m ⊗ id) ∘ (id ⊗σ⊗ id). * m_!Y ,!X∘σ_!X , !Y = !σ_X,Y∘ m_!X ,!Y. * σ_X,Y^-1 = σ_Y,X. * As maps !X ⊗ !Y ⊗ !Z !Z ⊗ !X ⊗ !Y, (id ⊗ϵ_!X ⊗ !Y) ∘σ_!X ⊗ !Y, Z∘ (m_!X,!Y⊗ id) ∘ (δ_X ⊗δ_Y ⊗ id) = (σ_X,Z⊗ id) ∘ (id ⊗σ_Y,Z). * As maps !X ⊗ !Y !(X ⊗ Y) ⊗ !(X ⊗ Y), d_X ⊗ Y∘ m_X,Y = (m ⊗ m) ∘ (id ⊗σ⊗ id) ∘ (d_X ⊗ d_Y). * As maps I !I ⊗ !I, d_I ∘ m_I = m_I ⊗ m_I. * (!X, e_X,d_X) is a comonoid in . * e_X and d_X are coalgebra morphisms. * δ_X is a comonoid morphism. We now introduce the categorical realizability inducing linear exponential comonads on non-symmetric monoidal categories. The results are reformulations of parts of <cit.> and <cit.> for the case of -algebras. An exponential relational planar linear combinatory algebra (exp-rPLCA) consists of a -algebra and a comonadic applicative morphism (,,) on which satisfies the following. * There is ∈ || such that x ( y) ⊆{ x } for any x , y ∈ ||. * There is ∈ || such that x ( y) ⊆ x ( y) ( y) for any x , y ∈ ||. While the above definition employs a different style from the rLCAs of Definition <ref>, we can also define exp-rPLCAs in the same style. For a -algebra and a comonadic applicative morphism (,,) on , the following are equivalent. * (,) is an exp-rPLCA. * Take two total relations [,]: and k_i : as [,](x) := { a a' | a,a' ∈ x } and k_i (x) := {}. Then they are applicative morphisms and ≼ [,] and ≼ k_i hold. (1)⇒(2): Realizers of [,] and k_i exist as pq. (r_ ( p)( q)) and . Realizers for ≼ [,] and ≼ k_i are and . (2)⇒(1): Take a realizer r_1 of ≼ [,] and a realizer r_2 of ≼ k_i. Then and exist as xy. x (r_2 y) and xy. x (r_1 y). From an exp-rPLCA, we get a linear exponential comonad. For an exp-rPLCA (, ), is a linear exponential comonad on . * It is easy to see that the comultiplication δ and the counit ϵ are monoidal natural transformations. From Proposition <ref>, the comonad is a lax monoidal functor and thus we have m_X,Y : X ⊗ Y (X ⊗ Y) and m_I : I I. Therefore, we have as a monoidal comonad. * e_X : X I is the function sending x to ∗. A realizer for e_X is . * d_X : X X ⊗ X is the function sending x to x ⊗ x. A realizer for d_X is ( pq. pq). * It is easy to see that (, e_X, d_X) satisfies the conditions for linear exponential comonads. Next we try to obtain linear-non-linear models for the non-symmetric MILL, that is, monoidal adjunctions between (non-symmetric) monoidal closed categories and CCCs. Although we now have a linear exponential comonad on , at this point we cannot yet conclude that we obtain a linear-non-linear model, since we have not shown that the co-Kleisli adjunction between and is a monoidal adjunction. To show this, we use the next proposition, shown in <cit.>. Let be a monoidal closed category and ! be a linear exponential comonad on . When has finite products, the co-Kleisli category _! is a CCC and the co-Kleisli adjunction is monoidal. 
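Before proceeding, the two defining conditions of an exp-rPLCA can be illustrated informally in a degenerate toy model: take the comonadic morphism to be the identity on an applicative structure of Python closures, which has full weakening and contraction (unlike a genuine planar setting). The names bang, E and D below are illustrative, not the paper's notation.

    # bang stands in for the comonadic image of an element; here it is the identity.
    bang = lambda y: y
    # E discards a banged argument:      E x (bang y) = x
    E = lambda x: lambda by: x
    # D duplicates a banged argument:    D x (bang y) = x (bang y) (bang y)
    D = lambda x: lambda by: x(by)(by)

    f = lambda a: lambda b: a + b   # a sample element expecting two arguments
    assert E(42)(bang(5)) == 42
    assert D(f)(bang(5)) == f(bang(5))(bang(5)) == 10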
For an exp-rPLCA (, ), has Cartesian products, and thus the co-Kleisli adjunction between and a CCC is monoidal. * The terminal object is ({∗}, ), where ∗ := ||. * The underlying set of X × Y is |X| × |Y|. Realizers are defined as (x,y) := { ( uv) a | }. The set of realizers is not empty since for m ∈x_X and m' ∈y_Y, ( ( ( m)) ( ( m'))) () ∈(x,y). * For maps f:X X' and g :Y Y' in , f × g is the function sending (x,y) to (f(x),g(y)). A realizer of f × g is uv. ( (r_ M u) (r_ N v)), where M ∈ ( r_f) and N ∈ ( r_g). * A realizer for the projection π :X × Y X is ( ( uv. ( uv))). A realizer for the projection π' :X × Y Y is ( ( uv. ( uv))). * For any object Z and any maps f:Z X and g:Z Y, there exists a unique map h:Z X × Y such that π∘ h = f and π' ∘ h =g. h is the function sending z to (f(z),g(z)), whose realizer is in ( ( r_f) ( r_g)). For an exp-rPLCA (, ), we can restrict : to a comonad on , as we saw in Remark <ref>. By the same proof as the above, we can also get a linear-non-linear model using . For an exp-rPLCA (, ), is a linear exponential comonad on . Moreover, the co-Kleisli adjunction between the monoidal closed category and the CCC _ is monoidal. We have seen that the co-Kleisli adjunctions obtained by an exp-rPLCA (, ) are linear-non-linear models, by showing that and have Cartesian products. We can further show that these categories have richer structure, as the next proposition says. For an exp-rPLCA (, ), and are finitely complete and finitely cocomplete. First we show the proposition for . * The terminal object and binary products are those in the proof of Proposition <ref>. * Given maps f,g :X Y, let Z be an assembly defined as |Z| := { x ∈ |X| | f(x) = g(x) } and x_Z := x_X. Take a map e :Z X as the inclusion function, realized by . Then it is easy to see that this e is the equalizer of f and g. * The initial object is the empty set. * Given maps f,g :X Y, take a set |W| := |Y|/∼, where ∼ is the smallest equivalence relation satisfying ∀ x ∈ |X|, f(x) ∼ g(x). Take an assembly W = (|W|,_W) by w_W := ⋃_y ∈ wy_Y. Take a map e':Y W by the projection, realized by . Then it is easy to see that this e' is the coequalizer of f and g. * The underlying set of X+Y is { (0,x) | x ∈ |X| }∪{ (1,y) | y ∈ |Y| }. Realizers are defined as (0,x) := { mp | p ∈x_X } and (1,y) := { nq | q ∈y_Y }, where m := uv. ( u)( v) and n := uv. u ( v). The coprojections in_X:X X+Y and in_Y :Y X+Y are given as x ↦ (0,x) and y ↦ (1,y), and realized by m and n respectively. Given maps f:X Z and g:Y Z realized by r_f and r_g, we have a unique map h:X+Y Z such that h ∘ in_X = f and h ∘ in_Y = g. h is the function sending (0,x) to f(x) and (1,y) to g(y), which is realized by ( uv.u( r_f) ( r_g) v). Therefore, is finitely complete and finitely cocomplete. Since is a reflective full subcategory of , is also finitely complete and finitely cocomplete. 
* We show that the left adjoint is strong monoidal. Let ∈ || and ∈ || be elements such that ∀ a ∈ ||, (δ (γ a)) =a and ∀ b ∈ ||, b ∈γ (δ b). The map I 1 is realized by (δ). The inverse 1 I is realized by a. (r_δ (δ ( b.(γ))) a). The natural transformation ( X) ⊗ ( Y) (X × Y) is realized by ( aa'.r_δ (r_δ (δ ( bb't.tbb')) a) a'). The inverse map (X × Y) ( X) ⊗ ( Y) is realized by u. (r_δ (δ M) u), where M ∈ ( bb'.r_γ (r_γ (γ) ( b) ) ( b')). Next we consider the functional case of exp-rPLCAs, just as LCAs are the functional case of rLCAs. An exponential planar linear combinatory algebra (exp-PLCA) is an exp-rPLCA (, ) that is functional. Not only are exp-PLCAs special cases of exp-rPLCAs, but they can also induce adjoint pairs between -algebras and PCAs. Let (, ) be an exp-PLCA. * We have a PCA _ = (, @) with x @ y := x ( y). * Let γ : _ be the identity function and δ: _ be the function x ↦ x. Then γ and δ are applicative morphisms and δ⊣γ. * We have the -combinator in _ as xy. ( xy). We have the -combinator as xyz. (M x)(r_ (r_ ()( y))( z)), where M:= xyz. x ( () ( y)) ( ( uv. r_ u ( v)) ( z)). * Realizers of γ and δ are xy.x( y) and xy.r_ x( y). A realizer for δ∘γ≼ id_ is and for id__≼γ∘δ is . Next we give a (functional) adjoint pair between a -algebra and a PCA. This example is a reformulation of the linear lambda calculus with ! ( <cit.>) into a planar variant. Suppose an infinite supply of variables x,y,z,…. Terms are defined grammatically as follows. M ::= x | MM' | λ x.M | M ⊗ M' | x ⊗ x'MM' | !M | λ !x.M Here, in λ x.M, the variable x is the rightmost free variable of M, appears exactly once in M and does not occur within the scope of any !. Also we assume that for x ⊗ x'MM', x' and x are the rightmost and the next rightmost free variables of N, that they appear exactly once in N, and that they do not occur within the scope of any !. Take an equational relation on terms as the congruence of the following equational axioms. * (λ x.M)N = M[N/x]. * M = λ x.Mx. * (λ !x.M)(!N) = M[N/x]. * x ⊗ x'M ⊗ M'N = N[M/x][M'/x']. * M= x ⊗ yMx ⊗ y. Let Λ be the set of equivalence classes of closed terms. Then we get a -algebra , whose underlying set is Λ and whose application is that of lambda terms. Also we get a PCA = (Λ,@), where M@N := M (!N). Here the -combinator and the -combinator of exist as λ !x.λ !y.x and λ !x.λ !y.λ !z. x(!z) (!(y(!z))). Take an applicative morphism γ: as the identity function, whose realizer is λ !x. λ !y.xy. Take δ : as the function M ↦ !M, whose realizer is λ !x.λ !y. !(x(!y)). Then we have an adjoint pair δ⊣γ. Just as an LCA can be constructed from a “reflexive object” in a “weak linear category” (see <cit.> and <cit.>), we can obtain exp-PLCAs in appropriate settings. A weak planar linear category (WPLC) consists of: * a monoidal closed category (,⊗,I) (not symmetric in general); * a monoidal functor (!,m,m_I) on ; * a monoidal pointwise natural transformation ! id_; * a monoidal pointwise natural transformation ! !!; * a monoidal pointwise natural transformation ! ! ⊗ !; * a monoidal pointwise natural transformation ! K_I, where K_I is the constant I functor. Here a pointwise natural transformation γ:F G is a family of maps γ_C :F(C) G(C) (C ∈ Ob()) satisfying G(f) ∘γ_I = γ_C ∘ F(f) for any f:I C. A WPLC does not require all of the conditions for linear exponential comonads (Definition <ref>): for instance, ! need not be a comonad, and the (ordinary) naturality of each transformation is not required. Let (,!) be a WPLC. 
We say V is a reflexive object when there are: * a retraction p: !V ◃ V :q; * an isomorphism r:(V V) V and s := r^-1; * a retraction t: (V ⊗ V) ◃ V:u. As we saw in Example <ref>, for a reflexive object V of a WPLC, := (I,V) forms a -algebra. Furthermore, by giving as the endofunction sending M:I V to p ∘ (!M) ∘ m_I, (,) becomes an exp-PLCA. The proof is the same as for WLCs and LCAs in <cit.>. §.§ Exchange planar linear combinatory algebras Exchange modalities on the Lambek calculus and their categorical models are introduced in <cit.>. While the word “Lambek calculus" may indicate various logics, type systems or grammars ( <cit.>), here we use “Lambek calculus” to mean a variant of non-symmetric MILL with left and right implications. The Lambek calculus is modeled by monoidal bi-closed categories. While the order of arguments cannot be exchanged in the Lambek calculus, it can be extended to a sequent calculus that allows swapping arguments via modalities. This sequent calculus is called the commutative/non-commutative (CNC) logic; it is composed of two (commutative and non-commutative) logics, and the exchange modality connects these two parts. Categorical models of the CNC logic are given as monoidal adjunctions between monoidal bi-closed categories and SMCCs, which are called Lambek adjoint models. In this subsection, we introduce a construction similar to that of the previous subsection, inducing Lambek adjoint models. An exchange relational planar linear combinatory algebra (exch-rPLCA) consists of a -algebra and a comonadic applicative morphism (ξ,,) on with ∈ satisfying x (ξ y) (ξ z) ⊆ x (ξ z) (ξ y) for any x,y,z ∈. When ξ is functional, we call (,ξ) an exchange planar linear combinatory algebra (exch-PLCA). For an exch-rPLCA (, ξ), the co-Kleisli category is an SMCC and the co-Kleisli adjunction between and is monoidal. * We define tensor products in as X Y := (|X| × |Y|,), where x y := { pq |}. * For maps f:X X' and g:Y Y' in , f g is the function sending x y to f(x) g(y). A realizer of f g is z. M ( z), where M ∈ pq. (r_ξ (ξ r_f) ( p)) (r_ξ (ξ r_g) ( q)). * We define the unit object J of as ({∗}, _J), where ∗_J := {}. * A realizer for the left unitor λ_X:J X X is u. ( p. p ) ( u). A realizer for the inverse λ_X^-1 is in (ξ). * A realizer for the right unitor ρ_X:X X J is in p. p (ξ). A realizer for the inverse ρ_X^-1 is u. () ( u). * A realizer for the associator α_XYZ :(X Y) Z X (Y Z) is u. ( v. M ( v)) ( u), where M ∈ pqr. p (r_ξ (r_ξ (ξ) ( q) ( r))). A realizer for α_X^-1 is u. ( vw. (M' v) ( w)) ( u), where M' ∈ pq. (r_ξ (r_ξ (ξ) ( p) ( q))). * The symmetry σ_XY:X Y Y X is the function sending x y to y x. A realizer for σ_XY and σ_XY^-1 is u. () ( u). * For objects X and Y, the exponential in is Y X = (_( X, Y), ), where f := { r ∈|}. * For maps f:X' X and g:Y Y' in , g f is the function sending a map h:X Y in to g ∘ ( h) ∘ d_X ∘ ( f) ∘ d_X', where d_X : X X is the comultiplication of . A realizer for g f is uv. r_g (r_ξ u ( (r_ξ (ξ r_f) ( v)))). * The evaluation map ev_XY : (Y X) X Y is the function sending f x to f(x), which is realized by u. ( u). * For any map f:Z X Y in , there exists a unique map g:Z Y X in , which sends z to x ↦ f(z x). g is realized by uv.r_f (r_ξ (r_ξ (ξ) ( u)) ( v)). * Finally we show that the co-Kleisli functor : is strong monoidal. We can take the natural isomorphisms J I and (X Y) X ⊗ Y in as the identity functions. Realizers for J I and (X Y) X ⊗ Y are . A realizer for J I is in u.u(ξ). A realizer for X ⊗ Y (X Y) is in uv. r_ξ (r_ξ (ξ) ( u)) ( v). 
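The defining inequation of an exch-rPLCA likewise has a simple informal reading: some element permutes two ξ-marked arguments. In the degenerate case where ξ is the identity on an applicative structure of Python closures (so exchange is freely available), it is just the familiar C-combinator. The names below are illustrative only, not the paper's notation.

    # xi stands in for the comonadic morphism; here it is the identity.
    xi = lambda y: y
    # C_swap x (xi y) (xi z) = x (xi z) (xi y)
    C_swap = lambda x: lambda y: lambda z: x(z)(y)

    pair = lambda a: lambda b: (a, b)
    assert C_swap(pair)(xi(1))(xi(2)) == pair(xi(2))(xi(1)) == (2, 1)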
The next proposition, for categories of modest sets, can also be shown in the same way as the above proposition. Here, since X Y in the above proof is not generally a modest set, we take the tensor product ⊠ in _ in the same way as in Proposition <ref>. That is, we take X ⊠ Y = (|Z|,_Z) by |Z| := (|X| × |Y|)/≈, where ≈ and _Z are defined as in the proof of Proposition <ref>. For an exch-rPLCA (, ξ), the co-Kleisli category _ is an SMCC and the co-Kleisli adjunction between is monoidal. Suppose is a bi--algebra and (, ξ) is an exch-rPLCA. Then we have a Lambek adjoint model as the co-Kleisli adjunction between the monoidal bi-closed category and the SMCC (or between and _). Similar to exp-rPLCAs, adjoint pairs between -algebras and -algebras correspond to exch-rPLCAs. Let (δ⊣γ): be an adjoint pair for a -algebra and a -algebra . * (,δ∘γ) forms an exch-rPLCA. * (⊣): is a monoidal adjunction between the monoidal category and an SMCC . If is a bi--algebra, the adjunction is a Lambek adjoint model. * From Proposition <ref> (<ref>), δ∘γ is a comonadic applicative morphism. We can take in as xyz. x ( (M ( y)( z))), where M ∈ y.r_δ (r_δ (δ N) y) and N ∈ yz.r_γ(r_γ (γ) z)y. * It follows from Proposition <ref>. Similar to exp-PLCAs, exch-PLCAs induce adjoint pairs between -algebras and -algebras. Let (, ξ) be an exch-PLCA. * We have a -algebra _ξ = (,@) with x @ y := x(ξ y). * Let γ:_ξ be the identity function and δ:_ξ be the function x ↦ξ x. Then γ and δ are applicative morphisms and δ⊣γ. * We have the -combinator in _ξ as x. ( x). * Same as the proof of Proposition <ref> (<ref>). As an example of an exch-PLCA, we have a calculus similar to that of Example <ref>. Suppose an infinite supply of variables x,y,z,…. Terms are defined grammatically as follows. M ::= x | MM' | λ x.M | M ⊗ M' | x ⊗ x'MM' | ξ M | λ^ξ x.M Here, in λ x.M, the variable x is the rightmost free variable of M, appears exactly once in M and does not occur within the scope of any ξ. The variable x of λ^ξ x.M needs to appear exactly once in M. Also we assume that for x ⊗ x'MM', x' and x are the rightmost and the next rightmost free variables of N, that they appear exactly once in N, and that they do not occur within the scope of any ξ. The rest is the same as Example <ref>. Finally we give an example of an exch-PLCA based on of Example <ref>. This example is similar to the one introduced in <cit.>. Let T and | | be the same set and function defined in Example <ref>. First we give a -algebra _e from T. Take |_e| as the powerset of { t ∈ T | |t| =e }, and a binary operation ⊚ on |_e| as M⊚ N := { t_2 |∃ t_1 ∈ N ,(t_2 t_1) ∈ M }. Then _e = (|_e|,⊚) is a -algebra, where * = { (t_3 t_1) (t_2 t_1) (t_3 t_2) | t_1,t_2,t_3 ∈ T }; * = { (t_3 t_2 t_1) (t_3 t_1 t_2) | |t_1| = |t_2| = |t_3| =e }; * = { t_1 t_1 | t_1 ∈ T }. Take γ: || |_e| as the function sending M to { t t | t ∈ M} and δ : |_e| || as the inclusion function. Then these functions form a (functional) adjoint pair (δ⊣γ): _e. Here the corresponding realizers are * { ((t_2 t_2) (t_1 t_1)) ((t_2 t_1) (t_2 t_1)) | t_1,t_2 ∈ T } realizing γ; * { t_1 t_1 | t_1 ∈ T } realizing δ; * { (t t) t | |t| = e } realizing id ≼γ∘δ; * { t (t t) | |t| ≥ e } realizing δ∘γ≼ id. The above construction can also be applied to obtain exch-PLCAs on ' of Example <ref> and on ” of Example <ref>. While we obtained exch-PLCAs from T, the same construction cannot be applied to obtain exp-PLCAs. If we try to get some PCA of subsets of T, employing M⊚ N := { t_2 |∃ t_1 ∈ N ,(t_2 t_1) ∈ M } as the binary operation, ⊚ M ⊚ N = M hardly holds, since the left-hand side often loses the information of M when N is nearly empty. 
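The binary operation ⊚ of the last example is easy to render concretely. The sketch below is our own encoding, not the paper's: terms are modeled as nested tuples, with ('app', t2, t1) standing for the application t2 t1, and the length function |·| is ignored. It computes M ⊚ N on finite sets and exhibits the information loss noted in the remark above.

    def oapp(M, N):
        # M ⊚ N = { t2 | there exists t1 in N with (t2 t1) in M }
        return {t[1] for t in M
                if isinstance(t, tuple) and t[0] == 'app' and t[2] in N}

    M = {('app', 'f', 'a'), ('app', 'g', 'b')}   # M contains the applications (f a) and (g b)
    N = {'a'}
    assert oapp(M, N) == {'f'}
    assert oapp(M, set()) == set()               # a nearly empty N wipes out the content of M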
As we saw in Proposition <ref>, exp-rPLCAs can be defined in a style that uses not the combinators and , but the applicative morphisms [,] and k_i. It is still unclear whether we can define exch-rPLCAs in the latter style, not using the combinator . If we can characterize exch-rPLCAs in the latter style, we might construct exch-PLCAs using reflexive objects in the same way as for exp-PLCAs and WPLCs (Definition <ref>). § RELATED WORK This paper is an extended version of the earlier papers by the author <cit.>. As a result of <cit.> not presented in this paper, we have “()^∘-algebras” as a class of applicative structures. ()^∘-algebras are more general than -algebras, and give rise to skew closed categories of assemblies (or modest sets). Skew closed categories, introduced in <cit.>, are categories with closed structures similar to those of closed categories, though some conditions needed in closed categories are not assumed. (For instance, the natural transformation i_X : (X I) X in a skew closed category is not necessarily invertible.) Although skew closed categories and closed multicategories are generalizations of closed categories in different directions, from Proposition <ref> we can say that we cannot (canonically) obtain (or ) that is a closed multicategory but not a skew closed category. Details of these results are given in Appendix <ref>. Skew monoidal categories, introduced in <cit.>, are categories with the same components as monoidal categories, but whose natural transformations (the left and right unitors and the associators) need not be invertible. The relationship between skew monoidal categories and skew closed categories is similar to that between monoidal categories and closed categories. Recalling the proof of Proposition <ref>, we find that we use only to realize ρ_X^-1:X ⊗ I X. The invertibility of ρ_X is not assumed in skew monoidal categories. Thus, when we have as a “()-algebra,” we can show that is a skew monoidal category. In <cit.>, the “extensionality” of combinatory algebras is investigated. The extensionality defined in that paper is a more general condition than the standard one, seen in, e.g., <cit.>. Using the extensionality of <cit.>, one can treat polynomials and combinatory completeness for combinatory algebras for which they cannot be stated in the same way as in Definition <ref> and Proposition <ref>, such as the braided case. In our study, we do not need the discussion of extensionality to state the combinatory completeness appearing in this paper; however, assuming extensionality of an applicative structure may induce additional structure on and . For instance, for an “extensional” -algebra, since the -combinator always satisfies the axiom of , and become closed categories. There are many possible ways other than the existence of certain combinators to define classes of applicative structures, and extensionality is one such way. The definition of bi--algebras may look like the “dual combinators” introduced in <cit.>. Similar to bi--algebras, in the binary operations of dual combinators, elements can act on elements from both the left and the right side. However, a dual combinatory logic has only one sort of application, whereas a bi--algebra has two sorts of applications. Also, the reductions of dual combinatory logic do not satisfy confluence, while the bi-planar lambda calculus is confluent. In this paper, we have referred to several logics and their categorical models without recalling their detailed definitions. See <cit.> for linear logic. 
For the MILL and the categorical models that we deal with in this paper, see <cit.>. Also, the term “Lambek calculus” can refer to various logics, and we use it in this paper to mean a variant of non-symmetric MILL with left and right implications. Our treatment of the Lambek calculus and its categorical semantics is from <cit.>. The basics of the Lambek calculus are given in <cit.>. In <cit.>, the relationships between the planar lambda calculus and planar graphs are investigated. In that paper, a bijection between rooted trivalent planar graphs and closed planar lambda terms is given, and it is shown that such graphs can be generated by combining a few kinds of “imploid moves.” The theory corresponds to the combinatory completeness of -algebras and the planar lambda calculus. Similarly, we can give a bijection between rooted trivalent planar graphs and closed bi-planar terms, but here the rooted trivalent planar graphs need to have two-colored (“left” and “right”) vertices. § CONCLUSION In sections <ref> and <ref>, we introduced several classes of applicative structures and showed that they induce closed structures on categories of assemblies and categories of modest sets, as in Table <ref>. (The results for -algebras are newly presented in this paper.) In section <ref>, we showed that these classes are indeed different by giving several examples. In section <ref>, we presented propositions showing that categorical structures of induce structures of , under some conditions. (The propositions for -algebras and bi--algebras are newly shown in this paper.) By combining the above results, we can say, for instance, that we have with a truly non-symmetric bi-closed structure, by using a that is a bi--algebra but not a -algebra. In section <ref>, we introduced exp-rPLCAs and exch-rPLCAs, which give rise to categorical models for the linear exponential modality and the exchange modality on the non-symmetric MILL. As an adjoint pair between a -algebra and a PCA induces an rLCA, an adjoint pair between -algebras and a PCA/-algebra induces an exp-rPLCA/exch-rPLCA. Finally we give three issues for future work. First, there are several unsolved problems we mentioned in this paper. Those that we consider important are: * to show that the computational lambda calculus is not a -algebra (see Section <ref>); * to clarify the conditions needed to show that is a PCA when (or ) is a CCC (see Remark <ref>); * to clarify the conditions needed to show that is a -algebra/bi--algebra when is a monoidal closed category/monoidal bi-closed category (see the end of Section <ref>). Second, most examples given in this paper are the standard ones like the term models. We would like to find more interesting examples of applicative structures and adjoint pairs, which should be useful for investigating non-commutative logics and their models in a systematic way. Third, for various categorical structures not given in this paper, we want to clarify what we need to construct them via categorical realizability. For instance, we have said (in section <ref>) that we cannot give (nor ) that is a symmetric closed category but not an SMCC, in a canonical way. Also we cannot give (nor ) that is a closed multicategory but not a skew closed category. As an example not yet mentioned, we cannot make a braided monoidal category but not an SMCC. 
Although there is a class of applicative structures, ^±-algebras, corresponding nicely to the structure of braided monoidal categories and the braided lambda calculus (investigated in <cit.>), the construction of cannot reflect the difference between the two sorts of braids (realized by ^+ and ^-) and turns braids into the symmetry. To give the categorical structures listed above, we need to change the construction of (and ), rather than trying to give conditions on applicative structures. For instance, to make a braided monoidal category (not an SMCC), we may need to change the fact that the construction of is based on , which is not only braided but also symmetric. § ACKNOWLEDGMENT I would like to thank Masahito Hasegawa for a lot of helpful advice, discussions and comments. This work was supported by JST SPRING, Grant Number JPMJSP2110. § -ALGEBRAS, -ALGEBRAS AND SKEW CLOSED CATEGORIES Though the classes of applicative structures appearing in this paper are subclasses of -algebras, this does not mean that realizability constructions for closed structures all require -algebras. Indeed, in <cit.>, we introduced -algebras, which form a more general class than -algebras and give rise to skew closed categories. First we recall the definition of skew closed categories from <cit.>. A (left) skew closed category consists of the following data: * a locally small category ; * a functor (- -):^op×, called the internal hom functor; * an object I, called the unit object; * a natural transformation i_X : (X I) X; * an extranatural transformation j_X : I (X X); * a transformation L_Y,Z^X : (Z Y) ((Z X) (Y X)) natural in Y and Z and extranatural in X, such that the following axioms hold: * ∀ X,Y ∈, L_Y,Y^X ∘ j_Y = j_(Y X); * ∀ X,Y ∈, i_(Y X)∘ (id_(Y X) j_X) ∘ L_X,Y^X = id_(Y X); * ∀ X,Y,Z,W ∈, (id L_Y,Z^X) ∘ L_(Z X),(W X)^(Y X)∘ L_Z,W^X = (L_Y,W^X id) ∘ L_Z,W^Y as maps (W Z) (((W X) (Y X)) (Z Y)); * ∀ X,Y ∈, (i_Y id_(X I)) ∘ L_X,Y^I = id_Y i_X; * i_I ∘ j_I = id_I. A skew closed category is called left normal when the function γ : (X,Y)(I,Y X) sending f:X Y to (f id_X) ∘ j_X is invertible for any X,Y ∈. There is a categorical structure called skew monoidal categories, introduced in <cit.>, which have the same components as monoidal categories but in which the unitors and associators need not be invertible. Skew closed categories are the categorical structures determined from skew monoidal categories, just as closed categories are determined from monoidal categories. Obviously, closed categories are also left normal skew closed categories. We investigated categorical realizability for skew closed categories in <cit.>, and next we recall some of the results. A total applicative structure is a -algebra iff it contains , , and, for each a ∈, an element a of such that ∀ x,y ∈, (a) xy = x (ay). Since (a) xy = x(ay), any -algebra is also a -algebra. In a similar way to the proof of Proposition <ref>, we can show that the class of -algebras is different from the class of -algebras by using a freely constructed -algebra (with constants). When is a -algebra, and are left normal skew closed categories. The proof is almost the same as that of Proposition <ref>. Here for maps f and g, we give a realizer of (g f) as (r_f) ( r_g). It is still not clear whether needs to be a -algebra to make (or ) a skew closed category, as in the propositions of Section <ref>. 
(In a setting similar to those of Proposition <ref> and Proposition <ref>, though we can show the existence of , and (), we cannot show that there is .) Since -algebras are -algebras, the following holds. When is a -algebra, and are skew closed categories. From Proposition <ref>, we can say that we cannot (canonically) obtain (nor ) that is a closed multicategory but not a skew closed category. Although closed multicategories are a generalized closed categorical structure in a different direction from skew closed categories, skew closed categories are more general than closed multicategories as the categorical structures appearing in categories of assemblies. Moreover, when constructing applicative structures from reflexive objects, skew closed categories can give even -algebras, just as closed multicategories do. Suppose is a skew closed category and V is an object with a retraction r : (V V) ◃ V :s. Then (I,V) forms a -algebra. * For M,N:I V, the application is defined as I M V s V V id_V N V I i_V V. * The -combinator is I j_V V (V V) (V V) L_V,V^V s ((V V) (V V)) V (r s) id_V V V V r id_V V V r V. * The -combinator is r ∘ j_V. * Given an arbitrary M:I V, M is I j_V V V s id_V (V V) V (id_V M) id_V (V I) V i_V id_V V V r V. Suppose is a closed multicategory and V is an object with a retraction r : (V;V) ◃ V :s. Then (;V) forms a -algebra. * For M,N ∈(;V), the application is defined as M,N V,V s,id_V(V;V),V ev V. * Take a map f:V,V,V V as V,V,V id_V,s,id_V V,(V;V),V s,ev(V;V),V ev V. The -combinator is given as r ∘Λ_;V;V (r ∘Λ_V;V;V(r ∘Λ_V,V;V;V(f))). Here Λ is the function in Definition <ref>. * The -combinator is r ∘Λ_;V;V(id_V). * Given an arbitrary M ∈(;V), M is r ∘Λ_;V;V(ev ∘ (s,M)). When we assume the retraction r : (V V) ◃ V of Example <ref> is an isomorphism, the -combinator further satisfies the axiom of and (I,V) forms a -algebra. Similarly, when we assume the retraction r : (V;V) ◃ V of Example <ref> is an isomorphism, the -combinator satisfies the axiom of and (;V) forms a -algebra.
http://arxiv.org/abs/2307.05192v1
20230711115845
Approximate and ensemble local entanglement transformations for multipartite states
[ "David Gunn", "Martin Hebenstreit", "Cornelia Spee", "Julio I. de Vicente", "Barbara Kraus" ]
quant-ph
[ "quant-ph" ]
http://arxiv.org/abs/2307.04419v1
20230710085104
Constraints on primordial curvature power spectrum with pulsar timing arrays
[ "Zhi-Qiang You", "Zhu Yi", "You Wu" ]
gr-qc
[ "gr-qc", "astro-ph.CO" ]
§ INTRODUCTION Recently, four pulsar timing array (PTA) collaborations, namely NANOGrav <cit.>, PPTA <cit.>, EPTA <cit.>, and CPTA <cit.>, all announced strong evidence for a stochastic signal consistent with the Hellings-Downs angular correlations, pointing to a gravitational-wave (GW) origin of this signal. Assuming the signal originates from an ensemble of binary supermassive black hole inspirals and a fiducial f^-2/3 characteristic-strain spectrum, the strain amplitude is estimated to be of order ∼ 10^-15 at a reference frequency of 1  yr^-1 <cit.>. However, the origin of this signal, whether from supermassive black hole binaries or other cosmological sources, is still under investigation <cit.>. A promising candidate to explain the signal is scalar-induced gravitational waves (SIGWs) accompanying the formation of primordial black holes <cit.>. Other physical phenomena (see e.g. <cit.>) can also act as sources in the PTA band. SIGWs are sourced by scalar perturbations generated during the inflationary epoch <cit.>. They offer valuable insights into the physics of the early Universe and can be detected not only by PTAs but also by space-based GW detectors such as LISA <cit.>, Taiji <cit.>, TianQin <cit.>, and DECIGO <cit.>. Significant SIGWs require the amplitude of the power spectrum of the primordial curvature perturbations to be around 𝒜_ζ∼𝒪(0.01), which is approximately seven orders of magnitude larger than the constraint from large-scale measurements of the cosmic microwave background (CMB) anisotropy, 𝒜_ζ= 2.1× 10^-9 <cit.>. Therefore, to account for the observed gravitational wave signal detected by PTAs, the curvature power spectrum must possess at least one high peak. This can be achieved through inflation models with a transient ultra-slow-roll phase <cit.>. To characterize a single-peak primordial curvature power spectrum, various parameterizations such as the δ-function form, box form, lognormal form, or broken power law form are employed. Among them, the δ-function, box and lognormal parameterizations are investigated in Ref. <cit.>, where the constraints from the PTA data on the parameters of these models are also given. The constraints on the broken power law form are provided in Ref. <cit.>, where the role of non-Gaussianity is also considered. However, these analyses do not determine which model among these is the most compatible with the PTA signal. For the multi-peak primordial curvature power spectrum model <cit.>, we parameterize the primordial curvature power spectrum with the double lognormal form. In this study, we aim to determine whether the PTA signal favors a single-peak or multi-peak primordial curvature power spectrum and to identify the model most compatible with the signal. The organization of this paper is as follows: Section II provides a brief review of scalar-induced gravitational waves. Section III presents the constraints on the power spectrum for the different forms and identifies the best-fitting model based on the PTA signal. Finally, Section IV summarizes our findings and provides concluding remarks. § SCALAR-INDUCED GRAVITATIONAL WAVES The large scalar perturbations seeded by the primordial curvature perturbations generated during inflation can act as a source of GWs during the radiation-dominated epoch. In this section, we give a brief review of SIGWs. 
In the cosmological background, the metric with perturbations in the Newtonian gauge is d s^2= -a^2(η)(1+2Φ)dη^2 +a^2(η)[(1-2Φ)δ_ij+1/2h_ij]d x^i d x^j, where a is the scale factor of the Universe, η is the conformal time, dη =dt/a(t), Φ is the Bardeen potential, and h_ij are the tensor perturbations. The tensor perturbations in Fourier space can be obtained by the transform h_ij(x,η)=∫ d^3k e^ik·x/(2π)^3/2 [h_k(η)e_ij(k)+h̃_k(η)ẽ_ij(k)], where the plus and cross polarization tensors e_ij(k) and ẽ_ij(k) are e_ij(k)=1/√(2)[e_i(k)e_j(k)-ẽ_i(k)ẽ_j(k)], ẽ_ij(k)=1/√(2)[e_i(k)ẽ_j(k)+ẽ_i(k)e_j(k)], and the basis vectors satisfy e·ẽ= e ·k= ẽ·k=0. For the source from the second order of the linear scalar perturbations, the tensor perturbations with either polarization in Fourier space satisfy <cit.> h''_k+2ℋh'_k+k^2h_k=4S_k, where ℋ=a'/a is the conformal Hubble parameter and a prime denotes the derivative with respect to the conformal time η. The second-order source S_k is S_k= ∫d^3k̃/(2π)^3/2e_ij(k)k̃^ik̃^j [2Φ_k̃Φ_k-k̃1/2+ 1/ℋ^2(Φ'_k̃+ℋΦ_k̃) (Φ'_k-k̃+ℋΦ_k-k̃)]. The Bardeen potential in Fourier space, Φ_k, can be connected to the primordial curvature perturbations ζ_k generated during the inflationary epoch through the transfer function, Φ_k=(3+3w)/(5+3w) T(k,η) ζ_k, where w is the equation of state parameter and the transfer function T(k,η) satisfies T(k,η)=3[sin(k η/√(3))-(kη/√(3)) cos(kη/√(3))]/(kη/√(3))^3. The equation of the tensor perturbations (<ref>) can be solved by the Green function method, and the solution is h_k(η)=4/a(η)∫_η_k^ηd η̃g_k(η,η̃)a(η̃)S_k(η̃), where g_k is the corresponding Green function with the form g_k(η,η')=sin[k(η-η')]/k. The power spectrum of the tensor perturbations h_k is defined by ⟨ h_k(η)h_k̃(η)⟩ =(2π^2/k^3)δ^(3)(k+k̃)𝒫_h(k,η). Combining it with the solution of h_k (<ref>), we have <cit.> 𝒫_h(k,η)= 4∫_0^∞dv∫_|1-v|^1+vdu [(4v^2-(1-u^2+v^2)^2)/(4uv)]^2 × I_RD^2(u,v,x)𝒫_ζ(k v)𝒫_ζ(ku), where u=|k-k̃|/k, v=k̃/k, x=kη, and 𝒫_ζ is the power spectrum of the curvature perturbation, which is parameterized in the following section. The integral kernel I_RD is I_RD(u, v, x)= ∫_1^x dy y sin(x-y){3T(uy)T(vy) +y[T(vy)u T'(uy)+v T'(vy) T(uy)] +y^2 u v T'(uy) T'(vy)}. The energy density of gravitational waves is defined as Ω_GW(k,η)=(1/24)(k/aH)^2𝒫_h(k,η). By combining equation (<ref>) and definition (<ref>), we obtain <cit.> Ω_GW(k,η)= (1/6)(k/aH)^2∫_0^∞dv∫_|1-v|^1+vdu×[(4v^2-(1-u^2+v^2)^2)/(4uv)]^2 ×I_RD^2(u, v, x)𝒫_ζ(kv)𝒫_ζ(ku), where I_RD^2 represents the oscillation time average of the integral kernel. The energy density of gravitational waves evolves in the same way as that of radiation. Exploiting this property, it is straightforward to determine the energy density of gravitational waves at present, Ω_GW(k,η_0)=c_gΩ_r,0Ω_GW(k,η)/Ω_r(η), where Ω_r(η)=1 is the energy density of radiation at the generation of SIGWs during radiation domination, Ω_r,0 is that at present, and <cit.> c_g=0.387(g_*,s^4g_*^-3/106.75)^-1/3. § MODELS AND RESULTS At large scales, the observational data from the CMB constrain the amplitude of the primordial curvature power spectrum to 𝒜_ζ = 2.1 × 10^-9 <cit.>. However, there are minimal constraints on the primordial curvature power spectrum at small scales. Consequently, in order to generate significant SIGWs, it is necessary to enhance the primordial curvature power spectrum to approximately 𝒜_ζ∼𝒪(0.01) at small scales. 
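The pipeline of the previous section is straightforward to evaluate numerically once 𝒫_ζ is specified. The sketch below (ours, for illustration only) implements the transfer function quoted above and the double integral for Ω_GW, using the standard closed form for the late-time oscillation average of the kernel during radiation domination (e.g. Kohri and Terada 2018); that closed form is quoted from the literature rather than derived in the text, and all parameter values are placeholders rather than best-fit values.

    import numpy as np
    from scipy.integrate import dblquad

    def transfer(k, eta):
        # T(k, eta) = 3 [sin(z) - z cos(z)] / z^3, with z = k*eta/sqrt(3)
        z = k * eta / np.sqrt(3.0)
        return 3.0 * (np.sin(z) - z * np.cos(z)) / z**3

    def kernel_avg(u, v):
        # x^2 * <I_RD^2> in the late-time limit (standard RD result, quoted);
        # the integrable log singularity at u + v = sqrt(3) may raise
        # harmless quadrature warnings.
        s = u*u + v*v - 3.0
        log_term = np.log(abs((3.0 - (u + v)**2) / (3.0 - (u - v)**2)))
        res = (-4.0*u*v + s*log_term)**2
        if u + v > np.sqrt(3.0):
            res += np.pi**2 * s**2
        return 0.5 * (3.0*s / (4.0 * u**3 * v**3))**2 * res

    def P_zeta(k):
        # placeholder: a lognormal peak with A = 0.1, k_p = 1e8 Mpc^-1, Delta = 0.5
        A, kp, Delta = 0.1, 1e8, 0.5
        return A / (np.sqrt(2.0*np.pi) * Delta) * np.exp(-0.5 * (np.log(k/kp) / Delta)**2)

    def omega_gw_rd(k):
        # Omega_GW(k) deep in radiation domination, per the double integral above
        def integrand(u, v):
            poly = ((4.0*v*v - (1.0 - u*u + v*v)**2) / (4.0*u*v))**2
            return poly * kernel_avg(u, v) * P_zeta(k*u) * P_zeta(k*v) / 6.0
        val, _ = dblquad(integrand, 0.0, 50.0, lambda v: abs(1.0 - v), lambda v: 1.0 + v)
        return val

    print(omega_gw_rd(1e8))   # multiply by c_g * Omega_r,0 for the present-day value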
Thus, the profile of the primordial curvature spectrum exhibits at least one pronounced peak at intermediate scales, while displaying lower amplitudes at both large and very small scales. In this section, we consider primordial curvature spectra with a single peak and with a double peak, respectively. For the single peak, the commonly employed parameterizations of the primordial curvature spectrum are the simple δ-function form 𝒫_ζ = Aδ(ln k -ln k_p), the box form 𝒫_ζ = A Θ(k - k_min) Θ(k_max - k), the lognormal form 𝒫_ζ = A/√(2π)Δexp[-(ln k -ln k_p)^2/(2Δ^2)], and the broken power law form 𝒫_ζ =A(α+β)/[β(k/k_p)^-α+α(k/k_p)^β]+A_*(k/k_*)^(n_s_*-1). For the double-peak model, we parameterize the primordial curvature spectrum with the double lognormal form 𝒫_ζ= A_1/√(2π)Δ_1exp[-(ln k -ln k_p_1)^2/(2Δ_1^2)]+ A_2/√(2π)Δ_2exp[-(ln k -ln k_p_2)^2/(2Δ_2^2)]. We conducted a Bayesian analysis of the NANOGrav 15-yr data to investigate the parameterizations of the power spectrum of the primordial curvature perturbation described by Eq. (<ref>), Eq. (<ref>), Eq. (<ref>), Eq. (<ref>), and Eq. (<ref>). In our analysis, we utilized the 14 frequency bins reported in <cit.> to fit the posterior distributions of the model parameters. The Bilby code <cit.> was employed for the analysis, utilizing the dynesty algorithm for nested sampling <cit.>. The log-likelihood function was constructed by evaluating the energy density of SIGWs at the 14 specific frequency bins. Subsequently, we computed the sum of the logarithm of the probability density functions obtained from 14 independent kernel density estimates corresponding to these frequency values <cit.>. The equation for the likelihood function is ℒ(Θ)=∏_i=1^14ℒ_i(Ω_GW(f_i, Θ)), where Θ is the collection of parameters of the δ-function, box, lognormal, broken power law, and double lognormal models. These parameters and their priors are shown in Table <ref>. We divide these models into two categories. The first comprises the single-peak power spectrum models, namely the δ-function (<ref>), box (<ref>), lognormal (<ref>) and broken power law (<ref>) models, while the second comprises the double-peak models, here represented by the double lognormal model (<ref>). The posterior distributions for the parameters in Eq. (<ref>), Eq. (<ref>), Eq. (<ref>), Eq. (<ref>), and Eq. (<ref>) are depicted in Figure <ref>, Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref>, respectively. We summarize the mean values and 1-σ confidence intervals of the parameters of these models in Table <ref>. When comparing the results of the double-peak lognormal primordial curvature power spectrum with the single-peak models using the δ, box, lognormal, and broken power law forms, the Bayesian analysis yields no support in favor of the single-peak models, with respective Bayes factors of lnℬ= 0.42, lnℬ=0.26, lnℬ =0.46, and lnℬ =0.45. Thus, the PTA data show no significant evidence for or against the single-peak primordial curvature power spectrum over the double-peak one. Due to the very close values of the logarithmic evidence, it is also difficult to determine which single-peak model provides the better fit. After obtaining the best-fit values from the posteriors, we present the power spectrum of the primordial curvature perturbations in Figure <ref> and the corresponding SIGWs in Figure <ref>. 
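For concreteness, the parameterizations above translate directly into code. The sketch below (ours) writes the box, lognormal, broken power law, and double lognormal forms as plain functions; the δ-function form is omitted, since it collapses one of the Ω_GW integrals analytically. All numerical values are placeholders, not the best-fit values of Table <ref>.

    import numpy as np

    def P_box(k, A, kmin, kmax):
        return A * np.heaviside(k - kmin, 1.0) * np.heaviside(kmax - k, 1.0)

    def P_lognormal(k, A, kp, Delta):
        return A / (np.sqrt(2.0*np.pi) * Delta) * np.exp(-(np.log(k/kp))**2 / (2.0*Delta**2))

    def P_broken_power_law(k, A, kp, alpha, beta, Astar=0.0, kstar=1.0, ns_star=1.0):
        peak = A * (alpha + beta) / (beta * (k/kp)**(-alpha) + alpha * (k/kp)**beta)
        return peak + Astar * (k/kstar)**(ns_star - 1.0)

    def P_double_lognormal(k, A1, kp1, D1, A2, kp2, D2):
        return P_lognormal(k, A1, kp1, D1) + P_lognormal(k, A2, kp2, D2)

    k = np.logspace(6.0, 10.0, 5)    # Mpc^-1, bracketing the PTA-preferred peak scale
    print(P_lognormal(k, A=0.1, kp=1e8, Delta=0.5))
    print(P_double_lognormal(k, 0.05, 3e7, 0.4, 0.05, 3e8, 0.4))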
In Figure <ref>, the orange thin solid line, blue thick solid line, red dashed line, black dotted line, and green dash-dotted line denote the primordial curvature power spectrum with the δ-function, box, lognormal, broken power law, and double-lognormal parameterizations, respectively. The peak scale of these parameterizations is around k_p∼ 10^8  Mpc^-1, and the amplitude of the primordial curvature power spectrum at the peak is around A∼ 0.1. In Figure <ref>, the orange thin solid line, blue thick solid line, red dashed line, black dotted line, and green dash-dotted line represent the energy density of the SIGWs from the primordial curvature power spectrum with the δ-function, box, lognormal, broken power law, and double-lognormal parameterizations, respectively. If the PTA data indeed arise from SIGWs, this signal can also be detected by space-based detectors in the future, and the parameterizations of the primordial curvature power spectrum can then be distinguished by such detectors. § CONCLUSION The stochastic signal detected by the NANOGrav, PPTA, EPTA, and CPTA collaborations points to a GW origin and can be explained by SIGWs, where the scalar perturbations are seeded by the primordial curvature perturbations. To determine the SIGW model that best fits the observed stochastic signal, we explore both single-peak and double-peak parameterizations for the power spectrum of the primordial curvature perturbations. For the single-peak scenarios, we consider parameterizations using the δ-function form, box form, lognormal form, and broken power law form. Additionally, in the double-peak scenario, we employ the double lognormal form. The best-fit values for the scale and amplitude of the primordial curvature perturbations at the peak, obtained from these five parameterizations, are approximately k_p ∼ 10^8  Mpc^-1 and A∼ 0.1. Comparing the results with the double-peak scenario, the Bayesian analysis provides no support in favor of the single-peak models, with respective Bayes factors of lnℬ= 0.42, lnℬ=0.26, lnℬ =0.46, and lnℬ =0.45 for the δ-function, box, lognormal, and broken power law forms, respectively. If the stochastic signal observed by the PTAs indeed originates from SIGWs, it may also be detectable by space-based gravitational wave detectors in the future, potentially allowing for the distinction between different types of SIGWs. Although our analysis in this paper focuses on the double-peak model, our conclusion can be extended to multi-peak models. In conclusion, the recent gravitational wave background signal can be explained by SIGWs, without preference for a single peak in the primordial curvature power spectrum over a multi-peak configuration. We thank Xiao-Jing Liu for useful discussions. ZQY is supported by the China Postdoctoral Science Foundation Fellowship No. 2022M720482. ZY is supported by the National Natural Science Foundation of China under Grant No. 12205015 and the supporting fund for young researcher of Beijing Normal University under Grant No. 28719/310432102. NANOGrav:2023hde NANOGrav collaboration, The NANOGrav 15 yr Data Set: Observations and Timing of 68 Millisecond Pulsars, https://doi.org/10.3847/2041-8213/acda9aAstrophys. J. Lett. 951 (2023) L9 [https://arxiv.org/abs/2306.162172306.16217]. NANOGrav:2023gor NANOGrav collaboration, The NANOGrav 15 yr Data Set: Evidence for a Gravitational-wave Background, https://doi.org/10.3847/2041-8213/acdac6Astrophys. J. Lett. 